Deploying a New k8s 1.4 Cluster
Test environment:

node-1: 10.6.0.140  node-2: 10.6.0.187  node-3: 10.6.0.188

A Kubernetes cluster, consisting of a master node and worker nodes.
Set the hostname on each machine:

hostnamectl --static set-hostname <hostname>

10.6.0.140 - k8s-master
10.6.0.187 - k8s-node-1
10.6.0.188 - k8s-node-2
Configure /etc/hosts and add:
10.6.0.140 k8s-master
10.6.0.187 k8s-node-1
10.6.0.188 k8s-node-2
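One way to append these entries on every machine (a minimal sketch; skip hosts that already contain them):

cat >> /etc/hosts <<EOF
10.6.0.140 k8s-master
10.6.0.187 k8s-node-1
10.6.0.188 k8s-node-2
EOF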
Deployment:
1. Install k8s
yum install -y socat
cat <<EOF> /etc/yum.repos.d/k8s.repo
[kubelet]
name=kubelet
baseurl=http://files.rm-rf.ca/rpms/kubelet/
enabled=1
gpgcheck=0
EOF
yum makecache
yum install -y kubelet kubeadm kubectl kubernetes-cni
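As a quick sanity check (not part of the original write-up), verify the installed versions:

kubeadm version
kubelet --version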
Because Google is blocked in China, kubeadm init will get stuck pulling images from gcr.io when creating the cluster. Someone in China has already uploaded the images to Docker Hub, so we can download them directly:
docker pull chasontang/kube-proxy-amd64:v1.4.0
docker pull chasontang/kube-discovery-amd64:1.0
docker pull chasontang/kubedns-amd64:1.7
docker pull chasontang/kube-scheduler-amd64:v1.4.0
docker pull chasontang/kube-controller-manager-amd64:v1.4.0
docker pull chasontang/kube-apiserver-amd64:v1.4.0
docker pull chasontang/etcd-amd64:2.2.5
docker pull chasontang/kube-dnsmasq-amd64:1.3
docker pull chasontang/exechealthz-amd64:1.1
docker pull chasontang/pause-amd64:3.0
After downloading, use docker tag to alias them under the gcr.io/google_containers names:
docker tag chasontang/kube-proxy-amd64:v1.4.0 gcr.io/google_containers/kube-proxy-amd64:v1.4.0
docker tag chasontang/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0
docker tag chasontang/kubedns-amd64:1.7 gcr.io/google_containers/kubedns-amd64:1.7
docker tag chasontang/kube-scheduler-amd64:v1.4.0 gcr.io/google_containers/kube-scheduler-amd64:v1.4.0
docker tag chasontang/kube-controller-manager-amd64:v1.4.0 gcr.io/google_containers/kube-controller-manager-amd64:v1.4.0
docker tag chasontang/kube-apiserver-amd64:v1.4.0 gcr.io/google_containers/kube-apiserver-amd64:v1.4.0
docker tag chasontang/etcd-amd64:2.2.5 gcr.io/google_containers/etcd-amd64:2.2.5
docker tag chasontang/kube-dnsmasq-amd64:1.3 gcr.io/google_containers/kube-dnsmasq-amd64:1.3
docker tag chasontang/exechealthz-amd64:1.1 gcr.io/google_containers/exechealthz-amd64:1.1
docker tag chasontang/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
Remove the original chasontang tags (the images themselves remain, under the gcr.io names):
docker rmi chasontang/kube-proxy-amd64:v1.4.0
docker rmi chasontang/kube-discovery-amd64:1.0
docker rmi chasontang/kubedns-amd64:1.7
docker rmi chasontang/kube-scheduler-amd64:v1.4.0
docker rmi chasontang/kube-controller-manager-amd64:v1.4.0
docker rmi chasontang/kube-apiserver-amd64:v1.4.0
docker rmi chasontang/etcd-amd64:2.2.5
docker rmi chasontang/kube-dnsmasq-amd64:1.3
docker rmi chasontang/exechealthz-amd64:1.1
docker rmi chasontang/pause-amd64:3.0
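The pull/tag/rmi steps above can also be scripted. Here is a minimal sketch that does all three in one loop; the image list is exactly the one used in the commands above:

for img in kube-proxy-amd64:v1.4.0 kube-discovery-amd64:1.0 kubedns-amd64:1.7 kube-scheduler-amd64:v1.4.0 kube-controller-manager-amd64:v1.4.0 kube-apiserver-amd64:v1.4.0 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.3 exechealthz-amd64:1.1 pause-amd64:3.0; do
    docker pull chasontang/$img
    docker tag chasontang/$img gcr.io/google_containers/$img
    docker rmi chasontang/$img   # removes only the chasontang tag, not the image
done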
Start kubelet:
systemctl enable kubelet
systemctl start kubelet
Create the cluster with kubeadm:
[root@k8s-master ~]# kubeadm init --api-advertise-addresses=10.6.0.140 --use-kubernetes-version v1.4.0
<master/tokens> generated token: "eb4d40.67aac8417294a8cf"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 10.304645 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node has registered, but is not ready yet
<master/apiclient> first node has registered, but is not ready yet
<master/apiclient> first node has registered, but is not ready yet
<master/apiclient> first node has registered, but is not ready yet
<master/apiclient> first node has registered, but is not ready yet
<master/apiclient> first node is ready after 3.004762 seconds
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 4.002661 seconds
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can now join any number of machines by running the following on each node:

kubeadm join --token 8609e3.c2822cf312e597e1 10.6.0.140
Check the kubelet status:
systemctl status kubelet
On the worker nodes, Docker must be running before kubelet is started.
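If Docker is managed by systemd (an assumption; these commands are not in the original write-up), enable and start it first:

systemctl enable docker
systemctl start docker

Then start kubelet: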
systemctl enable kubelet
systemctl start kubelet
Now join the worker nodes to the cluster:
kubeadm join --token 8609e3.c2822cf312e597e1 10.6.0.140
Check the kubelet status:
systemctl status kubelet
Check the cluster status:
[root@k8s-master ~]# kubectl get node
NAME         STATUS    AGE
k8s-master   Ready     1d
k8s-node-1   Ready     1d
k8s-node-2   Ready     1d
All three nodes now show Ready, but Pods will only be scheduled on the worker nodes. If you want every node, including the master, to run Pods, remove the dedicated taint:
kubectl taint nodes --all dedicated-
Install the Pod network
Here we use the officially recommended Weave network:
kubectl apply -f https://git.io/weave-kube
Check the status of all Pods:
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s-master                       1/1       Running   1          49m
kube-system   kube-apiserver-k8s-master             1/1       Running   1          48m
kube-system   kube-controller-manager-k8s-master    1/1       Running   1          48m
kube-system   kube-discovery-1971138125-0oq58       1/1       Running   1          49m
kube-system   kube-dns-2247936740-ojzhw             3/3       Running   3          49m
kube-system   kube-proxy-amd64-1hhdf                1/1       Running   1          49m
kube-system   kube-proxy-amd64-4c2qt                1/1       Running   0          47m
kube-system   kube-proxy-amd64-tc3kw                1/1       Running   1          47m
kube-system   kube-scheduler-k8s-master             1/1       Running   1          48m
kube-system   weave-net-9mrlt                       2/2       Running   2          46m
kube-system   weave-net-oyguh                       2/2       Running   4          46m
kube-system   weave-net-zc67d                       2/2       Running   0          46m
Using GlusterFS as a volume
Official documentation:
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/glusterfs
1. Set up the GlusterFS cluster and create the GlusterFS volume; install glusterfs-client on every Kubernetes node.
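A minimal sketch of this step, with assumptions: the GlusterFS peers run on the same three machines, the brick path /data/brick/models is my own choice, and the volume name models matches the Pod spec used later. Package names vary by distribution.

# on every Kubernetes node: install the GlusterFS client packages
yum install -y glusterfs glusterfs-fuse

# on one GlusterFS node: form the cluster and create a replicated volume
gluster peer probe 10.6.0.187
gluster peer probe 10.6.0.188
gluster volume create models replica 3 10.6.0.140:/data/brick/models 10.6.0.187:/data/brick/models 10.6.0.188:/data/brick/models force
gluster volume start models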
2. On k8s-master, create an Endpoints object. My GlusterFS cluster here has 3 nodes.
vi glusterfs-endpoints.json
# Each GlusterFS node must have its own entry. The port can be any value (1-65535).
{ "kind": "Endpoints", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "subsets": [ { "addresses": [ { "ip": "10.6.0.140" } ], "ports": [ { "port": 1 } ] }, { "addresses": [ { "ip": "10.6.0.187" } ], "ports": [ { "port": 1 } ] }, { "addresses": [ { "ip": "10.6.0.188" } ], "ports": [ { "port": 1 } ] } ] }
Create the Endpoints:
[root@k8s-master ~]# kubectl create -f glusterfs-endpoints.json
endpoints "glusterfs-cluster" created
Check the Endpoints:
[root@k8s-master ~]# kubectl get endpoints
NAME                ENDPOINTS                                AGE
glusterfs-cluster   10.6.0.140:1,10.6.0.187:1,10.6.0.188:1   37s
3. On k8s-master, create a Service. It has no selector; it simply makes the manually created Endpoints persistent and addressable.
vi glusterfs-service.json
# Note: the port here must match the port used in the Endpoints above.
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "spec": { "ports": [ {"port": 1} ] } }
Create the Service:
[root@k8s-master ~]# kubectl create -f glusterfs-service.json
service "glusterfs-cluster" created
Check the Service:
[root@k8s-master ~]# kubectl get service
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
glusterfs-cluster   100.71.255.174   <none>        1/TCP     14s
4. On k8s-master, create a Pod to test the mount.
vi glusterfs-pod.json
# Under glusterfs, path is the name of the GlusterFS volume to mount.
# readOnly: true mounts the volume read-only; readOnly: false allows writes.
{ "apiVersion": "v1", "kind": "Pod", "metadata": { "name": "glusterfs" }, "spec": { "containers": [ { "name": "glusterfs", "image": "gcr.io/google_containers/pause-amd64:3.0", "volumeMounts": [ { "mountPath": "/mnt/glusterfs", "name": "glusterfsvol" } ] } ], "volumes": [ { "name": "glusterfsvol", "glusterfs": { "endpoints": "glusterfs-cluster", "path": "models", "readOnly": false } } ] } }
Check the mounted volume on the node where the Pod was scheduled:
[root@k8s-node-2 ~]# mount | grep models
10.6.0.140:models on /var/lib/kubelet/pods/947390da-8f6a-11e6-9ade-d4ae52d1f0c9/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
Write a Deployment YAML file (saved as nginx.yaml, to match the kubectl create command below):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Create it with kubectl create:
kubectl create -f nginx.yaml --record
Check the Pods:
[root@k8s-master ~]# kubectl get pod
NAME                               READY     STATUS    RESTARTS   AGE
nginx-deployment-646889141-459i5   1/1       Running   0          9m
nginx-deployment-646889141-vxn29   1/1       Running   0          9m
Check the Deployment:
[root@k8s-master ~]# kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2         2         2            2           10m
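Because --record was passed to kubectl create, the Deployment stores the creating command as its change-cause, which you can see in the rollout history (a follow-up check, not part of the original write-up):

kubectl rollout history deployment/nginx-deployment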