## Environment
docker1 CentOS 7 192.168.75.200
docker2 CentOS 7 192.168.75.201
docker3 CentOS 7 192.168.75.202
Physical network: 192.168.75.1/24
Docker version 1.10.3, build 3999ccb-unsupported (installation steps omitted)
## 1. Install Kubernetes
For installation, see the official guide: https://kubernetes.io/docs/getting-started-guides/kubeadm/
#### Install kubelet, kubectl and kubeadm
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
# setenforce 0
#### Set up a proxy, since Google is unreachable from inside China; 45.76.203.146:3129 is a Squid server I run outside the firewall
# ssh -N -L 3129:127.0.0.1:3129 root@45.76.203.146
# export http_proxy=http://127.0.0.1:3129
# yum install -y docker kubelet kubeadm kubectl kubernetes-cni
# unset http_proxy
# systemctl enable docker && systemctl start docker
# systemctl enable kubelet && systemctl start kubelet
#### Initialize the master
# kubeadm init
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.2
[tokens] Generated token: "eb2b99.e5d156dd860ef80d"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready

It hangs here; terminate with Ctrl+C.
#### What to do about the firewall
Because gcr.io is blocked and pulling images from quay.io is also very slow, calico never started properly.
Solution:
On a server outside China I ran an HAProxy TCP proxy to gcr.io port 443, forwarded that port 443 to my local port 443 over SSH, and finally edited the hosts file to point gcr.io at 127.0.0.1.
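A rough sketch of that tunnel, assuming HAProxy on the overseas host (45.76.203.146) is already listening on 127.0.0.1:443 and forwarding TCP to gcr.io:443:
# ssh -N -L 443:127.0.0.1:443 root@45.76.203.146     # forward local 443 to the HAProxy listener on the overseas host
# echo "127.0.0.1 gcr.io" >> /etc/hosts              # make pulls from gcr.io go through the tunnel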
The other image, quay.io/calico/node, I imported by hand:
export the image with docker save imageid > imageid.img,
copy it to the k8s host and run docker load < imageid.img,
then use docker tag imageid REPOSITORY:TAG to give the image its proper name.
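Put together, the round trip looks roughly like this; the image ID, the file name and the tag are placeholders to be replaced with whatever docker images shows for quay.io/calico/node:
# docker save <image-id> > calico-node.img               # on a machine that can pull from quay.io
# scp calico-node.img root@192.168.75.200:/root/         # copy the file to a k8s node
# docker load < calico-node.img                          # import it on the k8s node
# docker tag <image-id> quay.io/calico/node:<tag>        # restore the repository:tag the calico manifest expects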
#### Initialize again
# kubeadm init --pod-network-cidr=10.1.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.2
[tokens] Generated token: "f2ebdb.c0223c3c42185110"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 22.639406 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.506328 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 3.005029 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=5a674b.d33b282a7825fd65 192.168.75.201
#### Join the nodes to the master
# kubeadm join --token=5a674b.d33b282a7825fd65 192.168.75.201
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://192.168.75.201:9898/cluster-info/v1/?token-id=5a674b"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.75.201:6443]
[bootstrap] Trying to connect to endpoint https://192.168.75.201:6443
[bootstrap] Detected server version: v1.5.2
[bootstrap] Successfully established connection with endpoint "https://192.168.75.201:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:docker1 | CA: false
Not before: 2017-01-22 05:00:00 +0000 UTC Not After: 2018-01-22 05:00:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
#### Check the nodes on the master
# kubectl get nodes
NAME      STATUS         AGE
docker1   Ready          46s
docker2   Ready,master   11m
#### Configure k8s to install calico and its add-ons; you can also download the file and edit it locally
# cd /etc/kubernetes && wget http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
Edit calico.yaml: change the etcd address and calico's CIDR.
Change etcd_endpoints to point to the etcd you host yourself.
......
    cidr: 10.1.0.0/16
......
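For reference, those two edits could be scripted roughly as follows; this is only a sketch, assuming the key names of the v2.0 kubeadm-hosted manifest and that cidr appears only in the IP pool definition, with <etcd-host> as a placeholder for the self-hosted etcd's address:
# sed -i 's#etcd_endpoints:.*#etcd_endpoints: "http://<etcd-host>:2379"#' calico.yaml
# sed -i 's#cidr:.*#cidr: 10.1.0.0/16#' calico.yaml    # must match the --pod-network-cidr passed to kubeadm init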
# kubectl apply -f calico.yaml
#### Install the dashboard
# cd /etc/kubernetes && wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
# kubectl apply -f kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
#### Check the pods running in the kube-system namespace
# kubectl get po -o wide --namespace=kube-system
NAME                                       READY     STATUS    RESTARTS   AGE   IP               NODE
calico-etcd-ncsts                          1/1       Running   0          1h    192.168.75.201   docker2
calico-node-qlc5c                          2/2       Running   0          1h    192.168.75.200   docker1
calico-node-xvmvf                          2/2       Running   0          1h    192.168.75.201   docker2
calico-policy-controller-807063459-2jh90   1/1       Running   0          1h    192.168.75.200   docker1
dummy-2088944543-xhc51                     1/1       Running   0          1h    192.168.75.201   docker2
etcd-docker2                               1/1       Running   0          1h    192.168.75.201   docker2
kube-apiserver-docker2                     1/1       Running   0          1h    192.168.75.201   docker2
kube-controller-manager-docker2            1/1       Running   0          1h    192.168.75.201   docker2
kube-discovery-1769846148-f66cz            1/1       Running   0          1h    192.168.75.201   docker2
kube-dns-2924299975-k7bbj                  4/4       Running   65         1h    10.1.72.134      docker1
kube-proxy-9bdlh                           1/1       Running   0          1h    192.168.75.200   docker1
kube-proxy-c02sl                           1/1       Running   0          1h    192.168.75.201   docker2
kube-scheduler-docker2                     1/1       Running   0          1h    192.168.75.201   docker2
kubernetes-dashboard-3203831700-g2v0c      1/1       Running   17         1h    10.1.72.135      docker1
#### Access the dashboard
By default, accessing the dashboard through the master requires authentication.
It can be reached through kubectl proxy instead:
# kubectl proxy --accept-hosts='192.168.75.201' --address='192.168.75.201'
Starting to serve on 192.168.75.201:8001
Open http://192.168.75.201:8001/ui/
## 2. Install glusterfs
Because there are not that many machines in this environment, the k8s nodes are reused.
Install glusterfs on two of the machines:
# yum install centos-release-gluster38 -y
# yum install glusterfs{,-server,-fuse,-geo-replication,-libs,-api,-cli,-client-xlators} heketi -y
# systemctl start glusterd && systemctl enable glusterd
Run gluster on docker1 and form a cluster with docker2:
# gluster
gluster> peer probe 192.168.75.201
peer probe: success.
gluster> peer status
Number of Peers: 1

Hostname: 192.168.75.201
Uuid: 5a50cb29-feaa-4935-8b2d-70e39a0557ba
State: Peer in Cluster (Connected)
#### Add a volume for testing (delete it afterwards)
gluster> volume create vol1 replica 2 192.168.75.200:/media/gfs 192.168.75.201:/media/gfs force
volume create: vol1: success: please start the volume to access data
#### Start the volume (deleted after testing; see the cleanup sketch below)
gluster> volume start vol1
volume start: vol1: success
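Since vol1 was only created to verify the cluster, it can be removed afterwards with the standard commands (both ask for confirmation) from the same gluster shell:
gluster> volume stop vol1
gluster> volume delete vol1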
## 3. Configure heketi
# vi /etc/heketi/heketi.json
......
  # change the port to avoid conflicts
  "port": "1234",
......
  # enable authentication
  "use_auth": true,
......
  # change the admin user's key to adminkey
  "key": "adminkey"
......
  # switch the executor to ssh and configure the ssh credentials it needs;
  # passwordless ssh to every machine in the cluster is required
  # (use ssh-copy-id to copy the pub key to every glusterfs server)
  "executor": "ssh",
  "sshexec": {
    "keyfile": "/root/.ssh/id_rsa",
    "user": "root"
  },

  "_db_comment": "Database file name",
  "brick_min_size_gb" : 1,
  "db": "/var/lib/heketi/heketi.db"
}
#### Start heketi
# nohup heketi -config=/etc/heketi/heketi.json &
#### Configure heketi
Comment out requiretty in /etc/sudoers on every glusterfs host; otherwise adding the second node kept failing, and only after raising the log level did the log show sudo complaining about requiretty.
#Defaults requiretty
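The same edit can be scripted on each glusterfs host (a sketch only; the exact whitespace in /etc/sudoers can differ, so check the result):
# sed -i 's/^Defaults[[:space:]]*requiretty/#&/' /etc/sudoers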
#### Add a cluster
# heketi-cli -user=admin -server=http://192.168.75.200:1234 -secret=adminkey -json=true cluster create
{"id":"d102a74079dd79aceb3c70d6a7e8b7c4","nodes":[],"volumes":[]}
#### Add the 3 glusterfs machines to the cluster as nodes
# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true node add -cluster="d102a74079dd79aceb3c70d6a7e8b7c4" -management-host-name=192.168.75.200 -storage-host-name=192.168.75.200 -zone=1
{"zone":1,"hostnames":{"manage":["192.168.75.200"],"storage":["192.168.75.200"]},"cluster":"d102a74079dd79aceb3c70d6a7e8b7c4","id":"c3638f57b5c5302c6f7cd5136c8fdc5e","devices":[]}
# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true node add -cluster="d102a74079dd79aceb3c70d6a7e8b7c4" -management-host-name=192.168.75.201 -storage-host-name=192.168.75.201 -zone=1
{"zone":1,"hostnames":{"manage":["192.168.75.201"],"storage":["192.168.75.201"]},"cluster":"d102a74079dd79aceb3c70d6a7e8b7c4","id":"0245885cb56c482828413002c7ee994c","devices":[]}
# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true node add -cluster="d102a74079dd79aceb3c70d6a7e8b7c4" -management-host-name=192.168.75.202 -storage-host-name=192.168.75.202 -zone=1
{"zone":1,"hostnames":{"manage":["192.168.75.202"],"storage":["192.168.75.202"]},"cluster":"d102a74079dd79aceb3c70d6a7e8b7c4","id":"4c71d1213937ba01058b6cd7c9d84954","devices":[]}
#### Create the devices; each of the 3 VMs has an extra disk, sdb
# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true device add -name="/dev/sdb" -node="c3638f57b5c5302c6f7cd5136c8fdc5e"
Device added successfully
# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true device add -name="/dev/sdb" -node="0245885cb56c482828413002c7ee994c"
Device added successfully
# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true device add -name="/dev/sdb" -node="4c71d1213937ba01058b6cd7c9d84954"
Device added successfully
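To confirm the nodes and devices registered as expected, heketi can dump its topology (same authentication flags as above):
# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" topology info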
#### Add a volume; this step is not strictly necessary, as a PVC can create one automatically
If the volume is small it may fail with No Space; the log shows this is caused by the min brick limit. To work around it, add "brick_min_size_gb" : 1 to heketi.json (1 means 1 GB):
......
  "brick_min_size_gb" : 1,
  "db": "/var/lib/heketi/heketi.db"
......
size must be larger than brick_min_size_gb (setting it to 1 still reports min brick limit), and replica must be greater than 1.
# heketi-cli -server="http://192.168.75.200:1234" -user="admin" -secret="adminkey" -json=true volume create -size=3 -replica=2
{"size":3,"name":"vol_dfe295ac4c128e5e8a8cf7c7ce068d97","durability":{"type":"replicate","replicate":{"replica":2},"disperse":{"data":4,"redundancy":2}},"snapshot":{"enable":false,"factor":1},"id":"dfe295ac4c128e5e8a8cf7c7ce068d97","cluster":"d102a74079dd79aceb3c70d6a7e8b7c4","mount":{"glusterfs":{"device":"192.168.75.200:vol_dfe295ac4c128e5e8a8cf7c7ce068d97","options":{"backupvolfile-servers":"192.168.75.201"}}},"bricks":[{"id":"4479d5b212a7ca18caa0277e3f339f90","path":"/var/lib/heketi/mounts/vg_407cbaba20295ac218f0a44adc8b7e6f/brick_4479d5b212a7ca18caa0277e3f339f90/brick","device":"407cbaba20295ac218f0a44adc8b7e6f","node":"c3638f57b5c5302c6f7cd5136c8fdc5e","size":1572864},{"id":"c9489325b3e67bdb84250417c403f361","path":"/var/lib/heketi/mounts/vg_27f69c1a59a19e6d73b5477fdd121303/brick_c9489325b3e67bdb84250417c403f361/brick","device":"27f69c1a59a19e6d73b5477fdd121303","node":"0245885cb56c482828413002c7ee994c","size":1572864},{"id":"e3106832525b143a3913951ea7f24500","path":"/var/lib/heketi/mounts/vg_407cbaba20295ac218f0a44adc8b7e6f/brick_e3106832525b143a3913951ea7f24500/brick","device":"407cbaba20295ac218f0a44adc8b7e6f","node":"c3638f57b5c5302c6f7cd5136c8fdc5e","size":1572864},{"id":"fc3ede0060546f7eab48307efff05985","path":"/var/lib/heketi/mounts/vg_27f69c1a59a19e6d73b5477fdd121303/brick_fc3ede0060546f7eab48307efff05985/brick","device":"27f69c1a59a19e6d73b5477fdd121303","node":"0245885cb56c482828413002c7ee994c","size":1572864}]}
## 4. Configure kubernetes to use glusterfs
See https://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims
However, it has no sample of a glusterfs PV; after a long search I only found samples for an old version.
#### Create a StorageClass
# vi /etc/kubernetes/storageclass-glusterfs.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gfs1
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://192.168.75.200:1234"
  restauthenabled: "true"
  restuser: "admin"
  restuserkey: "adminkey"
  # secretNamespace and secretName are not set here; see the official docs for how to use them
#### Create the glusterfs StorageClass named gfs1 from the file
# kubectl apply -f /etc/kubernetes/storageclass-glusterfs.yaml
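A quick check that the class is registered (standard kubectl, nothing cluster-specific):
# kubectl get storageclass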
#### Create a persistent volume or persistent volume claim
See the official documentation:
https://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes-1
Creating a PV: there is no glusterfs sample, and my first attempts were rejected with errors about no volume type being defined. A yaml pieced together from the old docs surprisingly worked; pure luck. We do not use a bare PV anyway, we use a PVC instead.
# vi /etc/kubernetes/pv-pv0001.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  labels:
    type: glusterfs
    pv: pv0001
  annotations:
    volume.beta.kubernetes.io/storage-class: "fortest"
spec:
  capacity:
    storage: 2G
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "abc123"
# kubectl create -f /etc/kubernetes/pv-pv0001.yaml
persistentvolume "pv0001" created
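Note that the PV above references an Endpoints object named glusterfs-cluster, which must exist in the namespace of any pod that mounts it. A minimal sketch using this cluster's gluster hosts (the port value is required by the schema but, following the upstream glusterfs volume example, can be an arbitrary number such as 1):
# cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.75.200
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.75.201
    ports:
      - port: 1
EOF
The upstream example additionally creates a Service with the same name so that the endpoints persist.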
##### Create a PVC with access mode ReadWriteMany, using the gfs1 storage class created above and requesting 2G of storage
# vi /etc/kubernetes/pvc-fortest.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: default-pvc1
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "gfs1"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2G
  # the three lines below did not help; the pvc stayed in Pending until they were removed
  # selector:
  #   matchLabels:
  #     pvc: "default_pvc1"
# kubectl create -f /etc/kubernetes/pvc-default-pvc1.yaml
persistentvolumeclaim "default-pvc1" created
#### Check that the PVC status is Bound
# kubectl get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
default-pvc1   Bound     pvc-a06d5668-e201-11e6-a4fd-080027520bff   2Gi        RWX           10m
#### Create a pod that uses the PVC fortest
# vi /etc/kubernetes/pod-test1.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test1
  namespace: default
  labels:
    pvc: fortest
    env: test
    app: tomcat
spec:
  containers:
    - name: test-tomcat
      image: docker.io/tomcat
      volumeMounts:
        - mountPath: "/tmp"
          name: vl-test-tomcat
  volumes:
    - name: vl-test-tomcat
      persistentVolumeClaim:
        claimName: fortest
# kubectl create -f /etc/kubernetes/pod-test1.yaml
pod "test1" created
#### List the pods; test1 is running
# kubectl get po -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
test1     1/1       Running   0          37s       10.1.72.155   docker1
#### Enter the pod and confirm the volume is mounted at /tmp
# kubectl exec -it -p test1 /bin/bash
W0124 15:13:46.183022   31589 cmd.go:325] -p POD_NAME is DEPRECATED and will be removed in a future version. Use exec POD_NAME instead.
root@test1:/usr/local/tomcat# df -h
Filesystem                                                                                         Size  Used  Avail Use% Mounted on
/dev/mapper/docker-253:0-293364-ccad18bbad5ecb98b6fd9471dff9db415c2d7886e058fbe77ca3965079d0fdf6    10G  396M   9.7G   4% /
tmpfs                                                                                              371M     0   371M   0% /dev
tmpfs                                                                                              371M     0   371M   0% /sys/fs/cgroup
192.168.75.200:vol_39ac5a28aca8928778c8715216fd247c                                                2.0G   66M   2.0G   4% /tmp
/dev/mapper/centos-root                                                                             13G  6.0G   7.0G  47% /etc/hosts
shm                                                                                                 64M     0    64M   0% /dev/shm
tmpfs                                                                                              371M   12K   371M   1% /run/secrets/kubernetes.io/serviceaccount
## 5. Expose the application as a service
Expose the tomcat in pod test1 to the outside. DNS resolves a service's clusterIP under the name <svc-name>.<namespace>.svc.cluster.local. Only the nodes, the master and the containers can reach that DNS server; it is not reachable from outside, because the DNS service itself is also exposed as a clusterIP, here 10.96.0.10.
### Using NodePort
Create the service configuration file:
# cat svc_test1.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    provider: test1
  name: svc-test1
  namespace: default
spec:
  selector:
    env: test
    app: tomcat
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 30080
  type: NodePort
Create the service:
# kubectl apply -f svc_test1.yaml
service "svc-test1" created
Port 30080 on any node serves the tomcat welcome page.
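For example, from any machine on the 192.168.75.0/24 network (either node IP works; the response is the tomcat welcome page):
# curl http://192.168.75.200:30080/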
DNS resolution:
# dig svc-test1.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.1 <<>> svc-test1.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24787
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;svc-test1.default.svc.cluster.local. IN A

;; ANSWER SECTION:
svc-test1.default.svc.cluster.local. 30 IN A 10.96.59.39

;; Query time: 4 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Feb 07 16:23:44 CST 2017
;; MSG SIZE  rcvd: 69
### Using ClusterIP
# cat svc_ci_test1.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    provider: test1
  name: svc-ci-test1
  namespace: default
spec:
  selector:
    env: test
    app: tomcat
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  type: ClusterIP
Access it through the clusterIP; 10.106.204.126 is the automatically assigned clusterIP:
# curl http://10.106.204.126:8080/
# dig svc-ci-test1.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.1 <<>> svc-ci-test1.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 53338
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;svc-ci-test1.default.svc.cluster.local. IN A

;; ANSWER SECTION:
svc-ci-test1.default.svc.cluster.local. 30 IN A 10.106.204.126

;; Query time: 5 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Feb 07 16:24:09 CST 2017
;; MSG SIZE  rcvd: 72
You can also point a service at an external domain name or IP; the service name is then CNAMEd to it:
# cat svc_en_test1.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    provider: test1
  name: svc-en-test1
  namespace: default
spec:
  selector:
    env: test
    app: tomcat
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  externalName: www.baidu.com
  type: ExternalName
# dig svc-en-test1.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.9.4-RedHat-9.9.4-38.el7_3.1 <<>> svc-en-test1.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10681
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;svc-en-test1.default.svc.cluster.local. IN A

;; ANSWER SECTION:
svc-en-test1.default.svc.cluster.local. 30 IN CNAME www.baidu.com.
www.baidu.com.          30 IN CNAME www.a.shifen.com.
www.a.shifen.com.       30 IN A 61.135.169.121
www.a.shifen.com.       30 IN A 61.135.169.125

;; Query time: 22 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Tue Feb 07 17:10:20 CST 2017
;; MSG SIZE  rcvd: 142
## ReplicationController
Official documentation on probes:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
RC operations:
https://kubernetes.io/docs/user-guide/replication-controller/operations/#sample-file
RC resizing:
https://kubernetes.io/docs/user-guide/resizing-a-replication-controller/
Rolling updates:
https://kubernetes.io/docs/user-guide/rolling-updates/
Using a ReplicationController to manage a multi-replica application makes horizontal scaling up and down easy.
The RC configuration file:
# cat rc-test.yaml
kind: ReplicationController
apiVersion: v1
metadata:
  name: rctest1
  labels:
    env: default
    app: rctest1tomcat
  namespace: default
spec:
  replicas: 2
  selector:
    rcname: rctest1
  template:
    metadata:
      labels:
        rcname: rctest1
    spec:
      volumes:
        - name: vl-rc-rctest1-tomcat
          persistentVolumeClaim:
            claimName: default-pvc1
      containers:
        - name: tomcat
          image: tomcat
          volumeMounts:
            - mountPath: "/tmp"
              name: vl-rc-rctest1-tomcat
          ports:
            - containerPort: 8080
              protocol: TCP
          imagePullPolicy: IfNotPresent
          readinessProbe:
            initialDelaySeconds: 5
            httpGet:
              path: /
              port: 8080
              httpHeaders:
                - name: X-my-header
                  value: xxxx
            timeoutSeconds: 3
      restartPolicy: Always
#### Create the RC
# kubectl apply -f rc-test.yaml
replicationcontroller "rctest1" created
# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
rctest1   2         2         2         6m
# kubectl describe rc rctest1
Name:         rctest1
Namespace:    default
Image(s):     tomcat
Selector:     rcname=rctest1
Labels:       app=rctest1tomcat
              env=default
Replicas:     2 current / 2 desired
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Volumes:
  vl-rc-rctest1-tomcat:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  default-pvc1
    ReadOnly:   false
Events:
  FirstSeen  LastSeen  Count  From                        SubObjectPath  Type    Reason            Message
  ---------  --------  -----  ----                        -------------  ------  ------            -------
  7m         7m        1      {replication-controller }                  Normal  SuccessfulCreate  Created pod: rctest1-cqkbw
  7m         7m        1      {replication-controller }                  Normal  SuccessfulCreate  Created pod: rctest1-x3kd9
#### Resizing the RC
Automatic scaling up and down can be done with a Horizontal Pod Autoscaler; see https://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/walkthrough/ (a sketch follows the manual scale below).
# kubectl scale rc rctest1 --replicas=3
replicationcontroller "rctest1" scaled
# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
rctest1   3         3         3         15m
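As mentioned above, the same RC could also be scaled automatically with a Horizontal Pod Autoscaler. A minimal sketch (the replica bounds and CPU target are arbitrary, and heapster must be running in the cluster to supply the metrics):
# kubectl autoscale rc rctest1 --min=2 --max=5 --cpu-percent=80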
#### Updates
See the documentation for details.
#### Update the image
To keep things simple, just create a new tag directly on docker1:
# docker tag tomcat tomcat:v1
#### Rolling update of the RC
A rolling update can be driven by a modified file or by specifying a new image (the file-based form is sketched after the transcript below).
By default it scales one pod up and one down per minute, so the operation takes a while.
# kubectl rolling-update rctest1 --image=tomcat:v1
Created rctest1-8fea061ac5168e37a3c989adfba63bd1
Scaling up rctest1-8fea061ac5168e37a3c989adfba63bd1 from 0 to 2, scaling down rctest1 from 2 to 0 (keep 2 pods available, don't exceed 3 pods)
Scaling rctest1-8fea061ac5168e37a3c989adfba63bd1 up to 1
Scaling rctest1 down to 1
Scaling rctest1-8fea061ac5168e37a3c989adfba63bd1 up to 2
Scaling rctest1 down to 0
Update succeeded. Deleting old controller: rctest1
Renaming rctest1 to rctest1-8fea061ac5168e37a3c989adfba63bd1
replicationcontroller "rctest1" rolling updated
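The file-based variant mentioned above would look roughly like this, where new-rc.yaml is a hypothetical copy of rc-test.yaml with a new metadata.name, a changed selector value and the updated image, as kubectl rolling-update requires when given a file:
# kubectl rolling-update rctest1 -f new-rc.yaml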
#### Confirm the rolling update succeeded
# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
rctest1   2         2         2         2m
# kubectl describe rc rctest1
Name:         rctest1
Namespace:    default
Image(s):     tomcat:v1
Selector:     deployment=8fea061ac5168e37a3c989adfba63bd1,rcname=rctest1
Labels:       app=rctest1tomcat
              env=default
Replicas:     2 current / 2 desired
Pods Status:  2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Volumes:
  vl-rc-rctest1-tomcat:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  default-pvc1
    ReadOnly:   false
No events.
#### Testing the effect of the probes will have to wait for a later post.