For the principles and the architecture diagram, refer to the previous post; this one only records the operational steps. Since there is a lot to cover, it will be a fairly long post.
etcd version: 3.2.11
kube version: 1.8.4
contiv version: 1.1.7
docker version: 17.03.2-ce
OS version: debian stretch
Three etcd nodes (the contiv plugin also needs etcd, so each node runs two etcd instances):

192.168.5.84  etcd0, contiv0
192.168.5.85  etcd1, contiv1
192.168.2.77  etcd2, contiv2

Two LVS nodes. LVS proxies three services here: the apiserver, contiv's netmaster, and, because contiv does not support configuring multiple etcd endpoints, the three contiv etcd instances exposed behind a single VIP:

192.168.2.56  master
192.168.2.57  backup

Four Kubernetes nodes (3 masters, 1 worker node):

192.168.5.62   master01
192.168.5.63   master02
192.168.5.107  master03
192.168.5.68   node
1. Deploy etcd. These nodes run an older OS release, so systemd is not used here.

a. Deploy the etcd cluster used by Kubernetes. The etcd binary is started directly; the startup scripts are as follows:

# cat etcd-start.sh
#!/bin/bash
# get the local IP
localip=`ifconfig em2 | grep -w inet | awk '{print $2}' | awk -F: '{print $2}'`
pubip=0.0.0.0
# start the service
etcd --name etcd0 --data-dir /var/lib/etcd \
  --initial-advertise-peer-urls http://${localip}:2380 \
  --listen-peer-urls http://${localip}:2380 \
  --listen-client-urls http://${pubip}:2379 \
  --advertise-client-urls http://${pubip}:2379 \
  --initial-cluster-token my-etcd-token \
  --initial-cluster etcd0=http://192.168.5.84:2380,etcd1=http://192.168.5.85:2380,etcd2=http://192.168.2.77:2380 \
  --initial-cluster-state new >> /var/log/etcd.log 2>&1 &

# cat etcd-start.sh
#!/bin/bash
# get the local IP
localip=`ifconfig em2 | grep -w inet | awk '{print $2}' | awk -F: '{print $2}'`
pubip=0.0.0.0
# start the service
etcd --name etcd1 --data-dir /var/lib/etcd \
  --initial-advertise-peer-urls http://${localip}:2380 \
  --listen-peer-urls http://${localip}:2380 \
  --listen-client-urls http://${pubip}:2379 \
  --advertise-client-urls http://${pubip}:2379 \
  --initial-cluster-token my-etcd-token \
  --initial-cluster etcd0=http://192.168.5.84:2380,etcd1=http://192.168.5.85:2380,etcd2=http://192.168.2.77:2380 \
  --initial-cluster-state new >> /var/log/etcd.log 2>&1 &

# cat etcd-start.sh
#!/bin/bash
# get the local IP
localip=`ifconfig bond0 | grep -w inet | awk '{print $2}' | awk -F: '{print $2}'`
pubip=0.0.0.0
# start the service
etcd --name etcd2 --data-dir /var/lib/etcd \
  --initial-advertise-peer-urls http://${localip}:2380 \
  --listen-peer-urls http://${localip}:2380 \
  --listen-client-urls http://${pubip}:2379 \
  --advertise-client-urls http://${pubip}:2379 \
  --initial-cluster-token my-etcd-token \
  --initial-cluster etcd0=http://192.168.5.84:2380,etcd1=http://192.168.5.85:2380,etcd2=http://192.168.2.77:2380 \
  --initial-cluster-state new >> /var/log/etcd.log 2>&1 &
b. Deploy the etcd cluster used by contiv:

# cat etcd-2-start.sh
#!/bin/bash
# get the local IP
localip=`ifconfig em2 | grep -w inet | awk '{print $2}' | awk -F: '{print $2}'`
pubip=0.0.0.0
# start the service
etcd --name contiv0 --data-dir /var/etcd/contiv-data \
  --initial-advertise-peer-urls http://${localip}:6667 \
  --listen-peer-urls http://${localip}:6667 \
  --listen-client-urls http://${pubip}:6666 \
  --advertise-client-urls http://${pubip}:6666 \
  --initial-cluster-token contiv-etcd-token \
  --initial-cluster contiv0=http://192.168.5.84:6667,contiv1=http://192.168.5.85:6667,contiv2=http://192.168.2.77:6667 \
  --initial-cluster-state new >> /var/log/etcd-contiv.log 2>&1 &

# cat etcd-2-start.sh
#!/bin/bash
# get the local IP
localip=`ifconfig em2 | grep -w inet | awk '{print $2}' | awk -F: '{print $2}'`
pubip='0.0.0.0'
# start the service
etcd --name contiv1 --data-dir /var/etcd/contiv-data \
  --initial-advertise-peer-urls http://${localip}:6667 \
  --listen-peer-urls http://${localip}:6667 \
  --listen-client-urls http://${pubip}:6666 \
  --advertise-client-urls http://${pubip}:6666 \
  --initial-cluster-token contiv-etcd-token \
  --initial-cluster contiv0=http://192.168.5.84:6667,contiv1=http://192.168.5.85:6667,contiv2=http://192.168.2.77:6667 \
  --initial-cluster-state new >> /var/log/etcd-contiv.log 2>&1 &

# cat etcd-2-start.sh
#!/bin/bash
# get the local IP
localip=`ifconfig bond0 | grep -w inet | awk '{print $2}' | awk -F: '{print $2}'`
pubip=0.0.0.0
# start the service
etcd --name contiv2 --data-dir /var/etcd/contiv-data \
  --initial-advertise-peer-urls http://${localip}:6667 \
  --listen-peer-urls http://${localip}:6667 \
  --listen-client-urls http://${pubip}:6666 \
  --advertise-client-urls http://${pubip}:6666 \
  --initial-cluster-token contiv-etcd-token \
  --initial-cluster contiv0=http://192.168.5.84:6667,contiv1=http://192.168.5.85:6667,contiv2=http://192.168.2.77:6667 \
  --initial-cluster-state new >> /var/log/etcd-contiv.log 2>&1 &
c. Start the services by simply running the scripts:

# bash etcd-start.sh
# bash etcd-2-start.sh

d. Verify the cluster state:

# etcdctl member list
4e2d8913b0f6d79d, started, etcd2, http://192.168.2.77:2380, http://0.0.0.0:2379
7b72fa2df0544e1b, started, etcd0, http://192.168.5.84:2380, http://0.0.0.0:2379
930f118a7f33cf1c, started, etcd1, http://192.168.5.85:2380, http://0.0.0.0:2379

# etcdctl --endpoints=http://192.168.6.17:6666 member list
21868a2f15be0a01, started, contiv0, http://192.168.5.84:6667, http://0.0.0.0:6666
63df25ae8bd96b52, started, contiv1, http://192.168.5.85:6667, http://0.0.0.0:6666
cf59e48c1866f41d, started, contiv2, http://192.168.2.77:6667, http://0.0.0.0:6666
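member list only shows membership. To confirm that every member is actually healthy, a check along these lines can be run from any host with etcdctl (a sketch, using the v2 API that etcdctl 3.2 defaults to and the client endpoints listed above):

# health of the etcd cluster used by Kubernetes
etcdctl --endpoints=http://192.168.5.84:2379,http://192.168.5.85:2379,http://192.168.2.77:2379 cluster-health
# health of the etcd cluster used by contiv
etcdctl --endpoints=http://192.168.5.84:6666,http://192.168.5.85:6666,http://192.168.2.77:6666 cluster-health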
e. Configure LVS to proxy contiv's etcd; its VIP is 192.168.6.17. The proxy configuration for the other two services is included here as well, since it only amounts to two more virtual_server blocks; the apiserver VIP is 192.168.6.16.

# vim vi_bgp_VI1_yizhuang.inc
vrrp_instance VII_1 {
    virtual_router_id 102
    interface eth0
    include /etc/keepalived/state_VI1.conf
    preempt_delay 120
    garp_master_delay 0
    garp_master_refresh 5
    lvs_sync_daemon_interface eth0
    authentication {
        auth_type PASS
        auth_pass opsdk
    }
    virtual_ipaddress {
        #k8s-apiserver
        192.168.6.16
        #etcd
        192.168.6.17
    }
}

A separate state config file is used to distinguish the master and backup roles; in other words, only this part differs between the two nodes, and the rest of the configuration can be copied over as-is.

# vim /etc/keepalived/state_VI1.conf
#uy-s-07
state MASTER
priority 150
#uy-s-45
# state BACKUP
# priority 100

# vim /etc/keepalived/k8s.conf
virtual_server 192.168.6.16 6443 {
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    delay_loop 20
    protocol TCP
    real_server 192.168.5.62 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.63 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.107 6443 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
virtual_server 192.168.6.17 6666 {
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    delay_loop 20
    protocol TCP
    real_server 192.168.5.84 6666 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.85 6666 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.2.77 6666 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
virtual_server 192.168.6.16 9999 {
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    delay_loop 20
    protocol TCP
    real_server 192.168.5.62 9999 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.63 9999 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
    real_server 192.168.5.107 9999 {
        weight 10
        TCP_CHECK {
            connect_timeout 10
        }
    }
}
Set the VIP on each of the etcd real servers:

# vim /etc/network/interfaces
auto lo:17
iface lo:17 inet static
    address 192.168.6.17
    netmask 255.255.255.255

# ifconfig lo:17 192.168.6.17 netmask 255.255.255.255 up

Set the VIP on each of the apiserver real servers:

# vim /etc/network/interfaces
auto lo:16
iface lo:16 inet static
    address 192.168.6.16
    netmask 255.255.255.255

# ifconfig lo:16 192.168.6.16 netmask 255.255.255.255 up

Set kernel parameters on all real servers:

# vim /etc/sysctl.conf
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_forward = 1
net.netfilter.nf_conntrack_max = 2048000
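These parameters only take effect after a reload; a minimal check, assuming the lines were appended to /etc/sysctl.conf as above:

# reload kernel parameters and spot-check the ARP settings on each real server
sysctl -p /etc/sysctl.conf
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce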
Start the service and check its status:

# /etc/init.d/keepalived start
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=1048576)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.6.16:6443 rr
  -> 192.168.5.62:6443            Route   10     1          0
  -> 192.168.5.63:6443            Route   10     0          0
  -> 192.168.5.107:6443           Route   10     4          0
TCP  192.168.6.16:9999 rr
  -> 192.168.5.62:9999            Route   10     0          0
  -> 192.168.5.63:9999            Route   10     0          0
  -> 192.168.5.107:9999           Route   10     0          0
TCP  192.168.6.17:6666 rr
  -> 192.168.2.77:6666            Route   10     24         14
  -> 192.168.5.84:6666            Route   10     22         13
  -> 192.168.5.85:6666            Route   10     18         14
2. Deploy Kubernetes. The previous post already covered the detailed steps, so some content is skipped here.

a. Install kubeadm, kubectl, and kubelet. The repository has already moved to 1.9, so to install an older release the version number must be specified explicitly:
# aptitude install -y kubeadm=1.8.4-00 kubectl=1.8.4-00 kubelet=1.8.4-00
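Optionally, the packages can also be held at this version so that a later routine apt upgrade does not move them up to 1.9 (an extra precaution, not strictly required):

# apt-mark hold kubeadm kubectl kubelet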
b. Initialize the first master node with kubeadm. Since the contiv plugin is used, the podSubnet network parameter could actually be left unset here: contiv does not use the controller-manager's subnet-allocating feature (neither does weave, for that matter).

# cat kubeadm-config.yml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "192.168.5.62"
etcd:
  endpoints:
  - "http://192.168.5.84:2379"
  - "http://192.168.5.85:2379"
  - "http://192.168.2.77:2379"
kubernetesVersion: "v1.8.4"
apiServerCertSANs:
- uy06-04
- uy06-05
- uy08-10
- uy08-11
- 192.168.6.16
- 192.168.6.17
- 127.0.0.1
- 192.168.5.62
- 192.168.5.63
- 192.168.5.107
- 192.168.5.108
- 30.0.0.1
- 10.244.0.1
- 10.96.0.1
- kubernetes
- kubernetes.default
- kubernetes.default.svc
- kubernetes.default.svc.cluster
- kubernetes.default.svc.cluster.local
tokenTTL: 0s
networking:
  podSubnet: 30.0.0.0/10

Run the initialization:

# kubeadm init --config=kubeadm-config.yml
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [uy06-04 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local uy06-04 uy06-05 uy08-10 uy08-11 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.62 192.168.6.16 192.168.6.17 127.0.0.1 192.168.5.62 192.168.5.63 192.168.5.107 192.168.5.108 30.0.0.1 10.244.0.1 10.96.0.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.502953 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node uy06-04 as master by adding a label and a taint
[markmaster] Master uy06-04 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 0c8921.578cf94fe0721e01
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 0c8921.578cf94fe0721e01 192.168.5.62:6443 --discovery-token-ca-cert-hash sha256:58cf1826d49e44fb6ff1590ddb077dd4e530fe58e13c1502ec07ce41ba6cc39e
c. Verify that the API can be reached using the certificates (be sure to check this on every node; certificate problems lead to all sorts of other issues):

# cd /etc/kubernetes/pki/
# curl --cacert ca.crt --cert apiserver-kubelet-client.crt --key apiserver-kubelet-client.key https://192.168.5.62:6443
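Since every node has to be checked, a small loop can save some time once the remaining masters are up. This is only a sketch under that assumption; the /healthz path and the endpoint list simply follow the topology above:

# probe every apiserver (and the VIP) with the same client certificate
cd /etc/kubernetes/pki
for ep in 192.168.5.62 192.168.5.63 192.168.5.107 192.168.6.16; do
    echo "== ${ep} =="
    curl -s --cacert ca.crt --cert apiserver-kubelet-client.crt \
         --key apiserver-kubelet-client.key https://${ep}:6443/healthz
    echo
done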
d. Allow the master node to take part in scheduling:
# kubectl taint nodes --all node-role.kubernetes.io/master-
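To confirm the taint is gone, a quick check (any of the master node names from this cluster works):

# kubectl describe node uy06-04 | grep -i taints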
e. Install contiv.

Download and unpack the installation package:

# curl -L -O https://github.com/contiv/install/releases/download/1.1.7/contiv-1.1.7.tgz
# tar xvf contiv-1.1.7.tgz

Modify the YAML file:

# cd contiv-1.1.7/
# vim install/k8s/k8s1.6/contiv.yaml

1. Change the CA path and copy the Kubernetes CA to that path (a copy sketch follows this list):
   "K8S_CA": "/var/contiv/ca.crt"
2. Change the netmaster deployment type from ReplicaSet to DaemonSet (for netmaster high availability). Since a nodeSelector is used, all three masters must carry the master label:
   nodeSelector:
     node-role.kubernetes.io/master: ""
3. Comment out the replicas directive.
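For item 1, the CA file referenced by K8S_CA has to exist on every node that will run netplugin or netmaster. A minimal sketch of that copy, assuming the kubeadm-generated CA at /etc/kubernetes/pki/ca.crt:

# run on every node
mkdir -p /var/contiv
cp /etc/kubernetes/pki/ca.crt /var/contiv/ca.crt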
One more note: Contiv provides an install script, and the installation itself is performed by running it:

# ./install/k8s/install.sh -n 192.168.6.16 -w routing -s etcd://192.168.6.17:6666
Installing Contiv for Kubernetes
secret "aci.key" created
Generating local certs for Contiv Proxy
Setting installation parameters
Applying contiv installation
To customize the installation press Ctrl+C and edit ./.contiv.yaml.
Extracting netctl from netplugin container
dafec6d9f0036d4743bf4b8a51797ddd19f4402eb6c966c417acf08922ad59bb
clusterrolebinding "contiv-netplugin" created
clusterrole "contiv-netplugin" created
serviceaccount "contiv-netplugin" created
clusterrolebinding "contiv-netmaster" created
clusterrole "contiv-netmaster" created
serviceaccount "contiv-netmaster" created
configmap "contiv-config" created
daemonset "contiv-netplugin" created
daemonset "contiv-netmaster" created
Creating network default:contivh1
daemonset "contiv-netplugin" deleted
clusterrolebinding "contiv-netplugin" configured
clusterrole "contiv-netplugin" configured
serviceaccount "contiv-netplugin" unchanged
clusterrolebinding "contiv-netmaster" configured
clusterrole "contiv-netmaster" configured
serviceaccount "contiv-netmaster" unchanged
configmap "contiv-config" unchanged
daemonset "contiv-netplugin" created
daemonset "contiv-netmaster" configured
Installation is complete
=========================================================
Contiv UI is available at https://192.168.6.16:10000
Please use the first run wizard or configure the setup as follows:
 Configure forwarding mode (optional, default is routing).
 netctl global set --fwd-mode routing
 Configure ACI mode (optional)
 netctl global set --fabric-mode aci --vlan-range <start>-<end>
 Create a default network
 netctl net create -t default --subnet=<CIDR> default-net
 For example, netctl net create -t default --subnet=20.1.1.0/24 -g 20.1.1.1 default-net
=========================================================
Three parameters are used here:

-n  the netmaster address. For high availability, three netmasters are run and LVS proxies them behind a single VIP.
-w  the forwarding mode.
-s  the external etcd address. When an external etcd is specified, the installer does not create an etcd container and nothing needs to be handled manually.

Contiv also ships with a UI listening on port 10000 (as noted in the install output above), which can be used to manage networks. The default username and password are admin/admin.

That said, if you know exactly what you want to do, the CLI is quicker and more convenient.

Create a subnet:

# netctl net create -t default --subnet=30.0.0.0/10 -g 30.0.0.1 default-net
# netctl network ls
Tenant   Network      Nw Type  Encap type  Packet tag  Subnet        Gateway    IPv6Subnet  IPv6Gateway  Cfgd Tag
------   -------      -------  ----------  ----------  -------       ------     ----------  -----------  --------
default  contivh1     infra    vxlan       0           132.1.1.0/24  132.1.1.1
default  default-net  data     vxlan       0           30.0.0.0/10   30.0.0.1

Once the network has been created, the kube-dns pods can obtain IP addresses and come up.
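A quick way to confirm that is to check that kube-dns is Running and has a pod IP from the new subnet (a sketch):

# kubectl get po -n kube-system -o wide | grep kube-dns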
f. Deploy the other two master nodes.

Copy all configuration files and certificates over from the first node:
# scp -r 192.168.5.62:/etc/kubernetes/* /etc/kubernetes/
Generate new certificates for the new master node:

# cat uy06-05.sh
#!/bin/bash
#apiserver-kubelet-client
openssl genrsa -out apiserver-kubelet-client.key 2048
openssl req -new -key apiserver-kubelet-client.key -out apiserver-kubelet-client.csr -subj "/O=system:masters/CN=kube-apiserver-kubelet-client"
openssl x509 -req -set_serial $(date +%s%N) -in apiserver-kubelet-client.csr -CA ca.crt -CAkey ca.key -out apiserver-kubelet-client.crt -days 365 -extensions v3_req -extfile apiserver-kubelet-client-openssl.cnf

#controller-manager
openssl genrsa -out controller-manager.key 2048
openssl req -new -key controller-manager.key -out controller-manager.csr -subj "/CN=system:kube-controller-manager"
openssl x509 -req -set_serial $(date +%s%N) -in controller-manager.csr -CA ca.crt -CAkey ca.key -out controller-manager.crt -days 365 -extensions v3_req -extfile controller-manager-openssl.cnf

#scheduler
openssl genrsa -out scheduler.key 2048
openssl req -new -key scheduler.key -out scheduler.csr -subj "/CN=system:kube-scheduler"
openssl x509 -req -set_serial $(date +%s%N) -in scheduler.csr -CA ca.crt -CAkey ca.key -out scheduler.crt -days 365 -extensions v3_req -extfile scheduler-openssl.cnf

#admin
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/O=system:masters/CN=kubernetes-admin"
openssl x509 -req -set_serial $(date +%s%N) -in admin.csr -CA ca.crt -CAkey ca.key -out admin.crt -days 365 -extensions v3_req -extfile admin-openssl.cnf

#node
openssl genrsa -out $(hostname).key 2048
openssl req -new -key $(hostname).key -out $(hostname).csr -subj "/O=system:nodes/CN=system:node:$(hostname)" -config kubelet-openssl.cnf
openssl x509 -req -set_serial $(date +%s%N) -in $(hostname).csr -CA ca.crt -CAkey ca.key -out $(hostname).crt -days 365 -extensions v3_req -extfile kubelet-openssl.cnf

This generates certificates for the apiserver-kubelet-client, controller-manager, scheduler, admin, and the node's kubelet; the openssl extension config they reference is in fact identical:

[ v3_req ]
# Extensions to add to a certificate request
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth

Replace the old certificates with the new ones. Of these, only the apiserver-kubelet-client certificate is referenced by file path; the others are embedded directly into the kubeconfig files as base64-encoded content:

#!/bin/bash
VIP=192.168.5.62
APISERVER_PORT=6443
HOSTNAME=$(hostname)
CA_CRT=$(cat ca.crt | base64 -w0)
CA_KEY=$(cat ca.key | base64 -w0)
ADMIN_CRT=$(cat admin.crt | base64 -w0)
ADMIN_KEY=$(cat admin.key | base64 -w0)
CONTROLLER_CRT=$(cat controller-manager.crt | base64 -w0)
CONTROLLER_KEY=$(cat controller-manager.key | base64 -w0)
KUBELET_CRT=$(cat $(hostname).crt | base64 -w0)
KUBELET_KEY=$(cat $(hostname).key | base64 -w0)
SCHEDULER_CRT=$(cat scheduler.crt | base64 -w0)
SCHEDULER_KEY=$(cat scheduler.key | base64 -w0)

#admin
sed -e "s/VIP/$VIP/g" -e "s/APISERVER_PORT/$APISERVER_PORT/g" -e "s/CA_CRT/$CA_CRT/g" -e "s/ADMIN_CRT/$ADMIN_CRT/g" -e "s/ADMIN_KEY/$ADMIN_KEY/g" admin.temp > admin.conf
cp -a admin.conf /etc/kubernetes/admin.conf

#kubelet
sed -e "s/VIP/$VIP/g" -e "s/APISERVER_PORT/$APISERVER_PORT/g" -e "s/HOSTNAME/$HOSTNAME/g" -e "s/CA_CRT/$CA_CRT/g" -e "s/CA_KEY/$CA_KEY/g" -e "s/KUBELET_CRT/$KUBELET_CRT/g" -e "s/KUBELET_KEY/$KUBELET_KEY/g" kubelet.temp > kubelet.conf
cp -a kubelet.conf /etc/kubernetes/kubelet.conf

#controller-manager
sed -e "s/VIP/$VIP/g" -e "s/APISERVER_PORT/$APISERVER_PORT/g" -e "s/CA_CRT/$CA_CRT/g" -e "s/CONTROLLER_CRT/$CONTROLLER_CRT/g" -e "s/CONTROLLER_KEY/$CONTROLLER_KEY/g" controller-manager.temp > controller-manager.conf
cp -a controller-manager.conf /etc/kubernetes/controller-manager.conf

#scheduler
sed -e "s/VIP/$VIP/g" -e "s/APISERVER_PORT/$APISERVER_PORT/g" -e "s/CA_CRT/$CA_CRT/g" -e "s/SCHEDULER_CRT/$SCHEDULER_CRT/g" -e "s/SCHEDULER_KEY/$SCHEDULER_KEY/g" scheduler.temp > scheduler.conf
cp -a scheduler.conf /etc/kubernetes/scheduler.conf

#manifest kube-apiserver-client
cp -a apiserver-kubelet-client.key /etc/kubernetes/pki/
cp -a apiserver-kubelet-client.crt /etc/kubernetes/pki/
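Before restarting anything, it is worth a quick sanity check that each new certificate actually chains to the cluster CA and carries the expected subject (a sketch; the *.temp kubeconfig templates used above are the author's own files and are not shown here):

# verify signatures and subjects of the freshly generated certs
for crt in apiserver-kubelet-client admin controller-manager scheduler $(hostname); do
    openssl verify -CAfile ca.crt ${crt}.crt
    openssl x509 -noout -subject -in ${crt}.crt
done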
Also, because contiv's netmaster uses a nodeSelector, remember to give the two newly deployed masters the master role label as well; by default, newly joined nodes carry no role label.

# kubectl label node uy06-05 node-role.kubernetes.io/master=
# kubectl label node uy08-10 node-role.kubernetes.io/master=
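To confirm the label is present on all three masters (a quick check):

# kubectl get nodes --show-labels | grep node-role.kubernetes.io/master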
After replacing the certificates, every place in the cluster that needs to reach the apiserver must be switched to the VIP, and advertise-address must be changed to the local machine's address; remember to restart the kubelet service after changing the local configuration.

# sed -i "s@192.168.5.62@192.168.6.16@g" admin.conf
# sed -i "s@192.168.5.62@192.168.6.16@g" controller-manager.conf
# sed -i "s@192.168.5.62@192.168.6.16@g" kubelet.conf
# sed -i "s@192.168.5.62@192.168.6.16@g" scheduler.conf

# kubectl edit cm cluster-info -n kube-public
# kubectl edit cm kube-proxy -n kube-system

# vim manifests/kube-apiserver.yaml
--advertise-address=192.168.5.63
# systemctl restart kubelet
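After the restart, a reasonable sanity check on each new master is that the local kubeconfigs now point at the VIP and the static pods come back up (a sketch; kubectl here uses the admin.conf that was just rewritten):

# grep "server:" /etc/kubernetes/*.conf
# kubectl get po -n kube-system -o wide | grep $(hostname)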
g. Verification: try joining the worker node to the cluster by pointing kubeadm join at the apiserver VIP.

# kubeadm join --token 0c8921.578cf94fe0721e01 192.168.6.16:6443 --discovery-token-ca-cert-hash sha256:58cf1826d49e44fb6ff1590ddb077dd4e530fe58e13c1502ec07ce41ba6cc39e
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "192.168.6.16:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.6.16:6443"
[discovery] Requesting info from "https://192.168.6.16:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.6.16:6443"
[discovery] Successfully established connection with API Server "192.168.6.16:6443"
[bootstrap] Detected server version: v1.8.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
h. At this point, the whole Kubernetes cluster is up.

# kubectl get no
NAME      STATUS    ROLES     AGE       VERSION
uy06-04   Ready     master    1d        v1.8.4
uy06-05   Ready     master    1d        v1.8.4
uy08-10   Ready     master    1d        v1.8.4
uy08-11   Ready     <none>    1d        v1.8.4

# kubectl get po --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
development   snowflake-f88456558-55jk8               1/1       Running   0          3h
development   snowflake-f88456558-5lkjr               1/1       Running   0          3h
development   snowflake-f88456558-mm7hc               1/1       Running   0          3h
development   snowflake-f88456558-tpbhw               1/1       Running   0          3h
kube-system   contiv-netmaster-6ctqj                  3/3       Running   0          6h
kube-system   contiv-netmaster-w4tx9                  3/3       Running   0          3h
kube-system   contiv-netmaster-wrlgc                  3/3       Running   0          3h
kube-system   contiv-netplugin-nbhkm                  2/2       Running   0          6h
kube-system   contiv-netplugin-rf569                  2/2       Running   0          3h
kube-system   contiv-netplugin-sczzk                  2/2       Running   0          3h
kube-system   contiv-netplugin-tlf77                  2/2       Running   0          5h
kube-system   heapster-59ff54b574-jq52w               1/1       Running   0          3h
kube-system   heapster-59ff54b574-nhl56               1/1       Running   0          3h
kube-system   heapster-59ff54b574-wchcr               1/1       Running   0          3h
kube-system   kube-apiserver-uy06-04                  1/1       Running   0          7h
kube-system   kube-apiserver-uy06-05                  1/1       Running   0          5h
kube-system   kube-apiserver-uy08-10                  1/1       Running   0          3h
kube-system   kube-controller-manager-uy06-04         1/1       Running   0          7h
kube-system   kube-controller-manager-uy06-05         1/1       Running   0          5h
kube-system   kube-controller-manager-uy08-10         1/1       Running   0          3h
kube-system   kube-dns-545bc4bfd4-fcr9q               3/3       Running   0          7h
kube-system   kube-dns-545bc4bfd4-ml52t               3/3       Running   0          3h
kube-system   kube-dns-545bc4bfd4-p6d7r               3/3       Running   0          3h
kube-system   kube-dns-545bc4bfd4-t8ttx               3/3       Running   0          3h
kube-system   kube-proxy-bpdr9                        1/1       Running   0          3h
kube-system   kube-proxy-cjnt5                        1/1       Running   0          5h
kube-system   kube-proxy-l4w49                        1/1       Running   0          7h
kube-system   kube-proxy-wmqgg                        1/1       Running   0          3h
kube-system   kube-scheduler-uy06-04                  1/1       Running   0          7h
kube-system   kube-scheduler-uy06-05                  1/1       Running   0          5h
kube-system   kube-scheduler-uy08-10                  1/1       Running   0          3h
kube-system   kubernetes-dashboard-5c54687f9c-ssklk   1/1       Running   0          3h
production    frontend-987698689-7pc56                1/1       Running   0          3h
production    redis-master-5f68fbf97c-jft59           1/1       Running   0          3h
production    redis-slave-74855dfc5-2bfwj             1/1       Running   0          3h
production    redis-slave-74855dfc5-rcrkm             1/1       Running   0          3h
staging       cattle-5f67c7948b-2j8jf                 1/1       Running   0          2h
staging       cattle-5f67c7948b-4zcft                 1/1       Running   0          2h
staging       cattle-5f67c7948b-gk87r                 1/1       Running   0          2h
staging       cattle-5f67c7948b-gzhc5                 1/1       Running   0          2h

# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

# kubectl cluster-info
Kubernetes master is running at https://192.168.6.16:6443
Heapster is running at https://192.168.6.16:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.6.16:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
Additional notes:

By default, kubectl does not have permission to view pod logs. To grant it:

# vim kubelet.rbac.yaml
# This role allows full access to the kubelet API
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-api-admin
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/log
  - nodes/stats
  - nodes/metrics
  - nodes/spec
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-apiserver-kubelet-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-api-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kube-apiserver-kubelet-client
# kubectl apply -f kubelet.rbac.yaml
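Once the binding is applied, fetching logs should work; a quick check (any running pod will do; kube-proxy-bpdr9 is taken from the pod listing above):

# kubectl logs -n kube-system kube-proxy-bpdr9 --tail=10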