Since I have already written two posts on deploying Kubernetes and the overall process is basically the same, this one focuses only on the deployment of CoreDNS and kube-router.
kube version: 1.9.1
docker version: 17.03.2-ce
OS version: debian stretch
As before, there are three master nodes and one worker node.
1. Prepare the images. Download them yourself; reaching gcr.io may require a proxy.
# docker images | grep 1.9.1
gcr.io/google_containers/kube-apiserver-amd64             v1.9.1
gcr.io/google_containers/kube-controller-manager-amd64    v1.9.1
gcr.io/google_containers/kube-scheduler-amd64             v1.9.1
gcr.io/google_containers/kube-proxy-amd64                 v1.9.1
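If the nodes themselves cannot reach gcr.io, one common workaround is to pull the images on a machine that can and ship them over with docker save / docker load. A minimal sketch, assuming such a machine exists; the image list comes from the output above and the tarball names are arbitrary:

# on a machine that can reach gcr.io: pull the four control-plane images and save them to tarballs
for img in kube-apiserver-amd64 kube-controller-manager-amd64 kube-scheduler-amd64 kube-proxy-amd64; do
    docker pull gcr.io/google_containers/${img}:v1.9.1
    docker save gcr.io/google_containers/${img}:v1.9.1 -o ${img}-v1.9.1.tar
done
# copy the tarballs to every cluster node, then load them there
for f in ./*-v1.9.1.tar; do docker load -i "$f"; done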
2. Install the new versions of kubeadm, kubectl, and kubelet.
# aptitude install -y kubeadm kubectl kubelet
3. Deploy the first master node. Prepare the kubeadm configuration file; the official documentation for this configuration is incomplete, unusable really, so the file below took a fair amount of searching and testing.
# cat kubeadm-config-191.yml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "192.168.5.62"
etcd:
  endpoints:
  - "http://192.168.5.84:2379"
  - "http://192.168.5.85:2379"
  - "http://192.168.2.77:2379"
kubernetesVersion: "v1.9.1"
apiServerCertSANs:
- uy06-04
- uy06-05
- uy08-10
- uy08-11
- 192.168.6.16
- 192.168.6.17
- 127.0.0.1
- 192.168.5.62
- 192.168.5.63
- 192.168.5.107
- 192.168.5.108
- 30.0.0.1
- 10.96.0.1
- kubernetes
- kubernetes.default
- kubernetes.default.svc
- kubernetes.default.svc.cluster
- kubernetes.default.svc.cluster.local
tokenTTL: 0s
networking:
  podSubnet: 30.0.0.0/10
apiServerExtraArgs:
  enable-swagger-ui: "true"
  insecure-bind-address: 0.0.0.0
  insecure-port: "8088"
  endpoint-reconciler-type: "lease"
controllerManagerExtraArgs:
  address: 0.0.0.0
schedulerExtraArgs:
  address: 0.0.0.0
featureGates:
  CoreDNS: true
kubeProxy:
  config:
    featureGates: "SupportIPVSProxyMode=true"
    mode: "ipvs"
A reminder: what is enabled here is kube-proxy's IPVS mode, and what gets deployed at this stage is still kube-proxy, not kube-router.
If you intend to use kube-router as the network plugin, you can ignore the kube-proxy configuration, because kube-proxy will be deleted later. kube-router not only replaces kube-proxy for proxying Services, it is also the network plugin itself.
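Since IPVS mode is enabled, the IPVS kernel modules have to be available on every node before kube-proxy (or later kube-router) starts. A minimal pre-check sketch; the module names are the standard IPVS set shipped with Debian stretch's kernel:

# see which IPVS modules are already loaded
lsmod | grep '^ip_vs'
# load the usual set if anything is missing (add them to /etc/modules to persist across reboots)
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4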
4. Run the initialization with kubeadm.
# kubeadm init --config=kubeadm-config-191.yml --ignore-preflight-errors=all
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
        [WARNING CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [uy06-04 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local uy06-04 uy06-05 uy08-10 uy08-11 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.5.62 192.168.6.16 192.168.6.17 127.0.0.1 192.168.5.62 192.168.5.63 192.168.5.107 192.168.5.108 30.0.0.1 10.96.0.1]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 90.501851 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node uy06-04 as master by adding a label and a taint
[markmaster] Master uy06-04 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: b1bd11.9ecfaaad5274f9d1
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token b1bd11.9ecfaaad5274f9d1 192.168.5.62:6443 --discovery-token-ca-cert-hash sha256:09438d4384c393880a5ac18e2d3d06b547dae7242061c18c03f0fbb1bad76ade
Verify kube-proxy's mode:
# kubectl exec -it kube-proxy-hr48q -n kube-system -- sh
# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Jan11 ?        00:04:12 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root     29012     0  0 19:24 ?        00:00:00 sh
root     29043 29012  0 19:24 ?        00:00:00 ps -ef
# cat /var/lib/kube-proxy/config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
clusterCIDR: 30.0.0.0/10
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
featureGates: SupportIPVSProxyMode=true
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ipvs    <- here
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpTimeoutMilliseconds: 250ms
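Two other ways to cross-check the mode from the node running this pod. The /proxyMode path on the metrics port (127.0.0.1:10249 per the config above) is an assumption for this release, as the endpoint has moved between versions; grepping the startup logs for the chosen proxier is version-agnostic:

# ask kube-proxy directly (the /proxyMode path is an assumption, see above)
curl -s http://127.0.0.1:10249/proxyMode
# or check which proxier kube-proxy reported using at startup
kubectl logs -n kube-system kube-proxy-hr48q | grep -i proxier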
5. Allow the master nodes to take part in scheduling.
# kubectl taint nodes --all node-role.kubernetes.io/master-
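To confirm the taint is gone, or to make a master non-schedulable again later, something like this works (uy06-04 is the first master from the init output above):

# the Taints field should no longer list node-role.kubernetes.io/master:NoSchedule
kubectl describe node uy06-04 | grep -i taint
# re-add the taint if you change your mind
kubectl taint nodes uy06-04 node-role.kubernetes.io/master=:NoSchedule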
6. Deploy kube-router.
a. Download the YAML file.
# curl -L -O https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml
b. Change the busybox image pull policy to imagePullPolicy: IfNotPresent, because the automatic image pull kept timing out here and the pod could not start (see the sketch after this list).
c. Apply the YAML file.
# kubectl apply -f kubeadm-kuberouter-all-features.yaml
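For reference, the edit from step b can be located as shown below, and pre-pulling busybox on every node also guarantees that IfNotPresent finds a local copy; the grep only points out the lines to change by hand:

# pre-pull the image so IfNotPresent can use the local copy
docker pull busybox
# show where the busybox initContainer and any imagePullPolicy lines sit in the manifest
grep -n -e busybox -e imagePullPolicy kubeadm-kuberouter-all-features.yaml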
7. At this point the core components should all be up and running.
# kubectl get po --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   coredns-65dcdb4cf-mlr9j           1/1       Running   0          22h
kube-system   kube-apiserver-uy06-04            1/1       Running   0          22h
kube-system   kube-controller-manager-uy06-04   1/1       Running   0          22h
kube-system   kube-proxy-hr48q                  1/1       Running   0          22h
kube-system   kube-router-9lh8x                 1/1       Running   0          22h
kube-system   kube-scheduler-uy06-04            1/1       Running   0          22h
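With CoreDNS running, a quick in-cluster resolution check; busybox's nslookup output format varies by version, but the kubernetes Service name should resolve to the cluster IP 10.96.0.1:

# run a throwaway pod and resolve the default kubernetes Service through CoreDNS
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default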
8. Delete kube-proxy and clean up its iptables rules.
# kubectl delete ds kube-proxy -n kube-system
# docker run --privileged --net=host gcr.io/google_containers/kube-proxy-amd64:v1.7.3 kube-proxy --cleanup-iptables
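A quick way to confirm the removal; k8s-app=kube-proxy is the label kubeadm puts on its kube-proxy DaemonSet, so an empty result is what you want here:

# both the DaemonSet and its pods should be gone
kubectl get ds,po -n kube-system -l k8s-app=kube-proxy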
9. For deploying the other two master nodes, adding the worker node to the cluster by pointing it at the apiserver through the VIP, and the rest of the configuration, please refer to the previous two posts.
10. This is what the cluster looks like once the deployment is done.
# kubectl get no
NAME      STATUS    ROLES     AGE       VERSION
uy02-07   Ready     <none>    1d        v1.9.1
uy05-13   Ready     master    2d        v1.9.1
uy08-07   Ready     <none>    1d        v1.9.1
uy08-08   Ready     <none>    1d        v1.9.1
# kubectl get po --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
default       frontend-66d686db4b-jkbdk               1/1       Running   0          1d
default       redis-master-5fd44c4c6-gf4zm            1/1       Running   0          1d
default       redis-slave-74fc6595b4-kp8sl            1/1       Running   0          1d
default       redis-slave-74fc6595b4-shtx6            1/1       Running   0          1d
default       snowflake-5c98868c55-8crlt              1/1       Running   0          3h
default       snowflake-5c98868c55-9psss              1/1       Running   0          1d
default       snowflake-5c98868c55-ccsfc              1/1       Running   0          1d
default       snowflake-5c98868c55-p2tjh              1/1       Running   0          1d
kube-system   coredns-65dcdb4cf-bv95f                 1/1       Running   0          2d
kube-system   coredns-65dcdb4cf-cv48z                 1/1       Running   0          1h
kube-system   coredns-65dcdb4cf-grxkw                 1/1       Running   0          1d
kube-system   coredns-65dcdb4cf-n5kkm                 1/1       Running   0          1d
kube-system   heapster-7bddb97655-5hbsp               1/1       Running   0          1d
kube-system   heapster-7bddb97655-8dqgd               1/1       Running   0          1h
kube-system   heapster-7bddb97655-fd4mb               1/1       Running   0          1d
kube-system   heapster-7bddb97655-gznsm               1/1       Running   0          1d
kube-system   kube-apiserver-uy05-13                  1/1       Running   0          1d
kube-system   kube-apiserver-uy08-07                  1/1       Running   0          1d
kube-system   kube-apiserver-uy08-08                  1/1       Running   0          1d
kube-system   kube-controller-manager-uy05-13         1/1       Running   0          23h
kube-system   kube-controller-manager-uy08-07         1/1       Running   0          23h
kube-system   kube-controller-manager-uy08-08         1/1       Running   0          23h
kube-system   kube-router-57mws                       1/1       Running   0          1d
kube-system   kube-router-j6rks                       1/1       Running   0          2d
kube-system   kube-router-mfwqv                       1/1       Running   0          1d
kube-system   kube-router-txp8p                       1/1       Running   0          1d
kube-system   kube-scheduler-uy05-13                  1/1       Running   0          23h
kube-system   kube-scheduler-uy08-07                  1/1       Running   0          23h
kube-system   kube-scheduler-uy08-08                  1/1       Running   1          23h
kube-system   kubernetes-dashboard-79cb6d66b9-74cf4   1/1       Running   0          3h
kubernator    kubernator-659cf655b6-9prx2             1/1       Running   0          1d
monitoring    alertmanager-main-0                     2/2       Running   0          1h
monitoring    alertmanager-main-1                     2/2       Running   0          1h
monitoring    alertmanager-main-2                     2/2       Running   0          1h
monitoring    grafana-6b67b479d5-zj66c                2/2       Running   0          1h
monitoring    kube-state-metrics-6f7b5c94f-v42tj      2/2       Running   0          1h
monitoring    node-exporter-5b8p2                     1/1       Running   0          1h
monitoring    node-exporter-m85xx                     1/1       Running   0          1h
monitoring    node-exporter-pg2qz                     1/1       Running   0          1h
monitoring    node-exporter-x9lb6                     1/1       Running   0          1h
monitoring    prometheus-k8s-0                        2/2       Running   0          1h
monitoring    prometheus-k8s-1                        2/2       Running   0          1h
monitoring    prometheus-operator-8697c7fff9-dpn9r    1/1       Running   0          1h
# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
# kubectl cluster-info
Kubernetes master is running at https://192.168.6.15:6443
Heapster is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.6.15:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Addendum:
Install ipvsadm to inspect the LVS rules.
# aptitude install -y ipvsadm
# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.5.42:30001 rr
  -> 20.0.2.17:9090               Masq    1      0          497
TCP  192.168.5.42:30211 rr
  -> 20.0.2.12:80                 Masq    1      0          497
TCP  192.168.5.42:30900 rr
  -> 20.0.2.20:9090               Masq    1      0          250
  -> 20.0.4.15:9090               Masq    1      0          250
TCP  192.168.5.42:30902 rr
  -> 20.0.2.19:3000               Masq    1      0          500
TCP  192.168.5.42:30903 rr
  -> 20.0.0.19:9093               Masq    1      0          166
  -> 20.0.2.21:9093               Masq    1      0          166
  -> 20.0.4.14:9093               Masq    1      0          166
TCP  192.168.5.42:31001 rr
  -> 20.0.0.8:80                  Masq    1      0          497
TCP  10.96.0.1:443 rr persistent 10800
  -> 192.168.5.42:6443            Masq    1      2          0
  -> 192.168.5.104:6443           Masq    1      0          0
  -> 192.168.5.105:6443           Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 20.0.0.2:53                  Masq    1      0          0
  -> 20.0.1.2:53                  Masq    1      0          0
  -> 20.0.2.2:53                  Masq    1      0          0
  -> 20.0.4.9:53                  Masq    1      0          0
TCP  10.97.245.128:6379 rr
  -> 20.0.0.7:6379                Masq    1      0          0
TCP  10.98.159.23:80 rr
  -> 20.0.2.17:9090               Masq    1      0          0
TCP  10.101.179.96:8080 rr
  -> 20.0.4.12:8080               Masq    1      0          0
TCP  10.101.209.232:80 rr
  -> 20.0.2.12:80                 Masq    1      0          0
TCP  10.101.255.18:9090 rr
  -> 20.0.2.20:9090               Masq    1      0          0
  -> 20.0.4.15:9090               Masq    1      0          0
TCP  10.104.53.117:8080 rr
  -> 20.0.4.13:8080               Masq    1      0          0
TCP  10.105.5.201:3000 rr
  -> 20.0.2.19:3000               Masq    1      0          0
TCP  10.105.21.201:80 rr
  -> 20.0.0.4:8082                Masq    1      0          0
  -> 20.0.1.3:8082                Masq    1      0          0
  -> 20.0.2.3:8082                Masq    1      0          0
  -> 20.0.4.10:8082               Masq    1      0          0
TCP  10.105.113.2:6379 rr
  -> 20.0.1.8:6379                Masq    1      0          0
  -> 20.0.2.8:6379                Masq    1      0          0
TCP  10.105.159.162:9093 rr
  -> 20.0.0.19:9093               Masq    1      0          0
  -> 20.0.2.21:9093               Masq    1      0          0
  -> 20.0.4.14:9093               Masq    1      0          0
TCP  10.110.48.172:80 rr
  -> 20.0.0.8:80                  Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 20.0.0.2:53                  Masq    1      0          14
  -> 20.0.1.2:53                  Masq    1      0          15
  -> 20.0.2.2:53                  Masq    1      0          15
  -> 20.0.4.9:53                  Masq    1      0          15
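A couple of other ipvsadm views that are handy when checking what kube-router has programmed:

# per-service and per-backend packet/byte counters
ipvsadm -Ln --stats
# connections currently tracked by IPVS
ipvsadm -Lnc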