In the previous post we deployed the Kubernetes master cluster; see: Building a multi-master Kubernetes cluster from binaries [Part 3: configuring the k8s masters and HA].
This post deploys the k8s worker nodes on the following hosts:
k8s-node1: 192.168.80.10
k8s-node2: 192.168.80.11
k8s-node3: 192.168.80.12
All kubeadm and kubectl commands below are executed on k8s-master1.
A Kubernetes worker node runs the following components: docker, flannel, kubelet, and kube-proxy.
For docker and flannel deployment, see: Building a multi-master Kubernetes cluster from binaries [Part 2: configuring the flannel network] and the docker-ce installation guide.
1. Install dependencies
yum install -y epel-release wget conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs
2. Deploy the kubelet component
kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.
On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cadvisor collects and monitors the node's resource usage.
For security, this document only opens the secure port that accepts https requests, authenticates and authorizes every request, and rejects unauthorized access (e.g. from apiserver or heapster).
1) Download and distribute the kubelet binaries
wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kubelet kube-proxy /usr/local/bin
scp kubelet kube-proxy k8s-node2:/usr/local/bin
scp kubelet kube-proxy k8s-node3:/usr/local/bin
2) Create the kubelet bootstrap kubeconfig file (on k8s-master1)
# Create a token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:k8s-master1 \
  --kubeconfig ~/.kube/config)

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig
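The listing in the next step shows three bootstrap tokens, one per system:bootstrappers:k8s-masterN group, and step 4 distributes three kubeconfig files, so the block above is run once per name. A minimal sketch of that repetition, assuming the same file-naming pattern and that the k8s-master1 token has not already been created:

# Hypothetical loop; repeats the block above for each bootstrap group
for name in k8s-master1 k8s-master2 k8s-master3; do
  TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${name} \
    --kubeconfig ~/.kube/config)
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=https://114.67.81.105:8443 \
    --kubeconfig=kubelet-bootstrap-${name}.kubeconfig
  kubectl config set-credentials kubelet-bootstrap \
    --token=${TOKEN} \
    --kubeconfig=kubelet-bootstrap-${name}.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${name}.kubeconfig
  kubectl config use-context default --kubeconfig=kubelet-bootstrap-${name}.kubeconfig
done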
3) View the tokens kubeadm created for each node:
[root@k8s-master1 ~]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
8w6j3n.ruh4ne95icbae4ie   23h   2018-12-21T20:42:29+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master3
e7n0o5.1y8sjblh43z8ftz1   23h   2018-12-21T20:41:53+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master2
ydbwyk.yz8e97df5d5u2o70   22h   2018-12-21T19:28:43+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master1
View the Secrets associated with each token (the bootstrap-token-* entries are the ones just created):
[root@k8s-master1 ~]# kubectl get secrets -n kube-system
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-z2w72              kubernetes.io/service-account-token   3      119m
bootstrap-signer-token-hz8dr                     kubernetes.io/service-account-token   3      119m
bootstrap-token-8w6j3n                           bootstrap.kubernetes.io/token         7      20m
bootstrap-token-e7n0o5                           bootstrap.kubernetes.io/token         7      20m
bootstrap-token-ydbwyk                           bootstrap.kubernetes.io/token         7      93m
certificate-controller-token-bjhbq               kubernetes.io/service-account-token   3      119m
clusterrole-aggregation-controller-token-qkqxg   kubernetes.io/service-account-token   3      119m
cronjob-controller-token-v7vz5                   kubernetes.io/service-account-token   3      119m
daemon-set-controller-token-7khdh                kubernetes.io/service-account-token   3      119m
default-token-nwqsr                              kubernetes.io/service-account-token   3      119m
4) Distribute the bootstrap kubeconfig files
[root@k8s-master1 ~]# scp kubelet-bootstrap-k8s-master1.kubeconfig k8s-node1:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig
[root@k8s-master1 ~]# scp kubelet-bootstrap-k8s-master2.kubeconfig k8s-node2:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig
[root@k8s-master1 ~]# scp kubelet-bootstrap-k8s-master3.kubeconfig k8s-node3:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig
5) Create and distribute the kubelet parameter config file
Since v1.10, some kubelet parameters must be set in a config file; kubelet --help warns:
DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag
Create the kubelet parameter config template (change the address below to the corresponding node's IP):
cat > kubelet.config.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.80.10",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.254.0.2"]
}
EOF
Create and distribute the kubelet config file to each node:
scp kubelet.config.json k8s-node1:/etc/kubernetes/cert/kubelet.config.json
scp kubelet.config.json k8s-node2:/etc/kubernetes/cert/kubelet.config.json
scp kubelet.config.json k8s-node3:/etc/kubernetes/cert/kubelet.config.json
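Note that the plain scp above copies an identical file to every node, while the address field must carry each node's own IP. A small sketch that substitutes the IP before copying (hypothetical helper files; 192.168.80.10 is the address in the template above). The same substitution applies to --hostname-override in the unit file of the next step:

# Hypothetical per-node templating of kubelet.config.json
for ip in 192.168.80.10 192.168.80.11 192.168.80.12; do
  sed "s/192.168.80.10/${ip}/" kubelet.config.json > kubelet.config-${ip}.json
  scp kubelet.config-${ip}.json root@${ip}:/etc/kubernetes/cert/kubelet.config.json
done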
6) Create and distribute the kubelet systemd unit file (change --hostname-override to the corresponding node's IP)
[root@k8s-node1 ~]# cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/cert \
  --kubeconfig=/etc/kubernetes/cert/kubelet.kubeconfig \
  --config=/etc/kubernetes/cert/kubelet.config.json \
  --hostname-override=192.168.80.10 \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 \
  --allow-privileged=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Notes on the flags:

--hostname-override: if this option is set, kube-proxy must be set to the same value, otherwise the Node cannot be found.
--bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the username and token in this file to send TLS Bootstrapping requests to kube-apiserver.
--cert-dir: after the CSR is approved, the certificate and private key files are created in this directory and then referenced by the --kubeconfig file.

Create and distribute the kubelet systemd unit file to each node:
scp /etc/systemd/system/kubelet.service k8s-node2:/etc/systemd/system/kubelet.service
scp /etc/systemd/system/kubelet.service k8s-node3:/etc/systemd/system/kubelet.service
7) Bootstrap Token Auth and granting permissions
On startup, kubelet checks whether the file given by --kubeconfig exists; if not, it uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.
When kube-apiserver receives the CSR, it authenticates the embedded token (the one created earlier with kubeadm); on success it sets the request's user to system:bootstrap:<token-id> and group to system:bootstrappers. This flow is called Bootstrap Token Auth.
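You can see those pieces (the token-id, the extra groups) by decoding the bootstrap token Secret; a quick check, assuming the ydbwyk token from the listing above:

# Inspect the Secret behind one of the bootstrap tokens
kubectl -n kube-system get secret bootstrap-token-ydbwyk -o yaml
# data fields are base64-encoded; for example, decode token-id:
kubectl -n kube-system get secret bootstrap-token-ydbwyk \
  -o jsonpath='{.data.token-id}' | base64 -d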
By default, this user and group have no permission to create CSRs, so kubelet fails to start with errors like:
sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 kube-node1 kubelet[26986]: F0506 06:42:36.314378   26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
The fix is to create a clusterrolebinding that binds group system:bootstrappers to clusterrole system:node-bootstrapper:
[root@k8s-master1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
8) Start the kubelet service
mkdir -p /var/log/kubernetes && mkdir -p /var/lib/kubelet
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
After starting, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates a TLS client certificate and private key for the kubelet, which are referenced by the --kubeconfig file.
Note: kube-controller-manager must be started with the --cluster-signing-cert-file and --cluster-signing-key-file flags, otherwise no certificate or private key is issued for the TLS Bootstrap.
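For reference, a sketch of what those flags look like in the kube-controller-manager unit from the master setup, assuming the cluster CA lives under /etc/kubernetes/cert as elsewhere in this series:

# Excerpt (assumed paths) from kube-controller-manager's ExecStart
ExecStart=/usr/local/bin/kube-controller-manager \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  ...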
At this point the kubelet process is running, but it is not yet listening on its ports; the steps below are required!
9) Approve the kubelet CSR requests
CSR requests can be approved manually or automatically. The automatic approach is recommended, because since v1.8 the certificates generated after a CSR is approved can be rotated automatically.
i) Manually approve CSR requests
View the CSR list:
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-P7XcQAc2yNlXn1pUmQFxXNCdGyyt8ccVuW3bmoUZiK4   30m   system:bootstrap:e7n0o5   Pending
node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM   79m   system:bootstrap:ydbwyk   Pending
node-csr-u2sVzVkFYnMxPIYWjXHbqRJROtTZBYzA1s2vATPLzyo   30m   system:bootstrap:8w6j3n   Pending
Approve a CSR:
[root@k8s-master1 ~]# kubectl certificate approve node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
certificatesigningrequest.certificates.k8s.io "node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM" approved
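With several nodes pending at once, approving them one by one gets tedious; a one-liner sketch that approves every CSR in the list:

kubectl get csr -o name | xargs -n 1 kubectl certificate approve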
View the approve result:
[root@k8s-master1 ~]# kubectl describe csr node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
Name:               node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Thu, 20 Dec 2018 19:55:39 +0800
Requesting User:    system:bootstrap:ydbwyk
Status:             Approved,Issued
Subject:
        Common Name:    system:node:192.168.80.10
        Serial Number:
        Organization:   system:nodes
Events:  <none>
Requesting User: the user that submitted the CSR; kube-apiserver authenticates and authorizes it.
Subject: the certificate information requested for signing.

ii) Automatically approve CSR requests
Create three ClusterRoleBindings, used respectively to auto-approve client certificates and to renew client and server certificates:
[root@k8s-master1 ~]# cat > csr-crb.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io
EOF
Apply the configuration:
[root@k8s-master1 ~]# kubectl apply -f csr-crb.yaml
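A quick sanity check that the objects exist, using the names defined in csr-crb.yaml:

kubectl get clusterrolebinding auto-approve-csrs-for-group node-client-cert-renewal node-server-cert-renewal
kubectl get clusterrole approve-node-server-renewal-csr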
10) Check kubelet status
After a while (1-10 minutes), the CSRs of all three nodes are automatically approved:
[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-P7XcQAc2yNlXn1pUmQFxXNCdGyyt8ccVuW3bmoUZiK4   35m   system:bootstrap:e7n0o5   Approved,Issued
node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM   84m   system:bootstrap:ydbwyk   Approved,Issued
node-csr-u2sVzVkFYnMxPIYWjXHbqRJROtTZBYzA1s2vATPLzyo   35m   system:bootstrap:8w6j3n   Approved,Issued
All nodes are Ready:
[root@k8s-master1 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.80.10   Ready    <none>   69m   v1.12.3
192.168.80.11   Ready    <none>   36m   v1.12.3
192.168.80.12   Ready    <none>   36m   v1.12.3
kube-controller-manager has generated a kubeconfig file and key pair for each node:
[root@k8s-node1 ~]# ll /etc/kubernetes/cert/
total 40
-rw------- 1 root root 1675 Dec 20 19:10 ca-key.pem
-rw-r--r-- 1 root root 1367 Dec 20 19:10 ca.pem
-rw------- 1 root root 1679 Dec 20 19:10 flanneld-key.pem
-rw-r--r-- 1 root root 1399 Dec 20 19:10 flanneld.pem
-rw------- 1 root root 2170 Dec 20 20:43 kubelet-bootstrap.kubeconfig
-rw------- 1 root root 1277 Dec 20 20:43 kubelet-client-2018-12-20-20-43-59.pem
lrwxrwxrwx 1 root root   59 Dec 20 20:43 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2018-12-20-20-43-59.pem
-rw-r--r-- 1 root root  800 Dec 20 20:18 kubelet.config.json
-rw-r--r-- 1 root root 2185 Dec 20 20:43 kubelet.crt
-rw------- 1 root root 1675 Dec 20 20:43 kubelet.key
-rw------- 1 root root 2310 Dec 20 20:43 kubelet.kubeconfig
11) APIs served by the kubelet
After startup, kubelet listens on several ports to serve requests from kube-apiserver and other components:
[root@k8s-node1 ~]# netstat -lnpt|grep kubelet
tcp   0   0 127.0.0.1:41980       0.0.0.0:*   LISTEN   7891/kubelet
tcp   0   0 127.0.0.1:10248       0.0.0.0:*   LISTEN   7891/kubelet
tcp   0   0 192.168.80.10:10250   0.0.0.0:*   LISTEN   7891/kubelet
For example, when you run kubectl exec -it nginx-ds-5rmws -- sh, kube-apiserver sends the kubelet a request like:
POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1
kubelet serves these https requests on port 10250; for details see: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L434:3
Because anonymous authentication is disabled and webhook authorization is enabled, every request to the https API on port 10250 must be authenticated and authorized.
The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs:
[root@k8s-master1 ~]# kubectl describe clusterrole system:kubelet-api-admin
Name:         system:kubelet-api-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/log      []                 []              [*]
  nodes/metrics  []                 []              [*]
  nodes/proxy    []                 []              [*]
  nodes/spec     []                 []              [*]
  nodes/stats    []                 []              [*]
  nodes          []                 []              [get list watch proxy]
12) kubelet API authentication and authorization
The kubelet config file kubelet.config.json sets the following authentication parameters: x509 client-certificate authentication with clientCAFile pointing to the cluster CA, webhook (bearer token) authentication enabled, and anonymous access disabled.

It also sets the authorization mode to Webhook.
When a request arrives, kubelet authenticates it against the CA in clientCAFile, or checks whether the bearer token is valid. If both fail, the request is rejected with Unauthorized:
[root@k8s-node1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.80.10:10250/metrics
Unauthorized
[root@k8s-node1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.80.10:10250/metrics
Unauthorized
After authentication succeeds, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has permission to operate on the resource (RBAC).
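For illustration, a minimal sketch of the kind of SubjectAccessReview kubelet submits; you can post one yourself to see the RBAC decision (the user below matches the Forbidden example that follows):

cat > sar.yaml <<EOF
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: system:kube-controller-manager
  resourceAttributes:
    verb: get
    resource: nodes
    subresource: metrics
EOF
kubectl create -f sar.yaml -o yaml    # status.allowed shows the decision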
Certificate authentication and authorization:
# A certificate without sufficient permissions:
$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.80.10:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

# The admin certificate with full permissions, created when deploying the kubectl CLI:
$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem https://192.168.80.10:10250/metrics|head
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0
The --cacert, --cert and --key arguments must be file paths; the ./ in ./admin.pem above cannot be omitted, otherwise the request returns 401 Unauthorized.

Bearer token authentication and authorization:
Create a ServiceAccount and bind it to ClusterRole system:kubelet-api-admin so that it can call the kubelet API:
kubectl create sa kubelet-api-test
kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
echo ${TOKEN}

$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.80.10:10250/metrics|head
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0
Note: because kubelet.config.json disables anonymous access, every call to the kubelet API must present either a client certificate or a bearer token that passes the checks above.
3. Deploy the kube-proxy component
kube-proxy runs on all worker nodes. It watches apiserver for changes to services and endpoints and creates routing rules to load-balance service traffic.
This document deploys kube-proxy in ipvs mode.
1) Create the kube-proxy certificate
[root@k8s-master1 cert]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
CN: specifies that the certificate User is system:kube-proxy.
The predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call kube-apiserver's Proxy-related APIs.

Generate the certificate and private key:
[root@k8s-master1 cert]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2) Create and distribute the kubeconfig file
[root@k8s-master1 cert]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 cert]# kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 cert]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
[root@k8s-master1 cert]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
--embed-certs=true embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without it, only the certificate file paths are written).

Distribute the kubeconfig file:
[root@k8s-master1 cert]# scp kube-proxy.kubeconfig k8s-node1:/etc/kubernetes/cert/
[root@k8s-master1 cert]# scp kube-proxy.kubeconfig k8s-node2:/etc/kubernetes/cert/
[root@k8s-master1 cert]# scp kube-proxy.kubeconfig k8s-node3:/etc/kubernetes/cert/
3) Create the kube-proxy config file
Since v1.10, some kube-proxy parameters can be set in a config file. You can generate one with the --write-config-to flag, or consult the kubeproxyconfig type definitions: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go
Create the kube-proxy config template:
[root@k8s-master1 cert]# cat > kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.80.10
clientConnection:
  kubeconfig: /etc/kubernetes/cert/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 192.168.80.10:10256
hostnameOverride: k8s-node1
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.80.10:10249
mode: "ipvs"
EOF
bindAddress: the listen address.
clientConnection.kubeconfig: the kubeconfig file used to connect to apiserver.
clusterCIDR: kube-proxy uses this to distinguish traffic inside the cluster from traffic outside it; only when --cluster-cidr or --masquerade-all is set does kube-proxy SNAT requests to Service IPs.
hostnameOverride: must match the kubelet's value, otherwise kube-proxy will not find its Node after starting and will not create any ipvs rules.
mode: use ipvs mode.

Create and distribute the kube-proxy config file to each node (change the addresses and hostnameOverride per node):
[root@k8s-master1 cert]# scp kube-proxy.config.yaml k8s-node1:/etc/kubernetes/cert/
[root@k8s-master1 cert]# scp kube-proxy.config.yaml k8s-node2:/etc/kubernetes/cert/
[root@k8s-master1 cert]# scp kube-proxy.config.yaml k8s-node3:/etc/kubernetes/cert/
4) Create and distribute the kube-proxy systemd unit file
[root@k8s-node1 cert]# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/cert/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/lib/kube-proxy/log \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Distribute the kube-proxy systemd unit file:
[root@k8s-master1 cert]# scp /etc/systemd/system/kube-proxy.service k8s-node1:/etc/systemd/system/kube-proxy.service
[root@k8s-master1 cert]# scp /etc/systemd/system/kube-proxy.service k8s-node2:/etc/systemd/system/kube-proxy.service
[root@k8s-master1 cert]# scp /etc/systemd/system/kube-proxy.service k8s-node3:/etc/systemd/system/kube-proxy.service
5) Start the kube-proxy service
[root@k8s-node1 cert]# mkdir -p /var/lib/kube-proxy/log
[root@k8s-node1 cert]# systemctl daemon-reload
[root@k8s-node1 cert]# systemctl enable kube-proxy
[root@k8s-node1 cert]# systemctl restart kube-proxy
6) Check the startup result
[root@k8s-node1 cert]# systemctl status kube-proxy|grep Active
Make sure the status is active (running); otherwise inspect the logs to find the cause:
journalctl -u kube-proxy
Check the listening ports:
[root@k8s-node1 cert]# netstat -lnpt|grep kube-proxy
tcp   0   0 192.168.80.10:10256   0.0.0.0:*   LISTEN   9617/kube-proxy
tcp   0   0 192.168.80.10:10249   0.0.0.0:*   LISTEN   9617/kube-proxy
7) View the ipvs routing rules
[root@k8s-node1 cert]# yum install ipvsadm
[root@k8s-node1 cert]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.80.7:6443            Masq    1      0          0
  -> 192.168.80.8:6443            Masq    1      0          0
  -> 192.168.80.9:6443            Masq    1      0          0
As shown, all requests to the kubernetes cluster IP on port 443 are forwarded to kube-apiserver on port 6443.
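ipvsadm can also show a single virtual service instead of the whole table; for the kubernetes cluster IP above:

ipvsadm -ln -t 10.254.0.1:443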
Congratulations! The worker nodes are now fully deployed.
4. Verify cluster functionality
1) Check node status
[root@k8s-master1 cert]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.80.10   Ready    <none>   15h   v1.12.3
192.168.80.11   Ready    <none>   14h   v1.12.3
192.168.80.12   Ready    <none>   14h   v1.12.3
Everything is normal when all nodes are Ready.
2) Create an nginx web test manifest
[root@k8s-master1 ~]# cat nginx-web.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    tier: frontend
spec:
  type: NodePort
  selector:
    tier: frontend
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-con
  labels:
    tier: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx-pod
        image: nginx
        ports:
        - containerPort: 80
Apply nginx-web.yml:
[root@k8s-master1 ~]# kubectl create -f nginx-web.yml
3) Check Pod IP connectivity from each Node
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-con-594b8d6b48-9p9sf   1/1     Running   0          37s   172.30.70.2   192.168.80.12   <none>
nginx-con-594b8d6b48-rxzwx   1/1     Running   0          37s   172.30.67.2   192.168.80.11   <none>
nginx-con-594b8d6b48-zd9g7   1/1     Running   0          37s   172.30.6.2    192.168.80.10   <none>
As shown, the nginx Pod IPs are 172.30.70.2, 172.30.67.2 and 172.30.6.2. Ping these three IPs from every Node to check connectivity:
[root@k8s-node1 cert]# ping 172.30.6.2
PING 172.30.6.2 (172.30.6.2) 56(84) bytes of data.
64 bytes from 172.30.6.2: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 172.30.6.2: icmp_seq=2 ttl=64 time=0.053 ms
[root@k8s-node1 cert]# ping 172.30.67.2
PING 172.30.67.2 (172.30.67.2) 56(84) bytes of data.
64 bytes from 172.30.67.2: icmp_seq=1 ttl=63 time=0.467 ms
64 bytes from 172.30.67.2: icmp_seq=2 ttl=63 time=0.425 ms
[root@k8s-node1 cert]# ping 172.30.70.2
PING 172.30.70.2 (172.30.70.2) 56(84) bytes of data.
64 bytes from 172.30.70.2: icmp_seq=1 ttl=63 time=0.562 ms
64 bytes from 172.30.70.2: icmp_seq=2 ttl=63 time=0.451 ms
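Instead of pinging each address by hand, a small sketch that fetches the Pod IPs via the tier: frontend label from nginx-web.yml (run where kubectl is configured, e.g. on k8s-master1):

# Ping every pod of the test deployment twice
for ip in $(kubectl get pod -l tier=frontend -o jsonpath='{.items[*].status.podIP}'); do
  ping -c 2 ${ip}
done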
4) View the service's cluster IP
[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP        17h
nginx-web    NodePort    10.254.88.134   <none>        80:30164/TCP   47m
5) Verify service reachability
# 1. Access the app from any other host on the LAN via nodeIP:nodePort
# (the node IPs are private, so we test from another LAN host)
[root@etcd1 ~]# curl -I 192.168.80.10:30164
HTTP/1.1 200 OK
Server: nginx/1.15.7
Date: Fri, 21 Dec 2018 04:32:58 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Nov 2018 12:31:56 GMT
Connection: keep-alive
ETag: "5bfd393c-264"
Accept-Ranges: bytes
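The NodePort (30164 in the svc listing above) is served identically on every node; a quick sketch that checks all three from a LAN host:

# Expect HTTP/1.1 200 OK from each node
for node in 192.168.80.10 192.168.80.11 192.168.80.12; do
  curl -sI http://${node}:30164 | head -1
done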
# 2. Access the app via the cluster IP from a host on the flannel network
[root@k8s-node1 cert]# curl -I 10.254.88.134
HTTP/1.1 200 OK
Server: nginx/1.15.7
Date: Fri, 21 Dec 2018 04:35:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Nov 2018 12:31:56 GMT
Connection: keep-alive
ETag: "5bfd393c-264"
Accept-Ranges: bytes
Both requests succeed with status 200; the cluster is functioning correctly.