Kubernetes Binary Deployment Step by Step (3): Component Installation (Single Master)
Preface
In the previous two articles we built the base environment, including the etcd cluster (with certificate creation), the flannel network, and the Docker engine. In this article we complete the single-master, binary-based Kubernetes cluster deployment across the three servers.
Configuration on the master node
1. Create the working directory
[root@master01 k8s]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
2. Deploy the apiserver component
2.1製做apiserver證書
2.1.1 Create the apiserver certificate directory and write the certificate-generation script
[root@master01 k8s]# mkdir k8s-cert
[root@master01 k8s]# cd k8s-cert/
[root@master01 k8s-cert]# cat k8s-cert.sh
# The structure of these cfssl JSON files was explained when the etcd cluster
# was built, so it is not repeated here. Pay attention to the address plan for
# the server certificate:
#   192.168.0.128  master01
#   192.168.0.131  master02
#   192.168.0.100  floating VIP
#   192.168.0.132  load balancer
#   192.168.0.133  load balancer
# (JSON does not allow comments, so the addresses are annotated here rather
# than inside the hosts array.)
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.0.128",
    "192.168.0.131",
    "192.168.0.100",
    "192.168.0.132",
    "192.168.0.133",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2.1.2 Run the script and copy the certificates into the ssl directory of the working directory created earlier
[root@master01 k8s-cert]# bash k8s-cert.sh
# List the files produced by the script
[root@master01 k8s-cert]# ls
admin.csr       admin.pem       ca-csr.json  k8s-cert.sh          kube-proxy-key.pem  server-csr.json
admin-csr.json  ca-config.json  ca-key.pem   kube-proxy.csr       kube-proxy.pem      server-key.pem
admin-key.pem   ca.csr          ca.pem       kube-proxy-csr.json  server.csr
# Copy the certificates the apiserver needs into the working directory
[root@master01 k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master01 k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
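A missing entry in the hosts list is a common cause of TLS handshake failures later, so it is worth verifying that the generated server certificate actually carries every planned SAN. A minimal sketch using openssl (the `cert_has_san` helper is our own, not part of the deployment scripts; run it in the k8s-cert directory):

```shell
#!/usr/bin/env bash
# cert_has_san CERT NAME: succeed if CERT lists NAME (IP or DNS) among its
# Subject Alternative Names.
cert_has_san() {
    local cert=$1 name=$2
    openssl x509 -in "$cert" -noout -text 2>/dev/null \
        | grep -A1 "Subject Alternative Name" \
        | grep -qw "$name"
}

# Check the planned addresses against the freshly generated server.pem
for addr in 10.0.0.1 127.0.0.1 192.168.0.128 192.168.0.131 192.168.0.100 \
            192.168.0.132 192.168.0.133 kubernetes kubernetes.default.svc.cluster.local; do
    cert_has_san server.pem "$addr" && echo "OK      $addr" || echo "MISSING $addr"
done
```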
2.2 Unpack the Kubernetes tarball and copy the command-line tools into the bin directory of the working directory
Package download:
Link: https://pan.baidu.com/s/1COp94_Y47TU0G8-QSYb5Nw
Extraction code: ftzq
[root@master01 k8s]# ls
apiserver.sh           controller-manager.sh            etcd-v3.3.10-linux-amd64              k8s-cert    master.zip
cfssl.sh               etcd-cert                        etcd-v3.3.10-linux-amd64.tar.gz       kubernetes-server-linux-amd64.tar.gz  scheduler.sh
[root@master01 k8s]# tar zxf kubernetes-server-linux-amd64.tar.gz
[root@master01 k8s]# ls
apiserver.sh           etcd-cert                        k8s-cert                              master.zip
cfssl.sh               etcd-v3.3.10-linux-amd64         kubernetes                            scheduler.sh
controller-manager.sh  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master01 k8s]# ls kubernetes/ -R
kubernetes/:
addons  kubernetes-src.tar.gz  LICENSES  server

kubernetes/addons:

kubernetes/server:
bin

kubernetes/server/bin:
apiextensions-apiserver              kube-apiserver.docker_tag            kube-proxy
cloud-controller-manager             kube-apiserver.tar                   kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager              kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag   kube-scheduler
hyperkube                            kube-controller-manager.tar          kube-scheduler.docker_tag
kubeadm                              kubectl                              kube-scheduler.tar
kube-apiserver                       kubelet                              mounter

# Copy the command-line tools we need into the bin directory of the working directory
[root@master01 k8s]# cd kubernetes/server/bin/
[root@master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
2.3製做token令牌
# Generate a random serial number and write it into token.csv
[root@master01 k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
7f42570ec314322c3d629868855d406f
[root@master01 k8s]# cat /opt/kubernetes/cfg/token.csv
7f42570ec314322c3d629868855d406f,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# Comma-separated fields: token, user name, UID, role
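The two manual steps above can be wrapped into one small script that generates the token and writes token.csv in one go. A sketch (it writes to the current directory by default; in this walkthrough the real file lives at /opt/kubernetes/cfg/token.csv):

```shell
#!/usr/bin/env bash
# Generate a random bootstrap token and write it to a token.csv file.
# Default output path is ./token.csv; pass the real path as the first argument.
TOKEN_FILE=${1:-token.csv}

# 16 random bytes rendered as 32 lowercase hex characters
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

# Fields, comma-separated: token, user name, UID, role
cat > "$TOKEN_FILE" <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

echo "wrote ${TOKEN_FILE}: ${BOOTSTRAP_TOKEN}"
```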
2.4 Start the apiserver service
Write the apiserver script:
[root@master01 k8s]# vim apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1
ETCD_SERVERS=$2

# Generate the kube-apiserver configuration file in the k8s working directory
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

# Generate the systemd unit file
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Start the apiserver component
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
[root@master01 k8s]# bash apiserver.sh 192.168.0.128 https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379
# Check that the process started successfully
[root@master01 k8s]# ps aux | grep kube
root  56487 36.9 16.6 397952 311740 ?  Ssl  19:42  0:07 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379 --bind-address=192.168.0.128 --secure-port=6443 --advertise-address=192.168.0.128 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root  56503  0.0  0.0 112676    984 pts/4  R+   19:43  0:00 grep --color=auto kube
Check the generated configuration file:
[root@master01 k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.0.128:2379,https://192.168.0.129:2379,https://192.168.0.130:2379 \
--bind-address=192.168.0.128 \
--secure-port=6443 \
--advertise-address=192.168.0.128 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Check the listening HTTPS port:
[root@master01 k8s]# netstat -natp | grep 6443
tcp   0   0 192.168.0.128:6443    0.0.0.0:*             LISTEN        56487/kube-apiserve
tcp   0   0 192.168.0.128:6443    192.168.0.128:45162   ESTABLISHED   56487/kube-apiserve
tcp   0   0 192.168.0.128:45162   192.168.0.128:6443    ESTABLISHED   56487/kube-apiserve
[root@master01 k8s]# netstat -natp | grep 8080
tcp   0   0 127.0.0.1:8080        0.0.0.0:*             LISTEN        56487/kube-apiserve
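kube-apiserver takes a few seconds to come up, so when the bring-up is scripted it helps to poll instead of eyeballing netstat. A small sketch using bash's built-in /dev/tcp redirection, which needs no extra tools (the function name and the default timeout are our own choices):

```shell
#!/usr/bin/env bash
# wait_for_port HOST PORT [TIMEOUT_SECONDS]: return 0 once HOST:PORT accepts
# TCP connections, or 1 if it is still refusing after roughly TIMEOUT seconds.
wait_for_port() {
    local host=$1 port=$2 timeout=${3:-30} waited=0
    # Each attempt opens (and, via the subshell, immediately closes) fd 3.
    while ! (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
        (( waited++ >= timeout )) && return 1
        sleep 1
    done
    return 0
}

# Example: block until the secure port is up before continuing
# wait_for_port 192.168.0.128 6443 60 && echo "apiserver is up"
```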
3. Start the scheduler service
[root@master01 k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
The scheduler.sh script is as follows:
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
Check the process:
[root@master01 k8s]# ps aux | grep kube-scheudler
root  56652  0.0  0.0 112676  988 pts/4  S+  19:49  0:00 grep --color=auto kube-scheudler
(Note the typo in the pattern, "kube-scheudler", which is why only the grep process itself shows up; with the correct spelling "kube-scheduler" the scheduler process would be listed as well.)
4. Start the controller-manager service
Start it via the script:
[root@master01 k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
The script content is as follows:
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
Now check the status of the master components:
[root@master01 k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
A Healthy status for every component means the configuration so far is correct.
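When the bring-up is automated, the same check can act as a gate before the node deployment steps. A sketch (the function name is ours; it assumes `kubectl` is on the PATH and talking to this cluster) that succeeds only if every component reports Healthy:

```shell
#!/usr/bin/env bash
# all_components_healthy: succeed only if every line of `kubectl get cs`
# reports STATUS Healthy; fail if the command errors or returns nothing.
all_components_healthy() {
    local out
    out=$(kubectl get cs --no-headers 2>/dev/null) || return 1
    [ -n "$out" ] || return 1
    # The STATUS value is the second whitespace-separated column
    printf '%s\n' "$out" | awk '$2 != "Healthy" {bad=1} END {exit bad}'
}

# Example: all_components_healthy && echo "control plane is ready"
```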
Next comes the deployment on the worker nodes.
Node deployment
Some files and command-line tools must first be pushed from the master to the nodes, so several of them are prepared on the master and copied over remotely.
1. Copy kubelet and kube-proxy from the master to the nodes
[root@master01 bin]# pwd
/root/k8s/kubernetes/server/bin
[root@master01 bin]# scp kubelet kube-proxy root@192.168.0.129:/opt/kubernetes/bin/
root@192.168.0.129's password:
kubelet                                      100%  168MB  84.2MB/s   00:02
kube-proxy                                   100%   48MB 104.6MB/s   00:00
[root@master01 bin]# scp kubelet kube-proxy root@192.168.0.130:/opt/kubernetes/bin/
root@192.168.0.130's password:
kubelet                                      100%  168MB 123.6MB/s   00:01
kube-proxy                                   100%   48MB 114.6MB/s   00:00
2. Create the configuration directory on the master and write the configuration script
[root@master01 k8s]# mkdir kubeconfig
[root@master01 k8s]# cd kubeconfig/
[root@master01 kubeconfig]# cat kubeconfig
APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
# (the token must match the one written to token.csv earlier)
kubectl config set-credentials kubelet-bootstrap \
  --token=7f42570ec314322c3d629868855d406f \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Set the environment variable:
[root@master01 kubeconfig]# vim /etc/profile
# Append this line to the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@master01 kubeconfig]# source /etc/profile
[root@master01 kubeconfig]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/opt/kubernetes/bin/
# Check the cluster status
[root@master01 kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
controller-manager   Healthy   ok
3. Generate the configuration files
[root@master01 k8s-cert]# cd -
/root/k8s/kubeconfig
[root@master01 kubeconfig]# bash kubeconfig 192.168.0.128 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
# List the two generated configuration files
[root@master01 kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
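For reference, the set-cluster/set-credentials/set-context calls above all write into one file; the generated bootstrap.kubeconfig is shaped roughly like this (values abbreviated, with the CA embedded as base64 because of --embed-certs=true):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64 of ca.pem>   # from --embed-certs=true
    server: https://192.168.0.128:6443
users:
- name: kubelet-bootstrap
  user:
    token: 7f42570ec314322c3d629868855d406f
contexts:
- name: default
  context:
    cluster: kubernetes
    user: kubelet-bootstrap
current-context: default
```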
4. Copy the two configuration files to the nodes
[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.0.129:/opt/kubernetes/cfg/
root@192.168.0.129's password:
bootstrap.kubeconfig                         100% 2166   1.2MB/s   00:00
kube-proxy.kubeconfig                        100% 6268   8.1MB/s   00:00
[root@master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.0.130:/opt/kubernetes/cfg/
root@192.168.0.130's password:
bootstrap.kubeconfig                         100% 2166   1.4MB/s   00:00
kube-proxy.kubeconfig                        100% 6268   7.4MB/s   00:00
5. Create the bootstrap role binding that authorizes kubelets to request certificate signing from the apiserver; this step is critical
[root@master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
# The result is as follows
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
Operations on the nodes
Start the kubelet service on both nodes:
[root@node01 opt]# bash kubelet.sh 192.168.0.129    # use 192.168.0.130 on the second node
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 opt]# ps aux | grep kubelet
root  73575  1.0  1.0 535312 42456 ?  Ssl  20:14  0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.0.129 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root  73651  0.0  0.0 112676   984 pts/3  R+   20:15  0:00 grep --color=auto kubelet
Verify on the master node:
[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   8s    kubelet-bootstrap   Pending
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   24s   kubelet-bootstrap   Pending
PS: Pending means the request is waiting for the cluster to issue a certificate to that node.
[root@master01 kubeconfig]# kubectl certificate approve node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk
certificatesigningrequest.certificates.k8s.io/node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk approved
[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   3m46s   kubelet-bootstrap   Approved,Issued
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   4m2s    kubelet-bootstrap   Pending
PS: Approved,Issued means the node has been admitted to the cluster.
Check the cluster nodes: node02 has joined successfully.
[root@master01 kubeconfig]# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
192.168.0.130   Ready    <none>   69s   v1.12.3
Now approve node01 as well:
[root@master01 kubeconfig]# kubectl certificate approve node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg
certificatesigningrequest.certificates.k8s.io/node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg approved
[root@master01 kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-gt0pU-SbuA0k8z53lmir1q6m-i7Owo3JC8eKm2oujUk   6m20s   kubelet-bootstrap   Approved,Issued
node-csr-i4n0MgQnmFT7NT_VszB8DXohWN1ilhJKnyQJq_9rodg   6m36s   kubelet-bootstrap   Approved,Issued
[root@master01 kubeconfig]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
192.168.0.129   Ready    <none>   7s      v1.12.3
192.168.0.130   Ready    <none>   2m55s   v1.12.3
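With more nodes, approving each CSR by its full name gets tedious. The loop below simply drives the same `kubectl certificate approve` command for every request still in the Pending state (the function name is our own):

```shell
#!/usr/bin/env bash
# approve_pending_csrs: approve every CSR whose CONDITION column is Pending.
approve_pending_csrs() {
    kubectl get csr --no-headers 2>/dev/null \
        | awk '$NF == "Pending" {print $1}' \
        | while read -r csr; do
              kubectl certificate approve "$csr"
          done
}

# Example: approve_pending_csrs
```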
Start the kube-proxy service on both nodes:
[root@node01 opt]# bash proxy.sh 192.168.0.129
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
# Check the kube-proxy service status
[root@node01 opt]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-05-04 20:45:26 CST; 1min 9s ago
 Main PID: 77325 (kube-proxy)
   Memory: 7.6M
   CGroup: /system.slice/kube-proxy.service
           ‣ 77325 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168....
At this point the single-master Kubernetes cluster is fully configured; the whole process was split across three step-by-step articles.
Finally, here are the configuration files on the cluster's nodes:
node01:
[root@node01 cfg]# cat kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.129 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node01 cfg]# cat kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.129 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

node02:
[root@node02 cfg]# cat kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.130 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@node02 cfg]# cat kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.130 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"