Contents:
1. Lab environment
2. Single-master cluster deployment
3. Multi-master cluster deployment
1. Lab environment:
Everything below builds on the environment deployed in the previous post: http://www.javashuo.com/article/p-biqisrrb-oa.html
2. Single-master cluster deployment
Single-master cluster architecture diagram:
The following is the list of self-signed SSL certificates:
I. First, the three core components we need to deploy on the Master:
kube-apiserver: the unified entry point to the cluster and the coordinator of all other components; every create, delete, update, query, and watch operation on resource objects goes through the APIServer before being persisted to etcd;
kube-controller-manager: handles the cluster's routine background tasks; each resource type has a corresponding controller, and controller-manager is responsible for managing all of these controllers;
kube-scheduler: selects a Node for each newly created Pod according to the scheduling algorithm; it can be deployed anywhere, either on the same node as the other components or on a separate one.
Workflow: configuration file -----> manage the component with systemd -----> start it (a minimal unit-file sketch of this pattern follows below)
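To make that workflow concrete, here is a minimal sketch of what a systemd unit for one of these components looks like. This is an assumption about what the deployment scripts used later (such as apiserver.sh) generate, not their literal output; the paths match the /opt/kubernetes layout used in this post:

[root@localhost k8s]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server

[Service]
# The EnvironmentFile holds KUBE_APISERVER_OPTS (the flags shown later in this post)
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

With such a unit in place, the "start" step is just: systemctl daemon-reload && systemctl start kube-apiserver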
Deployment begins:
Next, working on the master, generate the api-server certificates:
Upload the master.zip package (downloaded on the host machine) to /root/k8s/ and unpack it:

[root@localhost k8s]# unzip master.zip
[root@localhost k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@localhost k8s]# mkdir k8s-cert        //directory for the apiserver self-signed certificates
[root@localhost k8s]# cd k8s-cert/
[root@localhost k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
# Note: the // annotations in the hosts list below only indicate which machine each
# address belongs to; remove them from the real file, since JSON does not allow comments.
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.109.138",      //first master
    "192.168.109.230",      //second master
    "192.168.109.100",      //VIP (virtual IP)
    "192.168.109.133",      //first load-balancer server (master)
    "192.168.109.137",      //second load-balancer server (backup)
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Now generate the k8s certificates:

[root@localhost k8s-cert]# bash k8s-cert.sh
[root@localhost k8s-cert]# ls *pem        //list the certificates; there should be 8 files here
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
[root@localhost k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@localhost k8s-cert]# cd ..
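Before moving on, it is worth confirming that the server certificate really contains every address listed in server-csr.json; a missing SAN is a common cause of TLS errors later when a second master or the VIP is brought up. A quick check with openssl (the output below is abbreviated and illustrative):

[root@localhost k8s-cert]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, ..., IP Address:10.0.0.1, IP Address:127.0.0.1, IP Address:192.168.109.138, IP Address:192.168.109.230, ...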
[root@localhost k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz        //unpack the server tarball
[root@localhost k8s]# cd /root/k8s/kubernetes/server/bin/
//Copy the key binaries:
[root@localhost bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@localhost bin]# cd /root/k8s/
//Generate a random token with the following command:
[root@localhost k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
1232eb0133309f6ccde54802cc0b3ebe
[root@localhost k8s]# vim /opt/kubernetes/cfg/token.csv
1232eb0133309f6ccde54802cc0b3ebe,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
//The fields are: token, user name, UID, role.
//With the binaries, token, and certificates all in place, start the apiserver:
[root@localhost k8s]# bash apiserver.sh 192.168.109.138 https://192.168.109.138:2379,https://192.168.109.131:2379,https://192.168.109.132:2379
//Check that the process started successfully:
[root@localhost k8s]# ps aux | grep kube
//Inspect the generated configuration file:
[root@localhost k8s]# cat /opt/kubernetes/cfg/kube-apiserver
//Check that both listening ports are up:
[root@localhost k8s]# netstat -natp | grep 6443
[root@localhost k8s]# netstat -natp | grep 8080
//Start the scheduler service:
[root@localhost k8s]# ./scheduler.sh 127.0.0.1
[root@localhost k8s]# ps aux | grep ku
//Start the controller-manager:
[root@localhost k8s]# chmod +x controller-manager.sh
[root@localhost k8s]# ./controller-manager.sh 127.0.0.1
//Check the status of the master node:
[root@localhost k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
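The two ports checked above play different roles: 6443 is the secure TLS port that nodes and remote kubectl clients connect to, while 8080 is the legacy insecure port bound to localhost, which is why scheduler.sh and controller-manager.sh are pointed at 127.0.0.1. A quick sanity check against the insecure port (output abbreviated; the exact version fields will vary):

[root@localhost k8s]# curl -s http://127.0.0.1:8080/version        //no client certificate needed on the local insecure port
{
  "major": "1",
  "minor": "12",
  ...
}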
II. Deployment on the node machines:
First, the three core components on each node:
kubelet: the master's agent on the node; it manages the lifecycle of containers running on the local machine, handling work such as creating containers, mounting volumes for Pods, downloading secrets, and reporting container and node status. kubelet turns each Pod into a group of containers, as the sketch after this list illustrates.
kube-proxy: implements the Pod network proxy on the node, maintaining the network rules and performing layer-4 load balancing.
docker: the container runtime (which we have already installed).
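The phrase "turns each Pod into a group of containers" is directly visible in docker once a Pod lands on a node: alongside the application containers, kubelet starts one extra pause (infra) container per Pod, pulled from the --pod-infra-container-image configured further below, which holds the Pod's network namespace. An illustrative session (the container names here are hypothetical):

[root@localhost ~]# docker ps --format '{{.Names}}'
k8s_nginx_nginx-dbddb74b8-abcde_default_..._0        //the application container
k8s_POD_nginx-dbddb74b8-abcde_default_..._0          //the pause/infra container backing the same Pod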
Deployment begins:
//First, on the master, copy kubelet and kube-proxy over to the node machines:
[root@localhost ~]# cd k8s/kubernetes/server/bin/
[root@localhost bin]# ls
apiextensions-apiserver              kube-apiserver.docker_tag           kube-proxy
cloud-controller-manager             kube-apiserver.tar                  kube-proxy.docker_tag
cloud-controller-manager.docker_tag  kube-controller-manager             kube-proxy.tar
cloud-controller-manager.tar         kube-controller-manager.docker_tag  kube-scheduler
hyperkube                            kube-controller-manager.tar         kube-scheduler.docker_tag
kubeadm                              kubectl                             kube-scheduler.tar
kube-apiserver                       kubelet                             mounter
[root@localhost bin]# scp kubelet kube-proxy root@192.168.109.131:/opt/kubernetes/bin/
[root@localhost bin]# scp kubelet kube-proxy root@192.168.109.132:/opt/kubernetes/bin/
//On node01, upload the node.zip package from the host machine to /root, then unpack it:
[root@localhost ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip   公共  視頻  文檔  音樂
flannel.sh       initial-setup-ks.cfg                README.md  模板  圖片  下載  桌面
[root@localhost ~]# unzip node.zip        //unpack to obtain kubelet.sh and proxy.sh
Archive: node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
Next, back on the master:
[root@localhost k8s]# mkdir kubeconfig
[root@localhost k8s]# cd kubeconfig/
[root@localhost kubeconfig]# cat /opt/kubernetes/cfg/token.csv        //retrieve the token
1232eb0133309f6ccde54802cc0b3ebe,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@localhost kubeconfig]# vim kubeconfig
APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters (the token must match token.csv)
kubectl config set-credentials kubelet-bootstrap \
  --token=1232eb0133309f6ccde54802cc0b3ebe \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

//Set the environment variable (this can also be written into /etc/profile):
[root@localhost kubeconfig]# export PATH=$PATH:/opt/kubernetes/bin/
//Check the health status:
[root@localhost kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
//Generate the configuration files:
[root@localhost kubeconfig]# bash kubeconfig 192.168.109.138 /root/k8s/k8s-cert/
[root@localhost kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
//Copy the configuration files to the node machines:
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.109.131:/opt/kubernetes/cfg/
[root@localhost kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.109.132:/opt/kubernetes/cfg/
//Create the bootstrap ClusterRoleBinding that grants this user permission to request signed certificates from the apiserver (essential):
[root@localhost kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
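Before copying the two kubeconfig files out, you can sanity-check what the script generated; kubectl redacts the embedded CA data and token in its output. An illustrative check (output abbreviated):

[root@localhost kubeconfig]# kubectl config view --kubeconfig=bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.109.138:6443
  name: kubernetes
...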
Next, on node01:
[root@localhost ~]# bash kubelet.sh 192.168.109.131
//Check that the kubelet service started:
[root@localhost ~]# ps aux | grep kube
Back on the master:
//Check for the certificate request coming from node01:
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-M9Iv_3cKuOZaiKSvoQGIarJHOaK1S9FnRs6SGIXP9nk   5s    kubelet-bootstrap   Pending        //Pending: waiting for the cluster to issue this node a certificate
//Approve the request and issue the certificate:
[root@localhost kubeconfig]# kubectl certificate approve node-csr-M9Iv_3cKuOZaiKSvoQGIarJHOaK1S9FnRs6SGIXP9nk
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-M9Iv_3cKuOZaiKSvoQGIarJHOaK1S9FnRs6SGIXP9nk   7m7s   kubelet-bootstrap   Approved,Issued        //Approved,Issued: the node has been allowed to join the cluster
//List the cluster nodes; node01 has joined successfully:
[root@localhost kubeconfig]# kubectl get node
NAME              STATUS   ROLES    AGE    VERSION
192.168.109.131   Ready    <none>   3m8s   v1.12.3
On node01, start the proxy service:
[root@localhost ~]# bash proxy.sh 192.168.109.131
[root@localhost ~]# systemctl status kube-proxy.service        //check that the service is healthy
Deploying node02:
To save time, we copy the ready-made /opt/kubernetes directory from node01 to the other node and simply adjust it:
[root@localhost ~]# scp -r /opt/kubernetes/ root@192.168.109.132:/opt/
//Also copy the kubelet and kube-proxy service files to node02:
[root@localhost ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.109.132:/usr/lib/systemd/system/
Now on node02:
//First delete the copied certificates, because node02 will request its own in a moment:
[root@localhost ~]# cd /opt/kubernetes/ssl/
[root@localhost ssl]# rm -rf *
//Modify the three configuration files: kubelet, kubelet.config, and kube-proxy
[root@localhost ssl]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.109.132 \        ##change to this node's own IP address
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@localhost cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.109.132        ##change to this node's own IP address
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
[root@localhost cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.109.132 \        ##change to this node's own IP address
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
//Start the services:
[root@localhost cfg]# systemctl start kubelet.service
[root@localhost cfg]# systemctl start kube-proxy.service
//As before, check the pending request on the master:
[root@localhost kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-M9Iv_3cKuOZaiKSvoQGIarJHOaK1S9FnRs6SGIXP9nk   29m     kubelet-bootstrap   Approved,Issued
node-csr-vOfkpLYSYqFtD__GgZZZiV7NU_WaqECDvBbFuGyckRc   2m21s   kubelet-bootstrap   Pending
//Approve the request and issue the certificate, just as before:
[root@localhost kubeconfig]# kubectl certificate approve node-csr-vOfkpLYSYqFtD__GgZZZiV7NU_WaqECDvBbFuGyckRc
//List the nodes in the cluster:
[root@localhost kubeconfig]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.109.131   Ready    <none>   34s   v1.12.3
192.168.109.132   Ready    <none>   25m   v1.12.3
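Since kube-proxy runs with --proxy-mode=ipvs here, you can confirm on either node that the layer-4 rules are actually being programmed. A quick check (ipvsadm may first need to be installed with yum install -y ipvsadm; the output below is illustrative for this cluster's 10.0.0.0/24 service range):

[root@localhost cfg]# ipvsadm -Ln        //list the IPVS virtual servers that kube-proxy maintains
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr
  -> 192.168.109.138:6443         Masq    1      0          0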
At this point, our single-master deployment is complete. Next up is the multi-master deployment.
3. Multi-master deployment:
Multi-master cluster architecture diagram:
On top of the single-master deployment environment above, we only need to deploy one more master, master02.

Role     | IP address
master02 | 192.168.109.230
Deployment begins:
//First, turn off the firewall on master02:
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
//On master01, simply copy the kubernetes directory over to master02:
[root@localhost kubeconfig]# scp -r /opt/kubernetes/ root@192.168.109.230:/opt
//Also copy the three component startup scripts from master01: kube-apiserver.service, kube-controller-manager.service, and kube-scheduler.service
[root@localhost kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.109.230:/usr/lib/systemd/system/
//Next, on master02, change the IP addresses in the kube-apiserver configuration file:
[root@localhost cfg]# pwd
/opt/kubernetes/cfg
[root@localhost cfg]# vim kube-apiserver
.
(output omitted)
.
--etcd-servers=https://192.168.109.138:2379,https://192.168.109.131:2379,https://192.168.109.132:2379 \
--bind-address=192.168.109.230 \        ##change to master02's own IP address
--secure-port=6443 \
--advertise-address=192.168.109.230 \        ##change to master02's own IP address
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
.
(output omitted)
.
//Copy master01's existing etcd certificates over for master02 to use:
[root@localhost kubeconfig]# scp -r /opt/etcd/ root@192.168.109.230:/opt/
//Now start the three components on master02:
[root@localhost cfg]# systemctl start kube-apiserver.service
[root@localhost cfg]# systemctl start kube-controller-manager.service
[root@localhost cfg]# systemctl start kube-scheduler.service
//Add the environment variable:
[root@localhost cfg]# vim /etc/profile
Append at the end:
export PATH=$PATH:/opt/kubernetes/bin/
[root@localhost cfg]# source /etc/profile        //make the environment variable take effect
//On master02, list the nodes (identical to what master01 shows):
[root@localhost cfg]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.109.131   Ready    <none>   44m   v1.12.3
192.168.109.132   Ready    <none>   70m   v1.12.3
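Both masters read and write the same etcd cluster, so anything created through one apiserver is immediately visible through the other. A quick smoke test (the deployment name and abbreviated output below are hypothetical; on v1.12, kubectl run creates a Deployment):

//On master02, create a test deployment:
[root@localhost cfg]# kubectl run nginx-test --image=nginx --replicas=1
//On master01, the same pod shows up, scheduled onto one of the two nodes:
[root@localhost kubeconfig]# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE
nginx-test-xxxxxxxxxx-xxxxx   1/1     Running   0          30s   172.17.x.x   192.168.109.131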