Preface
The Kubernetes master node runs the following components:
kube-apiserver: the core of the cluster. It exposes the cluster API, acts as the communication hub between all cluster components, and enforces cluster security.
kube-scheduler: the cluster scheduler. It assigns each Pod to a suitable node based on node load (CPU, memory, storage, policies, etc.).
kube-controller-manager: the cluster state manager. When the actual cluster state diverges from the desired state, its controllers drive the cluster back to the desired state according to the configured policies.
Note:
The three components work closely together. Only one kube-scheduler process and one kube-controller-manager process may be active in the cluster at a time; if multiple instances run, a leader must be chosen among them by election.
Environment:

192.168.214.88 master1
192.168.214.89 master2
192.168.214.90 master3
Download and unpack the installation package
[root@master1 ~]# wget https://dl.k8s.io/v1.12.2/kubernetes-server-linux-amd64.tar.gz
[root@master1 ~]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master1 ~]# tree kubernetes
kubernetes
├── addons
├── kubernetes-src.tar.gz
├── LICENSES
└── server
    └── bin
        ├── apiextensions-apiserver
        ├── cloud-controller-manager
        ├── cloud-controller-manager.docker_tag
        ├── cloud-controller-manager.tar
        ├── hyperkube
        ├── kubeadm
        ├── kube-apiserver
        ├── kube-apiserver.docker_tag
        ├── kube-apiserver.tar
        ├── kubeconfig
        ├── kube-controller-manager
        ├── kube-controller-manager.docker_tag
        ├── kube-controller-manager.tar
        ├── kubectl
        ├── kubelet
        ├── kube-proxy
        ├── kube-proxy.docker_tag
        ├── kube-proxy.tar
        ├── kube-scheduler
        ├── kube-scheduler.docker_tag
        ├── kube-scheduler.tar
        └── mounter
Copy the service binaries to /usr/local/bin/ and make them executable
[root@master1 ~]# cp -r kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/bin/
[root@master1 ~]# chmod +x /usr/local/bin/kube*
Generate the cluster administrator's admin.kubeconfig for kubectl to use
[root@master1 ssl]# export KUBE_APISERVER="https://192.168.214.88:6443"
# Set cluster parameters
[root@master1 ssl]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER} \
>   --kubeconfig=admin.kubeconfig
Cluster "kubernetes" set.
# Set client authentication parameters
[root@master1 ssl]# kubectl config set-credentials admin \
>   --client-certificate=/opt/kubernetes/ssl/admin.pem \
>   --embed-certs=true \
>   --client-key=/opt/kubernetes/ssl/admin-key.pem \
>   --kubeconfig=admin.kubeconfig
User "admin" set.
# Set context parameters
[root@master1 ssl]# kubectl config set-context kubernetes \
>   --cluster=kubernetes \
>   --user=admin \
>   --kubeconfig=admin.kubeconfig
Context "kubernetes" modified.
# Set the default context
[root@master1 ssl]# kubectl config use-context kubernetes --kubeconfig=admin.kubeconfig
Switched to context "kubernetes".
Notes:
The content of the generated admin.kubeconfig is also saved to ~/.kube/config. This file grants the highest level of access to the cluster and must be kept secure.
The O field of the admin.pem certificate is system:masters. The predefined ClusterRoleBinding cluster-admin binds Group system:masters to ClusterRole cluster-admin, which grants permission to call all kube-apiserver APIs.
Processes on Node machines such as kubelet and kube-proxy must authenticate and be authorized when communicating with the kube-apiserver process on the master. Starting with Kubernetes 1.4, kube-apiserver supports TLS Bootstrapping, generating TLS client certificates on demand so that a certificate no longer has to be created manually for every client; currently this feature only supports generating certificates for kubelet.
The following steps only need to be performed on the master node; the generated *.kubeconfig files can be copied directly to the /opt/kubernetes/ssl directory on the node machines.
Create the TLS Bootstrapping token
[root@master1 ~]# export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
[root@master1 ~]# cat > token.csv << EOF
> ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
> EOF
[root@master1 ~]# cat token.csv
bfdf3a25e9cf9f5278ea4c9ff9227e23,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master1 ~]# mv token.csv /opt/kubernetes/ssl
Notes:
The token can be any string containing 128 bits of entropy; use a secure random number generator to create it. After creating the file, inspect it and confirm that the ${BOOTSTRAP_TOKEN} variable has been replaced by its actual value.
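As a quick local sanity check, the pipeline above should always produce a 32-character hex string (16 random bytes, hex-encoded by od, with whitespace stripped by tr). A small sketch, no cluster required:

```shell
# Generate a token exactly as above and verify its shape.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

# 16 bytes hex-encoded -> 32 characters.
echo "${#BOOTSTRAP_TOKEN}"    # prints 32

# Every character must be a lowercase hex digit.
echo "${BOOTSTRAP_TOKEN}" | grep -Eq '^[0-9a-f]{32}$' && echo "token format OK"
```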
BOOTSTRAP_TOKEN is written into the token.csv file used by kube-apiserver and the bootstrap.kubeconfig file used by kubelet. If BOOTSTRAP_TOKEN is regenerated later, you need to:
Update token.csv and distribute it to /opt/kubernetes/ssl on all machines (master and node); distributing it to the node machines is optional.
Regenerate bootstrap.kubeconfig and distribute it to /opt/kubernetes/ssl on all node machines.
Restart the kube-apiserver and kubelet processes.
Approve the kubelet CSR requests again.
Create the kubelet bootstrap.kubeconfig file
[root@master1 ssl]# export KUBE_APISERVER="https://192.168.214.88:6443"
# Set cluster parameters
[root@master1 ssl]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER} \
>   --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.
# Set client authentication parameters
[root@master1 ssl]# kubectl config set-credentials kubelet-bootstrap \
>   --token=${BOOTSTRAP_TOKEN} \
>   --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
# Set context parameters
[root@master1 ssl]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kubelet-bootstrap \
>   --kubeconfig=bootstrap.kubeconfig
Context "default" created.
# Set the default context
[root@master1 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
Create the kube-proxy.kubeconfig file
[root@master1 ssl]# export KUBE_APISERVER="https://192.168.214.88:6443"
# Set cluster parameters
[root@master1 ssl]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER} \
>   --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
# Set client authentication parameters
[root@master1 ssl]# kubectl config set-credentials kube-proxy \
>   --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
>   --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
>   --embed-certs=true \
>   --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
# Set context parameters
[root@master1 ssl]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kube-proxy \
>   --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
# Set the default context
[root@master1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
Notes:
Both the set-cluster and set-credentials steps use --embed-certs=true, which writes the contents of the certificate files referenced by certificate-authority, client-certificate and client-key into the generated kube-proxy.kubeconfig file.
The CN of the kube-proxy.pem certificate is system:kube-proxy. The predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to ClusterRole system:node-proxier, which grants permission to call the kube-apiserver APIs that kube-proxy needs.
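For reference, the predefined binding mentioned above has roughly the following shape (a sketch of the built-in object; fetch the authoritative version from a running cluster with `kubectl get clusterrolebinding system:node-proxier -o yaml`):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:node-proxier
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:kube-proxy
```

This is why no extra RBAC objects have to be created for kube-proxy: the certificate CN alone selects the right built-in binding.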
Generate the advanced audit configuration
[root@master1 ssl]# cat > audit-policy.yaml << EOF
> apiVersion: audit.k8s.io/v1beta1
> kind: Policy
> rules:
> - level: Metadata
> EOF
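The single-rule policy above logs request metadata (user, verb, resource, timestamp) for everything, without request or response bodies. As a sketch of what a slightly richer v1beta1 policy can look like, one can exclude noisy read-only endpoints first; the exclusion rule below is an illustration, not part of this deployment:

```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
# Illustrative: skip health and metrics probes entirely.
- level: None
  nonResourceURLs:
  - "/healthz*"
  - "/metrics"
# Everything else is logged at Metadata level (no bodies).
- level: Metadata
```

Rules are evaluated top to bottom and the first match wins, so the catch-all Metadata rule must come last.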
The kubeconfig files can be distributed to the node machines ahead of time; they are needed when deploying the nodes.
[root@master1 ssl]# scp -r /opt/kubernetes/ssl/*.kubeconfig node1:/opt/kubernetes/ssl/
[root@master1 ssl]# scp -r /opt/kubernetes/ssl/*.kubeconfig node2:/opt/kubernetes/ssl/
[root@master1 ssl]# scp -r /opt/kubernetes/ssl/*.kubeconfig node3:/opt/kubernetes/ssl/
Create the service files
Create /usr/lib/systemd/system/kube-apiserver.service
[root@master1 ~]# vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
  --advertise-address=192.168.214.88 \
  --bind-address=192.168.214.88 \
  --insecure-bind-address=127.0.0.1 \
  --kubelet-https=true \
  --authorization-mode=RBAC,Node \
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/token.csv \
  --feature-gates=CustomPodDNS=true \
  --service-cluster-ip-range=172.21.0.0/16 \
  --service-node-port-range=8400-20000 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://192.168.214.200:2379,https://192.168.214.201:2379,https://192.168.214.202:2379 \
  --logtostderr=false \
  --log-dir=/var/log/kube-apiserver \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --event-ttl=1h \
  --v=2 \
  --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --enable-aggregator-routing=true \
  --runtime-config=rbac.authorization.k8s.io/v1beta1,settings.k8s.io/v1alpha1=true,api/all=true
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note: systemd does not support comment lines inside a continued ExecStart, so if you need --proxy-client-cert-file and --proxy-client-key-file, add them as real arguments rather than commented-out lines.
--admission-control # admission plugins. Many advanced Kubernetes features require the corresponding Admission Controller plugin to be enabled (the list must include ServiceAccount). See: Admission Controller
--advertise-address # the address on which the API server advertises itself to all members of the cluster
--bind-address # must not be 127.0.0.1
--insecure-bind-address # the IP address for the insecure port; defaults to the local address and requires no certificate verification
--kubelet-https[=true] # use HTTPS for connections to the kubelets
--authorization-mode # authorization mode; enables RBAC and Node authorization on the secure port and rejects unauthorized requests. See https://kubernetes.io/docs/reference/access-authn-authz/rbac/ and http://docs.kubernetes.org.cn/156.html
--enable-bootstrap-token-auth # enable bootstrap token authentication, see https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/
--token-auth-file # location of the generated token file
kube-scheduler and kube-controller-manager are usually deployed on the same machine as kube-apiserver, and they communicate with it over the insecure port.
kubelet and kube-proxy are deployed on the other Node machines. When they access kube-apiserver over the secure port, they must first authenticate with TLS certificates and then be authorized through RBAC.
kube-proxy and kubelet obtain their RBAC authorization through the User and Group specified in the certificates they use.
--feature-gates=CustomPodDNS=true # with this feature enabled, the dnsPolicy field of a Pod can be set to "None", and a new dnsConfig field can be added to the Pod spec;
dnsConfig defines the DNS parameters, while dnsPolicy selects a preset DNS mode for the Pod. See http://www.jintiankansha.me/t/Js1R84GGAl
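To illustrate how dnsPolicy and dnsConfig fit together, a Pod that bypasses the preset DNS modes entirely might look like this (the pod name, image, and nameserver IP are hypothetical placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-example            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox             # hypothetical image
    command: ["sleep", "3600"]
  dnsPolicy: "None"            # ignore the preset DNS modes entirely
  dnsConfig:                   # the field unlocked by the CustomPodDNS feature gate
    nameservers:
    - 172.21.0.2               # hypothetical in-cluster DNS ClusterIP
    searches:
    - default.svc.cluster.local
    options:
    - name: ndots
      value: "2"
```

With dnsPolicy: None, the kubelet writes the Pod's /etc/resolv.conf solely from dnsConfig instead of inheriting the node or cluster DNS settings.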
--service-cluster-ip-range # the virtual IP address range for Services in the Kubernetes cluster
Kubernetes assigns each Service a fixed IP. This is a virtual IP (also called the ClusterIP); it is not a real, existing IP but one virtualized by Kubernetes.
The virtual IP belongs to the cluster-internal virtual network and cannot be reached from outside. Within Kubernetes, the kube-proxy component implements the routing and forwarding for these virtual IPs, so every Node must run kube-proxy; this builds a Kubernetes-level virtual forwarding network on top of the container overlay network.
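This split between the Service range (172.21.0.0/16) and the Pod overlay range (--cluster-cidr=172.20.0.0/16 in the kube-controller-manager unit below) only works if the two ranges do not overlap. A small self-contained shell sketch to check whether an address falls inside a CIDR (the helpers ip_to_int and in_cidr are our own, not part of any tool):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed if $1 (an IP) lies inside $2 (a CIDR such as 172.21.0.0/16).
in_cidr() {
  local net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 172.21.0.1 172.21.0.0/16 && echo "ClusterIP is inside the service range"
in_cidr 172.21.0.1 172.20.0.0/16 || echo "ClusterIP is outside the pod range"
```

Both messages print, confirming the two /16 ranges are disjoint for this address.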
--service-node-port-range # the range of physical host ports the cluster may map (NodePorts)
--enable-swagger-ui=true # makes the Swagger UI reachable at /swagger-ui
Create /usr/lib/systemd/system/kube-controller-manager.service
[root@master1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=172.21.0.0/16 \
  --cluster-cidr=172.20.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Create /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Start the services and enable them at boot
[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl start kube-apiserver.service
[root@master1 ~]# systemctl start kube-controller-manager.service
[root@master1 ~]# systemctl start kube-scheduler.service
[root@master1 ~]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master1 ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master1 ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
Check the status of each component
[root@master1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
Copy the relevant files to the other two master nodes: the certificates, kubeconfig files, binaries, and service files
[root@master1 kubernetes]# scp -r ssl/ master2:/opt/kubernetes/
[root@master1 kubernetes]# scp -r ssl/ master3:/opt/kubernetes/
[root@master1 kubernetes]# scp -r /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} master2:/usr/local/bin/
[root@master1 kubernetes]# scp -r /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} master3:/usr/local/bin/
[root@master1 kubernetes]# scp /usr/lib/systemd/system/kube* master2:/usr/lib/systemd/system/
[root@master1 kubernetes]# scp /usr/lib/systemd/system/kube* master3:/usr/lib/systemd/system/
On master2 and master3, adjust the API server configuration for each node's IP, make the binaries executable, then start the services and enable them at boot.
You also need to load the environment variable below, otherwise kubectl will not work:
[root@master2 bin]# echo "export KUBECONFIG=/opt/kubernetes/ssl/admin.kubeconfig" >> /etc/profile
[root@master2 bin]# source /etc/profile
[root@master2 bin]# echo $KUBECONFIG
/opt/kubernetes/ssl/admin.kubeconfig