The previous post covered the CA certificates and the etcd deployment. This one continues building out Kubernetes. Without further ado, let's start deploying.
The Kubernetes master node runs the following components: kube-apiserver, kube-scheduler, and kube-controller-manager.

For now these three components are deployed on the same machine (a highly available master setup comes later):

- kube-scheduler, kube-controller-manager, and kube-apiserver are tightly coupled in function;
- only one kube-scheduler and one kube-controller-manager process may be active at a time; if multiple instances run, a leader must be chosen by election.

The master node communicates with the Pods on the node machines over the Pod network, so the Flannel network must also be deployed on the master node.
```shell
$ wget https://dl.k8s.io/v1.8.2/kubernetes-server-linux-amd64.tar.gz
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
```

Copy the binaries to the /usr/k8s/bin directory:

```shell
$ sudo cp -r /root/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} /usr/k8s/bin/
```
Note that all of the components need certificates configured to talk to each other.

Create the kubernetes certificate signing request:
```shell
$ cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "${NODE_IP}",
    "${MASTER_URL}",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
${CLUSTER_KUBERNETES_SVC_IP} is the service IP of kubernetes (the Service Cluster IP), generally the first IP of the network segment specified by the kube-apiserver --service-cluster-ip-range option, e.g. 10.254.0.1.

Generate the kubernetes certificate and private key:
```shell
$ cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
$ sudo mkdir -p /etc/kubernetes/ssl/
$ sudo mv kubernetes*.pem /etc/kubernetes/ssl/
```
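As an aside, the "first IP of the service range" rule mentioned above is easy to derive by hand. A minimal sketch, assuming a hypothetical SERVICE_CIDR value (in the real deployment it comes from env.sh):

```shell
# Hypothetical value; in this deployment SERVICE_CIDR is defined in env.sh.
SERVICE_CIDR="10.254.0.0/16"
network="${SERVICE_CIDR%/*}"    # strip the prefix length -> 10.254.0.0
prefix="${network%.*}"          # first three octets   -> 10.254.0
last="${network##*.}"           # last octet           -> 0
# The built-in "kubernetes" service gets the first IP of the range.
CLUSTER_KUBERNETES_SVC_IP="${prefix}.$((last + 1))"
echo "${CLUSTER_KUBERNETES_SVC_IP}"
```

With the value above, this prints 10.254.0.1, which is the IP that must appear in the certificate's hosts list.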
When the kubelet starts for the first time, it sends a TLS bootstrapping request to kube-apiserver; kube-apiserver checks whether the token in the request matches its configured token.csv, and if it does, it automatically generates a certificate and key for the kubelet.
```shell
# The imported environment.sh file defines the BOOTSTRAP_TOKEN variable
$ cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
$ sudo mv token.csv /etc/kubernetes/
```
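The BOOTSTRAP_TOKEN itself is nothing special: a common way to generate one is 16 random bytes rendered as 32 hex characters, along these lines:

```shell
# Generate a 32-hex-character bootstrap token from 16 random bytes.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "${BOOTSTRAP_TOKEN}"
```

Keep in mind that if the token is regenerated, token.csv (and later the kubelet bootstrap kubeconfig) must be updated to match, or TLS bootstrapping requests will be rejected.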
```shell
$ cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/k8s/bin/kube-apiserver \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --advertise-address=${NODE_IP} \\
  --bind-address=0.0.0.0 \\
  --insecure-bind-address=${NODE_IP} \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
  --kubelet-https=true \\
  --experimental-bootstrap-token-auth \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=2 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/lib/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --event-ttl=1h \\
  --logtostderr=true \\
  --v=6
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
Notes on the parameters above:

- If you are installing a 1.9.x cluster, --experimental-bootstrap-token-auth must be replaced with --enable-bootstrap-token-auth, because the former parameter was deprecated in 1.9.x.
- --authorization-mode=RBAC enables the RBAC authorization mode on the secure port, rejecting unauthorized requests.
- When the kubelet TLS bootstrapping mechanism is used, do not specify the --kubelet-certificate-authority, --kubelet-client-certificate, and --kubelet-client-key options, otherwise kube-apiserver will later report an "x509: certificate signed by unknown authority" error when validating kubelet certificates.
- The --admission-control value must include ServiceAccount, otherwise deploying the cluster add-ons will fail.
- --bind-address must not be 127.0.0.1.
- --service-cluster-ip-range specifies the Service Cluster IP address range; this range must not be routable.
- --service-node-port-range=${NODE_PORT_RANGE} specifies the port range for NodePort services.
- By default, Kubernetes objects are stored under the etcd /registry path; this can be adjusted via the --etcd-prefix parameter.
- Starting with 1.8, you need to add Node to the --authorization-mode parameter, i.e. --authorization-mode=Node,RBAC, otherwise Node nodes cannot register.
- To enable audit logging, specifying the --audit-log-path parameter alone is not enough; that only sets the log path. You also need to point --audit-policy-file at an audit policy file. We can also use a log collection tool to gather and analyze these logs. The audit policy file content is as follows (/etc/kubernetes/audit-policy.yaml):
```yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
```
The audit logging configuration is covered in more detail in the documentation: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
A quick explanation of what the audit policy is and what it does: every operation performed against kube-apiserver leaves a log record, which is very useful for troubleshooting later; the log can be redirected to a file for analysis.
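As a sketch of that kind of analysis, assuming JSON-formatted audit events (one per line), you could tally events per verb like this. A toy sample file stands in for the real /var/lib/audit.log here, and the field names are the standard audit Event fields:

```shell
# Toy stand-in for /var/lib/audit.log; real audit events are JSON objects,
# one per line, with fields such as "verb" and "user".
cat > /tmp/audit-sample.log <<'EOF'
{"kind":"Event","verb":"get","user":{"username":"admin"}}
{"kind":"Event","verb":"create","user":{"username":"admin"}}
{"kind":"Event","verb":"get","user":{"username":"system:kube-proxy"}}
EOF
# Count audit events per verb, most frequent first.
grep -o '"verb":"[a-z]*"' /tmp/audit-sample.log | sort | uniq -c | sort -rn
```

For real analysis a JSON-aware tool (e.g. jq, or a log pipeline) is a better fit, but even plain grep answers "who did what, how often" in a pinch.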
```shell
$ sudo cp kube-apiserver.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-apiserver
$ sudo systemctl start kube-apiserver
$ sudo systemctl status kube-apiserver
```
```shell
$ cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/k8s/bin/kube-controller-manager \\
  --address=127.0.0.1 \\
  --master=http://${MASTER_URL}:8080 \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-cidr=${CLUSTER_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Notes on the parameters above:

- The --address value must be 127.0.0.1, because the current kube-apiserver expects the scheduler and controller-manager to run on the same machine.
- --master=http://${MASTER_URL}:8080: communicates with kube-apiserver over http (the insecure port); port 8080 can only be dropped after the haproxy below is installed and working.
- --cluster-cidr specifies the CIDR range for Pods in the cluster; this range must be routable between the Nodes (flanneld guarantees this).
- --service-cluster-ip-range specifies the CIDR range for Services in the cluster; this network must NOT be routable between the Nodes, and the value must match the corresponding kube-apiserver parameter.
- The certificate and private key specified by --cluster-signing-* are used to sign the certificates and keys created for TLS bootstrapping.
- --root-ca-file is used to verify the kube-apiserver certificate; only when this parameter is specified is the CA certificate placed into the ServiceAccount of Pod containers.
- --leader-elect=true: when deploying a master cluster of multiple machines, an election picks the single kube-controller-manager process that is in the working state.
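To make the routable/non-routable distinction above concrete: the Service range and the Pod range must simply be disjoint networks. A crude sanity check, assuming hypothetical /16 values for both (the real values live in env.sh, and the comparison below only works for /16 prefixes):

```shell
# Hypothetical values standing in for SERVICE_CIDR and CLUSTER_CIDR from env.sh.
SERVICE_CIDR="10.254.0.0/16"
CLUSTER_CIDR="172.30.0.0/16"
svc_net="${SERVICE_CIDR%/*}"   # 10.254.0.0
pod_net="${CLUSTER_CIDR%/*}"   # 172.30.0.0
# Crude disjointness check for /16 ranges: compare the first two octets.
# Service IPs are virtual (handled by kube-proxy); Pod IPs are routed by flanneld.
svc16="${svc_net%.*.*}"        # 10.254
pod16="${pod_net%.*.*}"        # 172.30
if [ "${svc16}" != "${pod16}" ]; then
  echo "ok: Service and Pod CIDRs are disjoint"
else
  echo "error: Service and Pod CIDRs overlap" >&2
  exit 1
fi
```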
```shell
$ sudo cp kube-controller-manager.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-controller-manager
$ sudo systemctl start kube-controller-manager
$ sudo systemctl status kube-controller-manager
```
```shell
$ cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/k8s/bin/kube-scheduler \\
  --address=127.0.0.1 \\
  --master=http://${MASTER_URL}:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Notes on the parameters above:

- The --address value must be 127.0.0.1, because the current kube-apiserver expects the scheduler and controller-manager to run on the same machine.
- --master=http://${MASTER_URL}:8080: communicates with kube-apiserver over http (the insecure port); port 8080 can only be dropped after the haproxy below is up.
- --leader-elect=true: when deploying a master cluster of multiple machines, an election picks the single kube-scheduler process that is in the working state.

As an aside, the configuration of these three components is not very complicated.
```shell
$ sudo cp kube-scheduler.service /etc/systemd/system/
$ sudo systemctl daemon-reload
$ sudo systemctl enable kube-scheduler
$ sudo systemctl start kube-scheduler
$ sudo systemctl status kube-scheduler
```
At this point we still can't verify kube-apiserver's status, because the kubectl tool is missing.
By default, kubectl reads the kube-apiserver address, certificates, user name, and other information from the ~/.kube/config file; that file must be configured correctly for kubectl commands to work.
The downloaded kubectl binary and the generated ~/.kube/config file need to be copied to every machine where kubectl commands will be used.
Many readers ask which node these steps should be run on. kubectl is just a command-line tool that talks to kube-apiserver, so you can install it on whichever node you like, master or node. For example, install it on the master node first so you can use the kubectl command-line tool there; if you then want to use it on a node (and you will certainly need it during installation), just copy the kubectl binary and the ~/.kube/config file from the master to the corresponding node.
```shell
$ source /usr/k8s/bin/env.sh
$ export KUBE_APISERVER="https://${MASTER_URL}:6443"
```
Note the KUBE_APISERVER address here: since we haven't installed haproxy yet, we temporarily need to point directly at apiserver's port 6443; once haproxy is installed, port 443 can be used and forwarded to 6443.
Next, download kubectl and create the ~/.kube/config configuration file.

```shell
$ wget https://dl.k8s.io/v1.8.2/kubernetes-client-linux-amd64.tar.gz
# If the server can't download it directly, download it locally and scp it up
$ tar -xzvf kubernetes-client-linux-amd64.tar.gz
$ sudo cp kubernetes/client/bin/kube* /usr/k8s/bin/
$ sudo chmod a+x /usr/k8s/bin/kube*
$ export PATH=/usr/k8s/bin:$PATH
```
kubectl communicates with kube-apiserver's secure port, which requires TLS certificates and keys. Create the admin certificate signing request:

```shell
$ cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
```
Notes:

- kube-apiserver uses RBAC to authorize requests from clients (such as kubelet, kube-proxy, and Pods).
- kube-apiserver predefines some RoleBindings for RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, and that Role grants permission to call all kube-apiserver APIs.
- The O field of the admin certificate is system:masters. When kubectl uses this certificate to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and since the certificate's user group is the pre-authorized system:masters, it is granted access to all APIs.
- The empty hosts attribute means the certificate is not tied to any particular machine.
Generate the admin certificate and private key:
```shell
$ cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem
$ sudo mv admin*.pem /etc/kubernetes/ssl/
```
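To double-check that the O (organization) field RBAC maps to the group really is system:masters, you can inspect the certificate subject with openssl. The sketch below generates a throwaway self-signed certificate with the same subject so the command can be tried anywhere; on the real master, point openssl at /etc/kubernetes/ssl/admin.pem instead:

```shell
# Throwaway self-signed cert with the same subject as admin-csr.json;
# on the real master, inspect /etc/kubernetes/ssl/admin.pem directly.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-admin-key.pem \
  -out /tmp/demo-admin.pem -days 1 \
  -subj "/C=CN/ST=BeiJing/L=BeiJing/O=system:masters/OU=System/CN=admin" \
  2>/dev/null
# Print the subject; the O field should read system:masters.
openssl x509 -in /tmp/demo-admin.pem -noout -subject
```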
```shell
# Set cluster parameters
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
# Set client credentials
$ kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --token=${BOOTSTRAP_TOKEN}
# Set context parameters
$ kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
# Set the default context
$ kubectl config use-context kubernetes
```
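After those four commands, ~/.kube/config has roughly the following shape (a sketch: the -data values are base64-encoded PEM blobs embedded by --embed-certs=true, and the server value is whatever KUBE_APISERVER was set to):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 CA certificate>
    server: https://<MASTER_URL>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
users:
- name: admin
  user:
    client-certificate-data: <base64 admin certificate>
    client-key-data: <base64 admin private key>
    token: <BOOTSTRAP_TOKEN>
```

Embedding the certificates makes the file self-contained, which is exactly what lets you copy this single file to other machines later.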
Notes:

- The O field of the admin.pem certificate is system:masters; the kube-apiserver predefined RoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, and that Role grants permission to call the relevant kube-apiserver APIs.
- The generated kubeconfig is saved to the ~/.kube/config file. Copy the ~/.kube/config file into the ~/.kube/ directory of any machine that will run kubectl commands.
Now we can check component status with kubectl get cs. (get is one of the basic verbs the Kubernetes API exposes for retrieving resources.)