Before starting, thanks go to 陽明, author of the post "手動搭建高可用的kubernetes 集羣" (manually building a highly available Kubernetes cluster). This article upgrades the Kubernetes version and revises and extends part of the content.
Role | IP address |
---|---|
Master01&&etcd01&&haproxy01 | 10.100.4.181 |
Master02&&etcd02&&haproxy02 | 10.100.4.182 |
Node01 && etcd03 | 10.100.4.183 |
Node02 | 10.100.4.184 |
Node03 | 10.100.4.185 |
The global variables used throughout the deployment are defined below (adjust them for your own machines and network):
```bash
# Token used for TLS bootstrapping; generate one with:
#   head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="3da3ebeda2462bce41766a086f8eb9fb"

# Use currently unused network ranges for the Service and Pod CIDRs.
# Service CIDR: unroutable before deployment; reachable inside the cluster via IP:Port afterwards
SERVICE_CIDR="10.254.0.0/16"

# Pod CIDR (Cluster CIDR): unroutable before deployment; routable afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"

# NodePort range
NODE_PORT_RANGE="20000-40000"

# etcd cluster endpoints; adjust to your own layout
ETCD_ENDPOINTS="https://10.100.4.181:2379,https://10.100.4.182:2379,https://10.100.4.183:2379"

# etcd key prefix for the flanneld network configuration
FLANNEL_ETCD_PREFIX="/kubernetes/network"

# kubernetes service IP (pre-allocated; usually the first IP in SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.254.0.2"

# cluster DNS domain
CLUSTER_DNS_DOMAIN="cluster.local."

# master API server address
MASTER_URL="k8s-api.virtual.local"
```
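Rather than reusing the token above, you may want to generate your own before saving env.sh. A minimal sketch (the grep check simply validates the expected 32-hex-character format):

```shell
# Generate a fresh 16-byte bootstrap token and sanity-check its format
BOOTSTRAP_TOKEN="$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')"
if echo "$BOOTSTRAP_TOKEN" | grep -Eq '^[0-9a-f]{32}$'; then
  echo "token OK: $BOOTSTRAP_TOKEN"
else
  echo "unexpected token format" >&2
fi
```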
Save the variables above as env.sh, then copy the script into the /usr/k8s/bin directory on every machine.
```bash
$ mkdir -pv /usr/k8s/bin
# env.sh is created on Master01 here and then copied to the other 4 servers
$ scp /usr/k8s/bin/env.sh root@10.100.4.182:/usr/k8s/bin/
$ scp /usr/k8s/bin/env.sh root@10.100.4.183:/usr/k8s/bin/
$ scp /usr/k8s/bin/env.sh root@10.100.4.184:/usr/k8s/bin/
$ scp /usr/k8s/bin/env.sh root@10.100.4.185:/usr/k8s/bin/
```
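The repeated scp commands can be collapsed into one loop. A sketch (it only prints the commands; drop the leading `echo` to actually perform the copies):

```shell
# Print the scp commands that push env.sh to every other node
# (IPs from the table above; adjust to your environment)
distribute_env() {
  local ip
  for ip in 10.100.4.182 10.100.4.183 10.100.4.184 10.100.4.185; do
    echo scp /usr/k8s/bin/env.sh "root@${ip}:/usr/k8s/bin/"
  done
}
distribute_env
```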
To ease future migration, we define a cluster-internal domain name for the apiserver and add a record to the /etc/hosts file on every node: `10.100.4.181 k8s-api.virtual.local k8s-api`
```bash
$ vim /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
10.100.4.181 k8s-api.virtual.local k8s-api
```
Here 10.100.4.181 is master01's IP; for now we use it directly as the apiserver load-balancer address.
Kubernetes components encrypt their communication with TLS certificates. We use CloudFlare's PKI toolkit cfssl to generate the Certificate Authority (CA) certificate and key. The CA certificate is self-signed and is used to sign all other TLS certificates created later.
Install on Master01, then copy the binaries to the /usr/k8s/bin/ directory on all other servers.
```bash
$ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
$ chmod +x cfssl_linux-amd64
$ sudo mv cfssl_linux-amd64 /usr/k8s/bin/cfssl
$ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
$ chmod +x cfssljson_linux-amd64
$ sudo mv cfssljson_linux-amd64 /usr/k8s/bin/cfssljson
$ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
$ chmod +x cfssl-certinfo_linux-amd64
$ sudo mv cfssl-certinfo_linux-amd64 /usr/k8s/bin/cfssl-certinfo
$ export PATH=/usr/k8s/bin:$PATH
$ scp /usr/k8s/bin/cfssl* root@10.100.4.182:/usr/k8s/bin/
$ scp /usr/k8s/bin/cfssl* root@10.100.4.183:/usr/k8s/bin/
$ scp /usr/k8s/bin/cfssl* root@10.100.4.184:/usr/k8s/bin/
$ scp /usr/k8s/bin/cfssl* root@10.100.4.185:/usr/k8s/bin/
```
For convenience, add /usr/k8s/bin to the PATH. To make this survive a reboot, put the line `export PATH=/usr/k8s/bin:$PATH` into the /etc/profile.d/k8s.sh file.
Create the ca-config.json file
```bash
$ mkdir ssl && cd ssl
$ cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
Create the ca-csr.json file
```bash
$ cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
Generate the CA certificate and private key:
```bash
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
```
Copy the generated CA certificate, key, and config file into the /etc/kubernetes/ssl directory on all machines:
```bash
$ sudo mkdir -pv /etc/kubernetes/ssl
$ sudo cp -v ca* /etc/kubernetes/ssl
$ ls /etc/kubernetes/ssl/
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
# copy the certificates to all machines
$ scp /etc/kubernetes/ssl/ca* root@10.100.4.182:/etc/kubernetes/ssl/
$ scp /etc/kubernetes/ssl/ca* root@10.100.4.183:/etc/kubernetes/ssl/
$ scp /etc/kubernetes/ssl/ca* root@10.100.4.184:/etc/kubernetes/ssl/
$ scp /etc/kubernetes/ssl/ca* root@10.100.4.185:/etc/kubernetes/ssl/
```
Kubernetes stores all its data in etcd. Here we deploy a 3-node etcd cluster, reusing the master01, master02, and node01 machines, named etcd01, etcd02, and etcd03 respectively.

The variables used are:
```bash
$ cat > /usr/k8s/bin/etcd_env.sh <<EOF
export NODE_NAME=etcd01    # name of the machine being deployed (any unique name will do)
export NODE_IP=10.100.4.181    # IP of the machine being deployed
export NODE_IPS="10.100.4.181 10.100.4.182 10.100.4.183"    # IPs of all etcd cluster machines
# IPs and ports used for etcd peer communication
export ETCD_NODES=etcd01=https://10.100.4.181:2380,etcd02=https://10.100.4.182:2380,etcd03=https://10.100.4.183:2380
EOF
$ source /usr/k8s/bin/etcd_env.sh
# import the other global variables used: ETCD_ENDPOINTS, FLANNEL_ETCD_PREFIX, CLUSTER_CIDR
$ source /usr/k8s/bin/env.sh
```
Note: set these variables on all three etcd servers, changing NODE_NAME and NODE_IP accordingly.
Download the latest release binaries from https://github.com/coreos/etcd/releases:
```bash
$ cd /usr/local/src/
$ wget https://github.com/coreos/etcd/releases/download/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz
$ tar -xvf etcd-v3.2.9-linux-amd64.tar.gz
$ sudo mv etcd-v3.2.9-linux-amd64/etcd* /usr/k8s/bin/
$ ls /usr/k8s/bin/etcd*
/usr/k8s/bin/etcd  /usr/k8s/bin/etcdctl  /usr/k8s/bin/etcd_env.sh
```
以上操做在三臺 ETCD 服務器都要操做。
To secure communication, traffic between clients (such as etcdctl) and the etcd cluster, as well as between etcd members, must be TLS-encrypted.

Create the etcd certificate signing request:
```bash
$ cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "${NODE_IP}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
The hosts field lists the etcd node IPs authorized to use this certificate. Generate the etcd certificate and private key:
```bash
$ cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
$ ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
$ sudo mkdir -pv /etc/etcd/ssl
$ sudo mv etcd*.pem /etc/etcd/ssl/
```
以上操做在三臺 ETCD 服務器都要操做。
# 必需要先建立工做目錄,生產中建議是單獨的磁盤做爲數據存儲目錄 $ sudo mkdir -pv /var/lib/etcd $ cat > etcd.service <<EOF [Unit] Description=Etcd Server After=network.target After=network-online.target Wants=network-online.target Documentation=https://github.com/coreos [Service] Type=notify WorkingDirectory=/var/lib/etcd/ ExecStart=/usr/k8s/bin/etcd \\ --name=${NODE_NAME} \\ --cert-file=/etc/etcd/ssl/etcd.pem \\ --key-file=/etc/etcd/ssl/etcd-key.pem \\ --peer-cert-file=/etc/etcd/ssl/etcd.pem \\ --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\ --trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\ --peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \\ --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\ --listen-peer-urls=https://${NODE_IP}:2380 \\ --listen-client-urls=https://${NODE_IP}:2379,http://127.0.0.1:2379 \\ --advertise-client-urls=https://${NODE_IP}:2379 \\ --initial-cluster-token=etcd-cluster-0 \\ --initial-cluster=${ETCD_NODES} \\ --initial-cluster-state=new \\ --data-dir=/var/lib/etcd Restart=on-failure RestartSec=5 LimitNOFILE=65536 [Install] WantedBy=multi-user.target EOF
```bash
mv etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
```
The first etcd process started will appear to hang for a while, waiting for the other members to join. Repeat the steps above on every etcd node until the etcd service is running on all of them.

After the etcd cluster is deployed, run the following on any etcd node:
```bash
for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 /usr/k8s/bin/etcdctl \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/etcd/ssl/etcd.pem \
    --key=/etc/etcd/ssl/etcd-key.pem \
    endpoint health
done
```
The output should look like:

```bash
https://10.100.4.181:2379 is healthy: successfully committed proposal: took = 1.778779ms
https://10.100.4.182:2379 is healthy: successfully committed proposal: took = 1.982324ms
https://10.100.4.183:2379 is healthy: successfully committed proposal: took = 1.730901ms
```
All three etcd endpoints report healthy, so the cluster is working correctly.
kubectl reads the kube-apiserver address, certificates, and user information from the ~/.kube/config file by default; that file must be configured correctly for kubectl to work.
Copy the downloaded kubectl binary and the generated ~/.kube/config file to every machine that needs to run kubectl (here, to all machines).
注意:如下操做步驟都在Master01 服務器上操做,須要複製到其它4臺服務器上的文件會有說明和執行命令。
```bash
$ source /usr/k8s/bin/env.sh
$ export KUBE_APISERVER="https://${MASTER_URL}:6443"
```
Note the KUBE_APISERVER address: haproxy is not installed yet, so for now we must point directly at the apiserver's port 6443. Once haproxy is in place, port 443 will be forwarded to 6443.
Download from: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.9.md#v197

If the server cannot reach the download URL, download locally and upload the file with rz.
```bash
$ wget https://dl.k8s.io/v1.9.7/kubernetes-client-linux-amd64.tar.gz
$ tar -xzvf kubernetes-client-linux-amd64.tar.gz
$ sudo cp -v kubernetes/client/bin/kube* /usr/k8s/bin/
$ sudo chmod a+x /usr/k8s/bin/kube*
$ source /etc/profile.d/k8s.sh
# copy kubectl to the other nodes
$ scp /usr/k8s/bin/kubectl root@10.100.4.182:/usr/k8s/bin/
$ scp /usr/k8s/bin/kubectl root@10.100.4.183:/usr/k8s/bin/
$ scp /usr/k8s/bin/kubectl root@10.100.4.184:/usr/k8s/bin/
$ scp /usr/k8s/bin/kubectl root@10.100.4.185:/usr/k8s/bin/
```
kubectl talks to kube-apiserver over the secure port, which requires a TLS certificate and key. Create the admin certificate signing request:
```bash
$ cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
```
Generate the admin certificate and private key:
```bash
$ cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin
$ ls admin*
admin.csr  admin-csr.json  admin-key.pem  admin.pem
$ sudo mv admin*.pem /etc/kubernetes/ssl/
# copy to the other 4 servers
$ scp /etc/kubernetes/ssl/admin* root@10.100.4.182:/etc/kubernetes/ssl/
$ scp /etc/kubernetes/ssl/admin* root@10.100.4.183:/etc/kubernetes/ssl/
$ scp /etc/kubernetes/ssl/admin* root@10.100.4.184:/etc/kubernetes/ssl/
$ scp /etc/kubernetes/ssl/admin* root@10.100.4.185:/etc/kubernetes/ssl/
```
```bash
# set cluster parameters
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER}
# set client credentials
$ kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --embed-certs=true \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --token=${BOOTSTRAP_TOKEN}
# set context parameters
$ kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin
# set the default context
$ kubectl config use-context kubernetes
```
The kubeconfig is saved to ~/.kube/config. Copy that file into the ~/.kube/ directory of every machine that will run kubectl.
```bash
# create the ~/.kube directory on the other 4 servers
$ mkdir ~/.kube
# copy the ~/.kube/config file to the other 4 servers
$ scp .kube/config root@10.100.4.182:~/.kube/
$ scp .kube/config root@10.100.4.183:~/.kube/
$ scp .kube/config root@10.100.4.184:~/.kube/
$ scp .kube/config root@10.100.4.185:~/.kube/
```
This needs to be installed on all Node machines.
```bash
$ export NODE_IP=10.100.4.183    # IP of the node being deployed
# import the global variables
$ source /usr/k8s/bin/env.sh
```
The etcd cluster has mutual TLS authentication enabled, so flanneld needs the CA certificate and a key pair to talk to etcd.

Create the flanneld certificate signing request:
```bash
$ cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
Generate the flanneld certificate and private key:
```bash
$ export PATH=/usr/k8s/bin:$PATH
$ cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
$ ls flanneld*
flanneld.csr  flanneld-csr.json  flanneld-key.pem  flanneld.pem
# create the certificate directory on all servers, including the masters
$ sudo mkdir -pv /etc/flanneld/ssl
$ sudo mv flanneld*.pem /etc/flanneld/ssl
$ ls /etc/flanneld/ssl
flanneld-key.pem  flanneld.pem
# copy the flannel certificate and key to the two Master nodes
$ scp /etc/flanneld/ssl/flanneld*.pem root@10.100.4.181:/etc/flanneld/ssl/
$ scp /etc/flanneld/ssl/flanneld*.pem root@10.100.4.182:/etc/flanneld/ssl/
```
This step is only needed the first time the Flannel network is deployed; when deploying flanneld on further nodes there is no need to write this configuration again.

Run it on etcd03, i.e. the node01 machine.
```bash
$ etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  set ${FLANNEL_ETCD_PREFIX}/config '{"Network":"'${CLUSTER_CIDR}'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
# which echoes back:
{"Network":"172.30.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}
```
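Because the config value is assembled via shell quoting, a typo easily produces invalid JSON that flanneld will reject. A quick sketch that builds the same string and validates it before writing it to etcd (CLUSTER_CIDR is hard-coded here to match env.sh; python3 is assumed to be available):

```shell
# Build the flannel network config and sanity-check it is valid JSON
CLUSTER_CIDR="172.30.0.0/16"
FLANNEL_CONFIG='{"Network":"'"${CLUSTER_CIDR}"'", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
echo "$FLANNEL_CONFIG" | python3 -m json.tool >/dev/null && echo "valid JSON"
echo "$FLANNEL_CONFIG"
```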
Download the latest flanneld binary from the flanneld release page.
```bash
$ cd /usr/local/src && mkdir flannel
$ wget https://github.com/coreos/flannel/releases/download/v0.9.0/flannel-v0.9.0-linux-amd64.tar.gz
$ tar -xzvf flannel-v0.9.0-linux-amd64.tar.gz -C flannel
$ sudo cp flannel/{flanneld,mk-docker-opts.sh} /usr/k8s/bin
```
Create the flanneld systemd unit file
```bash
cat > flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/usr/k8s/bin/flanneld \\
  -etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  -etcd-certfile=/etc/flanneld/ssl/flanneld.pem \\
  -etcd-keyfile=/etc/flanneld/ssl/flanneld-key.pem \\
  -etcd-endpoints=${ETCD_ENDPOINTS} \\
  -etcd-prefix=${FLANNEL_ETCD_PREFIX}
ExecStartPost=/usr/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
```
The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into the /run/flannel/docker file; when docker starts later, it uses the values in this file to configure the docker0 bridge.

flanneld communicates with the other nodes over the interface that carries the system's default route. On machines with multiple interfaces (internal and public), the –iface option selects the interface to use (the systemd unit file above does not set this option).
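For reference, after flanneld has started, /run/flannel/docker typically contains values along these lines (the subnet and MTU shown here are illustrative; yours depend on the subnet flanneld leased):

```
DOCKER_OPT_BIP="--bip=172.30.43.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.43.1/24 --ip-masq=true --mtu=1450"
```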
```bash
cp -v flanneld.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
ifconfig flannel.1
```
Run on any etcd node:
```bash
# view the cluster Pod CIDR (/16)
$ etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/config
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }

# list the allocated Pod subnets (/24)
$ etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets
/kubernetes/network/subnets/172.30.43.0-24
/kubernetes/network/subnets/172.30.24.0-24
/kubernetes/network/subnets/172.30.40.0-24

# view the flanneld listen IP and network parameters for one Pod subnet
$ etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  get ${FLANNEL_ETCD_PREFIX}/subnets/172.30.43.0-24
{"PublicIP":"10.100.4.185","BackendType":"vxlan","BackendData":{"VtepMAC":"82:bb:54:d4:29:36"}}
```
After flanneld has been deployed on every node, list the allocated Pod subnets:
```bash
$ etcdctl \
  --endpoints=${ETCD_ENDPOINTS} \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/flanneld/ssl/flanneld.pem \
  --key-file=/etc/flanneld/ssl/flanneld-key.pem \
  ls ${FLANNEL_ETCD_PREFIX}/subnets
/kubernetes/network/subnets/172.30.43.0-24
/kubernetes/network/subnets/172.30.24.0-24
/kubernetes/network/subnets/172.30.40.0-24
```
The three Node machines were assigned the Pod subnets 172.30.43.0-24, 172.30.24.0-24, and 172.30.40.0-24.
A kubernetes master node comprises the following components.

For now, all three components are deployed on the same machine (highly available masters come later):

kube-scheduler, kube-controller-manager, and kube-apiserver are tightly coupled in function;

only one kube-scheduler and one kube-controller-manager process may be active at a time; when running multiple, a leader must be elected.

Note: the following steps must be performed on both master01 and master02.
```bash
$ export NODE_IP=10.100.4.181    # IP of the master machine being deployed
$ source /usr/k8s/bin/env.sh
```
Download the latest release from the kubernetes changelog page:
```bash
$ cd /usr/local/src
$ wget https://dl.k8s.io/v1.9.7/kubernetes-server-linux-amd64.tar.gz
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
```
Copy the binaries into the /usr/k8s/bin directory:

```bash
$ sudo cp -rv kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} /usr/k8s/bin/
```
Create the kubernetes certificate signing request:
```bash
cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "${NODE_IP}",
    "${MASTER_URL}",
    "${CLUSTER_KUBERNETES_SVC_IP}",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
Generate the kubernetes certificate and private key:
```bash
$ cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes
$ ls kubernetes*
kubernetes.csr  kubernetes-csr.json  kubernetes-key.pem  kubernetes.pem
$ sudo mkdir -pv /etc/kubernetes/ssl/
$ sudo mv kubernetes*.pem /etc/kubernetes/ssl/
```
Create the client token file used by kube-apiserver

On first start, kubelet sends a TLS bootstrapping request to kube-apiserver, which checks that the token in the request matches the one configured in token.csv; if it matches, certificates and keys are automatically issued to the kubelet.
```bash
# the env.sh file sourced earlier defines the BOOTSTRAP_TOKEN variable
$ cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
$ sudo mv token.csv /etc/kubernetes/
```
Create the kube-apiserver systemd unit file
```bash
cat > kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/k8s/bin/kube-apiserver \\
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --advertise-address=${NODE_IP} \\
  --bind-address=0.0.0.0 \\
  --insecure-bind-address=${NODE_IP} \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
  --kubelet-https=true \\
  --enable-bootstrap-token-auth \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --service-node-port-range=${NODE_PORT_RANGE} \\
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \\
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \\
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \\
  --etcd-servers=${ETCD_ENDPOINTS} \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=2 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/lib/audit.log \\
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \\
  --event-ttl=1h \\
  --logtostderr=true \\
  --v=6
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
The audit policy file (/etc/kubernetes/audit-policy.yaml) contains:
```yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
```
Start kube-apiserver (for now only on the Master01 node)
```bash
cp kube-apiserver.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
```
Create the kube-controller-manager systemd unit file
```bash
cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/k8s/bin/kube-controller-manager \\
  --address=127.0.0.1 \\
  --master=http://${MASTER_URL}:8080 \\
  --allocate-node-cidrs=true \\
  --service-cluster-ip-range=${SERVICE_CIDR} \\
  --cluster-cidr=${CLUSTER_CIDR} \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --root-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Start kube-controller-manager (for now only on the Master01 node)
```bash
cp kube-controller-manager.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
```
Create the kube-scheduler systemd unit file
```bash
cat > kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/k8s/bin/kube-scheduler \\
  --address=127.0.0.1 \\
  --master=http://${MASTER_URL}:8080 \\
  --leader-elect=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Start kube-scheduler (for now only on the Master01 node)
```bash
cp kube-scheduler.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
```
```bash
$ kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
```
```bash
# start apiserver
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
# controller-manager
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
# kube-scheduler
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
```
Install kube-apiserver, kube-controller-manager, and kube-scheduler on master01 and master02 as described above. At this point we still address the apiserver by its explicit 6443 and 8080 ports, because the domain k8s-api.virtual.local points at master01, which is not yet reachable on the default http and https ports; haproxy will handle those requests.

In other words: we forward the default http port 80 to the apiserver's port 8080, and the default https port 443 to the apiserver's port 6443. That is exactly what haproxy is used for here.
Install on both Master nodes:

```bash
$ yum install -y haproxy
```
Some in-cluster components reach the apiserver through the insecure port and some through the secure port, so we must proxy both http and https. The config file is /etc/haproxy/haproxy.cfg:
```
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen stats
    bind    *:9000
    mode    http
    stats   enable
    stats   hide-version
    stats   uri /stats
    stats   refresh 30s
    stats   realm Haproxy\ Statistics
    stats   auth Admin:Password

frontend k8s-api
    bind 10.100.4.181:443    # change to 10.100.4.182 on Master02
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-api-1 10.100.4.181:6443 check
    server k8s-api-2 10.100.4.182:6443 check

frontend k8s-http-api
    bind 10.100.4.181:80    # change to 10.100.4.182 on Master02
    mode tcp
    option tcplog
    default_backend k8s-http-api

backend k8s-http-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-http-api-1 10.100.4.181:8080 check
    server k8s-http-api-2 10.100.4.182:8080 check
```
As the configuration shows, https requests are forwarded to the apiserver's port 6443, and http requests to its port 8080.
```bash
$ vim /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp       # uncomment
$UDPServerRun 514    # uncomment
# add the following line below local7.*
local2.*             /var/log/haproxy.log
```
Restart rsyslog, then start haproxy:

```bash
systemctl restart rsyslog
systemctl start haproxy
systemctl enable haproxy
systemctl status haproxy
```
The haproxy stats page on port 9000 (10.100.4.181:9000/stats) now shows haproxy's running state.

haproxy can now proxy the apiservers on both masters, but this is not yet highly available: if the master01 node goes down, haproxy stops serving. There are two ways to achieve high availability.
Option 1: use a public cloud SLB

This is the least effort: create an internal SLB on Alibaba Cloud, add master01 and master02 to the SLB server group, and forward ports 80 (http) and 443 (https) (but note the caveat below).

Note: Alibaba Cloud load balancing is layer-4 TCP, and a backend ECS instance cannot act both as a real server and as a client sending requests to the same load-balancer instance: return packets are forwarded only inside the cloud servers and do not pass through the load balancer, so accessing the load balancer's service address from a backend ECS instance does not work. In practice this means you cannot use the SLB from an apiserver node (for example, installing kubectl on an apiserver node and pointing it at the SLB address), since that would create a loop. The simple workaround is to use two separate new nodes as HA instances and add those to the SLB server group.
Option 2: use keepalived

KeepAlived is a high-availability solution based on a VIP (virtual IP) and heartbeat detection. A pair of servers takes the Master and Backup roles; by default the Master binds the VIP to its NIC and serves traffic. Master and Backup exchange heartbeat packets at a fixed interval (typically 2 seconds) to check each other's state. If the Backup detects the Master is down, it sends an ARP packet to the gateway and binds the VIP to its own NIC, taking over service and achieving automatic failover; when the Master recovers, it reclaims the service. This closely resembles the Virtual Router Redundancy Protocol (VRRP) used by routers.
Enable IP forwarding. We use 10.100.4.186 as the virtual IP:
```bash
$ vi /etc/sysctl.conf
# add the following
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
# apply the settings
$ sysctl -p
# verify
$ cat /proc/sys/net/ipv4/ip_forward
1
```
Install keepalived:

```bash
$ yum install -y keepalived
```
Here master01 acts as the Master and master02 as the Backup.

Master01 config file
```bash
$ vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from haadmin@buhui.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node1
}

# haproxy health-check script: if `killall -0 haproxy` fails,
# the node's priority is lowered by 5
vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight -5
}

vrrp_script chk_apiserver {
    script "killall -0 kube-apiserver"
    interval 2
    weight -5
}

vrrp_instance VI_1 {
    state MASTER
    interface eno16777728
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.100.4.186
    }
    # run the scripts defined with vrrp_script
    track_script {
        chk_haproxy
        chk_apiserver
    }
}

virtual_server 10.100.4.186 80 {
    delay_loop 5
    lvs_sched wlc
    lvs_method NAT
    persistence_timeout 1800
    protocol TCP

    real_server 10.100.4.181 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}

virtual_server 10.100.4.186 443 {
    delay_loop 5
    lvs_sched wlc
    lvs_method NAT
    persistence_timeout 1800
    protocol TCP

    real_server 10.100.4.181 443 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}
```
Master02 config file

```bash
! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from haadmin@buhui.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id node1
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight -5
}

vrrp_script chk_apiserver {
    script "killall -0 kube-apiserver"
    interval 2
    weight -5
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16777728
    virtual_router_id 51
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.100.4.186
    }
    # run the scripts defined with vrrp_script
    track_script {
        chk_haproxy
        chk_apiserver
    }
}

virtual_server 10.100.4.186 80 {
    delay_loop 5
    lvs_sched wlc
    lvs_method NAT
    persistence_timeout 1800
    protocol TCP

    real_server 10.100.4.182 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}

virtual_server 10.100.4.186 443 {
    delay_loop 5
    lvs_sched wlc
    lvs_method NAT
    persistence_timeout 1800
    protocol TCP

    real_server 10.100.4.182 443 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}
```
Start keepalived

```bash
systemctl start keepalived
systemctl enable keepalived
systemctl status keepalived
# follow the logs
journalctl -f -u keepalived
```
Verify the virtual IP

Run on the Master01 node:

```bash
# the VIP does not show up in `ifconfig -a`; use `ip addr`
[root@k8s-master01 keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno16777728: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:47:0a:db brd ff:ff:ff:ff:ff:ff
    inet 10.100.4.181/24 brd 10.100.4.255 scope global eno16777728
       valid_lft forever preferred_lft forever
    inet 10.100.4.186/32 scope global eno16777728
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe47:adb/64 scope link
       valid_lft forever preferred_lft forever
```
Remove the 6443 port from the server address in the kubectl-generated config file (~/.kube/config), and remove the 8080 port from the --master parameter in /etc/systemd/system/kube-controller-manager.service and /etc/systemd/system/kube-scheduler.service; then restart the two components:

```bash
# controller-manager
systemctl daemon-reload
systemctl restart kube-controller-manager
systemctl status kube-controller-manager
# kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler
```
Verify the apiserver: stop the kube-apiserver process on the master01 node, then check whether the virtual IP fails over to the master02 node.
Then update the domain record we set in /etc/hosts in the first step to point at the virtual IP.
Verify the cluster state
```bash
[root@k8s-master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
```
Stop the kube-apiserver service on the Master01 node:

```bash
$ systemctl stop kube-apiserver
```
Verify that the VIP has moved to the Master02 node and fetch the cluster state:
```bash
[root@k8s-master02 ~]# ip a | grep 186
    inet 10.100.4.186/32 scope global eno16777728
[root@k8s-master02 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
```
A kubernetes Node runs the following components.

Run on all three Node machines:
```bash
$ source /usr/k8s/bin/env.sh
$ export KUBE_APISERVER="https://${MASTER_URL}"    # if haproxy is not installed, keep using port 6443 here
$ export NODE_IP=10.100.4.183    # IP of the Node being deployed
```
Install and configure flanneld as described above; we have already done so on the three Node machines.
Edit the /etc/sysctl.conf file and add the following rules:

```bash
$ vim /etc/sysctl.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
```
Apply immediately:

```bash
$ sysctl -p
```
If sysctl -p reports:

```bash
$ sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
```
Fix: SELinux must be set to disabled (`getenforce` should print `Disabled`); load the `br_netfilter` kernel module, then run `sysctl -p` again:

```bash
$ modprobe br_netfilter
$ sysctl -p
```
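Once the module is loaded, the bridge keys become visible under `/proc`. A quick sanity check (a sketch; the `show_sysctl_key` helper name is ours, not from the original):

```shell
# Print a sysctl key's current value, or flag it as missing
# (a missing bridge key usually means br_netfilter is not loaded).
show_sysctl_key() {
  if [ -f "$1" ]; then
    echo "$1 = $(cat "$1")"
  else
    echo "$1 missing (is br_netfilter loaded?)"
  fi
}

show_sysctl_key /proc/sys/net/ipv4/ip_forward
show_sysctl_key /proc/sys/net/bridge/bridge-nf-call-iptables
show_sysctl_key /proc/sys/net/bridge/bridge-nf-call-ip6tables
```

Note that `modprobe` does not survive a reboot; on systemd hosts you would also drop `br_netfilter` into a file under `/etc/modules-load.d/` to make it persistent.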
You can install docker from binaries or via `yum install`, and then modify docker's systemd unit file. First check the filesystem: on an xfs filesystem, docker's default storage driver is devicemapper. To use `overlay2`, the xfs filesystem must have `ftype=1`. Check the xfs ftype with:

```bash
$ xfs_info /var/
```
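If you want to script this check, the ftype value can be pulled out of the `xfs_info` output with awk (a sketch; the `xfs_ftype` helper name is ours):

```shell
# Extract the ftype value (0 or 1) from xfs_info output on stdin.
xfs_ftype() {
  awk 'match($0, /ftype=[01]/) { print substr($0, RSTART + 6, 1); exit }'
}

# On a real host:  xfs_info /var | xfs_ftype
# Prints 1 when overlay2 is usable on this filesystem.
```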
Since my system is freshly installed and the partition holds no files yet, I can simply reformat it with `ftype=1`. Here is how to format a new partition with `ftype=1`:

```bash
$ mkfs.xfs -f -n ftype=1 /dev/vdb
```
Then mount this dedicated partition on `/var/lib/docker` as docker's working directory:

```bash
$ mount /dev/vdb /data/
$ mkdir /data/docker
$ ln -sv /data/docker/ /var/lib/docker
```
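The `mount` above does not survive a reboot; to persist it you would also need an `/etc/fstab` entry along these lines (a sketch assuming the `/dev/vdb` device and `/data` mount point used above):

```
/dev/vdb  /data  xfs  defaults  0 0
```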
Install Docker
```bash
$ sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ yum -y install docker-ce
```

**Modify docker's systemd unit file**

```bash
$ vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd --log-level=info $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
```
Start docker:

```bash
systemctl daemon-reload
systemctl stop firewalld
systemctl disable firewalld
systemctl enable docker
systemctl start docker
systemctl status docker
```
Check that the `docker0` interface is on the same network as the `flannel.1` interface:

```bash
$ ifconfig flannel.1
$ ifconfig docker0
```
To speed up image pulls you can use a registry mirror located in China, and raise the download concurrency. (If dockerd is already running, restart it for the change to take effect.)

```bash
$ vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "max-concurrent-downloads": 10
}
# restart docker
systemctl restart docker.service
```
Check docker's storage driver
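One way to check is to parse the `docker info` output (a sketch; the `storage_driver` helper name is ours — after the xfs preparation above you would expect it to print `overlay2`):

```shell
# Print the value of the "Storage Driver" line from `docker info` output on stdin.
storage_driver() {
  awk -F': ' '/Storage Driver/ { print $2; exit }'
}

# On a real host:  docker info 2>/dev/null | storage_driver
```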
When kubelet starts it sends a TLS bootstrapping request to kube-apiserver, so the kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper role; only then does kubelet have permission to create certificate signing requests (certificatesigningrequests):
kubelet runs on the Node nodes, so this step is performed on all Nodes; if you want to treat your Masters as Nodes as well, you can of course install it on the Master nodes too.
Operate on the Master01 node:

```bash
[root@k8s-master01 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding "kubelet-bootstrap" created
```
Create an RBAC authorization rule for Node requests:

```bash
[root@k8s-master01 ~]# kubectl create clusterrolebinding kubelet-nodes --clusterrole=system:node --group=system:nodes
clusterrolebinding "kubelet-nodes" created
```
Then download the latest kubelet and kube-proxy binaries (they are also in the kubernetes directory downloaded earlier):
Install kubelet on the three Node nodes:

```bash
$ cd /usr/local/src
$ wget https://dl.k8s.io/v1.9.7/kubernetes-server-linux-amd64.tar.gz
$ tar -xzvf kubernetes-server-linux-amd64.tar.gz
$ cd kubernetes
$ tar -xzvf kubernetes-src.tar.gz
$ sudo cp -rv ./server/bin/{kube-proxy,kubelet} /usr/k8s/bin/
```
On the three Node nodes:

```bash
$ # set cluster parameters
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
$ # set client credentials
$ kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
$ # set context parameters
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
$ # set the default context
$ kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
$ mv bootstrap.kubeconfig /etc/kubernetes/
```
`--embed-certs=true` embeds the certificate-authority certificate into the generated bootstrap.kubeconfig file;
No key or certificate is specified when setting the kubelet client credentials; kube-apiserver generates them automatically later;
**Check bootstrap.kubeconfig**

```bash
$ cat /etc/kubernetes/bootstrap.kubeconfig
```
Create the kubelet systemd unit file:

```bash
$ sudo mkdir /var/lib/kubelet  # the working directory must exist first
cat > kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/k8s/bin/kubelet \\
  --fail-swap-on=false \\
  --cgroup-driver=cgroupfs \\
  --address=${NODE_IP} \\
  --hostname-override=${NODE_IP} \\
  --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --require-kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cluster-dns=${CLUSTER_DNS_SVC_IP} \\
  --cluster-domain=${CLUSTER_DNS_DOMAIN} \\
  --hairpin-mode promiscuous-bridge \\
  --allow-privileged=true \\
  --serialize-image-pulls=false \\
  --logtostderr=true \\
  --v=2 \\
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Start kubelet:

```bash
$ mv kubelet.service /etc/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
```
When kubelet starts for the first time it sends a certificate signing request to kube-apiserver; the Node is added to the cluster only after the request is approved. List the unapproved CSR requests:
Operate on the Master01 node:

```bash
$ kubectl get csr
$ kubectl get nodes
No resources found.
```
Approve the CSR requests:

```bash
$ for i in `kubectl get csr|awk '{print $1}'|grep -v "NAME"`;do kubectl certificate approve $i;done
# list the Nodes
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
10.100.4.183   Ready     <none>    2m        v1.9.7
10.100.4.184   Ready     <none>    39s       v1.9.7
10.100.4.185   Ready     <none>    2m        v1.9.7
```
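The loop above approves every row in the CSR list. A slightly safer variant approves only CSRs still in the `Pending` condition (a sketch; the `pending_csrs` helper name is ours, and it assumes the v1.9 `kubectl get csr` layout where CONDITION is the last column):

```shell
# Print the names of CSRs whose last column reads exactly "Pending",
# skipping the header row.
pending_csrs() {
  awk 'NR > 1 && $NF == "Pending" { print $1 }'
}

# On a real master:
#   kubectl get csr | pending_csrs | xargs -r -n1 kubectl certificate approve
```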
The kubelet kubeconfig file and the key pair were generated automatically:

```bash
[root@k8s-node01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw-------. 1 root root 2283 5月  4 17:16 /etc/kubernetes/kubelet.kubeconfig
[root@k8s-node01 ~]# ls -l /etc/kubernetes/ssl/kubelet*
-rw-r--r--. 1 root root 1046 5月  4 17:16 /etc/kubernetes/ssl/kubelet-client.crt
-rw-------. 1 root root  227 5月  4 17:15 /etc/kubernetes/ssl/kubelet-client.key
-rw-r--r--. 1 root root 1111 5月  4 17:02 /etc/kubernetes/ssl/kubelet.crt
-rw-------. 1 root root 1675 5月  4 17:02 /etc/kubernetes/ssl/kubelet.key
```
Create the kube-proxy certificate signing request on the three Node nodes:

```bash
$ cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
Generate the kube-proxy client certificate and private key:

```bash
$ cfssl gencert -ca=/etc/kubernetes/ssl/ca.pem \
  -ca-key=/etc/kubernetes/ssl/ca-key.pem \
  -config=/etc/kubernetes/ssl/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
$ ls kube-proxy*
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
$ sudo mv kube-proxy*.pem /etc/kubernetes/ssl/
```
Create the kube-proxy kubeconfig file:

```bash
$ # set cluster parameters
$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
$ # set client credentials
$ kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
  --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
$ # set context parameters
$ kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
$ # set the default context
$ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
$ mv kube-proxy.kubeconfig /etc/kubernetes/
```
Create the kube-proxy systemd unit file:

```bash
$ sudo mkdir -pv /var/lib/kube-proxy  # the working directory must exist first
cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/k8s/bin/kube-proxy \\
  --bind-address=${NODE_IP} \\
  --hostname-override=${NODE_IP} \\
  --cluster-cidr=${SERVICE_CIDR} \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --logtostderr=true \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```
Start kube-proxy:

```bash
$ mv kube-proxy.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
```
On the Master01 node, define the yaml file (save the following as nginx-ds.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
Create the Pod and Service:

```bash
[root@k8s-master01 pod]# kubectl create -f nginx-ds.yaml
service "nginx-ds" created
daemonset "nginx-ds" created
```
Check the Pods and SVC:

```bash
[root@k8s-master01 pod]# kubectl get pods -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-ds-hzqm2   1/1       Running   0          2m        172.30.40.2   10.100.4.183
nginx-ds-jhhgb   1/1       Running   0          2m        172.30.43.2   10.100.4.185
nginx-ds-xf5qq   1/1       Running   0          2m        172.30.24.2   10.100.4.184
[root@k8s-master01 pod]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1       <none>        443/TCP        2h
nginx-ds     NodePort    10.254.136.253   <none>        80:32766/TCP   3m
```
From the output above: the service IP is 10.254.136.253, the service port is 80, and the assigned NodePort is 32766.
Run the following on every Node:

```bash
curl 10.254.136.253
curl 10.100.4.183:32766
```
Both commands are expected to print the nginx welcome page, which shows the Node nodes are running correctly.
Official manifests directory: kubernetes/cluster/addons/dns
```bash
$ mkdir /data/k8s/kubedns -pv
# create the kube-dns.yaml file
$ vim kube-dns.yaml
```

```yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# Warning: This is a file generated from the base underscore template file: kube-dns.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
```
Create everything from the file:

```bash
[root@k8s-master01 kubedns]# kubectl create -f kube-dns.yaml
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment "kube-dns" created
```
Verify kubedns
Create a new Deployment:

```bash
$ cd /data/app/pod
cat > my-nginx.yaml<<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF
$ kubectl create -f my-nginx.yaml
deployment "my-nginx" created
```
Expose the Deployment, creating the my-nginx service:

```bash
$ kubectl expose deploy my-nginx
[root@k8s-master01 pod]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1       <none>        443/TCP        2h
my-nginx     ClusterIP   10.254.51.165    <none>        80/TCP         3s
nginx-ds     NodePort    10.254.136.253   <none>        80:32766/TCP   13m
```
Then create another Pod and check whether its `/etc/resolv.conf` contains the `--cluster-dns` and `--cluster-domain` values configured for kubelet, and whether the my-nginx service resolves to the CLUSTER-IP 10.254.51.165 shown above:

```bash
$ cat > pod-nginx.yaml<<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
EOF
$ kubectl create -f pod-nginx.yaml
pod "nginx" created
$ kubectl exec nginx -i -t -- /bin/bash
root@nginx:/# cat /etc/resolv.conf
nameserver 10.254.0.2
search default.svc.cluster.local. svc.cluster.local. cluster.local.
options ndots:5
root@nginx:/# ping my-nginx
PING my-nginx.default.svc.cluster.local (10.254.51.165): 48 data bytes
^C--- my-nginx.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
root@nginx:/# ping kubernetes
PING kubernetes.default.svc.cluster.local (10.254.0.1): 48 data bytes
^C--- kubernetes.default.svc.cluster.local ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
```

Note that both names resolve to the expected cluster IPs, which is what this test verifies; the 100% packet loss itself is normal, because kube-proxy's Service virtual IPs do not answer ICMP.
Official manifests directory: kubernetes/cluster/addons/dashboard
The files used are:

```bash
$ ls *.yaml
dashboard-controller.yaml  dashboard-rbac.yaml  dashboard-service.yaml
```
Define a ServiceAccount named dashboard and bind it to the cluster-admin ClusterRole:

```bash
$ mkdir -pv /data/k8s/dashboard/ && cd /data/k8s/dashboard/
$ cat > dashboard-rbac.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dashboard
subjects:
- kind: ServiceAccount
  name: dashboard
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
```
Configure dashboard-controller.yaml:

```bash
cat > dashboard-controller.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: dashboard
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 9090
        args:
        - --heapster-host=http://heapster
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
EOF
```
Configure dashboard-service:

```bash
cat > dashboard-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
  type: NodePort
EOF
```
Apply all the manifests:

```bash
$ ls *.yaml
dashboard-controller.yaml  dashboard-rbac.yaml  dashboard-service.yaml
$ kubectl create -f .
deployment "kubernetes-dashboard" created
serviceaccount "dashboard" created
clusterrolebinding "dashboard" created
service "kubernetes-dashboard" created
```
Check the results
Check the assigned NodePort:

```bash
$ kubectl get services kubernetes-dashboard -n kube-system
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes-dashboard   NodePort   10.254.204.176   <none>        80:32092/TCP   49s
```
Check the controller:

```bash
$ kubectl get deployment kubernetes-dashboard -n kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-dashboard   1         1         1            1           1m
$ kubectl get pods -n kube-system | grep dashboard
kubernetes-dashboard-85f875c69c-mbljw   1/1       Running   0          2m
```
Access the dashboard
The kubernetes-dashboard service is exposed through a NodePort, so the dashboard can be reached at http://NodeIP:nodePort
Without the Heapster add-on, the dashboard cannot yet display CPU or memory metric graphs for Pods and Nodes.
Download the latest heapster from the heapster releases page:

```bash
$ cd /usr/local/src
$ wget https://github.com/kubernetes/heapster/archive/v1.4.3.tar.gz
$ tar -xzvf v1.4.3.tar.gz
```
Deployment manifests directory: /usr/local/src/heapster-1.4.3/deploy/kube-config

```bash
$ cd /usr/local/src/heapster-1.4.3/deploy/kube-config/
$ ls influxdb/
grafana.yaml  heapster.yaml  influxdb.yaml
$ ls rbac/
heapster-rbac.yaml
```
For easier test access, edit grafana.yaml and set the service type to type=NodePort.
Change the image addresses in influxdb.yaml, grafana.yaml and heapster.yaml to:

```
index.tenxcloud.com/jimmy/heapster-amd64:v1.3.0-beta.1
index.tenxcloud.com/jimmy/heapster-influxdb-amd64:v1.1.1
index.tenxcloud.com/jimmy/heapster-grafana-amd64:v4.0.2
```
Apply all the manifests:

```bash
$ kubectl create -f rbac/heapster-rbac.yaml
clusterrolebinding "heapster" created
$ kubectl create -f influxdb
deployment "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
```
Check the results
Check the Deployments:

```bash
$ kubectl get deployments -n kube-system | grep -E 'heapster|monitoring'
heapster              1         1         1            1           29m
monitoring-grafana    1         1         1            1           29m
monitoring-influxdb   1         1         1            1           29m
```
Check the Pods:

```bash
$ kubectl get pods -n kube-system | grep -E 'heapster|monitoring'
heapster-9bd589759-nz29g               1/1       Running   0          30m
monitoring-grafana-5c8d68cb94-xtszf    1/1       Running   0          30m
monitoring-influxdb-774cf8fcc6-b7qw7   1/1       Running   0          30m
```
Access grafana
Above, we changed grafana's Service to the NodePort type:

```bash
[root@k8s-master01 kube-config]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   10.254.170.2     <none>        80/TCP          30m
kube-dns               ClusterIP   10.254.0.2       <none>        53/UDP,53/TCP   1h
kubernetes-dashboard   NodePort    10.254.204.176   <none>        80:32092/TCP    48m
monitoring-grafana     NodePort    10.254.112.219   <none>        80:30879/TCP    30m
monitoring-influxdb    ClusterIP   10.254.109.148   <none>        8086/TCP        30m
```
So we can access grafana through any node's IP on port 30879.
Ingress is essentially an entry point for reaching the cluster from the outside: it forwards external requests to different Services inside the cluster. It is comparable to a load-balancing proxy such as nginx or apache, plus a set of routing rules; refreshing the routing information is the job of an Ingress controller.
An Ingress controller can be thought of as a watcher: it talks to kube-apiserver continuously, notices changes to backend services and pods in real time, and combines that information with the Ingress configuration to update the reverse-proxy load balancer, achieving service discovery. This is very similar to consul's consul-template.
```bash
$ mkdir /data/k8s/ingress
$ cd /data/k8s/ingress
cat > namespace.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
EOF
$ kubectl create -f namespace.yaml
namespace "ingress-nginx" created
```
```bash
cat > rbac.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<nginx>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-nginx"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
EOF
```
```bash
# note the quoted 'EOF': it stops the shell from treating $(POD_NAMESPACE)
# as a command substitution, so the literal text lands in the file
cat > deployment.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        image: lizhenliang/nginx-ingress-controller:0.9.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        # - --annotations-prefix=nginx.ingress.kubernetes.io
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
EOF
```
```bash
cat > default-backend.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
EOF
```
```bash
cat > tcp-services-configmap.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
EOF
```
```bash
cat > udp-services-configmap.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
EOF
```
```bash
$ kubectl create -f .
$ kubectl get pods -n ingress-nginx -o wide
NAME                                        READY     STATUS    RESTARTS   AGE       IP             NODE
default-http-backend-7ddd8d57f4-dtvgd       1/1       Running   0          7m        172.30.43.4    10.100.4.185
nginx-ingress-controller-7494c4c66d-9r6j5   1/1       Running   0          7m        10.100.4.184   10.100.4.184
```
Create nginxds-ingress.yaml to proxy the nginx-ds service we created earlier:

```bash
cat > nginxds-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hmdc
spec:
  rules:
  - host: test.nginxds.com
    http:
      paths:
      - backend:
          serviceName: nginx-ds
          servicePort: 80
EOF
```
Create the ingress:

```bash
$ kubectl create -f nginxds-ingress.yaml
ingress "hmdc" created
$ kubectl get ingress
NAME      HOSTS              ADDRESS   PORTS     AGE
hmdc      test.nginxds.com             80        6s
```
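You can also smoke-test the Ingress without touching any hosts file, by overriding the Host header in curl (a sketch; the `ingress_probe` helper name is ours, and 10.100.4.184 is the node running nginx-ingress-controller in this cluster):

```shell
# Print the HTTP status returned for host $1 when connecting to node $2;
# prints 000 when the node is unreachable.
ingress_probe() {
  curl -s -o /dev/null -m 5 -w '%{http_code}' -H "Host: $1" "http://$2/" 2>/dev/null || true
}

# On a working cluster:
#   ingress_probe test.nginxds.com 10.100.4.184
```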
On your local machine, add a hosts entry resolving test.nginxds.com to the IP of the Node where nginx-ingress-controller runs; you can get that IP with `kubectl get pods -n ingress-nginx -o wide`:

```
10.100.4.184 test.nginxds.com
```
Modify the default home page of the nginx containers.
Visit test.nginxds.com in a browser to test.
The screenshot above shows the load balancing taking effect.
https://blog.qikqiak.com/post/manual-install-high-available-kubernetes-cluster/#11-%E9%83%A8%E7%BD%B2heapster-%E6%8F%92%E4%BB%B6-a-id-heapster-a
https://www.cnblogs.com/iiiiher/p/8176769.html
https://jimmysong.io/kubernetes-handbook/