HA environment architecture:
Component versions:
Software | Version |
---|---|
Linux OS | CentOS 7.5 x64 |
Kubernetes | 1.12 |
Docker | 18.xx-ce |
Etcd | 3.x |
Flannel | 0.10 |
Server roles:
Role | IP | Components |
---|---|---|
master01 | 192.168.1.43 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
master02 | 192.168.1.63 | kube-apiserver, kube-controller-manager, kube-scheduler |
node01 | 192.168.1.30 | kubelet, kube-proxy, docker, flannel, etcd |
node02 | 192.168.1.51 | kubelet, kube-proxy, docker, flannel, etcd |
node03 | 192.168.1.141 | kubelet, kube-proxy, docker, flannel |
Load Balancer (Master) | 192.168.1.31, 192.168.1.230 (VIP) | Nginx L4 |
Load Balancer (Backup) | 192.168.1.186 | Nginx L4 |
Self-signed SSL certificates:
Component | Certificates used |
---|---|
etcd | ca.pem, server.pem, server-key.pem |
flannel | ca.pem, server.pem, server-key.pem |
kube-apiserver | ca.pem, server.pem, server-key.pem |
kubelet | ca.pem, ca-key.pem |
kube-proxy | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
kubectl | ca.pem, admin.pem, admin-key.pem |
Preparation:
Disable the firewall:
# systemctl stop firewalld && systemctl disable firewalld
Sync the clocks (TLS validation is time-sensitive):
# yum -y install ntpdate && ntpdate time.windows.com
Install the cfssl toolchain:
# curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl                      # cfssl generates certificates
# curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson              # cfssljson takes JSON input and writes the cert files
# curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo    # cfssl-certinfo inspects generated certificates
# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
# mkdir ~/k8s/etcd-cert -p
# cd ~/k8s/etcd-cert
CA root certificate config:
# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
CA certificate signing request:
# cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
Issue the SSL certificate for etcd (put the etcd node IPs in the hosts list):
# cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.1.43",
    "192.168.1.30",
    "192.168.1.51"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
Generate the certificates:
Initialize the CA:
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -    # produces ca-key.pem and ca.pem
Generate the server certificate:
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server    # produces server-key.pem and server.pem
Flag notes:
# -ca=ca.pem              the CA certificate
# -ca-key=ca-key.pem      the CA private key
# -config=ca-config.json  the CA config file
# -profile=www            apply the "www" profile from the config
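To sanity-check what was issued, cfssl-certinfo (installed above) prints a certificate's subject, SAN list, and expiry:
# cfssl-certinfo -cert server.pem    # the three etcd node IPs should appear in the sans list, with a 10-year expiry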
Binary package download: https://github.com/etcd-io/etcd/releases
Unpack the binary package:
# cd ~/k8s
# tar -zxvf etcd-v3.3.10-linux-amd64.tar.gz
Create the etcd directories:
# mkdir /opt/etcd/{cfg,bin,ssl} -p    # config, binaries, and certificate directories
Move the executables into the etcd directory:
# cd ~/k8s/etcd-v3.3.10-linux-amd64
# mv etcd etcdctl /opt/etcd/bin/
# ls /opt/etcd/bin/
etcd  etcdctl
Copy the freshly generated SSL files into the etcd directory:
# cd ~/k8s/etcd-cert
# cp *.pem /opt/etcd/ssl/
# ls /opt/etcd/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
Create the etcd configuration file:
# cat <<EOF >/opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.43:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.43:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.43:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.43:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.43:2380,etcd02=https://192.168.1.30:2380,etcd03=https://192.168.1.51:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
Create the systemd unit file:
# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Enable etcd at boot and start it:
# systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
Copy the etcd files to node1 and node2:
# scp -r /opt/etcd/ root@192.168.1.30:/opt/
# scp -r /opt/etcd/ root@192.168.1.51:/opt/
# scp /usr/lib/systemd/system/etcd.service root@192.168.1.51:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/etcd.service root@192.168.1.30:/usr/lib/systemd/system/
Modify the configuration files:
node1:
# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.30:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.30:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.30:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.43:2380,etcd02=https://192.168.1.30:2380,etcd03=https://192.168.1.51:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

node2:
# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.51:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.51:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.51:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.51:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.43:2380,etcd02=https://192.168.1.30:2380,etcd03=https://192.168.1.51:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Enable and start etcd on each node:
# systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd
Check the etcd cluster status:
# cd /root/k8s/etcd-cert
# /opt/etcd/bin/etcdctl \
> --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
> --endpoints="https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379" \
> cluster-health
member 8da171dbef9ded69 is healthy: got healthy result from https://192.168.1.51:2379
member d250ef9d0d70c7c9 is healthy: got healthy result from https://192.168.1.30:2379
member f3b3c9aa5b97cee8 is healthy: got healthy result from https://192.168.1.43:2379
cluster is healthy
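The check above uses the v2 API that this etcdctl build defaults to. As a sketch, the equivalent health check through the v3 API looks like this; note the TLS flags are named differently (--cacert/--cert/--key):
# ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
    --cacert=ca.pem --cert=server.pem --key=server-key.pem \
    --endpoints="https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379" \
    endpoint health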
Install Docker (on every node):
# yum install -y yum-utils device-mapper-persistent-data lvm2    # install dependencies
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo    # add the Docker package repo
# yum install -y docker-ce    # install Docker CE
# systemctl start docker && systemctl enable docker    # start Docker and enable it at boot
How Flannel works: flanneld reads the Pod network configuration from etcd at startup, so first write that key into etcd:
# cd /root/k8s/etcd-cert
# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
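To confirm the key landed, read it back with the same credentials:
# /opt/etcd/bin/etcdctl \
    --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
    --endpoints="https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379" \
    get /coreos.com/network/config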
Download the binary package: https://github.com/coreos/flannel/releases
Unpack the binary package:
# tar -zxvf flannel-v0.10.0-linux-amd64.tar.gz
Create the k8s directories:
# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
Move the executables into the k8s directory:
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
Create the flannel configuration file:
# cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379 \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
Create the flannel systemd unit file:
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Configure Docker to start inside the flannel-assigned subnet:
# vim /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
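mk-docker-opts.sh renders the subnet flanneld leased into /run/flannel/subnet.env, which the unit file above sources. On the node shown in the verification step below it would look roughly like this (the exact subnet varies per node):
# cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=172.17.75.1/24 --ip-masq=false --mtu=1450"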
Restart docker and flannel:
# systemctl daemon-reload && systemctl start flanneld && systemctl enable flanneld
# systemctl restart docker
Verify it took effect:
# ps -ef | grep docker
root 42770 1 0 12:41 ? 00:00:00 /usr/bin/dockerd --bip=172.17.75.1/24 --ip-masq=false --mtu=1450
# ip addr
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether ce:e0:c4:9f:7b:64 brd ff:ff:ff:ff:ff:ff
    inet 172.17.75.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::cce0:c4ff:fe9f:7b64/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:41:6d:53:ce brd ff:ff:ff:ff:ff:ff
    inet 172.17.75.1/24 brd 172.17.75.255 scope global docker0
       valid_lft forever preferred_lft forever
docker0 now sits inside the flannel-assigned subnet (172.17.75.0/24).
Copy the files to the other nodes:
# scp -r /opt/kubernetes/ root@192.168.1.51:/opt
# scp -r /usr/lib/systemd/system/{flanneld,docker}.service root@192.168.1.51:/usr/lib/systemd/system/
Finally, verify that the overlay network is reachable across nodes:
# docker run -it busybox sh
# ping 172.17.67.2    # from inside the container, ping a container on another node's subnet
Before deploying Kubernetes, make sure etcd, flannel, and docker are all working; fix any problems before continuing.
Create the CA certificate:
Create the directory:
# cd ~/k8s
# mkdir k8s-cert
# cd k8s-cert
# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
# cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Initialize the CA:
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Generate the apiserver certificate (the hosts list authorizes which IPs may reach the apiserver; for HA it must include the master IPs, the LB IPs, and the VIP):
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.1.43",
    "192.168.1.63",
    "192.168.1.31",
    "192.168.1.186",
    "192.168.1.230",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate:
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
Generate the kube-proxy certificate:
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Generate the certificate:
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
The following certificate files are produced:
# ls *.pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
Create the k8s directory:
# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
Copy the SSL files into the k8s directory:
# cp ca*.pem server*.pem /opt/kubernetes/ssl/
Download the binary package: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md
Downloading kubernetes-server-linux-amd64.tar.gz alone is enough; it contains all the required components.
# cd ~/k8s
# tar -zxvf kubernetes-server-linux-amd64.tar.gz
# cd ~/k8s/kubernetes/server/bin/
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin/
Create the token file:
Generate a token:
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# vim /opt/kubernetes/cfg/token.csv
2f7a15198f7c0c3af3ba7f264b6885c2,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Column 1: a random string; generate your own
Column 2: user name
Column 3: UID
Column 4: user group
(A combined one-step version is sketched below.)
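As a convenience, the token generation and the file write can be combined into one step (a sketch; whatever token ends up here must match the one embedded later in bootstrap.kubeconfig):
# Generate a random bootstrap token and write token.csv in one go
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo ${BOOTSTRAP_TOKEN}    # note it down for the kubeconfig steps below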
Create the apiserver configuration file (adjust the master address and the etcd endpoints):
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=https://192.168.1.43:2379,https://192.168.1.30:2379,https://192.168.1.51:2379 \\
--bind-address=192.168.1.43 \\
--secure-port=6443 \\
--advertise-address=192.168.1.43 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
Point it at the certificates generated earlier and make sure it can reach etcd.
Parameter notes:
--logtostderr: log to standard error (--v=4 sets the verbosity)
--etcd-servers: the etcd cluster endpoints
--bind-address / --secure-port: the HTTPS listen address and port
--advertise-address: the address advertised to other cluster members
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes (RBAC and Node)
--enable-bootstrap-token-auth / --token-auth-file: enable TLS bootstrap token authentication against token.csv
--service-node-port-range: the NodePort allocation range
Manage apiserver with systemd:
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
# ps -ef | grep kube-apiserver
Create the kube-scheduler configuration file:
# cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF
Parameter notes:
--master: connect to the apiserver through its local insecure port (127.0.0.1:8080)
--leader-elect: enable leader election when several schedulers run; only the elected leader schedules
Create the systemd unit file:
# cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
# ps -ef | grep kube-scheduler
Create the kube-controller-manager configuration file:
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.0.0.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--experimental-cluster-signing-duration=87600h0m0s"
EOF
Manage controller-manager with systemd:
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
# ps -ef | grep kube-controller-manager
All master components are now running; check the cluster component status with kubectl:
# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
Output like the above means every component is healthy.
With TLS enabled on the apiserver, a node's kubelet must present a valid CA-signed certificate before it can talk to the apiserver. Signing certificates by hand becomes tedious once there are many nodes, hence the TLS Bootstrapping mechanism: the kubelet starts as a low-privilege user, automatically submits a certificate request to the apiserver, and the apiserver signs the kubelet's certificate dynamically.
Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role so it may submit certificate signing requests:
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
Create the kubelet bootstrapping kubeconfig (on the master):
# cd ~/k8s
# mkdir kubeconfig
# cd kubeconfig/
Put kubectl on the PATH:
# vi /etc/profile    # append: export PATH=$PATH:/opt/kubernetes/bin/
# source /etc/profile
Set the cluster parameters:
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.43:6443 \
  --kubeconfig=bootstrap.kubeconfig
Set the client credentials (the token must match token.csv):
kubectl config set-credentials kubelet-bootstrap \
  --token=2f7a15198f7c0c3af3ba7f264b6885c2 \
  --kubeconfig=bootstrap.kubeconfig
Set the context:
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
Use the default context:
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Create the kube-proxy kubeconfig (on the master):
kubectl config set-cluster kubernetes \
  --certificate-authority=/root/k8s/k8s-cert/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.43:6443 \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=/root/k8s/k8s-cert/kube-proxy.pem \
  --client-key=/root/k8s/k8s-cert/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# ls
bootstrap.kubeconfig  kube-proxy.kubeconfig
Copy the kubeconfig files to the nodes:
# scp kube-proxy.kubeconfig bootstrap.kubeconfig root@192.168.1.30:/opt/kubernetes/cfg/
# scp kube-proxy.kubeconfig bootstrap.kubeconfig root@192.168.1.51:/opt/kubernetes/cfg/
Copy kubelet and kube-proxy from the server binary package downloaded earlier into /opt/kubernetes/bin on the nodes:
# cd ~/k8s/kubernetes/server/bin
# scp kubelet kube-proxy root@192.168.1.30:/opt/kubernetes/bin/
# scp kubelet kube-proxy root@192.168.1.51:/opt/kubernetes/bin/
Create the kubelet configuration file:
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=192.168.1.30 \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
Parameter notes:
--hostname-override: the name this node registers under (its IP here)
--kubeconfig: where the kubeconfig is written once the bootstrap succeeds
--bootstrap-kubeconfig: the bootstrap kubeconfig used to request a certificate
--cert-dir: directory for the issued certificates
--pod-infra-container-image: the pause image that holds each Pod's network namespace
The /opt/kubernetes/cfg/kubelet.config file referenced above:
cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.1.30
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
Manage kubelet with systemd:
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
# ps -ef | grep kubelet
Approve the node's join on the master:
After starting, the node has not yet joined the cluster; it must be approved manually.
On the master, list the pending certificate signing requests, approve them, then list the nodes:
# kubectl get csr
# kubectl certificate approve XXXXID
# kubectl get node
Create the kube-proxy configuration file:
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=192.168.1.30 \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--masquerade-all=true \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
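--proxy-mode=ipvs depends on the IPVS kernel modules; if they are missing, kube-proxy falls back to iptables mode. A minimal sketch to load them on CentOS 7 (module names assumed for its stock 3.10 kernel):
# Load the kernel modules IPVS mode depends on
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
lsmod | grep ip_vs    # verify the modules are loaded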
Manage kube-proxy with systemd:
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Start it:
# systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
# ps -ef | grep kube-proxy
Copy the configuration to the other node:
Configuration files:
# scp -r /opt/kubernetes/ root@192.168.1.51:/opt/
systemd unit files:
# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.1.51:/usr/lib/systemd/system/
Delete the copied SSL files (they were issued by the master CA for the first node; this node will request its own via TLS bootstrap):
# rm -f /opt/kubernetes/ssl/*
Update the node IP in the configuration files (kubelet, kubelet.config, kube-proxy):
# cd /opt/kubernetes/cfg
# vi kubelet kubelet.config kube-proxy    # change 192.168.1.30 to this node's IP, 192.168.1.51
Start them:
# systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy
# ps -ef | grep kube-proxy
# systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
# ps -ef | grep kubelet
As before, approve the new node on the master:
# kubectl get csr
# kubectl certificate approve XXXXID
# kubectl get node
Both nodes should now be Ready and all components healthy:
# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.30   Ready    <none>   14h   v1.12.7
192.168.1.51   Ready    <none>   23s   v1.12.7
# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
The single-master setup is complete; next, extend it to multiple masters.
Copy all components to master02:
# scp -r /opt/kubernetes/ root@192.168.1.63:/opt
Copy the systemd unit files:
# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.1.63:/usr/lib/systemd/system/
Copy the etcd files (the apiserver needs the etcd certificates):
# scp -r /opt/etcd/ root@192.168.1.63:/opt/
Change the apiserver addresses (--bind-address and --advertise-address) to 192.168.1.63:
# vi /opt/kubernetes/cfg/kube-apiserver
Start the components:
kube-apiserver:
# systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver
kube-scheduler:
# systemctl daemon-reload && systemctl enable kube-scheduler && systemctl restart kube-scheduler
kube-controller-manager:
# systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager
Check that all three are running:
# ps -ef | grep kube
Check the cluster status:
Put kubectl on the PATH:
# vi /etc/profile    # append: export PATH=$PATH:/opt/kubernetes/bin/
# source /etc/profile
# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.30   Ready    <none>   15h   v1.12.7
192.168.1.51   Ready    <none>   53m   v1.12.7
# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
nginx-master:
Configure the package repo:
# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
Install nginx:
# yum -y install nginx
Add the L4 load balancer (a stream block at the top level of nginx.conf, alongside the http block):
# vim /etc/nginx/nginx.conf
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.1.43:6443;
        server 192.168.1.63:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
Start it:
Disable SELinux:
# setenforce 0
# vi /etc/selinux/config    # change SELINUX=enforcing to SELINUX=disabled
# systemctl start nginx
# netstat -anpt | grep 6443
# echo "master" > /usr/share/nginx/html/index.html
nginx-backup:
Configure the package repo:
# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
Install nginx:
# yum -y install nginx
Copy the nginx.conf over from the master LB (192.168.1.31):
# scp root@192.168.1.31:/etc/nginx/nginx.conf /etc/nginx/
Disable SELinux:
# setenforce 0
# vi /etc/selinux/config    # change SELINUX=enforcing to SELINUX=disabled
# systemctl start nginx
# netstat -anpt | grep 6443
# echo "backup" > /usr/share/nginx/html/index.html
Install keepalived on both master and backup:
# yum -y install keepalived
keepalived configuration on the master:
# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51    # VRRP router ID; must be unique per instance
    priority 100            # priority; set 90 on the backup
    advert_int 1            # VRRP heartbeat advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.230/24
    }
    track_script {
        check_nginx
    }
}
keepalived configuration on the backup:
! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51    # VRRP router ID; must be unique per instance
    priority 90             # lower than the master's 100
    advert_int 1            # VRRP heartbeat advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.230/24
    }
    track_script {
        check_nginx
    }
}
nginx health-check script (referenced by the vrrp_script block above; if nginx dies, keepalived stops and the VIP fails over):
# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
Make it executable:
# chmod +x /etc/nginx/check_nginx.sh
Start keepalived:
# systemctl start keepalived
Stop nginx on the master to test failover:
# systemctl stop nginx
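To confirm the failover actually moved the VIP, a quick check on both load balancers (interface name ens32 taken from the keepalived configs above):
# The VIP should appear on exactly one of the two machines
ip addr show ens32 | grep 192.168.1.230
# The apiserver should stay reachable through the VIP; any HTTP response, even a 403, proves the L4 path works
curl -k https://192.168.1.230:6443/version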
Point the nodes at the VIP: in each of the three kubeconfigs, change server: https://192.168.1.43:6443 to https://192.168.1.230:6443, then restart the node components:
# cd /opt/kubernetes/cfg
# vi bootstrap.kubeconfig
# vi kubelet.kubeconfig
# vi kube-proxy.kubeconfig
# systemctl restart kubelet
# systemctl restart kube-proxy
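Equivalently, as a sketch, a sed one-liner can rewrite all three files at once:
cd /opt/kubernetes/cfg
# Swap the single-master apiserver address for the VIP in every kubeconfig
sed -i 's#https://192.168.1.43:6443#https://192.168.1.230:6443#g' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
systemctl restart kubelet kube-proxy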
Authorize anonymous access (a lab shortcut: this binding grants cluster-admin to unauthenticated users so anonymous requests, e.g. for kubectl logs/exec against the kubelet API, are not rejected; never do this in production):
kubectl create clusterrolebinding system:anonymous --clusterrole=cluster-admin --user=system:anonymous
Deploy a test application:
# kubectl run nginx --image=nginx --replicas=3
# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
Check the Pods and the Service:
# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-j4bjq   1/1     Running   0          19m
nginx-dbddb74b8-kpqht   1/1     Running   0          19m
nginx-dbddb74b8-xjn5k   1/1     Running   0          19m
# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        16h
nginx        NodePort    10.0.0.33    <none>        88:32694/TCP   20m
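To verify the Service end to end, curl the NodePort from any machine that can reach the nodes (port 32694 taken from the output above):
# curl http://192.168.1.30:32694    # should return the nginx welcome page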
Dashboard manifests: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard
# cd /k8s/Dashboard
# ls
dashboard-configmap.yaml  dashboard-controller.yaml  dashboard-rbac.yaml  dashboard-secret.yaml  dashboard-service.yaml  k8s-admin.yaml
# kubectl apply -f .
# kubectl get pod,svc -o wide --all-namespaces | grep dashboard
kube-system   pod/kubernetes-dashboard-65f974f565-crvwj   1/1   Running   1   6m1s   172.17.75.2   192.168.1.30   <none>
kube-system   service/kubernetes-dashboard   NodePort   10.0.0.192   <none>   443:30001/TCP   6m   k8s-app=kubernetes-dashboard
Access it at https://192.168.1.30:30001 (use Firefox if possible; it is easier to bypass the self-signed certificate warning there).
Get the login token:
# kubectl get secret --all-namespaces | grep dashboard
kube-system   dashboard-admin-token-nrvzx        kubernetes.io/service-account-token   3   9m16s
kube-system   kubernetes-dashboard-certs         Opaque                                0   9m17s
kube-system   kubernetes-dashboard-key-holder    Opaque                                2   9m17s
kube-system   kubernetes-dashboard-token-cqqm8   kubernetes.io/service-account-token   3   9m17s
# kubectl describe secret dashboard-admin-token-nrvzx -n kube-system