1: Self-sign the etcd certificates
2: Deploy etcd
3: Install Docker on the nodes
4: Deploy Flannel (write the subnet into etcd first)
---------master----------
5: Self-sign the kube-apiserver certificates
6: Deploy the kube-apiserver component (token.csv)
7: Deploy the kube-controller-manager (pointing at the apiserver certificates) and kube-scheduler components
----------node----------
8: Generate the kubeconfig files (bootstrap.kubeconfig and kube-proxy.kubeconfig)
9: Deploy the kubelet component
10: Deploy the kube-proxy component
----------join the cluster----------
11: kubectl get csr && kubectl certificate approve — approve the certificate requests so the nodes join the cluster
12: Add a node
13: Check the nodes with kubectl get node
master node:
CentOS 7-3: 192.168.18.128
node nodes (run Docker):
CentOS 7-4: 192.168.18.148
CentOS 7-5: 192.168.18.145
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
[root@master k8s]# ls
etcd-cert  etcd.sh
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master k8s]# bash cfssl.sh
[root@master k8s]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson
[root@master k8s]# cd etcd-cert/
`Define the CA configuration`
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
`Create the CA signing request`
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Nanjing",
      "ST": "Nanjing"
    }
  ]
}
EOF
`Generate the CA certificate (produces ca-key.pem and ca.pem)`
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/01/15 11:26:22 [INFO] generating a new CA key and certificate from CSR
2020/01/15 11:26:22 [INFO] generate received request
2020/01/15 11:26:22 [INFO] received CSR
2020/01/15 11:26:22 [INFO] generating key: rsa-2048
2020/01/15 11:26:23 [INFO] encoded CSR
2020/01/15 11:26:23 [INFO] signed certificate with serial number 58994014244974115135502281772101176509863440005
`List all three etcd node IPs so the peers can verify one another`
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.18.128",
    "192.168.18.148",
    "192.168.18.145"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing"
    }
  ]
}
EOF
`Generate the etcd certificates (produces server-key.pem and server.pem)`
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/01/15 11:28:07 [INFO] generate received request
2020/01/15 11:28:07 [INFO] received CSR
2020/01/15 11:28:07 [INFO] generating key: rsa-2048
2020/01/15 11:28:07 [INFO] encoded CSR
2020/01/15 11:28:07 [INFO] signed certificate with serial number 153451631889598523484764759860297996765909979890
2020/01/15 11:28:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@master etcd-cert]# ls
ca-config.json  etcd-cert.sh  server-csr.json  ca.csr  etcd-v3.3.10-linux-amd64.tar.gz  server-key.pem
ca-csr.json  flannel-v0.10.0-linux-amd64.tar.gz  server.pem  ca-key.pem  kubernetes-server-linux-amd64.tar.gz  ca.pem  server.csr
[root@master etcd-cert]# mv *.tar.gz ../
[root@master etcd-cert]# cd ../
[root@master k8s]# ls
cfssl.sh  etcd.sh  flannel-v0.10.0-linux-amd64.tar.gz  etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
`Copy the certificates over`
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
`The script now blocks, waiting for the other etcd nodes to join`
[root@master k8s]# bash etcd.sh etcd01 192.168.18.128 etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@master ~]# ps -ef | grep etcd
root       3479   1780  0 11:48 pts/0    00:00:00 bash etcd.sh etcd01 192.168.18.128 etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380
root       3530   3479  0 11:48 pts/0    00:00:00 systemctl restart etcd
root       3540      1  1 11:48 ?        00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.18.128:2380 --listen-client-urls=https://192.168.18.128:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.18.128:2379 --initial-advertise-peer-urls=https://192.168.18.128:2380 --initial-cluster=etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root       3623   3562  0 11:49 pts/1    00:00:00 grep --color=auto etcd
`Copy the certificates to the other nodes`
[root@master k8s]# scp -r /opt/etcd/ root@192.168.18.148:/opt/
The authenticity of host '192.168.18.148 (192.168.18.148)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.148' (ECDSA) to the list of known hosts.
root@192.168.18.148's password:
etcd                 100%  518   426.8KB/s   00:00
etcd                 100%   18MB 105.0MB/s   00:00
etcdctl              100%   15MB 108.2MB/s   00:00
ca-key.pem           100% 1679     1.4MB/s   00:00
ca.pem               100% 1265   396.1KB/s   00:00
server-key.pem       100% 1675     1.0MB/s   00:00
server.pem           100% 1338   525.6KB/s   00:00
[root@master k8s]# scp -r /opt/etcd/ root@192.168.18.145:/opt/
The authenticity of host '192.168.18.145 (192.168.18.145)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.145' (ECDSA) to the list of known hosts.
root@192.168.18.145's password:
etcd                 100%  518   816.5KB/s   00:00
etcd                 100%   18MB  87.4MB/s   00:00
etcdctl              100%   15MB 108.6MB/s   00:00
ca-key.pem           100% 1679     1.3MB/s   00:00
ca.pem               100% 1265   411.8KB/s   00:00
server-key.pem       100% 1675     1.4MB/s   00:00
server.pem           100% 1338   639.5KB/s   00:00
`Copy the systemd unit file to the other nodes`
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.18.148:/usr/lib/systemd/system/
root@192.168.18.148's password:
etcd.service         100%  923   283.4KB/s   00:00
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.18.145:/usr/lib/systemd/system/
root@192.168.18.145's password:
etcd.service         100%  923   347.7KB/s   00:00
`Edit the etcd config on node1`
[root@node1 ~]# systemctl stop firewalld.service
[root@node1 ~]# setenforce 0
[root@node1 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.18.148:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.18.148:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.18.148:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.18.148:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node1 ~]# systemctl start etcd
[root@node1 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-01-15 17:53:24 CST; 5s ago    # the service is active
`Edit the etcd config on node2`
[root@node2 ~]# systemctl stop firewalld.service
[root@node2 ~]# setenforce 0
[root@node2 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.18.145:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.18.145:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.18.145:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.18.145:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node2 ~]# systemctl start etcd
[root@node2 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-01-15 17:55:24 CST; 5s ago    # the service is active
`Back on CentOS 7-3 (the master), check the cluster health:`
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379" cluster-health
member 9104d301e3b6da41 is healthy: got healthy result from https://192.168.18.148:2379
member 92947d71c72a884e is healthy: got healthy result from https://192.168.18.145:2379
member b2a6d67e1bc8054b is healthy: got healthy result from https://192.168.18.128:2379
cluster is healthy
`All members report healthy`
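When the health check is scripted (for monitoring or a pre-flight gate), it helps to parse the `cluster-health` output instead of eyeballing it. The sketch below assumes the etcdctl v2 output format shown above; a canned sample stands in for the real `etcdctl ... cluster-health` call so the snippet runs anywhere.

```shell
#!/usr/bin/env bash
# Fail fast when any etcd member is unhealthy. The sample text below stands
# in for the output of a real `etcdctl ... cluster-health` invocation.
set -eu

health_output='member 9104d301e3b6da41 is healthy: got healthy result from https://192.168.18.148:2379
member 92947d71c72a884e is healthy: got healthy result from https://192.168.18.145:2379
member b2a6d67e1bc8054b is healthy: got healthy result from https://192.168.18.128:2379
cluster is healthy'

# Count the members and how many of them report healthy.
total=$(printf '%s\n' "$health_output" | grep -c '^member ')
healthy=$(printf '%s\n' "$health_output" | grep -c '^member .* is healthy')

if [ "$total" -eq "$healthy" ] && printf '%s\n' "$health_output" | grep -q '^cluster is healthy$'; then
    echo "etcd cluster OK ($healthy/$total members healthy)"
else
    echo "etcd cluster DEGRADED ($healthy/$total members healthy)" >&2
    exit 1
fi
```

In a real check, replace the canned `health_output` with `$(/opt/etcd/bin/etcdctl ... cluster-health)`.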
`Install the dependency packages`
[root@node1 ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
`Add the Aliyun mirror repository`
[root@node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
`Install Docker CE`
[root@node1 ~]# yum install -y docker-ce
`Start Docker and enable it at boot`
[root@node1 ~]# systemctl start docker.service
[root@node1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
`Check that the daemon is running`
[root@node1 ~]# ps aux | grep docker
root       5551  0.1  3.6 565460 68652 ?   Ssl  09:13   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       5759  0.0  0.0 112676   984 pts/1 R+  09:16   0:00 grep --color=auto docker
`Configure a registry mirror`
[root@node1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
# Network tuning: enable IP forwarding
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
[root@node1 ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
`Install the dependency packages`
[root@node2 ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
`Add the Aliyun mirror repository`
[root@node2 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
`Install Docker CE`
[root@node2 ~]# yum install -y docker-ce
`Start Docker and enable it at boot`
[root@node2 ~]# systemctl start docker.service
[root@node2 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
`Check that the daemon is running`
[root@node2 ~]# ps aux | grep docker
root       5570  0.5  3.5 565460 66740 ?   Ssl  09:18   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       5759  0.0  0.0 112676   984 pts/1 R+  09:18   0:00 grep --color=auto docker
`Configure a registry mirror`
[root@node2 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
[root@node2 ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart docker
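A malformed /etc/docker/daemon.json stops dockerd from starting at all, so it is worth validating the JSON before restarting Docker. A minimal sketch, assuming python3 is available; it writes to a temp file here for illustration (in production the target is /etc/docker/daemon.json):

```shell
#!/usr/bin/env bash
# Write the registry-mirror config, then refuse to continue if it is not
# valid JSON -- dockerd will not start with a broken daemon.json.
set -eu

daemon_json=$(mktemp)   # in production: /etc/docker/daemon.json

tee "$daemon_json" > /dev/null <<'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF

# Sanity-check the JSON before touching the Docker service.
python3 -m json.tool "$daemon_json" > /dev/null || { echo "invalid daemon.json" >&2; exit 1; }
echo "daemon.json OK: $daemon_json"
```

Only after the check passes would you run `systemctl daemon-reload && systemctl restart docker`.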
`On the master, write the allocated subnet range into etcd for Flannel to use`
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
`Verify what was written`
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
`Copy the Flannel package to every node (Flannel only needs to be deployed on the nodes)`
[root@master etcd-cert]# cd ../
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.18.148:/root
root@192.168.18.148's password:
flannel-v0.10.0-linux-amd64.tar.gz        100% 9479KB  55.6MB/s   00:00
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.18.145:/root
root@192.168.18.145's password:
flannel-v0.10.0-linux-amd64.tar.gz        100% 9479KB  69.5MB/s   00:00
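The value stored under /coreos.com/network/config is plain JSON: Flannel reads the pool CIDR from "Network" and the overlay type from "Backend". A sketch, assuming python3, that validates such a config string before handing it to `etcdctl set` (a bad CIDR would otherwise only surface when flanneld fails to start on the nodes):

```shell
#!/usr/bin/env bash
# Validate the Flannel network config locally before writing it to etcd.
set -eu

flannel_config='{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

backend=$(printf '%s' "$flannel_config" | python3 -c '
import ipaddress, json, sys
cfg = json.load(sys.stdin)          # raises on malformed JSON
ipaddress.ip_network(cfg["Network"])  # raises on a malformed CIDR
print(cfg["Backend"]["Type"])
')

echo "flannel backend: $backend"
```

Once the check passes, `$flannel_config` is what goes into the `etcdctl ... set /coreos.com/network/config` command shown above.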
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
`Create the k8s working directories`
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node1 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
`Bring up the Flannel network`
[root@node1 ~]# bash flannel.sh https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
`Configure Docker to use the Flannel network`
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
# make the following changes in the [Service] section
 9 [Service]
10 Type=notify
11 # the default is not to use systemd for cgroups because the delegate issues still
12 # exists and systemd currently does not support the cgroup feature set required
13 # for containers run by docker
14 EnvironmentFile=/run/flannel/subnet.env              # add this line after line 13
15 ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    # on line 15, add $DOCKER_NETWORK_OPTIONS before -H
16 ExecReload=/bin/kill -s HUP $MAINPID
17 TimeoutSec=0
18 RestartSec=2
19 Restart=always
# when done, press Esc and type :wq to save and quit
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.32.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.32.1/24 --ip-masq=false --mtu=1450"
# bip sets the subnet Docker uses at startup
`Restart the Docker service`
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
`Check the Flannel network`
[root@node1 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.32.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::344b:13ff:fecb:1e2d  prefixlen 64  scopeid 0x20<link>
        ether 36:4b:13:cb:1e:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27  overruns 0  carrier 0  collisions 0
[root@node2 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
`Create the k8s working directories`
[root@node2 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node2 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node2 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
`Bring up the Flannel network`
[root@node2 ~]# bash flannel.sh https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
`Configure Docker to use the Flannel network`
[root@node2 ~]# vim /usr/lib/systemd/system/docker.service
# make the following changes in the [Service] section
 9 [Service]
10 Type=notify
11 # the default is not to use systemd for cgroups because the delegate issues still
12 # exists and systemd currently does not support the cgroup feature set required
13 # for containers run by docker
14 EnvironmentFile=/run/flannel/subnet.env              # add this line after line 13
15 ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    # on line 15, add $DOCKER_NETWORK_OPTIONS before -H
16 ExecReload=/bin/kill -s HUP $MAINPID
17 TimeoutSec=0
18 RestartSec=2
19 Restart=always
# when done, press Esc and type :wq to save and quit
[root@node2 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.40.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.40.1/24 --ip-masq=false --mtu=1450"
# bip sets the subnet Docker uses at startup
`Restart the Docker service`
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart docker
`Check the Flannel network`
[root@node2 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.40.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::cc6f:baff:fe89:3b93  prefixlen 64  scopeid 0x20<link>
        ether ce:6f:ba:89:3b:93  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 240  overruns 0  carrier 0  collisions 0
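The handoff between Flannel and Docker works through that EnvironmentFile: mk-docker-opts.sh writes /run/flannel/subnet.env, systemd loads it, and `$DOCKER_NETWORK_OPTIONS` expands in the dockerd ExecStart line. A sketch of the mechanism, using a sample subnet.env so it runs without Flannel installed:

```shell
#!/usr/bin/env bash
# Demonstrate the EnvironmentFile handoff: source a subnet.env like the one
# mk-docker-opts.sh generates, then compose the dockerd command line the way
# systemd would.
set -eu

subnet_env=$(mktemp)   # in production: /run/flannel/subnet.env
cat > "$subnet_env" <<'EOF'
DOCKER_OPT_BIP="--bip=172.17.32.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.32.1/24 --ip-masq=false --mtu=1450"
EOF

# shellcheck disable=SC1090
. "$subnet_env"
echo "dockerd would start as: /usr/bin/dockerd${DOCKER_NETWORK_OPTIONS} -H fd://"
```

This is why the docker0 bridge on each node ends up inside the per-node /24 that Flannel leased (172.17.32.0/24 on node1, 172.17.40.0/24 on node2), and why the MTU drops to 1450: the VXLAN header consumes 50 bytes of the usual 1500.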
[root@node1 ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
# this drops you into the container
[root@3cdebf0d2bb8 /]# yum install net-tools -y
[root@3cdebf0d2bb8 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.32.2  netmask 255.255.255.0  broadcast 172.17.32.255
        ether 02:42:ac:11:20:02  txqueuelen 0  (Ethernet)
        RX packets 16774  bytes 13938639 (13.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7361  bytes 400658 (391.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
# the container's eth0 is 172.17.32.2
`Test cross-node connectivity`
[root@3cdebf0d2bb8 /]# ping 172.17.40.2
PING 172.17.40.2 (172.17.40.2) 56(84) bytes of data.
64 bytes from 172.17.40.2: icmp_seq=1 ttl=62 time=0.279 ms
64 bytes from 172.17.40.2: icmp_seq=2 ttl=62 time=1.07 ms
64 bytes from 172.17.40.2: icmp_seq=3 ttl=62 time=0.397 ms
^C
--- 172.17.40.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.279/0.610/1.075/0.307 ms
# the ping succeeds: the container on node2 is reachable through the overlay
[root@node2 ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
# this drops you into the container
[root@036c7eb6be88 /]# yum install net-tools -y
[root@036c7eb6be88 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.40.2  netmask 255.255.255.0  broadcast 172.17.40.255
        ether 02:42:ac:11:28:02  txqueuelen 0  (Ethernet)
        RX packets 16859  bytes 13953367 (13.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7528  bytes 409881 (400.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
# the container's eth0 is 172.17.40.2
`Test cross-node connectivity`
[root@036c7eb6be88 /]# ping 172.17.32.2
PING 172.17.32.2 (172.17.32.2) 56(84) bytes of data.
64 bytes from 172.17.32.2: icmp_seq=1 ttl=62 time=0.411 ms
64 bytes from 172.17.32.2: icmp_seq=2 ttl=62 time=0.699 ms
64 bytes from 172.17.32.2: icmp_seq=3 ttl=62 time=0.684 ms
^C
--- 172.17.32.2 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5004ms
rtt min/avg/max/mdev = 0.411/0.744/1.299/0.269 ms
# the ping succeeds: the container on node1 is reachable through the overlay
`On the master: generate the apiserver certificates (upload master.zip to the master node first)`
[root@master k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
`Create a directory for the self-signed apiserver certificates`
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# ls    # upload k8s-cert.sh into this directory first
k8s-cert.sh
`Create the CA configuration`
[root@master k8s-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
[root@master k8s-cert]# cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Nanjing",
      "ST": "Nanjing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
`Sign the CA certificate (produces ca.pem and ca-key.pem)`
[root@master k8s-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/02/05 10:15:09 [INFO] generating a new CA key and certificate from CSR
2020/02/05 10:15:09 [INFO] generate received request
2020/02/05 10:15:09 [INFO] received CSR
2020/02/05 10:15:09 [INFO] generating key: rsa-2048
2020/02/05 10:15:09 [INFO] encoded CSR
2020/02/05 10:15:09 [INFO] signed certificate with serial number 154087341948227448402053985122760482002707860296
`Create the apiserver certificate request`
[root@master k8s-cert]# cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.18.128",      # master1 (CentOS 7-3)
    "192.168.18.132",      # master2 (mini-1)
    "192.168.18.100",      # VIP (set this yourself for load balancing)
    "192.168.18.147",      # lb (mini-2)
    "192.168.18.133",      # lb (mini-3)
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Note: delete the # comments in the hosts list before running -- JSON does not allow comments.
`Sign the certificate (produces server.pem and server-key.pem)`
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2020/02/05 11:43:47 [INFO] generate received request
2020/02/05 11:43:47 [INFO] received CSR
2020/02/05 11:43:47 [INFO] generating key: rsa-2048
2020/02/05 11:43:47 [INFO] encoded CSR
2020/02/05 11:43:47 [INFO] signed certificate with serial number 359419453323981371004691797080289162934778938507
2020/02/05 11:43:47 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
`Create the admin certificate request`
[root@master k8s-cert]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
`Sign the certificate (produces admin.pem and admin-key.pem)`
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2020/02/05 11:46:04 [INFO] generate received request
2020/02/05 11:46:04 [INFO] received CSR
2020/02/05 11:46:04 [INFO] generating key: rsa-2048
2020/02/05 11:46:04 [INFO] encoded CSR
2020/02/05 11:46:04 [INFO] signed certificate with serial number 361885975538105795426233467824041437549564573114
2020/02/05 11:46:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
`Create the kube-proxy certificate request`
[root@master k8s-cert]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
`Sign the certificate (produces kube-proxy.pem and kube-proxy-key.pem)`
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2020/02/05 11:47:55 [INFO] generate received request
2020/02/05 11:47:55 [INFO] received CSR
2020/02/05 11:47:55 [INFO] generating key: rsa-2048
2020/02/05 11:47:56 [INFO] encoded CSR
2020/02/05 11:47:56 [INFO] signed certificate with serial number 34747850270017663665747172643822215922289240826
2020/02/05 11:47:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
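The CSR files above differ only in their CN, O, and hosts list, so they can be stamped out from one template instead of copy-pasting. In the sketch below, `make_csr` is a hypothetical helper (not part of cfssl); its output is the same JSON shape that `cfssl gencert` reads, and each file is validated as JSON before use:

```shell
#!/usr/bin/env bash
# Generate cfssl CSR JSON files from one template. make_csr is a hypothetical
# helper written for this sketch.
set -eu

make_csr() {   # usage: make_csr CN ORG OUTFILE [HOST...]
    local cn=$1 org=$2 out=$3 hosts="" h
    shift 3
    # Build a comma-separated, quoted hosts list (may be empty).
    for h in "$@"; do hosts="${hosts:+$hosts, }\"$h\""; done
    cat > "$out" <<EOF
{
  "CN": "$cn",
  "hosts": [ $hosts ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "L": "NanJing", "ST": "NanJing", "O": "$org", "OU": "System" } ]
}
EOF
}

make_csr "system:kube-proxy" "k8s" kube-proxy-csr.json
make_csr "etcd" "k8s" server-csr.json 192.168.18.128 192.168.18.148 192.168.18.145

# Both files must parse as JSON before being handed to cfssl.
python3 -m json.tool kube-proxy-csr.json > /dev/null
python3 -m json.tool server-csr.json > /dev/null
echo "CSR JSON files OK"
```

The generated files would then be signed exactly as above, e.g. `cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy`.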
[root@master k8s-cert]# bash k8s-cert.sh
2020/02/05 11:50:08 [INFO] generating a new CA key and certificate from CSR
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 473883155883308900863805079252124099771123043047
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 66483817738746309793417718868470334151539533925
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 245658866069109639278946985587603475325871008240
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:09 [INFO] encoded CSR
2020/02/05 11:50:09 [INFO] signed certificate with serial number 696729766024974987873474865496562197315198733463
2020/02/05 11:50:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem  admin.pem  ca.pem  kube-proxy.pem  server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
`Unpack the kubernetes tarball`
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
`Copy the key binaries`
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# cd /root/k8s
`Generate a random bootstrap token`
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9b3186df3dc799376ad43b6fe0108571
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# format: token,username,uid,group
`Binaries, token, and certificates are all in place -- start the apiserver`
[root@master k8s]# bash apiserver.sh 192.168.18.128 https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
`Check that the process started`
[root@master k8s]# ps aux | grep kube
root   7034  0.6  1.2  46672  23460 ?  Ssl 12:23  0:33 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root   7104  0.0  2.0 108508  38552 ?  Ssl 12:24  0:02 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s
root   8146 77.5 14.7 363196 275780 ?  Ssl 13:44  0:05 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379 --bind-address=192.168.18.128 --secure-port=6443 --advertise-address=192.168.18.128 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root   8154  0.0  0.0 112676    980 pts/1 R+ 13:44  0:00 grep --color=auto kube
`Inspect the generated config file`
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379 \
--bind-address=192.168.18.128 \
--secure-port=6443 \
--advertise-address=192.168.18.128 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
`The HTTPS port the apiserver listens on`
[root@master k8s]# netstat -ntap | grep 6443
tcp   0   0 192.168.18.128:6443    0.0.0.0:*              LISTEN       8146/kube-apiserver
tcp   0   0 192.168.18.128:6443    192.168.18.128:56724   ESTABLISHED  8146/kube-apiserver
tcp   0   0 192.168.18.128:56724   192.168.18.128:6443    ESTABLISHED  8146/kube-apiserver
[root@master k8s]# netstat -ntap | grep 8080
tcp   0   0 127.0.0.1:8080         0.0.0.0:*              LISTEN       8146/kube-apiserver
...... (remaining lines omitted)
`Start the scheduler`
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku
postfix  6212  0.0  0.0  91732  1364 ?  S   11:29  0:00 pickup -l -t unix -u
root     7034  1.1  1.0  45360 20332 ?  Ssl 12:23  0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root     7042  0.0  0.0 112676   980 pts/1 R+ 12:23  0:00 grep --color=auto ku
[root@master k8s]# chmod +x controller-manager.sh
`Start the controller-manager`
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
`Check the status of the master components`
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
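The token handling above can be collapsed into one step rather than generating the token and pasting it into vim by hand. A minimal sketch (written to a temp directory for illustration; in production the file is /opt/kubernetes/cfg/token.csv) using the same format the apiserver's --token-auth-file expects, i.e. token,username,uid,"group":

```shell
#!/usr/bin/env bash
# Generate the bootstrap token and write token.csv in one go.
set -eu

cfg_dir=$(mktemp -d)   # in production: /opt/kubernetes/cfg

# Same token recipe as above: 16 random bytes rendered as 32 hex characters.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > "$cfg_dir/token.csv" <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

echo "token: $BOOTSTRAP_TOKEN"
```

Keeping the token in a variable also avoids the copy-paste step later when the same value has to go into bootstrap.kubeconfig.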
`Copy the kubelet and kube-proxy binaries to the node machines`
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# scp kubelet kube-proxy root@192.168.18.148:/opt/kubernetes/bin/
root@192.168.18.148's password:
kubelet      100%  168MB  81.1MB/s  00:02
kube-proxy   100%   48MB  77.6MB/s  00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.18.145:/opt/kubernetes/bin/
root@192.168.18.145's password:
kubelet      100%  168MB  86.8MB/s  00:01
kube-proxy   100%   48MB  90.4MB/s  00:00
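With only two nodes the two scp commands above are fine; with more, the same copy is easy to script. A minimal sketch, shown as a dry run (the echo prints each command instead of executing it; drop the echo to actually copy):

```shell
# Sketch: push both binaries to every node in one loop (dry run).
NODES="192.168.18.148 192.168.18.145"
for node in $NODES; do
  for bin in kubelet kube-proxy; do
    # Remove 'echo' to perform the real copy.
    echo scp "$bin" "root@${node}:/opt/kubernetes/bin/"
  done
done
```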
[root@node1 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip  公共  視頻  文檔  音樂
flannel.sh  initial-setup-ks.cfg  README.md  模板  圖片  下載  桌面
[root@node1 ~]# unzip node.zip
Archive: node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
`Upload the kubeconfig.sh script to this directory, then rename it`
[root@master kubeconfig]# ls
kubeconfig.sh
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
#Delete the first 9 lines; they were already executed when the token was generated earlier
# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=9b3186df3dc799376ad43b6fe0108571 \   #Change this to the token we generated earlier
  --kubeconfig=bootstrap.kubeconfig
#When done, press Esc to leave insert mode, then type :wq to save and quit
----How to get the token----
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#We need the token "9b3186df3dc799376ad43b6fe0108571" from this file; everyone's token is different
---------------------
`Set the environment variable (it can be written into /etc/profile)`
[root@master kubeconfig]# vim /etc/profile
#Press G to jump to the last line, then press o to insert on a new line below
export PATH=$PATH:/opt/kubernetes/bin/
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@master kubeconfig]# source /etc/profile
[root@master kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
[root@master kubeconfig]# kubectl get node
No resources found.
#No nodes have been added yet
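For reference, a token like the one in token.csv is just 16 random bytes rendered as a 32-character hex string. A sketch of that generation (writing to a temp file rather than the real /opt/kubernetes/cfg/token.csv):

```shell
# Sketch: generate a 32-hex-character bootstrap token and write a token.csv
# line in the same format as above. A temp file stands in for the real path.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
token_file=$(mktemp)
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > "$token_file"
cat "$token_file"
```

Whatever value ends up in token.csv must be the one pasted into the kubeconfig script; a mismatch makes the kubelet bootstrap fail authentication.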
[root@master kubeconfig]# bash kubeconfig 192.168.18.128 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
`Copy the config files to both node machines`
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.18.148:/opt/kubernetes/cfg/
root@192.168.18.148's password:
bootstrap.kubeconfig    100% 2168   2.2MB/s  00:00
kube-proxy.kubeconfig   100% 6270   3.5MB/s  00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.18.145:/opt/kubernetes/cfg/
root@192.168.18.145's password:
bootstrap.kubeconfig    100% 2168   3.1MB/s  00:00
kube-proxy.kubeconfig   100% 6270   7.9MB/s  00:00
`Create the bootstrap role binding that grants permission to connect to the apiserver and request certificate signing (key step)`
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
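For orientation, the bootstrap.kubeconfig produced by the script has roughly the following shape (values abbreviated here; the token must match the one in token.csv):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 of ca.pem>
    server: https://192.168.18.128:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
users:
- name: kubelet-bootstrap
  user:
    token: 9b3186df3dc799376ad43b6fe0108571
```

kube-proxy.kubeconfig has the same layout, except its user authenticates with the kube-proxy client certificate and key instead of a token.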
[root@node1 ~]# bash kubelet.sh 192.168.18.148
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
`Verify that the kubelet service has started`
[root@node1 ~]# ps aux | grep kube
root 8807 0.0 0.8 300512 16260 ? Ssl 09:45 0:05 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root 35040 0.4 2.1 369632 40832 ? Ssl 14:53 0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.18.148 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root 35078 0.0 0.0 112676 984 pts/1 S+ 14:54 0:00 grep --color=auto kube
[root@node1 ~]# systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-02-05 14:54:45 CST; 21s ago
#The service status is active (running)
`node1 will automatically contact the apiserver to request a certificate, so we can see node1's request on the master`
[root@master kubeconfig]# kubectl get csr
NAME                                                  AGE   REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg  18s   kubelet-bootstrap   Pending
#The status is Pending: waiting for the cluster to issue a certificate to this node
`Approve the request so the node is allowed to join the cluster`
[root@master kubeconfig]# kubectl certificate approve node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg
certificatesigningrequest.certificates.k8s.io/node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg approved
`Check the certificate status again`
[root@master kubeconfig]# kubectl get csr
NAME                                                  AGE    REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg  3m59s  kubelet-bootstrap   Approved,Issued
#The status is now Approved,Issued: the node has been allowed to join the cluster
`Check the cluster nodes: node1 has joined successfully`
[root@master kubeconfig]# kubectl get node
NAME             STATUS   ROLES    AGE    VERSION
192.168.18.148   Ready    <none>   6m54s  v1.12.3
`On node1, start the proxy service`
[root@node1 ~]# bash proxy.sh 192.168.18.148
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-06 11:11:56 CST; 20s ago
#The service status is active (running)
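When several nodes bootstrap at once, the Pending requests can be filtered and approved in bulk rather than one name at a time. A sketch with the parsing factored into a testable function (the kubectl pipeline in the comment assumes the master's configured kubectl):

```shell
# Sketch: print the NAME column of every CSR whose CONDITION is Pending.
# On the master you would pipe real output through it, e.g.:
#   kubectl get csr | pending_csrs | xargs -r -n1 kubectl certificate approve
pending_csrs() {
  awk 'NR > 1 && $NF == "Pending" { print $1 }'
}
# Demonstrate on a captured sample of `kubectl get csr` output.
sample='NAME AGE REQUESTOR CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg 18s kubelet-bootstrap Pending
node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA 3m kubelet-bootstrap Approved,Issued'
printf '%s\n' "$sample" | pending_csrs
```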
`On node1, copy the existing /opt/kubernetes directory to node2, then modify it there`
[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.18.145:/opt/
The authenticity of host '192.168.18.145 (192.168.18.145)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.145' (ECDSA) to the list of known hosts.
root@192.168.18.145's password:
flanneld                                100%   238  572.7KB/s  00:00
bootstrap.kubeconfig                    100%  2168    4.9MB/s  00:00
kube-proxy.kubeconfig                   100%  6270   12.0MB/s  00:00
kubelet                                 100%   378  642.2KB/s  00:00
kubelet.config                          100%   268  565.0KB/s  00:00
kubelet.kubeconfig                      100%  2297    3.5MB/s  00:00
kube-proxy                              100%   191  396.6KB/s  00:00
mk-docker-opts.sh                       100%  2139    3.2MB/s  00:00
scp: /opt//kubernetes/bin/flanneld: Text file busy
kubelet                                 100%  168MB   96.9MB/s  00:01
kube-proxy                              100%   48MB  108.9MB/s  00:00
kubelet.crt                             100%  2193    2.4MB/s  00:00
kubelet.key                             100%  1675    2.5MB/s  00:00
kubelet-client-2020-02-06-11-03-32.pem  100%  1277    2.2MB/s  00:00
kubelet-client-current.pem              100%  1277  684.2KB/s  00:00
`Copy the kubelet and kube-proxy service unit files from node1 to node2`
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.18.145:/usr/lib/systemd/system/
root@192.168.18.145's password:
kubelet.service     100% 264 291.3KB/s 00:00
kube-proxy.service  100% 231 407.8KB/s 00:00
`On node2, make the changes: first delete the copied certificates; node2 will request its own certificates later`
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# rm -rf *
`Modify the three configuration files: kubelet, kubelet.config, and kube-proxy`
[root@node2 ssl]# cd ../cfg/
[root@node2 cfg]# vim kubelet
4 --hostname-override=192.168.18.145 \   #Line 4: change the hostname override to node2's IP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 cfg]# vim kubelet.config
4 address: 192.168.18.145   #Line 4: change the address to node2's IP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 cfg]# vim kube-proxy
4 --hostname-override=192.168.18.145   #Line 4: change to node2's IP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Start the services`
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
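The three per-file vim edits on node2 can also be done with one sed loop. A sketch using a temp directory with stand-in file contents in place of the real /opt/kubernetes/cfg:

```shell
# Sketch: swap node1's IP for node2's in the three copied config files,
# replacing the manual vim edits. A temp dir stands in for /opt/kubernetes/cfg.
OLD_IP=192.168.18.148
NEW_IP=192.168.18.145
cfg_dir=$(mktemp -d)
for f in kubelet kubelet.config kube-proxy; do
  # Stand-in content; the real files carry node1's IP after the scp -r copy.
  echo "--hostname-override=${OLD_IP}" > "${cfg_dir}/${f}"
done
for f in kubelet kubelet.config kube-proxy; do
  sed -i "s/${OLD_IP}/${NEW_IP}/g" "${cfg_dir}/${f}"
done
cat "${cfg_dir}/kube-proxy"
```

On the real node2 the same loop would run in /opt/kubernetes/cfg (skipping the stand-in step); a blanket IP substitution is safe there because node1's IP only appears in the fields being changed.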
[root@master k8s]# kubectl get csr
NAME                                                  AGE   REQUESTOR           CONDITION
node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA  99s   kubelet-bootstrap   Pending
#A new authorization request to join the cluster has appeared
[root@master k8s]# kubectl certificate approve node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA
certificatesigningrequest.certificates.k8s.io/node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA approved
`Check the nodes in the cluster`
[root@master k8s]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.145   Ready    <none>   28s   v1.12.3
192.168.18.148   Ready    <none>   26m   v1.12.3
#Both nodes have now joined the cluster