Six CentOS 7 machines:
192.168.1.11  master01
192.168.1.12  node1
192.168.1.13  node2
192.168.1.14  master02
192.168.1.15  lb1
192.168.1.16  lb2
VIP: 192.168.1.100
1: Generate self-signed etcd certificates
2: Deploy etcd
3: Install Docker on the nodes
4: Deploy Flannel (write the subnet configuration into etcd first)
---------master----------
5: Generate self-signed apiserver certificates
6: Deploy the kube-apiserver component (token.csv)
7: Deploy kube-controller-manager (pointing at the apiserver certificates) and kube-scheduler
----------node----------
8: Generate the kubeconfig files (bootstrap.kubeconfig and kube-proxy.kubeconfig)
9: Deploy the kubelet component
10: Deploy the kube-proxy component
----------join the cluster----------
11: kubectl get csr && kubectl certificate approve to approve the certificate requests and let the nodes join the cluster
12: Add a node
13: Verify the nodes with kubectl get node
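The hostnames above are only used for reference in this walkthrough (all later commands use IPs); if you also want name resolution on every machine, a minimal /etc/hosts addition along these lines would work. This is an optional convenience, not required by any later step:

cat >> /etc/hosts <<EOF
192.168.1.11 master01
192.168.1.12 node1
192.168.1.13 node2
192.168.1.14 master02
192.168.1.15 lb1
192.168.1.16 lb2
EOF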
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
[root@master k8s]# ls
etcd-cert  etcd.sh
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master k8s]# bash cfssl.sh
[root@master k8s]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson
[root@master k8s]# cd etcd-cert/
`Define the CA configuration`
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
`Create the CA signing request`
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Nanjing",
      "ST": "Nanjing"
    }
  ]
}
EOF
`Generate the CA certificate (produces ca-key.pem and ca.pem)`
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/01/15 11:26:22 [INFO] generating a new CA key and certificate from CSR
2020/01/15 11:26:22 [INFO] generate received request
2020/01/15 11:26:22 [INFO] received CSR
2020/01/15 11:26:22 [INFO] generating key: rsa-2048
2020/01/15 11:26:23 [INFO] encoded CSR
2020/01/15 11:26:23 [INFO] signed certificate with serial number 58994014244974115135502281772101176509863440005
`Server CSR listing the three etcd node IPs so the traffic between them can be verified`
cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.1.11",
    "192.168.1.12",
    "192.168.1.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "NanJing",
            "ST": "NanJing"
        }
    ]
}
EOF
`Generate the etcd server certificate (produces server-key.pem and server.pem)`
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/01/15 11:28:07 [INFO] generate received request
2020/01/15 11:28:07 [INFO] received CSR
2020/01/15 11:28:07 [INFO] generating key: rsa-2048
2020/01/15 11:28:07 [INFO] encoded CSR
2020/01/15 11:28:07 [INFO] signed certificate with serial number 153451631889598523484764759860297996765909979890
2020/01/15 11:28:07 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
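Before distributing server.pem it can be worth confirming that the three etcd IPs really ended up in the certificate's SANs; an optional check (not part of the original transcript) using the cfssl-certinfo binary installed above:

[root@master etcd-cert]# cfssl-certinfo -cert server.pem
# the "sans" field of the JSON output should list 192.168.1.11, 192.168.1.12 and 192.168.1.13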
Upload the following three archives to /root/k8s:
[root@master k8s]# ls
cfssl.sh   etcd.sh                          flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
`Copy the certificates`
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
`The script now blocks while it waits for the other members to join`
[root@master k8s]# bash etcd.sh etcd01 192.168.1.11 etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@master ~]# ps -ef | grep etcd
root       3479   1780  0 11:48 pts/0    00:00:00 bash etcd.sh etcd01 192.168.1.11 etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380
root       3530   3479  0 11:48 pts/0    00:00:00 systemctl restart etcd
root       3540      1  1 11:48 ?        00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.1.11:2380 --listen-client-urls=https://192.168.1.11:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.1.11:2379 --initial-advertise-peer-urls=https://192.168.1.11:2380 --initial-cluster=etcd01=https://192.168.1.11:2380,etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root       3623   3562  0 11:49 pts/1    00:00:00 grep --color=auto etcd
`Copy the etcd directory (binaries and certificates) to the two node machines`
[root@master k8s]# scp -r /opt/etcd/ root@192.168.1.12:/opt/
root@192.168.1.12's password:
etcd                      100%  518   426.8KB/s   00:00
etcd                      100%   18MB 105.0MB/s   00:00
etcdctl                   100%   15MB 108.2MB/s   00:00
ca-key.pem                100% 1679     1.4MB/s   00:00
ca.pem                    100% 1265   396.1KB/s   00:00
server-key.pem            100% 1675     1.0MB/s   00:00
server.pem                100% 1338   525.6KB/s   00:00
[root@master k8s]# scp -r /opt/etcd/ root@192.168.1.13:/opt/
root@192.168.1.13's password:
etcd                      100%  518   816.5KB/s   00:00
etcd                      100%   18MB  87.4MB/s   00:00
etcdctl                   100%   15MB 108.6MB/s   00:00
ca-key.pem                100% 1679     1.3MB/s   00:00
ca.pem                    100% 1265   411.8KB/s   00:00
server-key.pem            100% 1675     1.4MB/s   00:00
server.pem                100% 1338   639.5KB/s   00:00
`Copy the etcd systemd unit to the two node machines`
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.1.12:/usr/lib/systemd/system/
root@192.168.1.12's password:
etcd.service              100%  923   283.4KB/s   00:00
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.1.13:/usr/lib/systemd/system/
root@192.168.1.13's password:
etcd.service              100%  923   347.7KB/s   00:00
node1
[root@node1 ~]# systemctl stop firewalld.service
[root@node1 ~]# setenforce 0
[root@node1 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.12:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.12:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.11:2380,etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node1 ~]# systemctl start etcd
[root@node1 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since 三 2020-01-15 17:53:24 CST; 5s ago    #the state is active (running)
node2
[root@node2 ~]# systemctl stop firewalld.service
[root@node2 ~]# setenforce 0
[root@node2 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.13:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.11:2380,etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node2 ~]# systemctl start etcd
[root@node2 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since 三 2020-01-15 17:53:24 CST; 5s ago    #the state is active (running)
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379" cluster-health
member 9104d301e3b6da41 is healthy: got healthy result from https://192.168.1.11:2379
member 92947d71c72a884e is healthy: got healthy result from https://192.168.1.12:2379
member b2a6d67e1bc8054b is healthy: got healthy result from https://192.168.1.13:2379
cluster is healthy
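If cluster-health ever reports a problem, listing the members with the same certificate flags is a quick way to see which peer and client URLs each node actually registered. This is an optional extra check, not part of the original steps:

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379" member list
# each member should report its name (etcd01/etcd02/etcd03) plus its peerURLs and clientURLs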
`Install the dependencies`
[root@node1 ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
`Add the Aliyun mirror repository`
[root@node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
`Install Docker CE`
[root@node1 ~]# yum install -y docker-ce
`Start Docker and enable it at boot`
[root@node1 ~]# systemctl start docker.service
[root@node1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
`Check that the Docker processes are running`
[root@node1 ~]# ps aux | grep docker
root       5551  0.1  3.6 565460 68652 ?  Ssl  09:13  0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       5759  0.0  0.0 112676   984 pts/1  R+  09:16  0:00 grep --color=auto docker
`Configure a registry mirror to speed up image pulls`
[root@node1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
#network tuning: enable IP forwarding
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
[root@node1 ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@node1 ~]# systemctl restart docker
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
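The same Docker installation is repeated on node2 (step 3 of the plan covers both nodes). To confirm that the registry mirror configuration was actually picked up after the restart, docker info can be checked; this verification is an addition to the original transcript:

[root@node1 ~]# docker info | grep -A1 "Registry Mirrors"
# should print the configured https://w1ogxqvl.mirror.aliyuncs.com mirror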
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
View what was written:
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
Copy the flannel package to all node machines (flannel only needs to be deployed on the nodes):
[root@master etcd-cert]# cd ../
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.1.12:/root
root@192.168.1.12's password:
flannel-v0.10.0-linux-amd64.tar.gz        100% 9479KB  55.6MB/s   00:00
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.1.13:/root
root@192.168.1.13's password:
flannel-v0.10.0-linux-amd64.tar.gz        100% 9479KB  69.5MB/s   00:00
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
`Create the k8s working directories`
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node1 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
`Start the flannel network`
[root@node1 ~]# bash flannel.sh https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
`Connect Docker to flannel`
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
#make the following changes in the [Service] section
 9 [Service]
10 Type=notify
11 # the default is not to use systemd for cgroups because the delegate issues still
12 # exists and systemd currently does not support the cgroup feature set required
13 # for containers run by docker
14 EnvironmentFile=/run/flannel/subnet.env          #add this line below line 13
15 ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    #on line 15, add $DOCKER_NETWORK_OPTIONS before -H
16 ExecReload=/bin/kill -s HUP $MAINPID
17 TimeoutSec=0
18 RestartSec=2
19 Restart=always
#when done, press Esc to leave insert mode and type :wq to save and quit
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.32.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.32.1/24 --ip-masq=false --mtu=1450"    #bip is the subnet Docker will start with
`Restart the Docker service`
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
`Check the flannel network`
[root@node1 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.32.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::344b:13ff:fecb:1e2d  prefixlen 64  scopeid 0x20<link>
        ether 36:4b:13:cb:1e:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27 overruns 0  carrier 0  collisions 0
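The same flannel deployment is repeated on node2 with the package copied earlier. Back on the master you can confirm that each flanneld registered a subnet lease under the key written before; a hedged check reusing the same etcdctl flags (an addition, not part of the original transcript):

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379" ls /coreos.com/network/subnets
# expect one entry per node, e.g. /coreos.com/network/subnets/172.17.32.0-24 for node1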
[root@master k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
`Create the directory for the self-signed apiserver certificates`
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# ls          #k8s-cert.sh must be uploaded into this directory first
k8s-cert.sh
`Create the CA`
[root@master k8s-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
[root@master k8s-cert]# cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Nanjing",
            "ST": "Nanjing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
`Sign the CA (produces ca.pem and ca-key.pem)`
[root@master k8s-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/02/05 10:15:09 [INFO] generating a new CA key and certificate from CSR
2020/02/05 10:15:09 [INFO] generate received request
2020/02/05 10:15:09 [INFO] received CSR
2020/02/05 10:15:09 [INFO] generating key: rsa-2048
2020/02/05 10:15:09 [INFO] encoded CSR
2020/02/05 10:15:09 [INFO] signed certificate with serial number 154087341948227448402053985122760482002707860296
`Create the apiserver certificate; the hosts list covers master1 (192.168.1.11), master2 (192.168.1.14), the VIP (192.168.1.100) and both load balancers (192.168.1.15, 192.168.1.16). Do not leave comments inside the JSON.`
[root@master k8s-cert]# cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.1.11",
      "192.168.1.14",
      "192.168.1.100",
      "192.168.1.15",
      "192.168.1.16",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "NanJing",
            "ST": "NanJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
`Sign the apiserver certificate (produces server.pem and server-key.pem)`
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2020/02/05 11:43:47 [INFO] generate received request
2020/02/05 11:43:47 [INFO] received CSR
2020/02/05 11:43:47 [INFO] generating key: rsa-2048
2020/02/05 11:43:47 [INFO] encoded CSR
2020/02/05 11:43:47 [INFO] signed certificate with serial number 359419453323981371004691797080289162934778938507
2020/02/05 11:43:47 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
`Create the admin certificate`
[root@master k8s-cert]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
`Sign the admin certificate (produces admin.pem and admin-key.pem)`
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2020/02/05 11:46:04 [INFO] generate received request
2020/02/05 11:46:04 [INFO] received CSR
2020/02/05 11:46:04 [INFO] generating key: rsa-2048
2020/02/05 11:46:04 [INFO] encoded CSR
2020/02/05 11:46:04 [INFO] signed certificate with serial number 361885975538105795426233467824041437549564573114
2020/02/05 11:46:04 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites.
For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
`Create the kube-proxy certificate`
[root@master k8s-cert]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "NanJing",
      "ST": "NanJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
`Sign the kube-proxy certificate (produces kube-proxy.pem and kube-proxy-key.pem)`
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2020/02/05 11:47:55 [INFO] generate received request
2020/02/05 11:47:55 [INFO] received CSR
2020/02/05 11:47:55 [INFO] generating key: rsa-2048
2020/02/05 11:47:56 [INFO] encoded CSR
2020/02/05 11:47:56 [INFO] signed certificate with serial number 34747850270017663665747172643822215922289240826
2020/02/05 11:47:56 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# bash k8s-cert.sh
2020/02/05 11:50:08 [INFO] generating a new CA key and certificate from CSR
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 473883155883308900863805079252124099771123043047
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 66483817738746309793417718868470334151539533925
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 245658866069109639278946985587603475325871008240
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:09 [INFO] encoded CSR
2020/02/05 11:50:09 [INFO] signed certificate with serial number 696729766024974987873474865496562197315198733463
2020/02/05 11:50:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
`Unpack the kubernetes server package`
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
`Copy the key binaries`
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master k8s]# cd /root/k8s
`Generate a random token`
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9b3186df3dc799376ad43b6fe0108571
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#token, user name, uid, role
`With the binaries, token and certificates in place, start the apiserver`
[root@master k8s]# bash apiserver.sh 192.168.1.11 https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
`Check that the process started successfully`
[root@master k8s]# ps aux | grep kube
`Check the configuration file`
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \
--bind-address=192.168.1.11 \
--secure-port=6443 \
--advertise-address=192.168.1.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
`Check the HTTPS port it listens on`
[root@master k8s]# netstat -ntap | grep 6443
`Start the scheduler service`
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku
postfix    6212  0.0  0.0  91732  1364 ?     S    11:29  0:00 pickup -l -t unix -u
root       7034  1.1  1.0  45360 20332 ?     Ssl  12:23  0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root       7042  0.0  0.0 112676   980 pts/1 R+   12:23  0:00 grep --color=auto ku
[root@master k8s]# chmod +x controller-manager.sh
`Start the controller-manager`
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
`Check the status of the master components`
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
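The scheduler and controller-manager both point at --master=127.0.0.1:8080, i.e. the apiserver's local insecure port, so a quick sanity check (an extra verification, not in the original transcript) is to confirm both listeners are up:

[root@master k8s]# netstat -ntap | grep -E '6443|8080'
# expect a LISTEN entry on 192.168.1.11:6443 (secure port) and on 127.0.0.1:8080 (local insecure port)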
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# scp kubelet kube-proxy root@192.168.1.12:/opt/kubernetes/bin/
root@192.168.1.12's password:
kubelet                   100%  168MB  81.1MB/s   00:02
kube-proxy                100%   48MB  77.6MB/s   00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.1.13:/opt/kubernetes/bin/
root@192.168.1.13's password:
kubelet                   100%  168MB  86.8MB/s   00:01
kube-proxy                100%   48MB  90.4MB/s   00:00
[root@node1 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip   公共  視頻  文檔  音樂
flannel.sh       initial-setup-ks.cfg                README.md  模板  圖片  下載  桌面
[root@node1 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
`Upload the kubeconfig.sh script into this directory and rename it`
[root@master kubeconfig]# ls
kubeconfig.sh
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
#delete the first 9 lines; that token-generation part was already executed earlier
# set the client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=9b3186df3dc799376ad43b6fe0108571 \        #change the token to the one generated earlier
  --kubeconfig=bootstrap.kubeconfig
#when done, press Esc to leave insert mode and type :wq to save and quit
----how to find the token----
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#use the token "9b3186df3dc799376ad43b6fe0108571"; every deployment generates a different one
-----------------------------
`Set the environment variable (can be written into /etc/profile)`
[root@master kubeconfig]# vim /etc/profile
#press G to jump to the last line, then o to open a new line below
export PATH=$PATH:/opt/kubernetes/bin/
#when done, press Esc to leave insert mode and type :wq to save and quit
[root@master kubeconfig]# source /etc/profile
[root@master kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
[root@master kubeconfig]# kubectl get node
No resources found.
#no node has been added yet
[root@master kubeconfig]# bash kubeconfig 192.168.1.11 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
`Copy the two kubeconfig files to both node machines`
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.1.12:/opt/kubernetes/cfg/
root@192.168.1.12's password:
bootstrap.kubeconfig      100% 2168     2.2MB/s   00:00
kube-proxy.kubeconfig     100% 6270     3.5MB/s   00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.1.13:/opt/kubernetes/cfg/
root@192.168.1.13's password:
bootstrap.kubeconfig      100% 2168     3.1MB/s   00:00
kube-proxy.kubeconfig     100% 6270     7.9MB/s   00:00
`Create the bootstrap clusterrolebinding so kubelets can ask the apiserver to sign their certificates (key step)`
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
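For reference, the part of the script that builds bootstrap.kubeconfig typically boils down to the kubectl config calls below. This is a sketch; the exact contents come from the uploaded kubeconfig.sh, and APISERVER/SSL_DIR here simply stand for its two positional arguments:

APISERVER=$1          # e.g. 192.168.1.11
SSL_DIR=$2            # e.g. /root/k8s/k8s-cert/
BOOTSTRAP_TOKEN=9b3186df3dc799376ad43b6fe0108571

# cluster entry: point at the apiserver and embed the CA certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=https://$APISERVER:6443 \
  --kubeconfig=bootstrap.kubeconfig

# client credentials: the bootstrap token from token.csv
kubectl config set-credentials kubelet-bootstrap \
  --token=$BOOTSTRAP_TOKEN \
  --kubeconfig=bootstrap.kubeconfig

# context tying cluster and user together, then make it the default
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

kube-proxy.kubeconfig is built the same way, except it authenticates with the kube-proxy.pem/kube-proxy-key.pem client certificate instead of the token.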
[root@node1 ~]# bash kubelet.sh 192.168.1.12
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Check that the kubelet service started:
[root@node1 ~]# ps aux | grep kube
[root@node1 ~]# systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2020-02-05 14:54:45 CST; 21s ago    #status is running
node1's kubelet automatically contacts the apiserver to request a certificate, so node1's request shows up on the master:
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg   18s   kubelet-bootstrap   Pending
#Pending means the cluster has not yet issued a certificate to this node
`Check the certificate status again`
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg   3m59s   kubelet-bootstrap   Approved,Issued
#Approved,Issued means the node has been allowed to join the cluster
`Check the cluster nodes; node1 has joined successfully`
[root@master kubeconfig]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
192.168.1.12   Ready    <none>   6m54s   v1.12.3
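The jump from Pending to Approved,Issued happens because the request was approved in between; the command is the same one shown for node2 further down:

[root@master kubeconfig]# kubectl certificate approve node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg
# after this, the CSR condition changes to Approved,Issued and the node appears in kubectl get node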
[root@node1 ~]# bash proxy.sh 192.168.1.12
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 四 2020-02-06 11:11:56 CST; 20s ago    #status is running
[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.1.13:/opt/
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.1.13:/usr/lib/systemd/system/
root@192.168.1.13's password:
kubelet.service           100%  264   291.3KB/s   00:00
kube-proxy.service        100%  231   407.8KB/s   00:00
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# rm -rf *
`Edit the three copied configuration files: kubelet, kubelet.config and kube-proxy`
[root@node2 ssl]# cd ../cfg/
[root@node2 cfg]# vim kubelet
4 --hostname-override=192.168.1.13 \        #line 4: change the hostname override to node2's IP address
#when done, press Esc to leave insert mode and type :wq to save and quit
[root@node2 cfg]# vim kubelet.config
4 address: 192.168.1.13                     #line 4: change the address to node2's IP address
#when done, press Esc to leave insert mode and type :wq to save and quit
[root@node2 cfg]# vim kube-proxy
4 --hostname-override=192.168.1.13          #line 4: change to node2's IP address
#when done, press Esc to leave insert mode and type :wq to save and quit
`Start the services`
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Back on the master, check node2's certificate request:
[root@master k8s]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA   99s   kubelet-bootstrap   Pending
#a new request is waiting to be approved
[root@master k8s]# kubectl certificate approve node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA
certificatesigningrequest.certificates.k8s.io/node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh4w8WJA approved
[root@master k8s]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.12   Ready    <none>   28s   v1.12.3
192.168.1.13   Ready    <none>   26m   v1.12.3
#both nodes have now joined the cluster
On master1, copy the kubernetes directory to master2:
[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.1.14:/opt
Copy the three component unit files (kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service) from master1 to master2:
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.1.14:/usr/lib/systemd/system/
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.1.14 \
7 --advertise-address=192.168.1.14 \
#lines 5 and 7: the IP addresses must be changed to master2's address
#when done, press Esc to leave insert mode and type :wq to save and quit
master2 also needs the etcd directory, since kube-apiserver connects to the etcd cluster with those certificates:
[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.1.14:/opt/
root@192.168.1.14's password:
etcd                      100%  516   535.5KB/s   00:00
etcd                      100%   18MB  90.6MB/s   00:00
etcdctl                   100%   15MB  80.5MB/s   00:00
ca-key.pem                100% 1675     1.4MB/s   00:00
ca.pem                    100% 1265   411.6KB/s   00:00
server-key.pem            100% 1679     2.0MB/s   00:00
server.pem                100% 1338   429.6KB/s   00:00
[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2020-02-07 09:16:57 CST; 56min ago
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 cfg]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2020-02-07 09:17:02 CST; 57min ago
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master2 cfg]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2020-02-07 09:17:07 CST; 58min ago
[root@master2 cfg]# vim /etc/profile
#append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile
[root@master2 cfg]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.12   Ready    <none>   21h   v1.12.3
192.168.1.13   Ready    <none>   22h   v1.12.3
#master2 can now see that node1 and node2 have joined
`lb1`
[root@lb1 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  視頻  文檔  音樂
initial-setup-ks.cfg  nginx.sh         模板  圖片  下載  桌面
`lb2`
[root@lb2 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  視頻  文檔  音樂
initial-setup-ks.cfg  nginx.sh         模板  圖片  下載  桌面
[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# setenforce 0
[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#when done, press Esc to leave insert mode and type :wq to save and quit
`Reload the yum repositories`
[root@lb1 ~]# yum list
`Install nginx`
[root@lb1 ~]# yum install nginx -y
[root@lb1 ~]# vim /etc/nginx/nginx.conf
#insert the following below line 12
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.1.11:6443;    #master1's IP address
        server 192.168.1.14:6443;    #master2's IP address
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
#when done, press Esc to leave insert mode and type :wq to save and quit
`Check the syntax`
[root@lb1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb1 ~]# cd /usr/share/nginx/html/
[root@lb1 html]# ls
50x.html  index.html
[root@lb1 html]# vim index.html
14 <h1>Welcome to master nginx!</h1>        #line 14: add "master" so the two load balancers can be told apart
#when done, press Esc to leave insert mode and type :wq to save and quit
`Start the service`
[root@lb1 html]# systemctl start nginx
Verify in a browser: open 192.168.1.15 and the master nginx welcome page should be shown.
Deploy the keepalived service:
[root@lb1 html]# yum install keepalived -y
`Edit the configuration file`
[root@lb1 html]# cd ~
[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
#overwrite the packaged configuration with the keepalived.conf uploaded earlier
[root@lb1 ~]# vim /etc/keepalived/keepalived.conf
18 script "/etc/nginx/check_nginx.sh"       #line 18: change the directory to /etc/nginx/; the script itself is written below
23 interface ens33                          #change eth0 to ens33 (check the interface name with ifconfig)
24 virtual_router_id 51                     #VRRP router ID of this instance; each instance's ID is unique
25 priority 100                             #priority; the backup server uses 90
31 virtual_ipaddress {
32 192.168.1.100/24                         #change the VIP to the 192.168.1.100 defined earlier
#delete everything from line 38 onwards
#when done, press Esc to leave insert mode and type :wq to save and quit
`Write the health-check script`
[root@lb1 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    #count the nginx processes

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#if the count is 0, stop the keepalived service
#when done, press Esc to leave insert mode and type :wq to save and quit
[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh        #the script is now executable (shown in green)
[root@lb1 ~]# systemctl start keepalived
[root@lb1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.15/24 brd 192.168.1.255 scope global dynamic ens33
       valid_lft 1370sec preferred_lft 1370sec
    inet 192.168.1.100/24 scope global secondary ens33        #the floating VIP is currently on lb1
       valid_lft forever preferred_lft forever
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@lb2 ~]# systemctl stop firewalld.service
[root@lb2 ~]# setenforce 0
[root@lb2 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#when done, press Esc to leave insert mode and type :wq to save and quit
`Reload the yum repositories`
[root@lb2 ~]# yum list
`Install nginx`
[root@lb2 ~]# yum install nginx -y
[root@lb2 ~]# vim /etc/nginx/nginx.conf
#insert the following below line 12
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.1.11:6443;    #master1's IP address
        server 192.168.1.14:6443;    #master2's IP address
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
#when done, press Esc to leave insert mode and type :wq to save and quit
`Check the syntax`
[root@lb2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb2 ~]# vim /usr/share/nginx/html/index.html
14 <h1>Welcome to backup nginx!</h1>        #line 14: add "backup" so the two load balancers can be told apart
#when done, press Esc to leave insert mode and type :wq to save and quit
`Start the service`
[root@lb2 ~]# systemctl start nginx
Verify in a browser: open 192.168.1.16 and the backup nginx welcome page should be shown.
Deploy the keepalived service:
[root@lb2 ~]# yum install keepalived -y
`Edit the configuration file`
[root@lb2 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
#overwrite the packaged configuration with the keepalived.conf uploaded earlier
[root@lb2 ~]# vim /etc/keepalived/keepalived.conf
18 script "/etc/nginx/check_nginx.sh"       #line 18: change the directory to /etc/nginx/; the script itself is written below
22 state BACKUP                             #line 22: change the role from MASTER to BACKUP
23 interface ens33                          #change eth0 to ens33
24 virtual_router_id 51                     #VRRP router ID of this instance; each instance's ID is unique
25 priority 90                              #priority; the backup server uses 90
31 virtual_ipaddress {
32 192.168.1.100/24                         #change the VIP to the 192.168.1.100 defined earlier
#delete everything from line 38 onwards
#when done, press Esc to leave insert mode and type :wq to save and quit
`Write the health-check script`
[root@lb2 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    #count the nginx processes

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#if the count is 0, stop the keepalived service
#when done, press Esc to leave insert mode and type :wq to save and quit
[root@lb2 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb2 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh        #the script is now executable (shown in green)
[root@lb2 ~]# systemctl start keepalived
[root@lb2 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.16/24 brd 192.168.1.255 scope global dynamic ens33
       valid_lft 958sec preferred_lft 958sec
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#the VIP 192.168.1.100 is not here because it currently lives on lb1 (the MASTER)
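At this point a failover test can be run to make sure the VIP actually moves: stop nginx on lb1, let check_nginx.sh take keepalived down, and watch 192.168.1.100 appear on lb2. This check is an addition to the original transcript:

# on lb1: simulate an nginx failure
[root@lb1 ~]# systemctl stop nginx
[root@lb1 ~]# ip a show ens33 | grep 192.168.1.100    # should print nothing once keepalived has stopped

# on lb2: the BACKUP instance should now hold the VIP
[root@lb2 ~]# ip a show ens33 | grep 192.168.1.100

# on lb1: recover
[root@lb1 ~]# systemctl start nginx
[root@lb1 ~]# systemctl start keepalived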
Create the dashboard working directory:
[root@localhost k8s]# mkdir dashboard
Copy the official manifests from:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard
[root@localhost dashboard]# kubectl create -f dashboard-rbac.yaml
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
[root@localhost dashboard]# kubectl create -f dashboard-secret.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
[root@localhost dashboard]# kubectl create -f dashboard-configmap.yaml
configmap/kubernetes-dashboard-settings created
[root@localhost dashboard]# kubectl create -f dashboard-controller.yaml
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
[root@localhost dashboard]# kubectl create -f dashboard-service.yaml
service/kubernetes-dashboard created
//after creating everything, check the resources in the kube-system namespace
[root@localhost dashboard]# kubectl get pods -n kube-system
NAME                                    READY   STATUS              RESTARTS   AGE
kubernetes-dashboard-65f974f565-m9gm8   0/1     ContainerCreating   0          88s
//find out how to access it
[root@localhost dashboard]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-65f974f565-m9gm8   1/1     Running   0          2m49s

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.243   <none>        443:30001/TCP   2m24s
//the dashboard is reachable through any node IP on the NodePort
https://192.168.1.12:30001/
[root@localhost dashboard]# vim dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{
   "CN": "Dashboard",
   "hosts": [],
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "CN",
           "L": "BeiJing",
           "ST": "BeiJing"
       }
   ]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

#dashboard-controller.yaml needs the two certificate lines added to args, then apply it:
# args:
#   # PLATFORM-SPECIFIC ARGS HERE
#   - --auto-generate-certificates
#   - --tls-key-file=dashboard-key.pem
#   - --tls-cert-file=dashboard.pem

[root@localhost dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
2020/02/05 15:29:08 [INFO] generate received request
2020/02/05 15:29:08 [INFO] received CSR
2020/02/05 15:29:08 [INFO] generating key: rsa-2048
2020/02/05 15:29:09 [INFO] encoded CSR
2020/02/05 15:29:09 [INFO] signed certificate with serial number 150066859036029062260457207091479364937405390263
2020/02/05 15:29:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
secret "kubernetes-dashboard-certs" deleted
secret/kubernetes-dashboard-certs created
[root@localhost dashboard]# vim dashboard-controller.yaml
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem
//redeploy
[root@localhost dashboard]# kubectl apply -f dashboard-controller.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/kubernetes-dashboard configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/kubernetes-dashboard configured
[root@localhost dashboard]# kubectl create -f k8s-admin.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
//list the secrets in kube-system
[root@localhost dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-qctfr        kubernetes.io/service-account-token   3      65s
default-token-mmvcg                kubernetes.io/service-account-token   3      7d15h
kubernetes-dashboard-certs         Opaque                                11     10m
kubernetes-dashboard-key-holder    Opaque                                2      63m
kubernetes-dashboard-token-nsc84   kubernetes.io/service-account-token   3      62m
//view the login token
[root@localhost dashboard]# kubectl describe secret dashboard-admin-token-qctfr -n kube-system
Name:         dashboard-admin-token-qctfr
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 73f19313-47ea-11ea-895a-000c297a15fb

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcWN0ZnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzNmMTkzMTMtNDdlYS0xMWVhLTg5NWEtMDAwYzI5N2ExNWZiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.v4YBoyES2etex6yeMPGfl7OT4U9Ogp-84p6cmx3HohiIS7sSTaCqjb3VIvyrVtjSdlT66ZMRzO3MUgj1HsPxgEzOo9q6xXOCBb429m9Qy-VK2JxuwGVD2dIhcMQkm6nf1Da5ZpcYFs8SNT-djAjZNB_tmMY_Pjao4DBnD2t_JXZUkCUNW_O2D0mUFQP2beE_NE2ZSEtEvmesB8vU2cayTm_94xfvtNjfmGrPwtkdH0iy8sH-T0apepJ7wnZNTGuKOsOJf76tU31qF4E5XRXIt-F2Jmv9pEOFuahSBSaEGwwzXlXOVMSaRF9cBFxn-0iXRh0Aq0K21HdPHW1b4-ZQwA
Use this token to log in to the dashboard at the NodePort URL shown earlier.