Environment preparation
We prepare three virtual machines in a one-master, two-node cluster:
| Hostname | IP | Role | Deployed software |
|---|---|---|---|
| k8s-master | 10.211.55.22 | master | apiserver, scheduler, controller-manager, etcd, flanneld |
| k8s-node-1 | 10.211.55.23 | node | kubelet, kube-proxy, etcd, flanneld |
| k8s-node-2 | 10.211.55.24 | node | kubelet, kube-proxy, etcd, flanneld |
My VMs can already ping each other by hostname; if yours cannot, remember to add the host entries on every machine (see the /etc/hosts example after the ping output below).
```
[root@k8s-master ~]# ping k8s-node-1
PING k8s-node-1.localdomain (10.211.55.23) 56(84) bytes of data.
64 bytes from k8s-node-1.shared (10.211.55.23): icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from k8s-node-1.shared (10.211.55.23): icmp_seq=2 ttl=64 time=0.282 ms
```
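If name resolution is missing, a minimal /etc/hosts addition for this cluster looks like the following (run on every machine; adjust if your IPs differ):

```
cat >> /etc/hosts << EOF
10.211.55.22 k8s-master
10.211.55.23 k8s-node-1
10.211.55.24 k8s-node-2
EOF
```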
OS version, plus the things that must be turned off in this test environment (firewalld and SELinux):
```
[root@k8s-master ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@k8s-master ~]# uname -r
3.10.0-327.el7.x86_64
[root@k8s-master ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since 六 2020-03-14 17:52:39 CST; 3h 32min ago
 Main PID: 705 (code=exited, status=0/SUCCESS)
3月 14 17:50:44 Daya-01 systemd[1]: Starting firewalld...
3月 14 17:50:44 Daya-01 systemd[1]: Started firewalld ...
3月 14 17:52:38 k8s-master systemd[1]: Stopping firewa...
3月 14 17:52:39 k8s-master systemd[1]: Stopped firewal...
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master ~]# getenforce
Disabled
```
Package download links:
| Package | Download URL |
|---|---|
| kubernetes-node-linux-amd64.tar.gz | https://dl.k8s.io/v1.15.1/kubernetes-node-linux-amd64.tar.gz |
| kubernetes-server-linux-amd64.tar.gz | https://dl.k8s.io/v1.15.1/kubernetes-server-linux-amd64.tar.gz |
| flannel-v0.11.0-linux-amd64.tar.gz | https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz |
| etcd-v3.3.10-linux-amd64.tar.gz | https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz |
Disable swap (run on all three machines):
```
[root@k8s-master ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```
Set the kernel parameters Docker and Kubernetes need, then install and start Docker (run on all three machines):
```
cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf

# Configure the yum repo
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all; yum repolist -y

# Install Docker, version 18.06.2
yum install docker-ce-18.06.2.ce-3.el7 -y
systemctl start docker && systemctl enable docker
```
Create the required directories (run on all three machines):
```
# Directories for the install packages and the TLS material
mkdir /data/{install,ssl_config} -p
mkdir /data/ssl_config/{etcd,kubernetes} -p

# Installation directories
mkdir /cloud/k8s/etcd/{bin,cfg,ssl} -p
mkdir /cloud/k8s/kubernetes/{bin,cfg,ssl} -p
```
Set up passwordless SSH trust from the master node to every machine:
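If the master does not yet have an SSH key pair, generate one first — a minimal sketch (accept the defaults at the prompts):

```
ssh-keygen -t rsa
```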
```
[root@k8s-master .ssh]# ssh-copy-id k8s-master
The authenticity of host 'k8s-master (10.211.55.22)' can't be established.
ECDSA key fingerprint is ee:4e:aa:d1:10:bb:f5:ec:0f:19:73:63:90:42:b4:b4.
Are you sure you want to continue connecting (yes/no)? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'k8s-master'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master .ssh]# ssh-copy-id k8s-node-1
The authenticity of host 'k8s-node-1 (10.211.55.23)' can't be established.
ECDSA key fingerprint is ee:4e:aa:d1:10:bb:f5:ec:0f:19:73:63:90:42:b4:b4.
Are you sure you want to continue connecting (yes/no)? yes
root@k8s-node-1's password:
Number of key(s) added: 1

[root@k8s-master .ssh]# ssh-copy-id k8s-node-2
The authenticity of host 'k8s-node-2 (10.211.55.24)' can't be established.
ECDSA key fingerprint is ee:4e:aa:d1:10:bb:f5:ec:0f:19:73:63:90:42:b4:b4.
Are you sure you want to continue connecting (yes/no)? yes
root@k8s-node-2's password:
Number of key(s) added: 1
```

Verify that every host can be reached without a password:

```
[root@k8s-master ~]# for i in k8s-master k8s-node-1 k8s-node-2 ; do ssh $i hostname ; done
k8s-master
k8s-node-1
k8s-node-2
```
Add the binary directories to the PATH environment variable (on every machine):
```
echo 'export PATH=$PATH:/cloud/k8s/etcd/bin/:/cloud/k8s/kubernetes/bin/' >> /etc/profile
```
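Then reload the profile in the current shell so the new PATH takes effect immediately:

```
source /etc/profile
```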
Generate the etcd certificates (this step is done on the master node only). First install the cfssl toolchain:
```
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
```
Create the etcd CA signing configuration:
```
cd /data/ssl_config/etcd/
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
Create the etcd CA certificate signing request (CSR):
```
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
```
Create the etcd server CSR (the hosts list must contain every etcd member's hostname and IP):
```
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "k8s-master",
    "k8s-node-1",
    "k8s-node-2",
    "10.211.55.22",
    "10.211.55.23",
    "10.211.55.24"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
```
Generate the etcd CA certificate/key and the server certificate/key:
```
cd /data/ssl_config/etcd/
# Generate the CA certificate and key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the server certificate and key
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
```
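Optionally, double-check that the hostnames and IPs were baked into the server certificate as SANs — a quick sanity check using openssl:

```
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
```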
Create the Kubernetes CA signing configuration:
```
cd /data/ssl_config/kubernetes/
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
```
Create the Kubernetes CA CSR:
```
cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
Create the kube-apiserver CSR (hosts must include the cluster service IP 10.0.0.1, loopback, the master IP, and the in-cluster DNS names):
```
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "10.211.55.22",
    "k8s-master",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
Create the kube-proxy CSR:
```
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
```
Generate the Kubernetes CA certificate and key, plus the apiserver and kube-proxy certificates:
```
# Generate the CA certificate and key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the api-server certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# Generate the kube-proxy certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
```
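After these commands, /data/ssl_config/kubernetes/ should contain the following PEM files — a quick sanity check:

```
ls *.pem
# ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
```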
Deploy the etcd cluster (same steps on all three machines):
```
cd /data/install/
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /cloud/k8s/etcd/bin/
```
Edit the configuration file /cloud/k8s/etcd/cfg/etcd (same layout on all three nodes, but remember to change the member name and IP addresses per node):
```
### k8s-master node
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.211.55.22:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.211.55.22:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.211.55.22:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.211.55.22:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.211.55.22:2380,etcd02=https://10.211.55.23:2380,etcd03=https://10.211.55.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

### k8s-node-1
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.211.55.23:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.211.55.23:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.211.55.23:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.211.55.23:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.211.55.22:2380,etcd02=https://10.211.55.23:2380,etcd03=https://10.211.55.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

### k8s-node-2
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.211.55.24:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.211.55.24:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.211.55.24:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.211.55.24:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.211.55.22:2380,etcd02=https://10.211.55.23:2380,etcd03=https://10.211.55.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
```
Create the etcd systemd unit file (same on all three machines):
```
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/etcd/cfg/etcd
ExecStart=/cloud/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/cloud/k8s/etcd/ssl/server.pem \
--peer-key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```
Put the etcd certificates in place and copy everything to the node machines:
```
cd /data/ssl_config/etcd/
cp ca*pem server*pem /cloud/k8s/etcd/ssl
cd /cloud/k8s/
scp -r etcd k8s-node-1:/cloud/k8s/
scp -r etcd k8s-node-2:/cloud/k8s/
scp /usr/lib/systemd/system/etcd.service k8s-node-1:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service k8s-node-2:/usr/lib/systemd/system/etcd.service
```
Start etcd on all three machines at roughly the same time (the first member blocks waiting for its peers), then check the cluster health:
```
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

[root@k8s-master bin]# ./etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem --cert-file=/cloud/k8s/etcd/ssl/server.pem --key-file=/cloud/k8s/etcd/ssl/server-key.pem cluster-health
member 67f5fb1fce7850cb is healthy: got healthy result from https://10.211.55.23:2379
member bd1ce44b83380692 is healthy: got healthy result from https://10.211.55.22:2379
member ddc9604a558260cb is healthy: got healthy result from https://10.211.55.24:2379
cluster is healthy
```
Deploy the flannel network
Write the pod network configuration into the etcd cluster (master node only):
```
[root@k8s-master bin]# cd /cloud/k8s/etcd/bin/
[root@k8s-master bin]# ./etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem --cert-file=/cloud/k8s/etcd/ssl/server.pem --key-file=/cloud/k8s/etcd/ssl/server-key.pem --endpoints="https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379" set /coreos.com/network/config '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}
```
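You can read the key back to confirm it was stored (same TLS flags; the output should echo the JSON written above):

```
./etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
  --cert-file=/cloud/k8s/etcd/ssl/server.pem \
  --key-file=/cloud/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379" \
  get /coreos.com/network/config
```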
Install and configure flannel:
```
cd /data/install/
tar xf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /cloud/k8s/kubernetes/bin/

[root@k8s-master cfg]# cat flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379 -etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem -etcd-certfile=/cloud/k8s/etcd/ssl/server.pem -etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"
```
Create the flanneld systemd unit file:
```
vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/kubernetes/cfg/flanneld
ExecStart=/cloud/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/cloud/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Configure Docker to start on the subnet that flannel assigns:
```
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
```
Sync the required files to the other node machines, then start the services:
```
cd /cloud/k8s/
scp -r kubernetes k8s-node-1:/cloud/k8s/
scp -r kubernetes k8s-node-2:/cloud/k8s/
scp /cloud/k8s/kubernetes/cfg/flanneld k8s-node-1:/cloud/k8s/kubernetes/cfg/flanneld
scp /cloud/k8s/kubernetes/cfg/flanneld k8s-node-2:/cloud/k8s/kubernetes/cfg/flanneld
scp /usr/lib/systemd/system/docker.service k8s-node-1:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service k8s-node-2:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/flanneld.service k8s-node-1:/usr/lib/systemd/system/flanneld.service
scp /usr/lib/systemd/system/flanneld.service k8s-node-2:/usr/lib/systemd/system/flanneld.service

# Start the services (run on every node)
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
```
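After restarting Docker, it should have picked up the flannel subnet via /run/flannel/subnet.env. A quick check on any node (the exact addresses will differ per node):

```
cat /run/flannel/subnet.env        # DOCKER_NETWORK_OPTIONS should contain a --bip=172.18.x.1/24 option
ip addr show docker0 | grep inet   # docker0 should sit inside the flannel subnet
```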
Verify that the flannel configuration took effect:
```
[root@k8s-master bin]# ip a|grep flannel
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    inet 172.18.15.0/32 scope global flannel.1
```
Deploy the master node
The master needs three key components: kube-apiserver, kube-scheduler, and kube-controller-manager. The scheduler and controller-manager can run as a cluster: leader election picks one active working process while the other instances stay in standby.
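Once the components below are running, you can see which instance currently holds the leader lease; in this release the lock is kept as an annotation on an Endpoints object in the kube-system namespace. A quick sketch, run on the master after the services are started:

```
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
```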
Copy the required binaries and certificates into place:
```
cd /data/install/
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /cloud/k8s/kubernetes/bin/
cd /data/ssl_config/kubernetes/
cp *pem /cloud/k8s/kubernetes/ssl/
```
Deploy the kube-apiserver component:
```
# Create the TLS Bootstrapping Token
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
88029d7c2d35a6621699685cb25b466d

# This token is reused later in the kubelet bootstrap kubeconfig;
# if it is copied incorrectly, kubelet cannot connect to the apiserver.
# vim /cloud/k8s/kubernetes/cfg/token.csv
88029d7c2d35a6621699685cb25b466d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
```
Create the kube-apiserver configuration file:
```
vim /cloud/k8s/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379 \
--bind-address=10.211.55.22 \
--secure-port=6443 \
--advertise-address=10.211.55.22 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/cloud/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"
```
Create the kube-apiserver systemd unit file:
```
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/cloud/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service and verify:
```
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-apiserver.service
[root@k8s-master kubernetes]# ps -ef |grep apiserver
root 8225 1 23 19:48 ? 00:00:04 /cloud/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.211.55.22:2379,https://10.211.55.23:2379,https://10.211.55.24:2379 --bind-address=10.211.55.22 --secure-port=6443 --advertise-address=10.211.55.22 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem --etcd-certfile=/cloud/k8s/etcd/ssl/server.pem --etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem
root 8261 1518 0 19:49 pts/0 00:00:00 grep --color=auto apiserver
```
Deploy kube-scheduler
Edit the configuration file:
```
vim /cloud/k8s/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
```
Create the kube-scheduler systemd unit file:
```
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/cloud/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service and verify:
```
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master kubernetes]# systemctl restart kube-scheduler.service
[root@k8s-master kubernetes]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-03-14 19:52:49 CST; 19s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 8635 (kube-scheduler)
   Memory: 45.0M
   CGroup: /system.slice/kube-scheduler.service
           └─8635 /cloud/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.280532 8635 shared_informer.go:176] caches populated
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.380933 8635 shared_informer.go:176] caches populated
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.481138 8635 shared_informer.go:176] caches populated
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.582034 8635 shared_informer.go:176] caches populated
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.682507 8635 shared_informer.go:176] caches populated
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.782895 8635 shared_informer.go:176] caches populated
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.883521 8635 shared_informer.go:176] caches populated
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.883621 8635 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.895745 8635 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
Mar 14 19:52:51 k8s-master kube-scheduler[8635]: I0314 19:52:51.997305 8635 shared_informer.go:176] caches populated
```
Deploy kube-controller-manager:
Edit the configuration file:
```
vim /cloud/k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem"
```
Create the kube-controller-manager systemd unit file:
```
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/cloud/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service and verify:
```
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master kubernetes]# systemctl restart kube-controller-manager
[root@k8s-master kubernetes]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-03-14 19:55:16 CST; 6s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 8982 (kube-controller)
   Memory: 121.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─8982 /cloud/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-...

Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739674 8982 garbagecollector.go:199] syncing garbage collector with updated resources from..., Resourc
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: ge.k8s.io/v1beta1, Resource=csidrivers storage.k8s.io/v1beta1, Resource=csinodes], removed: []
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739702 8982 garbagecollector.go:205] reset restmapper
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739768 8982 graph_builder.go:220] synced monitors; added 0, kept 49, removed 0
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739782 8982 graph_builder.go:252] started 0 new monitors, 49 currently running
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739787 8982 garbagecollector.go:220] resynced monitors
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.739805 8982 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.840190 8982 shared_informer.go:176] caches populated
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.840223 8982 controller_utils.go:1036] Caches are synced for garbage collector controller
Mar 14 19:55:19 k8s-master kube-controller-manager[8982]: I0314 19:55:19.840234 8982 garbagecollector.go:240] synced garbage collector
Hint: Some lines were ellipsized, use -l to show in full.
```
Check the master component status:
```
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
```
Next, deploy the node machines:
A node only needs docker, kubelet, and kube-proxy. kubelet receives requests from the apiserver and manages the pods on its host, including running interactive exec sessions; kube-proxy watches the apiserver for Service and Endpoints changes and programs routing rules that load-balance traffic to the backing pods.
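With kube-proxy in its default iptables mode you can later inspect the rules it programs on a node — a sketch (the chains exist once kube-proxy is running, as set up below):

```
iptables -t nat -S KUBE-SERVICES | head
iptables -t nat -S KUBE-NODEPORTS | head
```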
Copy the required binaries (kubelet and kube-proxy, from the server tarball extracted earlier) to the node machines:
```
cd /data/install/kubernetes/server/bin/
scp kubelet kube-proxy k8s-node-1:/cloud/k8s/kubernetes/bin/
scp kubelet kube-proxy k8s-node-2:/cloud/k8s/kubernetes/bin/
```
Create the kubelet bootstrap.kubeconfig file (on the master, in /data/ssl_config/kubernetes/; BOOTSTRAP_TOKEN must match the token in token.csv created earlier):
```
[root@k8s-master kubernetes]# cat environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=449bbeb0ea7e50f321087a123a509a19
KUBE_APISERVER="https://10.211.55.22:6443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set the client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
```
Run the script to generate the config file:
sh environment.sh
Create the kubelet.kubeconfig file:
```
vim envkubelet.kubeconfig.sh

# Create the kubelet kubeconfig
BOOTSTRAP_TOKEN=449bbeb0ea7e50f321087a123a509a19
KUBE_APISERVER="https://10.211.55.22:6443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet.kubeconfig

# Set the client credentials
kubectl config set-credentials kubelet \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet \
  --kubeconfig=kubelet.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=kubelet.kubeconfig
```
Run the script to generate the file:
sh envkubelet.kubeconfig.sh
Create the kube-proxy kubeconfig file:
```
vim env_proxy.sh

# Create the kube-proxy kubeconfig
BOOTSTRAP_TOKEN=449bbeb0ea7e50f321087a123a509a19
KUBE_APISERVER="https://10.211.55.22:6443"

kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
```
Run the script to generate the file:
sh env_proxy.sh
Copy the generated kubeconfig files to every node:
```
scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig k8s-node-1:/cloud/k8s/kubernetes/cfg/
scp -rp bootstrap.kubeconfig kube-proxy.kubeconfig k8s-node-2:/cloud/k8s/kubernetes/cfg/
```
Deploy kubelet (run on both node machines):
Create the kubelet parameter configuration template (change address to the local node's IP on each machine):
```
vim /cloud/k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.211.55.23
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
```
Create the kubelet options file (change --hostname-override to the local hostname on each node):
```
vim /cloud/k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node-1 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
```
Create the kubelet systemd unit file:
```
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kubelet
ExecStart=/cloud/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```
On the master node, use kubectl to bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role:
```
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
```
Start kubelet on the node machines:
```
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
```
If you see errors like the following after starting:
```
Mar 14 20:16:54 k8s-node-1 kubelet: I0314 20:16:54.509597 7907 bootstrap.go:148] No valid private key and/or certificate found, reusing existing private key or creating a new one
Mar 14 20:16:54 k8s-node-1 kubelet: I0314 20:16:54.521982 7907 bootstrap.go:293] Failed to connect to apiserver: the server has asked for the client to provide credentials
```
check whether the token in bootstrap.kubeconfig was entered incorrectly; it must match the token in the apiserver's token.csv, otherwise the kubelet cannot authenticate. A quick way to compare the two is shown below.
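A minimal check, assuming the file paths used in this guide:

```
# On the master: the token the apiserver accepts
cat /cloud/k8s/kubernetes/cfg/token.csv
# On the node: the token the kubelet presents
grep 'token:' /cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig
```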
Approve the kubelet certificate signing requests (on the master):
```
[root@k8s-master bin]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-Hdu4yWbOWhQY6N1vYxp-7qWWOWMzs84IdJqVfDoRmxU   14s   kubelet-bootstrap   Pending
node-csr-TKCdht4JTnx57jyMp7qvhnW79L2ermZhRB01QrgzP9A   79s   kubelet-bootstrap   Pending

[root@k8s-master bin]# kubectl certificate approve node-csr-Hdu4yWbOWhQY6N1vYxp-7qWWOWMzs84IdJqVfDoRmxU node-csr-TKCdht4JTnx57jyMp7qvhnW79L2ermZhRB01QrgzP9A
certificatesigningrequest.certificates.k8s.io/node-csr-Hdu4yWbOWhQY6N1vYxp-7qWWOWMzs84IdJqVfDoRmxU approved
certificatesigningrequest.certificates.k8s.io/node-csr-TKCdht4JTnx57jyMp7qvhnW79L2ermZhRB01QrgzP9A approved

[root@k8s-master bin]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-Hdu4yWbOWhQY6N1vYxp-7qWWOWMzs84IdJqVfDoRmxU   114s    kubelet-bootstrap   Approved,Issued
node-csr-TKCdht4JTnx57jyMp7qvhnW79L2ermZhRB01QrgzP9A   2m59s   kubelet-bootstrap   Approved,Issued
```
The cluster's node status at this point:
```
[root@k8s-master bin]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-node-1   Ready    <none>   59s   v1.15.1
k8s-node-2   Ready    <none>   59s   v1.15.1
```
Deploy the kube-proxy component:
Edit the configuration file (use the local hostname in --hostname-override on each node):
```
vim /cloud/k8s/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node-1 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
```
Create the kube-proxy systemd unit file:
```
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-proxy
ExecStart=/cloud/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start and check the service:
```
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

[root@k8s-node-1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)
   Active: active (running) since 六 2020-03-14 21:14:43 CST; 1h 48min ago
 Main PID: 13129 (kube-proxy)
   Memory: 8.8M
   CGroup: /system.slice/kube-proxy.service
           ‣ 13129 /cloud/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=k8s-node-1 --cluster-cidr=10.0.0.0/24 --kubeconfig=/cloud/k8s/kubernetes/cfg/ku...

3月 14 23:03:08 k8s-node-1 kube-proxy[13129]: I0314 23:03:08.025576 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:08 k8s-node-1 kube-proxy[13129]: I0314 23:03:08.092317 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:10 k8s-node-1 kube-proxy[13129]: I0314 23:03:10.034811 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:10 k8s-node-1 kube-proxy[13129]: I0314 23:03:10.099997 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:12 k8s-node-1 kube-proxy[13129]: I0314 23:03:12.044579 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:12 k8s-node-1 kube-proxy[13129]: I0314 23:03:12.111595 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:14 k8s-node-1 kube-proxy[13129]: I0314 23:03:14.053605 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:14 k8s-node-1 kube-proxy[13129]: I0314 23:03:14.121189 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:16 k8s-node-1 kube-proxy[13129]: I0314 23:03:16.064692 13129 config.go:132] Calling handler.OnEndpointsUpdate
3月 14 23:03:16 k8s-node-1 kube-proxy[13129]: I0314 23:03:16.132973 13129 config.go:132] Calling handler.OnEndpointsUpdate
```
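As a final smoke test you can schedule a small workload and expose it through a NodePort — a sketch using a public nginx image (image name and port are only examples):

```
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide   # pods should land on the worker nodes with 172.18.x.x IPs
kubectl get svc nginx      # note the NodePort, then curl http://<node-ip>:<nodeport>
```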