Learning Kubernetes: Binary Deployment of 1.16

Server Planning and System Initialization

1. Server plan

10.255.20.205   Master01   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
10.255.20.6     Master02   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
10.255.20.242   Master03   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
10.255.20.117   Node01     kubelet, kube-proxy, docker
10.255.20.176   Node02     kubelet, kube-proxy, docker

2. System initialization (run on all nodes)

Disable the firewall:
# systemctl stop firewalld
# systemctl disable firewalld

Disable SELinux:
# setenforce 0                                        # temporary
# sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent

Disable swap:
# swapoff -a        # temporary
# vim /etc/fstab    # permanent (comment out the swap line)

Sync system time:
# ntpdate time.windows.com

Add hosts entries:
# vim /etc/hosts
10.255.20.205 master01
10.255.20.6   master02
10.255.20.242 master03
10.255.20.117 node01
10.255.20.176 node02

Set the hostname:
# hostnamectl set-hostname <node-name>

2. Install dependencies and upgrade the kernel to the latest 5.x version

function Install_depend_environment(){
    rpm -qa | grep nfs-utils &> /dev/null && echo -e "Dependencies already installed, skipping this step" && return
    yum install -y nfs-utils curl yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl telnet
    echo -e "Upgrading the CentOS 7 kernel to 5.x to avoid Docker-ce compatibility problems"
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org && \
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel repolist && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64 && \
    yum remove -y kernel-tools-libs.x86_64 kernel-tools.x86_64 && \
    yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64 && \
    grub2-set-default 0
    modprobe br_netfilter
    cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
    sysctl -p /etc/sysctl.d/k8s.conf
    ls /proc/sys/net/bridge
}
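The function above only defines the steps; a minimal invocation sketch (my assumption about how it was run, since the new kernel only takes effect after a reboot):

# Run the dependency/kernel function defined above, then reboot into the 5.x kernel
Install_depend_environment && reboot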

3. After rebooting, check that the kernel module is loaded

[root@master03 ~]# ls /proc/sys/net/bridge
bridge-nf-call-arptables  bridge-nf-call-ip6tables  bridge-nf-call-iptables  bridge-nf-filter-pppoe-tagged  bridge-nf-filter-vlan-tagged  bridge-nf-pass-vlan-input-dev

ETCD Cluster Installation

1. Generate the etcd certificates (on any one etcd node)

# cd TLS/etcd/
# ls    # these files are present by default:
ca-config.json  ca-csr.json  server-csr.json  generate_etcd_cert.sh

# ca-config.json and ca-csr.json generate ca.pem, ca-key.pem, and ca.csr
# server-csr.json generates server.pem, server-key.pem, and server.csr
# generate_etcd_cert.sh is the script that generates both the CA and the server cert
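The contents of ca-config.json and ca-csr.json are not shown in this article. For reference, a typical cfssl pair looks like the sketch below; the profile name "www" and the CA's CN are assumptions, so match them to whatever generate_etcd_cert.sh actually uses:

ca-config.json (signing policy; "www" profile is assumed):
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [ "signing", "key encipherment", "server auth", "client auth" ]
      }
    }
  }
}

ca-csr.json (the CA's own CSR; CN is assumed):
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [ { "C": "CN", "L": "BeiJing", "ST": "BeiJing" } ]
}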

1. Install the cfssl tools

[root@master01 ~]# cd TLS
[root@master01 TLS]# ls
cfssl  cfssl-certinfo  cfssljson  cfssl.sh  etcd  k8s
[root@master01 TLS]# ./cfssl.sh

2. Edit the request file so its hosts field contains every etcd node IP

[root@master01 TLS]# cd etcd/
[root@master01 etcd]# vim server-csr.json 
{
    "CN": "etcd",
    "hosts": [
        "10.255.20.205",   # master1's IP
        "10.255.20.6",     # master2's IP
        "10.255.20.242"    # master3's IP
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

(JSON has no comment syntax; strip the # annotations before saving.)

3. Generate the certificates

[root@master01 etcd]# ./generate_etcd_cert.sh    # generates both the CA and the server cert in one go; the two steps can also be run separately, as sketched below
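Running the two steps by hand looks roughly like this (the -profile name must match the profile defined in ca-config.json; "www" is an assumption):

# Step 1: self-sign the CA (produces ca.pem, ca-key.pem, ca.csr)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

# Step 2: issue the etcd server certificate against that CA
# (produces server.pem, server-key.pem, server.csr)
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=www server-csr.json | cfssljson -bare server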

4. Check the generated CA and server certs

[root@master01 etcd]# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem

2. Deploy the three etcd nodes

1. Set up etcd on one node first

[root@master01]# tar zxvf etcd.tar.gz
[root@master01]# cd etcd
[root@master01 etcd]# cp /root/TLS/etcd/{ca,server,server-key}.pem ssl    # certificates
[root@master01 etcd]# cp -r etcd /opt
[root@master01 etcd]# cp etcd.service /usr/lib/systemd/system

2. Distribute the etcd setup from master01 to the other two nodes

[root@master01 ~]# scp -r /opt/etcd root@10.255.20.242:/opt
[root@master01 ~]# scp /usr/lib/systemd/system/etcd.service root@10.255.20.242:/usr/lib/systemd/system
[root@master01 ~]# scp -r /opt/etcd root@10.255.20.6:/opt
[root@master01 ~]# scp /usr/lib/systemd/system/etcd.service root@10.255.20.6:/usr/lib/systemd/system

3. Edit the etcd config file on each of the three nodes

Mainly the node name and the IPs. (A per-node sed sketch follows the config below.)

# vim /opt/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="master01"                                           # change to this node's name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.255.20.205:2380"             # change to this node's IP
ETCD_LISTEN_CLIENT_URLS="https://10.255.20.205:2379"           # change to this node's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.255.20.205:2380"  # change to this node's IP
ETCD_ADVERTISE_CLIENT_URLS="https://10.255.20.205:2379"        # change to this node's IP
ETCD_INITIAL_CLUSTER="master01=https://10.255.20.205:2380,master02=https://10.255.20.6:2380,master03=https://10.255.20.242:2380"  # all three names, IPs, and ports
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

(The # annotations are for this article only; remove them from the real file.)
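A hedged convenience sketch for the per-node edits (shown for master02; editing the file by hand works just as well). It deliberately rewrites only the per-node keys so that ETCD_INITIAL_CLUSTER, which lists all three members, stays intact:

NODE_NAME=master02     # this node's name
NODE_IP=10.255.20.6    # this node's IP
sed -i \
  -e "s#^ETCD_NAME=.*#ETCD_NAME=\"${NODE_NAME}\"#" \
  -e "s#^ETCD_LISTEN_PEER_URLS=.*#ETCD_LISTEN_PEER_URLS=\"https://${NODE_IP}:2380\"#" \
  -e "s#^ETCD_LISTEN_CLIENT_URLS=.*#ETCD_LISTEN_CLIENT_URLS=\"https://${NODE_IP}:2379\"#" \
  -e "s#^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*#ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://${NODE_IP}:2380\"#" \
  -e "s#^ETCD_ADVERTISE_CLIENT_URLS=.*#ETCD_ADVERTISE_CLIENT_URLS=\"https://${NODE_IP}:2379\"#" \
  /opt/etcd/cfg/etcd.conf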

4. Start etcd

# systemctl start etcd
# systemctl enable etcd

Note: the first node's systemctl start may appear to hang until a second member comes up, so start etcd on all three nodes.

5. Check cluster health

# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://10.255.20.205:2379,https://10.255.20.6:2379,https://10.255.20.242:2379" \
cluster-health

member 37f20611ff3d9209 is healthy: got healthy result from https://10.255.20.205:2379
member b10f0bac3883a232 is healthy: got healthy result from https://10.255.20.6:2379
member b46624837acedac9 is healthy: got healthy result from https://10.255.20.242:2379
cluster is healthy

Master Node Deployment

1. Generate the apiserver certificates (on any one master node, then distribute to the other two)

1. Edit server-csr.json

The LB and all master node IPs must be listed up front; if you add a new master later, you will have to regenerate and redistribute the certificates.

[root@master01 ~]# cd TLS/k8s
[root@master01 k8s]# vi server-csr.json 
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.255.20.205",   # master1 IP
        "10.255.20.6",     # master2 IP
        "10.255.20.242",   # master3 IP
        "10.255.20.165"    # DiDi Cloud internal LB IP
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}

(JSON has no comment syntax; strip the # annotations before saving.)

2. Generate the certificates

[root@master01 k8s]# ./generate_k8s_cert.sh
[root@master01 k8s]# ls *pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

2. Deploy kube-apiserver, kube-controller-manager, and kube-scheduler

All master component binaries and configs live in the k8s-master package.

1. Copy the certificates and the three master components' unit files

[root@master01]# tar zxvf k8s-master.tar.gz
[root@master01]# cd k8s-master
[root@master01 k8s-master]# tree kubernetes/
kubernetes/
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubectl
│   └── kube-scheduler
├── cfg
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-scheduler.conf
│   └── token.csv
├── logs
└── ssl
[root@master01 k8s-master]# cp /root/TLS/k8s/*.pem kubernetes/ssl
[root@master01 k8s-master]# cp -r kubernetes /opt
[root@master01 k8s-master]# cp kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system

2. Edit the apiserver config file

[root@master01 k8s-master]# cd kubernetes/cfg/
[root@master01 cfg]# vim kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://10.255.20.205:2379,https://10.255.20.6:2379,https://10.255.20.242:2379 \   # etcd cluster endpoints
--bind-address=10.255.20.205 \        # this node's IP
--secure-port=6443 \
--advertise-address=10.255.20.205 \   # this node's IP
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \   # service clusterIP range; must match kube-controller-manager.conf and kube-proxy-config.yml
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

(The # annotations are for this article only; remove them from the real file, where they would break the option string.)

3. Edit the kube-controller-manager config file

[root@master01 cfg]# vim kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \   # connects to the apiserver's local insecure address
--address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \   # pod IP range
--service-cluster-ip-range=10.0.0.0/24 \   # service clusterIP range; must match kube-apiserver.conf and kube-proxy-config.yml
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"

4. Edit the kube-scheduler config file

[root@master01 cfg]# vim kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \   # connects to the apiserver
--address=127.0.0.1"

5. Start the master components and enable them at boot

# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
# systemctl enable kube-apiserver
# systemctl enable kube-controller-manager
# systemctl enable kube-scheduler
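A quick sanity check before moving on; the sketch assumes kubectl (which ships in the package's bin directory, per the tree above) is not yet on PATH:

# Put kubectl on PATH (source path per the package layout shown above)
cp /opt/kubernetes/bin/kubectl /usr/local/bin/
# On 1.16 this should list scheduler, controller-manager, and etcd-0/1/2 as Healthy
kubectl get cs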

6. Distribute the master components to the other two nodes (the kubernetes directory and unit files)

# Run on master01, where the master components are already set up (repeat for 10.255.20.242)
[root@master01]# scp -r /opt/kubernetes root@10.255.20.6:/opt
[root@master01]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@10.255.20.6:/usr/lib/systemd/system

7. On the other two nodes, change the apiserver config to the local IP, then start and enable the master components

##### On each of the remaining two nodes, point the apiserver config at the local IP #####
# vim /opt/kubernetes/cfg/kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://10.255.20.205:2379,https://10.255.20.6:2379,https://10.255.20.242:2379 \
--bind-address=10.255.20.6 \        # this node's IP
--secure-port=6443 \
--advertise-address=10.255.20.6 \   # this node's IP
...                                 # the remaining options are unchanged

# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
# systemctl enable kube-apiserver
# systemctl enable kube-controller-manager
# systemctl enable kube-scheduler

3. Enable TLS bootstrapping

1. Authorize kubelet TLS bootstrapping

[root@master01]# cat /opt/kubernetes/cfg/token.csv
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

2. Grant the kubelet-bootstrap user permissions (running on one master is enough)

[root@master01]# kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Note: you can generate a token of your own and swap it in (a rotation sketch follows below):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '
# The token configured on the apiserver must match the token in each node's bootstrap.kubeconfig.
On the masters, kube-apiserver.conf references it via --token-auth-file=/opt/kubernetes/cfg/token.csv
On the nodes, the "token: c47ffb939f5ca36231d9e3121a252940" line in bootstrap.kubeconfig is the same value stored in token.csv
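Putting that together, a hedged sketch for rotating the token (run on every master, then update every node):

# Generate a fresh token and rewrite token.csv in the same format as above
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
# The same value must then go into each node's bootstrap.kubeconfig ("token: ..."),
# and kube-apiserver must be restarted to pick it up
echo "${TOKEN}"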

Node Component Deployment

1. Install Docker (on both node machines)

Binary package download: https://download.docker.com/linux/static/stable/x86_64/

[root@node01 ]# tar zxvf k8s-node.tar.gz
[root@node01 ]# tar zxvf docker-18.09.6.tgz
[root@node01 ]# mv docker/* /usr/bin
[root@node01 ]# mkdir /etc/docker
[root@node01 ]# mv daemon.json /etc/docker
[root@node01 ]# mv docker.service /usr/lib/systemd/system
[root@node01 ]# systemctl start docker
[root@node01 ]# systemctl enable docker
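The daemon.json shipped in the package is not shown in this article; for reference, a minimal one might look like the sketch below (the mirror URL and log settings are assumptions, not the package's actual contents):

{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" }
}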

2. Deploy kubelet and kube-proxy

1. Copy the working directory and unit files (on both nodes)

[root@node01]# tree kubernetes
kubernetes/
├── bin
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   └── kube-proxy.kubeconfig
├── logs
└── ssl
[root@node01]# mv kubernetes /opt
[root@node01]# cp kubelet.service kube-proxy.service /usr/lib/systemd/system

2. Copy the certificates the two nodes need from a master

[root@master01]# cd TLS/k8s
[root@master01 k8s]# scp ca.pem kube-proxy*.pem root@10.255.20.117:/opt/kubernetes/ssl/
[root@master01 k8s]# scp ca.pem kube-proxy*.pem root@10.255.20.176:/opt/kubernetes/ssl/

3. On both nodes, set the apiserver IP in the kubeconfig files

Why is only one IP configured when there are three masters? Because 6443 here is the apiserver port behind a DiDi Cloud load balancer that provides HA for the masters: the LB's port 6443 forwards to port 6443 on all three masters. The LB IP, 10.255.20.165, was also included in the hosts list when the kubernetes certificates were generated.

bootstrap.kubeconfig:    server: https://10.255.20.165:6443
kube-proxy.kubeconfig:   server: https://10.255.20.165:6443
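A hedged one-liner for making that edit on both nodes (equivalent to changing the server: line by hand):

# Point both kubeconfigs at the LB's apiserver endpoint
sed -i 's#server:.*#server: https://10.255.20.165:6443#' \
  /opt/kubernetes/cfg/bootstrap.kubeconfig /opt/kubernetes/cfg/kube-proxy.kubeconfig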

4. On each node, set its own hostname in the config files

# on node01:
kubelet.conf:           --hostname-override=node01
kube-proxy-config.yml:  hostnameOverride: node01
# on node02:
kubelet.conf:           --hostname-override=node02
kube-proxy-config.yml:  hostnameOverride: node02
# These hostnames are the names under which the nodes register with (and show up on) the master.

5. Start kubelet and kube-proxy

# systemctl start kubelet
# systemctl start kube-proxy
# systemctl enable kubelet
# systemctl enable kube-proxy

Note: as soon as the node components start, they request certificates from the master; you need to go to the master and approve them.

6. Approve the two nodes' certificates on the master

[root@master01 ]# kubectl get csr
[root@master01 ]# kubectl certificate approve node-csr-MYUxbmf_nmPQjmH3LkbZRL2uTO-_FCzDQUoUfTy7YjI
[root@master01 ]# kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
node01   NotReady   <none>   18h   v1.16.0
node02   NotReady   <none>   18h   v1.16.0
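If several CSRs are pending, a hedged shortcut to approve them all at once (fine in a lab cluster like this one; inspect them first in a shared cluster):

# Approve every outstanding CSR instead of pasting names one by one
kubectl get csr -o name | xargs -r kubectl certificate approve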

### Why NotReady? Because the CNI network plugin is not installed yet.

7. Deploy the CNI network plugin

Binary package download: https://github.com/containernetworking/plugins/releases

7.1.1 On each node, create the CNI directories and unpack the plugin package

# mkdir -p /opt/cni/bin /etc/cni/net.d
# tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin

7.1.2 Make sure kubelet has CNI enabled

# cat /opt/kubernetes/cfg/kubelet.conf 
--network-plugin=cni

7.1.3 Deploy flannel from a master

[root@master ] # kubectl apply -f kube-flannel.yaml
[root@master ] # kubectl get pods -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-5xmhh   1/1     Running   6          171m
kube-flannel-ds-amd64-ps5fx   1/1     Running   0          150m

Note: kube-flannel.yaml contains:
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
The 10.244.0.0/16 network here must match --cluster-cidr=10.244.0.0/16 in kube-controller-manager.conf.

Note: flannel starts one container on every node.

8. Authorize the apiserver to access the kubelet

For security, the kubelet refuses anonymous access; the apiserver must be authorized before things like pod logs can be viewed.

[root@master]# cat /opt/kubernetes/cfg/kubelet-config.yml 
……
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
……
[root@master]# kubectl apply -f apiserver-to-kubelet-rbac.yaml

WebUI and DNS Deployment

1. Deploy the WebUI (Dashboard)

https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

[root@master]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml

# vi recommended.yaml
…
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
…
[root@master]# kubectl apply -f recommended.yaml

1. Create a service account and bind it to the default cluster-admin role

[root@master]# cat dashboard-adminuser.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
[root@master]# kubectl apply -f dashboard-adminuser.yaml

2. Get the token

[root@master]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Access URL: https://NodeIP:30001 (the Service maps NodePort 30001 to the dashboard's HTTPS port 8443, so use https)

Log in to the Dashboard with the token from the output.

2. Deploy DNS (CoreDNS)

[root@master]# kubectl apply -f coredns.yaml
[root@master]# kubectl get pods -n kube-system
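Once the coredns pods are Running, a quick way to verify resolution (busybox:1.28 is chosen deliberately; nslookup is broken in newer busybox images):

# Throwaway pod that resolves the kubernetes service through CoreDNS
kubectl run -it --rm --restart=Never dns-test --image=busybox:1.28 -- nslookup kubernetes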