1. Basic Concepts
1.1 Overview
Kubernetes (commonly written "k8s") is Google's open-source container cluster management system. Its design goal is to provide a platform across host clusters for automated deployment, scaling, and operation of application containers.
Kubernetes usually works together with the docker container tool and orchestrates clusters of hosts running docker containers.
1.2 Features
a. Automated container deployment; b. automated scaling of container workloads; c. load balancing between containers; d. fast updates and fast rollbacks.
1.3 Components
1.3.1 Master node components
The master node mainly runs four components: api-server, scheduler, controller-manager, and etcd.
APIServer: the APIServer exposes the RESTful Kubernetes API and is the unified entry point for administrative commands; every create, update, delete, or query operation on a resource is handled by the APIServer and then persisted to etcd.
scheduler: the scheduler's responsibility is clear-cut: schedule pods onto suitable Nodes. Treated as a black box, its input is a pod plus a list of Nodes, and its output is a binding of that pod to one Node, i.e. the decision to deploy the pod on that Node. Kubernetes ships with built-in scheduling algorithms, but it also keeps the interface open so users can implement their own scheduling algorithms as needed.
controller-manager: if the APIServer handles the "front office" work, the controller manager takes care of the "back office". Each resource generally has a corresponding controller, and the controller manager is responsible for managing these controllers. For example, when we create a pod through the APIServer, the APIServer's job is done once the pod has been created; from then on, keeping the Pod's state consistent with what we expect is the controller manager's responsibility.
etcd: etcd is a highly available key-value store. Kubernetes uses it to store the state of every resource, which is the foundation of its RESTful API.
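To make this concrete, cluster state can be inspected in etcd directly. The sketch below is hypothetical: it assumes the etcd v3 API that Kubernetes 1.15 uses for its /registry keys, plus the certificate paths and endpoints from the deployment later in this article.
# List the keys Kubernetes has stored under /registry (illustrative only)
ETCDCTL_API=3 etcdctl \
  --cacert=/cloud/k8s/etcd/ssl/ca.pem \
  --cert=/cloud/k8s/etcd/ssl/server.pem \
  --key=/cloud/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.248.66:2379" \
  get /registry/pods --prefix --keys-only | head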
1.3.2 Node components
Each Node runs three main modules: kubelet, kube-proxy, and the runtime.
runtime: the container runtime environment; Kubernetes currently supports the docker and rkt container runtimes.
kube-proxy: this module implements service discovery and reverse proxying in Kubernetes. On the reverse-proxy side, kube-proxy forwards TCP and UDP connections, distributing client traffic to the group of backend pods behind a service, by default in round-robin fashion. On the service-discovery side, kube-proxy uses etcd's watch mechanism to track changes to service and endpoint objects in the cluster, and maintains a service-to-endpoint mapping, so that changes to backend pod IPs are invisible to callers. kube-proxy also supports session affinity.
kubelet: the kubelet is the Master's agent on each Node and the most important module on the Node. It maintains and manages all containers on that Node, but it does not manage containers that were not created through Kubernetes. In essence, it is responsible for making a Pod's actual running state match the desired state.
1.3.3 Pod
A Pod is the smallest unit of scheduling in k8s. Each Pod runs one or more closely related business containers, which share the IP and Volumes of a Pause container; this hard-to-kill Pause container serves as the Pod's root container, and its status represents the status of the whole container group. Once created, a Pod is stored in etcd, then scheduled by the Master and bound to a Node, where that Node's kubelet instantiates it.
Each Pod is assigned its own Pod IP; Pod IP + ContainerPort together form an Endpoint.
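As an illustration only (this manifest is not part of the deployment below; all names are hypothetical), a Pod whose two containers share the Pause container's network namespace and a volume might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: web-demo          # hypothetical example name
  labels:
    app: web-demo
spec:
  containers:
  - name: app             # main business container
    image: nginx:1.16
    ports:
    - containerPort: 80   # Pod IP + this port forms an Endpoint
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar         # reaches "app" over localhost, shares the volume
    image: busybox:1.28
    command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}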
1.3.4 Service
A Service's function is to expose applications. Pods have a lifecycle and their own IP addresses, and as Pods are created and destroyed, an essential job is making sure every application can keep track of these changes. This is where Service comes in: a Service, defined in YAML or JSON, is a logical grouping of Pods selected by some policy. More importantly, the Pods' individual IPs need to be exposed to the network through a Service.
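For example (again hypothetical, matching the Pod sketch above), a Service that selects Pods by label and exposes them on a stable cluster IP could be written as follows; sessionAffinity: ClientIP turns on the session affinity mentioned in the kube-proxy section:
apiVersion: v1
kind: Service
metadata:
  name: web-demo-svc      # hypothetical example name
spec:
  selector:
    app: web-demo         # matches Pods carrying this label
  ports:
  - port: 80              # port exposed on the Service's cluster IP
    targetPort: 80        # containerPort on the backend Pods
    protocol: TCP
  sessionAffinity: ClientIP   # optional: pin each client to one Pod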
2. Installation and Deployment
There are several ways to deploy Kubernetes; this article uses the binary deployment method.
2.1 Environment
Hostname   | IP             | Installed packages                                       | OS version
k8s-master | 192.168.248.65 | kube-apiserver, kube-controller-manager, kube-scheduler  | Red Hat Enterprise Linux Server release 7.3
k8s-node1  | 192.168.248.66 | etcd, kubelet, kube-proxy, flannel, docker               | Red Hat Enterprise Linux Server release 7.3
k8s-node2  | 192.168.248.67 | etcd, kubelet, kube-proxy, flannel, docker               | Red Hat Enterprise Linux Server release 7.3
k8s-node3  | 192.168.248.68 | etcd, kubelet, kube-proxy, flannel, docker               | Red Hat Enterprise Linux Server release 7.3
Software versions and download links
Versions
kubernetes version v1.15.0
etcd version v3.3.10
flannel version v0.11.0
Download links
kubernetes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1150
server binaries: https://dl.k8s.io/v1.15.0/kubernetes-server-linux-amd64.tar.gz
node binaries: https://dl.k8s.io/v1.15.0/kubernetes-node-linux-amd64.tar.gz
etcd: https://github.com/etcd-io/etcd/releases
flannel: https://github.com/coreos/flannel/releases
2.2 Server initialization
Synchronize the system time
# ntpdate time1.aliyun.com
# echo "*/5 * * * * /usr/sbin/ntpdate -s time1.aliyun.com" > /var/spool/cron/root
Set the hostname (run the matching command on each machine)
# hostnamectl --static set-hostname k8s-master
# hostnamectl --static set-hostname k8s-node1
# hostnamectl --static set-hostname k8s-node2
# hostnamectl --static set-hostname k8s-node3
Add hosts entries
[root@k8s-master ~]# cat /etc/hosts
192.168.248.65 k8s-master
192.168.248.66 k8s-node1
192.168.248.67 k8s-node2
192.168.248.68 k8s-node3
Stop and disable firewalld and selinux
# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=disabled
Disable swap
# swapoff -a && sysctl -w vm.swappiness=0
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Set kernel parameters
# cat /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
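Load the file once so the settings take effect without a reboot:
# sysctl -p /etc/sysctl.d/kubernetes.conf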
2.3 Kubernetes cluster installation
Install docker-ce on all node machines
# wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum makecache
# yum install docker-ce-18.06.2.ce-3.el7 -y
# systemctl start docker && systemctl enable docker
Create the installation directories
# mkdir /data/{install,ssl_config} -pv
# mkdir /data/ssl_config/{etcd,kubernetes} -pv
# mkdir /cloud/k8s/etcd/{bin,cfg,ssl} -pv
# mkdir /cloud/k8s/kubernetes/{bin,cfg,ssl} -pv
Add environment variables
vim /etc/profile
######Kubernetes########
export PATH=$PATH:/cloud/k8s/etcd/bin/:/cloud/k8s/kubernetes/bin/
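Reload the profile so the new PATH is visible in the current shell:
# source /etc/profile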
2.4 Creating the SSL certificates
Download the certificate-generation tools
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master ~]# mv /usr/local/bin/cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master ~]# mv /usr/local/bin/cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master ~]# mv /usr/local/bin/cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
[root@k8s-master ~]# chmod +x /usr/local/bin/*
Create the etcd certificates
# etcd CA signing policy
[root@k8s-master etcd]# pwd
/data/ssl_config/etcd
[root@k8s-master etcd]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
# etcd CA CSR
[root@k8s-master etcd]# cat ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
# etcd server certificate CSR
[root@k8s-master etcd]# cat server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "k8s-node3",
    "k8s-node2",
    "k8s-node1",
    "192.168.248.66",
    "192.168.248.67",
    "192.168.248.68"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
# Generate the etcd CA certificate and key, then the server certificate
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
Create the kubernetes certificates
# kubernetes CA signing policy
[root@k8s-master kubernetes]# pwd
/data/ssl_config/kubernetes
[root@k8s-master kubernetes]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
# CA CSR
[root@k8s-master kubernetes]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# API server certificate CSR
[root@k8s-master kubernetes]# cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.248.65",
    "k8s-master",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# Kubernetes Proxy certificate CSR
[root@k8s-master kubernetes]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# Generate the CA certificate
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the api-server certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# Generate the kube-proxy certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2.5 Deploying the etcd cluster (on all node machines)
Unpack and stage the etcd package
# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
# cp etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /cloud/k8s/etcd/bin/
Write the etcd configuration file (ETCD_NAME must be unique per node and match the names in ETCD_INITIAL_CLUSTER)
[root@k8s-node1 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.66:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.66:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.66:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.66:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node2 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.67:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.67:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.67:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.67:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node3 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.68:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.68:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.68:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.68:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Create the etcd systemd unit
[root@k8s-node1 ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/etcd/cfg/etcd
ExecStart=/cloud/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/cloud/k8s/etcd/ssl/server.pem \
--peer-key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Copy the generated etcd certificates to all node machines
[root@k8s-master etcd]# pwd
/data/ssl_config/etcd
[root@k8s-master etcd]# scp *.pem k8s-node1:/cloud/k8s/etcd/ssl/
[root@k8s-master etcd]# scp *.pem k8s-node2:/cloud/k8s/etcd/ssl/
[root@k8s-master etcd]# scp *.pem k8s-node3:/cloud/k8s/etcd/ssl/
Start the etcd service
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
Check the cluster health (run on any one node)
[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem --cert-file=/cloud/k8s/etcd/ssl/server.pem --key-file=/cloud/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" cluster-health
member 2830381866015ef6 is healthy: got healthy result from https://192.168.248.67:2379
member 355a96308320dc2a is healthy: got healthy result from https://192.168.248.66:2379
member a9a44d5d05a31ce0 is healthy: got healthy result from https://192.168.248.68:2379
cluster is healthy
2.6 Deploying the flannel network (on all node machines)
Write the pod network segment into the etcd cluster (run on any one node)
etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
set /coreos.com/network/config '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'
Check the network information written into the etcd cluster
# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
get /coreos.com/network/config
[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
> --cert-file=/cloud/k8s/etcd/ssl/server.pem \
> --key-file=/cloud/k8s/etcd/ssl/server-key.pem \
> --endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
> ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.18.95.0-24
/coreos.com/network/subnets/172.18.22.0-24
/coreos.com/network/subnets/172.18.54.0-24
Unpack and stage the flannel plugin
# tar xf flannel-v0.11.0-linux-amd64.tar.gz
# mv flanneld mk-docker-opts.sh /cloud/k8s/kubernetes/bin/
Configure flannel
[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 -etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem -etcd-certfile=/cloud/k8s/etcd/ssl/server.pem -etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"
Create the flanneld systemd unit
[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/kubernetes/cfg/flanneld
ExecStart=/cloud/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/cloud/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
Configure Docker to start with the flannel subnet
[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
Start the services
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
Verify the flannel network configuration
Ping the docker0 IP address of each node from the other nodes; if the pings succeed, the flanneld network plugin has been deployed successfully.
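For example: flannel assigns each node a /24 out of 172.18.0.0/16, and mk-docker-opts.sh points docker0 at that subnet, so the addresses below are assumptions based on the subnet listing above and will differ in other deployments.
# Find the local docker0 address on each node
[root@k8s-node1 ~]# ip addr show docker0 | grep 'inet '
# Then ping the docker0 address of the other nodes, e.g. from k8s-node1:
[root@k8s-node1 ~]# ping -c 3 172.18.95.1
[root@k8s-node1 ~]# ping -c 3 172.18.54.1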
2.7 Deploying the master components
Unpack the master package
# tar xf kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/{kube-scheduler,kube-apiserver,kube-controller-manager,kubectl} /cloud/k8s/kubernetes/bin/
Stage the kubernetes certificates
# cp /data/ssl_config/kubernetes/*.pem /cloud/k8s/kubernetes/ssl/
Deploy the kube-apiserver component
Create the TLS Bootstrapping token
[root@k8s-master cfg]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '    # generate a random token string
[root@k8s-master cfg]# pwd
/cloud/k8s/kubernetes/cfg
[root@k8s-master cfg]# cat token.csv
a081e7ba91d597006cbdacfa8ee114ac,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
apiserver configuration file
[root@k8s-master cfg]# cat kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 \
--bind-address=192.168.248.65 \
--secure-port=6443 \
--advertise-address=192.168.248.65 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/cloud/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"
kube-apiserver systemd unit
[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/cloud/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/cloud/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the kube-apiserver service
[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-apiserver
[root@k8s-master cfg]# systemctl start kube-apiserver
[root@k8s-master cfg]# ps -ef |grep kube-apiserver
root       1050      1  4 09:02 ?        00:25:21 /cloud/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 --bind-address=192.168.248.65 --secure-port=6443 --advertise-address=192.168.248.65 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem --etcd-certfile=/cloud/k8s/etcd/ssl/server.pem --etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem
root       1888   1083  0 18:15 pts/0    00:00:00 grep --color=auto kube-apiserver
Deploy the kube-scheduler component
Create the kube-scheduler configuration file
[root@k8s-master cfg]# cat /cloud/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
Create the kube-scheduler systemd unit
[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/cloud/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/cloud/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the kube-scheduler service
[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-scheduler.service
[root@k8s-master cfg]# systemctl start kube-scheduler.service
[root@k8s-master cfg]# ps -ef |grep kube-scheduler
root       1716      1  0 16:12 ?        00:00:19 /cloud/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root       1897   1083  0 18:21 pts/0    00:00:00 grep --color=auto kube-scheduler
Deploy the kube-controller-manager component
Create the kube-controller-manager configuration file
[root@k8s-master cfg]# cat /cloud/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem"
Create the kube-controller-manager systemd unit
[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/cloud/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the kube-controller-manager service
[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-controller-manager
[root@k8s-master cfg]# systemctl start kube-controller-manager
[root@k8s-master cfg]# ps -ef |grep kube-controller-manager
root       1709      1  2 16:12 ?        00:03:11 /cloud/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem
root       1907   1083  0 18:29 pts/0    00:00:00 grep --color=auto kube-controller-manager
Check the cluster status
[root@k8s-master cfg]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
2.8 Deploying the node components (on all node machines)
Unpack the node package
[root@k8s-node1 install]# tar xf kubernetes-node-linux-amd64.tar.gz
[root@k8s-node1 install]# cp kubernetes/node/bin/{kubelet,kube-proxy} /cloud/k8s/kubernetes/bin/
Create the kubelet bootstrap.kubeconfig file
[root@k8s-master kubernetes]# cat environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Run environment.sh to generate bootstrap.kubeconfig
Create the kubelet.kubeconfig file
[root@k8s-master kubernetes]# cat envkubelet.kubeconfig.sh
# Create the kubelet kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet.kubeconfig
# Set client credentials
kubectl config set-credentials kubelet \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet.kubeconfig
# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet \
  --kubeconfig=kubelet.kubeconfig
# Use the default context
kubectl config use-context default --kubeconfig=kubelet.kubeconfig
# Run envkubelet.kubeconfig.sh to generate kubelet.kubeconfig
Create the kube-proxy.kubeconfig file
[root@k8s-master kubernetes]# cat env_proxy.sh
# Create the kube-proxy kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Run env_proxy.sh to generate kube-proxy.kubeconfig
Copy the generated kubeconfig files to all node machines
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node1:/cloud/k8s/kubernetes/cfg/
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node2:/cloud/k8s/kubernetes/cfg/
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node3:/cloud/k8s/kubernetes/cfg/
Create the kubelet parameter template file on all node machines (only the address differs per node)
[root@k8s-node1 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.66
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
[root@k8s-node2 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.67
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
[root@k8s-node3 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.68
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
Create the kubelet configuration file (only --hostname-override differs per node)
[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node1 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@k8s-node2 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node2 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
[root@k8s-node3 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node3 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Create the kubelet systemd unit
[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kubelet
ExecStart=/cloud/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
Bind the kubelet-bootstrap user to the system cluster role (without this binding, kubelet will fail to start)
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
Start the kubelet service (all node machines)
[root@k8s-node1 cfg]# systemctl daemon-reload
[root@k8s-node1 cfg]# systemctl enable kubelet
[root@k8s-node1 cfg]# systemctl start kubelet
[root@k8s-node1 cfg]# ps -ef |grep kubelet
root       3306      1  2 09:02 ?        00:14:47 /cloud/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=k8s-node1 --kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig --config=/cloud/k8s/kubernetes/cfg/kubelet.config --cert-dir=/cloud/k8s/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      87181  12020  0 19:22 pts/0    00:00:00 grep --color=auto kubelet
Approve the kubelet CSR requests on the master node
kubectl get csr
kubectl certificate approve $NAME
Once a CSR's status changes to Approved,Issued, the node has joined the cluster.
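If several nodes are waiting at once, all pending requests can be approved in one pass (a convenience sketch; $NAME above stands for a CSR name copied from the kubectl get csr output):
kubectl get csr -o name | xargs kubectl certificate approve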
Check the cluster status and nodes
[root@k8s-master kubernetes]# kubectl get cs,node
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}

NAME             STATUS   ROLES    AGE    VERSION
node/k8s-node1   Ready    <none>   4d2h   v1.15.0
node/k8s-node2   Ready    <none>   4d2h   v1.15.0
node/k8s-node3   Ready    <none>   4d2h   v1.15.0
Deploy the kube-proxy component on the nodes
[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node1 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
Create the kube-proxy systemd unit
[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-proxy
ExecStart=/cloud/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
Start the kube-proxy service
[root@k8s-node1 cfg]# systemctl daemon-reload
[root@k8s-node1 cfg]# systemctl enable kube-proxy
[root@k8s-node1 cfg]# systemctl start kube-proxy
[root@k8s-node1 cfg]# ps -ef |grep kube-proxy
root        966      1  0 09:02 ?        00:01:20 /cloud/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=k8s-node1 --cluster-cidr=10.0.0.0/24 --kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig
root      87093  12020  0 19:22 pts/0    00:00:00 grep --color=auto kube-proxy
Deploy the CoreDNS add-on
[root@k8s-master ~]# cat coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@k8s-master ~]# kubectl apply -f coredns.yaml
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.extensions/coredns unchanged
service/kube-dns unchanged
[root@k8s-master ~]# kubectl get deployment -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   3/3     3            3           33h
[root@k8s-master ~]# kubectl get deployment -n kube-system -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
coredns   3/3     3            3           33h   coredns      coredns/coredns:1.3.1   k8s-app=kube-dns
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
coredns-b49c586cf-nwzv6   1/1     Running   1          33h   172.18.54.3   k8s-node3   <none>           <none>
coredns-b49c586cf-qv5b9   1/1     Running   1          33h   172.18.22.3   k8s-node1   <none>           <none>
coredns-b49c586cf-rcqhc   1/1     Running   1          33h   172.18.95.2   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl get svc -n kube-system -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   33h   k8s-app=kube-dns
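As a final smoke test (a hypothetical check; the pod name and image tag are arbitrary), in-cluster DNS resolution through CoreDNS can be verified with a throwaway busybox pod:
# Resolve the kubernetes service via the cluster DNS (10.0.0.2)
[root@k8s-master ~]# kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default
# A healthy cluster returns the service address, e.g.:
# Name:      kubernetes.default
# Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local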
This completes the bare-bones deployment of Kubernetes v1.15.0.