k8s Cluster Deployment (3)

1. Preparing the environment for deploying a Kubernetes cluster with Ansible

  The cluster is deployed from binaries and automated with ansible-playbook: a one-click install script is provided, and each component can also be installed step by step, with the main configuration parameters and caveats explained at every step. Deploying from binaries helps you understand how the components interact, become familiar with their startup parameters, and quickly troubleshoot real problems.

Component versions:

  kubernetes v1.9.7
  etcd v3.3.4
  docker 18.03.0-ce
  calico/node:v3.0.6
  calico/cni:v2.0.5
  calico/kube-controllers:v2.0.4
  centos 7.3+

Cluster planning and basic parameter settings:
  1. A highly available cluster requires the following nodes:
    deploy node ×1: the node that runs the ansible playbooks
    etcd nodes ×3: note that an etcd cluster must have an odd number of members (1, 3, 5, 7, ...)
    master node ×1: runs the main cluster components
    node nodes ×3: where the applications are actually deployed; add machines or raise their specs as needed
  2. Prepare ansible on the deploy node, running it from a docker container:
    1° Download the internal yum repository setup script and run it:
wget http://download2.yunwei.edu/shell/yum-repo.sh
bash yum-repo.sh
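
A quick, optional check that the repository configuration took effect (this assumes yum is the package manager on these CentOS hosts):

yum clean all && yum repolist   # the repositories written by yum-repo.sh should be listed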

    2° Download and install docker:

wget http://download2.yunwei.edu/shell/docker.tar.gz
After extracting the archive, change into the docker directory,
run the docker.sh script,
and check whether the docker service has started (the contents of docker.sh are shown below):
docker images

#!/bin/bash

tar zxvf docker-app.tar.gz -C /usr/local/bin/

mkdir -p /etc/docker
mkdir -p /etc/docker/certs.d/reg.yunwei.edu

cp ca.crt /etc/docker/certs.d/reg.yunwei.edu/

echo "172.16.254.20 reg.yunwei.edu">>/etc/hosts

cat <<EOF>/etc/docker/daemon.json
{
  "registry-mirrors": ["http://cc83932c.m.daocloud.io"],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
    }
}
EOF

cat <<EOF>/etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart=/usr/local/bin/dockerd
ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload && systemctl enable docker.service && systemctl start docker.service
docker.sh   
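
To double-check what docker.sh set up, a few optional commands (a sketch based on the script above; adjust paths if your layout differs):

systemctl is-active docker                 # should print "active"
docker info | grep -i 'server version'     # the daemon answers and reports 18.03.0-ce
cat /etc/docker/daemon.json                # registry mirror and log options written by docker.sh
ls /etc/docker/certs.d/reg.yunwei.edu/     # ca.crt for the private registry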

    3° Download and run the dockerized ansible:

docker pull reg.yunwei.edu/learn/ansible:alpine3
docker run -itd -v /etc/ansible:/etc/ansible -v /etc/kubernetes/:/etc/kubernetes/ -v /root/.kube:/root/.kube -v /usr/local/bin/:/usr/local/bin/ 1acb4fd5df5b  /bin/sh
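
The image ID 1acb4fd5df5b above belongs to the reg.yunwei.edu/learn/ansible:alpine3 image just pulled; on your machine the ID may differ, so confirm it first and note the resulting container ID for the later docker exec:

docker images | grep ansible    # confirm the local IMAGE ID (use it, or the image name, in docker run)
docker ps                       # note the CONTAINER ID for the later "docker exec -it <id> /bin/sh"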

    4° Configure hostname resolution between all of the machines (edit /etc/hosts on every node), for example:
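
A minimal /etc/hosts sketch for this environment; the mapping of node1/node2/node3 to these IPs is an assumption based on the addresses used later in this article:

cat >> /etc/hosts <<EOF
192.168.42.30   cicd
192.168.42.121  node1
192.168.42.122  node2
192.168.42.172  node3
EOF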

    5° Enter the ansible container and set up passwordless SSH login to all nodes, for example:
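
Inside the container this usually amounts to generating a key pair and copying the public key to every node; a sketch (it assumes ssh-copy-id is available in the alpine image — otherwise append /root/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on each node manually):

ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
for ip in 192.168.42.30 192.168.42.121 192.168.42.122 192.168.42.172; do
  ssh-copy-id root@$ip
done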

Test that every node is reachable:
ansible all -m ping

  3. Upload the ansible working files to the deploy node:

 wget http://download2.yunwei.edu/shell/kubernetes.tar.gz
After extracting:
bash       harbor-offline-installer-v1.4.0.tgz  k8s197.tar.gz           scope.yaml
ca.tar.gz  image.tar.gz                         kube-yunwei-197.tar.gz  sock-shop
 
    
Extract kube-yunwei-197.tar.gz, move everything under kube-yunwei-197 into the ansible directory, then delete kube-yunwei-197:
[root@localhost kubernetes]# tar xf kube-yunwei-197.tar.gz
[root@localhost kubernetes]# ls
bash       harbor-offline-installer-v1.4.0.tgz  k8s197.tar.gz    kube-yunwei-197.tar.gz  sock-shop
ca.tar.gz  image.tar.gz                         kube-yunwei-197  scope.yaml
[root@localhost kubernetes]# cd kube-yunwei-197
[root@localhost kube-yunwei-197]# ls
01.prepare.yml  03.docker.yml       05.kube-node.yml  99.clean.yml  bin      manifests  tools
02.etcd.yml     04.kube-master.yml  06.network.yml    ansible.cfg   example  roles
[root@localhost kube-yunwei-197]# mv * /etc/ansible/

 

Extract k8s197.tar.gz, move everything under bin into the bin directory under ansible, then delete the bin directory under kubernetes:
[root@localhost kubernetes]# tar xf k8s197.tar.gz
[root@localhost kubernetes]# ls
bash  ca.tar.gz                            image         k8s197.tar.gz    kube-yunwei-197.tar.gz  sock-shop
bin   harbor-offline-installer-v1.4.0.tgz  image.tar.gz  kube-yunwei-197  scope.yaml
[root@localhost kubernetes]# cd bin
[root@localhost bin]# ls
bridge          docker                  dockerd       etcdctl                  kubectl
calicoctl       docker-compose          docker-init   flannel                  kubelet
cfssl           docker-containerd       docker-proxy  host-local               kube-proxy
cfssl-certinfo  docker-containerd-ctr   docker-runc   kube-apiserver           kube-scheduler
cfssljson       docker-containerd-shim  etcd          kube-controller-manager  loopback
portmap
[root@localhost bin]# mv * /etc/ansible/bin/
[root@localhost bin]# ls
[root@localhost bin]# cd /etc/ansible/bin/
[root@localhost bin]# ls
bridge          docker                  dockerd       etcdctl                  kubectl
calicoctl       docker-compose          docker-init   flannel                  kubelet
cfssl           docker-containerd       docker-proxy  host-local               kube-proxy
cfssl-certinfo  docker-containerd-ctr   docker-runc   kube-apiserver           kube-scheduler
cfssljson       docker-containerd-shim  etcd          kube-controller-manager  loopback
portmap         VERSION.md

 

Switch to the example directory, copy hosts.s-master.example into the ansible directory, and rename it to hosts:
[root@localhost kubernetes]# cd /etc/ansible/
[root@localhost ansible]# ls
01.prepare.yml  03.docker.yml       05.kube-node.yml  99.clean.yml  bin      manifests  tools
02.etcd.yml     04.kube-master.yml  06.network.yml    ansible.cfg   example  roles
[root@localhost ansible]# cd example/
[root@localhost example]# ls
hosts.s-master.example
[root@localhost example]# cp hosts.s-master.example ../hosts
[root@localhost example]# cd ..
[root@localhost ansible]# ls
01.prepare.yml  03.docker.yml       05.kube-node.yml  99.clean.yml  bin      hosts      roles
02.etcd.yml     04.kube-master.yml  06.network.yml    ansible.cfg   example  manifests  tools
[root@localhost ansible]# vim hosts

 

The playbooks under /etc/ansible look like this (each snippet is followed by its file name):

# Generate the CA certificates and the kubedns.yaml configuration file on the deploy node
- hosts: deploy
  roles:
  - deploy

# Common configuration tasks for all cluster nodes
- hosts:
  - kube-master
  - kube-node
  - deploy
  - etcd
  - lb
  roles:
  - prepare

# [Optional] Load-balancer configuration for multi-master deployments
- hosts: lb
  roles:
  - lb
01.prepare.yml
- hosts: etcd
  roles:
  - etcd
02.etcd.yml
- hosts:
  - kube-master
  - kube-node
  roles:
  - docker
03.docker.yml
- hosts: kube-master
  roles:
  - kube-master
  - kube-node
  # Prevent application pods from being scheduled onto the master node
  tasks:
  - name: prevent application pods from being scheduled onto the master node
    shell: "{{ bin_dir }}/kubectl cordon {{ NODE_IP }} "
    when: DEPLOY_MODE != "allinone"
    ignore_errors: true
04.kube-master.yml
- hosts: kube-node
  roles:
  - kube-node
05.kube-node.yml
# Deploy the cluster network plugin; only one of the two may be installed
- hosts:
  - kube-master
  - kube-node
  roles:
  - { role: calico, when: "CLUSTER_NETWORK == 'calico'" }
  - { role: flannel, when: "CLUSTER_NETWORK == 'flannel'" }
06.network.yml
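
These six playbooks are later run one by one; if you prefer the one-click approach mentioned at the beginning, a wrapper playbook could chain them. This is only a sketch — the file name 90.setup.yml is hypothetical and not part of this package, and import_playbook requires Ansible 2.4 or newer (older versions use "- include:" instead):

# 90.setup.yml (hypothetical wrapper: run the whole installation in order)
- import_playbook: 01.prepare.yml
- import_playbook: 02.etcd.yml
- import_playbook: 03.docker.yml
- import_playbook: 04.kube-master.yml
- import_playbook: 05.kube-node.yml
- import_playbook: 06.network.yml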

 

Edit the hosts file:
# Deploy node: the node that runs the ansible playbooks
[deploy]
192.168.42.30

# etcd cluster: provide the NODE_NAME and NODE_IP variables below.
# Note that an etcd cluster must have an odd number of members (1, 3, 5, 7, ...)
[etcd]
192.168.42.121 NODE_NAME=etcd1 NODE_IP="192.168.42.121"
192.168.42.122 NODE_NAME=etcd2 NODE_IP="192.168.42.122"
192.168.42.172 NODE_NAME=etcd3 NODE_IP="192.168.42.172"

[kube-master]
192.168.42.121 NODE_IP="192.168.42.121"

[kube-node]
192.168.42.121 NODE_IP="192.168.42.121"
192.168.42.122 NODE_IP="192.168.42.122"
192.168.42.172 NODE_IP="192.168.42.172"

[all:vars]
# ---------Main cluster parameters---------------
# Cluster deployment mode: allinone, single-master, multi-master
DEPLOY_MODE=single-master

# Cluster MASTER IP
MASTER_IP="192.168.42.121"

# Cluster APISERVER
KUBE_APISERVER="https://192.168.42.121:6443"

# Token used for TLS bootstrapping; generate one with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="d18f94b5fa585c7123f56803d925d2e7"

# Cluster network plugin; currently calico and flannel are supported
CLUSTER_NETWORK="calico"

# Selected calico settings; the full configuration can be customised in roles/calico/templates/calico.yaml.j2
# Setting CALICO_IPV4POOL_IPIP="off" can improve network performance; see 05.安裝calico網絡組件.md for the constraints
CALICO_IPV4POOL_IPIP="always"

# Host IP used by calico-node; BGP peers are established over this address.
# The interface can be specified manually ("interface=eth0") or auto-detected as below
IP_AUTODETECTION_METHOD="can-reach=223.5.5.5"

# Selected flannel settings; see roles/flannel/templates/kube-flannel.yaml.j2
FLANNEL_BACKEND="vxlan"

# Service CIDR: not routable before deployment; reachable as IP:Port inside the cluster afterwards
SERVICE_CIDR="10.68.0.0/16"

# Pod network (Cluster CIDR): not routable before deployment, routable **after** deployment
CLUSTER_CIDR="172.20.0.0/16"

# NodePort range
NODE_PORT_RANGE="20000-40000"

# kubernetes service IP (pre-allocated, usually the first IP of SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.68.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.68.0.2"

# Cluster DNS domain
CLUSTER_DNS_DOMAIN="cluster.local."

# IPs and ports for communication between etcd cluster members; **set according to the actual etcd members**
ETCD_NODES="etcd1=https://192.168.42.121:2380,etcd2=https://192.168.42.122:2380,etcd3=https://192.168.42.172:2380"

# etcd client endpoint list; **set according to the actual etcd members**
ETCD_ENDPOINTS="https://192.168.42.121:2379,https://192.168.42.122:2379,https://192.168.42.172:2379"

# Username and password for cluster basic auth
BASIC_AUTH_USER="admin"
BASIC_AUTH_PASS="admin"

# ---------Additional parameters--------------------
# Default directory for binaries
bin_dir="/usr/local/bin"

# Certificate directory
ca_dir="/etc/kubernetes/ssl"

# Deployment directory, i.e. the ansible working directory
base_dir="/etc/ansible"
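
After saving hosts it is worth confirming that the inventory parses, and, if desired, generating a fresh bootstrap token (both steps are optional and run inside the ansible container, where /etc/ansible is the working directory):

ansible all --list-hosts                             # all deploy/etcd/master/node IPs should be listed
head -c 16 /dev/urandom | od -An -t x | tr -d ' '    # generate a new BOOTSTRAP_TOKEN value if you want your own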

2. The Kubernetes deployment process

Enter the container, check that the files are present under the ansible directory, and verify that the other nodes can be pinged:
[root@localhost ansible]# docker exec -it 0918862b8730 /bin/sh
/ # cd /etc/ansible/
/etc/ansible # ls
01.prepare.yml      06.network.yml  hosts
02.etcd.yml         99.clean.yml    manifests
03.docker.yml       ansible.cfg     roles
04.kube-master.yml  bin             tools
05.kube-node.yml    example
/etc/ansible # ansible all -m ping
192.168.42.122 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.42.172 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.42.121 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
192.168.42.30 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

 

ansible-playbook 01.prepare.yml  
ansible-playbook 02.etcd.yml   
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml  
ansible-playbook 05.kube-node.yml
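
At this point a quick sanity check can be run from the deploy node (a sketch; kubectl works here because /usr/local/bin and /root/.kube were mounted into the container earlier). Nodes may still report NotReady until the network plugin is deployed in the next step:

kubectl get componentstatuses   # scheduler, controller-manager and the three etcd members should be Healthy
kubectl get node                # the master shows SchedulingDisabled because of the cordon task in 04.kube-master.yml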
Before running 06.network.yml, make sure the other nodes already have the required images, so extract image.tar.gz and distribute them:

 

[root@cicd kubernetes]# ls
bash  ca.tar.gz                           image         k8s197.tar.gz      kube-yunwei-197         scope.yaml
bin   harbor-offline-installer-v1.4.0.tgz  image.tar.gz  kubernetes.tar.gz  kube-yunwei-197.tar.gz  sock-shop
[root@cicd kubernetes]# cd image
[root@cicd image]# ls
calico-cni-v2.0.5.tar               coredns-1.0.6.tar.gz  influxdb-v1.3.3.tar
calico-kube-controllers-v2.0.4.tar  grafana-v4.4.3.tar    kubernetes-dashboard-amd64-v1.8.3.tar.gz
calico-node-v3.0.6.tar              heapster-v1.5.1.tar   pause-amd64-3.1.tar
[root@cicd image]# scp ./* node1:/root/image
[root@cicd image]# scp ./* node2:/root/image
[root@cicd image]# scp ./* node3:/root/image

 

On each node:
[root@node1 image]# for i in `ls`;do docker load -i $i;done
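
To confirm the images were loaded (an optional check on each node):

docker images | grep -E 'calico|coredns|pause'   # the images distributed above should now be listed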

 

On the deploy node:
ansible-playbook 06.network.yml

 

CoreDNS is a DNS server that uses the SkyDNS library to serve DNS requests for Kubernetes pods and services. Deploy it from the manifests directory:
/etc/ansible # ls
01.prepare.yml  03.docker.yml       06.network.yml  bin      manifests
02.etcd.retry   04.kube-master.yml  99.clean.yml    example  roles
02.etcd.yml     05.kube-node.yml    ansible.cfg     hosts    tools
/etc/ansible # cd manifests/
/etc/ansible/manifests # ls
coredns  dashboard  efk  heapster  ingress  kubedns
/etc/ansible/manifests # cd coredns/
/etc/ansible/manifests/coredns # ls
coredns.yaml
/etc/ansible/manifests/coredns # kubectl create -f .
serviceaccount "coredns" created
clusterrole "system:coredns" created
clusterrolebinding "system:coredns" created
configmap "coredns" created
deployment "coredns" created
service "coredns" created
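
To check that CoreDNS is actually answering cluster DNS queries, a throwaway busybox pod can be used (a sketch; it assumes the busybox image can be pulled on the nodes or has been loaded beforehand):

kubectl run busybox --rm -it --image=busybox --restart=Never -- nslookup kubernetes.default
# The reply should come from the cluster DNS IP 10.68.0.2 and resolve
# kubernetes.default.svc.cluster.local to the service IP 10.68.0.1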


/etc/ansible/manifests # ls
coredns    dashboard  efk        heapster   ingress    kubedns
/etc/ansible/manifests # cd dashboard/
/etc/ansible/manifests/dashboard # ls
1.6.3                      kubernetes-dashboard.yaml  ui-read-rbac.yaml
admin-user-sa-rbac.yaml    ui-admin-rbac.yaml
/etc/ansible/manifests/dashboard # kubectl create -f .
serviceaccount "admin-user" created
clusterrolebinding "admin-user" created
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
clusterrole "ui-admin" created
rolebinding "ui-admin-binding" created
clusterrole "ui-read" created
rolebinding "ui-read-binding" created

 

[root@cicd ansible]# kubectl get pod -n kube-system -o wide
NAME                                    READY     STATUS                  RESTARTS   AGE       IP               NODE
coredns-6ff7588dc6-l8q4h                1/1       Running                 0          7m        172.20.0.2       192.168.42.122
coredns-6ff7588dc6-x2jq5                1/1       Running                 0          7m        172.20.1.2       192.168.42.172
kube-flannel-ds-c688h                   1/1       Running                 0          14m       192.168.42.172   192.168.42.172
kube-flannel-ds-d4p4j                   0/1       Running                 0          14m       192.168.42.122   192.168.42.122
kube-flannel-ds-f8gp2                   1/1       Running                 0          14m       192.168.42.121   192.168.42.121
kubernetes-dashboard-545b66db97-z9nr4   1/1       Running                 0          1m        172.20.1.3       192.168.42.172

 

[root@cicd ansible]# kubectl cluster-info
Kubernetes master is running at https://192.168.42.121:6443
CoreDNS is running at https://192.168.42.121:6443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
kubernetes-dashboard is running at https://192.168.42.121:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

 

 

 Obtaining the login token: kubectl -n kube-system describe secret $(kubectl -n kube-system get secret|grep admin-user|awk '{print $1}')
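
The describe output contains a long token: field; paste that value into the dashboard's "Token" login option at the kubernetes-dashboard URL reported by kubectl cluster-info above. To print only the token line (optional):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep '^token'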