Containerized Deployment of a Highly Available Kubernetes Cluster with Kubespray

Part 1: Basic Environment

  • Docker 1.12.6
  • CentOS 7

1. Prepare the machines for deployment

IP ROLE
172.30.33.89 k8s-registry-lb
172.30.33.90 k8s-master01-etcd01
172.30.33.91 k8s-master02-etcd02
172.30.33.92 k8s-master03-etcd03
172.30.33.93 k8s-node01-ingress01
172.30.33.94 k8s-node02-ingress02
172.30.33.31 ansible-client

2. Prepare the deployment machine, ansible-client
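The post does not spell this step out, so here is a minimal sketch of what it usually involves: installing Ansible and python3 on the client and pushing a passwordless SSH key to every node (the IPs from the table above are assumed; package names depend on your repos).

# On ansible-client (172.30.33.31), a sketch assuming EPEL provides ansible
$ yum install -y epel-release && yum install -y ansible python3
$ ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
$ IP=(172.30.33.89 172.30.33.90 172.30.33.91 172.30.33.92 172.30.33.93 172.30.33.94)
$ for i in ${IP[*]}; do ssh-copy-id root@$i; done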

3. Prepare the required images. The registries are blocked by the GFW, so the images can be downloaded from Baidu Cloud instead (click here).

IMAGE VERSION
quay.io/coreos/hyperkube v1.6.7_coreos.0
quay.io/coreos/etcd v3.1.10
calico/ctl v1.1.3
calico/node v2.4.1
calico/cni v1.10.0
calico/kube-policy-controller v0.7.0
quay.io/calico/routereflector v0.3.0
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.6.3
gcr.io/google_containers/nginx-ingress-controller 0.9.0-beta.11
gcr.io/google_containers/defaultbackend 1.3
gcr.io/google_containers/cluster-proportional-autoscaler-amd64 1.1.1
gcr.io/google_containers/fluentd-elasticsearch 1.22
gcr.io/google_containers/kibana v4.6.1
gcr.io/google_containers/elasticsearch v2.4.1
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.4
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.4
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.4
andyshinn/dnsmasq 2.72
nginx 1.11.4-alpine
gcr.io/google_containers/heapster-grafana-amd64 v4.4.1
gcr.io/google_containers/heapster-amd64 v1.4.0
gcr.io/google_containers/heapster-influxdb-amd64 v1.1.1
gcr.io/google_containers/pause-amd64 3.0
lachlanevenson/k8s-helm v2.2.2
gcr.io/kubernetes-helm/tiller v2.2.2

4. Load all the downloaded images

# On ansible-client
$ IP=(172.30.33.89 172.30.33.90 172.30.33.91 172.30.33.92 172.30.33.93 172.30.33.94)
$ for i in ${IP[*]}; do scp -r kubespray_images_v1.6.7 $i:~/; done

# On every node to be deployed
$ IMAGES=$(ls ~/kubespray_images_v1.6.7)
$ for x in $IMAGES; do docker load -i ~/kubespray_images_v1.6.7/$x; done
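A quick sanity check (a sketch): compare the number of tar files against the number of loaded images on each node.

# Roughly one image per tar file; adjust the filter to your naming scheme
$ ls ~/kubespray_images_v1.6.7 | wc -l
$ docker images | wc -l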

Part 2: Building the Cluster

1. Get the kubespray source code

$ git clone https://github.com/kubernetes-incubator/kubespray.git 

2. Edit the configuration file

$ vim ~/kubespray/inventory/group_vars/k8s-cluster.yml
---
# Base OS to bootstrap the cluster on; supports ubuntu, coreos, centos, none
bootstrap_os: centos

# Where etcd data is stored
etcd_data_dir: /var/lib/etcd

# Where the required Kubernetes binaries will be installed
bin_dir: /usr/local/bin

# Directory for Kubernetes configuration files
kube_config_dir: /etc/kubernetes

# Where the certificate- and token-generation scripts live
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"

# Directory for Kubernetes manifest files
kube_manifest_dir: "{{ kube_config_dir }}/manifests"

# Kubernetes namespace
system_namespace: kube-system

# Log directory
kube_log_dir: "/var/log/kubernetes"

# Where Kubernetes certificates are stored
kube_cert_dir: "{{ kube_config_dir }}/ssl"

# Where Kubernetes tokens are stored
kube_token_dir: "{{ kube_config_dir }}/tokens"

# Where basic-auth user files are stored
kube_users_dir: "{{ kube_config_dir }}/users"

# Disable anonymous auth
kube_api_anonymous_auth: false

## Kubernetes version to deploy
kube_version: v1.6.7

# Where files are cached during installation (at least 1 GB)
local_release_dir: "/tmp/releases"

# Retry stagger, e.g. for failed downloads
retry_stagger: 5

# Certificate group
kube_cert_group: kube-cert

# Cluster log level
kube_log_level: 2

# Username/password for the API server's HTTP basic auth
kube_api_pwd: "test123"
kube_users:
  kube:
    pass: "{{kube_api_pwd}}"
    role: admin
  root:
    pass: "{{kube_api_pwd}}"
    role: admin

## Toggle authentication (basic auth, static token auth)
#kube_oidc_auth: false
#kube_basic_auth: false
#kube_token_auth: false

## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
## To use OpenID you have to deploy an additional OpenID Provider (e.g. Dex, Keycloak, ...)
# kube_oidc_url: https:// ...
# kube_oidc_client_id: kubernetes
## Optional settings for OIDC
# kube_oidc_ca_file: {{ kube_cert_dir }}/ca.pem
# kube_oidc_username_claim: sub
# kube_oidc_groups_claim: groups

# Network plugin (calico, weave or flannel)
kube_network_plugin: calico

# Enable Kubernetes network policies
enable_network_policy: false

# Kubernetes service address range
kube_service_addresses: 10.233.0.0/18

# Pod address range
kube_pods_subnet: 10.233.64.0/18

# Subnet size allocated per node
kube_network_node_prefix: 24

# API server listen address and ports
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443 # (https)
kube_apiserver_insecure_port: 8080 # (http)

# Default DNS suffix
cluster_name: cluster.local

# ndots value used in /etc/resolv.conf for DNS resolution of pods on the host network
ndots: 2

# DNS component: dnsmasq_kubedns or kubedns
dns_mode: dnsmasq_kubedns

# resolvconf mode: docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns

# Deploy netchecker to check DNS and HTTP status
deploy_netchecker: false

# skydns service IP configuration
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"

# Docker storage directory
docker_daemon_graph: "/var/lib/docker"

## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
## An obvious use case is allowing insecure-registry access
## to self hosted registries like so:
docker_options: "--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }} --iptables=false --storage-driver=devicemapper"
docker_bin_dir: "/usr/bin"

# How the components are deployed
# Settings for containerized control plane (etcd/kubelet/secrets)
etcd_deployment_type: docker
kubelet_deployment_type: docker
cert_management: script
vault_deployment_type: docker

# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent

# Monitoring apps for k8s
efk_enabled: true

# Helm deployment
helm_enabled: false
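The ipaddr filter expressions above simply pick the 1st, 2nd, and 3rd host addresses out of kube_service_addresses. A quick worked example for 10.233.0.0/18 (a sketch using python3's standard ipaddress module):

$ python3 - <<'EOF'
import ipaddress
net = ipaddress.ip_network('10.233.0.0/18')
print('kube_apiserver_ip:', net[1])  # 10.233.0.1
print('dns_server:       ', net[2])  # 10.233.0.2
print('skydns_server:    ', net[3])  # 10.233.0.3
EOF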

3. Generate the cluster inventory

After setting the basic cluster parameters, you still need to generate an inventory file that specifies which servers to install on, how the master and node roles are distributed, and which machines host the etcd cluster.

# Define the cluster IPs
$ IP=(172.30.33.89 172.30.33.90 172.30.33.91 172.30.33.92 172.30.33.93 172.30.33.94)
# Generate the inventory with the Python script that ships with kubespray
$ CONFIG_FILE=~/kubespray/inventory/inventory.cfg python3 ~/kubespray/contrib/inventory_builder/inventory.py ${IP[*]}

The generated inventory looks like this. It's best to add ansible_user=root to each host entry; I didn't specify it when I first built the cluster, and it failed with errors.

node1 ansible_user=root ansible_host=172.30.33.90 ip=172.30.33.90
node2 ansible_user=root ansible_host=172.30.33.91 ip=172.30.33.91
node3 ansible_user=root ansible_host=172.30.33.92 ip=172.30.33.92
node4 ansible_user=root ansible_host=172.30.33.93 ip=172.30.33.93
node5 ansible_user=root ansible_host=172.30.33.94 ip=172.30.33.94

[kube-master]
node1
node2
node3

[kube-node]
node1
node2
node3
node4
node5

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]
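Before running the full playbook, it's worth confirming that Ansible can actually reach every node; a minimal check:

$ ansible -i ~/kubespray/inventory/inventory.cfg all -m ping --private-key=~/.ssh/id_rsa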

4. Modify the docker, EFK, and etcd configuration

Patch the ansible configuration for docker, EFK, and etcd ahead of time. During deployment, ansible checks the Docker version and tries to download the latest release, but because of the GFW the download never finishes and the playbook hangs at that point, so we make these changes first. We also need to bump the etcd version, because the default v3.0.6 is unstable.

Modify the docker role and comment out all of the following Docker installation tasks:

vim ~/kubespray/roles/docker/tasks/main.yml

# - name: ensure docker repository public key is installed
#   action: "{{ docker_repo_key_info.pkg_key }}"
#   args:
#     id: "{{item}}"
#     keyserver: "{{docker_repo_key_info.keyserver}}"
#     state: present
#   register: keyserver_task_result
#   until: keyserver_task_result|succeeded
#   retries: 4
#   delay: "{{ retry_stagger | random + 3 }}"
#   with_items: "{{ docker_repo_key_info.repo_keys }}"
#   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic)

# - name: ensure docker repository is enabled
#   action: "{{ docker_repo_info.pkg_repo }}"
#   args:
#     repo: "{{item}}"
#     state: present
#   with_items: "{{ docker_repo_info.repos }}"
#   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic) and (docker_repo_info.repos|length > 0)

# - name: Configure docker repository on RedHat/CentOS
#   template:
#     src: "rh_docker.repo.j2"
#     dest: "/etc/yum.repos.d/docker.repo"
#   when: ansible_distribution in ["CentOS","RedHat"] and not is_atomic

# - name: ensure docker packages are installed
#   action: "{{ docker_package_info.pkg_mgr }}"
#   args:
#     pkg: "{{item.name}}"
#     force: "{{item.force|default(omit)}}"
#     state: present
#   register: docker_task_result
#   until: docker_task_result|succeeded
#   retries: 4
#   delay: "{{ retry_stagger | random + 3 }}"
#   with_items: "{{ docker_package_info.pkgs }}"
#   notify: restart docker
#   when: not (ansible_os_family in ["CoreOS", "Container Linux by CoreOS"] or is_atomic) and (docker_package_info.pkgs|length > 0)

# If you installed Docker yourself, remember to comment this out too,
# unless the docker.service in the template works for you
#- name: Set docker systemd config
#  include: systemd.yml
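With the install tasks commented out, every node must already have Docker 1.12.6 running before the playbook starts. A minimal sketch on CentOS 7, assuming your extras repo carries that version (the exact package name/version spec depends on your repos):

# On every node
$ yum install -y docker-1.12.6
$ systemctl enable docker && systemctl start docker
$ docker version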

Modify the EFK config and comment out the KIBANA_BASE_URL block below; otherwise Kibana will be unreachable once EFK is deployed.

vim ~/kubespray/roles/kubernetes-apps/efk/kibana/templates/kibana-deployment.yml.j2

# - name: "KIBANA_BASE_URL"
#   value: "{{ kibana_base_url }}"

Modify the download defaults to change the etcd and kubedns versions.

In v1.6.7, I added an extra variable, calico_node_version, myself in order to use a newer calico/node image.

vim ~/kubespray/roles/download/defaults/main.yml
etcd_version: v3.1.10
calico_node_version: "v2.4.1"
kubedns_version: 1.14.4
calico_policy_version: "v0.7.0"

Note: if you change kubedns_version, you also need to change kubedns_version in the /root/kubespray/roles/kubernetes-apps/ansible/defaults/main.yml file.
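A quick way to confirm the two kubedns_version values stay in sync:

$ grep -n 'kubedns_version' \
    ~/kubespray/roles/download/defaults/main.yml \
    ~/kubespray/roles/kubernetes-apps/ansible/defaults/main.yml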

(Optional) 5. Modify docker.service

# Skip this step if there is no MountFlags entry in your docker.service
# Comment out MountFlags=slave in /usr/lib/systemd/system/docker.service
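One way to script this on each node (a sketch; back up the unit file first):

$ sed -i 's/^MountFlags=slave/#MountFlags=slave/' /usr/lib/systemd/system/docker.service
$ systemctl daemon-reload && systemctl restart docker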

6. One-command deployment from ansible-client

$ cd ~/kubespray
$ ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v --private-key=~/.ssh/id_rsa

After the deployment succeeds, check the node information and pod information.
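A minimal way to pull up that information from any master node (a sketch; kubectl is installed by the playbook):

$ kubectl get nodes -o wide
$ kubectl get pods --all-namespaces -o wide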


References:

https://kevinguo.me/2017/07/06/kubespray-deploy-kubernetes-1/#1%E5%87%86%E5%A4%87%E5%A5%BD%E8%A6%81%E9%83%A8%E7%BD%B2%E7%9A%84%E6%9C%BA%E5%99%A8
