###CentOS 7.6: Quickly Deploying a Kubernetes Cluster with kubeadm

Why use kubeadm to deploy Kubernetes? Because kubeadm is the native Kubernetes deployment tool: it is simple, fast, and convenient, which makes it great for beginners who want to stand up a cluster and learn. A cluster deployed with kubeadm plus the Docker images of the Kubernetes components is essentially no different from one built from the binaries. Note, however, that this approach is not recommended for production environments; it is mainly meant for studying Kubernetes!

About kubeadm: Easily bootstrap a secure Kubernetes cluster

####1.1. Server plan

| Hostname | Internal IP | Role | OS version |
| :----------: | :--------: | :----: | :----------: |
| kubernetes01 | 10.5.0.206 | Master | CentOS Linux release 7.6.1810 (Core) |
| kubernetes02 | 10.5.0.207 | Worker | CentOS Linux release 7.6.1810 (Core) |
| kubernetes03 | 10.5.0.208 | Worker | CentOS Linux release 7.6.1810 (Core) |
| kubernetes04 | 10.5.0.209 | Worker | CentOS Linux release 7.6.1810 (Core) |
| kubernetes05 | 10.5.0.210 | Worker | CentOS Linux release 7.6.1810 (Core) |
| kubernetes06 | 10.5.0.213 | Worker | CentOS Linux release 7.6.1810 (Core) |
| kubernetes07 | 10.5.0.214 | Worker | CentOS Linux release 7.6.1810 (Core) |
| kubernetes08 | 10.5.0.218 | Worker | CentOS Linux release 7.6.1810 (Core) |
| kubernetes09 | 10.5.0.219 | Worker | CentOS Linux release 7.6.1810 (Core) |

####1.2. Master node

The Master node runs the three most important components of the Kubernetes project: apiserver, scheduler, and controller-manager!

- apiserver: exposes the API for managing the cluster
- scheduler: assigns and schedules Pods onto the nodes in the cluster
- controller-manager: a set of controllers that watch the state of the whole cluster through the apiserver
#####1.2.1. Verify the OS version and set the hostname
1.Check the OS version

```
[root@iZ2ze7ftggknd1fplnxygqZ ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
```

2.Set the hostname

```
hostnamectl set-hostname kubernetes01
```

3.Don't forget to update the /etc/hosts file

```
[root@kubernetes01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# kubernetes-cluster
10.5.0.206 kubernetes01
...
```
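Since every node needs to resolve every other node's hostname, the full set of entries for this cluster, taken from the plan in 1.1, would look like the sketch below (the output above is truncated); append it on each node, or distribute it however you manage hosts files.

```
# Append the whole cluster to /etc/hosts (entries taken from the server plan in 1.1)
cat >> /etc/hosts << EOF
# kubernetes-cluster
10.5.0.206 kubernetes01
10.5.0.207 kubernetes02
10.5.0.208 kubernetes03
10.5.0.209 kubernetes04
10.5.0.210 kubernetes05
10.5.0.213 kubernetes06
10.5.0.214 kubernetes07
10.5.0.218 kubernetes08
10.5.0.219 kubernetes09
EOF
```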
#####1.2.2. Turn off the firewall
```
systemctl stop firewalld && systemctl disable firewalld
```
#####1.2.3. Check that SELinux is disabled
```
[root@kubernetes01 ~]# setenforce 0
setenforce: SELinux is disabled
```
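If SELinux is still enforcing on your machine, it also needs to be disabled persistently; a minimal sketch, assuming the default /etc/selinux/config layout:

```
# Switch to permissive for the current boot
setenforce 0
# Keep SELinux disabled across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```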
#####1.2.4. Handle the bridge/iptables routing settings in advance
```
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
```

Then apply the settings:

```
sysctl --system
```
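If `sysctl --system` complains that the `net.bridge.*` keys do not exist, the `br_netfilter` kernel module is probably not loaded yet; a hedged sketch for loading it now and on every boot (the file name under modules-load.d is arbitrary):

```
# Load the bridge netfilter module so the net.bridge.* sysctls exist
modprobe br_netfilter
# Keep it loaded after reboots
echo br_netfilter > /etc/modules-load.d/k8s.conf
# Re-apply the sysctl settings
sysctl --system
```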
#####1.2.5. Install docker-ce (mind the compatibility between the docker-ce version and the Kubernetes version!)
Install docker-ce 18.06.1 with yum:

```
[root@kubernetes01 ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@kubernetes01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@kubernetes01 ~]# yum -y install docker-ce-18.06.1.ce
[root@kubernetes01 ~]# /bin/systemctl start docker.service
[root@kubernetes01 ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a
```
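The steps above only start Docker for the current boot; to survive reboots it is worth enabling the service as well (a small addition, not in the original steps):

```
# Make sure Docker starts automatically after a reboot
systemctl enable docker.service
systemctl status docker.service --no-pager
```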
#####1.2.6. Install kubelet, kubeadm, and kubectl
1.Configure the Aliyun Kubernetes yum repository

```
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
```

2.Import the GPG key file

```
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg
```

3.Install with yum

```
yum install -y kubelet-1.12.1
yum install -y kubectl-1.12.1
yum install -y kubeadm-1.12.1
```
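Two small preparations worth making before running `kubeadm init` later on (they are not spelled out in the original steps, so treat this as a suggested sketch): enable kubelet so it comes up on boot, and turn swap off, because the kubeadm preflight checks fail with swap enabled and the `vm.swappiness=0` sysctl above does not by itself disable swap.

```
# Enable kubelet; kubeadm will start it during init
systemctl enable kubelet.service

# kubeadm refuses to run with swap enabled
swapoff -a
# Comment out swap entries so it stays off after reboot
sed -i '/swap/ s/^/#/' /etc/fstab
```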
#####1.2.7. Check the versions
```
[root@kubernetes01 ~]# kubelet --version
Kubernetes v1.12.1
[root@kubernetes01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@kubernetes01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
```

The Kubernetes component Docker images required by kubeadm v1.12.1:

```
k8s.gcr.io/kube-apiserver:v1.12.1
k8s.gcr.io/kube-controller-manager:v1.12.1
k8s.gcr.io/kube-scheduler:v1.12.1
k8s.gcr.io/kube-proxy:v1.12.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2
```
#####1.2.8. Pull the Docker images for the Kubernetes components
Because of the "peculiarities" of the network environment inside China, we take a different route here.

```
[root@kubernetes01 ~]# cat pull_k8s_images.sh
#!/bin/bash
# Pull the Kubernetes component images from a mirror, re-tag them as k8s.gcr.io, then remove the mirror tags
images=(kube-proxy:v1.12.1
kube-scheduler:v1.12.1
kube-controller-manager:v1.12.1
kube-apiserver:v1.12.1
etcd:3.2.24
coredns:1.2.2
pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull anjia0532/google-containers.$imageName
    docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi anjia0532/google-containers.$imageName
done
```
#####1.2.9. Check the image list
Remember the roles of the three basic components mentioned at the beginning: scheduler, controller-manager, and apiserver? 😂 Don't forget them~~

```
[root@kubernetes01 ~]# docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.12.1   61afff57f010   5 months ago    96.6MB
k8s.gcr.io/kube-apiserver            v1.12.1   dcb029b5e3ad   5 months ago    194MB
k8s.gcr.io/kube-scheduler            v1.12.1   d773ad20fd80   5 months ago    58.3MB
k8s.gcr.io/kube-controller-manager   v1.12.1   aa2dd57c7329   5 months ago    164MB
k8s.gcr.io/etcd                      3.2.24    3cab8e1b9802   6 months ago    220MB
k8s.gcr.io/coredns                   1.2.2     367cdc8433a4   7 months ago    39.2MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   15 months ago   742kB
```
#####1.2.10. Deploy the Kubernetes cluster master node with kubeadm
```
[root@kubernetes01 ~]# kubeadm init --kubernetes-version=v1.12.1
```

After the preflight checks pass and some time goes by, seeing the following message means the Kubernetes Master node has been deployed successfully.

```
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 10.5.0.206:6443 --token bh3pih.cuir6xpjl7zn7pf2 --discovery-token-ca-cert-hash sha256:ae00fc1ad4a680c01be4deaae6f6e4cf554867664bc5c16e0b3f98d4f2adcf2c
```

Before you can start using the cluster, you have to run the following commands as a regular user, as the English output above explains, because access to a Kubernetes cluster requires authenticated, encrypted access by default. So run 👇

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
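The bootstrap token printed by `kubeadm init` expires after 24 hours by default; if a worker needs to join later, a fresh join command can be generated on the master (a small sketch, not part of the original steps):

```
# Print a new "kubeadm join ..." command with a freshly created token
kubeadm token create --print-join-command
```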
#####1.2.11. Health check
1.Check the health of the main components

```
[root@kubernetes01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
```

2.Check the master node status

```
[root@kubernetes01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
kubernetes01   NotReady   master   4m15s   v1.12.1
```
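The node shows NotReady because no Pod network has been deployed yet; a quick way to confirm that CoreDNS is the only thing still waiting (a check I find useful here, assuming the default kube-system namespace):

```
# CoreDNS stays Pending until a network plugin is installed; everything else should be Running
kubectl get pods -n kube-system -o wide
```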
#####1.2.12. Deploy the Weave network plugin
```
[root@kubernetes01 ~]# kubectl apply -f https://git.io/weave-kube-1.6
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
```

Wait a moment and check the Master node again: the STATUS has changed, because the network component we just deployed has taken effect.

```
[root@kubernetes01 ~]# kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
kubernetes-master   Ready    master   21m   v1.12.1
```
#####1.2.13. Check the Weave network Pods on the Master node
```
[root@kubernetes01 ~]# kubectl get pods -n kube-system -l name=weave-net -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE
weave-net-vhs56   2/2     Running   0          6m59s   10.5.0.206   kubernetes-master   <none>
```
#####1.2.14. Deploy the dashboard (visualization plugin)
1.Pull the dashboard Docker image and re-tag it

```
docker pull anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
docker tag anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi anjia0532/google-containers.kubernetes-dashboard-amd64:v1.10.0
```

2.Download the dashboard YAML file and modify its last part so the dashboard can later be logged into with a token. Pay special attention here: this exposes NodePort 30001, which would be extremely insecure in a production environment!

```
[root@kubernetes01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@kubernetes01 ~]# tail -n 20 kubernetes-dashboard.yaml
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
```

3.Deploy the dashboard

```
[root@kubernetes01 ~]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard configured
```

4.Check the dashboard Pod status

```
[root@kubernetes01 ~]# kubectl get pods -n kube-system | grep dash
kubernetes-dashboard-65c76f6c97-f29nm   1/1     Running   0          3m8s
```

5.Get the token

```
[root@kubernetes01 ~]# kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token
Name:         namespace-controller-token-mt4sh
Type:         kubernetes.io/service-account-token
token:        eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi1tdDRzaCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImY5YzE3YWQzLTUxYzItMTFlOS05NWZiLTAwMTYzZTBlNDRiYyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.W2flckBO8CrzGyJzw2aJH5obQSjy4PNSll7uHOiIXPk4dnOTEzI-BfM4C9QrNDjbNTu8gIdLHntLj1181Sf_sRMidB_vhUPg6CFA1zy3XmYH21eVqjSxEBNXMSfrJHBgXnBzaHieaXqF55_etABB0j4xLM7V-bRsQ9AB0G3cv1IYU_gYG3BozksvAObmDEY4GgCI7f0-nu2YRqOMPJPhXWzKOGUvBBPyj171Xo06QvF6p9zpTMSoLa3aV-gU4XA2nMf2_aDdgFrGVI4p95ziewyu0o-W-DiEnXW1hRtwgg-PRe3QPU9ps3TALlr3U8rwh3xVmlqnRuNGVDqzmclVdQ
```

Visit https://10.5.0.206:30001 and log in to the dashboard with the token; note that it uses the https protocol!
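The step above borrows the namespace-controller service account's token. An alternative worth considering (a hedged sketch, not what this article actually did) is to create a dedicated admin account for the dashboard and use its token instead; the account name `dashboard-admin` is just an example:

```
# Hypothetical admin account for dashboard login (the name is arbitrary)
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:dashboard-admin
# Print its token
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret -o name | grep dashboard-admin-token) | grep token:
```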
#####1.2.15. Deploy the container storage plugin

What you need to know here is that the Rook project is a Ceph-based Kubernetes storage plugin, a production-grade plugin for persistent storage, and it is well worth playing with.
```
cd /usr/local/src
yum -y install git
git clone https://github.com/rook/rook.git
cd /usr/local/src/rook/cluster/examples/kubernetes/ceph
kubectl apply -f operator.yaml
kubectl apply -f cluster.yaml
```
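To confirm that Rook came up, the operator and cluster Pods can be watched; the namespace names below are an assumption based on the Rook example manifests of that era (`rook-ceph-system` for the operator, `rook-ceph` for the cluster) and may differ in other Rook versions:

```
# Namespaces are assumptions; adjust them to match the manifests you applied
kubectl get pods -n rook-ceph-system   # operator, agent, and discover pods
kubectl get pods -n rook-ceph          # mon, mgr, and osd pods once cluster.yaml is applied
```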
####1.3. Worker nodes
Similar to setting up the Master node, first get the preparation work done: set the hostname, turn off the firewall, handle the routing settings in advance, configure the yum repositories, and so on. Since the cluster has 9 nodes, I simply used an Ansible playbook together with shell scripts to install everything and save time (see the sketch after the scripts).

1.Installation script for docker-ce

```
cat install_dockerce.sh
#!/bin/bash
yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-18.06.1.ce
```

2.Installation script for the Kubernetes components

```
cat install_kubectl.sh
#!/bin/bash
# install kubelet, kubeadm and kubectl
yum install -y kubelet-1.12.1
yum install -y kubectl-1.12.1
yum install -y kubeadm-1.12.1

# install kube-proxy and pause images
images=(kube-proxy:v1.12.1
pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull anjia0532/google-containers.$imageName
    docker tag anjia0532/google-containers.$imageName k8s.gcr.io/$imageName
    docker rmi anjia0532/google-containers.$imageName
done

# join cluster
kubeadm join 10.5.0.206:6443 --token bh3pih.cuir6xpjl7zn7pf2 --discovery-token-ca-cert-hash sha256:ae00fc1ad4a680c01be4deaae6f6e4cf554867664bc5c16e0b3f98d4f2adcf2c
```
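The article does not include the playbook itself. As a rough idea of how the two scripts could be pushed out, Ansible's `script` module runs a local script on every host in a group; the group name `k8s-workers` and the inventory file `inventory.ini` here are hypothetical:

```
# Hypothetical inventory group "k8s-workers" listing kubernetes02..kubernetes09
ansible k8s-workers -i inventory.ini -m script -a "install_dockerce.sh"
ansible k8s-workers -i inventory.ini -m script -a "install_kubectl.sh"
```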
####1.4. Miscellaneous
A few small problems I ran into: kubeadm v1.12.1 failing to install correctly, nodes reporting the `[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]` error, and image pulls from k8s.gcr.io failing. All of these are easy to solve. Don't panic when you get stuck; work through the difficulties one step at a time.
####1.5. Summary

In this article, kubeadm was used to deploy 1 Kubernetes Master node and 8 Kubernetes Worker nodes (9 machines in total), along with the dashboard, the container storage plugin, and the container network plugin. Overall, kubeadm is quite convenient to play with 😄, but its drawbacks are just as obvious: for example, there is no Master high availability, and security is insufficient 😭... so this setup does not meet the bar for production use. For production, I personally recommend studying kubeasz and kubespray for deployment! Last but not least, what learning Kubernetes really takes is a spirit of exploration! ☀️

PS: The servers used are machines from a domestic cloud ☁️ provider.

Feel free to leave comments and discuss~~