| Node name | IP address | Components installed | Pod CIDR | Service CIDR | OS |
| --- | --- | --- | --- | --- | --- |
| k8s-master | 192.168.56.11 | docker, kubeadm, kubectl, kubelet | 10.244.0.0/16 | 10.96.0.0/12 | CentOS 7.4 |
| k8s-node01 | 192.168.56.12 | docker, kubeadm, kubelet | 10.244.0.0/16 | 10.96.0.0/12 | CentOS 7.4 |
| k8s-node02 | 192.168.56.13 | docker, kubeadm, kubelet | 10.244.0.0/16 | 10.96.0.0/12 | CentOS 7.4 |
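Later steps copy files and join the cluster by hostname, so each machine should be able to resolve the other nodes' names. A minimal /etc/hosts sketch (the short names are illustrative; match them to the hostnames actually configured on your machines):

192.168.56.11   k8s-master
192.168.56.12   k8s-node01
192.168.56.13   k8s-node02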
Configure the Aliyun mirrors (https://opsx.alibaba.com/mirror):

[root@k8s-master ~]# cd /etc/yum.repos.d/
[root@k8s-master yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo    # add the Docker CE repo
[root@k8s-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo    # add the Kubernetes repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@k8s-master yum.repos.d]# yum repolist    # list the available repositories
Copy the repo files to the node01 and node02 machines:
[root@k8s-master yum.repos.d]# scp kubernetes.repo docker-ce.repo k8s-node1:/etc/yum.repos.d/
kubernetes.repo                 100%  276   276.1KB/s   00:00
docker-ce.repo                  100% 2640     1.7MB/s   00:00
[root@k8s-master yum.repos.d]# scp kubernetes.repo docker-ce.repo k8s-node2:/etc/yum.repos.d/
kubernetes.repo                 100%  276   226.9KB/s   00:00
docker-ce.repo                  100% 2640     1.7MB/s   00:00
[root@k8s-master yum.repos.d]# yum install -y docker-ce kubelet kubeadm kubectl
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker
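It is also worth enabling the kubelet service so that it starts again after a reboot (kubeadm init will start it the first time):

[root@k8s-master ~]# systemctl enable kubelet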
Once Docker is running, kubeadm will try to pull the images it depends on from the default registries. Because those registries are hosted abroad, the downloads often fail to complete, so it is best to pull the images in advance. kubeadm can also pull images from a local private registry.
On the master node, pull the images with docker pull and retag them with docker tag:

docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-scheduler:v1.11.1
docker tag xiyangxixia/k8s-scheduler:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker pull xiyangxixia/k8s-controller-manager:v1.11.1
docker tag xiyangxixia/k8s-controller-manager:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker pull xiyangxixia/k8s-apiserver-amd64:v1.11.1
docker tag xiyangxixia/k8s-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker pull xiyangxixia/k8s-etcd:3.2.18
docker tag xiyangxixia/k8s-etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull xiyangxixia/k8s-coredns:1.1.3
docker tag xiyangxixia/k8s-coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-s390x
docker tag xiyangxixia/k8s-flannel:v0.10.0-s390x quay.io/coreos/flannel:v0.10.0-s390x
docker pull xiyangxixia/k8s-flannel:v0.10.0-ppc64le
docker tag xiyangxixia/k8s-flannel:v0.10.0-ppc64le quay.io/coreos/flannel:v0.10.0-ppc64le
docker pull xiyangxixia/k8s-flannel:v0.10.0-arm
docker tag xiyangxixia/k8s-flannel:v0.10.0-arm quay.io/coreos/flannel:v0.10.0-arm
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Images to pull on the node machines:
docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
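To avoid typing each pull/tag pair by hand, the same retagging can be driven by a small loop. This is only a convenience sketch over the node image list above; extend the list with the master images as needed:

# pull each mirrored image and retag it to the name kubeadm expects
while read -r src dst; do
  docker pull "$src"
  docker tag "$src" "$dst"
done <<'EOF'
xiyangxixia/k8s-pause:3.1               k8s.gcr.io/pause:3.1
xiyangxixia/k8s-proxy-amd64:v1.11.1     k8s.gcr.io/kube-proxy-amd64:v1.11.1
xiyangxixia/k8s-flannel:v0.10.0-amd64   quay.io/coreos/flannel:v0.10.0-amd64
EOF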
Edit the kubelet configuration so that it does not refuse to start because of swap; it is still best to turn swap off:

[root@k8s-master ~]# vim /etc/sysconfig/kubelet    # tell kubelet not to fail the swap pre-flight check
KUBELET_EXTRA_ARGS="--fail-swap-on=false"          # otherwise kubelet errors out when swap is enabled
[root@k8s-master ~]# swapoff -a    # turn off swap
[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap    # initialize the master
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0821 18:14:22.223765   18053 kernel_validator.go:81] Validating kernel version
I0821 18:14:22.223894   18053 kernel_validator.go:96] Validating kernel config
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.11]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.11 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 51.033696 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[bootstraptoken] using token: dx7mko.j2ug1lqjra5bf6p2
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf
If you are running the cluster as a regular user, kubectl needs its kubeconfig set up; these commands are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you are running as root, you can simply export the KUBECONFIG environment variable:
[root@k8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
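export only affects the current shell session; to keep the setting across logins, one option is appending it to root's profile (a minimal sketch):

[root@k8s-master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile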
Also record the kubeadm join command from the output; the node machines will need it to join the cluster later:
kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf
The token is used for mutual authentication between the master and joining nodes. Keep it secret: anyone who holds it can add authenticated nodes to the cluster. Tokens can be listed, created, and deleted with the kubeadm token command. At this point the cluster initialization is complete, and kubectl get cs shows the health of the cluster components:
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   NotReady   master    43m       v1.11.2
From the output above, the master components controller-manager, scheduler, and etcd are all healthy. Where is the apiserver? kubectl talks to the apiserver to read the cluster state from etcd, so the fact that this state can be retrieved at all means the apiserver is running normally. kubectl get node shows the master node in the NotReady state because no Pod network has been deployed yet.
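As a quick extra check on the apiserver claim above, kubectl cluster-info reports the endpoint kubectl is talking to:

[root@k8s-master ~]# kubectl cluster-info    # should report the master running at https://192.168.56.11:6443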
Install a Pod network add-on so that Pods can reach one another. A cluster can run only one Pod network. Before deploying flannel, the kernel must be set to pass bridged IPv4 traffic to the iptables chains; this is a prerequisite for CNI plugins.
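If these keys are not already set to 1 (the check below shows they are on this host), they can be enabled and made persistent across reboots with a sysctl drop-in. A minimal sketch; the file name /etc/sysctl.d/k8s.conf is just a convention:

[root@k8s-master ~]# modprobe br_netfilter    # the bridge netfilter module must be loaded for these keys to exist
[root@k8s-master ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
[root@k8s-master ~]# sysctl --system    # reload all sysctl configuration files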
[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
[root@k8s-master ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/c5d10c8/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds created
[root@k8s-master ~]# kubectl get node    # the master node is now Ready
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    3h        v1.11.2
[root@k8s-node01 ~]# yum install -y docker kubeadm kubelet
[root@k8s-node01 ~]# systemctl enable docker kubelet
[root@k8s-node01 ~]# systemctl start docker

Pull the images the node needs ahead of time, as described earlier (unless Docker is configured with a proxy, which is a different story), then join the node to the cluster:

[root@k8s-node01 ~]# kubeadm join 192.168.56.11:6443 --token dx7mko.j2ug1lqjra5bf6p2 --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES     AGE       VERSION
k8s-master   Ready      master    7h        v1.11.2
k8s-node01   NotReady   <none>    3h        v1.11.2
After joining, node01 shows NotReady because it does not yet have the images it needs, or is still pulling them. Once the images are in place, the corresponding Pods start:
[root@k8s-node01 ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS               NAMES
5502c29b43df        f0fad859c909           "/opt/bin/flanneld -…"   3 minutes ago       Up 3 minutes                            k8s_kube-flannel_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_1
db1cc0a6fec4        d5c25579d0ff           "/usr/local/bin/kube…"   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_0
bc54ad3399e8        k8s.gcr.io/pause:3.1   "/pause"                 9 minutes ago       Up 9 minutes                            k8s_POD_kube-proxy-vxckf_kube-system_23dc0141-a5af-11e8-84d2-000c2972dc1f_0
cbfca066b71d        k8s.gcr.io/pause:3.1   "/pause"                 10 minutes ago      Up 10 minutes                           k8s_POD_kube-flannel-ds-pgpr7_kube-system_23dc27e3-a5af-11e8-84d2-000c2972dc1f_0
[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME                                  READY     STATUS    RESTARTS   AGE       IP              NODE
coredns-78fcdf6894-nmcmz              1/1       Running   0          1d        10.244.0.3      k8s-master
coredns-78fcdf6894-p5pfm              1/1       Running   0          1d        10.244.0.2      k8s-master
etcd-k8s-master                       1/1       Running   1          1d        192.168.56.11   k8s-master
kube-apiserver-k8s-master             1/1       Running   8          1d        192.168.56.11   k8s-master
kube-controller-manager-k8s-master    1/1       Running   4          1d        192.168.56.11   k8s-master
kube-flannel-ds-n5c86                 1/1       Running   0          1d        192.168.56.11   k8s-master
kube-flannel-ds-pgpr7                 1/1       Running   1          1d        192.168.56.12   k8s-node01
kube-proxy-rxlt7                      1/1       Running   1          1d        192.168.56.11   k8s-master
kube-proxy-vxckf                      1/1       Running   0          1d        192.168.56.12   k8s-node01
kube-scheduler-k8s-master             1/1       Running   2          1d        192.168.56.11   k8s-master
[root@k8s-master ~]# kubectl get node    # the node status has now changed to Ready
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.2
k8s-node01   Ready     <none>    1d        v1.11.2
[root@k8s-node02 ~]# yum install -y docker kubeadm kubelet
[root@k8s-node02 ~]# systemctl enable docker kubelet
[root@k8s-node02 ~]# systemctl start docker
Again, pull the images the node needs in advance. Another pitfall here: according to the official documentation, a bootstrap token is valid for 24 hours by default, and by the time node02 was being joined the original token had expired. Checking with kubeadm token list shows that the token generated during initialization is now invalid:
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
dx7mko.j2ug1lqjra5bf6p2   <invalid>   2018-08-22T18:15:43-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
So a new token has to be generated, as follows:
[root@k8s-master ~]# kubeadm token create
1vxhuq.qi11t7yq2wj20cpe
If you do not have the --discovery-token-ca-cert-hash value, it can be recomputed on the master with the following command chain:
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
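Alternatively, kubeadm can emit a complete join command (token plus CA cert hash) in one step, assuming the --print-join-command flag is available in your kubeadm version:

[root@k8s-master ~]# kubeadm token create --print-join-command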
Now run kubeadm join to add node02 to the cluster; the --discovery-token-ca-cert-hash from the initial kubeadm init can still be used here:
[root@k8s-node02 ~]# kubeadm join 192.168.56.11:6443 --token 1vxhuq.qi11t7yq2wj20cpe --discovery-token-ca-cert-hash sha256:93fe958796db44dcc23764cb8d9b6a2e67bead072e51a3d4d3c2d36b5d1007cf
[root@k8s-master ~]# kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    1d        v1.11.2
k8s-node01   Ready     <none>    1d        v1.11.2
k8s-node02   Ready     <none>    2h        v1.11.2
If you run into other problems during the cluster installation, the following commands reset a node so you can start over:
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
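kubeadm reset does not flush the iptables (or IPVS) rules that kube-proxy created, so clearing them manually is a common follow-up before re-initializing. A cautious sketch, to be run only if no other service on the host depends on those rules:

$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
$ ipvsadm --clear    # only if kube-proxy was running in IPVS mode and ipvsadm is installed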