There are actually several ways to set up a Kubernetes cluster. For instance, in my earlier article 《利用K8S技術棧打造我的私有云(連載之:K8S集羣搭建)》 I used the binary installation method. Although that approach helps us understand the inner workings of a k8s cluster, it is rather tedious. kubeadm, by contrast, is the tool officially provided by Kubernetes for quickly deploying a Kubernetes cluster. It has matured considerably by now, and deploying a cluster with it is easy to pick up and much simpler to operate, so this article describes the process in detail.
Note: this article was first published on My Personal Blog: CodeSheep·程序羊 — feel free to stop by.
In this article we will deploy a three-node Kubernetes cluster with one master and two workers. The overall node plan is shown in the table below:
| Hostname | IP | Role |
| --- | --- | --- |
| k8s-master | 192.168.39.79 | k8s master node |
| k8s-node-1 | 192.168.39.77 | k8s worker node |
| k8s-node-2 | 192.168.39.78 | k8s worker node |
The software versions used on each node are:
- Operating system: CentOS-7.4-64Bit
- Docker: 1.13.1
- Kubernetes: 1.13.1
The following components need to be installed on all nodes:
- Docker: needs no introduction
- kubelet: runs on every Node; responsible for starting containers and Pods
- kubeadm: responsible for initializing the cluster
- kubectl: the k8s command-line tool, used to deploy and manage applications and to perform CRUD operations on all kinds of resources

Disable the firewall:

systemctl disable firewalld.service
systemctl stop firewalld.service
Disable SELinux:

setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Turn off swap:

swapoff -a
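Note that swapoff -a only disables swap until the next reboot. A common companion step (a sketch, assuming the swap entry lives in /etc/fstab) is to comment it out so swap stays off permanently:

# comment out the swap line so swap stays disabled after a reboot
sed -i '/ swap / s/^/#/' /etc/fstab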
Set the hostname on each node accordingly:

hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2
Edit the /etc/hosts file and add the following entries:
192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
Install Docker — no need to elaborate!!!
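For completeness, a minimal sketch of installing Docker on CentOS 7 (an assumption — any installation method that yields the Docker version listed above will do):

# install Docker 1.13.1 from the stock CentOS 7 repositories and start it
yum install -y docker
systemctl enable docker && systemctl start docker
docker version    # should report 1.13.1 on stock CentOS 7.4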
Add the Kubernetes yum repository (here using the Aliyun mirror):

cat>>/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
Then install kubelet, kubeadm and kubectl (making sure once more that SELinux is off) and enable the kubelet service:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
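You can optionally sanity-check the installed versions (exact output will vary):

# confirm the tools are installed at the expected version
kubeadm version -o short            # e.g. v1.13.1
kubectl version --client --short    # client only; the API server is not up yet
rpm -q kubelet kubeadm kubectl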
To cope with network connectivity issues — k8s.gcr.io is not directly reachable from the network environment in mainland China — we have to pull the required images from mirrors in advance and re-tag them:
docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
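Since the pull/tag/cleanup pattern is identical for the core images, it can also be scripted (a sketch; the image list and versions are exactly those above):

# pull each core image from the mirror, re-tag it as k8s.gcr.io, drop the mirror tag
images=(kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi "mirrorgooglecontainers/${img}"
done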
Then execute the following command on the Master node to initialize the k8s cluster:
kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16
The meaning of each flag:

- --kubernetes-version: specifies the k8s version
- --apiserver-advertise-address: specifies which of the Master's network interfaces to use for communication; if omitted, kubeadm automatically selects the interface that has the default gateway
- --pod-network-cidr: specifies the Pod network range; the value to use depends on the chosen network solution, and this article uses the classic flannel solution

After running the command, the console prints the detailed cluster initialization log shown below:
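As the first line of the log shows, this particular run was actually launched with kubeadm init --config kubeadm-config.yaml instead of the flags. A minimal sketch of an equivalent config file for kubeadm's v1beta1 API (an assumption reconstructed from the flags above, not the author's exact file):

# Sketch of a kubeadm-config.yaml equivalent to the flags above (assumption).
# The W1224 warning in the log was caused by a non-breaking space before
# "podSubnet" in the original file -- indent with plain ASCII spaces only.
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.79
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
EOF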
[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209 10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0 podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123
[root@localhost ~]#
On the Master, execute the following commands as root to configure kubectl:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
Installing a Pod network is a prerequisite for Pods to communicate with each other. k8s supports many network solutions; here we once again pick the classic flannel solution. First make sure bridged traffic is visible to iptables:
sysctl net.bridge.bridge-nf-call-iptables=1
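To keep this setting across reboots you can persist it (a sketch, assuming /etc/sysctl.d is consulted on boot, as it is on CentOS 7):

# persist the bridge-netfilter setting across reboots
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system

Then apply the flannel manifest: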
kubectl apply -f kube-flannel.yaml
The kube-flannel.yaml file is available here.
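If you do not have the file at hand, the upstream v0.10.0 manifest can presumably be fetched directly from the flannel repository (an assumption — verify the URL and that GitHub is reachable from your environment):

# fetch and apply the flannel v0.10.0 manifest from upstream
wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml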
Once the Pod network is installed, run the following command to check whether the CoreDNS Pods are running; as soon as they are, you can move on to the subsequent steps:
kubectl get pods --all-namespaces -o wide
At the same time, we can see that the master node is already Ready:

kubectl get nodes
Execute the following command on each of the two Slave nodes to join them to the k8s cluster that is already up on the Master:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
If you have forgotten the token, you can retrieve it by running the following on the Master:
kubeadm token list
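If the token has expired, a new one can be created; kubeadm can even print the complete join command for you, and the CA cert hash can be recomputed from the Master's CA certificate (both are standard kubeadm/openssl usage):

# generate a fresh token together with the full join command
kubeadm token create --print-join-command
# recompute the --discovery-token-ca-cert-hash value
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'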
The output of the kubeadm join command above is as follows:
[root@localhost ~]# kubeadm join 192.168.39.79:6443 --token yndddp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.39.79:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
[discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
[discovery] Successfully established connection with API Server "192.168.39.79:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
Back on the Master, verify that all nodes have joined:

kubectl get nodes
And check that the Pods in all namespaces are running normally:

kubectl get pods --all-namespaces -o wide
Good — the cluster is now running normally. Next, let's look at how to tear the cluster down properly.
First drain and delete each node:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
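For example, to remove k8s-node-1 using the hostnames planned in the table above:

kubectl drain k8s-node-1 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node-1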
Once the nodes have been removed, you can reset the cluster with the following command:
kubeadm reset
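Note that kubeadm reset does not reset or clean up iptables or IPVS rules; kubeadm's own reset message suggests doing that manually if required:

# flush the iptables rules left behind by the cluster
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X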
Just as we give Elasticsearch a visual management tool, we had better give the k8s cluster one as well to make managing it easier. So next we install kubernetes-dashboard v1.10.0 for visual cluster management.
docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
kubectl create -f dashboard.yaml
The dashboard.yaml file is available here.
Check that the dashboard Pod has started normally:

kubectl get pods --namespace=kube-system
[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-4rds2 1/1 Running 0 81m
coredns-86c58d9df4-rhtgq 1/1 Running 0 81m
etcd-k8s-master 1/1 Running 0 80m
kube-apiserver-k8s-master 1/1 Running 0 80m
kube-controller-manager-k8s-master 1/1 Running 0 80m
kube-flannel-ds-amd64-8qzpx 1/1 Running 0 78m
kube-flannel-ds-amd64-jvp59 1/1 Running 0 77m
kube-flannel-ds-amd64-wztbk 1/1 Running 0 78m
kube-proxy-crr7k 1/1 Running 0 81m
kube-proxy-gk5vf 1/1 Running 0 78m
kube-proxy-ktr27 1/1 Running 0 77m
kube-scheduler-k8s-master 1/1 Running 0 80m
kubernetes-dashboard-79ff88449c-v2jnc 1/1 Running 0 21s
Check the dashboard Service and the NodePort it is exposed on:

kubectl get service --namespace=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 5h38m
kubernetes-dashboard NodePort 10.99.242.186 <none> 443:31234/TCP 14
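The Service maps the dashboard's port 443 to NodePort 31234, so it should be reachable in a browser at https://<any-node-ip>:31234. A quick reachability check (illustrative only):

# -k skips certificate verification, since we are about to self-sign our own cert
curl -k https://192.168.39.79:31234/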
Generate a private key and a certificate signing request for the dashboard:

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr    # if prompted for input, just press Enter all the way through
Then issue the self-signed certificate:

openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Place dashboard.key and dashboard.crt under the path /home/share/certs; this path is referenced in the dashboard-user-role.yaml file we are about to apply.
kubectl create -f dashboard-user-role.yaml
The dashboard-user-role.yaml file is available here.
Retrieve the login token:

kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name: admin-token-9d4vl
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw
Now that the token has been generated successfully, we can open a browser and enter the token to log in to the cluster management page:
As my abilities are limited, please point out any errors or inaccuracies — let's learn and improve together!