Kubeadm is an important tool for managing the cluster lifecycle, from creation through configuration to upgrades. It bootstraps production clusters on existing hardware and configures the core Kubernetes components following best practices, providing a secure and simple join flow for new nodes and supporting easy upgrades. With the release of Kubernetes 1.13, kubeadm has officially reached GA.
First, prepare two virtual machines (at least 2 CPU cores each). I created two Ubuntu 18.04 VMs with Hyper-V, with the following IPs and hostnames:
172.17.20.210 master
172.17.20.211 node1
Since Kubernetes 1.8, swap must be disabled; with the default configuration, the kubelet will fail to start if swap is left on.
Edit the /etc/fstab file:

sudo vim /etc/fstab

UUID=8be04efd-f7c5-11e8-be8b-00155d000500 / ext4 defaults 0 0
UUID=C0E3-6A72 /boot/efi vfat defaults 0 0
#/swap.img none swap sw 0 0
As shown above, comment out the line containing /swap.img, then run:

sudo swapoff -a
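As a quick sanity check (not part of the original steps, but harmless), you can confirm that swap is really off:

# Both commands should report no active swap
free -h
swapon --show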
On Ubuntu 18.04+, DNS is fully managed by systemd-resolved: the resolver listens on 127.0.0.53:53 and its configuration file is /etc/systemd/resolved.conf. This can sometimes cause domain-name resolution failures, which can be fixed in either of the following two ways:
1. The simplest option is to stop the systemd-resolved service:

sudo systemctl stop systemd-resolved
sudo systemctl disable systemd-resolved
Then edit the /etc/resolv.conf file manually.
2. The more recommended approach is to change the systemd-resolved settings:

sudo vim /etc/systemd/resolved.conf

# Change to the following
[Resolve]
DNS=1.1.1.1 1.0.0.1
#FallbackDNS=
#Domains=
LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#Cache=yes
#DNSStubListener=yes
DNS= sets the IP addresses of the DNS servers, here 1.1.1.1 and 1.0.0.1.
LLMNR=no disables LLMNR (Link-Local Multicast Name Resolution); otherwise systemd-resolved would listen on port 5355.
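After editing the file, restart the service so the new settings take effect; a quick way to verify, assuming the systemd-resolve CLI that ships with Ubuntu 18.04:

# Apply the new resolver settings
sudo systemctl restart systemd-resolved
# Show the DNS servers systemd-resolved is actually using
systemd-resolve --status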
Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, implemented via the dockershim CRI built into the kubelet.
For installing Docker, refer to the earlier blog post: Docker初體驗.
Note that Kubernetes 1.13 has been validated against Docker versions 1.11.1, 1.12.1, 1.13.1, 17.03, 17.06, 17.09, and 18.06. The minimum supported Docker version is 1.11.1 and the highest is 18.06, while the latest Docker release is already 18.09, so we need to pin the install to 18.06.1-ce:

sudo apt install docker-ce=18.06.1~ce~3-0~ubuntu
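Optionally, to keep a later apt upgrade from moving Docker past the validated release, you can hold the package and confirm the installed version:

# Hold docker-ce at the installed 18.06 version
sudo apt-mark hold docker-ce
# Confirm the daemon version
docker version --format '{{.Server.Version}}'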
Before deploying, we need to install three packages:
kubeadm: the command-line tool that bootstraps the k8s cluster.
kubelet: the core component that runs on every node in the cluster, performing operations such as starting pods and containers.
kubectl: the command-line tool for operating the cluster.
First add the apt-key:

sudo apt update && sudo apt install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
Add the kubernetes source:

sudo vim /etc/apt/sources.list.d/kubernetes.list

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
Install:

sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
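As an optional check, verify that all three tools installed correctly:

kubeadm version -o short
kubectl version --client
kubelet --version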
The k8s control-plane components run on the Master node, including etcd and the API server (kubectl communicates with k8s through the API server).
Before running the initialization, there are three things to note:
1. Choose a network plugin, and check whether it requires any parameters when initializing the Master; for example, depending on the plugin chosen we may need to set the --pod-network-cidr parameter. See: Installing a pod network add-on.
2. kubeadm uses the default eth0 network interface (usually the internal IP) as the Master node's advertise address. To use a different network interface, set the --apiserver-advertise-address=<ip-address> parameter. If using IPv6, an IPv6 address must be given, e.g. --apiserver-advertise-address=fd00::101.
3. Run kubeadm config images pull ahead of time to pre-pull the images needed for initialization; this also checks connectivity to the Kubernetes registries.
The default Kubernetes registry address is k8s.gcr.io, which obviously cannot be reached from inside China, so kubeadm versions before v1.13 were very painful to install there. Version 1.13 finally resolved this pain point by adding an --image-repository parameter. Its default value is k8s.gcr.io; we set it to the domestic mirror registry.aliyuncs.com/google_containers, and everything else then works exactly as in the official documentation.
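For example, this parameter can be combined with the pre-pull mentioned above to fetch all images from the mirror ahead of time (a sketch, assuming the config images subcommands accept the same flags in this release):

# List the images kubeadm will need, then pre-pull them from the Aliyun mirror
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1
sudo kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1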
We also need to specify the --kubernetes-version parameter: its default value is stable-1, which makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt. Pinning it to a fixed version (the latest being v1.13.1) skips that network request.
Now, let's give it a try:
# Use the calico network: --pod-network-cidr=192.168.0.0/16
sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=192.168.0.0/16

# Output
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.20.210]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.17.20.210 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.17.20.210 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 42.003645 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6pkrlg.8glf2fqpuf3i489m
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.17.20.210:6443 --token 6pkrlg.8glf2fqpuf3i489m --discovery-token-ca-cert-hash sha256:eebfe256113bee397b218ba832f412273ae734bd4686241fb910885d26efd222
This time the deployment succeeded without a hitch. If we want to operate kubectl as a non-root user, run the following commands (also part of the kubeadm init output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
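Alternatively, if you are the root user, you can simply point kubectl at the admin kubeconfig (a variant that also appears in the official kubeadm docs):

export KUBECONFIG=/etc/kubernetes/admin.conf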
For Pods to communicate with each other, we must install a network plugin, and it must be installed before deploying any applications; CoreDNS will not start until a network plugin is in place.
For the full list of network plugins, see Networking and Network Policy.
Before installing, let's check the current state of the Pods:
kubectl get pods --all-namespaces

# Output
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78d4cf999f-6pgfr         0/1     Pending   0          87s
kube-system   coredns-78d4cf999f-m9kgs         0/1     Pending   0          87s
kube-system   etcd-master                      1/1     Running   0          47s
kube-system   kube-apiserver-master            1/1     Running   0          38s
kube-system   kube-controller-manager-master   1/1     Running   0          55s
kube-system   kube-proxy-mkg24                 1/1     Running   0          87s
kube-system   kube-scheduler-master            1/1     Running   0          41s
As shown above, the CoreDNS pods are in the Pending state, because we have not installed a network plugin yet.
Calico is a pure layer-3 virtual networking solution: Calico assigns each container an IP and makes every host a router, connecting containers on different hosts. Unlike VxLAN, Calico adds no extra packet encapsulation and needs no NAT or port mapping, so it scales and performs very well.
By default, the Calico network plugin uses the 192.168.0.0/16 subnet. We already passed --pod-network-cidr=192.168.0.0/16 at init time to match Calico; you can of course also edit the calico.yaml file to use a different subnet.
The Calico plugin can be installed with the following commands:
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

# The calico.yaml above pulls images from quay.io; if that fails, use the domestic mirrors below
kubectl apply -f http://mirror.faasx.com/k8s/calico/v3.3.2/rbac-kdd.yaml
kubectl apply -f http://mirror.faasx.com/k8s/calico/v3.3.2/calico.yaml
For more information about Calico, see the official Calico documentation: kubeadm quickstart.
After a short wait, run kubectl get pods --all-namespaces again to check how the network plugin installation is going:
kubectl get pods --all-namespaces

# Output
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   calico-node-x96gn                2/2     Running   0          47s
kube-system   coredns-78d4cf999f-6pgfr         1/1     Running   0          54m
kube-system   coredns-78d4cf999f-m9kgs         1/1     Running   0          54m
kube-system   etcd-master                      1/1     Running   3          53m
kube-system   kube-apiserver-master            1/1     Running   3          53m
kube-system   kube-controller-manager-master   1/1     Running   3          53m
kube-system   kube-proxy-mkg24                 1/1     Running   2          54m
kube-system   kube-scheduler-master            1/1     Running   3          53m
As shown, every STATUS is now Running, which means the installation succeeded. We can now join other nodes and deploy applications.
By default, for security reasons, the cluster does not schedule pods on the Master node. But in a development environment we may have only a single Master node; in that case the restriction can be lifted with the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-

# Output
node/master untainted
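If you later want to restore the default behavior, the taint can be re-applied; a sketch assuming the node is named master as in this walkthrough:

# Prevent regular pods from being scheduled on the master again
kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule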
To add worker nodes to the cluster, run the following on each machine: the command printed by kubeadm init, i.e.

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
If we have forgotten the Master node's join token, we can look it up with:
kubeadm token list

# Output
TOKEN                     TTL   EXPIRES                USAGES                   DESCRIPTION                                                 EXTRA GROUPS
6pkrlg.8glf2fqpuf3i489m   22h   2018-12-07T13:46:33Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
By default, a token is valid for 24 hours. If our token has already expired, we can generate a new one with:

kubeadm token create

# Output
u2mt59.tyqpo0v5wf05lx2q
If we also don't have the --discovery-token-ca-cert-hash value, it can be generated with:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

# Output
eebfe256113bee397b218ba832f412273ae734bd4686241fb910885d26efd222
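As a convenience, kubeadm can also print a complete join command in one step, which avoids assembling the token and hash by hand:

kubeadm token create --print-join-command
# Output (illustrative)
# kubeadm join 172.17.20.210:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>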
如今,咱們登陸到工做節點服務器,而後運行以下命令加入集羣(這也是上面init
輸出的一部分):
sudo kubeadm join 172.17.20.210:6443 --token 6pkrlg.8glf2fqpuf3i489m --discovery-token-ca-cert-hash sha256:eebfe256113bee397b218ba832f412273ae734bd4686241fb910885d26efd222

# Output
[sudo] password for raining:
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "172.17.20.210:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.20.210:6443"
[discovery] Requesting info from "https://172.17.20.210:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.17.20.210:6443"
[discovery] Successfully established connection with API Server "172.17.20.210:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
After waiting a moment, we can check the node status on the Master with kubectl get nodes:
kubectl get nodes

# Output
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   17m   v1.13.1
node1    Ready    <none>   15m   v1.13.1
As shown, everything is Ready. Success! Now we can run a few commands to test whether the cluster works properly.
First, verify that kube-apiserver, kube-controller-manager, kube-scheduler, and the pod network are working:
# Deploy an Nginx Deployment with two Pods
# https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
kubectl create deployment nginx --image=nginx:alpine
kubectl scale deployment nginx --replicas=2

# Verify that the Nginx Pods run correctly and are assigned cluster IPs starting with 192.168.
kubectl get pods -l app=nginx -o wide

# Output:
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-54458cd494-p8jzs   1/1     Running   0          31s   192.168.1.2   node1   <none>           <none>
nginx-54458cd494-v2m4b   1/1     Running   0          24s   192.168.1.3   node1   <none>           <none>
Next, verify that kube-proxy is working:
# Expose the service externally as a NodePort
# https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
kubectl expose deployment nginx --port=80 --type=NodePort

# Check the port accessible from outside the cluster
kubectl get services nginx

# Output
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.110.49.49   <none>        80:31899/TCP   4s

# The service can be reached from outside the cluster via any NodeIP:Port;
# the two node IPs in this walkthrough are 172.17.20.210 and 172.17.20.211
curl http://172.17.20.210:31899
curl http://172.17.20.211:31899
Finally, verify that DNS and the pod network are working:
# Run Busybox and enter interactive mode
kubectl run -it curl --image=radial/busyboxplus:curl

# Enter `nslookup nginx` to check whether the in-cluster IP resolves correctly, verifying DNS
[ root@curl-66959f6557-6sfqh:/ ]$ nslookup nginx
# Output
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx
Address 1: 10.110.49.49 nginx.default.svc.cluster.local

# Access the service by name to verify kube-proxy
[ root@curl-66959f6557-6sfqh:/ ]$ curl http://nginx/
# Output:
# <!DOCTYPE html> --- omitted

# Access the internal IPs of the two Pods to verify cross-node networking
[ root@curl-66959f6557-6sfqh:/ ]$ curl http://192.168.1.2/
[ root@curl-66959f6557-6sfqh:/ ]$ curl http://192.168.1.3/
All checks pass: the cluster is up and running. Next, we can follow the official documentation to deploy other services and enjoy.
To undo what kubeadm has done, first drain the node and make sure it is empty before shutting it down.
On the Master node, run:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
Then, on the node being removed, reset the kubeadm installation state:

sudo kubeadm reset
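Note that kubeadm reset does not clean up iptables rules or CNI configuration; if you need a fully clean slate, something along these lines (adapt to your setup) is commonly suggested in the kubeadm teardown docs:

# Flush iptables rules left behind by kube-proxy / the network plugin
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# Remove CNI configuration
sudo rm -rf /etc/cni/net.d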
If you want to reconfigure the cluster, simply run kubeadm init or kubeadm join again with new parameters.