Run on: all nodes (k8s-master, k8s-slave)
# On the master node
$ hostnamectl set-hostname k8s-master          # set the master node's hostname
# On the slave1 node
$ hostnamectl set-hostname k8s-worker-node1
# On the slave2 node
$ hostnamectl set-hostname k8s-worker-node2
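If the nodes cannot yet resolve each other by hostname, it can help to add entries to /etc/hosts on every node. A minimal sketch; 192.168.136.138 is the master IP used later in this guide, and the two worker IPs are placeholders you must replace with your own:

# Append name resolution for the cluster nodes (replace the IPs with your actual addresses)
cat >> /etc/hosts <<EOF
192.168.136.138 k8s-master
192.168.136.139 k8s-worker-node1
192.168.136.140 k8s-worker-node2
EOF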
Run on: all master and slave nodes (k8s-master, k8s-slave)
$ iptables -P FORWARD ACCEPT
$ /etc/init.d/ufw stop
$ ufw disable
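As an optional check (not part of the original steps), ufw should now report that it is inactive:

$ ufw status
Status: inactive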
Disable swap:
swapoff -a
# Prevent the swap partition from being mounted automatically at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
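A quick way to confirm swap is fully disabled (an optional check, not from the original steps):

$ swapon --show        # no output means no swap device is active
$ free -m              # the Swap row should read all zeros

Next, adjust the kernel parameters Kubernetes needs: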
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
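To verify that the module loaded and the settings took effect:

$ lsmod | grep br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1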
$ apt-get update && apt-get install -y apt-transport-https ca-certificates software-properties-common
$ curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
$ curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
$ add-apt-repository "deb [arch=amd64] https://mirrors.ustc.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
$ add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main"
$ apt-get update
# If the previous step fails with a NO_PUBKEY error, see https://www.cnblogs.com/jiangzuo/p/13667011.html
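Once the repositories are added, apt-cache madison lists the versions available from them, which is a handy check before pinning the exact versions installed below:

$ apt-cache madison docker-ce | head -n 5
$ apt-cache madison kubeadm | head -n 5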
Run on: all nodes
$ apt-get install docker-ce=5:20.10.8~3-0~ubuntu-bionic
## Start docker and enable it at boot
$ systemctl enable docker && systemctl start docker
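It is worth checking which cgroup driver docker ended up with, since kubelet and docker must agree on it and docker defaults to cgroupfs. Below is a sketch of the commonly recommended switch to systemd via /etc/docker/daemon.json; this is an extra step not in the original guide, and you should merge it carefully if you already have a daemon.json:

$ docker info | grep -i 'cgroup driver'
# If it reports cgroupfs and your kubelet expects systemd, align them:
$ cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
$ systemctl restart docker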
Run on: all master and slave nodes (k8s-master, k8s-slave)
$ apt-get install kubelet=1.21.1-00 kubectl=1.21.1-00 kubeadm=1.21.1-00
## Check the kubeadm version
$ kubeadm version
## Start kubelet at boot
$ systemctl enable kubelet
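To keep a routine apt-get upgrade from moving these pinned versions, the packages can optionally be held:

$ apt-mark hold kubelet kubeadm kubectl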
Run on: the master node only (k8s-master)
$ kubeadm config print init-defaults > kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.136.138   # change to the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node   # delete this line
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # change the image repo here
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # add this line
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Run on: the master node only (k8s-master)
# Pre-pull the images to the local machine
$ kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
failed to pull image "registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0": output: Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/coredns/coredns, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
The error says the coredns image cannot be found; we can work around it as follows:
$ docker pull coredns/coredns:1.8.0
$ docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
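After the pull and re-tag, a quick check that the image now exists under the name kubeadm expects; the second command also lists every image the init will need:

$ docker images | grep coredns
$ kubeadm config images list --config kubeadm.yaml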
Run on: the master node only (k8s-master)
$ kubeadm init --config kubeadm.yaml
If the initialization succeeds, it prints a message like the following at the end:
...
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.136.138:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3a7987c9f5007ebac7980e6614281ee0e064c760c8db012471f9f662289cc9ce
Next, follow the instructions above to set up kubectl client authentication:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
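A quick sanity check that the kubeconfig works before going further:

$ kubectl cluster-info
$ kubectl version --short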
⚠️ Note: at this point kubectl get nodes should show the node in NotReady status, because the network plugin has not been configured yet.
If the initialization fails, adjust according to the error message, run kubeadm reset, and then run the init again.
Run on: all slave nodes (k8s-slave)
On each slave node, run the following command. It is printed in the output of a successful kubeadm init; replace it with the actual command printed by your own init.
kubeadm join 192.168.136.135:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1c4305f032f4bf534f628c32f5039084f4b103c922ff71b12a5f0f98d1ca9a4f
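The default bootstrap token has a 24-hour TTL; if it has expired by the time a node joins, a fresh join command can be printed on the master:

$ kubeadm token create --print-join-command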
Run on: the master node only (k8s-master)
Install the operator:
$ kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
Wait for the operator pod to start up completely:
$ kubectl -n tigera-operator get po
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-698876cbb5-kfpb2   1/1     Running   0          38m
Image pulls can be slow; you can pull them manually with docker pull on each node.
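For example, a sketch of pre-pulling the operator image on a node; the jsonpath query reads the image reference straight from the deployment so the tag matches, and the placeholder in the pull command must be replaced with whatever it prints:

# Find the exact image the operator deployment wants, then pull it on the node
$ kubectl -n tigera-operator get deploy tigera-operator \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
$ docker pull <image-printed-above>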
Edit the calico configuration:
$ vim custom-resources.yaml

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16   # change to match the pod CIDR
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://docs.projectcalico.org/v3.20/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
Apply the calico configuration:
$ kubectl apply -f custom-resources.yaml
Wait for the operator to create the calico pods automatically:
# The operator automatically creates the calico-apiserver and calico-system namespaces and the necessary pods; just wait for the pods to start
$ kubectl get ns
NAME               STATUS   AGE
calico-apiserver   Active   13m
calico-system      Active   19m
$ kubectl -n calico-apiserver get po
NAME                                READY   STATUS    RESTARTS   AGE
calico-apiserver-554fbf9554-b6kzv   1/1     Running   0          13m
$ kubectl -n calico-system get po
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-868b656ff4-hn6qv   1/1     Running   0          20m
calico-node-qqrp9                          1/1     Running   0          20m
calico-node-r45z2                          1/1     Running   0          20m
calico-typha-5b64cf4b48-vws5j              1/1     Running   0          20m
calico-typha-5b64cf4b48-w6wqf              1/1     Running   0          20m
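Rather than polling by hand as above, kubectl wait can block until all calico pods report Ready (run it once the calico-system namespace exists):

$ kubectl wait --for=condition=Ready pod --all -n calico-system --timeout=300s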
Run on: the master node (k8s-master)
$ kubectl get nodes   # check whether all cluster nodes are Ready
Create a test nginx service:
$ kubectl run test-nginx --image=nginx:alpine
Check that the pod was created successfully, and curl the pod IP to verify it is reachable:
$ kubectl get po -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
test-nginx-5bd8859b98-5nnnw   1/1     Running   0          9s    10.244.1.2   k8s-slave1   <none>           <none>
$ curl 10.244.1.2
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
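To reach the test service from outside the pod network, it can be exposed as a NodePort. A sketch assuming kubectl run created a bare pod named test-nginx; on older kubectl versions run created a Deployment (as the pod name above suggests), in which case expose the deployment instead:

$ kubectl expose pod test-nginx --port=80 --type=NodePort
$ kubectl get svc test-nginx   # note the mapped NodePort, then curl <node-ip>:<node-port>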
If you run into other problems during cluster installation, you can reset with the commands below:
# Run on all cluster nodes
kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /run/flannel/subnet.env
rm -rf /var/lib/cni/
mv /etc/kubernetes/ /tmp
mv /var/lib/etcd /tmp
mv ~/.kube /tmp
iptables -F
iptables -t nat -F
ipvsadm -C
ip link del kube-ipvs0
ip link del dummy0
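After the reset, it is usually worth restarting the runtime services so a subsequent kubeadm init starts from a clean state (an extra step, not from the original guide):

systemctl restart docker kubelet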