In earlier articles — "Installing and Deploying a Kubernetes Cluster on Ubuntu 18, plus Installing Helm" and "Installing Kubernetes 1.15.3 on CentOS with kubeadm" — I set up older versions. Since I needed to upgrade anyway, I decided to simply install the latest version and try it out.
1. Install and enable Docker

sudo apt install docker.io
sudo systemctl enable docker
docker --version
2. Add the Kubernetes signing key and repository

sudo apt install curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
3. Install kubeadm

sudo apt install kubeadm
kubeadm version

# Handy commands for restarting the kubelet service:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl stop kubelet
sudo systemctl enable kubelet
sudo systemctl start kubelet
4. Disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # permanently disable swap
# Alternatively, edit /etc/fstab by hand and comment out the swap entry.
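The sed edit above can be tried safely on a scratch copy before touching /etc/fstab; a minimal sketch (the file name fstab.test and its contents are made up for the demo):

```shell
# Stand-in fstab with a hypothetical root entry and a swap entry.
cat > fstab.test <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF
# Same command as above, pointed at the scratch copy:
sed -i '/ swap / s/^/#/' fstab.test
cat fstab.test
```

Only the line containing " swap " gets a leading `#`; the root filesystem entry is untouched. Once satisfied, run the same sed against /etc/fstab as shown above.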
I ran the commands above on a single Ubuntu machine (if you have multiple machines, run them on all of them; here I simply cloned the virtual machine).
5. Prepare two virtual machines, k8s-master and k8s-node (I named the machine above k8s-master, then copied it and named the copy k8s-node)

sudo hostnamectl set-hostname k8s-master   # run on k8s-master, IP: 192.168.100.11
sudo hostnamectl set-hostname k8s-node     # run on k8s-node,   IP: 192.168.100.12
1. Initialize Kubernetes on the master. kubeadm config print init-defaults prints the default configuration kubeadm uses to initialize a cluster. A cluster initialized with those defaults taints the master node with node-role.kubernetes.io/master:NoSchedule, which stops the master from accepting scheduled workloads. Since this test environment has only two nodes, change that taint's effect from NoSchedule to PreferNoSchedule, and also pin the Kubernetes version to 1.20.5:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.11
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  podSubnet: 10.244.0.0/16
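A sketch of feeding this configuration to kubeadm — the filename kubeadm-init.yaml is my choice, and the init command itself must run as root on k8s-master:

```shell
# Save the two-document configuration above to a file.
cat > kubeadm-init.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.11
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  podSubnet: 10.244.0.0/16
EOF
# Then, as root on k8s-master (not executed here):
#   kubeadm init --config=kubeadm-init.yaml
```

When kubeadm init finishes, it prints the kubeadm join command that worker nodes use later.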
Please refer to Container runtimes for the runtime configuration:
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.100.30:8080"]
}
EOF
# I also added my private Docker registry (insecure-registries) while I was at it.

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker.
systemctl daemon-reload
systemctl restart docker
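A malformed daemon.json prevents Docker from starting, so it is worth validating the JSON first. A sketch that writes the same content to a scratch file and checks it (the scratch filename is my choice, and python3 is assumed to be available):

```shell
# Write the same JSON to a scratch file instead of /etc/docker/daemon.json.
cat > daemon.json.test <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.100.30:8080"]
}
EOF
# json.tool exits non-zero on invalid JSON, so this only prints on success.
python3 -m json.tool daemon.json.test >/dev/null && echo "daemon.json is valid JSON"
```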
If you hit a "port 10251 and 10252 are in use" error, run netstat -lnp | grep 1025 and kill the offending process IDs.
2. The following commands configure kubectl access to the cluster for a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster status and confirm that every component is healthy. Errors like the following need fixing:
root@k8s-master:~# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
To fix the "Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused" error: it occurs because kube-controller-manager.yaml and kube-scheduler.yaml set the port to 0 by default; commenting that line out in both files resolves it. (Do this on every master node.)
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
vim /etc/kubernetes/manifests/kube-scheduler.yaml
# comment out the "- --port=0" line in each file,
# then restart kubelet on all nodes:
systemctl restart kubelet.service
# Run kubectl get cs again
root@k8s-master:~# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
3. Next, install the flannel network add-on:
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Note the flannel image referenced in kube-flannel.yml; at the time it was v0.11.0, i.e. quay.io/coreos/flannel:v0.11.0-amd64.
If a node has multiple network interfaces, see flannel issue 39701: you currently need to use the --iface parameter in kube-flannel.yml to name the host's internal NIC, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.13.1-rc2
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......
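The --iface edit can also be scripted with sed instead of editing by hand. A sketch on a minimal stand-in excerpt of the manifest (the file name flannel-args-demo.yml and the NIC name eth1 are assumptions; on a real node you would run the sed against the kube-flannel.yml downloaded above):

```shell
# Stand-in excerpt of the flanneld args block from kube-flannel.yml.
cat > flannel-args-demo.yml <<'EOF'
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF
# Insert "- --iface=eth1" after "- --kube-subnet-mgr", preserving indentation.
sed -i 's/^\( *\)- --kube-subnet-mgr/&\n\1- --iface=eth1/' flannel-args-demo.yml
cat flannel-args-demo.yml
```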
Use kubectl get pod --all-namespaces=true -o wide or kubectl get pod -n kube-system to make sure all Pods are in the Running state.
4. Test whether cluster DNS works
kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
Once inside, run nslookup kubernetes.default and confirm that resolution works:
nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
5. Add a Node to the Kubernetes cluster
Next, add the k8s-node host to the Kubernetes cluster. On k8s-node, run:
kubeadm join 192.168.100.11:6443 --token ez5vpw.0bczsqcmuu6u063t --discovery-token-ca-cert-hash sha256:df94524441a7d8a0d880f9738fcf33ebffcbc75039bcaf120f2922297ff8f9a4
k8s-node joined the cluster smoothly. Now list the cluster's nodes on the master:
root@k8s-master:~# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   9h    v1.20.5
k8s-node     Ready    <none>                 9h    v1.20.5
6. How to remove a Node from the cluster
To remove the k8s-node Node from the cluster, run the following on k8s-master:
kubectl drain k8s-node --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node
Then on k8s-node, run:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
7. Install ingress-nginx

Download the manifest:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
Search the file for serviceAccountName: nginx-ingress-serviceaccount, around line 215.
Then add hostNetwork: true and comment out one of the args entries below it; the modified content looks like the following (apply it afterwards with kubectl apply -f mandatory.yaml). While you're there, change authorization.k8s.io/v1beta1 to authorization.k8s.io/v1.
terminationGracePeriodSeconds: 300
serviceAccountName: nginx-ingress-serviceaccount
hostNetwork: true   # this line is added
nodeSelector:
  kubernetes.io/os: linux
containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
  args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration
  - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
  - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
  #- --publish-service=$(POD_NAMESPACE)/ingress-nginx   # commented out
  - --annotations-prefix=nginx.ingress.kubernetes.io
Here nodeSelector controls which nodes the ingress-controller Pod may run on. kubernetes.io/os: linux is the default value and matches every node, because Kubernetes puts this label on all nodes by default; change it as needed.
You can inspect labels on the master with kubectl get node --show-labels, or attach a specific label to a particular node and use it here. For example, kubernetes.io/hostname: 192.168.1.65 is the configuration I would use to run the ingress-controller only on that node.
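With the controller running on hostNetwork, a minimal Ingress resource might look like the following sketch (the resource name, host, and backend Service are all hypothetical placeholders, not from this cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                      # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"  # route through the nginx controller above
spec:
  rules:
  - host: demo.example.com                # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service            # hypothetical backend Service
            port:
              number: 80
```

Apply it with kubectl apply -f; since the controller uses hostNetwork, the site is reachable on ports 80/443 of the node running the ingress-controller Pod.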
8. Install Helm

In an intranet environment you can download and install it manually; download from: https://github.com/kubernetes/helm/releases
curl -O https://get.helm.sh/helm-v3.5.3-linux-amd64.tar.gz
tar -zxvf helm-v3.5.3-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
9. Install Kuboard

If you already have a Kubernetes cluster, a single command installs Kuboard:

kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml

Then open port 32567 on any node of the cluster (http://any-of-your-node-ip:32567) to reach the Kuboard UI, and obtain a token with the following commands:
kubectl create clusterrolebinding serviceaccounts-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard
kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode
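The trailing base64 --decode in the last command is needed because Kubernetes stores Secret data base64-encoded. A quick round-trip demo with a sample value (not a real token):

```shell
# Encode a sample string the way Kubernetes stores Secret data, then decode it.
encoded=$(printf 'sample-sa-token' | base64)
printf '%s' "$encoded" | base64 --decode   # prints: sample-sa-token
```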