Manual offline deployment of k8s (v1.9)
1. Environment preparation (one master node + two node nodes)
master 192.168.2.40
node-1 192.168.2.41
node-2 192.168.2.42
2. Bind hosts on master, node-1, and node-2
#vi /etc/hosts
192.168.2.40 master
192.168.2.41 node-1
192.168.2.42 node-2
3. Set up passwordless SSH login from the master node to the node nodes
[root@master ~]# ssh-keygen
[root@master ~]# ssh-copy-id node-1
[root@master ~]# ssh-copy-id node-2
4. Disable the firewall and SELinux on all servers
#systemctl stop firewalld.service
#systemctl disable firewalld.service
#sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
#grep SELINUX=disabled /etc/selinux/config
#setenforce 0
5. Disable swap on all servers
# swapoff -a && sed -i '/swap/d' /etc/fstab
6. Configure kernel routing parameters on all servers to prevent kubeadm from reporting routing warnings
#echo -e "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1\nvm.swappiness = 0" >> /etc/sysctl.conf
#sysctl -p
Note:
[root@master soft]# sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
[root@master soft]# modprobe bridge
[root@master soft]# lsmod|grep bridge
bridge                119562  0
stp                    12976  1 bridge
llc                    14552  2 stp,bridge
[root@master soft]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
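modprobe only loads the bridge module for the current boot. If you also want it loaded automatically after a reboot (not covered in the steps above; a minimal sketch using systemd's modules-load mechanism, with an arbitrary file name):
#echo "bridge" > /etc/modules-load.d/bridge.conf
#systemctl restart systemd-modules-load.service
#lsmod | grep bridge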
7. Operating system version: CentOS 7.2
8. Software versions
kubernetes v1.9
docker:17.03
kubeadm:v1.9.0
kube-apiserver:v1.9.0
kube-controller-manager:v1.9.0
kube-scheduler:v1.9.0
k8s-dns-sidecar:1.14.7
k8s-dns-kube-dns:1.14.7
k8s-dns-dnsmasq-nanny:1.14.7
kube-proxy:v1.9.0
etcd:3.1.10
pause: 3.0
flannel:v0.9.1
kubernetes-dashboard:v1.8.1
Note: this deployment uses kubeadm, the automated deployment tool officially recommended by Kubernetes. It deploys the Kubernetes components as pods on the master and node nodes and automatically takes care of certificate generation and related setup.
Because kubeadm pulls images from Google's registry by default, which is currently unreachable from inside China, the images have been downloaded in advance; you only need to load the images from the offline package onto each node.
1) On all servers, download the packages to /home/soft
Link: https://pan.baidu.com/s/1eUixGvo  Password: 65yo
2) On all servers, extract the downloaded offline package
#yum install -y bzip2
#tar -xjvf k8s_images.tar.bz2
3) On all servers, install docker-ce 17.03 (docker-ce 17.03 is the highest version supported by kubeadm v1.9)
Install the dependency packages
#yum install -y https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
#yum install -y ftp://ftp.icm.edu.pl/vol/rzm6/linux-slc/centos/7.1.1503/cr/x86_64/Packages/libseccomp-2.2.1-1.el7.x86_64.rpm
#yum install -y http://rpmfind.net/linux/centos/7.4.1708/os/x86_64/Packages/libtool-ltdl-2.4.2-22.el7_3.x86_64.rpm
#cd k8s_images
#rpm -ihv docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
#rpm -ivh docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
Note: switch the Docker registry mirror to the domestic DaoCloud mirror.
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://3272dd08.m.daocloud.io
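If the script cannot be fetched, the mirror can also be configured by hand through /etc/docker/daemon.json (a sketch, not from the original steps; it reuses the mirror URL from the command above, and docker must be started or restarted afterwards for it to take effect, which the next step does anyway):
#mkdir -p /etc/docker
#cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["http://3272dd08.m.daocloud.io"]
}
EOF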
4) On all servers, start docker-ce
#systemctl start docker.service && systemctl enable docker.service
5) On all servers, import the images
docker load </home/soft/k8s_images/docker_images/k8s-dns-dnsmasq-nanny-amd64_v1.14.7.tar
docker load </home/soft/k8s_images/docker_images/k8s-dns-kube-dns-amd64_1.14.7.tar
docker load </home/soft/k8s_images/docker_images/k8s-dns-sidecar-amd64_1.14.7.tar
docker load </home/soft/k8s_images/docker_images/kube-apiserver-amd64_v1.9.0.tar
docker load </home/soft/k8s_images/docker_images/kube-controller-manager-amd64_v1.9.0.tar
docker load </home/soft/k8s_images/docker_images/kube-scheduler-amd64_v1.9.0.tar
docker load </home/soft/k8s_images/docker_images/flannel:v0.9.1-amd64.tar
docker load </home/soft/k8s_images/docker_images/pause-amd64_3.0.tar
docker load </home/soft/k8s_images/docker_images/kube-proxy-amd64_v1.9.0.tar
docker load </home/soft/k8s_images/kubernetes-dashboard_v1.8.1.tar
docker load </home/soft/k8s_images/docker_images/etcd-amd64_v3.1.10.tar
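If you prefer not to run each docker load by hand, a small loop can import every tar archive in the offline package (a sketch, assuming the archives sit exactly where the package extracted them and that no unrelated tar files are present):
#for img in /home/soft/k8s_images/docker_images/*.tar /home/soft/k8s_images/*.tar; do docker load <"$img"; done
#docker images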
6) Install the kubelet, kubeadm, and kubectl packages
rpm -ivh kubernetes-cni-0.6.0-0.x86_64.rpm --nodeps --force
yum localinstall -y socat-1.7.3.2-2.el7.x86_64.rpm
yum localinstall -y kubelet-1.9.0-0.x86_64.rpm
yum localinstall -y kubectl-1.9.0-0.x86_64.rpm
yum localinstall -y kubeadm-1.9.0-0.x86_64.rpm
1) Start kubelet
#systemctl start kubelet && systemctl enable kubelet
2) Initialize the master
#kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16 --token-ttl=0
Note: Kubernetes supports multiple network plugins such as flannel, weave, and calico; flannel is used here, so the --pod-network-cidr parameter must be set. 10.244.0.0/16 is the default network segment configured in kube-flannel.yml; if you need to change it, just make the --pod-network-cidr parameter of kubeadm init and the network segment in kube-flannel.yml identical.
--kubernetes-version: it is best to specify the version explicitly; otherwise kubeadm requests https://storage.googleapis.com/kubernetes-release/release/stable-1.9.txt, which times out with an error unless you can get past the firewall.
--token-ttl: the token is valid for 24 hours by default; setting it to 0 means it never expires.
3) kubelet fails to start and reports an error; the log /var/log/messages shows the following:
kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
Fix: the kubelet's default cgroup driver differs from Docker's. Docker defaults to cgroupfs while the kubelet defaults to systemd; you can check the current Docker driver with docker info | grep cgroup.
Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Change Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd" to Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
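The same change can be made non-interactively, which is handy because it has to be repeated on the node machines later (a sketch; double-check the file afterwards):
#sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf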
Reload and restart:
systemctl daemon-reload && systemctl restart kubelet
Check the status:
#systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since 三 2018-04-11 15:11:22 CST; 22s ago
     Docs: http://kubernetes.io/docs/
 Main PID: 15942 (kubelet)
   Memory: 40.3M
   CGroup: /system.slice/kubelet.service
           └─15942 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kub...
4月 11 15:11:32 master kubelet[15942]: E0411 15:11:32.415152 15942 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubel...refused
4月 11 15:11:32 master kubelet[15942]: E0411 15:11:32.416006 15942 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubel...refused
4月 11 15:11:32 master kubelet[15942]: E0411 15:11:32.426454 15942 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/confi...refused
4月 11 15:11:34 master kubelet[15942]: E0411 15:11:34.653755 15942 eviction_manager.go:238] eviction manager: unexpected...t found
4月 11 15:11:34 master kubelet[15942]: W0411 15:11:34.657127 15942 cni.go:171] Unable to update cni config: No networks ...i/net.d
4月 11 15:11:34 master kubelet[15942]: E0411 15:11:34.657315 15942 kubelet.go:2105] Container runtime network not ready:...ialized
4月 11 15:11:35 master kubelet[15942]: I0411 15:11:35.238311 15942 kubelet_node_status.go:273] Setting node annotation t.../detach
4月 11 15:11:35 master kubelet[15942]: I0411 15:11:35.240636 15942 kubelet_node_status.go:82] Attempting to register node master
4月 11 15:11:39 master kubelet[15942]: W0411 15:11:39.658588 15942 cni.go:171] Unable to update cni config: No networks ...i/net.d
4月 11 15:11:39 master kubelet[15942]: E0411 15:11:39.658802 15942 kubelet.go:2105] Container runtime network not ready:...ialized
Hint: Some lines were ellipsized, use -l to show in full.
At this point you need to reset the environment and run the init again:
#kubeadm reset
#kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16 --token-ttl=0
4) A successful initialization looks like this:
[root@master k8s_images]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.224.0.0/16 --token-ttl=0
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.40]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 29.003450 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master as master by adding a label and a taint
[markmaster] Master master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: d0c1ec.7d7a61a4e9ba83f8
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token d0c1ec.7d7a61a4e9ba83f8 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:7b38dad17cd1378446121952632d78d041dfcddc27b4663d011113a3b6326a65
Save the kubeadm join xxx command; the node nodes will need it in a moment. If you forget it, you can retrieve the token on the master with kubeadm token list, or generate a new one.
The token generated this time is:
kubeadm join --token d0c1ec.7d7a61a4e9ba83f8 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:7b38dad17cd1378446121952632d78d041dfcddc27b4663d011113a3b6326a65
Note: by default the token in the join command printed by kubeadm init is only valid for 24 hours (here --token-ttl=0 was used, so it does not expire); if it has expired, regenerate the join command with:
# kubeadm token create --print-join-command
5) As the output above indicates, the root user cannot yet use kubectl to manage the cluster; configure the environment variables.
For non-root users:
#mkdir -p $HOME/.kube
#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#chown $(id -u):$(id -g) $HOME/.kube/config
For the root user:
#export KUBECONFIG=/etc/kubernetes/admin.conf
You can also put it directly into ~/.bash_profile:
#echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
Source the environment variables:
source ~/.bash_profile
6) Test with kubectl version
[root@master k8s_images]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
6. Install the network. flannel, calico, weave, or macvlan can be used; here we use flannel.
1) Download this file
#wget https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
or use the copy included in the offline package directly.
2) To change the network segment, edit the kube-flannel.yml configuration file; it must stay in sync with kubeadm --pod-network-cidr=. Modify the Network item:
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
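For example, to run the pod network on 10.10.0.0/16 instead (a sketch; 10.10.0.0/16 is only an illustrative value, not something used in this deployment), both places would be changed before initializing:
#sed -i 's#10.244.0.0/16#10.10.0.0/16#' /home/soft/k8s_images/kube-flannel.yml
#kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.10.0.0/16 --token-ttl=0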
3) Apply the network configuration
#kubectl create -f /home/soft/k8s_images/kube-flannel.yml
7. Deploy kubernetes-dashboard. kubernetes-dashboard is an optional component; frankly it is not very pleasant to use and its functionality is limited. It is recommended to deploy kubernetes-dashboard together with the master; otherwise, after the node nodes join the cluster, kube-scheduler will schedule kubernetes-dashboard onto a node node, and its communication with kube-apiserver will then need extra configuration.
Download the kubernetes-dashboard configuration file, or use the kubernetes-dashboard.yaml included in the offline package.
1) Create kubernetes-dashboard
#kubectl create -f /home/soft/k8s_images/kubernetes-dashboard.yaml
2) If you want to change the port or make it externally accessible
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32666
  selector:
    k8s-app: kubernetes-dashboard
Note: 32666 is the exposed port, much like a docker run port mapping; once it is exposed, access https://master_ip:32666.
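To confirm which NodePort the Service actually ended up with (a quick verification step, not part of the original text):
#kubectl -n kube-system get svc kubernetes-dashboard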
If a pod fails and needs to be deleted, use the following command to delete the pod:
kubectl delete po -n kube-system <pod-name>
To see why a pod failed to be created:
# kubectl describe pod <pod-name> --namespace=kube-system
3) The default authentication methods are kubeconfig and token; here we use basic auth to authenticate against the apiserver.
Create /etc/kubernetes/pki/basic_auth_file to store the username and password. The basic_auth_file format is user,password,userid.
[root@master pki]# echo 'admin,admin,2' > /etc/kubernetes/pki/basic_auth_file
4) Add basic auth to kube-apiserver
[root@master pki]# grep 'auth' /etc/kubernetes/manifests/kube-apiserver.yaml
    - --enable-bootstrap-token-auth=true
    - --authorization-mode=Node,RBAC
Add:
- --basic_auth_file=/etc/kubernetes/pki/basic_auth_file
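After the edit, the relevant flags in /etc/kubernetes/manifests/kube-apiserver.yaml would look roughly like this (only the two lines from the grep above plus the new one are shown; the rest of the manifest is unchanged):
    - --enable-bootstrap-token-auth=true
    - --authorization-mode=Node,RBAC
    - --basic_auth_file=/etc/kubernetes/pki/basic_auth_file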
Note!!! If at this point you directly run kubectl apply -f xxxxxxxxx to update the kube-apiserver.yaml file, the following error appears:
The connection to the server 192.168.2.40:6443 was refused - did you specify the right host or port?
Solution:
Before running kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml, first run systemctl daemon-reload and then systemctl restart kubelet, and confirm the restart succeeded:
# kubectl get node
# kubectl get pod --all-namespaces
5) Apply the updated /etc/kubernetes/manifests/kube-apiserver.yaml
[root@master manifests]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
pod "kube-apiserver" created
6) From k8s 1.6 on, the RBAC authorization model is used. By default cluster-admin holds full privileges; bind admin to the cluster-admin role with a clusterrolebinding so that admin gains cluster-admin's privileges.
[root@master ~]# kubectl create clusterrolebinding login-on-dashboard-with-cluster-admin --clusterrole=cluster-admin --user=admin
clusterrolebinding "login-on-dashboard-with-cluster-admin" created
Check that the cluster information is retrieved correctly:
# kubectl get clusterrolebinding/login-on-dashboard-with-cluster-admin -o yaml
7) Check the pod status; everything is now Running
[root@master k8s_images]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                            1/1       Running   0          9m
kube-system   kube-apiserver-master                  1/1       Running   0          9m
kube-system   kube-controller-manager-master         1/1       Running   0          9m
kube-system   kube-dns-6f4fd4bdf-qj7s5               3/3       Running   0          37m
kube-system   kube-flannel-ds-4mvmz                  1/1       Running   0          9m
kube-system   kube-proxy-67jq2                       1/1       Running   0          37m
kube-system   kube-scheduler-master                  1/1       Running   0          9m
kube-system   kubernetes-dashboard-58f5cb49c-xsqf5   1/1       Running   0          32s
8) Test the connection
[root@master ~]# curl --insecure https://master:6443 -basic -u admin:admin
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    "/apis/admissionregistration.k8s.io",
    "/apis/admissionregistration.k8s.io/v1beta1",
    "/apis/apiextensions.k8s.io",
    "/apis/apiextensions.k8s.io/v1beta1",
    "/apis/apiregistration.k8s.io",
    "/apis/apiregistration.k8s.io/v1beta1",
    "/apis/apps",
    "/apis/apps/v1",
    "/apis/apps/v1beta1",
    "/apis/apps/v1beta2",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2beta1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v1beta1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/events.k8s.io",
    "/apis/events.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/networking.k8s.io",
    "/apis/networking.k8s.io/v1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/autoregister-completion",
    "/healthz/etcd",
    "/healthz/ping",
    "/healthz/poststarthook/apiservice-openapi-controller",
    "/healthz/poststarthook/apiservice-registration-controller",
    "/healthz/poststarthook/apiservice-status-available-controller",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/healthz/poststarthook/kube-apiserver-autoregistration",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/healthz/poststarthook/start-apiextensions-controllers",
    "/healthz/poststarthook/start-apiextensions-informers",
    "/healthz/poststarthook/start-kube-aggregator-informers",
    "/healthz/poststarthook/start-kube-apiserver-informers",
    "/logs",
    "/metrics",
    "/swagger-2.0.0.json",
    "/swagger-2.0.0.pb-v1",
    "/swagger-2.0.0.pb-v1.gz",
    "/swagger.json",
    "/swaggerapi",
    "/ui",
    "/ui/",
    "/version"
  ]
}
9) Test access with Firefox (Chrome is not recommended); because the certificate is self-signed, the browser will complain that the certificate is untrusted.
Note: the 1.8 dashboard has a built-in exec console (equivalent to running kubectl exec -it etcd-vm1 -n kube-system /bin/sh), which is quite convenient.
8. Node operations (perform on both node servers)
1) On node-1 and node-2, change the cgroup driver in the kubelet configuration file from systemd to cgroupfs
#vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
#systemctl daemon-reload
#systemctl enable kubelet && systemctl restart kubelet
2) Join node-1 and node-2 to the cluster, using the kubeadm join --xxx command produced by kubeadm init on the master
#kubeadm join --token d0c1ec.7d7a61a4e9ba83f8 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:7b38dad17cd1378446121952632d78d041dfcddc27b4663d011113a3b6326a65
3) Check on the master node
[root@master k8s_images]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    1h        v1.9.0
node-1    Ready     <none>    1m        v1.9.0
node-2    Ready     <none>    58s       v1.9.0
4) Test the cluster
On the master node, issue a request to create an application named httpd-app, using the httpd image, with two replica pods.
[root@master k8s_images]# kubectl run httpd-app --image=httpd --replicas=2
deployment "httpd-app" created
[root@master k8s_images]# kubectl get deployment
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
httpd-app   2         2         2            0           1m
[root@master k8s_images]# kubectl get pods -o wide
NAME                         READY     STATUS    RESTARTS   AGE       IP           NODE
httpd-app-5fbccd7c6c-5j5zb   1/1       Running   0          3m        10.224.2.2   node-2
httpd-app-5fbccd7c6c-rnkcm   1/1       Running   0          3m        10.224.1.2   node-1
Because the resource created is not a Service, kube-proxy is not involved; test by accessing the pods directly:
#curl http://10.224.2.2
#curl http://10.224.1.2
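To also test access through kube-proxy, the deployment could additionally be exposed as a Service (a sketch, not part of the original steps; the assigned NodePort will vary, so substitute the real values for the placeholders):
#kubectl expose deployment httpd-app --port=80 --type=NodePort
#kubectl get svc httpd-app
#curl http://<node-ip>:<assigned-nodeport>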
Delete the httpd-app application:
[root@master ~]# kubectl delete deployment httpd-app
[root@master ~]# kubectl get pods
At this point the basic Kubernetes cluster installation is complete.
1. If the master in the cluster is re-initialized and node nodes had previously joined it, running kubeadm join --token xxxx on one of the old node nodes produces the following error:
[root@node-1 ~]# kubeadm join --token 6540e9.c83615e67d622766 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:34dd77dc3b800a93ffb5fc27b9d7d1e28118f7bb51b0b630afe1153ebcd4f4b8
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Solution: when the cluster is re-initialized, the existing nodes must also run the reset command before they can rejoin the cluster.
[root@node-1 ~]# kubeadm reset
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] No etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
Rejoining succeeds:
[root@node-1 ~]# kubeadm join --token 6540e9.c83615e67d622766 192.168.2.40:6443 --discovery-token-ca-cert-hash sha256:34dd77dc3b800a93ffb5fc27b9d7d1e28118f7bb51b0b630afe1153ebcd4f4b8
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.2.40:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.2.40:6443"
[discovery] Requesting info from "https://192.168.2.40:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.2.40:6443"
[discovery] Successfully established connection with API Server "192.168.2.40:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.