Recently a friend told me that his company is moving to the cloud and migrating all of their services to Kubernetes. That suddenly made us feel a bit behind the times: our servers have always just run Docker. Moving to k8s didn't seem like it should be a big deal, so we started our own migration.
I went through quite a few documents along the way; if you are interested, the originals are worth reading:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://blog.csdn.net/networken/article/details/84991940
https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-with-kubeadm.html
Hostname | IP           | Role
kmaster  | 192.168.9.88 | master
knode1   | 192.168.9.81 | node
knode2   | 192.168.9.82 | node
Run the following on each of the three machines:
cat >> /etc/hosts <<EOF
192.168.9.88 kmaster
192.168.9.81 knode1
192.168.9.82 knode2
EOF
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
For example, on knode1:
[root@knode1 ~]# cat >> /etc/hosts <<EOF
> 192.168.9.88 kmaster
> 192.168.9.81 knode1
> 192.168.9.82 knode2
> EOF
[root@knode1 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0
[root@knode1 ~]# swapoff -a
[root@knode1 ~]# yes | cp /etc/fstab /etc/fstab_bak
[root@knode1 ~]# cat /etc/fstab_bak |grep -v swap > /etc/fstab
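A quick way to confirm that swap really is off and will stay off after the fstab edit (a simple check, not part of the original steps):
# all Swap totals should read 0 after swapoff -a
free -h | grep -i swap
# and the swap entry should be gone from fstab
grep swap /etc/fstab || echo "no swap entry in fstab"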
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Run the script and confirm the modules loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# Enable IPVS for kube-proxy (install the user-space tools)
yum install ipset ipvsadm -y
Set up the Docker yum repository
# Configure the Docker yum repository (yum-config-manager is provided by the yum-utils package)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install a pinned version; here we install 18.06
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl start docker && systemctl enable docker
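Before moving on, it can be worth checking what kubeadm will see from Docker, since its preflight checks warn if the version or cgroup driver looks unexpected. A small sanity check, assuming the default docker info output format:
# confirm the Docker server version and cgroup driver
docker info | grep -E 'Server Version|Cgroup Driver'
# confirm Docker is enabled on boot
systemctl is-enabled docker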
Install kubeadm, kubelet and kubectl
# Configure the kubernetes.repo yum source; the official repo is unreachable from inside China, so use the Aliyun mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Install the pinned versions of kubelet, kubeadm and kubectl on all nodes
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
# Enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet
Deploy the master
kubeadm init \
  --apiserver-advertise-address=192.168.9.88 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.13.1 \
  --pod-network-cidr=10.244.0.0/16
Note that the init command uses the --image-repository option, so the images needed for initialization are pulled from the Aliyun mirror registry.
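As the preflight output below also hints, the control-plane images can be pulled ahead of time so that kubeadm init itself spends less time downloading. A sketch, assuming the same Aliyun mirror and version as above, and that this kubeadm build accepts --image-repository on the images subcommands:
# list the images kubeadm needs for v1.13.1
kubeadm config images list --kubernetes-version v1.13.1
# pre-pull them from the Aliyun mirror
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1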
This step takes a while. If the output looks like the following, the initialization succeeded:
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.9.88]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.9.88 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.9.88 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.007105 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kmaster" as an annotation
[mark-control-plane] Marking the node kmaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: hpvjuo.divmu5zdcqb7oysy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.9.88:6443 --token hpvjuo.divmu5zdcqb7oysy --discovery-token-ca-cert-hash sha256:a5e36c51c68ad1f1e07286c8c9c58bf5b8794c25182b18b15c1dcb6e99462eb2
# Create a regular user with password 123456
useradd k8s && echo "k8s:123456" | chpasswd
# Grant sudo privileges, with passwordless sudo
sed -i '/^root/a\k8s ALL=(ALL) NOPASSWD:ALL' /etc/sudoers
[root@kmaster ~]# su - k8s
[k8s@kmaster ~]$ mkdir -p $HOME/.kube
[k8s@kmaster ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@kmaster ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[k8s@kmaster ~]$ # Enable kubectl command auto-completion (takes effect after logging out and back in)
[k8s@kmaster ~]$ echo "source <(kubectl completion bash)" >> ~/.bashrc
[k8s@kmaster ~]$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
kmaster   NotReady   master   6m51s   v1.13.1
[k8s@kmaster ~]$ kubectl describe node kmaster
Name:               kmaster
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=kmaster
                    node-role.kubernetes.io/master=
Check the pods:
[k8s@kmaster ~]$ kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-78d4cf999f-l9f7v          0/1     Pending   0          2m32s   <none>         <none>    <none>           <none>
coredns-78d4cf999f-n8g4g          0/1     Pending   0          2m32s   <none>         <none>    <none>           <none>
etcd-kmaster                      1/1     Running   0          6m51s   192.168.9.88   kmaster   <none>           <none>
kube-apiserver-kmaster            1/1     Running   0          6m48s   192.168.9.88   kmaster   <none>           <none>
kube-controller-manager-kmaster   1/1     Running   0          6m54s   192.168.9.88   kmaster   <none>           <none>
kube-proxy-57lvg                  1/1     Running   0          7m32s   192.168.9.88   kmaster   <none>           <none>
kube-scheduler-kmaster            1/1     Running   0          6m45s   192.168.9.88   kmaster   <none>           <none>
Deploy the pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[k8s@kmaster ~]$ kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
coredns-78d4cf999f-l9f7v          1/1     Running   0          9m18s   10.244.0.3     kmaster   <none>           <none>
coredns-78d4cf999f-n8g4g          1/1     Running   0          9m18s   10.244.0.2     kmaster   <none>           <none>
etcd-kmaster                      1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-apiserver-kmaster            1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-controller-manager-kmaster   1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-flannel-ds-amd64-dkb2t       1/1     Running   0          2m44s   192.168.9.88   kmaster   <none>           <none>
kube-proxy-57lvg                  1/1     Running   0          14m     192.168.9.88   kmaster   <none>           <none>
kube-scheduler-kmaster            1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
At this point the Kubernetes master node is fully deployed. If all you need is a single-node Kubernetes cluster, you can start using it now.
Join the worker nodes
On each node, run the kubeadm join command printed by kubeadm init; for example, on knode1:
[root@knode1 ~]# kubeadm join 192.168.9.88:6443 --token hpvjuo.divmu5zdcqb7oysy --discovery-token-ca-cert-hash sha256:a5e36c51c68ad1f1e07286c8c9c58bf5b8794c25182b18b15c1dcb6e99462eb2
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.9.88:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.9.88:6443"
[discovery] Requesting info from "https://192.168.9.88:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.9.88:6443"
[discovery] Successfully established connection with API Server "192.168.9.88:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "knode1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
# If the join command printed by kubeadm init was not recorded, it can be regenerated on the master with:
kubeadm token create --print-join-command
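If only the token was lost, the discovery hash can also be recomputed from the cluster CA on the master. This openssl pipeline comes from the kubeadm documentation and assumes the default PKI path /etc/kubernetes/pki/ca.crt:
# recompute the value for --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'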
Check the node status:
[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
kmaster   Ready      master   19m     v1.13.1
knode1    NotReady   <none>   2m5s    v1.13.1
knode2    NotReady   <none>   2m9s    v1.13.1
Wait a moment, then check again:
[k8s@kmaster ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
kmaster   Ready    master   24m     v1.13.1
knode1    Ready    <none>   7m46s   v1.13.1
knode2    Ready    <none>   7m50s   v1.13.1
[k8s@kmaster ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-78d4cf999f-l9f7v          1/1     Running   0          20m     10.244.0.3     kmaster   <none>           <none>
kube-system   coredns-78d4cf999f-n8g4g          1/1     Running   0          20m     10.244.0.2     kmaster   <none>           <none>
kube-system   etcd-kmaster                      1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-apiserver-kmaster            1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-controller-manager-kmaster   1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-flannel-ds-amd64-44x4d       1/1     Running   0          8m48s   192.168.9.81   knode1    <none>           <none>
kube-system   kube-flannel-ds-amd64-465pk       1/1     Running   2          8m51s   192.168.9.82   knode2    <none>           <none>
kube-system   kube-flannel-ds-amd64-dkb2t       1/1     Running   0          13m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-proxy-4rgz9                  1/1     Running   0          8m48s   192.168.9.81   knode1    <none>           <none>
kube-system   kube-proxy-57lvg                  1/1     Running   0          25m     192.168.9.88   kmaster   <none>           <none>
kube-system   kube-proxy-hbbqj                  1/1     Running   0          8m51s   192.168.9.82   knode2    <none>           <none>
kube-system   kube-scheduler-kmaster            1/1     Running   0          24m     192.168.9.88   kmaster   <none>           <none>
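With every node Ready, a quick smoke test confirms that scheduling and the flannel pod network work across nodes. This is a minimal sketch that is not part of the original walkthrough; the nginx-test name and the nginx image are arbitrary choices and can be removed afterwards:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
EOF
# the pods should be scheduled on knode1/knode2 (the master is still tainted) and get 10.244.x.x addresses
kubectl get pods -l app=nginx-test -o wide
# clean up when done
kubectl delete deployment nginx-test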
By default the master carries the node-role.kubernetes.io/master:NoSchedule taint (added by kubeadm init above), so ordinary pods are not scheduled on it. To let the master also run workloads, remove the taint:
[k8s@kmaster ~]$ kubectl taint node kmaster node-role.kubernetes.io/master-
node/kmaster untainted
To restore the master-only behaviour, re-add the taint:
kubectl taint node kmaster node-role.kubernetes.io/master="":NoSchedule
Switch kube-proxy to IPVS mode: edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
[k8s@kmaster ~]$ kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
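For reference, the change lives in the config.conf key of that ConfigMap: the mode field is empty by default (which means iptables) and needs to become "ipvs". A hedged way to locate and verify it from the command line:
# locate the proxy mode setting inside the kube-proxy ConfigMap
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n 'mode:'
# after the edit, the line should read:  mode: "ipvs"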
Then delete the kube-proxy pods on every node so they are recreated with the new configuration:
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
[k8s@kmaster ~]$ kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-4rgz9" deleted
pod "kube-proxy-57lvg" deleted
pod "kube-proxy-hbbqj" deleted
[k8s@kmaster ~]$ kubectl logs kube-proxy-6btv9 -n kube-system
I0125 07:52:50.004289       1 server_others.go:189] Using ipvs Proxier.
W0125 07:52:50.004834       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I0125 07:52:50.004997       1 server_others.go:216] Tearing down inactive rules.
I0125 07:52:50.052950       1 server.go:464] Version: v1.13.1
I0125 07:52:50.067533       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0125 07:52:50.067821       1 config.go:102] Starting endpoints config controller
I0125 07:52:50.069525       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0125 07:52:50.069231       1 config.go:202] Starting service config controller
I0125 07:52:50.070363       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0125 07:52:50.169786       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0125 07:52:50.170565       1 controller_utils.go:1034] Caches are synced for service config controller
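The "Using ipvs Proxier." line shows that kube-proxy has switched to IPVS mode. As an extra check, the ipvsadm tool installed earlier can list the virtual servers kube-proxy created (run as root on any node; the exact output depends on your services):
# list the IPVS virtual servers and their backends
ipvsadm -Ln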