Kubernetes 1.16 brings noticeable improvements in stability and usability. In particular, the volume-expansion API for backend PVs has been promoted to beta, which makes managing stateful Pods with persistent storage more convenient and better suited to production. Below is a quick deployment walkthrough for K8s 1.16.2.
Hostname | IP |
---|---|
k8s-master | 192.168.20.70 |
k8s-worker-1 | 192.168.20.71 |
k8s-worker-2 | 192.168.20.72 |
Component | Version |
---|---|
CentOS 7 | kernel 4.4.178 |
docker | 18.09.5 |
k8s | 1.16.2 |
```
# cat /etc/sysctl.d/k8s.conf
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_port_range = 10000 65000
fs.file-max = 2000000
vm.swappiness = 0
```
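For the `net.bridge.*` keys to exist, the br_netfilter module must be loaded before sysctl is reloaded; a minimal sketch:

```bash
# Load the bridge netfilter module (provides the net.bridge.* sysctl keys)
modprobe br_netfilter
# Reload all settings under /etc/sysctl.d/
sysctl --system
```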
```
# ll /tmp/
total 33512
-rw-r--r-- 1 root root 19623520 Apr 18  2019 docker-ce-18.09.5-3.el7.x86_64.rpm
-rw-r--r-- 1 root root 14689524 Apr 18  2019 docker-ce-cli-18.09.5-3.el7.x86_64.rpm

mv docker-ce.repo /etc/yum.repos.d/
yum install docker-ce-*
```
```bash
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
```
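After the restart it is worth confirming that Docker has picked up the systemd cgroup driver, since kubeadm's preflight checks warn when kubelet and Docker disagree:

```bash
# Should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i 'cgroup driver'
```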
```bash
update-alternatives --set iptables /usr/sbin/iptables-legacy
```
1. Configure a China-local mirror repo:
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
2. Install kubelet, kubeadm, and kubectl, and start the kubelet service:

```bash
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
```
echo "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" > /etc/sysconfig/kubelet
The following steps are performed on the master node.
```bash
#!/bin/bash
# Pull the control-plane images from a mirror and retag them as k8s.gcr.io
images=(
    kube-apiserver-amd64:v1.16.2
    kube-controller-manager-amd64:v1.16.2
    kube-scheduler-amd64:v1.16.2
    kube-proxy-amd64:v1.16.2
    pause-amd64:3.1
    coredns-amd64:1.6.2
    etcd:3.3.15-0
)
for image in ${images[@]}; do
    imageName=$(echo $image | sed 's/-amd64//g')
    docker pull mirrorgooglecontainers/$image
    docker tag mirrorgooglecontainers/$image k8s.gcr.io/$imageName
    docker rmi mirrorgooglecontainers/$image
done
```
```bash
kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=ImagePull
```
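Since the images were pre-pulled above, kubeadm should not need to reach k8s.gcr.io. Optionally, the control-plane version can also be pinned with the `--kubernetes-version` flag so kubeadm does not try to resolve the latest stable release:

```bash
# Optional variant: pin the exact control-plane version
kubeadm init --kubernetes-version=v1.16.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=ImagePull
```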
```
[root@k8s-master ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      6175/sshd
tcp        0      0 127.0.0.1:21925         0.0.0.0:*               LISTEN      6188/containerd
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      8681/kubelet
tcp        0      0 127.0.0.1:19944         0.0.0.0:*               LISTEN      8681/kubelet
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      9253/kube-proxy
tcp        0      0 192.168.20.70:2379      0.0.0.0:*               LISTEN      9053/etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      9053/etcd
tcp        0      0 192.168.20.70:2380      0.0.0.0:*               LISTEN      9053/etcd
tcp        0      0 127.0.0.1:2381          0.0.0.0:*               LISTEN      9053/etcd
tcp        0      0 127.0.0.1:10257         0.0.0.0:*               LISTEN      9006/kube-controlle
tcp        0      0 127.0.0.1:10259         0.0.0.0:*               LISTEN      8989/kube-scheduler
tcp6       0      0 :::22                   :::*                    LISTEN      6175/sshd
tcp6       0      0 :::10250                :::*                    LISTEN      8681/kubelet
tcp6       0      0 :::10251                :::*                    LISTEN      8989/kube-scheduler
tcp6       0      0 :::6443                 :::*                    LISTEN      9098/kube-apiserver
tcp6       0      0 :::10252                :::*                    LISTEN      9006/kube-controlle
tcp6       0      0 :::10256                :::*                    LISTEN      9253/kube-proxy
```
```bash
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
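A quick sanity check that kubectl can now talk to the API server:

```bash
# Both should answer without certificate or connection errors
kubectl cluster-info
kubectl get componentstatuses
```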
Before a network plugin is installed, the master node stays in the NotReady state:
```
# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   16m   v1.16.2
```
```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```
The node now shows Ready:
```
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   25m   v1.16.2
```
Pull the corresponding images on the worker nodes:
```bash
docker pull registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-coredns:1.6.2
docker tag registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
docker tag registry.cn-hangzhou.aliyuncs.com/openthings/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
```
To join worker nodes, first look up the bootstrap token on the master:

```
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
orof8e.2u2qtt10j4p4lnx9   20h   2019-10-25T16:28:28+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
```
Compute the CA certificate hash used for `--discovery-token-ca-cert-hash`:

```bash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
```
Tip: if the join command reports that the token has expired, run `kubeadm token create` on the master, as prompted, to generate a new one.
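kubeadm can also regenerate the complete join command, fresh token and CA hash included, in one step:

```bash
# Prints a ready-to-run "kubeadm join ..." line with a new token
kubeadm token create --print-join-command
```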
```bash
kubeadm join 192.168.20.70:6443 --token orof8e.2u2qtt10j4p4lnx9 \
  --discovery-token-ca-cert-hash sha256:c752a1110d36d9bda79672d0d31425dfe113b9691bf3d2dc7123ac36b271e858
```
Running this same command on multiple machines adds each of them as a worker node.
```
[root@k8s-master ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master     Ready    master   3h45m   v1.16.2
k8s-worker-1   Ready    <none>   27s     v1.16.2
```
Label the new nodes with the worker role:

```bash
kubectl label node k8s-worker-1 node-role.kubernetes.io/worker=worker
kubectl label node k8s-worker-2 node-role.kubernetes.io/worker=worker
```
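If a label ever needs to be removed, the same subcommand with a trailing dash does it:

```bash
# Remove the worker role label from k8s-worker-1 (note the trailing "-")
kubectl label node k8s-worker-1 node-role.kubernetes.io/worker-
```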
Check the node status:
```
[root@k8s-master ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master     Ready    master   3h52m   v1.16.2
k8s-worker-1   Ready    worker   7m19s   v1.16.2
k8s-worker-2   Ready    worker   4m48s   v1.16.2
```
1. Download the metrics-server YAML files from https://github.com/AndySkyL/k8s/tree/master/k8s_deploy/k8s-1.16/kubeadm-deploy/metric-server
2. Create the resources from the directory containing the YAML files:

```bash
kubectl create -f ./
```
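Before checking the apiservice, it helps to confirm that the metrics-server pod itself came up:

```bash
# The pod should reach Running within a minute or two
kubectl get pods -n kube-system | grep metrics-server
```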
3. Verify the apiservice and test resource metrics:

```
# kubectl get apiservices | grep 'metrics'
v1beta1.metrics.k8s.io    kube-system/metrics-server   True    20m

# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master     250m         12%    1230Mi          48%
k8s-worker-1   80m          4%     575Mi           14%
k8s-worker-2   77m          3%     524Mi           13%
```
==K8s 1.16 requires Dashboard v2, which uses metrics-server by default; deploying the v1 dashboard will produce errors.==
```bash
# Create a directory to hold the certificates
mkdir key && cd key
# Create the Dashboard namespace
kubectl create namespace kubernetes-dashboard
# Generate a private key
openssl genrsa -out dashboard.key 2048
# Generate a certificate signing request
openssl req -new -key dashboard.key -out dashboard.csr -days 3650 -subj "/O=k8s/CN=dashboard"
# Generate a self-signed certificate
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
```
```
[root@k8s-master key]# ll
total 12
-rw-r--r-- 1 root root  993 Oct 25 14:23 dashboard.crt
-rw-r--r-- 1 root root  899 Oct 25 14:23 dashboard.csr
-rw-r--r-- 1 root root 1679 Oct 25 14:21 dashboard.key
```
Create the secret holding the certificates in the new namespace:

```bash
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key \
  --from-file=dashboard.crt -n kubernetes-dashboard
```
```bash
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml

# Modify the Service section as follows (type: NodePort is required
# for the nodePort to take effect):
...
spec:
  type: NodePort
  ports:
    - nodePort: 30727
      port: 443
      protocol: TCP
      targetPort: 8443
...

# Deploy the dashboard
kubectl create -f recommended.yaml
```
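With the NodePort above, the dashboard should answer on any node IP; a quick reachability check (the self-signed certificate requires `-k`):

```bash
# 192.168.20.70 is the master from the host table above; any node IP works
curl -k https://192.168.20.70:30727/
```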
```yaml
# cat admin-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```
Apply the manifest, then retrieve the admin-user login token:

```bash
kubectl create -f admin-account.yaml
kubectl describe secret -n kubernetes-dashboard \
  $(kubectl -n kubernetes-dashboard get secret | grep admin | awk '{print $1}')
```
Without IPVS installed, the cluster falls back to the iptables proxy mode, so the kube-proxy configuration needs to be customized here.
```bash
yum install -y ipvsadm ipset conntrack
```
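kube-proxy can only switch to IPVS mode when the relevant kernel modules are loaded; a sketch of the usual set on a 4.x kernel (module names may differ on newer kernels):

```bash
# Load the IPVS scheduler modules plus connection tracking
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
# Confirm they are present
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```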
Switch the kube-proxy ConfigMap from iptables to ipvs mode:

```bash
kubectl get cm kube-proxy -n kube-system -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -
```
Delete the existing kube-proxy pods so they restart with the new mode:

```bash
for i in $(kubectl get po -n kube-system | awk '/kube-proxy/ {print $1}'); do
    kubectl delete po $i -n kube-system
done
```
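Once the new kube-proxy pods are up, IPVS mode can be verified from the virtual server table and the pod logs:

```bash
# Virtual servers for the cluster Services should now be listed
ipvsadm -Ln
# The restarted kube-proxy should log "Using ipvs Proxier"
kubectl logs -n kube-system \
  $(kubectl get po -n kube-system | awk '/kube-proxy/ {print $1; exit}') | grep -i proxier
```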
If metrics-server starts and its apiservice looks healthy, but the pod log reports errors like the following (the node hostnames cannot be resolved):
```
unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-worker-1: unable to fetch metrics from Kubelet k8s-worker-1 (k8s-worker-1): Get https://k8s-worker-1:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-worker-1 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-worker-2: unable to fetch metrics from Kubelet k8s-worker-2 (k8s-worker-2): Get https://k8s-worker-2:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-worker-2 on 10.96.0.10:53: no such host]
```
Solution:
Confirm that the metrics-server Deployment includes the `--kubelet-preferred-address-types=InternalIP` flag, so nodes are scraped by IP instead of hostname:
```yaml
image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
```
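After editing the Deployment, reapply the manifest and re-test (the file name below is a placeholder for whichever metrics-server manifest you downloaded earlier):

```bash
# Reapply the edited manifest; metrics-server-deployment.yaml is a placeholder name
kubectl apply -f metrics-server-deployment.yaml
# Metrics should appear again after a scrape interval or two
kubectl top node
```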