k8s is short for Kubernetes, a name you are probably already familiar with. Kubernetes is a portable container orchestration and management tool built for containerized services. More and more companies are embracing it, it already dominates cloud workflows, and it has driven the adoption and roll-out of hot technologies such as microservice architecture. So what exactly is the magic behind k8s, the ruler of the container world?
Let's start with container technology. Before containers, development relied mostly on virtual machines such as VMware and OpenStack. A VM lets us simulate several child machines (Linux) inside our operating system, isolated from one another, but for developers and operators VMs start slowly, take up a lot of space and are hard to migrate. A scenario I experienced myself: we once built an offline platform inside VMware, and to make sure it could always be used we exported the VM as an OVF file, carried it around, and deployed it onto a server whenever it was needed — which illustrates the drawbacks of virtual machines rather well.
Containerization then emerged as the answer. It does not virtualize an entire operating system, only a small isolated environment; it starts quickly and, apart from the application running inside it, consumes essentially no extra system resources. Docker is the most widely used container technology: you package an image and start containers from it to create a service. But as applications grew more complex and the number of containers multiplied, managing and operating those containers became a serious problem — and with the rise of cloud computing, containers drifting between hosts became the biggest challenge in the cloud. Driven by these needs, k8s was born, offering a brand-new, leading distributed architecture built on container technology — a major breakthrough and innovation for the whole container field.
So, what does k8s actually deliver?
At the architecture-design level, availability and scalability can both be addressed very well with k8s, and if you want a microservice architecture, pairing it with k8s is a perfect match. At the deployment-and-operations level, k8s provides solid solutions for service deployment, service monitoring, application scaling and failure handling.
具體來講,主要包括如下幾點:express
總而言之,k8s可使咱們應用的部署和運維更加方便apache
kubeadm is the cluster-bootstrapping tool that ships with the Kubernetes project. It performs the essential steps needed to build a minimal usable cluster and bring it up, and it manages the full lifecycle of a Kubernetes cluster: deployment, upgrade, downgrade and teardown. A kubeadm-deployed cluster runs most components as pods — for example kube-proxy, kube-controller-manager, kube-scheduler, kube-apiserver and flannel all run as pods.
kubeadm only concerns itself with initializing and starting the cluster. Everything else — the Kubernetes Dashboard, monitoring, logging and other necessary add-ons — is outside its scope and must be deployed by the administrator.
kubeadm bundles utilities such as kubeadm init and kubeadm join. kubeadm init performs the quick initialization of a cluster, its core job being to deploy the components of the Master node, while kubeadm join quickly adds a node to an existing cluster; together they form the best-practice "fast path" for creating a Kubernetes cluster. In addition, kubeadm token manages the authentication tokens used to join the cluster after it has been built, and kubeadm reset removes the files generated during cluster construction, returning the host to its initial state.
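To make the workflow concrete, here is a minimal sketch of the typical kubeadm lifecycle — the config file name and placeholder values are illustrative only; the exact commands used in this installation appear later in the post:

# on the first master: bootstrap the control plane
kubeadm init --config kubeadm-config.yaml --upload-certs
# on every other node: join the cluster using the command printed by init
kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# manage join tokens after the cluster is up
kubeadm token list
kubeadm token create --print-join-command
# wipe a node back to its pre-init state
kubeadm reset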
The kubeadm installation procedure has barely changed since version 1.14, so this post can be used to install the latest k8s cluster; CentOS 7.x is used here.
Table 1-1 High-availability Kubernetes cluster plan
Hostname | IP address | Description |
---|---|---|
k8s-master01 ~ 03 | 192.168.40.100 ~ 102 | master nodes * 3 |
k8s-master-lb | 192.168.40.200 | keepalived virtual IP |
k8s-node01 ~ 02 | 192.168.40.110 ~ 111 | worker nodes * 2 |
Configuration | Note |
---|---|
OS version | CentOS 7.9 |
Docker version | 19.03.x |
Pod CIDR | 172.168.0.0/12 |
Service CIDR | 10.96.0.0/12 |
Configure hosts on all nodes by modifying /etc/hosts as follows:
[root@k8s-master01 ~]# cat /etc/hosts
192.168.40.100 k8s-master01
192.168.40.101 k8s-master02
192.168.40.102 k8s-master03
192.168.40.200 k8s-master-lb # if this is not a high-availability cluster, use Master01's IP here
192.168.40.110 k8s-node01
192.168.40.111 k8s-node02
Configure the yum repositories (Aliyun mirrors) on all nodes:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install the required tools:
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
crontab -e
# add the following line:
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
ulimit -SHn 65535

vim /etc/security/limits.conf
# append the following at the end
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
ssh-keygen -t rsa
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
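As a quick, optional check that key-based login now works from master01 to every node:

for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh $i hostname;done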
yum update -y --exclude=kernel* && reboot
Download the 4.19 kernel packages on the master01 node:
cd /root
wget https://mirror.rackspace.com/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget https://mirror.rackspace.com/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Copy the kernel packages from the master01 node to the other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
cd /root && yum localinstall -y kernel-ml*
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64

[root@k8s-master01 ~]# uname -a
Linux k8s-master01 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri One 20 18:33:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
yum install ipvsadm ipset sysstat conntrack libseccomp -y
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
On kernel 4.19 and above, configure the ipvs modules to load at boot:
vim /etc/modules-load.d/ipvs.conf
# add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then run:
systemctl enable --now systemd-modules-load.service
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
After the kernel parameters are configured, reboot all nodes and confirm that the modules are still loaded after the reboot:
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
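Optionally, spot-check a couple of the parameters from the file above to confirm they were applied after the reboot:

sysctl net.ipv4.ip_forward net.netfilter.nf_conntrack_max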
This section installs the components used by the cluster, such as Docker-ce and the Kubernetes components.
Install Docker-ce 19.03 on all nodes:
yum install docker-ce-19.03.* -y
Friendly reminder: newer kubelet versions recommend systemd, so change Docker's CgroupDriver to systemd:
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl enable --now docker
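A quick way to confirm the CgroupDriver change took effect:

docker info | grep -i cgroup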
yum list kubeadm.x86_64 --showduplicates | sort -r
yum install kubeadm-1.20.2.* -y
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
systemctl daemon-reload
systemctl enable --now kubelet
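To confirm the installed versions (the kubelet will keep restarting until kubeadm init runs, which is expected at this point):

kubeadm version
kubelet --version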
yum install keepalived haproxy -y
All Master nodes configure HAProxy (the configuration is identical on every Master node):
[root@k8s-master01 ~]# vim /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s

defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s

frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor

frontend k8s-master
  bind 0.0.0.0:16443
  bind 127.0.0.1:16443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master

backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server k8s-master01  192.168.40.100:6443  check
  server k8s-master02  192.168.40.101:6443  check
  server k8s-master03  192.168.40.102:6443  check
All Master nodes configure KeepAlived. The configuration is different on each node — pay attention to each node's IP and network interface (the interface parameter). Master01 node configuration:
[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    mcast_src_ip 192.168.40.100
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.40.200
    }
    track_script {
        chk_apiserver
    }
}
Master02 node configuration:
[root@k8s-master02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.40.101
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.40.200
    }
    track_script {
        chk_apiserver
    }
}
Master03 node configuration:
[root@k8s-master03 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    mcast_src_ip 192.168.40.102
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.40.200
    }
    track_script {
        chk_apiserver
    }
}
All master nodes configure the KeepAlived health-check script:
[root@k8s-master01 keepalived]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
Grant execute permission on all master nodes:
chmod +x /etc/keepalived/check_apiserver.sh
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now haproxy
[root@k8s-master01 ~]# systemctl enable --now keepalived
Important: after installing keepalived and haproxy, test that keepalived works correctly before continuing.
Test the VIP:
[root@k8s-master01 ~]# ping 192.168.40.200 -c 4
PING 192.168.40.200 (192.168.40.200) 56(84) bytes of data.
64 bytes from 192.168.40.200: icmp_seq=1 ttl=64 time=0.464 ms
64 bytes from 192.168.40.200: icmp_seq=2 ttl=64 time=0.063 ms
64 bytes from 192.168.40.200: icmp_seq=3 ttl=64 time=0.062 ms
64 bytes from 192.168.40.200: icmp_seq=4 ttl=64 time=0.063 ms

--- 192.168.40.200 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3106ms
rtt min/avg/max/mdev = 0.062/0.163/0.464/0.173 ms

[root@k8s-master01 ~]# telnet 192.168.40.200 16443
Trying 192.168.40.200...
Connected to 192.168.40.200.
Escape character is '^]'.
Connection closed by foreign host.
If the VIP cannot be pinged, or telnet does not show the ] escape prompt, consider the VIP unusable and do not continue; troubleshoot keepalived first — for example the firewall and selinux, the status of haproxy and keepalived, and the listening ports.
On all nodes the firewall status must be disabled and inactive: systemctl status firewalld
On all nodes selinux must be disabled: getenforce
On the master nodes check the haproxy and keepalived status: systemctl status keepalived haproxy
On the master nodes check the listening ports: netstat -lntp
Create the kubeadm-config.yaml configuration file on the Master01 node as follows:
Master01: (note — if this is not a high-availability cluster, change 192.168.40.200:16443 to Master01's address and 16443 to the apiserver port, which defaults to 6443; also make sure the kubernetesVersion in the file matches the kubeadm version installed on your servers, which you can check with kubeadm version)
[root@k8s-master01 keepalived]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.40.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.40.200
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.40.200:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.168.0.0/12
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Migrate the kubeadm configuration file to the latest format:
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
Pull the required images in advance to speed up initialization:
kubeadm config images pull --config /root/new.yaml
systemctl enable --now kubelet (if it fails to start, ignore it; it will start once initialization succeeds)
Initialize the Master01 node; --upload-certs uploads the control-plane certificates so the other master nodes can fetch them when they join:
kubeadm init --config /root/new.yaml --upload-certs
If initialization fails, reset and then initialize again with:
kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
If initialization succeeds, output like the following is printed; note the two kubeadm join commands at the end, which are used to add the remaining nodes:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.40.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:a331f5be163bf8c31f0216f37d9adc221daafd9e10ad835e34671605c51d3093 \
    --control-plane --certificate-key e99ca34b778bb14b78698b8bbaffce9fac88862095f906d57f1a8778ac904dbe

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.40.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:a331f5be163bf8c31f0216f37d9adc221daafd9e10ad835e34671605c51d3093
Configure the environment variable on Master01 so kubectl can talk to the cluster:
cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE   VERSION
k8s-master01   NotReady   control-plane,master   74s   v1.20.2
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE
coredns-777d78ff6f-kstsz                0/1     Pending   0          14m   <none>           <none>
coredns-777d78ff6f-rlfr5                0/1     Pending   0          14m   <none>           <none>
etcd-k8s-master01                       1/1     Running   0          14m   192.168.40.100   k8s-master01
kube-apiserver-k8s-master01             1/1     Running   0          13m   192.168.40.100   k8s-master01
kube-controller-manager-k8s-master01   1/1     Running   0          13m   192.168.40.100   k8s-master01
kube-proxy-8d4qc                        1/1     Running   0          14m   192.168.40.100   k8s-master01
kube-scheduler-k8s-master01             1/1     Running   0          13m   192.168.40.100   k8s-master01
Add the other master nodes to the cluster (run the following command on master02 and master03):
kubeadm join 192.168.40.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:a331f5be163bf8c31f0216f37d9adc221daafd9e10ad835e34671605c51d3093 \
    --control-plane --certificate-key e99ca34b778bb14b78698b8bbaffce9fac88862095f906d57f1a8778ac904dbe
若是token過時後(超過2個小時沒加入)就生成新的token,正常不須要:
kubeadm token create --print-join-command
Masters also need a newly generated --certificate-key:
kubeadm init phase upload-certs --upload-certs
Add the worker nodes to the cluster (run on k8s-node01 and k8s-node02):
kubeadm join 192.168.40.200:16443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:a331f5be163bf8c31f0216f37d9adc221daafd9e10ad835e34671605c51d3093
[root@k8s-master01]# kubectl get node
NAME           STATUS     ROLES                  AGE     VERSION
k8s-master01   NotReady   control-plane,master   8m53s   v1.20.2
k8s-master02   NotReady   control-plane,master   2m25s   v1.20.2
k8s-master03   NotReady   control-plane,master   31s     v1.20.2
k8s-node01     NotReady   <none>                 32s     v1.20.2
k8s-node02     NotReady   <none>                 88s     v1.20.2
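Optionally, give the worker nodes a role label so that kubectl get node shows something other than <none> in the ROLES column; this is purely cosmetic, and the label key below is only the common convention, not something the installation requires:

kubectl label node k8s-node01 node-role.kubernetes.io/worker=
kubectl label node k8s-node02 node-role.kubernetes.io/worker=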
The nodes show NotReady because no CNI plugin has been installed yet. Install Calico on the Master01 node only; create calico-etcd.yaml with the following content:
vim calico-etcd.yaml
--- # Source: calico/templates/calico-etcd-secrets.yaml # The following contains k8s Secrets for use with a TLS enabled etcd cluster. # For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/ apiVersion: v1 kind: Secret type: Opaque metadata: name: calico-etcd-secrets namespace: kube-system data: # Populate the following with etcd TLS configuration if desired, but leave blank if # not using TLS for etcd. # The keys below should be uncommented and the values populated with the base64 # encoded contents of each file that would be associated with the TLS data. # Example command for encoding a file contents: cat <file> | base64 -w 0 # etcd-key: null # etcd-cert: null # etcd-ca: null --- # Source: calico/templates/calico-config.yaml # This ConfigMap is used to configure a self-hosted Calico installation. kind: ConfigMap apiVersion: v1 metadata: name: calico-config namespace: kube-system data: # Configure this with the location of your etcd cluster. etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>" # If you're using TLS enabled etcd uncomment the following. # You must also populate the Secret below with these files. etcd_ca: "" # "/calico-secrets/etcd-ca" etcd_cert: "" # "/calico-secrets/etcd-cert" etcd_key: "" # "/calico-secrets/etcd-key" # Typha is disabled. typha_service_name: "none" # Configure the backend to use. calico_backend: "bird" # Configure the MTU to use for workload interfaces and tunnels. # - If Wireguard is enabled, set to your network MTU - 60 # - Otherwise, if VXLAN or BPF mode is enabled, set to your network MTU - 50 # - Otherwise, if IPIP is enabled, set to your network MTU - 20 # - Otherwise, if not using any encapsulation, set to your network MTU. veth_mtu: "1440" # The CNI network configuration to install on each node. The special # values in this config will be automatically populated. cni_network_config: |- { "name": "k8s-pod-network", "cniVersion": "0.3.1", "plugins": [ { "type": "calico", "log_level": "info", "etcd_endpoints": "__ETCD_ENDPOINTS__", "etcd_key_file": "__ETCD_KEY_FILE__", "etcd_cert_file": "__ETCD_CERT_FILE__", "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__", "mtu": __CNI_MTU__, "ipam": { "type": "calico-ipam" }, "policy": { "type": "k8s" }, "kubernetes": { "kubeconfig": "__KUBECONFIG_FILEPATH__" } }, { "type": "portmap", "snat": true, "capabilities": {"portMappings": true} }, { "type": "bandwidth", "capabilities": {"bandwidth": true} } ] } --- # Source: calico/templates/calico-kube-controllers-rbac.yaml # Include a clusterrole for the kube-controllers component, # and bind it to the calico-kube-controllers serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-kube-controllers rules: # Pods are monitored for changing labels. # The node controller monitors Kubernetes nodes. # Namespace and serviceaccount labels are used for policy. - apiGroups: [""] resources: - pods - nodes - namespaces - serviceaccounts verbs: - watch - list - get # Watch for changes to Kubernetes NetworkPolicies. 
- apiGroups: ["networking.k8s.io"] resources: - networkpolicies verbs: - watch - list --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-kube-controllers roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-kube-controllers subjects: - kind: ServiceAccount name: calico-kube-controllers namespace: kube-system --- --- # Source: calico/templates/calico-node-rbac.yaml # Include a clusterrole for the calico-node DaemonSet, # and bind it to the calico-node serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-node rules: # The CNI plugin needs to get pods, nodes, and namespaces. - apiGroups: [""] resources: - pods - nodes - namespaces verbs: - get - apiGroups: [""] resources: - endpoints - services verbs: # Used to discover service IPs for advertisement. - watch - list # Pod CIDR auto-detection on kubeadm needs access to config maps. - apiGroups: [""] resources: - configmaps verbs: - get - apiGroups: [""] resources: - nodes/status verbs: # Needed for clearing NodeNetworkUnavailable flag. - patch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: calico-node roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-node subjects: - kind: ServiceAccount name: calico-node namespace: kube-system --- # Source: calico/templates/calico-node.yaml # This manifest installs the calico-node container, as well # as the CNI plugins and network config on # each master and worker node in a Kubernetes cluster. kind: DaemonSet apiVersion: apps/v1 metadata: name: calico-node namespace: kube-system labels: k8s-app: calico-node spec: selector: matchLabels: k8s-app: calico-node updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 template: metadata: labels: k8s-app: calico-node spec: nodeSelector: kubernetes.io/os: linux hostNetwork: true tolerations: # Make sure calico-node gets scheduled on all nodes. - effect: NoSchedule operator: Exists # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - effect: NoExecute operator: Exists serviceAccountName: calico-node # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. terminationGracePeriodSeconds: 0 priorityClassName: system-node-critical initContainers: # This container installs the CNI binaries # and CNI network config file on each node. - name: install-cni image: registry.cn-beijing.aliyuncs.com/dotbalo/cni:v3.15.3 command: ["/install-cni.sh"] env: # Name of the CNI config file to create. - name: CNI_CONF_NAME value: "10-calico.conflist" # The CNI network config to install on each node. - name: CNI_NETWORK_CONFIG valueFrom: configMapKeyRef: name: calico-config key: cni_network_config # The location of the etcd cluster. - name: ETCD_ENDPOINTS valueFrom: configMapKeyRef: name: calico-config key: etcd_endpoints # CNI MTU Config variable - name: CNI_MTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # Prevents the container from sleeping forever. - name: SLEEP value: "false" volumeMounts: - mountPath: /host/opt/cni/bin name: cni-bin-dir - mountPath: /host/etc/cni/net.d name: cni-net-dir - mountPath: /calico-secrets name: etcd-certs securityContext: privileged: true # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes # to communicate with Felix over the Policy Sync API. 
- name: flexvol-driver image: registry.cn-beijing.aliyuncs.com/dotbalo/pod2daemon-flexvol:v3.15.3 volumeMounts: - name: flexvol-driver-host mountPath: /host/driver securityContext: privileged: true containers: # Runs calico-node container on each Kubernetes node. This # container programs network policy and routes on each # host. - name: calico-node image: registry.cn-beijing.aliyuncs.com/dotbalo/node:v3.15.3 env: # The location of the etcd cluster. - name: ETCD_ENDPOINTS valueFrom: configMapKeyRef: name: calico-config key: etcd_endpoints # Location of the CA certificate for etcd. - name: ETCD_CA_CERT_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_ca # Location of the client key for etcd. - name: ETCD_KEY_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_key # Location of the client certificate for etcd. - name: ETCD_CERT_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_cert # Set noderef for node controller. - name: CALICO_K8S_NODE_REF valueFrom: fieldRef: fieldPath: spec.nodeName # Choose the backend to use. - name: CALICO_NETWORKING_BACKEND valueFrom: configMapKeyRef: name: calico-config key: calico_backend # Cluster type to identify the deployment type - name: CLUSTER_TYPE value: "k8s,bgp" # Auto-detect the BGP IP address. - name: IP value: "autodetect" # Enable IPIP - name: CALICO_IPV4POOL_IPIP value: "Always" # Enable or Disable VXLAN on the default IP pool. - name: CALICO_IPV4POOL_VXLAN value: "Never" # Set MTU for tunnel device used if ipip is enabled - name: FELIX_IPINIPMTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # Set MTU for the VXLAN tunnel device. - name: FELIX_VXLANMTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # Set MTU for the Wireguard tunnel device. - name: FELIX_WIREGUARDMTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # The default IPv4 pool to create on startup if none exists. Pod IPs will be # chosen from this range. Changing this value after installation will have # no effect. This should fall within `--cluster-cidr`. # - name: CALICO_IPV4POOL_CIDR # value: "192.168.0.0/16" # Disable file logging so `kubectl logs` works. - name: CALICO_DISABLE_FILE_LOGGING value: "true" # Set Felix endpoint to host default action to ACCEPT. - name: FELIX_DEFAULTENDPOINTTOHOSTACTION value: "ACCEPT" # Disable IPv6 on Kubernetes. - name: FELIX_IPV6SUPPORT value: "false" # Set Felix logging to "info" - name: FELIX_LOGSEVERITYSCREEN value: "info" - name: FELIX_HEALTHENABLED value: "true" securityContext: privileged: true resources: requests: cpu: 250m livenessProbe: exec: command: - /bin/calico-node - -felix-live - -bird-live periodSeconds: 10 initialDelaySeconds: 10 failureThreshold: 6 readinessProbe: exec: command: - /bin/calico-node - -felix-ready - -bird-ready periodSeconds: 10 volumeMounts: - mountPath: /lib/modules name: lib-modules readOnly: true - mountPath: /run/xtables.lock name: xtables-lock readOnly: false - mountPath: /var/run/calico name: var-run-calico readOnly: false - mountPath: /var/lib/calico name: var-lib-calico readOnly: false - mountPath: /calico-secrets name: etcd-certs - name: policysync mountPath: /var/run/nodeagent volumes: # Used by calico-node. - name: lib-modules hostPath: path: /lib/modules - name: var-run-calico hostPath: path: /var/run/calico - name: var-lib-calico hostPath: path: /var/lib/calico - name: xtables-lock hostPath: path: /run/xtables.lock type: FileOrCreate # Used to install CNI. 
- name: cni-bin-dir hostPath: path: /opt/cni/bin - name: cni-net-dir hostPath: path: /etc/cni/net.d # Mount in the etcd TLS secrets with mode 400. # See https://kubernetes.io/docs/concepts/configuration/secret/ - name: etcd-certs secret: secretName: calico-etcd-secrets defaultMode: 0400 # Used to create per-pod Unix Domain Sockets - name: policysync hostPath: type: DirectoryOrCreate path: /var/run/nodeagent # Used to install Flex Volume Driver - name: flexvol-driver-host hostPath: type: DirectoryOrCreate path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-node namespace: kube-system --- # Source: calico/templates/calico-kube-controllers.yaml # See https://github.com/projectcalico/kube-controllers apiVersion: apps/v1 kind: Deployment metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers spec: # The controllers can only have a single active instance. replicas: 1 selector: matchLabels: k8s-app: calico-kube-controllers strategy: type: Recreate template: metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers spec: nodeSelector: kubernetes.io/os: linux tolerations: # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - key: node-role.kubernetes.io/master effect: NoSchedule serviceAccountName: calico-kube-controllers priorityClassName: system-cluster-critical # The controllers must run in the host network namespace so that # it isn't governed by policy that would prevent it from working. hostNetwork: true containers: - name: calico-kube-controllers image: registry.cn-beijing.aliyuncs.com/dotbalo/kube-controllers:v3.15.3 env: # The location of the etcd cluster. - name: ETCD_ENDPOINTS valueFrom: configMapKeyRef: name: calico-config key: etcd_endpoints # Location of the CA certificate for etcd. - name: ETCD_CA_CERT_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_ca # Location of the client key for etcd. - name: ETCD_KEY_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_key # Location of the client certificate for etcd. - name: ETCD_CERT_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_cert # Choose which controllers to run. - name: ENABLED_CONTROLLERS value: policy,namespace,serviceaccount,workloadendpoint,node volumeMounts: # Mount in the etcd TLS secrets. - mountPath: /calico-secrets name: etcd-certs readinessProbe: exec: command: - /usr/bin/check-status - -r volumes: # Mount in the etcd TLS secrets with mode 400. # See https://kubernetes.io/docs/concepts/configuration/secret/ - name: etcd-certs secret: secretName: calico-etcd-secrets defaultMode: 0400 --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-kube-controllers namespace: kube-system --- # Source: calico/templates/calico-typha.yaml --- # Source: calico/templates/configure-canal.yaml --- # Source: calico/templates/kdd-crds.yaml
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.40.100:2379,https://192.168.40.101:2379,https://192.168.40.102:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

POD_SUBNET=`cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep cluster-cidr= | awk -F= '{print $NF}'`
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@# value: "192.168.0.0/16"@ value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
kubectl apply -f calico-etcd.yaml
[root@k8s-master01 ~]# kubectl get po -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5f6d4b864b-pwvnb   1/1     Running   0          3m29s
calico-node-5lz9m                          1/1     Running   0          3m29s
calico-node-8z4bg                          1/1     Running   0          3m29s
calico-node-lmzvf                          1/1     Running   0          3m29s
calico-node-mpngv                          1/1     Running   0          3m29s
calico-node-vmqsl                          1/1     Running   0          3m29s
coredns-54d67798b7-8525g                   1/1     Running   0          39m
coredns-54d67798b7-fxs72                   1/1     Running   0          39m
etcd-k8s-master01                          1/1     Running   0          39m
etcd-k8s-master02                          1/1     Running   0          33m
etcd-k8s-master03                          1/1     Running   0          31m
kube-apiserver-k8s-master01                1/1     Running   0          39m
kube-apiserver-k8s-master02                1/1     Running   0          33m
kube-apiserver-k8s-master03                1/1     Running   0          30m
kube-controller-manager-k8s-master01       1/1     Running   1          39m
kube-controller-manager-k8s-master02       1/1     Running   0          33m
kube-controller-manager-k8s-master03       1/1     Running   0          31m
kube-proxy-hnkmj                           1/1     Running   0          39m
kube-proxy-jk4dm                           1/1     Running   0          32m
kube-proxy-nbcg2                           1/1     Running   0          32m
kube-proxy-qv9k7                           1/1     Running   0          32m
kube-proxy-x6xdc                           1/1     Running   0          33m
kube-scheduler-k8s-master01                1/1     Running   1          39m
kube-scheduler-k8s-master02                1/1     Running   0          33m
kube-scheduler-k8s-master03                1/1     Running   0          30m
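Once all Calico pods are Running, the nodes should change from NotReady to Ready:

kubectl get node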
Newer Kubernetes versions collect system resource metrics with metrics-server, which gathers memory, disk, CPU and network usage for nodes and Pods.
Copy front-proxy-ca.crt from the Master01 node to all Node nodes:
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node01:/etc/kubernetes/pki/front-proxy-ca.crt
scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-node02:/etc/kubernetes/pki/front-proxy-ca.crt
[root@k8s-master01 ~]# vim comp.yaml
apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" name: system:aggregated-metrics-reader rules: - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: k8s-app: metrics-server name: system:metrics-server rules: - apiGroups: - "" resources: - pods - nodes - nodes/stats - namespaces - configmaps verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server-auth-reader namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: metrics-server:system:auth-delegator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: labels: k8s-app: metrics-server name: system:metrics-server roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-server subjects: - kind: ServiceAccount name: metrics-server namespace: kube-system --- apiVersion: v1 kind: Service metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: ports: - name: https port: 443 protocol: TCP targetPort: https selector: k8s-app: metrics-server --- apiVersion: apps/v1 kind: Deployment metadata: labels: k8s-app: metrics-server name: metrics-server namespace: kube-system spec: selector: matchLabels: k8s-app: metrics-server strategy: rollingUpdate: maxUnavailable: 0 template: metadata: labels: k8s-app: metrics-server spec: containers: - args: - --cert-dir=/tmp - --secure-port=4443 - --metric-resolution=30s - --kubelet-insecure-tls - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt # change to front-proxy-ca.crt for kubeadm - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-extra-headers-prefix=X-Remote-Extra- image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:v0.4.1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /livez port: https scheme: HTTPS periodSeconds: 10 name: metrics-server ports: - containerPort: 4443 name: https protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /readyz port: https scheme: HTTPS periodSeconds: 10 securityContext: readOnlyRootFilesystem: true runAsNonRoot: true runAsUser: 1000 volumeMounts: - mountPath: /tmp name: tmp-dir - name: ca-ssl mountPath: /etc/kubernetes/pki nodeSelector: kubernetes.io/os: linux priorityClassName: system-cluster-critical serviceAccountName: metrics-server volumes: - emptyDir: {} name: tmp-dir - name: ca-ssl hostPath: path: /etc/kubernetes/pki --- apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: labels: k8s-app: metrics-server name: 
v1beta1.metrics.k8s.io spec: group: metrics.k8s.io groupPriorityMinimum: 100 insecureSkipTLSVerify: true service: name: metrics-server namespace: kube-system version: v1beta1 versionPriority: 100
[root@k8s-master01 ~]# kubectl create -f comp.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Wait for the metrics-server pod to start, then check node resource usage:
[root@k8s-master01 ~]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master01   109m         2%     1296Mi          33%
k8s-master02   99m          2%     1124Mi          29%
k8s-master03   104m         2%     1082Mi          28%
k8s-node01     55m          1%     761Mi           19%
k8s-node02     53m          1%     663Mi           17%
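The same command works for Pods, for example in the kube-system namespace:

kubectl top pods -n kube-system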
Deploy the Kubernetes Dashboard:
[root@k8s-master01 ~]# mkdir dashboard
[root@k8s-master01 ~]# cd dashboard
[root@k8s-master01 dashboard]# vim dashboard.yaml
# Copyright 2017 The Kubernetes Authors. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. apiVersion: v1 kind: Namespace metadata: name: kubernetes-dashboard --- apiVersion: v1 kind: ServiceAccount metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard --- kind: Service apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard spec: ports: - port: 443 targetPort: 8443 selector: k8s-app: kubernetes-dashboard --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-certs namespace: kubernetes-dashboard type: Opaque --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-csrf namespace: kubernetes-dashboard type: Opaque data: csrf: "" --- apiVersion: v1 kind: Secret metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-key-holder namespace: kubernetes-dashboard type: Opaque --- kind: ConfigMap apiVersion: v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard-settings namespace: kubernetes-dashboard --- kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard rules: # Allow Dashboard to get, update and delete Dashboard exclusive secrets. - apiGroups: [""] resources: ["secrets"] resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"] verbs: ["get", "update", "delete"] # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map. - apiGroups: [""] resources: ["configmaps"] resourceNames: ["kubernetes-dashboard-settings"] verbs: ["get", "update"] # Allow Dashboard to get metrics. 
- apiGroups: [""] resources: ["services"] resourceNames: ["heapster", "dashboard-metrics-scraper"] verbs: ["proxy"] - apiGroups: [""] resources: ["services/proxy"] resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"] verbs: ["get"] --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard rules: # Allow Metrics Scraper to get metrics from the Metrics server - apiGroups: ["metrics.k8s.io"] resources: ["pods", "nodes"] verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubernetes-dashboard subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubernetes-dashboard roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: kubernetes-dashboard subjects: - kind: ServiceAccount name: kubernetes-dashboard namespace: kubernetes-dashboard --- kind: Deployment apiVersion: apps/v1 metadata: labels: k8s-app: kubernetes-dashboard name: kubernetes-dashboard namespace: kubernetes-dashboard spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard spec: containers: - name: kubernetes-dashboard image: registry.cn-beijing.aliyuncs.com/dotbalo/dashboard:v2.0.4 imagePullPolicy: Always ports: - containerPort: 8443 protocol: TCP args: - --auto-generate-certificates - --namespace=kubernetes-dashboard # Uncomment the following line to manually specify Kubernetes API server Host # If not specified, Dashboard will attempt to auto discover the API server and connect # to it. Uncomment only if the default does not work. 
# - --apiserver-host=http://my-address:port volumeMounts: - name: kubernetes-dashboard-certs mountPath: /certs # Create on-disk volume to store exec logs - mountPath: /tmp name: tmp-volume livenessProbe: httpGet: scheme: HTTPS path: / port: 8443 initialDelaySeconds: 30 timeoutSeconds: 30 securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 2001 volumes: - name: kubernetes-dashboard-certs secret: secretName: kubernetes-dashboard-certs - name: tmp-volume emptyDir: {} serviceAccountName: kubernetes-dashboard nodeSelector: "kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule --- kind: Service apiVersion: v1 metadata: labels: k8s-app: dashboard-metrics-scraper name: dashboard-metrics-scraper namespace: kubernetes-dashboard spec: ports: - port: 8000 targetPort: 8000 selector: k8s-app: dashboard-metrics-scraper --- kind: Deployment apiVersion: apps/v1 metadata: labels: k8s-app: dashboard-metrics-scraper name: dashboard-metrics-scraper namespace: kubernetes-dashboard spec: replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: k8s-app: dashboard-metrics-scraper template: metadata: labels: k8s-app: dashboard-metrics-scraper annotations: seccomp.security.alpha.kubernetes.io/pod: 'runtime/default' spec: containers: - name: dashboard-metrics-scraper image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-scraper:v1.0.4 ports: - containerPort: 8000 protocol: TCP livenessProbe: httpGet: scheme: HTTP path: / port: 8000 initialDelaySeconds: 30 timeoutSeconds: 30 volumeMounts: - mountPath: /tmp name: tmp-volume securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 2001 serviceAccountName: kubernetes-dashboard nodeSelector: "kubernetes.io/os": linux # Comment the following tolerations if Dashboard must not be deployed on master tolerations: - key: node-role.kubernetes.io/master effect: NoSchedule volumes: - name: tmp-volume emptyDir: {}
Create an admin-user ServiceAccount bound to cluster-admin for logging in to the Dashboard:
[root@k8s-master01 dashboard]# vim dashboard-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
[root@k8s-master01 dashboard]# kubectl create -f .
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
To avoid certificate errors when opening the Dashboard in Google Chrome, you can add the following startup parameters to the Chrome shortcut:
--test-type --ignore-certificate-errors
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
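If you prefer a non-interactive change over kubectl edit, a patch achieves the same result (assuming the Service name and namespace from the manifest above), and the second command shows which NodePort was assigned:

kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
kubectl get svc kubernetes-dashboard -n kubernetes-dashboard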
Access the Dashboard at https://192.168.40.200:28282 (replace 28282 with your own NodePort).
Get the login token:
[root@k8s-master01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-r4vcp
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 2112796c-1c9e-11e9-91ab-000c298bf023

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXI0dmNwIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMTEyNzk2Yy0xYzllLTExZTktOTFhYi0wMDBjMjk4YmYwMjMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bWYmwgRb-90ydQmyjkbjJjFt8CdO8u6zxVZh-19rdlL_T-n35nKyQIN7hCtNAt46u6gfJ5XXefC9HsGNBHtvo_Ve6oF7EXhU772aLAbXWkU1xOwQTQynixaypbRIas_kiO2MHHxXfeeL_yYZRrgtatsDBxcBRg-nUQv4TahzaGSyK42E_4YGpLa3X3Jc4t1z0SQXge7lrwlj8ysmqgO4ndlFjwPfvg0eoYqu9Qsc5Q7tazzFf9mVKMmcS1ppPutdyqNYWL62P1prw_wclP0TezW1CsypjWSVT4AuJU8YmH8nTNR1EXn8mJURLSjINv6YbZpnhBIPgUGk1JYVLcn47w
Paste the token value into the token field and click Sign in to access the Dashboard.
將Kube-proxy改成ipvs模式,由於在初始化集羣的時候註釋了ipvs配置,因此須要自行修改一下:
Run on the master01 node:
kubectl edit cm kube-proxy -n kube-system
# locate the mode field and set:
mode: "ipvs"
Roll the kube-proxy pods so they pick up the new configuration:
kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
Verify the kube-proxy mode:
[root@k8s-master01 ~]# curl 127.0.0.1:10249/proxyMode
ipvs
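You can also inspect the ipvs rules that kube-proxy has programmed, since ipvsadm was installed earlier:

ipvsadm -Ln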
Notes
Note: in a kubeadm-installed cluster, the certificates are valid for one year by default. On the master nodes, kube-apiserver, kube-scheduler, kube-controller-manager and etcd all run as containers, which you can see with kubectl get po -n kube-system.
Unlike a binary installation:
the kubelet's configuration files are /etc/sysconfig/kubelet and /var/lib/kubelet/config.yaml;
the other components' manifests live under /etc/kubernetes/manifests, e.g. kube-apiserver.yaml. When such a yaml file is changed, the kubelet automatically reloads the configuration, i.e. it restarts the pod; do not create these files again manually.
After a kubeadm installation, the master nodes do not allow pods to be scheduled on them by default. This can be changed as follows:
View the Taints:
[root@k8s-master01 ~]# kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Remove the Taint:
[root@k8s-master01 ~]# kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted
[root@k8s-master01 ~]# kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>
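A quick, disposable way to confirm that pods can now land on the master nodes — the deployment name here is arbitrary:

kubectl create deployment sched-test --image=nginx --replicas=5
kubectl get po -o wide
kubectl delete deployment sched-test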
That's all for this post — thanks for the support, and a like is appreciated!