kubeadm began as a high-school student's side project: Lucas Käldström, a 17-year-old from Finland, built it in his spare time as a community project. The kubeadm source code lives in the `kubernetes/cmd/kubeadm` directory and is part of the Kubernetes project itself; the code under the `app/phases` folder corresponds to each concrete step described in the workflow below.

Two commands deploy a Kubernetes cluster:

```bash
# Create a Master node
kubeadm init
# Join a Node to an existing cluster
kubeadm join <Master IP:port>
```

How kubeadm works

When Kubernetes is deployed, each of its components is a separate binary that needs to be run. kubeadm's approach: kubelet runs directly on the host, and the other Kubernetes components are deployed as containers.

1. Manually install the three binaries kubeadm, kubelet and kubectl on the machine. The kubeadm author has prepared packages for the major Linux distributions, so this is just:

```bash
apt-get install kubeadm
```

2. Use `kubeadm init` to deploy the Master node.

The `kubeadm init` workflow

After you run `kubeadm init`, the first thing kubeadm does is a series of checks to determine whether this machine can be used to deploy Kubernetes. This step is called "preflight checks", and it saves a lot of trouble later. The preflight checks include (in part):

- Is the Linux kernel version 3.10 or newer?
- Are the Linux cgroups modules available?
- Is the machine's hostname valid? In Kubernetes, machine names, like every API object stored in etcd, must use standard DNS naming.
- Do the installed kubeadm and kubelet versions match?
- Are the Kubernetes working ports 10250/10251/10252 already in use?
- Do Linux commands such as `ip` and `mount` exist?
- Is Docker installed?

After the preflight checks pass, kubeadm generates the certificates and directories Kubernetes needs to serve external requests: unless "insecure mode" is explicitly enabled, kube-apiserver can only be reached over HTTPS, which requires certificate files to be configured for the cluster. The certificates kubeadm generates for the Kubernetes project are placed under `/etc/kubernetes/pki` on the Master node; the most important files in that directory are `ca.crt` and its private key `ca.key`.
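The preflight idea can be sketched as a tiny shell script. This is an illustrative approximation of just two of the checks (kernel version, required commands), not kubeadm's actual implementation; the function names are ours:

```bash
#!/bin/bash
# Minimal sketch of kubeadm-style preflight checks (illustrative only).

# The kernel must be 3.10 or newer.
kernel_ok() {
    local ver=${1%%-*}          # "3.10.0-1062.el7" -> "3.10.0"
    local major=${ver%%.*}
    local rest=${ver#*.}
    local minor=${rest%%.*}
    [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 10 ]; }
}

# Required commands (ip, mount, ...) must exist on the PATH.
cmds_ok() {
    local c
    for c in "$@"; do
        command -v "$c" >/dev/null 2>&1 || return 1
    done
}

kernel_ok "$(uname -r)" && echo "kernel: ok" || echo "kernel: too old"
cmds_ok ip mount && echo "commands: ok" || echo "commands: missing"
```

kubeadm's real checks are far more thorough (ports, cgroups, Docker, version skew); this only shows the shape of the logic.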
Environment: CentOS 7.3, with kubernetes-cni-0.7.5-0.x86_64, kubectl-1.17.0-0.x86_64, kubelet-1.17.0-0.x86_64, kubeadm-1.17.0-0.x86_64, docker-ce-18.09.9-3.el7.x86_64
Node name | IP | Software versions | Notes |
---|---|---|---|
Master | 116.196.83.113 | docker 18.09 / kubernetes 1.17 | Alibaba Cloud |
Node1 | 121.36.43.223 | docker 18.09 / kubernetes 1.17 | Alibaba Cloud |
Node2 | 120.77.248.31 | docker 18.09 / kubernetes 1.17 | Alibaba Cloud |
Notes:
1. Deploying k8s with kubeadm exactly as on a traditional server, but `kubeadm init` keeps timing out? Normally, kubeadm is given `--apiserver-advertise-address=<public_ip>` when deploying the cluster, so that other machines can join this one over its public IP. However, on Alibaba Cloud and some other cloud providers the public IP is not configured on the instance's network interface, so etcd cannot start and initialization fails. The fix is simply to create the public IP on an interface ourselves.
```bash
# Initialization
init_security() {
    systemctl stop firewalld
    systemctl disable firewalld &>/dev/null
    setenforce 0
    sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
    sed -i '/^GSSAPIAu/ s/yes/no/' /etc/ssh/sshd_config
    sed -i '/^#UseDNS/ {s/^#//;s/yes/no/}' /etc/ssh/sshd_config
    systemctl enable sshd crond &>/dev/null
    rpm -e postfix --nodeps
    echo -e "\033[32m [Security setup] ==> OK \033[0m"
}
init_security

init_yumsource() {
    if [ ! -d /etc/yum.repos.d/backup ];then
        mkdir /etc/yum.repos.d/backup
    fi
    mv /etc/yum.repos.d/* /etc/yum.repos.d/backup 2>/dev/null
    if ! ping -c2 www.baidu.com &>/dev/null
    then
        echo "No Internet access; cannot configure yum sources"
        exit
    fi
    curl -o /etc/yum.repos.d/163.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo &>/dev/null
    curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo &>/dev/null
    yum clean all
    timedatectl set-timezone Asia/Shanghai
    echo "nameserver 114.114.114.114" > /etc/resolv.conf
    echo "nameserver 8.8.8.8" >> /etc/resolv.conf
    chattr +i /etc/resolv.conf
    yum -y install ntpdate
    ntpdate -b ntp1.aliyun.com    # Time sync matters
    echo -e "\033[32m [YUM Source] ==> OK \033[0m"
}
init_yumsource

# Turn off the swap partition
swapoff -a
# To disable swap permanently, comment out the swap line in this file:
vim /etc/fstab

# Configure hostname resolution
tail -3 /etc/hosts
116.196.83.113 master
121.36.43.223 node1
120.77.248.31 node2
```
Install some necessary system tools
```bash
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the repository -- the Docker official source:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Or the Alibaba Cloud mirror:
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
```
Install docker-ce
```bash
# To install a specific docker-ce version, first list the versions
# available in the repo, then pick one:
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-<VERSION STRING>
# Here we install docker-ce-18.09.9-3.el7
yum -y install docker-ce-18.09.9-3.el7

# Docker registry mirror
# /etc/docker does not exist until Docker has started (startup creates it),
# so create it ourselves. Define a registry mirror inside China so image
# pulls are faster:
mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl daemon-reload && systemctl start docker && systemctl enable docker

docker info | grep Cgroup    # Check whether the driver shown is cgroupfs

# "docker info" now prints two warnings. The following step is required --
# skipping it can make cluster initialization fail:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
```
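As a quick sanity check, the three settings just written can be verified by parsing the file text (rather than querying the live kernel). A small sketch; the helper name `k8s_conf_ok` is ours:

```bash
#!/bin/bash
# Verify that a sysctl fragment sets the keys Kubernetes needs.
# On a real node you would run: k8s_conf_ok /etc/sysctl.d/k8s.conf

k8s_conf_ok() {
    local file=$1 key
    for key in net.bridge.bridge-nf-call-ip6tables \
               net.bridge.bridge-nf-call-iptables \
               net.ipv4.ip_forward; do
        grep -Eq "^${key}[[:space:]]*=[[:space:]]*1$" "$file" || {
            echo "missing or not 1: $key" >&2
            return 1
        }
    done
}

# Demo against a sample fragment (same content as the file created above):
sample=$(mktemp)
printf '%s\n' \
    'net.bridge.bridge-nf-call-ip6tables = 1' \
    'net.bridge.bridge-nf-call-iptables = 1' \
    'net.ipv4.ip_forward = 1' \
    'vm.swappiness=0' > "$sample"
k8s_conf_ok "$sample" && echo "k8s sysctl settings: ok"
rm -f "$sample"
```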
Install these three packages on all machines:

- kubeadm: the tool for bootstrapping a K8s cluster from scratch;
- kubelet: the component that must run on every machine in the cluster, starting pods and containers;
- kubectl: the command-line tool for interacting with the cluster.
```bash
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

yum makecache fast
yum -y install ipset kubelet kubeadm kubectl ipvsadm
sysctl --system
# If net.bridge.bridge-nf-call-iptables errors, load the br_netfilter module
# (this is the k8s.conf file created earlier):
# modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
```
```bash
# Load the ipvs kernel modules. After a reboot they must be reloaded
# (this can go in /etc/rc.local to load automatically at boot):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4    # Check the modules loaded

# Configure and start kubelet (all nodes)
# If using Google's images:
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=k8s.gcr.io/pause:3.1"
EOF
# If Docker uses systemd as its cgroup driver, skip the step above and
# instead put this in daemon.json:
# {
#   "exec-opts": ["native.cgroupdriver=systemd"]
# }
# Using systemd as Docker's cgroup driver keeps nodes more stable under
# resource pressure.

# Start kubelet on every node:
systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet
# kubelet will show errors at this point; they resolve automatically once
# "kubeadm init" generates the CA certificates.
```
Pull the images on all nodes
```bash
cat k8s2.sh
for i in `kubeadm config images list`; do
    imageName=${i#k8s.gcr.io/}
    docker pull registry.aliyuncs.com/google_containers/$imageName
    docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.aliyuncs.com/google_containers/$imageName
done
```

Run this script on every node.
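The retag loop above relies on Bash's `${var#pattern}` expansion, which strips the shortest matching prefix. A standalone illustration of that one step:

```bash
#!/bin/bash
# ${var#pattern} removes the shortest matching prefix -- this is how the
# script turns "k8s.gcr.io/kube-apiserver:v1.17.0" into a bare image name
# that can be pulled from a mirror registry instead.

full="k8s.gcr.io/kube-apiserver:v1.17.0"
imageName=${full#k8s.gcr.io/}              # strip the registry prefix
mirror="registry.aliyuncs.com/google_containers/$imageName"

echo "$imageName"
echo "$mirror"
```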
Initialize the Master node
```bash
# Normally, kubeadm deploys the cluster with "--apiserver-advertise-address=<public_ip>"
# so that other machines can join over the public IP. On Alibaba Cloud and
# some other providers the public IP is not on the NIC, etcd cannot start,
# and initialization fails -- so add the IP ourselves:
ifconfig eth0:1 116.196.83.113 netmask 255.255.255.255 broadcast 116.196.83.113 up
# Notes:
# 1. "up" is required for the IP to take effect.
# 2. This is temporary: it disappears after a reboot. Add the command to
#    /etc/rc.local to make it persistent.

# Now configure the master node and run the initialization:
kubeadm init --kubernetes-version=v1.17.0 --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=116.196.83.113 --ignore-preflight-errors=Swap
# Watch the version here -- releases move quickly.
# --apiserver-advertise-address: the Master IP used to talk to the other cluster nodes
# --service-cidr: the Service network range, i.e. the IP range used for load balancing
# --pod-network-cidr: the Pod network range, i.e. the Pod IP range
# --image-repository: the default registry is k8s.gcr.io, which cannot be
#   reached from inside China; since 1.13 this flag can point it at the
#   Alibaba Cloud mirror: registry.aliyuncs.com/...
# --kubernetes-version=v1.17.0: the version to install
# --ignore-preflight-errors=: ignore the given preflight errors
```
If you see the following output, the initialization succeeded:
```
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:
```
kubeadm join 116.196.83.113:6443 --token dm73l2.y68gl7lwq18kpuss --discovery-token-ca-cert-hash sha256:5139a172cd23276b70ec964795a6833c11e104c4b5c212aeb7fca23a3027914f
```bash
# The long output above records everything the initialization did; from it
# you can see the key steps needed to install a Kubernetes cluster by hand.
# Key items:
# [kubelet]        generates kubelet's config file "/var/lib/kubelet/config.yaml"
# [certificates]   generates the various certificates
# [kubeconfig]     generates the kubeconfig files
# [bootstraptoken] generates the token -- write it down, it is needed later
#                  when adding nodes with "kubeadm join"

# Configure kubectl (run on the master node):
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
```bash
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   NotReady   master   84s   v1.17.0

# Copy admin.conf to the other nodes, or the network plugin cannot be installed:
[root@master ~]# scp /etc/kubernetes/admin.conf k8s-node1:/etc/kubernetes/admin.conf
[root@master ~]# scp /etc/kubernetes/admin.conf k8s-node2:/etc/kubernetes/admin.conf

# Run the following on the node machines:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

# Next: install the network plugin, then join the nodes to the master (all nodes).
```
kubeadm join 116.196.83.113:6443 --token dm73l2.y68gl7lwq18kpuss --discovery-token-ca-cert-hash sha256:5139a172cd23276b70ec964795a6833c11e104c4b5c212aeb7fca23a3027914f
# Install the network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
正常來講過一段時間master節點就會出現下面信息,即表明成功 kubectl get nodes # 查看節點狀態 NAME STATUS ROLES AGE VERSION master Ready master 44m v1.17.0 node1 Ready <none> 16m v1.17.0 node2 Ready <none> 15m v1.17.0
```bash
# If the master node stays NotReady and coredns stays Pending, the network
# plugin is the likely cause. Download the flannel manifest and install it
# by hand:
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
# If it still will not install, pull a third-party flannel image first,
# then delete the pod and wait a moment:
# docker pull jmgao1983/flannel
```
```bash
# Resetting/removing nodes across the whole cluster (including master):
# Drain the pods off k8s-node-1 (run on master):
[root@k8s-master ~]# kubectl drain k8s-node-1 --delete-local-data --force --ignore-daemonsets
# Delete the node (run on master):
[root@k8s-master ~]# kubectl delete node k8s-node-1
# Reset the node (run on the node being removed):
[root@k8s-node-1 ~]# kubeadm reset
# 1. The master must be drained, deleted and reset too. The first time
#    around, skipping the drain/delete of master left everything looking
#    normal, but coredns simply would not work.
# 2. On master, after "kubeadm reset", also delete these files:
rm -rf /var/lib/cni/ $HOME/.kube/config
```
```bash
# Once a kubeadm-generated token expires it can no longer be used, and
# "kubeadm join" fails when adding nodes to the cluster.
# Fix, option 1:
# 1. Generate a new token:
kubeadm token create
# kiyfhw.xiacqbch8o8fa8qj
kubeadm token list
# 2. Get the sha256 hash of the CA certificate's public key:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'
# 3. Join the node to the cluster:
kubeadm join 18.16.202.35:6443 --token kiyfhw.xiacqbch8o8fa8qj \
    --discovery-token-ca-cert-hash sha256:5417eb1b68bd4e7a4c82aded83abc55ec91bd601e45734d6abde8b1ebb057
# A few seconds later the node shows up in "kubectl get nodes" on the master.
# Or, to skip the separate steps:
kubeadm token create --print-join-command
# Fix, option 2:
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
```
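If you capture the join command in a script, the token and CA hash can be pulled back out with `sed`. A sketch, using the join line printed earlier in this walkthrough as sample input:

```bash
#!/bin/bash
# Extract the token and discovery hash from a "kubeadm join" command line.
# Sample input is the join command from the kubeadm init output above.

join_cmd='kubeadm join 116.196.83.113:6443 --token dm73l2.y68gl7lwq18kpuss --discovery-token-ca-cert-hash sha256:5139a172cd23276b70ec964795a6833c11e104c4b5c212aeb7fca23a3027914f'

token=$(echo "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
hash=$(echo "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')

echo "token: $token"
echo "hash:  $hash"
```

With a live cluster you would set `join_cmd=$(kubeadm token create --print-join-command)` instead of the literal string.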
```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
# If kubelet will not start after a cluster reboot, check selinux, the
# firewall, the swap partition, IP forwarding and environment variables.
```
```bash
# 1. Check node status
kubectl get node    # "no" also works as an abbreviation
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   91m   v1.17.0
node1    Ready    <none>   62m   v1.17.0
node2    Ready    <none>   62m   v1.17.0
kubectl get node node1 node2    # Multiple nodes, space-separated
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   63m   v1.17.0
node2   Ready    <none>   63m   v1.17.0
# 2. Delete a node
kubectl delete node node1
# 3. Show detailed node information, useful for troubleshooting
kubectl describe node node1
```
```bash
# 1. List all pods
kubectl get pods
# 2. Show one pod
kubectl get pod nginx1
# 3. Show detailed pod information
kubectl describe pod nginx1
```
```bash
# 1. Show service information
kubectl get service    # "svc" also works as an abbreviation
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   84m
# A service exposes ports and has its own cluster IP; that IP cannot be
# pinged. By default only the "default" namespace is shown.
# 2. Show resources in all namespaces
kubectl get service --all-namespaces
NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  98m
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   98m
# 3. Show resources in the kube-system namespace
kubectl get pods -n kube-system    # prepend "svc," etc. to list several kinds at once
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6955765f44-h4wp5         1/1     Running   0          45m
coredns-6955765f44-zg7bf         1/1     Running   0          45m
etcd-master                      1/1     Running   0          45m
kube-apiserver-master            1/1     Running   0          45m
kube-controller-manager-master   1/1     Running   0          45m
kube-flannel-ds-amd64-9l5rn      1/1     Running   0          20m
kube-flannel-ds-amd64-9vtfm      1/1     Running   0          16m
kube-flannel-ds-amd64-zzqbb      1/1     Running   0          16m
kube-proxy-d2qfg                 1/1     Running   0          16m
kube-proxy-lr945                 1/1     Running   0          16m
kube-proxy-tnqsz                 1/1     Running   0          45m
kube-scheduler-master            1/1     Running   0          45m
```
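When a script only needs one column of `kubectl get` output, `awk` does the job. A sketch, run here against a captured sample of the output above (with a live cluster you would pipe the command's output in instead):

```bash
#!/bin/bash
# Pull the CLUSTER-IP of a named service out of "kubectl get svc" output.

sample='NAMESPACE     NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  98m
kube-system   kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   98m'

svc_ip() {
    # $1 = service name; reads "kubectl get svc --all-namespaces" output on stdin
    awk -v name="$1" '$2 == name { print $4 }'
}

dns_ip=$(echo "$sample" | svc_ip kube-dns)
echo "$dns_ip"
```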
```bash
# 1. Show cluster information
kubectl cluster-info
# 2. Show component details
kubectl get pod -n kube-system -o wide
# Each Kubernetes component is itself deployed as an application, so you
# need the IP address to inspect it.
# -n: --namespace, the namespace used to group different applications
# -o wide: shows which node each pod runs on and its IP address
# 3. Show component status
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
# 4. Query the API server
```
The base environment is now in place. Next, let's create a Pod to get a feel for Kubernetes.
```bash
# Old way of creating a Pod
kubectl run nginx-test1 --image=daocloud.io/library/nginx --port=80 --replicas=1
# This prints a warning, since creating Pods this way is deprecated.
# New way of creating a Pod
kubectl run --generator=run-pod/v1 nginx-test2 --image=daocloud.io/library/nginx --port=80 --replicas=1

kubectl get pods -o wide
NAME                           READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
nginx-test1-6d4686d78d-ftdj9   1/1     Running   0          2m20s   10.244.2.3   node2   <none>           <none>
nginx-test2                    1/1     Running   0          76s     10.244.1.3   node1   <none>           <none>
# Access the pod IP from the node it runs on:
curl -I -s 10.244.1.3 | grep 200
HTTP/1.1 200 OK

kubectl get deployment -o wide
NAME          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS    IMAGES                      SELECTOR
nginx-test1   1/1     1            1           10m   nginx-test1   daocloud.io/library/nginx   run=nginx-test1
# Note: the old way creates a Deployment directly, while the new way creates
# a bare Pod. A service in a Deployment can be reached from every node in
# the cluster, while a bare Pod is only reachable from the node it runs on.
```
```bash
# Show a Pod's full definition
kubectl get pods second-nginx -o yaml
kubectl get pods second-nginx -o json
# Filter specific fields with a Go template -- query the Pod's running
# phase (similar to "docker inspect"):
kubectl get pods nginx-test2 --output=go-template --template={{.status.phase}}
Running
# Show the output of the command a Pod runs -- same idea as "docker logs":
kubectl logs <pod-name>
```
```bash
# Show a Pod's status and lifecycle events
kubectl describe pod nginx-test2
Name:         nginx-test2            # Name fields
Namespace:    default
Priority:     0
Node:         node1/192.168.0.110
Start Time:   Sun, 15 Dec 2019 19:52:10 +0800
Labels:       run=nginx-test2
Annotations:  <none>
Status:       Running
IP:           10.244.1.3
IPs:
  IP:  10.244.1.3
Containers:                          # Containers in the Pod
  nginx-test2:                       # Container ID
    Container ID:   docker://3df2a2e16d6eaf909022627fac23c829bad006657fb03b4275bb536c8f5c9d90
    Image:          daocloud.io/library/nginx     # Container image
    Image ID:       docker-pullable://daocloud.io/library/nginx@sha256:f83b2ff11fc3fb90aebdebf76
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running          # Container state
      Started:      Sun, 15 Dec 2019 19:52:14 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8lcxt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:                             # The container's volumes
  default-token-8lcxt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8lcxt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:                              # Events related to the Pod
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22m   default-scheduler  Successfully assigned default/nginx-test2 to node1
  Normal  Pulling    22m   kubelet, node1     Pulling image "daocloud.io/library/nginx"
  Normal  Pulled     22m   kubelet, node1     Successfully pulled image "daocloud.io/library/nginx"
  Normal  Created    22m   kubelet, node1     Created container nginx-test2
  Normal  Started    22m   kubelet, node1     Started container nginx-test2
```
```bash
# Scale the Pod count up to 4
kubectl scale --replicas=4 deployment nginx-test1
kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nginx-test1-6d4686d78d-dsgdv   1/1     Running   0          89s
nginx-test1-6d4686d78d-ftdj9   1/1     Running   0          31m
nginx-test1-6d4686d78d-k49br   1/1     Running   0          89s
nginx-test1-6d4686d78d-wsnsh   1/1     Running   0          89s
nginx-test2                    1/1     Running   0          30m
# To scale down, just lower the number after --replicas
kubectl scale --replicas=1 deployment nginx-test1
kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
nginx-test1-6d4686d78d-ftdj9   1/1     Running   0          32m
nginx-test2                    1/1     Running   0          31m
```
```bash
# To make the update obvious, replace nginx with httpd
kubectl set image deployment nginx-test1 nginx-test1=httpd
# Watch the update in real time
kubectl get deployment -w
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
nginx-test1   4/5     3            4           54m
nginx-test1   5/5     3            5           54m
nginx-test1   4/5     3            4           54m
nginx-test1   4/5     4            4           54m
nginx-test1   5/5     4            5           54m
# Verify from the corresponding node
curl 10.244.1.20
<html><body><h1>It works!</h1></body></html>
# Roll back to the original nginx
kubectl rollout undo deployment nginx-test1
deployment.apps/nginx-test1 rolled back
# Watch the rollback in real time
kubectl get deployment -w
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
nginx-test1   4/5     3            4           56m
nginx-test1   5/5     3            5           57m
nginx-test1   4/5     3            4           57m
nginx-test1   4/5     4            4           57m
nginx-test1   5/5     4            5           57m
# Verify after the rollback completes
curl -s 10.244.1.23 -I | grep Server
Server: nginx/1.17.6
```
```bash
kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2
kubectl get deployment
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
myapp         2/2     2            2           3m21s
nginx-test1   5/5     5            5           89m
kubectl get pods -o wide | grep myapp
# Access it from the corresponding node; hitting it in a loop makes the
# update visible -- note the IP may change when a pod is replaced:
while true; do curl 10.244.1.25; sleep 1; done
# Rolling update
kubectl set image deployment myapp myapp=ikubernetes/myapp:v2
# Inspect the deployment controller details
kubectl describe deployment myapp | grep myapp:v2
    Image:        ikubernetes/myapp:v2
# Now roll it back
kubectl rollout undo deployment myapp
kubectl describe deployment myapp | grep Image:
    Image:        ikubernetes/myapp:v1
curl 10.244.1.27
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
```
Once a pod is created, the service inside it can only be reached from within the cluster, via the Pod's address. When that pod fails, its controller creates a replacement pod, and reaching the service then means looking up the new pod's address first. A Service solves this: when a new pod comes up, the Service connects to it through the pod's label, so the service stays reachable through the Service address alone.
```bash
# Delete the current Pod
kubectl delete pod myapp-7c468db58f-4grch
# After the delete, a new Pod appears in its place:
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
myapp-7c468db58f-j7qdj   1/1     Running   0          32s   10.244.2.29   node
myapp-7c468db58f-pms57   1/1     Running   0          14m   10.244.1.27   node
# Create a service that selects the myapp label.
# Services are created with "kubectl expose"; see "kubectl expose --help"
# for details. After the service is created, its address still only works
# from inside the cluster.
```
kubectl expose deployment nginx-test1 --name=nginx --port=80 --target-port=80 --protocol=TCP
```bash
# Check the service; we will hit its IP directly in a moment
kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   4h55m
myapp        ClusterIP   10.96.241.42   <none>        80/TCP    88m
nginx        ClusterIP   10.96.11.13    <none>        80/TCP    4m41s
# Because nginx is reached through the service address, the service keeps
# working even after its Pods are deleted and recreated -- provided the
# Pods' DNS points at the coredns service address.
curl 10.96.11.13 -I
HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Sun, 15 Dec 2019 14:50:36 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
Connection: keep-alive
ETag: "5dd3e500-264"
Accept-Ranges: bytes

kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   5h6m
```
```bash
kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            run=nginx-test1    # The label does not change
Annotations:       <none>
Selector:          run=nginx-test1
Type:              ClusterIP
IP:                10.96.11.13
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.22:80,10.244.1.23:80,10.244.1.24:80 + 2 more...
Session Affinity:  None
Events:            <none>
# Show the Pods' labels
kubectl get pods --show-labels
NAME                           READY   STATUS    RESTARTS   AGE    LABELS
nginx-test1-7798fd9994-559tc   1/1     Running   0          141m   pod-template-hash=7798fd9994,run=nginx-test1
# coredns resolves service names in real time: even after a service is
# recreated or its IP changed, the pods behind it remain reachable by
# service name.
# Delete and recreate the service named nginx:
kubectl delete svc nginx
```
Exposing service ports
After a pod and service are created, neither the pod address nor the service address is reachable from outside the cluster. To reach a pod's service externally, change the service's type to NodePort; the NAT rules are added automatically, and the service becomes reachable through any node's address.
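One detail worth checking before editing the service: nodePort values must fall inside the apiserver's service-node-port-range, which defaults to 30000-32767. A tiny helper sketch (the function name is ours):

```bash
#!/bin/bash
# Check that a chosen nodePort lies in the default Kubernetes
# service-node-port-range of 30000-32767 (configurable on the apiserver).

nodeport_ok() {
    local port=$1
    [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]
}

nodeport_ok 31688 && echo "31688: ok"
nodeport_ok 8080  || echo "8080: outside default range"
```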
```bash
# First create a service named web
kubectl expose deployment nginx-test1 --name=web
# Edit the service and set the port you want to expose
kubectl edit svc web
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-15T15:15:03Z"
  labels:
    run: nginx-test1
  name: web
  namespace: default
  resourceVersion: "49527"
  selfLink: /api/v1/namespaces/default/services/web
  uid: 82ca9472-3e55-495f-94a3-3c826a6f6f6e
spec:
  clusterIP: 10.96.18.152
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31688        # Add this line
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-test1
  sessionAffinity: None
  type: NodePort           # Change this line
status:
  loadBalancer: {}

netstat -lntp | grep 31688
tcp6       0      0 :::31688       :::*       LISTEN      114918/kube-proxy
# The pod's service can now be reached from outside the cluster via any
# node's address and this port.
```