1. Kubernetes Overview
Kubernetes is a container cluster management system open-sourced by Google in 2014, commonly abbreviated as K8s.
K8s is used to deploy, scale, and manage containerized applications.
K8s provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and other capabilities.
The goal of Kubernetes is to make deploying containerized applications simple and efficient.
Self-healing
Restarts failed containers when a node fails, and replaces and redeploys them to maintain the expected number of replicas; kills containers that fail health checks and withholds client requests until they are ready, so that online services are not interrupted.
Elastic scaling
Scales application instances up or down quickly via commands, the UI, or automatically based on CPU usage, ensuring high availability during business peaks and reclaiming resources during off-peak hours to run services at minimal cost.
Automated rollouts and rollbacks
K8s updates applications with a rolling-update strategy, updating one Pod at a time instead of deleting all Pods at once; if a problem appears during the update, the change is rolled back so the upgrade does not impact the business.
Service discovery and load balancing
K8s provides a single access point (an internal IP address and a DNS name) for a group of containers and load-balances across all associated containers, so users do not need to track container IPs.
Secret and configuration management
Manages secrets and application configuration without exposing sensitive data inside images, improving security; commonly used configuration can also be stored in K8s for applications to consume.
Storage orchestration
Mounts external storage systems, whether local storage, a public cloud (such as AWS), or network storage (such as NFS, GlusterFS, Ceph), as part of the cluster's resources, greatly improving storage flexibility.
Batch processing
Provides one-off and scheduled tasks to cover batch data processing and analysis scenarios.
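To make the batch features concrete, here is a minimal CronJob manifest as a hedged sketch (the name, image, and schedule are illustrative, not from this article; `batch/v1beta1` matches the Kubernetes 1.15 era deployed below):

```shell
# Write an example CronJob manifest that prints the date once a minute.
cat > cronjob-example.yaml << 'EOF'
apiVersion: batch/v1beta1      # CronJob API group/version in Kubernetes 1.15
kind: CronJob
metadata:
  name: hello                  # illustrative name
spec:
  schedule: "*/1 * * * *"      # standard cron syntax: every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from Kubernetes"]
          restartPolicy: OnFailure
EOF
```

It would be applied with `kubectl apply -f cronjob-example.yaml`; a one-off Job looks the same without the schedule and jobTemplate wrapper.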
kube-apiserver
The Kubernetes API server is the cluster's unified entry point and the coordinator among all components. It exposes a RESTful API; all create, read, update, delete, and watch operations on resource objects go through the API server before being persisted to etcd.
kube-controller-manager
Handles routine background tasks in the cluster. Each resource has a corresponding controller, and the controller manager is responsible for managing these controllers.
kube-scheduler
Selects a Node for newly created Pods according to its scheduling algorithm; Pods can be placed freely, on the same node or on different nodes.
etcd
A distributed key-value store that holds cluster state data, such as Pod and Service object information.
kubelet
The kubelet is the Master's agent on each Node. It manages the lifecycle of containers running on its machine, for example creating containers, mounting data volumes for Pods, downloading secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.
kube-proxy
Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing.
docker or rkt
The container engine, which runs the containers.
Pod
The smallest deployable unit
A group of one or more containers
Containers in a Pod share a network namespace
Pods are ephemeral
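The shared network namespace can be illustrated with a minimal two-container Pod manifest (a sketch with made-up names; the sidecar reaches nginx on localhost precisely because both containers share the Pod's network):

```shell
# Write an example Pod manifest with two containers in one network namespace.
cat > pod-example.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: two-containers         # illustrative name
spec:
  containers:
  - name: web
    image: nginx               # listens on port 80
  - name: sidecar
    image: busybox
    # Same network namespace, so the nginx container is localhost:80 here.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]
EOF
```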
Controllers
ReplicaSet: ensures the expected number of Pod replicas
Deployment: stateless application deployment
StatefulSet: stateful application deployment
DaemonSet: ensures every Node runs a copy of the same Pod
Job: one-off tasks
CronJob: scheduled tasks
Controllers are higher-level objects that deploy and manage Pods.
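As an example of the Deployment controller, here is a sketch that keeps three nginx replicas running (the name and image tag are illustrative):

```shell
# Write an example Deployment manifest for a stateless nginx application.
cat > deployment-example.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment       # illustrative name
spec:
  replicas: 3                  # the underlying ReplicaSet maintains 3 Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx             # Pods created from this template carry the label
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
        ports:
        - containerPort: 80
EOF
```

A rolling update (as described earlier) is triggered simply by changing the image tag and re-applying the manifest.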
Service
Label: a tag attached to a resource, used to associate, query, and filter objects
Namespaces: logically isolate objects
Annotations: free-form notes attached to objects
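Labels are what tie Services to Pods: a Service selects its backend Pods by label. A hedged sketch with illustrative names, matching the `app: nginx` label used in a typical nginx Deployment:

```shell
# Write an example Service manifest that selects Pods by label.
cat > service-example.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc              # illustrative name
  namespace: default           # Namespaces logically isolate objects
spec:
  selector:
    app: nginx                 # forwards to all Pods labeled app=nginx
  ports:
  - port: 80                   # Service port on the cluster-internal IP
    targetPort: 80             # container port
EOF
```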
minikube
Minikube is a tool that quickly runs a single-node Kubernetes locally; it is intended only for users trying out Kubernetes or doing day-to-day development. Deployment guide: https://kubernetes.io/docs/setup/minikube/
kubeadm
Kubeadm is also a tool; it provides the kubeadm init and kubeadm join commands for quickly deploying a Kubernetes cluster. Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
Binary packages
Recommended: download the release binaries from the official site and deploy each component manually to assemble a Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases
如下操做,在三臺節點都執行
環境:centos 7.4 +
硬件需求:CPU>=2c ,內存>=2G
| IP | Role | Installed software |
|---|---|---|
| 192.168.73.138 | k8s-master | kube-apiserver kube-scheduler kube-controller-manager docker flannel kubelet |
| 192.168.73.139 | k8s-node01 | kubelet kube-proxy docker flannel |
| 192.168.73.140 | k8s-node02 | kubelet kube-proxy docker flannel |
PS : 如下全部操做,在三臺節點所有執行
1. Disable the firewall and SELinux
$ systemctl stop firewalld && systemctl disable firewalld
$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0
2. Disable the swap partition
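The usual commands for this step are swapoff plus an /etc/fstab edit; since editing the real fstab requires root on a node, the sed is demonstrated here against a sample copy:

```shell
# On a real node (as root) you would run:
#   swapoff -a        # disable swap immediately
# and comment out the swap line in /etc/fstab so it stays off after reboot.
# Demonstration against a sample fstab copy:
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > fstab.sample
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' fstab.sample   # comment out the swap entry
cat fstab.sample
```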
3. Set the hostname on 192.168.73.138, 192.168.73.139, and 192.168.73.140 respectively, and configure hosts
$ hostnamectl set-hostname k8s-master    # run on 192.168.73.138
$ hostnamectl set-hostname k8s-node01    # run on 192.168.73.139
$ hostnamectl set-hostname k8s-node02    # run on 192.168.73.140
4. Add the following entries on all hosts
$ cat >> /etc/hosts << EOF
192.168.73.138 k8s-master
192.168.73.139 k8s-node01
192.168.73.140 k8s-node02
EOF
5. Kernel tuning: pass bridged IPv4 traffic to the iptables chains
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
6. Set the system time zone and synchronize with a time server
Install Docker (all nodes):
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
$ systemctl enable docker && systemctl start docker
$ docker --version
Docker version 18.06.1-ce, build e68fc7a
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
All hosts need this step; because releases are frequent, a specific version is pinned here:
$ yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
$ systemctl enable kubelet
Run this only on the Master node; change the apiserver address here to your own master's address:
[root@k8s-master ~]# kubeadm init \
  --apiserver-advertise-address=192.168.73.138 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
Because the default image registry k8s.gcr.io is not reachable from mainland China, the Aliyun mirror registry is specified here.
Output:
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.4.34]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
......(omitted)
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.73.138:6443 --token 2nm5l9.jtp4zwnvce4yt4oj \
    --discovery-token-ca-cert-hash sha256:12f628a21e8d4a7262f57d4f21bc85f8802bb717d
根據輸出提示操做:
[root@k8s-master ~]# mkdir -p $HOME/.kube [root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config [root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
The default token is valid for 24 hours; once it expires it can no longer be used.
If more nodes need to join later, resolve this as follows.
Generate a new token:
[root@k8s-master ~]# kubeadm token create
0w3a92.ijgba9ia0e3scicg
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
0w3a92.ijgba9ia0e3scicg   23h   2019-09-08T22:02:40+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
t0ehj8.k4ef3gq0icr3etl0   22h   2019-09-08T20:58:34+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
Get the sha256 hash of the CA certificate:
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a
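That pipeline extracts the public key from the CA certificate, DER-encodes it, and takes its SHA-256 digest; the result is what kubeadm join expects after `sha256:`. It can be tried against any certificate, for instance a throwaway self-signed one (generated here purely for illustration):

```shell
# Generate a throwaway self-signed cert as a stand-in for /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout demo-ca.key -out demo-ca.crt 2>/dev/null
# Same pipeline as above: public key -> DER -> sha256 -> strip the "(stdin)= " prefix.
hash=$(openssl x509 -pubkey -in demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"   # 64 hex characters
```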
Join a node to the cluster:
[root@k8s-node01 ~]# kubeadm join 192.168.73.138:6443 --token aa78f6.8b4cafc8ed26c34f \
    --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 \
    --skip-preflight-checks
Run this on both Node machines.
Use kubeadm join to register the Nodes with the Master.
The kubeadm join command was already generated by kubeadm init above.
[root@k8s-node01 ~]# kubeadm join 192.168.73.138:6443 --token 2nm5l9.jtp4zwnvce4yt4oj \
    --discovery-token-ca-cert-hash sha256:12f628a21e8d4a7262f57d4f21bc85f8802bb717dd6f513bf9d33f254fea3e89
Output:
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Install the flannel network plugin; run this only on the Master node.
Modify the image address: the default image may not be pullable, so make sure you can reach the quay.io registry, otherwise change it as follows.
Edit kube-flannel.yml and replace the image on its lines 106 and 120; after the change, verify that it looks like this:
[root@k8s-master ~]# cat -n kube-flannel.yml | grep lizhenliang/flannel:v0.11.0-amd64
   106            image: lizhenliang/flannel:v0.11.0-amd64
   120            image: lizhenliang/flannel:v0.11.0-amd64
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
[root@k8s-master ~]# ps -ef | grep flannel
root      2032  2013  0 21:00 ?        00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
Check the cluster's node status. After the network plugin is installed, the nodes should show the status below; only when all nodes are Ready can you continue with the remaining steps.
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   37m     v1.15.0
k8s-node01   Ready    <none>   5m22s   v1.15.0
k8s-node02   Ready    <none>   5m18s   v1.15.0
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-h2ngj              1/1     Running   0          14m
coredns-bccdc95cf-m78lt              1/1     Running   0          14m
etcd-k8s-master                      1/1     Running   0          13m
kube-apiserver-k8s-master            1/1     Running   0          13m
kube-controller-manager-k8s-master   1/1     Running   0          13m
kube-flannel-ds-amd64-j774f          1/1     Running   0          9m48s
kube-flannel-ds-amd64-t8785          1/1     Running   0          9m48s
kube-flannel-ds-amd64-wgbtz          1/1     Running   0          9m48s
kube-proxy-ddzdx                     1/1     Running   0          14m
kube-proxy-nwhzt                     1/1     Running   0          14m
kube-proxy-p64rw                     1/1     Running   0          13m
kube-scheduler-k8s-master            1/1     Running   0          13m
Only when everything shows 1/1 can you proceed with the later steps. If the flannel pods are unhealthy, check the network and redo the following:
kubectl delete -f kube-flannel.yml
Then download kube-flannel.yml again with wget, modify the image address, and run:
kubectl apply -f kube-flannel.yml
Create a pod in the Kubernetes cluster, then expose a port and verify that it is reachable:
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-wf5lm   1/1     Running   0          24s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        39m
service/nginx        NodePort    10.1.224.251   <none>        80:31745/TCP   9s
Access URL: http://NodeIP:Port, in this example http://192.168.73.138:31745 (the NodePort assigned in the output above).
Deploy the Kubernetes dashboard (Master node only):
[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master ~]# vim kubernetes-dashboard.yaml
Changes:
109     spec:
110       containers:
111       - name: kubernetes-dashboard
112         image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # change this line
......
157     spec:
158       type: NodePort        # add this line
159       ports:
160       - port: 443
161         targetPort: 8443
162         nodePort: 30001     # add this line
163       selector:
164         k8s-app: kubernetes-dashboard
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
Access it in Firefox (Chrome cannot open it because of the untrusted certificate): https://NodeIP:30001
Create a service account and bind it to the default cluster-admin cluster role:
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-d9jh2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 4aa1906e-17aa-4880-b848-8b3959483323

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDlqaDIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGFhMTkwNmUtMTdhYS00ODgwLWI4NDgtOGIzOTU5NDgzMzIzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.OkF6h7tVQqmNJniCHJhY02G6u6dRg0V8PTiF8xvMuJJUphLyWlWctgmplM4kjKVZo0fZkAthL7WAV5p_AwAuj4LMfo1X5IpxUomp4YZyhqgsBM0A2ksWoKoLDjbizFwOty8TylWlsX1xcJXZjmP9OvNgjjSq5J90N5PnxYIIgwAMP3fawTP7kUXxz5WhJo-ogCijJCFyYBHoqHrgAbk9pusI8DpGTNIZxBMxkwPPwFwzNCOfKhD0c8HjhNeliKsOYLryZObRdmTQXmxsDfxynTKsRxv_EPQb99yW9GXJPQL0OwpYb4b164CFv857ENitvvKEOU6y55P9hFkuQuAJdQ
Fixing access from other browsers
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# mkdir ui
[root@k8s-master pki]# cp apiserver.crt ui/
[root@k8s-master pki]# cp apiserver.key ui/
[root@k8s-master pki]# cd ui/
[root@k8s-master ui]# mv apiserver.crt dashboard.pem
[root@k8s-master ui]# mv apiserver.key dashboard-key.pem
[root@k8s-master ui]# kubectl delete secret kubernetes-dashboard-certs -n kube-system
[root@k8s-master ui]# kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
Go back to the directory containing kubernetes-dashboard.yaml and edit it, adding the two certificate lines under args:
[root@k8s-master ~]# vim kubernetes-dashboard.yaml
        - --tls-key-file=dashboard-key.pem
        - --tls-cert-file=dashboard.pem
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-zbn9f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 40259d83-3b4f-4acc-a4fb-43018de7fc19

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4temJuOWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNDAyNTlkODMtM2I0Zi00YWNjLWE0ZmItNDMwMThkZTdmYzE5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.E0hGAkeQxd6K-YpPgJmNTv7Sn_P_nzhgCnYXGc9AeXd9k9qAcO97vBeOV-pH518YbjrOAx_D6CKIyP07aCi_3NoPlbbyHtcpRKFl-lWDPdg8wpcIefcpbtS6uCOrpaJdCJjWFcAEHdvcfmiFpdVVT7tUZ2-eHpRTUQ5MDPF-c2IOa9_FC9V3bf6XW6MSCZ_7-fOF4MnfYRa8ucltEIhIhCAeDyxlopSaA5oEbopjaNiVeJUGrKBll8Edatc7-wauUIJXAN-dZRD0xTULPNJ1BsBthGQLyFe8OpL5n_oiHM40tISJYU_uQRlMP83SfkOpbiOpzuDT59BBJB57OQtl3w
ca.crt:     1025 bytes
The knowledge points and images used in this article come from the instruction of Li Zhenliang, whose guidance is gratefully acknowledged.
If anything here is wrong, corrections and discussion are welcome.