Aliyun student ECS, Ubuntu, Docker, kubectl 1.15.4, kubelet 1.15.4, kubeadm 1.15.4
These packages are not in the default apt sources, so you need to add Google's official repository. Since the official source is unreachable from here, use the Aliyun mirror instead:
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
What these commands do:
1. Download the deb package signing key from https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg and add it with the "apt-key" command.
2. Use cat to write the source line "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" to "/etc/apt/sources.list.d/kubernetes.list".
*curl is used as the download tool here; if it is not installed, install it first with the command below. The "apt-transport-https" package lets apt download over HTTPS; it is optional, so installing curl alone is enough.
apt-get update && apt-get install -y apt-transport-https curl
After these steps, "apt-key list" should show key information similar to the following:
Check "/etc/apt/sources.list.d/kubernetes.list", which should look like this:
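If the cat command above succeeded, the file contains exactly the source line that was written:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main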
kubeadm, kubectl and kubelet should all be the same version, otherwise the deployment may fail. Differing patch versions are usually harmless, but try to install matching versions anyway. Remember that the kubelet version must not be newer than the API server version: a 1.8.0 API server can work with a 1.7.0 kubelet, but not the other way around.
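As a rough sketch of this compatibility rule (comparing only the major and minor parts, since patch-level differences are described above as harmless):

```python
def kubelet_compatible(kubelet: str, apiserver: str) -> bool:
    """Return True if the kubelet version is not newer than the API server.

    Only major.minor is compared; patch versions are ignored.
    """
    k = tuple(int(x) for x in kubelet.split(".")[:2])
    a = tuple(int(x) for x in apiserver.split(".")[:2])
    return k <= a

# A 1.8 API server accepts a 1.7 kubelet, but not the reverse.
print(kubelet_compatible("1.7.0", "1.8.0"))  # True
print(kubelet_compatible("1.8.0", "1.7.0"))  # False
```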
You can list the installable versions of a package with "apt-cache madison", e.g.:
apt-cache madison kubeadm kubelet kubectl
The version installed here is "1.15.4-00"; don't forget the trailing "-00".
Note that installing kubeadm automatically pulls in kubectl, kubelet and cri-tools, and installing kubelet pulls in kubernetes-cni, as shown below:
However, this is not a good thing: on closer inspection, the automatically installed kubectl and kubelet are the latest versions, which do not match the kubeadm version.
So install all three with explicitly pinned versions:
apt-get install kubectl=1.15.4-00 kubelet=1.15.4-00 kubeadm=1.15.4-00
To prevent the packages from being upgraded automatically:
apt-mark hold kubeadm kubectl kubelet
To allow upgrades again:
apt-mark unhold kubeadm kubectl kubelet
On Ubuntu, the firewall can be managed with "ufw".
Check the firewall status:
ufw status
Disable the firewall:
ufw disable
Enable the firewall:
ufw enable
Aliyun ECS images do not ship with SELinux, so disabling it is not verified here. The swap-related commands found online are as follows ("swapoff -a" disables swap, "swapon -a" re-enables it):
swapoff -a
swapon -a
sudo mount -n -o remount,rw /
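Note that "swapoff -a" only disables swap until the next reboot. A common additional step (not from the original write-up) is to comment out the swap entry in "/etc/fstab" so swap stays off permanently; "/swapfile" below is just an example path:

# /etc/fstab — comment out the swap entry, e.g.:
#/swapfile none swap sw 0 0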
systemctl enable kubelet
Edit "/etc/docker/daemon.json" and add the following entry:
"exec-opts": ["native.cgroupdriver=systemd"]
Note that if the file already contains other key-value pairs, they must be separated from the new entry with a comma ",", otherwise the file will fail to parse.
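For example, if daemon.json already contained a registry mirror (the mirror URL here is just a placeholder), the merged file would look like this:

{
    "registry-mirrors": ["https://<your_mirror>.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}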
After saving, reload the configuration and restart docker:
systemctl daemon-reload
systemctl restart docker
iptables -P FORWARD ACCEPT
kubeadm init --kubernetes-version=v1.15.4 --ignore-preflight-errors=NumCPU --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
Parameter explanation:
"kubernetes-version":指定k8s的版本,對於不一樣版本k8s,kubeadm會去拉取不一樣版本的鏡像。
"--pod-network-cidr": 指定使用"cidr"方式給pod分配IP,參數的數值必須跟後面網絡插件的配置文件一致。後面用到的網絡插件爲flannel,flannel配置文件中默認的數值是「10.244.0.0/16」
"--image-repository": 指定鏡像倉庫,kubeadm默認的倉庫地址無法訪問,須要指定爲阿里雲的地址——"registry.aliyuncs.com/google_containers"
--ignore-preflight-errors=<option>
<option> is the error type. In the output shown above, the error is "[ERROR NumCPU]", so the parameter becomes:
--ignore-preflight-errors=NumCPU
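To get a feel for what "--pod-network-cidr=10.244.0.0/16" means: pod IPs are allocated from this range, and flannel carves a smaller subnet out of it for each node (commonly a /24; the per-node prefix length is flannel's default, not anything specified above). A quick illustration with Python's ipaddress module:

```python
import ipaddress

cidr = ipaddress.ip_network("10.244.0.0/16")
print(cidr.num_addresses)  # 65536 addresses available for pods

# Carving per-node /24 subnets out of the pod CIDR:
subnets = list(cidr.subnets(new_prefix=24))
print(len(subnets))        # 256 node subnets
print(subnets[0])          # 10.244.0.0/24
print(subnets[1])          # 10.244.1.0/24
```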
When you see the output above, the cluster's Master node has been initialized successfully. After installing kubeadm and kubelet the same way on other machines in the same network and configuring their environments, they can join the Master with "kubeadm join" to form a multi-node cluster:
kubeadm join 192.168.1.73:6443 --token gkp9ws.rv2guafeusg7k746 \
    --discovery-token-ca-cert-hash sha256:4578b17cd7198a66438b3d49bfb878093073df23cf6c5c7ac56b3e05d2e7aec0
The token is valid for 24 hours by default; "kubeadm token create --print-join-command" creates a new token and prints the corresponding join command:
After initialization succeeds, you can inspect cluster resources on the master with kubectl, but kubectl needs credentials to talk to the API Server. The credentials are in "/etc/kubernetes/admin.conf"; copy that file to "$HOME/.kube/config", otherwise you will get errors like "The connection to the server localhost:8080 was refused".
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
These commands are printed after a successful init; just copy them from there.
Alternatively, set an environment variable directly:
export KUBECONFIG=/etc/kubernetes/admin.conf
You can add this line to "/etc/profile" and then run "source /etc/profile".
Once configured, "kubectl get node" shows the status of all nodes in the cluster:
The node is currently in the "NotReady" state. Inspect it with "kubectl describe node <your_node>" (replace <your_node> with your node name); you should see information like this:
在"Ready"一行,"status"爲"false","message"提示"Runtime newtwork not ready",意思是網絡插件未準備好,因此此時應該給集羣安裝網絡插件。
There are many network plugins; "flannel" is used here. Installing flannel is simple: apply its manifest with "kubectl". The manifest is as follows:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-arm64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-arm64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-arm
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-arm
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-ppc64le
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-ppc64le
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-s390x
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-s390x
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
The "Network" value in net-conf.json must match the "--pod-network-cidr" passed to "kubeadm init"; it is the prefix from which pod IPs are allocated.
Once the manifest file is ready, install flannel with "kubectl":
kubectl create -f <flannel_yaml_path>
Replace <flannel_yaml_path> with the path to your manifest. A moment after running the command, the single-node k8s cluster is up.
By default the master node carries a "NoSchedule" Taint, which you can see with "kubectl describe":
This taint makes the master node unschedulable, i.e. no application pods can run on it. Since this is a single-node cluster, the node must act as both master and worker, so the taint has to be removed:
kubectl taint node <node_name> <taint>-
Replace <node_name> with your node name and <taint> with the taint, e.g.:
kubectl taint node yo node-role.kubernetes.io/master:NoSchedule-
Don't forget the trailing "-" after the taint: the minus sign means "remove the taint". Without it, the command adds the taint instead.
docker pull nginx
kubectl run nginx --image=nginx
After a moment nginx is deployed; check with "kubectl get pods --all-namespaces", or simply run "curl localhost:80".
"kubectl get"能夠列出集羣環境中的某類資源,對於k8s,幾乎全部內容都是「資源」,如Node、Pod、Service等,只要把"<resource_type>"替換成想查看的資源類型便可。
For example, to list nodes:
kubectl get node
For pods and other resources, kubectl usage differs slightly. k8s uses "namespaces" to group resources: think of a namespace as a group name, or as a "cluster within the cluster". Resources in different namespaces are logically isolated from one another, which improves both security and manageability. When inspecting such resources with kubectl, specify the namespace with "-n", e.g.:
kubectl get pod -n kube-system
You can also use "--all-namespaces" to list resources across all namespaces:
kubectl get pod --all-namespaces
For a resource in an abnormal state, "kubectl describe" shows its detailed information: replace <resource_type> with the resource type, <resource_name> with the resource name, and specify the namespace with "-n" where needed. For example:
kubectl describe pod -n kubernetes-dashboard kubernetes-dashboard-6b855d4584-9sgsk
You will then see the pod's event information:
"docker ps" lists docker containers; with "-a" it shows all containers, and without it only the running ones. Since we only care about k8s-related containers, pipe the output through "grep":
docker ps -a | grep kube
對於處於"Exited"狀態的異常容器,使用"docker logs <container_id>"命令查看容器日誌。如:
docker logs 37443d902aee
此處"37443d902aee"是我機器上"kubernetes-dashboard"的容器id。