http://www.cnblogs.com/cocowool/p/kubeadm_install_kubernetes.html
https://www.kubernetes.org.cn/doc-16
Kubeadm-based Kubernetes 1.13.3 HA cluster example: https://github.com/yanghongfei/Kubernetes
Binary installation: https://note.youdao.com/ynoteshare1/index.html?id=b96debbbfdee1e4eb8755515aac61ca4&type=notebook
1. Environment Preparation
1) Basic server information
[root@k8s6 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@k8s6 ~]# uname -r
3.10.0-693.el7.x86_64
2) Prepare the machines and write them into the hosts file
192.168.10.22 k8s06 k06 k6
192.168.10.23 node01 n01 n1
192.168.10.24 node02 n02 n2
3) Stop the firewall and disable it at boot (cloud platforms ship their own firewall service, so no local firewall setup is needed there)
CentOS 7 uses firewalld by default.
systemctl stop firewalld.service      # stop the firewall
systemctl disable firewalld.service   # disable it at boot
firewall-cmd --state                  # check its status
4) Time synchronization
yum -y install ntp
systemctl start ntpd.service
netstat -lntup|grep ntpd
[root@pvz01 ~]# ntpdate -u 192.168.1.6
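To keep the clock in sync across reboots, it also helps to enable ntpd at boot and check its peers; a small sketch using the standard ntp tools installed above:

systemctl enable ntpd.service
ntpq -p    # lists the upstream servers ntpd is actually syncing against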
2. Set Up yum Repos for kubernetes and docker, Then Install Them
1.1) Locate the kubernetes repo on the Aliyun mirror site and copy two link addresses: the repo baseurl and the GPG key URL:
https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
1.2) Download and import the signing keys
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import rpm-package-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import yum-key.gpg
1.3) Write the kubernetes yum repo file
[root@k8s6 yum.repos.d]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
2.1) Get the docker-ce repo from the Aliyun mirror. The docker version in CentOS's default repos (1.13) is too old; do not use it. Copy the repo link address:
[root@k8s6 yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3) Check that the currently enabled yum repos now include kubernetes and docker
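A quick way to verify from the shell (standard yum usage; the repo ids come from the files created above):

yum repolist enabled | grep -Ei "kubernetes|docker"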
4) Install with yum
yum install docker-ce kubelet kubeadm kubectl
Note: make sure the GPG keys have been imported first.
4.1) Installing a specific version with yum
Install the latest version directly (mind which version you will later initialize):
yum install docker-ce kubelet kubeadm kubectl -y
Or pin a specific version:
yum install docker-ce kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0 -y
[root@master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
(The "connection refused" line is expected at this stage: the cluster has not been initialized yet, so kubectl has no API server to talk to.)
3. Configure k8s
1) vim /usr/lib/systemd/system/docker.service. If you do not use a proxy, this configuration can be skipped.
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# the two Environment lines below are the additions:
Environment="HTTPS_PROXY=http.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,127.20.0.0/16"
2) Confirm that the bridge iptables values are 1
[root@k8s6 ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
[root@k8s6 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
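If either value is 0, bridged traffic bypasses iptables and kube-proxy rules will not apply. A minimal sketch to set both values persistently, assuming the br_netfilter module is available on this kernel:

modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system    # reload all sysctl configuration files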
3) Configure kubelet's handling of the swap partition
[root@k8s6 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Roughly equivalent in effect to disabling swap outright with: swapoff -a
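If you prefer to disable swap rather than tolerate it, a sketch that also survives reboots (assumes swap entries live in the standard /etc/fstab):

swapoff -a
# comment out any swap line so it does not come back after a reboot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab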
4) Start the docker service
Enable the services at boot first:
[root@k8s6 ~]# systemctl enable docker
[root@k8s6 ~]# systemctl enable kubelet
[root@k8s6 ~]# systemctl daemon-reload
[root@k8s6 ~]# systemctl start docker
4.1) Before starting k8s, check what the kubelet package installed
[root@k8s6 ~]# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/etc/systemd/system/kubelet.service
/usr/bin/kubelet
4.2) Pull the images before initialization
The following images must be pulled; they live on k8s.gcr.io, which cannot be reached directly from mainland China:
k8s.gcr.io/kube-apiserver:v1.13.3
k8s.gcr.io/kube-controller-manager:v1.13.3
k8s.gcr.io/kube-scheduler:v1.13.3
k8s.gcr.io/kube-proxy:v1.13.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6

Pull them from the Aliyun mirror instead:
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24

Re-tag the pulled images so kubeadm believes they came from k8s.gcr.io:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
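The pull-and-tag sequence above is mechanical; here is a minimal bash sketch that does the same thing (same mirror and image list as above):

#!/usr/bin/env bash
# Pull each control-plane image from the Aliyun mirror, then re-tag it
# under k8s.gcr.io so kubeadm finds it locally.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.13.3 kube-controller-manager:v1.13.3 \
           kube-scheduler:v1.13.3 kube-proxy:v1.13.3 \
           pause:3.1 etcd:3.2.24 coredns:1.2.6; do
    docker pull "$MIRROR/$img"
    docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"
done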
4.3) Initialize the cluster with kubeadm. Note: the --pod-network-cidr value must match the Network field in the kube-flannel.yml used later.
kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.200.0.0/16 --apiserver-advertise-address=192.168.10.22
Variants for other versions:
kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=10.200.0.0/16 --apiserver-advertise-address=192.168.10.22 --ignore-preflight-errors=Swap
kubeadm init --kubernetes-version=v1.14.0 --pod-network-cidr=200.200.0.0/16 --service-cidr=172.16.0.0/16 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --apiserver-advertise-address=10.10.12.143
kubeadm init --kubernetes-version=v1.14.1 --pod-network-cidr=200.200.0.0/16 --service-cidr=172.16.0.0/16 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.10.22
The output after it completes:
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 192.168.10.22:6443 --token 9422jr.9eqpi4lvozb4auw6 --discovery-token-ca-cert-hash sha256:1e624e95c2b5efe6bebd7a649492327b5d89366ca8fd1e65bb508522a71ff3a8
Run the commands suggested in the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
5) Health-check the k8s control-plane components
[root@k8s6 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s6 ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
Check whether the node is ready; at this point it is still NotReady:
[root@k8s6 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE   VERSION
k8s6   NotReady   master   25m   v1.13.3
6) Fix the NotReady state. The node stays NotReady because the cluster has no pod network add-on installed yet.
The fix is the command below. Do not apply it as-is; download the manifest and tune it first (see 6.1 and 6.2, and the workflow sketch after the image listing).
[root@k8s6 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s6 ~]# docker images
REPOSITORY                                                                     TAG             IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.13.3         98db19758ad4   3 weeks ago     80.3MB
k8s.gcr.io/kube-proxy                                                          v1.13.3         98db19758ad4   3 weeks ago     80.3MB
k8s.gcr.io/kube-apiserver                                                      v1.13.3         fe242e556a99   3 weeks ago     181MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.13.3         fe242e556a99   3 weeks ago     181MB
k8s.gcr.io/kube-controller-manager                                             v1.13.3         0482f6400933   3 weeks ago     146MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.13.3         0482f6400933   3 weeks ago     146MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.13.3         3a6f709e97a0   3 weeks ago     79.6MB
k8s.gcr.io/kube-scheduler                                                      v1.13.3         3a6f709e97a0   3 weeks ago     79.6MB
quay.io/coreos/flannel                                                         v0.11.0-amd64   ff281650a721   3 weeks ago     52.6MB    # the network component image must be present
k8s.gcr.io/coredns                                                             1.2.6           f59dcacceff4   3 months ago    40MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.2.6           f59dcacceff4   3 months ago    40MB
k8s.gcr.io/etcd                                                                3.2.24          3cab8e1b9802   5 months ago    220MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.2.24          3cab8e1b9802   5 months ago    220MB
k8s.gcr.io/pause                                                               3.1             da86e6ba6ca1   14 months ago   742kB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.1             da86e6ba6ca1   14 months ago   742kB
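A safer workflow, per the warning above: download the manifest, make the edits described in 6.1/6.2, then apply the local copy (same URL as above):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
vim kube-flannel.yml    # adjust the net-conf.json section to match --pod-network-cidr
kubectl apply -f kube-flannel.yml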
6.1) Diff of the tuning changes
[root@K8smaster ~]# diff kube-flannel.yml kube-flannel.yml.bak
129,130c129
<         "Type": "vxlan",
<         "DirectRouting": true
---
>         "Type": "vxlan"
6.2) The modified kube-flannel.yml (net-conf.json's Network is set to match the --pod-network-cidr used at init):
---
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.200.0.0/16",
      "Backend": {
        "Type": "vxlan",
        "DirectRouting": true
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.11.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
(The remainder of the manifest repeats the same DaemonSet four more times for the other CPU architectures: kube-flannel-ds-arm64, kube-flannel-ds-arm, kube-flannel-ds-ppc64le and kube-flannel-ds-s390x. Each copy differs from the amd64 one only in the nodeSelector value beta.kubernetes.io/arch and the matching flannel image tag, e.g. quay.io/coreos/flannel:v0.11.0-arm64.)
Check the node status again:
[root@k8s6 ~]# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
k8s6   Ready    master   41m   v1.13.3
The k8s master node is now up and running.
4. k8s Worker Node Operations
1) On the node servers
yum install docker-ce kubelet kubeadm -y
yum install docker-ce kubelet-1.14.0 kubeadm-1.14.0 -y
systemctl enable docker
systemctl enable kubelet
systemctl daemon-reload
systemctl start docker
swapoff -a

On the master, export the images the node will need:
docker save -o mynode.gz k8s.gcr.io/kube-proxy:v1.13.3 quay.io/coreos/flannel:v0.11.0-amd64 k8s.gcr.io/pause:3.1
scp mynode.gz root@n1:/root/

On the node, import the images and join the cluster:
docker load -i mynode.gz
kubeadm join 192.168.10.22:6443 --token 9422jr.9eqpi4lvozb4auw6 --discovery-token-ca-cert-hash sha256:1e624e95c2b5efe6bebd7a649492327b5d89366ca8fd1e65bb508522a71ff3a8
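The bootstrap token printed by kubeadm init expires after 24 hours by default; if a node joins later, generate a fresh join command on the master (standard kubeadm subcommand):

kubeadm token create --print-join-command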
2) Images each node server must have:
[root@node02 ~]# docker image ls
REPOSITORY               TAG             IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy    v1.13.3         98db19758ad4   3 weeks ago     80.3MB
quay.io/coreos/flannel   v0.11.0-amd64   ff281650a721   3 weeks ago     52.6MB
k8s.gcr.io/pause         3.1             da86e6ba6ca1   14 months ago   742kB
3.1) On the master, list the existing nodes:
[root@k8s6 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
k8s6     Ready    master   82m   v1.13.3
node01   Ready    <none>   23m   v1.13.3
node02   Ready    <none>   24m   v1.13.3
3.2) List the pods that are already running:
[root@k8s6 ~]# kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-g65pw       1/1     Running   0          86m
coredns-86c58d9df4-rx4cd       1/1     Running   0          86m
etcd-k8s6                      1/1     Running   0          85m
kube-apiserver-k8s6            1/1     Running   0          85m
kube-controller-manager-k8s6   1/1     Running   0          85m
kube-flannel-ds-amd64-7swcd    1/1     Running   0          29m
kube-flannel-ds-amd64-hj2z2    1/1     Running   1          27m
kube-flannel-ds-amd64-sj8vp    1/1     Running   0          73m
kube-proxy-dl57g               1/1     Running   0          27m
kube-proxy-f8wd8               1/1     Running   0          29m
kube-proxy-jgzpw               1/1     Running   0          86m
kube-scheduler-k8s6            1/1     Running   0          86m
3.3) Detailed information about the running pods:
[root@k8s6 ~]# kubectl get pods -n kube-system -o wide
NAME                           READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-g65pw       1/1     Running   0          90m   10.200.0.2      k8s6     <none>           <none>
coredns-86c58d9df4-rx4cd       1/1     Running   0          90m   10.200.0.3      k8s6     <none>           <none>
etcd-k8s6                      1/1     Running   0          89m   192.168.10.22   k8s6     <none>           <none>
kube-apiserver-k8s6            1/1     Running   0          89m   192.168.10.22   k8s6     <none>           <none>
kube-controller-manager-k8s6   1/1     Running   0          89m   192.168.10.22   k8s6     <none>           <none>
kube-flannel-ds-amd64-7swcd    1/1     Running   0          33m   192.168.10.24   node02   <none>           <none>
kube-flannel-ds-amd64-hj2z2    1/1     Running   1          31m   192.168.10.23   node01   <none>           <none>
kube-flannel-ds-amd64-sj8vp    1/1     Running   0          77m   192.168.10.22   k8s6     <none>           <none>
kube-proxy-dl57g               1/1     Running   0          31m   192.168.10.23   node01   <none>           <none>
kube-proxy-f8wd8               1/1     Running   0          33m   192.168.10.24   node02   <none>           <none>
kube-proxy-jgzpw               1/1     Running   0          90m   192.168.10.22   k8s6     <none>           <none>
kube-scheduler-k8s6            1/1     Running   0          89m   192.168.10.22   k8s6     <none>           <none>
3.4) List the namespaces:
[root@k8s6 ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   88m
kube-public   Active   88m
kube-system   Active   88m
5. Service Label Selectors
Overriding an image's default command: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
1) Start a service from a resource manifest. The manifest defines an app label.
[root@k8s6 manifests]# cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: nginx:1.14-alpine
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
[root@k8s6 manifests]# kubectl create -f pod-demo.yaml
pod/pod-demo created
[root@k8s6 manifests]# kubectl get pods --show-labels
NAME                           READY   STATUS             RESTARTS   AGE    LABELS
nginx-deploy-79b598b88-pt9xq   0/1     ImagePullBackOff   0          112m   pod-template-hash=79b598b88,run=nginx-deploy
nginx-test-67d85d447c-brwb2    1/1     Running            0          112m   pod-template-hash=67d85d447c,run=nginx-test
nginx-test-67d85d447c-l7xvs    1/1     Running            0          112m   pod-template-hash=67d85d447c,run=nginx-test
nginx-test-67d85d447c-qmrrw    1/1     Running            0          112m   pod-template-hash=67d85d447c,run=nginx-test
nginx-test-67d85d447c-rsmdt    1/1     Running            0          112m   pod-template-hash=67d85d447c,run=nginx-test
nginx-test-67d85d447c-twk77    1/1     Running            0          112m   pod-template-hash=67d85d447c,run=nginx-test
pod-demo                       2/2     Running            0          51s    app=myapp,tier=frontend
2) View pods by the defined app label:
[root@k8s6 manifests]# kubectl get pods -L app
NAME                           READY   STATUS             RESTARTS   AGE    APP
nginx-deploy-79b598b88-pt9xq   0/1     ImagePullBackOff   0          112m
nginx-test-67d85d447c-brwb2    1/1     Running            0          112m
nginx-test-67d85d447c-l7xvs    1/1     Running            0          112m
nginx-test-67d85d447c-qmrrw    1/1     Running            0          112m
nginx-test-67d85d447c-rsmdt    1/1     Running            0          112m
nginx-test-67d85d447c-twk77    1/1     Running            0          112m
pod-demo                       2/2     Running            0          34s    myapp
[root@k8s6 manifests]# kubectl get pods -L app,run
NAME                           READY   STATUS             RESTARTS   AGE     APP     RUN
nginx-deploy-79b598b88-pt9xq   0/1     ImagePullBackOff   0          114m            nginx-deploy
nginx-test-67d85d447c-brwb2    1/1     Running            0          114m            nginx-test
nginx-test-67d85d447c-l7xvs    1/1     Running            0          114m            nginx-test
nginx-test-67d85d447c-qmrrw    1/1     Running            0          114m            nginx-test
nginx-test-67d85d447c-rsmdt    1/1     Running            0          114m            nginx-test
nginx-test-67d85d447c-twk77    1/1     Running            0          114m            nginx-test
pod-demo                       2/2     Running            0          2m53s   myapp
[root@k8s6 manifests]# kubectl get pods -l app
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          6s
3) Manually labeling resources
3.1) kubectl label --help shows the usage
3.2) kubectl label pods <pod-name> k=v — tag the resource with the label k=v
[root@k8s6 manifests]# kubectl get pods -l app --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
pod-demo   2/2     Running   0          10m   app=myapp,tier=frontend
[root@k8s6 manifests]# kubectl label pods pod-demo release=stable
pod/pod-demo labeled
[root@k8s6 manifests]# kubectl get pods -l app --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
pod-demo   2/2     Running   0          11m   app=myapp,release=stable,tier=frontend
3.3) Label selectors
Label selectors
  Equality-based: =, ==, !=
  Set-based:
    KEY in (VALUE1,VALUE2)
    KEY notin (VALUE1,VALUE2)
    KEY
    !KEY
Filter out the services you need with a label selector:
[root@k8s6 manifests]# kubectl get pods -l release
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          17m
[root@k8s6 manifests]# kubectl get pods -l release,app
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          17m
[root@k8s6 manifests]# kubectl get pods -l release=stable
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          17m
[root@k8s6 manifests]# kubectl get pods -l release=stable --show-labels
NAME       READY   STATUS    RESTARTS   AGE   LABELS
pod-demo   2/2     Running   0          17m   app=myapp,release=stable,tier=frontend
Extended label-selector usage:
[root@k8s6 manifests]# kubectl get pods
NAME                           READY   STATUS             RESTARTS   AGE
nginx-deploy-79b598b88-pt9xq   0/1     ImagePullBackOff   0          132m
nginx-test-67d85d447c-brwb2    1/1     Running            0          132m
nginx-test-67d85d447c-l7xvs    1/1     Running            0          132m
nginx-test-67d85d447c-qmrrw    1/1     Running            0          132m
nginx-test-67d85d447c-rsmdt    1/1     Running            0          132m
nginx-test-67d85d447c-twk77    1/1     Running            0          132m
pod-demo                       2/2     Running            0          20m
[root@k8s6 manifests]# kubectl label pods nginx-test-67d85d447c-twk77 release=canary
pod/nginx-test-67d85d447c-twk77 labeled
[root@k8s6 manifests]# kubectl get pods -l release
NAME                          READY   STATUS    RESTARTS   AGE
nginx-test-67d85d447c-twk77   1/1     Running   0          133m
pod-demo                      2/2     Running   0          21m
[root@k8s6 manifests]# kubectl get pods -l release=canary
NAME                          READY   STATUS    RESTARTS   AGE
nginx-test-67d85d447c-twk77   1/1     Running   0          133m
[root@k8s6 manifests]# kubectl get pods -l release,app
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          22m
[root@k8s6 manifests]# kubectl get pods -l release=stable,app=myapp
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          22m
[root@k8s6 manifests]# kubectl get pods -l release!=canary
NAME                           READY   STATUS             RESTARTS   AGE
nginx-deploy-79b598b88-pt9xq   0/1     ImagePullBackOff   0          135m
nginx-test-67d85d447c-brwb2    1/1     Running            0          135m
nginx-test-67d85d447c-l7xvs    1/1     Running            0          134m
nginx-test-67d85d447c-qmrrw    1/1     Running            0          135m
nginx-test-67d85d447c-rsmdt    1/1     Running            0          134m
pod-demo                       2/2     Running            0          22m
[root@k8s6 manifests]# kubectl get pods -l "release in (canary,beta,alpha)"
NAME                          READY   STATUS    RESTARTS   AGE
nginx-test-67d85d447c-twk77   1/1     Running   0          135m
[root@k8s6 manifests]# kubectl get pods -l "release notin (canary,beta,alpha)"
NAME                           READY   STATUS         RESTARTS   AGE
nginx-deploy-79b598b88-pt9xq   0/1     ErrImagePull   0          136m
nginx-test-67d85d447c-brwb2    1/1     Running        0          136m
nginx-test-67d85d447c-l7xvs    1/1     Running        0          135m
nginx-test-67d85d447c-qmrrw    1/1     Running        0          136m
nginx-test-67d85d447c-rsmdt    1/1     Running        0          135m
pod-demo                       2/2     Running        0          24m
4) Label selectors in embedded fields
matchLabels: give key/value pairs directly
matchExpressions: build the selector from expressions of the form {key: "KEY", operator: "OPERATOR", values: [VAL1,VAL2,...]}
  Operators:
    In, NotIn: the values field must be a non-empty list
    Exists, DoesNotExist: the values field must be an empty list
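As an illustration of the two fields, a sketch of a hypothetical Deployment selector (the Deployment name is made up for this example; the syntax is the standard apps/v1 form, and the template labels must satisfy the selector):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy        # hypothetical name, for illustration only
spec:
  replicas: 2
  selector:
    matchLabels:            # direct key/value match
      app: myapp
    matchExpressions:       # expression-based match, ANDed with matchLabels
    - {key: tier, operator: In, values: [frontend]}
    - {key: release, operator: Exists}
  template:
    metadata:
      labels:
        app: myapp
        tier: frontend
        release: stable
    spec:
      containers:
      - name: myapp
        image: nginx:1.14-alpine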
Example of working with such fields (labeling a node):
[root@k8s6 manifests]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE     VERSION   LABELS
k8s6     Ready    master   3d16h   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s6,node-role.kubernetes.io/master=
node01   Ready    <none>   3d15h   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node01
node02   Ready    <none>   3d15h   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02
[root@k8s6 manifests]# kubectl label nodes node01 disktype=ssd
node/node01 labeled
[root@k8s6 manifests]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE     VERSION   LABELS
k8s6     Ready    master   3d16h   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s6,node-role.kubernetes.io/master=
node01   Ready    <none>   3d15h   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node01
node02   Ready    <none>   3d15h   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02
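To undo a label, append a minus sign to the key (standard kubectl label syntax):

kubectl label nodes node01 disktype-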
5) Scheduling via nodeSelector <map[string]string>
[root@k8s6 manifests]# cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: nginx:1.14-alpine
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeSelector:
    disktype: ssd
The pod is scheduled onto machines that carry the disktype: ssd label.
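To confirm the placement (assuming the manifest above was re-created), the NODE column of the wide output should show node01, the only node labelled disktype=ssd above:

kubectl get pods pod-demo -o wide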
6) The annotations field (alongside the embedded selectors)
nodeSelector <map[string]string>: node label selector
nodeName <string>
annotations: unlike labels, annotations cannot be used to select resource objects; they only attach "metadata" to an object
Write the yaml file:
[root@k8s6 manifests]# cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    blog.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: nginx:1.14-alpine
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
  nodeSelector:
    disktype: ssd
kubectl create -f pod-demo.yaml
[root@k8s6 manifests]# kubectl describe pods pod-demo
.............
Annotations:        blog.com/created-by: cluster admin
Status:             Running
.............
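Annotations can also be managed imperatively with kubectl annotate; the key below is a hypothetical one, used only to illustrate the syntax:

kubectl annotate pods pod-demo blog.com/last-reviewed="2019-03" --overwrite   # blog.com/last-reviewed is illustrative
kubectl annotate pods pod-demo blog.com/last-reviewed-                        # remove it again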
Appendix: the Alibaba Cloud container registry console, and a mirrored flannel image hosted there:
https://cr.console.aliyun.com/cn-hangzhou/images
registry.cn-hangzhou.aliyuncs.com/mygcrio/flannel:v0.11.0-amd64