In k8s the hosts are divided into a master and nodes. Client requests go to the master first; the master analyses the resource state of every node, picks the best node, and that node then starts the container through docker.
scheduler: runs on the master; watches node resources and, based on the resources a request asks for, places containers on nodes; it first runs predicates (pre-selection) and then priorities (optimisation).
Controller-Manager: runs on the master; monitors whether every controller is healthy; the controller manager itself is made redundant.
Client traffic is forwarded from the node network to the service network, and from the service network on to the pod network.
CentOS version information
[root@master ~]# uname -r
3.10.0-862.el7.x86_64
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.5.1804 (Core)
Deployment method: install with kubeadm (one master and two worker nodes)
Disable firewalld and iptables
Create the yum repositories for docker-ce and kubernetes:
[root@master ~]# cd /etc/yum.repos.d/
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# cat > kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
EOF
Install docker-ce, kubelet, kubeadm and kubectl
[root@master ~]# yum -y install docker-ce kubelet kubeadm kubectl
[root@master ~]# systemctl stop firewalld       #stop the firewall
[root@master ~]# systemctl disable firewalld
[root@master ~]# systemctl enable docker kubelet
Create the /etc/sysctl.d/k8s.conf file, and configure kubelet so it does not fail when swap is enabled
[root@master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master ~]# cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS=--fail-swap-on=false
EOF
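The file above only declares the kernel parameters; they still have to be loaded once on each host, a step the original commands do not show. A minimal sketch:
[root@master ~]# modprobe br_netfilter                 #make sure the bridge module is present so the bridge-nf keys exist
[root@master ~]# sysctl -p /etc/sysctl.d/k8s.conf      #load the parameters declared above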
The commands above must be executed on the master and on every node.
Because of the Great Firewall, Google's docker registry cannot be reached from China, but the required images are available on Aliyun: pull them and retag them. The script below downloads the needed images.
#!/bin/bash
image_aliyun=(kube-apiserver-amd64:v1.12.1 kube-controller-manager-amd64:v1.12.1 kube-scheduler-amd64:v1.12.1 kube-proxy-amd64:v1.12.1 pause-amd64:3.1 etcd-amd64:3.2.24)
for image in ${image_aliyun[@]}
do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$image
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$image k8s.gcr.io/${image/-amd64/}
done
Initialization
kubeadm init --apiserver-advertise-address=192.168.175.4 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap
Save the join command printed at the end for later use:
# kubeadm join 192.168.175.4:6443 --token wyy67p.9wmda1iw4o8ds0c5 --discovery-token-ca-cert-hash sha256:3de3e4401de1cdf3b4c778ad1ac3920d9f7b15ca34b4c5ebe44d92e60d1290e0
If it is lost, regenerate it with: kubeadm token create --print-join-command
After the init completes, do some initial setup:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
Check the status
kubectl get cs
kubectl get nodes
Deploy the flannel network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If the flannel image cannot be pulled, download it from Aliyun and retag it, in the same way as above.
Join the nodes to the cluster:
[root@node1 ~]# systemctl enable docker kubelet
[root@node1 ~]# kubeadm join 192.168.175.4:6443 --token wyy67p.9wmda1iw4o8ds0c5 --discovery-token-ca-cert-hash sha256:3de3e4401de1cdf3b4c778ad1ac3920d9f7b15ca34b4c5ebe44d92e60d1290e0
Kubernetes abstracts everything into resources; once a resource is instantiated it is called an object.
[root@master manifests]# kubectl get pods myapp-6946649ccd-2tjs8 -o yaml apiVersion: v1 #聲明對應的對象屬於k8s的哪個api羣組的版本 kind: Pod #資源類別(service,deloyment都是類別) metadata: #元數據,是一個嵌套的字段 creationTimestamp: 2018-10-22T15:08:38Z generateName: myapp-6946649ccd- labels: pod-template-hash: 6946649ccd run: myapp name: myapp-6946649ccd-2tjs8 namespace: default ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: myapp-6946649ccd uid: 0e9fe6e8-d608-11e8-b847-000c29e073ed resourceVersion: "36407" selfLink: /api/v1/namespaces/default/pods/myapp-6946649ccd-2tjs8 uid: 5abff320-d60c-11e8-b847-000c29e073ed spec: #規格,定義建立的資源對象應該具備什麼樣的特性,靠控制器來知足對應的狀態,用戶定義 containers: - image: ikubernetes/myapp:v1 imagePullPolicy: IfNotPresent name: myapp resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: default-token-962mh readOnly: true dnsPolicy: ClusterFirst nodeName: node2 priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: default-token-962mh secret: defaultMode: 420 secretName: default-token-962mh status: #顯示當前這個資源當前的狀態,若是當前資源狀態和目標狀態不一致,須要向目標狀態轉移,只讀 conditions: - lastProbeTime: null lastTransitionTime: 2018-10-22T15:08:38Z status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: 2018-10-22T15:08:40Z status: "True" type: Ready - lastProbeTime: null lastTransitionTime: 2018-10-22T15:08:40Z status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: 2018-10-22T15:08:38Z status: "True" type: PodScheduled containerStatuses: - containerID: docker://f9a63dc33340082c3a78196f624bc52c193d3f2694c05f91ecb82aa143a9e369 image: ikubernetes/myapp:v1 imageID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513 lastState: {} name: myapp ready: true restartCount: 0 state: running: startedAt: 2018-10-22T15:08:39Z hostIP: 192.168.175.5 phase: Running podIP: 10.244.2.15 qosClass: BestEffort startTime: 2018-10-22T15:08:38Z
How resources are created:
Most resource manifests consist of five top-level fields:
apiVersion: group/version
[root@master manifests]# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1        #alpha = internal test, beta = public test, stable = stable release
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
kind: the resource type; marks what kind of resource is to be created
metadata: metadata
spec: the desired state; the spec differs from one resource type to another
status: the current state, maintained by the cluster and read-only (the fifth top-level field)
Pod lifecycle phases: Pending, Running, Failed, Succeeded, Unknown
Creating a Pod:
Example of a self-defined resource manifest
[root@master manifests]# cat pod-demo.yaml
apiVersion: v1
kind: Pod               #mind the capitalisation
metadata:
  name: pod-demo        #pod name
  namespace: default    #optional, defaults to default
  labels:
    app: myapp
    tier: frontend      #the tier it belongs to
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
Create the pod resource
kubectl create -f pod-demo.yaml #建立資源 kubectl describe pods pod-demo #顯示pod詳細信息 [root@master manifests]# kubectl describe pod pod-demo Name: pod-demo Namespace: default Priority: 0 PriorityClassName: <none> Node: node2/192.168.175.5 Start Time: Tue, 23 Oct 2018 02:33:51 +0800 Labels: app=myapp tier=frontend Annotations: <none> Status: Running IP: 10.244.2.20 Containers: #內部容器 myapp: Container ID: docker://20dabd0d998f5ebd2a7ad1b875e3517831b100f1df9340eefa9e18d89941a8ac Image: ikubernetes/myapp:v1 Image ID: docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513 Port: <none> Host Port: <none> State: Running Started: Tue, 23 Oct 2018 02:33:52 +0800 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-962mh (ro) busybox: Container ID: docker://d69f788cdf8772497c0afc19b469c3553167d3d5ccf03ef4876391a7ed532aa9 Image: busybox:latest Image ID: docker-pullable://busybox@sha256:2a03a6059f21e150ae84b0973863609494aad70f0a80eaeb64bddd8d92465812 Port: <none> Host Port: <none> Command: /bin/sh -c sleep 3600 State: Running Started: Tue, 23 Oct 2018 02:33:56 +0800 Ready: True Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-962mh (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-962mh: Type: Secret (a volume populated by a Secret) SecretName: default-token-962mh Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 48s kubelet, node2 Container image "ikubernetes/myapp:v1" already present on machine Normal Created 48s kubelet, node2 Created container Normal Started 48s kubelet, node2 Started container Normal Pulling 48s kubelet, node2 pulling image "busybox:latest" Normal Pulled 44s kubelet, node2 Successfully pulled image "busybox:latest" Normal Created 44s kubelet, node2 Created container Normal Started 44s kubelet, node2 Started container Normal Scheduled 18s default-scheduler Successfully assigned default/pod-demo to node2 #成功調度到node2
kubectl logs pod-demo myapp        #view the pod's logs
kubectl logs pod-demo busybox
kubectl get pods -w                #-w keeps watching
kubectl exec -it pod-demo -c myapp -- /bin/sh
kubectl delete -f pod-demo.yaml    #delete the resources defined in the manifest
There are three probe types: ExecAction, TCPSocketAction and HTTPGetAction
[root@master manifests]# cat liveness-exec.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 3600"]
    livenessProbe:
      exec:
        command: ["test", "-e", "/tmp/healthy"]
      initialDelaySeconds: 2
      periodSeconds: 3
[root@master manifests]# cat liveness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
[root@master manifests]# cat rediness-httpget.yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    readinessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
[root@master manifests]# kubectl describe pod readiness-httpget-pod
postStart: a command executed right after the container is created
[root@master manifests]# cat poststart-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: poststart-pod
  namespace: default
spec:
  containers:
  - name: busybox-httpd
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command: ["mkdir", "-p", "/data/web/html"]
    command: ["/bin/sh", "-c", "sleep 3600"]
ReplicaSet: creates the requested number of pod replicas on the user's behalf and makes sure that number is maintained; supports scaling. It consists of three parts: the desired replica count, a label selector, and the pod template used to create new pods.
Using ReplicaSet directly is not recommended.
Deployment: built on top of ReplicaSet; supports scaling, rolling updates, rollback and declarative configuration; normally used for stateless services.
DaemonSet: ensures that every node in the cluster runs exactly one copy of a given pod; pods are added automatically on new nodes; commonly used for system-level stateless services.
Job: runs a one-off task and exits when it finishes (a minimal Job/CronJob sketch follows this list).
CronJob: runs a task periodically.
StatefulSet: for stateful applications with persistent storage.
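No Job or CronJob manifest appears in these notes; below is a minimal sketch of both for this 1.12-era cluster (the names, image and schedule are made-up examples):
apiVersion: batch/v1
kind: Job
metadata:
  name: job-demo                 #hypothetical name
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: Never       #a Job pod must not restart forever
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"]
---
apiVersion: batch/v1beta1        #CronJob is still beta in this release
kind: CronJob
metadata:
  name: cronjob-demo             #hypothetical name
spec:
  schedule: "*/5 * * * *"        #every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date; echo hello"]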
[root@master manifests]# cat rs-demo.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        release: canary
        environment: qa
    spec:
      containers:
      - name: myapp-container
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
You can run kubectl edit rs myapp (the controller's name) to edit the controller in place and change the pod replica count and other parameters on the fly.
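The replica count can also be changed without opening an editor, for example:
kubectl scale rs myapp --replicas=5    #scale the ReplicaSet named myapp to 5 replicas
kubectl get rs myapp -o wide           #check the new desired/current counts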
Deployment uses ReplicaSets to implement rolling updates, blue-green deployments and other update strategies.
Deployment is built on top of ReplicaSet: it controls pods by controlling ReplicaSets.
[root@master manifests]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - name: http
          containerPort: 80
[root@master manifests]# kubectl apply -f deploy-demo.yaml
#apply differs from create in that it can be run many times; the differences are written to etcd, so we can edit the manifest file directly to update the configuration and simply run kubectl apply -f deploy-demo.yaml once more afterwards
[root@master manifests]# kubectl rollout undo deploy myapp-deploy [--revision=1 to pick the revision to roll back to]
[root@master manifests]# kubectl get deployment -o wide
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                 SELECTOR
myapp-deploy   3         3         3            3           20m   myapp        ikubernetes/myapp:v2   app=myapp,release=canary
#view the deployment controller; it shows that a Deployment is built on top of a ReplicaSet
kubectl get rs -o wide    #view the ReplicaSet controller
[root@master ]# kubectl patch deployment myapp-deploy -p '{"spec":{"replicas":5}}'    #patch the controller
[root@master manifests]# kubectl patch deployment myapp-deploy -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
[root@master manifests]# kubectl set image deployment myapp-deploy myapp=ikubernetes/myapp:v3 && kubectl rollout pause deployment myapp-deploy    #update and pause the rollout
[root@master manifests]# kubectl rollout status deploy myapp-deploy
[root@master manifests]# kubectl get pods -l app=myapp -w
kubectl rollout history deployment myapp-deploy    #view the update history
[root@master manifests]# cat ds-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: logstor
  template:
    metadata:
      labels:
        app: redis
        role: logstor
    spec:
      containers:
      - name: redis
        image: redis:4.0-alpine
        ports:
        - name: redis
          containerPort: 6379
---        #separates two resource definitions
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      release: stable
  template:
    metadata:
      labels:
        app: filebeat
        release: stable
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local
        - name: REDIS_LOG_LEVEL
          value: info
Ingress: put simply, a set of rules, e.g. a domain name maps to a service, so when a request for that domain arrives it is forwarded to that service. The rules are combined with an Ingress Controller, which writes them dynamically into the load-balancer configuration, providing service discovery and load balancing as a whole.
In essence it works as a watcher: the Ingress Controller keeps talking to the kubernetes API and notices backend changes in real time, such as pods or services being added or removed; with that information plus the Ingress it regenerates the reverse-proxy / load-balancer configuration and reloads it, which is what gives the service discovery.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml    #install the ingress controller
Create a backend pod and service:
[root@master ingress]# kubectl apply -f deploy-demo.yaml [root@master ingress]# cat deploy-demo.yaml apiVersion: v1 kind: Service metadata: name: myapp namespace: default spec: selector: app: myapp release: canary ports: - name: http targetPort: 80 port: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: myapp-deploy namespace: default spec: replicas: 3 selector: matchLabels: app: myapp release: canary template: metadata: labels: app: myapp release: canary spec: containers: - name: myapp image: ikubernetes/myapp:v2 ports: - name: http containerPort: 80
Create a NodePort service to expose the ports
[root@master baremetal]# kubectl apply -f service-nodeport.yaml [root@master baremetal]# cat service-nodeport.yaml apiVersion: v1 kind: Service metadata: name: ingress-nginx namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx spec: type: NodePort ports: - name: http port: 80 targetPort: 80 protocol: TCP nodePort: 30080 - name: https port: 443 targetPort: 443 protocol: TCP nodePort: 30443 selector: app.kubernetes.io/name: ingress-nginx
Create the Ingress manifest
[root@master ingress]# kubectl apply -f ingress-myapp.yaml
[root@master ingress]# cat ingress-myapp.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.template.com
    http:
      paths:
      - path:
        backend:
          serviceName: myapp
          servicePort: 80
Check the result
[root@master ingress]# kubectl get ingress
NAME            HOSTS                ADDRESS   PORTS   AGE
ingress-myapp   myapp.template.com             80      5h55
[root@master ingress]# kubectl get svc
NAME    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
myapp   ClusterIP   10.98.30.144   <none>        80/TCP    4h7m
[root@master ingress]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-7b64976db9-lfnlv   1/1     Running   0          6h30m
myapp-deploy-7b64976db9-nrfgs   1/1     Running   0          6h30m
myapp-deploy-7b64976db9-pbqvh   1/1     Running   0          6h30m
#access it
[root@master ingress]# curl myapp.template.com:30080
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@master ingress]# cat tomcat-deploy.yaml apiVersion: v1 kind: Service metadata: name: tomcat namespace: default spec: selector: app: tomcat release: canary ports: - name: http targetPort: 8080 port: 8080 - name: ajp targetPort: 8009 port: 8009 --- apiVersion: apps/v1 kind: Deployment metadata: name: tomcat-deploy namespace: default spec: replicas: 3 selector: matchLabels: app: tomcat release: canary template: metadata: labels: app: tomcat release: canary spec: containers: - name: tomcat image: tomcat:8.5-alpine ports: - name: http containerPort: 8080 - name: ajp containerPort: 8009 [root@master ingress]# kubectl apply -f tomcat-deploy.yaml [root@master ingress]# openssl genrsa -out tls.key 2048 [root@master ingress]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=tomcat.template.com [root@master ingress]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key [root@master ingress]# kubectl get secret NAME TYPE DATA AGE default-token-962mh kubernetes.io/service-account-token 3 32h tomcat-ingress-secret kubernetes.io/tls 2 66m [root@master ingress]# cat ingress-tomcat-tls.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-tomcat-tls namespace: default annotations: kubernetes.io/ingress.class: "nginx" spec: tls: - hosts: - tomcat.template.com secretName: tomcat-ingress-secret rules: - host: tomcat.template.com http: paths: - path: backend: serviceName: tomcat servicePort: 8080 [root@master ingress]# kubectl apply -f ingress-tomcat-tls.yaml [root@master ingress]# curl -k https://tomcat.template.com:30443 #測試訪問
In kubernetes a volume does not belong to a container but to the pod; the containers share the volumes of the pause infrastructure container.
emptyDir: an empty, temporary directory whose lifetime is the same as the Pod's
Example configuration
[root@master volumes]# cat pod-vol-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command: ["/bin/sh"]
    args: ["-c", "while true;do echo $(date) >> /data/index.html;sleep 2;done"]
  volumes:
  - name: html
    emptyDir: {}
hostPath: a directory on the host; node-level storage
Example configuration:
[root@master volumes]# cat pod-vol-hostpath.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    hostPath:
      path: /data/pod/volume1
      type: DirectoryOrCreate
Network storage:
SAN: iSCSI
NAS: nfs, cifs
NFS example configuration:
[root@master volumes]# cat pod-vol-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: 192.168.175.4
Distributed storage: glusterfs, rbd, cephfs, ...
Cloud storage: EBS, Azure Disk
PVC (workflow: pick a storage system, create PVs, define a PVC, then define a pod that binds to the PVC)
yum -y install nfs-utils    #install nfs
[root@master volumes]# cat /etc/exports    #define the exported volumes
/data/volumes/v1 192.168.175.0/24(rw,no_root_squash)
/data/volumes/v2 192.168.175.0/24(rw,no_root_squash)
/data/volumes/v3 192.168.175.0/24(rw,no_root_squash)
/data/volumes/v4 192.168.175.0/24(rw,no_root_squash)
/data/volumes/v5 192.168.175.0/24(rw,no_root_squash)
[root@master volumes]# cat pv-demo.yaml apiVersion: v1 kind: PersistentVolume metadata: name: pv001 labels: name: pv001 spec: nfs: path: /data/volumes/v1 server: 192.168.175.4 accessModes: ["ReadWriteMany", "ReadWriteOnce"] capacity: storage: 1Gi --- apiVersion: v1 kind: PersistentVolume metadata: name: pv002 labels: name: pv002 spec: nfs: path: /data/volumes/v2 server: 192.168.175.4 accessModes: ["ReadWriteOnce"] capacity: storage: 5Gi --- apiVersion: v1 kind: PersistentVolume metadata: name: pv003 labels: name: pv003 spec: nfs: path: /data/volumes/v3 server: 192.168.175.4 accessModes: ["ReadWriteMany", "ReadWriteOnce"] capacity: storage: 20Gi --- apiVersion: v1 kind: PersistentVolume metadata: name: pv004 labels: name: pv004 spec: nfs: path: /data/volumes/v4 server: 192.168.175.4 accessModes: ["ReadWriteMany", "ReadWriteOnce"] capacity: storage: 10Gi --- apiVersion: v1 kind: PersistentVolume metadata: name: pv005 labels: name: pv005 spec: nfs: path: /data/volumes/v5 server: 192.168.175.4 accessModes: ["ReadWriteMany", "ReadWriteOnce"] capacity: storage: 1Gi
kubectl apply -f pv-demo.yaml
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   5m31s
pv002   5Gi        RWO            Retain           Available                                   5m31s
pv003   20Gi       RWO,RWX        Retain           Available                                   5m31s
pv004   10Gi       RWO,RWX        Retain           Available                                   5m31s
pv005   1Gi        RWO,RWX        Retain           Available                                   5m31s
#Retain is the reclaim policy: the volume is kept after release
PVC example:
[root@master volumes]# cat pvc-demo.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 6Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: mypvc
[root@master volumes]# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                           33m
pv002   5Gi        RWO            Retain           Available                                           33m
pv003   20Gi       RWO,RWX        Retain           Available                                           33m
pv004   10Gi       RWO,RWX        Retain           Bound       default/mypvc                           33m
pv005   1Gi        RWO,RWX        Retain           Available                                           33m
[root@master volumes]# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv004    10Gi       RWO,RWX                       13m
#the claim is shown as bound
Ways to pass configuration to containerized applications:
Create a configmap from the command line:
kubectl create configmap nginx-config --from-literal=nginx_port=80 --from-literal=server_name=myapp.template.com
kubectl get cm
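A configmap can also be built from a whole file, each file becoming one key; a sketch, assuming a hypothetical nginx config named www.conf (one way such a configmap as the nginx-www used further below could be produced):
[root@master configmap]# cat www.conf
server {
    server_name myapp.template.com;
    listen 80;
    root /data/web/html;
}
[root@master configmap]# kubectl create configmap nginx-www --from-file=./www.conf
[root@master configmap]# kubectl get cm nginx-www -o yaml    #the file name becomes the key, the file body the value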
Create a pod that references the variables defined in the configmap
[root@master configmap]# cat pod-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    env:
    - name: NGINX_SERVER_PORT
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: nginx_port
    - name: NGINX_SERVER_NAME
      valueFrom:
        configMapKeyRef:
          name: nginx-config
          key: server_name
The key/value pairs in a configmap can also be edited directly with edit, but the change is not propagated into the pod in real time
kubectl edit cm nginx-config
When the configmap is mounted as a volume, configuration changes are synchronized into the pod
[root@master configmap]# cat pod-configmap2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-2
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    template.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: nginxconf
      mountPath: /etc/nginx/config.d/
      readOnly: true
  volumes:
  - name: nginxconf
    configMap:
      name: nginx-config
[root@master configmap]# cat pod-configmap3.yaml apiVersion: v1 kind: Pod metadata: name: pod-cm-3 namespace: default labels: app: myapp tier: frontend annotations: template.com/created-by: "cluster admin" spec: containers: - name: myapp image: ikubernetes/myapp:v1 ports: - name: http containerPort: 80 volumeMounts: - name: nginxconf mountPath: /etc/nginx/conf.d/ readOnly: true volumes: - name: nginxconf configMap: name: nginx-www
Create a secret from the command line
kubectl create secret generic mysql-root-password --from-literal=password=myP@ss123    #create a secret on the command line (note: this is only pseudo-encryption)
[root@master configmap]# kubectl describe secret mysql-root-password    #view it
Name:         mysql-root-password
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:  Opaque
Data
====
password:  9 bytes
[root@master configmap]# kubectl get secret mysql-root-password -o yaml    #the password is only base64-encoded
apiVersion: v1
data:
  password: bXlQQHNzMTIz
kind: Secret
metadata:
  creationTimestamp: 2018-10-25T06:31:40Z
  name: mysql-root-password
  namespace: default
  resourceVersion: "193886"
  selfLink: /api/v1/namespaces/default/secrets/mysql-root-password
  uid: a1beaf36-d81f-11e8-95d7-000c29e073ed
type: Opaque
[root@master configmap]# echo bXlQQHNzMTIz | base64 -d    #it can be decoded with base64 -d
myP@ss123
Define the manifest:
[root@master configmap]# cat pod-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-1
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    template.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-root-password
          key: password
#note: once injected into the pod the variable is shown in plain text, so this is not secure
[root@master configmap]# kubectl apply -f pod-secret.yaml
pod/pod-secret-1 created
[root@master configmap]# kubectl exec pod-secret-1 -- /bin/printenv
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=pod-secret-1
MYSQL_ROOT_PASSWORD=myP@ss123
StatefulSet is mainly used to manage applications that need:
ordered, graceful deployment and scaling;
ordered rolling updates.
Three components: a headless service, the StatefulSet itself, and a volumeClaimTemplate
[root@master statefulset]# showmount -e
Export list for master:
/data/volumes/v5 192.168.175.0/24
/data/volumes/v4 192.168.175.0/24
/data/volumes/v3 192.168.175.0/24
/data/volumes/v2 192.168.175.0/24
/data/volumes/v1 192.168.175.0/24
[root@master statefulset]# cat stateful-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 2Gi
[root@master statefulset]# kubectl get pods    #the pods now have ordered names
NAME      READY   STATUS    RESTARTS   AGE
myapp-0   1/1     Running   0          2m21s
myapp-1   1/1     Running   0          2m18s
myapp-2   1/1     Running   0          2m15s
[root@master statefulset]# kubectl get sts
NAME    DESIRED   CURRENT   AGE
myapp   3         3         6m14s
[root@master statefulset]# kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
myappdata-myapp-0   Bound    pv002    5Gi        RWO                           6m35s
myappdata-myapp-1   Bound    pv001    5Gi        RWO,RWX                       6m32s
myappdata-myapp-2   Bound    pv003    5Gi        RWO,RWX                       6m29s
#as long as a PVC is not deleted, the pods created by the same StatefulSet stay bound to the same volume, so the data is not lost
Each pod gets a stable DNS name: pod_name.service_name.ns_name.svc.cluster.local
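A quick way to check that resolution, assuming the StatefulSet above (pod myapp-0, headless service myapp, namespace default):
[root@master statefulset]# kubectl exec -it myapp-0 -- nslookup myapp-0.myapp.default.svc.cluster.local
#the name resolves to the pod's own IP through the headless service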
An sts can also be scaled dynamically
kubectl scale sts myapp --replicas=5
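An sts can also be updated canary-style through the partition field of its rolling-update strategy; a sketch, assuming we want to try image v2 on the highest-ordinal pods first:
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'   #only pods with an ordinal >= 2 will be updated
kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'   #once the canary looks good, roll out to all pods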
The k8s API server is grouped; a request does not have to state separately which group and version it targets, because every request is identified by a URL path (a small sketch of querying such paths follows this list).
Request path:
/apis/apps/v1/namespaces/default/deployments/myapp-deploy/
HTTP request verbs:
get, post, put, delete
API request verbs:
get, list, create, update, patch, watch, proxy, redirect, delete, deletecollection
Resource:
Subresource
Namespace
API group
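To see how a URL path encodes group, version, namespace and resource, the API can be queried through kubectl proxy; a small sketch (port 8080 is an arbitrary local choice):
[root@master ~]# kubectl proxy --port=8080 &
[root@master ~]# curl http://localhost:8080/apis/apps/v1/namespaces/default/deployments/    #group=apps, version=v1, namespace=default, resource=deployments
[root@master ~]# curl http://localhost:8080/api/v1/namespaces/default/pods/                 #the core group has no group name, only /api/v1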
RBAC : Role Based Access Control
Define a role
[root@master manifests]# cat role-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@master manifests]# kubectl create rolebinding template-read-pods --role=pod-reader --user=template --dry-run -o yaml > rolebinding-demo.yaml
[root@master manifests]# cat rolebinding-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding          #binds a user to a role
metadata:
  name: template-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role               #the role being bound; all permissions are defined there
  name: pod-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template           #the user account being bound
#generate a new context
openssl genrsa -out template.key 2048
openssl req -new -key template.key -out template.csr -subj "/CN=template"
openssl x509 -req -in template.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out template.crt -days 365
openssl x509 -in template.crt -text
kubectl config set-credentials template --client-certificate=./template.crt --client-key=./template.key --embed-certs=true
kubectl config set-context template@kubernetes --cluster=kubernetes --user=template
kubectl config use-context template@kubernetes    #switch to the new context
[root@master ~]# kubectl create role template --verb=list,get,watch --resource=pods    #create the role
Role-based access control: a user is made to play a role; the role holds the permissions, so the user gains them.
Role definition:
rolebinding: binds a user to a role
clusterrole, clusterrolebinding
ClusterRoleBinding:
clusterRole: grants access to information in all namespaces. If it is bound with a rolebinding, the subject can still only read the namespace the rolebinding lives in. The benefit of binding a clusterrole through a rolebinding is that you do not have to define a role in every namespace: create a rolebinding in each namespace and point it at the clusterrole. Because the binding is a rolebinding, the user can still only access the bound namespace, not every namespace in the cluster.
[root@master ~]# kubectl create clusterrolebinding template-read-all-pods --clusterrole=cluster-reader --user=template --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: template-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template
#to create a clusterrolebinding the clusterrole must exist first; the role's permissions then span the whole cluster
[root@master ~]# cat rolebinding-clusterrole-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: template-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: template
#a rolebinding that binds a clusterrole: the clusterrole is downgraded to permissions only inside the rolebinding's namespace
A Service account exists so that processes inside a Pod can conveniently call the Kubernetes API or other external services; it is different from a User account:
spec.serviceAccount defaults to default (unless another ServiceAccount is specified); the ca.crt and token of that ServiceAccount are mounted into the pod at /var/run/secrets/kubernetes.io/serviceaccount/.
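A pod can point at a different ServiceAccount explicitly; a minimal sketch, assuming a ServiceAccount named admin already exists in the default namespace:
apiVersion: v1
kind: Pod
metadata:
  name: pod-sa-demo              #hypothetical name
  namespace: default
spec:
  serviceAccountName: admin      #this ServiceAccount's token is mounted instead of default's
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1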
Installation and access:
Authentication methods:
token:
(1) Create a ServiceAccount and, according to what it is meant to manage, bind it to a suitable role or clusterrole with a rolebinding or clusterrolebinding;
(2) Get this ServiceAccount's secret and look at the secret's details; the token is in there.
Wrapping the ServiceAccount token into a kubeconfig file
(1) Create a ServiceAccount and, according to what it is meant to manage, bind it to a suitable role or clusterrole with a rolebinding or clusterrolebinding
(2) Grab the token: DEF_NS_ADMIN_TOKEN=$(kubectl get secret SERVICEACCOUNT_SECRET_NAME -o jsonpath={.data.token} | base64 -d)
(3) Generate the kubeconfig file
kubectl create sa dashboard-admin -n kube-system kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin [root@master ~]# kubectl get secrets -n kube-system NAME TYPE DATA AGE attachdetach-controller-token-vhr7x kubernetes.io/service-account-token 3 4d15h bootstrap-signer-token-fl7bb kubernetes.io/service-account-token 3 4d15h certificate-controller-token-p4szd kubernetes.io/service-account-token 3 4d15h clusterrole-aggregation-controller-token-hz2pt kubernetes.io/service-account-token 3 4d15h coredns-token-g9gp6 kubernetes.io/service-account-token 3 4d15h cronjob-controller-token-brhtp kubernetes.io/service-account-token 3 4d15h daemon-set-controller-token-4mmwg kubernetes.io/service-account-token 3 4d15h dashboard-admin-token-kzwk9 kubernetes.io/service-account-token 3 9 [root@master ~]# kubectl describe secrets dashboard-admin-token-kzwk9 -n kube-system #將下面的token備用 Name: dashboard-admin-token-kzwk9 Namespace: kube-system Labels: <none> Annotations: kubernetes.io/service-account.name: dashboard-admin kubernetes.io/service-account.uid: dbe9eb4a-d94a-11e8-a89c-000c29e073ed Type: kubernetes.io/service-account-token Data ==== ca.crt: 1025 bytes namespace: 11 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4ta3p3azkiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZGJlOWViNGEtZDk0YS0xMWU4LWE4OWMtMDAwYzI5ZTA3M2VkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.DZ94phOCIWAxxs4l55irm1G_PhkRRilhJUMKMDheqCKOepT0NpZ07vp61q4YMmx0X0iT43R7LvhQSZ5p4fGn7ttjxGrDhox5tFvYpy6rCtdxEsYYeqWP_tHMqUMrF71TgbRBdj-LZWyec0YlshjgxhYJ4FV_hKZRAzidhlBg93fnWzDe31cSdg8H4j_5tRJU-JKajjbHXPVxGWPlN6WPPzd5iK2aDXt79k4PSgiC4czyCOTuRYj9INVGo8ZEUEkTUN3dUnXJKMMF-HUXIR67rHDapvcwjgMfVac6TpUO6HBR5ZPce3YKmstleaa2FbaMmNN-qJ0qKZoaOF245vTeqQ [root@master ~]# kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml secret/kubernetes-dashboard-certs created serviceaccount/kubernetes-dashboard created role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created deployment.apps/kubernetes-dashboard created service/kubernetes-dashboard created #若是不能訪問谷歌倉庫須要先將dashboard的docker鏡像下載來從新tag一下 [root@master ~]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system #打補丁方便在本機外訪問,認證的帳號必須爲ServiceAccount 被dashboard pod拿來由kubernetes進行認證 [root@master ~]# kubectl get svc -n kube-system #訪問https://192.168.175.4:32767 輸入以前複製的token NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 4d15h kubernetes-dashboard NodePort 10.109.188.195 <none> 443:32767/TCP 10m -------------------------------------------------------------------------------- 第二種config方式登陸dashboard
[root@master pki]# kubectl get secret
NAME TYPE DATA AGE
admin-token-rkxvq kubernetes.io/service-account-token 3 35h
default-token-d75z4 kubernetes.io/service-account-token 3 29h
df-ns-admin-token-4rbwg kubernetes.io/service-account-token 3 34m
[root@master pki]# kubectl get secret df-ns-admin-token-4rbwg -o json
{
"apiVersion": "v1",
"data": {
"ca.crt": "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1UQXlNREV5TWpjME5sb1hEVEk0TVRBeE56RXlNamMwTmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTU9tCkVRL3l3TDdZRHVCTE9SZFZWdHl1NSs4dklIWGJEdWNmZ3N0Vy9Gck82emVLdUNXVzRQdjlPbjJwamxXRkxJdXYKZnhEMU15N3ppVzZjTW0xQkFRUjJpUEwrRE4rK0hYZ0V4ZkhDYTdJbkpHcFYzMU9lU3YzazMwMzljZVFQSUU4SQowQVljY2ZVU0w5SjMvdWpLZElPTTJGZDA2cWNUQmJhRyt0KzBGWGxrZ2NzNVRDa21lOE1xWTNVdjZJUkx6WmgzCmFEejBHVFg0VnpxWStFVXY3UHgzZ2JJeE0wR3ZqTnUvYUJvdWZrZ2RnSDRzL3hYNHVGckJsVytmUDRzRlBYYzIKbXJYd2E2NEY0ZHdLVDc5czY4NTBJMXZ3NS9URDFPRzdpcnNjUHdnMHZwUnlyKzlpTStjKzBWS3BiK1RCTnlzQQpjYkZJbWkzdnBpajliU2ZGVENzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKbjF0ZS95eW1RU3djbHh3OFFRSXhBVSs1L1EKYUN0bmMxOXRFT21jTWorQWJ4THdTS1YwNG1ubHRLRVFXSVBkRWF2RG9LeUQ0NFkzYUg5V2dXNXpra2tiQTJQSApUeDkzVWFtWXNVVkFUenFhOVZzd015dkhDM3RoUlFTRHpnYmxwK2grd1lOdTAyYUpreHJSR3ZCRjg1K282c2FoCktwUms2VHlzQWNVRUh1VHlpSVk5T3d4anBPUzVzVkJKV0NBQ1R5ZXYxRzY4SWkzd2xtY0M4UitaakpLSzh4VncKUmorYjNyeTZiL1A5WUdKYkt4Rm4wOU94eDVCNFhFVWduMjcwYjRSclNXeldOdEVFMkRoZkk1ajNnNGRkUHk3OApuQUNidHpBVUtkSzdXQVdOQXkyQzBFNDZOK3VIa3pObnYwdys1NE1HQy94N2R6TGFBampvTS8yZVRlaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=",
"namespace": "ZGVmYXVsdA==",
"token": "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltUm1MVzV6TFdGa2JXbHVMWFJ2YTJWdUxUUnlZbmRuSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1SbUxXNXpMV0ZrYldsdUlpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVptSmlOVGxtWVdFdFpEazVZaTB4TVdVNExUazBZemd0TURBd1l6STVaVEEzTTJWa0lpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZaR1l0Ym5NdFlXUnRhVzRpZlEubXpIN2ZMUlV6Y1JzUmJuUy1FbGxROGd5OEFMZWMxdjg3THF1SFpnNnVKTllOWm5wc1RnWm5LWXhMWnNvUUhTc3RKRGFCbDJHZnNUdWRaOHh3MERtNXFYSS1fMmRKSzhHY01TUXhJVnZtRkVVNTdjS2pMV3hpWkFSdTVzNDdkZFhfeTFyU1EyS2lWVEI2X1ZLaVgtT012Zjc5RUNiR0NVR05FOGdGV2NDZzZGeWJ3NGlFaGx6a3J4aUJGOGY0OExIdTdHVUNXbEZTZS1QMzRka2lxajFDQmd0LXlBNFJkZm9UTl9CaExJamtaaEVTLVlMZWR1NVEwR0lrcmFzVUhhWjQ2S0toa2thWjZ1QnQwSm5QNGRRd0dVVklVdHhJd1JudkJONmp2NmpKY3piUXV1Y3dYSXBjVDhQQk10QVVUa21yWGRhcE9JR0ZoWU96c00xNHA3WDRB"
},
"kind": "Secret",
"metadata": {
"annotations": {
"kubernetes.io/service-account.name": "df-ns-admin",
"kubernetes.io/service-account.uid": "fbb59faa-d99b-11e8-94c8-000c29e073ed"
},
"creationTimestamp": "2018-10-27T03:54:20Z",
"name": "df-ns-admin-token-4rbwg",
"namespace": "default",
"resourceVersion": "303749",
"selfLink": "/api/v1/namespaces/default/secrets/df-ns-admin-token-4rbwg",
"uid": "fbc27f91-d99b-11e8-94c8-000c29e073ed"
},
"type": "kubernetes.io/service-account-token"
}
DEF_NS_ADMIN_TOKEN=echo ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkltUm1MVzV6TFdGa2JXbHVMWFJ2YTJWdUxUUnlZbmRuSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1SbUxXNXpMV0ZrYldsdUlpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaVptSmlOVGxtWVdFdFpEazVZaTB4TVdVNExUazBZemd0TURBd1l6STVaVEEzTTJWa0lpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVJsWm1GMWJIUTZaR1l0Ym5NdFlXUnRhVzRpZlEubXpIN2ZMUlV6Y1JzUmJuUy1FbGxROGd5OEFMZWMxdjg3THF1SFpnNnVKTllOWm5wc1RnWm5LWXhMWnNvUUhTc3RKRGFCbDJHZnNUdWRaOHh3MERtNXFYSS1fMmRKSzhHY01TUXhJVnZtRkVVNTdjS2pMV3hpWkFSdTVzNDdkZFhfeTFyU1EyS2lWVEI2X1ZLaVgtT012Zjc5RUNiR0NVR05FOGdGV2NDZzZGeWJ3NGlFaGx6a3J4aUJGOGY0OExIdTdHVUNXbEZTZS1QMzRka2lxajFDQmd0LXlBNFJkZm9UTl9CaExJamtaaEVTLVlMZWR1NVEwR0lrcmFzVUhhWjQ2S0toa2thWjZ1QnQwSm5QNGRRd0dVVklVdHhJd1JudkJONmp2NmpKY3piUXV1Y3dYSXBjVDhQQk10QVVUa21yWGRhcE9JR0ZoWU96c00xNHA3WDRB | base64 -d #將token解碼保存至變量中
[root@master ~]# cd /etc/kubernetes/pki/
[root@master pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.175.4:6443" --embed-certs=true --kubeconfig=/root/def-ns-admin.conf
Cluster "kubernetes" set.
kubectl config set-credentials def-ns-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/def-ns-admin.conf
kubectl config set-context def-ns-admin@kubernetes --cluster=kubernetes --user=def-ns-admin --kubeconfig=/root/def-ns-admin.conf
kubectl config use-context def-ns-admin@kubernetes --kubeconfig=/root/def-ns-admin.conf
sz /root/def-ns-admin.conf    #download the conf file to the local machine; that conf file is what you use to log in
CNI: Container Network Interface
Solutions:
flannel configuration parameters:
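The notes list no parameters here; as a reference, the kube-flannel ConfigMap usually carries a net-conf.json section like the sketch below (treat the exact object names as an assumption about the flannel manifest in use). Network must match the --pod-network-cidr passed to kubeadm init, and Backend.Type can also be host-gw or udp:
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }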
Install Calico
Installation docs: https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/flannel
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/canal/canal.yaml
Example network policy configurations:
[root@master networkpolicy]# cat ingress-def.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:        #the policy type is Ingress; with no ingress rules defined, inbound traffic is denied by default and outbound is allowed
  - Ingress
[root@master networkpolicy]# cat ingress-def.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:        #if ingress is set but only contains an empty rule, all inbound traffic is allowed
  - {}
  policyTypes:
  - Ingress
[root@master networkpolicy]# cat allow-netpol-demo.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16    #allow hosts in 10.244.0.0/16 to reach port 80 of pods labelled app=myapp, except 10.244.1.2
        except:
        - 10.244.1.2/32
    ports:
    - protocol: TCP
      port: 80
[root@master networkpolicy]# cat egress-def.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
Network policy exercise: for a namespace, deny all outbound and inbound traffic, then allow outbound traffic whose destination is any Pod inside that same namespace.
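The exercise is stated without a manifest; a minimal sketch of one way to express it, for a hypothetical namespace testing (the policy and namespace names are made up):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-allow-internal-egress    #hypothetical name
  namespace: testing
spec:
  podSelector: {}            #applies to every pod in the namespace
  policyTypes:               #both directions are controlled; with no ingress rule, all inbound traffic is denied
  - Ingress
  - Egress
  egress:
  - to:
    - podSelector: {}        #outbound traffic is allowed only towards pods in this same namespace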
Chart --> Config -->Release
Architecture:
1. helm: the client; manages local Charts, talks to the Tiller server, sends Charts, and performs release operations such as install, query and uninstall
2. Tiller: the server; receives the Charts and Config sent by helm and merges them into a release
[root@master helm]# cat tiller-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
[root@master helm]# helm init --service-account tiller Creating /root/.helm Creating /root/.helm/repository Creating /root/.helm/repository/cache Creating /root/.helm/repository/local Creating /root/.helm/plugins Creating /root/.helm/starters Creating /root/.helm/cache/archive Creating /root/.helm/repository/repositories.yaml Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com Adding local repo with URL: http://127.0.0.1:8879/charts $HELM_HOME has been configured at /root/.helm. Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster. Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy. To prevent this, run `helm init` with the --tiller-tls-verify flag. For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation Happy Helming!
helm repo update
Release management:
Chart management:
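The two headings above carry no commands in these notes; the usual helm 2.x subcommands for each are sketched below (my-redis is just an example release name):
#release management
helm install stable/redis --name my-redis
helm list
helm upgrade my-redis stable/redis
helm rollback my-redis 1
helm delete my-redis
#chart management
helm search redis
helm inspect stable/redis
helm fetch stable/redis
helm create mychart        #scaffold a new chart named mychart (hypothetical)
helm package mychart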