In k8s, all configuration is ultimately JSON. For ease of reading and writing, these configurations are usually written in YAML; at run time a YAML engine still converts them to JSON, and the apiserver accepts only JSON.
YAML has two main structures, dictionaries (mappings) and arrays (lists):
1. Dictionary type, either a plain dictionary or multi-level nested dictionaries; a dictionary's key and value are separated by a colon.
A plain dictionary:
apiVersion: v1, where apiVersion is the key and v1 is the value.
Multi-level nested dictionaries:
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
Formatted as JSON, this is: {"metadata":{"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard-certs","namespace":"kube-system"}}
# metadata is a key whose value is a dictionary with the keys labels, name, and namespace; labels is itself a dictionary whose key k8s-app has the value kubernetes-dashboard.
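The same structure can be reproduced with Python's standard json module (a minimal sketch; the YAML mapping is written directly as Python dictionaries, since parsing real YAML would need a third-party library such as PyYAML):

```python
import json

# The nested YAML mapping expressed as Python dictionaries:
# metadata's value is itself a dictionary, and labels nests one level deeper.
manifest = {
    "metadata": {
        "labels": {"k8s-app": "kubernetes-dashboard"},
        "name": "kubernetes-dashboard-certs",
        "namespace": "kube-system",
    }
}

# Serialize without spaces to match the compact JSON form shown above.
print(json.dumps(manifest, separators=(",", ":")))
```

Running this prints exactly the compact JSON string shown above.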
2. Array type. Arrays, like dictionaries, can be nested many levels deep; items are marked with "-", and in many cases arrays are mixed with dictionaries.
(1) A plain array, e.g.:
volumes:
- name: kubernetes-dashboard-certs
Parsed as JSON this becomes:
{"volumes":[{"name":"kubernetes-dashboard-certs"}]}
(2) Multi-level nested arrays, e.g.:
containers:
- name: kubernetes-dashboard
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
  ports:
  - containerPort: 8443
    protocol: TCP
Parsed as JSON this becomes:
{"containers":[{"name":"kubernetes-dashboard","image":"k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0","ports":[{"containerPort":8443,"protocol":"TCP"}]}]}
# A list nested inside a dictionary, whose elements in turn nest dictionaries and lists.
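The list-in-dictionary nesting can be checked the same way (a stdlib-only sketch; note that containerPort is a number in YAML, so it serializes to JSON without quotes):

```python
import json

# Lists in YAML become JSON arrays; here a dictionary contains a list whose
# single element is again a dictionary holding another list of dictionaries.
manifest = {
    "containers": [
        {
            "name": "kubernetes-dashboard",
            "image": "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0",
            # containerPort is a YAML number, so it stays unquoted in JSON
            "ports": [{"containerPort": 8443, "protocol": "TCP"}],
        }
    ]
}
print(json.dumps(manifest, separators=(",", ":")))
```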
YAML syntax rules:
- Case sensitive
- Indentation expresses hierarchy
- Tabs are not allowed for indentation; only spaces
- The exact number of spaces does not matter, as long as elements at the same level are aligned on the left
- "#" starts a comment; everything from that character to the end of the line is ignored by the parser
- "---" is an optional separator, needed when one file defines several top-level structures; e.g. defining a Secret and a ServiceAccount, two sibling resource types, in the same yaml requires a "---" between them
For example:
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
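How a multi-document file is handled can be sketched with a simple line scan (a stdlib-only illustration; `split_yaml_documents` is a hypothetical helper, not a real k8s or YAML-library function — a real parser such as PyYAML's `safe_load_all` does this properly):

```python
def split_yaml_documents(text: str) -> list:
    """Split a multi-document YAML string on lines containing only '---'."""
    docs, current = [], []
    for line in text.splitlines():
        if line.strip() == "---":
            docs.append("\n".join(current))
            current = []
        else:
            current.append(line)
    docs.append("\n".join(current))
    # Drop documents that hold only blank lines or comments
    return [d for d in docs
            if any(l.strip() and not l.lstrip().startswith("#")
                   for l in d.splitlines())]

manifest = """\
apiVersion: v1
kind: Secret
---
apiVersion: v1
kind: ServiceAccount
"""
print(len(split_yaml_documents(manifest)))  # 2 top-level resources
```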
Explanations of the configuration items most commonly used in k8s:
kind: the type of this resource object, e.g. Pod, Deployment, StatefulSet, Job, CronJob.
apiVersion: the API group and version supported for the resource object.
For Pod, ServiceAccount, and Service the apiVersion is usually "apiVersion: v1"; variants such as v1beta1 also exist. It means the v1 version of the API is used.
For Deployment the apiVersion is usually "apiVersion: apps/v1"; the older "apiVersion: extensions/v1beta1" also exists.
Prefer non-beta apiVersions, because a beta apiVersion may be changed or removed in the next release.
Once the yaml specifies an apiVersion, that choice determines what the dictionary keys in the yaml mean; different apiVersions support different keys.
metadata: the common sub-items are name and namespace, i.e. the displayed name and the namespace the object belongs to.
  name: memory-demo
  namespace: mem-example
spec: a configuration item that nests dictionaries and lists; it is the main configuration item, with very many sub-items that differ depending on the resource object.
For example, a pod's spec:
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      limits:
        memory: "200Mi"
      requests:
        memory: "100Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
A service's spec:
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: myapp
Resource objects in k8s:
There are three main categories:
workload types
service discovery and load balancing
configuration and storage management
workload:
Pod, Deployment, StatefulSet, Job, CronJob
service discovery and load balancing:
Service, Ingress
configuration and storage management:
ConfigMap
Secret
LimitRange
Pod, the smallest unit the cluster schedules:
Common settings in a pod yaml file:
apiVersion: v1              # the api version to use
kind: Pod                   # the resource object type
metadata:
  name: memory-demo         # the name in the metadata, and the namespace it belongs to
  namespace: mem-example
spec:
  containers:               # a list configuring the containers' properties
  - name: memory-demo-ctr   # the container's name as shown in k8s
    image: polinux/stress   # the name of a real, concrete image in the registry
    resources:              # resource configuration
      limits:               # cap memory usage at 200Mi
        memory: "200Mi"
      requests:             # request 100Mi of memory
        memory: "100Mi"
    command: ["stress"]     # the command to run once the container is up
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]  # the command's arguments
The official note on memory limits:
The Container has no upper bound on the amount of memory it uses. The Container could use all of the memory available on the Node where it is running.
The Container is running in a namespace that has a default memory limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the memory limit.
Defining a pod with a health check:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - args:
    - /server
    image: k8s.gcr.io/liveness
    livenessProbe:
      httpGet:
        # when "host" is not defined, "PodIP" will be used
        # host: my-host
        # when "scheme" is not defined, "HTTP" scheme will be used. Only "HTTP" and "HTTPS" are allowed
        # scheme: HTTPS
        path: /healthz
        port: 8080
        httpHeaders:
        - name: X-Custom-Header
          value: Awesome
      initialDelaySeconds: 15  # how many seconds after pod start before health checks begin; essential for slow-starting applications, e.g. Java apps on Tomcat, or large jar files
      timeoutSeconds: 1        # timeout for the httpGet probe
    name: liveness
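What the kubelet's httpGet probe does can be sketched in Python: issue an HTTP GET against the configured path and port, and treat a successful status as healthy. Everything below (the local /healthz server and the `http_get_probe` helper) is a hypothetical stand-in for illustration, not kubelet code:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Healthz(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer 200 on /healthz, 404 elsewhere, mimicking an app endpoint.
        self.send_response(200 if self.path == "/healthz" else 404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example output quiet

server = HTTPServer(("127.0.0.1", 0), Healthz)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

def http_get_probe(host: str, port: int, path: str, timeout: float = 1.0) -> bool:
    """Healthy if GET http://host:port/path answers with a 2xx/3xx status."""
    try:
        with urllib.request.urlopen(f"http://{host}:{port}{path}",
                                    timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False

print(http_get_probe("127.0.0.1", server.server_port, "/healthz"))  # True
print(http_get_probe("127.0.0.1", server.server_port, "/missing"))  # False
```

initialDelaySeconds and timeoutSeconds map onto when this check first runs and the `timeout` argument.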
LimitRange configures resource quotas at the namespace level.
memory configuration:
Resource limits can be specified on an individual pod, or on the namespace. If neither the namespace nor the pod specifies one, a pod's maximum available memory is all the memory of the node it runs on. If a memory limit is specified in the pod, the available memory is the size given in that configuration.
LimitRange defines the default memory settings, while resources in a pod is specific to that pod and overrides the LimitRange defaults.
Precedence: pod > namespace
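This precedence rule can be sketched as a small merge function (`effective_memory` is a hypothetical helper for illustration, not a k8s API):

```python
# Values set in the pod win; anything missing falls back to the namespace
# LimitRange defaults. limit_range uses the LimitRange field names:
# 'default' -> limit, 'defaultRequest' -> request.
def effective_memory(pod_resources: dict, limit_range: dict) -> dict:
    return {
        "limit": pod_resources.get("limit", limit_range["default"]),
        "request": pod_resources.get("request", limit_range["defaultRequest"]),
    }

ns_defaults = {"default": "512Mi", "defaultRequest": "256Mi"}

# Pod defines only a request: the request is its own, the limit is defaulted.
print(effective_memory({"request": "100Mi"}, ns_defaults))
# Pod defines both: the LimitRange defaults are ignored entirely.
print(effective_memory({"limit": "200Mi", "request": "100Mi"}, ns_defaults))
```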
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
# resources has two sub-items; if only one of them is defined, e.g. only the memory request and not the limit, the request uses the pod's own value and the limit falls back to what the LimitRange defines.
Official LimitRange documentation: https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/
cpu configuration:
Specified in a LimitRange:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - default:
      cpu: 1
    defaultRequest:
      cpu: 0.5
    type: Container
Specified in a pod file:
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-demo-2
spec:
  containers:
  - name: default-cpu-demo-2-ctr
    image: nginx
    resources:
      limits:
        cpu: "100m"  # CPU resources are measured in cpus; fractional values are allowed. The suffix m means milli: 100m cpu equals 100 millicpu, i.e. 0.1 cpu.
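The unit rule in the comment can be captured in a tiny parser (`parse_cpu` is a hypothetical helper, not part of any k8s client library):

```python
# Parse CPU quantities as the comment above describes: a plain number is
# whole cpus, the "m" suffix means millicpu (1/1000 of a cpu).
def parse_cpu(quantity: str) -> float:
    q = str(quantity)
    if q.endswith("m"):
        return int(q[:-1]) / 1000.0
    return float(q)

print(parse_cpu("100m"))  # 0.1
print(parse_cpu("2"))     # 2.0
print(parse_cpu("0.5"))   # 0.5
```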
Combined: specify both cpu and memory quotas when creating a pod.
apiVersion: v1
kind: Pod
metadata:
  name: quota-mem-cpu-demo
spec:
  containers:
  - name: quota-mem-cpu-demo-ctr
    image: nginx
    resources:
      limits:
        memory: "800Mi"
        cpu: "2"
      requests:
        memory: "600Mi"
        cpu: "1"
ResourceQuota configures resource quotas at the namespace level:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-demo
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
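The check a ResourceQuota enforces at admission time can be sketched as follows (`fits_quota` is a hypothetical helper; quantities are simplified to plain numbers, cpu in cores and memory in Mi):

```python
# The sum of what the namespace already uses plus the new pod must stay
# within every 'hard' ceiling, or the pod is rejected.
def fits_quota(hard: dict, used: dict, pod: dict) -> bool:
    return all(used.get(k, 0) + pod.get(k, 0) <= ceiling
               for k, ceiling in hard.items())

hard = {"requests.cpu": 1, "requests.memory": 1024,   # 1Gi expressed in Mi
        "limits.cpu": 2, "limits.memory": 2048}
used = {"requests.cpu": 0.5, "requests.memory": 600,
        "limits.cpu": 1, "limits.memory": 800}

ok_pod  = {"requests.cpu": 0.5, "requests.memory": 400,
           "limits.cpu": 1, "limits.memory": 800}
big_pod = {"requests.cpu": 1.0, "requests.memory": 400,
           "limits.cpu": 1, "limits.memory": 800}

print(fits_quota(hard, used, ok_pod))   # True
print(fits_quota(hard, used, big_pod))  # False: requests.cpu would reach 1.5 > 1
```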
ReplicationController:
A ReplicationController ensures that a specified number of pod replicas are running at any one time.
In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.
Its only job is to keep pods at the replica count the user desires, nothing more. The newer ReplicaSet and Deployment not only control the replica count but also support the new set-based label selectors and dynamic changes to the pod count, and Deployment additionally supports rolling upgrades and rollbacks.
As k8s's early replica-control resource object, it is rarely used nowadays.
ReplicaSet:
ReplicaSet is the next-generation Replication Controller. The only difference between a ReplicaSet and a Replication Controller right now is the selector support.
ReplicaSet supports the new set-based selector requirements as described in the labels user guide whereas a Replication Controller only supports equality-based selector requirements.
A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However,
a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to pods along with a lot of other useful features. Therefore,
we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.
This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in the spec section.
In short: ReplicaSet is the upgraded version of ReplicationController, and Deployment is a higher-level wrapper built on ReplicaSets. Creating ReplicaSets directly is strongly discouraged; controlling them through a Deployment is all that is needed.
ReplicaSet yaml example:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80
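The selector in this manifest combines the equality-based matchLabels with a set-based matchExpressions; the matching logic can be sketched as follows (a simplified, hypothetical `matches` helper covering only the In/NotIn/Exists operators):

```python
# Both selector styles must match for a pod's labels to be selected.
def matches(selector: dict, labels: dict) -> bool:
    # Equality-based: every matchLabels pair must be present verbatim.
    for key, value in selector.get("matchLabels", {}).items():
        if labels.get(key) != value:
            return False
    # Set-based: each expression constrains the label's value set.
    for expr in selector.get("matchExpressions", []):
        key, op, values = expr["key"], expr["operator"], expr.get("values", [])
        if op == "In" and labels.get(key) not in values:
            return False
        if op == "NotIn" and labels.get(key) in values:
            return False
        if op == "Exists" and key not in labels:
            return False
    return True

selector = {
    "matchLabels": {"tier": "frontend"},
    "matchExpressions": [{"key": "tier", "operator": "In",
                          "values": ["frontend"]}],
}
print(matches(selector, {"app": "guestbook", "tier": "frontend"}))  # True
print(matches(selector, {"app": "guestbook", "tier": "backend"}))   # False
```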
Common Deployment configuration items:
The most commonly operated resource object type in k8s.
A Deployment controller provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate.
You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment  # determines the Deployment's name
  labels:
    app: nginx
spec:
  replicas: 3  # the number of pod replicas
  selector:
    matchLabels:
      app: nginx  # this sub-item of selector lets the deployment control every pod whose labels include app: nginx
  template:
    # template under spec has many sub-items, e.g. the Pod's labels and the concrete container image, ports, etc.
    metadata:
      labels:
        app: nginx  # corresponds to the deployment's selector
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Once created, a deployment can also be modified from the command line, e.g.:
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
or:
kubectl edit deployment/nginx-deployment  # opens the manifest in an editor (vim by default); edit and save
If a change or upgrade turns out to be broken, it can be rolled back:
kubectl rollout undo deployment/nginx-deployment  (without a revision, rolls back to the previous one)
kubectl rollout undo deployment/nginx-deployment --to-revision=1  (rolls back to revision 1)
The rollout status can also be watched:
$ kubectl rollout status deployments nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
View the rollout history:
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create -f https://k8s.io/examples/controllers/nginx-deployment.yaml --record
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.91
Inspect the changes in a specific revision:
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
  Labels:       app=nginx
                pod-template-hash=1159050644
  Annotations:  kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
  Containers:
   nginx:
    Image:      nginx:1.9.1
    Port:       80/TCP
    QoS Tier:
      cpu:      BestEffort
      memory:   BestEffort
    Environment Variables:      <none>
  No volumes.
Dynamically change the number of pods in a deployment:
kubectl scale --replicas=5 deployment myapp  (scale up)
kubectl scale --replicas=1 deployment myapp  (scale down)
StatefulSets:
Used when configuring stateful clusters, e.g. Redis Sentinel, Redis Cluster, or MySQL master/slave.
DaemonSet:
A special pod type: a given DaemonSet runs at most one copy on each node.
Generally used for shared infrastructure services, such as storage containers or log-collection containers.
A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them.
As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
Some typical uses of a DaemonSet are:
running a cluster storage daemon, such as glusterd, ceph, on each node.
running a logs collection daemon on every node, such as fluentd or logstash.
running a node monitoring daemon on every node, such as Prometheus Node Exporter, collectd, Dynatrace OneAgent, Datadog agent, New Relic agent, Ganglia gmond or Instana agent.
DaemonSet yaml example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Common commands:
kubectl create -f fileName.yaml                     # create the resources defined in the yaml file
kubectl get pod podName -n nameSpace -o wide        # check a pod's status and the node it runs on
kubectl get pod podName --output=yaml -n nameSpace  # export a running pod as a yaml file
kubectl top pod podName -n nameSpace                # get some hardware and load information
kubectl describe pod podName -n nameSpace           # detailed information about a pod's run; frequently used when troubleshooting