Reference documents:

Kube-DNS resolves Service names to ClusterIPs cluster-wide, so Services can be accessed by name; this provides the basic service-discovery mechanism.

| Component  | Version | Remark                                     |
| ---------- | ------- | ------------------------------------------ |
| kubernetes | v1.9.2  |                                            |
| KubeDNS    | v1.14.8 | service-discovery mechanism same as SkyDNS |
Kubernetes supports running kube-dns as a Cluster Add-On: Kubernetes schedules a DNS Pod and a DNS Service in the cluster.
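With kube-dns in place, a Pod can reach a Service through a DNS name of the form `<service>.<namespace>.svc.<cluster-domain>`. A minimal sketch of how that name is composed (the Service/namespace values below are just illustrative; the cluster domain matches this article's kubelet `--cluster-domain` setting):

```shell
# Compose the FQDN kube-dns answers for a Service, following the
# <service>.<namespace>.svc.<cluster-domain> naming convention.
service="kubernetes"             # example Service name
namespace="default"              # its namespace
cluster_domain="cluster.local"   # kubelet --cluster-domain in this article

fqdn="${service}.${namespace}.svc.${cluster_domain}"
echo "${fqdn}"
```

Inside a Pod, short forms such as `kubernetes` or `kubernetes.default` also resolve, because kubelet writes matching `search` domains into the Pod's /etc/resolv.conf.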
When deploying Pods, to avoid image-pull timeouts during deployment, it is recommended to pull the relevant images to all nodes in advance (for this lab), or to set up a local image registry.
# Base pause image whose namespaces are shared inside each Pod;
# the pause image is already specified in the kubelet startup parameters; pull it locally, then re-tag it
[root@kubenode1 ~]# docker pull netonline/pause-amd64:3.0
[root@kubenode1 ~]# docker tag netonline/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
[root@kubenode1 ~]# docker images
# kubedns
[root@kubenode1 ~]# docker pull netonline/k8s-dns-kube-dns-amd64:1.14.8
# dnsmasq-nanny
[root@kubenode1 ~]# docker pull netonline/k8s-dns-dnsmasq-nanny-amd64:1.14.8
# sidecar
[root@kubenode1 ~]# docker pull netonline/k8s-dns-sidecar-amd64:1.14.8
# https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
[root@kubenode1 ~]# mkdir -p /usr/local/src/yaml/kubedns
[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns
[root@kubenode1 kubedns]# wget -O kube-dns.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/kube-dns.yaml.base
# kube-dns puts 4 objects (Service, ServiceAccount, ConfigMap, Deployment) in a single yaml file; the sections below modify each module in turn (in the original post the modified parts are shown in red bold);
# writing Pod yaml files is not covered here; see other references, e.g. 《Kubernetes權威指南》 (The Definitive Guide to Kubernetes);
# modified kube-dns.yaml: https://github.com/Netonline2016/kubernetes/blob/master/addons/kubedns/kube-dns.yaml
# the clusterIP just needs to match the kubelet startup parameter --cluster-dns; reserve one address in the service CIDR as the DNS address
[root@kubenode01 yaml]# vim kube-dns.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 169.169.0.11
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
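Since the Service clusterIP must stay consistent with the kubelet's `--cluster-dns` flag, a small pre-flight check can cross-reference the two. This is a hedged sketch: the `check_dns_ip` helper is hypothetical, and the kubelet options-file format (`--cluster-dns=<ip>` somewhere in the file) is an assumption about your environment:

```shell
# check_dns_ip <kube-dns.yaml> <kubelet-options-file>
# Prints OK when the Service clusterIP matches kubelet's --cluster-dns value.
check_dns_ip() {
  svc_ip=$(awk -F': *' '/clusterIP:/ {print $2; exit}' "$1")
  dns_ip=$(grep -o -- '--cluster-dns=[0-9.]*' "$2" | cut -d= -f2)
  if [ "$svc_ip" = "$dns_ip" ]; then
    echo "OK: $svc_ip"
  else
    echo "MISMATCH: clusterIP=$svc_ip cluster-dns=$dns_ip"
  fi
}
```

Usage might look like `check_dns_ip /usr/local/src/yaml/kubedns/kube-dns.yaml /etc/kubernetes/kubelet` (the kubelet config path varies by installation).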
# The kube-dns ServiceAccount needs no modification: the cluster's predefined ClusterRoleBinding system:kube-dns already binds the ServiceAccount kube-dns in the kube-system namespace (where system services are usually deployed) to the predefined ClusterRole system:kube-dns, and that ClusterRole grants access to kube-apiserver's DNS-related APIs.
# For RBAC authorization see: https://blog.frognew.com/2017/04/kubernetes-1.6-rbac.html
[root@kubenode1 ~]# kubectl get clusterrolebinding system:kube-dns -o yaml
[root@kubenode1 ~]# kubectl get clusterrole system:kube-dns -o yaml
Typical uses of a ConfigMap are:
- generating environment variables inside a container;
- setting command-line arguments for a container's startup command;
- mounting the ConfigMap as files or directories inside a container.
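As a hedged illustration of these usages (every name below — `app-config`, `LOG_LEVEL`, `app.conf` — is hypothetical, not part of the kube-dns deployment):

```yaml
# Hypothetical ConfigMap plus a Pod that consumes it two ways:
# as an environment variable and as a mounted file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  app.conf: |
    listen 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && cat /etc/app/app.conf && sleep 3600"]
    env:
    - name: LOG_LEVEL            # injected as an environment variable
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
    volumeMounts:
    - name: conf
      mountPath: /etc/app        # ConfigMap keys appear as files here
  volumes:
  - name: conf
    configMap:
      name: app-config
```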
No modification is needed to verify basic kube-dns functionality; if you need custom stub domains and upstream DNS servers, modify the ConfigMap as described in section 4.
# Change the startup images of the three containers at lines 97, 148 and 187 of the file;
# change the domain names at lines 127, 168, 200 and 201: the domain must match the kubelet startup parameter "--cluster-domain"; note the trailing "." after "cluster.local."
[root@kubenode1 kubedns]# vim kube-dns.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: netonline/k8s-dns-kube-dns-amd64:1.14.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: netonline/k8s-dns-dnsmasq-nanny-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local./127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: netonline/k8s-dns-sidecar-amd64:1.14.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local.,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local.,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns
[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns/
[root@kubenode1 kubedns]# kubectl create -f kube-dns.yaml
# All 3 containers of the kube-dns Pod are "Ready"; the Service, Deployment, etc. have also started normally
[root@kubenode1 kubedns]# kubectl get pod -n kube-system -o wide
[root@kubenode1 kubedns]# kubectl get service -n kube-system -o wide
[root@kubenode1 kubedns]# kubectl get deployment -n kube-system -o wide
# Pull a test image
[root@kubenode1 ~]# docker pull radial/busyboxplus:curl
# Start a test Pod and enter its container
[root@kubenode1 ~]# kubectl run curl --image=radial/busyboxplus:curl -i --tty
# Inside the Pod, check /etc/resolv.conf: the DNS records have been written to the file;
# nslookup can resolve the IPs of the cluster's system Services
[ root@curl-545bbf5f9c-hxml9:/ ]$ cat /etc/resolv.conf
[ root@curl-545bbf5f9c-hxml9:/ ]$ nslookup kubernetes.default
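For a Pod with the default ClusterFirst DNS policy, kubelet writes the cluster DNS IP (here 169.169.0.11) and search domains into /etc/resolv.conf. A sketch of what that file should contain and how to pull the nameserver out of it (the sample content below is an assumption based on this article's addresses, written to a local temp file rather than read from a real Pod):

```shell
# Hypothetical resolv.conf as kubelet would write it for a ClusterFirst Pod
# in the "default" namespace of this article's cluster.
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 169.169.0.11
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
EOF

# Extract the DNS server the Pod queries first
awk '/^nameserver/ {print $2; exit}' /tmp/resolv.conf.sample
```

The `search` list is what lets `nslookup kubernetes.default` succeed without the full `svc.cluster.local` suffix.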
Starting with Kubernetes v1.6, users can configure private DNS zones inside the cluster (commonly called stub domains) together with external upstream nameservers.
# Cluster administrators can use a ConfigMap to specify custom stub domains and upstream DNS servers
[root@kubenode1 ~]# cd /usr/local/src/yaml/kubedns/
# Modify the ConfigMap section of the kube-dns.yaml template directly
# stubDomains: optional; stub-domain definitions in JSON format; each key is a DNS suffix and its value is a JSON array of DNS server addresses; the target nameserver may also be a Kubernetes Service name; separate multiple custom DNS records with ","
# upstreamNameservers: a JSON array of at most 3 DNS server IPs; if set, it overrides the nameserver settings inherited from the node (/etc/resolv.conf)
[root@kubenode1 kubedns]# vim kube-dns.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  stubDomains: |
    {"out.kubernetes": ["172.20.1.201"]}
  upstreamNameservers: |
    ["114.114.114.114", "223.5.5.5"]
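Since both `stubDomains` and `upstreamNameservers` values are embedded JSON strings, a malformed value is an easy mistake to make before applying the ConfigMap. A quick pre-flight sketch (assumes python3 is available on the node; the values are the ones from this article):

```shell
# Validate the embedded JSON values before applying the ConfigMap;
# kube-dns will not accept them if they fail to parse.
stub='{"out.kubernetes": ["172.20.1.201"]}'
upstream='["114.114.114.114", "223.5.5.5"]'

for v in "$stub" "$upstream"; do
  if echo "$v" | python3 -c 'import json,sys; json.load(sys.stdin)' 2>/dev/null; then
    echo "valid JSON: $v"
  else
    echo "INVALID JSON: $v"
  fi
done
```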
# Delete the old kube-dns first, then create the new one;
# alternatively, delete only the old ConfigMap and create the new ConfigMap separately
[root@kubenode1 kubedns]# kubectl delete -f kube-dns.yaml
[root@kubenode1 kubedns]# kubectl create -f kube-dns.yaml
# Check the dnsmasq logs: the stub domain and upstream servers have taken effect;
# the kubedns and sidecar logs also contain output showing the stub domain and upstream servers are in effect
[root@kubenode1 kubedns]# kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name) -c dnsmasq
# Install the dnsmasq service on 172.20.1.201, the stub-domain server defined in the ConfigMap
[root@hanode01 ~]# yum install dnsmasq -y
# Generate a custom DNS record file
[root@hanode01 ~]# echo "192.168.100.11 server.out.kubernetes" > /tmp/hosts
# Start the DNS service;
# -q: log query records;
# -d: debug mode, run in the foreground to watch the output;
# -h: do not read /etc/hosts;
# -R: do not read /etc/resolv.conf;
# -H: use the custom DNS record file;
# the startup log warns that no upstream DNS server is configured, and shows the custom record file being read
[root@hanode01 ~]# dnsmasq -q -d -h -R -H /tmp/hosts
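The `-H` file uses plain hosts-file syntax (`<ip> <name>`), so the mapping dnsmasq will serve can be sanity-checked without actually sending a DNS query. A sketch only, with the `lookup` helper being a hypothetical stand-in for what dnsmasq does on an A-record query:

```shell
# Reproduce the record file from the article and look a name up in it,
# mimicking what dnsmasq -H does for an A query.
echo "192.168.100.11 server.out.kubernetes" > /tmp/hosts

lookup() {
  awk -v name="$1" '$2 == name {print $1}' /tmp/hosts
}

lookup server.out.kubernetes
```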
# Open udp port 53 in iptables
[root@hanode01 ~]# iptables -I INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT
# Pull the test image
[root@kubenode1 ~]# docker pull busybox
# Write the Pod yaml file;
# dnsPolicy is set to ClusterFirst, which is also the default
[root@kubenode1 ~]# touch dnstest.yaml
[root@kubenode1 ~]# vim dnstest.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dnstest
  namespace: default
spec:
  dnsPolicy: ClusterFirst
  containers:
  - name: busybox
    image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
# Create the Pod
[root@kubenode1 ~]# kubectl create -f dnstest.yaml
# An nslookup for server.out.kubernetes returns the predefined IP address
[root@kubenode1 ~]# kubectl exec -it dnstest -- nslookup server.out.kubernetes
Watch the output of the dnsmasq service on the stub-domain server 172.20.1.201: the kube node 172.30.200.23 (the node hosting the Pod; flannel network, SNAT out through the node) queries server.out.kubernetes, and dnsmasq returns the predefined host address.