Kubernetes Advanced (3): Service Discovery with CoreDNS

Service discovery is, simply put, the process by which services (applications) locate one another.

Problems that service discovery must solve:

1. Services are highly dynamic: pod IPs in k8s change as containers restart or migrate.

2. Releases are frequent: versions iterate quickly.

3. Automatic scaling must be supported: replica counts change during promotions or traffic peaks.

To deal with changing pod addresses, we previously deployed Service resources, hiding pods behind the fixed address a Service exposes.

So how do we automatically map a Service's name to the cluster IP it exposes, and thereby achieve automatic service discovery?

In Kubernetes, CoreDNS exists to solve exactly this problem.
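Conceptually, what CoreDNS maintains is a live mapping from Service name to cluster IP, kept up to date by watching the Kubernetes API. A minimal sketch of that idea in Python (the names and IPs are made up for illustration; the real thing is a DNS server, not a dictionary):

```python
# Toy service registry: maps a stable service name to its current address.
# CoreDNS keeps this mapping current automatically by watching the
# Kubernetes API; here we update it by hand to illustrate the idea.
class ServiceRegistry:
    def __init__(self):
        self._records = {}

    def register(self, name, ip):
        # Called whenever a Service (re)appears with a new cluster IP.
        self._records[name] = ip

    def resolve(self, name):
        # Clients look up by stable name, never by a pod's ephemeral IP.
        return self._records.get(name)

registry = ServiceRegistry()
registry.register("nginx-dp.kube-public", "192.168.12.34")   # hypothetical IP
print(registry.resolve("nginx-dp.kube-public"))              # -> 192.168.12.34

# The backing address changes (pod rescheduled); the name stays stable.
registry.register("nginx-dp.kube-public", "192.168.56.78")
print(registry.resolve("nginx-dp.kube-public"))              # -> 192.168.56.78
```

Clients always ask for the name, so address churn underneath never reaches them.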

 

Starting with CoreDNS, we will deploy services by delivering containers into Kubernetes, described with declarative manifests.

First, create an nginx virtual host on hdss7-200 to serve the resource manifests:

# vi /etc/nginx/conf.d/k8s-yaml.od.com.conf
server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
# mkdir -p /data/k8s-yaml/coredns
# nginx -t
# nginx -s reload

Add a DNS record on hdss7-11:

# vi /var/named/od.com.zone
Append one resolution record at the end:

$ORIGIN od.com.
$TTL 600        ; 10 minutes
@               IN SOA  dns.od.com. dnsadmin.od.com. (
                                2019061803 ; serial
                                10800      ; refresh (3 hours)
                                900        ; retry (15 minutes)
                                604800     ; expire (1 week)
                                86400      ; minimum (1 day)
                                )
                                NS   dns.od.com.
$TTL 60 ; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200
# systemctl restart named
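One detail worth remembering here: the zone's serial (2019061803 above) follows the common YYYYMMDDnn convention, and named only propagates a change when the serial increases, so bump it on every edit. A small illustrative sketch of that bumping rule (the convention, not any bind tooling):

```python
from datetime import date

def bump_serial(old_serial: int, today: date) -> int:
    """Return the next YYYYMMDDnn zone serial, per the common DNS convention."""
    today_base = int(today.strftime("%Y%m%d")) * 100
    if old_serial >= today_base:
        # Already edited today (or serial is ahead): just increment.
        return old_serial + 1
    # First edit of the day: today's date with revision 01.
    return today_base + 1

print(bump_serial(2019061803, date(2019, 6, 19)))  # -> 2019061901
print(bump_serial(2019061803, date(2019, 6, 18)))  # -> 2019061804
```

If you forget the bump, `systemctl restart named` will still load the zone locally, but secondaries would never pick up the change.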

CoreDNS GitHub repository: https://github.com/coredns/coredns

 
Deploy CoreDNS on hdss7-200:
# cd /data/k8s-yaml/coredns
# docker pull docker.io/coredns/coredns:1.6.1
# docker tag c0f6e815079e harbor.od.com/public/coredns:v1.6.1
# docker push harbor.od.com/public/coredns:v1.6.1

Then write the resource manifests; the official repository provides reference manifests to start from:

1. rbac.yaml -- grants CoreDNS the cluster permissions it needs

# vi rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system

2. cm.yaml -- ConfigMap holding the CoreDNS configuration for the cluster

# vi cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 192.168.0.0/16  # cluster domain and Service CIDR
        forward . 10.4.7.11   # upstream DNS server
        cache 30
        loop
        reload
        loadbalance
       }

3. dp.yaml -- Deployment (pod controller)

# vi dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile

4. svc.yaml -- Service resource

# vi svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
  - name: metrics
    port: 9153
    protocol: TCP

Then create the resources by fetching the manifests over HTTP; run this on any node:

# kubectl create -f http://k8s-yaml.od.com/coredns/rbac.yaml
# kubectl create -f http://k8s-yaml.od.com/coredns/cm.yaml
# kubectl create -f http://k8s-yaml.od.com/coredns/dp.yaml
# kubectl create -f http://k8s-yaml.od.com/coredns/svc.yaml

Check that everything is running:

# kubectl get all -n kube-system

 

Check the CoreDNS cluster IP:

# kubectl get svc -o wide -n kube-system

 

Test CoreDNS:

# dig -t A www.baidu.com @192.168.0.2 +short

 

It resolves www.baidu.com, so forwarding to the upstream DNS works.

To test CoreDNS resolution of Service names, first check whether the kube-public namespace has a Service resource; if not, create one from the existing nginx-dp Deployment:

# kubectl expose deployment nginx-dp --port=80 -n kube-public

Test: resolving through CoreDNS requires the FQDN form:

# dig -t A nginx-dp.kube-public.svc.cluster.local. @192.168.0.2 +short

 

Without our manually adding any record, the cluster IP of the nginx-dp Service is resolved:
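The FQDN that CoreDNS answers for a Service follows a fixed pattern: `<service>.<namespace>.svc.<cluster-domain>`. A tiny helper illustrating that pattern (with cluster.local as the cluster domain, matching the Corefile above):

```python
def service_fqdn(service, namespace, cluster_domain="cluster.local"):
    # Kubernetes Service DNS naming: <svc>.<ns>.svc.<cluster-domain>
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("nginx-dp", "kube-public"))
# -> nginx-dp.kube-public.svc.cluster.local
```

This is exactly the name we passed to dig above.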

 

Why does this work?

To understand what CoreDNS is doing here, this is recommended reading: Kubernetes internal DNS resolution, its principles, drawbacks, and optimizations.

你們能夠看到,當我進入到pod內部之後,咱們會發現咱們的dns地址是咱們的coredns地址,以及搜索域:

 

 

 
We have now solved name resolution inside the cluster. But how do we make our services reachable from outside the cluster?
Next, we will look at exposing Kubernetes services.