In a distributed microservice architecture, modules frequently need to call each other's application interfaces, which means each caller must be able to discover the other side's IP address; this is what we call service discovery. In Kubernetes, a Service provides a virtual IP inside the cluster that containers in the cluster can use to reach it.
# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.10.10.1     <none>        443/TCP          6d
my-service   ClusterIP   10.10.10.150   <none>        80/TCP,443/TCP   2d
The cluster IP of the my-service Service is 10.10.10.150.
First, take a look at the node labels:
# kubectl get node --show-labels
NAME           STATUS    ROLES     AGE   VERSION   LABELS
172.18.98.46   Ready     <none>    5d    v1.9.2    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=172.18.98.46
172.18.98.47   Ready     <none>    5d    v1.9.2    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,env_role=dev,kubernetes.io/hostname=172.18.98.47
Note that node 172.18.98.47 carries the label env_role=dev.
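As a side note, if a node did not already carry this label, it could be added with kubectl label (the node name and label here are simply the values used in this environment):
# kubectl label node 172.18.98.47 env_role=dev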
We start a pod with the following YAML configuration:
# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx-pod
spec:
  #nodeName: 172.18.98.47
  nodeSelector:
    env_role: dev
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  restartPolicy: OnFailure
Because of the nodeSelector, this pod will always be scheduled onto a node labeled env_role: dev.
# kubectl create -f pod.yaml
pod "nginx-pod" created分佈式
Looking at the details of my-service, we can see that the Service's selector matches this pod:
# kubectl describe svc my-service
Name: my-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx-pod
Type: ClusterIP
IP: 10.10.10.150
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 172.17.94.6:80
Port: https 443/TCP
TargetPort: 443/TCP
Endpoints: 172.17.94.6:443
Session Affinity: None
Events: <none>
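For reference, a Service manifest that would produce output like the above could look roughly as follows. This is only a sketch reconstructed from the describe output; the file actually used to create my-service is not shown in this walkthrough, and clusterIP is normally omitted so that Kubernetes assigns one automatically:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.10.10.150    # usually left out; shown here only to match the output above
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP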
Check the cluster's endpoints:
# kubectl get ep
NAME         ENDPOINTS                        AGE
kubernetes   172.18.98.48:6443                6d
my-service   172.17.94.6:80,172.17.94.6:443   2d
Check the pod's IP:
# kubectl get pod -o wide | grep nginx-pod
nginx-pod 1/1 Running 0 41m 172.17.94.6 172.18.98.47
It is exactly 172.17.94.6, matching the endpoint above.
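Before DNS is involved at all, the virtual IP should already be reachable from inside the cluster through kube-proxy. A quick sanity check from a node (assuming curl is available there) might be:
# curl -I http://10.10.10.150/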
Now set up the YAML file for the DNS add-on:
vim kube-dns.yaml
Its contents are as follows:
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "kubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.10.10.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.7
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        # livenessProbe:
        #   httpGet:
        #     path: /healthcheck/kubedns
        #     port: 10054
        #     scheme: HTTP
        #   initialDelaySeconds: 60
        #   timeoutSeconds: 5
        #   successThreshold: 1
        #   failureThreshold: 5
        # readinessProbe:
        #   httpGet:
        #     path: /readiness
        #     port: 8081
        #     scheme: HTTP
        #   initialDelaySeconds: 3
        #   timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
        # livenessProbe:
        #   httpGet:
        #     path: /healthcheck/dnsmasq
        #     port: 10054
        #     scheme: HTTP
        #   initialDelaySeconds: 60
        #   timeoutSeconds: 5
        #   successThreshold: 1
        #   failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.7
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default
      serviceAccountName: kube-dns
I commented out some of the health checks. Now create the resources from this file:
# kubectl create -f kube-dns.yaml
Check the resources in the kube-system namespace:
# kubectl get all -n kube-system
NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-dns   1         1         1            1           4h

NAME                     DESIRED   CURRENT   READY   AGE
rs/kube-dns-769d6c4665   1         1         1       4h
NAME                           READY     STATUS    RESTARTS   AGE
po/kube-dns-769d6c4665-ph7dm   3/3       Running   0          4h

NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns   ClusterIP   10.10.10.2   <none>        53/UDP,53/TCP   4h
You can see that the DNS add-on has started successfully.
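For pods to actually use this DNS service, the kubelet on every node has to be pointed at it. In this environment that presumably means each kubelet was started with flags along the following lines (the exact config file or systemd unit depends on how the kubelet was installed):
--cluster-dns=10.10.10.2
--cluster-domain=cluster.local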
Now verify the DNS.
We start a busybox pod with the following YAML configuration:
vim busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox:1.24
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
Run the pod:
kubectl create -f busybox.yaml
# kubectl exec -it busybox -- nslookup kubernetes.default
Server: 10.10.10.2
Address 1: 10.10.10.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.10.10.1 kubernetes.default.svc.cluster.local
# kubectl exec -it busybox -- nslookup my-service.default
Server: 10.10.10.2
Address 1: 10.10.10.2 kube-dns.kube-system.svc.cluster.local
Name: my-service.default
Address 1: 10.10.10.150 my-service.default.svc.cluster.local
We can see that the cluster virtual IP of my-service is resolved correctly: 10.10.10.150.
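As a final end-to-end check, busybox ships with wget, so the Service can also be reached by DNS name instead of by IP (a sketch; the response is simply nginx's default page):
# kubectl exec -it busybox -- wget -q -O - http://my-service.default.svc.cluster.local/
Within the same namespace the short name my-service works just as well, because the pod's resolver search domains cover default.svc.cluster.local.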