Environment this article depends on: a Kubernetes cluster deployed on CentOS 7, with the SkyDNS service deployed on top of that cluster.
In this example, we will create one redis-master, two redis-slave replicas, and three frontend replicas. The slaves replicate the master's data in real time; the frontend writes data to the master and then reads it back from the slaves. All inter-service calls (for example, a slave syncing data from the master, the frontend writing to the master, or the frontend reading from a slave) are resolved via DNS.
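The traffic pattern above can be sketched with in-memory stand-ins (hypothetical stubs, not the real PHP frontend or Redis replication): writes go to the master, the slave replicates, and reads come from the slave. In the cluster, `redis-master` and `redis-slave` are Service DNS names resolved by kube-dns.

```python
class RedisStub:
    """Tiny in-memory stand-in for a Redis instance (illustration only)."""
    def __init__(self):
        self.data = {}
    def set(self, key, value):
        self.data[key] = value
    def get(self, key):
        return self.data.get(key)

master = RedisStub()   # in the cluster: reached via the DNS name "redis-master"
slave = RedisStub()    # in the cluster: reached via the DNS name "redis-slave"

def replicate(src, dst):
    # The gb-redisslave image runs real Redis replication; this just copies.
    dst.data.update(src.data)

master.set("messages", "hello")   # frontend writes to the master
replicate(master, slave)          # slave syncs from the master
print(slave.get("messages"))      # frontend reads from the slave -> hello
```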
This example depends on the following images; please pull them in advance:
docker.io/redis:latest                                       1a8a9ee54eb7
registry.access.redhat.com/rhel7/pod-infrastructure:latest   34d3450d733b
gcr.io/google_samples/gb-frontend:v3                         c038466384ab
gcr.io/google_samples/gb-redisslave:v1                       5f026ddffa27
A working Kubernetes environment with Cluster DNS is required, as shown below:
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-node-1   Ready     7d
k8s-node-2   Ready     7d
[root@k8s-master ~]# kubectl get deployment --all-namespaces
NAMESPACE     NAME                          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns                      1         1         1            1           5d
kube-system   kubernetes-dashboard-latest   1         1         1            1           6d
1) redis-master-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: redis
        ports:
        - containerPort: 6379
2) redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
  # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master
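The Service's `selector` decides which pods receive its traffic: every pod whose labels contain the selector's key/value pairs becomes an endpoint. A minimal sketch of that matching logic (the pod names and IPs here are made up for illustration):

```python
# Hypothetical pod records, shaped like the pods this article creates.
pods = [
    {"name": "redis-master-5wyku", "labels": {"name": "redis-master"}, "ip": "172.17.0.2"},
    {"name": "redis-slave-7h295",  "labels": {"name": "redis-slave"},  "ip": "172.17.0.3"},
    {"name": "redis-slave-r355y",  "labels": {"name": "redis-slave"},  "ip": "172.17.0.4"},
]

def endpoints(selector, pods):
    """Return IPs of pods whose labels contain every selector key/value."""
    return [p["ip"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

print(endpoints({"name": "redis-master"}, pods))  # ['172.17.0.2']
print(endpoints({"name": "redis-slave"}, pods))   # ['172.17.0.3', '172.17.0.4']
```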
Run on the master:
[root@k8s-master yaml]# kubectl create -f redis-master-controller.yaml
replicationcontroller "redis-master" created
[root@k8s-master yaml]# kubectl create -f redis-master-service.yaml
service "redis-master" created
[root@k8s-master yaml]# kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
redis-master   1         1         1       1d
[root@k8s-master yaml]# kubectl get pod
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-5wyku   1/1       Running   0          1d
1) redis-slave-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: worker
        image: gcr.io/google_samples/gb-redisslave:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 6379
2) redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
  - port: 6379
  selector:
    name: redis-slave
Run on the master:
[root@k8s-master yaml]# kubectl create -f redis-slave-controller.yaml
replicationcontroller "redis-slave" created
[root@k8s-master yaml]# kubectl create -f redis-slave-service.yaml
service "redis-slave" created
[root@k8s-master yaml]# kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
redis-master   1         1         1       1d
redis-slave    2         2         2       44m
[root@k8s-master yaml]# kubectl get pod
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-5wyku   1/1       Running   0          1d
redis-slave-7h295    1/1       Running   0          44m
redis-slave-r355y    1/1       Running   0          44m
1) frontend-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: frontend
        image: gcr.io/google_samples/gb-frontend:v3
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
2) frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
  selector:
    name: frontend
Run on the master:
[root@k8s-master yaml]# kubectl create -f frontend-controller.yaml
replicationcontroller "frontend" created
[root@k8s-master yaml]# kubectl create -f frontend-service.yaml
service "frontend" created
[root@k8s-master yaml]# kubectl get rc
NAME           DESIRED   CURRENT   READY   AGE
frontend       3         3         3       28m
redis-master   1         1         1       1d
redis-slave    2         2         2       44m
[root@k8s-master yaml]# kubectl get pod
NAME                 READY     STATUS    RESTARTS   AGE
frontend-ax654       1/1       Running   0          29m
frontend-k8caj       1/1       Running   0          29m
frontend-x6bhl       1/1       Running   0          29m
redis-master-5wyku   1/1       Running   0          1d
redis-slave-7h295    1/1       Running   0          44m
redis-slave-r355y    1/1       Running   0          44m
[root@k8s-master yaml]# kubectl get service
NAME           CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
frontend       10.254.93.91     <nodes>       80/TCP     47m
kubernetes     10.254.0.1       <none>        443/TCP    7d
redis-master   10.254.132.210   <none>        6379/TCP   1d
redis-slave    10.254.104.23    <none>        6379/TCP   1h
At this point, Guestbook is running in Kubernetes, but the outside world cannot reach it through frontend-service's ClusterIP 10.254.93.91. A Service's virtual IP belongs to Kubernetes' internal virtual network and is not routable from outside the cluster, so a layer of external-to-internal forwarding is needed. This example uses NodePort: since frontend-service was created with nodePort: 30001, Kubernetes opens that port (the NodePort) on every node, and the real service can be accessed from outside through any node's address on that port.
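The frontend Service therefore exposes two addresses, one per network layer (a sketch using the values from the YAML above; the node IP is hypothetical):

```python
# Port values taken from frontend-service.yaml; targetPort defaults to port.
service = {"port": 80, "nodePort": 30001}
cluster_ip = "10.254.93.91"   # ClusterIP from `kubectl get service`
node_ip = "192.168.0.11"      # any node's address (hypothetical)

# Inside the cluster: pods reach the Service on ClusterIP:port.
print(f"internal: http://{cluster_ip}:{service['port']}")
# Outside the cluster: clients hit any node on the NodePort.
print(f"external: http://{node_ip}:{service['nodePort']}")
```

From a machine outside the cluster, opening http://&lt;any-node-ip&gt;:30001 in a browser should show the Guestbook page.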