Note: all of the following steps are based on a CentOS 7 system.
Ansible can be installed via yum or pip. Since kubernetes-ansible uses passwords, sshpass also needs to be installed:
pip install ansible
wget http://sourceforge.net/projects/sshpass/files/latest/download
tar zxvf download
cd sshpass-1.05
./configure && make && make install
# git clone https://github.com/eparis/kubernetes-ansible.git
# cd kubernetes-ansible
# # Set the ssh user to root in group_vars/all.yml
# cat group_vars/all.yml | grep ssh
ansible_ssh_user: root
# # Each kubernetes service gets its own IP address.  These are not real IPs.
# # You need only select a range of IPs which are not in use elsewhere in your
# # environment. This must be done even if you do not use the network setup
# # provided by the ansible scripts.
# cat group_vars/all.yml | grep kube_service_addresses
kube_service_addresses: 10.254.0.0/16
# # Set the root password
# echo "password" > ~/rootpassword
Configure the IP addresses of the master, etcd, and minions:
# cat inventory
[masters]
192.168.0.7

[etcd]
192.168.0.7

[minions]
# kube_ip_addr is the Pod address pool on that minion; the default netmask is /24
192.168.0.3 kube_ip_addr=10.0.1.1
192.168.0.6 kube_ip_addr=10.0.2.1
Test connectivity to each machine and configure SSH keys:
# ansible-playbook -i inventory ping.yml   # this command prints some errors, which can be ignored
# ansible-playbook -i inventory keys.yml
At the moment kubernetes-ansible does not handle dependencies very thoroughly, so a few things need to be configured manually first:
# # Install iptables
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'yum -y install iptables-services'
# # Add the kubernetes repository for CentOS 7
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'curl https://copr.fedoraproject.org/coprs/eparis/kubernetes-epel-7/repo/epel-7/eparis-kubernetes-epel-7-epel-7.repo -o /etc/yum.repos.d/eparis-kubernetes-epel-7-epel-7.repo'
# # Configure ssh to prevent ssh connection timeouts
# sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/ssh_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/g" /etc/ssh/sshd_config'
# ansible all -i inventory --vault-password-file=~/rootpassword -a 'systemctl restart sshd'
Configure the Docker network. In practice this means creating the kbr0 bridge, assigning it an IP address, and setting up routes:
# ansible-playbook -i inventory hack-network.yml

PLAY [minions] ****************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-bridge | Create kubernetes bridge interface] **************
changed: [192.168.0.3]
changed: [192.168.0.6]

TASK: [network-hack-bridge | Configure docker to use the bridge inferface] ****
changed: [192.168.0.6]
changed: [192.168.0.3]

PLAY [minions] ****************************************************************

GATHERING FACTS ***************************************************************
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-routes | stat path=/etc/sysconfig/network-scripts/ifcfg-{{ ansible_default_ipv4.interface }}] ***
ok: [192.168.0.6]
ok: [192.168.0.3]

TASK: [network-hack-routes | Set up a network config file] ********************
skipping: [192.168.0.3]
skipping: [192.168.0.6]

TASK: [network-hack-routes | Set up a static routing table] *******************
changed: [192.168.0.3]
changed: [192.168.0.6]

NOTIFIED: [network-hack-routes | apply changes] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]

NOTIFIED: [network-hack-routes | upload script] *******************************
changed: [192.168.0.6]
changed: [192.168.0.3]

NOTIFIED: [network-hack-routes | run script] **********************************
changed: [192.168.0.3]
changed: [192.168.0.6]

NOTIFIED: [network-hack-routes | remove script] *******************************
changed: [192.168.0.3]
changed: [192.168.0.6]

PLAY RECAP ********************************************************************
192.168.0.3                : ok=10   changed=7    unreachable=0    failed=0
192.168.0.6                : ok=10   changed=7    unreachable=0    failed=0
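Once the play completes, it is worth spot-checking a minion: the kbr0 bridge should carry that minion's kube_ip_addr gateway address, and there should be static routes to the other minions' Pod subnets. A minimal check, assuming the inventory above (addresses and output will vary):

# # Run on minion 192.168.0.3
# ip addr show kbr0        # expect the bridge to hold 10.0.1.1/24 from kube_ip_addr
# ip route | grep 10.0.2.0 # expect a route to the other minion's Pod subnet via 192.168.0.6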
Finally, install and configure Kubernetes on all nodes:
ansible-playbook -i inventory setup.yml
After it finishes, you can see that the kube-related services are all running:
# # Service status
# ansible all -i inventory -k -a 'bash -c "systemctl | grep -i kube"'
SSH password:
192.168.0.3 | success | rc=0 >>
kube-proxy.service     loaded active running   Kubernetes Kube-Proxy Server
kubelet.service        loaded active running   Kubernetes Kubelet Server

192.168.0.7 | success | rc=0 >>
kube-apiserver.service            loaded active running   Kubernetes API Server
kube-controller-manager.service   loaded active running   Kubernetes Controller Manager
kube-scheduler.service            loaded active running   Kubernetes Scheduler Plugin

192.168.0.6 | success | rc=0 >>
kube-proxy.service     loaded active running   Kubernetes Kube-Proxy Server
kubelet.service        loaded active running   Kubernetes Kubelet Server

# # Listening ports
# ansible all -i inventory -k -a 'bash -c "netstat -tulnp | grep -E \"(kube)|(etcd)\""'
SSH password:
192.168.0.7 | success | rc=0 >>
tcp        0      0 192.168.0.7:7080        0.0.0.0:*               LISTEN      14486/kube-apiserve
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      14544/kube-schedule
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      14515/kube-controll
tcp6       0      0 :::7001                 :::*                    LISTEN      13986/etcd
tcp6       0      0 :::4001                 :::*                    LISTEN      13986/etcd
tcp6       0      0 :::8080                 :::*                    LISTEN      14486/kube-apiserve

192.168.0.3 | success | rc=0 >>
tcp        0      0 192.168.0.3:10250       0.0.0.0:*               LISTEN      9500/kubelet
tcp6       0      0 :::46309                :::*                    LISTEN      9524/kube-proxy
tcp6       0      0 :::48500                :::*                    LISTEN      9524/kube-proxy
tcp6       0      0 :::38712                :::*                    LISTEN      9524/kube-proxy

192.168.0.6 | success | rc=0 >>
tcp        0      0 192.168.0.6:10250       0.0.0.0:*               LISTEN      9474/kubelet
tcp6       0      0 :::52870                :::*                    LISTEN      9498/kube-proxy
tcp6       0      0 :::57961                :::*                    LISTEN      9498/kube-proxy
tcp6       0      0 :::40720                :::*                    LISTEN      9498/kube-proxy
Run the following commands to check whether all the services are working properly:
# curl -s -L http://192.168.0.7:4001/version                                      # check etcd
etcd 0.4.6
# curl -s -L http://192.168.0.7:8080/api/v1beta1/pods | python -m json.tool       # check apiserver
{
    "apiVersion": "v1beta1",
    "creationTimestamp": null,
    "items": [],
    "kind": "PodList",
    "resourceVersion": 8,
    "selfLink": "/api/v1beta1/pods"
}
# curl -s -L http://192.168.0.7:8080/api/v1beta1/minions | python -m json.tool    # check apiserver
# curl -s -L http://192.168.0.7:8080/api/v1beta1/services | python -m json.tool   # check apiserver
# kubectl get minions
NAME
192.168.0.3
192.168.0.6
First, create a Pod:
# cat ~/apache.json
{
  "id": "fedoraapache",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "fedoraapache",
      "containers": [{
        "name": "fedoraapache",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 80
        }]
      }]
    }
  },
  "labels": {
    "name": "fedoraapache"
  }
}
# kubectl create -f apache.json
# kubectl get pod fedoraapache
NAME                IMAGE(S)            HOST                LABELS              STATUS
fedoraapache        fedora/apache       192.168.0.6/        name=fedoraapache   Waiting
# # Since the image download is slow, the pod stays in Waiting for quite a while; once the image has been pulled it comes up quickly
# kubectl get pod fedoraapache
NAME                IMAGE(S)            HOST                LABELS              STATUS
fedoraapache        fedora/apache       192.168.0.6/        name=fedoraapache   Running
# # Check the container status on the 192.168.0.6 machine
# docker ps
CONTAINER ID        IMAGE                     COMMAND                CREATED             STATUS              PORTS                NAMES
77dd7fe1b24f        fedora/apache:latest      "/run-apache.sh"       31 minutes ago      Up 31 minutes                            k8s_fedoraapache.f14c9521_fedoraapache.default.etcd_1416396375_4114a4d0
1455249f2c7d        kubernetes/pause:latest   "/pause"               About an hour ago   Up About an hour    0.0.0.0:80->80/tcp   k8s_net.e9a68336_fedoraapache.default.etcd_1416396375_11274cd2
# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
fedora/apache       latest              2e11d8fd18b3        7 weeks ago         554.1 MB
kubernetes/pause    latest              6c4579af347b        4 months ago        239.8 kB
# iptables-save | grep 2.2
-A DOCKER ! -i kbr0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.0.2.2:80
-A FORWARD -d 10.0.2.2/32 ! -i kbr0 -o kbr0 -p tcp -m tcp --dport 80 -j ACCEPT
# curl localhost    # the Pod started OK and the port mapping works
Apache
Replication controllers ensure that enough containers are running, so that load can be balanced and the service stays highly available:
A replication controller combines a template for pod creation (a 「cookie-cutter」 if you will) and a number of desired replicas, into a single API object. The replica controller also contains a label selector that identifies the set of objects managed by the replica controller. The replica controller constantly measures the size of this set relative to the desired size, and takes action by creating or deleting pods.
# cat replica.json
{
  "id": "apacheController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "labels": {"name": "fedoraapache"},
  "desiredState": {
    "replicas": 3,
    "replicaSelector": {"name": "fedoraapache"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "fedoraapache",
          "containers": [{
            "name": "fedoraapache",
            "image": "fedora/apache",
            "ports": [{
              "containerPort": 80
            }]
          }]
        }
      },
      "labels": {"name": "fedoraapache"}
    }
  }
}
# kubectl create -f replica.json
apacheController
# kubectl get replicationController
NAME                IMAGE(S)            SELECTOR             REPLICAS
apacheController    fedora/apache       name=fedoraapache    3
# kubectl get pod
NAME                                   IMAGE(S)            HOST                LABELS              STATUS
fedoraapache                           fedora/apache       192.168.0.6/        name=fedoraapache   Running
cf6726ae-6fed-11e4-8a06-fa163e3873e1   fedora/apache       192.168.0.3/        name=fedoraapache   Running
cf679152-6fed-11e4-8a06-fa163e3873e1   fedora/apache       192.168.0.3/        name=fedoraapache   Running
As you can see, three containers are now running.
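A quick way to watch the controller doing its job is to delete one of the pods it created and see a replacement show up (the pod ID below is taken from the listing above; this assumes the kubectl build in use supports deleting a pod by ID):

# kubectl delete pod cf6726ae-6fed-11e4-8a06-fa163e3873e1
# # After a moment the controller notices that only two pods match the selector and creates a third
# kubectl get pod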
With the replication controller there are now multiple Pods running, but each Pod is assigned its own IP, and those IPs may change as the system runs. So the question becomes: how do you access this service from the outside? That is what services are for.
A Kubernetes service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The goal of services is to provide a bridge for non-Kubernetes-native applications to access backends without the need to write code that is specific to Kubernetes. A service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends. The set of pods targeted is determined by a label selector.
As an example, consider an image-process backend which is running with 3 live replicas. Those replicas are fungible - frontends do not care which backend they use. While the actual pods that comprise the set may change, the frontend client(s) do not need to know that. The service abstraction enables this decoupling.
Unlike pod IP addresses, which actually route to a fixed destination, service IPs are not actually answered by a single host. Instead, we use iptables (packet processing logic in Linux) to define 「virtual」 IP addresses which are transparently redirected as needed. We call the tuple of the service IP and the service port the portal. When clients connect to the portal, their traffic is automatically transported to an appropriate endpoint. The environment variables for services are actually populated in terms of the portal IP and port. We will be adding DNS support for services, too.
# cat service.json
{
  "id": "fedoraapache",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "fedoraapache"
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8987
}
# kubectl create -f service.json
fedoraapache
# kubectl get service
NAME                LABELS                                    SELECTOR            IP                  PORT
kubernetes-ro       component=apiserver,provider=kubernetes                       10.254.0.2          80
kubernetes          component=apiserver,provider=kubernetes                       10.254.0.1          443
fedoraapache                                                  name=fedoraapache   10.254.0.3          8987
# # Switch to a minion
# curl 10.254.0.3:8987
Apache
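To see how the portal is actually wired up on a minion, look at the iptables rules that kube-proxy installs for the service IP: traffic to 10.254.0.3:8987 is redirected to a local kube-proxy port, which then forwards it to one of the fedoraapache pods. The exact chain names and ports depend on the kube-proxy version, so treat this as a spot check:

# # Run on a minion
# iptables-save | grep 10.254.0.3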
You can also give a service a public IP, provided a cloud provider has been configured. The currently supported cloud providers include GCE, AWS, OpenStack, oVirt, Vagrant, and others.
For some parts of your application (e.g. your frontend) you want to expose a service on an external (publicly visible) IP address. To achieve this, you can set the createExternalLoadBalancer flag on the service. This sets up a cloud provider specific load balancer (assuming that it is supported by your cloud provider) and also sets up IPTables rules on each host that map packets from the specified External IP address to the service proxy in the same manner as internal service IP addresses.
Note: OpenStack support is implemented with Rackspace's open-source github.com/rackspace/gophercloud.
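As a sketch of what that looks like (assuming a cloud provider has been configured; the file name and id below are made up for illustration), createExternalLoadBalancer is just one more field on the v1beta1 service definition:

# cat external-service.json
{
  "id": "fedoraapache-external",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "fedoraapache"
  },
  "protocol": "TCP",
  "containerPort": 80,
  "port": 8987,
  "createExternalLoadBalancer": true
}
# kubectl create -f external-service.json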
Currently, there are three types of application health checks that you can choose from:
* HTTP Health Checks - The Kubelet will call a web hook. If it returns between 200 and 399, it is considered success, failure otherwise.
* Container Exec - The Kubelet will execute a command inside your container. If it returns 「ok」 it will be considered a success.
* TCP Socket - The Kubelet will attempt to open a socket to your container. If it can establish a connection, the container is considered healthy, if it can’t it is considered a failure.
In all cases, if the Kubelet discovers a failure, the container is restarted. The container health checks are configured in the 「LivenessProbe」 section of your container config. There you can also specify an 「initialDelaySeconds」 that is a grace period from when the container is started to when health checks are performed, to enable your container to perform any necessary initialization.
Here is an example config for a pod with an HTTP health check:
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: php
    containers:
      - name: nginx
        image: dockerfile/nginx
        ports:
          - containerPort: 80
        # defines the health checking
        livenessProbe:
          # turn on application health checking
          enabled: true
          type: http
          # length of time to wait for a pod to initialize
          # after pod startup, before applying health checking
          initialDelaySeconds: 30
          # an http probe
          httpGet:
            path: /_status/healthz
            port: 8080
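Conceptually, the HTTP probe the Kubelet runs against this pod is no different from polling the handler yourself from the minion; any status code from 200 to 399 counts as healthy. A rough equivalent (the pod IP below is hypothetical):

# # 10.0.2.3 stands in for the pod's IP; a 2xx or 3xx status code means the check passes
# curl -s -o /dev/null -w "%{http_code}\n" http://10.0.2.3:8080/_status/healthz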