First, let's set the configuration file aside for the moment:
apiVersion: apps/v1
# StatefulSet gives the pods fixed hostnames; use it for stateful services.
# One catch with StatefulSet: while a pod is not in the Running state, its
# hostname cannot be resolved. This creates a chicken-and-egg problem: when I
# sed-replace hostnames, the pod is not Running yet, so it can only resolve
# its own hostname, not the other pods' hostnames. That is why the ZooKeeper
# config later switches to IPs.
kind: StatefulSet
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper   # the 3 generated pods are therefore named zookeeper-0, zookeeper-1, zookeeper-2
  replicas: 3
  revisionHistoryLimit: 10
  selector:                # required for a StatefulSet
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath:
          path: /var/log/zookeeper
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:3.4.10
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 30
          timeoutSeconds: 3
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 2
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_NAME   # declare this built-in field and, once the pod is created, `echo ${MY_POD_NAME}` inside it prints the hostname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  # This is my cluster name: from any of the generated pods you can ping
  # zookeeper, so it effectively acts as the cluster_name of the 3 pods.
  # Successive pings do not always return the same address, and
  # `nslookup zookeeper` returns the pod IPs of all 3 pods, 3 records in total.
  name: zookeeper
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper
  clusterIP: None   # this line is mandatory
[root@host5 src]# kubectl get pod --all-namespaces -o wide
NAMESPACE   NAME          READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
default     zookeeper-0   1/1     Running   0          12m   192.168.55.69   host3   <none>           <none>
default     zookeeper-1   1/1     Running   0          12m   192.168.31.93   host4   <none>           <none>
default     zookeeper-2   1/1     Running   0          12m   192.168.55.70   host3   <none>           <none>
bash-4.3# nslookup zookeeper
nslookup: can't resolve '(null)': Name does not resolve

Name:      zookeeper
Address 1: 192.168.55.70 zookeeper-2.zookeeper.default.svc.cluster.local
Address 2: 192.168.55.69 zookeeper-0.zookeeper.default.svc.cluster.local
Address 3: 192.168.31.93 zookeeper-1.zookeeper.default.svc.cluster.local
bash-4.3# ping zookeeper-0.zookeeper
PING zookeeper-0.zookeeper (192.168.55.69): 56 data bytes
64 bytes from 192.168.55.69: seq=0 ttl=63 time=0.109 ms
64 bytes from 192.168.55.69: seq=1 ttl=63 time=0.212 ms
^C
--- zookeeper-0.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.109/0.160/0.212 ms
bash-4.3# ping zookeeper-1.zookeeper
PING zookeeper-1.zookeeper (192.168.31.93): 56 data bytes
64 bytes from 192.168.31.93: seq=0 ttl=62 time=0.535 ms
64 bytes from 192.168.31.93: seq=1 ttl=62 time=0.507 ms
64 bytes from 192.168.31.93: seq=2 ttl=62 time=0.587 ms
^C
--- zookeeper-1.zookeeper ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.507/0.543/0.587 ms
bash-4.3# ping zookeeper-2.zookeeper
PING zookeeper-2.zookeeper (192.168.55.70): 56 data bytes
64 bytes from 192.168.55.70: seq=0 ttl=64 time=0.058 ms
64 bytes from 192.168.55.70: seq=1 ttl=64 time=0.081 ms
^C
--- zookeeper-2.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.069/0.081 ms
k8s自帶的經常使用變量以下:redis
env:
- name: MY_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: MY_POD_SERVICE_ACCOUNT
  valueFrom:
    fieldRef:
      fieldPath: spec.serviceAccountName

spec.nodeName: the name of the node hosting the pod
status.podIP: the pod's IP address
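Once declared, these can be verified from inside a running pod. A quick check, using the zookeeper-0 pod from the manifest above:

kubectl exec zookeeper-0 -- env | grep MY_
kubectl exec zookeeper-0 -- sh -c 'echo ${MY_POD_NAME}'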
Now let's look at the configuration file:
[root@docker06 conf]# cat zoo.cfg |grep -v ^#|grep -v ^$
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress=docker06
server.1=docker05:2888:3888
server.2=docker06:2888:3888
server.3=docker04:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000
We need to change it to something like this:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
# This is the line that must be rewritten to the local pod's MY_POD_IP; the 3
# server lines below are fixed. We can mount the config file via a ConfigMap
# and then sed-replace this line inside the pod.
clientPortAddress=docker06
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000
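The replacement itself is a single sed call. A minimal sketch, assuming MY_POD_IP is injected via the downward API as shown earlier and the file has been made writable:

sed -i "s/^clientPortAddress=.*/clientPortAddress=${MY_POD_IP}/" /conf/zoo.cfg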
The following approach can be used as a reference:
First, mount the configuration files into the pod via a ConfigMap, e.g. fix-ip.sh:
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/var/lib/redis/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e "/myself/s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
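Note the exec "$@" at the end of fix-ip.sh: the script is an entrypoint wrapper that does its setup and then replaces itself with whatever command follows it, so the real server still runs as PID 1. The same pattern in isolation, as a minimal sketch:

#!/bin/sh
# wrapper.sh: do the setup work, then hand PID 1 over to the real process
echo "setup work goes here"
exec "$@"   # e.g. ./wrapper.sh redis-server /etc/redis/redis.conf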
Then run this script when the pod starts:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: 10.11.100.85/library/redis
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]   # run the script first, which then starts redis
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
          readOnly: false
        - name: data
          mountPath: /var/lib/redis
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
#          items:
#          - key: redis.conf
#            path: redis.conf
#          - key: fix-ip.sh
#            path: fix-ip.sh
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        name: redis-cluster
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 150Mi
Note: configuration files projected from a ConfigMap are read-only and cannot be edited with sed in place. You can mount the ConfigMap into a temporary directory and copy the file to its real location before running sed. This workaround has its own catch, though: if you later update the ConfigMap dynamically, only the file in the temporary directory changes, not the copy.
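A minimal sketch of that workaround, assuming the ConfigMap is mounted read-only at a hypothetical /config-ro and the container's command copies it into a writable /conf before editing:

cp /config-ro/zoo.cfg /conf/zoo.cfg
sed -i "s/^clientPortAddress=.*/clientPortAddress=${MY_POD_IP}/" /conf/zoo.cfg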
Configuration used in the actual production environment:
1. Rebuild the image
[root@host4 zookeeper]# ll
total 4
drwxr-xr-x 2 root root  45 May 24 15:48 conf
-rw-r--r-- 1 root root 143 May 23 06:19 Dockerfile
drwxr-xr-x 2 root root  20 May 24 15:48 scripts
[root@host4 zookeeper]# cd conf
[root@host4 conf]# ll
total 8
-rw-r--r-- 1 root root 1503 May 23 04:15 log4j.properties
-rw-r--r-- 1 root root  324 May 24 15:48 zoo.cfg
[root@host4 conf]# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress=PODIP   # use the IP here and hostnames below; the reason is explained earlier in this article
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000
[root@host4 conf]# cd ../scripts/
[root@host4 scripts]# ll
total 4
-rwxr-xr-x 1 root root 177 May 24 15:48 sed.sh
[root@host4 scripts]# cat sed.sh
#!/bin/bash
# derive the ZooKeeper myid from the ordinal in the pod name (zookeeper-0 -> 1, etc.)
MY_ID=`echo ${MY_POD_NAME} |awk -F'-' '{print $NF}'`
MY_ID=`expr ${MY_ID} + 1`
echo ${MY_ID} > /data/myid
# replace the PODIP placeholder in zoo.cfg with this pod's IP
sed -i 's/PODIP/'${MY_POD_IP}'/g' /conf/zoo.cfg
exec "$@"
[root@host4 scripts]# cd ..
[root@host4 zookeeper]# ls
conf  Dockerfile  scripts
[root@host4 zookeeper]# cat Dockerfile
FROM harbor.test.com/middleware/zookeeper:3.4.10
MAINTAINER rongruixue@163.com
ARG zookeeper_version=3.4.10
COPY conf /conf/
COPY scripts /
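For example, on zookeeper-1 (pod IP 192.168.31.93 in the earlier kubectl output), sed.sh works out roughly as follows:

# values injected by the downward API:
#   MY_POD_NAME=zookeeper-1
#   MY_POD_IP=192.168.31.93
MY_ID=`echo zookeeper-1 | awk -F'-' '{print $NF}'`   # -> 1
MY_ID=`expr ${MY_ID} + 1`                            # -> 2, written to /data/myid
# and /conf/zoo.cfg ends up with: clientPortAddress=192.168.31.93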
With a docker build this produces the image harbor.test.com/middleware/zookeeper:v3.4.10.
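The build and push are the usual two commands; a sketch, assuming access to the harbor.test.com registry is already set up:

docker build -t harbor.test.com/middleware/zookeeper:v3.4.10 .
docker push harbor.test.com/middleware/zookeeper:v3.4.10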
Then we start the pods through the following YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
#  podManagementPolicy: Parallel   # this setting lets all 3 pods start at the same time instead of in the order 0, 1, 2
  serviceName: zookeeper
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath:
          path: /var/log/zookeeper
      - name: volume-data
        hostPath:
          path: /opt/zookeeper/data
      terminationGracePeriodSeconds: 10
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:v3.4.10
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
        #- name: volume-data   # do not mount /data to the host here; if two pods land on the same node they would overwrite each other's data, and myid would be overwritten too
        #  mountPath: /data
        command:
        - /bin/bash
        - -c
        - -x
        - |
          # sed.sh writes the pod IP into zoo.cfg and then writes /data/myid
          /sed.sh
          sleep 10
          zkServer.sh start-foreground
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper
  clusterIP: None
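After applying the manifest, the cluster can be sanity-checked. A sketch, assuming the manifest above is saved as zookeeper.yml (zkServer.sh status is ZooKeeper's standard status command; one pod should report leader and the other two follower):

kubectl apply -f zookeeper.yml
kubectl get pod -l app=zookeeper -o wide
for i in 0 1 2; do kubectl exec zookeeper-$i -- zkServer.sh status; done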