Create a MySQL NodePort service backed by two pod replicas. The RC and Service configurations are as follows:

1. RC configuration
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-mysql
spec:
  replicas: 2
  selector:
    name: wordpress-mysql
  template:
    metadata:
      labels:
        name: wordpress-mysql
    spec:
      containers:
      - name: wordpress-mysql
        image: 172.16.114.201/library/mysql:v1
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: "wordpress-mysql-data"
          mountPath: "/var/lib/mysql"
        env:
        - name: MYSQL_PASS
          value: "123456"
        - name: ON_CREATE_DB
          value: "wordpress"
      volumes:
      - name: "wordpress-mysql-data"
        hostPath:
          path: "/root/wordpress-mysql/data"
```
2. Service configuration
```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30010
    protocol: TCP
  type: NodePort
  selector:
    name: wordpress-mysql
```
3. The resulting Service
```
Name:                   wordpress-mysql
Namespace:              default
Labels:                 <none>
Selector:               name=wordpress-mysql
Type:                   NodePort
IP:                     10.254.67.85
Port:                   <unset> 3306/TCP
NodePort:               <unset> 30010/TCP
Endpoints:              10.0.3.2:3306,10.0.45.6:3306
Session Affinity:       None
No events.
```
4. Ports held by kube-proxy
```
[root@test-209 log]# netstat -anp | grep kube-proxy
tcp        0      0 127.0.0.1:10249        0.0.0.0:*              LISTEN      10165/kube-proxy
tcp        0      0 172.16.114.209:46010   172.16.114.208:8080    ESTABLISHED 10165/kube-proxy
tcp        0      0 172.16.114.209:46014   172.16.114.208:8080    ESTABLISHED 10165/kube-proxy
tcp        0      0 172.16.114.209:46012   172.16.114.208:8080    ESTABLISHED 10165/kube-proxy
tcp6       0      0 :::30010               :::*                   LISTEN      10165/kube-proxy
unix  2      [ ]         DGRAM                    36395    10165/kube-proxy
unix  3      [ ]         STREAM     CONNECTED     36403    10165/kube-proxy
```
5. The corresponding iptables rules
```
iptables -S -t nat | grep mysql
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/wordpress-mysql:" -m tcp --dport 30010 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/wordpress-mysql:" -m tcp --dport 30010 -j KUBE-SVC-GJ6HULPZPPQIKMS7
-A KUBE-SEP-7KXQQUXVSZ2LFV44 -s 10.0.45.6/32 -m comment --comment "default/wordpress-mysql:" -j KUBE-MARK-MASQ
-A KUBE-SEP-7KXQQUXVSZ2LFV44 -p tcp -m comment --comment "default/wordpress-mysql:" -m tcp -j DNAT --to-destination 10.0.45.6:3306
-A KUBE-SEP-J7SZJXRP24HRFT23 -s 10.0.3.2/32 -m comment --comment "default/wordpress-mysql:" -j KUBE-MARK-MASQ
-A KUBE-SEP-J7SZJXRP24HRFT23 -p tcp -m comment --comment "default/wordpress-mysql:" -m tcp -j DNAT --to-destination 10.0.3.2:3306
-A KUBE-SERVICES -d 10.254.67.85/32 -p tcp -m comment --comment "default/wordpress-mysql: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-GJ6HULPZPPQIKMS7
-A KUBE-SVC-GJ6HULPZPPQIKMS7 -m comment --comment "default/wordpress-mysql:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-J7SZJXRP24HRFT23
-A KUBE-SVC-GJ6HULPZPPQIKMS7 -m comment --comment "default/wordpress-mysql:" -j KUBE-SEP-7KXQQUXVSZ2LFV44
```
From the output above, kube-proxy opens a dedicated port, 30010, on the node for the mysql service. In the iptables rules, traffic arriving on destination port 30010 is sent to the chain KUBE-SVC-GJ6HULPZPPQIKMS7, which in turn jumps to KUBE-SEP-J7SZJXRP24HRFT23 or KUBE-SEP-7KXQQUXVSZ2LFV44 (each chosen with 50% probability). These two KUBE-SEP chains define the DNAT rules that redirect traffic to the endpoints 10.0.3.2:3306 and 10.0.45.6:3306. So when an external client connects to port 30010, the iptables rules distribute the connections across 10.0.3.2:3306 and 10.0.45.6:3306, 50% each.
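The 50/50 split above comes from the `statistic --mode random --probability` match: the first KUBE-SEP rule matches with probability 0.5, and anything that falls through hits the second, unconditional rule. For n endpoints, kube-proxy generates rule i with probability 1/(n-i). A minimal Python sketch of that rule walk (the endpoint list is taken from the rules above; the function name is illustrative, not part of kube-proxy):

```python
import random

def pick_endpoint(endpoints, rng=random.random):
    """Walk iptables-style rules: rule i matches with probability
    1/(n-i); the final rule is effectively unconditional."""
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if rng() < 1.0 / (n - i):
            return ep
    return endpoints[-1]  # defensive; the last rule always matches

# The two SEP chains from the listing above: the first is tried with
# probability 0.5, the second catches everything that falls through.
endpoints = ["10.0.3.2:3306", "10.0.45.6:3306"]

random.seed(0)
counts = {ep: 0 for ep in endpoints}
for _ in range(100_000):
    counts[pick_endpoint(endpoints)] += 1
# Each endpoint receives roughly half of the simulated connections.
```

Note the balancing is per connection, not per byte: once DNAT picks an endpoint, conntrack keeps that connection pinned to it.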
Create a zookeeper ClusterIP service. The RC and Service configurations are as follows:

1. RC configuration
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: zookeeper1
spec:
  replicas: 1
  selector:
    name: zookeeper1
  template:
    metadata:
      labels:
        name: zookeeper1
    spec:
      containers:
      - name: zookeeper1
        image: 10.10.30.166/public/zookeeper:v1
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        env:
        - name: ZOOKEEPER_ID
          value: "1"
        - name: ZOOKEEPER_SERVER_1
          value: "zookeeper1"
        - name: ZOOKEEPER_SERVER_2
          value: "zookeeper2"
        - name: ZOOKEEPER_SERVER_3
          value: "zookeeper3"
```
2. Service configuration
```yaml
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
spec:
  ports:
  - port: 2181
    targetPort: 2181
    protocol: TCP
    name: "1"
  - port: 2888
    targetPort: 2888
    protocol: TCP
    name: "2"
  - port: 3888
    targetPort: 3888
    protocol: TCP
    name: "3"
  type: ClusterIP
  selector:
    name: zookeeper1
```
3. The resulting Service
```
Name:                   zookeeper1
Namespace:              default
Labels:                 <none>
Selector:               name=zookeeper1
Type:                   ClusterIP
IP:                     10.254.181.6
Port:                   1       2181/TCP
Endpoints:              10.0.45.4:2181
Port:                   2       2888/TCP
Endpoints:              10.0.45.4:2888
Port:                   3       3888/TCP
Endpoints:              10.0.45.4:3888
Session Affinity:       None
No events.
```
4. iptables rules
```
iptables -S -t nat | grep zookeeper1
-A KUBE-SEP-BZJZKIUQRVYJVMQB -s 10.0.45.4/32 -m comment --comment "default/zookeeper1:3" -j KUBE-MARK-MASQ
-A KUBE-SEP-BZJZKIUQRVYJVMQB -p tcp -m comment --comment "default/zookeeper1:3" -m tcp -j DNAT --to-destination 10.0.45.4:3888
-A KUBE-SEP-C3J2QHMJ3LTD3GR7 -s 10.0.45.4/32 -m comment --comment "default/zookeeper1:2" -j KUBE-MARK-MASQ
-A KUBE-SEP-C3J2QHMJ3LTD3GR7 -p tcp -m comment --comment "default/zookeeper1:2" -m tcp -j DNAT --to-destination 10.0.45.4:2888
-A KUBE-SEP-RZ4H7H2HFI3XFCXZ -s 10.0.45.4/32 -m comment --comment "default/zookeeper1:1" -j KUBE-MARK-MASQ
-A KUBE-SEP-RZ4H7H2HFI3XFCXZ -p tcp -m comment --comment "default/zookeeper1:1" -m tcp -j DNAT --to-destination 10.0.45.4:2181
-A KUBE-SERVICES -d 10.254.181.6/32 -p tcp -m comment --comment "default/zookeeper1:1 cluster IP" -m tcp --dport 2181 -j KUBE-SVC-HHEJUKXW5P7DV7BX
-A KUBE-SERVICES -d 10.254.181.6/32 -p tcp -m comment --comment "default/zookeeper1:2 cluster IP" -m tcp --dport 2888 -j KUBE-SVC-2SVOYTXLXAXVV7L3
-A KUBE-SERVICES -d 10.254.181.6/32 -p tcp -m comment --comment "default/zookeeper1:3 cluster IP" -m tcp --dport 3888 -j KUBE-SVC-KAVJ7GO67HRSOAM3
-A KUBE-SVC-2SVOYTXLXAXVV7L3 -m comment --comment "default/zookeeper1:2" -j KUBE-SEP-C3J2QHMJ3LTD3GR7
-A KUBE-SVC-HHEJUKXW5P7DV7BX -m comment --comment "default/zookeeper1:1" -j KUBE-SEP-RZ4H7H2HFI3XFCXZ
-A KUBE-SVC-KAVJ7GO67HRSOAM3 -m comment --comment "default/zookeeper1:3" -j KUBE-SEP-BZJZKIUQRVYJVMQB
```
From the iptables rules, traffic whose destination IP is 10.254.181.6 and whose destination port is 2181, 2888, or 3888 is directed to KUBE-SVC-HHEJUKXW5P7DV7BX, KUBE-SVC-2SVOYTXLXAXVV7L3, or KUBE-SVC-KAVJ7GO67HRSOAM3 respectively. These three chains in turn jump to KUBE-SEP-RZ4H7H2HFI3XFCXZ, KUBE-SEP-C3J2QHMJ3LTD3GR7, and KUBE-SEP-BZJZKIUQRVYJVMQB, whose DNAT rules redirect the traffic to 10.0.45.4:2181, 10.0.45.4:2888, and 10.0.45.4:3888. Since this service has only one endpoint per port, no statistic/probability rules are generated; each KUBE-SVC chain jumps unconditionally to its single KUBE-SEP chain.
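The ClusterIP path can be summarized as two deterministic lookups: (destination IP, port) selects a KUBE-SVC chain, and each KUBE-SVC chain maps 1:1 onto the KUBE-SEP chain that performs the DNAT. A small sketch of that resolution (chain names and addresses are copied from the listing above; the `dnat` helper is illustrative, not a kube-proxy API):

```python
# (dest IP, dest port) -> KUBE-SVC chain, as installed in KUBE-SERVICES
KUBE_SERVICES = {
    ("10.254.181.6", 2181): "KUBE-SVC-HHEJUKXW5P7DV7BX",
    ("10.254.181.6", 2888): "KUBE-SVC-2SVOYTXLXAXVV7L3",
    ("10.254.181.6", 3888): "KUBE-SVC-KAVJ7GO67HRSOAM3",
}
# KUBE-SVC chain -> KUBE-SEP chain (unconditional jump: one endpoint)
KUBE_SVC = {
    "KUBE-SVC-HHEJUKXW5P7DV7BX": "KUBE-SEP-RZ4H7H2HFI3XFCXZ",
    "KUBE-SVC-2SVOYTXLXAXVV7L3": "KUBE-SEP-C3J2QHMJ3LTD3GR7",
    "KUBE-SVC-KAVJ7GO67HRSOAM3": "KUBE-SEP-BZJZKIUQRVYJVMQB",
}
# KUBE-SEP chain -> DNAT --to-destination target
KUBE_SEP = {
    "KUBE-SEP-RZ4H7H2HFI3XFCXZ": ("10.0.45.4", 2181),
    "KUBE-SEP-C3J2QHMJ3LTD3GR7": ("10.0.45.4", 2888),
    "KUBE-SEP-BZJZKIUQRVYJVMQB": ("10.0.45.4", 3888),
}

def dnat(dst_ip, dst_port):
    """Resolve a packet's destination the way the nat table would."""
    svc = KUBE_SERVICES.get((dst_ip, dst_port))
    if svc is None:
        return (dst_ip, dst_port)  # no rule matches; destination unchanged
    return KUBE_SEP[KUBE_SVC[svc]]

print(dnat("10.254.181.6", 2888))  # -> ('10.0.45.4', 2888)
```

Unlike the NodePort case, nothing listens on a host port here: the ClusterIP 10.254.181.6 exists only as these nat-table rules, so it is reachable only from inside the cluster.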