MetalLB
MetalLB provides LoadBalancer-type Service support for Kubernetes in a Kubernetes-native way, working out of the box.
MetalLB runs inside Kubernetes and watches Service objects for changes. As soon as it sees a new LoadBalancer Service with no load balancer available for it, it performs two tasks:
Address allocation: you provide an address pool in the configuration, and MetalLB picks an address from it to assign to the Service.
Address announcement: depending on the configuration, MetalLB announces the address either at Layer 2 (ARP/NDP) or over BGP.
CNI compatibility:

* Calico: partial
* Canal: yes
* Flannel: yes
* Kube-router: partial
* Romana: yes
* Weave Net: partial
Operating Modes
Layer 2 Mode
In this mode, MetalLB only needs a block of addresses in the same subnet as the Kubernetes nodes' management network.
MetalLB elects one Kubernetes node as the leader. That node answers ARP requests for the LB address range, so the upstream router sends all traffic destined for the LB addresses to the leader node.
The drawback is obvious: every request to the LB goes to the leader node. If the Pods behind the Service are spread across different nodes, the traffic is then forwarded from the leader to the appropriate nodes.
On the other hand, this mode works in lab environments because it does not require a router that supports BGP.
Image source: https://zhuanlan.zhihu.com/p/103717169
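Which node currently owns an address in Layer 2 mode can be checked from any host on the same subnet. This is a sketch, assuming a Linux client, an interface named eth0 (adjust to your host), and the example LB address used later in this article:

```
# The MAC address in the ARP reply belongs to the current leader node
arping -I eth0 -c 3 192.168.122.100
# The neighbor cache entry should show the same MAC
ip neigh show 192.168.122.100
```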
BGP Mode
The difference from Layer 2 mode is that traffic can be distributed correctly via BGP, so a leader node is no longer needed.
This mode requires the router to accept MetalLB's BGP announcements so that requests are routed to the correct nodes.
The drawback is that the upstream router must support BGP. Moreover, because only a single BGP session is allowed per peer, MetalLB conflicts with Calico when Calico is also running in BGP mode, which prevents MetalLB from working properly.
Image source: https://zhuanlan.zhihu.com/p/103717169
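For reference, a BGP-mode pool in MetalLB v0.9's ConfigMap format looks like the sketch below; the peer address, AS numbers, and address range are placeholders that must be adapted to your router:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 192.168.122.1   # upstream router (placeholder)
      peer-asn: 64501               # router's AS number (placeholder)
      my-asn: 64500                 # MetalLB's AS number (placeholder)
    address-pools:
    - name: bgp-pool
      protocol: bgp
      addresses:
      - 172.16.100.0/24             # placeholder range announced over BGP
```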
Deploying MetalLB
If kube-proxy is running in IPVS mode, you must set strictARP: true.
```
[root@K8S-PROD-M1 ~]# kubectl edit configmap -n kube-system kube-proxy
...
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 5s
  scheduler: wrr
  strictARP: true    # change false to true
  syncPeriod: 5s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: ipvs
...
```
Or use the following approach:
```
# see what changes would be made, returns nonzero returncode if different
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl diff -f - -n kube-system

# actually apply the changes, returns nonzero returncode on errors only
kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
kubectl apply -f - -n kube-system
```
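The substitution itself can be tried locally on a sample fragment before touching the cluster. This sketch only exercises the sed step, with a temporary file standing in for what `kubectl get`/`kubectl apply` would pipe through:

```shell
# sample of the relevant kube-proxy ConfigMap fragment
cat > /tmp/kube-proxy-sample.yaml <<'EOF'
ipvs:
  scheduler: wrr
  strictARP: false
mode: ipvs
EOF

# same substitution as the kubectl pipeline above
sed -e "s/strictARP: false/strictARP: true/" /tmp/kube-proxy-sample.yaml \
  > /tmp/kube-proxy-patched.yaml

grep "strictARP" /tmp/kube-proxy-patched.yaml   # prints "  strictARP: true"
```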
* Sync the images to the private registry
```
[root@K8S-PROD-M1 ~]# docker pull metallb/controller:v0.9.3
[root@K8S-PROD-M1 ~]# docker tag metallb/controller:v0.9.3 harbor.cluster.local/library/metallb/controller:v0.9.3
[root@K8S-PROD-M1 ~]# docker push harbor.cluster.local/library/metallb/controller:v0.9.3
```
* Fetch the deployment manifests
```
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
```
Then change the image references in the Deployment inside metallb.yaml to the private registry addresses.
* Run the deployment
```
[root@K8S-PROD-M1 metallb]# kubectl apply -f metallb-namespace.yaml
[root@K8S-PROD-M1 metallb]# kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
[root@K8S-PROD-M1 metallb]# kubectl apply -f metallb.yaml
```
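Before configuring address pools, it is worth confirming that the controller Deployment and the speaker DaemonSet came up; this assumes the default metallb-system namespace from the upstream manifests:

```
kubectl get pods -n metallb-system -o wide
kubectl get daemonset -n metallb-system
```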
**Configure MetalLB**

Create a ConfigMap that gives MetalLB its address range and protocol configuration. Here is a simple Layer 2 example: the LB address range is 192.168.122.100-192.168.122.200, in the same /24 subnet as the Kubernetes nodes' management network.
```
[root@K8S-PROD-M1 metallb]# cat > config.yaml << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.122.100-192.168.122.200
EOF
```
Create the ConfigMap:

```
[root@K8S-PROD-M1 metallb]# kubectl apply -f config.yaml
```

Once this ConfigMap is created, the address pool information is cached by the MetalLB controller; if you later change the address pool, you must recreate the MetalLB controller Pod.

**Configuration update process**
```
[root@K8S-PROD-M1 metallb]# kubectl -n metallb-system logs -f pod/controller-6c578774c8-7xnjb
...
{"caller":"main.go:63","event":"endUpdate","msg":"end of service update","service":"kubernetes-dashboard/kubernetes-dashboard","ts":"2020-09-24T03:21:19.959638677Z"}
{"caller":"main.go:126","event":"stateSynced","msg":"controller synced, can allocate IPs now","ts":"2020-09-24T03:21:19.95968369Z"}
...
```
**Test MetalLB**

**Create a Service**
deploy-lbsvc-demo.yaml:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
```
[root@K8S-PROD-M1 metallb]# kubectl apply -f deploy-lbsvc-demo.yaml
```
Check the Service:

```
[root@K8S-PROD-M1 metallb]# kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>            443/TCP        23d
nginx        LoadBalancer   10.106.22.150   192.168.122.100   80:31852/TCP   2m30s
svc-demo-1   ClusterIP      10.98.47.47     <none>            80/TCP         5m10s
svc-demo-2   ClusterIP      10.109.97.203   <none>            8080/TCP       47h
```
Check the logs:

```
[root@K8S-PROD-M1 metallb]# kubectl -n metallb-system logs -f pod/controller-6c578774c8-7xnjb
...
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/nginx","ts":"2020-09-24T06:24:46.415605635Z"}
{"caller":"service.go:114","event":"ipAllocated","ip":"192.168.122.100","msg":"IP address assigned by controller","service":"default/nginx","ts":"2020-09-24T06:24:46.41577988Z"}
{"caller":"main.go:96","event":"serviceUpdated","msg":"updated service object","service":"default/nginx","ts":"2020-09-24T06:24:46.53081167Z"}
{"caller":"main.go:98","event":"endUpdate","msg":"end of service update","service":"default/nginx","ts":"2020-09-24T06:24:46.530878232Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"default/nginx","ts":"2020-09-24T06:24:46.530912503Z"}
{"caller":"main.go:75","event":"noChange","msg":"service converged, no change","service":"default/nginx","ts":"2020-09-24T06:24:46.531027842Z"}
{"caller":"main.go:76","event":"endUpdate","msg":"end of service update","service":"default/nginx","ts":"2020-09-24T06:24:46.531200567Z"}
...
```
Access the Service

The rule below DNATs traffic arriving at 192.168.191.32:31852 to the LB address 192.168.122.100:80, so that clients outside the LB subnet can reach the Service:

```
iptables -t nat -A PREROUTING -m tcp -p tcp -d 192.168.191.32 --dport 31852 -j DNAT --to-destination 192.168.122.100:80
```
* Access the web UI: open http://192.168.191.32:31852 in a browser to reach the nginx welcome page, or:
```
[root@server ~]# curl http://192.168.122.100/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
```
**Advanced features**

**Multiple Services sharing one IP**

Add the annotation metallb.universe.tf/allow-shared-ip: <some_key> when creating a Service; Services that use the same key will share the same IP. The precondition for sharing an IP is that these Services all expose different ports.
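As a sketch, two Services sharing one address could look like this; the Service names, selector, key, and ports are hypothetical, and the two Services expose different ports as required:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-http                                          # hypothetical name
  annotations:
    metallb.universe.tf/allow-shared-ip: "shared-key-1"   # same key => same IP
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-https                                         # hypothetical name
  annotations:
    metallb.universe.tf/allow-shared-ip: "shared-key-1"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 443
```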