The previous article covered building an ingress controller on top of nginx. This chapter of the Kubernetes series continues with its open-source sibling load-balancing controller: the HAProxy ingress controller.
HAProxy Ingress watches resources in the k8s cluster and builds the HAProxy configuration from them.
Much like the nginx controller, HAProxy Ingress watches the Kubernetes API for the state of the pods behind each service and dynamically updates the haproxy configuration file to provide layer-7 load balancing.
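As a minimal sketch of this watch-and-update loop (toy Python with illustrative names, not the controller's actual code): the controller observes the endpoints behind a service and regenerates the corresponding backend section only when they change.

```python
# Toy sketch of the watch/reconcile idea: observe service endpoints,
# regenerate an haproxy-style backend block when they differ from the
# currently rendered configuration. All names here are illustrative.

def render_backend(name, endpoints):
    """Render an haproxy-like backend block from a list of pod addresses."""
    lines = [f"backend {name}", "    mode http", "    balance roundrobin"]
    for i, ep in enumerate(endpoints, start=1):
        lines.append(f"    server srv{i:03d} {ep} weight 1 check inter 2s")
    return "\n".join(lines)

def reconcile(observed_endpoints, current_config):
    """Regenerate the config only when the observed endpoints changed."""
    new_config = render_backend("default_haproxy-ingress-demo_80", observed_endpoints)
    if new_config != current_config:
        return new_config, True   # configuration updated
    return current_config, False  # nothing to do

cfg, changed = reconcile(["10.244.2.166:80"], "")
print(changed)                 # the first observation always produces a config
print(cfg.splitlines()[0])
```

The real controller wires this loop to the Kubernetes watch API; the sketch only shows the diff-and-rewrite shape of the reconciliation.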
The HAProxy ingress controller has the following features:
HAProxy ingress controller versions
Installing haproxy ingress is fairly simple; the project provides an installation YAML. Download the file first and review the Kubernetes resources it configures; the resource types include:
Installation manifest: https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
1. Create the namespace. haproxy ingress is deployed in the ingress-controller namespace, so create that namespace first.
[root@node-1 ~]# kubectl create namespace ingress-controller
namespace/ingress-controller created
[root@node-1 ~]# kubectl get namespaces ingress-controller -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2019-12-27T09:56:04Z"
  name: ingress-controller
  resourceVersion: "13946553"
  selfLink: /api/v1/namespaces/ingress-controller
  uid: ea70b2f7-efe4-43fd-8ce9-3b917b09b533
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
2. Install the haproxy ingress controller.
[root@node-1 ~]# wget https://haproxy-ingress.github.io/resources/haproxy-ingress.yaml
[root@node-1 ~]# kubectl apply -f haproxy-ingress.yaml
serviceaccount/ingress-controller created
clusterrole.rbac.authorization.k8s.io/ingress-controller created
role.rbac.authorization.k8s.io/ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/ingress-controller created
rolebinding.rbac.authorization.k8s.io/ingress-controller created
deployment.apps/ingress-default-backend created
service/ingress-default-backend created
configmap/haproxy-ingress created
daemonset.apps/haproxy-ingress created
3. Check the haproxy ingress installation. Inspecting the core haproxy ingress DaemonSet shows that no pods have been deployed; the manifest defines a nodeSelector label selector, so suitable labels must first be set on the nodes.
[root@node-1 ~]# kubectl get daemonsets -n ingress-controller
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE
haproxy-ingress   0         0         0       0            0           role=ingress-controller   5m51s
4. Label the nodes so the DaemonSet-managed pods can be scheduled onto them. In production, define this to suit your situation and pin the haproxy ingress function to specific nodes; access to those nodes should go through a load balancer for unified entry. Since this article focuses on exploring haproxy ingress itself, no load-balancing scheduler is deployed; readers can deploy one according to their actual needs. Using node-1 and node-2 as examples:
[root@node-1 ~]# kubectl label node node-1 role=ingress-controller
node/node-1 labeled
[root@node-1 ~]# kubectl label node node-2 role=ingress-controller
node/node-2 labeled
# check the labels
[root@node-1 ~]# kubectl get nodes --show-labels
NAME     STATUS   ROLES    AGE    VERSION   LABELS
node-1   Ready    master   104d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,role=ingress-controller
node-2   Ready    <none>   104d   v1.15.3   app=web,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-2,kubernetes.io/os=linux,label=test,role=ingress-controller
node-3   Ready    <none>   104d   v1.15.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-3,kubernetes.io/os=linux
5. Check the ingress deployment again: it has completed and is scheduled onto node-1 and node-2.
[root@node-1 ~]# kubectl get daemonsets -n ingress-controller
NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR             AGE
haproxy-ingress   2         2         2       2            2           role=ingress-controller   15m
[root@node-1 ~]# kubectl get pods -n ingress-controller -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
haproxy-ingress-bdns8   1/1     Running   0          2m27s   10.254.100.102   node-2   <none>           <none>
haproxy-ingress-d5rnl   1/1     Running   0          2m31s   10.254.100.101   node-1   <none>           <none>
The haproxy ingress installation also deploys a default backend service via a Deployment. This service is required; without it the ingress controller cannot start. It can be confirmed from the Deployments list:
[root@node-1 ~]# kubectl get deployments -n ingress-controller
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
ingress-default-backend   1/1     1            1           18m
6. Check the haproxy ingress logs. They show that the multiple haproxy ingress instances achieve high availability (HA) through leader election.
Other resources, including the ServiceAccount, ClusterRole and ConfigMaps, can be verified individually; at this point the HAProxy ingress controller deployment is complete. Other deployment options are described in the official installation documentation.
With the ingress controller deployed, Ingress rules must be defined so the controller can discover the pod resources behind each service. This section introduces using Ingress with the HAProxy Ingress Controller.
1. Prepare the environment: create a Deployment and expose its port.
# create the application and expose its port
[root@node-1 haproxy-ingress]# kubectl run haproxy-ingress-demo --image=nginx:1.7.9 --port=80 --replicas=1 --expose
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
service/haproxy-ingress-demo created
deployment.apps/haproxy-ingress-demo created
# check the deployment
[root@node-1 haproxy-ingress]# kubectl get deployments haproxy-ingress-demo
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
haproxy-ingress-demo   1/1     1            1           10s
# check the service
[root@node-1 haproxy-ingress]# kubectl get services haproxy-ingress-demo
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
haproxy-ingress-demo   ClusterIP   10.106.199.102   <none>        80/TCP    17s
2. Create an ingress rule. If multiple ingress controllers are present, the ingress.class annotation selects the haproxy controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress-demo
  labels:
    ingresscontroller: haproxy
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: www.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-ingress-demo
          servicePort: 80
3. Apply the ingress rule and inspect its details; the Events log shows the controller has already picked up the update.
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-demo.yaml
ingress.extensions/haproxy-ingress-demo created
# inspect the details
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-demo
Name:             haproxy-ingress-demo
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host             Path  Backends
  ----             ----  --------
  www.happylau.cn
                   /   haproxy-ingress-demo:80 (10.244.2.166:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"labels":{"ingresscontroller":"haproxy"},"name":"haproxy-ingress-demo","namespace":"default"},"spec":{"rules":[{"host":"www.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-ingress-demo","servicePort":80},"path":"/"}]}}]}}
  kubernetes.io/ingress.class:  haproxy
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  CREATE  27s   ingress-controller  Ingress default/haproxy-ingress-demo
  Normal  CREATE  27s   ingress-controller  Ingress default/haproxy-ingress-demo
  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-demo
  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-demo
4. Test the ingress rule. You can write the domain into the hosts file; here we use curl directly with --resolve. Pointing the address at either node-1 or node-2 works.
[root@node-1 haproxy-ingress]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.101
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
5. The test works. Next, look inside the haproxy ingress controller at the generated configuration file.
[root@node-1 ~]# kubectl exec -it haproxy-ingress-bdns8 -n ingress-controller /bin/sh
# view the configuration file
/etc/haproxy # cat /etc/haproxy/haproxy.cfg
# # # # # # # # # # # # # # # # # # # #
#   HAProxy Ingress Controller
#   --------------------------
#   This file is automatically updated, do not edit
# # # # # # # # # # # # # # # # # # # #
# global section
global
    daemon
    nbthread 2
    cpu-map auto:1/1-2 0-1
    stats socket /var/run/haproxy-stats.sock level admin expose-fd listeners
    maxconn 2000
    hard-stop-after 10m
    lua-load /usr/local/etc/haproxy/lua/send-response.lua
    lua-load /usr/local/etc/haproxy/lua/auth-request.lua
    tune.ssl.default-dh-param 2048
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
    ssl-default-bind-options no-sslv3 no-tls-tickets
# defaults section
defaults
    log global
    maxconn 2000
    option redispatch
    option dontlognull
    option http-server-close
    option http-keep-alive
    timeout client 50s
    timeout client-fin 50s
    timeout connect 5s
    timeout http-keep-alive 1m
    timeout http-request 5s
    timeout queue 5s
    timeout server 50s
    timeout server-fin 50s
    timeout tunnel 1h
# backend servers, linked to the pods behind the service via service discovery
backend default_haproxy-ingress-demo_80
    mode http
    balance roundrobin
    acl https-request ssl_fc
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 10.244.2.166:80 weight 1 check inter 2s    # backend pod address
    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
# default backend service created at install time, required for the initial installation
backend _default_backend
    mode http
    balance roundrobin
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    server srv001 10.244.2.165:8080 weight 1 check inter 2s
    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
backend _error413
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/413.http
    http-request deny deny_status 400
backend _error495
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/495.http
    http-request deny deny_status 400
backend _error496
    mode http
    errorfile 400 /usr/local/etc/haproxy/errors/496.http
    http-request deny deny_status 400
# frontend rules for port 80, with an https redirect; the hosts are defined in /etc/haproxy/maps/_global_http_front.map
frontend _front_http
    mode http
    bind *:80
    http-request set-var(req.base) base,lower,regsub(:[0-9]+/,/)
    http-request redirect scheme https if { var(req.base),map_beg(/etc/haproxy/maps/_global_https_redir.map,_nomatch) yes }
    http-request set-header X-Forwarded-Proto http
    http-request del-header X-SSL-Client-CN
    http-request del-header X-SSL-Client-DN
    http-request del-header X-SSL-Client-SHA1
    http-request del-header X-SSL-Client-Cert
    http-request set-var(req.backend) var(req.base),map_beg(/etc/haproxy/maps/_global_http_front.map,_nomatch)
    use_backend %[var(req.backend)] unless { var(req.backend) _nomatch }
    default_backend _default_backend
# frontend rules for port 443; the hostnames are in /etc/haproxy/maps/_front001_host.map
frontend _front001
    mode http
    bind *:443 ssl alpn h2,http/1.1 crt /ingress-controller/ssl/default-fake-certificate.pem
    http-request set-var(req.hostbackend) base,lower,regsub(:[0-9]+/,/),map_beg(/etc/haproxy/maps/_front001_host.map,_nomatch)
    http-request set-header X-Forwarded-Proto https
    http-request del-header X-SSL-Client-CN
    http-request del-header X-SSL-Client-DN
    http-request del-header X-SSL-Client-SHA1
    http-request del-header X-SSL-Client-Cert
    use_backend %[var(req.hostbackend)] unless { var(req.hostbackend) _nomatch }
    default_backend _default_backend
# stats listener
listen stats
    mode http
    bind *:1936
    stats enable
    stats uri /
    no log
    option forceclose
    stats show-legends
# health-check monitor
frontend healthz
    mode http
    bind *:10253
    monitor-uri /healthz
Look at the hostname mapping file, which maps each front-end hostname to the name of the backend it forwards to:
/etc/haproxy/maps # cat /etc/haproxy/maps/_global_http_front.map
# # # # # # # # # # # # # # # # # # # #
#   HAProxy Ingress Controller
#   --------------------------
#   This file is automatically updated, do not edit
# # # # # # # # # # # # # # # # # # # #
www.happylau.cn/ default_haproxy-ingress-demo_80
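The map_beg lookups in the frontends above choose a backend by matching the request's host+path against the map entries by prefix. A minimal Python sketch of that matching idea (toy data, not HAProxy's implementation):

```python
# Toy longest-prefix lookup mimicking haproxy's map_beg semantics:
# the request's "host/path" string is matched against map keys by prefix,
# and the longest matching key wins; no match falls through to a default.

HTTP_FRONT_MAP = {
    "www.happylau.cn/": "default_haproxy-ingress-demo_80",
}

def map_beg(base, mapping, default="_nomatch"):
    """Return the backend whose map key is the longest prefix of `base`."""
    best = default
    best_len = -1
    for prefix, backend in mapping.items():
        if base.startswith(prefix) and len(prefix) > best_len:
            best, best_len = backend, len(prefix)
    return best

print(map_beg("www.happylau.cn/index.html", HTTP_FRONT_MAP))  # routed backend
print(map_beg("other.example.com/", HTTP_FRONT_MAP))          # _nomatch
```

In the real configuration, a `_nomatch` result makes the frontend fall back to `default_backend _default_backend`.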
With this basic configuration, layer-7 load balancing via haproxy is in place: the haproxy ingress controller dynamically discovers the backend rules for each service through the Kubernetes API and writes them into the haproxy.cfg configuration file.
Backend pods change dynamically in real time. haproxy ingress tracks those changes through the service discovery mechanism, updates the haproxy.cfg configuration dynamically, and applies the new configuration (in practice without restarting the haproxy service). This section demonstrates haproxy ingress dynamic updates and load balancing.
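The disabled 127.0.0.1:1023 placeholder servers seen in the configuration hint at how updates can avoid reloads: each backend keeps a fixed pool of server slots, and scaling fills or frees slots in place. A toy sketch of that idea (illustrative only, not the controller's code):

```python
# Toy sketch of the pre-allocated "server slot" idea behind reload-free
# updates: a backend keeps a fixed pool of slots (enabled pods plus
# disabled placeholders); scaling changes individual slots in place
# instead of rewriting and reloading the whole proxy.

PLACEHOLDER = "127.0.0.1:1023"

def fill_slots(pods, pool_size=7):
    """Assign pod addresses to slots; unused slots stay disabled."""
    slots = []
    for i in range(pool_size):
        if i < len(pods):
            slots.append((f"srv{i+1:03d}", pods[i], "enabled"))
        else:
            slots.append((f"srv{i+1:03d}", PLACEHOLDER, "disabled"))
    return slots

before = fill_slots(["10.244.2.166:80"])
after = fill_slots(["10.244.2.166:80", "10.244.0.52:80", "10.244.1.149:80"])
# only slots srv002 and srv003 change state, mirroring a runtime update
changed = [b for a, b in zip(before, after) if a != b]
print(len(changed))
```

In real HAProxy, such per-slot changes can be pushed through the runtime API over the stats socket, which is why no process reload is needed.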
1. Dynamic update: taking pod scale-out as the example, scale the replicas from replicas=1 to 3.
[root@node-1 ~]# kubectl scale --replicas=3 deployment haproxy-ingress-demo
deployment.extensions/haproxy-ingress-demo scaled
[root@node-1 ~]# kubectl get deployments haproxy-ingress-demo
NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
haproxy-ingress-demo   3/3     3            3           43m
# check the pod IP addresses after scaling
[root@node-1 ~]# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
haproxy-ingress-demo-5d487d4fc-5pgjt   1/1     Running   0          43m   10.244.2.166   node-3   <none>           <none>
haproxy-ingress-demo-5d487d4fc-pst2q   1/1     Running   0          18s   10.244.0.52    node-1   <none>           <none>
haproxy-ingress-demo-5d487d4fc-sr8tm   1/1     Running   0          18s   10.244.1.149   node-2   <none>           <none>
2. Check the haproxy configuration again: the backend server list has dynamically discovered the newly added pod addresses.
backend default_haproxy-ingress-demo_80
    mode http
    balance roundrobin
    acl https-request ssl_fc
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 10.244.2.166:80 weight 1 check inter 2s
    server srv002 10.244.0.52:80 weight 1 check inter 2s     # newly added pod addresses
    server srv003 10.244.1.149:80 weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
3. Check the haproxy ingress logs. They report "HAProxy updated without needing to reload": the configuration change was recognized dynamically, with no haproxy restart. Since version 1.8, HAProxy has supported dynamic configuration updates to suit microservice scenarios; see the HAProxy documentation for details.
[root@node-1 ~]# kubectl logs haproxy-ingress-bdns8 -n ingress-controller -f
I1227 12:21:11.523066       6 controller.go:274] Starting HAProxy update id=20
I1227 12:21:11.561001       6 instance.go:162] HAProxy updated without needing to reload. Commands sent: 3
I1227 12:21:11.561057       6 controller.go:325] Finish HAProxy update id=20: ingress=0.149764ms writeTmpl=37.738947ms total=37.888711ms
4. Next, test the load-balancing function. To make the effect visible, write different content into each pod.
[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-5pgjt /bin/bash
root@haproxy-ingress-demo-5d487d4fc-5pgjt:/# echo "web-1" > /usr/share/nginx/html/index.html
[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-pst2q /bin/bash
root@haproxy-ingress-demo-5d487d4fc-pst2q:/# echo "web-2" > /usr/share/nginx/html/index.html
[root@node-1 ~]# kubectl exec -it haproxy-ingress-demo-5d487d4fc-sr8tm /bin/bash
root@haproxy-ingress-demo-5d487d4fc-sr8tm:/# echo "web-3" > /usr/share/nginx/html/index.html
5. Verify the load-balancing effect: haproxy uses the round-robin scheduling algorithm, so the rotation is clearly visible.
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-1
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-2
[root@node-1 ~]# curl http://www.happylau.cn --resolve www.happylau.cn:80:10.254.100.102
web-3
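The rotation above follows haproxy's balance roundrobin policy. Stripped of weights and health checks (a toy sketch, not haproxy's weighted implementation), the idea reduces to cycling through the server list:

```python
from itertools import cycle

# Minimal round-robin sketch: each new request goes to the next
# server in a fixed circular order.
servers = ["web-1", "web-2", "web-3"]
rr = cycle(servers)

picks = [next(rr) for _ in range(6)]
print(picks)  # each server is hit in turn, twice over six requests
```

Real haproxy additionally honors per-server weights and skips servers whose health checks are failing.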
This section verified the haproxy ingress controller's dynamic configuration updates. Unlike the nginx ingress controller, it recognizes configuration changes without reloading the service process, a significant advantage in microservice scenarios; an example also verified ingress load-balancing scheduling.
This section demonstrates haproxy ingress virtual-host support: define two virtual hosts, news.happylau.cn and sports.happylau.cn, and forward their requests to haproxy-1 and haproxy-2 respectively.
1. Prepare the test environment: create two applications, haproxy-1 and haproxy-2, and expose their service ports.
[root@node-1 ~]# kubectl run haproxy-1 --image=nginx:1.7.9 --port=80 --replicas=1 --expose=true
[root@node-1 ~]# kubectl run haproxy-2 --image=nginx:1.7.9 --port=80 --replicas=1 --expose=true
# check the deployments
[root@node-1 ~]# kubectl get deployments
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
haproxy-1   1/1     1            1           39s
haproxy-2   1/1     1            1           36s
# check the services
[root@node-1 ~]# kubectl get services
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
haproxy-1   ClusterIP   10.100.239.114   <none>        80/TCP    55s
haproxy-2   ClusterIP   10.100.245.28    <none>        80/TCP    52s
2. Define the ingress rules: different hosts forward requests to different services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress-virtualhost
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: news.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-1
          servicePort: 80
  - host: sports.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-2
          servicePort: 80

# apply the ingress rule and list it
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-virtualhost.yaml
ingress.extensions/haproxy-ingress-virtualhost created
[root@node-1 haproxy-ingress]# kubectl get ingresses haproxy-ingress-virtualhost
NAME                          HOSTS                                 ADDRESS   PORTS   AGE
haproxy-ingress-virtualhost   news.happylau.cn,sports.happylau.cn             80      8s
# inspect the rule details
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-virtualhost
Name:             haproxy-ingress-virtualhost
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                Path  Backends
  ----                ----  --------
  news.happylau.cn
                      /   haproxy-1:80 (10.244.2.168:80)
  sports.happylau.cn
                      /   haproxy-2:80 (10.244.2.169:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"name":"haproxy-ingress-virtualhost","namespace":"default"},"spec":{"rules":[{"host":"news.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-1","servicePort":80},"path":"/"}]}},{"host":"sports.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-2","servicePort":80},"path":"/"}]}}]}}
  kubernetes.io/ingress.class:  haproxy
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  CREATE  37s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  CREATE  37s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  UPDATE  20s   ingress-controller  Ingress default/haproxy-ingress-virtualhost
3. Test the virtual-host configuration, either by writing the domains into the hosts file or with curl's direct resolution as before, e.g. curl http://news.happylau.cn --resolve news.happylau.cn:80:10.254.100.101 (substituting one of the ingress node IPs).
4. Check the configuration: both the frontend map and the backend sections of haproxy.cfg have been updated.
# /etc/haproxy/haproxy.cfg contents
# backend for haproxy-1
backend default_haproxy-1_80
    mode http
    balance roundrobin
    acl https-request ssl_fc
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 10.244.2.168:80 weight 1 check inter 2s
    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s
# backend for haproxy-2
backend default_haproxy-2_80
    mode http
    balance roundrobin
    acl https-request ssl_fc
    http-request set-header X-Original-Forwarded-For %[hdr(x-forwarded-for)] if { hdr(x-forwarded-for) -m found }
    http-request del-header x-forwarded-for
    option forwardfor
    http-response set-header Strict-Transport-Security "max-age=15768000"
    server srv001 10.244.2.169:80 weight 1 check inter 2s
    server srv002 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv003 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv004 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv005 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv006 127.0.0.1:1023 disabled weight 1 check inter 2s
    server srv007 127.0.0.1:1023 disabled weight 1 check inter 2s

# associated map file contents
/ # cat /etc/haproxy/maps/_global_http_front.map
# # # # # # # # # # # # # # # # # # # #
#   HAProxy Ingress Controller
#   --------------------------
#   This file is automatically updated, do not edit
# # # # # # # # # # # # # # # # # # # #
news.happylau.cn/ default_haproxy-1_80
sports.happylau.cn/ default_haproxy-2_80
haproxy ingress supports automatic redirection, configured via annotations: set ingress.kubernetes.io/ssl-redirect (default false) to true to redirect http to https. The setting can also be written into the ConfigMap to make redirection the default. This article uses the annotation to demonstrate the http-to-https redirect.
1. Define the ingress rule, setting ingress.kubernetes.io/ssl-redirect to enable redirection.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress-virtualhost
  annotations:
    kubernetes.io/ingress.class: haproxy
    ingress.kubernetes.io/ssl-redirect: "true"    # enable the redirect; annotation values must be strings, so quote it
spec:
  rules:
  - host: news.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-1
          servicePort: 80
  - host: sports.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-2
          servicePort: 80
Testing this configuration did not produce the redirect. The open-source documentation offers little further detail, and since the enterprise edition image requires licensed access to download, further verification was not possible.
haproxy ingress ships with a default self-signed "fake" certificate (the default-fake-certificate.pem referenced in haproxy.cfg above). To serve a site with its own certificate, generate one and reference it from the Ingress via a TLS Secret.
1. Generate a self-signed certificate and private key.
[root@node-1 haproxy-ingress]# openssl req -x509 -newkey rsa:2048 -nodes -days 365 -keyout tls.key -out tls.crt
Generating a 2048 bit RSA private key
...........+++
.......+++
writing new private key to 'tls.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:GD
Locality Name (eg, city) [Default City]:ShenZhen
Organization Name (eg, company) [Default Company Ltd]:Tencent
Organizational Unit Name (eg, section) []:HappyLau
Common Name (eg, your name or your server's hostname) []:www.happylau.cn
Email Address []:573302346@qq.com
2. Create a Secret holding the certificate and private key.
[root@node-1 haproxy-ingress]# kubectl create secret tls haproxy-tls --cert=tls.crt --key=tls.key
secret/haproxy-tls created
[root@node-1 haproxy-ingress]# kubectl describe secrets haproxy-tls
Name:         haproxy-tls
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1424 bytes
tls.key:  1704 bytes
3. Write the ingress rule, referencing the Secret through tls.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: haproxy-ingress-virtualhost
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  tls:
  - hosts:
    - news.happylau.cn
    - sports.happylau.cn
    secretName: haproxy-tls
  rules:
  - host: news.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-1
          servicePort: 80
  - host: sports.happylau.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: haproxy-2
          servicePort: 80
4. Apply the configuration and check the details; the TLS section shows the associated certificate.
[root@node-1 haproxy-ingress]# kubectl apply -f ingress-virtualhost.yaml
ingress.extensions/haproxy-ingress-virtualhost configured
[root@node-1 haproxy-ingress]# kubectl describe ingresses haproxy-ingress-virtualhost
Name:             haproxy-ingress-virtualhost
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
TLS:
  haproxy-tls terminates news.happylau.cn,sports.happylau.cn
Rules:
  Host                Path  Backends
  ----                ----  --------
  news.happylau.cn
                      /   haproxy-1:80 (10.244.2.168:80)
  sports.happylau.cn
                      /   haproxy-2:80 (10.244.2.169:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"haproxy"},"name":"haproxy-ingress-virtualhost","namespace":"default"},"spec":{"rules":[{"host":"news.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-1","servicePort":80},"path":"/"}]}},{"host":"sports.happylau.cn","http":{"paths":[{"backend":{"serviceName":"haproxy-2","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["news.happylau.cn","sports.happylau.cn"],"secretName":"haproxy-tls"}]}}
  kubernetes.io/ingress.class:  haproxy
Events:
  Type    Reason  Age                From                Message
  ----    ------  ----               ----                -------
  Normal  CREATE  37m                ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  CREATE  37m                ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  UPDATE  7s (x2 over 37m)   ingress-controller  Ingress default/haproxy-ingress-virtualhost
  Normal  UPDATE  7s (x2 over 37m)   ingress-controller  Ingress default/haproxy-ingress-virtualhost
5. Test HTTPS access to the sites, e.g. curl -k https://news.happylau.cn --resolve news.happylau.cn:443:10.254.100.101 (the -k flag is needed because the certificate is self-signed); the sites are now served securely over HTTPS.
haproxy implements ingress by updating the haproxy.cfg configuration and, combined with the service discovery mechanism, completes ingress access dynamically; compared with nginx, haproxy needs no reload to apply configuration changes. During testing, some configuration features did not behave as expected; richer functionality is available in the haproxy ingress enterprise edition, while the community edition supports blue-green deployment and WAF security scanning. See the community documentation on haproxy blue-green deployment and WAF support for details.
The haproxy ingress controller's community is currently only moderately active, with a clear gap behind nginx, traefik and istio; using the community edition of haproxy ingress in production is not recommended.
Official installation docs: https://haproxy-ingress.github.io/docs/getting-started/
Official haproxy ingress configuration reference: https://www.haproxy.com/documentation/hapee/1-7r2/traffic-management/k8s-image-controller/
When your talent cannot yet support your ambition, settle down and study.
# RBAC service account, bound to the roles below
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: ingress-controller
---
# cluster role: the resource objects and the access rights to them
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
# role definition
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
# cluster role binding: associates the ServiceAccount with the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: ingress-controller
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
# role binding
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: ingress-controller
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
# backend application; haproxy ingress requires this default backend
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: ingress-controller
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
# service for the default backend application
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: ingress-controller
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
# haproxy ingress ConfigMap, used for custom configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: ingress-controller
---
# the core haproxy ingress DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: ingress-controller
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      hostNetwork: true                         # use the host's network
      nodeSelector:                             # schedule only to nodes carrying this label
        role: ingress-controller
      serviceAccountName: ingress-controller    # RBAC authentication and authorization
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --sort-backends
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace