Installing Kong and the Kong Ingress Controller on Kubernetes

1. We won't repeat Kong's details here; see the official website.

Since the 1.0 release, Kong's feature set has matured considerably: newer versions can act as a service mesh, and Kong can also serve as a Kubernetes ingress controller. As a service mesh it still lags behind Istio, but Kong's outlook is good, and the kong-ingress-controller can automatically discover Ingress resources in a Kubernetes cluster and manage them centrally. So our test cluster is trialing Kong, and this post records the deployment process.

 

2. Deploying Kong

提早準備好:kubernetes 集羣(我線上使用的是1.13.2)、PV持久化(使用nfs作的)、helmgit

Fetch the chart:

With Helm installed, you can fetch it directly:

```
helm fetch stable/kong
```

Fetching from this default repo requires a way around the firewall.

We use a chart customized from the official one:

https://github.com/cuishuaigit/k8s-kong

 

Before deploying, customize it to your needs:

Edit the values.yaml file. I disabled HTTPS on the admin API here, since this is a purely internal network. Then I set NodePort ports for the admin API, proxy HTTP, and proxy HTTPS to 32344, 32380, and 32343 respectively, and enabled ingressController by default.
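Expressed as chart values, the changes above would look roughly like the following sketch. This is an assumption, not a copy of the repo's file: field names such as `admin.useTLS` and `ingressController.enabled` follow the stable/kong chart of that generation, so verify them against the values.yaml in the chart you actually clone.

```yaml
# values.yaml excerpt (assumed field names; check your chart version)
admin:
  useTLS: false        # plain-HTTP admin API, intranet only
  type: NodePort
  nodePort: 32344
proxy:
  type: NodePort
  http:
    nodePort: 32380
  tls:
    nodePort: 32343
ingressController:
  enabled: true
```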

Deploy Kong:

```
git clone https://github.com/cuishuaigit/k8s-kong
cd k8s-kong
helm install -n kong-ingress --tiller-namespace default .
```

In the test environment, Tiller is deployed in the default namespace.

The result after deployment:

```
root@ku13-1:~# kubectl get pods | grep kong
kong-ingress-kong-5c968fdb74-gsrr8              1/1   Running     0   4h14m
kong-ingress-kong-controller-5896fd6d67-4xcg5   2/2   Running     1   4h14m
kong-ingress-kong-init-migrations-k9ztt         0/1   Completed   0   4h14m
kong-ingress-postgresql-0                       1/1   Running     0   4h14m
root@ku13-1:/data/k8s-kong# kubectl get svc | grep kong
kong-ingress-kong-admin            NodePort    192.103.113.85   <none>   8444:32344/TCP               4h18m
kong-ingress-kong-proxy            NodePort    192.96.47.146    <none>   80:32380/TCP,443:32343/TCP   4h18m
kong-ingress-postgresql            ClusterIP   192.97.113.204   <none>   5432/TCP                     4h18m
kong-ingress-postgresql-headless   ClusterIP   None             <none>   5432/TCP                     4h18m
```

 

Then, following https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/minikube.md, I deployed the demo service:

```
wget https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/manifests/dummy-application.yaml
```

# cat dummy-application.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: http-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-svc
  template:
    metadata:
      labels:
        app: http-svc
    spec:
      containers:
      - name: http-svc
        image: gcr.io/google_containers/echoserver:1.8
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
```

 

# cat demo-service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc
```

 

```
kubectl create -f dummy-application.yaml -f demo-service.yaml
```

 

Create the ingress rule:

# cat demo-ingress.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```

 

```
kubectl create -f demo-ingress.yaml
```

 

Test with curl:

```
root@ku13-1:/data/k8s-kong# curl http://192.96.47.146 -H Host:foo.bar
Hostname: http-svc-6f459dc547-qpqmv

Pod Information:
    node name:      ku13-2
    pod name:       http-svc-6f459dc547-qpqmv
    pod namespace:  default
    pod IP:         192.244.32.25

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=192.244.6.216
    method=GET
    real path=/
    query=
    request_version=1.1
    request_uri=http://192.244.32.25:8080/

Request Headers:
    accept=*/*
    connection=keep-alive
    host=192.244.32.25:8080
    user-agent=curl/7.47.0
    x-forwarded-for=10.2.6.7
    x-forwarded-host=foo.bar
    x-forwarded-port=8000
    x-forwarded-proto=http
    x-real-ip=10.2.6.7

Request Body:
    -no body in request-
```

 

3. Deploying Konga

Konga is a dashboard for Kong; for deployment details see http://www.javashuo.com/article/p-kgljjxcv-gd.html

 

4. Kong plugins

Kong has many plugins that help users get more powerful proxying out of it. Two are introduced here; the others are used in much the same way, differing only in their configuration parameters. For the specific parameters, see https://docs.konghq.com/1.1.x/admin-api/#plugin-object

The kong-ingress-controller provides four CRDs: KongPlugin, KongIngress, KongConsumer, and KongCredential.
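KongPlugin is exercised below, but KongConsumer and KongCredential are not used in this post. For orientation, a consumer-plus-credential pair might look like the following sketch; the names and the key value are made up, and the field layout follows the controller's custom-resources doc of that era (linked in the references), so verify it before use:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: demo-consumer
username: demo-consumer
---
apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: demo-key-auth
consumerRef: demo-consumer
type: key-auth
config:
  key: my-secret-api-key
```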

1) request-transformer

Create the YAML:

# cat demo-request-transformer.yaml

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: transform-request-to-dummy
  namespace: default
  labels:
    global: "false"
disable: false
config:
  replace:
    headers:
    - 'host:llll'
  add:
    headers:
    - "x-myheader:my-header-value"
plugin: request-transformer
```

 

Create the plugin:

```
kubectl create -f demo-request-transformer.yaml
```

 

2) file-log

Create the YAML:

# cat demo-file-log.yaml

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: echo-file-log
  namespace: default
  labels:
    global: "false"
disable: false
plugin: file-log
config:
  path: /tmp/req.log
  reopen: true
```

 

Create the plugin:

```
kubectl create -f demo-file-log.yaml
```

 

3) Applying plugins

Plugins can be bound to routes and services. Binding is done with an annotation; ingress controller versions after 0.20 use plugins.konghq.com.

(1) route

To attach plugins at the route level, add the annotation to the Ingress:

# cat demo-ingress.yaml

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-bar
  annotations:
    plugins.konghq.com: transform-request-to-dummy,echo-file-log
spec:
  rules:
  - host: foo.bar
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```

 

Apply it:

```
kubectl apply -f demo-ingress.yaml
```

Check the result on the dashboard:

 

Or query it via the admin API:

```
curl http://10.1.2.8:32344/plugins | jq
```

32344 is the Kong admin API's NodePort on the node; jq pretty-prints the output.

 

(2) service

To attach plugins at the service level, add the annotation directly to the dummy application's Service YAML:

# cat demo-service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: http-svc
  labels:
    app: http-svc
  annotations:
    plugins.konghq.com: transform-request-to-dummy,echo-file-log
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: http-svc
```

 

Apply it:

```
kubectl apply -f demo-service.yaml
```

Check the result on the dashboard:

 

Or use the admin API:

```
curl http://10.1.2.8:32344/plugins | jq
```

 

4) Plugin effects

(1) request-transformer

```
# curl http://10.1.2.8:32380 -H Host:foo.bar
Hostname: http-svc-6f459d7-7qb2n

Pod Information:
    node name:      ku13-2
    pod name:       http-svc-6f459d7-7qb2n
    pod namespace:  default
    pod IP:         192.244.32.37

Server values:
    server_version=nginx: 1.13.3 - lua: 10008

Request Information:
    client_address=192.244.6.216
    method=GET
    real path=/
    query=
    request_version=1.1
    request_uri=http://llll:8080/

Request Headers:
    accept=*/*
    connection=keep-alive
    host=llll
    user-agent=curl/7.47.0
    x-forwarded-for=10.1.2.8
    x-forwarded-host=foo.bar
    x-forwarded-port=8000
    x-forwarded-proto=http
    x-myheader=my-header-value
    x-real-ip=10.1.2.8

Request Body:
    -no body in request-
```

You can see that the plugin settings above took effect: the host was replaced with llll, and x-myheader was added.

 

(2) file-log

You need to exec into the Kong pod to check:

```
# kubectl exec -it kong-ingress-kong-5c968fdb74-gsrr8 -- grep -c request /tmp/req.log
51
```

The logs are being collected correctly.

 

5) Usage caveats

Currently, if a plugin is deleted while the annotations still reference it, all the plugins become unavailable. The Kong team is working on a fix for this bug, so for now take extra care to avoid this situation.

 

References:

https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/custom-resources.md

https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/external-service/externalnamejwt.md

https://github.com/Kong/kubernetes-ingress-controller/blob/master/docs/deployment/minikube.md

https://github.com/cuishuaigit/k8s-kong
