A big part of my job is attending all kinds of technical conferences. The most recent was KubeCon North America last November. On the final day of the conference, everyone was running on fumes, and I kept mechanically repeating my self-introduction to one person after another. Eventually, thoroughly worn out, I decided to escape the crowd and just sit in on a talk. By chance I wandered into a session by Darren Shepherd, CTO of Rancher, titled "Behind K3s: Building a Production-Grade Lightweight Kubernetes Distribution". The talk hooked me, and afterwards I started to dig into K3s in earnest.
K3s is a lightweight Kubernetes distribution for IoT and edge computing, built by Rancher Labs, the creators of the industry's most widely adopted Kubernetes management platform, and it is 100% open source. Its small binary and ARM optimizations make it a great fit for my IoT home projects. That got me thinking about how a Kong gateway running on K3s could expose the services inside the K3s server.
To my surprise, K3s ships with an Ingress controller by default. The default proxy/load balancer works, but I needed some plugin functionality it doesn't support unless I use Kong Gateway. So here is a quick guide to spinning up K3s on Ubuntu, configuring it to support Kong for Kubernetes, and deploying some services/plugins.
First, install K3s as a service on systemd- and openrc-based systems using the install script from https://get.k3s.io. We need to add a couple of extra arguments to customize the installation. The first, --no-deploy, turns off the bundled ingress controller, since we want to deploy Kong to take advantage of its plugins. The second, --write-kubeconfig-mode, sets the permissions on the generated kubeconfig file (644 makes it readable without root), which is useful when importing the K3s cluster into Rancher.
$ curl -sfL https://get.k3s.io | sh -s - --no-deploy traefik --write-kubeconfig-mode 644
[INFO] Finding release for channel stable
[INFO] Using v1.18.4+k3s1 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.18.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.18.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping /usr/local/bin/kubectl symlink to k3s, already exists
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
To check that the nodes and pods are all up and running, use k3s kubectl ..., which runs the same commands kubectl does.
$ k3s kubectl get nodes
NAME            STATUS   ROLES    AGE     VERSION
ubuntu-xenial   Ready    master   4m38s   v1.18.4+k3s1

$ k3s kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
kube-system   metrics-server-7566d596c8-vqqz7          1/1     Running   0          4m30s
kube-system   local-path-provisioner-6d59f47c7-tcs2l   1/1     Running   0          4m30s
kube-system   coredns-8655855d6-rjzrq                  1/1     Running   0          4m30s
With K3s up and running, you can install Kong for Kubernetes following the usual steps, for example with the manifest below:
$ k3s kubectl create -f https://bit.ly/k4k8s
namespace/kong created
customresourcedefinition.apiextensions.k8s.io/kongclusterplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongconsumers.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongcredentials.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongingresses.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/kongplugins.configuration.konghq.com created
customresourcedefinition.apiextensions.k8s.io/tcpingresses.configuration.konghq.com created
serviceaccount/kong-serviceaccount created
clusterrole.rbac.authorization.k8s.io/kong-ingress-clusterrole created
clusterrolebinding.rbac.authorization.k8s.io/kong-ingress-clusterrole-nisa-binding created
service/kong-proxy created
service/kong-validation-webhook created
deployment.apps/ingress-kong created
Once the Kong proxy and ingress controller are installed on the K3s server, listing the services should show the external IP of the kong-proxy LoadBalancer.
$ k3s kubectl get svc --namespace kong
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kong-validation-webhook   ClusterIP      10.43.157.178   <none>        443/TCP                      61s
kong-proxy                LoadBalancer   10.43.63.117    10.0.2.15     80:32427/TCP,443:30563/TCP
Run the following command to export that IP as a variable:
$ PROXY_IP=$(k3s kubectl get services --namespace kong kong-proxy -o jsonpath={.status.loadBalancer.ingress[0].ip})
Finally, before we put any services behind the proxy, check that the proxy is responding:
$ curl -i $PROXY_IP
HTTP/1.1 404 Not Found
Date: Mon, 29 Jun 2020 20:31:16 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Content-Length: 48
X-Kong-Response-Latency: 0
Server: kong/2.0.4

{"message":"no Route matched with those values"}
It should return a 404, since we haven't added any services to K3s yet. But as you can see from the headers, the request is being proxied by the latest version of Kong, which also reports extra information such as response latency.
Now, let's set up an echo server application in K3s to demonstrate how to use the Kong Ingress Controller:
$ k3s kubectl apply -f https://bit.ly/echo-service
service/echo created
deployment.apps/echo created
Next, create an ingress rule to proxy the echo-server created earlier:
$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
" | k3s kubectl apply -f -
ingress.extensions/demo created
Test the Ingress rule:
$ curl -i $PROXY_IP/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Date: Mon, 29 Jun 2020 20:31:07 GMT
Server: echoserver
X-Kong-Upstream-Latency: 0
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4

Hostname: echo-78b867555-jkhhl

Pod Information:
    node name:      ubuntu-xenial
    pod name:       echo-78b867555-jkhhl
    pod namespace:  default
    pod IP:         10.42.0.7
<-- clipped -->
If everything is deployed correctly, you should see the response above. This verifies that Kong can correctly route traffic to an application running in Kubernetes.
Kong Ingress allows plugins to be executed at the service level, meaning Kong will run a plugin whenever a request is sent to a specific K3s service, no matter which Ingress path it arrived through. You can also attach plugins to Ingress paths. But in the steps below, I'll use the rate-limiting plugin to keep any single IP from making too many requests against any one service.
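For reference, attaching a plugin at the Ingress level uses the same annotation mechanism as the service-level approach below. A minimal sketch, assuming a KongPlugin named rl-by-ip already exists in the cluster, might annotate the demo Ingress like this:

```yaml
# Sketch only: the konghq.com/plugins annotation on the Ingress
# scopes the plugin to requests matching this Ingress's paths,
# rather than to every request reaching the echo service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    konghq.com/plugins: rl-by-ip
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echo
          servicePort: 80
```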
Create a KongPlugin resource:
$ echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rl-by-ip
config:
  minute: 5
  limit_by: ip
  policy: local
plugin: rate-limiting
" | k3s kubectl apply -f -
kongplugin.configuration.konghq.com/rl-by-ip created
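A KongPlugin is namespaced. If you wanted the same policy cluster-wide, the manifest installed earlier also created a KongClusterPlugin CRD; the fragment below is a hedged sketch (names are mine) of a cluster-scoped variant, using the global label that Kong's ingress controller recognizes:

```yaml
# Sketch with assumed names: a cluster-scoped rate limit.
# The global: "true" label asks Kong to apply it to all requests.
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: global-rl-by-ip
  labels:
    global: "true"
config:
  minute: 5
  limit_by: ip
  policy: local
plugin: rate-limiting
```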
Next, apply the konghq.com/plugins annotation to the K3s service that needs rate limiting.
$ k3s kubectl patch svc echo -p '{"metadata":{"annotations":{"konghq.com/plugins": "rl-by-ip\n"}}}'
service/echo patched
Now, any request sent to this service will be protected by the rate limit enforced by Kong:
$ curl -I $PROXY_IP/foo
HTTP/1.1 200 OK
Content-Type: text/plain; charset=UTF-8
Connection: keep-alive
Date: Mon, 29 Jun 2020 20:35:40 GMT
Server: echoserver
X-RateLimit-Remaining-Minute: 4
X-RateLimit-Limit-Minute: 5
RateLimit-Remaining: 4
RateLimit-Limit: 5
RateLimit-Reset: 20
X-Kong-Upstream-Latency: 5
X-Kong-Proxy-Latency: 2
Via: kong/2.0.4
Even this small exercise shows that K3s opens up endless possibilities, since you can attach any plugin to any Ingress path or service. You can find all of the plugins on Kong Hub. This comes in very handy for home automation projects; you can even run K3s on a Raspberry Pi and use the various plugins to extend it further.
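As one last illustration of that pattern (a sketch only, not something deployed above), any other plugin from Kong Hub is declared the same way. For example, a correlation-id plugin that tags each request passing through Kong with a unique ID could look like:

```yaml
# Sketch only: the correlation-id plugin injects a unique ID header
# into each proxied request, which helps trace requests across services.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: add-request-id
config:
  header_name: X-Request-Id
  generator: uuid
  echo_downstream: true
plugin: correlation-id
```

It would then be attached to a service or Ingress with the same konghq.com/plugins annotation used for rl-by-ip above.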