This article walks through deploying a gRPC service with Knative Serving.
The sample can be used to try out gRPC, HTTP/2, and custom port configuration in a Knative service.
The container image is built with two binaries: the server and the client. This is done for ease of testing and is not recommended for production containers.
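The sample repository ships its own Dockerfile, which may differ from this; as a rough sketch (base images and the `./cmd/server` / `./cmd/client` directory layout are assumptions), a multi-stage build that puts both binaries into one image could look like:

```dockerfile
# Hypothetical sketch only; see the Dockerfile in the sample repo for the real build.
FROM golang:1.14 AS build
WORKDIR /src
COPY . .
# Build both binaries into the same image (convenient for testing, not for production).
RUN CGO_ENABLED=0 go build -o /server ./cmd/server && \
    CGO_ENABLED=0 go build -o /client ./cmd/client

FROM gcr.io/distroless/static
COPY --from=build /server /server
COPY --from=build /client /client
# The server is the default entrypoint; the client is run by overriding CMD.
CMD ["/server"]
```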
1: Clone the code repository

```shell
git clone -b "release-0.14" https://github.com/knative/docs knative-docs
cd knative-docs/docs/serving/samples/grpc-ping-go
```
2: Build and push the container image

Use Docker to build a container image for this service and push it to Docker Hub.

Replace {username} with your Docker Hub username, then run the following commands:

```shell
# Build the container on your local machine.
docker build --tag "{username}/grpc-ping-go" .

# Push the container to the Docker registry.
docker push "{username}/grpc-ping-go"
```
3: Update the service.yaml file in the project to reference the image published in step 2.

Replace {username} in service.yaml with your Docker Hub username:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: grpc-ping
  namespace: default
spec:
  template:
    spec:
      containers:
      - image: docker.io/{username}/grpc-ping-go
        ports:
        - name: h2c
          containerPort: 8080
```
4: Deploy the service with kubectl

```shell
kubectl apply --filename service.yaml
service.serving.knative.dev/grpc-ping created
```
After deployment, you can use kubectl commands to inspect the resources that were created.
First, check the Knative Service:
```shell
# This will show the Knative service that we created:
kubectl get ksvc grpc-ping --output yaml

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    serving.knative.dev/creator: jenkins
    serving.knative.dev/lastModifier: jenkins
  creationTimestamp: "2020-05-13T01:55:44Z"
  generation: 1
  name: grpc-ping
  namespace: default
  resourceVersion: "2201773"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/services/grpc-ping
  uid: 7977a697-c413-459f-852c-60e5adf3dccc
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containerConcurrency: 0
      containers:
      - image: docker.io/iyacontrol/grpc-ping-go
        name: user-container
        ports:
        - containerPort: 8080
          name: h2c
        readinessProbe:
          successThreshold: 1
          tcpSocket:
            port: 0
        resources: {}
      timeoutSeconds: 300
  traffic:
  - latestRevision: true
    percent: 100
status:
  address:
    url: http://grpc-ping.default.svc.cluster.local
  conditions:
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: ConfigurationsReady
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: RoutesReady
  latestCreatedRevisionName: grpc-ping-gcltn
  latestReadyRevisionName: grpc-ping-gcltn
  observedGeneration: 1
  traffic:
  - latestRevision: true
    percent: 100
    revisionName: grpc-ping-gcltn
  url: http://grpc-ping.default.serverless.xx.me
```
Check the Knative Route:
```shell
# This will show the Route, created by the service:
kubectl get route grpc-ping --output yaml

apiVersion: serving.knative.dev/v1
kind: Route
metadata:
  annotations:
    serving.knative.dev/creator: jenkins
    serving.knative.dev/lastModifier: jenkins
  creationTimestamp: "2020-05-13T01:55:44Z"
  finalizers:
  - routes.serving.knative.dev
  generation: 1
  labels:
    serving.knative.dev/service: grpc-ping
  name: grpc-ping
  namespace: default
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: grpc-ping
    uid: 7977a697-c413-459f-852c-60e5adf3dccc
  resourceVersion: "2201772"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/routes/grpc-ping
  uid: 8455e488-2ac2-4e1e-8b51-8d20e1836801
spec:
  traffic:
  - configurationName: grpc-ping
    latestRevision: true
    percent: 100
status:
  address:
    url: http://grpc-ping.default.svc.cluster.local
  conditions:
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: AllTrafficAssigned
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: IngressReady
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: Ready
  observedGeneration: 1
  traffic:
  - latestRevision: true
    percent: 100
    revisionName: grpc-ping-gcltn
  url: http://grpc-ping.default.serverless.xx.me
```
Check the Knative Configuration:
```shell
# This will show the Configuration, created by the service:
kubectl get configurations grpc-ping --output yaml

apiVersion: serving.knative.dev/v1
kind: Configuration
metadata:
  annotations:
    serving.knative.dev/creator: jenkins
    serving.knative.dev/lastModifier: jenkins
  creationTimestamp: "2020-05-13T01:55:44Z"
  generation: 1
  labels:
    serving.knative.dev/route: grpc-ping
    serving.knative.dev/service: grpc-ping
  name: grpc-ping
  namespace: default
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: grpc-ping
    uid: 7977a697-c413-459f-852c-60e5adf3dccc
  resourceVersion: "2201750"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/configurations/grpc-ping
  uid: 1a8ab033-7a28-41f8-97ab-cc00560bf613
spec:
  template:
    metadata:
      creationTimestamp: null
    spec:
      containerConcurrency: 0
      containers:
      - image: docker.io/iyacontrol/grpc-ping-go
        name: user-container
        ports:
        - containerPort: 8080
          name: h2c
        readinessProbe:
          successThreshold: 1
          tcpSocket:
            port: 0
        resources: {}
      timeoutSeconds: 300
status:
  conditions:
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: Ready
  latestCreatedRevisionName: grpc-ping-gcltn
  latestReadyRevisionName: grpc-ping-gcltn
  observedGeneration: 1
```
From the Configuration we can see that the Revision's name is grpc-ping-gcltn, so next we check the Knative Revision:
```shell
# This will show the Revision, created by the Configuration:
kubectl get revisions grpc-ping-gcltn -o yaml

apiVersion: serving.knative.dev/v1
kind: Revision
metadata:
  annotations:
    serving.knative.dev/creator: jenkins
    serving.knative.dev/lastPinned: "1589334954"
  creationTimestamp: "2020-05-13T01:55:44Z"
  generateName: grpc-ping-
  generation: 1
  labels:
    serving.knative.dev/configuration: grpc-ping
    serving.knative.dev/configurationGeneration: "1"
    serving.knative.dev/route: grpc-ping
    serving.knative.dev/service: grpc-ping
  name: grpc-ping-gcltn
  namespace: default
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Configuration
    name: grpc-ping
    uid: 1a8ab033-7a28-41f8-97ab-cc00560bf613
  resourceVersion: "2201933"
  selfLink: /apis/serving.knative.dev/v1/namespaces/default/revisions/grpc-ping-gcltn
  uid: d3d13c00-1aa9-44e6-979d-60b50d40b519
spec:
  containerConcurrency: 0
  containers:
  - image: docker.io/iyacontrol/grpc-ping-go
    name: user-container
    ports:
    - containerPort: 8080
      name: h2c
    readinessProbe:
      successThreshold: 1
      tcpSocket:
        port: 0
    resources: {}
  timeoutSeconds: 300
status:
  conditions:
  - lastTransitionTime: "2020-05-13T01:56:54Z"
    message: The target is not receiving traffic.
    reason: NoTraffic
    severity: Info
    status: "False"
    type: Active
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: ContainerHealthy
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: Ready
  - lastTransitionTime: "2020-05-13T01:55:54Z"
    status: "True"
    type: ResourcesAvailable
  imageDigest: index.docker.io/iyacontrol/grpc-ping-go@sha256:bfe8362fd0f7ccf18502688baca084b6ea63b5725bfef287d8d7dcef9320a17b
  logUrl: http://localhost:8001/api/v1/namespaces/knative-monitoring/services/kibana-logging/proxy/app/kibana#/discover?_a=(query:(match:(kubernetes.labels.knative-dev%2FrevisionUID:(query:'d3d13c00-1aa9-44e6-979d-60b50d40b519',type:phrase))))
  observedGeneration: 1
  serviceName: grpc-ping-gcltn
```
5: Test the service

Testing a gRPC service requires a gRPC client built from the same protobuf definitions that the server uses.

The Dockerfile builds the client binary as well. To run the client, use the same container image that was deployed for the server, overriding the entrypoint command so that the client binary runs instead of the server binary.

Replace {username} with your Docker Hub username, then run the following command:
```shell
docker run --rm {username}/grpc-ping-go \
  /client \
  -server_addr="grpc-ping.default.serverless.xx.me:80" \
  -insecure
```

The arguments after the container tag {username}/grpc-ping-go replace the entrypoint command defined by the Dockerfile's CMD statement.
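The flags after /client are parsed by the client binary itself, not by Docker. The sample's real client lives in the knative/docs repo; as a minimal, hypothetical re-creation of just its flag handling (flag names are taken from the command above, the defaults here are assumptions), the idea looks like this in Go:

```go
package main

import (
	"flag"
	"fmt"
)

// clientConfig mirrors, hypothetically, the options the sample client accepts.
type clientConfig struct {
	ServerAddr string // host:port of the gRPC server
	Insecure   bool   // dial without TLS (cleartext HTTP/2)
}

// parseClientFlags parses the arguments that follow the image name in
// `docker run`, e.g. ["-server_addr=host:80", "-insecure"].
func parseClientFlags(args []string) (clientConfig, error) {
	fs := flag.NewFlagSet("client", flag.ContinueOnError)
	addr := fs.String("server_addr", "127.0.0.1:8080", "host:port of the gRPC server")
	insec := fs.Bool("insecure", false, "dial without TLS")
	if err := fs.Parse(args); err != nil {
		return clientConfig{}, err
	}
	return clientConfig{ServerAddr: *addr, Insecure: *insec}, nil
}

func main() {
	// Same arguments as the docker run command above.
	cfg, err := parseClientFlags([]string{
		"-server_addr=grpc-ping.default.serverless.xx.me:80",
		"-insecure",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("addr=%s insecure=%v\n", cfg.ServerAddr, cfg.Insecure)
	// prints: addr=grpc-ping.default.serverless.xx.me:80 insecure=true
}
```

The -insecure flag matters here because the request enters the cluster over plain HTTP on port 80; the real client would use it to choose plaintext gRPC dial options.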
After running the test, you should see output similar to:

```
2020/05/13 02:06:43 Ping got hello - pong
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466829984 +0000 UTC m=+1.361228108
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466881062 +0000 UTC m=+1.361279193
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466890156 +0000 UTC m=+1.361288283
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466896804 +0000 UTC m=+1.361294929
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466908132 +0000 UTC m=+1.361306260
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466915748 +0000 UTC m=+1.361313871
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466926437 +0000 UTC m=+1.361324564
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466934259 +0000 UTC m=+1.361332383
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466945454 +0000 UTC m=+1.361343587
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466953871 +0000 UTC m=+1.361351996
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.4669644 +0000 UTC m=+1.361362524
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466971662 +0000 UTC m=+1.361369790
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466985621 +0000 UTC m=+1.361383746
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.466993072 +0000 UTC m=+1.361391202
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467000507 +0000 UTC m=+1.361398632
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467007443 +0000 UTC m=+1.361405566
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467026014 +0000 UTC m=+1.361424141
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467034894 +0000 UTC m=+1.361433022
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467044127 +0000 UTC m=+1.361442256
2020/05/13 02:06:43 Got pong 2020-05-13 02:06:43.467052183 +0000 UTC m=+1.361450308
```

The test passed.
When deploying a gRPC project, the port in the Service spec needs some special handling:

```yaml
ports:
  - name: h2c
    containerPort: 8080
```

The port name is h2c. h2 is HTTP/2 over TLS (protocol negotiation via ALPN), while h2c is HTTP/2 over cleartext TCP.
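For comparison, if the port name is omitted, Knative Serving treats the container protocol as HTTP/1.1 (the http1 default), and gRPC calls to the container will fail; naming the port h2c tells Knative to forward cleartext HTTP/2 to the container. A minimal sketch of the two variants:

```yaml
# Default: no name (equivalent to name: http1), traffic is proxied as HTTP/1.1.
# gRPC requires HTTP/2, so this will not work for a gRPC server:
ports:
  - containerPort: 8080

# gRPC: name the port h2c so the proxy speaks cleartext HTTP/2 to the container:
ports:
  - name: h2c
    containerPort: 8080
```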