SpringBoot 2 Health Checks for Pods in OpenShift

1. Prepare the Spring Boot test project. You will need a Java 8 JDK or greater and Maven 3.3.x or greater.

git clone https://github.com/megadotnet/Openshift-healthcheck-demo.git

This article assumes you are familiar with basic Java application development and already have an OpenShift container platform deployed. Our test project depends on Spring Boot Actuator 2, which brings the following new features:

    • Support for Jersey RESTful web services
    • Support for the reactive WebFlux web stack
    • New endpoint mappings
    • Simplified creation of user-defined endpoints
    • Improved endpoint security

Actuator provides 13 endpoints, listed below:

[screenshot: list of Actuator endpoints]

For safety, Spring Boot 2.x exposes only two Actuator endpoints by default, /actuator/health and /actuator/info; the others can be switched on in the configuration file.
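For example, additional endpoints can be opened with the standard Spring Boot 2 Actuator exposure properties in application.properties; a minimal sketch:

```properties
# Expose selected Actuator endpoints over HTTP (the default is health,info only)
management.endpoints.web.exposure.include=health,info,metrics
# Or expose everything (not recommended outside test environments)
# management.endpoints.web.exposure.include=*
# Include per-HealthIndicator details in the /actuator/health response
management.endpoint.health.show-details=always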

Deploy the compiled jar to the OpenShift container platform.

The OpenShift deployment flow is summarized below (binary deployment is used here):

--> Found image dc046fe (16 months old) in image stream "openshift/s2i-java" under tag "latest" for "s2i-java:latest"

Java S2I builder 1.0

--------------------

Platform for building Java (fatjar) applications with maven or gradle

Tags: builder, maven-3, gradle-2.6, java, microservices, fatjar

* A source build using binary input will be created

* The resulting image will be pushed to image stream "health-demo:latest"

* A binary build was created, use 'start-build --from-dir' to trigger a new build

--> Creating resources with label app=health-demo ...

imagestream "health-demo" created

buildconfig "health-demo" created

--> Success

Uploading directory "oc-build" as binary input for the build ...

build "health-demo-1" started

--> Found image fb46616 (5 minutes old) in image stream "hshreport-stage/health-demo" under tag "latest" for "health-demo:latest"

Java S2I builder 1.0

--------------------

Platform for building Java (fatjar) applications with maven or gradle

Tags: builder, maven-3, gradle-2.6, java, microservices, fatjar

* This image will be deployed in deployment config "health-demo"

* Ports 7575/tcp, 8080/tcp will be load balanced by service "health-demo"

* Other containers can access this service through the hostname "health-demo"

--> Creating resources with label app=health-demo ...

deploymentconfig "health-demo" created

service "health-demo" created

--> Success

Run 'oc status' to view your app.

route "health-demo" exposed

At the end of the process above, a route is also exposed to make the demo easier.


Demo walkthrough:

Round one

Edit the YAML of the deploymentConfig we just deployed, adding a readiness probe:

---
readinessProbe:
   failureThreshold: 3
   httpGet:
     path: /actuator/health
     port: 8080
     scheme: HTTP
   initialDelaySeconds: 10
   periodSeconds: 10
   successThreshold: 1
   timeoutSeconds: 1

These parameters need to be understood:

      • initialDelaySeconds: how many seconds to wait after the container starts before the first probe runs.
      • periodSeconds: how often to probe. Default 10 seconds; minimum 1.
      • timeoutSeconds: probe timeout. Default 1 second; minimum 1.
      • successThreshold: after a failure, the minimum number of consecutive successes before the probe is considered successful. Default 1; must be 1 for liveness; minimum 1.
      • failureThreshold: after a success, the minimum number of consecutive failures before the probe is considered failed. Default 3; minimum 1.
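As a quick sanity check on these numbers, a rough upper bound on how long it takes the kubelet to mark the container unready is periodSeconds × failureThreshold, plus up to one timeout for the last probe. A small sketch (an estimate, not an exact kubelet guarantee):

```python
def worst_case_detection_seconds(period_seconds: int,
                                 failure_threshold: int,
                                 timeout_seconds: int) -> int:
    # failureThreshold consecutive probes must fail, one per period;
    # the final probe may hang for up to timeoutSeconds before it counts.
    return period_seconds * failure_threshold + timeout_seconds

# With the readiness settings above (period=10, failureThreshold=3, timeout=1):
print(worst_case_detection_seconds(10, 3, 1))  # 31
```

So with the defaults, a dead endpoint can go unnoticed for about half a minute before traffic stops being routed to the pod.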

Alternatively, configure the readiness probe with the OpenShift CLI:

oc set probe dc/app-cli \
  --readiness \
  --get-url=http://:8080/notreal \
  --initial-delay-seconds=5


$ oc get pod -w

# the deploymentConfig change triggered a pod redeployment

NAME READY                 STATUS   RESTARTS  AGE

health-demo-1-build 0/1 Completed    0        16m

health-demo-2-sqh4z 1/1 Running      0        11m

Call the HTTP API to stop Tomcat: curl http://${value-name-app}-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/stop. Note that the exact URL depends on the DNS setup of your deployment.

The application log shows:

Stopping Tomcat context.

2020-01-11 22:17:21.004 INFO 1 --- [nio-8080-exec-9] o.apache.catalina.core.StandardWrapper : Waiting for [1] instance(s) to be deallocated for Servlet [dispatcherServlet]

2020-01-11 22:17:22.008 INFO 1 --- [nio-8080-exec-9] o.apache.catalina.core.StandardWrapper : Waiting for [1] instance(s) to be deallocated for Servlet [dispatcherServlet]

2020-01-11 22:17:23.012 INFO 1 --- [nio-8080-exec-9] o.apache.catalina.core.StandardWrapper : Waiting for [1] instance(s) to be deallocated for Servlet [dispatcherServlet]

2020-01-11 22:17:23.114 INFO 1 --- [nio-8080-exec-9] o.a.c.c.C.[Tomcat].[localhost].[/] : Destroying Spring FrameworkServlet 'dispatcherServlet'

# watch the pod status

$ oc get pod -w

NAME READY STATUS RESTARTS AGE

health-demo-1-build 0/1 Completed 0 16m

health-demo-2-sqh4z 1/1 Running 0 11m

health-demo-2-sqh4z 0/1 Running 0 13m

# inspect the pod's detailed description

$ oc describe pod/health-demo-2-sqh4z

Name: health-demo-2-sqh4z

Namespace: hshreport-stage

Security Policy: restricted

Node: openshift-lb-02.hsh.io/10.108.78.145

Start Time: Sat, 11 Jan 2020 22:08:59 +0800

Labels: app=health-demo

deployment=health-demo-2

deploymentconfig=health-demo

Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hshreport-stage","name":"health-demo-2","uid":"e6436263-347b-11ea-856c...

openshift.io/deployment-config.latest-version=2

openshift.io/deployment-config.name=health-demo

openshift.io/deployment.name=health-demo-2

openshift.io/generated-by=OpenShiftNewApp

openshift.io/scc=restricted

Status: Running

IP: 10.131.5.124

Controllers: ReplicationController/health-demo-2

Containers:

health-demo:

Container ID: docker://25cdf63f55d839610287b4e2a3cc67182377bfe5010990357f83329286c7e64f

Image: docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6

Image ID: docker-pullable://docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6

Ports: 7575/TCP, 8080/TCP

State: Running

Started: Sat, 11 Jan 2020 22:09:09 +0800

Ready: False

Restart Count: 0

Readiness: http-get http://:8080/actuator/health delay=10s timeout=1s period=10s #success=1 #failure=3

Environment:

APP_OPTIONS: -Xmx512m -Xss512k -Djava.net.preferIPv4Stack=true -Dfile.encoding=utf-8

DEPLOYER: liu.xxxxx (Administrator) (cicd-1.1.24)

REVISION:

SPRING_PROFILES_ACTIVE: stage

TZ: Asia/Shanghai

Mounts:

/var/run/secrets/kubernetes.io/serviceaccount from default-token-n4klp (ro)

Conditions:

Type Status

Initialized True

Ready False

PodScheduled True

Volumes:

default-token-n4klp:

Type: Secret (a volume populated by a Secret)

SecretName: default-token-n4klp

Optional: false

QoS Class: BestEffort

Node-Selectors: region=primary

Tolerations: <none>

Events:

FirstSeen LastSeen Count From SubObjectPath Type Reason Message

--------- -------- ----- ---- ------------- -------- ------ -------

16m 16m 1 default-scheduler Normal Scheduled Successfully assigned health-demo-2-sqh4z to openshift-lb-02.hsh.io

16m 16m 1 kubelet, openshift-lb-02.hsh.io spec.containers{health-demo} Normal Pulling pulling image "docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6"

16m 16m 1 kubelet, openshift-lb-02.hsh.io spec.containers{health-demo} Normal Pulled Successfully pulled image "docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6"

15m 15m 1 kubelet, openshift-lb-02.hsh.io spec.containers{health-demo} Normal Created Created container

15m 15m 1 kubelet, openshift-lb-02.hsh.io spec.containers{health-demo} Normal Started Started container

15m 15m 1 kubelet, openshift-lb-02.hsh.io spec.containers{health-demo} Warning Unhealthy Readiness probe failed: Get http://10.131.5.124:8080/actuator/health: dial tcp 10.131.5.124:8080: getsockopt: connection refused

7m 5m 16 kubelet, openshift-lb-02.hsh.io spec.containers{health-demo} Warning Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404

Note the Warning events above: the pod was not restarted, because we only configured a readiness probe.

Round two

Add a health check to the app deployed earlier; /actuator/health is the default health check endpoint of the Spring Boot 2.0 sample project.

Edit the deploymentConfig to add both readiness and liveness probes:

---
livenessProbe:
   failureThreshold: 3
   httpGet:
     path: /actuator/health
     port: 8080
     scheme: HTTP
   initialDelaySeconds: 60
   periodSeconds: 10
   successThreshold: 1
   timeoutSeconds: 1
name: health-demo
ports:
  - containerPort: 7575
    protocol: TCP
  - containerPort: 8080
    protocol: TCP
readinessProbe:
   failureThreshold: 3
   httpGet:
     path: /actuator/health
     port: 8080
     scheme: HTTP
   initialDelaySeconds: 10
   periodSeconds: 10
   successThreshold: 1
   timeoutSeconds: 1

You can also make the same change in the Web Console:

[screenshot: readiness/liveness probe settings in the Web Console]

The Web UI fields correspond one-to-one to the parameters in the YAML.

The oc CLI method: # configure liveness/readiness probes on DCs

oc set probe dc cotd1 --liveness -- echo ok

oc set probe dc/cotd1 --readiness --get-url=http://:8080/index.php --initial-delay-seconds=2

A TCP example:

oc set probe dc/blog --readiness --liveness --open-tcp 8080

Removing probes:

$ oc set probe dc/blog --readiness --liveness --remove

After the stop call, the pod is still running, but requests through the router now return:

Application is not available

After the STOP URL is called, Tomcat stops; part of the pod's application log:

Stopping Tomcat context.

2020-01-11 22:17:21.004 INFO 1 --- [nio-8080-exec-9] o.apache.catalina.core.StandardWrapper : Waiting for [1] instance(s) to be deallocated for Servlet [dispatcherServlet]

2020-01-11 22:17:22.008 INFO 1 --- [nio-8080-exec-9] o.apache.catalina.core.StandardWrapper : Waiting for [1] instance(s) to be deallocated for Servlet [dispatcherServlet]

2020-01-11 22:17:23.012 INFO 1 --- [nio-8080-exec-9] o.apache.catalina.core.StandardWrapper : Waiting for [1] instance(s) to be deallocated for Servlet [dispatcherServlet]

2020-01-11 22:17:23.114 INFO 1 --- [nio-8080-exec-9] o.a.c.c.C.[Tomcat].[localhost].[/] : Destroying Spring FrameworkServlet 'dispatcherServlet'

A little later, we watch the pod:

$ oc get pod -w

NAME READY STATUS RESTARTS AGE

health-demo-1-build 0/1 Completed 0 33m

health-demo-3-02v11 1/1 Running 0 5m

health-demo-3-02v11 0/1 Running 0 7m

health-demo-3-02v11 0/1 Running 1 7m

$ oc get pod

NAME READY STATUS RESTARTS AGE

health-demo-1-build 0/1 Completed 0 36m

health-demo-3-02v11 1/1 Running 1 8m


請求 curl http://${value-name-app}-MY_PROJECT_NAME.LOCAL_OPENSHIFT_HOSTNAME/api/greeting?name=s2i

後瀏覽器顯示:

{"content":"Hello, s2i!"} (the recovery took 41.783 seconds)

By now the pod has been restarted; the application log shows:

2020-01-11 22:35:13.597 INFO 1 --- [ main] s.b.a.e.w.s.WebMvcEndpointHandlerMapping : Mapped "{[/actuator],methods=[GET],produces=[application/vnd.spring-boot.actuator.v2+json || application/json]}" onto protected java.util.Map<java.lang.String, java.util.Map<java.lang.String, org.springframework.boot.actuate.endpoint.web.Link>> org.springframework.boot.actuate.endpoint.web.servlet.WebMvcEndpointHandlerMapping.links(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)

2020-01-11 22:35:13.750 INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup

2020-01-11 22:35:13.873 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''

2020-01-11 22:35:13.882 INFO 1 --- [ main] dev.snowdrop.example.ExampleApplication : Started ExampleApplication in 8.061 seconds (JVM running for 9.682)

2020-01-11 22:35:22.445 INFO 1 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring FrameworkServlet 'dispatcherServlet'

2020-01-11 22:35:22.445 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization started

2020-01-11 22:35:22.485 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 39 ms

$ oc describe pod/health-demo-3-02v11

Name: health-demo-3-02v11

Namespace: hshreport-stage

Security Policy: restricted

Node: openshift-node-04.hsh.io/10.108.78.139

Start Time: Sat, 11 Jan 2020 22:32:12 +0800

Labels: app=health-demo

deployment=health-demo-3

deploymentconfig=health-demo

Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hshreport-stage","name":"health-demo-3","uid":"23ad2f21-347f-11ea-856c...

openshift.io/deployment-config.latest-version=3

openshift.io/deployment-config.name=health-demo

openshift.io/deployment.name=health-demo-3

openshift.io/generated-by=OpenShiftNewApp

openshift.io/scc=restricted

Status: Running

IP: 10.129.5.178

Controllers: ReplicationController/health-demo-3

Containers:

health-demo:

Container ID: docker://3e5a6b081022c914d8e118dce829294570e54f441b84394a2b13f6eebb4f5c74

Image: docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6

Image ID: docker-pullable://docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6

Ports: 7575/TCP, 8080/TCP

State: Running

Started: Sat, 11 Jan 2020 22:35:04 +0800

Last State: Terminated

Reason: Error

Exit Code: 143

Started: Sat, 11 Jan 2020 22:32:15 +0800

Finished: Sat, 11 Jan 2020 22:35:03 +0800

Ready: True

Restart Count: 1

Liveness: http-get http://:8080/actuator/health delay=60s timeout=1s period=10s #success=1 #failure=3

Readiness: http-get http://:8080/actuator/health delay=10s timeout=1s period=10s #success=1 #failure=3

Environment:

APP_OPTIONS: -Xmx512m -Xss512k -Djava.net.preferIPv4Stack=true -Dfile.encoding=utf-8

DEPLOYER: liu.xxxxxx(Administrator) (cicd-1.1.24)

REVISION:

SPRING_PROFILES_ACTIVE: stage

TZ: Asia/Shanghai

Mounts:

/var/run/secrets/kubernetes.io/serviceaccount from default-token-n4klp (ro)

Conditions:

Type Status

Initialized True

Ready True

PodScheduled True

Volumes:

default-token-n4klp:

Type: Secret (a volume populated by a Secret)

SecretName: default-token-n4klp

Optional: false

QoS Class: BestEffort

Node-Selectors: region=primary

Tolerations: <none>

Events:

FirstSeen LastSeen Count From SubObjectPath Type Reason Message

--------- -------- ----- ---- ------------- -------- ------ -------

17m 17m 1 default-scheduler Normal Scheduled Successfully assigned health-demo-3-02v11 to openshift-node-04.hsh.io

15m 14m 3 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Warning Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 404

15m 14m 3 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Warning Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404

17m 14m 2 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Pulling pulling image "docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6"

17m 14m 2 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Pulled Successfully pulled image "docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6"

17m 14m 2 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Created Created container

14m 14m 1 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Killing Killing container with id docker://health-demo:pod "health-demo-3-02v11_hshreport-stage(27e5a1da-347f-11ea-856c-0050568d3d78)" container "health-demo" is unhealthy, it will be killed and re-created.

17m 14m 2 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Started Started container

We call the STOP HTTP API a second time:

$ oc get pod -w

NAME READY STATUS RESTARTS AGE

health-demo-1-build 0/1 Completed 0 47m

health-demo-3-02v11 1/1 Running 1 19m

health-demo-3-02v11 0/1 Running 1 19m

health-demo-3-02v11 0/1 Running 2 20m

health-demo-3-02v11 1/1 Running 2 20m

$ oc get pod

NAME READY STATUS RESTARTS AGE

health-demo-1-build 0/1 Completed 0 49m

health-demo-3-02v11 1/1 Running 2 21m

The HTTP request returns:

{"content":"Hello, s2i!"} (the recovery took 51.984 seconds)

$ oc describe pod/health-demo-3-02v11

Name: health-demo-3-02v11

Namespace: hshreport-stage

Security Policy: restricted

Node: openshift-node-04.hsh.io/10.108.78.139

Start Time: Sat, 11 Jan 2020 22:32:12 +0800

Labels: app=health-demo

deployment=health-demo-3

deploymentconfig=health-demo

Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"hshreport-stage","name":"health-demo-3","uid":"23ad2f21-347f-11ea-856c...

openshift.io/deployment-config.latest-version=3

openshift.io/deployment-config.name=health-demo

openshift.io/deployment.name=health-demo-3

openshift.io/generated-by=OpenShiftNewApp

openshift.io/scc=restricted

Status: Running

IP: 10.129.5.178

Controllers: ReplicationController/health-demo-3

Containers:

health-demo:

Container ID: docker://e12d1975aa26b07643ae1666ae6bce7ceab4f25fb4c6c947427ba526ad6fdf7b

Image: docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6

Image ID: docker-pullable://docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6

Ports: 7575/TCP, 8080/TCP

State: Running

Started: Sat, 11 Jan 2020 22:47:14 +0800

Last State: Terminated

Reason: Error

Exit Code: 143

Started: Sat, 11 Jan 2020 22:35:04 +0800

Finished: Sat, 11 Jan 2020 22:47:02 +0800

Ready: True

Restart Count: 2

Liveness: http-get http://:8080/actuator/health delay=60s timeout=1s period=10s #success=1 #failure=3

Readiness: http-get http://:8080/actuator/health delay=10s timeout=1s period=10s #success=1 #failure=3

Environment:

APP_OPTIONS: -Xmx512m -Xss512k -Djava.net.preferIPv4Stack=true -Dfile.encoding=utf-8

DEPLOYER: liu.xxxxx (Administrator) (cicd-1.1.24)

REVISION:

SPRING_PROFILES_ACTIVE: stage

TZ: Asia/Shanghai

Mounts:

/var/run/secrets/kubernetes.io/serviceaccount from default-token-n4klp (ro)

Conditions:

Type Status

Initialized True

Ready True

PodScheduled True

Volumes:

default-token-n4klp:

Type: Secret (a volume populated by a Secret)

SecretName: default-token-n4klp

Optional: false

QoS Class: BestEffort

Node-Selectors: region=primary

Tolerations: <none>

Events:

FirstSeen LastSeen Count From SubObjectPath Type Reason Message

--------- -------- ----- ---- ------------- -------- ------ -------

21m 21m 1 default-scheduler Normal Scheduled Successfully assigned health-demo-3-02v11 to openshift-node-04.hsh.io

21m 6m 3 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Pulling pulling image "docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6"

19m 6m 6 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Warning Unhealthy Liveness probe failed: HTTP probe failed with statuscode: 404

19m 6m 6 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Warning Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 404

18m 6m 2 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Killing Killing container with id docker://health-demo:pod "health-demo-3-02v11_hshreport-stage(27e5a1da-347f-11ea-856c-0050568d3d78)" container "health-demo" is unhealthy, it will be killed and re-created.

21m 6m 3 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Pulled Successfully pulled image "docker-registry.default.svc:5000/hshreport-stage/health-demo@sha256:292f09b7d9ca9bc12560febe3f4ba73e50b3c1a5701cbd55689186e844157fb6"

21m 6m 3 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Created Created container

21m 6m 3 kubelet, openshift-node-04.hsh.io spec.containers{health-demo} Normal Started Started container

# check the events

$ oc get ev

LASTSEEN FIRSTSEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE

29m 41m 6 health-demo-3-02v11 Pod spec.containers{health-demo} Warning Unhealthy kubelet, openshift-node-04.hsh.io Liveness probe failed: HTTP probe failed with statuscode: 404

29m 41m 6 health-demo-3-02v11 Pod spec.containers{health-demo} Warning Unhealthy kubelet, openshift-node-04.hsh.io Readiness probe failed: HTTP probe failed with statuscode: 404

29m 41m 2 health-demo-3-02v11 Pod spec.containers{health-demo} Normal Killing kubelet, openshift-node-04.hsh.io Killing container with id docker://health-demo:pod "health-demo-3-02v11_hshreport-stage(27e5a1da-347f-11ea-856c-0050568d3d78)" container "health-demo" is unhealthy, it will be killed and re-created.

19m 19m 1 health-demo-3-02v11 Pod spec.containers{health-demo} Normal Killing kubelet, openshift-node-04.hsh.io Killing container with id docker://health-demo:Need to kill Pod

44m 44m 1 health-demo-3-deploy Pod Normal Scheduled default-scheduler Successfully assigned health-demo-3-deploy to openshift-lb-02.hsh.io

44m 44m 1 health-demo-3-deploy Pod spec.containers{deployment} Normal Pulled kubelet, openshift-lb-02.hsh.io Container image "openshift/origin-deployer:v3.6.1" already present on machine

44m 44m 1 health-demo-3-deploy Pod spec.containers{deployment} Normal Created kubelet, openshift-lb-02.hsh.io Created container

44m 44m 1 health-demo-3-deploy Pod spec.containers{deployment} Normal Started kubelet, openshift-lb-02.hsh.io Started container

44m 44m 1 health-demo-3 ReplicationController Normal SuccessfulCreate replication-controller Created pod: health-demo-3-02v11

19m 19m 1 health-demo-3 ReplicationController Normal SuccessfulDelete

Note that the pod name health-demo-3-02v11 did not change. That concludes the demo.

Summary

Liveness probes can perform three kinds of checks:

HTTP(S) checks—Checks a given URL endpoint served by the container, and evaluates the HTTP response code.

Container execution check—A command, typically a script, that’s run at intervals to verify that the container is behaving as expected. A non-zero exit code from the command results in a liveness check failure.

TCP socket checks—Checks that a TCP connection can be established on a specific TCP port in the application pod.
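The three check types above map to `httpGet`, `exec`, and `tcpSocket` stanzas in the probe spec. A sketch of the two non-HTTP variants (the command and file path are illustrative, not from the demo project):

```yaml
livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - test -f /tmp/app-alive   # illustrative check; non-zero exit = failure
  periodSeconds: 10
---
livenessProbe:
  tcpSocket:
    port: 8080                   # passes if a TCP connection can be opened
  periodSeconds: 10
```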


The difference between readiness and liveness

     A readiness probe asks "can this pod accept traffic?"; a liveness probe asks "is this process alive?". When a readiness probe fails, the pod's IP is removed from the endpoints of every Service that matches it, so no Service forwards requests to the pod any more. When a liveness probe fails, the container is killed outright, and if the restart policy is Always the pod is restarted.

Both probes monitor the state of the container process. The difference is that the readiness probe decides whether the process's address stays in the Service load-balancing list, while the liveness probe decides whether to restart the process to recover from a fault. They exist throughout the process's lifetime and work at the same time, with separate responsibilities.

The kubelet can optionally run, and react to, two kinds of probes on a container:

  • livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is then subject to its restart policy. If the container does not provide a liveness probe, the default state is Success.
  • readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the pod's IP address from the endpoints of all Services that match the pod. Before the initial delay, the readiness state defaults to Failure. If the container does not provide a readiness probe, the default state is Success.


Best practices

      In general it is better to give the liveness probe a somewhat longer interval than the readiness probe. When a backend process is under heavy load, we can temporarily take it out of the forwarding list; but the liveness probe decides whether the process gets restarted, and at that moment a restart is usually not what is needed. So the liveness check period can be a little longer and its failure tolerance a little higher; adjust to your actual situation.
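Following that advice, one possible starting point for the demo app (the numbers are illustrative, not universal recommendations):

```yaml
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  periodSeconds: 10        # react quickly to overload: stop routing traffic
  failureThreshold: 3
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  periodSeconds: 30        # a restart is disruptive, so probe less often
  failureThreshold: 5      # and tolerate more consecutive failures
```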



That's all for today. I hope it serves as a useful reference for cloud native practice, technical leadership, enterprise management, architecture design and review, team management, project management, product management, and team building.


Author: Petter Liu
Source: http://www.cnblogs.com/wintersun/ The copyright of this article is shared by the author and cnblogs. Reposting is welcome, but this statement must be kept and a clear link to the original article given on the page unless the author agrees otherwise; the right to pursue legal liability is reserved. This article is also published on my independent blog, Petter Liu Blog.
