KubeSphere Troubleshooting in Practice (Part 3)

Following on from the previous two posts:

While continuing to use KubeSphere I have kept recording the problems I ran into, in the hope that they help others and that we can all enjoy this silky-smooth container management platform.

14. Deleting a container stuck in Terminating

Consul had previously been deployed with Helm and was later removed:

[root@master ~]# helm delete consul --purge

One of the consul pods stayed stuck in the Terminating state:

[root@master ~]# kubectl get pods -n common-service
NAME             READY   STATUS        RESTARTS   AGE
consul-1         1/2     Terminating   1          24d
redis-master-0   1/1     Running       1          17d
redis-slave-0    1/1     Running       1          8d
redis-slave-1    1/1     Running       1          17d

Describe the pod to check its status:

[root@master ~]# kubectl describe pods consul-1 -n common-service

Events:
  Type     Reason      Age                     From             Message
  ----     ------      ----                    ----             -------
  Warning  FailedSync  3m41s (x4861 over 22h)  kubelet, node02  error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Suggested handling:

  • Upgrade to Docker 18. That release uses a newer containerd and fixes many bugs.
  • If a pod gets stuck in Terminating, it is better to have a container expert investigate first; force-deleting it directly is not recommended, since it may cause business-level problems.

This is suspected to be a bug in dockerd 17. The pod can be force deleted with kubectl -n cn-staging delete pod apigateway-6dc48bf8b6-clcwk --force --grace-period=0, but docker ps on the node still shows the container:

[root@master ~]# kubectl -n common-service delete pod consul-1 --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "consul-1" force deleted
[root@master ~]# kubectl get pods -n common-service
NAME             READY   STATUS    RESTARTS   AGE
redis-master-0   1/1     Running   1          17d
redis-slave-0    1/1     Running   1          8d
redis-slave-1    1/1     Running   1          17d

Check on node02:

[root@node02 ~]# docker ps -a |grep consul
b5ea9ace7779        fc6c0a74553d                                    "/entrypoint.sh /run…"   3 weeks ago         Up 3 weeks                                     k8s_consul_consul-1_common-service_5eb39c90-8503-4125-a2f0-63f177e36293_1
13192855eb6f        mirrorgooglecontainers/pause-amd64:3.1          "/pause"                 3 weeks ago         Exited (0) 23 hours ago                        k8s_POD_consul-1_common-service_5eb39c90-8503-4125-a2f0-63f177e36293_0

If a resource still cannot be released, check for finalizers. If a Kubernetes resource's metadata contains finalizers, the resource was usually created by some program that added its own identifier to the finalizers list. This means that when the resource is deleted, the creating program is expected to do pre-deletion cleanup; only after it removes its identifier from the finalizers list will the resource finally be deleted. Resources created by Rancher, for example, carry such finalizer markers.

Suggested handling: use kubectl edit to edit the resource definition by hand and remove the finalizers; when you look at the resource again, you will find it has been deleted.
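
If you prefer not to edit interactively, the finalizers can also be cleared with kubectl patch. A minimal sketch, reusing this example's namespace and pod name:

# Clear the finalizers list so the API server can finish deleting the resource
kubectl -n common-service patch pod consul-1 --type=merge \
  -p '{"metadata":{"finalizers":null}}'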

15. Troubleshooting missing Kubernetes logs

After upgrading from v2.0 to v2.1, KubeSphere no longer showed any logs.

First check whether the pods responsible for log collection (Fluent Bit + Elasticsearch) are healthy:

[root@master ~]# kubectl get po -n kubesphere-logging-system
NAME                                                              READY   STATUS      RESTARTS   AGE
elasticsearch-logging-curator-elasticsearch-curator-158086m9zv5   0/1     Completed   0          2d13h
elasticsearch-logging-curator-elasticsearch-curator-158095fmdlz   0/1     Completed   0          37h
elasticsearch-logging-curator-elasticsearch-curator-158103bwf8f   0/1     Completed   0          13h
elasticsearch-logging-data-0                                      1/1     Running     1          8d
elasticsearch-logging-data-1                                      1/1     Running     774        69d
elasticsearch-logging-discovery-0                                 1/1     Running     478        56d
elasticsearch-logging-kibana-94594c5f-q7sht                       1/1     Running     1          22d
fluent-bit-2b9kj                                                  2/2     Running     2          23h
fluent-bit-bf52m                                                  2/2     Running     2          23h
fluent-bit-pkb9f                                                  2/2     Running     2          22h
fluent-bit-twd98                                                  2/2     Running     2          23h
logging-fluentbit-operator-56c6b84b94-4nzzn                       1/1     Running     1          23h
logsidecar-injector-5cbf7bd868-cr2kh                              1/1     Running     1          11d
logsidecar-injector-5cbf7bd868-mp46g                              1/1     Running     1          22d

Since the logs are known to be stored in Elasticsearch, the ES service in kubesphere-logging-system was exposed as a NodePort and the indices were inspected; only the jaeger indices were found:

curl elasticsearch-logging-data.kubesphere-logging-system.svc:9200/_cat/indices
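
For reference, one way to expose Elasticsearch for this kind of ad-hoc inspection is to patch the service to NodePort; a sketch, using the same service name as the curl above (revert it after debugging):

# Switch the logging Elasticsearch service to NodePort
kubectl -n kubesphere-logging-system patch svc elasticsearch-logging-data -p '{"spec":{"type":"NodePort"}}'
# Show the node port that was assigned
kubectl -n kubesphere-logging-system get svc elasticsearch-logging-data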

The Elasticsearch indices themselves looked normal, so attention turned to Fluent Bit.

Check the Fluent Bit logs:

[root@master ~]# kubectl -n kubesphere-logging-system logs -f fluent-bit-2b9kj -c fluent-bit
I0207 13:53:25.667667       1 fluentbitdaemon.go:135] Start Fluent-Bit daemon...
Fluent Bit v1.0.5
Copyright (C) Treasure Data

[2020/02/07 13:53:26] [ info] [storage] initializing...
[2020/02/07 13:53:26] [ info] [storage] in-memory
[2020/02/07 13:53:26] [ info] [storage] normal synchronization mode, checksum disabled
[2020/02/07 13:53:26] [ info] [engine] started (pid=15)
[2020/02/07 13:53:26] [ info] [filter_kube] https=1 host=kubernetes.default.svc port=443
[2020/02/07 13:53:26] [ info] [filter_kube] local POD info OK
[2020/02/07 13:53:26] [ info] [filter_kube] testing connectivity with API server...
[2020/02/07 13:53:36] [ warn] net_tcp_fd_connect: getaddrinfo(host='kubernetes.default.svc'): Name or service not known
[2020/02/07 13:53:36] [error] [filter_kube] upstream connection error
[2020/02/07 13:53:36] [ warn] [filter_kube] could not get meta for POD fluent-bit-2b9kj

Earlier, because the system disk was running out of space, the Docker data directory had been migrated to a data disk and replaced with a symlink, and that is what broke log collection.

Step 1. Add containersLogMountedPath to the ks-installer ConfigMap, filling in the actual path for your environment:

[root@master docker]# docker info -f '{{.DockerRootDir}}'
/data/docker
[root@master docker]# ll /var/lib/docker
lrwxrwxrwx. 1 root root 12 Oct 10 19:01 /var/lib/docker -> /data/docker
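
A minimal sketch of the Step 1 change, assuming the 2.1 layout where the installer reads ks-config.yaml from the ks-installer ConfigMap in kubesphere-system and the key sits under the logging section (the exact nesting may differ between installer versions):

# Edit the installer configuration
kubectl -n kubesphere-system edit cm ks-installer
# In ks-config.yaml, under logging, add the real containers log path, e.g.:
#   logging:
#     enabled: true
#     containersLogMountedPath: "/data/docker/containers"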

Step 2. Wait a few minutes for the installer to update the fluent-bit operator's ConfigMap automatically, until containersLogMountedPath shows up in it (try not to modify that ConfigMap directly, to avoid affecting future upgrades).
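
One way to check that the path has been propagated is to grep the ConfigMaps in the logging namespace (a sketch; it simply looks for the key wherever it lands):

kubectl -n kubesphere-logging-system get cm -o yaml | grep containersLogMountedPath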

Step 3. Restart Fluent Bit

# Scale down the operator and delete the fluent-bit DaemonSet
[root@master ~]# kubectl scale -n kubesphere-logging-system deployment logging-fluentbit-operator --replicas=0
deployment.extensions/logging-fluentbit-operator scaled
[root@master ~]# kubectl delete -n kubesphere-logging-system daemonsets fluent-bit
daemonset.extensions "fluent-bit" deleted

# Restart the Fluent-bit Operator Deployment
[root@master ~]# kubectl scale -n kubesphere-logging-system deployment logging-fluentbit-operator --replicas=1
deployment.extensions/logging-fluentbit-operator scaled

# Check that fluent-bit is coming back up
[root@master ~]# kubectl get po -n kubesphere-logging-system
NAME                                                              READY   STATUS              RESTARTS   AGE
elasticsearch-logging-curator-elasticsearch-curator-158086m9zv5   0/1     Completed           0          2d13h
elasticsearch-logging-curator-elasticsearch-curator-158095fmdlz   0/1     Completed           0          37h
elasticsearch-logging-curator-elasticsearch-curator-158103bwf8f   0/1     Completed           0          13h
elasticsearch-logging-data-0                                      1/1     Running             1          8d
elasticsearch-logging-data-1                                      1/1     Running             774        69d
elasticsearch-logging-discovery-0                                 1/1     Running             478        56d
elasticsearch-logging-kibana-94594c5f-q7sht                       1/1     Running             1          22d
fluent-bit-5rzpv                                                  0/2     ContainerCreating   0          3s
fluent-bit-nkzdv                                                  0/2     ContainerCreating   0          3s
fluent-bit-pwhw7                                                  0/2     ContainerCreating   0          3s
fluent-bit-w5t8k                                                  0/2     ContainerCreating   0          3s
logging-fluentbit-operator-56c6b84b94-d7vgn                       1/1     Running             0          5s
logsidecar-injector-5cbf7bd868-cr2kh                              1/1     Running             1          11d
logsidecar-injector-5cbf7bd868-mp46g                              1/1     Running             1          22d

Once fluent-bit is running on all nodes, the logs in KubeSphere are back.
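
To confirm the kubernetes filter can now reach the API server, it is worth re-checking the log of one of the new pods (a sketch; substitute a fluent-bit pod name from your own cluster):

kubectl -n kubesphere-logging-system logs fluent-bit-5rzpv -c fluent-bit | tail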

Reference: https://github.com/kubesphere/kubesphere/issues/1476

Reference: https://github.com/kubesphere/kubesphere/issues/680

16. Kubernetes storage

Some pods were running abnormally; their events pointed to storage problems, and Ceph indeed reported an unhealthy status:

[root@master test]# ceph -s
    cluster 774df8bf-d591-4824-949c-b53826d1b24a
     health HEALTH_WARN
            mon.master low disk space
     monmap e1: 1 mons at {master=10.234.2.204:6789/0}
            election epoch 14, quorum 0 master
     osdmap e3064: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v9076023: 192 pgs, 2 pools, 26341 MB data, 8231 objects
            64888 MB used, 127 GB / 190 GB avail
                 192 active+clean
  client io 17245 B/s wr, 0 op/s rd, 4 op/s wr

kubelet performs garbage collection by default; here the Docker files are cleaned up manually (the kubelet flags involved are sketched after the cleanup below):

# Check disk usage
[root@master overlay2]# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              34                  12                  8.463GB             5.225GB (61%)
Containers          46                  21                  836.6kB             836.5kB (99%)
Local Volumes       4                   0                   59.03MB             59.03MB (100%)
Build Cache         0                   0                   0B      

# Clean up unused data
[root@master overlay2]# docker system prune
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all dangling build cache
Are you sure you want to continue? [y/N] y
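
For reference, kubelet's built-in image garbage collection is driven by disk-usage thresholds. A sketch of the relevant kubelet flags (the values shown are the upstream defaults):

# Image GC starts when the image filesystem exceeds the high threshold
# and frees images until usage drops below the low threshold
--image-gc-high-threshold=85
--image-gc-low-threshold=80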

17. Changing the kube-proxy mode from iptables to ipvs

  • Load the required kernel modules and install the ipvs tools on every node:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

yum install -y ipset ipvsadm

kubectl get configmap kube-proxy -n kube-system -oyaml

ipvsadm currently shows no rules:

[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
  • On the master node, change mode to ipvs in the kube-proxy ConfigMap (see the sketch below)
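
A minimal sketch of that edit, assuming a kubeadm-style kube-proxy ConfigMap where the configuration lives under the config.conf key:

kubectl -n kube-system edit configmap kube-proxy
# In config.conf, set:
#   mode: "ipvs"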

Then delete the existing kube-proxy pods so they restart with the new mode:

[root@master ~]# kubectl get pod  -n kube-system|grep kube-proxy|awk '{print "kubectl delete po "$1" -n kube-system"}'|sh
pod "kube-proxy-2wnst" deleted  
pod "kube-proxy-bfrk9" deleted
pod "kube-proxy-kvslw" deleted

ipvsadm confirms that the proxy has switched over to ipvs.
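
A quick way to double-check: list the ipvs rules again and look for the "Using ipvs Proxier" line in the kube-proxy logs (a sketch; the label selector assumes a kubeadm-style deployment):

ipvsadm -Ln
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i 'using ipvs proxier'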

18. Installing applications

Part 2 of this series covered installing applications from the terminal. In KubeSphere 2.1 you can instead use the web console to install applications from app repositories that have already been added under a workspace. The steps are recorded again here:

  • Add a repo under the workspace's App Repositories

  • When installing an application in a specific project, choose "From App Templates"

  • Select the repo source and search for the chart you need

19. Service governance

KubeSphere makes it very easy to give your own applications service-governance capabilities: using Istio's sidecar model, Envoy is injected to provide service-mesh features such as canary releases, load balancing, traffic inspection and control, rate limiting, circuit breaking and degradation. I have tried microservice governance with my own applications and found it very pleasant to use; I hope to write up the process when I get the chance.

I have also put together some Kubernetes study notes; anyone interested is welcome to learn and exchange ideas together: https://github.com/redhatxl/awesome-kubernetes-notes. Support the home-grown container management platform KubeSphere and give a little back to the community.
