KubeSphere 3.0 installation and deployment: notes on the pitfalls


Tags (space-separated): kubernetes series


[toc]


1: System environment

1.1 System environment initialization

OS: CentOS 7.8 x64

cat /etc/hosts 
-----
192.168.100.11  node01.flyfish.cn
192.168.100.12  node02.flyfish.cn
192.168.100.13  node03.flyfish.cn
192.168.100.14  node04.flyfish.cn
192.168.100.15  node05.flyfish.cn
192.168.100.16  node06.flyfish.cn
192.168.100.17  node07.flyfish.cn
192.168.100.18  node08.flyfish.cn
-----

This deployment uses the first three nodes.
Kubernetes deployment layout:



1.2 System configuration initialization

Install basic tools:
  yum install -y wget vim lsof net-tools


Disable the firewall (or, on Alibaba Cloud, open the required ports in the security group):

systemctl stop firewalld.service

systemctl disable firewalld.service

systemctl status firewalld.service    # verify it is stopped and disabled on boot

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config

setenforce 0

cat /etc/selinux/config



Disable swap:

swapoff -a  # temporary

sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

free -l -h



Pass bridged IPv4 traffic to the iptables chains.
If /etc/sysctl.conf does not exist yet, simply run the following:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.all.forwarding = 1"  >> /etc/sysctl.conf


1.3 Deploy Docker

Download: https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz

Perform the following on all cluster nodes. A binary install is used here; installing with yum works just as well.
Install Docker on node01.flyfish.cn, node02.flyfish.cn, and node03.flyfish.cn.

1.3.1 Extract the binary package

tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin


1.3.2 Manage Docker with systemd

cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF


1.3.3 Create the configuration file

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF

registry-mirrors: the Alibaba Cloud registry mirror (image accelerator).

1.3.4 Start Docker and enable it at boot

systemctl daemon-reload
systemctl start docker
systemctl enable docker
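To confirm the daemon is up and the registry mirror was picked up (output wording varies slightly between Docker versions):

docker version
docker info | grep -A 1 -i "registry mirrors"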


2: Install the Kubernetes cluster

Install kubelet, kubeadm, and kubectl (on all nodes).
Configure the Kubernetes yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, and kubectl:
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3


systemctl enable kubelet && systemctl start kubelet
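A quick sanity check that the pinned 1.17.3 packages were installed:

kubeadm version -o short
kubelet --version
kubectl version --client --short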


Pre-pull the images on all nodes.
Image pull script:
vim image.sh
----
#!/bin/bash

images=(
  kube-apiserver:v1.17.3
  kube-proxy:v1.17.3
  kube-controller-manager:v1.17.3
  kube-scheduler:v1.17.3
  coredns:1.6.5
  etcd:3.4.3-0
  pause:3.1
)

for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
-----
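Run the script on every node, for example:

chmod +x image.sh && ./image.sh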



Initialize the master node:
Note: this step is run only on the master node.
kubeadm init \
--apiserver-advertise-address=192.168.100.11 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.17.3 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16



mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config



Deploy the network plugin (Calico):
  kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
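The node stays NotReady until the Calico pods are up; progress can be watched with:

kubectl get pods -n kube-system -w     # wait for the calico pods to reach Running
kubectl get nodes                      # the master should turn Ready once the CNI is up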


Join the other nodes to the cluster:
kubeadm join 192.168.100.11:6443 --token y28jw9.gxstbcar3m4n5p1a \
    --discovery-token-ca-cert-hash sha256:769528577607a4024ead671ae01b694744dba16e0806e57ed1b099eb6c6c9350
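If the bootstrap token has expired by the time a node joins, a fresh join command can be printed on the master:

kubeadm token create --print-join-command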



3: Deploy the NFS server

yum install -y nfs-utils

echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports


mkdir -p /nfs/data

systemctl enable rpcbind

systemctl enable nfs-server

systemctl start rpcbind

systemctl start nfs-server

exportfs -r
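A quick check that the directory is really being exported:

exportfs -v
showmount -e localhost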



Test a Pod mounting the NFS share directly (run on the master node).

Create a file named nginx.yaml under /opt:

vim nginx.yaml

----
apiVersion: v1
kind: Pod
metadata:
  name: vol-nfs
  namespace: default
spec:
  volumes:
  - name: html
    nfs:
      path: /nfs/data   # 1000G
      server: 192.168.100.11 # your NFS server address
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
----
kubectl apply -f nginx.yaml

cd /nfs/data/

echo "11111" >> index.html



Install the NFS client tools on the worker node (node02.flyfish.cn) and check the export:

showmount -e 192.168.100.11



Create the mount directory:
mkdir /root/nfsmount

Mount the server's /nfs/data/ onto the client's /root/nfsmount (on the worker node):
mount -t nfs 192.168.100.11:/nfs/data/ /root/nfsmount



4: Set up a StorageClass for dynamic provisioning


vim nfs-rbac.yaml
----
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
   name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Deployment
apiVersion: apps/v1
metadata:
   name: nfs-client-provisioner
spec:
   replicas: 1
   strategy:
     type: Recreate
   selector:
     matchLabels:
        app: nfs-client-provisioner
   template:
      metadata:
         labels:
            app: nfs-client-provisioner
      spec:
         serviceAccount: nfs-provisioner
         containers:
            -  name: nfs-client-provisioner
               image: lizhenliang/nfs-client-provisioner
               volumeMounts:
                 -  name: nfs-client-root
                    mountPath:  /persistentvolumes
               env:
                 -  name: PROVISIONER_NAME
                    value: storage.pri/nfs
                 -  name: NFS_SERVER
                    value: 192.168.100.11
                 -  name: NFS_PATH
                    value: /nfs/data
         volumes:
           - name: nfs-client-root
             nfs:
               server: 192.168.100.11
               path: /nfs/data
----
kubectl apply -f nfs-rbac.yaml
kubectl get pod


Create the StorageClass:
vi storageclass-nfs.yaml
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-nfs
provisioner: storage.pri/nfs
reclaimPolicy: Delete
----
kubectl apply -f storageclass-nfs.yaml


# Note: there are three reclaim policies: Retain, Recycle, and Delete.
Retain
# The released PV and the data on it are kept. The PV's status becomes "Released" and it will not be bound by another PVC. The cluster administrator reclaims the storage manually:
Manually delete the PV; the backing storage resource (AWS EBS, GCE PD, Azure Disk, Cinder volume, etc.) still exists.
Manually wipe the data on the backing volume.
Manually delete the backing volume, or reuse it and create a new PV for it.

Delete
Deletes both the released PV and its backing storage volume. For dynamically provisioned PVs the reclaim policy is inherited from the StorageClass and defaults to Delete. The cluster administrator should set the StorageClass's reclaimPolicy to the desired value; otherwise users have to edit the reclaim policy of each dynamically created PV by hand.

Recycle
Keeps the PV but wipes the data on it. This policy is deprecated.

kubectl get storageclass



Change the default StorageClass:
https://kubernetes.io/zh/docs/tasks/administer-cluster/change-default-storage-class/#%e4%b8%ba%e4%bb%80%e4%b9%88%e8%a6%81%e6%94%b9%e5%8f%98%e9%bb%98%e8%ae%a4-storage-class

kubectl patch storageclass storage-nfs -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
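After the patch, the default marker should appear next to the class name:

kubectl get storageclass               # storage-nfs should now show "(default)"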

Verify NFS dynamic provisioning.

Create a PVC:

vim pvc.yaml
-----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-01
  #annotations:
   #   volume.beta.kubernetes.io/storage-class: "storage-nfs"
spec:
  storageClassName: storage-nfs  # must match the StorageClass name exactly
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
-----

kubectl apply -f pvc.yaml
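If dynamic provisioning works, the claim binds almost immediately and a matching PV is created:

kubectl get pvc pvc-claim-01           # STATUS should be Bound
kubectl get pv                         # shows the dynamically provisioned PV for storage-nfs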


Use the PVC:

vi testpod.yaml
----
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: pvc-claim-01
-----
kubectl apply -f testpod.yaml
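The busybox container just touches a file on the mounted volume and exits. Assuming the provisioner creates a per-PVC subdirectory under the export (the usual nfs-client-provisioner behaviour), the result can be checked on the NFS server:

kubectl get pod test-pod               # should end up Completed
ls -R /nfs/data/                       # a SUCCESS file should appear in the PVC's subdirectory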


5: Install metrics-server

1. First install metrics-server (the YAML below already has the image and configuration adjusted, so it can be applied as-is). This makes Pod and node resource usage visible (by default only CPU and memory metrics are collected; we will integrate Prometheus later for more detailed monitoring).

vim 2222.yaml
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
----
kubectl apply -f 2222.yaml



kubectl top nodes



6: Install KubeSphere

https://kubesphere.com.cn/docs/quick-start/minimal-kubesphere-on-k8s/

wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml

wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml



vim cluster-configuration.yaml
----
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.0.0
spec:
  persistence:
    storageClass: ""        # If there is not a default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    jwtSecret: ""           # Keep the jwtSecret consistent with the host cluster. Retrive the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the host cluster.
  etcd:
    monitoring: true       # Whether to enable etcd monitoring dashboard installation. You have to create a secret for etcd before you enable it.
    endpointIps: 192.168.100.11  # etcd cluster EndpointIps, it can be a bunch of IPs here.
    port: 2379              # etcd port
    tlsEnable: true
  common:
    mysqlVolumeSize: 20Gi # MySQL PVC size.
    minioVolumeSize: 20Gi # Minio PVC size.
    etcdVolumeSize: 20Gi  # etcd PVC size.
    openldapVolumeSize: 2Gi   # openldap PVC size.
    redisVolumSize: 2Gi # Redis PVC size.
    es:   # Storage backend for logging, events and auditing.
      # elasticsearchMasterReplicas: 1   # total number of master nodes, it's not allowed to use even number
      # elasticsearchDataReplicas: 1     # total number of data nodes.
      elasticsearchMasterVolumeSize: 4Gi   # Volume size of Elasticsearch master nodes.
      elasticsearchDataVolumeSize: 20Gi    # Volume size of Elasticsearch data nodes.
      logMaxAge: 7                     # Log retention time in built-in Elasticsearch, it is 7 days by default.
      elkPrefix: logstash              # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  console:
    enableMultiLogin: true  # enable/disable multiple sing on, it allows an account can be used by different users at the same time.
    port: 30880
  alerting:                # (CPU: 0.3 Core, Memory: 300 MiB) Whether to install KubeSphere alerting system. It enables Users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: true
  auditing:                # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records,recording the sequence of activities happened in platform, initiated by different tenants.
    enabled: true
  devops:                  # (CPU: 0.47 Core, Memory: 8.6 G) Whether to install KubeSphere DevOps System. It provides out-of-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: true
    jenkinsMemoryLim: 2Gi      # Jenkins memory limit.
    jenkinsMemoryReq: 1500Mi   # Jenkins memory request.
    jenkinsVolumeSize: 8Gi     # Jenkins volume size.
    jenkinsJavaOpts_Xms: 512m  # The following three fields are JVM parameters.
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:                  # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: true
    ruler:
      enabled: true
      replicas: 2
  logging:                 # (CPU: 57 m, Memory: 2.76 G) Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: true
    logsidecarReplicas: 2
  metrics_server:                    # (CPU: 56 m, Memory: 44.35 MiB) Whether to install metrics-server. IT enables HPA (Horizontal Pod Autoscaler).
    enabled: false
  monitoring:
    # prometheusReplicas: 1            # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well.
    prometheusMemoryRequest: 400Mi   # Prometheus request memory.
    prometheusVolumeSize: 20Gi       # Prometheus PVC size.
    # alertmanagerReplicas: 1          # AlertManager Replicas.
  multicluster:
    clusterRole: none  # host | member | none  # You can install a solo cluster, or specify it as the role of host or member cluster.
  networkpolicy:       # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
    # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
    enabled: true
  notification:        # Email Notification support for the legacy alerting system, should be enabled/disabled together with the above alerting option.
    enabled: true
  openpitrix:          # (2 Core, 3.6 G) Whether to install KubeSphere Application Store. It provides an application store for Helm-based applications, and offer application lifecycle management.
    enabled: true
  servicemesh:         # (0.3 Core, 300 MiB) Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offer visualization for traffic topology.
    enabled: true
----

kubectl apply -f kubesphere-installer.yaml

kubectl apply -f cluster-configuration.yaml



Check the installation progress:
   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f


kubectl get pod -A


kubesphere-monitoring-system   prometheus-k8s-0                                    0/3     ContainerCreating   0          7m20s
kubesphere-monitoring-system   prometheus-k8s-1                                    0/3     ContainerCreating   0          7m20s

The prometheus-k8s pods stay stuck in the ContainerCreating state.



kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system

The kube-etcd-client-certs secret cannot be found:


kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key

kubectl get secret -A | grep etcd


kubectl get pod -n kubesphere-monitoring-system

The prometheus-k8s-1 pod then changes to the Running state.



Now open the KubeSphere web console at the address printed in the installer log:
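If the log banner has scrolled away, the console endpoint can also be read from the ks-console service (exposed as NodePort 30880 in the configuration above); per the KubeSphere 3.0 documentation the initial account is admin / P@88w0rd, which must be changed on first login:

kubectl get svc ks-console -n kubesphere-system
# then open http://<any-node-ip>:30880 in a browser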

