This article comes from KubeSphere community user Will. It demonstrates how to use Sealos + Longhorn to deploy a Kubernetes cluster with persistent storage, and then use ks-installer to deploy KubeSphere 3.0.0 on that cluster. It is a great fit for beginners who want to quickly deploy and try out KubeSphere 3.0.0 🚀.
Introduction to Sealos
Sealos (https://sealyun.com/) is a Kubernetes high-availability installation tool that can only be described as silky smooth: a single command, fully offline installation with all dependencies included, kernel-level control-plane load balancing with no dependency on HAProxy or Keepalived, written in pure Golang, 99-year certificates, and support for Kubernetes v1.16 ~ v1.19.
Introduction to Longhorn
Longhorn (https://www.rancher.cn/longhorn) is highly available persistent storage for Kubernetes, open-sourced by Rancher. It provides simple incremental snapshots and backups, and supports cross-cluster disaster recovery.
Introduction to KubeSphere
KubeSphere (https://kubesphere.io) is an application-centric, multi-tenant container platform built on top of Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operations, and simplifies enterprise DevOps workflows. KubeSphere offers an operations-friendly, wizard-style console that helps enterprises quickly build a powerful and feature-rich container cloud platform.
KubeSphere supports the following two installation methods:
- Use KubeKey to deploy a Kubernetes cluster together with KubeSphere
- Deploy KubeSphere on an existing Kubernetes cluster
For users who already have a Kubernetes cluster, deploying KubeSphere on an existing cluster offers more flexibility. Below we deploy a standalone Kubernetes cluster, then deploy KubeSphere on top of it.
Deploy a Kubernetes Cluster with Sealos
Prepare 4 nodes. Since machines for this experiment are limited, we use 3 masters and 1 worker for now; note that a real production environment should have 3 masters and at least 3 workers. Every node must have a hostname configured, and the node clocks must be confirmed in sync:
hostnamectl set-hostname xx
yum install -y chrony
systemctl enable --now chronyd
timedatectl set-timezone Asia/Shanghai
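To confirm time synchronization actually took effect, a quick chrony check helps (a minimal verification sketch, not part of the original steps):
# Confirm chrony is synchronized to an upstream time source
chronyc tracking
# List the configured time sources and their reachability
chronyc sources -v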
On the first master node, download the deployment tool and the offline package:
# Go-based binary installer
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/latest/sealos && \
chmod +x sealos && mv sealos /usr/bin
# Using K8s v1.18.8 as an example; v1.19.x is not recommended because KubeSphere v3.0.0 does not support it yet
wget -c https://sealyun.oss-cn-beijing.aliyuncs.com/cd3d5791b292325d38bbfaffd9855312-1.18.8/kube1.18.8.tar.gz
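The long hex segment in the download URL appears to be an MD5 checksum; assuming that is what it is, you can verify the package before using it:
# Verify package integrity; the expected value is assumed to be the
# hex string embedded in the download URL above
md5sum kube1.18.8.tar.gz
# Expected: cd3d5791b292325d38bbfaffd9855312  kube1.18.8.tar.gz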
Run the following command to deploy the Kubernetes cluster, where passwd is the root password shared by all nodes:
sealos init --passwd 123456 \
--master 10.39.140.248 \
--master 10.39.140.249 \
--master 10.39.140.250 \
--node 10.39.140.251 \
--pkg-url kube1.18.8.tar.gz \
--version v1.18.8
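If you later need to grow or shrink the cluster, Sealos can do that too. A sketch assuming the join/clean subcommands of this Sealos 3.x release (the node IP is hypothetical; check sealos --help for your version):
# Add another worker node to the running cluster
sealos join --node 10.39.140.252
# Remove it again
sealos clean --node 10.39.140.252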
Confirm the Kubernetes cluster is running normally:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready master 13h v1.18.8
k8s-master2 Ready master 13h v1.18.8
k8s-master3 Ready master 13h v1.18.8
k8s-node1 Ready <none> 13h v1.18.8
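Beyond node status, it is worth confirming the control-plane Pods all came up cleanly (a quick extra check, not in the original):
# All kube-system Pods should be Running or Completed
kubectl get pods -n kube-system -o wide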
Deploy Longhorn Storage
Longhorn recommends mounting a dedicated disk for storage. For this test we simply use the local directory /data/longhorn (the default is /var/lib/longhorn).
Note that several KubeSphere components request PVs of 20G. Make sure the nodes have enough free space; otherwise a PV may bind successfully while no node satisfies the scheduling requirements.
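One way to check the free space under the data path on each node (a small sanity check; the path matches the defaultDataPath used in the Helm install below):
# Confirm there is enough room for the ~20G PVs KubeSphere requests
mkdir -p /data/longhorn
df -h /data/longhorn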
Installing Longhorn with 3 data replicas requires at least 3 nodes, so here we remove the taint from the master nodes to make them schedulable for Pods:
kubectl taint nodes --all node-role.kubernetes.io/master-
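To confirm the taints were removed (a quick check, not in the original):
# No node-role.kubernetes.io/master taint should remain on any node
kubectl describe nodes | grep -i taints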
Install Helm on k8s-master1:
version=v3.3.1
curl -LO https://repo.huaweicloud.com/helm/${version}/helm-${version}-linux-amd64.tar.gz
tar -zxvf helm-${version}-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm && rm -rf linux-amd64
Install the Longhorn dependencies on all nodes:
yum install -y iscsi-initiator-utils
systemctl enable --now iscsid
Add the Longhorn Chart repository; if your network is poor, you can download the Chart from Longhorn's GitHub releases instead:
helm repo add longhorn https://charts.longhorn.io
helm repo update
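To confirm the chart is now visible locally (an optional check):
# The longhorn chart and its latest version should be listed
helm search repo longhorn/longhorn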
Deploy Longhorn. Offline deployment is also supported; it requires pushing the images under longhorn.io to a private registry in advance:
kubectl create namespace longhorn-system
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --set defaultSettings.defaultDataPath="/data/longhorn/" \
  --set defaultSettings.defaultReplicaCount=3 \
  --set service.ui.type=NodePort \
  --set service.ui.nodePort=30890
# For an offline install, additionally pass:
# --set privateRegistry.registryUrl=10.39.140.196:8081
Confirm Longhorn is running normally:
[root@jenkins longhorn]# kubectl -n longhorn-system get pods
NAME READY STATUS RESTARTS AGE
csi-attacher-58b856dcff-9kqdt 1/1 Running 0 13h
csi-attacher-58b856dcff-c4zzp 1/1 Running 0 13h
csi-attacher-58b856dcff-tvfw2 1/1 Running 0 13h
csi-provisioner-56dd9dc55b-6ps8m 1/1 Running 0 13h
csi-provisioner-56dd9dc55b-m7gz4 1/1 Running 0 13h
csi-provisioner-56dd9dc55b-s9bh4 1/1 Running 0 13h
csi-resizer-6b87c4d9f8-2skth 1/1 Running 0 13h
csi-resizer-6b87c4d9f8-sqn2g 1/1 Running 0 13h
csi-resizer-6b87c4d9f8-z6xql 1/1 Running 0 13h
engine-image-ei-b99baaed-5fd7m 1/1 Running 0 13h
engine-image-ei-b99baaed-jcjxj 1/1 Running 0 12h
engine-image-ei-b99baaed-n6wxc 1/1 Running 0 12h
engine-image-ei-b99baaed-qxfhg 1/1 Running 0 12h
instance-manager-e-44ba7ac9 1/1 Running 0 12h
instance-manager-e-48676e4a 1/1 Running 0 12h
instance-manager-e-57bd994b 1/1 Running 0 12h
instance-manager-e-753c704f 1/1 Running 0 13h
instance-manager-r-4f4be1c1 1/1 Running 0 12h
instance-manager-r-68bfb49b 1/1 Running 0 12h
instance-manager-r-ccb87377 1/1 Running 0 12h
instance-manager-r-e56429be 1/1 Running 0 13h
longhorn-csi-plugin-fqgf7 2/2 Running 0 12h
longhorn-csi-plugin-gbrnf 2/2 Running 0 13h
longhorn-csi-plugin-kjj6b 2/2 Running 0 12h
longhorn-csi-plugin-tvbvj 2/2 Running 0 12h
longhorn-driver-deployer-74bb5c9fcb-khmbk 1/1 Running 0 14h
longhorn-manager-82ztz 1/1 Running 0 12h
longhorn-manager-8kmsn 1/1 Running 0 12h
longhorn-manager-flmfl 1/1 Running 0 12h
longhorn-manager-mz6zj 1/1 Running 0 14h
longhorn-ui-77c6d6f5b7-nzsg2 1/1 Running 0 14h
Confirm the default StorageClass is ready:
# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
longhorn (default) driver.longhorn.io Delete Immediate true 14h
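Before handing this StorageClass to KubeSphere, you can sanity-check dynamic provisioning with a throwaway PVC. A minimal sketch, assuming the default longhorn StorageClass shown above; delete the claim afterwards:
# Create a small test PVC against the default StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi
EOF
# The PVC should reach the Bound state within a minute or so
kubectl get pvc longhorn-test-pvc
# Clean up the test claim
kubectl delete pvc longhorn-test-pvc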
Log in to the Longhorn UI and confirm the nodes are in a schedulable state:
Viewing the bound PV volumes in the Longhorn UI
Viewing volume details
Deploy KubeSphere on Kubernetes
We use the ks-installer project to install KubeSphere. Download the KubeSphere installation YAML files:
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/kubesphere-installer.yaml
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.0.0/deploy/cluster-configuration.yaml
KubeSphere enables only a minimal installation by default. You can edit cluster-configuration.yaml and set the relevant fields to enable the functional components you need; the following is for reference only (a sketch for enabling components after installation follows the snippet):
devops:
  enabled: true
......
logging:
  enabled: true
......
metrics_server:
  enabled: true
......
openpitrix:
  enabled: true
......
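If you decide to enable a pluggable component after the initial installation, the same fields can be edited in the live ClusterConfiguration object instead of the local YAML file (per the KubeSphere documentation):
# Edit the installer's ClusterConfiguration in place; flipping a
# component's "enabled" field to true triggers ks-installer to deploy it
kubectl edit cc ks-installer -n kubesphere-system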
Run the following commands to deploy KubeSphere:
kubectl apply -f kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
Check the deployment log and confirm there are no errors:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
After the deployment finishes, confirm that all KubeSphere-related Pods are running normally:
[root@k8s-master1 ~]# kubectl get pods -A | grep kubesphere
kubesphere-controls-system default-http-backend-857d7b6856-q24v2 1/1 Running 0 12h
kubesphere-controls-system kubectl-admin-58f985d8f6-jl9bj 1/1 Running 0 11h
kubesphere-controls-system kubesphere-router-demo-ns-6c97d4968b-njgrc 1/1 Running 1 154m
kubesphere-devops-system ks-jenkins-54455f5db8-hm6kc 1/1 Running 0 11h
kubesphere-devops-system s2ioperator-0 1/1 Running 1 11h
kubesphere-devops-system uc-jenkins-update-center-cd9464fff-qnvfz 1/1 Running 0 12h
kubesphere-logging-system elasticsearch-logging-curator-elasticsearch-curator-160079hmdmb 0/1 Completed 0 11h
kubesphere-logging-system elasticsearch-logging-data-0 1/1 Running 0 12h
kubesphere-logging-system elasticsearch-logging-data-1 1/1 Running 0 12h
kubesphere-logging-system elasticsearch-logging-discovery-0 1/1 Running 0 12h
kubesphere-logging-system fluent-bit-c45h2 1/1 Running 0 12h
kubesphere-logging-system fluent-bit-kptfc 1/1 Running 0 12h
kubesphere-logging-system fluent-bit-rzjfp 1/1 Running 0 12h
kubesphere-logging-system fluent-bit-wztkp 1/1 Running 0 12h
kubesphere-logging-system fluentbit-operator-855d4b977d-fk6hs 1/1 Running 0 12h
kubesphere-logging-system ks-events-exporter-5bc4d9f496-x297f 2/2 Running 0 12h
kubesphere-logging-system ks-events-operator-8dbf7fccc-9qmml 1/1 Running 0 12h
kubesphere-logging-system ks-events-ruler-698b7899c7-fkn4l 2/2 Running 0 12h
kubesphere-logging-system ks-events-ruler-698b7899c7-hw6rq 2/2 Running 0 12h
kubesphere-logging-system logsidecar-injector-deploy-74c66bfd85-cxkxm 2/2 Running 0 12h
kubesphere-logging-system logsidecar-injector-deploy-74c66bfd85-lzxbm 2/2 Running 0 12h
kubesphere-monitoring-system alertmanager-main-0 2/2 Running 0 11h
kubesphere-monitoring-system alertmanager-main-1 2/2 Running 0 11h
kubesphere-monitoring-system alertmanager-main-2 2/2 Running 0 11h
kubesphere-monitoring-system kube-state-metrics-95c974544-r8kmq 3/3 Running 0 12h
kubesphere-monitoring-system node-exporter-9ddxn 2/2 Running 0 12h
kubesphere-monitoring-system node-exporter-dw929 2/2 Running 0 12h
kubesphere-monitoring-system node-exporter-ht868 2/2 Running 0 12h
kubesphere-monitoring-system node-exporter-nxdsm 2/2 Running 0 12h
kubesphere-monitoring-system notification-manager-deployment-7c8df68d94-hv56l 1/1 Running 0 12h
kubesphere-monitoring-system notification-manager-deployment-7c8df68d94-ttdsg 1/1 Running 0 12h
kubesphere-monitoring-system notification-manager-operator-6958786cd6-pllgc 2/2 Running 0 12h
kubesphere-monitoring-system prometheus-k8s-0 3/3 Running 1 11h
kubesphere-monitoring-system prometheus-k8s-1 3/3 Running 1 11h
kubesphere-monitoring-system prometheus-operator-84d58bf775-5rqdj 2/2 Running 0 12h
kubesphere-system etcd-65796969c7-whbzx 1/1 Running 0 12h
kubesphere-system ks-apiserver-b4dbcc67-2kknm 1/1 Running 0 11h
kubesphere-system ks-apiserver-b4dbcc67-k6jr2 1/1 Running 0 11h
kubesphere-system ks-apiserver-b4dbcc67-q8845 1/1 Running 0 11h
kubesphere-system ks-console-786b9846d4-86hxw 1/1 Running 0 12h
kubesphere-system ks-console-786b9846d4-l6mhj 1/1 Running 0 12h
kubesphere-system ks-console-786b9846d4-wct8z 1/1 Running 0 12h
kubesphere-system ks-controller-manager-7fd8799789-478ks 1/1 Running 0 11h
kubesphere-system ks-controller-manager-7fd8799789-hwgmp 1/1 Running 0 11h
kubesphere-system ks-controller-manager-7fd8799789-pdbch 1/1 Running 0 11h
kubesphere-system ks-installer-64ddc4b77b-c7qz8 1/1 Running 0 12h
kubesphere-system minio-7bfdb5968b-b5v59 1/1 Running 0 12h
kubesphere-system mysql-7f64d9f584-kvxcb 1/1 Running 0 12h
kubesphere-system openldap-0 1/1 Running 0 12h
kubesphere-system openldap-1 1/1 Running 0 12h
kubesphere-system redis-ha-haproxy-5c6559d588-2rt6v 1/1 Running 9 12h
kubesphere-system redis-ha-haproxy-5c6559d588-mhj9p 1/1 Running 8 12h
kubesphere-system redis-ha-haproxy-5c6559d588-tgpjv 1/1 Running 11 12h
kubesphere-system redis-ha-server-0 2/2 Running 0 12h
kubesphere-system redis-ha-server-1 2/2 Running 0 12h
kubesphere-system redis-ha-server-2 2/2 Running 0 12h
Some KubeSphere components are deployed via Helm; check the status of the Charts:
[root@k8s-master1 ~]# helm ls -A | grep kubesphere
elasticsearch-logging kubesphere-logging-system 1 2020-09-23 00:49:08.526873742 +0800 CST deployed elasticsearch-1.22.1 6.7.0-0217
elasticsearch-logging-curator kubesphere-logging-system 1 2020-09-23 00:49:16.117842593 +0800 CST deployed elasticsearch-curator-1.3.3 5.5.4-0217
ks-events kubesphere-logging-system 1 2020-09-23 00:51:45.529430505 +0800 CST deployed kube-events-0.1.0 0.1.0
ks-jenkins kubesphere-devops-system 1 2020-09-23 01:03:15.106022826 +0800 CST deployed jenkins-0.19.0 2.121.3-0217
ks-minio kubesphere-system 2 2020-09-23 00:48:16.990599158 +0800 CST deployed minio-2.5.16 RELEASE.2019-08-07T01-59-21Z
ks-openldap kubesphere-system 1 2020-09-23 00:03:28.767712181 +0800 CST deployed openldap-ha-0.1.0 1.0
ks-redis kubesphere-system 1 2020-09-23 00:03:19.439784188 +0800 CST deployed redis-ha-3.9.0 5.0.5
logsidecar-injector kubesphere-logging-system 1 2020-09-23 00:51:57.519733074 +0800 CST deployed logsidecar-injector-0.1.0 0.1.0
notification-manager kubesphere-monitoring-system 1 2020-09-23 00:54:14.662762759 +0800 CST deployed notification-manager-0.1.0 0.1.0
uc kubesphere-devops-system 1 2020-09-23 00:51:37.885154574 +0800 CST deployed jenkins-update-center-0.8.0 3.0.0
Get the port the KubeSphere Console listens on, which defaults to 30880:
kubectl get svc/ks-console -n kubesphere-system
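If you want just the port number, e.g. for scripting, a jsonpath query works (a small convenience, not in the original):
# Print the NodePort the console is exposed on (expected: 30880)
kubectl get svc ks-console -n kubesphere-system -o jsonpath='{.spec.ports[0].nodePort}'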
The default login account is admin/P@88w0rd. Log in to the KubeSphere Console:
Viewing the Kubernetes cluster overview in KubeSphere (the interface is very clean and concise):
Viewing Kubernetes cluster node information:
Viewing KubeSphere service component information:
Visiting the KubeSphere App Store:
Viewing KubeSphere project resources:
Tip: For how to import multiple clusters, create projects and cluster resources, enable pluggable functional components, and create CI/CD pipelines on the KubeSphere platform, see the KubeSphere official documentation (kubesphere.io/docs) for more information.
Clean Up the KubeSphere Cluster
wget https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/kubesphere-delete.sh
sh kubesphere-delete.sh
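The delete script removes KubeSphere itself only. If you also want to tear down Longhorn afterwards, a sketch (make sure no remaining PVCs use it first, as this destroys all Longhorn volumes):
helm uninstall longhorn -n longhorn-system
kubectl delete namespace longhorn-system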
Originally published at: https://blog.csdn.net/networken/article/details/105664147
About KubeSphere
KubeSphere (https://kubesphere.io) is a container hybrid cloud built on top of Kubernetes that provides full-stack IT automated operations and simplifies enterprise DevOps workflows.
KubeSphere has been adopted by thousands of enterprises at home and abroad, including Aqara, Benlai.com, Sina, PICC Life Insurance, Hua Xia Bank, China Taiping Insurance, Sichuan Airlines, Sinopharm Group, WeBank, Zijin Insurance, Radore, and ZaloPay. KubeSphere provides an operations-friendly, wizard-style interface and rich enterprise-grade features, including multi-cloud and multi-cluster management, Kubernetes resource management, DevOps (CI/CD), application lifecycle management, microservice governance (Service Mesh), multi-tenant management, monitoring and logging, alerting and notification, storage and network management, and GPU support, helping enterprises quickly build a powerful and feature-rich container cloud platform.
This article was first shared via the KubeSphere WeChat public account (gh_4660e44db839).