Many people have used apt-get on Ubuntu or yum on CentOS; both are package management tools for Linux. With apt-get/yum, application developers can manage dependencies between packages and publish applications, while users can search for, install, upgrade, and uninstall applications in a simple way.
You can think of Helm as the apt-get/yum of Kubernetes. Helm is a package manager for Kubernetes developed by Deis (https://deis.com/). Each package is called a Chart, and a Chart is a directory (usually the directory is packed and compressed into a single file in the name-version.tgz format, which is convenient to transfer and store).
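The name-version.tgz naming comes straight from the name and version fields of the chart's Chart.yaml. A minimal sketch, using a hypothetical chart called demo-app, of how the package filename is derived:

```shell
# Create a hypothetical chart directory with a minimal Chart.yaml.
mkdir -p /tmp/demo-chart
cat > /tmp/demo-chart/Chart.yaml <<'EOF'
name: demo-app
version: 0.1.0
EOF

# Read the two fields and combine them the way `helm package` names its output.
name=$(sed -n 's/^name: *//p' /tmp/demo-chart/Chart.yaml)
version=$(sed -n 's/^version: *//p' /tmp/demo-chart/Chart.yaml)
echo "${name}-${version}.tgz"   # -> demo-app-0.1.0.tgz
```

Running `helm package /tmp/demo-chart` on a chart like this would produce an archive with exactly that filename.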
For application publishers, Helm can package applications, manage dependencies between applications, manage application versions, and publish applications to a software repository.
For users, Helm removes the need to understand Kubernetes YAML syntax or write deployment files; they can simply download and install the applications they need on Kubernetes through Helm.
Beyond that, Helm provides powerful features for deploying, deleting, upgrading, and rolling back applications on Kubernetes.
Helm is a command-line client tool. It is mainly used to create, package, and publish Kubernetes application Charts, and to create and manage local and remote Chart repositories.
Tiller is Helm's server side, deployed inside the Kubernetes cluster. Tiller receives requests from Helm, generates Kubernetes deployment manifests from the Chart (Helm calls the result a Release), and submits them to Kubernetes to create the application. Tiller also provides a set of operations on Releases, such as upgrade, delete, and rollback.
A Helm software package uses the TAR format. Similar to APT's DEB packages or YUM's RPM packages, it contains a set of YAML files that define Kubernetes resources.
A Helm software repository (Repository) is essentially a web server that stores a collection of Chart packages for users to download, and serves a manifest file listing the Chart packages in the Repository so that they can be queried. Helm can manage multiple different Repositories at the same time.
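The manifest mentioned here is an index.yaml file served at the repository root. A minimal sketch of its layout (the chart name, version, and URL below are hypothetical):

```yaml
apiVersion: v1
entries:
  mychart:                 # one list of versions per chart name
    - name: mychart
      version: 0.1.0
      urls:
        - https://example.com/charts/mychart-0.1.0.tgz
generated: "2019-07-01T00:00:00Z"
```

`helm repo update` downloads this file for each configured repository, and `helm search` queries the cached copies.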
A Chart deployed into a Kubernetes cluster with the helm install command is called a Release.
Note: a Release in Helm is not the same as the usual notion of a version; here a Release should be understood as one application instance deployed by Helm from a Chart package.
Helm consists of the client command-line tool helm and the server-side component tiller, and installing it is straightforward. Download the helm command-line tool (version 2.14.1 here) into /usr/local/bin on the master node:
curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
To install the server-side tiller, this machine also needs the kubectl tool and a kubeconfig file configured, so that kubectl can reach the apiserver and work normally. The master node here already has kubectl configured.
Because the Kubernetes APIServer has RBAC access control enabled, we need to create a service account named tiller for Tiller and assign it an appropriate role. See Role-based Access Control in the Helm documentation for details. For simplicity, we directly bind the built-in cluster-admin ClusterRole to it.
Create the helm-rbac.yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
Create the account:
[root@master /]# kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Next, deploy tiller with helm:
[root@master /]# helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
By default, Helm fetches charts from kubernetes-charts.storage.googleapis.com and pulls the Tiller image from gcr.io. If the machine you are on cannot reach those domains, you can install with the following commands instead:
# Install the server side
helm init --service-account tiller --upgrade \
  -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 \
  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

# Install the server side with TLS authentication; see https://github.com/gjmzj/kubeasz/blob/master/docs/guide/helm.md
helm init --service-account tiller --upgrade \
  -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 \
  --tiller-tls-cert /etc/kubernetes/ssl/tiller001.pem \
  --tiller-tls-key /etc/kubernetes/ssl/tiller001-key.pem \
  --tls-ca-cert /etc/kubernetes/ssl/ca.pem \
  --tiller-namespace kube-system \
  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Client side:
helm init --client-only --stable-repo-url https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts/
helm repo add incubator https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
helm repo update
By default, tiller is deployed in the kube-system namespace of the Kubernetes cluster:
[root@master /]# kubectl get pod -n kube-system -l app=helm
NAME                             READY   STATUS             RESTARTS   AGE
tiller-deploy-7bf78cdbf7-nclnr   0/1     ImagePullBackOff   0          40s
Check the version:
[root@master /]# helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Error: could not find a ready tiller pod
The Tiller pod did not come up; inspect the error:
[root@master /]# kubectl describe pod tiller-deploy-7bf78cdbf7-nclnr -n kube-system
......
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Pulling    29m (x4 over 32m)     kubelet, slaver1   Pulling image "gcr.io/kubernetes-helm/tiller:v2.14.1"
  Warning  Failed     29m (x4 over 31m)     kubelet, slaver1   Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.14.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     29m (x4 over 31m)     kubelet, slaver1   Error: ErrImagePull
  Normal   BackOff    29m (x6 over 31m)     kubelet, slaver1   Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.14.1"
  Normal   Scheduled  28m                   default-scheduler  Successfully assigned kube-system/tiller-deploy-7bf78cdbf7-nclnr to slaver1
  Warning  Failed     6m58s (x97 over 31m)  kubelet, slaver1   Error: ImagePullBackOff
This is clearly a network problem: the image cannot be pulled from gcr.io. Retry using the Alibaba Cloud mirror:
[root@master /]# helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Still not working.
Delete /root/.helm directly.
Install socat:
[root@master /]# yum install socat -y
Then uninstall Helm:
[root@master /]# helm reset -f
Run the init again:
[root@master /]# helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.

[root@master /]# helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
[root@master /]# kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-gts57         1/1     Running   0          3d22h   10.244.2.2      slaver2   <none>           <none>
coredns-5c98db65d4-qhwrw         1/1     Running   0          3d22h   10.244.1.2      slaver1   <none>           <none>
etcd-master                      1/1     Running   2          3d22h   18.16.202.163   master    <none>           <none>
kube-apiserver-master            1/1     Running   2          3d22h   18.16.202.163   master    <none>           <none>
kube-controller-manager-master   1/1     Running   6          3d22h   18.16.202.163   master    <none>           <none>
kube-flannel-ds-amd64-2lwl8      1/1     Running   0          3d18h   18.16.202.227   slaver1   <none>           <none>
kube-flannel-ds-amd64-9bjck      1/1     Running   0          3d18h   18.16.202.95    slaver2   <none>           <none>
kube-flannel-ds-amd64-gxxqg      1/1     Running   0          3d18h   18.16.202.163   master    <none>           <none>
kube-proxy-8cwj4                 1/1     Running   0          18h     18.16.202.163   master    <none>           <none>
kube-proxy-j9zpz                 1/1     Running   0          18h     18.16.202.227   slaver1   <none>           <none>
kube-proxy-vfgjv                 1/1     Running   0          18h     18.16.202.95    slaver2   <none>           <none>
kube-scheduler-master            1/1     Running   6          3d22h   18.16.202.163   master    <none>           <none>
tiller-deploy-6787c946f8-qm9cn   1/1     Running   0          15h     10.244.1.5      slaver1   <none>           <none>
You can see that tiller was installed only on the slaver1 node.
# Remove the original repository first
helm repo remove stable
# Add the new repository address
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# Update the repository
helm repo update
The Alibaba Cloud repository updates too slowly, so switch to the Microsoft mirror:
helm repo add stable http://mirror.azure.cn/kubernetes/charts/
List the repositories:
[root@master /]# helm repo list
NAME    URL
stable  http://mirror.azure.cn/kubernetes/charts/
local   http://127.0.0.1:8879/charts
[root@master /]# helm search
NAME                            CHART VERSION   APP VERSION   DESCRIPTION
stable/acs-engine-autoscaler    2.1.3           2.1.1         Scales worker nodes within agent pools
stable/aerospike                0.1.7           v3.14.1.2     A Helm chart for Aerospike in Kubernetes
stable/anchore-engine           0.1.3           0.1.6         Anchore container analysis and policy evaluation engine s...
stable/artifactory              7.0.3           5.8.4         Universal Repository Manager supporting all major packagi...
stable/artifactory-ha           0.1.0           5.8.4         Universal Repository Manager supporting all major packagi...
stable/aws-cluster-autoscaler   0.3.2                         Scales worker nodes within autoscaling groups.
stable/bitcoind                 0.1.0           0.15.1        Bitcoin is an innovative payment network and a new kind o...
stable/buildkite                0.2.1           3             Agent for Buildkite
stable/centrifugo               2.0.0           1.7.3         Centrifugo is a real-time messaging server.
stable/cert-manager             0.2.2           0.2.3         A Helm chart for cert-manager
stable/chaoskube                0.6.2           0.6.1         Chaoskube periodically kills random pods in your Kubernet...
stable/chronograf               0.4.2                         Open-source web application written in Go and React.js th...
stable/cluster-autoscaler       0.4.2           1.1.0         Scales worker nodes within autoscaling groups.
stable/cockroachdb              0.6.5           1.1.5         CockroachDB is a scalable, survivable, strongly-consisten...
stable/concourse                1.0.2           3.9.0         Concourse is a simple and scalable CI system.
stable/consul                   1.3.1           1.0.0         Highly available and distributed service discovery and ke...
stable/coredns                  0.8.0           1.0.1         CoreDNS is a DNS server that chains plugins and provides ...
stable/coscale                  0.2.0           3.9.1         CoScale Agent
stable/dask-distributed         2.0.0                         Distributed computation in Python
stable/datadog                  0.10.9                        DataDog Agent
stable/docker-registry          1.0.3           2.6.2         A Helm chart for Docker Registry
......
To uninstall the server-side Tiller from the cluster:

helm reset
A Chart is organized as a directory with the following layout:

examples/
  Chart.yaml          # YAML file describing the Chart's basic information, such as name and version
  LICENSE             # [optional] license
  README.md           # [optional] introduction to this Chart
  values.yaml         # default configuration file for the Chart
  requirements.yaml   # [optional] declares the other Charts this Chart depends on
  charts/             # [optional] holds the other Charts this Chart depends on
  templates/          # [optional] deployment-file templates; template values come from values.yaml and from Tiller
  templates/NOTES.txt # [optional] usage guide for the Chart
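The layout above can be scaffolded by hand; a minimal sketch, where the chart name example and all file contents are hypothetical:

```shell
# Create the directory skeleton of a minimal chart.
chart=/tmp/example
mkdir -p "$chart/charts" "$chart/templates"

# The only strictly required file is Chart.yaml with a name and version.
cat > "$chart/Chart.yaml" <<'EOF'
name: example
version: 0.1.0
description: a minimal demo chart
EOF

# Empty placeholders for default values and usage notes.
: > "$chart/values.yaml"
: > "$chart/templates/NOTES.txt"

find "$chart" | sort
```

In practice you would let `helm create` generate this skeleton, as shown later with the mongodb chart.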
The fields of Chart.yaml:

name: [required] name of the Chart
version: [required] version of the Chart; must follow SemVer 2: http://semver.org/
description: [optional] brief description of the Chart
keywords:
  - [optional] list of keywords
home: [optional] project home page
sources:
  - [optional] list of download URLs for this Chart
maintainers: # [optional]
  - name: [required] maintainer name
    email: [optional] maintainer email
engine: gotpl # [optional] template engine; defaults to gotpl
icon: [optional] URL of an image in SVG or PNG format
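For instance, a filled-in Chart.yaml might look like this (all values are hypothetical):

```yaml
name: mychart
version: 0.1.0
description: A demo chart for a web application
keywords:
  - web
  - demo
home: https://example.com
maintainers:
  - name: alice
    email: alice@example.com
engine: gotpl
```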
Contents of the requirements.yaml file:
dependencies:
  - name: example
    version: 1.2.3
    repository: http://example.com/charts
  - name: <Chart name>
    version: <Chart version>
    repository: <address of the repository containing that Chart>
A Chart can express its dependencies in two ways: through requirements.yaml, or by placing the dependent Charts directly into the charts directory.
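A sketch of the second method, vendoring a dependency directly under charts/ (the chart names here are hypothetical):

```shell
# Place a dependent chart inside the parent chart's charts/ directory.
parent=/tmp/parent-chart
mkdir -p "$parent/charts/child-chart"
cat > "$parent/charts/child-chart/Chart.yaml" <<'EOF'
name: child-chart
version: 0.2.0
EOF

# When the parent chart is installed, Helm also deploys every chart
# it finds under charts/; no requirements.yaml entry is needed.
find "$parent" -name Chart.yaml
```

The requirements.yaml route is usually preferred because `helm dependency update` can then fetch and pin the dependencies for you.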
The templates directory holds the templates for Kubernetes deployment files.
# db.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    heritage: deis
spec:
  replicas: 1
  selector:
    app: deis-database
  template:
    metadata:
      labels:
        app: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}
          imagePullPolicy: {{.Values.pullPolicy}}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{default "minio" .Values.storage}}
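The template above pulls its values from the chart's values.yaml; a matching fragment might look like this (the registry, tag, and policy values are hypothetical):

```yaml
imageRegistry: quay.io/deis
dockerTag: latest
pullPolicy: IfNotPresent
# storage is omitted on purpose: the template's
# `default "minio" .Values.storage` then renders it as "minio"
```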
Let's create a chart named mongodb and look at the chart's file structure.
[root@master /]# helm create mongodb
Creating mongodb
[root@master /]# tree mongodb
mongodb
├── charts                    # dependent charts
├── Chart.yaml                # version and configuration information for the chart itself
├── templates                 # template directory
│   ├── deployment.yaml       # Kubernetes Deployment object
│   ├── _helpers.tpl          # template helpers for customizing Kubernetes object configuration
│   ├── ingress.yaml
│   ├── NOTES.txt             # helm usage notes
│   ├── service.yaml          # Kubernetes Service
│   └── tests
│       └── test-connection.yaml
└── values.yaml               # Kubernetes object configuration values
3 directories, 8 files