Kubernetes Deployment (1): Architecture and Feature Overview
Kubernetes Deployment (2): System Environment Initialization
Kubernetes Deployment (3): Creating the CA Certificates
Kubernetes Deployment (4): Deploying the etcd Cluster
Kubernetes Deployment (5): Deploying HAProxy and Keepalived
Kubernetes Deployment (6): Deploying the Master Nodes
Kubernetes Deployment (7): Deploying the Worker Nodes
Kubernetes Deployment (8): Deploying the Flannel Network
Kubernetes Deployment (9): Deploying CoreDNS, Dashboard, and Ingress
Kubernetes Deployment (10): Storage — Deploying GlusterFS and Heketi
Kubernetes Deployment (11): Management — Deploying Helm and Rancher
Kubernetes Deployment (12): Deploying the Harbor Enterprise Registry with Helm
Harbor's official GitHub: https://github.com/goharbor
Harbor is an enterprise-class registry server for storing and distributing Docker images. It extends the open-source Docker Distribution with the features users typically need, such as security, identity, and management. Keeping the registry close to the build and run environments improves image transfer efficiency. Harbor supports replicating images between registries, and also offers advanced security features such as user management, access control, and activity auditing.
First, point A records for h.cnlinux.club and n.cnlinux.club at my load-balancer IP, 10.31.90.200; these hostnames will be used when creating the Ingress.
```shell
[root@node-01 harbor]# wget https://github.com/goharbor/harbor-helm/archive/1.0.0.tar.gz -O harbor-helm-v1.0.0.tar.gz
```
Extract the values.yaml file from harbor-helm-v1.0.0.tar.gz and place it in the same directory as the tarball. Then edit values.yaml; my configuration changes the following fields:
Note that if a StorageClass already exists in the cluster you can use it directly: just put its name in the various persistence.persistentVolumeClaim.XXX.storageClass fields and a PVC will be created automatically for each component. However, to avoid the management overhead of multiple PVCs, I created a single PVC before deploying and point all of Harbor's services at it. For what each field does, see the official documentation at https://github.com/goharbor/harbor-helm.
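If you do go the StorageClass route instead, only the storageClass field changes per component. A minimal sketch for the registry component, assuming the gluster-heketi StorageClass from earlier in this series (substitute your cluster's own name):

```yaml
persistence:
  persistentVolumeClaim:
    registry:
      existingClaim: ""               # left empty so the chart creates its own PVC
      storageClass: "gluster-heketi"  # assumed name; use your cluster's StorageClass
      accessMode: ReadWriteOnce
      size: 5Gi
```

The same pattern applies to the chartmuseum, jobservice, database, and redis sections.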
expose.ingress.hosts.core
expose.ingress.hosts.notary
externalURL
persistence.persistentVolumeClaim.registry.existingClaim
persistence.persistentVolumeClaim.registry.subPath
persistence.persistentVolumeClaim.chartmuseum.existingClaim
persistence.persistentVolumeClaim.chartmuseum.subPath
persistence.persistentVolumeClaim.jobservice.existingClaim
persistence.persistentVolumeClaim.jobservice.subPath
persistence.persistentVolumeClaim.database.existingClaim
persistence.persistentVolumeClaim.database.subPath
persistence.persistentVolumeClaim.redis.existingClaim
persistence.persistentVolumeClaim.redis.subPath
```yaml
expose:
  type: ingress
  tls:
    enabled: true
    secretName: ""
    notarySecretName: ""
    commonName: ""
  ingress:
    hosts:
      core: h.cnlinux.club
      notary: n.cnlinux.club
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    name: harbor
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
  nodePort:
    name: harbor
    ports:
      http:
        port: 80
        nodePort: 30002
      https:
        port: 443
        nodePort: 30003
      notary:
        port: 4443
        nodePort: 30004

externalURL: https://h.cnlinux.club

persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "registry"
      accessMode: ReadWriteOnce
      size: 5Gi
    chartmuseum:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "chartmuseum"
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "jobservice"
      accessMode: ReadWriteOnce
      size: 1Gi
    database:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "database"
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      existingClaim: "pvc-harbor"
      storageClass: ""
      subPath: "redis"
      accessMode: ReadWriteOnce
      size: 1Gi
  imageChartStorage:
    type: filesystem
    filesystem:
      rootdirectory: /storage

imagePullPolicy: IfNotPresent
logLevel: debug
harborAdminPassword: "Harbor12345"
secretKey: "not-a-secure-key"

nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

portal:
  image:
    repository: goharbor/harbor-portal
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

core:
  image:
    repository: goharbor/harbor-core
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

adminserver:
  image:
    repository: goharbor/harbor-adminserver
    tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v1.7.0
  replicas: 1
  maxJobWorkers: 10
  jobLogger: file
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.6.2-v1.7.0
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

chartmuseum:
  enabled: true
  image:
    repository: goharbor/chartmuseum-photon
    tag: v0.7.1-v1.7.0
  replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

clair:
  enabled: true
  image:
    repository: goharbor/clair-photon
    tag: v2.0.7-v1.7.0
  replicas: 1
  httpProxy:
  httpsProxy:
  updatersInterval: 12
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

notary:
  enabled: true
  server:
    image:
      repository: goharbor/notary-server-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  signer:
    image:
      repository: goharbor/notary-signer-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  podAnnotations: {}

database:
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: v1.7.0
    password: "changeit"
    nodeSelector: {}
    tolerations: []
    affinity: {}
  podAnnotations: {}

redis:
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: v1.7.0
    nodeSelector: {}
    tolerations: []
    affinity: {}
  podAnnotations: {}
```
Because Harbor's stateful components (its database in particular) must not lose data when pods are rescheduled, we store their data on a GlusterFS-backed volume.
```shell
[root@node-01 harbor]# vim pvc-harbor.yaml
```

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-harbor
spec:
  storageClassName: gluster-heketi
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```
```shell
[root@node-01 harbor]# kubectl apply -f pvc-harbor.yaml
[root@node-01 harbor]# helm install --name harbor harbor-helm-v1.0.0.tar.gz -f values.yaml
```
If the installation fails, you can remove it with `helm del --purge harbor` and install again.
After a while you will see that all of Harbor's pods are running and the UI can be accessed. The default credentials are admin/Harbor12345; the default admin password can be changed by editing values.yaml.
```shell
[root@node-01 ~]# kubectl get pod
NAME                                           READY   STATUS    RESTARTS   AGE
harbor-harbor-adminserver-7fffc7bf4d-vj845     1/1     Running   1          15d
harbor-harbor-chartmuseum-bdf64f899-brnww      1/1     Running   0          15d
harbor-harbor-clair-8457c45dd8-9rgq8           1/1     Running   1          15d
harbor-harbor-core-7fc454c6d8-b6kvs            1/1     Running   1          15d
harbor-harbor-database-0                       1/1     Running   0          15d
harbor-harbor-jobservice-7895949d6b-zbwkf      1/1     Running   1          15d
harbor-harbor-notary-server-57dd94bf56-txdkl   1/1     Running   0          15d
harbor-harbor-notary-signer-5d64c5bf8d-kppts   1/1     Running   0          15d
harbor-harbor-portal-648c56499f-g28rz          1/1     Running   0          15d
harbor-harbor-redis-0                          1/1     Running   0          15d
harbor-harbor-registry-5cd9c49489-r92ph        2/2     Running   0          15d
```
Next, we create a private project named test to experiment with.
```shell
# Create the certificate directory on every node
for n in `seq -w 01 06`;do ssh node-$n "mkdir -p /etc/docker/certs.d/h.cnlinux.club";done
# Copy the downloaded Harbor CA certificate to /etc/docker/certs.d/h.cnlinux.club on every node
for n in `seq -w 01 06`;do scp ca.crt node-$n:/etc/docker/certs.d/h.cnlinux.club/;done
```
After docker login succeeds on a node, the credentials are cached in .docker/config.json:

```shell
[root@node-06 ~]# docker login h.cnlinux.club
Username: admin
Password:
Login Succeeded
[root@node-06 ~]# cat .docker/config.json
{
	"auths": {
		"h.cnlinux.club": {
			"auth": "YWRtaW46SGFyYm9yMTIzNDU="
		}
	}
}
```
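The auth value cached by docker login is not encrypted in any way: it is simply the base64 encoding of username:password, which you can confirm locally:

```shell
# Decode the cached auth value; it is base64("username:password")
echo 'YWRtaW46SGFyYm9yMTIzNDU=' | base64 -d
# → admin:Harbor12345
```

This is why .docker/config.json should be protected as carefully as the password itself.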
```shell
[root@node-06 ~]# docker pull nginx:latest
[root@node-06 ~]# docker tag nginx:latest h.cnlinux.club/test/nginx:latest
[root@node-06 ~]# docker push h.cnlinux.club/test/nginx:latest
```
Question: if the cluster has many nodes, does someone have to log in on every single node before it can pull images from the Harbor registry? Wouldn't that be very tedious?
A Secret of type kubernetes.io/dockerconfigjson solves exactly this problem. First base64-encode the whole config.json:

```shell
[root@node-06 ~]# cat .docker/config.json |base64
ewoJImF1dGhzIjogewoJCSJoLmNubGludXguY2x1YiI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wNi4xLWNlIChsaW51eCkiCgl9Cn0=
```
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-registry-secret
  namespace: default
data:
  .dockerconfigjson: ewoJImF1dGhzIjogewoJCSJoLmNubGludXguY2x1YiI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZTR0Z5WW05eU1USXpORFU9IgoJCX0KCX0sCgkiSHR0cEhlYWRlcnMiOiB7CgkJIlVzZXItQWdlbnQiOiAiRG9ja2VyLUNsaWVudC8xOC4wNi4xLWNlIChsaW51eCkiCgl9Cn0=
type: kubernetes.io/dockerconfigjson
```
```shell
[root@node-01 ~]# kubectl create -f harbor-registry-secret.yaml
secret/harbor-registry-secret created
```
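One pitfall worth noting (my addition) when building the data value by hand: GNU base64 wraps its output at 76 columns by default, and a wrapped value silently produces a broken Secret. Passing -w 0 keeps the encoding on a single line. A round-trip sketch using a shortened, hypothetical config payload:

```shell
# -w 0 disables line wrapping (GNU coreutils base64)
cfg='{"auths":{"h.cnlinux.club":{"auth":"YWRtaW46SGFyYm9yMTIzNDU="}}}'
printf '%s' "$cfg" | base64 -w 0
```

The encoded string decodes back to the original payload with `base64 -d`.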
Now deploy an nginx workload from the image we just pushed, referencing the Secret via imagePullSecrets, and point an A record for nginx.cnlinux.club at the load-balancer IP 10.31.90.200.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: h.cnlinux.club/test/nginx:latest
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: harbor-registry-secret
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: nginx
    protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: nginx.cnlinux.club
    http:
      paths:
      - path:
        backend:
          serviceName: nginx
          servicePort: 80
```
```shell
[root@node-01 ~]# kubectl get pod -o wide|grep nginx
deploy-nginx-647f9649f5-88mkt   1/1   Running   0   2m41s   10.34.0.5   node-06   <none>   <none>
deploy-nginx-647f9649f5-9z842   1/1   Running   0   2m41s   10.40.0.5   node-04   <none>   <none>
deploy-nginx-647f9649f5-w44ck   1/1   Running   0   2m41s   10.46.0.6   node-05   <none>   <none>
```
Finally, browse to http://nginx.cnlinux.club, and everything is complete.
I will keep publishing the rest of the Kubernetes documents in this series. If you find them useful, please follow and like; if you have questions, leave me a comment below. Many thanks!