Deploying a Highly Available Harbor Registry on a Kubernetes Cluster

I have written more than once on my blog about highly available private image registries built on Harbor, and I gave a talk on exactly this topic at the OSCHINA 源創會 2017 Shenyang meetup. Since then, many readers have asked me, via Weibo, messages to my public WeChat account, or blog comments, whether a highly available Harbor registry can be installed on a Kubernetes cluster. This article is my answer to that question.

1. A High-Availability Harbor Design on Kubernetes

First, a definite answer up front: Harbor does support deployment on Kubernetes. The official default installation, however, is not highly available but single-instance. In an earlier post, "A Practice of Deploying a Highly Available Enterprise Private Container Registry Based on Harbor", I described a bare-metal/VM HA Harbor design backed by CephFS shared storage. The HA approach on Kubernetes is similar, as the diagram below shows:

A brief walkthrough of the design around this diagram:

  • Compute high availability: run multiple replicas of each internal Harbor component on Kubernetes.
  • Image-data high availability: mount CephFS shared storage.
  • Harbor's configuration data and relational data live in an external database cluster, which guarantees data availability and real-time consistency.
  • An external Redis cluster provides session sharing for the UI component.

With the design settled, let's start deploying.

2. Environment Preparation

Harbor's official notes on Kubernetes support state that the current Harbor-on-Kubernetes scripts and configuration were verified against Kubernetes v1.6.5 and Harbor v1.2.0, so the lab environment needs Kubernetes v1.6.5 or later. Some details of my environment:

Kubernetes is at v1.7.3:

# kubelet --version
Kubernetes v1.7.3

Docker is at 17.03.2:

# docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 03:35:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 03:35:14 2017
 OS/Arch:      linux/amd64
 Experimental: false

For the Harbor scripts we use the master branch directly rather than the v1.2.0 release. This matters: the Kubernetes support scripts in the v1.2.0 source tree simply do not work, and the scripts for the adminserver component are missing entirely. The Harbor component images, on the other hand, are still the v1.2.0 ones:

The Harbor source revision:

commit 82d842d77c01657589d67af0ea2d0c66b1f96014
Merge pull request #3741 from wy65701436/add-tc-concourse   on Dec 4, 2017

The Harbor component image versions:

REPOSITORY                      TAG                 IMAGE ID
vmware/harbor-jobservice      v1.2.0          1fb18427db11
vmware/harbor-ui              v1.2.0          b7069ac3bd4b
vmware/harbor-adminserver     v1.2.0          a18331f0c1ae
vmware/registry               2.6.2-photon    c38af846a0da
vmware/nginx-photon           1.11.13         2971c92cc1ae

Beyond that, the HA Harbor setup uses an external DB cluster and an external Redis cluster. For the DB cluster we choose MySQL; either a MySQL Galera cluster or the Group Replication (MGR) feature built into MySQL 5.7 and later will work.

3. Exploring the Harbor-on-K8s Deployment Scripts and Configuration

Create a local directory harbor-install-on-k8s and download the latest Harbor source into it:

# mkdir harbor-install-on-k8s
# cd harbor-install-on-k8s
# wget -c https://github.com/vmware/harbor/archive/master.zip
# unzip master.zip
# cd harbor-master
# ls -F
AUTHORS  CHANGELOG.md  contrib/  CONTRIBUTING.md  docs/
LICENSE  make/  Makefile  NOTICE  partners.md  README.md
ROADMAP.md  src/  tests/  tools/  VERSION

The scripts for deploying Harbor on k8s live in the make/kubernetes directory:

# cd harbor-master/make
# tree kubernetes
kubernetes
├── adminserver
│   ├── adminserver.rc.yaml
│   └── adminserver.svc.yaml
├── jobservice
│   ├── jobservice.rc.yaml
│   └── jobservice.svc.yaml
├── k8s-prepare
├── mysql
│   ├── mysql.rc.yaml
│   └── mysql.svc.yaml
├── nginx
│   ├── nginx.rc.yaml
│   └── nginx.svc.yaml
├── pv
│   ├── log.pvc.yaml
│   ├── log.pv.yaml
│   ├── registry.pvc.yaml
│   ├── registry.pv.yaml
│   ├── storage.pvc.yaml
│   └── storage.pv.yaml
├── registry
│   ├── registry.rc.yaml
│   └── registry.svc.yaml
├── templates
│   ├── adminserver.cm.yaml
│   ├── jobservice.cm.yaml
│   ├── mysql.cm.yaml
│   ├── nginx.cm.yaml
│   ├── registry.cm.yaml
│   └── ui.cm.yaml
└── ui
    ├── ui.rc.yaml
    └── ui.svc.yaml

8 directories, 25 files

  • The k8s-prepare script generates each component's final configmap file (for registry and so on) from the templates under templates/ plus the settings in harbor.cfg. It plays the same role as the prepare script used when deploying Harbor with docker-compose.
  • The templates directory holds each component's configuration template (a configmap file template); these are the input to k8s-prepare.
  • The pv directory holds the storage configuration used by the Harbor components. The default is hostPath; for HA Harbor we will use CephFS instead.
  • The remaining directories, such as registry, contain each component's service YAML and rc YAML, used to start that component on the Kubernetes cluster.

The diagram below illustrates how the configuration is generated and what role each file plays when the Harbor components are started later:

 

Since we use an external MySQL DB, Harbor's bundled mysql component goes unused, and the corresponding storage.pv.yaml and storage.pvc.yaml under pv/ can be ignored as well.

4. Deployment Steps

1) Configure and create the CephFS-backed PV and PVC

First, create a directory apps/harbor-k8s on the shared CephFS storage for Harbor's needs, with two subdirectories, log and registry, to serve the storage needs of jobservice and registry respectively:

# cd /mnt   // the CephFS root is mounted at /mnt
# mkdir -p apps/harbor-k8s/log
# mkdir -p apps/harbor-k8s/registry
# tree apps/harbor-k8s
apps/harbor-k8s
├── log
└── registry

For the concrete steps of mounting CephFS, see my post "Mounting CephFS Across Nodes in a Kubernetes Cluster".

Next, create the ceph-secret that the PVs will use to mount CephFS. Write a ceph-secret.yaml file:

//ceph-secret.yaml
apiVersion: v1
data:
  key: {base64 encoding of the ceph admin.secret}
kind: Secret
metadata:
  name: ceph-secret
type: Opaque
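The key field must hold the base64 encoding of the Ceph admin key, not the raw key itself. A minimal sketch of producing that value (the key string below is a made-up placeholder; on a real cluster you would read it from your Ceph keyring, e.g. /etc/ceph/admin.secret):

```shell
# Base64-encode a Ceph client key for the Kubernetes Secret.
# CEPH_KEY is a fake placeholder value, not a real key.
CEPH_KEY='AQDRValaXxc7GhAAplaceholderkeystring=='
printf '%s' "$CEPH_KEY" | base64
```

Use printf rather than echo so that no trailing newline sneaks into the encoded value.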

Create ceph-secret:

# kubectl create -f ceph-secret.yaml
secret "ceph-secret" created

Finally, modify the PV and PVC files and create the corresponding resources. The files to change are pv/log.xxx and pv/registry.xxx; the goal is simply to replace the original hostPath with cephfs:

//log.pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: log-pv
  labels:
    type: log
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - {ceph-mon-node-ip}:6789
    path: /apps/harbor-k8s/log
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
  persistentVolumeReclaimPolicy: Retain

//log.pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: log-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      type: log

// registry.pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
  labels:
    type: registry
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - 10.47.217.91:6789
    path: /apps/harbor-k8s/registry
    user: admin
    secretRef:
      name: ceph-secret
    readOnly: false
  persistentVolumeReclaimPolicy: Retain

//registry.pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      type: registry

Create the PVs and PVCs:

# kubectl create -f log.pv.yaml
persistentvolume "log-pv" created
# kubectl create -f log.pvc.yaml
persistentvolumeclaim "log-pvc" created
# kubectl create -f registry.pv.yaml
persistentvolume "registry-pv" created
# kubectl create -f registry.pvc.yaml
persistentvolumeclaim "registry-pvc" created
# kubectl get pvc
NAME           STATUS    VOLUME        CAPACITY   ACCESSMODES   STORAGECLASS   AGE
log-pvc        Bound     log-pv        1Gi        RWX                          31s
registry-pvc   Bound     registry-pv   5Gi        RWX                          2s
# kubectl get pv
NAME          CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                  STORAGECLASS   REASON    AGE
log-pv        1Gi        RWX           Retain          Bound     default/log-pvc                                 36s
registry-pv   5Gi        RWX           Retain          Bound     default/registry-pvc                            6s

2) Create and initialize Harbor's database

In the external DB, create the user Harbor will use to access the database (harbork8s/harbork8s) and the database itself (registry_k8s):

mysql> create user harbork8s identified  by 'harbork8s';
Query OK, 0 rows affected (0.03 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'harbork8s'@'%' IDENTIFIED BY 'harbork8s' WITH GRANT OPTION;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> create database registry_k8s;
Query OK, 1 row affected (0.00 sec)

mysql> grant all on registry_k8s.* to 'harbork8s' identified by 'harbork8s';
Query OK, 0 rows affected, 1 warning (0.00 sec)

Harbor cannot yet initialize its database automatically, so we must initialize the newly created registry_k8s database ourselves. The plan: start a Harbor locally with docker-compose, dump the tables from the harbor-db container with mysqldump, then import them into registry_k8s in the external DB. Concretely:

# wget -c http://harbor.orientsoft.cn/harbor-1.2.0/harbor-offline-installer-v1.2.0.tgz
# tar zxvf harbor-offline-installer-v1.2.0.tgz

Enter the harbor directory and set hostname in harbor.cfg:

hostname = hub.tonybai.com:31777

# ./prepare
# docker-compose up -d

Find the harbor-db container id (77fde71390e7 here), enter the container, and dump the registry database:

# docker exec -i -t  77fde71390e7 bash
# mysqldump -u root -pxxx --databases registry > /tmp/registry.dump

Leave the container and copy the exported registry.dump to the local host:

# docker cp 77fde71390e7:/tmp/registry.dump ./

Rename registry.dump to registry_k8s.dump, change the registry inside it to registry_k8s, then import it into the external DB:

# mysql -h external_db_ip -P 3306 -u harbork8s -pharbork8s
mysql> source ./registry_k8s.dump;
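The rename inside the dump can be scripted; here is a sketch with sed, using a one-line stand-in for registry.dump so the command is self-contained (the pattern assumes mysqldump's usual backtick-quoted database name, which covers the CREATE DATABASE and USE statements):

```shell
# Rewrite every reference to the `registry` database as `registry_k8s`.
# In real use, registry.dump is the file exported from the container above.
printf 'CREATE DATABASE `registry`;\nUSE `registry`;\n' > registry.dump
sed -e 's/`registry`/`registry_k8s`/g' registry.dump > registry_k8s.dump
cat registry_k8s.dump
```

It is worth eyeballing the rewritten dump before importing, since table data could in principle also contain the string registry.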

3) Configure make/harbor.cfg

harbor.cfg is the key input to configuration generation. Before running k8s-prepare, adjust it to our needs and environment:

// make/harbor.cfg
hostname = hub.tonybai.com:31777
db_password = harbork8s
db_host = {external_db_ip}
db_user = harbork8s

4) Adjust the configmap templates (*.cm.yaml) under templates/

  • templates/adminserver.cm.yaml:
MYSQL_HOST: {external_db_ip}
MYSQL_USR: harbork8s
MYSQL_DATABASE: registry_k8s
RESET: "true"

Note: adminserver.cm.yaml does not pick up the database settings from harbor.cfg, so they must be configured again separately here. Presumably this will be fixed in a future release.

  • templates/registry.cm.yaml:
rootcertbundle: /etc/registry/root.crt
  • templates/ui.cm.yaml:

The ui component needs session sharing. It reads the _REDIS_URL environment variable:

//vmware/harbor/src/ui/main.go
... ..
    redisURL := os.Getenv("_REDIS_URL")
    if len(redisURL) > 0 {
        beego.BConfig.WebConfig.Session.SessionProvider = "redis"
        beego.BConfig.WebConfig.Session.SessionProviderConfig = redisURL
    }
... ...

The format of redisURL is documented in the beego source:

// beego/session/redis/sess_redis.go

// SessionInit init redis session
// savepath like redis server addr,pool size,password,dbnum
// e.g. 127.0.0.1:6379,100,astaxie,0
func (rp *Provider) SessionInit(maxlifetime int64, savePath string) error {...}

So we add one line to templates/ui.cm.yaml:

_REDIS_URL: {redis_ip}:6379,100,{redis_password},11
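It's easy to get the comma-separated shape wrong, so a quick sanity check of the value (addr,poolsize,password,dbnum) before committing it to ui.cm.yaml can help; the address and password below are placeholders:

```shell
# Validate that _REDIS_URL matches beego's addr,poolsize,password,dbnum form.
REDIS_URL='10.0.0.5:6379,100,mypassword,11'
echo "$REDIS_URL" | grep -Eq '^[^,:]+:[0-9]+,[0-9]+,[^,]*,[0-9]+$' && echo 'format ok'
```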

jobservice.cm.yaml and nginx.cm.yaml need no changes.

5) Adjust the xxx.rc.yaml and xxx.svc.yaml templates in each component directory

  • adminserver/adminserver.rc.yaml
replicas: 3
  • adminserver/adminserver.svc.yaml

Unchanged.

  • jobservice/jobservice.rc.yaml, jobservice/jobservice.svc.yaml

Unchanged.

  • nginx/nginx.rc.yaml
replicas: 3
  • nginx/nginx.svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      nodePort: 31777
      protocol: TCP
  selector:
    name: nginx-apps
  • registry/registry.rc.yaml
replicas: 3
mountPath: /etc/registry

There is a serious bug here: the default configmap mount path in registry.rc.yaml, /etc/docker/registry, does not match /etc/registry, the path where the registry config file lives inside the registry Docker image. As a result, the registry configmap we so carefully prepared never takes effect: the data stays in memory rather than on the CephFS storage we configured, so if a registry container exits, the repository's image data is lost, and data-level high availability is impossible. We therefore change the mountPath to match the registry image, i.e. /etc/registry.
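For orientation, this is roughly what the corrected fragment of registry.rc.yaml looks like (only the relevant fields are shown; the volume name follows the stock template and may differ in your copy):

```yaml
# In the registry container spec: mount the configmap at the path
# the registry image actually reads its config from.
volumeMounts:
  - name: config
    mountPath: /etc/registry   # was /etc/docker/registry in the stock template
```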

  • registry/registry.svc.yaml

Unchanged.

  • ui/ui.rc.yaml
replicas: 3

Also add the _REDIS_URL environment variable to the ui container spec in ui.rc.yaml, sourced from the configmap (environment variables belong in the rc, not the service):

- name: _REDIS_URL
  valueFrom:
    configMapKeyRef:
      name: harbor-ui-config
      key: _REDIS_URL

  • ui/ui.svc.yaml

Unchanged.

6) Run k8s-prepare

Run k8s-prepare to generate each component's configmap file:

# ./k8s-prepare
# git status
 ... ...

    adminserver/adminserver.cm.yaml
    jobservice/jobservice.cm.yaml
    mysql/mysql.cm.yaml
    nginx/nginx.cm.yaml
    registry/registry.cm.yaml
    ui/ui.cm.yaml

7) Start the Harbor components

  • Create the configmaps
# kubectl apply -f jobservice/jobservice.cm.yaml
configmap "harbor-jobservice-config" created
# kubectl apply -f nginx/nginx.cm.yaml
configmap "harbor-nginx-config" created
# kubectl apply -f registry/registry.cm.yaml
configmap "harbor-registry-config" created
# kubectl apply -f ui/ui.cm.yaml
configmap "harbor-ui-config" created
# kubectl apply -f adminserver/adminserver.cm.yaml
configmap "harbor-adminserver-config" created

# kubectl get cm
NAME                        DATA      AGE
harbor-adminserver-config   42        14s
harbor-jobservice-config    8         16s
harbor-nginx-config         3         16s
harbor-registry-config      2         15s
harbor-ui-config            9         15s
  • Create the k8s service for each Harbor component
# kubectl apply -f jobservice/jobservice.svc.yaml
service "jobservice" created
# kubectl apply -f nginx/nginx.svc.yaml
service "nginx" created
# kubectl apply -f registry/registry.svc.yaml
service "registry" created
# kubectl apply -f ui/ui.svc.yaml
service "ui" created
# kubectl apply -f adminserver/adminserver.svc.yaml
service "adminserver" created

# kubectl get svc
NAME               CLUSTER-IP      EXTERNAL-IP   PORT(S)
adminserver        10.103.7.8      <none>        80/TCP
jobservice         10.104.14.178   <none>        80/TCP
nginx              10.103.46.129   <nodes>       80:31777/TCP
registry           10.101.185.42   <none>        5000/TCP,5001/TCP
ui                 10.96.29.187    <none>        80/TCP
  • Create the rcs to start each component's pods
# kubectl apply -f registry/registry.rc.yaml
replicationcontroller "registry-rc" created
# kubectl apply -f jobservice/jobservice.rc.yaml
replicationcontroller "jobservice-rc" created
# kubectl apply -f ui/ui.rc.yaml
replicationcontroller "ui-rc" created
# kubectl apply -f nginx/nginx.rc.yaml
replicationcontroller "nginx-rc" created
# kubectl apply -f adminserver/adminserver.rc.yaml
replicationcontroller "adminserver-rc" created

# kubectl get pods
NAMESPACE     NAME                  READY     STATUS    RESTARTS   AGE
default       adminserver-rc-9pc78  1/1       Running   0          3m
default       adminserver-rc-pfqtv  1/1       Running   0          3m
default       adminserver-rc-w55sx  1/1       Running   0          3m
default       jobservice-rc-d18zk   1/1       Running   1          3m
default       nginx-rc-3t5km        1/1       Running   0          3m
default       nginx-rc-6wwtz        1/1       Running   0          3m
default       nginx-rc-dq64p        1/1       Running   0          3m
default       registry-rc-6w3b7     1/1       Running   0          3m
default       registry-rc-dfdld     1/1       Running   0          3m
default       registry-rc-t6fnx     1/1       Running   0          3m
default       ui-rc-0kwrz           1/1       Running   1          3m
default       ui-rc-kzs8d           1/1       Running   1          3m
default       ui-rc-vph6d           1/1       Running   1          3m

5. Verification and Troubleshooting

1) Docker CLI access

Harbor uses HTTP access by default, so before docker login we must add our registry address to insecure-registries in /etc/docker/daemon.json:

///etc/docker/daemon.json
{
  "insecure-registries": ["hub.tonybai.com:31777"]
}
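A malformed daemon.json will keep the Docker daemon from starting at all, so it's worth validating the JSON before restarting. A small check, assuming python3 is on the host (the JSON value below mirrors the file above):

```shell
# Fail fast on malformed JSON (e.g. a stray trailing comma) before it
# is written to /etc/docker/daemon.json.
DAEMON_JSON='{"insecure-registries": ["hub.tonybai.com:31777"]}'
printf '%s' "$DAEMON_JSON" | python3 -m json.tool > /dev/null && echo 'daemon.json ok'
```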

After systemctl daemon-reload and a restart of the Docker service, we can log in to the new registry with docker login (initial password: Harbor12345):

docker login hub.tonybai.com:31777
Username (admin): admin
Password:
Login Succeeded

2) docker push & pull

Let's test by pushing a busybox image:

# docker pull busybox
Using default tag: latest
latest: Pulling from library/busybox
0ffadd58f2a6: Pull complete
Digest: sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
Status: Downloaded newer image for busybox:latest
# docker tag busybox:latest hub.tonybai.com:31777/library/busybox:latest
# docker push hub.tonybai.com:31777/library/busybox:latest
The push refers to a repository [hub.tonybai.com:31777/library/busybox]
0271b8eebde3: Preparing
0271b8eebde3: Pushing [==================================================>] 1.338 MB
0271b8eebde3: Pushed
latest: digest: sha256:179cf024c8a22f1621ea012bfc84b0df7e393cb80bf3638ac80e30d23e69147f size: 527

Pull the busybox image we just pushed:

# docker pull hub.tonybai.com:31777/library/busybox:latest
latest: Pulling from library/busybox
414e5515492a: Pull complete
Digest: sha256:179cf024c8a22f1621ea012bfc84b0df7e393cb80bf3638ac80e30d23e69147f
Status: Downloaded newer image for hub.tonybai.com:31777/library/busybox:latest

3) Access the Harbor UI

Open http://hub.tonybai.com:31777 in a browser and log in with admin/Harbor12345. If you see the page below, the installation and deployment succeeded:

 
