Deploying a Highly Available Harbor Image Registry on Kubernetes

Original article: fuckcloudnative.io/posts/insta…

Environment:

  • Kubernetes version: 1.18.10
  • Harbor Chart version: 1.5.2
  • Harbor version: 2.1.2
  • Helm version: 3.3.4
  • Persistent storage driver: Ceph RBD

1. Introduction to Harbor

Overview

Harbor is an open-source container image registry that secures images with role-based access control, scans images for vulnerabilities, and signs images as trusted. As a CNCF incubating project, Harbor delivers compliance, performance, and interoperability to help you consistently and securely manage images across cloud-native compute platforms such as Kubernetes and Docker.

Features

  • Management: multi-tenancy, extensibility
  • Security: security and vulnerability analysis, content signing and verification

2. Creating a Custom Certificate

We install Harbor with HTTPS enabled by default, which requires a TLS certificate. If we don't supply our own certificate files, Harbor generates them automatically, but those are only valid for one year. To avoid having to renew the certificate frequently, we generate a self-signed certificate with a validity of 100 years, as follows:

Install cfssl

cfssl is an open-source PKI/TLS toolkit from CloudFlare. It consists of a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates, and it is written in Go.

GitHub: github.com/cloudflare/…

Download: pkg.cfssl.org/

Installation on macOS:

🐳  → brew install cfssl

Generic installation:

🐳  → wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
🐳  → wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
🐳  → wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
🐳  → chmod +x /usr/local/bin/cfssl*

Fetch the default configuration

🐳  → cfssl print-defaults config > ca-config.json
🐳  → cfssl print-defaults csr > ca-csr.json

Generate the CA certificate

Change the contents of ca-config.json to:

{
    "signing": {
        "default": {
            "expiry": "876000h"
        },
        "profiles": {
            "harbor": {
                "expiry": "876000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            }
        }
    }
}

Change the contents of ca-csr.json to:

{
  "CN": "CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "hangzhou",
      "L": "hangzhou",
      "O": "harbor",
      "OU": "System"
    }
  ]
}

With the configuration files in place, generate the CA certificate:

🐳  → cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2020/12/30 00:45:55 [INFO] generating a new CA key and certificate from CSR
2020/12/30 00:45:55 [INFO] generate received request
2020/12/30 00:45:55 [INFO] received CSR
2020/12/30 00:45:55 [INFO] generating key: rsa-2048
2020/12/30 00:45:56 [INFO] encoded CSR
2020/12/30 00:45:56 [INFO] signed certificate with serial number 529798847867094212963042958391637272775966762165

Three new files now appear in the directory:

🐳  → tree
├── ca-config.json # the signing config JSON from above
├── ca.csr
├── ca-csr.json    # the CSR JSON used to request the certificate
├── ca-key.pem
├── ca.pem

We have now generated:

  • Root certificate: ca.pem
  • Root certificate private key: ca-key.pem
  • Root certificate signing request: ca.csr (CSR stands for Certificate Signing Request)

Issue the harbor certificate

Create harbor-csr.json with the following contents:

{
    "CN": "harbor",
    "hosts": [
        "example.net",
        "*.example.net"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "ST": "CA",
            "L": "San Francisco",
	    "O": "harbor",
	    "OU": "System"
        }
    ]
}

Sign the harbor certificate with the CA created above:

🐳  → cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=harbor harbor-csr.json | cfssljson -bare harbor
2020/12/30 00:50:31 [INFO] generate received request
2020/12/30 00:50:31 [INFO] received CSR
2020/12/30 00:50:31 [INFO] generating key: rsa-2048
2020/12/30 00:50:31 [INFO] encoded CSR
2020/12/30 00:50:31 [INFO] signed certificate with serial number 372641098655462687944401141126722021767151134362

A few more files now appear in the directory:

🐳  → tree -L 1
├── harbor.csr
├── harbor-csr.json
├── harbor-key.pem
├── harbor.pem

At this point, the harbor certificate has been generated.
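As an optional sanity check (not part of the original steps), you can inspect the signed certificate with the cfssl-certinfo tool installed earlier and confirm that the hosts and the roughly 100-year expiry are what we asked for:

🐳  → cfssl-certinfo -cert harbor.pem
# "sans" should list example.net and *.example.net,
# and "not_after" should be about 100 years in the future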

Create the Secret resource

Create a Kubernetes Secret and import the certificate files into it:

  • -n: the namespace in which to create the resource
  • --from-file: the path of the file to import
🐳  → kubectl create ns harbor
🐳  → kubectl -n harbor create secret generic harbor-tls --from-file=tls.crt=harbor.pem --from-file=tls.key=harbor-key.pem --from-file=ca.crt=ca.pem

Check that it was created successfully:

🐳  → kubectl -n harbor get secret harbor-tls
NAME         TYPE     DATA   AGE
harbor-tls   Opaque   3      1m

3. Using Ceph S3 as Backend Storage for the Harbor Chart

Create radosgw

If your cluster was deployed with ceph-deploy, you can create radosgw with the following steps.

First, install radosgw:

🐳  → ceph-deploy install --rgw 172.16.7.1 172.16.7.2 172.16.7.3

Then create radosgw:

🐳  → ceph-deploy rgw create 172.16.7.1 172.16.7.2 172.16.7.3

If your cluster was deployed with cephadm, you can create radosgw with the following steps.

cephadm deploys radosgw as a collection of daemons that manage a particular realm and zone. For example, to deploy one rgw daemon on 172.16.7.1 serving the mytest realm and the myzone zone:

# If no realm exists yet, create one first:
🐳  → radosgw-admin realm create --rgw-realm=mytest --default

# Next, create a new zonegroup:
🐳  → radosgw-admin zonegroup create --rgw-zonegroup=myzg --master --default

# Next, create a zone:
🐳  → radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=myzone --master --default

# Deploy a set of radosgw daemons for the given realm and zone:
🐳  → ceph orch apply rgw mytest myzone --placement="1 172.16.7.1"

Check the service status:

🐳  → ceph orch ls|grep rgw
rgw.mytest.myzone      1/1  5m ago     7w   count:1 k8s01  docker.io/ceph/ceph:v15     4405f6339e35

Test that the service is working:

🐳  → curl -s http://172.16.7.1

A working service returns something like this:

<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
  <Buckets></Buckets>
</ListAllMyBucketsResult>

View the zonegroup

🐳  → radosgw-admin zonegroup get
{
    "id": "ed34ba6e-7089-4b7f-91c4-82fc856fc16c",
    "name": "myzg",
    "api_name": "myzg",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "650e7cca-aacb-4610-a589-acd605d53d23",
    "zones": [
        {
            "id": "650e7cca-aacb-4610-a589-acd605d53d23",
            "name": "myzone",
            "endpoints": [],
            "log_meta": "false",
            "log_data": "false",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "e63c234c-e069-4a0d-866d-1ebdc69ec5fe",
    "sync_policy": {
        "groups": []
    }
}

Create Auth Key

🐳  → ceph auth get-or-create client.radosgw.gateway osd 'allow rwx' mon 'allow rwx' -o /etc/ceph/ceph.client.radosgw.keyring

Distribute /etc/ceph/ceph.client.radosgw.keyring to the other radosgw nodes, for example as sketched below.
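A minimal way to do that, assuming passwordless SSH as root to the other radosgw nodes used above (adjust the IPs to your environment):

🐳  → for node in 172.16.7.2 172.16.7.3; do
          scp /etc/ceph/ceph.client.radosgw.keyring root@${node}:/etc/ceph/
      done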

Create the object storage user and access credentials

  1. Create a radosgw user for s3 access

    🐳  → radosgw-admin user create --uid="harbor" --display-name="Harbor Registry"
  2. Create a swift user

    🐳  → radosgw-admin subuser create --uid=harbor --subuser=harbor:swift --access=full
  3. Create Secret Key

    🐳  → radosgw-admin key create --subuser=harbor:swift --key-type=swift --gen-secret

    Note down the access_key & secret_key from the keys field.

Create a bucket

First, install awscli:

🐳  → pip3 install awscli  -i https://pypi.tuna.tsinghua.edu.cn/simple

View the access keys:

🐳  → radosgw-admin user info --uid="harbor"|jq .keys
[
  {
    "user": "harbor",
    "access_key": "VGZQY32LMFQOQPVNTDSJ",
    "secret_key": "YZMMYqoy1ypHaqGOUfwLvdAj9A731iDYDjYqwkU5"
  }
]

Configure awscli:

🐳  → aws configure --profile=ceph
AWS Access Key ID [None]: VGZQY32LMFQOQPVNTDSJ
AWS Secret Access Key [None]: YZMMYqoy1ypHaqGOUfwLvdAj9A731iDYDjYqwkU5
Default region name [None]:
Default output format [None]: json

Once configured, the credentials are stored in ~/.aws/credentials:

🐳  → cat ~/.aws/credentials
[ceph]
aws_access_key_id = VGZQY32LMFQOQPVNTDSJ
aws_secret_access_key = YZMMYqoy1ypHaqGOUfwLvdAj9A731iDYDjYqwkU5

The configuration is stored in ~/.aws/config:

🐳  → cat ~/.aws/config
[profile ceph]
region = cn-hangzhou-1
output = json

Create the bucket:

🐳  → aws --profile=ceph --endpoint=http://172.16.7.1 s3api create-bucket --bucket harbor

List the buckets:

🐳  → radosgw-admin bucket list
[
    "harbor"
]

Check the bucket status:

🐳  → radosgw-admin bucket stats
[
    {
        "bucket": "harbor",
        "num_shards": 11,
        "tenant": "",
        "zonegroup": "ed34ba6e-7089-4b7f-91c4-82fc856fc16c",
        "placement_rule": "default-placement",
        "explicit_placement": {
            "data_pool": "",
            "data_extra_pool": "",
            "index_pool": ""
        },
        "id": "650e7cca-aacb-4610-a589-acd605d53d23.194274.1",
        "marker": "650e7cca-aacb-4610-a589-acd605d53d23.194274.1",
        "index_type": "Normal",
        "owner": "harbor",
        "ver": "0#1,1#1,2#1,3#1,4#1,5#1,6#1,7#1,8#1,9#1,10#1",
        "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0",
        "mtime": "2020-12-29T17:19:02.481567Z",
        "creation_time": "2020-12-29T17:18:58.940915Z",
        "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#",
        "usage": {},
        "bucket_quota": {
            "enabled": false,
            "check_on_raw": false,
            "max_size": -1,
            "max_size_kb": 0,
            "max_objects": -1
        }
    }
]

Check the pool status

🐳  → rados df
POOL_NAME                    USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED    RD_OPS       RD     WR_OPS       WR  USED COMPR  UNDER COMPR
.rgw.root                 2.3 MiB       13       0      39                   0        0         0       533  533 KiB         21   16 KiB         0 B          0 B
cache                         0 B        0       0       0                   0        0         0         0      0 B          0      0 B         0 B          0 B
device_health_metrics     3.2 MiB       18       0      54                   0        0         0       925  929 KiB        951  951 KiB         0 B          0 B
kubernetes                735 GiB    72646      99  217938                   0        0         0  48345148  242 GiB  283283048  7.3 TiB         0 B          0 B
myzone.rgw.buckets.index  8.6 MiB       11       0      33                   0        0         0        44   44 KiB         11      0 B         0 B          0 B
myzone.rgw.control            0 B        8       0      24                   0        0         0         0      0 B          0      0 B         0 B          0 B
myzone.rgw.log              6 MiB      206       0     618                   0        0         0   2188882  2.1 GiB    1457026   32 KiB         0 B          0 B
myzone.rgw.meta           960 KiB        6       0      18                   0        0         0        99   80 KiB         17    8 KiB         0 B          0 B

total_objects    72908
total_used       745 GiB
total_avail      87 TiB
total_space      88 TiB

4. Preparing the Harbor Configuration Manifest

Since we install Harbor through Helm, we need to prepare a configuration manifest for the Harbor chart in advance; it sets a series of parameters for the Harbor application to be deployed. There are too many parameters to list every option the Harbor chart supports here; see the harbor-helm repository on GitHub, or dump the chart's defaults as shown below.
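If you want to browse the full set of default values locally, Helm can print them for you (this assumes the harbor chart repository has already been added, which is done in section 5):

🐳  → helm show values harbor/harbor --version 1.5.2 > harbor-default-values.yaml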

The configuration parameters we need are described below:

values.yaml

# Expose configuration. This registry is only used on the internal network, so clusterIP is used directly
expose:
  type: clusterIP
  tls:
    ### Whether to enable HTTPS
    enabled: true
    certSource: secret
    auto:
      # The common name used to generate the certificate, it's necessary
      # when the type isn't "ingress"
      commonName: "harbor.example.net"
    secret:
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      secretName: "harbor-tls"
      # The name of secret which contains keys named:
      # "tls.crt" - the certificate
      # "tls.key" - the private key
      # Only needed when the "expose.type" is "ingress".
      notarySecretName: ""

## If Harbor is deployed behind a proxy, set this to the URL of the proxy
externalURL: https://harbor.example.net

### Persistence settings for each Harbor component; storageClass is set to the cluster's Ceph RBD StorageClass
persistence:
  enabled: true
  # Setting it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart deleted
  # (this does not apply for PVCs that are created for internal database
  # and redis components, i.e. they are never deleted automatically)
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound,
      # and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used(the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "csi-rbd-sc"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 100Gi
    chartmuseum:
      existingClaim: ""
      storageClass: "csi-rbd-sc"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: ""
      storageClass: "csi-rbd-sc"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: "csi-rbd-sc"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: "csi-rbd-sc"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    trivy:
      existingClaim: ""
      storageClass: "csi-rbd-sc"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi

### Password for the default admin user. Note: the password must contain upper- and lower-case letters and digits
harborAdminPassword: "Mydlq123456"

### Log level
logLevel: info

# CPU & memory requests for the individual components
nginx:
  resources:
    requests:
      memory: 256Mi
      cpu: 500m
portal:
  resources:
    requests:
      memory: 256Mi
      cpu: 500m
core:
  resources:
    requests:
      memory: 256Mi
      cpu: 1000m
jobservice:
  resources:
    requests:
      memory: 256Mi
      cpu: 500m
registry:
  registry:
    resources:
      requests:
        memory: 256Mi
        cpu: 500m
  controller:
    resources:
      requests:
        memory: 256Mi
        cpu: 500m
clair:
  clair:
    resources:
      requests:
        memory: 256Mi
        cpu: 500m
  adapter:
    resources:
      requests:
        memory: 256Mi
        cpu: 500m
notary:
  server:
    resources:
      requests:
        memory: 256Mi
        cpu: 500m
  signer:
    resources:
      requests:
        memory: 256Mi
        cpu: 500m
database:
  internal:
    resources:
      requests:
        memory: 256Mi
        cpu: 500m
redis:
  internal:
    resources:
      requests:
        memory: 256Mi
        cpu: 500m
trivy:
  enabled: true
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1000m
      memory: 1024Mi

# Enable chartmuseum so Harbor can store Helm charts
chartmuseum:
  enabled: true
  resources:
    requests:
      memory: 256Mi
      cpu: 500m

  imageChartStorage:
    # Specify whether to disable `redirect` for images and chart storage, for
    # backends which not supported it (such as using minio for `s3` storage type), please disable
    # it. To disable redirects, simply set `disableredirect` to `true` instead.
    # Refer to
    # https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
    # for the detail.
    disableredirect: false
    # Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
    # The secret must contain keys named "ca.crt" which will be injected into the trust store
    # of registry's and chartmuseum's containers.
    # caBundleSecretName:

    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    # and chartmuseum
    type: s3
    s3:
      region: cn-hangzhou-1
      bucket: harbor
      accesskey: VGZQY32LMFQOQPVNTDSJ
      secretkey: YZMMYqoy1ypHaqGOUfwLvdAj9A731iDYDjYqwkU5
      regionendpoint: http://172.16.7.1
      #encrypt: false
      #keyid: mykeyid
      secure: false
      #skipverify: false
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
      #multipartcopychunksize: "33554432"
      #multipartcopymaxconcurrency: 100
      #multipartcopythresholdsize: "33554432"

5. Installing Harbor

Add the Helm repository

🐳  → helm repo add harbor https://helm.goharbor.io

Deploy Harbor

🐳  → helm install harbor harbor/harbor -f values.yaml -n harbor

Check whether the deployment has completed:

🐳  → kubectl -n harbor get pod
NAME                                          READY   STATUS    RESTARTS   AGE
harbor-harbor-chartmuseum-55fb975fbd-74vnh    1/1     Running   0          3m
harbor-harbor-clair-695c7f9c69-7gpkh          2/2     Running   0          3m
harbor-harbor-core-687cfb49b6-zmwxr           1/1     Running   0          3m
harbor-harbor-database-0                      1/1     Running   0          3m
harbor-harbor-jobservice-88994b9b7-684vb      1/1     Running   0          3m
harbor-harbor-nginx-6758559548-x9pq6          1/1     Running   0          3m
harbor-harbor-notary-server-6d55b785f-6jsq9   1/1     Running   0          3m
harbor-harbor-notary-signer-9696cbdd8-8tfw9   1/1     Running   0          3m
harbor-harbor-portal-6f474574c4-8jzh2         1/1     Running   0          3m
harbor-harbor-redis-0                         1/1     Running   0          3m
harbor-harbor-registry-5b6cbfb4cf-42fm9       2/2     Running   0          3m
harbor-harbor-trivy-0                         1/1     Running   0          3m
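Since every component here requests a Ceph RBD volume, it is also worth confirming that all PVCs were bound (a quick sanity check; PVC names can differ slightly between chart versions):

🐳  → kubectl -n harbor get pvc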

Configure the hostname in hosts

Next, configure hosts. Clients that want to access the service by its domain name need DNS resolution; since there is no DNS server here to resolve the name, we edit the hosts file to bind the custom hostname to Harbor's ClusterIP. First, look up the ClusterIP of the nginx Service:

🐳  → kubectl -n harbor get svc harbor-harbor-nginx
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
harbor-harbor-nginx   ClusterIP   10.109.50.142   <none>        80/TCP,443/TCP   22h

Open the hosts file on the client machine and add the following entry:

10.109.50.142 harbor.example.net

If you want to access Harbor from outside the cluster, it is recommended to change the type of the nginx Service to NodePort or to proxy it through an Ingress, as sketched below. Of course, if you can reach the ClusterIP directly from outside the cluster, even better.
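A minimal sketch of the NodePort route, patching the existing Service in place (alternatively, set expose.type: nodePort in values.yaml and run helm upgrade):

🐳  → kubectl -n harbor patch svc harbor-harbor-nginx -p '{"spec":{"type":"NodePort"}}'
🐳  → kubectl -n harbor get svc harbor-harbor-nginx   # note the node ports mapped to 80/443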

Visit https://harbor.example.net to access the Harbor registry.

  • Username: admin
  • Password: Mydlq123456 (the password customized in the installation configuration)

After logging in you can see Harbor's management console.
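To verify from the command line instead, Harbor 2.x also exposes a health endpoint; a quick check might look like this (-k skips verification of our self-signed certificate):

🐳  → curl -sk https://harbor.example.net/api/v2.0/health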

6. Configuring the Image Registry on the Nodes

With containerd you cannot simply log in to the image registry the way docker login does; authentication has to be configured in its configuration file. Add the following to /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        ...
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.example.net".tls]
          insecure_skip_verify = true
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.example.net".auth]
          username = "admin"
          password = "Mydlq123456"

Since Harbor is served over HTTPS, in theory the TLS certificate should be configured in advance, but certificate verification can be skipped with the insecure_skip_verify option.
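Keep in mind that containerd only reads its configuration at startup, so restart it after editing /etc/containerd/config.toml (assuming it is managed by systemd):

🐳  → systemctl restart containerd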

Of course, if you would rather handle authentication through a Kubernetes Secret, the configuration can be trimmed down a bit:

[plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        ...
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.example.net".tls]
          insecure_skip_verify = true

A Kubernetes cluster uses a Secret of type docker-registry to authenticate against the image registry and pull private images, so create a Secret named regcred:

🐳  → kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>

The Secret can then be used in a Pod to pull from the private registry. Here is an example Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred

If you don't mind the extra work and want things to be a bit more secure, properly copy the CA, the certificate, and the key to /etc/ssl/certs/ on every node. /etc/containerd/config.toml then needs a little more content:

[plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        ...
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.example.net".tls]
          ca_file = "/etc/ssl/certs/ca.pem"
          cert_file = "/etc/ssl/certs/harbor.pem"
          key_file  = "/etc/ssl/certs/harbor-key.pem"
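Distributing the files could be scripted roughly like this, assuming root SSH access and your own list of node IPs (the node names below are placeholders):

🐳  → for node in <node1-ip> <node2-ip>; do
          scp ca.pem harbor.pem harbor-key.pem root@${node}:/etc/ssl/certs/
          ssh root@${node} systemctl restart containerd   # pick up the new config and certificates
      done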

As for how to configure Docker, you can look that up yourself; it is skipped here, since Docker is out of favor these days.
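For completeness, the usual Docker approach is to drop the CA into Docker's per-registry certificate directory and then log in; a minimal sketch (standard Docker conventions, not covered in the original article):

🐳  → mkdir -p /etc/docker/certs.d/harbor.example.net
🐳  → cp ca.pem /etc/docker/certs.d/harbor.example.net/ca.crt
🐳  → docker login harbor.example.net -u admin -p Mydlq123456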

7. Testing

To test pushing an image, first pull a small hello-world test image and then push it to the harbor.example.net registry:

### Pull the hello-world image
🐳  → ctr i pull bxsfpjcb.mirror.aliyuncs.com/library/hello-world:latest
bxsfpjcb.mirror.aliyuncs.com/library/hello-world:latest:                          resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:1a523af650137b8accdaed439c17d684df61ee4d74feac151b5b337bd29e7eec:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042: done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:0e03bdcc26d7a9a57ef3b6f1bf1a210cff6239bff7c8cac72435984032851689:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b:   done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 15.8s                                                                    total:  2.6 Ki (166.0 B/s)
unpacking linux/amd64 sha256:1a523af650137b8accdaed439c17d684df61ee4d74feac151b5b337bd29e7eec...
done

### Retag the downloaded image with the tag command
🐳  → ctr i tag bxsfpjcb.mirror.aliyuncs.com/library/hello-world:latest harbor.example.net/library/hello-world:latest
harbor.example.net/library/hello-world:latest

### Push the image to the registry
🐳  → ctr i push --user admin:Mydlq123456 --platform linux/amd64 harbor.example.net/library/hello-world:latest
manifest-sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:0e03bdcc26d7a9a57ef3b6f1bf1a210cff6239bff7c8cac72435984032851689:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 2.2 s                                                                    total:  4.5 Ki (2.0 KiB/s)

The image also shows up in the registry UI.

Delete the previously downloaded images, then test pulling the image back from harbor.example.net:

### Delete the previous images
🐳  → ctr i rm harbor.example.net/library/hello-world:latest
🐳  → ctr i rm bxsfpjcb.mirror.aliyuncs.com/library/hello-world:latest

### Test pulling the image from harbor.example.net
🐳  → ctr i pull harbor.example.net/library/hello-world:latest
harbor.example.net/library/hello-world:latest:                                   resolved       |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042: done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:0e03bdcc26d7a9a57ef3b6f1bf1a210cff6239bff7c8cac72435984032851689:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b:   done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.6 s                                                                    total:  525.0  (874.0 B/s)
unpacking linux/amd64 sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042...
done
