Preface: this document covers using MinIO as the image storage backend for Harbor in an offline environment. If your reaction is "isn't OSS good enough?" or you run a single-node Harbor, you can skip this; it won't be of much use to you. If you are connecting a single-node Harbor to a MinIO cluster, see my other post instead.
Environment: Linux
Versions:
helm v3.2.3
kubernetes 1.14.3
nginx-ingress 1.39.1
harbor 2.0
nginx 1.15.3
MinIO RELEASE.2020-05-08T02-40-49Z
### I won't cover how to set up the Kubernetes cluster itself. For simplicity we use NFS as the shared storage for Kubernetes, so let's first walk through using NFS for persistent storage in k8s.
### Note which host each command runs on: the nfs-server commands run on the .94 server; most of the rest are executed on master1.
## 1. nfs-client-provisioner
### 1. Install the NFS service on the nfs-server
```bash
yum -y install nfs-utils rpcbind
mkdir /nfs/data
chmod 777 /nfs/data
echo '/nfs/data *(rw,no_root_squash,sync)' > /etc/exports
exportfs -r
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
rpcinfo -p localhost
showmount -e 10.0.0.94
```
### 2. Install the NFS client on the other servers
```bash
yum install -y nfs-utils
```
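Optionally, before moving on, you can sanity-check that a client node can actually mount the export; /mnt is just a temporary mount point used for this check.

```bash
mount -t nfs 10.0.0.94:/nfs/data /mnt
touch /mnt/nfs-write-test && ls -l /mnt   # confirms read/write access to the export
umount /mnt
```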
### 3. Install nfs-client-provisioner on k8s-master1 for dynamic persistent storage. nfs-client-provisioner is a simple external NFS provisioner for Kubernetes; it does not provide NFS itself.
```bash
cd /usr/local/src && mkdir nfs-client-provisioner && cd nfs-client-provisioner
```
### Note: in deployment.yaml, the IP is the nfs-server's address and NFS_PATH is the path exported in /etc/exports on the nfs-server.
```bash
cat > deployment.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay-mirror.qiniu.com/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.94
            - name: NFS_PATH
              value: /nfs/data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.94
            path: /nfs/data
EOF
```
```bash
cat > rbac.yaml << EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
EOF
```
```bash
cat > StorageClass.yaml << EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
EOF
```
```bash
kubectl apply -f deployment.yaml
kubectl apply -f rbac.yaml
kubectl apply -f StorageClass.yaml
```
### Wait a moment, then check that nfs-client-provisioner is healthy. Output like the following means it is; if not, go back over the steps above and look for problems.
```bash
kubectl get pods -n kube-system | grep nfs
nfs-client-provisioner-7778496f89-kthnj   1/1     Running   0          169m
```
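To confirm dynamic provisioning works end to end, you can create a throwaway PVC against the managed-nfs-storage class; the test-claim name below is only for illustration.

```bash
cat > test-pvc.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl apply -f test-pvc.yaml
kubectl get pvc test-claim     # STATUS should become Bound within a few seconds
kubectl delete -f test-pvc.yaml
```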
## 2. Install Helm 3
```bash
cd /usr/local/src &&\
wget https://get.helm.sh/helm-v3.2.3-linux-amd64.tar.gz &&\
tar xf helm-v3.2.3-linux-amd64.tar.gz &&\
cp linux-amd64/helm /usr/bin/ &&\
helm version
```
## 3. Install the nginx-ingress controller
```bash
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm pull stable/nginx-ingress &&\
docker pull fungitive/defaultbackend-amd64 &&\
docker tag fungitive/defaultbackend-amd64 k8s.gcr.io/defaultbackend-amd64:1.5 &&\
helm template guoys nginx-ingress-*.tgz | kubectl apply -f -
```
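A quick way to confirm the controller came up (the pod and service names carry the guoys release prefix used above):

```bash
kubectl get pods | grep nginx-ingress   # controller and default-backend should be Running
kubectl get svc  | grep nginx-ingress
```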
## 4. Install MinIO
### 1. Install the MinIO server on the four prepared servers. The official recommendation is at least four servers, each with dedicated disk space for the MinIO data.
```bash
cd /usr/local/src &&\
wget https://dl.min.io/server/minio/release/linux-amd64/minio &&\
chmod +x minio && cp minio /usr/bin
```
```bash
# the quoted 'EOF' keeps the shell from expanding $ENDPOINTS here;
# systemd resolves it at runtime from the EnvironmentFile below
cat > /etc/systemd/system/minio.service <<'EOF'
[Unit]
Description=Minio
Documentation=https://docs.minio.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/usr/bin/minio

[Service]
EnvironmentFile=-/etc/minio/minio.conf
ExecStart=/usr/bin/minio server $ENDPOINTS

# Let systemd restart this service always
Restart=always

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65536

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target
EOF
```
### The IP addresses here must match your own machines (or use domain names); the path suffix is the MinIO storage path.
```bash
mkdir -p /etc/minio
cat > /etc/minio/minio.conf <<EOF
MINIO_ACCESS_KEY=guoxy
MINIO_SECRET_KEY=guoxy321export
ENDPOINTS="http://10.0.0.91/minio http://10.0.0.92/minio http://10.0.0.93/minio http://10.0.0.94/minio"
EOF
systemctl daemon-reload && systemctl start minio && systemctl enable minio
```
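After starting the service on all four nodes, it is worth checking that the cluster formed correctly. The curl probe below uses MinIO's liveness endpoint and assumes the default port 9000; swap in any of the four node addresses.

```bash
systemctl status minio --no-pager
journalctl -u minio -n 20 --no-pager
# expect HTTP 200 once the node is serving requests
curl -s -o /dev/null -w '%{http_code}\n' http://10.0.0.91:9000/minio/health/live
```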
### 2. Install the mc client on k8s-master1 and create the harbor bucket
```bash
# mc config host add takes a single endpoint; any node of the distributed cluster works
cd /usr/local/src && \
wget https://dl.min.io/client/mc/release/linux-amd64/mc && \
chmod +x mc && cp mc /usr/bin/ && \
mc config host add minio http://10.0.0.91:9000 guoxy guoxy321export && \
mc mb minio/harbor
```
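If you want to double-check the alias and the bucket before installing Harbor:

```bash
mc ls minio           # the harbor bucket should be listed
mc admin info minio   # shows the four servers and their drive status
```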
## 5. Install Harbor in k8s
### 1. First create the secret holding the TLS certificate Harbor will use. If you don't have a certificate, you can request one from Let's Encrypt.
```bash
kubectl create secret tls guofire.xyz --key privkey.pem --cert fullchain.pem
```
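You can verify the secret was created with the right type (it should print kubernetes.io/tls):

```bash
kubectl get secret guofire.xyz -o jsonpath='{.type}'; echo
```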
### 2. Clone harbor-helm
```bash
cd /usr/local/src && \
git clone -b 1.4.0 https://github.com/goharbor/harbor-helm
```
### 3. Edit harbor-helm/values.yaml. The file is long, so only the parts that need changing are shown here.
```bash
vim harbor-helm/values.yaml
```
### secretName is the name of the secret created above; core is the domain used to access Harbor
```yaml
expose:
  tls:
    secretName: "guofire.xyz"
  ingress:
    hosts:
      core: harbor.guofire.xyz
      notary: notary.guofire.xyz

externalURL: https://harbor.guofire.xyz
```
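Since this is an offline environment, harbor.guofire.xyz and notary.guofire.xyz will not resolve through public DNS; clients need a hosts entry (or an internal DNS record) pointing at the layer-4 nginx proxy set up in section 6. The 10.0.0.94 address below is only an assumption about where that proxy runs.

```bash
cat >> /etc/hosts <<EOF
10.0.0.94 harbor.guofire.xyz
10.0.0.94 notary.guofire.xyz
EOF
```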
### The following configures NFS persistent storage
```yaml
persistentVolumeClaim:
  registry:
    storageClass: "managed-nfs-storage"
    subPath: "registry"
  chartmuseum:
    storageClass: "managed-nfs-storage"
    subPath: "chartmuseum"
  jobservice:
    storageClass: "managed-nfs-storage"
    subPath: "jobservice"
  database:
    storageClass: "managed-nfs-storage"
    subPath: "database"
  redis:
    storageClass: "managed-nfs-storage"
    subPath: "redis"
  trivy:
    storageClass: "managed-nfs-storage"
    subPath: "trivy"
```
### The part below is the most important. regionendpoint can be the address and port of an nginx proxy; here I point it at just one of the MinIO servers.
```yaml
imageChartStorage:
  disableredirect: true
  type: s3
  filesystem:
    rootdirectory: /storage
    #maxthreads: 100
  s3:
    region: us-west-1
    bucket: harbor
    # must match MINIO_ACCESS_KEY / MINIO_SECRET_KEY in /etc/minio/minio.conf
    accesskey: guoxy
    secretkey: guoxy321export
    regionendpoint: http://10.0.0.92:9000
    encrypt: false
    secure: false
    v4auth: true
    chunksize: "5242880"
    rootdirectory: /
  redirect:
    disabled: false
  maintenance:
    uploadpurging:
      enabled: false
  delete:
    enabled: true
```
### 4. Install Harbor in k8s with helm
```bash
helm install harbor harbor-helm/
```
### 5. Finally, wait 3-5 minutes and check whether the Harbor components are up
```bash
kubectl get deployments
```
### Output similar to the following basically means Harbor has started correctly
```
NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
harbor-harbor-chartmuseum          1/1     1            1           13h
harbor-harbor-clair                1/1     1            1           13h
harbor-harbor-core                 1/1     1            1           13h
harbor-harbor-jobservice           1/1     1            1           13h
harbor-harbor-notary-server        1/1     1            1           13h
harbor-harbor-notary-signer        1/1     1            1           13h
harbor-harbor-portal               1/1     1            1           13h
harbor-harbor-registry             1/1     1            1           13h
zy-nginx-ingress-controller        1/1     1            1           32h
zy-nginx-ingress-default-backend   1/1     1            1           32h
```
## 6. Set up an nginx layer-4 proxy; otherwise Harbor cannot be reached through nginx-ingress
### 1. nginx-ingress defaults to the LoadBalancer service type, which doesn't work in an offline environment, so change it to NodePort
```bash
kubectl edit svc guoys-nginx-ingress-controller
```
### Change .spec.type to NodePort and save (or use the patch one-liner below)
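Equivalently, if you prefer a non-interactive command over kubectl edit:

```bash
kubectl patch svc guoys-nginx-ingress-controller -p '{"spec":{"type":"NodePort"}}'
```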
### 2. Check the NodePort ports of the nginx-ingress-controller and note the ports mapped to 80 and 443
```bash
kubectl get svc | grep 'ingress-controller'
guoys-nginx-ingress-controller   NodePort   10.200.248.214   <none>   80:32492/TCP,443:30071/TCP   32h
```
### 3. Build and install nginx as a layer-4 proxy
```bash
yum install -y gcc make
mkdir /apps
cd /usr/local/src/
wget http://nginx.org/download/nginx-1.15.3.tar.gz
tar xf nginx-1.15.3.tar.gz
cd nginx-1.15.3
./configure --with-stream --without-http --prefix=/apps/nginx --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install
```
### The ports in the upstream blocks below must match the NodePort ports from step 2
```bash
# note the quoted 'EOF': it keeps the shell from expanding the $... nginx variables in log_format
cat > /apps/nginx/conf/nginx.conf <<'EOF'
worker_processes 1;
events {
    worker_connections 1024;
}
stream {
    log_format tcp '$remote_addr [$time_local] '
                   '$protocol $status $bytes_sent $bytes_received '
                   '$session_time "$upstream_addr" '
                   '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';
    upstream https_default_backend {
        server 10.0.0.91:30071;
        server 10.0.0.92:30071;
        server 10.0.0.93:30071;
    }
    upstream http_backend {
        server 10.0.0.91:32492;
        server 10.0.0.92:32492;
        server 10.0.0.93:32492;
    }
    server {
        listen 443;
        proxy_pass https_default_backend;
        access_log logs/access.log tcp;
        error_log logs/error.log;
    }
    server {
        listen 80;
        proxy_pass http_backend;
    }
}
EOF
```
### Test the config and start nginx
```bash
/apps/nginx/sbin/nginx -t
/apps/nginx/sbin/nginx
echo '/apps/nginx/sbin/nginx' >> /etc/rc.local
```
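Once nginx is up, a quick end-to-end test is to log in, push an image, and confirm the layers land in the MinIO bucket. The alpine image and the library project below are only an example, and the client must resolve harbor.guofire.xyz to the nginx proxy host.

```bash
docker login harbor.guofire.xyz               # log in with the Harbor admin account
docker pull alpine:3.12
docker tag alpine:3.12 harbor.guofire.xyz/library/alpine:3.12
docker push harbor.guofire.xyz/library/alpine:3.12
# the pushed blobs should show up under the harbor bucket in MinIO
mc ls -r minio/harbor/docker/registry/v2/repositories/
```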
## If this document helped you, feel free to leave a tip