Jenkins + K8s + Helm + Harbor + GitLab + MySQL + NFS: A Hands-On Microservice Release Platform

Building a Jenkins Microservice Release Platform on Kubernetes

Overview of what we will implement:

  1. Release process design walkthrough
  2. Prepare the base environment
    1. K8s environment (deploy Ingress Controller, CoreDNS, Calico/Flannel)
    2. Deploy the GitLab code repository
    3. Configure local Git, push the test code, and create the project in GitLab
    4. Deploy the Pinpoint full-chain tracing system (modify the Dockerfiles in advance, build and push the images)
    5. Deploy the Harbor image registry (with the Helm chart repository enabled)
    6. Deploy the Helm package manager on the master node (configure the local Helm repo, upload charts)
    7. Deploy K8s storage (NFS, Ceph) with dynamic PV provisioning from the master node
    8. Deploy the MySQL cluster (import the microservice databases)
    9. Deploy EFK log collection (appendix)
    10. Deploy the Prometheus monitoring system (appendix)
  3. Deploy Jenkins in Kubernetes
  4. Jenkins Pipeline and parameterized builds
  5. Dynamically create Jenkins agents in K8s
  6. Build a custom Jenkins slave image
  7. Build a Jenkins CI system on Kubernetes
  8. Integrate Helm into the Pipeline to release the microservice project

Release Process Design

Machine Environment

This environment implements automated build and release of microservices; the functional details are realized with the software stack below. There are many ways to implement automated release and delivery; if anything is missing, please leave a comment.

IP Address      Hostname      Services
192.168.25.223  k8s-master01  Kubernetes master node + Jenkins
192.168.25.225  k8s-node01    Kubernetes node
192.168.25.226  k8s-node02    Kubernetes node
192.168.25.227  gitlab-nfs    GitLab, NFS, Git
192.168.25.228  harbor        Harbor, MySQL, Docker, Pinpoint

Preparing the Base Environment

K8s Environment (Deploy Ingress Controller, CoreDNS, Calico/Flannel)

Deployment commands
Single-master version:

ansible-playbook -i hosts single-master-deploy.yml -uroot -k

Multi-master version:

ansible-playbook -i hosts multi-master-deploy.yml -uroot -k

Deployment control

If a particular stage fails, you can re-run just that part of the deployment.

For example, to run only the add-ons deployment stage:

ansible-playbook -i hosts single-master-deploy.yml -uroot -k --tags addons

Example reference: https://github.com/ansible/ansible-examples
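
Before re-running a single stage, it can help to validate the playbook and list the tags it defines; both flags below are standard ansible-playbook options (using the playbook and inventory names from above):

# Validate the playbook without contacting any host
ansible-playbook -i hosts single-master-deploy.yml --syntax-check
# List the tags defined in the play (e.g. addons)
ansible-playbook -i hosts single-master-deploy.yml --list-tags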


Deploying the GitLab Code Repository

Deploy Docker

Uninstall old versions
$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
SET UP THE REPOSITORY
$ sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
INSTALL DOCKER ENGINE
$ sudo yum install docker-ce docker-ce-cli containerd.io -y
$ sudo systemctl start docker && sudo systemctl enable docker
$ sudo docker run hello-world

Deploy GitLab

docker run -d \
  --name gitlab \
  -p 8443:443 \
  -p 9999:80 \
  -p 9998:22 \
  -v $PWD/config:/etc/gitlab \
  -v $PWD/logs:/var/log/gitlab \
  -v $PWD/data:/var/opt/gitlab \
  -v /etc/localtime:/etc/localtime \
  passzhang/gitlab-ce-zh:latest

Access URL: http://IP:9999

On first access you will be asked to set an administrator password; then log in with the default admin username root and the password you just set.

Configuring Local Git, Pushing the Test Code, and Creating the Project in GitLab

https://github.com/passzhang/simple-microservice

Branch layout:

  • dev1 — code as initially delivered

  • dev2 — adds Dockerfiles for building images

  • dev3 — adds K8s resource manifests

  • dev4 — adds microservice tracing instrumentation

  • master — final release

Clone the master branch and push it to the private repository:

git clone https://github.com/PassZhang/simple-microservice.git

# cd into the simple-microservice directory
# Edit .git/config and point the push URL at the local GitLab
vim /root/simple-microservice/.git/config
...
[remote "origin"]
        url = http://192.168.25.227:9999/root/simple-microservice.git
        fetch = +refs/heads/*:refs/remotes/origin/*
...

# After cloning, also update the database connection settings (xxx-service/src/main/resources/application-fat.yml); for this test the database address is changed to 192.168.25.228:3306.
# Push the code only after the database address has been fixed.
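
# Alternatively, instead of editing .git/config by hand, the same change can be
# made with git remote set-url (same GitLab URL as above):
git remote set-url origin http://192.168.25.227:9999/root/simple-microservice.git
git remote -v    # verify that origin now points at the local GitLab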


cd simple-microservice
git config --global user.email "passzhang@example.com"
git config --global user.name "passzhang"
git add .
git commit -m 'all'
git push origin master

Deploying the Pinpoint Full-Chain Tracing System (modify the Dockerfiles in advance, build and push the images)


Deploying the Harbor Image Registry (with the Helm Chart Repository Enabled)

Install Docker and docker-compose

# wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum install docker-ce -y
# systemctl start docker && systemctl enable docker
curl -L https://github.com/docker/compose/releases/download/1.25.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Extract the offline installer and deploy

# tar zxvf harbor-offline-installer-v1.9.1.tgz
# cd harbor
-----------
# vi harbor.yml
hostname: 192.168.25.228
http:
  port: 8088
-----------
# ./prepare
# ./install.sh --with-chartmuseum --with-clair
# docker-compose ps

The --with-chartmuseum flag enables the chart storage (ChartMuseum) feature.

Mark the Registry as Trusted in Docker

Since Harbor is not configured with HTTPS, Docker must also be configured to trust it as an insecure registry.

# cat /etc/docker/daemon.json 
{"registry-mirrors": ["http://f1361db2.m.daocloud.io"],
  "insecure-registries": ["192.168.25.228:8088"]
}
# systemctl restart docker
# After configuring the registry here, also make sure the K8s master and all Docker nodes can reach it; daemon.json must be updated on each of them.
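
# To confirm the setting took effect, check docker info and try logging in
# (admin/Harbor12345 are the default Harbor credentials used later in this article):
# docker info | grep -A 1 'Insecure Registries'
# docker login 192.168.25.228:8088 -u admin -p Harbor12345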

Deploying the Helm Package Manager on the Master Node (Configure the Local Helm Repo, Upload Charts)

Install the Helm CLI

# wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
# tar zxvf helm-v3.0.0-linux-amd64.tar.gz 
# mv linux-amd64/helm /usr/bin/

Configure China-mirror chart repositories

# helm repo add stable http://mirror.azure.cn/kubernetes/charts
# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts 
# helm repo list

Install the push plugin

# helm plugin install https://github.com/chartmuseum/helm-push

If the network download fails, extract the bundled package instead:

# tar zxvf helm-push_0.7.1_linux_amd64.tar.gz
# mkdir -p /root/.local/share/helm/plugins/helm-push
# chmod +x bin/*
# mv bin plugin.yaml /root/.local/share/helm/plugins/helm-push

Add the repo

# helm repo add  --username admin --password Harbor12345 myrepo http://192.168.25.228:8088/chartrepo/ms

Push and install a chart

# helm push ms-0.1.0.tgz --username=admin --password=Harbor12345 http://192.168.25.228:8088/chartrepo/ms
# Helm 3 requires an explicit release name ("ms" below)
# helm install ms --username=admin --password=Harbor12345 --version 0.1.0 --repo http://192.168.25.228:8088/chartrepo/ms ms
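
To confirm the chart actually landed in Harbor, refresh the local index and search the repo (Helm 3 syntax):

# helm repo update
# helm search repo myrepo/ms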

Deploying K8s Storage (NFS, Ceph) with Dynamic PV Provisioning from the Master Node

First prepare an NFS server to provide storage for K8s.

# yum install nfs-utils -y
# vi /etc/exports
/ifs/kubernetes *(rw,no_root_squash)
# mkdir -p /ifs/kubernetes
# systemctl start nfs
# systemctl enable nfs

Also install the nfs-utils package on every Node; it is required when mounting NFS volumes. Install it as shown below:
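
# yum install nfs-utils -y   # run this on every Node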

Because K8s has no built-in dynamic provisioner for NFS, the nfs-client-provisioner plugin must be installed first.

The manifests are as follows:

[root@k8s-master1 nfs-storage-class]# tree  
.
├── class.yaml
├── deployment.yaml
└── rbac.yaml

0 directories, 3 files

rbac.yaml

[root@k8s-master1 nfs-storage-class]# cat rbac.yaml 
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

class.yaml

[root@k8s-master1 nfs-storage-class]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; it must match the PROVISIONER_NAME env in deployment.yaml
parameters:
  archiveOnDelete: "true"

deployment.yaml

[root@k8s-master1 nfs-storage-class]# cat deployment.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1 
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.25.227 
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.25.227 
            path: /ifs/kubernetes
            
# When deploying, don't forget to change the server address to your new NFS server.
# cd nfs-storage-class
# vi deployment.yaml # set the NFS address and export path to yours
# kubectl apply -f .
# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-df88f57df-bv8h7   1/1     Running   0          49m
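
To verify dynamic provisioning end to end, create a throwaway PVC against the managed-nfs-storage class; it should become Bound within a few seconds (a minimal sketch; the name test-claim is arbitrary):

# kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
EOF
# kubectl get pvc test-claim    # STATUS should show Bound
# kubectl delete pvc test-claim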

Deploying the MySQL Cluster (Import the Microservice Databases)

# yum install mariadb-server -y
# systemctl start mariadb.service
# mysqladmin -uroot password '123456'

Or create it with Docker:

docker run -d --name db -p 3306:3306 -v /opt/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7 --character-set-server=utf8

Finally, import the microservice databases.

[root@cephnode03 db]# pwd 
/root/simple-microservice/db
[root@cephnode03 db]# ls 
order.sql  product.sql  stock.sql
[root@cephnode03 db]# mysql -uroot -p123456 <order.sql 
[root@cephnode03 db]# mysql -uroot -p123456 <product.sql 
[root@cephnode03 db]# mysql -uroot -p123456 <stock.sql 

# After importing, update the database grants:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.25.%' IDENTIFIED BY '123456';
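
From any K8s node you can then verify that the services will be able to reach the database over the network (assuming a mysql client is installed on the node):

# mysql -h 192.168.25.228 -uroot -p123456 -e 'SHOW DATABASES;'
# The three imported databases should appear in the output.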

Deploying EFK Log Collection (appendix)


Deploying the Prometheus Monitoring System (appendix)


Deploying Jenkins in Kubernetes

Reference: https://github.com/jenkinsci/kubernetes-plugin/tree/fc40c869edfd9e3904a9a56b0f80c5a25e988fa1/src/main/kubernetes

We now deploy Jenkins directly in Kubernetes. Persistent storage must be prepared in advance; the NFS provisioner deployed above is used here, though other backends such as Ceph would work too. Let's begin.

Jenkins YAML manifests

[root@k8s-master1 jenkins]# tree 
.
├── deployment.yml
├── ingress.yml
├── rbac.yml
├── service-account.yml
└── service.yml

0 directories, 5 files

rbac.yml

[root@k8s-master1 jenkins]# cat rbac.yml 
---
# Create a ServiceAccount named jenkins
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
# Create a Role named jenkins granting management of Pod resources in the core API group
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

---
# Bind the jenkins Role to the jenkins ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins

service-account.yml

[root@k8s-master1 jenkins]# cat service-account.yml 
# In GKE need to get RBAC permissions first with
# kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin [--user=<user-name>|--group=<group-name>]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins

ingress.yml

[root@k8s-master1 jenkins]# cat ingress.yml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 100m
spec:
  rules:
  - host: jenkins.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins
          servicePort: 80

service.yml

[root@k8s-master1 jenkins]# cat service.yml 
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  selector:
    name: jenkins
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
      nodePort: 30006
    - name: agent
      port: 50000
      protocol: TCP

deployment.yml

[root@k8s-master1 jenkins]# cat deployment.yml 
apiVersion: apps/v1
kind: Deployment 
metadata:
  name: jenkins
  labels:
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins 
  template:
    metadata:
      name: jenkins
      labels:
        name: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts 
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=Asia/Shanghai
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-home
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home
spec:
  storageClassName: "managed-nfs-storage"
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
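
With all five manifests in place, apply them from the directory shown in the tree above and wait for the Jenkins pod to become Ready (a quick sketch):

# kubectl apply -f .
# kubectl get pods -l name=jenkins -w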

Login URL: the domain configured in the Ingress, http://jenkins.test.com
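
On first login Jenkins asks for the initial admin password, which is stored on the persistent volume and can be read from the pod (the pod name will differ in your cluster):

# kubectl get pods -l name=jenkins
# kubectl exec -it <jenkins-pod-name> -- cat /var/jenkins_home/secrets/initialAdminPassword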

Switch the plugin update site:

The default plugin source is hosted overseas and is hard to download from on most networks, so switch to a domestic mirror:

cd jenkins_home/updates
sed -i 's/http:\/\/updates.jenkins-ci.org\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json && \
sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json

Jenkins Pipeline and Parameterized Builds

Jenkins parameterized build flow (figure)

Jenkins Pipeline is a suite of plugins that supports implementing continuous integration and delivery pipelines in Jenkins.

  • Pipeline models delivery pipelines, from simple to complex, through a dedicated syntax;
    1. Declarative: follows a Groovy-like syntax. pipeline { }
    2. Scripted: supports most Groovy features; very expressive and flexible. node { }
  • A Jenkins Pipeline definition is written into a text file called a Jenkinsfile.

Reference: https://jenkins.io/doc/book/pipeline/syntax/

This environment needs a pipeline script, so let's first create a simple Jenkins pipeline to test with.

Install the Pipeline plugin: Jenkins home -> Manage Jenkins -> Manage Plugins -> Available -> filter for "pipeline" and install it.

Enter the following script in the pipeline job to test:

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying'
            }
        }
    }
}

The test result is as follows:

The log:

Console Output
Started by user admin
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/jenkins_home/workspace/pipeline-test
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Build)
[Pipeline] echo
Building
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Test)
[Pipeline] echo
Testing
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] echo
Deploying
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS

An output of SUCCESS means the test passed.

Dynamically Creating Jenkins Agents in K8s

We have now tested the pipeline script, but the Jenkins master has limited resources; running a large number of jobs could overwhelm it. So we adopt the Jenkins slave model: the master schedules the jobs, while dynamically created slaves do the actual building.

Traditional Jenkins master/slave architecture (figure)

Jenkins master/slave architecture in K8s (figure)

Add the Kubernetes plugin

Kubernetes plugin: lets Jenkins run dynamic agents in a Kubernetes cluster.

Plugin documentation: https://github.com/jenkinsci/kubernetes-plugin

Add a new Kubernetes cloud

We need to connect Jenkins to Kubernetes so that Jenkins can reach the cluster and run commands in it automatically. Add a Kubernetes cloud as follows:

Jenkins home -> Manage Jenkins -> Configure System -> Cloud -> Add a new cloud -> Kubernetes

Since our Jenkins runs as a pod inside Kubernetes, it can reach the API server through the cluster DNS name, so enter https://kubernetes.default as the Kubernetes address. Don't forget to click the connection test afterwards.

For the Jenkins address, the service DNS name http://jenkins.default works as well. With that, the Kubernetes cloud is added.
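
If the connection test fails, first confirm that both in-cluster DNS names resolve (a sketch using a disposable busybox pod):

# kubectl run dns-test -it --rm --image=busybox:1.28 -- nslookup kubernetes.default
# kubectl run dns-test -it --rm --image=busybox:1.28 -- nslookup jenkins.default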

Building a Custom Jenkins Slave Image and Pushing It to the Registry

Required files:

[root@k8s-master1 jenkins-slave]# tree 
.
├── Dockerfile              # builds the Jenkins slave image
├── helm                    # helm binary: used inside the slave pod to install Helm charts
├── jenkins-slave           # startup script required by the slave
├── kubectl                 # kubectl binary: used inside the slave pod to create pods and check their status
├── settings.xml            # Maven settings required by the slave
└── slave.jar               # the Jenkins slave (agent) JAR

0 directories, 6 files

The Dockerfile for the Jenkins slave:

FROM centos:7
LABEL maintainer passzhang
RUN yum install -y java-1.8.0-openjdk maven curl git   libtool-ltdl-devel && \
  yum clean all && \
  rm -rf /var/cache/yum/* && \
  mkdir -p /usr/share/jenkins
COPY slave.jar /usr/share/jenkins/slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave
COPY helm kubectl /usr/bin/
ENTRYPOINT ["jenkins-slave"]

Reference: https://github.com/jenkinsci/docker-jnlp-slave

Reference: https://plugins.jenkins.io/kubernetes

Push the Jenkins slave image to the Harbor registry

[root@k8s-master1 jenkins-slave]# docker build -t jenkins-slave:jdk-1.8 .
[root@k8s-master1 jenkins-slave]# docker tag jenkins-slave:jdk-1.8 192.168.25.228:8088/library/jenkins-slave:jdk-1.8
[root@k8s-master1 jenkins-slave]# docker login 192.168.25.228:8088    # log in to the private registry
[root@k8s-master1 jenkins-slave]# docker push 192.168.25.228:8088/library/jenkins-slave:jdk-1.8    # push the image to the private registry

Once configured, run a pipeline job to check that the Jenkins slave can be invoked and works correctly.

Test pipeline script:

pipeline {
    agent {
        kubernetes {
            label "jenkins-slave"
            yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: 192.168.25.228:8088/library/jenkins-slave:jdk-1.8
"""
        }
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying'
            }
        }
    }
}

The deployment screenshot is as follows:

Integrating Helm into the Pipeline to Release the Microservice Project

Deployment steps:

Pull code -> compile -> unit test -> build image -> Helm deploy to K8s -> test

Create a new Jenkins job: k8s-deploy-spring-cloud

Add the pipeline script:

#!/usr/bin/env groovy
// Required plugins: Git Parameter / Git / Pipeline / Config File Provider / Kubernetes / Extended Choice Parameter
// Common
def registry = "192.168.25.228:8088"
// Project
def project = "ms"
def git_url = "http://192.168.25.227:9999/root/simple-microservice.git"
def gateway_domain_name = "gateway.test.com"
def portal_domain_name = "portal.test.com"
// Credentials
def image_pull_secret = "registry-pull-secret"
def harbor_registry_auth = "9d5822e8-b1a1-473d-a372-a59b20f9b721"
def git_auth = "2abc54af-dd98-4fa7-8ac0-8b5711a54c4a"
// ConfigFileProvider ID
def k8s_auth = "f1a38eba-4864-43df-87f7-1e8a523baa35"

pipeline {
  agent {
    kubernetes {
        label "jenkins-slave"
        yaml """
kind: Pod
metadata:
  name: jenkins-slave
spec:
  containers:
  - name: jnlp
    image: "${registry}/library/jenkins-slave:jdk-1.8"
    imagePullPolicy: Always
    volumeMounts:
      - name: docker-cmd
        mountPath: /usr/bin/docker
      - name: docker-sock
        mountPath: /var/run/docker.sock
      - name: maven-cache
        mountPath: /root/.m2
  volumes:
    - name: docker-cmd
      hostPath:
        path: /usr/bin/docker
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
    - name: maven-cache
      hostPath:
        path: /tmp/m2
"""
        }
      
      }
    parameters {
        gitParameter branch: '', branchFilter: '.*', defaultValue: '', description: 'Branch to release', name: 'Branch', quickFilterEnabled: false, selectedValue: 'NONE', sortMode: 'NONE', tagFilter: '*', type: 'PT_BRANCH'
        extendedChoice defaultValue: 'none', description: 'Microservices to release', \
          multiSelectDelimiter: ',', name: 'Service', type: 'PT_CHECKBOX', \
          value: 'gateway-service:9999,portal-service:8080,product-service:8010,order-service:8020,stock-service:8030'
        choice (choices: ['ms', 'demo'], description: 'Deployment template', name: 'Template')
        choice (choices: ['1', '3', '5', '7', '9'], description: 'Replica count', name: 'ReplicaCount')
        choice (choices: ['ms'], description: 'Namespace', name: 'Namespace')
    }
    stages {
        stage('Pull Code') {
            steps {
                checkout([$class: 'GitSCM', 
                branches: [[name: "${params.Branch}"]], 
                doGenerateSubmoduleConfigurations: false, 
                extensions: [], submoduleCfg: [], 
                userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_url}"]]
                ])
            }
        }
        stage('Compile') {
            // Compile the selected services
            steps {
                sh """
                  mvn clean package -Dmaven.test.skip=true
                """
            }
        }
        stage('Build Image') {
          steps {
              withCredentials([usernamePassword(credentialsId: "${harbor_registry_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                sh """
                 docker login -u ${username} -p '${password}' ${registry}
                 for service in \$(echo ${Service} |sed 's/,/ /g'); do
                    service_name=\${service%:*}
                    image_name=${registry}/${project}/\${service_name}:${BUILD_NUMBER}
                    cd \${service_name}
                    if ls |grep biz &>/dev/null; then
                        cd \${service_name}-biz
                    fi
                    docker build -t \${image_name} .
                    docker push \${image_name}
                    cd ${WORKSPACE}
                  done
                """
                configFileProvider([configFile(fileId: "${k8s_auth}", targetLocation: "admin.kubeconfig")]){
                    sh """
                    # Create the image pull secret (ignore the error if it already exists)
                    kubectl create secret docker-registry ${image_pull_secret} --docker-username=${username} --docker-password=${password} --docker-server=${registry} -n ${Namespace} --kubeconfig admin.kubeconfig || true
                    # Add the private chart repo
                    helm repo add  --username ${username} --password ${password} myrepo http://${registry}/chartrepo/${project}
                    """
                }
              }
          }
        }
        stage('Deploy to K8s with Helm') {
          steps {
              sh """
              common_args="-n ${Namespace} --kubeconfig admin.kubeconfig"
              
              for service in  \$(echo ${Service} |sed 's/,/ /g'); do
                service_name=\${service%:*}
                service_port=\${service#*:}
                image=${registry}/${project}/\${service_name}
                tag=${BUILD_NUMBER}
                helm_args="\${service_name} --set image.repository=\${image} --set image.tag=\${tag} --set replicaCount=${ReplicaCount} --set imagePullSecrets[0].name=${image_pull_secret} --set service.targetPort=\${service_port} myrepo/${Template}"

                # Decide between fresh install and upgrade
                if helm history \${service_name} \${common_args} &>/dev/null;then
                  action=upgrade
                else
                  action=install
                fi

                # Enable ingress for externally exposed services
                if [ \${service_name} == "gateway-service" ]; then
                  helm \${action} \${helm_args} \
                  --set ingress.enabled=true \
                  --set ingress.host=${gateway_domain_name} \
                   \${common_args}
                elif [ \${service_name} == "portal-service" ]; then
                  helm \${action} \${helm_args} \
                  --set ingress.enabled=true \
                  --set ingress.host=${portal_domain_name} \
                   \${common_args}
                else
                  helm \${action} \${helm_args} \${common_args}
                fi
              done
              # Check pod status
              sleep 10
              kubectl get pods \${common_args}
              """
          }
        }
    }
}

The run output is as follows:

Click Build directly; the first one or two builds may fail before the parameters are populated. Build once more so that all parameters are printed, and the job will then run successfully.

Release gateway-service and check the pod log:

+ kubectl get pods -n ms --kubeconfig admin.kubeconfig
NAME                                  READY   STATUS    RESTARTS   AGE
eureka-0                              1/1     Running   0          3h11m
eureka-1                              1/1     Running   0          3h10m
eureka-2                              1/1     Running   0          3h9m
ms-gateway-service-66d695c486-9x9mc   0/1     Running   0          10s
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS

# On success, the pod information is printed

Release the remaining services and check the result:

+ kubectl get pods -n ms --kubeconfig admin.kubeconfig
NAME                                  READY   STATUS    RESTARTS   AGE
eureka-0                              1/1     Running   0          3h14m
eureka-1                              1/1     Running   0          3h13m
eureka-2                              1/1     Running   0          3h12m
ms-gateway-service-66d695c486-9x9mc   1/1     Running   0          3m1s
ms-order-service-7465c47d79-lbxgd     0/1     Running   0          10s
ms-portal-service-7fd6c57955-jkgkk    0/1     Running   0          11s
ms-product-service-68dbf5b57-jwpv9    0/1     Running   0          10s
ms-stock-service-b8b9895d6-cb72b      0/1     Running   0          10s
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS

Check the Eureka result:

All service modules are now registered in Eureka.

Visit the front-end page:

Products come back from the query, which means the database connection works and the business logic runs correctly. Done!

Summary: Plugins and Environment

  • Jenkins plugins used
    • Git & gitParameter
    • Kubernetes
    • Pipeline
    • Kubernetes Continuous Deploy
    • Config File Provider
    • Extended Choice Parameter
  • CI/CD environment characteristics
    • Elastic slave scaling
    • Build environments isolated through images
    • Pipeline-based releases, easy to maintain
  • Parameterized Jenkins builds help you handle CI/CD in more complex environments