Microservices have become the mainstream architecture for server-side development, and Go keeps winning developers over with its gentle learning curve, built-in concurrency, fast compilation, and small memory footprint. This hands-on microservices series approaches the subject from a practical angle: using a "blog system" as the running example, we will go from the basics to the details and build a complete microservice system step by step.
This is the first article in the series. We will build a continuous-integration and automated build-and-release system for our microservices on top of go-zero + gitlab + jenkins + k8s. Let's start with a quick look at each of these components.
The hands-on work breaks down into five steps, each covered in detail below.
First we set up the experiment environment. I used two ubuntu16.04 servers, one running gitlab and the other jenkins. gitlab is installed directly with apt-get; after installation, start the service and check its status. When every component shows `run`, the service is up. The default port is 9090, which you can open directly in a browser.
```shell
gitlab-ctl start    # start the service
gitlab-ctl status   # check service status

run: alertmanager: (pid 1591) 15442s; run: log: (pid 2087) 439266s
run: gitaly: (pid 1615) 15442s; run: log: (pid 2076) 439266s
run: gitlab-exporter: (pid 1645) 15442s; run: log: (pid 2084) 439266s
run: gitlab-workhorse: (pid 1657) 15441s; run: log: (pid 2083) 439266s
run: grafana: (pid 1670) 15441s; run: log: (pid 2082) 439266s
run: logrotate: (pid 5873) 1040s; run: log: (pid 2081) 439266s
run: nginx: (pid 1694) 15440s; run: log: (pid 2080) 439266s
run: node-exporter: (pid 1701) 15439s; run: log: (pid 2088) 439266s
run: postgres-exporter: (pid 1708) 15439s; run: log: (pid 2079) 439266s
run: postgresql: (pid 1791) 15439s; run: log: (pid 2075) 439266s
run: prometheus: (pid 10763) 12s; run: log: (pid 2077) 439266s
run: puma: (pid 1816) 15438s; run: log: (pid 2078) 439266s
run: redis: (pid 1821) 15437s; run: log: (pid 2086) 439266s
run: redis-exporter: (pid 1826) 15437s; run: log: (pid 2089) 439266s
run: sidekiq: (pid 1835) 15436s; run: log: (pid 2104) 439266s
```
jenkins is also installed with apt-get. Note that java must be installed first; the process is straightforward, so it isn't shown here. jenkins listens on port 8080 by default, the default account is admin, and the initial password is stored at /var/lib/jenkins/secrets/initialAdminPassword. Installing the recommended plugin set during initialization is enough; more plugins can be added later as needed.
Building a k8s cluster is fairly involved. Tools like kubeadm can get one up quickly, but the result is still some distance from a true production-grade cluster, and our services are ultimately headed to production. So here I chose xxx cloud's elastic k8s offering, version 1.16.9. The elastic cluster bills on demand with no extra charges; once the experiment is done, releasing the resources with kubectl delete costs very little. xxx cloud's k8s console also provides friendly monitoring pages where all kinds of statistics can be viewed. After the cluster is created, you need to create cluster access credentials before you can access it.
If the current client has no cluster credentials configured yet, i.e. ~/.kube/config is empty, simply copy the credential contents and paste them into ~/.kube/config.
If the current client already has credentials for other clusters, merge them with the following commands:
```shell
KUBECONFIG=~/.kube/config:~/Downloads/k8s-cluster-config kubectl config view --merge --flatten > ~/.kube/config
export KUBECONFIG=~/.kube/config
```
With access configured, the current cluster context can be checked with:
```shell
kubectl config current-context
```
Check the cluster version; the output looks like this:
```shell
kubectl version

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.9", GitCommit:"a17149e1a189050796ced469dbd78d380f2ed5ef", GitTreeState:"clean", BuildDate:"2020-04-16T11:44:51Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.9-eks.2", GitCommit:"f999b99a13f40233fc5f875f0607448a759fc613", GitTreeState:"clean", BuildDate:"2020-10-09T12:54:13Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
```
At this point our experiment environment is ready. github could also be used for version management here.
The whole project uses a monorepo layout, shown below. The top-level project is named blog, and the app directory holds the microservices split by business domain. The user service, for instance, is further divided into an api service and an rpc service: the api service is the aggregation gateway exposing restful interfaces, while the rpc service serves internal communication, providing high-performance operations such as data caching.
```
├── blog
│   ├── app
│   │   ├── user
│   │   │   ├── api
│   │   │   └── rpc
│   │   ├── article
│   │   │   ├── api
│   │   │   └── rpc
```
With the directories in place, go into the api directory and create a user.api file with the following content. It sets the service port to 2233 and defines a /user/info endpoint.
```
type UserInfoRequest struct {
    Uid int64 `form:"uid"`
}

type UserInfoResponse struct {
    Uid   int64  `json:"uid"`
    Name  string `json:"name"`
    Level int    `json:"level"`
}

@server(
    port: 2233
)
service user-api {
    @doc(
        summary: get user info
    )
    @server(
        handler: UserInfo
    )
    get /user/info(UserInfoRequest) returns(UserInfoResponse)
}
```
With the api file defined, run the following command to generate the api service code. One-command generation is a real productivity boost:
```shell
goctl api go -api user.api -dir .
```
After generation we tweak the code slightly so the service is easy to test once deployed: the modified handler returns the machine's ip address.
```go
func (ul *UserInfoLogic) UserInfo(req types.UserInfoRequest) (*types.UserInfoResponse, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	var name string
	for _, addr := range addrs {
		if ipnet, ok := addr.(*net.IPNet); ok && !ipnet.IP.IsLoopback() && ipnet.IP.To4() != nil {
			name = ipnet.IP.String()
		}
	}
	return &types.UserInfoResponse{
		Uid:   req.Uid,
		Name:  name,
		Level: 666,
	}, nil
}
```
That completes the service-generation step. Since this article is about standing up the basic framework, we only add some test code for now; the project code will be fleshed out later in the series.
Common images such as mysql and memcache can be pulled straight from an image registry, but our service image has to be customized. Of the many ways to define a custom image, a Dockerfile is by far the most common. Writing a Dockerfile isn't hard, but it is easy to get wrong, so here too we lean on tooling to generate it. goctl deserves another shout-out: it can generate a Dockerfile with a single command. Run this in the api directory:
```shell
goctl docker -go user.go
```
The generated file needs a slight adjustment to fit our directory layout; the result is below. It uses a two-stage build: the first stage builds the executable so the build is independent of the host machine, and the second stage copies in the first stage's output, producing a minimal final image.
```dockerfile
FROM golang:alpine AS builder

LABEL stage=gobuilder

ENV CGO_ENABLED 0
ENV GOOS linux
ENV GOPROXY https://goproxy.cn,direct

WORKDIR /build/zero

RUN go mod init blog/app/user/api
RUN go mod download
COPY . .
COPY /etc /app/etc
RUN go build -ldflags="-s -w" -o /app/user user.go

FROM alpine

RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
ENV TZ Asia/Shanghai

WORKDIR /app
COPY --from=builder /app/user /app/user
COPY --from=builder /app/etc /app/etc

CMD ["./user", "-f", "etc/user-api.yaml"]
```
Then build the image with:
```shell
docker build -t user:v1 app/user/api/
```
Running docker images now shows that the user image has been created, at version v1:
```
REPOSITORY   TAG   IMAGE ID       CREATED      SIZE
user         v1    1c1f64579b40   4 days ago   17.2MB
```
Likewise, k8s deployment manifests are tedious and error-prone to write by hand, so we generate them with goctl as well. Run this in the api directory:
```shell
goctl kube deploy -name user-api -namespace blog -image user:v1 -o user.yaml -port 2233
```
The generated yaml file looks like this:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
  namespace: blog
  labels:
    app: user-api
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      containers:
      - name: user-api
        image: user:v1
        lifecycle:
          preStop:
            exec:
              command: ["sh","-c","sleep 5"]
        ports:
        - containerPort: 2233
        readinessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 2233
          initialDelaySeconds: 15
          periodSeconds: 10
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1024Mi
```
That wraps up image and k8s-manifest generation. The steps above were mainly for demonstration; in a real production environment, images are built automatically by the continuous-integration tooling.
jenkins is a widely used continuous-integration tool offering several build styles, of which pipeline is among the most common. Pipelines come in two flavors, declarative and scripted. Scripted syntax is flexible and extensible, but that also means more complexity, and it requires learning the Groovy language, raising the learning curve. Hence declarative syntax: a simpler, more structured alternative. We will use declarative syntax from here on.
A word on the Jenkinsfile: it is just a plain-text file, the representation of the deployment-pipeline concept within Jenkins, much as a Dockerfile is to Docker. All deployment-pipeline logic can be defined in the Jenkinsfile. Note that Jenkins does not support Jenkinsfiles out of the box; the Pipeline plugin is required. To install it, go to Manage Jenkins -> Manage Plugins, then search and install. After that you can build pipelines.
You can type the build script directly into the pipeline UI, but that can't be version-controlled, so it is only advisable for throwaway tests. The more common approach is to have jenkins pull the Jenkinsfile from a git repository and execute it.
First install the Git plugin. We clone over ssh, so the git private key must be added to jenkins; only then does jenkins have permission to pull code from the git repository.
To add the git private key to jenkins: Manage Jenkins -> Manage Credentials -> add a credential, choose the kind SSH Username with private key, then follow the prompts, as shown in the figure below.
Then create a new project in our gitlab that contains just a Jenkinsfile.
In the user-api project, set the pipeline definition to Pipeline script from SCM, and add the gitlab ssh address and the corresponding token, as shown in the figure below.
Now we can write the Jenkinsfile following the hands-on steps above.
Pull the code from our gitlab repository, recording the commit_id to distinguish different versions:
```groovy
stage('pull code from gitlab') {
    steps {
        echo 'pull code from gitlab'
        git credentialsId: 'xxxxxxxx', url: 'http://xxx.xxx.xxx.xxx:xxx/blog/blog.git'
        script {
            commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
        }
    }
}
```
Build the docker image, using the goctl-generated Dockerfile:
```groovy
stage('build image') {
    steps {
        echo 'build image'
        sh "docker build -t user:${commit_id} app/user/api/"
    }
}
```
Push the built image to the image registry:
```groovy
stage('push image to registry') {
    steps {
        echo "push image to registry"
        sh "docker login -u xxx -p xxxxxxx"
        sh "docker tag user:${commit_id} xxx/user:${commit_id}"
        sh "docker push xxx/user:${commit_id}"
    }
}
```
Deploy to k8s: substitute the commit id for the version placeholder in the deployment manifest (the image field in user.yaml needs to carry the <COMMIT_ID_TAG> placeholder for this to work), let the cluster pull the image from the remote registry, and deploy with kubectl apply:
```groovy
stage('deploy to k8s') {
    steps {
        echo "deploy to k8s"
        sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
        sh "kubectl apply -f app/user/api/user.yaml"
    }
}
```
The complete Jenkinsfile:
```groovy
pipeline {
    agent any

    stages {
        stage('pull code from gitlab') {
            steps {
                echo 'pull code from gitlab'
                git credentialsId: 'xxxxxx', url: 'http://xxx.xxx.xxx.xxx:9090/blog/blog.git'
                script {
                    commit_id = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
            }
        }
        stage('build image') {
            steps {
                echo 'build image'
                sh "docker build -t user:${commit_id} app/user/api/"
            }
        }
        stage('push image to registry') {
            steps {
                echo "push image to registry"
                sh "docker login -u xxx -p xxxxxxxx"
                sh "docker tag user:${commit_id} xxx/user:${commit_id}"
                sh "docker push xxx/user:${commit_id}"
            }
        }
        stage('deploy to k8s') {
            steps {
                echo "deploy to k8s"
                sh "sed -i 's/<COMMIT_ID_TAG>/${commit_id}/' app/user/api/user.yaml"
                sh "kubectl apply -f app/user/api/user.yaml"
            }
        }
    }
}
```
With everything configured, our basic framework is essentially complete. Now run the pipeline: click Build Now on the left, and a new build number appears under Build History. Click that number and then Console Output on the left to see the detailed build log; any error during the build is reported there as well.
The detailed build output is below; each pipeline stage has its own section:
```
Started by user admin
Obtained Jenkinsfile from git git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git
Running in Durability level: MAX_SURVIVABILITY
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/user-api
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Declarative: Checkout SCM)
[Pipeline] checkout
Selected Git installation does not exist. Using Default
The recommended git tool is: NONE
using credential gitlab_token
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git # timeout=10
Fetching upstream changes from git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
using GIT_SSH to set credentials
 > git fetch --tags --progress git@xxx.xxx.xxx.xxx:gitlab-instance-1ac0cea5/pipelinefiles.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision 77eac3a4ca1a5b6aea705159ce26523ddd179bdf (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
Commit message: "add"
 > git rev-list --no-walk 77eac3a4ca1a5b6aea705159ce26523ddd179bdf # timeout=10
[Pipeline] }
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (pull code from gitlab)
[Pipeline] echo
pull code from gitlab
[Pipeline] git
The recommended git tool is: NONE
using credential gitlab_user_pwd
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url http://xxx.xxx.xxx.xxx:9090/blog/blog.git # timeout=10
Fetching upstream changes from http://xxx.xxx.xxx.xxx:9090/blog/blog.git
 > git --version # timeout=10
 > git --version # 'git version 2.7.4'
using GIT_ASKPASS to set credentials
 > git fetch --tags --progress http://xxx.xxx.xxx.xxx:9090/blog/blog.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
Checking out Revision b757e9eef0f34206414bdaa4debdefec5974c3f5 (refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
 > git branch -a -v --no-abbrev # timeout=10
 > git branch -D master # timeout=10
 > git checkout -b master b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
Commit message: "Merge branch 'blog/dev' into 'master'"
 > git rev-list --no-walk b757e9eef0f34206414bdaa4debdefec5974c3f5 # timeout=10
[Pipeline] script
[Pipeline] {
[Pipeline] sh
+ git rev-parse --short HEAD
[Pipeline] }
[Pipeline] // script
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (build image)
[Pipeline] echo
build image
[Pipeline] sh
+ docker build -t user:b757e9e app/user/api/
Sending build context to Docker daemon  28.16kB
Step 1/18 : FROM golang:alpine AS builder
alpine: Pulling from library/golang
801bfaa63ef2: Pulling fs layer
ee0a1ba97153: Pulling fs layer
1db7f31c0ee6: Pulling fs layer
ecebeec079cf: Pulling fs layer
63b48972323a: Pulling fs layer
ecebeec079cf: Waiting
63b48972323a: Waiting
1db7f31c0ee6: Verifying Checksum
1db7f31c0ee6: Download complete
ee0a1ba97153: Verifying Checksum
ee0a1ba97153: Download complete
63b48972323a: Verifying Checksum
63b48972323a: Download complete
801bfaa63ef2: Verifying Checksum
801bfaa63ef2: Download complete
801bfaa63ef2: Pull complete
ee0a1ba97153: Pull complete
1db7f31c0ee6: Pull complete
ecebeec079cf: Verifying Checksum
ecebeec079cf: Download complete
ecebeec079cf: Pull complete
63b48972323a: Pull complete
Digest: sha256:49b4eac11640066bc72c74b70202478b7d431c7d8918e0973d6e4aeb8b3129d2
Status: Downloaded newer image for golang:alpine
 ---> 1463476d8605
Step 2/18 : LABEL stage=gobuilder
 ---> Running in c4f4dea39a32
Removing intermediate container c4f4dea39a32
 ---> c04bee317ea1
Step 3/18 : ENV CGO_ENABLED 0
 ---> Running in e8e848d64f71
Removing intermediate container e8e848d64f71
 ---> ff82ee26966d
Step 4/18 : ENV GOOS linux
 ---> Running in 58eb095128ac
Removing intermediate container 58eb095128ac
 ---> 825ab47146f5
Step 5/18 : ENV GOPROXY https://goproxy.cn,direct
 ---> Running in df2add4e39d5
Removing intermediate container df2add4e39d5
 ---> c31c1aebe5fa
Step 6/18 : WORKDIR /build/zero
 ---> Running in f2a1da3ca048
Removing intermediate container f2a1da3ca048
 ---> 5363d05f25f0
Step 7/18 : RUN go mod init blog/app/user/api
 ---> Running in 11d0adfa9d53
go: creating new go.mod: module blog/app/user/api
Removing intermediate container 11d0adfa9d53
 ---> 3314852f00fe
Step 8/18 : RUN go mod download
 ---> Running in aa9e9d9eb850
Removing intermediate container aa9e9d9eb850
 ---> a0f2a7ffe392
Step 9/18 : COPY . .
 ---> a807f60ed250
Step 10/18 : COPY /etc /app/etc
 ---> c4c5d9f15dc0
Step 11/18 : RUN go build -ldflags="-s -w" -o /app/user user.go
 ---> Running in a4321c3aa6e2
go: finding module for package github.com/tal-tech/go-zero/core/conf
go: finding module for package github.com/tal-tech/go-zero/rest/httpx
go: finding module for package github.com/tal-tech/go-zero/rest
go: finding module for package github.com/tal-tech/go-zero/core/logx
go: downloading github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/conf in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/rest/httpx in github.com/tal-tech/go-zero v1.1.1
go: found github.com/tal-tech/go-zero/core/logx in github.com/tal-tech/go-zero v1.1.1
go: downloading gopkg.in/yaml.v2 v2.4.0
go: downloading github.com/justinas/alice v1.2.0
go: downloading github.com/dgrijalva/jwt-go v3.2.0+incompatible
go: downloading go.uber.org/automaxprocs v1.3.0
go: downloading github.com/spaolacci/murmur3 v1.1.0
go: downloading github.com/google/uuid v1.1.1
go: downloading google.golang.org/grpc v1.29.1
go: downloading github.com/prometheus/client_golang v1.5.1
go: downloading github.com/beorn7/perks v1.0.1
go: downloading github.com/golang/protobuf v1.4.2
go: downloading github.com/prometheus/common v0.9.1
go: downloading github.com/cespare/xxhash/v2 v2.1.1
go: downloading github.com/prometheus/client_model v0.2.0
go: downloading github.com/prometheus/procfs v0.0.8
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.1
go: downloading google.golang.org/protobuf v1.25.0
Removing intermediate container a4321c3aa6e2
 ---> 99ac2cd5fa39
Step 12/18 : FROM alpine
latest: Pulling from library/alpine
801bfaa63ef2: Already exists
Digest: sha256:3c7497bf0c7af93428242d6176e8f7905f2201d8fc5861f45be7a346b5f23436
Status: Downloaded newer image for alpine:latest
 ---> 389fef711851
Step 13/18 : RUN apk update --no-cache && apk add --no-cache ca-certificates tzdata
 ---> Running in 51694dcb96b6
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
v3.12.3-38-g9ff116e4f0 [http://dl-cdn.alpinelinux.org/alpine/v3.12/main]
v3.12.3-39-ge9195171b7 [http://dl-cdn.alpinelinux.org/alpine/v3.12/community]
OK: 12746 distinct packages available
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/2) Installing ca-certificates (20191127-r4)
(2/2) Installing tzdata (2020f-r0)
Executing busybox-1.31.1-r19.trigger
Executing ca-certificates-20191127-r4.trigger
OK: 10 MiB in 16 packages
Removing intermediate container 51694dcb96b6
 ---> e5fb2e4d5eea
Step 14/18 : ENV TZ Asia/Shanghai
 ---> Running in 332fd0df28b5
Removing intermediate container 332fd0df28b5
 ---> 11c0e2e49e46
Step 15/18 : WORKDIR /app
 ---> Running in 26e22103c8b7
Removing intermediate container 26e22103c8b7
 ---> 11d11c5ea040
Step 16/18 : COPY --from=builder /app/user /app/user
 ---> f69f19ffc225
Step 17/18 : COPY --from=builder /app/etc /app/etc
 ---> b8e69b663683
Step 18/18 : CMD ["./user", "-f", "etc/user-api.yaml"]
 ---> Running in 9062b0ed752f
Removing intermediate container 9062b0ed752f
 ---> 4867b4994e43
Successfully built 4867b4994e43
Successfully tagged user:b757e9e
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (push image to registry)
[Pipeline] echo
push image to registry
[Pipeline] sh
+ docker login -u xxx -p xxxxxxxx
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /var/lib/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] sh
+ docker tag user:b757e9e xxx/user:b757e9e
[Pipeline] sh
+ docker push xxx/user:b757e9e
The push refers to repository [docker.io/xxx/user]
b19a970f64b9: Preparing
f695b957e209: Preparing
ee27c5ca36b5: Preparing
7da914ecb8b0: Preparing
777b2c648970: Preparing
777b2c648970: Layer already exists
ee27c5ca36b5: Pushed
b19a970f64b9: Pushed
7da914ecb8b0: Pushed
f695b957e209: Pushed
b757e9e: digest: sha256:6ce02f8a56fb19030bb7a1a6a78c1a7c68ad43929ffa2d4accef9c7437ebc197 size: 1362
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (deploy to k8s)
[Pipeline] echo
deploy to k8s
[Pipeline] sh
+ sed -i s/<COMMIT_ID_TAG>/b757e9e/ app/user/api/user.yaml
[Pipeline] sh
+ kubectl apply -f app/user/api/user.yaml
deployment.apps/user-api created
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
```
The final SUCCESS line tells us the pipeline completed. Now we can verify with kubectl; the -n flag specifies the namespace:
```shell
kubectl get pods -n blog

NAME                       READY   STATUS    RESTARTS   AGE
user-api-84ffd5b7b-c8c5w   1/1     Running   0          10m
user-api-84ffd5b7b-pmh92   1/1     Running   0          10m
```
Our k8s deployment file specifies the namespace blog, so we need to create that namespace before running the pipeline:
```shell
kubectl create namespace blog
```
The service is deployed. So how do we reach it from outside the cluster? Here we use a LoadBalancer Service, defined below: port 80 maps to the container's port 2233, and the selector matches the labels defined in the Deployment.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-api-service
  namespace: blog
spec:
  selector:
    app: user-api
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 2233
```
Create the service, then list services to check the result; be sure to pass the -n flag to specify the namespace:
```shell
kubectl apply -f user-service.yaml
```
```shell
kubectl get services -n blog

NAME               TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)        AGE
user-api-service   LoadBalancer   <none>       xxx.xxx.xxx.xx   80:32470/TCP   79m
```
The EXTERNAL-IP here is the address exposed to the public network, on port 80.
That completes all the deployment tasks. I encourage you to try the whole thing hands-on as well.
Finally, let's test that the deployed service behaves correctly, accessing it via the EXTERNAL-IP:
```shell
curl "http://xxx.xxx.xxx.xxx:80/user/info?uid=1"
{"uid":1,"name":"172.17.0.5","level":666}

curl http://xxx.xxx.xxx.xxx:80/user/info\?uid\=1
{"uid":1,"name":"172.17.0.8","level":666}
```
We hit the /user/info endpoint twice with curl and both calls returned normally, so the service is healthy. The name field shows two different ips, which indicates the LoadBalancer is using its default Round Robin load-balancing strategy.
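A quick way to quantify this from a batch of captured responses is to tally the distinct `name` values, since each value corresponds to a pod that answered. A small sketch (the two JSON bodies are the curl responses above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// countBackends tallies distinct `name` fields (pod IPs) across a series of
// /user/info responses. With round-robin balancing over N replicas we expect
// up to N distinct values.
func countBackends(responses []string) (int, error) {
	seen := map[string]bool{}
	for _, r := range responses {
		var v struct {
			Name string `json:"name"`
		}
		if err := json.Unmarshal([]byte(r), &v); err != nil {
			return 0, err
		}
		seen[v.Name] = true
	}
	return len(seen), nil
}

func main() {
	n, err := countBackends([]string{
		`{"uid":1,"name":"172.17.0.5","level":666}`,
		`{"uid":1,"name":"172.17.0.8","level":666}`,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(n) // prints 2: both replicas answered
}
```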
We have now walked the full DevOps loop from coding to version control to build and deployment, and the basic architecture is in place, though it is admittedly still rudimentary. In the rest of this series we will use this blog system as the base and gradually round out the architecture: refining the CI/CD flow, adding monitoring, completing the blog's features, high-availability best practices and the principles behind them, and more.
A craftsman must sharpen his tools before doing his work: good tooling greatly improves productivity and reduces the chance of error. We leaned heavily on goctl above, and honestly it's hard to put down. See you next time!
My abilities are limited, so there are bound to be mistakes in the above. Feedback and corrections are very welcome!
https://github.com/tal-tech/go-zero
You're welcome to use go-zero and star the repo to support us! 👏
More articles in the go-zero series can be found on the "微服務實踐" WeChat official account.