Building a Production k8s Environment for Java Microservices

This took a long time to put together, so here is a batch of hard-won practical notes 😄🎉

Prepare the machines for the k8s cluster

1 k8s deploy machine (bastion host): 1 GB RAM or more
3 k8s master nodes: 2 cores / 4 GB RAM or more
3 k8s worker nodes: 2 cores / 4 GB RAM or more

Assign a static IP to each of the seven machines above.

Machine IP
K8s-ha-master1 172.16.67.130
K8s-ha-master2 172.16.67.131
K8s-ha-master3 172.16.67.132
K8s-ha-node1 172.16.67.135
K8s-ha-node2 172.16.67.136
K8s-ha-node3 172.16.67.137
K8s-ha-deploy 172.16.67.140
Install the k8s cluster

Log in to the deploy machine and generate an SSH key: ssh-keygen -t rsa -b 4096 -C "your_email@example.com". Then copy the public key to all of the k8s machines:

ssh-copy-id 172.16.67.130
ssh-copy-id 172.16.67.131
ssh-copy-id 172.16.67.132
ssh-copy-id 172.16.67.135
ssh-copy-id 172.16.67.136
ssh-copy-id 172.16.67.137
ssh-copy-id 172.16.67.140

Download the k8s/docker installation toolkit

git clone https://github.com/gjmzj/kubeasz.git
mkdir -p /etc/ansible
mv kubeasz/* /etc/ansible

Following this document, https://github.com/gjmzj/kubeasz/blob/master/docs/setup/quickStart.md, download the binaries and offline docker images that the k8s cluster needs, and unpack them.

Configure the machines above in ansible:

cd /etc/ansible && cp example/hosts.m-masters.example hosts
# Cluster deploy node: usually the node that runs the ansible playbooks
# The variable NTP_ENABLED (=yes/no) controls whether chrony time sync is installed for the cluster
[deploy]
127.0.0.1 NTP_ENABLED=no

# etcd cluster: provide NODE_NAME as below; note that the etcd cluster must have an odd number of nodes (1, 3, 5, 7...)
[etcd]
172.16.67.130 NODE_NAME=etcd1
172.16.67.131 NODE_NAME=etcd2
172.16.67.132 NODE_NAME=etcd3

[new-etcd] # reserved group, for adding etcd nodes later
#192.168.1.x NODE_NAME=etcdx

[kube-master]
172.16.67.130
172.16.67.131
172.16.67.132

[new-master] # reserved group, for adding master nodes later
#192.168.1.5

[kube-node]
172.16.67.137 NEW_NODE=yes
172.16.67.136 NEW_NODE=yes
172.16.67.135

[new-node] # reserved group, for adding worker nodes later
#192.168.1.xx

# The parameter NEW_INSTALL: yes = install a new harbor, no = use an existing harbor server
# If you don't use a domain name, you can set HARBOR_DOMAIN=""
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no

# Load balancing (more than 2 nodes is now supported, but 2 is usually enough); installs haproxy + keepalived
[lb]
172.16.67.130 LB_ROLE=backup
172.16.67.131 LB_ROLE=master

# [optional] external load balancing, used in self-hosted environments to forward traffic to services exposed via NodePort, etc.
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_VIP=192.168.1.250
#192.168.1.7 LB_ROLE=master EX_VIP=192.168.1.250

[all:vars]
# ---------main cluster parameters---------------
# Cluster deploy mode: allinone, single-master, multi-master
DEPLOY_MODE=multi-master

# Cluster major version; currently supported: v1.8, v1.9, v1.10, v1.11, v1.12, v1.13
K8S_VER="v1.13"

# Cluster MASTER IP, i.e. the VIP of the LB nodes; to distinguish it from the default apiserver port, the VIP listens on service port 8443
# On public clouds, use the internal address and listening port of the cloud load balancer instead
MASTER_IP="172.16.67.165"
KUBE_APISERVER="https://{{ MASTER_IP }}:8443"

# Cluster network plugin; currently supported: calico, flannel, kube-router, cilium
CLUSTER_NETWORK="flannel"

# Service CIDR; make sure it doesn't conflict with existing internal network segments
SERVICE_CIDR="10.68.0.0/16"

# Pod network (Cluster CIDR); make sure it doesn't conflict with existing internal network segments
CLUSTER_CIDR="172.20.0.0/16"

# Service port range (NodePort Range)
NODE_PORT_RANGE="20000-40000"

# kubernetes service IP (pre-allocated, usually the first IP in SERVICE_CIDR)
CLUSTER_KUBERNETES_SVC_IP="10.68.0.1"

# Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
CLUSTER_DNS_SVC_IP="10.68.0.2"

# Cluster DNS domain
CLUSTER_DNS_DOMAIN="cluster.local."

# Username and password for cluster basic auth
BASIC_AUTH_USER="admin"
BASIC_AUTH_PASS="test1234"

# ---------additional parameters--------------------
# Default binary directory
bin_dir="/opt/kube/bin"

# Certificate directory
ca_dir="/etc/kubernetes/ssl"

# Deploy directory, i.e. the ansible working directory; changing it is not recommended
base_dir="/etc/ansible"

Run ansible to install the cluster

ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
ansible-playbook 07.cluster-addon.yml 
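kubeasz also ships a single entry playbook that runs the seven steps above in sequence. A sketch, assuming the 90.setup.yml playbook name used by the quickStart doc of that era; check your kubeasz checkout for the exact name:

```shell
# Run all setup steps (01-07) in one go
ansible-playbook 90.setup.yml
```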
Install Rancher and import the k8s cluster

Start Rancher using the rancher image:

docker run -d --name=rancher --restart=unless-stopped \
  -p 8880:80 -p 8843:443 \
  -v ~/rancher:/var/lib/rancher \
  rancher/rancher:stable

Log in at ip:8843 to check the result.

Import the k8s cluster

Generate the import configuration

Fetch the generated configuration and run it on the k8s deploy machine:

curl --insecure -sfL https://172.16.123.1:8843/v3/import/7gtwrh84nlpgkn48pj26lrzv4c8bt4mjl9f7r5w2sfprbt82tkdk6f.yaml | kubectl apply -f -

View the imported cluster in Rancher.

Push the gateway and project images to Alibaba Cloud

Package the Java image and push it to Alibaba Cloud; see: github.com/neatlife/jf…

docker build -t jframework .
docker tag jframework:latest registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest
docker push registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest

Prepare the gateway's k8s manifest

{
  "kind": "DaemonSet",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "gateway",
    "namespace": "default",
    "labels": {
      "k8s-app": "gateway"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "2"
    }
  },
  "spec": {
    "selector": {
      "matchLabels": {
        "k8s-app": "gateway"
      }
    },
    "template": {
      "metadata": {
        "name": "gateway",
        "labels": {
          "k8s-app": "gateway"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "gateway",
            "ports": [
              {
                "containerPort": 8080,
                "hostPort": 8080,
                "name": "8080tcp80800",
                "protocol": "TCP"
              }
            ],
            "image": "registry.cn-hangzhou.aliyuncs.com/suxiaolin/gateway:latest",
            "readinessProbe": {
              "httpGet": {
                "scheme": "HTTP",
                "path": "/actuator/info",
                "port": 8080
              },
              "initialDelaySeconds": 10,
              "periodSeconds": 5
            },
            "resources": {},
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Always",
            "securityContext": {
              "privileged": false,
              "procMount": "Default"
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "revisionHistoryLimit": 10
  }
}
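Note that workload objects were removed from extensions/v1beta1 in Kubernetes 1.16, so on newer clusters this manifest would be rejected. A sketch of the only structural change needed at the top (the rest of the spec stays the same; spec.selector becomes mandatory under apps/v1):

```json
{
  "kind": "DaemonSet",
  "apiVersion": "apps/v1"
}
```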

Prepare the project's k8s manifest

{
  "kind": "Deployment",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "jframework",
    "namespace": "default",
    "labels": {
      "k8s-app": "jframework"
    },
    "annotations": {
      "deployment.kubernetes.io/revision": "2"
    }
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "k8s-app": "jframework"
      }
    },
    "template": {
      "metadata": {
        "name": "jframework",
        "labels": {
          "k8s-app": "jframework"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "jframework",
            "image": "registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest",
            "readinessProbe": {
              "httpGet": {
                "scheme": "HTTP",
                "path": "/heartbeat",
                "port": 8080
              },
              "initialDelaySeconds": 10,
              "periodSeconds": 5
            },
            "resources": {},
            "terminationMessagePath": "/dev/termination-log",
            "terminationMessagePolicy": "File",
            "imagePullPolicy": "Always",
            "securityContext": {
              "privileged": false,
              "procMount": "Default"
            }
          }
        ],
        "restartPolicy": "Always",
        "terminationGracePeriodSeconds": 30,
        "dnsPolicy": "ClusterFirst",
        "securityContext": {},
        "schedulerName": "default-scheduler"
      }
    },
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": "25%",
        "maxSurge": "25%"
      }
    },
    "revisionHistoryLimit": 10,
    "progressDeadlineSeconds": 600
  }
}

Import the project in Rancher and check the result.

The gateway can reach the application cluster through k8s's built-in DNS names, for example: jframework.default:8080

k8s's built-in DNS already gives you load balancing: a service name resolves to a ClusterIP, and kube-proxy spreads requests across the backing pods.
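For the jframework.default name to resolve at all, a Service named jframework must exist alongside the Deployment; the manifests above only create workloads. A minimal sketch in the same JSON style (an assumed addition, matching the k8s-app label used above):

```json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "jframework",
    "namespace": "default"
  },
  "spec": {
    "selector": {
      "k8s-app": "jframework"
    },
    "ports": [
      {
        "port": 8080,
        "targetPort": 8080,
        "protocol": "TCP"
      }
    ]
  }
}
```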

Install the configuration center

Download the Apollo docker toolkit and start it:

git clone https://github.com/ctripcorp/apollo.git
cd apollo/scripts/docker-quick-start/
docker-compose up -d

Check the result.

Install ELK

Download the ELK docker toolkit (github.com/deviantony/…) and start it:

git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
docker-compose up -d

Open ip:5601 to check the result.
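To actually get application logs into this stack, Java services typically ship them to Logstash over TCP. A minimal logback sketch, assuming the logstash-logback-encoder dependency is on the classpath and Logstash listens on port 5000 (the docker-elk default); the destination host here is illustrative:

```xml
<!-- logback-spring.xml: send JSON-encoded logs to Logstash over TCP -->
<configuration>
  <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <!-- illustrative address: replace with the host running docker-elk -->
    <destination>172.16.67.140:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="LOGSTASH"/>
  </root>
</configuration>
```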

Install Pinpoint

Download the Pinpoint docker toolkit and start it:

git clone https://github.com/naver/pinpoint-docker.git
cd pinpoint-docker
docker-compose up -d pinpoint-hbase pinpoint-mysql pinpoint-web pinpoint-collector pinpoint-agent zoo1 zoo2 zoo3 jobmanager taskmanager

Open ip:8079 to check the result.
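For services to report data into this Pinpoint instance, each Java process is started with the Pinpoint agent attached. A sketch, assuming the agent is unpacked under /opt/pinpoint-agent and its pinpoint.config points at the collector; the version in the jar name is illustrative and must match your collector:

```shell
# Attach the Pinpoint agent to a service JVM (paths and version are illustrative)
java -javaagent:/opt/pinpoint-agent/pinpoint-bootstrap-1.8.4.jar \
     -Dpinpoint.agentId=jframework-01 \
     -Dpinpoint.applicationName=jframework \
     -jar app.jar
```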

Configure project releases in Jenkins

Create a Maven build job in Jenkins, then deploy to the cluster through the Rancher CLI:

/opt/rancher/rancher kubectl apply -f k8s.yml
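The job's shell step typically chains image build, push, and deploy. A sketch, assuming the job runs on the deploy machine and k8s.yml references the freshly pushed tag (commands are the same ones used earlier in this article):

```shell
# Jenkins "Execute shell" build step: build, push, then roll out
docker build -t registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest .
docker push registry.cn-hangzhou.aliyuncs.com/suxiaolin/jframework:latest
/opt/rancher/rancher kubectl apply -f k8s.yml
```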
Configure Alibaba Cloud SLB load balancing

In the Alibaba Cloud SLB console, slb.console.aliyun.com/slb/cn-hang…

Create a load balancer pointing at the gateway's IP.

Tool collection

The list of tools used in this Java CI/CD environment setup:

Tool Purpose
Nexus Maven private repository
jenkins automated build/release
docker application containerization
gitlab source code management
yearning SQL review
Sonarqube code quality review
maven&&gradle project build tools
kubectl k8s cluster CLI
K8s project runtime environment
rancher simplified k8s management
apollo config center manages project and cluster configuration
pinpoint application performance and exception monitoring
elk application log collection
Alibaba Cloud SLB load balancing
ansible linux command automation
showdoc project documentation management

Continuously updated...
