How to elegantly run a JMeter cluster on KubeSphere for performance testing

1. Prerequisites:

Kubernetes > v1.16

KubeSphere > v3.0.0 (installation reference: https://v3-0.docs.kubesphere.io/docs/quick-start/all-in-one-on-linux/)

1 master + 3 worker nodes

 

2. Deploying on KubeSphere

Note: this section also applies to deploying the JMeter cluster on a plain Kubernetes environment.

Create the following files on the KubeSphere master node:

jmeter_cluster_create.sh jmeter_master_configmap.yaml jmeter_grafana_deploy.yaml

jmeter_master_deploy.yaml start_test.sh jmeter_grafana_svc.yaml

jmeter_influxdb_configmap.yaml jmeter_slaves_deploy.yaml jmeter_slaves_svc.yaml

jmeter_influxdb_deploy.yaml jmeter_influxdb_svc.yaml dashboard.sh jmeter_stop.sh

 

Step 1. Deploy the components

Run jmeter_cluster_create.sh and enter a unique namespace, e.g. jmeter:

./jmeter_cluster_create.sh

Check that the pods are up:

[root@master ~]# kubectl get pod -n jmeter
NAME                               READY   STATUS    RESTARTS   AGE
influxdb-jmeter-5f7dd64975-xkgqc   1/1     Running   0          21h
jmeter-grafana-5856f7b855-rw4lp    1/1     Running   0          21h
jmeter-master-75c7d64449-46pjm     1/1     Running   0          21h
jmeter-slaves-d69c647d-dd6sb       1/1     Running   0          21h
jmeter-slaves-d69c647d-glbjn       1/1     Running   0          21h
jmeter-slaves-d69c647d-nsfgf       1/1     Running   0          21h
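Rather than eyeballing the listing, the readiness check can be scripted. A sketch over a captured listing (in the cluster you would pipe `kubectl get pod -n jmeter --no-headers` directly instead of using the variable):

```shell
# Captured `kubectl get pod -n jmeter --no-headers` output (sample from above)
pods="influxdb-jmeter-5f7dd64975-xkgqc   1/1   Running   0   21h
jmeter-grafana-5856f7b855-rw4lp    1/1   Running   0   21h
jmeter-master-75c7d64449-46pjm     1/1   Running   0   21h
jmeter-slaves-d69c647d-dd6sb       1/1   Running   0   21h"
# Count the pods whose STATUS column (field 3) is Running
running=$(echo "$pods" | awk '$3 == "Running"' | wc -l | tr -d ' ')
echo "$running"   # 4
```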

Step 2. Deployment manifests

2.1 The bootstrap script

jmeter_cluster_create.sh (creates the namespace and all components: JMeter master, slaves, InfluxDB, and Grafana):

#!/usr/bin/env bash
#Create multiple Jmeter namespaces on an existing kubernetes cluster
working_dir=`pwd`
echo "checking if kubectl is present"
if ! hash kubectl 2>/dev/null
then
    echo "'kubectl' was not found in PATH"
    echo "Kindly ensure that you can access an existing kubernetes cluster via kubectl"
    exit 1
fi
kubectl version --short
echo "Current list of namespaces on the kubernetes cluster:"
echo
kubectl get namespaces | grep -v NAME | awk '{print $1}'
echo
echo "Enter the name of the new tenant unique name, this will be used to create the namespace"
read tenant
echo
#Check If namespace exists
kubectl get namespace $tenant > /dev/null 2>&1
if [ $? -eq 0 ]
then
    echo "Namespace $tenant already exists, please select a unique name"
    echo "Current list of namespaces on the kubernetes cluster"
    sleep 2
    kubectl get namespaces | grep -v NAME | awk '{print $1}'
    exit 1
fi
echo
echo "Creating Namespace: $tenant"
kubectl create namespace $tenant
echo "Namespace $tenant has been created"
echo
echo "Creating Jmeter slave nodes"
nodes=`kubectl get no | egrep -v "master|NAME" | wc -l`
echo
echo "Number of worker nodes on this cluster is " $nodes
echo
#echo "Creating $nodes Jmeter slave replicas and service"
echo
kubectl create -n $tenant -f $working_dir/jmeter_slaves_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_slaves_svc.yaml
echo "Creating Jmeter Master"
kubectl create -n $tenant -f $working_dir/jmeter_master_configmap.yaml
kubectl create -n $tenant -f $working_dir/jmeter_master_deploy.yaml

echo "Creating Influxdb and the service"
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_configmap.yaml
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_influxdb_svc.yaml
echo "Creating Grafana Deployment"
kubectl create -n $tenant -f $working_dir/jmeter_grafana_deploy.yaml
kubectl create -n $tenant -f $working_dir/jmeter_grafana_svc.yaml
echo "Printout Of the $tenant Objects"
echo
kubectl get -n $tenant all
echo namespace = $tenant > $working_dir/tenant_export
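The script's last line saves the chosen namespace to tenant_export; the later scripts (dashboard.sh, start_test.sh, jmeter_stop.sh) recover it with awk. A minimal sketch of that round trip, using a temp file and the namespace "jmeter":

```shell
# Write the file the way jmeter_cluster_create.sh does
tenant_file=$(mktemp)
echo "namespace = jmeter" > "$tenant_file"
# Read it back the way the other scripts do: take the last field of the line
tenant=$(awk '{print $NF}' "$tenant_file")
echo "$tenant"   # jmeter
rm -f "$tenant_file"
```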

2.2 jmeter-slave

jmeter_slaves_deploy.yaml (the JMeter slave Deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter-slaves
  labels:
    jmeter_mode: slave
spec:
  replicas: 3 
  selector:
    matchLabels:
      jmeter_mode: slave
  template:
    metadata:
      labels:
        jmeter_mode: slave
    spec:
      containers:
      - name: jmslave
        image: wenxinxin/jmeter-slave:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 1099
        - containerPort: 50000
        - containerPort: 50001
        resources:
          limits:
            cpu: 4000m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 512Mi
jmeter_slaves_svc.yaml (the JMeter slave Service):
apiVersion: v1
kind: Service
metadata:
  name: jmeter-slaves-svc
  labels:
    jmeter_mode: slave
spec:
  clusterIP: None
  ports:
    - port: 1099
      name: first
      targetPort: 1099
    - port: 50000
      name: second
      targetPort: 50000
    - port: 50001
      name: third
      targetPort: 50001    

2.3 jmeter_master

jmeter_master_configmap.yaml (the jmeter_master application ConfigMap):
apiVersion: v1
kind: ConfigMap
metadata:
  name: jmeter-load-test
  labels:
    app: influxdb-jmeter
data:
  load_test: |
    #!/bin/bash
    #Script created to invoke jmeter test script with the slave POD IP addresses
    #Script should be run like: ./load_test "path to the test script in jmx format"
    /jmeter/apache-jmeter-*/bin/jmeter -n -t $1 -R `getent ahostsv4 jmeter-slaves-svc | cut -d' ' -f1 | sort -u | awk -v ORS=, '{print $1}' | sed 's/,$//'`
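The one-liner resolves every slave pod IP through the headless service and joins them into the comma-separated host list that JMeter's distributed mode (`-R`) expects. The text-processing half can be traced with a captured `getent ahostsv4` output (the IPs below are hypothetical):

```shell
# Hypothetical output of `getent ahostsv4 jmeter-slaves-svc`; one line per
# address/socket-type pair, first field is the pod IP
getent_out="10.233.64.10 STREAM jmeter-slaves-svc
10.233.64.10 DGRAM
10.233.64.11 STREAM
10.233.64.11 DGRAM
10.233.64.12 STREAM
10.233.64.12 DGRAM"
# first field -> dedupe -> join with commas -> strip the trailing comma
hosts=$(echo "$getent_out" | cut -d' ' -f1 | sort -u | awk -v ORS=, '{print $1}' | sed 's/,$//')
echo "$hosts"   # 10.233.64.10,10.233.64.11,10.233.64.12
```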
jmeter_master_deploy.yaml (the jmeter_master Deployment):
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: jmeter-master
  labels:
    jmeter_mode: master
spec:
  replicas: 1
  selector:
    matchLabels:
      jmeter_mode: master
  template:
    metadata:
      labels:
        jmeter_mode: master
    spec:
      containers:
      - name: jmmaster
        image: wenxinxin/jmeter-master:latest
        imagePullPolicy: IfNotPresent
        command: [ "/bin/bash", "-c", "--" ]
        args: [ "while true; do sleep 30; done;" ]
        volumeMounts:
          - name: loadtest
            mountPath: /load_test
            subPath: "load_test"
        ports:
        - containerPort: 60000
        resources:
          limits:
            cpu: 4000m
            memory: 4Gi
          requests:
            cpu: 500m
            memory: 512Mi
      volumes:
      - name: loadtest 
        configMap:
         name: jmeter-load-test

2.4 influxdb

jmeter_influxdb_configmap.yaml (the InfluxDB ConfigMap):
apiVersion: v1
kind: ConfigMap
metadata:
  name: influxdb-config
  labels:
    app: influxdb-jmeter
data:
  influxdb.conf: |
    [meta]
      dir = "/var/lib/influxdb/meta"

    [data]
      dir = "/var/lib/influxdb/data"
      engine = "tsm1"
      wal-dir = "/var/lib/influxdb/wal"

    # Configure the graphite api
    [[graphite]]
      enabled = true
      bind-address = ":2003"  # graphite listener port
      database = "jmeter"  # store graphite data in this database
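With the Graphite listener enabled on :2003, anything that speaks the Graphite plaintext protocol can write into the jmeter database, which is how the JMeter backend listener feeds InfluxDB here. Each metric is one line, `<path> <value> <timestamp>`; a sketch (the metric name is made up, and the `nc` send is shown only as a comment because it needs the in-cluster service):

```shell
# Graphite plaintext protocol: "<metric.path> <value> <unix-timestamp>"
ts=1700000000                      # fixed timestamp for a reproducible example
metric="jmeter.smoke.count 1 $ts"  # hypothetical metric name
echo "$metric"
# From inside the cluster this line would land in the "jmeter" database:
#   echo "$metric" | nc jmeter-influxdb 2003
```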
jmeter_influxdb_deploy.yaml (the InfluxDB Deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb-jmeter
  labels:
    app: influxdb-jmeter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb-jmeter
  template:
    metadata:
      labels:
        app: influxdb-jmeter
    spec:
      containers:
        - image: influxdb:1.8.4-alpine
          imagePullPolicy: IfNotPresent
          name: influxdb
          volumeMounts:
          - name: config-volume
            mountPath: /etc/influxdb
          ports:
            - containerPort: 8083
              name: influx
            - containerPort: 8086
              name: api
            - containerPort: 2003
              name: graphite
      volumes:
      - name: config-volume
        configMap:
         name: influxdb-config
jmeter_influxdb_svc.yaml (the InfluxDB Service):
apiVersion: v1
kind: Service
metadata:
  name: jmeter-influxdb
  labels:
    app: influxdb-jmeter
spec:
  ports:
    - port: 8083
      name: http
      targetPort: 8083
    - port: 8086
      name: api
      targetPort: 8086
    - port: 2003
      name: graphite
      targetPort: 2003
  selector:
    app: influxdb-jmeter

2.5 grafana

jmeter_grafana_deploy.yaml (the Grafana Deployment):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jmeter-grafana
  labels:
    app: jmeter-grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jmeter-grafana
  template:
    metadata:
      labels:
        app: jmeter-grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:5.2.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_USERS_ALLOW_ORG_CREATE
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
jmeter_grafana_svc.yaml (the Grafana Service and Ingress):
apiVersion: v1
kind: Service
metadata:
  name: jmeter-grafana
  labels:
    app: jmeter-grafana
spec:
  ports:
    - port: 3000
      targetPort: 3000
  selector:
    app: jmeter-grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-weight: 'jmeter-grafana: 100'
  name: jmeter-grafana-ingress
spec:
  rules:
  # Layer-7 host (domain) rule
  - host: grafana-jmeter.com
    http:
      paths:
      # Context path
      - path: /
        backend:
          serviceName: jmeter-grafana
          servicePort: 3000

Step 3. Initialize the dashboard

3.1 Run the dashboard script

$ ./dashboard.sh

Check the Service status:

[root@master ~]# kubectl get svc -n jmeter
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE
jmeter-grafana      NodePort    10.233.22.208   <none>        3000:30157/TCP                 22h
jmeter-influxdb     ClusterIP   10.233.26.95    <none>        8083/TCP,8086/TCP,2003/TCP     22h
jmeter-slaves-svc   ClusterIP   None            <none>        1099/TCP,50000/TCP,50001/TCP   22h

Grafana is now reachable at http://<node-IP>:30157

In Grafana, import dashboard template 5496 and select the data source.

3.2 Script details

dashboard.sh automatically creates:

  • (1) an InfluxDB database (jmeter) in the influxdb pod
  • (2) a data source (jmeterdb) in Grafana
#!/usr/bin/env bash
working_dir=`pwd`
#Get namespace variable
tenant=`awk '{print $NF}' $working_dir/tenant_export`
## Create jmeter database automatically in Influxdb
echo "Creating Influxdb jmeter Database"
##Wait until Influxdb Deployment is up and running
##influxdb_status=`kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $2}' | grep Running
influxdb_pod=`kubectl get po -n $tenant | grep influxdb-jmeter | awk '{print $1}'`
kubectl exec -ti -n $tenant $influxdb_pod -- influx -execute 'CREATE DATABASE jmeter'
## Create the influxdb datasource in Grafana
echo "Creating the Influxdb data source"
grafana_pod=`kubectl get po -n $tenant | grep jmeter-grafana | awk '{print $1}'`
## Make load test script in Jmeter master pod executable
#Get Master pod details
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl exec -ti -n $tenant $master_pod -- cp -r /load_test /jmeter/load_test
kubectl exec -ti -n $tenant $master_pod -- chmod 755 /jmeter/load_test
##kubectl cp $working_dir/influxdb-jmeter-datasource.json -n $tenant $grafana_pod:/influxdb-jmeter-datasource.json
kubectl exec -ti -n $tenant $grafana_pod -- curl 'http://admin:admin@127.0.0.1:3000/api/datasources' -X POST -H 'Content-Type: application/json;charset=UTF-8' --data-binary '{"name":"jmeterdb","type":"influxdb","url":"http://jmeter-influxdb:8086","access":"proxy","isDefault":true,"database":"jmeter","user":"admin","password":"admin"}'
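The final cURL call registers InfluxDB as a Grafana data source through Grafana's /api/datasources endpoint, using the image's default admin:admin credentials. The JSON payload it posts, sanity-checked for validity first (assumes python3 is available on the machine running the check):

```shell
# Data source payload as posted by dashboard.sh
payload='{"name":"jmeterdb","type":"influxdb","url":"http://jmeter-influxdb:8086","access":"proxy","isDefault":true,"database":"jmeter","user":"admin","password":"admin"}'
# Validate the JSON before POSTing it to Grafana
echo "$payload" | python3 -m json.tool > /dev/null && echo "valid JSON"
```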

Step 4. Run the test

4.1 Run the script

$ ./start_test.sh

A JMeter test plan (.jmx file) is required.

4.2 Script details

start_test.sh (runs a JMeter test plan automatically, without manually logging in to the JMeter master shell; it asks for the location of the test plan, copies it to the JMeter master pod, and starts the distributed test against the JMeter slaves):

#!/usr/bin/env bash
#Script created to launch Jmeter tests directly from the current terminal without accessing the jmeter master pod.
#It requires that you supply the path to the jmx file
#After execution, test script jmx file may be deleted from the pod itself but not locally.

working_dir="`pwd`"

# Get the namespace variable
tenant=`awk '{print $NF}' "$working_dir/tenant_export"`

jmx="$1"
[ -n "$jmx" ] || read -p 'Enter path to the jmx file ' jmx

if [ ! -f "$jmx" ];
then
    echo "Test script file was not found in PATH"
    echo "Kindly check and input the correct file path"
    exit
fi

test_name="$(basename "$jmx")"

# Get master pod details
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl cp "$jmx" -n $tenant "$master_pod:/$test_name"

## Start the JMeter load test
kubectl exec -ti -n $tenant $master_pod -- /bin/bash /load_test "$test_name"
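A typical invocation, assuming a local test plan sample.jmx (hypothetical file name). Only the base name is used as the remote file name inside the master pod, exactly as the script derives it:

```shell
# Example:  ./start_test.sh plans/sample.jmx
# The remote name is computed the same way the script does it:
jmx="plans/sample.jmx"
test_name="$(basename "$jmx")"
echo "$test_name"   # sample.jmx
```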

jmeter_stop.sh (stops a running test):

#!/usr/bin/env bash
#Script written to stop a running jmeter master test
#Kindly ensure you have the necessary kubeconfig

working_dir=`pwd`

#Get the namespace variable
tenant=`awk '{print $NF}' $working_dir/tenant_export`
master_pod=`kubectl get po -n $tenant | grep jmeter-master | awk '{print $1}'`
kubectl -n $tenant exec -it $master_pod -- bash -c "./jmeter/apache-jmeter-5.2.1/bin/stoptest.sh"                               

 

3. Managing the JMeter cluster with KubeSphere

Step 1. Create a workspace.

Step 2. Go to Cluster Management --> Project Management --> User Projects.

Assign the jmeter project to the workspace wx.

Step 3. Via Workbench --> workspace wx --> project jmeter, view and manage the JMeter cluster.

References:

  • [1]: https://github.com/kubernauts/jmeter-kubernetes