# nGrinder Controller Configuration Guide

This chapter describes advanced nGrinder controller configuration. If you are not a system administrator, you may not need to read this guide. However, if you want to run nGrinder as a PaaS, you should read this chapter.

### Controller Home

#### ${NGRINDER_HOME}

When the nGrinder controller starts, nGrinder creates the ${user.home}/.ngrinder directory in the user's home directory. This directory contains the default configuration files and data. The default locations of the .ngrinder directory are listed below.

- Windows: C:\Users\${user.home}\.ngrinder
- Unix/Linux: ${user.home}/.ngrinder

However, if you want to assign another directory as the home directory, set the environment variable ${NGRINDER_HOME} before running nGrinder, or pass --ngrinder-home HOME_PATH on the command line.

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war --ngrinder-home /home/user/ngrinder
```

If you want to run multiple nGrinder controllers (each controller handling one network region) and make them work as one (cluster mode), all controllers should share the same ${NGRINDER_HOME}. This is usually done by sharing the controller directory via NFS. See the Cluster Architecture page for details.
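
As a rough sketch of this setup (the NFS server name and export path below are hypothetical), each controller machine mounts the same export as its home directory before starting nGrinder:

```
# on every controller machine; nfs-server:/export/ngrinder_home is a placeholder export
mount -t nfs nfs-server:/export/ngrinder_home /home/ngrinder/.ngrinder
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war --ngrinder-home /home/ngrinder/.ngrinder
```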

#### ${NGRINDER_EX_HOME}

${NGRINDER_EX_HOME} is used for each individual controller in cluster mode. By default, it is set to ~/.ngrinder_ex.

Unlike ${NGRINDER_HOME}, ${NGRINDER_EX_HOME} is not created automatically when nGrinder starts.

- Windows: C:\Users\${user.home}\.ngrinder_ex
- Unix/Linux: ${user.home}/.ngrinder_ex

${NGRINDER_EX_HOME} is not meant to be shared by multiple controllers. Each controller can have its own extended home, and users can add extra system configuration in ${NGRINDER_EX_HOME}/system.conf.

The controller first loads the system configuration from ${NGRINDER_HOME}/system.conf. It then tries to load ${NGRINDER_EX_HOME}/system.conf and overrides the configuration from ${NGRINDER_HOME}/system.conf with it.

For example, the cluster.region configuration can be set in each cluster member's ${NGRINDER_EX_HOME}/system.conf file. When the ${NGRINDER_EX_HOME} directory exists and the controller starts in cluster mode, the controller writes its log to the ${NGRINDER_EX_HOME}/logs/ngrinder_{region_name}.log file.
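
A small sketch of this layering (the values below are arbitrary examples): a key defined in both files takes the value from the extended home on that controller.

```
# ${NGRINDER_HOME}/system.conf   (shared, loaded first)
controller.max_concurrent_test=10

# ${NGRINDER_EX_HOME}/system.conf  (loaded second, overrides the shared value on this controller)
controller.max_concurrent_test=20
cluster.region=NORTH
```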

### Command Line Parameters

#### Basic

If you run the controller without a separate WAS (web application server), you can provide several options on the command line interface; a combined example follows the table.

| Name | Example | Overriding Property | Description |
|------|---------|---------------------|-------------|
| -p / --port | -p 80 | | HTTP port of the server. Default: 8080 |
| -c / --context-path | -c ngrinder | | Web context path of the controller. The default context path is "". For example, if you provide "ngrinder" here, the access URL becomes "http://localhost:8080/ngrinder" |
| -cm / --cluster-mode | -cm easy | cluster.mode | Cluster mode. Three options are available (none/easy/advanced). Default: none |
| -nh / --ngrinder-home | -nh ~/ngrinder | ngrinder.home | Home path. Default: ~/.ngrinder |
| -exh / --exhome | -ex ~/ngrinder_ex | ngrinder.exhome | Extended home path. Default: ~/.ngrinder_ex |
| -h / --help / -? | -h | | Help |
| -D | -Ddatabase=cubrid&database_url=blar | can override all | Dynamic properties. This option can override all configurations in database.conf and system.conf |
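
For example, several of these options can be combined in one command (the paths and context path below are only illustrative):

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -p 80 -c ngrinder -nh /home/user/ngrinder
```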

#### Single Application Mode

If you run nGrinder in non-cluster mode (which means you do not provide the "-cm" option at all), the following additional option is available.

| Name | Example | Overriding Property | Description |
|------|---------|---------------------|-------------|
| -cp / --controller-port | -cp 9000 | controller.port | The controller port to which agents connect. |
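
For instance, a minimal sketch that makes agents connect on port 9000 instead of the default:

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -cp 9000
```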

#### Easy Cluster Mode

Some companies use multiple IDCs and need the clustering feature (multi-region support in a single nGrinder instance). However, nGrinder before version 3.3 required a network file system to share ${NGRINDER_HOME} and Cubrid so that multiple controllers could use the same DB. We removed these restrictions by allowing multiple controllers to be installed on a single machine and allowing H2 TCP server connections. To understand easy clustering, we strongly recommend reading our Easy Cluster Guide.

You can easily run controllers in cluster mode with the following command.

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -cm easy
```

The following options are required; a combined example follows the table.

| Name | Example | Overriding Property | Description |
|------|---------|---------------------|-------------|
| -clh / --cluster-host | -clh 200.1.22.3 | cluster.host | Controller IP or host name to which the agents of the current region connect. |
| -clp / --cluster-port | -clp 10222 | cluster.port | Cluster communication port of this cluster member. Each cluster member should run with a unique cluster port. |
| -cp / --controller-port | -cp 9000 | controller.port | Controller port to which agents connect. Each cluster member should run with a unique controller port. |
| -r / --region | -r NORTH | cluster.region | Region name. Each cluster member should run with a unique region name. |
| -dt / --database-type | -dt h2 | database.type | Database type. h2 and cubrid are available. |
| -dh / --database-host | -dh localhost | database.host | Database host name. Default: localhost |
| -dp / --database-port | -dp 9092 | database.port | Database port number. When cubrid is selected, the default is 33000; when h2 is selected, the default is 9092. |
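
As a sketch of how these fit together, two controllers on the same machine (the IP, ports, and region names below are examples only) could be started like this, each with its own cluster port, controller port, region, and web port:

```
# first cluster member (region NORTH)
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -cm easy -clh 200.1.22.3 -clp 10222 -cp 9001 -r NORTH -dt h2 -dh localhost -dp 9092 -p 8080

# second cluster member (region SOUTH)
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -cm easy -clh 200.1.22.3 -clp 10223 -cp 9002 -r SOUTH -dt h2 -dh localhost -dp 9092 -p 8081
```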

#### Advanced Cluster Mode

```
java -XX:MaxPermSize=200m -jar ngrinder-controller-X.X.war -cm advanced
```

Advanced cluster mode does not take any options. It simply activates cluster mode; the cluster configuration is then read from the ${NGRINDER_HOME}/system.conf or ${NGRINDER_EX_HOME}/system.conf file.

### Configurations

When the controller starts, it copies the default configurations into ${NGRINDER_HOME}. You can modify them to configure the controller.

#### ${NGRINDER_HOME}/database.conf

- This contains the database configuration. You can modify this file when you need to use Cubrid. By default, nGrinder uses H2 as the database.
```
database=H2
database_username=admin
database_password=admin
```

If you only set the options above, H2 creates the DB at ${NGRINDER_HOME}/db/h2.db and runs in embedded mode. In this case, no other process can access this database while nGrinder is running.

If you run H2 in server mode rather than in embedded mode, you should also provide the database connection URL.

```
database_url=tcp://{your_h2_server_host_ip_or_name}:{the_h2_server_port}/db/ngrinder
```
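
Put together, a database.conf for H2 server mode might look like the following (the host name is a placeholder):

```
database=H2
database_url=tcp://h2-server.mycompany.com:9092/db/ngrinder
database_username=admin
database_password=admin
```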

If you prefer to use Cubrid, you need the following configuration.

```
database=cubrid
database_url={your_cubrid_host_ip_or_name}:{cubrid_port_maybe_33000}:{dbname}
database_username=admin
database_password=admin
```

Note: if you want to use the Cubrid DB high-availability feature, please follow the guide to enable HA in Cubrid and add the alternative DB address in database.conf.

```
database_url_option=&althosts={your_cubrid_secondary_host_ip_or_name}:{cubrid_port_maybe_33000}
```

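So a complete database.conf for Cubrid with an HA standby could look roughly like this (host names and DB name are placeholders):

```
database=cubrid
database_url=cubrid-primary.mycompany.com:33000:ngrinder
database_username=admin
database_password=admin
database_url_option=&althosts=cubrid-standby.mycompany.com:33000
```
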
#### ${NGRINDER_HOME}/system.conf

##### Generic
- This contains controller configurations.
- You can modify these settings to calibrate the controller’s behavior.

|Key|Default|Compatible Keys (for ~nGrinder 3.2.X)|Description|
|---|-------|-------------------------------------|-----------|
|controller.verbose|false|verbose|Set true to see the more detailed log.|
|controller.dev_mode|false|testmode|Set true to run the controller in dev mode. In dev mode, the log goes to the default output (such as catalina.log in Tomcat) instead of ${NGRINDER_HOME}/logs/, and security mode and cluster config verification are disabled. In addition, "agent force update" and "agent auto approval" are enabled. Finally, the script console is activated as well.|
|controller.demo_mode|false|demo|Set true to run the controller in demo mode. In demo mode, users are not allowed to change their passwords.|
|controller.security|false|security|Set true if security mode should be enabled. In security mode, the nGrinder SecurityManager is activated and limits each test's access to the underlying resources/network on the agent. Please refer to [Script Security](script-security)|
|controller.user_password_sha256|false|ngrinder.security.sha256|By default, nGrinder uses sha1 to encode passwords. If you'd like to use sha256, set this to true. However, you need to delete all databases completely to apply this configuration.|
|controller.usage_report|true|usage.report|Set false if you don't want to report ngrinder usage to google analytics. |
|controller.plugin_support|true|pluginsupport|Set false if plugins should be de-activated. This option is not applied on the fly; you need to restart the controller.|
|controller.user_security|false|user.security|Set true if you want to make some of the user profile fields (email, mobile phone) mandatory.|
|controller.allow_sign_up|false| |Set true if users should be able to sign up by themselves. See [TBD](tbd)|
|controller.max_agent_per_test|10|agent.max.size|The maximum number of agents which can be attached per test. This option is useful when you want nGrinder to be shared by many users, because it makes each test use only a limited number of agents. For example, if you have 15 agents in total and set 5 here, you can guarantee that 3 users can run performance tests concurrently.|
|controller.max_vuser_per_agent|3000|agent.max.vuser|The maximum number of vusers which can be used per agent. In nGrinder, the vuser count means the total thread count. This should be chosen carefully depending on the agent memory size. If you have an agent with 8G RAM and 4 cores, more than 10,000 vusers may be executable. See TBD for our benchmark result.|
|controller.max_run_count|10000|agent.max.runcount|The maximum test run count for one thread. If you set this to 10,000 and run 100 threads per agent, the test can be executed 10,000 * 100 times at maximum.|
|controller.max_run_hour|8|agent.max.runhour|The maximum running hour for one test.|
|controller.max_concurrent_test|10|ngrinder.max.concurrenttest|The maximum number of concurrent tests allowed. If more tests than specified here are started, some of them will be waiting in the run queue.|
|controller.monitor_port|13243|monitor.listen.port|The monitor connecting port. The default value is 13243. When a perftest starts, the controller tries to connect to the monitor on the specified target hosts for system statistics.|
|controller.url|automatically selected|ngrinder.http.url http.url|Controller URL (such as http://ngrinder.mycompany.com). This is used to construct the host name part of URLs in the controller (such as the SVN link). If not set, the controller analyzes the user request to construct the URL text shown in the web page.|
|controller.controller_port|16001|ngrinder.agent.control.port|The port number to which each agent connects in the connection phase.|
|controller.console_port_base|12000|ngrinder.console.portbase|The base port number to which agents in each test connect during the testing phase. If you allowed 10 concurrent tests with the controller.max_concurrent_test=10 option, the ports from 12000 to 12009 are used by agents to connect to the controller in the testing phase. You need to restart nGrinder to apply this configuration.|
|controller.ip|all available IPs|ngrinder.controller.ipaddress ngrinder.controller.ip|By default, the empty controller.ip configuration makes the controller bind to all available IPs on the current machine, so that agents can connect to any of the controller's IPs. Generally, this causes no problem. However, in specialized environments (such as EC2), two or more IPs (one for inbound and the other for outbound) are assigned. If you want to allow only one IP to be connected by agents, you should put it here.|
|controller.validation_timeout|100|ngrinder.validation.timeout|Script validation timeout in seconds. Increase this when you have a script which takes more than 100 seconds for a single validation.|
|controller.enable_script_console|false| |Set true if the script console should be activated. The script console provides a way to directly access nGrinder internals.|
|controller.enable_agent_auto_approval|true| |Set false if agents should be approved before being used. This option is useful when nGrinder is provided as PAAS.|
|controller.front_page_enabled|true| |Set false if the controller doesn't have internet access. This disables the periodic RSS feed access to developer resources and QnAs.|
|controller.front_page_resources_rss|…|ngrinder.frontpage.rss|RSS URL for "Developer Resources" panel in the front page.|
|controller.front_page_resources_more_url|…| |"More" URL for the "Developer Resources" panel in the front page.|
|controller.front_page_qna_rss|…|ngrinder.qna.rss|RSS URL for "QnA panel" in the front page.|
|controller.front_page_qna_more_url|…| |"More" URL for the "QnA panel" in the front page.|
|controller.front_page_ask_question_url|….|ngrinder.ask.question.url|"Ask a question" URL for "QnA panel" in the front page.|
|controller.help_url|…|ngrinder.help.url|The top most HELP link URL.|
|controller.default_lang|en|ngrinder.langauge.default|The default language if the user didn't specify a language during login. This option is useful when you install a custom SSO plugin.|
|controller.admin_password_reset|false| |Set true if the admin password should be reset to "admin" at boot. This option is useful when the admin has lost the password. You should set it back to false after setting the admin password.|
|controller.agent_force_update|false| |Set true if agents should always be updated when the update message is sent. If it's enabled, the update is performed even when the agent has a later version than the controller's.|
|controller.update_chunk_size|1048576| |The byte size of the agent update message. By default, it's set to 1048576 (1MB). The agent update message contains a fragment of the agent package and is sent to agents multiple times. If it's bigger, fewer consecutive messages are needed, so the update is faster.|
|controller.safe_dist|false|ngrinder.dist.safe.region|Set true if you want to always enable safe script transmission. If it is turned on, each perf test starts more slowly, but files are guaranteed to be transmitted without errors.|
|controller.safe_dist_threshold|1000000|ngrinder.dist.safe.threashhold|The bigger the files, the higher the possibility of transmission errors. nGrinder automatically enables safe script transmission based on the file size. If you want to disable this behavior, set it to 100000000.|
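
As a brief illustration, a shared ${NGRINDER_HOME}/system.conf limiting resource usage might contain entries like these (the values are arbitrary examples):

```
controller.max_agent_per_test=5
controller.max_vuser_per_agent=2000
controller.max_concurrent_test=10
controller.url=http://ngrinder.mycompany.com
```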

##### Cluster-Related Configurations
This file can also contain several cluster-mode related options. Because ${NGRINDER_HOME}/system.conf is shared by multiple controllers via NFS,
cluster-related configurations which apply to all controllers in the cluster can be located here for easy administration.
The following options are available.

|Key|Default|Compatible Keys (for ~nGrinder 3.2.X)|Description|
|---|-------|-------------------------------------|-----------|
|cluster.enabled|false|ngrinder.cluster.mode|true if the cluster mode should be activated.|
|cluster.mode|none| |Set to easy if you want to run multiple controllers on a single machine.|
|cluster.members|-|ngrinder.cluster.uris|Comma or semicolon separated list of all cluster members' IPs, e.g. 192.168.1.1;192.168.2.2;192.168.3.3|
|cluster.port|40003|ngrinder.cluster.listener.port|Cluster communication port. In easy mode, each controller in a cluster should have a unique cluster port. In advanced mode, however, all cluster members should have the same cluster port.|
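
For example, in advanced cluster mode the shared ${NGRINDER_HOME}/system.conf might carry the cluster-wide entries (the IPs below are placeholders):

```
cluster.enabled=true
cluster.members=192.168.1.1;192.168.2.2;192.168.3.3
cluster.port=40003
```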

#### ${NGRINDER_EX_HOME}/system.conf
As described above, ${NGRINDER_EX_HOME} is used to provide the specialized configuration for each controller in cluster mode. You can add an additional system.conf file here as well and define several per-controller configurations.

If you run the controller in single mode or easy cluster mode, you don't need to create this file at all. However, if you run the controller in advanced mode, you may need the following configurations in the ${NGRINDER_EX_HOME}/system.conf file.

|Key|Default|Compatible Keys (for ~nGrinder 3.2.X)|Description|
|---|-------|-------------------------------------|-----------|
|cluster.port|40003|ngrinder.cluster.listener.port|Cluster communication port. In easy mode, each controller in a cluster should have a unique cluster port. In advanced mode, however, all cluster members should have the same cluster port.|
|cluster.host|-|cluster.ip|Console binding IP of this region. If not set, console will be bound to all available IPs.|
|cluster.region|NONE|ngrinder.cluster.region|The region name of this cluster member.|
|cluster.hidden_region|false|ngrinder.cluster.region.hide|true if you want to make this controller invisible from cluster members. This is useful when you like to run a private controller for administration.|
|cluster.safe_dist|false|-|Set true if the file transmission in this region should be done in a safer way.|
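
For instance, each controller in an advanced cluster could have its own ${NGRINDER_EX_HOME}/system.conf such as the following (the IP and region name are examples):

```
cluster.host=192.168.1.1
cluster.region=NORTH
cluster.hidden_region=false
```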

#### ${NGRINDER_HOME}/process_and_thread_policy.js
This file defines the logic which determines the appropriate combination of processes and threads for a given count of vusers.
It provides a flexible way to configure the appropriate processes and threads. Users usually don't know which process and thread combination results in the best performance, so nGrinder lets a user simply input the expected vusers per agent and configures the process and thread counts automatically.
The default logic is as follows.
```javascript
// Determines how many worker processes to launch for the given total vuser count.
function getProcessCount(total) {
    if (total < 2) {
        return 1;
    }

    var processCount = 2;

    // Above 80 vusers, use roughly one process per 40 vusers.
    if (total > 80) {
        processCount = parseInt(total / 40) + 1;
    }

    // Never use more than 10 processes.
    if (processCount > 10) {
        processCount = 10;
    }
    return processCount;
}

// Splits the total vusers evenly across the processes to get the per-process thread count.
function getThreadCount(total) {
    var processCount = getProcessCount(total);
    return parseInt(total / processCount);
}
```

By default, no more than 10 processes are allowed, and once more than 80 vusers are used, the vusers are divided so that each process is assigned roughly 40 threads.

You are free to rewrite these functions (getProcessCount() / getThreadCount()) to meet your needs; a small illustrative variant follows.
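
For example, a minimal custom policy (purely illustrative, not the shipped default) that never uses more than 4 processes could look like this:

```javascript
// Hypothetical override: cap the process count at 4.
function getProcessCount(total) {
    if (total < 4) {
        return 1;
    }
    return 4;
}

// Keep the default behavior: split the vusers evenly across the processes.
function getThreadCount(total) {
    var processCount = getProcessCount(total);
    return parseInt(total / processCount);
}
```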

#### ${NGRINDER_HOME}/grinder.properties

This file defines the default "Grinder" behavior. Some of the properties are overridden by nGrinder at runtime, and some are not. In most cases, administrators do not need to change this file.
See http://grinder.sourceforge.net/g3/properties.html for details.

### Plugins

This folder contains plugins. If you want to install a new plugin (*.jar) or upgrade an existing one, just put it into this folder. It will be scanned and activated automatically. If you want to remove a plugin, simply delete the plugin file from the folder. You can find the available plugins at TBD.

### Home Folder Structure

In ${NGRINDER_HOME}, there are several folders which store the data used by nGrinder. They are described below.

| Folder name | Description |
|-------------|-------------|
| logs | Stores nGrinder logs. nGrinder intercepts the tomcat log and saves it into the ngrinder.log file. This log contains only controller-related entries. You can also monitor the content of this file through the admin menu. |
| perftest | Stores the data related to each performance test. |
| download | Contains downloadable files such as the agent and monitor packages. For example, if you want to recreate the agent and monitor packages, you can delete everything in this folder. |
| plugins | Contains plugins. If you want to install a new plugin (*.jar) or upgrade an existing one, just put it into this folder. It will be scanned and activated automatically. |
| repos | Contains each user's svn repository. |
| script | Contains the resources related to script validation. Used only while performing validation. |
| db | Contains the H2 database data. |
| subversion | Contains the default configuration of the underlying svnkit. |
| webapp | Contains the controller's web application files when they are run on the embedded web server. |

