TiDB Cluster Deployment

TiDB Database Cluster

1. TiDB Introduction

1.1 TiDB Overview

TiDB is an open-source distributed HTAP (Hybrid Transactional and Analytical Processing) database designed by PingCAP, combining the best traits of traditional RDBMSs and NoSQL stores. TiDB is MySQL compatible, supports virtually unlimited horizontal scaling, and provides strong consistency and high availability. Its goal is to be a one-stop solution for both OLTP (Online Transactional Processing) and OLAP (Online Analytical Processing) workloads.

TiDB has the following features:

Highly compatible with MySQL

In most cases, applications can migrate from MySQL to TiDB without code changes, and sharded MySQL clusters can be migrated in real time with the TiDB toolset.

Horizontal elastic scalability

TiDB scales out simply by adding new nodes, growing throughput or storage on demand to handle high-concurrency, high-volume workloads.

Distributed transactions

TiDB fully supports standard ACID transactions.

True financial-grade high availability

Compared with traditional master-slave (M-S) replication, the Raft-based majority-election protocol provides financial-grade strong data consistency. As long as a majority of replicas survive, the cluster recovers from failures automatically (auto-failover) with no manual intervention.

One-stop HTAP solution

TiDB is primarily a row-store OLTP database, but it also delivers strong OLAP performance. Combined with TiSpark, it forms a one-stop HTAP solution: a single copy of the data serves both OLTP and OLAP, with no cumbersome ETL pipeline.

Cloud-native SQL database

TiDB is designed for the cloud: it supports public, private, and hybrid clouds, and keeps deployment, configuration, and maintenance simple.

TiDB Server

TiDB Server receives SQL requests and handles the SQL logic: it locates, through PD, the TiKV addresses holding the data a query needs, exchanges data with TiKV, and returns the result. TiDB Server is stateless; it stores no data itself and only performs computation, so it can scale out without limit and be exposed behind a unified endpoint through a load-balancing component such as LVS, HAProxy, or F5.
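As an illustration only (not part of this deployment), a minimal HAProxy sketch that balances TCP connections across the two TiDB servers configured later in this document; the listen port (3390) and balance policy are assumptions:

# Hypothetical /etc/haproxy/haproxy.cfg excerpt: TCP load balancing for TiDB
listen tidb-cluster
    bind 0.0.0.0:3390
    mode tcp
    balance roundrobin
    server tidb-1 172.16.5.50:4000 check
    server tidb-2 172.16.5.51:4000 check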

PD Server

Placement Driver (PD) is the management module of the whole cluster. It has three main jobs: storing the cluster metadata (which TiKV node holds a given Key); scheduling and load-balancing the TiKV cluster (data migration, Raft group leader transfer, and so on); and allocating globally unique, monotonically increasing transaction IDs.

PD is itself a cluster and must be deployed with an odd number of nodes; at least three are recommended in production.
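Once the cluster is up, PD membership can be checked with pd-ctl, the same binary used later in this document (shipped under tidb-ansible/resources/bin after local_prepare.yml has run); a quick sketch, assuming PD listens on its default port 2379:

/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d member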

TiKV Server

TiKV Server stores the data. Externally, TiKV is a distributed, transactional key-value storage engine. The basic unit of storage is the Region: each Region holds the data of one Key Range (a left-closed, right-open interval from StartKey to EndKey), and each TiKV node serves multiple Regions. TiKV replicates with the Raft protocol to keep data consistent and tolerate failures. Replicas are managed per Region: the replicas of one Region on different nodes form a Raft Group. Load balancing of data across TiKV nodes is scheduled by PD, also at Region granularity.
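Store and Region placement can be inspected the same way; a sketch using pd-ctl's store and region subcommands (output format varies by version):

# List all TiKV stores with their Region counts
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d store
# List Regions and their Raft peers
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d region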

TiSpark

TiSpark is the main component for serving complex OLAP needs in TiDB. It runs Spark SQL directly on the TiDB storage layer, leverages the strengths of the distributed TiKV cluster, and plugs into the big-data ecosystem. With it, one system supports both OLTP and OLAP, sparing users the burden of data synchronization.

1.二、Tidb 數據基本操做

Creating, viewing, and dropping databases, tables, indexes, users, and privileges:

-- Create, conditionally create, and drop a database
CREATE DATABASE db_name [options];
CREATE DATABASE IF NOT EXISTS samp_db;
DROP DATABASE samp_db;
-- Drop a table
DROP TABLE IF EXISTS person;
-- Create indexes: standalone, via ALTER TABLE, and unique
CREATE INDEX person_num ON person (number);
ALTER TABLE person ADD INDEX person_num (number);
CREATE UNIQUE INDEX person_num ON person (number);
-- User and privilege management
CREATE USER 'tiuser'@'localhost' IDENTIFIED BY '123456';
GRANT SELECT ON samp_db.* TO 'tiuser'@'localhost';
SHOW GRANTS FOR tiuser@localhost;
DROP USER 'tiuser'@'localhost';
GRANT ALL PRIVILEGES ON test.* TO 'xxxx'@'%' IDENTIFIED BY 'yyyyy';
REVOKE ALL PRIVILEGES ON `test`.* FROM 'genius'@'localhost';
SHOW GRANTS FOR 'root'@'%';
SELECT Insert_priv FROM mysql.user WHERE user='test' AND host='%';
FLUSH PRIVILEGES;
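The person statements above assume the table already exists; a minimal definition (the columns here are hypothetical) that makes them runnable:

CREATE TABLE IF NOT EXISTS person (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(64),
    number INT
);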

2. TiDB Ansible Deployment

2.1 Preparing the TiDB Cluster Environment

The TiDB cluster is built on three physical machines with IPs 172.16.5.50, 172.16.5.51, and 172.16.5.52, with 172.16.5.51 doubling as the control machine.

The software layout is as follows:

172.16.5.51: TiDB, PD, TiKV

172.16.5.50: TiKV

172.16.5.52: TiKV

Install software on the control machine

yum -y install epel-release git curl sshpass atop vim htop net-tools
yum -y install python-pip

Create the tidb user on the control machine and generate an SSH key

# Create the tidb user
useradd -m -d /home/tidb tidb && passwd tidb
# Give the tidb user passwordless sudo
visudo
tidb ALL=(ALL) NOPASSWD: ALL
# Generate an SSH key as the tidb user
su - tidb
ssh-keygen -t rsa -C mikel@tidb

Download TiDB-Ansible on the control machine

# Download the release-2.0 branch of TiDB-Ansible
cd /home/tidb && git clone -b release-2.0 https://github.com/pingcap/tidb-ansible.git
# Install Ansible and its dependencies
cd /home/tidb/tidb-ansible/ && pip install -r ./requirements.txt

Configure SSH mutual trust and sudo rules for the deployment machines from the control machine

# Configure hosts.ini
su - tidb
cd /home/tidb/tidb-ansible
vim hosts.ini
[servers]
172.16.5.50
172.16.5.51
172.16.5.52
[all:vars]
username = tidb
ntp_server = pool.ntp.org
# Configure SSH mutual trust
ansible-playbook -i hosts.ini create_users.yml -u root -k
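If the playbook cannot be used, the same trust can be set up by hand; a sketch using ssh-copy-id as the tidb user on the control machine (assumes the tidb user already exists on the targets):

ssh-copy-id tidb@172.16.5.50
ssh-copy-id tidb@172.16.5.51
ssh-copy-id tidb@172.16.5.52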

Install the NTP service on the target machines

# From the control machine, install the NTP service on the targets
cd /home/tidb/tidb-ansible
ansible-playbook -i hosts.ini deploy_ntp.yml -u tidb -b
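To confirm the targets are actually synchronized, a check such as the following can be run from the control machine (assumes the ntp package and its ntpstat utility are present on the targets):

ansible -i hosts.ini all -m shell -a 'ntpstat' -b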

Adjust the cpufreq governor on the target machines

# Check the available cpupower governors (the virtual machines used here do not
# support this, so the governor is only set on the server that supports it)
cpupower frequency-info --governors
analyzing CPU 0:
  available cpufreq governors: Not Available
# Set the cpufreq governor to performance
cpupower frequency-set --governor performance
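The governor actually in effect can be double-checked through sysfs, for example:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# expected to print "performance" on a machine that supports governor switching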

Mount the data disk with an ext4 filesystem on the target machines

# Create the partition table (scripted)
parted -s -a optimal /dev/nvme0n1 mklabel gpt -- mkpart primary ext4 1 -1
# Or create the partition manually
parted /dev/sdb
mklabel gpt
mkpart primary 0KB 210GB
# Format the data disk
mkfs.ext4 /dev/sdb
# Check the data disk UUID
[root@tidb-tikv1 ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 xfs f41c3b1b-125f-407c-81fa-5197367feb39 /boot
├─sda2 xfs 8119193b-c774-467f-a057-98329c66b3b3 /
├─sda3
└─sda5 xfs 42356bb3-911a-4dc4-b56e-815bafd08db2 /home
sdb ext4 532697e9-970e-49d4-bdba-df386cac34d2
# On each of the three machines, edit /etc/fstab and add the nodelalloc mount option
vim /etc/fstab
UUID=8119193b-c774-467f-a057-98329c66b3b3 / xfs defaults 0 0
UUID=f41c3b1b-125f-407c-81fa-5197367feb39 /boot xfs defaults 0 0
UUID=42356bb3-911a-4dc4-b56e-815bafd08db2 /home xfs defaults 0 0
UUID=532697e9-970e-49d4-bdba-df386cac34d2 /data ext4 defaults,nodelalloc,noatime 0 2
# Mount the data disk
mkdir /data
mount -a
mount -t ext4
/dev/sdb on /data type ext4 (rw,noatime,seclabel,nodelalloc,data=ordered)

Allocate machine resources and edit the inventory.ini file

# Topology: one TiKV instance per machine
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[all:vars]
#deploy_dir = /home/tidb/deploy
deploy_dir = /data/deploy
# Verify SSH mutual trust
[tidb@tidb-tikv1 tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami'
172.16.5.51 | SUCCESS | rc=0 >>
tidb
172.16.5.52 | SUCCESS | rc=0 >>
tidb
172.16.5.50 | SUCCESS | rc=0 >>
tidb
# Verify passwordless sudo for the tidb user
[tidb@tidb-tikv1 tidb-ansible]$ ansible -i inventory.ini all -m shell -a 'whoami' -b
172.16.5.52 | SUCCESS | rc=0 >>
root
172.16.5.51 | SUCCESS | rc=0 >>
root
172.16.5.50 | SUCCESS | rc=0 >>
root
# Run the local_prepare.yml playbook to download the TiDB binaries to the control machine
ansible-playbook local_prepare.yml
# Initialize the system environment and tune kernel parameters
ansible-playbook bootstrap.yml

2.2 Installing the TiDB Cluster

ansible-playbook deploy.yml

2.3 Starting the TiDB Cluster

ansible-playbook start.yml

2.4 Testing the Cluster

# Connect with a MySQL client; TCP port 4000 is the default TiDB service port
mysql -u root -h 172.16.5.50 -P 4000
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| INFORMATION_SCHEMA |
| PERFORMANCE_SCHEMA |
| mysql              |
| test               |
+--------------------+
4 rows in set (0.00 sec)
# Access the monitoring dashboard in a browser
URL: http://172.16.5.51:3000 (default username/password: admin/admin)
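Besides the MySQL client and Grafana, each TiDB server exposes an HTTP status endpoint (port 10080 by default); a quick liveness check:

curl http://172.16.5.50:10080/status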

3. Scaling Out the TiDB Cluster

3.1 Scaling Out TiDB/TiKV Nodes

# Current topology: one TiKV instance per machine
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
# Add one TiDB node (tidb-tikv4) with IP address 172.16.5.53
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51
172.16.5.53

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.53
# Resulting topology
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
tidb-tikv4  172.16.5.53   TiDB2
# Initialize the new node
ansible-playbook bootstrap.yml -l 172.16.5.53
# Deploy the new node
ansible-playbook deploy.yml -l 172.16.5.53
# Start services on the new node
ansible-playbook start.yml -l 172.16.5.53
# Update the Prometheus configuration and restart it
ansible-playbook rolling_update_monitor.yml --tags=prometheus
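Once started, the new node should accept SQL on port 4000 like the existing TiDB servers; a quick sanity check from any client machine:

mysql -u root -h 172.16.5.53 -P 4000 -e 'SELECT tidb_version();'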

3.2 Scaling Out PD Nodes

# Current topology: one TiKV instance per machine
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
# Add one PD node (tidb-pd1) with IP address 172.16.5.54
# Edit the inventory.ini file
cd /home/tidb/tidb-ansible
vim inventory.ini
## TiDB Cluster Part
[tidb_servers]
172.16.5.50
172.16.5.51

[tikv_servers]
172.16.5.50
172.16.5.51
172.16.5.52

[pd_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.54

## Monitoring Part
# prometheus and pushgateway servers
[monitoring_servers]
172.16.5.50

# node_exporter and blackbox_exporter servers
[monitored_servers]
172.16.5.50
172.16.5.51
172.16.5.52
172.16.5.54
# Resulting topology
Name        HostIP        Services
tidb-tikv1  172.16.5.50   PD1, TiDB1, TiKV1
tidb-tikv2  172.16.5.51   PD2, TiKV2
tidb-tikv3  172.16.5.52   PD3, TiKV3
tidb-pd1    172.16.5.54   PD4
# Initialize the new node
ansible-playbook bootstrap.yml -l 172.16.5.54
# Deploy the new node
ansible-playbook deploy.yml -l 172.16.5.54
# Log in to the new PD node and edit the startup script {deploy_dir}/scripts/run_pd.sh:
# 1. Remove the --initial-cluster="xxxx" \ line.
# 2. Add --join="http://172.16.5.50:2379" \; the IP address can be that of any existing
#    PD node in the cluster (a sketch of the edited script follows this block).
# 3. Start the PD service on the new node manually:
{deploy_dir}/scripts/start_pd.sh
# 4. Use pd-ctl to check that the new node joined successfully:
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://172.16.5.50:2379" -d member
# Rolling-upgrade the whole cluster
ansible-playbook rolling_update.yml
# Update the Prometheus configuration and restart it
ansible-playbook rolling_update_monitor.yml --tags=prometheus
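For reference, a hypothetical excerpt of {deploy_dir}/scripts/run_pd.sh after the edit described above; the flag names are standard pd-server options, but the exact values in a real deployment come from the generated script:

# {deploy_dir}/scripts/run_pd.sh (illustrative excerpt, after editing)
exec bin/pd-server \
    --name="pd_tidb-pd1" \
    --client-urls="http://172.16.5.54:2379" \
    --peer-urls="http://172.16.5.54:2380" \
    --data-dir="{deploy_dir}/data.pd" \
    --join="http://172.16.5.50:2379" \
    --log-file="{deploy_dir}/log/pd.log"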

4. TiDB Cluster Tests

4.1 sysbench Baseline Tests

Install sysbench

# Binary installation
curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash
sudo yum -y install sysbench
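Confirm the installed version before testing (the oltp_*.lua scripts used below ship with sysbench 1.0 and later):

sysbench --version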

Performance tests

# CPU performance test
sysbench --test=cpu --cpu-max-prime=20000 run
----------------------------------start----------------------------------------
Number of threads: 1
Initializing random number generator from current time
Prime numbers limit: 20000
Initializing worker threads...
Threads started!
CPU speed:
    events per second: 286.71
General statistics:
    total time: 10.0004s
    total number of events: 2868
Latency (ms):
    min: 3.46
    avg: 3.49
    max: 4.49
    95th percentile: 3.55
    sum: 9997.23
Threads fairness:
    events (avg/stddev): 2868.0000/0.00
    execution time (avg/stddev): 9.9972/0.00
-----------------------------------end-------------------------------------------
# Thread test
sysbench --test=threads --num-threads=64 --thread-yields=100 --thread-locks=2 run
------------------------------------start-----------------------------------------
Number of threads: 64
Initializing random number generator from current time
Initializing worker threads...
Threads started!
General statistics:
    total time: 10.0048s
    total number of events: 108883
Latency (ms):
    min: 0.05
    avg: 5.88
    max: 49.15
    95th percentile: 17.32
    sum: 640073.32
Threads fairness:
    events (avg/stddev): 1701.2969/36.36
    execution time (avg/stddev): 10.0011/0.00
-----------------------------------end-----------------------------------------
# Disk I/O test
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw prepare
----------------------------------start-----------------------------------------
128 files, 24576Kb each, 3072Mb total
Creating files for the test...
Extra file open flags: (none)
Creating file test_file.0
Creating file test_file.1
...
Creating file test_file.127
3221225472 bytes written in 339.76 seconds (9.04 MiB/sec)
----------------------------------end------------------------------------------
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw run
----------------------------------start-----------------------------------------
Number of threads: 16
Initializing random number generator from current time
Extra file open flags: (none)
128 files, 24MiB each
3GiB total file size
Block size 16KiB
Number of IO requests: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Initializing worker threads...
Threads started!
File operations:
    reads/s: 299.19
    writes/s: 199.46
    fsyncs/s: 816.03
Throughput:
    read, MiB/s: 4.67
    written, MiB/s: 3.12
General statistics:
    total time: 10.8270s
    total number of events: 12189
Latency (ms):
    min: 0.00
    avg: 13.14
    max: 340.58
    95th percentile: 92.42
    sum: 160186.15
Threads fairness:
    events (avg/stddev): 761.8125/216.01
    execution time (avg/stddev): 10.0116/0.01
--------------------------------------end---------------------------------------
sysbench --test=fileio --num-threads=16 --file-total-size=3G --file-test-mode=rndrw cleanup
# Memory test
sysbench --test=memory --memory-block-size=8k --memory-total-size=4G run
------------------------------------start-----------------------------------------
Number of threads: 1
Initializing random number generator from current time
Running memory speed test with the following options:
    block size: 8KiB
    total size: 4096MiB
    operation: write
    scope: global
Initializing worker threads...
Threads started!
Total operations: 524288 (1111310.93 per second)
4096.00 MiB transferred (8682.12 MiB/sec)
General statistics:
    total time: 0.4692s
    total number of events: 524288
Latency (ms):
    min: 0.00
    avg: 0.00
    max: 0.03
    95th percentile: 0.00
    sum: 381.39

Threads fairness:
    events (avg/stddev): 524288.0000/0.00
    execution time (avg/stddev): 0.3814/0.00
-------------------------------------end---------------------------------------

4.2 OLTP Test

# Log in to TiDB and create the test database
mysql -u root -P 4000 -h 172.16.5.50
create database sbtest;
# Prepare the test data
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=20 --table_size=20000000 --threads=100 --max-requests=0 prepare
# --tables=20             create 20 tables
# --table_size=20000000   20 million rows per table
# --threads=100           use 100 threads
---------------------------------errors reported---------------------------------------
FATAL: mysql_drv_query() returned error 9001 (PD server timeout[try again later]
2018/11/23 11:23:19.236 log.go:82: [warning] etcdserver: [timed out waiting for read index response]
2018/11/23 14:15:17.329 heartbeat_streams.go:97: [error] [store 1] send keepalive message fail: EOF
2018/11/23 14:14:04.603 leader.go:312: [info] leader is deleted
2018/11/23 14:14:04.603 leader.go:103: [info] pd2 is not etcd leader, skip campaign leader and check later
2018/11/23 14:21:10.071 coordinator.go:570: [info] [region 1093] send schedule command: transfer leader from store 7 to store 2
FATAL: mysql_drv_query() returned error 1105 (Information schema is out of date)
------------------------------------end-----------------------------------------------
# Retry with 10 threads, 10 tables, and 2,000,000 rows per table
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=10 --table_size=2000000 --threads=10 --max-requests=0 prepare
--------------------------------------start--------------------------------------------
FATAL: mysql_drv_query() returned error 1105 (Information schema is out of date)   # timeout error again
Only 2 tables were fully populated; the other 8 were left incomplete, though their indexes were built.
# Read/write test against the TiDB cluster
sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=1 --table_size=2000000 --threads=10 --max-requests=0 run
----------------------------------------start--------------------------------------
Number of threads: 10
Initializing random number generator from current time
Initializing worker threads...
Threads started!
SQL statistics:
    queries performed:
        read: 868
        write: 62
        other: 310
        total: 1240
    transactions: 62 (5.60 per sec.)
    queries: 1240 (112.10 per sec.)
    ignored errors: 0 (0.00 per sec.)
    reconnects: 0 (0.00 per sec.)
General statistics:
    total time: 11.0594s
    total number of events: 62
Latency (ms):
    min: 944.55
    avg: 1757.78
    max: 2535.05
    95th percentile: 2320.55
    sum: 108982.56
Threads fairness:
    events (avg/stddev): 6.2000/0.40
    execution time (avg/stddev): 10.8983/0.31
------------------------------------end----------------------------------------
# Comparison test against MySQL
mysql -u root -P 3306 -h 172.16.5.154
create database sbtest;
sysbench /usr/share/sysbench/oltp_common.lua --mysql-host=172.16.5.154 --mysql-port=3306 --mysql-user=root --mysql-password=root --tables=20 --table_size=20000000 --threads=10 --max-requests=0 prepare
No errors were observed when running the same test against MySQL.

4.3 Business Data Test

sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-host=172.16.5.50 --mysql-port=4000 --mysql-user=root --tables=20 --table_size=2000000 --threads=10 --max-requests=0 run