This article is mainly an excerpt and translation of the official documentation. 7.2 is the latest version; it integrates MySQL 5.5 and a memcached API, and is said to perform very well. Because there is not much material on this topic and documents found on the web may contain errors, I translated the official manual.
Adding data nodes online can be broken down roughly into the following steps:
1. Edit the cluster configuration file config.ini, adding new [ndbd] sections corresponding to the nodes to be added. If the cluster uses multiple management servers, these changes need to be made to all config.ini files used by the management servers.
2. Perform a rolling restart of all MySQL Cluster management servers. All management servers must be restarted with the --reload or --initial option to force the reading of the new configuration.
3. Perform a rolling restart of all existing MySQL Cluster data nodes. It is not necessary (or usually even desirable) to use --initial when restarting the existing data nodes. If you are using API nodes with dynamically allocated IDs matching any node IDs that you wish to assign to new data nodes, you must restart all API nodes (including SQL nodes) before restarting any of the data node processes in this step. This causes any API nodes with node IDs that were previously not explicitly assigned to relinquish those node IDs and acquire new ones.
4. Perform a rolling restart of any SQL or API nodes connected to the MySQL Cluster.
5. Perform an initial start of the new data nodes.
6. Execute one or more CREATE NODEGROUP commands in the MySQL Cluster management client to create the new node group or node groups to which the new data nodes will belong.
7. Redistribute the cluster's data to all data nodes, including the new ones, by issuing an ALTER ONLINE TABLE ... REORGANIZE PARTITION statement in the mysql client for each NDBCLUSTER table. (This needs to be done only for tables already existing at the time the new node group is added. Data in tables created after the new node group is added is distributed automatically; however, data added to any given table tbl that existed before the new nodes were added is not distributed using the new nodes until that table has been reorganized using ALTER ONLINE TABLE tbl REORGANIZE PARTITION.)
8. Reclaim the space freed on the "old" nodes by issuing, for each NDBCLUSTER table, an OPTIMIZE TABLE statement in the mysql client.
An example:
Suppose the current configuration file config.ini is as follows:
[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster
[ndbd]
Id = 1
HostName = 192.168.0.1
[ndbd]
Id = 2
HostName = 192.168.0.2
[mgm]
HostName = 192.168.0.10
Id = 10
[api]
Id=20
HostName = 192.168.0.20
[api]
Id=21
HostName = 192.168.0.21
Note: we leave a gap in the node ID sequence between the data nodes and the other nodes. This makes it easy to assign new, unused node IDs to the data nodes we add later.
Now let's run SHOW to look at the whole cluster:
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=1 @192.168.0.1 (5.1.61-ndb-7.1.20, Nodegroup: 0, Master)
id=2 @192.168.0.2 (5.1.61-ndb-7.1.20, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=10 @192.168.0.10 (5.1.61-ndb-7.1.20)
[mysqld(API)] 2 node(s)
id=20 @192.168.0.20 (5.1.61-ndb-7.1.20)
id=21 @192.168.0.21 (5.1.61-ndb-7.1.20)
Finally, assume the cluster contains only one NDBCLUSTER table:
USE n;
CREATE TABLE ips (
id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
country_code CHAR(2) NOT NULL,
type CHAR(4) NOT NULL,
ip_address varchar(15) NOT NULL,
addresses BIGINT UNSIGNED DEFAULT NULL,
date BIGINT UNSIGNED DEFAULT NULL
) ENGINE NDBCLUSTER;
Note: according to the original manual, there is a bug to watch out for when using the multi-threaded data node binary ndbmtd (I am not sure this part of the translation is accurate).
Step 1
Update the configuration file (config.ini)
Suppose we add two new data nodes, 192.168.0.3 and 192.168.0.4:
[ndbd default]
DataMemory = 100M
IndexMemory = 100M
NoOfReplicas = 2
DataDir = /usr/local/mysql/var/mysql-cluster
[ndbd]
Id = 1
HostName = 192.168.0.1
[ndbd]
Id = 2
HostName = 192.168.0.2
[ndbd]
Id = 3
HostName = 192.168.0.3
[ndbd]
Id = 4
HostName = 192.168.0.4
[mgm]
HostName = 192.168.0.10
Id = 10
[api]
Id=20
HostName = 192.168.0.20
[api]
Id=21
HostName = 192.168.0.21
The [ndbd] sections with Id = 3 and Id = 4 are the newly added configuration.
Step 2
Restart the management node
Find the management node's ID and stop it:
ndb_mgm> 10 STOP
Node 10 has shut down.
Disconnecting to allow Management Server to shutdown
Restart the management node with the --reload option:
shell> ndb_mgmd -f config.ini --reload
Now run SHOW again:
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1 @192.168.0.1 (5.1.61-ndb-7.1.20, Nodegroup: 0, Master)
id=2 @192.168.0.2 (5.1.61-ndb-7.1.20, Nodegroup: 0)
id=3 (not connected, accepting connect from 192.168.0.3)
id=4 (not connected, accepting connect from 192.168.0.4)
[ndb_mgmd(MGM)] 1 node(s)
id=10 @192.168.0.10 (5.1.61-ndb-7.1.20)
[mysqld(API)] 2 node(s)
id=20 @192.168.0.20 (5.1.61-ndb-7.1.20)
id=21 @192.168.0.21 (5.1.61-ndb-7.1.20)
You can see that the newly added nodes have been picked up by the management node.
Step 3
Perform a rolling restart of the existing data nodes
ndb_mgm> 1 RESTART
Node 1: Node shutdown initiated
Node 1: Node shutdown completed, restarting, no start.
Node 1 is being restarted
ndb_mgm> Node 1: Start initiated (version 7.1.20)
Node 1: Started (version 7.1.20)
ndb_mgm> 2 RESTART
Node 2: Node shutdown initiated
Node 2: Node shutdown completed, restarting, no start.
Node 2 is being restarted
ndb_mgm> Node 2: Start initiated (version 7.1.20)
ndb_mgm> Node 2: Started (version 7.1.20)
Note: be sure to wait until the management client reports that a node has started (Node X: Started, as in the output above) before restarting the next data node; if in doubt, you can also query a node directly, as shown below.
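A minimal way to check, using the node IDs from this example, is the management client's <node_id> STATUS command:
ndb_mgm> 1 STATUS
ndb_mgm> 2 STATUS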
Step 4
Perform a rolling restart of the SQL nodes (run this on each SQL node host, one host at a time):
shell> service mysqld restart
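If the mysqld init script is not installed on an SQL node host, a rough equivalent (an assumption about a typical local install, not part of the original article) is to stop and start the server by hand:
shell> mysqladmin -u root -p shutdown
shell> mysqld_safe &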
Step 5
Perform an initial start of the new data nodes. Run ndbd on each new host, pointing -c at the management node:
On 192.168.0.3:
shell> ndbd -c 192.168.0.10 --initial
On 192.168.0.4:
shell> ndbd -c 192.168.0.10 --initial
Note: unlike the rolling restart above, there is no need to wait for one new data node to come up before starting the other; they can be started at the same time.
Now run SHOW:
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1 @192.168.0.1 (5.1.61-ndb-7.1.20, Nodegroup: 0, Master)
id=2 @192.168.0.2 (5.1.61-ndb-7.1.20, Nodegroup: 0)
id=3 @192.168.0.3 (5.1.61-ndb-7.1.20, no nodegroup)
id=4 @192.168.0.4 (5.1.61-ndb-7.1.20, no nodegroup)
[ndb_mgmd(MGM)] 1 node(s)
id=10 @192.168.0.10 (5.1.61-ndb-7.1.20)
[mysqld(API)] 2 node(s)
id=20 @192.168.0.20 (5.1.61-ndb-7.1.20)
id=21 @192.168.0.21 (5.1.61-ndb-7.1.20)
Step 6
Create a new node group for the new data nodes:
ndb_mgm> CREATE NODEGROUP 3,4
Nodegroup 1 created
ndb_mgm> SHOW
Connected to Management Server at: 192.168.0.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=1 @192.168.0.1 (5.1.61-ndb-7.1.20, Nodegroup: 0, Master)
id=2 @192.168.0.2 (5.1.61-ndb-7.1.20, Nodegroup: 0)
id=3 @192.168.0.3 (5.1.61-ndb-7.1.20, Nodegroup: 1)
id=4 @192.168.0.4 (5.1.61-ndb-7.1.20, Nodegroup: 1)
[ndb_mgmd(MGM)] 1 node(s)
id=10 @192.168.0.10 (5.1.61-ndb-7.1.20)
[mysqld(API)] 2 node(s)
id=20 @192.168.0.20 (5.1.61-ndb-7.1.20)
id=21 @192.168.0.21 (5.1.61-ndb-7.1.20)
Step 7
Redistribute the cluster data
When new data nodes are added, existing data and indexes are not automatically redistributed to them, as the memory report shows:
ndb_mgm> ALL REPORT MEMORY
Node 1: Data usage is 5%(177 32K pages of total 3200)
Node 1: Index usage is 0%(108 8K pages of total 12832)
Node 2: Data usage is 5%(177 32K pages of total 3200)
Node 2: Index usage is 0%(108 8K pages of total 12832)
Node 3: Data usage is 0%(0 32K pages of total 3200)
Node 3: Index usage is 0%(0 8K pages of total 12832)
Node 4: Data usage is 0%(0 32K pages of total 3200)
Node 4: Index usage is 0%(0 8K pages of total 12832)
You can view partition information with ndb_desc -p. Taking the table above as an example, you can see that it still uses only 2 partitions:
shell> ndb_desc -c 192.168.0.10 -d n ips -p
-- ips --
Version: 1
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 6
Number of primary keys: 1
Length of frm data: 340
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 2
TableStatus: Retrieved
-- Attributes --
id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
addresses Bigunsigned NULL AT=FIXED ST=MEMORY
date Bigunsigned NULL AT=FIXED ST=MEMORY
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
-- Per partition info --
Partition Row count Commit count Frag fixed memory Frag varsized memory
0 26086 26086 1572864 557056
1 26329 26329 1605632 557056
NDBT_ProgramExit: 0 - OK
The address after -c is the management node's address, -d is followed by the database name, then the table name, and -p prints the per-partition information.
Now redistribute the data.
On an SQL node, run the following for every table that uses the NDBCLUSTER engine:
mysql> ALTER ONLINE TABLE ... REORGANIZE PARTITION;
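For the single example table in this article (ips in database n), that works out to:
mysql> ALTER ONLINE TABLE ips REORGANIZE PARTITION;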
Now look at the table again:
shell> ndb_desc -c 192.168.0.10 -d n ips -p
-- ips --
Version: 16777217
Fragment type: 9
K Value: 6
Min load factor: 78
Max load factor: 80
Temporary table: no
Number of attributes: 6
Number of primary keys: 1
Length of frm data: 341
Row Checksum: 1
Row GCI: 1
SingleUserMode: 0
ForceVarPart: 1
FragmentCount: 4
TableStatus: Retrieved
-- Attributes --
id Bigint PRIMARY KEY DISTRIBUTION KEY AT=FIXED ST=MEMORY AUTO_INCR
country_code Char(2;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
type Char(4;latin1_swedish_ci) NOT NULL AT=FIXED ST=MEMORY
ip_address Varchar(15;latin1_swedish_ci) NOT NULL AT=SHORT_VAR ST=MEMORY
addresses Bigunsigned NULL AT=FIXED ST=MEMORY
date Bigunsigned NULL AT=FIXED ST=MEMORY
-- Indexes --
PRIMARY KEY(id) - UniqueHashIndex
PRIMARY(id) - OrderedIndex
-- Per partition info --
Partition Row count Commit count Frag fixed memory Frag varsized memory
0 12981 52296 1572864 557056
1 13236 52515 1605632 557056
2 13105 13105 819200 294912
3 13093 13093 819200 294912
NDBT_ProgramExit: 0 - OK
After the data has been redistributed across the new partitions, it is usual to run OPTIMIZE TABLE to reclaim the space that was freed on the old nodes; a short example follows.
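For the example table used throughout this article, that is simply:
mysql> OPTIMIZE TABLE ips;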
To list all tables that use the NDBCLUSTER engine, you can use the following query:
mysql> SELECT table_schema, table_name FROM information_schema.tables WHERE engine = 'ndbcluster';
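If there are many NDB tables, one convenient variation (my own suggestion, not part of the original article) is to have the same query generate the OPTIMIZE TABLE statements for you:
mysql> SELECT CONCAT('OPTIMIZE TABLE ', table_schema, '.', table_name, ';') FROM information_schema.tables WHERE engine = 'ndbcluster';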
Now use ALL REPORT MEMORY again to see how data and index memory usage is distributed:
ndb_mgm> ALL REPORT MEMORY
Node 1: Data usage is 5%(176 32K pages of total 3200)
Node 1: Index usage is 0%(76 8K pages of total 12832)
Node 2: Data usage is 5%(176 32K pages of total 3200)
Node 2: Index usage is 0%(76 8K pages of total 12832)
Node 3: Data usage is 2%(80 32K pages of total 3200)
Node 3: Index usage is 0%(51 8K pages of total 12832)
Node 4: Data usage is 2%(80 32K pages of total 3200)
Node 4: Index usage is 0%(50 8K pages of total 12832)