MongoDB 3.2 replica set and sharded cluster setup

The operating system environment on the three machines is as follows:

[mongodb@node1 ~]$ cat /etc/issue

CentOS release 6.4 (Final)

Kernel \r on an \m

[mongodb@node1 ~]$ uname -r

2.6.32-358.el6.x86_64

[mongodb@node1 ~]$ uname -m

x86_64

 

The architecture is as follows. The original architecture diagram has been lost, so make do with the table below.


 

192.168.75.128: shard1:10001, shard2:10002, shard3:10003, configsvr:10004, mongos:10005

Note: shard1 primary, shard2 arbiter, shard3 secondary

192.168.75.129: shard1:10001, shard2:10002, shard3:10003, configsvr:10004, mongos:10005

Note: shard1 secondary, shard2 primary, shard3 arbiter

192.168.75.130: shard1:10001, shard2:10002, shard3:10003, configsvr:10004, mongos:10005

Note: shard1 arbiter, shard2 secondary, shard3 primary

node1:192.168.75.128

node2:192.168.75.129

node3:192.168.75.130

 

Create the mongodb user

[root@node1 ~]# groupadd  mongodb

[root@node1 ~]# useradd  -g mongodb mongodb

[root@node1 ~]# mkdir /data

[root@node1 ~]# chown mongodb.mongodb /data -R

[root@node1 ~]# su - mongodb

Create the directories and files

[mongodb@node1 ~]$ mkdir /data/{config,shard1,shard2,shard3,mongos,logs,configsvr,keyfile} -pv

[mongodb@node1 ~]$ touch /data/keyfile/zxl

[mongodb@node1 ~]$ touch /data/logs/shard{1..3}.log

[mongodb@node1 ~]$ touch /data/logs/{configsvr,mongos}.log

[mongodb@node1 ~]$ touch /data/config/shard{1..3}.conf

[mongodb@node1 ~]$ touch /data/config/{configsvr,mongos}.conf
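The keyfile created above is still empty. It only matters if the commented-out security section in the config files below is later enabled, in which case every node needs identical keyfile contents and owner-only permissions; a minimal sketch:

[mongodb@node1 ~]$ openssl rand -base64 741 > /data/keyfile/zxl

[mongodb@node1 ~]$ chmod 600 /data/keyfile/zxl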

Download MongoDB

[mongodb@node1 ~]$ wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.2.3.tgz

[mongodb@node3 ~]$ tar fxz mongodb-linux-x86_64-rhel62-3.2.3.tgz -C /data

[mongodb@node3 ~]$ ln -s /data/mongodb-linux-x86_64-rhel62-3.2.3 /data/mongodb

Configure the MongoDB environment variables

[mongodb@node1 ~]$ echo "export PATH=$PATH:/data/mongodb/bin" >> ~/.bash_profile

[mongodb@node1 data]$ source ~/.bash_profile
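One caveat worth knowing: with double quotes, the shell expands $PATH when the echo runs, baking the current value into ~/.bash_profile. Single quotes defer the expansion to login time, which is usually what you want:

[mongodb@node1 ~]$ echo 'export PATH=$PATH:/data/mongodb/bin' >> ~/.bash_profile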

 

The shard1.conf configuration file contents are as follows:

[mongodb@node1 ~]$ cat /data/config/shard1.conf 

systemLog:

  destination: file

  path: /data/logs/shard1.log

  logAppend: true

processManagement:

  fork: true

  pidFilePath: "/data/shard1/shard1.pid"

net:

  port: 10001

storage:

  dbPath: "/data/shard1"

  engine: wiredTiger

  journal:

    enabled: true

  directoryPerDB: true

operationProfiling:

  slowOpThresholdMs: 10

  mode: "slowOp"

#security:

#  keyFile: "/data/keyfile/zxl"

#  clusterAuthMode: "keyFile"

replication:

  oplogSizeMB: 50

  replSetName: "shard1_zxl"

  secondaryIndexPrefetch: "all"

The shard2.conf configuration file contents are as follows:

[mongodb@node1 ~]$ cat /data/config/shard2.conf 

systemLog:

  destination: file

  path: /data/logs/shard2.log

  logAppend: true

processManagement:

  fork: true

  pidFilePath: "/data/shard2/shard2.pid"

net:

  port: 10002

storage:

  dbPath: "/data/shard2"

  engine: wiredTiger

  journal:

    enabled: true

  directoryPerDB: true

operationProfiling:

  slowOpThresholdMs: 10

  mode: "slowOp"

#security:

#  keyFile: "/data/keyfile/zxl"

#  clusterAuthMode: "keyFile"

replication:

  oplogSizeMB: 50

  replSetName: "shard2_zxl"

  secondaryIndexPrefetch: "all"

The shard3.conf configuration file contents are as follows:

[mongodb@node1 ~]$ cat /data/config/shard3.conf 

systemLog:

  destination: file

  path: /data/logs/shard3.log

  logAppend: true

processManagement:

  fork: true

  pidFilePath: "/data/shard3/shard3.pid"

net:

  port: 10003

storage:

  dbPath: "/data/shard3"

  engine: wiredTiger

  journal:

    enabled: true

  directoryPerDB: true

operationProfiling:

  slowOpThresholdMs: 10

  mode: "slowOp"

#security:

#  keyFile: "/data/keyfile/zxl"

#  clusterAuthMode: "keyFile"

replication:

  oplogSizeMB: 50

  replSetName: "shard3_zxl"

  secondaryIndexPrefetch: "all"

The configsvr.conf configuration file contents are as follows:

[mongodb@node1 ~]$ cat /data/config/configsvr.conf 

systemLog:

  destination: file

  path: /data/logs/configsvr.log

  logAppend: true

processManagement:

  fork: true

  pidFilePath: "/data/configsvr/configsvr.pid"

net:

  port: 10004

storage:

  dbPath: "/data/configsvr"

  engine: wiredTiger

  journal:

    enabled: true

#security:

#  keyFile: "/data/keyfile/zxl"

#  clusterAuthMode: "keyFile"

sharding:

  clusterRole: configsvr

The mongos.conf configuration file contents are as follows:

[mongodb@node3 ~]$ cat /data/config/mongos.conf 

systemLog:

  destination: file

  path: /data/logs/mongos.log

  logAppend: true

processManagement:

  fork: true

  pidFilePath: /data/mongos/mongos.pid

net:

  port: 10005

sharding:

  configDB: 192.168.75.128:10004,192.168.75.129:10004,192.168.75.130:10004

#security:

#  keyFile: "/data/keyfile/zxl"

#  clusterAuthMode: "keyFile"

Note: the steps above were performed only on node1. Repeat them on the other two machines: create the user, create the directories and files, install MongoDB, and copy the configuration files into the corresponding directories on node2 and node3. After copying, check that the files are owned by the mongodb user and group; a sketch of the copy step follows. As for the config servers, the official recommendation is one or three, i.e. an odd number; the MongoDB documentation explains why (the original link has been lost, so search for it yourself).
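For example, the copy step might look like this (a sketch, assuming SSH access as the mongodb user and that the target directories already exist):

[mongodb@node1 ~]$ scp /data/config/*.conf mongodb@192.168.75.129:/data/config/

[mongodb@node1 ~]$ scp /data/config/*.conf mongodb@192.168.75.130:/data/config/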

Start the mongod instances on each machine: shard1, shard2, shard3

[mongodb@node1 ~]$ mongod -f /data/config/shard1.conf

mongod: /usr/lib64/libcrypto.so.10: no version information available (required by mongod)

mongod: /usr/lib64/libcrypto.so.10: no version information available (required by mongod)

mongod: /usr/lib64/libssl.so.10: no version information available (required by mongod)

mongod: relocation error: mongod: symbol TLSv1_1_client_method, version libssl.so.10 not defined in file libssl.so.10 with link time reference

Note: mongod fails to start and prints the errors above.

Fix: install openssl-devel on all three machines.

[mongodb@node1 ~]$ su - root

Password: 

[root@node1 ~]# yum install openssl-devel -y

Switch back to the mongodb user and start the mongod instances (shard1, shard2, shard3) on all three machines again:

[mongodb@node1 ~]$ mongod -f /data/config/shard1.conf

about to fork child process, waiting until server is ready for connections.

forked process: 1737

child process started successfully, parent exiting

[mongodb@node1 ~]$ mongod -f /data/config/shard2.conf

about to fork child process, waiting until server is ready for connections.

forked process: 1760

child process started successfully, parent exiting

[mongodb@node1 ~]$ mongod -f /data/config/shard3.conf

about to fork child process, waiting until server is ready for connections.

forked process: 1783

child process started successfully, parent exiting

Log in to the mongod instance on node1, port 10001:

[mongodb@node1 ~]$ mongo --port 10001

MongoDB shell version: 3.2.3

connecting to: 127.0.0.1:10001/test

Welcome to the MongoDB shell.

For interactive help, type "help".

For more comprehensive documentation, see

http://docs.mongodb.org/

Questions? Try the support group

http://groups.google.com/group/mongodb-user

Server has startup warnings: 

2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] 

2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.

2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] 

2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.

2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

2016-03-08T13:28:18.508+0800 I CONTROL  [initandlisten]

Note: startup prints the transparent hugepage warnings above.

Fix: run the following on all three machines:

[mongodb@node2 config]$ su - root

Password: 

[root@node1 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled

[root@node1 ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
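These settings do not survive a reboot. A common way to persist them (an addition of mine, not part of the original steps) is to append the same commands to /etc/rc.local:

[root@node1 ~]# echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local

[root@node1 ~]# echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.local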

Shut down the mongod instances on all three machines, then start them again:

[mongodb@node1 ~]$ netstat -ntpl|grep mongo|awk '{print $NF}'|awk -F'/' '{print $1}'|xargs kill 

[mongodb@node1 ~]$ mongod -f /data/config/shard1.conf

[mongodb@node1 ~]$ mongod -f /data/config/shard2.conf

[mongodb@node1 ~]$ mongod -f /data/config/shard3.conf
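As an alternative to the netstat/kill pipeline, mongod can shut itself down cleanly using the same config file (a sketch; repeat for shard2 and shard3):

[mongodb@node1 ~]$ mongod -f /data/config/shard1.conf --shutdown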

Configure the replica sets

Configure the shard1 replica set on node1:

[mongodb@node1 config]$ mongo --port 10001

MongoDB shell version: 3.2.3

connecting to: 127.0.0.1:10001/test

> use admin

switched to db admin

> config = { _id:"shard1_zxl", members:[

... ... {_id:0,host:"192.168.75.128:10001"},

... ... {_id:1,host:"192.168.75.129:10001"},

... ... {_id:2,host:"192.168.75.130:10001",arbiterOnly:true}

... ... ]

... ... }

{

	"_id" : "shard1_zxl",

	"members" : [

		{

			"_id" : 0,

			"host" : "192.168.75.128:10001"

		},

		{

			"_id" : 1,

			"host" : "192.168.75.129:10001"

		},

		{

			"_id" : 2,

			"host" : "192.168.75.130:10001",

			"arbiterOnly" : true

		}

	]

}

> rs.initiate(con

config                 connect(               connectionURLTheSame(  constructor

> rs.initiate(config)

"ok" : 1 }

Configure the shard2 replica set on node2:

[mongodb@node2 config]$ mongo --port 10002

MongoDB shell version: 3.2.3

connecting to: 127.0.0.1:10002/test

Welcome to the MongoDB shell.

For interactive help, type "help".

For more comprehensive documentation, see

http://docs.mongodb.org/

Questions? Try the support group

http://groups.google.com/group/mongodb-user

> use admin

switched to db admin

> config = { _id:"shard2_zxl", members:[

... ... {_id:0,host:"192.168.75.129:10002"},

... ... {_id:1,host:"192.168.75.130:10002"},

... ... {_id:2,host:"192.168.75.128:10002",arbiterOnly:true}

... ... ]

... ... }

{

	"_id" : "shard2_zxl",

	"members" : [

		{

			"_id" : 0,

			"host" : "192.168.75.129:10002"

		},

		{

			"_id" : 1,

			"host" : "192.168.75.130:10002"

		},

		{

			"_id" : 2,

			"host" : "192.168.75.128:10002",

			"arbiterOnly" : true

		}

	]

}

> rs.initiate(config)

"ok" : 1 }

Configure the shard3 replica set on node3:

[mongodb@node3 config]$ mongo --port 10003

MongoDB shell version: 3.2.3

connecting to: 127.0.0.1:10003/test

Welcome to the MongoDB shell.

For interactive help, type "help".

For more comprehensive documentation, see

http://docs.mongodb.org/

Questions? Try the support group

http://groups.google.com/group/mongodb-user

> use admin

switched to db admin

>  config = {_id:"shard3_zxl", members:[

... ... {_id:0,host:"192.168.75.130:10003"},

... ... {_id:1,host:"192.168.75.128:10003"},

... ... {_id:2,host:"192.168.75.129:10003",arbiterOnly:true}

... ... ]

... ... }

{

	"_id" : "shard3_zxl",

	"members" : [

		{

			"_id" : 0,

			"host" : "192.168.75.130:10003"

		},

		{

			"_id" : 1,

			"host" : "192.168.75.128:10003"

		},

		{

			"_id" : 2,

			"host" : "192.168.75.129:10003",

			"arbiterOnly" : true

		}

	]

}

> rs.initiate(config)

"ok" : 1 }

Note: the above configures the replica sets. Commands such as rs.status() show the state of each replica set; a quick check is sketched below.
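A compact way to confirm each member's role (a sketch using shard1; which node is primary depends on the election, but with this config it should look like the following):

[mongodb@node1 ~]$ mongo --port 10001

shard1_zxl:PRIMARY> rs.status().members.forEach(function (m) { print(m.name + " " + m.stateStr) })

192.168.75.128:10001 PRIMARY

192.168.75.129:10001 SECONDARY

192.168.75.130:10001 ARBITER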

Start the configsvr and mongos instances on all three machines

[mongodb@node1 logs]$ mongod -f /data/config/configsvr.conf

about to fork child process, waiting until server is ready for connections.

forked process: 6317

child process started successfully, parent exiting

[mongodb@node1 logs]$ mongos -f /data/config/mongos.conf

about to fork child process, waiting until server is ready for connections.

forked process: 6345

child process started successfully, parent exiting

Configure sharding

Add the shards on node1:

[mongodb@node1 config]$ mongo --port 10005

MongoDB shell version: 3.2.3

connecting to: 127.0.0.1:10005/test

mongos> use admin

switched to db admin

mongos> db.runCommand({addshard:"shard1_zxl/192.168.75.128:10001,192.168.75.129:10001,192.168.75.130:10001"});

"shardAdded" "shard1_zxl""ok" : 1 }

mongos> db.runCommand({addshard:"shard2_zxl/192.168.75.128:10002,192.168.75.129:10002,192.168.75.130:10002"});

"shardAdded" "shard2_zxl""ok" : 1 }

mongos> db.runCommand({addshard:"shard3_zxl/192.168.75.128:10003,192.168.75.129:10003,192.168.75.130:10003"});

"shardAdded" "shard3_zxl""ok" : 1 }

#db.runCommand({addshard:"shard1_zxl/192.168.33.131:10001,192.168.33.132:10001,192.168.33.136:10001"});

#db.runCommand({addshard:"shard2_zxl/192.168.33.131:10002,192.168.33.132:10002,192.168.33.136:10002"});

#db.runCommand({addshard:"shard3_zxl/192.168.33.131:10003,192.168.33.132:10003,192.168.33.136:10003"});

Note: the commented-out block above is the same set of addshard commands for a different environment (192.168.33.x); adjust the IPs to match your own setup before running them.

View the shard information:

mongos> sh.status()

--- Sharding Status --- 

  sharding version: {

"_id" : 1,

"minCompatibleVersion" : 5,

"currentVersion" : 6,

"clusterId" : ObjectId("56de6f4176b47beaa9c75e9d")

}

  shards:

{  "_id" "shard1_zxl",  "host" "shard1_zxl/192.168.75.128:10001,192.168.75.129:10001" }

{  "_id" "shard2_zxl",  "host" "shard2_zxl/192.168.75.129:10002,192.168.75.130:10002" }

{  "_id" "shard3_zxl",  "host" "shard3_zxl/192.168.75.128:10003,192.168.75.130:10003" }

  active mongoses:

"3.2.3" : 3

  balancer:

Currently enabled:  yes

Currently running:  no

Failed balancer rounds in last 5 attempts:  0

Migration Results for the last 24 hours: 

No recent migrations

  databases:

Check the shard status:

mongos> db.runCommand( {listshards : 1 } )

{

"shards" : [

	{

		"_id" : "shard1_zxl",

		"host" : "shard1_zxl/192.168.75.128:10001,192.168.75.129:10001"

	},

	{

		"_id" : "shard2_zxl",

		"host" : "shard2_zxl/192.168.75.129:10002,192.168.75.130:10002"

	},

	{

		"_id" : "shard3_zxl",

		"host" : "shard3_zxl/192.168.75.128:10003,192.168.75.130:10003"

	}

],

"ok" : 1

}

Enable sharding on the database named 'zxl':

mongos> sh.enableSharding("zxl")

"ok" : 1 }

Shard the haha collection in the zxl database on the chosen key fields; the index on the shard key is built automatically (a check is sketched after the command):

mongos> sh.shardCollection("zxl.haha",{age: 1, name: 1})

"collectionsharded" "zxl.haha""ok" : 1 }

Simulate inserting 10,000 documents into the haha collection (the shell must be on the zxl database, as above):

mongos> for (i=1;i<=10000;i++) db.haha.insert({name: "user"+i, age: (i%150)})

WriteResult({ "nInserted" : 1 })
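A quick sanity check that all 10,000 documents landed:

mongos> db.haha.count()

10000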

You can use the mongos> sh.status() command shown above to see how the data is spread across the shards. With that, the replica set and shard setup is complete; the main thing is still to understand how replica sets and sharding work. Here is the resulting output:

mongos> sh.status()

--- Sharding Status --- 

  sharding version: {

"_id" : 1,

"minCompatibleVersion" : 5,

"currentVersion" : 6,

"clusterId" : ObjectId("56de6f4176b47beaa9c75e9d")

}

  shards:

{  "_id" "shard1_zxl",  "host" "shard1_zxl/192.168.75.128:10001,192.168.75.129:10001" }

{  "_id" "shard2_zxl",  "host" "shard2_zxl/192.168.75.129:10002,192.168.75.130:10002" }

{  "_id" "shard3_zxl",  "host" "shard3_zxl/192.168.75.128:10003,192.168.75.130:10003" }

  active mongoses:

"3.2.3" : 3

  balancer:

Currently enabled:  yes

Currently running:  no

Failed balancer rounds in last 5 attempts:  0

Migration Results for the last 24 hours: 

2 : Success

  databases:

{  "_id" "zxl",  "primary" "shard3_zxl",  "partitioned" true }

zxl.haha

shard key: { "age" : 1, "name" : 1 }

unique: false

balancing: true

chunks:

shard1_zxl	1

shard2_zxl	1

shard3_zxl	1

"age" : { "$minKey" : 1 }, "name" : { "$minKey" : 1 } } -->> { "age" : 2, "name" "user2" } on : shard1_zxl Timestamp(2, 0) 

"age" : 2, "name" "user2" } -->> { "age" : 22, "name" "user22" } on : shard2_zxl Timestamp(3, 0) 

"age" : 22, "name" "user22" } -->> { "age" : { "$maxKey" : 1 }, "name" : { "$maxKey" : 1 } } on : shard3_zxl Timestamp(3, 1)

That completes the MongoDB 3.2 replica set and sharded cluster setup. It is still worth spending more time on what each role means and the principles behind it.
