Notes on Building a MongoDB Distributed Cluster

1、Architecture Overview

Goal
Build a MongoDB distributed cluster (replica sets + sharded cluster) on a single machine, and walk through the installation, deployment, and basic operation of the cluster.

Notes
On the same VM, start a distributed cluster composed of two shards; each shard is a data replica set in PSS (Primary-Secondary-Secondary) mode.
The config server replica set also uses the PSS (Primary-Secondary-Secondary) mode.

2、Configuration Overview

  • Port allocation
    The cluster consists of 12 process nodes in total (shard, config, and mongos). The port matrix is laid out as follows:
No.  Instance type  Port
1    mongos         25001
2    mongos         25002
3    mongos         25003
4    config         26001
5    config         26002
6    config         26003
7    shard1         27001
8    shard1         27002
9    shard1         27003
10   shard2         27004
11   shard2         27005
12   shard2         27006
  • Internal authentication
    Inter-node authentication uses a keyfile; the mongos routers and the shards, as well as the members within each replica set, share the same keyfile (a configuration sketch follows at the end of this section). See the official documentation for details.

  • Accounts
    Administrator account: admin/Admin@01, with management privileges over the cluster and all databases.
    Application account: appuser/AppUser@01, with owner privileges on the appdb database.

About initial privileges
Keyfile authentication implicitly enables access control. For the initial installation scenario, MongoDB provides the localhost exception,
which allows users, roles, and the initial replica set operations to be set up from the local machine on first startup.
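For reference, the keyfile can also be declared in the shared mongod configuration file instead of on the command line; a minimal sketch (the path assumes the keyfile location created in the Preparation section):

security:
    keyFile: /opt/local/mongo-cluster/keyfile/mongo.key
    authorization: "enabled"

In this walkthrough we keep authorization: "enabled" in mongo_node.conf and pass the keyfile to each instance via the --keyFile command-line option.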

3、Preparation

1. Download the installation package

Official download page: https://www.mongodb.com/download-center

wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-3.6.3.tgz

2. Deployment directory

Unpack the archive and copy the bin directory to the target path /opt/local/mongo-cluster, for example:

tar -xzvf mongodb-linux-x86_64-rhel70-3.6.3.tgz
mkdir -p  /opt/local/mongo-cluster
cp -r mongodb-linux-x86_64-rhel70-3.6.3/bin  /opt/local/mongo-cluster

3. Create the configuration files

cd /opt/local/mongo-cluster
mkdir conf

A. mongod configuration file mongo_node.conf
mongo_node.conf is the configuration file shared by all mongod instances; its content is as follows:

storage:
    engine: wiredTiger
    directoryPerDB: true
    journal:
        enabled: true
systemLog:
    destination: file
    logAppend: true
operationProfiling:
    slowOpThresholdMs: 10000
replication:
    oplogSizeMB: 10240
processManagement:
    fork: true
net:
    http:
      enabled: false
security:
    authorization: "enabled"

See the official documentation for descriptions of these options.

B. mongos configuration file mongos.conf

systemLog:
    destination: file
    logAppend: true
processManagement:
    fork: true
net:
    http:
      enabled: false

4. Create the keyfile

cd /opt/local/mongo-cluster
mkdir keyfile
openssl rand -base64 756 > mongo.key
chmod 400 mongo.key
mv mongo.key keyfile

mongo.key is generated from random data and is used as the key file for internal authentication between nodes.

5. Create the node directories

WORK_DIR=/opt/local/mongo-cluster
mkdir -p $WORK_DIR/nodes/config/n1/data
mkdir -p $WORK_DIR/nodes/config/n2/data
mkdir -p $WORK_DIR/nodes/config/n3/data

mkdir -p $WORK_DIR/nodes/shard1/n1/data
mkdir -p $WORK_DIR/nodes/shard1/n2/data
mkdir -p $WORK_DIR/nodes/shard1/n3/data

mkdir -p $WORK_DIR/nodes/shard2/n1/data
mkdir -p $WORK_DIR/nodes/shard2/n2/data
mkdir -p $WORK_DIR/nodes/shard2/n3/data

mkdir -p $WORK_DIR/nodes/mongos/n1
mkdir -p $WORK_DIR/nodes/mongos/n2
mkdir -p $WORK_DIR/nodes/mongos/n3

Taking config node 1 as an example, nodes/config/n1/data is the data directory, while the pid file and log file are stored directly under n1.
Taking mongos node 1 as an example, nodes/mongos/n1 holds only the pid file and log file (mongos has no data directory). An illustrative layout is shown below.
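Once the instances have been started (next part), the layout under a config node directory should look roughly like this (illustrative):

nodes/config/n1
├── data      (data files)
├── db.log    (log file)
└── db.pid    (pid file)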

4、Building the Cluster

1. Config server replica set

Start the three config server instances with the following script:

WORK_DIR=/opt/local/mongo-cluster
KEYFILE=$WORK_DIR/keyfile/mongo.key
CONFFILE=$WORK_DIR/conf/mongo_node.conf
MONGOD=$WORK_DIR/bin/mongod

$MONGOD --port 26001 --configsvr --replSet configReplSet --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/config/n1/data --pidfilepath $WORK_DIR/nodes/config/n1/db.pid --logpath $WORK_DIR/nodes/config/n1/db.log --config $CONFFILE

$MONGOD --port 26002 --configsvr --replSet configReplSet --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/config/n2/data --pidfilepath $WORK_DIR/nodes/config/n2/db.pid --logpath $WORK_DIR/nodes/config/n2/db.log --config $CONFFILE

$MONGOD --port 26003 --configsvr --replSet configReplSet --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/config/n3/data --pidfilepath $WORK_DIR/nodes/config/n3/db.pid --logpath $WORK_DIR/nodes/config/n3/db.log --config $CONFFILE

Once they start successfully, the log output looks like this:

about to fork child process, waiting until server is ready for connections.
forked process: 4976
child process started successfully, parent exiting

At this point you can also see the three running mongod processes with the ps command, for example:
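
ps -ef | grep mongod | grep -v grep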

Connect to one of the config server processes and initiate the replica set:

./bin/mongo --port 26001 --host 127.0.0.1
MongoDB server version: 3.4.7
> cfg={
    _id:"configReplSet",
    configsvr: true,
    members:[
        {_id:0, host:'127.0.0.1:26001'},
        {_id:1, host:'127.0.0.1:26002'},
        {_id:2, host:'127.0.0.1:26003'}
    ]};
> rs.initiate(cfg);

Here configsvr: true indicates that this is the config server replica set of a sharded cluster.
For more on replica set configuration, see the official documentation. A quick way to verify the result is shown below.
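After initiation, you can confirm that a primary has been elected. A simple check from the same shell (isMaster requires no authentication; the elected primary may be any of the three members):

> rs.isMaster().primary
127.0.0.1:26001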

2. Create the shards

Start the three instances of shard1 with the following script:

WORK_DIR=/opt/local/mongo-cluster
KEYFILE=$WORK_DIR/keyfile/mongo.key
CONFFILE=$WORK_DIR/conf/mongo_node.conf
MONGOD=$WORK_DIR/bin/mongod

echo "start shard1 replicaset"

$MONGOD --port 27001 --shardsvr --replSet shard1 --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/shard1/n1/data --pidfilepath $WORK_DIR/nodes/shard1/n1/db.pid --logpath $WORK_DIR/nodes/shard1/n1/db.log --config $CONFFILE
$MONGOD --port 27002 --shardsvr --replSet shard1 --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/shard1/n2/data --pidfilepath $WORK_DIR/nodes/shard1/n2/db.pid --logpath $WORK_DIR/nodes/shard1/n2/db.log --config $CONFFILE
$MONGOD --port 27003 --shardsvr --replSet shard1 --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/shard1/n3/data --pidfilepath $WORK_DIR/nodes/shard1/n3/db.pid --logpath $WORK_DIR/nodes/shard1/n3/db.log --config $CONFFILE

Once they start successfully, the log output looks like this:

about to fork child process, waiting until server is ready for connections.
forked process: 5976
child process started successfully, parent exiting

At this point you can also see the three running shard processes with the ps command.

Connect to one of the shard1 processes and initiate the replica set:

./bin/mongo --port 27001 --host 127.0.0.1
MongoDB server version: 3.4.7
> cfg={
    _id:"shard1",
    members:[
        {_id:0, host:'127.0.0.1:27001'},
        {_id:1, host:'127.0.0.1:27002'},
        {_id:2, host:'127.0.0.1:27003'}
    ]};
> rs.initiate(cfg);

Following the same steps, start the three instances of shard2 and initiate its replica set; a sketch is given below.
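A sketch of the shard2 commands, assuming ports 27004-27006 as listed in the port matrix:

WORK_DIR=/opt/local/mongo-cluster
KEYFILE=$WORK_DIR/keyfile/mongo.key
CONFFILE=$WORK_DIR/conf/mongo_node.conf
MONGOD=$WORK_DIR/bin/mongod

echo "start shard2 replicaset"

$MONGOD --port 27004 --shardsvr --replSet shard2 --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/shard2/n1/data --pidfilepath $WORK_DIR/nodes/shard2/n1/db.pid --logpath $WORK_DIR/nodes/shard2/n1/db.log --config $CONFFILE
$MONGOD --port 27005 --shardsvr --replSet shard2 --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/shard2/n2/data --pidfilepath $WORK_DIR/nodes/shard2/n2/db.pid --logpath $WORK_DIR/nodes/shard2/n2/db.log --config $CONFFILE
$MONGOD --port 27006 --shardsvr --replSet shard2 --keyFile $KEYFILE --dbpath $WORK_DIR/nodes/shard2/n3/data --pidfilepath $WORK_DIR/nodes/shard2/n3/db.pid --logpath $WORK_DIR/nodes/shard2/n3/db.log --config $CONFFILE

./bin/mongo --port 27004 --host 127.0.0.1
> cfg={
    _id:"shard2",
    members:[
        {_id:0, host:'127.0.0.1:27004'},
        {_id:1, host:'127.0.0.1:27005'},
        {_id:2, host:'127.0.0.1:27006'}
    ]};
> rs.initiate(cfg);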

3. Start the mongos routers

Run the following script to start the three mongos processes:

WORK_DIR=/opt/local/mongo-cluster
KEYFILE=$WORK_DIR/keyfile/mongo.key
CONFFILE=$WORK_DIR/conf/mongos.conf
MONGOS=$WORK_DIR/bin/mongos

echo "start mongos instances"
$MONGOS --port=25001 --configdb configReplSet/127.0.0.1:26001,127.0.0.1:26002,127.0.0.1:26003 --keyFile $KEYFILE --pidfilepath $WORK_DIR/nodes/mongos/n1/db.pid --logpath $WORK_DIR/nodes/mongos/n1/db.log --config $CONFFILE
$MONGOS --port 25002 --configdb configReplSet/127.0.0.1:26001,127.0.0.1:26002,127.0.0.1:26003 --keyFile $KEYFILE --pidfilepath $WORK_DIR/nodes/mongos/n2/db.pid --logpath $WORK_DIR/nodes/mongos/n2/db.log --config $CONFFILE
$MONGOS --port 25003 --configdb configReplSet/127.0.0.1:26001,127.0.0.1:26002,127.0.0.1:26003 --keyFile $KEYFILE --pidfilepath $WORK_DIR/nodes/mongos/n3/db.pid --logpath $WORK_DIR/nodes/mongos/n3/db.log --config $CONFFILE

Once they start successfully, the mongos processes are visible via the ps command:

dbuser      7903    1  0 17:49 ?        00:00:00 /opt/local/mongo-cluster/bin/mongos --port=25001 --configdb configReplSet/127.0.0.1:26001,127.0.0.1:26002,127.0.0.1:26003 --keyFile /opt/local/mongo-cluster/keyfile/mongo.key --pidfilepath /opt/local/mongo-cluster/nodes/mongos/n1/db.pid --logpath /opt/local/mongo-cluster/nodes/mongos/n1/db.log --config /opt/local/mongo-cluster/conf/mongos.conf
dbuser      7928    1  0 17:49 ?        00:00:00 /opt/local/mongo-cluster/bin/mongos --port 25002 --configdb configReplSet/127.0.0.1:26001,127.0.0.1:26002,127.0.0.1:26003 --keyFile /opt/local/mongo-cluster/keyfile/mongo.key --pidfilepath /opt/local/mongo-cluster/nodes/mongos/n2/db.pid --logpath /opt/local/mongo-cluster/nodes/mongos/n2/db.log --config /opt/local/mongo-cluster/conf/mongos.conf
dbuser      7954    1  0 17:49 ?        00:00:00 /opt/local/mongo-cluster/bin/mongos --port 25003 --configdb configReplSet/127.0.0.1:26001,127.0.0.1:26002,127.0.0.1:26003 --keyFile /opt/local/mongo-cluster/keyfile/mongo.key --pidfilepath /opt/local/mongo-cluster/nodes/mongos/n3/db.pid --logpath /opt/local/mongo-cluster/nodes/mongos/n3/db.log --config /opt/local/mongo-cluster/conf/mongos.conf

Connect to one of the mongos instances and add the shards:

./bin/mongo --port 25001 --host 127.0.0.1
MongoDB server version: 3.4.7
mongos> sh.addShard("shard1/127.0.0.1:27001")
{ "shardAdded" : "shard1", "ok" : 1 }
mongos> sh.addShard("shard2/127.0.0.1:27004")
{ "shardAdded" : "shard2", "ok" : 1 }

At this point the distributed cluster architecture is up and running, but further operations require users to be created first.

4. Initialize users

Connect to one of the mongos instances and add the administrator user:

use admin
db.createUser({
    user:'admin',pwd:'Admin@01',
    roles:[
        {role:'clusterAdmin',db:'admin'},
        {role:'userAdminAnyDatabase',db:'admin'},
        {role:'dbAdminAnyDatabase',db:'admin'},
        {role:'readWriteAnyDatabase',db:'admin'}
]})

The admin user now has cluster management privileges and read/write access to all databases.
Note that once the first user has been created, the localhost exception no longer applies; all subsequent operations must authenticate first.

use admin
db.auth('admin','Admin@01')

Check the cluster status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5aa39c3e915210dc501a1dc8")
}
  shards:
    {  "_id" : "shard1",  "host" : "shard1/127.0.0.1:27001,127.0.0.1:27002,127.0.0.1:27003",  "state" : 1 }
    {  "_id" : "shard2",  "host" : "shard2/127.0.0.1:27004,127.0.0.1:27005,127.0.0.1:27006",  "state" : 1 }
  active mongoses:
    "3.4.7" : 3
autosplit:
    Currently enabled: yes

Cluster users
All access to a sharded cluster goes through mongos, and the authentication data is stored in the config server replica set; that is, the config servers hold the cluster's users and their role assignments (the admin.system.users collection). Authentication between mongos and the shard instances is handled internally via the keyfile mechanism, so local users can also be added directly on the shard instances to simplify administration. Within a replica set, users and privileges only need to be created on the primary; the data is replicated automatically to the secondaries.
See the official documentation on sharded cluster authentication for more detail.
In this walkthrough, we add a local admin user to both shard replica sets, as sketched below.
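A sketch for shard1, run against the shard1 primary via the localhost exception (repeat on shard2 through port 27004; the role set here is an assumption, kept deliberately narrower than the cluster admin):

./bin/mongo --port 27001 --host 127.0.0.1
> use admin
> db.createUser({
    user:'admin', pwd:'Admin@01',
    roles:[
        {role:'clusterAdmin', db:'admin'},
        {role:'userAdminAnyDatabase', db:'admin'}
    ]})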

The mongostat tool can display all members of the cluster.
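The output that follows was produced with an invocation along these lines (the connection parameters here are assumptions):

./bin/mongostat --host 127.0.0.1 --port 25001 -u admin -p 'Admin@01' --authenticationDatabase admin --discover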

host insert query update delete getmore command dirty used flushes mapped vsize  res faults qrw arw net_in net_out conn    set repl                time
127.0.0.1:27001    *0    *0    *0    *0      0    6|0  0.1% 0.1%      0        1.49G 44.0M    n/a 0|0 0|0  429b  56.1k  25 shard1  PRI Mar 10 19:05:13.928
127.0.0.1:27002    *0    *0    *0    *0      0    7|0  0.1% 0.1%      0        1.43G 43.0M    n/a 0|0 0|0  605b  55.9k  15 shard1  SEC Mar 10 19:05:13.942
127.0.0.1:27003    *0    *0    *0    *0      0    7|0  0.1% 0.1%      0        1.43G 43.0M    n/a 0|0 0|0  605b  55.9k  15 shard1  SEC Mar 10 19:05:13.946
127.0.0.1:27004    *0    *0    *0    *0      0    6|0  0.1% 0.1%      0        1.48G 43.0M    n/a 0|0 0|0  546b  55.8k  18 shard2  PRI Mar 10 19:05:13.939
127.0.0.1:27005    *0    *0    *0    *0      0    6|0  0.1% 0.1%      0        1.43G 42.0M    n/a 0|0 0|0  540b  54.9k  15 shard2  SEC Mar 10 19:05:13.944
127.0.0.1:27006    *0    *0    *0    *0      0    6|0  0.1% 0.1%      0        1.46G 44.0M    n/a 0|0 0|0  540b  54.9k  17 shard2  SEC Mar 10 19:05:13.936

5、Working with Data

In this example we create the appuser user and enable sharding for the appdb database.

use appdb
db.createUser({user:'appuser',pwd:'AppUser@01',roles:[{role:'dbOwner',db:'appdb'}]})
sh.enableSharding("appdb")

Create the book collection and shard it:

use appdb
db.createCollection("book")
db.book.ensureIndex({createTime:1})
sh.shardCollection("appdb.book", {bookId:"hashed"}, false, { numInitialChunks: 4} )

Next, write 100,000 records into the book collection and observe how the chunks are distributed.

use appdb
var cnt = 0;
for(var i=0; i<1000; i++){
    var dl = [];
    for(var j=0; j<100; j++){
        dl.push({
                "bookId" : "BBK-" + i + "-" + j,
                "type" : "Revision",
                "version" : "IricSoneVB0001",
                "title" : "Jackson's Life",
                "subCount" : 10,
                "location" : "China CN Shenzhen Futian District",
                "author" : {
                      "name" : 50,
                      "email" : "RichardFoo@yahoo.com",
                      "gender" : "female"
                },
                "createTime" : new Date()
            });
      }
      cnt += dl.length;
      db.book.insertMany(dl);
      print("insert ", cnt);
}

Running db.book.getShardDistribution() produces output like the following:

Shard shard1 at shard1/127.0.0.1:27001,127.0.0.1:27002,127.0.0.1:27003
data : 13.41MiB docs : 49905 chunks : 2
estimated data per chunk : 6.7MiB
estimated docs per chunk : 24952

Shard shard2 at shard2/127.0.0.1:27004,127.0.0.1:27005,127.0.0.1:27006
data : 13.46MiB docs : 50095 chunks : 2
estimated data per chunk : 6.73MiB
estimated docs per chunk : 25047

Totals
data : 26.87MiB docs : 100000 chunks : 4
Shard shard1 contains 49.9% data, 49.9% docs in cluster, avg obj size on shard : 281B
Shard shard2 contains 50.09% data, 50.09% docs in cluster, avg obj size on shard : 281B
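To see exactly how the four chunks map onto the shards, one option (a sketch, run through mongos as the admin user) is to aggregate the config.chunks collection:

mongos> use config
mongos> db.chunks.aggregate([
    { $match: { ns: "appdb.book" } },
    { $group: { _id: "$shard", chunks: { $sum: 1 } } }
])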

Summary

  • A MongoDB sharded cluster consists of mongos routers, a config server replica set, and one or more shards;
    during installation, the config server replica set and the shard replica sets are initiated first, and the shards are then added through mongos.
  • The config server replica set stores the cluster's users and role privileges; for ease of administration, local users can also be added to the shard replica sets.
  • MongoDB provides the localhost exception, which allows users to be created on the local machine when the database is first installed.