MongoDB Cluster Configuration

1 MongoDB Sharding (High Availability)

1.1 Prerequisites

  • Three virtual machines
  • MongoDB installed on each
  • The virtual machines can reach one another
  • The virtual machines can reach the host machine

1.2 Installing MongoDB

Install MongoDB on Ubuntu 16.04, following the installation steps on the MongoDB official website.

  • Installation may fail with the following error:

    E: The method driver /usr/lib/apt/methods/https could not be found.
    N: Is the package apt-transport-https installed?

    The message indicates that apt-transport-https needs to be installed:

    sudo apt-get install -y apt-transport-https

1.3 Starting MongoDB

sudo service mongod start

Check whether it started successfully:

sudo cat /var/log/mongodb/mongod.log

2019-04-19T15:40:52.808+0800 I NETWORK [initandlisten] waiting for connections on port 27017
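
You can also confirm the server is answering from the shell with the standard ping command:

// in the mongo shell, connected to the local mongod (default port 27017)
db.runCommand({ ping: 1 })      // returns { "ok" : 1 } when the server is up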

2 MongoDB Sharding

Sharding is the process of splitting a data set and distributing the pieces across machines. MongoDB supports automatic sharding, which keeps the cluster architecture invisible to the application: the application behaves as if it were talking to a single MongoDB server, while MongoDB handles distributing data across the shards and makes adding and removing shards easy. It is similar to splitting a database across multiple instances in MySQL.

2.1 Core Components

A sharded cluster involves four components: mongos, config server, shard, and replica set.

mongos: the entry point for all requests to the cluster. Every request is coordinated by a mongos, so no routing logic is needed in the application layer. A mongos is a request dispatch center that forwards external requests to the appropriate shard servers. Since it is the unified entry point, it is usually deployed with HA to avoid a single point of failure. You can think of it as the routing component (like Eureka) in a microservice architecture.

config server: stores all of the cluster's metadata (shard and routing configuration). A mongos does not persist shard or routing information itself; it only caches it in memory. When a mongos starts (or restarts) it loads the configuration from the config servers, and when the configuration changes the config servers notify every mongos to update its state, so requests keep being routed correctly. Production deployments run multiple config servers to avoid losing this configuration to a single-node failure. Think of it as the configuration center.
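
Once the cluster is up you can inspect this metadata yourself: connected through a mongos, the config database holds it. A quick look (the collection names are standard, though the document shape varies by version):

use config
db.shards.find()                // one document per registered shard
db.chunks.find().limit(5)       // how shard-key ranges map to shards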

shard: the actual storage. Traditionally, storing something like 1 TB of data on a single server puts enormous pressure on it, whether the bottleneck is disk, network I/O, CPU, or memory. If several servers share that 1 TB, each one holds a manageable amount. In a MongoDB cluster you only need to set up the sharding rules and operate on the database through mongos; the corresponding requests are automatically forwarded to the right backend shard server.

replica set: in the overall cluster architecture, if a single shard machine goes offline, part of the cluster's data goes missing, which must not happen. Each shard node therefore runs as a replica set to keep the data reliable; in production this is typically two data-bearing replicas plus one arbiter. The replicas keep the data highly available.

2.2 Architecture Diagram

(architecture diagram)

3 Installation and Deployment

To save servers, this setup runs multiple instances per machine: three mongos, three config servers, and each server hosts a different role for each shard (so that the data distributes evenly later, the three shards take different roles on each server). Within each shard, a replica set provides high availability. Hosts and ports are as follows:

Hostname     IP address       mongos       config server   shard
mongodb-1    192.168.90.130   port 20000   port 21000      primary: 22001, secondary: 22002, arbiter: 22003
mongodb-2    192.168.90.131   port 20000   port 21000      primary: 22002, secondary: 22001, arbiter: 22003
mongodb-3    192.168.90.132   port 20000   port 21000      primary: 22003, secondary: 22001, arbiter: 22002


3.1 Deploying the Config Server Replica Set

3.1.1 Create the directories

mkdir -p /usr/local/mongo/mongoconf/{data,log,conf}
touch /usr/local/mongo/mongoconf/conf/mongoconf.conf
touch /usr/local/mongo/mongoconf/log/mongoconf.log

3.1.2 Write the configuration file

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /usr/local/mongo/mongoconf/data
  journal:
    enabled: true
    commitIntervalMs: 200
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /usr/local/mongo/mongoconf/log/mongoconf.log

# network interfaces
net:
  port: 21000
  bindIp: 0.0.0.0
  maxIncomingConnections: 1000

# how the process runs
processManagement:
  fork: true
#security:


#operationProfiling:

replication:
  replSetName: replconf

sharding:
  clusterRole: configsvr
## Enterprise-Only Options:

#auditLog:

#snmp:

The config server configuration must specify the data and log paths, plus the sharding role (clusterRole: configsvr).

3.1.3 Start the config servers

Once the configuration file is in place on all three machines, start the config server on each:

mongod -f /usr/local/mongo/mongoconf/conf/mongoconf.conf

3.1.4 Initialize the replica set

Log in on any one of the machines (130 in this example) and initialize the replica set. Note that the config servers listen on port 21000:

mongo 192.168.90.130:21000

config = {
    _id:"replconf",
    members:[
        {_id:0,host:"192.168.90.130:21000"},
        {_id:1,host:"192.168.90.131:21000"},
        {_id:2,host:"192.168.90.132:21000"}
    ]
}
rs.initiate(config)

// check the replica set status
rs.status()

The output looks like this:

{
    "set" : "replconf",
    "date" : ISODate("2019-04-21T06:38:50.164Z"),
    "myState" : 2,
    "term" : NumberLong(3),
    "syncingTo" : "192.168.90.131:21000",
    "syncSourceHost" : "192.168.90.131:21000",
    ...
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.90.130:21000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 305,
            "optime" : {
                "ts" : Timestamp(1555828718, 1),
                "t" : NumberLong(3)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1555828718, 1),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("2019-04-21T06:38:38Z"),
            "optimeDurableDate" : ISODate("2019-04-21T06:38:38Z"),
            "lastHeartbeat" : ISODate("2019-04-21T06:38:49.409Z"),
            "lastHeartbeatRecv" : ISODate("2019-04-21T06:38:49.408Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.90.131:21000",
            "syncSourceHost" : "192.168.90.131:21000",
            "syncSourceId" : 1,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "192.168.90.131:21000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 307,
            "optime" : {
                "ts" : Timestamp(1555828718, 1),
                "t" : NumberLong(3)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1555828718, 1),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("2019-04-21T06:38:38Z"),
            "optimeDurableDate" : ISODate("2019-04-21T06:38:38Z"),
            "lastHeartbeat" : ISODate("2019-04-21T06:38:49.380Z"),
            "lastHeartbeatRecv" : ISODate("2019-04-21T06:38:49.635Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "electionTime" : Timestamp(1555828429, 1),
            "electionDate" : ISODate("2019-04-21T06:33:49Z"),
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.90.132:21000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 319,
            "optime" : {
                "ts" : Timestamp(1555828718, 1),
                "t" : NumberLong(3)
            },
            "optimeDate" : ISODate("2019-04-21T06:38:38Z"),
            "syncingTo" : "192.168.90.131:21000",
            "syncSourceHost" : "192.168.90.131:21000",
            "syncSourceId" : 1,
            "infoMessage" : "",
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        }
    ],
    "ok" : 1,
    ...
}

The config server replica set is now deployed, and the election chose 131 as the primary.
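
To see at a glance which member is currently primary, rs.isMaster() on any member is more compact than rs.status():

// in the mongo shell on any config server member
rs.isMaster().primary           // e.g. "192.168.90.131:21000"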

3.2 Deploying the Shard Replica Sets

3.2.1 Create directories for the three shards

cd /usr/local/mongo
mkdir -p shard1/{data,conf,log}
mkdir -p shard2/{data,conf,log}
mkdir -p shard3/{data,conf,log}

touch shard1/log/shard1.log
touch shard2/log/shard2.log
touch shard3/log/shard3.log

3.2.2 Create the configuration files

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /usr/local/mongo/shard1/data
  journal:
    enabled: true
    commitIntervalMs: 200
  mmapv1:
    smallFiles: true
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /usr/local/mongo/shard1/log/shard1.log

# network interfaces
net:
  port: 22001
  bindIp: 0.0.0.0
  maxIncomingConnections: 1000

# how the process runs
processManagement:
  fork: true
#security:


#operationProfiling:

replication:
  replSetName: shard1
  oplogSizeMB: 4096

sharding:
  clusterRole: shardsvr
## Enterprise-Only Options:

#auditLog:

#snmp:

Create the shard1, shard2, and shard3 configuration files the same way, adjusting dbPath, the log path, the port, and replSetName for each.

3.2.3 Start the shards

mongod -f /usr/local/mongo/shard1/conf/shard1.conf
mongod -f /usr/local/mongo/shard2/conf/shard2.conf
mongod -f /usr/local/mongo/shard3/conf/shard3.conf

Check the listening processes with:

netstat -lntup

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22001           0.0.0.0:*               LISTEN      1891/mongod     
tcp        0      0 0.0.0.0:22002           0.0.0.0:*               LISTEN      1974/mongod     
tcp        0      0 0.0.0.0:22003           0.0.0.0:*               LISTEN      2026/mongod     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -               
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      -               
tcp        0      0 0.0.0.0:21000           0.0.0.0:*               LISTEN      1779/mongod

All of the mongod processes are up and listening.

3.2.4 Initialize each shard's replica set

Log in on 130:

mongo 192.168.90.130:22001

Define the replica set and its arbiter node:

config = {
    _id:"shard1",
    members:[
        {_id:0,host:"192.168.90.130:22001"},
        {_id:1,host:"192.168.90.131:22001"},
        {_id:2,host:"192.168.90.132:22001",arbiterOnly:true}
    ]   
}
rs.initiate(config)
rs.status()

Log in on 131 and 132 to initialize shard2 and shard3. Whichever node runs the initialization is the first to become the primary.

config = {
    _id:"shard2",
    members:[
        {_id:0,host:"192.168.90.130:22002",arbiterOnly:true},
        {_id:1,host:"192.168.90.131:22002"},
        {_id:2,host:"192.168.90.132:22002"}
    ]   
}
rs.initiate(config)
rs.status()

config = {
    _id:"shard3",
    members:[
        {_id:0,host:"192.168.90.130:22003"},
        {_id:1,host:"192.168.90.131:22003",arbiterOnly:true},
        {_id:2,host:"192.168.90.132:22003"}
    ]
}
rs.initiate(config)
rs.status()
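
Relying on which node runs the initialization is not a hard guarantee. If a particular member should be the preferred primary, member priorities can pin it; a sketch using the standard priority option, with shard2 as the example:

config = {
    _id:"shard2",
    members:[
        {_id:0,host:"192.168.90.130:22002",arbiterOnly:true},
        {_id:1,host:"192.168.90.131:22002",priority:2},   // preferred primary
        {_id:2,host:"192.168.90.132:22002",priority:1}
    ]
}
rs.initiate(config)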

3.3 Configuring the mongos Routers

3.3.1 Create the directories

mkdir -p /usr/local/mongo/mongos/{conf,log}

mongos does not store data or metadata itself; it only routes requests, loading the metadata from the config servers into memory when it starts.
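
Because the routing table is just a cache, a mongos can be told to reload it from the config servers at any time with the standard flushRouterConfig command:

// run against a mongos, not a mongod
db.adminCommand({ flushRouterConfig: 1 })   // forces this mongos to refresh its cached metadata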

3.3.2 Configuration file

# mongos.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# mongos stores no data, so there is no storage section.

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /usr/local/mongo/mongos/log/mongos.log

# network interfaces
net:
  port: 20000
  bindIp: 0.0.0.0
  maxIncomingConnections: 1000

# how the process runs
processManagement:
  fork: true
sharding:
  # mongo0, mongo1, and mongo2 must resolve to the three config servers
  # (192.168.90.130-132 here), e.g. via /etc/hosts entries
  configDB: replconf/mongo0:21000,mongo1:21000,mongo2:21000
## Enterprise-Only Options:


#auditLog:

3.3.3 Start mongos

mongos -f /usr/local/mongo/mongos/conf/mongos.conf

Check the listening ports again:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22001           0.0.0.0:*               LISTEN      1891/mongod     
tcp        0      0 0.0.0.0:22002           0.0.0.0:*               LISTEN      1974/mongod     
tcp        0      0 0.0.0.0:22003           0.0.0.0:*               LISTEN      2026/mongod     
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -               
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      -               
tcp        0      0 0.0.0.0:20000           0.0.0.0:*               LISTEN      2174/mongos     
tcp        0      0 0.0.0.0:21000           0.0.0.0:*               LISTEN      1779/mongod

3.3.4 Register the shards in the admin database

Log in to any mongos (for example, mongo 192.168.90.130:20000), switch to the admin database, and add each shard:

use admin

db.runCommand(
    {
        addShard:"shard1/192.168.90.130:22001,192.168.90.131:22001,192.168.90.132:22001"
    }
)

db.runCommand(
    {
        addShard:"shard2/192.168.90.130:22002,192.168.90.131:22002,192.168.90.132:22002"
    }
)
db.runCommand(
    {
        addShard:"shard3/192.168.90.130:22003,192.168.90.131:22003,192.168.90.132:22003"
    }
)

sh.status()

The output:

--- Sharding Status --- 
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5cba82d33290d8f4fb3ac8f7")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.90.130:22001,192.168.90.132:22001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.90.130:22002,192.168.90.131:22002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.90.131:22003,192.168.90.132:22003",  "state" : 1 }
  active mongoses:
....
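
The sh.addShard() shell helper wraps the same addShard command, so the three runCommand calls above can be written more compactly:

sh.addShard("shard1/192.168.90.130:22001,192.168.90.131:22001,192.168.90.132:22001")
sh.addShard("shard2/192.168.90.130:22002,192.168.90.131:22002,192.168.90.132:22002")
sh.addShard("shard3/192.168.90.130:22003,192.168.90.131:22003,192.168.90.132:22003")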

3.3.5 Shard a collection

use admin
// sharding must first be enabled on the database
db.runCommand({
    enableSharding:"lishubindb"
})
db.runCommand({
    shardCollection:"lishubindb.table1",key:{_id:"hashed"}
})
db.runCommand({
    listshards:1
})
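
Equivalently, the sh helpers provide a shorter form of the same commands:

sh.enableSharding("lishubindb")
sh.shardCollection("lishubindb.table1", { _id: "hashed" })
sh.status()    // the collection now appears under the database's sharded collections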

3.4 Testing

Insert 100,000 documents into the table1 collection:

use lishubindb
for(var i=0;i<100000;i++){
    db.table1.insert({
        "name":"lishubin"+i,"num":i
    })
}
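
Inserting one document at a time makes a round trip per document; a sketch that batches the same data with insertMany, assuming a 3.2+ shell:

var docs = [];
for (var i = 0; i < 100000; i++) {
    docs.push({ "name": "lishubin" + i, "num": i });
    if (docs.length === 1000) {        // send in batches of 1000
        db.table1.insertMany(docs);
        docs = [];
    }
}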

Check how the data is distributed across the shards:

db.table1.stats()

    "ns" : "lishubindb.table1",
    "count" : 100000,
    "size" : 5888890,
    "storageSize" : 2072576,
    "totalIndexSize" : 4694016,
    "indexSizes" : {
        "_id_" : 1060864,
        "_id_hashed" : 3633152
    },
    "avgObjSize" : 58,
    "maxSize" : NumberLong(0),
    "nindexes" : 2,
    "nchunks" : 6,
    "shards" : {
        "shard3" : {
            "ns" : "lishubindb.table1",
            "size" : 1969256,
            "count" : 33440,
            "avgObjSize" : 58,
            "storageSize" : 712704,
            "capped" : false,
            ...
        },
        "shard2" : {
            "ns" : "lishubindb.table1",
            "size" : 1973048,
            "count" : 33505,
            "avgObjSize" : 58,
            "storageSize" : 708608,
            "capped" : false,
            ...
        },
        "shard1" : {
            "ns" : "lishubindb.table1",
            "size" : 1946586,
            "count" : 33055,
            "avgObjSize" : 58,
            "storageSize" : 651264,
            "capped" : false,
            ...
        }
    },
    ...
}
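
To see which shard answers a particular query, explain it through mongos. With a hashed _id shard key, an equality match on _id is routed to a single shard, while a query on any other field fans out to all shards; a quick check (the exact output shape varies by version):

// this query on "num" is scatter-gather: num is not the shard key
var doc = db.table1.findOne({ num: 42 })
// an equality match on the shard key targets one shard only;
// the winning plan names the single shard that owns this _id
db.table1.find({ _id: doc._id }).explain().queryPlanner.winningPlan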
