1. Sharding overview
2. Sharding storage principles
3. Shard keys
4. Case study: MongoDB sharding combined with replica sets for efficient storage
1. Sharding overview:
Overview: sharding is the process of splitting a database and distributing its data across different machines. A sharded cluster is a way to scale a database system's performance horizontally: the data set is distributed across multiple shards, with each shard holding only a part of it. MongoDB guarantees that shards never hold duplicate data, and the union of all shards' data is the complete data set. Because the data is distributed, the load is spread across the shards: each shard handles reads and writes for only a portion of the data, making full use of every shard's system resources and raising the overall throughput of the database system.
Note: since MongoDB 3.2, sharding must be deployed together with replica sets (for example, the config servers must themselves run as a replica set);
Application scenarios:
1. A single machine's disks are running out of space; sharding solves the storage capacity problem.
2. A single mongod can no longer meet the write-performance requirements; sharding spreads the write pressure across the shards, using each shard server's own resources.
3. You want to keep a larger working set in memory for performance; as above, sharding pools the memory of all shard servers.
2. Sharding storage principles:
Storage model: the data set is split into chunks, each chunk containing multiple documents; the chunks are distributed across the shards of the cluster.
Roles:
Config server: tracks how the chunks are distributed across shards, i.e. which chunks each shard stores. This is the cluster's metadata, kept in the config database on the config servers. Typically three config servers are used, and the config database must be identical on all of them (deploy the config servers on different machines for resilience);
Shard server: stores the actual data, split into chunks; the shard is the unit where chunks physically live;
Mongos server: the entry point for all requests to the cluster. Every request is coordinated by mongos, which consults the shard metadata to find where each chunk lives; mongos is essentially a request router. In production there are usually multiple mongos instances, so that the failure of one does not make the whole cluster unreachable.
Summary: applications send all CRUD operations to mongos; the config servers store the cluster metadata and keep it in sync with mongos; the data ultimately lives on the shards. To guard against data loss, each shard is a replica set that keeps redundant copies of its data (an arbiter node, if one is added, only votes in primary elections and stores no data).
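To make this request path concrete, here is a minimal sketch (host and port follow the case study in section 4, and testdb.table1 is assumed to already be sharded): the client always connects to mongos, which looks up the owning chunk in the config metadata and forwards the operation to the right shard.
[root@config ~]# mongo --host 192.168.100.101 --port 27025   ## connect to the mongos router, never to a shard directly
mongos> use testdb
mongos> db.table1.insert({"id":1,"name":"demo"})             ## mongos routes the write to the shard that owns the chunk
mongos> db.table1.find({"id":1})                             ## reads go through the same metadata lookup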
3. Shard keys:
Overview: the shard key is a field of the document, or a compound-index field, and it cannot be changed once established. The shard key is the basis on which data is partitioned; with very large data sets it determines where each document is stored during sharding, and it directly affects cluster performance;
Note: the shard key must be backed by an index that supports it;
Shard key types:
1. Ascending (monotonic) shard keys: timestamps, dates, auto-incrementing primary keys, ObjectId, _id, and the like. With such a key all writes land on a single shard server, so writes are not spread out; that server carries most of the load and can become a performance bottleneck, although chunk splitting is easy;
Syntax walkthrough:
mongos> use <dbname>
mongos> db.<collection>.ensureIndex({"<field>":1})                 ## create the supporting index
mongos> sh.enableSharding("<dbname>")                              ## enable sharding for the database
mongos> sh.shardCollection("<dbname>.<collection>",{"<field>":1})  ## shard the collection and specify the shard key
2. Hashed shard keys: also called hashed indexes; a hashed index field is used as the shard key. The advantage is that data is distributed fairly evenly across the nodes: writes are scattered randomly over the shard servers, spreading the write pressure. Reads are scattered in the same way and may touch more shards, and the drawback is that range queries cannot be targeted (a combined sketch for key types 2-4 follows this list);
3. Compound shard keys: when the database has no single field that makes a good shard key, or the intended key has too low a cardinality (few distinct values, e.g. a weekday field can only take 7), another field can be combined with it, or even a redundant field added, to form a compound shard key;
4. Tag-aware shard keys: data is pinned to designated shard servers by adding tags to shards and then assigning key ranges to those tags, e.g. making keys starting with 10... (tag T) land on shard0000 and keys starting with 11... (tag Q) land on shard0001 or shard0002, so the balancer distributes chunks according to the tags;
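To make the last three key types concrete, here is a hedged sketch (the database testdb and the collections/fields users, uid, logs, weekday, ts, people are illustrative only, not from the case study; sh.addShardTag() and sh.addTagRange() are the mongo shell helpers for tag-aware sharding):
mongos> sh.enableSharding("testdb")
mongos> use testdb
## 2. hashed shard key: writes scatter evenly across shards
mongos> db.users.ensureIndex({"uid":"hashed"})
mongos> sh.shardCollection("testdb.users",{"uid":"hashed"})
## 3. compound shard key: pair a low-cardinality field with a second field
mongos> sh.shardCollection("testdb.logs",{"weekday":1,"ts":1})
## 4. tag shard key: pin key ranges to tagged shards
mongos> sh.shardCollection("testdb.people",{"uid":1})
mongos> sh.addShardTag("shard0000","T")
mongos> sh.addShardTag("shard0001","Q")
mongos> sh.addTagRange("testdb.people",{"uid":MinKey},{"uid":5000},"T")
mongos> sh.addTagRange("testdb.people",{"uid":5000},{"uid":MaxKey},"Q")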
4. Case study: MongoDB sharding combined with replica sets for efficient storage
Lab steps:
Install the MongoDB binaries;
Configure the config node instances;
Configure the shard1 instances;
Configure the shard2 instances;
Configure sharding and verify;
Install the MongoDB binaries on 192.168.100.101, 192.168.100.102 and 192.168.100.103:
[root@config ~]# tar zxvf mongodb-linux-x86_64-rhel70-3.6.3.tgz
[root@config ~]# mv mongodb-linux-x86_64-rhel70-3.6.3 /usr/local/mongodb
[root@config ~]# echo "export PATH=/usr/local/mongodb/bin:\$PATH" >>/etc/profile
[root@config ~]# source /etc/profile
[root@config ~]# ulimit -n 25000
[root@config ~]# ulimit -u 25000
[root@config ~]# echo 0 >/proc/sys/vm/zone_reclaim_mode
[root@config ~]# sysctl -w vm.zone_reclaim_mode=0
[root@config ~]# echo never >/sys/kernel/mm/transparent_hugepage/enabled
[root@config ~]# echo never >/sys/kernel/mm/transparent_hugepage/defrag
[root@config ~]# cd /usr/local/mongodb/bin/
[root@config bin]# mkdir {../mongodb1,../mongodb2,../mongodb3}
[root@config bin]# mkdir ../logs
[root@config bin]# touch ../logs/mongodb{1..3}.log
[root@config bin]# chmod 777 ../logs/mongodb*
Configure the config node instances:
192.168.100.101:
[root@config bin]# cat <<END >>/usr/local/mongodb/bin/mongodb1.conf
bind_ip=192.168.100.101
port=27017
dbpath=/usr/local/mongodb/mongodb1/
logpath=/usr/local/mongodb/logs/mongodb1.log
logappend=true
fork=true
maxConns=5000
replSet=configs
#replication name
configsvr=true
END
Reference for common mongod configuration options (the paths below are examples and can be customized):
# log file location
logpath=/data/db/journal/mongodb.log
# append to the log file instead of overwriting it
logappend=true
# run as a daemon
fork = true
# port, default 27017
#port = 27017
# database file location
dbpath=/data/db
# enable periodic logging of CPU utilization and I/O wait
#cpu = true
# whether to run with authentication; the default is the non-authenticated (insecure) mode
#noauth = true
#auth = true
# verbose output
#verbose = true
# Inspect all client data for validity on receipt (useful for developing drivers)
#objcheck = true
# Enable db quota management
#quota = true
# Set oplogging level where n is
#   0=off (default)
#   1=W
#   2=R
#   3=both
#   7=W+some reads
#diaglog=0
# Diagnostic/debugging option
#nocursors = true
# Ignore query hints
#nohints = true
# Disable the HTTP interface (defaults to localhost:28017)
#nohttpinterface = true
# Turns off server-side scripting. This will result in greatly limited functionality
#noscripting = true
# Turns off table scans. Any query that would do a table scan fails.
#notablescan = true
# Disable data file preallocation.
#noprealloc = true
# Specify .ns file size for new databases, in MB
# nssize =
# Replication options
# in replicated mongo databases, specify the replica set name here
#replSet=setname
# maximum size in megabytes for replication operation log
#oplogSize=1024
# path to a key file storing authentication info for connections between replica set members
#keyFile=/path/to/keyfile
[root@config bin]# cat <<END >>/usr/local/mongodb/bin/mongodb2.conf
bind_ip=192.168.100.101
port=27018
dbpath=/usr/local/mongodb/mongodb2/
logpath=/usr/local/mongodb/logs/mongodb2.log
logappend=true
fork=true
maxConns=5000
replSet=configs
configsvr=true
END
[root@config bin]# cat <<END >>/usr/local/mongodb/bin/mongodb3.conf
bind_ip=192.168.100.101
port=27019
dbpath=/usr/local/mongodb/mongodb3/
logpath=/usr/local/mongodb/logs/mongodb3.log
logappend=true
fork=true
maxConns=5000
replSet=configs
configsvr=true
END
[root@config bin]# cd
[root@config ~]# mongod -f /usr/local/mongodb/bin/mongodb1.conf
[root@config ~]# mongod -f /usr/local/mongodb/bin/mongodb2.conf
[root@config ~]# mongod -f /usr/local/mongodb/bin/mongodb3.conf
[root@config ~]# netstat -utpln |grep mongod
tcp    0    0 192.168.100.101:27019    0.0.0.0:*    LISTEN    2271/mongod
tcp    0    0 192.168.100.101:27017    0.0.0.0:*    LISTEN    2440/mongod
tcp    0    0 192.168.100.101:27018    0.0.0.0:*    LISTEN    1412/mongod
[root@config ~]# echo -e "/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb1.conf \n/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb2.conf\n/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
[root@config ~]# chmod +x /etc/rc.local
[root@config ~]# cat <<END >>/etc/init.d/mongodb
#!/bin/bash
INSTANCE=\$1
ACTION=\$2
case "\$ACTION" in
'start')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
'stop')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown;;
'restart')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
esac
END
[root@config ~]# chmod +x /etc/init.d/mongodb
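The helper script takes an instance name (matching the .conf file name) and an action, for example:
[root@config ~]# /etc/init.d/mongodb mongodb1 stop      ## cleanly shuts down the instance defined in mongodb1.conf
[root@config ~]# /etc/init.d/mongodb mongodb1 start     ## starts it again
[root@config ~]# /etc/init.d/mongodb mongodb2 restart   ## shutdown followed by a fresh start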
[root@config ~]# mongo --port 27017 --host 192.168.100.101
> cfg={"_id":"configs","members":[{"_id":0,"host":"192.168.100.101:27017"},{"_id":1,"host":"192.168.100.101:27018"},{"_id":2,"host":"192.168.100.101:27019"}]}
> rs.initiate(cfg)
configs:PRIMARY> rs.status()
{
    "set" : "configs",
    "date" : ISODate("2018-04-24T18:53:44.375Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) } },
    "members" : [
        { "_id" : 0, "name" : "192.168.100.101:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 6698, "optime" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T18:53:40Z"), "electionTime" : Timestamp(1524590293, 1), "electionDate" : ISODate("2018-04-24T17:18:13Z"), "configVersion" : 1, "self" : true },
        { "_id" : 1, "name" : "192.168.100.101:27018", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 5741, "optime" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T18:53:40Z"), "optimeDurableDate" : ISODate("2018-04-24T18:53:40Z"), "lastHeartbeat" : ISODate("2018-04-24T18:53:42.992Z"), "lastHeartbeatRecv" : ISODate("2018-04-24T18:53:43.742Z"), "pingMs" : NumberLong(0), "syncingTo" : "192.168.100.101:27017", "configVersion" : 1 },
        { "_id" : 2, "name" : "192.168.100.101:27019", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 5741, "optime" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1524596020, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T18:53:40Z"), "optimeDurableDate" : ISODate("2018-04-24T18:53:40Z"), "lastHeartbeat" : ISODate("2018-04-24T18:53:42.992Z"), "lastHeartbeatRecv" : ISODate("2018-04-24T18:53:43.710Z"), "pingMs" : NumberLong(0), "syncingTo" : "192.168.100.101:27017", "configVersion" : 1 }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1524596020, 1),
    "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("7fffffff0000000000000001") },
    "$clusterTime" : { "clusterTime" : Timestamp(1524596020, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }
}
configs:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
configs:PRIMARY> exit
[root@config bin]# cat <<END >>/usr/local/mongodb/bin/mongos.conf
bind_ip=192.168.100.101
port=27025
logpath=/usr/local/mongodb/logs/mongodbs.log
fork=true
maxConns=5000
configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
END
Note: the configdb parameter of mongos may list either a single member of the config replica set (its primary) or several (all members of the replica set);
[root@config bin]# touch ../logs/mongos.log
[root@config bin]# chmod 777 ../logs/mongos.log
[root@config bin]# mongos -f /usr/local/mongodb/bin/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1562
child process started successfully, parent exiting
[root@config ~]# netstat -utpln |grep mongo
tcp    0    0 192.168.100.101:27019    0.0.0.0:*    LISTEN    1601/mongod
tcp    0    0 192.168.100.101:27020    0.0.0.0:*    LISTEN    1345/mongod
tcp    0    0 192.168.100.101:27025    0.0.0.0:*    LISTEN    1822/mongos
tcp    0    0 192.168.100.101:27017    0.0.0.0:*    LISTEN    1437/mongod
tcp    0    0 192.168.100.101:27018    0.0.0.0:*    LISTEN    1541/mongod
Configure the shard1 instances:
192.168.100.102:
[root@shard1 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb1.conf
bind_ip=192.168.100.102
port=27017
dbpath=/usr/local/mongodb/mongodb1/
logpath=/usr/local/mongodb/logs/mongodb1.log
logappend=true
fork=true
maxConns=5000
replSet=shard1
#replication name
shardsvr=true
END
[root@shard1 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb2.conf
bind_ip=192.168.100.102
port=27018
dbpath=/usr/local/mongodb/mongodb2/
logpath=/usr/local/mongodb/logs/mongodb2.log
logappend=true
fork=true
maxConns=5000
replSet=shard1
shardsvr=true
END
[root@shard1 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb3.conf
bind_ip=192.168.100.102
port=27019
dbpath=/usr/local/mongodb/mongodb3/
logpath=/usr/local/mongodb/logs/mongodb3.log
logappend=true
fork=true
maxConns=5000
replSet=shard1
shardsvr=true
END
[root@shard1 bin]# cd
[root@shard1 ~]# mongod -f /usr/local/mongodb/bin/mongodb1.conf
[root@shard1 ~]# mongod -f /usr/local/mongodb/bin/mongodb2.conf
[root@shard1 ~]# mongod -f /usr/local/mongodb/bin/mongodb3.conf
[root@shard1 ~]# netstat -utpln |grep mongod
tcp    0    0 192.168.100.102:27019    0.0.0.0:*    LISTEN    2271/mongod
tcp    0    0 192.168.100.102:27017    0.0.0.0:*    LISTEN    2440/mongod
tcp    0    0 192.168.100.102:27018    0.0.0.0:*    LISTEN    1412/mongod
[root@shard1 ~]# echo -e "/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb1.conf \n/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb2.conf\n/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
[root@shard1 ~]# chmod +x /etc/rc.local
[root@shard1 ~]# cat <<END >>/etc/init.d/mongodb
#!/bin/bash
INSTANCE=\$1
ACTION=\$2
case "\$ACTION" in
'start')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
'stop')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown;;
'restart')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
esac
END
[root@shard1 ~]# chmod +x /etc/init.d/mongodb
[root@shard1 ~]# mongo --port 27017 --host 192.168.100.102
> cfg={"_id":"shard1","members":[{"_id":0,"host":"192.168.100.102:27017"},{"_id":1,"host":"192.168.100.102:27018"},{"_id":2,"host":"192.168.100.102:27019"}]}
> rs.initiate(cfg)
{ "ok" : 1 }
shard1:PRIMARY> rs.status()
{
    "set" : "shard1",
    "date" : ISODate("2018-04-24T19:06:53.160Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) } },
    "members" : [
        { "_id" : 0, "name" : "192.168.100.102:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 6648, "optime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T19:06:50Z"), "electionTime" : Timestamp(1524590628, 1), "electionDate" : ISODate("2018-04-24T17:23:48Z"), "configVersion" : 1, "self" : true },
        { "_id" : 1, "name" : "192.168.100.102:27018", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6195, "optime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T19:06:50Z"), "optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"), "lastHeartbeat" : ISODate("2018-04-24T19:06:52.176Z"), "lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"), "pingMs" : NumberLong(0), "syncingTo" : "192.168.100.102:27017", "configVersion" : 1 },
        { "_id" : 2, "name" : "192.168.100.102:27019", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6195, "optime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T19:06:50Z"), "optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"), "lastHeartbeat" : ISODate("2018-04-24T19:06:52.177Z"), "lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"), "pingMs" : NumberLong(0), "syncingTo" : "192.168.100.102:27017", "configVersion" : 1 }
    ],
    "ok" : 1
}
shard1:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
shard1:PRIMARY> exit
[root@shard1 bin]# cat <<END >>/usr/local/mongodb/bin/mongos.conf
bind_ip=192.168.100.102
port=27025
logpath=/usr/local/mongodb/logs/mongodbs.log
fork=true
maxConns=5000
configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
END
[root@shard1 bin]# touch ../logs/mongos.log
[root@shard1 bin]# chmod 777 ../logs/mongos.log
[root@shard1 bin]# mongos -f /usr/local/mongodb/bin/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1562
child process started successfully, parent exiting
[root@shard1 ~]# netstat -utpln| grep mongo
tcp    0    0 192.168.100.102:27019    0.0.0.0:*    LISTEN    1098/mongod
tcp    0    0 192.168.100.102:27020    0.0.0.0:*    LISTEN    1125/mongod
tcp    0    0 192.168.100.102:27025    0.0.0.0:*    LISTEN    1562/mongos
tcp    0    0 192.168.100.102:27017    0.0.0.0:*    LISTEN    1044/mongod
tcp    0    0 192.168.100.102:27018    0.0.0.0:*    LISTEN    1071/mongod
Configure the shard2 instances:
192.168.100.103:
[root@shard2 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb1.conf
bind_ip=192.168.100.103
port=27017
dbpath=/usr/local/mongodb/mongodb1/
logpath=/usr/local/mongodb/logs/mongodb1.log
logappend=true
fork=true
maxConns=5000
replSet=shard2
#replication name
shardsvr=true
END
[root@shard2 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb2.conf
bind_ip=192.168.100.103
port=27018
dbpath=/usr/local/mongodb/mongodb2/
logpath=/usr/local/mongodb/logs/mongodb2.log
logappend=true
fork=true
maxConns=5000
replSet=shard2
shardsvr=true
END
[root@shard2 bin]# cat <<END >>/usr/local/mongodb/bin/mongodb3.conf
bind_ip=192.168.100.103
port=27019
dbpath=/usr/local/mongodb/mongodb3/
logpath=/usr/local/mongodb/logs/mongodb3.log
logappend=true
fork=true
maxConns=5000
replSet=shard2
shardsvr=true
END
[root@shard2 bin]# cd
[root@shard2 ~]# mongod -f /usr/local/mongodb/bin/mongodb1.conf
[root@shard2 ~]# mongod -f /usr/local/mongodb/bin/mongodb2.conf
[root@shard2 ~]# mongod -f /usr/local/mongodb/bin/mongodb3.conf
[root@shard2 ~]# netstat -utpln |grep mongod
tcp    0    0 192.168.100.103:27019    0.0.0.0:*    LISTEN    2271/mongod
tcp    0    0 192.168.100.103:27017    0.0.0.0:*    LISTEN    2440/mongod
tcp    0    0 192.168.100.103:27018    0.0.0.0:*    LISTEN    1412/mongod
[root@shard2 ~]# echo -e "/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb1.conf \n/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb2.conf\n/usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/mongodb3.conf">>/etc/rc.local
[root@shard2 ~]# chmod +x /etc/rc.local
[root@shard2 ~]# cat <<END >>/etc/init.d/mongodb
#!/bin/bash
INSTANCE=\$1
ACTION=\$2
case "\$ACTION" in
'start')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
'stop')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown;;
'restart')
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf --shutdown
  /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/bin/"\$INSTANCE".conf;;
esac
END
[root@shard2 ~]# chmod +x /etc/init.d/mongodb
[root@shard2 ~]# mongo --port 27017 --host 192.168.100.103
> cfg={"_id":"shard2","members":[{"_id":0,"host":"192.168.100.103:27017"},{"_id":1,"host":"192.168.100.103:27018"},{"_id":2,"host":"192.168.100.103:27019"}]}
> rs.initiate(cfg)
{ "ok" : 1 }
shard2:PRIMARY> rs.status()
{
    "set" : "shard2",
    "date" : ISODate("2018-04-24T19:06:53.160Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) } },
    "members" : [
        { "_id" : 0, "name" : "192.168.100.103:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 6648, "optime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T19:06:50Z"), "electionTime" : Timestamp(1524590628, 1), "electionDate" : ISODate("2018-04-24T17:23:48Z"), "configVersion" : 1, "self" : true },
        { "_id" : 1, "name" : "192.168.100.103:27018", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6195, "optime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T19:06:50Z"), "optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"), "lastHeartbeat" : ISODate("2018-04-24T19:06:52.176Z"), "lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"), "pingMs" : NumberLong(0), "syncingTo" : "192.168.100.103:27017", "configVersion" : 1 },
        { "_id" : 2, "name" : "192.168.100.103:27019", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6195, "optime" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1524596810, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-04-24T19:06:50Z"), "optimeDurableDate" : ISODate("2018-04-24T19:06:50Z"), "lastHeartbeat" : ISODate("2018-04-24T19:06:52.177Z"), "lastHeartbeatRecv" : ISODate("2018-04-24T19:06:52.626Z"), "pingMs" : NumberLong(0), "syncingTo" : "192.168.100.103:27017", "configVersion" : 1 }
    ],
    "ok" : 1
}
shard2:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
shard2:PRIMARY> exit
[root@shard2 bin]# cat <<END >>/usr/local/mongodb/bin/mongos.conf
bind_ip=192.168.100.103
port=27025
logpath=/usr/local/mongodb/logs/mongodbs.log
fork=true
maxConns=5000
configdb=configs/192.168.100.101:27017,192.168.100.101:27018,192.168.100.101:27019
END
[root@shard2 bin]# touch ../logs/mongos.log
[root@shard2 bin]# chmod 777 ../logs/mongos.log
[root@shard2 bin]# mongos -f /usr/local/mongodb/bin/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1562
child process started successfully, parent exiting
[root@shard2 ~]# netstat -utpln |grep mongo
tcp    0    0 192.168.100.103:27019    0.0.0.0:*    LISTEN    1095/mongod
tcp    0    0 192.168.100.103:27020    0.0.0.0:*    LISTEN    1122/mongod
tcp    0    0 192.168.100.103:27025    0.0.0.0:*    LISTEN    12122/mongos
tcp    0    0 192.168.100.103:27017    0.0.0.0:*    LISTEN    1041/mongod
tcp    0    0 192.168.100.103:27018    0.0.0.0:*    LISTEN    1068/mongod
Configure sharding and verify:
192.168.100.101 (any of the three mongos instances can be used; the sharding configuration lives on the config servers, so all mongos see the same metadata):
[root@config ~]# mongo --port 27025 --host 192.168.100.101
mongos> use admin;
switched to db admin
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77") }
  shards:
  active mongoses:
        "3.6.3" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
mongos> sh.addShard("shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019")
{ "shardAdded" : "shard1", "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1524598580, 9), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1524598580, 9) }
mongos> sh.addShard("shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019")
{ "shardAdded" : "shard2", "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1524598657, 7), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1524598657, 7) }
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77") }
  shards:
        { "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 }
        { "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 }
  active mongoses:
        "3.6.3" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
Note: the config servers, mongos routers, shards and replica sets are now all wired together, but the goal is for inserted data to be sharded automatically. Connect to a mongos and enable sharding for the target database and collection.
[root@config ~]# mongo --port 27025 --host 192.168.100.101
mongos> use admin
mongos> sh.enableSharding("testdb")  ## enable sharding for the database
{ "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1524599672, 13), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1524599672, 13) }
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77") }
  shards:
        { "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 }
        { "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 }
  active mongoses:
        "3.6.3" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
        { "_id" : "testdb", "primary" : "shard2", "partitioned" : true }
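The next step uses the raw runCommand form; the sh.shardCollection() helper from section 3 would be equivalent here, e.g.:
mongos> sh.shardCollection("testdb.table1",{"_id":1})   ## same effect as the runCommand call below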
mongos> db.runCommand({shardcollection:"testdb.table1", key:{_id:1}});  ## shard a collection in the database
{ "collectionsharded" : "testdb.table1", "collectionUUID" : UUID("883bb1e2-b218-41ab-8122-6a5cf4df5e7b"), "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1524601471, 14), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1524601471, 14) }
mongos> use testdb;
mongos> for(i=1;i<=10000;i++){db.table1.insert({"id":i,"name":"huge"})};
WriteResult({ "nInserted" : 1 })
mongos> show collections
table1
mongos> db.table1.count()
10000
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77") }
  shards:
        { "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 }
        { "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 }
  active mongoses:
        "3.6.3" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
        { "_id" : "testdb", "primary" : "shard2", "partitioned" : true }
                testdb.table1
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
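To see the chunk metadata that mongos uses for routing, the config database can also be queried directly through mongos; a small sketch (output omitted):
mongos> use config
mongos> db.chunks.find({"ns":"testdb.table1"}).pretty()   ## chunk ranges and the shard that owns each
mongos> db.databases.find()                               ## the primary shard of every database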
"config", "primary" : "config", "partitioned" : true } config.system.sessions shard key: { "_id" : 1 } unique: false balancing: true chunks: shard1 1 { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) { "_id" : "testdb", "primary" : "shard2", "partitioned" : true } mongos> db.runCommand({shardcollection:"testdb.table1", key:{_id:1}}); ##開啓數據庫中集合的分片 { "collectionsharded" : "testdb.table1", "collectionUUID" : UUID("883bb1e2-b218-41ab-8122-6a5cf4df5e7b"), "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1524601471, 14), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1524601471, 14) } mongos> use testdb; mongos> for(i=1;i<=10000;i++){db.table1.insert({"id":i,"name":"huge"})}; WriteResult({ "nInserted" : 1 }) mongos> show collections table1 mongos> db.table1.count() 10000 mongos> sh.status() --- Sharding Status --- sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77") } shards: { "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 } { "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 } active mongoses: "3.6.3" : 1 autosplit: Currently enabled: yes balancer: Currently enabled: yes Currently running: no Failed balancer rounds in last 5 attempts: 0 Migration Results for the last 24 hours: No recent migrations databases: { "_id" : "config", "primary" : "config", "partitioned" : true } config.system.sessions shard key: { "_id" : 1 } unique: false balancing: true chunks: shard1 1 { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) { "_id" : "testdb", "primary" : "shard2", "partitioned" : true } testdb.table1 shard key: { "_id" : 1 } unique: false balancing: true chunks: shard2 1 { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) mongos> use admin switched to db admin mongos> sh.enableSharding("testdb2") { "ok" : 1, "$clusterTime" : { "clusterTime" : Timestamp(1524602371, 7), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } }, "operationTime" : Timestamp(1524602371, 7) } mongos> db.runCommand({shardcollection:"testdb2.table1", key:{_id:1}}); mongos> use testdb2 switched to db testdb2 mongos> for(i=1;i<=10000;i++){db.table1.insert({"id":i,"name":"huge"})}; WriteResult({ "nInserted" : 1 }) mongos> sh.status() --- Sharding Status --- sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5adf66d7518b3e5b3aad4e77") } shards: { "_id" : "shard1", "host" : "shard1/192.168.100.102:27017,192.168.100.102:27018,192.168.100.102:27019", "state" : 1 } { "_id" : "shard2", "host" : "shard2/192.168.100.103:27017,192.168.100.103:27018,192.168.100.103:27019", "state" : 1 } active mongoses: "3.6.3" : 1 autosplit: Currently enabled: yes balancer: Currently enabled: yes Currently running: no Failed balancer rounds in last 5 attempts: 0 Migration Results for the last 24 hours: No recent migrations databases: { "_id" : "config", "primary" : "config", "partitioned" : true } config.system.sessions shard key: { "_id" : 1 } unique: false balancing: true chunks: shard1 1 { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) { "_id" : "testdb", "primary" : "shard2", "partitioned" : true } 
                testdb.table1
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
        { "_id" : "testdb2", "primary" : "shard1", "partitioned" : true }
                testdb2.table1
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
mongos> db.table1.stats()  ## check the sharding status of the collection
{
    "sharded" : true,
    "capped" : false,
    "ns" : "testdb2.table1",
    "count" : 10000,
    "size" : 490000,
    "storageSize" : 167936,
    "totalIndexSize" : 102400,
    "indexSizes" : { "_id_" : 102400 },
    "avgObjSize" : 49,
    "nindexes" : 1,
    "nchunks" : 1,
    "shards" : {
        "shard1" : {
            "ns" : "testdb2.table1",
            "size" : 490000,
            "count" : 10000,
            "avgObjSize" : 49,
            "storageSize" : 167936,
            "capped" : false,
            "wiredTiger" : { "metadata" : { "formatVersion" : 1 }, "creationString" : ...
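Since testdb2's primary shard is shard1 and the collection still has a single chunk there, the same documents can be seen by connecting to the shard1 primary directly (a verification sketch only; normal access should always go through mongos):
[root@shard1 ~]# mongo --host 192.168.100.102 --port 27017
shard1:PRIMARY> use testdb2
shard1:PRIMARY> db.table1.count()    ## expected: 10000, all stored on this shard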
Log in to the mongos nodes on 192.168.100.102 and 192.168.100.103 and check the configuration above; it has already been synchronized (see the sketch below);
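For example, on 192.168.100.102 (the same check works on 192.168.100.103):
[root@shard1 ~]# mongo --host 192.168.100.102 --port 27025
mongos> sh.status()    ## shows the same shards, databases and chunks as on 192.168.100.101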
Shut down the primary node of the shard1 replica set on 192.168.100.102 and verify that data can still be accessed through mongos without problems; this demonstrates the high availability provided by the replica sets (a sketch follows).
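A sketch of this failover test, assuming the current shard1 primary is the instance defined by mongodb1.conf (confirm which member is primary with rs.status() first):
[root@shard1 ~]# /etc/init.d/mongodb mongodb1 stop    ## stop the shard1 primary; a secondary is elected
[root@config ~]# mongo --host 192.168.100.101 --port 27025
mongos> use testdb
mongos> db.table1.count()    ## still returns 10000 once the new primary takes over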