MongoDB is also a kind of NoSQL store: a document-oriented database built on distributed file storage and written in C++. Being document-oriented, it is the most feature-rich NoSQL database and the one that most closely resembles a relational database.
At the time of writing, the latest version on the official website is 4.0.1.
MongoDB stores data as documents whose structure is made up of key-value pairs. MongoDB documents are similar to JSON objects, and field values can contain other documents, arrays, and arrays of documents.
Learn about JSON.
Document: the basic unit of data in MongoDB, roughly equivalent to a row in a relational database, but considerably more complex than a row.
Collection: a group of documents; if a MongoDB document is like a row in a relational database, then a collection is like a table.
A single MongoDB server can host multiple independent databases, each with its own collections and permissions.
MongoDB ships with a simple but powerful JavaScript shell, which is extremely useful for administering MongoDB instances and manipulating data.
Every document has a special key, "_id", which is unique within the document's collection and acts as the equivalent of a relational table's primary key. An example document is shown below.
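For illustration only, a document might look like the following (a hypothetical example, not one used later in this post): the "_id" is generated automatically if you do not supply one, and field values can be embedded documents or arrays.

{
    "_id" : ObjectId("5b80feb826e56a836ac4d100"),            # generated automatically when not supplied (hypothetical value)
    "name" : "lzx",
    "address" : { "city" : "Beijing", "street" : "xxx" },    # a field whose value is an embedded document
    "tags" : [ "linux", "mongodb" ]                          # a field whose value is an array
}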
MongoDB的特色是高性能、易部署、易使用,存儲數據很是方便,它的主要特性有:web
面向集合存儲,易存儲對象類型的數據
模式自由
支持動態查詢
支持徹底索引,包含內部對象
支持複製和故障恢復
使用高效的二進制數據存儲,包括大型對象
文件存儲格式爲BSON(一種JSON的擴展)redis
# cd /etc/yum.repos.d/
# vim mongo.repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
# yum install -y mongodb-org
# systemctl start mongod
# ps aux |grep mongod
mongod 1522 0.8 0.9 972380 37512 ? Sl 10:30 0:00 /usr/bin/mongod -f /etc/mongod.conf
# netstat -lntp |grep mongod
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 1522/mongod
# mongo
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27017    # shows the IP and listening port
MongoDB server version: 3.4.16
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see http://docs.mongodb.org/
Questions? Try the support group http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten]
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten]
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten]
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten]
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'    # kernel-related warnings, safe to ignore
2018-08-25T10:30:18.431+0800 I CONTROL [initandlisten]
>
If the listening port is not the default 27017, add the --port option when connecting, for example:
mongo --port 27018
To connect to MongoDB remotely, add the --host option, for example:
mongo --host 127.0.0.1
If authentication is enabled, you need to supply a username and password when connecting, for example:
mongo -u <username> -p <password> --authenticationDatabase <dbname>    # similar to MySQL
> use admin    # use switches databases; user management can only be done from the admin database
switched to db admin
# create a user and grant roles: customData is a description of the user and can be omitted; roles specifies the roles, i.e. the permissions on a given database
> db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "admin123", roles: [ { role: "root", db:"admin" } ] } )
Successfully added user: { "user" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
> db.system.users.find()    # list all users; you need to switch to the admin database first
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "J8r1+mWlE53xnEKSiQ98hA==", "storedKey" : "7dl9ZmLoj8AHTb7LguX/w4C9X9U=", "serverKey" : "ZMpgYfQMxGAULJh9a5rFijPny2E=" } }, "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
# admin.admin means the admin user of the admin database
> show users    # list all users in the current database
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
> db.createUser({user:"lzx",pwd:"123123",roles:[{role:"read",db:"testdb"}]})    # create user lzx with the read role on the testdb database
Successfully added user: { "user" : "lzx", "roles" : [ { "role" : "read", "db" : "testdb" } ] }
> show users    # view the newly created user
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
{ "_id" : "admin.lzx", "user" : "lzx", "db" : "admin", "roles" : [ { "role" : "read", "db" : "testdb" } ] }
> db.dropUser('lzx')    # delete the lzx user we just created
true
> show users
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
# the newly created user is gone
After adding users, for authentication to actually take effect you also need to edit the service's startup unit:
# vim /usr/lib/systemd/system/mongod.service
Environment="OPTIONS=--auth -f /etc/mongod.conf"
# systemctl daemon-reload
# systemctl restart mongod
# ps aux |grep mongod
mongod 8049 17.8 1.0 972380 41420 ? Sl 14:03 0:02 /usr/bin/mongod --auth -f /etc/mongod.conf
# mongo --host 127.0.0.1 --port 27017
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.4.16
> use admin
switched to db admin
> show users    # the session we just opened has no privileges to run this command
2018-08-25T14:06:39.482+0800 E QUERY [thread1] Error: not authorized on admin to execute command { usersInfo: 1.0 } :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.getUsers@src/mongo/shell/db.js:1539:1
shellHelper.show@src/mongo/shell/utils.js:771:9
shellHelper@src/mongo/shell/utils.js:678:15
@(shellhelp2):1:1
# mongo --host 127.0.0.1 --port 27017 -u 'admin' -p 'admin123' --authenticationDatabase "admin"    # log in again, this time specifying the user, password, and authentication database
MongoDB shell version v3.4.16
connecting to: mongodb://127.0.0.1:27017/
MongoDB server version: 3.4.16
Server has startup warnings:
2018-08-25T14:03:27.518+0800 I CONTROL [initandlisten]
2018-08-25T14:03:27.518+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-25T14:03:27.518+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-25T14:03:27.518+0800 I CONTROL [initandlisten]
2018-08-25T14:03:27.518+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-25T14:03:27.518+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-08-25T14:03:27.518+0800 I CONTROL [initandlisten]
> use admin
switched to db admin
> show users    # this time the command succeeds
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
> use db1
switched to db db1
> db.createUser({user:"test1",pwd:"123123",roles:[{role:"readWrite",db:"db1"},{role:"read",db:"db2"}]})    # create user test1 with read/write access to db1 and read-only access to db2
Successfully added user: { "user" : "test1", "roles" : [ { "role" : "readWrite", "db" : "db1" }, { "role" : "read", "db" : "db2" } ] }
We run use db1 first, which means the user is created in the db1 database; a user's information lives with the database it was created in.
> db.auth('test1','123123')    # authenticate the user from the shell
1                              # a return value of 1 means authentication succeeded
> use db2
switched to db db2
> db.auth('test1','123123')
Error: Authentication failed.
0                              # authentication fails under db2, because the user was created in db1
MongoDB provides the following roles (a short example of granting and revoking roles follows the list):
read: allows the user to read the specified database
readWrite: allows the user to read and write the specified database
dbAdmin: allows the user to perform administrative functions in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
userAdmin: allows the user to write to the system.users collection, i.e. to create, delete, and manage users in the specified database
clusterAdmin: only available in the admin database; grants the user administrative rights over all sharding- and replica-set-related functions
readAnyDatabase: only available in the admin database; grants read access to all databases
readWriteAnyDatabase: only available in the admin database; grants read and write access to all databases
userAdminAnyDatabase: only available in the admin database; grants userAdmin privileges on all databases
dbAdminAnyDatabase: only available in the admin database; grants dbAdmin privileges on all databases
root: only available in the admin database; the superuser account with full privileges
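As a small sketch (it assumes the test1 user and the db1/db2 databases created above), roles can also be granted to or revoked from an existing user after it has been created:

> use db1
switched to db db1
> db.grantRolesToUser("test1", [ { role: "dbAdmin", db: "db1" } ])      # test1 can now also manage indexes and statistics on db1
> db.revokeRolesFromUser("test1", [ { role: "read", db: "db2" } ])      # drop the read-only role on db2
> db.getUser("test1")                                                   # verify the user's current role list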
> use db1
switched to db db1
> db.createCollection("mycol",{capped:true,size:6142800,max:10000})
{ "ok" : 1 }
This creates a collection named mycol; the options are optional and configure the collection's parameters.
capped: true/false (optional). If true, a capped collection is created. A capped collection has a fixed size and automatically overwrites its oldest entries once it reaches its maximum size. If capped is true, the size parameter must also be given (see the sketch after this list).
size (optional): the maximum size of the capped collection, in bytes. Required when capped is true.
max (optional): the maximum number of documents allowed in the capped collection.
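To see the capping behaviour in action, here is a minimal sketch (the collection name caplog and the tiny size/max values are chosen only for illustration): with max:3, inserting five documents leaves only the three newest.

> db.createCollection("caplog",{capped:true,size:4096,max:3})
{ "ok" : 1 }
> for (var i = 1; i <= 5; i++) db.caplog.insert({n:i})
WriteResult({ "nInserted" : 1 })
> db.caplog.find({},{_id:0})    # the two oldest documents have been overwritten
{ "n" : 3 }
{ "n" : 4 }
{ "n" : 5 }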
> show tables    # list collections
mycol
> show collections    # this also lists collections
mycol
> db.Account.insert({AccountID:1,UserName:"123",password:"123456"})    # insert a document; if the collection does not exist, it is created automatically
WriteResult({ "nInserted" : 1 })
> show collections
Account
mycol
> db.Account.insert({AccountID:2,UserName:"aaa",password:"aaaaaa"})    # insert another document
WriteResult({ "nInserted" : 1 })
> show collections
Account    # it still belongs to the Account collection
mycol
> db.Account.update({AccountID:1},{"$set":{"Age":20}})    # update the document with AccountID:1
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.Account.find()    # view all documents
{ "_id" : ObjectId("5b80feb826e56a836ac4d168"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
{ "_id" : ObjectId("5b8100cc26e56a836ac4d169"), "AccountID" : 2, "UserName" : "aaa", "password" : "aaaaaa" }
> db.Account.find({AccountID:1})    # query with a condition
{ "_id" : ObjectId("5b80feb826e56a836ac4d168"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
> db.Account.remove({AccountID:1})    # remove documents matching a condition
WriteResult({ "nRemoved" : 1 })
> db.Account.find()
{ "_id" : ObjectId("5b8100cc26e56a836ac4d169"), "AccountID" : 2, "UserName" : "aaa", "password" : "aaaaaa" }
> db.Account.drop()    # drop all documents in the Account collection, i.e. drop the collection itself
true
> show collections
mycol
> db.mycol.drop()
true
> show collections
> db.col123.insert({AccountID:1,UserName:"123",password:"123456"})WriteResult({ "nInserted" : 1 })> db.printCollectionStats() #查看全部集合狀態col123{ "ns" : "db1.col123", "size" : 80, "count" : 1, "avgObjSize" : 80, "storageSize" : 16384, "capped" : false, "wiredTiger" : { "metadata" : { "formatVersion" : 1 }, "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u", "type" : "file", "uri" : "statistics:table:collection-4-6088379814182883756", "LSM" : { "bloom filter false positives" : 0, "bloom filter hits" : 0, "bloom filter misses" : 0, "bloom filter pages evicted from cache" : 0, "bloom filter pages read into cache" : 0, "bloom filters in the LSM tree" : 0, "chunks in the LSM tree" : 0, "highest merge generation in the LSM tree" : 0, "queries that could have benefited from a Bloom filter that did not exist" : 0, "sleep for LSM checkpoint throttle" : 0, "sleep for LSM merge throttle" : 0, "total size of bloom filters" : 0 }, "block-manager" : { "allocations requiring file extension" : 3, "blocks allocated" : 3, "blocks freed" : 0, "checkpoint size" : 4096, "file allocation unit size" : 4096, "file bytes available for reuse" : 0, "file magic number" : 120897, "file major version number" : 1, "file size in bytes" : 16384, "minor version number" : 0 }, "btree" : { "btree checkpoint generation" : 22, "column-store fixed-size leaf pages" : 0, "column-store internal pages" : 0, "column-store variable-size RLE encoded values" : 0, "column-store variable-size deleted values" : 0, "column-store variable-size leaf pages" : 0, "fixed-record size" : 0, "maximum internal page key size" : 368, "maximum internal page size" : 4096, "maximum leaf page key size" : 2867, "maximum leaf page size" : 32768, "maximum leaf page value size" : 67108864, "maximum tree depth" : 3, "number of key/value pairs" : 0, "overflow pages" : 0, "pages rewritten by compaction" : 0, "row-store internal pages" : 0, "row-store leaf pages" : 0 }, "cache" : { "bytes currently in the cache" : 952, "bytes read into cache" : 0, "bytes written from cache" : 175, "checkpoint blocked page eviction" : 0, "data source pages selected for eviction unable to be evicted" : 0, "hazard pointer blocked page eviction" : 0, "in-memory page passed criteria to be split" : 0, "in-memory page splits" : 0, "internal pages evicted" : 0, "internal pages split during eviction" : 0, "leaf pages split during eviction" : 0, "modified pages evicted" : 0, "overflow pages read into cache" : 0, "overflow values cached in memory" : 0, "page split during eviction deepened the tree" : 0, "page written requiring lookaside records" : 0, "pages read into cache" : 0, "pages read into cache requiring 
lookaside entries" : 0, "pages requested from the cache" : 1, "pages written from cache" : 2, "pages written requiring in-memory restoration" : 0, "tracked dirty bytes in the cache" : 0, "unmodified pages evicted" : 0 }, "cache_walk" : { "Average difference between current eviction generation when the page was last considered" : 0, "Average on-disk page image size seen" : 0, "Clean pages currently in cache" : 0, "Current eviction generation" : 0, "Dirty pages currently in cache" : 0, "Entries in the root page" : 0, "Internal pages currently in cache" : 0, "Leaf pages currently in cache" : 0, "Maximum difference between current eviction generation when the page was last considered" : 0, "Maximum page size seen" : 0, "Minimum on-disk page image size seen" : 0, "On-disk page image sizes smaller than a single allocation unit" : 0, "Pages created in memory and never written" : 0, "Pages currently queued for eviction" : 0, "Pages that could not be queued for eviction" : 0, "Refs skipped during cache traversal" : 0, "Size of the root page" : 0, "Total number of pages currently in cache" : 0 }, "compression" : { "compressed pages read" : 0, "compressed pages written" : 0, "page written failed to compress" : 0, "page written was too small to compress" : 2, "raw compression call failed, additional data available" : 0, "raw compression call failed, no additional data available" : 0, "raw compression call succeeded" : 0 }, "cursor" : { "bulk-loaded cursor-insert calls" : 0, "create calls" : 1, "cursor-insert key and value bytes inserted" : 81, "cursor-remove key bytes removed" : 0, "cursor-update value bytes updated" : 0, "insert calls" : 1, "next calls" : 0, "prev calls" : 1, "remove calls" : 0, "reset calls" : 2, "restarted searches" : 0, "search calls" : 0, "search near calls" : 0, "truncate calls" : 0, "update calls" : 0 }, "reconciliation" : { "dictionary matches" : 0, "fast-path pages deleted" : 0, "internal page key bytes discarded using suffix compression" : 0, "internal page multi-block writes" : 0, "internal-page overflow keys" : 0, "leaf page key bytes discarded using prefix compression" : 0, "leaf page multi-block writes" : 0, "leaf-page overflow keys" : 0, "maximum blocks required for a page" : 0, "overflow values written" : 0, "page checksum matches" : 0, "page reconciliation calls" : 2, "page reconciliation calls for eviction" : 0, "pages deleted" : 0 }, "session" : { "object compaction" : 0, "open cursor count" : 1 }, "transaction" : { "update conflicts" : 0 } }, "nindexes" : 1, "totalIndexSize" : 16384, "indexSizes" : { "_id_" : 16384 }, "ok" : 1}
There are two MongoDB extensions for PHP: mongo.so and mongodb.so. mongo.so targets PHP 5.x, while mongodb.so is the newer extension; either one can be used. Below we install the mongodb.so extension.
# cd /software
# wget https://pecl.php.net/get/mongodb-1.3.0.tgz
# tar zxf mongodb-1.3.0.tgz
# cd mongodb-1.3.0
# /usr/local/php-fpm/bin/phpize
Configuring for:
PHP Api Version:         20131106
Zend Module Api No:      20131226
Zend Extension Api No:   220131226
# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
# echo $?
0
# make && make install
# echo $?
0
# vim /usr/local/php-fpm/etc/php.ini    # add the following line
extension=mongodb.so
# /usr/local/php-fpm/bin/php -m
[PHP Modules]
Core ctype curl date dom ereg exif fileinfo filter ftp gd hash iconv json libxml mbstring mcrypt
mongodb    # mongodb in the list means the extension loaded correctly
mysql openssl pcre PDO pdo_sqlite Phar posix redis Reflection session SimpleXML soap SPL sqlite3 standard tokenizer xml xmlreader xmlwriter zlib
[Zend Modules]
# /etc/init.d/php-fpm restart
Gracefully shutting down php-fpm  done
Starting php-fpm  done
Installing mongo.so follows essentially the same steps, so it is not repeated here.
MongoDB can also be configured for replication. Early versions used master-slave replication, one master and one slave, much like MySQL; but in that architecture the slave was read-only, and when the master went down the slave could not automatically take over as master. That mode is now obsolete.
It has been replaced by the replica set: one primary and several secondaries, with the secondaries still read-only. Priorities (weights) can be assigned, and when the primary goes down, the secondary with the highest priority is automatically promoted to primary.
In addition, this architecture can include an arbiter role, which only votes on whether the primary is down and stores no data; this also helps prevent split-brain. Because reads and writes both go to the primary, achieving load balancing requires explicitly pointing reads at another server; a small example is sketched below.
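As a minimal sketch (the arbiter address 192.168.100.180:27018 is a hypothetical host, not part of the environment configured below), once the replica set built below is running, an arbiter can be added from the primary, and a shell session can direct its reads at secondaries:

lzx:PRIMARY> rs.addArb("192.168.100.180:27018")                  # the arbiter votes in elections but stores no data
{ "ok" : 1 }
lzx:PRIMARY> db.getMongo().setReadPref("secondaryPreferred")     # send this session's reads to a secondary when one is available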
Turn off the firewall and SELinux on all machines.
primary     192.168.100.150 (primary)
secondary   192.168.100.160 (secondary 1)
secondary   192.168.100.170 (secondary 2)
# vim /etc/mongod.conf
port: 27018
bindIp: 127.0.0.1,192.168.100.150    # this machine's internal IP
replication:
  oplogSizeMB: 20     # remember the space after the colon, otherwise startup fails
  replSetName: lzx    # name of the replica set
# systemctl restart mongod
# ps aux |grep mongod
mongod 1510 8.0 1.4 1017096 46800 ? Sl 21:21 0:00 /usr/bin/mongod --auth -f /etc/mongod.conf
# vim /usr/lib/systemd/system/mongod.service
Environment="OPTIONS=-f /etc/mongod.conf"    # drop --auth to keep the experiment simple
# systemctl daemon-reload
# systemctl restart mongod
# ps aux |grep mongod
mongod 1716 14.3 1.3 1017096 44488 ? Sl 21:40 0:00 /usr/bin/mongod -f /etc/mongod.conf
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.150:27018 0.0.0.0:* LISTEN 1716/mongod
tcp 0 0 127.0.0.1:27018 0.0.0.0:* LISTEN 1716/mongod
# vim /etc/mongod.conf
port: 27019
bindIp: 127.0.0.1,192.168.100.160    # this machine's internal IP
replication:
  oplogSizeMB: 20
  replSetName: lzx
# vim /usr/lib/systemd/system/mongod.service
Environment="OPTIONS=-f /etc/mongod.conf"
# systemctl daemon-reload
# systemctl restart mongod
# ps aux |grep mongod
mongod 2076 14.0 1.2 1017092 48488 ? Sl 21:40 0:00 /usr/bin/mongod -f /etc/mongod.conf
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.160:27019 0.0.0.0:* LISTEN 2076/mongod
tcp 0 0 127.0.0.1:27019 0.0.0.0:* LISTEN 2076/mongod
# vim /etc/mongod.conf
port: 27020
bindIp: 127.0.0.1,192.168.100.170    # this machine's internal IP
replication:
  oplogSizeMB: 20
  replSetName: lzx
# vim /usr/lib/systemd/system/mongod.service
Environment="OPTIONS=-f /etc/mongod.conf"
# systemctl daemon-reload
# systemctl restart mongod
# ps aux |grep mongod
mongod 2086 11.5 1.1 1017100 44588 ? Sl 21:39 0:00 /usr/bin/mongod -f /etc/mongod.conf
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.170:27020 0.0.0.0:* LISTEN 2086/mongod
tcp 0 0 127.0.0.1:27020 0.0.0.0:* LISTEN 2086/mongod
# mongo --port 27018MongoDB shell version v3.4.16 connecting to: mongodb://127.0.0.1:27018 MongoDB server version: 3.4.16 Server has startup warnings: 2018-08-25T21:40:19.064+0800 I CONTROL [initandlisten] 2018-08-25T21:40:19.064+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database. 2018-08-25T21:40:19.064+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted. 2018-08-25T21:40:19.064+0800 I CONTROL [initandlisten] 2018-08-25T21:40:19.065+0800 I CONTROL [initandlisten] 2018-08-25T21:40:19.065+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.2018-08-25T21:40:19.065+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'2018-08-25T21:40:19.065+0800 I CONTROL [initandlisten] 2018-08-25T21:40:19.065+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.2018-08-25T21:40:19.065+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'2018-08-25T21:40:19.065+0800 I CONTROL [initandlisten] > use admin switched to db admin
> config={_id:"lzx",members:[{_id:0,host:"192.168.100.150:27018"},{_id:1,host:"192.168.100.160:27019"},{_id:2,host:"192.168.100.170:27020"}]}{ "_id" : "lzx", "members" : [ { "_id" : 0, "host" : "192.168.100.150:27018" }, { "_id" : 1, "host" : "192.168.100.160:27019" }, { "_id" : 2, "host" : "192.168.100.170:27020" } ]}> rs.initiate(config) #初始化副本集,必需要保證以前數據庫中沒有任何數據,不然報錯{ "ok" : 1 }lzx:OTHER> rs.status() #查看副本集狀態{ "set" : "lzx", "date" : ISODate("2018-08-26T09:29:41.449Z"), "myState" : 1, "term" : NumberLong(1), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1535275776, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1535275776, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1535275776, 1), "t" : NumberLong(1) } }, "members" : [ { "_id" : 0, "name" : "192.168.100.150:27018", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", #192.168.100.150是primary "uptime" : 268, "optime" : { "ts" : Timestamp(1535275776, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-08-26T09:29:36Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1535275644, 1), "electionDate" : ISODate("2018-08-26T09:27:24Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 1, "name" : "192.168.100.160:27019", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", #192.168.100.160是secondary "uptime" : 148, "optime" : { "ts" : Timestamp(1535275776, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1535275776, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-08-26T09:29:36Z"), "optimeDurableDate" : ISODate("2018-08-26T09:29:36Z"), "lastHeartbeat" : ISODate("2018-08-26T09:29:40.721Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T09:29:40.698Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.100.170:27020", "syncSourceHost" : "192.168.100.170:27020", "syncSourceId" : 2, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 2, "name" : "192.168.100.170:27020", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", #192.168.100.170是secondary "uptime" : 148, "optime" : { "ts" : Timestamp(1535275776, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1535275776, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-08-26T09:29:36Z"), "optimeDurableDate" : ISODate("2018-08-26T09:29:36Z"), "lastHeartbeat" : ISODate("2018-08-26T09:29:40.721Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T09:29:39.563Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.100.150:27018", "syncSourceHost" : "192.168.100.150:27018", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 } ], "ok" : 1}
If the two secondaries show "stateStr" : "STARTUP", run the following:
> var config={_id:"lzx",members:[{_id:0,host:"192.168.100.150:27018"},{_id:1,host:"192.168.100.160:27019"},{_id:2,host:"192.168.100.170:27020"}]}
> rs.reconfig(config)
Run rs.status() again and the secondaries' state will have changed to SECONDARY. If there is no PRIMARY, raise a member's priority; the member with the highest priority becomes the PRIMARY.
lzx:PRIMARY> use mydb
switched to db mydb
lzx:PRIMARY> db.acc.insert({AccountID:1,UserName:"123",password:"123456"})    # create a collection and insert data
WriteResult({ "nInserted" : 1 })
lzx:PRIMARY> show dbs
admin  0.000GB
local  0.000GB
mydb   0.000GB    # the mydb database exists
lzx:PRIMARY> use mydb
switched to db mydb
lzx:PRIMARY> show tables
acc    # the acc collection exists
# mongo --port 27019MongoDB shell version v3.4.16 connecting to: mongodb://127.0.0.1:27019/ MongoDB server version: 3.4.16 Server has startup warnings: 2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] 2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database. 2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted. 2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] 2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] 2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] 2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'2018-08-26T05:32:20.355-0400 I CONTROL [initandlisten] lzx:SECONDARY> show dbs 2018-08-26T05:58:31.215-0400 E QUERY [thread1] Error: listDatabases failed:{ "ok" : 0, "errmsg" : "not master and slaveOk=false", #有這樣的報錯信息 "code" : 13435, "codeName" : "NotMasterNoSlaveOk"} :_getErrorWithCode@src/mongo/shell/utils.js:25:13 Mongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1 shellHelper.show@src/mongo/shell/utils.js:788:19 shellHelper@src/mongo/shell/utils.js:678:15 @(shellhelp2):1:1 lzx:SECONDARY> rs.slaveOk() #設置slaveOk=truelzx:SECONDARY> show dbs admin 0.000GB local 0.000GB mydb 0.000GB #能夠看到有mydb庫lzx:SECONDARY> use mydb switched to db mydb lzx:SECONDARY> show tables acc #能夠看到有acc集合
# mongo --port 27020MongoDB shell version v3.4.16 connecting to: mongodb://127.0.0.1:27020/ MongoDB server version: 3.4.16 Server has startup warnings: 2018-08-26T05:25:13.079-0400 I CONTROL [initandlisten] 2018-08-26T05:25:13.079-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database. 2018-08-26T05:25:13.079-0400 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted. 2018-08-26T05:25:13.079-0400 I CONTROL [initandlisten] 2018-08-26T05:25:13.079-0400 I CONTROL [initandlisten] 2018-08-26T05:25:13.079-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.2018-08-26T05:25:13.079-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'2018-08-26T05:25:13.080-0400 I CONTROL [initandlisten] 2018-08-26T05:25:13.080-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.2018-08-26T05:25:13.080-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'2018-08-26T05:25:13.080-0400 I CONTROL [initandlisten]lzx:SECONDARY> show dbs 2018-08-26T05:58:31.215-0400 E QUERY [thread1] Error: listDatabases failed:{ "ok" : 0, "errmsg" : "not master and slaveOk=false", "code" : 13435, "codeName" : "NotMasterNoSlaveOk"} :_getErrorWithCode@src/mongo/shell/utils.js:25:13 Mongo.prototype.getDBs@src/mongo/shell/mongo.js:62:1 shellHelper.show@src/mongo/shell/utils.js:788:19 shellHelper@src/mongo/shell/utils.js:678:15 @(shellhelp2):1:1 lzx:SECONDARY> rs.slaveOk() #設置slaveOk=truelzx:SECONDARY> show dbs admin 0.000GB local 0.000GB mydb 0.000GB #能夠看到有mydb庫lzx:SECONDARY> use mydb switched to db mydb lzx:SECONDARY> show tables acc #能夠看到有acc集合
lzx:PRIMARY> rs.config(){ "_id" : "lzx", "version" : 1, "protocolVersion" : NumberLong(1), "members" : [ { "_id" : 0, "host" : "192.168.100.150:27018", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, #權重爲1 "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 1, "host" : "192.168.100.160:27019", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, #權重爲1 "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 2, "host" : "192.168.100.170:27020", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, #權重爲1 "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatIntervalMillis" : 2000, "heartbeatTimeoutSecs" : 10, "electionTimeoutMillis" : 10000, "catchUpTimeoutMillis" : 60000, "getLastErrorModes" : { }, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }, "replicaSetId" : ObjectId("5b827271b551f8b2b89e2ff0") }}
# iptables -I INPUT -p tcp --dport 27018 -j DROP    # on the primary (192.168.100.150), block its mongod port to simulate an outage
lzx:SECONDARY> rs.status(){ "set" : "lzx", "date" : ISODate("2018-08-26T10:13:08.119Z"), "myState" : 2, "term" : NumberLong(2), "syncingTo" : "192.168.100.170:27020", "syncSourceHost" : "192.168.100.170:27020", "syncSourceId" : 2, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1535277967, 1), "t" : NumberLong(2) }, "appliedOpTime" : { "ts" : Timestamp(1535277967, 1), "t" : NumberLong(2) }, "durableOpTime" : { "ts" : Timestamp(1535277967, 1), "t" : NumberLong(2) } }, "members" : [ { "_id" : 0, "name" : "192.168.100.150:27018", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)", #192.168.100.150變成不可達 "uptime" : 0, "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }, "optimeDurable" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }, "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"), "lastHeartbeat" : ISODate("2018-08-26T10:13:06.423Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T10:13:07.227Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "Couldn't get a connection within the time limit", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : -1 }, { "_id" : 1, "name" : "192.168.100.160:27019", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 2449, "optime" : { "ts" : Timestamp(1535277967, 1), "t" : NumberLong(2) }, "optimeDate" : ISODate("2018-08-26T10:06:07Z"), "syncingTo" : "192.168.100.170:27020", "syncSourceHost" : "192.168.100.170:27020", "syncSourceId" : 2, "infoMessage" : "", "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 2, "name" : "192.168.100.170:27020", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", #192.168.100.170變成primary "uptime" : 2344, "optime" : { "ts" : Timestamp(1535277967, 1), "t" : NumberLong(2) }, "optimeDurable" : { "ts" : Timestamp(1535277967, 1), "t" : NumberLong(2) }, "optimeDate" : ISODate("2018-08-26T10:06:07Z"), "optimeDurableDate" : ISODate("2018-08-26T10:06:07Z"), "lastHeartbeat" : ISODate("2018-08-26T10:13:06.469Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T10:13:06.866Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1535277926, 2), "electionDate" : ISODate("2018-08-26T10:05:26Z"), "configVersion" : 1 } ], "ok" : 1}
# iptables -D INPUT -p tcp --dport 27018 -j DROP# mongo --port 27018MongoDB shell version v3.4.16 connecting to: mongodb://127.0.0.1:27018/ MongoDB server version: 3.4.16 Server has startup warnings: 2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] 2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database. 2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted. 2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] 2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] 2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] 2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'2018-08-26T05:25:16.157-0400 I CONTROL [initandlisten] lzx:SECONDARY> #192.168.100.150仍然是從,由於三者權重相等,除非它的權重最高才會變回primary
lzx:PRIMARY> cfg=rs.conf(){ "_id" : "lzx", "version" : 1, "protocolVersion" : NumberLong(1), "members" : [ { "_id" : 0, "host" : "192.168.100.150:27018", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 1, "host" : "192.168.100.160:27019", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 2, "host" : "192.168.100.170:27020", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatIntervalMillis" : 2000, "heartbeatTimeoutSecs" : 10, "electionTimeoutMillis" : 10000, "catchUpTimeoutMillis" : 60000, "getLastErrorModes" : { }, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }, "replicaSetId" : ObjectId("5b827271b551f8b2b89e2ff0") }}lzx:PRIMARY> cfg.members[0].priority = 3 #設置成員0權重爲33 lzx:PRIMARY> cfg.members[1].priority = 2 #設置成員1權重爲22 lzx:PRIMARY> cfg.members[2].priority = 1 #設置成員2權重爲11 lzx:PRIMARY> rs.reconfig(cfg) #使上面權重設置生效{ "ok" : 1 }lzx:PRIMARY> #按Enter鍵2018-08-26T06:21:40.200-0400 I NETWORK [thread1] trying reconnect to 127.0.0.1:27020 (127.0.0.1) failed 2018-08-26T06:21:40.200-0400 I NETWORK [thread1] reconnect 127.0.0.1:27020 (127.0.0.1) ok lzx:SECONDARY> #從2由主變爲從lzx:SECONDARY> rs.config() #查看權重{ "_id" : "lzx", "version" : 2, "protocolVersion" : NumberLong(1), "members" : [ { "_id" : 0, "host" : "192.168.100.150:27018", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 3, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 1, "host" : "192.168.100.160:27019", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 2, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 2, "host" : "192.168.100.170:27020", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 } ], "settings" : { "chainingAllowed" : true, "heartbeatIntervalMillis" : 2000, "heartbeatTimeoutSecs" : 10, "electionTimeoutMillis" : 10000, "catchUpTimeoutMillis" : 60000, "getLastErrorModes" : { }, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }, "replicaSetId" : ObjectId("5b827271b551f8b2b89e2ff0") }}
Sharding splits a database apart, distributing large collections across different servers. For example, 100 GB of data can be split into 10 pieces stored on 10 servers, so that each machine holds only 10 GB.
Storage of and access to sharded data is handled by a mongos process, so mongos is the core of the whole sharding architecture. It is transparent to clients: a client only needs to send its reads and writes to mongos.
Although sharding spreads data across many servers, each node still needs a standby role so that the data remains highly available. When the system needs more space or resources, sharding lets us scale out on demand: simply add machines running the MongoDB service to the sharded cluster.
Sharding-related concepts in MongoDB:
mongos: the entry point for requests to the database cluster. All requests are coordinated through mongos, so there is no need to add a routing layer in the application; mongos itself is a request-dispatching center that forwards each data request to the appropriate shard server. In production there are usually multiple mongos instances serving as entry points, so that the failure of one does not leave all MongoDB requests with nowhere to go.
config server: the configuration servers, which store the metadata (routing and sharding configuration) for all databases. mongos does not physically store shard and routing information itself, it only caches it in memory; the config servers are where this information actually lives. The first time mongos starts, or after a restart, it loads its configuration from the config servers, and whenever the configuration changes, all mongos instances are notified so they can update their state and keep routing accurately. In production there are usually multiple config servers, because they store the sharding routing metadata, which must not be lost.
shard: a mongod instance that stores part of a collection's data. Each shard is a standalone mongod service or a replica set; in production, every shard should be a replica set.
A: 192.168.100.150    B: 192.168.100.160    C: 192.168.100.170
A runs: mongos, config server, replica set 1 primary, replica set 2 secondary, replica set 3 arbiter
B runs: mongos, config server, replica set 1 arbiter, replica set 2 primary, replica set 3 secondary
C runs: mongos, config server, replica set 1 secondary, replica set 2 arbiter, replica set 3 primary
Port allocation: mongos 20000, config server 21000, replica set 1 27001, replica set 2 27002, replica set 3 27003
On all three machines, create the directories each role needs:
/data/mongodb/mongos/log
/data/mongodb/config/{data,log}
/data/mongodb/shard1/{data,log}
/data/mongodb/shard2/{data,log}
/data/mongodb/shard3/{data,log}
Turn off the firewall and SELinux on all three machines, or add rules for the corresponding ports.
# mkdir -p /data/mongodb/mongos/log
# mkdir -p /data/mongodb/config/{data,log}
# mkdir -p /data/mongodb/shard1/{data,log}
# mkdir -p /data/mongodb/shard2/{data,log}
# mkdir -p /data/mongodb/shard3/{data,log}
# mkdir /etc/mongod/
# vim /etc/mongod/config.conf
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.100.150
port = 21000
fork = true
configsvr = true
replSet = configs    # replica set name
maxConns = 20000     # maximum number of connections
# mongod -f /etc/mongod/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10212
child process started successfully, parent exiting
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.150:21000 0.0.0.0:* LISTEN 10212/mongod
tcp 0 0 192.168.100.150:27018 0.0.0.0:* LISTEN 10185/mongod
tcp 0 0 127.0.0.1:27018 0.0.0.0:* LISTEN 10185/mongod
# mkdir -p /data/mongodb/mongos/log
# mkdir -p /data/mongodb/config/{data,log}
# mkdir -p /data/mongodb/shard1/{data,log}
# mkdir -p /data/mongodb/shard2/{data,log}
# mkdir -p /data/mongodb/shard3/{data,log}
# mkdir /etc/mongod/
# vim /etc/mongod/config.conf
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.100.160
port = 21000
fork = true
configsvr = true
replSet = configs
maxConns = 20000
# mongod -f /etc/mongod/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10472
child process started successfully, parent exiting
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.160:27019 0.0.0.0:* LISTEN 10446/mongod
tcp 0 0 127.0.0.1:27019 0.0.0.0:* LISTEN 10446/mongod
tcp 0 0 192.168.100.160:21000 0.0.0.0:* LISTEN 10472/mongod
# mkdir -p /data/mongodb/mongos/log
# mkdir -p /data/mongodb/config/{data,log}
# mkdir -p /data/mongodb/shard1/{data,log}
# mkdir -p /data/mongodb/shard2/{data,log}
# mkdir -p /data/mongodb/shard3/{data,log}
# mkdir /etc/mongod/
# vim /etc/mongod/config.conf
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.100.170
port = 21000
fork = true
configsvr = true
replSet = configs
maxConns = 20000
# mongod -f /etc/mongod/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10444
child process started successfully, parent exiting
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.170:27020 0.0.0.0:* LISTEN 10418/mongod
tcp 0 0 127.0.0.1:27020 0.0.0.0:* LISTEN 10418/mongod
tcp 0 0 192.168.100.170:21000 0.0.0.0:* LISTEN 10444/mongod
The service is now running on all three machines. Since their priorities are all equal, any one of them can be used to create the replica set; below I work on machine A.
# mongo --host 192.168.100.150 --port 21000> use admin switched to db admin> config={_id:"configs",members:[{_id:0,host:"192.168.100.150:21000"},{_id:1,host:"192.168.100.160:21000"},{_id:2,host:"192.168.100.170:21000"}]}{ "_id" : "configs", "members" : [ { "_id" : 0, "host" : "192.168.100.150:21000" }, { "_id" : 1, "host" : "192.168.100.160:21000" }, { "_id" : 2, "host" : "192.168.100.170:21000" } ]}> rs.initiate(config) #初始化副本集{ "ok" : 1 }configs:OTHER> rs.status() #查看副本集狀態{ "set" : "configs", "date" : ISODate("2018-08-27T05:28:33.632Z"), "myState" : 1, "term" : NumberLong(1), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "configsvr" : true, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) }, "appliedOpTime" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) }, "durableOpTime" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) } }, "members" : [ { "_id" : 0, "name" : "192.168.100.150:21000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", #192.168.100.150是primary "uptime" : 1163, "optime" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-08-27T05:28:14Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "could not find member to sync from", "electionTime" : Timestamp(1535347692, 1), "electionDate" : ISODate("2018-08-27T05:28:12Z"), "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 1, "name" : "192.168.100.160:21000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 32, "optime" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-08-27T05:28:14Z"), "optimeDurableDate" : ISODate("2018-08-27T05:28:14Z"), "lastHeartbeat" : ISODate("2018-08-27T05:28:32.620Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T05:28:31.678Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.100.170:21000", "syncSourceHost" : "192.168.100.170:21000", "syncSourceId" : 2, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 2, "name" : "192.168.100.170:21000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 32, "optime" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) }, "optimeDurable" : { "ts" : Timestamp(1535347694, 1), "t" : NumberLong(1) }, "optimeDate" : ISODate("2018-08-27T05:28:14Z"), "optimeDurableDate" : ISODate("2018-08-27T05:28:14Z"), "lastHeartbeat" : ISODate("2018-08-27T05:28:32.620Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T05:28:32.763Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "192.168.100.150:21000", "syncSourceHost" : "192.168.100.150:21000", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 1 } ], "ok" : 1}
At this point, the config server setup is complete.
# vim /etc/mongod/shard1.conf
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.100.150
port = 27001
fork = true
httpinterface = true    # enable the web monitoring interface
rest = true
replSet = shard1        # replica set name
shardsvr = true
maxConns = 20000        # maximum number of connections
# cd /etc/mongod/
# cp shard1.conf shard2.conf
# cp shard1.conf shard3.conf
# vim shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.100.150
port = 27002    # note the changed port
fork = true
httpinterface = true
rest = true
replSet = shard2
shardsvr = true
maxConns = 20000
# vim shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.100.150
port = 27003    # note the changed port
fork = true
httpinterface = true
rest = true
replSet = shard3
shardsvr = true
maxConns = 20000
# mongod -f /etc/mongod/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10387
child process started successfully, parent exiting
# mongod -f /etc/mongod/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10417
child process started successfully, parent exiting
# mongod -f /etc/mongod/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10446
child process started successfully, parent exiting
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.150:27001 0.0.0.0:* LISTEN 10387/mongod
tcp 0 0 192.168.100.150:27002 0.0.0.0:* LISTEN 10417/mongod
tcp 0 0 192.168.100.150:27003 0.0.0.0:* LISTEN 10446/mongod
tcp 0 0 192.168.100.150:28001 0.0.0.0:* LISTEN 10387/mongod
tcp 0 0 192.168.100.150:28002 0.0.0.0:* LISTEN 10417/mongod
tcp 0 0 192.168.100.150:28003 0.0.0.0:* LISTEN 10446/mongod
tcp 0 0 192.168.100.150:21000 0.0.0.0:* LISTEN 10212/mongod
tcp 0 0 192.168.100.150:27018 0.0.0.0:* LISTEN 10185/mongod
tcp 0 0 127.0.0.1:27018 0.0.0.0:* LISTEN 10185/mongod
# vim /etc/mongod/shard1.conf
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.100.160
port = 27001
fork = true
httpinterface = true
rest = true
replSet = shard1
shardsvr = true
maxConns = 20000
# cd /etc/mongod/
# cp shard1.conf shard2.conf
# cp shard1.conf shard3.conf
# vim shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.100.160
port = 27002
fork = true
httpinterface = true
rest = true
replSet = shard2
shardsvr = true
maxConns = 20000
# vim shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.100.160
port = 27003
fork = true
httpinterface = true
rest = true
replSet = shard3
shardsvr = true
maxConns = 20000
# mongod -f /etc/mongod/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10795
child process started successfully, parent exiting
# mongod -f /etc/mongod/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10824
child process started successfully, parent exiting
# mongod -f /etc/mongod/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10853
child process started successfully, parent exiting
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.160:27019 0.0.0.0:* LISTEN 10446/mongod
tcp 0 0 127.0.0.1:27019 0.0.0.0:* LISTEN 10446/mongod
tcp 0 0 192.168.100.160:27001 0.0.0.0:* LISTEN 10795/mongod
tcp 0 0 192.168.100.160:27002 0.0.0.0:* LISTEN 10824/mongod
tcp 0 0 192.168.100.160:27003 0.0.0.0:* LISTEN 10853/mongod
tcp 0 0 192.168.100.160:28001 0.0.0.0:* LISTEN 10795/mongod
tcp 0 0 192.168.100.160:28002 0.0.0.0:* LISTEN 10824/mongod
tcp 0 0 192.168.100.160:28003 0.0.0.0:* LISTEN 10853/mongod
tcp 0 0 192.168.100.160:21000 0.0.0.0:* LISTEN 10472/mongod
# vim /etc/mongod/shard1.conf
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.100.170
port = 27001
fork = true
httpinterface = true
rest = true
replSet = shard1
shardsvr = true
maxConns = 20000
# cd /etc/mongod/
# cp shard1.conf shard2.conf
# cp shard1.conf shard3.conf
# vim shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.100.170
port = 27002
fork = true
httpinterface = true
rest = true
replSet = shard2
shardsvr = true
maxConns = 20000
# vim shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.100.170
port = 27003
fork = true
httpinterface = true
rest = true
replSet = shard3
shardsvr = true
maxConns = 20000
# mongod -f /etc/mongod/shard1.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10607
child process started successfully, parent exiting
# mongod -f /etc/mongod/shard2.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10636
child process started successfully, parent exiting
# mongod -f /etc/mongod/shard3.conf
about to fork child process, waiting until server is ready for connections.
forked process: 10665
child process started successfully, parent exiting
# netstat -lntp |grep mongod
tcp 0 0 192.168.100.170:27020 0.0.0.0:* LISTEN 10418/mongod
tcp 0 0 127.0.0.1:27020 0.0.0.0:* LISTEN 10418/mongod
tcp 0 0 192.168.100.170:27001 0.0.0.0:* LISTEN 10607/mongod
tcp 0 0 192.168.100.170:27002 0.0.0.0:* LISTEN 10636/mongod
tcp 0 0 192.168.100.170:27003 0.0.0.0:* LISTEN 10665/mongod
tcp 0 0 192.168.100.170:28001 0.0.0.0:* LISTEN 10607/mongod
tcp 0 0 192.168.100.170:28002 0.0.0.0:* LISTEN 10636/mongod
tcp 0 0 192.168.100.170:28003 0.0.0.0:* LISTEN 10665/mongod
tcp 0 0 192.168.100.170:21000 0.0.0.0:* LISTEN 10444/mongod
Because each shard's replica set includes an arbiter node, and an arbiter cannot serve as a login entry point, you have to log in through a non-arbiter node to perform these operations.
For shard1, I log in from machine A.
# mongo --host 192.168.100.150 --port 27001> use admin switched to db admin> config={_id:"shard1",members:[{_id:0,host:"192.168.100.150:27001"},{_id:1,host:"192.168.100.160:27001"},{_id:2,host:"192.168.100.170:27001",arbiterOnly:true}]} //192.168.100.170做爲仲裁節點 { "_id" : "shard1", "members" : [ { "_id" : 0, "host" : "192.168.100.150:27001" }, { "_id" : 1, "host" : "192.168.100.160:27001" }, { "_id" : 2, "host" : "192.168.100.170:27001", "arbiterOnly" : true } ]}> rs.initiate(config) #初始化副本集{ "ok" : 1 }shard1:OTHER> rs.status(){ "set" : "shard1", "date" : ISODate("2018-08-27T04:45:50.273Z"), "myState" : 2, "term" : NumberLong(0), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }, "appliedOpTime" : { "ts" : Timestamp(1535345139, 1), "t" : NumberLong(-1) }, "durableOpTime" : { "ts" : Timestamp(1535345139, 1), "t" : NumberLong(-1) } }, "members" : [ { "_id" : 0, "name" : "192.168.100.150:27001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 828, "optime" : { "ts" : Timestamp(1535345139, 1), "t" : NumberLong(-1) }, "optimeDate" : ISODate("2018-08-27T04:45:39Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "could not find member to sync from", "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 1, "name" : "192.168.100.160:27001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 10, "optime" : { "ts" : Timestamp(1535345139, 1), "t" : NumberLong(-1) }, "optimeDurable" : { "ts" : Timestamp(1535345139, 1), "t" : NumberLong(-1) }, "optimeDate" : ISODate("2018-08-27T04:45:39Z"), "optimeDurableDate" : ISODate("2018-08-27T04:45:39Z"), "lastHeartbeat" : ISODate("2018-08-27T04:45:49.984Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T04:45:47.086Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 2, "name" : "192.168.100.170:27001", "health" : 1, "state" : 7, "stateStr" : "ARBITER", #仲裁節點 "uptime" : 10, "lastHeartbeat" : ISODate("2018-08-27T04:45:49.984Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T04:45:46.960Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 } ], "ok" : 1}shard1:SECONDARY> #多按幾回Enter鍵shard1:PRIMARY> #自動變爲primary,再次rs.status()就能看到
For shard2, I log in from machine B.
# mongo --host 192.168.100.160 --port 27002> use admin switched to db admin> config={_id:"shard2",members:[{_id:0,host:"192.168.100.150:27002",arbiterOnly:true},{_id:1,host:"192.168.100.160:27002"},{_id:2,host:"192.168.100.170:27002"}]}{ "_id" : "shard2", "members" : [ { "_id" : 0, "host" : "192.168.100.150:27002", "arbiterOnly" : true }, { "_id" : 1, "host" : "192.168.100.160:27002" }, { "_id" : 2, "host" : "192.168.100.170:27002" } ]}> rs.initiate(config){ "ok" : 1 }shard2:OTHER> rs.status(){ "set" : "shard2", "date" : ISODate("2018-08-27T12:28:23.263Z"), "myState" : 2, "term" : NumberLong(0), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }, "appliedOpTime" : { "ts" : Timestamp(1535372897, 1), "t" : NumberLong(-1) }, "durableOpTime" : { "ts" : Timestamp(1535372897, 1), "t" : NumberLong(-1) } }, "members" : [ { "_id" : 0, "name" : "192.168.100.150:27002", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 6, "lastHeartbeat" : ISODate("2018-08-27T12:28:22.126Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T12:28:19.097Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 1, "name" : "192.168.100.160:27002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 987, "optime" : { "ts" : Timestamp(1535372897, 1), "t" : NumberLong(-1) }, "optimeDate" : ISODate("2018-08-27T12:28:17Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "could not find member to sync from", "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 2, "name" : "192.168.100.170:27002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6, "optime" : { "ts" : Timestamp(1535372897, 1), "t" : NumberLong(-1) }, "optimeDurable" : { "ts" : Timestamp(1535372897, 1), "t" : NumberLong(-1) }, "optimeDate" : ISODate("2018-08-27T12:28:17Z"), "optimeDurableDate" : ISODate("2018-08-27T12:28:17Z"), "lastHeartbeat" : ISODate("2018-08-27T12:28:22.126Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T12:28:19.236Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 } ], "ok" : 1}shard2:SECONDARY> #多按幾回Enter鍵shard2:PRIMARY> #自動變爲primary,再次rs.status()就能看到
For shard3, I log in from machine C.
# mongo --host 192.168.100.170 --port 27003> use admin switched to db admin> config={_id:"shard3",members:[{_id:0,host:"192.168.100.150:27003"},{_id:1,host:"192.168.100.160:27003",arbiterOnly:true},{_id:2,host:"192.168.100.170:27003"}]}{ "_id" : "shard3", "members" : [ { "_id" : 0, "host" : "192.168.100.150:27003" }, { "_id" : 1, "host" : "192.168.100.160:27003", "arbiterOnly" : true }, { "_id" : 2, "host" : "192.168.100.170:27003" } ]}> rs.initiate(config){ "ok" : 1 }shard3:OTHER> rs.status(){ "set" : "shard3", "date" : ISODate("2018-08-27T12:32:53.016Z"), "myState" : 2, "term" : NumberLong(0), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) }, "appliedOpTime" : { "ts" : Timestamp(1535373164, 1), "t" : NumberLong(-1) }, "durableOpTime" : { "ts" : Timestamp(1535373164, 1), "t" : NumberLong(-1) } }, "members" : [ { "_id" : 0, "name" : "192.168.100.150:27003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 8, "optime" : { "ts" : Timestamp(1535373164, 1), "t" : NumberLong(-1) }, "optimeDurable" : { "ts" : Timestamp(1535373164, 1), "t" : NumberLong(-1) }, "optimeDate" : ISODate("2018-08-27T12:32:44Z"), "optimeDurableDate" : ISODate("2018-08-27T12:32:44Z"), "lastHeartbeat" : ISODate("2018-08-27T12:32:49.921Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T12:32:48.571Z"), "pingMs" : NumberLong(1), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 1, "name" : "192.168.100.160:27003", "health" : 1, "state" : 7, "stateStr" : "ARBITER", "uptime" : 8, "lastHeartbeat" : ISODate("2018-08-27T12:32:49.921Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T12:32:52.646Z"), "pingMs" : NumberLong(1), "lastHeartbeatMessage" : "", "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "configVersion" : 1 }, { "_id" : 2, "name" : "192.168.100.170:27003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 1067, "optime" : { "ts" : Timestamp(1535373164, 1), "t" : NumberLong(-1) }, "optimeDate" : ISODate("2018-08-27T12:32:44Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "could not find member to sync from", "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : "" } ], "ok" : 1}shard3:SECONDARY> #多按幾回Enter鍵shard3:PRIMARY> #自動變爲primary,再次rs.status()就能看到
At this point, the shard setup is complete.
mongos is configured last because, in order to start up, it has to know who the config servers and shards are.
# vim /etc/mongod/mongos.conf
pidfilepath = /var/run/mongodb/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 192.168.100.150
port = 20000
fork = true
configdb = configs/192.168.100.150:21000,192.168.100.160:21000,192.168.100.170:21000
maxConns = 20000
# mongos -f /etc/mongod/mongos.conf    # note: earlier we started mongod, here it is mongos
about to fork child process, waiting until server is ready for connections.
forked process: 1562
child process started successfully, parent exiting
# netstat -lntp |grep mongos
tcp 0 0 192.168.100.150:20000 0.0.0.0:* LISTEN 1562/mongos
# vim /etc/mongod/mongos.conf
pidfilepath = /var/run/mongodb/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 192.168.100.160
port = 20000
fork = true
configdb = configs/192.168.100.150:21000,192.168.100.160:21000,192.168.100.170:21000
maxConns = 20000
# mongos -f /etc/mongod/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1550
child process started successfully, parent exiting
# netstat -lntp |grep mongos
tcp 0 0 192.168.100.160:20000 0.0.0.0:* LISTEN 1550/mongos
# vim /etc/mongod/mongos.conf
pidfilepath = /var/run/mongodb/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 192.168.100.170
port = 20000
fork = true
configdb = configs/192.168.100.150:21000,192.168.100.160:21000,192.168.100.170:21000
maxConns = 20000
# mongos -f /etc/mongod/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 1559
child process started successfully, parent exiting
# netstat -lntp |grep mongos
tcp 0 0 192.168.100.170:20000 0.0.0.0:* LISTEN 1559/mongos
At this point, the mongos setup is complete.
Now we enable sharding and link all the shards to mongos (the router). Any machine will do; here I work on machine A.
# mongo --host 192.168.100.150 --port 20000 #登陸mongosMongoDB shell version v3.4.16 connecting to: mongodb://192.168.100.150:20000/ MongoDB server version: 3.4.16 Server has startup warnings: 2018-08-27T08:55:48.279-0400 I CONTROL [main] 2018-08-27T08:55:48.279-0400 I CONTROL [main] ** WARNING: Access control is not enabled for the database. 2018-08-27T08:55:48.279-0400 I CONTROL [main] ** Read and write access to data and configuration is unrestricted. 2018-08-27T08:55:48.279-0400 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended. 2018-08-27T08:55:48.279-0400 I CONTROL [main] #把shard1分片和mongos串聯起來mongos> sh.addShard("shard1/192.168.100.150:27001,192.168.100.160:27001,192.168.100.170:27001"){ "shardAdded" : "shard1", "ok" : 1 }#把shard2分片和mongos串聯起來mongos> sh.addShard("shard2/192.168.100.150:27002,192.168.100.160:27002,192.168.100.170:27002"){ "shardAdded" : "shard2", "ok" : 1 }#把shard3分片和mongos串聯起來mongos> sh.addShard("shard3/192.168.100.150:27003,192.168.100.160:27003,192.168.100.170:27003"){ "shardAdded" : "shard3", "ok" : 1 }mongos> sh.status() #查看分片狀態--- Sharding Status --- sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5b83e28d28d74224a9920642") } shards: { "_id" : "shard1", "host" : "shard1/192.168.100.150:27001,192.168.100.160:27001", "state" : 1 } { "_id" : "shard2", "host" : "shard2/192.168.100.160:27002,192.168.100.170:27002", "state" : 1 } { "_id" : "shard3", "host" : "shard3/192.168.100.150:27003,192.168.100.170:27003", "state" : 1 } active mongoses: "3.4.16" : 1 autosplit: Currently enabled: yes balancer: Currently enabled: yes Currently running: no NaN Failed balancer rounds in last 5 attempts: 0 Migration Results for the last 24 hours: No recent migrations databases:
Sharding is now enabled.
Now test sharding. This can be done from any machine; here I use machine A.
# mongo --host 192.168.100.150 --port 20000
mongos> use admin
switched to db admin
# specify the database to shard; db.runCommand({enablesharding:"testdb"}) does the same thing
mongos> sh.enableSharding("testdb")
{ "ok" : 1 }
# specify the collection to shard and its shard key; db.runCommand({shardcollection:"testdb.table1",key:{id:1}}) does the same thing
mongos> sh.shardCollection("testdb.table1",{"id":1})
{ "collectionsharded" : "testdb.table1", "ok" : 1 }
mongos> use testdb
switched to db testdb
mongos> for (var i = 1; i <= 10000; i++) db.table1.save({id:1,"test1":"testval1"})    # insert 10000 test documents
WriteResult({ "nInserted" : 1 })
mongos> show dbs
admin   0.000GB
config  0.001GB
testdb  0.000GB
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5b83e28d28d74224a9920642")
  }
  shards:
    { "_id" : "shard1", "host" : "shard1/192.168.100.150:27001,192.168.100.160:27001", "state" : 1 }
    { "_id" : "shard2", "host" : "shard2/192.168.100.160:27002,192.168.100.170:27002", "state" : 1 }
    { "_id" : "shard3", "host" : "shard3/192.168.100.150:27003,192.168.100.170:27003", "state" : 1 }
  active mongoses:
    "3.4.16" : 1
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
                       NaN
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "testdb", "primary" : "shard2", "partitioned" : true }
      testdb.table1    # the collection we just created
        shard key: { "id" : 1 }
        unique: false
        balancing: true
        chunks:
          shard2  1    # testdb.table1 currently has a single chunk, on shard2
        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0)
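One thing to note: the loop above writes id:1 into every document, so all 10000 documents share a single shard-key value and stay in the one chunk on shard2 shown by sh.status(). To see data actually spread across chunks, a varying shard key is needed; a small sketch (table2 is a hypothetical extra collection, not part of this setup) looks like this, after which getShardDistribution() reports per-shard counts:
mongos> sh.shardCollection("testdb.table2", {"id": 1})    # hypothetical second collection in the already-sharded testdb
mongos> use testdb
mongos> for (var i = 1; i <= 10000; i++) db.table2.save({id: i, "test1": "testval" + i})
mongos> db.table2.getShardDistribution()    # prints data size and document count for each shard holding chunks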
mongos> sh.enableSharding("db2"){ "ok" : 1 }mongos> sh.shardCollection("db2.col2",{"id":1}){ "collectionsharded" : "db2.col2", "ok" : 1 }mongos> sh.status()--- Sharding Status --- sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5b83e28d28d74224a9920642") } shards: { "_id" : "shard1", "host" : "shard1/192.168.100.150:27001,192.168.100.160:27001", "state" : 1 } { "_id" : "shard2", "host" : "shard2/192.168.100.160:27002,192.168.100.170:27002", "state" : 1 } { "_id" : "shard3", "host" : "shard3/192.168.100.150:27003,192.168.100.170:27003", "state" : 1 } active mongoses: "3.4.16" : 1 autosplit: Currently enabled: yes balancer: Currently enabled: yes Currently running: no NaN Failed balancer rounds in last 5 attempts: 0 Migration Results for the last 24 hours: No recent migrations databases: { "_id" : "testdb", "primary" : "shard2", "partitioned" : true } testdb.table1 shard key: { "id" : 1 } unique: false balancing: true chunks: shard2 1 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) { "_id" : "db2", "primary" : "shard1", "partitioned" : true } db2.col2 #剛剛建立的db2.col2 shard key: { "id" : 1 } unique: false balancing: true chunks: shard1 1 #在shard1裏面 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
mongos> sh.enableSharding("db3"){ "ok" : 1 }mongos> sh.shardCollection("db3.col3",{"id":1}){ "collectionsharded" : "db3.col3", "ok" : 1 }mongos> sh.status()--- Sharding Status --- sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("5b83e28d28d74224a9920642") } shards: { "_id" : "shard1", "host" : "shard1/192.168.100.150:27001,192.168.100.160:27001", "state" : 1 } { "_id" : "shard2", "host" : "shard2/192.168.100.160:27002,192.168.100.170:27002", "state" : 1 } { "_id" : "shard3", "host" : "shard3/192.168.100.150:27003,192.168.100.170:27003", "state" : 1 } active mongoses: "3.4.16" : 1 autosplit: Currently enabled: yes balancer: Currently enabled: yes Currently running: no NaN Failed balancer rounds in last 5 attempts: 0 Migration Results for the last 24 hours: No recent migrations databases: { "_id" : "testdb", "primary" : "shard2", "partitioned" : true } testdb.table1 shard key: { "id" : 1 } unique: false balancing: true chunks: shard2 1 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) { "_id" : "db2", "primary" : "shard1", "partitioned" : true } db2.col2 shard key: { "id" : 1 } unique: false balancing: true chunks: shard1 1 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) { "_id" : "db3", "primary" : "shard3", "partitioned" : true } db3.col3 #剛剛建立的db3.col3 shard key: { "id" : 1 } unique: false balancing: true chunks: shard3 1 #在shard3上面 { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 0)
As you can see, the three databases are spread over the three shards (testdb's primary is shard2, db2's is shard1, db3's is shard3). The effect is not very striking here because the data volume is small; in a production environment with a large amount of data, the data on the three shards will be far more evenly balanced.
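Chunk migrations are handled by the balancer, which only starts moving chunks once a collection has accumulated enough of them; its state can be checked from the mongos shell at any time:
mongos> sh.getBalancerState()      # returns true while the balancer is enabled
mongos> sh.isBalancerRunning()     # reports whether a balancing round is currently in progress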
任選一臺機器操做,下面在A機器上進行操做。
# mkdir /tmp/mongobak; mongodump --host 192.168.100.150 --port 20000 -d testdb -o /tmp/mongobak
# -d specifies the database to back up; -o specifies the output directory (created in advance here)
2018-08-27T09:44:58.747-0400    writing testdb.table1 to
2018-08-27T09:44:58.808-0400    done dumping testdb.table1 (10000 documents)
# ls /tmp/mongobak/
testdb
# ls /tmp/mongobak/testdb/
table1.bson  table1.metadata.json
# cd !$
cd /tmp/mongobak/testdb/
# du -sh *
528K    table1.bson
4.0K    table1.metadata.json
# cat table1.metadata.json
{"options":{},"indexes":[{"v":2,"key":{"_id":1},"name":"_id_","ns":"testdb.table1"},{"v":2,"key":{"id":1.0},"name":"id_1","ns":"testdb.table1"}]}
# the metadata file records the options and indexes of the dumped testdb.table1 collection
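In practice mongodump is usually wrapped in a small script that writes into a date-stamped directory and prunes old copies; a minimal sketch, assuming the paths below and a daily cron entry (none of which come from this tutorial):
#!/bin/bash
# hypothetical /usr/local/sbin/mongo_backup.sh, run daily from cron
backdir=/tmp/mongobak/$(date +%F)
mkdir -p "$backdir"
mongodump --host 192.168.100.150 --port 20000 -d testdb -o "$backdir"
# keep only the last 7 days of backups
find /tmp/mongobak/ -maxdepth 1 -mindepth 1 -type d -mtime +7 -exec rm -rf {} \;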
# mongodump --host 192.168.100.150 --port 20000 -o /tmp/mongobak
# without -d, all databases are backed up; the output directory is created automatically if it does not exist
2018-08-27T09:52:22.762-0400    writing admin.system.version to
2018-08-27T09:52:22.764-0400    done dumping admin.system.version (1 document)
2018-08-27T09:52:22.764-0400    writing testdb.table1 to
2018-08-27T09:52:22.764-0400    writing config.lockpings to
2018-08-27T09:52:22.764-0400    writing config.changelog to
2018-08-27T09:52:22.764-0400    writing config.locks to
2018-08-27T09:52:22.771-0400    done dumping config.lockpings (11 documents)
2018-08-27T09:52:22.771-0400    writing config.chunks to
2018-08-27T09:52:22.776-0400    done dumping config.changelog (9 documents)
2018-08-27T09:52:22.776-0400    writing config.collections to
2018-08-27T09:52:22.778-0400    done dumping config.collections (3 documents)
2018-08-27T09:52:22.778-0400    writing config.databases to
2018-08-27T09:52:22.781-0400    done dumping config.locks (7 documents)
2018-08-27T09:52:22.781-0400    writing config.shards to
2018-08-27T09:52:22.782-0400    done dumping config.shards (3 documents)
2018-08-27T09:52:22.782-0400    writing config.mongos to
2018-08-27T09:52:22.785-0400    done dumping config.mongos (2 documents)
2018-08-27T09:52:22.785-0400    writing config.version to
2018-08-27T09:52:22.786-0400    done dumping config.version (1 document)
2018-08-27T09:52:22.786-0400    writing config.tags to
2018-08-27T09:52:22.789-0400    done dumping config.tags (0 documents)
2018-08-27T09:52:22.789-0400    writing config.migrations to
2018-08-27T09:52:22.790-0400    done dumping config.migrations (0 documents)
2018-08-27T09:52:22.790-0400    writing db2.col2 to
2018-08-27T09:52:22.795-0400    done dumping config.databases (3 documents)
2018-08-27T09:52:22.795-0400    writing db3.col3 to
2018-08-27T09:52:22.797-0400    done dumping config.chunks (3 documents)
2018-08-27T09:52:22.799-0400    done dumping db2.col2 (0 documents)
2018-08-27T09:52:22.800-0400    done dumping db3.col3 (0 documents)
2018-08-27T09:52:22.941-0400    done dumping testdb.table1 (10000 documents)
# cd ..
# ls
admin  config  db2  db3  testdb    # each directory corresponds to one database
# mongodump --host 192.168.100.150 --port 20000 -d testdb -c table1 -o /tmp/mongobak1
# -c specifies the collection to back up
2018-08-27T09:57:12.795-0400    writing testdb.table1 to
2018-08-27T09:57:12.840-0400    done dumping testdb.table1 (10000 documents)
# ls !$
ls /tmp/mongobak1
testdb
# ls /tmp/mongobak1/testdb/
table1.bson  table1.metadata.json
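mongodump can also dump only the documents matching a query and compress what it writes; for example (the /tmp/mongobak2 path is an assumption):
# mongodump --host 192.168.100.150 --port 20000 -d testdb -c table1 -q '{"id": 1}' --gzip -o /tmp/mongobak2
# -q restricts the dump to matching documents (it requires -c); --gzip compresses each dumped file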
# mongoexport --host 192.168.100.150 --port 20000 -d testdb -c table1 -o /tmp/table1.json
# mongoexport exports a collection as JSON; -o specifies the output file, created automatically if it does not exist
2018-08-27T10:01:34.462-0400    connected to: 192.168.100.150:20000
2018-08-27T10:01:34.610-0400    exported 10000 records
# vim !$
vim /tmp/table1.json
{"_id":{"$oid":"5b83fb338c1de6b83f6a11ab"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11ac"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11ad"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11ae"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11af"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11b0"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11b1"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11b2"},"id":1.0,"test1":"testval1"}
{"_id":{"$oid":"5b83fb338c1de6b83f6a11b3"},"id":1.0,"test1":"testval1"}
# these are the 10000 documents inserted earlier, stored as plain, readable JSON rather than binary BSON
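mongoexport can also write CSV when the fields to export are listed explicitly; a sketch (the output path is an assumption):
# mongoexport --host 192.168.100.150 --port 20000 -d testdb -c table1 --type=csv --fields id,test1 -o /tmp/table1.csv
# --type=csv selects CSV output; --fields (or -f) names the columns to export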
# mongorestore --host 192.168.100.150 --port 20000 -d mydb dir
# -d specifies the database to restore into; dir is the directory that holds that database's dump
# mongorestore --host 192.168.100.150 --port 20000 dir
# without -d, all databases in the dump are restored; dir is the directory that holds the full dump
# mongorestore --host 192.168.100.150 --port 20000 -d mydb -c tab1 dir
# -c specifies the collection to restore; dir is the directory that holds that collection's dump
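Tying this to the dump taken earlier, restoring testdb.table1 into a different database might look like the following (mydb2 is a hypothetical target; --drop empties the target collection before restoring):
# mongorestore --host 192.168.100.150 --port 20000 -d mydb2 -c table1 --drop /tmp/mongobak/testdb/table1.bson
# restores the single .bson file produced by the earlier mongodump into mydb2.table1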
# mongoimport --host 192.168.100.150 --port 20000 -d testdb -c table1 --file /tmp/table1.json
# mongoimport imports data; --file specifies the file to import
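For example, the JSON file exported earlier can be loaded back into a separate collection (table1_copy is a hypothetical name; --drop clears it first):
# mongoimport --host 192.168.100.150 --port 20000 -d testdb -c table1_copy --drop --file /tmp/table1.json
# mongoimport reads one JSON document per line, which matches mongoexport's default output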
Further reading: