Lesson 36: Non-relational (NoSQL) databases - MongoDB
Contents
24. MongoDB introduction
25. Installing MongoDB
26. Connecting to MongoDB
27. MongoDB user management
28. Creating collections and managing data in MongoDB
29. The PHP mongodb extension
30. The PHP mongo extension
31. MongoDB replica set introduction
32. Building a MongoDB replica set
33. Testing a MongoDB replica set
34. MongoDB sharding introduction
35. Building a MongoDB sharded cluster
36. Testing MongoDB sharding
37. MongoDB backup and restore
38. PHP extensions
Official site: www.mongodb.com. As of 2018-08-26 the latest version was 4.0.1.
Written in C++ and distributed by design, MongoDB is a kind of NoSQL database.
Among NoSQL databases it is the one most similar to a relational database.
MongoDB stores data as documents; the data structure consists of key/value (key => value) pairs. MongoDB documents resemble JSON objects. Field values may contain other documents, arrays, and arrays of documents.
About JSON: http://www.w3school.com.cn/json/index.asp
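For illustration, a minimal sketch of such a document (the collection and field names are invented for this example):

> db.user.insert({
...   name: "alice",
...   address: { city: "Shanghai", zip: "200000" },   // field value that is another document
...   tags: [ "admin", "dev" ],                       // array value
...   logins: [ { ts: ISODate(), ip: "10.0.0.1" } ]   // array of documents
... })
WriteResult({ "nInserted" : 1 })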
Because it is distributed, it is easy to scale out.
Comparison of MongoDB with relational databases:
Relational database data structure (figure)
MongoDB data structure (figure)
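Roughly, a relational database/table/row/column corresponds to a MongoDB database/collection/document/field. As a sketch, the same record in both models (the SQL line is shown as a comment; table and field names are made up):

// SQL: INSERT INTO user (id, name, age) VALUES (1, 'alice', 25);
> db.user.insert({ _id: 1, name: "alice", age: 25 })   // mongo shell equivalent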
Official installation documentation: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/
1. Create the yum repository
[root@mangodbserver1 ~]# vim /etc/yum.repos.d/mangodb-org-4.0.repo
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
2. Check the available packages
[root@mangodbserver1 ~]# yum list | grep mongodb
collectd-write_mongodb.x86_64    5.8.0-4.el7     epel
mongodb.x86_64                   2.6.12-6.el7    epel
mongodb-org.x86_64               4.0.1-1.el7     mongodb-org-4.0
mongodb-org-mongos.x86_64        4.0.1-1.el7     mongodb-org-4.0
mongodb-org-server.x86_64        4.0.1-1.el7     mongodb-org-4.0
mongodb-org-shell.x86_64         4.0.1-1.el7     mongodb-org-4.0
mongodb-org-tools.x86_64         4.0.1-1.el7     mongodb-org-4.0
mongodb-server.x86_64            2.6.12-6.el7    epel
mongodb-test.x86_64              2.6.12-6.el7    epel
nodejs-mongodb.noarch            1.4.7-1.el7     epel
php-mongodb.noarch               1.0.4-1.el7     epel
php-pecl-mongodb.x86_64          1.1.10-1.el7    epel
poco-mongodb.x86_64              1.6.1-3.el7     epel
syslog-ng-mongodb.x86_64         3.5.6-3.el7     epel
3. Install mongodb-org
[root@mangodbserver1 ~]# yum -y install mongodb-org
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * epel: mirrors.tongji.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.163.com
Resolving Dependencies
...output omitted...
Installed:
  mongodb-org.x86_64 0:4.0.1-1.el7
Dependency Installed:
  mongodb-org-mongos.x86_64 0:4.0.1-1.el7    mongodb-org-server.x86_64 0:4.0.1-1.el7
  mongodb-org-shell.x86_64 0:4.0.1-1.el7     mongodb-org-tools.x86_64 0:4.0.1-1.el7
Complete!
1. Start mongod
[root@mangodbserver1 ~]# systemctl start mongod.service
[root@mangodbserver1 ~]# systemctl enable mongod.service
[root@mangodbserver1 ~]# lsof -i :27017
COMMAND  PID   USER  FD   TYPE DEVICE SIZE/OFF NODE NAME
mongod  1782 mongod  11u  IPv4  22968      0t0  TCP localhost:27017 (LISTEN)
2. Connect
# Running the command mongo on the local machine enters the mongodb shell
# (the "-bash: mango: command not found" below came from mistyping mongo as mango)
-bash: mango: command not found
[root@mangodbserver1 ~]# mongo
MongoDB shell version v4.0.1
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.1
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see http://docs.mongodb.org/
Questions? Try the support group http://groups.google.com/group/mongodb-user
Server has startup warnings:
2018-08-26T14:01:51.415+0800 I CONTROL [initandlisten]
2018-08-26T14:01:51.415+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-26T14:01:51.415+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-08-26T14:01:51.415+0800 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
>
# If mongodb is listening on a port other than the default 27017, add the --port option when connecting, e.g.
# mongo --port 27018
# To connect to a remote mongodb, add --host, e.g.
# mongo --host 192.168.1.47
# If authentication is enabled, connect with a username and password, similar to MySQL:
# mongo -uusername -ppasswd --authenticationDatabase db
經常使用操做
# Switch to the admin database first
> use admin
switched to db admin
# Create the user admin: user is the username, customData an optional description field, pwd the password, roles the user's roles, and db the database they apply to
> db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "admin122", roles: [ { role: "root", db: "admin" } ] } )
Successfully added user: {
    "user" : "admin",
    "customData" : { "description" : "superuser" },
    "roles" : [ { "role" : "root", "db" : "admin" } ]
}
# List all users (this requires being in the admin database)
> db.system.users.find()
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "t3C0r5eRm8qrPrdyQwPGRw==", "storedKey" : "2FNCyDURiJKU6LnC8QvRVtUcO00=", "serverKey" : "1eyFpJkirCnUwvFTj7sLeucNZ5Q=" }, "SCRAM-SHA-256" : { "iterationCount" : 15000, "salt" : "h58+0R+lhBUZFMMRqjwRYcnOPyoCl62xb0gg5g==", "storedKey" : "G9gV0/k0nQ+KjBE/12qvtjhGNiFPBy6RRSolPZmVkNo=", "serverKey" : "/Vh31wMqLZkuxPh3zNL6QQLTfGlUcxqZx8fk1GRRugY=" } }, "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
# View all users of the current database
> show users
{
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "customData" : { "description" : "superuser" },
    "roles" : [ { "role" : "root", "db" : "admin" } ],
    "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ]
}
# Create the user kennminn
> db.createUser({user: "kennminn", pwd: "123456", roles:[{role: "read", db: "test"}]})
Successfully added user: {
    "user" : "kennminn",
    "roles" : [ { "role" : "read", "db" : "test" } ]
}
# Delete the user kennminn
> db.dropUser('kennminn')
true
# For authentication to take effect, edit the startup unit vim /usr/lib/systemd/system/mongod.service and add --auth after OPTIONS=
[root@mangodbserver1 ~]# vim /usr/lib/systemd/system/mongod.service
# change this line to enable authentication
Environment="OPTIONS=--auth -f /etc/mongod.conf"
[root@mangodbserver1 ~]# systemctl restart mongod.service
Warning: mongod.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[root@mangodbserver1 ~]# systemctl daemon-reload
[root@mangodbserver1 ~]# systemctl restart mongod.service
# Without authenticating, the query fails
[root@mangodbserver1 ~]# mongo
MongoDB shell version v4.0.1
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.1
> use admin
switched to db admin
> show users
2018-08-26T16:52:37.729+0800 E QUERY [js] Error: command usersInfo requires authentication :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.getUsers@src/mongo/shell/db.js:1757:1
shellHelper.show@src/mongo/shell/utils.js:859:9
shellHelper@src/mongo/shell/utils.js:766:15
@(shellhelp2):1:1
# Authenticate
[root@mangodbserver1 ~]# mongo -u "admin" -p "admin122" --authenticationDatabase "admin"
MongoDB shell version v4.0.1
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.1
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> use admin
switched to db admin
> show users
{
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "customData" : { "description" : "superuser" },
    "roles" : [ { "role" : "root", "db" : "admin" } ],
    "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ]
}
# Create the user test1 in the db1 database: test1 gets read/write access to db1 and read-only access to db2
> use db1
switched to db db1
> db.createUser( { user: "test1", pwd: "123aaa", roles: [ { role: "readWrite", db: "db1" }, {role: "read", db: "db2" } ] } )
Successfully added user: {
    "user" : "test1",
    "roles" : [ { "role" : "readWrite", "db" : "db1" }, { "role" : "read", "db" : "db2" } ]
}
> show users
{
    "_id" : "db1.test1",
    "user" : "test1",
    "db" : "db1",
    "roles" : [ { "role" : "readWrite", "db" : "db1" }, { "role" : "read", "db" : "db2" } ],
    "mechanisms" : [ "SCRAM-SHA-1", "SCRAM-SHA-256" ]
}
> use db2
switched to db db2
> show users
# authentication fails here because test1 was created in db1; db.auth must run against the user's authentication database
> db.auth("test1", "123aaa")
Error: Authentication failed.
0
> use db1
switched to db db1
> db.auth("test1", "123aaa")
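A related helper worth knowing: a user's password can later be changed with db.changeUserPassword, run from the database the user was created in. A sketch using the admin account created above (the new password is made up):

> use admin
switched to db admin
> db.changeUserPassword("admin", "admin123new")   // takes effect immediately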
MongoDB user roles
read: allows the user to read the specified database
readWrite: allows the user to read and write the specified database
dbAdmin: allows the user to perform administrative functions in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
userAdmin: allows the user to write to the system.users collection, i.e. to create, delete and manage users in the specified database
clusterAdmin: available only in the admin database; grants administrative rights over all sharding and replica-set related functions
readAnyDatabase: available only in the admin database; grants read access to all databases
readWriteAnyDatabase: available only in the admin database; grants read/write access to all databases
userAdminAnyDatabase: available only in the admin database; grants userAdmin privileges on all databases
dbAdminAnyDatabase: available only in the admin database; grants dbAdmin privileges on all databases
root: available only in the admin database; the superuser account with full privileges
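Roles can also be changed after a user exists, with db.grantRolesToUser and db.revokeRolesFromUser. A sketch, assuming a user like the kennminn account created earlier (run from the user's authentication database):

> db.grantRolesToUser("kennminn", [ { role: "readWrite", db: "test" } ])
> db.revokeRolesFromUser("kennminn", [ { role: "read", db: "test" } ])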
MongoDB database management
# Check the version
> db.version()
4.0.1
# use switches to the database if it exists, and creates it otherwise
> use db1
switched to db db1
# List databases; db1 does not show yet because it is empty (no collection has been created)
> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
# Create a collection
> db.createCollection('clo1')
{ "ok" : 1 }
> show dbs
admin 0.000GB
config 0.000GB
db1 0.000GB
local 0.000GB
# Drop the current database; to drop a database you must first switch to it
> use userdb
switched to db userdb
> db.createCollection('clo1')
{ "ok" : 1 }
> db.dropDatabase()
{ "dropped" : "userdb", "ok" : 1 }
# View the mongodb server status
> db.serverStatus()
{
    "host" : "mangodbserver1",
    "version" : "4.0.1",
    "process" : "mongod",
    "pid" : NumberLong(2968),
    "uptime" : 2383,
    "uptimeMillis" : NumberLong(2382444),
    "uptimeEstimate" : NumberLong(2382),
    "localTime" : ISODate("2018-08-26T09:39:29.055Z"),
    "asserts" : {
    ...output omitted...
    "ttl" : { "deletedDocuments" : NumberLong(0), "passes" : NumberLong(39) },
    "ok" : 1
}
1. Create a collection
# Syntax: db.createCollection(name, options)
# name is the collection name; options is optional and configures the collection:
# capped true/false (optional): if true, creates a capped collection, a fixed-size collection that automatically overwrites its oldest entries when it reaches its maximum size. If true, the size parameter must also be specified.
# autoIndexId true/false (optional): if true, automatically creates an index on the _id field.
# size (optional): the maximum size of the capped collection in bytes (B). Required if capped is true.
# max (optional): the maximum number of documents allowed in the capped collection.
> db.createCollection("mycol", { capped : true, size : 6142800, max : 10000 } )
{ "ok" : 1 }
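To see the capped behavior in action, cap a collection at a few documents and overfill it; a minimal sketch (the collection name and sizes are made up):

> db.createCollection("caplog", { capped : true, size : 4096, max : 3 })
{ "ok" : 1 }
> for (var i = 1; i <= 5; i++) db.caplog.insert({ n : i })
WriteResult({ "nInserted" : 1 })
> db.caplog.find()
// only the newest three documents (n: 3, 4, 5) remain; the oldest were overwritten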
2. Manage data
# List collections; show tables also works
> show collections
clo1
mycol
> show tables
clo1
mycol
# If a collection does not exist, inserting into it makes mongodb create it automatically
> db.Account.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
# Update
> db.Account.update({AccountID:1},{"$set":{"Age":20}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
# View all documents
> db.Account.find()
{ "_id" : ObjectId("5b828f3d727dfa33d5561c62"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
# Query by condition
> db.Account.find({AccountID:1})
{ "_id" : ObjectId("5b828f3d727dfa33d5561c62"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
# Remove by condition
> db.Account.remove({AccountID:1})
WriteResult({ "nRemoved" : 1 })
> db.Account.find()
# Drop the whole collection, removing all of its documents
> db.Account.drop()
true
# First switch to the database in question
> use db1
switched to db db1
# Then view the collection stats
> db.printCollectionStats()
clo1
{
    "ns" : "db1.clo1",
    "size" : 0,
    "count" : 0,
    "storageSize" : 4096,
    "capped" : false,
    "wiredTiger" : {
        "metadata" : { "formatVersion" : 1
...output omitted...
    "nindexes" : 1,
    "totalIndexSize" : 4096,
    "indexSizes" : { "_id_" : 4096 },
    "ok" : 1
}
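find also accepts comparison operators, projections, sorting and limits, which the examples above do not show. A short sketch against the same Account collection (the values are illustrative):

> db.Account.insert({ AccountID : 2, UserName : "bob", password : "654321", Age : 30 })
> db.Account.find({ Age : { $gt : 25 } })            // comparison operator: Age > 25
> db.Account.find({}, { UserName : 1, _id : 0 })     // projection: return only UserName
> db.Account.find().sort({ Age : -1 }).limit(2)      // sort by Age descending, at most 2 documents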
Installation procedure for the PHP mongodb extension
[root@mangodbserver1 ~]# cd /usr/local/src/
[root@mangodbserver1 src]# git clone https://github.com/mongodb/mongo-php-driver
Cloning into 'mongo-php-driver'...
remote: Counting objects: 22561, done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 22561 (delta 0), reused 2 (delta 0), pack-reused 22559
Receiving objects: 100% (22561/22561), 6.64 MiB | 2.12 MiB/s, done.
Resolving deltas: 100% (17804/17804), done.
[root@mangodbserver1 src]# cd mongo-php-driver
[root@mangodbserver1 mongo-php-driver]# git submodule update --init
Submodule 'src/libmongoc' (https://github.com/mongodb/mongo-c-driver.git) registered for path 'src/libmongoc'
Cloning into 'src/libmongoc'...
remote: Counting objects: 104584, done.
remote: Compressing objects: 100% (524/524), done.
remote: Total 104584 (delta 266), reused 240 (delta 138), pack-reused 103918
Receiving objects: 100% (104584/104584), 51.46 MiB | 4.84 MiB/s, done.
Resolving deltas: 100% (91157/91157), done.
Submodule path 'src/libmongoc': checked out 'a690091bae086f267791bd2227400f2035de99e8'
[root@mangodbserver1 mongo-php-driver]# /usr/local/php-fpm/bin/php
php         php-cgi     php-config  phpize
[root@mangodbserver1 mongo-php-driver]# /usr/local/php-fpm/bin/phpize
Configuring for:
PHP Api Version:         20131106
Zend Module Api No:      20131226
Zend Extension Api No:   220131226
[root@mangodbserver1 mongo-php-driver]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for a sed that does not truncate output... /usr/bin/sed
...output omitted...
config.status: creating /usr/local/src/mongo-php-driver/src/libmongoc/src/libbson/src/bson/bson-config.h
config.status: creating /usr/local/src/mongo-php-driver/src/libmongoc/src/libbson/src/bson/bson-version.h
config.status: creating /usr/local/src/mongo-php-driver/src/libmongoc/src/libmongoc/src/mongoc/mongoc-config.h
config.status: creating /usr/local/src/mongo-php-driver/src/libmongoc/src/libmongoc/src/mongoc/mongoc-version.h
config.status: creating config.h
[root@mangodbserver1 mongo-php-driver]# make && make install
/bin/sh /usr/local/src/mongo-php-driver/libtool --mode=compile cc -DBSON_COMPILATION -DMONGOC_COMPILATION -pthread
...output omitted...
Build complete.
Don't forget to run 'make test'.
Installing shared extensions:     /usr/local/php-fpm/lib/php/extensions/no-debug-non-zts-20131226/
[root@mangodbserver1 mongo-php-driver]# vim /usr/local/php-fpm/etc/php.ini
# add the line: extension = mongodb.so
[root@mangodbserver1 mongo-php-driver]# /usr/local/php-fpm/sbin/php-fpm -m | grep mongo
mongodb
[root@mangodbserver1 mongo-php-driver]# /etc/init.d/php-fpm restart
Gracefully shutting down php-fpm . done
Starting php-fpm  done
[root@mangodbserver1 mongo-php-driver]# cd /usr/local/src/
[root@mangodbserver1 src]# wget https://pecl.php.net/get/mongo-1.6.16.tgz
--2018-08-26 20:58:55-- https://pecl.php.net/get/mongo-1.6.16.tgz
Resolving pecl.php.net (pecl.php.net)... 104.236.228.160
...output omitted...
2018-08-26 20:58:59 (140 KB/s) - 'mongo-1.6.16.tgz' saved [210341/210341]
[root@mangodbserver1 src]# tar -zxvf mongo-1.6.16.tgz
[root@mangodbserver1 src]# cd mongo-1.6.16/
[root@mangodbserver1 mongo-1.6.16]# /usr/local/php-fpm/bin/phpize
Configuring for:
PHP Api Version:         20131106
Zend Module Api No:      20131226
Zend Extension Api No:   220131226
[root@mangodbserver1 mongo-1.6.16]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for a sed that does not truncate output... /usr/bin/sed
...output omitted...
creating libtool
appending configuration tag "CXX" to libtool
configure: creating ./config.status
config.status: creating config.h
[root@mangodbserver1 mongo-1.6.16]# make && make install
/bin/sh /usr/local/src/mongo-1.6.16/libtool --mode=compile cc -I./util -I. -I/usr/local/src/mongo-1.6.16 -DPHP_ATOM_INC -I/usr/local/src/mongo-1.6.16/include -I/usr/local/src/mongo-1.6.16/main -I/usr/local/src/mongo-1.6.16 -I/usr/local/php-fpm/include/php -I/usr/local/php-fpm/include/php/main -I/usr/local/php-fpm/include/php/TSRM -I/usr/local/php-fpm/include/php/Zend -I/usr/local/php-fpm/include/php/ext -I/usr/local/php-fpm/include/php/ext/date/lib -I/usr/local/src/mongo-1.6.16/api -I/usr/local/src/mongo-1.6.16/util -I/usr/local/src/mongo-1.6.16/exceptions -I/usr/local/src/mongo-1.6.16/gridfs -I/usr/local/src/mongo-1.6.16/types -I/usr/local/src/mongo-1.6.16/batch -I/usr/local/src/mongo-1.6.16/contrib -I/usr/local/src/mongo-1.6.16/mcon -I/usr/local/src/mongo-1.6.16/mcon/contrib -DHAVE_CONFIG_H -g -O2 -c /usr/local/src/mongo-1.6.16/php_mongo.c -o php_mongo.lo
...output omitted...
Build complete.
Don't forget to run 'make test'.
Installing shared extensions:     /usr/local/php-fpm/lib/php/extensions/no-debug-non-zts-20131226/
[root@mangodbserver1 mongo-1.6.16]# vim /usr/local/php-fpm/etc/php.ini
# add the line: extension = mongo.so
extension = mongo.so
[root@mangodbserver1 mongo-1.6.16]# /usr/local/php-fpm/sbin/php-fpm -m | grep mon
mongo
mongodb
[root@mangodbserver1 mongo-1.6.16]# /etc/init.d/php-fpm restart
Gracefully shutting down php-fpm . done
Starting php-fpm  done
Testing the mongo extension
Reference documentation: https://docs.mongodb.com/ecosystem/drivers/php/
http://www.runoob.com/mongodb/mongodb-php.html
# Create a test page
[root@mangodbserver1 mongo-1.6.16]# vim /usr/local/nginx/html/1.php
<?php
$m = new MongoClient();
$db = $m->test;
$collection = $db->createCollection("runoob");
echo "collection created successfully";
?>
[root@mangodbserver1 mongo-1.6.16]# curl localhost/1.php
collection created successfully[root@mangodbserver1 mongo-1.6.16]#
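The result can be double-checked from the mongo shell; the collection created by the PHP page should show up in the test database:

> use test
switched to db test
> show collections
runoob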
Early versions used master-slave replication, one master and one slave, similar to MySQL; but in that architecture the slave was read-only and could not automatically take over when the master went down.
The master-slave mode is now deprecated in favor of replica sets: one primary and multiple read-only secondaries. Members can be assigned weights (priorities); when the primary goes down, the secondary with the highest weight is promoted to primary.
This architecture can also include an arbiter role, which only votes in elections and stores no data.
In this architecture reads and writes both go to the primary; to balance load you must manually direct reads to a target server.
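Apart from rs.slaveOk() (used in the test below), the shell connection itself can be given a read preference so that queries are allowed to hit a secondary; a sketch:

> db.getMongo().setReadPref("secondaryPreferred")
> db.acc.find()   // reads may now be routed to a secondary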
Environment:
Three machines running CentOS Linux release 7.5.1804 (Core)
mongodbserver1: 192.168.1.47
mongodbserver2: 192.168.1.48
mongodbserver3: 192.168.1.49
1. Edit the configuration file on each of the three servers
# /etc/mongod.conf (the same edit on all three servers, each binding its own IP)
net:
  port: 27017
  # add the machine's own IP to the bind list
  bindIp: 127.0.0.1,192.168.1.49
# uncomment the replication section and add the following two lines
replication:
  oplogSizeMB: 20
  replSetName: rs0
[root@mongodbserver3 ~]# systemctl start mongod.service
[root@mongodbserver3 ~]# netstat -nltup | grep mongo
tcp        0      0 192.168.1.49:27017      0.0.0.0:*               LISTEN      2073/mongod
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      2073/mongod
2. Initialize the replica set
[root@mongodbserver1 ~]# mongo
> config={_id:"rs0",members:[{_id:0,host:"192.168.1.47:27017"},{_id:1,host:"192.168.1.48:27017"},{_id:2,host:"192.168.1.49:27017"}]}
{
    "_id" : "rs0",
    "members" : [
        { "_id" : 0, "host" : "192.168.1.47:27017" },
        { "_id" : 1, "host" : "192.168.1.48:27017" },
        { "_id" : 2, "host" : "192.168.1.49:27017" }
    ]
}
> rs.initiate(config)
{
    "ok" : 1,
    "operationTime" : Timestamp(1535293700, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535293700, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2018-08-26T14:31:21.930Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1535293832, 1),
    "members" : [
        {
            "_id" : 0, "name" : "192.168.1.47:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
            "uptime" : 1166, "optime" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-08-26T14:31:12Z"),
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "",
            "electionTime" : Timestamp(1535293711, 1), "electionDate" : ISODate("2018-08-26T14:28:31Z"),
            "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1, "name" : "192.168.1.48:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
            "uptime" : 181, "optime" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-08-26T14:31:12Z"), "optimeDurableDate" : ISODate("2018-08-26T14:31:12Z"),
            "lastHeartbeat" : ISODate("2018-08-26T14:31:21.381Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T14:31:21.417Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.1.47:27017", "syncSourceHost" : "192.168.1.47:27017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1
        },
        {
            "_id" : 2, "name" : "192.168.1.49:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
            "uptime" : 181, "optime" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1535293872, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-08-26T14:31:12Z"), "optimeDurableDate" : ISODate("2018-08-26T14:31:12Z"),
            "lastHeartbeat" : ISODate("2018-08-26T14:31:21.381Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T14:31:21.548Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.1.47:27017", "syncSourceHost" : "192.168.1.47:27017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1535293872, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535293872, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
rs0:PRIMARY>
# On the primary
rs0:PRIMARY> use mydb
switched to db mydb
rs0:PRIMARY> db.acc.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
db1 0.000GB
local 0.000GB
mydb 0.000GB
# On a secondary
[root@mongodbserver2 ~]# mongo
rs0:SECONDARY> show dbs
2018-08-26T22:45:06.594+0800 E QUERY [js] Error: listDatabases failed:{ "operationTime" : Timestamp(1535294702, 1), "ok" : 0, "errmsg" : "not master and slaveOk=false", "code" : 13435, "codeName" : "NotMasterNoSlaveOk", "$clusterTime" : { "clusterTime" : Timestamp(1535294702, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:876:19
shellHelper@src/mongo/shell/utils.js:766:15
@(shellhelp2):1:1
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs
admin 0.000GB
config 0.000GB
db1 0.000GB
local 0.000GB
mydb 0.000GB
rs0:SECONDARY> use mydb
switched to db mydb
rs0:SECONDARY> show tables
acc
Changing replica set member weights and simulating a primary failure
# View the current weights of the three hosts; every priority starts at 1
rs0:PRIMARY> rs.conf()
{
    "_id" : "rs0",
    "version" : 1,
    "protocolVersion" : NumberLong(1),
    "writeConcernMajorityJournalDefault" : true,
    "members" : [
        {
            "_id" : 0, "host" : "192.168.1.47:27017", "arbiterOnly" : false, "buildIndexes" : true,
            "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1
        },
        {
            "_id" : 1, "host" : "192.168.1.48:27017", "arbiterOnly" : false, "buildIndexes" : true,
            "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1
        },
        {
            "_id" : 2, "host" : "192.168.1.49:27017", "arbiterOnly" : false, "buildIndexes" : true,
            "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 },
        "replicaSetId" : ObjectId("5b82b904244cfb9393d33a63")
    }
}
# Set the weights; load the current configuration into cfg first (this step was missing from the original transcript)
rs0:PRIMARY> cfg = rs.conf()
rs0:PRIMARY> cfg.members[0].priority = 3
3
rs0:PRIMARY> cfg.members[1].priority = 2
2
rs0:PRIMARY> cfg.members[2].priority = 1
1
rs0:PRIMARY> rs.reconfig(cfg)
{
    "ok" : 1,
    "operationTime" : Timestamp(1535296028, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535296028, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
# View the new weights; the second node is now the candidate to take over as primary
rs0:PRIMARY> rs.conf()
{
    "_id" : "rs0",
    "version" : 2,
    "protocolVersion" : NumberLong(1),
    "writeConcernMajorityJournalDefault" : true,
    "members" : [
        {
            "_id" : 0, "host" : "192.168.1.47:27017", "arbiterOnly" : false, "buildIndexes" : true,
            "hidden" : false, "priority" : 3, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1
        },
        {
            "_id" : 1, "host" : "192.168.1.48:27017", "arbiterOnly" : false, "buildIndexes" : true,
            "hidden" : false, "priority" : 2, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1
        },
        {
            "_id" : 2, "host" : "192.168.1.49:27017", "arbiterOnly" : false, "buildIndexes" : true,
            "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : { },
        "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 },
        "replicaSetId" : ObjectId("5b82b904244cfb9393d33a63")
    }
}
rs0:PRIMARY>
# Simulate a primary failure by disconnecting the primary's network interface
# Checking on 192.168.1.48: the former primary 192.168.1.47 now shows "stateStr" : "(not reachable/healthy)",
# and 192.168.1.48 has become the primary.
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2018-08-26T15:11:07.305Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1535296259, 1), "t" : NumberLong(2) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1535296259, 1), "t" : NumberLong(2) },
        "appliedOpTime" : { "ts" : Timestamp(1535296259, 1), "t" : NumberLong(2) },
        "durableOpTime" : { "ts" : Timestamp(1535296259, 1), "t" : NumberLong(2) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1535296229, 1),
    "members" : [
        {
            "_id" : 0, "name" : "192.168.1.47:27017", "health" : 0, "state" : 8, "stateStr" : "(not reachable/healthy)",
            "uptime" : 0, "optime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
            "optimeDurable" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"), "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2018-08-26T15:11:04.624Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T15:09:36.925Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Error connecting to 192.168.1.47:27017 :: caused by :: No route to host",
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "", "configVersion" : -1
        },
        {
            "_id" : 1, "name" : "192.168.1.48:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
            "uptime" : 3505, "optime" : { "ts" : Timestamp(1535296259, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2018-08-26T15:10:59Z"),
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "",
            "electionTime" : Timestamp(1535296187, 1), "electionDate" : ISODate("2018-08-26T15:09:47Z"),
            "configVersion" : 2, "self" : true, "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 2, "name" : "192.168.1.49:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
            "uptime" : 2565, "optime" : { "ts" : Timestamp(1535296259, 1), "t" : NumberLong(2) },
            "optimeDurable" : { "ts" : Timestamp(1535296259, 1), "t" : NumberLong(2) },
            "optimeDate" : ISODate("2018-08-26T15:10:59Z"), "optimeDurableDate" : ISODate("2018-08-26T15:10:59Z"),
            "lastHeartbeat" : ISODate("2018-08-26T15:11:05.393Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T15:11:07.196Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.1.48:27017", "syncSourceHost" : "192.168.1.48:27017", "syncSourceId" : 1,
            "infoMessage" : "", "configVersion" : 2
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1535296259, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535296259, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
rs0:PRIMARY>
# Reconnect 192.168.1.47 to the network and check the status again: having the highest priority, 192.168.1.47 becomes primary again
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2018-08-26T15:16:21.215Z"),
    "myState" : 1,
    "term" : NumberLong(3),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) },
        "appliedOpTime" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) },
        "durableOpTime" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1535296534, 1),
    "members" : [
        {
            "_id" : 0, "name" : "192.168.1.47:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
            "uptime" : 3866, "optime" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) },
            "optimeDate" : ISODate("2018-08-26T15:16:14Z"),
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "",
            "electionTime" : Timestamp(1535296432, 1), "electionDate" : ISODate("2018-08-26T15:13:52Z"),
            "configVersion" : 2, "self" : true, "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1, "name" : "192.168.1.48:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
            "uptime" : 159, "optime" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) },
            "optimeDurable" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) },
            "optimeDate" : ISODate("2018-08-26T15:16:14Z"), "optimeDurableDate" : ISODate("2018-08-26T15:16:14Z"),
            "lastHeartbeat" : ISODate("2018-08-26T15:16:20.414Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T15:16:19.382Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.1.47:27017", "syncSourceHost" : "192.168.1.47:27017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 2
        },
        {
            "_id" : 2, "name" : "192.168.1.49:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
            "uptime" : 159, "optime" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) },
            "optimeDurable" : { "ts" : Timestamp(1535296574, 1), "t" : NumberLong(3) },
            "optimeDate" : ISODate("2018-08-26T15:16:14Z"), "optimeDurableDate" : ISODate("2018-08-26T15:16:14Z"),
            "lastHeartbeat" : ISODate("2018-08-26T15:16:20.414Z"), "lastHeartbeatRecv" : ISODate("2018-08-26T15:16:21.156Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.1.47:27017", "syncSourceHost" : "192.168.1.47:27017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 2
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1535296574, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535296574, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
rs0:PRIMARY>
Sharding splits a database, distributing a large collection across different servers. For example, 100 GB of data can be split into 10 parts stored on 10 servers, so that each machine holds only 10 GB.
A mongos process (the router) provides storage of and access to the sharded data; mongos is the core of the sharding architecture. Clients do not need to know whether sharding is in place; they simply send their reads and writes to mongos.
Although sharding spreads the data over many servers, every node still needs a standby role so that the data stays highly available.
When the system needs more space or resources, sharding lets us scale out on demand: simply add machines running the mongodb service to the sharded cluster.
MongoDB sharding architecture
mongos: the entry point for database cluster requests. All requests are coordinated through mongos, so no router has to be added to the application; mongos itself is a request dispatch center that forwards each data request to the appropriate shard server. In production there are usually multiple mongos instances serving as entry points, so that the cluster does not become unusable if one of them dies.
config server: the configuration servers, which store the configuration of all database metadata (routing, sharding). mongos itself does not persist the shard-server and routing information; it only caches it in memory, while the config servers actually store it. mongos loads the configuration from the config servers on first start or after a restart, and whenever the configuration changes the config servers notify all mongos instances to update their state, so mongos can keep routing accurately. In production there are usually multiple config servers, because they hold the sharding metadata and its loss must be prevented.
shard: a MongoDB instance that stores part of a collection's data. Each shard is a standalone mongodb service or a replica set; in production every shard should be a replica set.
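Once a collection has been sharded (as is done in the test later in this lesson), the spread of its documents across the shards can be checked with the standard getShardDistribution helper; a sketch against the testdb.table1 collection used below:

mongos> use testdb
mongos> db.table1.getShardDistribution()
// prints data size, document count and estimated chunk count per shard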
1. Sharding setup - server planning
Three machines:
mongodbserver1 (192.168.1.47): mongos, config server, replica set 1 primary, replica set 2 arbiter, replica set 3 secondary
mongodbserver2 (192.168.1.48): mongos, config server, replica set 1 secondary, replica set 2 primary, replica set 3 arbiter
mongodbserver3 (192.168.1.49): mongos, config server, replica set 1 arbiter, replica set 2 secondary, replica set 3 primary
Port assignment: mongos 20000, config server 21000, replica set 1 27001, replica set 2 27002, replica set 3 27003
Disable the firewalld service and selinux on all three machines, or add rules for the corresponding ports.
2. Sharding setup - create directories
# Create the directories needed by each role, on all three machines
# mkdir -p /data/mongodb/mongos/log
# mkdir -p /data/mongodb/config/{data,log}
# mkdir -p /data/mongodb/shard1/{data,log}
# mkdir -p /data/mongodb/shard2/{data,log}
# mkdir -p /data/mongodb/shard3/{data,log}
[root@mongodbserver1 ~]# mkdir -p /data/mongodb/mongos/log
[root@mongodbserver1 ~]# ls -ld !$
ls -ld /data/mongodb/mongos/log
drwxr-xr-x 2 root root 6 Aug 27 00:04 /data/mongodb/mongos/log
[root@mongodbserver1 ~]# mkdir -p /data/mongodb/config/{data,log}
[root@mongodbserver1 ~]# ls -l /data/mongodb/config
total 0
drwxr-xr-x 2 root root 6 Aug 27 00:04 data
drwxr-xr-x 2 root root 6 Aug 27 00:04 log
[root@mongodbserver1 ~]# mkdir -p /data/mongodb/shard1/{data,log}
[root@mongodbserver1 ~]# ls -l /data/mongodb/shard1
total 0
drwxr-xr-x 2 root root 6 Aug 27 00:05 data
drwxr-xr-x 2 root root 6 Aug 27 00:05 log
[root@mongodbserver1 ~]# mkdir -p /data/mongodb/shard2/{data,log}
[root@mongodbserver1 ~]# mkdir -p /data/mongodb/shard3/{data,log}
[root@mongodbserver1 ~]# ls -l /data/mongodb/shard2
total 0
drwxr-xr-x 2 root root 6 Aug 27 00:05 data
drwxr-xr-x 2 root root 6 Aug 27 00:05 log
[root@mongodbserver1 ~]# ls -l /data/mongodb/shard3
total 0
drwxr-xr-x 2 root root 6 Aug 27 00:06 data
drwxr-xr-x 2 root root 6 Aug 27 00:06 log
3. config server configuration
# Since mongodb 3.4 the config servers must form a replica set
# Add the configuration file (do this on all three machines)
[root@mongodbserver3 ~]# mkdir /etc/mongod/
[root@mongodbserver3 ~]# vim /etc/mongod/config.conf
# Contents as follows. bind_ip = 0.0.0.0 listens on all addresses; for security you can also bind only the addresses you need
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/congigsrv.log
logappend = true
bind_ip = 0.0.0.0
port = 21000
fork = true
configsvr = true  # declare this is a config db of a cluster
replSet = configs  # replica set name
maxConns = 20000  # maximum number of connections
# Start the config server
[root@mongodbserver1 ~]# mongod -f /etc/mongod/config.conf
2018-08-27T00:13:47.546+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 41218
child process started successfully, parent exiting
# Log in to port 21000 on any of the machines and initialize the replica set
[root@mongodbserver1 ~]# mongo --port 21000
> config = { _id: "configs", members: [ {_id : 0, host : "192.168.1.47:21000"},{_id : 1, host : "192.168.1.48:21000"},{_id : 2, host : "192.168.1.49:21000"}] }
{
    "_id" : "configs",
    "members" : [
        { "_id" : 0, "host" : "192.168.1.47:21000" },
        { "_id" : 1, "host" : "192.168.1.48:21000" },
        { "_id" : 2, "host" : "192.168.1.49:21000" }
    ]
}
> rs.initiate(config)
{
    "ok" : 1,
    "operationTime" : Timestamp(1535300396, 1),
    "$gleStats" : { "lastOpTime" : Timestamp(1535300396, 1), "electionId" : ObjectId("000000000000000000000000") },
    "lastCommittedOpTime" : Timestamp(0, 0),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535300396, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
configs:PRIMARY>
4. Shard configuration
# On mongodbserver1, mongodbserver2 and mongodbserver3, create the following configuration files
[root@mongodbserver1 mongod]# cat /etc/mongod/shard1.conf
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 0.0.0.0
port = 27001
fork = true
oplogSize = 4096
journal = true
quiet = true
replSet = shard1  # replica set name
shardsvr = true  # declare this is a shard db of a cluster
maxConns = 20000  # maximum number of connections
[root@mongodbserver1 mongod]# cat /etc/mongod/shard2.conf
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 0.0.0.0
port = 27002
fork = true
oplogSize = 4096
journal = true
quiet = true
replSet = shard2  # replica set name
shardsvr = true  # declare this is a shard db of a cluster
maxConns = 20000  # maximum number of connections
[root@mongodbserver1 mongod]# cat /etc/mongod/shard3.conf
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 0.0.0.0
port = 27003
fork = true
oplogSize = 4096
journal = true
quiet = true
replSet = shard3  # replica set name
shardsvr = true  # declare this is a shard db of a cluster
maxConns = 20000  # maximum number of connections
[root@mongodbserver1 ~]# mongod -f /etc/mongod/shard1.conf
2018-08-27T09:45:20.879+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 46045
child process started successfully, parent exiting
[root@mongodbserver1 ~]# mongod -f /etc/mongod/shard2.conf
2018-08-27T09:45:20.879+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 46045
child process started successfully, parent exiting
[root@mongodbserver1 mongod]# mongod -f /etc/mongod/shard3.conf
2018-08-27T09:50:15.220+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 46162
child process started successfully, parent exiting
[root@mongodbserver1 mongod]# netstat -nltup | grep :27003
tcp        0      0 0.0.0.0:27003           0.0.0.0:*               LISTEN      46162/mongod
[root@mongodbserver1 mongod]# netstat -nltup | grep :27001
tcp        0      0 0.0.0.0:27001           0.0.0.0:*               LISTEN      45946/mongod
[root@mongodbserver1 mongod]# netstat -nltup | grep :27002
tcp        0      0 0.0.0.0:27002           0.0.0.0:*               LISTEN      46045/mongod
5. Initialize the shard replica sets
# Initialize the shard1 replica set
# Log in to port 27001 on either 192.168.1.47 or 192.168.1.48; not on 192.168.1.49, whose 27001 instance serves as the shard1 arbiter
[root@mongodbserver1 mongod]# mongo --port 27001
MongoDB shell version v4.0.1
connecting to: mongodb://127.0.0.1:27001/
MongoDB server version: 4.0.1
Server has startup warnings:
2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten]
2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-08-27T09:37:06.874+0800 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you and anyone you share the URL with. MongoDB may use this information to make product improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> use admin
switched to db admin
> config = { _id: "shard1", members: [ {_id : 0, host : "192.168.1.47:27001"}, {_id: 1,host : "192.168.1.48:27001"},{_id : 2, host : "192.168.1.49:27001",arbiterOnly:true}] }
{
    "_id" : "shard1",
    "members" : [
        { "_id" : 0, "host" : "192.168.1.47:27001" },
        { "_id" : 1, "host" : "192.168.1.48:27001" },
        { "_id" : 2, "host" : "192.168.1.49:27001", "arbiterOnly" : true }
    ]
}
> rs.initiate(config)
{
    "ok" : 1,
    "operationTime" : Timestamp(1535335745, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535335745, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
shard1:SECONDARY> rs.status()
{
    "set" : "shard1",
    "date" : ISODate("2018-08-27T02:09:26.775Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1535335758, 2), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1535335758, 2), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1535335758, 2), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1535335758, 2), "t" : NumberLong(1) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1535335758, 1),
    "members" : [
        {
            "_id" : 0, "name" : "192.168.1.47:27001", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
            "uptime" : 1940, "optime" : { "ts" : Timestamp(1535335758, 2), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-08-27T02:09:18Z"),
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1535335756, 1), "electionDate" : ISODate("2018-08-27T02:09:16Z"),
            "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1, "name" : "192.168.1.48:27001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
            "uptime" : 20, "optime" : { "ts" : Timestamp(1535335758, 2), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1535335758, 2), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-08-27T02:09:18Z"), "optimeDurableDate" : ISODate("2018-08-27T02:09:18Z"),
            "lastHeartbeat" : ISODate("2018-08-27T02:09:26.532Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T02:09:25.106Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.1.47:27001", "syncSourceHost" : "192.168.1.47:27001", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1
        },
        {
            "_id" : 2, "name" : "192.168.1.49:27001", "health" : 1, "state" : 7, "stateStr" : "ARBITER",
            "uptime" : 20,
            "lastHeartbeat" : ISODate("2018-08-27T02:09:26.531Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T02:09:25.865Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "", "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1535335758, 2),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535335758, 2),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
shard1:PRIMARY>
# Initialize the shard2 replica set
# Log in to port 27002 on either 192.168.1.48 or 192.168.1.49; not on 192.168.1.47, whose 27002 instance serves as the shard2 arbiter
# rs.remove("192.168.1.49:27002");  removes a node
# rs.add({_id: 2, host: "192.168.1.49:27002"})
# rs.add({_id: 2, host: "192.168.1.47:27002",arbiterOnly:true})
[root@mongodbserver2 ~]# mongo --port 27002
> config = { _id: "shard2", members: [ {_id : 0, host : "192.168.1.47:27002" ,arbiterOnly:true},{_id : 1, host : "192.168.1.48:27002"},{_id : 2, host : "192.168.1.49:27002"}] }
> rs.initiate(config)
# Initialize the shard3 replica set
# Log in to port 27003 on either 192.168.1.47 or 192.168.1.49; not on 192.168.1.48, whose 27003 instance serves as the shard3 arbiter
[root@mongodbserver3 ~]# mongo --port 27003
> use admin
switched to db admin
> config = { _id: "shard3", members: [ {_id : 0, host : "192.168.1.47:27003"}, {_id : 1, host : "192.168.1.48:27003", arbiterOnly:true}, {_id : 2, host : "192.168.1.49:27003"}] }
{
    "_id" : "shard3",
    "members" : [
        { "_id" : 0, "host" : "192.168.1.47:27003" },
        { "_id" : 1, "host" : "192.168.1.48:27003", "arbiterOnly" : true },
        { "_id" : 2, "host" : "192.168.1.49:27003" }
    ]
}
> rs.initiate(config)
{
    "ok" : 1,
    "operationTime" : Timestamp(1535338725, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535338725, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
shard3:PRIMARY> rs.status()
{
    "set" : "shard3",
    "date" : ISODate("2018-08-27T02:59:59.805Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(1535338797, 1), "t" : NumberLong(1) },
        "readConcernMajorityOpTime" : { "ts" : Timestamp(1535338797, 1), "t" : NumberLong(1) },
        "appliedOpTime" : { "ts" : Timestamp(1535338797, 1), "t" : NumberLong(1) },
        "durableOpTime" : { "ts" : Timestamp(1535338797, 1), "t" : NumberLong(1) }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1535338797, 1),
    "members" : [
        {
            "_id" : 0, "name" : "192.168.1.47:27003", "health" : 1, "state" : 2, "stateStr" : "SECONDARY",
            "uptime" : 74, "optime" : { "ts" : Timestamp(1535338797, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1535338797, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-08-27T02:59:57Z"), "optimeDurableDate" : ISODate("2018-08-27T02:59:57Z"),
            "lastHeartbeat" : ISODate("2018-08-27T02:59:57.821Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T02:59:58.287Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.1.49:27003", "syncSourceHost" : "192.168.1.49:27003", "syncSourceId" : 2,
            "infoMessage" : "", "configVersion" : 1
        },
        {
            "_id" : 1, "name" : "192.168.1.48:27003", "health" : 1, "state" : 7, "stateStr" : "ARBITER",
            "uptime" : 74,
            "lastHeartbeat" : ISODate("2018-08-27T02:59:57.821Z"), "lastHeartbeatRecv" : ISODate("2018-08-27T02:59:59.384Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "", "configVersion" : 1
        },
        {
            "_id" : 2, "name" : "192.168.1.49:27003", "health" : 1, "state" : 1, "stateStr" : "PRIMARY",
            "uptime" : 4185, "optime" : { "ts" : Timestamp(1535338797, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2018-08-27T02:59:57Z"),
            "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1535338735, 1), "electionDate" : ISODate("2018-08-27T02:58:55Z"),
            "configVersion" : 1, "self" : true, "lastHeartbeatMessage" : ""
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1535338797, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535338797, 1),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
6. Configure the routing server (mongos)
[root@mongodbserver2 ~]# vim /etc/mongod/mongos.conf
pidfilepath = /var/run/mongodb/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 0.0.0.0
port = 20000
fork = true
configdb = configs/192.168.1.47:21000, 192.168.1.48:21000, 192.168.1.49:21000  # the config servers to use; only 1 or 3 are allowed, and configs is the config server replica set name
maxConns = 20000  # maximum number of connections
"/etc/mongod/mongos.conf" [New] 8L, 360C written
[root@mongodbserver2 ~]# mongos -f /etc/mongod/mongos.conf
2018-08-27T11:05:04.093+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2018-08-27T11:05:04.254+0800 I NETWORK [main] getaddrinfo(" 192.168.1.48") failed: Name or service not known
2018-08-27T11:05:04.305+0800 I NETWORK [main] getaddrinfo(" 192.168.1.49") failed: Name or service not known
about to fork child process, waiting until server is ready for connections.
forked process: 4238
child process started successfully, parent exiting
# Note: the getaddrinfo failures above are caused by the spaces after the commas in the configdb line; write the host list without spaces
[root@mongodbserver2 ~]#
7. Enable sharding
mongos> sh.addShard("shard1/192.168.1.47:27001,192.168.1.48:27001,192.168.1.49:27001")
{
    "shardAdded" : "shard1",
    "ok" : 1,
    "operationTime" : Timestamp(1535339410, 3),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535339410, 3),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
# the 192.168.1.49 member of shard2 was given the wrong role at initialization, so it was removed
mongos> sh.addShard("shard2/192.168.1.47:27002,192.168.1.48:27002")
{
    "shardAdded" : "shard2",
    "ok" : 1,
    "operationTime" : Timestamp(1535339580, 9),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535339580, 9),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
mongos> sh.addShard("shard3/192.168.1.47:27003,192.168.1.48:27003,192.168.1.49:27003")
{
    "shardAdded" : "shard3",
    "ok" : 1,
    "operationTime" : Timestamp(1535339462, 5),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535339462, 5),
        "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) }
    }
}
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5b82d3380925dce6faaca18a")
  }
  shards:
      { "_id" : "shard1", "host" : "shard1/192.168.1.47:27001,192.168.1.48:27001", "state" : 1 }
      { "_id" : "shard2", "host" : "shard2/192.168.1.48:27002", "state" : 1 }
      { "_id" : "shard3", "host" : "shard3/192.168.1.47:27003,192.168.1.49:27003", "state" : 1 }
  active mongoses:
      "4.0.1" : 3
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled: yes
      Currently running: no
      Failed balancer rounds in last 5 attempts: 0
      Migration Results for the last 24 hours:
          No recent migrations
  databases:
      { "_id" : "config", "primary" : "config", "partitioned" : true }
          config.system.sessions
              shard key: { "_id" : 1 }
              unique: false
              balancing: true
              chunks:
                  shard1 1
              { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0)
mongos>
[root@mongodbserver1 mongod]# mongo --port 20000 MongoDB shell version v4.0.1 connecting to: mongodb://127.0.0.1:20000/ MongoDB server version: 4.0.1 Server has startup warnings: 2018-08-27T11:05:03.697+0800 I CONTROL [main] 2018-08-27T11:05:03.697+0800 I CONTROL [main] ** WARNING: Access control is not enabled for the database. 2018-08-27T11:05:03.697+0800 I CONTROL [main] ** Read and write access to data and configuration is unrestricted. 2018-08-27T11:05:03.697+0800 I CONTROL [main] ** WARNING: You are running this process as the root user, which is not recommended. 2018-08-27T11:05:03.697+0800 I CONTROL [main] mongos> use admin switched to db admin mongos> db.runCommand({ enablesharding : "testdb"}) { "ok" : 1, "operationTime" : Timestamp(1535339870, 5), "$clusterTime" : { "clusterTime" : Timestamp(1535339870, 5), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos> db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } ) { "collectionsharded" : "testdb.table1", "collectionUUID" : UUID("733c1dad-cb4e-4ed3-a3ca-c3cfbec2e30e"), "ok" : 1, "operationTime" : Timestamp(1535339897, 16), "$clusterTime" : { "clusterTime" : Timestamp(1535339897, 16), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos> use testdb switched to db testdb mongos> for (var i = 1; i <= 10000; i++) db.table1.save({id:i,"test1":"testval1"}) WriteResult({ "nInserted" : 1 }) mongos> db.table1.stats() { "sharded" : true, "capped" : false, "wiredTiger" : { "metadata" : { "formatVersion" : 1 }, "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u", "type" : "file", "uri" : "statistics:table:collection-24--8506913186853910022", "LSM" : { "bloom filter false positives" : 0, "bloom filter hits" : 0, "bloom filter misses" : 0, "bloom filter pages evicted from cache" : 0, "bloom filter pages read into cache" : 0, "bloom filters in the LSM tree" : 0, "chunks in the LSM tree" : 0, "highest merge generation in the LSM tree" : 0, "queries that could have benefited from a Bloom filter that did not exist" : 0, "sleep for LSM checkpoint throttle" : 0, "sleep for LSM merge throttle" : 0, "total size of bloom filters" : 0 }, "block-manager" : { "allocations requiring file extension" : 21, "blocks allocated" : 21, "blocks freed" : 0, "checkpoint size" : 155648, "file allocation unit size" : 4096, "file bytes available for reuse" : 0, "file magic number" : 120897, "file major version number" : 1, "file size in bytes" : 167936, "minor version 
number" : 0 }, "btree" : { "btree checkpoint generation" : 91, "column-store fixed-size leaf pages" : 0, "column-store internal pages" : 0, "column-store variable-size RLE encoded values" : 0, "column-store variable-size deleted values" : 0, "column-store variable-size leaf pages" : 0, "fixed-record size" : 0, "maximum internal page key size" : 368, "maximum internal page size" : 4096, "maximum leaf page key size" : 2867, "maximum leaf page size" : 32768, "maximum leaf page value size" : 67108864, "maximum tree depth" : 3, "number of key/value pairs" : 0, "overflow pages" : 0, "pages rewritten by compaction" : 0, "row-store internal pages" : 0, "row-store leaf pages" : 0 }, "cache" : { "bytes currently in the cache" : 1349553, "bytes read into cache" : 0, "bytes written from cache" : 530298, "checkpoint blocked page eviction" : 0, "data source pages selected for eviction unable to be evicted" : 0, "eviction walk passes of a file" : 0, "eviction walk target pages histogram - 0-9" : 0, "eviction walk target pages histogram - 10-31" : 0, "eviction walk target pages histogram - 128 and higher" : 0, "eviction walk target pages histogram - 32-63" : 0, "eviction walk target pages histogram - 64-128" : 0, "eviction walks abandoned" : 0, "eviction walks gave up because they restarted their walk twice" : 0, "eviction walks gave up because they saw too many pages and found no candidates" : 0, "eviction walks gave up because they saw too many pages and found too few candidates" : 0, "eviction walks reached end of tree" : 0, "eviction walks started from root of tree" : 0, "eviction walks started from saved location in tree" : 0, "hazard pointer blocked page eviction" : 0, "in-memory page passed criteria to be split" : 0, "in-memory page splits" : 0, "internal pages evicted" : 0, "internal pages split during eviction" : 0, "leaf pages split during eviction" : 0, "modified pages evicted" : 0, "overflow pages read into cache" : 0, "page split during eviction deepened the tree" : 0, "page written requiring lookaside records" : 0, "pages read into cache" : 0, "pages read into cache after truncate" : 1, "pages read into cache after truncate in prepare state" : 0, "pages read into cache requiring lookaside entries" : 0, "pages requested from the cache" : 10000, "pages seen by eviction walk" : 0, "pages written from cache" : 20, "pages written requiring in-memory restoration" : 0, "tracked dirty bytes in the cache" : 1349094, "unmodified pages evicted" : 0 }, "cache_walk" : { "Average difference between current eviction generation when the page was last considered" : 0, "Average on-disk page image size seen" : 0, "Average time in cache for pages that have been visited by the eviction server" : 0, "Average time in cache for pages that have not been visited by the eviction server" : 0, "Clean pages currently in cache" : 0, "Current eviction generation" : 0, "Dirty pages currently in cache" : 0, "Entries in the root page" : 0, "Internal pages currently in cache" : 0, "Leaf pages currently in cache" : 0, "Maximum difference between current eviction generation when the page was last considered" : 0, "Maximum page size seen" : 0, "Minimum on-disk page image size seen" : 0, "Number of pages never visited by eviction server" : 0, "On-disk page image sizes smaller than a single allocation unit" : 0, "Pages created in memory and never written" : 0, "Pages currently queued for eviction" : 0, "Pages that could not be queued for eviction" : 0, "Refs skipped during cache traversal" : 0, "Size of the root page" : 0, "Total 
number of pages currently in cache" : 0 }, "compression" : { "compressed pages read" : 0, "compressed pages written" : 19, "page written failed to compress" : 0, "page written was too small to compress" : 1, "raw compression call failed, additional data available" : 0, "raw compression call failed, no additional data available" : 0, "raw compression call succeeded" : 0 }, "cursor" : { "bulk-loaded cursor-insert calls" : 0, "create calls" : 5, "cursor operation restarted" : 0, "cursor-insert key and value bytes inserted" : 561426, "cursor-remove key bytes removed" : 0, "cursor-update value bytes updated" : 0, "cursors cached on close" : 0, "cursors reused from cache" : 9996, "insert calls" : 10000, "modify calls" : 0, "next calls" : 1, "prev calls" : 1, "remove calls" : 0, "reserve calls" : 0, "reset calls" : 20003, "search calls" : 0, "search near calls" : 0, "truncate calls" : 0, "update calls" : 0 }, "reconciliation" : { "dictionary matches" : 0, "fast-path pages deleted" : 0, "internal page key bytes discarded using suffix compression" : 36, "internal page multi-block writes" : 0, "internal-page overflow keys" : 0, "leaf page key bytes discarded using prefix compression" : 0, "leaf page multi-block writes" : 1, "leaf-page overflow keys" : 0, "maximum blocks required for a page" : 1, "overflow values written" : 0, "page checksum matches" : 0, "page reconciliation calls" : 2, "page reconciliation calls for eviction" : 0, "pages deleted" : 0 }, "session" : { "cached cursor count" : 5, "object compaction" : 0, "open cursor count" : 0 }, "transaction" : { "update conflicts" : 0 } }, "ns" : "testdb.table1", "count" : 10000, "size" : 540000, "storageSize" : 167936, "totalIndexSize" : 208896, "indexSizes" : { "_id_" : 94208, "id_1" : 114688 }, "avgObjSize" : 54, "maxSize" : NumberLong(0), "nindexes" : 2, "nchunks" : 1, "shards" : { "shard3" : { "ns" : "testdb.table1", "size" : 540000, "count" : 10000, "avgObjSize" : 54, "storageSize" : 167936, "capped" : false, "wiredTiger" : { "metadata" : { "formatVersion" : 1 }, "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u", "type" : "file", "uri" : "statistics:table:collection-24--8506913186853910022", "LSM" : { "bloom filter false positives" : 0, "bloom filter hits" : 0, "bloom filter misses" : 0, "bloom filter pages evicted from cache" : 0, "bloom filter pages read into cache" : 0, "bloom filters in the LSM tree" : 0, "chunks in the LSM tree" : 0, "highest merge generation in the LSM tree" : 0, "queries that could have benefited from a Bloom filter that did not exist" : 
0, "sleep for LSM checkpoint throttle" : 0, "sleep for LSM merge throttle" : 0, "total size of bloom filters" : 0 }, "block-manager" : { "allocations requiring file extension" : 21, "blocks allocated" : 21, "blocks freed" : 0, "checkpoint size" : 155648, "file allocation unit size" : 4096, "file bytes available for reuse" : 0, "file magic number" : 120897, "file major version number" : 1, "file size in bytes" : 167936, "minor version number" : 0 }, "btree" : { "btree checkpoint generation" : 91, "column-store fixed-size leaf pages" : 0, "column-store internal pages" : 0, "column-store variable-size RLE encoded values" : 0, "column-store variable-size deleted values" : 0, "column-store variable-size leaf pages" : 0, "fixed-record size" : 0, "maximum internal page key size" : 368, "maximum internal page size" : 4096, "maximum leaf page key size" : 2867, "maximum leaf page size" : 32768, "maximum leaf page value size" : 67108864, "maximum tree depth" : 3, "number of key/value pairs" : 0, "overflow pages" : 0, "pages rewritten by compaction" : 0, "row-store internal pages" : 0, "row-store leaf pages" : 0 }, "cache" : { "bytes currently in the cache" : 1349553, "bytes read into cache" : 0, "bytes written from cache" : 530298, "checkpoint blocked page eviction" : 0, "data source pages selected for eviction unable to be evicted" : 0, "eviction walk passes of a file" : 0, "eviction walk target pages histogram - 0-9" : 0, "eviction walk target pages histogram - 10-31" : 0, "eviction walk target pages histogram - 128 and higher" : 0, "eviction walk target pages histogram - 32-63" : 0, "eviction walk target pages histogram - 64-128" : 0, "eviction walks abandoned" : 0, "eviction walks gave up because they restarted their walk twice" : 0, "eviction walks gave up because they saw too many pages and found no candidates" : 0, "eviction walks gave up because they saw too many pages and found too few candidates" : 0, "eviction walks reached end of tree" : 0, "eviction walks started from root of tree" : 0, "eviction walks started from saved location in tree" : 0, "hazard pointer blocked page eviction" : 0, "in-memory page passed criteria to be split" : 0, "in-memory page splits" : 0, "internal pages evicted" : 0, "internal pages split during eviction" : 0, "leaf pages split during eviction" : 0, "modified pages evicted" : 0, "overflow pages read into cache" : 0, "page split during eviction deepened the tree" : 0, "page written requiring lookaside records" : 0, "pages read into cache" : 0, "pages read into cache after truncate" : 1, "pages read into cache after truncate in prepare state" : 0, "pages read into cache requiring lookaside entries" : 0, "pages requested from the cache" : 10000, "pages seen by eviction walk" : 0, "pages written from cache" : 20, "pages written requiring in-memory restoration" : 0, "tracked dirty bytes in the cache" : 1349094, "unmodified pages evicted" : 0 }, "cache_walk" : { "Average difference between current eviction generation when the page was last considered" : 0, "Average on-disk page image size seen" : 0, "Average time in cache for pages that have been visited by the eviction server" : 0, "Average time in cache for pages that have not been visited by the eviction server" : 0, "Clean pages currently in cache" : 0, "Current eviction generation" : 0, "Dirty pages currently in cache" : 0, "Entries in the root page" : 0, "Internal pages currently in cache" : 0, "Leaf pages currently in cache" : 0, "Maximum difference between current eviction generation when the page was last 
considered" : 0, "Maximum page size seen" : 0, "Minimum on-disk page image size seen" : 0, "Number of pages never visited by eviction server" : 0, "On-disk page image sizes smaller than a single allocation unit" : 0, "Pages created in memory and never written" : 0, "Pages currently queued for eviction" : 0, "Pages that could not be queued for eviction" : 0, "Refs skipped during cache traversal" : 0, "Size of the root page" : 0, "Total number of pages currently in cache" : 0 }, "compression" : { "compressed pages read" : 0, "compressed pages written" : 19, "page written failed to compress" : 0, "page written was too small to compress" : 1, "raw compression call failed, additional data available" : 0, "raw compression call failed, no additional data available" : 0, "raw compression call succeeded" : 0 }, "cursor" : { "bulk-loaded cursor-insert calls" : 0, "create calls" : 5, "cursor operation restarted" : 0, "cursor-insert key and value bytes inserted" : 561426, "cursor-remove key bytes removed" : 0, "cursor-update value bytes updated" : 0, "cursors cached on close" : 0, "cursors reused from cache" : 9996, "insert calls" : 10000, "modify calls" : 0, "next calls" : 1, "prev calls" : 1, "remove calls" : 0, "reserve calls" : 0, "reset calls" : 20003, "search calls" : 0, "search near calls" : 0, "truncate calls" : 0, "update calls" : 0 }, "reconciliation" : { "dictionary matches" : 0, "fast-path pages deleted" : 0, "internal page key bytes discarded using suffix compression" : 36, "internal page multi-block writes" : 0, "internal-page overflow keys" : 0, "leaf page key bytes discarded using prefix compression" : 0, "leaf page multi-block writes" : 1, "leaf-page overflow keys" : 0, "maximum blocks required for a page" : 1, "overflow values written" : 0, "page checksum matches" : 0, "page reconciliation calls" : 2, "page reconciliation calls for eviction" : 0, "pages deleted" : 0 }, "session" : { "cached cursor count" : 5, "object compaction" : 0, "open cursor count" : 0 }, "transaction" : { "update conflicts" : 0 } }, "nindexes" : 2, "totalIndexSize" : 208896, "indexSizes" : { "_id_" : 94208, "id_1" : 114688 }, "ok" : 1, "operationTime" : Timestamp(1535339957, 1), "$gleStats" : { "lastOpTime" : { "ts" : Timestamp(1535339939, 572), "t" : NumberLong(1) }, "electionId" : ObjectId("7fffffff0000000000000001") }, "lastCommittedOpTime" : Timestamp(1535339957, 1), "$configServerState" : { "opTime" : { "ts" : Timestamp(1535339956, 1), "t" : NumberLong(1) } }, "$clusterTime" : { "clusterTime" : Timestamp(1535339957, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } }, "ok" : 1, "operationTime" : Timestamp(1535339957, 1), "$clusterTime" : { "clusterTime" : Timestamp(1535339957, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } mongos>
37 mongodb backup and restore
1. Back up a specified database
[root@mongodbserver1 mongod]# mongodump --host 127.0.0.1 --port 20000 -d testdb -o /tmp/mongobak 2018-08-27T11:22:43.430+0800 writing testdb.table1 to 2018-08-27T11:22:43.541+0800 done dumping testdb.table1 (10000 documents) [root@mongodbserver1 mongod]# ls -lh /tmp/mongobak/testdb/ total 532K -rw-r--r-- 1 root root 528K Aug 27 11:22 table1.bson -rw-r--r-- 1 root root 187 Aug 27 11:22 table1.metadata.json
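The .bson files are binary and not directly readable; the bsondump tool installed alongside mongodump can render them as JSON for a quick sanity check. A minimal sketch, assuming the dump from the step above sits at /tmp/mongobak/testdb/table1.bson as shown in the ls output:
# Print the first few documents of the dump as JSON (read-only)
bsondump /tmp/mongobak/testdb/table1.bson | head -3
# bsondump emits one JSON document per line, so wc -l gives the document count
bsondump /tmp/mongobak/testdb/table1.bson | wc -l
If the second command prints 10000, the dump matches the collection that was backed up.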
2. Back up all databases
[root@mongodbserver1 mongod]# mongodump --host 127.0.0.1 --port 20000 -o /tmp/mongobak/alldatabase 2018-08-27T11:24:40.955+0800 writing admin.system.version to 2018-08-27T11:24:41.049+0800 done dumping admin.system.version (1 document) 2018-08-27T11:24:41.143+0800 writing config.locks to 2018-08-27T11:24:41.144+0800 writing config.changelog to 2018-08-27T11:24:41.144+0800 writing testdb.table1 to 2018-08-27T11:24:41.144+0800 writing config.lockpings to 2018-08-27T11:24:41.149+0800 done dumping config.changelog (7 documents) 2018-08-27T11:24:41.149+0800 writing config.shards to 2018-08-27T11:24:41.149+0800 done dumping config.locks (4 documents) 2018-08-27T11:24:41.149+0800 writing config.mongos to 2018-08-27T11:24:41.149+0800 done dumping config.lockpings (9 documents) 2018-08-27T11:24:41.149+0800 writing config.chunks to 2018-08-27T11:24:41.151+0800 done dumping config.mongos (3 documents) 2018-08-27T11:24:41.151+0800 writing config.collections to 2018-08-27T11:24:41.152+0800 done dumping config.shards (3 documents) 2018-08-27T11:24:41.152+0800 writing config.databases to 2018-08-27T11:24:41.153+0800 done dumping config.chunks (2 documents) 2018-08-27T11:24:41.153+0800 writing config.version to 2018-08-27T11:24:41.154+0800 done dumping config.databases (1 document) 2018-08-27T11:24:41.154+0800 writing config.tags to 2018-08-27T11:24:41.155+0800 done dumping config.collections (2 documents) 2018-08-27T11:24:41.155+0800 writing config.migrations to 2018-08-27T11:24:41.157+0800 done dumping config.version (1 document) 2018-08-27T11:24:41.170+0800 done dumping config.tags (0 documents) 2018-08-27T11:24:41.179+0800 done dumping config.migrations (0 documents) 2018-08-27T11:24:41.301+0800 done dumping testdb.table1 (10000 documents)
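In practice, full dumps like this are usually driven by cron with a date-stamped directory per run. The sketch below is an assumption-laden example, not part of the original steps: the backup root, the 7-day retention, and the use of --gzip (which compresses each dumped file and requires passing --gzip to mongorestore as well) are all choices to adapt:
#!/bin/bash
# Hypothetical nightly backup script; assumes mongos listens on 127.0.0.1:20000
backup_root=/tmp/mongobak/daily
today=$(date +%F)
mkdir -p "$backup_root/$today"
mongodump --host 127.0.0.1 --port 20000 --gzip -o "$backup_root/$today"
# Expire backup directories older than 7 days
find "$backup_root" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \;
A crontab entry such as 0 2 * * * /usr/local/sbin/mongo_backup.sh (the script path is hypothetical) would run the backup nightly at 02:00.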
3. Back up a specified collection
# This still creates a testdb directory under the output path, with the collection's two files (the .bson data and the .metadata.json) inside it [root@mongodbserver1 mongod]# mongodump --host 127.0.0.1 --port 20000 -d testdb -c table1 -o /tmp/mongobak/ 2018-08-27T11:28:31.578+0800 writing testdb.table1 to 2018-08-27T11:28:31.636+0800 done dumping testdb.table1 (10000 documents)
4. Export a collection to a JSON file
[root@mongodbserver1 mongod]# mongoexport --host 127.0.0.1 --port 20000 -d testdb -c table1 -o /tmp/mydb2/testdb.json 2018-08-27T11:31:07.827+0800 connected to: 127.0.0.1:20000 2018-08-27T11:31:07.974+0800 exported 10000 records [root@mongodbserver1 mongod]#
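Besides JSON, mongoexport can emit CSV with --type=csv; unlike JSON export, CSV requires naming the exported fields explicitly with --fields. A sketch assuming the test documents carry an id field, as the id_1 index in the earlier stats output suggests:
# Export the id field of every document as CSV; --fields is mandatory with --type=csv
mongoexport --host 127.0.0.1 --port 20000 -d testdb -c table1 --type=csv --fields id -o /tmp/mydb2/testdb.csv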
5. MongoDB restore
1. Restore all databases
# The directory argument is the directory that holds the backup of all databases; --drop is optional and deletes the existing data before restoring, so it is not recommended [root@mongodbserver1 mongod]# mongorestore -h 127.0.0.1 --port 20000 --drop /tmp/mongobak/alldatabase
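Since --drop is destructive, it can be worth letting mongorestore rehearse the operation first: the --dryRun flag (available since mongorestore 3.4) walks through the restore without writing anything. A sketch:
# Preview what would be restored; combined with --verbose it lists the namespaces touched
mongorestore -h 127.0.0.1 --port 20000 --dryRun --verbose /tmp/mongobak/alldatabase
# Only after reviewing that output, run the real restore
mongorestore -h 127.0.0.1 --port 20000 --drop /tmp/mongobak/alldatabase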
2. Restore a specified database
# -d is followed by the name of the database to restore; the directory argument is the directory holding that database's backup files (here the testdb subdirectory of the dump) [root@mongodbserver1 mongod]# mongorestore --host 127.0.0.1 --port 20000 -d testdb /tmp/mongobak/testdb
3. Restore a collection
# -c is followed by the name of the collection to restore; the path should point at the .bson file generated when the testdb database was backed up [root@mongodbserver1 mongod]# mongorestore --host 127.0.0.1 --port 20000 -d testdb -c table1 /tmp/mongobak/testdb/table1.bson
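Because -c accepts any target name, a cautious pattern is to restore the .bson file into a scratch collection first and inspect it before touching the live one; the table1_restored name below is hypothetical:
# Restore the backup next to, rather than over, the original collection
mongorestore --host 127.0.0.1 --port 20000 -d testdb -c table1_restored /tmp/mongobak/testdb/table1.bson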
4. Import a collection
[root@mongodbserver1 mongod]# mongoimport --host 127.0.0.1 --port 20000 -d testdb -c table1 --file /tmp/mydb2/testdb.json
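Note that importing into a collection that still contains the original documents will fail with duplicate _id errors; either drop the collection first or use mongoimport's --mode=upsert. Whichever way the data came back, a quick count through mongos confirms it landed; a minimal check, assuming the port 20000 mongos from the earlier steps:
# Compare the count with the 10000 documents exported earlier
mongo --port 20000 --eval 'db.getSiblingDB("testdb").table1.count()'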
Further reading
MongoDB security settings http://www.mongoing.com/archives/631
Running JavaScript scripts in MongoDB http://www.jianshu.com/p/6bd8934bd1ca