NoSQL: MongoDB

1. Introduction to MongoDB

  • MongoDB is a database built on distributed file storage, written in C++. It aims to provide a scalable, high-performance data storage solution for web applications.

  • MongoDB stores data as documents whose structure consists of key-value (key => value) pairs. The supported data structures are very loose; MongoDB documents resemble JSON objects, and field values can contain other documents, arrays, and arrays of documents.

What is JSON? JSON stands for JavaScript Object Notation. It is a lightweight text format for data interchange, it is language-independent, and it is self-describing and easy to understand. Although JSON uses JavaScript syntax to describe data objects, it remains independent of any language or platform, and JSON parsers and libraries exist for many programming languages.
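
To make this concrete, here is a MongoDB-style document expressed as JSON, with a nested document and an array as field values (the field names are illustrative), parsed with Python's standard json module:

```python
import json

# A MongoDB-style document: nested document and array values (illustrative fields)
doc = json.loads("""
{
  "UserName": "ying",
  "address": { "city": "Beijing", "zip": "100000" },
  "tags": ["web", "nosql"]
}
""")

print(doc["address"]["city"])  # a field inside a nested document
print(len(doc["tags"]))        # an array-valued field
```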

  • Mongo最大的特色是它支持的查詢語言很是強大,其語法有點相似於面向對象的查詢語言,幾乎能夠實現相似關係數據庫單表查詢的絕大部分功能,並且還支持對數據創建索引。

Comparison between MongoDB and relational databases

(Figure: relational database data structure)

(Figure: MongoDB data structure)

2. Installing MongoDB

Following the official documentation at https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/, edit the repo file:

[root@ying01 ~]# cd /etc/yum.repos.d/
[root@ying01 yum.repos.d]# vim mongodb-org-4.0.repo

[mongodb-org-4.0]
name = MongoDB Repository
baseurl = https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-4.0.asc

Check that yum list now shows the mongodb-org packages, then install them all with yum:

[root@ying01 yum.repos.d]# yum list|grep mongodb-org
mongodb-org.x86_64                        4.0.1-1.el7                  @mongodb-org-4.0
mongodb-org-mongos.x86_64                 4.0.1-1.el7                  @mongodb-org-4.0
mongodb-org-server.x86_64                 4.0.1-1.el7                  @mongodb-org-4.0
mongodb-org-shell.x86_64                  4.0.1-1.el7                  @mongodb-org-4.0
mongodb-org-tools.x86_64                  4.0.1-1.el7                  @mongodb-org-4.0

[root@ying01 yum.repos.d]# yum install -y mongodb-org

3. Connecting to MongoDB

In the mongod.conf file, add the host's IP address, separated by a comma:

[root@ying01 ~]# vim /etc/mongod.conf 

bindIp: 127.0.0.1,192.168.112.136    // add the host IP

Start the service, then check the process and the listening ports:

[root@ying01 ~]# systemctl start mongod

[root@ying01 ~]# ps aux |grep mongod
mongod    8169  0.8  3.2 1074460 60764 ?       Sl   09:51   0:01 /usr/bin/mongod -f /etc/mongod.conf
root      8197  0.0  0.0 112720   984 pts/0    S+   09:54   0:00 grep --color=auto mongod
[root@ying01 ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.112.136:27017   0.0.0.0:*               LISTEN      8169/mongod         
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      8169/mongod         
[root@ying01 ~]#

Connect to MongoDB:

Direct connection:

[root@ying01 ~]# mongo

Specifying the port (27017 is the default when the config file does not set one):

[root@ying01 ~]# mongo --port 27017

Remote connection, which requires the IP and port:

[root@ying01 ~]# mongo --host 192.168.112.136  --port 27017

4. MongoDB User Management

Enter the database and create a user:

[root@ying01 ~]# mongo


> use admin                     // switch to the admin database
switched to db admin
> db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "www123", roles: [ { role: "root", db: "admin" } ] } )           // create the admin user
Successfully added user: {
	"user" : "admin",
	"customData" : {
		"description" : "superuser"
	},
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	]
}

Explanation of the statement:

db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "www123", roles: [ { role: "root", db: "admin" } ] } )

  • db.createUser is the command that creates a user
  • user: "admin" defines the username
  • customData: {description: "superuser"} attaches optional descriptive data to the user
  • pwd: "www123" defines the user's password
  • roles: [ { role: "root", db: "admin" } ] grants the role root on the admin database

To list all users, switch to the admin database first:

> db.system.users.find()
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "NRoDD1kSxLktW8vyDg4mpw==", "storedKey" : "yX+2kSbCl1bPpsh+0ZeE7QlcW6A=", "serverKey" : "XM9NgrMNOwXAvuWusY6iVhpyuFw=" }, "SCRAM-SHA-256" : { "iterationCount" : 15000, "salt" : "MOokBWPCOobBeNwHnhm/2QagzAT8h2yIuCzROg==", "storedKey" : "tAqs7zMF8InT0FU09lCgq2ZVB9wRgeIyoa1UONgRDM0=", "serverKey" : "lN2TYZX5Snik4gMthUNZE7jw71Nkxo13LAChh9K8ZiI=" } }, "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }

View all users in the current database:

> show users
{
	"_id" : "admin.admin",
	"user" : "admin",
	"db" : "admin",
	"customData" : {
		"description" : "superuser"
	},
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}

Create a new user, ying:

> db.createUser({user:"ying",pwd:"www123",roles:[{role:"read",db:"testdb"}]})
Successfully added user: {
	"user" : "ying",
	"roles" : [
		{
			"role" : "read",
			"db" : "testdb"
		}
	]
}
> show users                 
{
	"_id" : "admin.admin",
	"user" : "admin",
	"db" : "admin",
	"customData" : {
		"description" : "superuser"
	},
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}
{
	"_id" : "admin.ying",
	"user" : "ying",
	"db" : "admin",
	"roles" : [
		{
			"role" : "read",
			"db" : "testdb"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}

Delete a user:

> db.dropUser('ying')
true
> show users
{
	"_id" : "admin.admin",
	"user" : "admin",
	"db" : "admin",
	"customData" : {
		"description" : "superuser"
	},
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}

Switch to the testdb database; if it does not exist, it is created automatically:

> use testdb
switched to db testdb
> show users                     // the current database has no users
> db.system.users.find()
> ^C
bye

To make the users take effect, edit the service startup script and add --auth; from then on, logging in requires authentication:

[root@ying01 ~]# vim /usr/lib/systemd/system/mongod.service


Change Environment="OPTIONS=-f /etc/mongod.conf" to Environment="OPTIONS=--auth -f /etc/mongod.conf"

Restart the mongod service:

[root@ying01 ~]# systemctl restart mongod
Warning: mongod.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[root@ying01 ~]# systemctl daemon-reload
[root@ying01 ~]# systemctl restart mongod
[root@ying01 ~]# ps aux |grep mongod
mongod    8611 12.6  2.8 1068324 52744 ?       Sl   11:42   0:01 /usr/bin/mongod --auth -f /etc/mongod.conf
root      8642  0.0  0.0 112720   980 pts/0    S+   11:42   0:00 grep --color=auto mongod

Now, connecting directly without a password no longer allows viewing the databases, because authentication is required:

[root@ying01 ~]# mongo --host 192.168.112.136 --port 27017
MongoDB shell version v4.0.1
connecting to: mongodb://192.168.112.136:27017/
MongoDB server version: 4.0.1
> use admin
switched to db admin
> show users
2018-08-27T11:43:36.654+0800 E QUERY    [js] Error: command usersInfo requires authentication : // authentication is required

Log in with the password for authentication:

[root@ying01 ~]# mongo --host 192.168.112.136 --port 27017 -u admin -p 'www123' --authenticationDatabase "admin"

> use admin
switched to db admin
> show users
{
	"_id" : "admin.admin",
	"user" : "admin",
	"db" : "admin",
	"customData" : {
		"description" : "superuser"
	},
	"roles" : [
		{
			"role" : "root",
			"db" : "admin"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}

Switch to db1 and create a new user:

> use db1
switched to db db1
> show users
> db.createUser( { user: "test1", pwd: "www123", roles: [ { role: "readWrite", db: "db1" }, {role: "read", db: "db2" } ] } )
Successfully added user: {
	"user" : "test1",
	"roles" : [
		{
			"role" : "readWrite",
			"db" : "db1"
		},
		{
			"role" : "read",
			"db" : "db2"
		}
	]
}

> show users                // view the user created under db1
{
	"_id" : "db1.test1",
	"user" : "test1",
	"db" : "db1",
	"roles" : [
		{
			"role" : "readWrite",
			"db" : "db1"
		},
		{
			"role" : "read",
			"db" : "db2"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}

> use db1
switched to db db1
> db.auth('test1','www123')            // authenticate user test1; a return value of 1 means success
1
>

MongoDB user roles

  • read: allows the user to read the specified database
  • readWrite: allows the user to read and write the specified database
  • dbAdmin: allows the user to perform administrative functions in the specified database, such as creating and dropping indexes, viewing statistics, or accessing system.profile
  • userAdmin: allows the user to write to the system.users collection, and to create, delete, and manage users in the specified database
  • clusterAdmin: available only in the admin database; grants administrative rights over all sharding and replica set functions
  • readAnyDatabase: available only in the admin database; grants read access to all databases
  • readWriteAnyDatabase: available only in the admin database; grants read and write access to all databases
  • userAdminAnyDatabase: available only in the admin database; grants userAdmin on all databases
  • dbAdminAnyDatabase: available only in the admin database; grants dbAdmin on all databases
  • root: available only in the admin database; the superuser account with full privileges

5. Creating Collections and Managing Data

Create a collection named mycol under db1:

[root@ying01 ~]# mongo --host 192.168.112.136 --port 27017 -u admin -p 'www123' --authenticationDatabase "admin"


> use db1
switched to db db1
> show users
{
	"_id" : "db1.test1",
	"user" : "test1",
	"db" : "db1",
	"roles" : [
		{
			"role" : "readWrite",
			"db" : "db1"
		},
		{
			"role" : "read",
			"db" : "db2"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}

> db.createCollection("mycol", { capped : true, autoIndexID : true, size : 6142800, max : 10000 } )
{
	"ok" : 0,
	"errmsg" : "too many users are authenticated",
	"code" : 13,
	"codeName" : "Unauthorized"
}

Syntax: db.createCollection(name, options)

  • name is the name of the collection.
  • options is optional and configures the collection with the following parameters:
    • capped: true/false (optional). If true, creates a capped collection: a fixed-size collection that automatically overwrites its oldest entries once it reaches its maximum size. If true, the size parameter must also be specified.
    • autoIndexId: true/false (optional). If true, automatically creates an index on the _id field. (This option was removed in MongoDB 4.0, which is what causes the error below; note also that the spelling used here, autoIndexID, never matched the historical option name autoIndexId.)
    • size (optional): the maximum size of the capped collection, in bytes. Required when capped is true.
    • max (optional): the maximum number of documents allowed in the capped collection.
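
The capped behavior described above, a fixed-size collection whose oldest entries are overwritten, can be illustrated with Python's collections.deque, which enforces a maximum length the same way. This is only a conceptual sketch, not how MongoDB implements capped collections:

```python
from collections import deque

# A capped "collection" holding at most 3 documents; like a capped
# collection, inserting past the limit silently drops the oldest entry.
capped = deque(maxlen=3)
for i in range(5):
    capped.append({"AccountID": i})

print(list(capped))  # only the 3 newest documents remain
```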

Troubleshooting

The following error message appeared:

"errmsg" : "too many users are authenticated" 「太多用戶經過身份驗證」,

Searching the web turned up nothing on this. The likely cause: since MongoDB 4.0, a session supports at most one authenticated user, so a shell session that has authenticated more than one user (for example, admin at login plus test1 via db.auth()) triggers this error. A fresh session avoids it; after several attempts, I reconnected without --host and --port:

[root@ying01 ~]# mongo -u admin -p 'www123' --authenticationDatabase "admin"


> use db1
switched to db db1
> show users
{
	"_id" : "db1.test1",
	"user" : "test1",
	"db" : "db1",
	"roles" : [
		{
			"role" : "readWrite",
			"db" : "db1"
		},
		{
			"role" : "read",
			"db" : "db2"
		}
	],
	"mechanisms" : [
		"SCRAM-SHA-1",
		"SCRAM-SHA-256"
	]
}
> db.createCollection("mycol", { capped : true, autoIndexID : true, size : 6142800, max : 10000 } )
{
	"ok" : 0,
	"errmsg" : "The field 'autoIndexID' is not a valid collection option. Options: { capped: true, autoIndexID: true, size: 6142800.0, max: 10000.0 }",
	"code" : 72,
	"codeName" : "InvalidOptions"
}

This produced another error message, but fortunately a different one:

"The field 'autoIndexID' is not a valid collection option. Options: { capped: true, autoIndexID: true, size:

大意:「字段'autoIndexID'不是有效的集合選項。選項:{capped:true,autoIndexID:true,size:

那麼此時取消: autoIndexID : true

> db.createCollection("mycol", { capped : true, size : 6142800, max : 10000 } )
{ "ok" : 1 }
> show tables
mycol

Now it works, and the show tables command lists mycol; the problem is solved.

View collections with show collections, or with show tables:

> show collections
mycol

Insert data directly into the Account collection; if the collection does not exist, MongoDB creates it automatically:

> db.Account.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
> show tables   // one more collection now
Account
mycol
> db.Account.insert({AccountID:2,UserName:"ying",password:"abcdef"})  // insert another document
WriteResult({ "nInserted" : 1 })
> show tables
Account
mycol

Update a document in the collection:

> db.Account.update({AccountID:1},{"$set":{"Age":20}})  // add a field to the first document in Account
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })

View all documents with db.Account.find(); the first document now contains the updated field:

> db.Account.find()
{ "_id" : ObjectId("5b83bf9209eb45c97dce1c4c"), "AccountID" : 1, "UserName" : "123", "password" : "123456", "Age" : 20 }
{ "_id" : ObjectId("5b83bfe509eb45c97dce1c4d"), "AccountID" : 2, "UserName" : "ying", "password" : "abcdef" }

View a specific document: db.Account.find({AccountID:2})

> db.Account.find({AccountID:2})  // view the document with AccountID 2 in the Account collection
{ "_id" : ObjectId("5b83bfe509eb45c97dce1c4d"), "AccountID" : 2, "UserName" : "ying", "password" : "abcdef" }

Remove a specific document: db.Account.remove({AccountID:1})

> db.Account.remove({AccountID:1})    // remove the document with AccountID 1
WriteResult({ "nRemoved" : 1 })
> db.Account.find()                   // view all documents; the one with AccountID 1 is gone
{ "_id" : ObjectId("5b83bfe509eb45c97dce1c4d"), "AccountID" : 2, "UserName" : "ying", "password" : "abcdef" }

Drop a collection: db.Account.drop()

> db.Account.drop()  // drop the Account collection
true
> show tables
mycol

> db.mycol.drop()   // drop the mycol collection
true
> show tables

Create a new collection, col2, by inserting a document into it:

> db.col2.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })

View the status of all collections: db.printCollectionStats()

> db.printCollectionStats()
col2
{
	"ns" : "db1.col2",
	"size" : 80,
	"count" : 1,
	"avgObjSize" : 80,
	"storageSize" : 16384,
	"capped" : false,
	"wiredTiger" : {
		"metadata" : {
			"formatVersion" : 1
		},
		
......

6. The PHP mongodb Extension

Download the source package and unpack it:

[root@ying01 ~]# cd /usr/local/src/
[root@ying01 src]# wget https://pecl.php.net/get/mongodb-1.3.0.tgz 

[root@ying01 src]# ls mongodb-1.3.0.tgz 
mongodb-1.3.0.tgz
[root@ying01 src]# tar zxf mongodb-1.3.0.tgz

Generate the configure script with phpize:

[root@ying01 src]# cd mongodb-1.3.0
[root@ying01 mongodb-1.3.0]# ls
config.m4   CREDITS  Makefile.frag    phongo_compat.h  php_phongo.c          php_phongo.h          README.md  src    Vagrantfile
config.w32  LICENSE  phongo_compat.c  php_bson.h       php_phongo_classes.h  php_phongo_structs.h  scripts    tests

[root@ying01 mongodb-1.3.0]# /usr/local/php-fpm/bin/phpize  // generate the configure script
Configuring for:
PHP Api Version:         20131106
Zend Module Api No:      20131226
Zend Extension Api No:   220131226

In the unpacked directory, configure the desired features, generate the makefile, then compile and install:

[root@ying01 mongodb-1.3.0]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config


[root@ying01 mongodb-1.3.0]# make


[root@ying01 mongodb-1.3.0]# make install

[root@ying01 mongodb-1.3.0]# ls /usr/local/php-fpm/lib/php/extensions/no-debug-non-zts-20131226/
memcache.so  mongodb.so  opcache.a  opcache.so  redis.so

Add extension = mongodb.so to the PHP config file, then check whether PHP loads the mongodb module:

[root@ying01 mongodb-1.3.0]# vim /usr/local/php-fpm/etc/php.ini 

 
extension=memcache.so
extension=redis.so         // added previously
extension = mongodb.so     // add this line

[root@ying01 mongodb-1.3.0]# /usr/local/php-fpm/bin/php -m|grep mongodb 
mongodb

[root@ying01 mongodb-1.3.0]# /etc/init.d/php-fpm restart  // restart the php-fpm service
Gracefully shutting down php-fpm . done
Starting php-fpm  done

7. The PHP mongo Extension

Download the mongo source package, unpack it, and generate the configure script with phpize:

[root@ying01 ~]# cd /usr/local/src/
[root@ying01 src]# wget https://pecl.php.net/get/mongo-1.6.16.tgz

[root@ying01 src]# ls mongo-1.6.16.tgz 
mongo-1.6.16.tgz
[root@ying01 src]# tar zxf mongo-1.6.16.tgz 
[root@ying01 src]# cd mongo-1.6.16/
[root@ying01 mongo-1.6.16]# /usr/local/php-fpm/bin/phpize 
Configuring for:
PHP Api Version:         20131106
Zend Module Api No:      20131226
Zend Extension Api No:   220131226

Configure the desired features, generate the makefile, compile according to its presets, then install:

[root@ying01 mongo-1.6.16]# ./configure --with-php-config=/usr/local/php-fpm/bin/php-config

[root@ying01 mongo-1.6.16]# make

[root@ying01 mongo-1.6.16]# make install

Add extension = mongo.so to the PHP config file, then check whether PHP loads the mongo module:

[root@ying01 mongo-1.6.16]# vim /usr/local/php-fpm/etc/php.ini 

extension=memcache.so
extension=redis.so
extension = mongodb.so
extension = mongo.so          // newly added

[root@ying01 mongo-1.6.16]# /usr/local/php-fpm/bin/php -m|grep mongo
mongo
mongodb

Write a PHP script to test whether the mongo extension works:

[root@ying01 mongo-1.6.16]# ls /data/wwwroot/default/
1.php  index.html
[root@ying01 mongo-1.6.16]# vim /data/wwwroot/default/1.php 

<?php
$m = new MongoClient(); // connect
$db = $m->test; // get the database named "test"
$collection = $db->createCollection("runoob");
echo "集合建立成功";
?>

輸出 "集合建立成功",說明 runoob集合建立成功

[root@ying01 mongo-1.6.16]# systemctl stop httpd ;systemctl start nginx
[root@ying01 mongo-1.6.16]# ps aux |grep nginx
root     64145  0.0  0.0  45832  1268 ?        Ss   19:26   0:00 nginx: master process /usr/

[root@ying01 mongo-1.6.16]# curl localhost/1.php
集合建立成功

Log in to mongodb and check the test database:

[root@ying01 mongo-1.6.16]# vim /usr/lib/systemd/system/mongod.service

Environment="OPTIONS=-f /etc/mongod.conf" //把--auth 去掉,不用認證登陸

[root@ying01 mongo-1.6.16]# systemctl daemon-reload
[root@ying01 mongo-1.6.16]# systemctl restart mongod
[root@ying01 mongo-1.6.16]# curl localhost/1.php
集合建立成功


[root@ying01 mongo-1.6.16]# mongo --host 192.168.112.136 --port 27017

> use test
switched to db test
> show tables
runoob
>

8. MongoDB Replica Set Architecture

8.1 Replica set overview

  • Early versions used master-slave, one master and one slave, similar to MySQL; but the slave in this architecture was read-only, and when the master went down the slave could not automatically become master.
  • master-slave is now obsolete and has been replaced by replica sets. This mode has one primary and multiple read-only secondaries. Members can be given weights (priorities); when the primary goes down, the secondary with the highest priority takes over as primary.
  • The architecture can also include an arbiter role, which only votes in elections and stores no data.
  • In this architecture all reads and writes go to the primary; to balance the load you must manually direct reads to a specific target server.

The diagram below shows a replica set architecture: a primary server handles client requests, and several secondary servers keep copies of the primary's data.

When the primary goes down, the secondary with the highest priority becomes the new primary.

As shown, an arbiter role only votes in elections and stores no data.
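
The failover rule described above, where the highest-priority healthy data-bearing member becomes primary, can be sketched as a simple selection. This is only a conceptual sketch; real elections also require a voting majority and consider oplog freshness:

```python
# Members of a replica set: arbiters store no data and never become
# primary, so they are excluded from the selection.
members = [
    {"name": "ying01", "priority": 3, "healthy": False, "arbiter": False},
    {"name": "ying02", "priority": 2, "healthy": True,  "arbiter": False},
    {"name": "ying03", "priority": 1, "healthy": True,  "arbiter": False},
]

def elect_primary(members):
    candidates = [m for m in members if m["healthy"] and not m["arbiter"]]
    if not candidates:
        return None
    return max(candidates, key=lambda m: m["priority"])["name"]

print(elect_primary(members))  # ying01 is down, so ying02 takes over
```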

8.2 Building a replica set

Machine allocation:

  • ying01 192.168.112.136 PRIMARY
  • ying02 192.168.112.138 SECONDARY
  • ying03 192.168.112.139 SECONDARY

Install mongodb on ying02 and ying03. On ying02:

[root@ying02 ~]# cd /etc/yum.repos.d/
[root@ying02 yum.repos.d]# vim mongodb-org-4.0.repo


[mongodb-org-4.0]
name = MongoDB Repository
baseurl = https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-4.0.asc
[root@ying02 yum.repos.d]# yum install -y mongodb-org

On ying03:

[root@ying03 ~]# cd /etc/yum.repos.d/
[root@ying03 yum.repos.d]# vim mongodb-org-4.0.repo

[mongodb-org-4.0]
name = MongoDB Repository
baseurl = https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck = 1
enabled = 1
gpgkey = https://www.mongodb.org/static/pgp/server-4.0.asc
[root@ying03 yum.repos.d]# yum install -y mongodb-org

Edit the config file on ying01:

[root@ying01 yum.repos.d]# vim /etc/mongod.conf

 bindIp: 127.0.0.1,192.168.112.136

replication:                // uncomment this line
  oplogSizeMB: 20           // add
  replSetName: yinglinux    // add

Restart the mongod service:

[root@ying01 yum.repos.d]# systemctl restart mongod.service 
[root@ying01 yum.repos.d]# ps aux|grep mongod
mongod   64534  9.7  3.1 1102168 58380 ?       Sl   20:42   0:03 /usr/bin/mongod -f /etc/mongod.conf
root     64569  0.0  0.0 112720   984 pts/0    S+   20:43   0:00 grep --color=auto mongod

Edit the mongod config file on ying02 with the same settings as on ying01:

[root@ying02 ~]# vim /etc/mongod.conf


  bindIp: 127.0.0.1,192.168.112.138

replication:
  oplogSizeMB: 20
  replSetName: yinglinux

Edit the mongod config file on ying03:

[root@ying03 ~]# vim /etc/mongod.conf


  bindIp: 127.0.0.1,192.168.112.139   // add the internal IP

replication:                         // add these lines
  oplogSizeMB: 20
  replSetName: yinglinux

Start the mongod service on both ying02 and ying03:

[root@ying02 ~]# ps aux|grep mongod
mongod   16421  6.5  2.7 1102140 52160 ?       Sl   20:50   0:00 /usr/bin/mongod -f /etc/mongod.conf
root     16452  0.0  0.0 112720   984 pts/0    R+   20:50   0:00 grep --color=auto mongod
[root@ying02 ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.112.138:27017   0.0.0.0:*               LISTEN      16421/mongod        
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      16421/mongod        
[root@ying02 ~]#
[root@ying03 ~]# systemctl start mongod
[root@ying03 ~]# ps aux|grep mongod
mongod    3773  5.6  2.7 1102148 52088 ?       Sl   20:50   0:00 /usr/bin/mongod -f /etc/mongod.conf
root      3804  0.0  0.0 112720   984 pts/0    S+   20:50   0:00 grep --color=auto mongod
[root@ying03 ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.112.139:27017   0.0.0.0:*               LISTEN      3773/mongod         
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      3773/mongod

Now configure the replica set architecture; log in to the mongo shell on ying01:

> config={_id:"yinglinux",members:[{_id:0,host:"192.168.112.136:27017"},{_id:1,host:"192.168.112.138:27017"},{_id:2,host:"192.168.112.139:27017"}]}              //配置副本集
{
	"_id" : "yinglinux",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.112.136:27017"
		},
		{
			"_id" : 1,
			"host" : "192.168.112.138:27017"
		},
		{
			"_id" : 2,
			"host" : "192.168.112.139:27017"
		}
	]
}
> rs.initiate(config)        // initialize the replica set
{
	"ok" : 1,
	"operationTime" : Timestamp(1535374960, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535374960, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
yinglinux:OTHER> rs.status()           // check the replica set status

	
For brevity, only the important parts of the output are shown:
	
		{
			"_id" : 0,
			"name" : "192.168.112.136:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",            //PRIMARY,主
			"uptime" : 1274,
			"optime" : {
				"ts" : Timestamp(1535375032, 1),
				"t" : NumberLong(1)
	
		{
			"_id" : 1,
			"name" : "192.168.112.138:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",          //SECONDARY,從
			"uptime" : 73,
			"optime" : {
				"ts" : Timestamp(1535375032, 1),
				"t" : NumberLong(1)
			
		
		{
			"_id" : 2,
			"name" : "192.168.112.139:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",          //SECONDARY,從
			"uptime" : 73,
			"optime" : {
				"ts" : Timestamp(1535375032, 1),
				"t" : NumberLong(1)
			},
			"optimeDurable" : {
				"ts" : Timestamp(1535375032, 1),
				"t" : NumberLong(1)
		
yinglinux:PRIMARY>

8.3 Testing the replica set

On ying01:

Create an acc collection in the mydb database:

yinglinux:PRIMARY> use admin
switched to db admin
yinglinux:PRIMARY> use mydb
switched to db mydb
yinglinux:PRIMARY> db.acc.insert({AccountID:1,UserName:"123",password:"123456"})
WriteResult({ "nInserted" : 1 })
yinglinux:PRIMARY> show dbs     // list all databases
admin   0.000GB
config  0.000GB
db1     0.000GB
local   0.000GB
mydb    0.000GB
test    0.000GB
yinglinux:PRIMARY> use mydb    // switch to the mydb database
switched to db mydb
yinglinux:PRIMARY> show tables  // list collections in mydb; acc now exists
acc

On ying02:

To list all databases on a secondary, first run rs.slaveOk():

yinglinux:SECONDARY> rs.slaveOk()
yinglinux:SECONDARY> show dbs
admin   0.000GB
config  0.000GB
db1     0.000GB
local   0.000GB
mydb    0.000GB
test    0.000GB
yinglinux:SECONDARY> use mydb     // switch to the mydb database
switched to db mydb
yinglinux:SECONDARY> show tables  // the acc collection is also visible on the secondary
acc

On ying03:

To list all databases on a secondary, first run rs.slaveOk():

yinglinux:SECONDARY> rs.slaveOk()
yinglinux:SECONDARY> show dbs
admin   0.000GB
config  0.000GB
db1     0.000GB
local   0.000GB
mydb    0.000GB
test    0.000GB
yinglinux:SECONDARY> use mydb     // switch to the mydb database
switched to db mydb
yinglinux:SECONDARY> show tables  // the acc collection is also visible on the secondary
acc

Now simulate a failure of ying01 and watch one of the other two machines become PRIMARY.

Add an iptables rule on ying01:

[root@ying01 ~]# iptables -I INPUT -p tcp --dport 27017 -j DROP

Check the status on ying02; ying01 is down, and ying02 has become PRIMARY:

yinglinux:PRIMARY> rs.status()       



			"_id" : 0,
			"name" : "192.168.112.136:27017",
			"health" : 0,
			"state" : 8,
			"stateStr" : "(not reachable/healthy)",  //ying01機器出現問題了
			
			
			
			"_id" : 1,
			"name" : "192.168.112.138:27017",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",        //ying02變爲PRIMARY

Because all three machines had a priority of 1, neither ying02 nor ying03 had precedence; each had a 50% chance of becoming PRIMARY.

Now set the priorities.

First delete the earlier iptables rule on ying01:

[root@ying01 ~]# iptables -D INPUT -p tcp --dport 27017 -j DROP

ying01's connection is restored, but it comes back as a SECONDARY; it does not become PRIMARY again on its own:

"_id" : 0,
			"name" : "192.168.112.136:27017",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",

ying02 is now PRIMARY, so run the following on it:

yinglinux:PRIMARY>  cfg = rs.conf()   // view the priorities


"_id" : 0,
			"host" : "192.168.112.136:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,            //權重爲1



	"_id" : 1,
			"host" : "192.168.112.138:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false, 
			"priority" : 1,            //權重也爲1



	"_id" : 2,
			"host" : "192.168.112.139:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false, 
			"priority" : 1,          //權重爲1

Continue on the PRIMARY and assign the priorities:

yinglinux:PRIMARY> cfg.members[0].priority = 3  // set ying01's priority to 3
3
yinglinux:PRIMARY> cfg.members[1].priority = 2  // set ying02's priority to 2
2
yinglinux:PRIMARY> cfg.members[2].priority = 1  // set ying03's priority to 1
1
yinglinux:PRIMARY> rs.reconfig(cfg)              // reload the configuration so it takes effect
{
	"ok" : 1,
	"operationTime" : Timestamp(1535381404, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535381404, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Now ying01 becomes PRIMARY, because it has the highest priority. (This reconfiguration can only be done on the PRIMARY.)

yinglinux:PRIMARY>  cfg = rs.conf()   // view the priorities



"_id" : 0,
			"host" : "192.168.112.136:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 3,


"_id" : 1,
			"host" : "192.168.112.138:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 2,
		
		
		
"_id" : 2,
			"host" : "192.168.112.139:27017",
			"arbiterOnly" : false,
			"buildIndexes" : true,
			"hidden" : false,
			"priority" : 1,

As the test shows, ying01 became PRIMARY again because it was given the highest priority.

9. MongoDB Sharding

9.1 Sharding overview

  • Sharding splits a database, distributing large collections across different servers. For example, 100 GB of data can be split into 10 parts stored on 10 servers, so that each machine holds only 10 GB.
  • A mongos process (the router) handles storage and access of the sharded data, making mongos the core of the sharding architecture. Clients do not know whether the data is sharded; they simply send their reads and writes to mongos.
  • Although sharding spreads data across many servers, every node still needs a standby role so that the data stays highly available.
  • When the system needs more space or resources, sharding makes it easy to scale on demand: just add machines running the mongodb service to the sharded cluster.

The three roles in MongoDB sharding

mongos: the entry point for database cluster requests. All requests are coordinated through mongos; there is no need to add a router in the application, because mongos itself is a request dispatch center that forwards each data request to the appropriate shard server. In production there are usually multiple mongos instances serving as entry points, so that mongodb requests can still be served if one of them dies.

config server: the configuration servers store all of the database metadata (routing and shard configuration). mongos does not physically store the shard and routing information itself; it only caches it in memory, while the config servers actually store it. The first time mongos starts, or after a restart, it loads the configuration from the config servers, and whenever the configuration changes the config servers notify all mongos instances to update their state, so routing stays accurate. In production there are usually multiple config servers, because they hold the shard routing metadata and data loss must be prevented.

shard: a MongoDB instance that stores part of a collection's data. Each shard is a standalone mongodb service or a replica set; in production, every shard should be a replica set.
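
To illustrate how a router can map a document to a shard, here is a sketch of hashed routing: hash the shard key value and pick a shard. This is conceptual only; mongos actually routes via chunk ranges of the (possibly hashed) shard key stored on the config servers:

```python
import hashlib

shards = ["shard1", "shard2", "shard3"]

def route(shard_key_value):
    # Hash the shard key deterministically and map it onto one shard.
    h = int(hashlib.md5(str(shard_key_value).encode()).hexdigest(), 16)
    return shards[h % len(shards)]

# The same key always routes to the same shard; different keys spread out.
print(route("user_1"), route("user_2"), route("user_1"))
```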

9.2 Building a MongoDB sharded cluster

Note: several puzzling errors appeared here; hours of searching and repeated attempts did not resolve them. The steps are correct in theory; for lack of time, the problems are recorded here to be solved later.

Sharding setup: server plan

Three machines: ying01, ying02, ying03

ying01 runs: mongos, config server, replica set 1 primary, replica set 2 arbiter, replica set 3 secondary

ying02 runs: mongos, config server, replica set 1 secondary, replica set 2 primary, replica set 3 arbiter

ying03 runs: mongos, config server, replica set 1 arbiter, replica set 2 secondary, replica set 3 primary

Port allocation: mongos 20000, config 21000, replica set 1 27001, replica set 2 27002, replica set 3 27003

Stop firewalld and disable SELinux on all three machines, or add firewall rules for the corresponding ports. Then create the data and log directories:

[root@ying01 ~]#  mkdir -p /data/mongodb/mongos/log
[root@ying01 ~]# mkdir -p /data/mongodb/config/{data,log}
[root@ying01 ~]#  mkdir -p /data/mongodb/shard1/{data,log}
[root@ying01 ~]# mkdir -p /data/mongodb/shard2/{data,log}
[root@ying01 ~]# mkdir -p /data/mongodb/shard3/{data,log}

[root@ying01 ~]# mkdir /etc/mongod/

config server configuration:

[root@ying01 ~]# vim /etc/mongod/config.conf


pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/configsrv.log
logappend = true
bind_ip = 192.168.112.136
port = 21000
fork = true
configsvr = true # declare this is a config db of a cluster
replSet=configs # replica set name
maxConns=20000 # maximum number of connections

Start the config server:

[root@ying01 ~]# mongod -f /etc/mongod/config.conf

[root@ying01 ~]# ps aux|grep mongod
mongod   64534  0.9  5.4 1509136 101344 ?      Sl   20:42   1:35 /usr/bin/mongod -f /etc/mongod.conf
root     65446  6.5  3.2 1147180 60648 ?       Sl   23:22   0:01 mongod -f /etc/mongod/config.conf
root     65482  0.0  0.0 112720   980 pts/0    S+   23:23   0:00 grep --color=auto mongod
[root@ying01 ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.112.136:21000   0.0.0.0:*               LISTEN      65446/mongod        
tcp        0      0 192.168.112.136:27017   0.0.0.0:*               LISTEN      64534/mongod        
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      64534/mongod

Log in to mongodb on port 21000 and initialize the replica set:

[root@ying01 ~]# mongo --host 192.168.112.136 --port 21000


config = { _id: "configs", members: [ {_id : 0, host : "192.168.112.136:21000"},{_id : 1, host : "192.168.112.138:21000"},{_id : 2, host : "192.168.112.139:21000"}] }
{
	"_id" : "configs",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.112.136:21000"
		},
		{
			"_id" : 1,
			"host" : "192.168.112.138:21000"
		},
		{
			"_id" : 2,
			"host" : "192.168.112.139:21000"
		}
	]
}

> rs.initiate(config)
{
	"ok" : 1,
	"operationTime" : Timestamp(1535383937, 1),
	"$gleStats" : {
		"lastOpTime" : Timestamp(1535383937, 1),
		"electionId" : ObjectId("000000000000000000000000")
	},
	"lastCommittedOpTime" : Timestamp(0, 0),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535383937, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

configs:SECONDARY> rs.status()

	"_id" : 0,
			"name" : "192.168.112.136:21000",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",           //成爲主

Shard configuration

shard1 configuration:

[root@ying01 ~]# vim /etc/mongod/shard1.conf

pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
bind_ip = 192.168.112.136
port = 27001
fork = true
#httpinterface=true # enable the web monitor
#rest=true
replSet=shard1 # replica set name
shardsvr = true # declare this is a shard db of a cluster
maxConns=20000 # maximum number of connections

shard2 configuration:

[root@ying01 ~]# vim /etc/mongod/shard2.conf

pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
bind_ip = 192.168.112.136
port = 27002
fork = true
#httpinterface=true # enable the web monitor
#rest=true
replSet=shard2 # replica set name
shardsvr = true # declare this is a shard db of a cluster
maxConns=20000 # maximum number of connections

shard3 configuration:

[root@ying01 ~]# vim /etc/mongod/shard3.conf

pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
bind_ip = 192.168.112.136
port = 27003
fork = true
#httpinterface=true # enable the web monitor
#rest=true
replSet=shard3 # replica set name
shardsvr = true # declare this is a shard db of a cluster
maxConns=20000 # maximum number of connections

Start shard1, shard2, and shard3 on all three machines:

[root@ying01 ~]# mongod -f /etc/mongod/shard1.conf
2018-08-28T00:08:34.971+0800 I CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
about to fork child process, waiting until server is ready for connections.
forked process: 676
child process started successfully, parent exiting

Log in to mongodb on port 27001:

[root@ying01 ~]# mongo --host 192.168.112.136 --port 27001


> use admin
switched to db admin
> 
>  config = { _id: "shard1", members: [ {_id : 0, host : "192.168.112.136:27001"}, {_id: 1,host : "192.168.112.138:27001"},{_id : 2, host : "192.168.112.139:27001",arbiterOnly:true}] }      //此處注意,不能以ying03登陸,這裏139倍設爲仲裁節點
{
	"_id" : "shard1",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.112.136:27001"
		},
		{
			"_id" : 1,
			"host" : "192.168.112.138:27001"
		},
		{
			"_id" : 2,
			"host" : "192.168.112.139:27001",
			"arbiterOnly" : true
		}
	]
}
> rs.initiate(config)
{
	"ok" : 1,                                           //有OK 1 代表配置正確
	"operationTime" : Timestamp(1535386695, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535386695, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
shard1:SECONDARY> rs.status()

			"_id" : 0,
			"name" : "192.168.112.136:27001",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",                   //此時ying01爲PRIMARY
			"uptime" : 438,
			"optime" : {
				"ts" : Timestamp(1535386758, 1),
				"t" : NumberLong(1)

Create the shard2 replica set. (Note: in the transcript below, rs.initiate() is called without the config object, so only the local member joins the set; it should be rs.initiate(config).)

On ying02, log in to mongodb on port 27002:

[root@ying02 ~]# mongo --host 192.168.112.138 --port 27002


> use admin
switched to db admin
> config = { _id: "shard2", members: [ {_id : 0, host : "192.168.112.136:27002" ,arbiterOnly:true},{_id : 1, host : "192.168.112.138:27002"},{_id : 2, host : "192.168.112.139:27002"}] }  //不能用136登陸,這裏136被設爲仲裁節點
{
	"_id" : "shard2",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.112.136:27002",
			"arbiterOnly" : true
		},
		{
			"_id" : 1,
			"host" : "192.168.112.138:27002"
		},
		{
			"_id" : 2,
			"host" : "192.168.112.139:27002"
		}
	]
}
> rs.initiate()
{
	"info2" : "no configuration specified. Using a default configuration for the set",
	"me" : "192.168.112.138:27002",
	"ok" : 1,
	"operationTime" : Timestamp(1535387519, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535387519, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
shard2:SECONDARY> rs.status()
{
	"set" : "shard2",
	"date" : ISODate("2018-08-27T16:32:32.274Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1535387551, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1535387551, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1535387551, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1535387551, 1),
			"t" : NumberLong(1)
		}
	},
	"lastStableCheckpointTimestamp" : Timestamp(1535387521, 1),
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.112.138:27002",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 318,
			"optime" : {
				"ts" : Timestamp(1535387551, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2018-08-27T16:32:31Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1535387519, 2),
			"electionDate" : ISODate("2018-08-27T16:31:59Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1535387551, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535387551, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
shard2:PRIMARY>
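Before initiating a set, it helps to sanity-check the config document that will be handed to rs.initiate(config). The helper below is a hypothetical sketch in Python (validate_rs_config is not a MongoDB API), run against the shard2 config from the transcript:

```python
# Hypothetical pre-flight check for a replica-set config document
# (illustrative Python, not a MongoDB API).

def validate_rs_config(config):
    """Return a list of problems found in a replica-set config dict."""
    problems = []
    members = config.get("members", [])
    if not members:
        problems.append("config has no members")
    ids = [m["_id"] for m in members]
    if len(ids) != len(set(ids)):
        problems.append("member _id values are not unique")
    hosts = [m["host"] for m in members]
    if len(hosts) != len(set(hosts)):
        problems.append("member hosts are not unique")
    voting = len(members)  # every member here votes, including the arbiter
    if voting % 2 == 0:
        problems.append("even number of voting members; elections may tie")
    return problems

# The shard2 config from the transcript: 136 is the arbiter.
shard2 = {
    "_id": "shard2",
    "members": [
        {"_id": 0, "host": "192.168.112.136:27002", "arbiterOnly": True},
        {"_id": 1, "host": "192.168.112.138:27002"},
        {"_id": 2, "host": "192.168.112.139:27002"},
    ],
}

print(validate_rs_config(shard2))  # → []
```

An empty result means the document is at least structurally sound; the real safeguard is simply remembering to pass it: rs.initiate(config), not rs.initiate().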
Create the shard3 replica set

On ying03, log in to mongodb on port 27003:

[root@ying03 ~]# mongo --host 192.168.112.139 --port 27003


> use admin
switched to db admin
> config = { _id: "shard3", members: [ {_id : 0, host : "192.168.112.136:27003"},  {_id : 1, host : "192.168.112.138:27003", arbiterOnly:true}, {_id : 2, host : "192.168.112.139:27003"}] }
{
	"_id" : "shard3",
	"members" : [
		{
			"_id" : 0,
			"host" : "192.168.112.136:27003"
		},
		{
			"_id" : 1,
			"host" : "192.168.112.138:27003",
			"arbiterOnly" : true
		},
		{
			"_id" : 2,
			"host" : "192.168.112.139:27003"
		}
	]
}
> rs.initiate()
{
	"info2" : "no configuration specified. Using a default configuration for the set",
	"me" : "192.168.112.139:27003",
	"ok" : 1,
	"operationTime" : Timestamp(1535387799, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535387799, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Note: here too rs.initiate() was run without the config variable, so shard3 was created with a default single-member configuration containing only 192.168.112.139:27003; the intended command is rs.initiate(config).
shard3:SECONDARY> rs.status()
{
	"set" : "shard3",
	"date" : ISODate("2018-08-27T16:36:54.608Z"),
	"myState" : 1,
	"term" : NumberLong(1),
	"syncingTo" : "",
	"syncSourceHost" : "",
	"syncSourceId" : -1,
	"heartbeatIntervalMillis" : NumberLong(2000),
	"optimes" : {
		"lastCommittedOpTime" : {
			"ts" : Timestamp(1535387812, 1),
			"t" : NumberLong(1)
		},
		"readConcernMajorityOpTime" : {
			"ts" : Timestamp(1535387812, 1),
			"t" : NumberLong(1)
		},
		"appliedOpTime" : {
			"ts" : Timestamp(1535387812, 1),
			"t" : NumberLong(1)
		},
		"durableOpTime" : {
			"ts" : Timestamp(1535387812, 1),
			"t" : NumberLong(1)
		}
	},
	"lastStableCheckpointTimestamp" : Timestamp(1535387802, 2),
	"members" : [
		{
			"_id" : 0,
			"name" : "192.168.112.139:27003",
			"health" : 1,
			"state" : 1,
			"stateStr" : "PRIMARY",
			"uptime" : 165,
			"optime" : {
				"ts" : Timestamp(1535387812, 1),
				"t" : NumberLong(1)
			},
			"optimeDate" : ISODate("2018-08-27T16:36:52Z"),
			"syncingTo" : "",
			"syncSourceHost" : "",
			"syncSourceId" : -1,
			"infoMessage" : "could not find member to sync from",
			"electionTime" : Timestamp(1535387800, 1),
			"electionDate" : ISODate("2018-08-27T16:36:40Z"),
			"configVersion" : 1,
			"self" : true,
			"lastHeartbeatMessage" : ""
		}
	],
	"ok" : 1,
	"operationTime" : Timestamp(1535387812, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535387812, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
shard3:PRIMARY>
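Each shard in this layout runs two data nodes plus an arbiter. The arbiter stores no data but votes, keeping an odd number of voters so a primary can still be elected with one node down. A minimal sketch of the majority rule (plain Python for illustration; can_elect_primary is a hypothetical name):

```python
def majority(voting_members):
    """Votes needed to elect a primary."""
    return voting_members // 2 + 1

def can_elect_primary(voting_members, members_up):
    """A primary can be elected while a majority of voters is reachable."""
    return members_up >= majority(voting_members)

# 3 voters (2 data nodes + 1 arbiter): the set survives the loss of any one node.
print(can_elect_primary(3, 2))  # True
# With only 2 voters, losing either one would block elections.
print(can_elect_primary(2, 1))  # False
```

This is why the arbiter is added at all: it raises the voter count from two to three without the storage cost of a third data-bearing node.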

Add the shards to the cluster

On ying01, log in to the mongos on port 20000, then register each shard with sh.addShard():

[root@ying01 ~]# mongo --host 192.168.112.136 --port 20000
MongoDB shell version v4.0.1
connecting to: mongodb://192.168.112.136:20000/
MongoDB server version: 4.0.1
Server has startup warnings: 
2018-08-28T00:58:01.517+0800 I CONTROL  [main] 
2018-08-28T00:58:01.517+0800 I CONTROL  [main] ** WARNING: Access control is not enabled for the database.
2018-08-28T00:58:01.517+0800 I CONTROL  [main] **          Read and write access to data and configuration is unrestricted.
2018-08-28T00:58:01.517+0800 I CONTROL  [main] ** WARNING: You are running this process as the root user, which is not recommended.
2018-08-28T00:58:01.517+0800 I CONTROL  [main] 
mongos> sh.addShard("shard1/192.168.112.136:27001,192.168.112.138:27001,192.168.112.139:27001")
{
	"shardAdded" : "shard1",
	"ok" : 1,
	"operationTime" : Timestamp(1535389509, 4),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535389509, 4),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> sh.addShard("shard2/192.168.112.136:27002,192.168.112.138:27002,192.168.112.139:27002")
{
	"ok" : 0,
	"errmsg" : "in seed list shard2/192.168.112.136:27002,192.168.112.138:27002,192.168.112.139:27002, host 192.168.112.136:27002 does not belong to replica set shard2; found { hosts: [ \"192.168.112.138:27002\" ], setName: \"shard2\", setVersion: 1, ismaster: true, secondary: false, primary: \"192.168.112.138:27002\", me: \"192.168.112.138:27002\", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1535389581, 1), t: 1 }, lastWriteDate: new Date(1535389581000), majorityOpTime: { ts: Timestamp(1535389581, 1), t: 1 }, majorityWriteDate: new Date(1535389581000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1535389587403), logicalSessionTimeoutMinutes: 30, minWireVersion: 0, maxWireVersion: 7, readOnly: false, compression: [ \"snappy\" ], ok: 1.0, operationTime: Timestamp(1535389581, 1), $clusterTime: { clusterTime: Timestamp(1535389586, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }",
	"code" : 96,
	"codeName" : "OperationFailed",
	"operationTime" : Timestamp(1535389586, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535389586, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> sh.addShard("shard2/192.168.112.136:27002,192.168.112.138:27002,192.168.112.139:27002")
{
	"ok" : 0,
	"errmsg" : "in seed list shard2/192.168.112.136:27002,192.168.112.138:27002,192.168.112.139:27002, host 192.168.112.136:27002 does not belong to replica set shard2; found { hosts: [ \"192.168.112.138:27002\" ], setName: \"shard2\", setVersion: 1, ismaster: true, secondary: false, primary: \"192.168.112.138:27002\", me: \"192.168.112.138:27002\", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1535389731, 1), t: 1 }, lastWriteDate: new Date(1535389731000), majorityOpTime: { ts: Timestamp(1535389731, 1), t: 1 }, majorityWriteDate: new Date(1535389731000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1535389736979), logicalSessionTimeoutMinutes: 30, minWireVersion: 0, maxWireVersion: 7, readOnly: false, compression: [ \"snappy\" ], ok: 1.0, operationTime: Timestamp(1535389731, 1), $clusterTime: { clusterTime: Timestamp(1535389735, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }",
	"code" : 96,
	"codeName" : "OperationFailed",
	"operationTime" : Timestamp(1535389735, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535389735, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> sh.addShard("shard3/192.168.112.136:27003,192.168.112.138:27003,192.168.112.139:27003")
{
	"ok" : 0,
	"errmsg" : "in seed list shard3/192.168.112.136:27003,192.168.112.138:27003,192.168.112.139:27003, host 192.168.112.136:27003 does not belong to replica set shard3; found { hosts: [ \"192.168.112.139:27003\" ], setName: \"shard3\", setVersion: 1, ismaster: true, secondary: false, primary: \"192.168.112.139:27003\", me: \"192.168.112.139:27003\", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1535390003, 1), t: 1 }, lastWriteDate: new Date(1535390003000), majorityOpTime: { ts: Timestamp(1535390003, 1), t: 1 }, majorityWriteDate: new Date(1535390003000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1535390011340), logicalSessionTimeoutMinutes: 30, minWireVersion: 0, maxWireVersion: 7, readOnly: false, compression: [ \"snappy\" ], ok: 1.0, operationTime: Timestamp(1535390003, 1), $clusterTime: { clusterTime: Timestamp(1535390007, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }",
	"code" : 96,
	"codeName" : "OperationFailed",
	"operationTime" : Timestamp(1535390007, 1),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535390007, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos> sh.addShard("shard3/192.168.112.136:27003,192.168.112.138:27003,192.168.112.139:27003")
{
	"ok" : 0,
	"errmsg" : "in seed list shard3/192.168.112.136:27003,192.168.112.138:27003,192.168.112.139:27003, host 192.168.112.136:27003 does not belong to replica set shard3; found { hosts: [ \"192.168.112.139:27003\" ], setName: \"shard3\", setVersion: 1, ismaster: true, secondary: false, primary: \"192.168.112.139:27003\", me: \"192.168.112.139:27003\", electionId: ObjectId('7fffffff0000000000000001'), lastWrite: { opTime: { ts: Timestamp(1535390463, 1), t: 1 }, lastWriteDate: new Date(1535390463000), majorityOpTime: { ts: Timestamp(1535390463, 1), t: 1 }, majorityWriteDate: new Date(1535390463000) }, maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, maxWriteBatchSize: 100000, localTime: new Date(1535390472093), logicalSessionTimeoutMinutes: 30, minWireVersion: 0, maxWireVersion: 7, readOnly: false, compression: [ \"snappy\" ], ok: 1.0, operationTime: Timestamp(1535390463, 1), $clusterTime: { clusterTime: Timestamp(1535390469, 2), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } }",
	"code" : 96,
	"codeName" : "OperationFailed",
	"operationTime" : Timestamp(1535390469, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1535390469, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}
mongos>
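sh.addShard() takes a seed list of the form "setName/host1,host2,...": mongos contacts the seed hosts, asks the replica set for its actual membership, and rejects the shard when they disagree. That is the code 96 error seen above for shard2 and shard3, whose sets ended up containing a single member each. A rough sketch of that check (illustrative Python, not the mongos implementation):

```python
def parse_seed_list(seed):
    """Split a 'setName/host1,host2,...' seed string into its parts."""
    set_name, _, host_part = seed.partition("/")
    return set_name, host_part.split(",")

def seed_matches_membership(seed, actual_hosts):
    """True when every seed host belongs to the set's real membership."""
    _, hosts = parse_seed_list(seed)
    return all(h in actual_hosts for h in hosts)

seed = "shard2/192.168.112.136:27002,192.168.112.138:27002,192.168.112.139:27002"
# Membership reported in the errmsg: the set only contains 138.
print(seed_matches_membership(seed, ["192.168.112.138:27002"]))  # False
```

Once shard2 and shard3 are re-initiated with their full three-member configs, the seed hosts match the real membership and the same sh.addShard() calls succeed.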