Deploying a MongoDB sharded cluster with Docker Swarm

Overview

  • This article walks through building a MongoDB sharded cluster on Docker Swarm.
  • The cluster ultimately runs in authorization (auth) mode, but if you start it with the auth-enabled script right away, you will not be able to create any users. Create the users in no-auth mode first, then restart in auth mode. (The two modes use different startup scripts but mount the same data directories.)

Architecture diagram


  • Three nodes in total: breakpad (the manager node), bpcluster, and bogon

Prerequisites

  • Install Docker
  • Initialize the swarm cluster
    • docker swarm init
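
The stack below pins each service to one of three hostnames, so bpcluster and bogon must have joined the swarm as workers before deploying. A minimal sketch (the token and manager IP printed by the first command are placeholders):

```shell
# On the manager (breakpad): print the join command for worker nodes.
docker swarm join-token worker
# On each worker (bpcluster, bogon): paste the printed command, e.g.
# docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377
```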

Deployment steps

Once the first three steps are done the cluster is usable; if you do not need authenticated login, you can skip the remaining four steps.

  1. Create the directories
  2. Deploy the services (no-auth mode)
  3. Configure the shards
  4. Generate the keyfile and fix its permissions
  5. Copy the keyfile to the other nodes
  6. Add the user
  7. Restart the services (auth mode)

1. Create the directories

Run before-deploy.sh on every server:

#!/bin/bash

DIR=/data/fates
DATA_PATH="${DIR}/mongo"
# sudo password, piped into `sudo -S` below.
# Do NOT name this variable PWD: bash resets PWD on every `cd`, which
# would silently replace the password with the current directory.
SUDO_PASS='1qaz2wsx!@#'

DATA_DIR_LIST=('config' 'shard1' 'shard2' 'shard3' 'script')

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "create directory: ${DATA_PATH}"
    echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}"
  else
    echo "directory ${DATA_PATH} already exists."
  fi

  for SUB_DIR in "${DATA_DIR_LIST[@]}"
  do
    if [ ! -d "${DATA_PATH}/${SUB_DIR}" ]; then
      echo "create directory: ${DATA_PATH}/${SUB_DIR}"
      echo "${SUDO_PASS}" | sudo -S mkdir -p "${DATA_PATH}/${SUB_DIR}"
    else
      echo "directory: ${DATA_PATH}/${SUB_DIR} already exists."
    fi
  done

  echo "${SUDO_PASS}" | sudo -S chown -R $USER:$USER "${DATA_PATH}"
}

check_directory

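To preview the layout the script produces without touching /data or using sudo, you can dry-run the same loop in a throwaway directory (a sketch; the sandbox path is a temporary stand-in for /data/fates):

```shell
# Recreate the script's directory layout under a temp dir (no sudo needed).
SANDBOX=$(mktemp -d)
DATA_PATH="${SANDBOX}/fates/mongo"
for SUB_DIR in config shard1 shard2 shard3 script; do
  mkdir -p "${DATA_PATH}/${SUB_DIR}"
done
ls "${DATA_PATH}"   # config  shard1  shard2  shard3  script
```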

2. Start the mongo cluster in no-auth mode

  • At this point there is no authorization, so anyone can connect; use this window to create the users.

On the manager node create fates-mongo.yaml and deploy it with the command below (adjust the constraints to match your own hostnames):

docker stack deploy -c fates-mongo.yaml fates-mongo
version: '3.4'
services:
  shard1-server1:
    image: mongo:4.0.5
    # --shardsvr merely changes the default port from 27017 to 27018;
    # it is not needed if you pass --port explicitly
    # --directoryperdb stores each database in its own directory
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard2-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard3-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard1-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard2-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard3-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard1-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard2-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard3-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  config1:
    image: mongo:4.0.5
    # --configsvr merely changes the default port from 27017 to 27019;
    # it is not needed if you pass --port explicitly
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  config2:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  config3:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  mongos:
    image: mongo:4.0.5
    # MongoDB 3.6+ binds to 127.0.0.1 by default; binding 0.0.0.0 allows
    # other containers and hosts to connect
    command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017
    networks:
      - mongo
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config1
      - config2
      - config3
    deploy:
      restart_policy:
        condition: on-failure
      mode: global

networks:
  mongo:
    driver: overlay
    # uncomment the next line if the network was already created externally
    # external: true


3. Configure the shards

# Initialize the config server replica set
docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"fates-mongo-config\",configsvr: true, members: [{ _id : 0, host : \"config1:27019\" },{ _id : 1, host : \"config2:27019\" }, { _id : 2, host : \"config3:27019\" }]})' | mongo --port 27019"
# Initialize the shard replica sets
docker exec -it $(docker ps | grep "shard1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard1\", members: [{ _id : 0, host : \"shard1-server1:27018\" },{ _id : 1, host : \"shard1-server2:27018\" },{ _id : 2, host : \"shard1-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard2" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard2\", members: [{ _id : 0, host : \"shard2-server1:27018\" },{ _id : 1, host : \"shard2-server2:27018\" },{ _id : 2, host : \"shard2-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
docker exec -it $(docker ps | grep "shard3" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard3\", members: [{ _id : 0, host : \"shard3-server1:27018\" },{ _id : 1, host : \"shard3-server2:27018\" },{ _id : 2, host : \"shard3-server3:27018\", arbiterOnly: true }]})' | mongo --port 27018"
# Register the shards with mongos
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard1/shard1-server1:27018,shard1-server2:27018,shard1-server3:27018\")' | mongo "
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard2/shard2-server1:27018,shard2-server2:27018,shard2-server3:27018\")' | mongo "
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard3/shard3-server1:27018,shard3-server2:27018,shard3-server3:27018\")' | mongo "
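One pitfall noted at the end of this article is that sh.addShard() fails if mongos has not finished starting. A small poll-the-port helper avoids that (pure-bash sketch; host and port are assumptions matching this stack):

```shell
# Wait until a TCP port accepts connections, or give up after N tries.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    # bash's /dev/tcp pseudo-device opens a TCP connection
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0   # port is open
    fi
    sleep 1
  done
  return 1       # timed out
}

# Example: block until mongos answers on 27017 before running sh.addShard.
# wait_for_port 127.0.0.1 27017 && echo "mongos is up"
```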

4. Generate the keyfile

After the first three steps the sharded cluster is up and working; if you do not need authorization, you can stop here.

Run generate-keyfile.sh on the manager node:

#!/bin/bash

DATA_PATH=/data/fates/mongo
# sudo password; avoid the name PWD, which bash overwrites on every `cd`.
SUDO_PASS='1qaz2wsx!@#'

function check_directory() {
  if [ ! -d "${DATA_PATH}" ]; then
    echo "directory: ${DATA_PATH} does not exist, please run before-deploy.sh first."
    exit 1
  fi
}

function generate_keyfile() {
  cd "${DATA_PATH}/script"
  if [ ! -f "${DATA_PATH}/script/mongo-keyfile" ]; then
    echo 'create mongo-keyfile.'
    openssl rand -base64 756 -out mongo-keyfile
    # mongod refuses keyfiles readable by group or others
    echo "${SUDO_PASS}" | sudo -S chmod 600 mongo-keyfile
    # 999 is the mongodb user's UID inside the official mongo image
    echo "${SUDO_PASS}" | sudo -S chown 999 mongo-keyfile
  else
    echo 'mongo-keyfile already exists.'
  fi
}

check_directory
generate_keyfile


5. Copy the keyfile into the script directory on the other servers

Run the copy from the server where the keyfile was generated (note the -p flag, which preserves the permissions set above; username, server2 and server3 are placeholders):

sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server2:/data/fates/mongo/script
sudo scp -p /data/fates/mongo/script/mongo-keyfile username@server3:/data/fates/mongo/script

6. Add the user

Run add-user.sh on the manager node.

The script creates a user named root with password root and the root role; customize these as needed.

docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') bash -c "echo -e 'use admin\n db.createUser({user:\"root\",pwd:\"root\",roles:[{role:\"root\",db:\"admin\"}]})' | mongo"

7. Create the auth-mode yaml file and redeploy

  • From this step on authorization is enforced: operations require the user created in the previous step.

On the manager node create fates-mongo-key.yaml and restart the stack in auth mode (a different yaml file, but the same mount paths as before):

docker stack deploy -c fates-mongo-key.yaml fates-mongo
version: '3.4'
services:
  shard1-server1:
    image: mongo:4.0.5
    # --shardsvr merely changes the default port from 27017 to 27018;
    # it is not needed if you pass --port explicitly
    # --directoryperdb stores each database in its own directory
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard2-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard3-server1:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  shard1-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard2-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard3-server2:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  shard1-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard1 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard1:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard2-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard2 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard2:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  shard3-server3:
    image: mongo:4.0.5
    command: mongod --shardsvr --directoryperdb --replSet shard3 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/shard3:/data/db
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  config1:
    image: mongo:4.0.5
    # --configsvr merely changes the default port from 27017 to 27019;
    # it is not needed if you pass --port explicitly
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bpcluster
  config2:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==bogon
  config3:
    image: mongo:4.0.5
    command: mongod --configsvr --directoryperdb --replSet fates-mongo-config --smallfiles --keyFile /data/mongo-keyfile
    networks:
      - mongo
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/config:/data/configdb
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==breakpad
  mongos:
    image: mongo:4.0.5
    # MongoDB 3.6+ binds to 127.0.0.1 by default; binding 0.0.0.0 allows
    # other containers and hosts to connect
    command: mongos --configdb fates-mongo-config/config1:27019,config2:27019,config3:27019 --bind_ip 0.0.0.0 --port 27017 --keyFile /data/mongo-keyfile
    networks:
      - mongo
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/fates/mongo/script/mongo-keyfile:/data/mongo-keyfile
    depends_on:
      - config1
      - config2
      - config3
    deploy:
      restart_policy:
        condition: on-failure
      mode: global

networks:
  mongo:
    driver: overlay
    # uncomment the next line if the network was already created externally
    # external: true

Problems encountered

Startup failure

docker service logs <service-name> showed that a config file could not be found: it had not been mounted into the container.

config3 failed to start

A mount path in the yaml file was mistyped.

Containers start, but connections are refused

Only the container-startup script had been run; the follow-up shard configuration (step 3) was never applied.

Keyfile permission error: error opening file: /data/mongo-keyfile: Permission denied

  • The mongo-keyfile must be owned by UID 999 (the mongodb user in the official image) with mode 600.

addShard fails

  • It can only be run after mongos has finished starting.
  • Make sure the constraints in the yaml match your actual hostnames.

After sharding is set up, all data lands on a single shard

With the default chunk size (64 MB in MongoDB 4.0), a small dataset fits in a single chunk and never splits across shards. Lower the chunk size to watch the balancer distribute the data.
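
To try this out with a small dataset, the chunk size can be shrunk through mongos (a sketch reusing this article's docker exec pattern; 1 MB is a test value only, not a production setting):

```shell
# Lower the sharding chunk size to 1 MB via the config database.
docker exec -it $(docker ps | grep "mongos" | awk '{ print $1 }') \
  bash -c "echo -e 'use config\n db.settings.save({_id:\"chunksize\", value: 1})' | mongo"
```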
