Setting Up a Consul Cluster

Consul is an open-source distributed service discovery and configuration management system, developed by HashiCorp in Go.
It has many strengths: it is built on the Raft protocol, simple to operate, supports health checks, serves both HTTP and DNS interfaces, supports WAN clusters across data centers, ships with a web UI, and runs on Linux, macOS, and Windows.

Quick Start

Environment Preparation

First, prepare three virtual machines; together they will form the Consul cluster.

Host  IP
s1    172.20.20.20
s2    172.20.20.21
s3    172.20.20.22

With Vagrant, the three VMs can be provisioned quickly.

On the command line, enter:

» mkdir ms
» cd ms
» vagrant init centos/7

Edit the Vagrantfile to define the three VMs:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|

  config.vm.box = "centos/7"

  config.vm.define "s1" do |s1|
      s1.vm.hostname = "s1"
      s1.vm.network "private_network", ip: "172.20.20.20"
  end

  config.vm.define "s2" do |s2|
      s2.vm.hostname = "s2"
      s2.vm.network "private_network", ip: "172.20.20.21"
  end

  config.vm.define "s3" do |s3|
      s3.vm.hostname = "s3"
      s3.vm.network "private_network", ip: "172.20.20.22"
  end    

end

Boot the VMs:

» vagrant up
Bringing machine 's1' up with 'virtualbox' provider...
Bringing machine 's2' up with 'virtualbox' provider...
Bringing machine 's3' up with 'virtualbox' provider...
==> s1: Importing base box 'centos/7'...
==> s1: Matching MAC address for NAT networking...
==> s1: Setting the name of the VM: ms_s1_1528794737477_2031
==> s1: Clearing any previously set network interfaces...
==> s1: Preparing network interfaces based on configuration...
    s1: Adapter 1: nat
    s1: Adapter 2: hostonly
==> s1: Forwarding ports...
    s1: 22 (guest) => 2222 (host) (adapter 1)
==> s1: Booting VM...
==> s1: Waiting for machine to boot. This may take a few minutes...
    s1: SSH address: 127.0.0.1:2222
    s1: SSH username: vagrant
    s1: SSH auth method: private key
    s1:
    s1: Vagrant insecure key detected. Vagrant will automatically replace
    s1: this with a newly generated keypair for better security.
    s1:
    s1: Inserting generated public key within guest...
    s1: Removing insecure key from the guest if it's present...
    s1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> s1: Machine booted and ready!
==> s1: Checking for guest additions in VM...
    s1: No guest additions were detected on the base box for this VM! Guest
    s1: additions are required for forwarded ports, shared folders, host only
    s1: networking, and more. If SSH fails on this machine, please install
    s1: the guest additions and repackage the box to continue.
    s1:
    s1: This is not an error message; everything may continue to work properly,
    s1: in which case you may ignore this message.
==> s1: Setting hostname...
==> s1: Configuring and enabling network interfaces...
    s1: SSH address: 127.0.0.1:2222
    s1: SSH username: vagrant
    s1: SSH auth method: private key
==> s1: Rsyncing folder: /work/training/vagrant/ms/ => /vagrant
==> s2: Importing base box 'centos/7'...
==> s2: Matching MAC address for NAT networking...
==> s2: Setting the name of the VM: ms_s2_1528794795606_2999
==> s2: Fixed port collision for 22 => 2222. Now on port 2200.
==> s2: Clearing any previously set network interfaces...
==> s2: Preparing network interfaces based on configuration...
    s2: Adapter 1: nat
    s2: Adapter 2: hostonly
==> s2: Forwarding ports...
    s2: 22 (guest) => 2200 (host) (adapter 1)
==> s2: Booting VM...
==> s2: Waiting for machine to boot. This may take a few minutes...
    s2: SSH address: 127.0.0.1:2200
    s2: SSH username: vagrant
    s2: SSH auth method: private key
    s2:
    s2: Vagrant insecure key detected. Vagrant will automatically replace
    s2: this with a newly generated keypair for better security.
    s2:
    s2: Inserting generated public key within guest...
    s2: Removing insecure key from the guest if it's present...
    s2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> s2: Machine booted and ready!
==> s2: Checking for guest additions in VM...
    s2: No guest additions were detected on the base box for this VM! Guest
    s2: additions are required for forwarded ports, shared folders, host only
    s2: networking, and more. If SSH fails on this machine, please install
    s2: the guest additions and repackage the box to continue.
    s2:
    s2: This is not an error message; everything may continue to work properly,
    s2: in which case you may ignore this message.
==> s2: Setting hostname...
==> s2: Configuring and enabling network interfaces...
    s2: SSH address: 127.0.0.1:2200
    s2: SSH username: vagrant
    s2: SSH auth method: private key
==> s2: Rsyncing folder: /work/training/vagrant/ms/ => /vagrant
==> s3: Importing base box 'centos/7'...
==> s3: Matching MAC address for NAT networking...
==> s3: Setting the name of the VM: ms_s3_1528794863986_43122
==> s3: Fixed port collision for 22 => 2222. Now on port 2201.
==> s3: Clearing any previously set network interfaces...
==> s3: Preparing network interfaces based on configuration...
    s3: Adapter 1: nat
    s3: Adapter 2: hostonly
==> s3: Forwarding ports...
    s3: 22 (guest) => 2201 (host) (adapter 1)
==> s3: Booting VM...
==> s3: Waiting for machine to boot. This may take a few minutes...
    s3: SSH address: 127.0.0.1:2201
    s3: SSH username: vagrant
    s3: SSH auth method: private key
    s3:
    s3: Vagrant insecure key detected. Vagrant will automatically replace
    s3: this with a newly generated keypair for better security.
    s3:
    s3: Inserting generated public key within guest...
    s3: Removing insecure key from the guest if it's present...
    s3: Key inserted! Disconnecting and reconnecting using new SSH key...
==> s3: Machine booted and ready!
==> s3: Checking for guest additions in VM...
    s3: No guest additions were detected on the base box for this VM! Guest
    s3: additions are required for forwarded ports, shared folders, host only
    s3: networking, and more. If SSH fails on this machine, please install
    s3: the guest additions and repackage the box to continue.
    s3:
    s3: This is not an error message; everything may continue to work properly,
    s3: in which case you may ignore this message.
==> s3: Setting hostname...
==> s3: Configuring and enabling network interfaces...
    s3: SSH address: 127.0.0.1:2201
    s3: SSH username: vagrant
    s3: SSH auth method: private key
==> s3: Rsyncing folder: /work/training/vagrant/ms/ => /vagrant

With that, the three test VMs are ready.

Single-Node Installation

Log in to VM s1 and switch to the root user:

» vagrant ssh s1
[vagrant@s1 ~]$ su
Password:
[root@s1 vagrant]#

Install a few dependency tools:

[root@s1 vagrant]# yum install -y epel-release
[root@s1 vagrant]# yum install -y jq
[root@s1 vagrant]# yum install -y unzip

Download version 1.1.0 into the /tmp directory:

[root@s1 vagrant]# cd /tmp/
[root@s1 tmp]# curl -s https://releases.hashicorp.com/consul/1.1.0/consul_1.1.0_linux_amd64.zip -o consul.zip

Unzip it, make the consul binary executable, and move it to /usr/bin/:

[root@s1 tmp]# unzip consul.zip
[root@s1 tmp]# chmod +x consul
[root@s1 tmp]# mv consul /usr/bin/consul

Verify that Consul was installed successfully:

[root@s1 tmp]# consul
Usage: consul [--version] [--help] <command> [<args>]

Available commands are:
    agent          Runs a Consul agent
    catalog        Interact with the catalog
    event          Fire a new event
    exec           Executes a command on Consul nodes
    force-leave    Forces a member of the cluster to enter the "left" state
    info           Provides debugging information for operators.
    join           Tell Consul agent to join cluster
    keygen         Generates a new encryption key
    keyring        Manages gossip layer encryption keys
    kv             Interact with the key-value store
    leave          Gracefully leaves the Consul cluster and shuts down
    lock           Execute a command holding a lock
    maint          Controls node or service maintenance mode
    members        Lists the members of a Consul cluster
    monitor        Stream logs from a Consul agent
    operator       Provides cluster-level tools for Consul operators
    reload         Triggers the agent to reload configuration files
    rtt            Estimates network round trip time between nodes
    snapshot       Saves, restores and inspects snapshots of Consul server state
    validate       Validate config files/directories
    version        Prints the Consul version
    watch          Watch for changes in Consul

出現如上所示表明安裝成功。

Batch Installation

At this point Consul is only installed on s1; repeating the installation on s2 and s3 by hand would be tedious. Fortunately, Vagrant supports shell provisioning scripts.

Make a small change to the Vagrantfile: at line 4, add the definition of the provisioning script:

$script = <<SCRIPT

echo "Installing dependencies ..."
yum install -y epel-release
yum install -y jq
yum install -y unzip

echo "Determining Consul version to install ..."
CHECKPOINT_URL="https://checkpoint-api.hashicorp.com/v1/check"
if [ -z "$CONSUL_DEMO_VERSION" ]; then
    CONSUL_DEMO_VERSION=$(curl -s "${CHECKPOINT_URL}"/consul | jq .current_version | tr -d '"')
fi

echo "Fetching Consul version ${CONSUL_DEMO_VERSION} ..."
cd /tmp/
curl -s https://releases.hashicorp.com/consul/${CONSUL_DEMO_VERSION}/consul_${CONSUL_DEMO_VERSION}_linux_amd64.zip -o consul.zip

echo "Installing Consul version ${CONSUL_DEMO_VERSION} ..."
unzip consul.zip
sudo chmod +x consul
sudo mv consul /usr/bin/consul

SCRIPT
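The version lookup in the script above queries HashiCorp's Checkpoint API and extracts current_version with jq. As a rough sketch of what that extraction does (the sample response below is hard-coded with an assumed shape, and sed stands in for jq so nothing extra needs to be installed):

```shell
# Hypothetical sample of the Checkpoint API's JSON response (shape assumed)
resp='{"product":"consul","current_version":"1.1.0"}'
# The provisioning script pipes through `jq .current_version | tr -d '"'`;
# this sed expression performs the same extraction without requiring jq
version=$(echo "$resp" | sed -n 's/.*"current_version":"\([^"]*\)".*/\1/p')
# Build the release artifact name the script downloads
echo "consul_${version}_linux_amd64.zip"
```

With the sample response above, this prints `consul_1.1.0_linux_amd64.zip`, which matches the file name used in the single-node installation.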

On the line after the box declaration, add the provisioner that runs the script:

config.vm.box = "centos/7"
config.vm.provision "shell", inline: $script

Destroy the VMs, then re-create and boot them:

» vagrant destroy
    s3: Are you sure you want to destroy the 's3' VM? [y/N] y
==> s3: Forcing shutdown of VM...
==> s3: Destroying VM and associated drives...
    s2: Are you sure you want to destroy the 's2' VM? [y/N] y
==> s2: Forcing shutdown of VM...
==> s2: Destroying VM and associated drives...
    s1: Are you sure you want to destroy the 's1' VM? [y/N] y
==> s1: Forcing shutdown of VM...
==> s1: Destroying VM and associated drives...
» vagrant up
...

Provisioning takes a while. Once the script finishes, log in to s1, s2, and s3 and run consul to verify the installation.

Starting the Agent

Before starting, some basic Consul concepts and startup parameters need explaining.

Basic Concepts

  • agent — every member of a Consul cluster runs an agent, started with the consul agent command. An agent runs in either server mode or client mode; naturally, nodes running in server mode are called server nodes, and nodes running in client mode are called client nodes.
  • client — client mode. Services registered with a client node are forwarded to a server; the client itself does not persist this information.
  • server — server mode. A server offers the same functionality as a client, with one difference: it persists all information locally, so the data survives failures.

Startup Parameters

  • bootstrap-expect — the expected number of server nodes; a leader is elected only once this many servers have joined
  • server — run the agent in server mode
  • data-dir — the directory where the agent persists its state; the agent needs read and write access to it
  • node — the node's name
  • bind — the address this node binds to
  • config-dir — the directory holding configuration files (e.g. service definitions); by default, every file ending in .json is read
  • enable-script-checks=true — allow health checks defined as scripts
  • datacenter — the datacenter name
  • join — join an existing cluster
  • ui — enable the built-in web UI
  • client — the address client interfaces (including the web UI) listen on; the default 127.0.0.1 is reachable only locally, so set it to 0.0.0.0 to allow external access
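The same settings can also be supplied as a configuration file read from the -config-dir directory instead of command-line flags. A minimal sketch of a config equivalent to the single-node command shown next (the file name and path are illustrative, not from the original text):

```shell
# Write an illustrative Consul server config; the keys mirror the CLI flags
# (-server, -bootstrap-expect, -data-dir, -node, -bind, -client, -ui)
cat > /tmp/consul-server.json <<'EOF'
{
  "server": true,
  "bootstrap_expect": 1,
  "data_dir": "/etc/consul.d",
  "node_name": "node1",
  "bind_addr": "172.20.20.20",
  "client_addr": "0.0.0.0",
  "ui": true
}
EOF
cat /tmp/consul-server.json
```

The agent could then be started with something like consul agent -config-dir=/tmp (a hypothetical path) instead of spelling out each flag.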

First start a single standalone node. Switch to root:

[root@s1 vagrant]# consul agent -server -bootstrap-expect 1 -data-dir /etc/consul.d -node=node1 -bind=172.20.20.20 -ui -client 0.0.0.0

Open http://172.20.20.20:8500/ in a browser; if the web UI loads, the agent started successfully.


Setting Up the Server Cluster

Log in to VM s1, switch to root, and start Consul with an expected server count of 3:

[root@s1 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node1 -bind=172.20.20.20 -ui -client 0.0.0.0

Log in to VM s2, switch to root, and start Consul with an expected server count of 3, joining it to s1. Note that node names must not repeat:

[root@s2 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node2 -bind=172.20.20.21 -ui -client 0.0.0.0 -join 172.20.20.20

Log in to VM s3 and repeat the steps used on s2:

[root@s3 vagrant]# consul agent -server -bootstrap-expect 3 -data-dir /etc/consul.d -node=node3 -bind=172.20.20.22 -ui -client 0.0.0.0 -join 172.20.20.20

Refresh the web UI and you should now see all three nodes.


With that, the cluster setup is complete.
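Three servers is the smallest practical cluster size because Raft needs a majority of servers, floor(n/2) + 1, to elect a leader and commit writes. A quick sketch of the quorum arithmetic:

```shell
# Raft quorum: a majority of the server count, i.e. floor(n/2) + 1.
# With 3 servers the cluster tolerates 1 failure; with 1 server, none.
for n in 1 3 5; do
  echo "servers=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( n - (n / 2 + 1) ))"
done
```

This is why -bootstrap-expect 3 is used above: leader election waits until all three expected servers have joined.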

進階操做

Cluster Members

In another terminal on s1, run consul members to see the members of the Consul cluster:

[root@s1 vagrant]# consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  172.20.20.20:8301  alive   server  1.1.0  2         dc1  <all>
node2  172.20.20.21:8301  alive   server  1.1.0  2         dc1  <all>
node3  172.20.20.22:8301  alive   server  1.1.0  2         dc1  <all>
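The members output is easy to post-process with standard tools. A sketch that extracts the names of alive nodes (the sample text below is hard-coded from the output above rather than taken from a live cluster):

```shell
# Hard-coded sample of `consul members` output (copied from above)
members='Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  172.20.20.20:8301  alive   server  1.1.0  2         dc1  <all>
node2  172.20.20.21:8301  alive   server  1.1.0  2         dc1  <all>
node3  172.20.20.22:8301  alive   server  1.1.0  2         dc1  <all>'
# Skip the header row and print the names of nodes whose Status is "alive"
echo "$members" | awk 'NR > 1 && $3 == "alive" { print $1 }'
```

With the sample above this prints node1, node2, and node3, one per line.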

Querying Nodes

Install the dig tool, then query node2 through Consul's DNS interface on port 8600:

[root@s1 vagrant]# yum install -y bind-utils
[root@s1 vagrant]# dig @172.20.20.20 -p 8600 node2.node.consul

; <<>> DiG 9.9.4-RedHat-9.9.4-61.el7 <<>> @172.20.20.20 -p 8600 node2.node.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38194
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;node2.node.consul.             IN      A

;; ANSWER SECTION:
node2.node.consul.      0       IN      A       172.20.20.21

;; Query time: 33 msec
;; SERVER: 172.20.20.20#8600(172.20.20.20)
;; WHEN: Tue Jun 12 15:49:53 UTC 2018
;; MSG SIZE  rcvd: 62
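The dig query above uses Consul's node-lookup DNS form. Consul's DNS interface also answers service lookups; the name patterns are assembled as sketched below (the node and datacenter come from this tutorial, while the "web" service name is purely illustrative):

```shell
# Consul DNS name patterns:
#   <node>.node[.<datacenter>].consul       - node address lookup
#   <service>.service[.<datacenter>].consul - service lookup
node=node2; service=web; dc=dc1
echo "${node}.node.${dc}.consul"
echo "${service}.service.${dc}.consul"
```

Either name can be passed to dig against port 8600, just like the node query above; omitting the datacenter label queries the local datacenter.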

Leaving the Cluster

One way to leave the cluster is simply to stop the agent (press Ctrl-C).

Press Ctrl-C on VM s2, then query the cluster members again from s1:

[root@s1 vagrant]# consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  172.20.20.20:8301  alive   server  1.1.0  2         dc1  <all>
node2  172.20.20.21:8301  failed  server  1.1.0  2         dc1  <all>
node3  172.20.20.22:8301  alive   server  1.1.0  2         dc1  <all>

You can see that node2's status is now failed.

Restart the agent on VM s2. Then, in another terminal on s2, enter:

[root@s2 vagrant]# consul leave
Graceful leave complete

Query the cluster members on s1 again:

[root@s1 vagrant]# consul members
Node   Address            Status  Type    Build  Protocol  DC   Segment
node1  172.20.20.20:8301  alive   server  1.1.0  2         dc1  <all>
node2  172.20.20.21:8301  left    server  1.1.0  2         dc1  <all>
node3  172.20.20.22:8301  alive   server  1.1.0  2         dc1  <all>

node2's status is now left: it has left the cluster gracefully.
