Automatic failover for PostgreSQL streaming replication with heartbeat + pacemaker

heartbeat + pacemaker + postgres_streaming_replication

Overview:

This document explains how to implement automatic failover for PostgreSQL streaming replication using heartbeat + pacemaker. It includes a summary of heartbeat/pacemaker concepts, the complete environment setup, and the problems encountered along the way.

1. Introduction

Heartbeat

Starting with version 3, the heartbeat project was split into several sub-projects (i.e. independent components). Today these components are: heartbeat, cluster-glue, and resource-agents.

 

Main functions of each component:

heartbeat: the cluster messaging layer; maintains information about all nodes in the cluster and handles the communication between them.

cluster-glue: includes the LRM (Local Resource Manager) and STONITH; a middle layer that ties heartbeat to the CRM (Cluster Resource Manager).

resource-agents: the resource scripts, invoked by the LRM to start, stop, and monitor each resource.

 

Heartbeat internal component relationships (diagram omitted).
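
Once the stack is installed (section 3), these layers are visible from the crm shell; for example, the resource agent classes that the LRM can drive can be listed as follows (sample output; the exact list varies by version):

# crm ra classes
heartbeat
lsb
ocf / heartbeat pacemaker
stonith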

 

 

Pacemaker

Pacemaker is the Cluster Resource Manager (CRM): it manages the whole HA stack, and clients manage and monitor the entire cluster through Pacemaker.

Commonly used cluster management tools:

1) Command line:

crm shell/pcs

2) GUI:

pygui/hawk/lcmc/pcs

 

Hawk: http://clusterlabs.org/wiki/Hawk

LCMC: http://www.drbd.org/mc/lcmc/

 

 

Pacemaker internal components and module relationships (diagram omitted).

2. Environment

2.1 OS

# cat /etc/issue
CentOS release 6.4 (Final)
Kernel \r on an \m

# uname -a
Linux node1 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

2.2 IP

node1:

eth0 192.168.100.161/24  GW 192.168.100.1    ---primary address

eth1 2.2.2.1/24                              ---heartbeat address

eth2 192.168.2.1/24                          ---streaming replication address

 

node2:

eth0 192.168.100.162/24  GW 192.168.100.1    ---primary address

eth1 2.2.2.2/24                              ---heartbeat address

eth2 192.168.2.2/24                          ---streaming replication address

 

Virtual addresses:

eth0:0 192.168.100.163/24                    ---vip-master

eth0:0 192.168.100.164/24                    ---vip-slave

eth2:0 192.168.2.3/24                        ---vip-rep
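
These virtual addresses are added and removed at runtime by the IPaddr2 resource agent. Once the cluster is running (section 4.6 onward), you can check which node currently holds a VIP with iproute2; for example, on the node holding vip-master (illustrative output for the addresses above):

# ip addr show eth0 | grep "inet "
    inet 192.168.100.161/24 brd 192.168.100.255 scope global eth0
    inet 192.168.100.163/24 brd 192.168.100.255 scope global secondary eth0
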

2.3 Software versions

# rpm -qa | grep heartbeat
heartbeat-devel-3.0.3-2.3.el5
heartbeat-debuginfo-3.0.3-2.3.el5
heartbeat-3.0.3-2.3.el5
heartbeat-libs-3.0.3-2.3.el5
heartbeat-devel-3.0.3-2.3.el5
heartbeat-3.0.3-2.3.el5
heartbeat-debuginfo-3.0.3-2.3.el5
heartbeat-libs-3.0.3-2.3.el5

# rpm -qa | grep pacemaker
pacemaker-libs-1.0.12-1.el5.centos
pacemaker-1.0.12-1.el5.centos
pacemaker-debuginfo-1.0.12-1.el5.centos
pacemaker-debuginfo-1.0.12-1.el5.centos
pacemaker-1.0.12-1.el5.centos
pacemaker-libs-1.0.12-1.el5.centos
pacemaker-libs-devel-1.0.12-1.el5.centos
pacemaker-libs-devel-1.0.12-1.el5.centos

# rpm -qa | grep resource-agent
resource-agents-1.0.4-1.1.el5

# rpm -qa | grep cluster-glue
cluster-glue-libs-1.0.6-1.6.el5
cluster-glue-libs-1.0.6-1.6.el5
cluster-glue-1.0.6-1.6.el5
cluster-glue-libs-devel-1.0.6-1.6.el5


PostgreSQL version: 9.1.4

3. Installation

3.1 Set up the YUM repository

# wget -O /etc/yum.repos.d/pacemaker.repo http://clusterlabs.org/rpm/epel-5/clusterlabs.repo


3.2 Install heartbeat/pacemaker

Install libesmtp:

# wget ftp://ftp.univie.ac.at/systems/linux/fedora/epel/5/x86_64/libesmtp-1.0.4-5.el5.x86_64.rpm
# wget ftp://ftp.univie.ac.at/systems/linux/fedora/epel/5/i386/libesmtp-1.0.4-5.el5.i386.rpm
# rpm -ivh libesmtp-1.0.4-5.el5.x86_64.rpm
# rpm -ivh libesmtp-1.0.4-5.el5.i386.rpm


 

Install heartbeat and pacemaker:

# yum install heartbeat* pacemaker*

 

List the available resource agent scripts:

# crm ra list ocf
AoEtarget            AudibleAlarm         CTDB                 ClusterMon           Delay                Dummy                EvmsSCC
Evmsd                Filesystem           HealthCPU            HealthSMART          ICP                  IPaddr               IPaddr2
IPsrcaddr            IPv6addr             LVM                  LinuxSCSI            MailTo               ManageRAID           ManageVE
Pure-FTPd            Raid1                Route                SAPDatabase          SAPInstance          SendArp              ServeRAID
SphinxSearchDaemon   Squid                Stateful             SysInfo              SystemHealth         VIPArip              VirtualDomain
WAS                  WAS6                 WinPopup             Xen                  Xinetd               anything             apache
conntrackd           controld             db2                  drbd                 eDir88               exportfs             fio
iSCSILogicalUnit     iSCSITarget          ids                  iscsi                jboss                ldirectord           mysql
mysql-proxy          nfsserver            nginx                o2cb                 oracle               oralsnr              pgsql
ping                 pingd                portblock            postfix              proftpd              rsyncd               scsi2reservation
sfex                 syslog-ng            tomcat               vmware

Disable autostart at boot:

# chkconfig heartbeat off
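
Autostart is disabled here; this is commonly done so that a rebooted node does not rejoin the cluster on its own before an operator has checked it. The setting can be verified with:

# chkconfig --list heartbeat
heartbeat       0:off   1:off   2:off   3:off   4:off   5:off   6:off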

3.3 Install PostgreSQL

The installation directory is /opt/pgsql.

{installation steps omitted}

 

Configure the environment variables for the postgres user:

[postgres@node1 ~]$ cat .bash_profile 
# .bash_profile
 
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
 . ~/.bashrc
fi
 
# User specific environment and startup programs
 
 
export PATH=/opt/pgsql/bin:$PATH:$HOME/bin
export PGDATA=/opt/pgsql/data
export PGUSER=postgres
export PGPORT=5432
export LD_LIBRARY_PATH=/opt/pgsql/lib:$LD_LIBRARY_PATH
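
A quick sanity check that the profile resolves to the intended installation (paths as set above; the version line assumes the 9.1.4 build used in this document):

[postgres@node1 ~]$ which pg_ctl
/opt/pgsql/bin/pg_ctl
[postgres@node1 ~]$ psql --version
psql (PostgreSQL) 9.1.4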


4. Configuration

4.1 Configure hosts

# vim /etc/hosts
192.168.100.161 node1
192.168.100.162 node2


4.2 Configure heartbeat

Create the configuration file:

# cp /usr/share/doc/heartbeat-3.0.3/ha.cf /etc/ha.d/

Edit the configuration:

# vim /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
ucast eth1 2.2.2.2 # on node2 change this to 2.2.2.1
auto_failback off
node node1
node node2
pacemaker respawn # as of heartbeat-3.0.4; older versions use 'crm respawn'

4.3 Generate the authentication key

# (echo -ne "auth 1\n1 sha1 ";dd if=/dev/urandom bs=512 count=1 | openssl sha1 ) > /etc/ha.d/authkeys
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00032444 s, 1.6 MB/s

# chmod 0600 /etc/ha.d/authkeys


4.4 Sync the configuration

[root@node1 ~]# cd /etc/ha.d/
[root@node1 ha.d]# scp authkeys ha.cf node2:/etc/ha.d/

4.5 Download a replacement pgsql script

The bundled pgsql script is too old and does not support some of the parameters set in pgsql.crm, so a newer version must be downloaded and used to replace it.

 

Download from:

https://github.com/ClusterLabs/resource-agents

 

# unzip resource-agents-master.zip
# cd resource-agents-master/heartbeat/
# cp pgsql /usr/lib/ocf/resource.d/heartbeat/
# cp ocf-shellfuncs.in /usr/lib/ocf/lib/heartbeat/ocf-shellfuncs
# cp ocf-rarun /usr/lib/ocf/lib/heartbeat/ocf-rarun
# chmod 755 /usr/lib/ocf/resource.d/heartbeat/pgsql

 

Edit ocf-shellfuncs:

if [ -z "$OCF_ROOT" ]; then
#    : ${OCF_ROOT=@OCF_ROOT_DIR@}
    : ${OCF_ROOT=/usr/lib/ocf}
fi
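
After replacing the files, it is worth verifying that the new agent is readable and advertises the parameters used later in pgsql.crm (rep_mode, node_list, master_ip, and so on); if this prints nothing or errors out, see Q2/Q3 in section 7. A quick check from the crm shell:

# crm ra info ocf:heartbeat:pgsql | grep -E "rep_mode|node_list|master_ip"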

 

 

Features of the pgsql resource agent:

Master node failover

master宕掉時,RA檢測到該問題並將master標記爲stop,隨後將slave提高爲新的master

Switching between synchronous and asynchronous replication

If the slave goes down or the LAN has problems while synchronous replication is configured, transactions that contain writes will block, which effectively stops the service. To prevent such an outage, the RA dynamically switches replication from synchronous to asynchronous.

Automatic detection of the newest data at initial startup

When Pacemaker starts on two or more nodes at the same time, the RA compares the latest replay location of each node to find the one with the newest data; that node becomes the master. If Pacemaker is started on a single node, or one node's Pacemaker starts first, that node likewise becomes the master; the RA decides based on the data state at the previous shutdown.

Read load balancing

Since slave nodes can handle read-only transactions, reads can be load-balanced through an additional virtual IP.
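
The replication state that drives these behaviors can also be observed directly in the database; on the master, for instance (PostgreSQL 9.1 syntax; the output below is illustrative for this two-node setup):

[postgres@node1 ~]$ psql -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"
 client_addr |   state   | sync_state
-------------+-----------+------------
 192.168.2.2 | streaming | sync
(1 row)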

4.6 Start heartbeat

Start it:

[root@node1 ~]# service heartbeat start
[root@node2 ~]# service heartbeat start

Check the status:

[root@node1 ~]# crm status
============
Last updated: Fri Jan 24 08:02:54 2014
Stack: Heartbeat
Current DC: node2 (43a4f083-c5d3-4c66-a387-b05d79b5dd89) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
0 Resources configured.
============
 
Online: [ node1 node2 ]

{heartbeat started successfully}

 

 

Test:

Disable STONITH and create a virtual IP resource named vip:

[root@node1 ~]# crm configure property stonith-enabled="false"
[root@node1 ~]# crm configure
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 \
>     params \
>         ip="192.168.100.90" \
>         nic="eth0" \
>         cidr_netmask="24" \
>     op start   timeout="60s" interval="0s"  on-fail="stop" \
>     op monitor timeout="60s" interval="10s" on-fail="restart" \
>     op stop    timeout="60s" interval="0s"  on-fail="block"
crm(live)configure# commit
crm(live)configure# quit
bye

 

[root@node2 heartbeat]# crm_mon -1
============
Last updated: Fri Jan 24 08:23:09 2014
Stack: Heartbeat
Current DC: node2 (43a4f083-c5d3-4c66-a387-b05d79b5dd89) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
1 Resources configured.
============
 
Online: [ node1 node2 ]
 
 vip (ocf::heartbeat:IPaddr2): Started node1

{the vip resource is running on node1}

 

[root@node1 ~]# ping 192.168.100.90
PING 192.168.100.90 (192.168.100.90) 56(84) bytes of data.
64 bytes from 192.168.100.90: icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from 192.168.100.90: icmp_seq=2 ttl=64 time=0.111 ms
64 bytes from 192.168.100.90: icmp_seq=3 ttl=64 time=0.123 ms

 

Simulate a failure on node1:

[root@node1 ~]# service heartbeat stop
 
[root@node2 heartbeat]# crm_mon -1
============
Last updated: Fri Jan 24 08:22:22 2014
Stack: Heartbeat
Current DC: node2 (43a4f083-c5d3-4c66-a387-b05d79b5dd89) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
1 Resources configured.
============
 
Online: [ node2 ]
OFFLINE: [ node1 ]
 
 vip (ocf::heartbeat:IPaddr2): Started node2

{node2 took over the vip resource}

 

Bring node1 back:

[root@node1 ~]# service heartbeat start
 
[root@node2 heartbeat]# crm_mon -1
============
Last updated: Fri Jan 24 08:23:09 2014
Stack: Heartbeat
Current DC: node2 (43a4f083-c5d3-4c66-a387-b05d79b5dd89) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
1 Resources configured.
============
 
Online: [ node1 node2 ]
 
 vip (ocf::heartbeat:IPaddr2): Started node1

{node1 reclaimed the vip resource}

 

Delete the test resource:

[root@node1 ~]# crm_resource -D -r vip -t primitive
[root@node1 ~]# crm_resource -L
NO resources configured

4.7 Configure streaming replication

Configure postgresql.conf and pg_hba.conf on node1/node2:

postgresql.conf :
listen_addresses = '*'
port = 5432
wal_level = hot_standby
archive_mode = on
archive_command = 'test ! -f /opt/archivelog/%f && cp %p /opt/archivelog/%f'
max_wal_senders = 4
wal_keep_segments = 50
hot_standby = on
 
pg_hba.conf :
host    replication     postgres        192.168.2.0/24           trust
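
The archive_command above writes WAL segments into /opt/archivelog, and the restore_command configured for the pgsql resource in section 4.8 reads them back, so this directory must exist on both nodes and be writable by postgres. A minimal preparation step, assuming the paths used in this document:

# mkdir -p /opt/archivelog
# chown postgres:postgres /opt/archivelog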


4.8 Configure pacemaker

{pacemaker can be configured in several ways, e.g. crmsh, hb_gui, or pcs; this walkthrough uses crmsh}

 

Start heartbeat on node1 and keep it stopped on node2.

 

Write the crm configuration script:

[root@node1 ~]# cat pgsql.crm 
property \
    no-quorum-policy="ignore" \
    stonith-enabled="false" \
    crmd-transition-delay="0s"
 
rsc_defaults \
    resource-stickiness="INFINITY" \
    migration-threshold="1"
 
ms msPostgresql pgsql \
    meta \
        master-max="1" \
        master-node-max="1" \
        clone-max="2" \
        clone-node-max="1" \
        notify="true"
 
clone clnPingCheck pingCheck
group master-group \
      vip-master \
      vip-rep \
      meta \
          ordered="false"
 
primitive vip-master ocf:heartbeat:IPaddr2 \
    params \
        ip="192.168.100.163" \
        nic="eth0" \
        cidr_netmask="24" \
    op start   timeout="60s" interval="0s"  on-fail="stop" \
    op monitor timeout="60s" interval="10s" on-fail="restart" \
    op stop    timeout="60s" interval="0s"  on-fail="block"
 
primitive vip-rep ocf:heartbeat:IPaddr2 \
    params \
        ip="192.168.2.3" \
        nic="eth2" \
        cidr_netmask="24" \
    meta \
            migration-threshold="0" \
    op start   timeout="60s" interval="0s"  on-fail="restart" \
    op monitor timeout="60s" interval="10s" on-fail="restart" \
    op stop    timeout="60s" interval="0s"  on-fail="block"
 
primitive vip-slave ocf:heartbeat:IPaddr2 \
    params \
        ip="192.168.100.164" \
        nic="eth0" \
        cidr_netmask="24" \
    meta \
        resource-stickiness="1" \
    op start   timeout="60s" interval="0s"  on-fail="restart" \
    op monitor timeout="60s" interval="10s" on-fail="restart" \
    op stop    timeout="60s" interval="0s"  on-fail="block"
 
primitive pgsql ocf:heartbeat:pgsql \
    params \
        pgctl="/opt/pgsql/bin/pg_ctl" \
        psql="/opt/pgsql/bin/psql" \
        pgdata="/opt/pgsql/data/" \
        start_opt="-p 5432" \
        rep_mode="sync" \
        node_list="node1 node2" \
        restore_command="cp /opt/archivelog/%f %p" \
        primary_conninfo_opt="keepalives_idle=60 keepalives_interval=5 keepalives_count=5" \
        master_ip="192.168.2.3" \
        stop_escalate="0" \
    op start   timeout="60s" interval="0s"  on-fail="restart" \
    op monitor timeout="60s" interval="7s" on-fail="restart" \
    op monitor timeout="60s" interval="2s"  on-fail="restart" role="Master" \
    op promote timeout="60s" interval="0s"  on-fail="restart" \
    op demote  timeout="60s" interval="0s"  on-fail="stop" \
    op stop    timeout="60s" interval="0s"  on-fail="block" \
    op notify  timeout="60s" interval="0s"
 
primitive pingCheck ocf:pacemaker:pingd \
    params \
        name="default_ping_set" \
        host_list="192.168.100.1" \
        multiplier="100" \
    op start   timeout="60s" interval="0s"  on-fail="restart" \
    op monitor timeout="60s" interval="10s" on-fail="restart" \
    op stop    timeout="60s" interval="0s"  on-fail="ignore"
 
location rsc_location-1 vip-slave \
    rule  200: pgsql-status eq "HS:sync" \
    rule  100: pgsql-status eq "PRI" \
    rule  -inf: not_defined pgsql-status \
    rule  -inf: pgsql-status ne "HS:sync" and pgsql-status ne "PRI"
 
location rsc_location-2 msPostgresql \
    rule -inf: not_defined default_ping_set or default_ping_set lt 100
 
colocation rsc_colocation-1 inf: msPostgresql        clnPingCheck
colocation rsc_colocation-2 inf: master-group        msPostgresql:Master
 
order rsc_order-1 0: clnPingCheck          msPostgresql
order rsc_order-2 0: msPostgresql:promote  master-group:start   symmetrical=false
order rsc_order-3 0: msPostgresql:demote   master-group:stop    symmetrical=false


Load the configuration script:

[root@node1 ~]# crm configure load update pgsql.crm 
WARNING: pgsql: specified timeout 60s for stop is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for start is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for notify is smaller than the advised 90
WARNING: pgsql: specified timeout 60s for demote is smaller than the advised 120
WARNING: pgsql: specified timeout 60s for promote is smaller than the advised 120
WARNING: pingCheck: specified timeout 60s for start is smaller than the advised 90
WARNING: pingCheck: specified timeout 60s for stop is smaller than the advised 100
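
These warnings are advisory (the configured timeouts are shorter than the agent-advertised recommendations) and are acceptable for this test environment. The loaded configuration can additionally be checked against the live CIB with pacemaker's crm_verify:

[root@node1 ~]# crm_verify -L -V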


After a while, check the HA status:

[root@node1 ~]# crm_mon -Afr1
============
Last updated: Mon Jan 27 05:23:59 2014
Stack: Heartbeat
Current DC: node1 (30b7dc95-25c5-40d7-b1e4-7eaf2d5cdf07) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============
 
Online: [ node1 ]
OFFLINE: [ node2 ]
 
Full list of resources:
 
 vip-slave (ocf::heartbeat:IPaddr2): Started node1
 Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep (ocf::heartbeat:IPaddr2): Started node1
 Master/Slave Set: msPostgresql
     Masters: [ node1 ]
     Stopped: [ pgsql:0 ]
 Clone Set: clnPingCheck
     Started: [ node1 ]
     Stopped: [ pingCheck:1 ]
 
Node Attributes:
* Node node1:
    + default_ping_set                 : 100       
    + master-pgsql:1                   : 1000      
    + pgsql-data-status                : LATEST    
    + pgsql-master-baseline            : 0000000003000078
    + pgsql-status                     : PRI       
 
Migration summary:
* Node node1:


Note: right after startup the instance runs as a slave; after a short while it is automatically promoted to master.

 

 

Once all resources are running normally on node1, run a base backup on node2 to synchronize it:

[postgres@node2 data]$ pg_basebackup -h 192.168.2.3 -U postgres -D /opt/pgsql/data/ -P


 

Start heartbeat on node2:

[root@node2 ~]# service heartbeat start


 

After a while, check the cluster status:

[root@node1 ~]# crm_mon -Afr1
============
Last updated: Mon Jan 27 05:27:22 2014
Stack: Heartbeat
Current DC: node1 (30b7dc95-25c5-40d7-b1e4-7eaf2d5cdf07) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============
 
Online: [ node1 node2 ]
 
Full list of resources:
 
 vip-slave (ocf::heartbeat:IPaddr2): Started node2
 Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep (ocf::heartbeat:IPaddr2): Started node1
 Master/Slave Set: msPostgresql
     Masters: [ node1 ]
     Slaves: [ node2 ]
 Clone Set: clnPingCheck
     Started: [ node1 node2 ]
 
Node Attributes:
* Node node1:
    + default_ping_set                 : 100       
    + master-pgsql:1                   : 1000      
    + pgsql-data-status                : LATEST    
    + pgsql-master-baseline            : 0000000003000078
    + pgsql-status                     : PRI       
* Node node2:
    + default_ping_set                 : 100       
    + master-pgsql:0                   : 100       
    + pgsql-data-status                : STREAMING|SYNC
    + pgsql-status                     : HS:sync   
 
Migration summary:
* Node node2: 
* Node node1:


{the vip-slave resource has moved from node1 to node2, and streaming replication is healthy}

5. Testing

5.1 Standby node failure

Kill the postgres processes on node2 to simulate a database crash on the standby:

[root@node2 ~]# killall -9 postgres


 

Check the cluster status:

[root@node1 ~]# crm_mon -Afr1
============
Last updated: Mon Jan 27 08:36:49 2014
Stack: Heartbeat
Current DC: node1 (30b7dc95-25c5-40d7-b1e4-7eaf2d5cdf07) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============
 
Online: [ node1 node2 ]
 
Full list of resources:
 
 vip-slave (ocf::heartbeat:IPaddr2): Started node1
 Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep (ocf::heartbeat:IPaddr2): Started node1
 Master/Slave Set: msPostgresql
     Masters: [ node1 ]
     Stopped: [ pgsql:1 ]
 Clone Set: clnPingCheck
     Started: [ node1 node2 ]
 
Node Attributes:
* Node node1:
    + default_ping_set                 : 100       
    + master-pgsql:0                   : 1000      
    + pgsql-data-status                : LATEST    
    + pgsql-master-baseline            : 0000000010000000
    + pgsql-status                     : PRI       
* Node node2:
    + default_ping_set                 : 100       
    + master-pgsql:1                   : -INFINITY 
    + pgsql-data-status                : DISCONNECT
    + pgsql-status                     : STOP      
 
Migration summary:
* Node node1: 
* Node node2: 
   pgsql:1: migration-threshold=1 fail-count=1
 
Failed actions:
    pgsql:1_monitor_7000 (node=node2, call=11, rc=7, status=complete): not running

{the vip-slave resource has successfully moved to node1}


 

Restart heartbeat on node2; the database will start along with it:

[root@node2 ~]# service heartbeat restart


Check the status after a while:

[root@node1 ~]# crm_mon -Afr1
============
Last updated: Mon Jan 27 08:39:16 2014
Stack: Heartbeat
Current DC: node1 (30b7dc95-25c5-40d7-b1e4-7eaf2d5cdf07) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============
 
Online: [ node1 node2 ]
 
Full list of resources:
 
 vip-slave (ocf::heartbeat:IPaddr2): Started node2
 Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node1
     vip-rep (ocf::heartbeat:IPaddr2): Started node1
 Master/Slave Set: msPostgresql
     Masters: [ node1 ]
     Slaves: [ node2 ]
 Clone Set: clnPingCheck
     Started: [ node1 node2 ]
 
Node Attributes:
* Node node1:
    + default_ping_set                 : 100       
    + master-pgsql:0                   : 1000      
    + pgsql-data-status                : LATEST    
    + pgsql-master-baseline            : 0000000010000000
    + pgsql-status                     : PRI       
* Node node2:
    + default_ping_set                 : 100       
    + master-pgsql:1                   : 100       
    + pgsql-data-status                : STREAMING|SYNC
    + pgsql-status                     : HS:sync   
 
Migration summary:
* Node node1: 
* Node node2:


{vip-slave has moved back to node2, and streaming replication has been re-established}

5.2 Master node failover

Kill the postgres processes on node1 to simulate a database crash on the master:

[root@node1 ~]# killall -9 postgres


Check the cluster status after a moment:

[root@node2 ~]# crm_mon -Afr -1
============
Last updated: Mon Jan 27 08:43:03 2014
Stack: Heartbeat
Current DC: node1 (30b7dc95-25c5-40d7-b1e4-7eaf2d5cdf07) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============
 
Online: [ node1 node2 ]
 
Full list of resources:
 
 vip-slave (ocf::heartbeat:IPaddr2): Started node2
 Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node2
     vip-rep (ocf::heartbeat:IPaddr2): Started node2
 Master/Slave Set: msPostgresql
     Masters: [ node2 ]
     Stopped: [ pgsql:0 ]
 Clone Set: clnPingCheck
     Started: [ node1 node2 ]
 
Node Attributes:
* Node node1:
    + default_ping_set                 : 100       
    + master-pgsql:0                   : -INFINITY 
    + pgsql-data-status                : DISCONNECT
    + pgsql-status                     : STOP      
* Node node2:
    + default_ping_set                 : 100       
    + master-pgsql:1                   : 1000      
    + pgsql-data-status                : LATEST    
    + pgsql-master-baseline            : 00000000120000B0
    + pgsql-status                     : PRI       
 
Migration summary:
* Node node1: 
   pgsql:0: migration-threshold=1 fail-count=1
* Node node2: 
 
Failed actions:
    pgsql:0_monitor_2000 (node=node1, call=25, rc=7, status=complete): not running

{vip-master and vip-rep have both moved to node2, node2 has become the master, and the PostgreSQL instance on node2 is now in the PRI state}

5.3 Recovering the old master

After repairing the old master node, bring it back as the new standby.

 

Run a base backup on node1:

[postgres@node1 data]$ pwd
/opt/pgsql/data
[postgres@node1 data]$ rm -rf *
[postgres@node1 data]$ pg_basebackup -h 192.168.2.3 -U postgres -D /opt/pgsql/data/ -P
19172/19172 kB (100%), 1/1 tablespace
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
[postgres@node1 data]$ ls
backup_label      base    pg_clog      pg_ident.conf  pg_notify  pg_stat_tmp  pg_tblspc    PG_VERSION  postgresql.conf
backup_label.old  global  pg_hba.conf  pg_multixact   pg_serial  pg_subtrans  pg_twophase  pg_xlog     recovery.done

  

Before starting heartbeat you must delete the lock file, otherwise the resource will not start along with heartbeat:

[root@node1 ~]# rm -rf /var/lib/pgsql/tmp/PGSQL.lock

{the lock file is created while a node is acting as master, but it is not removed automatically when heartbeat stops abnormally or the database/system crashes; so whenever you recover a node that has ever served as master, this lock file must be cleaned up by hand}

 

Restart heartbeat on node1:

[root@node1 ~]# service heartbeat restart

 

Check the cluster status after a while:

[root@node2 ~]# crm_mon -Afr1
============
Last updated: Mon Jan 27 08:50:43 2014
Stack: Heartbeat
Current DC: node2 (f2dcd1df-7429-42f5-82e9-b73921f97cab) - partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============
 
Online: [ node1 node2 ]
 
Full list of resources:
 
 vip-slave (ocf::heartbeat:IPaddr2): Started node1
 Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started node2
     vip-rep (ocf::heartbeat:IPaddr2): Started node2
 Master/Slave Set: msPostgresql
     Masters: [ node2 ]
     Slaves: [ node1 ]
 Clone Set: clnPingCheck
     Started: [ node1 node2 ]
 
Node Attributes:
* Node node1:
    + default_ping_set                 : 100       
    + master-pgsql:0                   : 100       
    + pgsql-data-status                : STREAMING|SYNC
    + pgsql-status                     : HS:sync   
* Node node2:
    + default_ping_set                 : 100       
    + master-pgsql:1                   : 1000      
    + pgsql-data-status                : LATEST    
    + pgsql-master-baseline            : 00000000120000B0
    + pgsql-status                     : PRI       
 
Migration summary:
* Node node1: 
* Node node2:

{vip-slave has moved to node1, and node1 has become the streaming replication standby}

6. Administration

6.1 Start/stop heartbeat

[root@node1 ~]# service heartbeat start
[root@node1 ~]# service heartbeat stop


6.2 Check HA status

[root@node1 ~]# crm status


6.3 Show resource status and node attributes

[root@node1 ~]# crm_mon -Afr -1


6.4 Show the configuration

[root@node1 ~]# crm configure show


6.5 Monitor HA in real time

[root@node1 ~]# crm_mon -Afr


6.6 The crm_resource command

Start/stop a resource:

[root@node1 ~]# crm_resource -r vip-master -v started
[root@node1 ~]# crm_resource -r vip-master -v stopped


List resources:

[root@node1 ~]# crm_resource -L
 vip-slave (ocf::heartbeat:IPaddr2): Started 
 Resource Group: master-group
     vip-master (ocf::heartbeat:IPaddr2): Started 
     vip-rep (ocf::heartbeat:IPaddr2): Started 
 Master/Slave Set: msPostgresql [pgsql]
     Masters: [ node1 ]
     Slaves: [ node2 ]
 Clone Set: clnPingCheck [pingCheck]
     Started: [ node1 node2 ]


Show where a resource is running:

[root@node1 ~]# crm_resource -W -r pgsql
resource pgsql is running on: node2


Migrate a resource:

[root@node1 ~]# crm_resource -M -r vip-slave -N node2


Delete a resource:

[root@node1 ~]# crm_resource -D -r vip-slave -t primitive


6.7 The crm command

List the resource agents for a given class/provider:

[root@node1 ~]# crm ra list ocf pacemaker
ClusterMon     Dummy          HealthCPU      HealthSMART    Stateful       SysInfo        SystemHealth   controld       ping           pingd
remote


Delete a node:

[root@node1 ~]# crm node delete node2


Put a node into standby:

[root@node1 ~]# crm node standby node2


Bring a node back online:

[root@node1 ~]# crm node online node2


Configure pacemaker interactively:

[root@node1 ~]# crm configure
crm(live)configure#
……
……
crm(live)configure# commit
crm(live)configure# quit


6.8 Reset the failcount

[root@node1 ~]# crm resource
crm(live)resource# failcount pgsql set node1 0
crm(live)resource# failcount pgsql show node1
scope=status  name=fail-count-pgsql value=0

 

[root@node1 ~]# crm resource cleanup pgsql
Cleaning up pgsql:0 on node1
Waiting for 1 replies from the CRMd. OK

 

[root@node1 ~]# crm_failcount -G -U node1 -r pgsql
scope=status  name=fail-count-pgsql value=INFINITY
[root@node1 ~]# crm_failcount -D -U node1 -r pgsql

7. Troubleshooting

7.1 Q1

Symptom:

The heartbeat log reports errors like:

Jan 24 07:47:36 node1 heartbeat: [2515]: WARN: nodename node1 uuid changed to node2
Jan 24 07:47:38 node1 heartbeat: [2515]: WARN: nodename node2 uuid changed to node1
Jan 24 07:47:38 node1 heartbeat: [2515]: WARN: nodename node1 uuid changed to node2
Jan 24 07:47:40 node1 heartbeat: [2515]: WARN: nodename node2 uuid changed to node1
Jan 24 07:47:40 node1 heartbeat: [2515]: WARN: nodename node1 uuid changed to node2
Jan 24 07:47:42 node1 heartbeat: [2515]: WARN: nodename node2 uuid changed to node1
Jan 24 07:47:42 node1 heartbeat: [2515]: WARN: nodename node1 uuid changed to node2

 

Solution:

node2 was cloned from a virtual machine image, so both nodes shared the same hb_uuid; delete the file and let heartbeat regenerate it:

[root@node2 ~]# rm -rf /var/lib/heartbeat/hb_uuid
[root@node2 ~]# service heartbeat restart

A new hb_uuid will be generated after the restart.

 

 

7.2 Q2

Symptom:

Loading the configuration fails:

[root@node1 ~]# crm configure load update pgsql.crm 
ERROR: pgsql: parameter rep_mode does not exist
ERROR: pgsql: parameter node_list does not exist
ERROR: pgsql: parameter master_ip does not exist
ERROR: pgsql: parameter restore_command does not exist
ERROR: pgsql: parameter primary_conninfo_opt does not exist
WARNING: pgsql: specified timeout 60s for stop is smaller than the advised 120
WARNING: pgsql: action monitor_Master not advertised in meta-data, it may not be supported by the RA
WARNING: pgsql: specified timeout 60s for start is smaller than the advised 120
WARNING: pgsql: action notify not advertised in meta-data, it may not be supported by the RA
WARNING: pgsql: action demote not advertised in meta-data, it may not be supported by the RA
WARNING: pgsql: action promote not advertised in meta-data, it may not be supported by the RA
WARNING: pingCheck: specified timeout 60s for start is smaller than the advised 90
WARNING: pingCheck: specified timeout 60s for stop is smaller than the advised 100
Do you still want to commit?


Solution:

The bundled pgsql script is too old and does not support some of the parameters set in pgsql.crm; download a newer version and replace it (see section 4.5):

https://raw.github.com/ClusterLabs/resource-agents

7.3 Q3

Symptom:

Loading the configuration fails:

[root@node1 ~]# crm configure load update pgsql.crm 
lrmadmin[15368]: 2014/01/24_09:18:44 ERROR: lrm_get_rsc_type_metadata(578): got a return code HA_FAIL from a reply message of rmetadata with function get_ret_from_msg.
ERROR: ocf:heartbeat:pgsql: could not parse meta-data: 
ERROR: ocf:heartbeat:pgsql: could not parse meta-data: 
ERROR: ocf:heartbeat:pgsql: no such resource agent
WARNING: pingCheck: specified timeout 60s for start is smaller than the advised 90
WARNING: pingCheck: specified timeout 60s for stop is smaller than the advised 100
Do you still want to commit?

 

Solution:

The permissions on the pgsql script were wrong; fix them with:

# chmod 755 /usr/lib/ocf/resource.d/heartbeat/pgsql

7.4 Q4

Symptom:

Starting heartbeat fails:

[root@node1 ~]# service heartbeat start
/usr/lib/ocf/lib//heartbeat/ocf-shellfuncs: line 56: @OCF_ROOT_DIR@/lib/heartbeat/ocf-binaries: No such file or directory

 

Solution:

On CentOS 5.5 the @OCF_ROOT_DIR@ placeholder is not substituted with the correct path; work around it by editing the script.

Edit ocf-shellfuncs as follows:

if [ -z "$OCF_ROOT" ]; then
#    : ${OCF_ROOT=@OCF_ROOT_DIR@}
    : ${OCF_ROOT=/usr/lib/ocf}
fi

7.5 Q5

Symptom:

Starting heartbeat fails:

# service heartbeat start
/usr/lib/ocf/lib//heartbeat/ocf-shellfuncs: line 60: /usr/lib/ocf/lib/heartbeat/ocf-rarun: No such file or directory

 

Solution:

Caused by the missing ocf-rarun script; download it and place it in the corresponding path (see section 4.5).

Download: https://raw.github.com/ClusterLabs/resource-agents

7.6 Q6

Symptom:

heartbeat fails to start because its client commands cannot be found:

[root@db1 ~]# service heartbeat start
Starting High-Availability services:  Heartbeat failure [rc=6]. Failed.
 
heartbeat[2074]: 2014/01/23_09:06:59 info: Pacemaker support: yes
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Client child command [/usr/lib64/heartbeat/cib] is not executable
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Directive failfast  hacluster /usr/lib64/heartbeat/cib failed
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Client child command [/usr/lib64/heartbeat/stonithd] is not executable
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Directive respawn root /usr/lib64/heartbeat/stonithd failed
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Client child command [/usr/lib64/heartbeat/attrd] is not executable
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Directive respawn  hacluster /usr/lib64/heartbeat/attrd failed
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Client child command [/usr/lib64/heartbeat/crmd] is not executable
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Directive failfast  hacluster /usr/lib64/heartbeat/crmd failed
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Heartbeat not started: configuration error.
heartbeat[2074]: 2014/01/23_09:06:59 ERROR: Configuration error, heartbeat not started.

 

Solution:

ln -s /usr/libexec/pacemaker/cib /usr/lib64/heartbeat/cib
ln -s /usr/libexec/pacemaker/stonithd /usr/lib64/heartbeat/stonithd
ln -s /usr/libexec/pacemaker/attrd /usr/lib64/heartbeat/attrd
ln -s /usr/libexec/pacemaker/crmd /usr/lib64/heartbeat/crmd

7.7 Q7

Symptom:

Starting heartbeat fails:

Jan 23 09:10:15 db1 heartbeat: [2129]: info: Heartbeat generation: 1390439416
Jan 23 09:10:15 db1 heartbeat: [2129]: info: No uuid found for current node - generating a new uuid.
Jan 23 09:10:15 db1 heartbeat: [2129]: info: Creating FIFO /var/lib/heartbeat/fifo.
Jan 23 09:10:15 db1 heartbeat: [2129]: info: glib: ucast: write socket priority set to IPTOS_LOWDELAY on eth1
Jan 23 09:10:15 db1 heartbeat: [2129]: info: glib: ucast: bound send socket to device: eth1
Jan 23 09:10:15 db1 heartbeat: [2129]: ERROR: glib: ucast: error setting option SO_REUSEPORT(w): Protocol not available
Jan 23 09:10:15 db1 heartbeat: [2129]: ERROR: make_io_childpair: cannot open ucast eth1
Jan 23 09:10:16 db1 heartbeat: [2132]: CRIT: Emergency Shutdown: Master Control process died.
Jan 23 09:10:16 db1 heartbeat: [2132]: CRIT: Killing pid 2129 with SIGTERM
Jan 23 09:10:16 db1 heartbeat: [2132]: CRIT: Emergency Shutdown(MCP dead): Killing ourselves.

 

Solution:

1. Upgrade the kernel; the current kernel version does not support ucast.

2. Use a different heartbeat mode, such as mcast or bcast (see the sketch below).
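
For option 2, the change in ha.cf might look like the following sketch (heartbeat's mcast directive takes the device, multicast group, port, ttl, and loop flag; the group address here is just an example):

# vim /etc/ha.d/ha.cf
#ucast eth1 2.2.2.2
mcast eth1 225.0.0.1 694 1 0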

7.8 Q8

Symptom:

Errors when using the bcast heartbeat mode:

Jan 24 01:30:20 db2 heartbeat: [29856]: ERROR: glib: Error binding socket (Address already in use). Retrying.
Jan 24 01:30:21 db2 heartbeat: [29856]: ERROR: glib: Error binding socket (Address already in use). Retrying.
Jan 24 01:30:22 db2 heartbeat: [29856]: ERROR: glib: Error binding socket (Address already in use). Retrying.
Jan 24 01:30:23 db2 heartbeat: [29856]: ERROR: glib: Error binding socket (Address already in use). Retrying.
Jan 24 01:30:24 db2 heartbeat: [29856]: ERROR: glib: Unable to bind socket (Address already in use). Giving up.
Jan 24 01:30:24 db2 heartbeat: [29856]: info: glib: UDP Broadcast heartbeat closed on port 694 interface eth1 - Status: 1
Jan 24 01:30:24 db2 heartbeat: [29856]: ERROR: make_io_childpair: cannot open bcast eth1
Jan 24 01:30:25 db2 heartbeat: [29859]: CRIT: Emergency Shutdown: Master Control process died.
Jan 24 01:30:25 db2 heartbeat: [29859]: CRIT: Killing pid 29856 with SIGTERM
Jan 24 01:30:25 db2 heartbeat: [29859]: CRIT: Emergency Shutdown(MCP dead): Killing ourselves.

 

Solution:

Port 694 is already in use; check who holds it:

[root@db1 ~]# netstat -nlp | grep 694
udp        0      0 0.0.0.0:694                 0.0.0.0:*                               1367/rpcbind        
udp        0      0 :::694                      :::*                                    1367/rpcbind

Switch to a different UDP port, e.g. specify udpport 692 in ha.cf, as sketched below.
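
A minimal ha.cf sketch for this change (the udpport directive should come before the bcast/ucast line it applies to):

# vim /etc/ha.d/ha.cf
udpport 692
bcast eth1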

8. References

The pgsql resource agent script:

https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/pgsql

 

Usage guide for the script:

https://github.com/t-matsuo/resource-agents/wiki/Resource-Agent-for-PostgreSQL-9.1-streaming-replication

 

The crm_resource command:

http://www.novell.com/zh-cn/documentation/sle_ha/book_sleha/data/man_crmresource.html

 

The crm_failcount command:

http://www.novell.com/zh-cn/documentation/sle_ha/book_sleha/data/man_crmfailcount.html
