Two virtual machines were created in VirtualBox for this experiment.

Node servers: 2

Server 1
    OS:        CentOS release 6.7 (Final)
    Hostname:  ha1
    Role:      master node
    IP:        192.168.100.151
    Database:  postgresql-9.4.5
    Slony:     slony1-2.2.4

Server 2
    OS:        CentOS release 6.7 (Final)
    Hostname:  ha2
    Role:      slave node
    IP:        192.168.100.152
    Database:  postgresql-9.4.5
    Slony:     slony1-2.2.4
The following configuration must be performed on both nodes.
As root, edit /etc/hosts so that it reads:
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1             localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.151 ha1
192.168.100.152 ha2
As root, edit the firewall configuration file /etc/sysconfig/iptables to allow remote access to PostgreSQL's port 5432. The final file contents:
# Generated by iptables-save v1.4.7 on Tue Dec 29 10:45:17 2015
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [26:2536]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5432 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Tue Dec 29 10:45:17 2015
Note:
The new rule must be placed above the two rules -A INPUT -j REJECT --reject-with icmp-host-prohibited and -A FORWARD -j REJECT --reject-with icmp-host-prohibited, or it will not take effect.
After editing, restart the iptables service.
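On CentOS 6 the restart and a quick check of the new rule look like this (transcript; requires root, and the grep output depends on your ruleset):

```
[root@ha1 ~]# service iptables restart
[root@ha1 ~]# iptables -L INPUT -n | grep 5432
```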
As root, create the corresponding operating-system user, the database directory, and the Slony directories, and set their ownership.
[root@ha1 ~]# groupadd postgres
[root@ha1 ~]# useradd -g postgres postgres
[root@ha1 ~]# passwd postgres
[root@ha1 ~]# mkdir -p /usr/local/pg945/data
[root@ha1 ~]# mkdir -p /usr/local/slony/log
[root@ha1 ~]# mkdir -p /usr/local/slony/archive
[root@ha1 ~]# chown -R postgres:postgres /usr/local/pg945
[root@ha1 ~]# chown -R postgres:postgres /usr/local/slony
Configure the postgres user's environment variables as follows:
export PGBASE=/usr/local/pg945
export PGDATA=$PGBASE/data
export PGUSER=postgres
export PGPORT=5432
export PATH=$PATH:$HOME/bin:$PGBASE/bin:/usr/local/slony/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PGBASE/lib
The following configuration must be performed on both nodes.
Place the downloaded PostgreSQL source package in /home/postgres, then unpack and install it as the postgres user.
The installation directory is /usr/local/pg945; the detailed steps are omitted.
After installation, configure the database to allow remote access; the details are omitted here.
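For reference, a minimal sketch of the two settings involved in allowing remote access (standard postgresql.conf and pg_hba.conf settings; the subnet is this lab's 192.168.100.0/24). To stay self-contained, the example writes the lines into a temporary directory rather than the live $PGDATA:

```shell
# Demonstrates the remote-access settings; writes to a temp dir, not the real cluster.
DEMO=$(mktemp -d)

# postgresql.conf: listen on all interfaces instead of localhost only
cat > "$DEMO/postgresql.conf" <<'EOF'
listen_addresses = '*'
port = 5432
EOF

# pg_hba.conf: allow password (md5) logins from the lab subnet
cat > "$DEMO/pg_hba.conf" <<'EOF'
host    all    all    192.168.100.0/24    md5
EOF

grep 'listen_addresses' "$DEMO/postgresql.conf"
```

In the real setup these lines go into $PGDATA/postgresql.conf and $PGDATA/pg_hba.conf, followed by a server restart.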
Connect to the database and create the test objects:
[postgres@ha1 ~]$ psql
psql (9.4.5)
Type "help" for help.
postgres=# create user slony superuser password '123456';
postgres=# create database slony owner slony;
postgres=# \c slony slony
slony=# create schema slony authorization slony;
slony=# create table tb1 (id int primary key, name varchar);
Slony must be compiled and installed on both nodes.
Place the downloaded Slony source package in /home/postgres, then unpack and install it as the postgres user. The configure step is:
[postgres@ha1 ~]$ ./configure --prefix=/usr/local/slony --with-perltools --with-pgconfigdir=/usr/local/pg945/bin
The remaining steps are omitted.
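For completeness, the usual steps after configure are (transcript; the source directory name below is assumed from the tarball version):

```
[postgres@ha1 slony1-2.2.4]$ gmake
[postgres@ha1 slony1-2.2.4]$ gmake install
```

On CentOS, make is GNU make, so plain make works as well.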
A configuration file must be created on both nodes.
Create the configuration file slon.conf in /usr/local/slony/etc; its two most essential parameters are:
cluster_name='slony'
conn_info='host=localhost port=5432 user=slony'
See the appendix for the full file.
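With slon.conf in place, the slon daemon can be started on each node. Since cluster_name and conn_info are set in the file, pointing slon at it is sufficient (this is what the init script in the appendix automates; the log path follows this setup's conventions):

```
[postgres@ha1 ~]$ slon -f /usr/local/slony/etc/slon.conf >> /usr/local/slony/log/slony-`date +%Y-%m-%d`.log 2>&1 &
```

Run the same command on ha2.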
The steps below follow the order in which resources are added in Slony; the complete script is given in the appendix.
The scripts only need to be run on the master node.
The cluster can be thought of as a database that stores the configuration the cluster needs while running: nodes, listens, database objects, and so on.
Combining the manual with repeated testing shows that:
a) a Slony deployment has exactly one cluster, defined by cluster name
b) init cluster requires an admin conninfo to be specified first
c) the node named by that admin conninfo becomes the master node
d) only the master's conninfo must be defined; the slave's can be omitted
The slonik definition is:
cluster_name=slony
master_conninfo="host=ha1 dbname=slony user=slony"
slave_conninfo="host=ha2 dbname=slony user=slony"

slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
init cluster (id = 1, comment = 'Master Node');
EOF
The store node command defines a node and saves it in the cluster configuration (the _$cluster_name.sl_node table).
This step also creates the _$cluster_name schema on the new node, which is why, as noted above, init cluster needs only the master's conninfo and not the slave's.
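Once store node succeeds, the registration can be inspected in the configuration schema (the schema name follows the cluster name slony used here):

```
slony=# select no_id, no_comment from _slony.sl_node;
```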
Log shipping was added in version 1.2.11, so the spoolnode parameter no longer needs to be configured.
The slonik script is:
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store node (id = 2,comment = 'slave node',event node = 1);
EOF
Repeated testing shows that:
a) store node must specify an event node
b) the event node must be a node that already exists
c) the admin conninfo must be declared before store node
d) since the event node given here is 1, node 1's admin conninfo must also be declared
Communication paths are stored in _$cluster_name.sl_path. Defining a node does not store its connection information in the cluster; a separate store path command is needed to do that.
The store path command writes each node's access information into the cluster. As with store node, the admin conninfo of the nodes involved must be declared first.
The slonik definition is:
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store path (server = 1,client = 2,conninfo = '$master_conninfo');
store path (server = 2,client = 1,conninfo = '$slave_conninfo');
EOF
This is done with the store listen command. If the corresponding listen entry is already stored in sl_listen, a wait for event occurs.
Moreover, store listen statements must be executed one at a time; several cannot be run together.
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store listen (origin = 1, provider = 1, receiver = 2);
EOF

slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store listen (origin = 2, provider = 2, receiver = 1);
EOF
The set is the smallest unit in Slony; it stores the database objects to be synchronized.
Set (origin, the sending side) information is stored in the _$cluster_name.sl_set system table.
The slonik configuration is:
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
create set (id = 1,origin = 1,comment = '1to2');
EOF
After the origin set is created, the tables that take part in replication must be added to it.
The slonik configuration is:
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
set add table (set id = 1,origin = 1,fully qualified name='slony.tb1',comment='1to2');
EOF
Subscription (the receiving side) information is stored in the _$cluster_name.sl_subscribe system table.
The slonik configuration is:
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
subscribe set (id = 1,provider = 1,receiver = 2,forward = no,omit copy = no);
EOF
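Once subscribe set completes and the initial copy has finished, replication can be verified end to end by writing on ha1 and reading on ha2, using the tb1 table created earlier (transcript):

```
[postgres@ha1 ~]$ psql -d slony -U slony -c "insert into slony.tb1 values (1,'test');"
[postgres@ha2 ~]$ psql -d slony -U slony -c "select * from slony.tb1;"
```

The row inserted on the master should appear in the query on the slave once the sync interval has elapsed.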
#! /bin/sh
# chkconfig: 2345 64 36
# This is an example of a start/stop script
# if you have chkconfig, simply:
# chkconfig --add postgresql

prefix=/usr/local/pg945
PGDATA="/usr/local/pg945/data"
PGUSER=postgres
PGLOG="$PGDATA/pg_log/postgres-`date +%Y-%m-%d`.log"
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON="$prefix/bin/postmaster"
PGCTL="$prefix/bin/pg_ctl"

set -e

test -x $DAEMON || {
    echo "$DAEMON not found"
    if [ "$1" = "stop" ]
    then exit 0
    else exit 5
    fi
}

case $1 in
  start)
    echo -n "Starting PostgreSQL: "
    test x"$OOM_SCORE_ADJ" != x && echo "$OOM_SCORE_ADJ" > /proc/self/oom_score_adj
    test x"$OOM_ADJ" != x && echo "$OOM_ADJ" > /proc/self/oom_adj
    su - $PGUSER -c "$DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1
    echo "ok"
    ;;
  stop)
    echo -n "Stopping PostgreSQL: "
    su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s -m fast"
    echo "ok"
    ;;
  restart)
    echo -n "Restarting PostgreSQL: "
    su - $PGUSER -c "$PGCTL stop -D '$PGDATA' -s -m fast -w"
    test x"$OOM_SCORE_ADJ" != x && echo "$OOM_SCORE_ADJ" > /proc/self/oom_score_adj
    test x"$OOM_ADJ" != x && echo "$OOM_ADJ" > /proc/self/oom_adj
    su - $PGUSER -c "$DAEMON -D '$PGDATA' &" >>$PGLOG 2>&1
    echo "ok"
    ;;
  reload)
    echo -n "Reload PostgreSQL: "
    su - $PGUSER -c "$PGCTL reload -D '$PGDATA' -s"
    echo "ok"
    ;;
  status)
    su - $PGUSER -c "$PGCTL status -D '$PGDATA'"
    ;;
  *)
    # Print help
    echo "Usage: $0 {start|stop|restart|reload|status}" 1>&2
    exit 1
    ;;
esac
exit 0
#!/bin/sh
# chkconfig: - 98 02
# description: Starts and stops the Slon daemon that handles Slony-I replication.

if [ -r /etc/sysconfig/slony1 ]; then
    . /etc/sysconfig/slony1
fi

# Source function library.
INITD=/etc/rc.d/init.d
. $INITD/functions

# Get function listing for cross-distribution logic.
TYPESET=`typeset -f|grep "declare"`

# Get config.
. /etc/sysconfig/network

# For SELinux we need to use 'runuser' not 'su'
if [ -x /sbin/runuser ]
then
    SU=runuser
else
    SU=su
fi

# Check that networking is up.
# We need it for slon
[ "${NETWORKING}" = "no" ] && exit 0

# Find the name of the script
NAME=`basename $0`
if [ ${NAME:0:1} = "S" -o ${NAME:0:1} = "K" ]
then
    NAME=${NAME:3}
fi

# Set defaults for configuration variables
SLONENGINE=/usr/local/slony/bin
SLONDAEMON=$SLONENGINE/slon
SLONCONF=/usr/local/slony/etc/slon.conf
SLONPID=/usr/local/slony/slon.pid
SLONLOG=/usr/local/slony/log/slony-`date +%Y-%m-%d`.log

test -x $SLONDAEMON || exit 5

script_result=0
cluster_name=slony
conn_info="\"host=localhost port=5432 user=slony\""

start(){
    SLON_START=$"Starting ${NAME} service: "
    echo -n "$SLON_START"
    $SU -l postgres -c "$SLONDAEMON $cluster_name $conn_info -f $SLONCONF &" >> "$SLONLOG" 2>&1 < /dev/null
    #$SU -l postgres -c "$SLONDAEMON -f $SLONCONF &" >> "$SLONLOG" 2>&1 < /dev/null
    sleep 2
    pid=`pidof -s "$SLONDAEMON"`
    if [ $pid ]
    then
        success "$SLON_START"
        touch /usr/local/slony/${NAME}
        echo
    else
        failure "$SLON_START"
        echo
        script_result=1
    fi
}

stop(){
    echo -n $"Stopping ${NAME} service: "
    if [ $UID -ne 0 ]; then
        RETVAL=1
        failure
    else
        killproc /usr/local/slony/bin/slon
        RETVAL=$?
        [ $RETVAL -eq 0 ] && rm -f /usr/local/slony/${NAME}
    fi
    echo
    return $RETVAL
}

restart(){
    stop
    start
}

condrestart(){
    [ -e /usr/local/slony/${NAME} ] && restart
}

condstop(){
    [ -e /usr/local/slony/${NAME} ] && stop
}

# See how we were called.
case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status slon
    script_result=$?
    ;;
  restart)
    restart
    ;;
  condrestart)
    condrestart
    ;;
  condstop)
    condstop
    ;;
  *)
    echo $"Usage: $0 {start|stop|status|restart|condrestart|condstop}"
    exit 1
esac
exit $script_result
#!/bin/sh
export PGBASE=/usr/local/pg945
export PGDATA=$PGBASE/data
export PGUSER=postgres
export PGPORT=5432
export PATH=$PATH:$PGBASE/bin:$HOME/bin:/usr/local/slony/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PGBASE/lib
export SLON_CONF=/usr/local/slony/etc/slon.conf

cluster_name=slony
master_conninfo="host=ha1 dbname=slony user=slony"
slave_conninfo="host=ha2 dbname=slony user=slony"
cluster_log=/usr/local/slony/log

function cluster {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
init cluster (id = 1,comment = 'master');
EOF
}

function node {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store node (id = 2,comment = 'slave',event node = 1);
EOF
}

function path {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store path (server = 1,client = 2,conninfo = '$master_conninfo');
store path (server = 2,client = 1,conninfo = '$slave_conninfo');
EOF
}

function listen {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store listen (origin = 1, provider = 1, receiver = 2);
EOF
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
store listen (origin = 2, provider = 2, receiver = 1);
EOF
}

function set_origin {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
create set (id = 1,origin = 1,comment = '1to2');
EOF
}

function table {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
set add table (set id = 1,origin = 1,fully qualified name='slony.tb1',comment='ha1toha2');
EOF
}

function set_receive {
slonik <<EOF
cluster name = $cluster_name;
node 1 admin conninfo = '$master_conninfo';
node 2 admin conninfo = '$slave_conninfo';
subscribe set (id = 1,provider = 1,receiver = 2,forward = no,omit copy = no);
EOF
}

cluster
node
path
listen
set_origin
table
set_receive