The industry has some basic requirements for the high availability of a system; simply put, they can be summarized as follows.
System availability is usually measured in a number of "nines", that is, the percentage of total time for which the system is guaranteed to be available. For example, to reach 99.99% availability, the system's total downtime over a whole year must not exceed roughly 52 minutes.
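As a quick sanity check of that number, here is a one-line calculation (my own illustration, not part of the setup itself):

```bash
# minutes in a year multiplied by the allowed downtime ratio (1 - 99.99%)
awk 'BEGIN { print 365 * 24 * 60 * (1 - 0.9999) }'
# prints 52.56, i.e. roughly 52 minutes of allowed downtime per year
```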
Since we want to build a highly available architecture, what exactly should that architecture look like? It can be simplified into the diagram below.
Because my machine resources are limited, I build the high-availability environment on the servers listed below; you can extend it to more servers by following this article, and the steps are exactly the same.
Host name | IP address | Installed services |
---|---|---|
binghe151 | 192.168.175.151 | Mycat、Zookeeper、MySQL、HAProxy、Keepalived、Xinetd |
binghe152 | 192.168.175.152 | Zookeeper、MySQL |
binghe153 | 192.168.175.153 | Zookeeper、MySQL |
binghe154 | 192.168.175.154 | Mycat、MySQL、HAProxy、Keepalived、Xinetd |
binghe155 | 192.168.175.155 | MySQL |
Note: HAProxy and Keepalived are best deployed on the same servers as Mycat.
For the MySQL installation itself, you can follow the 【冰河技術】 WeChat official account and refer to the article《MySQL之——源碼編譯MySQL8.x+升級gcc+升級cmake(親測完整版)》.
Since both Mycat and Zookeeper need a JDK to run, we have to install a JDK on every server.
Here I take installing the JDK on the binghe151 server as an example; the installation on the other servers is the same as on binghe151. The steps are as follows.
(1) Download JDK 1.8 from the official JDK site: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
Note: the JDK package I downloaded is jdk-8u212-linux-x64.tar.gz; if the JDK version has been updated, just download the corresponding version.
(2) Upload the downloaded jdk-8u212-linux-x64.tar.gz package to the /usr/local/src directory on the binghe151 server.
(3) Extract the jdk-8u212-linux-x64.tar.gz file, as shown below.
tar -zxvf jdk-8u212-linux-x64.tar.gz
(4) Move the extracted jdk1.8.0_212 directory to the /usr/local directory on the binghe151 server, as shown below.
mv jdk1.8.0_212/ /usr/local/
(5) Configure the JDK system environment variables, as shown below.
vim /etc/profile

Add the following lines to the file:

```bash
JAVA_HOME=/usr/local/jdk1.8.0_212
CLASS_PATH=.:$JAVA_HOME/lib
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASS_PATH PATH
```
Make the environment variables take effect, as shown below.
source /etc/profile
(6) Check the JDK version, as shown below.
```bash
[root@binghe151 ~]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)
```
The output shows the JDK version information correctly, so the JDK was installed successfully.
Download the Mycat 1.6.7.4 Release package, extract it to the /usr/local/mycat directory on the server, configure Mycat's system environment variables, and then edit Mycat's configuration files. The final contents of schema.xml, server.xml, rule.xml and sequence_db_conf.properties are shown below, in that order.
schema.xml:

```xml
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="shop" checkSQLschema="false" sqlMaxLimit="1000">
        <!--<table name="order_master" primaryKey="order_id" dataNode = "ordb"/>-->
        <table name="order_master" primaryKey="order_id" dataNode = "orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" autoIncrement="true">
            <childTable name="order_detail" primaryKey="order_detail_id" joinKey="order_id" parentKey="order_id" autoIncrement="true"/>
        </table>
        <table name="order_cart" primaryKey="cart_id" dataNode = "ordb"/>
        <table name="order_customer_addr" primaryKey="customer_addr_id" dataNode = "ordb"/>
        <table name="region_info" primaryKey="region_id" dataNode = "ordb,prodb,custdb" type="global"/>
        <table name="serial" primaryKey="id" dataNode = "ordb"/>
        <table name="shipping_info" primaryKey="ship_id" dataNode = "ordb"/>
        <table name="warehouse_info" primaryKey="w_id" dataNode = "ordb"/>
        <table name="warehouse_proudct" primaryKey="wp_id" dataNode = "ordb"/>
        <table name="product_brand_info" primaryKey="brand_id" dataNode = "prodb"/>
        <table name="product_category" primaryKey="category_id" dataNode = "prodb"/>
        <table name="product_comment" primaryKey="comment_id" dataNode = "prodb"/>
        <table name="product_info" primaryKey="product_id" dataNode = "prodb"/>
        <table name="product_pic_info" primaryKey="product_pic_id" dataNode = "prodb"/>
        <table name="product_supplier_info" primaryKey="supplier_id" dataNode = "prodb"/>
        <table name="customer_balance_log" primaryKey="balance_id" dataNode = "custdb"/>
        <table name="customer_inf" primaryKey="customer_inf_id" dataNode = "custdb"/>
        <table name="customer_level_inf" primaryKey="customer_level" dataNode = "custdb"/>
        <table name="customer_login" primaryKey="customer_id" dataNode = "custdb"/>
        <table name="customer_login_log" primaryKey="login_id" dataNode = "custdb"/>
        <table name="customer_point_log" primaryKey="point_id" dataNode = "custdb"/>
    </schema>

    <dataNode name="mycat" dataHost="binghe151" database="mycat" />
    <dataNode name="ordb" dataHost="binghe152" database="order_db" />
    <dataNode name="prodb" dataHost="binghe153" database="product_db" />
    <dataNode name="custdb" dataHost="binghe154" database="customer_db" />
    <dataNode name="orderdb01" dataHost="binghe152" database="orderdb01" />
    <dataNode name="orderdb02" dataHost="binghe152" database="orderdb02" />
    <dataNode name="orderdb03" dataHost="binghe153" database="orderdb03" />
    <dataNode name="orderdb04" dataHost="binghe153" database="orderdb04" />

    <dataHost name="binghe151" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe51" url="192.168.175.151:3306" user="mycat" password="mycat"/>
    </dataHost>
    <dataHost name="binghe152" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe52" url="192.168.175.152:3306" user="mycat" password="mycat"/>
    </dataHost>
    <dataHost name="binghe153" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe53" url="192.168.175.153:3306" user="mycat" password="mycat"/>
    </dataHost>
    <dataHost name="binghe154" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe54" url="192.168.175.154:3306" user="mycat" password="mycat"/>
    </dataHost>
</mycat:schema>
```
server.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
    <system>
        <property name="useHandshakeV10">1</property>
        <property name="defaultSqlParser">druidparser</property>
        <property name="serverPort">3307</property>
        <property name="managerPort">3308</property>
        <property name="nonePasswordLogin">0</property>
        <property name="bindIp">0.0.0.0</property>
        <property name="charset">utf8mb4</property>
        <property name="frontWriteQueueSize">2048</property>
        <property name="txIsolation">2</property>
        <property name="processors">2</property>
        <property name="idleTimeout">1800000</property>
        <property name="sqlExecuteTimeout">300</property>
        <property name="useSqlStat">0</property>
        <property name="useGlobleTableCheck">0</property>
        <property name="sequenceHandlerType">1</property>
        <property name="defaultMaxLimit">1000</property>
        <property name="maxPacketSize">104857600</property>
        <property name="sqlInterceptor">
            io.mycat.server.interceptor.impl.StatisticsSqlInterceptor
        </property>
        <property name="sqlInterceptorType">
            UPDATE,DELETE,INSERT
        </property>
        <property name="sqlInterceptorFile">/tmp/sql.txt</property>
    </system>
    <firewall>
        <whitehost>
            <host user="mycat" host="192.168.175.151"></host>
        </whitehost>
        <blacklist check="true">
            <property name="noneBaseStatementAllow">true</property>
            <property name="deleteWhereNoneCheck">true</property>
        </blacklist>
    </firewall>
    <user name="mycat" defaultAccount="true">
        <property name="usingDecrypt">1</property>
        <property name="password">cTwf23RrpBCEmalp/nx0BAKenNhvNs2NSr9nYiMzHADeEDEfwVWlI6hBDccJjNBJqJxnunHFp5ae63PPnMfGYA==</property>
        <property name="schemas">shop</property>
    </user>
</mycat:server>
```
rule.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:rule SYSTEM "rule.dtd">
<mycat:rule xmlns:mycat="http://io.mycat/">
    <tableRule name="order_master">
        <rule>
            <columns>customer_id</columns>
            <algorithm>mod-long</algorithm>
        </rule>
    </tableRule>
    <function name="mod-long" class="io.mycat.route.function.PartitionByMod">
        <property name="count">4</property>
    </function>
</mycat:rule>
```
sequence_db_conf.properties:

```properties
#sequence stored in datanode
GLOBAL=mycat
ORDER_MASTER=mycat
ORDER_DETAIL=mycat
```
The Mycat configuration above is only for reference; you don't have to configure it exactly the way I do, just adapt it to your own business needs. The focus of this article is building a highly available Mycat environment.
In MySQL, create the account that Mycat uses to connect to MySQL, as shown below.
```sql
CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
```
After the JDK is installed and configured, the next step is to set up the Zookeeper cluster. According to the server plan, the Zookeeper cluster is deployed on the binghe151, binghe152 and binghe153 servers.
1. Download Zookeeper
Download the Zookeeper package from the Apache mirror at https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/, as shown in the figure below.
You can also download zookeeper-3.5.5 directly on the binghe151 server with the following command.
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz
Running the above command downloads the apache-zookeeper-3.5.5-bin.tar.gz package straight onto the binghe151 server.
2. Install and configure Zookeeper
Note: steps (1), (2) and (3) are all performed on the binghe151 server.
(1) Extract the Zookeeper package
On the binghe151 server, run the following commands to extract Zookeeper into the /usr/local/ directory and rename the Zookeeper directory to zookeeper-3.5.5.
```bash
tar -zxvf apache-zookeeper-3.5.5-bin.tar.gz
mv apache-zookeeper-3.5.5-bin zookeeper-3.5.5
```
(2) Configure the Zookeeper system environment variables
Likewise, the Zookeeper environment variables need to be added to the /etc/profile file, as follows:
```bash
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
PATH=$ZOOKEEPER_HOME/bin:$PATH
export ZOOKEEPER_HOME PATH
```
Combined with the JDK environment variables configured earlier, the overall configuration in /etc/profile looks like this:
```bash
MYSQL_HOME=/usr/local/mysql
JAVA_HOME=/usr/local/jdk1.8.0_212
MYCAT_HOME=/usr/local/mycat
ZOOKEEPER_HOME=/usr/local/zookeeper-3.5.5
MPC_HOME=/usr/local/mpc-1.1.0
GMP_HOME=/usr/local/gmp-6.1.2
MPFR_HOME=/usr/local/mpfr-4.0.2
CLASS_PATH=.:$JAVA_HOME/lib
LD_LIBRARY_PATH=$MPC_HOME/lib:$GMP_HOME/lib:$MPFR_HOME/lib:$LD_LIBRARY_PATH
PATH=$MYSQL_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$MYCAT_HOME/bin:$PATH
export JAVA_HOME ZOOKEEPER_HOME MYCAT_HOME CLASS_PATH MYSQL_HOME MPC_HOME GMP_HOME MPFR_HOME LD_LIBRARY_PATH PATH
```
(3) Configure Zookeeper
First, rename the zoo_sample.cfg file in the $ZOOKEEPER_HOME/conf directory ($ZOOKEEPER_HOME is the Zookeeper installation directory) to zoo.cfg. The commands are as follows:
```bash
cd /usr/local/zookeeper-3.5.5/conf/
mv zoo_sample.cfg zoo.cfg
```
Next, edit the zoo.cfg file. The modified content is as follows:
```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper-3.5.5/data
dataLogDir=/usr/local/zookeeper-3.5.5/dataLog
clientPort=2181
server.1=binghe151:2888:3888
server.2=binghe152:2888:3888
server.3=binghe153:2888:3888
```
Create the data and dataLog directories under the Zookeeper installation directory.
```bash
mkdir -p /usr/local/zookeeper-3.5.5/data
mkdir -p /usr/local/zookeeper-3.5.5/dataLog
```
Switch to the newly created data directory and create a myid file whose content is the number 1, as shown below:
```bash
cd /usr/local/zookeeper-3.5.5/data
vim myid
```
Write the number 1 into the myid file.
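If you prefer not to open vim, the same thing can be done with a single echo command (the same approach is used for the other nodes later on):

```bash
echo "1" > /usr/local/zookeeper-3.5.5/data/myid
```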
3. Copy Zookeeper and the environment variable file to the other servers
Note: steps (1) and (2) are performed on the binghe151 server.
(1) Copy Zookeeper to the other servers
According to the server plan, copy Zookeeper to the binghe152 and binghe153 servers, as follows:
```bash
scp -r /usr/local/zookeeper-3.5.5/ binghe152:/usr/local/
scp -r /usr/local/zookeeper-3.5.5/ binghe153:/usr/local/
```
(2) Copy the environment variable file to the other servers
According to the server plan, copy the /etc/profile environment variable file to the binghe152 and binghe153 servers, as follows:
```bash
scp /etc/profile binghe152:/etc/
scp /etc/profile binghe153:/etc/
```
上述操做可能會要求輸入密碼,根據提示輸入密碼便可。
4.修改其餘服務器上的myid文件
修改binghe152服務器上Zookeeper的myid文件內容爲數字2,同時修改binghe153服務器上Zookeeper的myid文件內容爲數字3。具體以下:
在binghe152服務器上執行以下操做:
```bash
echo "2" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
2
```
Run the following on the binghe153 server:
```bash
echo "3" > /usr/local/zookeeper-3.5.5/data/myid
cat /usr/local/zookeeper-3.5.5/data/myid
3
```
5. Make the environment variables take effect
Run the following on binghe151, binghe152 and binghe153 respectively to make the system environment variables take effect.
source /etc/profile
6. Start the Zookeeper cluster
Run the following on binghe151, binghe152 and binghe153 respectively to start the Zookeeper cluster.
zkServer.sh start
7. Check the status of the Zookeeper cluster
```bash
[root@binghe151 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
```

```bash
[root@binghe152 local]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
```

```bash
[root@binghe153 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.5.5/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
```
As you can see, the Zookeeper nodes on binghe151 and binghe153 are followers, while the node on binghe152 is the leader.
Note: initializing the Mycat data in Zookeeper is done on the binghe151 server, because we already installed Mycat on binghe151.
1. Inspect the initialization script
The bin directory under the Mycat installation directory provides an init_zk_data.sh script, as shown below.
```bash
[root@binghe151 ~]# ll /usr/local/mycat/bin/
total 384
-rwxr-xr-x 1 root root   3658 Feb 26 17:10 dataMigrate.sh
-rwxr-xr-x 1 root root   1272 Feb 26 17:10 init_zk_data.sh
-rwxr-xr-x 1 root root  15701 Feb 28 20:51 mycat
-rwxr-xr-x 1 root root   2986 Feb 26 17:10 rehash.sh
-rwxr-xr-x 1 root root   2526 Feb 26 17:10 startup_nowrap.sh
-rwxr-xr-x 1 root root 140198 Feb 28 20:51 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root  99401 Feb 28 20:51 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 Feb 28 20:51 wrapper-linux-x86-64
```
The init_zk_data.sh script initializes Mycat's configuration into Zookeeper: it reads the configuration files in the conf directory of the Mycat installation directory and writes them into the Zookeeper cluster.
2. Copy the Mycat configuration files
First, let's look at the files in the conf directory of the Mycat installation directory, as shown below.
```bash
[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload
```
Next, copy the schema.xml, server.xml, rule.xml and sequence_db_conf.properties files from the conf directory into the zkconf directory under conf, as shown below.
cp schema.xml server.xml rule.xml sequence_db_conf.properties zkconf/
3. Write the Mycat configuration into the Zookeeper cluster
Run the init_zk_data.sh script to initialize the configuration into the Zookeeper cluster, as shown below.
```bash
[root@binghe151 bin]# /usr/local/mycat/bin/init_zk_data.sh
o2020-03-08 20:03:13 INFO JAVA_CMD=/usr/local/jdk1.8.0_212/bin/java
o2020-03-08 20:03:13 INFO Start to initialize /mycat of ZooKeeper
o2020-03-08 20:03:14 INFO Done
```
The output above shows that Mycat successfully wrote its initial configuration into Zookeeper.
4. Verify that the Mycat configuration was written into Zookeeper
We can use Zookeeper's zkCli.sh client to log in to Zookeeper and verify whether Mycat's configuration was written into Zookeeper successfully.
First, log in to Zookeeper, as shown below.
```bash
[root@binghe151 ~]# zkCli.sh
Connecting to localhost:2181
################### N lines of output omitted ######################
Welcome to ZooKeeper!

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]
```
Next, check the mycat node on the Zookeeper command line, as shown below.
```bash
[zk: localhost:2181(CONNECTED) 0] ls /
[mycat, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
[zk: localhost:2181(CONNECTED) 2] ls /mycat/mycat-cluster-1
[cache, line, rules, schema, sequences, server]
[zk: localhost:2181(CONNECTED) 3]
```
As you can see, there are six child nodes under /mycat/mycat-cluster-1. Next, look at the information under the schema node, as shown below.
```bash
[zk: localhost:2181(CONNECTED) 3] ls /mycat/mycat-cluster-1/schema
[dataHost, dataNode, schema]
```
Next, let's look at the dataHost configuration, as shown below.
```bash
[zk: localhost:2181(CONNECTED) 4] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
```
The output above is hard to read, but you can see that it is JSON. After pretty-printing it, the result looks like this.
```json
[
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe151",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe51",
                "url": "192.168.175.151:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe152",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe52",
                "url": "192.168.175.152:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe153",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe53",
                "url": "192.168.175.153:3306",
                "password": "root",
                "user": "root"
            }
        ]
    },
    {
        "balance": 1,
        "maxCon": 1000,
        "minCon": 10,
        "name": "binghe154",
        "writeType": 0,
        "switchType": 1,
        "slaveThreshold": 100,
        "dbType": "mysql",
        "dbDriver": "native",
        "heartbeat": "select user()",
        "writeHost": [
            {
                "host": "binghe54",
                "url": "192.168.175.154:3306",
                "password": "root",
                "user": "root"
            }
        ]
    }
]
```
As you can see, the dataHost information we configured in Mycat's schema.xml file was written into Zookeeper successfully.
To verify that Mycat's configuration has also been synchronized to the other Zookeeper nodes, we can log in to Zookeeper on the binghe152 and binghe153 servers and check whether the Mycat configuration is there as well.
```bash
[root@binghe152 ~]# zkCli.sh
Connecting to localhost:2181
################# N lines of output omitted ################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
```
As you can see, the Mycat configuration was successfully synchronized to Zookeeper on the binghe152 server.
```bash
[root@binghe153 ~]# zkCli.sh
Connecting to localhost:2181
##################### N lines of output omitted #####################
[zk: localhost:2181(CONNECTED) 0] get /mycat/mycat-cluster-1/schema/dataHost
[{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe151","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe51","url":"192.168.175.151:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe152","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe52","url":"192.168.175.152:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe153","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe53","url":"192.168.175.153:3306","password":"root","user":"root"}]},{"balance":1,"maxCon":1000,"minCon":10,"name":"binghe154","writeType":0,"switchType":1,"slaveThreshold":100,"dbType":"mysql","dbDriver":"native","heartbeat":"select user()","writeHost":[{"host":"binghe54","url":"192.168.175.154:3306","password":"root","user":"root"}]}]
```
As you can see, the Mycat configuration was successfully synchronized to Zookeeper on the binghe153 server.
1. Configure Mycat on the binghe151 server
On the binghe151 server, go to the conf directory under the Mycat installation directory and look at the files there, as shown below.
```bash
[root@binghe151 ~]# cd /usr/local/mycat/conf/
[root@binghe151 conf]# ll
total 108
-rwxrwxrwx 1 root root   92 Feb 26 17:10 autopartition-long.txt
-rwxrwxrwx 1 root root   51 Feb 26 17:10 auto-sharding-long.txt
-rwxrwxrwx 1 root root   67 Feb 26 17:10 auto-sharding-rang-mod.txt
-rwxrwxrwx 1 root root  340 Feb 26 17:10 cacheservice.properties
-rwxrwxrwx 1 root root 3338 Feb 26 17:10 dbseq.sql
-rwxrwxrwx 1 root root 3532 Feb 26 17:10 dbseq - utf8mb4.sql
-rw-r--r-- 1 root root   86 Mar  1 22:37 dnindex.properties
-rwxrwxrwx 1 root root  446 Feb 26 17:10 ehcache.xml
-rwxrwxrwx 1 root root 2454 Feb 26 17:10 index_to_charset.properties
-rwxrwxrwx 1 root root 1285 Feb 26 17:10 log4j2.xml
-rwxrwxrwx 1 root root  183 Feb 26 17:10 migrateTables.properties
-rwxrwxrwx 1 root root  271 Feb 26 17:10 myid.properties
-rwxrwxrwx 1 root root   16 Feb 26 17:10 partition-hash-int.txt
-rwxrwxrwx 1 root root  108 Feb 26 17:10 partition-range-mod.txt
-rwxrwxrwx 1 root root  988 Mar  1 16:59 rule.xml
-rwxrwxrwx 1 root root 3883 Mar  3 23:59 schema.xml
-rwxrwxrwx 1 root root  440 Feb 26 17:10 sequence_conf.properties
-rwxrwxrwx 1 root root   84 Mar  3 23:52 sequence_db_conf.properties
-rwxrwxrwx 1 root root   29 Feb 26 17:10 sequence_distributed_conf.properties
-rwxrwxrwx 1 root root   28 Feb 26 17:10 sequence_http_conf.properties
-rwxrwxrwx 1 root root   53 Feb 26 17:10 sequence_time_conf.properties
-rwxrwxrwx 1 root root 2420 Mar  4 15:14 server.xml
-rwxrwxrwx 1 root root   18 Feb 26 17:10 sharding-by-enum.txt
-rwxrwxrwx 1 root root 4251 Feb 28 20:51 wrapper.conf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkconf
drwxrwxrwx 2 root root 4096 Feb 28 21:17 zkdownload
```
As you can see, there is a myid.properties file in Mycat's conf directory. Next, edit this file with vim, as shown below.
vim myid.properties
The content of the edited myid.properties file is as follows.
```properties
loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_151
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server booster ; booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1
```
A few of the important parameters: loadZk controls whether Mycat loads its configuration from Zookeeper; zkURL is the Zookeeper connection string; clusterId is the name of the Mycat cluster and corresponds to the node under /mycat in Zookeeper, as shown below; myid is the ID of the current Mycat node; clusterSize and clusterNodes describe how many Mycat nodes are in the cluster and what they are called.
```bash
[zk: localhost:2181(CONNECTED) 1] ls /mycat
[mycat-cluster-1]
```
2. Install a fresh Mycat on the binghe154 server
On the binghe154 server, download and install the same Mycat version as on binghe151 and extract it into the /usr/local/mycat directory on binghe154.
Alternatively, run the following command directly on the binghe151 server to copy the Mycat installation directory to the binghe154 server.
[root@binghe151 ~]# scp -r /usr/local/mycat binghe154:/usr/local
Note: don't forget to configure Mycat's system environment variables on the binghe154 server.
3. Modify the Mycat configuration on the binghe154 server
On the binghe154 server, modify the myid.properties file in the conf directory of the Mycat installation directory, as shown below.
vim /usr/local/mycat/conf/myid.properties
The content of the modified myid.properties file is as follows.
```properties
loadZk=true
zkURL=192.168.175.151:2181,192.168.175.152:2181,192.168.175.153:2181
clusterId=mycat-cluster-1
myid=mycat_154
clusterSize=2
clusterNodes=mycat_151,mycat_154
#server booster ; booster install on db same server,will reset all minCon to 2
type=server
boosterDataHosts=dataHost1
```
4. Restart Mycat
Restart Mycat on the binghe151 and binghe154 servers respectively, as shown below.
Note: restart the Mycat on binghe151 first, then the one on binghe154.
```bash
[root@binghe151 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
```

```bash
[root@binghe154 ~]# mycat restart
Stopping Mycat-server...
Stopped Mycat-server.
Starting Mycat-server...
```
Check Mycat's startup logs on binghe151 and binghe154, as shown below.
```
STATUS | wrapper  | 2020/03/08 21:08:15 | <-- Wrapper Stopped
STATUS | wrapper  | 2020/03/08 21:08:15 | --> Wrapper Started as Daemon
STATUS | wrapper  | 2020/03/08 21:08:15 | Launching a JVM...
INFO   | jvm 1    | 2020/03/08 21:08:16 | Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
INFO   | jvm 1    | 2020/03/08 21:08:16 |   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
INFO   | jvm 1    | 2020/03/08 21:08:16 |
INFO   | jvm 1    | 2020/03/08 21:08:28 | MyCAT Server startup successfully. see logs in logs/mycat.log
```
The log output shows that Mycat restarted successfully.
After restarting the Mycat on binghe151 first and then the one on binghe154, you will find that the schema.xml, server.xml, rule.xml and sequence_db_conf.properties files in the conf directory on binghe154 are identical to the Mycat configuration files on binghe151. This is the result of the Mycat on binghe154 reading its configuration from Zookeeper.
From now on, we only need to modify the Mycat configuration in Zookeeper and the changes are synchronized to the Mycat nodes automatically, which keeps the configuration of multiple Mycat nodes consistent.
Configure the virtual IP on the binghe151 and binghe154 servers respectively, as shown below.
```bash
ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1
```
After the virtual IP is configured, the result looks like the following, taking the binghe151 server as an example.
```bash
[root@binghe151 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45
          inet addr:192.168.175.151  Bcast:192.168.175.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe10:a145/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:116766 errors:0 dropped:0 overruns:0 frame:0
          TX packets:85230 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:25559422 (24.3 MiB)  TX bytes:55997016 (53.4 MiB)

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:10:A1:45
          inet addr:192.168.175.110  Bcast:192.168.175.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:51102 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51102 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2934009 (2.7 MiB)  TX bytes:2934009 (2.7 MiB)
```
Note: a VIP added on the command line disappears after the server is restarted, so it is best to put the VIP-creation commands into a script file, for example /usr/local/script/vip.sh, as shown below.
```bash
mkdir /usr/local/script
vim /usr/local/script/vip.sh
```
The content of the file is as follows.
```bash
ifconfig eth0:1 192.168.175.110 broadcast 192.168.175.255 netmask 255.255.255.0 up
route add -host 192.168.175.110 dev eth0:1
```
Next, add the /usr/local/script/vip.sh file to the server's boot sequence, as shown below.
echo /usr/local/script/vip.sh >> /etc/rc.d/rc.local
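One detail worth adding (my note, not part of the original steps): for the script to actually run at boot, both vip.sh and /etc/rc.d/rc.local usually have to be executable, and on newer CentOS releases rc.local is not executable by default. A quick fix looks like this:

```bash
chmod +x /usr/local/script/vip.sh
chmod +x /etc/rc.d/rc.local
```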
Enable kernel IP forwarding on the binghe151 and binghe154 servers by editing the /etc/sysctl.conf file, as shown below.
vim /etc/sysctl.conf
Find the following line.
net.ipv4.ip_forward = 0
Change it to the following.
net.ipv4.ip_forward = 1
Save and exit vim, then run the following command to make the configuration take effect.
sysctl -p
On the servers that run HAProxy, i.e. binghe151 and binghe154, we need to install the xinetd service in order to open port 48700.
(1) Run the following command to install the xinetd service.
yum install xinetd -y
(2) Edit the /etc/xinetd.conf file, as shown below.
vim /etc/xinetd.conf
Check whether the file contains the following configuration.
includedir /etc/xinetd.d
If /etc/xinetd.conf does not contain the configuration above, add it; if it is already there, leave the file unchanged.
(3) Create the /etc/xinetd.d directory, as shown below.
mkdir /etc/xinetd.d
Note: if the /etc/xinetd.d directory already exists, creating it reports the following error.
mkdir: cannot create directory `/etc/xinetd.d': File exists
You can safely ignore this error message.
(4) In the /etc/xinetd.d directory, add the configuration file mycat_status for the Mycat status-check service, as shown below.
touch /etc/xinetd.d/mycat_status
(5) Edit the mycat_status file, as shown below.
vim /etc/xinetd.d/mycat_status
The content of the edited mycat_status file is as follows.
```
service mycat_status
{
    flags           = REUSE
    socket_type     = stream
    port            = 48700
    wait            = no
    user            = root
    server          = /usr/local/bin/mycat_check.sh
    log_on_failure  += USERID
    disable         = no
}
```
A few of the xinetd parameters: port is the port the status service listens on (48700 here), server is the script that xinetd runs for every incoming connection, wait = no allows connections to be handled concurrently, and disable = no enables the service.
(6) Add the mycat_check.sh service script in the /usr/local/bin directory, as shown below.
touch /usr/local/bin/mycat_check.sh
(7) Edit the /usr/local/bin/mycat_check.sh file, as shown below.
vim /usr/local/bin/mycat_check.sh
The content of the edited file is as follows.
```bash
#!/bin/bash
mycat=`/usr/local/mycat/bin/mycat status | grep 'not running' | wc -l`
if [ "$mycat" = "0" ]; then
    /bin/echo -e "HTTP/1.1 200 OK\r\n"
else
    /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
    /usr/local/mycat/bin/mycat start
fi
```
Grant execute permission to the mycat_check.sh file, as shown below.
chmod a+x /usr/local/bin/mycat_check.sh
(8) Edit the /etc/services file, as shown below.
vim /etc/services
Add the following line at the end of the file.
mycat_status 48700/tcp # mycat_status
The port number here must be the same as the one configured in the /etc/xinetd.d/mycat_status file.
(9) Restart the xinetd service, as shown below.
service xinetd restart
(10) Check whether the mycat_status service started successfully, as shown below.
```bash
[root@binghe151 ~]# netstat -antup | grep 48700
tcp        0      0 :::48700        :::*        LISTEN      2776/xinetd
```

```bash
[root@binghe154 ~]# netstat -antup | grep 48700
tcp        0      0 :::48700        :::*        LISTEN      6654/xinetd
```
The results show that the mycat_status service started successfully on both servers.
At this point, xinetd is installed and configured, which means the Mycat status-check service has been set up successfully.
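As an optional sanity check (my own addition, not one of the original steps), you can hit port 48700 from the shell and confirm that the health check returns the status line HAProxy will rely on later. Assuming netcat is installed, it looks roughly like this:

```bash
# Connect to the Mycat status-check port; while Mycat is running,
# the service should answer with: HTTP/1.1 200 OK
nc 192.168.175.151 48700
```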
We install HAProxy directly on the binghe151 and binghe154 servers with the following command.
yum install haproxy -y
After the installation, HAProxy needs to be configured. Its configuration directory is /etc/haproxy; let's look at the files in this directory, as shown below.
```bash
[root@binghe151 ~]# ll /etc/haproxy/
total 4
-rw-r--r-- 1 root root 3142 Oct 21  2016 haproxy.cfg
```
There is a haproxy.cfg file in the /etc/haproxy/ directory. Next, we modify this haproxy.cfg file; the modified content is as follows.
```
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen admin_status
    bind 0.0.0.0:48800
    stats uri /admin-status
    stats auth admin:admin

listen allmycat_service
    bind 0.0.0.0:3366
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_151 192.168.175.151:3307 check port 48700 inter 5s rise 2 fall 3
    server mycat_154 192.168.175.154:3307 check port 48700 inter 5s rise 2 fall 3

listen allmycat_admin
    bind 0.0.0.0:3377
    mode tcp
    option tcplog
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www
    balance roundrobin
    server mycat_151 192.168.175.151:3308 check port 48700 inter 5s rise 2 fall 3
    server mycat_154 192.168.175.154:3308 check port 48700 inter 5s rise 2 fall 3
```
Next, start HAProxy on the binghe151 and binghe154 servers, as shown below.
haproxy -f /etc/haproxy/haproxy.cfg
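Optionally, you can confirm that HAProxy is up by querying its statistics page; this check itself is my addition, but the port, URI and credentials come from the admin_status section of the haproxy.cfg above:

```bash
curl -u admin:admin http://192.168.175.151:48800/admin-status
```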
Next, we use the mysql command to connect to Mycat through the virtual IP and the port that HAProxy listens on, as shown below.
```bash
[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
```
As you can see, the connection to Mycat succeeded.
1. Install and configure Keepalived
Install Keepalived on the binghe151 and binghe154 servers with the following command.
yum install keepalived -y
After the installation, a keepalived directory is created under /etc. Next, configure the keepalived.conf file in the /etc/keepalived directory, as shown below.
vim /etc/keepalived/keepalived.conf
keepalived.conf on binghe151:

```
! Configuration File for keepalived

vrrp_script chk_http_port {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.175.110 dev eth0 scope global
    }
}
```
keepalived.conf on binghe154:

```
! Configuration File for keepalived

vrrp_script chk_http_port {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_http_port
    }
    virtual_ipaddress {
        192.168.175.110 dev eth0 scope global
    }
}
```
2. Write the HAProxy health-check script
Next, create a check_haproxy.sh script in the /etc/keepalived directory on both binghe151 and binghe154. The script content is as follows.
```bash
#!/bin/bash
STARTHAPROXY="/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg"
STOPKEEPALIVED="/etc/init.d/keepalived stop"
#STOPKEEPALIVED="/usr/bin/systemctl stop keepalived"
LOGFILE="/var/log/keepalived-haproxy-state.log"
echo "[check_haproxy status]" >> $LOGFILE
A=`ps -C haproxy --no-header | wc -l`
echo "[check_haproxy status]" >> $LOGFILE
date >> $LOGFILE
if [ $A -eq 0 ]; then
    echo $STARTHAPROXY >> $LOGFILE
    $STARTHAPROXY >> $LOGFILE 2>&1
    sleep 5
fi
if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]; then
    exit 0
else
    exit 1
fi
```
Grant execute permission to the check_haproxy.sh script with the following command.
chmod a+x /etc/keepalived/check_haproxy.sh
3. Start Keepalived
With the configuration in place, we can start Keepalived on the binghe151 and binghe154 servers, as shown below.
/etc/init.d/keepalived start
Check whether Keepalived started successfully, as shown below.
```bash
[root@binghe151 ~]# ps -ef | grep keepalived
root       1221      1  0 20:06 ?        00:00:00 keepalived -D
root       1222   1221  0 20:06 ?        00:00:00 keepalived -D
root       1223   1221  0 20:06 ?        00:00:02 keepalived -D
root      93290   3787  0 21:42 pts/0    00:00:00 grep keepalived
```

```bash
[root@binghe154 ~]# ps -ef | grep keepalived
root       1224      1  0 20:06 ?        00:00:00 keepalived -D
root       1225   1224  0 20:06 ?        00:00:00 keepalived -D
root       1226   1224  0 20:06 ?        00:00:02 keepalived -D
root      94636   3798  0 21:43 pts/0    00:00:00 grep keepalived
```
As you can see, the Keepalived service started successfully on both servers.
4. Check the virtual IP bound by Keepalived
Next, check whether Keepalived on each of the two servers has bound the virtual IP.
```bash
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link
       valid_lft forever preferred_lft forever
```
The output contains the following line.
inet 192.168.175.110/32 scope global eth0
This means Keepalived on the binghe151 server has bound the virtual IP 192.168.175.110.
```bash
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever
```
As you can see, Keepalived on the binghe154 server has not bound the virtual IP.
5. Test the failover of the virtual IP
How do we test that the virtual IP fails over? First, stop Keepalived on the binghe151 server, as shown below.
/etc/init.d/keepalived stop
Next, check whether Keepalived on the binghe154 server has now bound the virtual IP, as shown below.
```bash
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever
```
As you can see, the output now contains the following line.
inet 192.168.175.110/32 scope global eth0
This means Keepalived on the binghe154 server has bound the virtual IP 192.168.175.110: the VIP has floated over to binghe154.
6. Keepalived on binghe151 takes the virtual IP back
Next, start Keepalived on the binghe151 server again, as shown below.
/etc/init.d/keepalived start
After it has started, check the virtual IP binding again, as shown below.
```bash
[root@binghe151 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:10:a1:45 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.151/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/32 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::20c:29ff:fe10:a145/64 scope link
       valid_lft forever preferred_lft forever
```

```bash
[root@binghe154 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:2a:75 brd ff:ff:ff:ff:ff:ff
    inet 192.168.175.154/24 brd 192.168.175.255 scope global eth0
    inet 192.168.175.110/24 brd 192.168.175.255 scope global secondary eth0:1
    inet6 fe80::250:56ff:fe22:2a75/64 scope link
       valid_lft forever preferred_lft forever
```
Because Keepalived on binghe151 is configured with a higher priority than Keepalived on binghe154, once Keepalived on binghe151 is started again it preempts the virtual IP.
Here, to keep things simple, I configure master-slave replication between the MySQL instances on binghe154 and binghe155; you can set up replication for the MySQL instances on the other servers in the same way as needed (note: I am using a one-master-one-slave setup here).
1. Edit the my.cnf files
my.cnf on binghe154:

```ini
server_id = 154
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
```
my.cnf on binghe155:

```ini
server_id = 155
log_bin = /data/mysql/log/bin_log/mysql-bin
binlog-ignore-db=mysql
binlog_format= mixed
sync_binlog=100
log_slave_updates = 1
binlog_cache_size=32m
max_binlog_cache_size=64m
max_binlog_size=512m
lower_case_table_names = 1
relay_log = /data/mysql/log/bin_log/relay-bin
relay_log_index = /data/mysql/log/bin_log/relay-bin.index
master_info_repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery
```
2. Synchronize the data between the two MySQL servers
The binghe154 server has only one database, customer_db; export it with the mysqldump command, as shown below.
```bash
[root@binghe154 ~]# mysqldump --master-data=2 --single-transaction -uroot -p --databases customer_db > binghe154.sql
Enter password:
```
Next, look at the binghe154.sql file.
more binghe154.sql
In the file, we can find the following information.
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=995;
This means the current MySQL binary log file is mysql-bin.000042 and the binary log position is 995.
Next, copy the binghe154.sql file to the binghe155 server, as shown below.
scp binghe154.sql 192.168.175.155:/usr/local/src
On the binghe155 server, import the binghe154.sql script into MySQL, as shown below.
mysql -uroot -p < /usr/local/src/binghe154.sql
This completes the data initialization.
3. Create the replication account
In MySQL on the binghe154 server, create the MySQL account used for replication, as shown below.
```sql
mysql> CREATE USER 'repl'@'192.168.175.%' IDENTIFIED BY 'repl123456';
Query OK, 0 rows affected (0.01 sec)

mysql> ALTER USER 'repl'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'repl123456';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.175.%';
Query OK, 0 rows affected (0.00 sec)

mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
```
4. Configure the replication link
Log in to MySQL on the binghe155 server and configure the replication link with the following statement.
```sql
mysql> change master to
    > master_host='192.168.175.154',
    > master_port=3306,
    > master_user='repl',
    > master_password='repl123456',
    > MASTER_LOG_FILE='mysql-bin.000042',
    > MASTER_LOG_POS=995;
```
Here, MASTER_LOG_FILE='mysql-bin.000042' and MASTER_LOG_POS=995 are exactly the values found in the binghe154.sql file.
5. Start the slave
Start the slave on the MySQL command line of the binghe155 server, as shown below.
mysql> start slave;
Check whether the slave started successfully, as shown below.
```sql
mysql> SHOW slave STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.175.154
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 1360
               Relay_Log_File: relay-bin.000003
                Relay_Log_Pos: 322
        Relay_Master_Log_File: mysql-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
#################Part of the output omitted##################
```
The result shows that both Slave_IO_Running and Slave_SQL_Running are Yes, which means the MySQL master-slave replication environment was set up successfully.
Finally, don't forget to create the MySQL user that Mycat uses to connect, in MySQL on the binghe155 server, as shown below.
```sql
CREATE USER 'mycat'@'192.168.175.%' IDENTIFIED BY 'mycat';
ALTER USER 'mycat'@'192.168.175.%' IDENTIFIED WITH mysql_native_password BY 'mycat';
GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON *.* TO 'mycat'@'192.168.175.%';
FLUSH PRIVILEGES;
```
Modify Mycat's schema.xml file so that the MySQL instances on binghe154 and binghe155 are used with read/write splitting. Modify the schema.xml file in the conf/zkconf directory under the Mycat installation directory; the modified schema.xml is as follows.
```xml
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <schema name="shop" checkSQLschema="true" sqlMaxLimit="1000">
        <table name="order_master" dataNode="orderdb01,orderdb02,orderdb03,orderdb04" rule="order_master" primaryKey="order_id" autoIncrement="true">
            <childTable name="order_detail" joinKey="order_id" parentKey="order_id" primaryKey="order_detail_id" autoIncrement="true"/>
        </table>
        <table name="order_cart" dataNode="ordb" primaryKey="cart_id"/>
        <table name="order_customer_addr" dataNode="ordb" primaryKey="customer_addr_id"/>
        <table name="region_info" dataNode="ordb,prodb,custdb" primaryKey="region_id" type="global"/>
        <table name="serial" dataNode="ordb" primaryKey="id"/>
        <table name="shipping_info" dataNode="ordb" primaryKey="ship_id"/>
        <table name="warehouse_info" dataNode="ordb" primaryKey="w_id"/>
        <table name="warehouse_proudct" dataNode="ordb" primaryKey="wp_id"/>
        <table name="product_brand_info" dataNode="prodb" primaryKey="brand_id"/>
        <table name="product_category" dataNode="prodb" primaryKey="category_id"/>
        <table name="product_comment" dataNode="prodb" primaryKey="comment_id"/>
        <table name="product_info" dataNode="prodb" primaryKey="product_id"/>
        <table name="product_pic_info" dataNode="prodb" primaryKey="product_pic_id"/>
        <table name="product_supplier_info" dataNode="prodb" primaryKey="supplier_id"/>
        <table name="customer_balance_log" dataNode="custdb" primaryKey="balance_id"/>
        <table name="customer_inf" dataNode="custdb" primaryKey="customer_inf_id"/>
        <table name="customer_level_inf" dataNode="custdb" primaryKey="customer_level"/>
        <table name="customer_login" dataNode="custdb" primaryKey="customer_id"/>
        <table name="customer_login_log" dataNode="custdb" primaryKey="login_id"/>
        <table name="customer_point_log" dataNode="custdb" primaryKey="point_id"/>
    </schema>

    <dataNode name="mycat" dataHost="binghe151" database="mycat"/>
    <dataNode name="ordb" dataHost="binghe152" database="order_db"/>
    <dataNode name="prodb" dataHost="binghe153" database="product_db"/>
    <dataNode name="custdb" dataHost="binghe154" database="customer_db"/>
    <dataNode name="orderdb01" dataHost="binghe152" database="orderdb01"/>
    <dataNode name="orderdb02" dataHost="binghe152" database="orderdb02"/>
    <dataNode name="orderdb03" dataHost="binghe153" database="orderdb03"/>
    <dataNode name="orderdb04" dataHost="binghe153" database="orderdb04"/>

    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe151" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe51" url="192.168.175.151:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe152" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe52" url="192.168.175.152:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe153" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe53" url="192.168.175.153:3306" password="mycat" user="mycat"/>
    </dataHost>
    <dataHost balance="1" maxCon="1000" minCon="10" name="binghe154" writeType="0" switchType="1" slaveThreshold="100" dbType="mysql" dbDriver="native">
        <heartbeat>select user()</heartbeat>
        <writeHost host="binghe54" url="192.168.175.154:3306" password="mycat" user="mycat">
            <readHost host="binghe55" url="192.168.175.155:3306" user="mycat" password="mycat"/>
        </writeHost>
        <writeHost host="binghe55" url="192.168.175.155:3306" password="mycat" user="mycat"/>
    </dataHost>
</mycat:schema>
```
Save and exit vim. Next, initialize the data in Zookeeper again, as shown below.
/usr/local/mycat/bin/init_zk_data.sh
Once the command above has run successfully, the configuration is automatically synchronized to the schema.xml file in the conf directory of the Mycat installation on both the binghe151 and binghe154 servers.
Next, restart the Mycat services on the binghe151 and binghe154 servers.
mycat restart
At this point, the whole high-availability environment is configured. Upper-layer applications connect to it through the IP and port that HAProxy listens on. For example, connecting to the environment with the mysql command looks like this.
```bash
[root@binghe151 ~]# mysql -umycat -pmycat -h192.168.175.110 -P3366 --default-auth=mysql_native_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.29-mycat-1.6.7.4-release-20200228205020 MyCat Server (OpenCloudDB)

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+----------+
| DATABASE |
+----------+
| shop     |
+----------+
1 row in set (0.10 sec)

mysql> use shop;
Database changed
mysql> show tables;
+-----------------------+
| Tables in shop        |
+-----------------------+
| customer_balance_log  |
| customer_inf          |
| customer_level_inf    |
| customer_login        |
| customer_login_log    |
| customer_point_log    |
| order_cart            |
| order_customer_addr   |
| order_detail          |
| order_master          |
| product_brand_info    |
| product_category      |
| product_comment       |
| product_info          |
| product_pic_info      |
| product_supplier_info |
| region_info           |
| serial                |
| shipping_info         |
| warehouse_info        |
| warehouse_proudct     |
+-----------------------+
21 rows in set (0.00 sec)
```
Here I have only extended the MySQL on binghe154 with a read/write-splitting setup; you can add master-slave replication and read/write splitting for the MySQL instances on the other servers in the same way. With that, the whole environment provides high availability for HAProxy, Mycat, MySQL, Zookeeper and Keepalived.
That's it for today. I'm 冰河 (Binghe); if you have any questions, leave a comment below, or add me on WeChat so we can talk tech and improve together.