MySQL Read/Write Splitting

Applications of Read/Write Splitting

Under a large volume of requests, a single database server cannot handle all the read and write operations.
    The solution is to configure multiple database servers with master-slave replication plus read/write splitting.

Advantages of Read/Write Splitting

Adds redundancy
    Adds machine processing capacity
    Read-heavy applications are the best fit for read/write splitting, because it keeps the pressure on the write server low while reads can tolerate a small delay.

Why Read/Write Splitting Improves Performance

Adding physical servers increases the capacity available to absorb the load
    The master handles only writes and the slaves only reads, which greatly reduces contention between exclusive (X) and shared (S) locks
    Slaves can be configured with the MyISAM engine to improve query performance and reduce system overhead
    A slave applying the master's data is not the same as writing directly on the master: the slave replays the binlog the master sends it. The most important difference is that the master ships the binlog asynchronously and the slave applies it asynchronously
    Read/write splitting suits workloads where reads far outnumber writes. On a single server, when there are many SELECTs, UPDATE and DELETE statements are blocked by the data those SELECTs are reading and have to wait for them to finish, so concurrency is poor.
    For applications where reads and writes are roughly balanced, deploy two masters replicating each other instead
    Slaves can be started with extra options to improve read performance, such as --skip-innodb, --skip-bdb, --low-priority-updates and --delay-key-write=ALL. These settings depend on the actual workload and may not always be applicable.
    Spreading out reads. Suppose there is 1 master and 3 slaves, ignoring the per-slave settings mentioned above, and that in one minute there are 10 writes and 150 reads. With 1 master and 3 slaves that makes 40 writes in total (every server replays each write) while the total number of reads is unchanged, so on average each server handles 10 writes and 50 reads (the master serves no reads). The write load is unchanged, but the reads are spread out considerably, so overall performance improves; spreading out the reads also indirectly improves write performance. In short, machines and bandwidth are traded for performance. The MySQL documentation gives a formula for this: see FAQ 6.9, "When and how much can MySQL replication improve the performance of my system?"
    The other major benefit of MySQL replication is redundancy and availability: when one database server goes down, service can be restored quickly by switching to a slave, so performance is not the only consideration; 1 master with 1 slave is also a perfectly valid setup.

How to Implement Read/Write Splitting

Dual-master or multi-master models do not need read/write splitting; they only need load balancing: haproxy, nginx, lvs, ...
    1) Implemented inside the application code
        The code routes statements by type (SELECT vs. INSERT/UPDATE/DELETE); this is currently the most widely used approach in production (a shell sketch follows this list).
        The advantage is better performance: it lives in the application code and needs no extra hardware.
        The drawback is that it takes developers to implement; operations staff cannot do it on their own.
    2) Implemented with a middle proxy layer
        The proxy sits between the clients and the servers; it receives a client request, inspects it, and forwards it to the appropriate backend database.
        Common proxy servers:
            proxysql, mysql-proxy, Amoeba
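    As a minimal illustration of the code-based approach, the routing decision can live entirely in the application. The sketch below is shell; the addresses and the dbadmin/centos account are borrowed from the ProxySQL example later in this article and are assumptions only:
        #!/bin/bash
        # Minimal sketch of application-side read/write routing (assumed hosts and credentials).
        MASTER=192.168.213.251    # all writes go to the master
        SLAVE=192.168.213.252     # all reads go to the slave
        run_read()  { mysql -h "$SLAVE"  -udbadmin -pcentos -e "$1"; }
        run_write() { mysql -h "$MASTER" -udbadmin -pcentos -e "$1"; }
        run_write "INSERT INTO hellodb.tbl1 VALUES (1,'zhang')"
        run_read  "SELECT * FROM hellodb.tbl1"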

Implementing Read/Write Splitting with ProxySQL

References:
        http://seanlook.com/2017/04/10/mysql-proxysql-install-config/
        http://seanlook.com/2017/04/17/mysql-proxysql-route-rw_split/
    proxysql
        http://www.proxysql.com/, 
            ProxySQL is a high performance, high availability, protocol-aware proxy for MySQL and its forks (like Percona Server and MariaDB).
        https://github.com/sysown/proxysql/releases
    master :192.168.213.251
    slave :192.168.213.252
    proxysql: 192.168.213.253, the front-end dispatcher
    1. Set up master-slave replication
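        Setting up replication itself is not shown here; a minimal sketch, assuming server_id and binary logging are already configured on the master and that 'repl' is a replication account created just for this (account name and binlog coordinates are illustrative):
        On the master (192.168.213.251):
            MariaDB [(none)]> grant replication slave on *.* to repl@'192.168.213.252' identified by 'centos';
            MariaDB [(none)]> show master status;   ## note the binlog file and position it reports
        On the slave (192.168.213.252):
            MariaDB [(none)]> change master to master_host='192.168.213.251',master_user='repl',master_password='centos',master_log_file='mysql-bin.000001',master_log_pos=245;   ## use the file/position from show master status
            MariaDB [(none)]> start slave;
            MariaDB [(none)]> show slave status\G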
    2. Create two accounts on the master: one for the front-end dispatcher to connect to the backend databases, and one for it to monitor the health of the backend servers
        MariaDB [(none)]> grant all on *.* to dbadmin@'192.168.%.%' identified by 'centos';   ## this user can also be used for client access
        MariaDB [(none)]> grant all on *.* to monitor@'192.168.%.%' identified by 'monitor';   ## be sure to create this account as well, otherwise the dispatcher cannot connect to the backend master/slave servers
    3. Download and install ProxySQL
        http://www.proxysql.com/
        yum install proxysql-*
    4. Configure the front-end dispatcher, proxysql
            4.1
        vim /etc/proxysql.cnf
            datadir="/var/lib/proxysql"
            admin_variables=
            {
                admin_credentials="admin:admin"
                mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"  ## ProxySQL's admin interface; connect to it to manage ProxySQL
            }
            mysql_variables=
            {
                threads=4
                max_connections=2048
                default_query_delay=0
                default_query_timeout=36000000
                have_compress=true
                poll_timeout=2000
                interfaces="0.0.0.0:3306;/tmp/mysql.sock"   ## change the port to 3306 so clients can connect with mysql directly, without specifying a port
                default_schema="information_schema"
                stacksize=1048576
                server_version="5.5.30"
                connect_timeout_server=3000
                monitor_username="monitor"  ## user ProxySQL uses to connect to the backend hosts and monitor their status
                monitor_password="monitor"  ## its password
                monitor_history=600000
                monitor_connect_interval=60000
                monitor_ping_interval=10000
                monitor_read_only_interval=1500
                monitor_read_only_timeout=500
                ping_interval_server=120000
                ping_timeout_server=500
                commands_stats=true
                sessions_sort=true
                connect_retries_on_failure=10
            }
            mysql_servers =
            (
                {
                    address = "192.168.213.251" # no default, required. if port is 0, address is interpreted as a unix domain socket
                    port = 3306           # no default, required. if port is 0, address is interpreted as a unix domain socket
                    hostgroup = 0           # no default, required
                    status = "online"     # default: online
                    weight = 1            # default: 1
                    compression = 0       # default: 0
                },
                {
                    address = "192.168.213.252"
                    port = 3306
                    hostgroup = 1
                    status = "online"     # default: online
                    weight = 1            # default: 1
                    compression = 0       # default: 0
                },
                {
                    address = "xxxxx"
                    port = 3306
                    hostgroup = 1
                    status = "online"     # default: online
                    weight = 1            # default: 1
                    compression = 0       # default: 0
                }
            )
            mysql_users:         ## define a user; clients use it to connect to the dispatcher, and the dispatcher uses it to connect to the backend hosts to fetch data
            (
                {
                                username = "dbadmin" # no default , required
                                password = "centos" # default: ''
                                default_hostgroup = 0 # default: 0
                                active = 1            # default: 1
                 }
            )
            mysql_query_rules:   ## routing rules
            (
                {
                    rule_id=2
                    active=1
                    match_pattern="^SELECT"  ## statements beginning with SELECT are routed to hostgroup 1
                    destination_hostgroup=1
                    apply=1
                }
            )
            mysql_replication_hostgroups=
            (
                {
                    writer_hostgroup=0  ## writes are routed to hostgroup 0
                    reader_hostgroup=1  ## reads are routed to hostgroup 1
                }
            )
        service proxysql start
        ss -ntl
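        Note: because mysql_replication_hostgroups is defined above, ProxySQL's monitor checks each backend's read_only variable and assigns servers to the writer (0) or reader (1) hostgroup accordingly, so the slave should run read-only; a minimal sketch for the slave's my.cnf:
            [mysqld]
            read_only = 1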
            4.2
                Alternatively, keep the original /etc/proxysql.cnf unchanged (or only adjust mysql_ifaces in admin_variables and interfaces in mysql_variables), then log in to the admin interface and do the configuration through commands.
                    admin_variables=
                    {
                        admin_credentials="admin:admin"
                        mysql_ifaces="127.0.0.1:6032;/tmp/proxysql_admin.sock"
                    } ## for local admin connections
                    mysql_variables=
                    {
                        ...
                        interfaces="0.0.0.0:6033;/tmp/proxysql.sock"
                        ...
                    }   ## used for dbadmin user connections
                        
                Start proxysql.
                mysql -uadmin -padmin -h127.0.0.1 -P6032
                In the admin interface:
                    insert into mysql_servers(hostgroup_id,hostname,port,weight,max_connections,max_replication_lag,comment) values(100,'192.168.150.129',3306,1,1000,10,'test proxysql');
                    insert into mysql_servers(hostgroup_id,hostname,port,weight,max_connections,max_replication_lag,comment) values(1000,'192.168.150.130',3306,1,1000,10,'test proxysql');
                    insert into mysql_servers(hostgroup_id,hostname,port,weight,max_connections,max_replication_lag,comment) values(1000,'192.168.150.131',3306,1,1000,10,'test proxysql');
                    insert into mysql_users(username,password,active,default_hostgroup,transaction_persistent) values('dbadmin','xm1234',1,100,1);
                    INSERT INTO mysql_query_rules(active,match_pattern,destination_hostgroup,apply) VALUES(1,'^SELECT.*FOR UPDATE$',100,1);
                    INSERT INTO mysql_query_rules(active,match_pattern,destination_hostgroup,apply) VALUES(1,'^SELECT',1000,1);
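                    The INSERTs above only change ProxySQL's in-memory configuration; they still have to be loaded into the runtime and persisted to disk:
                        LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
                        LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;
                        LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;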
                    
                    Check:
                        select Command,Total_Time_us,Total_cnt from stats_mysql_commands_counters where Total_cnt >0;
                        select * from stats_mysql_query_digest;
                On the master:
                    GRANT all ON *.* TO 'dbadmin'@'192.168.150.%' IDENTIFIED BY 'xm1234';
                    GRANT all ON *.* TO 'monitor'@'192.168.150.%' IDENTIFIED BY 'xm1234';
                
                mysql -udbadmin -pxm1234 -h172.18.77.13 -P6033   ## connect to the dispatcher
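                Assuming the master and slaves have distinct server_id values, routing can be spot-checked through the dispatcher; the statement below matches the '^SELECT' rule and should return a reader's server_id:
                    mysql -udbadmin -pxm1234 -h172.18.77.13 -P6033 -e 'SELECT @@server_id;'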
                
    5. Verify
        mysql -S /tmp/proxysql_admin.sock -uadmin -padmin   ## log in to the admin interface
        MySQL [(none)]> show databases;
        MySQL [(none)]> show tables;
        MySQL [(none)]> select * from mysql_servers;  ## view information about the backend servers
        mysql -udbadmin -pcentos -h192.168.213.253   ## a client connecting to the dispatcher
        Welcome to the MariaDB monitor.  Commands end with ; or \g.
        Your MySQL connection id is 5
        Server version: 5.5.30 (ProxySQL)   ## the version string shows we are talking to ProxySQL
        Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
        Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
        MySQL [(none)]> SELECT * from hellodb.tbl1;   ## routed to the slave: the slave does not replicate the hellodb database, hence the error below. Note that SELECT must be upper case here, otherwise the rule does not match and the statement goes to the master; earlier attempts kept failing for exactly this reason.
        ERROR 1146 (42S02): Table 'hellodb.tbl1' doesn't exist
        MySQL [(none)]> insert into hellodb.tbl1 values(2,'li');  ## the insert succeeds because write operations are routed to the master
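        To confirm from the admin side which backends actually served the statements, the connection-pool statistics can be checked (a sketch; stats_mysql_connection_pool is part of ProxySQL's stats schema):
            mysql -S /tmp/proxysql_admin.sock -uadmin -padmin
            MySQL [(none)]> select hostgroup,srv_host,Queries from stats_mysql_connection_pool;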

Read/Write Splitting with Amoeba

1. What is Amoeba
        The Amoeba project focuses on developing a distributed database proxy that sits between clients and DB servers.
        It is transparent to clients and provides load balancing, high availability, SQL filtering, read/write splitting, routing of queries to target databases, and merging of results from concurrent queries against multiple databases.
        It mainly addresses:
            reducing the complexity that data sharding brings to a multi-database architecture
            providing sharding rules and reducing their impact on the application
            reducing the number of connections between the databases and clients
            read/write splitting
    2. Why use Amoeba
        The main ways to implement MySQL master-slave read/write splitting today are:
        1. In application code. There is plenty of ready-made code online, but it is fairly complex, and adding a slave means changing code on several servers.
        2. With mysql-proxy. Its read/write splitting is implemented with Lua scripts; script development has not kept pace and there is no polished ready-made script, so using it in production carries real risk, and many report that mysql-proxy's performance is not high.
        3. Build the middleware yourself. The bar and the development cost are high, more than a typical small company can afford.
        4. Use Alibaba's open-source project Amoeba, which provides load balancing, high availability, SQL filtering, read/write splitting, and routing of queries to target databases, and is very simple to install and configure.
    3. Installing Amoeba
        Deployment environment:
        amoeba:192.168.2.203
        masterDB:192.168.2.204 
        slaveDB:192.168.2.205
        All of the above run CentOS 6.8.
        Amoeba is built on JDK 1.5 and uses its features, so a Java environment is required; Java SE 1.5 or a later JDK is recommended.
        Step 1: Install the Java environment
            Download it from the official site: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
            Install:
                [root@bogon src]# rpm -ivh jdk-8u111-linux-x64.rpm
                Preparing...                ########################################### [100%]
                   1:jdk1.8.0_111           ########################################### [100%]
                Unpacking JAR files...
                    tools.jar...
                    plugin.jar...
                    javaws.jar...
                    deploy.jar...
                    rt.jar...
                    jsse.jar...
                    charsets.jar...
                    localedata.jar...
            Then set the Java environment variables:
                [root@bogon src]# vim /etc/profile
                #set java environment
                JAVA_HOME=/usr/java/jdk1.8.0_111
                JRE_HOME=/usr/java/jdk1.8.0_111/jre
                CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
                PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
                export JAVA_HOME JRE_HOME CLASS_PATH PATH
                [root@bogon amoeba]# source /etc/profile
            Verify the installation:
                [root@bogon src]# java -version
                java version "1.8.0_111"
                Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
                Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
        Step 2: Install Amoeba
            The latest Amoeba can be downloaded from https://sourceforge.net/projects/amoeba/; here amoeba-mysql-3.0.5-RC-distribution.zip is used.
            Installation is trivial: just unzip it. Here Amoeba is extracted to /usr/local/amoeba, and that completes the installation.
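            A minimal sketch of the unpacking step, assuming the zip was downloaded to /usr/local/src (depending on the archive layout, the extracted contents may need to be moved up one level):
            [root@bogon src]# mkdir -p /usr/local/amoeba
            [root@bogon src]# unzip amoeba-mysql-3.0.5-RC-distribution.zip -d /usr/local/amoeba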
            [root@bogon amoeba]# pwd
            /usr/local/amoeba
              [root@bogon amoeba]# ll
              drwxrwxrwx. 2 root root 4096 7月 5 2013 benchmark
              drwxrwxrwx. 2 root root 4096 7月 5 2013 bin
              drwxrwxrwx. 2 root root 4096 7月 5 2013 conf
              -rwxrwxrwx. 1 root root 728 7月 5 2013 jvm.properties
              drwxrwxrwx. 2 root root 4096 7月 5 2013 lib
        Step 3: Configure Amoeba
            In this environment Amoeba's configuration files are under /usr/local/amoeba/conf.
            There are quite a few configuration files, but for read/write splitting alone only two need to be edited: dbServers.xml and amoeba.xml. If IP-based access control is needed, access_list.conf must be modified as well. dbServers.xml first:
            [root@bogon amoeba]# cat conf/dbServers.xml 
            <?xml version="1.0" encoding="gbk"?>
            
            <!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">
            <amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">
            
                    <!-- 
                        Each dbServer needs to be configured into a Pool,
                        If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:
                         add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig
                         such as 'multiPool' dbServer   
                    -->
                    
                <dbServer name="abstractServer" abstractive="true">
                    <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">
                        <property name="connectionManager">${defaultManager}</property>
                        <property name="sendBufferSize">64</property>
                        <property name="receiveBufferSize">128</property>
                            
                        <!-- mysql port -->
                        <property name="port">3306</property>  # port of the MySQL servers Amoeba connects to; default 3306
                        
                        <!-- mysql schema -->
                        <property name="schema">testdb</property>  # default schema; when connected to Amoeba, tables must be referenced with an explicit database name (dbname.tablename); 'use dbname' is not supported for picking a default database, because statements are dispatched to the various backend dbServers
                        
                        <!-- mysql user -->
                        <property name="user">test1</property>  # account and password Amoeba uses to connect to the backend databases; this user must be created on every backend and granted access from the Amoeba server
                        
                        <property name="password">111111</property>
                    </factoryConfig>
            
                    <poolConfig class="com.meidusa.toolkit.common.poolable.PoolableObjectPool">
                        <property name="maxActive">500</property>  # maximum connections, default 500
                        <property name="maxIdle">500</property>    # maximum idle connections
                        <property name="minIdle">1</property>    # minimum idle connections
                        <property name="minEvictableIdleTimeMillis">600000</property>
                        <property name="timeBetweenEvictionRunsMillis">600000</property>
                        <property name="testOnBorrow">true</property>
                        <property name="testOnReturn">true</property>
                        <property name="testWhileIdle">true</property>
                    </poolConfig>
                </dbServer>
            
                <dbServer name="writedb"  parent="abstractServer">  # define a writable backend dbServer, named writedb here; the name is arbitrary and is referenced again later
                    <factoryConfig>
                        <!-- mysql ip -->
                        <property name="ipAddress">192.168.2.204</property> # IP of the writable backend dbServer
                    </factoryConfig>
                </dbServer>
                
                <dbServer name="slave"  parent="abstractServer">  # define a readable backend dbServer
                    <factoryConfig>
                        <!-- mysql ip -->
                        <property name="ipAddress">192.168.2.205</property>
                    </factoryConfig>
                </dbServer>
                
                <dbServer name="myslave" virtual="true">  # define a virtual dbServer, effectively a dbServer group; the readable database IPs are grouped together here under the name myslave
                    <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">
                        <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->
                        <property name="loadbalance">1</property>  # load-balancing strategy: 1 = round robin, 2 = weight-based, 3 = HA; 1 is chosen here
                        
                        <!-- Separated by commas,such as: server1,server2,server1 -->
                        <property name="poolNames">slave</property>  # members of the myslave group
                    </poolConfig>
                </dbServer>
                    
            </amoeba:dbServers>
        The other configuration file, amoeba.xml:
            [root@bogon amoeba]# cat conf/amoeba.xml 
            <?xml version="1.0" encoding="gbk"?>
            
            <!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">
            <amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">
            
                <proxy>
                
                    <!-- service class must implements com.meidusa.amoeba.service.Service -->
                    <service name="Amoeba for Mysql" class="com.meidusa.amoeba.mysql.server.MySQLService">
                        <!-- port -->
                        <property name="port">8066</property>    # port Amoeba listens on, default 8066
                        
                        <!-- bind ipAddress -->    # the listening address; if not set, Amoeba listens on all IPs
                        <!-- 
                        <property name="ipAddress">127.0.0.1</property>
                         -->
                        
                        <property name="connectionFactory">
                            <bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">
                                <property name="sendBufferSize">128</property>
                                <property name="receiveBufferSize">64</property>
                            </bean>
                        </property>
                        
                        <property name="authenticateProvider">
                            <bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">
                                
            # clients connecting to Amoeba must use the account set here (this account and password are unrelated to the one Amoeba uses for the backend databases)
                                <property name="user">root</property>    
            
                                
                                <property name="password">123456</property>
                                
                                <property name="filter">
                                    <bean class="com.meidusa.toolkit.net.authenticate.server.IPAccessController">
                                        <property name="ipFile">${amoeba.home}/conf/access_list.conf</property>
                                    </bean>
                                </property>
                            </bean>
                        </property>
                        
                    </service>
                    
                    <runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">
                        
                        <!-- proxy server client process thread size -->
                        <property name="executeThreadSize">128</property>
                        
                        <!-- per connection cache prepared statement size  -->
                        <property name="statementCacheSize">500</property>
                        
                        <!-- default charset -->
                        <property name="serverCharset">utf8</property>
                        
                        <!-- query timeout( default: 60 second , TimeUnit:second) -->
                        <property name="queryTimeout">60</property>
                    </runtime>
                    
                </proxy>
                
                <!-- 
                    Each ConnectionManager will start as thread
                    manager responsible for the Connection IO read , Death Detection
                -->
                <connectionManagerList>
                    <connectionManager name="defaultManager" class="com.meidusa.toolkit.net.MultiConnectionManagerWrapper">
                        <property name="subManagerClassName">com.meidusa.toolkit.net.AuthingableConnectionManager</property>
                    </connectionManager>
                </connectionManagerList>
                
                    <!-- default using file loader -->
                <dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">
                    <property name="configFile">${amoeba.home}/conf/dbServers.xml</property>
                </dbServerLoader>
                
                <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">
                    <property name="ruleLoader">
                        <bean class="com.meidusa.amoeba.route.TableRuleFileLoader">
                            <property name="ruleFile">${amoeba.home}/conf/rule.xml</property>
                            <property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>
                        </bean>
                    </property>
                    <property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>
                    <property name="LRUMapSize">1500</property>
                    <property name="defaultPool">writedb</property>  # Amoeba's default pool, set to writedb here
                    
                    
                    <property name="writePool">writedb</property>  # these two options are commented out by default and must be uncommented; they point to the write and read pools defined earlier
                    <property name="readPool">myslave</property>
                    
                    <property name="needParse">true</property>
                </queryRouter>
            </amoeba:configuration>
        Step 4: Create the testdb database on masterdb
            mysql> create database testdb;
            Query OK, 1 row affected (0.08 sec)
            
            mysql> show databases;
            +--------------------+
            | Database           |
            +--------------------+
            | information_schema |
            | mydb               |
            | mysql              |
            | performance_schema |
            | test               |
            | testdb             |
            +--------------------+
            6 rows in set (0.00 sec)
            Check whether it replicated to slavedb:
                mysql> show databases;
                +--------------------+
                | Database           |
                +--------------------+
                | information_schema |
                | mydb               |
                | mysql              |
                | performance_schema |
                | test               |
                | testdb             |
                +--------------------+
                6 rows in set (0.00 sec)
            Grant privileges for Amoeba on both masterdb and slavedb:
                mysql> GRANT ALL ON testdb.* TO 'test1'@'192.168.2.203' IDENTIFIED BY '111111';
                Query OK, 0 rows affected (0.05 sec)
                
                mysql> flush privileges;
                Query OK, 0 rows affected (0.02 sec)
            Start Amoeba:
                [root@bogon amoeba]# /usr/local/amoeba/bin/launcher
                Error: JAVA_HOME environment variable is not set.
                [root@bogon amoeba]# vim /etc/profile^C
                [root@bogon amoeba]# source /etc/profile
                [root@bogon amoeba]# /usr/local/amoeba/bin/launcher
                Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0
                Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0
                
                The stack size specified is too small, Specify at least 228k
                Error: Could not create the Java Virtual Machine.
                Error: A fatal exception has occurred. Program will exit.
            Error:
                Error: Could not create the Java Virtual Machine.
                Error: A fatal exception has occurred. Program will exit.
                From the message, the stack size is too small, which makes the JVM fail to start. How do we fix it?
                Amoeba anticipated this and keeps the JVM parameters in a properties file, so we adjust them there.
                Edit the JVM_OPTIONS parameter in jvm.properties.
                [root@bogon amoeba]# vim /usr/local/amoeba/jvm.properties 
                    Change to: JVM_OPTIONS="-server -Xms1024m -Xmx1024m -Xss256k -XX:PermSize=16m -XX:MaxPermSize=96m"
                    Originally: JVM_OPTIONS="-server -Xms256m -Xmx1024m -Xss196k -XX:PermSize=16m -XX:MaxPermSize=96m"
            Start again:
                [root@bogon ~]# /usr/local/amoeba/bin/launcher
                    at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:329)
                    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:239)
                    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
                    at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:127)
                    at org.codehaus.classworlds.Launcher.main(Launcher.java:110)
                Caused by: com.meidusa.toolkit.common.bean.util.InitialisationException: default pool required!,defaultPool=writedb invalid
                    at com.meidusa.amoeba.route.AbstractQueryRouter.init(AbstractQueryRouter.java:469)
                    at com.meidusa.amoeba.context.ProxyRuntimeContext.initAllInitialisableBeans(ProxyRuntimeContext.java:337)
                    ... 11 more
                 2016-10-24 18:46:37 [INFO] Project Name=Amoeba-MySQL, PID=1577 , System shutdown ....
                Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=16m; support was removed in 8.0
                Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=96m; support was removed in 8.0
                 2016-10-24 18:50:19 [INFO] Project Name=Amoeba-MySQL, PID=1602 , starting...
                log4j:WARN log4j config load completed from file:/usr/local/amoeba/conf/log4j.xml
                2016-10-24 18:50:21,668 INFO  context.MysqlRuntimeContext - Amoeba for Mysql current versoin=5.1.45-mysql-amoeba-proxy-3.0.4-BETA
                log4j:WARN ip access config load completed from file:/usr/local/amoeba/conf/access_list.conf
                2016-10-24 18:50:22,852 INFO  net.ServerableConnectionManager - Server listening on 0.0.0.0/0.0.0.0:8066.
            Check the listening port:
            [root@bogon ~]# netstat -unlpt | grep java
            tcp        0      0 :::8066                     :::*                        LISTEN      1602/java    
            This shows Amoeba started normally.
        Step 5: Test
            From a remote client, connect to MySQL through Amoeba using the username, password and port specified in the Amoeba configuration, together with the Amoeba server's IP address:
                [root@lys2 ~]# mysql -h192.168.2.203 -uroot -p -P8066
                Enter password: 
                Welcome to the MySQL monitor.  Commands end with ; or \g.
                Your MySQL connection id is 1364055863
                Server version: 5.1.45-mysql-amoeba-proxy-3.0.4-BETA Source distribution
                
                Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
                
                Oracle is a registered trademark of Oracle Corporation and/or its
                affiliates. Other names may be trademarks of their respective
                owners.
                
                Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
                
                mysql> 
            Create a table in testdb and insert a row:
                mysql> use testdb;
                Database changed
                mysql> create table test_table(id int,password varchar(40) not null);
                Query OK, 0 rows affected (0.19 sec)
                
                mysql> show tables;
                +------------------+
                | Tables_in_testdb |
                +------------------+
                | test_table       |
                +------------------+
                1 row in set (0.02 sec)
                
                mysql> insert into test_table(id,password) values('1','test1');
                Query OK, 1 row affected (0.04 sec)
                
                mysql> select * from test_table;
                +------+----------+
                | id   | password |
                +------+----------+
                |    1 | test1    |
                +------+----------+
                1 row in set (0.02 sec)
            Log in to masterdb and slavedb separately and check the data.
            masterdb:
                mysql> use testdb;
                Database changed
                mysql> show tables;
                +------------------+
                | Tables_in_testdb |
                +------------------+
                | test_table       |
                +------------------+
                1 row in set (0.00 sec)
                
                mysql> select * from test_table;
                +------+----------+
                | id   | password |
                +------+----------+
                |    1 | test1    |
                +------+----------+
                1 row in set (0.03 sec)
            slavedb:
                mysql> use testdb;
                Database changed
                mysql> show tables;
                +------------------+
                | Tables_in_testdb |
                +------------------+
                | test_table       |
                +------------------+
                1 row in set (0.00 sec)
                
                mysql> select * from test_table;
                +------+----------+
                | id   | password |
                +------+----------+
                |    1 | test1    |
                +------+----------+
                1 row in set (0.00 sec)
            Stop masterdb, then run an insert and a query from the client.
        masterdb:
          [root@bogon ~]# service mysqld stop
          Shutting down MySQL. SUCCESS!
        Client:
            mysql> insert into test_table(id,password) values('2','test2');
            ERROR 1044 (42000): Amoeba could not connect to MySQL server[192.168.2.204:3306], connection refused
            mysql> select * from test_table;
            +------+----------+
            | id   | password |
            +------+----------+
            |    1 | test1    |
            +------+----------+
            1 row in set (0.01 sec)
            With masterdb down, writes fail while reads still work.
        Start MySQL on masterdb again and stop MySQL on slavedb:
            masterdb:
                [root@bogon ~]# service mysqld start
                Starting MySQL.. SUCCESS! 
            slavedb:
                [root@localhost ~]# service mysqld stop
                Shutting down MySQL. SUCCESS! 
            Try again from the client:
                mysql> insert into test_table(id,password) values('2','test2');
                Query OK, 1 row affected (0.19 sec)
                mysql> select * from test_table;
                ERROR 1044 (42000): poolName=myslave, no valid pools
            The insert succeeds and the read fails.
            Start MySQL on slavedb again and check whether the data synchronizes automatically.
            slavedb:
                [root@localhost ~]# service mysqld start
                Starting MySQL... SUCCESS! 
            Client:
                mysql> select * from test_table;
                +------+----------+
                | id   | password |
                +------+----------+
                |    1 | test1    |
                |    2 | test2    |
                +------+----------+
                2 rows in set (0.01 sec)
            Then from the client:
                    mysql> insert into test_table(id,password) values('3','test3');
                    Query OK, 1 row affected (0.03 sec)
                
                mysql> select * from test_table;
                +------+----------+
                | id   | password |
                +------+----------+
                |    1 | test1    |
                |    2 | test2    |
                |    3 | test3    |
                +------+----------+
                3 rows in set (0.02 sec)
            Everything works as expected; that completes the setup.
            
        The main Amoeba configuration file ($AMOEBA_HOME/conf/amoeba.xml) configures the basic service parameters: Amoeba's address, port, authentication method, connection username and password, thread counts, timeouts, and the locations of the other configuration files.
        The database server configuration file ($AMOEBA_HOME/conf/dbServers.xml) stores and configures the information of the database servers Amoeba proxies: host IP, port, username, password, and so on.
        The sharding rule configuration file ($AMOEBA_HOME/conf/rule.xml) configures the sharding rules.
        The database function configuration file ($AMOEBA_HOME/conf/functionMap.xml) configures how database functions are handled; Amoeba uses the methods in this file to parse database functions.
        The rule function configuration file ($AMOEBA_HOME/conf/ruleFunctionMap.xml) configures handlers for user-defined functions used in the sharding rules.
        The access control file ($AMOEBA_HOME/conf/access_list.conf) allows or denies specific server IPs access to Amoeba.
        The logging configuration file ($AMOEBA_HOME/conf/log4j.xml) configures Amoeba's log levels and output.

Read/Write Splitting with mysql-proxy

One of MySQL Proxy's most powerful features is read/write splitting.
    The basic idea is to let the master handle transactional queries while the slaves handle SELECT queries.
    Database replication propagates the changes made by transactional queries to the slaves in the cluster.
    Of course, the master can also serve queries.
    The biggest benefit of read/write splitting is simply relieving pressure on the servers.

    Below, read/write splitting is built with MySQL Proxy, the database proxy layer provided by MySQL.
    MySQL Proxy essentially maintains a connection pool between client requests and the MySQL servers.
    All client requests are sent to MySQL Proxy, which analyzes each one, determines whether it is a read or a write, and forwards it to the corresponding MySQL server.
    For a multi-node slave cluster, this also provides load balancing.
    
    
    MySQL read/write splitting configuration
    Prepare the MySQL environment
        master 192.168.1.5
        slave 192.168.1.6
        proxy 192.168.1.2
        MySQL:5.5.37
        MySQL-proxy:mysql-proxy-0.8.4-linux-rhel5-x86-64bit.tar.gz
        Create a user and grant privileges:
            mysql> create user libai identified by 'libai';
            mysql> grant all on *.* to libai@'192.168.1.%' identified by 'libai';
        With MySQL replication configured, the statements above executed on the master are replicated to the slave.
        Enable MySQL replication
        First stop and clear any previous replication:
            mysql> stop slave;
            mysql> reset slave all;
        Configure the new replication (the old logs should be cleared before enabling it):
            mysql> change master to master_host='192.168.1.5',master_user='libai',master_password='libai',master_port=3306,master_log_file='mysql-bin.000001',master_log_pos=0;
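        Then start replication on the slave and confirm that both replication threads are running (Slave_IO_Running and Slave_SQL_Running should both be Yes):
            mysql> start slave;
            mysql> show slave status\G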
        On the master:
            # mysql -h localhost -ulibai -plibai
            mysql> create database d;
            mysql> use d;
            mysql> create table t(i int);
            mysql> insert into t values(1);
        On the slave:
            mysql> select * from t;
                +------+
                | i    |
                +------+
                |    1 |
                +------+
    Start the MySQL Proxy server
        Create a mysql user on the proxy server:
            # groupadd mysql
            # useradd -g mysql mysql
        Unpack and start mysql-proxy:
            # ./mysql-proxy --daemon --log-level=debug --user=mysql --keepalive --log-file=/var/log/mysql-proxy.log --plugins="proxy" --proxy-backend-addresses="192.168.1.5:3306" --proxy-read-only-backend-addresses="192.168.1.6:3306" --proxy-lua-script="/root/soft/mysql-proxy/rw-splitting.lua" --plugins=admin --admin-username="admin" --admin-password="admin" --admin-lua-script="/root/soft/mysql-proxy/lib/mysql-proxy/lua/admin.lua"
        Here proxy-backend-addresses is the master and proxy-read-only-backend-addresses is the slave. Run ./mysql-proxy --help for the full list of options.
    Check the processes after startup:
        # ps -ef | grep mysql
            root     25721     1  0 11:33 ?        00:00:00 /root/soft/mysql-proxy/libexec/mysql-proxy --daemon --log-level=debug --user=mysql --keepalive --log-file=/var/log/mysql-proxy.log --plugins=proxy --proxy-backend-addresses=192.168.1.5:3306 --proxy-read-only-backend-addresses=192.168.1.6:3306 --proxy-lua-script=/root/soft/mysql-proxy/rw-splitting.lua --plugins=admin --admin-username=admin --admin-password=admin --admin-lua-script=/root/soft/mysql-proxy/lib/mysql-proxy/lua/admin.lua
            mysql    25722 25721  0 11:33 ?        00:00:00 /root/soft/mysql-proxy/libexec/mysql-proxy --daemon --log-level=debug --user=mysql --keepalive --log-file=/var/log/mysql-proxy.log --plugins=proxy --proxy-backend-addresses=192.168.1.5:3306 --proxy-read-only-backend-addresses=192.168.1.6:3306 --proxy-lua-script=/root/soft/mysql-proxy/rw-splitting.lua --plugins=admin --admin-username=admin --admin-password=admin --admin-lua-script=/root/soft/mysql-proxy/lib/mysql-proxy/lua/admin.lua
        Port 4040 is the proxy port; 4041 is the admin port.
      # lsof -i:4040
            COMMAND     PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
            mysql-pro 25722 mysql   10u  IPv4 762429      0t0  TCP *:yo-main (LISTEN)
            # lsof -i:4041
            COMMAND     PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
            mysql-pro 25722 mysql   11u  IPv4 762432      0t0  TCP *:houston (LISTEN)
    Test
        Make sure the mysql client can be run on the mysql-proxy node, then connect to the proxy with the account created earlier (it was replicated to the slave):
            # mysql -h 192.168.1.2 -ulibai -p --port=4040
            mysql> show databases;
            +--------------------+
            | Database           |
            +--------------------+
            | information_schema |
            | d                  |
            | mysql              |
            | performance_schema |
            | test               |
            +--------------------+
        Log in to the admin interface and check the backend status:
            # mysql -h 192.168.1.2 -u admin -p --port=4041
            mysql> select * from backends;
            +-------------+------------------+-------+------+------+-------------------+
            | backend_ndx | address          | state | type | uuid | connected_clients |
            +-------------+------------------+-------+------+------+-------------------+
            |           1 | 192.168.1.5:3306 | up    | rw   | NULL |                 0 |
            |           2 | 192.168.1.6:3306 | up    | ro   | NULL |                 0 |
            +-------------+------------------+-------+------+------+-------------------+
            2 rows in set (0.00 sec)
        The query above shows that both the master and the slave are up.
        1) Connect through the proxy, create the database dufu and a table t:
            mysql> create database dufu;
            mysql> show databases;
            mysql> use dufu;
            mysql> create table t(id int(10),name varchar(20));
            mysql> show tables;
        After creating the database and the table, both the master and the slave should show them.
        2) Stop replication, then insert one row on the master and one on the slave.
        mysql> stop slave;
        On the master:
        mysql> insert into t values(1,'this_is_master');
        On the slave:
        mysql> insert into t values(2,'this_is_slave');
        3) Check the result through the proxy:
            mysql> use dufu;
            mysql> select * from t;
            +------+---------------+
            | id   | name          |
            +------+---------------+
            |    2 | this_is_slave |
            +------+---------------+
            1 row in set (0.00 sec)
        The result shows the data was read from the slave; the row inserted on the master is not visible.
        Insert a row directly through the proxy:
        mysql> insert into t values(3,'this_is_proxy');
        Query again:
            mysql> select * from t;
            +------+---------------+
            | id   | name          |
            +------+---------------+
            |    2 | this_is_slave |
            +------+---------------+
        The result is unchanged, because an insert issued through the proxy is written to the master, while the query reads from the slave.
        Query on the master:
            mysql> select * from t;
            +------+----------------+
            | id   | name           |
            +------+----------------+
            |    1 | this_is_master |
            |    3 | this_is_proxy  |
            +------+----------------+
        Re-enable replication, then query through the proxy:
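        Replication, stopped in step 2, is re-enabled on the slave first (the original omits the command; a minimal sketch):
            mysql> start slave;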
            mysql> select * from t;
            +------+----------------+
            | id   | name           |
            +------+----------------+
            |    2 | this_is_slave  |
            |    1 | this_is_master |
            |    3 | this_is_proxy  |
            +------+----------------+
        This shows that the master's data has now been replicated to the slave, and the data returned through the proxy comes from the slave. MySQL Proxy has achieved read/write splitting.