Preliminary preparation:
NFS server: hostname nfsserver, IP 192.168.1.103, stores the business system's data.
node1: hostname PXC01, IP 192.168.1.105, runs PXC and the business system.
node2: hostname PXC02, IP 192.168.1.106, runs PXC and the business system.
node3: hostname PXC03, IP 192.168.1.107, runs PXC and the business system.
LVS server: hostname lvsserver, IP 192.168.1.121, VIP 192.168.1.100, runs LVS for load balancing.
Operating system on all machines: CentOS 6.9 64-bit.
Note 1: Part 5 adds a second LVS server and uses keepalived for high availability.
Note 2: For background on how the tools in this solution work, please consult online references.
Solution architecture diagram (figure omitted).
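For convenience, the hostnames above can be resolved on every machine. A minimal /etc/hosts sketch based on the inventory — optional, and an assumption, since the original does not set this up:

# /etc/hosts entries for the cluster (illustrative)
192.168.1.103   nfsserver
192.168.1.105   PXC01
192.168.1.106   PXC02
192.168.1.107   PXC03
192.168.1.121   lvsserver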
Part 1: Install the business system and configure MySQL load balancing (PXC)
---------------------------------- Preface ---------------------------------
The following goals must be met:
1. All MySQL nodes hold identical data.
2. If any one MySQL node goes down, service is not affected.
3. Every node serves reads and writes at the same time.
4. Future expansion must be easy, e.g. adding a node.
5. Fully automatic under complex conditions, with no manual intervention.
Many solutions were considered and tried; PXC was chosen in the end. Why the other options fell short:
(1) Master-slave replication: does not meet the requirements, because anything written to a slave is not replicated back to the master.
(2) Master-master replication: not suitable for production, and adding a third node is very cumbersome.
(3) Extensions built on master-slave replication (MMM, MHA): do not meet the requirements; reads are load balanced, but only one server handles writes.
(4) Putting MySQL on shared storage or DRBD: not viable; the nodes cannot serve simultaneously, so this is only an HA setup.
(5) MySQL Cluster: in a sense it only supports the NDB storage engine (sharded tables must be converted to NDB; unsharded ones need not be), it does not support foreign keys, and it is heavy on disk and memory. Loading data into memory after a restart takes a long time, and it is complex to deploy and manage.
(6) MySQL Fabric: mainly offers two features, MySQL HA and sharding (e.g. splitting a multi-TB table so each server stores one part); not applicable to the environment I need.
---------------------------------- Preface ---------------------------------
1. Environment preparation (node1, node2, node3)
node1 192.168.1.105 PXC01 centos 6.9 mini
node2 192.168.1.106 PXC02 centos 6.9 mini
node3 192.168.1.107 PXC03 centos 6.9 mini

2. Disable the firewall and SELinux (node1, node2, node3)
[root@localhost ~]# /etc/init.d/iptables stop
[root@localhost ~]# chkconfig iptables off
[root@localhost ~]# setenforce 0
[root@localhost ~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted

3. Install the business system and upgrade it to the latest version (node1, node2, node3)

4. Make sure every business-system table has a primary key (node1)
Query for tables without a primary key:
select t1.table_schema,t1.table_name from information_schema.tables t1 left outer join information_schema.TABLE_CONSTRAINTS t2 on t1.table_schema = t2.TABLE_SCHEMA and t1.table_name = t2.TABLE_NAME and t2.CONSTRAINT_NAME in ('PRIMARY') where t2.table_name is null and t1.TABLE_SCHEMA not in ('information_schema','performance_schema','test','mysql','sys');
Query for tables with a primary key:
select t1.table_schema,t1.table_name from information_schema.tables t1 left outer join information_schema.TABLE_CONSTRAINTS t2 on t1.table_schema = t2.TABLE_SCHEMA and t1.table_name = t2.TABLE_NAME and t2.CONSTRAINT_NAME in ('PRIMARY') where t2.table_name is not null and t1.TABLE_SCHEMA not in ('information_schema','performance_schema','test','mysql','sys');
Add primary keys following this template (node1):
ALTER TABLE `table_name` ADD `id` int(11) NOT NULL auto_increment FIRST, ADD primary key(id);

5. Convert the business system's non-InnoDB tables to the InnoDB engine (node1)
See which tables use the MyISAM engine (the original reads the password out of the application's config file; the command that prints that file is elided below as <print-config>):
[root@PXC01 ~]# mysql -u oa -p<password> -e "show table status from oa where Engine='MyISAM';"
[root@PXC01 ~]# mysql -u oa -p<password> -e "show table status from oa where Engine='MyISAM';" |awk '{print $1}' |sed 1d > mysqlchange
Stop the other services that use MySQL:
[root@PXC01 ~]# for i in service1 service2; do /etc/init.d/$i stop; done
Run the script that switches the engine to InnoDB:
[root@PXC01 ~]# cat mysqlchange_innodb.sh
#!/bin/bash
cat mysqlchange | while read LINE
do
    tablename=$(echo $LINE | awk '{print $1}')
    echo "Now changing the engine of $tablename to InnoDB"
    mysql -u oa -p`<print-config> | grep -w pass | awk -F"= " '{print $NF}'` oa -e "alter table $tablename engine=innodb;"
done
Verify:
[root@PXC01 ~]# mysql -u oa -p`<print-config> | grep -w pass | awk -F"= " '{print $NF}'` -e "show table status from oa where Engine='MyISAM';"
[root@PXC01 ~]# mysql -u oa -p`<print-config> | grep -w pass | awk -F"= " '{print $NF}'` -e "show table status from oa where Engine='InnoDB';"

6. Back up the database (node1)
[root@PXC01 ~]# mysqldump -u oa -p`<print-config> | grep -w pass | awk -F"= " '{print $NF}'` --databases oa |gzip > 20180524.sql.gz
[root@PXC01 ~]# ll
total 44
-rw-r--r-- 1 root root 24423 May 22 16:55 20180524.sql.gz

7. Move the business system's bundled MySQL to a different port, stop it, and keep it from starting at boot (node1, node2, node3)

8. Install PXC via yum and configure it
Base installation (node1, node2, node3):
[root@percona1 ~]# yum -y groupinstall Base Compatibility libraries Debugging Tools Dial-up Networking support Hardware monitoring utilities Performance Tools Development tools
Component installation (node1, node2, node3):
[root@percona1 ~]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm -y
[root@percona1 ~]# yum localinstall http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@percona1 ~]# yum install socat libev -y
[root@percona1 ~]# yum install Percona-XtraDB-Cluster-55 -y

node1 configuration:
[root@PXC01 ~]# vi /etc/my.cnf
# my business system needs port 6033
[client]
port=6033
[mysqld]
datadir=/var/lib/mysql
user=mysql
port=6033
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.105,192.168.1.106,192.168.1.107
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #1 address
wsrep_node_address=192.168.1.105
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

Start MySQL on node1:
[root@PXC01 mysql]# /etc/init.d/mysql bootstrap-pxc
Bootstrapping PXC (Percona XtraDB Cluster)
ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster)... SUCCESS!
On CentOS 7 the start command would instead be:
[root@percona1 ~]# systemctl start mysql@bootstrap.service
To restart, first kill the process, delete the pid file, and then run the start command above.
Check the service on node1:
[root@PXC01 ~]# /etc/init.d/mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (7712)
Add /etc/init.d/mysql bootstrap-pxc to rc.local.
Enter the MySQL console on node1:
[root@PXC01 ~]# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.5.41-37.0-55 Percona XtraDB Cluster (GPL), Release rel37.0, Revision 855, WSREP version 25.12, wsrep_25.12.r4027
Copyright (c) 2009-2014 Percona LLC and/or its affiliates
Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_protocol_version | 4 |
| wsrep_last_committed | 0 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_received | 2 |
| wsrep_received_bytes | 134 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 0.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_causal_reads | 0 |
| wsrep_incoming_addresses | 192.168.1.105:3306 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 2.12(r318911d) |
| wsrep_ready | ON |
| wsrep_thread_count | 2 |
+----------------------------+--------------------------------------+
41 rows in set (0.00 sec)

Set the database root password on node1:
mysql> UPDATE mysql.user SET password=PASSWORD("Passw0rd") where user='root';

Create and grant the account used for SST synchronization on node1:
mysql> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cret';
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
mysql> FLUSH PRIVILEGES;

Check the cluster address you configured on node1:
mysql> SHOW VARIABLES LIKE 'wsrep_cluster_address';
+-----------------------+---------------------------------------------------+
| Variable_name | Value |
+-----------------------+---------------------------------------------------+
| wsrep_cluster_address | gcomm://192.168.1.105,192.168.1.106,192.168.1.107 |
+-----------------------+---------------------------------------------------+
1 row in set (0.00 sec)

Check on node1 that wsrep is enabled:
mysql> show status like 'wsrep_ready';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| wsrep_ready | ON |
+---------------+-------+
1 row in set (0.00 sec)

Check the number of cluster members on node1:
mysql> show status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name | Value |
+--------------------+-------+
| wsrep_cluster_size | 1 |
+--------------------+-------+
1 row in set (0.00 sec)

Review the wsrep status variables on node1:
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_protocol_version | 4 |
| wsrep_last_committed | 2 |
| wsrep_replicated | 2 |
| wsrep_replicated_bytes | 405 |
| wsrep_received | 2 |
| wsrep_received_bytes | 134 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 1.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 2 |
| wsrep_causal_reads | 0 |
| wsrep_incoming_addresses | 192.168.1.105:3306 |
| wsrep_cluster_conf_id | 1 |
| wsrep_cluster_size | 1 |
| wsrep_cluster_state_uuid | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 2.12(r318911d) |
| wsrep_ready | ON |
| wsrep_thread_count | 2 |
+----------------------------+--------------------------------------+
41 rows in set (0.00 sec)

node2 configuration:
[root@PXC02 ~]# vi /etc/my.cnf
# my business system needs port 6033
[client]
port=6033
[mysqld]
datadir=/var/lib/mysql
user=mysql
port=6033
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.105,192.168.1.106,192.168.1.107
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #2 address
wsrep_node_address=192.168.1.106
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

Start the service on node2:
[root@PXC02 ~]# /etc/init.d/mysql start
ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster).....State transfer in progress, setting sleep higher ... SUCCESS!
Check the service on node2:
[root@PXC02 ~]# /etc/init.d/mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (9071)

node3 configuration:
[root@PXC03 ~]# vi /etc/my.cnf
# my business system needs port 6033
[client]
port=6033
[mysqld]
datadir=/var/lib/mysql
user=mysql
port=6033
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.1.105,192.168.1.106,192.168.1.107
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node #3 address
wsrep_node_address=192.168.1.107
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"

Start the service on node3:
[root@PXC03 ~]# /etc/init.d/mysql start
ERROR! MySQL (Percona XtraDB Cluster) is not running, but lock file (/var/lock/subsys/mysql) exists
Starting MySQL (Percona XtraDB Cluster)......State transfer in progress, setting sleep higher .... SUCCESS!
Check the service on node3:
[root@PXC03 ~]# /etc/init.d/mysql status
SUCCESS! MySQL (Percona XtraDB Cluster) running (9071)

.................................. NOTE ................................
-> Apart from the nominal master, the other nodes only need MySQL started.
-> The other nodes' database logins use the same username and password as the master node, synced automatically, so there is no need to set credentials again on the other nodes.
In other words, with the setup above, grants only need to be configured on the nominal master (node1 above); once the other nodes have /etc/my.cnf in place, simply starting MySQL is enough and the grants replicate over automatically.
node2 and node3 log in to MySQL with the same credentials as node1 (i.e. the ones configured on node1).
.....................................................................
If MySQL fails to start on node2 or node3, for example the err log under /var/lib/mysql reports:
[ERROR] WSREP: gcs/src/gcs_group.cpp:long int gcs_group_handle_join_msg(gcs_
Fixes:
-> Check whether iptables is disabled on the node, and whether port 4567 on the nominal master is reachable (telnet).
-> Check whether SELinux is disabled.
-> Delete grastate.dat on the nominal master and restart its database; likewise delete grastate.dat on the current node and restart its database.
.....................................................................

9. Final testing
Inserts, deletes and updates on any node replicate to all the other servers; this is now a multi-master setup. The precondition is that tables use the InnoDB engine, because Galera currently only supports InnoDB tables.
mysql> show status like 'wsrep%';
+----------------------------+----------------------------------------------------------+
| Variable_name | Value |
+----------------------------+----------------------------------------------------------+
| wsrep_local_state_uuid | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_protocol_version | 4 |
| wsrep_last_committed | 2 |
| wsrep_replicated | 2 |
| wsrep_replicated_bytes | 405 |
| wsrep_received | 10 |
| wsrep_received_bytes | 728 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_avg | 0.000000 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_cert_deps_distance | 0.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 0.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 0.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_causal_reads | 0 |
| wsrep_incoming_addresses | 192.168.1.105:6033,192.168.1.106:6033,192.168.1.107:6033 |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 1ab083fc-5c46-11e8-a7b7-76a002f2b5c8 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 0 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 2.12(r318911d) |
| wsrep_ready | ON |
| wsrep_thread_count | 2 |
+----------------------------+----------------------------------------------------------+
41 rows in set (0.00 sec)

Create a database on node3:
mysql> create database wangshibo;
Query OK, 1 row affected (0.02 sec)
Then check on node1 and node2; it has synced over automatically:
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test |
| wangshibo |
+--------------------+
5 rows in set (0.00 sec)
Create a table under the wangshibo database on node1 and insert data:
mysql> use wangshibo;
Database changed
mysql> create table test(id int(5));
Query OK, 0 rows affected (0.11 sec)
mysql> insert into test values(1);
Query OK, 1 row affected (0.01 sec)
mysql> insert into test values(2);
Query OK, 1 row affected (0.02 sec)
Likewise, the data can be seen on the other nodes, synced automatically:
mysql> select * from wangshibo.test;
+------+
| id |
+------+
| 1 |
| 2 |
+------+
2 rows in set (0.00 sec)

10. Restore the database on node1
mysql> create database oa;
gunzip 20180524.sql.gz
/usr/bin/mysql -uroot -p<password> oa < 20180524.sql
for i in service1 service2; do /etc/init.d/$i start; done

11. Finally, verify that node2 and node3 have synced the database and all its tables from node1
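Rather than eyeballing the full wsrep output after every change, the three values that matter can be polled from the shell. A minimal health-probe sketch, assuming the root password set above and a local mysql client; this is illustrative and not part of the original procedure:

#!/bin/bash
# pxc_check.sh - report the Galera values that matter for this 3-node cluster
MYSQL="mysql -uroot -pPassw0rd -N -s -e"
size=$($MYSQL "show status like 'wsrep_cluster_size';" | awk '{print $2}')
state=$($MYSQL "show status like 'wsrep_local_state_comment';" | awk '{print $2}')
ready=$($MYSQL "show status like 'wsrep_ready';" | awk '{print $2}')
echo "cluster_size=$size state=$state ready=$ready"
# a healthy member shows: cluster_size=3 state=Synced ready=ON
if [ "$size" != "3" ] || [ "$state" != "Synced" ] || [ "$ready" != "ON" ]; then
    echo "WARNING: this node is not a healthy member of the cluster" >&2
    exit 1
fi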
Part 2: NFS server configuration
---------------------------------- Preface ---------------------------------
This step is really just about data storage. To have all three nodes use the same data, the usual options are:
(1) NFS: all data lives on one NFS server. NFS can keep file locks on the server side, so concurrent reads and writes will not corrupt files.
(2) Storage hardware that supports concurrent writes to one file: very expensive.
(3) Distributed file systems such as FastDFS or HDFS: not studied in depth; they would probably require changes to the application code.
---------------------------------- Preface ---------------------------------
1. Install the NFS packages
[root@nfsserver ~]# yum -y install nfs-utils rpcbind
2. Create the shared directory, create a user, and give the directory that user and group ownership (which user and group depends on what the business system requires).
[root@nfsserver ~]# groupadd -g 9005 oa
[root@nfsserver ~]# useradd -u 9005 -g 9005 oa -d /home/oa -s /sbin/nologin
[root@nfsserver ~]# mkdir /data
[root@nfsserver ~]# chown -hR oa.oa /data
[root@nfsserver ~]# chmod -R 777 /data (or 755)
3. Create the exports file; rw means read-write.
[root@nfsserver /]# cat /etc/exports
/data 192.168.1.105(rw,sync,all_squash,anonuid=9005,anongid=9005)
/data 192.168.1.106(rw,sync,all_squash,anonuid=9005,anongid=9005)
/data 192.168.1.107(rw,sync,all_squash,anonuid=9005,anongid=9005)
4. Restart the services and enable them at boot; rpcbind must be restarted before nfs.
[root@nfsserver /]# /etc/init.d/rpcbind start
Starting rpcbind: [ OK ]
[root@nfsserver /]# rpcinfo -p localhost
[root@nfsserver /]# netstat -lnt
[root@nfsserver /]# chkconfig rpcbind on
[root@nfsserver /]# chkconfig --list | grep rpcbind
[root@nfsserver /]# /etc/init.d/nfs start
Starting NFS services: [ OK ]
Starting NFS mountd: [ OK ]
Starting NFS daemon: [ OK ]
Starting RPC idmapd: [ OK ]
[root@nfsserver /]# rpcinfo -p localhost
Many more ports now show up.
[root@nfsserver /]# chkconfig nfs on
[root@nfsserver /]# chkconfig --list | grep nfs
5. Disable the firewall and SELinux.
[root@nfsserver /]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Flushing firewall rules: [ OK ]
iptables: Unloading modules: [ OK ]
[root@nfsserver /]# chkconfig iptables off
[root@nfsserver /]# (disabling SELinux is omitted here)
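Before configuring the clients, the exports can be sanity-checked from the server itself with the standard nfs-utils tools; a short sketch:

# re-export everything listed in /etc/exports and show the active exports
exportfs -ra
exportfs -v
# confirm the share is advertised over RPC
showmount -e localhost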
Part 3: NFS client configuration (node1, node2, node3)
1. Check the NFS server's exports (node1, node2, node3)
[root@oaserver1 ~]# yum -y install nfs-utils rpcbind (the showmount command needs these)
[root@oaserver1 /]# /etc/init.d/rpcbind start
[root@oaserver1 /]# rpcinfo -p localhost
[root@oaserver1 /]# netstat -lnt
[root@oaserver1 /]# chkconfig rpcbind on
[root@oaserver1 /]# chkconfig --list | grep rpcbind
[root@PXC01 ~]# showmount -e 192.168.1.103
Export list for 192.168.1.103:
/data 192.168.1.107,192.168.1.106,192.168.1.105
2. Create the mount point and mount the share (node1, node2, node3)
[root@oaserver1 ~]# mkdir /data
[root@oaserver1 ~]# mount -t nfs 192.168.1.103:/data /data
[root@oaserver1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 18G 3.6G 13G 22% /
tmpfs 491M 4.0K 491M 1% /dev/shm
/dev/sda1 477M 28M 425M 7% /boot
192.168.1.103:/data 14G 2.1G 11G 16% /data
[root@oaserver1 ~]# cd /
[root@oaserver1 /]# ls -lhd data*
drwxrwxrwx 2 oa oa 4.0K May 24 2018 data
3. Mount automatically at boot (node1, node2, node3)
[root@oaserver1 data]# vi /etc/fstab
192.168.1.103:/data /data nfs defaults 0 0
4. Move the data directory into /data
In short: stop the related services, move the data into /data, then create a symlink in the old location (see the sketch below).
5. Reboot the servers and verify
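Step 4 above is only summarized; here is a hedged sketch of the move-and-symlink, assuming the application keeps its files under /home/oa/data and runs as a service named oa (both hypothetical; substitute the real path and service name):

# stop anything writing to the data directory
/etc/init.d/oa stop                # 'oa' is a placeholder service name
mv /home/oa/data /data/oadata      # move the files onto the NFS mount
ln -s /data/oadata /home/oa/data   # symlink so the app still finds its old path
chown -h oa.oa /home/oa/data
/etc/init.d/oa start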
Part 4: LVS load-balancer configuration (a single load-balancer server)
1. DS (director server) configuration
Install the required dependencies:
yum install -y wget make kernel-devel gcc gcc-c++ libnl* libpopt* popt-static
Create a symlink so that compiling ipvsadm later can find the kernel sources:
ln -s /usr/src/kernels/2.6.32-696.30.1.el6.x86_64/ /usr/src/linux
Download and install ipvsadm:
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
tar zxvf ipvsadm-1.26.tar.gz
cd ipvsadm-1.26
make && make install
ipvsadm
Create the file /etc/init.d/lvsdr and make it executable:
#!/bin/sh
VIP=192.168.1.100
RIP1=192.168.1.105
RIP2=192.168.1.106
RIP3=192.168.1.107
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo "start LVS of DirectorServer"
    # set the Virtual IP Address
    ifconfig eth0:0 $VIP/24
    #/sbin/route add -host $VIP dev eth0:0
    # Clear IPVS table
    /sbin/ipvsadm -C
    # set LVS
    /sbin/ipvsadm -A -t $VIP:80 -s sh
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP3:80 -g
    /sbin/ipvsadm -A -t $VIP:25 -s sh
    /sbin/ipvsadm -a -t $VIP:25 -r $RIP1:25 -g
    /sbin/ipvsadm -a -t $VIP:25 -r $RIP2:25 -g
    /sbin/ipvsadm -a -t $VIP:25 -r $RIP3:25 -g
    /sbin/ipvsadm -A -t $VIP:110 -s sh
    /sbin/ipvsadm -a -t $VIP:110 -r $RIP1:110 -g
    /sbin/ipvsadm -a -t $VIP:110 -r $RIP2:110 -g
    /sbin/ipvsadm -a -t $VIP:110 -r $RIP3:110 -g
    /sbin/ipvsadm -A -t $VIP:143 -s sh
    /sbin/ipvsadm -a -t $VIP:143 -r $RIP1:143 -g
    /sbin/ipvsadm -a -t $VIP:143 -r $RIP2:143 -g
    /sbin/ipvsadm -a -t $VIP:143 -r $RIP3:143 -g
    #/sbin/ipvsadm -a -t $VIP:80 -r $RIP3:80 -g
    # Run LVS
    /sbin/ipvsadm
    # end
    ;;
stop)
    echo "close LVS Directorserver"
    /sbin/ipvsadm -C
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
[root@lvsserver ~]# chmod +x /etc/init.d/lvsdr
[root@lvsserver ~]# /etc/init.d/lvsdr start
Add it to rc.local so it starts at boot:
[root@lvsserver ~]# vi /etc/rc.local
/etc/init.d/lvsdr start
[root@localhost ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:50:56:8D:19:13
     inet addr:192.168.1.121 Bcast:192.168.1.255 Mask:255.255.255.0
     inet6 addr: fe80::250:56ff:fe8d:1913/64 Scope:Link
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
     RX packets:10420500 errors:0 dropped:0 overruns:0 frame:0
     TX packets:421628 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:1000
     RX bytes:1046805128 (998.3 MiB) TX bytes:101152496 (96.4 MiB)
eth0:0 Link encap:Ethernet HWaddr 00:50:56:8D:19:13
     inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
lo   Link encap:Local Loopback
     inet addr:127.0.0.1 Mask:255.0.0.0
     inet6 addr: ::1/128 Scope:Host
     UP LOOPBACK RUNNING MTU:65536 Metric:1
     RX packets:164717347 errors:0 dropped:0 overruns:0 frame:0
     TX packets:164717347 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:0
     RX bytes:28297589130 (26.3 GiB) TX bytes:28297589130 (26.3 GiB)

2. RS (real server) configuration (node1, node2, node3)
[root@oaserver1 ~]# vi /etc/init.d/realserver
#!/bin/sh
VIP=192.168.1.100
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo "start LVS of RealServer"
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    service network restart
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    route add -host $VIP dev lo:0
    # end
    ;;
stop)
    echo "close LVS Realserver"
    service network restart
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
[root@oaserver1 ~]# chmod +x /etc/init.d/realserver
[root@oaserver1 ~]# /etc/init.d/realserver start
[root@oaserver1 ~]# vi /etc/rc.local
/etc/init.d/realserver start
[root@oaserver1 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:DC:B1:39
     inet addr:192.168.1.105 Bcast:192.168.1.255 Mask:255.255.255.0
     inet6 addr: fe80::20c:29ff:fedc:b139/64 Scope:Link
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
     RX packets:816173 errors:0 dropped:0 overruns:0 frame:0
     TX packets:399007 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:1000
     RX bytes:534582215 (509.8 MiB) TX bytes:98167814 (93.6 MiB)
lo   Link encap:Local Loopback
     inet addr:127.0.0.1 Mask:255.0.0.0
     inet6 addr: ::1/128 Scope:Host
     UP LOOPBACK RUNNING MTU:65536 Metric:1
     RX packets:43283 errors:0 dropped:0 overruns:0 frame:0
     TX packets:43283 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:0
     RX bytes:8895319 (8.4 MiB) TX bytes:8895319 (8.4 MiB)
lo:0 Link encap:Local Loopback
     inet addr:192.168.1.100 Mask:255.255.255.255
     UP LOOPBACK RUNNING MTU:65536 Metric:1

3. Reboot the servers and test. I swapped out the logo on one of the nodes and then refreshed; if the logo changes between requests, load balancing works. The state can be inspected with the ipvsadm command.
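Once the director is up, the virtual service table and traffic distribution can be checked at any time with ipvsadm:

ipvsadm -L -n          # list virtual services, scheduler and real servers
ipvsadm -L -n --stats  # per-real-server packet/byte counters
ipvsadm -L -n -c       # current connections (handy for checking -s sh stickiness)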
Part 5: LVS + keepalived configuration (two load-balancer servers for high availability)
The two LVS load-balancer servers get high availability through keepalived. keepalived also performs health checks (automatically removing faulty RS nodes); without that, if one OA server went down, LVS would keep forwarding requests to it and cause access failures.
1. Environment
Keepalived1 + lvs1 (Director1): 192.168.1.121
Keepalived2 + lvs2 (Director2): 192.168.1.122
Real server1: 192.168.1.105
Real server2: 192.168.1.106
Real server3: 192.168.1.107
VIP: 192.168.1.100

2. Install the required dependencies:
yum install -y wget make kernel-devel gcc gcc-c++ libnl* libpopt* popt-static
Create a symlink so that compiling ipvsadm later can find the kernel sources:
ln -s /usr/src/kernels/2.6.32-696.30.1.el6.x86_64/ /usr/src/linux

3. Install on both LVS + keepalived nodes:
yum install ipvsadm keepalived -y
Alternatively, compile ipvsadm from source (untested, not recommended):
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.26.tar.gz
tar zxvf ipvsadm-1.26.tar.gz
cd ipvsadm-1.26
make && make install
ipvsadm

4. Real-server script on the three nodes (node1, node2, node3):
[root@oaserver1 ~]# vi /etc/init.d/realserver
#!/bin/sh
VIP=192.168.1.100
. /etc/rc.d/init.d/functions
case "$1" in
start)
    echo "start LVS of RealServer"
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
    service network restart
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    route add -host $VIP dev lo:0
    # end
    ;;
stop)
    echo "close LVS Realserver"
    service network restart
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
[root@oaserver1 ~]# chmod +x /etc/init.d/realserver
[root@oaserver1 ~]# /etc/init.d/realserver start
[root@oaserver1 ~]# vi /etc/rc.local
/etc/init.d/realserver start
[root@oaserver1 ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0C:29:07:D5:96
     inet addr:192.168.1.105 Bcast:192.168.1.255 Mask:255.255.255.0
     inet6 addr: fe80::20c:29ff:fe07:d596/64 Scope:Link
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
     RX packets:1390 errors:0 dropped:0 overruns:0 frame:0
     TX packets:1459 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:1000
     RX bytes:334419 (326.5 KiB) TX bytes:537109 (524.5 KiB)
lo   Link encap:Local Loopback
     inet addr:127.0.0.1 Mask:255.0.0.0
     inet6 addr: ::1/128 Scope:Host
     UP LOOPBACK RUNNING MTU:65536 Metric:1
     RX packets:2633 errors:0 dropped:0 overruns:0 frame:0
     TX packets:2633 errors:0 dropped:0 overruns:0 carrier:0
     collisions:0 txqueuelen:0
     RX bytes:539131 (526.4 KiB) TX bytes:539131 (526.4 KiB)
lo:0 Link encap:Local Loopback
     inet addr:192.168.1.100 Mask:255.255.255.255
     UP LOOPBACK RUNNING MTU:65536 Metric:1

5. LVS + keepalived node configuration (reportedly keepalived can also be set up to send email alerts when a node fails).
Master node (MASTER) configuration file:
vi /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100
    }
}
virtual_server 192.168.1.100 80 {
    delay_loop 6
    lb_algo sh
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.1.105 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.1.106 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.1.107 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
virtual_server 192.168.1.100 25 {
    delay_loop 6
    lb_algo sh
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.1.105 25 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 25
        }
    }
    real_server 192.168.1.106 25 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 25
        }
    }
    real_server 192.168.1.107 25 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 25
        }
    }
}
virtual_server 192.168.1.100 110 {
    delay_loop 6
    lb_algo sh
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.1.105 110 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 110
        }
    }
    real_server 192.168.1.106 110 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 110
        }
    }
    real_server 192.168.1.107 110 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 110
        }
    }
}
virtual_server 192.168.1.100 143 {
    delay_loop 6
    lb_algo sh
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.1.105 143 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 143
        }
    }
    real_server 192.168.1.106 143 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 143
        }
    }
    real_server 192.168.1.107 143 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 143
        }
    }
}
Backup node (BACKUP) configuration file: copy the master's keepalived.conf, then change:
state MASTER -> state BACKUP
priority 100 -> priority 90
Run the following on both keepalived nodes to enable forwarding:
# echo 1 > /proc/sys/net/ipv4/ip_forward

6. Disable the firewall on both nodes:
/etc/init.d/iptables stop
chkconfig iptables off

7. Start keepalived on the two nodes in order, master first, then backup:
service keepalived start
chkconfig keepalived on

8. Test failover (a drill is sketched below)
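A minimal failover drill for step 8, under the master/backup layout above:

# from a client, confirm the VIP answers
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.100/
# on the MASTER (192.168.1.121): stop keepalived to simulate a failure
service keepalived stop
# on the BACKUP (192.168.1.122): the VIP should appear within a few seconds
ip addr show eth0 | grep 192.168.1.100
tail /var/log/messages   # expect "Transition to MASTER STATE"
# restart the old master; with priority 100 > 90 it preempts and takes the VIP back
service keepalived start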
Part 6: Maintenance notes
1. When powering on or doing maintenance, please follow this startup order (2-3 minutes apart):
(1) 103 NFS server
(2) 105 business server
(3) 106, 107 business servers
(4) 121 load-balancer server
(5) 122 load-balancer server
Note: if 103 is not started first, the OA servers cannot mount the storage share; and if OA server 105 is not started first, the MySQL service on 106 and 107 will not come up.
2. Normal shutdown or reboot order:
(1) 121, 122 load-balancer servers
(2) 106, 107 business servers
(3) 105 business server
(4) 103 NFS server
3. If one of the three business servers hangs for a long time during a normal shutdown or reboot, just power it off.
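The hard dependency on startup order can be softened on the business servers with a small wait loop before the application starts, e.g. called from rc.local. A sketch under the Part 3 mount layout; the oa service name is a placeholder:

#!/bin/bash
# wait up to 5 minutes for the NFS share before starting the application
for i in $(seq 1 60); do
    grep -qs ' /data nfs' /proc/mounts && break
    mount -t nfs 192.168.1.103:/data /data 2>/dev/null
    sleep 5
done
grep -qs ' /data nfs' /proc/mounts && /etc/init.d/oa start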
--------------------------------------------------------------------------------
Appendix:
LVS scheduling algorithms fall into two classes: static and dynamic.

1. Static algorithms (4): scheduling follows the algorithm alone, without considering the back-end servers' actual connection counts or load.
① RR, Round Robin: the director distributes external requests to the real servers in the cluster in turn, treating every server equally regardless of its actual connection count and system load.
② WRR, Weighted Round Robin: the director schedules requests according to the real servers' differing processing capacities, so more capable servers handle a larger share of the traffic. The director can query the real servers' load and adjust their weights dynamically.
③ DH, Destination Hashing: the request's destination IP address is used as a hash key to find the assigned server in a statically allocated hash table; if that server is available and not overloaded, the request is sent to it, otherwise nothing is returned.
④ SH, Source Hashing: the request's source IP address is used as the hash key against a statically allocated hash table, with the same availability rules as DH.

2. Dynamic algorithms (6): the director assigns requests according to the real servers' actual connection states.
① LC, Least Connections: requests are dispatched dynamically to the server with the fewest established connections. When the real servers have similar capacity, least connections balances load well.
② WLC, Weighted Least Connections (the default): when server capacities differ significantly, the director uses weighted least connections to optimize balancing; servers with higher weights carry a larger proportion of active connections. The director can query the real servers' load and adjust weights dynamically.
③ SED, Shortest Expected Delay: an improvement on WLC. Overhead = (ACTIVE + 1) * 256 / weight; inactive connections are no longer counted, and 1 is added to the active count to make the weighting meaningful. The server with the smallest result receives the next request. Drawback: when one weight is much larger than the others, a lightly weighted idle server can be left with no connections at all.
④ NQ, Never Queue: no queueing; if a real server has 0 connections it is assigned the request directly, without performing the SED computation, guaranteeing that no host sits completely idle. NQ does not consider inactive connections, so it suits services like DNS over UDP, whereas for httpd's keep-alive connections the pressure inactive connections put on the server does need to be considered.
⑤ LBLC, Locality-Based Least Connections: load balancing keyed on the destination IP address, mainly used for cache clusters. The algorithm finds the server most recently used for the request's destination IP; if that server is available and not overloaded, the request goes to it. If the server no longer exists, or it is overloaded while another server is at half load, a usable server is chosen by the least-connections principle and the request is sent there.
⑥ LBLCR, Locality-Based Least Connections with Replication: also keyed on the destination IP address and mainly used for cache clusters. It differs from LBLC in that it maintains a mapping from a destination IP address to a set of servers, whereas LBLC maps a destination IP to a single server. The algorithm finds the server set for the request's destination IP and picks one server from it by least connections; if that server is not overloaded, the request goes to it. If it is overloaded, a server is picked by least connections from the whole cluster, added to the server set, and given the request. Meanwhile, if the server set has not been modified for some time, the busiest server is removed from it to reduce the degree of replication.

3. Normal keepalived failover entries in /var/log/messages
Taken when the other server went down and this one took over automatically:
Aug 4 15:15:47 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Transition to MASTER STATE
Aug 4 15:15:48 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Entering MASTER STATE
Aug 4 15:15:48 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) setting protocol VIPs.
Aug 4 15:15:48 localhost Keepalived_healthcheckers[1303]: Netlink reflector reports IP 192.168.1.100 added
Aug 4 15:15:48 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.100
Aug 4 15:15:53 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.100
Taken when the other server recovered and this one released the resources automatically:
Aug 4 15:17:25 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Received higher prio advert
Aug 4 15:17:25 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) Entering BACKUP STATE
Aug 4 15:17:25 localhost Keepalived_vrrp[1306]: VRRP_Instance(VI_1) removing protocol VIPs.
Aug 4 15:17:25 localhost Keepalived_healthcheckers[1303]: Netlink reflector reports IP 192.168.1.100 removed
Aug 4 15:17:34 localhost Keepalived_healthcheckers[1303]: TCP connection to [192.168.1.105]:110 success.
Aug 4 15:17:34 localhost Keepalived_healthcheckers[1303]: Adding service [192.168.1.105]:110 to VS [192.168.1.100]:110
Aug 4 15:17:35 localhost Keepalived_healthcheckers[1303]: TCP connection to [192.168.1.105]:143 success.
Aug 4 15:17:35 localhost Keepalived_healthcheckers[1303]: Adding service [192.168.1.105]:143 to VS [192.168.1.100]:143
Aug 4 15:18:04 localhost Keepalived_healthcheckers[1303]: TCP connection to [192.168.1.107]:110 success.
Aug 4 15:18:04 localhost Keepalived_healthcheckers[1303]: Adding service [192.168.1.107]:110 to VS [192.168.1.100]:110
Aug 4 15:18:05 localhost Keepalived_healthcheckers[1303]: TCP connection to [192.168.1.107]:143 success.
Aug 4 15:18:05 localhost Keepalived_healthcheckers[1303]: Adding service [192.168.1.107]:143 to VS [192.168.1.100]:143
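For reference, the scheduler chosen in Parts 4 and 5 (-s sh) can be swapped for any of the algorithms above without touching the real servers. With bare ipvsadm (Part 4), for example:

ipvsadm -E -t 192.168.1.100:80 -s wlc                       # switch port 80 to weighted least-connections
ipvsadm -e -t 192.168.1.100:80 -r 192.168.1.105:80 -g -w 0  # weight 0 drains a real server without removing it
ipvsadm -L -n                                               # confirm the change

With keepalived (Part 5), change lb_algo in keepalived.conf instead and restart keepalived, since it will otherwise re-apply the configured algorithm.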