MySQL read/write splitting

It has been a long time since my last blog post. With one day of annual leave left, I am summarizing the read/write splitting used in last year's project, along with its caveats, for reference the next time a project needs it.

Preparation

1 Development environment: Windows, IDEA, Maven, Spring Boot, MyBatis, Druid (Alibaba's database connection pool)

2 Database servers: Linux, MySQL master (192.168.203.135), MySQL slave (192.168.203.139)

3 Before setting up read/write splitting, master-slave replication must already be in place. Replication itself is not the focus of this article; readers can find tutorials via Google or Baidu, and they are all essentially the same and workable.

 

Note the following points:
a: When setting up replication, first make sure neither server's MySQL has any user-defined databases (otherwise data created before the configuration cannot be synchronized; if both servers already hold exactly the same databases, synchronization should still work).
b: server_id must be different on each server.
c: The firewall must not block the MySQL service port (3306 by default).
d: Make sure the two MySQL servers can reach each other.
e: To reset the master and slave: RESET MASTER; RESET SLAVE;. To start/stop the slave: START SLAVE; STOP SLAVE;.
f: The master and slave DB servers should run the same database version.

4 Read/write splitting approaches:

  4-1 Implemented in application code: statements are routed in code depending on whether they are SELECTs or INSERTs. This is currently the most widely used approach in production. Its advantage is good performance and no extra hardware cost, since everything is done in the application code; the drawback is that it must be implemented by developers, and operations staff have no way to intervene.

  4-2 Implemented with a middleware proxy: the proxy usually sits between the application servers and the database servers. It receives the application's requests, inspects them, and forwards them to the appropriate backend database. There are several representative products in this category.

 This article covers both approaches:

Implementation based on application-layer code (the content is expressed through the code; necessary notes are in the code comments)

1 Configure pom.xml and import the required dependencies

  

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.lishun</groupId>
    <artifactId>mysql_master_salve</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>mysql_master_salve</name>
    <description>Demo project for Spring Boot</description>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.10.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>1.3.1</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <version>RELEASE</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid</artifactId>
            <version>1.0.18</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-aop</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.mybatis.generator</groupId>
                <artifactId>mybatis-generator-maven-plugin</artifactId>
                <version>1.3.2</version>
                <dependencies>
                    <dependency>
                        <groupId>mysql</groupId>
                        <artifactId>mysql-connector-java</artifactId>
                        <version>5.1.43</version>
                    </dependency>
                </dependencies>
                <configuration>
                    <overwrite>true</overwrite>
                </configuration>
            </plugin>
        </plugins>
    </build>


</project>

 

2 Configure application.properties

server.port=9022
# MyBatis: location of the *.xml mapper files and the entity alias package
mybatis.mapper-locations=classpath:mapper/*.xml
mybatis.type-aliases-package=com.lishun.entity

spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.password=123456
spring.datasource.username=root

# write node (master)
spring.datasource.master.url=jdbc:mysql://192.168.203.135:3306/worldmap
# two read nodes (for easier testing both point to the same server here; this should not be done in production)
spring.datasource.salve1.url=jdbc:mysql://192.168.203.139:3306/worldmap
spring.datasource.salve2.url=jdbc:mysql://192.168.203.139:3306/worldmap

# Druid connection pool settings
# initial, minimum and maximum pool size
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.initialSize=5
spring.datasource.minIdle=5
spring.datasource.maxActive=20
# maximum time to wait when acquiring a connection, in milliseconds
spring.datasource.maxWait=60000
# how often the eviction thread runs to check for idle connections to close, in milliseconds
spring.datasource.timeBetweenEvictionRunsMillis=60000
# minimum time a connection may stay idle in the pool, in milliseconds
spring.datasource.minEvictableIdleTimeMillis=300000
spring.datasource.validationQuery=SELECT 1 FROM rscipc_sys_user
spring.datasource.testWhileIdle=true
spring.datasource.testOnBorrow=false
spring.datasource.testOnReturn=false
# enable PSCache and set its size per connection
spring.datasource.poolPreparedStatements=true
spring.datasource.maxPoolPreparedStatementPerConnectionSize=20
# monitoring/statistics filters; without them the Druid console cannot collect SQL statistics; 'wall' enables the SQL firewall
spring.datasource.filters=stat,wall,log4j
# enable the mergeSql feature via connectProperties; record slow SQL
spring.datasource.connectionProperties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
spring.datasource.logSlowSql=true
#End

3 Startup class (note: any other beans managed by Spring (services, configs, etc.) must live in sub-packages of this startup class, otherwise they will not be scanned and injection will fail)

@SpringBootApplication
@MapperScan("com.lishun.mapper") // note: scan all mapper interfaces
public class MysqlMasterSalveApplication {
	public static void main(String[] args) {
		SpringApplication.run(MysqlMasterSalveApplication.class, args);
	}
}

4 Dynamic data source: DynamicDataSource

  

/**
 * @author lishun
 * @Description: dynamic data source, extends AbstractRoutingDataSource
 * @date 2017/8/9
 */
public class DynamicDataSource extends AbstractRoutingDataSource {
	public static final Logger log = LoggerFactory.getLogger(DynamicDataSource.class);

	/**
	 * default data source key
	 */
	public static final String DEFAULT_DS = "read_ds";
	private static final ThreadLocal<String> contextHolder = new ThreadLocal<>();
	public static void setDB(String dbType) { // set the data source key for the current thread
		log.info("switching to data source {}", dbType);
		contextHolder.set(dbType);
	}

	public static void clearDB() {
		contextHolder.remove();
	} // clear the data source key
	@Override
	protected Object determineCurrentLookupKey() {
		return contextHolder.get();
	}
}
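
For completeness, the same ThreadLocal switch can also be driven by hand, without the AOP aspect shown in step 7. A minimal sketch (the key "write_ds" is one of the keys registered in the DruidConfig of step 5):

try {
	DynamicDataSource.setDB("write_ds"); // must be a key registered as a target data source in step 5
	// ... execute write statements through a mapper here ...
} finally {
	DynamicDataSource.clearDB(); // always clear, otherwise the key leaks to the next task on this thread
}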

5 Configuring the data sources with the Druid connection pool

@Configuration
public class DruidConfig {
	private Logger logger = LoggerFactory.getLogger(DruidConfig.class);

	@Value("${spring.datasource.master.url}")
	private String masterUrl;

	@Value("${spring.datasource.salve1.url}")
	private String salve1Url;

	@Value("${spring.datasource.salve2.url}")
	private String salve2Url;

	@Value("${spring.datasource.username}")
	private String username;

	@Value("${spring.datasource.password}")
	private String password;

	@Value("${spring.datasource.driver-class-name}")
	private String driverClassName;

	@Value("${spring.datasource.initialSize}")
	private int initialSize;

	@Value("${spring.datasource.minIdle}")
	private int minIdle;

	@Value("${spring.datasource.maxActive}")
	private int maxActive;

	@Value("${spring.datasource.maxWait}")
	private int maxWait;

	@Value("${spring.datasource.timeBetweenEvictionRunsMillis}")
	private int timeBetweenEvictionRunsMillis;

	@Value("${spring.datasource.minEvictableIdleTimeMillis}")
	private int minEvictableIdleTimeMillis;

	@Value("${spring.datasource.validationQuery}")
	private String validationQuery;

	@Value("${spring.datasource.testWhileIdle}")
	private boolean testWhileIdle;

	@Value("${spring.datasource.testOnBorrow}")
	private boolean testOnBorrow;

	@Value("${spring.datasource.testOnReturn}")
	private boolean testOnReturn;

	@Value("${spring.datasource.filters}")
	private String filters;

	@Value("${spring.datasource.logSlowSql}")
	private String logSlowSql;

	@Bean
	public ServletRegistrationBean druidServlet() {

		logger.info("init Druid Servlet Configuration ");
		ServletRegistrationBean reg = new ServletRegistrationBean();
		reg.setServlet(new StatViewServlet());
		reg.addUrlMappings("/druid/*");
		reg.addInitParameter("loginUsername", username);
		reg.addInitParameter("loginPassword", password);
		reg.addInitParameter("logSlowSql", logSlowSql);
		return reg;
	}

	@Bean
	public FilterRegistrationBean filterRegistrationBean() {
		FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
		filterRegistrationBean.setFilter(new WebStatFilter());
		filterRegistrationBean.addUrlPatterns("/*");
		filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
		filterRegistrationBean.addInitParameter("profileEnable", "true");
		return filterRegistrationBean;
	}

	@Bean
	public DataSource druidDataSource() {
		DruidDataSource datasource = new DruidDataSource();
		datasource.setUrl(masterUrl);
		datasource.setUsername(username);
		datasource.setPassword(password);
		datasource.setDriverClassName(driverClassName);
		datasource.setInitialSize(initialSize);
		datasource.setMinIdle(minIdle);
		datasource.setMaxActive(maxActive);
		datasource.setMaxWait(maxWait);
		datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
		datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
		datasource.setValidationQuery(validationQuery);
		datasource.setTestWhileIdle(testWhileIdle);
		datasource.setTestOnBorrow(testOnBorrow);
		datasource.setTestOnReturn(testOnReturn);
		try {
			datasource.setFilters(filters);
		} catch (SQLException e) {
			logger.error("druid configuration initialization filter", e);
		}

		Map<Object, Object> dsMap = new HashMap();
		dsMap.put("read_ds_1", druidDataSource_read1());
		dsMap.put("read_ds_2", druidDataSource_read2());

		dsMap.put("write_ds", datasource);

		DynamicDataSource dynamicDataSource = new DynamicDataSource();
		dynamicDataSource.setTargetDataSources(dsMap);
		return dynamicDataSource;
	}

	public DataSource druidDataSource_read1() {
		DruidDataSource datasource = new DruidDataSource();
		datasource.setUrl(salve1Url);
		datasource.setUsername(username);
		datasource.setPassword(password);
		datasource.setDriverClassName(driverClassName);
		datasource.setInitialSize(initialSize);
		datasource.setMinIdle(minIdle);
		datasource.setMaxActive(maxActive);
		datasource.setMaxWait(maxWait);
		datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
		datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
		datasource.setValidationQuery(validationQuery);
		datasource.setTestWhileIdle(testWhileIdle);
		datasource.setTestOnBorrow(testOnBorrow);
		datasource.setTestOnReturn(testOnReturn);
		try {
			datasource.setFilters(filters);
		} catch (SQLException e) {
			logger.error("druid configuration initialization filter", e);
		}
		return datasource;
	}
	public DataSource druidDataSource_read2() {
		DruidDataSource datasource = new DruidDataSource();
		datasource.setUrl(salve2Url);
		datasource.setUsername(username);
		datasource.setPassword(password);
		datasource.setDriverClassName(driverClassName);
		datasource.setInitialSize(initialSize);
		datasource.setMinIdle(minIdle);
		datasource.setMaxActive(maxActive);
		datasource.setMaxWait(maxWait);
		datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
		datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
		datasource.setValidationQuery(validationQuery);
		datasource.setTestWhileIdle(testWhileIdle);
		datasource.setTestOnBorrow(testOnBorrow);
		datasource.setTestOnReturn(testOnReturn);
		try {
			datasource.setFilters(filters);
		} catch (SQLException e) {
			logger.error("druid configuration initialization filter", e);
		}
		return datasource;
	}

}
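
One thing worth noting: the default key read_ds declared in DynamicDataSource is never put into dsMap (only read_ds_1, read_ds_2 and write_ds are), so a service method that carries neither annotation would fail the data source lookup. A possible safeguard, not part of the original configuration, is to register a fallback before druidDataSource() returns:

		// optional fallback (not in the original setup): un-annotated methods, whose lookup key
		// stays "read_ds", are sent to the master instead of failing the lookup
		dynamicDataSource.setDefaultTargetDataSource(datasource);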

6 Data source annotations: the data source is selected in the service layer via these annotations

   

/**
 * @author lishun
 * @Description: annotation marking a method as using the read data source
 * @date 2017/8/9
 */
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface ReadDataSource {
	String value() default "read_ds";
}

/**
 * @author lishun
 * @Description: annotation marking a method as using the write data source
 * @date 2017/8/9
 */
@Target({ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface WriteDataSource {
	String value() default "write_ds";
}

7 AOP aspect on the service layer that switches the data source

  

/**
 * @author lishun
 * @Description: aspect that selects the data source before service methods execute
 * @date 2017/8/9
 */
@Component
@Aspect
public class ServiceAspect implements PriorityOrdered {
	@Pointcut("execution(public * com.lishun.service.*.*(..))")
	public void dataSource(){};

	@Before("dataSource()")
	public void before(JoinPoint joinPoint){
		Class<?> className = joinPoint.getTarget().getClass(); // class currently being invoked
		String methodName = joinPoint.getSignature().getName(); // name of the invoked method
		Class[] argClass = ((MethodSignature)joinPoint.getSignature()).getParameterTypes(); // parameter types of the method
		String dataSource = DynamicDataSource.DEFAULT_DS;
		try {
			Method method = className.getMethod(methodName, argClass); // the invoked Method object
			if (method.isAnnotationPresent(ReadDataSource.class)) {
				ReadDataSource annotation = method.getAnnotation(ReadDataSource.class);
				dataSource = annotation.value();
				int i = new Random().nextInt(2) + 1;    /* naive load balancing between the two read nodes */

				dataSource = dataSource + "_" + i;
			}else if (method.isAnnotationPresent(WriteDataSource.class)){
				WriteDataSource annotation = method.getAnnotation(WriteDataSource.class);
				dataSource = annotation.value();
			}
		} catch (Exception e) {
			e.printStackTrace();
		}
		DynamicDataSource.setDB(dataSource); // switch the data source
	}

	/* alternative: route by method name
	@Before("execution(public * com.lishun.service.*.find*(..)) || execution(public * com.lishun.service.*.query*(..))")
	public void read(JoinPoint joinPoint){
		DynamicDataSource.setDB("read_ds"); // switch to the read data source
	}
	@Before("execution(public * com.lishun.service.*.insert*(..)) || execution(public * com.lishun.service.*.add*(..))")
	public void write(JoinPoint joinPoint){
		DynamicDataSource.setDB("write_ds"); // switch to the write data source
	}
	*/

	@After("dataSource()")
	public void after(JoinPoint joinPoint){
		DynamicDataSource.clearDB(); // clear the data source after the method returns
	}

	@AfterThrowing("dataSource()")
	public void AfterThrowing(){
		System.out.println("AfterThrowing---------------" );
	}

	@Override
	public int getOrder() {
		return 1; // the lower the value, the earlier this aspect runs, so the data source is chosen before the transaction AOP (prevents the transaction advice from seeing a null data source)
	}
}

8 Testing. The mapper code is not shown; the interesting parts are the service and the controller.

  service

@Service
@Transactional
public class WmIpInfoServiceImpl implements WmIpInfoService {
	@Autowired
	public WmIpInfoMapper wmIpInfoMapper;

	@Override
	@ReadDataSource
	public WmIpInfo findOneById(String id) {
		//wmIpInfoMapper.selectByPrimaryKey(id);
		return wmIpInfoMapper.selectByPrimaryKey(id);
	}

	@Override
	@WriteDataSource
	public int insert(WmIpInfo wmIpInfo) {
		int result = wmIpInfoMapper.insert(wmIpInfo);
		return result;
	}
}
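
Note that Spring AOP advice only runs when a call goes through the proxy; if one service method calls another on the same class directly, the inner method's annotation is ignored. A sketch of that pitfall (insertAndReload is a hypothetical method, not part of the original project):

	@Override
	@WriteDataSource
	public int insertAndReload(WmIpInfo wmIpInfo) {
		int n = wmIpInfoMapper.insert(wmIpInfo);
		// this internal call bypasses the Spring proxy, so @ReadDataSource on findOneById
		// is ignored and the SELECT still runs against the write_ds chosen above
		findOneById(wmIpInfo.getId());
		return n;
	}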

  Controller

@RestController
public class IndexController {
	@Autowired
	public WmIpInfoService wmIpInfoService;
	@GetMapping("/index/{id}")
	public WmIpInfo index(@PathVariable(value = "id") String id){
		WmIpInfo wmIpInfo = new WmIpInfo();
		wmIpInfo.setId(UUID.randomUUID().toString());
		wmIpInfoService.insert(wmIpInfo);
		wmIpInfoService.findOneById(id);
		return null;
	}
}

  Run the Spring Boot application and open http://localhost:9022/index/123456 in a browser

  Check the logs

  

 

 Read/write splitting based on middleware (MyCat: mainly MyCat installation, usage and caveats)

3-1 Download: http://dl.mycat.io/
3-2 Unpack it and configure MYCAT_HOME;
3-3 Edit the file: vim conf/schema.xml

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">

<mycat:schema xmlns:mycat="http://io.mycat/">
  <schema name="worldmap" checkSQLschema="false" sqlMaxLimit="100" dataNode="worldmap_node"></schema>
  <dataNode name="worldmap_node" dataHost="worldmap_host" database="worldmap" /> <!-- database: the physical database name -->
  <dataHost name="worldmap_host" maxCon="1000" minCon="10" balance="1" writeType="0" dbType="mysql" dbDriver="native" switchType="2" slaveThreshold="100">
    <heartbeat>select user()</heartbeat>
    <writeHost host="hostM1" url="192.168.203.135:3306" user="root" password="123456"><!-- read/write splitting: writes go to 192.168.203.135, reads go to 192.168.203.139 -->
      <readHost host="hostR1" url="192.168.203.139:3306" user="root" password="123456" />
    </writeHost>
    <writeHost host="hostM2" url="192.168.203.139:3306" user="root" password="123456" /> <!-- failover: if hostM1 goes down, reads and writes are executed on hostM2 -->
  </dataHost>
</mycat:schema>

  Configuration notes:
  name: uniquely identifies the dataHost element so it can be referenced by the elements above it.
  maxCon: maximum number of connections
  minCon: minimum number of connections
  balance
    1. balance=0: read/write splitting is disabled; all reads are sent to the currently available writeHost.
    2. balance=1: all readHosts and the standby writeHosts take part in the load balancing of SELECT statements. In short, in a dual-master/dual-slave setup (M1→S1, M2→S2, with M1 and M2 as mutual backups), under normal conditions M2, S1 and S2 all take part in the load balancing of SELECT statements.
    3. balance=2: all reads are distributed randomly across the readHosts and writeHosts.

  writeType: the load-balancing type for writes; currently three values:
    1. writeType="0": all writes are sent to the first configured writeHost.
    2. writeType="1": writes are sent randomly to any of the configured writeHosts.
    3. writeType="2": no writes are executed.

  switchType
    1. switchType=-1: no automatic switching.
    2. switchType=1: the default; automatic switching.
    3. switchType=2: switching is decided by the state of MySQL master-slave replication.

  dbType: the backend database type: mysql, postgresql, mongodb, oracle, spark, etc.

  heartbeat: the statement used for heartbeat checks against the backend database. For example, MySQL can use select user(), Oracle can use select 1 from dual, and so on.
      The dataHost element also has a connectionInitSql attribute, mainly for initialization SQL that must be run when using Oracle, e.g. alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss'.
      When switchType=2, the heartbeat statement must be: show slave status.

  writeHost, readHost: both elements describe a backend database to MyCat and are used to instantiate the backend connection pools. The only difference is that writeHost defines a write instance and readHost a read instance.
            A dataHost may contain multiple writeHosts and readHosts, but if the database behind a writeHost goes down, every readHost bound to it becomes unavailable as well.
            On the other hand, MyCat detects the failed writeHost automatically and switches over to the standby writeHost.

3-4 Edit the file: vim conf/server.xml

  

<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
<system>

</system>

<user name="root">
  <property name="password">123456</property>
  <property name="schemas">worldmap</property><!-- must match the schema defined in schema.xml -->
  <property name="readOnly">false</property> <!-- readOnly controls the privileges the application has on this logical schema: true = read only, false = read and write; the default is false. -->
</user>

</mycat:server>

 

3-5 Start MyCat: mycat start
Check the startup log logs/wrapper.log; after a successful start a mycat.log file is also created; if the service did not start successfully, that log will not exist.

3-6 For developers, MyCat behaves like a new database server (default port 8066). Instead of connecting to the databases directly, the application connects to the middleware, which parses each SQL statement with its built-in SQL parser and routes it to the appropriate database (essentially based on the SELECT, INSERT, UPDATE and DELETE keywords).
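
In other words, the application only needs a JDBC URL that points at MyCat instead of at MySQL itself. A minimal sketch (the MyCat host 192.168.203.140 is assumed; the user, password and schema are the ones defined in server.xml and schema.xml above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MycatConnectionDemo {
	public static void main(String[] args) throws Exception {
		// MyCat listens on 8066 and speaks the MySQL protocol, so the ordinary MySQL driver is used
		String url = "jdbc:mysql://192.168.203.140:8066/worldmap"; // assumed MyCat host
		try (Connection conn = DriverManager.getConnection(url, "root", "123456");
		     Statement stmt = conn.createStatement();
		     ResultSet rs = stmt.executeQuery("select user()")) { // a plain SELECT is routed to the readHost
			while (rs.next()) {
				System.out.println(rs.getString(1));
			}
		}
	}
}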

3-7 Test the read/write splitting

  Reads are routed to 192.168.203.139

  Writes are routed to 192.168.203.135

 

  When the master goes down, both reads and writes go to 192.168.203.139

  

  

3-8 Caveat: most frameworks use transactions. If every operation runs inside a transaction, every statement is sent to the master and the splitting has no effect. So be careful how transactions are configured, for example apply them only to methods that insert, update or delete, as sketched below.
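
One way to do that is to annotate only the writing methods instead of putting @Transactional on the whole service class as in step 8. A sketch under that assumption (not the original project code):

@Service
public class WmIpInfoServiceImpl implements WmIpInfoService {
	@Autowired
	public WmIpInfoMapper wmIpInfoMapper;

	@Override
	public WmIpInfo findOneById(String id) {
		// not transactional: the middleware is free to route this SELECT to the read host
		return wmIpInfoMapper.selectByPrimaryKey(id);
	}

	@Override
	@Transactional // only the writing method runs in a transaction, so only it is pinned to the write host
	public int insert(WmIpInfo wmIpInfo) {
		return wmIpInfoMapper.insert(wmIpInfo);
	}
}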
