Spring Boot + MyBatis + Gradle: building a project step by step

1. Download a scaffold from the official Spring Boot site according to your needs, or search GitHub for a matching scaffold project ^0^

• The directory layout is shown below (ignore the generatorConfig.xml and log4j2.xml files for now; they are covered later)


 2. Generating code with the MyBatis generator plugin

•  Gradle configuration

// dependency pulled in for MyBatis code generation
compile group: 'org.mybatis.generator', name: 'mybatis-generator-core', version: '1.3.3'

// the MyBatis code generation plugin
apply plugin: "com.arenagod.gradle.MybatisGenerator"

configurations {
    mybatisGenerator
}

mybatisGenerator {
    verbose = true
    // path to the generator configuration file
    configFile = 'src/main/resources/generatorConfig.xml'
}

•  generatorConfig.xml explained

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE generatorConfiguration PUBLIC "-//mybatis.org//DTD MyBatis Generator Configuration 1.0//EN" "http://mybatis.org/dtd/mybatis-generator-config_1_0.dtd">

<generatorConfiguration>

    <!-- Path to the JDBC driver jar; you can find it in the project's dependency cache and copy the path over -->
    <classPathEntry location="C:\Users\pc\.gradle\caches\modules-2\files-2.1\mysql\mysql-connector-java\5.1.38\dbbd7cd309ce167ec8367de4e41c63c2c8593cc5\mysql-connector-java-5.1.38.jar"/>

    <context id="mysql" targetRuntime="MyBatis3">
        <!-- suppress generated comments -->
        <commentGenerator>
            <property name="suppressAllComments" value="true"/>
        </commentGenerator>

        <!-- database connection info -->
        <jdbcConnection driverClass="com.mysql.jdbc.Driver" connectionURL="jdbc:mysql://127.0.0.1:3306/xxx" userId="root" password="">
        </jdbcConnection>

        <!-- Package for the generated models; rootClass is the base class that every generated model will extend, and trimStrings makes string setters trim whitespace -->
        <javaModelGenerator targetPackage="com.springboot.mybatis.demo.model" targetProject="D:/self-code/spring-boot-mybatis/spring-boot-mybatis/src/main/java">
            <property name="enableSubPackages" value="true"/>
            <property name="trimStrings" value="true"/>
            <property name="rootClass" value="com.springboot.mybatis.demo.model.common.BaseModel"/>
        </javaModelGenerator>

        <!-- where the mapper XML files are generated -->
        <sqlMapGenerator targetPackage="mapper" targetProject="D:/self-code/spring-boot-mybatis/spring-boot-mybatis/src/main/resources">
            <property name="enableSubPackages" value="true"/>
        </sqlMapGenerator>

        <!-- where the generated mapper interfaces go -->
        <javaClientGenerator type="XMLMAPPER" targetPackage="com.springboot.mybatis.demo.mapper" targetProject="D:/self-code/spring-boot-mybatis/spring-boot-mybatis/src/main/java">
            <property name="enableSubPackages" value="true"/>
        </javaClientGenerator>

        <!-- The table to generate from; with enableCountByExample and friends set to true the XML would also get the verbose "ByExample" methods, so they are turned off -->
        <table tableName="tb_user" domainObjectName="User" enableCountByExample="false" enableDeleteByExample="false" enableSelectByExample="false" enableUpdateByExample="false"></table>
    </context>


</generatorConfiguration>

In the configuration above, make sure targetProject is an absolute path to avoid errors; targetPackage is the package the generated classes are placed in (make sure the package already exists, otherwise nothing is generated).

•  Generating the code

Once the configuration is done, first create the corresponding table in the database and make sure the database is reachable, then run gradle mbGenerator in a terminal or click the corresponding Gradle task (shown below).

On success it generates the model classes, the mapper interfaces and the mapper XML files.
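
For reference, the generated model for the tb_user table looks roughly like the sketch below (the field names are only an illustration and really come from your table columns; the class extends BaseModel because of the rootClass setting, and trimStrings adds the trim() calls):

package com.springboot.mybatis.demo.model;

import com.springboot.mybatis.demo.model.common.BaseModel;

public class User extends BaseModel {
    // hypothetical columns of tb_user
    private Long id;
    private String usrName;
    private String usrPassword;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getUsrName() { return usrName; }
    // trimStrings=true makes the generated setters trim string values
    public void setUsrName(String usrName) { this.usrName = usrName == null ? null : usrName.trim(); }
    public String getUsrPassword() { return usrPassword; }
    public void setUsrPassword(String usrPassword) { this.usrPassword = usrPassword == null ? null : usrPassword.trim(); }
}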

 


 3. Integrating logging

• Gradle configuration

compile group: 'org.springframework.boot', name: 'spring-boot-starter-log4j2', version: '1.4.0.RELEASE'

// exclude the conflicting default logging starter
configurations {
    mybatisGenerator
    compile.exclude module: 'spring-boot-starter-logging'
}

Without the spring-boot-starter-log4j2 dependency you get an error: java.lang.IllegalStateException: Logback configuration error detected

Background: https://blog.csdn.net/blueheart20/article/details/78111350?locationNum=5&fps=1

Fix: exclude the spring-boot-starter-logging dependency

what???

After excluding it, using the logger fails again with: Failed to load class "org.slf4j.impl.StaticLoggerBinder"

Background: https://blog.csdn.net/lwj_199011/article/details/51853110

Fix: add the spring-boot-starter-log4j2 dependency. The packages it depends on are listed below:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starters</artifactId>
        <version>1.4.0.RELEASE</version>
    </parent>
    <artifactId>spring-boot-starter-log4j2</artifactId>
    <name>Spring Boot Log4j 2 Starter</name>
    <description>Starter for using Log4j2 for logging. An alternative to spring-boot-starter-logging</description>
    <url>http://projects.spring.io/spring-boot/</url>
    <organization>
        <name>Pivotal Software, Inc.</name>
        <url>http://www.spring.io</url>
    </organization>
    <properties>
        <main.basedir>${basedir}/../..</main.basedir>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j-impl</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jul-to-slf4j</artifactId>
        </dependency>
    </dependencies>
</project>

It depends on log4j-slf4j-impl, i.e. it uses the Log4j2 logging framework.

This involves log4j, logback, log4j2 and slf4j - so how do they relate to each other? unbelievable... Here is the gist:

slf4j, log4j, logback, log4j2
Logging facade (slf4j): slf4j is a specification / standard / set of interfaces for logging frameworks, not a concrete implementation. The interfaces cannot be used on their own; they have to be paired with a concrete logging framework (log4j, logback, ...).
Logging implementations (log4j, logback, log4j2): log4j is an open-source logging component from Apache. logback was designed by the author of log4j as its replacement; it has better features and is the native slf4j implementation. Log4j2 is an improvement over log4j 1.x and logback; it reportedly uses newer techniques (lock-free asynchronous logging, etc.) that raise throughput roughly tenfold over log4j 1.x, fixes some deadlock bugs, and has simpler, more flexible configuration. Docs: http://logging.apache.org/log4j/2.x/manual/configuration.html
Why use a logging facade instead of a concrete implementation directly? The facade defines the contract and can have multiple implementations; application code is written against the facade only (every import is an slf4j package, never a specific framework's), so the implementation can be swapped at will without touching any logging code. For example: slf4j defines the interfaces and the project uses logback underneath; all calls in your code go through slf4j, which delegates to logback, and the application never uses logback directly. When the project later moves to a better framework (say Log4j2), you only add the Log4j2 jar and its configuration file - not a single logger.info("xxx") call changes, and the imports (import org.slf4j.Logger; import org.slf4j.LoggerFactory;) stay exactly the same.
Coding against the facade makes it easy to switch to another framework (it acts as an adapter). log4j, logback and log4j2 are all concrete frameworks, so each can be used standalone or together with slf4j.
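
A minimal illustration of coding against the facade only - the calls below stay exactly the same whether logback or Log4j2 ends up on the classpath:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FacadeDemo {
    // only slf4j types appear in application code; the backend is picked at runtime from the classpath
    private static final Logger logger = LoggerFactory.getLogger(FacadeDemo.class);

    public static void main(String[] args) {
        logger.info("switching from logback to log4j2 does not change this line");
    }
}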

• So we are using the Log4j2 framework here. Next comes the Log4j2 configuration (it can be written as properties, XML or YAML; XML is used here; for a detailed reference see http://www.javashuo.com/article/p-gzkgeswd-dh.html). The annotated configuration:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Log levels in priority order: OFF > FATAL > ERROR > WARN > INFO > DEBUG > TRACE > ALL -->
 <!-- The status attribute on Configuration controls Log4j2's own internal logging; it is optional, and when set to trace you will see all kinds of detailed internal output -->
<!-- monitorInterval: Log4j2 can detect changes to this file and reconfigure itself; the value is the check interval in seconds -->
<Configuration status="WARN">

    <!-- shared properties -->
    <Properties>
        <Property name="PID">????</Property>
        <Property name="LOG_PATTERN"> [%d{yyyy-MM-dd HH:mm:ss.SSS}] - ${sys:PID} --- %c{1}: %m%n </Property>
    </Properties>

    <!-- appenders define where log output goes -->
    <Appenders>

        <!-- console output -->
        <Console name="Console" target="SYSTEM_OUT" follow="true">
            <PatternLayout pattern="${LOG_PATTERN}">
            </PatternLayout>
        </Console>

        <!-- A plain File appender writes everything; with append=false the file is wiped on every run, which is handy for quick tests -->
        <!-- append=true (the default) appends messages to the file, append=false overwrites it -->
        <!--<File name="File" fileName="logs/log.log" append="false">-->
            <!--<PatternLayout>-->
                <!--<pattern>[%-5p] %d %c - %m%n</pattern>-->
            <!--</PatternLayout>-->
        <!--</File>-->

        <!-- Logs everything; whenever the file exceeds the configured size, the log is archived and compressed under a year-month folder -->
        <RollingFile name="RollingAllFile" fileName="logs/all/all.log" filePattern="logs/all/$${date:yyyy-MM}/all-%d{yyyy-MM-dd}-%i.log.gz">
            <PatternLayout pattern="${LOG_PATTERN}" />
            <Policies>
                <!-- The following policies work together with filePattern to roll the log file periodically -->
                <!-- TimeBasedTriggeringPolicy: time-based rollover, with two parameters: 1. interval (integer), the interval between rollovers, whose unit is the finest unit in filePattern (yyyy-MM-dd-HH means hours, yyyy-MM-dd-HH-mm means minutes); 2. modulate (boolean), whether to align rollover times to the interval boundary starting from midnight; e.g. with modulate=true and interval=4 hours, a rollover at 03:00 is followed by rollovers at 04:00, 08:00, 12:00, 16:00 -->
                <!--<TimeBasedTriggeringPolicy/>-->
                <!-- SizeBasedTriggeringPolicy: size-based rollover; with the setting below, once a single file reaches the configured size its contents are archived into a year-month directory, named "xxx-year-month-day-index" and compressed -->
                <SizeBasedTriggeringPolicy size="200 MB"/>
            </Policies>
        </RollingFile>

        <!-- A ThresholdFilter lets an appender accept only a given level and above; onMatch="ACCEPT" onMismatch="DENY" means accept on match, reject otherwise -->
        <RollingFile name="RollingErrorFile" fileName="logs/error/error.log" filePattern="logs/error/$${date:yyyy-MM}/%d{yyyy-MM-dd}-%i.log.gz">
            <ThresholdFilter level="ERROR"/>
            <PatternLayout pattern="${LOG_PATTERN}" />
            <Policies>
                <!--<TimeBasedTriggeringPolicy/>-->
                <SizeBasedTriggeringPolicy size="200 MB"/>
            </Policies>
        </RollingFile>

        <RollingFile name="RollingWarnFile" fileName="logs/warn/warn.log" filePattern="logs/warn/$${date:yyyy-MM}/%d{yyyy-MM-dd}-%i.log.gz">
            <Filters>
                <ThresholdFilter level="WARN"/>
                <ThresholdFilter level="ERROR" onMatch="DENY" onMismatch="NEUTRAL"/>
            </Filters>
            <PatternLayout pattern="${LOG_PATTERN}" />
            <Policies>
                <!--<TimeBasedTriggeringPolicy/>-->
                <SizeBasedTriggeringPolicy size="200 MB"/>
            </Policies>
        </RollingFile>

    </Appenders>

    <!-- Finally define the Loggers; an Appender only takes effect once a Logger references it -->
    <Loggers>
        <Logger name="org.hibernate.validator.internal.util.Version" level="WARN"/>
        <Logger name="org.apache.coyote.http11.Http11NioProtocol" level="WARN"/>
        <Logger name="org.apache.tomcat.util.net.NioSelectorPool" level="WARN"/>
        <Logger name="org.apache.catalina.startup.DigesterFactory" level="ERROR"/>
        <Logger name="org.springframework" level="INFO" />
        <Logger name="com.springboot.mybatis.demo" level="DEBUG"/>
        <!-- The loggers above inherit from Root, i.e. by default they also write to the matching appenders configured on Root; set additivity="false" to stop that, and add an <AppenderRef ref="Console"/> inside the Logger to send its output to the console explicitly -->
        <Root level="INFO">
            <AppenderRef ref="Console" />
            <AppenderRef ref="RollingAllFile"/>
            <AppenderRef ref="RollingErrorFile"/>
            <AppenderRef ref="RollingWarnFile"/>
        </Root>
    </Loggers>
</Configuration>

A YAML configuration example:

 

Configuration:  
  status: info  
  Properties: # global variables
    Property: # defaults (used for the dev environment); other environments can override them via VM parameters, as follows:
      - name: log.path  
        value: ./logs/
      - name: project.name  
        value: xx
      - name: info.file.name
        value: ${log.path}/${project.name}.info.log
      - name: error.file.name
        value: ${log.path}/${project.name}.error.log
      - name: kafka.sync.file.name
        value: ${log.path}/${project.name}.kafka.sync.log
  Appenders:  
    Console:  # console output
      name: POSEIDON  
      target: SYSTEM_OUT  
      ThresholdFilter:  
        level: info # "sys:" means: if the value is not supplied as a VM parameter, fall back to the default global value defined in this file
        onMatch: ACCEPT  
        onMismatch: DENY  
      PatternLayout:  
        pattern: "%d{MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"  
    RollingFile: # file output, archived once it exceeds 128 MB
      - name: infolog
        ThresholdFilter: 
          level: info # "sys:" means: if the value is not supplied as a VM parameter, fall back to the default defined in this file
          onMatch: ACCEPT 
          onMismatch: DENY   
        ignoreExceptions: false  
        fileName: ${info.file.name}  
        PatternLayout: 
          pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS}:%4p %t (%F:%L) - %m%n"
        filePattern: ${log.path}/$${date:yyyy-MM}/${project.name}-%d{yyyy-MM-dd}-%i.error.log.gz
        PatternLayout:  
          pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS}:%4p %t (%F:%L) - %m%n" 
        Policies:  
          SizeBasedTriggeringPolicy:  
            size: "128 MB"  
        DefaultRolloverStrategy:  
          max: 1000
      - name: ROLLINGFILEERROR 
        ThresholdFilter: 
          level: error
          onMatch: ACCEPT
          onMismatch: DENY
        fileName: ${error.file.name}
        PatternLayout: 
          pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS}:%4p %t (%F:%L) - %m%n"
        filePattern: ${log.path}/$${date:yyyy-MM}/${project.name}-%d{yyyy-MM-dd}-%i.error.log.gz
        PatternLayout:
          pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS}:%4p %t (%F:%L) - %m%n"
        Policies:
          SizeBasedTriggeringPolicy:
            size: "128 MB"
        DefaultRolloverStrategy:
          max: 1000
      - name: kafkaSyncLog
        ThresholdFilter:
          level: info
          onMatch: ACCEPT
          onMismatch: DENY
        fileName: ${kafka.sync.file.name}
        PatternLayout:
          pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS}:%4p %t (%F:%L) - %m%n"
        filePattern: ${log.path}/$${date:yyyy-MM}/${project.name}-%d{yyyy-MM-dd}-%i.kafka.sync.log.gz
        PatternLayout:
          pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS}:%4p %t (%F:%L) - %m%n"
        Policies:
          SizeBasedTriggeringPolicy:
            size: "128 MB"
        DefaultRolloverStrategy:
          max: 1000
  Loggers: 
    Root: # matching-level logs from every class go to the appenders configured under Root
      level: info
      AppenderRef: 
        - ref: POSEIDON
        - ref: infolog
        - ref: ROLLINGFILEERROR
    Logger:
      - name: com.xx.log.common
        level: error
      - name: com.xx.xx.xx.xx.xx.kafka # route this package's logs to its own appender; additivity = false keeps them out of the Root appenders, and kafkaSyncLog is not added to Root so that other classes' logs do not end up in the kafka sync log file
        additivity: false
        level: info
        AppenderRef:
        - ref: kafkaSyncLog

 

 

 

At this point logging is integrated and all kinds of log output show up in the terminal, very exciting!!!

Log4j2 can also send mail

Add the dependency:

compile group: 'org.springframework.boot', name: 'spring-boot-starter-mail', version: '2.0.0.RELEASE'

Update the Log4j2 configuration:

Add the following appender:
 <!-- subject: mail subject;  to: recipients, comma separated;  from: sender;  replyTo: reply-to account;  smtp: for QQ mail see https://service.mail.qq.com/cgi-bin/help?subtype=1&no=167&id=28;  smtpDebug: enable verbose SMTP logging;  smtpPassword: the authorization code, see https://service.mail.qq.com/cgi-bin/help?subtype=1&&id=28&&no=1001256;  smtpUsername: the account user name -->
 <SMTP name="Mail" subject="Error Log" to="xxx.com" from="xxx@qq.com" replyTo="xxx@qq.com"
              smtpProtocol="smtp" smtpHost="smtp.qq.com" smtpPort="587" bufferSize="50" smtpDebug="false"
              smtpPassword="authorization-code" smtpUsername="xxx.com">
 </SMTP>

Reference the appender under Root so it takes effect:
<AppenderRef ref="Mail" level="error"/>

 Done!
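
To check that the mail appender works, log something at ERROR level (the AppenderRef above is restricted to error); a quick smoke test could look like this:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MailAppenderSmokeTest {
    private static final Logger logger = LoggerFactory.getLogger(MailAppenderSmokeTest.class);

    public static void main(String[] args) {
        // this should land in the rolling error file and in an "Error Log" email
        logger.error("test mail appender", new RuntimeException("boom"));
    }
}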


 4. Integrating MybatisProvider

• Why ?

    With it, the basic CRUD operations can be implemented through annotations plus dynamically built SQL, instead of writing piles of repetitive SQL in the mapper XML files.

• Come on ↓

  First: define a base mapper interface with the basic operations, as follows:

package com.springboot.mybatis.demo.mapper.common;

import com.springboot.mybatis.demo.mapper.common.provider.AutoSqlProvider;
import com.springboot.mybatis.demo.mapper.common.provider.MethodProvider;
import com.springboot.mybatis.demo.model.common.BaseModel;
import org.apache.ibatis.annotations.DeleteProvider;
import org.apache.ibatis.annotations.InsertProvider;
import org.apache.ibatis.annotations.SelectProvider;
import org.apache.ibatis.annotations.UpdateProvider;

import java.io.Serializable;
import java.util.List;

public interface BaseMapper<T extends BaseModel, Id extends Serializable> {

    @InsertProvider(type = AutoSqlProvider.class, method = MethodProvider.SAVE)
    int save(T entity);

    @DeleteProvider(type = AutoSqlProvider.class, method = MethodProvider.DELETE_BY_ID)
    int deleteById(Id id);

    @UpdateProvider(type = AutoSqlProvider.class, method = MethodProvider.UPDATE_BY_ID)
    int updateById(Id id);

    @SelectProvider(type = AutoSqlProvider.class, method = MethodProvider.FIND_ALL)
    List<T> findAll(T entity);

    @SelectProvider(type = AutoSqlProvider.class, method = MethodProvider.FIND_BY_ID)
    T findById(T entity);

    @SelectProvider(type = AutoSqlProvider.class, method = MethodProvider.FIND_AUTO_BY_PAGE)
    List<T> findAutoByPage(T entity);
}

AutoSqlProvider is the class that builds the SQL, and MethodProvider defines the names of the basic persistence methods that the provider has to implement. The two classes look like this:

package com.springboot.mybatis.demo.mapper.common.provider;

import com.google.common.base.CaseFormat;
import com.springboot.mybatis.demo.mapper.common.provider.model.MybatisTable;
import com.springboot.mybatis.demo.mapper.common.provider.utils.ProviderUtils;
import org.apache.ibatis.jdbc.SQL;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.lang.reflect.Field;
import java.util.List;

public class AutoSqlProvider {
    private static Logger logger = LoggerFactory.getLogger(AutoSqlProvider.class);

    public String findAll(Object obj) {
        MybatisTable mybatisTable = ProviderUtils.getMybatisTable(obj);
        List<Field> fields = mybatisTable.getMybatisColumnList();
        SQL sql = new SQL();
        fields.forEach(field -> sql.SELECT(CaseFormat.UPPER_CAMEL.to(CaseFormat.LOWER_UNDERSCORE, field.getName())));
        sql.FROM(mybatisTable.getName());
        logger.info(sql.toString());
        return sql.toString();
    }

    public String save(Object obj) {
        ...
        return null;
    }

    public String deleteById(String id) {
        ...
        return null;
    }

    public String findById(Object obj) {
        ...
        return null;
    }

    public String updateById(Object obj) {
        ...
        return null;
    }

    public String findAutoByPage(Object obj) {
        return null;
    }
}

package com.springboot.mybatis.demo.mapper.common.provider;

public class MethodProvider {
    public static final String SAVE = "save";
    public static final String DELETE_BY_ID = "deleteById";
    public static final String UPDATE_BY_ID = "updateById";
    public static final String FIND_ALL = "findAll";
    public static final String FIND_BY_ID = "findById";
    public static final String FIND_AUTO_BY_PAGE = "findAutoByPage";
}

Notes:

1. Any method declared in BaseMapper must have a matching method in the SQL provider class, otherwise you get a "method not found" error.

2. One catch when concatenating SQL dynamically: even with camel-case mapping enabled, the entity property names still have to be converted to underscore column names by hand; it is not done automatically.

3. The SQL logging inside the provider can be removed, since logging was already set up in the previous section.

4. ProviderUtils uses reflection to obtain the table's basic metadata: the table name and its columns. A rough sketch follows.
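
ProviderUtils and MybatisTable themselves are not listed in the article. As a rough sketch (an assumption, not the project's actual code; see GitHub for that), the table name could be derived from the entity class name and the columns from its declared fields:

// file: com/springboot/mybatis/demo/mapper/common/provider/model/MybatisTable.java
package com.springboot.mybatis.demo.mapper.common.provider.model;

import java.lang.reflect.Field;
import java.util.List;

public class MybatisTable {
    private String name;                    // table name, e.g. tb_user
    private List<Field> mybatisColumnList;  // entity fields, mapped to columns

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public List<Field> getMybatisColumnList() { return mybatisColumnList; }
    public void setMybatisColumnList(List<Field> mybatisColumnList) { this.mybatisColumnList = mybatisColumnList; }
}

// file: com/springboot/mybatis/demo/mapper/common/provider/utils/ProviderUtils.java
package com.springboot.mybatis.demo.mapper.common.provider.utils;

import com.google.common.base.CaseFormat;
import com.springboot.mybatis.demo.mapper.common.provider.model.MybatisTable;

import java.util.Arrays;
import java.util.stream.Collectors;

public class ProviderUtils {

    // assumption: table name = "tb_" + snake_case of the entity class name,
    // columns = the entity's declared fields
    public static MybatisTable getMybatisTable(Object entity) {
        Class<?> clazz = entity instanceof Class ? (Class<?>) entity : entity.getClass();
        MybatisTable table = new MybatisTable();
        table.setName("tb_" + CaseFormat.UPPER_CAMEL.to(CaseFormat.LOWER_UNDERSCORE, clazz.getSimpleName()));
        table.setMybatisColumnList(Arrays.stream(clazz.getDeclaredFields()).collect(Collectors.toList()));
        return table;
    }
}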

•  With the MybatisProvider groundwork in place, the next step is to have every mapper interface extend this base mapper, so that all the basic CRUD is handled by BaseMapper, as follows:

package com.springboot.mybatis.demo.mapper;

import com.springboot.mybatis.demo.mapper.common.BaseMapper;
import com.springboot.mybatis.demo.model.User;

import java.util.List;

public interface UserMapper extends BaseMapper<User,String> {

}

With this, UserMapper no longer has to care about the basic operations at all, wonderful !!!
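
For example, a hypothetical service (not code from the project) can call the inherited methods directly:

package com.springboot.mybatis.demo.service.impl;

import com.springboot.mybatis.demo.mapper.UserMapper;
import com.springboot.mybatis.demo.model.User;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class UserServiceDemo {

    @Autowired
    private UserMapper userMapper;

    // both methods come from BaseMapper; UserMapper itself declares nothing
    public List<User> listUsers() {
        return userMapper.findAll(new User());
    }

    public int register(User user) {
        return userMapper.save(user);
    }
}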


 5. Integrating JSP

• Add the core dependencies

compile group: 'org.springframework.boot', name: 'spring-boot-starter-web', version: '2.0.0.RELEASE'
// note: this must be compile (or the default), not providedRuntime, otherwise JSPs cannot be rendered
compile group: 'org.apache.tomcat.embed', name: 'tomcat-embed-jasper', version: '9.0.6' 

providedRuntime group: 'org.springframework.boot', name: 'spring-boot-starter-tomcat', version: '2.0.2.RELEASE' // resolves the conflict between the embedded Tomcat and an external Tomcat; not needed if you only use the embedded Tomcat

These are the two essential dependencies: spring-boot-starter-web pulls in Tomcat (the embedded Tomcat Spring Boot is known for), and tomcat-embed-jasper is the JSP compiler; if it is missing or broken, JSP pages cannot be rendered.

• Modify the Application bootstrap class

@EnableTransactionManagement 
@SpringBootApplication 
public class Application extends SpringBootServletInitializer { 
    
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        setRegisterErrorPageFilter(false);
        return application.sources(Application.class);
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(Application.class, args);
    }
}

Note: the bootstrap class must extend SpringBootServletInitializer and override its configure method.

• Create the JSP pages (directory layout shown below)

• Next, configure how the JSP views are resolved; there are two options

Option 1: configure it in application.properties

spring.mvc.view.prefix=/WEB-INF/views/
spring.mvc.view.suffix=.jsp

Then create a controller (note: since Spring Boot 2.0, a JSP view must be returned from an @Controller, not an @RestController)

@Controller // since Spring Boot 2.0, a JSP view must be returned from a @Controller, not a @RestController
public class IndexController {

    @GetMapping("/")
    public String index() {
        return "index";
    }
}

Option 2: do it with a configuration class, so that a request to http://localhost:8080/ serves index.jsp directly and no controller code is needed

@Configuration
@EnableWebMvc
public class WebMvcConfig implements WebMvcConfigurer {


    // static resources are served from /static (or /public, /resources, /META-INF/resources)
    @Bean
    public InternalResourceViewResolver viewResolver() {
        InternalResourceViewResolver resolver = new InternalResourceViewResolver();
        resolver.setPrefix("/WEB-INF/views/");
        resolver.setSuffix(".jsp");
        return resolver;
    }

    @Override
    public void addViewControllers(ViewControllerRegistry registry) {
        registry.addViewController("/").setViewName("index");
    }

    // without overriding this method the index.jsp resource cannot be found
    @Override
    public void configureDefaultServletHandling(DefaultServletHandlerConfigurer configurer) {
        configurer.enable();
    }
}

 6. Integrating Shiro authentication, authorization and sessions

• Shiro core concepts

Authentication, authorization, session management, caching, cryptography

Integrating authentication

(1) Add the dependencies (note: pull in only what you need; the list below is simply what I used, for reference ↓)

    // shiro
    compile group: 'org.apache.shiro', name: 'shiro-core', version: '1.3.2' // required: Shiro core
    compile group: 'org.apache.shiro', name: 'shiro-web', version: '1.3.2' // web integration
    compile group: 'org.apache.shiro', name: 'shiro-spring', version: '1.3.2' // Spring integration
    compile group: 'org.apache.shiro', name: 'shiro-ehcache', version: '1.3.2' // Shiro caching (ehcache)

(2) The Shiro configuration class

@Configuration
public class ShiroConfig {

    @Bean(name = "shiroFilter")
    public ShiroFilterFactoryBean shiroFilterFactoryBean() {
        ShiroFilterFactoryBean shiroFilterFactoryBean = new ShiroFilterFactoryBean();
        // filter chain map
        Map<String, String> filterChainDefinitionMap = new LinkedHashMap<String, String>();
        // paths that are not intercepted
        filterChainDefinitionMap.put("/static/**", "anon");
        // logout
        filterChainDefinitionMap.put("/logout", "logout");
        // paths that require authentication plus the admin role
        // note: roles[] may list several roles and they are AND-ed (all of them are required); write your own filter if you need OR semantics
        // this entry must come before the catch-all /** below, because the first matching pattern wins
        filterChainDefinitionMap.put("user/**", "authc,roles[admin]");
        // paths that require authentication
        filterChainDefinitionMap.put("/**", "authc");
        shiroFilterFactoryBean.setFilterChainDefinitionMap(filterChainDefinitionMap);
        // login page
        shiroFilterFactoryBean.setLoginUrl("/login");
        // where to go after a successful login
        shiroFilterFactoryBean.setSuccessUrl("/index");
        // back to the login page when not authorized
        shiroFilterFactoryBean.setUnauthorizedUrl("/login");
        shiroFilterFactoryBean.setSecurityManager(securityManager());
        return shiroFilterFactoryBean;
    }

    @Bean
    public ShiroRealmOne shiroRealmOne() {
        ShiroRealmOne realm = new ShiroRealmOne(); // our custom realm
        return realm;
    }

    @Bean(name = "securityManager")
    public DefaultWebSecurityManager securityManager() {
        DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager();
        securityManager.setRealm(shiroRealmOne());
        securityManager.setCacheManager(ehCacheManager());
        securityManager.setSessionManager(sessionManager());
        return securityManager;
    }

    @Bean(name = "ehCacheManager") // caches user info
    public EhCacheManager ehCacheManager() {
        return new EhCacheManager();
    }

    @Bean(name = "shiroCachingSessionDAO") // Shiro session DAO
    public SessionDAO shiroCachingSessionDAO() {
        EnterpriseCacheSessionDAO sessionDao = new EnterpriseCacheSessionDAO();
        sessionDao.setSessionIdGenerator(new JavaUuidSessionIdGenerator()); // session id generator
        sessionDao.setCacheManager(ehCacheManager()); // cache
        return sessionDao;
    }

    @Bean(name = "sessionManager")
    public DefaultWebSessionManager sessionManager() {
        DefaultWebSessionManager defaultWebSessionManager = new DefaultWebSessionManager();
        defaultWebSessionManager.setGlobalSessionTimeout(1000 * 60);
        defaultWebSessionManager.setSessionDAO(shiroCachingSessionDAO());
        return defaultWebSessionManager;
    }
}

A custom realm that extends AuthorizingRealm and implements a simple login check:

package com.springboot.mybatis.demo.config.realm;

import com.springboot.mybatis.demo.model.Permission;
import com.springboot.mybatis.demo.model.Role;
import com.springboot.mybatis.demo.model.User;
import com.springboot.mybatis.demo.service.PermissionService;
import com.springboot.mybatis.demo.service.RoleService;
import com.springboot.mybatis.demo.service.UserService;
import com.springboot.mybatis.demo.service.impl.PermissionServiceImpl;
import com.springboot.mybatis.demo.service.impl.RoleServiceImpl;
import com.springboot.mybatis.demo.service.impl.UserServiceImpl;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.*;
import org.apache.shiro.authz.AuthorizationInfo;
import org.apache.shiro.authz.SimpleAuthorizationInfo;
import org.apache.shiro.realm.AuthorizingRealm;
import org.apache.shiro.session.Session;
import org.apache.shiro.subject.PrincipalCollection;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;

import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class ShiroRealmOne extends AuthorizingRealm {
    private Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private UserService userServiceImpl;

    @Autowired
    private RoleService roleServiceImpl;

    @Autowired
    private PermissionService permissionServiceImpl;

    // authorization (not covered in detail here; can be ignored)
    @Override
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principalCollection) {
        logger.info("doGetAuthorizationInfo+" + principalCollection.toString());
        User user = userServiceImpl.findByUserName((String) principalCollection.getPrimaryPrincipal());
        List<Role> roleList = roleServiceImpl.findByUserId(user.getId());
        List<Permission> permissionList = roleList != null && !roleList.isEmpty() ? permissionServiceImpl.findByRoleIds(roleList.stream().map(Role::getId).collect(Collectors.toList())) : new ArrayList<>();
        SecurityUtils.getSubject().getSession().setAttribute(String.valueOf(user.getId()), SecurityUtils.getSubject().getPrincipals());
        SimpleAuthorizationInfo simpleAuthorizationInfo = new SimpleAuthorizationInfo();
        // grant roles
        for (Role role : roleList) {
            simpleAuthorizationInfo.addRole(role.getRolName());
        }
        // grant permissions
        for (Permission permission : permissionList) {
            simpleAuthorizationInfo.addStringPermission(permission.getPrmName());
        }
        return simpleAuthorizationInfo;
    }
    // authentication
    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken authenticationToken) throws AuthenticationException {
        logger.info("doGetAuthenticationInfo +" + authenticationToken.toString());

        UsernamePasswordToken token = (UsernamePasswordToken) authenticationToken;
        String userName = token.getUsername();
        logger.info(userName + token.getPassword());

        User user = userServiceImpl.findByUserName(token.getUsername());
        if (user != null) {
            Session session = SecurityUtils.getSubject().getSession();
            session.setAttribute("user", user);
            return new SimpleAuthenticationInfo(userName, user.getUsrPassword(), getName());
        } else {
            return null;
        }
    }
}

That completes the basic Shiro authentication setup; next comes verifying it.

The controller:

package com.springboot.mybatis.demo.controller;


import com.springboot.mybatis.demo.common.utils.SelfStringUtils;
import com.springboot.mybatis.demo.controller.common.BaseController;
import com.springboot.mybatis.demo.model.User;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.AuthenticationException;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.subject.Subject;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;


@Controller
public class IndexController extends BaseController{

    @PostMapping("login")
    public String login(User user, Model model) {
        if (user == null || SelfStringUtils.isEmpty(user.getUsrName()) || SelfStringUtils.isEmpty(user.getUsrPassword()) ) {
            model.addAttribute("warn","請填寫完整用戶名和密碼!");
            return "login";
        }
        Subject subject = SecurityUtils.getSubject();
        UsernamePasswordToken token = new UsernamePasswordToken(user.getUsrName(), user.getUsrPassword());
        token.setRememberMe(true);
        try {
            subject.login(token);
        } catch (AuthenticationException e) {
            model.addAttribute("error","用戶名或密碼錯誤,請從新登錄!");
            return "login";
        }
        return "index";
    }

    @GetMapping("login")
    public String index() {
        return "login";
    }

}

login jsp:

<%--
  Created by IntelliJ IDEA.
  User: Administrator
  Date: 2018/7/29
  Time: 14:34
  To change this template use File | Settings | File Templates.
--%>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<html>
<head>
    <title>登錄</title>
</head>
<body>
    <form action="login" method="POST">
        User Name: <input type="text" name="usrName">
        <br />
        User Password: <input type="text" name="usrPassword" />
        <input type="submit" value="Submit" />
    </form>
    <span style="color: #b3b20a;">${warn}</span>
    <span style="color:#b3130f;">${error}</span>
</body>
</html>

index jsp:

<%--
  Created by IntelliJ IDEA.
  User: pc
  Date: 2018/7/23
  Time: 14:02
  To change this template use File | Settings | File Templates.
--%>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<html>
<head>
    <title>Title</title>
</head>
<body>
    <h1>Welcome to here!</h1>
</body>
</html>

Expected behavior:

1. When not logged in, accessing anything other than the login endpoint redirects straight back to the login page.

2. A failed login returns "wrong username or password".

3. Submitting an incomplete username/password returns "please fill in both username and password".

4. A successful login lands on the index page; users without the admin role cannot access user/** paths, while everything else works normally.


 7. Deploying the project with Docker

(1) Basic deployment

• Write a Dockerfile

FROM docker.io/williamyeh/java8

VOLUME /tmp

VOLUME /opt/workspace

#COPY /build/libs/spring-boot-mybatis-1.0-SNAPSHOT.war /opt/workspace/app.jar

EXPOSE 8080

ENTRYPOINT ["java","-jar","/app.jar"]

Declaring a volume for the working directory lets you mount it from the host; alternatively you can COPY the jar straight into the image. Either works; I prefer the former.

•  In the directory containing the Dockerfile, run  docker build -t <image-name>:<tag> .  to build the image

•  Since the project uses MySQL, build a MySQL container as well: docker run --name mysql -v /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root mysql:5.7

• Run the project image built above: docker run --name myproject -v /home/vagrant/workspace/:/opt/workspace --link mysql:mysql -p 8080:8080 -d <image-name>; adjust the mounted directory /home/vagrant/workspace to your own layout

•  Hit port 8080 to test

(2) Single-host deployment managed with docker-compose (prerequisite: docker-compose is installed)

• Write the docker-compose.yml (besides MySQL, a Redis service is added here)

version: '3'
services:
    db:
        image: docker.io/mysql:5.7
        command: --default-authentication-plugin=mysql_native_password
        container_name: db
        volumes:
            - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/data:/var/lib/mysql
            - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/logs:/var/log/mysql
        environment:
            MYSQL_ROOT_PASSWORD: root
            MYSQL_USER: 'test'
            MYSQL_PASS: 'test'
        restart:
            always
        networks:
            - default
    redis:
        image: docker.io/redis
        container_name: redis
        command: redis-server /usr/local/etc/redis/redis.conf
        volumes:
            - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/redis/data:/data
            - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/redis/redis.conf:/usr/local/etc/redis/redis.conf
        networks:
            - default
    spring-boot:
        build:
            context: ./enjoy-dir/workspace
            dockerfile: Dockerfile
        image:
            spring-boot:1.0-SNAPSHOT
        depends_on:
            - db
            - redis
        links:
            - db:mysql
            - redis:redis
        ports:
            - "8080:8080"
        volumes:
            - /home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/workspace:/opt/workspace
        networks:
            - default
networks:
    default:
        driver: bridge

Note: adjust the mounted directories to your own environment. The Redis password can be set in redis.conf; for details see: https://woodenrobot.me/2018/09/03/%E4%BD%BF%E7%94%A8-docker-compose-%E5%9C%A8-Docker-%E4%B8%AD%E5%90%AF%E5%8A%A8%E5%B8%A6%E5%AF%86%E7%A0%81%E7%9A%84-Redis/

• In the directory containing docker-compose.yml, run: docker-compose up. One problem hit along the way: MySQL could not be connected to -> cause: the root user was not allowed to connect from outside, so log in to MySQL and open up the root account; see: http://www.javashuo.com/article/p-eulqmzde-bu.html

• Hit port 8080 to test

(3) Multi-host distributed deployment with Docker Swarm

• Write the compose file against the compose 3.0 format; see the official reference for all the options.

version: '3'
services:
    db:
        image: docker.io/mysql:5.7
        command: --default-authentication-plugin=mysql_native_password # password authentication plugin
        volumes:
            - "/home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/data:/var/lib/mysql"
            - "/home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/mysql/logs:/var/log/mysql"
        environment:
            MYSQL_ROOT_PASSWORD: 'root'
            MYSQL_USER: 'test'
            MYSQL_PASS: 'test'
        restart: # restart with the Docker daemon
            always
        networks: # attach the MySQL container to the mynet overlay network; any container on that network can reach the database via the alias mysql
            mynet:
                aliases:
                    - mysql
        ports: 
            - "3306:3306"
        deploy: # required settings when deploying with swarm
            replicas: 1 # how many replicas the stack starts by default
            restart_policy: # restart policy
                condition: on-failure
            placement: # which nodes to place the service on
                constraints: [node.role == worker]
    redis:
        image: docker.io/redis
        command: redis-server /usr/local/etc/redis/redis.conf
        volumes:
            - "/home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/redis/data:/data"
            - "/home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/redis/redis.conf:/usr/local/etc/redis/redis.conf"
        networks:
            mynet:
                aliases:
                    - redis
        ports: 
            - "6379:6379"
        deploy: 
            replicas: 1
            restart_policy: 
                condition: on-failure
            placement: 
                constraints: [node.role == worker]
    spring-boot:
        build:
            context: ./enjoy-dir/workspace
            dockerfile: Dockerfile
        image:
            spring-boot:1.0-SNAPSHOT
        depends_on:
            - db
            - redis
        ports: 
            - "8080:8080"
        volumes: 
            - "/home/vagrant/docker-compose/spring-boot-compose/enjoy-dir/workspace:/opt/workspace"
        networks:
            mynet:
                aliases:
                    - spring-boot
        deploy: 
            replicas: 1 
            restart_policy: 
                condition: on-failure
            placement: 
                constraints: [node.role == worker]
networks:
    mynet:

• Once the compose file is ready, run docker stack deploy -c [ compose file path ]  [ stack name ], as shown:

After it finishes you can list the services from a manager node with docker service ls, as shown:

 And check the service status:

• Manage the Swarm visually with Portainer: first install Portainer on any one of the machines; for installation details see: http://www.pangxieke.com/linux/use-protainer-manage-docker.html

Once it is installed you can log in and scale your containers (i.e. services) horizontally with ease, setting the scale freely...

 

Summary: creating containers with plain docker commands works fine while there are only a few of them, but what do you do once there are many -> use docker-compose to manage containers in groups, a big efficiency gain: one command brings multiple containers up or down. docker-compose copes well enough on a single host, but what about several hosts -> use Docker Swarm. With Swarm you no longer deploy machine by machine no matter how many machines there are; Swarm places the containers on machines with enough resources, and efficient distributed deployment becomes so easy...


 8. Read/write splitting

 Read/write splitting is used to reduce the load on a single database and speed up access.

(1) Configure the databases (replace the original datasource configuration with the one below; only the master and slave1 databases are configured here)

 
 
#----------------------------------------- datasource (single database) ----------------------------------------
#spring.datasource.url:=jdbc:mysql://localhost:3306/liuzj?useUnicode=true&characterEncoding=gbk&zeroDateTimeBehavior=convertToNull
#spring.datasource.username=root
#spring.datasource.password=
#spring.datasource.driver-class-name=com.mysql.jdbc.Driver
#spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
#----------------------------------------- datasource (single database) ----------------------------------------
#----------------------------------------- datasource (read/write splitting) ----------------------------------------
# master (writes)
spring.datasource.master.url=jdbc:mysql://192.168.10.16:3306/test
spring.datasource.master.username=root
spring.datasource.master.password=123456
spring.datasource.master.driver-class-name=com.mysql.jdbc.Driver
# slave1 (reads)
spring.datasource.slave1.url=jdbc:mysql://192.168.10.17:3306/test
spring.datasource.slave1.username=test
spring.datasource.slave1.password=123456
spring.datasource.slave1.driver-class-name=com.mysql.jdbc.Driver
#----------------------------------------- datasource (read/write splitting) ----------------------------------------

 (2) Change the dataSource initialization (replace the original dataSource bean with the following)

    // ----------------------------------- single datasource: start ----------------------------------------

//    @Bean
//    @ConfigurationProperties(prefix = "spring.datasource")
//    public DataSource dataSource() {
//        DruidDataSource druidDataSource = new DruidDataSource();
//        // maximum number of active connections
//        druidDataSource.setMaxActive(Application.DEFAULT_DATASOURCE_MAX_ACTIVE);
//        // minimum number of idle connections
//        druidDataSource.setMinIdle(Application.DEFAULT_DATASOURCE_MIN_IDLE);
//        // max wait time when acquiring a connection
//        druidDataSource.setMaxWait(Application.DEFAULT_DATASOURCE_MAX_WAIT);
//        return druidDataSource;
//    }

    // ----------------------------------- single datasource: end ----------------------------------------


    // ----------------------------------- multiple datasources (read/write splitting): start ----------------------------------------

    @Bean
    @ConfigurationProperties("spring.datasource.master")
    public DataSource masterDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    @ConfigurationProperties("spring.datasource.slave1")
    public DataSource slave1DataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean
    public DataSource myRoutingDataSource(@Qualifier("masterDataSource") DataSource masterDataSource,
                                          @Qualifier("slave1DataSource") DataSource slave1DataSource) {
        Map<Object, Object> targetDataSources = new HashMap<>(2);
        targetDataSources.put(DBTypeEnum.MASTER, masterDataSource);
        targetDataSources.put(DBTypeEnum.SLAVE1, slave1DataSource);
        MyRoutingDataSource myRoutingDataSource = new MyRoutingDataSource();
        myRoutingDataSource.setDefaultTargetDataSource(masterDataSource);
        myRoutingDataSource.setTargetDataSources(targetDataSources);
        return myRoutingDataSource;
    }

    @Resource
    MyRoutingDataSource myRoutingDataSource;

    // ----------------------------------- multiple datasources (read/write splitting): end ----------------------------------------

(3) Use AOP to switch datasources dynamically (MyCat is another option; look up its configuration yourself)

/**
 * @author admin
 * @date 2019-02-27
 */
@Aspect
@Component
public class DataSourceAspect {

    @Pointcut("!@annotation(com.springboot.mybatis.demo.config.annotation.Master) " +
            "&& (execution(* com.springboot.mybatis.demo.service..*.select*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.get*(..))" +
            "|| execution(* com.springboot.mybatis.demo.service..*.find*(..)))")
    public void readPointcut() {

    }

    @Pointcut("@annotation(com.springboot.mybatis.demo.config.annotation.Master) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.insert*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.add*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.update*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.edit*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.delete*(..)) " +
            "|| execution(* com.springboot.mybatis.demo.service..*.remove*(..))")
    public void writePointcut() {

    }

    @Before("readPointcut()")
    public void read() {
        DBContextHolder.slave();
    }

    @Before("writePointcut()")
    public void write() {
        DBContextHolder.master();
    }


    /**
     * Alternative approach: an if...else that decides which method-name prefixes read from the slave; everything else goes to the master
     */
//    @Before("execution(* com.springboot.mybatis.demo.service.impl.*.*(..))")
//    public void before(JoinPoint jp) {
//        String methodName = jp.getSignature().getName();
//
//        if (StringUtils.startsWithAny(methodName, "get", "select", "find")) {
//            DBContextHolder.slave();
//        }else {
//            DBContextHolder.master();
//        }
//    }
}

(4) The above shows only the main configuration and steps; classes such as DBContextHolder are not pasted here, see GitHub for the details.
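
For completeness, here is a minimal sketch of what those helper classes typically look like: a ThreadLocal-based holder plus Spring's AbstractRoutingDataSource. This is an assumption-based outline; the class and member names may differ from the real ones on GitHub:

// DBTypeEnum.java - the keys used in the targetDataSources map above
public enum DBTypeEnum {
    MASTER, SLAVE1
}

// Master.java - marker annotation that forces a method onto the master datasource
@java.lang.annotation.Retention(java.lang.annotation.RetentionPolicy.RUNTIME)
@java.lang.annotation.Target(java.lang.annotation.ElementType.METHOD)
public @interface Master {
}

// DBContextHolder.java - remembers the chosen datasource for the current thread
public class DBContextHolder {

    private static final ThreadLocal<DBTypeEnum> CONTEXT = ThreadLocal.withInitial(() -> DBTypeEnum.MASTER);

    public static void master() { CONTEXT.set(DBTypeEnum.MASTER); }
    public static void slave()  { CONTEXT.set(DBTypeEnum.SLAVE1); }
    public static DBTypeEnum get() { return CONTEXT.get(); }
}

// MyRoutingDataSource.java - tells Spring which target datasource to hand out
public class MyRoutingDataSource extends org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DBContextHolder.get();
    }
}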

(5) Setting up the master/slave databases

 

Configure the master
vi /etc/my.cnf   # edit the config file and add the following under the [mysqld] section

server-id=1   # server id; 1 marks this as the master
log_bin=mysql-bin  # enable the MySQL binary log
binlog-do-db=abc  # database to replicate; repeat this line for each additional database
binlog-ignore-db = mysql,information_schema  # databases excluded from the binlog

Restart the master database

docker restart mysql

Log in to the master database and check its status

show master status;

+------------------+----------+--------------+--------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+--------------------------+
| mysql-bin.000001 | 2722 | | mysql,information_schema |
+------------------+----------+--------------+--------------------------+

Configure the slave
vi /etc/my.cnf   # edit the config file and add the following under the [mysqld] section
server-id=2   # server id; the exact value is not fixed, it just must differ from the master's
log_bin=mysql-bin  # enable the binary log; only needed if this slave has slaves of its own
binlog-do-db=abc  # database to replicate; repeat this line for each additional database
binlog-ignore-db = mysql,information_schema  # databases excluded from the binlog; only needed if this slave has slaves of its own

Restart the slave database

docker restart mysql2

Log in to the slave database and run:
change master to master_host='192.168.0.133',master_user='slave',master_password='123456',master_log_file='mysql-bin.000001',master_log_pos=2722;   # mysql-bin.000001 and 2722 are the File and Position values read from the master status above
start slave;         # start replication on the slave
show slave status;   # check the slave's status

 Note: when using master/slave datasources, one transaction is bound to one datasource; in other words, if a transactional service method contains both a query and an insert, the datasource does not switch between the two operations. There are two workarounds:

1. Do not mix queries with inserts/updates in one transactional service method; pull the query out to an outer layer, e.g. the controller.

2. Run the query (or the insert/update) on another thread: within one transaction the connection is taken from a ThreadLocal, so a new thread acquires a fresh connection instead of reusing the one in the ThreadLocal.
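
A minimal sketch of workaround 2, assuming a transactional UserService with hypothetical insertUser/findAllUsers methods: the read runs on a separate thread, so it gets its own connection and the aspect can route it to the slave:

import com.springboot.mybatis.demo.model.User;
import com.springboot.mybatis.demo.service.UserService;

import java.util.List;
import java.util.concurrent.CompletableFuture;

public class MixedReadWriteDemo {

    private final UserService userService; // assumed to be a transactional service

    public MixedReadWriteDemo(UserService userService) {
        this.userService = userService;
    }

    public void registerAndList(User user) {
        // the write stays in the current (master-bound) transaction
        userService.insertUser(user);

        // the read runs on another thread, so it obtains its own connection
        // and the read pointcut can route it to the slave
        List<User> users = CompletableFuture
                .supplyAsync(() -> userService.findAllUsers())
                .join();
        users.forEach(System.out::println);
    }
}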

Summary / further reading: http://www.javashuo.com/article/p-cluwozbt-bp.html & http://www.javashuo.com/article/p-odkuabdo-bg.html


 9. Integrating Quartz for distributed scheduled jobs

  • A comparison of the classic scheduling options:

  

     Spring's built-in @Scheduled scheduler lives inside a single application instance and does not support a distributed environment. Making it distributed requires a scheduling-control plugin such as spring-scheduling-cluster, which works by locking tasks and supports any middleware that can provide a distributed lock.

(1) Initialize the database schema (the script can be downloaded from the Quartz site)

drop table if exists qrtz_fired_triggers;
drop table if exists qrtz_paused_trigger_grps;
drop table if exists qrtz_scheduler_state;
drop table if exists qrtz_locks;
drop table if exists qrtz_simple_triggers;
drop table if exists qrtz_simprop_triggers;
drop table if exists qrtz_cron_triggers;
drop table if exists qrtz_blob_triggers;
drop table if exists qrtz_triggers;
drop table if exists qrtz_job_details;
drop table if exists qrtz_calendars;


create table qrtz_job_details
  (
    sched_name varchar(120) not null,
    job_name  varchar(120) not null,
    job_group varchar(120) not null,
    description varchar(250) null,
    job_class_name   varchar(250) not null,
    is_durable varchar(1) not null,
    is_nonconcurrent varchar(1) not null,
    is_update_data varchar(1) not null,
    requests_recovery varchar(1) not null,
    job_data blob null,
    primary key (sched_name,job_name,job_group)
);

create table qrtz_triggers
  (
    sched_name varchar(120) not null,
    trigger_name varchar(120) not null,
    trigger_group varchar(120) not null,
    job_name  varchar(120) not null,
    job_group varchar(120) not null,
    description varchar(250) null,
    next_fire_time bigint(13) null,
    prev_fire_time bigint(13) null,
    priority integer null,
    trigger_state varchar(16) not null,
    trigger_type varchar(8) not null,
    start_time bigint(13) not null,
    end_time bigint(13) null,
    calendar_name varchar(200) null,
    misfire_instr smallint(2) null,
    job_data blob null,
    primary key (sched_name,trigger_name,trigger_group),
    foreign key (sched_name,job_name,job_group)
        references qrtz_job_details(sched_name,job_name,job_group)
);

create table qrtz_simple_triggers
  (
    sched_name varchar(120) not null,
    trigger_name varchar(120) not null,
    trigger_group varchar(120) not null,
    repeat_count bigint(7) not null,
    repeat_interval bigint(12) not null,
    times_triggered bigint(10) not null,
    primary key (sched_name,trigger_name,trigger_group),
    foreign key (sched_name,trigger_name,trigger_group)
        references qrtz_triggers(sched_name,trigger_name,trigger_group)
);

create table qrtz_cron_triggers
  (
    sched_name varchar(120) not null,
    trigger_name varchar(120) not null,
    trigger_group varchar(120) not null,
    cron_expression varchar(200) not null,
    time_zone_id varchar(80),
    primary key (sched_name,trigger_name,trigger_group),
    foreign key (sched_name,trigger_name,trigger_group)
        references qrtz_triggers(sched_name,trigger_name,trigger_group)
);

create table qrtz_simprop_triggers
  (
    sched_name varchar(120) not null,
    trigger_name varchar(120) not null,
    trigger_group varchar(120) not null,
    str_prop_1 varchar(512) null,
    str_prop_2 varchar(512) null,
    str_prop_3 varchar(512) null,
    int_prop_1 int null,
    int_prop_2 int null,
    long_prop_1 bigint null,
    long_prop_2 bigint null,
    dec_prop_1 numeric(13,4) null,
    dec_prop_2 numeric(13,4) null,
    bool_prop_1 varchar(1) null,
    bool_prop_2 varchar(1) null,
    primary key (sched_name,trigger_name,trigger_group),
    foreign key (sched_name,trigger_name,trigger_group)
    references qrtz_triggers(sched_name,trigger_name,trigger_group)
);

create table qrtz_blob_triggers
  (
    sched_name varchar(120) not null,
    trigger_name varchar(120) not null,
    trigger_group varchar(120) not null,
    blob_data blob null,
    primary key (sched_name,trigger_name,trigger_group),
    foreign key (sched_name,trigger_name,trigger_group)
        references qrtz_triggers(sched_name,trigger_name,trigger_group)
);

create table qrtz_calendars
  (
    sched_name varchar(120) not null,
    calendar_name  varchar(120) not null,
    calendar blob not null,
    primary key (sched_name,calendar_name)
);

create table qrtz_paused_trigger_grps
  (
    sched_name varchar(120) not null,
    trigger_group  varchar(120) not null,
    primary key (sched_name,trigger_group)
);

create table qrtz_fired_triggers
  (
    sched_name varchar(120) not null,
    entry_id varchar(95) not null,
    trigger_name varchar(120) not null,
    trigger_group varchar(120) not null,
    instance_name varchar(200) not null,
    fired_time bigint(13) not null,
    sched_time bigint(13) not null,
    priority integer not null,
    state varchar(16) not null,
    job_name varchar(200) null,
    job_group varchar(200) null,
    is_nonconcurrent varchar(1) null,
    requests_recovery varchar(1) null,
    primary key (sched_name,entry_id)
);

create table qrtz_scheduler_state
  (
    sched_name varchar(120) not null,
    instance_name varchar(120) not null,
    last_checkin_time bigint(13) not null,
    checkin_interval bigint(13) not null,
    primary key (sched_name,instance_name)
);

create table qrtz_locks
  (
    sched_name varchar(120) not null,
    lock_name  varchar(40) not null,
    primary key (sched_name,lock_name)
);

(2) Create and fill in the Quartz properties file

# --------------------------------------- quartz ---------------------------------------
# the main sections: scheduler, threadPool, jobStore, plugin
org.quartz.scheduler.instanceName=DefaultQuartzScheduler
org.quartz.scheduler.rmi.export=false
org.quartz.scheduler.rmi.proxy=false
org.quartz.scheduler.wrapJobExecutionInUserTransaction=false
# use SimpleThreadPool as the ThreadPool implementation
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
# threadCount and threadPriority are injected into the ThreadPool instance via setters
# number of worker threads
org.quartz.threadPool.threadCount=5
# thread priority
org.quartz.threadPool.threadPriority=5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread=true
org.quartz.jobStore.misfireThreshold=5000
# by default jobs are stored in memory
#org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
# persistent JDBC job store
org.quartz.jobStore.class=org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.dataSource=qzDS
org.quartz.dataSource.qzDS.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.qzDS.URL=jdbc:mysql://192.168.10.16:3306/test?useUnicode=true&characterEncoding=UTF-8
org.quartz.dataSource.qzDS.user=root
org.quartz.dataSource.qzDS.password=123456
org.quartz.dataSource.qzDS.maxConnections=10
# --------------------------------------- quartz -----------------------------------------

(3) Initialize the Quartz beans

@Configuration
public class QuartzConfig {

    /**
     * Create the SchedulerFactoryBean instance
     *
     * @return SchedulerFactoryBean
     * @throws IOException on error
     */
    @Bean(name = "schedulerFactory")
    public SchedulerFactoryBean schedulerFactoryBean() throws IOException {
        SchedulerFactoryBean factoryBean = new SchedulerFactoryBean();
        factoryBean.setQuartzProperties(quartzProperties());
        return factoryBean;
    }

    /**
     * Load the quartz.properties configuration file
     *
     * @return Properties
     * @throws IOException on error
     */
    @Bean
    public Properties quartzProperties() throws IOException {
        PropertiesFactoryBean propertiesFactoryBean = new PropertiesFactoryBean();
        propertiesFactoryBean.setLocation(new ClassPathResource("/quartz.properties"));
        // make sure the quartz.properties values have been read and injected before the object is initialized
        propertiesFactoryBean.afterPropertiesSet();
        return propertiesFactoryBean.getObject();
    }

    /**
     * Quartz initialization listener
     *
     * @return QuartzInitializerListener
     */
    @Bean
    public QuartzInitializerListener executorListener() {
        return new QuartzInitializerListener();
    }

    /**
     * Obtain the Scheduler instance from the SchedulerFactoryBean
     *
     * @return Scheduler
     * @throws IOException on error
     */
    @Bean(name = "Scheduler")
    public Scheduler scheduler() throws IOException {
        return schedulerFactoryBean().getScheduler();
    }

}

(4) Create a Quartz service with some basic job operations, so jobs can be scheduled dynamically

/**
 * @author admin
 * @date 2019-02-28
 */
public interface QuartzJobService {
    /**
     * Add a job
     *
     * @param scheduler      the Scheduler instance
     * @param jobClassName   job class name
     * @param jobGroupName   job group name
     * @param cronExpression cron expression
     * @throws Exception
     */
    void addJob(Scheduler scheduler, String jobClassName, String jobGroupName, String cronExpression) throws Exception;

    /**
     * Pause a job
     *
     * @param scheduler    the Scheduler instance
     * @param jobClassName job class name
     * @param jobGroupName job group name
     * @throws Exception
     */
    void pauseJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception;

    /**
     * Resume a job
     *
     * @param scheduler    the Scheduler instance
     * @param jobClassName job class name
     * @param jobGroupName job group name
     * @throws Exception
     */
    void resumeJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception;

    /**
     * Reschedule a job
     *
     * @param scheduler      the Scheduler instance
     * @param jobClassName   job class name
     * @param jobGroupName   job group name
     * @param cronExpression cron expression
     * @throws Exception
     */
    void rescheduleJob(Scheduler scheduler, String jobClassName, String jobGroupName, String cronExpression) throws Exception;

    /**
     * Delete a job
     *
     * @param jobClassName
     * @param jobGroupName
     * @throws Exception
     */
    void deleteJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception;

    /**
     * List all jobs (pagination is handled on the front end)
     *
     * @return List
     */
    List<QuartzJob> findList();

}
/**
 * @author admin
 * @date 2019-02-28
 * @see QuartzJobService
 */
@Service
public class QuartzJobServiceImpl implements QuartzJobService {

    @Autowired
    private QuartzJobMapper quartzJobMapper;

    @Override
    public void addJob(Scheduler scheduler, String jobClassName, String jobGroupName, String cronExpression) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        // start the scheduler
        scheduler.start();
        // build the JobDetail
        JobDetail jobDetail = JobBuilder.newJob(QuartzJobUtils.getClass(jobClassName).getClass())
                .withIdentity(jobClassName, jobGroupName)
                .build();
        // cron schedule builder (i.e. when the job fires)
        CronScheduleBuilder builder = CronScheduleBuilder.cronSchedule(cronExpression);
        // build a new trigger from the cron expression
        CronTrigger trigger = TriggerBuilder.newTrigger()
                .withIdentity(jobClassName, jobGroupName)
                .withSchedule(builder)
                .build();
        // register the job and trigger with the scheduler
        scheduler.scheduleJob(jobDetail, trigger);
    }

    @Override
    public void pauseJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        scheduler.pauseJob(JobKey.jobKey(jobClassName, jobGroupName));
    }

    @Override
    public void resumeJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        scheduler.resumeJob(JobKey.jobKey(jobClassName, jobGroupName));
    }

    @Override
    public void rescheduleJob(Scheduler scheduler, String jobClassName, String jobGroupName, String cronExpression) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        TriggerKey triggerKey = TriggerKey.triggerKey(jobClassName, jobGroupName);
        CronScheduleBuilder builder = CronScheduleBuilder.cronSchedule(cronExpression);
        CronTrigger trigger = (CronTrigger) scheduler.getTrigger(triggerKey);
        // rebuild the trigger with the new cron expression
        trigger = trigger.getTriggerBuilder()
                .withIdentity(jobClassName, jobGroupName)
                .withSchedule(builder)
                .build();
        // reschedule the job with the new trigger
        scheduler.rescheduleJob(triggerKey, trigger);
    }

    @Override
    public void deleteJob(Scheduler scheduler, String jobClassName, String jobGroupName) throws Exception {
        jobClassName = "com.springboot.mybatis.demo.job." + jobClassName;
        scheduler.pauseTrigger(TriggerKey.triggerKey(jobClassName, jobGroupName));
        scheduler.unscheduleJob(TriggerKey.triggerKey(jobClassName, jobGroupName));
        scheduler.deleteJob(JobKey.jobKey(jobClassName, jobGroupName));
    }

    @Override
    public List<QuartzJob> findList() {
        return quartzJobMapper.findList();
    }
}

(5) Create a job

/**
 * @author admin
 * @date 2019-02-28
 * @see BaseJob
 */
public class HelloJob implements BaseJob {

    private final Logger logger = LoggerFactory.getLogger(getClass());

    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        logger.info("hello, I'm quartz job - HelloJob");
    }
}
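
BaseJob itself is not shown in the article; presumably it is just a thin interface on top of org.quartz.Job (an assumption; check GitHub for the real definition), roughly:

package com.springboot.mybatis.demo.job;

import org.quartz.Job;

// hypothetical sketch: BaseJob simply marks our own jobs and inherits execute() from org.quartz.Job
public interface BaseJob extends Job {
}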

(6) Then the jobs can be tested (add, pause, resume, reschedule, and so on)
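
For instance, a hypothetical test controller (not from the article) can wire the Scheduler bean and the service together to drive HelloJob:

package com.springboot.mybatis.demo.controller;

import com.springboot.mybatis.demo.service.QuartzJobService;
import org.quartz.Scheduler;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class QuartzJobController {

    @Autowired
    private Scheduler scheduler;            // the bean named "Scheduler" defined in QuartzConfig

    @Autowired
    private QuartzJobService quartzJobService;

    // e.g. GET /quartz/add?jobClassName=HelloJob&jobGroupName=demo&cron=0/10 * * * * ?
    @GetMapping("/quartz/add")
    public String add(String jobClassName, String jobGroupName, String cron) throws Exception {
        quartzJobService.addJob(scheduler, jobClassName, jobGroupName, cron);
        return "added " + jobClassName;
    }

    @GetMapping("/quartz/pause")
    public String pause(String jobClassName, String jobGroupName) throws Exception {
        quartzJobService.pauseJob(scheduler, jobClassName, jobGroupName);
        return "paused " + jobClassName;
    }
}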

 Summary:

  • Only the main integration steps are shown above; see GitHub for the details.

  • In a distributed setup Quartz spreads job execution across machines. You can package the project as a jar and start it in two terminals to simulate a cluster; HelloJob will then execute alternately on the two instances.

  • The integration above follows: https://zhuanlan.zhihu.com/p/38546754


 10. Automatic table sharding

(1) Overview:

Tables are generally split on the most frequently queried column. Many features still need global queries, though, and in those cases a global query cannot be avoided.
For data that is frequently queried globally, keep a separate redundant table and leave that part unsharded.
For infrequent global queries the only option is a UNION across the shards, and such queries usually take a long time to respond, so the feature needs some restrictions (for example a minimum query interval) to keep the database from hanging; alternatively, replicate the data to a read-only slave and run the search there so the master is unaffected.

(2) Preparing to shard

  •  Make sharding configurable (whether it is enabled, which tables are sharded, and the sharding strategy)

  •  Decide how to create shard tables dynamically

(3) Implementation

  •   First, define a configuration class of our own

import com.beust.jcommander.internal.Lists;
import com.springboot.mybatis.demo.common.constant.Constant;
import com.springboot.mybatis.demo.common.utils.SelfStringUtils;

import java.util.Arrays;
import java.util.List;
import java.util.Map;

/**
 * Holds the datasource configuration properties
 *
 * @author lzj
 * @date 2019-04-09
 */
public class DatasourceConfig {

    private Master master;

    private Slave1 slave1;

    private SubTable subTable;

    public SubTable getSubTable() {
        return subTable;
    }

    public void setSubTable(SubTable subTable) {
        this.subTable = subTable;
    }

    public Master getMaster() {
        return master;
    }

    public void setMaster(Master master) {
        this.master = master;
    }

    public Slave1 getSlave1() {
        return slave1;
    }

    public void setSlave1(Slave1 slave1) {
        this.slave1 = slave1;
    }

    public static class Master {

        private String jdbcUrl;

        private String username;

        private String password;

        private String driverClassName;

        public String getJdbcUrl() {
            return jdbcUrl;
        }

        public void setJdbcUrl(String jdbcUrl) {
            this.jdbcUrl = jdbcUrl;
        }

        public String getUsername() {
            return username;
        }

        public void setUsername(String username) {
            this.username = username;
        }

        public String getPassword() {
            return password;
        }

        public void setPassword(String password) {
            this.password = password;
        }

        public String getDriverClassName() {
            return driverClassName;
        }

        public void setDriverClassName(String driverClassName) {
            this.driverClassName = driverClassName;
        }
    }

    public static class Slave1 {

        private String jdbcUrl;

        private String username;

        private String password;

        private String driverClassName;

        public String getJdbcUrl() {
            return jdbcUrl;
        }

        public void setJdbcUrl(String jdbcUrl) {
            this.jdbcUrl = jdbcUrl;
        }

        public String getUsername() {
            return username;
        }

        public void setUsername(String username) {
            this.username = username;
        }

        public String getPassword() {
            return password;
        }

        public void setPassword(String password) {
            this.password = password;
        }

        public String getDriverClassName() {
            return driverClassName;
        }

        public void setDriverClassName(String driverClassName) {
            this.driverClassName = driverClassName;
        }
    }

    public static class SubTable{

        private boolean enable;

        private String schemaRoot;

        private String schemas;

        private String strategy;

        public String getStrategy() {
            return strategy;
        }

        public void setStrategy(String strategy) {
            this.strategy = strategy;
        }

        public boolean isEnable() {
            return enable;
        }

        public void setEnable(boolean enable) {
            this.enable = enable;
        }

        public String getSchemaRoot() {
            return schemaRoot;
        }

        public void setSchemaRoot(String schemaRoot) {
            this.schemaRoot = schemaRoot;
        }

        public List<String> getSchemas() {
            if (SelfStringUtils.isNotEmpty(this.schemas)) {
                return Arrays.asList(this.schemas.split(Constant.Symbol.COMMA));
            }
            return Lists.newArrayList();
        }

        public void setSchemas(String schemas) {
            this.schemas = schemas;
        }
    }
}

Since the project is configured with multiple datasources, the configuration has master and slave1 sections plus the sharding section:

#------------------- automatic sharding configuration -----------------
spring.datasource.sub-table.enable = true
spring.datasource.sub-table.schema-root = classpath*:sub/
spring.datasource.sub-table.schemas = smg_user
spring.datasource.sub-table.strategy = each_day
#------------------- automatic sharding configuration -----------------

The settings above live in application.properties. Then hand our DatasourceConfig class to the IoC container:

   @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DatasourceConfig datasourceConfig(){
       return new DatasourceConfig();
    }

This way the relevant settings can be read through our own configuration class.

   •  Then use AOP to cut in at the mapper layer: on every mapper call, check whether the entity behind the SQL to be executed needs sharding

@Aspect
@Component
public class BaseMapperAspect {

    private final static Logger logger = LoggerFactory.getLogger(BaseMapperAspect.class);

//    @Autowired
//    DataSourceProperties dataSourceProperties;

//    @Autowired
//    private DataSource dataSource;

    @Autowired
    private DatasourceConfig datasourceConfig;

    @Autowired
    SubTableUtilsFactory subTableUtilsFactory;

    @Autowired
    private DBService dbService;

    @Resource
    MyRoutingDataSource myRoutingDataSource;


    @Pointcut("execution(* com.springboot.mybatis.demo.mapper.common.BaseMapper.*(..))")
    public void getMybatisTableEntity() {
    }

    /**
     * Resolve the runtime entity class
     * @param joinPoint target
     * @throws ClassNotFoundException on error
     */
    @Before("getMybatisTableEntity()")
    public void setThreadLocalMap(JoinPoint joinPoint) throws ClassNotFoundException {
        ...
        // automatic sharding
        MybatisTable mybatisTable = MybatisTableUtils.getMybatisTable(Class.forName(actualTypeArguments[0].getTypeName()));
        Assert.isTrue(mybatisTable != null, "Null of the MybatisTable");
        String oldTableName = mybatisTable.getName();
        if (datasourceConfig.getSubTable().isEnable() && datasourceConfig.getSubTable().getSchemas().contains(oldTableName)) {
            ThreadLocalUtils.setSubTableName(subTableUtilsFactory.getSubTableUtil(datasourceConfig.getSubTable().getStrategy()).getTableName(oldTableName));
            // check whether the shard table needs to be created
            dbService.autoSubTable(ThreadLocalUtils.getSubTableName(),oldTableName,datasourceConfig.getSubTable().getSchemaRoot());
        } else {
            ThreadLocalUtils.setSubTableName(oldTableName);
        }
    }
}

If sharding is needed, the shard table name is derived from the configured strategy; we then check whether that table already exists in the database, create it automatically if it does not, and skip otherwise. See the sketch below.
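
As an illustration of the each_day strategy (a hypothetical sketch; the real SubTableUtilsFactory and DBService are on GitHub), the shard name can simply be the base table name plus a date suffix:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// hypothetical "each_day" strategy: one physical table per day, e.g. smg_user_20190415
public class EachDaySubTableUtil {

    private static final DateTimeFormatter SUFFIX = DateTimeFormatter.ofPattern("yyyyMMdd");

    public String getTableName(String baseTableName) {
        return baseTableName + "_" + LocalDate.now().format(SUFFIX);
    }
}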

    •   Once the shard table exists, the SQL itself is intercepted and rewritten: a MyBatis interceptor inspects each statement, and if the entity behind it is sharded, the table name in the SQL is replaced so the statement hits the right shard table

/**
 * Dynamically route SQL to the target shard table
 *
 * @author liuzj
 * @date 2019-04-15
 */
@Intercepts({@Signature(type = StatementHandler.class, method = "prepare", args = {Connection.class,Integer.class})})
public class SubTableSqlHandler implements Interceptor {

    Logger logger = LoggerFactory.getLogger(SubTableSqlHandler.class);

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        StatementHandler handler = (StatementHandler)invocation.getTarget();
        BoundSql boundSql = handler.getBoundSql();
        String sql = boundSql.getSql();
        // rewrite the SQL: swap in the shard table name
        if (SelfStringUtils.isNotEmpty(sql)) {
            MybatisTable mybatisTable = MybatisTableUtils.getMybatisTable(ThreadLocalUtils.get());
            Assert.isTrue(mybatisTable != null, "Null of the MybatisTable");
            Field sqlField = boundSql.getClass().getDeclaredField("sql");
            sqlField.setAccessible(true);
            sqlField.set(boundSql,sql.replaceAll(mybatisTable.getName(),ThreadLocalUtils.getSubTableName()));
        }
        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    @Override
    public void setProperties(Properties properties) {
    }
}

That is the basic idea behind dynamic sharding in this project; the full code is on GitHub.

 

To be continued... If anything above is off, suggestions and comments are welcome, thanks.
