Notes on a big Druid pitfall under a config center: CPU hits 100%

After moving our Dubbo applications onto the config center, we noticed that after a while (roughly 12 hours) the application would hit 100% CPU. At startup the CPU usage was around 2-5%, and with the application doing no work at all, it eventually ate up all the CPU.

Dumping the JVM thread stacks showed that the thread count had reached over ten thousand. Most of the threads looked like this one:

"com.alibaba.nacos.client.Worker.fixed-192.168.11.196_8848-1f3c60b6-3e28-44eb-9798-7f7eeeff6a8d" #138 daemon prio=5 os_prio=0 tid=0x0000000026c49000 nid=0xf60 waiting on condition [0x000000002aebe000]
   java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x000000079c3a15a8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
    at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
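
To confirm that these worker threads really keep accumulating rather than just showing up once in the dump, a quick check inside the running application can count live threads by name prefix. Here is a minimal sketch using the JDK's ThreadMXBean; the prefix comes from the dump above, while the class name and the idea of calling it from a temporary debug endpoint are only illustrative:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public final class NacosWorkerThreadCounter {

    /** Call this from inside the affected application, e.g. from a temporary debug endpoint. */
    public static String report() {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        // Dump all live threads; only the names are needed here.
        ThreadInfo[] threads = threadMXBean.dumpAllThreads(false, false);

        long nacosWorkers = 0;
        for (ThreadInfo info : threads) {
            if (info != null && info.getThreadName().startsWith("com.alibaba.nacos.client.Worker")) {
                nacosWorkers++;
            }
        }
        return "total threads = " + threadMXBean.getThreadCount()
                + ", nacos worker threads = " + nacosWorkers;
    }
}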

Our config center is Nacos, and judging from the name com.alibaba.nacos.client.Worker.fixed, this looked related to Nacos. That turned out to be the case: before we adopted Nacos, the problem simply did not exist. But not every application behaved this way; our gateway application, for example, was unaffected.

We had just upgraded spring-cloud-alibaba-dependencies from 0.2.1.RELEASE to 0.2.2.RELEASE. Could that be related? A compatibility issue with spring-cloud-dependencies, or with spring-cloud-starter?

            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>Finchley.SR1</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>

            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-alibaba-dependencies</artifactId>
                <version>0.2.2.RELEASE</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>

            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter</artifactId>
                <version>2.0.2.RELEASE</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>

But after rolling back to the previous version, the problem remained.

 

I wrote a fresh Spring Boot application: no problem. After adding back our original setup, the problem reappeared. Observation suggested that the data source was the culprit.

 

I suspected a Nacos pitfall. What on earth is com.alibaba.nacos.client.Worker? A global search for the class name found nothing. Is it not a class? Is fixed a method? Couldn't find that either. So I downloaded the Nacos 1.0.0 source code. A global search for the class com.alibaba.nacos.client.Worker still found nothing. A full-text search for fixed turned up com/alibaba/nacos/client/config/impl/ServerListManager.java. Local debugging then led me to com.alibaba.nacos.client.config.impl.ClientWorker, where the string com.alibaba.nacos.client.Worker also appeared: it is simply a hard-coded thread-name prefix! The root cause, however, was still unclear.
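
As an aside, the "class-like" thread name is just assembled by the thread factory inside ClientWorker. Roughly paraphrased (this is not the exact Nacos source; agentName stands in for the fixed-<server>-<namespace> string the client derives from the server address), the naming looks like this:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class WorkerThreadNameSketch {

    // agentName is something like "fixed-192.168.11.196_8848-<namespace>",
    // i.e. exactly the suffix seen in the thread dump above.
    public static ScheduledExecutorService newWorkerExecutor(String agentName) {
        return Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            // The "class name" is a hard-coded string prefix; no such class actually exists.
            t.setName("com.alibaba.nacos.client.Worker." + agentName);
            t.setDaemon(true);
            return t;
        });
    }
}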

My gut told me that ClientWorker was the thing continuously creating our "problem threads", but I did not really understand its mechanism. Debugging blindly did not reveal the root cause; it only left me more confused. I had to read the source code.

Adding breakpoint after breakpoint, I landed on the safeNotifyListener method of com.alibaba.nacos.client.config.impl.CacheData. The listener.receiveConfigInfo call inside it seemed to misbehave: execution reached point 1, but point 2 was never executed. Very strange; had it fallen into an infinite loop?

private void safeNotifyListener(final String dataId, final String group, final String content, final String md5, final ManagerListenerWrap listenerWrap) {
        final Listener listener = listenerWrap.listener;
        Runnable job = new Runnable() {
            public void run() {
                ClassLoader myClassLoader = Thread.currentThread().getContextClassLoader();
                ClassLoader appClassLoader = listener.getClass().getClassLoader();

                try {
                    if (listener instanceof AbstractSharedListener) {
                        AbstractSharedListener adapter = (AbstractSharedListener)listener;
                        adapter.fillContext(dataId, group);
                        CacheData.LOGGER.info("[{}] [notify-context] dataId={}, group={}, md5={}", new Object[]{CacheData.this.name, dataId, group, md5});
                    }

                    Thread.currentThread().setContextClassLoader(appClassLoader);
                    ConfigResponse cr = new ConfigResponse();
                    cr.setDataId(dataId);
                    cr.setGroup(group);
                    cr.setContent(content);
                    CacheData.this.configFilterChainManager.doFilter((IConfigRequest)null, cr);
                    String contentTmp = cr.getContent();
                    listener.receiveConfigInfo(contentTmp); //1 the error happens here!
                    listenerWrap.lastCallMd5 = md5; // 2
                    CacheData.LOGGER.info("[{}] [notify-ok] dataId={}, group={}, md5={}, listener={} ", new Object[]{CacheData.this.name, dataId, group, md5, listener});
                } catch (NacosException var9) {
                    CacheData.LOGGER.error("[{}] [notify-error] dataId={}, group={}, md5={}, listener={} errCode={} errMsg={}", new Object[]{CacheData.this.name, dataId, group, md5, listener, var9.getErrCode(), var9.getErrMsg()});
                } catch (Throwable var10) { //3 the exception ends up here, but it is never printed out!!
                    CacheData.LOGGER.error("[{}] [notify-error] dataId={}, group={}, md5={}, listener={} tx={}", new Object[]{CacheData.this.name, dataId, group, md5, listener, var10.getCause()});
                } finally { // 4
                    Thread.currentThread().setContextClassLoader(myClassLoader);
                }
            }
        };

        long startNotify = System.currentTimeMillis();

        try {
            if (null != listener.getExecutor()) {
                listener.getExecutor().execute(job);
            } else {
                job.run();
            }
        } catch (Throwable var12) {
            LOGGER.error("[{}] [notify-error] dataId={}, group={}, md5={}, listener={} throwable={}", new Object[]{this.name, dataId, group, md5, listener, var12.getCause()});
        }

        long finishNotify = System.currentTimeMillis();
        LOGGER.info("[{}] [notify-listener] time cost={}ms in ClientWorker, dataId={}, group={}, md5={}, listener={} ", new Object[]{this.name, finishNotify - startNotify, dataId, group, md5, listener});
    }

It turned out not to be a loop at all. The reason was that I had not set breakpoints at 3 and 4, so I mistook it for an infinite loop. In fact, point 1 threw an error and execution jumped straight to 3, which is why 2 never ran. Adding breakpoints at 3 and 4 revealed the cause! listener.receiveConfigInfo is an interface method, so its implementation is not fixed; stepping in with F7 showed the actual implementation behind com.alibaba.nacos.api.config.listener.Listener#receiveConfigInfo. Following it further, execution reached NacosContextRefresher.this.applicationContext.publishEvent(new RefreshEvent(this, (Object)null, "Refresh Nacos config")); and the next line never seemed to run. But stepping into publishEvent itself would have been far too tedious.
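
For reference, the callback that Nacos invokes here is just an implementation of the Nacos Listener interface registered via ConfigService.addListener; in spring-cloud-alibaba, NacosContextRefresher registers a listener whose receiveConfigInfo publishes a RefreshEvent, which is exactly the publishEvent call seen above. The following is a heavily simplified sketch of that wiring, not the actual NacosContextRefresher source; the class name, server address, dataId and group are illustrative only:

import java.util.concurrent.Executor;

import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.config.ConfigService;
import com.alibaba.nacos.api.config.listener.Listener;
import org.springframework.cloud.endpoint.event.RefreshEvent;
import org.springframework.context.ApplicationContext;

public class NacosRefreshListenerSketch {

    public void register(final ApplicationContext applicationContext) throws Exception {
        // Illustrative values; in the real application they come from bootstrap.yml.
        ConfigService configService = NacosFactory.createConfigService("192.168.11.196:8848");

        configService.addListener("erdp_discuss_app", "DEFAULT_GROUP", new Listener() {
            @Override
            public Executor getExecutor() {
                // null means the callback runs directly on the Nacos long-polling thread
                return null;
            }

            @Override
            public void receiveConfigInfo(String configInfo) {
                // The call observed in the debugger: publish a RefreshEvent, which eventually
                // reaches ConfigurationPropertiesRebinder and re-binds @ConfigurationProperties beans.
                applicationContext.publishEvent(new RefreshEvent(this, null, "Refresh Nacos config"));
            }
        });
    }
}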

From point 3, I finally got the cause of the problem:

Error creating bean with name 'writeDataSource': Could not bind properties to 'DruidDataSource' : prefix=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true

t = {ConfigurationPropertiesBindException@14064} "org.springframework.boot.context.properties.ConfigurationPropertiesBindException: Error creating bean with name 'writeDataSource': Could not bind properties to 'DruidDataSource' : prefix=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true; nested exception is org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
 beanType = {Class@7625} "class com.alibaba.druid.pool.DruidDataSource"
 annotation = {$Proxy30@14067} "@org.springframework.boot.context.properties.ConfigurationProperties(prefix=spring.datasource.write, value=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true)"
 beanName = "writeDataSource"
 resourceDescription = null
 relatedCauses = null
 detailMessage = "Error creating bean with name 'writeDataSource': Could not bind properties to 'DruidDataSource' : prefix=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true"
 cause = {BindException@14069} "org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
  target = {Bindable@14146} "[Bindable@7edd0fc7 type = com.alibaba.druid.pool.DruidDataSource, value = 'provided', annotations = array<Annotation>[@org.springframework.boot.context.properties.ConfigurationProperties(prefix=spring.datasource.write, value=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true)]]"
  property = {ConfigurationProperty@14147} "[ConfigurationProperty@3794965c name = spring.datasource.write.url, value = '${MYSQL_WRITE_URL:jdbc:mysql://192.168.11.200:3418/elppmdb}?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&useSSL=false', origin = "spring.datasource.write.url" from property source "bootstrapProperties"]"
  name = {ConfigurationPropertyName@14148} "spring.datasource.write"
  detailMessage = "Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
  cause = {IllegalStateException@14150} "java.lang.IllegalStateException: Unable to set value for property url"
   detailMessage = "Unable to set value for property url"
   cause = {InvocationTargetException@14164} "java.lang.reflect.InvocationTargetException"
   stackTrace = {StackTraceElement[59]@14166} 
   suppressedExceptions = {Collections$UnmodifiableRandomAccessList@13951}  size = 0
  stackTrace = {StackTraceElement[40]@14152} 
  suppressedExceptions = {Collections$UnmodifiableRandomAccessList@13951}  size = 0
 stackTrace = {StackTraceElement[35]@14072} 
 suppressedExceptions = {Collections$UnmodifiableRandomAccessList@13951}  size = 0
CacheData.this.name = "fixed-192.168.11.196_8848-1f3c60b6-3e28-44eb-9798-7f7eeeff6a8d"

The root cause is clearly: Unable to set value for property url. Could the configuration stored in the Nacos config center be wrong?

spring:
  application:
    name:  erdp_discuss_app
  profile:
    active: ${PROFILE_ACTIVE:local}
  cloud:
    nacos:
      discovery:
        server-addr: ${NACOS_SERVER_ADDR:192.168.11.196:8848}
  profiles:
    active: ${PROFILE_ACTIVE:local}
  jackson:
    time-zone: GMT+8
    date-format: yyyy-MM-dd HH:mm:ss
    serialization:
      write_dates_as_timestamps: false

  datasource:
    write:
      url: jdbc:mysql://192.168.11.200:3418/elppmdb?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&useSSL=false
      username: elppm_user
      password: elppm
      driver-class-name: com.mysql.jdbc.Driver
      max-active: 20
      initial-size: 1
      min-idle: 3
      max-wait: 60000
      time-between-eviction-runs-millis: 60000
      min-evictable-idle-time-millis: 300000
      validation-query: SELECT 'x' FROM DUAL
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      # Filters for monitoring/stat interception; removing them breaks SQL stats in the monitoring UI, 'wall' is the firewall filter
      filters: stat,wall,log4j
      # Enable the mergeSql feature and slow-SQL logging via connectProperties
      connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=3000
      # Merge monitoring data from multiple DruidDataSources
      #spring.datasource.useGlobalDataSourceStat=true
    read:
      url: ${MYSQL_READ_URL:jdbc:mysql://192.168.11.200:3418/elppmdb}?useUnicode=true&characterEncoding=UTF-8&autoReconnect=true&useSSL=false
      username: ${MYSQL_DB_USERNAME:elppm_user}
      password: ${MYSQL_DB_PASSWORD:elppm}
      driver-class-name: com.mysql.jdbc.Driver
      max-active: 20
      initial-size: 1
      min-idle: 3
      max-wait: 60000
      time-between-eviction-runs-millis: 60000
      min-evictable-idle-time-millis: 300000
      validation-query: SELECT 'x' FROM DUAL
      test-while-idle: true
      test-on-borrow: false
      test-on-return: false
      # Filters for monitoring/stat interception; removing them breaks SQL stats in the monitoring UI, 'wall' is the firewall filter
      filters: stat,wall,log4j
      # Enable the mergeSql feature and slow-SQL logging via connectProperties
      connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=3000
      # Merge monitoring data from multiple DruidDataSources
      #spring.datasource.useGlobalDataSourceStat=true

  redis:
    host: ${APP_REDIS_HOST:192.168.11.200}
    password: ${APP_REDIS_PW:Redis!123} #password: # Login password of the redis server.
    port: ${APP_REDIS_PORT:6479}
    database: 2
    timeout: 1000
    pool:
      max-active: 8 # max connections
      max-idle: 8 # max idle connections
      max-wait: -1 # max wait time
      min-idle: 0 # initial connections
  http:
    multipart:
      enabled: true # Enable support of multi-part uploads.
      file-size-threshold: 4KB # Threshold after which files will be written to disk. Values can use the suffixed "MB" or "KB" to indicate a Megabyte or Kilobyte size.
      location: /tmp # Intermediate location of uploaded files.
      max-file-size: 5Mb # Max file size. Values can use the suffixed "MB" or "KB" to indicate a Megabyte or Kilobyte size.
      max-request-size: 50Mb # Max request size. Values can use the suffixed "MB" or "KB" to indicate a Megabyte or Kilobyte size.

  data:
    mongodb:
      uri: mongodb://${MONGODB_SERV_URL:192.168.11.200:27117/docs}

mybatis:
  typeAliasesPackage: com.lkk.ppm.discuss.domain
  configLocation: classpath:mybatis-config.xml
  mapperLocations: classpath:mybatis/*.xml

server:
  port: ${SERVER_PORT:8092}

erdpapp:
  basepath: ${RDP_BASE_PATH:/eRDPServer}
  logcharset: ${RDP_LOG_CHARSET:UTF-8}

As you can see, our data-source configuration uses read/write splitting rather than the plain single-datasource setup.

Our Java configuration is:

package com.lkk.platform.common.service.datasource;

import com.alibaba.druid.pool.DruidDataSource;
import com.baomidou.mybatisplus.entity.GlobalConfiguration;
import com.baomidou.mybatisplus.enums.DBType;
import com.baomidou.mybatisplus.enums.IdType;
import com.baomidou.mybatisplus.plugins.PaginationInterceptor;
import com.baomidou.mybatisplus.spring.MybatisSqlSessionFactoryBean;
import com.lkk.platform.common.exception.SystemException;

import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.boot.autoconfigure.MybatisProperties;
import org.mybatis.spring.boot.autoconfigure.SpringBootVFS;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.env.Environment;
import org.springframework.core.io.DefaultResourceLoader;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.util.ObjectUtils;
import org.springframework.util.StringUtils;

import javax.sql.DataSource;

/**
 * Data source configuration
 *
 * @author luo
 *
 */
@Configuration
public class DatasourceConfig {
    
    @Autowired
    Environment env;

    /**
     * Write data source druid data source.
     *
     * @return the druid data source
     */
    @Primary
    @Bean(name = "writeDataSource")
    @ConfigurationProperties("spring.datasource.write")
    public DruidDataSource writeDataSource() {
        
        System.out.println("+++++++++++++++++" + env.getProperty("spring.datasource.write.url"));
        return new DruidDataSource();
    }

    /**
     * Read data source druid data source.
     *
     * @return the druid data source
     */
    @Bean(name = "readDataSource")
    @ConfigurationProperties("spring.datasource.read")
    public DruidDataSource readDataSource() {
        return new DruidDataSource();
    }

    /**
     * Dynamic data source data source.
     *
     * @return the data source
     */
    @Bean(name = "dynamicDataSource")
    public DataSource dynamicDataSource() {
        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        dynamicDataSource.setWriteDataSource(writeDataSource());
        dynamicDataSource.setReadDataSource(readDataSource());

        return dynamicDataSource;
    }

    /**
     * Dynamic transaction manager data source transaction manager.
     *
     * @param dynamicDataSource the dynamic data source
     * @return the data source transaction manager
     */
    @Bean(name = "dynamicTransactionManager")
    public DataSourceTransactionManager dynamicTransactionManager(@Qualifier("dynamicDataSource") DataSource dynamicDataSource) {
        return new DynamicDataSourceTransactionManager(dynamicDataSource);
    }

    /**
     * Dynamic sql session factory sql session factory.
     *
     * @param dynamicDataSource the dynamic data source
     * @param properties        the properties
     * @return the sql session factory
     */
    @Bean
    @ConfigurationProperties(prefix = MybatisProperties.MYBATIS_PREFIX)
    public SqlSessionFactory dynamicSqlSessionFactory(
            @Qualifier("dynamicDataSource") DataSource dynamicDataSource,
            MybatisProperties properties) {

//        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
//        sessionFactory.setDataSource(dynamicDataSource);
//        sessionFactory.setVfs(SpringBootVFS.class);
//        sessionFactory.setTypeAliasesPackage(properties.getTypeAliasesPackage());
//        sessionFactory.setConfigLocation(new DefaultResourceLoader().getResource(properties.getConfigLocation()));
//        sessionFactory.setMapperLocations(properties.resolveMapperLocations());

        final MybatisSqlSessionFactoryBean sessionFactory = new MybatisSqlSessionFactoryBean();
        sessionFactory.setDataSource(dynamicDataSource);
        sessionFactory.setVfs(SpringBootVFS.class);
        if (StringUtils.hasLength(properties.getTypeAliasesPackage())) {
            sessionFactory.setTypeAliasesPackage(properties.getTypeAliasesPackage());
        }
        if (StringUtils.hasText(properties.getConfigLocation())) {
            sessionFactory.setConfigLocation(new DefaultResourceLoader().getResource(properties.getConfigLocation()));
        }
        if (!ObjectUtils.isEmpty(properties.resolveMapperLocations())) {
            sessionFactory.setMapperLocations(properties.resolveMapperLocations());
        }

        // pagination plugin
        PaginationInterceptor paginationInterceptor = new PaginationInterceptor();
        paginationInterceptor.setLocalPage(true);
        sessionFactory.setPlugins(new Interceptor[]{
                paginationInterceptor
        });
        // global configuration
        GlobalConfiguration globalConfig = new GlobalConfiguration();
        globalConfig.setDbType(DBType.MYSQL.getDb());
        globalConfig.setIdType(IdType.INPUT.getKey());
        globalConfig.setDbColumnUnderline(true);
        // hot reload
        //globalConfig.setRefresh(true);
        // auto fill
        globalConfig.setMetaObjectHandler(new MyMetaObjectHandler());
        sessionFactory.setGlobalConfig(globalConfig);

        try {
            return sessionFactory.getObject();
        } catch (Exception e) {
            throw new SystemException(e);
        }
    }
}

The root cause was in sight, but I still did not know which line actually threw. Like the moon reflected in water: it looks within reach, yet is far away. Fine, let's step into DruidDataSource and set a few breakpoints!

After commenting out and tweaking some code, I ran into this error:

detailMessage = "Error creating bean with name 'writeDataSource': Could not bind properties to 'DruidDataSource' : prefix=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true"
cause = {BindException@10784} "org.springframework.boot.context.properties.bind.BindException: Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
 target = {Bindable@10792} "[Bindable@36eeef8b type = com.alibaba.druid.pool.DruidDataSource, value = 'provided', annotations = array<Annotation>[@org.springframework.boot.context.properties.ConfigurationProperties(prefix=spring.datasource.write, value=spring.datasource.write, ignoreInvalidFields=false, ignoreUnknownFields=true)]]"
 property = {ConfigurationProperty@10793} "[ConfigurationProperty@49384031 name = spring.datasource.write.username, value = '${MYSQL_DB_USERNAME:elppm_user}', origin = "spring.datasource.write.username" from property source "bootstrapProperties"]"
 name = {ConfigurationPropertyName@10794} "spring.datasource.write"
 detailMessage = "Failed to bind properties under 'spring.datasource.write' to com.alibaba.druid.pool.DruidDataSource"
 cause = {IllegalStateException@10796} "java.lang.IllegalStateException: Unable to set value for property username"
  detailMessage = "Unable to set value for property username"
  cause = {InvocationTargetException@10810} "java.lang.reflect.InvocationTargetException"
  stackTrace = {StackTraceElement[60]@10812} 
  suppressedExceptions = {Collections$UnmodifiableRandomAccessList@10788}  size = 0
 stackTrace = {StackTraceElement[41]@10798} 
 suppressedExceptions = {Collections$UnmodifiableRandomAccessList@10788}  size = 0

This error still did not solve the problem. But this time I was smarter: expanding cause after cause, I found an InvocationTargetException, although that discovery was not of much use either.

Debugging into DruidDataSource, I looked for setUsername but did not find it there; it lives in the parent class DruidAbstractDataSource:

    public void setUsername(String username) {
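        // Druid 1.0.26: once the pool has been initialized, any further call to this
        // setter throws, even when the new value equals the current one.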
        if (this.inited) {
            throw new UnsupportedOperationException();
        } else {
            this.username = username;
        }
    }

So that was it: the inited flag. A breakpoint confirmed it. In other words, once the pool has been initialized, calling setUsername again throws! The thread stack looked like this:

setUsername:1007, DruidAbstractDataSource (com.alibaba.druid.pool)
invoke:-1, GeneratedMethodAccessor275 (sun.reflect)
invoke:43, DelegatingMethodAccessorImpl (sun.reflect)
invoke:498, Method (java.lang.reflect)
setValue:320, JavaBeanBinder$BeanProperty (org.springframework.boot.context.properties.bind)
bind:79, JavaBeanBinder (org.springframework.boot.context.properties.bind)
bind:62, JavaBeanBinder (org.springframework.boot.context.properties.bind)
bind:54, JavaBeanBinder (org.springframework.boot.context.properties.bind)
lambda$null$5:341, Binder (org.springframework.boot.context.properties.bind)
apply:-1, 1216198248 (org.springframework.boot.context.properties.bind.Binder$$Lambda$109)
accept:193, ReferencePipeline$3$1 (java.util.stream)
tryAdvance:1359, ArrayList$ArrayListSpliterator (java.util)
forEachWithCancel:126, ReferencePipeline (java.util.stream)
copyIntoWithCancel:498, AbstractPipeline (java.util.stream)
copyInto:485, AbstractPipeline (java.util.stream)
wrapAndCopyInto:471, AbstractPipeline (java.util.stream)
evaluateSequential:152, FindOps$FindOp (java.util.stream)
evaluate:234, AbstractPipeline (java.util.stream)
findFirst:464, ReferencePipeline (java.util.stream)
lambda$bindBean$6:342, Binder (org.springframework.boot.context.properties.bind)
get:-1, 397071633 (org.springframework.boot.context.properties.bind.Binder$$Lambda$108)
withIncreasedDepth:441, Binder$Context (org.springframework.boot.context.properties.bind)
withBean:427, Binder$Context (org.springframework.boot.context.properties.bind)
access$400:381, Binder$Context (org.springframework.boot.context.properties.bind)
bindBean:339, Binder (org.springframework.boot.context.properties.bind)
bindObject:278, Binder (org.springframework.boot.context.properties.bind)
bind:221, Binder (org.springframework.boot.context.properties.bind)
bind:210, Binder (org.springframework.boot.context.properties.bind)
bind:192, Binder (org.springframework.boot.context.properties.bind)
bind:82, ConfigurationPropertiesBinder (org.springframework.boot.context.properties)
bind:107, ConfigurationPropertiesBindingPostProcessor (org.springframework.boot.context.properties)
postProcessBeforeInitialization:93, ConfigurationPropertiesBindingPostProcessor (org.springframework.boot.context.properties)
applyBeanPostProcessorsBeforeInitialization:416, AbstractAutowireCapableBeanFactory (org.springframework.beans.factory.support)
initializeBean:1686, AbstractAutowireCapableBeanFactory (org.springframework.beans.factory.support)
initializeBean:407, AbstractAutowireCapableBeanFactory (org.springframework.beans.factory.support)
rebind:102, ConfigurationPropertiesRebinder (org.springframework.cloud.context.properties)
rebind:84, ConfigurationPropertiesRebinder (org.springframework.cloud.context.properties)
onApplicationEvent:128, ConfigurationPropertiesRebinder (org.springframework.cloud.context.properties)
onApplicationEvent:50, ConfigurationPropertiesRebinder (org.springframework.cloud.context.properties)
doInvokeListener:172, SimpleApplicationEventMulticaster (org.springframework.context.event)
invokeListener:165, SimpleApplicationEventMulticaster (org.springframework.context.event)
multicastEvent:139, SimpleApplicationEventMulticaster (org.springframework.context.event)
publishEvent:400, AbstractApplicationContext (org.springframework.context.support)
publishEvent:354, AbstractApplicationContext (org.springframework.context.support)
refresh:65, ContextRefresher (org.springframework.cloud.context.refresh)
handle:36, RefreshEventListener (org.springframework.cloud.endpoint.event)
invoke:-1, GeneratedMethodAccessor286 (sun.reflect)
invoke:43, DelegatingMethodAccessorImpl (sun.reflect)
invoke:498, Method (java.lang.reflect)
doInvoke:261, ApplicationListenerMethodAdapter (org.springframework.context.event)
processEvent:180, ApplicationListenerMethodAdapter (org.springframework.context.event)
onApplicationEvent:142, ApplicationListenerMethodAdapter (org.springframework.context.event)
doInvokeListener:172, SimpleApplicationEventMulticaster (org.springframework.context.event)
invokeListener:165, SimpleApplicationEventMulticaster (org.springframework.context.event)
multicastEvent:139, SimpleApplicationEventMulticaster (org.springframework.context.event)
publishEvent:400, AbstractApplicationContext (org.springframework.context.support)
publishEvent:354, AbstractApplicationContext (org.springframework.context.support)
receiveConfigInfo:125, NacosContextRefresher$1 (org.springframework.cloud.alibaba.nacos.refresh)
run:188, CacheData$1 (com.alibaba.nacos.client.config.impl)
safeNotifyListener:209, CacheData (com.alibaba.nacos.client.config.impl)
checkListenerMd5:160, CacheData (com.alibaba.nacos.client.config.impl)
run:505, ClientWorker$LongPollingRunnable (com.alibaba.nacos.client.config.impl)
runWorker:1149, ThreadPoolExecutor (java.util.concurrent)
run:624, ThreadPoolExecutor$Worker (java.util.concurrent)
run:748, Thread (java.lang)

You can see publishEvent in there, and it is certainly the NacosContextRefresher.this.applicationContext.publishEvent call from before!! (What a deep stack!) Looking at the stack carefully, there is a rebind method, which felt odd (it later turned out to be exactly what triggered the problem!!).

 

So when does the first init happen? Debugging showed it happens at the return sessionFactory.getObject(); line of the dynamicSqlSessionFactory method.

I still did not understand what was going on. How could this be fixed??

 

Oh my God!!!

 

A round of searching online turned up nothing! Nobody seemed to have hit the same problem. Am I the only one running into this? Am I the only person using Nacos???

I looked at https://blog.csdn.net/qq_37859539/article/details/81592954, https://blog.csdn.net/JAdroid/article/details/80490679 and https://github.com/alibaba/druid/issues/2507. Could it be a filter problem?

I also checked the issues at https://github.com/alibaba/nacos: nothing there either.

 

Surely such a big pitfall would not be hit by just one person! It smelled like a version problem, so I upgraded Druid from 1.0.26 to 1.1.5 and looked at the setUsername method again:

    public void setUsername(String username) {
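        // Newer Druid first checks whether the value actually changed, so re-binding
        // an identical value after init() is a harmless no-op.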
        if (!StringUtils.equals(this.username, username)) {
            if (this.inited) {
                throw new UnsupportedOperationException();
            } else {
                this.username = username;
            }
        }
    }

Suddenly it all made sense!

 

So that's it; this check is indispensable! Nacos fetches the configuration every 30 seconds and then rebinds it, but with druid 1.0.26 the rebind throws, and that error is never printed out! Damn. With druid 1.1.5 the problem is gone. Checking the other setters, they all carry the same !StringUtils.equals guard.
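
To make the difference concrete, here is a minimal sketch that imitates what the config-center-triggered rebind does to an already-initialized pool. It assumes the MySQL driver is on the classpath and reuses the URL and credentials from the config above purely as placeholders; the class name is made up for illustration:

import com.alibaba.druid.pool.DruidDataSource;

public class DruidRebindRepro {

    public static void main(String[] args) throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("jdbc:mysql://192.168.11.200:3418/elppmdb");
        ds.setUsername("elppm_user");
        ds.setPassword("elppm");
        ds.setInitialSize(1);

        // First binding plus pool initialization, as happens during application startup.
        ds.init();

        // Simulate what the Nacos-triggered rebind does 30 seconds later: bind the
        // same properties again onto the already-initialized pool.
        // druid 1.0.26: throws UnsupportedOperationException because inited == true.
        // druid 1.1.5+: a no-op, since the value has not changed.
        ds.setUsername("elppm_user");

        ds.close();
    }
}

On 1.0.26 the UnsupportedOperationException from that second setUsername is what, wrapped by the Spring binder, ends up swallowed inside safeNotifyListener above, which is why nothing ever showed up in the logs.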

 

So upgrading was the right fix! Nacos was not at fault after all: fetching the config every 30 seconds and re-binding it is perfectly correct behavior, and that logic is fine. The real pitfall was in Druid!! That also seems to explain the thread explosion: because the rebind kept failing, lastCallMd5 (point 2 above) was never updated, so Nacos kept treating the config as changed and fired a refresh on every poll, and each refresh apparently spun up yet another Nacos config client with its own Worker threads until the CPU was exhausted.
