Spring Cloud Part 9 | Distributed Service Tracing with Sleuth

This is the ninth article in the Spring Cloud series. Reading the previous eight articles will help you better understand this one:

  1. Spring Cloud Part 1 | Introduction and Overview of Common Spring Cloud Components

  2. Spring Cloud Part 2 | Using and Understanding the Eureka Registry

  3. Spring Cloud Part 3 | Building a Highly Available Eureka Registry

  4. Spring Cloud Part 4 | Client-Side Load Balancing with Ribbon

  5. Spring Cloud Part 5 | Circuit Breaking with Hystrix

  6. Spring Cloud Part 6 | Hystrix Dashboard Monitoring

  7. Spring Cloud Part 7 | Declarative Service Invocation with Feign

  8. Spring Cloud Part 8 | Hystrix Cluster Monitoring with Turbine

1. Sleuth Overview

    As a business grows, the system grows with it, and the call relationships among microservices become increasingly intricate. A request initiated by a client typically passes through several different microservices that cooperate to produce the final result; in a complex microservice architecture, almost every front-end request forms a complex distributed call chain, and excessive latency or an error in any dependent service along that chain can cause the whole request to fail. End-to-end tracing of each request therefore becomes increasingly important: by tracing a request across its calls, we can quickly locate the root cause of errors and analyze the performance bottlenecks along each request chain.

    Spring Cloud Sleuth provides a complete solution to the distributed tracing problem described above. The rest of this article introduces how to use it.

2. Sleuth Quick Start

Step 1: To keep the other modules clean, build a fresh consumer (springcloud-consumer-sleuth) and provider (springcloud-provider-sleuth). Both are identical to the ones used in earlier articles, and the registry is still the one from the previous examples (springcloud-eureka-server/8700); see the sample source code for details.

二、完成以上工做以後,咱們爲服務提供者和服務消費者添加跟蹤功能,經過Spring Cloud Sleuth的封裝,咱們爲應用增長服務跟蹤能力的操做很是方便,只須要在服務提供者和服務消費者增長spring-cloud-starter-sleuth依賴便可

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

Step 3: Call the consumer's endpoint, then look at the console logs.

Log printed by the consumer (springcloud-consumer-sleuth):

2019-12-05 12:30:20.178  INFO [springcloud-consumer-sleuth,f6fb983680aab32b,f6fb983680aab32b,false] 8992 --- [nio-9090-exec-1] c.s.controller.SleuthConsumerController  : === consumer hello ===

Log printed by the provider (springcloud-provider-sleuth):

2019-12-05 12:30:20.972  INFO [springcloud-provider-sleuth,f6fb983680aab32b,c70932279d3b3a54,false] 788 --- [nio-8080-exec-1] c.s.controller.SleuthProviderController  : === provider hello ===

    從上面的控制檯輸出內容中,咱們能夠看到多了一些形如 [springcloud-consumer-sleuth,f6fb983680aab32b,c70932279d3b3a54,false]的日誌信息,而這些元素正是實現分佈式服務跟蹤的重要組成部分,每一個值的含義以下所述:

  • First value: springcloud-consumer-sleuth, the application name, i.e. the spring.application.name property configured in application.properties.

  • Second value: f6fb983680aab32b, an ID generated by Spring Cloud Sleuth called the Trace ID, which identifies one request chain. A request chain has one Trace ID and multiple Span IDs.

  • Third value: c70932279d3b3a54, another ID generated by Spring Cloud Sleuth called the Span ID, which marks a basic unit of work, such as sending an HTTP request.

  • Fourth value: false, which indicates whether this record should be exported to a service such as Zipkin for collection and display.

    Of these four values, the Trace ID and Span ID are the core of Spring Cloud Sleuth's distributed tracing. The same Trace ID is kept and propagated through every call in one request chain, stitching together the tracing information scattered across the different microservice processes. In the output above, springcloud-consumer-sleuth and springcloud-provider-sleuth serve the same front-end request, so they share the same Trace ID and belong to the same request chain.
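As a quick illustration of how the four log fields fit together, the bracketed prefix can be split apart mechanically. This is a standalone sketch for reading the log format, not part of Sleuth itself:

```java
public class SleuthLogPrefix {
    // Parses the "[app,traceId,spanId,exported]" prefix that Sleuth adds to log lines.
    static String[] parse(String prefix) {
        // Strip the surrounding brackets, then split on commas.
        return prefix.substring(1, prefix.length() - 1).split(",");
    }

    public static void main(String[] args) {
        String[] f = parse("[springcloud-consumer-sleuth,f6fb983680aab32b,f6fb983680aab32b,false]");
        System.out.println("app      = " + f[0]); // spring.application.name
        System.out.println("traceId  = " + f[1]); // shared by every span in the request chain
        System.out.println("spanId   = " + f[2]); // one unit of work within the chain
        System.out.println("exported = " + f[3]); // whether the record is sent to Zipkin
    }
}
```

Note that for the root span (as in the consumer log above), the Span ID equals the Trace ID.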

3. How Tracing Works

Service tracing in a distributed system is not complicated in theory; it comes down to two key points:

  1. To trace a request, the tracing framework only needs to create a unique identifier for it when it reaches the entry point of the distributed system, and then keep propagating that identifier as the request flows through the system, until the response is returned to the caller. That identifier is the Trace ID introduced above; by recording it, we can correlate all the log entries produced while handling the request.

  2. To measure the latency of each processing unit, when the request arrives at a service component, or processing reaches a particular state, another unique identifier marks its beginning, intermediate steps, and end. That identifier is the Span ID. Every span has a start and an end; by recording the timestamps of both, we can compute the span's latency. Besides timestamps, a span can also carry other metadata, such as an event name or request details.
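Point 2 can be sketched in a few lines: record one timestamp when a span opens and another when it closes, and the difference is the span's latency. This is a toy model, not Sleuth's actual internal classes, using microsecond epoch timestamps of the kind Zipkin spans carry:

```java
public class SpanTiming {
    // Minimal span model: an ID plus start/end timestamps in microseconds.
    final String spanId;
    final long startUs;
    long endUs;

    SpanTiming(String spanId, long startUs) {
        this.spanId = spanId;
        this.startUs = startUs;
    }

    // Latency is simply the gap between the end and start timestamps.
    long durationUs() {
        return endUs - startUs;
    }

    public static void main(String[] args) {
        SpanTiming span = new SpanTiming("8d6b8fb1bbb4f48c", 1576591155281192L);
        span.endUs = 1576591155281192L + 3999L; // closed 3999 microseconds later
        System.out.println(span.spanId + " took " + span.durationUs() + "us"); // prints "8d6b8fb1bbb4f48c took 3999us"
    }
}
```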

    在【2、sleuth快速入門】示例中,咱們輕鬆實現了日誌級別的跟蹤信息接入,這徹底歸功於spring-cloud-starter-sleuth組件的實現,在SpringBoot應用中經過在工程中引入spring-cloud-starter-sleuth依賴以後,他會自動爲當前應用構建起各通訊通道的跟蹤機制,好比:

  • requests passed through RabbitMQ or Kafka (or any other message middleware integrated via a Spring Cloud Stream binder)

  • requests passed through the Zuul proxy

  • requests sent with RestTemplate

    在【2、sleuth快速入門】示例中,因爲springcloud-consumer-sleuth對springcloud-provider-sleuth發起的請求是經過RestTemplate實現的,因此spring-cloud-starter-sleuth組件會對該請求進行處理。在發送到springcloud-provider-sleuth以前,Sleuth會在該請求的Header中增長實現跟蹤須要的重要信息,主要有下面這幾個(更多關於頭信息的定義能夠經過查看org.springframework.cloud.sleuth.Span的源碼獲取)。

  • X-B3-TraceId: the unique identifier of a request chain (trace); required.

  • X-B3-SpanId: the unique identifier of a unit of work (span); required.

  • X-B3-ParentSpanId: identifies the unit of work the current one belongs to; empty for the root span (the first unit of work in the chain).

  • X-B3-Sampled: the sampling flag; 1 means the record should be exported, 0 means it should not.

  • X-B3-Name: the name of the unit of work.

We can modify springcloud-provider-sleuth slightly to print these headers:

private final Logger logger = Logger.getLogger(SleuthProviderController.class.getName());

    @RequestMapping("/hello")
    public String hello(HttpServletRequest request){
        logger.info("=== provider hello ===,Traced={" + request.getHeader("X-B3-TraceId") + "},SpanId={" + request.getHeader("X-B3-SpanId") + "}");
        return "Trace";
    }

After this change, restart the applications and hit the endpoint again; the provider's log now shows the Trace ID and Span ID of the request it is processing.

Log printed by the consumer (springcloud-consumer-sleuth):

2019-12-05 13:15:01.457  INFO [springcloud-consumer-sleuth,41697d7fa118c150,41697d7fa118c150,false] 10036 --- [nio-9090-exec-2] c.s.controller.SleuthConsumerController  : === consumer hello ===

Log printed by the provider (springcloud-provider-sleuth):

2019-12-05 13:15:01.865  INFO [springcloud-provider-sleuth,41697d7fa118c150,863a1245c86b580e,false] 11088 --- [nio-8080-exec-1] c.s.controller.SleuthProviderController  : === provider hello ===,Traced={41697d7fa118c150},SpanId={863a1245c86b580e}

4. Sampling and Collection

    Trace ID and Span ID already give us request tracing across the distributed system. The recorded tracing data is ultimately collected by an analysis system for monitoring and analysis, e.g. alerting on request chains with excessive latency, or drilling into the call details of a chain. When integrating with such an analysis system, a question arises: how much tracing data should be collected?

    理論上來講,咱們收集的跟蹤信息越多就能夠越好地反映出系統的實際運行狀況,並給出更精準的預警和分析。可是在高併發的分佈式系統運行時,大量的請求調用會產生海量的跟蹤日誌信息,若是收集過多的跟蹤信息將會對整個分佈式系統的性能形成必定的影響,同時保存大量的日誌信息也須要很多的存儲開銷。因此,在Sleuth中採用了抽象收集的方式來爲跟蹤信息打上收集標識,也就是咱們以前在日誌信息中看到的第4個布爾類型的值,他表明了該信息是否被後續的跟蹤信息收集器獲取和存儲。

public abstract class Sampler {
  /**
   * Returns true if the trace ID should be measured.
   *
   * @param traceId The trace ID to be decided on, can be ignored
   */
  public abstract boolean isSampled(long traceId);
}

    Spring Cloud Sleuth calls isSampled when it produces tracing information, to decide whether the record should be collected. Note that even when isSampled returns false, it only means the record will not be exported to a downstream analysis system (such as Zipkin); the request is still traced, which is why we still see log entries whose collection flag is false.
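To make the contract concrete, here is a hypothetical sampler that keeps exactly one trace out of every N, ignoring the trace ID itself. This is only a sketch against the abstract class above; it is not how Sleuth's bundled percentage-based sampler is implemented:

```java
import java.util.concurrent.atomic.AtomicLong;

public class EveryNthSampler {
    // Local copy of the Sampler contract shown above, so the sketch is self-contained.
    abstract static class Sampler {
        public abstract boolean isSampled(long traceId);
    }

    // Hypothetical strategy: export one trace out of every n.
    static class Nth extends Sampler {
        final long n;
        final AtomicLong counter = new AtomicLong();

        Nth(long n) { this.n = n; }

        @Override
        public boolean isSampled(long traceId) {
            // Count every decision; keep the nth, 2nth, 3nth, ... trace.
            return counter.incrementAndGet() % n == 0;
        }
    }

    public static void main(String[] args) {
        Nth sampler = new Nth(10); // roughly the default 10% rate
        int exported = 0;
        for (int i = 0; i < 100; i++) {
            if (sampler.isSampled(i)) exported++;
        }
        System.out.println("exported " + exported + " of 100 traces"); // prints "exported 10 of 100 traces"
    }
}
```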

    By default, Sleuth uses the sampling strategy backed by SamplerProperties, which collects tracing information for a configurable percentage of requests. The percentage can be set in application.yml with the parameter below; the default is 0.1, meaning 10% of request traces are collected.

spring:
  sleuth:
    sampler:
      probability: 0.1

    During development and debugging, we usually want to collect all tracing information and export it to the remote store. We can set the value to 1, or replace the SamplerProperties-based strategy by injecting a Sampler bean of our own, for example:

@Bean
public Sampler defaultSampler() {
    return Sampler.ALWAYS_SAMPLE;
}

    因爲跟蹤日誌信息數據的價值每每僅在最近一段時間內很是有用,好比一週。那麼咱們在設計抽樣策略時,主要考慮在不對系統形成明顯性能影響的狀況下,以在日誌保留時間窗內充分利用存儲空間的原則來實現抽樣策略。

5. Integrating with Zipkin

    因爲日誌文件都離散地存儲在各個服務實例的文件系之上,僅經過查看日誌信息文件來分咱們的請求鏈路依然是一件至關麻煩的事情,因此咱們須要一些工具來幫助集中收集、存儲和搜索這些跟蹤信息,好比ELK日誌平臺,雖然經過ELK平臺提供的收集、存儲、搜索等強大功能,咱們對跟蹤信息的管理和使用已經變得很是便利。可是在ELK平臺中的數據分析維度缺乏對請求鏈路中各階段時間延遲的關注,不少時候咱們追溯請求鏈路的一個緣由是爲了找出整個調用鏈路中出現延遲太高的瓶頸源,或爲了實現對分佈式系統作延遲監控等與時間消耗相關的需求,這時候相似ELK這樣的日誌分析系統就顯得有些乏力了。對於這樣的問題,咱們就能夠引入Zipkin來得以輕鬆解決。

    Zipkin is an open-source Twitter project based on Google's Dapper. It collects request-chain tracing data from the various servers and provides a REST API for querying that data, so monitoring programs can promptly detect rising latency in the system and locate the root of performance bottlenecks. Besides the developer-facing API, it also ships a convenient UI for searching traces and inspecting request-chain details intuitively, for example querying the processing time of user requests within a given period.

    下圖展現了Zipkin的基礎架構,他主要由4個核心組成:

  • Collector: processes tracing information sent by external systems, converting it into Zipkin's internal Span format for subsequent storage, analysis, and display.

  • Storage: handles the tracing information received by the collector. By default it is kept in memory; the storage strategy can be changed to persist the data to a database via other storage components.

  • RESTful API: provides the external access interface, for example serving tracing information to clients, or to external systems for monitoring.

  • Web UI: the upper-layer application built on the API component, through which users can query and analyze tracing information conveniently and intuitively.

Step 1: Obtain the Zipkin server

    在Spring Cloud爲F版本的時候,已經不須要本身構建Zipkin Server了,只須要下載jar便可,下載地址:https://dl.bintray.com/openzipkin/maven/io/zipkin/zipkin-server/

Zipkin's GitHub address: https://github.com/openzipkin

Step 2: After downloading the jar, run it:

java -jar zipkin-server-2.10.1-exec.jar

Then open http://localhost:9411 in a browser; the Zipkin management UI appears as shown.

三、爲應用引入和配置Zipkin服務

    咱們須要對應用作一些配置,以實現將跟蹤信息輸出到Zipkin Server。咱們使用【2、sleuth快速入門】中實現的消費者(springcloud-consumer-sleuth),提供者(springcloud-provider-sleuth)爲例,對他們進行改造,都加入整合Zipkin的依賴

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>

Step 4: In both the consumer (springcloud-consumer-sleuth) and the provider (springcloud-provider-sleuth), add the Zipkin server settings below; the default connection address is http://localhost:9411.

spring:
  zipkin:
    base-url: http://localhost:9411

Step 5: Testing and analysis

    到這裏咱們已經完成了配置Zipkin Server的全部基本工做,而後訪問幾回消費者接口http://localhost:9090/consumer/hello,當在日誌中出現跟蹤信息的最後一個值爲true的時候,說明該跟蹤信息會輸出給Zipkin Server,以下日誌

2019-12-05 15:47:25.600  INFO [springcloud-consumer-sleuth,cbdbbebaf32355ab,cbdbbebaf32355ab,false] 8564 --- [nio-9090-exec-9] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:27.483  INFO [springcloud-consumer-sleuth,8f332a4da3c05f62,8f332a4da3c05f62,false] 8564 --- [nio-9090-exec-6] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:42.127  INFO [springcloud-consumer-sleuth,61b922906800ac60,61b922906800ac60,true] 8564 --- [nio-9090-exec-2] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:42.457  INFO [springcloud-consumer-sleuth,1acae9ebecc4d36d,1acae9ebecc4d36d,false] 8564 --- [nio-9090-exec-4] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:42.920  INFO [springcloud-consumer-sleuth,b2db9e00014ceb88,b2db9e00014ceb88,false] 8564 --- [nio-9090-exec-7] c.s.controller.SleuthConsumerController  : === consumer hello ===
2019-12-05 15:47:43.457  INFO [springcloud-consumer-sleuth,ade4d5a7d97ca16b,ade4d5a7d97ca16b,false] 8564 --- [nio-9090-exec-9] c.s.controller.SleuthConsumerController  : === consumer hello ===

    In the Zipkin UI, we can now choose suitable query conditions and click the Find Traces button to look up the traces that just appeared in the log (we can also search by a Trace ID from the log, using the input box at the top right). The page looks like this:

    Clicking the springcloud-consumer-sleuth entry below drills into the details Sleuth recorded for that trace, including the request timing we care about.

    Clicking the Dependencies menu in the navigation bar shows the system's request-chain dependency graph that the Zipkin server derived from the tracing data, as shown below.

6. Storing Zipkin Data in Elasticsearch

    In section 5, the collected trace data is stored in the Zipkin service's memory by default, so it is lost whenever the Zipkin service restarts. In a development environment, in-memory storage is a convenient shortcut, but in production the data must be persisted. We could store it in MySQL, but with the data volumes seen in practice MySQL is not a great choice; Elasticsearch, with its natural strength in search, is a better fit.

Step 1: The zipkin-server-2.10.1-exec.jar used in the previous steps was downloaded earlier; here we use Zipkin server version 2.19.2, downloadable from: https://dl.bintray.com/openzipkin/maven/io/zipkin/zipkin-server/

Step 2: zipkin-server-2.19.2-exec.jar supports only Elasticsearch 5.x to 7.x, so mind the version match. Download and install a compatible Elasticsearch version from the Elastic website, and make sure the ES service is up before continuing.

三、啓動zipkin服務命令以下:

java -DSTORAGE_TYPE=elasticsearch -DES_HOSTS=http://47.112.11.147:9200 -jar zipkin-server-2.19.2-exec.jar

There are further configurable parameters; see: https://github.com/openzipkin/zipkin/tree/master/zipkin-server#elasticsearch-storage

* `ES_HOSTS`: A comma separated list of elasticsearch base urls to connect to ex. http://host:9200.
              Defaults to "http://localhost:9200".
* `ES_PIPELINE`: Indicates the ingest pipeline used before spans are indexed. No default.
* `ES_TIMEOUT`: Controls the connect, read and write socket timeouts (in milliseconds) for
                Elasticsearch Api. Defaults to 10000 (10 seconds)
* `ES_INDEX`: The index prefix to use when generating daily index names. Defaults to zipkin.
* `ES_DATE_SEPARATOR`: The date separator to use when generating daily index names. Defaults to '-'.
* `ES_INDEX_SHARDS`: The number of shards to split the index into. Each shard and its replicas
                     are assigned to a machine in the cluster. Increasing the number of shards
                     and machines in the cluster will improve read and write performance. Number
                     of shards cannot be changed for existing indices, but new daily indices
                     will pick up changes to the setting. Defaults to 5.
* `ES_INDEX_REPLICAS`: The number of replica copies of each shard in the index. Each shard and
                       its replicas are assigned to a machine in the cluster. Increasing the
                       number of replicas and machines in the cluster will improve read
                       performance, but not write performance. Number of replicas can be changed
                       for existing indices. Defaults to 1. It is highly discouraged to set this
                       to 0 as it would mean a machine failure results in data loss.
* `ES_USERNAME` and `ES_PASSWORD`: Elasticsearch basic authentication, which defaults to empty string.
                                   Use when X-Pack security (formerly Shield) is in place.
* `ES_HTTP_LOGGING`: When set, controls the volume of HTTP logging of the Elasticsearch Api.
                     Options are BASIC, HEADERS, BODY

Step 4: In the application.yml of springcloud-provider-sleuth and springcloud-consumer-sleuth, set the sampling probability to 1 to make testing easier:

spring:
  sleuth:
    sampler:
      probability: 1

Step 5: Call http://localhost:9090/consumer/hello a few times, then open Kibana; we can see that the index has been created.

Step 6: We can see that data has been stored in it.

Step 7: Opening Zipkin, we can see the trace information.

Step 8: The Dependencies page, however, shows nothing.

Step 9: Zipkin creates ES indices whose names start with zipkin and end with a date, split by day by default. When ES storage is used, Zipkin cannot display the dependency information directly; as the Zipkin documentation explains, the dependency links must be computed with the zipkin-dependencies tool.
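The daily index names Zipkin uses can be reconstructed from the ES_INDEX prefix and ES_DATE_SEPARATOR settings described in the parameter list above. This is a sketch of the naming convention only; the exact separator between the prefix and the span/dependency part has varied across Zipkin versions:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class ZipkinIndexName {
    // Builds a daily index name such as "zipkin:span-2019-12-17" from the
    // ES_INDEX prefix, the ES_DATE_SEPARATOR, and the day being indexed.
    static String dailyIndex(String prefix, String type, char sep, LocalDate day) {
        DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy" + sep + "MM" + sep + "dd");
        return prefix + ":" + type + "-" + day.format(f);
    }

    public static void main(String[] args) {
        // These match the indices the dependencies job reads and writes in its log.
        System.out.println(dailyIndex("zipkin", "span", '-', LocalDate.of(2019, 12, 17)));
        System.out.println(dailyIndex("zipkin", "dependency", '-', LocalDate.of(2019, 12, 17)));
    }
}
```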

Step 10: Generating dependency links with zipkin-dependencies

    zipkin-dependencies generates the global call graph as a Spark job; download it from the address below.

The zipkin-dependencies version used here is 2.4.1.

GitHub address: https://github.com/openzipkin/zipkin-dependencies

Download address: https://dl.bintray.com/openzipkin/maven/io/zipkin/dependencies/zipkin-dependencies/

十一、下載完成以後啓動

Do not try to run this jar on Windows; it will not start, no matter how long you fight it. Run it on Linux.

The official documentation gives a Linux example:

STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar `date -u -d '1 day ago' +%F`

    STORAGE_TYPE爲存儲類型,我門這裏使用的是ES因此修改成elasticsearch,後面的date參數命令能夠用來顯示或設定系統的日期與時間,不瞭解的自行百度。

啓動命令爲:

ZIPKIN_LOG_LEVEL=DEBUG ES_NODES_WAN_ONLY=true STORAGE_TYPE=elasticsearch  ES_HOSTS=http://47.112.11.147:9200  java -Xms256m -Xmx1024m -jar zipkin-dependencies-2.4.1.jar `date -u -d '1 day ago' +%F`

    Start zipkin-dependencies with the command above. Note that the program computes the dependency links once, from the Zipkin data of the specified day, and then exits (Done). Our data was collected into ES yesterday (2019-12-17), so today (2019-12-18) we pass the previous day in the command; this writes the dependency data into ES under the index zipkin:dependency-2019-12-17. To keep dependencies up to date, zipkin-dependencies must therefore be run periodically, for example scheduled with crontab on Linux.

The log after running it:

[root@VM_0_8_centos local]# ZIPKIN_LOG_LEVEL=DEBUG ES_NODES_WAN_ONLY=true STORAGE_TYPE=elasticsearch  ES_HOSTS=http://47.112.11.147:9200  java -Xms256m -Xmx1024m -jar zipkin-dependencies-2.4.1.jar `date -u -d '1 day ago' +%F`
19/12/18 21:44:10 WARN Utils: Your hostname, VM_0_8_centos resolves to a loopback address: 127.0.0.1; using 172.21.0.8 instead (on interface eth0)
19/12/18 21:44:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: spark.ui.enabled=false
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.index.read.missing.as.empty=true
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.nodes.wan.only=true
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.keystore.location=
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.keystore.pass=
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.truststore.location=
19/12/18 21:44:10 DEBUG ElasticsearchDependenciesJob: Spark conf properties: es.net.ssl.truststore.pass=
19/12/18 21:44:10 INFO ElasticsearchDependenciesJob: Processing spans from zipkin:span-2019-12-17/span
19/12/18 21:44:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/12/18 21:44:12 WARN Java7Support: Unable to load JDK7 types (annotations, java.nio.file.Path): no Java7 support added
19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:13 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:16 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:17 WARN Resource: Detected type name in resource [zipkin:span-2019-12-17/span]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:18 DEBUG DependencyLinker: building trace tree: traceId=a5253479e359638b
19/12/18 21:44:18 DEBUG DependencyLinker: traversing trace tree, breadth-first
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","id":"a5253479e359638b","kind":"SERVER","name":"get /consumer/hello","timestamp":1576591155280041,"duration":6191,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv6":"::1","port":62085},"tags":{"http.method":"GET","http.path":"/consumer/hello","mvc.controller.class":"SleuthConsumerController","mvc.controller.method":"hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: root's client is unknown; skipping
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"CLIENT","name":"get","timestamp":1576591155281192,"duration":3999,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"SERVER","name":"get /provider/hello","timestamp":1576591155284040,"duration":1432,"localEndpoint":{"serviceName":"springcloud-provider-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv4":"192.168.0.104","port":62182},"tags":{"http.method":"GET","http.path":"/provider/hello","mvc.controller.class":"SleuthProviderController","mvc.controller.method":"hello"},"shared":true}
19/12/18 21:44:18 DEBUG DependencyLinker: found remote ancestor {"traceId":"a5253479e359638b","parentId":"a5253479e359638b","id":"8d6b8fb1bbb4f48c","kind":"CLIENT","name":"get","timestamp":1576591155281192,"duration":3999,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: incrementing link springcloud-consumer-sleuth -> springcloud-provider-sleuth
19/12/18 21:44:18 DEBUG DependencyLinker: building trace tree: traceId=54af196ac59ee13e
19/12/18 21:44:18 DEBUG DependencyLinker: traversing trace tree, breadth-first
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","id":"54af196ac59ee13e","kind":"SERVER","name":"get /consumer/hello","timestamp":1576591134958091,"duration":139490,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv6":"::1","port":62085},"tags":{"http.method":"GET","http.path":"/consumer/hello","mvc.controller.class":"SleuthConsumerController","mvc.controller.method":"hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: root's client is unknown; skipping
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"CLIENT","name":"get","timestamp":1576591134962066,"duration":133718,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: processing {"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"SERVER","name":"get /provider/hello","timestamp":1576591135064214,"duration":37707,"localEndpoint":{"serviceName":"springcloud-provider-sleuth","ipv4":"192.168.0.104"},"remoteEndpoint":{"ipv4":"192.168.0.104","port":62089},"tags":{"http.method":"GET","http.path":"/provider/hello","mvc.controller.class":"SleuthProviderController","mvc.controller.method":"hello"},"shared":true}
19/12/18 21:44:18 DEBUG DependencyLinker: found remote ancestor {"traceId":"54af196ac59ee13e","parentId":"54af196ac59ee13e","id":"1a827ae864bd2399","kind":"CLIENT","name":"get","timestamp":1576591134962066,"duration":133718,"localEndpoint":{"serviceName":"springcloud-consumer-sleuth","ipv4":"192.168.0.104"},"tags":{"http.method":"GET","http.path":"/provider/hello"}}
19/12/18 21:44:18 DEBUG DependencyLinker: incrementing link springcloud-consumer-sleuth -> springcloud-provider-sleuth
19/12/18 21:44:18 INFO ElasticsearchDependenciesJob: Saving dependency links to zipkin:dependency-2019-12-17/dependency
19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:18 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 WARN Resource: Detected type name in resource [zipkin:dependency-2019-12-17/dependency]. Type names are deprecated and will be removed in a later release.
19/12/18 21:44:19 INFO ElasticsearchDependenciesJob: Processing spans from zipkin-span-2019-12-17
19/12/18 21:44:20 INFO ElasticsearchDependenciesJob: No dependency links could be processed from spans in index zipkin-span-2019-12-17
19/12/18 21:44:20 INFO ElasticsearchDependenciesJob: Done

Step 12: One of the settings above deserves a note: ES_NODES_WAN_ONLY=true

Whether the connector is used against an Elasticsearch instance in a cloud/restricted environment over the WAN, such as Amazon Web Services. In this mode, the connector disables discovery and only connects through the declared es.nodes during all operations, including reads and writes. Note that in this mode, performance is highly affected.

    This setting means the connector is talking over the WAN to an ES instance in a cloud or otherwise restricted network, such as AWS. Declaring it disables discovery of other nodes, and all subsequent reads and writes go only through the declared es.nodes. Adding this property makes ES instances in the cloud or in restricted networks reachable, but because all reads and writes funnel through those nodes, performance is noticeably affected. The zipkin-dependencies GitHub page simply notes that when this is true, only the values set in ES_HOSTS are used, for example when the ES cluster runs in Docker.

Step 13: Check the indices generated in Kibana.

Step 14: Then check the Dependencies page in Zipkin; the information now appears.

 

Full example source code: https://gitee.com/coding-farmer/spirngcloud-learn

 
