The previous posts covered getting started with Spring Cloud + Sleuth + Zipkin and installing Kafka. Elasticsearch installation is not covered here, since there is plenty of material on that online; here it is used purely as the persistence store for traces.
For the Zipkin server, add the following to pom.xml:
<dependencies>
    <!-- Spring Boot -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Zipkin -->
    <dependency>
        <groupId>io.zipkin.java</groupId>
        <artifactId>zipkin</artifactId>
        <version>2.4.2</version>
    </dependency>
    <!-- Zipkin Elasticsearch storage -->
    <dependency>
        <groupId>io.zipkin.java</groupId>
        <artifactId>zipkin-autoconfigure-storage-elasticsearch-http</artifactId>
        <version>2.4.2</version>
        <optional>true</optional>
    </dependency>
    <!-- Zipkin stream binding -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-stream-kafka</artifactId>
    </dependency>
</dependencies>
At the same time, create application.yml under resources:
# Kafka configuration
spring:
  sleuth:
    enabled: false
    sampler:
      percentage: 1.0
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          zkNodes: localhost:2181

# Elasticsearch configuration
zipkin:
  storage:
    type: elasticsearch
    elasticsearch:
      host: localhost:9200
      cluster: elasticsearch
      index: zipkin
      index-shards: 1
      index-replicas: 1
Add the following annotation to the startup class:
@SpringBootApplication
@EnableZipkinStreamServer // lets this application act as a Zipkin server fed by the stream
public class ZipkinServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ZipkinServerApplication.class, args);
    }
}
Once it starts, you can see the Kafka connection information in the console.
For service one, include the following in pom.xml:
<dependencies>
    <!-- Spring Boot -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Zipkin stream binding -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream-binder-kafka</artifactId>
    </dependency>
    <!-- Logback JSON output; needed because the Kafka binding is attached to logging events -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>4.6</version>
    </dependency>
    <!-- Feign -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-feign</artifactId>
    </dependency>
</dependencies>
Pay particular attention to the logging dependency: since the Kafka binding is driven by logging events, logstash-logback-encoder must be on the classpath.
Add the following configuration to application.yml:
server:
  port: 8082
spring:
  application:
    name: serverone
  sleuth:
    sampler:
      percentage: 1.0
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          zkNodes: localhost:2181
Create logback-spring.xml with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="springAppName" source="spring.application.name"/>

    <!-- Example for logging into the build folder of your project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}"/>

    <property name="CONSOLE_LOG_PATTERN"
              value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr([${springAppName:-},%X{X-B3-TraceId:-},%X{X-B3-SpanId:-},%X{X-Span-Export:-}]){yellow} %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}"/>

    <!-- Appender to log to console -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <!-- Minimum logging level to be presented in the console logs -->
            <level>INFO</level>
        </filter>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender to log to file -->
    <appender name="flatfile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- Appender to log to file in a JSON format -->
    <appender name="logstash" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_FILE}.json</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_FILE}.json.%d{yyyy-MM-dd}.gz</fileNamePattern>
            <maxHistory>7</maxHistory>
        </rollingPolicy>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                          "severity": "%level",
                          "service": "${springAppName:-}",
                          "trace": "%X{X-B3-TraceId:-}",
                          "span": "%X{X-B3-SpanId:-}",
                          "exportable": "%X{X-Span-Export:-}",
                          "pid": "${PID:-}",
                          "thread": "%thread",
                          "class": "%logger{40}",
                          "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <root level="INFO">
        <!--<appender-ref ref="console"/>-->
        <appender-ref ref="logstash"/>
        <!--<appender-ref ref="flatfile"/>-->
    </root>
</configuration>
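With this encoder, each request produces one JSON line in ${LOG_FILE}.json. A sketch of what a single line would look like — all values here are illustrative, not captured output; the @timestamp field name is the encoder's default:

{"@timestamp":"2018-01-08T07:35:26.355Z","severity":"INFO","service":"serverone","trace":"45fc228abc4a4edf","span":"45fc228abc4a4edf","exportable":"true","pid":"1234","thread":"http-nio-8082-exec-1","class":"c.e.demo.SleuthController","rest":"handling /feignHello/lisi"}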
Add the following to the startup class:
@SpringBootApplication // already includes @EnableAutoConfiguration, so it need not be repeated
@EnableFeignClients // enable Feign client support
public class ServerOneApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServerOneApplication.class, args);
    }
}
The RestTemplate configuration class is shown below. The RestTemplate must be exposed as a Spring bean so that Sleuth can instrument it with its trace interceptor; an instance created inline with new would not propagate the trace headers.
@Configuration
public class RestConfiguration {

    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
The Feign client is as follows. Since no service registry is used in this example, the client points at service two's address directly via the url attribute:
@FeignClient(name = "sleuthone", url = "http://localhost:8888")
public interface SleuthService {

    @RequestMapping("/sayHello/{name}")
    String sayHello(@PathVariable(name = "name") String name);
}
The controller that drives the calls:
@RestController
public class SleuthController {

    @Autowired
    private RestTemplate restTemplate;

    @Autowired
    private SleuthService sleuthService;

    // call service two through the instrumented RestTemplate
    @RequestMapping("/restHello/{name}")
    public String restHello(@PathVariable String name) {
        return "rest " + restTemplate.getForObject("http://localhost:8888/sayHello/" + name, String.class);
    }

    // call service two through the Feign client
    @RequestMapping("/feignHello/{name}")
    public String feignHello(@PathVariable String name) {
        return "feign " + sleuthService.sayHello(name);
    }
}
Service two is basically the same as service one; the difference is that service two does not need Feign. Its pom is:
<dependencies>
    <!-- Spring Boot -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Trace reporting -->
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-sleuth</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream-binder-kafka</artifactId>
    </dependency>
    <!-- Logback JSON output; needed because the Kafka binding is attached to logging events -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>4.6</version>
    </dependency>
</dependencies>
Everything else is the same; service two only exposes a sayHello endpoint.
@RestController
public class SleuthController {

    @RequestMapping("/sayHello/{name}")
    public String sayHello(@PathVariable String name) {
        return "hello " + name;
    }
}
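Service two's application.yml is not listed here; a minimal sketch, assuming it mirrors service one's configuration, with the port and application name inferred from the trace data shown later (servertwo on 8888):

server:
  port: 8888
spring:
  application:
    name: servertwo
  sleuth:
    sampler:
      percentage: 1.0
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          zkNodes: localhost:2181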
Now visit each of the following:

http://localhost:8082/restHello/lisi
http://localhost:8082/feignHello/lisi
Once both return correctly, query Elasticsearch for the stored spans:
http://localhost:9200/zipkin*/_search?pretty
You should see data like the following:
{ "took" : 2, "timed_out" : false, "_shards" : { "total" : 2, "successful" : 2, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : 37, "max_score" : 1.0, "hits" : [ { "_index" : "zipkin:span-2018-01-08", "_type" : "span", "_id" : "AWDUsg4wSqtAFWoqJU1h", "_score" : 1.0, "_source" : { "traceId" : "45fc228abc4a4edf", "duration" : 4000, "shared" : true, "localEndpoint" : { "serviceName" : "servertwo", "ipv4" : "10.130.236.27", "port" : 8888 }, "timestamp_millis" : 1515396926392, "kind" : "SERVER", "name" : "http:/sayhello/lisi", "id" : "d852bf5a65f75acb", "parentId" : "45fc228abc4a4edf", "timestamp" : 1515396926392000, "tags" : { "mvc.controller.class" : "SleuthController", "mvc.controller.method" : "sayHello", "spring.instance_id" : "01C702601479820.corp.haier.com:servertwo:8888" } } }, { "_index" : "zipkin:span-2018-01-08", "_type" : "span", "_id" : "AWDUsg8wSqtAFWoqJU1i", "_score" : 1.0, "_source" : { "traceId" : "45fc228abc4a4edf", "duration" : 36000, "localEndpoint" : { "serviceName" : "serverone", "ipv4" : "10.130.236.27", "port" : 8082 }, "timestamp_millis" : 1515396926361, "kind" : "CLIENT", "name" : "http:/sayhello/lisi", "id" : "d852bf5a65f75acb", "parentId" : "45fc228abc4a4edf", "timestamp" : 1515396926361000, "tags" : { "http.host" : "localhost", "http.method" : "GET", "http.path" : "/sayHello/lisi", "http.url" : "http://localhost:8888/sayHello/lisi", "spring.instance_id" : "01C702601479820.corp.haier.com:serverone:8082" } } }, { "_index" : "zipkin:span-2018-01-08", "_type" : "span", "_id" : "AWDUsg8wSqtAFWoqJU1j", "_score" : 1.0, "_source" : { "traceId" : "45fc228abc4a4edf", "duration" : 56696, "localEndpoint" : { "serviceName" : "serverone", "ipv4" : "10.130.236.27", "port" : 8082 }, "timestamp_millis" : 1515396926355, "kind" : "SERVER", "name" : "http:/feignhello/lisi", "id" : "45fc228abc4a4edf", "timestamp" : 1515396926355000, "tags" : { "mvc.controller.class" : "SleuthController", "mvc.controller.method" : "feignHello", "spring.instance_id" : "01C702601479820.corp.haier.com:serverone:8082" } } } ] } }
Note that duration is reported in microseconds, so the 56696 above is roughly 56.7 ms (56696 µs ÷ 1000).
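To narrow the query to a single call chain, standard Elasticsearch query-string syntax can be used against the traceId field (the value below is taken from the output above):

curl "http://localhost:9200/zipkin*/_search?q=traceId:45fc228abc4a4edf&pretty"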
Two problems encountered are worth noting:

1. Data does not enter the Kafka channel. This was caused by not configuring log output to a log file; analysis of the source code shows that the Kafka binding is attached to logging events. A quick way to check the topic itself is shown in the sketch after this list.
2. Kafka is down or cannot consume when the service starts. See the previous post on the Kafka consumption problem; the main cause there was the JDK version.
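For the first problem, one way to confirm whether spans are reaching Kafka at all is to tail the topic with the console consumer that ships with Kafka. A sketch, assuming the Sleuth stream's default destination name sleuth (adjust the topic name if you have overridden it):

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sleuth --from-beginning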