Please credit the source when reposting:
blog.csdn.net/forezp/arti…
This article is from Fang Zhipeng's blog.
A microservices architecture is a distributed architecture that divides service units along business lines, and a distributed system often has many such units. With so many service units and so much business complexity, errors and exceptions are hard to pin down: a single request may call many services, and the complexity of those internal calls makes problems difficult to locate. A microservices architecture therefore needs distributed tracing to follow which services take part in a request and in what order, so that every step of each request is clearly visible and problems can be located quickly.
For example, in a microservices system a request from a user first reaches front end A (such as the UI), then travels through remote calls to middleware B and C (such as a load balancer or gateway), and finally reaches back-end services D and E, which run a series of business computations and return the result to the user. How do we record the data for the whole journey of a request that passes through so many services? This is what distributed tracing is for.
Google's Dapper tracing system, described in the 2010 paper "Dapper, a Large-Scale Distributed Systems Tracing Infrastructure", is the benchmark and theoretical foundation for tracing implementations in the industry and is well worth studying.
Well-known tracing systems today include Google's Dapper, Twitter's Zipkin and Alibaba's EagleEye, all of them excellent tracing components.
This article explains how to integrate Zipkin with Spring Cloud Sleuth. The integration is very simple: you only need to add the right dependencies and configuration.
Spring Cloud Sleuth adopts the terminology of Google's Dapper, such as trace (the full call chain of one request) and span (a single unit of work within that chain).
The example in this article consists of four projects organized as Maven modules. Create a parent Maven project that pins the Spring Boot version to 1.5.3 and the Spring Cloud version to Dalston.RELEASE. It contains an eureka-server module acting as the service registry (its creation is not repeated here); a zipkin-server module acting as the tracing server, responsible for storing trace data; a gateway-service module acting as the service gateway, responsible for forwarding requests, which is also a tracing client that produces trace data and uploads it to zipkin-server; and a user-service module, an application service exposing a REST API, which likewise acts as a tracing client that produces trace data.
Create a new module named zipkin-server whose pom inherits from the parent project's pom. As a Eureka client it needs the Eureka starter spring-cloud-starter-eureka, plus the zipkin-server dependency and the zipkin-autoconfigure-ui dependency; the latter two provide the Zipkin functionality and the Zipkin web UI. The code is as follows:
<parent>
    <groupId>com.forezp</groupId>
    <artifactId>sleuth</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-eureka</artifactId>
    </dependency>
    <dependency>
        <groupId>io.zipkin.java</groupId>
        <artifactId>zipkin-server</artifactId>
    </dependency>
    <dependency>
        <groupId>io.zipkin.java</groupId>
        <artifactId>zipkin-autoconfigure-ui</artifactId>
    </dependency>
</dependencies>
Annotate the startup class ZipkinServerApplication with @EnableZipkinServer to enable the Zipkin server and with @EnableEurekaClient to start the Eureka client. The code is as follows:
@SpringBootApplication
@EnableEurekaClient
@EnableZipkinServer
public class ZipkinServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ZipkinServerApplication.class, args);
    }
}
In the configuration file application.yml, set the application name to zipkin-server, the port to 9411, and the service registry address to http://localhost:8761/eureka/:
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
server:
  port: 9411
spring:
  application:
    name: zipkin-server
Create another module under the parent Maven project named user-service, an application service exposing a REST API. Its pom inherits from the parent pom and adds the Eureka starter spring-cloud-starter-eureka, the web starter spring-boot-starter-web, and the Zipkin starter spring-cloud-starter-zipkin:
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-eureka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zipkin</artifactId>
        <version>RELEASE</version>
    </dependency>
</dependencies>
In the configuration file application.yml, set the application name to user-service, the port to 8762, the service registry address to http://localhost:8761/eureka/, and the Zipkin server address to http://localhost:9411. spring.sleuth.sampler.percentage is set to 1.0, i.e. 100% of the trace data is uploaded to the Zipkin server; by default this value is 0.1. The configuration is as follows (a Java-based alternative to the percentage property is sketched right after it):
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
server:
  port: 8762
spring:
  application:
    name: user-service
  zipkin:
    base-url: http://localhost:9411
  sleuth:
    sampler:
      percentage: 1.0
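As an aside, on the Sleuth 1.x line used here (Dalston) the sampling rate can also be set in Java instead of the percentage property. A minimal sketch, assuming a hypothetical SamplerConfiguration class of our own:

import org.springframework.cloud.sleuth.Sampler;
import org.springframework.cloud.sleuth.sampler.AlwaysSampler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical configuration class, not part of the original projects.
// Registering an AlwaysSampler bean has the same effect as
// spring.sleuth.sampler.percentage=1.0: every span is exported to the Zipkin server.
@Configuration
public class SamplerConfiguration {

    @Bean
    public Sampler defaultSampler() {
        return new AlwaysSampler();
    }
}

Either way, sampling everything is convenient for a demo but is usually too expensive for a production system under load.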
In the UserController class create a "/user/hi" API endpoint:
@RestController
@RequestMapping("/user")
public class UserController {

    @GetMapping("/hi")
    public String hi() {
        return "I'm forezp";
    }
}
Finally, as a Eureka client, the startup class UserServiceApplication needs the @EnableEurekaClient annotation.
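The article does not list this class, but based on the other modules it would look roughly like the following sketch (the class name comes from the text above; the annotations are as described):

@SpringBootApplication
@EnableEurekaClient
public class UserServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(UserServiceApplication.class, args);
    }
}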
Create a new project named gateway-service. It acts as the service gateway, forwarding requests to user-service; as a Zipkin client it uploads trace data to the Zipkin server; and it is also a Eureka client. Besides inheriting the parent Maven pom, its pom needs the following dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-eureka</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zuul</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-zipkin</artifactId>
        <version>RELEASE</version>
    </dependency>
</dependencies>
In application.yml, set the application name to gateway-service, the port to 5000, the service registry address to http://localhost:8761/eureka/, and the Zipkin server address to http://localhost:9411, and forward requests whose URI starts with "/user-api/**" to the service named user-service:
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
server:
  port: 5000
spring:
  application:
    name: gateway-service
  sleuth:
    sampler:
      percentage: 1.0
  zipkin:
    base-url: http://localhost:9411
zuul:
  routes:
    api-a:
      path: /user-api/**
      serviceId: user-service
Annotate the startup class GatewayServiceApplication with @EnableEurekaClient to enable the Eureka client and with @EnableZuulProxy to enable the Zuul proxy:
@SpringBootApplication
@EnableZuulProxy
@EnableEurekaClient
public class GatewayServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(GatewayServiceApplication.class, args);
    }
}
The project setup is now complete. Start eureka-server, zipkin-server, user-service and gateway-service in that order, then open http://localhost:5000/user-api/user/hi in a browser; it displays:
I'm forezp
Visit http://localhost:9411 to open the Zipkin UI, shown in Figure 1:
This page is used to look up service calls; you can filter by service name, start time, end time, request duration and other conditions. Click the "Find Traces" button and the page shown in the next figure appears; it shows the details of the calls, such as when each service was called, how long each call took, and the call chain.
Click the "Dependencies" button to view the dependencies between services. In this example gateway-service forwards requests to user-service, and their dependency relationship is shown in the figure:
Now suppose we want to record the operator in the trace data. This is implemented in gateway-service: create a ZuulFilter of type "post" with order 900 and filtering enabled. In the filter logic, use Tracer's addTag method to attach custom data; in this example we add the operator of the trace. The filter can also obtain the traceId of the current trace, which uniquely identifies the trace data and can be written to the logs to make later lookups easier.
@Component
public class LoggerFilter extends ZuulFilter {

    @Autowired
    Tracer tracer;

    @Override
    public String filterType() {
        // Run after the request has been routed.
        return FilterConstants.POST_TYPE;
    }

    @Override
    public int filterOrder() {
        return 900;
    }

    @Override
    public boolean shouldFilter() {
        // Intercept every request.
        return true;
    }

    @Override
    public Object run() {
        // Attach a custom tag to the current span.
        tracer.addTag("operator", "forezp");
        // Print the current traceId; in a real application write it to the log instead.
        System.out.print(tracer.getCurrentSpan().traceIdString());
        return null;
    }
}
In the example above, gateway-service uploads the collected trace data to zipkin-server over HTTP. Spring Cloud Sleuth also supports sending it through a message broker; this section uses RabbitMQ. First modify zipkin-server: in its pom, remove the zipkin-server dependency and add spring-cloud-sleuth-zipkin-stream and spring-cloud-starter-stream-rabbit:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
In application.yml add the RabbitMQ configuration, including host, port, username and password:
spring:
  rabbitmq:
    host: localhost
    port: 5672
    username: guest
    password: guest
Annotate the startup class ZipkinServerApplication with @EnableZipkinStreamServer to enable the Zipkin stream server:
@SpringBootApplication
@EnableEurekaClient
@EnableZipkinStreamServer
public class ZipkinServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ZipkinServerApplication.class, args);
    }
}
Now modify the Zipkin clients (gateway-service and user-service): in their pom files replace the spring-cloud-starter-zipkin dependency with spring-cloud-sleuth-zipkin-stream and spring-cloud-starter-stream-rabbit:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>
Also add the same RabbitMQ configuration to their application.yml as in the zipkin-server project.
With that, the trace data is uploaded through the RabbitMQ message broker instead of over HTTP.
In the examples above, the Zipkin server keeps its data in memory, so all trace data is lost once the program restarts. How can the trace data be persisted? Zipkin supports MySQL, Elasticsearch and Cassandra as storage back ends. This section covers MySQL storage; the next covers Elasticsearch.
First, add the MySQL connector dependency mysql-connector-java and the JDBC starter spring-boot-starter-jdbc to the zipkin-server project:
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jdbc</artifactId>
</dependency>
In application.yml configure the data source, including the database URL, username, password and driver class, and set zipkin.storage.type to mysql:
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/spring-cloud-zipkin?useUnicode=true&characterEncoding=utf8&useSSL=false
    username: root
    password: 123456
    driver-class-name: com.mysql.jdbc.Driver
zipkin:
  storage:
    type: mysql
You also need to initialize the schema in the MySQL database; the script is available at github.com/openzipkin/…
CREATE TABLE IF NOT EXISTS zipkin_spans (
`trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
`trace_id` BIGINT NOT NULL,
`id` BIGINT NOT NULL,
`name` VARCHAR(255) NOT NULL,
`parent_id` BIGINT,
`debug` BIT(1),
`start_ts` BIGINT COMMENT 'Span.timestamp(): epoch micros used for endTs query and to implement TTL',
`duration` BIGINT COMMENT 'Span.duration(): micros used for minDuration and maxDuration query'
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;
ALTER TABLE zipkin_spans ADD UNIQUE KEY(`trace_id_high`, `trace_id`, `id`) COMMENT 'ignore insert on duplicate';
ALTER TABLE zipkin_spans ADD INDEX(`trace_id_high`, `trace_id`, `id`) COMMENT 'for joining with zipkin_annotations';
ALTER TABLE zipkin_spans ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTracesByIds';
ALTER TABLE zipkin_spans ADD INDEX(`name`) COMMENT 'for getTraces and getSpanNames';
ALTER TABLE zipkin_spans ADD INDEX(`start_ts`) COMMENT 'for getTraces ordering and range';
CREATE TABLE IF NOT EXISTS zipkin_annotations (
`trace_id_high` BIGINT NOT NULL DEFAULT 0 COMMENT 'If non zero, this means the trace uses 128 bit traceIds instead of 64 bit',
`trace_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.trace_id',
`span_id` BIGINT NOT NULL COMMENT 'coincides with zipkin_spans.id',
`a_key` VARCHAR(255) NOT NULL COMMENT 'BinaryAnnotation.key or Annotation.value if type == -1',
`a_value` BLOB COMMENT 'BinaryAnnotation.value(), which must be smaller than 64KB',
`a_type` INT NOT NULL COMMENT 'BinaryAnnotation.type() or -1 if Annotation',
`a_timestamp` BIGINT COMMENT 'Used to implement TTL; Annotation.timestamp or zipkin_spans.timestamp',
`endpoint_ipv4` INT COMMENT 'Null when Binary/Annotation.endpoint is null',
`endpoint_ipv6` BINARY(16) COMMENT 'Null when Binary/Annotation.endpoint is null, or no IPv6 address',
`endpoint_port` SMALLINT COMMENT 'Null when Binary/Annotation.endpoint is null',
`endpoint_service_name` VARCHAR(255) COMMENT 'Null when Binary/Annotation.endpoint is null'
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;
ALTER TABLE zipkin_annotations ADD UNIQUE KEY(`trace_id_high`, `trace_id`, `span_id`, `a_key`, `a_timestamp`) COMMENT 'Ignore insert on duplicate';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`, `span_id`) COMMENT 'for joining with zipkin_spans';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id_high`, `trace_id`) COMMENT 'for getTraces/ByIds';
ALTER TABLE zipkin_annotations ADD INDEX(`endpoint_service_name`) COMMENT 'for getTraces and getServiceNames';
ALTER TABLE zipkin_annotations ADD INDEX(`a_type`) COMMENT 'for getTraces';
ALTER TABLE zipkin_annotations ADD INDEX(`a_key`) COMMENT 'for getTraces';
ALTER TABLE zipkin_annotations ADD INDEX(`trace_id`, `span_id`, `a_key`) COMMENT 'for dependencies job';
CREATE TABLE IF NOT EXISTS zipkin_dependencies (
`day` DATE NOT NULL,
`parent` VARCHAR(255) NOT NULL,
`child` VARCHAR(255) NOT NULL,
`call_count` BIGINT,
`error_count` BIGINT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED CHARACTER SET=utf8 COLLATE utf8_general_ci;
ALTER TABLE zipkin_dependencies ADD UNIQUE KEY(`day`, `parent`, `child`);
Storing trace data in MySQL is clearly not a good fit under high concurrency; in that case Elasticsearch can be used instead. You need to install Elasticsearch and Kibana (used in the next section) yourself; the download address is www.elastic.co/products/el…
The installation process is covered in my article: blog.csdn.net/forezp/arti…
This section builds on the example from two sections back (the HTTP-based one). First add the zipkin dependency and the zipkin-autoconfigure-storage-elasticsearch-http dependency to the pom:
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin</artifactId>
    <version>1.28.0</version>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-storage-elasticsearch-http</artifactId>
    <version>1.28.0</version>
</dependency>
In application.yml add the Zipkin configuration: set the storage type to elasticsearch and the StorageComponent to elasticsearch, then configure Elasticsearch itself, including hosts (more than one can be listed, separated by commas) and the index name zipkin:
zipkin:
  storage:
    type: elasticsearch
    StorageComponent: elasticsearch
    elasticsearch:
      cluster: elasticsearch
      max-requests: 30
      index: zipkin
      index-shards: 3
      index-replicas: 1
      hosts: localhost:9200
The previous section showed how to store trace data in Elasticsearch. Elasticsearch can be combined with Kibana to display that data. After installing and starting Kibana, it reads data from the local Elasticsearch on port 9200 by default and serves its UI on port 5601. Visit http://localhost:5601 and the following page is displayed:
On that page click "Management", then "Add New" to add an index pattern. In the previous section the index for the trace data written to Elasticsearch was configured as "zipkin", so enter "zipkin-*" here and click "Create".
Once the index pattern is created, click Discover and the trace data is displayed on the page.
Source code of the basic project:
Project using RabbitMQ as the transport:
Project using MySQL storage:
Project using Elasticsearch storage: