1. Official Documentation Translation

Router and Filter: Zuul
Routing is an integral part of a microservice architecture. For example, / may be mapped to your web application, /api/users is mapped to the user service and /api/shop is mapped to the shop service. Zuul is a JVM-based router and server-side load balancer by Netflix.
Netflix uses Zuul for the following:
· Authentication
· Insights
· Stress Testing
· Canary Testing
· Dynamic Routing
· Service Migration
· Load Shedding
· Security
· Static Response Handling
· Active/Active Traffic Management
Zuul’s rule engine allows rules and filters to be written in essentially any JVM language, with built-in support for Java and Groovy.
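As a rough sketch of how typed, ordered filters compose, consider the following toy model in plain Java. This is for illustration only: real Zuul filters extend com.netflix.zuul.ZuulFilter and share state through a RequestContext rather than passing the request value along.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FilterChainDemo {

    // Stand-in for a Zuul-style filter: a type ("pre", "route" or "post"),
    // an order within that type, and a body that processes the request.
    public interface Filter {
        String filterType();
        int filterOrder();
        String run(String request);
    }

    // Run the filters grouped by type (pre, then route, then post),
    // ordered by filterOrder() within each group.
    public static String process(String request, List<Filter> filters) {
        for (String phase : new String[] {"pre", "route", "post"}) {
            List<Filter> group = new ArrayList<>();
            for (Filter f : filters) {
                if (phase.equals(f.filterType())) {
                    group.add(f);
                }
            }
            group.sort(Comparator.comparingInt(Filter::filterOrder));
            for (Filter f : group) {
                request = f.run(request);
            }
        }
        return request;
    }
}
```

The point of the model is only the ordering contract: every "pre" filter runs before any "route" filter, which runs before any "post" filter, regardless of registration order.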
The configuration property zuul.max.host.connections has been replaced by two new properties, zuul.host.maxTotalConnections and zuul.host.maxPerRouteConnections, which default to 200 and 20 respectively.
Embedded Zuul Reverse Proxy
Spring Cloud has created an embedded Zuul proxy to ease the development of a very common use case where a UI application wants to proxy calls to one or more back end services. This feature is useful for a user interface to proxy to the backend services it requires, avoiding the need to manage CORS and authentication concerns independently for all the backends.
To enable it, annotate a Spring Boot main class with @EnableZuulProxy, and this forwards local calls to the appropriate service. By convention, a service with the ID "users" will receive requests from the proxy located at /users (with the prefix stripped). The proxy uses Ribbon to locate an instance to forward to via discovery, and all requests are executed in a hystrix command, so failures will show up in Hystrix metrics, and once the circuit is open the proxy will not try to contact the service.
The Zuul starter does not include a discovery client, so for routes based on service IDs you need to provide one of those on the classpath as well (e.g. Eureka is one choice).
To skip having a service automatically added, set zuul.ignored-services to a list of service ID patterns. If a service matches a pattern that is ignored, but is also included in the explicitly configured routes map, then it will be unignored. Example:
application.yml
zuul:
  ignoredServices: '*'
  routes:
    users: /myusers/**
In this example, all services are ignored except "users".
To augment or change the proxy routes, you can add external configuration like the following:
application.yml
zuul:
  routes:
    users: /myusers/**
This means that http calls to "/myusers" get forwarded to the "users" service (for example "/myusers/101" is forwarded to "/101").
To get more fine-grained control over a route you can specify the path and the serviceId independently:
application.yml
zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users_service
This means that http calls to "/myusers" get forwarded to the "users_service" service. The route has to have a "path" which can be specified as an ant-style pattern, so "/myusers/*" only matches one level, but "/myusers/**" matches hierarchically.
The location of the backend can be specified as either a "serviceId" (for a service from discovery) or a "url" (for a physical location), e.g.
application.yml
zuul:
  routes:
    users:
      path: /myusers/**
      url: http://example.com/users_service
These simple url-routes don’t get executed as a HystrixCommand, nor can you load-balance multiple URLs with Ribbon. To achieve this, specify a service-route and configure a Ribbon client for the serviceId (this currently requires disabling Eureka support in Ribbon: see above for more information), e.g.
application.yml
zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users

ribbon:
  eureka:
    enabled: false

users:
  ribbon:
    listOfServers: example.com,google.com
You can provide a convention between serviceId and routes using a regex mapper. It uses regular-expression named groups to extract variables from the serviceId and inject them into a route pattern.
ApplicationConfiguration.java
@Bean
public PatternServiceRouteMapper serviceRouteMapper() {
    return new PatternServiceRouteMapper(
        "(?<name>^.+)-(?<version>v.+$)",
        "${version}/${name}");
}
This means that a serviceId "myusers-v1" will be mapped to route "/v1/myusers/**". Any regular expression is accepted, but all named groups must be present in both servicePattern and routePattern. If servicePattern does not match a serviceId, the default behavior is used. In the example above, a serviceId "myusers" will be mapped to route "/myusers/**" (no version detected). This feature is disabled by default and only applies to discovered services.
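The mapping above can be reproduced with plain java.util.regex to see how the named groups travel from the serviceId into the route. This is a standalone illustration of the mechanism, not the PatternServiceRouteMapper implementation itself.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMapperDemo {

    // Same servicePattern as the example above; the extracted "name" and
    // "version" groups are substituted into the "${version}/${name}" route.
    private static final Pattern SERVICE_PATTERN =
            Pattern.compile("(?<name>^.+)-(?<version>v.+$)");

    public static String toRoute(String serviceId) {
        Matcher m = SERVICE_PATTERN.matcher(serviceId);
        if (!m.matches()) {
            // default behavior: no version detected
            return "/" + serviceId + "/**";
        }
        return "/" + m.group("version") + "/" + m.group("name") + "/**";
    }

    public static void main(String[] args) {
        System.out.println(toRoute("myusers-v1")); // /v1/myusers/**
        System.out.println(toRoute("myusers"));    // /myusers/**
    }
}
```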
To add a prefix to all mappings, set zuul.prefix to a value, such as /api. The proxy prefix is stripped from the request before the request is forwarded by default (switch this behaviour off with zuul.stripPrefix=false). You can also switch off the stripping of the service-specific prefix from individual routes, e.g.
application.yml
zuul:
  routes:
    users:
      path: /myusers/**
      stripPrefix: false
In this example, requests to "/myusers/101" will be forwarded to "/myusers/101" on the "users" service.
The zuul.routes entries actually bind to an object of type ZuulProperties. If you look at the properties of that object you will see that it also has a "retryable" flag. Set that flag to "true" to have the Ribbon client automatically retry failed requests (and if you need to, you can modify the parameters of the retry operations using the Ribbon client configuration).
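For example, a per-route retry flag might look like the following sketch (the Ribbon property names MaxAutoRetries and MaxAutoRetriesNextServer are standard Ribbon client settings; the values shown are illustrative):

```yaml
zuul:
  routes:
    users:
      path: /myusers/**
      serviceId: users
      retryable: true

# illustrative Ribbon retry tuning for the "users" client
users:
  ribbon:
    MaxAutoRetries: 1
    MaxAutoRetriesNextServer: 2
```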
The X-Forwarded-Host header is added to the forwarded requests by default. To turn it off set zuul.addProxyHeaders = false. The prefix path is stripped by default, and the request to the backend picks up a header "X-Forwarded-Prefix" ("/myusers" in the examples above).
An application with @EnableZuulProxy could act as a standalone server if you set a default route ("/"); for example zuul.route.home: / would route all traffic (i.e. "/**") to the "home" service.
If more fine-grained ignoring is needed, you can specify specific patterns to ignore. These patterns are evaluated at the start of the route location process, which means prefixes should be included in the pattern to warrant a match. Ignored patterns span all services and supersede any other route specification.
application.yml
zuul:
  ignoredPatterns: /**/admin/**
  routes:
    users: /myusers/**
This means that all calls such as "/myusers/101" will be forwarded to "/101" on the "users" service. But calls including "/admin/" will not resolve.
Cookies and Sensitive Headers
It’s OK to share headers between services in the same system, but you probably don’t want sensitive headers leaking downstream into external servers. You can specify a list of ignored headers as part of the route configuration. Cookies play a special role because they have well-defined semantics in browsers, and they are always to be treated as sensitive. If the consumer of your proxy is a browser, then cookies for downstream services also cause problems for the user because they all get jumbled up (all downstream services look like they come from the same place).
If you are careful with the design of your services, for example if only one of the downstream services sets cookies, then you might be able to let them flow from the backend all the way up to the caller. Also, if your proxy sets cookies and all your back end services are part of the same system, it can be natural to simply share them (and for instance use Spring Session to link them up to some shared state). Other than that, any cookies that get set by downstream services are likely to be not very useful to the caller, so it is recommended that you make (at least) "Set-Cookie" and "Cookie" into sensitive headers for routes that are not part of your domain. Even for routes that are part of your domain, try to think carefully about what it means before allowing cookies to flow between them and the proxy.
The sensitive headers can be configured as a comma-separated list per route, e.g.
application.yml
zuul:
  routes:
    users:
      path: /myusers/**
      sensitiveHeaders: Cookie,Set-Cookie,Authorization
      url: https://downstream
Sensitive headers can also be set globally by setting zuul.sensitiveHeaders. If sensitiveHeaders is set on a route, this will override the global sensitiveHeaders setting.
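For example, a global default with a per-route override could be sketched as follows (the route name and header choices are illustrative):

```yaml
zuul:
  # global default applied to every route
  sensitiveHeaders: Cookie,Set-Cookie
  routes:
    users:
      path: /myusers/**
      # overrides the global list for this route only
      sensitiveHeaders: Authorization
```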
This is the default value for sensitiveHeaders, so you don’t need to set it unless you want it to be different. N.B. this is new in Spring Cloud Netflix 1.1 (in 1.0 the user had no control over headers and all cookies flow in both directions).
In addition to the per-route sensitive headers, you can set a global value for zuul.ignoredHeaders for values that should be discarded (both request and response) during interactions with downstream services. By default these are empty if Spring Security is not on the classpath; otherwise they are initialized to a set of well-known "security" headers (e.g. involving caching) as specified by Spring Security. The assumption in this case is that the downstream services might add these headers too, and we want the values from the proxy.
The Routes Endpoint
If you are using @EnableZuulProxy with the Spring Boot Actuator you will enable (by default) an additional endpoint, available via HTTP as /routes. A GET to this endpoint will return a list of the mapped routes. A POST will force a refresh of the existing routes (e.g. in case there have been changes in the service catalog).
The routes should respond automatically to changes in the service catalog, but the POST to /routes is a way to force the change to happen immediately.
Strangulation Patterns and Local Forwards
A common pattern when migrating an existing application or API is to "strangle" old endpoints, slowly replacing them with different implementations. The Zuul proxy is a useful tool for this because you can use it to handle all traffic from clients of the old endpoints, but redirect some of the requests to new ones.
Example configuration:
application.yml
zuul:
  routes:
    first:
      path: /first/**
      url: http://first.example.com
    second:
      path: /second/**
      url: forward:/second
    third:
      path: /third/**
      url: forward:/3rd
    legacy:
      path: /**
      url: http://legacy.example.com
In this example we are strangling the "legacy" app, which is mapped to all requests that do not match one of the other patterns. Paths in /first/** have been extracted into a new service with an external URL. And paths in /second/** are forwarded so they can be handled locally, e.g. with a normal Spring @RequestMapping. Paths in /third/** are also forwarded, but with a different prefix (i.e. /third/foo is forwarded to /3rd/foo).
The ignored patterns aren’t completely ignored, they just aren’t handled by the proxy (so they are also effectively forwarded locally).
Uploading Files through Zuul
If you @EnableZuulProxy you can use the proxy paths to upload files, and it should just work as long as the files are small. For large files there is an alternative path which bypasses the Spring DispatcherServlet (to avoid multipart processing) in "/zuul/*". I.e. if zuul.routes.customers=/customers/** then you can POST large files to "/zuul/customers/*". The servlet path is externalized via zuul.servletPath. Extremely large files will also require elevated timeout settings if the proxy route takes you through a Ribbon load balancer, e.g.
application.yml
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000
ribbon:
  ConnectTimeout: 3000
  ReadTimeout: 60000
Note that for streaming to work with large files, you need to use chunked encoding in the request (which some browsers do not do by default). E.g. on the command line:
$ curl -v -H "Transfer-Encoding: chunked" \
  -F "file=@mylarge.iso" localhost:9999/zuul/simple/file
Plain Embedded Zuul
You can also run a Zuul server without the proxying, or switch on parts of the proxying platform selectively, if you use @EnableZuulServer (instead of @EnableZuulProxy). Any beans that you add to the application of type ZuulFilter will be installed automatically, as they are with @EnableZuulProxy, but without any of the proxy filters being added automatically.
In this case the routes into the Zuul server are still specified by configuring "zuul.routes.*", but there is no service discovery and no proxying, so the "serviceId" and "url" settings are ignored. For example:
application.yml
zuul:
  routes:
    api: /api/**
This maps all paths in "/api/**" to the Zuul filter chain.
Disable Zuul Filters
Zuul for Spring Cloud comes with a number of ZuulFilter beans enabled by default in both proxy and server mode. See the zuul filters package for the possible filters that are enabled. If you want to disable one, simply set zuul.<SimpleClassName>.<filterType>.disable=true. By convention, the package after filters is the Zuul filter type. For example, to disable org.springframework.cloud.netflix.zuul.filters.post.SendResponseFilter, set zuul.SendResponseFilter.post.disable=true.
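In application.yml form, the SendResponseFilter example above would be sketched as:

```yaml
zuul:
  SendResponseFilter:
    post:
      disable: true
```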
Polyglot Support with Sidecar
Do you have non-JVM languages you want to take advantage of Eureka, Ribbon and Config Server? The Spring Cloud Netflix Sidecar was inspired by Netflix Prana. It includes a simple http api to get all of the instances (i.e. host and port) for a given service. You can also proxy service calls through an embedded Zuul proxy which gets its route entries from Eureka. The Spring Cloud Config Server can be accessed directly via host lookup or through the Zuul Proxy. The non-JVM app should implement a health check so the Sidecar can report to Eureka if the app is up or down.
To enable the Sidecar, create a Spring Boot application with @EnableSidecar. This annotation includes @EnableCircuitBreaker, @EnableDiscoveryClient, and @EnableZuulProxy. Run the resulting application on the same host as the non-JVM application.
To configure the Sidecar, add sidecar.port and sidecar.health-uri to application.yml. The sidecar.port property is the port the non-JVM app is listening on. This is so the Sidecar can properly register the app with Eureka. The sidecar.health-uri is a uri accessible on the non-JVM app that mimics a Spring Boot health indicator. It should return a json document like the following:
health-uri-document
{
  "status": "UP"
}
Here is an example application.yml for a Sidecar application:
application.yml
server:
  port: 5678

spring:
  application:
    name: sidecar

sidecar:
  port: 8000
  health-uri: http://localhost:8000/health.json
The api for the DiscoveryClient.getInstances() method is /hosts/{serviceId}. Here is an example response for /hosts/customers that returns two instances on different hosts. This api is accessible to the non-JVM app (if the sidecar is on port 5678) at http://localhost:5678/hosts/{serviceId}.
/hosts/customers
[
  {
    "host": "myhost",
    "port": 9000,
    "uri": "http://myhost:9000",
    "serviceId": "CUSTOMERS",
    "secure": false
  },
  {
    "host": "myhost2",
    "port": 9000,
    "uri": "http://myhost2:9000",
    "serviceId": "CUSTOMERS",
    "secure": false
  }
]
The Zuul proxy automatically adds routes for each service known in Eureka to /<serviceId>, so the customers service is available at /customers. The non-JVM app can access the customers service via http://localhost:5678/customers (assuming the sidecar is listening on port 5678).
If the Config Server is registered with Eureka, the non-JVM application can access it via the Zuul proxy. If the serviceId of the Config Server is configserver and the Sidecar is on port 5678, then it can be accessed at http://localhost:5678/configserver.
A non-JVM app can take advantage of the Config Server’s ability to return YAML documents. For example, a call to http://sidecar.local.spring.io:5678/configserver/default-master.yml might result in a YAML document like the following:
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
password: password
info:
  description: Spring Cloud Samples
  url: https://github.com/spring-cloud-samples
RxJava with Spring MVC
Spring Cloud Netflix includes RxJava.
RxJava is a Java VM implementation of Reactive Extensions: a library for composing asynchronous and event-based programs by using observable sequences.
Spring Cloud Netflix provides support for returning rx.Single objects from Spring MVC Controllers. It also supports using rx.Observable objects for Server-sent events (SSE). This can be very convenient if your internal APIs are already built using RxJava (see Feign Hystrix Support for examples).
Here are some examples of using rx.Single:
@RequestMapping(method = RequestMethod.GET, value = "/single")
public Single<String> single() {
    return Single.just("single value");
}

@RequestMapping(method = RequestMethod.GET, value = "/singleWithResponse")
public ResponseEntity<Single<String>> singleWithResponse() {
    return new ResponseEntity<>(Single.just("single value"), HttpStatus.NOT_FOUND);
}

@RequestMapping(method = RequestMethod.GET, value = "/throw")
public Single<Object> error() {
    return Single.error(new RuntimeException("Unexpected"));
}
If you have an Observable, rather than a single, you can use .toSingle() or .toList().toSingle(). Here are some examples:
@RequestMapping(method = RequestMethod.GET, value = "/single")
public Single<String> single() {
    return Observable.just("single value").toSingle();
}

@RequestMapping(method = RequestMethod.GET, value = "/multiple")
public Single<List<String>> multiple() {
    return Observable.just("multiple", "values").toList().toSingle();
}

@RequestMapping(method = RequestMethod.GET, value = "/responseWithObservable")
public ResponseEntity<Single<String>> responseWithObservable() {
    Observable<String> observable = Observable.just("single value");
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(APPLICATION_JSON_UTF8);
    return new ResponseEntity<>(observable.toSingle(), headers, HttpStatus.CREATED);
}

@RequestMapping(method = RequestMethod.GET, value = "/timeout")
public Observable<String> timeout() {
    return Observable.timer(1, TimeUnit.MINUTES).map(new Func1<Long, String>() {
        @Override
        public String call(Long aLong) {
            return "single value";
        }
    });
}
If you have a streaming endpoint and client, SSE could be an option. To convert rx.Observable to a Spring SseEmitter use RxResponse.sse(). Here are some examples:
@RequestMapping(method = RequestMethod.GET, value = "/sse")
public SseEmitter single() {
    return RxResponse.sse(Observable.just("single value"));
}

@RequestMapping(method = RequestMethod.GET, value = "/messages")
public SseEmitter messages() {
    return RxResponse.sse(Observable.just("message 1", "message 2", "message 3"));
}

@RequestMapping(method = RequestMethod.GET, value = "/events")
public SseEmitter event() {
    return RxResponse.sse(APPLICATION_JSON_UTF8, Observable.just(
        new EventDto("Spring io", getDate(2016, 5, 19)),
        new EventDto("SpringOnePlatform", getDate(2016, 8, 1))
    ));
}
Metrics: Spectator, Servo, and Atlas
When used together, Spectator/Servo and Atlas provide a near real-time operational insight platform.
Spectator and Servo are Netflix’s metrics collection libraries. Atlas is a Netflix metrics backend to manage dimensional time series data.
Servo served Netflix for several years and is still usable, but is gradually being phased out in favor of Spectator, which is only designed to work with Java 8. Spring Cloud Netflix provides support for both, but Java 8-based applications are encouraged to use Spectator.
Dimensional vs. Hierarchical Metrics
Spring Boot Actuator metrics are hierarchical, and metrics are separated only by name. These names often follow a naming convention that embeds key/value attribute pairs (dimensions) into the name, separated by periods. Consider the following metrics for two endpoints, root and star-star:
{
  "counter.status.200.root": 20,
  "counter.status.400.root": 3,
  "counter.status.200.star-star": 5
}
The first metric gives us a normalized count of successful requests against the root endpoint per unit of time. But what if the system had 20 endpoints and you want to get a count of successful requests against all the endpoints? Some hierarchical metrics backends would allow you to specify a wildcard such as counter.status.200.* that would read all 20 metrics and aggregate the results. Alternatively, you could provide a HandlerInterceptorAdapter that intercepts and records a metric like counter.status.200.all for all successful requests irrespective of the endpoint, but now you must write 20+1 different metrics. Similarly, if you want to know the total number of successful requests for all endpoints in the service, you could specify a wildcard such as counter.status.2*.*.
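The bookkeeping problem is easy to see in a few lines of plain Java: with hierarchical names, any aggregate query is a string match over every metric name. The names and counts below are taken from the example above; the prefix-scan backend is a simplification for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HierarchicalAggregationDemo {

    // A wildcard query such as counter.status.200.* boils down to a
    // prefix scan over every metric name in the backend.
    public static int sumByPrefix(Map<String, Integer> metrics, String prefix) {
        int total = 0;
        for (Map.Entry<String, Integer> entry : metrics.entrySet()) {
            if (entry.getKey().startsWith(prefix)) {
                total += entry.getValue();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> metrics = new LinkedHashMap<>();
        metrics.put("counter.status.200.root", 20);
        metrics.put("counter.status.400.root", 3);
        metrics.put("counter.status.200.star-star", 5);
        // all successful requests, regardless of endpoint
        System.out.println(sumByPrefix(metrics, "counter.status.200.")); // 25
    }
}
```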
Even in the presence of wildcarding support on a hierarchical metrics backend, naming consistency can be difficult. Specifically, the position of these tags in the name string can slip with time, breaking queries. For example, suppose we add an additional dimension to the hierarchical metrics above for HTTP method. Then counter.status.200.root becomes counter.status.200.method.get.root, etc. Our counter.status.200.* suddenly no longer has the same semantic meaning. Furthermore, if the new dimension is not applied uniformly across the codebase, certain queries may become impossible. This can quickly get out of hand.
Netflix metrics are tagged (a.k.a. dimensional). Each metric has a name, but this single named metric can contain multiple statistics and 'tag' key/value pairs that allow more querying flexibility. In fact, the statistics themselves are recorded in a special tag.
Recorded with Netflix Servo or Spectator, a timer for the root endpoint described above contains 4 statistics per status code, where the count statistic is identical to Spring Boot Actuator’s counter. In the event that we have encountered an HTTP 200 and 400 thus far, there will be 8 available data points:
{
  "root(status=200,statistic=count)": 20,
  "root(status=200,statistic=max)": 0.7265630630000001,
  "root(status=200,statistic=totalOfSquares)": 0.04759702862580789,
  "root(status=200,statistic=totalTime)": 0.2093076914666667,
  "root(status=400,statistic=count)": 1,
  "root(status=400,statistic=max)": 0,
  "root(status=400,statistic=totalOfSquares)": 0,
  "root(status=400,statistic=totalTime)": 0
}
Default Metrics Collection
Without any additional dependencies or configuration, a Spring Cloud based service will autoconfigure a Servo MonitorRegistry and begin collecting metrics on every Spring MVC request. By default, a Servo timer with the name rest will be recorded for each MVC request, which is tagged with:
1. HTTP method
2. HTTP status (e.g. 200, 400, 500)
3. URI (or "root" if the URI is empty), sanitized for Atlas
4. The exception class name, if the request handler threw an exception
5. The caller, if a request header with a key matching netflix.metrics.rest.callerHeader is set on the request. There is no default key for netflix.metrics.rest.callerHeader; you must add it to your application properties if you wish to collect caller information.
Set the netflix.metrics.rest.metricName property to change the name of the metric from rest to a name you provide.
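For example (the replacement metric name is illustrative):

```yaml
netflix:
  metrics:
    rest:
      metricName: http-requests   # recorded instead of the default "rest"
```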
If Spring AOP is enabled and org.aspectj:aspectjweaver is present on your runtime classpath, Spring Cloud will also collect metrics on every client call made with RestTemplate. A Servo timer with the name of restclient will be recorded for each request, which is tagged with:
1. HTTP method
2. HTTP status (e.g. 200, 400, 500), "CLIENT_ERROR" if the response returned null, or "IO_ERROR" if an IOException occurred during the execution of the RestTemplate method
3. URI, sanitized for Atlas
4. Client name
Metrics Collection: Spectator
To enable Spectator metrics, include a dependency on spring-cloud-starter-spectator:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-spectator</artifactId>
</dependency>
In Spectator parlance, a meter is a named, typed, and tagged configuration, while a metric represents the value of a given meter at a point in time. Spectator meters are created and controlled by a registry, which currently has several different implementations. Spectator provides four meter types: counter, timer, gauge, and distribution summary.
Spring Cloud Spectator integration configures an injectable com.netflix.spectator.api.Registry instance for you. Specifically, it configures a ServoRegistry instance in order to unify the collection of REST metrics and the exporting of metrics to the Atlas backend under a single Servo API. Practically, this means that your code may use a mixture of Servo monitors and Spectator meters; both will be scooped up by Spring Boot Actuator MetricReader instances, and both will be shipped to the Atlas backend.
Spectator Counter
A counter is used to measure the rate at which some event is occurring.
// create a counter with a name and a set of tags
Counter counter = registry.counter("counterName", "tagKey1", "tagValue1", ...);
counter.increment(); // increment when an event occurs
counter.increment(10); // increment by a discrete amount
The counter records a single time-normalized statistic.
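"Time-normalized" means the backend reports the counter as a rate over each polling interval rather than as a raw count. A minimal sketch of that idea, using a hypothetical RateCounter class (illustrative only, not part of the Spectator API):

```java
// Illustrative sketch: a simplified model of how a backend such as Atlas
// time-normalizes a counter into a per-second rate. RateCounter is a
// hypothetical name, not a Spectator class.
public class RateCounter {
    private long count = 0;

    public void increment() { count += 1; }
    public void increment(long amount) { count += amount; }
    public long count() { return count; }

    // the raw count is reported as a rate over the polling interval
    public double rate(double intervalSeconds) {
        return count / intervalSeconds;
    }

    public static void main(String[] args) {
        RateCounter c = new RateCounter();
        c.increment();      // one event
        c.increment(10);    // a batch of ten events
        // 11 events over a 5-second polling interval -> 2.2 events/second
        System.out.println(c.rate(5.0));
    }
}
```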
Spectator Timer
A timer is used to measure how long some event takes. Spring Cloud automatically records timers for Spring MVC requests and, conditionally, RestTemplate requests; these can later be used to create dashboards for request-related metrics like latency:
Figure 4. Request Latency
// create a timer with a name and a set of tags
Timer timer = registry.timer("timerName", "tagKey1", "tagValue1", ...);

// execute an operation and time it at the same time
T result = timer.record(() -> fooReturnsT());

// alternatively, if you must manually record the time
Long start = System.nanoTime();
T result = fooReturnsT();
timer.record(System.nanoTime() - start, TimeUnit.NANOSECONDS);
The timer simultaneously records four statistics: count, max, totalOfSquares, and totalTime. The count statistic will always match the single normalized value provided by a counter if you had called increment() once on the counter for each time you recorded a timing, so it is rarely necessary to count and time separately for a single operation.
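Those four statistics can be maintained as simple running aggregates. The sketch below illustrates this; TimerStats is a hypothetical class, not part of Spectator:

```java
import java.util.List;

// Illustrative sketch: the four statistics a Spectator timer reports
// (count, totalTime, totalOfSquares, max), computed here from a list of
// recorded durations in nanoseconds. TimerStats is a hypothetical name.
public class TimerStats {
    public final long count;
    public final long totalTime;
    public final double totalOfSquares;
    public final long max;

    public TimerStats(List<Long> durationsNanos) {
        long n = 0, total = 0, mx = 0;
        double squares = 0.0;
        for (long d : durationsNanos) {
            n++;
            total += d;
            squares += (double) d * d; // lets the backend derive variance
            mx = Math.max(mx, d);
        }
        this.count = n;
        this.totalTime = total;
        this.totalOfSquares = squares;
        this.max = mx;
    }

    public static void main(String[] args) {
        TimerStats s = new TimerStats(List.of(100L, 200L, 300L));
        // count matches a counter incremented once per recorded timing
        System.out.println(s.count + " " + s.totalTime + " " + s.max);
        // prints: 3 600 300
    }
}
```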
For long running operations, Spectator provides a special LongTaskTimer.
Spectator Gauge
Gauges are used to determine some current value, like the size of a queue or the number of threads in a running state. Since gauges are sampled, they provide no information about how these values fluctuate between samples.
The normal use of a gauge involves registering the gauge once in initialization with an id, a reference to the object to be sampled, and a function to get or compute a numeric value based on the object. The reference to the object is passed in separately, and the Spectator registry will keep a weak reference to it. If the object is garbage collected, Spectator will automatically drop the registration. See the note in Spectator's documentation about potential memory leaks if this API is misused.
// the registry will automatically sample this gauge periodically
registry.gauge("gaugeName", pool, Pool::numberOfRunningThreads);

// manually sample a value in code at periodic intervals -- last resort!
registry.gauge("gaugeName", Arrays.asList("tagKey1", "tagValue1", ...), 1000);
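The weak-reference behavior described above can be sketched with java.lang.ref.WeakReference. WeakGauge below is a simplified illustration of the idea, not Spectator's actual implementation:

```java
import java.lang.ref.WeakReference;
import java.util.function.ToDoubleFunction;

// Illustrative sketch: a gauge that holds its sampled object weakly, as the
// Spectator registry does. Once the referent is garbage collected, sampling
// yields NaN and the registration can be dropped. WeakGauge is a hypothetical name.
public class WeakGauge<T> {
    private final WeakReference<T> ref;
    private final ToDoubleFunction<T> fn;

    public WeakGauge(T obj, ToDoubleFunction<T> fn) {
        this.ref = new WeakReference<>(obj);
        this.fn = fn;
    }

    // called periodically by the registry
    public double sample() {
        T obj = ref.get();
        return (obj == null) ? Double.NaN : fn.applyAsDouble(obj);
    }

    // the registry drops the registration once this turns true
    public boolean expired() {
        return ref.get() == null;
    }

    public static void main(String[] args) {
        StringBuilder queue = new StringBuilder("abc");
        WeakGauge<StringBuilder> g = new WeakGauge<>(queue, sb -> sb.length());
        System.out.println(g.sample()); // samples the live object: 3.0
    }
}
```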
Spectator Distribution Summaries
A distribution summary is used to track the distribution of events. It is similar to a timer, but more general, in that the size does not have to be a period of time. For example, a distribution summary could be used to measure the payload sizes of requests hitting a server.
// create a distribution summary with a name and a set of tags
DistributionSummary ds = registry.distributionSummary("dsName", "tagKey1", "tagValue1", ...);
ds.record(request.sizeInBytes());
Metrics Collection: Servo
If your code is compiled on Java 8, please use Spectator instead of Servo, as Spectator is destined to replace Servo entirely in the long term.
In Servo parlance, a monitor is a named, typed, and tagged configuration, and a metric represents the value of a given monitor at a point in time. Servo monitors are logically equivalent to Spectator meters. Servo monitors are created and controlled by a MonitorRegistry. In spite of the above warning, Servo does have a wider array of monitor options (https://github.com/Netflix/servo/wiki/Getting-Started) than Spectator has meters.
Spring Cloud integration configures an injectable com.netflix.servo.MonitorRegistry instance for you. Once you have created the appropriate Monitor type in Servo, the process of recording data is wholly similar to Spectator.
Creating Servo Monitors
If you are using the Servo MonitorRegistry instance provided by Spring Cloud (specifically, an instance of DefaultMonitorRegistry), Servo provides convenience classes for retrieving counters and timers. These convenience classes ensure that only one Monitor is registered for each unique combination of name and tags.
To manually create a Monitor type in Servo, especially for the more exotic monitor types for which convenience methods are not provided, instantiate the appropriate type by providing a MonitorConfig instance:
MonitorConfig config = MonitorConfig.builder("timerName").withTag("tagKey1", "tagValue1").build();

// somewhere we should cache this Monitor by MonitorConfig
Timer timer = new BasicTimer(config);
monitorRegistry.register(timer);
Metrics Backend: Atlas
Atlas was developed by Netflix to manage dimensional time series data for near real-time operational insight. Atlas features in-memory data storage, allowing it to gather and report very large numbers of metrics, very quickly.
Atlas captures operational intelligence. Whereas business intelligence is data gathered for analyzing trends over time, operational intelligence provides a picture of what is currently happening within a system.
Spring Cloud provides a spring-cloud-starter-atlas that has all the dependencies you need. Then just annotate your Spring Boot application with @EnableAtlas and provide a location for your running Atlas server with the netflix.atlas.uri property.
Global tags
Spring Cloud enables you to add tags to every metric sent to the Atlas backend. Global tags can be used to separate metrics by application name, environment, region, etc.
Each bean implementing AtlasTagProvider will contribute to the global tag list:
@Bean
AtlasTagProvider atlasCommonTags(
        @Value("${spring.application.name}") String appName) {
    return () -> Collections.singletonMap("app", appName);
}
Using Atlas
To bootstrap an in-memory standalone Atlas instance:
$ curl -LO https://github.com/Netflix/atlas/releases/download/v1.4.2/atlas-1.4.2-standalone.jar
$ java -jar atlas-1.4.2-standalone.jar
An Atlas standalone node running on an r3.2xlarge (61GB RAM) can handle roughly 2 million metrics per minute for a given 6 hour window.
Once running, and once you have collected a handful of metrics, verify that your setup is correct by listing tags on the Atlas server:
$ curl http://ATLAS/api/v1/tags
TIP: After executing several requests against your service, you can gather some very basic information on the request latency of every request by pasting the following URL in your browser: http://ATLAS/api/v1/graph?q=name,rest,:eq,:avg
The Atlas wiki contains a compilation of sample queries (https://github.com/Netflix/atlas/wiki/Single-Line) for various scenarios.
Make sure to check out the alerting philosophy and docs on using double exponential smoothing to generate dynamic alert thresholds.
2. Zuul is similar to nginx, providing reverse-proxy functionality; Netflix added some features of its own to work with the other components.
3. Verification and source-code analysis... to follow.