There's a good line in Tales of Herding Gods (《牧神記》): break the god in your heart. Once distributed systems, microservices, and the CLR no longer leave you fearful and lost, you have broken the god in your heart.
Part one: .NET Architecture: thinking about how to design a practical distributed monitoring system
Part two: .NET Core in practice: the distributed-monitoring client ZipkinTracer, from getting started to giving up, where we covered Zipkin's principles and architecture and recorded a failed attempt with zipkintracer.
Today, let's review.
A full-link tracing tool (based on dependency relationships)
Shows how fast each interface and each service executes (to pinpoint where a problem occurs, or to find performance bottlenecks)
Generates tracing identifiers (traceId, spanId, parentId) and ultimately builds out the flow tree of a request
Collector receives the data reported by each service;
Storage: Cassandra is one option; by default spans are kept in memory, and Elasticsearch and MySQL are also supported for production persistence;
Query looks up the data held in Storage and exposes a simple JSON API for retrieving it, consumed mainly by the Web UI;
Web provides a simple web interface;
Zipkin is a distributed call-chain monitoring system: it aggregates call-latency data from each business system to provide call-chain monitoring and tracing;
by collecting trace data, Zipkin helps developers understand in depth how a particular request executes across a distributed system. The small sketch below illustrates how the trace identifiers stitch a request back together.
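To make the identifier scheme concrete, here is a minimal sketch (plain C#, not zipkin4net code; the span values are made up) of how spans carrying traceId/spanId/parentId can be stitched back into a request tree:

// A minimal sketch of how traceId/spanId/parentId let a backend rebuild a
// request's flow tree from individually reported spans. Values are invented.
using System;
using System.Collections.Generic;
using System.Linq;

class Span
{
    public string TraceId;   // shared by every span in one request
    public string SpanId;    // unique per unit of work
    public string ParentId;  // spanId of the caller; null for the root span
    public string Name;
}

class Program
{
    static void Main()
    {
        // Spans as they might arrive at the collector, in no particular order.
        var spans = new List<Span>
        {
            new Span { TraceId = "t1", SpanId = "a", ParentId = null, Name = "POST /user/add (site 1)" },
            new Span { TraceId = "t1", SpanId = "b", ParentId = "a",  Name = "POST /user/get (site 2)" },
            new Span { TraceId = "t1", SpanId = "c", ParentId = "b",  Name = "mongodb query" },
        };

        // Group children by their parentId, then print the tree from the root down.
        var children = spans.Where(s => s.ParentId != null).ToLookup(s => s.ParentId);
        var root = spans.Single(s => s.ParentId == null);
        Print(root, children, 0);
    }

    static void Print(Span span, ILookup<string, Span> children, int depth)
    {
        Console.WriteLine($"{new string(' ', depth * 2)}{span.Name} [trace={span.TraceId}, span={span.SpanId}]");
        foreach (var child in children[span.SpanId])
            Print(child, children, depth + 1);
    }
}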
For reference:
zipkin4net is the .NET client library.
It gives you:
Zipkin primitives (spans, annotations, binary annotations, ...)
Asynchronous trace sending
Trace transport abstraction
var logger = CreateLogger(); // it should implement ILogger
var sender = CreateYourTransport(); // it should implement IZipkinSender
TraceManager.SamplingRate = 1.0f; // trace everything
var tracer = new ZipkinTracer(sender);
TraceManager.RegisterTracer(tracer);
TraceManager.Start(logger);
// run your program
// on shutdown
TraceManager.Stop();
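The CreateLogger() and CreateYourTransport() calls above are just placeholders. As a minimal sketch of one way to fill them in, reusing the TracingLogger and HttpZipkinSender that appear later in this post (the Zipkin address and the loggerFactory variable are assumptions on my part):

// Sketch only: concrete logger and sender for the quick-start snippet above.
// loggerFactory is assumed to be an ILoggerFactory provided by the host.
var logger = new TracingLogger(loggerFactory, "zipkin4net");                     // plays the ILogger role
var sender = new HttpZipkinSender("http://localhost:9411", "application/json");  // plays the IZipkinSender role
TraceManager.SamplingRate = 1.0f;
var tracer = new ZipkinTracer(sender, new JSONSpanSerializer());
TraceManager.RegisterTracer(tracer);
TraceManager.Start(logger);
// ... run your program ...
TraceManager.Stop();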
That's it for the introduction; for the rest, refer to zipkin4net.
Enough talk; let the code speak.
Before diving in, let me show the code structure, which matches my earlier hands-on posts; the in-memory queue and the crawler each have corresponding posts on my blog.
Today we only cover putting zipkin4net into practice. To check whether Zipkin can aggregate across different sites, I created two sites, Demo.ZipKinWeb and Demo.ZipKinWeb2, laid out roughly like the figure below:
To actually persist data, I created FanQuick.Repository, which provides MongoDB storage helpers. The generic IRepository interface is declared as follows:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using MongoDB.Driver;

namespace FanQuick.Repository
{
    public interface IRepository<TDocument> where TDocument : EntityBase
    {
        /// <summary>
        /// The collection exposed as an IQueryable for ad-hoc LINQ queries.
        /// </summary>
        IQueryable<TDocument> Queryable { get; }
        /// <summary>
        /// Whether any document matches the filter.
        /// </summary>
        bool Any(Expression<Func<TDocument, bool>> filter);
        /// <summary>
        /// Delete
        /// </summary>
        /// <param name="filter"></param>
        /// <returns></returns>
        bool Delete(Expression<Func<TDocument, bool>> filter);
        /// <summary>
        /// Query
        /// </summary>
        /// <param name="filter"></param>
        /// <returns></returns>
        IEnumerable<TDocument> Find(Expression<Func<TDocument, bool>> filter);
        /// <summary>
        /// Insert
        /// </summary>
        /// <param name="document"></param>
        void Insert(TDocument document);
        /// <summary>
        /// Bulk insert
        /// </summary>
        /// <param name="documents"></param>
        void Insert(IEnumerable<TDocument> documents);
        /// <summary>
        /// Count
        /// </summary>
        /// <param name="filter"></param>
        /// <returns></returns>
        long Count(Expression<Func<TDocument, bool>> filter);
        /// <summary>
        /// Find a single document matching the filter and delete it.
        /// </summary>
        TDocument FindOneAndDelete(Expression<Func<TDocument, bool>> filter);
        /// <summary>
        /// Find a single document matching the filter and apply the update to it.
        /// </summary>
        TDocument FindOneAndUpdate(FilterDefinition<TDocument> filter, UpdateDefinition<TDocument> update);
    }
}
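As a hedged illustration of how the interface is consumed (Order and OrderService below are hypothetical examples, not part of FanQuick.Repository; EntityBase is the base class required by the constraint):

// Hypothetical consumer of IRepository<T>; Order and OrderService are
// illustrations only. The repository is resolved by the container via
// services.AddScoped(typeof(IRepository<>), typeof(BaseRepository<>)).
using System.Collections.Generic;
using FanQuick.Repository;

public class Order : EntityBase
{
    public string UserId { get; set; }
    public decimal Amount { get; set; }
}

public class OrderService
{
    private readonly IRepository<Order> _orders;

    public OrderService(IRepository<Order> orders) => _orders = orders;

    // Persist a new order document.
    public void Place(Order order) => _orders.Insert(order);

    // Query all orders belonging to a user.
    public IEnumerable<Order> ForUser(string userId) => _orders.Find(o => o.UserId == userId);
}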
So that both sites can reuse the zipkin4net wiring, I pulled the code out into Demo.ZipkinCommon.
The reusable abstract class CommonStartup is shown below; pay particular attention to the code that calls zipkin4net. An abstract Run method is exposed for subclasses to implement. Note especially that applicationName must be set in appsettings.json, otherwise the service shows up in Zipkin as unnamed and you can't tell the sites apart!
namespace Demo.ZipkinCommon
{
    public abstract class CommonStartup
    {
        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public abstract void ConfigureServices(IServiceCollection services);

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
        {
            var config = ConfigureSettings.CreateConfiguration();
            var applicationName = config["applicationName"];
            //if (env.IsDevelopment())
            //{
            //    app.UseDeveloperExceptionPage();
            //}
            //else
            //{
            //    app.UseExceptionHandler("/Home/Error");
            //    app.UseHsts();
            //}
            var lifetime = app.ApplicationServices.GetService<IApplicationLifetime>();
            lifetime.ApplicationStarted.Register(() =>
            {
                // Start tracing once the host is up: sample everything and report to Zipkin over HTTP as JSON.
                TraceManager.SamplingRate = 1.0f;
                var logger = new TracingLogger(loggerFactory, "zipkin4net");
                var httpSender = new HttpZipkinSender("http://weixinhe.cn:9411", "application/json");
                var tracer = new ZipkinTracer(httpSender, new JSONSpanSerializer());
                TraceManager.RegisterTracer(tracer);
                TraceManager.Start(logger);
            });
            // Flush and stop the tracer when the application shuts down.
            lifetime.ApplicationStopped.Register(() => TraceManager.Stop());
            // Register the zipkin4net middleware so incoming requests are traced under this service name.
            app.UseTracing(applicationName);
            Run(app, config);
        }

        protected abstract void Run(IApplicationBuilder app, IConfiguration configuration);
    }
}
The configuration-reading class was also pulled out on its own; it reads appsettings.json. Each site needs to set appsettings.json to copy to the output directory, otherwise the file won't be found!
using Microsoft.Extensions.Configuration;

namespace Demo.ZipkinCommon
{
    public class ConfigureSettings
    {
        public static IConfiguration CreateConfiguration()
        {
            // Build configuration from appsettings.json (required) plus environment variables.
            var builder = new ConfigurationBuilder()
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddEnvironmentVariables();
            return builder.Build();
        }
    }
}
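For reference, a minimal appsettings.json for each site might look like the following (applicationName is the only key the shared code reads; the value here is just the site name used in this post). In the .csproj, set the file's CopyToOutputDirectory to PreserveNewest, or tick "Copy if newer" in the file properties, so it ends up next to the binaries:

{
  "applicationName": "Demo.ZipKinWeb"
}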
The shared parts are done. Let's look at the Demo.ZipKinWeb site code. Startup inherits the abstract class CommonStartup and uses .NET Core's built-in dependency injection to register the services and repositories. Open generic types can't be registered through the generic overloads, but registration by Type is supported, which indirectly solves the generic injection problem. For more on dependency injection, see the dependency injection section of the previous post.
namespace Demo.ZipKinWeb
{
    public class Startup : CommonStartup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        public override void ConfigureServices(IServiceCollection services)
        {
            services.Configure<CookiePolicyOptions>(options =>
            {
                // This lambda determines whether user consent for non-essential cookies is needed for a given request.
                options.CheckConsentNeeded = context => true;
                options.MinimumSameSitePolicy = SameSiteMode.None;
            });
            // Open generics cannot go through the generic AddScoped<TService, TImpl>() overload,
            // so the repository is registered by Type instead.
            services.AddScoped(typeof(IRepository<>), typeof(BaseRepository<>));
            services.AddScoped<IUserService, UserService>();
            services.AddScoped<IAddressService, AddressService>();
            services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);
        }

        protected override void Run(IApplicationBuilder app, IConfiguration configuration)
        {
            app.UseHttpsRedirection();
            app.UseStaticFiles();
            app.UseCookiePolicy();
            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Home}/{action=Index}/{id?}");
            });
        }
    }
}
To demonstrate aggregation across the two sites, the Add method deliberately calls the other site's Get:
[HttpPost]
public IActionResult Add([FromBody]User user)
{
    _userService.AddUser(user);
    // Simulate a call to the other site.
    var client = new RestClient($"{ConfigEx.WebSite}");
    var request = new RestRequest($"/user/get", Method.POST);
    request.AddParameter("id", user.Id); // adds to POST or URL querystring based on Method
    IRestResponse response = client.Execute(request);
    var content = response.Content;
    // return Json(new { data = content });
    return Content(content + _addressService.Test());
}
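For completeness, the Get action on the second site might look roughly like this (the post doesn't show it, so treat it as a hypothetical sketch; _userService and GetUser are assumptions):

// Hypothetical handler on Demo.ZipKinWeb2 that the Add action above calls.
// The zipkin4net middleware registered in CommonStartup records this incoming request as well.
[HttpPost]
public IActionResult Get(string id)
{
    var user = _userService.GetUser(id);
    return Json(user);
}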
After creating the necessary controllers and actions, set both sites to start up and run them; then you can check the result.
Postman is a handy tool for testing APIs: click Send.
Open our Zipkin server URL and, in the Web UI, you can see two request records. That's as expected: one is Add, which in turn calls the other site's Get, and you can also see the time each took.
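If you prefer raw data over the UI, the JSON API mentioned earlier can also be queried directly. A couple of hedged examples, assuming the server exposes the standard v2 API on port 9411 and that serviceName matches the applicationName you configured:

# List the service names Zipkin has seen
curl http://weixinhe.cn:9411/api/v2/services

# Fetch recent traces reported by one of the demo sites
curl "http://weixinhe.cn:9411/api/v2/traces?serviceName=demo.zipkinweb&limit=10"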
Next problem: the Zipkin Dependencies page shows no data.
Sure enough, the internet has the answer. I found the issue "elasticsearch storage, zipkin dependencies has no data",
in which a fellow developer abroad pointed out that
when you use Elasticsearch or Cassandra, you need to run zipkin-dependencies
(you need to run https://github.com/openzipkin/zipkin-dependencies when using elasticsearch or Cassandra)
It is a Spark job that collects spans from your data store, analyzes the links between services, and stores them for later presentation in the Web UI (e.g. http://localhost:8080/dependency).
What is Spark?
Apache Spark is a fast, general-purpose compute engine designed for large-scale data processing.
The job analyzes all traces for the current day in UTC. That means you should schedule it to run just prior to UTC midnight.
All of Zipkin's storage components are supported, including Cassandra, MySQL and Elasticsearch.
This struck me as a weak design: even the in-memory demo mode doesn't seem to aggregate in real time, and you still have to run a scheduled job. [Addendum 2018-09-17: this conclusion is wrong; the in-memory mode does aggregate in real time. I'm leaving the sentence here to preserve what I felt while writing.]
Following the official instructions, let's take the quickest route.
wget -O zipkin-dependencies.jar 'https://search.maven.org/remote_content?g=io.zipkin.dependencies&a=zipkin-dependencies&v=LATEST'
STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar
Or start it with Docker:
docker run --env STORAGE_TYPE=cassandra3 --env CASSANDRA_CONTACT_POINTS=host1,host2 openzipkin/zipkin-dependencies
By default, the job processes all traces since UTC midnight. You can process traces for a different day by passing an argument in YYYY-mm-dd format, e.g. 2016-07-16.
# ex to run the job to process yesterday's traces on OS/X
STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar `date -uv-1d +%F`
# or on Linux
STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar `date -u -d '1 day ago' +%F`
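As an aside: since the job only aggregates when it runs, the advice to run it just before UTC midnight translates into a scheduled task. A sketch, assuming a Linux host whose clock is on UTC and with the jar under /opt/zipkin:

# crontab entry: run zipkin-dependencies daily at 23:50 (server time assumed to be UTC)
50 23 * * * cd /opt/zipkin && STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar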
Running the job, however, failed:
STORAGE_TYPE=cassandra3 java -jar zipkin-dependencies.jar `date -u -d '1 day ago' +%F`
18/09/14 20:24:50 INFO CassandraDependenciesJob: Running Dependencies job for 2018-09-13: 1536796800000000 ≤ Span.timestamp 1536883199999999
18/09/14 20:24:50 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/09/14 20:24:51 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 466288640 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:217)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:199)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:330)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:175)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:256)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:423)
at zipkin2.dependencies.cassandra3.CassandraDependenciesJob.run(CassandraDependenciesJob.java:181)
at zipkin2.dependencies.ZipkinDependenciesJob.main(ZipkinDependenciesJob.java:57)
Exception in thread "main" java.lang.IllegalArgumentException: System memory 466288640 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:217)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:199)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:330)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:175)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:256)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:423)
at zipkin2.dependencies.cassandra3.CassandraDependenciesJob.run(CassandraDependenciesJob.java:181)
at zipkin2.dependencies.ZipkinDependenciesJob.main(ZipkinDependenciesJob.java:57)
Meaning: the system memory is too small... pathetic.
I turned up the following related links:
Zipkin: API calls return no data / Zipkin api traces is empty
Zipkin advanced: using zipkin-dependencies
The API standard Zipkin exposes
An introduction to Spark memory
The memory error in its simpler form:
ERROR SparkContext: Error initializing SparkContext.
System memory..must be at least ... Please use a larger heap
Spark memory management
How to set spark.executor.memory and the heap size
Spark Misconceptions
How to set the Spark heap size in the Eclipse environment?
That last one seems to contain a useful answer:
You can do this by editing {SPARK_HOME}/conf/: there is a template file, spark-defaults.conf.template, from which you can create spark-defaults.conf with the following command:
cp spark-defaults.conf.template spark-defaults.conf
Then edit it:
# Example:
# spark.master spark://master:7077
# spark.eventLog.enabled true
# spark.eventLog.dir hdfs://namenode:8021/directory
# spark.serializer org.apache.spark.serializer.KryoSerializer
# spark.driver.memory 5g
# spark.executor.extraJavaOptions -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.driver.memory
But I couldn't find a SPARK_HOME environment variable on my machine.
Setting and viewing environment variables on Linux
So I shifted focus again.
java.lang.IllegalArgumentException: system memory
There the author and commenters mention:
the gc flag is optional: JAVA_OPTS=-verbose:gc -Xms1G -Xmx1G, https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html#BABDJJFI
Setting JAVA_OPTS
A look at JVM memory and the JAVA_OPTS parameters
Tomcat: setting JAVA_OPTS for out-of-memory issues
JAVA_OPTS parameters explained and configured
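Applied to the dependencies job, those flags would look roughly like this (a sketch combining the -Xms/-Xmx options quoted above with the earlier command; as noted below, it still didn't solve my problem):

# Give the JVM (and thus the in-process Spark driver) a 1 GB heap
STORAGE_TYPE=cassandra3 java -Xms1G -Xmx1G -jar zipkin-dependencies.jar `date -u -d '1 day ago' +%F`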
I made the changes above and it still didn't work... If anyone knows how to sort this out, please let me know.
Oh well. If I didn't leave a loose end, what would keep my curiosity going? I only meant to take a simple look at Zipkin, yet ended up on the road to Spark and the JVM; I'll leave the problem for later reflection.
The next post will continue down the Zipkin path: persisting to MySQL, plus today's unfinished topic, zipkin-dependencies.
The title says .NET Core, yet most of the time went into chasing Java problems; go figure. No way around it: the monitoring system we're using is open-source Java, so no complaining, keep digging. It should be a small problem. This is how blogging stretches what I learn beyond what I had planned. Persistence pays off: at least I now know a bit about Spark and a few JVM parameters.
Thanks for reading; that's the end of this post.