ELK Log Collection

Current logging pain points

  1. Ops staff constantly have to log in to servers to fetch logs for developers and testers
  2. Logs are only looked at after a problem occurs; they are not used to predict problems ahead of time
  3. For clustered services, logs have to be gathered from multiple machines
  4. The logs developers produce are non-standard: log directories are inconsistent and log types are unclear (system logs, error logs, access logs, runtime logs, device logs, debug logs)

All of the pain points above can be solved with ELK.
For logs to be useful, four stages are needed:

  1. Collection
  2. Storage
  3. Search and visualization
  4. Log analysis, enabling failure early-warning and business insight

Elasticsearch, Logstash, and Kibana together solve the first three stages:

es: storage, search
logstash: collection
kibana: visualization

Elasticsearch and Logstash are both written in Java and run on the JVM, so the environment needs a JDK (OpenJDK is fine; Android has reportedly switched from the Sun JDK to OpenJDK as well, to slim the system down).

Installing and configuring Elasticsearch
The recommended way to install Elasticsearch is via yum (you can also install from a source tarball: download, unpack, and run; the upside is that version upgrades are easy):
https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html

1. Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

2. Create a file called elasticsearch.repo in the /etc/yum.repos.d/ directory for RedHat-based distributions:
[elasticsearch-6.x]
name=Elasticsearch repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

3. Your repository is now ready for use. Install Elasticsearch with:

sudo yum install elasticsearch

Configuration:
There is not much to configure in Elasticsearch: the cluster name (important), the node name (important), whether to lock memory, the data path, the log path, the IP to listen on, and the port to listen on:

grep "^[a-z]" /etc/elasticsearch/elasticsearch.yml

cluster.name: oldgirl
node.name: linux-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200

Here bootstrap.memory_lock: true locks memory. Startup may then fail and the service will not start, because /etc/security/limits.conf has not granted the memlock permission; add the entries the log's error message suggests:
[2018-07-01T14:15:44,143][WARN ][o.e.b.JNANatives ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2018-07-01T14:15:44,144][WARN ][o.e.b.JNANatives ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
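Note that on distributions where Elasticsearch is started by systemd, limits.conf entries may not apply to the service at all; in that case a unit override is needed as well (a sketch following the Elastic docs; apply it with `systemctl edit elasticsearch`, then `systemctl daemon-reload` and restart the service):

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity
```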

At this point a single-node Elasticsearch install is complete; test it by opening http://IP:9200
{
  "name" : "linux-node-1",
  "cluster_name" : "oldgirl",
  "cluster_uuid" : "5hmMNxc5QxG6q-2t2VNqrg",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Seeing this output means the Elasticsearch node is up; the next step is getting data into it.
How do we talk to Elasticsearch? There are two broad approaches:
the Java API and the RESTful API.

We will use the RESTful API, exchanging JSON with Elasticsearch.
For example, in a shell:
curl -H Content-Type:application/json -i -X GET 'http://127.0.0.1:9200/_count?pretty' -d '
{
  "query": {
    "match_all": {}
  }
}'
The response:
HTTP/1.1 200 OK
content-type: application/json; charset=UTF-8
content-length: 114

{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "skipped" : 0,
    "failed" : 0
  }
}
-X GET sets the request method
-i prints the response headers
-H Content-Type:application/json is required to tell the server to parse the request body as JSON; without it you get the following error:
HTTP/1.1 406 Not Acceptable
content-type: application/json; charset=UTF-8
content-length: 109

{
  "error" : "Content-Type header [application/x-www-form-urlencoded] is not supported",
  "status" : 406
}

Driving the Elasticsearch RESTful API with curl like this works but is inconvenient. Elasticsearch supports many plugins; we will use one that provides a web UI for interacting with the RESTful API.

The officially recommended site plugins are no longer supported as of Elasticsearch 6.x, so we use the open-source elasticsearch-head. GitHub: https://github.com/mobz/elasticsearch-head
Installation:
Running with built in server
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
open http://localhost:9100/

Then edit the Elasticsearch configuration file
vim /etc/elasticsearch/elasticsearch.yml
and append the following two lines:
http.cors.enabled: true
http.cors.allow-origin: "*"

Then:
open http://localhost:9100/
and add http://localhost:9200
From here on we can interact with the Elasticsearch RESTful API through a web UI.

Next, build an Elasticsearch cluster.
Installation is the same on every node; just set the same cluster.name in each node's configuration file.
On startup, each node announces which cluster it belongs to. Note that multicast discovery no longer works in 6.x; use unicast instead, configured like this:

discovery.zen.ping.unicast.hosts: ["host1", "host2"]  (IP addresses are preferable here)

You do not need to list every node here; one or two is enough, because cluster membership propagates between nodes.

How do you know whether a node has joined the cluster? Two ways: the Overview tab in elasticsearch-head shows it,
or check the Elasticsearch log, whose file name is the cluster name.

There are also monitoring plugins: bigdesk, which unfortunately is unsupported after 2.0, and kopf, unsupported after 3.0. Elasticsearch is moving toward platform products, so treat these as background knowledge and prefer platform offerings in production; they save a lot of ops cost.
Of the three commonly mentioned plugins, two can no longer be used.

With the cluster installed and configured and the basic concepts covered, we move on to Logstash. Elasticsearch usage is a deep topic, but for ops the priority is collecting logs, so from here we focus on using Logstash.

Installing Logstash
Does Logstash have to be installed on every server? Not necessarily: not if the logs arrive over the network, yes if you are collecting local text files.
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

YUM
Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix, for example logstash.repo
vim /etc/yum.repos.d/logstash.repo
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

And your repository is ready for use. You can install it with:

sudo yum install logstash

Logstash is written in JRuby, so startup is a bit slow.
/usr/share/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'

-e runs the configuration string given on the command line

One input and one output;
stdin{} and stdout{} are two plugins.
Startup takes around a minute.
[root@node2 elasticsearch]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2018-07-01 15:03:59.682 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-07-01 15:04:00.629 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.3.0"}
[INFO ] 2018-07-01 15:04:03.885 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
The stdin plugin is now waiting for input:
[INFO ] 2018-07-01 15:04:04.098 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread: run>"}
[INFO ] 2018-07-01 15:04:04.225 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2018-07-01 15:04:04.547 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
hello world
{
  "@version" => "1",
  "@timestamp" => 2018-07-01T07:04:13.785Z,
  "message" => "hello world",
  "host" => "node2.shared"
}
hehehe
{
  "@version" => "1",
  "@timestamp" => 2018-07-01T07:04:20.411Z,
  "message" => "hehehe",
  "host" => "node2.shared"
}

The above is the standard-input/standard-output example. The same pipeline with the rubydebug codec:
/usr/share/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
...
hello
{
  "message" => "hello",
  "@version" => "1",
  "@timestamp" => 2018-07-01T07:08:02.456Z,
  "host" => "node2.shared"
}

Each piece of data entering Logstash is called an event, not a line; several lines may make up one event — an error report, for instance, is almost always more than one line.

Writing the data into Elasticsearch:
keep standard input, and change the output:

/usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["10.211.55.8:9200"] } }'

Official documentation: https://www.elastic.co/guide/en/logstash/current/index.html

Writing to Elasticsearch is that simple.
Can we output to Elasticsearch and the console at the same time? Yes; this is duplication, not load balancing: one input can have multiple outputs.

/usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["10.211.55.8:9200"] } stdout { codec => rubydebug } }'
What is this good for? In production, write to Elasticsearch and to a text file at the same time. Keeping a plain-text copy is best, for three reasons: 1. it is the simplest format, 2. it can be reprocessed later, 3. it compresses best. What should logs be kept in? Text.

Next we learn to write Logstash configuration files; pipelines should not live on the command line forever, and a configuration file is more convenient.

The simplest configuration file:
vim /etc/logstash/conf.d/logstash-simple.conf
input { stdin { } }
output {
  elasticsearch { hosts => ["10.211.55.8:9200"] }
  stdout { codec => rubydebug }
}

Then start it:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash-simple.conf

We mainly need to learn Logstash's configuration syntax:

# This is a comment. You should use comments to describe
# parts of your configuration.

input {
  ...
}

filter {
  ...
}

output {
  ...
}

input{} and output{} are required; filter{} is optional.

input {
  file {
    path => "/var/log/messages"
    type => "syslog"
  }

  file {
    path => "/var/log/apache/access.log"
    type => "apache"
  }
}

Case 1
The most common input is a file:

vim /etc/logstash/conf.d/file.conf
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["10.211.55.8:9200"]
    index => "system-%{+YYYY.MM.dd}"
  }
}
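The `%{+YYYY.MM.dd}` suffix in the index name is a date pattern expanded from each event's @timestamp, so every day's events land in their own index. A minimal Python sketch of the idea (not Logstash code; the Joda-style pattern roughly maps to strftime codes):

```python
from datetime import datetime, timezone

def render_index(prefix, ts):
    # Logstash's %{+YYYY.MM.dd} corresponds to strftime's %Y.%m.%d
    return prefix + "-" + ts.strftime("%Y.%m.%d")

ts = datetime(2018, 7, 1, tzinfo=timezone.utc)
print(render_index("system", ts))  # system-2018.07.01
```

Daily indices make it cheap to expire old logs: dropping a whole index is far faster than deleting documents one by one.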

Next we collect not just the system log but also a Java log.
Case 2
vim /etc/logstash/conf.d/file.conf
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }

  file {
    path => "/var/log/elasticsearch/oldgirl.log"
    type => "es-error"
    start_position => "beginning"
  }
}

output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
}

This way the type field drives the if tests.
The 6.x file-plugin documentation no longer lists the type option, but it still works, and nothing else can take its place here.
Note that we have not yet split the message into fields; fields carry a type of their own, and once they do, using the file plugin's type for these if tests will stop working.
You could also run several Logstash processes on one server, one per service's logs, at the cost of extra CPU and memory.
Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
This startup message tells us that the type set in the file plugin is not the _type shown when browsing data in Elasticsearch.

Viewing the logs in Elasticsearch this way exposes a problem: one error message should be one event and is best displayed as one event, but reading from the file split it across many lines, which is very inconvenient. How do we gather those lines into a single event? Time to introduce a codec.

Case 3
input {
  stdin {
    codec => multiline {
      pattern => "pattern, a regexp"
      negate => "true" or "false"
      what => "previous" or "next"
    }
  }
}

The three parameters:
pattern: a regular expression deciding which lines trigger merging
negate: whether to invert the pattern match
what: whether merged lines are attached to the previous or the next event
input {
  stdin {
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
A line starting with [ begins a new event; any line not starting with [ is merged into the previous event.
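The merge rule can be illustrated in a few lines of Python (a simulation of the codec's behavior with negate => true and what => previous, not the plugin itself; the sample log lines are hypothetical):

```python
import re

def merge_multiline(lines, pattern=r"^\["):
    # A line matching the pattern starts a new event; any other line is
    # appended to the event before it (e.g. a Java stack trace).
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

log = [
    "[2018-07-01] ERROR something broke",
    "java.lang.NullPointerException",
    "    at com.example.Foo.bar(Foo.java:42)",
    "[2018-07-01] INFO recovered",
]
print(len(merge_multiline(log)))  # 2 events: the error with its trace, plus the INFO line
```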
vim /etc/logstash/conf.d/all.conf
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }

  file {
    path => "/var/log/elasticsearch/oldgirl.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }
}

output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/all.conf

Browsing the results in elasticsearch-head is inconvenient, so next we bring in Kibana,
Elasticsearch's visualization platform.
https://www.elastic.co/guide/en/kibana/current/index.html
Kibana was originally written in Ruby and has been a Node.js application since version 4.

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.3.0-linux-x86_64.tar.gz
shasum -a 512 kibana-6.3.0-linux-x86_64.tar.gz
tar -xzf kibana-6.3.0-linux-x86_64.tar.gz
mv kibana-6.3.0-linux-x86_64/ /usr/local/
cd /usr/local/
ln -s kibana-6.3.0-linux-x86_64/ kibana

Edit the Kibana configuration file
cd /usr/local/kibana/config
vim kibana.yml
Four settings to change:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.211.55.8:9200"
kibana.index: ".kibana"
kibana.index is worth a note: Kibana has no database of its own, but its data has to live somewhere, and since Kibana and Elasticsearch are inseparable, it simply has Elasticsearch create a .kibana index to store Kibana's own data.
Once configured, start Kibana.

We have collected the system log and a Java log (Elasticsearch's runtime log); next we collect the nginx log.
Elasticsearch has the concept of fields: think of an index as a database instance, _type as a table, and fields as columns — i.e. the message content broken into key:value pairs.

By configuring nginx.conf, nginx can emit its access log uniformly in JSON. Logstash passes it on, and Elasticsearch parses that JSON straight into key:value fields, which makes later searching in Kibana much more efficient.
Configure nginx JSON logging as follows:
http://nginx.org/en/docs/http/ngx_http_log_module.html documents nginx's log module
In particular:
Syntax: log_format name [escape=default|json|none] string ...;
Default: log_format combined "...";
Context: http

We only need to add, inside the http block of nginx:
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
log_format json '{"@timestamp":"$time_iso8601",'
                '"@version":"1",'
                '"url":"$uri",'
                '"status":"$status",'
                '"domain":"$host",'
                '"host":"$server_addr",'
                '"size":$body_bytes_sent,'
                '"responsetime":$request_time,'
                '"referer": "$http_referer",'
                '"ua": "$http_user_agent"'
                '}';
access_log /var/log/nginx/access_json.log json;

access_log /var/log/nginx/access.log main;

Start nginx, generate some traffic, and confirm the log lines are JSON.
Then write a json.conf file:
vim /etc/logstash/conf.d/json.conf
input {
  file {
    path => "/var/log/nginx/access_json.log"
    codec => json
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

Running it produces output like this:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/json.conf
[INFO ] 2018-07-01 22:22:36.797 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2018-07-01 22:22:37.539 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
{
  "domain" => "10.211.55.8",
  "@version" => "1",
  "host" => "10.211.55.8",
  "responsetime" => 0.0,
  "@timestamp" => 2018-07-01T14:23:24.000Z,
  "size" => 0,
  "status" => "304",
  "path" => "/var/log/nginx/access_json.log",
  "ua" => "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36",
  "url" => "/index.html",
  "referer" => "-"
}
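Because each access-log line is already a self-describing JSON object, the json codec only has to deserialize it to produce these fields. A small Python sketch of that per-line step (the sample line below is hypothetical, shaped like the log_format above):

```python
import json

# one raw access-log line, as nginx's json log_format would emit it
sample = ('{"@timestamp":"2018-07-01T22:23:24+08:00","@version":"1",'
          '"url":"/index.html","status":"304","domain":"10.211.55.8",'
          '"host":"10.211.55.8","size":0,"responsetime":0.000,'
          '"referer":"-","ua":"curl/7.29.0"}')

event = json.loads(sample)            # one log line -> one structured event
print(event["status"], event["url"])  # 304 /index.html
```

No grok parsing is needed: the field names and value types (note size and responsetime are unquoted numbers) come straight from the log format.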

Now we can add this to all.conf:
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }

  file {
    path => "/var/log/nginx/access_json.log"
    type => "nginx-log"
    start_position => "beginning"
    codec => json
  }

  file {
    path => "/var/log/elasticsearch/oldgirl.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }
}

output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"
    }
  }
}
A new index then shows up in elasticsearch-head.
Add the new index pattern in Kibana, and you can start querying it.

Collecting the messages log
We collected the messages log earlier, but with the file plugin.
The system log is produced by syslog, and syslog can write to a remote destination,
so the better approach is to have Logstash listen on a port and let syslog write straight to it.
Ideally, every production service would log via syslog; then there is no need to install Logstash on every machine to scrape files — a single Logstash listening endpoint is enough.
nginx can also log to syslog (natively in recent versions; older builds needed Taobao's patched nginx or an nginx-lua module).

The input plugin list includes syslog:

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html

vim /etc/logstash/conf.d/syslog.conf
input {
  syslog {
    type => "system-syslog"
    host => "10.211.55.8"
    port => "514"
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
}
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf

After starting it, confirm that port 514 is open.

Next, edit the system's rsyslog configuration:

vim /etc/rsyslog.conf
Find the line
#*.* @@remote-host:514
uncomment it, and change it to:
*.* @@10.211.55.8:514
Then restart rsyslog:
systemctl restart rsyslog
Right after the restart you will see log events:
{
  "pid" => "20915",
  "severity" => 5,
  "logsource" => "node2",
  "facility_label" => "security/authorization",
  "timestamp" => "Jul 2 20:56:43",
  "type" => "system-syslog",
  "program" => "polkitd",
  "@timestamp" => 2018-07-02T12:56:43.000Z,
  "facility" => 10,
  "host" => "10.211.55.8",
  "@version" => "1",
  "message" => "Unregistered Authentication Agent for unix-process:1927:9050003 (system bus name :1.1149, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale zh_CN.UTF-8) (disconnected from bus)\n",
  "priority" => 85,
  "severity_label" => "Notice"
}
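The priority, facility, and severity fields above are all derived from the single PRI number at the front of each syslog packet: priority = facility * 8 + severity (RFC 3164). A quick Python check against the event shown, where 85 decodes to facility 10 (security/authorization) and severity 5 (Notice):

```python
def decode_pri(pri):
    # RFC 3164: PRI = facility * 8 + severity
    facility, severity = divmod(pri, 8)
    return facility, severity

print(decode_pri(85))  # (10, 5)
```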

Then we can merge the syslog input from syslog.conf into the all.conf configuration file:
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }

  file {
    path => "/var/log/nginx/access_json.log"
    type => "nginx-log"
    start_position => "beginning"
    codec => json
  }

  file {
    path => "/var/log/elasticsearch/oldgirl.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => "true"
      what => "previous"
    }
  }

  syslog {
    type => "system-syslog"
    host => "10.211.55.8"
    port => "514"
  }
}

output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "system-syslog" {
    elasticsearch {
      hosts => ["10.211.55.8:9200"]
      index => "system-syslog-%{+YYYY.MM.dd}"
    }
  }
}

Start it, then generate test messages with:
logger "hallo 1"
logger "hallo 1"
logger "hallo 1"
logger "hallo 1"
logger "hallo 1"
logger "hallo 1"

The configuration above can serve as a production template.
Another common Logstash plugin is the tcp input.
The syslog input listens for syslog traffic; if an application does not want to write its logs to a file, Logstash can simply listen on a TCP port
and the program writes its logs straight to that port.
It looks like this:
vim tcp.conf
input {
  tcp {
    host => "10.211.55.8"
    port => "6666"
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
}

Start it: /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf

Then test with nc:

nc 10.211.55.8 6666 < /etc/resolv.conf

{
  "host" => "node2.shared",
  "message" => "# Generated by NetworkManager",
  "@timestamp" => 2018-07-02T13:20:27.921Z,
  "port" => 44257,
  "@version" => "1"
}
{
  "host" => "node2.shared",
  "message" => "search localdomain shared",
  "@timestamp" => 2018-07-02T13:20:27.943Z,
  "port" => 44257,
  "@version" => "1"
}
{
  "host" => "node2.shared",
  "message" => "nameserver 10.211.55.1",
  "@timestamp" => 2018-07-02T13:20:27.944Z,
  "port" => 44257,
  "@version" => "1"
}

echo "hehe" | nc 10.211.55.8 6666

{
  "host" => "node2.shared",
  "message" => "hehe",
  "@timestamp" => 2018-07-02T13:21:39.242Z,
  "port" => 44259,
  "@version" => "1"
}

echo "oldgirl" > /dev/tcp/10.211.55.8/6666

{
  "host" => "node2.shared",
  "message" => "oldgirl",
  "@timestamp" => 2018-07-02T13:23:23.936Z,
  "port" => 44260,
  "@version" => "1"
}
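The tcp input itself is little more than a listening socket that wraps every received line in an event. A self-contained Python sketch of that behavior (localhost only, OS-assigned port; an illustration, not Logstash's actual implementation):

```python
import socket
import threading
from datetime import datetime, timezone

def tcp_input(server_sock, events):
    # Accept one connection and turn each received line into an event dict,
    # mirroring the fields the tcp input adds (message, host, port, ...).
    conn, addr = server_sock.accept()
    with conn, conn.makefile() as lines:
        for line in lines:
            events.append({
                "message": line.rstrip("\n"),
                "host": addr[0],
                "port": addr[1],
                "@timestamp": datetime.now(timezone.utc).isoformat(),
                "@version": "1",
            })

events = []
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # any free port
srv.listen(1)
t = threading.Thread(target=tcp_input, args=(srv, events))
t.start()

# The "nc" side: connect and send one line, as in `echo "oldgirl" | nc host port`
with socket.create_connection(srv.getsockname()) as client:
    client.sendall(b"oldgirl\n")
t.join()
srv.close()
print(events[0]["message"])  # oldgirl
```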
