ELK is an open-source product suite; its official website is https://www.elastic.co/
ELK mainly comprises three products: Elasticsearch, Logstash, and Kibana.
This article covers the installation and configuration of the three ELK components, and shows how to use ELK to monitor NSG logs in Azure China. The topology is as follows:
On the far left, Network Watcher is enabled in Azure China, and the various NSG logs are sent to an Azure Storage account.
In the middle are the ELK components, including the Logstash instance mentioned above with the Azure Blob input plugin installed. Logstash fetches the NSG log files from the specified Azure Storage account and sends them, in a defined format, to a cluster of two Elasticsearch nodes. Kibana, on the right, reads the data from Elasticsearch and presents it graphically.
I. Environment Preparation
1 Install the Java environment
This environment uses the 1.8.0 JDK:
yum install -y java-1.8.0-openjdk-devel
2 Update the hosts file
echo "10.1.4.4 node1" >> /etc/hosts echo "10.1.5.4 node2" >> /etc/hosts
3 Adjust iptables and SELinux
iptables -F
setenforce 0
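Note that setenforce 0 only puts SELinux into permissive mode until the next reboot. A minimal sketch of making this persist, assuming the stock /etc/selinux/config layout:

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config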
II. Elasticsearch Installation and Configuration
1 Add the YUM repository
Import the GPG key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create the repository file:
vim /etc/yum.repos.d/elasticsearch.repo

[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
2 Install Elasticsearch
yum install elasticsearch -y
systemctl enable elasticsearch
3 Configure Elasticsearch
Edit the configuration file:
vim /etc/elasticsearch/elasticsearch.yml
cat ./elasticsearch.yml | grep -v "#"

cluster.name: es-cluster
node.name: node1
path.data: /var/lib/elasticsearch
path.logs: /var/lib/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2"]
discovery.zen.minimum_master_nodes: 2
On node2, set node.name to node2.
4 Start Elasticsearch
systemctl start elasticsearch
systemctl status elasticsearch
The service should show as running.
Use netstat -tunlp to check that the Elasticsearch ports 9200 and 9300 are listening:
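For example, filtering the output for the two ports:

netstat -tunlp | grep -E '9200|9300'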
Query node information with the following commands:
curl -XGET 'http://10.1.4.4:9200/?pretty'
curl -XGET 'http://10.1.4.4:9200/_cat/nodes?v'
The node marked with an asterisk (*) is the master node.
Of course, the same information can also be viewed in a browser:
The logs are written to /var/lib/elasticsearch, as defined in the configuration file; check them to verify that Elasticsearch started normally.
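The cluster state can also be confirmed with the standard health API; a green status with number_of_nodes: 2 means both nodes have joined:

curl -XGET 'http://10.1.4.4:9200/_cluster/health?pretty'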
III. Logstash Installation
Logstash is the most involved component of the whole ELK installation. The installation and configuration steps are as follows:
1 Install Logstash
Logstash can be installed on a single node or on several nodes. In this article it is installed on node1:
yum install -y logstash
systemctl enable logstash
ln -s /usr/share/logstash/bin/logstash /usr/bin/logstash
2 Configure Logstash
vim /etc/logstash/logstash.yml
cat ./logstash.yml | grep -v "#"

path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d
path.logs: /var/log/logstash
3 Fix permissions on the Logstash files
After installation you will find that some of the Logstash files have incorrect permissions; the relevant files and directories need to be set up correctly:
chown logstash:logstash /var/log/logstash/ -R
chmod 777 /var/log/messages
mkdir -p /usr/share/logstash/config/
ln -s /etc/logstash/* /usr/share/logstash/config
chown -R logstash:logstash /usr/share/logstash/config/
mkdir -p /var/lib/logstash/queue
chown -R logstash:logstash /var/lib/logstash/queue
4 Configure the pipeline
Logstash training and documentation are available on the Elastic website. In short, a Logstash pipeline consists of input, filter, and output sections, of which input and output are mandatory.
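As a minimal sketch of the three sections (the mutate filter here is illustrative, not part of the original example):

input  { stdin { } }
filter { mutate { add_field => { "note" => "example" } } }
output { stdout { codec => rubydebug } }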
Take the official tutorial at https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html as an example:
/usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
An example that uses Filebeat as the input component:
Install Filebeat:
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.2.1-x86_64.rpm
rpm -vi filebeat-6.2.1-x86_64.rpm
Download the demo file:
wget https://download.elastic.co/demos/logstash/gettingstarted/logstash-tutorial.log.gz
gzip -d logstash-tutorial.log.gz
Configure Filebeat:
vim /etc/filebeat/filebeat.yml
cat /etc/filebeat/filebeat.yml | grep -v "#"

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/logstash-tutorial.log
output.logstash:
  hosts: ["localhost:5044"]
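The configuration can be checked before starting (the test subcommand is available in Filebeat 6.x):

filebeat test config -c /etc/filebeat/filebeat.yml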
Configure the pipeline file:
vim /etc/logstash/conf.d/logstash.conf

input {
    beats {
        port => "5044"
    }
}
output {
    file {
        path => "/var/log/logstash/output.out"
    }
    stdout { codec => rubydebug }
}
Test the configuration:
cd /etc/logstash/conf.d
logstash -f logstash.conf --config.test_and_exit
logstash -f logstash.conf --config.reload.automatic
netstat -tunlp now shows that port 5044 is open, waiting for Filebeat input.
Run Filebeat:
filebeat -e -c /etc/filebeat/filebeat.yml -d "publish"
Note that before running it a second time, the registry file must be deleted:
cd /var/lib/filebeat/
rm registry
At this point log entries appear on the Logstash console and are also written to the file defined above. The format looks like this:
{ "@timestamp" => 2018-02-10T02:37:47.166Z, "offset" => 24248, "@version" => "1", "beat" => { "name" => "udrtest01", "hostname" => "udrtest01", "version" => "6.2.1" }, "host" => "udrtest01", "prospector" => { "type" => "log" }, "source" => "/var/log/logstash-tutorial.log", "message" => "86.1.76.62 - - [04/Jan/2015:05:30:37 +0000] \"GET /reset.css HTTP/1.1\" 200 1015 \"http://www.semicomplete.com/projects/xdotool/\" \"Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20140205 Firefox/24.0 Iceweasel/24.3.0\"", "tags" => [ [0] "beats_input_codec_plain_applied" ] }
Change the input section to also read from a file:
input {
    file {
        path => "/var/log/messages"
    }
    beats {
        port => "5044"
    }
}
Newly added log lines are output to the Logstash console and recorded in the output.out file at the same time.
IV. Kibana Installation and Configuration
1 Install Kibana
yum install kibana -y
systemctl enable kibana
2 Configure Kibana
vim /etc/kibana/kibana.yml
cat /etc/kibana/kibana.yml | grep -v "#"

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://10.1.5.4:9200"
Start Kibana:
systemctl start kibana
3 View Kibana
Query the VM's public IP address through the VM's instance metadata:
curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01"
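The public IP can also be pulled out directly from the IMDS tree, assuming the interface and ipAddress indexes below match your VM:

curl -H Metadata:true "http://169.254.169.254/metadata/instance/network/interface/0/ipAddress/0/publicIpAddress?api-version=2017-08-01&format=text"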
After finding the public IP address, browse to Kibana:
Create an index pattern; the relevant logs can then be seen in Discover:
In Kibana's Dev Tools you can query and delete the data:
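For example, to list indices, query one, and delete it (the index name here is illustrative):

GET _cat/indices?v
GET nsg-flow-logs/_search
DELETE nsg-flow-logs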
V. Using Azure Blob as a Logstash Input to View NSG Logs
1 Install the Azure Blob plugin for Logstash
For details, see:
https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-input-azureblob
The install command is:
logstash-plugin install logstash-input-azureblob
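The installation can be verified by listing the installed plugins:

logstash-plugin list | grep azureblob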
2 Configuration
Fill in the relevant information according to the documentation linked above.
Note the endpoint field, which must point to the Azure China endpoint:
vim /etc/logstash/conf.d/nsg.conf

input {
    azureblob {
        storage_account_name => "xxxx"
        storage_access_key => "xxxx"
        container => "insights-logs-networksecuritygroupflowevent"
        endpoint => "core.chinacloudapi.cn"
        codec => "json"
        file_head_bytes => 12
        file_tail_bytes => 2
    }
}
filter {
    split { field => "[records]" }
    split { field => "[records][properties][flows]" }
    split { field => "[records][properties][flows][flows]" }
    split { field => "[records][properties][flows][flows][flowTuples]" }
    mutate {
        split => { "[records][resourceId]" => "/" }
        add_field => {
            "Subscription" => "%{[records][resourceId][2]}"
            "ResourceGroup" => "%{[records][resourceId][4]}"
            "NetworkSecurityGroup" => "%{[records][resourceId][8]}"
        }
        convert => { "Subscription" => "string" }
        convert => { "ResourceGroup" => "string" }
        convert => { "NetworkSecurityGroup" => "string" }
        split => { "[records][properties][flows][flows][flowTuples]" => "," }
        add_field => {
            "unixtimestamp" => "%{[records][properties][flows][flows][flowTuples][0]}"
            "srcIp" => "%{[records][properties][flows][flows][flowTuples][1]}"
            "destIp" => "%{[records][properties][flows][flows][flowTuples][2]}"
            "srcPort" => "%{[records][properties][flows][flows][flowTuples][3]}"
            "destPort" => "%{[records][properties][flows][flows][flowTuples][4]}"
            "protocol" => "%{[records][properties][flows][flows][flowTuples][5]}"
            "trafficflow" => "%{[records][properties][flows][flows][flowTuples][6]}"
            "traffic" => "%{[records][properties][flows][flows][flowTuples][7]}"
        }
        convert => { "unixtimestamp" => "integer" }
        convert => { "srcPort" => "integer" }
        convert => { "destPort" => "integer" }
    }
    date {
        match => ["unixtimestamp", "UNIX"]
    }
}
output {
    elasticsearch {
        hosts => "localhost"
        index => "nsg-flow-logs"
    }
}
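Each flowTuple is a comma-separated string whose eight fields are mapped above to unixtimestamp, srcIp, destIp, srcPort, destPort, protocol, trafficflow, and traffic. An illustrative tuple (values made up, per the NSG flow log v1 format, where T is TCP, O is outbound, and A is allowed):

1487036400,10.0.0.4,13.67.143.118,44931,14170,T,O,A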
The data then shows up in Kibana:
VI. Using Azure Blob as a Logstash Input to View WAF Logs
Similarly, send the WAF logs to Azure Storage with this command:
Set-AzureRmDiagnosticSetting `
    -ResourceId /subscriptions/<subscriptionId>/resourceGroups/<resource group name>/providers/Microsoft.Network/applicationGateways/<application gateway name> `
    -StorageAccountId /subscriptions/<subscriptionId>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/<storage account name> `
    -Enabled $true
Once the storage account has logs, configure Logstash to read them in, send them to Elasticsearch, and display them in Kibana.
The configuration for the access log is:
input {
    azureblob {
        storage_account_name => "xxxxx"
        storage_access_key => "xxxxxx"
        container => "insights-logs-applicationgatewayaccesslog"
        endpoint => "core.chinacloudapi.cn"
        codec => "json"
    }
}
filter {
    date {
        match => ["unixtimestamp", "UNIX"]
    }
}
output {
    elasticsearch {
        hosts => "localhost"
        index => "waf-access-logs"
    }
}
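Assuming this is saved as /etc/logstash/conf.d/waf.conf (the file name is not given above), it can be validated and run the same way as the NSG pipeline:

logstash -f /etc/logstash/conf.d/waf.conf --config.test_and_exit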
The related information can then be seen:
VII. Summary
With the ELK stack, logs from the relevant Azure services can be analyzed graphically.