The ELK Stack is a combination of technologies: Elasticsearch, Logstash, and Kibana.
ELK Stack:
1. Scalability: a highly scalable, distributed architecture that can support TB-scale data per day
2. Ease of use: log data can be aggregated and visualized in many ways through a graphical UI
3. High query efficiency: data collection, processing, and search within seconds
https://www.elastic.co/cn/products/elasticsearch
https://www.elastic.co/cn/products/kibana
https://www.elastic.co/cn/products/beats/filebeat
https://www.elastic.co/cn/products/beats/metricbeat
Logstash: an open-source, server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then stores it in a database.
Elasticsearch: searches, analyzes, and stores data.
Kibana: data visualization.
Beats: a platform of lightweight data shippers that send data from edge machines to Logstash and Elasticsearch.
Filebeat: a lightweight log shipper.
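As a rough sketch of how Filebeat plugs into this stack (the log path, the hostname node1, and port 5044 below are illustrative assumptions, not values from this article), a minimal Filebeat 5.x-style configuration that tails a log file and ships it to Logstash could look like this:

cat > /etc/filebeat/filebeat.yml << 'EOF'
# Hypothetical example: tail the system log and ship events to a Logstash beats input
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/messages
output.logstash:
  # Address of the Logstash server; 5044 is the conventional beats port
  hosts: ["node1:5044"]
EOF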
https://www.elastic.co/subscriptions
A Logstash pipeline consists of three stages:
Input: where data enters; input sources can be Stdin, File, TCP, Redis, Syslog, etc.
Filter: formats and enriches the logs; a rich set of filter plugins is available: Grok regex capture, Date timestamp parsing, JSON encode/decode, Mutate field modification, etc.
Output: where data is sent; output targets can be Stdout, File, TCP, Redis, ES, etc.
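To make the three stages concrete, here is a minimal sketch of a Logstash pipeline configuration (the beats port, the Grok pattern, and the index name are illustrative assumptions): it receives events from Beats, parses them with Grok and Date, and writes them to Elasticsearch.

cat > /etc/logstash/conf.d/beats-to-es.conf << 'EOF'
input {
  # Receive events shipped by Filebeat/Beats
  beats {
    port => 5044
  }
}
filter {
  # Parse Apache-style access logs with a built-in Grok pattern
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Use the parsed timestamp as the event @timestamp
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  # Write to Elasticsearch, one index per day
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
EOF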
Node: a server running a single ES instance
Cluster: one or more nodes forming a cluster
Index: an index is a collection of documents
Document: each record in an Index is called a Document; a number of documents make up an Index
Type: an Index can define one or more types to logically group its Documents
Field: the smallest unit of storage in ES
Shards: ES splits an Index into several pieces; each piece is a shard
Replicas: one or more copies of an Index (see the example after the comparison table below)
ES       | Relational database (e.g. MySQL)
-------- | --------------------------------
Index    | Database
Type     | Table
Document | Row
Field    | Column
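For example (a sketch only; the index name and settings below are made up for illustration), the number of shards and replicas can be specified when an index is created:

# Create a hypothetical index "test-index" with 5 primary shards and 1 replica each
curl -H 'Content-Type: application/json' -X PUT 'http://127.0.0.1:9200/test-index' -d '
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}'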
First, complete the system initialization and install the JDK:
#1) System initialization on each Servers
cat >> /etc/security/limits.conf << EOF
* hard memlock unlimited
* soft memlock unlimited
* - nofile 65535
EOF

cat > /etc/sysctl.conf << EOF
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
vm.swappiness = 0
vm.max_map_count=262144
vm.dirty_ratio=10
vm.dirty_background_ratio=5
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.ip_local_port_range = 10000 65000
EOF
sysctl -p

setenforce 0
sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config
systemctl stop firewalld.service
systemctl disable firewalld.service

#2) Install JDK on each Servers
wget -c http://download.cashalo.com/schema/auto_jdk.sh
source auto_jdk.sh
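A few optional sanity checks can confirm these settings took effect before Elasticsearch is installed (run them after logging in again so the new limits apply to your shell):

sysctl vm.max_map_count      # should print 262144
ulimit -l                    # memlock limit, should print "unlimited"
getenforce                   # should print Permissive or Disabled
java -version                # confirms the JDK installed by auto_jdk.sh is on the PATH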
Next, run the script below. In my setup the ES cluster consists of three servers; the script prompts interactively for the actual server IP addresses, so run it on every node.
#!/bin/bash
IP=`ifconfig|sed -n 2p|awk '{print $2}'|cut -d ":" -f2`

if [ `whoami` != root ]
then
    echo "Please login as root to continue :)"
    exit 1
fi

if [ ! -d /home/tools/ ];then
    mkdir -p /home/tools
else
    rm -rf /home/tools && mkdir -p /home/tools
fi

yum install perl-Digest-SHA -y && cd /home/tools
#wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.1.rpm
#wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.1.rpm.sha512
#shasum -a 512 -c elasticsearch-6.8.1.rpm.sha512
#sudo rpm --install elasticsearch-6.8.1.rpm

#3) Download elasticsearch-5.6.10 on each servers
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.10.rpm
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.10.rpm.sha512
shasum -a 512 -c elasticsearch-5.6.10.rpm.sha512
sudo rpm --install elasticsearch-5.6.10.rpm

#4) Modify elasticsearch.yml File
#Note: network.host means your IP address
cat >/etc/elasticsearch/elasticsearch.yml<<EOF
cluster.name: graylog
node.name: localhost
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: $IP
http.port: 9200
discovery.zen.ping.unicast.hosts: ["node1", "node2", "node3"]
discovery.zen.minimum_master_nodes: 2
EOF

read -p "pls input nodename: " Name
sed -i "s/localhost/$Name/g" /etc/elasticsearch/elasticsearch.yml
echo -e "\033[33m your nodename is $Name \033[0m"

read -p "pls input node1 ip address: " ip1
sed -i "s/node1/$ip1/g" /etc/elasticsearch/elasticsearch.yml
echo -e "\033[33m your node1 ip address is $ip1 \033[0m"
echo -e "###############################################"
echo -e "###############################################"

read -p "pls input node2 ip address: " ip2
sed -i "s/node2/$ip2/g" /etc/elasticsearch/elasticsearch.yml
echo -e "\033[33m your node2 ip address is $ip2 \033[0m"
echo -e "###############################################"
echo -e "###############################################"

read -p "pls input node3 ip address: " ip3
sed -i "s/node3/$ip3/g" /etc/elasticsearch/elasticsearch.yml
echo -e "\033[33m your node3 ip address is $ip3 \033[0m"
echo -e "###############################################"
echo -e "###############################################"

#5) Create elasticsearch Directory on each Servers
mkdir -p /data/elasticsearch/
chown -R elasticsearch.elasticsearch /data/elasticsearch/

#6) Start elasticsearch on each Servers
service elasticsearch restart && chkconfig elasticsearch on

#7) Check your elasticsearch Service
curl -X GET "http://127.0.0.1:9200/_cat/health?v"
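Once the script has finished on all three servers, a quick way to confirm that the nodes have formed one cluster (assuming the default HTTP port 9200 used above) is:

# All three nodes should be listed
curl -X GET "http://127.0.0.1:9200/_cat/nodes?v"
# Cluster status should turn "green" once all shards and replicas are allocated
curl -X GET "http://127.0.0.1:9200/_cluster/health?pretty"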
3.3 數據操做
RESTful API format:
curl -X<verb> '<protocol>://<host>:<port>/<path>?<query_string>' -d '<body>'
Parameter    | Description
------------ | -----------
verb         | HTTP method, e.g. GET, POST, PUT, HEAD, DELETE
host         | Hostname of any node in the ES cluster
port         | Port of the ES HTTP service, 9200 by default
path         | Index path
query_string | Optional query-string parameters, e.g. ?pretty returns pretty-printed JSON
-d           | Flag that carries a JSON-formatted request body (e.g. for a GET query)
body         | The JSON-formatted request body you write yourself
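Putting these pieces together, a hypothetical request that combines a path, a query string, and a -d JSON body might look like this (the index name "myindex", type "mytype", and the document fields are made up for illustration):

# Index a document with id 1 into index "myindex", type "mytype"
curl -H 'Content-Type: application/json' -X PUT 'http://127.0.0.1:9200/myindex/mytype/1?pretty' -d '
{
  "host": "node1",
  "message": "hello elk"
}'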
View indices:
curl http://127.0.0.1:9200/_cat/indices?v
Create an index:
curl -X PUT 127.0.0.1:9200/logs-2018.05.22
Delete an index:
curl -X DELETE 127.0.0.1:9200/logs-2018.05.22
ES provides a JSON-style query language called the Query DSL.
Use the sample data from the official documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/_exploring_your_data.html
wget https://raw.githubusercontent.com/elastic/elasticsearch/master/docs/src/test/resources/accounts.json
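Assuming accounts.json has been downloaded into the current directory and ES is reachable at 127.0.0.1:9200, the sample data can be bulk-loaded and then queried with the Query DSL (the index name "bank" follows the official tutorial):

# Bulk-load the sample accounts into an index named "bank"
curl -H 'Content-Type: application/json' -X POST 'http://127.0.0.1:9200/bank/account/_bulk?pretty&refresh' --data-binary @accounts.json

# Query DSL: match all documents, return the first 3 sorted by account_number
curl -H 'Content-Type: application/json' -X GET 'http://127.0.0.1:9200/bank/_search?pretty' -d '
{
  "query": { "match_all": {} },
  "sort": [ { "account_number": "asc" } ],
  "size": 3
}'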