1. Set up 8 virtual machines: es[1:5], kibana, logstash, web, with IPs 192.168.1.61-68
2. Configure the IP address and hostname on each machine
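The per-machine step above follows a fixed pattern, so it can be scripted. A minimal dry-run sketch that only prints the hostnamectl/nmcli commands for each node (the connection name "eth0" and the /24 prefix are assumptions, not from the original notes):

```shell
# Print (not run) the setup commands for each node; "eth0" and /24 are assumptions
gen_cmds() {
  local i=61
  for host in es1 es2 es3 es4 es5 kibana logstash web; do
    echo "hostnamectl set-hostname $host"
    echo "nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.1.$i/24"
    i=$((i + 1))
  done
}
gen_cmds
```

Pipe the output to `sh` on each node (or feed it to ansible) once the names and interface match your environment.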
3. Deploy elasticsearch with ansible so that its web interface is reachable. The ansible playbook (yml) is as follows:
---
- hosts: es
  remote_user: root
  tasks:
    - copy:
        src: local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: root
        group: root
        mode: 0644
    - name: install elasticsearch
      yum:
        name: java-1.8.0-openjdk,elasticsearch
        state: installed
    - template:
        src: elasticsearch.yml
        dest: /etc/elasticsearch/elasticsearch.yml
        owner: root
        group: root
        mode: 0644
      notify: reload elasticsearch
      tags: esconf
    - service:
        name: elasticsearch
        enabled: yes
        state: started
  handlers:
    - name: reload elasticsearch
      service:
        name: elasticsearch
        state: restarted
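The playbook targets a host group named es, so the ansible inventory needs a matching group; a fragment like the following would do (the inventory path and layout are assumptions):

```ini
# /etc/ansible/hosts (sketch) — group name must match "hosts: es" in the playbook
[es]
es[1:5]
```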
4. The /etc/hosts file to edit is as follows:
192.168.1.61 es1
192.168.1.62 es2
192.168.1.63 es3
192.168.1.64 es4
192.168.1.65 es5
192.168.1.66 kibana
192.168.1.67 logstash
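Since the entries above follow a simple IP/name pattern, they can be generated instead of typed by hand; a small sketch:

```shell
# Generate the /etc/hosts entries for 192.168.1.61-67 (append to /etc/hosts on every machine)
names=(es1 es2 es3 es4 es5 kibana logstash)
gen_hosts() {
  local idx
  for idx in "${!names[@]}"; do
    printf '192.168.1.%d %s\n' $((61 + idx)) "${names[$idx]}"
  done
}
gen_hosts
```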
5. The yum repository just needs to provide the required packages and their dependencies.
6. The elasticsearch.yml to edit is as follows:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: nsd1810
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: {{ ansible_hostname }}
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["es1", "es2", "es3"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true
7. The elasticsearch cluster is now up; verify it by visiting http://192.168.1.61:9200 in a browser.
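Besides opening the URL in a browser, the node can be checked from the command line. The sketch below greps a hand-written sample of the kind of JSON the root endpoint returns (the values shown are assumed stand-ins, not captured output); against the live cluster you would run `curl http://192.168.1.61:9200` and `curl http://192.168.1.61:9200/_cluster/health?pretty` instead:

```shell
# Hand-written sample response body (values assumed, not live output)
resp='{"name":"es1","cluster_name":"nsd1810","tagline":"You Know, for Search"}'
# Sanity check: does cluster_name match what elasticsearch.yml sets?
echo "$resp" | grep -o '"cluster_name":"[^"]*"'
```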
8. Deploying plugins
A plugin can only be used on the machine it is installed on (here the plugins are installed on es5).
1) Plugins can be installed directly from a remote URI:
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-head-master.zip    // install the head plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/elasticsearch-kopf-master.zip    // install the kopf plugin
[root@es5 bin]# ./plugin install \
ftp://192.168.1.254/elk/bigdesk-master.zip               // install the bigdesk plugin
[root@es5 bin]# ./plugin list                            // list the installed plugins
Installed plugins in /usr/share/elasticsearch/plugins:
2) Accessing the head plugin
3) Accessing the kopf plugin
4) Accessing the bigdesk plugin
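In the Elasticsearch 2.x line (which matches the ./plugin tool used above), site plugins are served by the node they were installed on under /_plugin/<name>; assuming that layout, the URLs for the three sections above would be on es5 (192.168.1.65):

```shell
# Print the assumed plugin URLs; es5 is 192.168.1.65 per the hosts file above
for p in head kopf bigdesk; do
  echo "http://192.168.1.65:9200/_plugin/$p"
done
```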
9. Installing kibana
1) On another host, set the IP to 192.168.1.66, configure the yum repository, and change the hostname
2) Install kibana
3) Access kibana in a browser
4) Click Status to confirm the installation: if every item shows a green check mark, kibana is working
5) Viewing the cluster through the head plugin now shows a .kibana index, as shown in the figure:
(figure: the .kibana index as seen in the head plugin)
If there is too little data, the pie chart cannot be drawn; refresh the page several times to generate more requests, otherwise the chart may stay empty.
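For step 2) above to work, kibana.yml must point at the elasticsearch cluster. A minimal fragment, assuming the Kibana 4.x key names that pair with this Elasticsearch 2.x setup (the file path is also an assumption):

```yaml
# /opt/kibana/config/kibana.yml (path assumed) — minimal settings
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.1.61:9200"
```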
The logstash.conf file above parses httpd access logs. For nginx or any other service you need to call its own grok patterns instead, selecting them with if conditionals; the following code offers some ideas:
input {
  stdin { codec => "json" }
  beats {
    port => 5044
  }
  file {
    path => ["/tmp/a.log", "/tmp/b.log"]
    sincedb_path => "/var/lib/logstash/sincedb"
    start_position => "beginning"
    type => "testlog"
  }
  tcp {
    host => "0.0.0.0"
    port => "8888"
    type => "tcplog"
  }
  udp {
    host => "0.0.0.0"
    port => "8888"
    type => "udplog"
  }
  syslog {
    type => "syslog"
  }
}
filter {
  if [type] == "httplog" {
    grok {
      match => ["message", "%{COMBINEDAPACHELOG}"]
    }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
  if [type] == "httplog" {
    elasticsearch {
      hosts => ["es1", "es2", "es3"]
      index => "weblog"
      flush_size => 2000
      idle_flush_time => 10
    }
  }
}
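To see what %{COMBINEDAPACHELOG} is matching, it helps to look at a sample combined-format line. The sketch below pulls the status code out of a hand-written sample line with awk (field 9 in the combined format); grok does the same job but with named captures such as response and clientip:

```shell
# Hand-written sample line in Apache "combined" log format (values assumed)
line='192.168.1.100 - - [10/Jan/2019:12:00:00 +0800] "GET / HTTP/1.1" 200 42 "-" "curl/7.29.0"'
# Field 9 of the combined format is the HTTP status code
echo "$line" | awk '{print $9}'
```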
The overall ELK flow: clients access the web server; filebeat on the web server ships the log data to logstash; logstash parses it and forwards it to elasticsearch for indexing and storage; kibana then pulls the data from elasticsearch and presents it as web pages and charts.