Using a search engine may feel simple and convenient: you just type the keywords you want into the search box and the results appear. Behind that simple operation, however, lies the search engine's complex logic and the coordinated work of many components.
The components of a search engine generally fall into two broad categories: the indexing component and the search component. Before anything can be searched, the engine must consolidate all searchable data and build an index (an inverted index), turning the data into a searchable format and storing it; this is the indexing component. The component that takes a user's query and retrieves the desired results from the index built by the indexing component is the search component.
ElasticSearch is one such search component, and it is a distributed search server; when building an ElasticSearch cluster it is best to have at least three servers, because its data is stored in shards. Lucene is an open-source Apache project, a search-engine library written entirely in Java. ElasticSearch uses Lucene as its internal library for building search indexes, so ElasticSearch integrates both core components of a search engine. These two components alone can build indexes and serve searches, but they are not enough for a complete search engine.
For a cluster log-analysis platform, you also need to collect log data from a large number of application services and split, store, and analyze it in the required format; this is where the Logstash and Filebeat components come in.
Filebeat is a very lightweight log shipper. Its built-in modules (auditd, Apache, NGINX, System, MySQL, and others) provide one-step collection, parsing, and visualization of common log formats. Logstash is an open-source server-side data-processing pipeline that can ingest data from multiple sources at the same time, transform it, and send it on to a specified destination.
Once all of that is in place, a search engine still needs a friendly user interface so that users can search without any special knowledge and see the results presented in intuitive ways. This is where the Kibana component comes in. Kibana can visualize ElasticSearch data in rich and varied forms.
Apart from the Lucene library, all of the components mentioned above belong to the Elastic Stack family of products, and many enterprises build clusters from these components to analyze and process large volumes of log data. More components can be found on the Elastic official website.
The examples in this article use the architecture shown below (Figure 1):
The working logic of the architecture above: Kibana takes the search results provided by the ElasticSearch cluster and presents them to users in multiple visual forms; the ElasticSearch cluster, with its embedded Lucene, analyzes all collected data, builds the indexes, and serves searches; the data itself is collected from the Nginx logs by Filebeat and Logstash, with Logstash filtering the data coming from Filebeat and sending it on to the ElasticSearch cluster.
Once the cluster reaches a certain scale, a large number of back-end applications shipping data through Filebeat to Logstash will turn the Logstash server into a performance bottleneck: Logstash runs on the JVM, consumes a lot of memory, and its performance degrades sharply as the data volume grows. You can therefore place Redis between Filebeat and Logstash, using Redis purely as a queue so that the data collected by Filebeat flows smoothly into Logstash, as shown in Figure 2.
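For reference, wiring Redis in between only takes a small change on each side. The snippet below is just a sketch under assumptions that are not part of this article's setup: Redis is assumed to run on a host named redis1 on the default port 6379, and the list key name filebeat is arbitrary.

# Filebeat (filebeat.yml): push events into a Redis list instead of sending them straight to Logstash
output.redis:
  hosts: ["redis1:6379"]
  key: "filebeat"          # Redis list used as the queue

# Logstash pipeline: pull events back out of the same Redis list
input {
  redis {
    host => "redis1"
    port => 6379
    data_type => "list"
    key => "filebeat"
  }
}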
Linux version: CentOS 7.2
ElasticSearch: 5.5.1
Let's first build the cluster using the Figure 1 architecture and bring Redis into the demonstration afterwards; until a cluster grows quite large, adding Redis will not bring any substantial performance improvement.
Because ElasticSearch is written in Java, it needs a JDK at runtime, so a JDK must be installed on every node of the ElasticSearch cluster. Install ElasticSearch and the JDK on nodes n2 through n4:
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
Download ElasticSearch from the official website. I am installing version 5.5.1 here, using the rpm package downloaded from the site:
rpm -ivh elasticsearch-5.5.1.rpm
ElasticSearch 5 file layout:
/etc/elasticsearch/elasticsearch.yml    # main configuration file
/etc/elasticsearch/jvm.options          # JVM configuration file
/etc/elasticsearch/log4j2.properties    # logging configuration file
Sections of the main configuration file:
Cluster     # cluster section: set the ElasticSearch cluster name here
Node        # per-node section: set this host's node name
Paths       # path settings
Memory      # memory settings
Network     # network settings
Discovery   # node discovery settings
Gateway     # gateway settings
Various     # miscellaneous settings
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: myels          # cluster name; ElasticSearch identifies cluster members by cluster name and node name
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: n2                # name of this node
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
path.data: /els/data         # where index data is stored
# Path to log files:
path.data: /els/logs         # log path (this mistyped key causes the startup error described below)
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true # whether to lock all allocated memory at startup
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
network.host: 192.168.29.102 # listen address; the default is localhost
# Set a custom port for HTTP:
#
#http.port: 9200             # listen port
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["n2", "n3", "n4"]   # to be safe, list the resolvable names of the ElasticSearch nodes here
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 2                  # split-brain prevention
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various ------------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
Create the data and log directories, change their ownership, and start ElasticSearch:
mkdir -pv /els/{data,logs}
chown -R elasticsearch.elasticsearch /els/
systemctl start elasticsearch
An error occurred at startup:
Checking /var/log/messages revealed a warning:
elasticsearch: OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
This is caused by the JVM's ParallelGCThreads parameter not matching the processor count. After I changed the virtual machine's processor count, a new error appeared:
elasticsearch: Exception in thread "main" ElasticsearchParseException[duplicate settings key [path.data] found at line number [36], column number [12], previous value [/els/data], current value [/els/logs]]
This roughly means two paths conflict: in the main configuration file I had written path.logs: /els/logs as path.data: /els/logs, so the path.data key appeared twice.
Once ElasticSearch starts successfully you can see that ports 9200 and 9300 are being listened on:
At this point the ElasticSearch cluster is up and running.
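For reference, the corrected Paths section should read:

path.data: /els/data    # index data
path.logs: /els/logs    # logs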
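As an optional sanity check you can query the cluster over its REST API from any host that can reach n2; the test index name below is just a hypothetical example, not part of the setup above:

# cluster health should report a green/yellow status with all three nodes
curl 'http://n2:9200/_cluster/health?pretty'
curl 'http://n2:9200/_cat/nodes?v'

# index a test document and search for it
curl -XPUT -H 'Content-Type: application/json' 'http://n2:9200/test/doc/1' -d '{"title": "hello els"}'
curl 'http://n2:9200/test/_search?q=title:hello&pretty'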
Install Kibana on node n1:
rpm -ivh kibana-5.5.1-x86_64.rpm
Edit the Kibana configuration file:
vim /etc/kibana/kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
server.port: 5601                      # listen port
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
server.host: "192.168.29.101"          # listen address
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
server.name: "n1"                      # server name
# The URL of the Elasticsearch instance to use for all your queries.
#elasticsearch.url: "http://n2:9200"
elasticsearch.url: "http://n2:9200"    # ElasticSearch address
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"        # a login user and password can be configured here
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
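After saving the configuration, start Kibana before the browser test below; assuming the rpm package installed the usual systemd unit, that is simply:

systemctl start kibana
systemctl enable kibana    # optional: start on boot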
Open http://192.168.29.101:5601 in a browser; if the page below is displayed, Kibana has been installed successfully:
The most important parts of the ElasticSearch search engine are now in place: it can build indexes and serve searches. Next comes the data-collection part. I will use Nginx for the demonstration, with Filebeat collecting the Nginx logs and sending them to ElasticSearch, which indexes them and makes them searchable.
Install Nginx and Filebeat on node n6:
rpm -ivh filebeat-5.5.1-x86_64.rpm
yum install -y nginx
Configure Filebeat and start it:
vim /etc/filebeat/filebeat.yml
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - /var/log/nginx/access.log        # path of the log file to collect
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These field can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  #multiline.match: after


#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["n2:9200"]                   # send data to ElasticSearch; listing any one node of the cluster is enough

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
systemctl start filebeat
Access node n6 in a browser so that Nginx writes to its log file and Filebeat pushes the data to ElasticSearch, then open Kibana on n1 and configure an index pattern. Once Nginx has been accessed the index is created automatically:
Install Logstash on node n5. Logstash also needs a JDK at runtime, so install the JDK as well:
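You can also confirm from the command line that the Filebeat index has appeared in the cluster; this is just an optional check:

curl 'http://n2:9200/_cat/indices?v'    # a filebeat-YYYY.MM.DD index should be listed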
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
rpm -ivh logstash-5.5.1.rpm
Logstash's pipeline is made up of input plugins, filter plugins, and output plugins, as illustrated:
To check that Logstash can run properly, switch to the logstash user before starting it, so as to avoid permission conflicts with files created by root:
su - logstash -s /bin/bash
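From that shell a minimal stdin-to-stdout pipeline makes a convenient smoke test; this is only a sketch, assuming the rpm placed the launcher under /usr/share/logstash/bin:

/usr/share/logstash/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'
# type any line; it should be echoed back as a structured event (Ctrl-C to exit)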
The main configuration file is /etc/logstash/logstash.yml and essentially needs no changes. What does need to change is the Filebeat configuration on node n6, so that Filebeat sends its output to Logstash instead of ElasticSearch (comment out the output.elasticsearch section, since Filebeat only allows one output to be enabled):
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["n5:5044"]
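After changing the output, restart Filebeat so the new setting takes effect:

systemctl restart filebeat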
Write the Logstash pipeline configuration on node n5 (with the rpm install these pipeline files typically live under /etc/logstash/conf.d/):
input {                                      # where the data comes from; here it comes in from Filebeat
  beats {
    host => '0.0.0.0'                        # listen address
    port => 5044
  }
}

filter {                                     # filter stage: cut the incoming data into the defined format
  grok {                                     # filtering is done by the grok plugin
    match => {
      "message" => "%{IPORHOST:clientip}"    # how to cut up the source message field
    }
  }
}

output {                                     # send the filtered data to ElasticSearch
  elasticsearch {
    hosts => ["n2:9200","n3:9200","n4:9200"]
    index => "logstash-nginxlog-%{+YYYY.MM.dd}"
  }
}
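Before relying on it, the pipeline can be syntax-checked and Logstash started; a sketch, assuming the pipeline was saved as /etc/logstash/conf.d/nginx.conf (the file name is arbitrary, and if the rpm did not register a systemd unit, /usr/share/logstash/bin/system-install can create one first):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit   # validate the pipeline (run as the logstash user, per the note above)
systemctl start logstash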
Searching again in Kibana shows that clientip has now been split out into its own field, a kind of parsing that Filebeat on its own cannot do:
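The grok pattern above extracts only the client IP. If you want to break the entire default (combined-format) Nginx access log line into fields, one option beyond this article's setup is the stock COMBINEDAPACHELOG pattern, which the default Nginx log format matches:

filter {
  grok {
    match => {
      "message" => "%{COMBINEDAPACHELOG}"    # yields clientip, verb, request, response, bytes, referrer, agent, ...
    }
  }
}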