Environment Overview
To simplify installation and upgrades, all Elastic Stack components are released in lockstep with matching version numbers. This installation uses the latest 7.9 general-availability release. Official packages come in several formats (tar, rpm, docker, yum); I chose the tar archives.
Elastic Stack components:
Beats 7.9 (Filebeat)
Elasticsearch 7.9
Kibana 7.9
Logstash 7.9
Operating system: CentOS 8.2.2004
JDK: jdk-14.0.2_linux-x64_bin.rpm (Logstash requires a JDK)
Redis: 5.0.3 (installed via yum)
Download links:
https://artifacts.elastic.co/downloads/kibana/kibana-7.9.0-linux-x86_64.tar.gz
https://artifacts.elastic.co/downloads/logstash/logstash-7.9.0.tar.gz
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.0-linux-x86_64.tar.gz
https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.0-linux-x86_64.tar.gz
https://www.oracle.com/java/technologies/javase-jdk14-downloads.html
IP plan:
Master nodes: 192.168.2.175, 192.168.2.176, 192.168.2.177
Data nodes: 192.168.2.185, 192.168.2.187
Query (coordinating) node: 192.168.3.62
Operating System Initialization
Clock synchronization:
Set the host time zone and start the chronyd time-synchronization service.
timedatectl set-timezone Asia/Shanghai
systemctl start chronyd
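To confirm the time zone took effect and that chronyd is actually tracking an upstream source, a quick check (assuming the chrony package is installed, which is the CentOS 8 default):

timedatectl
chronyc sources -v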
System Parameter Tuning
Elasticsearch listens on 127.0.0.1 by default, which obviously cannot be reached from other hosts. Once any network-related setting is changed, Elasticsearch switches from development mode to production mode and runs a series of bootstrap checks at startup to guard against misconfiguration.
To satisfy Elasticsearch's startup checks, adjust the following parameters:
1. max_map_count
Elasticsearch stores indices using a hybrid of NioFS (a non-blocking file system) and MMapFS (a memory-mapped file system) by default. Make sure the configured maximum map count is large enough that sufficient virtual memory is available for mmapped files. If this value is too low, Elasticsearch logs the following error at startup:
[1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Fix:
# vim /etc/sysctl.conf
vm.max_map_count=262144
# sysctl -p
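To verify the new value is active without rebooting (sysctl -p reloads /etc/sysctl.conf, so the kernel should already report it):

sysctl vm.max_map_count
# Expected: vm.max_map_count = 262144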
2. Raise the maximum number of file descriptors
# vim /etc/security/limits.conf
* soft nofile 655350
* hard nofile 655350
3. Raise the maximum number of threads
# vim /etc/security/limits.conf
* soft nproc 40960
* hard nproc 40960
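The limits.conf changes only apply to new sessions opened through pam_limits. A quick sanity check for the account that will run Elasticsearch (the elastic user created in step 4 below); this assumes su is configured to run pam_limits, which is the CentOS default:

su -s /bin/bash -c 'ulimit -n; ulimit -u' elastic
# Expected: 655350 and 40960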
4. Create the elastic account (required to run the ELK services)
useradd elastic -s /sbin/nologin -M
5. Create the ELK deployment directory
mkdir -p /apps/elk
Installation
Installation order:
1. Elasticsearch
2. Kibana
3. jdk-14.0.2_linux-x64_bin.rpm
4. Redis
5. Logstash
6. Filebeat
7. Example: Nginx log collection
Install Elasticsearch
1. Download Elasticsearch
cd /apps/elk
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.0-linux-x86_64.tar.gz
2. Unpack Elasticsearch
tar -xvf elasticsearch-7.9.0-linux-x86_64.tar.gz
3. Cluster configuration
Elasticsearch configuration files use YAML. With the tar install used here, they live in the config/ directory of the unpacked distribution by default (an rpm/yum install places them under /etc/elasticsearch instead). Edit the main configuration file, elasticsearch.yml:
cd elasticsearch-7.9.0/config
cat elasticsearch.yml

# Cluster name. A node can only join a cluster whose cluster.name matches its own.
# Use a descriptive name, and avoid reusing the same name across environments.
cluster.name: k8s-es
# Node name. By default Elasticsearch uses the first 7 characters of a randomly
# generated UUID. This value supports system variables.
node.name: ${HOSTNAME}
# Lock memory at startup to prevent swapping and improve ES performance.
# Requires matching OS limits; see LimitMEMLOCK in the systemd unit below.
bootstrap.memory_lock: true
# Disable the SecComp system-call filter check
bootstrap.system_call_filter: false
# Address this node listens on; clients reach the node through it
network.host: 192.168.2.175
# HTTP (REST) port
http.port: 9200
# Compress inter-node TCP transport traffic
transport.tcp.compress: true
# Node discovery scans ports 9300-9305 on these hosts.
# List the addresses of all master-eligible nodes in the cluster.
discovery.seed_hosts: ["192.168.2.175","192.168.2.176", "192.168.2.177"]
# Initial set of master-eligible nodes when bootstrapping a brand-new cluster.
# Defaults to empty, meaning the node expects to join an already bootstrapped cluster.
cluster.initial_master_nodes: ["192.168.2.175","192.168.2.176", "192.168.2.177"]
# Minimum number of master-eligible nodes that must communicate during master election
discovery.zen.minimum_master_nodes: 2
# Start recovery as soon as this many nodes have joined the cluster
gateway.recover_after_nodes: 2
# If the expected node count is not reached, wait this long before starting shard recovery
gateway.recover_after_time: 10m
# Expected number of nodes in the cluster. Once that many nodes have joined, recovery
# of each node's local shards starts immediately. Defaults to 0, i.e. no waiting.
gateway.expected_nodes: 3
# Number of concurrent primary-shard recoveries per node during initial recovery
cluster.routing.allocation.node_initial_primaries_recoveries: 8
# Maximum number of concurrent shard recoveries allowed on a node
cluster.routing.allocation.node_concurrent_recoveries: 8
# Maximum bandwidth for shard recovery traffic between nodes
indices.recovery.max_bytes_per_sec: 100mb
# Number of Elasticsearch nodes allowed to run on one machine
node.max_local_storage_nodes: 1
# Whether this node is master-eligible.
# true on 192.168.2.175-177; false on 192.168.2.185, 192.168.2.187 and 192.168.3.62
node.master: true
# Whether this node stores data.
# false on 192.168.2.175-177 and 192.168.3.62; true on 192.168.2.185 and 192.168.2.187
node.data: false
# Field data cache size limit
indices.fielddata.cache.size: 30%
# In-flight requests circuit breaker limit
network.breaker.inflight_requests.limit: 80%
# Machine learning (a paid feature; requires xpack.ml.enabled=true, not covered here)
node.ml: false
xpack.ml.enabled: false
# Enable X-Pack monitoring
xpack.monitoring.enabled: true
# Write thread pool queue size
thread_pool:
  write:
    queue_size: 200
# Enable Elasticsearch security
xpack.security.enabled: true
# Enable TLS on the transport layer; mandatory before cluster passwords can be configured
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: ./elastic-certificates.p12
# Uncomment to serve HTTPS on port 9200
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.keystore.path: ./elastic-certificates.p12
#xpack.security.http.ssl.truststore.path: ./elastic-certificates.p12

# Also adjust jvm.options to match your server's resources
4. Generate the .p12 certificate and the systemd unit
# Create the TLS certificate files
cd elasticsearch-7.9.0/config
../bin/elasticsearch-certutil ca
../bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

# Create the systemd unit
cat > /usr/lib/systemd/system/elastic.service << EOF
[Unit]
Description=elasticsearch service
After=syslog.target
After=network.target

[Service]
User=elastic
Group=elastic
LimitNOFILE=128000
LimitNPROC=128000
LimitMEMLOCK=infinity
Restart=on-failure
KillMode=process
ExecStart=/apps/elk/elasticsearch-7.9.0/bin/elasticsearch
ExecReload=/bin/kill -HUP \$MAINPID
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

# Distribute the files to the other nodes
cd /apps/
scp -r elk 192.168.2.176:/apps/
scp -r elk 192.168.2.177:/apps/
scp -r elk 192.168.2.185:/apps/
scp -r elk 192.168.2.187:/apps/
scp -r elk 192.168.3.62:/apps/
scp /usr/lib/systemd/system/elastic.service 192.168.2.176:/usr/lib/systemd/system/elastic.service
scp /usr/lib/systemd/system/elastic.service 192.168.2.177:/usr/lib/systemd/system/elastic.service
scp /usr/lib/systemd/system/elastic.service 192.168.2.185:/usr/lib/systemd/system/elastic.service
scp /usr/lib/systemd/system/elastic.service 192.168.2.187:/usr/lib/systemd/system/elastic.service
scp /usr/lib/systemd/system/elastic.service 192.168.3.62:/usr/lib/systemd/system/elastic.service

# Set the owner of the deployment directory (on this node)
chown -R elastic:elastic /apps/elk

# On 192.168.2.176 and 192.168.2.177, update in elasticsearch.yml:
#   node.name:
#   network.host:
# then set the owner:
chown -R elastic:elastic /apps/elk

# On 192.168.2.185 and 192.168.2.187, update:
#   node.name:
#   network.host:
#   node.master: false
#   node.data: true
# and delete or comment out the following settings:
#   cluster.initial_master_nodes: ["192.168.2.175","192.168.2.176", "192.168.2.177"]
#   discovery.zen.minimum_master_nodes: 2
# then set the owner:
chown -R elastic:elastic /apps/elk

# On 192.168.3.62, update:
#   node.name:
#   network.host:
#   node.master: false
#   node.data: false
# and delete or comment out:
#   cluster.initial_master_nodes: ["192.168.2.175","192.168.2.176", "192.168.2.177"]
#   discovery.zen.minimum_master_nodes: 2
# then set the owner:
chown -R elastic:elastic /apps/elk

# Start all ES nodes and enable them at boot
systemctl start elastic.service
systemctl enable elastic.service
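If a node fails to come up, the systemd journal usually shows which bootstrap check failed. A quick status check on each node:

systemctl status elastic.service
journalctl -u elastic.service -f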
Generate the Elasticsearch login credentials
# Run on any one node
bin/elasticsearch-setup-passwords auto

# The generated passwords are printed at the end; record them
Changed password for user apm_system
PASSWORD apm_system = 4zmSk6NdfNblKFCdZnHK
Changed password for user kibana_system
PASSWORD kibana_system = hfcUg1rInYoWBASZFQTE
Changed password for user kibana
PASSWORD kibana = hfcUg1rInYoWBASZFQTE
Changed password for user logstash_system
PASSWORD logstash_system = JIQJnlMjUJPRXvYRH5L9
Changed password for user beats_system
PASSWORD beats_system = SHNpqmnILwilor2T3Nga
Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = 8LpqFw336wrwubkZiEwZ
Changed password for user elastic
PASSWORD elastic = yqyY8P3PJ5CP1GrT7xxR

# Note: if xpack.security.http.ssl.enabled: true is set, comment out the following
# lines first, otherwise password generation fails:
#   xpack.security.http.ssl.enabled: true
#   xpack.security.http.ssl.keystore.path: ./elastic-certificates.p12
#   xpack.security.http.ssl.truststore.path: ./elastic-certificates.p12
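With security enabled, the REST API now requires credentials. A quick cluster health check from any host (substitute the elastic password generated on your own cluster):

curl -u elastic:yqyY8P3PJ5CP1GrT7xxR 'http://192.168.2.175:9200/_cluster/health?pretty'
# A healthy 6-node cluster should report "status" : "green" and "number_of_nodes" : 6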
Install Kibana (node 192.168.3.62)
1. Download the Kibana tar archive
cd /apps/elk
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.9.0-linux-x86_64.tar.gz
2. Unpack the Kibana tar archive
tar -xvf kibana-7.9.0-linux-x86_64.tar.gz
3. Edit the configuration file
cd kibana-7.9.0-linux-x86_64/config
vim kibana.yml

# Default: 5601. Port Kibana listens on
server.port: 5601
# Default: "localhost". Address Kibana listens on
server.host: "192.168.3.62"
# Display name of this Kibana instance
server.name: "k8s_es"
# Elasticsearch nodes Kibana connects to
elasticsearch.hosts: ["http://192.168.2.175:9200","http://192.168.2.176:9200","http://192.168.2.177:9200"]
# Elasticsearch username
elasticsearch.username: "kibana_system"
# Elasticsearch password
elasticsearch.password: "hfcUg1rInYoWBASZFQTE"
# Maximum request payload size, in bytes
server.maxPayloadBytes: 1048576
# Time to wait for Elasticsearch to respond to pings, in ms
elasticsearch.pingTimeout: 1500
# Time to wait for responses from the backend or Elasticsearch, in ms; must be a positive integer
elasticsearch.requestTimeout: 30000
# Kibana client headers to forward to Elasticsearch; set to [] to send none
elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values to send to Elasticsearch. Custom headers cannot be overridden
# by client headers, regardless of elasticsearch.requestHeadersWhitelist
elasticsearch.customHeaders: {}
# Time (ms) to wait for Elasticsearch shard responses; 0 disables the timeout
elasticsearch.shardTimeout: 30000
# Time to wait for Elasticsearch at Kibana startup before retrying
elasticsearch.startupTimeout: 5000
# Use the Chinese locale for the Kibana UI
i18n.locale: "zh-CN"
# Enable monitoring
xpack.monitoring.ui.container.elasticsearch.enabled: true
# Must be at least 32 characters
xpack.encryptedSavedObjects.encryptionKey: "ae3ca37a74386e07e471eeb842720384"
4. Configure the Kibana systemd service
cat > /usr/lib/systemd/system/kibana.service << EOF
[Unit]
Description=kibana service daemon
After=network.target

[Service]
User=elastic
Group=elastic
LimitNOFILE=65536
LimitNPROC=65536
ExecStart=/apps/elk/kibana-7.9.0-linux-x86_64/bin/kibana
ExecReload=/bin/kill -HUP \$MAINPID
KillMode=process
Restart=on-failure
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

chown -R elastic:elastic /apps/elk
# Enable at boot
systemctl enable kibana.service
# Start Kibana
systemctl start kibana.service
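Kibana can take a minute or so to start. To confirm it is listening (a 302 redirect to the login page is the expected response once Elasticsearch security is enabled):

ss -lntp | grep 5601
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.3.62:5601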
Log in with the credentials generated earlier.
Install and configure jdk-14.0.2 (node 192.168.3.62)
1. Download jdk-14.0.2
https://www.oracle.com/java/technologies/javase-jdk14-downloads.html
2. Install jdk-14.0.2
rpm -ivh jdk-14.0.2_linux-x64_bin.rpm
3. Verify the installation
java -version
# Output:
java version "14.0.2" 2020-07-14
Java(TM) SE Runtime Environment (build 14.0.2+12-46)
Java HotSpot(TM) 64-Bit Server VM (build 14.0.2+12-46, mixed mode, sharing)
Install and configure Redis (node 192.168.3.62)
1. Install Redis
yum install redis
2. Configure Redis
vim /etc/redis.conf
bind 0.0.0.0
protected-mode no
3. Start Redis
# Enable at boot
systemctl enable redis.service
# Start Redis
systemctl start redis.service
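A quick check that Redis is reachable on the non-loopback address (protected-mode is off and no password is set, so consider firewalling port 6379 to the Filebeat hosts):

redis-cli -h 192.168.3.62 ping
# Expected reply: PONG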
Install and configure Logstash (node 192.168.3.62)
1. Download the tar archive
cd /apps/elk
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.9.0.tar.gz
2. Unpack Logstash
tar -xvf logstash-7.9.0.tar.gz
3. Edit the Logstash configuration
cd logstash-7.9.0/config/
cat logstash.yml

# Number of pipeline worker threads (the CPU core count, or a multiple of it)
pipeline.workers: 8
# Number of events per batch
pipeline.batch.size: 5000
# Batch delay, in ms
pipeline.batch.delay: 3
# Whether to force exit on shutdown even if events are still in flight
pipeline.unsafe_shutdown: false
# Pipeline event ordering
pipeline.ordered: auto
# Pipeline configuration directory
path.config: "/apps/elk/logstash-7.9.0/conf.d"
# Automatically reload config files under /apps/elk/logstash-7.9.0/conf.d
config.reload.automatic: true
# Config reload interval
config.reload.interval: 3s
# Address the Logstash HTTP API listens on
http.host: 192.168.3.62
# Port the Logstash HTTP API listens on
http.port: 9600
# Queue type: memory keeps the in-flight queue in memory (events are not persisted to disk)
queue.type: memory
# Log output path
path.logs: /apps/elk/logstash-7.9.0/logs
# Enable X-Pack monitoring (note: this setting stops working if ES serves HTTPS)
xpack.monitoring.enabled: true
# Elasticsearch username
xpack.monitoring.elasticsearch.username: logstash_system
# Elasticsearch password
xpack.monitoring.elasticsearch.password: JIQJnlMjUJPRXvYRH5L9
# Elasticsearch nodes
xpack.monitoring.elasticsearch.hosts: ["http://192.168.2.175:9200", "http://192.168.2.176:9200","http://192.168.2.177:9200"]

# Also adjust jvm.options to match your server's resources
4. Configure the Logstash systemd unit
cat > /usr/lib/systemd/system/logstash.service << EOF
[Unit]
Description=logstash service
After=syslog.target
After=network.target

[Service]
Environment="CONFFILE=/apps/elk/logstash-7.9.0/conf.d"
LimitNOFILE=65536
LimitNPROC=65536
Restart=on-failure
KillMode=process
ExecStart=/apps/elk/logstash-7.9.0/bin/logstash -f \$CONFFILE
ExecReload=/bin/kill -HUP \$MAINPID
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

# Enable at boot
systemctl enable logstash.service
5. Create the account Logstash uses to write logs into Elasticsearch
# Open Kibana and go to Dev Tools
# Create the logstash_write_role role
POST /_security/role/logstash_write_role
{
  "cluster": [
    "monitor",
    "manage_index_templates"
  ],
  "indices": [
    {
      "names": [
        "logstash*"
      ],
      "privileges": [
        "write",
        "create_index",
        "delete",
        "manage",
        "manage_ilm"
      ],
      "field_security": {
        "grant": [
          "*"
        ]
      }
    }
  ],
  "run_as": [],
  "metadata": {},
  "transient_metadata": {
    "enabled": true
  }
}
# Returns {"role":{"created":true}}

# Create the logstash_writer user (the "password" field sets its password)
POST /_security/user/logstash_writer
{
  "username": "logstash_writer",
  "roles": [
    "logstash_write_role"
  ],
  "full_name": null,
  "email": null,
  "password": "JIQJnlMjUJPRXvYRH5L9",
  "enabled": true
}
# Returns {"user":{"created":true}}
The role and user can also be created through the Kibana Users UI.
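If you prefer the command line to Dev Tools or the Users UI, the same user can be created through the _security REST API with curl (a sketch; substitute the elastic password generated earlier):

curl -u elastic:yqyY8P3PJ5CP1GrT7xxR -X POST 'http://192.168.2.175:9200/_security/user/logstash_writer' \
  -H 'Content-Type: application/json' \
  -d '{"roles":["logstash_write_role"],"password":"JIQJnlMjUJPRXvYRH5L9","enabled":true}'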
6. Configure a test pipeline, using Nginx logs as an example
mkdir -p /apps/elk/logstash-7.9.0/conf.d
cd /apps/elk/logstash-7.9.0/conf.d
vim nginx.conf

input {
  redis {
    host => "192.168.3.62"
    port => "6379"
    data_type => "list"
    key => "nginx_key"
    db => "0"
  }
}
filter {
  if [fields][service] == "rockman_ngx_acs" {
    grok {
      patterns_dir => ["/apps/elk/logstash-7.9.0/patterns"]
      match => { "message" => "%{IP:client_ip} \- %{DATA:username}\[%{HTTPDATE:timestamp}\] (?<server>%{IPORHOST}(?:\S\d+)?|-) \"%{WORD:method} %{URIPATHPARAM:uripath} %{URIPROTO:protocol}/%{NUMBER:httpversion}\" %{NUMBER:status_code} (?:%{NUMBER:bytes}|-) \"%{DATA}\" %{QS:agent} (%{QS:x_forwarded_for}?) (%{IP:CDN_IP}?)" }
      add_tag => ["nginx_aces"]
      remove_field => ["message"]
    }
    date {
      match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z","ISO8601"]
      timezone => "Asia/Shanghai"
      target => "@timestamp"
      remove_field => [ "timestamp" ]
    }
    geoip {
      source => "client_ip"
      target => "geoip"
      database => "/apps/elk/logstash-7.9.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"
      #add_field => {"[geoip][coordinates]" => "%{[geoip][longitude]}"}
      #add_field => {"[geoip][coordinates]" => "%{[geoip][latitude]}"}
      remove_field => ["[geoip][latitude]","[geoip][longitude]"]
    }
    #ruby {
    #  code => "
    #    timestamp = event.get('@timestamp')
    #    localtime = timestamp.time + 28800
    #    localtimeStr = localtime.strftime('%Y%m%d%H%M%S')
    #    event.set('localtime', localtimeStr)
    #  "
    #}
    mutate {
      convert => {"bytes" => "integer"}
    }
  } else if [fields][service] == "rockman_ngx_err" {
    grok {
      patterns_dir => ["/apps/elk/logstash-7.9.0/patterns"]
      match => { "message" => ["(?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<remote_addr>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:server}?)(?:, request: %{QS:request})?(?:, upstream: (?<upstream>\"%{URI}\"|%{QS}))?(?:, host: %{QS:request_host})?(?:, referrer: \"%{URI:referrer}\")?"] }
      add_tag => ["nginx_err"]
      overwrite => ["message"]
    }
    date {
      match => ["timestamp","yyyy/MM/dd HH:mm:ss","ISO8601"]
      timezone => "Asia/Shanghai"
      target => "@timestamp"
      remove_field => [ "timestamp" ]
    }
    geoip {
      source => "remote_addr"
      target => "geoip"
      database => "/apps/elk/logstash-7.9.0/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"
      #add_field => {"[geoip][coordinates]" => "%{[geoip][longitude]}"}
      #add_field => {"[geoip][coordinates]" => "%{[geoip][latitude]}"}
      remove_field => ["[geoip][latitude]","[geoip][longitude]"]
    }
  }
}
output {
  if [fields][service] == "rockman_ngx_acs" {
    if "nginx_aces" in [tags] {
      elasticsearch {
        hosts => ["http://192.168.2.175:9200","http://192.168.2.176:9200","http://192.168.2.177:9200"]
        sniffing => true
        index => "logstash-nginx-access-rockman-%{+YYYY.MM}"
        user => "logstash_writer"
        password => "JIQJnlMjUJPRXvYRH5L9"
        #keystore => "/apps/elk/logstash-7.9.0/config/logstash.p12"
        #keystore_password => ""
        #truststore => "/apps/elk/logstash-7.9.0/config/logstash.p12"
        #truststore_password => ""
        #ssl => true
        #ssl_certificate_verification => false
        #cacert => "/apps/elk/logstash-7.9.0/config/logstash.pem"
      }
    }
  } else if [fields][service] == "rockman_ngx_err" {
    if "nginx_err" in [tags] {
      elasticsearch {
        hosts => ["http://192.168.2.175:9200","http://192.168.2.176:9200","http://192.168.2.177:9200"]
        sniffing => true
        index => "logstash-nginx-error-%{+YYYY.MM}"
        user => "logstash_writer"
        password => "JIQJnlMjUJPRXvYRH5L9"
        #keystore => "/apps/elk/logstash-7.9.0/config/logstash.p12"
        #keystore_password => ""
        #truststore => "/apps/elk/logstash-7.9.0/config/logstash.p12"
        #truststore_password => ""
        #ssl => true
        #ssl_certificate_verification => false
        #cacert => "/apps/elk/logstash-7.9.0/config/logstash.pem"
      }
    }
  }
}

# Matching Nginx log format (goes in the http block of nginx.conf)
map $http_x_forwarded_for $clientRealIp {
    "" $remote_addr;
    ~^(?P<firstAddr>[0-9\.|:|a-f\.|:|A-F\.|:]+),?.*$ $firstAddr;
}
log_format main escape=json '$clientRealIp - $remote_user [$time_local] $http_host "$request" '
    '$status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for" $remote_addr '
    'ups_add:$upstream_addr ups_resp_time: $upstream_response_time '
    'request_time: $request_time ups_status: $upstream_status request_body: $request_body';
7. Start Logstash
systemctl start logstash.service
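Once Filebeat (next section) starts shipping events, you can confirm Logstash is draining the Redis list; its length should hover near zero rather than grow:

redis-cli -h 192.168.3.62 llen nginx_key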
Install and configure Filebeat (on the log-collecting nodes)
1. Download the tar archive
mkdir -p /apps/elk
cd /apps/elk
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.9.0-linux-x86_64.tar.gz
2. Unpack Filebeat
tar -xvf filebeat-7.9.0-linux-x86_64.tar.gz
3. Edit the configuration
cd filebeat-7.9.0-linux-x86_64
vim filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/nginx/log/access.*
  fields:
    service: rockman_ngx_acs  # used by the Logstash conditionals
  exclude_files: [".gz$"]
- type: log
  enabled: true
  paths:
    - /apps/nginx/log/error.*
  fields:
    service: rockman_ngx_err  # used by the Logstash conditionals
  exclude_files: [".gz$"]

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

output.redis:
  enabled: true
  hosts: ["192.168.3.62:6379"]
  key: "nginx_key"  # Redis list key name
  db: 0
  timeout: 5
  datatype: "list"
  worker: 5
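Filebeat can validate its configuration, and should also be able to probe the connection to the configured Redis output, before being daemonized:

cd /apps/elk/filebeat-7.9.0-linux-x86_64
./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml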
4. Configure the Filebeat systemd unit
cat > /usr/lib/systemd/system/filebeat.service << EOF
[Unit]
Description=filebeat Server Daemon
After=network.target

[Service]
User=root
Group=root
ExecStart=/apps/elk/filebeat-7.9.0-linux-x86_64/filebeat -e -c /apps/elk/filebeat-7.9.0-linux-x86_64/filebeat.yml
ExecReload=/bin/kill -HUP \$MAINPID
KillMode=process
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
EOF

# Enable at boot
systemctl enable filebeat.service
# Start Filebeat
systemctl start filebeat.service
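For an end-to-end check, generate a few requests against Nginx and confirm the monthly indices appear in Elasticsearch. Here <nginx-host> is a placeholder for your web server; substitute your own elastic password:

curl -s -o /dev/null http://<nginx-host>/
curl -u elastic:yqyY8P3PJ5CP1GrT7xxR 'http://192.168.2.175:9200/_cat/indices/logstash-nginx-*?v'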