Optimizing ELK (2)

After getting ELK installed and running, I was just about ready to give up: 16 GB of RAM and 16 CPU cores, and it still kept throwing errors.


1. logstash and elasticsearch erroring at the same time

logstash produced a flood of errors, most likely because Elasticsearch was using too much heap and had not been tuned at all:

retrying failed action with response code: 503 {:level=>:warn}

too many attempts at sending event. dropping: 2016-06-16T05:44:54.464Z %{host} %{message} {:level=>:error}
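Since the suspicion is an untuned heap, it is worth noting how the heap is sized: Elasticsearch 1.x reads the ES_HEAP_SIZE environment variable, and the default maximum is only 1g. A rough sketch for this 16 GB box, assuming ES is launched from its own script:

# export ES_HEAP_SIZE=8g    # roughly half the RAM; the rest stays with the OS page cache
# /home/elk/elasticsearch-1.6.0/bin/elasticsearch -d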


elasticsearch was also reporting errors in bulk:

too many open files


The cause is this value being too small: "max_file_descriptors" : 2048


# curl http://localhost:9200/_nodes/process\?pretty

{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "ZLgPzMqBRoyDFvxoy27Lfg" : {
      "name" : "Mass Master",
      "transport_address" : "inet[/192.168.153.200:9301]",
      "host" : "localhost",
      "ip" : "127.0.0.1",
      "version" : "1.6.0",
      "build" : "cdd3ac4",
      "http_address" : "inet[/192.168.153.200:9200]",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 943,
        "max_file_descriptors" : 2048,
        "mlockall" : true




Solution:

Raise the open-file limit:

# ulimit -n 65535


Make it apply at boot as well:

# vi /etc/profile


Also add the line to the ES startup script, then restart elasticsearch:

# vi /home/elk/elasticsearch-1.6.0/bin/elasticsearch

ulimit -n 65535
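A ulimit line in /etc/profile or the startup script only covers processes started through them; if Elasticsearch runs under a dedicated account (assumed here to be called elk, going by the /home/elk path), the limit can also be pinned in /etc/security/limits.conf:

# vi /etc/security/limits.conf
elk    soft    nofile    65535
elk    hard    nofile    65535

After logging in again as that user, ulimit -n should report 65535.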


# curl http://localhost:9200/_nodes/process\?pretty

{
  "cluster_name" : "elasticsearch",
  "nodes" : {
    "_QXVsjL9QOGMD13Eb6t7Ag" : {
      "name" : "Ocean",
      "transport_address" : "inet[/192.168.153.200:9301]",
      "host" : "localhost",
      "ip" : "127.0.0.1",
      "version" : "1.6.0",
      "build" : "cdd3ac4",
      "http_address" : "inet[/192.168.153.200:9200]",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 1693,
        "max_file_descriptors" : 65535,
        "mlockall" : true
      }
    }



2. out of memory


The tuned ES configuration:

# egrep -v '^$|^#' /home/elk/elasticsearch-1.6.0/config/elasticsearch.yml 

bootstrap.mlockall: true
http.max_content_length: 2000mb
http.compression: true
index.cache.field.type: soft
index.cache.field.max_size: 50000
index.cache.field.expire: 10m
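To double-check that the file was picked up, and to watch heap and fielddata pressure afterwards, the node APIs used above can be queried again (the grep patterns are only a convenience):

# curl http://localhost:9200/_nodes/process\?pretty | grep mlockall
# curl http://localhost:9200/_nodes/stats\?pretty | grep -E '"heap_used|fielddata'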



For bootstrap.mlockall: true you also have to set:

# ulimit -l unlimited
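Same caveat as with the open-file limit: to make memlock permanent for the ES account (again assuming the user is elk), it can go into /etc/security/limits.conf as well:

# vi /etc/security/limits.conf
elk    soft    memlock    unlimited
elk    hard    memlock    unlimited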


# vi /etc/sysctl.conf

vm.max_map_count=262144

vm.swappiness = 1
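sysctl.conf is only read at boot, so load the new values right away:

# sysctl -p
vm.max_map_count = 262144
vm.swappiness = 1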


# ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127447
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 127447
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited



# vi /etc/security/limits.d/90-nproc.conf

*          soft    nproc     320000
root       soft    nproc     unlimited
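The "id" field in the _nodes/process output above is the ES process PID (1693 after the restart), so the limits the JVM is actually running with can be read straight from /proc:

# egrep 'Max open files|Max locked memory|Max processes' /proc/1693/limits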



3. ES status is yellow

ES reports cluster health as one of three colours: green, yellow, red.

green: all primary shards and all replica shards are available

yellow: all primary shards are available, but not all replica shards

red: not all primary shards are available


# curl -XGET http://localhost:9200/_cluster/health\?pretty

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 161,
  "active_shards" : 161,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 161,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}

Solution: build an elasticsearch cluster (covered in the next post).
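Until that cluster exists, a stop-gap on a single data node (not part of the original plan) is to drop replicas altogether, since a replica is never allocated on the same node as its primary, which is exactly why the 161 shards above stay unassigned:

# curl -XPUT http://localhost:9200/_settings -d '{"index": {"number_of_replicas": 0}}'

Health should then turn green; replicas can be raised again with the same call once a second data node joins.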



4. Kibana "not indexed" error

https://rafaelmt.net/en/2015/09/01/kibana-tutorial/#refresh-fields

The Kibana index is updated frequently as events come in, so Kibana charts sometimes throw a "not indexed" error.


Solution:

Open Kibana, go to Settings, click Indices, then click logstash-*. Click the refresh icon and the error is gone.
