Setting up an ELK cluster with Docker

First, install Docker. You need a CentOS 7 machine; CentOS 6 users will have to work it out on their own.

# Install Docker
yum install -y docker-io 
systemctl start docker

Down to business: this guide sets up ELK version 7.2.

  1. Setting up Elasticsearch
  • Adjust the kernel parameters
    vim /etc/sysctl.conf
vm.max_map_count = 655360
vm.swappiness = 1
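The two sysctl lines above don't take effect until they're applied; a quick way to apply them without rebooting and read the live values back is:

```shell
# Apply the settings from /etc/sysctl.conf without rebooting (needs root)
sysctl -p
# Verify by reading the live values back from /proc
cat /proc/sys/vm/max_map_count
cat /proc/sys/vm/swappiness
```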
  • Enter /data and create an elk directory
    cd /data && mkdir elk
  • Inside elk, create the data and logs directories
    cd elk
    mkdir data
    mkdir logs
  • Back in /data, give elk ownership 1000:1000 recursively (the Elasticsearch container runs as UID/GID 1000)
    cd /data
    chown 1000:1000 elk -R
  • This is a 2-node ES cluster: create the directories on both machines, then run the following command on each
docker run --name ES   \
        -d --net=host \
        --restart=always \
        --privileged=true \
        --ulimit nofile=655350 \
        --ulimit memlock=-1 \
        --memory=16G \
        --memory-swap=-1 \
        --volume /data:/data \
        --volume /etc/localtime:/etc/localtime \
        -e TERM=dumb \
        -e ELASTIC_PASSWORD='changeme' \
        -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
        -e cluster.name="es" \
        -e node.name="node-1" \
        -e node.master=true \
        -e node.data=true \
        -e node.ingest=false \
        -e node.attr.rack="0402-K03" \
        -e discovery.zen.ping.unicast.hosts="ip1,ip2" \
        -e xpack.security.enabled=true  \
        -e xpack.monitoring.collection.enabled=true  \
        -e xpack.monitoring.exporters.my_local.type=local \
        -e xpack.monitoring.exporters.my_local.use_ingest=false \
        -e discovery.zen.minimum_master_nodes=1 \
        -e gateway.recover_after_nodes=1 \
        -e cluster.initial_master_nodes="node-1" \
        -e network.host=0.0.0.0 \
        -e http.port=9200 \
        -e path.data=/data/elk/data \
        -e path.logs=/data/elk/logs \
        -e bootstrap.memory_lock=true \
        -e bootstrap.system_call_filter=false \
        -e indices.fielddata.cache.size="25%" \
        elasticsearch:7.2.0

A few parameters worth noting:

ELASTIC_PASSWORD sets the password for the elastic user
    memory should match your server's available RAM
    ES_JAVA_OPTS is usually set to half the RAM
    discovery.zen.ping.unicast.hosts lists the IPs of the cluster nodes

On the second node, change the following parameters:

node.name, node.master=false, and remove the cluster.initial_master_nodes parameter
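For clarity, the second node's command differs from the one above only in these lines (the name "node-2" is an assumed choice, not from the original post):

```shell
# Changed flags for the second node -- everything else stays the same:
        -e node.name="node-2" \
        -e node.master=false \
# ...and delete the cluster.initial_master_nodes line entirely.
```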
  • Once that's done, you can tail the logs with the following command

docker logs ES -f

  • Check the result
curl --user elastic:changeme -XGET http://ip1:9200/_cat/indices

(screenshot: the cluster's JSON response)

If you see output like this, the setup succeeded: number_of_nodes is the node count, and the last field shows how much has been loaded.
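Besides _cat/indices, the cluster health endpoint reports the node count directly; a sketch using the same ip1 placeholder and credentials as above:

```shell
# number_of_nodes in the JSON response should read 2 for this setup
curl --user elastic:changeme -XGET 'http://ip1:9200/_cluster/health?pretty'
```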

2. Setting up Kibana

  • Setting up Kibana is fairly simple; first pull the image
docker pull kibana:7.2.0
  • Run this command to start the service
docker run --name kibana \
        --restart=always \
        -d --net=host \
        -v /data:/data \
        -v /etc/localtime:/etc/localtime \
        --privileged \
        -e TERM=dumb \
        -e SERVER_HOST=0.0.0.0 \
        -e SERVER_PORT=5601 \
        -e SERVER_NAME=Kibana-100 \
        -e ELASTICSEARCH_HOSTS=http://localhost:9200 \
        -e ELASTICSEARCH_USERNAME=elastic \
        -e ELASTICSEARCH_PASSWORD=changeme \
        -e XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=true \
        -e LOG_FILE=/data/elasticsearch/logs/kibana.log \
        kibana:7.2.0
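Once the container is up (it can take a minute), Kibana's status API makes a quick sanity check; this assumes you're running it on the Kibana host itself:

```shell
# Look for "overall":{"state":"green"} in the JSON response
curl -s --user elastic:changeme http://localhost:5601/api/status
```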
  • That's all for Kibana; next, just put nginx in front of it. Here's a simple configuration
server {
        listen       80;
        #listen       [::]:80 default_server;
        #server_name  ;
        #root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host  $http_host;
        proxy_set_header X-Nginx-Proxy true;
        proxy_set_header Connection "";
        proxy_pass      http://127.0.0.1:5601;

        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
}
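Before this takes effect, it's worth validating the configuration; on CentOS 7 with systemd:

```shell
# Check the configuration syntax, then reload nginx without dropping connections
nginx -t && systemctl reload nginx
```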
  3. Installing Logstash
  • Create the directories and set ownership
cd /data
mkdir config
cd config
mkdir pipeline
cd /data
chown 1000:1000 config -R
  • Create the following files in config

vim log4j2.properties

logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug

vim logstash.yml
Leave the file empty; just save and quit

vim pipelines.yml

- pipeline.id: my-logstash
  path.config: "/usr/share/logstash/config/pipeline/*.conf"
  pipeline.workers: 3

Next, enter the pipeline directory and create the pipeline config file

cd pipeline
vim redis2es.conf
input {
    redis {
        data_type => "list"
        codec => "json"
        key => "xxx"
        host => "xxxxxxx"
        port => 6379
        password => "xxxxxx"
        threads => 1
        batch_count => 100
    }
}
output {
    elasticsearch {
        hosts => ["ip1:9200"]
        index => "%{srv}-%{type}-%{+yyyy.MM.dd}"
        document_type => "%{type}"
        workers => 1
        user => "elastic"
        password => "changeme"
    }
    file {
        path => "/usr/share/logstash/%{srv}-%{type}-%{+yyyyMMdd}.log"
        gzip => true
        flush_interval => 10
        workers => 1
    }
}

Fill in the xxx placeholders above with your own values.
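The post never shows the Logstash container itself being started. A minimal sketch, assuming the /data/config tree built above and the official 7.2.0 image: mounting that tree over the container's config directory lets pipelines.yml find /usr/share/logstash/config/pipeline/*.conf as written.

```shell
# Start Logstash with the config tree created in the steps above
docker run --name logstash \
        -d --net=host \
        --restart=always \
        -v /data/config:/usr/share/logstash/config \
        -v /etc/localtime:/etc/localtime \
        logstash:7.2.0
```

Then `docker logs logstash -f` shows the elasticsearch-output debug logging enabled in log4j2.properties.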


As a bonus, here's a recommended tool: dremio.
Two simple commands get it running

docker pull dremio/dremio-oss
docker run -p 9047:9047 -p 31010:31010 -p 45678:45678 dremio/dremio-oss