First, install Docker. This assumes a CentOS 7 machine; if you are on CentOS 6, you will have to search around and sort it out yourself.
# install docker
yum install -y docker-io
systemctl start docker
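If you want to double-check the install before moving on, these standard commands confirm the daemon is running and enable it at boot (optional):

# optional: confirm docker is up and start it on boot
docker version
systemctl enable docker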
Now to the main topic: what we are building here is ELK 7.2.

1. Setting up Elasticsearch

Before starting the Elasticsearch container, set the following kernel parameters on the host:
vm.max_map_count = 655360
vm.swappiness = 1
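These are host-level settings; a minimal way to apply them, assuming they go into /etc/sysctl.conf on each Elasticsearch host:

# append the settings and reload them without a reboot
cat >> /etc/sysctl.conf <<'EOF'
vm.max_map_count = 655360
vm.swappiness = 1
EOF
sysctl -p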
docker run --name ES \
  -d --net=host \
  --restart=always \
  --privileged=true \
  --ulimit nofile=655350 \
  --ulimit memlock=-1 \
  --memory=16G \
  --memory-swap=-1 \
  --volume /data:/data \
  --volume /etc/localtime:/etc/localtime \
  -e TERM=dumb \
  -e ELASTIC_PASSWORD='changeme' \
  -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
  -e cluster.name="es" \
  -e node.name="node-1" \
  -e node.master=true \
  -e node.data=true \
  -e node.ingest=false \
  -e node.attr.rack="0402-K03" \
  -e discovery.zen.ping.unicast.hosts="ip1,ip2" \
  -e xpack.security.enabled=true \
  -e xpack.monitoring.collection.enabled=true \
  -e xpack.monitoring.exporters.my_local.type=local \
  -e xpack.monitoring.exporters.my_local.use_ingest=false \
  -e discovery.zen.minimum_master_nodes=1 \
  -e gateway.recover_after_nodes=1 \
  -e cluster.initial_master_nodes="node-1" \
  -e network.host=0.0.0.0 \
  -e http.port=9200 \
  -e path.data=/data/elk/data \
  -e path.logs=/data/elk/logs \
  -e bootstrap.memory_lock=true \
  -e bootstrap.system_call_filter=false \
  -e indices.fielddata.cache.size="25%" \
  elasticsearch:7.2.0
A few parameters worth paying attention to:

ELASTIC_PASSWORD sets the password for the built-in elastic user.
--memory is the amount of memory on your server.
ES_JAVA_OPTS (the JVM heap) is usually set to half of that memory.
discovery.zen.ping.unicast.hosts lists the IPs of the nodes in your cluster.
On the second node, change the following parameters:

Give node.name a different value, set node.master=false, and remove the cluster.initial_master_nodes parameter.
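For reference, a sketch of the full command on the second host: it is the same docker run as above with only those parameters changed (the name node-2 is illustrative, and node.attr.rack should be adjusted to the actual rack of that machine):

docker run --name ES \
  -d --net=host \
  --restart=always \
  --privileged=true \
  --ulimit nofile=655350 \
  --ulimit memlock=-1 \
  --memory=16G \
  --memory-swap=-1 \
  --volume /data:/data \
  --volume /etc/localtime:/etc/localtime \
  -e TERM=dumb \
  -e ELASTIC_PASSWORD='changeme' \
  -e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
  -e cluster.name="es" \
  -e node.name="node-2" \
  -e node.master=false \
  -e node.data=true \
  -e node.ingest=false \
  -e node.attr.rack="0402-K03" \
  -e discovery.zen.ping.unicast.hosts="ip1,ip2" \
  -e xpack.security.enabled=true \
  -e xpack.monitoring.collection.enabled=true \
  -e xpack.monitoring.exporters.my_local.type=local \
  -e xpack.monitoring.exporters.my_local.use_ingest=false \
  -e discovery.zen.minimum_master_nodes=1 \
  -e gateway.recover_after_nodes=1 \
  -e network.host=0.0.0.0 \
  -e http.port=9200 \
  -e path.data=/data/elk/data \
  -e path.logs=/data/elk/logs \
  -e bootstrap.memory_lock=true \
  -e bootstrap.system_call_filter=false \
  -e indices.fielddata.cache.size="25%" \
  elasticsearch:7.2.0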
To check that the node started correctly, tail the container log:

docker logs ES -f
curl --user elastic:changeme -XGET http://ip1:9200/_cat/indices
If you see output like this, your setup succeeded: number_of_nodes is the number of nodes in the cluster, and the last field shows how much has been loaded.
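The fields mentioned here (number_of_nodes and the load percentage, i.e. active_shards_percent_as_number) presumably come from the cluster health API rather than from _cat/indices; you can pull them directly with the same credentials:

curl --user elastic:changeme -XGET 'http://ip1:9200/_cluster/health?pretty'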
2. Setting up Kibana
docker pull kibana:7.2.0
docker run --name kibana \
  --restart=always \
  -d --net=host \
  -v /data:/data \
  -v /etc/localtime:/etc/localtime \
  --privileged \
  -e TERM=dumb \
  -e SERVER_HOST=0.0.0.0 \
  -e SERVER_PORT=5601 \
  -e SERVER_NAME=Kibana-100 \
  -e ELASTICSEARCH_HOSTS=http://localhost:9200 \
  -e ELASTICSEARCH_USERNAME=elastic \
  -e ELASTICSEARCH_PASSWORD=changeme \
  -e XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED=true \
  -e LOG_FILE=/data/elasticsearch/logs/kibana.log \
  kibana:7.2.0
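Before putting nginx in front of it, it is worth confirming Kibana actually came up; these standard checks assume the SERVER_PORT=5601 setting above:

# optional: check that kibana started and is listening
docker logs kibana --tail 50
curl -I http://localhost:5601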
If you want Kibana reachable on port 80, you can put an nginx reverse proxy in front of it:

server {
    listen 80;
    #listen [::]:80 default_server;
    #server_name ;
    #root /usr/share/nginx/html;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-Nginx-Proxy true;
        proxy_set_header Connection "";
        proxy_pass http://127.0.0.1:5601;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
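Assuming the server block above is saved as /etc/nginx/conf.d/kibana.conf (the filename is just an example), the usual test-and-reload cycle applies:

# validate the config, then reload nginx
nginx -t
systemctl reload nginx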
3. Setting up Logstash

Prepare the config directories on the host and hand them to the container user (UID 1000):

cd /data
mkdir config
cd config
mkdir pipeline
cd /data
chown 1000:1000 config
Back inside the config directory, create log4j2.properties:

cd config
vim log4j2.properties
logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug
vim logstash.yml
Leave this file empty; just save and quit with :wq.
vim pipelines.yml
- pipeline.id: my-logstash
  path.config: "/usr/share/logstash/config/pipeline/*.conf"
  pipeline.workers: 3
Next, go into the pipeline directory and create the pipeline config:

cd pipeline
vim redis2es.conf
input {
  redis {
    data_type   => "list"
    codec       => "json"
    key         => "xxx"
    host        => "xxxxxxx"
    port        => 6379
    password    => "xxxxxx"
    threads     => 1
    batch_count => 100
  }
}
output {
  elasticsearch {
    hosts         => ["ip1:9200"]
    index         => "%{srv}-%{type}-%{+yyyy.MM.dd}"
    document_type => "%{type}"
    workers       => 1
    user          => "elastic"
    password      => "changeme"
  }
  file {
    path           => "/usr/share/logstash/%{srv}-%{type}-%{+yyyyMMdd}.log"
    gzip           => true
    flush_interval => 10
    workers        => 1
  }
}
Fill in the xxx placeholders above with your own values.
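The command that actually starts the Logstash container is not shown above; a minimal sketch, assuming the official logstash:7.2.0 image, with the /data/config directory prepared earlier mounted over /usr/share/logstash/config (the container name and heap size are illustrative):

docker run --name logstash \
  -d --net=host \
  --restart=always \
  -v /data/config:/usr/share/logstash/config \
  -v /data:/data \
  -v /etc/localtime:/etc/localtime \
  -e LS_JAVA_OPTS="-Xms2g -Xmx2g" \
  logstash:7.2.0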
As an extra: a tool worth recommending, dremio.
A couple of simple commands are all it takes:
docker pull dremio/dremio-oss
docker run -p 9047:9047 -p 31010:31010 -p 45678:45678 dremio/dremio-oss
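Once the container is up, the web UI is on the mapped 9047 port; a quick check before opening http://localhost:9047 in a browser:

curl -I http://localhost:9047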