Whether you strictly need it or not, install the JDK anyway; it saves trouble later. Here I'm using jdk1.8.0_151.
tar xf jdk-8u151-linux-x64.tar.gz -C /opt/
Configure the environment variables:
vim /etc/profile
export JAVA_HOME=/opt/jdk1.8.0_151
export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile
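A quick way to confirm the JDK is on the PATH (the exact build string may differ on your system):

java -version
# Should report something similar to:
# java version "1.8.0_151"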
Using 192.168.0.156 as the example node:
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
tar xf zookeeper-3.4.14.tar.gz -C /opt/

# Edit the configuration
cd /opt/zookeeper-3.4.14/conf
cp zoo_sample.cfg zoo.cfg
Edit the ZooKeeper configuration file:
# vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/elk/zk/data/
clientPort=2181
server.1=192.168.0.156:12888:13888
server.2=192.168.0.42:12888:13888
server.3=192.168.0.133:12888:13888
Create the data directory and set each node's election ID (myid):
# Create the data directory on all three nodes
mkdir -p /data/elk/zk/data/

# On 192.168.0.156
echo 1 > /data/elk/zk/data/myid
# On 192.168.0.42
echo 2 > /data/elk/zk/data/myid
# On 192.168.0.133
echo 3 > /data/elk/zk/data/myid
The other two nodes use exactly the same configuration; only myid differs.
Start ZooKeeper on all three nodes:
./bin/zkServer.sh start
Check the status; output like the following means the ZooKeeper cluster is up:
./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower
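As an extra sanity check, ZooKeeper 3.4 answers four-letter commands such as stat over the client port (assuming nc is installed and the four-letter-word whitelist hasn't been restricted):

echo stat | nc 192.168.0.156 2181
# The "Mode:" line should report leader on exactly one node and follower on the other two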
wget https://www-us.apache.org/dist/kafka/2.2.0/kafka_2.11-2.2.0.tgz
tar xf kafka_2.11-2.2.0.tgz -C /opt/

# Configuration files
cd /opt/kafka_2.11-2.2.0/config
Edit the configuration file:
# vim server.properties
broker.id=1
listeners=PLAINTEXT://192.168.0.156:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/elk/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.156:2181,192.168.0.42:2181,192.168.0.133:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
On the other two nodes, set broker.id to 2 and 3 respectively, and change listeners to each node's own IP, as sketched below.
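For example, on 192.168.0.42 the two changed lines can be patched with sed (a convenience sketch; editing server.properties by hand works just as well):

cd /opt/kafka_2.11-2.2.0/config
sed -i 's/^broker.id=1/broker.id=2/' server.properties
sed -i 's#^listeners=.*#listeners=PLAINTEXT://192.168.0.42:9092#' server.properties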
Create the log directory:
mkdir -p /data/elk/kafka/logs
Configure /etc/hosts:
192.168.0.156 kafka01
192.168.0.42 kafka02
192.168.0.133 kafka03
Start Kafka on all three nodes:
../bin/kafka-server-start.sh -daemon server.properties
Test:
(1) Create a topic
../bin/kafka-topics.sh --create --zookeeper 192.168.0.156:2181 --replication-factor 1 --partitions 2 --topic message_topic
(2) List topics
../bin/kafka-topics.sh --list --zookeeper 192.168.0.156:2181
(3) Test the consumer and producer
# Run the following on one of the nodes
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.156:9092 --topic message_topic --from-beginning

# In another terminal, run
../bin/kafka-console-producer.sh --broker-list 192.168.0.156:9092 --topic message_topic
>hello
>

# The consumer then prints
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.156:9092 --topic message_topic --from-beginning
hello
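To inspect partition placement and replica assignment for the test topic, the same kafka-topics.sh tool can describe it:

../bin/kafka-topics.sh --describe --zookeeper 192.168.0.156:2181 --topic message_topic
# Output lists PartitionCount:2, ReplicationFactor:1 and the leader broker for each partition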
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.1.1.tar.gz
tar xf logstash-7.1.1.tar.gz -C /opt/
Edit the configuration file:
vim logstash.yml
path.data: /data/elk/logstash/data
pipeline.workers: 4
pipeline.batch.size: 125
pipeline.batch.delay: 50
path.config: /opt/logstash-7.1.1/config/conf.d
http.host: "192.168.0.193"
log.level: info
path.logs: /data/elk/logstash/logs
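The directories referenced in logstash.yml do not exist yet; create them before starting (paths taken from the config above):

mkdir -p /data/elk/logstash/{data,logs}
mkdir -p /opt/logstash-7.1.1/config/conf.d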
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.1-linux-x86_64.tar.gz
tar xf elasticsearch-7.1.1-linux-x86_64.tar.gz -C /opt/
Configure elasticsearch.yml:
cluster.name: my-elk
node.name: node02
path.data: /data/elk/data
path.logs: /data/elk/logs
network.host: 192.168.0.169
http.port: 9200
discovery.seed_hosts: ["node01", "node02"]
cluster.initial_master_nodes: ["node01", "node02"]
On the other node, just change node.name and network.host.
Create a regular user (Elasticsearch refuses to run as root):
useradd elastic
chown elastic.elastic /opt/elasticsearch-7.1.1/ -R
Create the data and log directories:
mkdir -p /data/elk/{data,logs}
chown elastic.elastic /data -R
Configure kernel parameters and file descriptor limits:
vim /etc/sysctl.conf
fs.file-max=65536
vm.max_map_count=262144

sysctl -p

vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 4096
* hard nproc 4096
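With the limits in place, verify them and start Elasticsearch as the elastic user (a sketch assuming the paths used above; -d runs it as a daemon):

sysctl vm.max_map_count   # should print 262144
su - elastic              # fresh login shell picks up limits.conf
ulimit -n                 # should print 65536
/opt/elasticsearch-7.1.1/bin/elasticsearch -d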
Check the cluster status:
# curl http://192.168.0.87:9200/_cluster/health?pretty
{
  "cluster_name" : "my-elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 2,
  "active_shards" : 4,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
Check the node status:
# curl http://192.168.0.87:9200/_cat/nodes?v
ip            heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.0.169           16          27   0    0.03    0.09     0.10 mdi       -      node02
192.168.0.87            14          44   0    0.05    0.08     0.09 mdi       *      node01
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.1.1-linux-x86_64.tar.gz
tar xf kibana-7.1.1-linux-x86_64.tar.gz -C /opt/
Edit the configuration file:
server.port: 5601
server.host: 192.168.0.113
elasticsearch.hosts: ["http://192.168.0.87:9200", "http://192.168.0.169:9200"]
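A minimal way to start Kibana in the background (Kibana 7 refuses to run as root unless started with --allow-root, so a regular user is assumed here):

cd /opt/kibana-7.1.1-linux-x86_64
nohup ./bin/kibana &
# Then browse to http://192.168.0.113:5601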
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.1.1-linux-x86_64.tar.gz
tar xf filebeat-7.1.1-linux-x86_64.tar.gz -C /opt/
Deploy Filebeat first.
Edit the configuration file:
# vim filebeat.yml
filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    name: nginx-access
  fields_under_root: false
  tail_files: false
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  fields:
    name: nginx-error
  fields_under_root: false
  tail_files: false

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:

output.kafka:
  enabled: true
  hosts: ["192.168.0.156:9092","192.168.0.42:9092","192.168.0.133:9092"]
  topic: 'nginx-topic'
  partition.round_robin:
    reachable_only: true
  worker: 4
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

logging.level: info
logging.to_files: true
logging.files:
  path: /data/elk/filebeat/logs
  name: filebeat
  rotateeverybytes: 52428800  # 50MB
  keepfiles: 5
Start the service:
nohup ./filebeat &
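Before wiring up Logstash, it is worth confirming that events are actually reaching Kafka; the console consumer from the Kafka test above works for this (run it on any broker):

cd /opt/kafka_2.11-2.2.0
./bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.156:9092 --topic nginx-topic --from-beginning
# Each Nginx log line should appear as a JSON document produced by Filebeat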
Then configure the Logstash pipeline:
# vim /opt/logstash-7.1.1/config/conf.d/nginx.conf
input {
  kafka {
    codec => "json"
    topics => ["nginx-topic"]
    bootstrap_servers => "192.168.0.156:9092,192.168.0.42:9092,192.168.0.133:9092"
    group_id => "logstash-g1"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.0.87:9200", "192.168.0.169:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
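Logstash can validate a pipeline file without starting it, which catches syntax errors early (a quick check using a standard Logstash 7 flag):

/opt/logstash-7.1.1/bin/logstash -f /opt/logstash-7.1.1/config/conf.d/nginx.conf --config.test_and_exit
# Configuration OK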
Start the service:
nohup ../../bin/logstash -f ../conf.d/nginx.conf &
curl '192.168.0.87:9200/_cat/indices?v'
health status index                      uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_task_manager       xaxQMaJsRnycacsKZJBW5A   1   1          2            9     33.2kb         16.6kb
green  open   .kibana_1                  TZ7_EmQMSFy1cPS4Irx7iw   1   1          7            0     87.4kb         43.7kb
green  open   logstash-2019.06.17-000001 vNCkz0a2R8unLxr5m9dSWg   1   1          2            0     82.1kb           41kb
On the Nginx machine, hit it with a few curl requests:
# curl localhost/121231
The resulting log entries look messy because we haven't done any log filtering yet.
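For reference, a minimal filter sketch that could be added to nginx.conf, assuming Nginx uses its default combined log format; the [fields][name] condition matches the name field set in filebeat.yml above, and the grok pattern should be adjusted to your actual log_format:

filter {
  if [fields][name] == "nginx-access" {
    # Parse the combined access-log line into structured fields
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    # Use the request time from the log as the event timestamp
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  }
}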