Filebeat + Kafka + Graylog + ES + MongoDB: a detailed guide to log collection and visualization

Graylog is an open-source, professional tool for log aggregation, analysis, auditing, visualization, and alerting. It is similar to ELK but simpler. This article explains how to deploy and use Graylog, and gives a brief walkthrough of its workflow.

This is a fairly long article. Three machines are used, hosting a Kafka cluster (2.3), an ES cluster (7.11.2), a MongoDB replica set (4.2), and a Graylog cluster (4.0.2). The logs collected are Kubernetes logs, shipped to Kafka by Filebeat (7.11.2) running as a DaemonSet. Starting from deployment, we will go step by step through how Graylog is set up and how to use it at a basic level.

Introduction to Graylog

Components

As the architecture diagram shows, Graylog consists of three parts:

  • mongodb: stores the Graylog console configuration, cluster state, and other metadata
  • es: stores the log data and serves searches
  • graylog: acts as the relay and processing layer

MongoDB and ES need little explanation; their roles are clear. The focus here is on Graylog's own components and what they do.

  • Inputs: the sources of log data; Graylog can pull logs via its Sidecar, or receive logs pushed from Beats, syslog, and other shippers
  • Extractors: field parsing and conversion, mainly JSON parsing, key-value parsing, timestamp parsing, and regex parsing
  • Streams: log classification; rules route incoming messages to a chosen index
  • Indices: persistent storage; configure the index name, rotation/retention policy, shard count, replica count, flush interval, etc.
  • Outputs: log forwarding; send parsed Streams to other Graylog clusters
  • Pipelines: log filtering; build cleansing rules, add or remove fields, apply conditional filters and custom functions
  • Sidecar: a lightweight log collector
  • Lookup Tables: service lookups, such as Whois queries by IP and threat intelligence based on source IP
  • Geolocation: geographic visualization based on source IP

Workflow

Graylog collects logs through configured Inputs, for example from Kafka or Redis, or directly from Filebeat. Each Input can be given Extractors that parse and transform fields in the log; multiple Extractors can be defined and they run in order. Messages are then matched against the rules defined on a Stream and saved into that Stream; a Stream can be bound to an index set, which determines where messages are stored in ES. Once this is in place, you can view the corresponding logs in the console by selecting the Stream.

Installing MongoDB

Following the official documentation, version 4.2.x is installed.

Time synchronization

Install ntpdate:

yum install ntpdate -y
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Add it to cron:

# crontab -e
5 * * * * ntpdate -u ntp.ntsc.ac.cn

Configure the repository and install

vim /etc/yum.repos.d/mongodb-org.repo
[mongodb-org-4.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc

Then install:

yum makecache
yum -y install mongodb-org

Then start the service:

systemctl daemon-reload
systemctl enable mongod.service
systemctl start mongod.service
systemctl --type=service --state=active | grep mongod

Edit the configuration file to set up the replica set

# vim /etc/mongod.conf
# mongod.conf

# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
# engine:
# wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.


#security:

#operationProfiling:

replication:
  replSetName: graylog-rs  # replica set name

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

Initialize the replica set

> use admin;
switched to db admin
> rs.initiate( {
...      _id : "graylog-rs",
...      members: [
...          { _id: 0, host: "10.0.105.74:27017"},
...          { _id: 1, host: "10.0.105.76:27017"},
...          { _id: 2, host: "10.0.105.96:27017"}
...      ]
...  })
{
        "ok" : 1,
        "$clusterTime" : {
                "clusterTime" : Timestamp(1615885669, 1),
                "signature" : {
                        "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
                        "keyId" : NumberLong(0)
                }
        },
        "operationTime" : Timestamp(1615885669, 1)
}

Check the replica set status

If nothing went wrong, the cluster will have two roles: one node is PRIMARY and the others are SECONDARY. You can check with:

rs.status()

It returns a large block of information, like this:

"members" : [
                {
                        "_id" : 0,
                        "name" : "10.0.105.74:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 623,
                        "optime" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-03-16T09:10:29Z"),
                        "syncingTo" : "",
                        "syncSourceHost" : "",
                        "syncSourceId" : -1,
                        "infoMessage" : "",
                        "electionTime" : Timestamp(1615885679, 1),
                        "electionDate" : ISODate("2021-03-16T09:07:59Z"),
                        "configVersion" : 1,
                        "self" : true,
                        "lastHeartbeatMessage" : ""
                },
                {
                        "_id" : 1,
                        "name" : "10.0.105.76:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 162,
                        "optime" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-03-16T09:10:29Z"),
                        "optimeDurableDate" : ISODate("2021-03-16T09:10:29Z"),
                        "lastHeartbeat" : ISODate("2021-03-16T09:10:31.690Z"),
                        "lastHeartbeatRecv" : ISODate("2021-03-16T09:10:30.288Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "10.0.105.74:27017",
                        "syncSourceHost" : "10.0.105.74:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "10.0.105.96:27017",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 162,
                        "optime" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDurable" : {
                                "ts" : Timestamp(1615885829, 1),
                                "t" : NumberLong(1)
                        },
                        "optimeDate" : ISODate("2021-03-16T09:10:29Z"),
                        "optimeDurableDate" : ISODate("2021-03-16T09:10:29Z"),
                        "lastHeartbeat" : ISODate("2021-03-16T09:10:31.690Z"),
                        "lastHeartbeatRecv" : ISODate("2021-03-16T09:10:30.286Z"),
                        "pingMs" : NumberLong(0),
                        "lastHeartbeatMessage" : "",
                        "syncingTo" : "10.0.105.74:27017",
                        "syncSourceHost" : "10.0.105.74:27017",
                        "syncSourceId" : 0,
                        "infoMessage" : "",
                        "configVersion" : 1
                }
        ]

Create users

Run this on one of the machines (user creation is a write, so it goes to the Primary):

use admin
db.createUser({user: "admin", pwd: "Root_1234", roles: ["root"]})
db.auth("admin","Root_1234")

Without exiting the shell, create another user for Graylog to connect with:

use graylog
db.createUser({
  user: "graylog",
  pwd: "Graylog_1234",
  roles: [
    { "role" : "dbOwner", "db" : "graylog" },
    { "role" : "readWrite", "db" : "graylog" }
  ]
})

Generate a keyFile

openssl rand -base64 756 > /var/lib/mongo/access.key

Fix permissions

chown -R mongod.mongod /var/lib/mongo/access.key
chmod 600 /var/lib/mongo/access.key

After generating the key, copy it to the other two machines and set the same ownership and permissions there:

scp -r /var/lib/mongo/access.key 10.0.105.76:/var/lib/mongo/
scp -r /var/lib/mongo/access.key 10.0.105.96:/var/lib/mongo/

Once copied, edit the configuration file:

# vim /etc/mongod.conf
#add the following settings
security:
  keyFile: /var/lib/mongo/access.key
  authorization: enabled

Apply the same change on all three machines, then restart the service:

systemctl restart mongod

Then log in and verify two things:

  • whether authentication succeeds
  • whether the replica set status is healthy

If both check out, the MongoDB 4.2 replica set installed via yum is ready; a quick verification is sketched below. After that, we move on to the ES cluster.
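A minimal sanity check, assuming the admin user created above (run it from any of the three hosts):

# authenticate as admin and confirm the replica set reports ok
mongo --host 10.0.105.74 -u admin -p Root_1234 --authenticationDatabase admin --eval 'printjson(rs.status().ok)'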

Deploying the ES cluster

The ES version used is the latest at the time of writing: 7.11.x.

System tuning

  1. Kernel parameters
# vim /etc/sysctl.conf
fs.file-max=655360
vm.max_map_count=655360
vm.swappiness = 0
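To apply the kernel parameters to the running system without a reboot:

sysctl -p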
  2. Adjust limits
# vim /etc/security/limits.conf
* soft nproc 655350
* hard nproc  655350
* soft nofile 655350
* hard nofile 655350
* hard memlock unlimited
* soft memlock unlimited
  3. Add a regular user (ES must be started as a non-root user)
groupadd es
useradd -g es es
echo 123456 | passwd es --stdin
  4. Install the JDK
yum install -y java-1.8.0-openjdk-devel.x86_64

Set environment variables

# vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.282.b08-1.el7_9.x86_64/
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
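Reload the profile in the current shell and confirm the JDK is on the PATH:

source /etc/profile
java -version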

Upload the tarball

ES download URL: artifacts.elastic.co/downloads/e…

Extract

tar zxvf elasticsearch-7.11.2-linux-x86_64.tar.gz -C /usr/local/

Fix ownership

chown -R es.es /usr/local/elasticsearch-7.11.2

Edit the ES configuration

Cluster settings

# vim /usr/local/elasticsearch-7.11.2/config/elasticsearch.yml
cluster.name: graylog-cluster
node.name: node03
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.0.105.74","10.0.105.76","10.0.105.96"]
cluster.initial_master_nodes: ["10.0.105.74","10.0.105.76"]
http.cors.enabled: true
http.cors.allow-origin: "*"
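The data and log paths referenced above are not created by a tarball install. Assuming the layout in this config, create them and hand them over to the es user:

mkdir -p /data/elasticsearch/data /data/elasticsearch/logs
chown -R es:es /data/elasticsearch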

Adjust the JVM heap size

# vim /usr/local/elasticsearch-7.11.2/config/jvm.options
-Xms16g  # set to half of the host's memory
-Xmx16g

Manage the service with systemd

# vim /usr/lib/systemd/system/elasticsearch.service
[Unit]
Description=elasticsearch server daemon
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=es
Group=es
LimitMEMLOCK=infinity
LimitNOFILE=655350
LimitNPROC=655350
ExecStart=/usr/local/elasticsearch-7.11.2/bin/elasticsearch
Restart=always

[Install]
WantedBy=multi-user.target

Start the service and enable it on boot

systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch

A quick check

# curl -XGET http://127.0.0.1:9200/_cluster/health?pretty
{
  "cluster_name" : "graylog-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 1,
  "active_shards" : 2,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

That completes the ES installation.

Deploying the Kafka cluster

Because these machines are reused and a Java environment was already installed earlier, that step is not repeated here.

Download the packages

kafka: https://www.dogfei.cn/pkgs/kafka_2.12-2.3.0.tgz
zookeeper: https://www.dogfei.cn/pkgs/apache-zookeeper-3.6.0-bin.tar.gz

Extract

tar zxvf kafka_2.12-2.3.0.tgz -C /usr/local/
tar zxvf apache-zookeeper-3.6.0-bin.tar.gz -C /usr/local/

Edit the configuration files

kafka (broker.id and listeners must be unique on each broker; the example below is for 10.0.105.74)

# grep -v -E "^#|^$" /usr/local/kafka_2.12-2.3.0/config/server.properties
broker.id=1
listeners=PLAINTEXT://10.0.105.74:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dirs=/data/kafka/data
num.partitions=8
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
message.max.bytes=20971520
log.retention.hours=1
log.retention.bytes=1073741824
log.segment.bytes=536870912
log.retention.check.interval.ms=300000
zookeeper.connect=10.0.105.74:2181,10.0.105.76:2181,10.0.105.96:2181
zookeeper.connection.timeout.ms=1000000
zookeeper.sync.time.ms=2000
group.initial.rebalance.delay.ms=0
log.cleaner.enable=true
delete.topic.enable=true

zookeeper

# grep -v -E "^#|^$" /usr/local/apache-zookeeper-3.6.0-bin/conf/zoo.cfg
tickTime=10000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
clientPort=2181
admin.serverPort=8888
server.1=10.0.105.74:22888:33888
server.2=10.0.105.76:22888:33888
server.3=10.0.105.96:22888:33888

Don't forget to create the corresponding directories on each node, along with ZooKeeper's myid file, as sketched below.
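A sketch, assuming the paths from the two config files above; the myid value must match the server.N entry for each host (1 on 10.0.105.74, 2 on 10.0.105.76, 3 on 10.0.105.96):

mkdir -p /data/kafka/data /data/kafka/logs /data/zookeeper/data /data/zookeeper/logs
echo 1 > /data/zookeeper/data/myid   # write 2 and 3 on the other two nodes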

Add systemd units

kafka

# cat /usr/lib/systemd/system/kafka.service
[Unit]
Description=Kafka
After=zookeeper.service

[Service]
Type=simple
Environment=LOG_DIR=/data/kafka/logs
WorkingDirectory=/usr/local/kafka_2.12-2.3.0
ExecStart=/usr/local/kafka_2.12-2.3.0/bin/kafka-server-start.sh /usr/local/kafka_2.12-2.3.0/config/server.properties
ExecStop=/usr/local/kafka_2.12-2.3.0/bin/kafka-server-stop.sh
Restart=always

[Install]
WantedBy=multi-user.target

zookeeper

# cat /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=zookeeper.service
After=network.target

[Service]
Type=forking
Environment=ZOO_LOG_DIR=/data/zookeeper/logs
ExecStart=/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh start
ExecStop=/usr/local/apache-zookeeper-3.6.0-bin/bin/zkServer.sh stop
Restart=always

[Install]
WantedBy=multi-user.target

Start the services

systemctl daemon-reload
systemctl start zookeeper
systemctl start kafka
systemctl enable zookeeper
systemctl enable kafka
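As an optional sanity check (a sketch; the partition and replication values here are assumptions, and the topic would also be auto-created on first write), create the topic Filebeat will publish to later and list it:

/usr/local/kafka_2.12-2.3.0/bin/kafka-topics.sh --bootstrap-server 10.0.105.74:9092 --create --topic dev-k8s-log --partitions 8 --replication-factor 2
/usr/local/kafka_2.12-2.3.0/bin/kafka-topics.sh --bootstrap-server 10.0.105.74:9092 --list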

Deploying Filebeat

Because the logs being collected come from Kubernetes, Filebeat is deployed as a DaemonSet. A reference DaemonSet manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: filebeat
  name: filebeat
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
      name: filebeat
    spec:
      affinity: {}
      containers:
      - args:
        - -e
        - -E
        - http.enabled=true
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: docker.elastic.co/beats/filebeat:7.11.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              #!/usr/bin/env bash -e
              curl --fail 127.0.0.1:5066
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: filebeat
        resources:
          limits:
            cpu: "1"
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          privileged: false
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/filebeat/filebeat.yml
          name: filebeat-config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /opt/docker/containers/
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: filebeat
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - configMap:
          defaultMode: 384
          name: filebeat-daemonset-config
        name: filebeat-config
      - hostPath:
          path: /opt/docker/containers
          type: ""
        name: varlibdockercontainers
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
        name: data
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
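Note that the manifest references a filebeat ServiceAccount that is not shown here; the add_kubernetes_metadata processor needs read access to pod metadata. A minimal RBAC sketch (the exact resource list is an assumption):

kubectl create serviceaccount filebeat -n default
kubectl create clusterrole filebeat --verb=get,list,watch --resource=pods,namespaces,nodes
kubectl create clusterrolebinding filebeat --clusterrole=filebeat --serviceaccount=default:filebeat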

A reference ConfigMap:

apiVersion: v1
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log
      # merge multi-line log entries
      multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
      multiline.negate: true
      multiline.match: after
      multiline.timeout: 30
      fields:
        # custom field used downstream to identify k8s log input
        service: k8s-log

      # disable collection of the host.xxxx fields
      #publisher_pipeline.disable_host: true
      processors:
        - add_kubernetes_metadata:
            # add k8s metadata fields
            default_indexers.enabled: true
            default_matchers.enabled: true
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"
        - drop_fields:
            # redundant fields to drop
            fields: ["host", "tags", "ecs", "log", "prospector", "agent", "input", "beat", "offset"]
            ignore_missing: true
    output.kafka:
      hosts: ["10.0.105.74:9092","10.0.105.76:9092","10.0.105.96:9092"]
      topic: "dev-k8s-log"
      compression: gzip
      max_message_bytes: 1000000
kind: ConfigMap
metadata:
  labels:
    app: filebeat
  name: filebeat-daemonset-config
  namespace: default

Then apply the manifests to bring the pods up, roughly as sketched below.
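A sketch, assuming the two manifests above were saved under the file names shown:

kubectl apply -f filebeat-configmap.yaml
kubectl apply -f filebeat-daemonset.yaml
kubectl get pods -l app=filebeat -o wide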

Deploying the Graylog cluster

Import the RPM repository package

rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-4.0-repository_latest.rpm

Install

yum install graylog-server -y

Start the service and enable it on boot

systemctl enable graylog-server
systemctl start graylog-server

Generate secrets

Generate two secrets, used for root_password_sha2 and password_secret in the configuration file.

# echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1
# pwgen -N -1 -s 40 1 # if this command is not available, grab an Ubuntu machine and install it with apt install pwgen

Edit the configuration file

# vim /etc/graylog/server/server.conf
is_master = false  # whether this node is the master; set to true on exactly one node in the cluster
node_id_file = /etc/graylog/server/node-id
password_secret = iMh21uM57Pt2nMHDicInjPvnE8o894AIs7rJj9SW  # the secret generated above
root_password_sha2 = 8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92 # the SHA-256 hash generated above
plugin_dir = /usr/share/graylog-server/plugin
http_bind_address = 0.0.0.0:9000
http_publish_uri = http://10.0.105.96:9000/
web_enable = true
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 2
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 5000
output_flush_interval = 120
output_fault_count_threshold = 8
output_fault_penalty_seconds = 120
processbuffer_processors = 20
outputbuffer_processors = 40
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://graylog:Graylog_1234@10.0.105.74:27017,10.0.105.76:27017,10.0.105.96:27017/graylog?replicaSet=graylog-rs
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
elasticsearch_hosts = http://10.0.105.74:9200,http://10.0.105.76:9200,http://10.0.105.96:9200
elasticsearch_discovery_enabled = true

Pay attention to how MongoDB and ES are connected here. Since everything in this setup is clustered, cluster connection strings are used:

mongodb_uri = mongodb://graylog:Graylog_1234@10.0.105.74:27017,10.0.105.76:27017,10.0.105.96:27017/graylog?replicaSet=graylog-rs
elasticsearch_hosts = http://10.0.105.74:9200,http://10.0.105.76:9200,http://10.0.105.96:9200
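After editing the config, restart graylog-server and make sure the node comes up cleanly; a quick sketch:

systemctl restart graylog-server
tail -f /var/log/graylog-server/server.log   # watch for MongoDB/ES connection errors
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9000/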

That wraps up the deployment work. What is left is configuration in the Graylog console, but first Graylog has to be exposed, which can be done with an nginx reverse proxy. A reference nginx configuration:

user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 65535;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    upstream graylog_servers {
        server 10.0.105.74:9000;
        server 10.0.105.76:9000;
        server 10.0.105.96:9000;
    }

    server {
        listen       80 default_server;
        server_name  your.graylog.domain;  # set your own domain name here
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Graylog-Server-URL http://$server_name/;
            proxy_pass http://graylog_servers;
        }
    }
}
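Check the syntax and reload nginx (sketch):

nginx -t && systemctl reload nginx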

After reloading nginx, open the address in a browser. The username is admin, and the password is the plaintext you hashed with SHA-256 earlier (not the hash itself).

Ingesting logs into Graylog

Configure the input

System --> Inputs

Raw/Plaintext Kafka ---> Launch new input

Set the Kafka and ZooKeeper addresses and the topic name, then save.

All inputs should show a RUNNING state.

Create an index set

System/indices

Set the index details: index name, replica count, shard count, rotation strategy, and retention strategy.

Create Streams

Add rules

Save, and that's it; the logs will then show up on the search page.

Summary

That completes the full deployment walkthrough: how Graylog is deployed and a first look at how to use it. Future posts will explore its other features, such as extracting fields from logs. Stay tuned.
