###################
- Flume is a distributed, reliable, and highly available system for collecting, aggregating, and transporting large volumes of log data.
- Flume can collect data from many kinds of sources (files, socket streams, etc.) and write the collected data to many external stores, such as HDFS, HBase, Hive, and Kafka.
- Common collection needs can be met with simple Flume configuration.
- Flume also has good custom-extension support for special scenarios, so it fits most day-to-day data-collection work.
Flume collection configuration files:
Monitoring a port:
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start:
bin/flume-ng agent --conf conf --conf-file conf/netcat-logger.conf --name a1 -Dflume.root.logger=INFO,console
--conf-file points at the collection config, --name a1 selects the agent defined above, and the -Dflume.root.logger option is needed to see the events on the console when running in the foreground.
From another machine, connect with telnet (e.g. telnet localhost 44444) and send data; each line becomes one event.
Monitoring a directory:
Configuration:
a1.sources.r1.type = spooldir
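A spooling-directory source also needs at least the directory to watch. A minimal sketch, assuming a placeholder directory /home/hadoop/flumespool (not from the original notes) and the same channel/sink wiring as the netcat example above:

```properties
# Spooling-directory source: ingests files dropped into a watched directory.
# The path below is a placeholder, not from the original notes.
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/flumespool
# Optionally add a header carrying the absolute path of the ingested file.
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1
```

Files are renamed with a .COMPLETED suffix once they have been fully ingested, so only new files dropped into the directory are picked up.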
Tailing a log file with a shell command and storing the data in HDFS:
Start command:
bin/flume-ng agent -c conf -f conf/tail-hdfs.conf -n a1
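To see what the exec source receives, you can generate a few lines and tail them the same way the source does. A sketch using a temporary file (the real config tails /home/hadoop/log/test.log):

```shell
# Write a few test lines to a temp file, then show what "tail" hands over.
LOG=$(mktemp)
for i in 1 2 3; do
  echo "event $i" >> "$LOG"
done
# The exec source runs a similar tail against the real log.
tail -n 2 "$LOG"
rm -f "$LOG"
```

This prints the last two lines ("event 2", "event 3"); with tail -F, the source instead follows the file continuously and re-opens it after rotation.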
################################################################
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
#exec means the source runs a shell command
# Describe/configure the source
a1.sources.r1.type = exec
#tail -F follows by file name (re-opens after rotation); tail -f follows the original file by inode
a1.sources.r1.command = tail -F /home/hadoop/log/test.log
a1.sources.r1.channels = c1
# Describe the sink
#sink type
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
#target directory; Flume substitutes the time escape sequences (%y-%m-%d, %H%M)
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/
#prefix for the generated file names
a1.sinks.k1.hdfs.filePrefix = events-
#roll over to a new directory every 10 minutes
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
#time to wait before rolling the file (seconds)
a1.sinks.k1.hdfs.rollInterval = 3
#size threshold for rolling the file (bytes)
a1.sinks.k1.hdfs.rollSize = 500
#number of events written before rolling the file
a1.sinks.k1.hdfs.rollCount = 20
#flush to HDFS after every 5 events
a1.sinks.k1.hdfs.batchSize = 5
#use the local time when substituting the escape sequences in the path
a1.sinks.k1.hdfs.useLocalTimeStamp = true
#file format of the generated files; the default is SequenceFile, DataStream writes plain text
a1.sinks.k1.hdfs.fileType = DataStream
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Using avro, chain multiple agents together so that data collected on one machine is stored in HDFS on another machine:
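A common layout uses two agents: one on the collecting machine whose avro sink forwards events, and one on the HDFS machine whose avro source receives them. A minimal sketch; the agent names (foo, bar), host name, and port are placeholders, not from the original notes:

```properties
# --- agent "foo" on the collecting machine: exec source -> avro sink ---
foo.sources = r1
foo.sinks = k1
foo.channels = c1
foo.sources.r1.type = exec
foo.sources.r1.command = tail -F /home/hadoop/log/test.log
foo.sinks.k1.type = avro
# Placeholder host/port of the machine running agent "bar".
foo.sinks.k1.hostname = hdfs-host
foo.sinks.k1.port = 4141
foo.channels.c1.type = memory
foo.sources.r1.channels = c1
foo.sinks.k1.channel = c1

# --- agent "bar" on the HDFS machine: avro source -> hdfs sink ---
bar.sources = r1
bar.sinks = k1
bar.channels = c1
bar.sources.r1.type = avro
bar.sources.r1.bind = 0.0.0.0
bar.sources.r1.port = 4141
bar.sinks.k1.type = hdfs
bar.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/
bar.sinks.k1.hdfs.useLocalTimeStamp = true
bar.sinks.k1.hdfs.fileType = DataStream
bar.channels.c1.type = memory
bar.sources.r1.channels = c1
bar.sinks.k1.channel = c1
```

Start agent "bar" first so the avro source is listening before agent "foo" tries to connect.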