What is Flume?
Flume is a real-time log collection system developed by Cloudera that has been widely recognized and adopted in industry. Flume's initial releases are now collectively known as Flume OG (original generation) and belonged to Cloudera. As Flume's functionality grew, the shortcomings of Flume OG became apparent: a bloated codebase, poorly designed core components, and non-standard core configuration. Log transport was especially unstable in 0.94.0, the last Flume OG release. To address these problems, on October 22, 2011 Cloudera completed Flume-728, a milestone change that refactored the core components, core configuration, and code architecture; the refactored version is collectively known as Flume NG (next generation). Another reason for the change was moving Flume under the Apache umbrella, with Cloudera Flume renamed to Apache Flume.
flume的特色:
flume是一個分佈式、可靠、和高可用的海量日誌採集、聚合和傳輸的系統。支持在日誌系統中定製各種數據發送方,用於
收集數據;同時,Flume提供對數據進行簡單處理,並寫到各類數據接受方(好比文本、HDFS、Hbase等)的能力 。
flume的數據流由事件(Event)貫穿始終。事件是Flume的基本數據單位,它攜帶日誌數據(字節數組形式)而且攜帶有頭信息,
這些Event由Agent外部的Source生成,當Source捕獲事件後會進行特定的格式化,而後Source會把事件推入(單個或多個)
Channel中。你能夠把Channel看做是一個緩衝區,它將保存事件直到Sink處理完該事件。Sink負責持久化日誌或者把事件推向
另外一個Source。
Flume's reliability
When a node fails, logs can be delivered to other nodes without being lost. Flume provides three levels of reliability guarantee, from strongest to weakest: end-to-end (the receiving agent first writes the event to disk and deletes it only after the data has been delivered successfully; if delivery fails, the data can be resent), store on failure (the strategy scribe also uses: when the data receiver crashes, write the data locally and resume sending once the receiver recovers), and best effort (data is sent to the receiver without any acknowledgement).
Flume's recoverability
Recoverability again relies on the Channel. The FileChannel is recommended: events are persisted to the local file system (at the cost of lower performance).
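A minimal FileChannel sketch (the directory paths here are illustrative assumptions, not part of this walkthrough):
# persist events to the local file system so they survive an agent restart
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /var/flume/checkpoint
a1.channels.c1.dataDirs = /var/flume/data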
Some core Flume concepts
Agent: Flume runs inside a JVM as an agent. Each machine runs one agent, but a single agent can contain multiple sources and sinks.
Client: produces the data; runs in its own thread.
Source: collects data from the Client and passes it to the Channel.
Sink: collects data from the Channel; runs in its own thread.
Channel: connects the sources and the sinks; it behaves somewhat like a queue.
Event: can be a log record, an Avro object, and so on.
Flume's smallest independently running unit is the agent; one agent is one JVM. A single agent is built from three components: Source, Sink, and Channel.
Note that Flume ships with a large number of built-in Source, Channel, and Sink types, and the different types can be combined freely. How they are combined is driven by the user's configuration file, which makes Flume very flexible. For example, a Channel can stage events in memory or persist them to local disk, and a Sink can write logs to HDFS, HBase, or even another Source. Flume also lets users build multi-tier flows, meaning multiple agents can work together, with support for fan-in, fan-out, contextual routing, and backup routes; this is where Flume really shines.
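As a hedged sketch of such a multi-tier flow (the host names and port are assumptions), a first-tier agent forwards events over Avro to a collector agent whose Avro source listens on the same port:
# tier-1 agent: Avro sink pointing at the collector
agent1.sinks.s1.type = avro
agent1.sinks.s1.hostname = collector-host
agent1.sinks.s1.port = 4545
# collector agent: Avro source receiving from tier-1
collector.sources.r1.type = avro
collector.sources.r1.bind = 0.0.0.0
collector.sources.r1.port = 4545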
Where is Flume's official website?
Official website: http://flume.apache.org/
Installation
wget http://mirrors.hust.edu.cn/apache/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz
tar -zxvf apache-flume-1.6.0-bin.tar.gz
mv apache-flume-1.6.0-bin /usr/local/flume
cd /usr/local/flume
cp conf/flume-conf.properties.template conf/flume-conf.properties # configuration file
mkdir -p /tmp/log/flume # create the output directory
Edit the configuration file conf/flume-conf.properties
vim conf/flume-conf.properties
agent.sources = r1
agent.channels = c1
agent.sinks = s1
agent.sources.r1.type = netcat
agent.sources.r1.bind = localhost
agent.sources.r1.port = 8888
agent.sources.r1.channels = c1
agent.sinks.s1.type = file_roll
agent.sinks.s1.sink.directory = /tmp/log/flume
agent.sinks.s1.channel = c1
agent.channels.c1.type = memory
agent.channels.c1.capacity = 100
Start the Flume service
bin/flume-ng agent --conf conf -f conf/flume-conf.properties -n agent & # start the service
Note: runtime logs are written under the logs directory. Alternatively, start with the -Dflume.root.logger=INFO,console option to run in the foreground and print logs to the console; check these logs for the cause whenever the service misbehaves.
Verifying the installation
bin/flume-ng version # check the Flume version
telnet localhost 8888 # send data
Type some data:
hello world!
hello Flume!
Then check the contents of the files under /tmp/log/flume.
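For example (the output file names are timestamp-generated, so the wildcard is the safe way to look):
ls /tmp/log/flume # list the rolled output files
cat /tmp/log/flume/* # the lines typed into telnet should appear here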
Notes on Flume configuration files
Avro mode
Avro can send a given file to Flume; the Avro source uses the Avro RPC mechanism.
Create the agent configuration file
vim /usr/local/flume/conf/avro.conf
-------------------------------------------avro.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
------------------------------------------- end ------------------------------------------------
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/avro.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
echo "hello world" > /usr/local/flume/log/log.00 #建立指定文件
/usr/local/flume/bin/flume-ng avro-client -c . -H m1 -p 4141 -F /usr/local/flume/log/log.00 # send the file with avro-client
The file's contents are printed on the agent's console.
Spool mode
The spooldir source watches the configured directory for new files and reads the data out of them. Two caveats:
1) Files copied into the spool directory must not be opened and edited afterwards.
2) The spool directory must not contain subdirectories.
Create the agent configuration file
vim /usr/local/flume/conf/spool.conf
-------------------------------------------spool.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.channels = c1
a1.sources.r1.spoolDir = /usr/local/flume/log
a1.sources.r1.fileHeader = true
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
------------------------------------------------------------------------------------------------------
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/spool.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
echo "spool test1" > /usr/local/flume/log/spool_text.log #內容追加到指定目錄
The event is printed on the agent's console.
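Assuming the default fileSuffix setting, the spooldir source renames each fully ingested file, which gives a quick way to verify pickup:
ls /usr/local/flume/log
# spool_text.log.COMPLETED  (renamed once Flume has ingested the file)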
Exec mode
The exec source runs a given command and uses its output as the data source. If you use the tail command, the file must be large enough (and keep receiving content) before you will see any output.
Create the agent configuration file
vim /usr/local/flume/conf/exec_tail.conf
-------------------------------------------exec_tail.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /usr/local/flume/log/log_exec_tail
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
----------------------------------------------------------------------------------------------------------
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/exec_tail.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
for i in {1..100};do echo "exec tail$i" >> /usr/local/flume/log/log_exec_tail;echo $i;sleep 0.1;done # append lines to the tailed file
The events are printed on the agent's console.
Syslog TCP mode
1. The syslogtcp source listens on a TCP port as the data source.
2. The syslogudp source listens on a UDP port as the data source; see the sketch right after this list.
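For the UDP variant, only the source type changes; a hedged sketch mirroring the TCP config below:
a1.sources.r1.type = syslogudp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost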
Create the agent configuration file
vim /usr/local/flume/conf/syslog_tcp.conf
-------------------------------------------syslog_tcp.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
-----------------------------------------------------------------------------------------------------------
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/syslog_tcp.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
echo "hello idoall.org syslog" | nc localhost 5140 #測試產生syslog
The event is printed on the agent's console.
JSONHandler mode
The HTTP source accepts events POSTed as a JSON array of header/body objects, which its default JSONHandler parses into Flume events.
Create the agent configuration file
vim /usr/local/flume/conf/post_json.conf
-------------------------------------------post_json.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 8888
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
---------------------------------------------------------------------------------------------------------
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/post_json.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
curl -X POST -d '[{ "headers" :{"a" : "a1","b" : "b1"},"body" : "idoall.org_body"}]' http://localhost:8888 # POST a test JSON event
The event is printed on the agent's console.
Hadoop (HDFS) sink mode
For installing and deploying hadoop 2.2.0 itself, refer to other documentation.
Create the agent configuration file
vim /usr/local/flume/conf/hdfs_sink.conf
-------------------------------------------hdfs_sink.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://m1:9000/user/flume/syslogtcp
a1.sinks.k1.hdfs.filePrefix = Syslog
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
---------------------------------------------------------------------------------------------------------
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/hdfs_sink.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
echo "hello idoall flume -> hadoop testing one" | nc localhost 5140 #測試產生syslog
The agent's console logs show the write to HDFS.
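To confirm the write, list the target directory (a hedged sketch; the path matches hdfs.path above, and the generated file names will differ from run to run):
hdfs dfs -ls /user/flume/syslogtcp
hdfs dfs -cat /user/flume/syslogtcp/Syslog.*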
File Roll sink mode
Create the agent configuration file
vim /usr/local/flume/conf/file_roll.conf
-------------------------------------------file_roll.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5555
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /usr/local/flume/log
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
--------------------------------------------------------------------------------------------------------
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/file_roll.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
echo "hello idoall.org syslog" | nc localhost 5555
echo "hello idoall.org syslog 2" | nc localhost 5555
Check that files are being generated under /usr/local/flume/log; by default a new file is rolled every 30 seconds.
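The interval is tunable via the file_roll sink's sink.rollInterval property (in seconds; 0 disables rolling), for example:
# roll a new file every 60 seconds instead of the default 30
a1.sinks.k1.sink.rollInterval = 60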
Replicating Channel Selector mode
Flume supports fanning out a flow from one source to multiple channels. There are two fan-out modes: replicating and multiplexing. In replicating mode, an event is sent to every configured channel. In multiplexing mode, an event is sent only to a subset of the available channels. Fan-out requires rules specifying the source and the fan-out channels.
This example needs two machines, m1 and m2.
Create the replicating_Channel_Selector configuration file on m1
vim /usr/local/flume/conf/replicating_Channel_Selector.conf
------------------------------------------- replicating_Channel_Selector.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1 c2
a1.sources.r1.selector.type = replicating
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = m1
a1.sinks.k1.port = 5555
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = m2
a1.sinks.k2.port = 5555
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
---------------------------------------------------------------------------------------------------------------------------------
vim /usr/local/flume/conf/replicating_Channel_Selector_avro.conf
------------------------------------------- replicating_Channel_Selector_avro.conf-----------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5555
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
---------------------------------------------------------------------------------------------------------------------------------
Copy both configuration files from m1 over to m2.
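For example (a hedged sketch; the user name is an assumption):
scp /usr/local/flume/conf/replicating_Channel_Selector*.conf hadoop@m2:/usr/local/flume/conf/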
Open four windows and start the two Flume agents on m1 and m2 at the same time:
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/replicating_Channel_Selector_avro.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/replicating_Channel_Selector.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
Then, on either m1 or m2, generate a test syslog message:
echo "hello idoall.org syslog" | nc localhost 5140
In the sink windows on both m1 and m2 you can see the following output, which shows the message was replicated to both:
16/12/28 14:08:18 INFO ipc.NettyServer: Connection to /192.168.1.51:46844 disconnected.
16/12/28 14:08:52 INFO ipc.NettyServer: [id: 0x90f8fe1f, /192.168.1.50:35873 => /192.168.1.50:5555] OPEN
16/12/28 14:08:52 INFO ipc.NettyServer: [id: 0x90f8fe1f, /192.168.1.50:35873 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
16/12/28 14:08:52 INFO ipc.NettyServer: [id: 0x90f8fe1f, /192.168.1.50:35873 => /192.168.1.50:5555] CONNECTED: /192.168.1.50:35873
16/12/28 14:08:59 INFO ipc.NettyServer: [id: 0xd6318635, /192.168.1.51:46858 => /192.168.1.50:5555] OPEN
16/12/28 14:08:59 INFO ipc.NettyServer: [id: 0xd6318635, /192.168.1.51:46858 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
16/12/28 14:08:59 INFO ipc.NettyServer: [id: 0xd6318635, /192.168.1.51:46858 => /192.168.1.50:5555] CONNECTED: /192.168.1.51:46858
16/12/28 14:09:20 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 68 65 6C 6C 6F 20 69 64 6F 61 6C 6C 2E 6F 72 67 hello idoall.org }
Multiplexing Channel Selector mode
Create the Multiplexing_Channel_Selector configuration file on m1
vim /usr/local/flume/conf/Multiplexing_Channel_Selector.conf
------------------------------------------- Multiplexing_Channel_Selector.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1 c2
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = type
# Mappings allow each value's channels to overlap. The default may contain any number of channels.
a1.sources.r1.selector.mapping.baidu = c1
a1.sources.r1.selector.mapping.ali = c2
a1.sources.r1.selector.default = c1
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = m1
a1.sinks.k1.port = 5555
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = m2
a1.sinks.k2.port = 5555
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
------------------------------------------------------------------------------------------------------------------------------
vim /usr/local/flume/conf/Multiplexing_Channel_Selector_avro.conf
------------------------------------------- Multiplexing_Channel_Selector_avro.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5555
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
-----------------------------------------------------------------------------------------------------------------------------------
Copy both configuration files from m1 over to m2.
Open four windows and start the two Flume agents on m1 and m2 at the same time:
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/Multiplexing_Channel_Selector.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/Multiplexing_Channel_Selector_avro.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
Then, on either m1 or m2, POST some test events:
curl -X POST -d '[{ "headers" :{"type" : "baidu"},"body" : "idoall_TEST1"}]' http://localhost:5140 && curl -X POST -d '[{ "headers" :{"type" : "ali"},"body" : "idoall_TEST2"}]' http://localhost:5140 && curl -X POST -d '[{ "headers" :{"type" : "qq"},"body" : "idoall_TEST3"}]' http://localhost:5140
----In the m1 sink window, you can see the following----->
16/12/28 14:32:21 INFO node.Application: Starting Sink k1
16/12/28 14:32:21 INFO node.Application: Starting Source r1
16/12/28 14:32:21 INFO source.AvroSource: Starting Avro source r1: { bindAddress: 0.0.0.0, port: 5555 }...
16/12/28 14:32:21 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
16/12/28 14:32:21 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
16/12/28 14:32:21 INFO source.AvroSource: Avro source r1 started.
16/12/28 14:32:36 INFO ipc.NettyServer: [id: 0xcf00eea6, /192.168.1.50:35916 => /192.168.1.50:5555] OPEN
16/12/28 14:32:36 INFO ipc.NettyServer: [id: 0xcf00eea6, /192.168.1.50:35916 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
16/12/28 14:32:36 INFO ipc.NettyServer: [id: 0xcf00eea6, /192.168.1.50:35916 => /192.168.1.50:5555] CONNECTED: /192.168.1.50:35916
16/12/28 14:32:44 INFO ipc.NettyServer: [id: 0x432f5468, /192.168.1.51:46945 => /192.168.1.50:5555] OPEN
16/12/28 14:32:44 INFO ipc.NettyServer: [id: 0x432f5468, /192.168.1.51:46945 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
16/12/28 14:32:44 INFO ipc.NettyServer: [id: 0x432f5468, /192.168.1.51:46945 => /192.168.1.50:5555] CONNECTED: /192.168.1.51:46945
16/12/28 14:34:11 INFO sink.LoggerSink: Event: { headers:{type=baidu} body: 69 64 6F 61 6C 6C 5F 54 45 53 54 31 idoall_TEST1 }
16/12/28 14:34:57 INFO sink.LoggerSink: Event: { headers:{type=qq} body: 69 64 6F 61 6C 6C 5F 54 45 53 54 33 idoall_TEST3 }
----In the m2 sink window, you can see the following----->
16/12/28 14:32:27 INFO node.Application: Starting Sink k1
16/12/28 14:32:27 INFO node.Application: Starting Source r1
16/12/28 14:32:27 INFO source.AvroSource: Starting Avro source r1: { bindAddress: 0.0.0.0, port: 5555 }...
16/12/28 14:32:27 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
16/12/28 14:32:27 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
16/12/28 14:32:27 INFO source.AvroSource: Avro source r1 started.
16/12/28 14:32:36 INFO ipc.NettyServer: [id: 0x7c2f0aec, /192.168.1.50:38104 => /192.168.1.51:5555] OPEN
16/12/28 14:32:36 INFO ipc.NettyServer: [id: 0x7c2f0aec, /192.168.1.50:38104 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
16/12/28 14:32:36 INFO ipc.NettyServer: [id: 0x7c2f0aec, /192.168.1.50:38104 => /192.168.1.51:5555] CONNECTED: /192.168.1.50:38104
16/12/28 14:32:44 INFO ipc.NettyServer: [id: 0x3d36f553, /192.168.1.51:48599 => /192.168.1.51:5555] OPEN
16/12/28 14:32:44 INFO ipc.NettyServer: [id: 0x3d36f553, /192.168.1.51:48599 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
16/12/28 14:32:44 INFO ipc.NettyServer: [id: 0x3d36f553, /192.168.1.51:48599 => /192.168.1.51:5555] CONNECTED: /192.168.1.51:48599
16/12/28 14:34:33 INFO sink.LoggerSink: Event: { headers:{type=ali} body: 69 64 6F 61 6C 6C 5F 54 45 53 54 32 idoall_TEST2 }
As you can see, events are routed to different channels according to the conditions on the header.
Flume Sink Processors mode
With failover, events keep going to a single sink; only when that sink becomes unavailable are they automatically sent to the next sink.
Create the Flume_Sink_Processors configuration file on m1
vim /usr/local/flume/conf/Flume_Sink_Processors.conf
------------------------------------------- Flume_Sink_Processors.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# The key to configuring failover: a sink group is required
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
# the processor type is failover
a1.sinkgroups.g1.processor.type = failover
# priority: the higher the number, the higher the priority; every sink must have a distinct priority
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
# set to 10 seconds here; tune faster or slower for your environment
a1.sinkgroups.g1.processor.maxpenalty = 10000
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1 c2
a1.sources.r1.selector.type = replicating
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = m1
a1.sinks.k1.port = 5555
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = m2
a1.sinks.k2.port = 5555
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
-----------------------------------------------------------------------------------------------------------------------------------
vim /usr/local/flume/conf/Flume_Sink_Processors_avro.conf
------------------------------------------- Flume_Sink_Processors_avro.conf--------------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5555
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
------------------------------------------------------------------------------------------------------------------------------------
Copy both configuration files from m1 over to m2.
Open four windows and start the two Flume agents on m1 and m2 at the same time:
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/Flume_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/Flume_Sink_Processors.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
Then, on either m1 or m2, generate a test log message:
echo "idoall.org test1 failover" | nc localhost 5140
Because m2 has the higher priority, the following shows up in m2's sink window, while m1 shows nothing:
16/12/28 15:02:46 INFO ipc.NettyServer: Connection to /192.168.1.51:48692 disconnected.
16/12/28 15:03:12 INFO ipc.NettyServer: [id: 0x09a14036, /192.168.1.51:48704 => /192.168.1.51:5555] OPEN
16/12/28 15:03:12 INFO ipc.NettyServer: [id: 0x09a14036, /192.168.1.51:48704 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
16/12/28 15:03:12 INFO ipc.NettyServer: [id: 0x09a14036, /192.168.1.51:48704 => /192.168.1.51:5555] CONNECTED: /192.168.1.51:48704
16/12/28 15:03:26 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 31 idoall.org test1 }
Now stop the sink on m2 (Ctrl+C) and send test data again:
echo "idoall.org test2 failover" | nc localhost 5140
In m1's sink window you can now see both of the test messages that were sent:
16/12/28 15:02:46 INFO ipc.NettyServer: Connection to /192.168.1.51:47036 disconnected.
16/12/28 15:03:12 INFO ipc.NettyServer: [id: 0xbcf79851, /192.168.1.51:47048 => /192.168.1.50:5555] OPEN
16/12/28 15:03:12 INFO ipc.NettyServer: [id: 0xbcf79851, /192.168.1.51:47048 => /192.168.1.50:5555] BOUND: /192.168.1.50:5555
16/12/28 15:03:12 INFO ipc.NettyServer: [id: 0xbcf79851, /192.168.1.51:47048 => /192.168.1.50:5555] CONNECTED: /192.168.1.51:47048
16/12/28 15:07:56 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 31 idoall.org test1 }
16/12/28 15:07:56 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 32 idoall.org test2 }
Now restart the sink in m2's window:
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/Flume_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
Send two more batches of test data:
echo "idoall.org test3 failover" | nc localhost 5140 && echo "idoall.org test4 failover" | nc localhost 5140
In m2's sink window you can see the following; because of the priorities, log messages land on m2 again:
16/12/28 15:09:47 INFO node.Application: Starting Sink k1
16/12/28 15:09:47 INFO node.Application: Starting Source r1
16/12/28 15:09:47 INFO source.AvroSource: Starting Avro source r1: { bindAddress: 0.0.0.0, port: 5555 }...
16/12/28 15:09:47 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
16/12/28 15:09:47 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: r1 started
16/12/28 15:09:47 INFO source.AvroSource: Avro source r1 started.
16/12/28 15:09:54 INFO ipc.NettyServer: [id: 0x96615732, /192.168.1.51:48741 => /192.168.1.51:5555] OPEN
16/12/28 15:09:54 INFO ipc.NettyServer: [id: 0x96615732, /192.168.1.51:48741 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
16/12/28 15:09:54 INFO ipc.NettyServer: [id: 0x96615732, /192.168.1.51:48741 => /192.168.1.51:5555] CONNECTED: /192.168.1.51:48741
16/12/28 15:09:57 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 32 idoall.org test2 }
16/12/28 15:10:43 INFO ipc.NettyServer: [id: 0x12621f9a, /192.168.1.50:38166 => /192.168.1.51:5555] OPEN
16/12/28 15:10:43 INFO ipc.NettyServer: [id: 0x12621f9a, /192.168.1.50:38166 => /192.168.1.51:5555] BOUND: /192.168.1.51:5555
16/12/28 15:10:43 INFO ipc.NettyServer: [id: 0x12621f9a, /192.168.1.50:38166 => /192.168.1.51:5555] CONNECTED: /192.168.1.50:38166
16/12/28 15:10:43 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 33 idoall.org test3 }
16/12/28 15:10:43 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 34 idoall.org test4 }
Load balancing Sink Processor mode
Where load_balance differs from failover is that it has two selection policies, round-robin and random. In either case, if the selected sink is unavailable, the processor automatically tries the next available sink.
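Switching to random selection only changes the selector property (a hedged sketch against the config below):
a1.sinkgroups.g1.processor.selector = random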
Create the Load_balancing_Sink_Processors configuration file on m1
vim /usr/local/flume/conf/Load_balancing_Sink_Processors.conf
------------------------------------------- Load_balancing_Sink_Processors.conf------------------------------------------------
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1
# The key to configuring load balancing: a sink group is required
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = m1
a1.sinks.k1.port = 5555
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c1
a1.sinks.k2.hostname = m2
a1.sinks.k2.port = 5555
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
---------------------------------------------------------------------------------------------------------------------------------
Create the Load_balancing_Sink_Processors_avro configuration file on m1
vim /usr/local/flume/conf/Load_balancing_Sink_Processors_avro.conf
------------------------------------------- Load_balancing_Sink_Processors_avro.conf----------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5555
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
----------------------------------------------------------------------------------------------------------------------------------
Copy both configuration files from m1 over to m2.
Open four windows and start the two Flume agents on m1 and m2 at the same time:
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/Load_balancing_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/Load_balancing_Sink_Processors.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
Then, on either m1 or m2, generate test log messages. Enter them one line at a time; if you send them too quickly, they tend to all land on one machine.
echo "idoall.org test1" | nc localhost 5140
echo "idoall.org test2" | nc localhost 5140
echo "idoall.org test3" | nc localhost 5140
echo "idoall.org test4" | nc localhost 5140
In m1's sink window, you can see the following:
16/12/28 15:35:29 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 32 idoall.org test2 }
16/12/28 15:35:33 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 34 idoall.org test4 }
In m2's sink window, you can see the following:
16/12/28 15:35:27 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 31 idoall.org test1 }
16/12/28 15:35:29 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 33 idoall.org test3 }
This shows the round-robin policy at work.
HBase sink mode
Before testing, start HBase first; for deployment, refer to 《ubuntu12.04+hadoop2.2.0+zookeeper3.4.5+hbase0.96.2+hive0.13.1分佈式環境部署》.
Then copy the following jars into Flume's lib directory:
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/protobuf-java-2.5.0.jar /usr/local/flume/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-client-0.96.2-hadoop2.jar /usr/local/flume/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-common-0.96.2-hadoop2.jar /usr/local/flume/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-protocol-0.96.2-hadoop2.jar /usr/local/flume/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-server-0.96.2-hadoop2.jar /usr/local/flume/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-hadoop2-compat-0.96.2-hadoop2.jar /usr/local/flume/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/hbase-hadoop-compat-0.96.2-hadoop2.jar /usr/local/flume/lib
cp /home/hadoop/hbase-0.96.2-hadoop2/lib/htrace-core-2.04.jar /usr/local/flume/lib
Make sure the test_idoall_org table already exists in HBase; for the table's schema and the table-creation statements, see the HBase section of the deployment guide referenced above.
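A hedged sketch of creating such a table from the hbase shell (the column family matches the columnFamily setting below; treat the schema as an assumption about that guide):
hbase shell
create 'test_idoall_org', 'name'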
Create the hbase_simple configuration file on m1
vim /usr/local/flume/conf/hbase_simple.conf
------------------------------------------- hbase_simple.conf------------------------------------------------------
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = hbase
a1.sinks.k1.table = test_idoall_org
a1.sinks.k1.columnFamily = name
a1.sinks.k1.column = idoall
a1.sinks.k1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
-------------------------------------------------------------------------------------------------------------------
Start the Flume agent
/usr/local/flume/bin/flume-ng agent -c . -f /usr/local/flume/conf/hbase_simple.conf -n a1 -Dflume.root.logger=INFO,console # start flume agent a1
Generate a test syslog message
echo "hello idoall.org from flume" | nc localhost 5140
Now log in to HBase and you will find the new row has been inserted.
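For example, from the hbase shell:
scan 'test_idoall_org'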
Kafka mode
Create the kafka.conf configuration file
vim /usr/local/flume/conf/kafka.conf
------------------------------------------- kafka.conf------------------------------------------------------
agent.sources = r1
agent.channels = c1
agent.sinks = s1
# For each one of the sources, the type is defined
agent.sources.r1.type = syslogudp
agent.sources.r1.bind = 192.168.201.73
agent.sources.r1.port = 7520
# Spread messages evenly across partitions (the UUID interceptor ships with Flume)
agent.sources.r1.interceptors = i2
agent.sources.r1.interceptors.i2.type=org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
agent.sources.r1.interceptors.i2.headerName=key
agent.sources.r1.interceptors.i2.preserveExisting=false
# The channel can be defined as follows.
agent.sources.r1.channels = c1
# Each sink's type must be defined
agent.sinks.s1.type = org.apache.flume.sink.kafka.KafkaSink
# kafka topic (keep comments on their own lines: a trailing "#..." in a properties file would become part of the value)
agent.sinks.s1.topic = php
# kafka broker host and port
agent.sinks.s1.brokerList = 192.168.201.73:9092
agent.sinks.s1.requiredAcks = 1
agent.sinks.s1.batchSize = 20
agent.sinks.s1.channel = c1
# agent.sinks.s1.requiredAcks: 0 = do not wait for acknowledgement; -1 = confirm success and require replica backup
# agent.sinks.s1.batchSize: how many messages are sunk per batch; the larger the batch, the higher the latency
# Each channel's type is defined.
agent.channels.c1.type = memory
# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
agent.channels.c1.capacity = 10000
-----------------------------------------------------------------------------------------------------------------
Start Flume:
/usr/local/flume/bin/flume-ng agent --conf conf -f conf/kafka.conf -n agent &
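To verify end to end, send a UDP syslog line and watch the topic with the console consumer (a hedged sketch; the zookeeper address is an assumption for the Kafka 0.8-era tooling that Flume 1.6 targets):
echo "hello kafka" | nc -u 192.168.201.73 7520 # send a test message to the syslogudp source
kafka-console-consumer.sh --zookeeper 192.168.201.73:2181 --topic php --from-beginning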