While doing log analysis I used Hive from the Hadoop stack, but some of the log processing was beyond what Hive's built-in functions could handle, so a UDF (user-defined function) was needed to extend it.
1 In Eclipse, create a new Java project hiveudf, then create a new class: package com.afan, name UDFLower
2 Add the two jars hadoop-0.20.2-core.jar and hive-exec-0.7.0-cdh3u0.jar to the project's build path
3 Write the code
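The original post does not reproduce the source of UDFLower, but given the package and class names above and the lowercase output shown in the test below, a minimal sketch using the classic Hive UDF API (extending org.apache.hadoop.hive.ql.exec.UDF, as in Hive 0.7) would look like this:

```java
package com.afan;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// A simple UDF that lowercases a string column.
// Hive resolves evaluate() by reflection, so no interface method to override.
public final class UDFLower extends UDF {
    public Text evaluate(final Text s) {
        if (s == null) {
            // Propagate NULL input as NULL output, the usual UDF convention.
            return null;
        }
        return new Text(s.toString().toLowerCase());
    }
}
```

Returning `Text` (Hadoop's writable string type) rather than `String` avoids an extra conversion inside the MapReduce job.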
4 Export the project as a jar file, udf_hive.jar
5 Copy udf_hive.jar to the configured Linux system, at the path /home/udf/udf_hive.jar
6 Open the Hive CLI to test
hive> add jar /home/udf/udf_hive.jar;
Added udf_hive.jar to class path
Added resource: udf_hive.jar
Create the UDF function
hive> create temporary function my_lower as 'com.afan.UDFLower';
Create the test data
hive> create table dual (info string);
Load the data file data.txt, whose contents are:
WHO
AM
I
HELLO
hive> load data local inpath '/home/data/data.txt' into table dual;
hive> select info from dual;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201105150525_0003, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201105150525_0003
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201105150525_0003
2011-05-15 06:46:05,459 Stage-1 map = 0%, reduce = 0%
2011-05-15 06:46:10,905 Stage-1 map = 100%, reduce = 0%
2011-05-15 06:46:13,963 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201105150525_0003
OK
WHO
AM
I
HELLO
Use the UDF function
hive> select my_lower(info) from dual;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201105150525_0002, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201105150525_0002
Kill Command = /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=localhost:9001 -kill job_201105150525_0002
2011-05-15 06:43:26,100 Stage-1 map = 0%, reduce = 0%
2011-05-15 06:43:34,364 Stage-1 map = 100%, reduce = 0%
2011-05-15 06:43:37,484 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201105150525_0002
OK
who
am
i
hello
The test passed successfully.
Reference: http://landyer.iteye.com/blog/1070377