How do you connect Python to Hadoop and make use of Hadoop's resources? This article walks through a simple example.
It assumes you already know Hadoop fairly well. We need to build a mapper and a reducer; the code for each is below:
1. mapper.py

#!/usr/bin/env python
import sys

# Emit "word<TAB>1" for every word read from stdin.
for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print '%s\t%s' % (word, 1)
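A quick aside: split() with no arguments splits on any run of whitespace, so the mapper copes with tabs and repeated spaces without extra handling:

>>> 'I  like\tpython  hadoop'.split()
['I', 'like', 'python', 'hadoop']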
2. reducer.py

#!/usr/bin/env python
import sys

# Accumulate a running count per word; the input must be sorted by word
# so that identical words arrive on consecutive lines.
current_word = None
current_count = 0
word = None

for line in sys.stdin:
    word, count = line.strip().split('\t')
    try:
        count = int(count)
    except ValueError:
        # skip malformed lines
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word:
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# flush the final word
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
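The same grouping logic can also be written more compactly with itertools.groupby from the standard library. The following is just a sketch of an equivalent reducer, not part of the original scripts; like reducer.py, it relies on the input arriving sorted by word:

#!/usr/bin/env python
# Sketch: an equivalent reducer built on itertools.groupby.
# Same contract as reducer.py: stdin is sorted by word, lines are "word\tcount".
import sys
from itertools import groupby

def parse(stream):
    for line in stream:
        word, _, count = line.strip().partition('\t')
        try:
            yield word, int(count)
        except ValueError:
            continue  # skip malformed lines, as reducer.py does

for word, group in groupby(parse(sys.stdin), key=lambda pair: pair[0]):
    total = sum(count for _, count in group)
    print '%s\t%s' % (word, total)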
With both scripts in place (and marked executable, e.g. via chmod +x), test them locally:

[qiu.li@l-tdata5.tkt.cn6 /export/python]$ echo "I like python hadoop , hadoop very good" | ./mapper.py | sort -k 1,1 | ./reducer.py
,	1
good	1
hadoop	2
I	1
like	1
python	1
very	1
That looks fine, so we are halfway there. Next, upload a few files to Hadoop for further testing. I grabbed a few files online with the following commands:

wget http://www.gutenberg.org/ebooks/20417.txt.utf-8
wget http://www.gutenberg.org/files/5000/5000-8.txt
wget http://www.gutenberg.org/ebooks/4300.txt.utf-8
Check the downloaded files:

[qiu.li@l-tdata5.tkt.cn6 /export/python]$ ls
20417.txt.utf-8  4300.txt.utf-8  5000-8.txt  mapper.py  reducer.py  run.sh
Upload the files to Hadoop with the following command (the cluster is already configured and the target directory already exists):

hadoop dfs -put ./*.txt /user/ticketdev/tmp

Create run.sh:

# $STREAM is assumed to point at the hadoop-streaming jar that ships with your
# Hadoop install, e.g. $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar
hadoop jar $STREAM \
    -files ./mapper.py,./reducer.py \
    -mapper ./mapper.py \
    -reducer ./reducer.py \
    -input /user/ticketdev/tmp/*.txt \
    -output /user/ticketdev/tmp/output
Run the script (sh run.sh), then view the results:

[qiu.li@l-tdata5.tkt.cn6 /export/python]$ hadoop dfs -cat /user/ticketdev/tmp/output/part-00000 | sort -nk 2 | tail
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
it	2387
which	2387
that	2668
a	3797
is	4097
to	5079
in	5226
and	7611
of	10388
the	20583
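The same top-10 tally can also be reproduced locally in pure Python once the output file has been fetched out of HDFS. The sketch below is my own addition; it assumes part-00000 has been copied into the working directory (e.g. with hadoop dfs -get /user/ticketdev/tmp/output/part-00000 .):

# Sketch: rank words by count from a locally fetched part-00000.
pairs = []
with open('part-00000') as f:
    for line in f:
        word, count = line.rstrip('\n').split('\t')
        pairs.append((word, int(count)))
pairs.sort(key=lambda p: p[1])   # ascending by count, like sort -nk 2
for word, count in pairs[-10:]:  # last ten = top ten, like tail
    print '%s\t%s' % (word, count)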
3. References:

http://www.cnblogs.com/wing1995/p/hadoop.html