Understanding the MapReduce computing framework

1. Write the map and reduce functions in Python

(1) Create the mapper.py file

cd /home/hadoop/wc

gedit mapper.py

 

(2) The mapper function

 

#!/usr/bin/env python
# mapper.py: emit "word<TAB>1" for every word read from standard input
import sys

for line in sys.stdin:
    line = line.strip()
    words = line.split()
    for word in words:
        print '%s\t%s' % (word, 1)
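As a quick sanity check (this step is not part of the original walkthrough, and assumes mapper.py was saved under /home/hadoop/wc as above), a line of text can be piped through the script to confirm that it emits one word/count pair per line:

echo "hello world hello" | python /home/hadoop/wc/mapper.py

The expected output looks roughly like this:

hello	1
world	1
hello	1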

(3) Create the reducer.py file

cd /home/hadoop/wc

gedit reducer.py

(4) The reducer function
#!/usr/bin/env python
# reducer.py: sum the counts for each word; assumes the input is sorted by key
import sys

current_word = None
current_count = 0
word = None

for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # skip lines whose count is not a number
        continue

    if current_word == word:
        current_count += count
    else:
        if current_word:
            # the key changed, so the total for the previous word is complete
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

# do not forget the last word
if current_word == word:
    print '%s\t%s' % (current_word, current_count)
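Note that this reducer only produces correct totals when its input is already sorted by key, which is exactly what Hadoop's shuffle and sort phase guarantees between map and reduce (and what sort simulates in the local test below). As a rough illustration, assuming reducer.py was saved under /home/hadoop/wc, feeding it a few pre-sorted key/count pairs should yield one total per word:

printf 'bar\t1\nfoo\t1\nfoo\t1\nquux\t1\n' | python /home/hadoop/wc/reducer.py

which should print something like:

bar	1
foo	2
quux	1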



2. Adjust the file permissions accordingly

chmod a+x /home/hadoop/wc/mapper.py
chmod a+x /home/hadoop/wc/reducer.py

3. Test the code on the local machine

echo "foo foo quux labs foo bar quux" | /home/hadoop/wc/mapper.py
echo "foo foo quux labs foo bar quux" | /home/hadoop/wc/mapper.py | sort -k1,1 | /home/hadoop/wc/reducer.py


4. Run on HDFS

Download the test files and upload them to HDFS:

cd  /home/hadoop/wc
wget http://www.gutenberg.org/files/5000/5000-8.txt
wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt

hdfs dfs -mkdir -p /user/hadoop/input
hdfs dfs -put /home/hadoop/wc/*.txt /user/hadoop/input
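With the input files in HDFS, the Streaming job itself can be launched roughly as follows. This is only a sketch: the exact path and version of the hadoop-streaming jar depend on your installation (here it is assumed to sit under /usr/local/hadoop/share/hadoop/tools/lib), and the output directory /user/hadoop/output is assumed not to exist yet.

hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-*.jar \
  -file /home/hadoop/wc/mapper.py -mapper mapper.py \
  -file /home/hadoop/wc/reducer.py -reducer reducer.py \
  -input /user/hadoop/input -output /user/hadoop/output

hdfs dfs -cat /user/hadoop/output/*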
