The stock Linux Python does not come with numpy, and after installing Anaconda, Hadoop Streaming still could not invoke the Anaconda Python.
It turned out the parameters were simply not set correctly.
On to the main topic.
Four servers: master, slave1, slave2, slave3.
All of them have Anaconda2 and Anaconda3 installed, with Python 2 as the main environment. For getting Anaconda2 and Anaconda3 to coexist, see: Installing Anaconda2 and Anaconda3 side by side on Ubuntu 16.04 Linux.
Installation directory: /home/orient/anaconda2
Hadoop version: 2.4.0
Suppose we have a set of numbers. Their mean and variance are given by:

mean = (1/n) * Σ x_i
var  = (1/n) * Σ x_i² − mean²
Each mapper emits {count (number of elements), sum1/count, sum2/count} for its portion of the data, where sum1 is the sum of the values and sum2 is the sum of their squares. On the reduce side, all of the sum1 contributions from the mappers are added up and divided by the total count n to get the mean; all of the sum2 contributions are added up, divided by n, and the square of the mean is subtracted off, which gives the variance var. A local sketch of this combine step is shown below.
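As a sanity check, here is a minimal pure-Python sketch of that combine step, independent of Hadoop; the combine function, its variable names, and the two-chunk split are my own illustration, mirroring the reducer further down:

def combine(parts):
    # parts: per-mapper triples of (count, mean, mean of squares)
    cumN = cumVal = cumSumSq = 0.0
    for n, m, msq in parts:
        cumN += n
        cumVal += n * m        # recover the partial sum of values
        cumSumSq += n * msq    # recover the partial sum of squares
    mean = cumVal / cumN
    var = cumSumSq / cumN - mean ** 2
    return mean, var

data = [0.970413, 0.901817, 0.828698, 0.197744, 0.466887, 0.962147]
chunks = [data[:3], data[3:]]  # pretend two mappers each saw half the data
parts = [(len(c), sum(c) / len(c), sum(x * x for x in c) / len(c)) for c in chunks]
print(combine(parts))  # matches the mean/variance of the full list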
inputFile.txt contains 100 numbers in total; the first few look like this (the full data set is available for download):
0.970413
0.901817
0.828698
0.197744
0.466887
0.962147
0.187294
0.388509
0.243889
0.115732
0.616292
0.713436
0.761446
0.944123
0.200903
mrMeanMapper.py:

#!/usr/bin/env python
import sys
from numpy import mat, mean, power

def read_input(file):
    for line in file:
        yield line.rstrip()

input = read_input(sys.stdin)            # generator over the input lines
input = [float(line) for line in input]  # convert every line to a float
numInputs = len(input)
input = mat(input)
sqInput = power(input, 2)

# output size, mean, mean(square values)
print "%d\t%f\t%f" % (numInputs, mean(input), mean(sqInput))
print >> sys.stderr, "report: still alive"
mrMeanReducer.py:

#!/usr/bin/env python
import sys
from numpy import mat, mean, power

def read_input(file):
    for line in file:
        yield line.rstrip()

input = read_input(sys.stdin)  # generator over the mapper output lines

# split input lines into separate items and store in list of lists
mapperOut = [line.split('\t') for line in input]

# accumulate total number of samples, overall sum and overall sum sq
cumVal = 0.0
cumSumSq = 0.0
cumN = 0.0
for instance in mapperOut:
    nj = float(instance[0])
    cumN += nj
    cumVal += nj * float(instance[1])
    cumSumSq += nj * float(instance[2])

# calculate means
mean = cumVal / cumN
meanSq = cumSumSq / cumN

# output size, mean, mean(square values)
print "%d\t%f\t%f" % (cumN, mean, meanSq)
print >> sys.stderr, "report: still alive"
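Note that the reducer emits the mean of the squared values rather than the variance itself. Per the formula above, one extra line (my addition, reusing the reducer's variables) would yield it:

var = meanSq - mean ** 2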
Test the whole pipeline locally:

cat inputFile.txt | python mrMeanMapper.py | python mrMeanReducer.py
I put inputFile.txt, mrMeanMapper.py, and mrMeanReducer.py in the same directory, ~/zhangle/Ch15/hh/hh, and all of the following operations are performed from that directory.
zhangle/mrmean-i is a directory on HDFS.
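If it does not already exist, it can be created first (a relative HDFS path like this resolves under the current user's HDFS home directory):

hadoop fs -mkdir -p zhangle/mrmean-i

Then upload the input file: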
hadoop fs -put inputFile.txt zhangle/mrmean-i
hadoop jar /usr/programs/hadoop-2.4.0/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar \
    -input zhangle/mrmean-i \
    -output zhangle/output12222 \
    -file mrMeanMapper.py \
    -file mrMeanReducer.py \
    -mapper "/home/orient/anaconda2/bin/python mrMeanMapper.py" \
    -reducer "/home/orient/anaconda2/bin/python mrMeanReducer.py"
Parameter explanation:
Line 1: /usr/programs/hadoop-2.4.0/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar is the path to the Hadoop Streaming jar on my machine.
Line 2: zhangle/mrmean-i is the HDFS directory inputFile.txt was just uploaded to.
Line 3: zhangle/output12222 is the directory for the results, also on HDFS.
Line 4: mrMeanMapper.py is the mapper program in the current directory.
Line 5: mrMeanReducer.py is the reducer program in the current directory.
Line 6: /home/orient/anaconda2/bin/python is the Python interpreter under the Anaconda2 installation. If it is omitted, the system's built-in Python is called instead, and that Python has no numpy or other Python packages installed.
Line 7: same as line 6, but for the reducer.
View the result:

hadoop fs -cat zhangle/output12222/part-00000
Common errors and their fixes:

1. Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
Solution:
Before running MapReduce on Hadoop, always run your Python programs locally first to check that they work. cd into the folder containing the map and reduce scripts and the data file inputFile.txt, then enter the following command and make sure it runs through:
cat inputFile.txt | python mrMeanMapper.py | python mrMeanReducer.py
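If everything works, stdout shows a single line in the reducer's count<TAB>mean<TAB>mean-of-squares format, along with the "report: still alive" messages on stderr.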
2. Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2, or the jar file cannot be found, or the output folder already exists.
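Solution: if the output directory already exists, remove it before re-running (same path as above):

hadoop fs -rm -r zhangle/output12222

Otherwise, double-check that every path in the streaming command is correct: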
hadoop jar /usr/programs/hadoop-2.4.0/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar \
    -input zhangle/mrmean-i \
    -output zhangle/output12222 \
    -file mrMeanMapper.py \
    -file mrMeanReducer.py \
    -mapper "/home/orient/anaconda2/bin/python mrMeanMapper.py" \
    -reducer "/home/orient/anaconda2/bin/python mrMeanReducer.py"
3. Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 127.
This is a script environment problem: add the Python interpreter path on lines 6 and 7 of the command (the -mapper and -reducer options), as described above.
References:
http://www.cnblogs.com/lzllovesyl/p/5286793.html
http://www.zhaizhouwei.cn/hadoop/190.html
http://blog.csdn.net/wangzhiqing3/article/details/8633208