Hadoop Performance Testing

1. Hadoop's built-in benchmark tools

(I) TestDFSIO

1. Testing write performance

(1) If necessary, clear out data left by earlier runs:

$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar TestDFSIO -clean

(2) Run the test:

$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar TestDFSIO -write -nrFiles 5 -fileSize 20

(3) Check the results. Each run appends one block to TestDFSIO_results.log:

$ cat TestDFSIO_results.log
----- TestDFSIO ----- : write
           Date & time: Mon May 11 09:41:34 HKT 2015
       Number of files:
Total MBytes processed: 100.0
     Throughput mb/sec: 21.468441391155004
Average IO rate mb/sec: 25.366744995117188
 IO rate std deviation: 12.744636924030177
    Test exec time sec: 27.585
----- TestDFSIO ----- : write
           Date & time: Mon May 11 09:42:28 HKT 2015
       Number of files: 5
Total MBytes processed: 100.0
     Throughput mb/sec: 22.779043280182233
Average IO rate mb/sec: 25.440486907958984
 IO rate std deviation: 9.930490103638768
    Test exec time sec: 26.67

(4) Interpreting the results

Total MBytes processed: the total amount of data to be written, 100 MB here.
Throughput mb/sec: the total amount of data divided by the sum of the map tasks' actual write times (a sum that is far smaller than Test exec time sec), i.e. 100 / (map1 write time + map2 write time + ...).
Average IO rate mb/sec: the mean of the per-map rates, i.e. (20 / map1 write time + 20 / map2 write time + ...) / 5, where 20 MB is each map's share and 5 is the number of map tasks. Because this averages per-map rates instead of dividing totals, it always differs somewhat from Throughput.
IO rate std deviation: the standard deviation of those per-map rates.
Test exec time sec: the wall-clock time of the whole job.

2. Testing read performance

(1) Run the test:

$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar TestDFSIO -read -nrFiles 5 -fileSize 20

(2) Check the results:

$ cat TestDFSIO_results.log
----- TestDFSIO ----- : read
           Date & time: Mon May 11 09:53:27 HKT 2015
       Number of files: 5
Total MBytes processed: 100.0
     Throughput mb/sec: 534.75935828877
Average IO rate mb/sec: 540.4888916015625
 IO rate std deviation: 53.93029580221512
    Test exec time sec: 26.704

(3) Interpreting the results

The fields mean the same as in the write test. The read rate comes out far higher than the write rate even though the total execution time is almost identical; at only 100 MB the reads are likely served largely from cache, so a real benchmark should use a much larger data volume for the difference between the two to be meaningful.
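To make runs at larger sizes repeatable, the write and read tests can be scripted together. A minimal sketch, assuming the same jar path as above; the sweep sizes (20/100/500 MB per file) are arbitrary choices for illustration:

#!/usr/bin/env bash
# Sketch: sweep TestDFSIO over several file sizes; each round appends its
# results to TestDFSIO_results.log in the working directory.
set -e

JAR=/home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar
NRFILES=5

for SIZE in 20 100 500; do
    hadoop jar "$JAR" TestDFSIO -clean          # drop the previous round's data
    hadoop jar "$JAR" TestDFSIO -write -nrFiles "$NRFILES" -fileSize "$SIZE"
    hadoop jar "$JAR" TestDFSIO -read  -nrFiles "$NRFILES" -fileSize "$SIZE"
done

cat TestDFSIO_results.log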
(II) Sort test (TeraSort)

Searching the API docs for terasort turns up the relevant classes. The test has three basic steps: generate random data → sort it → validate the sorted output. For a more detailed look at how TeraSort works internally, see http://blog.csdn.net/yuesichiu/article/details/17298563

1. Generate the random data

$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar teragen -Dmapreduce.job.maps=5 10000000 /tmp/hadoop/terasort

This step writes the data to /tmp/hadoop/terasort in HDFS:

$ hadoop fs -ls /tmp/hadoop/terasort
Found 6 items
-rw-r-----   3 hadoop supergroup          0 2015-05-11 11:32 /tmp/hadoop/terasort/_SUCCESS
-rw-r-----   3 hadoop supergroup  200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00000
-rw-r-----   3 hadoop supergroup  200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00001
-rw-r-----   3 hadoop supergroup  200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00002
-rw-r-----   3 hadoop supergroup  200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00003
-rw-r-----   3 hadoop supergroup  200000000 2015-05-11 11:32 /tmp/hadoop/terasort/part-m-00004
$ hadoop fs -du -s -h /tmp/hadoop/terasort
953.7 M  /tmp/hadoop/terasort

The five files come out at 200 MB each rather than 10 MB in total because teragen's numeric argument is the number of 100-byte rows to generate, not a size in bytes: 10,000,000 rows × 100 bytes ≈ 953.7 MB, split evenly across the 5 map tasks.

2. Run the sort

$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar terasort -Dmapreduce.job.maps=5 /tmp/hadoop/terasort /tmp/hadoop/terasort_out
Spent 354ms computing base-splits.
Spent 8ms computing TeraScheduler splits.
Computing input splits took 365ms
Sampling 10 splits of 10
Making 1 from 100000 sampled records
Computing parititions took 6659ms
Spent 7034ms computing partitions.

3. Validate the result

$ hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar teravalidate /tmp/hadoop/terasort_out /tmp/hadoop/terasort_report
Spent 44ms computing base-splits.
Spent 7ms computing TeraScheduler splits.
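The three steps chain naturally into a single script. A minimal sketch, assuming the same examples jar and HDFS paths as above; the upfront -rm simply clears any previous run:

#!/usr/bin/env bash
# Sketch: run the TeraSort benchmark end to end (generate, sort, validate).
set -e

JAR=/home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar
ROWS=10000000                # teragen counts 100-byte rows, so this is ~1 GB of input
BASE=/tmp/hadoop/terasort

hadoop fs -rm -r -f "$BASE" "${BASE}_out" "${BASE}_report"   # clear leftovers from earlier runs

hadoop jar "$JAR" teragen  -Dmapreduce.job.maps=5 "$ROWS" "$BASE"
hadoop jar "$JAR" terasort -Dmapreduce.job.maps=5 "$BASE" "${BASE}_out"
hadoop jar "$JAR" teravalidate "${BASE}_out" "${BASE}_report"

# teravalidate writes its verdict into the report directory: out-of-order
# keys are flagged there, otherwise the output is essentially a checksum record.
hadoop fs -cat "${BASE}_report/part*"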
2. HiBench

HiBench 4.0 would not run successfully here, so 3.0 was used instead.

1. Download and unpack:

wget https://codeload.github.com/intel-hadoop/HiBench/zip/HiBench-3.0.0
unzip HiBench-3.0.0

2. Edit bin/hibench-config.sh; the key variables are:

export JAVA_HOME=/home/hadoop/jdk1.7.0_67
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_EXECUTABLE=/home/hadoop/hadoop/bin/hadoop
export HADOOP_CONF_DIR=/home/hadoop/conf
export HADOOP_EXAMPLES_JAR=/home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.3.0-cdh5.1.2.jar
export MAPRED_EXECUTABLE=/home/hadoop/hadoop/bin/mapred
# Set the variable below only in YARN mode
export HADOOP_JOBCLIENT_TESTS_JAR=/home/hadoop/hadoop/share/hadoop/mapreduce2/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.1.2-tests.jar

3. Edit conf/benchmarks.lst and comment out any benchmarks you do not want to run.

4. Run the suite:

bin/run-all.sh

5. Check the results. A hibench.report file is generated in the current directory; its contents look like this (some benchmarks report no input size or throughput), and the sketch after the table shows one way to summarize it:

Type          Date        Time      Input_data_size  Duration(s)  Throughput(bytes/s)  Throughput/node
WORDCOUNT     2015-05-12  19:32:33                   251.248
DFSIOE-READ   2015-05-12  19:54:29  54004092852      463.863      116422505            38807501
DFSIOE-WRITE  2015-05-12  20:02:57  27320849148      498.132      54846605             18282201
PAGERANK      2015-05-12  20:27:25                   711.391
SORT          2015-05-12  20:33:21                   243.603
TERASORT      2015-05-12  20:40:34  10000000000      266.796      37481821             12493940
SLEEP         2015-05-12  20:40:40  0                .177         0                    0
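Because every run appends a row and some rows omit columns, comparing runs by eye gets awkward. A small awk filter, a sketch assuming the whitespace-separated layout shown above, pulls out just the benchmark name, duration, and throughput:

# Full rows have at least 6 fields; short rows (WORDCOUNT, PAGERANK, SORT
# above) lack the input-size column and carry the duration in field 4.
awk 'NR > 1 {
    if (NF >= 6) printf "%-12s %10s s  %15s bytes/s\n", $1, $5, $6
    else         printf "%-12s %10s s\n", $1, $4
}' hibench.report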