IBM Big Data Platform BigInsights (2)

Continuing from the previous post, "A First Look at IBM's Big Data Platform BigInsights (1)", this post covers some basic Hadoop commands and running a simple WordCount program with MapReduce.

 

1. Create a test directory on the HDFS file system

hadoop fs -mkdir /user/biadmin/test

 

2. Copy a file into the test directory

hadoop fs -put /var/adm/ibmvmcoc-postinstall/BIlicense_en.txt /user/biadmin/test

 

3. Verify that the file now exists in the test directory

biadmin@bivm:/etc/ibmvmcoc-postinstall> hadoop fs -ls /user/biadmin/test

Found 1 items

-rw-r--r-- 1 biadmin biadmin 62949 2016-01-01 22:34 /user/biadmin/test/BIlicense_en.txt

 

4. Run a simple MapReduce program

WordCount is a small Java program written against the Hadoop MapReduce API that counts how many times each word appears in a text file. For more on WordCount, see http://wiki.apache.org/hadoop/WordCount
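What WordCount computes can be mimicked locally with standard Unix tools. This is only an illustrative sketch of the counting logic (using a made-up sample.txt, not the cluster data); the real job runs the same logic as distributed map and reduce tasks:

```shell
# Create a small stand-in input file (hypothetical; not from HDFS).
printf 'net neither net necessary neither neither\n' > sample.txt

# Split into one word per line, sort, count duplicates, and print
# "word<TAB>count" -- the same shape as WordCount's output.
tr -s '[:space:]' '\n' < sample.txt | sort | uniq -c | awk '{print $2 "\t" $1}'
# Output:
#   necessary	1
#   neither	3
#   net	2
```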

 

The program is packaged in hadoop-example.jar. The input is the test directory created above, and the results are written to the WordCount_output subdirectory, which is created automatically if it does not exist.

biadmin@bivm:/etc/ibmvmcoc-postinstall> hadoop jar /opt/ibm/biginsights/IHC/hadoop-example.jar wordcount /user/biadmin/test WordCount_output

16/01/01 22:36:08 INFO input.FileInputFormat: Total input paths to process : 1

16/01/01 22:36:18 INFO mapred.JobClient: Running job: job_201601012120_0001

16/01/01 22:36:19 INFO mapred.JobClient: map 0% reduce 0%

16/01/01 22:37:58 INFO mapred.JobClient: map 100% reduce 0%

16/01/01 22:39:07 INFO mapred.JobClient: map 100% reduce 100%

16/01/01 22:39:14 INFO mapred.JobClient: Job complete: job_201601012120_0001

16/01/01 22:39:15 INFO mapred.JobClient: Counters: 29

16/01/01 22:39:15 INFO mapred.JobClient: File System Counters

16/01/01 22:39:15 INFO mapred.JobClient: FILE: BYTES_READ=33219

16/01/01 22:39:15 INFO mapred.JobClient: FILE: BYTES_WRITTEN=419738

16/01/01 22:39:15 INFO mapred.JobClient: HDFS: BYTES_READ=63073

16/01/01 22:39:15 INFO mapred.JobClient: HDFS: BYTES_WRITTEN=24073

16/01/01 22:39:15 INFO mapred.JobClient: org.apache.hadoop.mapreduce.JobCounter

16/01/01 22:39:15 INFO mapred.JobClient: TOTAL_LAUNCHED_MAPS=1

16/01/01 22:39:15 INFO mapred.JobClient: TOTAL_LAUNCHED_REDUCES=1

16/01/01 22:39:15 INFO mapred.JobClient: DATA_LOCAL_MAPS=1

16/01/01 22:39:15 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=95300

16/01/01 22:39:15 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=50249

16/01/01 22:39:15 INFO mapred.JobClient: FALLOW_SLOTS_MILLIS_MAPS=0

16/01/01 22:39:15 INFO mapred.JobClient: FALLOW_SLOTS_MILLIS_REDUCES=0

16/01/01 22:39:15 INFO mapred.JobClient: org.apache.hadoop.mapreduce.TaskCounter

16/01/01 22:39:15 INFO mapred.JobClient: MAP_INPUT_RECORDS=755

16/01/01 22:39:15 INFO mapred.JobClient: MAP_OUTPUT_RECORDS=9865

16/01/01 22:39:15 INFO mapred.JobClient: MAP_OUTPUT_BYTES=102036

16/01/01 22:39:15 INFO mapred.JobClient: MAP_OUTPUT_MATERIALIZED_BYTES=33219

16/01/01 22:39:15 INFO mapred.JobClient: SPLIT_RAW_BYTES=124

16/01/01 22:39:15 INFO mapred.JobClient: COMBINE_INPUT_RECORDS=9865

16/01/01 22:39:15 INFO mapred.JobClient: COMBINE_OUTPUT_RECORDS=2322

16/01/01 22:39:15 INFO mapred.JobClient: REDUCE_INPUT_GROUPS=2322

16/01/01 22:39:15 INFO mapred.JobClient: REDUCE_SHUFFLE_BYTES=33219

16/01/01 22:39:15 INFO mapred.JobClient: REDUCE_INPUT_RECORDS=2322

16/01/01 22:39:15 INFO mapred.JobClient: REDUCE_OUTPUT_RECORDS=2322

16/01/01 22:39:15 INFO mapred.JobClient: SPILLED_RECORDS=4644

16/01/01 22:39:15 INFO mapred.JobClient: CPU_MILLISECONDS=22130

16/01/01 22:39:15 INFO mapred.JobClient: PHYSICAL_MEMORY_BYTES=538050560

16/01/01 22:39:15 INFO mapred.JobClient: VIRTUAL_MEMORY_BYTES=3549384704

16/01/01 22:39:15 INFO mapred.JobClient: COMMITTED_HEAP_BYTES=2097152000

16/01/01 22:39:15 INFO mapred.JobClient: File Input Format Counters

16/01/01 22:39:15 INFO mapred.JobClient: Bytes Read=62949

16/01/01 22:39:15 INFO mapred.JobClient: org.apache.hadoop.mapreduce.lib.output.FileOutputFormat$Counter

16/01/01 22:39:15 INFO mapred.JobClient: BYTES_WRITTEN=24073

 

The WordCount_output directory was created automatically:

biadmin@bivm:/etc/ibmvmcoc-postinstall> hadoop fs -ls WordCount_output

Found 3 items

-rw-r--r-- 1 biadmin biadmin 0 2016-01-01 22:39 WordCount_output/_SUCCESS

drwx--x--x - biadmin biadmin 0 2016-01-01 22:36 WordCount_output/_logs

-rw-r--r-- 1 biadmin biadmin 24073 2016-01-01 22:39 WordCount_output/part-r-00000
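Each reducer writes its own part-r-NNNNN file; here there was a single reducer, hence only part-r-00000. With multiple reducers, `hadoop fs -getmerge` can combine them into one local file. A minimal local sketch of that merge step, with hypothetical local copies standing in for the HDFS part files:

```shell
# Hypothetical local stand-ins for two reducer output files.
printf 'apple\t2\n' > part-r-00000
printf 'zebra\t1\n' > part-r-00001

# Concatenate the part files in order -- the essence of what
# "hadoop fs -getmerge WordCount_output merged.txt" does from HDFS.
cat part-r-0000[0-9] > merged.txt
wc -l < merged.txt   # 2 lines, one per word
```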

 

biadmin@bivm:~> hadoop fs -cat WordCount_output/*00

names,  1    
national        1    
nature  1    
necessary       4    
negligence      5    
negligence,     4    
negligence.     1    
negligence;     2    
neither 3    
net     1
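The output above is sorted alphabetically by word. To rank words by frequency instead, the counts can be piped through `sort`. A local sketch of that step, using a small made-up file in place of `hadoop fs -cat WordCount_output/part-r-00000`:

```shell
# Stand-in for the WordCount output ("word<TAB>count" per line).
printf 'negligence\t5\nnames,\t1\nnecessary\t4\n' > wc_sample.txt

# Sort numerically on the count field, descending, and keep the top 2.
sort -k2,2 -nr wc_sample.txt | head -n 2
# Output:
#   negligence	5
#   necessary	4
```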

The above runs WordCount from the command line. In addition, IBM BigInsights provides a web-based interface: open the Applications tab and switch to Manage to see a number of predefined applications. Under Test there is a WordCount application; open it and click "Deploy".

[screenshot]

Then switch to Run, where the WordCount application now appears.

[screenshot]

Select WordCount, enter the directory containing the files to be counted and an output directory, then click Run to start the job.

[screenshot]

Similarly, the HDFS file system can also be managed through the web interface, including creating, deleting, and modifying directories and files.

[screenshot]

 

Opening the JobTracker in a browser (http://192.168.133.135:50030/jobtracker.jsp) shows the recently run MapReduce jobs; clicking a Job ID shows further details.

The JobTracker is a master service: after Hadoop starts, it accepts jobs, schedules each job's subtasks (tasks) to run on the TaskTrackers, monitors them, and re-runs any task that fails.

[screenshot]

 

[screenshot]
