***Create the table studen, where info and course are its two column families: info stores a student's personal information (student number, name, sex, age), and course stores course information.***

    create 'studen','info','course'

Next, add the record for the student whose student number is 2015001:

    put 'studen','001','info:S_No','2015001'

This command inserts a row with row key 001 and, in the column family info, adds a column S_No with the value 2015001. Because a single HBase shell put writes only one cell at a time, the rest of student 2015001's information is added with further put commands:

    put 'studen','001','info:S_Name','Zhangsan'
    put 'studen','001','info:S_Sex','male'
    put 'studen','001','info:S_Age','23'
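The same row can also be written from Python. The snippet below is only a sketch, not part of the original steps: it assumes the happybase client is installed and an HBase Thrift server is reachable on localhost (port 9090); the table name studen and the column names match the shell commands above.

```python
# Minimal happybase sketch (assumption: HBase Thrift server on localhost:9090).
import happybase

connection = happybase.Connection('localhost', port=9090)
table = connection.table('studen')

# One put can carry several cells of the same row; this is equivalent to
# the four `put` shell commands above. HBase stores keys and values as bytes.
table.put(b'001', {
    b'info:S_No':   b'2015001',
    b'info:S_Name': b'Zhangsan',
    b'info:S_Sex':  b'male',
    b'info:S_Age':  b'23',
})

# Read the row back to verify the insert.
print(table.row(b'001'))

connection.close()
```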
***Word count with Hadoop Streaming: write a Python mapper and reducer, make them executable, download sample text, and upload it to HDFS as the job input.***

Create mapper.py:

    cd /home/hadoop/wc
    sudo gedit mapper.py

mapper.py reads lines from standard input, splits each line into words, and emits one `word<TAB>1` pair per word:

    #!/usr/bin/env python
    import sys

    # read lines from standard input
    for line in sys.stdin:
        line = line.strip()
        words = line.split()
        # emit "word<TAB>1" for every word on the line
        for word in words:
            print('%s\t%s' % (word, 1))

Create reducer.py in the same directory:

    cd /home/hadoop/wc
    sudo gedit reducer.py

reducer.py receives the mapper output sorted by key, so all pairs for the same word arrive consecutively; it sums the counts and emits `word<TAB>count`:

    #!/usr/bin/env python
    from operator import itemgetter
    import sys

    current_word = None
    current_count = 0
    word = None

    for line in sys.stdin:
        line = line.strip()
        word, count = line.split('\t', 1)
        try:
            count = int(count)
        except ValueError:
            # count was not a number: skip the line
            continue
        if current_word == word:
            current_count += count
        else:
            if current_word:
                # the key changed: output the finished word
                print('%s\t%s' % (current_word, current_count))
            current_count = count
            current_word = word

    # output the last word
    if current_word == word:
        print('%s\t%s' % (current_word, current_count))

Make both scripts executable so Hadoop Streaming can run them:

    chmod a+x /home/hadoop/wc/mapper.py
    chmod a+x /home/hadoop/wc/reducer.py

Download two sample texts from Project Gutenberg to use as input:

    cd /home/hadoop/wc
    wget http://www.gutenberg.org/files/5000/5000-8.txt
    wget http://www.gutenberg.org/cache/epub/20417/pg20417.txt

Upload the downloaded texts to HDFS as the job input:

    hdfs dfs -put /home/hadoop/wc/*.txt /user/hadoop/input
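Before submitting the streaming job it is worth checking the two scripts locally, since Hadoop Streaming does nothing more than pipe data through their stdin and stdout. The test below is not part of the original lab; it assumes mapper.py and reducer.py sit in the current directory and emulates the `mapper | sort | reducer` pipeline from Python.

```python
# Local sanity check for the streaming scripts (an illustration, not an
# original lab step). Emulates: echo ... | ./mapper.py | sort | ./reducer.py
import subprocess
import sys

sample = "Hello World Hello Hadoop\nHello MapReduce\n"

# Run the mapper on the sample text.
mapper = subprocess.run(
    [sys.executable, "mapper.py"],
    input=sample, capture_output=True, text=True, check=True,
)

# Hadoop sorts the map output by key before the reduce phase; emulate
# that with a plain line sort, like the Unix `sort` command.
shuffled = "".join(sorted(mapper.stdout.splitlines(keepends=True)))

# Feed the sorted pairs to the reducer and print its output.
reducer = subprocess.run(
    [sys.executable, "reducer.py"],
    input=shuffled, capture_output=True, text=True, check=True,
)
print(reducer.stdout)
# Expected output: Hadoop 1, Hello 3, MapReduce 1, World 1
# (tab-separated, one word per line)
```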