After spending some time wrestling with Hadoop deployment and administration, I am writing this series of posts to record the process.
To spare everyone the repetitive labor of deployment, I have turned the deployment steps into scripts; just follow this post, run the scripts, and the whole environment is essentially deployed. The deployment scripts are in my git repository on OSChina (http://git.oschina.net/snake1361222/hadoop_scripts).
Everything in this post is deployed from Cloudera's CDH4. CDH4 is a series of yum packages covering the Hadoop ecosystem, pre-packaged by Cloudera; putting CDH4 into your own yum repository makes deploying a Hadoop environment far simpler.
The deployment covered here includes a namenode HA setup and a solution for managing Hadoop (synchronizing Hadoop configuration files, quick-deployment scripts, and so on).
Five machines are used as the hardware environment, all running CentOS 6.4:
namenode & resourcemanager 主服務器: 192.168.1.1 vim
namenode & resourcemanager 備服務器: 192.168.1.2 centos
datanode & nodemanager 服務器: 192.168.1.100 192.168.1.101 192.168.1.102 緩存
zookeeper 服務器集羣(用於namenode 高可用的自動切換): 192.168.1.100 192.168.1.101 bash
jobhistory 服務器(用於記錄mapreduce的日誌): 192.168.1.1 服務器
用於namenode HA的NFS: 192.168.1.100
# download and install Cloudera's one-click CDH4 repository package
wget http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.x86_64.rpm
sudo yum --nogpgcheck localinstall cloudera-cdh-4-0.x86_64.rpm
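If you want to double-check that the Cloudera repository was registered before pulling any packages, a quick look at the yum repo list is enough (an optional check, not part of the deployment scripts):

# optional sanity check: a cloudera repo entry should show up after installing the one-click RPM
yum repolist | grep -i cloudera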
#!/bin/bash
# Set up the NFS export used for namenode HA shared edits (run on 192.168.1.100)
yum -y install rpcbind nfs-utils
mkdir -p /data/nn_ha/
echo "/data/nn_ha *(rw,root_squash,all_squash,sync)" >> /etc/exports
/etc/init.d/rpcbind start
/etc/init.d/nfs start
chkconfig --level 234 rpcbind on
chkconfig --level 234 nfs on
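The deployment scripts are expected to handle the client side of this (that is presumably what the nfsserver variable in the deploy config is for), but if you want to verify the export by hand from one of the namenodes, something like the following works; the mount point here is only an example:

# hand check of the NFS export from a namenode; /data/nn_ha as the local mount point is an assumption
showmount -e 192.168.1.100
mkdir -p /data/nn_ha
mount -t nfs 192.168.1.100:/data/nn_ha /data/nn_ha
df -h /data/nn_ha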
# fetch the deployment scripts onto the primary namenode (192.168.1.1) and stop iptables
yum -y install git
mkdir -p /opt/
cd /opt/
git clone http://git.oschina.net/snake1361222/hadoop_scripts.git
/etc/init.d/iptables stop
sh /opt/hadoop_scripts/deploy/AddHostname.sh
vim /opt/hadoop_scripts/deploy/config
# set the master address, i.e. the primary namenode
master="192.168.1.1"
# set the NFS server address
nfsserver="192.168.1.100"
vim /opt/hadoop_scripts/share_data/resolv_host
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 nn.dg.hadoop.cn
192.168.1.2 nn2.dg.hadoop.cn
192.168.1.100 dn100.dg.hadoop.cn
192.168.1.101 dn101.dg.hadoop.cn
192.168.1.102 dn102.dg.hadoop.cn
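Once the deploy scripts have pushed this hosts file onto a machine, a quick loop confirms that every cluster member resolves to the address you expect (purely an optional check):

# optional: confirm local name resolution after the hosts file has been applied
for h in nn.dg.hadoop.cn nn2.dg.hadoop.cn dn100.dg.hadoop.cn dn101.dg.hadoop.cn dn102.dg.hadoop.cn; do
    getent hosts "$h"
done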
sh /opt/hadoop_scripts/deploy/CreateNamenode.sh
PS: SaltStack is an open-source server-management tool similar to Puppet but more lightweight; here it is used to manage the Hadoop cluster and drive the datanodes. For details on SaltStack, see my post "SaltStack deployment and usage".
yum -y install salt salt-master
Configure the salt master; the following settings go into its configuration file (see the sketch below):
Listen address: interface: 0.0.0.0
Worker thread pool: worker_threads: 5
Enable the job cache (the official docs describe a master with the cache enabled as able to handle around 5000 minions): job_cache
Enable auto-acceptance of minion keys: auto_accept: True
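A minimal sketch of applying these settings, assuming the stock /etc/salt/master file; adjust values to taste:

# sketch: append the settings above to the salt master config, then (re)start the service
cat >> /etc/salt/master <<'EOF'
interface: 0.0.0.0
worker_threads: 5
job_cache: True
auto_accept: True
EOF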
Start the service:
/etc/init.d/salt-master start
chkconfig salt-master on
<property> <name>dfs.namenode.rpc-address.mycluster.ns1</name> <value>nn.dg.hadoop.cn:8020</value> <description>定義ns1的rpc地址</description> </property> <property> <name>dfs.namenode.rpc-address.mycluster.ns2</name> <value>nn2.dg.hadoop.cn:8020</value> <description>定義ns2的rpc地址</description> </property> <property> <name>ha.zookeeper.quorum</name> <value>dn100.dg.hadoop.cn:2181,dn101.dg.hadoop.cn:2181,dn102.dg.hadoop.cn:2181,</value> <description>指定用於HA的ZooKeeper集羣機器列表</description> </property>
<property> <name>mapreduce.jobhistory.address</name> <value>nn.dg.hadoop.cn:10020</value> </property> <property> <name>mapreduce.jobhistory.webapp.address</name> <value>nn.dg.hadoop.cn:19888</value> </property>
<property> <name>yarn.resourcemanager.resource-tracker.address</name> <value>nn.dg.hadoop.cn:8031</value> </property> <property> <name>yarn.resourcemanager.address</name> <value>nn.dg.hadoop.cn:8032</value> </property> <property> <name>yarn.resourcemanager.scheduler.address</name> <value>nn.dg.hadoop.cn:8030</value> </property> <property> <name>yarn.resourcemanager.admin.address</name> <value>nn.dg.hadoop.cn:8033</value> </property>
/etc/init.d/iptables stop
mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts
sh /opt/hadoop_scripts/deploy/AddHostname.sh
sh /opt/hadoop_scripts/deploy/CreateNamenode.sh
rsync -avz 192.168.1.1::hadoop_conf /etc/hadoop/conf
sh /opt/hadoop_scripts/deploy/salt_minion.sh
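As minions come online (salt_minion.sh on the standby here; the datanode script presumably does the same on the datanodes later), it is worth checking from the master that every minion key is accepted and that the minions answer:

# run on the saltstack master (192.168.1.1): list accepted minion keys and ping the minions
salt-key -L
salt -v '*' test.ping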
ZooKeeper is an open-source distributed coordination service; here it provides the automatic failover for the namenode.
yum install zookeeper zookeeper-server
maxClientCnxns=50
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# list every machine in the zookeeper ensemble here; this part is identical on every member
server.1=dn100.dg.hadoop.cn:2888:3888
server.2=dn101.dg.hadoop.cn:2888:3888
# For example, the current machine is 192.168.1.100 (dn100.dg.hadoop.cn); it is server.1, so its id is 1:
echo "1" > /var/lib/zookeeper/myid
chown -R zookeeper.zookeeper /var/lib/zookeeper/
service zookeeper-server init
/etc/init.d/zookeeper-server start
chkconfig zookeeper-server on
# Repeat the same steps on 192.168.1.101 (with the matching id)
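Before relying on the ensemble for failover, it is worth poking each member with ZooKeeper's four-letter commands; a minimal check, run from any machine that can reach port 2181:

# "imok" means the server is up; stat shows the mode (leader/follower) and connected clients
echo ruok | nc dn100.dg.hadoop.cn 2181
echo stat | nc dn100.dg.hadoop.cn 2181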
/etc/init.d/iptables stop
mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts
sh /opt/hadoop_scripts/deploy/AddHostname.sh
sh /opt/hadoop_scripts/deploy/CreateDatanode.sh
At this point the Hadoop cluster environment is deployed; now we initialize the cluster.
/etc/init.d/zookeeper-server start
sudo -u hdfs hdfs zkfc -formatZK
/etc/init.d/hadoop-hdfs-zkfc start
# make sure the format is done as the hdfs user
sudo -u hdfs hadoop namenode -format
# On the primary namenode: archive the formatted name directory and serve it with nc
tar -zcvPf /tmp/namedir.tar.gz /data/hadoop/dfs/name/
nc -l 9999 < /tmp/namedir.tar.gz
# On the standby namenode: fetch and unpack the metadata
wget 192.168.1.1:9999 -O /tmp/namedir.tar.gz
tar -zxvPf /tmp/namedir.tar.gz
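The manual tar/nc copy above is what this walkthrough assumes; HDFS in this version also ships a bootstrap command for the standby that can do the same job, sketched here only as an alternative:

# alternative, run on the standby namenode; the primary namenode must already be formatted and running
sudo -u hdfs hdfs namenode -bootstrapStandby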
/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-yarn-resourcemanager start
http://192.168.1.1:9080
http://192.168.1.2:9080
# If the web UI shows both namenodes in standby state, automatic failover is not configured correctly.
# Check the zkfc log (/var/log/hadoop-hdfs/hadoop-hdfs-zkfc-nn.dg.s.kingsoft.net.log)
# and the zookeeper cluster logs (/var/log/zookeeper/zookeeper.log).
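Besides the web UI, the HA admin tool can report each namenode's state from the command line; a quick check, assuming the namenode IDs ns1/ns2 from the hdfs-site.xml snippet above:

# one namenode should report "active" and the other "standby"
sudo -u hdfs hdfs haadmin -getServiceState ns1
sudo -u hdfs hdfs haadmin -getServiceState ns2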
At this point all of the Hadoop deployment is done; now we start the cluster and verify that it works.
# Remember the saltstack setup from earlier? This is where it pays off. Log in to the saltstack master (192.168.1.1) and run:
salt -v "dn*" cmd.run "/etc/init.d/hadoop-hdfs-datanode start"
# create a tmp directory
sudo -u hdfs hdfs dfs -mkdir /tmp
# create an empty 10G file, compute its MD5, and put it into hdfs
dd if=/dev/zero of=/data/test_10G_file bs=1G count=10
md5sum /data/test_10G_file
sudo -u hdfs hdfs dfs -put /data/test_10G_file /tmp
sudo -u hdfs hdfs dfs -ls /tmp
# now try shutting down one datanode, then pull the test file back out and compare the MD5 to confirm it is unchanged
sudo -u hdfs hdfs dfs -get /tmp/test_10G_file /tmp/
md5sum /tmp/test_10G_file
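If you want to see where the blocks of the test file actually landed (and confirm they stay available while a datanode is down), fsck prints the block locations; an optional check:

# show files, blocks and the datanodes holding each replica of the test file
sudo -u hdfs hdfs fsck /tmp/test_10G_file -files -blocks -locations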
Besides HDFS for distributed big-data storage, Hadoop has an equally important component: distributed computation (MapReduce). Now let's start the MapReduce v2 (YARN) cluster.
/etc/init.d/hadoop-yarn-resourcemanager start
# Again on the saltstack master, run:
salt -v "dn*" cmd.run "/etc/init.d/hadoop-yarn-nodemanager start"
# TestDFSIO benchmarks HDFS read/write performance: write 10 files of 1G each.
su - hdfs
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
# Sort benchmark for MapReduce
## write random data into the random-data directory
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomwriter random-data
## run the sort job
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort random-data sorted-data
## verify that sorted-data really is sorted
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar testmapredsort -sortInput random-data -sortOutput sorted-data
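TestDFSIO appends its throughput numbers to a local log file, and the benchmark data it leaves in HDFS can be cleaned up afterwards; roughly:

# TestDFSIO writes a summary to TestDFSIO_results.log in the current directory
cat TestDFSIO_results.log
# remove the benchmark data from HDFS when done
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar TestDFSIO -clean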
vim /opt/hadoop_scripts/share_data/resolv_host
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 nn.dg.hadoop.cn
192.168.1.2 nn2.dg.hadoop.cn
192.168.1.100 dn100.dg.hadoop.cn
192.168.1.101 dn101.dg.hadoop.cn
192.168.1.102 dn102.dg.hadoop.cn
192.168.1.103 dn103.dg.hadoop.cn
mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts
sh /opt/hadoop_scripts/deploy/CreateDatanode.sh
sh /opt/hadoop_scripts/deploy/AddHostname.sh
/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-yarn-nodemanager start
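To confirm the new node actually joined HDFS, the dfsadmin report lists every live datanode; for example:

# the new node (dn103.dg.hadoop.cn / 192.168.1.103) should appear among the live datanodes
sudo -u hdfs hdfs dfsadmin -report | grep -A 1 "192.168.1.103"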
Normally a single copy of the Hadoop configuration is maintained for the whole cluster, and it has to be distributed to every member. The approach here is salt + rsync.
# Edit the Hadoop configuration under /etc/hadoop/conf/ on the primary namenode, then run the following command to sync it to every cluster member
sync_h_conf
# The script directory also needs maintaining, e.g. the hosts file /opt/hadoop_scripts/share_data/resolv_host; after editing it, sync it to every member with
sync_h_script
# These two commands are actually aliases for salt commands that I defined myself; see /opt/hadoop_scripts/profile.d/hadoop.sh
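The real definitions live in /opt/hadoop_scripts/profile.d/hadoop.sh; as a rough idea of what such salt + rsync aliases can look like (a sketch, not the actual file), reusing the hadoop_conf and hadoop_s rsync modules from earlier:

# hypothetical sketch of the two aliases; check /opt/hadoop_scripts/profile.d/hadoop.sh for the real ones
alias sync_h_conf='salt -v "*" cmd.run "rsync -avz 192.168.1.1::hadoop_conf /etc/hadoop/conf"'
alias sync_h_script='salt -v "*" cmd.run "rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts"'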
The most common approach is monitoring with Ganglia and Nagios: Ganglia collects a large number of metrics and graphs them, while Nagios raises an alert when a metric crosses a threshold. I will add documentation on Ganglia monitoring later.
In fact, Hadoop ships with an interface we can use to write our own monitoring programs, and it is quite simple: request http://192.168.1.1:9080/jmx and you get a very detailed JSON response. Returning the whole JSON blob on every query is wasteful, though; the interface also supports more targeted queries. For example, if I only want the system information, I can call http://192.168.1.1:9080/jmx?qry=java.lang:type=OperatingSystem . The value after the qry parameter is simply the value of the "name" key of the corresponding JSON bean.
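A small example of how a script can consume this interface: curl the filtered endpoint and pretty-print the JSON (python's built-in json.tool is enough on CentOS 6, no extra libraries needed):

# pull only the OperatingSystem bean from the namenode's JMX servlet and format it
curl -s 'http://192.168.1.1:9080/jmx?qry=java.lang:type=OperatingSystem' | python -m json.tool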
I still hit plenty of pitfalls while working through this Hadoop cluster deployment, and I plan to write about the problems I ran into in the next post. If you run into problems deploying from this post, feel free to contact me and we can compare notes. QQ: 83766787. Contributions to the deployment scripts are of course welcome too; the git address is http://git.oschina.net/snake1361222/hadoop_scripts