Hadoop Automated Deployment Scripts

Source: http://www.wangsenfeng.com/articles/2016/10/27/1477556261953.html

1 Overview

I recently wrote a set of Hadoop automated deployment scripts, covering both full-cluster deployment and adding a single node to an existing cluster. If you need to stand up a Hadoop cluster quickly, feel free to use them. I have tested the scripts on 5 virtual machines; if you hit any remaining bugs, please report them. This article describes the cluster deployment script; the Hadoop version installed is 2.6.0.

2 Dependencies

Installing a Hadoop 2.6.0 cluster requires the JDK and ZooKeeper. The JDK version installed here is jdk-7u60-linux-x64 and the ZooKeeper version is zookeeper-3.4.6.

3 Files and Configuration

The deployment scripts come in two parts: scripts run as the root user and scripts run as the Hadoop startup user. All of them need to be executed on only one server, which then acts as the Hadoop master. Each part is described below.

3.1 The root scripts

The directory structure of the root scripts is as follows:

  • conf — configuration file directory 
    • init.conf
  • expect — expect script directory 
    • password.expect
    • scp.expect
    • otherInstall.expect
  • file — installation package directory 
    • hadoop-2.6.0.tar.gz
    • jdk-7u60-linux-x64.tar.gz
    • zookeeper-3.4.6.tar.gz
  • installRoot.sh — the executable script

3.1.1 The conf directory

The init.conf file in this directory is the configuration file used by the root script; edit it before running the script. Its contents are:

#jdk file and version
JDK_FILE_TAR=jdk-7u60-linux-x64.tar.gz
#jdk unpack name
JDK_FILE=jdk1.7.0_60
#java home
JAVAHOME=/usr/java
#Whether install the package for dependence, 0 means no, 1 means yes
IF_INSTALL_PACKAGE=1
#host conf
ALLHOST="hadoop1master hadoop1masterha hadoop1slave1 hadoop1slave2 hadoop1slave3"
ALLIP="192.168.0.180 192.168.0.184 192.168.0.181 192.168.0.182 192.168.0.183"
#zookeeper conf
ZOOKEEPER_TAR=zookeeper-3.4.6.tar.gz
ZOOKEEPERHOME=/usr/local/zookeeper-3.4.6
SLAVELIST="hadoop1slave1 hadoop1slave2 hadoop1slave3"
#hadoop conf
HADOOP_TAR=hadoop-2.6.0.tar.gz
HADOOPHOME=/usr/local/hadoop-2.6.0
HADOOP_USER=hadoop2
HADOOP_PASSWORD=hadoop2
#root conf: $MASTER_HA $SLAVE1 $SLAVE2 $SLAVE3
ROOT_PASSWORD="hadoop hadoop hadoop hadoop"

Notes on selected parameters:

  1. ALLHOST lists the hostnames of the cluster servers, separated by spaces; ALLIP lists their IP addresses, also separated by spaces. The entries of ALLHOST and ALLIP must correspond one-to-one.
  2. SLAVELIST lists the hostnames of the servers on which the ZooKeeper ensemble is deployed.
  3. ROOT_PASSWORD holds the root passwords of every server except the master, separated by spaces (as in the sample value above). In practice the servers' root passwords may well differ.
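As a concrete illustration of the one-to-one pairing, here is a minimal bash sketch (sample values from init.conf, shortened to three nodes) of how installRoot.sh turns the two space-separated lists into /etc/hosts lines:

```shell
#!/bin/bash
# How the space-separated ALLHOST and ALLIP values pair up by index.
ALLHOST="hadoop1master hadoop1masterha hadoop1slave1"
ALLIP="192.168.0.180 192.168.0.184 192.168.0.181"
# Unquoted expansion splits on whitespace, turning each list into an array.
hostArr=( $ALLHOST )
IpArr=( $ALLIP )
hostsLines=""
for (( i = 0; i < ${#hostArr[@]}; i++ )); do
    hostsLines="$hostsLines${IpArr[i]} ${hostArr[i]}
"
done
printf '%s' "$hostsLines"
```

If the two lists have different lengths, some hosts end up with empty IPs (or vice versa), which is why the one-to-one requirement matters.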

3.1.2 The expect directory

This directory contains three files: password.expect, scp.expect, and otherInstall.expect. password.expect sets the password of the Hadoop startup user; scp.expect copies files to remote servers; otherInstall.expect runs installRoot.sh on the other servers. All three are invoked from installRoot.sh. 
The contents of password.expect:

#!/usr/bin/expect -f
set user [lindex $argv 0]
set password [lindex $argv 1]
spawn passwd $user
expect "New password:"
send "$password\r"
expect "Retype new password:"
send "$password\r"
expect eof

Here argv 0 and argv 1 are passed in from the installRoot.sh script; the argv * values of the other two files are passed the same way. 
The contents of scp.expect:
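To make the argv mapping concrete, a pure-shell stand-in for password.expect (the call site shown in the comment is how installRoot.sh invokes it, per the directory layout above):

```shell
#!/bin/bash
# installRoot.sh calls the real script roughly as:
#   $PROGDIR/expect/password.expect "$HADOOP_USER" "$HADOOP_PASSWORD"
# This shell function mirrors the positional mapping only; it does not
# actually change any password.
password_expect() {
    user=$1      # corresponds to [lindex $argv 0]
    password=$2  # corresponds to [lindex $argv 1]
    echo "passwd $user <- $password"
}
password_expect hadoop2 hadoop2
```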

#!/usr/bin/expect -f
# set dir, host, user, password
set dir [lindex $argv 0]
set host [lindex $argv 1]
set user [lindex $argv 2]
set password [lindex $argv 3]
set timeout -1
spawn scp -r $dir $user@$host:/root/
expect {
    "(yes/no)?" {
        send "yes\n"
        expect "*assword:" { send "$password\n" }
    }
    "*assword:" {
        send "$password\n"
    }
}
expect eof

The contents of otherInstall.expect:

#!/usr/bin/expect -f
# set dir, name, host, user, password
set dir [lindex $argv 0]
set name [lindex $argv 1]
set host [lindex $argv 2]
set user [lindex $argv 3]
set password [lindex $argv 4]
set timeout -1
spawn ssh -q $user@$host "$dir/$name"
expect {
    "(yes/no)?" {
        send "yes\n"
        expect "*assword:" { send "$password\n" }
    }
    "*assword:" {
        send "$password\n"
    }
}
expect eof

3.1.3 The file directory

This directory holds the installation packages needed by the Hadoop cluster and its dependencies.

3.1.4 The installRoot.sh script

This script must be executed as the root user. Its contents:

#!/bin/bash
if [ $USER != "root" ]; then
    echo "[ERROR]:Must run as root"
    exit 1
fi
# Get absolute path and name of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
readonly PROGNAME=$(basename $0)
hostname=`hostname`
source /etc/profile
# import init.conf
source $PROGDIR/conf/init.conf
echo "install start..."
# install package for dependence
if [ $IF_INSTALL_PACKAGE -eq 1 ]; then
    yum -y install expect >/dev/null 2>&1
    echo "expect install successful."
    # yum install openssh-clients #scp
fi
# stop iptables or open ports, now stop iptables
service iptables stop
chkconfig iptables off
FF_INFO=`service iptables status`
if [ -n "`echo $FF_INFO | grep "Firewall is not running"`" ]; then
    echo "Firewall is already stop."
else
    echo "[ERROR]:Failed to shut down the firewall. Exit shell."
    exit 1
fi
# stop selinux
setenforce 0
SL_INFO=`getenforce`
if [ $SL_INFO == "Permissive" -o $SL_INFO == "disabled" ]; then
    echo "selinux is already stop."
else
    echo "[ERROR]:Failed to shut down the selinux. Exit shell."
    exit 1
fi
# host config
hostArr=( $ALLHOST )
IpArr=( $ALLIP )
for (( i = 0; i < ${#hostArr[@]}; i++ )); do
    if [ -z "`grep "${hostArr[i]}" /etc/hosts`" -o -z "`grep "${IpArr[i]}" /etc/hosts`" ]; then
        echo "${IpArr[i]} ${hostArr[i]}" >> /etc/hosts
    fi
done
# user config
groupadd $HADOOP_USER && useradd -g $HADOOP_USER $HADOOP_USER && $PROGDIR/expect/password.expect $HADOOP_USER $HADOOP_PASSWORD >/dev/null 2>&1
# check jdk
checkOpenJDK=`rpm -qa | grep java`
# openJDK already installed: uninstall it
if [ -n "$checkOpenJDK" ]; then
    rpm -e --nodeps $checkOpenJDK
    echo "uninstall openJDK successful"
fi
# A way of exception handling: if `java -version` fails, run what follows ||.
java -version || (
    [ ! -d $JAVAHOME ] && ( mkdir $JAVAHOME )
    tar -zxf $PROGDIR/file/$JDK_FILE_TAR -C $JAVAHOME
    echo "export JAVA_HOME=$JAVAHOME/$JDK_FILE" >> /etc/profile
    echo 'export JAVA_BIN=$JAVA_HOME/bin' >> /etc/profile
    echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
    echo 'export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> /etc/profile
    echo 'export JAVA_HOME JAVA_BIN PATH CLASSPATH' >> /etc/profile
    echo "sun jdk done"
)
# check zookeeper
slaveArr=( $SLAVELIST )
if [[ "${slaveArr[@]}" =~ $hostname ]]; then
    zkServer.sh status || [ -d $ZOOKEEPERHOME ] || (
        tar -zxf $PROGDIR/file/$ZOOKEEPER_TAR -C /usr/local/
        chown -R $HADOOP_USER:$HADOOP_USER $ZOOKEEPERHOME
        echo "export ZOOKEEPER_HOME=$ZOOKEEPERHOME" >> /etc/profile
        echo 'PATH=$PATH:$ZOOKEEPER_HOME/bin' >> /etc/profile
        echo "zookeeper done"
    )
fi
# check hadoop2
hadoop version || [ -d $HADOOPHOME ] || (
    tar -zxf $PROGDIR/file/$HADOOP_TAR -C /usr/local/
    chown -R $HADOOP_USER:$HADOOP_USER $HADOOPHOME
    echo "export HADOOP_HOME=$HADOOPHOME" >> /etc/profile
    echo 'PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
    echo 'HADOOP_HOME_WARN_SUPPRESS=1' >> /etc/profile
    echo "hadoop2 done"
)
source /etc/profile
# ssh config
sed -i "s/^#RSAAuthentication\ yes/RSAAuthentication\ yes/g" /etc/ssh/sshd_config
sed -i "s/^#PubkeyAuthentication\ yes/PubkeyAuthentication\ yes/g" /etc/ssh/sshd_config
sed -i "s/^#AuthorizedKeysFile/AuthorizedKeysFile/g" /etc/ssh/sshd_config
sed -i "s/^GSSAPIAuthentication\ yes/GSSAPIAuthentication\ no/g" /etc/ssh/sshd_config
sed -i "s/^#UseDNS\ yes/UseDNS\ no/g" /etc/ssh/sshd_config
service sshd restart
# install on the other servers
rootPasswdArr=( $ROOT_PASSWORD )
if [ $hostname == ${hostArr[0]} ]; then
    i=0
    for node in $ALLHOST; do
        if [ $hostname == $node ]; then
            echo "this server, do nothing"
        else
            # copy install dir to the other server
            $PROGDIR/expect/scp.expect $PROGDIR $node $USER ${rootPasswdArr[$i]}
            $PROGDIR/expect/otherInstall.expect $PROGDIR $PROGNAME $node $USER ${rootPasswdArr[$i]}
            i=$(($i+1)) # i++
            echo $node" install successful."
        fi
    done
    # Let the environment variables take effect
    su - root
fi

The script does the following:

  1. If IF_INSTALL_PACKAGE=1 is set in the configuration file, install expect; this is the default. If expect is already present on the servers, set IF_INSTALL_PACKAGE=0.
  2. Stop the firewall and disable SELinux.
  3. Write the hostname-to-IP mappings of all cluster machines into /etc/hosts.
  4. Create the Hadoop startup user and its user group.
  5. Install the JDK, ZooKeeper, and Hadoop, and set the environment variables.
  6. Modify the ssh configuration file /etc/ssh/sshd_config.
  7. If the machine running the script is the master, copy the root scripts to the other machines and run them there. 
    Note: before running this script, make sure every server in the cluster can run the scp command. If scp is missing, install openssh-clients on each server: yum -y install openssh-clients.
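The JDK, ZooKeeper, and Hadoop installs in step 5 all rely on the `probe || ( install steps )` idiom: if the probe command (e.g. java -version) already succeeds, the parenthesised install subshell is skipped, which makes re-running the script safe. A self-contained sketch of the idiom, with true/false standing in for the real probes:

```shell
#!/bin/bash
# Idempotent-install idiom used by installRoot.sh: run the probe, and only
# if it fails, run the (sub-shelled) install steps after ||.
probe_and_install() {
    "$1" >/dev/null 2>&1 || (
        # real install steps (tar -zxf ..., echo ... >> /etc/profile) go here
        echo "installing"
    )
}
first=$(probe_and_install true)    # probe succeeds -> install skipped
second=$(probe_and_install false)  # probe fails    -> install runs
echo "first='$first' second='$second'"
```

The ZooKeeper/Hadoop branches add a second guard, `|| [ -d $HOME_DIR ] ||`, so the unpack is also skipped when the target directory already exists.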

3.2 The hadoop scripts

The directory structure of the hadoop scripts is as follows:

  • bin — script directory 
    • config_hadoop.sh
    • config_ssh.sh
    • config_zookeeper.sh
    • ssh_nopassword.expect
    • start_all.sh
  • conf — configuration file directory 
    • init.conf
  • template — configuration template directory 
    • core-site.xml
    • hadoop-env.sh
    • hdfs-site.xml
    • mapred-site.xml
    • mountTable.xml
    • myid
    • slaves
    • yarn-env.sh
    • yarn-site.xml
    • zoo.cfg
  • installCluster.sh — the executable script

3.2.1 The bin directory

This directory contains all the scripts called by installCluster.sh; they are described one by one below.

3.2.1.1 config_hadoop.sh

This script creates the directories Hadoop needs and fills in the Hadoop configuration files; all of its parameters come from init.conf.

#!/bin/bash
# Get absolute path of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
# import init.conf
source $PROGDIR/../conf/init.conf
for node in $ALL; do
    # create dirs
    ssh -q $HADOOP_USER@$node "
    mkdir -p $HADOOPDIR_CONF/hadoop2/namedir
    mkdir -p $HADOOPDIR_CONF/hadoop2/datadir
    mkdir -p $HADOOPDIR_CONF/hadoop2/jndir
    mkdir -p $HADOOPDIR_CONF/hadoop2/tmp
    mkdir -p $HADOOPDIR_CONF/hadoop2/hadoopmrsys
    mkdir -p $HADOOPDIR_CONF/hadoop2/hadoopmrlocal
    mkdir -p $HADOOPDIR_CONF/hadoop2/nodemanagerlocal
    mkdir -p $HADOOPDIR_CONF/hadoop2/nodemanagerlogs
    "
    echo "$node create dir done."
    for conffile in $CONF_FILE; do
        # copy
        scp $PROGDIR/../template/$conffile $HADOOP_USER@$node:$HADOOPHOME/etc/hadoop
        # update
        ssh -q $HADOOP_USER@$node "
        sed -i 's%MASTER_HOST%${MASTER_HOST}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%MASTER_HA_HOST%${MASTER_HA_HOST}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%SLAVE1%${SLAVE1}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%SLAVE2%${SLAVE2}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%SLAVE3%${SLAVE3}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%HDFS_CLUSTER_NAME%${HDFS_CLUSTER_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%VIRTUAL_PATH%${VIRTUAL_PATH}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%DFS_NAMESERVICES%${DFS_NAMESERVICES}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%NAMENODE1_NAME%${NAMENODE1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%NAMENODE2_NAME%${NAMENODE2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%NAMENODE_JOURNAL%${NAMENODE_JOURNAL}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%HADOOPDIR_CONF%${HADOOPDIR_CONF}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%ZOOKEEPER_ADDRESS%${ZOOKEEPER_ADDRESS}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%YARN1_NAME%${YARN1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%YARN2_NAME%${YARN2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%HADOOPHOME%${HADOOPHOME}%g' $HADOOPHOME/etc/hadoop/$conffile
        sed -i 's%JAVAHOME%${JAVAHOME}%g' $HADOOPHOME/etc/hadoop/$conffile
        # update yarn.resourcemanager.ha.id for yarn_ha
        if [ $conffile == 'yarn-site.xml' ]; then
            if [ $node == $MASTER_HA_HOST ]; then
                sed -i 's%YARN_ID%${YARN2_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
            else
                sed -i 's%YARN_ID%${YARN1_NAME}%g' $HADOOPHOME/etc/hadoop/$conffile
            fi
        fi
        "
    done
    echo "$node copy hadoop template done."
done
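The core mechanism here is plain sed placeholder substitution: the template files contain literal tokens such as MASTER_HOST, and the script rewrites them with the values from init.conf. A local sketch of the same substitution run against a temporary file (the property name is illustrative, not copied from the real templates):

```shell
#!/bin/bash
# Substitute a MASTER_HOST-style placeholder in a template copy, as
# config_hadoop.sh does remotely via ssh + sed.
MASTER_HOST=hadoop1master
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>MASTER_HOST:9000</value>
</property>
EOF
# '%' as the sed delimiter lets the replacement safely contain '/' (paths)
sed -i "s%MASTER_HOST%${MASTER_HOST}%g" "$tmpconf"
grep value "$tmpconf"
```

Using % instead of / as the delimiter is what lets values like $HADOOPHOME (which contain slashes) be substituted without escaping.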

3.2.1.2 config_ssh.sh and ssh_nopassword.expect

These two files set up passwordless ssh login; ssh_nopassword.expect is called by config_ssh.sh. 
The contents of config_ssh.sh:

#!/bin/bash
# Get absolute path of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
# import init.conf
source $PROGDIR/../conf/init.conf
# Get hostname
HOSTNAME=`hostname`
# Config ssh nopassword login
echo "Config ssh on master"
# If the directory "~/.ssh" does not exist, create it and chmod it
[ ! -d ~/.ssh ] && ( mkdir ~/.ssh ) && ( chmod 700 ~/.ssh )
# If the file "~/.ssh/id_rsa.pub" does not exist, run ssh-keygen and chmod
[ ! -f ~/.ssh/id_rsa.pub ] && ( yes|ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa ) && ( chmod 600 ~/.ssh/id_rsa.pub )
echo "Config ssh nopassword for cluster"
# For all nodes, including master and slaves
for node in $ALL; do
    # execute bin/ssh_nopassword.expect
    $PROGDIR/ssh_nopassword.expect $node $HADOOP_USER $HADOOP_PASSWORD $HADOOPDIR_CONF/.ssh/id_rsa.pub >/dev/null 2>&1
    echo "$node done."
done
echo "Config ssh successful."

The contents of ssh_nopassword.expect:

#!/usr/bin/expect -f
set host [lindex $argv 0]
set user [lindex $argv 1]
set password [lindex $argv 2]
set dir [lindex $argv 3]
spawn ssh-copy-id -i $dir $user@$host
expect {
    yes/no { send "yes\r"; exp_continue }
    -nocase "password:" { send "$password\r" }
}
expect eof

3.2.1.3 config_zookeeper.sh

This script configures ZooKeeper. Its contents:

#!/bin/bash
# Get absolute path of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
# import init.conf
source $PROGDIR/../conf/init.conf
# update conf
sed -i "s%ZOOKEEPERHOME%${ZOOKEEPERHOME}%g" $PROGDIR/../template/zoo.cfg
sed -i "s%ZOOKEEPER_SLAVE1%${ZOOKEEPER_SLAVE1}%g" $PROGDIR/../template/zoo.cfg
sed -i "s%ZOOKEEPER_SLAVE2%${ZOOKEEPER_SLAVE2}%g" $PROGDIR/../template/zoo.cfg
sed -i "s%ZOOKEEPER_SLAVE3%${ZOOKEEPER_SLAVE3}%g" $PROGDIR/../template/zoo.cfg
zookeeperArr=( "$ZOOKEEPER_SLAVE1" "$ZOOKEEPER_SLAVE2" "$ZOOKEEPER_SLAVE3" )
myid=1
for node in ${zookeeperArr[@]}; do
    scp $PROGDIR/../template/zoo.cfg $HADOOP_USER@$node:$ZOOKEEPERHOME/conf
    echo $myid > $PROGDIR/../template/myid
    ssh -q $HADOOP_USER@$node "
    [ ! -d $ZOOKEEPERHOME/data ] && ( mkdir $ZOOKEEPERHOME/data )
    [ ! -d $ZOOKEEPERHOME/log ] && ( mkdir $ZOOKEEPERHOME/log )
    "
    scp $PROGDIR/../template/myid $HADOOP_USER@$node:$ZOOKEEPERHOME/data
    myid=`expr $myid + 1` # i++
    echo "$node copy zookeeper template done."
done
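Each ZooKeeper node must get a unique myid (1, 2, 3, ...), so the script overwrites the template myid file with an incremented counter before each scp. The counter logic in isolation, writing into a temporary directory instead of remote nodes:

```shell
#!/bin/bash
# Assign sequential myid values per zookeeper node, as config_zookeeper.sh
# does; a local temp dir stands in for each node's $ZOOKEEPERHOME/data.
zookeeperArr=( "hadoop1slave1" "hadoop1slave2" "hadoop1slave3" )
workdir=$(mktemp -d)
myid=1
for node in "${zookeeperArr[@]}"; do
    mkdir -p "$workdir/$node/data"
    echo $myid > "$workdir/$node/data/myid"
    myid=`expr $myid + 1`
done
cat "$workdir"/*/data/myid
```

If two nodes ended up with the same myid, the ensemble would refuse to form a quorum correctly, which is why the counter must advance once per node.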

3.2.1.4 start_all.sh

This script starts ZooKeeper and all the Hadoop components. Its contents:

#!/bin/bash
source /etc/profile
# Get absolute path of this shell
readonly PROGDIR=$(readlink -m $(dirname $0))
# import init.conf
source $PROGDIR/../conf/init.conf
# start zookeeper
zookeeperArr=( "$ZOOKEEPER_SLAVE1" "$ZOOKEEPER_SLAVE2" "$ZOOKEEPER_SLAVE3" )
for znode in ${zookeeperArr[@]}; do
    ssh -q $HADOOP_USER@$znode "
    source /etc/profile
    $ZOOKEEPERHOME/bin/zkServer.sh start
    "
    echo "$znode zookeeper start done."
done
# start journalnode
journalArr=( $JOURNALLIST )
for jnode in ${journalArr[@]}; do
    ssh -q $HADOOP_USER@$jnode "
    source /etc/profile
    $HADOOPHOME/sbin/hadoop-daemon.sh start journalnode
    "
    echo "$jnode journalnode start done."
done
# format zookeeper
$HADOOPHOME/bin/hdfs zkfc -formatZK
# format hdfs
$HADOOPHOME/bin/hdfs namenode -format -clusterId $DFS_NAMESERVICES
# start namenode
$HADOOPHOME/sbin/hadoop-daemon.sh start namenode
# sign in master_ha, sync from namenode to namenode_ha
ssh -q $HADOOP_USER@$MASTER_HA_HOST "
$HADOOPHOME/bin/hdfs namenode -bootstrapStandby
"
# start zkfc on master
$HADOOPHOME/sbin/hadoop-daemon.sh start zkfc
# start namenode_ha and datanode
$HADOOPHOME/sbin/start-dfs.sh
# start yarn
$HADOOPHOME/sbin/start-yarn.sh
# start yarn_ha
ssh -q $HADOOP_USER@$MASTER_HA_HOST "
source /etc/profile
$HADOOPHOME/sbin/yarn-daemon.sh start resourcemanager
"
echo "start all done."

4 Automated Cluster Deployment Workflow

4.1 Running the root scripts

Choose one server as the Hadoop 2.6.0 master node and run the scripts there as root.

  1. Make sure every server in the Hadoop cluster can run scp: try scp on each server, and if the command is not found, install it with yum -y install openssh-clients.
  2. Do the following, then verify the result: check the /etc/hosts and /etc/profile settings, and run java -version and hadoop version to confirm the JDK and Hadoop installation. If the java or hadoop command cannot be found, log in to the server again and recheck.
    1. Run cd ~ to enter the /root directory.
    2. Pack the directory containing the root scripts into a tar file (say root_install.tar.gz), then run rz -y to upload root_install.tar.gz (if rz is not found, install it with yum -y install lrzsz).
    3. Run tar -zxvf root_install.tar.gz to unpack it.
    4. Run cd root_install to enter the root_install directory.
    5. Run ./installRoot.sh to install the JDK, ZooKeeper, and Hadoop, and wait for it to finish.
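The verification in step 2 can be scripted. A generic sketch of such a post-install probe (on a deployed node you would check java, hadoop, and zkServer.sh; the command names below are placeholders so the sketch runs anywhere):

```shell
#!/bin/bash
# Minimal post-install probe: report whether each expected command is on PATH.
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: ok"
    else
        echo "$1: MISSING"
    fi
}
# On a real node: check_cmd java; check_cmd hadoop; check_cmd zkServer.sh
check_cmd sh
check_cmd definitely-not-installed-xyz
```

Remember to source /etc/profile (or re-log-in) first, since installRoot.sh appends the PATH changes there.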

4.2 Running the hadoop scripts

On the master node, run these scripts as the Hadoop startup user (the user created by the root script; assume below that it is hadoop2):

  1. From the root account, switch directly to the hadoop2 user: su - hadoop2.
  2. Do the following, then verify the result: check the zookeeper and Hadoop startup logs to confirm the installation succeeded, and use Hadoop's built-in web UIs to check the state of the cluster.
    1. Run cd ~ to enter the /home/hadoop2 directory.
    2. Pack the directory containing the hadoop scripts into a tar file (say hadoop_install.tar.gz), then run rz -y to upload hadoop_install.tar.gz (if rz is not found, install it with yum -y install lrzsz).
    3. Run tar -zxvf hadoop_install.tar.gz to unpack it.
    4. Run cd hadoop_install to enter the hadoop_install directory.
    5. Run ./installCluster.sh to configure and start zookeeper and Hadoop, and wait for the script to finish.
  3. Finally, based on the fs.viewfs.mounttable.hCluster.link./tmp entry in mountTable.xml, create the directory its value points to: 
    hdfs dfs -mkdir hdfs://hadoop-cluster1/tmp 
    If you skip this step, hdfs dfs -ls /tmp will report that the directory cannot be found.

5 Summary

The Hadoop 2.6.0 deployment scripts still have rough edges: the configuration files carry many parameters, some of them duplicated, and the scripts themselves could be written better. Treat this as a starting point, and please point out any mistakes you find. Thanks.
