Deploying a Hadoop cluster with Docker

Note: this article does not cover installing the Linux virtual machine or installing Docker.

1. Environment

    1.1 Host machine

        Kernel version: Linux localhost 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) x86_64 GNU/Linux

        OS version: Debian 8

    1.2 Docker

        Version: Docker version 1.9.1, build a34a1d5

        Base image: crxy/centos

        

2. Create the user and group on the host

    2.1 Create the docker user group

        sudo groupadd docker

    2.2 Add the current user to the docker group

        sudo gpasswd -a *** docker    (note: *** is the current system user name)

    2.3 Restart the Docker daemon

        sudo service docker restart

    2.4 After restarting, check whether the docker service works

        docker version

    2.5 If it still does not take effect, try rebooting the system

        sudo reboot

3. Build Docker images from Dockerfiles

    3.1 Build an image with SSH support, with account root and password root

        cd /usr/local/

        mkdir dockerfile

        cd dockerfile/

        mkdir centos-ssh-root

        cd centos-ssh-root

        vi Dockerfile    (note: Docker looks for a file named Dockerfile; the first letter must be capitalized)

 
 
 
 
 
# Use an existing OS image as the base
FROM centos
# Image maintainer
MAINTAINER crxy
# Install openssh-server and sudo, and set sshd's UsePAM option to no
RUN yum install -y openssh-server sudo
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
# Install openssh-clients
RUN yum install -y openssh-clients
# Add the test user root with password root, and add it to sudoers
RUN echo "root:root" | chpasswd
RUN echo "root   ALL=(ALL)       ALL" >> /etc/sudoers
# The next two lines are required on CentOS 6; without them sshd in the resulting container refuses logins
RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
# Start the sshd service and expose port 22
RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

        Build the image:

        docker build -t crxy/centos-ssh-root .

        After the build finishes, check the generated image:

        docker images
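        As a quick sanity check, the image can be smoke-tested by starting a throwaway container and logging in over SSH. This is a minimal sketch; the container name ssh-test and host port 10022 are arbitrary choices:

        # run a throwaway container, mapping container port 22 to host port 10022
        docker run -d --name ssh-test -p 10022:22 crxy/centos-ssh-root
        # log in as root (password: root), then remove the test container
        ssh root@127.0.0.1 -p 10022
        docker rm -f ssh-test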

        

    3.2 Build the JDK image

        Note: use JDK 1.7 or later.

        cd ..

        mkdir centos-ssh-root-jdk

        cd centos-ssh-root-jdk

        cp ../../jdk-7u80-linux-x64.tar.gz .

        vi Dockerfile

 
 
 
 
 
# Build on the image produced in the previous step
FROM crxy/centos-ssh-root
ADD jdk-7u80-linux-x64.tar.gz /usr/local/
RUN mv /usr/local/jdk1.7.0_80 /usr/local/jdk1.7
ENV JAVA_HOME /usr/local/jdk1.7
ENV PATH $JAVA_HOME/bin:$PATH

        Build the image:

        docker build -t crxy/centos-ssh-root-jdk .

        After the build finishes, check the generated image:

        docker images
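        Optionally, a one-off container can confirm that the JDK inside the image is usable; this assumes the image name built above:

        # should print the 1.7.0_80 version string if the JDK was unpacked and PATH is set
        docker run --rm crxy/centos-ssh-root-jdk java -version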

        

    3.3 Build the Hadoop image on top of the JDK image

        cd ..

        mkdir centos-ssh-root-jdk-hadoop

        cd centos-ssh-root-jdk-hadoop

        cp ../../hadoop-2.2.0.tar.gz  .        

        vi Dockerfile

 
 
 
 
 
# Build from crxy/centos-ssh-root-jdk
FROM crxy/centos-ssh-root-jdk
ADD hadoop-2.2.0.tar.gz /usr/local
# Install the which package
RUN yum install -y which
# Install the net-tools package
RUN yum install -y net-tools
ENV HADOOP_HOME /usr/local/hadoop-2.2.0
ENV PATH $HADOOP_HOME/bin:$PATH

        Build the image:

        docker build -t crxy/centos-ssh-root-jdk-hadoop .

        After the build finishes, check the generated image:

        docker images
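        A similar optional check confirms that the Hadoop binaries are on the PATH inside the image:

        # should report Hadoop 2.2.0
        docker run --rm crxy/centos-ssh-root-jdk-hadoop hadoop version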

        

4. Set up the Hadoop distributed cluster

    4.1 Hadoop cluster plan

        master: hadoop0  ip: 172.17.0.10

        slave1: hadoop1  ip: 172.17.0.11

        slave2: hadoop2  ip: 172.17.0.12

        Check the Docker bridge interface docker0:
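        One way to inspect the bridge on the host (assuming the iproute2 tools are available; the subnet shown should match the 172.17.0.x plan above):

        # show the address and state of the Docker bridge
        ip addr show docker0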

        

    4.2 Create and start the containers hadoop0, hadoop1 and hadoop2

 
 
 
 
 
# master node
docker run --name hadoop0 --hostname hadoop0 -d -P -p 50070:50070 -p 8088:8088 crxy/centos-ssh-root-jdk-hadoop
# slave node
docker run --name hadoop1 --hostname hadoop1 -d -P crxy/centos-ssh-root-jdk-hadoop
# slave node
docker run --name hadoop2 --hostname hadoop2 -d -P crxy/centos-ssh-root-jdk-hadoop

        Check the containers: docker ps -a

        4.3 Assign fixed IPs to the Hadoop cluster

        4.3.1 Download pipework

        https://github.com/jpetazzo/pipework.git
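        One possible way to fetch it (assuming the host has wget and direct network access); the output name matches the pipework-master.zip used in the next step:

        # download the master branch as a zip archive
        wget https://github.com/jpetazzo/pipework/archive/master.zip -O pipework-master.zip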

        4.3.2 Upload the downloaded zip to the host server, unzip it, and rename it

 
 
 
 
 
unzip pipework-master.zip
mv pipework-master pipework
cp -rp pipework/pipework /usr/local/bin/

        4.3.3 Install bridge-utils

 
 
 
 
 
yum -y install bridge-utils

        4.3.4 Assign fixed IPs to the containers

 
 
 
 
 
pipework docker0 hadoop0 172.17.0.10/24
pipework docker0 hadoop1 172.17.0.11/24
pipework docker0 hadoop2 172.17.0.12/24

        4.3.5 Verify that the IPs are reachable
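        For example, a few pings from the host confirm that the fixed addresses assigned in 4.3.4 respond; a minimal sketch:

        ping -c 3 172.17.0.10
        ping -c 3 172.17.0.11
        ping -c 3 172.17.0.12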

        

    4.4 Configure hadoop0

        4.4.1 Attach to the hadoop0 container

        docker exec -it hadoop0 /bin/bash

        4.4.2 Add host entries on hadoop0

        vi /etc/hosts

 
 
 
 
 
172.17.0.10 hadoop0
172.17.0.11 hadoop1
172.17.0.12 hadoop2

        4.4.3 Edit the Hadoop configuration files on hadoop0

        cd /usr/local/hadoop-2.2.0/etc/hadoop

        Edit hadoop-env.sh and the four main configuration files: core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml

        1) hadoop-env.sh

 
 
 
 
 
# set JAVA_HOME
export JAVA_HOME=/usr/local/jdk1.7

        2) core-site.xml

 
 
 
 
 
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop0:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>

        3) hdfs-site.xml

 
 
 
 
 
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

        4) yarn-site.xml

 
 
 
 
 
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <description>The hostname of the RM.</description>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop0</value>
    </property>
</configuration>

        5) mapred-site.xml

        cp mapred-site.xml.template mapred-site.xml

 
 
 
 
 
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

        4.4.4 Format HDFS

 
 
 
 
 
bin/hdfs namenode -format

    4.5 Configure hadoop1 and hadoop2

        4.5.1 Apply the same configuration as in 4.4

    4.6 Switch back to hadoop0 and set up passwordless SSH login

        4.6.1 Configure SSH

 
 
 
 
 
On hadoop0, run:
cd ~
mkdir .ssh
cd .ssh
ssh-keygen -t rsa        (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop0
ssh-copy-id -i hadoop1
ssh-copy-id -i hadoop2

On hadoop1, run:
cd ~
cd .ssh
ssh-keygen -t rsa        (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop1

On hadoop2, run:
cd ~
cd .ssh
ssh-keygen -t rsa        (just press Enter at every prompt)
ssh-copy-id -i localhost
ssh-copy-id -i hadoop2
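        A quick way to confirm that passwordless login works from hadoop0 before copying files around:

        # each command should print the remote hostname without prompting for a password
        ssh hadoop1 hostname
        ssh hadoop2 hostname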

        4.6.2 Configure the slaves file

        vi /usr/local/hadoop-2.2.0/etc/hadoop/slaves

 
 
 
 
 
hadoop1
hadoop2

        4.6.3 Copy Hadoop to the slave nodes

 
 
 
 
 
scp -rq /usr/local/hadoop-2.2.0 hadoop1:/usr/local
scp -rq /usr/local/hadoop-2.2.0 hadoop2:/usr/local

5. Start the Hadoop cluster

    5.1 Start

    

    hadoop namenode -format -clusterid clustername  

    cd /usr/local/hadoop-2.2.0

    sbin/start-all.sh

    5.2 Verify that the cluster started correctly

        5.2.1 hadoop0

        jps
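        With only hadoop1 and hadoop2 listed in slaves, the master should roughly show the following daemons; this is an expected sketch rather than captured output:

        # roughly expected on hadoop0 (process IDs omitted):
        #   NameNode
        #   SecondaryNameNode
        #   ResourceManager
        #   Jps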

        

        5.2.2 hadoop1

        jps

        

        5.2.3 hadoop2

        jps
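        On the slave nodes (hadoop1 as well as hadoop2), the rough expectation is:

        # roughly expected on each slave (process IDs omitted):
        #   DataNode
        #   NodeManager
        #   Jps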

        

    5.3 Check the HDFS file system status

    bin/hdfs dfsadmin -report

6. Test whether HDFS and YARN work correctly

    6.1 Create a plain file on the master host (hadoop0)

    1) Check whether any files already exist in the file system

    hadoop fs -ls

    

    2) Create a directory in HDFS (the default user directory is /user/$USER)

    hadoop fs -mkdir /user/data

    

    3) Create a plain local file to upload into the user directory
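    A minimal sketch for this step; the directory and the sample content are assumptions, chosen so the path matches the hadoop fs -put command in step 4:

    # create a local test file with arbitrary sample content
    mkdir -p /home/suchao/data
    echo "hello hadoop" > /home/suchao/data/1.txt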

    

    

    4) Write the file into the DFS file system

    hadoop fs -put /home/suchao/data/1.txt /user/data     

    5) Display it in the terminal

    hadoop fs -cat /user/data/1.txt    

Source: http://blog.csdn.net/xu470438000/article/details/50512442



