Building Hadoop 2.8.5 with Docker

1. On the host machine, download the two installation packages: hadoop-2.8.5.tar.gz and jdk-8u131-linux-x64.tar.gz

2. Pull the base image from Docker Hub:

               docker pull centos

3. Create the container:

              docker run -itd --name centos_hdp centos

4. Copy the hadoop-2.8.5.tar.gz and jdk-8u131-linux-x64.tar.gz packages from the host into the container:

             docker cp jdk-8u131-linux-x64.tar.gz centos_hdp:/usr/local/

             docker cp hadoop-2.8.5.tar.gz centos_hdp:/opt/

5. Enter the container running in the background:

           docker exec -it centos_hdp bash

           tar -zxvf /usr/local/jdk-8u131-linux-x64.tar.gz -C /usr/local/

           tar -zxvf /opt/hadoop-2.8.5.tar.gz -C /opt/

           vim /etc/profile
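The /etc/profile additions are not shown in the original; a minimal sketch, assuming the JDK unpacks to /usr/local/jdk1.8.0_131 and Hadoop to /opt/hadoop-2.8.5 as in the extraction commands above:

```shell
# Append to /etc/profile (paths assumed from the extraction steps above)
export JAVA_HOME=/usr/local/jdk1.8.0_131
export HADOOP_HOME=/opt/hadoop-2.8.5
# Put java and the Hadoop bin/sbin scripts on the PATH
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

After `source /etc/profile`, commands such as `hadoop` and `start-all.sh` are available without full paths.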

         source /etc/profile

                 vim  /opt/hadoop-2.8.5/etc/hadoop/core-site.xml
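The core-site.xml contents are not shown in the original. A minimal configuration sketch, assuming the master hostname hadoop-master and port 9000 that appear later in step 11 (the hadoop.tmp.dir path is a hypothetical choice):

```xml
<configuration>
  <!-- Default filesystem: the NameNode on the master container -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-master:9000</value>
  </property>
  <!-- Base directory for HDFS data (hypothetical path) -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.8.5/tmp</value>
  </property>
</configuration>
```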

  

              vim  /opt/hadoop-2.8.5/etc/hadoop/hdfs-site.xml
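A minimal hdfs-site.xml sketch; a replication factor of 2 matches the two slave nodes in this cluster, though the author's actual values are not shown:

```xml
<configuration>
  <!-- Two DataNodes (hadoop-slave1/2), so replicate each block twice -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```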

            vim /opt/hadoop-2.8.5/etc/hadoop/mapred-site.xml
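In Hadoop 2.x, mapred-site.xml typically just points MapReduce at YARN (the file may first need to be created by copying mapred-site.xml.template):

```xml
<configuration>
  <!-- Run MapReduce jobs on the YARN resource manager -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```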

         vim /opt/hadoop-2.8.5/etc/hadoop/yarn-site.xml
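A minimal yarn-site.xml sketch, assuming the ResourceManager runs on the hadoop-master container used in step 11:

```xml
<configuration>
  <!-- ResourceManager lives on the master container -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
  <!-- Shuffle service required by MapReduce on YARN -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```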

          vim /opt/hadoop-2.8.5/etc/hadoop/hadoop-env.sh
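In hadoop-env.sh the usual change is to hard-code JAVA_HOME, since the Hadoop scripts do not reliably inherit it from the login environment inside a container (path assumed from step 5):

```shell
# hadoop-env.sh: set JAVA_HOME explicitly rather than relying on
# the value inherited from the shell environment
export JAVA_HOME=/usr/local/jdk1.8.0_131
```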

          vim /opt/hadoop-2.8.5/etc/hadoop/slaves
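The slaves file lists the worker hostnames; here they should match the container hostnames created in step 11:

```
hadoop-slave1
hadoop-slave2
```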

6. Exit the container and commit it as an image:

                docker commit -a "mayunzhen" -m "hadoop base images" centos_hdp centos_hdp:2.8.5

7. Export the image to an archive file (here named centos_hdp.jar):

               docker save centos_hdp:2.8.5 -o centos_hdp.jar 

8. Copy the centos_hdp.jar image archive to the other host machines:

               scp centos_hdp.jar root@192.168.130.166:/

               scp centos_hdp.jar root@192.168.130.167:/

               scp centos_hdp.jar root@192.168.130.168:/

9. On each of the three hosts (192.168.130.166, 192.168.130.167, 192.168.130.168), import the image:

              docker load -i centos_hdp.jar

10. On each host, set up Docker Weave so that containers on different hosts can communicate with each other.

         For details see: http://www.javashuo.com/article/p-qdwcksnw-ch.html

11. Run the Hadoop containers:

       (on host 192.168.130.166) docker run -itd -h hadoop-master --name hadoop-master --net=hadoop -v /etc/localtime:/etc/localtime:ro -p 50070:50070 -p 8088:8088 -p 9000:9000 iammayunzhen/hadoop:2.8.52

                                 weave attach 192.168.1.11/24 hadoop-master

       (on host 192.168.130.167) docker run -itd -h hadoop-slave1 --name hadoop-slave1 --net=hadoop -v /etc/localtime:/etc/localtime:ro iammayunzhen/hadoop:2.8.52

                                 weave attach 192.168.1.12/24 hadoop-slave1

       (on host 192.168.130.168) docker run -itd -h hadoop-slave2 --name hadoop-slave2 --net=hadoop -v /etc/localtime:/etc/localtime:ro iammayunzhen/hadoop:2.8.52

                                 weave attach 192.168.1.13/24 hadoop-slave2

        Verify that the three containers (192.168.1.11, 192.168.1.12, 192.168.1.13) can reach each other, e.g. with ping.

12. In each of the three containers (192.168.1.11, 192.168.1.12, 192.168.1.13), configure /etc/hosts:
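The /etc/hosts entries are not shown in the original; based on the weave addresses attached in step 11, they would look like:

```
192.168.1.11  hadoop-master
192.168.1.12  hadoop-slave1
192.168.1.13  hadoop-slave2
```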

 

13. Set up passwordless SSH login between the three containers (192.168.1.11, 192.168.1.12, 192.168.1.13).

        Details: http://www.javashuo.com/article/p-qtttsuto-hk.html
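The linked article covers the details; the per-container key setup can be sketched as follows (each container's public key must then also be copied into the other containers' authorized_keys, e.g. with ssh-copy-id):

```shell
# Generate an RSA key pair for root (skipped if one already exists)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Authorize our own key so that local ssh also works without a password
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```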

14. In the master container, start Hadoop:

         source /etc/profile

        start-all.sh

15. Check the cluster status, e.g. with jps on each node, or via the NameNode web UI at http://192.168.130.166:50070 and the ResourceManager UI at http://192.168.130.166:8088 (the ports mapped in step 11).


The finished Docker hadoop:2.8.5 image has been uploaded to https://hub.docker.com/repository/docker/iammayunzhen/centos_hdp (it can also be fetched with docker pull iammayunzhen/centos_hdp:2.8.5), for anyone who wants to download it or use it as a reference.

References:

Building a Docker Hadoop image: https://www.jianshu.com/p/bf76dfedef2f

Syncing container time with the host: https://www.cnblogs.com/kevingrace/p/5570597.html

Communication between containers on different hosts: http://www.javashuo.com/article/p-qdwcksnw-ch.html

Cannot SSH into a container IP: http://blog.chinaunix.net/uid-26168435-id-5732463.html

Install the netstat command: yum install net-tools

Check port status: netstat -tulnp | grep 22

Install the passwd command: yum install -y passwd

Passwordless login between container IPs: http://www.javashuo.com/article/p-qtttsuto-hk.html

Install the SSH client with yum: yum -y install openssh-clients  (https://www.cnblogs.com/nulige/articles/9324564.html)
