Deploying HDFS with Docker

Deploying Hadoop on Docker as shown here is for experimental purposes only: every service (namenode, datanode, journalnode, and so on) is deployed by hand. If you want to manage a cluster flexibly without relying on the official automated deployment scripts, this article may still offer some inspiration.

Preparing the base images

Preparing the JDK image

Note: the JVM crashes when starting a datanode under OpenJDK, so Oracle JDK is used instead.

The base image is Alpine with a JDK installed on top. The Dockerfiles are as follows.

1. OpenJDK 1.8

FROM alpine:latest
MAINTAINER rabbix@qq.com
RUN echo -e "https://mirrors.aliyun.com/alpine/v3.7/main\nhttps://mirrors.aliyun.com/alpine/v3.7/community" > /etc/apk/repositories && \
    apk --no-cache --update add openjdk8-jre-base bash && \
    rm -rf /var/cache/apk/*
ENV JAVA_HOME=/usr/lib/jvm/default-jvm
ENV PATH=$PATH:$JAVA_HOME/bin

Build it:

docker build . -t alpine-jdk8:v1.0

2. Oracle JDK 1.8

The first variant below requires downloading glibc manually.

Download: https://github.com/sgerrand/alpine-pkg-glibc/releases/

The download link for sgerrand.rsa.pub is given in the project's README.

FROM alpine:latest
MAINTAINER rabbix@qq.com
ADD sgerrand.rsa.pub /etc/apk/keys/
COPY glibc-2.27-r0.apk /opt/
RUN echo -e "https://mirrors.aliyun.com/alpine/v3.7/main\nhttps://mirrors.aliyun.com/alpine/v3.7/community" > /etc/apk/repositories && \
    apk add /opt/glibc-2.27-r0.apk && rm -rf /opt/glibc-2.27-r0.apk && \
    apk --no-cache --update add bash && \
    rm -rf /var/cache/apk/*

ADD jdk-8u172-linux-x64.tar.gz /opt/

ENV JAVA_HOME=/opt/jdk1.8.0_172
ENV PATH=$PATH:$JAVA_HOME/bin

Alternatively, download glibc automatically:

FROM alpine:latest
MAINTAINER rabbix@qq.com
RUN echo -e "https://mirrors.aliyun.com/alpine/v3.7/main\nhttps://mirrors.aliyun.com/alpine/v3.7/community" > /etc/apk/repositories && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://raw.githubusercontent.com/sgerrand/alpine-pkg-glibc/master/sgerrand.rsa.pub && \
    wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.27-r0/glibc-2.27-r0.apk && \
    apk add glibc-2.27-r0.apk && rm -rf glibc-2.27-r0.apk && \
    apk --no-cache --update add bash && \
    rm -rf /var/cache/apk/*

ADD jdk-8u172-linux-x64.tar.gz /opt/

ENV JAVA_HOME=/opt/jdk1.8.0_172
ENV PATH=$PATH:$JAVA_HOME/bin

Preparing the Hadoop image

Because hadoop-daemon.sh launches daemons in the background with nohup, the container would exit immediately; the startup script therefore needs a small modification so the daemon runs in the foreground. The current stable version 2.9.1 is used here.

Script location: hadoop-2.9.1/sbin/hadoop-daemon.sh

Before:

    case $command in
      namenode|secondarynamenode|datanode|journalnode|dfs|dfsadmin|fsck|balancer|zkfc|portmap|nfs3|dfsrouter)
        if [ -z "$HADOOP_HDFS_HOME" ]; then
          hdfsScript="$HADOOP_PREFIX"/bin/hdfs
        else
          hdfsScript="$HADOOP_HDFS_HOME"/bin/hdfs
        fi
        nohup nice -n $HADOOP_NICENESS $hdfsScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
      ;;
      (*)
        nohup nice -n $HADOOP_NICENESS $hadoopScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
      ;;
    esac
    echo $! > $pid
    sleep 1

After:

    case $command in
      namenode|secondarynamenode|datanode|journalnode|dfs|dfsadmin|fsck|balancer|zkfc|portmap|nfs3|dfsrouter)
        if [ -z "$HADOOP_HDFS_HOME" ]; then
          hdfsScript="$HADOOP_PREFIX"/bin/hdfs
        else
          hdfsScript="$HADOOP_HDFS_HOME"/bin/hdfs
        fi
        $hdfsScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1
      ;;
      (*)
        $hadoopScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1
      ;;
    esac
    echo $! > $pid
    sleep 1
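The essential change is removing `nohup ... &` so the daemon runs in the foreground and keeps the container alive. One side effect worth knowing: `$!` is set only by background jobs, so the retained `echo $! > $pid` line no longer records the new daemon's PID (harmless here, since the container's lifetime is the daemon's). A minimal shell illustration:

```shell
#!/bin/sh
# "$!" holds the PID of the most recent *background* job only.
true &
BG_PID=$!
wait "$BG_PID"

# A foreground command completes without touching "$!".
true
AFTER_FG=$!

[ "$BG_PID" = "$AFTER_FG" ] && echo "foreground commands do not update \$!"
```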

After the modification, re-compress the tree back into hadoop-2.9.1.tar.gz.

The Dockerfile:

FROM alpine-jdk8:v1.0

MAINTAINER rabbix@qq.com
ADD ./hadoop-2.9.1.tar.gz /opt/

docker build . -t hadoop2.9.1:v1.0

Configuring container IPs

docker network create --subnet=172.16.0.0/16 dn0

Preparing the configuration files

Create the directories (the datanode examples below use dn1, dn2, dn3 rather than a single dn, so adjust the pattern accordingly):

mkdir -p {nn,snn,dn}/{logs,data,etc}
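As a sanity check, the brace expansion above produces 3 × 3 = 9 directories. Note that brace expansion is a bash/zsh feature, not POSIX sh, so run it under bash if your shell is plain sh (as on Alpine):

```shell
#!/bin/sh
# Create the tree in a throwaway directory and list what was made.
DEMO_DIR=$(mktemp -d)
cd "$DEMO_DIR"
bash -c 'mkdir -p {nn,snn,dn}/{logs,data,etc}'
find . -mindepth 2 -type d | sort    # 9 directories in total
```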

Modifying the configuration files

Copy all files under hadoop-2.9.1/etc/hadoop/ into nn/etc/.

Copy /etc/hosts into nn/etc/.

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9001</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/tmp/hdfs-root/</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50071</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>slave1:50076</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>secondary:50091</value>
    </property>
    <property>
        <name>dfs.datanode.address</name>
        <value>slave1:50011</value>
    </property>
    <property>
        <name>dfs.datanode.ipc.address</name>
        <value>slave1:50021</value>
    </property>

</configuration>
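When editing these files by hand it is easy to leave a stale value behind. A quick check with plain grep/sed works even in a minimal container without XML tooling; the `get_prop` helper below is an illustrative sketch (it assumes the name/value pair sits on adjacent lines, as in the files above), run here against an inline sample:

```shell
#!/bin/sh
# get_prop FILE PROPERTY: print the <value> that follows a matching <name>.
get_prop() {
    grep -A1 "<name>$2</name>" "$1" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Inline sample mirroring the hdfs-site.xml above.
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50071</value>
    </property>
</configuration>
EOF

get_prop "$SAMPLE" dfs.namenode.http-address
```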

etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.0.2 master
172.16.0.3 secondary
172.16.0.4 slave1

Hostnames must not contain underscores.
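A guard like the following can catch an invalid hostname before it is written into the hosts file; `valid_hostname` is an illustrative helper, not a Hadoop tool:

```shell
#!/bin/sh
# valid_hostname NAME: fail if the name contains an underscore.
valid_hostname() {
    case "$1" in
        *_*) return 1 ;;
        *)   return 0 ;;
    esac
}

valid_hostname slave1    && echo "slave1: ok"
valid_hostname data_node && echo "data_node: ok" || echo "data_node: rejected"
```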

Startup commands

namenode

The namenode must be formatted before its first start.

First run the format command:

docker run -d --rm --net dn0 --ip 172.16.0.2 -h master \
 --name namenode -p 9001:9001 -p 50071:50071 \
 -v /root/hadoop/nn/data/:/tmp/hdfs-root \
 -v /root/hadoop/nn/etc/:/opt/hadoop-2.9.1/etc/hadoop \
 -v /root/hadoop/nn/logs:/opt/hadoop-2.9.1/logs \
 -v /root/hadoop/nn/etc/hosts:/etc/hosts hadoop2.9.1:v1.0 \
 /opt/hadoop-2.9.1/bin/hdfs namenode -format

Then start the namenode:

docker run -d --rm --net dn0 --ip 172.16.0.2 -h master \
 --name namenode -p 9001:9001 -p 50071:50071 \
 -v /root/hadoop/nn/data/:/tmp/hdfs-root \
 -v /root/hadoop/nn/etc/:/opt/hadoop-2.9.1/etc/hadoop \
 -v /root/hadoop/nn/logs:/opt/hadoop-2.9.1/logs \
 -v /root/hadoop/nn/etc/hosts:/etc/hosts hadoop2.9.1:v1.0 \
 /opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start namenode

Or combined into a single command:

docker run -d --rm --net dn0 --ip 172.16.0.2 -h master \
 --name namenode -p 9001:9001 -p 50071:50071 \
 -v /root/hadoop/nn/data/:/tmp/hdfs-root \
 -v /root/hadoop/nn/etc/:/opt/hadoop-2.9.1/etc/hadoop \
 -v /root/hadoop/nn/logs:/opt/hadoop-2.9.1/logs \
 -v /root/hadoop/nn/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
 sh -c "/opt/hadoop-2.9.1/bin/hdfs namenode -format && /opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start namenode"
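One caveat with the combined form: `hdfs namenode -format` is destructive, and on a container restart it would prompt to re-format the existing metadata. A guard that formats only on the first run is safer. This is a sketch: the `decide_action` helper is hypothetical, and the path follows hadoop.tmp.dir=/tmp/hdfs-root from core-site.xml above:

```shell
#!/bin/sh
# decide_action NAME_DIR: a formatted namenode directory contains a
# current/VERSION file, so its absence means "format first".
decide_action() {
    if [ -f "$1/current/VERSION" ]; then
        echo "start-only"
    else
        echo "format-then-start"
    fi
}

# Then run "hdfs namenode -format" and/or hadoop-daemon.sh accordingly.
decide_action /tmp/hdfs-root/dfs/name
```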

For Hadoop 3.1.1, the startup commands change to:

bin/hdfs --config .... namenode
bin/hdfs --config .... datanode

secondarynamenode

datanode

Make sure a newly added datanode's data directory is empty so it does not conflict with existing datanodes.

The hostnames in its configuration files should likewise be its own hostname or domain name, for example:

<property>
    <name>dfs.datanode.address</name>
    <value>slave2:50011</value>
</property>

Start the datanodes:

docker run -d --rm --net dn0 --ip 172.16.0.4 -h slave1 \
 --name datanode1 -p 50011:50011 -p 50021:50021 -p 50076:50076 \
 -v /root/hadoop/dn1/data/:/tmp/hdfs-root \
 -v /root/hadoop/dn1/etc/:/opt/hadoop-2.9.1/etc/hadoop \
 -v /root/hadoop/dn1/logs:/opt/hadoop-2.9.1/logs \
 -v /root/hadoop/dn1/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
 /opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start datanode
docker run -d --rm --net dn0 --ip 172.16.0.5 -h slave2 \
 --name datanode2 -p 50012:50012 -p 50022:50022 -p 50077:50077 \
 -v /root/hadoop/dn2/data/:/tmp/hdfs-root \
 -v /root/hadoop/dn2/etc/:/opt/hadoop-2.9.1/etc/hadoop \
 -v /root/hadoop/dn2/logs:/opt/hadoop-2.9.1/logs \
 -v /root/hadoop/dn2/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
 /opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start datanode
docker run -d --rm --net dn0 --ip 172.16.0.6 -h slave3 \
 --name datanode3 -p 50013:50013 -p 50023:50023 -p 50078:50078 \
 -v /root/hadoop/dn3/data/:/tmp/hdfs-root \
 -v /root/hadoop/dn3/etc/:/opt/hadoop-2.9.1/etc/hadoop \
 -v /root/hadoop/dn3/logs:/opt/hadoop-2.9.1/logs \
 -v /root/hadoop/dn3/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
 /opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start datanode
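The three commands differ only in the index: datanode i gets IP 172.16.0.(3+i) and ports 5001i/5002i/5007(5+i), so they can be generated. This sketch echoes the commands instead of executing them, and omits the volume mounts for brevity:

```shell
#!/bin/sh
# gen_datanode_cmd I: print the docker run command for datanode I
# following the IP/port pattern above (volume mounts omitted).
gen_datanode_cmd() {
    i="$1"
    ip="172.16.0.$((3 + i))"
    http="5007$((5 + i))"
    echo "docker run -d --rm --net dn0 --ip $ip -h slave$i" \
         "--name datanode$i -p 5001$i:5001$i -p 5002$i:5002$i -p $http:$http" \
         "oraclejdk1.8-hadoop2.9.1:latest" \
         "/opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start datanode"
}

for i in 1 2 3; do
    gen_datanode_cmd "$i"
done
```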

Uploading files

./hdfs dfs -fs hdfs://172.16.0.2:9001 -mkdir /user
./hdfs dfs -fs hdfs://172.16.0.2:9001 -mkdir /user/root
./hdfs dfs -fs hdfs://172.16.0.2:9001 -put hadoop input

Open http://master:50071 in a browser to see the namenode web UI.

Adding another datanode

Copy an existing datanode's configuration files and adjust them as needed.

Add the new node to its own hosts file and to the hosts files of the other nodes.

Start the new node.

Viewing block information

./hdfs fsck -conf /root/hadoop/nn/etc/hdfs-site.xml -fs hdfs://172.16.0.2:9001 /user/root/hadoop-2.9.1.tar.gz -blocks
