Set up a Hadoop cluster in Docker? What for? Nothing in particular, it's just for fun. This post walks you through building a Dockerized Hadoop cluster.
Don't ask why I would do something this pointless; you can probably guess the answer: no girlfriend.......
Alright, enough of that. First, let's install Docker.
1. Installing Docker
sudo yum install -y docker-io
sudo wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O /usr/bin/docker
Start Docker:
sudo service docker start
Have Docker start on boot as well:
sudo chkconfig docker on
2. Pulling an image
We want a CentOS 6.5 image. Er, don't ask why 6.5; the host itself happens to be on 6.5...
sudo docker pull insaneworks/centos
Then go grab a meal and a pot of tea...... either way, it's going to be a long wait.....
.......
OK, lunch is done. Let's spin up a container.
sudo docker run -it insaneworks/centos /bin/bash
Ctrl+p Ctrl+q detaches from the container and drops you back on the host.
sudo docker ps lists the running containers.
OK, say we don't want this container any more. What do we do? Stop it:
sudo docker stop b152861ef001
and then remove it as well:
sudo docker rm b152861ef001
3. Building a Hadoop image
This is the most tedious part, but we can break it into steps. Young man, trust me: once you have this down, you will never have to worry about installing Hadoop again. Let's go!
sudo docker run -it -h master --name master insaneworks/centos /bin/bash
Inside the container, the first thing to do is install gcc (you will see later why it matters).
yum install -y gcc
Install vim:
yum install -y vim
Install lrzsz (for rz/sz file transfers):
yum install -y lrzsz
Install ssh:
yum -y install openssh-server
yum -y install openssh-clients
Note that a few sshd settings need to be changed: vim /etc/ssh/sshd_config
Uncomment PermitEmptyPasswords no
Change UsePAM to no
Uncomment PermitRootLogin yes
(These three edits are also sketched as commands right below.)
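If you would rather not open vim, here is a rough scripted version of the same three edits (the patterns assume the stock sshd_config shipped in this CentOS image; adjust them if your file differs):
sed -i 's/^#\?PermitEmptyPasswords.*/PermitEmptyPasswords no/' /etc/ssh/sshd_config
sed -i 's/^#\?UsePAM.*/UsePAM no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
grep -E 'PermitEmptyPasswords|UsePAM|PermitRootLogin' /etc/ssh/sshd_config   # eyeball the result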
Start sshd:
service sshd start
Next, set up ssh keys so the nodes can log into each other without a password:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Once that is done, try ssh'ing into ourselves:
ssh master
Great, works fine.
Next, install Java.
Upload the Java rpm into the container with rz. Time for another pot of tea........
rpm -ivh jdk-7u75-linux-x64.rpm
Set the environment variables (these go in /etc/profile):
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL
export JAVA_HOME=/usr/java/jdk1.7.0_75
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source /etc/profile
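A quick sanity check never hurts (just a sketch; the paths assume the rpm installed under /usr/java as above):
java -version      # should report 1.7.0_75
echo $JAVA_HOME    # should print /usr/java/jdk1.7.0_75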
Now it is Hadoop's turn. Exciting, right? First install tar:
yum install -y tar
Same as before, use rz to upload the Hadoop tarball into the container. A bit of foreshadowing: something really annoying is coming later, you will see.
Hmm...... such a long wait..........
Dum dum da da, Hulu-wa....... dum dum da da, Hulu-wa.......
OK, unpack it:
tar zxvf hadoop-2.6.0.tar.gz
Perfect!
Add the environment variables (again in /etc/profile):
export HADOOP_HOME=/home/hadoop/hadoop-2.6.0
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
Then there is one more thing to do. It looks like it should not be necessary, but I have tried this n times: skip it and Hadoop simply will not start.
Edit hadoop-env.sh and yarn-env.sh and add the following line at the very top (do not leave this out):
export JAVA_HOME=/usr/java/jdk1.7.0_75
Both files live in etc/hadoop/ under the unpacked Hadoop directory (a scripted version of this edit is sketched below).
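The same edit without vim, as a rough sketch (paths assume Hadoop is unpacked at /home/hadoop/hadoop-2.6.0, matching HADOOP_HOME above):
cd /home/hadoop/hadoop-2.6.0/etc/hadoop
sed -i '1i export JAVA_HOME=/usr/java/jdk1.7.0_75' hadoop-env.sh yarn-env.sh   # prepend the export to both files
head -1 hadoop-env.sh yarn-env.sh                                              # confirm the first line of each file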
Now for the configuration files.
Edit core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131702</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/songfy/hadoop-2.6.0/tmp</value>
</property>
</configuration>
Edit hdfs-site.xml:
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/songfy/hadoop-2.6.0/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/songfy/hadoop-2.6.0/dfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>
Edit mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
Edit yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>
</configuration>
Add the slave hostnames to the slaves file (a one-liner equivalent follows the list):
slave1
slave2
slave3
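The same thing as a single command, just a sketch (the path assumes the /home/hadoop/hadoop-2.6.0 layout used here):
cat > /home/hadoop/hadoop-2.6.0/etc/hadoop/slaves <<EOF
slave1
slave2
slave3
EOF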
It looks like everything is in place. Not so fast, young man, here comes the nasty surprise:
ldd /home/hadoop/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0
and you will see:
/home/hadoop/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /home/hadoop/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0)
linux-vdso.so.1 => (0x00007fff24dbc000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007ff8c6371000)
libc.so.6 => /lib64/libc.so.6 (0x00007ff8c5fdc000)
/lib64/ld-linux-x86-64.so.2 (0x00007ff8c679b000)
Life is harsh, life is cold. A kid once asked me about this very problem...... I ignored him. Now let me kill it off for good.
You probably see now why I installed gcc at the very start: CentOS 6 ships glibc 2.12, while the prebuilt libhadoop.so was linked against glibc 2.14, so we build glibc 2.14 ourselves.
yum install -y wget
wget http://ftp.gnu.org/gnu/glibc/glibc-2.14.tar.gz
tar zxvf glibc-2.14.tar.gz
cd glibc-2.14
mkdir build
cd build
../configure --prefix=/usr/local/glibc-2.14
make
make install
ln -sf /usr/local/glibc-2.14/lib/libc-2.14.so /lib64/libc.so.6
Now run ldd /home/hadoop/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 again
and it comes back clean:
linux-vdso.so.1 => (0x00007fff72b7c000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007fb996ce9000)
libc.so.6 => /lib64/libc.so.6 (0x00007fb99695c000)
/lib64/ld-linux-x86-64.so.2 (0x00007fb997113000)
With that, the image is ready to commit:
docker commit master songfy/hadoop
You can check it with docker images:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
songfy/hadoop latest 311318c0a407 42 seconds ago 1.781 GB
insaneworks/centos latest 9d29fe7b2e52 9 days ago 121.1 MB
Now let's bring up the Hadoop cluster.
4. Starting the Hadoop cluster
docker rm master
sudo docker run -it -p 50070:50070 -p 19888:19888 -p 8088:8088 -h master --name master songfy/hadoop /bin/bash
sudo docker run -it -h slave1 --name slave1 songfy/hadoop /bin/bash
sudo docker run -it -h slave2 --name slave2 songfy/hadoop /bin/bash
sudo docker run -it -h slave3 --name slave3 songfy/hadoop /bin/bash
Attach to each node and run:
source /etc/profile
service sshd start
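From the host, that looks roughly like this (docker attach drops you into the container; run the two commands above inside, then detach again with Ctrl+p Ctrl+q):
sudo docker attach master
sudo docker attach slave1
sudo docker attach slave2
sudo docker attach slave3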
Next, every machine needs its hosts file configured.
docker inspect --format='{{.NetworkSettings.IPAddress}}' master
This command prints a container's IP; in my case the entries came out as:
172.17.0.4 master
172.17.0.5 slave1
172.17.0.6 slave2
172.17.0.7 slave3
Append these entries to /etc/hosts and use scp to push the file to every node (a sketch follows).
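One way to do it, sketched out and relying on the password-less root ssh set up earlier:
# on the host: print one hosts entry per container
for h in master slave1 slave2 slave3; do
  echo "$(sudo docker inspect --format='{{.NetworkSettings.IPAddress}}' $h) $h"
done
# inside master, after appending those lines to /etc/hosts:
for h in slave1 slave2 slave3; do scp /etc/hosts root@$h:/etc/hosts; done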
Alright, we can finally start Hadoop.
hadoop namenode -format
/home/hadoop/hadoop-2.6.0/sbin/start-dfs.sh
/home/hadoop/hadoop-2.6.0/sbin/start-yarn.sh
Check with jps: everything is up.
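A couple of extra sanity checks, just as a sketch:
jps                                                # on master, expect NameNode, SecondaryNameNode and ResourceManager
hdfs dfsadmin -report | grep -i 'live datanodes'   # expect 3 live datanodes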
Now let's try a couple of simple HDFS operations.
hadoop fs -mkdir /input
hadoop fs -ls /
drwxr-xr-x - root supergroup 0 2015-08-09 09:09 /input
Next, let's run the famous wordcount example.
hadoop fs -put /home/hadoop/hadoop-2.6.0/etc/hadoop/* /input/
hadoop jar /home/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount /input/ /output/wordcount/
Don't assume it succeeds on the first try. The job actually did not run at all; a look at the logs shows:
2015-08-09 09:23:23,481 WARN org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Node : slave1:41978 does not have sufficient resource for request : {Priority: 0, Capability: <memory:2048, vCores:1>, # Containers: 1, Location: *, Relax Locality: true} node total capability : <memory:1024, vCores:8>
Right, the NodeManager does not have enough memory (it offers 1024 MB while the container request asks for 2048 MB), so give it 2 GB.
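One way to do that, as a rough sketch: raise yarn.nodemanager.resource.memory-mb from 1024 to 2048 in yarn-site.xml on every node and restart YARN (the sed assumes 1024 appears only in that one property):
sed -i 's#<value>1024</value>#<value>2048</value>#' /home/hadoop/hadoop-2.6.0/etc/hadoop/yarn-site.xml
/home/hadoop/hadoop-2.6.0/sbin/stop-yarn.sh
/home/hadoop/hadoop-2.6.0/sbin/start-yarn.sh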
We also discover that the mighty Hadoop runs painfully slowly here........ With plenty of real machines it would be fast; with everything crammed into Docker containers on one box, forget about speed.
Of course, I have also tried spreading Hadoop over multiple hosts, but lacking that many physical machines it ended up as a Docker Hadoop cluster running on several VMware virtual machines.
Call it cloud Hadoop on VMs........ In practice it was good for almost nothing, apart from bringing one of the host VMs to its knees during the word count.......
Anyway, the results are in. Let's take a look:
policy 3
port 5
ports 2
ports. 2
potential 2
preferred 3
prefix. 1
present, 1
principal 4
principal. 1
printed 1
priorities. 1
priority 1
privileged 2
privileges 1
privileges. 1
properties 6
property 11
protocol 6
protocol, 2
Not bad.... fun, right?....... Next time let's pick another interesting topic, say Hive or Storm...... Then again, I am not to be trusted;
it might just as well turn into something algorithmic like LDA or word2vec, or even CUDA heterogeneous computing. This blogger is a lunatic, who knows.