System preparation:
System installation and configuration [omitted]. See: http://www.osyunwei.com/archives/7702.html
IP configuration:
# cat /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=dadee176-cc84-43f4-9ea9-e30a30ca3abf
DEVICE=eno16777736
ONBOOT=yes
#20160708 add
IPADDR0=192.168.128.130
PREFIX0=24
GATEWAY0=192.168.128.1
#DNS1=
#DNS2=
DNS configuration
# cat /etc/resolv.conf
# Generated by NetworkManager
# No nameservers found; try putting DNS servers into your
# ifcfg files in /etc/sysconfig/network-scripts like so:
#
# DNS1=xxx.xxx.xxx.xxx
# DNS2=xxx.xxx.xxx.xxx
# DOMAIN=lab.foo.com bar.foo.com
nameserver 192.168.128.1
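As the generated resolv.conf suggests, the DNS servers can be written into the ifcfg file so that NetworkManager keeps them. A minimal sketch, assuming the interface file shown above and 192.168.128.1 as the DNS server:

echo 'DNS1=192.168.128.1' >> /etc/sysconfig/network-scripts/ifcfg-eno16777736
systemctl restart network    # apply the change; resolv.conf is regenerated afterwards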
Local yum configuration
# Mount the ISO file
# mkdir -p /media/cdrom
# vi /etc/fstab
'''
/opt/rhel-server-7.2-x86_64-dvd.iso /media/cdrom iso9660 defaults,ro,loop 0 0
'''
# mount -a
# df -lh
'''
/dev/loop0 3.8G 3.8G 0 100% /media/cdrom
'''
# vi /etc/yum.repos.d/rhel-media.repo
[rhel-media]
name=Red Hat Enterprise Linux 7.2
baseurl=file:///media/cdrom
enabled=1
gpgcheck=1
gpgkey=file:///media/cdrom/RPM-GPG-KEY-redhat-release
# Clean the cache
# yum clean all
# Cache the repository's package metadata locally to speed up searching for and installing packages
# yum makecache
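To confirm that the local repository is usable (a quick check, not part of the original steps):

yum repolist enabled    # the rhel-media repo should appear with a non-zero package count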
Hostname modification
hostnamectl --static set-hostname rhels7-docker
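The change can be verified with hostnamectl (a quick check):

hostnamectl status    # "Static hostname" should now read rhels7-docker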
1. Install docker
Because access to the official docker site is slow from inside China, the domestic accelerated mirror daocloud.io is used here.
curl -sSL https://get.daocloud.io/docker | sh
The installation process will create a user group named docker.
Check the docker version
docker version
Start docker and check its status
systemctl start docker.service
systemctl status docker.service
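If docker should also come up automatically after a reboot (optional, not part of the original steps), it can be enabled as well:

systemctl enable docker.service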
Display system information (prerequisite: the docker service is running)
docker info
2. Pull the centos image
docker pull daocloud.io/library/centos:centos7
3. Run the image
1) First, list the local images
docker images
Note on the listing: centos is the image created after hadoop is installed (later in this article), while daocloud.io/library/centos is the image just pulled; the following steps are based on the latter.
2) Start
docker run -h master --dns=192.168.128.1 -it daocloud.io/library/centos:centos7
Notes:
-h master              # specify the hostname
--dns=192.168.128.1    # varies by environment; a wrong value will break later software installation
-it                    # start in interactive mode
See docker run --help for details.
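Once inside the container, the effect of the -h and --dns options can be verified (a quick check):

hostname               # should print "master"
cat /etc/resolv.conf   # should contain "nameserver 192.168.128.1"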
4. Install required software and configure it
1) Install basic software
yum install -y wget vim openssh-server openssh-clients net-tools
Note: the netstat and ifconfig commands are provided by the net-tools package.
The installation does not start the sshd service. Since the container is managed by docker, some system commands cannot be used, so sshd has to be started with the following command:
/usr/sbin/sshd -D &
Note: the sshd service is required by hadoop. A script is used here so that sshd runs as soon as the container starts.
vi /root/run.sh

Contents:
#!/bin/bash
/usr/sbin/sshd -D

Make it executable:
chmod +x /root/run.sh
2) Network configuration
Docker containers communicate with the outside world through a bridge. To avoid having to specify the DNS every time a container is started, modify the default DNS.
2.1 Modify the host configuration file /etc/default/docker
DOCKER_NETWORK_OPTIONS="--dns=192.168.128.1"
2.2 Modify the host configuration file /lib/systemd/system/docker.service
[Service]
EnvironmentFile=-/etc/default/docker
ExecStart=/usr/bin/docker daemon -H fd:// $OPTIONS \
          $DOCKER_NETWORK_OPTIONS
For details, see: http://docs.master.dockerproject.org/engine/admin/systemd/
Restart the docker service on the host
systemctl daemon-reload
systemctl restart docker.service
# Use this command to check whether the new docker startup options took effect
ps -ef | grep docker
root 2415 1 0 14:41 ? 00:00:10 /usr/bin/docker daemon -H fd:// --dns=192.168.128.1
root 2419 2415 0 14:41 ? 00:00:01 docker-containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime docker-runc --start-timeout 2m
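To confirm that newly started containers now pick up the default DNS without --dns (a quick check, assuming the image pulled above):

docker run --rm daocloud.io/library/centos:centos7 cat /etc/resolv.conf
# should print "nameserver 192.168.128.1"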
3) Install JDK 8
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.tar.gz
mkdir /usr/java
tar zxf jdk-8u91-linux-x64.tar.gz -C /usr/java
echo 'export JAVA_HOME=/usr/java/jdk1.8.0_91' >> /etc/bashrc
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/bashrc
echo 'export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar' >> /etc/bashrc
source /etc/bashrc
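A quick check that the JDK and the environment variables are in place (not part of the original steps):

java -version    # should report java version "1.8.0_91"
echo $JAVA_HOME  # should print /usr/java/jdk1.8.0_91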
4) Install hadoop
4.1 Install hadoop and configure environment variables
wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
mkdir /usr/local/hadoop
tar zxf hadoop-2.7.2.tar.gz -C /usr/local/hadoop
echo 'export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.2' >> /etc/bashrc
echo 'export HADOOP_CONFIG_HOME=$HADOOP_HOME/etc/hadoop' >> /etc/bashrc
echo 'export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin' >> /etc/bashrc
source /etc/bashrc
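Likewise, a quick check that hadoop and its environment variables are in place:

hadoop version              # should report Hadoop 2.7.2
echo $HADOOP_CONFIG_HOME    # should print /usr/local/hadoop/hadoop-2.7.2/etc/hadoop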
4.2 Configure hadoop
Create the following directories under the HADOOP_HOME directory, then switch to the HADOOP_CONFIG_HOME directory, for example:
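A sketch of these two steps, with the directory names (tmp, namenode, datanode) taken from the paths used in the configuration files below:

cd $HADOOP_HOME
mkdir tmp namenode datanode
cd $HADOOP_CONFIG_HOME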
cp mapred-site.xml.template mapred-site.xml
Configure core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/hadoop-2.7.2/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://master:9000</value>
        <final>true</final>
        <description>The name of the default file system. A URI whose scheme
        and authority determine the FileSystem implementation. The uri's scheme
        determines the config property (fs.SCHEME.impl) naming the FileSystem
        implementation class. The uri's authority is used to determine the host,
        port, etc. for a filesystem.</description>
    </property>
</configuration>
Configure hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <final>true</final>
        <description>Default block replication.
        The actual number of replications can be specified when the file is created.
        The default is used if replication is not specified at create time.
        </description>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/hadoop-2.7.2/namenode</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/hadoop-2.7.2/datanode</value>
        <final>true</final>
    </property>
</configuration>
Configure mapred-site.xml
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
        <description>The host and port that the MapReduce job tracker runs at.
        If "local", then jobs are run in-process as a single map and reduce task.</description>
    </property>
</configuration>
4.3 Configure passwordless SSH login
ssh-keygen -q -t rsa -b 2048 -f /etc/ssh/ssh_host_rsa_key -N ''
ssh-keygen -q -t ecdsa -f /etc/ssh/ssh_host_ecdsa_key -N ''
ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
Then modify the /etc/ssh/sshd_config file in the master container:
Change UsePAM yes to UsePAM no
Change UsePrivilegeSeparation sandbox to UsePrivilegeSeparation no
[root@b5926410fe60 /]# sed -i "s/#UsePrivilegeSeparation.*/UsePrivilegeSeparation no/g" /etc/ssh/sshd_config
[root@b5926410fe60 /]# sed -i "s/UsePAM.*/UsePAM no/g" /etc/ssh/sshd_config

After the changes, restart sshd:
[root@b5926410fe60 /]# /usr/sbin/sshd -D
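The host keys above only allow sshd to start. For the actual passwordless login between containers that this step's title implies, a user key pair and an authorized_keys entry are also needed before the image is committed; a minimal sketch (not shown in the original), assuming root is used on every node:

mkdir -p /root/.ssh
ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa            # passphrase-less key pair for root
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
# Because master, slave1 and slave2 are all started from the same committed image,
# they share this key pair and can ssh to each other without a password.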
4.4 Change the container's root password
passwd root
5) Save this docker container
docker commit -m "hadoop installed" 690a57e02578 centos:hadoop
Remove unneeded containers
docker rm <container_id>
Note: 690a57e02578 is the container_id, which will differ on your machine; it can be found with docker ps.
After the commit completes, the local images can be listed with docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
centos hadoop b01079411e19 45 seconds ago 1.434 GB
daocloud.io/library/centos centos7 ea08fb8c4ba5 7 days ago 196.8 MB
5. Start hadoop
Note: the key factors are (1) the sshd service, and (2) the /etc/hosts mapping to the master node.
Modify the container's /root/run.sh. Container IPs are assigned starting from 172.17.0.2 by default; with 3 nodes and the master node started last, the master IP can be determined to be 172.17.0.4.
#!/bin/bash
echo '172.17.0.4 master' >> /etc/hosts
/usr/sbin/sshd -D
Also: on the host you can run docker inspect <CONTAINER_ID> with the container_id to view detailed container information (JSON output) such as IP, MAC and hostname:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' 690a57e02578
docker inspect -f '{{ .NetworkSettings.MacAddress }}' 690a57e02578
docker inspect -f '{{ .Config.Hostname }}' 690a57e02578
1) Start 3 containers from the new image (centos:hadoop)
docker run -d -p 10012:22 --name slave1 centos:hadoop /root/run.sh
docker run -d -p 10022:22 --name slave2 centos:hadoop /root/run.sh
docker run -d -p 10002:22 --name master -h master -P --link slave1:slave1 --link slave2:slave2 centos:hadoop /root/run.sh
Note: the -p option maps each container's port 22 to a host port, so the 3 containers can be reached locally via ssh to host ports 10002/10012/10022.
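For example (a sketch, using the host IP from earlier; the password is the root password set in step 4.4), the master container can be reached from the local machine with:

ssh root@192.168.128.130 -p 10002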
2) Start hadoop
2.1 Attach to the master container
docker exec -it 175c3129e021 /bin/bash
2.2 Format the namenode
hdfs namenode -format
The following output indicates that the format succeeded:
16/07/09 08:12:36 INFO common.Storage: Storage directory /usr/local/hadoop/hadoop-2.7.2/namenode has been successfully formatted.
16/07/09 08:12:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/07/09 08:12:36 INFO util.ExitUtil: Exiting with status 0
2.3 Start hadoop
Because the environment variables are already configured, after entering the container you can run directly:
start-all.sh
Use jps to check the processes
# jps
163 NameNode
675 NodeManager
1316 Jps
581 ResourceManager
279 DataNode
429 SecondaryNameNode
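To confirm that HDFS is up across the cluster (a quick check, not shown in the original), a report can be requested on the master:

hdfs dfsadmin -report    # shows configured capacity and the live datanodes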
6. Configure iptables on the host for port forwarding
Forwarding: container port 50070 <---> host port 50070
Run on the host:
iptables -t nat -A PREROUTING -d 192.168.128.130 -p tcp --dport 50070 -j DNAT --to-destination 172.17.0.4:50070
Note: 192.168.128.130 is the host, 172.17.0.4 is the master container.
At this point, the hadoop cluster running in containers on the VM (i.e. the host) can be accessed from the local machine.
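For example (a sketch), the NAT rule can be listed on the host and the NameNode web UI fetched from the local machine:

# on the host
iptables -t nat -L PREROUTING -n      # the DNAT rule for port 50070 should be listed
# from the local machine (or open the URL in a browser)
curl -I http://192.168.128.130:50070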
Reference (with corrections): https://www.gaoyuexiang.cn/archives/389
Next step: work through docker networking in detail.