A detailed, from-scratch record of deploying a fully distributed Hadoop 2.7.3 cluster on three Ubuntu 16.04.1 servers, covering: creating the Ubuntu servers, setting up remote-connection tooling, configuring the Ubuntu servers, editing the Hadoop configuration files, formatting HDFS, and starting the cluster. (First published October 27, 2016)
Hostname | IP | Role |
---|---|---|
hadoop1 | 192.168.193.131 | ResourceManager/NameNode/SecondaryNameNode |
hadoop2 | 192.168.193.132 | NodeManager/DataNode |
hadoop3 | 192.168.193.133 | NodeManager/DataNode |
In the VM manager, open the File menu and choose New Virtual Machine.
The finished virtual machines look like this:
Check each server's IP address with the `ifconfig` command:

- 192.168.193.131, default hostname ubuntu
- 192.168.193.132, default hostname ubuntu
- 192.168.193.133, default hostname ubuntu

The hostnames will be changed in a later step.
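If `ifconfig` is unavailable (the net-tools package is not always preinstalled), `ip` from iproute2 does the same job:

```bash
# List IPv4 addresses of all interfaces
ip addr show | grep "inet "
```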
Open a terminal (or the command line on Ubuntu Server).

Check whether openssh-server is installed; without it, remote connections will fail:

```bash
sshd
```

If it is not installed:

```bash
sudo apt install openssh-server
```
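To confirm the SSH daemon is actually running after the install (Ubuntu 16.04 uses systemd):

```bash
sudo systemctl status ssh
```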
Connect to all three virtual machines in the same way.
On hadoop1, hadoop2, and hadoop3:
```bash
xiaolei@ubuntu:~$ sudo vi /etc/apt/sources.list
```
```
# Source mirrors are commented out by default to speed up apt update; uncomment if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-security main restricted universe multiverse
# Pre-release source; enabling it is not recommended
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-proposed main restricted universe multiverse
```
Update the package index:
```bash
xiaolei@ubuntu:~$ sudo apt update
```
Install vim:

```bash
sudo apt install vim
```
Upgrade the system (the server edition pulls few updates; desktop Ubuntu pulls considerably more, so this can be postponed):
```bash
sudo apt-get upgrade
```
Set the hostname on each machine:

```bash
# On 192.168.193.131
xiaolei@ubuntu:~$ sudo hostname hadoop1
# On 192.168.193.132
xiaolei@ubuntu:~$ sudo hostname hadoop2
# On 192.168.193.133
xiaolei@ubuntu:~$ sudo hostname hadoop3
```

Disconnect and reconnect, and the prompt will show the new hostname.
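Note that `hostname` only changes the name until the next reboot; to persist it, also write the name into /etc/hostname (shown here for the first machine; repeat with hadoop2 and hadoop3 on the others):

```bash
# Persist the hostname across reboots (run on 192.168.193.131)
echo hadoop1 | sudo tee /etc/hostname
```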
On hadoop1, hadoop2, and hadoop3:
```bash
xiaolei@hadoop1:~$ sudo vim /etc/hosts
```
```
192.168.193.131 hadoop1
192.168.193.132 hadoop2
192.168.193.133 hadoop3
```
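To verify the /etc/hosts entries, ping each peer by name from every machine; a quick sanity check, not in the original steps:

```bash
# Each host should answer one ping by name
for h in hadoop1 hadoop2 hadoop3; do ping -c 1 $h; done
```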
Check the current time; the machines are still on Pacific time:

```bash
xiaolei@hadoop1:~$ date
Wed Oct 26 02:42:08 PDT 2016
```
```bash
xiaolei@hadoop1:~$ sudo tzselect
```
Follow the prompts: select Asia, then China, then Beijing Time, and confirm with yes.
Finally, copy the Asia/Shanghai zoneinfo file over /etc/localtime:
```bash
xiaolei@hadoop1:~$ sudo cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
```
```bash
xiaolei@ubuntu:~$ date
Wed Oct 26 17:45:30 CST 2016
```
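On Ubuntu 16.04 (systemd), `timedatectl` offers a one-command alternative to the tzselect/cp steps above:

```bash
sudo timedatectl set-timezone Asia/Shanghai
```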
Extract the JDK archive and move it to /opt:

```bash
xiaolei@hadoop1:~$ tar -zxf jdk-8u111-linux-x64.tar.gz
xiaolei@hadoop1:~$ sudo mv jdk1.8.0_111 /opt/
[sudo] password for xiaolei:
```
Write an environment-variable script and make it take effect.
```bash
xiaolei@hadoop1:~$ sudo vim /etc/profile.d/jdk1.8.sh
```
Enter the following (or download the JDK environment script from my GitHub):
```sh
#!/bin/sh
# author: wangxiaolei 王小雷
# blog: http://blog.csdn.net/dream_an
# date: 20161027
export JAVA_HOME=/opt/jdk1.8.0_111
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
```
```bash
xiaolei@hadoop1:~$ source /etc/profile
```
```bash
xiaolei@hadoop1:~$ java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
```
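It is also worth confirming that the script exported the variables correctly:

```bash
echo $JAVA_HOME   # expect /opt/jdk1.8.0_111
which java        # expect /opt/jdk1.8.0_111/bin/java
```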
The JDK archive can also be copied to the other hosts with scp:
```bash
# Note the trailing ':'; the file lands in the default /home/xiaolei directory
xiaolei@hadoop1:~$ scp jdk-8u111-linux-x64.tar.gz hadoop2:
```
Command breakdown: `scp` copies files to a remote host; `-r` makes the copy recursive; then comes the local path (here the `app` directory, which holds the JDK and Hadoop packages), followed by the remote target in the form user@host:path.
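Putting that breakdown together, a full recursive copy of a local `app` directory would look like this (paths are illustrative):

```bash
# -r: recursive; app: local directory holding the JDK and Hadoop tarballs
scp -r app xiaolei@192.168.193.132:/home/xiaolei/
```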
Install ssh and rsync:

```bash
sudo apt install ssh
sudo apt install rsync
```
```bash
xiaolei@ubuntu:~$ ssh-keygen -t rsa   # just press Enter at every prompt
```
```bash
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop3
```
```bash
# No password should be required now
ssh hadoop2
```
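A short loop confirms that every host is reachable without a password:

```bash
# Each host should print its own hostname without prompting
for h in hadoop1 hadoop2 hadoop3; do ssh $h hostname; done
```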
After the configuration is finished on hadoop1, simply scp the Hadoop package to the other Linux hosts.
Fully distributed deployment of the Hadoop cluster across the Linux hosts.
```bash
xiaolei@hadoop2:~$ sudo vim /etc/profile.d/hadoop2.7.3.sh
```
Enter:
```sh
#!/bin/sh
# Author: wangxiaolei 王小雷
# Blog: http://blog.csdn.net/dream_an
# Github: https://github.com/wxiaolei
# Date: 20161027
# Path: /etc/profile.d/hadoop2.7.3.sh
export HADOOP_HOME="/opt/hadoop-2.7.3"
export PATH="$HADOOP_HOME/bin:$PATH"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
```
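Once the Hadoop package is in place under /opt (see the move step below), a quick way to confirm the script works is:

```bash
source /etc/profile
hadoop version   # should report Hadoop 2.7.3
```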
In hadoop-env.sh, set JAVA_HOME:

```sh
export JAVA_HOME=/opt/jdk1.8.0_111
```
In the slaves file, list the worker (DataNode) hosts:

```
hadoop2
hadoop3
```
core-site.xml:

```xml
<configuration>
    <!-- NameNode URI -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
    <!-- Size of read/write buffer used in SequenceFiles. -->
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <!-- Hadoop temp directory; create it yourself -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/xiaolei/hadoop/tmp</value>
    </property>
</configuration>
```
hdfs-site.xml:

```xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop1:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/xiaolei/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/xiaolei/hadoop/hdfs/data</value>
    </property>
</configuration>
```
yarn-site.xml:

```xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Configurations for ResourceManager -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop1:8088</value>
    </property>
</configuration>
```
mapred-site.xml (the 19888 entry is the JobHistory web UI, hence the webapp property name):

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop1:19888</value>
    </property>
</configuration>
```
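With the package under /opt and the environment script loaded, `hdfs getconf` can echo configuration keys back, which helps catch typos in the XML; a quick sanity check, not part of the original steps:

```bash
hdfs getconf -confKey fs.defaultFS      # expect hdfs://hadoop1:9000
hdfs getconf -confKey dfs.replication   # expect 2
```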
Copy the configured Hadoop package to the other hosts, e.g. to hadoop3 (repeat for hadoop2):

```bash
xiaolei@hadoop1:~$ scp -r hadoop-2.7.3 hadoop3:
```
On each host, move the Hadoop package into /opt/ with `sudo mv` rather than `sudo cp`, and mind the ownership and permissions.
```bash
xiaolei@hadoop1:~$ sudo mv hadoop-2.7.3 /opt/
```
On hadoop1, run:
```bash
xiaolei@hadoop1:/opt/hadoop-2.7.3$ hdfs namenode -format
```
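If the format succeeded, the NameNode metadata directory configured in hdfs-site.xml now holds a VERSION file with a fresh clusterID; a quick way to confirm:

```bash
# Path follows dfs.namenode.name.dir from hdfs-site.xml
cat /home/xiaolei/hadoop/hdfs/name/current/VERSION
```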
### 5.3.1. Run on hadoop1
```bash
xiaolei@hadoop1:/opt/hadoop-2.7.3/sbin$ ./start-all.sh
```
Check the running daemons with jps:

```bash
jps
```
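Based on the role table at the top, the daemons you should see are roughly these (PIDs omitted; exact listings may vary):

```bash
# On hadoop1
xiaolei@hadoop1:~$ jps
# expect: NameNode, SecondaryNameNode, ResourceManager, Jps
# On hadoop2 and hadoop3
xiaolei@hadoop2:~$ jps
# expect: DataNode, NodeManager, Jps
```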
Open the YARN web UI at http://192.168.193.131:8088/.
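The NameNode web UI is served on port 50070 by default in Hadoop 2.7; a headless check from any node (the curl invocation is illustrative):

```bash
# Fetch the first lines of the NameNode status page
curl -s http://192.168.193.131:50070/ | head -n 5
```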
Permission problems:
```bash
chown -R xiaolei:xiaolei hadoop-2.7.3
```
Explanation: changes the owner and group of hadoop-2.7.3 to xiaolei:xiaolei.
```bash
chmod 777 hadoop
```
Explanation: each digit of 777 is 4 (read) + 2 (write) + 1 (execute), so owner, group, and others all get read, write, and execute on the hadoop directory.
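To inspect the resulting mode and ownership, `stat` prints them directly:

```bash
# %a = octal mode, %U:%G = owner and group, %n = file name
stat -c '%a %U:%G %n' /opt/hadoop-2.7.3
```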
Check whether openssh-server is installed:

```bash
sshd
```

or

```bash
ps -e | grep ssh
```

Install openssh-server:

```bash
sudo apt install openssh-server
```
Troubleshooting:

Problem: Network error: Connection refused.

Solution: install openssh-server as described above.