Hostname | IP | Installed software | Processes |
---|---|---|---|
hadoop01 | 192.168.2.101 | JDK, Hadoop | NN, DFSZKFailoverController |
hadoop02 | 192.168.2.102 | JDK, Hadoop | NN, DFSZKFailoverController |
hadoop03 | 192.168.2.103 | JDK, Hadoop | RM |
hadoop04 | 192.168.2.104 | JDK, Hadoop, ZooKeeper | DN, NM, JournalNode |
hadoop05 | 192.168.2.105 | JDK, Hadoop, ZooKeeper | DN, NM, JournalNode |
hadoop06 | 192.168.2.106 | JDK, Hadoop, ZooKeeper | DN, NM, JournalNode |
Hostnames:
hadoop01, hadoop02, hadoop03, hadoop04, hadoop05, hadoop06
If you don't know how to set these up, see: http://blog.csdn.net/uq_jin/article/details/51355124
Username: hadoop
Password: 12345678
Map each machine's hostname to its IP address
```
vi /etc/hosts
```
Add the following entries:
```
192.168.2.101 hadoop01
192.168.2.102 hadoop02
192.168.2.103 hadoop03
192.168.2.104 hadoop04
192.168.2.105 hadoop05
192.168.2.106 hadoop06
```
Copy /etc/hosts to the other hosts
```
scp /etc/hosts hadoop02:/etc/
scp /etc/hosts hadoop03:/etc/
scp /etc/hosts hadoop04:/etc/
scp /etc/hosts hadoop05:/etc/
scp /etc/hosts hadoop06:/etc/
```
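To confirm the mappings took effect, a quick sanity check (any of the hostnames will do):

```
# Each hostname should resolve to its 192.168.2.x address
ping -c 1 hadoop02
ping -c 1 hadoop06
```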
```
# Stop the firewall
sudo systemctl stop firewalld.service
# Disable it at boot
sudo systemctl disable firewalld.service
```
It's common practice to create a dedicated hadoop user rather than building the cluster as root.
Every virtual machine should have this hadoop user.
```
# First create the group cloud
groupadd cloud
# Create the user hadoop and add it to group cloud
useradd -g cloud hadoop
# Set hadoop's password
passwd hadoop
```
1. Check the permissions on /etc/sudoers
```
ls -l /etc/sudoers
```
You can see it is read-only; to edit it we must first change the file's permissions.
2. Change the permissions
```
chmod 777 /etc/sudoers
```
3. Give the hadoop user root privileges
```
vim /etc/sudoers
```
Below the root entry, add a matching line for the hadoop user, as shown below.
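A sketch of the relevant lines (the surrounding sudoers content varies by distribution):

```
root    ALL=(ALL)       ALL
hadoop  ALL=(ALL)       ALL
```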
4. Restore the permissions
```
chmod 440 /etc/sudoers
```
Copy /etc/sudoers to the other hosts
```
scp /etc/sudoers hadoop02:/etc/
scp /etc/sudoers hadoop03:/etc/
scp /etc/sudoers hadoop04:/etc/
scp /etc/sudoers hadoop05:/etc/
scp /etc/sudoers hadoop06:/etc/
```
Switch to the hadoop user
```
su hadoop
```
Go to the current user's home directory
```
cd ~
```
List all files
```
ls -la
```
Enter the .ssh directory
```
cd .ssh
```
Generate the public and private keys (press Enter four times)
```
ssh-keygen -t rsa
```
Running this command produces two files: id_rsa (private key) and id_rsa.pub (public key).
Copy the public key to the machines you want passwordless login to
```
ssh-copy-id 192.168.2.101
ssh-copy-id 192.168.2.102
ssh-copy-id 192.168.2.103
ssh-copy-id 192.168.2.104
ssh-copy-id 192.168.2.105
ssh-copy-id 192.168.2.106
```
This creates a file named authorized_keys under .ssh/ on the 192.168.2.102 host; from then on, `ssh 192.168.2.102` logs you in directly without a password.
Set up passwordless login for the other machines in the same way.
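A quick way to verify the passwordless login (assuming the key has been copied to hadoop02):

```
# Should print "hadoop02" without prompting for a password
ssh hadoop02 hostname
```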
Create a cloud folder under /home/hadoop/ to install the software into, plus a soft-install folder to hold the installation packages, e.g.:
```
cd /home/hadoop
mkdir cloud
mkdir soft-install
```
Upload the software we need into soft-install.
Extract it
```
tar -zxvf jdk-8u91-linux-x64.tar.gz -C /home/hadoop/cloud/
```
Configure the environment variables
```
# Edit the config file
sudo vi /etc/profile
# Append at the end
export JAVA_HOME=/home/hadoop/cloud/jdk1.8.0_91
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
# Reload the config file
source /etc/profile
```
Copy the JDK and the environment variables to the other hosts.
You can simply copy the whole cloud folder over:
```
scp -r cloud/ hadoop02:/home/hadoop/
scp -r cloud/ hadoop03:/home/hadoop/
scp -r cloud/ hadoop04:/home/hadoop/
scp -r cloud/ hadoop05:/home/hadoop/
scp -r cloud/ hadoop06:/home/hadoop/
```
Copy the environment variables to the other hosts
```
sudo scp /etc/profile hadoop02:/etc/
sudo scp /etc/profile hadoop03:/etc/
sudo scp /etc/profile hadoop04:/etc/
sudo scp /etc/profile hadoop05:/etc/
sudo scp /etc/profile hadoop06:/etc/
```
Reload the environment variables
```
source /etc/profile
```
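To confirm the JDK is picked up after reloading (a simple check on each host):

```
# Both should resolve to /home/hadoop/cloud/jdk1.8.0_91
which java
java -version
```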
If you're new to ZooKeeper, see: https://www.ibm.com/developerworks/cn/opensource/os-cn-zookeeper/
Download from: http://mirrors.hust.edu.cn/apache/zookeeper/
We installed the JDK earlier; now we install ZooKeeper on hadoop04, hadoop05 and hadoop06.
1. Extract
```
tar -zxvf zookeeper-3.4.8.tar.gz -C /home/hadoop/cloud/
```
2. Edit ZooKeeper's default configuration, conf/zoo_sample.cfg
```
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg
```
Configure it as follows:
```
# Point dataDir at our data directory
dataDir=/home/hadoop/cloud/zookeeper-3.4.8/data
# And append at the end
server.1=hadoop04:2888:3888
server.2=hadoop05:2888:3888
server.3=hadoop06:2888:3888
```
3. Create the data folder under /home/hadoop/cloud/zookeeper-3.4.8/
```
mkdir data
```
4. In the data folder, create a myid file identifying this machine
```
vim myid
```
The ids are 1 for hadoop04, 2 for hadoop05 and 3 for hadoop06; we'll copy everything over in the next step.
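Equivalently, instead of editing with vim, each myid can be written in one command:

```
# On hadoop04
echo 1 > /home/hadoop/cloud/zookeeper-3.4.8/data/myid
# On hadoop05
echo 2 > /home/hadoop/cloud/zookeeper-3.4.8/data/myid
# On hadoop06
echo 3 > /home/hadoop/cloud/zookeeper-3.4.8/data/myid
```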
5. Copy zookeeper-3.4.8 to the hadoop04/05/06 machines and fix up each machine's myid accordingly
```
scp -r zookeeper-3.4.8/ hadoop04:/home/hadoop/cloud/
scp -r zookeeper-3.4.8/ hadoop05:/home/hadoop/cloud/
scp -r zookeeper-3.4.8/ hadoop06:/home/hadoop/cloud/
```
Start ZooKeeper on hadoop04, hadoop05 and hadoop06
```
# Run the startup script in /home/hadoop/cloud/zookeeper-3.4.8/bin
./zkServer.sh start
```
Check ZooKeeper's status
```
./zkServer.sh status
```
Run it from the bin/ directory; if the status reports a mode (leader or follower), the setup succeeded (at least two nodes must be running at this point).
In fact, you can find the leader and stop it; you'll see ZooKeeper elect a new leader immediately, as sketched below.
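A rough sketch of the experiment (which node is the leader depends on the election):

```
# On each node, check its role; exactly one reports "Mode: leader"
./zkServer.sh status
# Stop the current leader...
./zkServer.sh stop
# ...then re-run status on the remaining nodes: a new leader has been elected
./zkServer.sh status
```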
Extract
```
tar -zxvf hadoop-2.7.2.tar.gz -C /home/hadoop/cloud/
```
Configure the environment variables
```
# Edit the config file
sudo vi /etc/profile
# Append at the end
export HADOOP_HOME=/home/hadoop/cloud/hadoop-2.7.2
export PATH=$PATH:$HADOOP_HOME/bin
# Reload the config file
source /etc/profile
```
Test:
```
which hadoop
```
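You can also confirm the version resolves correctly (it should report 2.7.2):

```
hadoop version
```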
hadoop-env.sh
```
# The java implementation to use.
export JAVA_HOME=/home/hadoop/cloud/jdk1.8.0_91
```
core-site.xml
```
<configuration>
    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/cloud/hadoop-2.7.2/tmp</value>
    </property>
    <!-- Set the HDFS nameservice to ns1 -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    <!-- ZooKeeper addresses, comma-separated -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop04:2181,hadoop05:2181,hadoop06:2181</value>
    </property>
</configuration>
```
hdfs-site.xml
```
<configuration>
    <!-- dfs.nameservices: logical names of the nameservices, comma-separated -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- ns1 has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop01:8020</value>
    </property>
    <!-- HTTP address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop01:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop02:8020</value>
    </property>
    <!-- HTTP address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop02:50070</value>
    </property>
    <!-- JournalNode addresses holding the NameNode metadata; must be an odd number, at least three -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop04:8485;hadoop05:8485;hadoop06:8485/ns1</value>
    </property>
    <!-- Local path where the JournalNode keeps its state; an absolute path on the Linux server -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/cloud/hadoop-2.7.2/journal/</value>
    </property>
    <!-- Enable automatic failover when a NameNode fails -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- Failover proxy provider implementation -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods, one per line -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- sshfence requires passwordless SSH -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- sshfence timeout: 30 seconds -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
```
mapred-site.xml.template
Needs renaming first: `mv mapred-site.xml.template mapred-site.xml`
```
<configuration>
    <!-- Tell the framework that MR runs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```
yarn-site.xml
```
<configuration>
    <!-- Address of the YARN ResourceManager (RM) -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop03</value>
    </property>
    <!-- Reducers fetch data via mapreduce_shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```
slaves
```
hadoop04
hadoop05
hadoop06
```
Also create a tmp folder under hadoop-2.7.2:
```
mkdir tmp
```
Copy hadoop-2.7.2 to the other hosts
```
scp -r hadoop-2.7.2 hadoop02:/home/hadoop/cloud/
scp -r hadoop-2.7.2 hadoop03:/home/hadoop/cloud/
scp -r hadoop-2.7.2 hadoop04:/home/hadoop/cloud/
scp -r hadoop-2.7.2 hadoop05:/home/hadoop/cloud/
scp -r hadoop-2.7.2 hadoop06:/home/hadoop/cloud/
```
Copy the environment variables to the other hosts
```
sudo scp /etc/profile hadoop02:/etc/
sudo scp /etc/profile hadoop03:/etc/
sudo scp /etc/profile hadoop04:/etc/
sudo scp /etc/profile hadoop05:/etc/
sudo scp /etc/profile hadoop06:/etc/
```
Reload the environment variables
```
source /etc/profile
```
Mind the startup order.
1. Start ZooKeeper (on hadoop04, 05, 06)
2. Start the JournalNodes (on hadoop04, 05, 06)
```
# Run from the hadoop-2.7.2 directory
./sbin/hadoop-daemon.sh start journalnode
```
3. Format HDFS (the NameNode); only needed the first time (on either hadoop01 or hadoop02). Copy-pasting the command can fail because the hyphen easily gets converted to a dash character, so it's safest to type it by hand:
```
./bin/hdfs namenode -format
```
Then copy the /home/hadoop/cloud/hadoop-2.7.2/tmp folder to the other NameNode's directory:
```
scp -r /home/hadoop/cloud/hadoop-2.7.2/tmp hadoop@hadoop02:/home/hadoop/cloud/hadoop-2.7.2/
```
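Alternatively, instead of copying tmp by hand, Hadoop ships a bootstrap command; run it on hadoop02 after the NameNode on hadoop01 has been formatted and started (either approach works):

```
# On hadoop02: pull the formatted metadata from the active NameNode
./bin/hdfs namenode -bootstrapStandby
```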
4. Format the HA state in ZooKeeper (on hadoop01 only; again, type the command by hand rather than pasting it):
```
./bin/hdfs zkfc -formatZK
```
5. Start zkfc to monitor the NameNodes' state (on hadoop01 and hadoop02)
```
./sbin/hadoop-daemon.sh start zkfc
```
6. Start HDFS (on hadoop01 only)
```
# Run from the hadoop-2.7.2 directory
./sbin/start-dfs.sh
```
7. Start YARN (MR) (on hadoop03 only)
```
# Run from the hadoop-2.7.2 directory
./sbin/start-yarn.sh
```
If everything above started without errors, each virtual machine should now be running its own processes, just as we planned at the start.
Check the Java processes on the local machine
```
jps
```
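If everything matches the plan, the output looks roughly like this (the pids are illustrative and will differ):

```
# On hadoop01
2481 NameNode
2690 DFSZKFailoverController
2903 Jps

# On hadoop04
2133 QuorumPeerMain
2291 JournalNode
2388 DataNode
2476 NodeManager
2671 Jps
```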
Test it in a browser:
```
http://192.168.2.101:50070/
```
You can see that hadoop01's NameNode is in standby state, so hadoop02's should be active.
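The NameNode states can also be queried from the command line, using the nn1/nn2 ids configured in hdfs-site.xml:

```
# Prints "active" or "standby" for each NameNode
./bin/hdfs haadmin -getServiceState nn1
./bin/hdfs haadmin -getServiceState nn2
```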
Check YARN's status
```
http://192.168.2.103:8088/
```
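The registered NodeManagers can be listed from the command line as well (should show hadoop04, hadoop05 and hadoop06):

```
yarn node -list
```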