Hadoop Cluster Setup: HA High Availability (Manual Failover Mode) (Part 4)

Steps and Cluster Plan

1) Back up the fully-distributed (full) configuration

2) Modify the full configuration into an HA configuration

3) First-time HA startup

4) Regular HA startup

5) Run wordcount

Cluster plan:

CentOS VMs: node-001, node-002, node-003, node-004

node-001: Active NN, ResourceManager

node-002: Standby NN, DN, JournalNode, NodeManager

node-003: DN, JournalNode, NodeManager

node-004: DN, JournalNode, NodeManager

1. Back up the full fully-distributed configuration

cp -r hadoop/ hadoop-full

2. Change the configuration to HA (YARN deployment)

The main files to modify are core-site.xml, hdfs-site.xml, and yarn-site.xml.

1. Edit core-site.xml

<configuration>

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

</configuration>
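Note that fs.defaultFS now points at the logical nameservice `mycluster` rather than a single host and port. Clients resolve that logical name to real NameNode addresses via the HA properties defined in hdfs-site.xml. A minimal Python sketch of that resolution, using the property values from this tutorial (the real logic lives inside the HDFS client library; this is only an illustration):

```python
# Sketch: resolve the logical URI hdfs://mycluster to candidate NameNode
# RPC addresses using the real Hadoop property names from this tutorial.
conf = {
    "fs.defaultFS": "hdfs://mycluster",
    "dfs.ha.namenodes.mycluster": "nn1,nn2",
    "dfs.namenode.rpc-address.mycluster.nn1": "node-001:8020",
    "dfs.namenode.rpc-address.mycluster.nn2": "node-002:8020",
}

def resolve_namenodes(conf):
    """Return all candidate NameNode RPC addresses for the nameservice."""
    ns = conf["fs.defaultFS"].split("://", 1)[1]          # "mycluster"
    nn_ids = conf["dfs.ha.namenodes." + ns].split(",")    # ["nn1", "nn2"]
    return [conf["dfs.namenode.rpc-address.%s.%s" % (ns, nn)] for nn in nn_ids]

print(resolve_namenodes(conf))
# ['node-001:8020', 'node-002:8020']
```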

2. Edit hdfs-site.xml

<configuration>
<property>
   <name>dfs.replication</name>
   <value>3</value>
</property>
<!-- Define the logical nameservice name -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<!-- Map the nameservice to its NameNode logical names -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<!-- Map a NameNode logical name to a real host (RPC address) -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>node-001:8020</value>
</property>
<!-- Map a NameNode logical name to a real host (RPC address) -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>node-002:8020</value>
</property>
<!-- Map a NameNode logical name to a real host (HTTP address) -->
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>node-001:50070</value>
</property>
<!-- Map a NameNode logical name to a real host (HTTP address) -->
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>node-002:50070</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/lims/bd/hdfs/name</value>
  <description>Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. </description>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/lims/bd/hdfs/data</value>
  <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored. </description>
</property>

<!-- JournalNode quorum addresses and local edits directory -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node-002:8485;node-003:8485;node-004:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/lims/bd/hdfs/journal</value>
</property>
<!-- Client-side failover proxy implementation class -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Use passwordless-SSH fencing -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/lims/.ssh/id_dsa</value>
</property>
<!-- Automatic failover disabled: this tutorial uses manual switching -->
<property>
   <name>dfs.ha.automatic-failover.enabled.mycluster</name>
   <value>false</value>
</property>
</configuration>
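The ConfiguredFailoverProxyProvider named above is what lets clients work against the logical nameservice: it tries each configured NameNode in order and fails over to the next when the current one is standby or unreachable. A greatly simplified Python sketch of that behavior (`fake_rpc` and `StandbyError` are illustrative stand-ins, not Hadoop APIs):

```python
# Sketch of client-side failover: try each NameNode in order, moving on
# when one is not active. Hadoop's real proxy provider also retries and
# backs off; this is only the core idea.
class StandbyError(Exception):
    pass

def call_with_failover(namenodes, rpc):
    """namenodes: ordered list of addresses; rpc: function(addr) that
    raises StandbyError when that node cannot serve the request."""
    last = None
    for addr in namenodes:
        try:
            return rpc(addr)
        except StandbyError as e:
            last = e  # remember the failure and try the next NameNode
    raise last

# Hypothetical RPC: only node-002 is active in this toy scenario.
def fake_rpc(addr):
    if addr != "node-002:8020":
        raise StandbyError(addr)
    return "listing from " + addr

print(call_with_failover(["node-001:8020", "node-002:8020"], fake_rpc))
# listing from node-002:8020
```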

3. Distribute the configuration to the other nodes with scp

scp hadoop/* lims@node-002:/home/lims/bd/hadoop-2.8.5/etc/hadoop
scp hadoop/* lims@node-003:/home/lims/bd/hadoop-2.8.5/etc/hadoop
scp hadoop/* lims@node-004:/home/lims/bd/hadoop-2.8.5/etc/hadoop
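The three invocations differ only in the target host, so they can also be generated programmatically. A small sketch (hosts and paths as in this tutorial's cluster plan):

```python
# Sketch: build the per-node scp commands for distributing the config.
nodes = ["node-002", "node-003", "node-004"]
dest = "/home/lims/bd/hadoop-2.8.5/etc/hadoop"
cmds = ["scp hadoop/* lims@%s:%s" % (n, dest) for n in nodes]
for c in cmds:
    print(c)
```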

3. First-time HA startup

1) Start a JournalNode on each of node-002, node-003, and node-004

hadoop-daemon.sh start journalnode
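The JournalNodes form a quorum: the active NameNode's edit log entries commit once a majority of JournalNodes acknowledge them, which is why three are deployed and one may fail without losing availability. The majority rule in Python terms:

```python
# Sketch: QJM majority rule. With 3 JournalNodes, 2 acknowledgements are
# required per edit, so the cluster tolerates losing one JournalNode.
def quorum_size(n_journalnodes):
    return n_journalnodes // 2 + 1

print(quorum_size(3))  # 2
print(quorum_size(5))  # 3
```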

2) Format the NameNode on node-001

hdfs namenode -format

3) Start the NameNode on node-001

hadoop-daemon.sh start namenode

4) On node-002 (the other NameNode), sync nn1's cluster ID and other metadata

hdfs namenode -bootstrapStandby
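`-bootstrapStandby` copies the active NameNode's formatted metadata, including the cluster ID, into node-002's name directory; formatting each NameNode separately instead would produce mismatched cluster IDs and the standby could not join. A sketch of the consistency this step establishes, assuming the key=value layout of the VERSION file in the name directory:

```python
# Sketch: after bootstrapStandby both NameNodes share one clusterID.
# VERSION files are key=value lines, e.g. "clusterID=CID-1234".
def parse_version(text):
    """Parse a name-dir VERSION file into a dict."""
    return dict(line.split("=", 1) for line in text.splitlines()
                if "=" in line and not line.startswith("#"))

nn1 = parse_version("clusterID=CID-1234\nlayoutVersion=-63")
nn2 = parse_version("clusterID=CID-1234\nlayoutVersion=-63")
print(nn1["clusterID"] == nn2["clusterID"])
# True
```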

5) Start the remaining services from node-001

start-dfs.sh

6) Manually switch node-001 to the active state

hdfs haadmin -transitionToActive nn1
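Because automatic failover is disabled, nothing keeps two NameNodes from both becoming active except administrator discipline (plus sshfence during transitions); check the peer with `hdfs haadmin -getServiceState nn2` before transitioning. A toy sketch of the invariant the administrator must maintain:

```python
# Sketch: in manual mode the admin must ensure at most one active NameNode.
# This models the check, not Hadoop's actual implementation.
states = {"nn1": "standby", "nn2": "standby"}

def transition_to_active(states, target):
    others = [n for n, s in states.items() if n != target and s == "active"]
    if others:
        raise RuntimeError("refusing: %s already active" % others)
    states[target] = "active"

transition_to_active(states, "nn1")
print(states)
# {'nn1': 'active', 'nn2': 'standby'}
```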

4. Regular HA startup

1) Start HDFS

start-dfs.sh

2) Start YARN

start-yarn.sh