[Big Data] Distributed Cluster Deployment

1. Cluster planning

Node      NN1       NN2                DN        RM               NM
hadoop01  NameNode                     DataNode                   NodeManager
hadoop02            SecondaryNameNode  DataNode  ResourceManager  NodeManager
hadoop03                               DataNode                   NodeManager

2. Following the single-node deployment, copy the installation directory to the same path on every node and create a soft link with ln -s (see the sketch below).
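A minimal sketch of this step, assuming the install lives under /home/hadoop/Soft/hadoop-2.7.6 (the path used in step 6) and that the hadoop user can SSH to the other nodes; the symlink name "hadoop" is an assumption:

# copy the installation directory to the other nodes, keeping the same path
scp -r /home/hadoop/Soft/hadoop-2.7.6 hadoop@hadoop02:/home/hadoop/Soft/
scp -r /home/hadoop/Soft/hadoop-2.7.6 hadoop@hadoop03:/home/hadoop/Soft/
# on each node, create a version-independent soft link to the install directory
ln -s /home/hadoop/Soft/hadoop-2.7.6 /home/hadoop/Soft/hadoop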

 

 

3. Edit the configuration files and the *.sh environment/startup files according to the cluster plan

 

slaves: lists the hostnames of the worker (DataNode/NodeManager) nodes, for example:
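For this cluster plan, etc/hadoop/slaves simply contains the three hostnames:

hadoop01
hadoop02
hadoop03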

*.sh (hadoop-env.sh, yarn-env.sh, etc.): set JAVA_HOME
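For example, in etc/hadoop/hadoop-env.sh (the JDK path below is an assumption; use the path of your own installation):

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk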

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
        <!-- How NodeManagers obtain data: the mapreduce shuffle service -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <!-- Hostname of the YARN master (ResourceManager) -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>hadoop02</value>
        </property>
</configuration>

hdfs-site.xml

<configuration>
        <!-- Number of replicas HDFS keeps for each block -->
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <!-- Disable HDFS permission checking -->
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
                <description>
                        If "true", enable permission checking in HDFS.
                        If "false", permission checking is turned off,
                        but all other behavior is unchanged.
                        Switching from one parameter value to the other does not change the mode,
                        owner or group of files or directories.
                </description>
        </property>
        <!-- HTTP address and port of the SecondaryNameNode -->
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hadoop02:50090</value>
        </property>
</configuration>

mapred-site.xml

<configuration>
        <!-- Tell Hadoop to run MapReduce jobs on YARN -->
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>
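Note that the Hadoop 2.x distribution only ships mapred-site.xml.template, so the file may need to be created first, for example:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml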

 

core-site.xml

<configuration>
        <!-- URI of the HDFS master (NameNode) -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoop01:9000</value>
        </property>
        <!-- Base directory for files Hadoop generates at runtime -->
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/hadoop/tmp</value>
        </property>
</configuration>

 

4. Because this cluster is an upgrade of the single-node setup, delete the contents of the hadoop.tmp.dir directory and, as root, grant permissions with chmod -R 777 /hadoop (see the sketch below).
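A sketch of this cleanup, assuming hadoop.tmp.dir is /hadoop/tmp as configured in core-site.xml above:

# remove the old single-node data so the cluster can be formatted cleanly
rm -rf /hadoop/tmp
# as root, grant full access to the data directory for the hadoop user
sudo chmod -R 777 /hadoop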

5. Reformat HDFS (on hadoop01, the NameNode): hdfs namenode -format

6. Copy the configuration to the other nodes: scp -r /home/hadoop/Soft/hadoop-2.7.6/etc/hadoop hadoop@hadoop03:/home/hadoop/Soft/hadoop-2.7.6/etc/
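The same copy is needed for every other node in the plan, for example hadoop02:

scp -r /home/hadoop/Soft/hadoop-2.7.6/etc/hadoop hadoop@hadoop02:/home/hadoop/Soft/hadoop-2.7.6/etc/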

7. On hadoop01 (the NameNode): start-dfs.sh

8. On hadoop02 (the ResourceManager): start-yarn.sh

9. Use jps to check that the expected daemons are running on each node
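If startup succeeded, the jps output on each node should roughly match the cluster plan (process IDs omitted):

hadoop01: NameNode, DataNode, NodeManager
hadoop02: SecondaryNameNode, DataNode, ResourceManager, NodeManager
hadoop03: DataNode, NodeManager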

 

 

