Hadoop 3.0.3 Cluster Deployment

#Preface

  This was my first Hadoop deployment and it cost me a whole day. I hunted for documentation everywhere, and every guide I found configured things slightly differently, which left me thoroughly confused. Fortunately things are quiet these days, so I could take my time. By evening the deployment seemed to have succeeded — but how to verify it? I'll check whether jobs actually run correctly once I get Spark deployed on top of it.

 

#Aside

  From last weekend until now (working on it only in the evenings after work), I turned hoovip.com into a movie site: it scrapes five movie sites every three hours, and I added automatic sharing of films to Weibo. I'll keep optimizing it; after that I plan to build an API offering download services for YouTube, Bilibili, and Tumblr, since the backend stats show those three make up the largest share of queries. Finally, checking Google webmaster stats today, the CSDN post "Bilibili B站視頻 快速下載、備份影片 Mp4 格式 - videofk.com" is bringing in 100–200 visitor IPs. Ha!

 

#On to the Commands

1. Edit the hosts file: vim /etc/hosts
  192.168.1.101 had001   # master
  192.168.1.102 had002
  192.168.1.103 had003
  # Copy the same entries to the 102 and 103 hosts
  
Part I: on the had001 host
2. Copy had001's .ssh/authorized_keys to the 102 and 103 servers:
  scp -i /home/xxx.pem .ssh/authorized_keys root@had002:/root/.ssh/
  scp -i /home/xxx.pem .ssh/authorized_keys root@had003:/root/.ssh/
  # If this worked, ssh root@had002 from 101 now logs in to 102 without a password
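Step 2 assumes had001 already has a key pair whose public key sits in its own authorized_keys. If it does not yet, a minimal sketch (default key path shown; adjust if you use the xxx.pem identity instead):

```shell
# Generate a passwordless RSA key pair on had001, unless one already exists
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa

# Authorize it locally; this is the authorized_keys file copied to had002/had003
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
```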
  
3. Add the paths to the environment: vim /etc/profile, then run source /etc/profile after saving.
  export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
  export HADOOP_HOME=/home/hadoop
  export HDFS_NAMENODE_USER=root
  export HDFS_DATANODE_USER=root
  export HDFS_SECONDARYNAMENODE_USER=root
  export YARN_RESOURCEMANAGER_USER=root
  export YARN_NODEMANAGER_USER=root 

  echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> $HADOOP_HOME/etc/hadoop/hadoop-env.sh
4. Configure core-site.xml: vim $HADOOP_HOME/etc/hadoop/core-site.xml
      <configuration>
          <property>
               <name>hadoop.tmp.dir</name>
               <value>file:/usr/local/hadoop/tmp</value> <!-- base temporary directory -->
         </property>
         <property>
            <name>fs.defaultFS</name>
            <value>hdfs://had001:9000</value> <!-- NameNode address and port -->
         </property>
     </configuration>

5. Configure hdfs-site.xml: vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml
    <configuration>
        <property>
            <name>dfs.replication</name> <!-- number of block replicas -->
            <value>1</value>
        </property>
         <property>
            <name>dfs.namenode.name.dir</name> <!-- NameNode metadata directory -->
            <value>file:/usr/local/hadoop/tmp/dfs/name</value>
         </property>
         <property>
            <name>dfs.datanode.data.dir</name> <!-- DataNode block storage directory -->
            <value>file:/usr/local/hadoop/tmp/dfs/data</value>
         </property>
        <property>
          <name>dfs.namenode.secondary.http-address</name> <!-- web UI for checking HDFS status -->
          <value>had001:9001</value>
        </property>
    </configuration>
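Hadoop normally creates these directories itself on format/start-up, but creating them ahead of time avoids permission surprises. The paths come from the config above; run the name line on had001 and the data line on had002/had003:

```shell
# NameNode metadata directory (had001)
mkdir -p /usr/local/hadoop/tmp/dfs/name

# DataNode block directory (had002, had003)
mkdir -p /usr/local/hadoop/tmp/dfs/data
```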

6. Configure mapred-site.xml: vim $HADOOP_HOME/etc/hadoop/mapred-site.xml
    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
    </configuration>
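One caveat from Hadoop 3.x: MapReduce jobs submitted to YARN can fail with a "Could not find or load main class ...MRAppMaster" error unless the MapReduce home is passed to the containers. If you hit that, a commonly used addition to mapred-site.xml (using /home/hadoop, the HADOOP_HOME from the profile above) is:

```xml
<property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/home/hadoop</value>
</property>
<property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/home/hadoop</value>
</property>
<property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/home/hadoop</value>
</property>
```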

7. Configure yarn-site.xml: vim $HADOOP_HOME/etc/hadoop/yarn-site.xml
    <configuration>
<!-- Site specific YARN configuration properties -->
        <property>
            <name>yarn.resourcemanager.hostname</name>
            <value>had001</value>
        </property>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.resourcemanager.address</name>
            <value>had001:8032</value>
        </property>
        <property>
           <name>yarn.resourcemanager.scheduler.address</name>
           <value>had001:8030</value>
        </property>
        <property>
           <name>yarn.resourcemanager.resource-tracker.address</name>
           <value>had001:8031</value>
        </property>
        <property>
           <name>yarn.resourcemanager.admin.address</name>
           <value>had001:8033</value>
        </property>
        <property>
           <name>yarn.resourcemanager.webapp.address</name>
           <value>had001:8088</value>
        </property>
    </configuration>

8. List the DataNode hostnames: vim $HADOOP_HOME/etc/hadoop/workers
    had002
    had003

9. With configuration done, archive the hadoop directory on 101 and push it to the 102 and 103 hosts (note zip's recursion flag is lowercase -r; uppercase -R means something else):
   cd /home && zip -r hadoop.zip hadoop
   scp /home/hadoop.zip  root@had002:/home
   scp /home/hadoop.zip  root@had003:/home

10. After the transfer, unpack on 102 and 103 (unzip has no -r flag; -d sets the destination):
    ssh root@had002  # log in
    unzip /home/hadoop.zip -d /home
    
11. Format HDFS on 101:
    $HADOOP_HOME/bin/hdfs namenode -format

12. Start HDFS with $HADOOP_HOME/sbin/start-dfs.sh, then check the running daemons with jps (stop-dfs.sh stops them).

13. Start YARN (the distributed-compute layer): $HADOOP_HOME/sbin/start-yarn.sh

14. Check the HDFS cluster status: $HADOOP_HOME/bin/hdfs dfsadmin -report
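To answer the preface's "how do I verify it?" without waiting for Spark, a quick smoke test is to round-trip a file through HDFS and run one of the bundled example jobs (the file name is illustrative; the examples jar ships with the 3.0.3 distribution):

```shell
# Write a local file into HDFS and read it back
echo "hello hadoop" > /tmp/hello.txt
$HADOOP_HOME/bin/hdfs dfs -mkdir -p /smoke
$HADOOP_HOME/bin/hdfs dfs -put -f /tmp/hello.txt /smoke/
$HADOOP_HOME/bin/hdfs dfs -cat /smoke/hello.txt

# Exercise YARN + MapReduce with the bundled pi estimator
$HADOOP_HOME/bin/yarn jar \
  $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar pi 2 10
```

If the pi job completes and prints an estimate, HDFS, YARN, and MapReduce are all working end to end.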

15. Finally, run jps to list the background daemons.
   Host 101 (had001):
        22656 Jps
        21205 ResourceManager
        20405 SecondaryNameNode
        10233 NodeManager
        20155 NameNode

   Hosts 102 and 103 (had002, had003):
        3697 Jps
        3001 DataNode
        3389 NodeManager
References
  http://www.cnvirtue.com/547.html
  https://www.linode.com/docs/databases/hadoop/how-to-install-and-set-up-hadoop-cluster/
The official property-defaults pages are also handy for lookups:
http://hadoop.apache.org/docs/r3.0.3/hadoop-project-dist/hadoop-common/core-default.xml
http://hadoop.apache.org/docs/r3.0.3/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
http://hadoop.apache.org/docs/r3.0.3/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
http://hadoop.apache.org/docs/r3.0.3/hadoop-yarn/hadoop-yarn-common/yarn-default.xml

 

Do you have a different opinion or perspective? Comments are welcome so we can learn together. Thanks.

Permalink: http://www.hihubs.com/article/342

Keywords: Hadoop 3.0.3 cluster deployment

Unless otherwise noted, all articles are original work by Hubs'm; please credit the source when reposting... O(∩_∩)O
