Installing and Configuring Hadoop on Windows


Steps:
1. Install the JDK (see any JDK installation guide if you are unsure).
2. Download hadoop-2.5.2.tar.gz, or search Baidu for it yourself.
3. Download hadooponwindows-master.zip (a package that makes Hadoop runnable on Windows).

1. Install Hadoop 2.5.2

Download hadoop-2.5.2.tar.gz and extract it to a directory of your choice; I put it in D:\dev\hadoop-2.5.2.

2. Configure the Hadoop Environment Variables

1. Configure the Windows environment variables.

Right-click My Computer -> Properties -> Advanced system settings -> Advanced tab -> Environment Variables -> click New and add HADOOP_HOME, pointing at the Hadoop directory (D:\dev\hadoop-2.5.2 here).
[screenshot: the HADOOP_HOME environment variable dialog]

2. Then edit the Path environment variable and append Hadoop's bin directory (%HADOOP_HOME%\bin) to it.
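
To confirm the variables took effect, open a new cmd window (already-open windows don't see the change) and echo them; this is a minimal sanity check, assuming the D:\dev\hadoop-2.5.2 location used above:

C:\>echo %HADOOP_HOME%
D:\dev\hadoop-2.5.2

C:\>echo %Path%

The Path output should now contain D:\dev\hadoop-2.5.2\bin. (The hadoop command itself will only work after the JAVA_HOME and bin-replacement steps below.)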

3. Edit the Hadoop Configuration Files

1. Edit the core-site.xml file under "D:\dev\hadoop-2.5.2\etc\hadoop", paste in the following, and save:
<configuration>
    <!-- Base directory for Hadoop's temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/D:/dev/hadoop-2.5.2/workplace/tmp</value>
    </property>
    <!-- Where the NameNode stores its metadata -->
    <property>
        <name>dfs.name.dir</name>
        <value>/D:/dev/hadoop-2.5.2/workplace/name</value>
    </property>
    <!-- Default filesystem URI (newer name: fs.defaultFS) -->
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

2. Edit mapred-site.xml in the "D:\dev\hadoop-2.5.2\etc\hadoop" directory (if it doesn't exist, rename mapred-site.xml.template to mapred-site.xml), paste in the following, and save:

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
       <name>mapreduce.framework.name</name>
       <value>yarn</value>
    </property>
    <!-- Old MRv1 JobTracker setting; ignored when the framework is yarn -->
    <property>
       <name>mapred.job.tracker</name>
       <value>hdfs://localhost:9001</value>
    </property>
</configuration>

3. Edit hdfs-site.xml in the "D:\dev\hadoop-2.5.2\etc\hadoop" directory, paste in the following, and save. Create the data directory yourself; here I created workplace/data under HADOOP_HOME (a mkdir sketch follows the XML below):

<configuration>
    <!-- Replication factor is 1 because this is a single-node Hadoop -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <!-- Where the DataNode stores its blocks -->
    <property>
        <name>dfs.data.dir</name>
        <value>/D:/dev/hadoop-2.5.2/workplace/data</value>
    </property>
</configuration>
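
The workplace directories referenced in these config files don't exist yet. Assuming the D:\dev\hadoop-2.5.2 layout used throughout, they can be created from any cmd window:

C:\>mkdir D:\dev\hadoop-2.5.2\workplace\tmp
C:\>mkdir D:\dev\hadoop-2.5.2\workplace\name
C:\>mkdir D:\dev\hadoop-2.5.2\workplace\data

(mkdir on Windows creates intermediate directories automatically, so the workplace parent appears as a side effect.)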

4. Edit yarn-site.xml in the "D:\dev\hadoop-2.5.2\etc\hadoop" directory, paste in the following, and save:

<configuration>
    <!-- Run the MapReduce shuffle as a NodeManager auxiliary service -->
    <property>
       <name>yarn.nodemanager.aux-services</name>
       <value>mapreduce_shuffle</value>
    </property>
    <property>
       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

5. Edit hadoop-env.cmd in the "D:\dev\hadoop-2.5.2\etc\hadoop" directory: comment out the existing JAVA_HOME line with @rem, set JAVA_HOME to your JDK path, and save:

@rem set JAVA_HOME=%JAVA_HOME%
@rem Point this at your JDK install path. Keep any comment on its own @rem
@rem line: cmd's set would otherwise include trailing text in the value.
@rem Avoid paths containing spaces; they break Hadoop's Windows scripts
@rem (use the 8.3 short form PROGRA~1 for "Program Files" if needed).
set JAVA_HOME=D:\java\jdk

4. Replace the bin Directory

Extract the downloaded hadooponwindows-master.zip and use its bin directory (which contains the .dll and .exe files Hadoop needs on Windows) to replace the bin directory of your Hadoop installation. A quick check that the replacement worked is shown below.
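
winutils.exe and hadoop.dll are the key native helpers in that bin directory (assuming the usual layout of the hadooponwindows-master package); a quick dir confirms they landed in place:

C:\>dir %HADOOP_HOME%\bin\winutils.exe %HADOOP_HOME%\bin\hadoop.dll

If dir reports File Not Found, Hadoop will fail at startup with native-library errors.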

5. Start Hadoop

1. Open a cmd window and run "hdfs namenode -format".
2. Open a cmd window, switch to Hadoop's sbin directory, and run "start-all.cmd"; this starts the NameNode, DataNode, ResourceManager, and NodeManager processes.

On success it looks like this:
[screenshot: the daemon windows after a successful start]

At this point, the Hadoop service is fully set up.
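
As an extra sanity check, jps (bundled with the JDK) lists the running Java processes:

C:\WINDOWS\system32>jps

The output should include NameNode, DataNode, ResourceManager, and NodeManager, plus Jps itself.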

Next, upload some test files and try out HDFS.

Given the fs.default.name setting in your core-site.xml, you can now address HDFS through hdfs://localhost:9000.

1. Create an input directory

C:\WINDOWS\system32>hadoop fs -mkdir hdfs://localhost:9000/user/

C:\WINDOWS\system32>hadoop fs -mkdir hdfs://localhost:9000/user/wcinput

2. Upload data to the directory

C:\WINDOWS\system32>hadoop fs -put D:\file1.txt hdfs://localhost:9000/user/wcinput

C:\WINDOWS\system32>hadoop fs -put D:\file2.txt hdfs://localhost:9000/user/wcinput

3. View the files
[screenshot: the uploaded files]
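
The same check works from the command line (using the two files uploaded above):

C:\WINDOWS\system32>hadoop fs -ls hdfs://localhost:9000/user/wcinput

C:\WINDOWS\system32>hadoop fs -cat hdfs://localhost:9000/user/wcinput/file1.txt

fs -ls lists the directory's contents; fs -cat prints a file to the console.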

All done.

Appendix: Hadoop's built-in web consoles

1. YARN ResourceManager GUI: http://localhost:8088/
[screenshot]

2. HDFS NameNode GUI: http://localhost:50070/
[screenshot]

Estimating π with Hadoop's bundled pi example

(The run below was captured on a Hadoop 2.9.0 installation, as the jar name shows; use the examples jar that ships with your version.)

D:\HADOOP\hadoop>hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar pi 10 10
Number of Maps  = 10
Samples per Map = 10
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
18/11/09 13:31:06 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
18/11/09 13:31:07 INFO input.FileInputFormat: Total input files to process : 10
18/11/09 13:31:07 INFO mapreduce.JobSubmitter: number of splits:10
18/11/09 13:31:07 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
18/11/09 13:31:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541741344890_0001
18/11/09 13:31:08 INFO impl.YarnClientImpl: Submitted application application_1541741344890_0001
18/11/09 13:31:08 INFO mapreduce.Job: The url to track the job: http://DESKTOP-S0J61R2:8088/proxy/application_1541741344890_0001/
18/11/09 13:31:08 INFO mapreduce.Job: Running job: job_1541741344890_0001
18/11/09 13:31:29 INFO mapreduce.Job: Job job_1541741344890_0001 running in uber mode : false
18/11/09 13:31:29 INFO mapreduce.Job:  map 0% reduce 0%
18/11/09 13:31:43 INFO mapreduce.Job:  map 50% reduce 0%
18/11/09 13:31:44 INFO mapreduce.Job:  map 60% reduce 0%
18/11/09 13:31:52 INFO mapreduce.Job:  map 90% reduce 0%
18/11/09 13:31:53 INFO mapreduce.Job:  map 100% reduce 0%
18/11/09 13:31:54 INFO mapreduce.Job:  map 100% reduce 100%
18/11/09 13:32:04 INFO mapreduce.Job: Job job_1541741344890_0001 completed successfully
18/11/09 13:32:04 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=226
                FILE: Number of bytes written=2238841
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=2680
                HDFS: Number of bytes written=215
                HDFS: Number of read operations=43
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=3
        Job Counters
                Launched map tasks=10
                Launched reduce tasks=1
                Data-local map tasks=10
                Total time spent by all maps in occupied slots (ms)=99705
                Total time spent by all reduces in occupied slots (ms)=8623
                Total time spent by all map tasks (ms)=99705
                Total time spent by all reduce tasks (ms)=8623
                Total vcore-milliseconds taken by all map tasks=99705
                Total vcore-milliseconds taken by all reduce tasks=8623
                Total megabyte-milliseconds taken by all map tasks=102097920
                Total megabyte-milliseconds taken by all reduce tasks=8829952
        Map-Reduce Framework
                Map input records=10
                Map output records=20
                Map output bytes=180
                Map output materialized bytes=280
                Input split bytes=1500
                Combine input records=0
                Combine output records=0
                Reduce input groups=2
                Reduce shuffle bytes=280
                Reduce input records=20
                Reduce output records=0
                Spilled Records=40
                Shuffled Maps =10
                Failed Shuffles=0
                Merged Map outputs=10
                GC time elapsed (ms)=1144
                CPU time spent (ms)=4669
                Physical memory (bytes) snapshot=3203293184
                Virtual memory (bytes) snapshot=3625623552
                Total committed heap usage (bytes)=2142240768
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=1180
        File Output Format Counters
                Bytes Written=97
Job Finished in 58.577 seconds
Estimated value of Pi is 3.20000000000000000000
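
The estimate is coarse because only 10 maps × 10 samples = 100 points were thrown. The two arguments are the number of maps and the samples per map, so a tighter run with the same jar looks like:

D:\HADOOP\hadoop>hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar pi 16 100000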


Hadoop on Windows throws: Exception message: CreateSymbolicLink error (1314): ???????????

Environment:

   Hadoop 2.7.1

   Windows Server 2008 R2

Problem description:

  While running a Kettle ETL job into Hive, Hadoop threw Exception message: CreateSymbolicLink error (1314) — a failure to create a symbolic link (the trailing question marks are the localized Windows error text, mangled by the console encoding). Analysis showed that the Windows account lacked the privilege to create symbolic links.

Solutions:

1. Administrators can create symbolic links by default, so the simplest fix is to start the Hadoop processes from an elevated (Run as administrator) command prompt.

2. Alternatively, grant the account the create-symbolic-links privilege:

   2.1. Win+R -> gpedit.msc
   2.2. Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment -> Create symbolic links
   2.3. Add the user, then reboot or log off.

You can verify the privilege took effect as shown below.
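
The underlying right is SeCreateSymbolicLinkPrivilege; whoami can confirm the account now holds it:

C:\WINDOWS\system32>whoami /priv

SeCreateSymbolicLinkPrivilege should appear among the listed privileges (its state may read Disabled; processes enable it on demand).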

References:

https://stackoverflow.com/questions/28958999/hdfs-write-resulting-in-createsymboliclink-error-1314-a-required-privilege
