Setting Up Hadoop 2.7.2 on CentOS 7 x64

I. Environment

1. CentOS

[root@master hadoop-2.7.2]# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core) 

[root@master hadoop]# uname -r
3.10.0-229.20.1.el7.x86_64

2. JDK (jdk8u51)

[root@master hadoop]# java -version
java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)

http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

3. Hadoop (2.7.2)

http://hadoop.apache.org/releases.html#25+January%2C+2016%3A+Release+2.7.2+%28stable%29+available

4. Base Environment

Five virtual machines run on an ESXi 6.0 server, with the following settings:

[root@master hadoop]# more /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.171 master.hadoop master
192.168.1.172 slave1.hadoop slave1
192.168.1.173 slave2.hadoop slave2
192.168.1.174 slave3.hadoop slave3
192.168.1.175 slave4.hadoop slave4

II. Installation and Configuration

1. Install the JDK

(1) Download the JDK from the URL given above.

(2) Extract the JDK: run tar -zxvf jdk-8u51-linux-x64.gz and place the extracted directory under /softall/.

(3) Configure the environment variables by editing /etc/profile:

export JAVA_HOME=/softall/jdk1.8.0_51
export JRE_HOME=/softall/jdk1.8.0_51/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
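
To make these take effect in the current shell, reload the profile before checking:

source /etc/profile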

(4) Verify:

[root@master hadoop]# java -version
java version "1.8.0_51"
Java(TM) SE Runtime Environment (build 1.8.0_51-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.51-b03, mixed mode)

2. Passwordless SSH Login

        While running, Hadoop must manage remote Hadoop daemons: once Hadoop starts, the NameNode uses SSH (Secure Shell) to start and stop the various daemons on each DataNode. Commands must therefore run between nodes without a password prompt, so we configure SSH for passwordless public-key authentication. The NameNode can then log in over SSH without a password and start the DataNode processes, and by the same principle the DataNodes can log in to the NameNode without a password.

        PS: Newcomers are bound to find this step confusing at first, but once you understand the principle it is not hard. Think of a private club: normally you show a membership card at the door (like a password), but to improve the experience the club switches to facial recognition. Members register a photo in advance, and the next time they arrive the camera captures their face, matches it against the stored image, and recognizes them; no card needed, much more convenient.

        Here, that means each server's public key file is handed to every other server and stored there, so that any two servers can reach each other without a password.

        Concretely, the master and all the slave servers each generate their own public/private key pair and append their public key to the authorized_keys file on every other machine.
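
As an aside, if your distribution ships the ssh-copy-id helper, it automates most of steps (2) through (5) below for one host pair at a time; a minimal sketch:

# run on each node, once per remote host: appends the local public key
# to the remote ~/.ssh/authorized_keys and sets its permissions
ssh-copy-id -i ~/.ssh/id_rsa.pub root@master

The manual steps below show the same mechanism explicitly.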

(1) On every machine, edit the SSH configuration to allow passwordless login: uncomment the following three lines in /etc/ssh/sshd_config. This must be done on every server.

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile      .ssh/authorized_keys
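
Restart the SSH daemon so the change takes effect (CentOS 7 uses systemd):

systemctl restart sshd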

(2) Generate a key pair:

ssh-keygen -t rsa -P ''

Press Enter at the prompt. The generated key pair, id_rsa and id_rsa.pub, is stored under ~/.ssh by default.

[root@master utils]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
9e:a5:0b:7a:87:49:45:13:ec:1b:6d:0f:c7:9a:7c:c3 root@master.hadoop
The key's randomart image is:
+--[ RSA 2048]----+
|       ...       |
|        +        |
|       o o .     |
|        + + o    |
|       .S=.B     |
|      ...++ E    |
|     ..o+  . .   |
|     .+...       |
|    .. ..        |
+-----------------+

(3) Send every slave's public key to the master

On slave1, run:

scp ~/.ssh/id_rsa.pub root@master:~/.ssh/id_rsa.slave1.pub

Likewise send the public keys of slave2, slave3, and slave4 to the master, then check on the master:

[root@master .ssh]# ll
total 32
-rw------- 1 root root 2000 Jun  6 16:46 authorized_keys
-rw------- 1 root root 1675 Jun  6 16:26 id_rsa
-rw-r--r-- 1 root root  400 Jun  6 16:26 id_rsa.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave1.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave2.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave3.pub
-rw-r--r-- 1 root root  400 Jun  6 16:46 id_rsa.slave4.pub
-rw-r--r-- 1 root root  728 Jun  6 16:27 known_hosts

Append all the public key files to the authorized keys file:

[root@master .ssh]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@master .ssh]# cat ~/.ssh/id_rsa.slave1.pub >> ~/.ssh/authorized_keys
[root@master .ssh]# cat ~/.ssh/id_rsa.slave2.pub >> ~/.ssh/authorized_keys
[root@master .ssh]# cat ~/.ssh/id_rsa.slave3.pub >> ~/.ssh/authorized_keys
[root@master .ssh]# cat ~/.ssh/id_rsa.slave4.pub >> ~/.ssh/authorized_keys
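
Equivalently, since all of the key files above match one pattern, a single append covers them (assuming no other *.pub files are present in ~/.ssh):

cat ~/.ssh/id_rsa*.pub >> ~/.ssh/authorized_keys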

You can inspect the file to verify the result:

[root@master .ssh]# more authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQClpmQd2fUgawiH+RDkgtZDViT98L1D8u8Jx44dv4gci1nNt0TQCoSHK43QnT5/5Ncf4h6II3oYN8o6TrnDF8PXKP2rR0HULmHMUQf0qy45pmM5oUCwbZ1mY
ggB/v77WS9MM2IBcjlPaNb17jvFWvkVGP+zUTfkuv7XfK1RY0CvNl55MFQBB/TbaB8o/8KHVVN7XmUWiRB68cFmRiBiaBuY97IFMbDmADBA+4cHMGiZ9hYNzKw+61Hw4H+OlhVv5cuth24KlUL/cAed7f1Qh/
ToP6aVYfUxmgf9Jc4pAaAss44UNGg0O2RodHsbIenVtYS/T/13iGWjmLckW9aKAFwP root@master.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDQH6DbFni2J+A6iSA+fcLRdEOZn/HLFPGSjjkd0VqdXkGhakGGlskLNL2zr7f5nmJonPF64OKjW5fsvdmSRlPXnYhlYT/dF6hw4gYxQIksd5Cm1X2AB6B5C
3WpiRif3m8L0cd99X9EE55rx5hVD0UxMVK0AIAF6Ao1opra1jUm0r0r7ddPJBhClE5nN8b1LZf/QaQHkmWkjO4KqFN6+QrEoEoT2cGPTV08Z+yOsRcognP4eJuc5PnxpY0pCpznstqAsNfPCi4KJwwpGpQ3ZF
pwBYpwhiTFatSc95qtY8ZaQncomjmiJeUCVHpVO+pegdR0J4rhV122U5/6kuWgA6Mt root@slave1.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMb6GS9lvOEffh9ntF9q/UeW8bJ/s3U5DMq+696YLSBV9e34cQJo7xcfZhOMHHmJb2/AgCWMkV7LF2y0/+YUzKJZvdrvHSJ3aHCbBnFJ44srNR+754ZDnyt
hyaoyZnx+0bEOsdeIO7HRPuRFKRxma64V8hV7MqO8/K1a3sT9yz2VRcoL1huAyfG8zQPZ7nT0PrMowV3b4CPwdMTIHK6fUjteIBFLIy/CWPWKD2o7bEEh2rxfqhVEGaHi5+EN7Ztex0lmOYzuyBShUYnz4q8u
C7EHCEdMBlH+E04SvUF8n/6KoUPEJ25kVlSM3aqyDDO6CHq9R58iYmODmp9bn2nzgF root@slave2.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCrK1J7k9m/x1xIdtE0aCuCWI96OmgZocJ99TvMDp75jzlnWNsDjGHKYIh0UalPzKqjXa8JPLvrJPvUSbKVIvO7CiitUMviPz/EPUZnTnDuZVEEV33nPTeT
NZsdw/EAh+lIkwscdRXNtoLyzKgJwfeAbTvegiBP9XuHt6GMtvf+Syv7u4bpomIO905Ury08km+FHL+JbP0EUsfNEUHfIR/e7qBy+7Yt94dzeKvKxTu1Ar/HfCdg/LJIi98xA3b+eRfZ2V0ACHqPlperQ8duy
qvBtbt06NMOdpx4S9T1RsgYW1Mo9B/vVt7wocBY0IePfQZ0SPL1N4DYijxz7LyIVa/ root@slave3.hadoop
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDD2LLrtBQVUzvKbtUfzjUSq7dnLBTLxLTfrGAEJ6eENdQh0iCEMLdNfgN4AIP8A8CrQWcjag9YylY7fgzcvykbJlbTX8qoGdVqu8sikGrTbBNpkM03ZwfEv
f3PId4q5hByANvPdFKK9IDF6uzEkK07o89zJKgK8BcgKU7OIOyStUz5bxLnrarqgQXe0yeQq+8QdQWly2Ojc4wuiEuI2SHaXxUAHcdVoFYNqiBHWMv1PpK2mULsuvmE343nV6iSifRlv9+Atud2F9W0RidmV2
PZtlva9rXGrxoJxiWsz4A+vhud3l9TxHZMguBukpPZBJ14of1zT1n9bxlcQYPgvGOP root@slave4.hadoop

(4) Fix the permissions on authorized_keys

chmod 600 ~/.ssh/authorized_keys

Check the result:

[root@master .ssh]# ll
total 20
-rw-r--r-- 1 root root  400 Jun  6 16:26 authorized_keys
-rw------- 1 root root 1675 Jun  6 16:26 id_rsa
-rw-r--r-- 1 root root  400 Jun  6 16:26 id_rsa.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave1.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave2.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave3.pub
-rw-r--r-- 1 root root  400 Jun  6 16:46 id_rsa.slave4.pub
-rw-r--r-- 1 root root  728 Jun  6 16:27 known_hosts

[root@master .ssh]# chmod 600 authorized_keys 
[root@master .ssh]# ll
total 20
-rw------- 1 root root  400 Jun  6 16:26 authorized_keys
-rw------- 1 root root 1675 Jun  6 16:26 id_rsa
-rw-r--r-- 1 root root  400 Jun  6 16:26 id_rsa.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave1.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave2.pub
-rw-r--r-- 1 root root  400 Jun  6 16:45 id_rsa.slave3.pub
-rw-r--r-- 1 root root  400 Jun  6 16:46 id_rsa.slave4.pub
-rw-r--r-- 1 root root  728 Jun  6 16:27 known_hosts

(5) Copy the authorized_keys file to every slave machine:

scp ~/.ssh/authorized_keys root@slave1:~/.ssh/
scp ~/.ssh/authorized_keys root@slave2:~/.ssh/
scp ~/.ssh/authorized_keys root@slave3:~/.ssh/
scp ~/.ssh/authorized_keys root@slave4:~/.ssh/
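
With four slaves this loops naturally; a sketch, assuming the hostnames from /etc/hosts above:

for h in slave1 slave2 slave3 slave4; do
    scp ~/.ssh/authorized_keys root@$h:~/.ssh/
done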

(6) From every machine, log in to all the other machines once, because the first login brings up a host-key prompt:

[root@slave4 .ssh]# ssh slave2
The authenticity of host 'slave2 (192.168.1.173)' can't be established.
ECDSA key fingerprint is e2:4d:18:4c:61:a0:ca:35:82:82:89:82:21:cc:ca:70.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.1.173' (ECDSA) to the list of known hosts.
Last login: Mon Jun  6 16:49:30 2016 from slave3.hadoop

After you type yes once, later logins need no password.
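
If you would rather skip this one-time round of prompts, one option is to disable strict host-key checking for the cluster hosts in ~/.ssh/config on each node. This trades away host-key verification, so treat it as a convenience for a trusted internal network only:

# ~/.ssh/config
Host master slave1 slave2 slave3 slave4
    StrictHostKeyChecking no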

3. Install Hadoop

(1) Download Hadoop

Official download page:

http://hadoop.apache.org/releases.html#25+January%2C+2016%3A+Release+2.7.2+%28stable%29+available

PS: Note that the official binary package is built 32-bit; running it on a 64-bit system produces errors, so you need to recompile from the src package on a 64-bit machine (the method is given later). Installing 64-bit Hadoop only adds this compile step; everything else matches the 32-bit install.

Place the downloaded hadoop-2.7.2.tar.gz package under the / directory.

(2) Extract Hadoop

Extract the downloaded Hadoop into /softall/ (the name /softall is personal preference; create the directory in advance):

tar zxvf /hadoop-2.7.2.tar.gz -C /softall/

Inside /softall/hadoop-2.7.2, create the folders that will hold data: tmp, logs, dfs/data, and dfs/name.

cd /softall/hadoop-2.7.2
mkdir logs
mkdir tmp
mkdir -p dfs/data
mkdir -p dfs/name

(3) Configure Hadoop

  • Edit /softall/hadoop-2.7.2/etc/hadoop/core-site.xml and add the following:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/softall/hadoop-2.7.2/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,
               org.apache.hadoop.io.compress.DefaultCodec,
               org.apache.hadoop.io.compress.BZip2Codec,
               org.apache.hadoop.io.compress.SnappyCodec
        </value>
    </property>
</configuration>
  • Edit /softall/hadoop-2.7.2/etc/hadoop/mapred-site.xml (strictly speaking, copy mapred-site.xml.template and rename the copy to mapred-site.xml) and add the following:
<configuration>
<!--
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
-->
    <property>
        <name>mapreduce.job.tracker</name>
        <value>hdfs://master:8001</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
    <property>
        <name>mapreduce.map.output.compress</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.map.output.compress.codec</name>
        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>
</configuration>

PS: If the commented-out block is enabled while the NodeManagers are not running, the wordcount example hangs; this is explained later.

  • Edit /softall/hadoop-2.7.2/etc/hadoop/hdfs-site.xml and add the following:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/softall/hadoop-2.7.2/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/softall/hadoop-2.7.2/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
  • Edit /softall/hadoop-2.7.2/etc/hadoop/yarn-site.xml and add the following:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>
  • In /softall/hadoop-2.7.2/etc/hadoop, set JAVA_HOME in both hadoop-env.sh and yarn-env.sh; leaving it unset causes errors:
export JAVA_HOME=/softall/jdk1.8.0_51
  • Edit slaves under /softall/hadoop-2.7.2/etc/hadoop: delete the default localhost and add all the slave nodes. Either IPs or hostnames work here, but hostname entries tend to break after reboots and cause similar problems, so IPs are generally recommended:
192.168.1.172
192.168.1.173
192.168.1.174
192.168.1.175
  • Copy the configured Hadoop to the corresponding location on every node:
scp -r /softall/hadoop-2.7.2 root@192.168.1.172:/softall/hadoop-2.7.2
scp -r /softall/hadoop-2.7.2 root@192.168.1.173:/softall/hadoop-2.7.2
scp -r /softall/hadoop-2.7.2 root@192.168.1.174:/softall/hadoop-2.7.2
scp -r /softall/hadoop-2.7.2 root@192.168.1.175:/softall/hadoop-2.7.2

4. Run and Verify Hadoop

(1) Change into the /softall/hadoop-2.7.2 directory.

(2) Format the NameNode:

bin/hadoop namenode -format

Success is indicated by a "successfully formatted" message in the output.

(3) Start Hadoop:

sbin/start-all.sh
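
Note that start-all.sh is deprecated in Hadoop 2.x; it still works, but it simply delegates to the two scripts below, which can also be run directly:

sbin/start-dfs.sh     # HDFS daemons: NameNode, SecondaryNameNode, DataNodes
sbin/start-yarn.sh    # YARN daemons: ResourceManager, NodeManagers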

(4) Run jps to check whether Hadoop has started.

On the master node it shows:

[root@master hadoop-2.7.2]# jps
6374 NameNode
31478 Jps
6619 SecondaryNameNode
6815 ResourceManager
[root@master hadoop-2.7.2]#

On the slave nodes it shows:

[root@slave1 ~]# jps
27305 Jps
5390 DataNode
[root@slave1 ~]#

This means the cluster is up and running.
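
As a further check, HDFS can report which DataNodes have registered; the web UIs tell the same story (the NameNode UI on its default port 50070, the ResourceManager UI on the 8088 address configured above):

bin/hdfs dfsadmin -report    # lists the live DataNodes and their capacity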

III. Building Hadoop on a 64-bit OS

1. Build Environment

Operating system: CentOS 7, 64-bit (Internet access required)

Hadoop source version: hadoop-2.7.2-src.tar.gz

2. Build Preparation

(1) Install the JDK as described above.

(2) Install the required packages

Note: these packages are best downloaded from the official sites or the links given in this article. I originally installed them with yum and the build failed, presumably a version problem; in the end I removed them and reinstalled from fresh downloads.

  • Install the base packages:
yum -y install svn ncurses-devel gcc*
yum -y install lzo-devel zlib-devel autoconf automake libtool cmake openssl-devel
  • Install protobuf-2.5.0.tar.gz

Download link: http://pan.baidu.com/s/1c1D6cow

Run the following commands in order to install and verify:

tar zxvf protobuf-2.5.0.tar.gz  
cd protobuf-2.5.0  
./configure  
make  
make install  
protoc --version
  • Install Maven

Download link: http://maven.apache.org/download.cgi

tar zxvf apache-maven-3.2.3-bin.tar.gz

Configure the environment variables by adding the following to /etc/profile:

export MAVEN_HOME=/usr/local/program/maven/apache-maven-3.2.3  
export PATH=$PATH:$MAVEN_HOME/bin

Run source /etc/profile to make the variables take effect, then check the install with mvn -version.

  • Install Ant

Download link: http://pan.baidu.com/s/1byGZUm

Extract it and add the environment variables:

export ANT_HOME=/home/joywang/apache-ant-1.9.4  
export PATH=$PATH:$ANT_HOME/bin

Run source /etc/profile to make the variables take effect, then check the install with ant -version.

(3) Build Hadoop

Extract the Hadoop source package:

tar zxvf hadoop-2.7.2-src.tar.gz

Change into the hadoop-2.7.2-src directory and run:

mvn clean package -Pdist,native -DskipTests -Dtar

PS: If you need hadoop-snappy, install snappy at this point and add the extra parameters to the build command.

  • Install snappy

Download link: http://pan.baidu.com/s/1i49YcgH

yum install svn
yum install autoconf automake libtool cmake
yum install ncurses-devel
yum install openssl-devel
yum install gcc*

Build and install snappy:

tar -zxvf snappy-1.1.3.tar.gz
cd snappy-1.1.3/
./configure
make
make install

Build with snappy support:

mvn clean package -Pdist,native -DskipTests -Dtar -Dsnappy.lib=/usr/local/lib -Dbundle.snappy

The path in -Dsnappy.lib=/usr/local/lib is snappy's default install location; if you changed it, supply the actual install path.

Then you wait. And wait...

After a successful build, the compiled Hadoop is at:

hadoop-2.7.2-src/hadoop-dist/target/hadoop-2.7.2.tar.gz

The compiled 64-bit native libraries are under hadoop-2.7.2/lib/native:

[root@master native]# pwd
/softall/hadoop-2.7.2/lib/native

[root@master native]# ll
total 5720
-rw-r--r-- 1 root root 1439746 Jun  3 16:23 libhadoop.a
-rw-r--r-- 1 root root 1606968 Jun  3 16:23 libhadooppipes.a
lrwxrwxrwx 1 root root      18 Jun  3 16:23 libhadoop.so -> libhadoop.so.1.0.0
-rwxr-xr-x 1 root root  829581 Jun  3 16:23 libhadoop.so.1.0.0
-rw-r--r-- 1 root root  475090 Jun  3 16:23 libhadooputils.a
-rw-r--r-- 1 root root  433884 Jun  3 16:23 libhdfs.a
lrwxrwxrwx 1 root root      16 Jun  3 16:23 libhdfs.so -> libhdfs.so.0.0.0
-rwxr-xr-x 1 root root  272298 Jun  3 16:23 libhdfs.so.0.0.0
-rw-r--r-- 1 root root  522304 Jun  3 16:23 libsnappy.a
-rwxr-xr-x 1 root root     955 Jun  3 16:23 libsnappy.la
lrwxrwxrwx 1 root root      18 Jun  3 16:23 libsnappy.so -> libsnappy.so.1.3.0
lrwxrwxrwx 1 root root      18 Jun  3 16:23 libsnappy.so.1 -> libsnappy.so.1.3.0
-rwxr-xr-x 1 root root  258613 Jun  3 16:23 libsnappy.so.1.3.0
[root@master native]#

(4) Install Hadoop

Edit /etc/profile and add the following:

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
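
Both lines reference $HADOOP_HOME, so it must be exported first (see problem 1 (3) in Part IV). A sketch of the complete block, assuming the install path used above; the PATH line is my addition so that the hadoop command resolves without a full path:

export HADOOP_HOME=/softall/hadoop-2.7.2
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin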

Then follow the Hadoop installation steps described above.

(5) Verify

Run the following command:

hadoop checknative -a

If you see output like the following, the installation succeeded:

[root@master target]# hadoop checknative -a
16/06/07 16:23:47 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
16/06/07 16:23:47 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /home/softall/hadoop-2.7.2/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /home/softall/hadoop-2.7.2/lib/native/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so
[root@master target]#

IV. Possible Problems

1. Running a hadoop command produces:

[root@master target]# hadoop checknative -a
16/06/07 16:19:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

This means the compiled 64-bit native library was not found.

Possible causes:

(1) HADOOP_COMMON_LIB_NATIVE_DIR and HADOOP_OPTS do not match the actual install path; correct the paths.

(2) The Hadoop build failed; track down the problem and rebuild.

(3) The environment variables have not taken effect; run source /etc/profile and execute again. In my case this error came from getting the order of the environment variables wrong in my profile script: I had placed export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native before export HADOOP_HOME=/softall/hadoop-2.7.2. Oops.

 

2. The DataNodes fail to start

The most common cause is formatting the NameNode more than once, which leaves the NameNode's clusterID out of sync with the clusterID recorded on the DataNodes.

The fix is simple: delete the contents of the tmp, logs, dfs/data, and dfs/name folders on every node, then re-format the NameNode.

See http://www.aboutyun.com/thread-7930-1-1.html for details.
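
A sketch of the cleanup, assuming the directory layout from Part II (this destroys all data stored in HDFS):

# on every node: clear the data directories
cd /softall/hadoop-2.7.2
rm -rf tmp/* logs/* dfs/data/* dfs/name/*

# then, on the master only, re-format
bin/hadoop namenode -format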

Finally, many thanks to pig2 on aboutyun and the authors listed below for their help; if anything here infringes, let me know and I will take it down.

References:

Hadoop installation and deployment: http://www.open-open.com/lib/view/open1435761287778.html

Building Hadoop: http://blog.csdn.net/Joy58061678/article/details/45746847

Installing snappy support: http://blog.csdn.net/wzy0623/article/details/51263041
