How to Compile the Apache Hadoop 2.6.0 Source Code
I am using CentOS 6.5. The download address is http://mirror.neu.edu.cn/centos/6.5/isos/x86_64/; choose CentOS-6.5-x86_64-bin-DVD1.iso. Note that this is the 64-bit edition and about 4 GB in size, so the download takes a while. In fact any 6.x release will do; it does not have to be 6.5.
I am using a VMware virtual machine with 2 GB of RAM and 20 GB of disk space. Too little memory makes the build slow; too little disk may cause it to run out of space during compilation. These are not minimum requirements, so adjust them to your own machine. Also, make sure the Linux guest stays connected to the network.
The sections below install the required software. I copied all the downloaded archives into /usr/local, and the commands that follow are executed from that directory. Please pay close attention to the paths as you read.
Hadoop is written in Java, so a JDK must be installed before compiling it.
Download the JDK from the Oracle website at http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html, choosing jdk-7u45-linux-x64.tar.gz.
Unpack the JDK with the following command:
tar -zxvf jdk-7u45-linux-x64.tar.gz
This creates a folder named jdk1.7.0_45. Next, set the environment variables.
Run vi /etc/profile and add the JDK environment variables to the configuration file.
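The original screenshot did not survive; a minimal sketch of the lines to append, assuming the JDK was unpacked into /usr/local as above:

```shell
# Append to /etc/profile, then reload it with: source /etc/profile
export JAVA_HOME=/usr/local/jdk1.7.0_45
export PATH=$JAVA_HOME/bin:$PATH
```

After sourcing the file, `java -version` should report 1.7.0_45.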
The Hadoop source tree is organized and managed with Maven, so Maven must be downloaded as well. Get it from the Maven website at http://maven.apache.org/download.cgi, choosing apache-maven-3.3.3-bin.tar.gz.
Unpack Maven with the following command:
tar -zxvf apache-maven-3.3.3-bin.tar.gz
This creates a folder named apache-maven-3.3.3. Next, set the environment variables.
Run vi /etc/profile again and add the Maven environment variables.
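The edited lines were shown as a screenshot in the original; a sketch, assuming Maven was unpacked into /usr/local:

```shell
# Append to /etc/profile, then reload it with: source /etc/profile
export MAVEN_HOME=/usr/local/apache-maven-3.3.3
export PATH=$MAVEN_HOME/bin:$PATH
```

Check the result with `mvn -version`.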
Hadoop uses Protocol Buffers for communication, so download protoc from the project site at https://code.google.com/p/protobuf/downloads/list, choosing protobuf-2.5.0.tar.gz.
Compiling and installing protoc requires a few build tools; install them by running the following commands in order:
yum -y install gcc
yum -y install gcc-c++
yum -y install make
If the operating system is CentOS 6.5, gcc and make are already installed; other versions may differ.
Then unpack protobuf with the following command:
tar -zxvf protobuf-2.5.0.tar.gz
This creates a folder named protobuf-2.5.0. Build protobuf with the following commands:
cd protobuf-2.5.0
./configure --prefix=/usr/local/protoc/
make && make install
As long as no errors occur, you are done.
When this finishes, the compiled files are under /usr/local/protoc/. Now set the environment variable.
Run vi /etc/profile again and add the protoc directory to the PATH.
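Again the edited file was a screenshot; a sketch of the line to append, given the --prefix used above:

```shell
# Append to /etc/profile, then reload it with: source /etc/profile
export PATH=/usr/local/protoc/bin:$PATH
```

Verify with `protoc --version`, which should print libprotoc 2.5.0.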
Next, install the remaining build dependencies by running the following commands in order:
yum -y install cmake
yum -y install openssl-devel
yum -y install ncurses-devel
Once these finish installing, the prerequisites are in place.
Download the 2.6 stable release source from the Hadoop website at http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.6.0/hadoop-2.6.0-src.tar.gz.
Unpack the Hadoop source with the following command:
tar -zxvf hadoop-2.6.0-src.tar.gz
This creates a folder named hadoop-2.6.0-src.
Good. Now change into the directory /usr/local/hadoop-2.6.0-src and run the build:
cd /usr/local/hadoop-2.6.0-src
mvn package -DskipTests -Pdist,native
This command downloads the dependency jars from the network and compiles the Hadoop source. It takes a long time, so this is a good moment to go eat.
After quite a long wait, you should see output like the following:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................. SUCCESS [ 4.414 s]
[INFO] Apache Hadoop Project POM .......................... SUCCESS [ 3.132 s]
[INFO] Apache Hadoop Annotations .......................... SUCCESS [ 5.377 s]
[INFO] Apache Hadoop Assemblies ........................... SUCCESS [ 0.623 s]
[INFO] Apache Hadoop Project Dist POM ..................... SUCCESS [ 3.624 s]
[INFO] Apache Hadoop Maven Plugins ........................ SUCCESS [ 7.253 s]
[INFO] Apache Hadoop MiniKDC .............................. SUCCESS [ 5.040 s]
[INFO] Apache Hadoop Auth ................................. SUCCESS [ 9.449 s]
[INFO] Apache Hadoop Auth Examples ........................ SUCCESS [ 5.894 s]
[INFO] Apache Hadoop Common ............................... SUCCESS [02:35 min]
[INFO] Apache Hadoop NFS .................................. SUCCESS [ 9.395 s]
[INFO] Apache Hadoop KMS .................................. SUCCESS [ 12.661 s]
[INFO] Apache Hadoop Common Project ....................... SUCCESS [ 0.064 s]
[INFO] Apache Hadoop HDFS ................................. SUCCESS [02:58 min]
[INFO] Apache Hadoop HttpFS ............................... SUCCESS [ 20.099 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal .............. SUCCESS [ 8.216 s]
[INFO] Apache Hadoop HDFS-NFS ............................. SUCCESS [ 5.086 s]
[INFO] Apache Hadoop HDFS Project ......................... SUCCESS [ 0.061 s]
[INFO] hadoop-yarn ........................................ SUCCESS [ 0.091 s]
[INFO] hadoop-yarn-api .................................... SUCCESS [01:45 min]
[INFO] hadoop-yarn-common ................................. SUCCESS [ 38.766 s]
[INFO] hadoop-yarn-server ................................. SUCCESS [ 0.131 s]
[INFO] hadoop-yarn-server-common .......................... SUCCESS [ 14.831 s]
[INFO] hadoop-yarn-server-nodemanager ..................... SUCCESS [ 25.612 s]
[INFO] hadoop-yarn-server-web-proxy ....................... SUCCESS [ 6.043 s]
[INFO] hadoop-yarn-server-applicationhistoryservice ....... SUCCESS [ 8.443 s]
[INFO] hadoop-yarn-server-resourcemanager ................. SUCCESS [ 29.911 s]
[INFO] hadoop-yarn-server-tests ........................... SUCCESS [ 8.606 s]
[INFO] hadoop-yarn-client ................................. SUCCESS [ 10.038 s]
[INFO] hadoop-yarn-applications ........................... SUCCESS [ 0.118 s]
[INFO] hadoop-yarn-applications-distributedshell .......... SUCCESS [ 3.389 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ..... SUCCESS [ 2.003 s]
[INFO] hadoop-yarn-site ................................... SUCCESS [ 0.056 s]
[INFO] hadoop-yarn-registry ............................... SUCCESS [ 6.715 s]
[INFO] hadoop-yarn-project ................................ SUCCESS [ 3.798 s]
[INFO] hadoop-mapreduce-client ............................ SUCCESS [ 0.218 s]
[INFO] hadoop-mapreduce-client-core ....................... SUCCESS [ 40.412 s]
[INFO] hadoop-mapreduce-client-common ..................... SUCCESS [ 24.370 s]
[INFO] hadoop-mapreduce-client-shuffle .................... SUCCESS [ 10.642 s]
[INFO] hadoop-mapreduce-client-app ........................ SUCCESS [ 12.325 s]
[INFO] hadoop-mapreduce-client-hs ......................... SUCCESS [ 13.119 s]
[INFO] hadoop-mapreduce-client-jobclient .................. SUCCESS [ 6.762 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................. SUCCESS [ 1.958 s]
[INFO] Apache Hadoop MapReduce Examples ................... SUCCESS [ 8.129 s]
[INFO] hadoop-mapreduce ................................... SUCCESS [ 3.937 s]
[INFO] Apache Hadoop MapReduce Streaming .................. SUCCESS [ 5.881 s]
[INFO] Apache Hadoop Distributed Copy ..................... SUCCESS [ 10.755 s]
[INFO] Apache Hadoop Archives ............................. SUCCESS [ 2.511 s]
[INFO] Apache Hadoop Rumen ................................ SUCCESS [ 8.135 s]
[INFO] Apache Hadoop Gridmix .............................. SUCCESS [ 5.524 s]
[INFO] Apache Hadoop Data Join ............................ SUCCESS [ 3.702 s]
[INFO] Apache Hadoop Ant Tasks ............................ SUCCESS [ 2.582 s]
[INFO] Apache Hadoop Extras ............................... SUCCESS [ 3.400 s]
[INFO] Apache Hadoop Pipes ................................ SUCCESS [ 7.537 s]
[INFO] Apache Hadoop OpenStack support .................... SUCCESS [ 7.347 s]
[INFO] Apache Hadoop Amazon Web Services support .......... SUCCESS [ 8.864 s]
[INFO] Apache Hadoop Client ............................... SUCCESS [ 5.480 s]
[INFO] Apache Hadoop Mini-Cluster ......................... SUCCESS [ 0.084 s]
[INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 5.272 s]
[INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 6.860 s]
[INFO] Apache Hadoop Tools ................................ SUCCESS [ 0.026 s]
[INFO] Apache Hadoop Distribution ......................... SUCCESS [ 31.834 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 15:44 min
[INFO] Finished at: 2015-07-13T00:23:42-07:00
[INFO] Final Memory: 101M/326M
[INFO] ------------------------------------------------------------------------
[root@crxy96 hadoop-2.6.0-src]#
That's it; the build is done.
The compiled distribution is under /usr/local/hadoop-2.6.0-src/hadoop-dist/target.
Here is the collection of material I put together; every package used during the build is included.
If you are not familiar with Linux, be sure to use the recommended 64-bit CentOS 6.5, because every operation described in this article targets that version of the operating system.
I have already downloaded and packaged all the jar dependencies needed during the build, so you can replace yours with mine. The default location of the local Maven repository is ~/.m2/repository; just unpack my repository archive over your own.
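As an alternative to overwriting ~/.m2/repository in place, Maven can be pointed at an unpacked repository kept elsewhere via the localRepository element of ~/.m2/settings.xml (the path below is illustrative):

```xml
<settings>
  <!-- Use a pre-populated repository unpacked at this location -->
  <localRepository>/usr/local/repository</localRepository>
</settings>
```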
Important: make sure the virtual machine's network connection stays up.
1. From the folder "compiled 64-bit build of hadoop 2.6.0", unpack hadoop-dist-2.6.0-binary-64.tar.gz and place the resulting hadoop-2.6.0 under /usr/local. Once in place, the full path is /usr/local/hadoop-2.6.0.
If you compiled from source yourself, this path refers to the hadoop-2.6.0 directory produced under /usr/local/hadoop-2.6.0-src/hadoop-dist/target.
2. The folder "hadoop 2.6.0 pseudo-distributed configuration files" contains a pseudo-distributed setup. Copy everything in that directory into /usr/local/hadoop-2.6.0/etc/hadoop, overwriting the existing files.
3. In core-site.xml, change the value hdfs://crxy213.crxy:9000 to your own IP address or hostname.
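For reference, the value in question is the fs.defaultFS property of core-site.xml; a sketch of the fragment, with the hostname taken from the original configuration (replace it with your own):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://crxy213.crxy:9000</value>
  </property>
</configuration>
```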
4. Format HDFS by running the command /usr/local/hadoop-2.6.0/bin/hdfs namenode -format (note that the hdfs script lives in bin, not sbin).
5. Start the daemons with the scripts /usr/local/hadoop-2.6.0/sbin/start-dfs.sh and /usr/local/hadoop-2.6.0/sbin/start-yarn.sh, then check with jps that the expected processes are running.