The combination of Spark and Hadoop is the trend for big data going forward. Here is a brief walkthrough of installing Hadoop.
Download the Hadoop package:
http://mirrors.hust.edu.cn/apache/hadoop/core/hadoop-2.8.0/
Extract it: tar -zxvf hadoop-2.8.0.tar.gz
Set the environment variables:
export HADOOP_HOME=/home/qpx/tool/hadoop-2.8.0
export PATH=.:$SPARK_HOME/bin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH
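As a quick sanity check that the variables took effect, you can echo them and (once Hadoop is unpacked there) ask for the version. The install path below is the one assumed in this post; adjust it to your own location:

```shell
# Assumed install path from this post; change to your own.
export HADOOP_HOME=/home/qpx/tool/hadoop-2.8.0
export PATH=$HADOOP_HOME/bin:$PATH

# Confirm the variable is set:
echo "$HADOOP_HOME"

# After the tarball is extracted to that path, this prints the version:
# hadoop version
```

If `hadoop version` reports "command not found", the PATH line above did not take effect in your current shell (e.g. you forgot to re-source your profile).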
Edit hadoop-env.sh:
export JAVA_HOME=${JAVA_HOME}
(If startup later complains that JAVA_HOME is not set, replace ${JAVA_HOME} here with the absolute path to your JDK.)
Edit core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop:9000</value>
    <description>change to your own hostname</description>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
Edit hdfs-site.xml (the dfs.* properties belong here, not in mapred-site.xml):
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
Edit mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hadoop:9001</value>
    <description>change to your own hostname</description>
  </property>
</configuration>
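A typo in any of these XML files is a common cause of silent startup failures, so it is worth confirming each edited file is still well-formed before starting Hadoop. A minimal sketch, assuming python3 is on the PATH; the heredoc below stands in for one of the real files under $HADOOP_HOME/etc/hadoop/:

```shell
# Write a sample fragment to a temp file; in practice point the parser
# at core-site.xml / hdfs-site.xml / mapred-site.xml instead.
cat > /tmp/site-check.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

# xml.dom.minidom raises an error on malformed XML,
# so reaching the echo means the file parsed cleanly.
python3 -c "import xml.dom.minidom; xml.dom.minidom.parse('/tmp/site-check.xml')" \
  && echo "well-formed"
```

Run the same parse against each of the three config files you edited; any unclosed tag will surface immediately instead of at daemon startup.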
Before starting Hadoop for the first time, format HDFS. Command: hadoop namenode -format (in 2.x this form is deprecated in favor of hdfs namenode -format, but both work).
Start Hadoop and verify. Start command: start-all.sh; verify with jps, which should list the NameNode, DataNode, and related daemons.
Hope this helps.
Next up: an introduction to Spark.