The official site actually has a fairly thorough walkthrough; if your English is good you can just follow it there (the link is omitted here).
The docs say JDK 1.6 is enough, but I hit an exception with OpenJDK 1.6; I never tried the Oracle JDK 1.6 and went straight to JDK 1.7.
Configure the environment variables:
vi /etc/profile
export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
After adding these, run the following so the configuration takes effect:
source /etc/profile
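To double-check that the JDK is picked up correctly (a quick sanity check, assuming the paths above):

$ java -version      # should report 1.7.0_79
$ echo $JAVA_HOME    # should print /usr/local/jdk1.7.0_79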
Hadoop also needs ssh and rsync:

$ sudo apt-get install ssh
$ sudo apt-get install rsync
To check whether the native Hadoop library is 32-bit or 64-bit:
cd hadoop-2.7.0/lib/native
file libhadoop.so.1.0.0
hadoop-2.7.0/lib/native/libhadoop.so.1.0.0: ELF 64-bit LSB shared object, AMD x86-64, version 1 (SYSV), not stripped
Point Hadoop at the Java installation in its config file etc/hadoop/hadoop-env.sh:
export JAVA_HOME=/usr/local/jdk1.7.0_79
System-wide environment variables (again in /etc/profile):
export HADOOP_HOME=/usr/local/hadoop-2.7.0
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Without the last two lines you will see:

You have loaded library /usr/hadoop/hadoop-2.7.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
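If you would rather fix the library itself, as the warning suggests, something like this should work (treat it as a sketch: the execstack tool has to be installed first, and its package name varies by distro and release):

$ sudo apt-get install execstack    # on older releases it may ship inside the prelink package
$ sudo execstack -c /usr/local/hadoop-2.7.0/lib/native/libhadoop.so.1.0.0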
After adding them, run the command again to make the configuration take effect:
source /etc/profile
Then check that it worked:
hadoop version
Now configure pseudo-distributed mode. etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
Set up passphraseless ssh so the Hadoop scripts can log in to localhost:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ export HADOOP_PREFIX=/usr/local/hadoop-2.7.0
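You can confirm the keys work before going further; this should drop you into a shell without asking for a password:

$ ssh localhost
$ exit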
Format the filesystem and start HDFS:

$ bin/hdfs namenode -format
$ sbin/start-dfs.sh
Open http://localhost:50070/ in a browser to check that it came up.
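On a headless machine you can check from the shell instead; jps ships with the JDK and should list the HDFS daemons:

$ jps
# expect NameNode, DataNode and SecondaryNameNode among the output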
Create the HDFS user directories; <username> should match the user you are logged in as, otherwise you may run into permission problems:
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
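A quick way to make sure the directory is actually usable (the input path here is just an example):

$ bin/hdfs dfs -put etc/hadoop input    # copies the config dir into /user/<username>/input
$ bin/hdfs dfs -ls input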
etc/hadoop/mapred-site.xml (if the file does not exist, copy it from mapred-site.xml.template):
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
Start YARN:
$ sbin/start-yarn.sh
Open http://localhost:8088/ to check that it is up.
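To confirm YARN actually schedules jobs, the bundled examples jar is handy (the jar path assumes version 2.7.0):

$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar pi 2 5
# estimates pi with 2 map tasks of 5 samples each; any non-error result means YARN is working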
That completes the single-node pseudo-distributed Hadoop setup.
Installing Spark is much simpler by comparison.
Since I already had Hadoop installed, I chose the second download option.
cd conf
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf
vi spark-env.sh
Append at the end:
export HADOOP_HOME=/usr/local/hadoop-2.7.0
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
The last line only works if the hadoop command is already on the PATH, i.e. the environment variables added earlier.
The official instructions do not include the first two lines; without them, running the examples kept failing because the HDFS jars could not be found.
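Before relying on that last line, you can see what it expands to:

$ hadoop classpath
# prints the classpath Spark will pick up; if this errors out, fix the Hadoop environment variables first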
Run the bundled example to test the install:

./bin/run-example SparkPi 10
If it finishes successfully, the setup is complete.
To run Python or Scala programs, see the official documentation.
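For a quick interactive check, the Spark shell works too (a sketch: README.md ships in the Spark directory, and sc is the SparkContext the shell creates for you):

$ ./bin/spark-shell
scala> sc.textFile("README.md").count()
# returns the number of lines in the file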