I wrote five posts in total. The material was collected online and I then built the cluster myself; this series writes up the process and the commands for reference.
Links
1. Hadoop installation: http://www.javashuo.com/article/p-swxkwhye-gs.html
2. ZooKeeper installation: http://www.javashuo.com/article/p-yuphtrbr-hc.html
3. HBase installation: http://www.javashuo.com/article/p-xqxhmddb-hh.html
4. Spark installation: https://my.oschina.net/u/988386/blog/802073
5. Remote Eclipse debugging from Windows: http://www.javashuo.com/article/p-wkgpymrw-hr.html
No. | Hostname | IP |
1 | d155 | 192.168.158.155 |
2 | d156 | 192.168.158.156 |
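The scripts and config files below refer to the machines by hostname, so both machines need these names resolvable. A minimal /etc/hosts fragment matching the table above (a sketch; adjust if your IPs differ):

```
192.168.158.155 d155
192.168.158.156 d156
```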
#!/bin/bash
# create the hadoop user and add it to the sudo group (run on both machines)
sudo useradd -m hadoop -s /bin/bash -p mJ6D7vaH7GsrM
sudo adduser hadoop sudo
sudo apt-get update
#!/bin/bash
# set up passwordless SSH for the hadoop user (run on d155)
su hadoop <<EOF
if [ ! -f ~/.ssh/id_rsa ]
then
    echo "no id_rsa file, generating one with ssh-keygen:"
    ssh -o StrictHostKeyChecking=no localhost
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
else
    echo "id_rsa file already exists, sending it to the remote servers"
fi
# copy the generated key to every machine we need to log into remotely
ssh-copy-id -i hadoop@d155
ssh-copy-id -i hadoop@d156
exit;
EOF
Once this is done you can ssh from d155 directly to both d155 and d156 (run the ssh commands as the hadoop user).
Install Hadoop and configure the environment variables (the same steps on both machines; the script is below).
Run the script with sudo -E ./xxxx.sh; note the -E flag, which preserves the caller's environment variables.
Then run source /etc/profile to make the new configuration take effect.
#!/bin/bash
PATH_FILE="/etc/profile"
# full path of the tarball
HADOOP_TAR="/home/hdp/Downloads/hadoop-2.7.3.tar.gz"
HADOOP_INSTALL_HOME="/usr/local"
# remove any previous hadoop install
if [ -d $HADOOP_INSTALL_HOME/hadoop ]
then
    sudo rm -rf $HADOOP_INSTALL_HOME/hadoop
fi
# unpack hadoop
sudo tar -zxvf $HADOOP_TAR -C $HADOOP_INSTALL_HOME
# rename the directory
sudo mv $HADOOP_INSTALL_HOME/hadoop-2.7.3 $HADOOP_INSTALL_HOME/hadoop
# change the owner to hadoop
sudo chown -R hadoop $HADOOP_INSTALL_HOME/hadoop
# set the environment variables
if [ -z $HADOOP_HOME ]
then
    sudo echo "export HADOOP_HOME=\"$HADOOP_INSTALL_HOME/hadoop\"" >> $PATH_FILE
    sudo echo "export PATH=\"\${HADOOP_HOME}/bin:\$PATH\"" >> $PATH_FILE
    # refresh the environment variables
    source /etc/profile
fi
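The script above appends the export lines to /etc/profile every time the `$HADOOP_HOME` guard misses, so re-running it can leave duplicate entries. A guard like the following avoids that; `append_once` is a hypothetical helper, not part of the original script:

```shell
#!/bin/bash
# append a line to a file only if that exact line is not already present
# (-x: match the whole line, -F: treat the pattern as a fixed string)
append_once() {
    local line="$1" file="$2"
    grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}
```

Usage in the install script would be, e.g., `append_once "export HADOOP_HOME=\"$HADOOP_INSTALL_HOME/hadoop\"" $PATH_FILE`.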
Add the JDK environment variable (typically in etc/hadoop/hadoop-env.sh):
export JAVA_HOME=/usr/lib/jvm/java    # note: adjust the path to your JDK
core-site.xml:
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://d155:9000</value>
    </property>
</configuration>
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>d155:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>d155:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>d155:19888</value>
    </property>
</configuration>
yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>d155:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>d155:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>d155:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>d155:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>d155:8088</value>
    </property>
</configuration>
Set the JDK environment variable here as well:
export JAVA_HOME=/usr/lib/jvm/java    # note: adjust the path to your JDK
Format the NameNode (note that the hdfs command lives in bin, not sbin):
$ /usr/local/hadoop/bin/hdfs namenode -format
Start/stop commands:
/usr/local/hadoop/sbin/start-all.sh
/usr/local/hadoop/sbin/stop-all.sh
Check that the installation succeeded:
hadoop@d155$ jps
If d155 shows ResourceManager, SecondaryNameNode, NameNode, etc., the startup succeeded, for example:
2212 ResourceManager
2484 Jps
1917 NameNode
2078 SecondaryNameNode

hadoop@d156$ jps
If d156 shows DataNode, NodeManager, etc., the startup succeeded, for example:
17153 DataNode
17334 Jps
17241 NodeManager
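Eyeballing the jps listing works, but the check can also be scripted. A minimal sketch (`check_daemons` is a hypothetical helper, not part of Hadoop) that reads jps output on stdin and reports any expected daemon that is missing:

```shell
#!/bin/bash
# read jps output from stdin and check that every daemon named in the
# arguments appears somewhere in it; report each missing one
check_daemons() {
    local missing=0 output
    output=$(cat)
    for daemon in "$@"; do
        if ! grep -q "$daemon" <<<"$output"; then
            echo "MISSING: $daemon"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "all daemons running"
    fi
    return 0
}
```

On d155 this would be invoked as `jps | check_daemons NameNode SecondaryNameNode ResourceManager`, and on d156 as `jps | check_daemons DataNode NodeManager`.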