Installing the Java environment, configuring passwordless SSH login, installing Hadoop, and disabling the firewall

1. Download the matching JDK version from this site:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
I downloaded jdk-8u181-linux-x64.tar.gz:
wget -c http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz
Then unpack it:
tar -zxvf jdk-8u181-linux-x64.tar.gz
Edit the configuration file:
vim ~/.bashrc and append at the end:

export JAVA_HOME=/usr/local/src/jdk1.8.0_181
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar 
export PATH=$PATH:$JAVA_HOME/bin
Alternatively, add the same lines to /etc/profile.
Then run source ~/.bashrc (or source /etc/profile).
Verify with:
java -version
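
What the two export lines actually do is extend PATH with the JDK's bin directory, which is why the bare java command then resolves. A minimal self-contained sketch of that mechanism, using the JDK path from this article:

```shell
# Sketch of the PATH mechanics behind the ~/.bashrc exports:
# append the JDK bin directory, then confirm it is really on PATH.
JAVA_HOME=/usr/local/src/jdk1.8.0_181
PATH=$PATH:$JAVA_HOME/bin
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "JDK bin is on PATH" ;;  # prints this branch
  *)                    echo "JDK bin is missing" ;;
esac
```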

auto_jdk.sh (the steps above as a script):

#!/bin/bash
# Unpack whichever JDK tarball sits in /usr/local/src and wire it into ~/.bashrc
cd /usr/local/src/
a=$(ls | grep 'jdk.*\.tar\.gz')
tar -xvf "$a"
b=$(ls -d jdk1.*)

# Unquoted heredoc delimiter so $b expands now; \$ keeps the others literal
cat >> ~/.bashrc << eof
export JAVA_HOME=/usr/local/src/$b
export CLASSPATH=.:\$JAVA_HOME/lib/dt.jar:\$JAVA_HOME/lib/tools.jar
export PATH=\$PATH:\$JAVA_HOME/bin
eof
source ~/.bashrc

2. Configure passwordless SSH login

Set the network identity (hostname):

[root@localhost /]# vim /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
HOSTNAME=master

[root@localhost /]# cat >> /etc/hosts << "eof"   (configure this the same way on all three machines)
> 192.168.10.7 master
> 192.168.10.8 slave1
> 192.168.10.9 slave2
> eof

[root@localhost /]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.7 master
192.168.10.8 slave1
192.168.10.9 slave2
[root@localhost /]#

Generate a key pair (in /root/.ssh): ssh-keygen -t rsa -P ''

touch authorized_keys
chmod 600 authorized_keys
cat id_rsa.pub > authorized_keys
In the same way, generate keys on slave1 and slave2 (ssh-keygen -t rsa -P '').
Then copy each id_rsa.pub over to master:
scp id_rsa.pub 192.168.10.7:/root/.ssh/id_rsa.pub1   (run on slave1)
scp id_rsa.pub 192.168.10.7:/root/.ssh/id_rsa.pub2   (run on slave2)
Then, on master, run:
cat id_rsa.pub1 >> authorized_keys
cat id_rsa.pub2 >> authorized_keys
scp authorized_keys root@slave1:/root/.ssh
scp authorized_keys root@slave2:/root/.ssh
Finally, test the logins.
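
The final test can be scripted. Passing -o BatchMode=yes makes ssh fail immediately instead of falling back to a password prompt, so any host that is not yet key-authenticated shows up loudly. The commands are echoed here to keep the sketch side-effect free; drop the echo to actually run them:

```shell
# Passwordless-login smoke test across the three hosts from /etc/hosts above.
# echo keeps this sketch side-effect free; remove it to execute for real.
for h in master slave1 slave2; do
  echo ssh -o BatchMode=yes root@"$h" hostname
done
```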

 

3. Install Hadoop

This site lists the current releases: http://apache.fayea.com/hadoop/common/

wget -c http://apache.fayea.com/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz

Unpack: tar -xvf hadoop-2.6.5.tar.gz

Fix the JAVA_HOME path in hadoop-env.sh and yarn-env.sh by adding this line at the very top of both files: export JAVA_HOME=/usr/local/src/jdk1.8.0_181
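
Prepending that export to both env scripts can be done with GNU sed. The sketch below runs on stub files in a temp directory so it is self-contained; on the real cluster you would run the same loop inside Hadoop's etc/hadoop configuration directory instead:

```shell
# Sketch: prepend the JAVA_HOME export to hadoop-env.sh and yarn-env.sh.
# Demonstrated on stub files so the snippet is runnable anywhere; on the
# cluster, cd into the real Hadoop configuration directory first.
line='export JAVA_HOME=/usr/local/src/jdk1.8.0_181'
workdir=$(mktemp -d)
cd "$workdir"
printf '# original contents\n' > hadoop-env.sh
printf '# original contents\n' > yarn-env.sh
for f in hadoop-env.sh yarn-env.sh; do
  sed -i "1i $line" "$f"   # GNU sed: insert the export before line 1
done
head -1 hadoop-env.sh      # prints the export line
```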

 vim core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.10.7:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/src/hadoop-2.6.5/tmp</value>
</property>
<property>
<name>hadoop.native.lib</name>
<value>true</value>
<description>Should native hadoop libraries, if present, be used</description>
</property>
</configuration>

 

vim hdfs-site.xml

cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.datanode.ipc.address</name>
<value>0.0.0.0:50020</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:50075</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>

 

mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

 

vim yarn-site.xml

[root@localhost hadoop]# cat yarn-site.xml
<?xml version="1.0"?>
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<description>The http address of the RM web application.</description>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
</configuration>

 

vim slaves

slave1
slave2
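
All of the configuration above happens on master, but the configured Hadoop tree also has to reach slave1 and slave2. A sketch of that copy step, using the hostnames and paths from this article; the commands are echoed so the sketch is side-effect free, drop the echo to run them for real:

```shell
# Push the configured Hadoop tree from master to both slaves over scp.
# echo keeps this sketch side-effect free; remove it to execute for real.
HADOOP_DIR=/usr/local/src/hadoop-2.6.5
for h in slave1 slave2; do
  echo scp -r "$HADOOP_DIR" root@"$h":/usr/local/src/
done
```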


4. Disable the firewall on all three machines
systemctl stop firewalld
systemctl disable firewalld.service

iptables -F
systemctl status firewalld


Format the NameNode

/usr/local/src/hadoop-2.6.5/bin/hdfs namenode -format

Start HDFS and YARN

$HADOOP_HOME/sbin/start-dfs.sh
After it starts, run jps to check the Java processes; if you see the following two:
5161 SecondaryNameNode
4989 NameNode
the master node is basically OK.

Then run $HADOOP_HOME/sbin/start-yarn.sh; once it finishes, run jps again:
2361 SecondaryNameNode
7320 ResourceManager
4989 NameNode

Check HDFS and MapReduce through the web UIs

http://master:50090/

http://master:8088/

Check the status

You can also get an HDFS status report with bin/hdfs dfsadmin -report.
