Setting Up an HDFS and HBase Cluster

Part 1: Software Versions

Oracle JDK 7, Hadoop 2.7.3, HBase 1.2.3, CentOS 7

Part 2: Notes

1. The operating system hostname must be configured on every machine; do not use raw IP addresses, otherwise ZooKeeper will report errors

2. Disable the operating system firewall, or whitelist the required ports

3. Clocks must be synchronized across the cluster, otherwise the HBase Master process will report errors (https://my.oschina.net/nk2011/blog/784015)

Part 3: The cluster consists of 4 machines; hostnames and IPs map as follows

192.168.1.1  node1

192.168.1.2  node2

192.168.1.3  node3

192.168.1.4  node4

Part 4: Machine roles are assigned as follows

Hadoop:

node1  namenode

node2  secondarynamenode  datanode

node3  datanode

node4  datanode

HBase:

node1  master  zookeeper

node2  backup  zookeeper regionServer

node3  zookeeper regionServer

Part 5: Setting Up Hadoop HDFS

1. Disable the firewall

systemctl stop firewalld
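
Note that systemctl stop only turns the firewall off until the next reboot; to keep it off permanently, also disable the service:

systemctl disable firewalld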

2. Edit /etc/hostname and set each machine's hostname

3. Edit /etc/hosts and add the IP-to-hostname mappings, as shown below
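
For this cluster, the entries added to /etc/hosts on every machine are:

192.168.1.1  node1
192.168.1.2  node2
192.168.1.3  node3
192.168.1.4  node4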

4. Create the hadoop user and set its password

useradd hadoop
passwd hadoop

5. Install JDK 7 and extract Hadoop 2.7.3 to the /opt directory
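
For example, assuming the standard Apache tarball name:

tar -xzf hadoop-2.7.3.tar.gz -C /opt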

6. Change the owner of the Hadoop and HBase directories to the hadoop user

chown -R hadoop:hadoop /opt/hadoop-2.7.3
chown -R hadoop:hadoop /opt/hbase-1.2.3

7. Create the HDFS data directories

mkdir /opt/hadoop-2.7.3/hdfs
# namenode data directory
mkdir /opt/hadoop-2.7.3/hdfs/name
# datanode data directory
mkdir /opt/hadoop-2.7.3/hdfs/data
# change the owner
chown -R hadoop:hadoop /opt/hadoop-2.7.3/hdfs

8. Set up passwordless SSH login (https://my.oschina.net/nk2011/blog/778623)

# switch to the hadoop user
su hadoop
# generate an RSA key pair for SSH
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# copy the public key to all servers
ssh-copy-id hadoop@node1
ssh-copy-id hadoop@node2
ssh-copy-id hadoop@node3
ssh-copy-id hadoop@node4
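
To confirm that passwordless login works, a quick check:

# should print "node2" without prompting for a password
ssh hadoop@node2 hostname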

9. Configure the Hadoop cluster

1) Open ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh and set JAVA_HOME
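
For example (the JDK path below is an assumption; use your actual install location):

# in hadoop-env.sh
export JAVA_HOME=/opt/jdk1.7.0_80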

2) Open ${HADOOP_HOME}/etc/hadoop/core-site.xml and add the following configuration

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:8020</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>

3) Open ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml and add the following configuration

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop-2.7.3/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop-2.7.3/hdfs/data</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
  <!-- secondary namenode configuration -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node2:50090</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>node1:50070</value>
  </property>
</configuration>

4) Clear ${HADOOP_HOME}/etc/hadoop/slaves and add the following

node2
node3
node4

5) Format the HDFS namenode

${HADOOP_HOME}/bin/hdfs namenode -format hadoop_cluster
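
If formatting succeeds, the output should contain a line similar to:

Storage directory /opt/hadoop-2.7.3/hdfs/name has been successfully formatted.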

10. Repeat steps 1-9 on node1, node2, node3, and node4 (the namenode format only needs to run once, on node1)

11. The Hadoop HDFS setup is complete; start Hadoop on node1

[hadoop@node1]$ hadoop-2.7.3/sbin/start-dfs.sh 
Starting namenodes on [node1]
node1: starting namenode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-namenode-node1.out
node3: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-node3.out
node2: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-node2.out
node4: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-node4.out
Starting secondary namenodes [node2]
node2: starting secondarynamenode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-node2.out
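
A quick way to verify that HDFS is running is jps on each node, plus the namenode web UI at http://node1:50070 (the address configured above):

# node1 should list NameNode; node2 SecondaryNameNode and DataNode;
# node3/node4 DataNode
jps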

Part 6: Setting Up HBase

1. Extract HBase 1.2.3 to the /opt directory

2. Open ${HBASE_HOME}/conf/hbase-env.sh and set JAVA_HOME

3. Open ${HBASE_HOME}/conf/hbase-env.sh and set HBASE_PID_DIR
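
A minimal sketch of these hbase-env.sh settings (the paths are assumptions; this setup also relies on HBase managing its own ZooKeeper, which HBASE_MANAGES_ZK enables by default):

# adjust the paths to your environment
export JAVA_HOME=/opt/jdk1.7.0_80
export HBASE_PID_DIR=/opt/hbase-1.2.3/pids
# let HBase start/stop the ZooKeeper quorum itself (the default)
export HBASE_MANAGES_ZK=true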

4. Open ${HBASE_HOME}/conf/hbase-site.xml and add the following (the host and port in hbase.rootdir must match fs.defaultFS in core-site.xml)

<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node1:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node1,node2,node3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/hbase-1.2.3/zookeeper</value>
  </property>
</configuration>

5. Clear the ${HBASE_HOME}/conf/regionservers file and add the following

node2
node3

6. Create the ${HBASE_HOME}/conf/backup-masters file and add the following

node2

7. Repeat steps 1-6 on node1, node2, and node3

8. The HBase setup is complete; start HBase on node1 (make sure Hadoop is already running)

[hadoop@node1]$ /opt/hbase-1.2.3/bin/start-hbase.sh
node1: starting zookeeper, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-node1.out
node2: starting zookeeper, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-node2.out
node3: starting zookeeper, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-node3.out
starting master, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-master-node1.out
node2: starting regionserver, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-node2.out
node3: starting regionserver, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-node3.out
node2: starting master, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-master-node2.out
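
To verify the cluster, open the HBase shell and run status; the master web UI (http://node1:16010 by default in HBase 1.x) also lists the regionservers:

/opt/hbase-1.2.3/bin/hbase shell
# inside the shell:
status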