Single-node installation of Hadoop 2.7.7 + HBase 2.0.5 + ZooKeeper 3.4.14 + Hive 2.3.5

Environment: Tencent Cloud CentOS 7, JDK 1.8

1. Download Hadoop

http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.7/hadoop-2.7.7.tar.gz

2. Extract

tar -xvf hadoop-2.7.7.tar.gz -C /usr/java

3. Edit hadoop-2.7.7/etc/hadoop/hadoop-env.sh

Add the JDK path:
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8

4. Add the Hadoop environment variables (in /etc/profile)

    HADOOP_HOME=/usr/java/hadoop-2.7.7
    MAVEN_HOME=/usr/java/maven3.6
    RABBITMQ_HOME=/usr/java/rabbitmq_server
    TOMCAT_HOME=/usr/java/tomcat8.5
    JAVA_HOME=/usr/java/jdk1.8
    CLASSPATH=$JAVA_HOME/lib/
    PATH=$PATH:$JAVA_HOME/bin:$TOMCAT_HOME/bin:$RABBITMQ_HOME/sbin:$MAVEN_HOME/bin:$HADOOP_HOME/bin
    export PATH JAVA_HOME CLASSPATH TOMCAT_HOME RABBITMQ_HOME MAVEN_HOME HADOOP_HOME

   Apply the environment variables: source /etc/profile
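
   To confirm the variables took effect (assuming the paths above match your actual installation directories):

    echo $HADOOP_HOME    # should print /usr/java/hadoop-2.7.7
    hadoop version       # should report Hadoop 2.7.7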

5. Edit hadoop-2.7.7/etc/hadoop/core-site.xml (the properties go inside <configuration>)

  <!-- RPC address of the NameNode (HDFS master) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- Storage path for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/java/hadoop-2.7.7/tmp</value>
    </property>

6. Edit hadoop-2.7.7/etc/hadoop/hdfs-site.xml

  <configuration>
        <property>
            <name>dfs.name.dir</name>
            <value>/usr/java/hadoop-2.7.7/hdfs/name</value>
            <description>Where the NameNode stores HDFS namespace metadata</description>
        </property>

        <property>
            <name>dfs.data.dir</name>
            <value>/usr/java/hadoop-2.7.7/hdfs/data</value>
            <description>Physical storage location of data blocks on the DataNode</description>
        </property>
        <!-- HDFS replication factor -->
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>

7. Passwordless SSH login

    ssh-keygen -t rsa
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
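
    To verify passwordless login (a quick check; on some systems sshd also requires restrictive permissions on authorized_keys):

    chmod 600 ~/.ssh/authorized_keys   # tighten permissions in case sshd rejects the key
    ssh localhost                      # should log in without asking for a password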

8. Start and stop HDFS

    ./bin/hdfs namenode -format   # initialize; the NameNode must be formatted before first use
        The line "19/08/13 09:46:05 INFO common.Storage: Storage directory /usr/java/hadoop-2.7.7/hdfs/name has been successfully formatted" indicates that the format succeeded.
        
      ./sbin/start-dfs.sh   # start HDFS
        (base) [root@medecineit hadoop-2.7.7]# ./sbin/start-dfs.sh 
        Starting namenodes on [localhost]
        The authenticity of host 'localhost (127.0.0.1)' can't be established.
        ECDSA key fingerprint is SHA256:SLOXW/SMogWE3wmK/H310vL74h0dsYohaSF31oEsdBw.
        ECDSA key fingerprint is MD5:fe:a4:15:38:15:e7:32:c3:9f:c3:8e:43:c6:80:6b:ac.
        Are you sure you want to continue connecting (yes/no)? yes
        localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
        localhost: starting namenode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-namenode-medecineit.out
        localhost: starting datanode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-datanode-medecineit.out
        Starting secondary namenodes [0.0.0.0]
        The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
        ECDSA key fingerprint is SHA256:SLOXW/SMogWE3wmK/H310vL74h0dsYohaSF31oEsdBw.
        ECDSA key fingerprint is MD5:fe:a4:15:38:15:e7:32:c3:9f:c3:8e:43:c6:80:6b:ac.
        Are you sure you want to continue connecting (yes/no)? yes
        0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
        0.0.0.0: starting secondarynamenode, logging to /usr/java/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-medecineit.out
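
        With HDFS running, a quick smoke test confirms that files can be written (a minimal sketch; the /test directory name is arbitrary):

        ./bin/hdfs dfs -mkdir /test                           # create a directory in HDFS
        ./bin/hdfs dfs -put etc/hadoop/core-site.xml /test    # upload a local file
        ./bin/hdfs dfs -ls /test                              # should list core-site.xml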

      ./sbin/stop-dfs.sh   # stop HDFS

9. Check that the expected daemons are running

  Check with the jps command:
        (base) [root@medecineit hadoop-2.7.7]# jps
                    4416 NameNode
                    4916 Jps
                    4740 SecondaryNameNode
                    4553 DataNode
                    975 Bootstrap

    This shows that the NameNode, SecondaryNameNode, and DataNode started successfully.

10. Check the web UI

http://ip:50070

11. Configure YARN: mapred-site.xml

        Copy the template file: cp mapred-site.xml.template mapred-site.xml
    
        <!-- Tell the MapReduce framework to use YARN -->
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>    

12. Configure yarn-site.xml

    <!-- Reducers fetch map output via mapreduce_shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

13. Start and stop YARN

        ./sbin/start-yarn.sh   # start
            
            (base) [root@medecineit hadoop-2.7.7]# ./sbin/start-yarn.sh 
            starting yarn daemons
            starting resourcemanager, logging to /usr/java/hadoop-2.7.7/logs/yarn-root-resourcemanager-medecineit.out
            localhost: starting nodemanager, logging to /usr/java/hadoop-2.7.7/logs/yarn-root-nodemanager-medecineit.out
        
            (base) [root@medecineit hadoop-2.7.7]# jps
                8469 ResourceManager
                8585 NodeManager
                8812 Jps
                975 Bootstrap
                
        Then start HDFS as well: ./sbin/start-dfs.sh

            (base) [root@medecineit hadoop-2.7.7]# jps
                8469 ResourceManager
                9208 DataNode

                9401 SecondaryNameNode
                9065 NameNode
                8585 NodeManager
                9550 Jps
                975 Bootstrap


      ./sbin/stop-yarn.sh    # stop

14. Check the YARN web UI

http://ip:8088
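
With both HDFS and YARN running, you can submit one of the bundled example jobs to verify the setup end to end (a sketch; the jar path assumes the stock Hadoop 2.7.7 binary distribution):

    ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.7.jar pi 2 10
    # the job should appear at http://ip:8088 and print an estimate of Pi when it finishes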

Single-node Hadoop and YARN configuration is complete!

 

######## ZooKeeper installation ###########

1. Download

https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

2. Extract

tar -xvf zookeeper-3.4.14.tar.gz -C /usr/java/

3. Edit the configuration file

    cp zoo_sample.cfg  zoo.cfg 
    Store the data in ZooKeeper's data directory:
    dataDir=/usr/java/zookeeper-3.4.14/data

4. Start ZooKeeper

    ./bin/zkServer.sh start   # start

    ./bin/zkServer.sh status  # check the status
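
    To confirm the server is reachable, connect with the bundled CLI (a quick check; 2181 is the default clientPort from zoo.cfg):

    ./bin/zkCli.sh -server localhost:2181    # inside the CLI, running: ls /   should return at least [zookeeper]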

ZooKeeper done!

 

####### HBase installation ##########

1. Download

https://www.apache.org/dyn/closer.lua/hbase/2.0.5/hbase-2.0.5-bin.tar.gz

2. Extract

tar -xvf hbase-2.0.5-bin.tar.gz -C /usr/java/

3. Edit hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.8/
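
Since a standalone ZooKeeper was installed above, HBase should also be told not to manage its own ZooKeeper (an assumption based on using the external ZooKeeper; by default HBase starts an embedded one):

export HBASE_MANAGES_ZK=false   # use the external ZooKeeper started earlier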

4. Edit hbase-site.xml

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://medecineit:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>medecineit</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hbase.master.dns.nameserver</name>
    <value>medecineit</value>
    <description>DNS</description>
  </property>
  <property>
    <name>hbase.regionserver.dns.nameserver</name>
    <value>medecineit</value>
    <description>DNS</description>
  </property>
  <property>
    <name>hbase.security.authentication</name>
    <value>simple</value>
  </property>
  <property>
    <name>hbase.security.authorization</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.regionserver.hostname</name>
    <value>medecineit</value>
  </property>
</configuration>

## Note: the DNS, security, and hbase.regionserver.hostname properties above must be included, otherwise remote connections to HBase will fail!

5. Edit regionservers

Change it to the hostname: medecineit

6. Start HBase

 ./bin/start-hbase.sh   # start
        (base) [root@medecineit hbase-2.0.5]# jps
            8469 ResourceManager
            16902 Jps
            16823 HRegionServer
            9208 DataNode
            16152 QuorumPeerMain
            9401 SecondaryNameNode
            9065 NameNode
            16681 HMaster
            8585 NodeManager
            975 Bootstrap
        This shows that HRegionServer and HMaster have started.

7. Web access

http://ip:16010/master-status

8. Start the HBase shell to work with tables

./bin/hbase shell  # start the HBase shell
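
For a quick sanity check inside the shell (a minimal sketch; the table and column family names are arbitrary):

    create 'test', 'cf'                    # create a table with one column family
    put 'test', 'row1', 'cf:a', 'value1'   # write a cell
    scan 'test'                            # should show row1 with column cf:a
    disable 'test'
    drop 'test'                            # clean up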

Done!

 ##### Shutdown order ####

Order for stopping the cluster services:
Stop the Spark cluster:
master>spark/sbin/stop-slaves.sh
master>spark/sbin/stop-master.sh
Stop the HBase cluster:
master>stop-hbase.sh
Stop the YARN cluster:
master>stop-yarn.sh
Stop the Hadoop (HDFS) cluster:
master>stop-dfs.sh
Stop the ZooKeeper cluster:
master>runRemoteCmd.sh "zkServer.sh stop" zookeeper
All cluster services have been stopped!

 

##### Hive installation ######

1. Download the package

https://www-eu.apache.org/dist/hive/hive-2.3.5/apache-hive-2.3.5-bin.tar.gz

2. Extract

tar -xzvf apache-hive-2.3.5-bin.tar.gz

3. Configure hive-env.sh

export HADOOP_HOME=/usr/java/hadoop-2.7.7
export HIVE_CONF_DIR=/usr/java/hive-2.3.5/conf
export HIVE_AUX_JARS_PATH=/usr/java/hive-2.3.5/lib

4. Configure hive-site.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://medecineit:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>yang156122</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
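
Before starting Hive for the first time, the metastore schema usually has to be initialized (a sketch; it assumes the MySQL JDBC driver jar has already been copied into /usr/java/hive-2.3.5/lib and that the MySQL server on medecineit accepts the credentials above):

./bin/schematool -dbType mysql -initSchema   # create the metastore tables in the hive database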

5. Add the log configuration files

cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

cp hive-log4j2.properties.template hive-log4j2.properties

6. Start Hive

./hive --service hiveserver2  # start HiveServer2

./beeline -u jdbc:hive2://localhost:10000  # test: connect over JDBC with the beeline tool

http://ip:10002/  # web UI
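
Once beeline is connected, a few statements confirm that the metastore works (a minimal sketch; the table name is arbitrary):

    0: jdbc:hive2://localhost:10000> show databases;
    0: jdbc:hive2://localhost:10000> create table t1 (id int, name string);
    0: jdbc:hive2://localhost:10000> show tables;    -- should now list t1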

Done!
