CentOS 7.4 pseudo-distributed setup: hadoop + zookeeper + hbase + opentsdb

Preface

Because both Hadoop and HBase register with ZooKeeper, the startup order is ZooKeeper → Hadoop → HBase, and the shutdown order is the reverse.
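A minimal sketch of the start and stop order, assuming the start/stop scripts installed later in this post are on the PATH:

# Start in dependency order
zkServer.sh start
start-all.sh        # equivalent to start-dfs.sh + start-yarn.sh
start-hbase.sh

# Stop in the reverse order
stop-hbase.sh
stop-all.sh
zkServer.sh stop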

I. Preparation

1. Configure the IP address

Open the NIC configuration file in an editor:

vim /etc/sysconfig/network-scripts/ifcfg-ens192

 

Original content:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=f384ed85-2e1e-4087-9f53-81afd746f459
DEVICE=ens192
ONBOOT=no

Modified content:

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=f384ed85-2e1e-4087-9f53-81afd746f459
DEVICE=ens192
ONBOOT=yes
IPADDR=192.168.0.214
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS=183.***.***.100

Restart the network service to apply the changes:

service network restart

Then log in remotely with CRT (SecureCRT) or any SSH client.

2. Change the hostname

# Check the current hostname
hostname
# Change it
hostnamectl set-hostname 'hbase3'

 

3. Map the hostname

vi /etc/hosts

Add the host mapping entry (see the sketch below):
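The original screenshot is not reproduced here; assuming the IP and hostname configured above, the entry would look like:

192.168.0.214 hbase3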

4. Network access

To download packages and install commands with yum, the machine must be able to reach the Internet:

# Check connectivity: "ping: www.baidu.com: Name or service not known" means the machine is offline
ping www.baidu.com

# Configure /etc/resolv.conf
vi /etc/resolv.conf
# Add the following line; the IP is the same as the DNS value from step 1
nameserver 183.***.***.100
# Verify: output such as "PING www.a.shifen.com (39.156.66.18) 56(84) bytes of data." means the network is up
ping www.baidu.com

 

 

5. Install vim, rz, and sz

yum install -y vim
yum install -y lrzsz

6. Set the time zone

Note: the operating system keeps two clocks: the system clock (date) and the hardware clock (hwclock).

# Check the time
date
# Set the time zone
timedatectl set-timezone Asia/Shanghai
# Check the time again
date
hwclock
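If the two clocks disagree after changing the time zone, the system clock can be written back to the hardware clock; an optional one-liner, not part of the original steps:

# Sync the hardware clock to the system clock
hwclock --systohc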

 

 

7. Passwordless SSH login

# Test whether passwordless login already works
ssh localhost
# Enter the SSH directory
cd ~/.ssh/
# Generate a key pair
ssh-keygen -t rsa
# Append the public key to authorized_keys
cat id_rsa.pub >> authorized_keys
# Verify
ssh localhost

 

Verification result:

 

8. Download the installation packages

Note: 1. For the HBase/Hadoop compatibility matrix, see http://hbase.apache.org/book.html#basic.prerequisites

2. I placed the following packages under /opt/soft; follow the links to view and download the latest versions (a directory-layout sketch follows the list):

jdk:       jdk-8u191-linux-x64.tar.gz

hadoop:    hadoop-3.1.2.tar.gz

zookeeper: zookeeper-3.4.13.tar.gz

hbase:     hbase-2.1.4-bin.tar.gz

opentsdb:  opentsdb-2.4.0.tar.gz
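A sketch of the layout assumed throughout this post: each component gets its own subdirectory under /opt/soft (the tarballs themselves still have to be downloaded manually):

# Create one directory per component
mkdir -p /opt/soft/{jdk,hadoop,zookeeper,hbase,opentsdb}
# e.g. place jdk-8u191-linux-x64.tar.gz into /opt/soft/jdk/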

 

 

II. Installation

1. Install the JDK

Note: Hadoop, ZooKeeper, HBase, and OpenTSDB are all written in Java, so the JDK must be installed first.

# Enter the directory containing the package
cd /opt/soft/jdk
# Extract the package
tar -zxvf jdk-8u191-linux-x64.tar.gz
# Configure environment variables
vim /etc/profile
# Append the following to the end of /etc/profile
export JAVA_HOME=/opt/soft/jdk/jdk1.8.0_191
export PATH=$PATH:$JAVA_HOME/bin
# Apply the changes
source /etc/profile
# Verify
java -version

If the output shows the JDK version information (as in the original screenshot), the installation succeeded.

2. Install Hadoop

[1] Extract and install

# Enter the directory containing the package
cd /opt/soft/hadoop
# Extract the package
tar -zxvf hadoop-3.1.2.tar.gz
# Configure environment variables
vim /etc/profile
# In /etc/profile, continue adding the following after the JAVA_HOME lines
export HADOOP_HOME=/opt/soft/hadoop/hadoop-3.1.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"

# Apply the changes
source /etc/profile
# Verify: success if it does not report "-bash: hadoop: command not found"
hadoop fs -ls

 

 

 

[2] Configuration

hadoop-env.sh

# Enter the configuration directory:
cd /opt/soft/hadoop/hadoop-3.1.2/etc/hadoop
# Back up:
cp hadoop-env.sh hadoop-env.sh.bak
# Edit the file:
vim hadoop-env.sh
# Set the Java environment variable; even though JAVA_HOME is defined system-wide, Hadoop needs it set again here
export JAVA_HOME=/opt/soft/jdk/jdk1.8.0_191
# Configure the PID directory
export HADOOP_PID_DIR=/opt/data/hadoop/pids

 

core-site.xml

# Back up:
cp core-site.xml core-site.xml.bak
# Edit the file:
vim core-site.xml
<!-- Add the following inside the <configuration> tag -->

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hbase3:9000</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
         <value>/opt/soft/hadoop/hadoop-3.1.2/data</value>
    </property>

 

hdfs-site.xml

# Back up:
cp hdfs-site.xml hdfs-site.xml.bak
# Edit the file:
vim hdfs-site.xml
<!-- Add the following inside the <configuration> tag -->

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/soft/hadoop/hadoop-3.1.2/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/soft/hadoop/hadoop-3.1.2/hdfs/data,/opt/soft/hadoop/hadoop-3.1.2/hdfs/data_bak</value>
    </property>
     <property>
        <name>dfs.http.address</name>
        <value>hbase3:50070</value>
     </property>
     <property>
      <name>dfs.datanode.max.transfer.threads</name>
      <value>4096</value>
     </property>

mapred-site.xml

# Back up:
cp mapred-site.xml mapred-site.xml.bak
# Edit the file:
vim mapred-site.xml
<!-- Add the following inside the <configuration> tag -->

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>
            /opt/soft/hadoop/hadoop-3.1.2/etc/hadoop,
            /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/common/*,
            /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/common/lib/*,
            /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/hdfs/*,
            /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/hdfs/lib/*,
            /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/mapreduce/*,
            /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/mapreduce/lib/*,
            /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/yarn/*,
            /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/yarn/lib/*
        </value>
    </property>
  <property>
      <name>mapreduce.jobhistory.address</name>
      <value>hbase3:10020</value>
  </property>
  <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>hbase3:19888</value>
  </property>

yarn-site.xml

# Back up:
cp yarn-site.xml yarn-site.xml.bak

# Edit the file:
vim yarn-site.xml

<!-- Add the following inside the <configuration> tag -->

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hbase3</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hbase3:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
         <value>hbase3:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
         <value>hbase3:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hbase3:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hbase3:8088</value>
    </property>  

Note: the following configuration fixes the startup error "there is no HDFS_NAMENODE_USER defined".

start-dfs.sh and stop-dfs.sh

# Enter the directory
cd /opt/soft/hadoop/hadoop-3.1.2/sbin
# Back up:
cp start-dfs.sh start-dfs.sh.bak
cp stop-dfs.sh stop-dfs.sh.bak
# Edit both files:
vim start-dfs.sh
vim stop-dfs.sh
# Add the following to each file (I install and run Hadoop as root)
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

start-yarn.sh and stop-yarn.sh

# Back up:
cp start-yarn.sh start-yarn.sh.bak
cp stop-yarn.sh stop-yarn.sh.bak
# Edit both files:
vim start-yarn.sh
vim stop-yarn.sh
# Add the following to each file
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

[3] Startup

Note: 1. Before re-formatting Hadoop, clear all data directories: the dfs directories, the cache/data directories, the Hadoop and ZooKeeper log files, and zookeeper_server.pid under ZooKeeper's data directory (see the cleanup sketch after this note).

2. If the native libraries cannot be loaded, check whether the native library is a 64-bit build (file libhadoop.so.1.0.0); if it does not match, recompile it in a 64-bit environment or replace it with a 64-bit package.
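A minimal cleanup sketch for point 1, assuming the data and log paths configured in this post; it is only needed before re-formatting and it destroys all existing HDFS and ZooKeeper data:

# WARNING: wipes all HDFS and ZooKeeper data (paths taken from the configs above)
rm -rf /opt/soft/hadoop/hadoop-3.1.2/data/*
rm -rf /opt/soft/hadoop/hadoop-3.1.2/hdfs/name/* /opt/soft/hadoop/hadoop-3.1.2/hdfs/data/* /opt/soft/hadoop/hadoop-3.1.2/hdfs/data_bak/*
rm -rf /opt/soft/hadoop/hadoop-3.1.2/logs/*
rm -rf /opt/soft/zookeeper/zookeeper-3.4.13/logs/*
rm -f  /opt/soft/zookeeper/zookeeper-3.4.13/data/zookeeper_server.pid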

Format and start

# Format HDFS
hdfs namenode -format
# Start (start-all.sh is equivalent to start-dfs.sh + start-yarn.sh)
start-all.sh

## Verification
# 1. Port check
netstat -ano | grep 50070

# 2. Web check: if the checks above pass but the web UI is unreachable, check the firewall etc.
http://192.168.0.214:50070
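If the port is listening but the page does not load, the usual culprit on CentOS 7 is firewalld; a quick sketch (disabling the firewall is only acceptable in a test environment):

# Check the firewall
systemctl status firewalld
# Stop and disable it for this test setup
systemctl stop firewalld
systemctl disable firewalld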

 

 

 

 

3. Install ZooKeeper

[1] Extract and install

# Enter the directory
cd /opt/soft/zookeeper
# Extract
tar -zxvf zookeeper-3.4.13.tar.gz
# Fix ownership
chown -R root:root zookeeper-3.4.13

 

[2] Configuration

Log directory: zkEnv.sh

If ZOO_LOG_DIR is not set in zkEnv.sh, logs are written to whatever directory ZooKeeper happens to be started from, which makes them hard to find later.

vim /opt/soft/zookeeper/zookeeper-3.4.13/bin/zkEnv.sh
# Change ZOO_LOG_DIR="." to
ZOO_LOG_DIR="/opt/soft/zookeeper/zookeeper-3.4.13/logs/"

 

 

 

 

 

 

zoo.cfg

# Enter the directory
cd zookeeper-3.4.13/conf/
# Copy zoo_sample.cfg to zoo.cfg
cp zoo_sample.cfg zoo.cfg
# Edit zoo.cfg
vim zoo.cfg
# Add the following (and comment out dataDir=/tmp/zookeeper)
dataDir=/opt/soft/zookeeper/zookeeper-3.4.13/data
dataLogDir=/opt/soft/zookeeper/zookeeper-3.4.13/logs
server.1=127.0.0.1:2888:3888

 

Environment variables: /etc/profile

# Edit the environment variables
vim /etc/profile
# Continue adding:
export ZOOKEEPER_HOME=/opt/soft/zookeeper/zookeeper-3.4.13/
# and append :$ZOOKEEPER_HOME/bin to the existing PATH line

# Apply the changes
source /etc/profile

 

[3] Startup

# Start
zkServer.sh start

## Verification
# 1. Port check
netstat -ano | grep 2181
# 2. Client check
zkCli.sh -server
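As an extra check, zkServer.sh can report the server state directly; in this single-node setup it should print "Mode: standalone":

# 3. Status check
zkServer.sh status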

 

 

4. Install HBase

[1] Extract and install

Note: for the HBase/Hadoop compatibility matrix, see http://hbase.apache.org/book.html#basic.prerequisites

# Enter the directory
cd /opt/soft/hbase/

# Extract
tar -zxvf hbase-2.1.4-bin.tar.gz

[2] Configuration

zoo.cfg

# Copy zoo.cfg from the ZooKeeper conf directory into HBase's conf directory
cp /opt/soft/zookeeper/zookeeper-3.4.13/conf/zoo.cfg /opt/soft/hbase/hbase-2.1.4/conf/

hbase-env.sh

# Enter the directory
cd /opt/soft/hbase/hbase-2.1.4/conf/

# Back up
cp hbase-env.sh hbase-env.sh.bak
# Edit the file
vim hbase-env.sh
# Add the following (and comment out the existing: export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC")
export JAVA_HOME=/opt/soft/jdk/jdk1.8.0_191
export HBASE_OPTS="$HBASE_OPTS -Xmx8G -Xms8G -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70"
export HBASE_HOME=/opt/soft/hbase/hbase-2.1.4/
export HBASE_CLASSPATH=/opt/soft/hbase/hbase-2.1.4/conf
export HBASE_LOG_DIR=/opt/soft/hbase/hbase-2.1.4/logs
export HADOOP_HOME=/opt/soft/hadoop/hadoop-3.1.2
export HBASE_PID_DIR=/opt/data/hadoop/pids
export HBASE_MANAGES_ZK=false

hbase-site.xml

# Back up:
cp hbase-site.xml hbase-site.xml.bak
# Edit the file:
vim hbase-site.xml
# Add the following inside the <configuration> tag

  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hbase3:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
   <name>hbase.master</name>
   <value>127.0.0.1:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>127.0.0.1</value>
 </property>
 <property>
    <name>hbase.wal.provider</name>
   <value>filesystem</value>
 </property>
 <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
 </property>
 <property>
     <name>hbase.tmp.dir</name>
     <value>/opt/soft/hbase/hbase-2.1.4/tmpdata</value>
  </property>
  <property>
      <name>hfile.block.cache.size</name>
      <value>0.2</value>
  </property>
  <property>
      <name>hbase.snapshot.enabled</name>
      <value>true</value>
  </property>
  <property>
      <name>zookeeper.session.timeout</name>
      <value>180000</value>
  </property>

 

Environment variables: /etc/profile

 

# Edit the environment variables
vim /etc/profile
# Continue adding:
export HBASE_HOME=/opt/soft/hbase/hbase-2.1.4/
# and append :$HBASE_HOME/bin to the existing PATH line
# Apply the changes
source /etc/profile

[3] Startup

start-hbase.sh

 

Error 1: java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder

Startup output:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/soft/hadoop/hadoop-3.1.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/soft/hbase/hbase-2.1.4/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

# Check the log
tailf hbase-root-master-hbase3.log -n 500

# Error details
2019-07-08 06:08:48,407 INFO  [main] ipc.NettyRpcServer: Bind to /192.168.0.214:16000
2019-07-08 06:08:48,554 INFO  [main] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2019-07-08 06:08:48,555 INFO  [main] hfile.CacheConfig: Created cacheConfig: CacheConfig:disabled
2019-07-08 06:08:49,105 ERROR [main] regionserver.HRegionServer: Failed construction RegionServer
java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:644)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:628)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:362)
    at org.apache.hadoop.hbase.util.CommonFSUtils.isValidWALRootDir(CommonFSUtils.java:411)
    at org.apache.hadoop.hbase.util.CommonFSUtils.getWALRootDir(CommonFSUtils.java:387)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:704)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:613)
    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:489)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3093)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3111)
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 25 more
2019-07-08 06:08:49,118 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster.
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3100)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:236)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3111)
Caused by: java.lang.NoClassDefFoundError: org/apache/htrace/SamplerBuilder
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:644)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:628)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:93)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2701)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2683)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:372)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.hbase.util.CommonFSUtils.getRootDir(CommonFSUtils.java:362)
    at org.apache.hadoop.hbase.util.CommonFSUtils.isValidWALRootDir(CommonFSUtils.java:411)
    at org.apache.hadoop.hbase.util.CommonFSUtils.getWALRootDir(CommonFSUtils.java:387)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:704)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:613)
    at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:489)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:3093)
    ... 5 more
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 25 more
## Solution
# Find the jar whose name starts with htrace-core
find / -name 'htrace-core-*'
# Copy that jar into /opt/soft/hbase/hbase-2.1.4/lib/
cp /opt/soft/hadoop/hadoop-3.1.2/share/hadoop/yarn/timelineservice/lib/htrace-core-3.1.0-incubating.jar /opt/soft/hbase/hbase-2.1.4/lib/
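After copying the jar, restart HBase so it picks up the class:

# Restart HBase and watch the master log again
stop-hbase.sh
start-hbase.sh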

 

 

[4] Verification

Open http://192.168.0.214:16010 to reach the HBase web UI (see also the shell check below).
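Besides the web UI, the HBase shell gives a quick health check; a small sketch (the exact numbers depend on your cluster):

# Open the HBase shell and check the cluster status
hbase shell
status
# Expect something along the lines of: 1 active master, 0 backup masters, 1 servers, 0 dead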

 

 

5. Install OpenTSDB

[1] Extract

 

# Enter the directory
cd /opt/soft/opentsdb
# Extract
tar -zxvf opentsdb-2.4.0.tar.gz
# Fix ownership
chown -R root:root opentsdb-2.4.0

[2] Build

# Enter the directory
cd opentsdb-2.4.0
# Build: this creates a build directory, but the first run fails with an error
./build.sh
# Copy the third_party files into the build directory
cp -r third_party build
# Build again
./build.sh

[3] Configuration

opentsdb.conf

# Copy /opt/soft/opentsdb/opentsdb-2.4.0/src/opentsdb.conf into /opt/soft/opentsdb/opentsdb-2.4.0/build
cp /opt/soft/opentsdb/opentsdb-2.4.0/src/opentsdb.conf /opt/soft/opentsdb/opentsdb-2.4.0/build/
# Edit the file
vim opentsdb.conf
# Set the following options

tsd.network.port = 4242
tsd.http.staticroot = ./staticroot
tsd.http.cachedir = /opt/soft/opentsdb/opentsdb-2.4.0/tsdtmp
tsd.core.auto_create_metrics = true
tsd.storage.hbase.zk_quorum = 127.0.0.1:2181
tsd.http.request.enable_chunked = true
tsd.http.request.max_chunk = 1638400

[4] Create the tables

# Enter the directory
cd /opt/soft/opentsdb/opentsdb-2.4.0/src
# Create the tables in HBase
env COMPRESSION=NONE HBASE_HOME=/opt/soft/hbase/hbase-2.1.4 ./create_table.sh

## Verification
# 1. HBase check: open the HBase shell (more shell commands: https://www.cnblogs.com/i80386/p/4105423.html)
hbase shell
# List all tables: OpenTSDB creates 4 tables in HBase (tsdb, tsdb-meta, tsdb-tree, tsdb-uid);
# tsdb is the most important one - for data migration, backing up and restoring this table is enough
list
# 2. ZooKeeper check: open the zkCli.sh client (command guide: https://www.e-learn.cn/content/linux/835320)
zkCli.sh -server
# List the HBase tables
ls /hbase/table
# 3. Hadoop check: the HBase data lives under /hbase/default (other commands: https://blog.csdn.net/m0_38003171/article/details/79086780)
hdfs dfs -ls -R /hbase/default

 

 

 

[5] Startup

# Enter the directory
cd /opt/soft/opentsdb/opentsdb-2.4.0/build/
# Start
sh tsdb tsd &

## Verification
# 1. Port check
netstat -ano | grep 4242
# 2. Process check
ps -ef | grep opentsdb
# 3. Web check: if the checks above pass but the web UI is unreachable, check the firewall etc.
http://192.168.0.214:4242/

[6] Write data

 

# Start the data-writing program: /opt/soft/tsdb/property-0.0.1-SNAPSHOT.jar writes data into OpenTSDB
java -jar /opt/soft/tsdb/property-0.0.1-SNAPSHOT.jar &

Note: if the written metrics (tags) appear under Graph in the web UI, the writes succeeded; if not, check the relevant logs under logs/. To inspect the data in detail, install Grafana as described below. A quick HTTP API test is sketched right after this note.
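For a quick manual test without the jar above, OpenTSDB's HTTP API can also write and read a data point; a minimal sketch (the metric name sys.test.temp and the tag host=hbase3 are made-up examples):

# Write one data point over HTTP (chunked requests were enabled in opentsdb.conf)
curl -X POST http://192.168.0.214:4242/api/put \
  -H 'Content-Type: application/json' \
  -d '{"metric":"sys.test.temp","timestamp":'$(date +%s)',"value":42,"tags":{"host":"hbase3"}}'

# Read it back
curl "http://192.168.0.214:4242/api/query?start=1h-ago&m=sum:sys.test.temp"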

 

 

 

6. Install Grafana

Reference: https://grafana.com/grafana/download?platform=linux

[1] Install

Method 1:

# Create the directory
mkdir /opt/soft/grafana/

# Enter the directory
cd /opt/soft/grafana/

# Download the package
wget https://dl.grafana.com/oss/release/grafana-6.2.5.linux-amd64.tar.gz

# Extract
tar -zxvf grafana-6.2.5.linux-amd64.tar.gz

Method 2:

yum install -y https://dl.grafana.com/oss/release/grafana-6.2.5-1.x86_64.rpm

[2] Startup

# Start (the grafana-server service is registered by the rpm install of method 2; for the tarball of method 1, run ./bin/grafana-server from the extracted directory instead)
service grafana-server start

## Verification
# 1. Status check
systemctl status grafana-server
# 2. Web check: the default port is 3000
192.168.0.214:3000

 

 

[3] View the data

1. Change the password:

The default credentials are admin / admin; you are asked to change the password at first login. I changed it to Zxit@2018.

 

2. Add the data source

Add data source → Data Sources → OpenTSDB → enter the URL (http://localhost:4242) → Save & Test → Back

 

 

3. View the data

 

Home → New dashboard → Add Query → select the data source → select the metric
