Compiling and Installing kafka-manager-2.0.0.2 on CentOS 7

1. kafka-manager Overview

Project page: https://github.com/yahoo/kafka-manager

To simplify the work of developers and service engineers who maintain Kafka clusters, Yahoo built a web-based management tool called Kafka Manager. It makes it easy to spot topics that are unevenly distributed across the cluster, or partitions that are unevenly spread across brokers. It supports managing multiple clusters, preferred replica election, replica reassignment, and topic creation. It is also a very handy tool for quickly browsing the state of a cluster, with the following features:

  1. Manage multiple Kafka clusters
  2. Easy inspection of cluster state (topics, consumers, offsets, brokers, replica distribution, partition distribution)
  3. Run preferred replica election
  4. Generate partition assignments, with the option to choose which brokers to use
  5. Run reassignment of partitions
  6. Create and delete topics with a variety of options
  7. The topic list indicates which topics are marked for deletion
  8. Batch generate partition assignments for multiple topics, with the option to choose which brokers to use
  9. Batch run reassignment of partitions for multiple topics
  10. Add partitions to an existing topic
  11. Update the configuration of an existing topic
  12. Optionally enable JMX polling for broker-level and topic-level metrics
  13. Optionally filter out consumers that do not have ids/owners/offsets/directories in ZooKeeper

2. Downloading the kafka-manager Source Package

Source package download: https://github.com/yahoo/kafka-manager/archive/2.0.0.2.tar.gz

GitHub does not provide a pre-built package, so we have to compile it ourselves before installing, which requires installing the sbt build tool first.

I have already built the package kafka-manager-2.0.0.2.zip, so you can download it directly and use it: https://pan.baidu.com/s/10hiEuECfZ6UuI4yIY1dluw

Follow the WeChat official account and reply [kafka manager] to get the extraction code.

If you are reading this article, there is no need to compile it yourself; it is tedious and a waste of time. Just download the pre-built package from the Baidu Netdisk link above and treat the compilation steps below as a reference.

3. Installing sbt 1.3.5

[root@localhost ~]# curl https://bintray.com/sbt/rpm/rpm > bintray-sbt-rpm.repo
[root@localhost ~]# mv bintray-sbt-rpm.repo /etc/yum.repos.d/
[root@localhost ~]# yum install sbt -y

Because kafka-manager is built on the Play framework, configure sbt's Maven repositories first to speed up compilation. The default repositories are slow, so we use the Maven mirror provided by Aliyun.

Change the repository addresses (sbt downloads library files slowly by default and downloads are occasionally interrupted; retry if it fails). Create ~/.sbt/repositories under the home directory and fill in the Aliyun mirror:

cd ~
mkdir .sbt
touch ~/.sbt/repositories
vi ~/.sbt/repositories

Contents:

[repositories]
  local
  #oschina: http://maven.oschina.net/content/groups/public/  
  aliyun-nexus: http://maven.aliyun.com/nexus/content/groups/public/
  jcenter: http://jcenter.bintray.com/
  typesafe-ivy-releases: http://repo.typesafe.com/typesafe/ivy-releases/, [organization]/[module]/[revision]/[type]s/[artifact](-[classifier]).[ext], bootOnly
  maven-central: http://repo1.maven.org/maven2/

The resolution order in this configuration is: local → Aliyun mirror → jcenter → typesafe-ivy-releases → Maven Central. If you need to add your company's Maven mirror, add it in the same key: value form; there are no strict requirements on the key name (at least none I have noticed, though it is best to avoid special characters).
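For example, a company mirror entry could be appended to ~/.sbt/repositories as another key: value line (the name and URL below are placeholders, not a real endpoint):

  my-company-nexus: https://maven.example.com/nexus/content/groups/public/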

Verification: check whether sbt was installed successfully by looking at the output of the command below. If dependencies are downloaded from maven.aliyun.com/nexus, the configuration is working.

sbt -version

The first run takes quite a while, so be patient. I have already run it once here, so running it again prints the following:

[root@localhost ~]# sbt -version
[info] [launcher] getting org.scala-sbt sbt 1.3.5  (this may take some time)...
:: loading settings :: url = jar:file:/usr/share/sbt/bin/sbt-launch.jar!/org/apache/ivy/core/settings/ivysettings.xml
:: retrieving :: org.scala-sbt#boot-app
	confs: [default]
	81 artifacts copied, 0 already retrieved
[info] [launcher] getting Scala 2.12.10 (for sbt)...
:: retrieving :: org.scala-sbt#boot-scala
	confs: [default]
	6 artifacts copied, 0 already retrieved
sbt version in this project: 1.3.5
sbt script version: 1.3.5

4. Extracting and Compiling the kafka-manager Source Package

In step 2 (Downloading the kafka-manager Source Package) we already downloaded the source package.

Extract the kafka-manager source package:

[root@localhost soft]# tar -zxvf kafka-manager-2.0.0.2.tar.gz

After extraction, the directory contents look like this:

[root@localhost soft]# ll
total 56
drwxrwxr-x. 9 root root   109 Apr 12  2019 app
-rw-rw-r--. 1 root root  4242 Apr 12  2019 build.sbt
drwxrwxr-x. 2 root root   108 Apr 12  2019 conf
drwxrwxr-x. 2 root root   156 Apr 12  2019 img
-rw-rw-r--. 1 root root 11307 Apr 12  2019 LICENSE
drwxrwxr-x. 2 root root    49 Apr 12  2019 project
drwxrwxr-x. 5 root root    54 Apr 12  2019 public
-rw-rw-r--. 1 root root  8686 Apr 12  2019 README.md
-rwxrwxr-x. 1 root root 21353 Apr 12  2019 sbt
drwxrwxr-x. 4 root root    37 Apr 12  2019 src
drwxrwxr-x. 5 root root    51 Apr 12  2019 test

Then run:

./sbt clean dist

Compilation takes a long time, so be patient. You can check ~/.sbt/boot/update.log to follow the sbt update log. Once sbt has finished updating, it starts downloading the various jar packages. When you finally see [info] Your package is ready in /home/soft/kafka-manager-2.0.0.2/target/universal/kafka-manager-2.0.0.2.zip the build has succeeded.
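While the build is running, you can watch the sbt update log mentioned above from another shell, for example:

# Follow sbt's update log during the build
tail -f ~/.sbt/boot/update.log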

I had already compiled it once; since I am writing these notes, I ran the build again and it finished quickly. The log is as follows:

[root@localhost kafka-manager-2.0.0.2]# ./sbt clean dist
Downloading sbt launcher for 1.2.8:
  From  http://repo.scala-sbt.org/scalasbt/maven-releases/org/scala-sbt/sbt-launch/1.2.8/sbt-launch.jar
    To  /root/.sbt/launchers/1.2.8/sbt-launch.jar
Getting org.scala-sbt sbt 1.2.8  (this may take some time)...
:: retrieving :: org.scala-sbt#boot-app
	confs: [default]
	79 artifacts copied, 0 already retrieved (28496kB/1360ms)
Getting Scala 2.12.7 (for sbt)...
:: retrieving :: org.scala-sbt#boot-scala
	confs: [default]
	5 artifacts copied, 0 already retrieved (19715kB/347ms)
[info] Loading settings for project kafka-manager-2-0-0-2-build from plugins.sbt ...
[info] Loading project definition from /home/soft/kafka-manager-2.0.0.2/project
[info] Updating ProjectRef(uri("file:/home/soft/kafka-manager-2.0.0.2/project/"), "kafka-manager-2-0-0-2-build")...
[info] Done updating.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] Loading settings for project root from build.sbt ...
[info] Set current project to kafka-manager (in build file:/home/soft/kafka-manager-2.0.0.2/)
[success] Total time: 0 s, completed 2019-12-25 12:27:23
[info] Packaging /home/soft/kafka-manager-2.0.0.2/target/scala-2.12/kafka-manager_2.12-2.0.0.2-sources.jar ...
[info] Done packaging.
Warning: node.js detection failed, sbt will use the Rhino based Trireme JavaScript engine instead to run JavaScript assets compilation, which in some cases may be orders of magnitude slower than using node.js.
[info] Updating ...
[info] downloading http://maven.aliyun.com/nexus/content/groups/public/org/scala-lang/modules/scala-parser-combinators_2.12/1.0.7/scala-parser-combinators_2.12-1.0.7.jar ...
[info] 	[SUCCESSFUL ] org.scala-lang.modules#scala-parser-combinators_2.12;1.0.7!scala-parser-combinators_2.12.jar(bundle) (2108ms)
[info] Done updating.
[warn] There may be incompatibilities among your library dependencies; run 'evicted' to see detailed eviction warnings.
[info] Wrote /home/soft/kafka-manager-2.0.0.2/target/scala-2.12/kafka-manager_2.12-2.0.0.2.pom
[info] Main Scala API documentation to /home/soft/kafka-manager-2.0.0.2/target/scala-2.12/api...
[info] Non-compiled module 'compiler-bridge_2.12' for Scala 2.12.8. Compiling...
[info]   Compilation completed in 38.745s.
model contains 604 documentable templates
[info] Main Scala API documentation successful.
[info] Compiling 131 Scala sources and 2 Java sources to /home/soft/kafka-manager-2.0.0.2/target/scala-2.12/classes ...
[info] Done compiling.
[info] Packaging /home/soft/kafka-manager-2.0.0.2/target/scala-2.12/kafka-manager_2.12-2.0.0.2-javadoc.jar ...
[info] Done packaging.
[info] LESS compiling on 1 source(s)
[info] Packaging /home/soft/kafka-manager-2.0.0.2/target/scala-2.12/kafka-manager_2.12-2.0.0.2.jar ...
[info] Done packaging.
[info] Packaging /home/soft/kafka-manager-2.0.0.2/target/scala-2.12/kafka-manager_2.12-2.0.0.2-web-assets.jar ...
[info] Done packaging.
[info] Packaging /home/soft/kafka-manager-2.0.0.2/target/scala-2.12/kafka-manager_2.12-2.0.0.2-sans-externalized.jar ...
[info] Done packaging.
[success] All package validations passed
[info] Your package is ready in /home/soft/kafka-manager-2.0.0.2/target/universal/kafka-manager-2.0.0.2.zip
[success] Total time: 355 s, completed 2019-12-25 12:33:19

5. Installing kafka-manager

kafka-manager runs on JDK 8, so install JDK 8 first; JDK installation is not covered in detail here.
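For reference, a minimal sketch of installing OpenJDK 8 from the CentOS 7 base repositories with yum (adjust if you prefer the Oracle JDK or another package source):

# Install the OpenJDK 8 runtime and development packages
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
# Verify the installed version
java -version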

Since the build in step 4 (Extracting and Compiling the kafka-manager Source Package) is already complete, we can move the package /home/soft/kafka-manager-2.0.0.2/target/universal/kafka-manager-2.0.0.2.zip to wherever you want to install it and unzip it there.

Here I unzip it into /usr/local/:

unzip kafka-manager-2.0.0.2.zip -d /usr/local/

After unzipping, the directory looks like this:

[root@localhost kafka-manager-2.0.0.2]# ll
total 28
drwxr-xr-x. 2 root root 4096 Dec 25 14:57 bin
drwxr-xr-x. 2 root root  108 Dec 25 14:57 conf
drwxr-xr-x. 2 root root 8192 Dec 25 14:57 lib
-rw-r--r--. 1 root root 8686 Apr 12  2019 README.md
drwxr-xr-x. 3 root root   17 Dec 25 14:57 share

Next, configure kafka-manager:

vi conf/application.conf
# Change kafka-manager.zkhosts to your own ZooKeeper node(s)
kafka-manager.zkhosts="192.168.184.133:2181"
# Set the HTTP port; the default is 9000
http.port=9090
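If you run a multi-node ZooKeeper ensemble, kafka-manager.zkhosts accepts a comma-separated list; a sketch with placeholder hostnames:

kafka-manager.zkhosts="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"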

To see read and write rates you need to enable JMX. Edit kafka-server-start.sh and add one line to set the JMX port to 8999:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT="8999"
fi

Note: every Kafka broker needs this change, and each broker must be restarted afterwards.
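After restarting a broker, you can confirm that the JMX port is actually listening, for example:

# Check that the broker exposes JMX on port 8999
ss -lntp | grep 8999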

Finally, start kafka-manager:

cd /usr/local/kafka-manager-2.0.0.2/bin

nohup ./kafka-manager -Dconfig.file=../conf/application.conf >/dev/null 2>&1 &
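Since stdout and stderr are discarded by the redirect above, a quick way to confirm that kafka-manager actually came up is to check the listening port and issue a test request (the IP and port below assume the configuration used earlier):

# Confirm kafka-manager is listening on the configured port
ss -lntp | grep 9090
# Fetch the landing page headers as a smoke test
curl -I http://192.168.184.133:9090/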

6. Testing kafka-manager

Open: http://192.168.184.133:9090/

1) Create a Cluster

Click [Cluster] > [Add Cluster] to open the add-cluster configuration page. Enter a cluster name (e.g. KafkaCluster) and the ZooKeeper address (e.g. 192.168.184.133:2181/kafka), and choose the closest matching Kafka version.

The remaining broker settings can be configured as needed. By default, when you click [Save], a few fields that default to 1 are flagged as errors and must be set to values of 2 or greater before the cluster can be saved.

After adding the cluster, you can view it in the cluster list.

Topic-related views:

Explore the rest of the UI by clicking around...

 

For more information, see the kafka-manager README.md on GitHub: https://github.com/yahoo/kafka-manager/blob/master/README.md

Original source: https://www.cnblogs.com/coding-farmer/p/12097519.html
