Enterprise Apache Kafka Deployment in Practice
Author: Yin Zhengjie
Copyright notice: original work; reproduction without permission is prohibited and will be pursued legally.
I. Installing the ZooKeeper cluster
1>. Download ZooKeeper
[root@node106.yinzhengjie.org.cn ~]# yum -y install wget Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.jdcloud.com * extras: mirrors.tuna.tsinghua.edu.cn * updates: mirrors.tuna.tsinghua.edu.cn base | 3.6 kB 00:00:00 extras | 3.4 kB 00:00:00 updates | 3.4 kB 00:00:00 Resolving Dependencies --> Running transaction check ---> Package wget.x86_64 0:1.14-18.el7_6.1 will be installed --> Finished Dependency Resolution Dependencies Resolved ================================================================================================================================================================================================================================================================ Package Arch Version Repository Size ================================================================================================================================================================================================================================================================ Installing: wget x86_64 1.14-18.el7_6.1 updates 547 k Transaction Summary ================================================================================================================================================================================================================================================================ Install 1 Package Total download size: 547 k Installed size: 2.0 M Downloading packages: wget-1.14-18.el7_6.1.x86_64.rpm | 547 kB 00:00:00 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : wget-1.14-18.el7_6.1.x86_64 1/1 Verifying : wget-1.14-18.el7_6.1.x86_64 1/1 Installed: wget.x86_64 0:1.14-18.el7_6.1 Complete! [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz --2019-07-11 10:43:34-- https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz Resolving mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)... 101.6.8.193, 2402:f000:1:408:8100::1 Connecting to mirrors.tuna.tsinghua.edu.cn (mirrors.tuna.tsinghua.edu.cn)|101.6.8.193|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 10622522 (10M) [application/x-gzip] Saving to: ‘apache-zookeeper-3.5.5-bin.tar.gz’ 100%[======================================================================================================================================================================================================================>] 10,622,522 1.98MB/s in 5.0s 2019-07-11 10:43:39 (2.04 MB/s) - ‘apache-zookeeper-3.5.5-bin.tar.gz’ saved [10622522/10622522] [root@node106.yinzhengjie.org.cn ~]# [root@node106.yinzhengjie.org.cn ~]# ll total 10376 -rw-r--r-- 1 root root 10622522 May 20 18:40 apache-zookeeper-3.5.5-bin.tar.gz [root@node106.yinzhengjie.org.cn ~]# [root@node106.yinzhengjie.org.cn ~]#
2>. Extract ZooKeeper
[root@node106.yinzhengjie.org.cn ~]# tar -zxf apache-zookeeper-3.5.5-bin.tar.gz -C /home/softwares/
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ll /home/softwares/apache-zookeeper-3.5.5-bin/
total 32
drwxr-xr-x 2 2002 2002   232 Apr  9 19:13 bin
drwxr-xr-x 2 2002 2002    77 Apr  2 21:05 conf
drwxr-xr-x 5 2002 2002  4096 May  3 20:07 docs
drwxr-xr-x 2 root root  4096 Jul 11 10:44 lib
-rw-r--r-- 1 2002 2002 11358 Feb 15 20:55 LICENSE.txt
-rw-r--r-- 1 2002 2002   432 Apr  9 19:13 NOTICE.txt
-rw-r--r-- 1 2002 2002  1560 May  3 19:41 README.md
-rw-r--r-- 1 2002 2002  1347 Apr  2 21:05 README_packaging.txt
[root@node106.yinzhengjie.org.cn ~]#
3>. Create the ZooKeeper heap configuration file (java.env)
[root@node106.yinzhengjie.org.cn ~]# cat /home/softwares/apache-zookeeper-3.5.5-bin/conf/java.env
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Path to the JDK installation
export JAVA_HOME=/home/softwares/jdk1.8.0_201

#ZooKeeper heap size
export JVMFLAGS="-Xms2048m -Xmx2048m $JVMFLAGS"
[root@node106.yinzhengjie.org.cn ~]#
4>. Create the ZooKeeper configuration file zoo.cfg (it must be created manually; "/home/softwares/apache-zookeeper-3.5.5-bin/conf/zoo_sample.cfg" is a configuration template)
Each ZooKeeper server reads its configuration from a file in the conf directory of the installation, which should be named zoo.cfg; the zoo_sample.cfg file in that directory is a simple sample of the format. The options that can be set in zoo.cfg are described below.

I. Minimal configuration: the following three options are the "bare minimum" that must be present in the configuration file.
(1) clientPort: the port the ZooKeeper server listens on; clients connect through it, and each server may use a different value. Default: 2181.
(2) dataDir: the directory where ZooKeeper stores snapshots of its in-memory database. Unless dataLogDir is set, this directory also holds the transaction log. For production clusters it is strongly recommended to set dataLogDir so that dataDir holds only snapshots; writing snapshots is cheap, so dataDir can share a mount point with other log directories.
(3) tickTime: as mentioned earlier, ZooKeeper's basic time unit is the tick; this option sets the length of one tick in milliseconds. Default 3000; changing it is not recommended.

II. Advanced configuration: the options below are all optional, but to use ZooKeeper well, some of them are practically mandatory.
(1) dataLogDir: must be set in production. ZooKeeper writes the transaction log sequentially and blockingly, so this directory must live on its own dedicated disk and must never be co-located with other write-heavy directories, or performance suffers severely.
(2) globalOutstandingLimit: ZooKeeper queues requests; with many clients submitting writes frequently, the server could run out of memory before it can process them. This caps the number of unprocessed requests in the system. Default 1000; usually does not need changing.
(3) preAllocSize: the preallocated size of each transaction log file, in KB; the default is 64 MB. That default is clearly too large: a new log file is created after every snapshot, and the number of transactions per snapshot is bounded by snapCount, so the required file size can be estimated accurately. With the default snapCount and individual transactions under 100 bytes, a value of 10240 is plenty.
(4) snapCount: the number of transactions between snapshots; default 100000. Snapshots are expensive and there is no need to take them often, so although this value looks large, it normally does not need changing.
(5) traceFile: if this path is set, ZooKeeper's operations are continuously traced into a log named traceFile.year.month.day. Tracing is expensive; it may be enabled in debug environments only, never in production.
(6) maxClientCnxns: the limit on connections between a single client machine and a single server, enforced per IP; default 60, and 0 means no limit. Note the scope carefully: it limits only one client host's connections to one ZooKeeper server, not a given client IP across the cluster, not the cluster's total connections, and not one server's total connections from all clients. If point-to-point connection counts are low, lowering this value is advisable.
(7) clientPortAddress: which local IP the client port binds to; used on servers with multiple network interfaces, especially clusters with both public and private addresses where only the private address should be exposed.
(8) minSessionTimeout: the minimum session timeout between a client and the server, in multiples of tickTime; default 2.
(9) maxSessionTimeout: the maximum session timeout between a client and the server, in multiples of tickTime; default 20. Clients usually have their own connection-management settings; a client-requested timeout outside the range minSessionTimeout–maxSessionTimeout is forced to the nearest bound.
(10) fsync.warningthresholdms: the threshold, in milliseconds, for warning about slow storage syncs; default 1000. Since it only triggers a warning, there is no need to change it.
(11) autopurge.snapRetainCount: since 3.4.0, ZooKeeper can automatically purge old snapshot and transaction log files; this option sets the number of files to retain. Default 3; rarely needs changing.
(12) autopurge.purgeInterval: used together with the previous option; the purge frequency in hours. Default 0, meaning no purging; a value such as 6 or 12 is recommended.
(13) syncEnabled: added in 3.4.6. When the cluster uses observers, it controls whether they write snapshots and transaction logs to disk like leaders and followers do; default true. If your cluster runs observers, setting it to false is recommended, since writing is unnecessary there.

III. Cluster configuration: advanced options for cluster mode; standalone mode can ignore them. Then again, hardly anyone runs a standalone ZooKeeper.
(1) electionAlg: the leader election algorithm; default 3, meaning FastLeaderElection. As noted in earlier chapters, the other three algorithms have all been marked deprecated; do not change this.
(2) initLimit: the timeout for a follower's initial connection and sync with the leader. After a cluster start or a new leader election, followers must pull the latest data from the leader, which can time out if there is a lot of data to sync. In multiples of tickTime; no default value.
(3) leaderServes: default yes, meaning the leader also accepts client connections and serves reads and writes. If set to no, the leader takes no client connections and only handles write requests forwarded by followers plus cluster coordination. Change this parameter with care: it can turn a busy leader into busy followers. If the cluster is small and the leader's CPU and I/O are not stressed, keep the default.
(4) server.x=[hostname]:nnnnn[:nnnnn][:observer]: x is a number matching the id in the myid file. Two ports can follow the hostname: the first for data sync and other communication between leader and followers, the second for voting during leader election. An optional :observer suffix marks the server as running in observer mode.
(5) syncLimit: the timeout for a follower's subsequent syncs with the leader, in multiples of tickTime. Unlike initLimit, it barely depends on data volume, only on network quality, and it also acts as a heartbeat: if a follower times out, the leader considers it too far behind and drops it. In a high-latency network this can be raised moderately, but not too far, or it will mask real failures.
(6) group.x=nnnnn[:nnnnn]: besides the default one-vote-per-server quorum, ZooKeeper supports weighted and grouped quorum methods; this option configures grouping.
(7) weight.x=nnnnn: see the previous item (sets a server's voting weight).
(8) cnxTimeout: the timeout for opening a new connection during leader election, in milliseconds; default 5000, no need to change.
(9) standaloneEnabled: available since 3.5.0; default true for backward compatibility. If set to false, a single server can start in cluster mode, a cluster can be reconfigured down to a single node, and a single node can be grown back into a cluster.
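The server.x entries described in item (4) must stay consistent with each node's myid file, which is created in a later step. A minimal sketch, assuming the node numbering used in this deployment (106–108), that derives the zoo.cfg lines from one numeric range so the two cannot drift apart:

```shell
#!/bin/bash
# Sketch: generate the zoo.cfg server.x lines from the node number range.
# The numeric suffix doubles as the myid value, following the convention
# used in this deployment; the hostnames are this cluster's, adjust to yours.
servers=""
for i in 106 107 108; do
  servers="${servers}server.${i}=node${i}.yinzhengjie.org.cn:2888:3888
"
done
printf '%s' "$servers"
```

Appending the generated text to zoo.cfg (instead of typing the entries by hand) removes one opportunity for the id-to-hostname mapping to go wrong.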
[root@node106.yinzhengjie.org.cn ~]# cat /home/softwares/apache-zookeeper-3.5.5-bin/conf/zoo.cfg
# The tick is ZooKeeper's basic time unit (default 2000 ms, i.e. 2 seconds), used to measure heartbeats, timeouts, and the like. The default of 2 seconds is usually fine.
tickTime=2000
# Maximum number of ticks a follower may take to connect to the leader during initialization (default 10 ticks, i.e. 20 seconds).
initLimit=5
# Time limit for data sync (default 5 ticks, i.e. 10 seconds): the maximum time for a follower to sync with the leader. Like initLimit, it is specified in units of tickTime.
syncLimit=2
# ZooKeeper's working directory; a very important setting. ZooKeeper keeps snapshots of its in-memory state and periodically writes them to this directory. Watch this directory's disk usage in production.
dataDir=/home/data/zookeeper
# The port ZooKeeper listens on for client connections; the default 2181 is usually fine.
clientPort=2181
# Limits the number of concurrent client connections, distinguished by IP. This option can mitigate certain DoS attacks. Setting it to 0, or leaving it unset, removes the limit.
#maxClientCnxns=60
# As mentioned above, since 3.4.0 ZooKeeper can automatically purge transaction logs and snapshot files. This option sets the purge frequency in hours; use an integer of 1 or greater. The default 0 disables automatic purging.
#autopurge.purgeInterval=1
# Used together with the option above; the number of files to retain. Default: keep 3.
#autopurge.snapRetainCount=3
# server.x=[hostname]:nnnnn[:nnnnn] - x matches the id in the myid file. Two ports can follow: the first for follower-leader data sync and other communication, the second for voting during leader election.
server.106=node106.yinzhengjie.org.cn:2888:3888
server.107=node107.yinzhengjie.org.cn:2888:3888
server.108=node108.yinzhengjie.org.cn:2888:3888
[root@node106.yinzhengjie.org.cn ~]#
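A quorum needs a strict majority of voting members, so ensembles should have an odd size (3, 5, ...). A quick sanity check is to count the server.x lines in zoo.cfg; the sketch below uses an inline copy of the three entries above (on a live node you would grep the real file instead, as shown in the comment):

```shell
#!/bin/bash
# Sketch: count the voting members declared in zoo.cfg and confirm the
# ensemble size is odd. The inline config mirrors the file shown above.
cfg='server.106=node106.yinzhengjie.org.cn:2888:3888
server.107=node107.yinzhengjie.org.cn:2888:3888
server.108=node108.yinzhengjie.org.cn:2888:3888'
members=$(printf '%s\n' "$cfg" | grep -c '^server\.')
echo "$members voting members"
# On a live node:
#   grep -c '^server\.' /home/softwares/apache-zookeeper-3.5.5-bin/conf/zoo.cfg
```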
5>. Write a ZooKeeper management script
[root@node106.yinzhengjie.org.cn ~]# vi /usr/local/bin/zookeeper_manager.sh
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# cat /usr/local/bin/zookeeper_manager.sh
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check that exactly one argument was passed
if [ $# -ne 1 ];then
    echo "Invalid argument. Usage: $0 {start|stop|restart|status}"
    exit
fi

#The command entered by the user
cmd=$1

#Dispatch the command
function zookeeperManager(){
    case $cmd in
        start)
            echo "Starting service"
            remoteExecution start
            ;;
        stop)
            echo "Stopping service"
            remoteExecution stop
            ;;
        restart)
            echo "Restarting service"
            remoteExecution restart
            ;;
        status)
            echo "Checking status"
            remoteExecution status
            ;;
        *)
            echo "Invalid argument. Usage: $0 {start|stop|restart|status}"
            ;;
    esac
}

#Run the given zkServer.sh command on every node
function remoteExecution(){
    for (( i=106 ; i<=108 ; i++ )) ; do
        tput setaf 2
        echo ========== node${i}.yinzhengjie.org.cn zkServer.sh $1 ================
        tput setaf 9
        ssh node${i}.yinzhengjie.org.cn "source /etc/profile ; zkServer.sh $1"
    done
}

#Entry point
zookeeperManager
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# chmod +x /usr/local/bin/zookeeper_manager.sh
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ll /usr/local/bin/zookeeper_manager.sh
-rwxr-xr-x 1 root root 1101 Jul 10 19:04 /usr/local/bin/zookeeper_manager.sh
[root@node106.yinzhengjie.org.cn ~]#
6>. Configure the ZooKeeper environment variables
[root@node106.yinzhengjie.org.cn ~]# tail -3 /etc/profile
#ADD Zookeeper PATH BY yinzhengjie
ZOOKEEPER=/home/softwares/apache-zookeeper-3.5.5-bin
PATH=$PATH:$ZOOKEEPER/bin
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# source /etc/profile
[root@node106.yinzhengjie.org.cn ~]#
7>. Set up passwordless SSH from the management node to the other nodes
[root@node106.yinzhengjie.org.cn ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa Generating public/private rsa key pair. Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: SHA256:1f6hE914tCJFEZoCoRk9OJGivY76rt+sSCjtmsXphAw root@node106.yinzhengjie.org.cn The key's randomart image is: +---[RSA 2048]----+ | o=o. +o | | . ++o. .+ | | o .o. ...o.. .| | . . ..... +.| |E . S .o.+.o| |++ o .+.o | |+oO o . | |oO + . | |B*B.o | +----[SHA256]-----+ [root@node106.yinzhengjie.org.cn ~]# [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ssh-copy-id node106.yinzhengjie.org.cn /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host 'node106.yinzhengjie.org.cn (10.0.2.15)' can't be established. ECDSA key fingerprint is SHA256:EN9dVOkMLQ/20rdlpb+dDXmI8Os43C51nuDxSc61laU. ECDSA key fingerprint is MD5:c6:ce:a2:ea:01:5c:fb:f5:5a:56:4a:24:bb:d6:2a:7d. Are you sure you want to continue connecting (yes/no)? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@node106.yinzhengjie.org.cn's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'node106.yinzhengjie.org.cn'" and check to make sure that only the key(s) you wanted were added. [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ssh-copy-id node107.yinzhengjie.org.cn /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host 'node107.yinzhengjie.org.cn (172.30.1.107)' can't be established. ECDSA key fingerprint is SHA256:gMWo+eGV/qjyhw7zK6eLCt4LYPFo1lAYLF56DoYcIuI. ECDSA key fingerprint is MD5:9d:7d:6e:d7:7a:5a:bd:de:30:8a:20:ea:41:cf:0d:06. Are you sure you want to continue connecting (yes/no)? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@node107.yinzhengjie.org.cn's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'node107.yinzhengjie.org.cn'" and check to make sure that only the key(s) you wanted were added. [root@node106.yinzhengjie.org.cn ~]# [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ssh-copy-id node108.yinzhengjie.org.cn /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub" The authenticity of host 'node108.yinzhengjie.org.cn (172.30.1.108)' can't be established. ECDSA key fingerprint is SHA256:ywuhrLI5g/YAS+OmkHX/YQMC8LoW8cCxuCShYi4F+To. ECDSA key fingerprint is MD5:cc:ce:62:fe:27:e4:b9:bf:c9:ae:23:0b:d7:14:c0:77. Are you sure you want to continue connecting (yes/no)? yes /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys root@node108.yinzhengjie.org.cn's password: Number of key(s) added: 1 Now try logging into the machine, with: "ssh 'node108.yinzhengjie.org.cn'" and check to make sure that only the key(s) you wanted were added. [root@node106.yinzhengjie.org.cn ~]# [root@node106.yinzhengjie.org.cn ~]#
8>. Copy the ZooKeeper binaries and environment variables to the other nodes
[root@node106.yinzhengjie.org.cn ~]# scp -r /home/softwares/apache-zookeeper-3.5.5-bin/ node107.yinzhengjie.org.cn:/home/softwares/ ...... log4j-1.2.17.LICENSE.txt 100% 11KB 15.3MB/s 00:00 jline-2.11.LICENSE.txt 100% 1535 2.4MB/s 00:00 slf4j-1.7.25.LICENSE.txt 100% 1133 1.8MB/s 00:00 json-simple-1.1.1.LICENSE.txt 100% 11KB 16.1MB/s 00:00 netty-all-4.1.29.Final.LICENSE.txt 100% 11KB 17.2MB/s 00:00 zookeeper-jute-3.5.5.jar 100% 363KB 76.3MB/s 00:00 audience-annotations-0.5.0.jar 100% 20KB 20.0MB/s 00:00 zookeeper-3.5.5.jar 100% 956KB 88.3MB/s 00:00 netty-all-4.1.29.Final.jar 100% 3771KB 68.1MB/s 00:00 slf4j-api-1.7.25.jar 100% 40KB 34.1MB/s 00:00 slf4j-log4j12-1.7.25.jar 100% 12KB 18.3MB/s 00:00 log4j-1.2.17.jar 100% 478KB 83.9MB/s 00:00 commons-cli-1.2.jar 100% 40KB 16.6MB/s 00:00 jetty-server-9.4.17.v20190418.jar 100% 632KB 84.9MB/s 00:00 javax.servlet-api-3.1.0.jar 100% 94KB 17.0MB/s 00:00 jetty-http-9.4.17.v20190418.jar 100% 198KB 64.4MB/s 00:00 jetty-util-9.4.17.v20190418.jar 100% 514KB 67.1MB/s 00:00 jetty-io-9.4.17.v20190418.jar 100% 153KB 51.6MB/s 00:00 jetty-servlet-9.4.17.v20190418.jar 100% 118KB 55.2MB/s 00:00 jetty-security-9.4.17.v20190418.jar 100% 114KB 59.5MB/s 00:00 jackson-databind-2.9.8.jar 100% 1316KB 70.9MB/s 00:00 jackson-annotations-2.9.0.jar 100% 65KB 50.5MB/s 00:00 jackson-core-2.9.8.jar 100% 318KB 75.7MB/s 00:00 json-simple-1.1.1.jar 100% 23KB 27.1MB/s 00:00 jline-2.11.jar 100% 204KB 73.1MB/s 00:00 LICENSE.txt 100% 11KB 14.1MB/s 00:00 README.md 100% 1560 2.6MB/s 00:00 README_packaging.txt 100% 1347 2.4MB/s 00:00 NOTICE.txt 100% 432 798.8KB/s 00:00 configuration.xsl 100% 535 850.1KB/s 00:00 zoo_sample.cfg 100% 922 1.5MB/s 00:00 log4j.properties 100% 2712 3.7MB/s 00:00 java.env 100% 259 395.0KB/s 00:00 zoo.cfg 100% 2147 3.3MB/s 00:00 README.txt 100% 232 435.6KB/s 00:00 zkTxnLogToolkit.cmd 100% 996 2.0MB/s 00:00 zkTxnLogToolkit.sh 100% 1385 2.8MB/s 00:00 zkCli.cmd 100% 1154 1.9MB/s 00:00 zkServer.cmd 100% 1286 1.8MB/s 00:00 zkEnv.sh 100% 3690 
5.5MB/s 00:00 zkCleanup.sh 100% 2067 3.8MB/s 00:00 zkCli.sh 100% 1621 3.6MB/s 00:00 zkEnv.cmd 100% 1766 4.1MB/s 00:00 zkServer-initialize.sh 100% 4573 8.5MB/s 00:00 zkServer.sh 100% 9372 11.7MB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# scp -r /home/softwares/apache-zookeeper-3.5.5-bin/ node108.yinzhengjie.org.cn:/home/softwares/ ...... log4j-1.2.17.LICENSE.txt 100% 11KB 13.4MB/s 00:00 jline-2.11.LICENSE.txt 100% 1535 3.4MB/s 00:00 slf4j-1.7.25.LICENSE.txt 100% 1133 2.5MB/s 00:00 json-simple-1.1.1.LICENSE.txt 100% 11KB 2.8MB/s 00:00 netty-all-4.1.29.Final.LICENSE.txt 100% 11KB 13.8MB/s 00:00 zookeeper-jute-3.5.5.jar 100% 363KB 81.4MB/s 00:00 audience-annotations-0.5.0.jar 100% 20KB 20.5MB/s 00:00 zookeeper-3.5.5.jar 100% 956KB 88.4MB/s 00:00 netty-all-4.1.29.Final.jar 100% 3771KB 71.5MB/s 00:00 slf4j-api-1.7.25.jar 100% 40KB 38.7MB/s 00:00 slf4j-log4j12-1.7.25.jar 100% 12KB 15.3MB/s 00:00 log4j-1.2.17.jar 100% 478KB 83.8MB/s 00:00 commons-cli-1.2.jar 100% 40KB 38.8MB/s 00:00 jetty-server-9.4.17.v20190418.jar 100% 632KB 86.0MB/s 00:00 javax.servlet-api-3.1.0.jar 100% 94KB 41.8MB/s 00:00 jetty-http-9.4.17.v20190418.jar 100% 198KB 30.7MB/s 00:00 jetty-util-9.4.17.v20190418.jar 100% 514KB 52.4MB/s 00:00 jetty-io-9.4.17.v20190418.jar 100% 153KB 58.1MB/s 00:00 jetty-servlet-9.4.17.v20190418.jar 100% 118KB 57.1MB/s 00:00 jetty-security-9.4.17.v20190418.jar 100% 114KB 58.5MB/s 00:00 jackson-databind-2.9.8.jar 100% 1316KB 75.6MB/s 00:00 jackson-annotations-2.9.0.jar 100% 65KB 48.1MB/s 00:00 jackson-core-2.9.8.jar 100% 318KB 42.1MB/s 00:00 json-simple-1.1.1.jar 100% 23KB 24.6MB/s 00:00 jline-2.11.jar 100% 204KB 60.9MB/s 00:00 LICENSE.txt 100% 11KB 15.5MB/s 00:00 README.md 100% 1560 3.3MB/s 00:00 README_packaging.txt 100% 1347 2.6MB/s 00:00 NOTICE.txt 100% 432 867.6KB/s 00:00 configuration.xsl 100% 535 795.3KB/s 00:00 zoo_sample.cfg 100% 922 1.3MB/s 00:00 log4j.properties 100% 2712 4.6MB/s 00:00 java.env 100% 259 596.8KB/s 00:00 zoo.cfg 100% 2147 4.0MB/s 00:00 README.txt 100% 232 437.5KB/s 00:00 zkTxnLogToolkit.cmd 100% 996 1.7MB/s 00:00 zkTxnLogToolkit.sh 100% 1385 1.8MB/s 00:00 zkCli.cmd 100% 1154 2.1MB/s 00:00 zkServer.cmd 100% 1286 2.4MB/s 00:00 zkEnv.sh 100% 3690 
7.2MB/s 00:00 zkCleanup.sh 100% 2067 3.6MB/s 00:00 zkCli.sh 100% 1621 3.1MB/s 00:00 zkEnv.cmd 100% 1766 3.3MB/s 00:00 zkServer-initialize.sh 100% 4573 5.7MB/s 00:00 zkServer.sh 100% 9372 12.9MB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# scp /etc/profile node107.yinzhengjie.org.cn:/etc/profile profile 100% 2016 2.8MB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# scp /etc/profile node108.yinzhengjie.org.cn:/etc/profile profile 100% 2016 2.3MB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
172.30.1.101 node101.yinzhengjie.org.cn
172.30.1.102 node102.yinzhengjie.org.cn
172.30.1.103 node103.yinzhengjie.org.cn
172.30.1.104 node104.yinzhengjie.org.cn
172.30.1.105 node105.yinzhengjie.org.cn
172.30.1.106 node106.yinzhengjie.org.cn
172.30.1.107 node107.yinzhengjie.org.cn
172.30.1.108 node108.yinzhengjie.org.cn
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# scp /etc/hosts node107.yinzhengjie.org.cn:/etc/hosts hosts 100% 479 284.7KB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# scp /etc/hosts node108.yinzhengjie.org.cn:/etc/hosts hosts 100% 479 352.2KB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
9>. Create the data directory and generate the myid files
[root@node106.yinzhengjie.org.cn ~]# yum -y install ansible Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.bit.edu.cn * extras: mirror.bit.edu.cn * updates: mirrors.tuna.tsinghua.edu.cn Resolving Dependencies --> Running transaction check ---> Package ansible.noarch 0:2.4.2.0-2.el7 will be installed --> Processing Dependency: sshpass for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: python2-jmespath for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: python-six for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: python-setuptools for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: python-passlib for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: python-paramiko for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: python-jinja2 for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: python-httplib2 for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: python-cryptography for package: ansible-2.4.2.0-2.el7.noarch --> Processing Dependency: PyYAML for package: ansible-2.4.2.0-2.el7.noarch --> Running transaction check ---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed --> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64 ---> Package python-httplib2.noarch 0:0.9.2-1.el7 will be installed ---> Package python-jinja2.noarch 0:2.7.2-3.el7_6 will be installed --> Processing Dependency: python-babel >= 0.8 for package: python-jinja2-2.7.2-3.el7_6.noarch --> Processing Dependency: python-markupsafe for package: python-jinja2-2.7.2-3.el7_6.noarch ---> Package python-paramiko.noarch 0:2.1.1-9.el7 will be installed --> Processing Dependency: python2-pyasn1 for package: python-paramiko-2.1.1-9.el7.noarch ---> Package python-passlib.noarch 0:1.6.5-2.el7 will be installed ---> Package python-setuptools.noarch 0:0.9.8-7.el7 will be installed --> Processing Dependency: 
python-backports-ssl_match_hostname for package: python-setuptools-0.9.8-7.el7.noarch ---> Package python-six.noarch 0:1.9.0-2.el7 will be installed ---> Package python2-cryptography.x86_64 0:1.7.2-2.el7 will be installed --> Processing Dependency: python-idna >= 2.0 for package: python2-cryptography-1.7.2-2.el7.x86_64 --> Processing Dependency: python-cffi >= 1.4.1 for package: python2-cryptography-1.7.2-2.el7.x86_64 --> Processing Dependency: python-ipaddress for package: python2-cryptography-1.7.2-2.el7.x86_64 --> Processing Dependency: python-enum34 for package: python2-cryptography-1.7.2-2.el7.x86_64 ---> Package python2-jmespath.noarch 0:0.9.0-3.el7 will be installed ---> Package sshpass.x86_64 0:1.06-2.el7 will be installed --> Running transaction check ---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed ---> Package python-babel.noarch 0:0.9.6-8.el7 will be installed ---> Package python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 will be installed --> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch ---> Package python-cffi.x86_64 0:1.6.0-5.el7 will be installed --> Processing Dependency: python-pycparser for package: python-cffi-1.6.0-5.el7.x86_64 ---> Package python-enum34.noarch 0:1.0.4-1.el7 will be installed ---> Package python-idna.noarch 0:2.4-1.el7 will be installed ---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed ---> Package python-markupsafe.x86_64 0:0.11-10.el7 will be installed ---> Package python2-pyasn1.noarch 0:0.1.9-7.el7 will be installed --> Running transaction check ---> Package python-backports.x86_64 0:1.0-8.el7 will be installed ---> Package python-pycparser.noarch 0:2.14-1.el7 will be installed --> Processing Dependency: python-ply for package: python-pycparser-2.14-1.el7.noarch --> Running transaction check ---> Package python-ply.noarch 0:3.4-11.el7 will be installed --> Finished Dependency Resolution Dependencies Resolved 
================================================================================================================================================================================================================================================================ Package Arch Version Repository Size ================================================================================================================================================================================================================================================================ Installing: ansible noarch 2.4.2.0-2.el7 extras 7.6 M Installing for dependencies: PyYAML x86_64 3.10-11.el7 base 153 k libyaml x86_64 0.1.4-11.el7_0 base 55 k python-babel noarch 0.9.6-8.el7 base 1.4 M python-backports x86_64 1.0-8.el7 base 5.8 k python-backports-ssl_match_hostname noarch 3.5.0.1-1.el7 base 13 k python-cffi x86_64 1.6.0-5.el7 base 218 k python-enum34 noarch 1.0.4-1.el7 base 52 k python-httplib2 noarch 0.9.2-1.el7 extras 115 k python-idna noarch 2.4-1.el7 base 94 k python-ipaddress noarch 1.0.16-2.el7 base 34 k python-jinja2 noarch 2.7.2-3.el7_6 updates 518 k python-markupsafe x86_64 0.11-10.el7 base 25 k python-paramiko noarch 2.1.1-9.el7 updates 269 k python-passlib noarch 1.6.5-2.el7 extras 488 k python-ply noarch 3.4-11.el7 base 123 k python-pycparser noarch 2.14-1.el7 base 104 k python-setuptools noarch 0.9.8-7.el7 base 397 k python-six noarch 1.9.0-2.el7 base 29 k python2-cryptography x86_64 1.7.2-2.el7 base 502 k python2-jmespath noarch 0.9.0-3.el7 extras 39 k python2-pyasn1 noarch 0.1.9-7.el7 base 100 k sshpass x86_64 1.06-2.el7 extras 21 k Transaction Summary ================================================================================================================================================================================================================================================================ Install 1 Package (+22 Dependent packages) Total download size: 12 M Installed size: 60 M 
Downloading packages: (1/23): PyYAML-3.10-11.el7.x86_64.rpm | 153 kB 00:00:00 (2/23): python-backports-1.0-8.el7.x86_64.rpm | 5.8 kB 00:00:00 (3/23): python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch.rpm | 13 kB 00:00:00 (4/23): libyaml-0.1.4-11.el7_0.x86_64.rpm | 55 kB 00:00:00 (5/23): python-enum34-1.0.4-1.el7.noarch.rpm | 52 kB 00:00:00 (6/23): python-idna-2.4-1.el7.noarch.rpm | 94 kB 00:00:00 (7/23): python-ipaddress-1.0.16-2.el7.noarch.rpm | 34 kB 00:00:00 (8/23): python-httplib2-0.9.2-1.el7.noarch.rpm | 115 kB 00:00:00 (9/23): python-cffi-1.6.0-5.el7.x86_64.rpm | 218 kB 00:00:01 (10/23): python-jinja2-2.7.2-3.el7_6.noarch.rpm | 518 kB 00:00:00 (11/23): python-markupsafe-0.11-10.el7.x86_64.rpm | 25 kB 00:00:00 (12/23): python-babel-0.9.6-8.el7.noarch.rpm | 1.4 MB 00:00:02 (13/23): python-ply-3.4-11.el7.noarch.rpm | 123 kB 00:00:00 (14/23): python-pycparser-2.14-1.el7.noarch.rpm | 104 kB 00:00:00 (15/23): python-six-1.9.0-2.el7.noarch.rpm | 29 kB 00:00:00 (16/23): python-setuptools-0.9.8-7.el7.noarch.rpm | 397 kB 00:00:00 (17/23): python-passlib-1.6.5-2.el7.noarch.rpm | 488 kB 00:00:00 (18/23): python2-jmespath-0.9.0-3.el7.noarch.rpm | 39 kB 00:00:00 (19/23): sshpass-1.06-2.el7.x86_64.rpm | 21 kB 00:00:00 (20/23): python2-cryptography-1.7.2-2.el7.x86_64.rpm | 502 kB 00:00:00 (21/23): python-paramiko-2.1.1-9.el7.noarch.rpm | 269 kB 00:00:01 (22/23): python2-pyasn1-0.1.9-7.el7.noarch.rpm | 100 kB 00:00:01 (23/23): ansible-2.4.2.0-2.el7.noarch.rpm | 7.6 MB 00:00:08 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 1.4 MB/s | 12 MB 00:00:08 Running transaction check Running transaction test Transaction test succeeded Running transaction Installing : python2-pyasn1-0.1.9-7.el7.noarch 1/23 Installing : python-ipaddress-1.0.16-2.el7.noarch 2/23 
Installing : python-six-1.9.0-2.el7.noarch 3/23 Installing : python-httplib2-0.9.2-1.el7.noarch 4/23 Installing : python-enum34-1.0.4-1.el7.noarch 5/23 Installing : libyaml-0.1.4-11.el7_0.x86_64 6/23 Installing : PyYAML-3.10-11.el7.x86_64 7/23 Installing : python-backports-1.0-8.el7.x86_64 8/23 Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 9/23 Installing : python-setuptools-0.9.8-7.el7.noarch 10/23 Installing : python-babel-0.9.6-8.el7.noarch 11/23 Installing : python-passlib-1.6.5-2.el7.noarch 12/23 Installing : python-ply-3.4-11.el7.noarch 13/23 Installing : python-pycparser-2.14-1.el7.noarch 14/23 Installing : python-cffi-1.6.0-5.el7.x86_64 15/23 Installing : python-markupsafe-0.11-10.el7.x86_64 16/23 Installing : python-jinja2-2.7.2-3.el7_6.noarch 17/23 Installing : python-idna-2.4-1.el7.noarch 18/23 Installing : python2-cryptography-1.7.2-2.el7.x86_64 19/23 Installing : python-paramiko-2.1.1-9.el7.noarch 20/23 Installing : sshpass-1.06-2.el7.x86_64 21/23 Installing : python2-jmespath-0.9.0-3.el7.noarch 22/23 Installing : ansible-2.4.2.0-2.el7.noarch 23/23 Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 1/23 Verifying : python2-jmespath-0.9.0-3.el7.noarch 2/23 Verifying : sshpass-1.06-2.el7.x86_64 3/23 Verifying : python-setuptools-0.9.8-7.el7.noarch 4/23 Verifying : python-jinja2-2.7.2-3.el7_6.noarch 5/23 Verifying : python-six-1.9.0-2.el7.noarch 6/23 Verifying : python-idna-2.4-1.el7.noarch 7/23 Verifying : python-markupsafe-0.11-10.el7.x86_64 8/23 Verifying : python-ply-3.4-11.el7.noarch 9/23 Verifying : python-passlib-1.6.5-2.el7.noarch 10/23 Verifying : python-paramiko-2.1.1-9.el7.noarch 11/23 Verifying : python-babel-0.9.6-8.el7.noarch 12/23 Verifying : python-backports-1.0-8.el7.x86_64 13/23 Verifying : python-cffi-1.6.0-5.el7.x86_64 14/23 Verifying : python-pycparser-2.14-1.el7.noarch 15/23 Verifying : libyaml-0.1.4-11.el7_0.x86_64 16/23 Verifying : ansible-2.4.2.0-2.el7.noarch 17/23 Verifying : 
python-ipaddress-1.0.16-2.el7.noarch 18/23 Verifying : python-enum34-1.0.4-1.el7.noarch 19/23 Verifying : python-httplib2-0.9.2-1.el7.noarch 20/23 Verifying : python2-pyasn1-0.1.9-7.el7.noarch 21/23 Verifying : PyYAML-3.10-11.el7.x86_64 22/23 Verifying : python2-cryptography-1.7.2-2.el7.x86_64 23/23 Installed: ansible.noarch 0:2.4.2.0-2.el7 Dependency Installed: PyYAML.x86_64 0:3.10-11.el7 libyaml.x86_64 0:0.1.4-11.el7_0 python-babel.noarch 0:0.9.6-8.el7 python-backports.x86_64 0:1.0-8.el7 python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 python-cffi.x86_64 0:1.6.0-5.el7 python-enum34.noarch 0:1.0.4-1.el7 python-httplib2.noarch 0:0.9.2-1.el7 python-idna.noarch 0:2.4-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-jinja2.noarch 0:2.7.2-3.el7_6 python-markupsafe.x86_64 0:0.11-10.el7 python-paramiko.noarch 0:2.1.1-9.el7 python-passlib.noarch 0:1.6.5-2.el7 python-ply.noarch 0:3.4-11.el7 python-pycparser.noarch 0:2.14-1.el7 python-setuptools.noarch 0:0.9.8-7.el7 python-six.noarch 0:1.9.0-2.el7 python2-cryptography.x86_64 0:1.7.2-2.el7 python2-jmespath.noarch 0:0.9.0-3.el7 python2-pyasn1.noarch 0:0.1.9-7.el7 sshpass.x86_64 0:1.06-2.el7 Complete! [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# tail -2 /etc/ansible/hosts
[zk]
node[106:108].yinzhengjie.org.cn
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ansible --list-hosts zk
  hosts (3):
    node106.yinzhengjie.org.cn
    node107.yinzhengjie.org.cn
    node108.yinzhengjie.org.cn
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ansible zk -m shell -a 'mkdir -pv /home/data/zookeeper'
 [WARNING]: Consider using file module with state=directory rather than running mkdir
node107.yinzhengjie.org.cn | SUCCESS | rc=0 >>
mkdir: created directory ‘/home/data’
mkdir: created directory ‘/home/data/zookeeper’
node108.yinzhengjie.org.cn | SUCCESS | rc=0 >>
mkdir: created directory ‘/home/data’
mkdir: created directory ‘/home/data/zookeeper’
node106.yinzhengjie.org.cn | SUCCESS | rc=0 >>
mkdir: created directory ‘/home/data’
mkdir: created directory ‘/home/data/zookeeper’
[root@node106.yinzhengjie.org.cn ~]#
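The [WARNING] in that output is Ansible suggesting the idempotent `file` module with `state=directory` instead of shelling out to mkdir. A minimal equivalent as a playbook (the filename is made up for illustration; `file` with `state=directory` is the documented Ansible idiom):

```yaml
# create-zk-datadir.yml - idempotent replacement for the ad-hoc mkdir above
- hosts: zk
  tasks:
    - name: Ensure the ZooKeeper data directory exists
      file:
        path: /home/data/zookeeper
        state: directory
```

Run it with `ansible-playbook create-zk-datadir.yml`, or as a one-liner: `ansible zk -m file -a 'path=/home/data/zookeeper state=directory'`. Either form can be re-run safely; the shell mkdir works too, it just cannot report "changed vs. unchanged".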
[root@node106.yinzhengjie.org.cn ~]# for (( i=106;i<=108;i++ )) do ssh node${i}.yinzhengjie.org.cn "echo -n $i > /home/data/zookeeper/myid" ;done
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ansible zk -m shell -a 'cat /home/data/zookeeper/myid'
node108.yinzhengjie.org.cn | SUCCESS | rc=0 >>
108
node107.yinzhengjie.org.cn | SUCCESS | rc=0 >>
107
node106.yinzhengjie.org.cn | SUCCESS | rc=0 >>
106
[root@node106.yinzhengjie.org.cn ~]#
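The loop above writes each node's number into its myid file over ssh. The invariant it establishes — the myid content equals the x in that node's server.x line — can be sketched locally without ssh; here a temp directory stands in for the three nodes' dataDirs (an assumption purely for illustration):

```shell
#!/bin/bash
# Local sketch of the myid step: each server stores its numeric id in
# $dataDir/myid, and that id must match the server.x entry in zoo.cfg.
# A temp directory simulates the three nodes instead of ssh-ing to them.
tmp=$(mktemp -d)
for i in 106 107 108; do
  mkdir -p "${tmp}/node${i}/zookeeper"
  # printf avoids the portability quirks of "echo -n"; the file must
  # contain only the bare number, no trailing newline required.
  printf '%s' "$i" > "${tmp}/node${i}/zookeeper/myid"
done
cat "${tmp}/node107/zookeeper/myid"   # the id node107 would report
```

If a myid value and its server.x entry disagree, the node either fails to start or votes under the wrong identity, so this is worth verifying (as the ansible cat above does) before starting the ensemble.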
10>. Start ZooKeeper and check its status
[root@node106.yinzhengjie.org.cn ~]# zookeeper_manager.sh start            #Start the service
========== node106.yinzhengjie.org.cn zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /home/softwares/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
========== node107.yinzhengjie.org.cn zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /home/softwares/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
========== node108.yinzhengjie.org.cn zkServer.sh start ================
ZooKeeper JMX enabled by default
Using config: /home/softwares/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# zookeeper_manager.sh status           #Check the status
========== node106.yinzhengjie.org.cn zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /home/softwares/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
========== node107.yinzhengjie.org.cn zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /home/softwares/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: leader
========== node108.yinzhengjie.org.cn zkServer.sh status ================
ZooKeeper JMX enabled by default
Using config: /home/softwares/apache-zookeeper-3.5.5-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost.
Mode: follower
[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ansible zk -m shell -a 'jps' node108.yinzhengjie.org.cn | SUCCESS | rc=0 >> 3859 QuorumPeerMain 4020 Jps node107.yinzhengjie.org.cn | SUCCESS | rc=0 >> 4055 Jps 3887 QuorumPeerMain node106.yinzhengjie.org.cn | SUCCESS | rc=0 >> 4000 QuorumPeerMain 4218 Jps [root@node106.yinzhengjie.org.cn ~]#
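Besides checking that `QuorumPeerMain` shows up in `jps`, the `zkServer.sh status` output itself can be verified programmatically: a healthy ensemble reports exactly one `Mode: leader` across all nodes. A hedged sketch — `count_leaders` is an illustrative helper, and the sample text stands in for the real concatenated status output:

```shell
# Hypothetical check: given the concatenated `zkServer.sh status` output of
# every node, a healthy ensemble contains exactly one "Mode: leader" line.
count_leaders() {
    grep -c '^Mode: leader'
}

# Sample status output standing in for what the three nodes actually printed.
status_output='Mode: follower
Mode: leader
Mode: follower'

leaders=$(printf '%s\n' "$status_output" | count_leaders)
if [ "$leaders" -eq 1 ]; then
    echo "quorum healthy: one leader"
fi
```

Zero leaders means the ensemble has not finished electing (or lacks quorum); more than one should never happen in a single ensemble.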
二. Checking ZooKeeper health with zkWeb
1>. Download the "ZkWeb For Zookeeper" jar (download address: https://github.com/zhitom/zkweb/releases)
[root@node106.yinzhengjie.org.cn ~]# wget https://github.com/zhitom/zkweb/releases/download/zkWeb-v1.2.1/zkWeb-v1.2.1.jar --2019-07-11 11:11:19-- https://github.com/zhitom/zkweb/releases/download/zkWeb-v1.2.1/zkWeb-v1.2.1.jar Resolving github.com (github.com)... 52.74.223.119 Connecting to github.com (github.com)|52.74.223.119|:443... connected. HTTP request sent, awaiting response... 302 Found Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/138690727/1450b800-5d72-11e9-9cd3-d62def1384d2?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190711%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190711T0 31121Z&X-Amz-Expires=300&X-Amz-Signature=e3e489bba28072ce240b99ff7745d5a55baf9831669e3ef7111c41ffa5d47fc8&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3DzkWeb-v1.2.1.jar&response-content-type=application%2Foctet-stream [following]--2019-07-11 11:11:20-- https://github-production-release-asset-2e65be.s3.amazonaws.com/138690727/1450b800-5d72-11e9-9cd3-d62def1384d2?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20190711%2Fus-east-1%2Fs3%2Faws4_request&X-Amz- Date=20190711T031121Z&X-Amz-Expires=300&X-Amz-Signature=e3e489bba28072ce240b99ff7745d5a55baf9831669e3ef7111c41ffa5d47fc8&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3DzkWeb-v1.2.1.jar&response-content-type=application%2Foctet-streamResolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 54.231.32.83 Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|54.231.32.83|:443... connected. HTTP request sent, awaiting response... 
200 OK Length: 28541920 (27M) [application/octet-stream] Saving to: ‘zkWeb-v1.2.1.jar’ 100%[======================================================================================================================================================================================================================>] 28,541,920 33.4KB/s in 14m 34s 2019-07-11 11:25:56 (31.9 KB/s) - ‘zkWeb-v1.2.1.jar’ saved [28541920/28541920] [root@node106.yinzhengjie.org.cn ~]#
2>. Run the ZkWeb jar (the default port is 8099)
[root@node106.yinzhengjie.org.cn ~]# java -jar zkWeb-v1.2.1.jar 11:41:54.393 [main] INFO com.yasenagat.zkweb.ZkWebSpringBootApplication - applicationYamlFileName(application-zkweb.yaml)=file:/root/zkWeb-v1.2.1.jar!/BOOT-INF/classes!/application-zkweb.yaml . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.0.2.RELEASE) [2019-07-11 11:41:56 INFO main StartupInfoLogger.java:50] c.y.zkweb.ZkWebSpringBootApplication --> Starting ZkWebSpringBootApplication vv1.2.1 on node106.yinzhengjie.org.cn with PID 4470 (/root/zkWeb-v1.2.1.jar started by root in /root) [2019-07-11 11:41:56 INFO main SpringApplication.java:663] c.y.zkweb.ZkWebSpringBootApplication --> The following profiles are active: local [2019-07-11 11:41:56 INFO main AbstractApplicationContext.java:590] o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext --> Refreshing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@30c7da1e: st artup date [Thu Jul 11 11:41:56 CST 2019]; root of context hierarchy[2019-07-11 11:41:58 INFO main DefaultListableBeanFactory.java:824] o.s.b.f.s.DefaultListableBeanFactory --> Overriding bean definition for bean 'requestMappingHandlerAdapter' with a different definition: replacing [Root bean: class [null]; scope=; abstr act=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=zkSpringBootConfiguration; factoryMethodName=requestMappingHandlerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [com/yasenagat/zkweb/util/ZkSpringBootConfiguration.class]] with [Root bean: class [null]; scope=; abstract=false; lazyInit=false; autowireMode=3; dependencyCheck=0; autowireCandidate=true; primary=false; 
factoryBeanName=org.springframework.boot.autoconfigure.web.servlet.WebMvcAutoConfiguration$EnableWebMvcConfiguration; factoryMethodName=requestMappingHandlerAdapter; initMethodName=null; destroyMethodName=(inferred); defined in class path resource [org/springframework/boot/autoconfigure/web/servlet/WebMvcAutoConfiguration$EnableWebMvcConfiguration.class]][2019-07-11 11:41:59 INFO main TomcatWebServer.java:91] o.s.b.w.e.tomcat.TomcatWebServer --> Tomcat initialized with port(s): 8099 (http) [2019-07-11 11:41:59 INFO main DirectJDKLog.java:180] o.a.coyote.http11.Http11NioProtocol --> Initializing ProtocolHandler ["http-nio-8099"] [2019-07-11 11:41:59 INFO main DirectJDKLog.java:180] o.a.catalina.core.StandardService --> Starting service [Tomcat] [2019-07-11 11:41:59 INFO main DirectJDKLog.java:180] o.a.catalina.core.StandardEngine --> Starting Servlet Engine: Apache Tomcat/8.5.31 [2019-07-11 11:41:59 INFO localhost-startStop-1 DirectJDKLog.java:180] o.a.c.core.AprLifecycleListener --> The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/us r/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib][2019-07-11 11:42:00 INFO localhost-startStop-1 DirectJDKLog.java:180] o.a.c.c.C.[Tomcat].[localhost].[/] --> Initializing Spring embedded WebApplicationContext [2019-07-11 11:42:00 INFO localhost-startStop-1 ServletWebServerApplicationContext.java:285] o.s.web.context.ContextLoader --> Root WebApplicationContext: initialization completed in 3493 ms [2019-07-11 11:42:00 INFO localhost-startStop-1 ServletRegistrationBean.java:185] o.s.b.w.s.ServletRegistrationBean --> Servlet dispatcherServlet mapped to [/] [2019-07-11 11:42:00 INFO localhost-startStop-1 ServletRegistrationBean.java:185] o.s.b.w.s.ServletRegistrationBean --> Servlet webServlet mapped to [/console/*] [2019-07-11 11:42:00 INFO localhost-startStop-1 ServletRegistrationBean.java:185] o.s.b.w.s.ServletRegistrationBean --> 
Servlet cacheServlet mapped to [/cache/*] [2019-07-11 11:42:00 INFO localhost-startStop-1 AbstractFilterRegistrationBean.java:244] o.s.b.w.s.FilterRegistrationBean --> Mapping filter: 'characterEncodingFilter' to: [/*] [2019-07-11 11:42:00 INFO localhost-startStop-1 AbstractFilterRegistrationBean.java:244] o.s.b.w.s.FilterRegistrationBean --> Mapping filter: 'hiddenHttpMethodFilter' to: [/*] [2019-07-11 11:42:00 INFO localhost-startStop-1 AbstractFilterRegistrationBean.java:244] o.s.b.w.s.FilterRegistrationBean --> Mapping filter: 'httpPutFormContentFilter' to: [/*] [2019-07-11 11:42:00 INFO localhost-startStop-1 AbstractFilterRegistrationBean.java:244] o.s.b.w.s.FilterRegistrationBean --> Mapping filter: 'requestContextFilter' to: [/*] [2019-07-11 11:42:00 INFO MLog-Init-Reporter Slf4jMLog.java:212] com.mchange.v2.log.MLog --> MLog clients using slf4j logging. [2019-07-11 11:42:00 INFO main Slf4jMLog.java:212] com.mchange.v2.c3p0.C3P0Registry --> Initializing c3p0-0.9.5.2 [built 08-December-2015 22:06:04 -0800; debug? true; trace: 10] [2019-07-11 11:42:00 INFO main Slf4jMLog.java:212] c.m.v.c.i.AbstractPoolBackedDataSource --> Initializing c3p0 pool... 
com.mchange.v2.c3p0.ComboPooledDataSource [ acquireIncrement -> 3, acquireRetryAttempts -> 30, acquireRetryDelay -> 1000, autoCommitOn Close -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 0, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> caller, dataSourceName -> 1br8b56a31pwpvwk1gvtu9d|34b7ac2f, debugUnreturnedConnectionStackTraces -> false, description -> null, driverClass -> org.h2.Driver, extensions -> {}, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, forceUseNamedDriverClass -> false, identityToken -> 1br8b56a31pwpvwk1gvtu9d|34b7ac2f, idleConnectionTestPeriod -> 60, initialPoolSize -> 10, jdbcUrl -> jdbc:h2:file:~/.h2/zkweb;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=TRUE;FILE_LOCK=SOCKET, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 0, maxIdleTime -> 60, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 20, maxStatements -> 100, maxStatementsPerConnection -> 0, minPoolSize -> 5, numHelperThreads -> 3, preferredTestQuery -> null, privilegeSpawnedThreads -> false, properties -> {user=******, password=******}, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> false, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, userOverrides -> {}, usesTraditionalReflectiveProxies -> false ][2019-07-11 11:42:01 ERROR main ZkCfgManagerImpl.java:406] c.y.zkweb.util.ZkCfgManagerImpl --> isTableOk Failed,A problem occurred while trying to acquire a cached PreparedStatement in a background thread. [2019-07-11 11:42:01 ERROR main ZkCfgManagerImpl.java:70] c.y.zkweb.util.ZkCfgManagerImpl --> create table (CREATE TABLE IF NOT EXISTS ZK(ID VARCHAR PRIMARY KEY, DESC VARCHAR, CONNECTSTR VARCHAR, SESSIONTIMEOUT VARCHAR))... 
[2019-07-11 11:42:01 ERROR main ZkCfgManagerImpl.java:76] c.y.zkweb.util.ZkCfgManagerImpl --> create table OK !ret=0 [2019-07-11 11:42:01 ERROR main ZkCfgManagerImpl.java:81] c.y.zkweb.util.ZkCfgManagerImpl --> table select check OK! [2019-07-11 11:42:01 INFO main ZkCache.java:41] com.yasenagat.zkweb.util.ZkCache --> zk info size=0 [2019-07-11 11:42:01 INFO main ZkCfgManagerImpl.java:436] c.y.zkweb.util.ZkCfgManagerImpl --> afterPropertiesSet init 0 zk instance [2019-07-11 11:42:01 INFO main RequestMappingHandlerAdapter.java:574] o.s.w.s.m.m.a.RequestMappingHandlerAdapter --> Looking for @ControllerAdvice: org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@30c7da1e: startup date [Thu Jul 11 11:41:56 CST 2019]; root of context hierarchy[2019-07-11 11:42:02 INFO main AbstractUrlHandlerMapping.java:373] o.s.w.s.h.SimpleUrlHandlerMapping --> Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] [2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zkcfg/queryZkCfgById]}" onto public java.util.Map<java.lang.String, java.lang.Object> com.yasenagat.zkweb.web.ZkCfgController. 
queryZkCfg(java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zkcfg/queryZkCfg]}" onto public java.util.Map<java.lang.String, java.lang.Object> com.yasenagat.zkweb.web.ZkCfgController.quer yZkCfg(int,int,java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zkcfg/addZkCfg],produces=[text/html;charset=UTF-8]}" onto public java.lang.String com.yasenagat.zkweb.web.ZkCfgController.addZ kCfg(java.lang.String,java.lang.String,java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zkcfg/updateZkCfg],produces=[text/html;charset=UTF-8]}" onto public java.lang.String com.yasenagat.zkweb.web.ZkCfgController.u pdateZkCfg(java.lang.String,java.lang.String,java.lang.String,java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zkcfg/delZkCfg],produces=[text/html;charset=UTF-8]}" onto public java.lang.String com.yasenagat.zkweb.web.ZkCfgController.delZ kCfg(java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zk/queryZKOk]}" onto public java.lang.String com.yasenagat.zkweb.web.ZkController.queryZKOk(org.springframework.ui.Model,java. 
lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zk/queryZKJMXInfo],produces=[application/json;charset=UTF-8]}" onto public java.util.List<com.yasenagat.zkweb.util.ZkManager$P ropertyPanel> com.yasenagat.zkweb.web.ZkController.queryZKJMXInfo(java.lang.String,java.lang.String,javax.servlet.http.HttpServletResponse)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zk/saveData],produces=[text/html;charset=UTF-8]}" onto public java.lang.String com.yasenagat.zkweb.web.ZkController.saveData(j ava.lang.String,java.lang.String,java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zk/createNode],produces=[text/html;charset=UTF-8]}" onto public java.lang.String com.yasenagat.zkweb.web.ZkController.createNo de(java.lang.String,java.lang.String,java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zk/deleteNode],produces=[text/html;charset=UTF-8]}" onto public java.lang.String com.yasenagat.zkweb.web.ZkController.deleteNo de(java.lang.String,java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zk/queryZnodeInfo],produces=[text/html;charset=UTF-8]}" onto public java.lang.String com.yasenagat.zkweb.web.ZkController.quer yzNodeInfo(java.lang.String,org.springframework.ui.Model,java.lang.String)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/zk/queryZnode]}" onto public java.util.List<com.yasenagat.zkweb.model.Tree> com.yasenagat.zkweb.web.ZkController.query(java.la ng.String,java.lang.String,java.lang.String)[2019-07-11 11:42:02 INFO main 
AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/error],produces=[text/html]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure. web.servlet.error.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest,javax.servlet.http.HttpServletResponse)[2019-07-11 11:42:02 INFO main AbstractHandlerMethodMapping.java:547] o.s.w.s.m.m.a.RequestMappingHandlerMapping --> Mapped "{[/error]}" onto public org.springframework.http.ResponseEntity<java.util.Map<java.lang.String, java.lang.Object>> org.springfram ework.boot.autoconfigure.web.servlet.error.BasicErrorController.error(javax.servlet.http.HttpServletRequest)[2019-07-11 11:42:02 INFO main AbstractUrlHandlerMapping.java:360] o.s.w.s.h.SimpleUrlHandlerMapping --> Root mapping to handler of type [class org.springframework.web.servlet.mvc.ParameterizableViewController] [2019-07-11 11:42:02 INFO main AbstractUrlHandlerMapping.java:373] o.s.w.s.h.SimpleUrlHandlerMapping --> Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] [2019-07-11 11:42:02 INFO main AbstractUrlHandlerMapping.java:373] o.s.w.s.h.SimpleUrlHandlerMapping --> Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] [2019-07-11 11:42:02 INFO main AbstractUrlHandlerMapping.java:373] o.s.w.s.h.SimpleUrlHandlerMapping --> Mapped URL path [/resources/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler] [2019-07-11 11:42:02 INFO main MBeanExporter.java:433] o.s.j.e.a.AnnotationMBeanExporter --> Registering beans for JMX exposure on startup [2019-07-11 11:42:02 INFO main DirectJDKLog.java:180] o.a.coyote.http11.Http11NioProtocol --> Starting ProtocolHandler ["http-nio-8099"] [2019-07-11 11:42:02 INFO main DirectJDKLog.java:180] o.a.tomcat.util.net.NioSelectorPool --> Using a shared 
selector for servlet write/read [2019-07-11 11:42:03 INFO main DirectJDKLog.java:180] o.a.c.c.C.[Tomcat].[localhost].[/] --> Initializing Spring FrameworkServlet 'dispatcherServlet' [2019-07-11 11:42:03 INFO main FrameworkServlet.java:494] o.s.web.servlet.DispatcherServlet --> FrameworkServlet 'dispatcherServlet': initialization started [2019-07-11 11:42:03 INFO main FrameworkServlet.java:509] o.s.web.servlet.DispatcherServlet --> FrameworkServlet 'dispatcherServlet': initialization completed in 21 ms [2019-07-11 11:42:03 INFO main TomcatWebServer.java:206] o.s.b.w.e.tomcat.TomcatWebServer --> Tomcat started on port(s): 8099 (http) with context path '' [2019-07-11 11:42:03 INFO main StartupInfoLogger.java:59] c.y.zkweb.ZkWebSpringBootApplication --> Started ZkWebSpringBootApplication in 8.271 seconds (JVM running for 9.277)
3>. Add the ZooKeeper nodes in the web UI and save them
4>. Browse the ZooKeeper data through the web UI
三. Building a fully distributed Kafka cluster
1>. Download Kafka from the official site (kafka_2.11-0.10.2.1.tgz)
[root@node106.yinzhengjie.org.cn ~]# wget https://archive.apache.org/dist/kafka/0.10.2.1/kafka_2.11-0.10.2.1.tgz --2018-11-10 01:21:08-- https://archive.apache.org/dist/kafka/0.10.2.1/kafka_2.11-0.10.2.1.tgz Resolving archive.apache.org (archive.apache.org)... 163.172.17.199 Connecting to archive.apache.org (archive.apache.org)|163.172.17.199|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 37664956 (36M) [application/x-gzip] Saving to: ‘kafka_2.11-0.10.2.1.tgz’ 100%[====================================================================================================================================================================================================================================>] 37,664,956 228KB/s in 2m 58s 2018-11-10 01:24:07 (207 KB/s) - ‘kafka_2.11-0.10.2.1.tgz’ saved [37664956/37664956] [root@node106.yinzhengjie.org.cn ~]# [root@node106.yinzhengjie.org.cn ~]# ll total 36784 -rw-r--r-- 1 root root 37664956 Apr 27 2017 kafka_2.11-0.10.2.1.tgz [root@node106.yinzhengjie.org.cn ~]#
2>. Extract Kafka and configure the environment variables
[root@node106.yinzhengjie.org.cn ~]# tar -zxf kafka_2.11-0.10.2.1.tgz -C /home/softwares/ [root@node106.yinzhengjie.org.cn ~]# [root@node106.yinzhengjie.org.cn ~]# ll /home/softwares/kafka_2.11-0.10.2.1/ total 48 drwxr-xr-x 3 root root 4096 Apr 22 2017 bin drwxr-xr-x 2 root root 4096 Apr 22 2017 config drwxr-xr-x 2 root root 4096 Jul 11 14:23 libs -rw-r--r-- 1 root root 28824 Apr 22 2017 LICENSE -rw-r--r-- 1 root root 336 Apr 22 2017 NOTICE drwxr-xr-x 2 root root 47 Apr 22 2017 site-docs [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# tail -3 /etc/profile #ADD kafka PATH BY yinzhengjie KAFKA_HOME=/home/softwares/kafka_2.11-0.10.2.1 PATH=$PATH:$KAFKA_HOME/bin [root@node106.yinzhengjie.org.cn ~]# [root@node106.yinzhengjie.org.cn ~]# source /etc/profile [root@node106.yinzhengjie.org.cn ~]#
3>. Modify the Kafka startup script
[root@node106.yinzhengjie.org.cn ~]# cat /home/softwares/kafka_2.11-0.10.2.1/bin/kafka-server-start.sh #!/bin/bash # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. if [ $# -lt 1 ]; then echo "USAGE: $0 [-daemon] server.properties [--override property=value]*" exit 1 fi base_dir=$(dirname $0) if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties" fi if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
#The default Kafka heap is 1G, which is clearly not enough for a real production environment. "Kafka: The Definitive Guide" recommends 5G and "Apache Kafka实战" recommends 6G; the difference is small, so we configure 6G here. Book advice is static, though: if Kafka with a 6G heap suffers frequent Full GCs, enlarge it further. This is how I configure it in production. Note that if your VM has less than 6G of available memory, this setting may throw an OOM error!
  #export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
  export KAFKA_HEAP_OPTS="-Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M -XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac
#As this last line shows, the script delegates to kafka-run-class.sh. If we configure the heap in this file, do not configure it again in kafka-run-class.sh, otherwise the heap set here takes no effect!
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
[root@node106.yinzhengjie.org.cn ~]#
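Before raising `KAFKA_HEAP_OPTS` to 6G, it is worth confirming the host actually has that much memory free, since an oversized `-Xms` can fail at startup. A minimal pre-flight sketch — `check_heap_fits` is a hypothetical helper, not part of Kafka's scripts:

```shell
# Hypothetical pre-flight check: warn when the requested heap (MB) exceeds
# the memory available on the host (MB), which risks an OOM at broker start.
check_heap_fits() {
    local heap_mb="$1" avail_mb="$2"
    if [ "$heap_mb" -gt "$avail_mb" ]; then
        echo "WARN: heap ${heap_mb}MB exceeds available ${avail_mb}MB"
        return 1
    fi
    echo "OK: heap ${heap_mb}MB fits in ${avail_mb}MB"
}

# On a real Linux host, avail_mb could be read from /proc/meminfo, e.g.:
#   awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo
check_heap_fits 6144 16384
```

Remember that the OS page cache also needs headroom: Kafka relies on it heavily, so the heap should never consume all physical memory.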
4>. Distribute the Kafka installation to the other nodes
[root@node106.yinzhengjie.org.cn ~]# scp /etc/profile node107.yinzhengjie.org.cn:/etc/profile profile 100% 2126 3.1MB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# scp /etc/profile node108.yinzhengjie.org.cn:/etc/profile profile 100% 2126 2.8MB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# scp -r /home/softwares/kafka_2.11-0.10.2.1/ node107.yinzhengjie.org.cn:/home/softwares/ ...... jackson-databind-2.8.5.jar 100% 1207KB 83.6MB/s 00:00 jackson-annotations-2.8.0.jar 100% 54KB 40.5MB/s 00:00 jackson-core-2.8.5.jar 100% 274KB 32.3MB/s 00:00 connect-api-0.10.2.1.jar 100% 55KB 35.7MB/s 00:00 connect-runtime-0.10.2.1.jar 100% 297KB 78.8MB/s 00:00 connect-transforms-0.10.2.1.jar 100% 54KB 41.6MB/s 00:00 jackson-jaxrs-json-provider-2.8.5.jar 100% 15KB 15.7MB/s 00:00 jersey-container-servlet-2.24.jar 100% 18KB 21.0MB/s 00:00 jetty-server-9.2.15.v20160210.jar 100% 409KB 76.7MB/s 00:00 jetty-servlet-9.2.15.v20160210.jar 100% 113KB 61.3MB/s 00:00 jetty-servlets-9.2.15.v20160210.jar 100% 122KB 58.9MB/s 00:00 reflections-0.9.10.jar 100% 127KB 60.0MB/s 00:00 jackson-jaxrs-base-2.8.5.jar 100% 32KB 34.6MB/s 00:00 jackson-module-jaxb-annotations-2.8.5.jar 100% 34KB 28.9MB/s 00:00 jersey-container-servlet-core-2.24.jar 100% 65KB 50.0MB/s 00:00 jersey-common-2.24.jar 100% 698KB 83.4MB/s 00:00 jersey-server-2.24.jar 100% 919KB 59.9MB/s 00:00 javax.ws.rs-api-2.0.1.jar 100% 113KB 55.9MB/s 00:00 javax.servlet-api-3.1.0.jar 100% 94KB 47.1MB/s 00:00 jetty-http-9.2.15.v20160210.jar 100% 124KB 59.9MB/s 00:00 jetty-io-9.2.15.v20160210.jar 100% 106KB 58.5MB/s 00:00 jetty-security-9.2.15.v20160210.jar 100% 94KB 44.0MB/s 00:00 jetty-continuation-9.2.15.v20160210.jar 100% 16KB 22.8MB/s 00:00 jetty-util-9.2.15.v20160210.jar 100% 360KB 78.7MB/s 00:00 guava-18.0.jar 100% 2203KB 69.1MB/s 00:00 javax.inject-2.5.0-b05.jar 100% 5952 7.6MB/s 00:00 javax.annotation-api-1.2.jar 100% 26KB 23.6MB/s 00:00 jersey-guava-2.24.jar 100% 949KB 61.9MB/s 00:00 hk2-api-2.5.0-b05.jar 100% 174KB 71.1MB/s 00:00 hk2-locator-2.5.0-b05.jar 100% 180KB 66.8MB/s 00:00 osgi-resource-locator-1.0.1.jar 100% 20KB 24.4MB/s 00:00 jersey-client-2.24.jar 100% 165KB 22.4MB/s 00:00 jersey-media-jaxb-2.24.jar 100% 71KB 47.4MB/s 00:00 validation-api-1.1.0.Final.jar 100% 62KB 49.6MB/s 
00:00 hk2-utils-2.5.0-b05.jar 100% 116KB 58.6MB/s 00:00 aopalliance-repackaged-2.5.0-b05.jar 100% 14KB 16.7MB/s 00:00 javax.inject-1.jar 100% 2497 5.0MB/s 00:00 jackson-annotations-2.8.5.jar 100% 54KB 43.8MB/s 00:00 javassist-3.20.0-GA.jar 100% 733KB 77.1MB/s 00:00 connect-json-0.10.2.1.jar 100% 42KB 36.3MB/s 00:00 connect-file-0.10.2.1.jar 100% 18KB 20.5MB/s 00:00 kafka-streams-0.10.2.1.jar 100% 593KB 79.4MB/s 00:00 rocksdbjni-5.0.1.jar 100% 7232KB 74.7MB/s 00:00 kafka-streams-examples-0.10.2.1.jar 100% 36KB 33.5MB/s 00:00 kafka_2.11-0.10.2.1-site-docs.tgz 100% 1966KB 94.0MB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# scp -r /home/softwares/kafka_2.11-0.10.2.1/ node108.yinzhengjie.org.cn:/home/softwares/ ...... jackson-databind-2.8.5.jar 100% 1207KB 90.6MB/s 00:00 jackson-annotations-2.8.0.jar 100% 54KB 45.4MB/s 00:00 jackson-core-2.8.5.jar 100% 274KB 73.3MB/s 00:00 connect-api-0.10.2.1.jar 100% 55KB 36.5MB/s 00:00 connect-runtime-0.10.2.1.jar 100% 297KB 34.4MB/s 00:00 connect-transforms-0.10.2.1.jar 100% 54KB 39.5MB/s 00:00 jackson-jaxrs-json-provider-2.8.5.jar 100% 15KB 18.1MB/s 00:00 jersey-container-servlet-2.24.jar 100% 18KB 18.5MB/s 00:00 jetty-server-9.2.15.v20160210.jar 100% 409KB 64.7MB/s 00:00 jetty-servlet-9.2.15.v20160210.jar 100% 113KB 64.9MB/s 00:00 jetty-servlets-9.2.15.v20160210.jar 100% 122KB 62.9MB/s 00:00 reflections-0.9.10.jar 100% 127KB 61.6MB/s 00:00 jackson-jaxrs-base-2.8.5.jar 100% 32KB 32.4MB/s 00:00 jackson-module-jaxb-annotations-2.8.5.jar 100% 34KB 35.3MB/s 00:00 jersey-container-servlet-core-2.24.jar 100% 65KB 47.4MB/s 00:00 jersey-common-2.24.jar 100% 698KB 83.5MB/s 00:00 jersey-server-2.24.jar 100% 919KB 59.3MB/s 00:00 javax.ws.rs-api-2.0.1.jar 100% 113KB 55.1MB/s 00:00 javax.servlet-api-3.1.0.jar 100% 94KB 52.5MB/s 00:00 jetty-http-9.2.15.v20160210.jar 100% 124KB 60.0MB/s 00:00 jetty-io-9.2.15.v20160210.jar 100% 106KB 61.5MB/s 00:00 jetty-security-9.2.15.v20160210.jar 100% 94KB 51.1MB/s 00:00 jetty-continuation-9.2.15.v20160210.jar 100% 16KB 20.2MB/s 00:00 jetty-util-9.2.15.v20160210.jar 100% 360KB 79.1MB/s 00:00 guava-18.0.jar 100% 2203KB 78.6MB/s 00:00 javax.inject-2.5.0-b05.jar 100% 5952 6.2MB/s 00:00 javax.annotation-api-1.2.jar 100% 26KB 25.8MB/s 00:00 jersey-guava-2.24.jar 100% 949KB 61.6MB/s 00:00 hk2-api-2.5.0-b05.jar 100% 174KB 65.6MB/s 00:00 hk2-locator-2.5.0-b05.jar 100% 180KB 67.9MB/s 00:00 osgi-resource-locator-1.0.1.jar 100% 20KB 24.9MB/s 00:00 jersey-client-2.24.jar 100% 165KB 58.3MB/s 00:00 jersey-media-jaxb-2.24.jar 100% 71KB 48.4MB/s 00:00 validation-api-1.1.0.Final.jar 100% 62KB 50.6MB/s 
00:00 hk2-utils-2.5.0-b05.jar 100% 116KB 60.5MB/s 00:00 aopalliance-repackaged-2.5.0-b05.jar 100% 14KB 17.3MB/s 00:00 javax.inject-1.jar 100% 2497 3.9MB/s 00:00 jackson-annotations-2.8.5.jar 100% 54KB 45.2MB/s 00:00 javassist-3.20.0-GA.jar 100% 733KB 86.3MB/s 00:00 connect-json-0.10.2.1.jar 100% 42KB 32.7MB/s 00:00 connect-file-0.10.2.1.jar 100% 18KB 21.9MB/s 00:00 kafka-streams-0.10.2.1.jar 100% 593KB 48.3MB/s 00:00 rocksdbjni-5.0.1.jar 100% 7232KB 71.9MB/s 00:00 kafka-streams-examples-0.10.2.1.jar 100% 36KB 35.2MB/s 00:00 kafka_2.11-0.10.2.1-site-docs.tgz 100% 1966KB 73.9MB/s 00:00 [root@node106.yinzhengjie.org.cn ~]#
5>. Modify the Kafka configuration file (server.properties)
[root@node106.yinzhengjie.org.cn ~]# cat /home/softwares/kafka_2.11-0.10.2.1/config/server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# Unique ID of this broker within the cluster; must be a positive integer. If the server's IP address changes but broker.id does not, consumers are unaffected.
broker.id=106

# Deleting a topic does not actually delete it; the delete command only marks the topic for deletion in ZooKeeper. The delete.topic.enable switch must be turned on beforehand, or the deletion is never carried out.
delete.topic.enable=true

# Whether topics may be created automatically; if false, topics must be created explicitly with the command-line tools.
auto.create.topics.enable=false

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://172.30.1.106:9092

# Broker service port
port=9092

# Host address of the broker. If set, the broker binds to this address; if not, it binds to all interfaces and publishes one of them to ZooKeeper. Usually left unset.
host.name=node106.yinzhengjie.org.cn

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# Kafka 0.9.x added the advertised.listeners setting; on 0.9.x and later, do not use advertised.host.name and advertised.host.port, which are deprecated.
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols; by default they are the same. See the configuration documentation for details.
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# Maximum number of threads handling network requests
num.network.threads=30

# Number of threads handling disk I/O
num.io.threads=30

# Send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=5242880

# Receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=5242880

# Maximum size of a request the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

# Maximum number of requests queued for the I/O threads; beyond this, the network threads stop accepting new requests. A self-protection mechanism.
queued.max.requests=1000

############################# Log Basics #############################

# Log directories, comma-separated. With multiple disks, configuring one directory per disk improves I/O throughput.
log.dirs=/home/data/kafka/logs,/home/data/kafka/logs2,/home/data/kafka/logs3

# Default number of partitions per topic; overridden by the value given when the topic is created.
num.partitions=20

# Number of threads per data directory for log recovery at startup and log flushing at shutdown; default 1.
num.recovery.threads.per.data.dir=30

# Default replication factor
default.replication.factor=2

# Maximum size, in bytes, of a single message the server will accept.
message.max.bytes=104857600

# Automatic leader rebalancing: if true, the controller periodically tries to balance partition leadership across all brokers, moving leadership to the preferred replica.
# auto.leader.rebalance.enable=false

############################# Log Flush Policy #############################

# Number of messages to accumulate before forcing an fsync of a partition's log. Lowering this syncs data to disk more often, which hurts performance. Replication, rather than per-machine fsync, is the usual way to guarantee durability, but this option adds extra safety. Default 10000.
#log.flush.interval.messages=10000

# Maximum interval, in ms, between two fsync calls. Even if log.flush.interval.messages has not been reached, fsync is called once this time elapses. Default 3000 ms.
#log.flush.interval.ms=10000

############################# Log Retention Policy #############################

# Log retention time (hours|minutes); default 7 days (168 hours). Older data is handled according to the cleanup policy. Whichever of the time and bytes limits is reached first triggers cleanup.
log.retention.hours=168

# Maximum bytes of log data to retain; beyond this, data is handled according to the cleanup policy.
#log.retention.bytes=1073741824

# Size limit of a log segment file; once exceeded, writes roll over to a new segment (-1 means no limit).
log.segment.bytes=536870912

# Force a new segment after this much time
#log.roll.hours = 24*7

# How often to check log segment files against the deletion policy (log.retention.hours or log.retention.bytes)
log.retention.check.interval.ms=600000

# Whether to enable the log cleaner
#log.cleaner.enable=false

# Log cleanup policy: delete or compact. Applies to expired data or logs that hit the size limit; overridden by the value given when the topic is created.
#log.cleanup.policy=delete

# Number of log compaction threads
#log.cleaner.threads=2

# Maximum time delete markers are retained in compacted logs
#log.cleaner.delete.retention.ms=3600000

############################# Zookeeper #############################

# ZooKeeper ensemble addresses; multiple entries are comma-separated.
zookeeper.connect=node106.yinzhengjie.org.cn:2181,node107.yinzhengjie.org.cn:2181,node108.yinzhengjie.org.cn:2181

# ZooKeeper session timeout, i.e. the heartbeat interval; a broker that does not respond within it is considered dead. Do not set it too large.
zookeeper.session.timeout.ms=180000

# Timeout for connecting to ZooKeeper. (Related note: consumers commit offsets to ZooKeeper on a time schedule, not per message, so if a ZooKeeper update fails and the consumer restarts, it may receive messages it has already consumed.)
zookeeper.connection.timeout.ms=6000

# Maximum request size in bytes; effectively also caps the record size (note the server has its own record-size limits, which may differ from this setting). It bounds the size of each batched produce request so the producer cannot send enormous requests.
max.request.size=104857600

# Maximum bytes fetched per partition in each fetch request. These bytes are held in memory per partition, so this controls the consumer's memory use. It must be at least as large as the server's maximum message size, or the producer may send messages larger than the consumer can fetch.
fetch.message.max.bytes=104857600

# How far a ZooKeeper follower may lag behind the leader, i.e. the sync time between the ZooKeeper leader and its followers.
#zookeeper.sync.time.ms=2000

############################# Replica Basics #############################

# Timeout for the leader to receive a follower's fetch request; default 10 seconds.
# replica.lag.time.max.ms=30000

# If replicas lag too far behind, the partition's replicas are considered failed. Network latency always causes some replica lag; severe lag makes the leader conclude that the replica's network is slow or its throughput is limited. With few brokers or a constrained network, raise this value. This is the maximum number of messages a follower may lag behind the leader, and it is broker-global: set too large, genuinely lagging followers are never removed; set too small, followers churn in and out of the ISR. Since no single good value exists, the setting is not recommended, and newer Kafka versions reportedly removed it.
#replica.lag.max.messages=4000

# Socket timeout between follower and leader
#replica.socket.timeout.ms=30000

# Maximum bytes per follower fetch
replica.fetch.max.bytes=104857600

# Timeout before a follower's fetch request is re-sent
replica.fetch.wait.max.ms=2000

# Minimum bytes per fetch
#replica.fetch.min.bytes=1

# Since 0.11.0.0 the default of unclean.leader.election.enable changed from true to false, which disables unclean leader election: a replica not in the ISR (in-sync replica) list cannot be promoted to partition leader. The cluster then favors durability over availability; if the ISR contains no other replica, the partition becomes unreadable and unwritable.
unclean.leader.election.enable=false

# Number of fetcher threads on the follower; balances sync speed against system load
num.replica.fetchers=5

# Socket timeout for communication between the partition leader and replicas
#controller.socket.timeout.ms=30000

# Message queue size for data sync between the partition leader and replicas
#controller.message.queue.size=10

# Which version of the inter-broker protocol to use. Typically bumped after all brokers have been upgraded to the new version. Set it during upgrades.
#inter.broker.protocol.version=0.10.1

# Message format version the broker uses when appending messages to the log; must be a valid ApiVersion, e.g. 0.8.2, 0.9.0.0, 0.10.0. By setting a specific format version the operator asserts that all existing messages on disk are at or below it. Setting it incorrectly causes consumers on older versions to fail, since they will receive messages in a format they do not understand.
#log.message.format.version=0.10.1
[root@node106.yinzhengjie.org.cn ~]#
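The buffer and segment settings in this file are raw byte counts; a quick sanity check of what they work out to (an illustrative shell loop, not part of the original config):

```shell
# Convert the byte-valued settings from server.properties into MB.
for kv in socket.send.buffer.bytes=5242880 \
          socket.request.max.bytes=104857600 \
          log.segment.bytes=536870912; do
  key=${kv%%=*}      # text before the first '='
  bytes=${kv##*=}    # text after the last '='
  echo "${key}: $((bytes / 1024 / 1024)) MB"   # 5 MB, 100 MB, 512 MB
done
```

So the socket buffers are 5 MB, the maximum request is 100 MB, and each log segment rolls at 512 MB.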
The server.properties on node107 is identical to the node106 file above except for the two per-node values:

broker.id=107
host.name=node107.yinzhengjie.org.cn
Likewise, the server.properties on node108 is identical to the node106 file above except for the two per-node values:

broker.id=108
host.name=node108.yinzhengjie.org.cn
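Since the three brokers' files differ only in broker.id and host.name, each node's config can be rendered from one shared template. A minimal sketch (paths and hostnames follow this deployment; the render_config helper is illustrative, not part of the original post):

```shell
# Render a per-node server.properties from a shared template by
# rewriting only the two per-node keys.
render_config() {
  template="$1"; broker_id="$2"; hostname="$3"
  sed -e "s/^broker\.id=.*/broker.id=${broker_id}/" \
      -e "s/^host\.name=.*/host.name=${hostname}/" \
      "$template"
}

# Demo against a tiny stand-in template:
printf 'broker.id=106\nhost.name=node106.yinzhengjie.org.cn\nport=9092\n' > /tmp/server.properties.tmpl
render_config /tmp/server.properties.tmpl 107 node107.yinzhengjie.org.cn
```

The rendered output could then be copied to each node (e.g. via scp or an ansible copy task) instead of editing three files by hand.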
6>. Starting the Kafka cluster
[root@node106.yinzhengjie.org.cn ~]# kafka-server-start.sh /home/softwares/kafka_2.11-0.10.2.1/config/server.properties >> /dev/null &     # start the broker
[1] 13760
[root@node106.yinzhengjie.org.cn ~]# 
[root@node106.yinzhengjie.org.cn ~]# jps
13760 Kafka
3745 QuorumPeerMain
14083 Jps
[root@node106.yinzhengjie.org.cn ~]# 
[root@node106.yinzhengjie.org.cn ~]# kafka-server-stop.sh     # stop the broker
[root@node106.yinzhengjie.org.cn ~]# 
[root@node106.yinzhengjie.org.cn ~]# jps
3745 QuorumPeerMain
14810 Jps
[root@node106.yinzhengjie.org.cn ~]#
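The manual `jps` checks above can also be scripted. A small sketch (the broker_running helper is our own name, not a Kafka tool):

```shell
# jps prints one "<pid> <MainClass>" line per JVM, so a running broker
# shows up as "<pid> Kafka".
broker_running() {
  jps 2>/dev/null | grep -q '^[0-9][0-9]* Kafka$'
}

if broker_running; then
  echo "Kafka broker is running"
else
  echo "Kafka broker is not running"
fi
```

This kind of check is handy in start scripts or monitoring hooks, where grepping `jps` output by eye is error-prone.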
[root@node106.yinzhengjie.org.cn ~]# ansible kafka -m shell -a 'jps'
node107.yinzhengjie.org.cn | SUCCESS | rc=0 >>
3705 QuorumPeerMain
12313 Jps

node108.yinzhengjie.org.cn | SUCCESS | rc=0 >>
12308 Jps
3708 QuorumPeerMain

node106.yinzhengjie.org.cn | SUCCESS | rc=0 >>
3745 QuorumPeerMain
15750 Jps

[root@node106.yinzhengjie.org.cn ~]# 
[root@node106.yinzhengjie.org.cn ~]# kafka_manager.sh start
========== node106.yinzhengjie.org.cn start ================
node106.yinzhengjie.org.cn service started
========== node107.yinzhengjie.org.cn start ================
node107.yinzhengjie.org.cn service started
========== node108.yinzhengjie.org.cn start ================
node108.yinzhengjie.org.cn service started
[root@node106.yinzhengjie.org.cn ~]# 
[root@node106.yinzhengjie.org.cn ~]# ansible kafka -m shell -a 'jps'
node106.yinzhengjie.org.cn | SUCCESS | rc=0 >>
3745 QuorumPeerMain
16218 Jps
15788 Kafka

node108.yinzhengjie.org.cn | SUCCESS | rc=0 >>
12338 Kafka
12725 Jps
3708 QuorumPeerMain

node107.yinzhengjie.org.cn | SUCCESS | rc=0 >>
12343 Kafka
3705 QuorumPeerMain
12730 Jps

[root@node106.yinzhengjie.org.cn ~]#
[root@node106.yinzhengjie.org.cn ~]# ansible kafka -m shell -a 'jps'
node106.yinzhengjie.org.cn | SUCCESS | rc=0 >>
3745 QuorumPeerMain
16218 Jps
15788 Kafka

node108.yinzhengjie.org.cn | SUCCESS | rc=0 >>
12338 Kafka
12725 Jps
3708 QuorumPeerMain

node107.yinzhengjie.org.cn | SUCCESS | rc=0 >>
12343 Kafka
3705 QuorumPeerMain
12730 Jps

[root@node106.yinzhengjie.org.cn ~]# 
[root@node106.yinzhengjie.org.cn ~]# kafka_manager.sh stop     # On a production cluster with a lot of data this command may take a while to stop all brokers. Do not kill the broker processes directly in the meantime, or data may be lost.
========== node106.yinzhengjie.org.cn stop ================
node106.yinzhengjie.org.cn service stopped
========== node107.yinzhengjie.org.cn stop ================
node107.yinzhengjie.org.cn service stopped
========== node108.yinzhengjie.org.cn stop ================
node108.yinzhengjie.org.cn service stopped
[root@node106.yinzhengjie.org.cn ~]# 
[root@node106.yinzhengjie.org.cn ~]# ansible kafka -m shell -a 'jps'
node107.yinzhengjie.org.cn | SUCCESS | rc=0 >>
3705 QuorumPeerMain
12846 Jps

node108.yinzhengjie.org.cn | SUCCESS | rc=0 >>
12841 Jps
3708 QuorumPeerMain

node106.yinzhengjie.org.cn | SUCCESS | rc=0 >>
16384 Jps
3745 QuorumPeerMain

[root@node106.yinzhengjie.org.cn ~]#
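kafka_manager.sh is the author's own wrapper script, defined earlier in this series. A hedged reconstruction of what such a wrapper looks like (the cluster_cmd name, the ssh loop, and the use of the -daemon flag are our assumptions, shaped only to match the output seen in the transcript above):

```shell
# Sketch of a kafka_manager.sh-style start/stop wrapper for the cluster.
NODES="node106.yinzhengjie.org.cn node107.yinzhengjie.org.cn node108.yinzhengjie.org.cn"
KAFKA_HOME=/home/softwares/kafka_2.11-0.10.2.1

cluster_cmd() {
  action="$1"
  case "$action" in
    start) cmd="${KAFKA_HOME}/bin/kafka-server-start.sh -daemon ${KAFKA_HOME}/config/server.properties" ;;
    stop)  cmd="${KAFKA_HOME}/bin/kafka-server-stop.sh" ;;   # sends SIGTERM for a clean shutdown
    *)     echo "Usage: kafka_manager.sh {start|stop}" >&2; return 1 ;;
  esac
  for node in $NODES; do
    echo "========== ${node} ${action} ================"
    ssh "$node" "$cmd"
  done
}

# Usage: cluster_cmd start   (or: cluster_cmd stop)
```

Running the stop command through kafka-server-stop.sh on every node, rather than killing the JVMs, preserves the clean-shutdown behavior the warning above depends on.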
4. Common Apache Kafka operations commands
For details, see: https://www.cnblogs.com/yinzhengjie/p/9210029.html