1.1 Extract (to /opt)
tar -zxvf scala-2.11.4.tgz
mv scala-2.11.4 scala
1.2 Configure environment variables
vi ~/.bashrc
export SCALA_HOME=/opt/scala
export PATH=$SCALA_HOME/bin:$PATH
source ~/.bashrc
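After sourcing, a quick sanity check can confirm the variables took effect. A minimal sketch; SCALA_HOME here assumes the /opt install location from step 1.1:

```shell
# Sanity check that the new variables resolve (assumes scala lives in /opt).
export SCALA_HOME=/opt/scala
export PATH=$SCALA_HOME/bin:$PATH
case ":$PATH:" in
  *":$SCALA_HOME/bin:"*) echo "scala bin is on PATH" ;;
  *)                     echo "scala bin is NOT on PATH" ;;
esac
```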
1.3 Verify that Scala installed successfully
scala -version
1.4 Install Scala on nodes 2 and 3
Copy the scala directory:
scp -r scala 192.168.252.165:/opt
scp -r scala 192.168.252.166:/opt
Copy the environment variables:
scp ~/.bashrc 192.168.252.165:~
scp ~/.bashrc 192.168.252.166:~
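The four copy commands above can be generated by one loop. A sketch: the printf lines only print each command as a dry run; pipe the output to sh to execute it, which assumes passwordless scp between the nodes:

```shell
# Dry run: print the scp commands that push scala and ~/.bashrc to the
# worker nodes. Pipe this script's output to sh to actually run them.
for host in 192.168.252.165 192.168.252.166; do
  printf 'scp -r /opt/scala %s:/opt\n' "$host"
  printf 'scp ~/.bashrc %s:~\n' "$host"
done
```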
2.1 Extract
tar -zxvf kafka_2.9.2-0.8.1.tgz
mv kafka_2.9.2-0.8.1 kafka
2.2 Configure Kafka
vi /opt/kafka/config/server.properties
Modify:
# broker.id is an integer that increments per node: 0, 1, 2, 3, 4
broker.id=0
zookeeper.connect=192.168.252.164:2181,192.168.252.165:2181,192.168.252.166:2181
2.3 Install slf4j
Unzip slf4j-1.7.6.zip
Copy slf4j-nop-1.7.6.jar from the unzipped slf4j directory into Kafka's libs directory
3.1 Copy the Kafka directory to nodes 2 and 3
cd /opt
scp -r kafka hadoop002:/opt
scp -r kafka hadoop003:/opt
3.2 Modify broker.id on nodes 2 and 3
In server.properties on nodes 2 and 3, change broker.id to 1 and 2 respectively.
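The per-node edit can be scripted with sed. A minimal sketch, demonstrated here on a throwaway file; on the real nodes you would point the same sed at /opt/kafka/config/server.properties with that node's id:

```shell
# Demonstrate the broker.id edit on a scratch copy; run the same sed
# against /opt/kafka/config/server.properties on the real node
# (id=1 on node 2, id=2 on node 3).
props=$(mktemp)
printf 'broker.id=0\nzookeeper.connect=192.168.252.164:2181\n' > "$props"
sed -i 's/^broker\.id=.*/broker.id=1/' "$props"
grep '^broker.id=' "$props"    # prints broker.id=1
```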
4.1 Run on each of the three machines:
nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties &
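The three launches can be driven from one node. A sketch, assuming passwordless ssh to the hadoop001-003 hostnames used in section 3.1; the echo makes this a dry run, so remove it to really start the brokers:

```shell
# Dry run: print the remote start command for each broker.
# Remove the leading `echo` to launch Kafka on all three nodes.
for host in hadoop001 hadoop002 hadoop003; do
  echo ssh "$host" 'nohup /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties > /tmp/kafka.out 2>&1 &'
done
```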
4.2 Use jps to check that startup succeeded
hadoop001: Kafka
hadoop002: Kafka
hadoop003: Kafka
(If messages typed into the producer show up on the consumer, the cluster is working.)
Create a topic:
/opt/kafka/bin/kafka-topics.sh --zookeeper 192.168.252.164:2181,192.168.252.165:2181,192.168.252.166:2181 --topic TestTopic --replication-factor 1 --partitions 1 --create
Produce:
/opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.252.164:9092,192.168.252.165:9092,192.168.252.166:9092 --topic TestTopic
Consume:
/opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.252.164:2181,192.168.252.165:2181,192.168.252.166:2181 --topic TestTopic --from-beginning
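kafka-topics.sh can also report the partition and replica layout via --describe. A sketch that only assembles the command, since it needs the live cluster to actually run:

```shell
# Build the describe command; execute the printed line on the cluster
# to inspect TestTopic's partitions, leaders, and replicas.
ZK=192.168.252.164:2181,192.168.252.165:2181,192.168.252.166:2181
echo /opt/kafka/bin/kafka-topics.sh --zookeeper "$ZK" --topic TestTopic --describe
```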
Fixing the Kafka "Unrecognized VM option 'UseCompressedOops'" error
vi bin/kafka-run-class.sh
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true"
fi
Remove -XX:+UseCompressedOops from that line and the error goes away (the flag is only recognized by 64-bit JVMs, so it fails on a 32-bit JDK).
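That edit can also be automated. A minimal sketch, shown on a throwaway copy of the line; run the same sed on /opt/kafka/bin/kafka-run-class.sh for the real fix:

```shell
# Strip the unsupported flag; demonstrated on a scratch file so it is
# safe to run anywhere. Point sed at bin/kafka-run-class.sh for real.
f=$(mktemp)
echo 'KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseCompressedOops -XX:+UseParNewGC"' > "$f"
sed -i 's/ -XX:+UseCompressedOops//' "$f"
grep -q UseCompressedOops "$f" || echo "flag removed"
```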