First, a word on the goal: it is simply to collect nginx logs plus the logs of various applications:
nginx logs
a reserved slot (for other application logs later)
I won't go over what Flume and Kafka do here; please look that up yourself.
I. Environment
AWS Red Hat Enterprise Linux Server release 7.1 (Maipo)
II. Required packages
apache-flume-1.6.0-bin.tar.gz
kafka_2.10-0.8.1.1.tgz
jdk-7u67-linux-x64.tar.gz
KafkaOffsetMonitor-assembly-0.2.0.jar
kafka-manager-1.2.3.zip
zookeeper-3.4.7.tar.gz
III. Setup
First, take a look at our hosts configuration:
192.168.1.10 zoo1 zoo2 zoo3 kafka_1 kafka_2 kafka_3
ls /opt/tools/
apache-tomcat-7.0.65  flume  jdk1.7.0_67  kafka  nginx  redis-3.0.5  zookeeper
1. Install ZooKeeper
ZooKeeper's configuration is fairly simple.
Deploy three ZooKeeper instances.
An example configuration file:
ls
zoo1  zoo2  zoo3  zkui     (the last one is ZooKeeper's web management UI)

ls master/conf/
configuration.xsl  log4j.properties  zoo.cfg  zoo_sample.cfg

[root@ip-172-31-9-125 zookeeper]# cat master/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/tools/zookeeper/zoo1/data
dataLogDir=/opt/tools/zookeeper/zoo1/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.0=zoo1:8880:7770
server.1=zoo2:8881:7771
server.2=zoo3:8882:7772
Start the three ZooKeeper instances separately; I won't go into the details here.
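For completeness, here is a minimal start sketch. It assumes each of zoo1/zoo2/zoo3 is a full ZooKeeper install under /opt/tools/zookeeper with its own zoo.cfg (clientPort 2181/2182/2183, as referenced by the Kafka configs below) and its own data directory; adjust paths to your layout.

#!/bin/bash
# sketch only: directory layout and ports are assumptions based on the listing above
cd /opt/tools/zookeeper

# each server needs a myid file whose number matches its server.N line in zoo.cfg
echo 0 > zoo1/data/myid
echo 1 > zoo2/data/myid
echo 2 > zoo3/data/myid

# start each instance; zkServer.sh reads the zoo.cfg of its own install
for i in 1 2 3; do
    zoo$i/bin/zkServer.sh start
done

# verify the ensemble formed: one leader, two followers
for i in 1 2 3; do
    zoo$i/bin/zkServer.sh status
done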
2. kafka
ls
kafka_1  kafka_2  kafka_3  kafka-manager-1.2.3  kafkaOffsetMonitor  kfkstart.sh

cat kfkstart.sh
#!/bin/bash
nohup /opt/tools/kafka/kafka_1/bin/kafka-server-start.sh /opt/tools/kafka/kafka_1/config/server.properties &
nohup /opt/tools/kafka/kafka_2/bin/kafka-server-start.sh /opt/tools/kafka/kafka_2/config/server.properties &
nohup /opt/tools/kafka/kafka_3/bin/kafka-server-start.sh /opt/tools/kafka/kafka_3/config/server.properties &
nohup /opt/tools/kafka/kafka-manager-1.2.3/bin/kafka-manager -Dkafka-manager.zkhosts="zoo1:2181,zoo2:2182,zoo3:2183" &
cat /opt/tools/kafka/kafka_1/config/server.properties

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# important: this is the unique broker ID

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092
# the listening port

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=kafka_1
# note: this is the hostname we configured in /etc/hosts earlier

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=2

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/tools/kafka/kafka_1/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2

############################# Log Flush Policy #############################

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=zoo1:2181,zoo2:2182,zoo3:2183

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
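The kafka_2 and kafka_3 configs are not shown; they should differ from kafka_1 in only a handful of keys. The port values below are assumptions inferred from the brokerList used in the Flume sink later (kafka_1:9092, kafka_2:9093, kafka_3:9094).

# quick way to compare the three broker configs (sketch, paths assumed)
grep -E '^(broker\.id|port|host\.name|log\.dirs)=' \
    /opt/tools/kafka/kafka_{1,2,3}/config/server.properties
# expected differences, roughly:
#   kafka_1: broker.id=0  port=9092  host.name=kafka_1  log.dirs=/opt/tools/kafka/kafka_1/logs
#   kafka_2: broker.id=1  port=9093  host.name=kafka_2  log.dirs=/opt/tools/kafka/kafka_2/logs
#   kafka_3: broker.id=2  port=9094  host.name=kafka_3  log.dirs=/opt/tools/kafka/kafka_3/logs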
Start Kafka (run kfkstart.sh shown above).
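With the brokers up, create the topic that the Flume sink below will write to (it is named test in this walkthrough). This is a sketch using the kafka-topics.sh tool shipped with Kafka 0.8.1; the partition count here is just an example.

# create the topic across the three brokers
/opt/tools/kafka/kafka_1/bin/kafka-topics.sh --create \
    --zookeeper zoo1:2181,zoo2:2182,zoo3:2183 \
    --replication-factor 3 --partitions 2 --topic test

# confirm it exists and check the partition/replica assignment
/opt/tools/kafka/kafka_1/bin/kafka-topics.sh --describe \
    --zookeeper zoo1:2181,zoo2:2182,zoo3:2183 --topic test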
Configure the offset-monitoring tool (KafkaOffsetMonitor):
ls kafkaOffsetMonitor
KafkaOffsetMonitor-assembly-0.2.0.jar  logs  offsetapp.db  start.sh

cat kafkaOffsetMonitor/start.sh
#!/bin/bash
nohup java -cp KafkaOffsetMonitor-assembly-0.2.0.jar com.quantifind.kafka.offsetapp.OffsetGetterWeb --zk zoo1:2181,zoo2:2182,zoo3:2183 --port 8087 --refresh 10.seconds --retain 1.days 1>logs/stdout.log 2>logs/stderr.log &
The Kafka management tool (kafka-manager):
cat kafka-manager-1.2.3/conf/application.conf
# Copyright 2015 Yahoo Inc. Licensed under the Apache License, Version 2.0
# See accompanying LICENSE file.

# This is the main configuration file for the application.
# ~~~~~

# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
application.secret="changeme"
application.secret=${?APPLICATION_SECRET}

# The application languages
# ~~~~~
application.langs="en"

# Global object class
# ~~~~~
# Define the Global object class for this application.
# Default to Global in the root package.
# global=Global

# Database configuration
# ~~~~~
# You can declare as many datasources as you want.
# By convention, the default datasource is named `default`
#
# db.default.driver=org.h2.Driver
# db.default.url="jdbc:h2:mem:play"
# db.default.user=sa
# db.default.password=
#
# You can expose this datasource via JNDI if needed (Useful for JPA)
# db.default.jndiName=DefaultDS

# Evolutions
# ~~~~~
# You can disable evolutions if needed
# evolutionplugin=disabled

# Ebean configuration
# ~~~~~
# You can declare as many Ebean servers as you want.
# By convention, the default server is named `default`
#
# ebean.default="models.*"

# Logger
# ~~~~~
# You can also configure logback (http://logback.qos.ch/), by providing a logger.xml file in the conf directory.

# Root logger:
logger.root=ERROR

# Logger used by the framework:
logger.play=INFO

# Logger provided to your application:
logger.application=DEBUG

kafka-manager.zkhosts="zoo1:2181,zoo2:2182,zoo3:2183"
kafka-manager.zkhosts=${?ZK_HOSTS}

pinned-dispatcher.type="PinnedDispatcher"
pinned-dispatcher.executor="thread-pool-executor"
3. Flume
(1) Spooling-directory mode and exec mode
cat conf/flume-conf.properties
# the agent is named stage_nginx
stage_nginx.sources = S1
stage_nginx.channels = M1
stage_nginx.sinks = sink

# source settings; two modes are shown here
# spooling-directory mode: watch the nginx log directory
# (spooldir expects files that are complete, i.e. no longer being appended to)
stage_nginx.sources.S1.type = spooldir
stage_nginx.sources.S1.channels = M1
stage_nginx.sources.S1.spoolDir = /logs/nginx/log/shop

# exec mode: run a shell command
# if there are many log files, just start a few more Flume agents... I have not found a better way
#stage_nginx.sources.S1.type = exec
#stage_nginx.sources.S1.channels = M1
#stage_nginx.sources.S1.command = tail -F /logs/nginx/log/shop/access.log

# the sink: write to Kafka
stage_nginx.sinks.sink.type = org.apache.flume.sink.kafka.KafkaSink
# !!!! the topic you created yourself
stage_nginx.sinks.sink.topic = test
stage_nginx.sinks.sink.brokerList = kafka_1:9092,kafka_2:9093,kafka_3:9094
stage_nginx.sinks.sink.requiredAcks = 0
stage_nginx.sinks.sink.batchSize = 20
stage_nginx.sinks.sink.channel = M1

# the channel
stage_nginx.channels.M1.type = memory
stage_nginx.channels.M1.capacity = 100
# Other config values specific to each type of channel (sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel
Start Flume:
./bin/flume-ng agent -c /opt/tools/flume/conf/ -f /opt/tools/flume/conf/flume-conf.properties -n stage_nginx
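Once the agent is running, the pipeline can be checked end to end with the console consumer that ships with Kafka 0.8 (a quick sanity check, assuming the topic is named test as in the Flume config above): drop a finished log file into /logs/nginx/log/shop (or append to access.log when using exec mode) and its lines should show up here.

# read everything in the topic from the beginning
/opt/tools/kafka/kafka_1/bin/kafka-console-consumer.sh \
    --zookeeper zoo1:2181,zoo2:2182,zoo3:2183 \
    --topic test --from-beginning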
Finally, search for "python kafka consumer" to write a program that consumes the log data.