logstash + elasticsearch + kibana 3 + kafka log management system deployment 02

Driven by the company's data security and analytics needs, I investigated a log management setup that integrates GlusterFS + logstash + elasticsearch + kibana 3 + redis:

Installation, configuration, and usage notes (continued).

Part 1. GlusterFS distributed file system deployment. Background: the company wants unified collection and management of website business logs and system logs. After evaluating distributed file systems such as MooseFS (mfs) and FastDFS, GlusterFS was chosen because it is highly scalable, high-performance, highly available, and elastically expandable; its metadata-server-free design means GlusterFS has no single point of failure. Official site: www.gluster.org

1. System environment preparation:

CentOS 6.4. Servers: 192.168.10.101, 192.168.10.102, 192.168.10.188, 192.168.10.189. Client: 192.168.10.103. EPEL and GlusterFS repositories: add both; the EPEL repo also carries GlusterFS, but an older, relatively stable version. This test uses the latest 3.5.0 release.

rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
     wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

2. Deployment process

Server-side installation:

     yum -y install glusterfs glusterfs-fuse glusterfs-server
     chkconfig glusterd on
     service glusterd start
     Server-side configuration:
     Join the 4 storage nodes into one cluster. The commands below are run on the first node, but running them on any single node is enough.

     [root@db1 ~]# gluster peer probe 192.168.10.102
     probe successful
     [root@db1 ~]# gluster peer probe 192.168.10.188
     probe successful
     [root@db1 ~]# gluster peer probe 192.168.10.189
     probe successful
     Check the cluster's peer information:

     [root@db1 ~]# gluster peer status
     number of peers: 3
     hostname: 192.168.10.102
     uuid:b9437089-b2a1-4848-af2a-395f702adce8
     state: peer in cluster (connected)
     hostname: 192.168.10.188
     uuid: ce51e66f-7509-4995-9531-4c1a7dbc2893
     state: peer in cluster (connected)
     hostname: 192.168.10.189
     uuid:66d7fd67-e667-4f9b-a456-4f37bcecab29
     state: peer in cluster (connected)
     Using /data/gluster as the shared directory, create a volume named test-volume with a replica count of 2:

      sh cmd.sh "mkdir /data/gluster"
     [root@db1 ~]#  gluster volume create test-volume replica 2192.168.10.101:/data/gluster 192.168.10.102:/data/gluster192.168.10.188:/data/gluster 192.168.10.189:/data/gluster
     creation of volume test-volume has beensuccessful. please start the volume to access data.
     啓動卷:

     [root@db1 ~]# gluster volume start test-volume
     starting volume test-volume has been successful
     Check the volume status:

     [root@db1 ~]# gluster volume info
     volume name: test-volume
     type: distributed-replicate
     status: started
     number of bricks: 2 x 2 = 4
     transport-type: tcp
     bricks:
     brick1: 192.168.10.101:/data/gluster
     brick2: 192.168.10.102:/data/gluster
     brick3: 192.168.10.188:/data/gluster
     brick4: 192.168.10.189:/data/gluster
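
     As an optional sanity check before moving on to the clients (a hedged extra step; gluster volume status has been available since GlusterFS 3.3), the brick processes, ports, and online state can be inspected with:

     gluster volume status test-volume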

3. Client installation and configuration

Installation:

     yum -y install glusterfs glusterfs-fuse
     Mount:

     mount -t glusterfs 192.168.10.102:/test-volume /mnt/    (any one node can be used as the mount target; this method is recommended)

     mount -t nfs -o mountproto=tcp,vers=3 192.168.10.102:/test-volume /log/mnt/    (NFS mount; note that the remote rpcbind service must be running)
     echo "192.168.10.102:/test-volume /mnt/ glusterfs defaults,_netdev 0 0" >> /etc/fstab    (mount automatically at boot)

4. Testing

Verify file correctness

     dd if=/dev/zero of=/mnt/1.img bs=1M count=1000   # generate a test file on the mounting client
     cp /data/navy /mnt/                              # copy a file onto the storage

     Failover test.
     With a glusterfs-fuse mount, even if the target server fails, usage is completely unaffected. With NFS, pay attention to the mount options, otherwise a server-side failure can easily hang the file system and affect the service!

     # stop the storage services on one of the nodes
     service glusterd stop
     service glusterfsd stop
     # delete the test file on the mounting client
     rm -fv /mnt/navy
     # now check on the server side: on the node whose services were stopped, navy has not been deleted. Start the services again:
     service glusterd start
     # a few seconds later navy is deleted automatically. Newly added files behave the same way!
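
     While the stopped node catches up, the self-heal progress can be watched with the following check (a hedged extra step; the heal commands exist since GlusterFS 3.3):

     gluster volume heal test-volume info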

5. Common operations and maintenance commands:

Delete a volume
     gluster volume stop test-volume
     gluster volume delete test-volume
     Remove a machine from the cluster
     gluster peer detach 192.168.10.102
     Only allow the 192.168.10.* network to access glusterfs
     gluster volume set test-volume auth.allow 192.168.10.*
     Add new machines and add them to the volume (because the replica count is 2, machines must be added at least 2 at a time: 2, 4, 6, 8 ...); see the rebalance note after this list
     gluster peer probe 192.168.10.105
     gluster peer probe 192.168.10.106
     gluster volume add-brick test-volume 192.168.10.105:/data/gluster 192.168.10.106:/data/gluster
     Shrink a volume
     # before shrinking, gluster first needs to migrate the data elsewhere
     gluster volume remove-brick test-volume 192.168.10.101:/data/gluster/test-volume 192.168.10.102:/data/gluster/test-volume start
     # check the migration status
     gluster volume remove-brick test-volume 192.168.10.101:/data/gluster/test-volume 192.168.10.102:/data/gluster/test-volume status
     # commit once the migration has finished
     gluster volume remove-brick test-volume 192.168.10.101:/data/gluster/test-volume 192.168.10.102:/data/gluster/test-volume commit
     Migrate a volume
     # to migrate the data on 192.168.10.101 to 192.168.10.107, first add 192.168.10.107 to the cluster
     gluster peer probe 192.168.10.107
     gluster volume replace-brick test-volume 192.168.10.101:/data/gluster/test-volume 192.168.10.107:/data/gluster/test-volume start
     # check the migration status
     gluster volume replace-brick test-volume 192.168.10.101:/data/gluster/test-volume 192.168.10.107:/data/gluster/test-volume status
     # commit once the data migration has finished
     gluster volume replace-brick test-volume 192.168.10.101:/data/gluster/test-volume 192.168.10.107:/data/gluster/test-volume commit
     # if machine 192.168.10.101 has failed and can no longer run, force the commit and then ask gluster to run a full self-heal immediately
     gluster volume replace-brick test-volume 192.168.10.101:/data/gluster/test-volume 192.168.10.102:/data/gluster/test-volume commit force
     gluster volume heal test-volume full
     Note: the glusterd management daemon listens on TCP port 24007.
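
     Rebalance note: after add-brick on a distributed(-replicated) volume, existing data is not automatically spread onto the new bricks, so a rebalance is normally required. A minimal sketch, using the test-volume name from above:

     gluster volume rebalance test-volume start
     gluster volume rebalance test-volume status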

Part 2. Log collection system deployment

Brief explanation:

Overview of each component of the system
Logstash: a tool for collecting and forwarding system logs. It also integrates a wide range of log plugins, which greatly help the efficiency of log querying and analysis. Typically a shipper is used for log collection and an indexer for forwarding.
The Logstash shipper collects logs and forwards them to redis for storage.
The Logstash indexer reads data from redis and forwards it to elasticsearch.
redis: a database; the logstash shipper forwards logs into redis for storage, and the logstash indexer reads them back from redis and forwards them to elasticsearch.
kafka: here we replace redis with kafka. It is designed for active streaming data, with high throughput, an explicitly distributed design, and support for parallel data loading.
Elasticsearch: an open-source search engine framework. Initial deployment and usage are simple, but later on some tuning is necessary; for details see the logstash sections of http://chenlinux.com/categories.html#logstash-ref. It can be clustered across multiple nodes to improve efficiency; it stores the data written by the indexer and serves it to kibana.
Kibana: open-source web front end for visualization.

Log collection system architecture diagram:

 

Virtual server preparation:

192.168.10.143    logstash shipper              log data producer
    192.168.10.144    logstash indexer, kafka       log consumer; writes logs into the elasticsearch cluster
    192.168.10.145    elasticsearch-node1, kibana3  kibana displays the data stored in elasticsearch
    192.168.10.146    elasticsearch-node2

1. Install JDK 1.7 on the hosts (Oracle JDK 1.7+ recommended). Check with java -version, then set the Java environment variables, for example:

vim ~/.bashrc

    >>
    JAVA_HOME=/usr/java/jdk1.7.0_55
    PATH=$PATH:$JAVA_HOME/bin
    CLASSPATH=.:$JAVA_HOME/lib
    JRE_HOME=$JAVA_HOME/jre
    export JAVA_HOME PATH CLASSPATH JRE_HOME 
    >>
    source ~/.bashrc

2. Install Kafka (192.168.10.144)

    wget http://mirrors.hust.edu.cn/apache/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz
    tar zxvf kafka_2.9.2-0.8.1.1.tgz
    ln -s kafka_2.9.2-0.8.1.1 /usr/local/kafka
    vim /usr/local/kafka/config/server.properties
        broker.id=10144        # broker.id must be a unique integer per broker
        host.name=kafka-10-144
    echo "192.168.10.144 kafka-10-144" >> /etc/hosts

    Note that Kafka depends on zookeeper at startup, so zookeeper-server must be installed; it is available from the CDH 5.2 repository once that yum repo is configured.

    yum install zookeeper-server -y
    vim /etc/zookeeper/conf/zoo.cfg
        dataDir=/data/zookeeper    # configure the zookeeper data directory

    Start zookeeper and kafka:

    /etc/init.d/zookeeper-server start
    nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
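    Before wiring logstash to Kafka, it can help to verify the broker with a throwaway topic. This is a hedged sketch using the standard Kafka 0.8.1 tools; the topic name logstash-test is only an example:

    /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic logstash-test
    /usr/local/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181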

3.安裝Elasticsearch(192.168.10.145)

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.1.tar.gz 
    elasticsearch only needs to be unpacked to be usable, which is very convenient. Next, let's see it in action: first start the ES service by switching into the elasticsearch directory and running bin/elasticsearch
    tar zxvf elasticsearch-1.4.1.tar.gz
    ln -s elasticsearch-1.4.1 /usr/local/es
    cd  /usr/local/es/
    vim config/elasticsearch.yml  # add the following settings, otherwise kibana will report errors when calling ES (a known issue between ES 1.4 and kibana 3.1.2)
    cluster.name: elasticsearch   # uncomment this line
    node.name: "G1-logs-es02"     # uncomment and set according to the host name, for clustering
    http.cors.enabled: true
    http.cors.allow-origin: "*"

    nohup bin/elasticsearch &
    Access the default port 9200:

    curl -X GET http://localhost:9200

        The installation of Elasticsearch on 192.168.10.146 is the same as above.
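
    Once both nodes are up, a quick check of the cluster with the standard ES health API (number_of_nodes should be 2 if both nodes joined the same cluster.name) is:

    curl -XGET 'http://192.168.10.145:9200/_cluster/health?pretty'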

4. Install logstash (192.168.10.143, 192.168.10.144); it must be installed on both the producer and the consumer side. This follows http://blog.csdn.net/xzwdev/article/details/41278033 and https://github.com/joekiller/logstash-kafka

    git clone https://github.com/joekiller/logstash-kafka

cd /usr/local/src/logstash-kafka
                    make tarball    # builds logstash with Kafka support; this takes quite a while, roughly two hours
                    This generates the file logstash-1.4.2.tar.gz under /usr/local/src/logstash-kafka/build/.
                logstash-1.4.2.tar.gz is used later for log transport on both the log producer side and the log consumer side.

Configure and start the log producer side (192.168.10.143)

Example configuration for collecting haproxy logs:
                tar zxvf logstash-1.4.2.tar.gz
                ln -s logstash-1.4.2 /usr/local/logstash

                vim /usr/local/logstash/conf/logstash_shipper_haproxy.conf
                input{

                    file{
            path => "/data/application/haproxy-1.4.18/logs/haproxy.log"   指定所收集的日誌文件路徑
            type => "haproxylog"                                      日誌所屬業務服務名稱
        }
    }

    output{
        kafka{
        broker_list => "192.168.10.144:9092"    # kafka broker address
        topic_id => "logstash-haproxylog"       # topic id from which the consumer side will read the logs
        }

    }
                    Start the log collection service on the producer side:
                nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/logstash_shipper_haproxy.conf &
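
                Before leaving the shipper running in the background, the config file can be validated first. A minimal sketch, assuming the paths above (logstash 1.4 supports the --configtest flag):

                /usr/local/logstash/bin/logstash -f /usr/local/logstash/conf/logstash_shipper_haproxy.conf --configtest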

Consumer-side server configuration:

tar zxvf logstash-1.4.2.tar.gz
    ln -s logstash-1.4.2 /usr/local/logstash
    vim /usr/local/logstash/consumer_conf/logstash_haproxylog_es.conf
    input{
        kafka{
            zk_connect => "192.168.10.144:2181"
            group_id  => 'logstash-haproxylog'
            topic_id  => 'logstash-haproxylog'
        }
    }

    output{
        elasticsearch{
        host => "192.168.10.145"
        port => "9300"
        index => "haproxy-5-13-%{+YYYY.MM.dd}"
        }
    }

Start the consumer-side service

nohup /usr/local/logstash/bin/logstash -f /usr/local/logstash/consumer_conf/logstash_haproxylog_es.conf &
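
After the consumer has run for a moment, the daily index should appear in elasticsearch. A quick verification with the standard _cat API (index names follow the haproxy-5-13-YYYY.MM.dd pattern configured above):

    curl 'http://192.168.10.145:9200/_cat/indices?v'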

Additional optimizations:

1. To build separate ES indices for different businesses, conditional (if) logic is needed; this is configured on the logstash consumer side. Note that for the [type] conditionals in the output section to match, each kafka input must also tag its events with a type (see the sketch after the example).
    For example:
    input{
        kafka{
            zk_connect => "192.168.35.130:2181"
            group_id => "g1.api.test.com"
            topic_id => 'g1.api.test.com'

        }
        kafka{
            zk_connect => "192.168.35.130:2181"
            group_id => "go.clientinner.test.com"
            topic_id => "go.clientinner.test.com"
        }
        kafka{
            zk_connect => "192.168.35.130:2181"
            group_id => "api7.mobile.test.com_app"
            topic_id => "api7.mobile.test.com_app"
        }

    }

    filter {

            ruby {
            init => "@kname = ['time','uid','ip','uname','stime','etime','exec_time','url','ua','module','response_status','http_status','query_string']"
            code => "event.append(Hash[@kname.zip(event['message'].split('|'))])"
            }

            mutate {
            convert => ["exec_time", "float"]
            }
            geoip {
            database => "/data/application/logstash/patterns/GeoLiteCity.dat"
            source => "ip"
            fields => ["country_name","city_name"]
            }
            useragent {
            source => "ua"
            target => "useragent"
            }


        }

    output{

        if [type] == "go.clientinner.test.com"{
            elasticsearch{
            template => "/usr/local/logstash/conf/logstash.json"
            template_overwrite => true       # overwrite the template so that the url field is not analyzed (not tokenized)
            host => "192.168.35.131"
            port => "9300"
            index => "go.clientinner.test.com-%{+YYYY.MM.dd}"
            }

        } else if [type] == "g1.api.test.com"{
            elasticsearch{
            template => "/usr/local/logstash/conf/logstash.json"
            template_overwrite => true
            host => "192.168.35.131"
            port => "9300"
            index => "g1.api.test.com-%{+YYYY.MM.dd}"
            }


        }else if [type] == "api7.mobile.test.com_app"{
            elasticsearch{
            template => "/usr/local/logstash/conf/logstash.json"
            template_overwrite => true
            host => "192.168.35.131"
            port => "9300"
            index => "api7.mobile.test.com_app-%{+YYYY.MM.dd}"
            }

        }

    }
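
    As noted above, the [type] conditionals in the output only match if the events carry a type. A minimal sketch of one kafka input with the standard type option added (the value is chosen to match the conditional; adjust to your topics):

    input{
        kafka{
            zk_connect => "192.168.35.130:2181"
            group_id   => "go.clientinner.test.com"
            topic_id   => "go.clientinner.test.com"
            type       => "go.clientinner.test.com"
        }
    }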

2. When logstash writes data to ES and creates a daily time-based index, it uses UTC by default, so the new day's index is only created at 08:00 local time; current log data therefore ends up in the previous day's index. The following change fixes this:

Modifying logstash/lib/logstash/event.rb solves the problem

    Line 226:
    .withZone(org.joda.time.DateTimeZone::UTC)
    change to

    .withZone(org.joda.time.DateTimeZone.getDefault())

5. Install Kibana (192.168.10.145)

The latest version of logstash already bundles kibana, but you can also deploy kibana separately. kibana3 is a pure JavaScript + HTML client, so it can be served from any HTTP server.
    wget http://download.elasticsearch.org/kibana/kibana/kibana-latest.zip
    unzip kibana-latest.zip
    cp -r  kibana-latest /var/www/html
    Edit config.js to configure the elasticsearch address and index.
    Modify the following line:
    elasticsearch: "http://192.168.10.145:9200",

6. The end result looks like the screenshot below (screenshot omitted):

7. Log system maintenance:

1. Elasticsearch cluster expansion
    This section covers adding a new ES node.
    Install elasticsearch on the new node as described above.
    Before adding the new node, run the following:
    1) First pause the cluster's automatic shard rebalancing. On the master node:
    curl -XPUT http://192.168.35.131:9200/_cluster/settings -d '{"transient" : {"cluster.routing.allocation.enable" : "none"}}'
    Shut down the other nodes and the master node, as follows:
    curl -XPOST http://192.168.35.132:9200/_cluster/nodes/_local/_shutdown
    curl -XPOST http://192.168.35.131:9200/_cluster/nodes/_local/_shutdown
    2) Start the master node and the other slave nodes.
    3) Add the new node; its startup and configuration follow the other slave nodes. A sketch of re-enabling shard allocation afterwards is shown below.
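    Once the new node has joined, shard allocation has to be switched back on, otherwise the cluster stays in the paused state set in step 1. A sketch using the same cluster settings API:
    curl -XPUT http://192.168.35.131:9200/_cluster/settings -d '{"transient" : {"cluster.routing.allocation.enable" : "all"}}'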
    2. Kafka + zookeeper cluster expansion:
    Packages: kafka_2.9.2-0.8.1.1.tgz, elasticsearch-1.4.1.tar.gz
    The Kafka + zookeeper configuration is as follows:

    cat kafka/config/server.properties    # main configuration
    broker.id=35125
    host.name=192.168.35.125
    advertised.host.name=192.168.35.125
    log.dirs=/data/kafka-logs
    zookeeper.connect=192.168.35.130:2181,192.168.35.124:2181,192.168.35.125:2181

    cat /etc/zookeeper/conf/zoo.cfg

    dataDir=/data/zookeeper    # directory where zookeeper stores its node data
    clientPort=2181
    # zookeeper cluster
    server.35130=G1-logs-kafka:2888:3888
    server.35124=bj03-bi-pro-tom01:2888:3888
    server.35125=bj03-bi-pro-tom02:2888:3888
        Start the services
    chown zookeeper.zookeeper /data/zookeeper -R
    /etc/init.d/zookeeper-server init   # the /data/zookeeper directory must be initialized before the first start
    echo "35130" > /data/zookeeper/myid
    chown zookeeper.zookeeper /data/zookeeper -R
    /etc/init.d/zookeeper-server start  # start zookeeper first
    nohup ./bin/kafka-server-start.sh config/server.properties > /data/kafka-logs/kafka.log &   # then start kafka
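
    To confirm that every broker has registered with the zookeeper ensemble after the restart, one possible check with the shell shipped in the Kafka distribution is:

    /usr/local/kafka/bin/zookeeper-shell.sh 192.168.35.130:2181 ls /brokers/ids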

8. Kibana login authentication installation and configuration:

Component overview:
        Nginx:  logs requests and acts as a reverse proxy in front of ES
    Node.js: runs kibana-authentication-proxy
    Kibana: link the original kibana directory under kibana-authentication-proxy
    kibana-authentication-proxy: handles user authentication and proxies requests to ES

8.1 Nginx installation and configuration

    # wget http://nginx.org/download/nginx-1.2.9.tar.gz
    # yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel
    # tar zxvf nginx-1.2.9.tar.gz
    # cd nginx-1.2.9
    # ./configure --prefix=/usr/local/nginx
    # make && make install

Configuration is as follows:
    #cat /usr/local/nginx/conf/nginx.conf

            user  web;
    worker_processes  4;

    error_log  logs/error.log  info;

    pid        logs/nginx.pid;

    events {
    worker_connections  1024;
    use epoll;
    }



    http {


    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
            '$status $body_bytes_sent "$http_referer" '
            '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;
    upstream kianaca {
        server 192.168.35.131:9200 fail_timeout=30s;
        server 192.168.35.132:9200 fail_timeout=30s;
        server 192.168.35.125:9200 fail_timeout=30s;
        }

    server {
        listen       8080;
        server_name  192.168.35.131;


        location / {
        root   /var/www/html/kibana-proxy;
        index  index.html index.htm;
        proxy_next_upstream error timeout invalid_header http_500 http_503 http_404;  
        proxy_pass http://kianaca;  
        proxy_set_header Host lashou.log.com;  
        proxy_set_header X-Real-IP $remote_addr;  
        proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
        root   /var/www/html/kibana-proxy;
        }

    }

    }

        # /usr/local/nginx/sbin/nginx -t
        # /usr/local/nginx/sbin/nginx
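
        A quick smoke test of the proxy (a hedged check; the request should be answered through the kianaca upstream defined above):
        # curl -I http://192.168.35.131:8080/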

8.2 Install kibana-authentication-proxy

#cd /var/www/html/
    #git clone https://github.com/wangganyu188/kibana-authentication-proxy.git
    #mv kibana-authentication-proxy kibana-proxy
    # cd kibana-proxy
    #yum install npm
    #npm install express
    #git submodule init
    #npm install
    #node app.js
    Configure kibana-proxy/config.js

    The following parameters may need adjusting:

    es_host        # the nginx address here
    es_port        # nginx's port 8080
    listen_port    # the node.js listening port, 9201
    listen_host    # the IP node.js binds to, can be 0.0.0.0
    cas_server_url

8.3 Request path

node(9201) <=> nginx(8080) <=> es(9200)