OpenShift Container Cloud, from Getting Started to Breakdown, Part 8: Log Aggregation

Logs fall into two categories:

Business logs

Business logs generally need long-term retention so they can be queried whenever a problem comes up later. ELK is the most popular logging stack right now, but container logs are best kept off the local disk, so you cannot simply package a Logstash client inside the container.

Logstash's UDP mode lets logs stay off disk, but the application itself has to push its log lines to Logstash's UDP port. The configuration is as follows:

Architecture overview:

Container --> Logstash client --> Redis --> Logstash server --> Elasticsearch storage --> Kibana display

Client configuration:
input {

    # Tail the local nginx log files
    file {
        type => "nginx_access_logs"
        path => "/data/logs/nginx/*.log"
        codec => json {
            charset => "UTF-8"
        }
    }

    # Receive logs from the network over UDP
    udp {
        host => "127.0.0.1"
        port => "2900"
        type => "logstash_udp_ingress_logs"
        codec => json {
            charset => "UTF-8"
        }
    }
}



output {
    if [type] == "nginx_access_logs" {
        redis {
            host => "elk.xxx.cn"
            port => "6379"
            data_type => "list"                # how the logstash redis plugin stores events
            key => "nginx_access_logs:redis"   # Redis key the events are pushed to
            password => "xxx123"               # password, if Redis has auth enabled
            codec => json {
                charset => "UTF-8"
            }
        }
    }
    else if [type] == "logstash_udp_ingress_logs" {
        redis {
            host => "elk-public.xxx.cn"
            port => "6379"
            data_type => "list"                # how the logstash redis plugin stores events
            key => "logstash_udp_ingress_logs" # Redis key the events are pushed to
            password => "xxx123"               # password, if Redis has auth enabled
            codec => json {
                charset => "UTF-8"
            }
        }
    }
}
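Before pointing real traffic at this, it is worth checking that the config parses and that the UDP input actually accepts events. A minimal smoke test, assuming the config above is saved as client.conf (the file name is arbitrary) and netcat is available:

# logstash -f client.conf --config.test_and_exit
# echo '{"message": "udp smoke test", "project": "Order"}' | nc -u -w1 127.0.0.1 2900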

Take a Node.js program as an example of how to ship logs.

log4js configuration file:

{
    "appenders": [{
            "host": "192.168.1.1",
            "port": 2900,
            "type": "logstashUDP",
            "logType": "logstash_udp_ingress_logs",
            "category": "DEBUG::",
            "fields": {
                "project": "Order",
                "field2": "Message"
            }
        },
        {
            "type": "console"
        }
    ]
}
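Note that logType should end up as the event's type field on the Logstash side, so it has to match the string the conditionals in the configs above and below test against (logstash_udp_ingress_logs here); if the two drift apart, events arrive but fall through every branch and never reach Redis.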

Node.js code:

var log4js = require('log4js');

log4js.configure('./logs/log4js.json');
var logger = log4js.getLogger('DEBUG::');
logger.setLevel('INFO');

// Plain informational trace
EWTRACE = function (Message) {
    logger.info(Message + "\r");
};

// Accepts an arbitrary object or string
EWTRACEIFY = function (Message) {
    logger.info(Message);
};

// Tip / warning
EWTRACETIP = function (Message) {
    logger.warn("Tip:" + JSON.stringify(Message) + "\r");
};

// Error
EWTRACEERROR = function (Message) {
    logger.error(Message);
};
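With these helpers in place, every call goes to both the console appender and the Logstash UDP port; a quick illustration (values are made up):

// Each of these ends up in Elasticsearch via UDP -> Redis -> Logstash
EWTRACE("order service started");
EWTRACEIFY({ orderId: 42, status: "created" });
EWTRACETIP({ queueDepth: 120 });
EWTRACEERROR("payment gateway timeout");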

Server-side configuration:

Standing up the Redis instance itself is skipped here.
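If you just need something to test against, a throwaway Redis with the password used above can be started with Docker (a quick sketch, not a production setup):

# docker run -d --name elk-redis -p 6379:6379 redis redis-server --requirepass xxx123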

Logstash server-side configuration:

input {

    # Pull the local nginx logs back out of Redis
    redis {
        host => "192.168.1.1"
        port => 6379
        type => "redis-input"
        data_type => "list"
        key => "nginx_access_logs:redis"
        # key => "nginx_access_*"
        password => "xxx123"
        codec => json {
            charset => "UTF-8"
        }
    }

    # Pull the UDP-ingested logs out of Redis
    redis {
        host => "192.168.1.1"
        port => 6379
        type => "redis-input"
        data_type => "list"
        key => "logstash_udp_ingress_logs"
        # key => "logstash_udp_ingres*"
        password => "xxx123"
        codec => json {
            charset => "UTF-8"
        }
    }
}
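A handy health check at this stage is the backlog in Redis: if the list lengths keep growing, the server side is not draining events as fast as the clients push them. Assuming redis-cli is available:

# redis-cli -h 192.168.1.1 -a xxx123 llen nginx_access_logs:redis
# redis-cli -h 192.168.1.1 -a xxx123 llen logstash_udp_ingress_logs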




output {
    if [type] == "nginx_access_logs" {
        elasticsearch {
            hosts => ["http://192.168.1.1:9200"]
            flush_size => 2000
            idle_flush_time => 15
            codec => json {
                charset => "UTF-8"
            }
            index => "logstash-nginx_access_logs-%{+YYYY.MM.dd}"
        }
        stdout {
            codec => rubydebug
        }
    }
    else if [type] == "logstash_udp_ingress_logs" {
        elasticsearch {
            hosts => ["http://192.168.1.1:9200"]
            flush_size => 2000
            idle_flush_time => 15
            codec => json {
                charset => "UTF-8"
            }
            index => "logstash-udp_ingress_logs-%{+YYYY.MM.dd}"
        }
        stdout {
            codec => plain
        }
    }
}
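Once the server side is running, daily indices should start showing up in Elasticsearch; a quick way to confirm:

# curl 'http://192.168.1.1:9200/_cat/indices/logstash-*?v'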

Elasticsearch and Kibana should be configured and tuned to your actual situation.

A commonly used configuration for logging-dedicated Elasticsearch machines:

cluster.name: logging-es
node.name: logging-es01
path.data: /es-data01/data,/es-data02/data
path.logs: /data/logs/elasticsearch
network.host: 192.168.1.1


node.master: true
node.data: true
node.attr.tag: hot

discovery.zen.fd.ping_timeout: 100s
discovery.zen.ping_timeout: 100s
discovery.zen.fd.ping_interval: 30s
discovery.zen.fd.ping_retries: 10
discovery.zen.ping.unicast.hosts: ["logging-es01", "logging-es02","logging-es03","logging-es04"]
discovery.zen.minimum_master_nodes: 2


action.destructive_requires_name: false
bootstrap.memory_lock: false
cluster.routing.allocation.enable: all
bootstrap.system_call_filter: false
thread_pool.bulk.queue_size: 6000
cluster.routing.allocation.node_concurrent_recoveries: 128
indices.recovery.max_bytes_per_sec: 200mb
indices.memory.index_buffer_size: 20%
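After the nodes are up, confirm that the cluster actually formed; number_of_nodes should match the unicast host list above (four here) and status should be green:

# curl 'http://192.168.1.1:9200/_cluster/health?pretty'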

 

Application logs

Application logs here mean debug logs and crash logs; they do not need long-term retention, but the volume can be very large.

OpenShift provides a log aggregation stack out of the box; the architecture is as follows:

Container console --> Fluentd collector (one per node) --> Elasticsearch storage --> Kibana display

Add the following to /etc/ansible/hosts:

[OSEv3:vars]
openshift_logging_install_logging=true
openshift_logging_master_url=https://www.oc.example.cn:8443
openshift_logging_public_master_url=https://kubernetes.default.svc.cluster.local
openshift_logging_kibana_hostname=kibana.oc.example.cn
openshift_logging_kibana_ops_hostname=kibana.oc.example.cn
openshift_logging_es_cluster_size=2
openshift_logging_master_public_url=https://www.oc.example.cn:8443

Run the install:

# ansible-playbook openshift-ansible/playbooks/openshift-logging/config.yml

Check the installation:

# oc project logging
# oc get pod
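If the playbook succeeded, the pod list should show the usual aggregated-logging components, roughly like this (names and suffixes will differ per cluster, and there is one logging-fluentd pod per node):

NAME                                          READY     STATUS    RESTARTS   AGE
logging-curator-1-xxxxx                       1/1       Running   0          5m
logging-es-data-master-xxxxxxxx-1-xxxxx       2/2       Running   0          8m
logging-es-data-master-yyyyyyyy-1-xxxxx       2/2       Running   0          8m
logging-fluentd-xxxxx                         1/1       Running   0          6m
logging-kibana-1-xxxxx                        2/2       Running   0          6m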

After installation you can edit the ConfigMaps to tune ES and set the log expiration time:

# oc get configmap

To set the log expiration time, edit logging-curator:

.operations:
  delete:
    days: 2
  
.defaults:
  delete:
    days: 7
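To apply a retention change, edit the ConfigMap in place and then recycle the curator pod so it picks up the new schedule (the component=curator label is assumed from the standard logging install):

# oc edit configmap/logging-curator -n logging
# oc delete pod -l component=curator -n logging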