Business applications in a K8s cluster are highly "dynamic": as container orchestration goes on, application containers are constantly being created, destroyed, rescheduled to other nodes, and scaled in and out…

We therefore need a log collection and analysis system that fits this environment.
Download Tomcat and prepare the directory for building the base image:

    cd /opt/src/
    wget http://mirror.bit.edu.cn/apache/tomcat/tomcat-8/v8.5.50/bin/apache-tomcat-8.5.50.tar.gz
    mkdir -p /data/dockerfile/tomcat
    tar xf apache-tomcat-8.5.50.tar.gz -C /data/dockerfile/tomcat
    cd /data/dockerfile/tomcat
Remove the bundled web applications:

    rm -rf apache-tomcat-8.5.50/webapps/*
Disable the AJP connector:

    tomcat]# vim apache-tomcat-8.5.50/conf/server.xml
    <!-- <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> -->
Adjust the logging configuration.

Remove the 3manager and 4host-manager entries from handlers:

    tomcat]# vim apache-tomcat-8.5.50/conf/logging.properties
    handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler

Change the log level to INFO:

    1catalina.org.apache.juli.AsyncFileHandler.level = INFO
    2localhost.org.apache.juli.AsyncFileHandler.level = INFO
    java.util.logging.ConsoleHandler.level = INFO

Comment out all configuration related to the 3manager and 4host-manager logs:

    #3manager.org.apache.juli.AsyncFileHandler.level = FINE
    #3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
    #3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
    #3manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
    #4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
    #4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
    #4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
    #4host-manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8
Prepare the Dockerfile for the Tomcat base image:

    cat >Dockerfile <<'EOF'
    FROM harbor.od.com/public/jre:8u112
    RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
        echo 'Asia/Shanghai' >/etc/timezone
    ENV CATALINA_HOME /opt/tomcat
    ENV LANG zh_CN.UTF-8
    ADD apache-tomcat-8.5.50/ /opt/tomcat
    ADD config.yml /opt/prom/config.yml
    ADD jmx_javaagent-0.3.1.jar /opt/prom/jmx_javaagent-0.3.1.jar
    WORKDIR /opt/tomcat
    ADD entrypoint.sh /entrypoint.sh
    CMD ["/bin/bash","/entrypoint.sh"]
    EOF
Download the jar needed for JVM monitoring (the Prometheus JMX exporter java agent):

    wget -O jmx_javaagent-0.3.1.jar https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar
Configuration file read by the jmx_agent:

    cat >config.yml <<'EOF'
    ---
    rules:
      - pattern: '.*'
    EOF
Container startup script:

    cat >entrypoint.sh <<'EOF'
    #!/bin/bash
    # Pod ip:port and scrape rules passed to the JVM monitoring agent
    M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
    C_OPTS=${C_OPTS}                 # extra options appended at startup
    MIN_HEAP=${MIN_HEAP:-"128m"}     # initial JVM heap size
    MAX_HEAP=${MAX_HEAP:-"128m"}     # maximum JVM heap size
    # young-generation sizing and GC tuning
    JAVA_OPTS=${JAVA_OPTS:-"-Xmn384m -Xss256k -Duser.timezone=GMT+08 -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram -Dfile.encoding=UTF8 -Dsun.jnu.encoding=UTF8"}
    CATALINA_OPTS="${CATALINA_OPTS}"
    JAVA_OPTS="${M_OPTS} ${C_OPTS} -Xms${MIN_HEAP} -Xmx${MAX_HEAP} ${JAVA_OPTS}"
    sed -i -e "1a\JAVA_OPTS=\"$JAVA_OPTS\"" -e "1a\CATALINA_OPTS=\"$CATALINA_OPTS\"" /opt/tomcat/bin/catalina.sh
    # send stdout and stderr to the log file
    cd /opt/tomcat && /opt/tomcat/bin/catalina.sh run >> /opt/tomcat/logs/stdout.log 2>&1
    EOF
Build and push the base image:

    docker build . -t harbor.zq.com/base/tomcat:v8.5.50
    docker push harbor.zq.com/base/tomcat:v8.5.50
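Optionally, smoke-test the freshly built image locally. The sketch below is illustrative only (the container name `tomcat-smoke` and the `POD_IP` helper variable are not part of the original steps); it starts a throwaway container and checks that the JMX exporter answers on port 12346 and that Tomcat listens on 8080:

```bash
# Start a throwaway container from the new base image.
docker run -d --name tomcat-smoke -e M_PORT=12346 harbor.zq.com/base/tomcat:v8.5.50

# Grab the container IP and probe the JMX exporter port.
POD_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' tomcat-smoke)
curl -s "http://${POD_IP}:12346/" | head

# Tomcat itself answers on 8080 (404 is expected until an app is deployed).
curl -sI "http://${POD_IP}:8080/"

# Clean up.
docker rm -f tomcat-smoke
```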
- Official site
- Official GitHub repository
- Download page
Deploy Elasticsearch on HDSS7-12.host.com:
    cd /opt/src
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.6.tar.gz
    tar xf elasticsearch-6.8.6.tar.gz -C /opt/
    ln -s /opt/elasticsearch-6.8.6/ /opt/elasticsearch
    cd /opt/elasticsearch
Create the data directories and configure Elasticsearch:

    mkdir -p /data/elasticsearch/{data,logs}
    cat >config/elasticsearch.yml <<'EOF'
    cluster.name: es.zq.com
    node.name: hdss7-12.host.com
    path.data: /data/elasticsearch/data
    path.logs: /data/elasticsearch/logs
    bootstrap.memory_lock: true
    network.host: 10.4.7.12
    http.port: 9200
    EOF
    elasticsearch]# vi config/jvm.options
    # set according to your environment; -Xms and -Xmx should be equal, roughly half of the machine's memory
    -Xms512m
    -Xmx512m
    useradd -s /bin/bash -M es
    chown -R es.es /opt/elasticsearch-6.8.6
    chown -R es.es /data/elasticsearch/
    vim /etc/security/limits.d/es.conf
    es hard nofile 65536
    es soft fsize unlimited
    es hard memlock unlimited
    es soft memlock unlimited
    sysctl -w vm.max_map_count=262144
    echo "vm.max_map_count=262144" >> /etc/sysctl.conf
    sysctl -p
Start Elasticsearch as the es user and confirm it is listening:

    ]# su -c "/opt/elasticsearch/bin/elasticsearch -d" es
    ]# netstat -luntp|grep 9200
    tcp6    0    0 10.4.7.12:9200    :::*    LISTEN    16784/java
Create the index template for the log indices (production would use 3 replicas; this ES is a single node, so replicas must be set to 0):

    curl -H 'content-type: application/json' -XPUT http://10.4.7.12:9200/_template/k8s -d '{
      "index_patterns": ["k8s*"],
      "settings": {
        "number_of_shards": 5,
        "number_of_replicas": 0
      }
    }'
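As a quick check (not part of the original steps), the cluster health and the stored template can be read back through the same HTTP API:

```bash
# Confirm the single-node cluster is up (green or yellow is expected here).
curl -s http://10.4.7.12:9200/_cluster/health?pretty

# Read back the template that was just created.
curl -s http://10.4.7.12:9200/_template/k8s?pretty
```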
- Official site
- Official GitHub repository
- Download page
Deploy Kafka on HDSS7-11.host.com:
    cd /opt/src
    wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz
    tar xf kafka_2.12-2.2.0.tgz -C /opt/
    ln -s /opt/kafka_2.12-2.2.0/ /opt/kafka
    cd /opt/kafka
    mkdir -p /data/kafka/logs
    cat >config/server.properties <<'EOF'
    log.dirs=/data/kafka/logs
    # ZooKeeper connection address
    zookeeper.connect=localhost:2181
    log.flush.interval.messages=10000
    log.flush.interval.ms=1000
    delete.topic.enable=true
    host.name=hdss7-11.host.com
    EOF
Start Kafka and confirm it is listening:

    bin/kafka-server-start.sh -daemon config/server.properties
    ]# netstat -luntp|grep 9092
    tcp6    0    0 10.4.7.11:9092    :::*    LISTEN    34240/java
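As an optional sanity check (not in the original steps; the topic name `test-topic` is made up), the bundled console scripts can confirm that topics can be created and messages round-tripped:

```bash
cd /opt/kafka

# Create a throwaway topic.
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic test-topic --partitions 1 --replication-factor 1

# List topics to confirm it exists.
bin/kafka-topics.sh --zookeeper localhost:2181 --list

# Produce one message and read it back.
echo "hello" | bin/kafka-console-producer.sh --broker-list 10.4.7.11:9092 --topic test-topic
bin/kafka-console-consumer.sh --bootstrap-server 10.4.7.11:9092 --topic test-topic \
  --from-beginning --max-messages 1
```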
- Official GitHub repository
- Source download page

On the ops host HDSS7-200.host.com:

kafka-manager is a web management UI for Kafka; it is optional.
1 Prepare the Dockerfile
    mkdir -p /data/dockerfile/kafka-manager
    cat >/data/dockerfile/kafka-manager/Dockerfile <<'EOF'
    FROM hseeberger/scala-sbt
    ENV ZK_HOSTS=10.4.7.11:2181 \
        KM_VERSION=2.0.0.2
    RUN mkdir -p /tmp && \
        cd /tmp && \
        wget https://github.com/yahoo/kafka-manager/archive/${KM_VERSION}.tar.gz && \
        tar xf ${KM_VERSION}.tar.gz && \
        cd /tmp/kafka-manager-${KM_VERSION} && \
        sbt clean dist && \
        unzip -d / ./target/universal/kafka-manager-${KM_VERSION}.zip && \
        rm -fr /tmp/${KM_VERSION}.tar.gz /tmp/kafka-manager-${KM_VERSION}
    WORKDIR /kafka-manager-${KM_VERSION}
    EXPOSE 9000
    ENTRYPOINT ["./bin/kafka-manager","-Dconfig.file=conf/application.conf"]
    EOF
2 Build the docker image
    cd /data/dockerfile/kafka-manager
    docker build . -t harbor.zq.com/infra/kafka-manager:v2.0.0.2    # very slow build
    docker push harbor.zq.com/infra/kafka-manager:v2.0.0.2
The build is extremely slow and very likely to fail, so as a second option you can pull a pre-built image instead.

Note that the pre-built image hard-codes the ZK address, so be sure to override it by passing in the environment variable.
    docker pull sheepkiller/kafka-manager:latest
    docker images|grep kafka-manager
    docker tag 4e4a8c5dabab harbor.zq.com/infra/kafka-manager:latest
    docker push harbor.zq.com/infra/kafka-manager:latest
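Before wiring it into the cluster, the image can be tried locally. This is an optional sketch (container name and published port are illustrative); it overrides the hard-coded ZK address through the `ZK_HOSTS` variable:

```bash
# Run kafka-manager locally, pointing it at the Kafka node's ZooKeeper.
docker run -d --name km-test -p 9000:9000 \
  -e ZK_HOSTS=10.4.7.11:2181 \
  harbor.zq.com/infra/kafka-manager:latest

# The UI should answer on port 9000 once it has started.
curl -sI http://127.0.0.1:9000/ | head -n 1

# Clean up.
docker rm -f km-test
```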
Prepare the Kubernetes resource manifests:

    mkdir /data/k8s-yaml/kafka-manager
    cd /data/k8s-yaml/kafka-manager
    cat >deployment.yaml <<'EOF'
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: kafka-manager
      namespace: infra
      labels:
        name: kafka-manager
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: kafka-manager
      template:
        metadata:
          labels:
            app: kafka-manager
            name: kafka-manager
        spec:
          containers:
          - name: kafka-manager
            image: harbor.zq.com/infra/kafka-manager:latest
            ports:
            - containerPort: 9000
              protocol: TCP
            env:
            - name: ZK_HOSTS
              value: zk1.od.com:2181
            - name: APPLICATION_SECRET
              value: letmein
            imagePullPolicy: IfNotPresent
          imagePullSecrets:
          - name: harbor
          restartPolicy: Always
          terminationGracePeriodSeconds: 30
          securityContext:
            runAsUser: 0
          schedulerName: default-scheduler
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
    EOF
    cat >service.yaml <<'EOF'
    kind: Service
    apiVersion: v1
    metadata:
      name: kafka-manager
      namespace: infra
    spec:
      ports:
      - protocol: TCP
        port: 9000
        targetPort: 9000
      selector:
        app: kafka-manager
    EOF
    cat >ingress.yaml <<'EOF'
    kind: Ingress
    apiVersion: extensions/v1beta1
    metadata:
      name: kafka-manager
      namespace: infra
    spec:
      rules:
      - host: km.zq.com
        http:
          paths:
          - path: /
            backend:
              serviceName: kafka-manager
              servicePort: 9000
    EOF
On any compute node:
    kubectl apply -f http://k8s-yaml.od.com/kafka-manager/deployment.yaml
    kubectl apply -f http://k8s-yaml.od.com/kafka-manager/service.yaml
    kubectl apply -f http://k8s-yaml.od.com/kafka-manager/ingress.yaml
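A quick way to confirm the rollout (a generic check, not from the original text):

```bash
# Watch the kafka-manager pod come up in the infra namespace.
kubectl -n infra get pods -o wide | grep kafka-manager

# Confirm the Deployment reports its replica as available.
kubectl -n infra get deployment kafka-manager
```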
On HDSS7-11.host.com:
    ~]# vim /var/named/zq.com.zone
    km    A    10.4.7.10
    ~]# systemctl restart named
    ~]# dig -t A km.zq.com @10.4.7.11 +short
    10.4.7.10
In the kafka-manager web UI, add the cluster, then view the cluster information.
- Official download page

On the ops host HDSS7-200.host.com:
    mkdir /data/dockerfile/filebeat
    cd /data/dockerfile/filebeat
    cat >Dockerfile <<'EOF'
    FROM debian:jessie
    # If you change the filebeat version, download the matching LINUX 64-BIT sha from the official site and replace FILEBEAT_SHA1
    ENV FILEBEAT_VERSION=7.5.1 \
        FILEBEAT_SHA1=daf1a5e905c415daf68a8192a069f913a1d48e2c79e270da118385ba12a93aaa91bda4953c3402a6f0abf1c177f7bcc916a70bcac41977f69a6566565a8fae9c
    RUN set -x && \
        apt-get update && \
        apt-get install -y wget && \
        wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz -O /opt/filebeat.tar.gz && \
        cd /opt && \
        echo "${FILEBEAT_SHA1}  filebeat.tar.gz" | sha512sum -c - && \
        tar xzvf filebeat.tar.gz && \
        cd filebeat-* && \
        cp filebeat /bin && \
        cd /opt && \
        rm -rf filebeat* && \
        apt-get purge -y wget && \
        apt-get autoremove -y && \
        apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
    COPY filebeat.yaml /etc/
    COPY docker-entrypoint.sh /
    ENTRYPOINT ["/bin/bash","/docker-entrypoint.sh"]
    EOF
Prepare the filebeat configuration (PROJ_NAME, ENV and MULTILINE are placeholders replaced by the entrypoint script at startup; the file is written into the build context so the Dockerfile's COPY picks it up):

    cat >filebeat.yaml << EOF
    filebeat.inputs:
    - type: log
      fields_under_root: true
      fields:
        topic: logm-PROJ_NAME
      paths:
        - /logm/*.log
        - /logm/*/*.log
        - /logm/*/*/*.log
        - /logm/*/*/*/*.log
        - /logm/*/*/*/*/*.log
      scan_frequency: 120s
      max_bytes: 10485760
      multiline.pattern: 'MULTILINE'
      multiline.negate: true
      multiline.match: after
      multiline.max_lines: 100
    - type: log
      fields_under_root: true
      fields:
        topic: logu-PROJ_NAME
      paths:
        - /logu/*.log
        - /logu/*/*.log
        - /logu/*/*/*.log
        - /logu/*/*/*/*.log
        - /logu/*/*/*/*/*.log
        - /logu/*/*/*/*/*/*.log
    output.kafka:
      hosts: ["10.4.7.11:9092"]
      topic: k8s-fb-ENV-%{[topic]}
      version: 2.0.0            # if the kafka version is above 2.0, just write 2.0.0
      required_acks: 0
      max_message_bytes: 10485760
    EOF
    cat >docker-entrypoint.sh <<'EOF'
    #!/bin/bash
    ENV=${ENV:-"test"}                    # environment whose logs are collected
    PROJ_NAME=${PROJ_NAME:-"no-define"}   # project name
    MULTILINE=${MULTILINE:-"^\d{2}"}      # multiline match: a line starting with two digits begins a new entry; other lines are appended to it

    # Substitute the placeholders in the config file
    sed -i "s#PROJ_NAME#${PROJ_NAME}#g" /etc/filebeat.yaml
    sed -i "s#MULTILINE#${MULTILINE}#g" /etc/filebeat.yaml
    sed -i "s#ENV#${ENV}#g" /etc/filebeat.yaml

    if [[ "$1" == "" ]]; then
        exec filebeat -c /etc/filebeat.yaml
    else
        exec "$@"
    fi
    EOF
    docker build . -t harbor.od.com/infra/filebeat:v7.5.1
    docker push harbor.od.com/infra/filebeat:v7.5.1
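Optionally, the placeholder substitution can be checked in a throwaway container before the image is used as a sidecar (a sketch; the environment values below are illustrative):

```bash
# Run the image with sample values and print the rendered config instead of starting filebeat.
docker run --rm \
  -e ENV=test -e PROJ_NAME=dubbo-demo-web -e MULTILINE='^\d{2}' \
  harbor.od.com/infra/filebeat:v7.5.1 \
  cat /etc/filebeat.yaml | grep -E 'topic|multiline.pattern'
```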
Use the dubbo-demo-consumer image and run filebeat alongside it as a sidecar.
    ]# vim /data/k8s-yaml/test/dubbo-demo-consumer/deployment.yaml
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: dubbo-demo-consumer
      namespace: test
      labels:
        name: dubbo-demo-consumer
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: dubbo-demo-consumer
      template:
        metadata:
          labels:
            app: dubbo-demo-consumer
            name: dubbo-demo-consumer
          annotations:
            blackbox_path: "/hello?name=health"
            blackbox_port: "8080"
            blackbox_scheme: "http"
            prometheus_io_scrape: "true"
            prometheus_io_port: "12346"
            prometheus_io_path: "/"
        spec:
          containers:
          - name: dubbo-demo-consumer
            image: harbor.zq.com/app/dubbo-tomcat-web:apollo_200513_1808
            ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 20880
              protocol: TCP
            env:
            - name: JAR_BALL
              value: dubbo-client.jar
            - name: C_OPTS
              value: -Denv=fat -Dapollo.meta=http://config-test.zq.com
            imagePullPolicy: IfNotPresent
    #-------- added content starts --------
            volumeMounts:
            - mountPath: /opt/tomcat/logs
              name: logm
          - name: filebeat
            image: harbor.zq.com/infra/filebeat:v7.5.1
            imagePullPolicy: IfNotPresent
            env:
            - name: ENV
              value: test              # environment
            - name: PROJ_NAME
              value: dubbo-demo-web    # project name
            volumeMounts:
            - mountPath: /logm
              name: logm
          volumes:
          - emptyDir: {}    # created at a random path on the host; removed together with the pod
            name: logm
    #-------- added content ends --------
          imagePullSecrets:
          - name: harbor
          restartPolicy: Always
          terminationGracePeriodSeconds: 30
          securityContext:
            runAsUser: 0
          schedulerName: default-scheduler
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
On any node:
    kubectl apply -f http://k8s-yaml.od.com/test/dubbo-demo-consumer/deployment.yaml
Open http://km.zq.com in a browser; if the topic shows up in kafka-manager, the pipeline is working.
Exec into the dubbo-demo-consumer container and check whether logs appear under /logm:
    kubectl -n test exec -it dubbo...... /bin/bash
    ls /logm
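To confirm the logs actually reach Kafka, the console consumer on the Kafka host can be pointed at the filebeat topic. This is a sketch; the exact topic name depends on the ENV and PROJ_NAME values and is assumed here to render as `k8s-fb-test-logm-dubbo-demo-web`:

```bash
# On the Kafka host: list the filebeat topics and tail one of them.
cd /opt/kafka
bin/kafka-topics.sh --zookeeper localhost:2181 --list | grep k8s-fb-test

bin/kafka-console-consumer.sh --bootstrap-server 10.4.7.11:9092 \
  --topic k8s-fb-test-logm-dubbo-demo-web --from-beginning --max-messages 5
```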
On the ops host HDSS7-200.host.com:
Prepare the logstash image:

    docker pull logstash:6.8.6
    docker tag d0a2dac51fcb harbor.od.com/infra/logstash:v6.8.6
    docker push harbor.od.com/infra/logstash:v6.8.6
Prepare the directory:

    mkdir /etc/logstash/
Create test.conf:
    cat >/etc/logstash/logstash-test.conf <<'EOF'
    input {
      kafka {
        bootstrap_servers => "10.4.7.11:9092"
        client_id => "10.4.7.200"
        consumer_threads => 4
        group_id => "k8s_test"                # consumer group for the test environment
        topics_pattern => "k8s-fb-test-.*"    # only consume topics whose names start with k8s-fb-test
      }
    }
    filter {
      json {
        source => "message"
      }
    }
    output {
      elasticsearch {
        hosts => ["10.4.7.12:9200"]
        index => "k8s-test-%{+YYYY.MM.dd}"
      }
    }
    EOF
Create prod.conf:
    cat >/etc/logstash/logstash-prod.conf <<'EOF'
    input {
      kafka {
        bootstrap_servers => "10.4.7.11:9092"
        client_id => "10.4.7.200"
        consumer_threads => 4
        group_id => "k8s_prod"
        topics_pattern => "k8s-fb-prod-.*"
      }
    }
    filter {
      json {
        source => "message"
      }
    }
    output {
      elasticsearch {
        hosts => ["10.4.7.12:9200"]
        index => "k8s-prod-%{+YYYY.MM.dd}"
      }
    }
    EOF
Start the logstash container for the test environment:

    docker run -d \
        --restart=always \
        --name logstash-test \
        -v /etc/logstash:/etc/logstash \
        harbor.od.com/infra/logstash:v6.8.6 \
        -f /etc/logstash/logstash-test.conf
    ~]# docker ps -a|grep logstash
After some log traffic has flowed through, a new index should appear in Elasticsearch:

    ~]# curl http://10.4.7.12:9200/_cat/indices?v
    health status index               uuid                   pri rep docs.count docs.deleted store.size pri.store.size
    green  open   k8s-test-2020.01.07 mFEQUyKVTTal8c97VsmZHw   5   0         12            0     78.4kb         78.4kb
Start the logstash container for the prod environment:

    docker run -d \
        --restart=always \
        --name logstash-prod \
        -v /etc/logstash:/etc/logstash \
        harbor.od.com/infra/logstash:v6.8.6 \
        -f /etc/logstash/logstash-prod.conf
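The same check as above applies to the prod pipeline once prod topics carry traffic (a sketch, not from the original text):

```bash
# k8s-prod-* indices should appear alongside the k8s-test-* ones.
curl -s 'http://10.4.7.12:9200/_cat/indices?v' | grep k8s-prod
```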
On the ops host HDSS7-200.host.com:
Prepare the kibana image:

    docker pull kibana:6.8.6
    docker tag adfab5632ef4 harbor.zq.com/infra/kibana:v6.8.6
    docker push harbor.zq.com/infra/kibana:v6.8.6
Prepare the resource manifest directory:

    mkdir /data/k8s-yaml/kibana
    cd /data/k8s-yaml/kibana
    cat >deployment.yaml <<'EOF'
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: kibana
      namespace: infra
      labels:
        name: kibana
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: kibana
      template:
        metadata:
          labels:
            app: kibana
            name: kibana
        spec:
          containers:
          - name: kibana
            image: harbor.zq.com/infra/kibana:v6.8.6
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 5601
              protocol: TCP
            env:
            - name: ELASTICSEARCH_URL
              value: http://10.4.7.12:9200
          imagePullSecrets:
          - name: harbor
          securityContext:
            runAsUser: 0
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      revisionHistoryLimit: 7
      progressDeadlineSeconds: 600
    EOF
    cat >service.yaml <<'EOF'
    kind: Service
    apiVersion: v1
    metadata:
      name: kibana
      namespace: infra
    spec:
      ports:
      - protocol: TCP
        port: 5601
        targetPort: 5601
      selector:
        app: kibana
    EOF
    cat >ingress.yaml <<'EOF'
    kind: Ingress
    apiVersion: extensions/v1beta1
    metadata:
      name: kibana
      namespace: infra
    spec:
      rules:
      - host: kibana.zq.com
        http:
          paths:
          - path: /
            backend:
              serviceName: kibana
              servicePort: 5601
    EOF
On any compute node, apply the manifests:

    kubectl apply -f http://k8s-yaml.zq.com/kibana/deployment.yaml
    kubectl apply -f http://k8s-yaml.zq.com/kibana/service.yaml
    kubectl apply -f http://k8s-yaml.zq.com/kibana/ingress.yaml
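To verify the rollout (a generic check, not from the original text):

```bash
# Wait for the kibana pod to become Ready in the infra namespace.
kubectl -n infra get pods | grep kibana
kubectl -n infra rollout status deployment/kibana
```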
    ~]# vim /var/named/zq.com.zone
    kibana    A    10.4.7.10
    ~]# systemctl restart named
    ~]# dig -t A kibana.zq.com @10.4.7.11 +short
    10.4.7.10
Open http://kibana.zq.com in a browser.

Selection areas:
| Field | Purpose |
| --- | --- |
| @timestamp | timestamp of the log entry |
| log.file.path | log file name |
| message | log content |
- Time picker — select the log time range: quick, absolute, or relative time.
- Environment selector — select the logs of the corresponding environment: k8s-test-* or k8s-prod-*.
- Project selector — filter by project.
- Keyword selector — search by keyword, e.g. exception or error.
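For reference, the same kind of keyword filtering can also be reproduced directly against Elasticsearch; a hedged example (index pattern and keyword are illustrative):

```bash
# Search the test-environment indices for log entries containing "exception".
curl -s -H 'content-type: application/json' \
  'http://10.4.7.12:9200/k8s-test-*/_search?pretty' -d '{
    "query": { "match": { "message": "exception" } },
    "size": 5
  }'
```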