Recap (the new-generation container-cloud monitoring stack: Prometheus + Grafana)

  • Exporters (can be custom-developed)
    • Expose an HTTP interface
    • Define metrics and their labels (dimensions)
    • Organize metric data in a defined structure
    • Collected as time series
  • Prometheus Server
    • Retrieve (the data collector)
    • TSDB (time-series database)
    • Configure (static_config, kubernetes_sd, file_sd)
    • HTTP Server
  • Grafana
    • A wide variety of plugins
    • Data source (Prometheus)
    • Dashboards (PromQL)
  • Alertmanager
    • rules.yml (PromQL)

Chapter 1: ELK Stack Overview

  • Logs are an extremely important part of any system, and of computer systems in particular. Modern computer systems are mostly complex and distributed: many span multiple sites or even national borders, and even within a single site logs come from different sources (the operating system, application services, business logic, and so on), all of them producing log data of every kind, all the time. By one rough estimate, the world generates about 2 EB of data every day.
  • Workloads in a K8S cluster are highly "dynamic": as orchestration proceeds, business containers are constantly being created, destroyed, migrated (drifting across nodes), and scaled in and out….
  • Faced with such a massive volume of data scattered across so many places, would we still hunt for important information the traditional way, logging in to machines one by one? Traditional tools and methods are clearly too clumsy and inefficient for this, so some clever people proposed a centralized approach: consolidate data from different sources into one place.
  • We need a log collection and analysis system that covers:
    • Collection: gather log data from many sources (a streaming log collector)
    • Transport: reliably ship log data to the central system (a message queue)
    • Storage: store logs as structured data (a search engine)
    • Analysis: convenient analysis and search, ideally with a GUI (a frontend)
    • Alerting: error reporting and monitoring (a monitoring tool)
  • An excellent community open-source solution: the ELK Stack
    • E - ElasticSearch
    • L - Logstash
    • K - Kibana
  • The traditional ELK model

  • Drawbacks:
    • Logstash is built on JRuby; it is resource-hungry, and large-scale deployments are extremely expensive
    • Business applications are only loosely coupled to Logstash, which makes workload migration awkward
    • Log collection is in turn too tightly coupled to ES, so it is easy to overwhelm ES and lose data
    • In a container-cloud environment, the traditional ELK model struggles to do the job
  • The containerized ELK model

Chapter 2: Converting the dubbo-demo-consumer project into a Tomcat-based web project

1. Prepare the Tomcat base image

On ops host hdss7-200.host.com:
Tomcat 8 download link

[root@hdss7-200 ~]# 
[root@hdss7-200 ~]# cd /opt/src/
[root@hdss7-200 src]# wget https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.50/bin/apache-tomcat-8.5.50.tar.gz
[root@hdss7-200 src]# mkdir -p /data/dockerfile/tomcat8
[root@hdss7-200 src]# tar xf apache-tomcat-8.5.50.tar.gz  -C /data/dockerfile/tomcat8/
[root@hdss7-200 src]# cd /data/dockerfile/tomcat8/

2. Basic Tomcat configuration

1. Disable the AJP port

[root@hdss7-200 src]# cd /data/dockerfile/tomcat8/
[root@hdss7-200 tomcat8]# vim apache-tomcat-8.5.50/conf/server.xml 

<!-- <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> -->

2. Configure logging

  • Remove the 3manager and 4host-manager handlers
[root@hdss7-200 tomcat8]# vim apache-tomcat-8.5.50/conf/logging.properties 

handlers = 1catalina.org.apache.juli.AsyncFileHandler, 2localhost.org.apache.juli.AsyncFileHandler, java.util.logging.ConsoleHandler
  • Change the log level to INFO
[root@hdss7-200 tomcat8]# vim apache-tomcat-8.5.50/conf/logging.properties

1catalina.org.apache.juli.AsyncFileHandler.level = INFO
2localhost.org.apache.juli.AsyncFileHandler.level = INFO
java.util.logging.ConsoleHandler.level = INFO
  • Comment out all log settings for 3manager and 4host-manager
[root@hdss7-200 tomcat8]# vim apache-tomcat-8.5.50/conf/logging.properties

#3manager.org.apache.juli.AsyncFileHandler.level = FINE
#3manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
#3manager.org.apache.juli.AsyncFileHandler.prefix = manager.
#3manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8

#4host-manager.org.apache.juli.AsyncFileHandler.level = FINE
#4host-manager.org.apache.juli.AsyncFileHandler.directory = ${catalina.base}/logs
#4host-manager.org.apache.juli.AsyncFileHandler.prefix = host-manager.
#4host-manager.org.apache.juli.AsyncFileHandler.encoding = UTF-8

3. Remove the bundled webapps

[root@hdss7-200 tomcat8]# rm -rf apache-tomcat-8.5.50/webapps/*

3. Prepare the Dockerfile

  • Dockerfile
[root@hdss7-200 tomcat8]# pwd
/data/dockerfile/tomcat8
[root@hdss7-200 tomcat8]# cat Dockerfile 
FROM harbor.od.com/public/jre8:8u112
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && \
    echo 'Asia/Shanghai' >/etc/timezone
ENV CATALINA_HOME /opt/tomcat
ENV LANG zh_CN.UTF-8
ADD apache-tomcat-8.5.50/ /opt/tomcat
ADD config.yml /opt/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /opt/prom/jmx_javaagent-0.3.1.jar
WORKDIR /opt/tomcat
ADD entrypoint.sh /entrypoint.sh
CMD ["/entrypoint.sh"]
[root@hdss7-200 tomcat8]# 
  • config.yml
[root@hdss7-200 tomcat8]# cat config.yml 
---
rules:
  - pattern: '.*'
[root@hdss7-200 tomcat8]# 
  • jmx_javaagent-0.3.1.jar
[root@hdss7-200 tomcat8]# wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
  • entrypoint.sh
[root@hdss7-200 tomcat8]# cat entrypoint.sh 
#!/bin/bash
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
MIN_HEAP=${MIN_HEAP:-"128m"}
MAX_HEAP=${MAX_HEAP:-"128m"}
JAVA_OPTS=${JAVA_OPTS:-"-Xmn384m -Xss256k -Duser.timezone=GMT+08  -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram  -Dfile.encoding=UTF8 -Dsun.jnu.encoding=UTF8"}
CATALINA_OPTS="${CATALINA_OPTS}"
JAVA_OPTS="${M_OPTS} ${C_OPTS} -Xms${MIN_HEAP} -Xmx${MAX_HEAP} ${JAVA_OPTS}"
sed -i -e "1a\JAVA_OPTS=\"$JAVA_OPTS\"" -e "1a\CATALINA_OPTS=\"$CATALINA_OPTS\"" /opt/tomcat/bin/catalina.sh

cd /opt/tomcat && /opt/tomcat/bin/catalina.sh run >> /opt/tomcat/logs/stdout.log 2>&1
[root@hdss7-200 tomcat8]# 
[root@hdss7-200 tomcat8]# chmod +x entrypoint.sh 
[root@hdss7-200 tomcat8]# ll
total 372
drwxr-xr-x 9 root root    220 Mar  3 20:29 apache-tomcat-8.5.50
-rw-r--r-- 1 root root     29 Mar  3 20:40 config.yml
-rw-r--r-- 1 root root    405 Mar  3 20:40 Dockerfile
-rwxr-xr-x 1 root root   1024 Mar  3 20:43 entrypoint.sh
-rw-r--r-- 1 root root 367417 May 10  2018 jmx_javaagent-0.3.1.jar
[root@hdss7-200 tomcat8]# 
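The `sed -e "1a\..."` line in entrypoint.sh injects the assembled JAVA_OPTS and CATALINA_OPTS right after the shebang of catalina.sh, so they are defined before any other statement runs. A minimal sketch of the technique on a throwaway file (the path and values are illustrative):

```shell
# Stand-in for catalina.sh
cat > /tmp/demo.sh << 'EOF'
#!/bin/sh
echo "JAVA_OPTS=$JAVA_OPTS"
EOF

# "1a\TEXT" appends TEXT after line 1 (the shebang),
# so the variable exists before any later statement runs
sed -i -e '1a\JAVA_OPTS="-Xms128m -Xmx128m"' /tmp/demo.sh

sh /tmp/demo.sh
# prints: JAVA_OPTS=-Xms128m -Xmx128m
```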

4. Build and push the image

[root@hdss7-200 tomcat8]# docker build . -t harbor.od.com/base/tomcat:v8.5.50
[root@hdss7-200 tomcat8]# docker push harbor.od.com/base/tomcat:v8.5.50

5. Create the Jenkins job

Code: git@gitee.com:yodo1/dubbo-demo-web.git
Branch: tomcat

1. Configure the new job

  • Log in as admin
  • New Item
  • create new jobs
  • Enter an item name

tomcat-demo

  • Pipeline–>OK
  • Discard old builds

Days to keep builds : 3
Max # of builds to keep : 30

  • 11 parameters (This project is parameterized)

1.Add Parameter -> String Parameter

Name : app_name
Default Value :
Description : project name. e.g: dubbo-demo-web

2.Add Parameter -> String Parameter

Name : image_name
Default Value :
Description : project docker image name. e.g: app/dubbo-demo-web

3.Add Parameter -> String Parameter

Name : git_repo
Default Value :git@gitee.com:yodo1/dubbo-demo-web.git
Description : project git repository. e.g: git@gitee.com:yodo1/dubbo-demo-web.git

4.Add Parameter -> String Parameter

Name : git_ver
Default Value : tomcat
Description : git commit id of the project.

5.Add Parameter -> String Parameter

Name : add_tag
Default Value :
Description : project docker image tag, date_timestamp recommended. e.g: 190117_1920

6.Add Parameter -> String Parameter

Name : mvn_dir
Default Value : ./
Description : project maven directory. e.g: ./

7.Add Parameter -> String Parameter

Name : target_dir
Default Value : ./dubbo-client/target
Description : the relative path of target file such as .jar or .war package. e.g: ./dubbo-client/target

8.Add Parameter -> String Parameter

Name : mvn_cmd
Default Value : mvn clean package -Dmaven.test.skip=true
Description : maven command. e.g: mvn clean package -e -q -Dmaven.test.skip=true

9.Add Parameter -> Choice Parameter

Name : base_image
Choices :

  • base/tomcat:v7.0.94
  • base/tomcat:v8.5.50
  • base/tomcat:v9.0.17

Description : project base image list in harbor.od.com.

10.Add Parameter -> Choice Parameter

Name : maven
Choices :

  • 3.6.1-8u232
  • 3.2.5-6u025
  • 2.2.1-6u025

Description : different maven edition.

11.Add Parameter -> String Parameter

Name : root_url
Default Value : ROOT
Description : webapp dir.

2.Pipeline Script

pipeline {
  agent any 
    stages {
    stage('pull') { //get project code from repo 
      steps {
        sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
        }
    }
    stage('build') { //exec mvn cmd
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER}  && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
      }
    }
    stage('unzip') { //unzip  target/*.war -c target/project_dir
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && unzip *.war -d ./project_dir"
      }
    }
    stage('image') { //build image and push to registry
      steps {
        writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.od.com/${params.base_image}
ADD ${params.target_dir}/project_dir /opt/tomcat/webapps/${params.root_url}"""
        sh "cd  ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
      }
    }
  }
}
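The `image` stage's writeFile step generates a two-line Dockerfile from the job parameters before running the docker build. The same generation can be sketched outside Jenkins with a heredoc (the parameter values below are placeholders for what the job would pass in):

```shell
# Placeholder values for the Jenkins job parameters
base_image="base/tomcat:v8.5.50"
target_dir="./dubbo-client/target"
root_url="ROOT"

# Equivalent of the pipeline's writeFile step
cat > /tmp/Dockerfile << EOF
FROM harbor.od.com/${base_image}
ADD ${target_dir}/project_dir /opt/tomcat/webapps/${root_url}
EOF

cat /tmp/Dockerfile
```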

6. Build the application image

Run the CI job in Jenkins and check the Harbor registry


7. Prepare resource manifests

No separate manifests are needed; reuse the existing dubbo-demo-consumer ones and just change the image.

8. Apply the resource manifests

  • TEST environment
    In the k8s dashboard, edit the yaml of the Deployment dubbo-demo-consumer in the test namespace and change the image value to the image Jenkins just built.
    The example used in this document is: harbor.od.com/app/dubbo-demo-web:tomcat_210402_1550
    "image": "harbor.od.com/app/dubbo-demo-web:tomcat_210402_1550",
    Start the dubbo-demo-service and dubbo-demo-consumer Deployments plus the Apollo services
  • Production environment
    Then edit the production yaml the same way

9. Access from a browser

http://demo-test.od.com/hello?name=test

http://demo-prod.od.com/hello?name=prod

Chapter 3: Hands-on: installing and deploying the ElasticSearch search engine

1. Deploy Elasticsearch

Official site
Official GitHub
Download link

1. Install

On host HDSS7-12.host.com:

[root@hdss7-12 ~]# cd /opt/src/
[root@hdss7-12 src]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.6.tar.gz
[root@hdss7-12 src]# tar xf elasticsearch-6.8.6.tar.gz  -C  /opt/
[root@hdss7-12 src]# ln -s /opt/elasticsearch-6.8.6/ /opt/elasticsearch

2. Configure

  • elasticsearch.yml
[root@hdss7-12 ~]# mkdir -p /data/elasticsearch/{data,logs}
[root@hdss7-12 ~]# egrep -v "#|^$" /opt/elasticsearch/config/elasticsearch.yml 
cluster.name: es.od.com
node.name: hdss7-12.host.com
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: true
network.host: 10.4.7.12
http.port: 9200
  • jvm.options
[root@hdss7-12 ~]# vim /opt/elasticsearch/config/jvm.options 

-Xms512m
-Xmx512m
  • Create an unprivileged user
[root@hdss7-12 ~]# useradd  -s /bin/bash -M es
[root@hdss7-12 ~]# chown  -R es.es /opt/elasticsearch-6.8.6/
[root@hdss7-12 ~]# chown -R es.es /data/elasticsearch/
  • File descriptor and memlock limits
[root@hdss7-12 ~]# vim /etc/security/limits.conf  
*      -      nofile   65535
es    soft    nproc    4096
es    hard    nproc    4096
# memlock must be unlimited for the es user because bootstrap.memory_lock is enabled
es    soft    memlock  unlimited
es    hard    memlock  unlimited
[root@hdss7-12 ~]# 
  • Tune kernel parameters
[root@hdss7-12 ~]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
[root@hdss7-12 ~]# sysctl  -p

3. Start

[root@hdss7-12 ~]# su -c "/opt/elasticsearch/bin/elasticsearch -d" es

or

[root@hdss7-12 ~]# sudo -u es /opt/elasticsearch/bin/elasticsearch -d
  • Check the listening port
[root@hdss7-12 ~]# netstat  -lntup|grep 9200
tcp6       0      0 10.4.7.12:9200          :::*                    LISTEN      120250/java         
[root@hdss7-12 ~]# 

4. Adjust the ES index template

curl -H "Content-Type:application/json" -XPUT http://10.4.7.12:9200/_template/k8s -d '{
  "index_patterns": ["k8s*"],
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 0
  }
}'

Chapter 4: Hands-on: deploying the Kafka message queue and kafka-manager

1. Deploy Kafka

Official site
Official GitHub
Download link

1. Install

On hdss7-11.host.com:

[root@hdss7-11 src]# wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.12-2.2.0.tgz
[root@hdss7-11 src]# tar xf kafka_2.12-2.2.0.tgz  -C /opt/
[root@hdss7-11 src]# ln -s /opt/kafka_2.12-2.2.0/ /opt/kafka

2. Configure

[root@hdss7-11 ~]# mkdir /data/kafka/logs -p
[root@hdss7-11 src]# vi /opt/kafka/config/server.properties
log.dirs=/data/kafka/logs
zookeeper.connect=localhost:2181
log.flush.interval.messages=10000
log.flush.interval.ms=1000
# append the two lines below at the end of the file
# (a trailing "# ..." comment on a value line would become part of the value)
delete.topic.enable=true
host.name=hdss7-11.host.com

3. Start

[root@hdss7-11 ~]# cd /opt/kafka
[root@hdss7-11 kafka]# bin/kafka-server-start.sh  -daemon config/server.properties 
[root@hdss7-11 kafka]# netstat  -lntup|grep 9092
tcp6       0      0 10.4.7.11:9092          :::*                    LISTEN      25876/java          
[root@hdss7-11 kafka]# 

2. Deploy kafka-manager

Official GitHub
Source download link

1. On ops host hdss7-200.host.com (Method 2 is used in this deployment)

  • Method 1: build the image from source
  1. Prepare the Dockerfile
[root@hdss7-200 ~]# mkdir /data/dockerfile/kafka-manager
[root@hdss7-200 ~]# cat /data/dockerfile/kafka-manager/Dockerfile 
FROM hseeberger/scala-sbt

ENV ZK_HOSTS=10.4.7.11:2181 \
     KM_VERSION=2.0.0.2

RUN mkdir -p /tmp && \
    cd /tmp && \
    wget https://github.com/yahoo/kafka-manager/archive/${KM_VERSION}.tar.gz && \
    tar xf ${KM_VERSION}.tar.gz && \
    cd /tmp/kafka-manager-${KM_VERSION} && \
    sbt clean dist && \
    unzip  -d / ./target/universal/kafka-manager-${KM_VERSION}.zip && \
    rm -fr /tmp/${KM_VERSION}.tar.gz /tmp/kafka-manager-${KM_VERSION}

WORKDIR /kafka-manager-${KM_VERSION}

EXPOSE 9000
ENTRYPOINT ["./bin/kafka-manager","-Dconfig.file=conf/application.conf"]
[root@hdss7-200 ~]# 
[root@hdss7-200 ~]# cd /data/dockerfile/kafka-manager
[root@hdss7-200 kafka-manager]# docker build . -t harbor.od.com/infra/kafka-manager:v2.0.0.2

[root@hdss7-200 kafka-manager]# docker push harbor.od.com/infra/kafka-manager:v2.0.0.2

  • Method 2: pull a prebuilt image
    The source build takes extremely long and will most likely fail, so the second method is to pull a prebuilt image instead. Note that the prebuilt image hardcodes the zk address, so be sure to override it by passing in an environment variable.

[root@hdss7-200 ~]# docker pull sheepkiller/kafka-manager:stable
[root@hdss7-200 ~]# docker tag 34627743836f  harbor.od.com/infra/kafka-manager:stable
[root@hdss7-200 ~]# docker push harbor.od.com/infra/kafka-manager:stable
  • Method 3: load a pre-saved image archive
    [root@hdss7-200 ~]# docker load < kafka-manager-v2.0.0.2.tar
    [root@hdss7-200 ~]# docker push harbor.od.com/infra/kafka-manager:v2.0.0.2

2. Prepare resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/kafka-manager
[root@hdss7-200 ~]# cd /data/k8s-yaml/kafka-manager/

[root@hdss7-200 kafka-manager]# cat dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-manager
  namespace: infra
  labels: 
    name: kafka-manager
spec:
  replicas: 1
  selector:
    matchLabels: 
      app: kafka-manager
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
  template:
    metadata:
      labels: 
        app: kafka-manager
    spec:
      containers:
      - name: kafka-manager
        image: harbor.od.com/infra/kafka-manager:stable
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9000
          protocol: TCP
        env:
        - name: ZK_HOSTS
          value: zk1.od.com:2181
        - name: APPLICATION_SECRET
          value: letmein
      imagePullSecrets:
      - name: harbor
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
[root@hdss7-200 kafka-manager]# 

[root@hdss7-200 kafka-manager]# cat svc.yaml 
kind: Service
apiVersion: v1
metadata: 
  name: kafka-manager
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
  selector: 
    app: kafka-manager
[root@hdss7-200 kafka-manager]# 

[root@hdss7-200 kafka-manager]# cat ingress.yaml 
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: kafka-manager
  namespace: infra
spec:
  rules:
  - host: km.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: kafka-manager
          servicePort: 9000
[root@hdss7-200 kafka-manager]# 

3. Apply the resource manifests

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/kafka-manager/dp.yaml
deployment.extensions/kafka-manager created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/kafka-manager/svc.yaml
service/kafka-manager created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/kafka-manager/ingress.yaml
ingress.extensions/kafka-manager created
[root@hdss7-22 ~]# 

4. Add the DNS record

[root@hdss7-11 ~]# tail -1 /var/named/od.com.zone 
km               A    10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A km.od.com +short
10.4.7.10
[root@hdss7-11 ~]# 

5. Access from a browser

http://km.od.com
Cluster –> Add Cluster

Cluster Name: kafka-od
Cluster Zookeeper Hosts: zk1.od.com:2181
Kafka Version: 0.8.2.1

Chapter 5: Building the filebeat streaming log collector Docker image

Official download link
Ops host HDSS7-200.host.com

1. Prepare the Dockerfile

[root@hdss7-200 ~]# mkdir /data/dockerfile/filebeat/
[root@hdss7-200 ~]# cd /data/dockerfile/filebeat/
[root@hdss7-200 filebeat]# cat Dockerfile 
FROM debian:jessie
# To change versions, download the matching LINUX 64-BIT sha from the official site and replace FILEBEAT_SHA1
ENV FILEBEAT_VERSION=7.5.1 \
    FILEBEAT_SHA1=daf1a5e905c415daf68a8192a069f913a1d48e2c79e270da118385ba12a93aaa91bda4953c3402a6f0abf1c177f7bcc916a70bcac41977f69a6566565a8fae9c

RUN set -x && \
  apt-get update && \
  apt-get install -y wget && \
  wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz -O /opt/filebeat.tar.gz && \
  cd /opt && \
  echo "${FILEBEAT_SHA1}  filebeat.tar.gz" | sha512sum -c - && \
  tar xzvf filebeat.tar.gz && \
  cd filebeat-* && \
  cp filebeat /bin && \
  cd /opt && \
  rm -rf filebeat* && \
  apt-get purge -y wget && \
  apt-get autoremove -y && \
  apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
[root@hdss7-200 filebeat]# 

[root@hdss7-200 filebeat]# cat docker-entrypoint.sh 
#!/bin/bash

ENV=${ENV:-"test"}
PROJ_NAME=${PROJ_NAME:-"no-define"}
MULTILINE=${MULTILINE:-"^\d{2}"}


cat > /etc/filebeat.yaml << EOF
filebeat.inputs:
- type: log
  fields_under_root: true
  fields:
    topic: logm-${PROJ_NAME}
  paths:
    - /logm/*.log
    - /logm/*/*.log
    - /logm/*/*/*.log
    - /logm/*/*/*/*.log
    - /logm/*/*/*/*/*.log
  scan_frequency: 120s
  max_bytes: 10485760
  multiline.pattern: '$MULTILINE'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 100
- type: log
  fields_under_root: true
  fields:
    topic: logu-${PROJ_NAME}
  paths:
    - /logu/*.log
    - /logu/*/*.log
    - /logu/*/*/*.log
    - /logu/*/*/*/*.log
    - /logu/*/*/*/*/*.log
    - /logu/*/*/*/*/*/*.log
output.kafka:
  hosts: ["10.4.7.11:9092"]
  topic: k8s-fb-$ENV-%{[topic]}
  version: 2.0.0 
  required_acks: 0
  max_message_bytes: 10485760
EOF

set -xe

# If the user doesn't provide any command,
# run filebeat
if [[ "$1" == "" ]]; then
     exec filebeat  -c /etc/filebeat.yaml 
else
    # Else allow the user to run arbitrary commands such as bash
    exec "$@"
fi
[root@hdss7-200 filebeat]# 
[root@hdss7-200 filebeat]# chmod +x docker-entrypoint.sh 
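The script configures everything through environment variables with `${VAR:-default}` fallbacks, which is what lets the same image serve any project: the Deployment only has to set ENV and PROJ_NAME. A quick sketch of that expansion rule:

```shell
# Unset variables fall back to the default after ":-"
unset PROJ_NAME
echo "topic: logm-${PROJ_NAME:-no-define}"
# prints: topic: logm-no-define

# A value set in the environment wins over the default
PROJ_NAME="dubbo-demo-web"
echo "topic: logm-${PROJ_NAME:-no-define}"
# prints: topic: logm-dubbo-demo-web
```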

2. Build the image

[root@hdss7-200 filebeat]# docker build . -t harbor.od.com/infra/filebeat:v7.5.1
[root@hdss7-200 filebeat]# docker push harbor.od.com/infra/filebeat:v7.5.1 

# The image build is slow; alternatively, load a pre-built image archive
docker load < filbeat_v7.5.1.tar
docker tag de0f8515ec16 harbor.od.com/infra/filebeat:v7.5.1
docker push harbor.od.com/infra/filebeat:v7.5.1
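The default MULTILINE pattern `^\d{2}` marks any line that begins with two digits (i.e. a timestamped line) as the start of a new log event; with `negate: true` and `match: after`, non-matching lines such as stack traces are merged into the preceding event. Which lines would start a new event can be checked with grep (`[0-9]{2}` is the ERE equivalent of `\d{2}`; the sample lines below are made up):

```shell
cat > /tmp/sample.log << 'EOF'
2021-04-02 18:53:52 HelloAction received a request
java.lang.NullPointerException
    at com.example.Demo.run(Demo.java:42)
2021-04-02 18:53:53 next request
EOF

# Lines matching the multiline pattern each start a new event;
# the other two lines get folded into the first event
grep -c -E '^[0-9]{2}' /tmp/sample.log
# prints: 2
```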

Chapter 6: Hands-on: collecting application logs inside K8S with filebeat

1. Hook the in-cluster application up to filebeat (test environment)

1. Edit the resource manifest

  • Use the Tomcat build of the dubbo-demo-consumer image, running filebeat as a sidecar
[root@hdss7-200 ~]# cat /data/k8s-yaml/test/dubbo-demo-consumer/dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: test
  labels: 
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-consumer
  template:
    metadata:
      labels: 
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.od.com/app/dubbo-demo-web:tomcat_210402_1550
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=fat -Dapollo.meta=http://config-test.od.com
        volumeMounts:
        - mountPath: /opt/tomcat/logs
          name: logm
      - name: filebeat
        image: harbor.od.com/infra/filebeat:v7.5.1
        imagePullPolicy: IfNotPresent
        env:
        - name: ENV
          value: test
        - name: PROJ_NAME
          value: dubbo-demo-web
        volumeMounts:
        - mountPath: /logm
          name: logm
      volumes:
      - emptyDir: {}
        name: logm
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 ~]# 

2. Apply the resource manifest

kubectl apply -f http://k8s-yaml.od.com/test/dubbo-demo-consumer/dp.yaml

3. Access from a browser and inspect the logs

http://demo-test.od.com/hello?name=test1

[root@hdss7-21 ~]# kubectl get pods -n test
NAME                                   READY   STATUS    RESTARTS   AGE
apollo-adminservice-5cccf97c64-rs6gw   1/1     Running   0          2d3h
apollo-configservice-5f6555448-zhqpr   1/1     Running   0          2d4h
dubbo-demo-consumer-74c9978f5-8w5d4    2/2     Running   0          12m
dubbo-demo-service-78cd6dd4b6-qh9vb    1/1     Running   0          167m
[root@hdss7-21 ~]# kubectl exec -it dubbo-demo-consumer-74c9978f5-8w5d4  -c filebeat /bin/bash -n test
root@dubbo-demo-consumer-74c9978f5-8w5d4:/# ls /logm/
catalina.2021-04-02.log  localhost.2021-04-02.log  localhost_access_log.2021-04-02.txt    stdout.log
root@dubbo-demo-consumer-74c9978f5-8w5d4:/# tail -f /logm/stdout.log 
2021-04-02 18:53:52 HelloAction接收到请求:test1
2021-04-02 18:53:52 HelloService返回到结果:2021-04-02 18:53:52 <h1>这是Dubbo 消费者端(Tomcat服务)</h1><h2>欢迎来到杨哥教育K8S容器云架构师专题课培训班1期。</h2>hello test1
2021-04-02 18:55:57 HelloAction接收到请求:test12
2021-04-02 18:55:57 HelloService返回到结果:2021-04-02 18:55:57 <h1>这是Dubbo 消费者端(Tomcat服务)</h1><h2>欢迎来到杨哥教育K8S容器云架构师专题课培训班1期。</h2>hello test12
2021-04-02 18:56:19 HelloAction接收到请求:hehe
2021-04-02 18:56:19 HelloService返回到结果:2021-04-02 18:56:19 <h1>这是Dubbo 消费者端(Tomcat服务)</h1><h2>欢迎来到杨哥教育K8S容器云架构师专题课培训班1期。</h2>hello hehe
2021-04-02 19:17:49 HelloAction接收到请求:test1
2021-04-02 19:17:49 HelloService返回到结果:2021-04-02 19:17:49 <h1>这是Dubbo 消费者端(Tomcat服务)</h1><h2>欢迎来到杨哥教育K8S容器云架构师专题课培训班1期。</h2>hello test1

# The -c flag selects the filebeat container inside the pod
# /logm is the directory mounted into the filebeat container

4. Check the topic in kafka-manager

Open http://km.od.com in a browser.
If the topic shows up in kafka-manager, the pipeline is working.

2. Hook the in-cluster application up to filebeat (prod environment)

Apply the same change to the production dubbo-demo-consumer project

1. Edit the resource manifest to run filebeat as a sidecar

[root@hdss7-200 ~]# cd /data/k8s-yaml/prod/dubbo-demo-consumer/
# Copy dp.yaml from the test environment and adjust its contents for production
[root@hdss7-200 dubbo-demo-consumer]# cp /data/k8s-yaml/test/dubbo-demo-consumer/dp.yaml .
[root@hdss7-200 dubbo-demo-consumer]# cat dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: prod
  labels: 
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-consumer
  template:
    metadata:
      labels: 
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.od.com/app/dubbo-demo-web:tomcat_210402_1550
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=pro -Dapollo.meta=http://config-prod.od.com
        volumeMounts:
        - mountPath: /opt/tomcat/logs
          name: logm
      - name: filebeat
        image: harbor.od.com/infra/filebeat:v7.5.1
        imagePullPolicy: IfNotPresent
        env:
        - name: ENV
          value: prod
        - name: PROJ_NAME
          value: dubbo-demo-web
        volumeMounts:
        - mountPath: /logm
          name: logm
      volumes:
      - emptyDir: {}
        name: logm
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 dubbo-demo-consumer]# 

2. Apply the resource manifest

[root@hdss7-22 ~]# kubectl  apply -f http://k8s-yaml.od.com/prod/dubbo-demo-consumer/dp.yaml
deployment.extensions/dubbo-demo-consumer configured
[root@hdss7-22 ~]# 

3. Access from a browser and inspect the logs

http://demo-prod.od.com/hello?name=prod

[root@hdss7-21 ~]# kubectl get pods -n prod
NAME                                   READY   STATUS    RESTARTS   AGE
apollo-adminservice-5cccf97c64-c7w5k   1/1     Running   0          2d5h
apollo-configservice-5f6555448-ff877   1/1     Running   0          2d5h
dubbo-demo-consumer-54cf5c84d8-4thvw   2/2     Running   0          2m40s
dubbo-demo-service-7b4688cd7d-7x5zj    1/1     Running   0          9h
[root@hdss7-21 ~]# kubectl exec -it dubbo-demo-consumer-54cf5c84d8-4thvw -c filebeat /bin/bash -n prod
root@dubbo-demo-consumer-54cf5c84d8-4thvw:/# ls /logm/
catalina.2021-04-02.log  localhost.2021-04-02.log  localhost_access_log.2021-04-02.txt    stdout.log
root@dubbo-demo-consumer-54cf5c84d8-4thvw:/# tail -f /logm/stdout.log 
2021-04-02 20:29:56 HelloAction接收到请求:prod
2021-04-02 20:29:56 HelloService返回到结果:2021-04-02 20:29:56 <h1>这是Dubbo 消费者端(Tomcat服务)</h1><h2>欢迎来到杨哥教育K8S容器云架构师专题课培训班1期。</h2>hello prod
2021-04-02 20:31:46 HelloAction接收到请求:prod123
2021-04-02 20:31:46 HelloService返回到结果:2021-04-02 20:31:46 <h1>这是Dubbo 消费者端(Tomcat服务)</h1><h2>欢迎来到杨哥教育K8S容器云架构师专题课培训班1期。</h2>hello prod123

4. Check the topic in kafka-manager

Open http://km.od.com in a browser.

If the topic shows up in kafka-manager, the pipeline is working.

Chapter 7: Hands-on: deploying Logstash and Kibana

1. Deploy Logstash

On ops host HDSS7-200.host.com:

1. Pick a version

Logstash version selection

2. Prepare the Docker image

[root@hdss7-200 ~]# docker pull logstash:6.8.6
[root@hdss7-200 ~]# docker tag d0a2dac51fcb harbor.od.com/infra/logstash:6.8.6
[root@hdss7-200 ~]# docker push harbor.od.com/infra/logstash:6.8.6

3. Start the containers

  • Create the configs
[root@hdss7-200 ~]# mkdir /etc/logstash
[root@hdss7-200 ~]# cd /etc/logstash/
# test environment
[root@hdss7-200 logstash]# vim logstash-test.conf
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"
    consumer_threads => 4
    group_id => "k8s_test"        # the test consumer group
    topics_pattern => "k8s-fb-test-.*"    # only consume topics starting with k8s-fb-test
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]
    index => "k8s-test-%{+YYYY.MM.dd}"
  }
}

# production environment
[root@hdss7-200 logstash]# cat /etc/logstash/logstash-prod.conf 
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"
    consumer_threads => 4
    group_id => "k8s_prod"        # the prod consumer group
    topics_pattern => "k8s-fb-prod-.*"    # only consume topics starting with k8s-fb-prod
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]
    index => "k8s-prod-%{+YYYY.MM.dd}"
  }
}
[root@hdss7-200 logstash]#
  • Start the Docker containers
[root@hdss7-200 ~]# docker run -d --name logstash-test -v /etc/logstash:/etc/logstash --restart=always  harbor.od.com/infra/logstash:6.8.6 -f /etc/logstash/logstash-test.conf
[root@hdss7-200 ~]# docker run -d --name logstash-prod -v /etc/logstash:/etc/logstash --restart=always harbor.od.com/infra/logstash:6.8.6 -f /etc/logstash/logstash-prod.conf
  • Verify the indices in ElasticSearch (wait a minute or so)
[root@hdss7-200 logstash]# curl -XGET http://10.4.7.12:9200/_cat/shards
k8s-prod-2021.04.02 4 p STARTED 10 73.4kb 10.4.7.12 hdss7-12.host.com
k8s-prod-2021.04.02 1 p STARTED  6 61.9kb 10.4.7.12 hdss7-12.host.com
k8s-prod-2021.04.02 2 p STARTED 13 75.4kb 10.4.7.12 hdss7-12.host.com
k8s-prod-2021.04.02 3 p STARTED 10 77.4kb 10.4.7.12 hdss7-12.host.com
k8s-prod-2021.04.02 0 p STARTED  8 88.5kb 10.4.7.12 hdss7-12.host.com
k8s-test-2021.04.02 3 p STARTED  1 11.7kb 10.4.7.12 hdss7-12.host.com
k8s-test-2021.04.02 1 p STARTED  2 25.5kb 10.4.7.12 hdss7-12.host.com
k8s-test-2021.04.02 2 p STARTED  3 13.2kb 10.4.7.12 hdss7-12.host.com
k8s-test-2021.04.02 4 p STARTED  4 13.9kb 10.4.7.12 hdss7-12.host.com
k8s-test-2021.04.02 0 p STARTED  0   230b 10.4.7.12 hdss7-12.host.com
[root@hdss7-200 logstash]# 
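One detail worth calling out in the `index` option: the `%{+...}` codes are Joda-time, where lowercase `dd` is day-of-month and uppercase `DD` is day-of-year, so writing `%{+YYYY.MM.DD}` on April 2nd would produce an index like `k8s-prod-2021.04.92` rather than `k8s-prod-2021.04.02`. GNU date makes the same distinction (`%d` vs `%j`):

```shell
# Day-of-month vs day-of-year for 2021-04-02 (GNU date)
date -d '2021-04-02' '+%Y.%m.%d'   # prints: 2021.04.02  (like Joda dd)
date -d '2021-04-02' '+%Y.%m.%j'   # prints: 2021.04.092 (like Joda DD, zero-padded here)
```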

2. Deploy Kibana

On ops host hdss7-200.host.com:

1. Prepare the image

[root@hdss7-200 ~]# docker pull kibana:6.8.6
[root@hdss7-200 ~]# docker tag adfab5632ef4 harbor.od.com/infra/kibana:v6.8.6
[root@hdss7-200 ~]# docker push harbor.od.com/infra/kibana:v6.8.6

2. Prepare resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/kibana
[root@hdss7-200 ~]# cd /data/k8s-yaml/kibana/
[root@hdss7-200 kibana]# cat dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kibana
  namespace: infra
  labels: 
    name: kibana
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: kibana
  template:
    metadata:
      labels: 
        app: kibana
        name: kibana
    spec:
      containers:
      - name: kibana
        image: harbor.od.com/infra/kibana:v6.8.6
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5601
          protocol: TCP
        env:
        - name: ELASTICSEARCH_URL
          value: http://10.4.7.12:9200
      imagePullSecrets:
      - name: harbor
      securityContext: 
        runAsUser: 0
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 kibana]# 

[root@hdss7-200 kibana]# cat svc.yaml 
kind: Service
apiVersion: v1
metadata: 
  name: kibana
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 5601
    targetPort: 5601
  selector: 
    app: kibana
[root@hdss7-200 kibana]# cat ingress.yaml 
kind: Ingress
apiVersion: extensions/v1beta1
metadata: 
  name: kibana
  namespace: infra
spec:
  rules:
  - host: kibana.od.com
    http:
      paths:
      - path: /
        backend: 
          serviceName: kibana
          servicePort: 5601
[root@hdss7-200 kibana]# 

3. Apply the resource manifests

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/kibana/dp.yaml
deployment.extensions/kibana created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/kibana/svc.yaml
service/kibana created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/kibana/ingress.yaml
ingress.extensions/kibana created
[root@hdss7-22 ~]# 

4. Add the DNS record

[root@hdss7-11 ~]# tail -1 /var/named/od.com.zone 
kibana               A    10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A kibana.od.com +short
10.4.7.10
[root@hdss7-11 ~]# 

5. Access from a browser

http://kibana.od.com/


Field selection area

  • @timestamp

The timestamp of the log entry

  • log.file.path

The log file name

  • message

The log content

Time picker

  • Choose the log time window
    • Quick
    • Absolute
    • Relative

Environment selector

  • Choose the logs of the corresponding environment

    k8s-test-
    k8s-prod-

Project selector

  • Corresponds to filebeat's PROJ_NAME value

  • Add a filter

  • topic is ${PROJ_NAME}

    dubbo-demo-service
    dubbo-demo-web

Keyword selector

  • exception
  • error

Selected fields

Chapter 8: Collecting the logs of dubbo-demo-service

1. Modify the base image

[root@hdss7-200 ~]# cd /data/dockerfile/jre8/
[root@hdss7-200 jre8]# 
[root@hdss7-200 ~]# cat /data/dockerfile/jre8/entrypoint.sh 
#!/bin/sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"
C_OPTS=${C_OPTS}
JAR_BALL=${JAR_BALL}
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL} >>/opt/logs/stdout.log 2>&1
[root@hdss7-200 ~]# 
[root@hdss7-200 jre8]# docker build . -t harbor.od.com/base/jre8:8u112_with_logs
[root@hdss7-200 jre8]# docker push harbor.od.com/base/jre8:8u112_with_logs
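Both entrypoints funnel the JVM's output into stdout.log, and the order of the two redirections decides what actually lands in the file: `cmd >> file 2>&1` captures both streams, while `cmd 2>&1 >> file` first points stderr at the current terminal and only then moves stdout, so stderr never reaches the file. A quick sketch (file names are illustrative):

```shell
rm -f /tmp/both.log /tmp/only_out.log

# A command that writes one line to each stream
emit() { echo out; echo err >&2; }

emit >> /tmp/both.log 2>&1       # both lines land in the file
emit 2>&1 >> /tmp/only_out.log   # "err" goes to the terminal, only "out" to the file

grep -c . /tmp/both.log      # prints: 2
grep -c . /tmp/only_out.log  # prints: 1
```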

2. Update Jenkins

Rebuild the dubbo-demo project with the new base image base/jre8:8u112_with_logs, producing a new image:
harbor.od.com/app/dubbo-demo-service:apollo_210403_2210_logs


3. Edit the resource manifest (test environment)

Edit the relevant manifest to add the filebeat image as a sidecar

[root@hdss7-200 dubbo-demo-service]# pwd
/data/k8s-yaml/test/dubbo-demo-service
[root@hdss7-200 dubbo-demo-service]# cat dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: test
  labels: 
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-service
  template:
    metadata:
      labels: 
        app: dubbo-demo-service
        name: dubbo-demo-service
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.od.com/app/dubbo-demo-service:apollo_210403_2210_logs
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=fat -Dapollo.meta=http://config-test.od.com
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /opt/logs
          name: logm
      - name: filebeat
        image: harbor.od.com/infra/filebeat:v7.5.1
        imagePullPolicy: IfNotPresent
        env:
        - name: ENV
          value: test          
        - name: PROJ_NAME     
          value: dubbo-demo-service
        volumeMounts:
        - mountPath: /logm
          name: logm
      volumes:
      - emptyDir: {}
        name: logm
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 dubbo-demo-service]# 
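The key to the sidecar pattern above is the shared emptyDir volume: the application writes its logs to /opt/logs, and filebeat sees the same files under /logm. The relevant lines of the manifest are:

```yaml
# dubbo-demo-service container: application logs land here
        volumeMounts:
        - mountPath: /opt/logs
          name: logm
# filebeat container: same volume, read side
        volumeMounts:
        - mountPath: /logm
          name: logm
# pod-level volume: node-local scratch space, removed with the pod
      volumes:
      - emptyDir: {}
        name: logm
```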

4.Apply the resource manifest

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/test/dubbo-demo-service/dp.yaml
deployment.extensions/dubbo-demo-service configured
[root@hdss7-21 ~]# 

5.Enter the container and check the logs

Visit http://demo-test.od.com/hello?name=haha to generate some log entries.

[root@hdss7-21 ~]# kubectl get pods -n test
NAME                                   READY   STATUS    RESTARTS   AGE
apollo-adminservice-5cccf97c64-rs6gw   1/1     Running   0          3d6h
apollo-configservice-5f6555448-zhqpr   1/1     Running   0          3d8h
dubbo-demo-consumer-74c9978f5-gkcq9    2/2     Running   0          24h
dubbo-demo-service-698975db7d-wgl77    2/2     Running   0          4m57s
[root@hdss7-21 ~]# kubectl  exec -it dubbo-demo-service-698975db7d-wgl77 -c filebeat  /bin/bash -n test
root@dubbo-demo-service-698975db7d-wgl77:/# ls /logm/
stdout.log
root@dubbo-demo-service-698975db7d-wgl77:/# tail -f /logm/stdout.log 
Dubbo server started
Dubbo 服务端已经启动
HelloService接收到消息:haha
HelloService接收到消息:haha1
HelloService接收到消息:wangbadan

6.Check the generated topic in kafka-manager

7.View the collected data in Kibana

8.Modify the resource manifest (prod environment)

Likewise add the filebeat sidecar, reusing the image already built in the test environment:

harbor.od.com/app/dubbo-demo-service:apollo_210403_2210_logs

[root@hdss7-200 dubbo-demo-service]# pwd
/data/k8s-yaml/prod/dubbo-demo-service
[root@hdss7-200 dubbo-demo-service]# cat dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: prod
  labels: 
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-service
  template:
    metadata:
      labels: 
        app: dubbo-demo-service
        name: dubbo-demo-service
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.od.com/app/dubbo-demo-service:apollo_210403_2210_logs
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=pro -Dapollo.meta=http://config-prod.od.com
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /opt/logs
          name: logm
      - name: filebeat
        image: harbor.od.com/infra/filebeat:v7.5.1
        imagePullPolicy: IfNotPresent
        env:
        - name: ENV
          value: prod
        - name: PROJ_NAME
          value: dubbo-demo-service
        volumeMounts:
        - mountPath: /logm
          name: logm 
      volumes:
      - emptyDir: {}
        name: logm
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 dubbo-demo-service]# 

Note: here ENV is prod.
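For comparison, the prod manifest differs from the test one only in the namespace and the environment-specific values:

```diff
-  namespace: test
+  namespace: prod
-          value: -Denv=fat -Dapollo.meta=http://config-test.od.com
+          value: -Denv=pro -Dapollo.meta=http://config-prod.od.com
-        - name: ENV
-          value: test
+        - name: ENV
+          value: prod
```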

9.Apply the production manifest

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/prod/dubbo-demo-service/dp.yaml
deployment.extensions/dubbo-demo-service configured
[root@hdss7-21 ~]# 

10.Enter the container and check the logs

Visit http://demo-prod.od.com/hello?name=prod to generate some log entries.

[root@hdss7-21 ~]# kubectl get pods -n prod
NAME                                   READY   STATUS    RESTARTS   AGE
apollo-adminservice-5cccf97c64-c7w5k   1/1     Running   0          3d7h
apollo-configservice-5f6555448-ff877   1/1     Running   0          3d7h
dubbo-demo-consumer-758c669c45-7hz2q   2/2     Running   0          25h
dubbo-demo-service-677d49f884-b5ln2    2/2     Running   0          93s
[root@hdss7-21 ~]# kubectl exec -it dubbo-demo-service-677d49f884-b5ln2 -c filebeat /bin/bash -n prod
root@dubbo-demo-service-677d49f884-b5ln2:/# ls /logm/
stdout.log
root@dubbo-demo-service-677d49f884-b5ln2:/# 
root@dubbo-demo-service-677d49f884-b5ln2:/# tail -f /logm/stdout.log 
Dubbo server started
Dubbo 服务端已经启动
HelloService接收到消息:prod
HelloService接收到消息:prod
HelloService接收到消息:prod1

11.Check the generated topic in kafka-manager

12.View the collected data in Kibana

Document updated: 2021-04-03 22:34   Author: xtyang