Recap

  • Configuration consists of tunable variables kept outside the program; the same program behaves differently under different configurations.
  • Characteristics of a cloud-native program:
    • Configuration is passed into the container through environment variables.
    • Configuration takes effect through the program's startup arguments.
    • Configuration is managed centrally (CRUD) in a configuration center.
  • What should a DevOps engineer do?
    • Containerize the company's in-house applications (repackage them with Docker).
    • Push containerized applications toward cloud-native applications (build once, run anywhere).
    • Use a container orchestration framework (Kubernetes) to orchestrate business containers in a sensible, standardized, professional way.

Chapter 1: Overview of Prometheus, the New-Generation Container Cloud Monitoring System

  • Prometheus is a monitoring system originally built at SoundCloud. It became a community open-source project in 2012 and has a very active community of developers and users. To emphasize open governance and independent maintenance, Prometheus joined the CNCF in 2016, becoming the second hosted project after Kubernetes.
  • https://Prometheus.io
  • https://github.com/prometheus
  • Key features of Prometheus:
    • Multi-dimensional data model: time-series data identified by a metric name and key/value label pairs.
    • Built-in time-series database (TSDB).
    • PromQL: a flexible query language that uses the multi-dimensional data to answer complex queries (see the example after this list).
    • Time-series data is collected over HTTP in pull mode (via exporters).
    • A PushGateway component is also supported for pushed data.
    • Targets are discovered through service discovery or static configuration.
    • Multiple graphing modes and dashboard support.
    • Can be plugged into Grafana as a data source.
  • Prometheus architecture diagram

  • Comparison between Prometheus, the new-generation container cloud monitoring system, and traditional Zabbix monitoring
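
As a small illustration of PromQL, the two queries below can be pasted into the Prometheus web UI once the exporters deployed in the following chapters are being scraped (illustrative only; the metric names assume node-exporter v0.15 and kube-state-metrics as used later in this document):

# Per-node CPU usage over the last 5 minutes (node-exporter)
sum(rate(node_cpu{mode!="idle"}[5m])) by (instance)

# Number of running pods per namespace (kube-state-metrics)
sum(kube_pod_status_phase{phase="Running"}) by (namespace)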

Chapter 2: Hands-on Deployment of the Essential Exporters for Container Cloud Monitoring

1. Commonly used exporters

Unlike Zabbix, Prometheus has no agent; it uses exporters tailored to each kind of service. To monitor a K8S cluster together with its nodes and pods, four exporters are commonly used:

  • kube-state-metrics
    Collects basic state information about the K8S cluster (the object state held by the master and etcd)
  • node-exporter
    Collects K8S cluster node (host) metrics
  • cadvisor
    Collects resource-usage metrics from inside the K8S cluster's Docker containers
  • blackbox-exporter
    Probes whether the containerized services in the K8S cluster are alive

2. Deploying kube-state-metrics

On the ops host hdss7-200.host.com

1. Prepare the kube-state-metrics image

Official kube-state-metrics image on quay.io

[root@hdss7-200 ~]# docker pull quay.io/coreos/kube-state-metrics:v1.5.0
[root@hdss7-200 ~]# docker images|grep kube-state-metrics
quay.io/coreos/kube-state-metrics          v1.5.0                     91599517197a   2 years ago     31.8MB
[root@hdss7-200 ~]# docker tag 91599517197a  harbor.od.com/public/kube-state-metrics:v1.5.0
[root@hdss7-200 ~]# docker push harbor.od.com/public/kube-state-metrics:v1.5.0

2. Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/kube-state-metrics
[root@hdss7-200 ~]# cd /data/k8s-yaml/kube-state-metrics
[root@hdss7-200 kube-state-metrics]# cat rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs:
  - list
  - watch
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs:
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs:
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - list
  - watch
- apiGroups:
  - autoscaling
  resources:
  - horizontalpodautoscalers
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system
[root@hdss7-200 kube-state-metrics]# 
[root@hdss7-200 kube-state-metrics]# cat dp.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
  labels:
    grafanak8sapp: "true"
    app: kube-state-metrics
  name: kube-state-metrics
  namespace: kube-system
spec:
  selector:
    matchLabels:
      grafanak8sapp: "true"
      app: kube-state-metrics
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        grafanak8sapp: "true"
        app: kube-state-metrics
    spec:
      containers:
      - name: kube-state-metrics
        image: harbor.od.com/public/kube-state-metrics:v1.5.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
          name: http-metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
      serviceAccountName: kube-state-metrics
[root@hdss7-200 kube-state-metrics]# 

3. Apply the resource manifests

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/kube-state-metrics/rbac.yaml
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/kube-state-metrics/dp.yaml

4. Verify startup

[root@hdss7-21 ~]# kubectl get pods -n kube-system -o wide|grep kube-state-metrics
kube-state-metrics-8669f776c6-67s5z     1/1     Running   0          101s    172.7.21.14   hdss7-21.host.com   <none>           <none>
[root@hdss7-21 ~]# curl  172.7.21.14:8080/healthz
ok[root@hdss7-21 ~]# 
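
Beyond the health check, a quick way to confirm that metrics are actually being exposed is to pull a few lines from the /metrics endpoint (a spot check; the pod IP is the one shown above):

[root@hdss7-21 ~]# curl -s 172.7.21.14:8080/metrics | head -5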

3. Deploying node-exporter

On the ops host hdss7-200.host.com

1. Prepare the node-exporter image

Official node-exporter image on Docker Hub
Official node-exporter repository on GitHub

[root@hdss7-200 ~]# docker pull prom/node-exporter:v0.15.0
[root@hdss7-200 ~]# docker images|grep node
prom/node-exporter                         v0.15.0                    12d51ffa2b22   3 years ago     22.8MB
[root@hdss7-200 ~]# docker tag 12d51ffa2b22  harbor.od.com/public/node-exporter:v0.15.0
[root@hdss7-200 ~]# docker push harbor.od.com/public/node-exporter:v0.15.0

2. Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/node-exporter
[root@hdss7-200 ~]# cd /data/k8s-yaml/node-exporter
[root@hdss7-200 node-exporter]# 
[root@hdss7-200 ~]# cat /data/k8s-yaml/node-exporter/ds.yaml 
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    daemon: "node-exporter"
    grafanak8sapp: "true"
spec:
  selector:
    matchLabels:
      daemon: "node-exporter"
      grafanak8sapp: "true"
  template:
    metadata:
      name: node-exporter
      labels:
        daemon: "node-exporter"
        grafanak8sapp: "true"
    spec:
      volumes:
      - name: proc
        hostPath: 
          path: /proc
          type: ""
      - name: sys
        hostPath:
          path: /sys
          type: ""
      containers:
      - name: node-exporter
        image: harbor.od.com/public/node-exporter:v0.15.0
        imagePullPolicy: IfNotPresent
        args:
        - --path.procfs=/host_proc
        - --path.sysfs=/host_sys
        ports:
        - name: node-exporter
          hostPort: 9100
          containerPort: 9100
          protocol: TCP
        volumeMounts:
        - name: sys
          readOnly: true
          mountPath: /host_sys
        - name: proc
          readOnly: true
          mountPath: /host_proc
      hostNetwork: true
[root@hdss7-200 ~]# 

3. Apply the resource manifest

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/node-exporter/ds.yaml

4. Verify startup

[root@hdss7-21 ~]# netstat  -lntup|grep 9100
tcp6       0      0 :::9100                 :::*                    LISTEN      16640/node_exporter 
[root@hdss7-21 ~]# 
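
The metrics endpoint can also be checked directly on the node (a spot check; node_cpu is the CPU metric name exposed by node-exporter v0.15):

[root@hdss7-21 ~]# curl -s localhost:9100/metrics | grep '^node_cpu' | head -5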

4. Deploying cadvisor

On the ops host hdss7-200.host.com

1. Prepare the cadvisor image

Official cadvisor image on Docker Hub
Official cadvisor repository on GitHub

[root@hdss7-200 ~]# docker pull google/cadvisor:v0.28.3
[root@hdss7-200 ~]# docker images|grep cadvisor
google/cadvisor                            v0.28.3                    75f88e3ec333   3 years ago     62.2MB
[root@hdss7-200 ~]# docker tag 75f88e3ec333 harbor.od.com/public/cadvisor:v0.28.3
[root@hdss7-200 ~]# docker push harbor.od.com/public/cadvisor:v0.28.3

2. Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/cadvisor
[root@hdss7-200 ~]# cd /data/k8s-yaml/cadvisor
[root@hdss7-200 cadvisor]# cat ds.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: kube-system
  labels:
    app: cadvisor
spec:
  selector:
    matchLabels:
      name: cadvisor
  template:
    metadata:
      labels:
        name: cadvisor
    spec:
      hostNetwork: true
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: cadvisor
        image: harbor.od.com/public/cadvisor:v0.28.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: rootfs
          mountPath: /rootfs
          readOnly: true
        - name: var-run
          mountPath: /var/run
        - name: sys
          mountPath: /sys
          readOnly: true
        - name: docker
          mountPath: /var/lib/docker
          readOnly: true
        ports:
          - name: http
            containerPort: 4194
            protocol: TCP
        readinessProbe:
          tcpSocket:
            port: 4194
          initialDelaySeconds: 5
          periodSeconds: 10
        args:
          - --housekeeping_interval=10s
          - --port=4194
      terminationGracePeriodSeconds: 30
      volumes:
      - name: rootfs
        hostPath:
          path: /
      - name: var-run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: docker
        hostPath:
          path: /data/docker
[root@hdss7-200 cadvisor]# 

3. Adjust the cgroup symlink on the compute nodes

On all compute nodes

~]# mount -o remount,rw /sys/fs/cgroup/
~]# ln -s /sys/fs/cgroup/cpu,cpuacct /sys/fs/cgroup/cpuacct,cpu

4. Apply the resource manifest

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/cadvisor/ds.yaml

5. Verify startup

[root@hdss7-21 ~]# netstat  -lntup|grep 4194
tcp6       0      0 :::4194                 :::*                    LISTEN      24204/cadvisor      
[root@hdss7-21 ~]# 
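
cadvisor's own /metrics endpoint can likewise be spot-checked on the node (container_cpu_usage_seconds_total is one of the standard cadvisor metrics):

[root@hdss7-21 ~]# curl -s localhost:4194/metrics | grep container_cpu_usage_seconds_total | head -3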

6. Three ways to deliberately influence the K8S scheduling policy:

1. Taints and tolerations

[root@hdss7-21 ~]# kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
deployment.apps/nginx-dp created
[root@hdss7-21 ~]# kubectl get pods -o wide -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE    IP            NODE                NOMINATED NODE   READINESS GATES
nginx-dp-5dfc689474-846jl   1/1     Running   0          100s   172.7.21.16   hdss7-21.host.com   <none>           <none>
[root@hdss7-21 ~]# 
# Scale up to 2 replicas; pods will run on both compute nodes
[root@hdss7-21 ~]# kubectl scale deployment nginx-dp --replicas=2 -n kube-public
deployment.extensions/nginx-dp scaled
[root@hdss7-21 ~]# 
[root@hdss7-21 ~]# kubectl get pods -o wide -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
nginx-dp-5dfc689474-846jl   1/1     Running   0          3m55s   172.7.21.16   hdss7-21.host.com   <none>           <none>
nginx-dp-5dfc689474-wzwd5   1/1     Running   0          76s     172.7.22.4    hdss7-22.host.com   <none>           <none>
[root@hdss7-21 ~]# 
#1. Put a taint on the compute node hdss7-21.host.com
[root@hdss7-21 ~]# kubectl taint node hdss7-21.host.com quedian=buxijiao:NoSchedule
node/hdss7-21.host.com tainted
[root@hdss7-21 ~]# kubectl describe node hdss7-21.host.com |grep -i "taints"
Taints:             quedian=buxijiao:NoSchedule
[root@hdss7-21 ~]# 
#2. Scale up nginx-dp; new pods are no longer scheduled onto the tainted hdss7-21.host.com node (they all land on hdss7-22.host.com)
[root@hdss7-21 ~]# kubectl get deployment -n kube-public
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginx-dp   2/2     2            2           9m4s
[root@hdss7-21 ~]# kubectl scale deployment nginx-dp --replicas=4 -n kube-public
deployment.extensions/nginx-dp scaled
[root@hdss7-21 ~]# kubectl get pods -o wide -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
nginx-dp-5dfc689474-846jl   1/1     Running   0          10m     172.7.21.16   hdss7-21.host.com   <none>           <none>
nginx-dp-5dfc689474-drjmd   1/1     Running   0          4s      172.7.22.6    hdss7-22.host.com   <none>           <none>
nginx-dp-5dfc689474-r9wpj   1/1     Running   0          4s      172.7.22.5    hdss7-22.host.com   <none>           <none>
nginx-dp-5dfc689474-wzwd5   1/1     Running   0          7m45s   172.7.22.4    hdss7-22.host.com   <none>           <none>
[root@hdss7-21 ~]# 
#3. Edit the yaml so the pods tolerate this taint (quedian=buxijiao)
[root@hdss7-21 ~]# kubectl get deployment nginx-dp -o yaml -n kube-public >nginx-dp.yaml
[root@hdss7-21 ~]# cat nginx-dp.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx-dp
  name: nginx-dp
  namespace: kube-public
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-dp
  template:
    metadata:
      labels:
        app: nginx-dp
    spec:
      tolerations:
      - key: quedian
        value: buxijiao
        effect: NoSchedule
      containers:
      - image: harbor.od.com/public/nginx:v1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
# Apply; when the deployment is scaled up now, pods will run on the 10.4.7.21 compute node again
[root@hdss7-21 ~]# kubectl apply -f nginx-dp.yaml 
# Remove the taint
[root@hdss7-21 ~]# kubectl taint node hdss7-21.host.com quedian-
node/hdss7-21.host.com untainted
[root@hdss7-21 ~]# kubectl describe node hdss7-21.host.com |grep -i "taints"
Taints:             <none>
[root@hdss7-21 ~]# 
# Delete the deployment
[root@hdss7-21 ~]# kubectl delete deployment nginx-dp -n kube-public
  • The NoExecute taint effect (eviction)
[root@hdss7-21 ~]# kubectl taint node hdss7-21.host.com key=broken:NoExecute
# A NoExecute taint evicts the pods already running on this node; they are rescheduled onto other compute nodes. This is used when a compute node has to be taken offline for maintenance.

# Remove the taint
[root@hdss7-21 ~]# kubectl taint node hdss7-21.host.com key-
node/hdss7-21.host.com untainted
[root@hdss7-21 ~]# 

# Add multiple taints to a node (empty value, NoSchedule effect; see the toleration sketch below)
kubectl taint node hdss7-21.host.com buxizao=:NoSchedule
kubectl taint node hdss7-21.host.com buxijiao=:NoSchedule
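
For a pod to remain schedulable onto a node carrying both of these taints, its controller needs a matching toleration for each. A minimal sketch of the spec.template.spec fragment (operator: Exists matches the empty taint value used above):

      tolerations:
      - key: buxizao
        operator: Exists
        effect: NoSchedule
      - key: buxijiao
        operator: Exists
        effect: NoSchedule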

5. Deploying blackbox-exporter

On the ops host hdss7-200.host.com

1. Prepare the blackbox-exporter image

Official blackbox-exporter image on Docker Hub
Official blackbox-exporter repository on GitHub

[root@hdss7-200 ~]# docker pull prom/blackbox-exporter:v0.15.1
[root@hdss7-200 ~]# docker images|grep blackbox
prom/blackbox-exporter                     v0.15.1                    81b70b6158be   17 months ago   19.7MB
[root@hdss7-200 ~]# docker tag 81b70b6158be  harbor.od.com/public/blackbox-exporter:v0.15.1
[root@hdss7-200 ~]# docker push harbor.od.com/public/blackbox-exporter:v0.15.1

2. Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/blackbox-exporter

[root@hdss7-200 ~]# cat /data/k8s-yaml/blackbox-exporter/cm.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: blackbox-exporter
  name: blackbox-exporter
  namespace: kube-system
data:
  blackbox.yml: |-
    modules:
      http_2xx:
        prober: http
        timeout: 2s
        http:
          valid_http_versions: ["HTTP/1.1", "HTTP/2"]
          valid_status_codes: [200,301,302]
          method: GET
          preferred_ip_protocol: "ip4"
      tcp_connect:
        prober: tcp
        timeout: 2s
[root@hdss7-200 ~]# 

[root@hdss7-200 ~]# cat /data/k8s-yaml/blackbox-exporter/dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: blackbox-exporter
  namespace: kube-system
  labels:
    app: blackbox-exporter
  annotations:
    deployment.kubernetes.io/revision: 1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox-exporter
  template:
    metadata:
      labels:
        app: blackbox-exporter
    spec:
      volumes:
      - name: config
        configMap:
          name: blackbox-exporter
          defaultMode: 420
      containers:
      - name: blackbox-exporter
        image: harbor.od.com/public/blackbox-exporter:v0.15.1
        imagePullPolicy: IfNotPresent
        args:
        - --config.file=/etc/blackbox_exporter/blackbox.yml
        - --log.level=info
        - --web.listen-address=:9115
        ports:
        - name: blackbox-port
          containerPort: 9115
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 50Mi
        volumeMounts:
        - name: config
          mountPath: /etc/blackbox_exporter
        readinessProbe:
          tcpSocket:
            port: 9115
          initialDelaySeconds: 5
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
[root@hdss7-200 ~]# 

[root@hdss7-200 ~]# cat /data/k8s-yaml/blackbox-exporter/svc.yaml 
kind: Service
apiVersion: v1
metadata:
  name: blackbox-exporter
  namespace: kube-system
spec:
  selector:
    app: blackbox-exporter
  ports:
    - name: blackbox-port
      protocol: TCP
      port: 9115
[root@hdss7-200 ~]# 

[root@hdss7-200 ~]# cat /data/k8s-yaml/blackbox-exporter/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: blackbox-exporter
  namespace: kube-system
spec:
  rules:
  - host: blackbox.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: blackbox-exporter
          servicePort: blackbox-port
[root@hdss7-200 ~]# 

3. Apply the resource manifests

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/blackbox-exporter/cm.yaml
configmap/blackbox-exporter created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/blackbox-exporter/dp.yaml
deployment.extensions/blackbox-exporter created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/blackbox-exporter/svc.yaml
service/blackbox-exporter created
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/blackbox-exporter/ingress.yaml
ingress.extensions/blackbox-exporter created
[root@hdss7-21 ~]# 

4. Configure DNS resolution

[root@hdss7-11 ~]# tail -1 /var/named/od.com.zone 
blackbox           A    10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A blackbox.od.com @10.4.7.11 +short
10.4.7.10
[root@hdss7-11 ~]# 

5. Access from a browser

http://blackbox.od.com/
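
Besides the web UI, the exporter's /probe endpoint can be called directly for an ad-hoc check (the target host:port below is only an example; substitute any reachable TCP endpoint). The response includes a probe_success metric, which is 1 when the target is reachable:

[root@hdss7-21 ~]# curl "http://blackbox.od.com/probe?module=tcp_connect&target=10.4.7.21:22"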

Chapter 3: Hands-on Deployment of Prometheus and Its Configuration Explained

1. Deploying prometheus

On the ops host hdss7-200.host.com

1. Prepare the prometheus image

Official prometheus image on Docker Hub
Official prometheus repository on GitHub

[root@hdss7-200 ~]# docker pull prom/prometheus:v2.14.0
[root@hdss7-200 ~]# docker images|grep prometheus
prom/prometheus                            v2.14.0                    7317640d555e   15 months ago   130MB
[root@hdss7-200 ~]# docker tag 7317640d555e harbor.od.com/infra/prometheus:v2.14.0
[root@hdss7-200 ~]# docker push harbor.od.com/infra/prometheus:v2.14.0

2. Prepare the resource manifests

[root@hdss7-200 ~]# mkdir  /data/k8s-yaml/prometheus
[root@hdss7-200 ~]# cat /data/k8s-yaml/prometheus/rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
  namespace: infra
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: infra
[root@hdss7-200 ~]# 
# Pinned to the compute node 10.4.7.21 via nodeName (see dp.yaml below)

[root@hdss7-200 ~]# cat /data/k8s-yaml/prometheus/dp.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "5"
  labels:
    name: prometheus
  name: prometheus
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      app: prometheus
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      nodeName: 10.4.7.21
      containers:
      - name: prometheus
        image: harbor.od.com/infra/prometheus:v2.14.0
        imagePullPolicy: IfNotPresent
        command:
        - /bin/prometheus
        args:
        - --config.file=/data/etc/prometheus.yml
        - --storage.tsdb.path=/data/prom-db
        - --storage.tsdb.min-block-duration=10m
        - --storage.tsdb.retention=72h
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /data
          name: data
        resources:
          requests:
            cpu: "1000m"
            memory: "1.5Gi"
          limits:
            cpu: "2000m"
            memory: "3Gi"
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      serviceAccountName: prometheus
      volumes:
      - name: data
        nfs:
          server: hdss7-200
          path: /data/nfs-volume/prometheus
[root@hdss7-200 ~]# 

[root@hdss7-200 ~]# cat /data/k8s-yaml/prometheus/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: infra
spec:
  ports:
  - port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: prometheus
[root@hdss7-200 ~]# 

[root@hdss7-200 ~]# cat /data/k8s-yaml/prometheus/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: prometheus
  namespace: infra
spec:
  rules:
  - host: prometheus.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus
          servicePort: 9090
[root@hdss7-200 ~]# 

3. Prepare the prometheus configuration file

On the ops host hdss7-200.host.com

  • Create the directories and copy the certificates
[root@hdss7-200 ~]# mkdir -pv /data/nfs-volume/prometheus/{etc,prom-db}
[root@hdss7-200 ~]# cp /opt/certs/ca.pem  /data/nfs-volume/prometheus/etc/
[root@hdss7-200 ~]# cp /opt/certs/client.pem  /data/nfs-volume/prometheus/etc/
[root@hdss7-200 ~]# cp /opt/certs/client-key.pem  /data/nfs-volume/prometheus/etc
  • Prepare the configuration
[root@hdss7-200 ~]# cat /data/nfs-volume/prometheus/etc/prometheus.yml 
global:
  scrape_interval:     15s
  evaluation_interval: 15s
scrape_configs:
- job_name: 'etcd'
  tls_config:
    ca_file: /data/etc/ca.pem
    cert_file: /data/etc/client.pem
    key_file: /data/etc/client-key.pem
  scheme: https
  static_configs:
  - targets:
    - '10.4.7.12:2379'
    - '10.4.7.21:2379'
    - '10.4.7.22:2379'
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
- job_name: 'kubernetes-kubelet'
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __address__
    replacement: ${1}:10255
- job_name: 'kubernetes-cadvisor'
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __address__
    replacement: ${1}:4194
- job_name: 'kubernetes-kube-state'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
  - source_labels: [__meta_kubernetes_pod_label_grafanak8sapp]
    regex: .*true.*
    action: keep
  - source_labels: ['__meta_kubernetes_pod_label_daemon', '__meta_kubernetes_pod_node_name']
    regex: 'node-exporter;(.*)'
    action: replace
    target_label: nodename
- job_name: 'blackbox_http_pod_probe'
  metrics_path: /probe
  kubernetes_sd_configs:
  - role: pod
  params:
    module: [http_2xx]
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_blackbox_scheme]
    action: keep
    regex: http
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_blackbox_port,  __meta_kubernetes_pod_annotation_blackbox_path]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+);(.+)
    replacement: $1:$2$3
    target_label: __param_target
  - action: replace
    target_label: __address__
    replacement: blackbox-exporter.kube-system:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
- job_name: 'blackbox_tcp_pod_probe'
  metrics_path: /probe
  kubernetes_sd_configs:
  - role: pod
  params:
    module: [tcp_connect]
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_blackbox_scheme]
    action: keep
    regex: tcp
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_blackbox_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __param_target
  - action: replace
    target_label: __address__
    replacement: blackbox-exporter.kube-system:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
- job_name: 'traefik'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: keep
    regex: traefik
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
[root@hdss7-200 ~]# 

4. Apply the resource manifests

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/prometheus/rbac.yaml
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/prometheus/dp.yaml
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/prometheus/svc.yaml
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/prometheus/ingress.yaml

5. Configure DNS resolution

On host hdss7-11.host.com

[root@hdss7-11 ~]# tail -1 /var/named/od.com.zone 
prometheus         A    10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A prometheus.od.com @10.4.7.11 +short
10.4.7.10
[root@hdss7-11 ~]# 

6. Access from a browser

http://prometheus.od.com — if the page loads, the startup succeeded.

Click Status -> Configuration to view our configuration file.

7. What Prometheus monitors

Click Status -> Targets to see the job names configured in prometheus.yml; these targets largely cover our data-collection needs.

Five of the numbered job names have already been discovered and are being scraped.
Next, the services behind the remaining four job names must be brought under monitoring.
This is done by adding annotations to the services whose data we want to collect.

  • 1. The etcd service

    Monitors the etcd service

  • 2. kubernetes-apiservers

    Monitors the apiserver service

  • 3. kubernetes-kubelet

    Monitors the kubelet service

  • 4. kubernetes-kube-state

    Monitors basic state information

    • node-exporter
      Monitors node (host) information
    • kube-state-metrics

      Monitors pod information

  • 5. traefik

    Monitors the traefik-ingress-controller
    Note: add the annotations on traefik's pod controller and restart the pods for the monitoring to take effect.
    Example configuration:

"annotations": {
  "prometheus_io_scheme": "traefik",
  "prometheus_io_path": "/metrics",
  "prometheus_io_port": "8080"
}
  • 6. blackbox*

    Monitors whether services are alive
    blackbox_tcp_pod_probe: checks whether a service is alive over TCP
    Note: add the annotations on the pod controller and restart the pods for the monitoring to take effect.

Example configuration:

"annotations": {
  "blackbox_port": "20880",
  "blackbox_scheme": "tcp"
}

blackbox_http_pod_probe: checks whether an HTTP service is alive
Note: add the annotations on the pod controller and restart the pods for the monitoring to take effect.

Example configuration:

"annotations": {
  "blackbox_path": "/",
  "blackbox_port": "8080",
  "blackbox_scheme": "http"
}
  • 7. kubernetes-pods*

    Monitors JVM information

metric                                                              value
jvm_info{version="1.7.0_80-b15",vendor="Oracle Corporation",runtime="Java(TM) SE Runtime Environment",} 1.0
jmx_config_reload_success_total 0.0
process_resident_memory_bytes 4.693897216E9
process_virtual_memory_bytes 1.2138840064E10
process_max_fds 65536.0
process_open_fds 123.0
process_start_time_seconds 1.54331073249E9
process_cpu_seconds_total 196465.74
jvm_buffer_pool_used_buffers{pool="mapped",} 0.0
jvm_buffer_pool_used_buffers{pool="direct",} 150.0
jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0
jvm_buffer_pool_capacity_bytes{pool="direct",} 6216688.0
jvm_buffer_pool_used_bytes{pool="mapped",} 0.0
jvm_buffer_pool_used_bytes{pool="direct",} 6216688.0
jvm_gc_collection_seconds_sum{gc="PS MarkSweep",} 1.867
Note: add the annotations on the pod controller and restart the pods for the monitoring to take effect.
Example configuration:
"annotations": {
  "prometheus_io_scrape": "true",
  "prometheus_io_port": "12346",
  "prometheus_io_path": "/"
}

8. Modify the traefik service to hook into Prometheus monitoring

Edit traefik's resource manifest yaml: under spec -> template -> metadata, at the same level as labels, add the annotations block

[root@hdss7-200 ~]# cat  /data/k8s-yaml/traefik/ds.yaml 
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress
        name: traefik-ingress
      annotations:
        prometheus_io_scheme: "traefik"
        prometheus_io_path: "/metrics"
        prometheus_io_port: "8080"
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: harbor.od.com/public/traefik:v1.7.2
        name: traefik-ingress
        ports:
        - name: controller
          containerPort: 80
          hostPort: 81
        - name: admin-web
          containerPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --insecureskipverify=true
        - --kubernetes.endpoint=https://10.4.7.10:7443
        - --accesslog
        - --accesslog.filepath=/var/log/traefik_access.log
        - --traefiklog
        - --traefiklog.filepath=/var/log/traefik.log
        - --metrics.prometheus
[root@hdss7-200 ~]# 

Apply the configuration from any compute node

[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ds.yaml

After the pods have restarted, check in Prometheus whether the traefik targets are being scraped successfully (see the sample query below)
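
A simple PromQL query in the Prometheus UI confirms that traefik metrics are arriving (a sketch; traefik_entrypoint_requests_total is one of the counters exposed by traefik 1.7 when --metrics.prometheus is enabled):

sum(rate(traefik_entrypoint_requests_total[5m])) by (entrypoint, code)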

9. Use blackbox to check TCP/HTTP service status

blackbox checks the liveness of services running inside containers, i.e. a port/status probe; it supports two methods, tcp and http.

1. Modify dubbo-demo-service to hook into Prometheus monitoring (TCP)

In the dashboard:
under the prod namespace -> deployment -> dubbo-demo-service -> spec -> template -> metadata, add

"annotations": {
  "blackbox_port": "20880",
  "blackbox_scheme": "tcp",
   "prometheus_io_scrape": "true",
  "prometheus_io_path": "/",
  "prometheus_io_port": "12346"
}
# The three prometheus_io_* keys above enable JVM metric scraping and must be included as well

Alternatively, edit the dp.yaml file

[root@hdss7-200 ~]# cat /data/k8s-yaml/prod/dubbo-demo-service/dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: prod
  labels: 
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-service
  template:
    metadata:
      labels: 
        app: dubbo-demo-service
        name: dubbo-demo-service
      annotations:
        "blackbox_port": "20880"
        "blackbox_scheme": "tcp"
        "prometheus_io_scrape": "true"
        "prometheus_io_path": "/"
        "prometheus_io_port": "12346"
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.od.com/app/dubbo-demo-service:apollo_210330_2120
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=pro -Dapollo.meta=http://config-prod.od.com
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 ~]# 
  • Apply the resource manifest from any compute node
[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/prod/dubbo-demo-service/dp.yaml
deployment.extensions/dubbo-demo-service configured
[root@hdss7-21 ~]# 
  • Check the monitoring data:

http://blackbox.od.com/

10. Modify dubbo-demo-consumer to hook into Prometheus monitoring (HTTP)

In the K8S dashboard, under the prod namespace -> deployment -> dubbo-demo-consumer -> spec -> template -> metadata, add

"annotations": {
  "blackbox_path": "/hello?name=health",
  "blackbox_port": "8080",
  "blackbox_scheme": "http",
  "prometheus_io_scrape": "true",
  "prometheus_io_path": "/",
  "prometheus_io_port": "12346"
}

# The three prometheus_io_* keys above enable JVM metric scraping and must be included as well

Alternatively, edit the corresponding dp.yaml file

[root@hdss7-200 ~]# cat /data/k8s-yaml/prod/dubbo-demo-consumer/dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: prod
  labels: 
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-consumer
  template:
    metadata:
      labels: 
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
      annotations:
        blackbox_path: "/hello?name=health"
        blackbox_port: "8080"
        blackbox_scheme: "http"
        prometheus_io_scrape: "true"
        prometheus_io_path: "/"
        prometheus_io_port: "12346"
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.od.com/app/dubbo-demo-consumer:apollo_210330_2150
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=pro -Dapollo.meta=http://config-prod.od.com
        - name: JAR_BALL
          value: dubbo-client.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 ~]# 
  • Apply the resource manifest
    [root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/prod/dubbo-demo-consumer/dp.yaml
    deployment.extensions/dubbo-demo-consumer configured
    [root@hdss7-21 ~]# 
  • Check the monitoring


11. Add JVM monitoring

Add the following annotations to both dubbo-demo-service and dubbo-demo-consumer so that the JVM information inside the pods is monitored

annotations:
  prometheus_io_scrape: "true"
  prometheus_io_path: "/"
  prometheus_io_port: "12346"

The full dp.yaml:

[root@hdss7-200 ~]# cat  /data/k8s-yaml/prod/dubbo-demo-service/dp.yaml 
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-service
  namespace: prod
  labels: 
    name: dubbo-demo-service
spec:
  replicas: 1
  selector:
    matchLabels: 
      name: dubbo-demo-service
  template:
    metadata:
      labels: 
        app: dubbo-demo-service
        name: dubbo-demo-service
      annotations:
        "blackbox_port": "20880"
        "blackbox_scheme": "tcp"
        "prometheus_io_scrape": "true"
        "prometheus_io_path": "/"
        "prometheus_io_port": "12346"
    spec:
      containers:
      - name: dubbo-demo-service
        image: harbor.od.com/app/dubbo-demo-service:apollo_210330_2120
        ports:
        - containerPort: 20880
          protocol: TCP
        env:
        - name: C_OPTS
          value: -Denv=pro -Dapollo.meta=http://config-prod.od.com
        - name: JAR_BALL
          value: dubbo-server.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext: 
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate: 
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
[root@hdss7-200 ~]# 
  • Check the monitoring

  • All monitored targets

Chapter 4: Hands-on Deployment of Grafana, the Container Cloud Monitoring Dashboard

On the ops host hdss7-200.host.com

1. Prepare the grafana image

Official grafana image on Docker Hub
Official grafana repository on GitHub
Official grafana website

[root@hdss7-200 ~]# docker pull grafana/grafana:5.4.2
[root@hdss7-200 ~]# docker images|grep grafana
grafana/grafana                            5.4.2                      6f18ddf9e552   2 years ago     243MB
[root@hdss7-200 ~]# docker tag 6f18ddf9e552 harbor.od.com/infra/grafana:v5.4.2
[root@hdss7-200 ~]# docker push harbor.od.com/infra/grafana:v5.4.2

2. Prepare the resource manifests

# First create the nfs shared directory
[root@hdss7-200 ~]# mkdir /data/nfs-volume/grafana/
# Create the manifest directory
[root@hdss7-200 ~]# mkdir /data/k8s-yaml/grafana/
[root@hdss7-200 ~]# vi /data/k8s-yaml/grafana/rbac.yaml
[root@hdss7-200 ~]# cat /data/k8s-yaml/grafana/rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
rules:
- apiGroups:
  - "*"
  resources:
  - namespaces
  - deployments
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana
subjects:
- kind: User
  name: k8s-node
[root@hdss7-200 ~]# 
[root@hdss7-200 ~]# 
[root@hdss7-200 ~]# vi /data/k8s-yaml/grafana/dp.yaml
[root@hdss7-200 ~]# cat /data/k8s-yaml/grafana/dp.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: grafana
    name: grafana
  name: grafana
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: grafana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: grafana
        name: grafana
    spec:
      containers:
      - name: grafana
        image: harbor.od.com/infra/grafana:v5.4.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: data
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - nfs:
          server: hdss7-200
          path: /data/nfs-volume/grafana
        name: data
[root@hdss7-200 ~]# 
[root@hdss7-200 ~]# vi /data/k8s-yaml/grafana/svc.yaml
[root@hdss7-200 ~]# cat /data/k8s-yaml/grafana/svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: infra
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
[root@hdss7-200 ~]# 
[root@hdss7-200 ~]# vi /data/k8s-yaml/grafana/ingress.yaml
[root@hdss7-200 ~]# cat /data/k8s-yaml/grafana/ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: infra
spec:
  rules:
  - host: grafana.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
[root@hdss7-200 ~]# 

3. Apply the resource manifests

[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/grafana/rbac.yaml
clusterrole.rbac.authorization.k8s.io/grafana unchanged
clusterrolebinding.rbac.authorization.k8s.io/grafana created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/grafana/dp.yaml
deployment.extensions/grafana created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/grafana/svc.yaml
service/grafana created
[root@hdss7-22 ~]# kubectl apply -f http://k8s-yaml.od.com/grafana/ingress.yaml
ingress.extensions/grafana created
[root@hdss7-22 ~]# 

4. Configure DNS resolution

[root@hdss7-11 ~]# tail -1 /var/named/od.com.zone 
grafana            A    10.4.7.10
[root@hdss7-11 ~]# systemctl restart named
[root@hdss7-11 ~]# dig -t A grafana.od.com @10.4.7.11 +short
10.4.7.10
[root@hdss7-11 ~]# 

5. Access from a browser

http://grafana.od.com

  • Username: admin
  • Password: admin

After logging in, change the administrator password to admin123

6. Configure the grafana UI

1. Appearance

Configuration -> Preferences

  • UI Theme: Light
  • Home Dashboard: Default
  • Timezone: Local browser time

Save

2. Install plugins

Method 1:
Exec into the running grafana container and run:

  • Kubernetes App
grafana-cli plugins install grafana-kubernetes-app

Method 2:
Download link

[root@hdss7-200 ~]# cd /data/nfs-volume/grafana/plugins
[root@hdss7-200 plugins]# wget https://grafana.com/api/plugins/grafana-kubernetes-app/versions/1.0.1/download -O grafana-kubernetes-app.zip
[root@hdss7-200 plugins]# unzip grafana-kubernetes-app.zip
  • Clock Panel
    Download link
    grafana-cli plugins install grafana-clock-panel
  • Pie Chart
    Download link
    grafana-cli plugins install grafana-piechart-panel
  • D3 Gauge
    Download link
    grafana-cli plugins install briangann-gauge-panel
  • Discrete
    Download link
    grafana-cli plugins install natel-discrete-panel
  • Restart the grafana pod (one way is shown below), after which the installed plugins show up under
    Configuration -> Plugins.
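
One way to restart the grafana pod is to delete it and let the Deployment recreate it (a sketch; the label selector assumes the app: grafana label from dp.yaml above):

[root@hdss7-21 ~]# kubectl delete pod -n infra -l app=grafana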

7. Configure the grafana data source

Configuration -> Data Sources
Select prometheus

  • HTTP
    URL: http://prometheus.od.com
    Access: Server (Default)
  • Auth

  • TLS Auth Details

  • Save & Test

8. Configure the Kubernetes cluster dashboard

  • Configuration -> Plugins -> Kubernetes

Kubernetes -> + New Cluster

  • Add a new cluster
    Name: myk8s
  • HTTP
    URL: https://10.4.7.10:7443
    Access: Server (Default)
  • Auth
    TLS Client Auth: checked
    With CA Cert: checked

Paste ca.pem, client.pem and client-key.pem into the corresponding text boxes.

  • Prometheus Read

Save

Dashboard template IDs from grafana.com:
4475            traefik
9965/7587       blackbox
3070            etcd

Delete the stock dashboards, then import the custom templates

Etcd Dashboard

Generic Dashboard

k8s cluster

k8s container

k8s Deployments

K8S Node

Traefik Dashboard

Blackbox Dashboard

JMX Dashboard

Chapter 5: Hands-on Monitoring Alerts with the Alertmanager Component

1. Prepare the Alertmanager image

On the ops host hdss7-200.host.com

[root@hdss7-200 ~]# docker pull docker.io/prom/alertmanager:v0.14.0
[root@hdss7-200 ~]# docker images|grep alertmanager
prom/alertmanager                          v0.14.0                    23744b2d645c   3 years ago     31.9MB
[root@hdss7-200 ~]# docker tag 23744b2d645c harbor.od.com/infra/alertmanager:v0.14.0
[root@hdss7-200 ~]# docker push harbor.od.com/infra/alertmanager:v0.14.0

2. Prepare the resource manifests

[root@hdss7-200 ~]# mkdir /data/k8s-yaml/alertmanager
[root@hdss7-200 ~]# cd /data/k8s-yaml/alertmanager
[root@hdss7-200 alertmanager]# cat cm.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: infra
data:
  config.yml: |-
    global:
      resolve_timeout: 5m
      smtp_smarthost: 'smtp.126.com:465'
      smtp_from: 'xx@126.com'
      smtp_auth_username: 'xx@126.com'
      smtp_auth_password: 'xxxxxx'
      smtp_require_tls: false
    route:
      group_by: ['alertname', 'cluster']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 5m
      receiver: default
    receivers:
    - name: 'default'
      email_configs:
      - to: 'xx@126.com'
        send_resolved: true
[root@hdss7-200 alertmanager]# 
[root@hdss7-200 alertmanager]# cat dp.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: harbor.od.com/infra/alertmanager:v0.14.0
        args:
          - "--config.file=/etc/alertmanager/config.yml"
          - "--storage.path=/alertmanager"
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: alertmanager-cm
          mountPath: /etc/alertmanager
      volumes:
      - name: alertmanager-cm
        configMap:
          name: alertmanager-config
      imagePullSecrets:
      - name: harbor
[root@hdss7-200 alertmanager]# 
[root@hdss7-200 alertmanager]# cat svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: infra
spec:
  selector: 
    app: alertmanager
  ports:
    - port: 80
      targetPort: 9093
[root@hdss7-200 alertmanager]# 

3. Apply the resource manifests

[root@hdss7-21 ~]# kubectl apply -f  http://k8s-yaml.od.com/alertmanager/cm.yaml
[root@hdss7-21 ~]# kubectl apply -f  http://k8s-yaml.od.com/alertmanager/dp.yaml
[root@hdss7-21 ~]# kubectl apply -f  http://k8s-yaml.od.com/alertmanager/svc.yaml

4. Alerting rules

[root@hdss7-200 ~]# cat /data/nfs-volume/prometheus/etc/rules.yml
groups:
- name: hostStatsAlert
  rules:
  - alert: hostCpuUsageAlert
    expr: sum(avg without (cpu)(irate(node_cpu{mode!='idle'}[5m]))) by (instance) > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} CPU usage above 85% (current value: {{ $value }}%)"
  - alert: hostMemUsageAlert
    expr: (node_memory_MemTotal - node_memory_MemAvailable)/node_memory_MemTotal > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} MEM usage above 85% (current value: {{ $value }}%)"
  - alert: OutOfInodes
    expr: node_filesystem_free{fstype="overlay",mountpoint ="/"} / node_filesystem_size{fstype="overlay",mountpoint ="/"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of inodes (instance {{ $labels.instance }})"
      description: "Disk is almost running out of available inodes (< 10% left) (current value: {{ $value }})"
  - alert: OutOfDiskSpace
    expr: node_filesystem_free{fstype="overlay",mountpoint ="/rootfs"} / node_filesystem_size{fstype="overlay",mountpoint ="/rootfs"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of disk space (instance {{ $labels.instance }})"
      description: "Disk is almost full (< 10% left) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputIn
    expr: sum by (instance) (irate(node_network_receive_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput in (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably receiving too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputOut
    expr: sum by (instance) (irate(node_network_transmit_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput out (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably sending too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadRate
    expr: sum by (instance) (irate(node_disk_bytes_read[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read rate (instance {{ $labels.instance }})"
      description: "Disk is probably reading too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskWriteRate
    expr: sum by (instance) (irate(node_disk_bytes_written[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write rate (instance {{ $labels.instance }})"
      description: "Disk is probably writing too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadLatency
    expr: rate(node_disk_read_time_ms[1m]) / rate(node_disk_reads_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (read operations > 100ms) (current value: {{ $value }})"
  - alert: UnusualDiskWriteLatency
    expr: rate(node_disk_write_time_ms[1m]) / rate(node_disk_writes_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (write operations > 100ms) (current value: {{ $value }})"
- name: http_status
  rules:
  - alert: ProbeFailed
    expr: probe_success == 0
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Probe failed (instance {{ $labels.instance }})"
      description: "Probe failed (current value: {{ $value }})"
  - alert: StatusCode
    expr: probe_http_status_code <= 199 OR probe_http_status_code >= 400
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Status Code (instance {{ $labels.instance }})"
      description: "HTTP status code is not 200-399 (current value: {{ $value }})"
  - alert: SslCertificateWillExpireSoon
    expr: probe_ssl_earliest_cert_expiry - time() < 86400 * 30
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "SSL certificate will expire soon (instance {{ $labels.instance }})"
      description: "SSL certificate expires in 30 days (current value: {{ $value }})"
  - alert: SslCertificateHasExpired
    expr: probe_ssl_earliest_cert_expiry - time()  <= 0
    for: 5m
    labels:
      severity: error
    annotations:
      summary: "SSL certificate has expired (instance {{ $labels.instance }})"
      description: "SSL certificate has expired already (current value: {{ $value }})"
  - alert: BlackboxSlowPing
    expr: probe_icmp_duration_seconds > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow ping (instance {{ $labels.instance }})"
      description: "Blackbox ping took more than 2s (current value: {{ $value }})"
  - alert: BlackboxSlowRequests
    expr: probe_http_duration_seconds > 2 
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow requests (instance {{ $labels.instance }})"
      description: "Blackbox request took more than 2s (current value: {{ $value }})"
  - alert: PodCpuUsagePercent
    expr: sum(sum(label_replace(irate(container_cpu_usage_seconds_total[1m]),"pod","$1","container_label_io_kubernetes_pod_name", "(.*)"))by(pod) / on(pod) group_right kube_pod_container_resource_limits_cpu_cores *100 )by(container,namespace,node,pod,severity) > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Pod cpu usage percent has exceeded 80% (current value: {{ $value }}%)"
[root@hdss7-200 ~]# 

5. Load the alerting rules into prometheus

  • Append the alerting configuration to the end of /data/nfs-volume/prometheus/etc/prometheus.yml

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager"]
rule_files:
 - "/data/etc/rules.yml"
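
Before reloading, the rule file can optionally be validated with promtool, which ships inside the prometheus image (a sketch; replace <prometheus-pod> with the actual pod name, and note the file is mounted at /data/etc inside the container):

[root@hdss7-21 ~]# kubectl -n infra exec <prometheus-pod> -- promtool check rules /data/etc/rules.yml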
  • Reload prometheus gracefully
[root@hdss7-21 ~]# ps -ef|grep prometheus
root      71402  71383  2 04:40 ?        00:10:20 /bin/prometheus --config.file=/data/etc/prometheus.yml --storage.tsdb.path=/data/prom-db --storage.tsdb.min-block-duration=10m --storage.tsdb.retention=72h
root     100912 100893  0 05:42 ?        00:01:03 traefik traefik --api --kubernetes --logLevel=INFO --insecureskipverify=true --kubernetes.endpoint=https://10.4.7.10:7443 --accesslog --accesslog.filepath=/var/log/traefik_access.log --traefiklog --traefiklog.filepath=/var/log/traefik.log --metrics.prometheus
root     109435 102257  0 10:44 pts/0    00:00:00 grep --color=auto prometheus
[root@hdss7-21 ~]# kill -SIGHUP 71402
[root@hdss7-21 ~]# 
  • Check the page

http://prometheus.od.com/alerts

6. Verify alerting

  • 1. Scale the dubbo-demo-service deployment down to 0 replicas
[root@hdss7-21 ~]# kubectl scale deployment dubbo-demo-service --replicas=0 -n prod
deployment.extensions/dubbo-demo-service scaled
[root@hdss7-21 ~]# 
  • 2. Wait about a minute, then check for the alert email (see the note below on restoring the service)
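
Once the alert email has arrived, the service can be restored by scaling the deployment back up (a sketch; 1 replica matches the original dp.yaml):

[root@hdss7-21 ~]# kubectl scale deployment dubbo-demo-service --replicas=1 -n prod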

Document last updated: 2021-04-02 10:54   Author: xtyang