
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— consul

Published: 2023-11-27 00:52:35

Contents

  • Navigation
  • Preface
  • Dynamic discovery flow
  • Related yaml files
    • consul.yaml
    • consul-service.yaml
    • consul-service-web.yaml
  • Deployment
  • Registering an exporter
  • Verification

Navigation

The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— Overview
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— Preparation
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— exporter
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— consul
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— prometheus-operator
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— prometheus
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— alertmanager
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— minio
The Advanced Path: Deploying Highly Available Prometheus on k8s from Zero to One —— thanos receive, thanos query

Preface

In this deployment architecture, consul's main job is to let prometheus discover exporters dynamically, so that the prometheus scrape configuration does not have to be edited every time an exporter is added or removed.

Dynamic discovery flow

prometheus discovers exporters through consul as follows:

1. Add a consul_sd_configs block to prometheus's scrape configuration.
2. Register the exporter with consul through its HTTP API (see the article "prometheus + consul实现动态添加监控节点" on adding monitored nodes dynamically with prometheus + consul).
3. Check that the exporter appears under prometheus's targets and that its state is UP (allow up to one scrape interval for it to show up).
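Step 1 above can be sketched as the following scrape job. The job name, the in-cluster address consul.prom-ha:8500 (the headless Service defined below, qualified with the prom-ha namespace), and the relabel rule keeping only services tagged "node" are illustrative assumptions, not taken from the original article:

```yaml
scrape_configs:
  - job_name: 'consul-exporters'          # hypothetical job name
    consul_sd_configs:
      - server: 'consul.prom-ha:8500'     # assumed in-cluster address of the consul Service
    relabel_configs:
      # __meta_consul_tags is the comma-joined tag list, e.g. ",ifcloud,node,";
      # keep only services carrying the "node" tag registered below
      - source_labels: [__meta_consul_tags]
        regex: '.*,node,.*'
        action: keep
```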

Related yaml files

consul.yaml

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 3
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
        - name: container
          image: consul:1.9.6
          args:
            - agent
            - '-server'
            - '-bootstrap-expect=3'
            - '-ui'
            - '-data-dir=/consul/data'
            - '-bind=0.0.0.0'
            - '-client=0.0.0.0'
            - '-retry-join=consul-0.consul'
            - '-retry-join=consul-1.consul'
            - '-retry-join=consul-2.consul'
          ports:
            - name: tcp-8300
              containerPort: 8300
              protocol: TCP
            - name: tcp-8301
              containerPort: 8301
              protocol: TCP
            - name: udp-8301
              containerPort: 8301
              protocol: UDP
            - name: tcp-8302
              containerPort: 8302
              protocol: TCP
            - name: udp-8302
              containerPort: 8302
              protocol: UDP
            - name: tcp-8500
              containerPort: 8500
              protocol: TCP
            - name: tcp-8600
              containerPort: 8600
              protocol: TCP
            - name: udp-8600
              containerPort: 8600
              protocol: UDP
          imagePullPolicy: IfNotPresent
      restartPolicy: Always

consul-service.yaml

kind: Service
apiVersion: v1
metadata:
  name: consul
  labels:
    app: consul
spec:
  ports:
    - name: tcp-8300
      protocol: TCP
      port: 8300
      targetPort: 8300
    - name: tcp-8301
      protocol: TCP
      port: 8301
      targetPort: 8301
    - name: udp-8301
      protocol: UDP
      port: 8301
      targetPort: 8301
    - name: tcp-8302
      protocol: TCP
      port: 8302
      targetPort: 8302
    - name: udp-8302
      protocol: UDP
      port: 8302
      targetPort: 8302
    - name: tcp-8500
      protocol: TCP
      port: 8500
      targetPort: 8500
    - name: tcp-8600
      protocol: TCP
      port: 8600
      targetPort: 8600
    - name: udp-8600
      protocol: UDP
      port: 8600
      targetPort: 8600
  selector:
    app: consul
  clusterIP: None
  type: ClusterIP

consul-service-web.yaml

kind: Service
apiVersion: v1
metadata:
  name: consul-web
  labels:
    app: consul-web
spec:
  ports:
    - name: http-web
      protocol: TCP
      port: 8500
      targetPort: 8500
      nodePort: 30002
  selector:
    app: consul
  type: NodePort
  sessionAffinity: None

Deployment

# Place the files above under the directory /yaml/consul
# Validate the yaml files first with a dry run:
kubectl apply -f /yaml/consul -n prom-ha --dry-run=client
# Once validation passes, create the k8s resources:
kubectl apply -f /yaml/consul -n prom-ha

Registering an exporter

Request URL (HTTP PUT): http://192.168.25.80:30002/v1/agent/service/register?replace-existing-checks=1

Body:

{
    "ID": "test-exporter-1",                       // unique identifier of the service
    "Name": "test-exporter-192.168.25.80:30001",   // services with the same name are grouped together in the UI
    "Tags": [
        "ifcloud",
        "node"                                     // tags; inherited into prometheus labels, useful for classifying targets
    ],
    "Address": "192.168.25.80",                    // exporter IP
    "Port": 30001,                                 // exporter port
    "Meta": {
        "instance": "i-6ULChRiM8A"                 // metadata; inherited into prometheus labels, usable for filtering after relabeling
    },
    "EnableTagOverride": false,
    "Check": {
        "HTTP": "http://192.168.25.80:30001/metrics",
        "Interval": "10s"
    },
    "Weights": {
        "Passing": 10,
        "Warning": 1
    }
}
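The registration can be performed with curl. A minimal sketch, assuming the NodePort from consul-service-web.yaml is reachable at 192.168.25.80:30002; note that the payload sent on the wire must be strict JSON, without the explanatory comments shown above:

```shell
#!/bin/sh
# Write the registration payload as strict JSON (Consul rejects // comments).
cat > /tmp/register.json <<'EOF'
{
  "ID": "test-exporter-1",
  "Name": "test-exporter-192.168.25.80:30001",
  "Tags": ["ifcloud", "node"],
  "Address": "192.168.25.80",
  "Port": 30001,
  "Meta": {"instance": "i-6ULChRiM8A"},
  "EnableTagOverride": false,
  "Check": {"HTTP": "http://192.168.25.80:30001/metrics", "Interval": "10s"},
  "Weights": {"Passing": 10, "Warning": 1}
}
EOF

# Send it to the agent; the register endpoint expects an HTTP PUT.
# Uncomment once the Consul NodePort is reachable from this machine:
# curl -s -X PUT --data @/tmp/register.json \
#   "http://192.168.25.80:30002/v1/agent/service/register?replace-existing-checks=1"
echo "payload ready: /tmp/register.json"
```

With replace-existing-checks=1, re-registering the same service ID replaces its health checks instead of accumulating duplicates.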

Verification

After the deployment completes, open http://192.168.25.80:30002; if the Consul web UI loads and shows the cluster nodes and services, the deployment succeeded.
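Besides the web UI, the same check can be made from the command line via Consul's HTTP API. A sketch, assuming the same NodePort address; the curl calls are left commented because they need network access to the cluster:

```shell
#!/bin/sh
# Base URL of the Consul HTTP API exposed through the NodePort Service.
CONSUL_ADDR="192.168.25.80:30002"
SERVICES_URL="http://${CONSUL_ADDR}/v1/catalog/services"

# List every service known to the cluster (the registered exporter should appear):
# curl -s "$SERVICES_URL"

# A non-empty response here means a raft leader was elected, i.e. the cluster is healthy:
# curl -s "http://${CONSUL_ADDR}/v1/status/leader"
echo "query $SERVICES_URL to list registered services"
```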

