Metrics Retrieval Methods

Prometheus primarily uses the pull model to retrieve monitoring metrics from the endpoints exposed by target services. You therefore need to configure scraping jobs that request monitoring data and write it into the storage provided by Prometheus. Prometheus currently offers several ways to configure these jobs:

  • Native Job Configuration: uses Prometheus's native job configuration to scrape metrics.
  • Pod Monitor: in the Kubernetes ecosystem, scrapes monitoring data from Pods via Prometheus Operator.
  • Service Monitor: in the Kubernetes ecosystem, scrapes monitoring data from the Endpoints of Services via Prometheus Operator.

Note

[ ] indicates an optional configuration item.

Native Job Configuration

The configuration items are explained as follows:

# Name of the scraping job. A label (job=<job_name>) is also added to the scraped metrics.
job_name: <job_name>

# Time interval between scrapes
[ scrape_interval: <duration> | default = <global_config.scrape_interval> ]

# Timeout for scrape requests
[ scrape_timeout: <duration> | default = <global_config.scrape_timeout> ]

# URI path for the scrape request
[ metrics_path: <path> | default = /metrics ]

# Handling of conflicts between scraped labels and labels added by the backend Prometheus.
# true: Keep the scraped labels and ignore the conflicting labels added by the backend Prometheus.
# false: Rename conflicting scraped labels to "exported_<original-label>" and keep the labels added by the backend Prometheus.
[ honor_labels: <boolean> | default = false ]

# Whether to use the timestamp generated by the target being scraped.
# true: Uses the timestamp from the target if available.
# false: Ignores the timestamp from the target.
[ honor_timestamps: <boolean> | default = true ]

# Protocol for the scrape request: http or https
[ scheme: <scheme> | default = http ]

# URL parameters for the scrape request
params:
  [ <string>: [<string>, ...] ]

# Set the `Authorization` header of the scrape request using basic authentication. password and password_file are mutually exclusive; password_file takes precedence.
basic_auth:
  [ username: <string> ]
  [ password: <secret> ]
  [ password_file: <string> ]

# Set the `Authorization` header of the scrape request to the given bearer token. bearer_token and bearer_token_file are mutually exclusive; bearer_token takes precedence.
[ bearer_token: <secret> ]

# Set the `Authorization` header of the scrape request to the bearer token read from the given file. bearer_token and bearer_token_file are mutually exclusive; bearer_token takes precedence.
[ bearer_token_file: <filename> ]

# Whether the scrape connection should use a secure TLS channel, and the corresponding TLS parameters.
tls_config:
  [ <tls_config> ]

# Scrape the target's metrics through a proxy service; specify the address of the proxy service.
[ proxy_url: <string> ]

# Specify the targets using static configuration, see explanation below.
static_configs:
  [ - <static_config> ... ]

# CVM service discovery configuration, see explanation below.
cvm_sd_configs:
  [ - <cvm_sd_config> ... ]

# After scraping the data, rewrite the labels of the corresponding target using the relabel mechanism. Multiple relabel rules are executed in order.
# See explanation below for relabel_config.
relabel_configs:
  [ - <relabel_config> ... ]

# Before writing the scraped data, rewrite the values of the labels using the relabel mechanism. Multiple relabel rules are executed in order.
# See explanation below for relabel_config.
metric_relabel_configs:
  [ - <relabel_config> ... ]

# Limit the number of data points per scrape, 0: no limit, default is 0
[ sample_limit: <int> | default = 0 ]

# Limit the number of targets per scrape, 0: no limit, default is 0
[ target_limit: <int> | default = 0 ]
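
As an illustration, a minimal native job that scrapes a Redis exporter with basic authentication might look like the following sketch. The target address, credentials path, and instance ID are placeholder assumptions, not values from your environment:

job_name: redis-exporter
scrape_interval: 30s
metrics_path: /metrics
basic_auth:
  username: user                 # placeholder username
  password_file: /etc/prom/pass  # placeholder path to the password file
static_configs:
  - targets:
      - 10.0.0.1:9121            # placeholder exporter address
    labels:
      instance: crs-xxxxxx       # attach an instance label at scrape time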

Pod Monitor

The configuration items are explained as follows:

# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding Kubernetes resource type, here it is PodMonitor
kind: PodMonitor
# Corresponding Kubernetes metadata; only the name needs attention. If jobLabel is not specified, the job label on the scraped metrics defaults to <namespace>/<name>
metadata:
  name: redis-exporter # Specify a unique name
  namespace: cm-prometheus  # Fixed namespace, no need to modify
  labels:
    operator.insight.io/managed-by: insight # Label indicating the resource is managed by Insight, required
# Describes the selection and configuration of the target Pods to be scraped
spec:
  # Specify a label on the target Pods; the Pod Monitor will use its value as the job label value.
  # If viewing the Pod YAML, use the values in pod.metadata.labels.
  # If viewing Deployment/Daemonset/Statefulset, use spec.template.metadata.labels.
  [ jobLabel: string ]
  # Adds the corresponding Pod's Labels to the Target's Labels
  [ podTargetLabels: []string ]
  # Limit the number of data points per scrape, 0: no limit, default is 0
  [ sampleLimit: uint64 ]
  # Limit the number of targets per scrape, 0: no limit, default is 0
  [ targetLimit: uint64 ]
  # Configure the Prometheus HTTP endpoints that need to be scraped and exposed. Multiple endpoints can be configured.
  podMetricsEndpoints:
  [ - <endpoint_config> ... ] # See explanation below for endpoint
  # Select the namespaces where the monitored Pods are located. Leave it blank to select all namespaces.
  [ namespaceSelector: ]
    # Select all namespaces
    [ any: bool ]
    # Specify the list of namespaces to be selected
    [ matchNames: []string ]
  # Specify the Label values of the Pods to be monitored in order to locate the target Pods [K8S metav1.LabelSelector](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#labelselector-v1-meta)
  selector:
    [ matchExpressions: array ]
      [ example: - {key: tier, operator: In, values: [cache]} ]
    [ matchLabels: object ]
      [ example: k8s-app: redis-exporter ]

Example 1

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: redis-exporter # Specify a unique name
  namespace: cm-prometheus # Fixed namespace, do not modify
  labels:
    operator.insight.io/managed-by: insight  # Label indicating managed by Insight, required.
spec:
  podMetricsEndpoints:
    - interval: 30s
      port: metric-port # Specify the Port Name corresponding to Prometheus Exporter in the pod YAML
      path: /metrics # Specify the value of the Path corresponding to Prometheus Exporter, if not specified, default is /metrics
      relabelings:
        - action: replace
          sourceLabels:
            - instance
          regex: (.*)
          targetLabel: instance
          replacement: "crs-xxxxxx" # Adjust to the corresponding Redis instance ID
        - action: replace
          sourceLabels:
            - instance
          regex: (.*)
          targetLabel: ip
          replacement: "1.x.x.x" # Adjust to the corresponding Redis instance IP
  namespaceSelector: # Select the namespaces where the monitored Pods are located
    matchNames:
      - redis-test
  selector: # Specify the Label values of the Pods to be monitored in order to locate the target pods
    matchLabels:
      k8s-app: redis-exporter

Example 2

A native job configuration that scrapes a Prometheus instance's own metrics endpoint:

job_name: prometheus
scrape_interval: 30s
static_configs:
- targets:
  - 127.0.0.1:9090

Service Monitor

The configuration items are explained as follows:

# Prometheus Operator CRD version
apiVersion: monitoring.coreos.com/v1
# Corresponding Kubernetes resource type, here it is ServiceMonitor
kind: ServiceMonitor
# Corresponding Kubernetes metadata; only the name needs attention. If jobLabel is not specified, the job label on the scraped metrics defaults to the name of the Service.
metadata:
  name: redis-exporter # Specify a unique name
  namespace: cm-prometheus  # Fixed namespace, no need to modify
  labels:
    operator.insight.io/managed-by: insight # Label indicating the resource is managed by Insight, required
# Describes the selection and configuration of the target Services to be scraped
spec:
  # Specify a label (metadata/labels) on the target Service; the Service Monitor will use its value as the job label value.
  [ jobLabel: string ]
  # Adds the Labels of the corresponding service to the Target's Labels
  [ targetLabels: []string ]
  # Adds the Labels of the corresponding Pod to the Target's Labels
  [ podTargetLabels: []string ]
  # Limit the number of data points per scrape, 0: no limit, default is 0
  [ sampleLimit: uint64 ]
  # Limit the number of targets per scrape, 0: no limit, default is 0
  [ targetLimit: uint64 ]
  # Configure the Prometheus HTTP endpoints that need to be scraped and exposed. Multiple endpoints can be configured.
  endpoints:
  [ - <endpoint_config> ... ] # See explanation below for endpoint
  # Select the namespaces where the monitored Pods are located. Leave it blank to select all namespaces.
  [ namespaceSelector: ]
    # Select all namespaces
    [ any: bool ]
    # Specify the list of namespaces to be selected
    [ matchNames: []string ]
  # Specify the Label values of the Pods to be monitored in order to locate the target Pods [K8S metav1.LabelSelector](https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#labelselector-v1-meta)
  selector:
    [ matchExpressions: array ]
      [ example: - {key: tier, operator: In, values: [cache]} ]
    [ matchLabels: object ]
      [ example: k8s-app: redis-exporter ]

Example

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: go-demo # Specify a unique name
  namespace: cm-prometheus # Fixed namespace, do not modify
  labels:
    operator.insight.io/managed-by: insight  # Label indicating managed by Insight, required.
spec:
  endpoints:
    - interval: 30s
      # Specify the Port Name corresponding to Prometheus Exporter in the service YAML
      port: 8080-8080-tcp
      # Specify the value of the Path corresponding to Prometheus Exporter, if not specified, default is /metrics
      path: /metrics
      relabelings:
        # ** A label named 'application' must be present. Assuming the Pods carry a Kubernetes label named 'app',
        # copy its value to 'application' using the relabel 'replace' action.
        - action: replace
          sourceLabels: [__meta_kubernetes_pod_label_app]
          targetLabel: application
  # Select the namespace where the monitored service is located
  namespaceSelector:
    matchNames:
      - golang-demo
  # Specify the Label values of the service to be monitored in order to locate the target service
  selector:
    matchLabels:
      app: golang-app-demo

endpoint_config

The configuration items are explained as follows:

# The name of the corresponding port. Please note that it's not the actual port number.
# Default: 80. Possible values are as follows:
# ServiceMonitor: corresponds to Service>spec/ports/name;
# PodMonitor: explained as follows:
#   If viewing the Pod YAML, take the value from pod.spec.containers.ports.name.
#   If viewing Deployment/DaemonSet/StatefulSet, take the value from spec.template.spec.containers.ports.name.
[ port: string | default = 80 ]
# The URI path for the scrape request.
[ path: string | default = /metrics ]
# The protocol for the scrape: http or https.
[ scheme: string | default = http ]
# URL parameters for the scrape request.
[ params: map[string][]string ]
# The interval between scrape requests.
[ interval: string | default = 30s ]
# The timeout for the scrape request.
[ scrapeTimeout: string | default = 30s ]
# Whether the scrape connection should be made over a secure TLS channel, and the TLS configuration.
[ tlsConfig: TLSConfig ]
# Read the bearer token value from the specified file and include it in the headers of the scrape request.
[ bearerTokenFile: string ]
# Read the bearer token from the specified K8S secret key. Note that the secret namespace must match the PodMonitor/ServiceMonitor.
[ bearerTokenSecret: string ]
# Handling conflicts when scraped labels conflict with labels added by the backend Prometheus.
# true: Keep the scraped labels and ignore the conflicting labels from the backend Prometheus.
# false: For conflicting labels, prefix the scraped label with 'exported_<original-label>' and add the labels added by the backend Prometheus.
[ honorLabels: bool | default = false ]
# Whether to use the timestamp generated on the target during the scrape.
# true: Use the timestamp on the target if available.
# false: Ignore the timestamp on the target.
[ honorTimestamps: bool | default = true ]
# Basic authentication credentials. Fill in the values of username/password from the corresponding K8S secret key. Note that the secret namespace must match the PodMonitor/ServiceMonitor.
[ basicAuth: BasicAuth ]
# Scrape the metrics from the target through a proxy server. Specify the address of the proxy server.
[ proxyUrl: string ]
# After scraping the data, rewrite the values of the labels on the target using the relabeling mechanism. Multiple relabel rules are executed in order.
# See explanation below for relabel_config
relabelings:
[ - <relabel_config> ...]
# Before writing the scraped data, rewrite the values of the corresponding labels on the target using the relabeling mechanism. Multiple relabel rules are executed in order.
# See explanation below for relabel_config
metricRelabelings:
[ - <relabel_config> ...]
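
As an illustration, an endpoint_config entry that scrapes an HTTPS endpoint every 15 seconds, authenticating with a bearer token stored in a Kubernetes Secret, might look like the following sketch. The port name, Secret name, and key are placeholder assumptions; note that in the Prometheus Operator CRD, bearerTokenSecret is a Secret key selector with name and key fields:

- port: metric-port            # port name, not the port number
  path: /metrics
  scheme: https
  interval: 15s
  scrapeTimeout: 10s
  tlsConfig:
    insecureSkipVerify: true   # skip certificate verification, for illustration only
  bearerTokenSecret:
    name: exporter-token       # placeholder Secret name (same namespace as the monitor)
    key: token                 # placeholder key within the Secret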

relabel_config

The configuration items are explained as follows:

# Specifies which of the original labels to use for relabeling; their values are concatenated using the configured separator.
# For PodMonitor/ServiceMonitor, the corresponding field is sourceLabels.
[ source_labels: '[' <labelname> [, ...] ']' ]
# Defines the character used to concatenate the values of the labels to be relabeled. Default is ';'.
[ separator: <string> | default = ; ]

# When the action is replace/hashmod, target_label specifies the name of the label to write.
# For PodMonitor/ServiceMonitor, the corresponding field is targetLabel.
[ target_label: <labelname> ]

# Regular expression used to match the values of the source labels.
[ regex: <regex> | default = (.*) ]

# Used when action is hashmod, it takes the modulus value based on the MD5 hash of the source label's value.
[ modulus: <int> ]

# Used when the action is replace; defines the value written when the regex matches. Capture groups from the regex (e.g. $1) can be referenced.
[ replacement: <string> | default = $1 ]

# The action performed on the regex match result. The available actions are as follows, with replace being the default:
# replace: If the regex matches, write the replacement value to the label named by target_label, adding the label if it does not exist.
# keep: Discard targets (or series) whose source label values do not match the regex.
# drop: Discard targets (or series) whose source label values match the regex.
# hashmod: Take the MD5 hash of the source labels' values modulo the value of modulus,
# and write the result to a new label named by target_label.
# labelmap: For every label name the regex matches, copy the label's value to a new label named by replacement (capture groups can be referenced).
# labeldrop: Delete labels whose names match the regex.
# labelkeep: Delete labels whose names do not match the regex.
[ action: <relabel_action> | default = replace ]
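
For example, the following sketch combines several actions in a native job's relabel_configs. The label names and regex values are illustrative assumptions:

relabel_configs:
  # keep: scrape only targets whose "app" label is redis-exporter
  - action: keep
    source_labels: [app]
    regex: redis-exporter
  # replace: join host and port into a new "endpoint" label, e.g. 10.0.0.1:9121
  - action: replace
    source_labels: [host, port]
    separator: ":"
    target_label: endpoint
  # labelmap: expose Kubernetes Pod labels as plain metric labels
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)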
