Parameters

The parent key for all of the following parameters is mimir.

The component supports multi-instantiation.

The underlying Helm chart can be found here.

namespace.name

type

string

default

syn-mimir

The namespace in which to deploy this component.

namespace.create

type

boolean

default

true

Whether to create the namespace given by namespace.name.

namespace.metadata

type

dict

default

{}

example

{"labels": {"organization": "bar"}}

Additional metadata to add to the namespace.

charts

type

dict

default
mimir-distributed:
  source: https://grafana.github.io/helm-charts
  version: 'v6.0.6'
This version of the component does not support Helm charts < v6.x!

Holds the reference to the used version of the charts. See class/defaults.yml for the current version.

images

type

dict

default
images:
  mimir:
    registry: docker.io
    repository: grafana/mimir
  memcached:
    registry: docker.io
    repository: library/memcached
  memcachedExporter:
    registry: docker.io
    repository: prom/memcached-exporter
  nginx:
    registry: docker.io
    repository: nginxinc/nginx-unprivileged
example
images:
  mimir:
    registry: my-docker-cache.example.org
    tag: latest
  memcached:
    tag: latest
  memcachedExporter:
    tag: latest
  nginx:
    tag: latest

Configures the image registry, repository and tag.

If the tag parameter is not set, the one from the Helm chart will be used.

preset

type

string

default

small

Choose a preset for scaling the Mimir components.

Available options: none, extra-small, small, large.
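For example, switching an instance to the large preset is a one-line override (all parameters live under the mimir key, as noted above):

mimir:
  preset: large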

components

Configuration of the different Mimir components.

The values of each component reflect the corresponding sections in the Helm charts values.yaml file.

Read path: querier, queryFrontend, queryScheduler, storeGateway. Write path: distributor, ingester. Backend: compactor, rolloutOperator. Ingress: nginx, gateway. Optional: alertmanager, overridesExporter, ruler.
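As a sketch, the optional components can be toggled individually; for example, disabling the ruler and the Alertmanager while leaving everything else at its defaults:

mimir:
  components:
    ruler:
      enabled: false
    alertmanager:
      enabled: false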

components.querier

type

dict

default
components:
  querier:
    enabled: true

The querier is a stateless component that evaluates PromQL expressions by fetching time series and labels on the read path.

The querier uses the store-gateway component to query the long-term storage and the ingester component to query recently written data.

components.queryFrontend

type

dict

default
components:
  queryFrontend:
    enabled: true

The query-frontend is a stateless component that provides a Prometheus compatible API with a number of features to accelerate the read path. The query-frontend is the primary entry point for the read path of Mimir. The query-scheduler and queriers are required within the cluster to execute the queries.

We recommend that you run at least two query-frontend replicas for high-availability reasons.

components.queryScheduler

type

dict

default
components:
  queryScheduler:
    enabled: true

The query-scheduler is a stateless component that retains a queue of queries to execute, and distributes the workload to available queriers.

components.storeGateway

type

dict

default
components:
  storeGateway:
    enabled: true

The store-gateway component, which is stateful, queries blocks from long-term storage. On the read path, the querier and the ruler use the store-gateway when handling the query, whether the query comes from a user or from when a rule is being evaluated.

components.distributor

type

dict

default
components:
  distributor:
    enabled: true

The distributor is a stateless component that acts as the entry point for the Grafana Mimir write path. It receives incoming write requests containing time series data, validates the data for correctness, enforces tenant-specific limits, and then ingests the data into Mimir.

components.ingester

type

dict

default
components:
  ingester:
    enabled: true

The ingester is a stateful component that processes the most recently ingested samples and makes them available for querying. Queriers read recent data from ingesters and older data from long-term object storage via store-gateways.

components.kafka

type

dict

default
components:
  kafka:
    enabled: false

Grafana Mimir supports using Kafka as the first layer of ingestion in the ingest storage architecture. This configuration allows for scalable, decoupled ingestion that separates write and read paths to improve performance and resilience.

components.gateway

type

dict

default
components:
  gateway:
    enabled: true

The Mimir unified gateway is a critical component for query, write, and alert paths. It improves performance and simplifies deployments by acting as a single entry point for all Mimir requests.

components.compactor

type

dict

default
components:
  compactor:
    enabled: true

The compactor increases query performance and reduces long-term storage usage by combining blocks.

It compacts multiple blocks of a given tenant into a single, optimized larger block. This deduplicates chunks and reduces the size of the index, resulting in reduced storage costs. Querying fewer blocks is faster, so it also increases query speed.

components.alertmanager

type

dict

default
components:
  alertmanager:
    enabled: true

The Mimir Alertmanager is an optional component that accepts alert notifications from the Mimir ruler.

components.overridesExporter

type

dict

default
components:
  overridesExporter:
    enabled: true

Grafana Mimir supports applying overrides on a per-tenant basis. A number of overrides configure limits that prevent a single tenant from using too many resources. The overrides-exporter component exposes limits as Prometheus metrics so that operators can understand how close tenants are to their limits.

components.ruler

type

dict

default
components:
  ruler:
    enabled: true

The ruler is an optional component that evaluates PromQL expressions defined in recording and alerting rules. Each tenant has a set of recording and alerting rules and can group those rules into namespaces.

caches

Configuration of the cache components of Mimir. Caching is optional, but highly recommended in a production environment.

The values of each component reflect the corresponding sections in the Helm charts values.yaml file.

caches.chunks

type

dict

default
caches:
  chunks:
    enabled: true

The store-gateway can also use a cache to store chunks that are fetched from long-term storage. Chunks contain actual samples, and can be reused if a query hits the same series for the same time range.

caches.index

type

dict

default
caches:
  index:
    enabled: true

The store-gateway can use the index cache to accelerate series and label lookups from block indexes.

caches.metadata

type

dict

default
caches:
  metadata:
    enabled: true

Using the metadata cache reduces the number of API calls to long-term storage and prevents those calls from scaling linearly with the number of querier and store-gateway replicas.

caches.results

type

dict

default
caches:
  results:
    enabled: true

The query-frontend uses the results cache to store query results, so repeated or overlapping queries can be served from cache instead of being re-executed.

global

Configure global settings.

global.nodeSelector

type

dict

default
global:
  nodeSelector: {}
example
global:
  nodeSelector:
    appuio.io/node-class: plus

Node selector configuration which is used for each component’s nodeSelector field in parameter helm_values.

This value is used verbatim as a Kubernetes node selector.

global.zoneAwareReplication

type

dict

default
global:
  zoneAwareReplication:
    enabled: false
    topologyKey: 'kubernetes.io/hostname'
example
global:
  zoneAwareReplication:
    enabled: true
    zones:
      - name: zone-a
        nodeSelector:
          topology.kubernetes.io/zone: 'zone-a'
      - name: zone-b
        nodeSelector:
          topology.kubernetes.io/zone: 'zone-b'
      - name: zone-c
        nodeSelector:
          topology.kubernetes.io/zone: 'zone-c'

Zone-aware replication is the replication of data across failure domains. Zone-aware replication helps to avoid data loss during a domain outage. Grafana Mimir defines failure domains as zones.

networkPolicy

Configure the NetworkPolicy if necessary.

networkPolicy.enabled

type

boolean

default
networkPolicy:
  enabled: true

Enables or disables NetworkPolicy.

The NetworkPolicy will only be deployed if it is enabled and has at least one entry in allowedNamespaces.

networkPolicy.exposedComponents

type

list

default
networkPolicy:
  exposedComponents:
    - query-frontend
    - gateway
example
networkPolicy:
  exposedComponents:
    - ~query-frontend

Define what components this NetworkPolicy should allow access to.

The components prefixed with a tilde ~ are removed from the resulting list.

networkPolicy.allowedNamespaces

type

list

default
networkPolicy:
  allowedNamespaces: []
example
networkPolicy:
  allowedNamespaces:
    - vshn-grafana

Define the namespaces that should be able to access this instance.

The namespaces prefixed with a tilde ~ are removed from the resulting list.
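Because tilde-prefixed entries are removed from the resulting list, a namespace allowed in a higher-level configuration class can be revoked further down the hierarchy. A hypothetical sketch (namespace names are placeholders):

# e.g. in a global defaults class
mimir:
  networkPolicy:
    allowedNamespaces:
      - vshn-grafana
      - legacy-monitoring

# e.g. in a cluster-level override
mimir:
  networkPolicy:
    allowedNamespaces:
      - ~legacy-monitoring

# resulting allowedNamespaces: [vshn-grafana]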

ingress

Ingress configuration

This will configure ingress for either nginx or gateway. If both are enabled, nginx takes precedence.

ingress.enabled

type

boolean

default
ingress:
  enabled: false

Enables ingress.

ingress.tls.enabled

type

boolean

default
ingress:
  tls:
    enabled: true

Enables using TLS for ingress.

ingress.tls.clusterIssuer

type

string

default
ingress:
  tls:
    clusterIssuer: letsencrypt-production

Configures the annotation for the cert-manager ClusterIssuer, this component assumes cert-manager is installed.

ingress.tls.key and ingress.tls.cert

type

string

default
ingress:
  tls:
    key: null
    cert: null
example
ingress:
  tls:
    clusterIssuer: null
    key: |
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----
    cert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----

Configures private key and certificate for TLS. The secret will automatically be created.

This requires ingress.tls.clusterIssuer to be null. If both are enabled, ingress.tls.clusterIssuer takes precedence.

ingress.url

type

dict

default
ingress:
  url: ''

The URL for which the ingress is configured.

ingress.annotations and ingress.labels

type

dict

default
ingress:
  annotations: {}
  labels: {}
example
ingress:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging

Add custom annotations and labels.

basicAuth

Configures basic authentication for nginx.

basicAuth.enabled

type

boolean

default
basicAuth:
  enabled: false

Enables basic authentication for nginx.

basicAuth.htpasswd

type

string

default
basicAuth:
  htpasswd: '?{vaultkv:${cluster:tenant}/${cluster:name}/${_instance}/htpasswd}'

The content of the .htpasswd file.

If you set basicAuth.htpasswd: null, you can use basicAuth.existingSecret to include an existing secret.
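A minimal sketch of switching from the Vault-provided htpasswd to an existing secret (the secret name mimir-htpasswd is a placeholder):

mimir:
  basicAuth:
    enabled: true
    htpasswd: null
    existingSecret: mimir-htpasswd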

config

General configuration options for Mimir.

config.tenantFederation

type

boolean

default
config:
  tenantFederation: false

Mimir ruler tenant federation is a Mimir feature enabling recording and alerting rules to query and aggregate data across multiple tenant isolation boundaries.

config.haTracker

type

boolean

default
config:
  haTracker: false

The Mimir HA Tracker is a component of the Mimir distributor that deduplicates incoming metrics from multiple Prometheus/Alloy replicas, ensuring identical time series sent by HA pairs are only stored once.

config.haLabels

type

dict

default
config:
  haLabels:
    cluster: cluster_id
    replica: prometheus_replica

The labels by which incoming metrics are deduplicated.
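For deduplication to work, each Prometheus (or Alloy) replica in an HA pair must attach matching external labels: the cluster label identical across replicas, the replica label unique per replica. A hypothetical Prometheus-side configuration matching the defaults above (values are placeholders):

# prometheus.yml on each HA replica
global:
  external_labels:
    cluster_id: prod-cluster       # same on all replicas of the pair
    prometheus_replica: replica-1  # unique per replica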

config.haStore

type

dict

default
config:
  haStore:
    type: etcd
example
config:
  haStore:
    type: etcd
    endpoints:
      - https://mimir-kv-etcd-0.mimir-kv-etcd-headless.mimir-kv:2379
      - https://mimir-kv-etcd-1.mimir-kv-etcd-headless.mimir-kv:2379
      - https://mimir-kv-etcd-2.mimir-kv-etcd-headless.mimir-kv:2379
    tls_enabled: true
    tls_insecure_skip_verify: true
    # (swi): feels wrong but we use the etcd exclusively for mimir and don't currently have a way to provision other users
    username: root
    password: '?{vaultkv:${cluster:tenant}/${cluster:name}/mimir-kv/etcd_root_password}'

The HA tracker requires a key-value (KV) store to coordinate which replica is currently elected. See the Mimir documentation for details.

The key-value store has to be set up separately!

secrets

type

dict

default
secrets: {}

A dict of secrets to create in the namespace. The key is the name of the secret; the value is its content. Each value must be a dict with a stringData key holding the key/value pairs to add to the secret.
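A minimal sketch of the expected structure (the secret name and keys are placeholders; the Vault references follow the same pattern used elsewhere in this component):

mimir:
  secrets:
    mimir-bucket-credentials:
      stringData:
        access_key_id: ?{vaultkv:${cluster:tenant}/${cluster:name}/mimir/access_key_id}
        secret_access_key: ?{vaultkv:${cluster:tenant}/${cluster:name}/mimir/secret_access_key}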

monitoring

type

boolean

default
monitoring: true

Enable the service monitors, rules, and alerts from the Helm chart.

alerts

Configurations related to alerts.

alerts.patchRules

type

dict

default

See class/defaults.yml

example
alerts:
  patchRules:
    ignoreNames:
      - MimirContinuousTestNotRunningOnWrites
    patches:
      MimirInconsistentRuntimeConfig:
        for: 15m

This parameter allows users to patch or remove alerts provided by the upstream Mimir chart.

The values in the ignoreNames parameter correspond to the field alert of the alert to ignore.

The keys in the patches parameter correspond to the field alert of the alert to patch. The component expects valid partial Prometheus alert rule objects as values.

The provided values aren’t validated, they’re applied to the corresponding upstream alert as-is.

alerts.additionalRules

type

dict

default

See class/defaults.yml

example
alerts:
  additionalRules:
    "alert:CustomTestAlert":
      expr: vector(1) == 0
      for: 1h
      annotations:
        summary: Test alert
      labels:
        severity: warning

This parameter allows users to configure additional alerting and recording rules. All rules defined in this parameter will be added to rule group mimir-custom.rules.

helm_values

type

dict

default

see class/defaults.yml

Holds the values for the helm chart.

The defaults are close to the upstream defaults, with HA enabled, and the bucket secret added. Replicas and resource limits/requests are taken from the small.yaml recommended production values.

The read path, especially the store-gateway, doesn't have HA enabled. A failure there causes no data loss, but it does impact query performance.

See Planning capacity, small.yaml, and large.yaml for upstream sizing recommendations.

Components describes what the components enabled by this Helm chart do.

Ingester failure and data loss describes the implications of a HA setup.

Example

namespace:
  name: example-mimir
  create: true
  metadata:
    labels:
      example.com/organization: example

secrets:
  mimir-nginx-htpasswd:
    stringData:
      .htpasswd: "?{vaultkv:${cluster:tenant}/${cluster:name}/example-mimir/htpasswd}"

helm_values:
  gateway:
    basicAuth:
      enabled: true
      existingSecret: mimir-nginx-htpasswd
    ingress:
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-production
      enabled: true
      hosts:
        - host: mimir.example.com
          paths:
            - path: /
              pathType: Prefix
      tls:
        - secretName: example-mimir-tls
          hosts:
            - mimir.example.com