Parameters

The parent key for all of the following parameters is openshift4_logging.

See the OpenShift docs for details.

namespace

type

string

default

openshift-logging

The namespace in which to install the operator.

version

type

string

default

5.8

The logging stack version to deploy. This parameter is used in the default values for parameters channel and alerts.

We recommend that you use this parameter to specify the logging stack version which the component should deploy. However, you can still set parameters channel and alerts directly.

See the OpenShift Logging life cycle documentation for supported versions of the logging stack. We recommend that you select a version of the logging stack that’s officially listed as compatible with your OpenShift version.
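For example, to pin the logging stack version in the hierarchy (the version shown is illustrative):

version: '5.9' (1)
1 With this value, channel defaults to stable-5.9 and alerts defaults to release-5.9.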

channel

type

string

default

stable-${openshift4_logging:version}

Channel of the operator subscription to use. If you specify the logging stack version through parameter version, you shouldn’t need to modify this parameter.

In OpenShift 4.7, Red Hat decoupled the logging stack version from the OpenShift version. The decoupled logging stack versions start at version 5.0. With version 5.1 of the logging stack, channels for specific minor versions were introduced.

Ideally, we'd simply default to the stable channel, since the OpenShift Marketplace operator always backs that channel with a logging stack version that's compatible with the cluster's OpenShift version. However, since there's potential for configuration changes between logging stack versions which need to be managed through the component, we default to the stable-5.x channel matching the version specified in parameter version.

See the OpenShift documentation for details.

alerts

type

string

default

release-${openshift4_logging:version}

Release version of the alerting rules. If you specify the logging stack version through parameter version, you shouldn’t need to modify this parameter.

The component uses this parameter to determine which version of the Elasticsearch Operator and fluentd alert rules to download.

Generally, the value for parameter alerts should match the value for parameter channel: if you specify channel: stable-5.5, you should use alerts: release-5.5.

Red Hat moved the YAML file containing the alert rules between cluster-logging versions 5.4 and 5.5. To ensure the component can deploy alert rules for version 5.5, we've implemented support for the different file locations through a lookup map in the component class. However, due to limitations of reclass, there's no way to specify a fallback value in case the lookup map doesn't contain a key. Because of this, the component no longer automatically supports new versions of the logging stack.
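If you do set these parameters directly, keep them in sync, for example:

channel: stable-5.5
alerts: release-5.5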

ignore_alerts

type

list

default

[]

This parameter can be used to disable alerts provided by the OpenShift cluster-logging-operator. The component supports removing entries from this parameter by providing the entry prefixed with ~.
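A minimal example (the alert names are illustrative): the first entry disables an alert, the second removes an entry that was added higher up in the hierarchy.

ignore_alerts:
  - FluentdQueueLengthIncreasing
  - ~CollectorNodeDown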

components.elasticsearch

type

dictionary

default
components:
  elasticsearch:
    enabled: false
    kibana_host: null
    predict_elasticsearch_storage_alert:
      enabled: true
      lookback_range: 72h
      predict_hours_from_now: 72
      threshold: 85
      for: 6h
      severity: warning

Configuration of the elasticsearch component.

Elasticsearch is deprecated.

components.elasticsearch.kibana_host

type

string

default

null

example

kibana.apps.cluster.syn.tools

Host name of the Kibana route.

components.elasticsearch.predict_elasticsearch_storage_alert

type

dictionary

example
components:
  elasticsearch:
    predict_elasticsearch_storage_alert:
      enabled: true
      lookback_range: 72h
      predict_hours_from_now: 72
      threshold: 85
      for: 6h
      severity: warning

Create an alert SYN_ElasticsearchExpectNodeToReachDiskWatermark if the storage allocated for Elasticsearch is predicted to reach the low storage watermark.

components.elasticsearch.predict_elasticsearch_storage_alert.enabled

type

boolean

default

true

Enable or disable this alert.

components.elasticsearch.predict_elasticsearch_storage_alert.lookback_range

type

prometheus duration

default

72h

How far to look back when calculating the prediction.

components.elasticsearch.predict_elasticsearch_storage_alert.predict_hours_from_now

type

number

default

72

How far into the future the prediction is calculated, in hours.

components.elasticsearch.predict_elasticsearch_storage_alert.threshold

type

number

default

85

The threshold for the alert, as a percentage of disk usage.

components.elasticsearch.predict_elasticsearch_storage_alert.for

type

prometheus duration

default

6h

The alert fires once the threshold has been reached for this duration.

components.elasticsearch.predict_elasticsearch_storage_alert.severity

type

string

default

warning

The severity of the fired alert.

components.lokistack

Configuration of the lokistack component. See subsections for supported keys.

components.lokistack.enabled

type

boolean

default

true

Whether to deploy the LokiStack on the cluster.

components.lokistack.clusterReaderLogAccess

type

list

default
- application
- infrastructure

A list of log categories (supported values are application, infrastructure and audit) which can be viewed by users who have cluster-reader permissions. Entries in the list can be removed in the hierarchy by prefixing them with ~.

We don’t grant access to audit logs to cluster-reader by default since audit logs can contain sensitive information.
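For example, to grant cluster-reader access to audit logs on a particular cluster and revoke it for application logs:

components:
  lokistack:
    clusterReaderLogAccess:
      - audit
      - ~application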

components.lokistack.logStore

type

dictionary

default
logStore:
  access_key_id: ''
  access_key_secret: ''
  endpoint: ''
  bucketnames: '${cluster:name}-logstore'

A dictionary holding the connection information for the S3 object storage used by the LokiStack.

See the OpenShift docs or the Loki Operator docs for available parameters.
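A minimal example (endpoint, credentials and bucket name are placeholder values; in practice you'd reference the credentials from your secrets management):

components:
  lokistack:
    logStore:
      endpoint: 'https://s3.example.org'
      access_key_id: '<access-key-id>'
      access_key_secret: '<secret-access-key>'
      bucketnames: 'my-cluster-logstore'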

components.lokistack.spec

type

dictionary

default
spec:
  size: 1x.extra-small
  storage:
    schemas:
      - version: v12
        effectiveDate: '2022-06-01'
    secret:
      type: s3
      name: loki-logstore
  storageClassName: ''
  tenants:
    mode: openshift-logging
  limits:
    global:
      ingestion:
        ingestionBurstSize: 9 (1)
        ingestionRate: 5 (2)
1 value in MiB (per push event)
2 value in MiB/s

A dictionary holding the .spec for the LokiStack resource.

The component configures fluentd as the default log forwarder. The default chunk size limit in fluentd is 8 MiB, which the burst size limit of 9 MiB accounts for. The ingestion rate defines the MiB/s limit for a tenant. OpenShift Logging uses the following three tenants:

  • application

  • audit

  • infrastructure

The max allowed volume for a tenant per day can be calculated with the following formula:

\$"volumePerDay"_"tenant" ("in GiB/s") = ("ingestionRate"_"tenant" ("in MiB/s") * 60 * 60 * 24) / 1024\$

The default of 5 MiB/s allows up to ~420 GiB of logs per day for a tenant.
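For example, with the default of 5 MiB/s: 5 * 60 * 60 * 24 / 1024 = 421.875 GiB, or roughly 420 GiB per day.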

See the OpenShift docs for available parameters and the Loki Operator docs for the available LokiStack spec fields.

operatorResources

type

dictionary

default

see defaults.yml

A dictionary holding the .spec.config.resources for OLM subscriptions maintained by this component.
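A minimal sketch of an override (the subscription key and resource values are illustrative; see defaults.yml for the actual keys):

operatorResources:
  clusterLogging:
    requests:
      cpu: 50m
      memory: 256Mi
    limits:
      memory: 512Mi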

clusterLogging

type

dictionary

default

see defaults.yml

A dictionary holding the .spec for cluster logging.

See the OpenShift docs for available parameters.

clusterLogForwarding

clusterLogForwarding.enabled

type

boolean

default

false

Enables log forwarding for the cluster.

clusterLogForwarding.forwarders

type

dictionary

default

{}

Each key in this dictionary holds the parameters for a .spec.outputs object.

See the OpenShift docs for available parameters.

clusterLogForwarding.namespace_groups

type

dictionary

default

{}

Customization for the logging of a specified group of namespaces.

Enabling forwarders will send the logs of the specified namespaces to a third-party log aggregator. For some log aggregation systems you may need to deploy a separate log forwarder.

Enabling JSON parsing for a namespace group only makes sense if the logs are forwarded to the cluster's default Elasticsearch instance. Therefore, 'default' is automatically added to the forwarders.

clusterLogForwarding:
  namespace_groups:
    my-group: (1)
      namespaces: (2)
        - my-namespace
      forwarders: (3)
        - splunk-forwarder
      json: true (4)
      detectMultilineErrors: true (5)
1 Name of the namespace group.
2 List of namespaces.
3 List of forwarders (defined in clusterLogForwarding.forwarders).
4 Enable json logging only for defined namespaces.
5 Enable detecting multiline errors for defined namespaces.

clusterLogForwarding.application_logs

type

dictionary

default

{}

Customization for the logging of all applications.

Enabling forwarders will send the logs of all namespaces to a third-party log aggregator. For some log aggregation systems you may need to deploy a separate log forwarder.

clusterLogForwarding:
  application_logs:
    forwarders: (1)
      - splunk-forwarder
    json: true (2)
    detectMultilineErrors: true (3)
1 List of forwarders (defined in clusterLogForwarding.forwarders).
2 Enable json logging for all applications.
3 Enable detecting multiline errors for all applications.

clusterLogForwarding.infrastructure_logs

type

dictionary

default
clusterLogForwarding:
  infrastructure_logs:
    enabled: true

Customization for the logging of openshift*, kube*, or default projects.

Enabled by default.

Enabling forwarders will send the infrastructure logs to a third-party log aggregator. For some log aggregation systems you may need to deploy a separate log forwarder.

clusterLogForwarding:
  infrastructure_logs:
    forwarders: (1)
      - splunk-forwarder
    json: true (2)
1 List of forwarders (defined in clusterLogForwarding.forwarders).
2 Enable json logging for infrastructure logs.

clusterLogForwarding.audit_logs

type

dictionary

default
clusterLogForwarding:
  audit_logs:
    enabled: false

Customization for the logging of audit logs.

Disabled by default.

Enabling forwarders will send the audit logs to a third-party log aggregator. For some log aggregation systems you may need to deploy a separate log forwarder.

clusterLogForwarding:
  audit_logs:
    forwarders: (1)
      - splunk-forwarder
    json: true (2)
1 List of forwarders (defined in clusterLogForwarding.forwarders).
2 Enable json logging for audit logs.

clusterLogForwarding.json

type

dictionary

default

see below

Setting json.enabled is required for json parsing to be available. You need to additionally enable it in clusterLogForwarding.application_logs or clusterLogForwarding.namespace_groups, based on your needs, to actually parse the logs.

clusterLogForwarding:
  json:
    enabled: false (1)
    typekey: 'kubernetes.labels.logFormat' (2)
    typename: 'nologformat' (3)
1 By default JSON parsing is disabled.
2 The value of that field, if present, is used to construct the index name.
3 If typekey isn’t set or its key isn’t present, the value of this field is used to construct the index name.

See the OpenShift docs for a detailed explanation.

Example

clusterLogging:
  logStore:
    retentionPolicy:
      application:
        maxAge: 15d
    elasticsearch:
      nodeCount: 5

Forward all application logs to a third-party aggregator

clusterLogForwarding:
  enabled: true
  forwarders:
    splunk-forwarder:
      secret:
        name: splunk-forwarder
      type: fluentdForward
      url: tls://splunk-forwarder:24224
  application_logs:
    forwarders:
      - splunk-forwarder

Forward logs of certain namespaces to a third-party aggregator

clusterLogForwarding:
  enabled: true
  forwarders:
    splunk-forwarder:
      secret:
        name: splunk-forwarder
      type: fluentdForward
      url: tls://splunk-forwarder:24224
  namespace_groups:
    my-group:
      namespaces:
        - my-namespace
      forwarders:
        - splunk-forwarder

Enable JSON parsing for all application logs

clusterLogForwarding:
  enabled: true
  application_logs:
    json: true
  json:
    enabled: true

Enable JSON parsing for certain namespaces

clusterLogForwarding:
  enabled: true
  namespace_groups:
    my-group:
      namespaces:
        - my-namespace
      json: true
  json:
    enabled: true