Parameters

The parent key for all of the following parameters is rook_ceph.

namespace

type

string

default

syn-rook-ceph-operator

The namespace in which the Rook Ceph operator and the CSI drivers are deployed.

ceph_cluster

type

dict

The configuration of the Ceph cluster to deploy. See the following sections for individual parameters nested under this key.

name

type

string

default

cluster

The name of the Ceph cluster object. Also used as part of the storageclass and volumesnapshotclass names.

namespace

type

string

default

syn-rook-ceph-${rook_ceph:ceph_cluster:name}

The namespace in which the Ceph cluster is deployed.

By default, the component deploys the Ceph cluster in a different namespace than the operator and the CSI drivers. However, the component also supports deploying the operator, CSI drivers and Ceph cluster in the same namespace.
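
For example, the operator, CSI drivers and Ceph cluster can be co-located by setting both namespaces to the same value (the namespace name below is illustrative):

parameters:
  rook_ceph:
    namespace: syn-rook-ceph
    ceph_cluster:
      namespace: syn-rook-ceph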

node_count

type

integer

default

3

The number of storage nodes (more precisely, the number of disks) that the Ceph cluster should expect.

The operator will deploy this many Ceph OSDs.

block_storage_class

type

string

default

localblock

The storage class to use for the block storage volumes backing the Ceph cluster. The storage class must support volumeMode=Block.

block_volume_size

type

K8s Quantity

default

1

By default, the component expects that pre-provisioned block storage volumes are used. If you deploy the component on a cluster which dynamically provisions volumes for the storage class configured in ceph_cluster.block_storage_class, set this value to the desired size of the disk for a single node of your Ceph cluster.
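
A minimal sketch for such a setup, assuming a dynamically provisioning CSI storage class exists (both values below are illustrative):

parameters:
  rook_ceph:
    ceph_cluster:
      block_storage_class: csi-ssd
      block_volume_size: 200Gi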

tune_fast_device_class

type

boolean

default

false

This parameter can be set to true to tune the Ceph cluster OSD parameters for SSDs (or better).

See the Rook.io Ceph cluster CRD documentation for a more detailed explanation.

osd_placement

type

dict

default

{}

Controls the placement of the OSD pods. Empty by default.

See Storage Class Device Sets for a more detailed explanation.
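
As an illustrative sketch, assuming the value follows Rook’s placement specification (fields such as tolerations or nodeAffinity), an additional toleration for the OSD pods could be configured like this (the taint key is hypothetical):

parameters:
  rook_ceph:
    ceph_cluster:
      osd_placement:
        tolerations:
          - key: dedicated-osd
            operator: Exists
            effect: NoSchedule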

osd_portable

type

boolean

default

false

Allows OSDs to move between nodes during failover (and initial setup). This requires a storage class that supports portability (such as the CSI-managed storageclasses on cloudscale.ch).

If this is false, OSDs are assigned to a node permanently. This is the default configuration for the component.

It’s recommended to set this parameter to true when deploying the component on CSI-managed PersistentVolumes.

config_override

type

dict

default
global:
  mon_osd_full_ratio: '0.85'
  mon_osd_backfillfull_ratio: '0.8'
  mon_osd_nearfull_ratio: '0.75'
  mon_data_avail_warn: '15'

Additional Ceph configurations which are rendered in ini style format by the component. Each key in the dict is translated into a section in the resulting ini style file containing the key’s value — which is expected to be a dict — as settings.

The component doesn’t validate the resulting configuration. Please be aware that Ceph may fail to start if you provide an invalid configuration file. See the Ceph documentation for a list of valid configuration sections and options.

Use strings to represent fractional values in config_override. Otherwise, the rendered values in the ini style file may suffer from floating point rounding issues. For example, if the fractional value 0.8 is given as a number in YAML, it would be rendered as 0.80000000000000004 in the resulting ini style file.

The default value is translated into the following ini style file:

[global]
mon_osd_full_ratio = 0.85
mon_osd_backfillfull_ratio = 0.8
mon_osd_nearfull_ratio = 0.75
mon_data_avail_warn = 15

The resulting ini style file is written to the ConfigMap rook-config-override in the Ceph cluster namespace.
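
For example, an additional section can be added in the hierarchy (the osd option shown here is purely illustrative):

parameters:
  rook_ceph:
    ceph_cluster:
      config_override:
        osd:
          osd_memory_target: '4294967296'

This would be rendered as an additional [osd] section in the resulting ini style file.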

rbd_enabled

type

boolean

default

true

This parameter controls whether the RBD CSI driver, its associated volumesnapshotclass and any configured CephBlockPool resources and associated storageclasses are provisioned.

The CephBlockPool resources are defined and configured in parameter storage_pools.rbd.

cephfs_enabled

type

boolean

default

false

This parameter controls whether the CephFS CSI driver, its associated volumesnapshotclass and any configured CephFilesystem resources and associated storageclasses are provisioned.

The CephFilesystem resources are defined and configured in parameter storage_pools.cephfs.

monitoring_enabled

type

boolean

default

true

This parameter controls whether the component enables monitoring on the CephCluster resource and sets up Ceph alert rules.

storageClassDeviceSets

type

dict

keys

Names of storageClassDeviceSet resources

values

Spec of storageClassDeviceSet resources

The component adds a storage class device set to the cluster object for every entry. The key is used as the name of the storage class device set and the value is used as the specification with the volumeClaimTemplates key converted from a dictionary to a list.

This means the following example configuration:

storageClassDeviceSets:
  foo:
    count: 3
    volumeClaimTemplates:
      default:
        spec:
          storageClassName: localblock
          volumeMode: Block
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi

This configuration would be converted to the following specification:

storageClassDeviceSets:
  - name: foo
    count: 3
    volumeClaimTemplates:
      - spec:
          storageClassName: localblock
          volumeMode: Block
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi

See Storage Class Device Sets for a more detailed explanation on how to configure them.

The default storage class device set is called cluster and can also be configured through this parameter.
Storage class device sets added by cephClusterSpec.storage.storageClassDeviceSets can’t be modified through this parameter.

storage_pools.rbd

type

dict

keys

Names of CephBlockPool resources

values

dicts with keys config, mount_options, and storage_class_config

This parameter configures CephBlockPool resources. The component creates exactly one storageclass and volumesnapshotclass per block pool.

By default the parameter holds the following configuration:

storagepool:
  config:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
  mount_options:
    discard: true
  storage_class_config:
    parameters:
      csi.storage.k8s.io/fstype: ext4
    allowVolumeExpansion: true

This configuration results in

  • A CephBlockPool named storagepool which is configured with 3 replicas distributed on different hosts

  • A storage class which creates PVs on this block pool, uses the ext4 filesystem, supports volume expansion and configures PVs to be mounted with -o discard.

  • A VolumeSnapshotClass associated with the storage class

See the Rook.io CephBlockPool CRD documentation for all possible configurations in key config.

The values in key storage_class_config are merged into the StorageClass resource.

The values in key mount_options are transformed into an array which is injected into the StorageClass resource in field mountOptions. Providing a key with value true in mount_options results in an array entry which just consists of the key. Providing a key with string value results in an array entry which consists of key=value. Providing a key with value false or null will result in the key not being added to the storage class’s mount options.

See the filesystem documentation for the set of supported mount options. For example, see the list of supported mount options for ext4 in the man page.
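
A sketch of the transformation: given the following mount_options value (the extra entries are purely illustrative ext4 options),

mount_options:
  discard: true
  errors: remount-ro
  nobarrier: false

the component would render the storage class field

mountOptions:
  - discard
  - errors=remount-ro

with the nobarrier entry dropped because its value is false.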

storage_pools.cephfs

type

dict

keys

Names of CephFilesystem resources

values

dicts with keys data_pools, mount_options, config and storage_class_config

This parameter configures CephFilesystem resources. The component creates exactly one storageclass and volumesnapshotclass per CephFS.

By default the parameter holds the following configuration:

fspool:
  data_pools:
    pool0:
      failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
      parameters:
        compression_mode: none
        target_size_ratio: '0.8'
  config:
    metadataPool:
      replicated:
        size: 3
        requireSafeReplicaSize: true
    parameters:
      compression_mode: none
      target_size_ratio: '0.2'
    # dataPools rendered from data_pools in Jsonnet
    preserveFilesystemOnDelete: true
    metadataServer:
      activeCount: 1
      activeStandby: true
      resources:
        requests:
          cpu: "1"
          memory: 4Gi
        limits:
          cpu: "1"
          memory: 4Gi
      # metadata server placement done in Jsonnet but can be
      # extended here
    mirroring:
      enabled: false
  mount_options: {}
  storage_class_config:
    allowVolumeExpansion: true

This configuration creates

  • One CephFilesystem resource named fspool. This CephFS instance is configured to have 3 replicas both for the metadata pool and its single data pool. By default, the CephFS instance is configured to assume that metadata will consume roughly 20% and data roughly 80% of the storage cluster.

  • A storage class which creates PVs on the CephFS instance and supports volume expansion.

  • A VolumeSnapshotClass associated with the storage class

CephFS doesn’t require mount option discard, and ceph-csi v3.9.0+ will fail to mount any CephFS volumes if the storage class is configured with mount option discard.

The key data_pools is provided to avoid having to manage a list of data pools directly in the hierarchy. The values of each key in data_pools are placed in the resulting CephFS resource’s field .spec.dataPools.

The contents of key config are used as the base value of the resulting resource’s .spec field. Note that data pools given in config in the hierarchy will be overwritten by the pools configured in data_pools.

The component creates a placement configuration for the metadata server (MDS) pods based on the values of parameters tolerations and node_selector. Users can override the default placement configuration by providing their own configuration in key config.metadataServer.placement.

See the Rook.io CephFilesystem CRD documentation for all possible configurations in key config.

The values in key storage_class_config are merged into the StorageClass resource for the CephFS instance.

The values in key mount_options are transformed into an array which is injected into the StorageClass resource in field mountOptions. Providing a key with value true in mount_options results in an array entry which just consists of the key. Providing a key with string value results in an array entry which consists of key=value. Providing a key with value false or null will result in the key not being added to the storage class’s mount options.

See the mount.ceph documentation for all possible CephFS mount options.

alerts

Configurations related to alerts.

ignoreNames

type

list

default
- CephPoolQuotaBytesNearExhaustion
- CephPoolQuotaBytesCriticallyExhausted

This parameter can be used to disable alerts provided by Rook.io. The component supports removing entries in this parameter by providing the entry prefixed with ~.

By default, the component disables the CephPoolQuota alerts, since the default configuration doesn’t configure any pool quotas.

However, if the quota alerts are wanted, they can be re-enabled by removing them from the parameter with the following configuration:

ignoreNames:
  - ~CephPoolQuotaBytesNearExhaustion
  - ~CephPoolQuotaBytesCriticallyExhausted

To allow transitioning to the new config structure, the component currently still respects ignored alerts in ceph_cluster.ignore_alerts.

patchRules

type

dict

default
CephClusterWarningState:
  for: 15m
CephOSDDiskNotResponding:
  for: 5m

This parameter allows users to patch alerts provided by Rook.io. The keys in the parameter correspond to the field alertname of the alert to patch. The component expects valid partial Prometheus alert rule objects as values.

The provided values aren’t validated, they’re applied to the corresponding upstream alert as-is.
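
As an illustrative sketch, the severity label of one upstream alert could be adjusted like this (the alert name and label value are examples, not recommendations):

parameters:
  rook_ceph:
    alerts:
      patchRules:
        CephMonQuorumAtRisk:
          labels:
            severity: warning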

additionalRules

type

dict

default

See class/defaults.yml

This parameter allows users to configure additional alerting and recording rules. All rules defined in this parameter will be added to rule group syn-rook-ceph-additional.rules.

For alerting rules, a runbook URL is injected if annotation runbook_url is not already set on the rule. The injected runbook URL is derived from the alert name using pattern https://hub.syn.tools/rook-ceph/runbooks/{alertname}.html.
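
A hypothetical sketch, assuming entries are keyed by rule name and hold ordinary Prometheus rule fields (check class/defaults.yml for the exact structure used by the defaults):

parameters:
  rook_ceph:
    alerts:
      additionalRules:
        SynCephOsdDown:
          expr: ceph_osd_up == 0
          for: 5m
          labels:
            severity: critical

Since this example doesn’t set a runbook_url annotation, the component would inject https://hub.syn.tools/rook-ceph/runbooks/SynCephOsdDown.html.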

node_selector

type

dict

default
node-role.kubernetes.io/storage: ''

The node selector (if applicable) for all the resources managed by the component.

tolerations

type

list

default
- key: storagenode
  operator: Exists

The tolerations (if applicable) for all the resources managed by the component.

The component assumes that nodes on which the deployments should be scheduled may be tainted with storagenode=True:NoSchedule.

images

type

dict

default

See class/defaults.yml on Github

This parameter allows selecting the Docker images to use for Rook.io, Ceph, and Ceph-CSI. Each image is specified using keys registry, image and tag. This structure allows easily injecting a registry mirror, if required.
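
For example, the Rook image could be pulled through an internal registry mirror by overriding only the registry key (the mirror URL is illustrative):

parameters:
  rook_ceph:
    images:
      rook:
        registry: registry.example.com/rook-mirror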

rook-ceph container image versions older than v1.8.0 aren’t supported.

charts

type

dict

default

See class/defaults.yml on Github

This parameter allows selecting the Helm chart version for the rook-ceph operator.

rook-ceph Helm chart versions older than v1.7.0 aren’t supported.

operator_helm_values

type

dict

default

See class/defaults.yml on Github

The Helm values to use when rendering the rook-ceph operator Helm chart.

A few Helm values are configured based on other component parameters by default:

  • The data in parameter images is used to set the image.repository, image.tag, and csi.cephcsi.image Helm values

  • The value of node_selector is used to set Helm value nodeSelector

  • The value of tolerations is used to set Helm value tolerations

  • The component ensures that hostpathRequiresPrivileged is enabled on OpenShift 4 regardless of the contents of the Helm value.

See the Rook.io docs for a full list of Helm values.
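
As an illustrative sketch, additional upstream Helm values can be set alongside the inherited ones; logLevel is assumed here to be a value exposed by the upstream chart:

parameters:
  rook_ceph:
    operator_helm_values:
      logLevel: DEBUG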

toolbox

type

dict

default
enabled: true
image: ${rook_ceph:images:rook}

The configuration for the Rook-Ceph toolbox deployment. This deployment provides an in-cluster shell to observe and administrate the Ceph cluster.

cephClusterSpec

type

dict

default

See class/defaults.yml on Github

The default configuration for the CephCluster resource. The value of this parameter is used as field .spec of the resulting resource.

Selected configurations of the Ceph cluster are inherited from other component parameters. If you overwrite those configurations in this parameter, the values provided in the "source" parameters won’t have an effect.

Inherited configurations

  • cephVersion.image is constructed from the data in parameter images.

  • placement.all.nodeAffinity is built from parameter node_selector. The component constructs the following value for the configuration:

    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions: (1)
            - key: NODE_SELECTOR_KEY
              operator: Exists
            ...
1 The component creates an entry in matchExpressions with key equal to the node selector key and operator=Exists for each key in parameter node_selector.
  • placement.all.tolerations is set to the value of parameter tolerations.

  • storage.storageClassDeviceSets is built from the values given in parameter ceph_cluster.storageClassDeviceSets. Users are encouraged to use the parameter ceph_cluster.storageClassDeviceSets to configure the Ceph cluster’s backing storage.

See the Rook.io CephCluster documentation for a full list of configuration parameters.
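
For example, a field which isn’t inherited from other parameters, such as the Ceph dashboard toggle, could be set directly in the spec (assuming dashboard.enabled is the desired CephCluster field):

parameters:
  rook_ceph:
    cephClusterSpec:
      dashboard:
        enabled: true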

Example configurations

Configure the component for SELinux-enabled cluster nodes

The component automatically configures the operator on OpenShift 4. However, on other Kubernetes distributions whose nodes use SELinux, users need to enable hostpathRequiresPrivileged in the operator’s Helm values.

parameters:
  rook_ceph:
    operator_helm_values:
      hostpathRequiresPrivileged: true (1)
1 The operator needs to be informed that deployments which use hostPath volume mounts need to run with privileged security context. This setting is required on any cluster which uses SELinux on the nodes.