Parameters
The parent key for all of the following parameters is rook_ceph.
namespace
type | string
default |

The namespace in which the Rook Ceph operator and the CSI drivers are deployed.
ceph_cluster
type | dict
The configuration of the Ceph cluster to deploy. See the following sections for individual parameters nested under this key.
name
type | string
default |
The name of the Ceph cluster object. Also used as part of the storageclass and volumesnapshotclass names.
namespace
type | string
default |
The namespace in which the Ceph cluster is deployed.
By default, the component deploys the Ceph cluster in a different namespace than the operator and the CSI drivers. However, the component also supports deploying the operator, CSI drivers and Ceph cluster in the same namespace.
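For example, a minimal sketch of a single-namespace deployment (the namespace name is illustrative):

parameters:
  rook_ceph:
    namespace: syn-rook-ceph      # operator and CSI drivers
    ceph_cluster:
      namespace: syn-rook-ceph    # same namespace for the Ceph cluster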
node_count
type | integer
default |
The number of storage nodes (strictly speaking, the number of disks) that the Ceph cluster should expect.
The operator will deploy this many Ceph OSDs.
block_storage_class
type | string
default |
The storage class to use for the block storage volumes backing the Ceph cluster.
The storage class must support volumeMode=Block.
block_volume_size
type |
default |
By default, the component expects that pre-provisioned block storage volumes are used.
If you deploy the component on a cluster which dynamically provisions volumes for the storage class configured in ceph_cluster.block_storage_class, set this value to the desired size of the disk for a single node of your Ceph cluster.
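For example, a hedged sketch for a cluster with dynamic provisioning (the storage class name and size are illustrative):

parameters:
  rook_ceph:
    ceph_cluster:
      block_storage_class: ssd-block   # illustrative storage class which supports volumeMode=Block
      block_volume_size: 200Gi         # desired disk size for a single storage node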
tune_fast_device_class
type | boolean
default |
This parameter can be set to true to tune the Ceph cluster OSD parameters for SSDs (or better).
See the Rook.io Ceph cluster CRD documentation for a more detailed explanation.
osd_placement
type | dict
default | {}
Controls the placement of the OSD pods; empty by default.
See Storage Class Device Sets for a more detailed explanation.
osd_portable
type | boolean
default |
Allows OSDs to move between nodes during failover (and initial setup). This requires a storage class that supports portability (such as the CSI-managed storageclasses on cloudscale.ch).
If this is false, OSDs are assigned to a node permanently.
This is the default configuration for the component.
It’s recommended to set this parameter to true when deploying the component on CSI-managed PersistentVolumes.
config_override
type | dict
default |
Additional Ceph configurations which are rendered in ini style format by the component. Each key in the dict is translated into a section in the resulting ini style file containing the key’s value — which is expected to be a dict — as settings.
The component doesn’t validate the resulting configuration. Please be aware that Ceph may fail to start if you provide an invalid configuration file. See the Ceph documentation for a list of valid configuration sections and options.
Use strings to represent fractional values in config_override.
The default value is translated into the following ini style file:
[global]
mon_osd_full_ratio = 0.85
mon_osd_backfillfull_ratio = 0.8
mon_osd_nearfull_ratio = 0.75
mon_data_avail_warn = 15
The resulting ini style file is written to the ConfigMap rook-config-override in the Ceph cluster namespace.
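As a hedged sketch, additional sections and overrides could be configured like this (the option values are illustrative; fractional values are given as strings as noted above):

parameters:
  rook_ceph:
    ceph_cluster:
      config_override:
        global:
          mon_osd_nearfull_ratio: '0.7'     # overrides the default shown above
        osd:
          osd_memory_target: '4294967296'   # rendered into an [osd] section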
rbd_enabled
type | boolean
default |
This parameter controls whether the RBD CSI driver, its associated volumesnapshotclass and any configured CephBlockPool resources and associated storageclasses are provisioned.
The CephBlockPool resources are defined and configured in parameter storage_pools.rbd.
cephfs_enabled
type | boolean
default |
This parameter controls whether the CephFS CSI driver, its associated volumesnapshotclass and any configured CephFilesystem resources and associated storageclasses are provisioned.
The CephFilesystem resources are defined and configured in parameter storage_pools.cephfs.
monitoring_enabled
type | boolean
default |
This parameter controls whether the component enables monitoring on the CephCluster resource and sets up Ceph alert rules.
storageClassDeviceSets
type | dict
keys | Names of the storage class device sets
values | Spec of the corresponding storage class device set
The component adds a storage class device set to the cluster object for every entry.
The key is used as the name of the storage class device set and the value is used as the specification, with the volumeClaimTemplates key converted from a dictionary to a list.
This means the following example configuration:
storageClassDeviceSets:
  foo:
    count: 3
    volumeClaimTemplates:
      default:
        spec:
          storageClassName: localblock
          volumeMode: Block
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi
This configuration would be converted to the following specification:
storageClassDeviceSets:
  - name: foo
    count: 3
    volumeClaimTemplates:
      - spec:
          storageClassName: localblock
          volumeMode: Block
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 100Gi
See Storage Class Device Sets for a more detailed explanation on how to configure them.
The default storage class device set is called cluster and can also be configured through this parameter.
Storage class device sets added by cephClusterSpec.storage.storageClassDeviceSets can’t be modified through this parameter.
storage_pools.rbd
type | dict
keys | Names of the CephBlockPool resources to create
values | dicts with keys config, mount_options, storage_class_config and extra_storage_classes
In this parameter CephBlockPool resources are configured.
The component creates one storageclass and volumesnapshotclass per block pool.
The component will create additional storage classes if field extra_storage_classes is provided.
By default the parameter holds the following configuration:
storagepool:
  config:
    failureDomain: host
    replicated:
      size: 3
      requireSafeReplicaSize: true
  mount_options:
    discard: true
  storage_class_config:
    parameters:
      csi.storage.k8s.io/fstype: ext4
    allowVolumeExpansion: true
This configuration results in:

- A CephBlockPool named storagepool which is configured with 3 replicas distributed on different hosts
- A storage class which creates PVs on this block pool, uses the ext4 filesystem, supports volume expansion and configures PVs to be mounted with -o discard.
- A VolumeSnapshotClass associated with the storage class
See the Rook.io CephBlockPool CRD documentation for all possible configurations in key config.
The values in key storage_class_config are merged into the StorageClass resource.
The values in key mount_options are transformed into an array which is injected into the StorageClass resource in field mountOptions.
Providing a key with value true in mount_options results in an array entry which just consists of the key.
Providing a key with string value results in an array entry which consists of key=value.
Providing a key with value false or null will result in the key not being added to the storage class’s mount options.
See the filesystem documentation for the set of supported mount options.
For example, see the list of supported mount options for ext4 in the man page.
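As a hedged sketch of these rules (the pool name and option values are illustrative, and the snippet omits the surrounding parameter hierarchy):

storage_pools:
  rbd:
    storagepool:
      mount_options:
        discard: true    # rendered as mountOptions entry "discard"
        commit: '30'     # rendered as mountOptions entry "commit=30"
        noatime: false   # omitted from mountOptions

This would result in mountOptions entries discard and commit=30 on the generated storage class.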
storage_pools.rbd.<poolname>.extra_storage_classes
type | dictionary
default |
storagepool:
  extra_storage_classes:
    small-files:
      parameters:
        csi.storage.k8s.io/fstype: ext4 (1)
        mkfsOptions: -m0 -Enodiscard,lazy_itable_init=1,lazy_journal_init=1 -i1024 (2)
1 | When providing mkfsOptions, it makes sense to explicitly specify the fstype, since mismatched fstype and mkfsOptions will likely result in errors.
Note that providing fstype isn’t strictly necessary if the same fstype is set in storage_pools.rbd.<poolname>.storage_class_config.
2 | -m0 -Enodiscard,lazy_itable_init=1,lazy_journal_init=1 are the default mkfsOptions for ext4 in ceph-csi.
This parameter allows users to configure additional storage classes for an RBD blockpool. The component will generate a storage class for each key-value pair in the parameter.
The key will be used as a suffix for the storage class name.
The resulting storage class name will have the form rbd-<poolname>-<ceph cluster name>-<key>.
The component will use the configurations provided in storage_pools.rbd.<poolname>.mount_options and storage_pools.rbd.<poolname>.storage_class_config as the base for any storage classes defined here.
The provided value will be merged into the base storage class definition.
storage_pools.cephfs
type | dict
keys | Names of the CephFilesystem resources to create
values | dicts with keys config, data_pools, mount_options and storage_class_config
In this parameter CephFilesystem resources are configured.
The component creates exactly one storageclass and volumesnapshotclass per CephFS.
By default the parameter holds the following configuration:
fspool:
  data_pools:
    pool0:
      failureDomain: host
      replicated:
        size: 3
        requireSafeReplicaSize: true
      parameters:
        compression_mode: none
        target_size_ratio: '0.8'
  config:
    metadataPool:
      replicated:
        size: 3
        requireSafeReplicaSize: true
      parameters:
        compression_mode: none
        target_size_ratio: '0.2'
    # dataPools rendered from data_pools in Jsonnet
    preserveFilesystemOnDelete: true
    metadataServer:
      activeCount: 1
      activeStandby: true
      resources:
        requests:
          cpu: "1"
          memory: 4Gi
        limits:
          cpu: "1"
          memory: 4Gi
      # metadata server placement done in Jsonnet but can be
      # extended here
    mirroring:
      enabled: false
  mount_options: {}
  storage_class_config:
    allowVolumeExpansion: true
This configuration creates:

- One CephFilesystem resource named fspool. This CephFS instance is configured to have 3 replicas both for the metadata pool and its single data pool. By default, the CephFS instance is configured to assume that metadata will consume roughly 20% and data roughly 80% of the storage cluster.
- A storage class which creates PVs on the CephFS instance and supports volume expansion.
- A VolumeSnapshotClass associated with the storage class
CephFS doesn’t require mount option discard, and ceph-csi v3.9.0+ will fail to mount any CephFS volumes if the storage class is configured with mount option discard.
The key data_pools is provided to avoid having to manage a list of data pools directly in the hierarchy.
The values of each key in data_pools are placed in the resulting CephFS resource’s field .spec.dataPools.
The contents of key config are used as the base value of the resulting resource’s .spec field.
Note that data pools given in config in the hierarchy will be overwritten by the pools configured in data_pools.
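As a hedged sketch, an additional data pool could be configured alongside the default one (the pool name and settings are illustrative, and the snippet omits the surrounding parameter hierarchy):

storage_pools:
  cephfs:
    fspool:
      data_pools:
        pool1:                    # illustrative second data pool
          failureDomain: host
          replicated:
            size: 3
            requireSafeReplicaSize: true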
The component creates a placement configuration for the metadata server (MDS) pods based on the values of parameters tolerations and node_selector.
Users can override the default placement configuration by providing their own configuration in key config.metadataServer.placement.
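As a hedged sketch, the default MDS placement could be overridden like this (the toleration is illustrative; any valid Rook placement configuration can be used):

storage_pools:
  cephfs:
    fspool:
      config:
        metadataServer:
          placement:
            tolerations:
              - key: storagenode    # illustrative toleration
                operator: Exists
                effect: NoSchedule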
See the Rook.io CephFilesystem CRD documentation for all possible configurations in key config.
The values in key storage_class_config are merged into the StorageClass resource which is created for the CephFS instance.
The values in key mount_options are transformed into an array which is injected into the StorageClass resource in field mountOptions.
Providing a key with value true in mount_options results in an array entry which just consists of the key.
Providing a key with string value results in an array entry which consists of key=value.
Providing a key with value false or null will result in the key not being added to the storage class’s mount options.
See the mount.ceph documentation for all possible CephFS mount options.
alerts
Configurations related to alerts.
ignoreNames
type | list
default |
This parameter can be used to disable alerts provided by Rook.io.
The component supports removing entries in this parameter by providing the entry prefixed with ~.
By default, the component disables the CephPoolQuota alerts, since the default configuration doesn’t configure any pool quotas.
However, if the quota alerts are wanted, they can be re-enabled by removing them from the parameter with the following configuration:
ignore_alerts:
- ~CephPoolQuotaBytesNearExhaustion
- ~CephPoolQuotaBytesCriticallyExhausted
To allow transitioning to the new config structure, the component currently still respects ignored alerts in ceph_cluster.ignore_alerts.
patchRules
type | dict
default |
This parameter allows users to patch alerts provided by Rook.io.
The keys in the parameter correspond to the field alertname of the alert to patch.
The component expects valid partial Prometheus alert rule objects as values.
The provided values aren’t validated, they’re applied to the corresponding upstream alert as-is.
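As a hedged sketch, an upstream alert could be patched like this (the alert name and patched fields are illustrative):

alerts:
  patchRules:
    CephOSDNearFull:          # alertname of the upstream rule to patch (illustrative)
      for: 30m                # illustrative: extend the pending period
      labels:
        severity: warning     # illustrative: lower the severity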
additionalRules
type | dict
default |
This parameter allows users to configure additional alerting and recording rules.
All rules defined in this parameter will be added to rule group syn-rook-ceph-additional.rules.
For alerting rules, a runbook URL is injected if annotation runbook_url is not already set on the rule.
The injected runbook URL is derived from the alert name using pattern https://hub.syn.tools/rook-ceph/runbooks/{alertname}.html.
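As a hedged sketch, an additional alerting rule could look like this (the rule name, expression and labels are illustrative, and the key is assumed to become the alert name):

alerts:
  additionalRules:
    SynCephClusterAlmostFull:    # illustrative key, assumed to be used as the alert name
      expr: ceph_cluster_total_used_bytes / ceph_cluster_total_bytes > 0.8
      for: 30m
      labels:
        severity: warning
      annotations:
        summary: 'Ceph cluster uses more than 80% of its raw capacity'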
node_selector
type | dict
default |
The node selector (if applicable) for all the resources managed by the component.
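For example, a minimal sketch of a node selector (the node label is illustrative):

parameters:
  rook_ceph:
    node_selector:
      node-role.kubernetes.io/storage: ''   # illustrative node label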
tolerations
type | dict
default |
The tolerations (if applicable) for all the resources managed by the component.
The component assumes that nodes on which the deployments should be scheduled may be tainted with storagenode=True:NoSchedule.
images
type | dict
default |
This parameter allows selecting the Docker images to use for Rook.io, Ceph, and Ceph-CSI.
Each image is specified using keys registry, image and tag.
This structure allows easily injecting a registry mirror, if required.
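As a hedged sketch, a registry mirror could be injected like this (the image key and registry are illustrative):

parameters:
  rook_ceph:
    images:
      rook:                               # illustrative image key
        registry: registry.example.com    # pull the Rook operator image through a mirror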
charts
type | dict
default |
This parameter allows selecting the Helm chart version for the rook-ceph operator.
operator_helm_values
type | dict
default |
The Helm values to use when rendering the rook-ceph operator Helm chart.
A few Helm values are configured based on other component parameters by default:
- The data in parameter images is used to set the image.repository, image.tag, csi.cephcsi.repository, and csi.cephcsi.tag Helm values
- The value of node_selector is used to set Helm value nodeSelector
- The value of tolerations is used to set Helm value tolerations
- The component ensures that hostpathRequiresPrivileged is enabled on OpenShift 4 regardless of the contents of the Helm value.
See the Rook.io docs for a full list of Helm values.
toolbox
type | dict
default |
The configuration for the Rook-Ceph toolbox deployment. This deployment provides an in-cluster shell to observe and administrate the Ceph cluster.
cephClusterSpec
type | dict
default |
The default configuration for the CephCluster resource.
The value of this parameter is used as field .spec of the resulting resource.
Selected configurations of the Ceph cluster are inherited from other component parameters. If you overwrite those configurations in this parameter, the values provided in the "source" parameters won’t have an effect.
Inherited configurations
- cephVersion.image is constructed from the data in parameter images.
- placement.all.nodeAffinity is built from parameter node_selector. The component constructs the following value for the configuration:

  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
      - matchExpressions: (1)
          - key: NODE_SELECTOR_KEY
            operator: Exists
          ...

  1 | The component creates an entry in matchExpressions with key equal to the node selector key and operator=Exists for each key in parameter node_selector.
- placement.all.tolerations is set to the value of parameter tolerations.
- storage.storageClassDeviceSets is built from the values given in parameter ceph_cluster.storageClassDeviceSets. Users are encouraged to use the parameter ceph_cluster.storageClassDeviceSets to configure the Ceph cluster’s backing storage.
See the Rook.io CephCluster documentation for a full list of configuration parameters.
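As a hedged sketch, additional CephCluster settings could be provided like this (the field is illustrative; any field documented in the upstream CRD can be set, keeping the inherited configurations above in mind):

parameters:
  rook_ceph:
    cephClusterSpec:
      dashboard:
        enabled: true    # illustrative: enable the Ceph Mgr dashboard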
Example configurations
Configure the component for SELinux-enabled cluster nodes
The component automatically configures the operator on OpenShift 4.
However, on other Kubernetes distributions on nodes which use SELinux, users need to enable hostpathRequiresPrivileged in the operator’s Helm values.
parameters:
  rook_ceph:
    operator_helm_values:
      hostpathRequiresPrivileged: true (1)
1 | The operator needs to be informed that deployments which use hostPath volume mounts need to run with privileged security context.
This setting is required on any cluster which uses SELinux on the nodes.