Quick start

This tutorial guides you through setting up a working Ceph cluster using the default configuration.

Prerequisites

  • Cluster is already managed by Project Syn

  • At least 3 adequately sized nodes (minimum 8 vCPU, 16GB RAM) are provisioned for the storage cluster

    • The nodes are labeled with node-role.kubernetes.io/storage='' and tainted with storagenode=True:NoSchedule (see the example commands after this list)

  • Block storage volumes (volumeMode=Block) are available in the cluster

  • The block storage volumes are associated with a StorageClass

  • A working commodore command is available locally. See the Running Commodore documentation for details.
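
One way to verify (or apply) these prerequisites is with kubectl, as sketched below. The node name storage-1 is a placeholder, and the commands are only an illustration; repeat the label and taint commands for every storage node and adapt them to your cluster.

    # Label and taint a storage node (repeat for each storage node)
    kubectl label node storage-1 node-role.kubernetes.io/storage=''
    kubectl taint node storage-1 storagenode=True:NoSchedule

    # List the available storage classes and check which volumes use volumeMode=Block
    kubectl get storageclass
    kubectl get pv -o custom-columns=NAME:.metadata.name,MODE:.spec.volumeMode,CLASS:.spec.storageClassName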

Steps

  1. Enable component rook-ceph

    c-cluster-id.yml
    applications:
      - rook-ceph
  2. Configure the backing storage class for the Ceph cluster

    c-cluster-id.yml
    parameters:
      rook_ceph:
        ceph_cluster:
          block_storage_class: <block-storage-class> (1)
    1 Specify the name of a storage class which provides volumes with volumeMode=Block.
  3. If your block storage volumes are provisioned dynamically, specify the volume size

    c-cluster-id.yml
    parameters:
      rook_ceph:
        ceph_cluster:
          block_volume_size: 200Gi (1)
    1 If you are using pre-provisioned block storage volumes, you can leave this parameter at its default value of 1.
  4. If your block storage volumes are backed by SSDs (or better), tune Ceph for SSD backing storage.

    c-cluster-id.yml
    parameters:
      rook_ceph:
        ceph_cluster:
          tune_fast_device_class: true
  5. Compile and push the cluster catalog. Once the change has been applied on the cluster, you can verify the result as sketched below.

    commodore catalog compile <c-cluster-id> --push -i
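
After the catalog change has been rolled out, the sketch below shows one way to check the resulting Ceph cluster. The namespace syn-rook-ceph-cluster and the toolbox deployment rook-ceph-tools are assumptions based on the component's and Rook's defaults; adjust them to your configuration.

    # Wait for the CephCluster resource to report phase Ready and health HEALTH_OK
    kubectl -n syn-rook-ceph-cluster get cephcluster

    # If the Rook toolbox is deployed, inspect the cluster health and
    # the detected OSD device classes (CLASS column) from inside it
    kubectl -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- ceph status
    kubectl -n syn-rook-ceph-cluster exec -it deploy/rook-ceph-tools -- ceph osd df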