Parameters

The parent key for all of the following parameters is openshift4_nodes.

This component relies on deep merging of values from several parameters and hierarchy layers. This works as expected for scalar values and dictionaries. Array values are appended to each other; there is no way to override values which were set at a lower-precedence location in the hierarchy.
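
As a schematic illustration (the zone names are only placeholders), two hierarchy layers which both define an array are combined by appending:

# Layer A, for example the global defaults
availabilityZones:
  - europe-west6-a

# Layer B, for example the cluster configuration
availabilityZones:
  - europe-west6-b

# Effective value after deep merging: the entries of both layers
availabilityZones:
  - europe-west6-a
  - europe-west6-b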

availabilityZones

type

list of strings

default

[]

List of availability zone names. The list is used when distributing MachineSets across zones (see [multiAz]). The first item of this list is also used to place a MachineSet without an explicit zone defined.

It’s suggested to define this value higher up in the configuration hierarchy, most likely at the level of the cloud region.

This is currently only supported on GCP.

defaultSpecs

type

dictionary

default

Sensible defaults for a growing number of cloud providers.

A dictionary holding the default values applied to each machinesets.machine.openshift.io object created by this component.

The top level keys are the names of cloud providers as reported by the cluster fact ${facts:cloud}. The values can be anything that’s accepted in the spec field of a machinesets.machine.openshift.io object.
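
As a hedged sketch of the parameter’s shape (the gcp key and the spec values shown here are illustrative placeholders, not the shipped defaults):

defaultSpecs:
  gcp:
    deletePolicy: Oldest
    template:
      spec:
        metadata:
          labels: {}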

infrastructureID

type

string

default

undefined

This is the 12 character infrastructure ID given to a cluster by the OpenShift 4 installer. Use the following command to retrieve this from the cluster:

oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

This is most likely configured at the cluster level itself. Configuring it higher up in the hierarchy can result in unexpected behavior.

machine_api_namespace

type

string

default

openshift-machine-api

The namespace where machine-api related objects should be created.

Namespaced resources without an explicit .metadata.namespace field will be deployed in this namespace as well.

nodeConfig

type

dictionary

default
cgroupMode: v1

A dictionary holding the default values applied to the node.config.openshift.io object created by this component.

You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see Enabling Linux control group version 2.
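
For example, to switch the cluster to cgroup v2 through this parameter:

nodeConfig:
  cgroupMode: v2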

nodeGroups

type

dictionary

default

empty

A dictionary of node groups to create on the cluster. It’s centered around the MachineSet CRD but also takes care of some additional aspects like zone distribution and auto scaling.

The top level key is the name of each set of machines. Each set of machines has the values described below.

annotations

type

dictionary

default

{}

The annotations to add to the MachineSet.
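
For illustration, a hypothetical node group with a custom annotation (both the group name and the annotation key are placeholders):

nodeGroups:
  infra:
    annotations:
      example.com/owner: platform-team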

instanceType

type

string

default

null

The cloud provider specific instance type to use for the machines in this node group.

This is currently only supported on GCP.

multiAz

type

boolean

default

false

A machine set is placed in one single availability zone. If not specified otherwise, the first entry of availabilityZones is used. When set to true, a MachineSet is created for each zone listed in availabilityZones. The replicas of the generated MachineSets are calculated by dividing the given replicas by the count of zones in availabilityZones, rounded up. For example, replicas: 3 with three availability zones results in three MachineSets with one replica each.

This is currently only supported on GCP.

replicas

type

number

default

1

The number of machines to create. When multiAz is set to true, the number given here is divided so that each of the created MachineSets gets a fraction of the replicas, while the total number of created machines matches the number requested here.

See also [multiAz].

This value can also be set in [spec]. If done so, the value in [spec] takes precedence.

role

type

string

default

worker

The role of the created Nodes. The value is added to the nodes as the node-role.kubernetes.io/<role>: "" label. The worker role label is always added as well, so that the nodes inherit the base worker configuration.

In order to add additional labels to the resulting Node object, use spec.template.spec.metadata.labels.
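
For example, a hypothetical infra node group:

nodeGroups:
  infra:
    role: infra
    replicas: 3

The resulting nodes carry both the node-role.kubernetes.io/infra: "" and the node-role.kubernetes.io/worker: "" labels.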

spec

type

dictionary

default

See [defaultSpecs].

This gives you full control over the resulting MachineSet. Values given here are merged with precedence over the defaults configured in [defaultSpecs]. The values can be anything that’s accepted in the spec field of a machinesets.machine.openshift.io object.

machineConfigPools

type

dictionary

A dictionary of machine config pools to create on the cluster. The resulting MachineConfigPool object will have the key prefixed with x- as its name, and the value is merged into the resource.

Apart from the machine config pool itself, this dictionary can manage related resources through the following fields, which won’t be added to the MachineConfigPool object.

  • The kubelet field can contain the spec of a KubeletConfig resource. The machine config pool key will be used as the name and the machineConfigPoolSelector will be set automatically.

  • The containerRuntime field can contain the spec of a ContainerRuntimeConfig resource. The machine config pool key will be used as the name and the machineConfigPoolSelector will be set automatically.

  • The machineConfigs field accepts a key-value dict where the values are the spec of MachineConfig resources. The resulting MachineConfig objects are named 99x-<pool key>-<key> and the values are used as the spec. The objects are automatically labeled so that the machine config pool picks up the config.

By default the component creates app, infra, and storage machine config pools, each of which extends the worker pool. Config pools can be removed by setting their dictionary value to null.

Example
  machineConfigPools:
    master:
      kubelet:
        kubeletConfig:
          maxPods: 60
    app:
      spec:
        maxUnavailable: 3
      kubelet: (1)
        kubeletConfig:
          maxPods: 1337
      containerRuntime: (2)
        containerRuntimeConfig:
          pidsLimit: 1337
      machineConfigs:
        testfile: (3)
          config:
            ignition:
              version: 3.2.0
            storage:
              files:
              - contents:
                  source: data:,custom
                filesystem: root
                mode: 0644
                path: /etc/customtest
        testfile2: (4)
          config:
            ignition:
              version: 3.2.0
            storage:
              files:
              - contents:
                  source: data:,custom
                filesystem: root
                mode: 0644
                path: /etc/customtest2
1 Results in a KubeletConfig object called app
2 Results in a ContainerRuntimeConfig object called app
3 Results in a MachineConfig object called 99x-app-testfile
4 Results in a MachineConfig object called 99x-app-testfile2

Machine config pool names are prefixed with x- because in some cases configurations are applied in order of their name, and we want the config for x-app to be applied after the default worker config of the worker pool.

The component doesn’t manage machine config pools master and worker as these are maintained directly by OpenShift. Any changes to these machine config pools will be ignored.

You can however manage related resources for these pools through the extra fields kubelet, containerRuntime, and machineConfigs.

machineConfigs

type

dict

default

{}

This parameter accepts a key-value dict where the values are of kind machineconfiguration.openshift.io/v1/MachineConfig. The resulting MachineConfig objects will have the keys, prefixed with 99x-, as the name and the values are merged into the resource.

Reference the upstream documentation on how to use machine config objects.

Example
machineConfigs:
  worker-ssh:
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        passwd:
          users:
            - name: core
              sshAuthorizedKeys:
                - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID9BWBmqreqpn7cF9klFEeLrg/NWk3UAyvx7gj/cVFQn

We add special support for defining file contents in the hierarchy with the key storage.files.X.contents.inline, which isn’t part of the Ignition spec.

Inline file contents example (custom chrony.conf)
machineConfigs:
  worker-chrony-custom:
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        storage:
          files:
            - path: /etc/chrony.conf
              mode: 0644
              overwrite: true
              contents:
                # The contents of `inline` get rendered as
                # `source: 'data:text/plain;charset=utf-8;base64,<inline|base64>'`
                inline: |
                  # Use ch.pool.ntp.org
                  pool ch.pool.ntp.org iburst
                  # Rest is copied from the default config
                  driftfile /var/lib/chrony/drift
                  makestep 1.0 3
                  rtcsync
                  keyfile /etc/chrony.keys
                  leapsectz right/UTC
                  logdir /var/log/chrony

The resulting machine config for this example looks as follows:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  annotations: (1)
    inline-contents.machineconfig.syn.tools/etc-chrony.conf: '# Use ch.pool.ntp.org

      pool ch.pool.ntp.org iburst

      # Rest is copied from the default config

      driftfile /var/lib/chrony/drift

      makestep 1.0 3

      rtcsync

      keyfile /etc/chrony.keys

      leapsectz right/UTC

      logdir /var/log/chrony

      '
  labels:
    app.kubernetes.io/component: openshift4-nodes
    app.kubernetes.io/managed-by: commodore
    app.kubernetes.io/name: openshift4-nodes
    machineconfiguration.openshift.io/role: worker
    name: 99x-worker-chrony-custom
  name: 99x-worker-chrony-custom
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - contents:
            source: data:text/plain;charset=utf-8;base64,IyBVc2UgY2gucG9vbC5udHAub3JnCnBvb2wgY2gucG9vbC5udHAub3JnIGlidXJzdAojIFJlc3QgaXMgY29waWVkIGZyb20gdGhlIGRlZmF1bHQgY29uZmlnCmRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvZHJpZnQKbWFrZXN0ZXAgMS4wIDMKcnRjc3luYwprZXlmaWxlIC9ldGMvY2hyb255LmtleXMKbGVhcHNlY3R6IHJpZ2h0L1VUQwpsb2dkaXIgL3Zhci9sb2cvY2hyb255Cg== (2)
          mode: 420
          overwrite: true
          path: /etc/chrony.conf
1 The original inline file contents are added as an annotation to the resulting machine config object.
2 The actual entry in the files list in the Ignition config is encoded with base64 and added as a data scheme value (data:text/plain;charset=utf-8;base64,…​) in the contents.source field. See the Ignition spec for more details on supported ways to specify file contents.

Keep in mind that machine config objects are evaluated in order of their name. This is also the reason why the machine config names are prefixed with 99x- so that they’re evaluated after the default OpenShift machine configuration.

containerRuntimeConfigs

type

dict

default

{}

This parameter accepts a key-value dict where the values are of kind machineconfiguration.openshift.io/v1/ContainerRuntimeConfig. The keys become the resulting objects’ metadata.name and the values reflect the .spec field of a ContainerRuntimeConfig.

kubeletConfigs

type

dict

default

See class/defaults.yml

This parameter accepts a key-value dict where the values are of kind machineconfiguration.openshift.io/v1/KubeletConfig. The keys become the resulting objects’ metadata.name and the values reflect the .spec field of a KubeletConfig.

Please refer to the upstream documentation of the relevant Kubelet version for the valid values of these fields. Invalid values for the kubelet configuration fields may render cluster nodes unusable.
The component will print a warning if the configuration field maxPods is set to a value larger than 110. See the supported configuration fields upstream (choose the matching release branch for versioned options).

debugNamespace

type

string

default

syn-debug-nodes

The namespace to create for oc debug node/<nodename>. This namespace is annotated to ensure that debug pods can be scheduled on any node.

Use oc debug node/<nodename> --to-namespace syn-debug-nodes to create the debug pods in this namespace.

This component will take ownership of the namespace specified here. Please make sure you don’t specify a namespace which is already managed by other means.

egressInterfaces

type

dictionary

default
machineConfigPools: []
nodeKubeconfig: /var/lib/kubelet/kubeconfig
shadowRangeConfigMap:
  namespace: cilium
  name: eip-shadow-ranges
This feature is intended to be used when configuring floating egress IPs with Cilium according to the design selected in Floating egress IPs with Cilium on cloudscale.

This parameter will deploy a MachineConfig which configures a systemd service that sets up dummy interfaces for egress IPs on the nodes in each MachineConfigPool listed in field machineConfigPools. The component supports removal of MachineConfigPools by prefixing the pool with ~.
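
A minimal sketch of this parameter, assuming a hypothetical infra machine config pool and the default ConfigMap location:

egressInterfaces:
  machineConfigPools:
    - infra
  shadowRangeConfigMap:
    namespace: cilium
    name: eip-shadow-ranges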

The systemd unit deployed by the MachineConfig is configured to run after the node’s network is online but before the Kubelet starts. The unit executes a script which reads the required egress interfaces and associated IPs from the ConfigMap indicated in parameter shadowRangeConfigMap.

The script uses the file indicated in field nodeKubeconfig to fetch the ConfigMap from the cluster. If the default value is used, the script uses the node’s Kubelet kubeconfig to access the cluster. To ensure the Kubelet can access the ConfigMap, users should ensure that a pod which mounts the ConfigMap is running on the node.

Component cilium can deploy a suitable ConfigMap and DaemonSets which ensure that the Kubelets on all nodes that need to create egress dummy interfaces can access the ConfigMap. See the documentation for component-cilium’s support for egress IP ranges for details.

The script expects the ConfigMap to have a key matching each node name on which egress dummy interfaces should be configured.

Example

data:
  infra-foo: '{"egress_a": {"base": "198.51.100", "from": "32", "to": "63"}}'
  infra-bar: '{"egress_a": {"base": "198.51.100", "from": "64", "to": "95"}}'

For the ConfigMap data shown above the script will configure the following dummy interfaces:

  • egress_a_0 - egress_a_31 with IPs 198.51.100.32 - 198.51.100.63 on node infra-foo.

  • egress_a_0 - egress_a_31 with IPs 198.51.100.64 - 198.51.100.95 on node infra-bar.

Example

infrastructureID: c-mist-sg7hn

nodeGroups:
  infra:
    instanceType: n1-standard-8
    multiAz: true
    replicas: 3
  worker:
    instanceType: n1-standard-8
    replicas: 3
    spec:
      deletePolicy: Oldest
      template:
        spec:
          metadata:
            labels:
              mylabel: myvalue

availabilityZones:
- europe-west6-a
- europe-west6-b
- europe-west6-c

containerRuntimeConfigs:
  workers:
    machineConfigPoolSelector:
      matchExpressions:
        - key: pools.operator.machineconfiguration.openshift.io/worker
          operator: Exists
    containerRuntimeConfig:
      pidsLimit: 2048
kubeletConfigs:
  workers:
    machineConfigPoolSelector:
      matchExpressions:
        - key: pools.operator.machineconfiguration.openshift.io/worker
          operator: Exists
    kubeletConfig:
      maxPods: 100
      podPidsLimit: 2048