Parameters
The parent key for all of the following parameters is openshift4_nodes.
|
This component relies on deep merging of values from several parameters and hierarchy layers. This works straightforwardly for scalar values and dictionaries. Values of arrays are appended to each other. There is no way to override values which were set at a lower-precedence location. |
openshiftVersion
| type |
dictionary |
| default |
|
The OpenShift version of the cluster on which the component is installed.
We expect that most users will set this parameter to ${dynamic_facts:openshiftVersion}.
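A minimal sketch, referencing the dynamic fact directly:
openshiftVersion: ${dynamic_facts:openshiftVersion}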
availabilityZones
| type |
list of strings |
| default |
[] |
List of availability zone names. The list will be used when distributing MachineSets across zones (see [multiAZ]). The first item of this list will also be used to place a MachineSet without an explicit zone defined.
It’s suggested to define this value higher up the config hierarchy, most likely at the level of the cloud region.
|
This is currently only supported on GCP. |
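For example, for a cluster in a GCP region (the zone names match the full example at the end of this page):
availabilityZones:
  - europe-west6-a
  - europe-west6-b
  - europe-west6-c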
defaultSpecs
| type |
dictionary |
| default |
Sensible defaults for a growing number of cloud providers. |
A dictionary holding the default values applied to each machinesets.machine.openshift.io object created by this component.
The top level keys are the names of cloud providers as reported by the cluster fact ${facts:cloud}.
The values can be everything that’s accepted in the spec field of a machinesets.machine.openshift.io object.
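A sketch of the structure (the provider key matches ${facts:cloud}; the spec values shown are illustrative, not the component’s actual defaults):
defaultSpecs:
  gcp:
    deletePolicy: Oldest
    template:
      spec:
        metadata:
          labels: {}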
controlPlaneDefaultSpecs
| type |
dictionary |
| default |
Sensible defaults for a growing number of cloud providers. |
A dictionary holding the default values applied to each controlplanemachinesets.machine.openshift.io object created by this component.
The top level keys are the names of cloud providers as reported by the cluster fact ${facts:cloud}.
The values can be everything that’s accepted in the spec field of a controlplanemachinesets.machine.openshift.io object.
infrastructureID
| type |
string |
| default |
undefined |
This is the 12-character infrastructure ID given to a cluster by the OpenShift 4 installer. Use the following command to retrieve this from the cluster:
oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
|
This is most likely to be configured at the cluster level itself. Configuring this higher up in the hierarchy can result in unexpected behavior. |
machine_api_namespace
| type |
string |
| default |
|
The namespace where machine-api related objects should be created.
Namespaced resources without an explicit .metadata.namespace field will be deployed in this namespace as well.
nodeConfig
| type |
dictionary |
| default |
|
A dictionary holding the default values applied to the node.config.openshift.io object created by this component.
You can switch between cgroup v1 and cgroup v2, as needed, by editing the node.config object. For more information, see Enabling Linux control group version 2.
nodeGroups
| type |
dictionary |
| default |
empty |
A dictionary of node groups to create on the cluster.
It’s centered around the MachineSet CRD but also takes care of some additional aspects like zone distribution and auto scaling.
The top level key is the name of each set of machines. Each set of machines has the values described below.
If an entry has value null, no MachineSet is created for that entry.
This allows users to remove entries in the hierarchy.
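For example, the following sketch defines a node group and removes one defined higher up in the hierarchy (the group names are illustrative):
nodeGroups:
  worker:
    replicas: 3
  legacy: null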
multiAz
| type |
boolean |
| default |
false |
A machine set will be placed in a single availability zone.
If not specified otherwise, the first entry of availabilityZones will be used.
When set to true, a MachineSet will be created for each zone listed in availabilityZones.
The replicas of the generated MachineSets will be calculated:
the given replicas value is divided by the count of zones in availabilityZones, rounded up.
|
This is currently only supported on GCP. |
replicas
| type |
number |
| default |
1 |
The number of machines to create.
When multiAZ is set to true, the number given here will be divided so that each of the created MachineSets gets a fraction of the replicas, while the total number of created machines matches the number requested here.
See also [multiAZ].
role
| type |
string |
| default |
worker |
The role of the created Nodes.
The value will be added as the node-role.kubernetes.io/<role>: "" label to nodes.
Unless role is set to master, the worker role label will always be added to inherit the base configuration for nodes.
When role is set to master, the component will create a ControlPlaneMachineSet instead of a MachineSet.
|
In order to add additional labels to the resulting Node object, use spec.template.spec.metadata.labels (see the full example at the end of this page). |
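A minimal sketch of a node group which results in a ControlPlaneMachineSet (the replica count is illustrative):
nodeGroups:
  master:
    role: master
    replicas: 3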
portSecurity
| type |
boolean |
| default |
unset |
This parameter allows users to selectively configure port security for networks that are attached to the node group in the defaultSpecs.openstack config.
This parameter has no effect for networks which are defined directly through the node group’s spec parameter.
If this parameter isn’t given, the OpenStack platform’s default value is used.
When port security is disabled, nodes in the group are fully accessible in each network they’re attached to.
|
This parameter is only supported on OpenStack. Other infrastructures don’t provide the same port security concept. |
|
On OpenStack platforms which enable security groups (and port security) by default, this parameter must be set to |
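A sketch of disabling port security for all networks attached to a node group (the node group name is illustrative):
nodeGroups:
  worker:
    portSecurity: false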
spec
| type |
dictionary |
| default |
See [defaultSpec]. |
This gives you full control over the resulting MachineSet or ControlPlaneMachineSet.
Values given here will be merged with, and take precedence over, the defaults configured in [defaultSpec].
The values can be everything that’s accepted in the spec field of a machinesets.machine.openshift.io object.
machineSets
| type |
dictionary |
| default |
|
| example |
|
A dictionary of machine sets to create on the cluster.
The resulting MachineSet object will have the key as its name, and the value is merged into the resource.
The MachineSet will have the namespace from machine_api_namespace applied.
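A sketch of the structure, assuming a machine set named infra (the spec values are illustrative and are merged into the generated resource):
machineSets:
  infra:
    spec:
      deletePolicy: Oldest
      replicas: 2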
autoscaling
| type |
dictionary |
The fields in this parameter can be used to configure OpenShift’s cluster autoscaling. See the upstream documentation for a detailed description.
autoscaling.addAutoscalerArgs
| type |
list |
| default |
|
Patches the default cluster autoscaler pod with the provided arguments.
Those arguments can’t be set in the ClusterAutoscaler resource itself.
- --daemonset-eviction-for-occupied-nodes=false: Disables eviction of DaemonSet pods to prevent CSI drivers and other required infrastructure pods from being evicted.
- --skip-nodes-with-local-storage=false: Allows removing nodes with pods that use emptyDir volumes.
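For example, to pass both arguments shown above to the cluster autoscaler pod:
autoscaling:
  addAutoscalerArgs:
    - --daemonset-eviction-for-occupied-nodes=false
    - --skip-nodes-with-local-storage=false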
autoscaling.ignoreDownscalingSync
| type |
bool |
| default |
|
If set to true, Argo CD will ignore synchronization of the .spec.scaleDown.enable field in the ClusterAutoscaler resource generated by this component.
This is helpful if .spec.scaleDown.enable is controlled by another resource; otherwise Argo CD would override this value again.
autoscaling.clusterAutoscaler
| type |
dictionary |
| default |
|
The value of this parameter is merged into the default .spec of the ClusterAutoscaler resource which is generated by the component.
The component deploys the following default ClusterAutoscaler .spec:
spec:
podPriorityThreshold: -10
resourceLimits:
maxNodesTotal: 24
cores:
min: 8
max: 128
memory:
min: 4
max: 256
logVerbosity: 1
scaleDown:
enabled: true
delayAfterAdd: 10m
delayAfterDelete: 5m
delayAfterFailure: 30s
unneededTime: 5m
utilizationThreshold: '0.4'
expanders: [ 'Random' ]
See the upstream configuring the cluster autoscaler documentation for details on each configuration option.
|
The component doesn’t validate the provided configuration.
The fields in the cluster autoscaler’s spec.resourceLimits must be configured to account for the non-autoscaled nodes in the cluster, such as control plane nodes, since they’ll also count against the overall cluster size. |
autoscaling.machineAutoscalers
| type |
dictionary |
| default |
|
Each key value pair in this parameter is used to create a MachineAutoscaler resource in the namespace indicated by parameter machine_api_namespace.
The component expects that each key matches a MachineSet which is configured through one of the parameters nodeGroups or machineSets.
The component will raise an error if it finds a key which doesn’t have a matching entry in nodeGroups or machineSets.
The component will configure ArgoCD to ignore changes to spec.replicas for each MachineSet resource targeted by a MachineAutoscaler.
|
The value associated with each key is used as the base configuration for .spec of the resulting MachineAutoscaler resource.
The component will always configure .spec.scaleTargetRef.name with the key to ensure the MachineAutoscaler resource targets the desired MachineSet.
The component will raise an error if a value doesn’t have exactly the keys minReplicas and maxReplicas.
See the upstream configuring machine autoscalers documentation for more details.
autoscaling.priorityExpanderConfig
| type |
dictionary |
| default |
|
If this parameter has any fields set, the component will generate a configmap cluster-autoscaler-priority-expander in the namespace indicated by parameter machine_api_namespace.
When the parameter has any fields set, the component will also default the ClusterAutoscaler’s .spec.expanders to ['Priority'].
The component will render the provided dictionary as YAML and write it to data.priorities in the cluster-autoscaler-priority-expander configmap.
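A sketch, assuming the upstream priority expander format where each key is a priority and each value is a list of regular expressions matching MachineSet names (the regexes here are illustrative):
autoscaling:
  priorityExpanderConfig:
    10:
      - '.*-spot-.*'
    50:
      - '.*-on-demand-.*'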
autoscaling.customMetrics.enabled
| type |
boolean |
| default |
|
When enabled, the component creates PrometheusRules to generate metrics for certain autoscaling parameters:
- component_openshift4_nodes_machineset_replicas_max contains the maxReplicas value for each machine set.
- component_openshift4_nodes_machineset_replicas_min contains the minReplicas value for each machine set.
autoscaling.customMetrics.extraLabels
| type |
dictionary |
| default |
|
Additional label-value pairs to add to the autoscaling metrics.
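For example (the label name and value are illustrative):
autoscaling:
  customMetrics:
    enabled: true
    extraLabels:
      tenant: example-tenant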
autoscaling.schedule.enableExpression
| type |
string |
| default |
|
The cron expression to enable downscaling.
autoscaling.schedule.disableExpression
| type |
string |
| default |
|
The cron expression to disable downscaling.
autoscaling.schedule.timeZone
| type |
string |
| default |
|
The time zone to use for scheduled downscaling.
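A sketch which enables downscaling overnight (the cron expressions and time zone are illustrative):
autoscaling:
  schedule:
    enableExpression: '0 22 * * *'
    disableExpression: '0 6 * * *'
    timeZone: Europe/Zurich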
Example configuration
This configuration assumes that a MachineSet named app is configured through either nodeGroups or machineSets.
autoscaling:
enabled: true
clusterAutoscaler:
logVerbosity: 4 (1)
machineAutoscalers:
app:
minReplicas: 2
maxReplicas: 6
| 1 | Enable debug logging by setting logVerbosity=4 if you need to troubleshoot the autoscaling behavior. |
machineConfigPools
| type |
dictionary |
A dictionary of machine config pools to create on the cluster.
The resulting MachineConfigPool object will have the key prefixed with x- as the name and the value is merged into the resource.
Apart from the machine config pool, this dictionary can manage related resources through the following fields, which won’t be added to the MachineConfigPool object.
- The kubelet field can contain the spec of a KubeletConfig resource. The machine config pool key will be used as the name and the machineConfigPoolSelector will be set automatically.
- The containerRuntime field can contain the spec of a ContainerRuntimeConfig resource. The machine config pool key will be used as the name and the machineConfigPoolSelector will be set automatically.
- The machineConfigs field accepts a key-value dict where the values are the spec of MachineConfig resources. The resulting MachineConfig objects will have the keys, prefixed with 99x- and the machine config pool key, as the name, and the values are used as the spec. The objects will automatically be labeled so that the machine config pool will pick up the config.
By default the component creates app, infra, and storage machine config pools, each of which extends the worker pool.
Config pools can be removed by setting their dictionary value to null.
machineConfigPools:
master:
kubelet:
kubeletConfig:
maxPods: 60
app:
spec:
      maxUnavailable: 3
kubelet: (1)
kubeletConfig:
maxPods: 1337
containerRuntime: (2)
containerRuntimeConfig:
pidsLimit: 1337
machineConfigs:
testfile: (3)
config:
ignition:
version: 3.2.0
storage:
files:
- contents:
source: data:,custom
filesystem: root
mode: 0644
path: /etc/customtest
testfile2: (4)
config:
ignition:
version: 3.2.0
storage:
files:
- contents:
source: data:,custom
filesystem: root
mode: 0644
path: /etc/customtest2
| 1 | Results in a KubeletConfig object called app |
| 2 | Results in a ContainerRuntimeConfig object called app |
| 3 | Results in a MachineConfig object called 99x-app-testfile |
| 4 | Results in a MachineConfig object called 99x-app-testfile2 |
|
Machine config pool names are prefixed with x-. |
|
The component doesn’t manage machine config pools master and worker. You can however manage related resources for these pools through the extra fields kubelet, containerRuntime and machineConfigs. |
machineConfigs
| type |
dict |
| default |
|
This parameter accepts a key-value dict where the values are of kind machineconfiguration.openshift.io/v1/MachineConfig.
The resulting MachineConfig objects will have the keys, prefixed with 99x-, as the name and the values are merged into the resource.
Reference the upstream documentation on how to use machine config objects.
machineConfigs:
worker-ssh:
metadata:
labels:
machineconfiguration.openshift.io/role: worker
spec:
config:
ignition:
version: 3.2.0
passwd:
users:
- name: core
sshAuthorizedKeys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID9BWBmqreqpn7cF9klFEeLrg/NWk3UAyvx7gj/cVFQn
We add special support for defining file contents in the hierarchy with key storage.files.X.contents.inline which isn’t part of the Ignition spec.
Example (custom chrony.conf):
machineConfigs:
worker-chrony-custom:
metadata:
annotations:
openshift4-nodes.syn.tools/node-disruption-policies: |- (1)
{
"files": [{
"path": "/etc/chrony.conf",
"actions": [{"type":"Restart","restart":{"serviceName":"chronyd.service"}}]
}]
}
labels:
machineconfiguration.openshift.io/role: worker
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- path: /etc/chrony.conf
mode: 0644
overwrite: true
contents:
# The contents of `inline` get rendered as
# `source: 'data:text/plain;charset=utf-8;base64,<inline|base64>'`
inline: |
# Use ch.pool.ntp.org
pool ch.pool.ntp.org iburst
# Rest is copied from the default config
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
| 1 | Configure the machine config operator to only restart the chronyd service instead of draining and rebooting the node when this MachineConfig is applied.
See section nodeDisruptionPolicies for a full explanation of how the component supports configuring node disruption policies via annotations on MachineConfig resources. |
The resulting machine config for this example looks as follows:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
annotations: (1)
inline-contents.machineconfig.syn.tools/etc-chrony.conf: '# Use ch.pool.ntp.org
pool ch.pool.ntp.org iburst
# Rest is copied from the default config
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
'
labels:
app.kubernetes.io/component: openshift4-nodes
app.kubernetes.io/managed-by: commodore
app.kubernetes.io/name: openshift4-nodes
machineconfiguration.openshift.io/role: worker
name: 99x-worker-chrony-custom
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- contents:
source: data:text/plain;charset=utf-8;base64,IyBVc2UgY2gucG9vbC5udHAub3JnCnBvb2wgY2gucG9vbC5udHAub3JnIGlidXJzdAojIFJlc3QgaXMgY29waWVkIGZyb20gdGhlIGRlZmF1bHQgY29uZmlnCmRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvZHJpZnQKbWFrZXN0ZXAgMS4wIDMKcnRjc3luYwprZXlmaWxlIC9ldGMvY2hyb255LmtleXMKbGVhcHNlY3R6IHJpZ2h0L1VUQwpsb2dkaXIgL3Zhci9sb2cvY2hyb255Cg== (2)
mode: 420
overwrite: true
path: /etc/chrony.conf
| 1 | The original inline file contents are added as an annotation to the resulting machine config object. |
| 2 | The actual entry in the files list in the Ignition config is encoded with base64 and added as a data scheme value (data:text/plain;charset=utf-8;base64,…) in the contents.source field.
See the Ignition spec for more details on supported ways to specify file contents. |
|
Keep in mind that machine config objects are evaluated in order of their name.
This is also the reason why the machine config names are prefixed with 99x-. |
containerRuntimeConfigs
| type |
dict |
| default |
|
This parameter accepts a key-value dict where the values are of kind machineconfiguration.openshift.io/v1/ContainerRuntimeConfig.
The keys become the resulting metadata.name and the values reflect the .spec field of ContainerRuntimeConfig.
kubeletConfigs
| type |
dict |
| default |
See |
This parameter accepts a key-value dict where the values are of kind machineconfiguration.openshift.io/v1/KubeletConfig.
The keys become the resulting metadata.name and the values reflect the .spec field of KubeletConfig.
| Please refer to the upstream version of the relevant kubelet for the valid values of these fields. Invalid values of the kubelet configuration fields may render cluster nodes unusable. |
The component will print a warning if the configuration field maxPods is set to a value larger than 110.
See the supported configuration fields upstream (choose the matching release branch for versioned options).
debugNamespace
| type |
string |
| default |
|
The namespace to create for oc debug node/<nodename>.
This namespace is annotated to ensure that debug pods can be scheduled on any node.
Use oc debug node/<nodename> --to syn-debug-nodes to create the debug pods in the namespace.
|
This component will take ownership of the namespace specified here. Please make sure you don’t specify a namespace which is already managed by other means. |
egressInterfaces
| type |
dictionary |
| default |
|
| This feature is intended to be used when configuring floating egress IPs with Cilium according to the design selected in Floating egress IPs with Cilium on cloudscale. |
This parameter will deploy a MachineConfig which configures a systemd service that sets up dummy interfaces for egress IPs on the nodes in each MachineConfigPool listed in field machineConfigPools.
The component supports removal of MachineConfigPools by prefixing the pool with ~.
The systemd unit deployed by the MachineConfig is configured to be executed after the node’s network is online but before the Kubelet starts.
The systemd unit executes a script which reads the required egress interfaces and associated IPs from the configmap indicated in parameter shadowRangeConfigMap.
The script uses the file indicated in field nodeKubeconfig to fetch the ConfigMap from the cluster.
If the default value is used, the script will use the node’s Kubelet kubeconfig to access the cluster.
To ensure the Kubelet can access the configmap, users should ensure that a pod which mounts the ConfigMap is running on the node.
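A sketch of the parameter (the pool names and ConfigMap reference are illustrative; the ~ prefix removes a pool inherited from higher up in the hierarchy):
egressInterfaces:
  machineConfigPools:
    - infra
    - ~worker
  shadowRangeConfigMap: cilium/eip-shadow-ranges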
|
The script will apply the following changes to the provided kubeconfig:
This is done to ensure that the script works correctly on IPI clusters which only provide the |
|
Component cilium can deploy a suitable ConfigMap and DaemonSets which ensure that the Kubelets on all nodes that need to create egress dummy interfaces can access the ConfigMap. See the documentation for component-cilium’s support for egress IP ranges for details. |
The script expects the ConfigMap to have a key matching each node name on which egress dummy interfaces should be configured.
Example
data:
infra-foo: '{"egress_a": {"base": "198.51.100", "from": "32", "to": "63"}}'
infra-bar: '{"egress_a": {"base": "198.51.100", "from": "64", "to": "95"}}'
For the ConfigMap data shown above the script will configure the following dummy interfaces:
- egress_a_0 through egress_a_31 with IPs 198.51.100.32 - 198.51.100.63 on node infra-foo.
- egress_a_0 through egress_a_31 with IPs 198.51.100.64 - 198.51.100.95 on node infra-bar.
nodeDisruptionPolicies
| type |
dict |
| default |
This parameter allows configuring "static" node disruption policies.
Node disruption policies allow customizing the actions that are executed when a MachineConfig change is applied.
By default the machine config operator (MCO) will reboot nodes when a MachineConfig changes a file or a systemd unit.
With node disruption policies you can customize the MCO behavior when a file or systemd unit is changed.
The node disruption policies are configured via field spec.nodeDisruptionPolicy in the global machineconfigurations.operator.openshift.io/cluster resource.
That object has fields files, units and sshkey.
Field files is a list.
Each entry of files has fields path and actions.
The list is indexable by field path of the entries.
Field units is a list.
Each entry of units has fields name and actions.
The list is indexable by field name of the entries.
Field sshkey is an object with field actions.
Field actions defines the actions to take when the policy matches a change.
When applying policy actions, the longest prefix wins: if there are policies for both /usr/local and /usr/local/bin the policy for /usr/local/bin is applied for /usr/local/bin/script.sh.
|
Consult kubectl explain machineconfiguration.operator.openshift.io.spec.nodeDisruptionPolicy for a more comprehensive explanation of each field. |
Actions are executed in the order that they appear in field actions.
Currently, the following actions are supported:
- Reboot: Drains and reboots the node (the default action). Must occur alone.
- None: Does nothing. Must occur alone.
- Drain: Cordons, drains and uncordons the node.
- Reload: Reloads a systemd service.
- Restart: Restarts a systemd service.
- DaemonReload: Executes systemd daemon-reload to discover new or changed systemd configuration, such as a new systemd unit.
The component expects parameter fields files and units to be dictionaries and treats the keys as values for field path and name for the files and units lists in the MachineConfiguration resource respectively.
The values of each entry in files and units are used verbatim.
The content of parameter field sshkey_actions is used verbatim as the contents of sshkey.actions.
The component has some validation and will reject obviously invalid configurations, such as a list of actions which contains Reboot or None together with other actions.
The component deploys an Espejote ManagedResource which collects node disruption policies from the annotation openshift4-nodes.syn.tools/node-disruption-policies on all MachineConfig resources (except the rendered-* resources) and merges those values (which are expected to be string-encoded JSON) with the statically configured policies provided in this parameter.
The contents of this parameter are deployed on the cluster in an Espejote JsonnetLibrary object.
The contents of each annotation are expected to be string-encoded JSON which contains a valid node disruption policy snippet.
The expected top-level fields of the annotation contents are files, units or sshkey.
The ManagedResource uses the same validation logic as the component to reject obviously invalid configurations provided via annotations.
Since there can only be one policy per file path and per systemd unit, the ManagedResource will collect each list of actions for a given path or unit and select the "most disruptive" list of actions.
Reboot is treated as the most disruptive action and has precedence over all other lists of actions.
None is treated as the least disruptive action and all other lists of actions have precedence over None.
Other lists of actions are merged together, since we can’t sensibly define an order of precedence for two lists of actions that are composed from the actions other than None or Reboot.
| Node disruption policies generally won’t have an effect during OpenShift upgrades. |
|
We can’t guarantee that node disruption policies configured through |
|
We don’t recommend configuring node disruption policies which restart systemd services which themselves are deployed via MachineConfigs. |
Example
The following configuration can be used to ensure that managing files in /usr/local/bin is fully non-disruptive, as long as no other configurations override this config:
nodeDisruptionPolicies:
files:
"/usr/local/bin":
actions:
- None
The following annotation contents can be provided on a MachineConfig which configures a custom /etc/chrony.conf to just restart the chrony service instead of rebooting the node:
machineConfigs: (1)
worker-custom-chrony: (1)
metadata:
annotations:
openshift4-nodes.syn.tools/node-disruption-policies: |- (2)
{
"files": [{
"path": "/etc/chrony.conf",
"actions": [{"type":"Restart","restart":{"serviceName":"chronyd.service"}}]
}]
}
labels:
machineconfiguration.openshift.io/role: worker
spec: { ... }
| 1 | We give the example via component parameter machineConfigs.
See section machineConfigs for a full example MachineConfig which deploys a custom /etc/chrony.conf. |
| 2 | See the upstream documentation for the full set of options. |
capacityAlerts
| type |
dict |
This parameter allows users to enable and configure alerts for node capacity management.
The capacity alerts are disabled by default and can be enabled by setting the key capacityAlerts.enabled to true.
Predictive alerts are disabled by default and can be enabled individually, for example by setting ExpectClusterCpuUsageHigh.enabled to true.
The dictionary will be transformed into a PrometheusRule object by the component.
The component provides 10 alerts that are grouped into four groups.
You can disable or modify each of these alert rules individually.
The fields in these rules will be added to the final PrometheusRule, with the exception of expr.
The expr field contains fields which can be used to tune the default alert rule.
Alternatively, the default rule can be completely overwritten by setting the expr.raw field (see example below).
See Resource Management for an explanation for every alert rule.
Nodes are grouped by machineSet if machineSets are in use on the cluster. If machineSets are not in use, nodes are grouped by node role instead. The component automatically determines whether to group by machineSets or node roles.
When grouping by node role, it is possible to further group the nodes by additional node labels by specifying capacityAlerts.groupByNodeLabels.
When grouping by machineSets, there is no equivalent setting.
Example:
capacityAlerts:
enabled: true (1)
groupByNodeLabels: [] (2)
monitorNodeRoles:
- app (3)
groups:
PodCapacity:
rules:
TooManyPods:
annotations:
message: 'The number of pods is too damn high' (4)
for: 3h (5)
ExpectTooManyPods:
expr: (6)
range: '2d'
predict: '5*24*60*60'
ResourceRequests:
rules:
TooMuchMemoryRequested:
enabled: true
expr:
raw: sum(kube_pod_resource_request{resource="memory"}) > 9000*1024*1024*1024 (7)
CpuCapacity:
rules:
ClusterCpuUsageHigh:
enabled: false (8)
ExpectClusterCpuUsageHigh:
enabled: false (8)
UnusedCapacity:
rules:
ClusterHasUnusedNodes:
enabled: false (9)
| 1 | Enables capacity alerts |
| 2 | List of node labels (as they show up in the kube_node_labels metric) by which alerts are grouped (only available if machineSets are not in use) |
| 3 | List of node roles to monitor. This is used only if machineSets are not in use. If machineSets are used, the parameter monitorMachineSets is used instead. |
| 4 | Changes the alert message for the pod capacity alert |
| 5 | Only alerts for pod capacity if it fires for 3 hours |
| 6 | Change the pod count prediction to look at the last two days and predict the value in five days |
| 7 | Completely overrides the default alert rule and alerts if the total memory request is over 9000 GiB |
| 8 | Disables both CPU capacity alert rules |
| 9 | Disables the alert which fires if the cluster has unused nodes. |
Example
infrastructureID: c-mist-sg7hn
nodeGroups:
infra:
instanceType: n1-standard-8
multiAz: true
replicas: 3
worker:
instanceType: n1-standard-8
replicas: 3
spec:
deletePolicy: Oldest
template:
spec:
metadata:
labels:
mylabel: myvalue
availabilityZones:
- europe-west6-a
- europe-west6-b
- europe-west6-c
containerRuntimeConfigs:
workers:
machineConfigPoolSelector:
matchExpressions:
- key: pools.operator.machineconfiguration.openshift.io/worker
operator: Exists
containerRuntimeConfig:
pidsLimit: 2048
kubeletConfigs:
workers:
machineConfigPoolSelector:
matchExpressions:
- key: pools.operator.machineconfiguration.openshift.io/worker
operator: Exists
kubeletConfig:
maxPods: 100
podPidsLimit: 2048