Parameters

The parent key for all of the following parameters is cilium.

The namespace is cilium and currently not configurable.
install_method

type: string
default:
possible values: helm, olm

The installation method for Cilium. olm uses the OpenShift Operator Lifecycle Manager (OLM). OLM installation is required for OpenShift clusters. The Cilium OLM is a thin wrapper around Helm; because of this, the Helm values are used for the OLM configuration too.

           | Helm | OLM |
Opensource | ✅   | ✅  |
Enterprise | ✅   | ✅  |
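For example, on OpenShift clusters the installation method must be set to OLM:

```yaml
parameters:
  cilium:
    install_method: olm
```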
release

type: string
default:
possible values: opensource, enterprise

Two versions of Cilium exist: the open-source version and the enterprise version. See Upgrade Cilium OSS to Cilium Enterprise (OpenShift 4) for upgrading from the OSS version to the enterprise version.
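For example, to deploy the enterprise version:

```yaml
parameters:
  cilium:
    release: enterprise
```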
kubernetes_version

type: string
default:

The Kubernetes version to provide to helm template when rendering the Helm chart. This parameter has no effect when install_method is set to olm.

Set this parameter to ${dynamic_facts:kubernetesVersion:major}.${dynamic_facts:kubernetesVersion:minor} to use the cluster's reported Kubernetes version when rendering the Helm chart.
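A minimal sketch of that configuration:

```yaml
parameters:
  cilium:
    kubernetes_version: "${dynamic_facts:kubernetesVersion:major}.${dynamic_facts:kubernetesVersion:minor}"
```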
charts.cilium.source

type: string
default:

The Helm repository from which to download the cilium Helm chart.
charts.cilium-enterprise.source

type: string
default:

The chart repository URL of the cilium-enterprise Helm chart.

Users must provide the chart repository URL themselves in their Project Syn global or tenant configuration. The component default is an invalid string (<CILIUM-ENTERPRISE-CHART-REPO-URL>) instead of ~ to make the Kapitan error message somewhat useful when the user hasn't reconfigured the chart repository.
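A sketch of providing the repository URL in the global or tenant configuration (the URL shown is a hypothetical placeholder):

```yaml
parameters:
  cilium:
    charts:
      cilium-enterprise:
        # hypothetical placeholder, replace with the actual chart repository URL
        source: https://charts.example.com/cilium-enterprise
```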
charts.cilium-enterprise.version

type: string
default:

The version to use for the cilium-enterprise Helm chart.
olm.source

type: object
default:

The source for the OLM manifests. The component selects the opensource or enterprise field based on the value of component parameter release.

The component doesn't provide the URL of the Cilium Enterprise OLM manifests .tar.gz archive. Users must provide the URL themselves in their Project Syn configuration hierarchy. The component default is an invalid string (<CILIUM-ENTERPRISE-OLM-MANIFESTS-TARGZ-URL>) instead of ~ to make the Kapitan error message somewhat useful when the user hasn't reconfigured the manifests URL.
Example

parameters:
  cilium:
    olm:
      source:
        enterprise: https://cilium-ee.example.com/downloads/v${cilium:olm:version}/cilium-ee-${cilium:olm:full_version}.tar.gz (1)

1 | The example configuration uses Reclass references to construct URL parts containing the desired version. The component explicitly provides separate parameters for the OLM minor version and patchlevel. |
olm.full_version

type: string
default:

The complete version of the OLM release to download. By default, the component constructs the value for this parameter from parameters version and patchlevel.
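A sketch of how the version parameters fit together (the version numbers are illustrative):

```yaml
parameters:
  cilium:
    olm:
      version: '1.14'    # OLM minor version (illustrative)
      patchlevel: '5'    # OLM patchlevel (illustrative)
      # by default, full_version is constructed from the two parameters above
```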
olm.log_level

type: string
default:

Zap log level for the OLM operator.
cilium_helm_values

type: object
default:

The configuration values of the underlying Cilium Helm chart. See the open-source Cilium documentation for supported values.

The component pre-processes certain Helm values to allow users to more gracefully upgrade to newer Cilium versions which remove deprecated Helm values.

On OpenShift 4, the component deploys a Patch which controls whether OpenShift deploys kube-proxy, based on the configured Helm values.
hubble_enterprise_helm_values

type: object
default:

The configuration values for the Hubble Enterprise Helm chart. See the Isovalent Cilium Enterprise documentation for supported values.
hubble_ui_helm_values

type: object
default:

The configuration values for the Hubble UI Helm chart. See the Isovalent Cilium Enterprise documentation for supported values.
egress_gateway

This section allows users to configure the Cilium EgressGatewayPolicy feature.

When deploying Cilium OSS, the component will generate CiliumEgressGatewayPolicy resources. When deploying Cilium EE, the component will generate IsovalentEgressGatewayPolicy resources.

The current implementation (and therefore the examples shown here) has only been tested with Cilium EE. Please refer to the example policy in the upstream documentation for Cilium OSS.
egress_gateway.enabled

type: boolean
default:

This parameter allows users to set all the configurations necessary to enable the egress gateway policy feature through a single parameter.

The parameter sets the following Helm values:

egressGateway:
  enabled: true
bpf:
  masquerade: true
l7Proxy: false

Notably, the L7 proxy feature is disabled by default when egress gateway policies are enabled. This is recommended by the Cilium documentation, see also the upstream documentation. Additionally, BPF masquerading can't be disabled when the egress gateway feature is enabled.

For Cilium EE, the component uses Helm value egressGateway.enabled for Helm value enterprise.egressGatewayHA.enabled by default. It's possible to override this by explicitly setting egressGateway.enabled=false and enterprise.egressGatewayHA.enabled=true in the component's cilium_helm_values.
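A sketch of the override described above for Cilium EE:

```yaml
parameters:
  cilium:
    egress_gateway:
      enabled: true
    cilium_helm_values:
      # enable only the HA variant of the egress gateway on Cilium EE
      egressGateway:
        enabled: false
      enterprise:
        egressGatewayHA:
          enabled: true
```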
egress_gateway.policies

type: object
default:

This parameter allows users to deploy CiliumEgressGatewayPolicy resources. When deploying Cilium EE, the parameter will generate IsovalentEgressGatewayPolicy resources instead.

Each key-value pair in the parameter is converted to a CiliumEgressGatewayPolicy (or IsovalentEgressGatewayPolicy) resource. Entries can be removed by setting the value to null.
Example

The examples are written for Cilium EE's IsovalentEgressGatewayPolicy resources.

egress_gateway:
  policies:
    all-example-namespace:
      metadata:
        annotations:
          syn.tools/description: |
            Route all egress traffic from example-namespace through
            203.0.113.100.
      spec:
        destinationCIDRs:
          - 0.0.0.0/0
        egressGroups:
          - nodeSelector:
              matchLabels:
                node-role.kubernetes.io/infra: ""
            egressIP: 203.0.113.100
        selectors:
          - podSelector:
              matchLabels:
                io.kubernetes.pod.namespace: example-namespace
    removed: null
The component configuration shown above is rendered as follows by the component:

apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
  annotations:
    syn.tools/description: |
      Route all egress traffic from example-namespace through
      203.0.113.100.
  labels:
    name: all-example-namespace
  name: all-example-namespace
spec:
  destinationCIDRs:
    - 0.0.0.0/0
  egressGroups:
    - egressIP: 203.0.113.100
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra: ''
  selectors:
    - podSelector:
        matchLabels:
          io.kubernetes.pod.namespace: example-namespace
egress_gateway.generate_shadow_ranges_configmap

type: boolean
default:

When this parameter is set to true, the component will deploy a ConfigMap which is suitable for the systemd unit, managed by component openshift4-nodes, which creates dummy interfaces. Additionally, the component will deploy one or more DaemonSets (depending on the contents of egress_gateway.egress_ip_ranges) to ensure the Kubelets on all nodes where egress interfaces need to be created can access the ConfigMap.

See also the documentation for parameter egressInterfaces in openshift4-nodes.
egress_gateway.egress_ip_ranges

type: object
default:

This parameter allows users to configure CiliumEgressGatewayPolicy (or IsovalentEgressGatewayPolicy) resources which assign a single egress IP to a namespace according to the design selected in Floating egress IPs with Cilium on cloudscale.

Each entry in the parameter is intended to describe a group of dummy interfaces that can be used in CiliumEgressGatewayPolicy (or IsovalentEgressGatewayPolicy) resources. The component expects that each value is an object with fields egress_range, node_selector, namespace_egress_ips, and shadow_ranges.

Field shadow_ranges is optional, see the section on shadow ranges for more details.
Prerequisites

The component expects that the key for each entry matches the prefix of the dummy interface names that are assigned the shadow IPs which map to the egress IP range defined in egress_range. To expand, the component expects that each node matching the selector in node_selector has a dummy interface named <prefix>_<i> for \$i \in [0, n)\$ where \$n\$ is the number of IPs contained in the specified egress range.

Additionally, the component expects that the network environment of the cluster ensures that all traffic which originates from the IPs assigned to the dummy interfaces on each node is mapped to the IPs in the range given in egress_range. The details of the mapping are left to the operator of the cluster's network environment, but the component expects that traffic that originates from the IPs assigned to the same dummy interface on different nodes is mapped to a single egress IP.

We recommend that cluster operators allocate a shadow egress IP range of the same size as the egress IP range specified in field egress_range for each node matching the node selector. In this case, the operator would need to ensure that traffic originating from each shadow IP CIDR is mapped to the egress CIDR. One option to realize this mapping are iptables rules on the default gateway. This approach assumes that the default gateway has suitable routes for the shadow IP ranges.
Policy generation

The component will generate one CiliumEgressGatewayPolicy (or IsovalentEgressGatewayPolicy) for each key-value pair in field namespace_egress_ips for each egress range.

The compilation will abort with an error if the same namespace appears in multiple egress range definitions.

The component doesn't enforce that different egress ranges are non-overlapping.

The component expects that keys in field namespace_egress_ips are namespace names. Additionally, the component expects that values in that field are IPs in the defined egress IP range.

The component allows users to assign the same egress IP to multiple namespaces.

The component expects that the value of egress_range has format 192.0.2.32-192.0.2.63. If the range isn't given in the expected format, or if the component detects that the given range is empty (for example if the first IP is larger than the last IP), compilation is aborted with an error. Additionally, the component also aborts compilation with an error if an egress IP that's assigned to a namespace is outside the specified egress range.

Finally, entries in egress_ip_ranges and namespace_egress_ips can be removed by setting the value to null.
Shadow ranges

The component optionally deploys a ConfigMap which contains a map from node names to egress shadow IP ranges. This ConfigMap is only deployed when parameter egress_gateway.generate_shadow_ranges_configmap is set to true.

If the shadow ranges ConfigMap is enabled, the component will check that each shadow range is the same size as the egress IP range it's associated with. If there are any mismatches, compilation aborts with an error.

If that parameter is set to true, the component will extract the node names and associated shadow IP ranges from field shadow_ranges in each entry of egress_ip_ranges. Ranges which don't define this field (or where it's set to null) are skipped.

This ConfigMap is intended to be consumed by a component running on the node (such as the systemd unit deployed by openshift4-nodes, see the documentation for parameter egressInterfaces in openshift4-nodes).

To ensure the Kubelet Kubeconfig on the nodes can be used to access this ConfigMap, the component also deploys a DaemonSet which mounts the ConfigMap for each unique node selector in all egress IP ranges. The DaemonSet pods establish a link between the node and the ConfigMap, which is required for the Kubernetes Node authorization mode to allow the Kubelet to access the ConfigMap.
Example

egress_ip_ranges:
  egress_a:
    egress_range: '192.0.2.32 - 192.0.2.63'
    node_selector:
      node-role.kubernetes.io/infra: ''
    namespace_egress_ips:
      foo: 192.0.2.32
      bar: 192.0.2.61
    shadow_ranges:
      infra-foo: 198.51.100.32 - 198.51.100.63
      infra-bar: 198.51.100.64 - 198.51.100.95

The configuration shown above results in the two IsovalentEgressGatewayPolicy resources shown below.
apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
  annotations: (1)
    cilium.syn.tools/description: Generated policy to assign egress IP 192.0.2.61
      in egress range "egress_a" (192.0.2.32 - 192.0.2.63) to namespace bar.
    cilium.syn.tools/egress-ip: 192.0.2.61
    cilium.syn.tools/egress-range: 192.0.2.32 - 192.0.2.63
    cilium.syn.tools/interface-prefix: egress_a
    cilium.syn.tools/source-namespace: bar
  labels:
    name: bar
  name: bar (2)
spec:
  destinationCIDRs:
    - 0.0.0.0/0 (3)
  egressGroups:
    - interface: egress_a_29 (4)
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra: '' (5)
  selectors:
    - podSelector:
        matchLabels:
          io.kubernetes.pod.namespace: bar (6)
---
apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
  annotations: (1)
    cilium.syn.tools/description: Generated policy to assign egress IP 192.0.2.32
      in egress range "egress_a" (192.0.2.32 - 192.0.2.63) to namespace foo.
    cilium.syn.tools/egress-ip: 192.0.2.32
    cilium.syn.tools/egress-range: 192.0.2.32 - 192.0.2.63
    cilium.syn.tools/interface-prefix: egress_a
    cilium.syn.tools/source-namespace: foo
  labels:
    name: foo
  name: foo (2)
spec:
  destinationCIDRs:
    - 0.0.0.0/0 (3)
  egressGroups:
    - interface: egress_a_0 (4)
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra: '' (5)
  selectors:
    - podSelector:
        matchLabels:
          io.kubernetes.pod.namespace: foo (6)
1 | The component adds a number of annotations that contain the input data that was used to generate the policy. Additionally, the component adds an annotation that gives a human-readable description of the policy. |
2 | The namespace name is used as the name for the IsovalentEgressGatewayPolicy resource. |
3 | The policy always masquerades all traffic from the namespace with the defined egress IP. |
4 | The policy uses the key in egress_ip_ranges and the offset of the selected egress IP into the range to generate the name of the dummy interface that’s expected to be assigned the shadow IPs that map to the egress IP. |
5 | The policy uses the node selector that’s defined in the parameter. |
6 | The policy always matches all traffic originating in the specified namespace. |
Additionally, if parameter egress_gateway.generate_shadow_ranges_configmap is set to true, the ConfigMap and DaemonSet shown below are created.
apiVersion: v1
kind: ConfigMap
metadata:
  name: eip-shadow-ranges
data: (1)
  infra-foo: '{"egress_a":{"base":"198.51.100","from":"32","to":"63"}}'
  infra-bar: '{"egress_a":{"base":"198.51.100","from":"64","to":"95"}}'
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    cilium.syn.tools/description: Daemonset which ensures that the Kubelet on the
      nodes where the pods are scheduled can access configmap eip-shadow-ranges in
      namespace cilium.
  name: eip-shadow-ranges-e70e8
spec:
  selector:
    matchLabels:
      name: eip-shadow-ranges-e70e8
  template:
    metadata:
      labels:
        name: eip-shadow-ranges-e70e8
    spec:
      containers:
        - command:
            - /bin/sh
            - -c
            - 'trap : TERM INT; sleep infinity & wait'
          image: docker.io/bitnami/kubectl:1.29.4@sha256:f3cee231ead7d61434b7f418b6d10e1b43ff0d33dca43b341bcf3088fcaaa769
          imagePullPolicy: IfNotPresent
          name: sleep
          volumeMounts: (2)
            - mountPath: /data/eip-shadow-ranges
              name: shadow-ranges
      nodeSelector:
        node-role.kubernetes.io/infra: '' (3)
      volumes: (2)
        - configMap:
            name: eip-shadow-ranges
          name: shadow-ranges
1 | The contents of the ConfigMap are generated in the format that the systemd unit managed by component openshift4-nodes expects. |
2 | The DaemonSet mounts the eip-shadow-ranges ConfigMap as a volume. |
3 | The DaemonSet is scheduled using the same node selector that's used for the IsovalentEgressGatewayPolicy resources. |
l2_announcements

This section allows users to configure the Cilium L2 Announcements / L2 Aware LB feature.

The current implementation (and therefore the examples shown here) has only been tested with Cilium EE. Please refer to the example policy in the upstream documentation for Cilium OSS.
l2_announcements.enabled

type: boolean
default:

This parameter allows users to set all the configurations necessary to enable the L2 announcement policy feature.

It's important to adjust the client rate limit when using this feature, due to increased API usage. See Sizing client rate limit for sizing guidelines.

Kube-proxy replacement mode must be enabled.
l2_announcements.policies

type: object
default:

This parameter allows users to deploy CiliumL2AnnouncementPolicy resources. Each key-value pair in the parameter is converted to a CiliumL2AnnouncementPolicy resource. Entries can be removed by setting the value to null.

See the upstream documentation for further explanation.
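A sketch of a policy entry, assuming the same metadata/spec entry structure as egress_gateway.policies; the field values follow the upstream CiliumL2AnnouncementPolicy spec and are illustrative:

```yaml
parameters:
  cilium:
    l2_announcements:
      policies:
        # hypothetical policy, announces matching services on non-control-plane nodes
        announce-blue:
          spec:
            serviceSelector:
              matchLabels:
                color: blue
            nodeSelector:
              matchExpressions:
                - key: node-role.kubernetes.io/control-plane
                  operator: DoesNotExist
            interfaces:
              - ^eth[0-9]+
            externalIPs: true
            loadBalancerIPs: true
```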
bgp

This section allows users to configure the Cilium BGP control plane.

bgp.peerings

type: object
default:

This parameter allows users to configure CiliumBGPPeeringPolicy resources. This resource is used to configure peers for the BGP control plane.

The component expects the contents of this parameter to be key-value pairs where the value is another object with field virtualRouters and optional fields nodeSelector and spec.

The component generates a CiliumBGPPeeringPolicy for each entry in the parameter. The key of the entry is used as the name of the resulting resource. The values of fields virtualRouters and nodeSelector are processed and used as the base values for fields spec.virtualRouters and spec.nodeSelector in the resulting resource. The value of field spec is merged into spec of the resource.

Field virtualRouters in the parameter entry value is expected to be an object. The key of each entry in this field is expected to be a peer address (in CIDR notation, such as 192.0.2.2/32) and the value of each entry is expected to be a valid entry for spec.virtualRouters. The key is written into field peerAddress of the value to construct the final entry for spec.virtualRouters.

See the upstream documentation for the full set of supported fields.

Make sure to check the upstream documentation for the version of Cilium that you're running. The BGP control plane feature is relatively new and sometimes has significant changes between Cilium minor versions.
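A sketch of a peering entry (the ASN and peer address are illustrative; the virtualRouters value fields follow the upstream CiliumBGPPeeringPolicy spec):

```yaml
parameters:
  cilium:
    bgp:
      peerings:
        infra-peers:
          nodeSelector:
            matchLabels:
              node-role.kubernetes.io/infra: ''
          virtualRouters:
            # the key is the peer address, written into field peerAddress
            192.0.2.2/32:
              localASN: 64512
              exportPodCIDR: true
```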
bgp.loadbalancer_ip_pools

type: object
default:

This parameter allows users to configure CiliumLoadBalancerIPPool resources. This resource is used to configure IP pools which Cilium can use to allocate IPs for services with type=LoadBalancer.

The component expects the contents of this parameter to be key-value pairs where the value is another object with field blocks, and optional fields serviceSelector and spec.

The component generates a CiliumLoadBalancerIPPool for each entry in the parameter. The key of the entry is used as the name of the resulting resource. The values of fields blocks and serviceSelector are processed and used as the base values for fields spec.blocks (or spec.cidrs in Cilium <= 1.14) and spec.serviceSelector. The value of field spec is merged into spec of the resource.

The component expects field blocks to be an object whose values are suitable entries for spec.blocks (or spec.cidrs) of the resulting resource. The keys of the object aren't used by the component and are only present to allow users to make IP pool configurations more reusable.
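A sketch of an IP pool entry (the CIDR and labels are illustrative):

```yaml
parameters:
  cilium:
    bgp:
      loadbalancer_ip_pools:
        pool-blue:
          blocks:
            # the key block-1 isn't used by the component, it only aids reusability
            block-1:
              cidr: 192.0.2.0/27
          serviceSelector:
            matchLabels:
              color: blue
```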
See the upstream documentation for the full set of supported fields.
Make sure to check the upstream documentation for the version of Cilium that you’re running. The LoadBalancer IP address management (LB IPAM) feature is under active development and sometimes has significant changes between Cilium minor versions. |