Parameters
The parent key for all of the following parameters is cilium. The namespace is cilium and currently not configurable.
install_method

type | string
default |
possible values | helm, olm
The installation method for Cilium. olm uses the OpenShift Operator Lifecycle Manager (OLM). OLM installation is required for OpenShift clusters. The Cilium OLM operator is a thin wrapper around Helm; because of this, the Helm values are used for the OLM configuration too.
           | Helm | OLM
Opensource | ✅   | ✅
Enterprise | ✅   | ✅
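For example, to install Cilium via OLM on an OpenShift cluster, a minimal configuration could look as follows (a sketch; olm is the installation method described above):

parameters:
  cilium:
    install_method: olm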
release

type | string
default |
possible values | opensource, enterprise
Two versions of Cilium exist: the open-source version and the enterprise version. See Upgrade Cilium OSS to Cilium Enterprise (OpenShift 4) for upgrading from the OSS version to the enterprise version.
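A minimal sketch of selecting the enterprise version, assuming the accepted values are opensource and enterprise as suggested by the support matrix above:

parameters:
  cilium:
    release: enterprise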
kubernetes_version

type | string
default |
The Kubernetes version to provide to helm template when rendering the Helm chart. This parameter has no effect when install_method is set to olm.
Set this parameter to ${dynamic_facts:kubernetesVersion:major}.${dynamic_facts:kubernetesVersion:minor} to use the cluster's reported Kubernetes version when rendering the Helm chart.
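For example (a sketch using the dynamic fact reference quoted above):

parameters:
  cilium:
    kubernetes_version: ${dynamic_facts:kubernetesVersion:major}.${dynamic_facts:kubernetesVersion:minor}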
charts.cilium.source

type | string
default |

The Helm repository from which to download the cilium Helm chart.
charts.cilium-enterprise.source

type | string
default |

The chart repository URL of the cilium-enterprise Helm chart.
Users must provide the chart repository URL themselves in their Project Syn global or tenant configuration. The component default is an invalid string (<CILIUM-ENTERPRISE-CHART-REPO-URL>) instead of ~ to make the Kapitan error message somewhat useful when the user hasn't reconfigured the chart repository.
charts.cilium-enterprise.version

type | string
default |

The version to use for the cilium-enterprise Helm chart.
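A sketch of providing the enterprise chart repository and version in the configuration hierarchy; the URL and version shown here are placeholders, not real values:

parameters:
  cilium:
    charts:
      cilium-enterprise:
        source: https://charts.example.com/cilium-enterprise  # placeholder URL
        version: 1.15.6                                       # placeholder version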
olm.source

type | object
default |

The source for the OLM manifests. The component selects the opensource or enterprise field based on the value of component parameter release.
The component doesn't provide the URL of the Cilium Enterprise OLM manifests .tar.gz archive. Users must provide the URL themselves in their Project Syn configuration hierarchy. The component default is an invalid string (<CILIUM-ENTERPRISE-OLM-MANIFESTS-TARGZ-URL>) instead of ~ to make the Kapitan error message somewhat useful when the user hasn't reconfigured the manifest source.
Example
parameters:
cilium:
olm:
source:
enterprise: https://cilium-ee.example.com/downloads/v${cilium:olm:version}/cilium-ee-${cilium:olm:full_version}.tar.gz (1)
(1) The example configuration uses Reclass references to construct URL parts containing the desired version. The component explicitly provides separate parameters for the OLM minor version and patchlevel.
olm.full_version

type | string
default |

The complete version of the OLM release to download. By default, the component constructs the value for this parameter from parameters olm.version and olm.patchlevel.
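For illustration, a sketch of how full_version could be composed from the separate version parameters with Reclass references (the version numbers are placeholders, and the exact composition shown here is an assumption):

parameters:
  cilium:
    olm:
      version: '1.15'   # placeholder OLM minor version
      patchlevel: '3'   # placeholder patchlevel
      full_version: ${cilium:olm:version}.${cilium:olm:patchlevel}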
olm.log_level

type | string
default |

Zap log level for the OLM operator.
cilium_helm_values

type | object
default |

The configuration values of the underlying Cilium Helm chart. See the open-source Cilium documentation for supported values.
The component will pre-process certain Helm values to allow users to more gracefully upgrade to newer Cilium versions which remove deprecated Helm values.
On OpenShift 4, the component will deploy a Patch which controls whether OpenShift deploys kube-proxy, based on the configured kube-proxy replacement Helm value.
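For example, a sketch of passing a couple of common Helm values through this parameter (the values shown are illustrative; see the Cilium documentation for the full list):

parameters:
  cilium:
    cilium_helm_values:
      hubble:
        enabled: true          # enable the Hubble observability layer
      kubeProxyReplacement: 'true'  # value format depends on the Cilium version in use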
hubble_enterprise_helm_values

type | object
default |

The configuration values for the Hubble Enterprise Helm chart. See the Isovalent Cilium Enterprise documentation for supported values.
hubble_ui_helm_values

type | object
default |

The configuration values for the Hubble UI Helm chart. See the Isovalent Cilium Enterprise documentation for supported values.
egress_gateway

This section allows users to configure the Cilium EgressGatewayPolicy feature.
When deploying Cilium OSS, the component will generate CiliumEgressGatewayPolicy resources. When deploying Cilium EE, the component will generate IsovalentEgressGatewayPolicy resources.
The current implementation (and therefore the examples shown here) has only been tested with Cilium EE. Please refer to the example policy in the upstream documentation for Cilium OSS.
egress_gateway.enabled

type | boolean
default |
This parameter allows users to set all the configurations necessary to enable the egress gateway policy feature through a single parameter.
The parameter sets the following Helm values:
egressGateway:
enabled: true
bpf:
masquerade: true
l7Proxy: false
Notably, the L7 proxy feature is disabled by default when egress gateway policies are enabled. This is recommended by the Cilium documentation; see also the upstream documentation. Additionally, BPF masquerading can't be disabled when the egress gateway feature is enabled.
For Cilium EE, the component by default also uses the value of Helm value egressGateway.enabled for Helm value enterprise.egressGatewayHA.enabled. It's possible to override this by explicitly setting egressGateway.enabled=false and enterprise.egressGatewayHA.enabled=true in the component's cilium_helm_values.
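A sketch of the override described above, keeping only the HA egress gateway feature enabled on Cilium EE:

parameters:
  cilium:
    cilium_helm_values:
      egressGateway:
        enabled: false
      enterprise:
        egressGatewayHA:
          enabled: true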
egress_gateway.policies

type | object
default |

This parameter allows users to deploy CiliumEgressGatewayPolicy resources. When deploying Cilium EE, the parameter will generate IsovalentEgressGatewayPolicy resources instead.
Each key-value pair in the parameter is converted to a CiliumEgressGatewayPolicy (or IsovalentEgressGatewayPolicy) resource. Entries can be removed by setting the value to null.
Example

The examples are written for Cilium EE's IsovalentEgressGatewayPolicy resources.
egress_gateway:
policies:
all-example:
metadata:
annotations:
syn.tools/description: |
Route all egress traffic from example-namespace through
203.0.113.100.
spec:
destinationCIDRs:
- 0.0.0.0/0
egressGroups:
- nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: ""
egressIP: 203.0.113.100
selectors:
- podSelector:
matchLabels:
io.kubernetes.pod.namespace: example-namespace
removed: null
The component configuration shown above is rendered as follows by the component:
apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
annotations:
syn.tools/description: |
Route all egress traffic from example-namespace through
203.0.113.100.
labels:
name: all-example-namespace
name: all-example-namespace
spec:
destinationCIDRs:
- 0.0.0.0/0
egressGroups:
- egressIP: 203.0.113.100
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: ''
selectors:
- podSelector:
matchLabels:
io.kubernetes.pod.namespace: example-namespace
egress_gateway.generate_shadow_ranges_configmap

type | boolean
default |

When this parameter is set to true, the component deploys a ConfigMap which is suitable for the systemd unit that creates dummy interfaces managed by component openshift4-nodes. Additionally, the component deploys one or more DaemonSets (depending on the contents of egress_gateway.egress_ip_ranges) to ensure the Kubelets on all nodes where egress interfaces need to be created can access the ConfigMap.
See also the documentation for parameter egressInterfaces in openshift4-nodes.
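Enabling the feature is a single toggle (the generated ConfigMap and DaemonSet are shown in the Shadow ranges section below):

parameters:
  cilium:
    egress_gateway:
      generate_shadow_ranges_configmap: true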
egress_gateway.shadow_ranges_daemonset_node_selector

type | object
default |

This parameter can be set when the DaemonSet that mounts the shadow ranges ConfigMap (see parameter generate_shadow_ranges_configmap) should run on a larger set of nodes than the ones indicated by each egress_ip_ranges entry's node_selector. The contents of this parameter are used as-is for the DaemonSet's spec.template.spec.nodeSelector.
An example configuration where this parameter is useful is when only a subset of nodes in an OpenShift machine config pool are designated egress nodes with an associated shadow range. In this case, we must ensure that all nodes in the machine config pool can read the shadow ranges ConfigMap, but at the same time we must ensure that policies generated via egress_ip_ranges.<group>.namespace_egress_ips only select the nodes that have a shadow range assigned.
When setting this parameter, it's the user's responsibility to ensure that the provided DaemonSet node selector selects all nodes that are designated egress nodes.
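For illustration, a sketch that runs the DaemonSet on all nodes of an assumed infra machine config pool (the label is a placeholder for whatever selects the whole pool):

parameters:
  cilium:
    egress_gateway:
      shadow_ranges_daemonset_node_selector:
        node-role.kubernetes.io/infra: ''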
egress_gateway.self_service_namespace_ips

type | boolean
default |

If this parameter is set to true, the component deploys an Espejote managed resource which enables users to configure egress IPs for namespaces by annotating namespaces with cilium.syn.tools/egress-ip=<desired egress IP>.
The managed resource dynamically creates, updates and deletes IsovalentEgressGatewayPolicy resources based on the presence, contents or absence of the cilium.syn.tools/egress-ip annotation. Additionally, the managed resource updates the policy resources if the static configuration (which is inherited from the component's egress_ip_ranges parameter) changes.
The managed resource uses the same logic to generate IsovalentEgressGatewayPolicy resources that the component uses for parameter egress_gateway.egress_ip_ranges.namespace_egress_ips. See Policy generation for details on how the base policy generation works.
In contrast to the component, the managed resource does a reverse lookup of the egress range based on the egress IP that's requested via the annotation. If this reverse lookup returns no range or multiple ranges, the managed resource doesn't create a policy but instead only updates annotation cilium.syn.tools/egress-ip-status with an error message indicating the error that was encountered.
The managed resource sets an owner reference pointing to the Namespace on the policy so that the policy is deleted when the namespace is deleted. Additionally, the managed resource deletes the policy when the user removes the cilium.syn.tools/egress-ip annotation.
If you want to switch a cluster which currently has namespace egress IPs managed by the component to the self-service approach, you can follow the how-to.
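For illustration, a namespace requesting an egress IP via the annotation (the namespace name and IP are placeholders; the IP must fall into exactly one configured egress range):

apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace
  annotations:
    cilium.syn.tools/egress-ip: 192.0.2.33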
egress_gateway.egress_ip_ranges

type | object
default |

This parameter allows users to configure CiliumEgressGatewayPolicy (or IsovalentEgressGatewayPolicy) resources which assign a single egress IP to a namespace according to the design selected in Floating egress IPs with Cilium on cloudscale.
Each entry in the parameter is intended to describe a group of dummy interfaces that can be used in CiliumEgressGatewayPolicy (or IsovalentEgressGatewayPolicy) resources. The component expects that each value is an object with fields egress_range, node_selector, namespace_egress_ips, and shadow_ranges.
Field shadow_ranges is optional; see the section on shadow ranges for more details.
Field namespace_egress_ips is optional for use cases where only the shadow ranges mechanism is required.
Prerequisites

The component expects that the key for each entry matches the prefix of the dummy interface names that are assigned the shadow IPs which map to the egress IP range defined in egress_range. To expand, the component expects that each node matching the selector in node_selector has a dummy interface named <prefix>_<i> for i ∈ [0, n), where n is the number of IPs contained in the specified egress range.
Additionally, the component expects that the network environment of the cluster ensures that all traffic which originates from the IPs assigned to the dummy interfaces on each node is mapped to the IPs in the range given in egress_range. The details of the mapping are left to the operator of the cluster's network environment, but the component expects that traffic that originates from the IPs assigned to the same dummy interface on different nodes is mapped to a single egress IP.
We recommend that cluster operators allocate a shadow egress IP range of the same size as the egress IP range specified in field egress_range for each egress node. For example, for egress range 192.0.2.32-192.0.2.63, a cluster operator could select shadow IP ranges 198.51.100.32-198.51.100.63 and 198.51.100.64-198.51.100.95 for two egress nodes. In this case, the operator would need to ensure that traffic originating from each shadow IP range is mapped to the egress range. One option to realize this mapping are iptables rules on the network's default gateway. This approach assumes that the default gateway has suitable routes to ensure that traffic to the shadow IP ranges reaches the correct nodes.
Policy generation

The component will generate one CiliumEgressGatewayPolicy (or IsovalentEgressGatewayPolicy) for each key-value pair in field namespace_egress_ips for each egress range.
The compilation will abort with an error if the same namespace appears in multiple egress range definitions.
The component doesn't enforce that different egress ranges are non-overlapping.
The component expects that keys in field namespace_egress_ips are namespace names. Additionally, the component expects that values in that field are IPs in the defined egress IP range.
The component allows users to assign the same egress IP to multiple namespaces.
The component expects that the value of egress_range has format 192.0.2.32-192.0.2.63. If the range isn't given in the expected format, or if the component detects that the given range is empty (for example if the first IP is larger than the last IP), compilation is aborted with an error. Additionally, the component also aborts compilation with an error if an egress IP that's assigned to a namespace is outside the specified egress range.
Finally, entries in egress_ip_ranges and namespace_egress_ips can be removed by setting the value to null.
Currently, the policies generated by the component have an ArgoCD sync option set. In the future, the component policy generation may utilize the self-service implementation to generate the policies.
Shadow ranges
The component optionally deploys a ConfigMap which contains a map from node names to egress shadow IP ranges.
This ConfigMap is only deployed when parameter egress_gateway.generate_shadow_ranges_configmap is set to true.
If the shadow ranges ConfigMap is enabled, the component will check that each shadow range is the same size as the egress IP range it's associated with. If there are any mismatches, compilation aborts with an error.
If that parameter is set to true, the component will extract the node names and associated shadow IP ranges from field shadow_ranges in each entry of egress_ip_ranges. Ranges which don't define this field (or where it's set to null) are skipped.
This ConfigMap is intended to be consumed by a component running on the node, such as the systemd unit deployed by openshift4-nodes; see the documentation for parameter egressInterfaces in openshift4-nodes.
To ensure the Kubelet Kubeconfig on the nodes can be used to access this ConfigMap, the component also deploys a DaemonSet which mounts the ConfigMap for each unique node selector in all egress IP ranges. The DaemonSet pods establish a link between the node and the ConfigMap, which is required for the Kubernetes Node authorization mode to allow the Kubelet to access the ConfigMap.
Example
egress_ip_ranges:
egress_a:
egress_range: '192.0.2.32 - 192.0.2.63'
node_selector:
node-role.kubernetes.io/infra: ''
namespace_egress_ips:
foo: 192.0.2.32
bar: 192.0.2.61
shadow_ranges:
infra-foo: 198.51.100.32 - 198.51.100.63
infra-bar: 198.51.100.64 - 198.51.100.95
The configuration shown above results in the two IsovalentEgressGatewayPolicy resources shown below.
apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
annotations: (1)
cilium.syn.tools/description: Generated policy to assign egress IP 192.0.2.61
in egress range "egress_a" (192.0.2.32 - 192.0.2.63) to namespace bar.
cilium.syn.tools/egress-ip: 192.0.2.61
cilium.syn.tools/egress-range: 192.0.2.32 - 192.0.2.63
cilium.syn.tools/interface-prefix: egress_a
cilium.syn.tools/source-namespace: bar
labels:
name: bar
name: bar (2)
spec:
destinationCIDRs:
- 0.0.0.0/0 (3)
egressGroups:
- interface: egress_a_29 (4)
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: '' (5)
selectors:
- podSelector:
matchLabels:
io.kubernetes.pod.namespace: bar (6)
---
apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
annotations: (1)
cilium.syn.tools/description: Generated policy to assign egress IP 192.0.2.32
in egress range "egress_a" (192.0.2.32 - 192.0.2.63) to namespace foo.
cilium.syn.tools/egress-ip: 192.0.2.32
cilium.syn.tools/egress-range: 192.0.2.32 - 192.0.2.63
cilium.syn.tools/interface-prefix: egress_a
cilium.syn.tools/source-namespace: foo
labels:
name: foo
name: foo (2)
spec:
destinationCIDRs:
- 0.0.0.0/0 (3)
egressGroups:
- interface: egress_a_0 (4)
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: '' (5)
selectors:
- podSelector:
matchLabels:
io.kubernetes.pod.namespace: foo (6)
(1) The component adds a number of annotations that contain the input data that was used to generate the policy. Additionally, the component adds an annotation that gives a human-readable description of the policy.
(2) The namespace name is used as the name for the IsovalentEgressGatewayPolicy resource.
(3) The policy always masquerades all traffic from the namespace with the defined egress IP.
(4) The policy uses the key in egress_ip_ranges and the offset of the selected egress IP into the range to generate the name of the dummy interface that's expected to be assigned the shadow IPs that map to the egress IP.
(5) The policy uses the node selector that's defined in the parameter.
(6) The policy always matches all traffic originating in the specified namespace.
Additionally, if parameter egress_gateway.generate_shadow_ranges_configmap is set to true, the ConfigMap and DaemonSet shown below are created.
apiVersion: v1
kind: ConfigMap
metadata:
name: eip-shadow-ranges
data: (1)
infra-foo: '{"egress_a":{"base":"198.51.100","from":"32","to":"63"}}'
infra-bar: '{"egress_a":{"base":"198.51.100","from":"64","to":"95"}}'
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations:
cilium.syn.tools/description: Daemonset which ensures that the Kubelet on the
nodes where the pods are scheduled can access configmap eip-shadow-ranges in
namespace cilium.
name: eip-shadow-ranges-e70e8
spec:
selector:
matchLabels:
name: eip-shadow-ranges-e70e8
template:
metadata:
labels:
name: eip-shadow-ranges-e70e8
spec:
containers:
- command:
- /bin/sh
- -c
- 'trap : TERM INT; sleep infinity & wait'
image: docker.io/bitnami/kubectl:1.29.4@sha256:f3cee231ead7d61434b7f418b6d10e1b43ff0d33dca43b341bcf3088fcaaa769
imagePullPolicy: IfNotPresent
name: sleep
volumeMounts: (2)
- mountPath: /data/eip-shadow-ranges
name: shadow-ranges
nodeSelector:
node-role.kubernetes.io/infra: '' (3)
volumes: (2)
- configMap:
name: eip-shadow-ranges
name: shadow-ranges
(1) The contents of the ConfigMap are generated in the format that the systemd unit managed by component openshift4-nodes expects.
(2) The DaemonSet mounts the eip-shadow-ranges ConfigMap as a volume.
(3) The DaemonSet is scheduled using the same node selector that's used for the IsovalentEgressGatewayPolicy resources.
l2_announcements

This section allows users to configure the Cilium L2 Announcements / L2 Aware LB feature.
The current implementation (and therefore the examples shown here) has only been tested with Cilium EE. Please refer to the example policy in the upstream documentation for Cilium OSS.
l2_announcements.enabled

type | boolean
default |

This parameter allows users to set all the configurations necessary to enable the L2 announcement policy feature.
It's important to adjust the client rate limit when using this feature, due to increased API usage. See Sizing client rate limit for sizing guidelines.
Kube-proxy replacement mode must be enabled.
l2_announcements.policies

type | object
default |

This parameter allows users to deploy CiliumL2AnnouncementPolicy resources. Each key-value pair in the parameter is converted to a CiliumL2AnnouncementPolicy resource. Entries can be removed by setting the value to null.
See the upstream documentation for further explanation.
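A sketch of a policy entry, assuming entries follow the same metadata/spec shape as egress_gateway.policies and using the upstream CiliumL2AnnouncementPolicy fields loadBalancerIPs, interfaces and nodeSelector (interface name and node label are placeholders):

parameters:
  cilium:
    l2_announcements:
      policies:
        announce-lb-ips:
          spec:
            loadBalancerIPs: true
            interfaces:
              - eth0
            nodeSelector:
              matchLabels:
                node-role.kubernetes.io/worker: ''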
bgp

This section allows users to configure the Cilium BGP control plane.

bgp.enabled

type | bool
default |

Whether to enable the BGP control plane feature in Cilium.
See the upstream BGP control plane documentation for details on the architecture and the individual custom resources mentioned in this section.
bgp.cluster_configs

type | object
default |

This parameter allows users to configure CiliumBGPClusterConfig resources. The CiliumBGPClusterConfig resource defines the global BGP configuration and holds the list of peers. The resource references one or more CiliumBGPPeerConfig resources (see parameter bgp.peer_configs) which define the details of each peering connection.
The component creates one CiliumBGPClusterConfig for each entry in this parameter. The key is used as metadata.name of the resulting object. The component supports fields bgpInstances, nodeSelector, metadata and spec in the parameter values.
Fields metadata and spec are added to the resulting CiliumBGPClusterConfig as is. Field nodeSelector is used as the value for spec.nodeSelector. Field bgpInstances is expected to be an object and is transformed into entries for spec.bgpInstances of the custom resource. The keys of the bgpInstances field are used for field name of the entries for spec.bgpInstances. The field peers of each value of the bgpInstances field is expected to be an object and is transformed into a list. Again, the keys of field peers are used as values for field name of the resulting peers entries. Field spec is merged over the partial object generated from fields nodeSelector and bgpInstances.
The component validates that CiliumBGPClusterConfig resources only reference CiliumBGPPeerConfig resources which are defined in parameter bgp.peer_configs.
See the upstream documentation for all available configuration options.
Example

The following component parameters snippet is transformed into the CiliumBGPClusterConfig shown below.
bgp:
cluster_configs:
lb-services:
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: ''
bgpInstances:
lbs:
localASN: 64512
peers:
peer1:
peerAddress: 192.0.2.2
peerASN: 64512
peerConfigRef:
name: lb-services
peer2:
peerAddress: 192.0.2.3
peerASN: 64512
peerConfigRef:
name: lb-services
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
annotations:
argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
labels:
name: lb-services
name: lb-services
spec:
bgpInstances:
- localASN: 64512
name: lbs
peers:
- name: peer1
peerASN: 64512
peerAddress: 192.0.2.2
peerConfigRef:
name: lb-services
- name: peer2
peerASN: 64512
peerAddress: 192.0.2.3
peerConfigRef:
name: lb-services
nodeSelector:
matchLabels:
node-role.kubernetes.io/infra: ''
bgp.peer_configs

type | object
default |

This parameter allows users to configure CiliumBGPPeerConfig resources. The CiliumBGPPeerConfig resource defines a set of BGP peering connection details.
The component creates one CiliumBGPPeerConfig for each entry in this parameter. The key is used as metadata.name of the resulting object. The component supports fields families, metadata and spec for each entry. Fields metadata and spec are added to the resulting CiliumBGPPeerConfig as is. Field families is expected to be an object and its values are used as is for field spec.families. Field spec is merged over the partial object created from field families.
The component validates that CiliumBGPPeerConfig resources only reference BGP auth secret Secret resources which are defined in parameter bgp.auth_secrets.
See the upstream documentation for details.
Example
bgp:
peer_configs:
lb-services:
spec:
gracefulRestart:
enabled: true
restartTimeSeconds: 30
families:
unicast-v4:
afi: ipv4
safi: unicast
advertisements:
matchLabels:
cilium.syn.tools/advertise: bgp
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
annotations:
argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
labels:
name: lb-services
name: lb-services
spec:
families:
- advertisements:
matchLabels:
cilium.syn.tools/advertise: bgp
afi: ipv4
safi: unicast
gracefulRestart:
enabled: true
restartTimeSeconds: 30
bgp.auth_secrets

type | object
default |

This parameter allows users to configure Secret resources to use as BGP auth secrets. The component creates one Secret for each entry in this parameter. The key is used as metadata.name of the resulting object. The component expects that each value in this parameter is a valid partial Secret resource. The component validates that each secret has field password, which is required by the BGP control plane for auth secrets.
By default, the component configures Cilium to look for BGP auth secrets in namespace cilium. The namespace can be changed by setting Helm value bgpControlPlane.secretsNamespace.name. The component sets metadata.namespace to the configured bgpControlPlane.secretsNamespace.name for secrets defined through this parameter.
See the upstream documentation for details.
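A sketch of defining an auth secret; the secret name and password are placeholders (in practice the password should come from a secret store rather than being written in plain text):

parameters:
  cilium:
    bgp:
      auth_secrets:
        peer-auth:
          stringData:
            password: not-a-real-password  # placeholder, use a secret reference in practice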
bgp.node_config_overrides

type | object
default |

This parameter allows users to configure CiliumBGPNodeConfigOverride resources. The CiliumBGPNodeConfigOverride resource can be used to inject per-node customizations into the generated CiliumBGPNodeConfig resources.
The component creates one CiliumBGPNodeConfigOverride for each entry in this parameter. The key is used as metadata.name of the resulting object. The component expects that each value in this parameter is a valid partial CiliumBGPNodeConfigOverride resource and doesn't apply any processing.
See the upstream documentation for details.
The resource name must match the Kubernetes node name of the node for which the configuration is intended.
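A sketch of a per-node override; the node name, instance name and router ID are placeholders, and the instance name must match an instance defined in bgp.cluster_configs:

parameters:
  cilium:
    bgp:
      node_config_overrides:
        infra-node-1:          # must match the Kubernetes node name
          spec:
            bgpInstances:
              - name: lbs      # must match an instance in the cluster config
                routerID: 192.0.2.10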
bgp.advertisements

type | object
default |

This parameter allows users to configure CiliumBGPAdvertisement resources. The component creates one CiliumBGPAdvertisement for each entry in this parameter. The key is used as metadata.name of the resulting object.
The component supports fields metadata and advertisements for each entry of this parameter. Field metadata is added to the resulting resource as is. Field advertisements is expected to be an object, and the values of the object are used for field spec.advertisements in the resulting resource without further processing.
See the upstream documentation for details.
Example
bgp:
advertisements:
lb-services:
metadata:
labels:
cilium.syn.tools/advertise: bgp
advertisements:
lb-ips:
advertisementType: Service
service:
addresses:
- LoadBalancerIP
selector:
matchLabels:
syn.tools/load-balancer-class: cilium
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
annotations:
argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
labels:
cilium.syn.tools/advertise: bgp
name: lb-services
name: lb-services
spec:
advertisements:
- advertisementType: Service
selector:
matchLabels:
syn.tools/load-balancer-class: cilium
service:
addresses:
- LoadBalancerIP
bgp.loadbalancer_ip_pools

type | object
default |

This parameter allows users to configure CiliumLoadBalancerIPPool resources. This resource is used to configure IP pools from which Cilium can allocate IPs for services with type=LoadBalancer.
The component expects the contents of this parameter to be key-value pairs where the value is another object with field blocks, and optional fields serviceSelector and spec. The component generates a CiliumLoadBalancerIPPool for each entry in the parameter. The key of the entry is used as the name of the resulting resource. The values of fields blocks and serviceSelector are processed and used as the base values for fields spec.blocks (or spec.cidrs in Cilium <= 1.14) and spec.serviceSelector. The value of field spec is merged into spec of the resource.
The component expects field blocks to be an object whose values are suitable entries for spec.blocks (or spec.cidrs) of the resulting resource. The keys of the object aren't used by the component and are only present to allow users to make IP pool configurations more reusable.
See the upstream documentation for the full set of supported fields.
Make sure to check the upstream documentation for the version of Cilium that you're running. The LoadBalancer IP address management (LB IPAM) feature is under active development and sometimes has significant changes between Cilium minor versions.
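A sketch of an IP pool entry following the field layout described above; the pool name, block key, CIDR and service label are placeholders:

parameters:
  cilium:
    bgp:
      loadbalancer_ip_pools:
        lb-pool:
          blocks:
            pool-v4:           # key only serves reusability, not used in the output
              cidr: 198.51.100.0/27
          serviceSelector:
            matchLabels:
              syn.tools/load-balancer-class: cilium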
alerts

This section allows users to configure alerts for Cilium. The component expects that an externally-managed Prometheus stack is running on the target cluster.
For OpenShift 4, the component makes use of the component libraries provided by component openshift4-monitoring. On other distributions, the component expects that a component library prom.libsonnet is available, for example via component prometheus.
alerts.ignoreNames

type | list
default |

Alerts which shouldn't be deployed. The list supports removal of entries by prefixing them with ~.
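For example (the alert names are assumptions for illustration; the ~ prefix removes an entry that was added elsewhere in the hierarchy):

parameters:
  cilium:
    alerts:
      ignoreNames:
        - CiliumAgentUnreachableHealthEndpoints  # assumed alert name, don't deploy this alert
        - ~CiliumAgentMapPressure                # remove this entry from the list again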
alerts.patches

type | dict
default |

Patches for alerts managed by the component. The component expects that keys in this object match the name of an alert managed through the component. The value of each entry is expected to be a valid partial Prometheus rule definition.
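A sketch of patching an alert's duration and severity; the alert name is an assumption for illustration:

parameters:
  cilium:
    alerts:
      patches:
        CiliumAgentMapPressure:   # assumed alert name
          for: 15m
          labels:
            severity: warning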
alerts.additionalRules

type | dict
default |

This parameter allows users to configure additional Prometheus recording and alerting rules. The component expects that keys of this object are prefixed with either alert: or record: and will use these prefixes to create alerting or recording rules. The component will take the suffix of the key as the alerting or recording rule name and will set field alert or record of the rule to the suffix. The value of each entry will be used as the base Prometheus rule in which the alert or record field will be injected.
Parameters alerts.ignoreNames and alerts.patches are also applied to alerts defined through this parameter.
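For example, a sketch adding one recording rule and one alerting rule based on the Cilium drop-count metric (the threshold and labels are illustrative):

parameters:
  cilium:
    alerts:
      additionalRules:
        'record:cilium:drop_count:rate5m':
          expr: sum(rate(cilium_drop_count_total[5m]))
        'alert:CiliumHighDropRate':
          expr: sum(rate(cilium_drop_count_total[5m])) > 100
          for: 10m
          labels:
            severity: warning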