diff --git a/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/_category_.json b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/_category_.json new file mode 100644 index 0000000000..b465995e2d --- /dev/null +++ b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/_category_.json @@ -0,0 +1,3 @@ +{ + "position": 35 +} diff --git a/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/architecture.md b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/architecture.md new file mode 100644 index 0000000000..9ac1d7118e --- /dev/null +++ b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/architecture.md @@ -0,0 +1,87 @@ +--- +sidebar_label: "Architecture" +title: "Architecture" +description: "Learn about the architecture used by Amazon EKS Hybrid Nodes when deployed with Palette." +hide_table_of_contents: false +tags: ["public cloud", "aws", "architecture", "eks hybrid nodes"] +sidebar_position: 0 +--- + +Palette enables importing and managing Amazon Elastic Kubernetes Service (Amazon EKS) Hybrid Nodes. Review the following architectural highlights when using Palette to manage your Amazon EKS Hybrid Nodes. + +- Create hybrid node pools comprised of edge hosts that have been registered with Palette. + +- Define cluster profiles to collectively manage your hybrid nodes. Each cluster profile for a hybrid node pool includes the following layers: + + - Configure Operating System (OS) layers to reference the provider image built during the [EdgeForge](../../../edge/edgeforge-workflow/edgeforge-workflow.md) workflow and optional customizations for your hybrid nodes. + + - Configure Kubernetes layers to specify the correct [Amazon EKS Distro](https://distro.eks.amazonaws.com/) version to be installed on hybrid nodes. + + - Configure Container Network Interface (CNI) layers to handle networking for hybrid nodes using affinity rules. + +## Hybrid Network Connectivity + +Network connectivity between your on-premises environments, edge locations, and Amazon EKS cluster must be established before Palette can manage your Amazon EKS Hybrid Nodes. + +In the following example, an Amazon EKS cluster is connected to an on-premises datacenter and edge location through an AWS Transit Gateway and AWS Site-to-Site Virtual Private Network (VPN). + +![Example Amazon EKS Hybrid Nodes network architecture](/aws_eks-hybrid_architecture_eks-hybrid-architecture.webp) + +Hybrid network connectivity can be configured using a variety of methods, such as: + +- [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) +- [AWS Direct Connect](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html) +- [Software VPN](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/software-vpn.html) + +Refer to [Network-to-Amazon VPC connectivity options](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/network-to-amazon-vpc-connectivity-options.html) for guidance on all available options. + +### Configuration Requirements + +If using a VPN or AWS Direct Connect between AWS and your on-premises and edge environments, review the following configuration requirements: + +- Configure your EKS cluster with static placement so that your nodes are assigned to specific Availability Zones (AZs) and fixed networking configurations. This is required because of the following reasons: + + - The VPN configuration must be set up with predefined routes and IP ranges. 
  - Node placement cannot change dynamically across AZs.
  - Network paths need to remain consistent for VPN tunnels to function properly.

- Traffic routing in the Amazon EKS VPC requires the following mapping for hybrid nodes:

  - Route table entries mapping hybrid node CIDR ranges to VPN endpoint. For example, **Hybrid Node CIDR 10.200.0.0/16 → VPN endpoint 172.16.0.1**.
  - Route table entries mapping hybrid pod CIDR ranges to VPN endpoint. For example, **Hybrid Pod CIDR 192.168.0.0/16 → VPN endpoint 172.16.0.1**.
  - For AWS Direct Connect, map traffic to the appropriate private subnet CIDR. For example, **Both CIDRs → Private subnet 172.16.1.0/24**.

- For AWS VPNs, configure two static routes for each of the following connections:

  - Hybrid Node CIDR block. For example, **Hybrid Node CIDR 10.200.0.0/16 → VPN endpoint 172.16.0.1**.
  - Hybrid Pod CIDR block. For example, **Hybrid Pod CIDR 192.168.0.0/16 → VPN endpoint 172.16.0.1**.

  :::tip

  If you use a Virtual Private Gateway or Transit Gateway, you can enable route propagation to automatically populate your VPC route tables. Verify your route tables after propagation.

  :::

- For on-premises and edge VPNs, set up IPsec Phase 1 tunnels with Phase 2 security associations for the following:

  - Hybrid Node subnet to EKS VPC CIDR. For example, **Hybrid Node Subnet 10.201.0.0/16 → EKS VPC CIDR 10.100.0.0/16**.
  - Hybrid Node pod CIDR to EKS VPC CIDR. For example, **Hybrid Node Pod CIDR 192.168.0.0/16 → EKS VPC CIDR 10.100.0.0/16**.

  You should also enable either BGP routing or static routes to ensure proper traffic flow through VPN tunnels.

  For non-primary VPN servers, either advertise routes via BGP or configure static routes to redirect EKS VPC CIDR traffic appropriately.

## Operating System Compatibility

Palette supports the same operating systems as AWS. Refer to [Prepare operating system for hybrid nodes](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-os.html) for guidance.

Edge hosts require additional dependencies, which you can build into provider images using the [EdgeForge Workflow](../../../edge/edgeforge-workflow/edgeforge-workflow.md).

## Authentication and Access Management

Palette supports the following authentication methods for your hybrid nodes:

- [AWS Systems Manager (SSM)](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html)
- [AWS Identity and Access Management (IAM) Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html)

Refer to [Prepare credentials for hybrid nodes](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-creds.html) for guidance on how to set up credentials for your hybrid nodes.

diff --git a/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/configure-cni-hybrid-nodes.md b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/configure-cni-hybrid-nodes.md
new file mode 100644
index 0000000000..f63f19a10e
--- /dev/null
+++ b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/configure-cni-hybrid-nodes.md
@@ -0,0 +1,10 @@
---
sidebar_label: "Configure CNI for Hybrid Nodes"
title: "Configure CNI for Hybrid Nodes"
description: "Learn how to prepare your container network interface for Amazon EKS Hybrid Nodes."
+hide_table_of_contents: false +tags: ["public cloud", "aws", "eks hybrid nodes"] +sidebar_position: 10 +--- + +TBA diff --git a/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md new file mode 100644 index 0000000000..6c30cebb73 --- /dev/null +++ b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/eks-hybrid-nodes.md @@ -0,0 +1,24 @@ +--- +sidebar_label: "EKS Hybrid Nodes" +title: "EKS Hybrid Nodes" +description: "Learn about how Palette supports deployment of Amazon EKS Hybrid Nodes." +tags: ["public cloud", "aws", "eks hybrid nodes"] +hide_table_of_contents: false +--- + +Palette supports management of [Amazon EKS Hybrid Nodes](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-overview.html). Using Palette to manage EKS Hybrid Nodes provides the following benefits: + +- Easier Setup: Palette automates the process of setting up and connecting on-premises devices (bare metal or virtual machines) to EKS clusters, reducing the need for manual configuration. + +- Centralized Management: Palette offers a single interface to manage the lifecycle of EKS Hybrid Nodes, ensuring consistent control over Kubernetes resources across on-premises, edge, and AWS environments. + +- Improved Recovery Options: Palette supports managing multiple edge sites under a single control plane, making it easier to move workloads to other sites in case of hardware or site failures. + +## Resources + +To learn more about Palette and Amazon EKS Hybrid Nodes, check out the following resources: + +- [Architecture](./architecture.md) +- [Import EKS Cluster and Enable Hybrid Mode](./import-eks-cluster-enable-hybrid-mode.md) +- [Configure CNI for Hybrid Nodes](./configure-cni-hybrid-nodes.md) +- [Bringing Amazon EKS Hybrid Nodes to life with Palette](https://www.spectrocloud.com/blog/eks-hybrid-nodes) diff --git a/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/import-eks-cluster-enable-hybrid-mode.md b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/import-eks-cluster-enable-hybrid-mode.md new file mode 100644 index 0000000000..cad3df9f33 --- /dev/null +++ b/docs/docs-content/clusters/public-cloud/aws/eks-hybrid-nodes/import-eks-cluster-enable-hybrid-mode.md @@ -0,0 +1,302 @@ +--- +sidebar_label: "Import EKS Cluster and Enable Hybrid Mode" +title: "Import EKS Cluster and Enable Hybrid Mode" +description: "Learn how to import Amazon EKS clusters and enable hybrid mode with Palette." +hide_table_of_contents: false +tags: ["public cloud", "aws", "eks hybrid nodes"] +sidebar_position: 20 +--- + +This section guides you on how to import an existing Amazon EKS cluster and enable hybrid mode. + +## Limitations + +The following limitations apply after importing an existing Amazon EKS cluster. + +- You cannot use full cluster profiles. You are limited to using add-on profiles when deploying cluster profiles to imported Amazon EKS clusters with Hybrid Nodes. +- You cannot download the cluster's kubeconfig file from Palette. You must use AWS to access the kubeconfig file. + +## Prerequisites + +- Access to an AWS cloud account. + +- Palette integration with AWS account. Review [Add AWS Account](../add-aws-accounts.md) for guidance. + +- Kubernetes version 1.19.X or later on the cluster you are importing. + +- Ensure your environment has network access to Palette SaaS. Refer to [Palette IP Addresses](../../../../architecture/palette-public-ips.md) for guidance. 
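  For example, one quick way to sanity-check outbound connectivity from the environment that will host your hybrid nodes is to confirm that the Palette SaaS endpoint responds over HTTPS. This is only a sketch using `curl` and the default `api.spectrocloud.com` endpoint referenced later in this guide; adjust the hostname if you use a dedicated or self-hosted Palette instance.

  ```shell hideClipboard
  # Expect an HTTP response (for example, a 2xx or a redirect) if outbound HTTPS access is available.
  curl --head https://api.spectrocloud.com
  ```
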
+ +- Ensure [kubectl](https://kubernetes.io/docs/tasks/tools/) is installed and available in your local workstation. + +- Access to your Amazon EKS cluster through kubectl. + + - To access your cluster with kubectl, you can use the AWS CLI's built-in authentication capabilities. If you are using a custom OIDC provider, you will need to configure your kubeconfig to use your OIDC provider. + + Refer to the [Access Imported Cluster with Kubectl](#access-imported-cluster-with-kubectl) section for more information. + +- All networking prerequisites completed for hybrid nodes. Refer to [Prepare networking for hybrid nodes](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-networking.html) for guidance. You will need to provide the following details during the import steps: + + - The VPC CIDR range where your EKS cluster resides. + - The CIDR ranges for hybrid nodes in other networks that need to connect to this cluster. + - The CIDR ranges for hybrid pods in other networks that need to connect to this cluster. + +- All credentials prerequisites completed for hybrid nodes. Refer to [Prepare credentials for hybrid nodes](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-creds.html) for guidance. + + If you are using IAM Roles Anywhere, you will need to provide the following details during the import steps: + + - The ARN of the IAM role that the hybrid node _directly assumes_ to access AWS services and perform operations. + - The ARN of the IAM Roles Anywhere profile that defines which roles can be assumed. + - The ARN of the IAM role specified in the IAM Roles Anywhere profile that defines the permissions and policies for roles that can be assumed by hybrid nodes. + - The ARN of the IAM Roles Anywhere trust anchor that contains your certificate authority configuration. + - The PEM-encoded certificate of your Certificate Authority (CA) that serves as the trust anchor. This certificate is used by IAM Roles Anywhere to validate the authenticity of the client certificates presented by your hybrid nodes. + - The private key corresponding to your CA's certificate, used to sign client certificates. + +- An existing Amazon EKS cluster that has been enabled for hybrid nodes. Refer to [Create an Amazon EKS cluster with hybrid nodes](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-cluster-create.html) for guidance. + +- A Hybrid Nodes IAM Role with the required Kubernetes permissions to join your Amazon EKS cluster. Refer to [Prepare cluster access for hybrid nodes](https://docs.aws.amazon.com/eks/latest/userguide/hybrid-nodes-cluster-prep.html) for guidance. + +## Import Amazon EKS Cluster and Enable Hybrid Mode + +1. Log in to [Palette](https://spectrocloud.com). + +2. Navigate to the left **Main Menu** and select **Clusters**. + +3. Click on **Add New Cluster** and select **Import Cluster** in the pop-up box. + +4. Fill out the required information: + + | **Field** | **Description** | + | --- | --- | + | Cluster Name | The name of the cluster you want to import. Ensure it matches the cluster name in AWS. | + | Cloud Type | The cloud infrastructure type. Select **Amazon** from the **drop-down Menu**. | + | Host Path (Optional) | Specify the Certificate Authority (CA) file path for the cluster. This is the location on the physical host machine where the CA file is stored. | + | Container Mount Path (Optional) | Specify the container mount path where the CA file is mounted in the container. | + | Import mode | The Palette permission mode for the imported cluster. 
Select **Full-permission mode**. | + +5. Click on **Create & Open Cluster Instance** to start the import. + +6. You will be redirected to the cluster details page. A set of instructions with commands is displayed on the right + side of the screen. + + Click the clipboard icon to copy the kubectl command to your clipboard. + + ![A view of the cluster details page with the sidebar instructions box](/aws_eks-hybrid_import-eks-cluster-enable-hybrid-mode_cluster-import-procedure.webp) + +7. Open a terminal session and issue the kubectl command from your clipboard against the Amazon EKS cluster you want to import. The command is customized for your cluster as it contains the assigned cluster ID. + + :::tip + + Refer to [Access Amazon EKS Cluster with Kubectl](#access-amazon-eks-cluster-with-kubectl) for guidance on setting up kubectl to access your cluster. + + ::: + + Example command. + + ```shell hideClipboard + kubectl apply --filename https://api.spectrocloud.com/v1/spectroclusters/123abc456def789ghi012jkl/import/manifest + ``` + + Example output. + + ```shell hideClipboard + namespace/cluster-674f4e3ad861bb1009be468a created + serviceaccount/cluster-management-agent created + clusterrolebinding.rbac.authorization.k8s.io/cma-lite-cluster-admin-binding configured + configmap/log-parser-config created + configmap/upgrade-info-9dtbh55tkc created + configmap/version-info-g9kt4cdkg4 created + priorityclass.scheduling.k8s.io/spectro-cluster-critical configured + deployment.apps/cluster-management-agent-lite created + configmap/cluster-info created + configmap/hubble-info created + secret/hubble-secrets created + customresourcedefinition.apiextensions.k8s.io/awscloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/azurecloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/clusterprofiles.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/customcloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/edgecloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/edgenativecloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/gcpcloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/libvirtcloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/maascloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/nestedcloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/openstackcloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/packs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/spectroclusters.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/tencentcloudconfigs.cluster.spectrocloud.com configured + customresourcedefinition.apiextensions.k8s.io/vspherecloudconfigs.cluster.spectrocloud.com configured + serviceaccount/palette-manager created + clusterrolebinding.rbac.authorization.k8s.io/palette-lite-cluster-admin-binding configured + configmap/palette-version-info-6ktgm4hgdh created + priorityclass.scheduling.k8s.io/palette-spectro-cluster-critical configured + deployment.apps/palette-lite-controller-manager created + job.batch/palette-import-presetup-job created + ``` + +8. 
Wait for your cluster health to transition to **Healthy**. This will take a few minutes after running the agent install command in the previous step.

9. Once your cluster displays as **Healthy**, click **Settings** in the top-right corner to reveal the **drop-down Menu**, and select **Cluster Settings**.

10. Select **Hybrid Configuration** from the **Settings Menu**, and click on the **Enable hybrid mode** toggle.

    ![Enable hybrid mode in Hybrid Configuration - Cluster Settings Menu](/aws_eks-hybrid_import-eks-cluster-enable-hybrid-mode_enable-hybrid-mode.webp)

11. Fill out the required information.

    | **Field** | **Description** | **Example** |
    | --- | --- | --- |
    | VPC CIDR | The VPC CIDR range where your EKS cluster resides. | `10.100.0.0/16` |
    | Remote Node CIDRs | The CIDR ranges for hybrid nodes in other networks that need to connect to this cluster. | `10.200.0.0/16`, `10.201.0.0/16` |
    | Remote Pod CIDRs | The CIDR ranges for hybrid pods in other networks that need to connect to this cluster. | `192.168.0.0/16` |
    | Access Management | The Access Management mode for the Amazon EKS Hybrid Nodes. Select either **Systems Manager** or **IAM Roles Anywhere**. | |

12. If you select **IAM Roles Anywhere**, you must provide the following additional details.

    | **Field** | **Description** | **Example** |
    | --- | --- | --- |
    | Assume Role ARN | The ARN of the IAM role that the hybrid node _directly assumes_ to access AWS services and perform operations. | `arn:aws:iam::123456789012:role/AmazonEKSHybridNodesRole` |
    | Profile ARN | The ARN of the IAM Roles Anywhere profile that defines which roles can be assumed. | `arn:aws:rolesanywhere:us-east-2:123456789012:profile/abcd1234-5678-90ef-ghij-klmnopqrstuv` |
    | Role ARN | The ARN of the IAM role specified in the IAM Roles Anywhere profile that defines the permissions and policies for roles that can be assumed by hybrid nodes. | `arn:aws:iam::123456789012:role/IRAHybridNodesRole` |
    | Trust Anchor ARN | The ARN of the IAM Roles Anywhere trust anchor that contains your certificate authority configuration. | `arn:aws:rolesanywhere:us-east-2:123456789012:trust-anchor/abcd1234-5678-90ef-ghij-klmnopqrstuv` |
    | Root CA Certificate | The PEM-encoded certificate of your Certificate Authority (CA) that serves as the trust anchor. This certificate is used by IAM Roles Anywhere to validate the authenticity of the client certificates presented by your hybrid nodes. | `-----BEGIN CERTIFICATE-----\nMIIEBjCCAu6gAwIBAgIJAMc0ZzaSUK51MA0...\n-----END CERTIFICATE-----` |
    | Root CA Private Key | The private key corresponding to your CA's certificate, used to sign client certificates. | `-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA4RFvKSZ+XVmRE3URXU...\n-----END RSA PRIVATE KEY-----` |

13. Click **Save Changes** when complete.

14. From the left **Main Menu**, select **Profiles**.

15. In **Profile Layers**, click **Import Cluster Profile**.

16. Copy the contents of the following JSON into your clipboard, and paste it into the slide panel that opens on the right.
+ + ```json + {"metadata":{"name":"cilium","description":"","labels":{}},"spec":{"version":"1.0.0","template":{"type":"add-on","cloudType":"all","packs":[{"name":"cni-cilium-oss","type":"spectro","layer":"addon","version":"1.16.0","tag":"1.16.0","values":"# spectrocloud.com/enabled-presets: IPAM mode:ipam-clusterpool,Cilium Operator:op-multi-node,Kube-proxy replacement:ebpf-kubeproxy,VMO Compatibility:vmo-disable,VMO - Bridge interface:vmo-auto,Loadbalancer mode:lb-no-xdp,MicroK8s:microk8s-disable,Hybrid:eks-hybrid-nodes-affinity\npack:\n content:\n images:\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/certgen:v0.2.0\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/cilium:v1.16.0\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/cilium-envoy:v1.29.7-39a2a56bbd5b3a591f69dbca51d3e30ef97e0e51\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/clustermesh-apiserver:v1.16.0\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/hubble-relay:v1.16.0\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/hubble-ui:v0.13.1\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/hubble-ui-backend:v0.13.1\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/operator:v1.16.0\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/operator-generic:v1.16.0\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/operator-aws:v1.16.0\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/operator-azure:v1.16.0\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/startup-script:c54c7edeab7fde4da68e59acd319ab24af242c3f\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/spire-agent:1.9.6\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/spire-server:1.9.6\n - image: gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/busybox:1.36.1\n\n charts:\n - repo: https://helm.cilium.io/\n name: cilium\n version: 1.16.0\n #The namespace (on the target cluster) to install this chart\n #When not found, a new namespace will be created\n namespace: kube-system\n\ncharts:\n cilium:\n # @schema\n # type: [null, string]\n # @schema\n # -- upgradeCompatibility helps users upgrading to ensure that the configMap for\n # Cilium will not change critical values to ensure continued operation\n # This flag is not required for new installations.\n # For example: '1.7', '1.8', '1.9'\n upgradeCompatibility: null\n debug:\n # -- Enable debug logging\n enabled: false\n # @schema\n # type: [null, string]\n # @schema\n # -- Configure verbosity levels for debug logging\n # This option is used to enable debug messages for operations related to such\n # sub-system such as (e.g. kvstore, envoy, datapath or policy), and flow is\n # for enabling debug messages emitted per request, message and connection.\n # Multiple values can be set via a space-separated string (e.g. 
\"datapath envoy\").\n #\n # Applicable values:\n # - flow\n # - kvstore\n # - envoy\n # - datapath\n # - policy\n verbose: ~\n rbac:\n # -- Enable creation of Resource-Based Access Control configuration.\n create: true\n # -- Configure image pull secrets for pulling container images\n imagePullSecrets: []\n # - name: \"image-pull-secret\"\n\n # -- (string) Kubernetes config path\n # @default -- `\"~/.kube/config\"`\n kubeConfigPath: \"\"\n # -- (string) Kubernetes service host - use \"auto\" for automatic lookup from the cluster-info ConfigMap (kubeadm-based clusters only)\n k8sServiceHost: \"\"\n # @schema\n # type: [string, integer]\n # @schema\n # -- (string) Kubernetes service port\n k8sServicePort: \"\"\n # -- Configure the client side rate limit for the agent and operator\n #\n # If the amount of requests to the Kubernetes API server exceeds the configured\n # rate limit, the agent and operator will start to throttle requests by delaying\n # them until there is budget or the request times out.\n k8sClientRateLimit:\n # @schema\n # type: [null, integer]\n # @schema\n # -- (int) The sustained request rate in requests per second.\n # @default -- 5 for k8s up to 1.26. 10 for k8s version 1.27+\n qps: # @schema\n\n # type: [null, integer]\n # @schema\n # -- (int) The burst request rate in requests per second.\n # The rate limiter will allow short bursts with a higher rate.\n # @default -- 10 for k8s up to 1.26. 20 for k8s version 1.27+\n burst:\n cluster:\n # -- Name of the cluster. Only required for Cluster Mesh and mutual authentication with SPIRE.\n # It must respect the following constraints:\n # * It must contain at most 32 characters;\n # * It must begin and end with a lower case alphanumeric character;\n # * It may contain lower case alphanumeric characters and dashes between.\n # The \"default\" name cannot be used if the Cluster ID is different from 0.\n name: default\n # -- (int) Unique ID of the cluster. Must be unique across all connected\n # clusters and in the range of 1 to 255. Only required for Cluster Mesh,\n # may be 0 if Cluster Mesh is not used.\n id: 0\n # -- Define serviceAccount names for components.\n # @default -- Component's fully qualified name.\n serviceAccounts:\n cilium:\n create: true\n name: cilium\n automount: true\n annotations: {}\n nodeinit:\n create: true\n # -- Enabled is temporary until https://github.com/cilium/cilium-cli/issues/1396 is implemented.\n # Cilium CLI doesn't create the SAs for node-init, thus the workaround. Helm is not affected by\n # this issue. Name and automount can be configured, if enabled is set to true.\n # Otherwise, they are ignored. 
Enabled can be removed once the issue is fixed.\n # Cilium-nodeinit DS must also be fixed.\n enabled: false\n name: cilium-nodeinit\n automount: true\n annotations: {}\n envoy:\n create: true\n name: cilium-envoy\n automount: true\n annotations: {}\n operator:\n create: true\n name: cilium-operator\n automount: true\n annotations: {}\n preflight:\n create: true\n name: cilium-pre-flight\n automount: true\n annotations: {}\n relay:\n create: true\n name: hubble-relay\n automount: false\n annotations: {}\n ui:\n create: true\n name: hubble-ui\n automount: true\n annotations: {}\n clustermeshApiserver:\n create: true\n name: clustermesh-apiserver\n automount: true\n annotations: {}\n # -- Clustermeshcertgen is used if clustermesh.apiserver.tls.auto.method=cronJob\n clustermeshcertgen:\n create: true\n name: clustermesh-apiserver-generate-certs\n automount: true\n annotations: {}\n # -- Hubblecertgen is used if hubble.tls.auto.method=cronJob\n hubblecertgen:\n create: true\n name: hubble-generate-certs\n automount: true\n annotations: {}\n # -- Configure termination grace period for cilium-agent DaemonSet.\n terminationGracePeriodSeconds: 1\n # -- Install the cilium agent resources.\n agent: true\n # -- Agent container name.\n name: cilium\n # -- Roll out cilium agent pods automatically when configmap is updated.\n rollOutCiliumPods: false\n # -- Agent container image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/cilium\"\n tag: \"v1.16.0\"\n pullPolicy: \"IfNotPresent\"\n # cilium-digest\n digest: \"\"\n useDigest: false\n # -- Affinity for cilium-agent.\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - topologyKey: kubernetes.io/hostname\n labelSelector:\n matchLabels:\n k8s-app: cilium\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: eks.amazonaws.com/compute-type\n operator: In\n values:\n - hybrid\n # -- Node selector for cilium-agent.\n nodeSelector:\n kubernetes.io/os: linux\n # -- Node tolerations for agent scheduling to nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations:\n - operator: Exists\n # - key: \"key\"\n # operator: \"Equal|Exists\"\n # value: \"value\"\n # effect: \"NoSchedule|PreferNoSchedule|NoExecute(1.6 only)\"\n # -- The priority class to use for cilium-agent.\n priorityClassName: \"\"\n # -- DNS policy for Cilium agent pods.\n # Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy\n dnsPolicy: \"\"\n # -- Additional containers added to the cilium DaemonSet.\n extraContainers: []\n # -- Additional initContainers added to the cilium Daemonset.\n extraInitContainers: []\n # -- Additional agent container arguments.\n extraArgs: []\n # -- Additional agent container environment variables.\n extraEnv: []\n # -- Additional agent hostPath mounts.\n extraHostPathMounts: []\n # - name: host-mnt-data\n # mountPath: /host/mnt/data\n # hostPath: /mnt/data\n # hostPathType: Directory\n # readOnly: true\n # mountPropagation: HostToContainer\n\n # -- Additional agent volumes.\n extraVolumes: []\n # -- Additional agent volumeMounts.\n extraVolumeMounts: []\n # -- extraConfig allows you to specify additional configuration parameters to be\n # included in the cilium-config configmap.\n extraConfig: {}\n # my-config-a: \"1234\"\n # my-config-b: |-\n # test 1\n # test 2\n # test 3\n\n # 
-- Annotations to be added to all top-level cilium-agent objects (resources under templates/cilium-agent)\n annotations: {}\n # -- Security Context for cilium-agent pods.\n podSecurityContext:\n # -- AppArmorProfile options for the `cilium-agent` and init containers\n appArmorProfile:\n type: \"Unconfined\"\n # -- Annotations to be added to agent pods\n podAnnotations: {}\n # -- Labels to be added to agent pods\n podLabels: {}\n # -- Agent resource limits \u0026 requests\n # ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n resources: {}\n # limits:\n # cpu: 4000m\n # memory: 4Gi\n # requests:\n # cpu: 100m\n # memory: 512Mi\n\n # -- resources \u0026 limits for the agent init containers\n initResources: {}\n securityContext:\n # -- User to run the pod with\n # runAsUser: 0\n # -- Run the pod with elevated privileges\n privileged: false\n # -- SELinux options for the `cilium-agent` and init containers\n seLinuxOptions:\n level: 's0'\n # Running with spc_t since we have removed the privileged mode.\n # Users can change it to a different type as long as they have the\n # type available on the system.\n type: 'spc_t'\n capabilities:\n # -- Capabilities for the `cilium-agent` container\n ciliumAgent:\n # Use to set socket permission\n - CHOWN\n # Used to terminate envoy child process\n - KILL\n # Used since cilium modifies routing tables, etc...\n - NET_ADMIN\n # Used since cilium creates raw sockets, etc...\n - NET_RAW\n # Used since cilium monitor uses mmap\n - IPC_LOCK\n # Used in iptables. Consider removing once we are iptables-free\n - SYS_MODULE\n # Needed to switch network namespaces (used for health endpoint, socket-LB).\n # We need it for now but might not need it for \u003e= 5.11 specially\n # for the 'SYS_RESOURCE'.\n # In \u003e= 5.8 there's already BPF and PERMON capabilities\n - SYS_ADMIN\n # Could be an alternative for the SYS_ADMIN for the RLIMIT_NPROC\n - SYS_RESOURCE\n # Both PERFMON and BPF requires kernel 5.8, container runtime\n # cri-o \u003e= v1.22.0 or containerd \u003e= v1.5.0.\n # If available, SYS_ADMIN can be removed.\n #- PERFMON\n #- BPF\n # Allow discretionary access control (e.g. required for package installation)\n - DAC_OVERRIDE\n # Allow to set Access Control Lists (ACLs) on arbitrary files (e.g. required for package installation)\n - FOWNER\n # Allow to execute program that changes GID (e.g. required for package installation)\n - SETGID\n # Allow to execute program that changes UID (e.g. required for package installation)\n - SETUID\n # -- Capabilities for the `mount-cgroup` init container\n mountCgroup:\n # Only used for 'mount' cgroup\n - SYS_ADMIN\n # Used for nsenter\n - SYS_CHROOT\n - SYS_PTRACE\n # -- capabilities for the `apply-sysctl-overwrites` init container\n applySysctlOverwrites:\n # Required in order to access host's /etc/sysctl.d dir\n - SYS_ADMIN\n # Used for nsenter\n - SYS_CHROOT\n - SYS_PTRACE\n # -- Capabilities for the `clean-cilium-state` init container\n cleanCiliumState:\n # Most of the capabilities here are the same ones used in the\n # cilium-agent's container because this container can be used to\n # uninstall all Cilium resources, and therefore it is likely that\n # will need the same capabilities.\n # Used since cilium modifies routing tables, etc...\n - NET_ADMIN\n # Used in iptables. 
Consider removing once we are iptables-free\n - SYS_MODULE\n # We need it for now but might not need it for \u003e= 5.11 specially\n # for the 'SYS_RESOURCE'.\n # In \u003e= 5.8 there's already BPF and PERMON capabilities\n - SYS_ADMIN\n # Could be an alternative for the SYS_ADMIN for the RLIMIT_NPROC\n - SYS_RESOURCE\n # Both PERFMON and BPF requires kernel 5.8, container runtime\n # cri-o \u003e= v1.22.0 or containerd \u003e= v1.5.0.\n # If available, SYS_ADMIN can be removed.\n #- PERFMON\n #- BPF\n # -- Cilium agent update strategy\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n # @schema\n # type: [integer, string]\n # @schema\n maxUnavailable: 2\n # Configuration Values for cilium-agent\n aksbyocni:\n # -- Enable AKS BYOCNI integration.\n # Note that this is incompatible with AKS clusters not created in BYOCNI mode:\n # use Azure integration (`azure.enabled`) instead.\n enabled: false\n # @schema\n # type: [boolean, string]\n # @schema\n # -- Enable installation of PodCIDR routes between worker\n # nodes if worker nodes share a common L2 network segment.\n autoDirectNodeRoutes: false\n # -- Enable skipping of PodCIDR routes between worker\n # nodes if the worker nodes are in a different L2 network segment.\n directRoutingSkipUnreachable: false\n # -- Annotate k8s node upon initialization with Cilium's metadata.\n annotateK8sNode: false\n azure:\n # -- Enable Azure integration.\n # Note that this is incompatible with AKS clusters created in BYOCNI mode: use\n # AKS BYOCNI integration (`aksbyocni.enabled`) instead.\n enabled: false\n # usePrimaryAddress: false\n # resourceGroup: group1\n # subscriptionID: 00000000-0000-0000-0000-000000000000\n # tenantID: 00000000-0000-0000-0000-000000000000\n # clientID: 00000000-0000-0000-0000-000000000000\n # clientSecret: 00000000-0000-0000-0000-000000000000\n # userAssignedIdentityID: 00000000-0000-0000-0000-000000000000\n alibabacloud:\n # -- Enable AlibabaCloud ENI integration\n enabled: false\n # -- Enable bandwidth manager to optimize TCP and UDP workloads and allow\n # for rate-limiting traffic from individual Pods with EDT (Earliest Departure\n # Time) through the \"kubernetes.io/egress-bandwidth\" Pod annotation.\n bandwidthManager:\n # -- Enable bandwidth manager infrastructure (also prerequirement for BBR)\n enabled: false\n # -- Activate BBR TCP congestion control for Pods\n bbr: false\n # -- Configure standalone NAT46/NAT64 gateway\n nat46x64Gateway:\n # -- Enable RFC8215-prefixed translation\n enabled: false\n # -- EnableHighScaleIPcache enables the special ipcache mode for high scale\n # clusters. 
The ipcache content will be reduced to the strict minimum and\n # traffic will be encapsulated to carry security identities.\n highScaleIPcache:\n # -- Enable the high scale mode for the ipcache.\n enabled: false\n # -- Configure L2 announcements\n l2announcements:\n # -- Enable L2 announcements\n enabled: false\n # -- If a lease is not renewed for X duration, the current leader is considered dead, a new leader is picked\n # leaseDuration: 15s\n # -- The interval at which the leader will renew the lease\n # leaseRenewDeadline: 5s\n # -- The timeout between retries if renewal fails\n # leaseRetryPeriod: 2s\n # -- Configure L2 pod announcements\n l2podAnnouncements:\n # -- Enable L2 pod announcements\n enabled: false\n # -- Interface used for sending Gratuitous ARP pod announcements\n interface: \"eth0\"\n # -- Configure BGP\n bgp:\n # -- Enable BGP support inside Cilium; embeds a new ConfigMap for BGP inside\n # cilium-agent and cilium-operator\n enabled: false\n announce:\n # -- Enable allocation and announcement of service LoadBalancer IPs\n loadbalancerIP: false\n # -- Enable announcement of node pod CIDR\n podCIDR: false\n # -- This feature set enables virtual BGP routers to be created via\n # CiliumBGPPeeringPolicy CRDs.\n bgpControlPlane:\n # -- Enables the BGP control plane.\n enabled: false\n # -- SecretsNamespace is the namespace which BGP support will retrieve secrets from.\n secretsNamespace:\n # -- Create secrets namespace for BGP secrets.\n create: false\n # -- The name of the secret namespace to which Cilium agents are given read access\n name: kube-system\n pmtuDiscovery:\n # -- Enable path MTU discovery to send ICMP fragmentation-needed replies to\n # the client.\n enabled: false\n bpf:\n autoMount:\n # -- Enable automatic mount of BPF filesystem\n # When `autoMount` is enabled, the BPF filesystem is mounted at\n # `bpf.root` path on the underlying host and inside the cilium agent pod.\n # If users disable `autoMount`, it's expected that users have mounted\n # bpffs filesystem at the specified `bpf.root` volume, and then the\n # volume will be mounted inside the cilium agent pod at the same path.\n enabled: true\n # -- Configure the mount point for the BPF filesystem\n root: /sys/fs/bpf\n # -- Enables pre-allocation of eBPF map values. 
This increases\n # memory usage but can reduce latency.\n preallocateMaps: false\n # @schema\n # type: [null, integer]\n # @schema\n # -- (int) Configure the maximum number of entries in auth map.\n # @default -- `524288`\n authMapMax: ~\n # @schema\n # type: [null, integer]\n # @schema\n # -- (int) Configure the maximum number of entries in the TCP connection tracking\n # table.\n # @default -- `524288`\n ctTcpMax: ~\n # @schema\n # type: [null, integer]\n # @schema\n # -- (int) Configure the maximum number of entries for the non-TCP connection\n # tracking table.\n # @default -- `262144`\n ctAnyMax: ~\n # -- Control events generated by the Cilium datapath exposed to Cilium monitor and Hubble.\n events:\n drop:\n # -- Enable drop events.\n enabled: true\n policyVerdict:\n # -- Enable policy verdict events.\n enabled: true\n trace:\n # -- Enable trace events.\n enabled: true\n # @schema\n # type: [null, integer]\n # @schema\n # -- Configure the maximum number of service entries in the\n # load balancer maps.\n lbMapMax: 65536\n # @schema\n # type: [null, integer]\n # @schema\n # -- (int) Configure the maximum number of entries for the NAT table.\n # @default -- `524288`\n natMax: ~\n # @schema\n # type: [null, integer]\n # @schema\n # -- (int) Configure the maximum number of entries for the neighbor table.\n # @default -- `524288`\n neighMax: ~\n # @schema\n # type: [null, integer]\n # @schema\n # @default -- `16384`\n # -- (int) Configures the maximum number of entries for the node table.\n nodeMapMax: ~\n # -- Configure the maximum number of entries in endpoint policy map (per endpoint).\n # @schema\n # type: [null, integer]\n # @schema\n policyMapMax: 16384\n # @schema\n # type: [null, number]\n # @schema\n # -- (float64) Configure auto-sizing for all BPF maps based on available memory.\n # ref: https://docs.cilium.io/en/stable/network/ebpf/maps/\n # @default -- `0.0025`\n mapDynamicSizeRatio: ~\n # -- Configure the level of aggregation for monitor notifications.\n # Valid options are none, low, medium, maximum.\n monitorAggregation: medium\n # -- Configure the typical time between monitor notifications for\n # active connections.\n monitorInterval: \"5s\"\n # -- Configure which TCP flags trigger notifications when seen for the\n # first time in a connection.\n monitorFlags: \"all\"\n # -- Allow cluster external access to ClusterIP services.\n lbExternalClusterIP: false\n # @schema\n # type: [null, boolean]\n # @schema\n # -- (bool) Enable native IP masquerade support in eBPF\n # @default -- `false`\n masquerade: ~\n # @schema\n # type: [null, boolean]\n # @schema\n # -- (bool) Configure whether direct routing mode should route traffic via\n # host stack (true) or directly and more efficiently out of BPF (false) if\n # the kernel supports it. 
The latter has the implication that it will also\n # bypass netfilter in the host namespace.\n # @default -- `false`\n hostLegacyRouting: ~\n # @schema\n # type: [null, boolean]\n # @schema\n # -- (bool) Configure the eBPF-based TPROXY to reduce reliance on iptables rules\n # for implementing Layer 7 policy.\n # @default -- `false`\n tproxy: ~\n # @schema\n # type: [null, array]\n # @schema\n # -- (list) Configure explicitly allowed VLAN id's for bpf logic bypass.\n # [0] will allow all VLAN id's without any filtering.\n # @default -- `[]`\n vlanBypass: ~\n # -- (bool) Disable ExternalIP mitigation (CVE-2020-8554)\n # @default -- `false`\n disableExternalIPMitigation: false\n # -- (bool) Attach endpoint programs using tcx instead of legacy tc hooks on\n # supported kernels.\n # @default -- `true`\n enableTCX: true\n # -- (string) Mode for Pod devices for the core datapath (veth, netkit, netkit-l2, lb-only)\n # @default -- `veth`\n datapathMode: veth\n # -- Enable BPF clock source probing for more efficient tick retrieval.\n bpfClockProbe: false\n # -- Clean all eBPF datapath state from the initContainer of the cilium-agent\n # DaemonSet.\n #\n # WARNING: Use with care!\n cleanBpfState: false\n # -- Clean all local Cilium state from the initContainer of the cilium-agent\n # DaemonSet. Implies cleanBpfState: true.\n #\n # WARNING: Use with care!\n cleanState: false\n # -- Wait for KUBE-PROXY-CANARY iptables rule to appear in \"wait-for-kube-proxy\"\n # init container before launching cilium-agent.\n # More context can be found in the commit message of below PR\n # https://github.com/cilium/cilium/pull/20123\n waitForKubeProxy: false\n cni:\n # -- Install the CNI configuration and binary files into the filesystem.\n install: true\n # -- Remove the CNI configuration and binary files on agent shutdown. Enable this\n # if you're removing Cilium from the cluster. Disable this to prevent the CNI\n # configuration file from being removed during agent upgrade, which can cause\n # nodes to go unmanageable.\n uninstall: false\n # @schema\n # type: [null, string]\n # @schema\n # -- Configure chaining on top of other CNI plugins. Possible values:\n # - none\n # - aws-cni\n # - flannel\n # - generic-veth\n # - portmap\n chainingMode: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- A CNI network name in to which the Cilium plugin should be added as a chained plugin.\n # This will cause the agent to watch for a CNI network with this network name. When it is\n # found, this will be used as the basis for Cilium's CNI configuration file. If this is\n # set, it assumes a chaining mode of generic-veth. As a special case, a chaining mode\n # of aws-cni implies a chainingTarget of aws-cni.\n chainingTarget: ~\n # -- Make Cilium take ownership over the `/etc/cni/net.d` directory on the\n # node, renaming all non-Cilium CNI configurations to `*.cilium_bak`.\n # This ensures no Pods can be scheduled using other CNI plugins during Cilium\n # agent downtime.\n exclusive: true\n # -- Configure the log file for CNI logging with retention policy of 7 days.\n # Disable CNI file logging by setting this field to empty explicitly.\n logFile: /var/run/cilium/cilium-cni.log\n # -- Skip writing of the CNI configuration. 
This can be used if\n # writing of the CNI configuration is performed by external automation.\n customConf: false\n # -- Configure the path to the CNI configuration directory on the host.\n confPath: /etc/cni/net.d\n # -- Configure the path to the CNI binary directory on the host.\n binPath: /opt/cni/bin\n # -- Specify the path to a CNI config to read from on agent start.\n # This can be useful if you want to manage your CNI\n # configuration outside of a Kubernetes environment. This parameter is\n # mutually exclusive with the 'cni.configMap' parameter. The agent will\n # write this to 05-cilium.conflist on startup.\n # readCniConf: /host/etc/cni/net.d/05-sample.conflist.input\n\n # -- When defined, configMap will mount the provided value as ConfigMap and\n # interpret the cniConf variable as CNI configuration file and write it\n # when the agent starts up\n # configMap: cni-configuration\n\n # -- Configure the key in the CNI ConfigMap to read the contents of\n # the CNI configuration from.\n configMapKey: cni-config\n # -- Configure the path to where to mount the ConfigMap inside the agent pod.\n confFileMountPath: /tmp/cni-configuration\n # -- Configure the path to where the CNI configuration directory is mounted\n # inside the agent pod.\n hostConfDirMountPath: /host/etc/cni/net.d\n # -- Specifies the resources for the cni initContainer\n resources:\n requests:\n cpu: 100m\n memory: 10Mi\n # -- Enable route MTU for pod netns when CNI chaining is used\n enableRouteMTUForCNIChaining: false\n # -- (string) Configure how frequently garbage collection should occur for the datapath\n # connection tracking table.\n # @default -- `\"0s\"`\n conntrackGCInterval: \"\"\n # -- (string) Configure the maximum frequency for the garbage collection of the\n # connection tracking table. Only affects the automatic computation for the frequency\n # and has no effect when 'conntrackGCInterval' is set. This can be set to more frequently\n # clean up unused identities created from ToFQDN policies.\n conntrackGCMaxInterval: \"\"\n # -- (string) Configure timeout in which Cilium will exit if CRDs are not available\n # @default -- `\"5m\"`\n crdWaitTimeout: \"\"\n # -- Tail call hooks for custom eBPF programs.\n customCalls:\n # -- Enable tail call hooks for custom eBPF programs.\n enabled: false\n daemon:\n # -- Configure where Cilium runtime state should be stored.\n runPath: \"/var/run/cilium\"\n # @schema\n # type: [null, string]\n # @schema\n # -- Configure a custom list of possible configuration override sources\n # The default is \"config-map:cilium-config,cilium-node-config\". For supported\n # values, see the help text for the build-config subcommand.\n # Note that this value should be a comma-separated string.\n configSources: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- allowedConfigOverrides is a list of config-map keys that can be overridden.\n # That is to say, if this value is set, config sources (excepting the first one) can\n # only override keys in this list.\n #\n # This takes precedence over blockedConfigOverrides.\n #\n # By default, all keys may be overridden. 
To disable overrides, set this to \"none\" or\n # change the configSources variable.\n allowedConfigOverrides: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- blockedConfigOverrides is a list of config-map keys that may not be overridden.\n # In other words, if any of these keys appear in a configuration source excepting the\n # first one, they will be ignored\n #\n # This is ignored if allowedConfigOverrides is set.\n #\n # By default, all keys may be overridden.\n blockedConfigOverrides: ~\n # -- Specify which network interfaces can run the eBPF datapath. This means\n # that a packet sent from a pod to a destination outside the cluster will be\n # masqueraded (to an output device IPv4 address), if the output device runs the\n # program. When not specified, probing will automatically detect devices that have\n # a non-local route. This should be used only when autodetection is not suitable.\n # devices: \"\"\n\n # -- Enables experimental support for the detection of new and removed datapath\n # devices. When devices change the eBPF datapath is reloaded and services updated.\n # If \"devices\" is set then only those devices, or devices matching a wildcard will\n # be considered.\n #\n # This option has been deprecated and is a no-op.\n enableRuntimeDeviceDetection: true\n # -- Forces the auto-detection of devices, even if specific devices are explicitly listed\n forceDeviceDetection: false\n # -- Chains to ignore when installing feeder rules.\n # disableIptablesFeederRules: \"\"\n\n # -- Limit iptables-based egress masquerading to interface selector.\n # egressMasqueradeInterfaces: \"\"\n\n # -- Enable setting identity mark for local traffic.\n # enableIdentityMark: true\n\n # -- Enable Kubernetes EndpointSlice feature in Cilium if the cluster supports it.\n # enableK8sEndpointSlice: true\n\n # -- Enable CiliumEndpointSlice feature (deprecated, please use `ciliumEndpointSlice.enabled` instead).\n enableCiliumEndpointSlice: false\n ciliumEndpointSlice:\n # -- Enable Cilium EndpointSlice feature.\n enabled: false\n # -- List of rate limit options to be used for the CiliumEndpointSlice controller.\n # Each object in the list must have the following fields:\n # nodes: Count of nodes at which to apply the rate limit.\n # limit: The sustained request rate in requests per second. The maximum rate that can be configured is 50.\n # burst: The burst request rate in requests per second. The maximum burst that can be configured is 100.\n rateLimits:\n - nodes: 0\n limit: 10\n burst: 20\n - nodes: 100\n limit: 7\n burst: 15\n - nodes: 500\n limit: 5\n burst: 10\n envoyConfig:\n # -- Enable CiliumEnvoyConfig CRD\n # CiliumEnvoyConfig CRD can also be implicitly enabled by other options.\n enabled: false\n # -- SecretsNamespace is the namespace in which envoy SDS will retrieve secrets from.\n secretsNamespace:\n # -- Create secrets namespace for CiliumEnvoyConfig CRDs.\n create: true\n # -- The name of the secret namespace to which Cilium agents are given read access.\n name: cilium-secrets\n # -- Interval in which an attempt is made to reconcile failed EnvoyConfigs. 
If the duration is zero, the retry is deactivated.\n retryInterval: 15s\n ingressController:\n # -- Enable cilium ingress controller\n # This will automatically set enable-envoy-config as well.\n enabled: false\n # -- Set cilium ingress controller to be the default ingress controller\n # This will let cilium ingress controller route entries without ingress class set\n default: false\n # -- Default ingress load balancer mode\n # Supported values: shared, dedicated\n # For granular control, use the following annotations on the ingress resource:\n # \"ingress.cilium.io/loadbalancer-mode: dedicated\" (or \"shared\").\n loadbalancerMode: dedicated\n # -- Enforce https for host having matching TLS host in Ingress.\n # Incoming traffic to http listener will return 308 http error code with respective location in header.\n enforceHttps: true\n # -- Enable proxy protocol for all Ingress listeners. Note that _only_ Proxy protocol traffic will be accepted once this is enabled.\n enableProxyProtocol: false\n # -- IngressLBAnnotations are the annotation and label prefixes, which are used to filter annotations and/or labels to propagate from Ingress to the Load Balancer service\n ingressLBAnnotationPrefixes: [ 'lbipam.cilium.io', 'nodeipam.cilium.io', 'service.beta.kubernetes.io', 'service.kubernetes.io', 'cloud.google.com' ]\n # @schema\n # type: [null, string]\n # @schema\n # -- Default secret namespace for ingresses without .spec.tls[].secretName set.\n defaultSecretNamespace: # @schema\n\n # type: [null, string]\n # @schema\n # -- Default secret name for ingresses without .spec.tls[].secretName set.\n defaultSecretName: # -- SecretsNamespace is the namespace in which envoy SDS will retrieve TLS secrets from.\n\n secretsNamespace:\n # -- Create secrets namespace for Ingress.\n create: true\n # -- Name of Ingress secret namespace.\n name: cilium-secrets\n # -- Enable secret sync, which will make sure all TLS secrets used by Ingress are synced to secretsNamespace.name.\n # If disabled, TLS secrets must be maintained externally.\n sync: true\n # -- Load-balancer service in shared mode.\n # This is a single load-balancer service for all Ingress resources.\n service:\n # -- Service name\n name: cilium-ingress\n # -- Labels to be added for the shared LB service\n labels: {}\n # -- Annotations to be added for the shared LB service\n annotations: {}\n # -- Service type for the shared LB service\n type: LoadBalancer\n # @schema\n # type: [null, integer]\n # @schema\n # -- Configure a specific nodePort for insecure HTTP traffic on the shared LB service\n insecureNodePort: ~\n # @schema\n # type: [null, integer]\n # @schema\n # -- Configure a specific nodePort for secure HTTPS traffic on the shared LB service\n secureNodePort: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- Configure a specific loadBalancerClass on the shared LB service (requires Kubernetes 1.24+)\n loadBalancerClass: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- Configure a specific loadBalancerIP on the shared LB service\n loadBalancerIP: ~\n # @schema\n # type: [null, boolean]\n # @schema\n # -- Configure if node port allocation is required for LB service\n # ref: https://kubernetes.io/docs/concepts/services-networking/service/#load-balancer-nodeport-allocation\n allocateLoadBalancerNodePorts: ~\n # -- Control how traffic from external sources is routed to the LoadBalancer Kubernetes Service for Cilium Ingress in shared mode.\n # Valid values are \"Cluster\" and \"Local\".\n # ref: 
https://kubernetes.io/docs/reference/networking/virtual-ips/#external-traffic-policy\n externalTrafficPolicy: Cluster\n # Host Network related configuration\n hostNetwork:\n # -- Configure whether the Envoy listeners should be exposed on the host network.\n enabled: false\n # -- Configure a specific port on the host network that gets used for the shared listener.\n sharedListenerPort: 8080\n # Specify the nodes where the Ingress listeners should be exposed\n nodes:\n # -- Specify the labels of the nodes where the Ingress listeners should be exposed\n #\n # matchLabels:\n # kubernetes.io/os: linux\n # kubernetes.io/hostname: kind-worker\n matchLabels: {}\n gatewayAPI:\n # -- Enable support for Gateway API in cilium\n # This will automatically set enable-envoy-config as well.\n enabled: false\n # -- Enable proxy protocol for all GatewayAPI listeners. Note that _only_ Proxy protocol traffic will be accepted once this is enabled.\n enableProxyProtocol: false\n # -- Enable Backend Protocol selection support (GEP-1911) for Gateway API via appProtocol.\n enableAppProtocol: false\n # -- Enable ALPN for all listeners configured with Gateway API. ALPN will attempt HTTP/2, then HTTP 1.1.\n # Note that this will also enable `appProtocol` support, and services that wish to use HTTP/2 will need to indicate that via their `appProtocol`.\n enableAlpn: false\n # -- The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address.\n xffNumTrustedHops: 0\n # -- Control how traffic from external sources is routed to the LoadBalancer Kubernetes Service for all Cilium GatewayAPI Gateway instances. Valid values are \"Cluster\" and \"Local\".\n # Note that this value will be ignored when `hostNetwork.enabled == true`.\n # ref: https://kubernetes.io/docs/reference/networking/virtual-ips/#external-traffic-policy\n externalTrafficPolicy: Cluster\n gatewayClass:\n # -- Enable creation of GatewayClass resource\n # The default value is 'auto' which decides according to presence of gateway.networking.k8s.io/v1/GatewayClass in the cluster.\n # Other possible values are 'true' and 'false', which will either always or never create the GatewayClass, respectively.\n create: auto\n # -- SecretsNamespace is the namespace in which envoy SDS will retrieve TLS secrets from.\n secretsNamespace:\n # -- Create secrets namespace for Gateway API.\n create: true\n # -- Name of Gateway API secret namespace.\n name: cilium-secrets\n # -- Enable secret sync, which will make sure all TLS secrets used by Ingress are synced to secretsNamespace.name.\n # If disabled, TLS secrets must be maintained externally.\n sync: true\n # Host Network related configuration\n hostNetwork:\n # -- Configure whether the Envoy listeners should be exposed on the host network.\n enabled: false\n # Specify the nodes where the Ingress listeners should be exposed\n nodes:\n # -- Specify the labels of the nodes where the Ingress listeners should be exposed\n #\n # matchLabels:\n # kubernetes.io/os: linux\n # kubernetes.io/hostname: kind-worker\n matchLabels: {}\n # -- Enables the fallback compatibility solution for when the xt_socket kernel\n # module is missing and it is needed for the datapath L7 redirection to work\n # properly. 
See documentation for details on when this can be disabled:\n # https://docs.cilium.io/en/stable/operations/system_requirements/#linux-kernel.\n enableXTSocketFallback: true\n encryption:\n # -- Enable transparent network encryption.\n enabled: false\n # -- Encryption method. Can be either ipsec or wireguard.\n type: ipsec\n # -- Enable encryption for pure node to node traffic.\n # This option is only effective when encryption.type is set to \"wireguard\".\n nodeEncryption: false\n # -- Configure the WireGuard Pod2Pod strict mode.\n strictMode:\n # -- Enable WireGuard Pod2Pod strict mode.\n enabled: false\n # -- CIDR for the WireGuard Pod2Pod strict mode.\n cidr: \"\"\n # -- Allow dynamic lookup of remote node identities.\n # This is required when tunneling is used or direct routing is used and the node CIDR and pod CIDR overlap.\n allowRemoteNodeIdentities: false\n ipsec:\n # -- Name of the key file inside the Kubernetes secret configured via secretName.\n keyFile: keys\n # -- Path to mount the secret inside the Cilium pod.\n mountPath: /etc/ipsec\n # -- Name of the Kubernetes secret containing the encryption keys.\n secretName: cilium-ipsec-keys\n # -- The interface to use for encrypted traffic.\n interface: \"\"\n # -- Enable the key watcher. If disabled, a restart of the agent will be\n # necessary on key rotations.\n keyWatcher: true\n # -- Maximum duration of the IPsec key rotation. The previous key will be\n # removed after that delay.\n keyRotationDuration: \"5m\"\n # -- Enable IPsec encrypted overlay\n encryptedOverlay: false\n wireguard:\n # -- Enables the fallback to the user-space implementation (deprecated).\n userspaceFallback: false\n # -- Controls WireGuard PersistentKeepalive option. Set 0s to disable.\n persistentKeepalive: 0s\n endpointHealthChecking:\n # -- Enable connectivity health checking between virtual endpoints.\n enabled: true\n endpointRoutes:\n # @schema\n # type: [boolean, string]\n # @schema\n # -- Enable use of per endpoint routes instead of routing via\n # the cilium_host interface.\n enabled: false\n k8sNetworkPolicy:\n # -- Enable support for K8s NetworkPolicy\n enabled: true\n eni:\n # -- Enable Elastic Network Interface (ENI) integration.\n enabled: false\n # -- Update ENI Adapter limits from the EC2 API\n updateEC2AdapterLimitViaAPI: true\n # -- Release IPs not used from the ENI\n awsReleaseExcessIPs: false\n # -- Enable ENI prefix delegation\n awsEnablePrefixDelegation: false\n # -- EC2 API endpoint to use\n ec2APIEndpoint: \"\"\n # -- Tags to apply to the newly created ENIs\n eniTags: {}\n # -- Interval for garbage collection of unattached ENIs. Set to \"0s\" to disable.\n # @default -- `\"5m\"`\n gcInterval: \"\"\n # -- Additional tags attached to ENIs created by Cilium.\n # Dangling ENIs with this tag will be garbage collected\n # @default -- `{\"io.cilium/cilium-managed\":\"true,\"io.cilium/cluster-name\":\"\u003cauto-detected\u003e\"}`\n gcTags: {}\n # -- If using IAM role for Service Accounts will not try to\n # inject identity values from cilium-aws kubernetes secret.\n # Adds annotation to service account if managed by Helm.\n # See https://github.com/aws/amazon-eks-pod-identity-webhook\n iamRole: \"\"\n # -- Filter via subnet IDs which will dictate which subnets are going to be used to create new ENIs\n # Important note: This requires that each instance has an ENI with a matching subnet attached\n # when Cilium is deployed. 
If you only want to control subnets for ENIs attached by Cilium,\n # use the CNI configuration file settings (cni.customConf) instead.\n subnetIDsFilter: []\n # -- Filter via tags (k=v) which will dictate which subnets are going to be used to create new ENIs\n # Important note: This requires that each instance has an ENI with a matching subnet attached\n # when Cilium is deployed. If you only want to control subnets for ENIs attached by Cilium,\n # use the CNI configuration file settings (cni.customConf) instead.\n subnetTagsFilter: []\n # -- Filter via AWS EC2 Instance tags (k=v) which will dictate which AWS EC2 Instances\n # are going to be used to create new ENIs\n instanceTagsFilter: []\n externalIPs:\n # -- Enable ExternalIPs service support.\n enabled: false\n # fragmentTracking enables IPv4 fragment tracking support in the datapath.\n # fragmentTracking: true\n gke:\n # -- Enable Google Kubernetes Engine integration\n enabled: false\n # -- Enable connectivity health checking.\n healthChecking: true\n # -- TCP port for the agent health API. This is not the port for cilium-health.\n healthPort: 9879\n # -- Configure the host firewall.\n hostFirewall:\n # -- Enables the enforcement of host policies in the eBPF datapath.\n enabled: false\n hostPort:\n # -- Enable hostPort service support.\n enabled: false\n # -- Configure socket LB\n socketLB:\n # -- Enable socket LB\n enabled: false\n # -- Disable socket lb for non-root ns. This is used to enable Istio routing rules.\n # hostNamespaceOnly: false\n # -- Enable terminating pod connections to deleted service backends.\n # terminatePodConnections: true\n # -- Configure certificate generation for Hubble integration.\n # If hubble.tls.auto.method=cronJob, these values are used\n # for the Kubernetes CronJob which will be scheduled regularly to\n # (re)generate any certificates not provided manually.\n certgen:\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/certgen\"\n tag: \"v0.2.0\"\n digest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n # -- Seconds after which the completed job pod will be deleted\n ttlSecondsAfterFinished: 1800\n # -- Labels to be added to hubble-certgen pods\n podLabels: {}\n # -- Annotations to be added to the hubble-certgen initial Job and CronJob\n annotations:\n job: {}\n cronJob: {}\n # -- Node tolerations for pod assignment on nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations: []\n # -- Additional certgen volumes.\n extraVolumes: []\n # -- Additional certgen volumeMounts.\n extraVolumeMounts: []\n # -- Affinity for certgen\n affinity: {}\n hubble:\n # -- Enable Hubble (true by default).\n enabled: true\n # -- Annotations to be added to all top-level hubble objects (resources under templates/hubble)\n annotations: {}\n # -- Buffer size of the channel Hubble uses to receive monitor events. If this\n # value is not set, the queue size is set to the default monitor queue size.\n # eventQueueSize: \"\"\n\n # -- Number of recent flows for Hubble to cache. 
Defaults to 4095.\n # Possible values are:\n # 1, 3, 7, 15, 31, 63, 127, 255, 511, 1023,\n # 2047, 4095, 8191, 16383, 32767, 65535\n # eventBufferCapacity: \"4095\"\n\n # -- Hubble metrics configuration.\n # See https://docs.cilium.io/en/stable/observability/metrics/#hubble-metrics\n # for more comprehensive documentation about Hubble metrics.\n metrics:\n # @schema\n # type: [null, array]\n # @schema\n # -- Configures the list of metrics to collect. If empty or null, metrics\n # are disabled.\n # Example:\n #\n # enabled:\n # - dns:query;ignoreAAAA\n # - drop\n # - tcp\n # - flow\n # - icmp\n # - http\n #\n # You can specify the list of metrics from the helm CLI:\n #\n # --set hubble.metrics.enabled=\"{dns:query;ignoreAAAA,drop,tcp,flow,icmp,http}\"\n #\n enabled: ~\n # -- Enables exporting hubble metrics in OpenMetrics format.\n enableOpenMetrics: false\n # -- Configure the port the hubble metric server listens on.\n port: 9965\n tls:\n # Enable hubble metrics server TLS.\n enabled: false\n # Configure hubble metrics server TLS.\n server:\n # -- base64 encoded PEM values for the Hubble metrics server certificate.\n cert: \"\"\n # -- base64 encoded PEM values for the Hubble metrics server key.\n key: \"\"\n # -- Extra DNS names added to certificate when it's auto generated\n extraDnsNames: []\n # -- Extra IP addresses added to certificate when it's auto generated\n extraIpAddresses: []\n # -- Configure mTLS for the Hubble metrics server.\n mtls:\n # When set to true enforces mutual TLS between Hubble Metrics server and its clients.\n # False allow non-mutual TLS connections.\n # This option has no effect when TLS is disabled.\n enabled: false\n useSecret: false\n # -- Name of the ConfigMap containing the CA to validate client certificates against.\n # If mTLS is enabled and this is unspecified, it will default to the\n # same CA used for Hubble metrics server certificates.\n name: ~\n # -- Entry of the ConfigMap containing the CA.\n key: ca.crt\n # -- Annotations to be added to hubble-metrics service.\n serviceAnnotations: {}\n serviceMonitor:\n # -- Create ServiceMonitor resources for Prometheus Operator.\n # This requires the prometheus CRDs to be available.\n # ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)\n enabled: false\n # -- Labels to add to ServiceMonitor hubble\n labels: {}\n # -- Annotations to add to ServiceMonitor hubble\n annotations: {}\n # -- jobLabel to add for ServiceMonitor hubble\n jobLabel: \"\"\n # -- Interval for scrape metrics.\n interval: \"10s\"\n # -- Relabeling configs for the ServiceMonitor hubble\n relabelings:\n - sourceLabels:\n - __meta_kubernetes_pod_node_name\n targetLabel: node\n replacement: ${1}\n # @schema\n # type: [null, array]\n # @schema\n # -- Metrics relabeling configs for the ServiceMonitor hubble\n metricRelabelings: ~\n # Configure TLS for the ServiceMonitor.\n # Note, when using TLS you will either need to specify\n # tlsConfig.insecureSkipVerify or specify a CA to use.\n tlsConfig: {}\n # -- Grafana dashboards for hubble\n # grafana can import dashboards based on the label and value\n # ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards\n dashboards:\n enabled: false\n label: grafana_dashboard\n # @schema\n # type: [null, string]\n # @schema\n namespace: ~\n labelValue: \"1\"\n annotations: {}\n # -- Unix domain socket path to listen to when Hubble is enabled.\n socketPath: 
/var/run/cilium/hubble.sock\n # -- Enables redacting sensitive information present in Layer 7 flows.\n redact:\n enabled: false\n http:\n # -- Enables redacting URL query (GET) parameters.\n # Example:\n #\n # redact:\n # enabled: true\n # http:\n # urlQuery: true\n #\n # You can specify the options from the helm CLI:\n #\n # --set hubble.redact.enabled=\"true\"\n # --set hubble.redact.http.urlQuery=\"true\"\n urlQuery: false\n # -- Enables redacting user info, e.g., password when basic auth is used.\n # Example:\n #\n # redact:\n # enabled: true\n # http:\n # userInfo: true\n #\n # You can specify the options from the helm CLI:\n #\n # --set hubble.redact.enabled=\"true\"\n # --set hubble.redact.http.userInfo=\"true\"\n userInfo: true\n headers:\n # -- List of HTTP headers to allow: headers not matching will be redacted. Note: `allow` and `deny` lists cannot be used both at the same time, only one can be present.\n # Example:\n # redact:\n # enabled: true\n # http:\n # headers:\n # allow:\n # - traceparent\n # - tracestate\n # - Cache-Control\n #\n # You can specify the options from the helm CLI:\n # --set hubble.redact.enabled=\"true\"\n # --set hubble.redact.http.headers.allow=\"traceparent,tracestate,Cache-Control\"\n allow: []\n # -- List of HTTP headers to deny: matching headers will be redacted. Note: `allow` and `deny` lists cannot be used both at the same time, only one can be present.\n # Example:\n # redact:\n # enabled: true\n # http:\n # headers:\n # deny:\n # - Authorization\n # - Proxy-Authorization\n #\n # You can specify the options from the helm CLI:\n # --set hubble.redact.enabled=\"true\"\n # --set hubble.redact.http.headers.deny=\"Authorization,Proxy-Authorization\"\n deny: []\n kafka:\n # -- Enables redacting Kafka's API key.\n # Example:\n #\n # redact:\n # enabled: true\n # kafka:\n # apiKey: true\n #\n # You can specify the options from the helm CLI:\n #\n # --set hubble.redact.enabled=\"true\"\n # --set hubble.redact.kafka.apiKey=\"true\"\n apiKey: false\n # -- An additional address for Hubble to listen to.\n # Set this field \":4244\" if you are enabling Hubble Relay, as it assumes that\n # Hubble is listening on port 4244.\n listenAddress: \":4244\"\n # -- Whether Hubble should prefer to announce IPv6 or IPv4 addresses if both are available.\n preferIpv6: false\n # @schema\n # type: [null, boolean]\n # @schema\n # -- (bool) Skip Hubble events with unknown cgroup ids\n # @default -- `true`\n skipUnknownCGroupIDs: ~\n peerService:\n # -- Service Port for the Peer service.\n # If not set, it is dynamically assigned to port 443 if TLS is enabled and to\n # port 80 if not.\n # servicePort: 80\n # -- Target Port for the Peer service, must match the hubble.listenAddress'\n # port.\n targetPort: 4244\n # -- The cluster domain to use to query the Hubble Peer service. It should\n # be the local cluster.\n clusterDomain: cluster.local\n # -- TLS configuration for Hubble\n tls:\n # -- Enable mutual TLS for listenAddress. Setting this value to false is\n # highly discouraged as the Hubble API provides access to potentially\n # sensitive network flow metadata and is exposed on the host network.\n enabled: true\n # -- Configure automatic TLS certificates generation.\n auto:\n # -- Auto-generate certificates.\n # When set to true, automatically generate a CA and certificates to\n # enable mTLS between Hubble server and Hubble Relay instances. 
If set to\n # false, the certs for Hubble server need to be provided by setting\n # appropriate values below.\n enabled: true\n # -- Set the method to auto-generate certificates. Supported values:\n # - helm: This method uses Helm to generate all certificates.\n # - cronJob: This method uses a Kubernetes CronJob the generate any\n # certificates not provided by the user at installation\n # time.\n # - certmanager: This method use cert-manager to generate \u0026 rotate certificates.\n method: helm\n # -- Generated certificates validity duration in days.\n certValidityDuration: 1095\n # -- Schedule for certificates regeneration (regardless of their expiration date).\n # Only used if method is \"cronJob\". If nil, then no recurring job will be created.\n # Instead, only the one-shot job is deployed to generate the certificates at\n # installation time.\n #\n # Defaults to midnight of the first day of every fourth month. For syntax, see\n # https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax\n schedule: \"0 0 1 */4 *\"\n # [Example]\n # certManagerIssuerRef:\n # group: cert-manager.io\n # kind: ClusterIssuer\n # name: ca-issuer\n # -- certmanager issuer used when hubble.tls.auto.method=certmanager.\n certManagerIssuerRef: {}\n # -- base64 encoded PEM values for the Hubble server certificate and private key\n server:\n cert: \"\"\n key: \"\"\n # -- Extra DNS names added to certificate when it's auto generated\n extraDnsNames: []\n # -- Extra IP addresses added to certificate when it's auto generated\n extraIpAddresses: []\n relay:\n # -- Enable Hubble Relay (requires hubble.enabled=true)\n enabled: false\n # -- Roll out Hubble Relay pods automatically when configmap is updated.\n rollOutPods: false\n # -- Hubble-relay container image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/hubble-relay\"\n tag: \"v1.16.0\"\n # hubble-relay-digest\n digest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n # -- Specifies the resources for the hubble-relay pods\n resources: {}\n # -- Number of replicas run for the hubble-relay deployment.\n replicas: 1\n # -- Affinity for hubble-replay\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - topologyKey: kubernetes.io/hostname\n labelSelector:\n matchLabels:\n k8s-app: cilium\n # -- Pod topology spread constraints for hubble-relay\n topologySpreadConstraints: []\n # - maxSkew: 1\n # topologyKey: topology.kubernetes.io/zone\n # whenUnsatisfiable: DoNotSchedule\n\n # -- Node labels for pod assignment\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector\n nodeSelector:\n kubernetes.io/os: linux\n # -- Node tolerations for pod assignment on nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations: []\n # -- Additional hubble-relay environment variables.\n extraEnv: []\n # -- Annotations to be added to all top-level hubble-relay objects (resources under templates/hubble-relay)\n annotations: {}\n # -- Annotations to be added to hubble-relay pods\n podAnnotations: {}\n # -- Labels to be added to hubble-relay pods\n podLabels: {}\n # PodDisruptionBudget settings\n podDisruptionBudget:\n # -- enable PodDisruptionBudget\n # ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/\n enabled: false\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Minimum number/percentage of pods 
that should remain scheduled.\n # When it's set, maxUnavailable must be disabled by `maxUnavailable: null`\n minAvailable: null\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Maximum number/percentage of pods that may be made unavailable\n maxUnavailable: 1\n # -- The priority class to use for hubble-relay\n priorityClassName: \"\"\n # -- Configure termination grace period for hubble relay Deployment.\n terminationGracePeriodSeconds: 1\n # -- hubble-relay update strategy\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n # @schema\n # type: [integer, string]\n # @schema\n maxUnavailable: 1\n # -- Additional hubble-relay volumes.\n extraVolumes: []\n # -- Additional hubble-relay volumeMounts.\n extraVolumeMounts: []\n # -- hubble-relay pod security context\n podSecurityContext:\n fsGroup: 65532\n # -- hubble-relay container security context\n securityContext:\n # readOnlyRootFilesystem: true\n runAsNonRoot: true\n runAsUser: 65532\n runAsGroup: 65532\n capabilities:\n drop:\n - ALL\n # -- hubble-relay service configuration.\n service:\n # --- The type of service used for Hubble Relay access, either ClusterIP or NodePort.\n type: ClusterIP\n # --- The port to use when the service type is set to NodePort.\n nodePort: 31234\n # -- Host to listen to. Specify an empty string to bind to all the interfaces.\n listenHost: \"\"\n # -- Port to listen to.\n listenPort: \"4245\"\n # -- TLS configuration for Hubble Relay\n tls:\n # -- base64 encoded PEM values for the hubble-relay client certificate and private key\n # This keypair is presented to Hubble server instances for mTLS\n # authentication and is required when hubble.tls.enabled is true.\n # These values need to be set manually if hubble.tls.auto.enabled is false.\n client:\n cert: \"\"\n key: \"\"\n # -- base64 encoded PEM values for the hubble-relay server certificate and private key\n server:\n # When set to true, enable TLS on for Hubble Relay server\n # (ie: for clients connecting to the Hubble Relay API).\n enabled: false\n # When set to true enforces mutual TLS between Hubble Relay server and its clients.\n # False allow non-mutual TLS connections.\n # This option has no effect when TLS is disabled.\n mtls: false\n # These values need to be set manually if hubble.tls.auto.enabled is false.\n cert: \"\"\n key: \"\"\n # -- extra DNS names added to certificate when its auto gen\n extraDnsNames: []\n # -- extra IP addresses added to certificate when its auto gen\n extraIpAddresses: []\n # DNS name used by the backend to connect to the relay\n # This is a simple workaround as the relay certificates are currently hardcoded to\n # *.hubble-relay.cilium.io\n # See https://github.com/cilium/cilium/pull/28709#discussion_r1371792546\n # For GKE Dataplane V2 this should be set to relay.kube-system.svc.cluster.local\n relayName: \"ui.hubble-relay.cilium.io\"\n # @schema\n # type: [null, string]\n # @schema\n # -- Dial timeout to connect to the local hubble instance to receive peer information (e.g. \"30s\").\n dialTimeout: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- Backoff duration to retry connecting to the local hubble instance in case of failure (e.g. \"30s\").\n retryTimeout: ~\n # @schema\n # type: [null, integer]\n # @schema\n # -- (int) Max number of flows that can be buffered for sorting before being sent to the\n # client (per request) (e.g. 
100).\n sortBufferLenMax: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- When the per-request flows sort buffer is not full, a flow is drained every\n # time this timeout is reached (only affects requests in follow-mode) (e.g. \"1s\").\n sortBufferDrainTimeout: ~\n # -- Port to use for the k8s service backed by hubble-relay pods.\n # If not set, it is dynamically assigned to port 443 if TLS is enabled and to\n # port 80 if not.\n # servicePort: 80\n\n # -- Enable prometheus metrics for hubble-relay on the configured port at\n # /metrics\n prometheus:\n enabled: false\n port: 9966\n serviceMonitor:\n # -- Enable service monitors.\n # This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)\n enabled: false\n # -- Labels to add to ServiceMonitor hubble-relay\n labels: {}\n # -- Annotations to add to ServiceMonitor hubble-relay\n annotations: {}\n # -- Interval for scrape metrics.\n interval: \"10s\"\n # -- Specify the Kubernetes namespace where Prometheus expects to find\n # service monitors configured.\n # namespace: \"\"\n # @schema\n # type: [null, array]\n # @schema\n # -- Relabeling configs for the ServiceMonitor hubble-relay\n relabelings: ~\n # @schema\n # type: [null, array]\n # @schema\n # -- Metrics relabeling configs for the ServiceMonitor hubble-relay\n metricRelabelings: ~\n gops:\n # -- Enable gops for hubble-relay\n enabled: true\n # -- Configure gops listen port for hubble-relay\n port: 9893\n pprof:\n # -- Enable pprof for hubble-relay\n enabled: false\n # -- Configure pprof listen address for hubble-relay\n address: localhost\n # -- Configure pprof listen port for hubble-relay\n port: 6062\n ui:\n # -- Whether to enable the Hubble UI.\n enabled: false\n standalone:\n # -- When true, it will allow installing the Hubble UI only, without checking dependencies.\n # It is useful if a cluster already has cilium and Hubble relay installed and you just\n # want Hubble UI to be deployed.\n # When installed via helm, installing UI should be done via `helm upgrade` and when installed via the cilium cli, then `cilium hubble enable --ui`\n enabled: false\n tls:\n # -- When deploying Hubble UI in standalone, with tls enabled for Hubble relay, it is required\n # to provide a volume for mounting the client certificates.\n certsVolume: {}\n # projected:\n # defaultMode: 0400\n # sources:\n # - secret:\n # name: hubble-ui-client-certs\n # items:\n # - key: tls.crt\n # path: client.crt\n # - key: tls.key\n # path: client.key\n # - key: ca.crt\n # path: hubble-relay-ca.crt\n # -- Roll out Hubble-ui pods automatically when configmap is updated.\n rollOutPods: false\n tls:\n # -- base64 encoded PEM values used to connect to hubble-relay\n # This keypair is presented to Hubble Relay instances for mTLS\n # authentication and is required when hubble.relay.tls.server.enabled is true.\n # These values need to be set manually if hubble.tls.auto.enabled is false.\n client:\n cert: \"\"\n key: \"\"\n backend:\n # -- Hubble-ui backend image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/hubble-ui-backend\"\n tag: \"v0.13.1\"\n digest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n # -- Hubble-ui backend security context.\n securityContext: {}\n # -- Additional hubble-ui backend environment variables.\n extraEnv: []\n # -- Additional hubble-ui backend 
volumes.\n extraVolumes: []\n # -- Additional hubble-ui backend volumeMounts.\n extraVolumeMounts: []\n livenessProbe:\n # -- Enable liveness probe for Hubble-ui backend (requires Hubble-ui 0.12+)\n enabled: false\n readinessProbe:\n # -- Enable readiness probe for Hubble-ui backend (requires Hubble-ui 0.12+)\n enabled: false\n # -- Resource requests and limits for the 'backend' container of the 'hubble-ui' deployment.\n resources: {}\n # limits:\n # cpu: 1000m\n # memory: 1024M\n # requests:\n # cpu: 100m\n # memory: 64Mi\n frontend:\n # -- Hubble-ui frontend image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/hubble-ui\"\n tag: \"v0.13.1\"\n digest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n # -- Hubble-ui frontend security context.\n securityContext: {}\n # -- Additional hubble-ui frontend environment variables.\n extraEnv: []\n # -- Additional hubble-ui frontend volumes.\n extraVolumes: []\n # -- Additional hubble-ui frontend volumeMounts.\n extraVolumeMounts: []\n # -- Resource requests and limits for the 'frontend' container of the 'hubble-ui' deployment.\n resources: {}\n # limits:\n # cpu: 1000m\n # memory: 1024M\n # requests:\n # cpu: 100m\n # memory: 64Mi\n server:\n # -- Controls server listener for ipv6\n ipv6:\n enabled: true\n # -- The number of replicas of Hubble UI to deploy.\n replicas: 1\n # -- Annotations to be added to all top-level hubble-ui objects (resources under templates/hubble-ui)\n annotations: {}\n # -- Annotations to be added to hubble-ui pods\n podAnnotations: {}\n # -- Labels to be added to hubble-ui pods\n podLabels: {}\n # PodDisruptionBudget settings\n podDisruptionBudget:\n # -- enable PodDisruptionBudget\n # ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/\n enabled: false\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Minimum number/percentage of pods that should remain scheduled.\n # When it's set, maxUnavailable must be disabled by `maxUnavailable: null`\n minAvailable: null\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Maximum number/percentage of pods that may be made unavailable\n maxUnavailable: 1\n # -- Affinity for hubble-ui\n affinity: {}\n # -- Pod topology spread constraints for hubble-ui\n topologySpreadConstraints: []\n # - maxSkew: 1\n # topologyKey: topology.kubernetes.io/zone\n # whenUnsatisfiable: DoNotSchedule\n\n # -- Node labels for pod assignment\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector\n nodeSelector:\n kubernetes.io/os: linux\n # -- Node tolerations for pod assignment on nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations: []\n # -- The priority class to use for hubble-ui\n priorityClassName: \"\"\n # -- hubble-ui update strategy.\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n # @schema\n # type: [integer, string]\n # @schema\n maxUnavailable: 1\n # -- Security context to be added to Hubble UI pods\n securityContext:\n runAsUser: 1001\n runAsGroup: 1001\n fsGroup: 1001\n # -- hubble-ui service configuration.\n service:\n # -- Annotations to be added for the Hubble UI service\n annotations: {}\n # --- The type of service used for Hubble UI access, either ClusterIP or NodePort.\n type: ClusterIP\n # --- The port to use when the service type is set to NodePort.\n nodePort: 31235\n # -- Defines base url prefix for all hubble-ui http 
requests.\n # It needs to be changed in case if ingress for hubble-ui is configured under some sub-path.\n # Trailing `/` is required for custom path, ex. `/service-map/`\n baseUrl: \"/\"\n # -- hubble-ui ingress configuration.\n ingress:\n enabled: false\n annotations: {}\n # kubernetes.io/ingress.class: nginx\n # kubernetes.io/tls-acme: \"true\"\n className: \"\"\n hosts:\n - chart-example.local\n labels: {}\n tls: []\n # - secretName: chart-example-tls\n # hosts:\n # - chart-example.local\n # -- Hubble flows export.\n export:\n # --- Defines max file size of output file before it gets rotated.\n fileMaxSizeMb: 10\n # --- Defines max number of backup/rotated files.\n fileMaxBackups: 5\n # --- Static exporter configuration.\n # Static exporter is bound to agent lifecycle.\n static:\n enabled: false\n filePath: /var/run/cilium/hubble/events.log\n fieldMask: []\n # - time\n # - source\n # - destination\n # - verdict\n allowList: []\n # - '{\"verdict\":[\"DROPPED\",\"ERROR\"]}'\n denyList: []\n # - '{\"source_pod\":[\"kube-system/\"]}'\n # - '{\"destination_pod\":[\"kube-system/\"]}'\n # --- Dynamic exporters configuration.\n # Dynamic exporters may be reconfigured without a need of agent restarts.\n dynamic:\n enabled: false\n config:\n # ---- Name of configmap with configuration that may be altered to reconfigure exporters within a running agents.\n configMapName: cilium-flowlog-config\n # ---- True if helm installer should create config map.\n # Switch to false if you want to self maintain the file content.\n createConfigMap: true\n # ---- Exporters configuration in YAML format.\n content:\n - name: all\n fieldMask: []\n includeFilters: []\n excludeFilters: []\n filePath: \"/var/run/cilium/hubble/events.log\"\n # - name: \"test002\"\n # filePath: \"/var/log/network/flow-log/pa/test002.log\"\n # fieldMask: [\"source.namespace\", \"source.pod_name\", \"destination.namespace\", \"destination.pod_name\", \"verdict\"]\n # includeFilters:\n # - source_pod: [\"default/\"]\n # event_type:\n # - type: 1\n # - destination_pod: [\"frontend/nginx-975996d4c-7hhgt\"]\n # excludeFilters: []\n # end: \"2023-10-09T23:59:59-07:00\"\n # -- Emit v1.Events related to pods on detection of packet drops.\n # This feature is alpha, please provide feedback at https://github.com/cilium/cilium/issues/33975.\n dropEventEmitter:\n enabled: false\n # --- Minimum time between emitting same events.\n interval: 2m\n # --- Drop reasons to emit events for.\n # ref: https://docs.cilium.io/en/stable/_api/v1/flow/README/#dropreason\n reasons:\n - auth_required\n - policy_denied\n # -- Method to use for identity allocation (`crd` or `kvstore`).\n identityAllocationMode: \"crd\"\n # -- (string) Time to wait before using new identity on endpoint identity change.\n # @default -- `\"5s\"`\n identityChangeGracePeriod: \"\"\n # -- Install Iptables rules to skip netfilter connection tracking on all pod\n # traffic. This option is only effective when Cilium is running in direct\n # routing and full KPR mode. 
Moreover, this option cannot be enabled when Cilium\n # is running in a managed Kubernetes environment or in a chained CNI setup.\n installNoConntrackIptablesRules: false\n ipam:\n mode: cluster-pool\n operator:\n # -- IPv4 CIDR list range to delegate to individual nodes for IPAM.\n clusterPoolIPv4PodCIDRList:\n - 192.168.0.0/16\n # -- IPv4 CIDR mask size to delegate to individual nodes for IPAM.\n clusterPoolIPv4MaskSize: 25\n # -- IPv6 CIDR list range to delegate to individual nodes for IPAM.\n clusterPoolIPv6PodCIDRList: []\n # -- IPv6 CIDR mask size to delegate to individual nodes for IPAM.\n clusterPoolIPv6MaskSize: 120\n nodeIPAM:\n # -- Configure Node IPAM\n # ref: https://docs.cilium.io/en/stable/network/node-ipam/\n enabled: false\n # @schema\n # type: [null, string]\n # @schema\n # -- The api-rate-limit option can be used to overwrite individual settings of the default configuration for rate limiting calls to the Cilium Agent API\n apiRateLimit: ~\n # -- Configure the eBPF-based ip-masq-agent\n ipMasqAgent:\n enabled: false\n # the config of nonMasqueradeCIDRs\n # config:\n # nonMasqueradeCIDRs: []\n # masqLinkLocal: false\n # masqLinkLocalIPv6: false\n\n # iptablesLockTimeout defines the iptables \"--wait\" option when invoked from Cilium.\n # iptablesLockTimeout: \"5s\"\n ipv4:\n # -- Enable IPv4 support.\n enabled: true\n ipv6:\n # -- Enable IPv6 support.\n enabled: false\n # -- Configure Kubernetes specific configuration\n k8s:\n # -- requireIPv4PodCIDR enables waiting for Kubernetes to provide the PodCIDR\n # range via the Kubernetes node resource\n requireIPv4PodCIDR: false\n # -- requireIPv6PodCIDR enables waiting for Kubernetes to provide the PodCIDR\n # range via the Kubernetes node resource\n requireIPv6PodCIDR: false\n # -- Keep the deprecated selector labels when deploying Cilium DaemonSet.\n keepDeprecatedLabels: false\n # -- Keep the deprecated probes when deploying Cilium DaemonSet\n keepDeprecatedProbes: false\n startupProbe:\n # -- failure threshold of startup probe.\n # 105 x 2s translates to the old behaviour of the readiness probe (120s delay + 30 x 3s)\n failureThreshold: 105\n # -- interval between checks of the startup probe\n periodSeconds: 2\n livenessProbe:\n # -- failure threshold of liveness probe\n failureThreshold: 10\n # -- interval between checks of the liveness probe\n periodSeconds: 30\n readinessProbe:\n # -- failure threshold of readiness probe\n failureThreshold: 3\n # -- interval between checks of the readiness probe\n periodSeconds: 30\n # -- Configure the kube-proxy replacement in Cilium BPF datapath\n # Valid options are \"true\" or \"false\".\n # ref: https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/\n #kubeProxyReplacement: \"false\"\n\n # -- healthz server bind address for the kube-proxy replacement.\n # To enable set the value to '0.0.0.0:10256' for all ipv4\n # addresses and this '[::]:10256' for all ipv6 addresses.\n # By default it is disabled.\n kubeProxyReplacementHealthzBindAddr: \"\"\n l2NeighDiscovery:\n # -- Enable L2 neighbor discovery in the agent\n enabled: true\n # -- Override the agent's default neighbor resolution refresh period.\n refreshPeriod: \"30s\"\n # -- Enable Layer 7 network policy.\n l7Proxy: true\n # -- Enable Local Redirect Policy.\n localRedirectPolicy: false\n # To include or exclude matched resources from cilium identity evaluation\n # labels: \"\"\n\n # logOptions allows you to define logging options. 
eg:\n # logOptions:\n # format: json\n\n # -- Enables periodic logging of system load\n logSystemLoad: false\n # -- Configure maglev consistent hashing\n maglev: {}\n # -- tableSize is the size (parameter M) for the backend table of one\n # service entry\n # tableSize:\n\n # -- hashSeed is the cluster-wide base64 encoded seed for the hashing\n # hashSeed:\n\n # -- Enables masquerading of IPv4 traffic leaving the node from endpoints.\n enableIPv4Masquerade: true\n # -- Enables masquerading of IPv6 traffic leaving the node from endpoints.\n enableIPv6Masquerade: true\n # -- Enables masquerading to the source of the route for traffic leaving the node from endpoints.\n enableMasqueradeRouteSource: false\n # -- Enables IPv4 BIG TCP support which increases maximum IPv4 GSO/GRO limits for nodes and pods\n enableIPv4BIGTCP: false\n # -- Enables IPv6 BIG TCP support which increases maximum IPv6 GSO/GRO limits for nodes and pods\n enableIPv6BIGTCP: false\n egressGateway:\n # -- Enables egress gateway to redirect and SNAT the traffic that leaves the\n # cluster.\n enabled: false\n # -- Time between triggers of egress gateway state reconciliations\n reconciliationTriggerInterval: 1s\n # -- Maximum number of entries in egress gateway policy map\n # maxPolicyEntries: 16384\n vtep:\n # -- Enables VXLAN Tunnel Endpoint (VTEP) Integration (beta) to allow\n # Cilium-managed pods to talk to third party VTEP devices over Cilium tunnel.\n enabled: false\n # -- A space separated list of VTEP device endpoint IPs, for example \"1.1.1.1 1.1.2.1\"\n endpoint: \"\"\n # -- A space separated list of VTEP device CIDRs, for example \"1.1.1.0/24 1.1.2.0/24\"\n cidr: \"\"\n # -- VTEP CIDRs Mask that applies to all VTEP CIDRs, for example \"255.255.255.0\"\n mask: \"\"\n # -- A space separated list of VTEP device MAC addresses (VTEP MAC), for example \"x:x:x:x:x:x y:y:y:y:y:y:y\"\n mac: \"\"\n # -- (string) Allows to explicitly specify the IPv4 CIDR for native routing.\n # When specified, Cilium assumes networking for this CIDR is preconfigured and\n # hands traffic destined for that range to the Linux network stack without\n # applying any SNAT.\n # Generally speaking, specifying a native routing CIDR implies that Cilium can\n # depend on the underlying networking stack to route packets to their\n # destination. To offer a concrete example, if Cilium is configured to use\n # direct routing and the Kubernetes CIDR is included in the native routing CIDR,\n # the user must configure the routes to reach pods, either manually or by\n # setting the auto-direct-node-routes flag.\n ipv4NativeRoutingCIDR: \"\"\n # -- (string) Allows to explicitly specify the IPv6 CIDR for native routing.\n # When specified, Cilium assumes networking for this CIDR is preconfigured and\n # hands traffic destined for that range to the Linux network stack without\n # applying any SNAT.\n # Generally speaking, specifying a native routing CIDR implies that Cilium can\n # depend on the underlying networking stack to route packets to their\n # destination. 
To offer a concrete example, if Cilium is configured to use\n # direct routing and the Kubernetes CIDR is included in the native routing CIDR,\n # the user must configure the routes to reach pods, either manually or by\n # setting the auto-direct-node-routes flag.\n ipv6NativeRoutingCIDR: \"\"\n # -- cilium-monitor sidecar.\n monitor:\n # -- Enable the cilium-monitor sidecar.\n enabled: false\n # -- Configure service load balancing\n loadBalancer:\n # -- standalone enables the standalone L4LB which does not connect to\n # kube-apiserver.\n # standalone: false\n\n # -- algorithm is the name of the load balancing algorithm for backend\n # selection e.g. random or maglev\n # algorithm: random\n\n # -- mode is the operation mode of load balancing for remote backends\n # e.g. snat, dsr, hybrid\n # mode: snat\n\n # -- acceleration is the option to accelerate service handling via XDP\n # Applicable values can be: disabled (do not use XDP), native (XDP BPF\n # program is run directly out of the networking driver's early receive\n # path), or best-effort (use native mode XDP acceleration on devices\n # that support it).\n acceleration: disabled\n # -- dsrDispatch configures whether IP option or IPIP encapsulation is\n # used to pass a service IP and port to remote backend\n # dsrDispatch: opt\n\n # -- serviceTopology enables K8s Topology Aware Hints -based service\n # endpoints filtering\n # serviceTopology: false\n\n # -- L7 LoadBalancer\n l7:\n # -- Enable L7 service load balancing via envoy proxy.\n # The request to a k8s service, which has specific annotation e.g. service.cilium.io/lb-l7,\n # will be forwarded to the local backend proxy to be load balanced to the service endpoints.\n # Please refer to docs for supported annotations for more configuration.\n #\n # Applicable values:\n # - envoy: Enable L7 load balancing via envoy proxy. This will automatically set enable-envoy-config as well.\n # - disabled: Disable L7 load balancing by way of service annotation.\n backend: disabled\n # -- List of ports from service to be automatically redirected to above backend.\n # Any service exposing one of these ports will be automatically redirected.\n # Fine-grained control can be achieved by using the service annotation.\n ports: []\n # -- Default LB algorithm\n # The default LB algorithm to be used for services, which can be overridden by the\n # service annotation (e.g. service.cilium.io/lb-l7-algorithm)\n # Applicable values: round_robin, least_request, random\n algorithm: round_robin\n # -- Configure N-S k8s service loadbalancing\n nodePort:\n # -- Enable the Cilium NodePort service implementation.\n enabled: false\n # -- Port range to use for NodePort services.\n # range: \"30000,32767\"\n\n # @schema\n # type: [null, string, array]\n # @schema\n # -- List of CIDRs for choosing which IP addresses assigned to native devices are used for NodePort load-balancing.\n # By default this is empty and the first suitable, preferably private, IPv4 and IPv6 address assigned to each device is used.\n #\n # Example:\n #\n # addresses: [\"192.168.1.0/24\", \"2001::/64\"]\n #\n addresses: ~\n # -- Set to true to prevent applications binding to service ports.\n bindProtection: true\n # -- Append NodePort range to ip_local_reserved_ports if clash with ephemeral\n # ports is detected.\n autoProtectPortRange: true\n # -- Enable healthcheck nodePort server for NodePort services\n enableHealthCheck: true\n # -- Enable access of the healthcheck nodePort on the LoadBalancerIP. 
Needs\n # EnableHealthCheck to be enabled\n enableHealthCheckLoadBalancerIP: false\n # policyAuditMode: false\n\n # -- The agent can be put into one of the three policy enforcement modes:\n # default, always and never.\n # ref: https://docs.cilium.io/en/stable/security/policy/intro/#policy-enforcement-modes\n policyEnforcementMode: \"default\"\n # @schema\n # type: [null, string, array]\n # @schema\n # -- policyCIDRMatchMode is a list of entities that may be selected by CIDR selector.\n # The possible value is \"nodes\".\n policyCIDRMatchMode:\n pprof:\n # -- Enable pprof for cilium-agent\n enabled: false\n # -- Configure pprof listen address for cilium-agent\n address: localhost\n # -- Configure pprof listen port for cilium-agent\n port: 6060\n # -- Configure prometheus metrics on the configured port at /metrics\n prometheus:\n enabled: false\n port: 9962\n serviceMonitor:\n # -- Enable service monitors.\n # This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)\n enabled: false\n # -- Labels to add to ServiceMonitor cilium-agent\n labels: {}\n # -- Annotations to add to ServiceMonitor cilium-agent\n annotations: {}\n # -- jobLabel to add for ServiceMonitor cilium-agent\n jobLabel: \"\"\n # -- Interval for scrape metrics.\n interval: \"10s\"\n # -- Specify the Kubernetes namespace where Prometheus expects to find\n # service monitors configured.\n # namespace: \"\"\n # -- Relabeling configs for the ServiceMonitor cilium-agent\n relabelings:\n - sourceLabels:\n - __meta_kubernetes_pod_node_name\n targetLabel: node\n replacement: ${1}\n # @schema\n # type: [null, array]\n # @schema\n # -- Metrics relabeling configs for the ServiceMonitor cilium-agent\n metricRelabelings: ~\n # -- Set to `true` and helm will not check for monitoring.coreos.com/v1 CRDs before deploying\n trustCRDsExist: false\n # @schema\n # type: [null, array]\n # @schema\n # -- Metrics that should be enabled or disabled from the default metric list.\n # The list is expected to be separated by a space. (+metric_foo to enable\n # metric_foo , -metric_bar to disable metric_bar).\n # ref: https://docs.cilium.io/en/stable/observability/metrics/\n metrics: ~\n # --- Enable controller group metrics for monitoring specific Cilium\n # subsystems. The list is a list of controller group names. The special\n # values of \"all\" and \"none\" are supported. The set of controller\n # group names is not guaranteed to be stable between Cilium versions.\n controllerGroupMetrics:\n - write-cni-file\n - sync-host-ips\n - sync-lb-maps-with-k8s-services\n # -- Grafana dashboards for cilium-agent\n # grafana can import dashboards based on the label and value\n # ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards\n dashboards:\n enabled: false\n label: grafana_dashboard\n # @schema\n # type: [null, string]\n # @schema\n namespace: ~\n labelValue: \"1\"\n annotations: {}\n # Configure Cilium Envoy options.\n envoy:\n # @schema\n # type: [null, boolean]\n # @schema\n # -- Enable Envoy Proxy in standalone DaemonSet.\n # This field is enabled by default for new installation.\n # @default -- `true` for new installation\n enabled: ~\n # -- (int)\n # Set Envoy'--base-id' to use when allocating shared memory regions.\n # Only needs to be changed if multiple Envoy instances will run on the same node and may have conflicts. Supported values: 0 - 4294967295. 
Defaults to '0'\n baseID: 0\n log:\n # -- The format string to use for laying out the log message metadata of Envoy.\n format: \"[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v\"\n # -- Path to a separate Envoy log file, if any. Defaults to /dev/stdout.\n path: \"\"\n # -- Time in seconds after which a TCP connection attempt times out\n connectTimeoutSeconds: 2\n # -- ProxyMaxRequestsPerConnection specifies the max_requests_per_connection setting for Envoy\n maxRequestsPerConnection: 0\n # -- Set Envoy HTTP option max_connection_duration seconds. Default 0 (disable)\n maxConnectionDurationSeconds: 0\n # -- Set Envoy upstream HTTP idle connection timeout seconds.\n # Does not apply to connections with pending requests. Default 60s\n idleTimeoutDurationSeconds: 60\n # -- Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the ingress L7 policy enforcement Envoy listeners.\n xffNumTrustedHopsL7PolicyIngress: 0\n # -- Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the egress L7 policy enforcement Envoy listeners.\n xffNumTrustedHopsL7PolicyEgress: 0\n # -- Envoy container image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/cilium-envoy\"\n tag: \"v1.29.7-39a2a56bbd5b3a591f69dbca51d3e30ef97e0e51\"\n pullPolicy: \"IfNotPresent\"\n digest: \"\"\n useDigest: false\n # -- Additional containers added to the cilium Envoy DaemonSet.\n extraContainers: []\n # -- Additional envoy container arguments.\n extraArgs: []\n # -- Additional envoy container environment variables.\n extraEnv: []\n # -- Additional envoy hostPath mounts.\n extraHostPathMounts: []\n # - name: host-mnt-data\n # mountPath: /host/mnt/data\n # hostPath: /mnt/data\n # hostPathType: Directory\n # readOnly: true\n # mountPropagation: HostToContainer\n\n # -- Additional envoy volumes.\n extraVolumes: []\n # -- Additional envoy volumeMounts.\n extraVolumeMounts: []\n # -- Configure termination grace period for cilium-envoy DaemonSet.\n terminationGracePeriodSeconds: 1\n # -- TCP port for the health API.\n healthPort: 9878\n # -- cilium-envoy update strategy\n # ref: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/#updating-a-daemonset\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n # @schema\n # type: [integer, string]\n # @schema\n maxUnavailable: 2\n # -- Roll out cilium envoy pods automatically when configmap is updated.\n rollOutPods: false\n # -- Annotations to be added to all top-level cilium-envoy objects (resources under templates/cilium-envoy)\n annotations: {}\n # -- Security Context for cilium-envoy pods.\n podSecurityContext:\n # -- AppArmorProfile options for the `cilium-agent` and init containers\n appArmorProfile:\n type: \"Unconfined\"\n # -- Annotations to be added to envoy pods\n podAnnotations: {}\n # -- Labels to be added to envoy pods\n podLabels: {}\n # -- Envoy resource limits \u0026 requests\n # ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n resources: {}\n # limits:\n # cpu: 4000m\n # memory: 4Gi\n # requests:\n # cpu: 100m\n # memory: 512Mi\n\n startupProbe:\n # -- failure threshold of startup probe.\n # 105 x 2s translates to the old behaviour of the readiness probe (120s delay + 30 x 3s)\n failureThreshold: 105\n # -- interval between checks of the startup probe\n periodSeconds: 2\n livenessProbe:\n # -- failure threshold of liveness probe\n failureThreshold: 10\n # -- interval between checks 
of the liveness probe\n periodSeconds: 30\n readinessProbe:\n # -- failure threshold of readiness probe\n failureThreshold: 3\n # -- interval between checks of the readiness probe\n periodSeconds: 30\n securityContext:\n # -- User to run the pod with\n # runAsUser: 0\n # -- Run the pod with elevated privileges\n privileged: false\n # -- SELinux options for the `cilium-envoy` container\n seLinuxOptions:\n level: 's0'\n # Running with spc_t since we have removed the privileged mode.\n # Users can change it to a different type as long as they have the\n # type available on the system.\n type: 'spc_t'\n capabilities:\n # -- Capabilities for the `cilium-envoy` container.\n # Even though granted to the container, the cilium-envoy-starter wrapper drops\n # all capabilities after forking the actual Envoy process.\n # `NET_BIND_SERVICE` is the only capability that can be passed to the Envoy process by\n # setting `envoy.securityContext.capabilities.keepNetBindService=true` (in addition to granting the\n # capability to the container).\n # Note: In case of embedded envoy, the capability must be granted to the cilium-agent container.\n envoy:\n # Used since cilium proxy uses setting IPPROTO_IP/IP_TRANSPARENT\n - NET_ADMIN\n # We need it for now but might not need it for \u003e= 5.11 specially\n # for the 'SYS_RESOURCE'.\n # In \u003e= 5.8 there's already BPF and PERMON capabilities\n - SYS_ADMIN\n # Both PERFMON and BPF requires kernel 5.8, container runtime\n # cri-o \u003e= v1.22.0 or containerd \u003e= v1.5.0.\n # If available, SYS_ADMIN can be removed.\n #- PERFMON\n #- BPF\n # -- Keep capability `NET_BIND_SERVICE` for Envoy process.\n keepCapNetBindService: false\n # -- Affinity for cilium-envoy.\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - topologyKey: kubernetes.io/hostname\n labelSelector:\n matchLabels:\n k8s-app: cilium-envoy\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - topologyKey: kubernetes.io/hostname\n labelSelector:\n matchLabels:\n k8s-app: cilium\n nodeAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n nodeSelectorTerms:\n - matchExpressions:\n - key: cilium.io/no-schedule\n operator: NotIn\n values:\n - \"true\"\n # -- Node selector for cilium-envoy.\n nodeSelector:\n kubernetes.io/os: linux\n # -- Node tolerations for envoy scheduling to nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations:\n - operator: Exists\n # - key: \"key\"\n # operator: \"Equal|Exists\"\n # value: \"value\"\n # effect: \"NoSchedule|PreferNoSchedule|NoExecute(1.6 only)\"\n # @schema\n # type: [null, string]\n # @schema\n # -- The priority class to use for cilium-envoy.\n priorityClassName: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- DNS policy for Cilium envoy pods.\n # Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy\n dnsPolicy: ~\n debug:\n admin:\n # -- Enable admin interface for cilium-envoy.\n # This is useful for debugging and should not be enabled in production.\n enabled: false\n # -- Port number (bound to loopback interface).\n # kubectl port-forward can be used to access the admin interface.\n port: 9901\n # -- Configure Cilium Envoy Prometheus options.\n # Note that some of these apply to either cilium-agent or cilium-envoy.\n prometheus:\n # -- Enable prometheus metrics for cilium-envoy\n enabled: true\n serviceMonitor:\n # -- Enable service monitors.\n # This requires the prometheus CRDs to be 
available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)\n # Note that this setting applies to both cilium-envoy _and_ cilium-agent\n # with Envoy enabled.\n enabled: false\n # -- Labels to add to ServiceMonitor cilium-envoy\n labels: {}\n # -- Annotations to add to ServiceMonitor cilium-envoy\n annotations: {}\n # -- Interval for scrape metrics.\n interval: \"10s\"\n # -- Specify the Kubernetes namespace where Prometheus expects to find\n # service monitors configured.\n # namespace: \"\"\n # -- Relabeling configs for the ServiceMonitor cilium-envoy\n # or for cilium-agent with Envoy configured.\n relabelings:\n - sourceLabels:\n - __meta_kubernetes_pod_node_name\n targetLabel: node\n replacement: ${1}\n # @schema\n # type: [null, array]\n # @schema\n # -- Metrics relabeling configs for the ServiceMonitor cilium-envoy\n # or for cilium-agent with Envoy configured.\n metricRelabelings: ~\n # -- Serve prometheus metrics for cilium-envoy on the configured port\n port: \"9964\"\n # -- Enable/Disable use of node label based identity\n nodeSelectorLabels: false\n # -- Enable resource quotas for priority classes used in the cluster.\n resourceQuotas:\n enabled: false\n cilium:\n hard:\n # 5k nodes * 2 DaemonSets (Cilium and cilium node init)\n pods: \"10k\"\n operator:\n hard:\n # 15 \"clusterwide\" Cilium Operator pods for HA\n pods: \"15\"\n # Need to document default\n ##################\n #sessionAffinity: false\n\n # -- Do not run Cilium agent when running with clean mode. Useful to completely\n # uninstall Cilium as it will stop Cilium from starting and create artifacts\n # in the node.\n sleepAfterInit: false\n # -- Enable check of service source ranges (currently, only for LoadBalancer).\n svcSourceRangeCheck: true\n # -- Synchronize Kubernetes nodes to kvstore and perform CNP GC.\n synchronizeK8sNodes: true\n # -- Configure TLS configuration in the agent.\n tls:\n # -- This configures how the Cilium agent loads the secrets used TLS-aware CiliumNetworkPolicies\n # (namely the secrets referenced by terminatingTLS and originatingTLS).\n # Possible values:\n # - local\n # - k8s\n secretsBackend: local\n # -- Base64 encoded PEM values for the CA certificate and private key.\n # This can be used as common CA to generate certificates used by hubble and clustermesh components.\n # It is neither required nor used when cert-manager is used to generate the certificates.\n ca:\n # -- Optional CA cert. If it is provided, it will be used by cilium to\n # generate all other certificates. Otherwise, an ephemeral CA is generated.\n cert: \"\"\n # -- Optional CA private key. If it is provided, it will be used by cilium to\n # generate all other certificates. Otherwise, an ephemeral CA is generated.\n key: \"\"\n # -- Generated certificates validity duration in days. This will be used for auto generated CA.\n certValidityDuration: 1095\n # -- Configure the CA trust bundle used for the validation of the certificates\n # leveraged by hubble and clustermesh. 
When enabled, it overrides the content of the\n # 'ca.crt' field of the respective certificates, allowing for CA rotation with no down-time.\n caBundle:\n # -- Enable the use of the CA trust bundle.\n enabled: false\n # -- Name of the ConfigMap containing the CA trust bundle.\n name: cilium-root-ca.crt\n # -- Entry of the ConfigMap containing the CA trust bundle.\n key: ca.crt\n # -- Use a Secret instead of a ConfigMap.\n useSecret: false\n # If uncommented, creates the ConfigMap and fills it with the specified content.\n # Otherwise, the ConfigMap is assumed to be already present in .Release.Namespace.\n #\n # content: |\n # -----BEGIN CERTIFICATE-----\n # ...\n # -----END CERTIFICATE-----\n # -----BEGIN CERTIFICATE-----\n # ...\n # -----END CERTIFICATE-----\n # -- Tunneling protocol to use in tunneling mode and for ad-hoc tunnels.\n # Possible values:\n # - \"\"\n # - vxlan\n # - geneve\n # @default -- `\"vxlan\"`\n tunnelProtocol: \"\"\n # -- Enable native-routing mode or tunneling mode.\n # Possible values:\n # - \"\"\n # - native\n # - tunnel\n # @default -- `\"tunnel\"`\n routingMode: \"\"\n # -- Configure VXLAN and Geneve tunnel port.\n # @default -- Port 8472 for VXLAN, Port 6081 for Geneve\n tunnelPort: 0\n # -- Configure what the response should be to traffic for a service without backends.\n # \"reject\" only works on kernels \u003e= 5.10, on lower kernels we fallback to \"drop\".\n # Possible values:\n # - reject (default)\n # - drop\n serviceNoBackendResponse: reject\n # -- Configure the underlying network MTU to overwrite auto-detected MTU.\n # This value doesn't change the host network interface MTU i.e. eth0 or ens0.\n # It changes the MTU for cilium_net@cilium_host, cilium_host@cilium_net,\n # cilium_vxlan and lxc_health interfaces.\n MTU: 0\n # -- Disable the usage of CiliumEndpoint CRD.\n disableEndpointCRD: false\n wellKnownIdentities:\n # -- Enable the use of well-known identities.\n enabled: false\n etcd:\n # -- Enable etcd mode for the agent.\n enabled: false\n # -- List of etcd endpoints\n endpoints:\n - https://CHANGE-ME:2379\n # -- Enable use of TLS/SSL for connectivity to etcd.\n ssl: false\n operator:\n # -- Enable the cilium-operator component (required).\n enabled: true\n # -- Roll out cilium-operator pods automatically when configmap is updated.\n rollOutPods: false\n # -- cilium-operator image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/operator\"\n tag: \"v1.16.0\"\n # operator-generic-digest\n genericDigest: \"\"\n # operator-azure-digest\n azureDigest: \"\"\n # operator-aws-digest\n awsDigest: \"\"\n # operator-alibabacloud-digest\n alibabacloudDigest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n suffix: \"\"\n # -- Number of replicas to run for the cilium-operator deployment\n replicas: 2\n # -- The priority class to use for cilium-operator\n priorityClassName: \"\"\n # -- DNS policy for Cilium operator pods.\n # Ref: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy\n dnsPolicy: \"\"\n # -- cilium-operator update strategy\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n # @schema\n # type: [integer, string]\n # @schema\n maxSurge: 25%\n # @schema\n # type: [integer, string]\n # @schema\n maxUnavailable: 50%\n # -- Affinity for cilium-operator\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - topologyKey: kubernetes.io/hostname\n labelSelector:\n matchLabels:\n 
io.cilium/app: operator\n # -- Pod topology spread constraints for cilium-operator\n topologySpreadConstraints: []\n # - maxSkew: 1\n # topologyKey: topology.kubernetes.io/zone\n # whenUnsatisfiable: DoNotSchedule\n\n # -- Node labels for cilium-operator pod assignment\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector\n nodeSelector:\n kubernetes.io/os: linux\n # -- Node tolerations for cilium-operator scheduling to nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations:\n - operator: Exists\n # - key: \"key\"\n # operator: \"Equal|Exists\"\n # value: \"value\"\n # effect: \"NoSchedule|PreferNoSchedule|NoExecute(1.6 only)\"\n # -- Additional cilium-operator container arguments.\n extraArgs: []\n # -- Additional cilium-operator environment variables.\n extraEnv: []\n # -- Additional cilium-operator hostPath mounts.\n extraHostPathMounts: []\n # - name: host-mnt-data\n # mountPath: /host/mnt/data\n # hostPath: /mnt/data\n # hostPathType: Directory\n # readOnly: true\n # mountPropagation: HostToContainer\n\n # -- Additional cilium-operator volumes.\n extraVolumes: []\n # -- Additional cilium-operator volumeMounts.\n extraVolumeMounts: []\n # -- Annotations to be added to all top-level cilium-operator objects (resources under templates/cilium-operator)\n annotations: {}\n # -- HostNetwork setting\n hostNetwork: true\n # -- Security context to be added to cilium-operator pods\n podSecurityContext: {}\n # -- Annotations to be added to cilium-operator pods\n podAnnotations: {}\n # -- Labels to be added to cilium-operator pods\n podLabels: {}\n # PodDisruptionBudget settings\n podDisruptionBudget:\n # -- enable PodDisruptionBudget\n # ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/\n enabled: false\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Minimum number/percentage of pods that should remain scheduled.\n # When it's set, maxUnavailable must be disabled by `maxUnavailable: null`\n minAvailable: null\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Maximum number/percentage of pods that may be made unavailable\n maxUnavailable: 1\n # -- cilium-operator resource limits \u0026 requests\n # ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n resources: {}\n # limits:\n # cpu: 1000m\n # memory: 1Gi\n # requests:\n # cpu: 100m\n # memory: 128Mi\n\n # -- Security context to be added to cilium-operator pods\n securityContext: {}\n # runAsUser: 0\n\n # -- Interval for endpoint garbage collection.\n endpointGCInterval: \"5m0s\"\n # -- Interval for cilium node garbage collection.\n nodeGCInterval: \"5m0s\"\n # -- Interval for identity garbage collection.\n identityGCInterval: \"15m0s\"\n # -- Timeout for identity heartbeats.\n identityHeartbeatTimeout: \"30m0s\"\n pprof:\n # -- Enable pprof for cilium-operator\n enabled: false\n # -- Configure pprof listen address for cilium-operator\n address: localhost\n # -- Configure pprof listen port for cilium-operator\n port: 6061\n # -- Enable prometheus metrics for cilium-operator on the configured port at\n # /metrics\n prometheus:\n enabled: true\n port: 9963\n serviceMonitor:\n # -- Enable service monitors.\n # This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)\n enabled: false\n # -- Labels to add to ServiceMonitor 
cilium-operator\n labels: {}\n # -- Annotations to add to ServiceMonitor cilium-operator\n annotations: {}\n # -- jobLabel to add for ServiceMonitor cilium-operator\n jobLabel: \"\"\n # -- Interval for scrape metrics.\n interval: \"10s\"\n # @schema\n # type: [null, array]\n # @schema\n # -- Relabeling configs for the ServiceMonitor cilium-operator\n relabelings: ~\n # @schema\n # type: [null, array]\n # @schema\n # -- Metrics relabeling configs for the ServiceMonitor cilium-operator\n metricRelabelings: ~\n # -- Grafana dashboards for cilium-operator\n # grafana can import dashboards based on the label and value\n # ref: https://github.com/grafana/helm-charts/tree/main/charts/grafana#sidecar-for-dashboards\n dashboards:\n enabled: false\n label: grafana_dashboard\n # @schema\n # type: [null, string]\n # @schema\n namespace: ~\n labelValue: \"1\"\n annotations: {}\n # -- Skip CRDs creation for cilium-operator\n skipCRDCreation: false\n # -- Remove Cilium node taint from Kubernetes nodes that have a healthy Cilium\n # pod running.\n removeNodeTaints: true\n # @schema\n # type: [null, boolean]\n # @schema\n # -- Taint nodes where Cilium is scheduled but not running. This prevents pods\n # from being scheduled to nodes where Cilium is not the default CNI provider.\n # @default -- same as removeNodeTaints\n setNodeTaints: ~\n # -- Set Node condition NetworkUnavailable to 'false' with the reason\n # 'CiliumIsUp' for nodes that have a healthy Cilium pod.\n setNodeNetworkStatus: true\n unmanagedPodWatcher:\n # -- Restart any pod that are not managed by Cilium.\n restart: true\n # -- Interval, in seconds, to check if there are any pods that are not\n # managed by Cilium.\n intervalSeconds: 15\n nodeinit:\n # -- Enable the node initialization DaemonSet\n enabled: false\n # -- node-init image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/startup-script\"\n tag: \"c54c7edeab7fde4da68e59acd319ab24af242c3f\"\n digest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n # -- The priority class to use for the nodeinit pod.\n priorityClassName: \"\"\n # -- node-init update strategy\n updateStrategy:\n type: RollingUpdate\n # -- Additional nodeinit environment variables.\n extraEnv: []\n # -- Additional nodeinit volumes.\n extraVolumes: []\n # -- Additional nodeinit volumeMounts.\n extraVolumeMounts: []\n # -- Affinity for cilium-nodeinit\n affinity: {}\n # -- Node labels for nodeinit pod assignment\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector\n nodeSelector:\n kubernetes.io/os: linux\n # -- Node tolerations for nodeinit scheduling to nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations:\n - operator: Exists\n # - key: \"key\"\n # operator: \"Equal|Exists\"\n # value: \"value\"\n # effect: \"NoSchedule|PreferNoSchedule|NoExecute(1.6 only)\"\n # -- Annotations to be added to all top-level nodeinit objects (resources under templates/cilium-nodeinit)\n annotations: {}\n # -- Annotations to be added to node-init pods.\n podAnnotations: {}\n # -- Labels to be added to node-init pods.\n podLabels: {}\n # -- Security Context for cilium-node-init pods.\n podSecurityContext:\n # -- AppArmorProfile options for the `cilium-node-init` and init containers\n appArmorProfile:\n type: \"Unconfined\"\n # -- nodeinit resource limits \u0026 requests\n # ref: 
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n resources:\n requests:\n cpu: 100m\n memory: 100Mi\n # -- Security context to be added to nodeinit pods.\n securityContext:\n privileged: false\n seLinuxOptions:\n level: 's0'\n # Running with spc_t since we have removed the privileged mode.\n # Users can change it to a different type as long as they have the\n # type available on the system.\n type: 'spc_t'\n capabilities:\n add:\n # Used in iptables. Consider removing once we are iptables-free\n - SYS_MODULE\n # Used for nsenter\n - NET_ADMIN\n - SYS_ADMIN\n - SYS_CHROOT\n - SYS_PTRACE\n # -- bootstrapFile is the location of the file where the bootstrap timestamp is\n # written by the node-init DaemonSet\n bootstrapFile: \"/tmp/cilium-bootstrap.d/cilium-bootstrap-time\"\n # -- startup offers way to customize startup nodeinit script (pre and post position)\n startup:\n preScript: \"\"\n postScript: \"\"\n # -- prestop offers way to customize prestop nodeinit script (pre and post position)\n prestop:\n preScript: \"\"\n postScript: \"\"\n preflight:\n # -- Enable Cilium pre-flight resources (required for upgrade)\n enabled: false\n # -- Cilium pre-flight image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/cilium\"\n tag: \"v1.16.0\"\n # cilium-digest\n digest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n # -- The priority class to use for the preflight pod.\n priorityClassName: \"\"\n # -- preflight update strategy\n updateStrategy:\n type: RollingUpdate\n # -- Additional preflight environment variables.\n extraEnv: []\n # -- Additional preflight volumes.\n extraVolumes: []\n # -- Additional preflight volumeMounts.\n extraVolumeMounts: []\n # -- Affinity for cilium-preflight\n affinity:\n podAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - topologyKey: kubernetes.io/hostname\n labelSelector:\n matchLabels:\n k8s-app: cilium\n # -- Node labels for preflight pod assignment\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector\n nodeSelector:\n kubernetes.io/os: linux\n # -- Node tolerations for preflight scheduling to nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations:\n - operator: Exists\n # - key: \"key\"\n # operator: \"Equal|Exists\"\n # value: \"value\"\n # effect: \"NoSchedule|PreferNoSchedule|NoExecute(1.6 only)\"\n # -- Annotations to be added to all top-level preflight objects (resources under templates/cilium-preflight)\n annotations: {}\n # -- Security context to be added to preflight pods.\n podSecurityContext: {}\n # -- Annotations to be added to preflight pods\n podAnnotations: {}\n # -- Labels to be added to the preflight pod.\n podLabels: {}\n # PodDisruptionBudget settings\n podDisruptionBudget:\n # -- enable PodDisruptionBudget\n # ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/\n enabled: false\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Minimum number/percentage of pods that should remain scheduled.\n # When it's set, maxUnavailable must be disabled by `maxUnavailable: null`\n minAvailable: null\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Maximum number/percentage of pods that may be made unavailable\n maxUnavailable: 1\n # -- preflight resource limits \u0026 requests\n # ref: 
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/\n resources: {}\n # limits:\n # cpu: 4000m\n # memory: 4Gi\n # requests:\n # cpu: 100m\n # memory: 512Mi\n\n readinessProbe:\n # -- For how long kubelet should wait before performing the first probe\n initialDelaySeconds: 5\n # -- interval between checks of the readiness probe\n periodSeconds: 5\n # -- Security context to be added to preflight pods\n securityContext: {}\n # runAsUser: 0\n\n # -- Path to write the `--tofqdns-pre-cache` file to.\n tofqdnsPreCache: \"\"\n # -- Configure termination grace period for preflight Deployment and DaemonSet.\n terminationGracePeriodSeconds: 1\n # -- By default we should always validate the installed CNPs before upgrading\n # Cilium. This will make sure the user will have the policies deployed in the\n # cluster with the right schema.\n validateCNPs: true\n # -- Explicitly enable or disable priority class.\n # .Capabilities.KubeVersion is unsettable in `helm template` calls,\n # it depends on k8s libraries version that Helm was compiled against.\n # This option allows to explicitly disable setting the priority class, which\n # is useful for rendering charts for gke clusters in advance.\n enableCriticalPriorityClass: true\n # disableEnvoyVersionCheck removes the check for Envoy, which can be useful\n # on AArch64 as the images do not currently ship a version of Envoy.\n #disableEnvoyVersionCheck: false\n clustermesh:\n # -- Deploy clustermesh-apiserver for clustermesh\n useAPIServer: false\n # -- The maximum number of clusters to support in a ClusterMesh. This value\n # cannot be changed on running clusters, and all clusters in a ClusterMesh\n # must be configured with the same value. Values \u003e 255 will decrease the\n # maximum allocatable cluster-local identities.\n # Supported values are 255 and 511.\n maxConnectedClusters: 255\n # -- Enable the synchronization of Kubernetes EndpointSlices corresponding to\n # the remote endpoints of appropriately-annotated global services through ClusterMesh\n enableEndpointSliceSynchronization: false\n # -- Enable Multi-Cluster Services API support\n enableMCSAPISupport: false\n # -- Annotations to be added to all top-level clustermesh objects (resources under templates/clustermesh-apiserver and templates/clustermesh-config)\n annotations: {}\n # -- Clustermesh explicit configuration.\n config:\n # -- Enable the Clustermesh explicit configuration.\n enabled: false\n # -- Default dns domain for the Clustermesh API servers\n # This is used in the case cluster addresses are not provided\n # and IPs are used.\n domain: mesh.cilium.io\n # -- List of clusters to be peered in the mesh.\n clusters: []\n # clusters:\n # # -- Name of the cluster\n # - name: cluster1\n # # -- Address of the cluster, use this if you created DNS records for\n # # the cluster Clustermesh API server.\n # address: cluster1.mesh.cilium.io\n # # -- Port of the cluster Clustermesh API server.\n # port: 2379\n # # -- IPs of the cluster Clustermesh API server, use multiple ones when\n # # you have multiple IPs to access the Clustermesh API server.\n # ips:\n # - 172.18.255.201\n # # -- base64 encoded PEM values for the cluster client certificate, private key and certificate authority.\n # # These fields can (and should) be omitted in case the CA is shared across clusters. 
In that case, the\n # # \"remote\" private key and certificate available in the local cluster are automatically used instead.\n # tls:\n # cert: \"\"\n # key: \"\"\n # caCert: \"\"\n apiserver:\n # -- Clustermesh API server image.\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/clustermesh-apiserver\"\n tag: \"v1.16.0\"\n # clustermesh-apiserver-digest\n digest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n # -- TCP port for the clustermesh-apiserver health API.\n healthPort: 9880\n # -- Configuration for the clustermesh-apiserver readiness probe.\n readinessProbe: {}\n etcd:\n # The etcd binary is included in the clustermesh API server image, so the same image from above is reused.\n # Independent override isn't supported, because clustermesh-apiserver is tested against the etcd version it is\n # built with.\n\n # -- Specifies the resources for etcd container in the apiserver\n resources: {}\n # requests:\n # cpu: 200m\n # memory: 256Mi\n # limits:\n # cpu: 1000m\n # memory: 256Mi\n\n # -- Security context to be added to clustermesh-apiserver etcd containers\n securityContext:\n allowPrivilegeEscalation: false\n capabilities:\n drop:\n - ALL\n # -- lifecycle setting for the etcd container\n lifecycle: {}\n init:\n # -- Specifies the resources for etcd init container in the apiserver\n resources: {}\n # requests:\n # cpu: 100m\n # memory: 100Mi\n # limits:\n # cpu: 100m\n # memory: 100Mi\n\n # -- Additional arguments to `clustermesh-apiserver etcdinit`.\n extraArgs: []\n # -- Additional environment variables to `clustermesh-apiserver etcdinit`.\n extraEnv: []\n # @schema\n # enum: [Disk, Memory]\n # @schema\n # -- Specifies whether etcd data is stored in a temporary volume backed by\n # the node's default medium, such as disk, SSD or network storage (Disk), or\n # RAM (Memory). The Memory option enables improved etcd read and write\n # performance at the cost of additional memory usage, which counts against\n # the memory limits of the container.\n storageMedium: Disk\n kvstoremesh:\n # -- Enable KVStoreMesh. KVStoreMesh caches the information retrieved\n # from the remote clusters in the local etcd instance.\n enabled: true\n # -- TCP port for the KVStoreMesh health API.\n healthPort: 9881\n # -- Configuration for the KVStoreMesh readiness probe.\n readinessProbe: {}\n # -- Additional KVStoreMesh arguments.\n extraArgs: []\n # -- Additional KVStoreMesh environment variables.\n extraEnv: []\n # -- Resource requests and limits for the KVStoreMesh container\n resources: {}\n # requests:\n # cpu: 100m\n # memory: 64Mi\n # limits:\n # cpu: 1000m\n # memory: 1024M\n\n # -- Additional KVStoreMesh volumeMounts.\n extraVolumeMounts: []\n # -- KVStoreMesh Security context\n securityContext:\n allowPrivilegeEscalation: false\n capabilities:\n drop:\n - ALL\n # -- lifecycle setting for the KVStoreMesh container\n lifecycle: {}\n service:\n # -- The type of service used for apiserver access.\n type: NodePort\n # -- Optional port to use as the node port for apiserver access.\n #\n # WARNING: make sure to configure a different NodePort in each cluster if\n # kube-proxy replacement is enabled, as Cilium is currently affected by a known\n # bug (#24692) when NodePorts are handled by the KPR implementation. 
If a service\n # with the same NodePort exists both in the local and the remote cluster, all\n # traffic originating from inside the cluster and targeting the corresponding\n # NodePort will be redirected to a local backend, regardless of whether the\n # destination node belongs to the local or the remote cluster.\n nodePort: 32379\n # -- Annotations for the clustermesh-apiserver\n # For GKE LoadBalancer, use annotation cloud.google.com/load-balancer-type: \"Internal\"\n # For EKS LoadBalancer, use annotation service.beta.kubernetes.io/aws-load-balancer-internal: \"true\"\n annotations: {}\n # @schema\n # enum: [Local, Cluster]\n # @schema\n # -- The externalTrafficPolicy of service used for apiserver access.\n externalTrafficPolicy: Cluster\n # @schema\n # enum: [Local, Cluster]\n # @schema\n # -- The internalTrafficPolicy of service used for apiserver access.\n internalTrafficPolicy: Cluster\n # @schema\n # enum: [HAOnly, Always, Never]\n # @schema\n # -- Defines when to enable session affinity.\n # Each replica in a clustermesh-apiserver deployment runs its own discrete\n # etcd cluster. Remote clients connect to one of the replicas through a\n # shared Kubernetes Service. A client reconnecting to a different backend\n # will require a full resync to ensure data integrity. Session affinity\n # can reduce the likelihood of this happening, but may not be supported\n # by all cloud providers.\n # Possible values:\n # - \"HAOnly\" (default) Only enable session affinity for deployments with more than 1 replica.\n # - \"Always\" Always enable session affinity.\n # - \"Never\" Never enable session affinity. Useful in environments where\n # session affinity is not supported, but may lead to slightly\n # degraded performance due to more frequent reconnections.\n enableSessionAffinity: \"HAOnly\"\n # @schema\n # type: [null, string]\n # @schema\n # -- Configure a loadBalancerClass.\n # Allows to configure the loadBalancerClass on the clustermesh-apiserver\n # LB service in case the Service type is set to LoadBalancer\n # (requires Kubernetes 1.24+).\n loadBalancerClass: ~\n # @schema\n # type: [null, string]\n # @schema\n # -- Configure a specific loadBalancerIP.\n # Allows to configure a specific loadBalancerIP on the clustermesh-apiserver\n # LB service in case the Service type is set to LoadBalancer.\n loadBalancerIP: ~\n # -- Number of replicas run for the clustermesh-apiserver deployment.\n replicas: 1\n # -- lifecycle setting for the apiserver container\n lifecycle: {}\n # -- terminationGracePeriodSeconds for the clustermesh-apiserver deployment\n terminationGracePeriodSeconds: 30\n # -- Additional clustermesh-apiserver arguments.\n extraArgs: []\n # -- Additional clustermesh-apiserver environment variables.\n extraEnv: []\n # -- Additional clustermesh-apiserver volumes.\n extraVolumes: []\n # -- Additional clustermesh-apiserver volumeMounts.\n extraVolumeMounts: []\n # -- Security context to be added to clustermesh-apiserver containers\n securityContext:\n allowPrivilegeEscalation: false\n capabilities:\n drop:\n - ALL\n # -- Security context to be added to clustermesh-apiserver pods\n podSecurityContext:\n runAsNonRoot: true\n runAsUser: 65532\n runAsGroup: 65532\n fsGroup: 65532\n # -- Annotations to be added to clustermesh-apiserver pods\n podAnnotations: {}\n # -- Labels to be added to clustermesh-apiserver pods\n podLabels: {}\n # PodDisruptionBudget settings\n podDisruptionBudget:\n # -- enable PodDisruptionBudget\n # ref: 
https://kubernetes.io/docs/concepts/workloads/pods/disruptions/\n enabled: false\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Minimum number/percentage of pods that should remain scheduled.\n # When it's set, maxUnavailable must be disabled by `maxUnavailable: null`\n minAvailable: null\n # @schema\n # type: [null, integer, string]\n # @schema\n # -- Maximum number/percentage of pods that may be made unavailable\n maxUnavailable: 1\n # -- Resource requests and limits for the clustermesh-apiserver\n resources: {}\n # requests:\n # cpu: 100m\n # memory: 64Mi\n # limits:\n # cpu: 1000m\n # memory: 1024M\n\n # -- Affinity for clustermesh.apiserver\n affinity:\n podAntiAffinity:\n preferredDuringSchedulingIgnoredDuringExecution:\n - weight: 100\n podAffinityTerm:\n labelSelector:\n matchLabels:\n k8s-app: clustermesh-apiserver\n topologyKey: kubernetes.io/hostname\n # -- Pod topology spread constraints for clustermesh-apiserver\n topologySpreadConstraints: []\n # - maxSkew: 1\n # topologyKey: topology.kubernetes.io/zone\n # whenUnsatisfiable: DoNotSchedule\n\n # -- Node labels for pod assignment\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector\n nodeSelector:\n kubernetes.io/os: linux\n # -- Node tolerations for pod assignment on nodes with taints\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations: []\n # -- clustermesh-apiserver update strategy\n updateStrategy:\n type: RollingUpdate\n rollingUpdate:\n # @schema\n # type: [integer, string]\n # @schema\n maxSurge: 1\n # @schema\n # type: [integer, string]\n # @schema\n maxUnavailable: 0\n # -- The priority class to use for clustermesh-apiserver\n priorityClassName: \"\"\n tls:\n # -- Configure the clustermesh authentication mode.\n # Supported values:\n # - legacy: All clusters access remote clustermesh instances with the same\n # username (i.e., remote). The \"remote\" certificate must be\n # generated with CN=remote if provided manually.\n # - migration: Intermediate mode required to upgrade from legacy to cluster\n # (and vice versa) with no disruption. Specifically, it enables\n # the creation of the per-cluster usernames, while still using\n # the common one for authentication. The \"remote\" certificate must\n # be generated with CN=remote if provided manually (same as legacy).\n # - cluster: Each cluster accesses remote etcd instances with a username\n # depending on the local cluster name (i.e., remote-\u003ccluster-name\u003e).\n # The \"remote\" certificate must be generated with CN=remote-\u003ccluster-name\u003e\n # if provided manually. 
Cluster mode is meaningful only when the same\n # CA is shared across all clusters part of the mesh.\n authMode: legacy\n # -- Allow users to provide their own certificates\n # Users may need to provide their certificates using\n # a mechanism that requires they provide their own secrets.\n # This setting does not apply to any of the auto-generated\n # mechanisms below, it only restricts the creation of secrets\n # via the `tls-provided` templates.\n enableSecrets: true\n # -- Configure automatic TLS certificates generation.\n # A Kubernetes CronJob is used the generate any\n # certificates not provided by the user at installation\n # time.\n auto:\n # -- When set to true, automatically generate a CA and certificates to\n # enable mTLS between clustermesh-apiserver and external workload instances.\n # If set to false, the certs to be provided by setting appropriate values below.\n enabled: true\n # Sets the method to auto-generate certificates. Supported values:\n # - helm: This method uses Helm to generate all certificates.\n # - cronJob: This method uses a Kubernetes CronJob the generate any\n # certificates not provided by the user at installation\n # time.\n # - certmanager: This method use cert-manager to generate \u0026 rotate certificates.\n method: helm\n # -- Generated certificates validity duration in days.\n certValidityDuration: 1095\n # -- Schedule for certificates regeneration (regardless of their expiration date).\n # Only used if method is \"cronJob\". If nil, then no recurring job will be created.\n # Instead, only the one-shot job is deployed to generate the certificates at\n # installation time.\n #\n # Due to the out-of-band distribution of client certs to external workloads the\n # CA is (re)regenerated only if it is not provided as a helm value and the k8s\n # secret is manually deleted.\n #\n # Defaults to none. Commented syntax gives midnight of the first day of every\n # fourth month. 
For syntax, see\n # https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#schedule-syntax\n # schedule: \"0 0 1 */4 *\"\n\n # [Example]\n # certManagerIssuerRef:\n # group: cert-manager.io\n # kind: ClusterIssuer\n # name: ca-issuer\n # -- certmanager issuer used when clustermesh.apiserver.tls.auto.method=certmanager.\n certManagerIssuerRef: {}\n # -- base64 encoded PEM values for the clustermesh-apiserver server certificate and private key.\n # Used if 'auto' is not enabled.\n server:\n cert: \"\"\n key: \"\"\n # -- Extra DNS names added to certificate when it's auto generated\n extraDnsNames: []\n # -- Extra IP addresses added to certificate when it's auto generated\n extraIpAddresses: []\n # -- base64 encoded PEM values for the clustermesh-apiserver admin certificate and private key.\n # Used if 'auto' is not enabled.\n admin:\n cert: \"\"\n key: \"\"\n # -- base64 encoded PEM values for the clustermesh-apiserver client certificate and private key.\n # Used if 'auto' is not enabled.\n client:\n cert: \"\"\n key: \"\"\n # -- base64 encoded PEM values for the clustermesh-apiserver remote cluster certificate and private key.\n # Used if 'auto' is not enabled.\n remote:\n cert: \"\"\n key: \"\"\n # clustermesh-apiserver Prometheus metrics configuration\n metrics:\n # -- Enables exporting apiserver metrics in OpenMetrics format.\n enabled: true\n # -- Configure the port the apiserver metric server listens on.\n port: 9962\n kvstoremesh:\n # -- Enables exporting KVStoreMesh metrics in OpenMetrics format.\n enabled: true\n # -- Configure the port the KVStoreMesh metric server listens on.\n port: 9964\n etcd:\n # -- Enables exporting etcd metrics in OpenMetrics format.\n enabled: true\n # -- Set level of detail for etcd metrics; specify 'extensive' to include server side gRPC histogram metrics.\n mode: basic\n # -- Configure the port the etcd metric server listens on.\n port: 9963\n serviceMonitor:\n # -- Enable service monitor.\n # This requires the prometheus CRDs to be available (see https://github.com/prometheus-operator/prometheus-operator/blob/main/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)\n enabled: false\n # -- Labels to add to ServiceMonitor clustermesh-apiserver\n labels: {}\n # -- Annotations to add to ServiceMonitor clustermesh-apiserver\n annotations: {}\n # -- Specify the Kubernetes namespace where Prometheus expects to find\n # service monitors configured.\n # namespace: \"\"\n\n # -- Interval for scrape metrics (apiserver metrics)\n interval: \"10s\"\n # @schema\n # type: [null, array]\n # @schema\n # -- Relabeling configs for the ServiceMonitor clustermesh-apiserver (apiserver metrics)\n relabelings: ~\n # @schema\n # type: [null, array]\n # @schema\n # -- Metrics relabeling configs for the ServiceMonitor clustermesh-apiserver (apiserver metrics)\n metricRelabelings: ~\n kvstoremesh:\n # -- Interval for scrape metrics (KVStoreMesh metrics)\n interval: \"10s\"\n # @schema\n # type: [null, array]\n # @schema\n # -- Relabeling configs for the ServiceMonitor clustermesh-apiserver (KVStoreMesh metrics)\n relabelings: ~\n # @schema\n # type: [null, array]\n # @schema\n # -- Metrics relabeling configs for the ServiceMonitor clustermesh-apiserver (KVStoreMesh metrics)\n metricRelabelings: ~\n etcd:\n # -- Interval for scrape metrics (etcd metrics)\n interval: \"10s\"\n # @schema\n # type: [null, array]\n # @schema\n # -- Relabeling configs for the ServiceMonitor clustermesh-apiserver (etcd metrics)\n relabelings: ~\n # @schema\n # type: 
[null, array]\n # @schema\n # -- Metrics relabeling configs for the ServiceMonitor clustermesh-apiserver (etcd metrics)\n metricRelabelings: ~\n # -- Configure external workloads support\n externalWorkloads:\n # -- Enable support for external workloads, such as VMs (false by default).\n enabled: false\n # -- Configure cgroup related configuration\n cgroup:\n autoMount:\n # -- Enable auto mount of cgroup2 filesystem.\n # When `autoMount` is enabled, cgroup2 filesystem is mounted at\n # `cgroup.hostRoot` path on the underlying host and inside the cilium agent pod.\n # If users disable `autoMount`, it's expected that users have mounted\n # cgroup2 filesystem at the specified `cgroup.hostRoot` volume, and then the\n # volume will be mounted inside the cilium agent pod at the same path.\n enabled: true\n # -- Init Container Cgroup Automount resource limits \u0026 requests\n resources: {}\n # limits:\n # cpu: 100m\n # memory: 128Mi\n # requests:\n # cpu: 100m\n # memory: 128Mi\n # -- Configure cgroup root where cgroup2 filesystem is mounted on the host (see also: `cgroup.autoMount`)\n hostRoot: /run/cilium/cgroupv2\n # -- Configure sysctl override described in #20072.\n sysctlfix:\n # -- Enable the sysctl override. When enabled, the init container will mount the /proc of the host so that the `sysctlfix` utility can execute.\n enabled: true\n # -- Configure whether to enable auto detect of terminating state for endpoints\n # in order to support graceful termination.\n enableK8sTerminatingEndpoint: true\n # -- Configure whether to unload DNS policy rules on graceful shutdown\n # dnsPolicyUnloadOnShutdown: false\n\n # -- Configure the key of the taint indicating that Cilium is not ready on the node.\n # When set to a value starting with `ignore-taint.cluster-autoscaler.kubernetes.io/`, the Cluster Autoscaler will ignore the taint on its decisions, allowing the cluster to scale up.\n agentNotReadyTaintKey: \"node.cilium.io/agent-not-ready\"\n dnsProxy:\n # -- Timeout (in seconds) when closing the connection between the DNS proxy and the upstream server. If set to 0, the connection is closed immediately (with TCP RST). If set to -1, the connection is closed asynchronously in the background.\n socketLingerTimeout: 10\n # -- DNS response code for rejecting DNS requests, available options are '[nameError refused]'.\n dnsRejectResponseCode: refused\n # -- Allow the DNS proxy to compress responses to endpoints that are larger than 512 Bytes or the EDNS0 option, if present.\n enableDnsCompression: true\n # -- Maximum number of IPs to maintain per FQDN name for each endpoint.\n endpointMaxIpPerHostname: 50\n # -- Time during which idle but previously active connections with expired DNS lookups are still considered alive.\n idleConnectionGracePeriod: 0s\n # -- Maximum number of IPs to retain for expired DNS lookups with still-active connections.\n maxDeferredConnectionDeletes: 10000\n # -- The minimum time, in seconds, to use DNS data for toFQDNs policies. If\n # the upstream DNS server returns a DNS record with a shorter TTL, Cilium\n # overwrites the TTL with this value. Setting this value to zero means that\n # Cilium will honor the TTLs returned by the upstream DNS server.\n minTtl: 0\n # -- DNS cache data at this path is preloaded on agent startup.\n preCache: \"\"\n # -- Global port on which the in-agent DNS proxy should listen. Default 0 is a OS-assigned port.\n proxyPort: 0\n # -- The maximum time the DNS proxy holds an allowed DNS response before sending it along. 
Responses are sent as soon as the datapath is updated with the new IP information.\n proxyResponseMaxDelay: 100ms\n # -- DNS proxy operation mode (true/false, or unset to use version dependent defaults)\n # enableTransparentMode: true\n # -- SCTP Configuration Values\n sctp:\n # -- Enable SCTP support. NOTE: Currently, SCTP support does not support rewriting ports or multihoming.\n enabled: false\n # Configuration for types of authentication for Cilium (beta)\n authentication:\n # -- Enable authentication processing and garbage collection.\n # Note that if disabled, policy enforcement will still block requests that require authentication.\n # But the resulting authentication requests for these requests will not be processed, therefore the requests not be allowed.\n enabled: true\n # -- Buffer size of the channel Cilium uses to receive authentication events from the signal map.\n queueSize: 1024\n # -- Buffer size of the channel Cilium uses to receive certificate expiration events from auth handlers.\n rotatedIdentitiesQueueSize: 1024\n # -- Interval for garbage collection of auth map entries.\n gcInterval: \"5m0s\"\n # Configuration for Cilium's service-to-service mutual authentication using TLS handshakes.\n # Note that this is not full mTLS support without also enabling encryption of some form.\n # Current encryption options are WireGuard or IPsec, configured in encryption block above.\n mutual:\n # -- Port on the agent where mutual authentication handshakes between agents will be performed\n port: 4250\n # -- Timeout for connecting to the remote node TCP socket\n connectTimeout: 5s\n # Settings for SPIRE\n spire:\n # -- Enable SPIRE integration (beta)\n enabled: false\n # -- Annotations to be added to all top-level spire objects (resources under templates/spire)\n annotations: {}\n # Settings to control the SPIRE installation and configuration\n install:\n # -- Enable SPIRE installation.\n # This will only take effect only if authentication.mutual.spire.enabled is true\n enabled: true\n # -- SPIRE namespace to install into\n namespace: cilium-spire\n # -- SPIRE namespace already exists. 
Set to true if Helm should not create, manage, and import the SPIRE namespace.\n existingNamespace: false\n # -- init container image of SPIRE agent and server\n initImage:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/busybox\"\n tag: \"1.36.1\"\n digest: \"\"\n useDigest: false\n pullPolicy: \"IfNotPresent\"\n # SPIRE agent configuration\n agent:\n # -- SPIRE agent image\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/spire-agent\"\n tag: \"1.9.6\"\n digest: \"sha256:5106ac601272a88684db14daf7f54b9a45f31f77bb16a906bd5e87756ee7b97c\"\n useDigest: true\n pullPolicy: \"IfNotPresent\"\n # -- SPIRE agent service account\n serviceAccount:\n create: true\n name: spire-agent\n # -- SPIRE agent annotations\n annotations: {}\n # -- SPIRE agent labels\n labels: {}\n # -- SPIRE Workload Attestor kubelet verification.\n skipKubeletVerification: true\n # -- SPIRE agent tolerations configuration\n # By default it follows the same tolerations as the agent itself\n # to allow the Cilium agent on this node to connect to SPIRE.\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations:\n - key: node.kubernetes.io/not-ready\n effect: NoSchedule\n - key: node-role.kubernetes.io/master\n effect: NoSchedule\n - key: node-role.kubernetes.io/control-plane\n effect: NoSchedule\n - key: node.cloudprovider.kubernetes.io/uninitialized\n effect: NoSchedule\n value: \"true\"\n - key: CriticalAddonsOnly\n operator: \"Exists\"\n # -- SPIRE agent affinity configuration\n affinity: {}\n # -- SPIRE agent nodeSelector configuration\n # ref: ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector\n nodeSelector: {}\n # -- Security context to be added to spire agent pods.\n # SecurityContext holds pod-level security attributes and common container settings.\n # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod\n podSecurityContext: {}\n # -- Security context to be added to spire agent containers.\n # SecurityContext holds pod-level security attributes and common container settings.\n # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container\n securityContext: {}\n server:\n # -- SPIRE server image\n image:\n # @schema\n # type: [null, string]\n # @schema\n override: ~\n repository: \"gcr.io/spectro-images-public/packs/cilium-oss/1.16.0/spire-server\"\n tag: \"1.9.6\"\n digest: \"sha256:59a0b92b39773515e25e68a46c40d3b931b9c1860bc445a79ceb45a805cab8b4\"\n useDigest: true\n pullPolicy: \"IfNotPresent\"\n # -- SPIRE server service account\n serviceAccount:\n create: true\n name: spire-server\n # -- SPIRE server init containers\n initContainers: []\n # -- SPIRE server annotations\n annotations: {}\n # -- SPIRE server labels\n labels: {}\n # SPIRE server service configuration\n service:\n # -- Service type for the SPIRE server service\n type: ClusterIP\n # -- Annotations to be added to the SPIRE server service\n annotations: {}\n # -- Labels to be added to the SPIRE server service\n labels: {}\n # -- SPIRE server affinity configuration\n affinity: {}\n # -- SPIRE server nodeSelector configuration\n # ref: ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector\n nodeSelector: {}\n # -- SPIRE server tolerations 
configuration\n # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/\n tolerations: []\n # SPIRE server datastorage configuration\n dataStorage:\n # -- Enable SPIRE server data storage\n enabled: true\n # -- Size of the SPIRE server data storage\n size: 1Gi\n # -- Access mode of the SPIRE server data storage\n accessMode: ReadWriteOnce\n # @schema\n # type: [null, string]\n # @schema\n # -- StorageClass of the SPIRE server data storage\n storageClass: null\n # -- Security context to be added to spire server pods.\n # SecurityContext holds pod-level security attributes and common container settings.\n # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod\n podSecurityContext: {}\n # -- Security context to be added to spire server containers.\n # SecurityContext holds pod-level security attributes and common container settings.\n # ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container\n securityContext: {}\n # SPIRE CA configuration\n ca:\n # -- SPIRE CA key type\n # AWS requires the use of RSA. EC cryptography is not supported\n keyType: \"rsa-4096\"\n # -- SPIRE CA Subject\n subject:\n country: \"US\"\n organization: \"SPIRE\"\n commonName: \"Cilium SPIRE CA\"\n # @schema\n # type: [null, string]\n # @schema\n # -- SPIRE server address used by Cilium Operator\n #\n # If k8s Service DNS along with port number is used (e.g. \u003cservice-name\u003e.\u003cnamespace\u003e.svc(.*):\u003cport-number\u003e format),\n # Cilium Operator will resolve its address by looking up the clusterIP from Service resource.\n #\n # Example values: 10.0.0.1:8081, spire-server.cilium-spire.svc:8081\n serverAddress: ~\n # -- SPIFFE trust domain to use for fetching certificates\n trustDomain: spiffe.cilium\n # -- SPIRE socket path where the SPIRE delegated api agent is listening\n adminSocketPath: /run/spire/sockets/admin.sock\n # -- SPIRE socket path where the SPIRE workload agent is listening.\n # Applies to both the Cilium Agent and Operator\n agentSocketPath: /run/spire/sockets/agent/agent.sock\n # -- SPIRE connection timeout\n connectionTimeout: 30s","registry":{"metadata":{"uid":"673cd7018238fc2bcd8e6f36","name":"Toolbox","kind":"pack","isPrivate":false,"providerType":"","isSyncSupported":true}}}]},"variables":[]}} + ``` + +17. Click **Validate**. + +18. In the **Select repositories** pop-up window, select the repository where you want to store the pack from the **drop-down Menu**. + +19. On the **Profiles** page, click **Add Cluster Profile**. + +20. Fill out the basic information and ensure **Type** is set to **Add-on**. Click **Next** when done. + +21. In **Profile Layers**, click **Add New Pack**. + +22. Enter **Cilium** in the search box, and select it. It appears in the **System App** category. + +23. Click the **Presets drop-down Menu**. + +24. For **IPAM mode**, select **Cluster Pool**. + +25. In the YAML editor, search for **clusterPoolIPv4PodCIDRList**. This parameter specifies the overall IP ranges available for pod networking across all your hybrid nodes. + + Adjust the pod CIDR list to match the IP address entered in step 11 for **Remote Pod CIDRs**. For example, `192.168.0.0`. + +26. In the YAML editor, search for **clusterPoolIPv4MaskSize**. This parameter determines the subnet mask size used for pod IP allocation within each hybrid node. + + Adjust the mask size based on your required pods per hybrid node. For example, `/25`. + +27. 
In the Presets, find the **cilium-agent - Hybrid Nodes Affinity** option, and select **Amazon EKS**. + + This will add the following entry to `charts.cilium.affinity`. No changes are required afterwards. + + ```yaml hideClipboard + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: eks.amazonaws.com/compute-type + operator: In + values: + - hybrid + ``` + + :::info + + The Cilium [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) is configured to run on your hybrid nodes only. If no hybrid nodes are present in your cluster, the DaemonSet will remain inactive. + + ::: + +28. Click **Confirm & Create**. + +29. Click **Next**, and then click **Finish Configuration**. + +30. From the left **Main Menu**, select **Clusters**. + +31. Select your cluster to view its **Overview** tab. + +32. Click **Attach Profile**. + +33. Select the **Cilium** add-on profile that was created, and click **Confirm**. + +34. On the **Cluster profiles** page, click **Save**. This will add the profile to your cluster. + +35. If you enable [Cilium Envoy](https://docs.cilium.io/en/latest/security/network/proxy/envoy/) or other Cilium add-ons, you must apply the following label to all AWS cloud worker nodes. + + ```yaml + cilium.io/no-schedule: "true" + ``` + + This ensures that Kubernetes does not attempt to schedule Cilium add-on pods on these cloud nodes, as the add-on pods are reserved for your hybrid nodes. + + Example command: + + ```shell + kubectl label node <node-name> cilium.io/no-schedule=true + ``` + +You can now manage your imported cluster in Palette. + +## Access Imported Cluster with Kubectl + +You can access your imported Amazon EKS cluster by using the kubectl CLI, which requires authentication. + +### Default AWS Authentication + +To access an Amazon EKS cluster with the AWS CLI's built-in authentication, you need to do the following: + +- Configure your AWS CLI credentials. Refer to [Configuration and Credential File Settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) for guidance. + +- Ensure you have the following IAM permissions to download the kubeconfig and access the Amazon EKS cluster. Refer to [Amazon EKS identity-based policy examples](https://docs.aws.amazon.com/eks/latest/userguide/security-iam-id-based-policy-examples.html) for guidance. + + - `eks:DescribeCluster` + - `eks:AccessKubernetesApi` + +- Download the kubeconfig file from the Amazon EKS cluster. Refer to [Connect kubectl to an EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) for guidance. + +Once you have downloaded your kubeconfig, you can use kubectl to access your cluster and apply manifests. + +### Custom OIDC Provider + +To access an Amazon EKS cluster with a custom [OpenID Connect (OIDC)](https://openid.net/developers/how-connect-works/) provider, you need to do the following: + +- If you have not yet installed an OIDC provider for your cluster, install [kubelogin](https://github.com/int128/kubelogin). We recommend kubelogin for its ease of authentication. Visit [Grant users access to Kubernetes with an external OIDC provider](https://docs.aws.amazon.com/eks/latest/userguide/authenticate-oidc-identity-provider.html) to learn how to associate an OIDC identity provider with your cluster. + +- Ensure your OIDC user or group is mapped to an `admin` or `clusteradmin` Kubernetes RBAC Role or ClusterRole.
To learn how to map a Kubernetes role to users and groups, refer to [Create Role Bindings](../../../cluster-management/cluster-rbac.md#create-role-bindings). For an example, refer to [Use RBAC with OIDC](../../../../integrations/kubernetes.md#use-rbac-with-oidc). + +- Configure your AWS CLI credentials. Refer to [Configuration and Credential File Settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) for guidance. + +- Ensure you have the following IAM permission to download the kubeconfig from the Amazon EKS cluster. Refer to [Amazon EKS identity-based policy examples](https://docs.aws.amazon.com/eks/latest/userguide/security-iam-id-based-policy-examples.html) for guidance. + + - `eks:DescribeCluster` + +- Download the kubeconfig file from the Amazon EKS cluster. Refer to [Connect kubectl to an EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) for guidance. + + - Once the kubeconfig is downloaded, you must configure it to use your OIDC provider. Refer to [Using kubectl](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-kubectl) for guidance. + +Once you have downloaded your kubeconfig and configured it to use your OIDC provider, you can use kubectl to access your cluster and apply manifests. + +## Next Steps + +Learn how to create a hybrid node pool on your cluster and add your edge hosts to the pool. + +## Resources + +- [Add AWS Account](../add-aws-accounts.md) + +- [Palette IP Addresses](../../../../architecture/palette-public-ips.md) + +- [Create Role Bindings](../../../cluster-management/cluster-rbac.md#create-role-bindings) + +- [Use RBAC with OIDC](../../../../integrations/kubernetes.md#use-rbac-with-oidc) diff --git a/static/assets/docs/images/aws_eks-hybrid_architecture_eks-hybrid-architecture.webp b/static/assets/docs/images/aws_eks-hybrid_architecture_eks-hybrid-architecture.webp new file mode 100644 index 0000000000..23f822d2ba Binary files /dev/null and b/static/assets/docs/images/aws_eks-hybrid_architecture_eks-hybrid-architecture.webp differ diff --git a/static/assets/docs/images/aws_eks-hybrid_import-eks-cluster-enable-hybrid-mode_cluster-import-procedure.webp b/static/assets/docs/images/aws_eks-hybrid_import-eks-cluster-enable-hybrid-mode_cluster-import-procedure.webp new file mode 100644 index 0000000000..7b412a2e63 Binary files /dev/null and b/static/assets/docs/images/aws_eks-hybrid_import-eks-cluster-enable-hybrid-mode_cluster-import-procedure.webp differ diff --git a/static/assets/docs/images/aws_eks-hybrid_import-eks-cluster-enable-hybrid-mode_enable-hybrid-mode.webp b/static/assets/docs/images/aws_eks-hybrid_import-eks-cluster-enable-hybrid-mode_enable-hybrid-mode.webp new file mode 100644 index 0000000000..f14828117c Binary files /dev/null and b/static/assets/docs/images/aws_eks-hybrid_import-eks-cluster-enable-hybrid-mode_enable-hybrid-mode.webp differ