diff --git a/keps/sig-network/4962-network-topology-standard/README.md b/keps/sig-network/4962-network-topology-standard/README.md new file mode 100644 index 00000000000..49855bc30c0 --- /dev/null +++ b/keps/sig-network/4962-network-topology-standard/README.md @@ -0,0 +1,867 @@ + +# KEP-4962: Standardizing the Representation of Cluster Network Topology + + + + + + +- [Release Signoff Checklist](#release-signoff-checklist) +- [Summary](#summary) +- [Motivation](#motivation) + - [Goals](#goals) + - [Non-Goals](#non-goals) +- [Proposal](#proposal) + - [User Stories (Optional)](#user-stories-optional) + - [Story 1](#story-1) + - [Story 2](#story-2) + - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional) + - [Risks and Mitigations](#risks-and-mitigations) +- [Design Details](#design-details) + - [Test Plan](#test-plan) + - [Prerequisite testing updates](#prerequisite-testing-updates) + - [Unit tests](#unit-tests) + - [Integration tests](#integration-tests) + - [e2e tests](#e2e-tests) + - [Graduation Criteria](#graduation-criteria) + - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy) + - [Version Skew Strategy](#version-skew-strategy) +- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire) + - [Feature Enablement and Rollback](#feature-enablement-and-rollback) + - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning) + - [Monitoring Requirements](#monitoring-requirements) + - [Dependencies](#dependencies) + - [Scalability](#scalability) + - [Troubleshooting](#troubleshooting) +- [Implementation History](#implementation-history) +- [Drawbacks](#drawbacks) +- [Alternatives](#alternatives) +- [Infrastructure Needed (Optional)](#infrastructure-needed-optional) + + +## Release Signoff Checklist + + + +Items marked with (R) are required *prior to targeting to a milestone / release*. 
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
+  - [ ] e2e Tests for all Beta API Operations (endpoints)
+  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [ ] (R) Graduation criteria is in place
+  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [ ] (R) Production readiness review completed
+- [ ] (R) Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+[kubernetes.io]: https://kubernetes.io/
+[kubernetes/enhancements]: https://git.k8s.io/enhancements
+[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
+[kubernetes/website]: https://git.k8s.io/website
+
+## Summary
+
+This document proposes a standard for declaring network topology in Kubernetes clusters,
+representing the hierarchy of nodes, switches, and interconnects.
+In this context, a `switch` can refer to a physical network device or a collection of such devices
+that are in close proximity and provide similar functionality.
+
+## Motivation
+
+With the rise of multi-node Kubernetes workloads that demand intensive inter-node communication,
+scheduling pods in close network proximity is becoming essential.
+Examples of such workloads include AI/ML training jobs and sets of interdependent, data-intensive services.
+
+However, Kubernetes currently lacks a standard method to describe cluster network topology.
+By establishing a consistent way to represent cluster network topology, this proposal lays the groundwork for
+advanced scheduling capabilities that take network topology and performance into account.
+
+Some major CSPs already offer mechanisms to discover instance network topology:
+- **Amazon AWS** provides the [DescribeInstanceTopology API](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceTopology.html).
+- **Google Cloud Platform (GCP)** exposes tools via the [Google Cloud SDK](https://cloud.google.com/go/docs/reference/cloud.google.com/go/compute/latest/apiv1), allowing the retrieval of rack and cluster IDs, which can be used to reconstruct network hierarchies.
+- **Oracle Cloud Infrastructure (OCI)** offers the [Capacity Topology API](https://docs.oracle.com/en-us/iaas/tools/oci-cli/3.50.1/oci_cli_docs/cmdref/compute/capacity-topology.html), providing topology-related information for their compute nodes.
+
+Beyond CSPs, certain on-premises clusters support network topology discovery, though this capability depends on the features of the underlying network switch vendors.
+
+An open-source project, [Topograph](https://github.com/NVIDIA/topograph), has implemented these approaches and is successfully deployed in production environments.
+However, what remains missing is a common, standardized method to convey network topology information to the Kubernetes ecosystem.
+
+This gap creates challenges for developing control plane components and applications that could leverage network-topology-aware features.
+
+Some CSPs have already taken steps to describe cluster network topology. For instance, AWS has introduced `topology.k8s.aws/network-node-layer-N` node labels, which represent its 3-tier networking structure. However, this solution is specific to AWS and does not cater to broader, cross-cloud use cases.
+
+In this KEP, we propose establishing a standardized representation of network topology within Kubernetes clusters.
+
+Such topology information could significantly enhance various Kubernetes components and features, including:
+- Pod affinity settings in deployments and pod specs.
+- Topology-aware scheduling in Kueue.
+- Development of Kubernetes-native scheduler plugins for network-topology-aware scheduling, such as:
+  - Topology-aware gang-scheduling plugin.
+  - Gang-scheduling auto-scaler.
+  - Device resource allocation (DRA) scheduler plugin.
+
+### Goals
+
+- Introduce a standard way of representing network topology in Kubernetes clusters
+
+### Non-Goals
+
+- Implement a network-topology-aware gang-scheduling scheduler plugin
+- Define or implement network topology discovery mechanisms for CSPs or on-premises environments
+
+## Proposal
+
+We propose a new node label type to capture network topology information:
+
+### Network Topology Label
+Format: `network.topology.kubernetes.io/<switch-type>: <switch-name>`, where
+- `<switch-type>` defines the topology and characteristics of a network switch. The term `switch` may refer to a physical network device or a collection of closely connected devices with similar functionality.
+- `<switch-name>` is a unique identifier of the switch at the given hierarchy layer.
+
+We propose to use the following four network hierarchy layer types as values of `<switch-type>`:
+1. `accelerator`: Network interconnect for direct accelerator communication (e.g., a multi-node NVLink interconnect between NVIDIA GPUs).
+2. `block`: Rack-level switches connecting hosts in one or more racks as a block.
+3. `datacenter`: Spine-level switches connecting multiple blocks inside a datacenter.
+4. `zone`: Zonal switches connecting multiple datacenters inside an availability zone.
+
+These types will accommodate the majority of common network hierarchies across different CSP and on-premises environments.
+Having these labels available in Kubernetes clusters will help in designing cloud-agnostic scheduling systems.
+Schedulers can prioritize hierarchy layers according to the order outlined above, from the closest layer (`accelerator`) to the farthest (`zone`),
+providing a standardized approach for network-aware scheduling across a range of configurations.
+
+### User Stories
+
+#### Story 1
+
+As a data scientist running a data-intensive, large-scale AI training job, I want to optimize its runtime
+by binding pods to nodes that are in close network proximity.
+This ensures better performance for my distributed workloads.
+
+Additionally, I do not want to modify the job specification when migrating the job across different Kubernetes environments.
+
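+For illustration, in a cluster that follows the label convention above, a node would typically carry one label per applicable network hierarchy layer. The snippet below is a hypothetical sketch; the switch names are illustrative and will differ per environment:
+
+```yaml
+# Hypothetical labels on one of the nodes running the training pods
+network.topology.kubernetes.io/accelerator: nvl10   # accelerator interconnect (e.g., NVLink domain)
+network.topology.kubernetes.io/block: sw11          # rack-level (block) switch
+network.topology.kubernetes.io/datacenter: sw21     # spine-level (datacenter) switch
+network.topology.kubernetes.io/zone: sw31           # zonal switch
+```
+
+Because these label keys are the same in every conforming cluster, the affinity rules below can reference them as `topologyKey` values without any per-cloud changes.
+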
+ +To achieve this, I can leverage the pod affinity feature of the default Kubernetes scheduler with topology keys: +```yaml + spec: + affinity: + podAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 70 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: app + operator: In + values: + - training + topologyKey: network.topology.kubernetes.io/block + - weight: 90 + podAffinityTerm: + labelSelector: + matchExpressions: + - key: app + operator: In + values: + - training + topologyKey: network.topology.kubernetes.io/accelerator +``` +#### Story 2 + +As a developer, I would like to extend Kubernetes-native scheduling capabilities to +support gang-scheduling for multi-node jobs. + +My goal is to ensure that this plugin remains cloud-agnostic. The design leverages the +presence of `network.topology.kubernetes.io/...` node labels to reconstruct the cluster +network topology and implement a network-aware placement algorithm. + +### Notes/Constraints/Caveats (Optional) + +The delivery method for the cluster network topology lies outside the scope of this proposal. +However, this information could be: +- Provided directly by CSPs, where CSPs apply node labels during node creation. +- Extracted from CSPs using specialized tools like [Topograph](https://github.com/NVIDIA/topograph). +- Manually configured by cluster administrators. +- Derived using a combination of the above methods. + +### Risks and Mitigations + + + +## Design Details + +### Example: network topology representation with reserved network types: + +Consider the following network topology: + +![Network topology with reserved network types](./img/topo-reserved-labels.png) + +Let's examine node `vm12` as an example. This node is connected to NVSwitch `nvl10` and network switch `sw11`, which in turn is connected to switches `sw21` and `sw31`. +In this case, node `vm12` labels would be: +```yaml +network.topology.kubernetes.io/accelerator: nvl10 +network.topology.kubernetes.io/block: sw11 +network.topology.kubernetes.io/datacenter: sw21 +network.topology.kubernetes.io/zone: sw31 +``` + +### Test Plan + + + +[ ] I/we understand the owners of the involved components may require updates to +existing tests to make this code solid enough prior to committing the changes necessary +to implement this enhancement. + +##### Prerequisite testing updates + + + +##### Unit tests + + + + + +- ``: `` - `` + +##### Integration tests + + + + + +- : + +##### e2e tests + + + +- : + +### Graduation Criteria + + + +### Upgrade / Downgrade Strategy + + + +### Version Skew Strategy + + + +## Production Readiness Review Questionnaire + + + +### Feature Enablement and Rollback + + + +###### How can this feature be enabled / disabled in a live cluster? + + + +- [ ] Feature gate (also fill in values in `kep.yaml`) + - Feature gate name: + - Components depending on the feature gate: +- [ ] Other + - Describe the mechanism: + - Will enabling / disabling the feature require downtime of the control + plane? + - Will enabling / disabling the feature require downtime or reprovisioning + of a node? + +###### Does enabling the feature change any default behavior? + + + +###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)? + + + +###### What happens if we reenable the feature if it was previously rolled back? + +###### Are there any tests for feature enablement/disablement? + + + +### Rollout, Upgrade and Rollback Planning + + + +###### How can a rollout or rollback fail? 
Can it impact already running workloads? + + + +###### What specific metrics should inform a rollback? + + + +###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested? + + + +###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.? + + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +- [ ] Events + - Event Reason: +- [ ] API .status + - Condition name: + - Other field: +- [ ] Other (treat as last resort) + - Details: + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? + + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)? + + + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? + +## Implementation History + + + +## Drawbacks + + + +## Alternatives + +One alternative is to delegate network topology representation to CSPs. For example, AWS uses network topology +labels in the format `topology.k8s.aws/network-node-layer-N` to describe its three-tier network hierarchy. + +However, this approach is CSP-specific and tightly coupled to a predefined network layout. + +In contrast, our proposal provides a cloud-agnostic approach that accommodates commonly used network topologies. + +## Infrastructure Needed (Optional) + + diff --git a/keps/sig-network/4962-network-topology-standard/img/topo-reserved-labels.png b/keps/sig-network/4962-network-topology-standard/img/topo-reserved-labels.png new file mode 100644 index 00000000000..8bb65ac93e6 Binary files /dev/null and b/keps/sig-network/4962-network-topology-standard/img/topo-reserved-labels.png differ diff --git a/keps/sig-network/4962-network-topology-standard/kep.yaml b/keps/sig-network/4962-network-topology-standard/kep.yaml new file mode 100644 index 00000000000..e69de29bb2d