From c8879c1c4e84c02b819e0a9616f59d6aba55d443 Mon Sep 17 00:00:00 2001 From: HHDocs Date: Fri, 8 Nov 2024 21:07:08 +0000 Subject: [PATCH] Deployed 35be28b to beta-1 with MkDocs 1.5.3 and mike 1.1.2 --- beta-1/404.html | 2 +- beta-1/architecture/fabric/index.html | 2 +- beta-1/architecture/overview/index.html | 2 +- beta-1/concepts/overview/index.html | 2 +- beta-1/contribute/docs/index.html | 2 +- beta-1/contribute/overview/index.html | 2 +- beta-1/getting-started/download/index.html | 6 +- beta-1/index.html | 2 +- .../install-upgrade/build-wiring/index.html | 152 ++++- beta-1/install-upgrade/config/index.html | 2 +- beta-1/install-upgrade/overview/index.html | 2 +- .../install-upgrade/requirements/index.html | 16 +- .../supported-devices/index.html | 2 +- beta-1/reference/api/index.html | 2 +- beta-1/reference/cli/index.html | 2 +- beta-1/reference/profiles/index.html | 2 +- beta-1/release-notes/index.html | 2 +- beta-1/search/search_index.json | 2 +- beta-1/sitemap.xml | 58 +- beta-1/sitemap.xml.gz | Bin 463 -> 463 bytes beta-1/troubleshooting/overview/index.html | 2 +- beta-1/user-guide/connections/index.html | 2 +- beta-1/user-guide/devices/index.html | 2 +- beta-1/user-guide/external/index.html | 2 +- beta-1/user-guide/grafana/index.html | 2 +- beta-1/user-guide/harvester/index.html | 2 +- beta-1/user-guide/overview/index.html | 2 +- beta-1/user-guide/profiles/index.html | 2 +- beta-1/user-guide/shrink-expand/index.html | 2 +- beta-1/user-guide/vpcs/index.html | 2 +- beta-1/vlab/demo/index.html | 634 +++++++++--------- beta-1/vlab/overview/index.html | 118 +++- beta-1/vlab/running/index.html | 105 ++- 33 files changed, 708 insertions(+), 431 deletions(-) diff --git a/beta-1/404.html b/beta-1/404.html index 681568d9..6f479b93 100644 --- a/beta-1/404.html +++ b/beta-1/404.html @@ -375,7 +375,7 @@ - Overview + VLAB Overview diff --git a/beta-1/architecture/fabric/index.html b/beta-1/architecture/fabric/index.html index e5877f2c..19e9e2b8 100644 --- 
a/beta-1/architecture/fabric/index.html +++ b/beta-1/architecture/fabric/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/architecture/overview/index.html b/beta-1/architecture/overview/index.html index a6d305e3..3fd5c433 100644 --- a/beta-1/architecture/overview/index.html +++ b/beta-1/architecture/overview/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/concepts/overview/index.html b/beta-1/concepts/overview/index.html index 9b08a2a8..f7ec1e36 100644 --- a/beta-1/concepts/overview/index.html +++ b/beta-1/concepts/overview/index.html @@ -485,7 +485,7 @@ - Overview + VLAB Overview diff --git a/beta-1/contribute/docs/index.html b/beta-1/contribute/docs/index.html index d1cedd8f..f8fa9032 100644 --- a/beta-1/contribute/docs/index.html +++ b/beta-1/contribute/docs/index.html @@ -395,7 +395,7 @@ - Overview + VLAB Overview diff --git a/beta-1/contribute/overview/index.html b/beta-1/contribute/overview/index.html index 00299ef0..b4a1f493 100644 --- a/beta-1/contribute/overview/index.html +++ b/beta-1/contribute/overview/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/getting-started/download/index.html b/beta-1/getting-started/download/index.html index 9f2dd75b..6e95827e 100644 --- a/beta-1/getting-started/download/index.html +++ b/beta-1/getting-started/download/index.html @@ -471,7 +471,7 @@ - Overview + VLAB Overview @@ -1302,7 +1302,7 @@

Getting access

Please submit a ticket with the request using the Hedgehog Support Portal.

After that, you will be provided with credentials to access the software on GitHub Packages. To use the software, log in to the registry using the following command:

-
docker login ghcr.io
+
docker login ghcr.io --username provided_user_name --password provided_token_string
 

Downloading hhfab

Currently hhfab is supported on Linux x86/arm64 (tested on Ubuntu 22.04) and MacOS x86/arm64 for building @@ -1335,7 +1335,7 @@

Next steps

Last update: - October 24, 2024 + October 31, 2024
Created: diff --git a/beta-1/index.html b/beta-1/index.html index db3c14a8..077afa69 100644 --- a/beta-1/index.html +++ b/beta-1/index.html @@ -405,7 +405,7 @@ - Overview + VLAB Overview diff --git a/beta-1/install-upgrade/build-wiring/index.html b/beta-1/install-upgrade/build-wiring/index.html index 52cd8744..b5359d0f 100644 --- a/beta-1/install-upgrade/build-wiring/index.html +++ b/beta-1/install-upgrade/build-wiring/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview @@ -1500,7 +1500,26 @@

Sample Switch Configuration

Design Discussion

This section is meant to help the reader understand how to assemble the primitives presented by the Fabric API into a functional fabric.

VPC

-

A VPC allows for isolation at layer 3. This is the main building block for users when creating their architecture. Hosts inside of a VPC belong to the same broadcast domain and can communicate with each other, if desired a single VPC can be configured with multiple broadcast domains. The hosts inside of a VPC will likely need to connect to other VPCs or the outside world. To communicate between two VPC a peering will need to be created. A VPC can be a logical separation of workloads. By separating these workloads additional controls are available. The logical separation doesn't have to be the traditional database, web, and compute layers it could be development teams who need isolation, it could be tenants inside of an office building, or any separation that allows for better control of the network. Once your VPCs are decided, the rest of the fabric will come together. With the VPCs decided traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows for the VPC to span more than a than one switch, which provides great flexibility, for instance workload mobility.

+

A VPC provides isolation at layer 3 and is the main building block users work with when designing their architecture. Hosts inside a VPC belong to the same broadcast domain and can communicate with each other; if desired, a single VPC can be configured with multiple broadcast domains. Hosts inside a VPC will likely need to reach other VPCs or the outside world; to communicate between two VPCs, a peering must be created. A VPC can represent a logical separation of workloads, and separating workloads this way makes additional controls available. The separation does not have to follow the traditional database, web, and compute tiers: it could be development teams who need isolation, tenants inside an office building, or any other grouping that allows for better control of the network. Once your VPCs are decided, the rest of the fabric comes together: traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows a VPC to span more than one switch, which provides great flexibility.

+
graph TD
+    L1([Leaf 1])
+    L2([Leaf 2])
+    S1["Server 1
+      10.7.71.1"]
+    S2["Server 2
+      172.16.2.31"]
+    S3["Server 3
+       192.168.18.85"]
+
+    L1 <--> S1
+    L1 <--> S2
+    L2 <--> S3
+
+    subgraph VPC 1
+    S1
+    S2
+    S3
+    end
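The VPC shown above, with its three servers in three different broadcast domains, could be described with a resource along these lines. This is an illustrative sketch only: the API group/version, field names, and VLAN values are assumptions, not the authoritative schema — consult the Fabric API reference for the exact shape.

```yaml
# Hypothetical VPC definition matching the diagram above.
# API group/version and field names are assumptions, not the real schema.
apiVersion: vpc.githedgehog.com/v1alpha2
kind: VPC
metadata:
  name: vpc-1
spec:
  subnets:
    subnet-01:                 # each subnet is its own broadcast domain
      subnet: 10.7.71.0/24     # Server 1
      vlan: 1001
    subnet-02:
      subnet: 172.16.2.0/24    # Server 2
      vlan: 1002
    subnet-03:
      subnet: 192.168.18.0/24  # Server 3
      vlan: 1003
```

Because all three subnets belong to the same VPC, the servers can reach each other without any peering being configured.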

Connection

A connection represents the physical wires in your data center: it connects switches to other switches or switches to servers.

Server Connections

@@ -1511,14 +1530,141 @@

Server Connections

  • MCLAG - Two cables going to two different switches, also called dual homing. The switches will need a fabric link between them.
  • ESLAG - Two to four cables going to different switches, also called multi-homing. If four links are used, four switches are connected to a single server, which needs four NIC ports.
  • +
    graph TD
    +    S1([Spine 1])
    +    S2([Spine 2])
    +    L1([Leaf 1])
    +    L2([Leaf 2])
    +    L3([Leaf 3])
    +    L4([Leaf 4])
    +    L5([Leaf 5])
    +    L6([Leaf 6])
    +    L7([Leaf 7])
    +
    +    TS1[Server1]
    +    TS2[Server2]
    +    TS3[Server3]
    +    TS4[Server4]
    +
    +    S1 & S2 ---- L1 & L2 & L3 & L4 & L5 & L6 & L7
    +    L1 <-- Bundled --> TS1
    +    L1 <-- Bundled --> TS1
    +    L1 <-- Unbundled --> TS2
    +    L2 <-- MCLAG --> TS3
    +    L3 <-- MCLAG --> TS3
    +    L4 <-- ESLAG --> TS4
    +    L5 <-- ESLAG --> TS4
    +    L6 <-- ESLAG --> TS4
    +    L7 <-- ESLAG --> TS4
    +
    +    subgraph VPC 1
    +    TS1
    +    TS2
    +    TS3
    +    TS4
    +    end
    +
    +    subgraph MCLAG
    +    L2
    +    L3
    +    end
    +
    +    subgraph ESLAG
    +    L3
    +    L4
    +    L5
    +    L6
    +    L7
    +    end
    +
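The MCLAG connection in the diagram (Server 3 dual-homed to Leaf 2 and Leaf 3) could be expressed roughly as follows. This is a sketch under assumptions: the API group/version, field names, and port naming are illustrative placeholders — see the Fabric API reference for the exact schema.

```yaml
# Hypothetical Connection for Server 3's MCLAG from the diagram above.
# Exact schema and port names are assumptions.
apiVersion: wiring.githedgehog.com/v1alpha2
kind: Connection
metadata:
  name: server-3--mclag--leaf-02--leaf-03
spec:
  mclag:
    links:                          # one link to each switch in the MCLAG pair
      - server: {port: server-3/enp2s1}
        switch: {port: leaf-02/Ethernet1}
      - server: {port: server-3/enp2s2}
        switch: {port: leaf-03/Ethernet1}
```

An unbundled or bundled connection would list a single switch, and an ESLAG connection would spread its links across up to four switches, mirroring the cabling options described above.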

    Fabric Connections

    Fabric connections serve as links between switches; they form the fabric of the network.

    VPC Peering

    VPCs need VPC Peerings to talk to each other. VPC Peerings come in two varieties: local and remote.

    +
    graph TD
    +    S1([Spine 1])
    +    S2([Spine 2])
    +    L1([Leaf 1])
    +    L2([Leaf 2])
    +    TS1[Server1]
    +    TS2[Server2]
    +    TS3[Server3]
    +    TS4[Server4]
    +
    +    S1 & S2 <--> L1 & L2
    +    L1 <--> TS1 & TS2
    +    L2 <--> TS3 & TS4
    +
    +
    +    subgraph VPC 1
    +    TS1
    +    TS2
    +    end
    +
    +    subgraph VPC 2
    +    TS3
    +    TS4
    +    end

    Local VPC Peering

    When there is no dedicated border/peering switch available in the fabric, local VPC peering can be used. This kind of peering sends traffic between the two VPCs on a switch where either of the VPCs has workloads attached. Due to a limitation in the SONiC network operating system, the bandwidth of this kind of peering is limited by the number of VPC loopbacks you selected while initializing the fabric. Traffic between the VPCs uses the loopback interface, and the bandwidth of this connection equals the bandwidth of the port used in the loopback.

    +

    graph TD
    +
    +    L1([Leaf 1])
    +    S1[Server1]
    +    S2[Server2]
    +    S3[Server3]
    +    S4[Server4]
    +
    +    L1 <-.2,loopback.-> L1;
    +    L1 <-.3.-> S1;
    +    L1 <--> S2 & S4;
    +    L1 <-.1.-> S3;
    +
    +    subgraph VPC 1
    +    S1
    +    S2
    +    end
    +
    +    subgraph VPC 2
    +    S3
    +    S4
    +    end
    +The dotted line in the diagram shows the traffic flow for local peering. The traffic originates in VPC 2, travels to the switch, travels out the first loopback port, into the second loopback port, and finally out the port destined for VPC 1.

    Remote VPC Peering

    Remote peering is used when you need a high-bandwidth connection between the VPCs and are willing to dedicate a switch to the peering traffic. It is done either on a border leaf or on a switch where neither of the VPCs is present. This kind of peering allows traffic between different VPCs at line rate, limited only by fabric bandwidth. Remote peering introduces a few additional hops and may cause a small increase in latency.

    +

    graph TD
    +    S1([Spine 1])
    +    S2([Spine 2])
    +    L1([Leaf 1])
    +    L2([Leaf 2])
    +    L3([Leaf 3])
    +    TS1[Server1]
    +    TS2[Server2]
    +    TS3[Server3]
    +    TS4[Server4]
    +
    +    S1 <-.5.-> L1;
    +    S1 <-.2.-> L2;
    +    S1 <-.3,4.-> L3;
    +    S2 <--> L1;
    +    S2 <--> L2;
    +    S2 <--> L3;
    +    L1 <-.6.-> TS1;
    +    L1 <--> TS2;
    +    L2 <--> TS3;
    +    L2 <-.1.-> TS4;
    +
    +
    +    subgraph VPC 1
    +    TS1
    +    TS2
    +    end
    +
    +    subgraph VPC 2
    +    TS3
    +    TS4
    +    end
    +The dotted line in the diagram shows the traffic flow for remote peering. The traffic could take a different path because of ECMP. It is important to note that Leaf 3 cannot have any servers from VPC 1 or VPC 2 on it, but it can have a different VPC attached to it.

    VPC Loopback

    A VPC loopback is a physical cable with both ends plugged into the same switch, preferably (but not necessarily) into adjacent ports. This loopback is what allows two different VPCs on the same switch to communicate with each other; it is required due to a Broadcom limitation.

    @@ -1527,7 +1673,7 @@

    October 24, 2024 + October 31, 2024
    Created: diff --git a/beta-1/install-upgrade/config/index.html b/beta-1/install-upgrade/config/index.html index 1d812216..38b6a413 100644 --- a/beta-1/install-upgrade/config/index.html +++ b/beta-1/install-upgrade/config/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/install-upgrade/overview/index.html b/beta-1/install-upgrade/overview/index.html index a1309ce6..4f6106db 100644 --- a/beta-1/install-upgrade/overview/index.html +++ b/beta-1/install-upgrade/overview/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/install-upgrade/requirements/index.html b/beta-1/install-upgrade/requirements/index.html index 2af8ad18..a559e36a 100644 --- a/beta-1/install-upgrade/requirements/index.html +++ b/beta-1/install-upgrade/requirements/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview @@ -1327,14 +1327,24 @@

    Out of Band Management Network

    In order to provision and manage the switches that comprise the fabric, an out-of-band management switch must also be installed. This network is to be used exclusively by the control node and the fabric switches; no other access is permitted. This switch (or switches) is not managed by the fabric. It is recommended that this switch have at least a 10GbE port and that this port connect to the control node.

    Control Node

      -
    • Fast SSDs for system/root is mandatory for Control Nodes
    • +
    • Fast SSDs for system/root is mandatory for Control Nodes
      • NVMe SSDs are recommended
      • DRAM-less NAND SSDs are not supported (e.g. Crucial BX500)
      • +
      +
    • 10 GbE port for connection to management network is recommended
    • Minimal (non-HA) setup is a single Control Node
    • (Future) Full (HA) setup is at least 3 Control Nodes
    • (Future) Extra nodes could be used for things like Logging, Monitoring, Alerting stack, and more
    +

    In internal testing Hedgehog uses a server with the following specifications:

    +
      +
    • CPU - AMD EPYC 4344P
    • +
    • Memory - 32 GiB DDR5 ECC 4800MT/s
    • +
    • Storage - PCIe Gen 4 NVMe M.2 400GB
    • +
    • Network - AOC-STG-i4S Intel X710-BM1 controller
    • +
    • Motherboard - H13SAE-MF
    • +

    Non-HA (minimal) setup - 1 Control Node

    • Control Node runs non-HA Kubernetes Control Plane installation with non-HA Hedgehog Fabric Control Plane on top of it
    • @@ -1442,7 +1452,7 @@

      Device participat Last update: - October 24, 2024 + November 7, 2024
      Created: diff --git a/beta-1/install-upgrade/supported-devices/index.html b/beta-1/install-upgrade/supported-devices/index.html index fde86929..a448a8c0 100644 --- a/beta-1/install-upgrade/supported-devices/index.html +++ b/beta-1/install-upgrade/supported-devices/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/reference/api/index.html b/beta-1/reference/api/index.html index eca46786..af711c6e 100644 --- a/beta-1/reference/api/index.html +++ b/beta-1/reference/api/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/reference/cli/index.html b/beta-1/reference/cli/index.html index 0d193beb..c925655e 100644 --- a/beta-1/reference/cli/index.html +++ b/beta-1/reference/cli/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/reference/profiles/index.html b/beta-1/reference/profiles/index.html index 49643b3a..cc800e03 100644 --- a/beta-1/reference/profiles/index.html +++ b/beta-1/reference/profiles/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/release-notes/index.html b/beta-1/release-notes/index.html index e904fde7..7fd7f60b 100644 --- a/beta-1/release-notes/index.html +++ b/beta-1/release-notes/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/search/search_index.json b/beta-1/search/search_index.json index 07cdcb19..113fe05e 100644 --- a/beta-1/search/search_index.json +++ b/beta-1/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

      The Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments. It comes without vendor lock-in.

      The Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to public cloud offerings. It provides a multi-tenant API to define the user intent on network isolation and connectivity, which is automatically transformed into configuration for switches and software appliances.

      You can read more about its concepts and architecture in the documentation.

      You can find out how to download and try the Fabric on the self-hosted fully virtualized lab or on hardware.

      "},{"location":"architecture/fabric/","title":"Hedgehog Network Fabric","text":"

      The Hedgehog Open Network Fabric is an open-source network architecture that provides connectivity between virtual and physical workloads and provides a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides a standard Kubernetes interface to manage the elements in the physical network and provides a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric provides isolation between different groups of workloads by placing them in different virtual networks called VPCs. To achieve this, it defines different abstractions starting from the physical network where a set of Connection objects defines how a physical server on the network connects to a physical switch on the fabric.

      "},{"location":"architecture/fabric/#underlay-network","title":"Underlay Network","text":"

      The Hedgehog Fabric currently supports two underlay network topologies.

      "},{"location":"architecture/fabric/#collapsed-core","title":"Collapsed Core","text":"

      A collapsed core topology is just a pair of switches connected in a MCLAG configuration with no other network elements. All workloads attach to these two switches.

      The leaves in this setup are configured to be in an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as gateways for workloads attached to them. The configuration of the underlay in the collapsed core is very simple and is ideal for very small deployments.

      "},{"location":"architecture/fabric/#spine-leaf","title":"Spine-Leaf","text":"

      A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and the spines providing connectivity between different leaves.

      This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. RFC7938 was used as the reference for establishing the underlay network.

      "},{"location":"architecture/fabric/#overlay-network","title":"Overlay Network","text":"

      The overlay network runs on top of the underlay network to create a virtual network. The overlay network isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic over VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP-EVPN/VXLAN to enable the creation and management of virtual networks on top of the physical one. The fabric supports multiple virtual networks over the same underlay network to support multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. The following subsections contain a high-level overview of how VPCs are implemented in the Hedgehog Fabric and its associated objects.

      "},{"location":"architecture/fabric/#vpc","title":"VPC","text":"

      The previous subsections have described what a VPC is, and how to attach workloads to a specific VPC. The following bullet points describe how VPCs are actually implemented in the network to ensure a private view of the network.

      • Each VPC is modeled as a VRF on each switch where there are VPC attachments defined for this VPC. The VRF is allocated its own VNI. The VRF is local to each switch and the VNI is global for the entire fabric. By mapping the VRF to a VNI and configuring an EVPN instance in each VRF, a shared L3VNI is established across the entire fabric. All VRFs participating in this VNI can freely communicate with each other without the need for a policy. A VLAN is allocated for each VRF which functions as an IRB VLAN for the VRF.
      • The VRF created on each switch corresponding to a VPC configures a BGP instance with EVPN to advertise its locally attached subnets and import routes from its peered VPCs. The BGP instance in the tenant VRFs does not establish neighbor relationships and is purely used to advertise locally attached routes into the VPC (all VRFs with the same L3VNI) across leaves in the network.
      • A VPC can have multiple subnets. Each subnet in the VPC is modeled as a VLAN on the switch. The VLAN is only locally significant and a given subnet might have different VLANs on different leaves in the network. A globally significant VNI is assigned to each subnet. This VNI is used to extend the subnet across different leaves in the network and provides a view of a single stretched L2 domain if the applications need it.
      • The Hedgehog Fabric has a built-in DHCP server which will automatically assign IP addresses to each workload depending on the VPC it belongs to. This is achieved by configuring a DHCP relay on each of the server facing VLANs. The DHCP server is accessible through the underlay network and is shared by all VPCs in the fabric. The inbuilt DHCP server is capable of identifying the source VPC of the request and assigning IP addresses from a pool allocated to the VPC at creation.
      • A VPC by default cannot communicate to anyone outside the VPC and specific peering rules are required to allow communication to external networks or to other VPCs.
      "},{"location":"architecture/fabric/#vpc-peering","title":"VPC Peering","text":"

      To enable communication between 2 different VPCs, one needs to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.

      • Local Peering: A local peering directly imports routes from another VPC locally. This is achieved by a simple import route from the peer VPC. In case there are no locally attached workloads to the peer VPC the fabric automatically creates a stub VPC for peering and imports routes from it. This allows VPCs to peer with each other without the need for a dedicated peering leaf. If a local peering is done for a pair of VPCs which have locally attached workloads, the fabric automatically allocates a pair of ports on the switch to route traffic between these VRFs using static routes. This is required because of limitations in the underlying platform. The net result of these limitations is that the bandwidth between these 2 VPCs is limited by the bandwidth of the loopback interfaces allocated on the switch. Traffic between the peered VPCs will not leave the switch that connects them.
      • Remote Peering: Remote peering is implemented using a dedicated peering switch or switches, used as a rendezvous point for the two VPCs in the fabric. The set of switches to be used for peering is determined by configuration in the peering policy. When a remote peering policy is applied for a pair of VPCs, the VRFs corresponding to these VPCs on the peering switch advertise default routes into their specific VRFs identified by the L3VNI. All traffic that does not belong to the VPCs is forwarded to the peering switch, which has routes to the other VPCs, and gets forwarded from there. The bandwidth limitation that exists in the local peering solution is solved here, as the bandwidth between the two VPCs is determined by the fabric cross-section bandwidth.
      "},{"location":"architecture/overview/","title":"Overview","text":"

      Under construction.

      "},{"location":"concepts/overview/","title":"Concepts","text":""},{"location":"concepts/overview/#introduction","title":"Introduction","text":"

      Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.

      Hedgehog Fabric consists of the following components:

      • Fabricator - special tool to install and configure Fabric, or to run virtual labs
      • Control Node - one or more Kubernetes nodes in a single cluster running Fabric software:
        • Fabric Controller - main control plane component that manages Fabric resources
      • Fabric Kubectl plugin (Fabric CLI) - kubectl plugin to manage Fabric resources in an easy way
      • Fabric Agent - runs on every switch and manages switch configuration
      "},{"location":"concepts/overview/#fabric-api","title":"Fabric API","text":"

      All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) named the Wiring Diagram. With this representation, Fabric defines switches, servers, control nodes, external systems and connections between them in a single place and then uses these definitions to deploy and manage the whole infrastructure. On top of the Wiring Diagram, Fabric provides a set of APIs to manage the VPCs and the connections between them and between VPCs and External systems.

      "},{"location":"concepts/overview/#wiring-diagram-api","title":"Wiring Diagram API","text":"

      Wiring Diagram consists of the following resources:

      • \"Devices\": describe any device in the Fabric and can be of two types:
        • Switch: configuration of the switch, containing for example: port group speeds, port breakouts, switch IP/ASN
        • Server: any physical server attached to the Fabric including Control Nodes
      • Connection: any logical connection for devices
        • usually it's a connection between two or more ports on two different devices
        • for example: MCLAG Peer Link, Unbundled/MCLAG server connections, Fabric connection between spine and leaf
      • VLANNamespace -> non-overlapping VLAN ranges for attaching servers
      • IPv4Namespace -> non-overlapping IPv4 ranges for VPC subnets
      "},{"location":"concepts/overview/#user-facing-api","title":"User-facing API","text":"
      • VPC API
        • VPC: Virtual Private Cloud, similar to a public cloud VPC, provides an isolated private network for the resources, with support for multiple subnets, each with user-defined VLANs and optional DHCP service
        • VPCAttachment: represents a specific VPC subnet assignment to the Connection object which means exact server port to a VPC binding
        • VPCPeering: enables VPC-to-VPC connectivity (could be Local where VPCs are used or Remote peering on the border/mixed leaves)
      • External API
        • External: definition of the \"external system\" to peer with (could be one or multiple devices such as edge/provider routers)
        • ExternalAttachment: configuration for a specific switch (using Connection object) describing how it connects to an external system
        • ExternalPeering: provides VPC with External connectivity by exposing specific VPC subnets to the external system and allowing inbound routes from it
      "},{"location":"concepts/overview/#fabricator","title":"Fabricator","text":"

      Installer builder and VLAB.

      • Installer builder based on a preset (currently: vlab for virtual and lab for physical)
        • Main input: Wiring Diagram
        • All input artifacts coming from OCI registry
        • Always full airgap (everything running from private registry)
        • Flatcar Linux for Control Node, generated ignition.json
        • Automatic K3s installation and private registry setup
        • All components and their dependencies running in Kubernetes
      • Integrated Virtual Lab (VLAB) management
      • Future:
        • In-cluster (control) Operator to manage all components
        • Upgrade handling for everything, starting with the Control Node OS
        • Installation progress, status and retries
        • Disaster recovery and backups
      "},{"location":"concepts/overview/#fabric","title":"Fabric","text":"

      Control plane and switch agent.

      • Currently Fabric is basically a single controller manager running in Kubernetes
        • It includes controllers for different CRDs and needs
        • For example, auto assigning VNIs to VPCs or generating the Agent configuration
        • Additionally, it's running the admission webhook for Hedgehog's CRD APIs
      • The Agent is watching for the corresponding Agent CRD in Kubernetes API
        • It applies the changes and saves the new configuration locally
        • It reports status and information back to the API
        • It can perform reinstallation and reboot of SONiC
      "},{"location":"contribute/docs/","title":"Documentation","text":""},{"location":"contribute/docs/#getting-started","title":"Getting started","text":"

      This documentation is done using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.

      In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine, as well as any editor of your choice, preferably one supporting Markdown preview. You can run the preview server using the following command:

      make serve\n

      Now you can open a continuously updated preview of your edits in your browser at http://127.0.0.1:8000. Pages are automatically refreshed while you're editing.

      Additionally, you can run

      make build\n

      to make sure that your changes build correctly and don't break the documentation.

      "},{"location":"contribute/docs/#workflow","title":"Workflow","text":"

      If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It opens the page in the GitHub editor, where you can make your changes and create a pull request.

      Please, never push to the master or release/* branches directly. Always create a pull request and wait for the review.

      Each pull request is automatically built and a preview is deployed. You can find the link to the preview in the pull request comments.

      "},{"location":"contribute/docs/#repository","title":"Repository","text":"

      Documentation is organized in per-release branches:

      • master - ongoing development, not released yet, referenced as dev version in the documentation
      • release/alpha-1/release/alpha-2 - alpha releases, referenced as alpha-1/alpha-2 versions in the documentation, if patches released for alpha-1, they'll be merged into release/alpha-1 branch
      • release/v1.0 - first stable release, referenced as v1.0 version in the documentation, if patches (e.g. v1.0.1) released for v1.0, they'll be merged into release/v1.0 branch

      The latest release branch is referenced as the latest version in the documentation and is used by default when you open the documentation.

      "},{"location":"contribute/docs/#file-layout","title":"File layout","text":"

      All documentation files are located in docs directory. Each file is a Markdown file with .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.

      For example, top-level .pages in this repository looks like this:

      nav:\n  - index.md\n  - getting-started\n  - concepts\n  - Wiring Diagram: wiring\n  - Install & Upgrade: install-upgrade\n  - User Guide: user-guide\n  - Reference: reference\n  - Troubleshooting: troubleshooting\n  - ...\n  - release-notes\n  - contribute\n

      You can add pages by file name, like index.md, and the page title is taken from the file (the first line starting with #). Additionally, you can reference a whole directory to create a nested section in the navigation. You can also set custom titles using the : separator, as in Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.
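      The Title: path convention for nav entries can be sketched as a tiny helper (illustrative only; MkDocs itself parses the .pages file as YAML):

      ```python
      def parse_nav_entry(entry: str) -> tuple[str | None, str]:
          """Split a .pages nav entry into (custom_title, target).

          The title is None when it should be derived from the file itself.
          """
          if ": " in entry:
              title, target = entry.split(": ", 1)
              return title, target
          return None, entry

      print(parse_nav_entry("Wiring Diagram: wiring"))  # ('Wiring Diagram', 'wiring')
      print(parse_nav_entry("index.md"))               # (None, 'index.md')
      ```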

      More details are available in the MkDocs Pages plugin documentation.

      "},{"location":"contribute/docs/#abbreviations","title":"Abbreviations","text":"

      Abbreviations are defined in the includes/abbreviations.md file. You can add various abbreviations there, and all usages of the defined words in the documentation get a highlight.

      For example, we have the following in includes/abbreviations.md:

      *[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric\n

      It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.
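      The definition format above can be sketched with a small parser (illustrative only; the actual rendering is done by the Markdown abbreviations extension):

      ```python
      import re

      def parse_abbreviations(text: str) -> dict[str, str]:
          """Parse '*[TERM]: definition' lines into a term -> definition mapping."""
          pattern = re.compile(r"^\*\[(?P<term>[^\]]+)\]:\s*(?P<definition>.+)$", re.MULTILINE)
          return {m["term"]: m["definition"] for m in pattern.finditer(text)}

      abbrs = parse_abbreviations(
          "*[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric"
      )
      print(abbrs["HHFab"])  # Hedgehog Fabricator - a tool for building Hedgehog Fabric
      ```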

      "},{"location":"contribute/docs/#markdown-extensions","title":"Markdown extensions","text":"

      We're using the MkDocs Material theme with multiple extensions enabled. You can find the detailed reference here; below are some of the most useful ones.

      To view the code for these examples, please check the source of this page.

      "},{"location":"contribute/docs/#text-formatting","title":"Text formatting","text":"

      Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible, and comments can be added inline.

      Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.

      Keyboard keys can be written like so:

      Ctrl+Alt+Del

      And inline icons/emojis can be added like this:

      :fontawesome-regular-face-laugh-wink:\n:fontawesome-brands-twitter:{ .twitter }\n

      "},{"location":"contribute/docs/#admonitions","title":"Admonitions","text":"

      Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.

      Lorem ipsum

      Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.

      "},{"location":"contribute/docs/#code-blocks","title":"Code blocks","text":"

      Details can be found here.

      Simple code block with line nums and highlighted lines:

      bubble_sort.py
      def bubble_sort(items):\n    for i in range(len(items)):\n        for j in range(len(items) - 1 - i):\n            if items[j] > items[j + 1]:\n                items[j], items[j + 1] = items[j + 1], items[j]\n

      Code annotations:

      theme:\n  features:\n    - content.code.annotate # (1)\n
      1. I'm a code annotation! I can contain code, formatted text, images, ... basically anything that can be written in Markdown.
      "},{"location":"contribute/docs/#tabs","title":"Tabs","text":"

      You can use Tabs to better organize content.

      C / C++
      #include <stdio.h>\n\nint main(void) {\n  printf(\"Hello world!\\n\");\n  return 0;\n}\n
      #include <iostream>\n\nint main(void) {\n  std::cout << \"Hello world!\" << std::endl;\n  return 0;\n}\n
      "},{"location":"contribute/docs/#tables","title":"Tables","text":"

      | Method | Description     |
      |--------|-----------------|
      | GET    | Fetch resource  |
      | PUT    | Update resource |
      | DELETE | Delete resource |

      "},{"location":"contribute/docs/#diagrams","title":"Diagrams","text":"

      You can directly include Mermaid diagrams in your Markdown files. Details can be found here.

      graph LR\n  A[Start] --> B{Error?};\n  B -->|Yes| C[Hmm...];\n  C --> D[Debug];\n  D --> B;\n  B ---->|No| E[Yay!];
      sequenceDiagram\n  autonumber\n  Alice->>John: Hello John, how are you?\n  loop Healthcheck\n      John->>John: Fight against hypochondria\n  end\n  Note right of John: Rational thoughts!\n  John-->>Alice: Great!\n  John->>Bob: How about you?\n  Bob-->>John: Jolly good!
      "},{"location":"contribute/overview/","title":"Overview","text":"

      Under construction.

      "},{"location":"getting-started/download/","title":"Download","text":"

      The main entry point for the software is the Hedgehog Fabricator CLI named hhfab. It is a command-line tool that lets you build an installer for the Hedgehog Fabric, upgrade an existing installation, or run the Virtual LAB.

      "},{"location":"getting-started/download/#getting-access","title":"Getting access","text":"

      Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.

      After that, you will be provided with credentials to access the software on GitHub Packages. To use the software, log in to the registry using the following command:

      docker login ghcr.io\n
      "},{"location":"getting-started/download/#downloading-hhfab","title":"Downloading hhfab","text":"

      Currently, hhfab is supported on Linux x86/arm64 (tested on Ubuntu 22.04) and macOS x86/arm64 for building installers/upgraders. It may work on Windows WSL2 (with Ubuntu), but this is untested. For running VLAB, only Linux x86 is currently supported.

      All software, including binaries, container images, and Helm charts, is published to the GitHub Packages OCI registry. Download the latest stable hhfab binary from GitHub Packages using the following command; it requires ORAS to be installed (see below):

      curl -fsSL https://i.hhdev.io/hhfab | bash\n

      Or download a specific version (e.g. beta-1) using the following command:

      curl -fsSL https://i.hhdev.io/hhfab | VERSION=beta-1 bash\n

      Use the VERSION environment variable to specify the version of the software to download. By default, the latest stable release is downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

      "},{"location":"getting-started/download/#installing-oras","title":"Installing ORAS","text":"

      The download script requires ORAS to be installed. ORAS is used to download the binary from the OCI registry and can be installed using the following command:

      curl -fsSL https://i.hhdev.io/oras | bash\n
      "},{"location":"getting-started/download/#next-steps","title":"Next steps","text":"
      • Concepts
      • Virtual LAB
      • Installation
      • User guide
      "},{"location":"install-upgrade/build-wiring/","title":"Build Wiring Diagram","text":"

      Under construction.

      "},{"location":"install-upgrade/build-wiring/#overview","title":"Overview","text":"

      A wiring diagram is a YAML file that is a digital representation of your network. You can find more YAML-level details in the User Guide sections on switch features and port naming and in the API reference. All switches must reference a SwitchProfile in the spec.profile field of the Switch object. Only the port naming defined by switch profiles can be used in the wiring diagram; NOS (or any other) port names aren't supported.

      In the meantime, to look at a working wiring diagram for the Hedgehog Fabric, run the sample generator:

      ubuntu@sl-dev:~$ hhfab sample -h\n\nNAME:\n   hhfab sample - generate sample wiring diagram\n\nUSAGE:\n   hhfab sample command [command options]\n\nCOMMANDS:\n   spine-leaf, sl      generate sample spine-leaf wiring diagram\n   collapsed-core, cc  generate sample collapsed-core wiring diagram\n   help, h             Shows a list of commands or help for one command\n\nOPTIONS:\n   --help, -h  show help\n

      Or you can generate a wiring diagram for a VLAB environment with flags to customize the number of switches, links, servers, etc.:

      ubuntu@sl-dev:~$ hhfab vlab gen --help\nNAME:\n   hhfab vlab generate - generate VLAB wiring diagram\n\nUSAGE:\n   hhfab vlab generate [command options]\n\nOPTIONS:\n   --bundled-servers value      number of bundled servers to generate for switches (only for one of the second switch in the redundancy group or orphan switch) (default: 1)\n   --eslag-leaf-groups value    eslag leaf groups (comma separated list of number of ESLAG switches in each group, should be 2-4 per group, e.g. 2,4,2 for 3 groups with 2, 4 and 2 switches)\n   --eslag-servers value        number of ESLAG servers to generate for ESLAG switches (default: 2)\n   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)\n   --help, -h                   show help\n   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)\n   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)\n   --mclag-servers value        number of MCLAG servers to generate for MCLAG switches (default: 2)\n   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)\n   --no-switches                do not generate any switches (default: false)\n   --orphan-leafs-count value   number of orphan leafs (default: 0)\n   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)\n   --unbundled-servers value    number of unbundled servers to generate for switches (only for one of the first switch in the redundancy group or orphan switch) (default: 1)\n   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)\n
      "},{"location":"install-upgrade/build-wiring/#sample-switch-configuration","title":"Sample Switch Configuration","text":"
      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: ds3000-02\nspec:\n  boot:\n    serial: ABC123XYZ\n  role: server-leaf\n  description: leaf-2\n  profile: celestica-ds3000\n  portBreakouts:\n    E1/1: 4x10G\n    E1/2: 4x10G\n    E1/17: 4x25G\n    E1/18: 4x25G\n    E1/32: 4x25G\n  redundancy:\n    group: mclag-1\n    type: mclag\n
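      To make the portBreakouts values above concrete, here is a small, hypothetical Python sketch (not part of hhfab) that parses a breakout string such as 4x10G into a port count and per-port speed:

      ```python
      import re

      def parse_breakout(value: str) -> tuple[int, int]:
          """Parse a breakout string like '4x25G' into (port_count, speed_gbps)."""
          m = re.fullmatch(r"(\d+)x(\d+)G", value)
          if m is None:
              raise ValueError(f"unrecognized breakout: {value!r}")
          return int(m.group(1)), int(m.group(2))

      # Values taken from the sample Switch object above
      port_breakouts = {"E1/1": "4x10G", "E1/17": "4x25G", "E1/32": "4x25G"}
      for port, breakout in port_breakouts.items():
          count, speed = parse_breakout(breakout)
          print(f"{port}: {count} x {speed}G")  # e.g. "E1/1: 4 x 10G"
      ```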
      "},{"location":"install-upgrade/build-wiring/#design-discussion","title":"Design Discussion","text":"

      This section is meant to help the reader understand how to assemble the primitives presented by the Fabric API into a functional fabric.

      "},{"location":"install-upgrade/build-wiring/#vpc","title":"VPC","text":"

      A VPC allows for isolation at layer 3. This is the main building block for users when creating their architecture. Hosts inside of a VPC belong to the same broadcast domain and can communicate with each other; if desired, a single VPC can be configured with multiple broadcast domains. The hosts inside of a VPC will likely need to connect to other VPCs or the outside world. For two VPCs to communicate, a peering needs to be created. A VPC can be a logical separation of workloads; by separating these workloads, additional controls become available. The logical separation doesn't have to be the traditional database, web, and compute layers: it could be development teams who need isolation, tenants inside of an office building, or any separation that allows for better control of the network. Once your VPCs are decided, the rest of the fabric comes together. With the VPCs decided, traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows a VPC to span more than one switch, which provides great flexibility, for instance for workload mobility.

      "},{"location":"install-upgrade/build-wiring/#connection","title":"Connection","text":"

      A connection represents the physical wires in your data center. They connect switches to other switches or switches to servers.

      "},{"location":"install-upgrade/build-wiring/#server-connections","title":"Server Connections","text":"

      A server connection is a connection used to connect servers to the fabric. The fabric will configure the server-facing port according to the type of the connection (MCLAG, Bundled, etc). The configuration of the actual server needs to be done by the server administrator. The server port names are not validated by the fabric; they are used as metadata to identify the connection. A server connection can be one of:

      • Unbundled - A single cable connecting a switch to a server.
      • Bundled - Two or more cables going to a single switch, a LAG or similar.
      • MCLAG - Two cables going to two different switches, also called dual homing. The switches will need a fabric link between them.
      • ESLAG - Two to four cables going to different switches, also called multi-homing. If four links are used there will need to be four switches connected to a single server with four NIC ports.
      "},{"location":"install-upgrade/build-wiring/#fabric-connections","title":"Fabric Connections","text":"

      Fabric connections connect switches to each other; they form the fabric of the network.

      "},{"location":"install-upgrade/build-wiring/#vpc-peering","title":"VPC Peering","text":"

      VPCs need VPC Peerings to talk to each other. VPC Peerings come in two varieties: local and remote.

      "},{"location":"install-upgrade/build-wiring/#local-vpc-peering","title":"Local VPC Peering","text":"

      When there is no dedicated border/peering switch available in the fabric, we can use local VPC peering. This kind of peering sends traffic between the two VPCs on the switch where either of the VPCs has workloads attached. Due to a limitation in the SONiC network operating system, the bandwidth of this kind of peering is limited by the number of VPC loopbacks you selected while initializing the fabric. Traffic between the VPCs uses the loopback interface, and the bandwidth of this connection equals the bandwidth of the port used in the loopback.
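      As a back-of-the-envelope sketch of this constraint (the numbers below are illustrative assumptions, not defaults), total local-peering bandwidth scales with the number of VPC loopbacks and the speed of the ports used for them:

      ```python
      def local_peering_bandwidth_gbps(loopbacks: int, port_speed_gbps: int) -> int:
          """Each VPC loopback contributes one port's worth of bandwidth."""
          return loopbacks * port_speed_gbps

      # e.g. two loopbacks built from 100G ports -> 200 Gbps of local peering capacity
      print(local_peering_bandwidth_gbps(2, 100))  # 200
      ```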

      "},{"location":"install-upgrade/build-wiring/#remote-vpc-peering","title":"Remote VPC Peering","text":"

      Remote peering is used when you need a high-bandwidth connection between the VPCs and will dedicate a switch to the peering traffic. This is done either on a border leaf or on a switch where neither of the VPCs is present. This kind of peering allows traffic between different VPCs at line rate and is limited only by fabric bandwidth. Remote peering introduces a few additional hops and may cause a small increase in latency.

      "},{"location":"install-upgrade/build-wiring/#vpc-loopback","title":"VPC Loopback","text":"

      A VPC loopback is a physical cable with both ends plugged into the same switch, suggested (but not required) to be on adjacent ports. This loopback allows two different VPCs to communicate with each other; it is required due to a Broadcom limitation.

      "},{"location":"install-upgrade/config/","title":"Fabric Configuration","text":""},{"location":"install-upgrade/config/#overview","title":"Overview","text":"

      The fab.yaml file is the configuration file for the fabric. It supplies the configuration of the users, their credentials, logging, telemetry, and other non-wiring-related settings. The fab.yaml file is composed of multiple YAML documents inside of a single file. Per the YAML spec, three hyphens (---) on a single line separate the end of one document from the beginning of the next. There are two YAML documents in the fab.yaml file. For more information about how to use hhfab init, run hhfab init --help.
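      To illustrate the multi-document layout described above, here is a minimal sketch that splits a YAML stream on --- separator lines using only the Python standard library (a real tool would use a YAML parser such as PyYAML's safe_load_all; the two-document content below is illustrative, not a complete fab.yaml):

      ```python
      def split_yaml_documents(text: str) -> list[str]:
          """Split a YAML stream on '---' separator lines into individual documents."""
          docs, current = [], []
          for line in text.splitlines():
              if line.strip() == "---":
                  docs.append("\n".join(current))
                  current = []
              else:
                  current.append(line)
          docs.append("\n".join(current))
          return [d for d in docs if d.strip()]

      # Illustrative two-document stream, mirroring fab.yaml's layout
      # (a Fabricator document followed by a ControlNode document):
      stream = """\
      apiVersion: fabricator.githedgehog.com/v1beta1
      kind: Fabricator
      ---
      apiVersion: fabricator.githedgehog.com/v1beta1
      kind: ControlNode
      """

      docs = split_yaml_documents(stream)
      print(len(docs))  # 2
      ```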

      "},{"location":"install-upgrade/config/#typical-hhfab-workflows","title":"Typical HHFAB workflows","text":""},{"location":"install-upgrade/config/#hhfab-for-vlab","title":"HHFAB for VLAB","text":"

      For a VLAB user, the typical workflow with hhfab is:

      1. hhfab init --dev
      2. hhfab vlab gen
      3. hhfab vlab up

      The above workflow will get a user up and running with a spine-leaf VLAB.

      "},{"location":"install-upgrade/config/#hhfab-for-physical-machines","title":"HHFAB for Physical Machines","text":"

      It's possible to start from scratch:

      1. hhfab init (see the available flags to customize the initial configuration)
      2. Adjust the fab.yaml file to your needs
      3. hhfab validate
      4. hhfab build

      Or import existing config and wiring files:

      1. hhfab init -c fab.yaml -w wiring-file.yaml -w extra-wiring-file.yaml
      2. hhfab validate
      3. hhfab build

      After the above workflow, a user will have a .img file suitable for installing the control node and then bringing up the switches that comprise the fabric.

      "},{"location":"install-upgrade/config/#fabyaml","title":"Fab.yaml","text":""},{"location":"install-upgrade/config/#configure-control-node-and-switch-users","title":"Configure control node and switch users","text":"

      Configuring control node and switch users is done either by passing --default-password-hash to hhfab init or by editing the resulting fab.yaml file emitted by hhfab init. You can specify users to be configured on the control node(s) and switches in the following format:

      spec:\n    config:\n      control:\n        defaultUser: # user 'core' on all control nodes\n          password: \"hashhashhashhashhash\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 SecREKeyJumblE\"\n\n        fabric:\n          mode: spine-leaf # \"spine-leaf\" or \"collapsed-core\"\n\n          defaultSwitchUsers:\n            admin: # at least one user with name 'admin' and role 'admin'\n              role: admin\n              #password: \"$5$8nAYPGcl4...\" # password hash\n              #authorizedKeys: # optional SSH authorized keys\n              #  - \"ssh-ed25519 AAAAC3Nza...\"\n            op: # optional read-only user\n              role: operator\n              #password: \"$5$8nAYPGcl4...\" # password hash\n              #authorizedKeys: # optional SSH authorized keys\n              #  - \"ssh-ed25519 AAAAC3Nza...\"\n

      The control node user is always named core.

      The operator role grants read-only access to the sonic-cli command on the switches. To avoid conflicts, do not use the following usernames: operator, hhagent, netops.
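      The reserved-username rule above can be captured in a small validation sketch (a hypothetical helper, not part of hhfab):

      ```python
      RESERVED_SWITCH_USERNAMES = {"operator", "hhagent", "netops"}

      def conflicting_usernames(requested: list[str]) -> list[str]:
          """Return requested usernames that clash with names reserved by the fabric."""
          return [name for name in requested if name in RESERVED_SWITCH_USERNAMES]

      print(conflicting_usernames(["admin", "op", "netops"]))  # ['netops']
      ```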

      "},{"location":"install-upgrade/config/#ntp-and-dhcp","title":"NTP and DHCP","text":"

      The control node uses public NTP servers from Cloudflare and Google by default. The control node runs a DHCP server on the management network. See the example file.

      "},{"location":"install-upgrade/config/#control-node","title":"Control Node","text":"

      The control node is the host that manages all the switches, runs k3s, and serves images. This is the YAML document that configures the control node:

      apiVersion: fabricator.githedgehog.com/v1beta1\nkind: ControlNode\nmetadata:\n  name: control-1\n  namespace: fab\nspec:\n  bootstrap:\n   disk: \"/dev/sda\" # disk to install OS on, e.g. \"sda\" or \"nvme0n1\"\n  external:\n    interface: enp2s0 # interface for external\n    ip: dhcp # IP address for external interface\n  management:\n    interface: enp2s1 # interface for management\n\n# Currently only one ControlNode is supported\n
      The management interface is for the control node to manage the fabric switches, not for end-user management of the control node. For end-user management of the control node, specify the external interface name.

      "},{"location":"install-upgrade/config/#forward-switch-metrics-and-logs","title":"Forward switch metrics and logs","text":"

      There is an option to enable Grafana Alloy on all switches to forward metrics and logs to the configured targets using the Prometheus Remote-Write API and the Loki API. If those APIs are available from the Control Node(s), but not from the switches, it's possible to enable an HTTP proxy on the Control Node(s) that will be used by Grafana Alloy running on the switches to access the configured targets. This can be done by passing --control-proxy=true to hhfab init.

      Metrics include port speeds, counters, errors, operational status, transceivers, fans, power supplies, temperature sensors, BGP neighbors, LLDP neighbors, and more. Logs include agent logs.

      Configuring the exporters and targets is currently only possible by editing the fab.yaml configuration file. An example configuration is provided below:

      spec:\n  config:\n      ...\n      defaultAlloyConfig:\n        agentScrapeIntervalSeconds: 120\n        unixScrapeIntervalSeconds: 120\n        unixExporterEnabled: true\n        lokiTargets:\n          grafana_cloud: # target name, multiple targets can be configured\n              basicAuth: # optional\n                  password: \"<password>\"\n                  username: \"<username>\"\n              labels: # labels to be added to all logs\n                  env: env-1\n              url: https://logs-prod-021.grafana.net/loki/api/v1/push\n              useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy\n        prometheusTargets:\n          grafana_cloud: # target name, multiple targets can be configured\n              basicAuth: # optional\n                  password: \"<password>\"\n                  username: \"<username>\"\n              labels: # labels to be added to all metrics\n                  env: env-1\n              sendIntervalSeconds: 120\n              url: https://prometheus-prod-36-prod-us-west-0.grafana.net/api/prom/push\n              useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy\n              unixExporterCollectors: # list of node-exporter collectors to enable, https://grafana.com/docs/alloy/latest/reference/components/prometheus.exporter.unix/#collectors-list\n                  - cpu\n                  - filesystem\n                  - loadavg\n                  - meminfo\n              collectSyslogEnabled: true # collect /var/log/syslog on switches and forward to the lokiTargets\n

      For additional options, see the AlloyConfig struct in the Fabric repo.

      "},{"location":"install-upgrade/config/#complete-example-file","title":"Complete Example File","text":"
      apiVersion: fabricator.githedgehog.com/v1beta1\nkind: Fabricator\nmetadata:\n  name: default\n  namespace: fab\nspec:\n  config:\n    control:\n      tlsSAN: # IPs and DNS names to access API\n        - \"customer.site.io\"\n\n      ntpServers:\n      - time.cloudflare.com\n      - time1.google.com\n\n      defaultUser: # user 'core' on all control nodes\n        password: \"hash...\" # password hash\n        authorizedKeys:\n          - \"ssh-ed25519 hash...\"\n\n    fabric:\n      mode: spine-leaf # \"spine-leaf\" or \"collapsed-core\"\n      includeONIE: true\n      defaultSwitchUsers:\n        admin: # at least one user with name 'admin' and role 'admin'\n          role: admin\n          password: \"hash...\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 hash...\"\n        op: # optional read-only user\n          role: operator\n          password: \"hash...\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 hash...\"\n\n      defaultAlloyConfig:\n        agentScrapeIntervalSeconds: 120\n        unixScrapeIntervalSeconds: 120\n        unixExporterEnabled: true\n        collectSyslogEnabled: true\n        lokiTargets:\n          lab:\n            url: http://url.io:3100/loki/api/v1/push\n            useControlProxy: true\n            labels:\n              descriptive: name\n        prometheusTargets:\n          lab:\n            url: http://url.io:9100/api/v1/push\n            useControlProxy: true\n            labels:\n              descriptive: name\n            sendIntervalSeconds: 120\n\n---\napiVersion: fabricator.githedgehog.com/v1beta1\nkind: ControlNode\nmetadata:\n  name: control-1\n  namespace: fab\nspec:\n  bootstrap:\n    disk: \"/dev/sda\" # disk to install OS on, e.g. \"sda\" or \"nvme0n1\"\n  external:\n    interface: eno2 # interface for external\n    ip: dhcp # IP address for external interface\n  management:\n    interface: eno1\n\n# Currently only one ControlNode is supported\n
      "},{"location":"install-upgrade/overview/","title":"Install Fabric","text":"

      Under construction.

      "},{"location":"install-upgrade/overview/#prerequisites","title":"Prerequisites","text":"
      • A machine with access to the Internet to run Fabricator and build the installer, with at least 8 GB RAM and 25 GB of disk space
      • A 16 GB USB flash drive, if you are not using virtual media
      • A machine to function as the Fabric Control Node that meets the System Requirements, as well as IPMI access to it to install the OS
      • A management switch with at least one 10 GbE port is recommended
      • Enough Supported Switches for your Fabric
      "},{"location":"install-upgrade/overview/#overview-of-install-process","title":"Overview of Install Process","text":"

      This section covers installing the Hedgehog Fabric on bare-metal control node(s) and switches, including their preparation and configuration. To install the VLAB, see the VLAB Overview.

      Download and install hhfab following the instructions in the Download section.

      The main steps to install Fabric are:

      1. Install hhfab on the machines with access to the Internet
        1. Prepare Wiring Diagram
        2. Select Fabric Configuration
        3. Build Control Node configuration and installer
      2. Install Control Node
        1. Insert USB with control-os image into Fabric Control Node
        2. Boot the node off the USB to initiate the installation
      3. Prepare Management Network
        1. Connect management switch to Fabric control node
        2. Connect 1GbE Management port of switches to management switch
      4. Prepare supported switches
        1. Ensure switch serial numbers and/or first management interface MAC addresses are recorded in the wiring diagram
        2. Boot them into ONIE Install Mode to have them automatically provisioned
      "},{"location":"install-upgrade/overview/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and Installer","text":"

      Hedgehog has created a command line utility, called hhfab, that helps generate the wiring diagram and fabric configuration, validate the supplied configurations, and generate an installation image (.img) suitable for writing to a USB flash drive or mounting via IPMI virtual media. The first hhfab command to run is hhfab init. It generates the main configuration file, fab.yaml, which is responsible for almost every configuration of the fabric with the exception of the wiring. Each command and subcommand has a usage message; simply supply the -h flag to see the available options, for example hhfab vlab -h and hhfab vlab gen -h.

      "},{"location":"install-upgrade/overview/#hhfab-commands-to-make-a-bootable-image","title":"HHFAB commands to make a bootable image","text":"
      1. hhfab init --wiring wiring-lab.yaml
      2. The init command generates a fab.yaml file; edit fab.yaml for your needs
        1. ensure the correct boot disk (e.g. /dev/sda) and control node NIC names are supplied
      3. hhfab validate
      4. hhfab build

      The installer for the fabric is generated in $CWD/result/. This installation image is named control-1-install-usb.img and is 7.5 GB in size. Once the image is created, you can write it to a USB drive, or mount it via virtual media.

      "},{"location":"install-upgrade/overview/#write-usb-image-to-disk","title":"Write USB Image to Disk","text":"

      This will erase data on the USB disk.

      1. Insert the USB drive into your machine
      2. Identify the path to your USB stick, for example: /dev/sdc
      3. Issue the command to write the image to the USB drive
        • sudo dd if=control-1-install-usb.img of=/dev/sdc bs=4k status=progress

      There are utilities that assist with this process, such as Etcher.

      "},{"location":"install-upgrade/overview/#install-control-node","title":"Install Control Node","text":"

      The control node should be given a static IP address, either via a static DHCP lease or assigned statically.

      1. Configure the server to use UEFI boot without secure boot

      2. Attach the image to the server either by inserting via USB, or attaching via virtual media

      3. Select boot off of the attached media, the installation process is automated

      4. Once the control node has booted, it logs in automatically and begins the installation process

        1. Optionally use journalctl -f -u flatcar-install.service to monitor progress
      5. Once the installation is complete, the system automatically reboots.

      7. After the system has shut down, but before the boot process reaches the operating system, remove the USB image from the system. Removal during the UEFI boot screen is acceptable.

      7. Upon booting into the freshly installed system, the fabric installation will automatically begin

        1. If the insecure --dev flag was passed to hhfab init, the password for the core user is HHFab.Admin!, and the switches have two users created: admin and op. admin has administrator privileges and the password HHFab.Admin!, whereas op is a read-only, non-sudo user with the password HHFab.Op!.
        2. Optionally this can be monitored with journalctl -f -u fabric-install.service
      8. The install is complete when the log emits \"Control Node installation complete\". Additionally, systemctl status will show inactive (dead), indicating that the executable has finished.

      "},{"location":"install-upgrade/overview/#configure-management-network","title":"Configure Management Network","text":"

      The control node is dual-homed. It has a 10 GbE interface that connects to the management network. The other link, called external in the fab.yaml file, is for the customer to access the control node. The management network is for command and control of the switches that comprise the fabric. This management network can be a simple broadcast domain with layer 2 connectivity. The control node runs a DHCP server and a small HTTP server on it. The management network is not accessible to machines or devices not associated with the fabric.

      "},{"location":"install-upgrade/overview/#fabric-manages-switches","title":"Fabric Manages Switches","text":"

      Now that the install has finished, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, all pre-installed as part of the Control Node installer.

      At this stage, the fabric hands out DHCP addresses to the switches via the management network. Optionally, you can monitor this process by going through the following steps:

      • enter k9s at the command prompt
      • use the arrow keys to select the pod named fabric-boot
      • the logs of the pod will be displayed, showing the DHCP lease process
      • to see the switches, type :switches (like a vim command) into k9s
      • use the switches screen of k9s to check the heartbeat column and verify the connection between switch and controller

      "},{"location":"install-upgrade/requirements/","title":"System Requirements","text":""},{"location":"install-upgrade/requirements/#out-of-band-management-network","title":"Out of Band Management Network","text":"

      In order to provision and manage the switches that comprise the fabric, an out-of-band management switch must also be installed. This network is to be used exclusively by the control node and the fabric switches; no other access is permitted. This switch (or switches) is not managed by the fabric. It is recommended that this switch have at least one 10 GbE port and that this port connect to the control node.

      "},{"location":"install-upgrade/requirements/#control-node","title":"Control Node","text":"
      • Fast SSDs for the system/root partition are mandatory for Control Nodes
      • NVMe SSDs are recommended
      • DRAM-less NAND SSDs are not supported (e.g. Crucial BX500)
      • 10 GbE port for connection to management network is recommended
      • Minimal (non-HA) setup is a single Control Node
      • (Future) Full (HA) setup is at least 3 Control Nodes
      • (Future) Extra nodes could be used for things like Logging, Monitoring, Alerting stack, and more
      "},{"location":"install-upgrade/requirements/#non-ha-minimal-setup-1-control-node","title":"Non-HA (minimal) setup - 1 Control Node","text":"
      • Control Node runs non-HA Kubernetes Control Plane installation with non-HA Hedgehog Fabric Control Plane on top of it
      • Not recommended for production deployments or for fabrics with more than 10 devices participating in the Hedgehog Fabric
      Minimal Recommended CPU 6 8 RAM 16 GB 32 GB Disk 150 GB 250 GB"},{"location":"install-upgrade/requirements/#future-ha-setup-3-control-nodes-per-node","title":"(Future) HA setup - 3+ Control Nodes (per node)","text":"
      • Each Control Node runs part of the HA Kubernetes Control Plane installation with Hedgehog Fabric Control Plane on top of it in HA mode as well
      • Recommended for all cases where more than 10 devices participate in the Hedgehog Fabric
      Minimal Recommended CPU 6 8 RAM 16 GB 32 GB Disk 150 GB 250 GB"},{"location":"install-upgrade/requirements/#reference-control-node-configuration","title":"Reference Control Node Configuration","text":"
      • AMD EPYC 4344P (8C/16T, 3.8 GHz, 32 MB L3, 65W, single socket)
      • 32 GB DDR5-4800 ECC UDIMM (2 x 16 GB)
      • Micron 7450 MAX 400GB NVMe
      "},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":"
      • (Future) Each participating device is part of the Kubernetes cluster, so it runs Kubernetes kubelet
      • Additionally, it runs the Hedgehog Fabric Agent that controls devices configuration

      The following resources should be available on a device for it to run in the Hedgehog Fabric (in addition to the resources used by other software such as SONiC):

      Minimal Recommended CPU 1 2 RAM 1 GB 1.5 GB Disk 5 GB 10 GB"},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":"

      You can find detailed information about devices in the Switch Profiles Catalog and in the User Guide sections on switch features and port naming.

      "},{"location":"install-upgrade/supported-devices/#spine","title":"Spine","text":"
      • Celestica DS3000
      • Celestica DS4000
      • Dell S5232F-ON
      • Edgecore DCS204 (AS7726-32X)
      • Edgecore DCS501 (AS7712-32X-EC)
      • Supermicro SSE-C4632SB
      "},{"location":"install-upgrade/supported-devices/#leaf","title":"Leaf","text":"

      (can also be used for a collapsed-core topology)

      • Celestica DS3000
      • Dell S5232F-ON
      • Dell S5248F-ON
      • Edgecore DCS203 (AS7326-56X)
      • Edgecore DCS204 (AS7726-32X)
      • Edgecore EPS203 (AS4630-54NPE)
      • Supermicro SSE-C4632SB
      "},{"location":"reference/api/","title":"API Reference","text":""},{"location":"reference/api/#packages","title":"Packages","text":"
      • agent.githedgehog.com/v1beta1
      • dhcp.githedgehog.com/v1beta1
      • vpc.githedgehog.com/v1beta1
      • wiring.githedgehog.com/v1beta1
      "},{"location":"reference/api/#agentgithedgehogcomv1beta1","title":"agent.githedgehog.com/v1beta1","text":"

      Package v1beta1 contains API Schema definitions for the agent v1beta1 API group. This is the internal API group for the switch and control node agents. Not intended to be modified by the user.

      "},{"location":"reference/api/#resource-types","title":"Resource Types","text":"
      • Agent
      "},{"location":"reference/api/#adminstatus","title":"AdminStatus","text":"

      Underlying type: string

      Appears in: - SwitchStateInterface

      Field Description `` up down testing"},{"location":"reference/api/#agent","title":"Agent","text":"

      Agent is an internal API object used by the controller to pass all relevant information to the agent running on a specific switch in order to fully configure it and manage its lifecycle. It is not intended to be used directly by users. The Spec of the object isn't user-editable; it is managed by the controller. The Status of the object is updated by the agent and is used by the controller to track the state of the agent and the switch it is running on. The name of the Agent object is the same as the name of the switch it runs on, and it's created in the same namespace as the Switch object.

      Field Description Default Validation apiVersion string agent.githedgehog.com/v1beta1 kind string Agent metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. status AgentStatus Status is the observed state of the Agent"},{"location":"reference/api/#agentstatus","title":"AgentStatus","text":"

      AgentStatus defines the observed state of the agent running on a specific switch and includes information about the switch itself as well as the state of the agent and applied configuration.

      Appears in: - Agent

      Field Description Default Validation version string Current running agent version installID string ID of the agent installation, used to track NOS re-installs runID string ID of the agent run, used to track NOS reboots lastHeartbeat Time Time of the last heartbeat from the agent lastAttemptTime Time Time of the last attempt to apply configuration lastAttemptGen integer Generation of the last attempt to apply configuration lastAppliedTime Time Time of the last successful configuration application lastAppliedGen integer Generation of the last successful configuration application state SwitchState Detailed switch state updated with each heartbeat conditions Condition array Conditions of the agent, includes readiness marker for use with kubectl wait"},{"location":"reference/api/#bgpmessages","title":"BGPMessages","text":"

      Appears in: - SwitchStateBGPNeighbor

      Field Description Default Validation received BGPMessagesCounters sent BGPMessagesCounters"},{"location":"reference/api/#bgpmessagescounters","title":"BGPMessagesCounters","text":"

      Appears in: - BGPMessages

      Field Description Default Validation capability integer keepalive integer notification integer open integer routeRefresh integer update integer"},{"location":"reference/api/#bgpneighborsessionstate","title":"BGPNeighborSessionState","text":"

      Underlying type: string

      Appears in: - SwitchStateBGPNeighbor

      Field Description `` idle connect active openSent openConfirm established"},{"location":"reference/api/#bgppeertype","title":"BGPPeerType","text":"

      Underlying type: string

      Appears in: - SwitchStateBGPNeighbor

      Field Description `` internal external"},{"location":"reference/api/#operstatus","title":"OperStatus","text":"

      Underlying type: string

      Appears in: - SwitchStateInterface

      Field Description `` up down testing unknown dormant notPresent lowerLayerDown"},{"location":"reference/api/#switchstate","title":"SwitchState","text":"

      Appears in: - AgentStatus

      Field Description Default Validation nos SwitchStateNOS Information about the switch and NOS interfaces object (keys:string, values:SwitchStateInterface) Switch interfaces state (incl. physical, management and port channels) breakouts object (keys:string, values:SwitchStateBreakout) Breakout ports state (port -> breakout state) bgpNeighbors object (keys:string, values:map[string]SwitchStateBGPNeighbor) State of all BGP neighbors (VRF -> neighbor address -> state) platform SwitchStatePlatform State of the switch platform (fans, PSUs, sensors) criticalResources SwitchStateCRM State of the critical resources (ACLs, routes, etc.)"},{"location":"reference/api/#switchstatebgpneighbor","title":"SwitchStateBGPNeighbor","text":"

      Appears in: - SwitchState

      Field Description Default Validation connectionsDropped integer enabled boolean establishedTransitions integer lastEstablished Time lastRead Time lastResetReason string lastResetTime Time lastWrite Time localAS integer messages BGPMessages peerAS integer peerGroup string peerPort integer peerType BGPPeerType remoteRouterID string sessionState BGPNeighborSessionState shutdownMessage string prefixes object (keys:string, values:SwitchStateBGPNeighborPrefixes)"},{"location":"reference/api/#switchstatebgpneighborprefixes","title":"SwitchStateBGPNeighborPrefixes","text":"

      Appears in: - SwitchStateBGPNeighbor

      Field Description Default Validation received integer receivedPrePolicy integer sent integer"},{"location":"reference/api/#switchstatebreakout","title":"SwitchStateBreakout","text":"

      Appears in: - SwitchState

      Field Description Default Validation mode string nosMembers string array status string"},{"location":"reference/api/#switchstatecrm","title":"SwitchStateCRM","text":"

      Appears in: - SwitchState

      Field Description Default Validation aclStats SwitchStateCRMACLStats stats SwitchStateCRMStats"},{"location":"reference/api/#switchstatecrmacldetails","title":"SwitchStateCRMACLDetails","text":"

      Appears in: - SwitchStateCRMACLInfo

      Field Description Default Validation groupsAvailable integer groupsUsed integer tablesAvailable integer tablesUsed integer"},{"location":"reference/api/#switchstatecrmaclinfo","title":"SwitchStateCRMACLInfo","text":"

      Appears in: - SwitchStateCRMACLStats

      Field Description Default Validation lag SwitchStateCRMACLDetails port SwitchStateCRMACLDetails rif SwitchStateCRMACLDetails switch SwitchStateCRMACLDetails vlan SwitchStateCRMACLDetails"},{"location":"reference/api/#switchstatecrmaclstats","title":"SwitchStateCRMACLStats","text":"

      Appears in: - SwitchStateCRM

      Field Description Default Validation egress SwitchStateCRMACLInfo ingress SwitchStateCRMACLInfo"},{"location":"reference/api/#switchstatecrmstats","title":"SwitchStateCRMStats","text":"

      Appears in: - SwitchStateCRM

      Field Description Default Validation dnatEntriesAvailable integer dnatEntriesUsed integer fdbEntriesAvailable integer fdbEntriesUsed integer ipmcEntriesAvailable integer ipmcEntriesUsed integer ipv4NeighborsAvailable integer ipv4NeighborsUsed integer ipv4NexthopsAvailable integer ipv4NexthopsUsed integer ipv4RoutesAvailable integer ipv4RoutesUsed integer ipv6NeighborsAvailable integer ipv6NeighborsUsed integer ipv6NexthopsAvailable integer ipv6NexthopsUsed integer ipv6RoutesAvailable integer ipv6RoutesUsed integer nexthopGroupMembersAvailable integer nexthopGroupMembersUsed integer nexthopGroupsAvailable integer nexthopGroupsUsed integer snatEntriesAvailable integer snatEntriesUsed integer"},{"location":"reference/api/#switchstateinterface","title":"SwitchStateInterface","text":"

      Appears in: - SwitchState

      Field Description Default Validation enabled boolean adminStatus AdminStatus operStatus OperStatus mac string lastChanged Time speed string counters SwitchStateInterfaceCounters transceiver SwitchStateTransceiver lldpNeighbors SwitchStateLLDPNeighbor array"},{"location":"reference/api/#switchstateinterfacecounters","title":"SwitchStateInterfaceCounters","text":"

      Appears in: - SwitchStateInterface

      Field Description Default Validation inBitsPerSecond float inDiscards integer inErrors integer inPktsPerSecond float inUtilization integer lastClear Time outBitsPerSecond float outDiscards integer outErrors integer outPktsPerSecond float outUtilization integer"},{"location":"reference/api/#switchstatelldpneighbor","title":"SwitchStateLLDPNeighbor","text":"

      Appears in: - SwitchStateInterface

      Field Description Default Validation chassisID string systemName string systemDescription string portID string portDescription string manufacturer string model string serialNumber string"},{"location":"reference/api/#switchstatenos","title":"SwitchStateNOS","text":"

      SwitchStateNOS contains information about the switch and NOS received from the switch itself by the agent

      Appears in: - SwitchState

      Field Description Default Validation asicVersion string ASIC name, such as \"broadcom\" or \"vs\" buildCommit string NOS build commit buildDate string NOS build date builtBy string NOS build user configDbVersion string NOS config DB version, such as \"version_4_2_1\" distributionVersion string Distribution version, such as \"Debian 10.13\" hardwareVersion string Hardware version, such as \"X01\" hwskuVersion string Hwsku version, such as \"DellEMC-S5248f-P-25G-DPB\" kernelVersion string Kernel version, such as \"5.10.0-21-amd64\" mfgName string Manufacturer name, such as \"Dell EMC\" platformName string Platform name, such as \"x86_64-dellemc_s5248f_c3538-r0\" productDescription string NOS product description, such as \"Enterprise SONiC Distribution by Broadcom - Enterprise Base package\" productVersion string NOS product version, empty for Broadcom SONiC serialNumber string Switch serial number softwareVersion string NOS software version, such as \"4.2.0-Enterprise_Base\" uptime string Switch uptime, such as \"21:21:27 up 1 day, 23:26, 0 users, load average: 1.92, 1.99, 2.00 \""},{"location":"reference/api/#switchstateplatform","title":"SwitchStatePlatform","text":"

      Appears in: - SwitchState

      Field Description Default Validation fans object (keys:string, values:SwitchStatePlatformFan) psus object (keys:string, values:SwitchStatePlatformPSU) temperature object (keys:string, values:SwitchStatePlatformTemperature)"},{"location":"reference/api/#switchstateplatformfan","title":"SwitchStatePlatformFan","text":"

      Appears in: - SwitchStatePlatform

      Field Description Default Validation direction string speed float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformpsu","title":"SwitchStatePlatformPSU","text":"

      Appears in: - SwitchStatePlatform

      Field Description Default Validation inputCurrent float inputPower float inputVoltage float outputCurrent float outputPower float outputVoltage float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformtemperature","title":"SwitchStatePlatformTemperature","text":"

      Appears in: - SwitchStatePlatform

      Field Description Default Validation temperature float alarms string highThreshold float criticalHighThreshold float lowThreshold float criticalLowThreshold float"},{"location":"reference/api/#switchstatetransceiver","title":"SwitchStateTransceiver","text":"

      Appears in: - SwitchStateInterface

      Field Description Default Validation description string cableClass string formFactor string connectorType string present string cableLength float operStatus string temperature float voltage float serialNumber string vendor string vendorPart string vendorOUI string vendorRev string"},{"location":"reference/api/#dhcpgithedgehogcomv1beta1","title":"dhcp.githedgehog.com/v1beta1","text":"

      Package v1beta1 contains API Schema definitions for the dhcp v1beta1 API group. It is the primary internal API group for the Hedgehog DHCP server configuration and for storing leases, as well as for making them available to the end user through the API. Not intended to be modified by the user.

      "},{"location":"reference/api/#resource-types_1","title":"Resource Types","text":"
      • DHCPSubnet
      "},{"location":"reference/api/#dhcpallocated","title":"DHCPAllocated","text":"

      DHCPAllocated is a single allocated IP with expiry time and hostname from DHCP requests; it's effectively a DHCP lease

      Appears in: - DHCPSubnetStatus

      Field Description Default Validation ip string Allocated IP address expiry Time Expiry time of the lease hostname string Hostname from DHCP request"},{"location":"reference/api/#dhcpsubnet","title":"DHCPSubnet","text":"

      DHCPSubnet is the configuration (spec) for the Hedgehog DHCP server and storage for the leases (status). It is primarily an internal API group, but it makes allocated IPs / leases information available to the end user through the API. Not intended to be modified by the user.

      Field Description Default Validation apiVersion string dhcp.githedgehog.com/v1beta1 kind string DHCPSubnet metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec DHCPSubnetSpec Spec is the desired state of the DHCPSubnet status DHCPSubnetStatus Status is the observed state of the DHCPSubnet"},{"location":"reference/api/#dhcpsubnetspec","title":"DHCPSubnetSpec","text":"

      DHCPSubnetSpec defines the desired state of DHCPSubnet

      Appears in: - DHCPSubnet
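Although DHCPSubnet objects are created and managed by the controller rather than by users, it can help to see what one looks like. The sketch below is illustrative only: the metadata name is hypothetical, and all spec values are taken from the example values in the field descriptions.

```yaml
apiVersion: dhcp.githedgehog.com/v1beta1
kind: DHCPSubnet
metadata:
  name: vpc-0--default      # hypothetical object name
spec:
  subnet: vpc-0/default     # full VPC subnet name, including the VPC name
  cidrBlock: 10.10.10.0/24
  gateway: 10.10.10.1
  startIP: 10.10.10.10      # first allocatable IP in the range
  endIP: 10.10.10.99        # last allocatable IP in the range
  vrf: VrfVvpc-1            # VRF name as it's named on the switch
  circuitID: Vlan1000       # VLAN ID as it's named on the switch
```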

      Field Description Default Validation subnet string Full VPC subnet name (including VPC name), such as \"vpc-0/default\" cidrBlock string CIDR block to use for VPC subnet, such as \"10.10.10.0/24\" gateway string Gateway, such as 10.10.10.1 startIP string Start IP from the CIDRBlock to allocate IPs, such as 10.10.10.10 endIP string End IP from the CIDRBlock to allocate IPs, such as 10.10.10.99 vrf string VRF name to identify specific VPC (will be added to DHCP packets by DHCP relay in suboption 151), such as \"VrfVvpc-1\" as it's named on switch circuitID string VLAN ID to identify specific subnet within the VPC, such as \"Vlan1000\" as it's named on switch pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option. defaultURL string DefaultURL (optional) is the option 114 \"default-url\" to be sent to the clients"},{"location":"reference/api/#dhcpsubnetstatus","title":"DHCPSubnetStatus","text":"

      DHCPSubnetStatus defines the observed state of DHCPSubnet

      Appears in: - DHCPSubnet

      Field Description Default Validation allocated object (keys:string, values:DHCPAllocated) Allocated is a map of allocated IPs with expiry time and hostname from DHCP requests"},{"location":"reference/api/#vpcgithedgehogcomv1beta1","title":"vpc.githedgehog.com/v1beta1","text":"

      Package v1beta1 contains API Schema definitions for the vpc v1beta1 API group. It is the public API group for the VPCs and Externals APIs. Intended to be used by the user.

      "},{"location":"reference/api/#resource-types_2","title":"Resource Types","text":"
      • External
      • ExternalAttachment
      • ExternalPeering
      • IPv4Namespace
      • VPC
      • VPCAttachment
      • VPCPeering
      "},{"location":"reference/api/#external","title":"External","text":"

      External object represents an external system connected to the Fabric and available to the specific IPv4Namespace. Users can peer with the external system by specifying the name of the External object, without needing to worry about the details of how the external system is attached to the Fabric.
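For illustration, a minimal External object might look like the following sketch. The object name is hypothetical; the spec fields and example community values come from the ExternalSpec field descriptions below.

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: External
metadata:
  name: example-external         # hypothetical name
spec:
  ipv4Namespace: default         # IPv4Namespace this External belongs to
  inboundCommunity: 65102:5000   # filter routes from the external system
  outboundCommunity: 50000:50001 # stamp all outbound routes with this community
```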

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string External metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalSpec Spec is the desired state of the External status ExternalStatus Status is the observed state of the External"},{"location":"reference/api/#externalattachment","title":"ExternalAttachment","text":"

      ExternalAttachment is a definition of how a specific switch is connected to an external system (External object). Effectively it represents BGP peering between the switch and the external system, including all needed configuration.

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalAttachmentSpec Spec is the desired state of the ExternalAttachment status ExternalAttachmentStatus Status is the observed state of the ExternalAttachment"},{"location":"reference/api/#externalattachmentneighbor","title":"ExternalAttachmentNeighbor","text":"

      ExternalAttachmentNeighbor defines the BGP neighbor configuration for the external attachment

      Appears in: - ExternalAttachmentSpec

      Field Description Default Validation asn integer ASN is the ASN of the BGP neighbor ip string IP is the IP address of the BGP neighbor to peer with"},{"location":"reference/api/#externalattachmentspec","title":"ExternalAttachmentSpec","text":"

      ExternalAttachmentSpec defines the desired state of ExternalAttachment

      Appears in: - ExternalAttachment
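The fields above can be sketched as a minimal ExternalAttachment. This is an assumption-laden illustration: the object name, Connection name, VLAN, IPs, and ASN are all hypothetical; only the field structure follows the spec tables.

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalAttachment
metadata:
  name: example-attachment        # hypothetical name
spec:
  external: example-external      # name of the External object
  connection: leaf-1--external    # hypothetical Connection (switch/port) name
  switch:
    vlan: 100                     # set to 0 if no VLAN is used
    ip: 192.168.100.2/24          # IP of the subinterface on the switch port
  neighbor:
    asn: 65100                    # ASN of the BGP neighbor
    ip: 192.168.100.1             # IP of the BGP neighbor to peer with
```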

      Field Description Default Validation external string External is the name of the External object this attachment belongs to connection string Connection is the name of the Connection object this attachment belongs to (essentially the name of the switch/port) switch ExternalAttachmentSwitch Switch is the switch port configuration for the external attachment neighbor ExternalAttachmentNeighbor Neighbor is the BGP neighbor configuration for the external attachment"},{"location":"reference/api/#externalattachmentstatus","title":"ExternalAttachmentStatus","text":"

      ExternalAttachmentStatus defines the observed state of ExternalAttachment

      Appears in: - ExternalAttachment

      "},{"location":"reference/api/#externalattachmentswitch","title":"ExternalAttachmentSwitch","text":"

      ExternalAttachmentSwitch defines the switch port configuration for the external attachment

      Appears in: - ExternalAttachmentSpec

      Field Description Default Validation vlan integer VLAN (optional) is the VLAN ID used for the subinterface on a switch port specified in the connection, set to 0 if no VLAN is used ip string IP is the IP address of the subinterface on a switch port specified in the connection"},{"location":"reference/api/#externalpeering","title":"ExternalPeering","text":"

      ExternalPeering is the Schema for the externalpeerings API

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalPeeringSpec Spec is the desired state of the ExternalPeering status ExternalPeeringStatus Status is the observed state of the ExternalPeering"},{"location":"reference/api/#externalpeeringspec","title":"ExternalPeeringSpec","text":"

      ExternalPeeringSpec defines the desired state of ExternalPeering

      Appears in: - ExternalPeering
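A minimal ExternalPeering sketch, assuming a VPC named vpc-1 and an External named example-external (both hypothetical); the permit structure and the 0.0.0.0/0 prefix example follow the field descriptions below.

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: vpc-1--example-external   # hypothetical name
spec:
  permit:
    vpc:
      name: vpc-1
      subnets:
        - default                 # VPC subnets advertised to the External
    external:
      name: example-external
      prefixes:
        - prefix: 0.0.0.0/0       # permit any route, including the default route
```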

      Field Description Default Validation permit ExternalPeeringSpecPermit Permit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit"},{"location":"reference/api/#externalpeeringspecexternal","title":"ExternalPeeringSpecExternal","text":"

      ExternalPeeringSpecExternal defines the External-side of the configuration to peer with

      Appears in: - ExternalPeeringSpecPermit

      Field Description Default Validation name string Name is the name of the External to peer with prefixes ExternalPeeringSpecPrefix array Prefixes is the list of prefixes to permit from the External to the VPC"},{"location":"reference/api/#externalpeeringspecpermit","title":"ExternalPeeringSpecPermit","text":"

      ExternalPeeringSpecPermit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit

      Appears in: - ExternalPeeringSpec

      Field Description Default Validation vpc ExternalPeeringSpecVPC VPC is the VPC-side of the configuration to peer with external ExternalPeeringSpecExternal External is the External-side of the configuration to peer with"},{"location":"reference/api/#externalpeeringspecprefix","title":"ExternalPeeringSpecPrefix","text":"

      ExternalPeeringSpecPrefix defines the prefix to permit from the External to the VPC

      Appears in: - ExternalPeeringSpecExternal

      Field Description Default Validation prefix string Prefix is the subnet to permit from the External to the VPC, e.g. 0.0.0.0/0 for any route including the default route. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one."},{"location":"reference/api/#externalpeeringspecvpc","title":"ExternalPeeringSpecVPC","text":"

      ExternalPeeringSpecVPC defines the VPC-side of the configuration to peer with

      Appears in: - ExternalPeeringSpecPermit

      Field Description Default Validation name string Name is the name of the VPC to peer with subnets string array Subnets is the list of subnets to advertise from VPC to the External"},{"location":"reference/api/#externalpeeringstatus","title":"ExternalPeeringStatus","text":"

      ExternalPeeringStatus defines the observed state of ExternalPeering

      Appears in: - ExternalPeering

      "},{"location":"reference/api/#externalspec","title":"ExternalSpec","text":"

      ExternalSpec describes the IPv4 namespace the External belongs to, and the inbound/outbound communities used to filter routes from/to the external system.

      Appears in: - External

      Field Description Default Validation ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this External belongs to inboundCommunity string InboundCommunity is the inbound community to filter routes from the external system (e.g. 65102:5000) outboundCommunity string OutboundCommunity is the outbound community that all outbound routes will be stamped with (e.g. 50000:50001)"},{"location":"reference/api/#externalstatus","title":"ExternalStatus","text":"

      ExternalStatus defines the observed state of External

      Appears in: - External

      "},{"location":"reference/api/#ipv4namespace","title":"IPv4Namespace","text":"

      IPv4Namespace represents a namespace for VPC subnets allocation. All VPC subnets within a single IPv4Namespace are non-overlapping. Users can create multiple IPv4Namespaces to allocate the same VPC subnets.
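A minimal IPv4Namespace sketch; the 10.10.0.0/16 range is a hypothetical example, and the subnets field follows the IPv4NamespaceSpec description below.

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: default
spec:
  subnets:
    - 10.10.0.0/16   # VPC subnets in this namespace are allocated from this range
```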

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string IPv4Namespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec IPv4NamespaceSpec Spec is the desired state of the IPv4Namespace status IPv4NamespaceStatus Status is the observed state of the IPv4Namespace"},{"location":"reference/api/#ipv4namespacespec","title":"IPv4NamespaceSpec","text":"

      IPv4NamespaceSpec defines the desired state of IPv4Namespace

      Appears in: - IPv4Namespace

      Field Description Default Validation subnets string array Subnets is the list of subnets to allocate VPC subnets from; they must not overlap with each other or with Fabric reserved subnets MaxItems: 20 MinItems: 1"},{"location":"reference/api/#ipv4namespacestatus","title":"IPv4NamespaceStatus","text":"

      IPv4NamespaceStatus defines the observed state of IPv4Namespace

      Appears in: - IPv4Namespace

      "},{"location":"reference/api/#vpc","title":"VPC","text":"

      VPC is a Virtual Private Cloud. Similar to a public cloud VPC, it provides an isolated private network for its resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.
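For illustration, a minimal VPC with one DHCP-enabled subnet might look like the sketch below. The top-level and dhcp fields follow the VPCSpec and VPCDHCP descriptions; however, the per-subnet keys (subnet, vlan) are assumptions, since VPCSubnet fields are not detailed here, and all names and values are hypothetical.

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
spec:
  ipv4Namespace: default   # "default" is also used if omitted
  vlanNamespace: default   # "default" is also used if omitted
  subnets:
    default:
      subnet: 10.10.10.0/24   # assumed VPCSubnet key
      vlan: 1000              # assumed VPCSubnet key
      dhcp:
        enable: true
        range:
          start: 10.10.10.10
          end: 10.10.10.99
```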

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPC metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCSpec Spec is the desired state of the VPC status VPCStatus Status is the observed state of the VPC"},{"location":"reference/api/#vpcattachment","title":"VPCAttachment","text":"

      VPCAttachment is the Schema for the vpcattachments API

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCAttachmentSpec Spec is the desired state of the VPCAttachment status VPCAttachmentStatus Status is the observed state of the VPCAttachment"},{"location":"reference/api/#vpcattachmentspec","title":"VPCAttachmentSpec","text":"

      VPCAttachmentSpec defines the desired state of VPCAttachment

      Appears in: - VPCAttachment
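A minimal VPCAttachment sketch, assuming a VPC subnet named vpc-1/default and a hypothetical Connection name; the fields follow the VPCAttachmentSpec descriptions below.

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  name: vpc-1--default--server-1   # hypothetical name
spec:
  subnet: vpc-1/default            # full name of the VPC subnet to attach to
  connection: server-1--leaf-1     # hypothetical Connection name
  nativeVLAN: false                # use a tagged VLAN on the connection
```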

      Field Description Default Validation subnet string Subnet is the full name of the VPC subnet to attach to, such as \"vpc-1/default\" connection string Connection is the name of the connection to attach to the VPC nativeVLAN boolean NativeVLAN is the flag to indicate if the native VLAN should be used for attaching the VPC subnet"},{"location":"reference/api/#vpcattachmentstatus","title":"VPCAttachmentStatus","text":"

      VPCAttachmentStatus defines the observed state of VPCAttachment

      Appears in: - VPCAttachment

      "},{"location":"reference/api/#vpcdhcp","title":"VPCDHCP","text":"

      VPCDHCP defines the on-demand DHCP configuration for the subnet

      Appears in: - VPCSubnet

      Field Description Default Validation relay string Relay is the DHCP relay IP address, if specified, DHCP server will be disabled enable boolean Enable enables DHCP server for the subnet range VPCDHCPRange Range (optional) is the DHCP range for the subnet if DHCP server is enabled options VPCDHCPOptions Options (optional) is the DHCP options for the subnet if DHCP server is enabled"},{"location":"reference/api/#vpcdhcpoptions","title":"VPCDHCPOptions","text":"

      VPCDHCPOptions defines the DHCP options for the subnet if DHCP server is enabled

      Appears in: - VPCDHCP

      Field Description Default Validation pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option."},{"location":"reference/api/#vpcdhcprange","title":"VPCDHCPRange","text":"

      VPCDHCPRange defines the DHCP range for the subnet if DHCP server is enabled

      Appears in: - VPCDHCP

      Field Description Default Validation start string Start is the start IP address of the DHCP range end string End is the end IP address of the DHCP range"},{"location":"reference/api/#vpcpeer","title":"VPCPeer","text":"

      Appears in: - VPCPeeringSpec

      Field Description Default Validation subnets string array Subnets is the list of subnets to advertise from current VPC to the peer VPC MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeering","title":"VPCPeering","text":"

      VPCPeering represents a peering between two VPCs with corresponding filtering rules. Minimal example of the VPC peering showing vpc-1 to vpc-2 peering with all subnets allowed:

      spec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n
      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCPeeringSpec Spec is the desired state of the VPCPeering status VPCPeeringStatus Status is the observed state of the VPCPeering"},{"location":"reference/api/#vpcpeeringspec","title":"VPCPeeringSpec","text":"

      VPCPeeringSpec defines the desired state of VPCPeering

      Appears in: - VPCPeering

      Field Description Default Validation remote string permit map[string]VPCPeer array Permit defines a list of the peering policies - which VPC subnets will have access to the peer VPC subnets. MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeeringstatus","title":"VPCPeeringStatus","text":"

      VPCPeeringStatus defines the observed state of VPCPeering

      Appears in: - VPCPeering

      "},{"location":"reference/api/#vpcspec","title":"VPCSpec","text":"

      VPCSpec defines the desired state of VPC. At least one subnet is required.

      Appears in: - VPC
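
      A minimal VPC object using the fields below might look like the following sketch (names and values are illustrative):

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-1\nspec:\n  ipv4Namespace: default\n  vlanNamespace: default\n  subnets:\n    default:\n      subnet: 10.0.1.0/24\n      vlan: 1001\n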

      Field Description Default Validation subnets object (keys:string, values:VPCSubnet) Subnets is the list of VPC subnets to configure ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this VPC belongs to (if not specified, \"default\" is used) vlanNamespace string VLANNamespace is the name of the VLANNamespace this VPC belongs to (if not specified, \"default\" is used) defaultIsolated boolean DefaultIsolated sets default behavior for isolated mode for the subnets (disabled by default) defaultRestricted boolean DefaultRestricted sets default behavior for restricted mode for the subnets (disabled by default) permit string array array Permit defines a list of the access policies between the subnets within the VPC - each policy is a list of subnets that have access to each other. It's applied on top of the subnet isolation flag: if a subnet isn't isolated, it doesn't need to be in a permit list, but if a subnet is marked as isolated, it must be in a permit list to have access to other subnets. staticRoutes VPCStaticRoute array StaticRoutes is the list of additional static routes for the VPC"},{"location":"reference/api/#vpcstaticroute","title":"VPCStaticRoute","text":"

      VPCStaticRoute defines the static route for the VPC

      Appears in: - VPCSpec
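
      Using the example values from the field descriptions below, a static route entry could be sketched as:

      staticRoutes:\n- prefix: 10.42.0.0/24\n  nextHops:\n  - 10.99.0.0\n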

      Field Description Default Validation prefix string Prefix for the static route (mandatory), e.g. 10.42.0.0/24 nextHops string array NextHops for the static route (at least one is required), e.g. 10.99.0.0"},{"location":"reference/api/#vpcstatus","title":"VPCStatus","text":"

      VPCStatus defines the observed state of VPC

      Appears in: - VPC

      "},{"location":"reference/api/#vpcsubnet","title":"VPCSubnet","text":"

      VPCSubnet defines the VPC subnet configuration

      Appears in: - VPCSpec

      Field Description Default Validation subnet string Subnet is the subnet CIDR block, such as \"10.0.0.0/24\", should belong to the IPv4Namespace and be unique within the namespace gateway string Gateway (optional) for the subnet, if not specified, the first IP (e.g. 10.0.0.1) in the subnet is used as the gateway dhcp VPCDHCP DHCP is the on-demand DHCP configuration for the subnet vlan integer VLAN is the VLAN ID for the subnet, should belong to the VLANNamespace and be unique within the namespace isolated boolean Isolated is the flag to enable isolated mode for the subnet which means no access to and from the other subnets within the VPC restricted boolean Restricted is the flag to enable restricted mode for the subnet which means no access between hosts within the subnet itself"},{"location":"reference/api/#wiringgithedgehogcomv1beta1","title":"wiring.githedgehog.com/v1beta1","text":"

      Package v1beta1 contains API Schema definitions for the wiring v1beta1 API group. It is a public API group mainly for the underlay definition, including Switches, Servers, and the wiring between them. Intended to be used by the user.

      "},{"location":"reference/api/#resource-types_3","title":"Resource Types","text":"
      • Connection
      • Server
      • Switch
      • SwitchGroup
      • SwitchProfile
      • VLANNamespace
      "},{"location":"reference/api/#baseportname","title":"BasePortName","text":"

      BasePortName defines the full name of the switch port

      Appears in: - ConnExternalLink - ConnFabricLinkSwitch - ConnStaticExternalLinkSwitch - ServerToSwitchLink - SwitchToSwitchLink

      Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object."},{"location":"reference/api/#connbundled","title":"ConnBundled","text":"

      ConnBundled defines the bundled connection (port channel, single server to a single switch with multiple links)

      Appears in: - ConnectionSpec

      Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#conneslag","title":"ConnESLAG","text":"

      ConnESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)

      Appears in: - ConnectionSpec

      Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is an optional flag indicating that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connexternal","title":"ConnExternal","text":"

      ConnExternal defines the external connection (single switch to a single external device with a single link)

      Appears in: - ConnectionSpec

      Field Description Default Validation link ConnExternalLink Link is the external connection link"},{"location":"reference/api/#connexternallink","title":"ConnExternalLink","text":"

      ConnExternalLink defines the external connection link

      Appears in: - ConnExternal

      Field Description Default Validation switch BasePortName"},{"location":"reference/api/#connfabric","title":"ConnFabric","text":"

      ConnFabric defines the fabric connection (single spine to a single leaf with at least one link)

      Appears in: - ConnectionSpec
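
      An illustrative sketch of a fabric connection with a single spine-to-leaf link (ports and IPs are hypothetical):

      spec:\n  fabric:\n    links:\n    - spine:\n        port: spine-1/Ethernet1\n        ip: 172.30.30.0/31\n      leaf:\n        port: leaf-01/Ethernet49\n        ip: 172.30.30.1/31\n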

      Field Description Default Validation links FabricLink array Links is the list of spine-to-leaf links MinItems: 1"},{"location":"reference/api/#connfabriclinkswitch","title":"ConnFabricLinkSwitch","text":"

      ConnFabricLinkSwitch defines the switch side of the fabric link

      Appears in: - FabricLink

      Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the fabric link (switch port configuration) Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}/([1-2]?[0-9]\\|3[0-2])$"},{"location":"reference/api/#connmclag","title":"ConnMCLAG","text":"

      ConnMCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links)

      Appears in: - ConnectionSpec

      Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is an optional flag indicating that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connmclagdomain","title":"ConnMCLAGDomain","text":"

      ConnMCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch or redundancy group and allows using MCLAG connections to connect servers in a multi-homed way.

      Appears in: - ConnectionSpec
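
      An illustrative sketch of an MCLAG domain with one peer link and one session link (port names are assumptions):

      spec:\n  mclagDomain:\n    peerLinks:\n    - switch1:\n        port: leaf-01/Ethernet0\n      switch2:\n        port: leaf-02/Ethernet0\n    sessionLinks:\n    - switch1:\n        port: leaf-01/Ethernet1\n      switch2:\n        port: leaf-02/Ethernet1\n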

      Field Description Default Validation peerLinks SwitchToSwitchLink array PeerLinks is the list of peer links between the switches, used to pass server traffic between switches MinItems: 1 sessionLinks SwitchToSwitchLink array SessionLinks is the list of session links between the switches, used only to pass MCLAG control plane and BGP traffic between switches MinItems: 1"},{"location":"reference/api/#connstaticexternal","title":"ConnStaticExternal","text":"

      ConnStaticExternal defines the static external connection (single switch to a single external device with a single link)

      Appears in: - ConnectionSpec

      Field Description Default Validation link ConnStaticExternalLink Link is the static external connection link withinVPC string WithinVPC is the optional VPC name to provision the static external connection within the VPC VRF instead of the default one, making the resource available to the specific VPC"},{"location":"reference/api/#connstaticexternallink","title":"ConnStaticExternalLink","text":"

      ConnStaticExternalLink defines the static external connection link

      Appears in: - ConnStaticExternal

      Field Description Default Validation switch ConnStaticExternalLinkSwitch Switch is the switch side of the static external connection link"},{"location":"reference/api/#connstaticexternallinkswitch","title":"ConnStaticExternalLinkSwitch","text":"

      ConnStaticExternalLinkSwitch defines the switch side of the static external connection link

      Appears in: - ConnStaticExternalLink
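
      Putting the fields below together, a static external link could be sketched as follows (addresses and VLAN are illustrative):

      staticExternal:\n  link:\n    switch:\n      port: leaf-01/Ethernet5\n      ip: 172.31.0.2/24\n      nextHop: 172.31.0.1\n      subnets:\n      - 10.99.0.0/24\n      vlan: 100\n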

      Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the static external connection link (switch port configuration) Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}/([1-2]?[0-9]\\|3[0-2])$ nextHop string NextHop is the next hop IP address for static routes that will be created for the subnets Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}$ subnets string array Subnets is the list of subnets that will get static routes using the specified next hop vlan integer VLAN is the optional VLAN ID to be configured on the switch port"},{"location":"reference/api/#connunbundled","title":"ConnUnbundled","text":"

      ConnUnbundled defines the unbundled connection (no port channel, single server to a single switch with a single link)

      Appears in: - ConnectionSpec

      Field Description Default Validation link ServerToSwitchLink Link is the server-to-switch link mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connvpcloopback","title":"ConnVPCLoopback","text":"

      ConnVPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) that enables the automated workaround named \"VPC Loopback\", which avoids switch hardware limitations and traffic going through the CPU in some cases

      Appears in: - ConnectionSpec

      Field Description Default Validation links SwitchToSwitchLink array Links is the list of VPC loopback links MinItems: 1"},{"location":"reference/api/#connection","title":"Connection","text":"

      A Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all physical and logical connections between the devices in the Wiring Diagram. The Connection type is defined by the top-level field in the ConnectionSpec. Exactly one of them can be used in a single Connection object.

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Connection metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ConnectionSpec Spec is the desired state of the Connection status ConnectionStatus Status is the observed state of the Connection"},{"location":"reference/api/#connectionspec","title":"ConnectionSpec","text":"

      ConnectionSpec defines the desired state of Connection

      Appears in: - Connection
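
      For example, an unbundled server connection could be sketched as follows (the server port name is an assumption; the Connection name follows the naming convention shown in the CLI examples):

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-01--unbundled--leaf-01\nspec:\n  unbundled:\n    link:\n      server:\n        port: server-01/enp2s1\n      switch:\n        port: leaf-01/Ethernet1\n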

      Field Description Default Validation unbundled ConnUnbundled Unbundled defines the unbundled connection (no port channel, single server to a single switch with a single link) bundled ConnBundled Bundled defines the bundled connection (port channel, single server to a single switch with multiple links) mclag ConnMCLAG MCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links) eslag ConnESLAG ESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links) mclagDomain ConnMCLAGDomain MCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch for server multi-homing fabric ConnFabric Fabric defines the fabric connection (single spine to a single leaf with at least one link) vpcLoopback ConnVPCLoopback VPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) for automated workaround external ConnExternal External defines the external connection (single switch to a single external device with a single link) staticExternal ConnStaticExternal StaticExternal defines the static external connection (single switch to a single external device with a single link)"},{"location":"reference/api/#connectionstatus","title":"ConnectionStatus","text":"

      ConnectionStatus defines the observed state of Connection

      Appears in: - Connection

      "},{"location":"reference/api/#fabriclink","title":"FabricLink","text":"

      FabricLink defines the fabric connection link

      Appears in: - ConnFabric

      Field Description Default Validation spine ConnFabricLinkSwitch Spine is the spine side of the fabric link leaf ConnFabricLinkSwitch Leaf is the leaf side of the fabric link"},{"location":"reference/api/#server","title":"Server","text":"

      Server is the Schema for the servers API

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Server metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ServerSpec Spec is desired state of the server status ServerStatus Status is the observed state of the server"},{"location":"reference/api/#serverfacingconnectionconfig","title":"ServerFacingConnectionConfig","text":"

      ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration

      Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

      Field Description Default Validation mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#serverspec","title":"ServerSpec","text":"

      ServerSpec defines the desired state of Server

      Appears in: - Server

      Field Description Default Validation description string Description is a description of the server profile string Profile is the profile of the server, name of the ServerProfile object to be used for this server, currently not used by the Fabric"},{"location":"reference/api/#serverstatus","title":"ServerStatus","text":"

      ServerStatus defines the observed state of Server

      Appears in: - Server

      "},{"location":"reference/api/#servertoswitchlink","title":"ServerToSwitchLink","text":"

      ServerToSwitchLink defines the server-to-switch link

      Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

      Field Description Default Validation server BasePortName Server is the server side of the connection switch BasePortName Switch is the switch side of the connection"},{"location":"reference/api/#switch","title":"Switch","text":"

      Switch is the Schema for the switches API

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Switch metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchSpec Spec is desired state of the switch status SwitchStatus Status is the observed state of the switch"},{"location":"reference/api/#switchboot","title":"SwitchBoot","text":"

      Appears in: - SwitchSpec

      Field Description Default Validation serial string Identify switch by serial number mac string Identify switch by MAC address of the management port"},{"location":"reference/api/#switchgroup","title":"SwitchGroup","text":"

      SwitchGroup is the marker API object to group switches together; a switch can belong to multiple groups

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string SwitchGroup metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchGroupSpec Spec is the desired state of the SwitchGroup status SwitchGroupStatus Status is the observed state of the SwitchGroup"},{"location":"reference/api/#switchgroupspec","title":"SwitchGroupSpec","text":"

      SwitchGroupSpec defines the desired state of SwitchGroup

      Appears in: - SwitchGroup

      "},{"location":"reference/api/#switchgroupstatus","title":"SwitchGroupStatus","text":"

      SwitchGroupStatus defines the observed state of SwitchGroup

      Appears in: - SwitchGroup

      "},{"location":"reference/api/#switchprofile","title":"SwitchProfile","text":"

      SwitchProfile represents switch capabilities and configuration

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string SwitchProfile metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchProfileSpec status SwitchProfileStatus"},{"location":"reference/api/#switchprofileconfig","title":"SwitchProfileConfig","text":"

      Defines switch-specific configuration options

      Appears in: - SwitchProfileSpec

      Field Description Default Validation maxPathsEBGP integer MaxPathsEBGP defines the maximum number of EBGP paths to be configured"},{"location":"reference/api/#switchprofilefeatures","title":"SwitchProfileFeatures","text":"

      Defines features supported by a specific switch, which are later used for role and Fabric API feature usage validation

      Appears in: - SwitchProfileSpec

      Field Description Default Validation subinterfaces boolean Subinterfaces defines if switch supports subinterfaces vxlan boolean VXLAN defines if switch supports VXLANs acls boolean ACLs defines if switch supports ACLs"},{"location":"reference/api/#switchprofileport","title":"SwitchProfilePort","text":"

      Defines a switch port configuration. Only one of Profile or Group can be set

      Appears in: - SwitchProfileSpec

      Field Description Default Validation nos string NOSName defines how the port is named in the NOS baseNOSName string BaseNOSName defines the base NOS name that could be used together with the profile to generate the actual NOS name (e.g. breakouts) label string Label defines the physical port label you can see on the actual switch group string If the port isn't directly manageable, group defines the group it belongs to, exclusive with profile profile string If the port is directly configurable, profile defines the profile it belongs to, exclusive with group management boolean Management defines if the port is a management port; it's a special case and it can't have a group or profile oniePortName string OniePortName defines the ONIE port name for management ports only"},{"location":"reference/api/#switchprofileportgroup","title":"SwitchProfilePortGroup","text":"

      Defines a switch port group configuration

      Appears in: - SwitchProfileSpec

      Field Description Default Validation nos string NOSName defines how the group is named in the NOS profile string Profile defines the possible configuration profile for the group, which can only have a speed profile"},{"location":"reference/api/#switchprofileportprofile","title":"SwitchProfilePortProfile","text":"

      Defines a switch port profile configuration

      Appears in: - SwitchProfileSpec

      Field Description Default Validation speed SwitchProfilePortProfileSpeed Speed defines the speed configuration for the profile, exclusive with breakout breakout SwitchProfilePortProfileBreakout Breakout defines the breakout configuration for the profile, exclusive with speed autoNegAllowed boolean AutoNegAllowed defines if configuring auto-negotiation is allowed for the port autoNegDefault boolean AutoNegDefault defines the default auto-negotiation state for the port"},{"location":"reference/api/#switchprofileportprofilebreakout","title":"SwitchProfilePortProfileBreakout","text":"

      Defines a switch port profile breakout configuration

      Appears in: - SwitchProfilePortProfile

      Field Description Default Validation default string Default defines the default breakout mode for the profile supported object (keys:string, values:SwitchProfilePortProfileBreakoutMode) Supported defines the supported breakout modes for the profile with the NOS name offsets"},{"location":"reference/api/#switchprofileportprofilebreakoutmode","title":"SwitchProfilePortProfileBreakoutMode","text":"

      Defines a switch port profile breakout mode configuration

      Appears in: - SwitchProfilePortProfileBreakout

      Field Description Default Validation offsets string array Offsets defines the breakout NOS port name offset from the port NOS Name for each breakout mode"},{"location":"reference/api/#switchprofileportprofilespeed","title":"SwitchProfilePortProfileSpeed","text":"

      Defines a switch port profile speed configuration

      Appears in: - SwitchProfilePortProfile

      Field Description Default Validation default string Default defines the default speed for the profile supported string array Supported defines the supported speeds for the profile"},{"location":"reference/api/#switchprofilespec","title":"SwitchProfileSpec","text":"

      SwitchProfileSpec defines the desired state of SwitchProfile

      Appears in: - SwitchProfile

      Field Description Default Validation displayName string DisplayName defines the human-readable name of the switch otherNames string array OtherNames defines alternative names for the switch features SwitchProfileFeatures Features defines the features supported by the switch config SwitchProfileConfig Config defines the switch-specific configuration options ports object (keys:string, values:SwitchProfilePort) Ports defines the switch port configuration portGroups object (keys:string, values:SwitchProfilePortGroup) PortGroups defines the switch port group configuration portProfiles object (keys:string, values:SwitchProfilePortProfile) PortProfiles defines the switch port profile configuration nosType NOSType NOSType defines the NOS type to be used for the switch platform string Platform is what is expected to be requested by ONIE and displayed in the NOS"},{"location":"reference/api/#switchprofilestatus","title":"SwitchProfileStatus","text":"

      SwitchProfileStatus defines the observed state of SwitchProfile

      Appears in: - SwitchProfile

      "},{"location":"reference/api/#switchredundancy","title":"SwitchRedundancy","text":"

      SwitchRedundancy is the switch redundancy configuration, which includes the name of the redundancy group the switch belongs to and its type, used both for MCLAG and ESLAG connections. It defines how redundancy will be configured and handled on the switch as well as which connection types will be available. If not specified, the switch will not be part of any redundancy group. If name isn't empty, type must be specified as well, and name should be the same as one of the SwitchGroup objects.

      Appears in: - SwitchSpec

      Field Description Default Validation group string Group is the name of the redundancy group switch belongs to type RedundancyType Type is the type of the redundancy group, could be mclag or eslag"},{"location":"reference/api/#switchrole","title":"SwitchRole","text":"

      Underlying type: string

      SwitchRole is the role of the switch; it can be spine, server-leaf, border-leaf, mixed-leaf or virtual-edge

      Validation: - Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge]

      Appears in: - SwitchSpec

      Field Description spine server-leaf border-leaf mixed-leaf virtual-edge"},{"location":"reference/api/#switchspec","title":"SwitchSpec","text":"

      SwitchSpec defines the desired state of Switch

      Appears in: - Switch
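
      An illustrative sketch of a Switch object combining the fields below (ASN, names and breakout values are hypothetical; the profile name comes from the Switch Profiles Catalog):

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: leaf-01\nspec:\n  role: server-leaf\n  profile: celestica-ds3000\n  vlanNamespaces:\n  - default\n  asn: 65101\n  redundancy:\n    group: mclag-1\n    type: mclag\n  portBreakouts:\n    \"1/55\": 4x25G\n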

      Field Description Default Validation role SwitchRole Role is the role of the switch; it can be spine, server-leaf, border-leaf, mixed-leaf or virtual-edge Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge] Required: {} description string Description is a description of the switch profile string Profile is the profile of the switch, name of the SwitchProfile object to be used for this switch, currently not used by the Fabric groups string array Groups is a list of switch groups the switch belongs to redundancy SwitchRedundancy Redundancy is the switch redundancy configuration including name of the redundancy group switch belongs to and its type, used both for MCLAG and ESLAG connections vlanNamespaces string array VLANNamespaces is a list of VLAN namespaces the switch is part of, their VLAN ranges must not overlap asn integer ASN is the ASN of the switch ip string IP is the IP of the switch that could be used to access it from other switches and control nodes in the Fabric vtepIP string VTEPIP is the VTEP IP of the switch protocolIP string ProtocolIP is used as BGP Router ID for switch configuration portGroupSpeeds object (keys:string, values:string) PortGroupSpeeds is a map of port group speeds, key is the port group name, value is the speed, such as '\"2\": 10G' portSpeeds object (keys:string, values:string) PortSpeeds is a map of port speeds, key is the port name, value is the speed portBreakouts object (keys:string, values:string) PortBreakouts is a map of port breakouts, key is the port name, value is the breakout configuration, such as \"1/55: 4x25G\" portAutoNegs object (keys:string, values:boolean) PortAutoNegs is a map of port auto negotiation, key is the port name, value is true or false boot SwitchBoot Boot is the boot/provisioning information of the switch"},{"location":"reference/api/#switchstatus","title":"SwitchStatus","text":"

      SwitchStatus defines the observed state of Switch

      Appears in: - Switch

      "},{"location":"reference/api/#switchtoswitchlink","title":"SwitchToSwitchLink","text":"

      SwitchToSwitchLink defines the switch-to-switch link

      Appears in: - ConnMCLAGDomain - ConnVPCLoopback

      Field Description Default Validation switch1 BasePortName Switch1 is the first switch side of the connection switch2 BasePortName Switch2 is the second switch side of the connection"},{"location":"reference/api/#vlannamespace","title":"VLANNamespace","text":"

      VLANNamespace is the Schema for the vlannamespaces API

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string VLANNamespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VLANNamespaceSpec Spec is the desired state of the VLANNamespace status VLANNamespaceStatus Status is the observed state of the VLANNamespace"},{"location":"reference/api/#vlannamespacespec","title":"VLANNamespaceSpec","text":"

      VLANNamespaceSpec defines the desired state of VLANNamespace

      Appears in: - VLANNamespace
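
      An illustrative VLANNamespace with a single range (the from/to field names of the VLANRange type are assumptions, as VLANRange is not detailed in this reference):

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: VLANNamespace\nmetadata:\n  name: default\nspec:\n  ranges:\n  - from: 1000\n    to: 2999\n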

      Field Description Default Validation ranges VLANRange array Ranges is a list of VLAN ranges to be used in this namespace; they must not overlap with each other or with Fabric-reserved VLAN ranges MaxItems: 20 MinItems: 1"},{"location":"reference/api/#vlannamespacestatus","title":"VLANNamespaceStatus","text":"

      VLANNamespaceStatus defines the observed state of VLANNamespace

      Appears in: - VLANNamespace

      "},{"location":"reference/cli/","title":"Fabric CLI","text":"

      Under construction.

      Currently, the Fabric CLI is provided as a kubectl plugin, kubectl-fabric, automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client that allows managing Fabric resources in a more convenient way. The Fabric CLI only provides a subset of the functionality available via the Fabric API and focuses on simplifying object creation and some manipulation of already existing objects, while the main get/list/update operations are expected to be done using kubectl.

      core@control-1 ~ $ kubectl fabric\nNAME:\n   hhfctl - Hedgehog Fabric user client\n\nUSAGE:\n   hhfctl [global options] command [command options] [arguments...]\n\nVERSION:\n   v0.23.0\n\nCOMMANDS:\n   vpc                VPC commands\n   switch, sw, agent  Switch/Agent commands\n   connection, conn   Connection commands\n   switchgroup, sg    SwitchGroup commands\n   external           External commands\n   help, h            Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n   --verbose, -v  verbose output (includes debug) (default: true)\n   --help, -h     show help\n   --version, -V  print the version\n
      "},{"location":"reference/cli/#vpc","title":"VPC","text":"

      Create a VPC named vpc-1 with subnet 10.0.1.0/24 and VLAN 1001, with DHCP enabled and a DHCP range starting from 10.0.1.10 (optional):

      core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n

      Attach the previously created VPC to the server server-01 (which is connected to the Fabric using the server-01--mclag--leaf-01--leaf-02 Connection):

      core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n

      To peer a VPC with another VPC (e.g. vpc-2), use the following command:

      core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n
      "},{"location":"reference/profiles/","title":"Switch Profiles Catalog","text":"

      The following is a list of all supported switches. Please make sure to use the version of the documentation that matches your environment to get an up-to-date list of supported switches, their features and port naming scheme.

      "},{"location":"reference/profiles/#celestica-ds3000","title":"Celestica DS3000","text":"

      Profile Name (to use in switch.spec.profile): celestica-ds3000

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      The Label column is the port label on the physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/25 25 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/32 32 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#celestica-ds4000","title":"Celestica DS4000","text":"

      Profile Name (to use in switch.spec.profile): celestica-ds4000

      Supported features:

      • Subinterfaces: false
      • VXLAN: false
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/2 2 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/3 3 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/4 4 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/5 5 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/6 6 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/7 7 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/8 8 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/9 9 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/10 10 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/11 11 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/12 12 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/13 13 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/14 14 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/15 15 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/16 16 Breakout 1x400G 1x100G, 1x10G, 1x25G, 
1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/17 17 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/18 18 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/19 19 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/20 20 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/21 21 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/22 22 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/23 23 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/24 24 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/25 25 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/26 26 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/27 27 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/28 28 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/29 29 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/30 30 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/31 31 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/32 32 
Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#dell-s5232f-on","title":"Dell S5232F-ON","text":"

      Profile Name (to use in switch.spec.profile): dell-s5232f-on

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/25 25 
Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/32 32 Direct 100G 40G, 100G E1/33 33 Direct 10G 1G, 10G E1/34 34 Direct 10G 1G, 10G"},{"location":"reference/profiles/#dell-s5248f-on","title":"Dell S5248F-ON","text":"

      Profile Name (to use in switch.spec.profile): dell-s5248f-on

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 2 25G 10G, 25G E1/6 6 Port Group 2 25G 10G, 25G E1/7 7 Port Group 2 25G 10G, 25G E1/8 8 Port Group 2 25G 10G, 25G E1/9 9 Port Group 3 25G 10G, 25G E1/10 10 Port Group 3 25G 10G, 25G E1/11 11 Port Group 3 25G 10G, 25G E1/12 12 Port Group 3 25G 10G, 25G E1/13 13 Port Group 4 25G 10G, 25G E1/14 14 Port Group 4 25G 10G, 25G E1/15 15 Port Group 4 25G 10G, 25G E1/16 16 Port Group 4 25G 10G, 25G E1/17 17 Port Group 5 25G 10G, 25G E1/18 18 Port Group 5 25G 10G, 25G E1/19 19 Port Group 5 25G 10G, 25G E1/20 20 Port Group 5 25G 10G, 25G E1/21 21 Port Group 6 25G 10G, 25G E1/22 22 Port Group 6 25G 10G, 25G E1/23 23 Port Group 6 25G 10G, 25G E1/24 24 Port Group 6 25G 10G, 25G E1/25 25 Port Group 7 25G 10G, 25G E1/26 26 Port Group 7 25G 10G, 25G E1/27 27 Port Group 7 25G 10G, 25G E1/28 28 Port Group 7 25G 10G, 25G E1/29 29 Port Group 8 25G 10G, 25G E1/30 30 Port Group 8 25G 10G, 25G E1/31 31 Port Group 8 25G 10G, 25G E1/32 32 Port Group 8 25G 10G, 25G E1/33 33 Port Group 9 25G 10G, 25G E1/34 34 Port Group 9 25G 10G, 25G E1/35 35 Port Group 9 25G 10G, 25G E1/36 36 Port Group 9 25G 10G, 25G E1/37 37 Port Group 10 25G 10G, 25G E1/38 38 Port Group 10 25G 10G, 25G E1/39 39 Port Group 10 25G 10G, 25G E1/40 40 Port Group 10 25G 10G, 25G E1/41 41 Port Group 11 25G 10G, 25G E1/42 42 Port Group 11 25G 10G, 25G E1/43 43 Port Group 11 25G 10G, 25G E1/44 44 Port Group 11 25G 10G, 25G E1/45 45 Port Group 12 25G 10G, 25G E1/46 46 Port Group 12 25G 10G, 25G E1/47 47 Port Group 12 25G 10G, 25G E1/48 48 Port Group 12 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/52 52 Breakout 
1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/54 54 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/56 56 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G"},{"location":"reference/profiles/#edgecore-dcs203","title":"Edgecore DCS203","text":"

      Profile Name (to use in switch.spec.profile): edgecore-dcs203

      Other names: Edgecore AS7326-56X

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 1 25G 10G, 25G E1/6 6 Port Group 1 25G 10G, 25G E1/7 7 Port Group 1 25G 10G, 25G E1/8 8 Port Group 1 25G 10G, 25G E1/9 9 Port Group 1 25G 10G, 25G E1/10 10 Port Group 1 25G 10G, 25G E1/11 11 Port Group 1 25G 10G, 25G E1/12 12 Port Group 1 25G 10G, 25G E1/13 13 Port Group 2 25G 10G, 25G E1/14 14 Port Group 2 25G 10G, 25G E1/15 15 Port Group 2 25G 10G, 25G E1/16 16 Port Group 2 25G 10G, 25G E1/17 17 Port Group 2 25G 10G, 25G E1/18 18 Port Group 2 25G 10G, 25G E1/19 19 Port Group 2 25G 10G, 25G E1/20 20 Port Group 2 25G 10G, 25G E1/21 21 Port Group 2 25G 10G, 25G E1/22 22 Port Group 2 25G 10G, 25G E1/23 23 Port Group 2 25G 10G, 25G E1/24 24 Port Group 2 25G 10G, 25G E1/25 25 Port Group 3 25G 10G, 25G E1/26 26 Port Group 3 25G 10G, 25G E1/27 27 Port Group 3 25G 10G, 25G E1/28 28 Port Group 3 25G 10G, 25G E1/29 29 Port Group 3 25G 10G, 25G E1/30 30 Port Group 3 25G 10G, 25G E1/31 31 Port Group 3 25G 10G, 25G E1/32 32 Port Group 3 25G 10G, 25G E1/33 33 Port Group 3 25G 10G, 25G E1/34 34 Port Group 3 25G 10G, 25G E1/35 35 Port Group 3 25G 10G, 25G E1/36 36 Port Group 3 25G 10G, 25G E1/37 37 Port Group 4 25G 10G, 25G E1/38 38 Port Group 4 25G 10G, 25G E1/39 39 Port Group 4 25G 10G, 25G E1/40 40 Port Group 4 25G 10G, 25G E1/41 41 Port Group 4 25G 10G, 25G E1/42 42 Port Group 4 25G 10G, 25G E1/43 43 Port Group 4 25G 10G, 25G E1/44 44 Port Group 4 25G 10G, 25G E1/45 45 Port Group 4 25G 10G, 25G E1/46 46 Port Group 4 25G 10G, 25G E1/47 47 Port Group 4 25G 10G, 25G E1/48 48 Port Group 4 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/52 52 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/54 54 
Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/56 56 Direct 100G 40G, 100G E1/57 57 Direct 10G 1G, 10G E1/58 58 Direct 10G 1G, 10G"},{"location":"reference/profiles/#edgecore-dcs204","title":"Edgecore DCS204","text":"

      Profile Name (to use in switch.spec.profile): edgecore-dcs204

      Other names: Edgecore AS7726-32X

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/25 25 
Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/32 32 Direct 100G 40G, 100G E1/33 33 Direct 10G 1G, 10G E1/34 34 Direct 10G 1G, 10G"},{"location":"reference/profiles/#edgecore-dcs501","title":"Edgecore DCS501","text":"

      Profile Name (to use in switch.spec.profile): edgecore-dcs501

      Other names: Edgecore AS7712-32X

      Supported features:

      • Subinterfaces: false
      • VXLAN: false
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/25 25 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/32 32 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G"},{"location":"reference/profiles/#edgecore-eps203","title":"Edgecore EPS203","text":"

      Profile Name (to use in switch.spec.profile): edgecore-eps203

      Other names: Edgecore AS4630-54NPE

      Supported features:

      • Subinterfaces: false
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/2 2 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/3 3 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/4 4 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/5 5 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/6 6 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/7 7 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/8 8 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/9 9 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/10 10 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/11 11 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/12 12 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/13 13 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/14 14 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/15 15 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/16 16 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/17 17 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/18 18 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/19 19 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/20 20 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/21 21 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/22 22 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/23 23 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/24 24 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/25 25 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/26 26 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/27 27 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/28 28 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/29 29 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/30 30 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/31 
31 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/32 32 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/33 33 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/34 34 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/35 35 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/36 36 Direct 2.5G 1G, 2.5G, AutoNeg supported (default: true) E1/37 37 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/38 38 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/39 39 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/40 40 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/41 41 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/42 42 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/43 43 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/44 44 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/45 45 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/46 46 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/47 47 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/48 48 Direct 10G 1G, 10G, AutoNeg supported (default: true) E1/49 49 Direct 25G 1G, 10G, 25G E1/50 50 Direct 25G 1G, 10G, 25G E1/51 51 Direct 25G 1G, 10G, 25G E1/52 52 Direct 25G 1G, 10G, 25G E1/53 53 Direct 100G 40G, 100G E1/54 54 Direct 100G 40G, 100G"},{"location":"reference/profiles/#supermicro-sse-c4632sb","title":"Supermicro SSE-C4632SB","text":"

      Profile Name (to use in switch.spec.profile): supermicro-sse-c4632sb

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/25 25 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/32 32 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#virtual-switch","title":"Virtual Switch","text":"

      Profile Name (to use in switch.spec.profile): vs
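For illustration, a Switch object referencing this profile might look like the following sketch. Only `spec.profile: vs` comes from this page; the API group/version, object name, and other fields shown are assumptions and may differ in your release:

```yaml
# Hypothetical Switch object referencing the "vs" profile.
apiVersion: wiring.githedgehog.com/v1beta1   # assumed API group/version
kind: Switch
metadata:
  name: leaf-01          # hypothetical switch name
  namespace: default
spec:
  profile: vs            # profile name from this section
```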

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: false

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 2 25G 10G, 25G E1/6 6 Port Group 2 25G 10G, 25G E1/7 7 Port Group 2 25G 10G, 25G E1/8 8 Port Group 2 25G 10G, 25G E1/9 9 Port Group 3 25G 10G, 25G E1/10 10 Port Group 3 25G 10G, 25G E1/11 11 Port Group 3 25G 10G, 25G E1/12 12 Port Group 3 25G 10G, 25G E1/13 13 Port Group 4 25G 10G, 25G E1/14 14 Port Group 4 25G 10G, 25G E1/15 15 Port Group 4 25G 10G, 25G E1/16 16 Port Group 4 25G 10G, 25G E1/17 17 Port Group 5 25G 10G, 25G E1/18 18 Port Group 5 25G 10G, 25G E1/19 19 Port Group 5 25G 10G, 25G E1/20 20 Port Group 5 25G 10G, 25G E1/21 21 Port Group 6 25G 10G, 25G E1/22 22 Port Group 6 25G 10G, 25G E1/23 23 Port Group 6 25G 10G, 25G E1/24 24 Port Group 6 25G 10G, 25G E1/25 25 Port Group 7 25G 10G, 25G E1/26 26 Port Group 7 25G 10G, 25G E1/27 27 Port Group 7 25G 10G, 25G E1/28 28 Port Group 7 25G 10G, 25G E1/29 29 Port Group 8 25G 10G, 25G E1/30 30 Port Group 8 25G 10G, 25G E1/31 31 Port Group 8 25G 10G, 25G E1/32 32 Port Group 8 25G 10G, 25G E1/33 33 Port Group 9 25G 10G, 25G E1/34 34 Port Group 9 25G 10G, 25G E1/35 35 Port Group 9 25G 10G, 25G E1/36 36 Port Group 9 25G 10G, 25G E1/37 37 Port Group 10 25G 10G, 25G E1/38 38 Port Group 10 25G 10G, 25G E1/39 39 Port Group 10 25G 10G, 25G E1/40 40 Port Group 10 25G 10G, 25G E1/41 41 Port Group 11 25G 10G, 25G E1/42 42 Port Group 11 25G 10G, 25G E1/43 43 Port Group 11 25G 10G, 25G E1/44 44 Port Group 11 25G 10G, 25G E1/45 45 Port Group 12 25G 10G, 25G E1/46 46 Port Group 12 25G 10G, 25G E1/47 47 Port Group 12 25G 10G, 25G E1/48 48 Port Group 12 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/52 52 Breakout 
1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/54 54 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/56 56 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G"},{"location":"release-notes/","title":"Release notes","text":""},{"location":"release-notes/#beta-1","title":"Beta-1","text":""},{"location":"release-notes/#device-support","title":"Device support","text":"
      • Celestica DS4000 as a spine
      "},{"location":"release-notes/#sonic","title":"SONiC","text":"
      • Broadcom SONiC 4.4.0 support
      "},{"location":"release-notes/#fabric-provisioning-management","title":"Fabric provisioning, management","text":"
      • Out-of-band management network connectivity
      • Support for in-band management network connectivity, chain boot, and front-panel boot is deprecated until further notice
      • Automatic zero-touch switch provisioning (ZTP) based on the serial number or the MAC address of the first management interface
      • Full support for airgap installations and upgrades by default
      • Self-contained USB image generation for control node installation
      • Automated in-place upgrades for control node(s) moving forward
      "},{"location":"release-notes/#api","title":"API","text":"
      • API version v1beta1
      • Guaranteed backward compatibility moving forward
      "},{"location":"release-notes/#alpha-7","title":"Alpha-7","text":""},{"location":"release-notes/#device-support_1","title":"Device Support","text":"

      New devices supported by the fabric:

      • Clos Spine

        • Celestica DS3000
        • Edgecore AS7712-32X-EC
        • Supermicro SSE-C4632SB
      • Clos Leaf

        • Celestica DS3000
        • Supermicro SSE-C4632SB
      • Collapsed Core ToR

        • Celestica DS3000
        • Supermicro SSE-C4632SB
      "},{"location":"release-notes/#switchprofiles","title":"SwitchProfiles","text":"
      • Metadata describing switch capabilities, feature capacities, and resource-name mappings.
      • Switch profiles provide normalized name/ID mapping, validation, and internal resource management.
      • Switch profiles are mandatory: each switch model must have a corresponding switch profile to be supported by the fabric.
      • Each switch defined in the wiring diagram must reference its switch profile.
      • Detailed overview
      • Catalog of switch profiles
      "},{"location":"release-notes/#new-universal-port-naming-scheme","title":"New Universal Port Naming Scheme","text":"
      • E<asic>/<port>/<breakout> or M<port>
      • Enabled via switch profiles
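For illustration, names under this scheme look like the following (the specific port numbers are hypothetical):

```
E1/1      ASIC 1, front-panel port 1
E1/55/2   second breakout port of front-panel port 55
M1        management port
```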
      "},{"location":"release-notes/#improved-per-switch-modelplatform-validation","title":"Improved per switch-model/platform validation","text":"
      • Enabled via switch profiles
      "},{"location":"release-notes/#vpc","title":"VPC","text":"
      • It's now possible to explicitly specify a gateway to use in VPC subnets
      • StaticExternal now supports default routes
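A minimal sketch of an explicit subnet gateway, based only on the bullet above; the API group/version, field names, and addresses are assumptions, not taken from this page:

```yaml
# Hypothetical VPC with an explicitly chosen subnet gateway.
apiVersion: vpc.githedgehog.com/v1beta1  # assumed API group/version
kind: VPC
metadata:
  name: vpc-1                            # hypothetical name
spec:
  subnets:
    default:
      subnet: 10.0.1.0/24                # hypothetical prefix
      gateway: 10.0.1.254                # explicit gateway instead of the default
```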
      "},{"location":"release-notes/#inspection-cli","title":"Inspection CLI","text":"

      CLI commands are intended for navigating fabric configuration and state, and allow introspection of dependencies and cross-domain checks:

      • Fabric (overall control nodes and switches overview incl. status, serials, etc.)
      • Switch (status, used ports, counters, etc.)
      • Switch port (connection it is used in, if any, counters, VPC and External attachments, etc.)
      • Server (connection if used in one, VPC attachments, etc.)
      • Connection (incl. VPC and External attachments, Loopback Workaround usage, etc.)
      • VPC/VPCSubnet (incl. where it is attached and what is reachable from it)
      • IP Address (incl. IPv4Namespace, VPCSubnet and DHCPLease or External/StaticExternal usage)
      • MAC Address (incl. switch ports and DHCP leases)
      • Access between a pair of IPs, server names, or VPCSubnets (everything except external IPs is translated to VPCSubnets)
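Since the fabric state is exposed through the Kubernetes API, the same objects can also be listed with plain kubectl. A hedged sketch — the fully qualified resource names below are assumptions derived from the API kinds mentioned in these notes:

```shell
# List fabric objects through the Kubernetes API (resource names assumed)
kubectl get switches.wiring.githedgehog.com -A
kubectl get connections.wiring.githedgehog.com -A
kubectl get vpcs.vpc.githedgehog.com -A
```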
      "},{"location":"release-notes/#observability","title":"Observability","text":"
      • Example Grafana Dashboards added to the docs
      • Syslog (/var/log/syslog) can now be collected from all switches and forwarded to Loki targets
      "},{"location":"release-notes/#bug-fixes","title":"Bug Fixes","text":"
      • Fixed: Restricted subnet isn't accessible from other subnets of the same VPC
      "},{"location":"release-notes/#alpha-6","title":"Alpha-6","text":""},{"location":"release-notes/#observability_1","title":"Observability","text":""},{"location":"release-notes/#telemetry-prometheus-exporter","title":"Telemetry - Prometheus Exporter","text":"
      • Hedgehog Fabric Control Plane Agents on switches function as Prometheus Exporters

      • Telemetry data provided by Broadcom SONiC is now supported:

        • port and interface status and counters
        • transceiver state
        • environmental information (temperature, fans, PSUs, etc.)
        • BGP state and counters
      • Export to Prometheus using the Prometheus Remote-Write API, or to any compatible platform
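Because the agents push metrics over the standard Prometheus Remote-Write API, any receiver implementing that API can accept the stream. As one hedged example, a stock Prometheus server can act as the receiver using its built-in remote-write endpoint (a standard Prometheus flag; the listen address is an assumption):

```shell
# Enable the /api/v1/write remote-write receiver on a stock Prometheus server
prometheus --web.enable-remote-write-receiver --web.listen-address=:9090
```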

      "},{"location":"release-notes/#logging","title":"Logging","text":"
      • Grafana Alloy is supported as a certified logging agent that is installed and managed by the Fabric

      • Data collected

        • Agent logs
        • Agent, switch, and host-level metrics
      • Export to API-compliant platforms and products such as Prometheus, Loki, Grafana Cloud, or any LGTM stack

      "},{"location":"release-notes/#agent-status-api-enhancements","title":"Agent Status API Enhancements","text":"
      • Ports status and counters
      • Port breakout status and counters
      • Transceiver status and counters
      • Environmental and platform information
      • LLDP neighbors
      "},{"location":"release-notes/#networking-enhancements","title":"Networking enhancements","text":"
      • Multiple direct control links per switch are now supported
      • Custom static routes can now be installed into a VPC using the API
      • An ExternalAttachment can now be configured without a VLAN
      "},{"location":"release-notes/#other-improvements","title":"Other improvements","text":"
      • PXE boot with HTTP
      • The hhfab and hhfctl (kubectl plugin) binaries are now published for Linux/macOS amd64/arm64
      • Switch users can now be configured as part of installation preparation (username, password hash, role, and public keys)
      "},{"location":"release-notes/#bugs-fixed","title":"Bugs fixed","text":"
      • DHCP service could assign the same IP multiple times if restarted in between
      • Remote peering was configured as local peering
      "},{"location":"release-notes/#alpha-5","title":"Alpha-5","text":""},{"location":"release-notes/#open-source","title":"Open Source","text":"
      • Apache License 2.0
      • The main repos are public:
        • Fabric
        • Fabricator
        • Das-boot
        • Toolbox
        • Docs
      • Items not open-sourced:
        • HONIE with front panel booting support
      "},{"location":"release-notes/#dhcppxe-boot-support-for-multi-homed-connections","title":"DHCP/PXE boot support for multi-homed connections","text":"
      • PXE URL support for on-demand DHCP service
      • LACP fallback on MCLAG and ESLAG links allows the use of one of the links without a host-level bond
      "},{"location":"release-notes/#improvements","title":"Improvements","text":"
      • Native VLAN support for server-facing connections
      • Extended wiring validation at hhfab init/build time
      • External peering failover in case of using remote peering on the same switches as external connectivity
      "},{"location":"release-notes/#alpha-4","title":"Alpha-4","text":""},{"location":"release-notes/#documentation","title":"Documentation","text":"
      • Fabric API reference
      "},{"location":"release-notes/#host-connectivity-dual-homing-improvements","title":"Host connectivity dual homing improvements","text":"
      • ESI for VXLAN-based BGP EVPN
      • Support in Fabric and VLAB
      • Host connectivity Redundancy Groups
      • Groups LEAF switches to provide multi-homed connectivity to the Fabric
      • 2-4 switches per group
      • Support for MCLAG and ESLAG (EVPN MH / ESI)
      • A single redundancy group can only support multi-homing of one type (ESLAG or MCLAG)
      • Multiple types of redundancy groups can be used in the fabric simultaneously
      "},{"location":"release-notes/#improved-vpc-security-policy-better-zero-trust","title":"Improved VPC security policy - better Zero Trust","text":"
      • Inter-VPC
        • Allow inter-VPC and external peering with per subnet control
      • Intra-VPC intra-subnet policies
        • Isolated Subnets
          • subnets isolated by default from other subnets in the VPC
          • require a user-defined explicitly permit list to allow communications to other subnets within the VPC
          • can be set on individual subnets within VPC or per entire VPC - off by default
          • Inter-VPC and external peering configurations are not affected and work the same as before
        • Restricted Subnets
          • Hosts within a subnet have no mutual reachability
          • Hosts within a subnet can be reached by members of other subnets or peered VPCs as specified by the policy
          • Inter-VPC and external peering configurations are not affected and work the same as before
        • Permit Lists
          • Intra-VPC Permit Lists govern connectivity between subnets within the VPC for isolated subnets
          • Inter-VPC Permit Lists govern which subnets of one VPC have access to some subnets of the other VPC for finer-grained control of inter-VPC and external peering
      "},{"location":"release-notes/#static-external-connection","title":"Static External Connection","text":"
      • Allows access between hosts within the VPC and devices attached to a switch with user-defined static routes
      "},{"location":"release-notes/#internal-improvements","title":"Internal Improvements","text":"
      • A new, more reliable automated ID allocation system
      • Extra validation of object lifecycle (e.g., object-in-use removal validation)
      "},{"location":"release-notes/#known-issues","title":"Known Issues","text":"
      • External Peering Failover
        • Conditions: ExternalPeering is specified for the VPC, and the same VPC has Border Leaf VPCPeering
        • Issue: Detaching ExternalPeering may cause VPCPeering on the Border Leaf group to stop working
        • Workaround: VPCPeering on the Border Leaf group should be recreated
      "},{"location":"release-notes/#alpha-3","title":"Alpha-3","text":""},{"location":"release-notes/#sonic-support","title":"SONiC support","text":"
      • Broadcom Enterprise SONiC 4.2.0 (previously 4.1.1)
      "},{"location":"release-notes/#multiple-ipv4-namespaces","title":"Multiple IPv4 namespaces","text":"
      • Support for multiple overlapping IPv4 addresses in the Fabric
      • Integrated with on-demand DHCP Service (see below)
      • All IPv4 addresses within a given VPC must be unique
      • Only VPCs with non-overlapping IPv4 subnets can peer within the Fabric
      • An external NAT device is required for peering of VPCs with overlapping subnets
      "},{"location":"release-notes/#hedgehog-fabric-dhcp-and-ipam-service","title":"Hedgehog Fabric DHCP and IPAM Service","text":"
      • Custom DHCP server executing in the controllers
      • Multiple IPv4 namespaces with overlapping subnets
      • Multiple VLAN namespaces with overlapping VLAN ranges
      • DHCP leases exposed through the Fabric API
      • Available for VLAB as well as the Fabric
      "},{"location":"release-notes/#hedgehog-fabric-ntp-service","title":"Hedgehog Fabric NTP Service","text":"
      • Custom NTP servers at the controller
      • Switches automatically configured to use control node as NTP server
      • NTP servers can be configured to sync to external time/NTP server
      "},{"location":"release-notes/#staticexternal-connections","title":"StaticExternal connections","text":"
      • Directly connect external infrastructure services (such as NTP, DHCP, DNS) to the Fabric
      • No BGP is required, just automatically configured static routes
      "},{"location":"release-notes/#dhcp-relay-to-3rd-party-dhcp-service","title":"DHCP Relay to 3rd party DHCP service","text":"

      Support for 3rd party DHCP server (DHCP Relay config) through the API

      "},{"location":"release-notes/#alpha-2","title":"Alpha-2","text":""},{"location":"release-notes/#controller","title":"Controller","text":"

      A single controller. No controller redundancy.

      "},{"location":"release-notes/#controller-connectivity","title":"Controller connectivity","text":"

      For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.

      Spine controller connectivity is not supported.

      For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.

      Management port connectivity can also be supported for CLOS/LEAF-SPINE topology, but requires all switches to be connected to the controllers via management ports. No chain booting is possible for this configuration.

      "},{"location":"release-notes/#controller-requirements","title":"Controller requirements","text":"
      • One 1 gig+ port to connect to each controller-attached switch
      • One or more 1 gig+ ports connecting to the external management network.
      • 4 Cores, 12GB RAM, 100GB SSD.
      "},{"location":"release-notes/#chain-booting","title":"Chain booting","text":"

      Switches not directly connecting to the controllers can chain boot via the data network.

      "},{"location":"release-notes/#topology-support","title":"Topology support","text":"

      CLOS/LEAF-SPINE and Collapsed Core topologies are supported.

      "},{"location":"release-notes/#leaf-roles-for-clos-topology","title":"LEAF Roles for CLOS topology","text":"

      Server leaf, border leaf, and mixed leaf modes are supported.

      "},{"location":"release-notes/#collapsed-core-topology","title":"Collapsed Core Topology","text":"

      Two ToR/LEAF switches with MCLAG server connection.

      "},{"location":"release-notes/#server-multihoming","title":"Server multihoming","text":"

      MCLAG-only.

      "},{"location":"release-notes/#device-support_2","title":"Device support","text":""},{"location":"release-notes/#leafs","title":"LEAFs","text":"
      • DELL:

        • S5248F-ON
        • S5232F-ON
      • Edge-Core:

        • DCS204 (AS7726-32X)
        • DCS203 (AS7326-56X)
        • EPS203 (AS4630-54NPE)
      "},{"location":"release-notes/#spines","title":"SPINEs","text":"
      • DELL:
        • S5232F-ON
      • Edge-Core:
        • DCS204 (AS7726-32X)
      "},{"location":"release-notes/#underlay-configuration","title":"Underlay configuration:","text":"

      Port speed, port group speed, and port breakouts are configurable through the API.

      "},{"location":"release-notes/#vpc-overlay-implementation","title":"VPC (overlay) Implementation","text":"

      VXLAN-based BGP EVPN.

      "},{"location":"release-notes/#multi-subnet-vpcs","title":"Multi-subnet VPCs","text":"

      A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.

      "},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"

      Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.

      "},{"location":"release-notes/#vlan-namespace","title":"VLAN Namespace","text":"

      VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to the VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.

      This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.

      "},{"location":"release-notes/#switch-groups","title":"Switch Groups","text":"

      Each switch belongs to a list of switch groups used for identifying redundancy groups for things like external connectivity.

      "},{"location":"release-notes/#mutual-vpc-peering","title":"Mutual VPC Peering","text":"

      VPC peering is supported and possible between a pair of VPCs that belong to the same IPv4 and VLAN namespaces.

      "},{"location":"release-notes/#external-vpc-peering","title":"External VPC Peering","text":"

      VPC peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of the border or mixed leaf switches. Multiple "external systems" with multiple devices/links in each of them are supported.

      The user controls what subnets/prefixes to import and export from/to the external system.

      No NAT function is supported for external peering.

      "},{"location":"release-notes/#host-connectivity","title":"Host connectivity","text":"

      Servers can be attached as Unbundled, Bundled (LAG), or MCLAG connections.

      "},{"location":"release-notes/#dhcp-service","title":"DHCP Service","text":"

      Each VPC is provided with an optional DHCP service with simple IPAM.

      "},{"location":"release-notes/#local-vpc-peering-loopbacks","title":"Local VPC peering loopbacks","text":"

      To enable local inter-VPC peering that allows routing of traffic between VPCs, local loopbacks are required to overcome silicon limitations.

      "},{"location":"release-notes/#scale","title":"Scale","text":"
      • Maximum fabric size: 20 LEAF/ToR switches
      • Routes per switch: 64k
        • [ silicon platform limitation in Trident 3; limits the number of endpoints in the fabric ]
      • Total VPCs per switch: up to 1000
        • [ including VPCs attached at the given switch and VPCs peered with ]
      • Total VPCs per VLAN namespace: up to 3000
        • [ assuming 1 subnet per VPC ]
      • Total VPCs per fabric: unlimited
        • [ if using multiple VLAN namespaces ]
      • VPC subnets per switch: up to 3000
      • VPC subnets per VLAN namespace: up to 3000
      • Subnets per VPC: up to 20
        • [ just a validation; the current design allows up to 100, and it could be increased further in the future ]
      • VPC slots per remote peering per switch: 2
      • Max VPC loopbacks per switch: 500
        • [ VPC loopback workarounds per switch are needed for local peering when both VPCs are attached to the switch, or for external peering with a VPC attached on the same switch that is peering with an external ]
      "},{"location":"release-notes/#software-versions","title":"Software versions","text":"
      • Fabric: v0.23.0
      • Das-boot: v0.11.4
      • Fabricator: v0.8.0
      • K3s: v1.27.4-k3s1
      • Zot: v1.4.3
      • SONiC
      • Broadcom Enterprise Base 4.1.1
      • Broadcom Enterprise Campus 4.1.1
      "},{"location":"release-notes/#known-limitations","title":"Known Limitations","text":"
      • MTU setting inflexibility:
        • Fabric MTU is 9100 and not configurable right now (A3 planned)
        • Server-facing MTU is 9136 and not configurable right now (A3+)
      • No support for Access VLANs for attaching servers (A3 planned)
      • VPC peering is enabled on all subnets of the participating VPCs; no subnet selection for peering (A3 planned)
      • Peering with external is only possible with a VLAN (by design)
      • If you have VPCs with remote peering on a switch group, you can't attach those VPCs on that switch group (by definition of remote peering)
      • If a group of VPCs has remote peering on a switch group, any other VPC that will peer with those VPCs remotely needs to use the same switch group (by design)
      • If a VPC peers with an external, it can only be remotely peered with on the same switches that have a connection to that external (by design)
      • The server-facing connection object is immutable, as it's very easy to get into a deadlock; re-create it to change it (A3+)
      "},{"location":"release-notes/#alpha-1","title":"Alpha-1","text":"
      • Controller:

        • A single controller connecting to each switch management port. No redundancy.
      • Controller requirements:

        • One 1 gig port per switch
        • One or more 1 gig+ ports connecting to the external management network.
        • 4 Cores, 12GB RAM, 100GB SSD.
      • Seeder:

        • Seeder and Controller functions co-resident on the control node. Switch booting and ZTP on management ports directly connected to the controller.
      • HHFab - the fabricator:

        • An operational tool to generate, initiate, and maintain the fabric software appliance. Allows fabrication of the environment-specific image with all of the required underlay and security configuration baked in.
      • DHCP Service:

        • A simple DHCP server for assigning IP addresses to hosts connecting to the fabric, optimized for use with VPC overlay.
      • Topology:

        • Support for a Collapsed Core topology with 2 switch nodes.
      • Underlay:

        • A simple single-VRF network with a BGP control plane. IPv4 support only.
      • External connectivity:

        • An edge router must be connected to selected ports of one or both switches. IPv4 support only.
      • Dual-homing:

        • L2 Dual homing with MCLAG is implemented to connect servers, storage, and other devices in the data center. NIC bonding and LACP configuration at the host are required.
      • VPC overlay implementation:

        • VPC is implemented as a set of ACLs within the underlay VRF. External connectivity to the VRF is performed via internally managed VLANs. IPv4 support only.
      • VPC Peering:

        • VPC peering is performed via ACLs with no fine-grained control.
      • NAT

        • DNAT + SNAT are supported per VPC. SNAT and DNAT can't be enabled per VPC simultaneously.
      • Hardware support:

        • Please see the supported hardware list.
      • Virtual Lab:

        • A simulation of the two-node Collapsed Core Topology as a virtual environment. Designed for use as a network simulation, a configuration scratchpad, or a training/demonstration tool. Minimum requirements: 8 cores, 24GB RAM, 100GB SSD
      • Limitations:

        • 40 VPCs max
        • 50 VPC peerings
        • [ 768 ACL entry platform limitation from Broadcom ]
      • Software versions:

        • Fabricator: v0.5.2
        • Fabric: v0.18.6
        • Das-boot: v0.8.2
        • K3s: v1.27.4-k3s1
        • Zot: v1.4.3
        • SONiC: Broadcom Enterprise Base 4.1.1
      "},{"location":"troubleshooting/overview/","title":"Troubleshooting","text":"

      Under construction.

      "},{"location":"user-guide/connections/","title":"Connections","text":"

      Connection objects represent logical and physical connections between the devices in the Fabric (Switch, Server and External objects) and are needed to define all the connections in the Wiring Diagram.

      All connections reference switch or server ports. Only port names defined by switch profiles can be used in the wiring diagram for the switches. NOS (or any other) port names aren't supported. Currently, server ports aren't validated by the Fabric API other than for uniqueness. See the Switch Profiles and Port Naming section for more details.
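      For illustration, a port reference in a Connection combines the device name and the profile-defined port name; the device and port names below are taken from the examples on this page (a sketch of the reference format, not a complete object):

```yaml
# Valid: profile-defined switch port name, as in the examples below
switch:
  port: s5248-02/Ethernet3   # <switch-name>/<port-name-from-SwitchProfile>
server:
  port: server-4/enp2s1      # server ports are only validated for uniqueness
# Breakout-style names such as E1/55 also come from the SwitchProfile
```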

      There are several types of connections.

      "},{"location":"user-guide/connections/#workload-server-connections","title":"Workload server connections","text":"

      Server connections are used to connect workload servers to switches.

      "},{"location":"user-guide/connections/#unbundled","title":"Unbundled","text":"

      Unbundled server connections are used to connect servers to a single switch using a single port.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-4--unbundled--s5248-02\n  namespace: default\nspec:\n  unbundled:\n    link: # Defines a single link between a server and a switch\n      server:\n        port: server-4/enp2s1\n      switch:\n        port: s5248-02/Ethernet3\n
      "},{"location":"user-guide/connections/#bundled","title":"Bundled","text":"

      Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-3--bundled--s5248-01\n  namespace: default\nspec:\n  bundled:\n    links: # Defines multiple links between a single server and a single switch\n    - server:\n        port: server-3/enp2s1\n      switch:\n        port: s5248-01/Ethernet3\n    - server:\n        port: server-3/enp2s2\n      switch:\n        port: s5248-01/Ethernet4\n
      "},{"location":"user-guide/connections/#mclag","title":"MCLAG","text":"

      MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing). Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection with type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # Defines multiple links between a single server and a pair of switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
      "},{"location":"user-guide/connections/#eslag","title":"ESLAG","text":"

      ESLAG server connections are used to connect servers to 2-4 switches using multiple ports (multi-homing). Switches should belong to the same redundancy group with type eslag, but contrary to the MCLAG case, no other configuration is required.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-1--eslag--s5248-01--s5248-02\n  namespace: default\nspec:\n  eslag:\n    links: # Defines multiple links between a single server and 2-4 switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
      "},{"location":"user-guide/connections/#switch-connections-fabric-facing","title":"Switch connections (fabric-facing)","text":"

      Switch connections are used to connect switches to each other and provide any needed \"service\" connectivity to implement the Fabric features.

      "},{"location":"user-guide/connections/#fabric","title":"Fabric","text":"

      A Fabric Connection is used between a specific pair of spine and leaf switches, representing all of the wires between them.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5232-01--fabric--s5248-01\n  namespace: default\nspec:\n  fabric:\n    links: # Defines multiple links between a spine-leaf pair of switches with IP addresses\n    - leaf:\n        ip: 172.30.30.1/31\n        port: s5248-01/Ethernet48\n      spine:\n        ip: 172.30.30.0/31\n        port: s5232-01/Ethernet0\n    - leaf:\n        ip: 172.30.30.3/31\n        port: s5248-01/Ethernet56\n      spine:\n        ip: 172.30.30.2/31\n        port: s5232-01/Ethernet4\n
      "},{"location":"user-guide/connections/#mclag-domain","title":"MCLAG-Domain","text":"

      MCLAG-Domain connections define a pair of MCLAG switches with a Session and Peer link between them. Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection with type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-01--mclag-domain--s5248-02\n  namespace: default\nspec:\n  mclagDomain:\n    peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link\n    - switch1:\n        port: s5248-01/Ethernet72\n      switch2:\n        port: s5248-02/Ethernet72\n    - switch1:\n        port: s5248-01/Ethernet73\n      switch2:\n        port: s5248-02/Ethernet73\n    sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link\n    - switch1:\n        port: s5248-01/Ethernet74\n      switch2:\n        port: s5248-02/Ethernet74\n    - switch1:\n        port: s5248-01/Ethernet75\n      switch2:\n        port: s5248-02/Ethernet75\n
      "},{"location":"user-guide/connections/#vpc-loopback","title":"VPC-Loopback","text":"

      VPC-Loopback connections are required in order to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), needed due to a hardware limitation of the currently supported switches.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-01--vpc-loopback\n  namespace: default\nspec:\n  vpcLoopback:\n    links: # Defines multiple loopbacks on a single switch\n    - switch1:\n        port: s5248-01/Ethernet16\n      switch2:\n        port: s5248-01/Ethernet17\n    - switch1:\n        port: s5248-01/Ethernet18\n      switch2:\n        port: s5248-01/Ethernet19\n
      "},{"location":"user-guide/connections/#connecting-fabric-to-the-outside-world","title":"Connecting Fabric to the outside world","text":"

      Connections in this section provide connectivity to the outside world. For example, they can be connections to the Internet, to other networks, or to some other systems such as DHCP, NTP, LMA, or AAA services.

      "},{"location":"user-guide/connections/#staticexternal","title":"StaticExternal","text":"

      StaticExternal connections provide a simple way to connect things like DHCP servers directly to the Fabric by connecting them to specific switch ports.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: third-party-dhcp-server--static-external--s5248-04\n  namespace: default\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: s5248-04/Ethernet1 # Switch port to use\n        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port\n        vlan: 1005 # Optional VLAN ID to use for the switch port; if 0, no VLAN is configured\n        subnets: # List of subnets to route to the switch port using static routes and next hop\n          - 10.99.0.1/24\n          - 10.199.0.100/32\n        nextHop: 172.30.50.1 # Next hop IP address to use when configuring static routes for the \"subnets\" list\n

      Additionally, it's possible to configure StaticExternal within the VPC to provide access to the third-party resources within a specific VPC, with the rest of the YAML configuration remaining unchanged.

      ...\nspec:\n  staticExternal:\n    withinVPC: vpc-1 # VPC name to attach the static external to\n    link:\n      ...\n
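      Putting the two fragments together, a complete StaticExternal attached within a VPC could look like the following; all values are reused from the example above, and the VPC name vpc-1 is illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: third-party-dhcp-server--static-external--s5248-04
  namespace: default
spec:
  staticExternal:
    withinVPC: vpc-1            # VPC to attach the static external to
    link:
      switch:
        port: s5248-04/Ethernet1 # Switch port to use
        ip: 172.30.50.5/24       # IP address assigned to the switch port
        vlan: 1005               # Optional VLAN ID; if 0, no VLAN is configured
        subnets:                 # Subnets routed to the port via static routes
        - 10.99.0.1/24
        - 10.199.0.100/32
        nextHop: 172.30.50.1     # Next hop for the static routes
```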
      "},{"location":"user-guide/connections/#external","title":"External","text":"

      External connections are used to connect to external systems, such as edge/provider routers, using BGP peering. They allow configuring inbound/outbound communities as well as granularly controlling what gets advertised and which routes are accepted.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-03--external--5835\n  namespace: default\nspec:\n  external:\n    link: # Defines a single link between a switch and an external system\n      switch:\n        port: s5248-03/Ethernet3\n
      "},{"location":"user-guide/devices/","title":"Switches and Servers","text":"

      All devices in a Hedgehog Fabric are divided into two groups: switches and servers, represented by the corresponding Switch and Server objects in the API. These objects are needed to define all of the participants of the Fabric and their roles in the Wiring Diagram, together with Connection objects (see Connections).

      "},{"location":"user-guide/devices/#switches","title":"Switches","text":"

      Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API. These objects consist of basic metadata like name, description, role, serial, management port mac, as well as port group speeds, port breakouts, ASN, IP addresses, and more. Additionally, a Switch contains a reference to a SwitchProfile object that defines the switch model and capabilities. More details can be found in the Switch Profiles and Port Naming section.

      In order for the Fabric to manage a switch, either the serial or the mac needs to be defined in the YAML document.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  boot: # at least one of the serial or mac needs to be defined\n    serial: XYZPDQ1234\n    mac: 00:11:22:33:44:55 # Usually the first management port MAC address\n  profile: dell-s5248f-on # Mandatory reference to the SwitchProfile object defining the switch model and capabilities\n  asn: 65101 # ASN of the switch\n  description: leaf-1\n  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node\n  portBreakouts: # Configures port breakouts for the switch, see the SwitchProfile for available options\n    E1/55: 4x25G\n  portGroupSpeeds: # Configures port group speeds for the switch, see the SwitchProfile for available options\n    \"1\": 10G\n    \"2\": 10G\n  portSpeeds: # Configures port speeds for the switch, see the SwitchProfile for available options\n    E1/1: 25G\n  protocolIP: 172.30.11.100/32 # Used as BGP router ID\n  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf\n  vlanNamespaces: # Defines which VLANs could be used to attach servers\n  - default\n  vtepIP: 172.30.12.100/32\n  groups: # Defines which groups the switch belongs to, by referring to SwitchGroup objects\n  - some-group\n  redundancy: # Optional field to define that switch belongs to the redundancy group\n    group: eslag-1 # Name of the redundancy group\n    type: eslag # Type of the redundancy group, one of mclag or eslag\n

      The SwitchGroup is just a marker at that point and doesn't have any configuration options.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: border\n  namespace: default\nspec: {}\n
      "},{"location":"user-guide/devices/#redundancy-groups","title":"Redundancy Groups","text":"

      Redundancy groups are used to define the redundancy between switches. A redundancy group is a regular SwitchGroup used by multiple switches; currently it can be of type MCLAG or ESLAG (EVPN MH / ESI). A switch can only belong to a single redundancy group.

      MCLAG is only supported for pairs of switches and ESLAG is supported for up to 4 switches. Multiple types of redundancy groups can be used in the fabric simultaneously.

      Connections with types mclag and eslag are used to define server connections to switches. They are only supported if the switch belongs to a redundancy group of the corresponding type.

      In order to define an MCLAG or ESLAG redundancy group, create a SwitchGroup object and assign it to the switches using the redundancy field.

      Example of a switch configured for ESLAG:

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: eslag-1\n  namespace: default\nspec: {}\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-03\n  namespace: default\nspec:\n  ...\n  redundancy:\n    group: eslag-1\n    type: eslag\n  ...\n

      An example of a switch configured for MCLAG:

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: mclag-1\n  namespace: default\nspec: {}\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  ...\n  redundancy:\n    group: mclag-1\n    type: mclag\n  ...\n

      In the case of MCLAG, a special connection with type mclag-domain is required to define the peer and session links between the switches. For more details, see Connections.
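      As a reminder, a minimal mclag-domain Connection has the following shape, mirroring the example in the Connections section; switch and port names are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: s5248-01--mclag-domain--s5248-02
  namespace: default
spec:
  mclagDomain:
    peerLinks:       # links forming the MCLAG Peer link
    - switch1:
        port: s5248-01/Ethernet72
      switch2:
        port: s5248-02/Ethernet72
    sessionLinks:    # links forming the MCLAG Session link
    - switch1:
        port: s5248-01/Ethernet74
      switch2:
        port: s5248-02/Ethernet74
```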

      "},{"location":"user-guide/devices/#servers","title":"Servers","text":"

      Regular workload server:

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Server\nmetadata:\n  name: server-1\n  namespace: default\nspec:\n  description: MH s5248-01/E1 s5248-02/E1\n
      "},{"location":"user-guide/external/","title":"External Peering","text":"

      Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and provide L3 connectivity. The External Peering feature allows you to set up an external peering endpoint and to enforce several policies between internal and external endpoints.

      Note

      Hedgehog Fabric does not operate Edge side devices.

      "},{"location":"user-guide/external/#overview","title":"Overview","text":"

      Traffic exits from the Fabric on Border Leaves that are connected with Edge devices. Border Leaves are suitable to terminate L2VPN connections, to distinguish VPC L3 routable traffic towards Edge devices, and to land VPC servers. Border Leaves (or Borders) can connect to several Edge devices.

      Note

      External Peering is only available on switch devices that support sub-interfaces.

      "},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"

      In order to distinguish VPC traffic, an Edge device should be able to:

      • Set up BGP IPv4 to advertise and receive routes from the Fabric
      • Connect to a Fabric Border Leaf over VLAN
      • Mark egress routes towards the Fabric with BGP Communities
      • Filter ingress routes from the Fabric by BGP Communities

      All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.

      "},{"location":"user-guide/external/#control-plane","title":"Control Plane","text":"

      The Fabric shares VPC routes with Edge devices via BGP. Peering is done over VLAN in IPv4 Unicast AFI/SAFI.

      "},{"location":"user-guide/external/#data-plane","title":"Data Plane","text":"

      VPC L3 routable traffic is tagged with a VLAN and sent to the Edge device. Any further processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.

      "},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"

      Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that the VPC can export to Edge devices and import from the Edge devices.

      "},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"

      General configuration starts with the specification of External objects. Each object of External type can represent a set of Edge devices, a single BGP instance on an Edge device, or any other Edge entity that can be described with the following configuration:

      • Name of External
      • Inbound routes marked with the dedicated BGP community
      • Outbound routes marked with the dedicated community

      Each External should be bound to a VPC IPv4 Namespace; otherwise prefix overlaps may occur.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: External\nmetadata:\n  name: default--5835\nspec:\n  ipv4Namespace: # VPC IP Namespace\n  inboundCommunity: # BGP Standard Community of routes from Edge devices\n  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from Fabric\n
      "},{"location":"user-guide/external/#connection","title":"Connection","text":"

      A Connection of type external is used to identify the switch port on Border leaf that is cabled with an Edge device.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: # specified or generated\nspec:\n  external:\n    link:\n      switch:\n        port: # SwitchName/EthernetXXX\n
      "},{"location":"user-guide/external/#external-attachment","title":"External Attachment","text":"

      An External Attachment defines BGP peering and traffic connectivity between a Border Leaf and an External. Attachments are bound to a Connection of type external and specify an optional VLAN that is used to segregate a particular Edge peering.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: #\nspec:\n  connection: # Name of the Connection with type external\n  external: # Name of the External to pick config\n  neighbor:\n    asn: # Edge device ASN\n    ip: # IP address of Edge device to peer with\n  switch:\n    ip: # IP address on the Border Leaf to set up BGP peering\n    vlan: # VLAN (optional) ID to tag control and data traffic, use 0 for untagged\n

      Several ExternalAttachment objects can be configured for the same Connection, each with a different VLAN.
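      For instance, a second attachment on the same Connection, peering with a different External on its own VLAN, could be sketched as follows (all names, ASNs, and addresses here are hypothetical):

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: border-1--edge-2\nspec:\n  connection: border-1--external--edge # Same Connection as the first attachment\n  external: edge-2 # A different External\n  neighbor:\n    asn: 65299\n    ip: 192.168.200.2\n  switch:\n    ip: 192.168.200.1/24\n    vlan: 200 # Must differ from the VLAN of any other attachment on this Connection\n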

      "},{"location":"user-guide/external/#external-vpc-peering","title":"External VPC Peering","text":"

      To allow a specific VPC to have access to Edge devices, bind the VPC to a specific External object. To do so, define an External Peering object.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalPeering\nmetadata:\n  name: # Name of ExternalPeering\nspec:\n  permit:\n    external:\n      name: # External Name\n      prefixes: # List of prefixes (routes) to be allowed to pick up from External\n      - # IPv4 prefix\n    vpc:\n      name: # VPC Name\n      subnets: # List of VPC subnets name to be allowed to have access to External (Edge)\n      - # Name of the subnet within VPC\n

      prefixes is the list of subnets to permit from the External into the VPC. Each entry matches any prefix length less than or equal to /32, effectively permitting all subprefixes of the specified one. Use 0.0.0.0/0 to permit any route, including the default route.

      This example allows any IPv4 prefix received from the External:

      spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - prefix: 0.0.0.0/0 # Any route will be allowed including default route\n

      This example allows all prefixes within 77.0.0.0/8, with any prefix length:

      spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - prefix: 77.0.0.0/8 # Any route that belongs to the specified prefix is allowed (such as 77.0.0.0/8 or 77.1.2.0/24)\n
      "},{"location":"user-guide/external/#examples","title":"Examples","text":"

      This example shows how to peer a Fabric VPC named vpc-1 with the External object named HedgeEdge, on the Border Leaf switchBorder that has a cable connecting it to an Edge device on the port Ethernet42. Specifying vpc-1 is required for the VPC to receive any prefixes advertised from the External.

      "},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"
      # hhfctl external create --name HedgeEdge --ipns default --in 65102:5000 --out 5000:65102\n
      apiVersion: vpc.githedgehog.com/v1beta1\nkind: External\nmetadata:\n  name: HedgeEdge\n  namespace: default\nspec:\n  inboundCommunity: 65102:5000\n  ipv4Namespace: default\n  outboundCommunity: 5000:65102\n
      "},{"location":"user-guide/external/#connection_1","title":"Connection","text":"

      The Connection should be specified in the wiring diagram.

      ###\n### switchBorder--external--HedgeEdge\n###\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: switchBorder--external--HedgeEdge\nspec:\n  external:\n    link:\n      switch:\n        port: switchBorder/Ethernet42\n
      "},{"location":"user-guide/external/#externalattachment","title":"ExternalAttachment","text":"

      The ExternalAttachment should also be specified in the wiring diagram:

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: switchBorder--HedgeEdge\nspec:\n  connection: switchBorder--external--HedgeEdge\n  external: HedgeEdge\n  neighbor:\n    asn: 65102\n    ip: 100.100.0.6\n  switch:\n    ip: 100.100.0.1/24\n    vlan: 100\n
      "},{"location":"user-guide/external/#externalpeering","title":"ExternalPeering","text":"
      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--HedgeEdge\nspec:\n  permit:\n    external:\n      name: HedgeEdge\n      prefixes:\n      - prefix: 0.0.0.0/0\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n
      "},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"

      Warning

      Hedgehog does not recommend using the following configuration for production. It is only provided as an example of Edge Peer configuration.

      Interface configuration:

      interface Ethernet2.100\n encapsulation dot1q vlan-id 100\n description switchBorder--Ethernet42\n no shutdown\n ip vrf forwarding VrfHedge\n ip address 100.100.0.6/24\n

      BGP configuration:

      !\nrouter bgp 65102 vrf VrfHedge\n log-neighbor-changes\n timers 60 180\n !\n address-family ipv4 unicast\n  maximum-paths 64\n  maximum-paths ibgp 1\n  import vrf VrfPublic\n !\n neighbor 100.100.0.1\n  remote-as 65103\n  !\n  address-family ipv4 unicast\n   activate\n   route-map HedgeIn in\n   route-map HedgeOut out\n   send-community both\n !\n

      Route Map configuration:

      route-map HedgeIn permit 10\n match community Hedgehog\n!\nroute-map HedgeOut permit 10\n set community 65102:5000\n!\n\nbgp community-list standard Hedgehog permit 5000:65102\n
      "},{"location":"user-guide/grafana/","title":"Grafana Dashboards","text":"

      Several dashboards are available for Grafana deployments to monitor the most critical metrics from the switches managed by Hedgehog Fabric. Make sure that you've enabled metrics and logs collection for the switches in the Fabric, as described in the Fabric Config section.

      "},{"location":"user-guide/grafana/#variables","title":"Variables","text":"

      Common variables used in Hedgehog Grafana dashboards:

      • env (Label: Env): label_values(env) - Environment to monitor
      • node (Label: Switch): label_values(hostname) - Switch Name
      • vrf (Label: VRF): label_values(vrf) - VRF name (Multi-value)
      • neighbor (Label: Neighbor): label_values(neighbor) - BGP Neighbor IP address (Multi-value)
      • interface (Label: Interface): label_values(interface) - Switch Interface name as defined in wiring (Multi-value)
      • file (Label: File): label_values(filename) - Name of the log file to inspect (Loki)
      "},{"location":"user-guide/grafana/#switch-critical-resources","title":"Switch Critical Resources","text":"

      This table reports the usage and capacity of the ASIC's programmable resources, such as:

      • ACLs
      • IPv4 Routes
      • IPv4 Nexthops
      • IPv4 Neighbors
      • IPMC Table
      • FDB

      JSON

      "},{"location":"user-guide/grafana/#fabric","title":"Fabric","text":"

      Fabric underlay and external peering monitoring, including reporting on:

      • BGP Neighbors
      • BGP Session state
      • Number of BGP Updates and Prefixes sent/received for each BGP Neighbor
      • Keepalive counters

      JSON

      "},{"location":"user-guide/grafana/#interfaces","title":"Interfaces","text":"

      Switch interface monitoring visualizations, including:

      • Interface Oper/Admin state
      • Total input/output packets counter
      • Input/output PPS/Bits rate
      • Interface utilization
      • Counters for Unicast/Broadcast/Multicast packets
      • Errors and discards counters

      JSON

      "},{"location":"user-guide/grafana/#logs","title":"Logs","text":"

      System and fabric logs:

      • Kernel and BGP logs from Syslog
      • Errors in agent and syslog
      • Full output of defined file

      JSON

      "},{"location":"user-guide/grafana/#platform","title":"Platform","text":"

      Information from PSUs, temperature sensors, and fan trays:

      • Input/output PSU voltage
      • Fan speed
      • Temperature from switch sensors (CPU, PSU, etc.)
      • Optic sensor temperature, for transceivers with DOM

      JSON

      "},{"location":"user-guide/grafana/#node-exporter","title":"Node Exporter","text":"

      Grafana Node Exporter Full is an open-source Grafana dashboard that provides visualizations for monitoring Linux nodes. Here, Node Exporter is used to track SONiC OS's own stats, such as:

      • Memory/disks usage
      • CPU/System utilization
      • Networking stats (traffic that hits SONiC interfaces), and more

      JSON

      "},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"

      This section contains an example of how Hedgehog Fabric can be used with Harvester or any hypervisor on the servers connected to Fabric. It assumes that you have already installed Fabric and have some servers running Harvester attached to it.

      You need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.

      You can create multiple VPCs and attach them to the server Connections to make them available to the VMs in Harvester or any other hypervisor.
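      As a sketch, the objects for a dual-homed Harvester node could look like the following (server, switch, and port names here are hypothetical; adjust them to your wiring):

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Server\nmetadata:\n  name: harvester-1\n  namespace: default\nspec:\n  description: Harvester node, MH leaf-01/E1/5 leaf-02/E1/5\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: harvester-1--mclag--leaf-01--leaf-02\n  namespace: default\nspec:\n  mclag:\n    links:\n      - server:\n          port: harvester-1/enp5s0f0\n        switch:\n          port: leaf-01/E1/5\n      - server:\n          port: harvester-1/enp3s0f1\n        switch:\n          port: leaf-02/E1/5\n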

      "},{"location":"user-guide/harvester/#configure-harvester","title":"Configure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"

      From the \"Cluster Networks/Configs\" side menu, create a new Cluster Network.

      Here is a cleaned-up version of what the CRD looks like:

      apiVersion: network.harvesterhci.io/v1beta1\nkind: ClusterNetwork\nmetadata:\n  name: testnet\n
      "},{"location":"user-guide/harvester/#add-a-network-config","title":"Add a Network Config","text":"

      Click \"Create Network Config\". Add your connections and select the bonding type.

      The resulting CRD (cleaned up) looks like the following:

      apiVersion: network.harvesterhci.io/v1beta1\nkind: VlanConfig\nmetadata:\n  name: testconfig\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\nspec:\n  clusterNetwork: testnet\n  uplink:\n    bondOptions:\n      miimon: 100\n      mode: 802.3ad\n    linkAttributes:\n      txQLen: -1\n    nics:\n      - enp5s0f0\n      - enp3s0f1\n
      "},{"location":"user-guide/harvester/#add-vlan-based-vm-networks","title":"Add VLAN based VM Networks","text":"

      Browse over to \"VM Networks\" and add one network for each VLAN you want to support. Assign them to the cluster network.

      Here is what the CRDs will look like for both VLANs:

      apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1001'\n  name: testnet1001\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1001\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1001,\"ipam\":{}}\n
      apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  name: testnet1000\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1000'\n    #  key: string\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1000\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1000,\"ipam\":{}}\n
      "},{"location":"user-guide/harvester/#using-the-vpcs","title":"Using the VPCs","text":"

      Now you can choose the new VM Networks when creating a VM in Harvester, and have them created as part of the VPC.

      "},{"location":"user-guide/overview/","title":"Overview","text":"

      This chapter gives an overview of the main features of Hedgehog Fabric and their usage.

      "},{"location":"user-guide/profiles/","title":"Switch Profiles and Port Naming","text":""},{"location":"user-guide/profiles/#switch-profiles","title":"Switch Profiles","text":"

      All supported switches have a SwitchProfile that defines the switch model, supported features, and available ports with their supported configurations, such as port groups, speeds, and port breakouts. SwitchProfiles are available in-cluster, and generated documentation for them can be found in the Reference section.

      Each switch used in the wiring diagram should have a SwitchProfile referenced in the spec.profile of the Switch object.

      Switch profile defines what features and ports are available on the switch. Based on the ports data in the profile, it's possible to set port speeds (for non-breakout and non-group ports), port group speeds and port breakout modes in the Switch object in the Fabric API.

      "},{"location":"user-guide/profiles/#port-naming","title":"Port Naming","text":"

      Each switch port is named using one of the following formats:

      • M<management-port-number>

        • <management-port-number> is the management port number, starting from 1 (most switches have a single management port, named M1)
      • E<asic-or-chassis-number>/<port-number>[/<breakout>][.<subinterface>]

        • <asic-or-chassis-number> is the ASIC or chassis number (usually 1, as most switches have a single ASIC or chassis)
        • <port-number> is the port number on the ASIC or chassis, starting from 1
        • optional /<breakout> is the breakout number for the port, starting from 1, only for breakout ports and always consecutive numbers independent of the lanes allocation and other implementation details
        • optional .<subinterface> is the subinterface number for the port

      Examples of port names:

      • M1 - management port
      • E1/1 - port 1 on the ASIC or chassis 1, usually the first port on the switch
      • E1/55/1 - first breakout port of the switch port 55 on the ASIC or chassis 1
      "},{"location":"user-guide/profiles/#available-ports","title":"Available Ports","text":"

      Each switch profile defines a set of ports available on the switch. Ports can be divided into the following types.

      "},{"location":"user-guide/profiles/#directly-configurable-ports","title":"Directly configurable ports","text":"

      Non-breakout and non-group ports. These have a reference to a port profile with default and available speeds, and can be configured by setting the speed in the Switch object in the Fabric API:

      .spec:\n  portSpeeds:\n    E1/1: 25G\n
      "},{"location":"user-guide/profiles/#port-groups","title":"Port groups","text":"

      Ports that belong to a port group; non-breakout and not directly configurable. Each port references its port group, which in turn references a port profile with default and available speeds. The port can't be configured directly; the speed configuration is applied to the whole group in the Switch object in the Fabric API:

      .spec:\n  portGroupSpeeds:\n    \"1\": 10G\n

      This sets the speed of all ports in group 1 to 10G; e.g., if group 1 contains ports E1/1, E1/2, E1/3 and E1/4, all of them are set to 10G.

      "},{"location":"user-guide/profiles/#breakout-ports","title":"Breakout ports","text":"

      Breakout, non-group ports. These have a reference to a port profile with default and available breakout modes, and can be configured by setting the breakout mode in the Switch object in the Fabric API:

      .spec:\n  portBreakouts:\n    E1/55: 4x25G\n

      Configuring a port breakout mode will make \"breakout\" ports available for use in the wiring diagram. The breakout ports are named as E<asic-or-chassis-number>/<port-number>/<breakout>, e.g. E1/55/1, E1/55/2, E1/55/3, E1/55/4 for the example above. Omitting the breakout number is allowed for the first breakout port, e.g. E1/55 is the same as E1/55/1. The breakout ports are always consecutive numbers independent of the lanes allocation and other implementation details.

      "},{"location":"user-guide/shrink-expand/","title":"Fabric Shrink/Expand","text":"

      This section provides a brief overview of how to add or remove switches within the fabric using Hedgehog Fabric API, and how to manage connections between them.

      Manipulating API objects is done with the assumption that target devices are correctly cabled and connected.

      This article uses terms that can be found in the Hedgehog Concepts, the User Guide documentation, and the Fabric API reference.

      "},{"location":"user-guide/shrink-expand/#add-a-switch-to-the-existing-fabric","title":"Add a switch to the existing fabric","text":"

      In order to be added to the Hedgehog Fabric, a switch should have a corresponding Switch object. An example of how to define this object is available in the User Guide.

      Note

      If the Switch will be used in ESLAG or MCLAG groups, the appropriate SwitchGroup objects should exist, and the redundancy group should be specified in the Switch object before creation.

      After the Switch object has been created, you can define and create dedicated device Connections. The types of the connections may differ based on the Switch role given to the device. For more details, refer to the Connections section.
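      For example, a fabric connection between a newly added leaf and an existing spine could be sketched as follows (the switch names, ports, and link count are hypothetical):

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: spine-01--fabric--leaf-05\n  namespace: default\nspec:\n  fabric:\n    links:\n      - spine:\n          port: spine-01/E1/1\n        leaf:\n          port: leaf-05/E1/49\n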

      Note

      Switch devices should be booted in ONIE installation mode to install SONiC OS and configure the Fabric Agent.

      Ensure the management port of the switch is connected to the fabric management network.

      "},{"location":"user-guide/shrink-expand/#remove-a-switch-from-the-existing-fabric","title":"Remove a switch from the existing fabric","text":"

      Before you decommission a switch from the Hedgehog Fabric, several preparation steps are necessary.

      Warning

      Currently, the wiring diagram used for the initial deployment is saved in /var/lib/rancher/k3s/server/manifests/hh-wiring.yaml on the Control Node. The Fabric maintains the objects defined in this original wiring diagram. In order to remove any object, first remove the dedicated API objects from this file. It is recommended to reapply hh-wiring.yaml after changing its contents.

      • If the Switch is a Leaf switch (including Mixed and Border leaf configurations), remove all VPCAttachments bound to the switch's Connections.
      • If the Switch was used for ExternalPeering, remove all ExternalAttachment objects that are bound to the Connections of the Switch.
      • Remove all connections of the Switch.
      • Finally, remove the Switch and Agent objects.
      "},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"

      A Virtual Private Cloud (VPC) is similar to a public cloud VPC. It provides an isolated private network with support for multiple subnets, each with user-defined VLANs and optional DHCP services.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default # Limits which subnets the VPC can use to guarantee non-overlapping IPv4 ranges\n  vlanNamespace: default # Limits which VLAN IDs the VPC can use to guarantee non-overlapping VLANs\n\n  defaultIsolated: true # Sets default behavior for the current VPC subnets to be isolated\n  defaultRestricted: true # Sets default behavior for the current VPC subnets to be restricted\n\n  subnets:\n    default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n      dhcp:\n        enable: true # On-demand DHCP server\n        range: # Optionally, start/end range could be specified, otherwise all available IPs are used\n          start: 10.10.1.10\n          end: 10.10.1.99\n        options: # Optional, additional DHCP options to enable for DHCP server, only available when enable is true\n          pxeURL: tftp://10.10.10.99/bootfilename # PXEURL (optional) to identify the PXE server to use to boot hosts; HTTP query strings are not supported\n          dnsServers: # (optional) configure DNS servers\n            - 1.1.1.1\n          timeServers: # (optional) configure Time (NTP) Servers\n            - 1.1.1.1\n          interfaceMTU: 1500 # (optional) configure the MTU (default is 9036); doesn't affect the actual MTU of the switch interfaces\n      subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n      gateway: 10.10.1.1 # User-defined gateway (optional, default is .1)\n      vlan: 1001 # User-defined VLAN from VLAN namespace\n      isolated: true # Makes subnet isolated from other subnets within the VPC (doesn't affect VPC peering)\n      restricted: true # Causes all hosts in the subnet to be isolated from each other\n\n    third-party-dhcp: # Another subnet\n      dhcp:\n        relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using StaticExternal connection\n      subnet: \"10.10.2.0/24\"\n      vlan: 1002\n\n    another-subnet: # Minimal configuration is just a name, subnet and VLAN\n      subnet: 10.10.100.0/24\n      vlan: 1100\n\n  permit: # Defines which subnets of the current VPC can communicate to each other, applied on top of subnets \"isolated\" flag (doesn't affect VPC peering)\n    - [subnet-1, subnet-2, subnet-3] # 1, 2 and 3 subnets can communicate to each other\n    - [subnet-4, subnet-5] # Possible to define multiple lists\n\n  staticRoutes: # Optional, static routes to be added to the VPC\n    - prefix: 10.100.0.0/24 # Destination prefix\n      nextHops: # Next hop IP addresses\n        - 10.200.0.0\n
      "},{"location":"user-guide/vpcs/#isolated-and-restricted-subnets-permit-lists","title":"Isolated and restricted subnets, permit lists","text":"

      Subnets can be isolated and restricted, with the ability to define permit lists to allow communication between specific isolated subnets. The permit list is applied on top of the isolated flag and doesn't affect VPC peering.

      Isolated subnet means that the subnet has no connectivity with other subnets within the VPC, but it could still be allowed by permit lists.

      Restricted subnet means that all hosts in the subnet are isolated from each other within the subnet.

      A permit list is a list of sets; the subnets within each set can communicate with each other.

      "},{"location":"user-guide/vpcs/#third-party-dhcp-server-configuration","title":"Third-party DHCP server configuration","text":"

      If you use a third-party DHCP server, configured via spec.subnets.<subnet>.dhcp.relay, additional information is added to the DHCP packet forwarded to the DHCP server so that the VPC and subnet can be identified. This information is added under RelayAgentInfo (option 82) in the DHCP packet. The relay sets two suboptions in the packet:

      • VirtualSubnetSelection (suboption 151) is populated with the VRF which uniquely identifies a VPC on the Hedgehog Fabric and will be in VrfV<VPC-name> format, for example VrfVvpc-1 for a VPC named vpc-1 in the Fabric API.
      • CircuitID (suboption 1) identifies the VLAN which, together with the VRF (VPC) name, maps to a specific VPC subnet.
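      To illustrate what a third-party DHCP server receives, the following sketch parses these two suboptions out of a raw option 82 payload using the standard code/length (TLV) suboption encoding; the sample CircuitID and VSS byte values are hypothetical:

```python
def parse_option82(data: bytes) -> dict[int, bytes]:
    """Split a RelayAgentInfo (option 82) payload into {suboption-code: value}."""
    subopts = {}
    i = 0
    while i + 2 <= len(data):
        code, length = data[i], data[i + 1]
        subopts[code] = data[i + 2 : i + 2 + length]
        i += 2 + length
    return subopts

# Hypothetical payload: CircuitID (1) carrying a VLAN hint,
# VirtualSubnetSelection (151) carrying the VRF name VrfVvpc-1
payload = bytes([1, 8]) + b"Vlan1001" + bytes([151, 9]) + b"VrfVvpc-1"

opts = parse_option82(payload)
print(opts[151].decode())  # VrfVvpc-1 -> identifies the VPC vpc-1
print(opts[1].decode())    # Vlan1001 -> identifies the subnet VLAN
```

      A server-side match on these two values can then select the correct DHCP pool for the VPC subnet.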
      "},{"location":"user-guide/vpcs/#vpcattachment","title":"VPCAttachment","text":"

      A VPCAttachment represents a specific VPC subnet assignment to a Connection object, meaning a binding between an exact server port and a VPC. In effect, the VPC subnet becomes available on the specific server port(s), tagged with the subnet's VLAN.

      A VPC can only be attached through a switch that is part of the VLAN namespace used by the VPC.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n  subnet: vpc-1/default # VPC subnet name\n  nativeVLAN: true # (Optional) if true, the port will be configured as a native VLAN port (untagged)\n
      "},{"location":"user-guide/vpcs/#vpcpeering","title":"VPCPeering","text":"

      A VPCPeering enables VPC-to-VPC connectivity. There are two types of VPC peering:

      • Local: peering is implemented on the same switches where VPCs are attached
      • Remote: peering is implemented on the border/mixed leaves defined by the SwitchGroup object

      VPC peering is only possible between VPCs attached to the same IPv4 namespace (see IPv4Namespace)

      "},{"location":"user-guide/vpcs/#local-vpc-peering","title":"Local VPC peering","text":"
      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # Defines a pair of VPCs to peer\n  - vpc-1: {} # Meaning all subnets of two VPCs will be able to communicate with each other\n    vpc-2: {} # See \"Subnet filtering\" for more advanced configuration\n
      "},{"location":"user-guide/vpcs/#remote-vpc-peering","title":"Remote VPC peering","text":"
      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n  remote: border # Indicates a switch group to implement the peering on\n
      "},{"location":"user-guide/vpcs/#subnet-filtering","title":"Subnet filtering","text":"

      Using the permit field, it's possible to specify which subnets of the peering VPCs can communicate with each other.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # subnet-1 and subnet-2 of vpc-1 could communicate to subnet-3 of vpc-2 as well as subnet-4 of vpc-2 could communicate to subnet-5 and subnet-6 of vpc-2\n  - vpc-1:\n      subnets: [subnet-1, subnet-2]\n    vpc-2:\n      subnets: [subnet-3]\n  - vpc-1:\n      subnets: [subnet-4]\n    vpc-2:\n      subnets: [subnet-5, subnet-6]\n
      "},{"location":"user-guide/vpcs/#ipv4namespace","title":"IPv4Namespace","text":"

      An IPv4Namespace defines a set of (non-overlapping) IPv4 address ranges available for use by VPC subnets. Each VPC belongs to a specific IPv4 namespace. Therefore, its subnet prefixes must be from that IPv4 namespace.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # List of prefixes that VPCs can pick their subnets from\n  - 10.10.0.0/16\n
      "},{"location":"user-guide/vpcs/#vlannamespace","title":"VLANNamespace","text":"

      A VLANNamespace defines a set of VLAN ranges available for attaching servers to switches. Each switch can belong to one or more disjoint VLANNamespaces.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n  - from: 1000\n    to: 2999\n
      "},{"location":"vlab/demo/","title":"Demo on VLAB","text":""},{"location":"vlab/demo/#goals","title":"Goals","text":"

      The goal of this demo is to show how to create VPCs, attach and peer them, and test connectivity between the servers. Examples are based on the default VLAB topology.

      You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.

      "},{"location":"vlab/demo/#default-topology","title":"Default topology","text":"

      The default topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves, and 1 non-MCLAG leaf. Optionally, you can run a Collapsed Core topology using the flag --fabric-mode collapsed-core (or -m collapsed-core), which consists of just 2 switches.

      For more details on customizing topologies see the Running VLAB section.

      In the default topology, the following Control Node and Switch VMs are created. The Control Node is connected to every switch; those links are omitted for clarity:

      graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n\n    L1([MCLAG Leaf 1])\n    L2([MCLAG Leaf 2])\n    L3([ESLAG Leaf 3])\n    L4([ESLAG Leaf 4])\n    L5([Leaf 5])\n\n\n    L1 & L2 & L5 & L3 & L4 --> S1 & S2

      The following test servers are also created; as above, Control Node connections are omitted:

      graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([MCLAG Leaf 1])\n    L2([MCLAG Leaf 2])\n    L3([ESLAG Leaf 3])\n    L4([ESLAG Leaf 4])\n    L5([Leaf 5])\n\n    TS1[Server 1]\n    TS2[Server 2]\n    TS3[Server 3]\n    TS4[Server 4]\n    TS5[Server 5]\n    TS6[Server 6]\n    TS7[Server 7]\n    TS8[Server 8]\n    TS9[Server 9]\n    TS10[Server 10]\n\n    subgraph MCLAG\n    L1\n    L2\n    end\n    TS3 --> L1\n    TS1 --> L1\n    TS1 --> L2\n\n    TS2 --> L1\n    TS2 --> L2\n\n    TS4 --> L2\n\n    subgraph ESLAG\n    L3\n    L4\n    end\n\n    TS7 --> L3\n    TS5 --> L3\n    TS5 --> L4\n    TS6 --> L3\n    TS6 --> L4\n\n    TS8 --> L4\n    TS9 --> L5\n    TS10 --> L5\n\n    L1 & L2 & L3 & L4 & L5 <----> S1 & S2
      "},{"location":"vlab/demo/#manual-vpc-creation","title":"Manual VPC creation","text":""},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"

      You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the Control Node, or outside of the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs, each with a single subnet and a DHCP server enabled with an optional IP address range start, and to attach them to some of the test servers:

      core@control-1 ~ $ kubectl get conn | grep server\nserver-01--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-02--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-03--unbundled--leaf-01        unbundled      5h13m\nserver-04--bundled--leaf-02          bundled        5h13m\nserver-05--unbundled--leaf-03        unbundled      5h13m\nserver-06--bundled--leaf-03          bundled        5h13m\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n06:48:46 INF VPC created name=vpc-1\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10\n06:49:04 INF VPC created name=vpc-2\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02\n06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02\n

      The VPC subnet must belong to an IPv4Namespace; the default namespace in the VLAB is 10.0.0.0/16:

      core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   5h14m\n
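      If you prefer a declarative workflow, the kubectl fabric vpc create commands above correspond to applying VPC objects. Below is a sketch of an equivalent manifest for vpc-1, with field names mirroring the vpc-3 example later on this page (the exact object the CLI produces is an assumption):

```yaml
# Hypothetical manifest roughly equivalent to:
# kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
  namespace: default
spec:
  ipv4Namespace: default
  vlanNamespace: default
  subnets:
    default:
      subnet: 10.0.1.0/24
      vlan: 1001
      dhcp:
        enable: true
        range:
          start: 10.0.1.10
```

      You would apply it with kubectl apply -f and verify the result with kubectl get vpc.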

      After you have created the VPCs and VPCAttachments, check the status of the agents to make sure that the requested configuration has been applied to the switches:

      core@control-1 ~ $ kubectl get agents\nNAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION\nleaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0\nleaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0\nleaf-03    server-leaf   VS-03           112s      5          5          v0.23.0\nspine-01   spine         VS-04           16m       3          3          v0.23.0\nspine-02   spine         VS-05           18m       4          4          v0.23.0\n

      In this example, the values in the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration has been applied.

      "},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"

      You can use hhfab vlab ssh on the host to SSH into the test servers and configure networking there. For example, server-01 (MCLAG-attached to both leaf-01 and leaf-02) needs a bond with a VLAN on top of it, while server-05 (single-homed, unbundled, attached to leaf-03) needs just a VLAN; both will get an IP address from the DHCP server. You can use the ip command to configure networking on the servers, or use hhnet, the little helper pre-installed by Fabricator on the test servers.

      For server-01:

      core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n       valid_lft 86396sec preferred_lft 86396sec\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n

      And for server-02:

      core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n       valid_lft 86185sec preferred_lft 86185sec\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n
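      Under the hood, a bond-plus-VLAN setup like the one hhnet performs can be done with plain ip commands. The following is only a rough sketch (the exact steps hhnet runs are an assumption; interface and VLAN names match the server-01 example above):

```shell
# Create a bond over both server uplinks, add a VLAN interface on top of it,
# and request an address over DHCP (run as root on the test server).
ip link add bond0 type bond mode 802.3ad
ip link set enp2s1 down && ip link set enp2s1 master bond0
ip link set enp2s2 down && ip link set enp2s2 master bond0
ip link set bond0 up
ip link add link bond0 name bond0.1001 type vlan id 1001
ip link set bond0.1001 up
dhclient bond0.1001   # obtain an address from the VPC DHCP server
```

      This sequence requires root privileges and the VLAB server interfaces, so it is meant for reference rather than copy-paste on the host.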
      "},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"

      You can test connectivity between the servers before peering the VPCs using the ping command:

      core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
      core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
      "},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"

      To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:

      core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n07:04:58 INF VPCPeering created name=vpc-1--vpc-2\n

      Make sure to wait until the peering is applied to the switches using the kubectl get agents command. After that, you can test connectivity between the servers again:

      core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
      core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n

      If you delete the VPCPeering object with kubectl delete and wait for the agents to apply the configuration on the switches, you can observe that connectivity is lost again:

      core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2\nvpcpeering.vpc.githedgehog.com \"vpc-1--vpc-2\" deleted\n
      core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n

      You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior caused by limitations of the VLAB environment.

      core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
      "},{"location":"vlab/demo/#utility-based-vpc-creation","title":"Utility based VPC creation","text":""},{"location":"vlab/demo/#setup-vpcs","title":"Setup VPCs","text":"

      hhfab vlab includes a utility to create VPCs in VLAB: the hhfab vlab setup-vpcs sub-command.

      NAME:\n   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them\n\nUSAGE:\n   hhfab vlab setup-vpcs [command options]\n\nOPTIONS:\n   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP\n   --force-clenup, -f                                                       start with removing all existing VPCs and VPCAttachments (default: false)\n   --help, -h                                                               show help\n   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)\n   --ipns value                                                             IPv4 namespace for VPCs (default: \"default\")\n   --name value, -n value                                                   name of the VM or HW to access\n   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)\n   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)\n   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP\n   --vlanns value                                                           VLAN namespace for VPCs (default: \"default\")\n   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: 
\"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
      "},{"location":"vlab/demo/#setup-peering","title":"Setup Peering","text":"

      hhfab vlab includes a utility to create VPC peerings in VLAB: the hhfab vlab setup-peerings sub-command.

      NAME:\n   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)\n\nUSAGE:\n   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.\n\n   Example command:\n\n   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24\n\n   Which will produce:\n   1. VPC peering between vpc-01 and vpc-02\n   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border\n   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted\n   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route\n      from external permitted as well any route that belongs to 22.22.22.0/24\n\n   VPC Peerings:\n\n   1+2 -- VPC peering between vpc-01 and vpc-02\n   demo-1+demo-2 -- VPC peering between demo-1 and demo-2\n   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present\n   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border\n   1+2:remote=border -- same as above\n\n   External Peerings:\n\n   1~as5835 -- external peering for vpc-01 with External as5835\n   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing\n     default subnet and any route from external\n   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and\n     default route from external permitted\n   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details\n   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above\n\nOPTIONS:\n   --help, -h                     show help\n   --name value, -n value         name of the VM or HW to access\n   --wait-switches-ready, --wait  wait for 
switches to be ready before and after configuring peerings (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
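      For example, using the request format described above (VPC and External names are the ones from this page's examples):

```shell
# Peer vpc-01 with vpc-02, and set up an external peering for vpc-01 with
# External as5835, allowing the default subnet and any routes from the external.
hhfab vlab setup-peerings 1+2 1~as5835
```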
      "},{"location":"vlab/demo/#test-connectivity","title":"Test Connectivity","text":"

      hhfab vlab includes a utility to test connectivity between servers inside VLAB: the hhfab vlab test-connectivity sub-command.

      NAME:\n   hhfab vlab test-connectivity - test connectivity between all servers\n\nUSAGE:\n   hhfab vlab test-connectivity [command options]\n\nOPTIONS:\n   --curls value                  number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)\n   --help, -h                     show help\n   --iperfs value                 seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)\n   --iperfs-speed value           minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)\n   --name value, -n value         name of the VM or HW to access\n   --pings value                  number of pings to send between each pair of servers (0 to disable) (default: 5)\n   --wait-switches-ready, --wait  wait for switches to be ready before testing connectivity (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
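      For example, a quicker run that only pings (flag values are illustrative):

```shell
# Send 3 pings per server pair; disable iperf3 bandwidth tests and external curl tests.
hhfab vlab test-connectivity --pings 3 --iperfs 0 --curls 0
```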
      "},{"location":"vlab/demo/#using-vpcs-with-overlapping-subnets","title":"Using VPCs with overlapping subnets","text":"

      First, create a second IPv4Namespace with the same subnet as the default one:

      core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   24m\n\ncore@control-1 ~ $ cat <<EOF > ipns-2.yaml\napiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: ipns-2\n  namespace: default\nspec:\n  subnets:\n  - 10.0.0.0/16\nEOF\n\ncore@control-1 ~ $ kubectl apply -f ipns-2.yaml\nipv4namespace.vpc.githedgehog.com/ipns-2 created\n\ncore@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   30m\nipns-2    [\"10.0.0.0/16\"]   8s\n

      Assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now you can create vpc-3 with the same subnet as vpc-1 (but in a different IPv4Namespace) and attach it to server-03:

      core@control-1 ~ $ cat <<EOF > vpc-3.yaml\napiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-3\n  namespace: default\nspec:\n  ipv4Namespace: ipns-2\n  subnets:\n    default:\n      dhcp:\n        enable: true\n        range:\n          start: 10.0.1.10\n      subnet: 10.0.1.0/24\n      vlan: 2001\n  vlanNamespace: default\nEOF\n\ncore@control-1 ~ $ kubectl apply -f vpc-3.yaml\n

      At that point you can set up networking on server-03 the same way you did for server-01 and server-02 in the previous section. Once you have configured networking, server-01 and server-03 have IP addresses from the same subnet.

      "},{"location":"vlab/overview/","title":"Overview","text":"

      It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out Fabric and learn about its look and feel, API, and capabilities. It's not suitable for any data plane or performance testing, or for production use.

      In the VLAB, all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery, boot, and installation process just like on real hardware.

      "},{"location":"vlab/overview/#overview_1","title":"Overview","text":"

      The hhfab CLI provides a special vlab command to manage virtual labs. It allows you to run a set of virtual machines that simulate the Fabric infrastructure, including the control node, switches, and test servers, and it automatically runs the installer to get the Fabric up and running.

      You can find more information about getting hhfab in the download section.

      "},{"location":"vlab/overview/#system-requirements","title":"System Requirements","text":"

      Currently, VLAB is only tested on Ubuntu 22.04 LTS, but it should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

      The following packages need to be installed: qemu-kvm, swtpm-tools, tpm2-tools, and socat. Docker is also required, to log in to the OCI registry.

      By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using the --fabric-mode collapsed-core (or -m collapsed-core) flag, which only consists of 2 switches.

      You can calculate the system requirements from the resources allocated to the VMs using the following table:

      • Control Node: 6 vCPU, 6GB RAM, 100GB disk
      • Test Server: 2 vCPU, 768MB RAM, 10GB disk
      • Switch: 4 vCPU, 5GB RAM, 50GB disk

      These numbers give approximately the following requirements for the default topologies:

      • Spine-Leaf: 38 vCPUs, 36352 MB, 410 GB disk
      • Collapsed Core: 22 vCPUs, 19456 MB, 240 GB disk
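      The totals above can be reproduced from the per-VM allocations. A quick sketch for the Spine-Leaf figures, assuming 1 control node, 6 test servers, and 5 switches for that topology (the VM counts are an assumption inferred from the numbers):

```shell
# Resource totals from the allocations table:
# Control Node: 6 vCPU / 6144 MB / 100 GB; Test Server: 2 / 768 / 10; Switch: 4 / 5120 / 50.
controls=1; servers=6; switches=5
vcpus=$(( controls * 6 + servers * 2 + switches * 4 ))
ram_mb=$(( controls * 6144 + servers * 768 + switches * 5120 ))
disk_gb=$(( controls * 100 + servers * 10 + switches * 50 ))
echo "${vcpus} vCPUs, ${ram_mb} MB, ${disk_gb} GB disk"   # 38 vCPUs, 36352 MB, 410 GB disk
```

      The same arithmetic with different VM counts gives the Collapsed Core figures.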

      Usually, none of the VMs reaches 100% utilization of the allocated resources, but as a rule of thumb you should make sure that at least the allocated amount of RAM and disk space is available for all VMs.

      An NVMe SSD for VM disks is highly recommended.

      "},{"location":"vlab/overview/#installing-prerequisites","title":"Installing prerequisites","text":"

      On Ubuntu 22.04 LTS you can install all required packages using the following commands:

      curl -fsSL https://get.docker.com -o install-docker.sh\nsudo sh install-docker.sh\nsudo usermod -aG docker $USER\nnewgrp docker\n
      sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat\nsudo usermod -aG kvm $USER\nnewgrp kvm\nkvm-ok\n

      Good output of the kvm-ok command should look like this:

      ubuntu@docs:~$ kvm-ok\nINFO: /dev/kvm exists\nKVM acceleration can be used\n
      "},{"location":"vlab/overview/#next-steps","title":"Next steps","text":"
      • Running VLAB
      "},{"location":"vlab/running/","title":"Running VLAB","text":"

      Make sure to follow the prerequisites and check system requirements in the VLAB Overview section before running VLAB.

      "},{"location":"vlab/running/#initialize-vlab","title":"Initialize VLAB","text":"

      First, initialize Fabricator by running hhfab init --dev. This command supports several customization options that are listed in the output of hhfab init --help.

      ubuntu@docs:~$ hhfab init --dev\n11:26:52 INF Hedgehog Fabricator version=v0.30.0\n11:26:52 INF Generated initial config\n11:26:52 INF Adjust configs (incl. credentials, modes, subnets, etc.) file=fab.yaml\n11:26:52 INF Include wiring files (.yaml) or adjust imported ones dir=include\n
      "},{"location":"vlab/running/#vlab-topology","title":"VLAB Topology","text":"

      By default, the command creates 2 spines, 2 MCLAG leaves, 2 ESLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate the preceding topology, run hhfab vlab gen. You can also configure the number of spines, leaves, connections, and so on. For example, the --spines-count and --mclag-leafs-count flags allow you to set the number of spines and MCLAG leaves, respectively. For the complete options, run hhfab vlab gen -h.

      ubuntu@docs:~$ hhfab vlab gen\n21:27:16 INF Hedgehog Fabricator version=v0.30.0\n21:27:16 INF Building VLAB wiring diagram fabricMode=spine-leaf\n21:27:16 INF >>> spinesCount=2 fabricLinksCount=2\n21:27:16 INF >>> eslagLeafGroups=2\n21:27:16 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2\n21:27:16 INF >>> orphanLeafsCount=1 vpcLoopbacks=2\n21:27:16 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n21:27:16 INF Generated wiring file name=vlab.generated.yaml\n
      "},{"location":"vlab/running/#collapsed-core","title":"Collapsed Core","text":"

      If a Collapsed Core topology is desired, after the hhfab init --dev step edit the resulting fab.yaml file and change mode: spine-leaf to mode: collapsed-core. Running hhfab vlab gen then produces a Collapsed Core topology with 2 MCLAG switches:

      ubuntu@docs:~$ hhfab vlab gen\n11:39:02 INF Hedgehog Fabricator version=v0.30.0\n11:39:02 INF Building VLAB wiring diagram fabricMode=collapsed-core\n11:39:02 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2\n11:39:02 INF >>> orphanLeafsCount=0 vpcLoopbacks=2\n11:39:02 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n11:39:02 INF Generated wiring file name=vlab.generated.yaml\n
      "},{"location":"vlab/running/#custom-spine-leaf","title":"Custom Spine Leaf","text":"

      Alternatively, you can generate a custom topology, for example with 2 spines, 4 MCLAG leaves and 2 non-MCLAG leaves, using flags:

      ubuntu@docs:~$ hhfab vlab gen --mclag-leafs-count 4 --orphan-leafs-count 2\n11:41:06 INF Hedgehog Fabricator version=v0.30.0\n11:41:06 INF Building VLAB wiring diagram fabricMode=spine-leaf\n11:41:06 INF >>> spinesCount=2 fabricLinksCount=2\n11:41:06 INF >>> eslagLeafGroups=\"\"\n11:41:06 INF >>> mclagLeafsCount=4 mclagSessionLinks=2 mclagPeerLinks=2\n11:41:06 INF >>> orphanLeafsCount=2 vpcLoopbacks=2\n11:41:06 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n11:41:06 INF Generated wiring file name=vlab.generated.yaml\n

      Additionally, you can pass extra Fabric configuration items using flags on init command or by passing a configuration file. For more information, refer to the Fabric Configuration section.

      Once you have initialized the VLAB, download the artifacts and build the installer using hhfab build. This command automatically downloads all required artifacts from the OCI registry and builds the installer and all other prerequisites for running the VLAB.

      "},{"location":"vlab/running/#build-the-installer-and-start-vlab","title":"Build the Installer and Start VLAB","text":"

      In VLAB, the build and run steps are combined into one command for simplicity: hhfab vlab up. For successive runs, use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. This command does not return; it runs as long as the VLAB is up, so shutting it down is a simple Ctrl+C.

      ubuntu@docs:~$ hhfab vlab up\n11:48:22 INF Hedgehog Fabricator version=v0.30.0\n11:48:22 INF Wiring hydrated successfully mode=if-not-present\n11:48:22 INF VLAB config created file=vlab/config.yaml\n11:48:22 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:22 INF Building installer control=control-1\n11:48:22 INF Adding recipe bin to installer control=control-1\n11:48:24 INF Adding k3s and tools to installer control=control-1\n11:48:25 INF Adding zot to installer control=control-1\n11:48:25 INF Adding cert-manager to installer control=control-1\n11:48:26 INF Adding config and included wiring to installer control=control-1\n11:48:26 INF Adding airgap artifacts to installer control=control-1\n11:48:36 INF Archiving installer path=/home/ubuntu/result/control-1-install.tgz control=control-1\n11:48:45 INF Creating ignition path=/home/ubuntu/result/control-1-install.ign control=control-1\n11:48:46 INF Taps and bridge are ready count=8\n11:48:46 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:46 INF Preparing new vm=control-1 type=control\n11:48:51 INF Preparing new vm=server-01 type=server\n11:48:52 INF Preparing new vm=server-02 type=server\n11:48:54 INF Preparing new vm=server-03 type=server\n11:48:55 INF Preparing new vm=server-04 type=server\n11:48:57 INF Preparing new vm=server-05 type=server\n11:48:58 INF Preparing new vm=server-06 type=server\n11:49:00 INF Preparing new vm=server-07 type=server\n11:49:01 INF Preparing new vm=server-08 type=server\n11:49:03 INF Preparing new vm=server-09 type=server\n11:49:04 INF Preparing new vm=server-10 type=server\n11:49:05 INF Preparing new vm=leaf-01 type=switch\n11:49:06 INF Preparing new vm=leaf-02 type=switch\n11:49:06 INF Preparing new vm=leaf-03 type=switch\n11:49:06 INF Preparing new vm=leaf-04 type=switch\n11:49:06 INF Preparing new vm=leaf-05 type=switch\n11:49:06 INF Preparing new vm=spine-01 type=switch\n11:49:06 INF Preparing new 
vm=spine-02 type=switch\n11:49:06 INF Starting VMs count=18 cpu=\"54 vCPUs\" ram=\"49664 MB\" disk=\"550 GB\"\n11:49:59 INF Uploading control install vm=control-1 type=control\n11:53:39 INF Running control install vm=control-1 type=control\n11:53:40 INF control-install: 01:53:39 INF Hedgehog Fabricator Recipe version=v0.30.0 vm=control-1\n11:53:40 INF control-install: 01:53:39 INF Running control node installation vm=control-1\n12:00:32 INF control-install: 02:00:31 INF Control node installation complete vm=control-1\n12:00:32 INF Control node is ready vm=control-1 type=control\n12:00:32 INF All VMs are ready\n
      The message INF Control node is ready vm=control-1 type=control in the installer's output means that the installer has finished. After this line has been displayed, you can get into the control node and other VMs to watch the Fabric coming up and the switches getting provisioned. See Accessing the VLAB.

      "},{"location":"vlab/running/#configuring-vlab-vms","title":"Configuring VLAB VMs","text":"

      By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can enable connectivity by running hhfab vlab up --restrict-servers=false to allow the test servers to access the Internet and the host. When you enable connectivity, the VMs get a default route pointing to the host, which means that in case of VPC peering you need to configure the test server VMs to use the VPC attachment as a default route (or to route just some specific subnets through it).
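      For example, on a test server with host connectivity enabled, you might keep the host-provided default route for Internet access and send only the VPC ranges over the attachment. A sketch (the gateway address and interface name are assumptions based on the Demo section examples):

```shell
# Route the whole IPv4Namespace range via the VPC gateway on the VLAN interface,
# leaving the host-provided default route in place for Internet access.
sudo ip route add 10.0.0.0/16 via 10.0.1.1 dev bond0.1001
```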

      "},{"location":"vlab/running/#default-credentials","title":"Default credentials","text":"

      Fabricator creates default users and keys for you to login into the control node and test servers as well as for the SONiC Virtual Switches.

      The default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access only to the switch CLI is op with password HHFab.Op!.

      "},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"

      The hhfab vlab command provides ssh and serial subcommands to access the VMs. You can use ssh to get into the control node and test servers after the VMs are started. You can use serial to get into the switch VMs while they are provisioning and installing the software. After switches are installed you can use ssh to get into them.

      You can select the device you want to access or pass its name using the --name (-n) flag.

      ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190  and / toggles search\nSSH to VM:\n  \ud83e\udd94 control-1\n  server-01\n  server-02\n  server-03\n  server-04\n  server-05\n  server-06\n  leaf-01\n  leaf-02\n  leaf-03\n  spine-01\n  spine-02\n\n----------- VM Details ------------\nID:             0\nName:           control-1\nReady:          true\nBasedir:        .hhfab/vlab-vms/control-1\n

      On the control node you have access to kubectl, the Fabric CLI, and k9s to manage the Fabric. You can find information about switch provisioning by running kubectl get agents -o wide. It usually takes about 10-15 minutes for the switches to get installed.

      After the switches are provisioned, the command returns something like this:

      core@control-1 ~ $ kubectl get agents -o wide\nNAME       ROLE          DESCR           HWSKU                      ASIC   HEARTBEAT   APPLIED   APPLIEDG   CURRENTG   VERSION   SOFTWARE                ATTEMPT   ATTEMPTG   AGE\nleaf-01    server-leaf   VS-01 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     30s         5m5s      4          4          v0.23.0   4.1.1-Enterprise_Base   5m5s      4          10m\nleaf-02    server-leaf   VS-02 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     27s         3m30s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m30s     3          10m\nleaf-03    server-leaf   VS-03           DellEMC-S5248f-P-25G-DPB   vs     18s         3m52s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m52s     4          10m\nspine-01   spine         VS-04           DellEMC-S5248f-P-25G-DPB   vs     26s         3m59s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m59s     3          10m\nspine-02   spine         VS-05           DellEMC-S5248f-P-25G-DPB   vs     19s         3m53s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m53s     4          10m\n

      The Heartbeat column shows how long ago the switch sent a heartbeat to the control node. The Applied column shows how long ago the switch applied the configuration. AppliedG shows the generation of the configuration that was applied. CurrentG shows the generation of the configuration the switch is supposed to run. Differing values for AppliedG and CurrentG mean that the switch is still in the process of applying the configuration.

      At that point, the Fabric is ready and you can use kubectl and kubectl fabric to manage it. You can find more about managing the Fabric in the Running Demo and User Guide sections.
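      As an illustration only — the exact kubectl fabric subcommands and flags are documented in the Fabric CLI reference, and the names below are assumptions — creating a VPC and attaching a server could look like:

```shell
# Illustrative sketch: verify subcommands/flags against the Fabric CLI reference.
# Create a VPC with a single subnet.
kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001

# Bind a server connection (name taken from the Connection list above) to the VPC subnet.
kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02
```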

      "},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"

      You can list the main Fabric objects by running kubectl get on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.

      For example, to get the list of switches, run:

      core@control-1 ~ $ kubectl get switch\nNAME       ROLE          DESCR           GROUPS   LOCATIONUUID                           AGE\nleaf-01    server-leaf   VS-01 MCLAG 1            5e2ae08a-8ba9-599a-ae0f-58c17cbbac67   6h10m\nleaf-02    server-leaf   VS-02 MCLAG 1            5a310b84-153e-5e1c-ae99-dff9bf1bfc91   6h10m\nleaf-03    server-leaf   VS-03                    5f5f4ad5-c300-5ae3-9e47-f7898a087969   6h10m\nspine-01   spine         VS-04                    3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da   6h10m\nspine-02   spine         VS-05                    96fbd4eb-53b5-5a4c-8d6a-bbc27d883030   6h10m\n

      Similarly, to get the list of servers, run:

      core@control-1 ~ $ kubectl get server\nNAME        TYPE      DESCR                        AGE\ncontrol-1   control   Control node                 6h10m\nserver-01             S-01 MCLAG leaf-01 leaf-02   6h10m\nserver-02             S-02 MCLAG leaf-01 leaf-02   6h10m\nserver-03             S-03 Unbundled leaf-01       6h10m\nserver-04             S-04 Bundled leaf-02         6h10m\nserver-05             S-05 Unbundled leaf-03       6h10m\nserver-06             S-06 Bundled leaf-03         6h10m\n

      For connections, use:

      core@control-1 ~ $ kubectl get connection\nNAME                                 TYPE           AGE\nleaf-01--mclag-domain--leaf-02       mclag-domain   6h11m\nleaf-01--vpc-loopback                vpc-loopback   6h11m\nleaf-02--vpc-loopback                vpc-loopback   6h11m\nleaf-03--vpc-loopback                vpc-loopback   6h11m\nserver-01--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-02--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-03--unbundled--leaf-01        unbundled      6h11m\nserver-04--bundled--leaf-02          bundled        6h11m\nserver-05--unbundled--leaf-03        unbundled      6h11m\nserver-06--bundled--leaf-03          bundled        6h11m\nspine-01--fabric--leaf-01            fabric         6h11m\nspine-01--fabric--leaf-02            fabric         6h11m\nspine-01--fabric--leaf-03            fabric         6h11m\nspine-02--fabric--leaf-01            fabric         6h11m\nspine-02--fabric--leaf-02            fabric         6h11m\nspine-02--fabric--leaf-03            fabric         6h11m\n

      For IPv4 and VLAN namespaces, use:

      core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   6h12m\n\ncore@control-1 ~ $ kubectl get vlanns\nNAME      AGE\ndefault   6h12m\n
      "},{"location":"vlab/running/#reset-vlab","title":"Reset VLAB","text":"

      To reset VLAB and start over, remove the .hhfab directory and run hhfab init -f, which will force overwrite your existing configuration, fab.yaml.
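      As a sketch — assuming the VLAB state lives under the .hhfab directory, as suggested by the Basedir shown in the VM details above:

```shell
# Remove local VLAB state (VM images, generated configs) and re-initialize.
rm -rf .hhfab
hhfab init -f   # force-overwrites the existing fab.yaml
```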

      "},{"location":"vlab/running/#next-steps","title":"Next steps","text":"
      • Running Demo
      "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"

      The Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments. It comes without vendor lock-in.

      The Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to public cloud offerings. It provides a multi-tenant API to define the user intent on network isolation and connectivity, which is automatically transformed into configuration for switches and software appliances.

      You can read more about its concepts and architecture in the documentation.

      You can find out how to download and try the Fabric on the self-hosted fully virtualized lab or on hardware.

      "},{"location":"architecture/fabric/","title":"Hedgehog Network Fabric","text":"

      The Hedgehog Open Network Fabric is an open-source network architecture that provides connectivity between virtual and physical workloads and a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides a standard Kubernetes interface to manage the elements in the physical network and a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric provides isolation between different groups of workloads by placing them in different virtual networks called VPCs. To achieve this, it defines different abstractions starting from the physical network, where a set of Connection objects defines how a physical server on the network connects to a physical switch on the fabric.

      "},{"location":"architecture/fabric/#underlay-network","title":"Underlay Network","text":"

      The Hedgehog Fabric currently supports two underlay network topologies.

      "},{"location":"architecture/fabric/#collapsed-core","title":"Collapsed Core","text":"

      A collapsed core topology is simply a pair of switches connected in an MCLAG configuration with no other network elements. All workloads attach to these two switches.

      The leaves in this setup are configured as an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as gateways for workloads attached to them. The underlay configuration in the collapsed core is very simple, making it ideal for very small deployments.

      "},{"location":"architecture/fabric/#spine-leaf","title":"Spine-Leaf","text":"

      A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and the spines providing connectivity between different leaves.

      This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. RFC7938 was used as the reference for establishing the underlay network.

      "},{"location":"architecture/fabric/#overlay-network","title":"Overlay Network","text":"

      The overlay network runs on top of the underlay network to create a virtual network. The overlay network isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic over VXLAN tunnels that originate and terminate on the leaf switches in the network. The fabric uses BGP-EVPN/VXLAN to enable the creation and management of virtual networks on top of the physical one. The fabric supports multiple virtual networks over the same underlay network to support multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. The following subsections contain a high-level overview of how VPCs are implemented in the Hedgehog Fabric and its associated objects.

      "},{"location":"architecture/fabric/#vpc","title":"VPC","text":"

      The previous subsections have described what a VPC is, and how to attach workloads to a specific VPC. The following bullet points describe how VPCs are actually implemented in the network to ensure a private view of the network.

      • Each VPC is modeled as a VRF on each switch where there are VPC attachments defined for this VPC. The VRF is allocated its own VNI. The VRF is local to each switch and the VNI is global for the entire fabric. By mapping the VRF to a VNI and configuring an EVPN instance in each VRF, a shared L3VNI is established across the entire fabric. All VRFs participating in this VNI can freely communicate with each other without the need for a policy. A VLAN is allocated for each VRF which functions as an IRB VLAN for the VRF.
      • The VRF created on each switch corresponding to a VPC configures a BGP instance with EVPN to advertise its locally attached subnets and import routes from its peered VPCs. The BGP instance in the tenant VRFs does not establish neighbor relationships and is purely used to advertise locally attached routes into the VPC (all VRFs with the same L3VNI) across leaves in the network.
      • A VPC can have multiple subnets. Each subnet in the VPC is modeled as a VLAN on the switch. The VLAN is only locally significant and a given subnet might have different VLANs on different leaves on the network. A globally significant VNI is assigned to each subnet. This VNI is used to extend the subnet across different leaves in the network and provides a view of single stretched L2 domain if the applications need it.
      • The Hedgehog Fabric has a built-in DHCP server which will automatically assign IP addresses to each workload depending on the VPC it belongs to. This is achieved by configuring a DHCP relay on each of the server facing VLANs. The DHCP server is accessible through the underlay network and is shared by all VPCs in the fabric. The inbuilt DHCP server is capable of identifying the source VPC of the request and assigning IP addresses from a pool allocated to the VPC at creation.
      • A VPC by default cannot communicate to anyone outside the VPC and specific peering rules are required to allow communication to external networks or to other VPCs.
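      For illustration, a VPC with one subnet and DHCP enabled could be sketched as follows. The field names below are assumptions based on the descriptions above — check the Fabric API reference for the exact schema:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
spec:
  ipv4Namespace: default   # IPv4 range pool the subnets are allocated from
  vlanNamespace: default   # VLAN range pool for server-facing VLANs
  subnets:
    default:
      subnet: 10.0.1.0/24  # modeled as a VLAN on the leaves, with a global VNI
      vlan: 1001
      dhcp:
        enable: true       # served by the fabric's built-in DHCP server
```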
      "},{"location":"architecture/fabric/#vpc-peering","title":"VPC Peering","text":"

      To enable communication between two different VPCs, one needs to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.

      • Local Peering: A local peering directly imports routes from another VPC locally. This is achieved by a simple import route from the peer VPC. In case there are no locally attached workloads to the peer VPC the fabric automatically creates a stub VPC for peering and imports routes from it. This allows VPCs to peer with each other without the need for a dedicated peering leaf. If a local peering is done for a pair of VPCs which have locally attached workloads, the fabric automatically allocates a pair of ports on the switch to route traffic between these VRFs using static routes. This is required because of limitations in the underlying platform. The net result of these limitations is that the bandwidth between these 2 VPCs is limited by the bandwidth of the loopback interfaces allocated on the switch. Traffic between the peered VPCs will not leave the switch that connects them.
      • Remote Peering: Remote peering is implemented using one or more dedicated peering switches which are used as a rendezvous point for the two VPCs in the fabric. The set of switches to be used for peering is determined by configuration in the peering policy. When a remote peering policy is applied for a pair of VPCs, the VRFs corresponding to these VPCs on the peering switch advertise default routes into their specific VRFs identified by the L3VNI. All traffic that does not belong to the VPCs is forwarded to the peering switch, which has routes to the other VPCs and forwards it from there. The bandwidth limitation that exists in the local peering solution is solved here, as the bandwidth between the two VPCs is determined by the fabric's cross-sectional bandwidth.
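      A VPCPeering object enabling connectivity between two VPCs could be sketched like this. The spec layout is an assumption — consult the Fabric API reference for the authoritative schema:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
spec:
  permit:
    - vpc-1: {}    # empty selectors permit all subnets of both VPCs
      vpc-2: {}
  # remote: border # uncomment to use remote peering via a "border" switch group
```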
      "},{"location":"architecture/overview/","title":"Overview","text":"

      Under construction.

      "},{"location":"concepts/overview/","title":"Concepts","text":""},{"location":"concepts/overview/#introduction","title":"Introduction","text":"

      Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.

      Hedgehog Fabric consists of the following components:

      • Fabricator - special tool to install and configure Fabric, or to run virtual labs
      • Control Node - one or more Kubernetes nodes in a single cluster running Fabric software:
        • Fabric Controller - main control plane component that manages Fabric resources
      • Fabric Kubectl plugin (Fabric CLI) - kubectl plugin to manage Fabric resources in an easy way
      • Fabric Agent - runs on every switch and manages switch configuration
      "},{"location":"concepts/overview/#fabric-api","title":"Fabric API","text":"

      All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) called the Wiring Diagram. With this representation, Fabric defines switches, servers, control nodes, external systems and connections between them in a single place and then uses these definitions to deploy and manage the whole infrastructure. On top of the Wiring Diagram, Fabric provides a set of APIs to manage the VPCs and the connections between them and between VPCs and External systems.

      "},{"location":"concepts/overview/#wiring-diagram-api","title":"Wiring Diagram API","text":"

      Wiring Diagram consists of the following resources:

      • \"Devices\": describe any device in the Fabric and can be of two types:
        • Switch: configuration of the switch, containing for example: port group speeds, port breakouts, switch IP/ASN
        • Server: any physical server attached to the Fabric including Control Nodes
      • Connection: any logical connection for devices
        • usually it's a connection between two or more ports on two different devices
        • for example: MCLAG Peer Link, Unbundled/MCLAG server connections, Fabric connection between spine and leaf
      • VLANNamespace -> non-overlapping VLAN ranges for attaching servers
      • IPv4Namespace -> non-overlapping IPv4 ranges for VPC subnets
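      A minimal sketch of the two namespace objects follows. The API groups and field names are assumptions — see the Fabric API reference for the exact schema:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: VLANNamespace
metadata:
  name: default
spec:
  ranges:          # non-overlapping VLAN ranges for attaching servers
    - from: 1000
      to: 2999
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: default
spec:
  subnets:         # non-overlapping IPv4 ranges for VPC subnets
    - 10.0.0.0/16
```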
      "},{"location":"concepts/overview/#user-facing-api","title":"User-facing API","text":"
      • VPC API
        • VPC: Virtual Private Cloud, similar to a public cloud VPC, provides an isolated private network for the resources, with support for multiple subnets, each with user-defined VLANs and optional DHCP service
        • VPCAttachment: represents a specific VPC subnet assignment to the Connection object which means exact server port to a VPC binding
        • VPCPeering: enables VPC-to-VPC connectivity (could be Local where VPCs are used or Remote peering on the border/mixed leaves)
      • External API
        • External: definition of the \"external system\" to peer with (could be one or multiple devices such as edge/provider routers)
        • ExternalAttachment: configuration for a specific switch (using Connection object) describing how it connects to an external system
        • ExternalPeering: provides VPC with External connectivity by exposing specific VPC subnets to the external system and allowing inbound routes from it
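      As a sketch, a VPCAttachment binding a server connection to a VPC subnet might look like this. Field names are assumptions; the connection name follows the naming convention shown in the VLAB examples:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  name: server-01--vpc-1
spec:
  connection: server-01--mclag--leaf-01--leaf-02  # Connection object to bind
  subnet: vpc-1/default                           # <vpc>/<subnet> to attach
```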
      "},{"location":"concepts/overview/#fabricator","title":"Fabricator","text":"

      Installer builder and VLAB.

      • Installer builder based on a preset (currently: vlab for virtual and lab for physical)
        • Main input: Wiring Diagram
        • All input artifacts coming from OCI registry
        • Always full airgap (everything running from private registry)
        • Flatcar Linux for Control Node, generated ignition.json
        • Automatic K3s installation and private registry setup
        • All components and their dependencies running in Kubernetes
      • Integrated Virtual Lab (VLAB) management
      • Future:
        • In-cluster (control) Operator to manage all components
        • Upgrades handling for everything starting Control Node OS
        • Installation progress, status and retries
        • Disaster recovery and backups
      "},{"location":"concepts/overview/#fabric","title":"Fabric","text":"

      Control plane and switch agent.

      • Currently Fabric is basically a single controller manager running in Kubernetes
        • It includes controllers for different CRDs and needs
        • For example, auto assigning VNIs to VPCs or generating the Agent configuration
        • Additionally, it's running the admission webhook for Hedgehog's CRD APIs
      • The Agent is watching for the corresponding Agent CRD in Kubernetes API
        • It applies the changes and saves the new configuration locally
        • It reports status and information back to the API
        • It can perform reinstallation and reboot of SONiC
      "},{"location":"contribute/docs/","title":"Documentation","text":""},{"location":"contribute/docs/#getting-started","title":"Getting started","text":"

      This documentation is built using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.

      In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine, as well as any editor of your choice, preferably one supporting Markdown preview. You can run the preview server using the following command:

      make serve\n

      Now you can open a continuously updated preview of your edits in a browser at http://127.0.0.1:8000. Pages will be automatically updated while you're editing.

      Additionally you can run

      make build\n

      to make sure that your changes will be built correctly and don't break the documentation.

      "},{"location":"contribute/docs/#workflow","title":"Workflow","text":"

      If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It'll open the page in the GitHub editor. You can edit it and create a pull request with your changes.

      Please, never push to the master or release/* branches directly. Always create a pull request and wait for the review.

      Each pull request will be automatically built and a preview will be deployed. You can find the link to the preview in the pull request comments.

      "},{"location":"contribute/docs/#repository","title":"Repository","text":"

      Documentation is organized in per-release branches:

      • master - ongoing development, not released yet, referenced as dev version in the documentation
      • release/alpha-1/release/alpha-2 - alpha releases, referenced as alpha-1/alpha-2 versions in the documentation, if patches released for alpha-1, they'll be merged into release/alpha-1 branch
      • release/v1.0 - first stable release, referenced as v1.0 version in the documentation, if patches (e.g. v1.0.1) released for v1.0, they'll be merged into release/v1.0 branch

      The latest release branch is referenced as the latest version in the documentation and is used by default when you open the documentation.

      "},{"location":"contribute/docs/#file-layout","title":"File layout","text":"

      All documentation files are located in docs directory. Each file is a Markdown file with .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.

      For example, top-level .pages in this repository looks like this:

      nav:\n  - index.md\n  - getting-started\n  - concepts\n  - Wiring Diagram: wiring\n  - Install & Upgrade: install-upgrade\n  - User Guide: user-guide\n  - Reference: reference\n  - Troubleshooting: troubleshooting\n  - ...\n  - release-notes\n  - contribute\n

      Here you can add pages by file name, like index.md, and the page title will be taken from the file (first line starting with #). Additionally, you can reference a whole directory to create a nested section in the navigation. You can also add custom titles by using the : separator, like Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.

      More details in the MkDocs Pages plugin.

      "},{"location":"contribute/docs/#abbreviations","title":"Abbreviations","text":"

      You can find abbreviations in includes/abbreviations.md file. You can add various abbreviations there and all usages of the defined words in the documentation will get a highlight.

      For example, we have following in includes/abbreviations.md:

      *[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric\n

      It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.

      "},{"location":"contribute/docs/#markdown-extensions","title":"Markdown extensions","text":"

      We're using MkDocs Material theme with multiple extensions enabled. You can find detailed reference here, but here you can find some of the most useful ones.

      To view code for examples, please, check the source code of this page.

      "},{"location":"contribute/docs/#text-formatting","title":"Text formatting","text":"

      Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible and comments can be added inline.

      Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.

      Keyboard keys can be written like so:

      Ctrl+Alt+Del

      And inline icons/emojis can be added like this:

      :fontawesome-regular-face-laugh-wink:\n:fontawesome-brands-twitter:{ .twitter }\n

      "},{"location":"contribute/docs/#admonitions","title":"Admonitions","text":"

      Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.

      Lorem ipsum

      Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.
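      The admonition above could be written using MkDocs Material's syntax like so (a sketch; see the Material admonitions reference for all types):

```markdown
!!! note "Lorem ipsum"

    Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et
    euismod nulla. Curabitur feugiat, tortor non consequat finibus.
```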

      "},{"location":"contribute/docs/#code-blocks","title":"Code blocks","text":"

      Details can be found here.

      Simple code block with line nums and highlighted lines:

      bubble_sort.py
      def bubble_sort(items):\n    for i in range(len(items)):\n        for j in range(len(items) - 1 - i):\n            if items[j] > items[j + 1]:\n                items[j], items[j + 1] = items[j + 1], items[j]\n

      Code annotations:

      theme:\n  features:\n    - content.code.annotate # (1)\n
      1. I'm a code annotation! I can contain code, formatted text, images, ... basically anything that can be written in Markdown.
      "},{"location":"contribute/docs/#tabs","title":"Tabs","text":"

      You can use Tabs to better organize content.

      CC++
      #include <stdio.h>\n\nint main(void) {\n  printf(\"Hello world!\\n\");\n  return 0;\n}\n
      #include <iostream>\n\nint main(void) {\n  std::cout << \"Hello world!\" << std::endl;\n  return 0;\n}\n
      "},{"location":"contribute/docs/#tables","title":"Tables","text":"Method Description GET Fetch resource PUT Update resource DELETE Delete resource"},{"location":"contribute/docs/#diagrams","title":"Diagrams","text":"

      You can directly include Mermaid diagrams in your Markdown files. Details can be found here.

      graph LR\n  A[Start] --> B{Error?};\n  B -->|Yes| C[Hmm...];\n  C --> D[Debug];\n  D --> B;\n  B ---->|No| E[Yay!];
      sequenceDiagram\n  autonumber\n  Alice->>John: Hello John, how are you?\n  loop Healthcheck\n      John->>John: Fight against hypochondria\n  end\n  Note right of John: Rational thoughts!\n  John-->>Alice: Great!\n  John->>Bob: How about you?\n  Bob-->>John: Jolly good!
      "},{"location":"contribute/overview/","title":"Overview","text":"

      Under construction.

      "},{"location":"getting-started/download/","title":"Download","text":"

      The main entry point for the software is the Hedgehog Fabricator CLI, named hhfab. It is a command-line tool that allows you to build an installer for the Hedgehog Fabric, upgrade an existing installation, or run the Virtual LAB.

      "},{"location":"getting-started/download/#getting-access","title":"Getting access","text":"

      Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.

      After that you will be provided with the credentials to access the software on GitHub Packages. In order to use the software, log in to the registry using the following command:

      docker login ghcr.io --username provided_user_name --password provided_token_string\n
      "},{"location":"getting-started/download/#downloading-hhfab","title":"Downloading hhfab","text":"

      Currently, hhfab is supported on Linux x86/arm64 (tested on Ubuntu 22.04) and macOS x86/arm64 for building installers/upgraders. It may work on Windows WSL2 (with Ubuntu), but it's not tested. For running VLAB, only Linux x86 is currently supported.

      All software is published to the GitHub Packages OCI registry, including binaries, container images, and Helm charts. Download the latest stable hhfab binary from GitHub Packages using the following command; it requires ORAS to be installed (see below):

      curl -fsSL https://i.hhdev.io/hhfab | bash\n

      Or download a specific version (e.g. beta-1) using the following command:

      curl -fsSL https://i.hhdev.io/hhfab | VERSION=beta-1 bash\n

      Use the VERSION environment variable to specify the version of the software to download. By default, the latest stable release is downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.

      "},{"location":"getting-started/download/#installing-oras","title":"Installing ORAS","text":"

      The download script requires ORAS to be installed. ORAS is used to download the binary from the OCI registry and can be installed using the following command:

      curl -fsSL https://i.hhdev.io/oras | bash\n
      "},{"location":"getting-started/download/#next-steps","title":"Next steps","text":"
      • Concepts
      • Virtual LAB
      • Installation
      • User guide
      "},{"location":"install-upgrade/build-wiring/","title":"Build Wiring Diagram","text":"

      Under construction.

      "},{"location":"install-upgrade/build-wiring/#overview","title":"Overview","text":"

      A wiring diagram is a YAML file that is a digital representation of your network. You can find more YAML-level details in the User Guide sections on switch features and port naming and in the API reference. It's mandatory for all switches to reference a SwitchProfile in the spec.profile of the Switch object. Only port names defined by switch profiles can be used in the wiring diagram; NOS (or any other) port names aren't supported.

      In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, run the sample generator, which produces working wiring diagrams:

      ubuntu@sl-dev:~$ hhfab sample -h\n\nNAME:\n   hhfab sample - generate sample wiring diagram\n\nUSAGE:\n   hhfab sample command [command options]\n\nCOMMANDS:\n   spine-leaf, sl      generate sample spine-leaf wiring diagram\n   collapsed-core, cc  generate sample collapsed-core wiring diagram\n   help, h             Shows a list of commands or help for one command\n\nOPTIONS:\n   --help, -h  show help\n

      Or you can generate a wiring diagram for a VLAB environment with flags to customize the number of switches, links, servers, etc.:

      ubuntu@sl-dev:~$ hhfab vlab gen --help\nNAME:\n   hhfab vlab generate - generate VLAB wiring diagram\n\nUSAGE:\n   hhfab vlab generate [command options]\n\nOPTIONS:\n   --bundled-servers value      number of bundled servers to generate for switches (only for one of the second switch in the redundancy group or orphan switch) (default: 1)\n   --eslag-leaf-groups value    eslag leaf groups (comma separated list of number of ESLAG switches in each group, should be 2-4 per group, e.g. 2,4,2 for 3 groups with 2, 4 and 2 switches)\n   --eslag-servers value        number of ESLAG servers to generate for ESLAG switches (default: 2)\n   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)\n   --help, -h                   show help\n   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)\n   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)\n   --mclag-servers value        number of MCLAG servers to generate for MCLAG switches (default: 2)\n   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)\n   --no-switches                do not generate any switches (default: false)\n   --orphan-leafs-count value   number of orphan leafs (default: 0)\n   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)\n   --unbundled-servers value    number of unbundled servers to generate for switches (only for one of the first switch in the redundancy group or orphan switch) (default: 1)\n   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)\n
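      For example, using the flags listed in the help output above (values are illustrative), a spine-leaf VLAB with two spines, two MCLAG leaves, and one orphan leaf could be generated with:

```shell
# All flags are taken from the `hhfab vlab generate` help text; counts are examples.
hhfab vlab gen \
  --spines-count 2 \
  --fabric-links-count 2 \
  --mclag-leafs-count 2 \
  --mclag-peer-links 2 \
  --mclag-session-links 2 \
  --orphan-leafs-count 1 \
  --vpc-loopbacks 2
```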
      "},{"location":"install-upgrade/build-wiring/#sample-switch-configuration","title":"Sample Switch Configuration","text":"
      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: ds3000-02\nspec:\n  boot:\n    serial: ABC123XYZ\n  role: server-leaf\n  description: leaf-2\n  profile: celestica-ds3000\n  portBreakouts:\n    E1/1: 4x10G\n    E1/2: 4x10G\n    E1/17: 4x25G\n    E1/18: 4x25G\n    E1/32: 4x25G\n  redundancy:\n    group: mclag-1\n    type: mclag\n
      "},{"location":"install-upgrade/build-wiring/#design-discussion","title":"Design Discussion","text":"

      This section is meant to help the reader understand how to assemble the primitives presented by the Fabric API into a functional fabric.

      "},{"location":"install-upgrade/build-wiring/#vpc","title":"VPC","text":"

      A VPC allows for isolation at layer 3. This is the main building block for users when creating their architecture. Hosts inside of a VPC belong to the same broadcast domain and can communicate with each other; if desired, a single VPC can be configured with multiple broadcast domains. The hosts inside of a VPC will likely need to connect to other VPCs or the outside world. To communicate between two VPCs, a peering needs to be created. A VPC can be a logical separation of workloads. By separating these workloads, additional controls are available. The logical separation doesn't have to be the traditional database, web, and compute layers; it could be development teams who need isolation, tenants inside of an office building, or any separation that allows for better control of the network. Once your VPCs are decided, the rest of the fabric will come together. With the VPCs decided, traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows a VPC to span more than one switch, which provides great flexibility.

      graph TD\n    L1([Leaf 1])\n    L2([Leaf 2])\n    S1[\"Server 1\n      10.7.71.1\"]\n    S2[\"Server 2\n      172.16.2.31\"]\n    S3[\"Server 3\n       192.168.18.85\"]\n\n    L1 <--> S1\n    L1 <--> S2\n    L2 <--> S3\n\n    subgraph VPC 1\n    S1\n    S2\n    S3\n    end
      "},{"location":"install-upgrade/build-wiring/#connection","title":"Connection","text":"

      A connection represents the physical wires in your data center. They connect switches to other switches or switches to servers.

      "},{"location":"install-upgrade/build-wiring/#server-connections","title":"Server Connections","text":"

      A server connection is a connection used to attach servers to the fabric. The fabric configures the server-facing port according to the type of the connection (MCLAG, Bundle, etc). The configuration of the actual server needs to be done by the server administrator. The server port names are not validated by the fabric; they are used as metadata to identify the connection. A server connection can be one of:

      • Unbundled - A single cable connecting switch to server.
      • Bundled - Two or more cables going to a single switch, a LAG or similar.
      • MCLAG - Two cables going to two different switches, also called dual homing. The switches will need a fabric link between them.
      • ESLAG - Two to four cables going to different switches, also called multi-homing. If four links are used, there need to be four switches connected to a single server with four NIC ports.
      graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([Leaf 1])\n    L2([Leaf 2])\n    L3([Leaf 3])\n    L4([Leaf 4])\n    L5([Leaf 5])\n    L6([Leaf 6])\n    L7([Leaf 7])\n\n    TS1[Server1]\n    TS2[Server2]\n    TS3[Server3]\n    TS4[Server4]\n\n    S1 & S2 ---- L1 & L2 & L3 & L4 & L5 & L6 & L7\n    L1 <-- Bundled --> TS1\n    L1 <-- Bundled --> TS1\n    L1 <-- Unbundled --> TS2\n    L2 <-- MCLAG --> TS3\n    L3 <-- MCLAG --> TS3\n    L4 <-- ESLAG --> TS4\n    L5 <-- ESLAG --> TS4\n    L6 <-- ESLAG --> TS4\n    L7 <-- ESLAG --> TS4\n\n    subgraph VPC 1\n    TS1\n    TS2\n    TS3\n    TS4\n    end\n\n    subgraph MCLAG\n    L2\n    L3\n    end\n\n    subgraph ESLAG\n    L3\n    L4\n    L5\n    L6\n    L7\n    end\n
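      For illustration, an MCLAG server connection of the kind shown for Server3 above might be sketched as below; the switch names, switch ports, and server NIC names are assumptions for this example:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-3--mclag--leaf-2--leaf-3
spec:
  mclag:
    links: # two cables, one to each switch in the MCLAG pair
      - server:
          port: server-3/enp2s1
        switch:
          port: leaf-2/E1/5
      - server:
          port: server-3/enp2s2
        switch:
          port: leaf-3/E1/5
```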
      "},{"location":"install-upgrade/build-wiring/#fabric-connections","title":"Fabric Connections","text":"

      Fabric connections are the connections between switches; they form the fabric of the network.
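      A fabric connection between a spine and a leaf might be sketched as follows; the switch names and ports are illustrative assumptions:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: spine-1--fabric--leaf-1
spec:
  fabric:
    links:
      - spine:
          port: spine-1/E1/1
        leaf:
          port: leaf-1/E1/32
```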

      "},{"location":"install-upgrade/build-wiring/#vpc-peering","title":"VPC Peering","text":"

      VPCs need VPC Peerings to talk to each other. VPC Peerings come in two varieties: local and remote.

      graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([Leaf 1])\n    L2([Leaf 2])\n    TS1[Server1]\n    TS2[Server2]\n    TS3[Server3]\n    TS4[Server4]\n\n    S1 & S2 <--> L1 & L2\n    L1 <--> TS1 & TS2\n    L2 <--> TS3 & TS4\n\n\n    subgraph VPC 1\n    TS1\n    TS2\n    end\n\n    subgraph VPC 2\n    TS3\n    TS4\n    end
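      A minimal sketch of a peering between the two VPCs in the diagram might look like this (the VPC names are assumptions for the example):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
spec:
  permit: # allow traffic to flow between the listed VPCs
    - vpc-1: {}
      vpc-2: {}
```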
      "},{"location":"install-upgrade/build-wiring/#local-vpc-peering","title":"Local VPC Peering","text":"

      When there is no dedicated border/peering switch available in the fabric, local VPC peering can be used. This kind of peering sends traffic between the two VPCs on a switch where either of the VPCs has workloads attached. Due to a limitation in the SONiC network operating system, the bandwidth of this kind of peering is limited by the number of VPC loopbacks you selected while initializing the fabric. Traffic between the VPCs uses the loopback interface, so the bandwidth of the peering is equal to the bandwidth of the ports used in the loopback.

      graph TD\n\n    L1([Leaf 1])\n    S1[Server1]\n    S2[Server2]\n    S3[Server3]\n    S4[Server4]\n\n    L1 <-.2,loopback.-> L1;\n    L1 <-.3.-> S1;\n    L1 <--> S2 & S4;\n    L1 <-.1.-> S3;\n\n    subgraph VPC 1\n    S1\n    S2\n    end\n\n    subgraph VPC 2\n    S3\n    S4\n    end
      The dotted line in the diagram shows the traffic flow for local peering. The traffic originates in VPC 2, travels to the switch, travels out the first loopback port, into the second loopback port, and finally out the port destined for VPC 1.

      "},{"location":"install-upgrade/build-wiring/#remote-vpc-peering","title":"Remote VPC Peering","text":"

      Remote peering is used when you need a high-bandwidth connection between the VPCs; you dedicate a switch to the peering traffic. This is done either on a border leaf or on a switch where neither of the VPCs is present. This kind of peering allows traffic between different VPCs at line rate and is only limited by fabric bandwidth. Remote peering introduces a few additional hops in the traffic path and may cause a small increase in latency.

      graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([Leaf 1])\n    L2([Leaf 2])\n    L3([Leaf 3])\n    TS1[Server1]\n    TS2[Server2]\n    TS3[Server3]\n    TS4[Server4]\n\n    S1 <-.5.-> L1;\n    S1 <-.2.-> L2;\n    S1 <-.3,4.-> L3;\n    S2 <--> L1;\n    S2 <--> L2;\n    S2 <--> L3;\n    L1 <-.6.-> TS1;\n    L1 <--> TS2;\n    L2 <--> TS3;\n    L2 <-.1.-> TS4;\n\n\n    subgraph VPC 1\n    TS1\n    TS2\n    end\n\n    subgraph VPC 2\n    TS3\n    TS4\n    end
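      As a hedged sketch, remote peering is expressed by pointing the peering at a switch group dedicated to the peering traffic; the group name border and the VPC names below are assumptions for this example:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
spec:
  remote: border # switch group that carries the peering traffic
  permit:
    - vpc-1: {}
      vpc-2: {}
```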
      The dotted line in the diagram shows the traffic flow for remote peering. The traffic could take a different path because of ECMP. It is important to note that Leaf 3 cannot have any servers from VPC 1 or VPC 2 on it, but it can have a different VPC attached to it.

      "},{"location":"install-upgrade/build-wiring/#vpc-loopback","title":"VPC Loopback","text":"

      A VPC loopback is a physical cable with both ends plugged into the same switch; using adjacent ports is suggested but not required. This loopback is what allows two different VPCs on the same switch to communicate with each other, and it is required due to a Broadcom limitation.
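      In the wiring diagram, a VPC loopback might be sketched as a connection whose two ends are ports on the same switch; the switch name and port numbers here are assumptions:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: leaf-1--vpc-loopback
spec:
  vpcLoopback:
    links: # both ends of one physical cable on the same switch
      - switch1:
          port: leaf-1/E1/47
        switch2:
          port: leaf-1/E1/48
```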

      "},{"location":"install-upgrade/config/","title":"Fabric Configuration","text":""},{"location":"install-upgrade/config/#overview","title":"Overview","text":"

      The fab.yaml file is the configuration file for the fabric. It supplies the configuration of the users, their credentials, logging, telemetry, and other non-wiring-related settings. The fab.yaml file is composed of multiple YAML documents inside a single file. Per the YAML spec, three hyphens (---) on a single line separate the end of one document from the beginning of the next. There are two YAML documents in the fab.yaml file. For more information about how to use hhfab init, run hhfab init --help.

      "},{"location":"install-upgrade/config/#typical-hhfab-workflows","title":"Typical HHFAB workflows","text":""},{"location":"install-upgrade/config/#hhfab-for-vlab","title":"HHFAB for VLAB","text":"

      For a VLAB user, the typical workflow with hhfab is:

      1. hhfab init --dev
      2. hhfab vlab gen
      3. hhfab vlab up

      The above workflow will get a user up and running with a spine-leaf VLAB.

      "},{"location":"install-upgrade/config/#hhfab-for-physical-machines","title":"HHFAB for Physical Machines","text":"

      It's possible to start from scratch:

      1. hhfab init (see the different flags to customize the initial configuration)
      2. Adjust the fab.yaml file to your needs
      3. hhfab validate
      4. hhfab build

      Or import existing config and wiring files:

      1. hhfab init -c fab.yaml -w wiring-file.yaml -w extra-wiring-file.yaml
      2. hhfab validate
      3. hhfab build

      After the above workflow, a user will have a .img file suitable for installing the control node and then bringing up the switches which comprise the fabric.

      "},{"location":"install-upgrade/config/#fabyaml","title":"Fab.yaml","text":""},{"location":"install-upgrade/config/#configure-control-node-and-switch-users","title":"Configure control node and switch users","text":"

      Control node and switch users are configured either by passing --default-password-hash to hhfab init or by editing the resulting fab.yaml file emitted by hhfab init. You can specify users to be configured on the control node(s) and switches in the following format:

      spec:\n    config:\n      control:\n        defaultUser: # user 'core' on all control nodes\n          password: \"hashhashhashhashhash\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 SecREKeyJumblE\"\n\n        fabric:\n          mode: spine-leaf # \"spine-leaf\" or \"collapsed-core\"\n\n          defaultSwitchUsers:\n            admin: # at least one user with name 'admin' and role 'admin'\n              role: admin\n              #password: \"$5$8nAYPGcl4...\" # password hash\n              #authorizedKeys: # optional SSH authorized keys\n              #  - \"ssh-ed25519 AAAAC3Nza...\"\n            op: # optional read-only user\n              role: operator\n              #password: \"$5$8nAYPGcl4...\" # password hash\n              #authorizedKeys: # optional SSH authorized keys\n              #  - \"ssh-ed25519 AAAAC3Nza...\"\n

      The control node(s) user is always named core.

      The operator role grants read-only access to the sonic-cli command on the switches. In order to avoid conflicts, do not use the following usernames: operator, hhagent, netops.

      "},{"location":"install-upgrade/config/#ntp-and-dhcp","title":"NTP and DHCP","text":"

      The control node uses public NTP servers from Cloudflare and Google by default. The control node runs a DHCP server on the management network. See the example file.
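      The default NTP servers can be overridden in fab.yaml, as shown in the complete example file later in this section; a minimal sketch of that fragment:

```yaml
spec:
  config:
    control:
      ntpServers: # defaults shown; replace with your own servers if needed
        - time.cloudflare.com
        - time1.google.com
```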

      "},{"location":"install-upgrade/config/#control-node","title":"Control Node","text":"

      The control node is the host that manages all the switches, runs k3s, and serves images. This is the YAML document that configures the control node:

      apiVersion: fabricator.githedgehog.com/v1beta1\nkind: ControlNode\nmetadata:\n  name: control-1\n  namespace: fab\nspec:\n  bootstrap:\n   disk: \"/dev/sda\" # disk to install OS on, e.g. \"sda\" or \"nvme0n1\"\n  external:\n    interface: enp2s0 # interface for external\n    ip: dhcp # IP address for external interface\n  management:\n    interface: enp2s1 # interface for management\n\n# Currently only one ControlNode is supported\n
      The management interface is for the control node to manage the fabric switches, not end-user management of the control node. For end-user management of the control node specify the external interface name.

      "},{"location":"install-upgrade/config/#forward-switch-metrics-and-logs","title":"Forward switch metrics and logs","text":"

      There is an option to enable Grafana Alloy on all switches to forward metrics and logs to the configured targets using the Prometheus Remote-Write API and the Loki API. If those APIs are available from the control node(s), but not from the switches, it's possible to enable an HTTP proxy on the control node(s) that Grafana Alloy running on the switches will use to access the configured targets. This can be done by passing --control-proxy=true to hhfab init.

      Metrics include port speeds, counters, errors, operational status, transceivers, fans, power supplies, temperature sensors, BGP neighbors, LLDP neighbors, and more. Logs include agent logs.

      Configuring the exporters and targets is currently only possible by editing the fab.yaml configuration file. An example configuration is provided below:

      spec:\n  config:\n      ...\n      defaultAlloyConfig:\n        agentScrapeIntervalSeconds: 120\n        unixScrapeIntervalSeconds: 120\n        unixExporterEnabled: true\n        lokiTargets:\n          grafana_cloud: # target name, multiple targets can be configured\n              basicAuth: # optional\n                  password: \"<password>\"\n                  username: \"<username>\"\n              labels: # labels to be added to all logs\n                  env: env-1\n              url: https://logs-prod-021.grafana.net/loki/api/v1/push\n              useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy\n        prometheusTargets:\n          grafana_cloud: # target name, multiple targets can be configured\n              basicAuth: # optional\n                  password: \"<password>\"\n                  username: \"<username>\"\n              labels: # labels to be added to all metrics\n                  env: env-1\n              sendIntervalSeconds: 120\n              url: https://prometheus-prod-36-prod-us-west-0.grafana.net/api/prom/push\n              useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy\n              unixExporterCollectors: # list of node-exporter collectors to enable, https://grafana.com/docs/alloy/latest/reference/components/prometheus.exporter.unix/#collectors-list\n                  - cpu\n                  - filesystem\n                  - loadavg\n                  - meminfo\n              collectSyslogEnabled: true # collect /var/log/syslog on switches and forward to the lokiTargets\n

      For additional options, see the AlloyConfig struct in Fabric repo.

      "},{"location":"install-upgrade/config/#complete-example-file","title":"Complete Example File","text":"
      apiVersion: fabricator.githedgehog.com/v1beta1\nkind: Fabricator\nmetadata:\n  name: default\n  namespace: fab\nspec:\n  config:\n    control:\n      tlsSAN: # IPs and DNS names to access API\n        - \"customer.site.io\"\n\n      ntpServers:\n      - time.cloudflare.com\n      - time1.google.com\n\n      defaultUser: # user 'core' on all control nodes\n        password: \"hash...\" # password hash\n        authorizedKeys:\n          - \"ssh-ed25519 hash...\"\n\n    fabric:\n      mode: spine-leaf # \"spine-leaf\" or \"collapsed-core\"\n      includeONIE: true\n      defaultSwitchUsers:\n        admin: # at least one user with name 'admin' and role 'admin'\n          role: admin\n          password: \"hash...\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 hash...\"\n        op: # optional read-only user\n          role: operator\n          password: \"hash...\" # password hash\n          authorizedKeys:\n            - \"ssh-ed25519 hash...\"\n\n      defaultAlloyConfig:\n        agentScrapeIntervalSeconds: 120\n        unixScrapeIntervalSeconds: 120\n        unixExporterEnabled: true\n        collectSyslogEnabled: true\n        lokiTargets:\n          lab:\n            url: http://url.io:3100/loki/api/v1/push\n            useControlProxy: true\n            labels:\n              descriptive: name\n        prometheusTargets:\n          lab:\n            url: http://url.io:9100/api/v1/push\n            useControlProxy: true\n            labels:\n              descriptive: name\n            sendIntervalSeconds: 120\n\n---\napiVersion: fabricator.githedgehog.com/v1beta1\nkind: ControlNode\nmetadata:\n  name: control-1\n  namespace: fab\nspec:\n  bootstrap:\n    disk: \"/dev/sda\" # disk to install OS on, e.g. \"sda\" or \"nvme0n1\"\n  external:\n    interface: eno2 # interface for external\n    ip: dhcp # IP address for external interface\n  management:\n    interface: eno1\n\n# Currently only one ControlNode is supported\n
      "},{"location":"install-upgrade/overview/","title":"Install Fabric","text":"

      Under construction.

      "},{"location":"install-upgrade/overview/#prerequisites","title":"Prerequisites","text":"
      • A machine with access to the Internet to run Fabricator and build the installer, with at least 8 GB RAM and 25 GB of disk space
      • A 16 GB USB flash drive, if you are not using virtual media
      • A machine to function as the Fabric Control Node (see System Requirements), as well as IPMI access to it to install the OS
      • A management switch with at least one 10GbE port is recommended
      • Enough Supported Switches for your Fabric
      "},{"location":"install-upgrade/overview/#overview-of-install-process","title":"Overview of Install Process","text":"

      This section is dedicated to installing the Hedgehog Fabric on bare-metal control node(s) and switches, including their preparation and configuration. To install the VLAB, see VLAB Overview.

      Download and install hhfab following instructions from the Download section.

      The main steps to install Fabric are:

      1. Install hhfab on the machines with access to the Internet
        1. Prepare Wiring Diagram
        2. Select Fabric Configuration
        3. Build Control Node configuration and installer
      2. Install Control Node
        1. Insert USB with control-os image into Fabric Control Node
        2. Boot the node off the USB to initiate the installation
      3. Prepare Management Network
        1. Connect management switch to Fabric control node
        2. Connect 1GbE Management port of switches to management switch
      4. Prepare supported switches
        1. Ensure switch serial numbers and / or first management interface MAC addresses are recorded in wiring diagram
        2. Boot them into ONIE Install Mode to have them automatically provisioned
      "},{"location":"install-upgrade/overview/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and Installer","text":"

      Hedgehog has created a command-line utility, called hhfab, that helps generate the wiring diagram and fabric configuration, validate the supplied configurations, and generate an installation image (.img) suitable for writing to a USB flash drive or mounting via IPMI virtual media. The first hhfab command to run is hhfab init, which generates the main configuration file, fab.yaml. fab.yaml is responsible for almost every configuration of the fabric with the exception of the wiring. Each command and subcommand has a usage message; simply supply the -h flag to see the available options, for example hhfab vlab -h or hhfab vlab gen -h.

      "},{"location":"install-upgrade/overview/#hhfab-commands-to-make-a-bootable-image","title":"HHFAB commands to make a bootable image","text":"
      1. hhfab init --wiring wiring-lab.yaml
      2. The init command generates a fab.yaml file, edit the fab.yaml file for your needs
        1. ensure the correct boot disk (e.g. /dev/sda) and control node NIC names are supplied
      3. hhfab validate
      4. hhfab build

      The installer for the fabric is generated in $CWD/result/. This installation image is named control-1-install-usb.img and is 7.5 GB in size. Once the image is created, you can write it to a USB drive, or mount it via virtual media.

      "},{"location":"install-upgrade/overview/#write-usb-image-to-disk","title":"Write USB Image to Disk","text":"

      This will erase data on the USB disk.

      1. Insert the USB to your machine
      2. Identify the path to your USB stick, for example: /dev/sdc
      3. Issue the command to write the image to the USB drive
        • sudo dd if=control-1-install-usb.img of=/dev/sdc bs=4k status=progress

      There are utilities that assist this process such as etcher.

      "},{"location":"install-upgrade/overview/#install-control-node","title":"Install Control Node","text":"

      The control node should be given a static IP address, either via a static DHCP lease or statically assigned.

      1. Configure the server to use UEFI boot without secure boot

      2. Attach the image to the server either by inserting via USB, or attaching via virtual media

      3. Select boot off of the attached media, the installation process is automated

      4. Once the control node has booted, it logs in automatically and begins the installation process

        1. Optionally use journalctl -f -u flatcar-install.service to monitor progress
      5. Once the installation is complete, the system automatically reboots.

      6. After the system has shut down, but before the boot-up process reaches the operating system, remove the USB image from the system. Removal during the UEFI boot screen is acceptable.

      7. Upon booting into the freshly installed system, the fabric installation will automatically begin

        1. If the insecure --dev flag was passed to hhfab init, the password for the core user is HHFab.Admin!, and the switches have two users created: admin and op. admin has administrator privileges and the password HHFab.Admin!, whereas op is a read-only, non-sudo user with the password HHFab.Op!.
        2. Optionally this can be monitored with journalctl -f -u fabric-install.service
      8. The install is complete when the log emits \"Control Node installation complete\". Additionally, systemctl status will show inactive (dead), indicating that the executable has finished.

      "},{"location":"install-upgrade/overview/#configure-management-network","title":"Configure Management Network","text":"

      The control node is dual-homed. It has a 10GbE interface that connects to the management network. The other link, called external in the fab.yaml file, is for the customer to access the control node. The management network is used for command and control of the switches that comprise the fabric. It can be a simple broadcast domain with layer 2 connectivity. The control node runs DHCP and small HTTP servers on it. The management network is not accessible to machines or devices not associated with the fabric.

      "},{"location":"install-upgrade/overview/#fabric-manages-switches","title":"Fabric Manages Switches","text":"

      Now that the install has finished, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, all pre-installed as part of the Control Node installer.

      At this stage, the fabric hands out DHCP addresses to the switches via the management network. Optionally, you can monitor this process by going through the following steps:

      • enter k9s at the command prompt
      • use the arrow keys to select the pod named fabric-boot; the logs of the pod will be displayed, showing the DHCP lease process
      • to see the switches, type :switches (like a vim command) into k9s
      • use the heartbeat column of the switches screen to verify the connection between switch and controller

      "},{"location":"install-upgrade/requirements/","title":"System Requirements","text":""},{"location":"install-upgrade/requirements/#out-of-band-management-network","title":"Out of Band Management Network","text":"

      In order to provision and manage the switches that comprise the fabric, an out-of-band management switch must also be installed. This network is used exclusively by the control node and the fabric switches; no other access is permitted. This switch (or switches) is not managed by the fabric. It is recommended that this switch have at least one 10GbE port and that this port connect to the control node.

      "},{"location":"install-upgrade/requirements/#control-node","title":"Control Node","text":"
      • Fast SSDs for system/root are mandatory for Control Nodes
        • NVMe SSDs are recommended
        • DRAM-less NAND SSDs are not supported (e.g. Crucial BX500)
      • 10 GbE port for connection to management network is recommended
      • Minimal (non-HA) setup is a single Control Node
      • (Future) Full (HA) setup is at least 3 Control Nodes
      • (Future) Extra nodes could be used for things like Logging, Monitoring, Alerting stack, and more

      In internal testing Hedgehog uses a server with the following specifications:

      • CPU - AMD EPYC 4344P
      • Memory - 32 GiB DDR5 ECC 4800MT/s
      • Storage - PCIe Gen 4 NVMe M.2 400GB
      • Network - AOC-STG-i4S Intel X710-BM1 controller
      • Motherboard - H13SAE-MF
      "},{"location":"install-upgrade/requirements/#non-ha-minimal-setup-1-control-node","title":"Non-HA (minimal) setup - 1 Control Node","text":"
      • Control Node runs non-HA Kubernetes Control Plane installation with non-HA Hedgehog Fabric Control Plane on top of it
      • Not recommended for more than 10 devices participating in the Hedgehog Fabric or for production deployments
      • CPU: 6 (minimal) / 8 (recommended)
      • RAM: 16 GB (minimal) / 32 GB (recommended)
      • Disk: 150 GB (minimal) / 250 GB (recommended)
      "},{"location":"install-upgrade/requirements/#future-ha-setup-3-control-nodes-per-node","title":"(Future) HA setup - 3+ Control Nodes (per node)","text":"
      • Each Control Node runs part of the HA Kubernetes Control Plane installation with Hedgehog Fabric Control Plane on top of it in HA mode as well
      • Recommended for all cases where more than 10 devices participate in the Hedgehog Fabric
      • CPU: 6 (minimal) / 8 (recommended)
      • RAM: 16 GB (minimal) / 32 GB (recommended)
      • Disk: 150 GB (minimal) / 250 GB (recommended)
      "},{"location":"install-upgrade/requirements/#reference-control-node-configuration","title":"Reference Control Node Configuration","text":"
      • AMD EPYC 4344P (8C/16T, 3.8 GHz, 32 MB L3, 65W, single socket)
      • 32 GB DDR5-4800 ECC UDIMM (2 x 16 GB)
      • Micron 7450 MAX 400GB NVMe
      "},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":"
      • (Future) Each participating device is part of the Kubernetes cluster, so it runs Kubernetes kubelet
      • Additionally, it runs the Hedgehog Fabric Agent that controls the device's configuration

      The following resources should be available on a device for it to run in the Hedgehog Fabric (in addition to what other software such as SONiC uses):

      • CPU: 1 (minimal) / 2 (recommended)
      • RAM: 1 GB (minimal) / 1.5 GB (recommended)
      • Disk: 5 GB (minimal) / 10 GB (recommended)
      "},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":"

      You can find detailed information about devices in the Switch Profiles Catalog and in the User Guide switch features and port naming.

      "},{"location":"install-upgrade/supported-devices/#spine","title":"Spine","text":"
      • Celestica DS3000
      • Celestica DS4000
      • Dell S5232F-ON
      • Edgecore DCS204 (AS7726-32X)
      • Edgecore DCS501 (AS7712-32X-EC)
      • Supermicro SSE-C4632SB
      "},{"location":"install-upgrade/supported-devices/#leaf","title":"Leaf","text":"

      (could be used for collapsed-core)

      • Celestica DS3000
      • Dell S5232F-ON
      • Dell S5248F-ON
      • Edgecore DCS203 (AS7326-56X)
      • Edgecore DCS204 (AS7726-32X)
      • Edgecore EPS203 (AS4630-54NPE)
      • Supermicro SSE-C4632SB
      "},{"location":"reference/api/","title":"API Reference","text":""},{"location":"reference/api/#packages","title":"Packages","text":"
      • agent.githedgehog.com/v1beta1
      • dhcp.githedgehog.com/v1beta1
      • vpc.githedgehog.com/v1beta1
      • wiring.githedgehog.com/v1beta1
      "},{"location":"reference/api/#agentgithedgehogcomv1beta1","title":"agent.githedgehog.com/v1beta1","text":"

      Package v1beta1 contains API Schema definitions for the agent v1beta1 API group. This is the internal API group for the switch and control node agents. Not intended to be modified by the user.

      "},{"location":"reference/api/#resource-types","title":"Resource Types","text":"
      • Agent
      "},{"location":"reference/api/#adminstatus","title":"AdminStatus","text":"

      Underlying type: string

      Appears in: - SwitchStateInterface

      Field Description `` up down testing"},{"location":"reference/api/#agent","title":"Agent","text":"

      Agent is an internal API object used by the controller to pass all relevant information to the agent running on a specific switch in order to fully configure it and manage its lifecycle. It is not intended to be used directly by users. Spec of the object isn't user-editable, it is managed by the controller. Status of the object is updated by the agent and is used by the controller to track the state of the agent and the switch it is running on. Name of the Agent object is the same as the name of the switch it is running on and it's created in the same namespace as the Switch object.

      Field Description Default Validation apiVersion string agent.githedgehog.com/v1beta1 kind string Agent metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. status AgentStatus Status is the observed state of the Agent"},{"location":"reference/api/#agentstatus","title":"AgentStatus","text":"

      AgentStatus defines the observed state of the agent running on a specific switch and includes information about the switch itself as well as the state of the agent and applied configuration.

      Appears in: - Agent

      Field Description Default Validation version string Current running agent version installID string ID of the agent installation, used to track NOS re-installs runID string ID of the agent run, used to track NOS reboots lastHeartbeat Time Time of the last heartbeat from the agent lastAttemptTime Time Time of the last attempt to apply configuration lastAttemptGen integer Generation of the last attempt to apply configuration lastAppliedTime Time Time of the last successful configuration application lastAppliedGen integer Generation of the last successful configuration application state SwitchState Detailed switch state updated with each heartbeat conditions Condition array Conditions of the agent, includes readiness marker for use with kubectl wait"},{"location":"reference/api/#bgpmessages","title":"BGPMessages","text":"

      Appears in: - SwitchStateBGPNeighbor

      Field Description Default Validation received BGPMessagesCounters sent BGPMessagesCounters"},{"location":"reference/api/#bgpmessagescounters","title":"BGPMessagesCounters","text":"

      Appears in: - BGPMessages

      Field Description Default Validation capability integer keepalive integer notification integer open integer routeRefresh integer update integer"},{"location":"reference/api/#bgpneighborsessionstate","title":"BGPNeighborSessionState","text":"

      Underlying type: string

      Appears in: - SwitchStateBGPNeighbor

      Field Description `` idle connect active openSent openConfirm established"},{"location":"reference/api/#bgppeertype","title":"BGPPeerType","text":"

      Underlying type: string

      Appears in: - SwitchStateBGPNeighbor

      Field Description `` internal external"},{"location":"reference/api/#operstatus","title":"OperStatus","text":"

      Underlying type: string

      Appears in: - SwitchStateInterface

      Field Description `` up down testing unknown dormant notPresent lowerLayerDown"},{"location":"reference/api/#switchstate","title":"SwitchState","text":"

      Appears in: - AgentStatus

      Field Description Default Validation nos SwitchStateNOS Information about the switch and NOS interfaces object (keys:string, values:SwitchStateInterface) Switch interfaces state (incl. physical, management and port channels) breakouts object (keys:string, values:SwitchStateBreakout) Breakout ports state (port -> breakout state) bgpNeighbors object (keys:string, values:map[string]SwitchStateBGPNeighbor) State of all BGP neighbors (VRF -> neighbor address -> state) platform SwitchStatePlatform State of the switch platform (fans, PSUs, sensors) criticalResources SwitchStateCRM State of the critical resources (ACLs, routes, etc.)"},{"location":"reference/api/#switchstatebgpneighbor","title":"SwitchStateBGPNeighbor","text":"

      Appears in: - SwitchState

      Field Description Default Validation connectionsDropped integer enabled boolean establishedTransitions integer lastEstablished Time lastRead Time lastResetReason string lastResetTime Time lastWrite Time localAS integer messages BGPMessages peerAS integer peerGroup string peerPort integer peerType BGPPeerType remoteRouterID string sessionState BGPNeighborSessionState shutdownMessage string prefixes object (keys:string, values:SwitchStateBGPNeighborPrefixes)"},{"location":"reference/api/#switchstatebgpneighborprefixes","title":"SwitchStateBGPNeighborPrefixes","text":"

      Appears in: - SwitchStateBGPNeighbor

      Field Description Default Validation received integer receivedPrePolicy integer sent integer"},{"location":"reference/api/#switchstatebreakout","title":"SwitchStateBreakout","text":"

      Appears in: - SwitchState

      Field Description Default Validation mode string nosMembers string array status string"},{"location":"reference/api/#switchstatecrm","title":"SwitchStateCRM","text":"

      Appears in: - SwitchState

      Field Description Default Validation aclStats SwitchStateCRMACLStats stats SwitchStateCRMStats"},{"location":"reference/api/#switchstatecrmacldetails","title":"SwitchStateCRMACLDetails","text":"

      Appears in: - SwitchStateCRMACLInfo

      Field Description Default Validation groupsAvailable integer groupsUsed integer tablesAvailable integer tablesUsed integer"},{"location":"reference/api/#switchstatecrmaclinfo","title":"SwitchStateCRMACLInfo","text":"

      Appears in: - SwitchStateCRMACLStats

      Field Description Default Validation lag SwitchStateCRMACLDetails port SwitchStateCRMACLDetails rif SwitchStateCRMACLDetails switch SwitchStateCRMACLDetails vlan SwitchStateCRMACLDetails"},{"location":"reference/api/#switchstatecrmaclstats","title":"SwitchStateCRMACLStats","text":"

      Appears in: - SwitchStateCRM

      Field Description Default Validation egress SwitchStateCRMACLInfo ingress SwitchStateCRMACLInfo"},{"location":"reference/api/#switchstatecrmstats","title":"SwitchStateCRMStats","text":"

      Appears in: - SwitchStateCRM

      Field Description Default Validation dnatEntriesAvailable integer dnatEntriesUsed integer fdbEntriesAvailable integer fdbEntriesUsed integer ipmcEntriesAvailable integer ipmcEntriesUsed integer ipv4NeighborsAvailable integer ipv4NeighborsUsed integer ipv4NexthopsAvailable integer ipv4NexthopsUsed integer ipv4RoutesAvailable integer ipv4RoutesUsed integer ipv6NeighborsAvailable integer ipv6NeighborsUsed integer ipv6NexthopsAvailable integer ipv6NexthopsUsed integer ipv6RoutesAvailable integer ipv6RoutesUsed integer nexthopGroupMembersAvailable integer nexthopGroupMembersUsed integer nexthopGroupsAvailable integer nexthopGroupsUsed integer snatEntriesAvailable integer snatEntriesUsed integer"},{"location":"reference/api/#switchstateinterface","title":"SwitchStateInterface","text":"

      Appears in: - SwitchState

      Field Description Default Validation enabled boolean adminStatus AdminStatus operStatus OperStatus mac string lastChanged Time speed string counters SwitchStateInterfaceCounters transceiver SwitchStateTransceiver lldpNeighbors SwitchStateLLDPNeighbor array"},{"location":"reference/api/#switchstateinterfacecounters","title":"SwitchStateInterfaceCounters","text":"

      Appears in: - SwitchStateInterface

      Field Description Default Validation inBitsPerSecond float inDiscards integer inErrors integer inPktsPerSecond float inUtilization integer lastClear Time outBitsPerSecond float outDiscards integer outErrors integer outPktsPerSecond float outUtilization integer"},{"location":"reference/api/#switchstatelldpneighbor","title":"SwitchStateLLDPNeighbor","text":"

      Appears in: - SwitchStateInterface

      Field Description Default Validation chassisID string systemName string systemDescription string portID string portDescription string manufacturer string model string serialNumber string"},{"location":"reference/api/#switchstatenos","title":"SwitchStateNOS","text":"

      SwitchStateNOS contains information about the switch and NOS received from the switch itself by the agent

      Appears in: - SwitchState

      Field Description Default Validation asicVersion string ASIC name, such as \"broadcom\" or \"vs\" buildCommit string NOS build commit buildDate string NOS build date builtBy string NOS build user configDbVersion string NOS config DB version, such as \"version_4_2_1\" distributionVersion string Distribution version, such as \"Debian 10.13\" hardwareVersion string Hardware version, such as \"X01\" hwskuVersion string Hwsku version, such as \"DellEMC-S5248f-P-25G-DPB\" kernelVersion string Kernel version, such as \"5.10.0-21-amd64\" mfgName string Manufacturer name, such as \"Dell EMC\" platformName string Platform name, such as \"x86_64-dellemc_s5248f_c3538-r0\" productDescription string NOS product description, such as \"Enterprise SONiC Distribution by Broadcom - Enterprise Base package\" productVersion string NOS product version, empty for Broadcom SONiC serialNumber string Switch serial number softwareVersion string NOS software version, such as \"4.2.0-Enterprise_Base\" uptime string Switch uptime, such as \"21:21:27 up 1 day, 23:26, 0 users, load average: 1.92, 1.99, 2.00 \""},{"location":"reference/api/#switchstateplatform","title":"SwitchStatePlatform","text":"

      Appears in: - SwitchState

      Field Description Default Validation fans object (keys:string, values:SwitchStatePlatformFan) psus object (keys:string, values:SwitchStatePlatformPSU) temperature object (keys:string, values:SwitchStatePlatformTemperature)"},{"location":"reference/api/#switchstateplatformfan","title":"SwitchStatePlatformFan","text":"

      Appears in: - SwitchStatePlatform

      Field Description Default Validation direction string speed float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformpsu","title":"SwitchStatePlatformPSU","text":"

      Appears in: - SwitchStatePlatform

      Field Description Default Validation inputCurrent float inputPower float inputVoltage float outputCurrent float outputPower float outputVoltage float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformtemperature","title":"SwitchStatePlatformTemperature","text":"

      Appears in: - SwitchStatePlatform

      Field Description Default Validation temperature float alarms string highThreshold float criticalHighThreshold float lowThreshold float criticalLowThreshold float"},{"location":"reference/api/#switchstatetransceiver","title":"SwitchStateTransceiver","text":"

      Appears in: - SwitchStateInterface

      Field Description Default Validation description string cableClass string formFactor string connectorType string present string cableLength float operStatus string temperature float voltage float serialNumber string vendor string vendorPart string vendorOUI string vendorRev string"},{"location":"reference/api/#dhcpgithedgehogcomv1beta1","title":"dhcp.githedgehog.com/v1beta1","text":"

Package v1beta1 contains API Schema definitions for the dhcp v1beta1 API group. It is the primary internal API group for the Hedgehog DHCP server configuration and for storing leases, as well as making them available to the end user through the API. Not intended to be modified by the user.

      "},{"location":"reference/api/#resource-types_1","title":"Resource Types","text":"
      • DHCPSubnet
      "},{"location":"reference/api/#dhcpallocated","title":"DHCPAllocated","text":"

DHCPAllocated is a single allocated IP with expiry time and hostname from DHCP requests; it's effectively a DHCP lease

      Appears in: - DHCPSubnetStatus

      Field Description Default Validation ip string Allocated IP address expiry Time Expiry time of the lease hostname string Hostname from DHCP request"},{"location":"reference/api/#dhcpsubnet","title":"DHCPSubnet","text":"

DHCPSubnet is the configuration (spec) for the Hedgehog DHCP server and storage for the leases (status). It's a primary internal API group, but it makes allocated IP / lease information available to the end user through the API. Not intended to be modified by the user.

      Field Description Default Validation apiVersion string dhcp.githedgehog.com/v1beta1 kind string DHCPSubnet metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec DHCPSubnetSpec Spec is the desired state of the DHCPSubnet status DHCPSubnetStatus Status is the observed state of the DHCPSubnet"},{"location":"reference/api/#dhcpsubnetspec","title":"DHCPSubnetSpec","text":"

      DHCPSubnetSpec defines the desired state of DHCPSubnet

      Appears in: - DHCPSubnet

      Field Description Default Validation subnet string Full VPC subnet name (including VPC name), such as \"vpc-0/default\" cidrBlock string CIDR block to use for VPC subnet, such as \"10.10.10.0/24\" gateway string Gateway, such as 10.10.10.1 startIP string Start IP from the CIDRBlock to allocate IPs, such as 10.10.10.10 endIP string End IP from the CIDRBlock to allocate IPs, such as 10.10.10.99 vrf string VRF name to identify specific VPC (will be added to DHCP packets by DHCP relay in suboption 151), such as \"VrfVvpc-1\" as it's named on switch circuitID string VLAN ID to identify specific subnet within the VPC, such as \"Vlan1000\" as it's named on switch pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option. defaultURL string DefaultURL (optional) is the option 114 \"default-url\" to be sent to the clients"},{"location":"reference/api/#dhcpsubnetstatus","title":"DHCPSubnetStatus","text":"

      DHCPSubnetStatus defines the observed state of DHCPSubnet

      Appears in: - DHCPSubnet

      Field Description Default Validation allocated object (keys:string, values:DHCPAllocated) Allocated is a map of allocated IPs with expiry time and hostname from DHCP requests"},{"location":"reference/api/#vpcgithedgehogcomv1beta1","title":"vpc.githedgehog.com/v1beta1","text":"

Package v1beta1 contains API Schema definitions for the vpc v1beta1 API group. It is a public API group for the VPC and External APIs. Intended to be used by the user.

      "},{"location":"reference/api/#resource-types_2","title":"Resource Types","text":"
      • External
      • ExternalAttachment
      • ExternalPeering
      • IPv4Namespace
      • VPC
      • VPCAttachment
      • VPCPeering
      "},{"location":"reference/api/#external","title":"External","text":"

External object represents an external system connected to the Fabric and available to the specific IPv4Namespace. Users can do external peering with the external system by specifying the name of the External object, without needing to worry about the details of how the external system is attached to the Fabric.
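For illustration, a minimal External manifest might look like the following (the object name is hypothetical; the community values reuse the examples from the ExternalSpec field table below):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: External
metadata:
  name: external-1  # hypothetical name
spec:
  ipv4Namespace: default          # IPv4Namespace this External belongs to
  inboundCommunity: 65102:5000    # filter routes coming from the external system
  outboundCommunity: 50000:50001  # stamped on all outbound routes
```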

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string External metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalSpec Spec is the desired state of the External status ExternalStatus Status is the observed state of the External"},{"location":"reference/api/#externalattachment","title":"ExternalAttachment","text":"

ExternalAttachment is a definition of how a specific switch is connected to an external system (External object). Effectively it represents BGP peering between the switch and the external system, including all needed configuration.

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalAttachmentSpec Spec is the desired state of the ExternalAttachment status ExternalAttachmentStatus Status is the observed state of the ExternalAttachment"},{"location":"reference/api/#externalattachmentneighbor","title":"ExternalAttachmentNeighbor","text":"

      ExternalAttachmentNeighbor defines the BGP neighbor configuration for the external attachment

      Appears in: - ExternalAttachmentSpec

      Field Description Default Validation asn integer ASN is the ASN of the BGP neighbor ip string IP is the IP address of the BGP neighbor to peer with"},{"location":"reference/api/#externalattachmentspec","title":"ExternalAttachmentSpec","text":"

      ExternalAttachmentSpec defines the desired state of ExternalAttachment

      Appears in: - ExternalAttachment

      Field Description Default Validation external string External is the name of the External object this attachment belongs to connection string Connection is the name of the Connection object this attachment belongs to (essentially the name of the switch/port) switch ExternalAttachmentSwitch Switch is the switch port configuration for the external attachment neighbor ExternalAttachmentNeighbor Neighbor is the BGP neighbor configuration for the external attachment"},{"location":"reference/api/#externalattachmentstatus","title":"ExternalAttachmentStatus","text":"

      ExternalAttachmentStatus defines the observed state of ExternalAttachment

      Appears in: - ExternalAttachment

      "},{"location":"reference/api/#externalattachmentswitch","title":"ExternalAttachmentSwitch","text":"

      ExternalAttachmentSwitch defines the switch port configuration for the external attachment

      Appears in: - ExternalAttachmentSpec

      Field Description Default Validation vlan integer VLAN (optional) is the VLAN ID used for the subinterface on a switch port specified in the connection, set to 0 if no VLAN is used ip string IP is the IP address of the subinterface on a switch port specified in the connection"},{"location":"reference/api/#externalpeering","title":"ExternalPeering","text":"

      ExternalPeering is the Schema for the externalpeerings API
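Based on the permit structure described below (VPC side with subnets, External side with prefixes), a sketch of an ExternalPeering manifest might be (object and VPC/External names are hypothetical):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: vpc-1--external-1  # hypothetical name
spec:
  permit:
    vpc:
      name: vpc-1        # VPC to peer with
      subnets:
        - default        # VPC subnets advertised to the External
    external:
      name: external-1   # External to peer with
      prefixes:
        - prefix: 0.0.0.0/0  # permit any route, including the default route
```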

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalPeeringSpec Spec is the desired state of the ExternalPeering status ExternalPeeringStatus Status is the observed state of the ExternalPeering"},{"location":"reference/api/#externalpeeringspec","title":"ExternalPeeringSpec","text":"

      ExternalPeeringSpec defines the desired state of ExternalPeering

      Appears in: - ExternalPeering

      Field Description Default Validation permit ExternalPeeringSpecPermit Permit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit"},{"location":"reference/api/#externalpeeringspecexternal","title":"ExternalPeeringSpecExternal","text":"

      ExternalPeeringSpecExternal defines the External-side of the configuration to peer with

      Appears in: - ExternalPeeringSpecPermit

      Field Description Default Validation name string Name is the name of the External to peer with prefixes ExternalPeeringSpecPrefix array Prefixes is the list of prefixes to permit from the External to the VPC"},{"location":"reference/api/#externalpeeringspecpermit","title":"ExternalPeeringSpecPermit","text":"

      ExternalPeeringSpecPermit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit

      Appears in: - ExternalPeeringSpec

      Field Description Default Validation vpc ExternalPeeringSpecVPC VPC is the VPC-side of the configuration to peer with external ExternalPeeringSpecExternal External is the External-side of the configuration to peer with"},{"location":"reference/api/#externalpeeringspecprefix","title":"ExternalPeeringSpecPrefix","text":"

      ExternalPeeringSpecPrefix defines the prefix to permit from the External to the VPC

      Appears in: - ExternalPeeringSpecExternal

Field Description Default Validation prefix string Prefix is the subnet to permit from the External to the VPC, e.g. 0.0.0.0/0 for any route including the default route. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one."},{"location":"reference/api/#externalpeeringspecvpc","title":"ExternalPeeringSpecVPC","text":"

      ExternalPeeringSpecVPC defines the VPC-side of the configuration to peer with

      Appears in: - ExternalPeeringSpecPermit

      Field Description Default Validation name string Name is the name of the VPC to peer with subnets string array Subnets is the list of subnets to advertise from VPC to the External"},{"location":"reference/api/#externalpeeringstatus","title":"ExternalPeeringStatus","text":"

      ExternalPeeringStatus defines the observed state of ExternalPeering

      Appears in: - ExternalPeering

      "},{"location":"reference/api/#externalspec","title":"ExternalSpec","text":"

ExternalSpec describes the IPv4 namespace the External belongs to and the inbound/outbound communities which are used to filter routes from/to the external system.

      Appears in: - External

Field Description Default Validation ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this External belongs to inboundCommunity string InboundCommunity is the inbound community to filter routes from the external system (e.g. 65102:5000) outboundCommunity string OutboundCommunity is the outbound community that all outbound routes will be stamped with (e.g. 50000:50001)"},{"location":"reference/api/#externalstatus","title":"ExternalStatus","text":"

      ExternalStatus defines the observed state of External

      Appears in: - External

      "},{"location":"reference/api/#ipv4namespace","title":"IPv4Namespace","text":"

IPv4Namespace represents a namespace for VPC subnet allocation. All VPC subnets within a single IPv4Namespace are non-overlapping. Users can create multiple IPv4Namespaces to allocate the same VPC subnets.
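A minimal IPv4Namespace sketch, using a hypothetical parent range from which VPC subnets would be carved:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: default
spec:
  subnets:           # 1-20 non-overlapping subnets to allocate VPC subnets from
    - 10.0.0.0/16    # hypothetical range
```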

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string IPv4Namespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec IPv4NamespaceSpec Spec is the desired state of the IPv4Namespace status IPv4NamespaceStatus Status is the observed state of the IPv4Namespace"},{"location":"reference/api/#ipv4namespacespec","title":"IPv4NamespaceSpec","text":"

      IPv4NamespaceSpec defines the desired state of IPv4Namespace

      Appears in: - IPv4Namespace

Field Description Default Validation subnets string array Subnets is the list of subnets to allocate VPC subnets from; they can't overlap with each other or with Fabric reserved subnets MaxItems: 20 MinItems: 1"},{"location":"reference/api/#ipv4namespacestatus","title":"IPv4NamespaceStatus","text":"

      IPv4NamespaceStatus defines the observed state of IPv4Namespace

      Appears in: - IPv4Namespace

      "},{"location":"reference/api/#vpc","title":"VPC","text":"

VPC is a Virtual Private Cloud; similar to a public cloud VPC, it provides an isolated private network for the resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.
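Putting the VPCSpec, VPCSubnet, and VPCDHCP fields together, a sketch of a single-subnet VPC with DHCP enabled might look like this (names, CIDR, and VLAN are hypothetical):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1  # hypothetical name
spec:
  ipv4Namespace: default   # defaults to "default" if omitted
  vlanNamespace: default   # defaults to "default" if omitted
  subnets:
    default:                   # subnet name (map key)
      subnet: 10.0.1.0/24      # must belong to the IPv4Namespace
      gateway: 10.0.1.1        # optional; first IP is used if omitted
      vlan: 1001               # must belong to the VLANNamespace
      dhcp:
        enable: true
        range:
          start: 10.0.1.10
          end: 10.0.1.99
```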

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPC metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCSpec Spec is the desired state of the VPC status VPCStatus Status is the observed state of the VPC"},{"location":"reference/api/#vpcattachment","title":"VPCAttachment","text":"

      VPCAttachment is the Schema for the vpcattachments API
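Following the VPCAttachmentSpec fields below, a sketch of attaching a VPC subnet to a connection might be (both names are hypothetical):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  name: vpc-1-default--server-1  # hypothetical name
spec:
  subnet: vpc-1/default                      # full VPC subnet name
  connection: server-1--unbundled--leaf-1    # hypothetical Connection name
```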

      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCAttachmentSpec Spec is the desired state of the VPCAttachment status VPCAttachmentStatus Status is the observed state of the VPCAttachment"},{"location":"reference/api/#vpcattachmentspec","title":"VPCAttachmentSpec","text":"

      VPCAttachmentSpec defines the desired state of VPCAttachment

      Appears in: - VPCAttachment

      Field Description Default Validation subnet string Subnet is the full name of the VPC subnet to attach to, such as \"vpc-1/default\" connection string Connection is the name of the connection to attach to the VPC nativeVLAN boolean NativeVLAN is the flag to indicate if the native VLAN should be used for attaching the VPC subnet"},{"location":"reference/api/#vpcattachmentstatus","title":"VPCAttachmentStatus","text":"

      VPCAttachmentStatus defines the observed state of VPCAttachment

      Appears in: - VPCAttachment

      "},{"location":"reference/api/#vpcdhcp","title":"VPCDHCP","text":"

      VPCDHCP defines the on-demand DHCP configuration for the subnet

      Appears in: - VPCSubnet

      Field Description Default Validation relay string Relay is the DHCP relay IP address, if specified, DHCP server will be disabled enable boolean Enable enables DHCP server for the subnet range VPCDHCPRange Range (optional) is the DHCP range for the subnet if DHCP server is enabled options VPCDHCPOptions Options (optional) is the DHCP options for the subnet if DHCP server is enabled"},{"location":"reference/api/#vpcdhcpoptions","title":"VPCDHCPOptions","text":"

      VPCDHCPOptions defines the DHCP options for the subnet if DHCP server is enabled

      Appears in: - VPCDHCP

      Field Description Default Validation pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option."},{"location":"reference/api/#vpcdhcprange","title":"VPCDHCPRange","text":"

      VPCDHCPRange defines the DHCP range for the subnet if DHCP server is enabled

      Appears in: - VPCDHCP

      Field Description Default Validation start string Start is the start IP address of the DHCP range end string End is the end IP address of the DHCP range"},{"location":"reference/api/#vpcpeer","title":"VPCPeer","text":"

      Appears in: - VPCPeeringSpec

      Field Description Default Validation subnets string array Subnets is the list of subnets to advertise from current VPC to the peer VPC MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeering","title":"VPCPeering","text":"

      VPCPeering represents a peering between two VPCs with corresponding filtering rules. Minimal example of the VPC peering showing vpc-1 to vpc-2 peering with all subnets allowed:

      spec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n
      Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCPeeringSpec Spec is the desired state of the VPCPeering status VPCPeeringStatus Status is the observed state of the VPCPeering"},{"location":"reference/api/#vpcpeeringspec","title":"VPCPeeringSpec","text":"

      VPCPeeringSpec defines the desired state of VPCPeering

      Appears in: - VPCPeering

      Field Description Default Validation remote string permit map[string]VPCPeer array Permit defines a list of the peering policies - which VPC subnets will have access to the peer VPC subnets. MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeeringstatus","title":"VPCPeeringStatus","text":"

      VPCPeeringStatus defines the observed state of VPCPeering

      Appears in: - VPCPeering

      "},{"location":"reference/api/#vpcspec","title":"VPCSpec","text":"

      VPCSpec defines the desired state of VPC. At least one subnet is required.

      Appears in: - VPC

Field Description Default Validation subnets object (keys:string, values:VPCSubnet) Subnets is the list of VPC subnets to configure ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this VPC belongs to (if not specified, \"default\" is used) vlanNamespace string VLANNamespace is the name of the VLANNamespace this VPC belongs to (if not specified, \"default\" is used) defaultIsolated boolean DefaultIsolated sets default behavior for isolated mode for the subnets (disabled by default) defaultRestricted boolean DefaultRestricted sets default behavior for restricted mode for the subnets (disabled by default) permit string array array Permit defines a list of the access policies between the subnets within the VPC - each policy is a list of subnets that have access to each other. It's applied on top of the subnet isolation flag: if a subnet isn't isolated, it's not required to have it in a permit list, while if the VPC is marked as isolated, it's required to have it in a permit list to have access to other subnets. staticRoutes VPCStaticRoute array StaticRoutes is the list of additional static routes for the VPC"},{"location":"reference/api/#vpcstaticroute","title":"VPCStaticRoute","text":"

      VPCStaticRoute defines the static route for the VPC

      Appears in: - VPCSpec

      Field Description Default Validation prefix string Prefix for the static route (mandatory), e.g. 10.42.0.0/24 nextHops string array NextHops for the static route (at least one is required), e.g. 10.99.0.0"},{"location":"reference/api/#vpcstatus","title":"VPCStatus","text":"

      VPCStatus defines the observed state of VPC

      Appears in: - VPC

      "},{"location":"reference/api/#vpcsubnet","title":"VPCSubnet","text":"

      VPCSubnet defines the VPC subnet configuration

      Appears in: - VPCSpec

      Field Description Default Validation subnet string Subnet is the subnet CIDR block, such as \"10.0.0.0/24\", should belong to the IPv4Namespace and be unique within the namespace gateway string Gateway (optional) for the subnet, if not specified, the first IP (e.g. 10.0.0.1) in the subnet is used as the gateway dhcp VPCDHCP DHCP is the on-demand DHCP configuration for the subnet vlan integer VLAN is the VLAN ID for the subnet, should belong to the VLANNamespace and be unique within the namespace isolated boolean Isolated is the flag to enable isolated mode for the subnet which means no access to and from the other subnets within the VPC restricted boolean Restricted is the flag to enable restricted mode for the subnet which means no access between hosts within the subnet itself"},{"location":"reference/api/#wiringgithedgehogcomv1beta1","title":"wiring.githedgehog.com/v1beta1","text":"

Package v1beta1 contains API Schema definitions for the wiring v1beta1 API group. It is a public API group mainly for the underlay definition, including Switches, Servers, and the wiring between them. Intended to be used by the user.

      "},{"location":"reference/api/#resource-types_3","title":"Resource Types","text":"
      • Connection
      • Server
      • Switch
      • SwitchGroup
      • SwitchProfile
      • VLANNamespace
      "},{"location":"reference/api/#baseportname","title":"BasePortName","text":"

      BasePortName defines the full name of the switch port

      Appears in: - ConnExternalLink - ConnFabricLinkSwitch - ConnStaticExternalLinkSwitch - ServerToSwitchLink - SwitchToSwitchLink

Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object."},{"location":"reference/api/#connbundled","title":"ConnBundled","text":"

      ConnBundled defines the bundled connection (port channel, single server to a single switch with multiple links)

      Appears in: - ConnectionSpec

      Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#conneslag","title":"ConnESLAG","text":"

      ConnESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)

      Appears in: - ConnectionSpec

Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is the optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connexternal","title":"ConnExternal","text":"

      ConnExternal defines the external connection (single switch to a single external device with a single link)

      Appears in: - ConnectionSpec

      Field Description Default Validation link ConnExternalLink Link is the external connection link"},{"location":"reference/api/#connexternallink","title":"ConnExternalLink","text":"

      ConnExternalLink defines the external connection link

      Appears in: - ConnExternal

      Field Description Default Validation switch BasePortName"},{"location":"reference/api/#connfabric","title":"ConnFabric","text":"

      ConnFabric defines the fabric connection (single spine to a single leaf with at least one link)

      Appears in: - ConnectionSpec

      Field Description Default Validation links FabricLink array Links is the list of spine-to-leaf links MinItems: 1"},{"location":"reference/api/#connfabriclinkswitch","title":"ConnFabricLinkSwitch","text":"

      ConnFabricLinkSwitch defines the switch side of the fabric link

      Appears in: - FabricLink

Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the fabric link (switch port configuration) Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}/([1-2]?[0-9]\\|3[0-2])$"},{"location":"reference/api/#connmclag","title":"ConnMCLAG","text":"

      ConnMCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links)

      Appears in: - ConnectionSpec

Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is the optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connmclagdomain","title":"ConnMCLAGDomain","text":"

ConnMCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch or redundancy group and allows using MCLAG connections to connect servers in a multi-homed way.

      Appears in: - ConnectionSpec

Field Description Default Validation peerLinks SwitchToSwitchLink array PeerLinks is the list of peer links between the switches, used to pass server traffic between switches MinItems: 1 sessionLinks SwitchToSwitchLink array SessionLinks is the list of session links between the switches, used only to pass MCLAG control plane and BGP traffic between switches MinItems: 1"},{"location":"reference/api/#connstaticexternal","title":"ConnStaticExternal","text":"

      ConnStaticExternal defines the static external connection (single switch to a single external device with a single link)

      Appears in: - ConnectionSpec

      Field Description Default Validation link ConnStaticExternalLink Link is the static external connection link withinVPC string WithinVPC is the optional VPC name to provision the static external connection within the VPC VRF instead of default one to make resource available to the specific VPC"},{"location":"reference/api/#connstaticexternallink","title":"ConnStaticExternalLink","text":"

      ConnStaticExternalLink defines the static external connection link

      Appears in: - ConnStaticExternal

      Field Description Default Validation switch ConnStaticExternalLinkSwitch Switch is the switch side of the static external connection link"},{"location":"reference/api/#connstaticexternallinkswitch","title":"ConnStaticExternalLinkSwitch","text":"

      ConnStaticExternalLinkSwitch defines the switch side of the static external connection link

      Appears in: - ConnStaticExternalLink

      Field Description Default Validation port string Port defines the full name of the switch port in the format of \"device/port\", such as \"spine-1/Ethernet1\". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the static external connection link (switch port configuration) Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}/([1-2]?[0-9]\\|3[0-2])$ nextHop string NextHop is the next hop IP address for static routes that will be created for the subnets Pattern: ^((25[0-5]\\|(2[0-4]\\|1\\d\\|[1-9]\\|)\\d)\\.?\\b)\\{4\\}$ subnets string array Subnets is the list of subnets that will get static routes using the specified next hop vlan integer VLAN is the optional VLAN ID to be configured on the switch port"},{"location":"reference/api/#connunbundled","title":"ConnUnbundled","text":"
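      Combining the fields above, a hedged sketch of a static external Connection could look like this. All names, addresses, and the withinVPC value are illustrative:

      ```yaml
      apiVersion: wiring.githedgehog.com/v1beta1
      kind: Connection
      metadata:
        name: leaf-01--static-external         # hypothetical name
      spec:
        staticExternal:
          withinVPC: vpc-1                     # optional: provision in this VPC's VRF
          link:
            switch:
              port: leaf-01/Ethernet5          # "device/port", SONiC port name
              ip: 192.168.100.2/24             # switch-side port IP
              nextHop: 192.168.100.1           # next hop for the static routes
              subnets:
                - 10.10.0.0/24                 # gets a static route via nextHop
              vlan: 100                        # optional VLAN on the switch port
      ```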

      ConnUnbundled defines the unbundled connection (no port channel, single server to a single switch with a single link)

      Appears in: - ConnectionSpec

      Field Description Default Validation link ServerToSwitchLink Link is the server-to-switch link mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connvpcloopback","title":"ConnVPCLoopback","text":"
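      Since ConnUnbundled is a single server-to-switch link, its sketch is the simplest of the connection types. The object name and port names below are hypothetical, and the server-side port naming is an assumption (only the switch side's \"device/port\" format is documented here):

      ```yaml
      apiVersion: wiring.githedgehog.com/v1beta1
      kind: Connection
      metadata:
        name: server-01--unbundled--leaf-01    # hypothetical name
      spec:
        unbundled:
          link:
            server:
              port: server-01/enp2s1           # assumed server-side port naming
            switch:
              port: leaf-01/Ethernet2          # "device/port", SONiC port name
      ```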

      ConnVPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) that enables the automated workaround named \"VPC Loopback\", which allows avoiding switch hardware limitations that would otherwise send traffic through the CPU in some cases

      Appears in: - ConnectionSpec

      Field Description Default Validation links SwitchToSwitchLink array Links is the list of VPC loopback links MinItems: 1"},{"location":"reference/api/#connection","title":"Connection","text":"

      A Connection object represents the logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all physical and logical connections between the devices in the Wiring Diagram. Connection type is defined by the top-level field in the ConnectionSpec. Exactly one of them can be used in a single Connection object.

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Connection metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ConnectionSpec Spec is the desired state of the Connection status ConnectionStatus Status is the observed state of the Connection"},{"location":"reference/api/#connectionspec","title":"ConnectionSpec","text":"

      ConnectionSpec defines the desired state of Connection

      Appears in: - Connection

      Field Description Default Validation unbundled ConnUnbundled Unbundled defines the unbundled connection (no port channel, single server to a single switch with a single link) bundled ConnBundled Bundled defines the bundled connection (port channel, single server to a single switch with multiple links) mclag ConnMCLAG MCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links) eslag ConnESLAG ESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links) mclagDomain ConnMCLAGDomain MCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch for server multi-homing fabric ConnFabric Fabric defines the fabric connection (single spine to a single leaf with at least one link) vpcLoopback ConnVPCLoopback VPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) for automated workaround external ConnExternal External defines the external connection (single switch to a single external device with a single link) staticExternal ConnStaticExternal StaticExternal defines the static external connection (single switch to a single external device with a single link)"},{"location":"reference/api/#connectionstatus","title":"ConnectionStatus","text":"

      ConnectionStatus defines the observed state of Connection

      Appears in: - Connection

      "},{"location":"reference/api/#fabriclink","title":"FabricLink","text":"

      FabricLink defines the fabric connection link

      Appears in: - ConnFabric

      Field Description Default Validation spine ConnFabricLinkSwitch Spine is the spine side of the fabric link leaf ConnFabricLinkSwitch Leaf is the leaf side of the fabric link"},{"location":"reference/api/#server","title":"Server","text":"

      Server is the Schema for the servers API

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Server metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ServerSpec Spec is desired state of the server status ServerStatus Status is the observed state of the server"},{"location":"reference/api/#serverfacingconnectionconfig","title":"ServerFacingConnectionConfig","text":"

      ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration

      Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

      Field Description Default Validation mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#serverspec","title":"ServerSpec","text":"

      ServerSpec defines the desired state of Server

      Appears in: - Server

      Field Description Default Validation description string Description is a description of the server profile string Profile is the profile of the server, name of the ServerProfile object to be used for this server, currently not used by the Fabric"},{"location":"reference/api/#serverstatus","title":"ServerStatus","text":"

      ServerStatus defines the observed state of Server

      Appears in: - Server

      "},{"location":"reference/api/#servertoswitchlink","title":"ServerToSwitchLink","text":"

      ServerToSwitchLink defines the server-to-switch link

      Appears in: - ConnBundled - ConnESLAG - ConnMCLAG - ConnUnbundled

      Field Description Default Validation server BasePortName Server is the server side of the connection switch BasePortName Switch is the switch side of the connection"},{"location":"reference/api/#switch","title":"Switch","text":"

      Switch is the Schema for the switches API

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string Switch metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchSpec Spec is desired state of the switch status SwitchStatus Status is the observed state of the switch"},{"location":"reference/api/#switchboot","title":"SwitchBoot","text":"

      Appears in: - SwitchSpec

      Field Description Default Validation serial string Identify switch by serial number mac string Identify switch by MAC address of the management port"},{"location":"reference/api/#switchgroup","title":"SwitchGroup","text":"

      SwitchGroup is the marker API object to group switches together; a switch can belong to multiple groups

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string SwitchGroup metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchGroupSpec Spec is the desired state of the SwitchGroup status SwitchGroupStatus Status is the observed state of the SwitchGroup"},{"location":"reference/api/#switchgroupspec","title":"SwitchGroupSpec","text":"

      SwitchGroupSpec defines the desired state of SwitchGroup

      Appears in: - SwitchGroup

      "},{"location":"reference/api/#switchgroupstatus","title":"SwitchGroupStatus","text":"

      SwitchGroupStatus defines the observed state of SwitchGroup

      Appears in: - SwitchGroup

      "},{"location":"reference/api/#switchprofile","title":"SwitchProfile","text":"

      SwitchProfile represents switch capabilities and configuration

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string SwitchProfile metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchProfileSpec status SwitchProfileStatus"},{"location":"reference/api/#switchprofileconfig","title":"SwitchProfileConfig","text":"

      Defines switch-specific configuration options

      Appears in: - SwitchProfileSpec

      Field Description Default Validation maxPathsEBGP integer MaxPathsEBGP defines the maximum number of EBGP paths to be configured"},{"location":"reference/api/#switchprofilefeatures","title":"SwitchProfileFeatures","text":"

      Defines the features supported by a specific switch, which are later used for validating roles and Fabric API feature usage

      Appears in: - SwitchProfileSpec

      Field Description Default Validation subinterfaces boolean Subinterfaces defines if switch supports subinterfaces vxlan boolean VXLAN defines if switch supports VXLANs acls boolean ACLs defines if switch supports ACLs"},{"location":"reference/api/#switchprofileport","title":"SwitchProfilePort","text":"

      Defines a switch port configuration. Only one of Profile or Group can be set

      Appears in: - SwitchProfileSpec

      Field Description Default Validation nos string NOSName defines how port is named in the NOS baseNOSName string BaseNOSName defines the base NOS name that could be used together with the profile to generate the actual NOS name (e.g. breakouts) label string Label defines the physical port label you can see on the actual switch group string If port isn't directly manageable, group defines the group it belongs to, exclusive with profile profile string If port is directly configurable, profile defines the profile it belongs to, exclusive with group management boolean Management defines if port is a management port, it's a special case and it can't have a group or profile oniePortName string OniePortName defines the ONIE port name for management ports only"},{"location":"reference/api/#switchprofileportgroup","title":"SwitchProfilePortGroup","text":"

      Defines a switch port group configuration

      Appears in: - SwitchProfileSpec

      Field Description Default Validation nos string NOSName defines how group is named in the NOS profile string Profile defines the possible configuration profile for the group, could only have speed profile"},{"location":"reference/api/#switchprofileportprofile","title":"SwitchProfilePortProfile","text":"

      Defines a switch port profile configuration

      Appears in: - SwitchProfileSpec

      Field Description Default Validation speed SwitchProfilePortProfileSpeed Speed defines the speed configuration for the profile, exclusive with breakout breakout SwitchProfilePortProfileBreakout Breakout defines the breakout configuration for the profile, exclusive with speed autoNegAllowed boolean AutoNegAllowed defines if configuring auto-negotiation is allowed for the port autoNegDefault boolean AutoNegDefault defines the default auto-negotiation state for the port"},{"location":"reference/api/#switchprofileportprofilebreakout","title":"SwitchProfilePortProfileBreakout","text":"

      Defines a switch port profile breakout configuration

      Appears in: - SwitchProfilePortProfile

      Field Description Default Validation default string Default defines the default breakout mode for the profile supported object (keys:string, values:SwitchProfilePortProfileBreakoutMode) Supported defines the supported breakout modes for the profile with the NOS name offsets"},{"location":"reference/api/#switchprofileportprofilebreakoutmode","title":"SwitchProfilePortProfileBreakoutMode","text":"

      Defines a switch port profile breakout mode configuration

      Appears in: - SwitchProfilePortProfileBreakout

      Field Description Default Validation offsets string array Offsets defines the breakout NOS port name offset from the port NOS Name for each breakout mode"},{"location":"reference/api/#switchprofileportprofilespeed","title":"SwitchProfilePortProfileSpeed","text":"

      Defines a switch port profile speed configuration

      Appears in: - SwitchProfilePortProfile

      Field Description Default Validation default string Default defines the default speed for the profile supported string array Supported defines the supported speeds for the profile"},{"location":"reference/api/#switchprofilespec","title":"SwitchProfileSpec","text":"

      SwitchProfileSpec defines the desired state of SwitchProfile

      Appears in: - SwitchProfile

      Field Description Default Validation displayName string DisplayName defines the human-readable name of the switch otherNames string array OtherNames defines alternative names for the switch features SwitchProfileFeatures Features defines the features supported by the switch config SwitchProfileConfig Config defines the switch-specific configuration options ports object (keys:string, values:SwitchProfilePort) Ports defines the switch port configuration portGroups object (keys:string, values:SwitchProfilePortGroup) PortGroups defines the switch port group configuration portProfiles object (keys:string, values:SwitchProfilePortProfile) PortProfiles defines the switch port profile configuration nosType NOSType NOSType defines the NOS type to be used for the switch platform string Platform is what is expected to be requested by ONIE and displayed in the NOS"},{"location":"reference/api/#switchprofilestatus","title":"SwitchProfileStatus","text":"

      SwitchProfileStatus defines the observed state of SwitchProfile

      Appears in: - SwitchProfile

      "},{"location":"reference/api/#switchredundancy","title":"SwitchRedundancy","text":"

      SwitchRedundancy is the switch redundancy configuration, which includes the name of the redundancy group the switch belongs to and its type, used both for MCLAG and ESLAG connections. It defines how redundancy will be configured and handled on the switch as well as which connection types will be available. If not specified, the switch will not be part of any redundancy group. If the name isn't empty, the type must be specified as well, and the name should match the name of one of the SwitchGroup objects.

      Appears in: - SwitchSpec

      Field Description Default Validation group string Group is the name of the redundancy group switch belongs to type RedundancyType Type is the type of the redundancy group, could be mclag or eslag"},{"location":"reference/api/#switchrole","title":"SwitchRole","text":"

      Underlying type: string

      SwitchRole is the role of the switch; it can be spine, server-leaf, border-leaf, mixed-leaf, or virtual-edge

      Validation: - Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge]

      Appears in: - SwitchSpec

      Field Description spine server-leaf border-leaf mixed-leaf virtual-edge"},{"location":"reference/api/#switchspec","title":"SwitchSpec","text":"

      SwitchSpec defines the desired state of Switch

      Appears in: - Switch

      Field Description Default Validation role SwitchRole Role is the role of the switch, can be spine, server-leaf, border-leaf, mixed-leaf, or virtual-edge Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge] Required: {} description string Description is a description of the switch profile string Profile is the profile of the switch, name of the SwitchProfile object to be used for this switch, currently not used by the Fabric groups string array Groups is a list of switch groups the switch belongs to redundancy SwitchRedundancy Redundancy is the switch redundancy configuration including the name of the redundancy group the switch belongs to and its type, used both for MCLAG and ESLAG connections vlanNamespaces string array VLANNamespaces is a list of VLAN namespaces the switch is part of, their VLAN ranges must not overlap asn integer ASN is the ASN of the switch ip string IP is the IP of the switch that could be used to access it from other switches and control nodes in the Fabric vtepIP string VTEPIP is the VTEP IP of the switch protocolIP string ProtocolIP is used as the BGP Router ID for switch configuration portGroupSpeeds object (keys:string, values:string) PortGroupSpeeds is a map of port group speeds, key is the port group name, value is the speed, such as '\"2\": 10G' portSpeeds object (keys:string, values:string) PortSpeeds is a map of port speeds, key is the port name, value is the speed portBreakouts object (keys:string, values:string) PortBreakouts is a map of port breakouts, key is the port name, value is the breakout configuration, such as \"1/55: 4x25G\" portAutoNegs object (keys:string, values:boolean) PortAutoNegs is a map of port auto-negotiation settings, key is the port name, value is true or false boot SwitchBoot Boot is the boot/provisioning information of the switch"},{"location":"reference/api/#switchstatus","title":"SwitchStatus","text":"
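      The fields above can be combined into a Switch object sketch like the following. Names, group membership, and the chosen speeds are illustrative; the portGroupSpeeds and portBreakouts keys reuse the formats quoted in the field descriptions:

      ```yaml
      apiVersion: wiring.githedgehog.com/v1beta1
      kind: Switch
      metadata:
        name: leaf-01                    # hypothetical name
      spec:
        role: server-leaf
        profile: dell-s5248f-on          # a profile from the Switch Profiles Catalog
        groups:
          - mclag-1                      # hypothetical SwitchGroup name
        redundancy:
          group: mclag-1                 # must match a SwitchGroup object name
          type: mclag
        vlanNamespaces:
          - default
        portGroupSpeeds:
          "2": 10G                       # port group "2" set to 10G
        portBreakouts:
          "1/55": 4x25G                  # breakout format as documented above
        portAutoNegs:
          "1/1": true                    # enable auto-negotiation on a port
      ```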

      SwitchStatus defines the observed state of Switch

      Appears in: - Switch

      "},{"location":"reference/api/#switchtoswitchlink","title":"SwitchToSwitchLink","text":"

      SwitchToSwitchLink defines the switch-to-switch link

      Appears in: - ConnMCLAGDomain - ConnVPCLoopback

      Field Description Default Validation switch1 BasePortName Switch1 is the first switch side of the connection switch2 BasePortName Switch2 is the second switch side of the connection"},{"location":"reference/api/#vlannamespace","title":"VLANNamespace","text":"

      VLANNamespace is the Schema for the vlannamespaces API

      Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1 kind string VLANNamespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VLANNamespaceSpec Spec is the desired state of the VLANNamespace status VLANNamespaceStatus Status is the observed state of the VLANNamespace"},{"location":"reference/api/#vlannamespacespec","title":"VLANNamespaceSpec","text":"

      VLANNamespaceSpec defines the desired state of VLANNamespace

      Appears in: - VLANNamespace

      Field Description Default Validation ranges VLANRange array Ranges is a list of VLAN ranges to be used in this namespace, which must not overlap with each other or with Fabric reserved VLAN ranges MaxItems: 20 MinItems: 1"},{"location":"reference/api/#vlannamespacestatus","title":"VLANNamespaceStatus","text":"
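      A minimal VLANNamespace sketch based on the spec above. The VLANRange fields (from/to) are an assumption, as that type isn't detailed in this reference, and the VLAN numbers are illustrative:

      ```yaml
      apiVersion: wiring.githedgehog.com/v1beta1
      kind: VLANNamespace
      metadata:
        name: default
      spec:
        ranges:                          # 1..20 non-overlapping ranges
          - from: 1000                   # assumed VLANRange field names
            to: 2999
      ```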

      VLANNamespaceStatus defines the observed state of VLANNamespace

      Appears in: - VLANNamespace

      "},{"location":"reference/cli/","title":"Fabric CLI","text":"

      Under construction.

      Currently Fabric CLI is represented by a kubectl plugin kubectl-fabric automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client that allows managing Fabric resources in a more convenient way. Fabric CLI only provides a subset of the functionality available via the Fabric API and is focused on simplifying object creation and some manipulation of already existing objects, while the main get/list/update operations are expected to be done using kubectl.

      core@control-1 ~ $ kubectl fabric\nNAME:\n   hhfctl - Hedgehog Fabric user client\n\nUSAGE:\n   hhfctl [global options] command [command options] [arguments...]\n\nVERSION:\n   v0.23.0\n\nCOMMANDS:\n   vpc                VPC commands\n   switch, sw, agent  Switch/Agent commands\n   connection, conn   Connection commands\n   switchgroup, sg    SwitchGroup commands\n   external           External commands\n   help, h            Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n   --verbose, -v  verbose output (includes debug) (default: true)\n   --help, -h     show help\n   --version, -V  print the version\n
      "},{"location":"reference/cli/#vpc","title":"VPC","text":"

      Create a VPC named vpc-1 with subnet 10.0.1.0/24 and VLAN 1001, with DHCP enabled and an optional DHCP range starting from 10.0.1.10:

      core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n

      Attach the previously created VPC to the server server-01 (which is connected to the Fabric using the server-01--mclag--leaf-01--leaf-02 Connection):

      core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n

      To peer a VPC with another VPC (e.g. vpc-2), use the following command:

      core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n
      "},{"location":"reference/profiles/","title":"Switch Profiles Catalog","text":"

      The following is a list of all supported switches. Please make sure to use the version of the documentation that matches your environment to get an up-to-date list of supported switches, their features, and their port naming scheme.

      "},{"location":"reference/profiles/#celestica-ds3000","title":"Celestica DS3000","text":"

      Profile Name (to use in switch.spec.profile): celestica-ds3000

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/25 25 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/32 32 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#celestica-ds4000","title":"Celestica DS4000","text":"

      Profile Name (to use in switch.spec.profile): celestica-ds4000

      Supported features:

      • Subinterfaces: false
      • VXLAN: false
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/2 2 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/3 3 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/4 4 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/5 5 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/6 6 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/7 7 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/8 8 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/9 9 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/10 10 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/11 11 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/12 12 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/13 13 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/14 14 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/15 15 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/16 16 Breakout 1x400G 1x100G, 1x10G, 1x25G, 
1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/17 17 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/18 18 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/19 19 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/20 20 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/21 21 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/22 22 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/23 23 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/24 24 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/25 25 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/26 26 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/27 27 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/28 28 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/29 29 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/30 30 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/31 31 Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/32 32 
Breakout 1x400G 1x100G, 1x10G, 1x25G, 1x400G, 1x40G, 2x100G, 2x200G, 2x40G, 4x100G, 4x10G, 4x25G, 8x10G, 8x25G, 8x50G E1/33 33 Direct 10G 1G, 10G"},{"location":"reference/profiles/#dell-s5232f-on","title":"Dell S5232F-ON","text":"

      Profile Name (to use in switch.spec.profile): dell-s5232f-on

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port Label Type Group Default Supported M1 Management E1/1 1 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/2 2 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/3 3 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/4 4 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/5 5 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/6 6 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/7 7 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/8 8 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/9 9 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/10 10 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/11 11 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/12 12 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/13 13 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/14 14 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/15 15 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/16 16 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/17 17 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/18 18 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/19 19 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/20 20 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/21 21 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/22 22 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/23 23 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/24 24 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/25 25 
Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/26 26 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/27 27 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/28 28 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/29 29 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/30 30 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/31 31 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/32 32 Direct 100G 40G, 100G E1/33 33 Direct 10G 1G, 10G E1/34 34 Direct 10G 1G, 10G"},{"location":"reference/profiles/#dell-s5248f-on","title":"Dell S5248F-ON","text":"

      Profile Name (to use in switch.spec.profile): dell-s5248f-on

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      Label column is a port label on a physical switch.

      Port         Label  Type        Group  Default  Supported
      M1                  Management
      E1/1–E1/48   1–48   Port Group  1–12   25G      10G, 25G
      E1/49–E1/56  49–56  Breakout           1x100G   1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G

      Ports are assigned to port groups in blocks of four: E1/1–E1/4 form group 1, E1/5–E1/8 group 2, and so on through E1/45–E1/48 in group 12.

      Edgecore DCS203

      Profile Name (to use in switch.spec.profile): edgecore-dcs203

      Other names: Edgecore AS7326-56X
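      To use a profile, reference its name from the Switch object's spec.profile field in the wiring diagram. A minimal sketch (kind and apiVersion follow the Fabric wiring API described in these docs; the switch name, role value, and description are illustrative placeholders):

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: leaf-01            # placeholder switch name
  namespace: default
spec:
  profile: edgecore-dcs203 # profile name from this reference page
  role: server-leaf        # assumed role value; check the wiring API reference
  description: leaf-01, rack 1
```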

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      The Label column shows the port label on the physical switch.

      Port         Label  Type        Group  Default  Supported
      M1                  Management
      E1/1–E1/48   1–48   Port Group  1–4    25G      10G, 25G
      E1/49–E1/55  49–55  Breakout           1x100G   1x100G, 1x40G, 4x10G, 4x25G
      E1/56        56     Direct             100G     40G, 100G
      E1/57–E1/58  57–58  Direct             10G      1G, 10G

      Ports are assigned to port groups in blocks of twelve: E1/1–E1/12 form group 1, E1/13–E1/24 group 2, E1/25–E1/36 group 3, and E1/37–E1/48 group 4.

      Edgecore DCS204

      Profile Name (to use in switch.spec.profile): edgecore-dcs204

      Other names: Edgecore AS7726-32X

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      The Label column shows the port label on the physical switch.

      Port         Label  Type      Group  Default  Supported
      M1                  Management
      E1/1–E1/31   1–31   Breakout         1x100G   1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G
      E1/32        32     Direct           100G     40G, 100G
      E1/33–E1/34  33–34  Direct           10G      1G, 10G

      Edgecore DCS501

      Profile Name (to use in switch.spec.profile): edgecore-dcs501

      Other names: Edgecore AS7712-32X

      Supported features:

      • Subinterfaces: false
      • VXLAN: false
      • ACLs: true

      Available Ports:

      The Label column shows the port label on the physical switch.

      Port        Label  Type      Group  Default  Supported
      M1                 Management
      E1/1–E1/32  1–32   Breakout         1x100G   1x100G, 1x40G, 4x10G, 4x25G

      Edgecore EPS203

      Profile Name (to use in switch.spec.profile): edgecore-eps203

      Other names: Edgecore AS4630-54NPE

      Supported features:

      • Subinterfaces: false
      • VXLAN: true
      • ACLs: true

      Available Ports:

      The Label column shows the port label on the physical switch.

      Port         Label  Type    Group  Default  Supported
      M1                  Management
      E1/1–E1/36   1–36   Direct         2.5G     1G, 2.5G (AutoNeg supported, default: true)
      E1/37–E1/48  37–48  Direct         10G      1G, 10G (AutoNeg supported, default: true)
      E1/49–E1/52  49–52  Direct         25G      1G, 10G, 25G
      E1/53–E1/54  53–54  Direct         100G     40G, 100G

      Supermicro SSE-C4632SB

      Profile Name (to use in switch.spec.profile): supermicro-sse-c4632sb

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: true

      Available Ports:

      The Label column shows the port label on the physical switch.

      Port        Label  Type      Group  Default  Supported
      M1                 Management
      E1/1–E1/32  1–32   Breakout         1x100G   1x100G, 1x40G, 4x10G, 4x25G
      E1/33       33     Direct           10G      1G, 10G

      Virtual Switch

      Profile Name (to use in switch.spec.profile): vs

      Supported features:

      • Subinterfaces: true
      • VXLAN: true
      • ACLs: false

      Available Ports:

      The Label column shows the port label on the physical switch.

      Port         Label  Type        Group  Default  Supported
      M1                  Management
      E1/1–E1/48   1–48   Port Group  1–12   25G      10G, 25G
      E1/49–E1/56  49–56  Breakout           1x100G   1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G

      Ports are assigned to port groups in blocks of four: E1/1–E1/4 form group 1, E1/5–E1/8 group 2, and so on through E1/45–E1/48 in group 12.

      Release notes

      Beta-1

      Device support
      • Celestica DS4000 as a spine
      "},{"location":"release-notes/#sonic","title":"SONiC","text":"
      • Broadcom SONiC 4.4.0 support
      "},{"location":"release-notes/#fabric-provisioning-management","title":"Fabric provisioning, management","text":"
      • Out-of-band management network connectivity
      • In-band management network connectivity, chain boot, and front-panel boot are deprecated until further notice
      • Automatic zero-touch switch provisioning (ZTP) based on the serial number or the MAC address of the first management interface
      • Full support for airgap installations and upgrades by default
      • Self-contained USB image generation for control node installation
      • Automated in-place upgrades for control node(s) moving forward
      "},{"location":"release-notes/#api","title":"API","text":"
      • API version v1beta1
      • Guaranteed backward compatibility moving forward
      "},{"location":"release-notes/#alpha-7","title":"Alpha-7","text":""},{"location":"release-notes/#device-support_1","title":"Device Support","text":"

      New devices supported by the fabric:

      • Clos Spine

        • Celestica DS3000
        • Edgecore AS7712-32X-EC
        • Supermicro SSE-C4632SB
      • Clos Leaf

        • Celestica DS3000
        • Supermicro SSE-C4632SB
      • Collapsed Core ToR

        • Celestica DS3000
        • Supermicro SSE-C4632SB
      "},{"location":"release-notes/#switchprofiles","title":"SwitchProfiles","text":"
      • Metadata describing switch capabilities, feature capacities, and resource name mapping
      • Switch Profiles provide normalized name/ID mapping, validation, and internal resource management
      • Switch Profiles are mandatory: each switch model must have a corresponding switch profile to be supported by the fabric
      • Each switch defined in the wiring diagram must point to a switch profile document
      • Detailed overview
      • Catalog of switch profiles
      "},{"location":"release-notes/#new-universal-port-naming-scheme","title":"New Universal Port Naming Scheme","text":"
      • E<asic>/<port>/<breakout> or M<port> (for example, E1/1 for front-panel port 1, E1/55/1 for the first breakout of port 55, or M1 for the management port)
      • Enabled via switch profiles
      "},{"location":"release-notes/#improved-per-switch-modelplatform-validation","title":"Improved per switch-model/platform validation","text":"
      • Enabled via switch profiles
      "},{"location":"release-notes/#vpc","title":"VPC","text":"
      • It's now possible to explicitly specify a gateway to use in VPC subnets
      • StaticExternal now supports default routes
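      As an illustrative sketch of the explicit-gateway option, a VPC subnet might carry a gateway field roughly like this (the subnet field names and addresses are assumptions, not the authoritative schema; consult the VPC API reference):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1              # placeholder VPC name
spec:
  ipv4Namespace: default
  vlanNamespace: default
  subnets:
    default:
      subnet: 10.0.1.0/24
      vlan: 1001
      gateway: 10.0.1.254  # explicitly chosen gateway instead of the default
```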
      "},{"location":"release-notes/#inspection-cli","title":"Inspection CLI","text":"

      CLI commands are intended to navigate fabric configuration and state and allow introspection of the dependencies and cross-domain checking:

      • Fabric (overall control nodes and switches overview incl. status, serials, etc.)
      • Switch (status, used ports, counters, etc.)
      • Switch port (connection if used in one, counters, VPC and External attachments, etc.)
      • Server (connection if used in one, VPC attachments, etc.)
      • Connection (incl. VPC and External attachments, Loopback Workaround usage, etc.)
      • VPC/VPCSubnet (incl. where is it attached and what's reachable from it)
      • IP Address (incl. IPv4Namespace, VPCSubnet and DHCPLease or External/StaticExternal usage)
      • MAC Address (incl. switch ports and DHCP leases)
      • Access between a pair of IPs, server names, or VPCSubnets (everything except external IPs is translated to VPCSubnets)
      "},{"location":"release-notes/#observability","title":"Observability","text":"
      • Example Grafana Dashboards added to the docs
      • Syslog (/var/log/syslog) can now be collected from all switches and forwarded to Loki targets
      "},{"location":"release-notes/#bug-fixes","title":"Bug Fixes","text":"
      • Fixed: a restricted subnet wasn't accessible from other subnets of the same VPC
      "},{"location":"release-notes/#alpha-6","title":"Alpha-6","text":""},{"location":"release-notes/#observability_1","title":"Observability","text":""},{"location":"release-notes/#telemetry-prometheus-exporter","title":"Telemetry - Prometheus Exporter","text":"
      • Hedgehog Fabric Control Plane Agents on switches function as Prometheus Exporters

      • Telemetry data provided by Broadcom SONiC is now supported:

        • port and interface status and counters
        • transceiver state
        • environmental information (temperature, fans, psu, etc.)
        • BGP state and counters
      • Export to Prometheus using Prometheus Remote-Write API or any API-compatible platform

      "},{"location":"release-notes/#logging","title":"Logging","text":"
      • Grafana Alloy is supported as a certified logging agent that is installed and managed by the Fabric

      • Data collected

        • Agent logs
        • Agent, switch, and host-level metrics
      • Export to API-compliant platforms and products such as Prometheus, Loki, Grafana Cloud, or any LGTM stack

      "},{"location":"release-notes/#agent-status-api-enhancements","title":"Agent Status API Enhancements","text":"
      • Ports status and counters
      • Port breakout status and counters
      • Transceiver status and counters
      • Environmental and platform information
      • LLDP neighbors
      "},{"location":"release-notes/#networking-enhancements","title":"Networking enhancements","text":"
      • Multiple direct control links per switch are now supported
      • Custom static routes can be installed into a VPC using the API
      • ExternalAttachment can now be configured without a VLAN
      "},{"location":"release-notes/#other-improvements","title":"Other improvements","text":"
      • PXE boot with HTTP
      • hhfab and hhfctl (a kubectl plugin) are now published for Linux and macOS on amd64/arm64
      • Switch users can now be configured as part of installation preparation (username, password hash, role, and public keys)
      "},{"location":"release-notes/#bugs-fixed","title":"Bugs fixed","text":"
      • The DHCP service could assign the same IP multiple times if restarted in between
      • Remote peering was configured as local peering
      "},{"location":"release-notes/#alpha-5","title":"Alpha-5","text":""},{"location":"release-notes/#open-source","title":"Open Source","text":"
      • Apache License 2.0
      • The main repos are public:
        • Fabric
        • Fabricator
        • Das-boot
        • Toolbox
        • Docs
      • Items not open-sourced:
        • HONIE with front panel booting support
      "},{"location":"release-notes/#dhcppxe-boot-support-for-multi-homed-connections","title":"DHCP/PXE boot support for multi-homed connections","text":"
      • PXE URL support for on-demand DHCP service
      • LACP link (MCLAG and ESLAG) fallback allows support of one of the links without the use of a host-level bond
      "},{"location":"release-notes/#improvements","title":"Improvements","text":"
      • Native VLAN support for server-facing connections
      • Extended wiring validation at hhfab init/build time
      • External peering failover in case of using remote peering on the same switches as external connectivity
      "},{"location":"release-notes/#alpha-4","title":"Alpha-4","text":""},{"location":"release-notes/#documentation","title":"Documentation","text":"
      • Fabric API reference
      "},{"location":"release-notes/#host-connectivity-dual-homing-improvements","title":"Host connectivity dual homing improvements","text":"
      • ESI for VXLAN-based BGP EVPN
      • Support in Fabric and VLAB
      • Host connectivity Redundancy Groups
      • Groups LEAF switches to provide multi-homed connectivity to the Fabric
      • 2-4 switches per group
      • Support for MCLAG and ESLAG (EVPN MH / ESI)
      • A single redundancy group can only support multi-homing of one type (ESLAG or MCLAG)
      • Multiple types of redundancy groups can be used in the fabric simultaneously
      "},{"location":"release-notes/#improved-vpc-security-policy-better-zero-trust","title":"Improved VPC security policy - better Zero Trust","text":"
      • Inter-VPC
        • Allow inter-VPC and external peering with per subnet control
      • Intra-VPC intra-subnet policies
        • Isolated Subnets
          • subnets isolated by default from other subnets in the VPC
          • require an explicit, user-defined permit list to allow communication with other subnets within the VPC
          • can be set on individual subnets within VPC or per entire VPC - off by default
          • Inter-VPC and external peering configurations are not affected and work the same as before
        • Restricted Subnets
          • Hosts within a subnet have no mutual reachability
          • Hosts within a subnet can be reached by members of other subnets or peered VPCs as specified by the policy
          • Inter-VPC and external peering configurations are not affected and work the same as before
        • Permit Lists
          • Intra-VPC Permit Lists govern connectivity between subnets within the VPC for isolated subnets
          • Inter-VPC Permit Lists govern which subnets of one VPC have access to some subnets of the other VPC for finer-grained control of inter-VPC and external peering
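      A sketch of isolated and restricted subnets combined with an intra-VPC permit list (the field names, subnet names, and permit structure are illustrative assumptions, not the authoritative schema):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
spec:
  subnets:
    frontend:
      subnet: 10.0.1.0/24
      vlan: 1001
      isolated: true       # no access to other subnets of this VPC unless permitted
    backend:
      subnet: 10.0.2.0/24
      vlan: 1002
      isolated: true
      restricted: true     # hosts within this subnet have no mutual reachability
  permit:
    - [frontend, backend]  # intra-VPC permit list: allow these two isolated subnets to talk
```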
      "},{"location":"release-notes/#static-external-connection","title":"Static External Connection","text":"
      • Allows access between hosts within the VPC and devices attached to a switch with user-defined static routes
      "},{"location":"release-notes/#internal-improvements","title":"Internal Improvements","text":"
      • A new, more reliable automated ID allocation system
      • Extra validation of object lifecycle (e.g., object-in-use removal validation)
      "},{"location":"release-notes/#known-issues","title":"Known Issues","text":"
      • External Peering Failover
        • Conditions: ExternalPeering is specified for the VPC, and the same VPC has Border Leaf VPCPeering
        • Issue: Detaching ExternalPeering may cause VPCPeering on the Border Leaf group to stop working
        • Workaround: VPCPeering on the Border Leaf group should be recreated
      "},{"location":"release-notes/#alpha-3","title":"Alpha-3","text":""},{"location":"release-notes/#sonic-support","title":"SONiC support","text":"
      • Broadcom Enterprise SONiC 4.2.0 (previously 4.1.1)
      "},{"location":"release-notes/#multiple-ipv4-namespaces","title":"Multiple IPv4 namespaces","text":"
      • Support for multiple overlapping IPv4 addresses in the Fabric
      • Integrated with on-demand DHCP Service (see below)
      • All IPv4 addresses within a given VPC must be unique
      • Only VPCs with non-overlapping IPv4 subnets can peer within the Fabric
      • An external NAT device is required for peering of VPCs with overlapping subnets
      "},{"location":"release-notes/#hedgehog-fabric-dhcp-and-ipam-service","title":"Hedgehog Fabric DHCP and IPAM Service","text":"
      • Custom DHCP server executing in the controllers
      • Multiple IPv4 namespaces with overlapping subnets
      • Multiple VLAN namespaces with overlapping VLAN ranges
      • DHCP leases exposed through the Fabric API
      • Available for VLAB as well as the Fabric
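      As a hedged sketch, enabling the Fabric DHCP service on a VPC subnet might look like this (the dhcp block's field names and the address range are assumptions; check the VPC API reference for the exact schema):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
spec:
  ipv4Namespace: default
  subnets:
    default:
      subnet: 10.0.1.0/24
      vlan: 1001
      dhcp:
        enable: true       # serve leases for this subnet from the Fabric DHCP service
        range:             # optional: restrict the pool handed out
          start: 10.0.1.10
          end: 10.0.1.99
```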
      "},{"location":"release-notes/#hedgehog-fabric-ntp-service","title":"Hedgehog Fabric NTP Service","text":"
      • Custom NTP servers at the controller
      • Switches automatically configured to use control node as NTP server
      • NTP servers can be configured to sync to external time/NTP server
      "},{"location":"release-notes/#staticexternal-connections","title":"StaticExternal connections","text":"
      • Directly connect external infrastructure services (such as NTP, DHCP, DNS) to the Fabric
      • No BGP is required, just automatically configured static routes
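      A sketch of a StaticExternal connection attaching an infrastructure device behind a switch port (the port name, addresses, and field layout are illustrative placeholders, not the authoritative schema):

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: leaf-01--static-external--dns
spec:
  staticExternal:
    link:
      switch:
        port: leaf-01/E1/5      # switch port the device is attached to
        ip: 172.30.50.1/24      # IP configured on the switch port
        subnets:
          - 172.30.50.0/24      # prefixes reachable behind the attached device
        nextHop: 172.30.50.5    # next hop used for the automatic static routes
```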
      "},{"location":"release-notes/#dhcp-relay-to-3rd-party-dhcp-service","title":"DHCP Relay to 3rd party DHCP service","text":"

      Support for 3rd party DHCP server (DHCP Relay config) through the API

      "},{"location":"release-notes/#alpha-2","title":"Alpha-2","text":""},{"location":"release-notes/#controller","title":"Controller","text":"

      A single controller. No controller redundancy.

      "},{"location":"release-notes/#controller-connectivity","title":"Controller connectivity","text":"

      For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.

      Spine controller connectivity is not supported.

      For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.

      Management port connectivity is also supported for CLOS/LEAF-SPINE topology, but it requires all switches to be connected to the controllers via management ports. No chain booting is possible in this configuration.

      "},{"location":"release-notes/#controller-requirements","title":"Controller requirements","text":"
      • One 1GbE or faster port to connect to each controller-attached switch
      • One or more 1GbE or faster ports connecting to the external management network
      • 4 cores, 12GB RAM, 100GB SSD
      "},{"location":"release-notes/#chain-booting","title":"Chain booting","text":"

      Switches not directly connecting to the controllers can chain boot via the data network.

      "},{"location":"release-notes/#topology-support","title":"Topology support","text":"

      CLOS/LEAF-SPINE and Collapsed Core topologies are supported.

      "},{"location":"release-notes/#leaf-roles-for-clos-topology","title":"LEAF Roles for CLOS topology","text":"

      Server leaf, border leaf, and mixed leaf modes are supported.

      "},{"location":"release-notes/#collapsed-core-topology","title":"Collapsed Core Topology","text":"

      Two ToR/LEAF switches with MCLAG server connection.

      "},{"location":"release-notes/#server-multihoming","title":"Server multihoming","text":"

      MCLAG-only.
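      An MCLAG server connection spanning the two ToR/LEAF switches can be sketched roughly as follows (the server/switch names, NIC ports, and link layout are placeholders; consult the Connection API reference for the exact schema):

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-01--mclag--leaf-01--leaf-02
spec:
  mclag:
    links:
      - server:
          port: server-01/enp2s1  # first NIC of the dual-homed server
        switch:
          port: leaf-01/E1/10
      - server:
          port: server-01/enp2s2  # second NIC, to the MCLAG peer switch
        switch:
          port: leaf-02/E1/10
```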

      "},{"location":"release-notes/#device-support_2","title":"Device support","text":""},{"location":"release-notes/#leafs","title":"LEAFs","text":"
      • DELL:

        • S5248F-ON
        • S5232F-ON
      • Edge-Core:

        • DCS204 (AS7726-32X)
        • DCS203 (AS7326-56X)
        • EPS203 (AS4630-54NPE)
      "},{"location":"release-notes/#spines","title":"SPINEs","text":"
      • DELL:
        • S5232F-ON
      • Edge-Core:
        • DCS204 (AS7726-32X)
      "},{"location":"release-notes/#underlay-configuration","title":"Underlay configuration:","text":"

      Port speed, port group speed, and port breakouts are configurable through the API.
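      For example, port group speeds and breakouts might be set on a Switch object roughly like this (the field names are illustrative assumptions; on an edgecore-dcs203, port group 1 covers E1/1–E1/12 and E1/49 is a breakout-capable port per the profile reference):

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: leaf-01
spec:
  profile: edgecore-dcs203
  portGroupSpeeds:
    "1": 10G        # run port group 1 at 10G instead of the 25G default
  portBreakouts:
    E1/49: 4x25G    # split breakout port 49 into four 25G lanes
```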

      "},{"location":"release-notes/#vpc-overlay-implementation","title":"VPC (overlay) Implementation","text":"

      VXLAN-based BGP EVPN.

      "},{"location":"release-notes/#multi-subnet-vpcs","title":"Multi-subnet VPCs","text":"

      A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.

      "},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"

      Multiple IP address namespaces are supported per fabric. Each VPC belongs to a corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace, but separate IP address namespaces may mutually overlap.
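      As a sketch, an IPv4 namespace object could look like the following (the IPv4Namespace kind and its field layout are assumptions based on the API conventions shown elsewhere on this page, not confirmed here; verify against your Fabric API reference):

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # Subnets that VPCs in this namespace may allocate from; no overlaps within a single namespace\n  - 10.0.0.0/16\n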

      "},{"location":"release-notes/#vlan-namespace","title":"VLAN Namespace","text":"

      VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to the VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.

      This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.
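      As a sketch, a VLAN namespace with a non-overlapping VLAN range could be defined as follows (the VLANNamespace kind and its field layout are assumptions based on the API conventions shown elsewhere on this page; verify against your Fabric API reference):

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # Non-overlapping VLAN ranges usable by VPCs in this namespace\n  - from: 1000\n    to: 2999\n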

      "},{"location":"release-notes/#switch-groups","title":"Switch Groups","text":"

      Each switch belongs to a list of switch groups used for identifying redundancy groups for things like external connectivity.

      "},{"location":"release-notes/#mutual-vpc-peering","title":"Mutual VPC Peering","text":"

      VPC peering is supported and possible between a pair of VPCs that belong to the same IPv4 and VLAN namespaces.
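      As a sketch, a peering between two VPCs that share IPv4 and VLAN namespaces could look like the following (the VPCPeering kind and its permit structure are assumptions, not confirmed by this page; verify against your Fabric API reference):

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # Allow traffic between the two VPCs; both must belong to the same IPv4 and VLAN namespaces\n  - vpc-1: {}\n    vpc-2: {}\n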

      "},{"location":"release-notes/#external-vpc-peering","title":"External VPC Peering","text":"

      VPC peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of the border or mixed leaf switches. Multiple \u201cexternal systems\u201d with multiple devices/links in each of them are supported.

      The user controls what subnets/prefixes to import and export from/to the external system.

      No NAT function is supported for external peering.

      "},{"location":"release-notes/#host-connectivity","title":"Host connectivity","text":"

      Servers can be attached via Unbundled, Bundled (LAG), and MCLAG connections.

      "},{"location":"release-notes/#dhcp-service","title":"DHCP Service","text":"

      Each VPC is provided with an optional DHCP service with simple IPAM.
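      As a sketch, a VPC subnet with the optional DHCP service enabled could look like the following (the VPC kind and its subnet/dhcp field layout are assumptions based on the API conventions shown elsewhere on this page; verify against your Fabric API reference):

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default\n  vlanNamespace: default\n  subnets:\n    default:\n      subnet: 10.0.1.0/24\n      vlan: 1001\n      dhcp:\n        enable: true # Optional DHCP service with simple IPAM\n        range:\n          start: 10.0.1.10\n          end: 10.0.1.99\n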

      "},{"location":"release-notes/#local-vpc-peering-loopbacks","title":"Local VPC peering loopbacks","text":"

      To enable local inter-VPC peering, which allows routing of traffic between VPCs, local loopbacks are required to overcome silicon limitations.

      "},{"location":"release-notes/#scale","title":"Scale","text":"
      • Maximum fabric size: 20 LEAF/ToR switches.
      • Routes per switch: 64k
      • [ silicon platform limitation in Trident 3; limits to number of endpoints in the fabric ]
      • Total VPCs per switch: up to 1000
      • [ Including VPCs attached at the given switch and VPCs peered with ]
      • Total VPCs per VLAN namespace: up to 3000
      • [ assuming 1 subnet per VPC ]
      • Total VPCs per fabric: unlimited
      • [ if using multiple VLAN namespaces ]
      • VPC subnets per switch: up to 3000
      • VPC subnets per VLAN namespace: up to 3000
      • Subnets per VPC: up to 20
      • [ just a validation; the current design allows up to 100, but it could be increased even more in the future ]
      • VPC Slots per remote peering @ switch: 2
      • Max VPC loopbacks per switch: 500
      • [ VPC loopback workarounds per switch are needed for local peering when both VPCs are attached to the switch or for external peering with VPC attached on the same switch that is peering with external ]
      "},{"location":"release-notes/#software-versions","title":"Software versions","text":"
      • Fabric: v0.23.0
      • Das-boot: v0.11.4
      • Fabricator: v0.8.0
      • K3s: v1.27.4-k3s1
      • Zot: v1.4.3
      • SONiC
      • Broadcom Enterprise Base 4.1.1
      • Broadcom Enterprise Campus 4.1.1
      "},{"location":"release-notes/#known-limitations","title":"Known Limitations","text":"
      • MTU setting inflexibility:
      • Fabric MTU is 9100 and not configurable right now (A3 planned)
      • Server-facing MTU is 9136 and not configurable right now (A3+)
      • No support for Access VLANs for attaching servers (A3 planned)
      • VPC peering is enabled on all subnets of the participating VPCs; no subnet selection for peering (A3 planned)
      • Peering with an external is only possible over a VLAN (by design)
      • If you have VPCs with remote peering on a switch group, you can't attach those VPCs on that switch group (by definition of remote peering)
      • If a group of VPCs has remote peering on a switch group, any other VPC that will peer with those VPCs remotely needs to use the same switch group (by design)
      • If a VPC peers with an external, it can only be remotely peered with on the same switches that have a connection to that external (by design)
      • The server-facing connection object is immutable as it\u2019s very easy to get into a deadlock; re-create it to change it (A3+)
      "},{"location":"release-notes/#alpha-1","title":"Alpha-1","text":"
      • Controller:

        • A single controller connecting to each switch management port. No redundancy.
      • Controller requirements:

        • One 1 gig port per switch
        • One+ 1 gig+ ports connecting to the external management network.
        • 4 Cores, 12GB RAM, 100GB SSD.
      • Seeder:

        • Seeder and Controller functions co-resident on the control node. Switch booting and ZTP on management ports directly connected to the controller.
      • HHFab - the fabricator:

        • An operational tool to generate, initiate, and maintain the fabric software appliance. Allows fabrication of the environment-specific image with all of the required underlay and security configuration baked in.
      • DHCP Service:

        • A simple DHCP server for assigning IP addresses to hosts connecting to the fabric, optimized for use with VPC overlay.
      • Topology:

        • Support for a Collapsed Core topology with 2 switch nodes.
      • Underlay:

        • A simple single-VRF network with a BGP control plane. IPv4 support only.
      • External connectivity:

        • An edge router must be connected to selected ports of one or both switches. IPv4 support only.
      • Dual-homing:

        • L2 Dual homing with MCLAG is implemented to connect servers, storage, and other devices in the data center. NIC bonding and LACP configuration at the host are required.
      • VPC overlay implementation:

        • VPC is implemented as a set of ACLs within the underlay VRF. External connectivity to the VRF is performed via internally managed VLANs. IPv4 support only.
      • VPC Peering:

        • VPC peering is performed via ACLs with no fine-grained control.
      • NAT

        • DNAT + SNAT are supported per VPC. SNAT and DNAT can't be enabled per VPC simultaneously.
      • Hardware support:

        • Please see the supported hardware list.
      • Virtual Lab:

        • A simulation of the two-node Collapsed Core Topology as a virtual environment. Designed for use as a network simulation, a configuration scratchpad, or a training/demonstration tool. Minimum requirements: 8 cores, 24GB RAM, 100GB SSD
      • Limitations:

        • 40 VPCs max
        • 50 VPC peerings
        • [ 768 ACL entry platform limitation from Broadcom ]
      • Software versions:

        • Fabricator: v0.5.2
        • Fabric: v0.18.6
        • Das-boot: v0.8.2
        • K3s: v1.27.4-k3s1
        • Zot: v1.4.3
        • SONiC: Broadcom Enterprise Base 4.1.1
      "},{"location":"troubleshooting/overview/","title":"Troubleshooting","text":"

      Under construction.

      "},{"location":"user-guide/connections/","title":"Connections","text":"

      Connection objects represent logical and physical connections between the devices in the Fabric (Switch, Server and External objects) and are needed to define all the connections in the Wiring Diagram.

      All connections reference switch or server ports. Only port names defined by switch profiles can be used in the wiring diagram for the switches. NOS (or any other) port names aren't supported. Currently, server ports aren't validated by the Fabric API other than for uniqueness. See the Switch Profiles and Port Naming section for more details.

      There are several types of connections.

      "},{"location":"user-guide/connections/#workload-server-connections","title":"Workload server connections","text":"

      Server connections are used to connect workload servers to switches.

      "},{"location":"user-guide/connections/#unbundled","title":"Unbundled","text":"

      Unbundled server connections are used to connect servers to a single switch using a single port.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-4--unbundled--s5248-02\n  namespace: default\nspec:\n  unbundled:\n    link: # Defines a single link between a server and a switch\n      server:\n        port: server-4/enp2s1\n      switch:\n        port: s5248-02/Ethernet3\n
      "},{"location":"user-guide/connections/#bundled","title":"Bundled","text":"

      Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-3--bundled--s5248-01\n  namespace: default\nspec:\n  bundled:\n    links: # Defines multiple links between a single server and a single switch\n    - server:\n        port: server-3/enp2s1\n      switch:\n        port: s5248-01/Ethernet3\n    - server:\n        port: server-3/enp2s2\n      switch:\n        port: s5248-01/Ethernet4\n
      "},{"location":"user-guide/connections/#mclag","title":"MCLAG","text":"

      MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing). Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection of type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  mclag:\n    links: # Defines multiple links between a single server and a pair of switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
      "},{"location":"user-guide/connections/#eslag","title":"ESLAG","text":"

      ESLAG server connections are used to connect servers to 2-4 switches using multiple ports (multi-homing). Switches should belong to the same redundancy group of type eslag, but contrary to the MCLAG case, no other configuration is required.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: server-1--eslag--s5248-01--s5248-02\n  namespace: default\nspec:\n  eslag:\n    links: # Defines multiple links between a single server and a 2-4 switches\n    - server:\n        port: server-1/enp2s1\n      switch:\n        port: s5248-01/Ethernet1\n    - server:\n        port: server-1/enp2s2\n      switch:\n        port: s5248-02/Ethernet1\n
      "},{"location":"user-guide/connections/#switch-connections-fabric-facing","title":"Switch connections (fabric-facing)","text":"

      Switch connections are used to connect switches to each other and provide any needed \"service\" connectivity to implement the Fabric features.

      "},{"location":"user-guide/connections/#fabric","title":"Fabric","text":"

      A Fabric Connection is used between a specific pair of spine and leaf switches, representing all of the wires between them.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5232-01--fabric--s5248-01\n  namespace: default\nspec:\n  fabric:\n    links: # Defines multiple links between a spine-leaf pair of switches with IP addresses\n    - leaf:\n        ip: 172.30.30.1/31\n        port: s5248-01/Ethernet48\n      spine:\n        ip: 172.30.30.0/31\n        port: s5232-01/Ethernet0\n    - leaf:\n        ip: 172.30.30.3/31\n        port: s5248-01/Ethernet56\n      spine:\n        ip: 172.30.30.2/31\n        port: s5232-01/Ethernet4\n
      "},{"location":"user-guide/connections/#mclag-domain","title":"MCLAG-Domain","text":"

      MCLAG-Domain connections define a pair of MCLAG switches with the Session and Peer links between them. Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection of type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-01--mclag-domain--s5248-02\n  namespace: default\nspec:\n  mclagDomain:\n    peerLinks: # Defines multiple links between a pair of MCLAG switches for Peer link\n    - switch1:\n        port: s5248-01/Ethernet72\n      switch2:\n        port: s5248-02/Ethernet72\n    - switch1:\n        port: s5248-01/Ethernet73\n      switch2:\n        port: s5248-02/Ethernet73\n    sessionLinks: # Defines multiple links between a pair of MCLAG switches for Session link\n    - switch1:\n        port: s5248-01/Ethernet74\n      switch2:\n        port: s5248-02/Ethernet74\n    - switch1:\n        port: s5248-01/Ethernet75\n      switch2:\n        port: s5248-02/Ethernet75\n
      "},{"location":"user-guide/connections/#vpc-loopback","title":"VPC-Loopback","text":"

      VPC-Loopback connections are required to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is necessitated by a hardware limitation of the currently supported switches.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-01--vpc-loopback\n  namespace: default\nspec:\n  vpcLoopback:\n    links: # Defines multiple loopbacks on a single switch\n    - switch1:\n        port: s5248-01/Ethernet16\n      switch2:\n        port: s5248-01/Ethernet17\n    - switch1:\n        port: s5248-01/Ethernet18\n      switch2:\n        port: s5248-01/Ethernet19\n
      "},{"location":"user-guide/connections/#connecting-fabric-to-the-outside-world","title":"Connecting Fabric to the outside world","text":"

      Connections in this section provide connectivity to the outside world. For example, they can be connections to the Internet, to other networks, or to some other systems such as DHCP, NTP, LMA, or AAA services.

      "},{"location":"user-guide/connections/#staticexternal","title":"StaticExternal","text":"

      StaticExternal connections provide a simple way to connect things like DHCP servers directly to the Fabric by connecting them to specific switch ports.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: third-party-dhcp-server--static-external--s5248-04\n  namespace: default\nspec:\n  staticExternal:\n    link:\n      switch:\n        port: s5248-04/Ethernet1 # Switch port to use\n        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port\n        vlan: 1005 # Optional VLAN ID to use for the switch port; if 0, no VLAN is configured\n        subnets: # List of subnets to route to the switch port using static routes and next hop\n          - 10.99.0.1/24\n          - 10.199.0.100/32\n        nextHop: 172.30.50.1 # Next hop IP address to use when configuring static routes for the \"subnets\" list\n

      Additionally, it's possible to configure StaticExternal within the VPC to provide access to the third-party resources within a specific VPC, with the rest of the YAML configuration remaining unchanged.

      ...\nspec:\n  staticExternal:\n    withinVPC: vpc-1 # VPC name to attach the static external to\n    link:\n      ...\n
      "},{"location":"user-guide/connections/#external","title":"External","text":"

      A Connection to external systems, such as edge/provider routers, uses BGP peering and allows configuring inbound/outbound communities as well as granularly controlling what gets advertised and which routes are accepted.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: s5248-03--external--5835\n  namespace: default\nspec:\n  external:\n    link: # Defines a single link between a switch and an external system\n      switch:\n        port: s5248-03/Ethernet3\n
      "},{"location":"user-guide/devices/","title":"Switches and Servers","text":"

      All devices in a Hedgehog Fabric are divided into two groups: switches and servers, represented by the corresponding Switch and Server objects in the API. These objects are needed to define all of the participants of the Fabric and their roles in the Wiring Diagram, together with Connection objects (see Connections).

      "},{"location":"user-guide/devices/#switches","title":"Switches","text":"

      Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API. These objects consist of basic metadata like name, description, role, serial, management port mac, as well as port group speeds, port breakouts, ASN, IP addresses, and more. Additionally, a Switch contains a reference to a SwitchProfile object that defines the switch model and capabilities. More details can be found in the Switch Profiles and Port Naming section.

      In order for the Fabric to manage a switch, either the serial or the mac needs to be defined in the YAML document.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  boot: # at least one of the serial or mac needs to be defined\n    serial: XYZPDQ1234\n    mac: 00:11:22:33:44:55 # Usually the first management port MAC address\n  profile: dell-s5248f-on # Mandatory reference to the SwitchProfile object defining the switch model and capabilities\n  asn: 65101 # ASN of the switch\n  description: leaf-1\n  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node\n  portBreakouts: # Configures port breakouts for the switch, see the SwitchProfile for available options\n    E1/55: 4x25G\n  portGroupSpeeds: # Configures port group speeds for the switch, see the SwitchProfile for available options\n    \"1\": 10G\n    \"2\": 10G\n  portSpeeds: # Configures port speeds for the switch, see the SwitchProfile for available options\n    E1/1: 25G\n  protocolIP: 172.30.11.100/32 # Used as BGP router ID\n  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf\n  vlanNamespaces: # Defines which VLANs could be used to attach servers\n  - default\n  vtepIP: 172.30.12.100/32\n  groups: # Defines which groups the switch belongs to, by referring to SwitchGroup objects\n  - some-group\n  redundancy: # Optional field to define that switch belongs to the redundancy group\n    group: eslag-1 # Name of the redundancy group\n    type: eslag # Type of the redundancy group, one of mclag or eslag\n

      The SwitchGroup is just a marker at this point and doesn't have any configuration options.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: border\n  namespace: default\nspec: {}\n
      "},{"location":"user-guide/devices/#redundancy-groups","title":"Redundancy Groups","text":"

      Redundancy groups are used to define the redundancy between switches. A redundancy group is a regular SwitchGroup used by multiple switches; currently it can be of type MCLAG or ESLAG (EVPN MH / ESI). A switch can only belong to a single redundancy group.

      MCLAG is only supported for pairs of switches and ESLAG is supported for up to 4 switches. Multiple types of redundancy groups can be used in the fabric simultaneously.

      Connections with types mclag and eslag are used to define server connections to switches. They are only supported if the switch belongs to a redundancy group of the corresponding type.

      In order to define an MCLAG or ESLAG redundancy group, you need to create a SwitchGroup object and assign it to the switches using the redundancy field.

      An example of a switch configured for ESLAG:

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: eslag-1\n  namespace: default\nspec: {}\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-03\n  namespace: default\nspec:\n  ...\n  redundancy:\n    group: eslag-1\n    type: eslag\n  ...\n

      An example of a switch configured for MCLAG:

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: SwitchGroup\nmetadata:\n  name: mclag-1\n  namespace: default\nspec: {}\n---\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Switch\nmetadata:\n  name: s5248-01\n  namespace: default\nspec:\n  ...\n  redundancy:\n    group: mclag-1\n    type: mclag\n  ...\n

      In the case of MCLAG, a special connection of type mclag-domain, defining the peer and session links between the switches, is required. For more details, see Connections.

      "},{"location":"user-guide/devices/#servers","title":"Servers","text":"

      Regular workload server:

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Server\nmetadata:\n  name: server-1\n  namespace: default\nspec:\n  description: MH s5248-01/E1 s5248-02/E1\n
      "},{"location":"user-guide/external/","title":"External Peering","text":"

      Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and provide L3 connectivity. The External Peering feature allows you to set up an external peering endpoint and to enforce several policies between internal and external endpoints.

      Note

      Hedgehog Fabric does not operate Edge side devices.

      "},{"location":"user-guide/external/#overview","title":"Overview","text":"

      Traffic exits from the Fabric on Border Leaves that are connected with Edge devices. Border Leaves are suitable to terminate L2VPN connections, to distinguish VPC L3 routable traffic towards Edge devices, and to land VPC servers. Border Leaves (or Borders) can connect to several Edge devices.

      Note

      External Peering is only available on switch devices that support sub-interfaces.

      "},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"

      In order to distinguish VPC traffic, an Edge device should be able to:

      • Set up BGP IPv4 to advertise and receive routes from the Fabric
      • Connect to a Fabric Border Leaf over VLAN
      • Be able to mark egress routes towards the Fabric with BGP Communities
      • Be able to filter ingress routes from the Fabric by BGP Communities

      All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.

      "},{"location":"user-guide/external/#control-plane","title":"Control Plane","text":"

      The Fabric shares VPC routes with Edge devices via BGP. Peering is done over VLAN in IPv4 Unicast AFI/SAFI.

      "},{"location":"user-guide/external/#data-plane","title":"Data Plane","text":"

      VPC L3 routable traffic is tagged with a VLAN and sent to the Edge device. Later processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.

      "},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"

      Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that the VPC can export to Edge devices and import from the Edge devices.

      "},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"

      General configuration starts with the specification of External objects. Each object of the External type can represent a set of Edge devices, a single BGP instance on an Edge device, or any other set of Edge entities that can be described with the following configuration:

      • Name of External
      • Inbound routes marked with the dedicated BGP community
      • Outbound routes marked with the dedicated community

      Each External should be bound to a VPC IP namespace, otherwise prefix overlaps may happen.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: External\nmetadata:\n  name: default--5835\nspec:\n  ipv4Namespace: # VPC IP Namespace\n  inboundCommunity: # BGP Standard Community of routes from Edge devices\n  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from Fabric\n
      "},{"location":"user-guide/external/#connection","title":"Connection","text":"

      A Connection of type external is used to identify the switch port on Border leaf that is cabled with an Edge device.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: # specified or generated\nspec:\n  external:\n    link:\n      switch:\n        port: # SwitchName/EthernetXXX\n
      "},{"location":"user-guide/external/#external-attachment","title":"External Attachment","text":"

      External Attachment defines the BGP peering and traffic connectivity between a Border Leaf and an External. Attachments are bound to a Connection of type external and specify an optional vlan that is used to segregate a particular Edge peering.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: #\nspec:\n  connection: # Name of the Connection with type external\n  external: # Name of the External to pick config\n  neighbor:\n    asn: # Edge device ASN\n    ip: # IP address of Edge device to peer with\n  switch:\n    ip: # IP address on the Border Leaf to set up BGP peering\n    vlan: # VLAN (optional) ID to tag control and data traffic, use 0 for untagged\n

      Several External Attachments can be configured for the same Connection, but with different VLANs.

      "},{"location":"user-guide/external/#external-vpc-peering","title":"External VPC Peering","text":"

      To allow a specific VPC to have access to Edge devices, bind the VPC to a specific External object. To do so, define an External Peering object.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalPeering\nmetadata:\n  name: # Name of ExternalPeering\nspec:\n  permit:\n    external:\n      name: # External Name\n      prefixes: # List of prefixes (routes) to be allowed to pick up from External\n      - # IPv4 prefix\n    vpc:\n      name: # VPC Name\n      subnets: # List of VPC subnets name to be allowed to have access to External (Edge)\n      - # Name of the subnet within VPC\n

      Prefixes is the list of subnets to permit from the External to the VPC. A listed prefix matches any route with a prefix length less than or equal to 32, effectively permitting all routes within the specified prefix. Use 0.0.0.0/0 to permit any route, including the default route.

      This example allows any IPv4 prefix that came from External:

      spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - prefix: 0.0.0.0/0 # Any route will be allowed including default route\n

      This example allows any route that falls within the specified 77.0.0.0/8 prefix, with any prefix length:

      spec:\n  permit:\n    external:\n      name: ###\n      prefixes:\n      - prefix: 77.0.0.0/8 # Any route that belongs to the specified prefix is allowed (such as 77.0.0.0/8 or 77.1.2.0/24)\n
      "},{"location":"user-guide/external/#examples","title":"Examples","text":"

      This example shows how to peer with an External object named HedgeEdge, given a Fabric VPC named vpc-1, on the Border Leaf switchBorder that has a cable connecting it to an Edge device on port Ethernet42. Specifying vpc-1 is required to receive any prefixes advertised from the External.

      "},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"
      # hhfctl external create --name HedgeEdge --ipns default --in 65102:5000 --out 5000:65102\n
      apiVersion: vpc.githedgehog.com/v1beta1\nkind: External\nmetadata:\n  name: HedgeEdge\n  namespace: default\nspec:\n  inboundCommunity: 65102:5000\n  ipv4Namespace: default\n  outboundCommunity: 5000:65102\n
      "},{"location":"user-guide/external/#connection_1","title":"Connection","text":"

      Connection should be specified in the wiring diagram.

      ###\n### switchBorder--external--HedgeEdge\n###\napiVersion: wiring.githedgehog.com/v1beta1\nkind: Connection\nmetadata:\n  name: switchBorder--external--HedgeEdge\nspec:\n  external:\n    link:\n      switch:\n        port: switchBorder/Ethernet42\n
      "},{"location":"user-guide/external/#externalattachment","title":"ExternalAttachment","text":"

      Specified in the wiring diagram:

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalAttachment\nmetadata:\n  name: switchBorder--HedgeEdge\nspec:\n  connection: switchBorder--external--HedgeEdge\n  external: HedgeEdge\n  neighbor:\n    asn: 65102\n    ip: 100.100.0.6\n  switch:\n    ip: 100.100.0.1/24\n    vlan: 100\n
      "},{"location":"user-guide/external/#externalpeering","title":"ExternalPeering","text":"
      apiVersion: vpc.githedgehog.com/v1beta1\nkind: ExternalPeering\nmetadata:\n  name: vpc-1--HedgeEdge\nspec:\n  permit:\n    external:\n      name: HedgeEdge\n      prefixes:\n      - prefix: 0.0.0.0/0\n    vpc:\n      name: vpc-1\n      subnets:\n      - default\n
      "},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"

      Warning

      Hedgehog does not recommend using the following configuration for production. It is only provided as an example of Edge Peer configuration.

      Interface configuration:

      interface Ethernet2.100\n encapsulation dot1q vlan-id 100\n description switchBorder--Ethernet42\n no shutdown\n ip vrf forwarding VrfHedge\n ip address 100.100.0.6/24\n

      BGP configuration:

      !\nrouter bgp 65102 vrf VrfHedge\n log-neighbor-changes\n timers 60 180\n !\n address-family ipv4 unicast\n  maximum-paths 64\n  maximum-paths ibgp 1\n  import vrf VrfPublic\n !\n neighbor 100.100.0.1\n  remote-as 65103\n  !\n  address-family ipv4 unicast\n   activate\n   route-map HedgeIn in\n   route-map HedgeOut out\n   send-community both\n !\n

      Route Map configuration:

      route-map HedgeIn permit 10\n match community Hedgehog\n!\nroute-map HedgeOut permit 10\n set community 65102:5000\n!\n\nbgp community-list standard HedgeIn permit 5000:65102\n
      "},{"location":"user-guide/grafana/","title":"Grafana Dashboards","text":"

      To provide monitoring of the most critical metrics from the switches managed by Hedgehog Fabric, several dashboards are available for Grafana deployments. Make sure that you've enabled metrics and logs collection for the switches in the Fabric, as described in the Fabric Config section.

      "},{"location":"user-guide/grafana/#variables","title":"Variables","text":"

      List of common variables used in Hedgehog Grafana dashboards:

      • env (Label: Env): label_values(env) - Environment to monitor
      • node (Label: Switch): label_values(hostname) - Switch name
      • vrf (Label: VRF): label_values(vrf) - VRF name (Multi-value)
      • neighbor (Label: Neighbor): label_values(neighbor) - BGP Neighbor IP address (Multi-value)
      • interface (Label: Interface): label_values(interface) - Switch interface name as defined in the wiring (Multi-value)
      • file (Label: File): label_values(filename) - Name of the log file to inspect (Loki)
      "},{"location":"user-guide/grafana/#switch-critical-resources","title":"Switch Critical Resources","text":"

      This table reports usage and capacity of the ASIC's programmable resources, such as:

      • ACLs
      • IPv4 Routes
      • IPv4 Nexthops
      • IPv4 Neighbors
      • IPMC Table
      • FDB

      JSON

      "},{"location":"user-guide/grafana/#fabric","title":"Fabric","text":"

      Fabric underlay and external peering monitoring, including reporting for:

      • BGP Neighbors
      • BGP Session state
      • Number of BGP Updates and Prefixes sent/received for each BGP Neighbor
      • Keepalive counters

      JSON

      "},{"location":"user-guide/grafana/#interfaces","title":"Interfaces","text":"

      Switch interfaces monitoring visualization that includes:

      • Interface Oper/Admin state
      • Total input/output packets counter
      • Input/output PPS/Bits rate
      • Interface utilization
      • Counters for Unicast/Broadcast/Multicast packets
      • Errors and discards counters

      JSON

      "},{"location":"user-guide/grafana/#logs","title":"Logs","text":"

      System and fabric logs:

      • Kernel and BGP logs from Syslog
      • Errors in agent and syslog
      • Full output of the defined file

      JSON

      "},{"location":"user-guide/grafana/#platform","title":"Platform","text":"

      Information from PSU, temperature sensors and fan trays:

      • Input/output PSU voltage
      • Fan speed
      • Temperature from switch sensors (CPU, PSU, etc)
      • For transceivers with DOM - optic sensor temperature

      JSON

      "},{"location":"user-guide/grafana/#node-exporter","title":"Node Exporter","text":"

      Grafana Node Exporter Full is an open-source Grafana dashboard that provides visualizations for monitoring Linux nodes. In this case, Node Exporter is used to track SONiC OS's own stats, such as:

      • Memory/disks usage
      • CPU/System utilization
      • Networking stats (traffic that hits SONiC interfaces) ...

      JSON

      "},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"

      This section contains an example of how Hedgehog Fabric can be used with Harvester or any hypervisor on the servers connected to Fabric. It assumes that you have already installed Fabric and have some servers running Harvester attached to it.

      You need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.

      You can create multiple VPCs and attach them to the server Connections to make them available to the VMs in Harvester or any other hypervisor.

      "},{"location":"user-guide/harvester/#configure-harvester","title":"Configure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"

      From the \"Cluster Networks/Configs\" side menu, create a new Cluster Network.

      Here is a cleaned-up version of what the CRD looks like:

      apiVersion: network.harvesterhci.io/v1beta1\nkind: ClusterNetwork\nmetadata:\n  name: testnet\n
      "},{"location":"user-guide/harvester/#add-a-network-config","title":"Add a Network Config","text":"

      Click \"Create Network Config\". Add your connections and select the bonding type.

      The resulting CRD (cleaned up) looks like the following:

      apiVersion: network.harvesterhci.io/v1beta1\nkind: VlanConfig\nmetadata:\n  name: testconfig\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\nspec:\n  clusterNetwork: testnet\n  uplink:\n    bondOptions:\n      miimon: 100\n      mode: 802.3ad\n    linkAttributes:\n      txQLen: -1\n    nics:\n      - enp5s0f0\n      - enp3s0f1\n
      "},{"location":"user-guide/harvester/#add-vlan-based-vm-networks","title":"Add VLAN based VM Networks","text":"

      Browse over to \"VM Networks\" and add one network for each VLAN you want to support. Assign them to the cluster network.

      Here is what the CRDs will look like for both VLANs:

      apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1001'\n  name: testnet1001\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1001\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1001,\"ipam\":{}}\n
      apiVersion: k8s.cni.cncf.io/v1\nkind: NetworkAttachmentDefinition\nmetadata:\n  name: testnet1000\n  labels:\n    network.harvesterhci.io/clusternetwork: testnet\n    network.harvesterhci.io/ready: 'true'\n    network.harvesterhci.io/type: L2VlanNetwork\n    network.harvesterhci.io/vlan-id: '1000'\n    #  key: string\n  namespace: default\nspec:\n  config: >-\n    {\"cniVersion\":\"0.3.1\",\"name\":\"testnet1000\",\"type\":\"bridge\",\"bridge\":\"testnet-br\",\"promiscMode\":true,\"vlan\":1000,\"ipam\":{}}\n
      "},{"location":"user-guide/harvester/#using-the-vpcs","title":"Using the VPCs","text":"

      Now you can choose the new VM Networks when creating a VM in Harvester, and have them created as part of the VPC.

      "},{"location":"user-guide/overview/","title":"Overview","text":"

      This chapter gives an overview of the main features of Hedgehog Fabric and their usage.

      "},{"location":"user-guide/profiles/","title":"Switch Profiles and Port Naming","text":""},{"location":"user-guide/profiles/#switch-profiles","title":"Switch Profiles","text":"

      All supported switches have a SwitchProfile that defines the switch model, supported features, and available ports with their supported configurations, such as port groups and speeds as well as port breakouts. SwitchProfiles are available in-cluster, and generated documentation for them can be found in the Reference section.

      Each switch used in the wiring diagram should have a SwitchProfile referenced in the spec.profile of the Switch object.

      The switch profile defines which features and ports are available on the switch. Based on the ports data in the profile, it's possible to set port speeds (for non-breakout and non-group ports), port group speeds, and port breakout modes in the Switch object in the Fabric API.
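
      For illustration, a Switch object referencing a profile might look like the following. This is a minimal sketch: the profile name, metadata, and role value here are assumptions for the example, not taken from a real profile list.

      ```yaml
      apiVersion: wiring.githedgehog.com/v1beta1
      kind: Switch
      metadata:
        name: leaf-01 # hypothetical switch name
        namespace: default
      spec:
        profile: example-profile # name of an existing SwitchProfile (assumed)
        role: server-leaf
      ```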

      "},{"location":"user-guide/profiles/#port-naming","title":"Port Naming","text":"

      Each switch port is named using one of the following formats:

      • M<management-port-number>

        • <management-port-number> is the management port number starting from 1 (usually only one named 1 for most switches)
      • E<asic-or-chassis-number>/<port-number>[/<breakout>][.<subinterface>]

        • <asic-or-chassis-number> is the ASIC or chassis number (usually only one, named 1, for most switches)
        • <port-number> is the port number on the ASIC or chassis, starting from 1
        • optional /<breakout> is the breakout number for the port, starting from 1; only present for breakout ports, and always consecutively numbered independent of the lane allocation and other implementation details
        • optional .<subinterface> is the subinterface number for the port

      Examples of port names:

      • M1 - management port
      • E1/1 - port 1 on the ASIC or chassis 1, usually a first port on the switch
      • E1/55/1 - first breakout port of the switch port 55 on the ASIC or chassis 1
      "},{"location":"user-guide/profiles/#available-ports","title":"Available Ports","text":"

      Each switch profile defines a set of ports available on the switch. Ports can be divided into the following types.

      "},{"location":"user-guide/profiles/#directly-configurable-ports","title":"Directly configurable ports","text":"

      Non-breakout and non-group ports. These have a reference to a port profile with default and available speeds, and can be configured by setting the speed in the Switch object in the Fabric API:

      .spec:\n  portSpeeds:\n    E1/1: 25G\n
      "},{"location":"user-guide/profiles/#port-groups","title":"Port groups","text":"

      Ports that belong to a port group; non-breakout and not directly configurable. These have a reference to a port group, which in turn references a port profile with default and available speeds. Such ports can't be configured directly; instead, the speed configuration is applied to the whole group in the Switch object in the Fabric API:

      .spec:\n  portGroupSpeeds:\n    \"1\": 10G\n

      This sets the speed of all ports in group 1 to 10G; e.g., if group 1 contains ports E1/1, E1/2, E1/3 and E1/4, all of them will be set to 10G.

      "},{"location":"user-guide/profiles/#breakout-ports","title":"Breakout ports","text":"

      Ports that are breakouts and non-group ports. These have a reference to a port profile with default and available breakout modes, and can be configured by setting the breakout mode in the Switch object in the Fabric API:

      .spec:\n  portBreakouts:\n    E1/55: 4x25G\n

      Configuring a port breakout mode will make \"breakout\" ports available for use in the wiring diagram. The breakout ports are named as E<asic-or-chassis-number>/<port-number>/<breakout>, e.g. E1/55/1, E1/55/2, E1/55/3, E1/55/4 for the example above. Omitting the breakout number is allowed for the first breakout port, e.g. E1/55 is the same as E1/55/1. The breakout ports are always consecutive numbers independent of the lanes allocation and other implementation details.
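
      Putting the three port configuration types together, a single Switch spec can carry all of them at once; a sketch (the port names and speeds are illustrative, and which ports accept which configuration depends on the switch's profile):

      ```yaml
      spec:
        portSpeeds:
          E1/1: 25G    # directly configurable port
        portGroupSpeeds:
          "1": 10G     # applies to every port in group 1
        portBreakouts:
          E1/55: 4x25G # exposes E1/55/1 .. E1/55/4 for use in the wiring diagram
      ```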

      "},{"location":"user-guide/shrink-expand/","title":"Fabric Shrink/Expand","text":"

      This section provides a brief overview of how to add or remove switches within the fabric using the Hedgehog Fabric API, and how to manage connections between them.

      Manipulating API objects is done with the assumption that target devices are correctly cabled and connected.

      This article uses terms that can be found in the Hedgehog Concepts, the User Guide documentation, and the Fabric API reference.

      "},{"location":"user-guide/shrink-expand/#add-a-switch-to-the-existing-fabric","title":"Add a switch to the existing fabric","text":"

      In order to be added to the Hedgehog Fabric, a switch should have a corresponding Switch object. An example of how to define this object is available in the User Guide.

      Note

      If the Switch will be used in ESLAG or MCLAG groups, the appropriate groups should exist. Redundancy groups should be specified in the Switch object before creation.
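
      As a sketch of what the note above implies, a Switch destined for an MCLAG pair could declare its redundancy group at creation time. The field layout, group name, and type values below are assumptions for illustration; check the Fabric API reference for the authoritative schema.

      ```yaml
      apiVersion: wiring.githedgehog.com/v1beta1
      kind: Switch
      metadata:
        name: leaf-01 # hypothetical switch name
        namespace: default
      spec:
        redundancy:      # assumed field layout
          group: mclag-1 # redundancy group that must exist before the Switch is created
          type: mclag    # or eslag for ESLAG groups
      ```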

      After the Switch object has been created, you can define and create dedicated device Connections. The types of the connections may differ based on the Switch role given to the device. For more details, refer to the Connections section.

      Note

      Switch devices should be booted in ONIE installation mode to install SONiC OS and configure the Fabric Agent.

      Ensure the management port of the switch is connected to the fabric management network.

      "},{"location":"user-guide/shrink-expand/#remove-a-switch-from-the-existing-fabric","title":"Remove a switch from the existing fabric","text":"

      Before you decommission a switch from the Hedgehog Fabric, several preparation steps are necessary.

      Warning

      Currently, the wiring diagram used for the initial deployment is saved in /var/lib/rancher/k3s/server/manifests/hh-wiring.yaml on the Control node. Fabric will maintain the objects defined in the original wiring diagram. In order to remove any object, first remove the dedicated API objects from this file. It is recommended to reapply hh-wiring.yaml after changing its contents.

      • If the Switch is a Leaf switch (including Mixed and Border leaf configurations), remove all VPCAttachments bound to the switch's Connections.
      • If the Switch was used for ExternalPeering, remove all ExternalAttachment objects that are bound to the Connections of the Switch.
      • Remove all connections of the Switch.
      • Finally, remove the Switch and Agent objects.
      "},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"

      A Virtual Private Cloud (VPC) is similar to a public cloud VPC. It provides an isolated private network with support for multiple subnets, each with user-defined VLANs and optional DHCP services.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-1\n  namespace: default\nspec:\n  ipv4Namespace: default # Limits which subnets the VPC can use to guarantee non-overlapping IPv4 ranges\n  vlanNamespace: default # Limits which VLAN IDs the VPC can use to guarantee non-overlapping VLANs\n\n  defaultIsolated: true # Sets default behavior for the current VPC subnets to be isolated\n  defaultRestricted: true # Sets default behavior for the current VPC subnets to be restricted\n\n  subnets:\n    default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n      dhcp:\n        enable: true # On-demand DHCP server\n        range: # Optionally, start/end range could be specified, otherwise all available IPs are used\n          start: 10.10.1.10\n          end: 10.10.1.99\n        options: # Optional, additional DHCP options to enable for DHCP server, only available when enable is true\n          pxeURL: tftp://10.10.10.99/bootfilename # PXEURL (optional) to identify the PXE server to use to boot hosts; HTTP query strings are not supported\n          dnsServers: # (optional) configure DNS servers\n            - 1.1.1.1\n          timeServers: # (optional) configure Time (NTP) Servers\n            - 1.1.1.1\n          interfaceMTU: 1500 # (optional) configure the MTU (default is 9036); doesn't affect the actual MTU of the switch interfaces\n      subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n      gateway: 10.10.1.1 # User-defined gateway (optional, default is .1)\n      vlan: 1001 # User-defined VLAN from VLAN namespace\n      isolated: true # Makes subnet isolated from other subnets within the VPC (doesn't affect VPC peering)\n      restricted: true # Causes all hosts in the subnet to be isolated from each other\n\n    third-party-dhcp: # Another subnet\n      dhcp:\n        relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using 
StaticExternal connection\n      subnet: \"10.10.2.0/24\"\n      vlan: 1002\n\n    another-subnet: # Minimal configuration is just a name, subnet and VLAN\n      subnet: 10.10.100.0/24\n      vlan: 1100\n\n  permit: # Defines which subnets of the current VPC can communicate to each other, applied on top of subnets \"isolated\" flag (doesn't affect VPC peering)\n    - [subnet-1, subnet-2, subnet-3] # 1, 2 and 3 subnets can communicate to each other\n    - [subnet-4, subnet-5] # Possible to define multiple lists\n\n  staticRoutes: # Optional, static routes to be added to the VPC\n    - prefix: 10.100.0.0/24 # Destination prefix\n      nextHops: # Next hop IP addresses\n        - 10.200.0.0\n
      "},{"location":"user-guide/vpcs/#isolated-and-restricted-subnets-permit-lists","title":"Isolated and restricted subnets, permit lists","text":"

      Subnets can be isolated and restricted, with the ability to define permit lists to allow communication between specific isolated subnets. The permit list is applied on top of the isolated flag and doesn't affect VPC peering.

      Isolated subnet means that the subnet has no connectivity with other subnets within the VPC, but it could still be allowed by permit lists.

      Restricted subnet means that all hosts in the subnet are isolated from each other within the subnet.

      A permit list is a list of sets; each set contains subnets that can communicate with each other.
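
      A minimal sketch of how isolation and a permit list combine (the subnet names, ranges, and VLANs below are made up for illustration):

      ```yaml
      spec:
        defaultIsolated: true # all subnets in this VPC start isolated from each other
        subnets:
          app:
            subnet: 10.10.1.0/24
            vlan: 1001
          db:
            subnet: 10.10.2.0/24
            vlan: 1002
          backup:
            subnet: 10.10.3.0/24
            vlan: 1003
        permit:
          - [app, db] # app and db may communicate despite isolation; backup stays isolated
      ```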

      "},{"location":"user-guide/vpcs/#third-party-dhcp-server-configuration","title":"Third-party DHCP server configuration","text":"

      If you use a third-party DHCP server (by configuring spec.subnets.<subnet>.dhcp.relay), additional information is added to the DHCP packet forwarded to the DHCP server so that the VPC and subnet can be identified. This information is added under RelayAgentInfo (option 82) in the DHCP packet. The relay sets two suboptions in the packet:

      • VirtualSubnetSelection (suboption 151) is populated with the VRF which uniquely identifies a VPC on the Hedgehog Fabric and will be in VrfV<VPC-name> format, for example VrfVvpc-1 for a VPC named vpc-1 in the Fabric API.
      • CircuitID (suboption 1) identifies the VLAN which, together with the VRF (VPC) name, maps to a specific VPC subnet.
      "},{"location":"user-guide/vpcs/#vpcattachment","title":"VPCAttachment","text":"

      A VPCAttachment represents the assignment of a specific VPC subnet to a Connection object, meaning a binding between exact server port(s) and a VPC. It makes the VPC available on those server port(s) on the subnet's VLAN.

      A VPC can only be attached to a switch that is part of the VLAN namespace used by the VPC.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCAttachment\nmetadata:\n  name: vpc-1-server-1--mclag--s5248-01--s5248-02\n  namespace: default\nspec:\n  connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n  subnet: vpc-1/default # VPC subnet name\n  nativeVLAN: true # (Optional) if true, the port will be configured as a native VLAN port (untagged)\n
      "},{"location":"user-guide/vpcs/#vpcpeering","title":"VPCPeering","text":"

      A VPCPeering enables VPC-to-VPC connectivity. There are two types of VPC peering:

      • Local: peering is implemented on the same switches where VPCs are attached
      • Remote: peering is implemented on the border/mixed leaves defined by the SwitchGroup object

      VPC peering is only possible between VPCs attached to the same IPv4 namespace (see IPv4Namespace).

      "},{"location":"user-guide/vpcs/#local-vpc-peering","title":"Local VPC peering","text":"
      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # Defines a pair of VPCs to peer\n  - vpc-1: {} # Meaning all subnets of two VPCs will be able to communicate with each other\n    vpc-2: {} # See \"Subnet filtering\" for more advanced configuration\n
      "},{"location":"user-guide/vpcs/#remote-vpc-peering","title":"Remote VPC peering","text":"
      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit:\n  - vpc-1: {}\n    vpc-2: {}\n  remote: border # Indicates a switch group to implement the peering on\n
      "},{"location":"user-guide/vpcs/#subnet-filtering","title":"Subnet filtering","text":"

      It's possible to specify which subnets of the peering VPCs can communicate with each other using the permit field.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n  name: vpc-1--vpc-2\n  namespace: default\nspec:\n  permit: # subnet-1 and subnet-2 of vpc-1 can communicate with subnet-3 of vpc-2, and subnet-4 of vpc-1 can communicate with subnet-5 and subnet-6 of vpc-2\n  - vpc-1:\n      subnets: [subnet-1, subnet-2]\n    vpc-2:\n      subnets: [subnet-3]\n  - vpc-1:\n      subnets: [subnet-4]\n    vpc-2:\n      subnets: [subnet-5, subnet-6]\n
      "},{"location":"user-guide/vpcs/#ipv4namespace","title":"IPv4Namespace","text":"

      An IPv4Namespace defines a set of (non-overlapping) IPv4 address ranges available for use by VPC subnets. Each VPC belongs to a specific IPv4 namespace. Therefore, its subnet prefixes must be from that IPv4 namespace.

      apiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  subnets: # List of prefixes that VPCs can pick their subnets from\n  - 10.10.0.0/16\n
      "},{"location":"user-guide/vpcs/#vlannamespace","title":"VLANNamespace","text":"

      A VLANNamespace defines a set of VLAN ranges available for attaching servers to switches. Each switch can belong to one or more disjoint VLANNamespaces.

      apiVersion: wiring.githedgehog.com/v1beta1\nkind: VLANNamespace\nmetadata:\n  name: default\n  namespace: default\nspec:\n  ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n  - from: 1000\n    to: 2999\n
      "},{"location":"vlab/demo/","title":"Demo on VLAB","text":""},{"location":"vlab/demo/#goals","title":"Goals","text":"

      The goal of this demo is to show how to use VPCs: attach and peer them, and test connectivity between the servers. Examples are based on the default VLAB topology.

      You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.

      "},{"location":"vlab/demo/#default-topology","title":"Default topology","text":"

      The default topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves, and 1 non-MCLAG leaf. Optionally, you can choose to run the Collapsed Core topology using the flag --fabric-mode collapsed-core (or -m collapsed-core), which consists of only 2 switches.

      For more details on customizing topologies, see the Running VLAB section.

      In the default topology, the following Control Node and Switch VMs are created. The Control Node is connected to every switch; these lines are omitted for clarity:

      graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n\n    L1([MCLAG Leaf 1])\n    L2([MCLAG Leaf 2])\n    L3([ESLAG Leaf 3])\n    L4([ESLAG Leaf 4])\n    L5([Leaf 5])\n\n\n    L1 & L2 & L5 & L3 & L4 --> S1 & S2

      As well as the following test servers, as above Control Node connections are omitted:

      graph TD\n    S1([Spine 1])\n    S2([Spine 2])\n    L1([MCLAG Leaf 1])\n    L2([MCLAG Leaf 2])\n    L3([ESLAG Leaf 3])\n    L4([ESLAG Leaf 4])\n    L5([Leaf 5])\n\n    TS1[Server 1]\n    TS2[Server 2]\n    TS3[Server 3]\n    TS4[Server 4]\n    TS5[Server 5]\n    TS6[Server 6]\n    TS7[Server 7]\n    TS8[Server 8]\n    TS9[Server 9]\n    TS10[Server 10]\n\n    subgraph MCLAG\n    L1\n    L2\n    end\n    TS3 --> L1\n    TS1 --> L1\n    TS1 --> L2\n\n    TS2 --> L1\n    TS2 --> L2\n\n    TS4 --> L2\n\n    subgraph ESLAG\n    L3\n    L4\n    end\n\n    TS7 --> L3\n    TS5 --> L3\n    TS5 --> L4\n    TS6 --> L3\n    TS6 --> L4\n\n    TS8 --> L4\n    TS9 --> L5\n    TS10 --> L5\n\n    L1 & L2 & L3 & L4 & L5 <----> S1 & S2
      "},{"location":"vlab/demo/#utility-based-vpc-creation","title":"Utility based VPC creation","text":""},{"location":"vlab/demo/#setup-vpcs","title":"Setup VPCs","text":"

      hhfab vlab includes a utility to create VPCs in VLAB. This utility is a hhfab vlab sub-command: hhfab vlab setup-vpcs.

      NAME:\n   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them\n\nUSAGE:\n   hhfab vlab setup-vpcs [command options]\n\nOPTIONS:\n   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP\n   --force-clenup, -f                                                       start with removing all existing VPCs and VPCAttachments (default: false)\n   --help, -h                                                               show help\n   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)\n   --ipns value                                                             IPv4 namespace for VPCs (default: \"default\")\n   --name value, -n value                                                   name of the VM or HW to access\n   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)\n   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)\n   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP\n   --vlanns value                                                           VLAN namespace for VPCs (default: \"default\")\n   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: 
\"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
      "},{"location":"vlab/demo/#setup-peering","title":"Setup Peering","text":"

      hhfab vlab includes a utility to create VPC peerings in VLAB. This utility is a hhfab vlab sub-command: hhfab vlab setup-peerings.

      NAME:\n   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)\n\nUSAGE:\n   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.\n\n   Example command:\n\n   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24\n\n   Which will produce:\n   1. VPC peering between vpc-01 and vpc-02\n   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border\n   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted\n   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route\n      from external permitted as well any route that belongs to 22.22.22.0/24\n\n   VPC Peerings:\n\n   1+2 -- VPC peering between vpc-01 and vpc-02\n   demo-1+demo-2 -- VPC peering between demo-1 and demo-2\n   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present\n   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border\n   1+2:remote=border -- same as above\n\n   External Peerings:\n\n   1~as5835 -- external peering for vpc-01 with External as5835\n   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing\n     default subnet and any route from external\n   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and\n     default route from external permitted\n   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details\n   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above\n\nOPTIONS:\n   --help, -h                     show help\n   --name value, -n value         name of the VM or HW to access\n   --wait-switches-ready, --wait  wait for 
switches to be ready before before and after configuring peerings (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
      "},{"location":"vlab/demo/#test-connectivity","title":"Test Connectivity","text":"

      hhfab vlab includes a utility to test connectivity between servers inside VLAB. This utility is a hhfab vlab sub-command: hhfab vlab test-connectivity.

      NAME:\n   hhfab vlab test-connectivity - test connectivity between all servers\n\nUSAGE:\n   hhfab vlab test-connectivity [command options]\n\nOPTIONS:\n   --curls value                  number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)\n   --help, -h                     show help\n   --iperfs value                 seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)\n   --iperfs-speed value           minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)\n   --name value, -n value         name of the VM or HW to access\n   --pings value                  number of pings to send between each pair of servers (0 to disable) (default: 5)\n   --wait-switches-ready, --wait  wait for switches to be ready before testing connectivity (default: true)\n\n   Global options:\n\n   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
      "},{"location":"vlab/demo/#manual-vpc-creation","title":"Manual VPC creation","text":""},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"

      You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the Control Node, or outside of the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs with a single subnet each, a DHCP server enabled with an optional IP address range start defined, and to attach them to some of the test servers:

      core@control-1 ~ $ kubectl get conn | grep server\nserver-01--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-02--mclag--leaf-01--leaf-02   mclag          5h13m\nserver-03--unbundled--leaf-01        unbundled      5h13m\nserver-04--bundled--leaf-02          bundled        5h13m\nserver-05--unbundled--leaf-03        unbundled      5h13m\nserver-06--bundled--leaf-03          bundled        5h13m\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10\n06:48:46 INF VPC created name=vpc-1\n\ncore@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10\n06:49:04 INF VPC created name=vpc-2\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02\n06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02\n\ncore@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02\n06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02\n

      The VPC subnet should belong to an IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

      core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   5h14m\n

      After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration was applied to the switches:

      core@control-1 ~ $ kubectl get agents\nNAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION\nleaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0\nleaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0\nleaf-03    server-leaf   VS-03           112s      5          5          v0.23.0\nspine-01   spine         VS-04           16m       3          3          v0.23.0\nspine-02   spine         VS-05           18m       4          4          v0.23.0\n

      In this example, the values in the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration has been applied.

      "},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"

      You can use hhfab vlab ssh on the host to SSH into the test servers and configure networking there. For example, server-01 (MCLAG-attached to both leaf-01 and leaf-02) needs a bond with a VLAN on top of it, while server-05 (single-homed, unbundled, attached to leaf-03) needs just a VLAN; both then get an IP address from the DHCP server. You can configure networking on the servers with the ip command, or use hhnet, a small helper pre-installed by Fabricator on the test servers.

      For server-01:

      core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n       valid_lft 86396sec preferred_lft 86396sec\n    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n       valid_lft forever preferred_lft forever\n

      And for server-02:

      core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n       valid_lft 86185sec preferred_lft 86185sec\n    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n       valid_lft forever preferred_lft forever\n
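      If you prefer plain iproute2 to the hhnet helper, the bond-plus-VLAN setup above can be sketched roughly as follows. This is an approximation: the exact commands hhnet runs are not shown in this guide, and the bond mode is an assumption.

      ```
      # Rough iproute2 equivalent of "hhnet bond 1001 enp2s1 enp2s2" (sketch; hhnet's exact steps are assumed)
      ip link add bond0 type bond mode 802.3ad                   # LACP bond (assumed mode for MCLAG-attached servers)
      ip link set enp2s1 down; ip link set enp2s1 master bond0
      ip link set enp2s2 down; ip link set enp2s2 master bond0
      ip link set bond0 up
      ip link add link bond0 name bond0.1001 type vlan id 1001   # VLAN 1001 on top of the bond
      ip link set bond0.1001 up
      dhclient bond0.1001                                        # request an address from the Fabric DHCP server
      ```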
      "},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"

      You can test connectivity between the servers before peering the VPCs using the ping command:

      core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
      core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
      "},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"

      To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:

      core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2\n07:04:58 INF VPCPeering created name=vpc-1--vpc-2\n

      Make sure to wait until the peering is applied to the switches using the kubectl get agents command. After that, you can test connectivity between the servers again:

      core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
      core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n

      If you delete the VPCPeering object with kubectl delete and wait for the agents to apply the configuration on the switches, you can observe that connectivity is lost again:

      core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2\nvpcpeering.vpc.githedgehog.com \"vpc-1--vpc-2\" deleted\n
      core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n

      You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior, caused by limitations of the VLAB environment.

      core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
      "},{"location":"vlab/demo/#using-vpcs-with-overlapping-subnets","title":"Using VPCs with overlapping subnets","text":"

      First, create a second IPv4Namespace with the same subnet as the default one:

      core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   24m\n\ncore@control-1 ~ $ cat <<EOF > ipns-2.yaml\napiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n  name: ipns-2\n  namespace: default\nspec:\n  subnets:\n  - 10.0.0.0/16\nEOF\n\ncore@control-1 ~ $ kubectl apply -f ipns-2.yaml\nipv4namespace.vpc.githedgehog.com/ipns-2 created\n\ncore@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   30m\nipns-2    [\"10.0.0.0/16\"]   8s\n

      Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now you can create vpc-3 with the same subnet as vpc-1 (but in a different IPv4Namespace) and attach it to server-03:

      core@control-1 ~ $ cat <<EOF > vpc-3.yaml\napiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n  name: vpc-3\n  namespace: default\nspec:\n  ipv4Namespace: ipns-2\n  subnets:\n    default:\n      dhcp:\n        enable: true\n        range:\n          start: 10.0.1.10\n      subnet: 10.0.1.0/24\n      vlan: 2001\n  vlanNamespace: default\nEOF\n\ncore@control-1 ~ $ kubectl apply -f vpc-3.yaml\n
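      The YAML above only creates the VPC; attaching it to server-03 also needs a VPCAttachment that references the existing unbundled connection. A sketch, following the naming patterns shown earlier in this guide (the spec field names here are assumptions; check them against the Fabric API reference):

      ```
      apiVersion: vpc.githedgehog.com/v1beta1
      kind: VPCAttachment
      metadata:
        name: vpc-3--default--server-03--unbundled--leaf-01
        namespace: default
      spec:
        connection: server-03--unbundled--leaf-01   # assumed field name
        subnet: vpc-3/default                       # assumed field name
      ```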

      At that point, you can set up networking on server-03 the same way you did for server-01 and server-02 in the previous section. Once networking is configured, server-01 and server-03 have IP addresses from the same subnet.

      "},{"location":"vlab/overview/","title":"VLAB Overview","text":"

      It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out Fabric and learn about its look and feel, API, and capabilities. It's not suitable for any data plane or performance testing, or for production use.

      In the VLAB all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery, boot and installation process like on real hardware.

      "},{"location":"vlab/overview/#hhfab","title":"HHFAB","text":"

      The hhfab CLI provides a special vlab command to manage the virtual labs. It runs the set of virtual machines that simulate the Fabric infrastructure (control node, switches, and test servers) and automatically runs the installer to get Fabric up and running.

      You can find more information about getting hhfab in the download section.

      "},{"location":"vlab/overview/#system-requirements","title":"System Requirements","text":"

      Currently, it's only tested on Ubuntu 22.04 LTS, but should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

      The following packages need to be installed: qemu-kvm and socat. Docker is also required, to log into the OCI registry.

      By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, and 1 non-MCLAG leaf. Optionally, you can run the default Collapsed Core topology, which consists of only 2 switches, using the --fabric-mode collapsed-core (or -m collapsed-core) flag.

      You can calculate the system requirements from the resources allocated to the VMs using the following table:

      | Device | vCPU | RAM | Disk |
      |--------------|------|-------|-------|
      | Control Node | 6 | 6GB | 100GB |
      | Test Server | 2 | 768MB | 10GB |
      | Switch | 4 | 5GB | 50GB |

      These numbers give approximately the following requirements for the default topologies:

      • Spine-Leaf: 38 vCPUs, 36352 MB, 410 GB disk
      • Collapsed Core: 22 vCPUs, 19456 MB, 240 GB disk
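      The totals above can be reproduced from the per-VM table. A small sketch of the arithmetic (the VM counts for the default spine-leaf topology, 5 switches and 6 test servers, are inferred from the examples in this guide):

      ```shell
      # Compute VLAB host requirements from the per-VM allocations in the table above.
      controls=1; servers=6; switches=5                           # default spine-leaf: 2 spines + 3 leaves, 6 test servers
      vcpus=$(( controls*6 + servers*2 + switches*4 ))
      ram_mb=$(( controls*6144 + servers*768 + switches*5120 ))   # 6GB / 768MB / 5GB per VM, in MB
      disk_gb=$(( controls*100 + servers*10 + switches*50 ))
      echo "$vcpus vCPUs, $ram_mb MB, $disk_gb GB disk"           # 38 vCPUs, 36352 MB, 410 GB disk
      ```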

      Usually, none of the VMs reaches 100% utilization of its allocated resources, but as a rule of thumb make sure the host has at least the total allocated RAM and disk space available for all VMs.

      An NVMe SSD for VM disks is highly recommended.

      "},{"location":"vlab/overview/#installing-prerequisites","title":"Installing Prerequisites","text":"

      To run VLAB, your system needs docker, qemu/kvm, oras, and hhfab. On Ubuntu 22.04 LTS you can install all required packages using the following commands:

      "},{"location":"vlab/overview/#docker","title":"Docker","text":"
      curl -fsSL https://get.docker.com -o install-docker.sh\nsudo sh install-docker.sh\nsudo usermod -aG docker $USER\nnewgrp docker\n
      "},{"location":"vlab/overview/#qemukvm","title":"Qemu/KVM","text":"
      sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat\nsudo usermod -aG kvm $USER\nnewgrp kvm\nkvm-ok\n

      On a correctly configured system, the output of the kvm-ok command looks like this:

      ubuntu@docs:~$ kvm-ok\nINFO: /dev/kvm exists\nKVM acceleration can be used\n
      "},{"location":"vlab/overview/#oras","title":"Oras","text":"

      For convenience, Hedgehog provides a script to install oras:

      curl -fsSL https://i.hhdev.io/oras | bash\n
      "},{"location":"vlab/overview/#hhfab_1","title":"Hhfab","text":"

      Hedgehog maintains a utility to install and configure VLAB, called hhfab.

      You need a GitHub access token to download hhfab; to obtain one, submit a ticket through the Hedgehog Support Portal. Once in possession of the credentials, use the provided username and token to log into the GitHub container registry:

      docker login ghcr.io --username provided_username --password provided_token\n

      Once logged in, download and run the script:

      curl -fsSL https://i.hhdev.io/hhfab | bash\n
      "},{"location":"vlab/overview/#next-steps","title":"Next steps","text":"
      • Configure and Run VLAB
      "},{"location":"vlab/running/","title":"Running VLAB","text":"

      Make sure to follow the prerequisites and check system requirements in the VLAB Overview section before running VLAB.

      "},{"location":"vlab/running/#initialize-vlab","title":"Initialize VLAB","text":"

      First, initialize Fabricator by running hhfab init --dev. This creates fab.yaml, the main configuration file for the fabric. The command supports several customization options, listed in the output of hhfab init --help.

      ubuntu@docs:~$ hhfab init --dev\n11:26:52 INF Hedgehog Fabricator version=v0.30.0\n11:26:52 INF Generated initial config\n11:26:52 INF Adjust configs (incl. credentials, modes, subnets, etc.) file=fab.yaml\n11:26:52 INF Include wiring files (.yaml) or adjust imported ones dir=include\n
      "},{"location":"vlab/running/#vlab-topology","title":"VLAB Topology","text":"

      By default, hhfab init creates 2 spines, 2 MCLAG leaves, and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links, and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate this topology, run hhfab vlab gen. You can also configure the number of spines, leaves, connections, and so on. For example, the --spines-count and --mclag-leafs-count flags set the number of spines and MCLAG leaves, respectively. For the complete list of options, run hhfab vlab gen -h.

      ubuntu@docs:~$ hhfab vlab gen\n21:27:16 INF Hedgehog Fabricator version=v0.30.0\n21:27:16 INF Building VLAB wiring diagram fabricMode=spine-leaf\n21:27:16 INF >>> spinesCount=2 fabricLinksCount=2\n21:27:16 INF >>> eslagLeafGroups=2\n21:27:16 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2\n21:27:16 INF >>> orphanLeafsCount=1 vpcLoopbacks=2\n21:27:16 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n21:27:16 INF Generated wiring file name=vlab.generated.yaml\n
      You can jump to the instructions to start VLAB, or see the next section for customizing the topology.

      "},{"location":"vlab/running/#collapsed-core","title":"Collapsed Core","text":"

      If a Collapsed Core topology is desired, after the hhfab init --dev step, edit the resulting fab.yaml file and change mode: spine-leaf to mode: collapsed-core:
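      After the edit, the relevant fragment of fab.yaml might look like this (a sketch; the surrounding structure of fab.yaml is abbreviated and assumed, only the mode value comes from this guide):

      ```
      # fab.yaml (fragment; surrounding keys omitted)
      fabric:
        mode: collapsed-core   # was: spine-leaf
      ```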

      ubuntu@docs:~$ hhfab vlab gen\n11:39:02 INF Hedgehog Fabricator version=v0.30.0\n11:39:02 INF Building VLAB wiring diagram fabricMode=collapsed-core\n11:39:02 INF >>> mclagLeafsCount=2 mclagSessionLinks=2 mclagPeerLinks=2\n11:39:02 INF >>> orphanLeafsCount=0 vpcLoopbacks=2\n11:39:02 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n11:39:02 INF Generated wiring file name=vlab.generated.yaml\n
      "},{"location":"vlab/running/#custom-spine-leaf","title":"Custom Spine Leaf","text":"

      Alternatively, you can generate a custom topology, for example with 2 spines, 4 MCLAG leaves, and 2 non-MCLAG leaves, using flags:

      ubuntu@docs:~$ hhfab vlab gen --mclag-leafs-count 4 --orphan-leafs-count 2\n11:41:06 INF Hedgehog Fabricator version=v0.30.0\n11:41:06 INF Building VLAB wiring diagram fabricMode=spine-leaf\n11:41:06 INF >>> spinesCount=2 fabricLinksCount=2\n11:41:06 INF >>> eslagLeafGroups=\"\"\n11:41:06 INF >>> mclagLeafsCount=4 mclagSessionLinks=2 mclagPeerLinks=2\n11:41:06 INF >>> orphanLeafsCount=2 vpcLoopbacks=2\n11:41:06 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1\n11:41:06 INF Generated wiring file name=vlab.generated.yaml\n

      Additionally, you can pass extra Fabric configuration items using flags on init command or by passing a configuration file. For more information, refer to the Fabric Configuration section.

      Once you have initialized the VLAB, download the artifacts and build the installer using hhfab build. This command automatically downloads all required artifacts from the OCI registry and builds the installer and all other prerequisites for running the VLAB.

      "},{"location":"vlab/running/#build-the-installer-and-start-vlab","title":"Build the Installer and Start VLAB","text":"

      To build and start the virtual machines, use hhfab vlab up. For successive runs, use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. hhfab vlab up runs in the foreground and does not return, which allows you to stop all VLAB VMs by simply pressing Ctrl + C.

      ubuntu@docs:~$ hhfab vlab up\n11:48:22 INF Hedgehog Fabricator version=v0.30.0\n11:48:22 INF Wiring hydrated successfully mode=if-not-present\n11:48:22 INF VLAB config created file=vlab/config.yaml\n11:48:22 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:22 INF Building installer control=control-1\n11:48:22 INF Adding recipe bin to installer control=control-1\n11:48:24 INF Adding k3s and tools to installer control=control-1\n11:48:25 INF Adding zot to installer control=control-1\n11:48:25 INF Adding cert-manager to installer control=control-1\n11:48:26 INF Adding config and included wiring to installer control=control-1\n11:48:26 INF Adding airgap artifacts to installer control=control-1\n11:48:36 INF Archiving installer path=/home/ubuntu/result/control-1-install.tgz control=control-1\n11:48:45 INF Creating ignition path=/home/ubuntu/result/control-1-install.ign control=control-1\n11:48:46 INF Taps and bridge are ready count=8\n11:48:46 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:46 INF Preparing new vm=control-1 type=control\n11:48:51 INF Preparing new vm=server-01 type=server\n11:48:52 INF Preparing new vm=server-02 type=server\n11:48:54 INF Preparing new vm=server-03 type=server\n11:48:55 INF Preparing new vm=server-04 type=server\n11:48:57 INF Preparing new vm=server-05 type=server\n11:48:58 INF Preparing new vm=server-06 type=server\n11:49:00 INF Preparing new vm=server-07 type=server\n11:49:01 INF Preparing new vm=server-08 type=server\n11:49:03 INF Preparing new vm=server-09 type=server\n11:49:04 INF Preparing new vm=server-10 type=server\n11:49:05 INF Preparing new vm=leaf-01 type=switch\n11:49:06 INF Preparing new vm=leaf-02 type=switch\n11:49:06 INF Preparing new vm=leaf-03 type=switch\n11:49:06 INF Preparing new vm=leaf-04 type=switch\n11:49:06 INF Preparing new vm=leaf-05 type=switch\n11:49:06 INF Preparing new vm=spine-01 type=switch\n11:49:06 INF Preparing new 
vm=spine-02 type=switch\n11:49:06 INF Starting VMs count=18 cpu=\"54 vCPUs\" ram=\"49664 MB\" disk=\"550 GB\"\n11:49:59 INF Uploading control install vm=control-1 type=control\n11:53:39 INF Running control install vm=control-1 type=control\n11:53:40 INF control-install: 01:53:39 INF Hedgehog Fabricator Recipe version=v0.30.0 vm=control-1\n11:53:40 INF control-install: 01:53:39 INF Running control node installation vm=control-1\n12:00:32 INF control-install: 02:00:31 INF Control node installation complete vm=control-1\n12:00:32 INF Control node is ready vm=control-1 type=control\n12:00:32 INF All VMs are ready\n
      The message INF Control node is ready vm=control-1 type=control in the installer's output means that the installer has finished. After this line has been displayed, you can get into the control node and other VMs to watch the Fabric come up and the switches get provisioned. See Accessing the VLAB.

      "},{"location":"vlab/running/#enable-outside-connectivity-from-vlab-vms","title":"Enable Outside connectivity from VLAB VMs","text":"

      By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can enable connectivity using hhfab vlab up --restrict-servers=false to allow the test servers to access the Internet and the host. When connectivity is enabled, the VMs get a default route pointing to the host, which means that in the case of VPC peering you need to configure the test server VMs to use the VPC attachment as the default route (or routes for specific subnets).
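      For example, instead of replacing the default route, you might route only the peered VPC's subnet through the attachment interface on server-01, keeping Internet access via the host. The addresses below are illustrative, taken from the vpc-1/vpc-2 examples; using 10.0.1.1 as the subnet gateway is an assumption:

      ```
      # On server-01: send traffic for vpc-2's subnet via the VPC attachment, keep the host default route.
      sudo ip route add 10.0.2.0/24 via 10.0.1.1 dev bond0.1001
      ```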

      "},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"

      The hhfab vlab command provides ssh and serial subcommands to access the VMs. Use ssh to get into the control node and test servers after the VMs have started. Use serial to get into the switch VMs while they are provisioning and installing the software; after the switches are installed, you can use ssh to get into them as well.

      You can select the device you want to access interactively, or pass its name using the --vm flag.

      ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190  and / toggles search\nSSH to VM:\n  \ud83e\udd94 control-1\n  server-01\n  server-02\n  server-03\n  server-04\n  server-05\n  server-06\n  leaf-01\n  leaf-02\n  leaf-03\n  spine-01\n  spine-02\n\n----------- VM Details ------------\nID:             0\nName:           control-1\nReady:          true\nBasedir:        .hhfab/vlab-vms/control-1\n
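      To skip the interactive menu, pass the name directly, for example:

      ```
      hhfab vlab ssh --vm server-01      # SSH into a test server
      hhfab vlab serial --vm leaf-01     # serial console of a switch while it is provisioning
      ```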
      "},{"location":"vlab/running/#default-credentials","title":"Default credentials","text":"

      Fabricator creates default users and keys for logging into the control node and test servers, as well as into the SONiC Virtual Switches.

      The default user with password-less sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and password-less sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access to the switch CLI is op with password HHFab.Op!.

      "},{"location":"vlab/running/#use-kubectl-to-interact-with-the-fabric","title":"Use Kubectl to Interact with the Fabric","text":"

      On the control node you have access to kubectl, the Fabric CLI, and k9s to manage the Fabric. To view information about the switches, run kubectl get agents -o wide. After the control node is available, it usually takes about 10-15 minutes for the switches to get installed.

      After the switches are provisioned, the command returns something like this:

      core@control-1 ~ $ kubectl get agents -o wide\nNAME       ROLE          DESCR           HWSKU                      ASIC   HEARTBEAT   APPLIED   APPLIEDG   CURRENTG   VERSION   SOFTWARE                ATTEMPT   ATTEMPTG   AGE\nleaf-01    server-leaf   VS-01 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     30s         5m5s      4          4          v0.23.0   4.1.1-Enterprise_Base   5m5s      4          10m\nleaf-02    server-leaf   VS-02 MCLAG 1   DellEMC-S5248f-P-25G-DPB   vs     27s         3m30s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m30s     3          10m\nleaf-03    server-leaf   VS-03           DellEMC-S5248f-P-25G-DPB   vs     18s         3m52s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m52s     4          10m\nspine-01   spine         VS-04           DellEMC-S5248f-P-25G-DPB   vs     26s         3m59s     3          3          v0.23.0   4.1.1-Enterprise_Base   3m59s     3          10m\nspine-02   spine         VS-05           DellEMC-S5248f-P-25G-DPB   vs     19s         3m53s     4          4          v0.23.0   4.1.1-Enterprise_Base   3m53s     4          10m\n

      The Heartbeat column shows how long ago the switch sent a heartbeat to the control node. The Applied column shows how long ago the switch applied the configuration. AppliedG shows the generation of the configuration that has been applied, while CurrentG shows the generation the switch is supposed to run. Different values for AppliedG and CurrentG mean that the switch is still in the process of applying the configuration.
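      The AppliedG/CurrentG comparison is easy to script. A hypothetical convergence check over a captured sample of the generation columns (in VLAB you would feed it the real kubectl get agents output instead; the column positions in the real output differ, so adjust the awk fields accordingly):

      ```shell
      # Each sample line: <agent name> <APPLIEDG> <CURRENTG>; an agent is converged when the two match.
      sample='leaf-01 5 5
      leaf-02 4 4
      spine-01 3 3'
      pending=$(echo "$sample" | awk '$2 != $3 { print $1 }')
      if [ -z "$pending" ]; then echo converged; else echo "pending: $pending"; fi   # prints "converged"
      ```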

      At that point, the Fabric is ready and you can use kubectl and kubectl fabric to manage it. You can find more about managing the Fabric in the Running Demo and User Guide sections.

      "},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"

      You can list the main Fabric objects by running kubectl get on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.

      For example, to get the list of switches, run:

      core@control-1 ~ $ kubectl get switch\nNAME       ROLE          DESCR           GROUPS   LOCATIONUUID                           AGE\nleaf-01    server-leaf   VS-01 MCLAG 1            5e2ae08a-8ba9-599a-ae0f-58c17cbbac67   6h10m\nleaf-02    server-leaf   VS-02 MCLAG 1            5a310b84-153e-5e1c-ae99-dff9bf1bfc91   6h10m\nleaf-03    server-leaf   VS-03                    5f5f4ad5-c300-5ae3-9e47-f7898a087969   6h10m\nspine-01   spine         VS-04                    3e2c4992-a2e4-594b-bbd1-f8b2fd9c13da   6h10m\nspine-02   spine         VS-05                    96fbd4eb-53b5-5a4c-8d6a-bbc27d883030   6h10m\n

      Similarly, to get the list of servers, run:

      core@control-1 ~ $ kubectl get server\nNAME        TYPE      DESCR                        AGE\ncontrol-1   control   Control node                 6h10m\nserver-01             S-01 MCLAG leaf-01 leaf-02   6h10m\nserver-02             S-02 MCLAG leaf-01 leaf-02   6h10m\nserver-03             S-03 Unbundled leaf-01       6h10m\nserver-04             S-04 Bundled leaf-02         6h10m\nserver-05             S-05 Unbundled leaf-03       6h10m\nserver-06             S-06 Bundled leaf-03         6h10m\n

      For connections, use:

      core@control-1 ~ $ kubectl get connection\nNAME                                 TYPE           AGE\nleaf-01--mclag-domain--leaf-02       mclag-domain   6h11m\nleaf-01--vpc-loopback                vpc-loopback   6h11m\nleaf-02--vpc-loopback                vpc-loopback   6h11m\nleaf-03--vpc-loopback                vpc-loopback   6h11m\nserver-01--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-02--mclag--leaf-01--leaf-02   mclag          6h11m\nserver-03--unbundled--leaf-01        unbundled      6h11m\nserver-04--bundled--leaf-02          bundled        6h11m\nserver-05--unbundled--leaf-03        unbundled      6h11m\nserver-06--bundled--leaf-03          bundled        6h11m\nspine-01--fabric--leaf-01            fabric         6h11m\nspine-01--fabric--leaf-02            fabric         6h11m\nspine-01--fabric--leaf-03            fabric         6h11m\nspine-02--fabric--leaf-01            fabric         6h11m\nspine-02--fabric--leaf-02            fabric         6h11m\nspine-02--fabric--leaf-03            fabric         6h11m\n

      For IPv4 and VLAN namespaces, use:

      core@control-1 ~ $ kubectl get ipns\nNAME      SUBNETS           AGE\ndefault   [\"10.0.0.0/16\"]   6h12m\n\ncore@control-1 ~ $ kubectl get vlanns\nNAME      AGE\ndefault   6h12m\n
      "},{"location":"vlab/running/#reset-vlab","title":"Reset VLAB","text":"

      If VLAB is currently running, press Ctrl + C to stop it. To reset VLAB and start over, run hhfab init -f. The -f flag forces Fabricator to overwrite your existing configuration in fab.yaml.

      "},{"location":"vlab/running/#next-steps","title":"Next steps","text":"
      • Running Demo
      "}]} \ No newline at end of file diff --git a/beta-1/sitemap.xml b/beta-1/sitemap.xml index 54a01422..a15a82ca 100644 --- a/beta-1/sitemap.xml +++ b/beta-1/sitemap.xml @@ -2,147 +2,147 @@ https://docs.githedgehog.com/beta-1/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/architecture/fabric/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/architecture/overview/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/concepts/overview/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/contribute/docs/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/contribute/overview/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/getting-started/download/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/install-upgrade/build-wiring/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/install-upgrade/config/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/install-upgrade/overview/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/install-upgrade/requirements/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/install-upgrade/supported-devices/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/reference/api/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/reference/cli/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/reference/profiles/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/release-notes/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/troubleshooting/overview/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/user-guide/connections/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/user-guide/devices/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/user-guide/external/ - 2024-10-29 + 2024-11-08 daily 
https://docs.githedgehog.com/beta-1/user-guide/grafana/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/user-guide/harvester/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/user-guide/overview/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/user-guide/profiles/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/user-guide/shrink-expand/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/user-guide/vpcs/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/vlab/demo/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/vlab/overview/ - 2024-10-29 + 2024-11-08 daily https://docs.githedgehog.com/beta-1/vlab/running/ - 2024-10-29 + 2024-11-08 daily \ No newline at end of file diff --git a/beta-1/sitemap.xml.gz b/beta-1/sitemap.xml.gz index 1455afa906d234bd7c7b4c21ae7a1b0ccb6be029..1fdb6015fe27ff61ea1696982805f434898129ba 100644 GIT binary patch delta 445 zcmV;u0Yd)I1J45oABzYGdwnjE2OfW`9=6pCd+QUl4-gUqQA3j22~6L-&8Urv;FFinr#{o|_Xyo1Bm1t@-}#t9onRyO(RlKu}Js9O>3frQ|PK zuj|^9FEE=-ZNgELop?j~L#WT~=cao(@!AYL-yZ84lI=Dt#AaNw)Q-MP;6{I#(%9NG z>mG_x7k$;6=Ke!NWlpYpJ->cZmy0^TvC4;Rr)DP;M;H)(cFy?l>#BE>Cz1Yx^sjIR z?ZhXwr6TCT^ysZ1;KYA@@vt5n*`{bA7m|Zp(D${#Lrn~<4DXIm3MVE-#b5!)x@!!{ zxMNfM$h;w_nnyGabYFdNYW07}N0CL&0W)}E%rYYTZ)&z(J;DsJkSublIj8M*DhG?V zaBzHt0gyW$&?b8+s5Xc}&6oxsjvoa941tQN938b_sy#b&N@=TwLh@TBKs8qXEFzIx zP@XqP46}$-w&R6+0gNVy0p{8U<=6q!#2qdD4+ZyZq7y^C n|iWwX^+ delta 445 zcmV;u0Yd)I1J45oABzYG#{3|W2OfW;9=a97-ueW)4-gUqQA0xQ1g39aGqb8tk3Cd4 z7l>u7FUIjt^LC%YX#pgk;;p*W=jsH}CgXLWsLl~37D&5kAx&?Ee8o$=vUqj!=gk^Y19uW$zK z$S1YAAn4w7=&c~&#D9J9upS%Prf6Z#BnLO6?@ES;niyCa-W?!kPE3lj!3>Ud*BFv< z$ENm?c|%ZT9?&?@UGc%G)gyl&MHV>+Oz(v;%ZTj1so8e%2ouCivdF2-bJ}jFa|irvTco diff --git a/beta-1/troubleshooting/overview/index.html b/beta-1/troubleshooting/overview/index.html index 3c60d7f4..18d8d973 100644 --- a/beta-1/troubleshooting/overview/index.html +++ 
b/beta-1/troubleshooting/overview/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/connections/index.html b/beta-1/user-guide/connections/index.html index 11077198..8a817f56 100644 --- a/beta-1/user-guide/connections/index.html +++ b/beta-1/user-guide/connections/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/devices/index.html b/beta-1/user-guide/devices/index.html index d3ad4f68..b8ab0c5c 100644 --- a/beta-1/user-guide/devices/index.html +++ b/beta-1/user-guide/devices/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/external/index.html b/beta-1/user-guide/external/index.html index 02fe947d..7fc7644f 100644 --- a/beta-1/user-guide/external/index.html +++ b/beta-1/user-guide/external/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/grafana/index.html b/beta-1/user-guide/grafana/index.html index 0c6d1cf0..0c7e2743 100644 --- a/beta-1/user-guide/grafana/index.html +++ b/beta-1/user-guide/grafana/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/harvester/index.html b/beta-1/user-guide/harvester/index.html index 7797939b..5e688c0e 100644 --- a/beta-1/user-guide/harvester/index.html +++ b/beta-1/user-guide/harvester/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/overview/index.html b/beta-1/user-guide/overview/index.html index c3010f2a..64e725e3 100644 --- a/beta-1/user-guide/overview/index.html +++ b/beta-1/user-guide/overview/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/profiles/index.html b/beta-1/user-guide/profiles/index.html index 83d1411e..7c1cafee 100644 --- a/beta-1/user-guide/profiles/index.html +++ b/beta-1/user-guide/profiles/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/shrink-expand/index.html 
b/beta-1/user-guide/shrink-expand/index.html index 76234f3d..acb8ca8c 100644 --- a/beta-1/user-guide/shrink-expand/index.html +++ b/beta-1/user-guide/shrink-expand/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/user-guide/vpcs/index.html b/beta-1/user-guide/vpcs/index.html index fcb96ec5..b88caad7 100644 --- a/beta-1/user-guide/vpcs/index.html +++ b/beta-1/user-guide/vpcs/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview diff --git a/beta-1/vlab/demo/index.html b/beta-1/vlab/demo/index.html index 14de0c5e..e02e6fed 100644 --- a/beta-1/vlab/demo/index.html +++ b/beta-1/vlab/demo/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview @@ -493,37 +493,30 @@
    • - - Manual VPC creation + + Utility based VPC creation -

    +

    Utility based VPC creation

    +

    Setup VPCs

    +

hhfab vlab includes a utility to create VPCs in VLAB, available as the sub-command hhfab vlab setup-vpcs.

    +
    NAME:
    +   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them
    +
    +USAGE:
    +   hhfab vlab setup-vpcs [command options]
    +
    +OPTIONS:
    +   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP
    +   --force-clenup, -f                                                       start with removing all existing VPCs and VPCAttachments (default: false)
    +   --help, -h                                                               show help
    +   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)
    +   --ipns value                                                             IPv4 namespace for VPCs (default: "default")
    +   --name value, -n value                                                   name of the VM or HW to access
    +   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)
    +   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)
    +   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP
    +   --vlanns value                                                           VLAN namespace for VPCs (default: "default")
    +   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)
    +
    +   Global options:
    +
    +   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
    +   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
    +   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
    +   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
    +
    +

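The --subnets-per-vpc and --servers-per-subnet options control how test servers are grouped into VPCs and subnets. A small sketch of the grouping idea (hypothetical logic for illustration only; the actual utility may assign servers differently):

```python
# Hypothetical sketch of how setup-vpcs-style options could group servers
# into VPCs and subnets. This is NOT hhfab's implementation, just an
# illustration of the two knobs shown in the help text above.
def group_servers(servers, subnets_per_vpc=1, servers_per_subnet=1):
    per_vpc = subnets_per_vpc * servers_per_subnet  # servers consumed per VPC
    vpcs = {}
    for i, server in enumerate(servers):
        vpc = f"vpc-{i // per_vpc + 1:02d}"
        subnet = f"subnet-{(i % per_vpc) // servers_per_subnet + 1:02d}"
        vpcs.setdefault(vpc, {}).setdefault(subnet, []).append(server)
    return vpcs

# Six servers, two subnets per VPC, one server per subnet -> three VPCs.
layout = group_servers([f"server-{n:02d}" for n in range(1, 7)],
                       subnets_per_vpc=2, servers_per_subnet=1)
```

With these options, server-01 and server-02 land in the two subnets of vpc-01, server-03 and server-04 in vpc-02, and so on.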
    Setup Peering

    +

hhfab vlab includes a utility to create VPC peerings in VLAB, available as the sub-command hhfab vlab setup-peerings.

    +
    NAME:
    +   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)
    +
    +USAGE:
    +   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.
    +
    +   Example command:
    +
    +   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24
    +
    +   Which will produce:
    +   1. VPC peering between vpc-01 and vpc-02
    +   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border
    +   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted
    +   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route
    +      from external permitted as well any route that belongs to 22.22.22.0/24
    +
    +   VPC Peerings:
    +
    +   1+2 -- VPC peering between vpc-01 and vpc-02
    +   demo-1+demo-2 -- VPC peering between demo-1 and demo-2
    +   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present
    +   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border
    +   1+2:remote=border -- same as above
    +
    +   External Peerings:
    +
    +   1~as5835 -- external peering for vpc-01 with External as5835
    +   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing
    +     default subnet and any route from external
    +   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and
    +     default route from external permitted
    +   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details
    +   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above
    +
    +OPTIONS:
    +   --help, -h                     show help
    +   --name value, -n value         name of the VM or HW to access
    +   --wait-switches-ready, --wait  wait for switches to be ready before before and after configuring peerings (default: true)
    +
    +   Global options:
    +
    +   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
    +   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
    +   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
    +   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
    +
    +

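The peering request syntax above ('+' for VPC peerings, '~' for external peerings, ':'-separated options) can be sketched as a tiny parser. This is an illustration of the format only, not hhfab's implementation; option handling (for example the @-separated lists) is simplified:

```python
# Minimal sketch of the setup-peerings request syntax shown above.
# '+' separates two VPCs, '~' marks an external peering; ':'-separated
# key=value options follow. Illustration only, not hhfab's actual parser.
def parse_request(req):
    if "~" in req:
        left, right = req.split("~", 1)
        kind = "external"
    else:
        left, right = req.split("+", 1)
        kind = "vpc"
    parts = right.split(":")
    target, opts = parts[0], {}
    for opt in parts[1:]:
        key, _, value = opt.partition("=")
        opts[key] = value
    return {"kind": kind, "left": left, "target": target, "options": opts}

# '2+4:r=border' -> remote VPC peering on the switch group named border.
parsed = parse_request("2+4:r=border")
```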
    Test Connectivity

    +

hhfab vlab includes a utility to test connectivity between servers inside VLAB, available as the sub-command hhfab vlab test-connectivity.

    +
    NAME:
    +   hhfab vlab test-connectivity - test connectivity between all servers
    +
    +USAGE:
    +   hhfab vlab test-connectivity [command options]
    +
    +OPTIONS:
    +   --curls value                  number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)
    +   --help, -h                     show help
    +   --iperfs value                 seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)
    +   --iperfs-speed value           minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)
    +   --name value, -n value         name of the VM or HW to access
    +   --pings value                  number of pings to send between each pair of servers (0 to disable) (default: 5)
    +   --wait-switches-ready, --wait  wait for switches to be ready before testing connectivity (default: true)
    +
    +   Global options:
    +
    +   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
    +   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
    +   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
    +   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
    +

    Manual VPC creation

    Creating and attaching VPCs

You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the Control Node, or outside of the cluster using the kubeconfig. For example, run the following commands to create two VPCs, each with a single subnet and a DHCP server enabled with an optional IP address range start, and to attach them to some of the test servers:

    -
    core@control-1 ~ $ kubectl get conn | grep server
    -server-01--mclag--leaf-01--leaf-02   mclag          5h13m
    -server-02--mclag--leaf-01--leaf-02   mclag          5h13m
    -server-03--unbundled--leaf-01        unbundled      5h13m
    -server-04--bundled--leaf-02          bundled        5h13m
    -server-05--unbundled--leaf-03        unbundled      5h13m
    -server-06--bundled--leaf-03          bundled        5h13m
    -
    -core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10
    -06:48:46 INF VPC created name=vpc-1
    -
    -core@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10
    -06:49:04 INF VPC created name=vpc-2
    -
    -core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02
    -06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02
    -
    -core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02
    -06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02
    +
    core@control-1 ~ $ kubectl get conn | grep server
    +server-01--mclag--leaf-01--leaf-02   mclag          5h13m
    +server-02--mclag--leaf-01--leaf-02   mclag          5h13m
    +server-03--unbundled--leaf-01        unbundled      5h13m
    +server-04--bundled--leaf-02          bundled        5h13m
    +server-05--unbundled--leaf-03        unbundled      5h13m
    +server-06--bundled--leaf-03          bundled        5h13m
    +
    +core@control-1 ~ $ kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10
    +06:48:46 INF VPC created name=vpc-1
    +
    +core@control-1 ~ $ kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10
    +06:49:04 INF VPC created name=vpc-2
    +
    +core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02
    +06:49:24 INF VPCAttachment created name=vpc-1--default--server-01--mclag--leaf-01--leaf-02
    +
    +core@control-1 ~ $ kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02
    +06:49:34 INF VPCAttachment created name=vpc-2--default--server-02--mclag--leaf-01--leaf-02
     

The VPC subnet should belong to an IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:

    -
    core@control-1 ~ $ kubectl get ipns
    -NAME      SUBNETS           AGE
    -default   ["10.0.0.0/16"]   5h14m
    +
    core@control-1 ~ $ kubectl get ipns
    +NAME      SUBNETS           AGE
    +default   ["10.0.0.0/16"]   5h14m
     

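This containment requirement can be checked with Python's ipaddress module (a sketch; the fabric performs this validation itself):

```python
import ipaddress

# Check that a proposed VPC subnet falls inside one of the
# IPv4Namespace prefixes (here, the VLAB default 10.0.0.0/16).
def subnet_in_namespace(subnet, namespace_prefixes):
    net = ipaddress.ip_network(subnet)
    return any(net.subnet_of(ipaddress.ip_network(p)) for p in namespace_prefixes)

ok = subnet_in_namespace("10.0.1.0/24", ["10.0.0.0/16"])       # inside the namespace
bad = subnet_in_namespace("192.168.1.0/24", ["10.0.0.0/16"])   # outside, would be rejected
```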
After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration was applied to the switches:

    -
    core@control-1 ~ $ kubectl get agents
    -NAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION
    -leaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0
    -leaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0
    -leaf-03    server-leaf   VS-03           112s      5          5          v0.23.0
    -spine-01   spine         VS-04           16m       3          3          v0.23.0
    -spine-02   spine         VS-05           18m       4          4          v0.23.0
    +
    core@control-1 ~ $ kubectl get agents
    +NAME       ROLE          DESCR           APPLIED   APPLIEDG   CURRENTG   VERSION
    +leaf-01    server-leaf   VS-01 MCLAG 1   2m2s      5          5          v0.23.0
    +leaf-02    server-leaf   VS-02 MCLAG 1   2m2s      4          4          v0.23.0
    +leaf-03    server-leaf   VS-03           112s      5          5          v0.23.0
    +spine-01   spine         VS-04           16m       3          3          v0.23.0
    +spine-02   spine         VS-05           18m       4          4          v0.23.0
     

In this example, the values in columns APPLIEDG and CURRENTG are equal, which means that the requested configuration has been applied.

    @@ -1540,275 +1642,173 @@

Setting up networking on test servers

Once a VPC is attached, a test server will get an IP address from the DHCP server. You can use the ip command to configure networking on the servers, or use the little helper pre-installed by Fabricator on test servers, hhnet.

    For server-01:

    -
    core@server-01 ~ $ hhnet cleanup
    -core@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2
    -10.0.1.10/24
    -core@server-01 ~ $ ip a
    -...
    -3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    -    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01
    -4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    -    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02
    -6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    -    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
    -    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
    -       valid_lft forever preferred_lft forever
    -7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    -    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
    -    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001
    -       valid_lft 86396sec preferred_lft 86396sec
    -    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
    -       valid_lft forever preferred_lft forever
    +
    core@server-01 ~ $ hhnet cleanup
    +core@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2
    +10.0.1.10/24
    +core@server-01 ~ $ ip a
    +...
    +3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    +    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01
    +4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    +    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02
    +6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    +    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
    +    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
    +       valid_lft forever preferred_lft forever
    +7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    +    link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
    +    inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001
    +       valid_lft 86396sec preferred_lft 86396sec
    +    inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
    +       valid_lft forever preferred_lft forever
     

    And for server-02:

    -
    core@server-02 ~ $ hhnet cleanup
    -core@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2
    -10.0.2.10/24
    -core@server-02 ~ $ ip a
    -...
    -3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    -    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01
    -4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    -    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02
    -8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    -    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
    -    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
    -       valid_lft forever preferred_lft forever
    -9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    -    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
    -    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002
    -       valid_lft 86185sec preferred_lft 86185sec
    -    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
    -       valid_lft forever preferred_lft forever
    +
    core@server-02 ~ $ hhnet cleanup
    +core@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2
    +10.0.2.10/24
    +core@server-02 ~ $ ip a
    +...
    +3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    +    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01
    +4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    +    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02
    +8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    +    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
    +    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
    +       valid_lft forever preferred_lft forever
    +9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    +    link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
    +    inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002
    +       valid_lft 86185sec preferred_lft 86185sec
    +    inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
    +       valid_lft forever preferred_lft forever
     

    Testing connectivity before peering

You can test connectivity between the servers before peering the VPCs by using the ping command:

    -
    core@server-01 ~ $ ping 10.0.2.10
    -PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
    -From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
    -From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
    -From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
    -^C
    ---- 10.0.2.10 ping statistics ---
    -3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms
    +
    core@server-01 ~ $ ping 10.0.2.10
    +PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
    +From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
    +From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
    +From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
    +^C
    +--- 10.0.2.10 ping statistics ---
    +3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms
     
    -
    core@server-02 ~ $ ping 10.0.1.10
    -PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
    -From 10.0.2.1 icmp_seq=1 Destination Net Unreachable
    -From 10.0.2.1 icmp_seq=2 Destination Net Unreachable
    -From 10.0.2.1 icmp_seq=3 Destination Net Unreachable
    -^C
    ---- 10.0.1.10 ping statistics ---
    -3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms
    +
    core@server-02 ~ $ ping 10.0.1.10
    +PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
    +From 10.0.2.1 icmp_seq=1 Destination Net Unreachable
    +From 10.0.2.1 icmp_seq=2 Destination Net Unreachable
    +From 10.0.2.1 icmp_seq=3 Destination Net Unreachable
    +^C
    +--- 10.0.1.10 ping statistics ---
    +3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms
     

    Peering VPCs and testing connectivity

    To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:

    -
    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2
    -07:04:58 INF VPCPeering created name=vpc-1--vpc-2
    +
    core@control-1 ~ $ kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2
    +07:04:58 INF VPCPeering created name=vpc-1--vpc-2
     

Make sure to wait until the peering is applied to the switches, using the kubectl get agents command. After that, you can test connectivity between the servers again:

    -
    core@server-01 ~ $ ping 10.0.2.10
    -PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
    -64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms
    -64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms
    -64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms
    -^C
    ---- 10.0.2.10 ping statistics ---
    -3 packets transmitted, 3 received, 0% packet loss, time 2004ms
    -rtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms
    +
    core@server-01 ~ $ ping 10.0.2.10
    +PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
    +64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms
    +64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms
    +64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms
    +^C
    +--- 10.0.2.10 ping statistics ---
    +3 packets transmitted, 3 received, 0% packet loss, time 2004ms
    +rtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms
     
    -
    core@server-02 ~ $ ping 10.0.1.10
    -PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
    -64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms
    -64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms
    -64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms
    -^C
    ---- 10.0.1.10 ping statistics ---
    -3 packets transmitted, 3 received, 0% packet loss, time 2004ms
    -rtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms
    +
    core@server-02 ~ $ ping 10.0.1.10
    +PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
    +64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms
    +64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms
    +64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms
    +^C
    +--- 10.0.1.10 ping statistics ---
    +3 packets transmitted, 3 received, 0% packet loss, time 2004ms
    +rtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms
     

If you delete the VPC peering by running kubectl delete on the relevant object and wait for the agents to apply the configuration on the switches, you can observe that connectivity is lost again:

    -
    core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2
    -vpcpeering.vpc.githedgehog.com "vpc-1--vpc-2" deleted
    +
    core@control-1 ~ $ kubectl delete vpcpeering/vpc-1--vpc-2
    +vpcpeering.vpc.githedgehog.com "vpc-1--vpc-2" deleted
     
    -
    core@server-01 ~ $ ping 10.0.2.10
    -PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
    -From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
    -From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
    -From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
    -^C
    ---- 10.0.2.10 ping statistics ---
    -3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms
    +
    core@server-01 ~ $ ping 10.0.2.10
    +PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
    +From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
    +From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
    +From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
    +^C
    +--- 10.0.2.10 ping statistics ---
    +3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms
     

You can see duplicate packets in the output of the ping command between some of the servers. This is expected behavior, caused by limitations of the VLAB environment.

    -
    core@server-01 ~ $ ping 10.0.5.10
    -PING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.
    -64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms
    -64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)
    -64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms
    -64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)
    -64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms
    -64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)
    -^C
    ---- 10.0.5.10 ping statistics ---
    -3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
    -rtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms
    +
    core@server-01 ~ $ ping 10.0.5.10
    +PING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.
    +64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms
    +64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)
    +64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms
    +64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)
    +64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms
    +64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)
    +^C
    +--- 10.0.5.10 ping statistics ---
    +3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
    +rtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms
     
    -

    Utility based VPC creation

    -

    Setup VPCs

    -

    hhfab vlab includes a utility to create VPCs in vlab. This utility is a hhfab vlab sub-command. hhfab vlab setup-vpcs.

    -
    NAME:
    -   hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them
    -
    -USAGE:
    -   hhfab vlab setup-vpcs [command options]
    -
    -OPTIONS:
    -   --dns-servers value, --dns value [ --dns-servers value, --dns value ]    DNS servers for VPCs advertised by DHCP
    -   --force-clenup, -f                                                       start with removing all existing VPCs and VPCAttachments (default: false)
    -   --help, -h                                                               show help
    -   --interface-mtu value, --mtu value                                       interface MTU for VPCs advertised by DHCP (default: 0)
    -   --ipns value                                                             IPv4 namespace for VPCs (default: "default")
    -   --name value, -n value                                                   name of the VM or HW to access
    -   --servers-per-subnet value, --servers value                              number of servers per subnet (default: 1)
    -   --subnets-per-vpc value, --subnets value                                 number of subnets per VPC (default: 1)
    -   --time-servers value, --ntp value [ --time-servers value, --ntp value ]  Time servers for VPCs advertised by DHCP
    -   --vlanns value                                                           VLAN namespace for VPCs (default: "default")
    -   --wait-switches-ready, --wait                                            wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)
    -
    -   Global options:
    -
    -   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
    -   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
    -   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
    -   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
    -
    -

    Setup Peering

    -

    hhfab vlab includes a utility to create VPC peerings in VLAB. This utility is a hhfab vlab sub-command. hhfab vlab setup-peerings.

    -
    NAME:
    -   hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)
    -
    -USAGE:
    -   Setup test scenario with VPC/External Peerings by specifying requests in the format described below.
    -
    -   Example command:
    -
    -   $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24
    -
    -   Which will produce:
    -   1. VPC peering between vpc-01 and vpc-02
    -   2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border
    -   3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted
    -   4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route
    -      from external permitted as well any route that belongs to 22.22.22.0/24
    -
    -   VPC Peerings:
    -
    -   1+2 -- VPC peering between vpc-01 and vpc-02
    -   demo-1+demo-2 -- VPC peering between demo-1 and demo-2
    -   1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present
    -   1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border
    -   1+2:remote=border -- same as above
    -
    -   External Peerings:
    -
    -   1~as5835 -- external peering for vpc-01 with External as5835
    -   1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing
    -     default subnet and any route from external
    -   1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and
    -     default route from external permitted
    -   1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details
    -   1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above
    -
    -OPTIONS:
    -   --help, -h                     show help
    -   --name value, -n value         name of the VM or HW to access
    -   --wait-switches-ready, --wait  wait for switches to be ready before before and after configuring peerings (default: true)
    -
    -   Global options:
    -
    -   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
    -   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
    -   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
    -   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
    -
    -

    Test Connectivity

    -

    hhfab vlab includes a utility to test connectivity between servers inside VLAB. This utility is a hhfab vlab sub-command. hhfab vlab test-connectivity.

    -
    NAME:
    -   hhfab vlab test-connectivity - test connectivity between all servers
    -
    -USAGE:
    -   hhfab vlab test-connectivity [command options]
    -
    -OPTIONS:
    -   --curls value                  number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)
    -   --help, -h                     show help
    -   --iperfs value                 seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)
    -   --iperfs-speed value           minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)
    -   --name value, -n value         name of the VM or HW to access
    -   --pings value                  number of pings to send between each pair of servers (0 to disable) (default: 5)
    -   --wait-switches-ready, --wait  wait for switches to be ready before testing connectivity (default: true)
    -
    -   Global options:
    -
    -   --brief, -b      brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
    -   --cache-dir DIR  use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
    -   --verbose, -v    verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
    -   --workdir PATH   run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
    -

    Using VPCs with overlapping subnets

    First, create a second IPv4Namespace with the same subnet as the default one:

    -
    core@control-1 ~ $ kubectl get ipns
    -NAME      SUBNETS           AGE
    -default   ["10.0.0.0/16"]   24m
    +
    core@control-1 ~ $ kubectl get ipns
    +NAME      SUBNETS           AGE
    +default   ["10.0.0.0/16"]   24m
     
    -core@control-1 ~ $ cat <<EOF > ipns-2.yaml
    -apiVersion: vpc.githedgehog.com/v1beta1
    -kind: IPv4Namespace
    -metadata:
    -  name: ipns-2
    -  namespace: default
    -spec:
    -  subnets:
    -  - 10.0.0.0/16
    -EOF
    +core@control-1 ~ $ cat <<EOF > ipns-2.yaml
    +apiVersion: vpc.githedgehog.com/v1beta1
    +kind: IPv4Namespace
    +metadata:
    +  name: ipns-2
    +  namespace: default
    +spec:
    +  subnets:
    +  - 10.0.0.0/16
    +EOF
     
    -core@control-1 ~ $ kubectl apply -f ipns-2.yaml
    -ipv4namespace.vpc.githedgehog.com/ipns-2 created
    +core@control-1 ~ $ kubectl apply -f ipns-2.yaml
    +ipv4namespace.vpc.githedgehog.com/ipns-2 created
     
    -core@control-1 ~ $ kubectl get ipns
    -NAME      SUBNETS           AGE
    -default   ["10.0.0.0/16"]   30m
    -ipns-2    ["10.0.0.0/16"]   8s
    +core@control-1 ~ $ kubectl get ipns
    +NAME      SUBNETS           AGE
    +default   ["10.0.0.0/16"]   30m
    +ipns-2    ["10.0.0.0/16"]   8s
     

    Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now we can create vpc-3 with the same subnet as vpc-1 (but in a different IPv4Namespace) and attach it to server-03:

    -
    core@control-1 ~ $ cat <<EOF > vpc-3.yaml
    -apiVersion: vpc.githedgehog.com/v1beta1
    -kind: VPC
    -metadata:
    -  name: vpc-3
    -  namespace: default
    -spec:
    -  ipv4Namespace: ipns-2
    -  subnets:
    -    default:
    -      dhcp:
    -        enable: true
    -        range:
    -          start: 10.0.1.10
    -      subnet: 10.0.1.0/24
    -      vlan: 2001
    -  vlanNamespace: default
    -EOF
    +
    core@control-1 ~ $ cat <<EOF > vpc-3.yaml
    +apiVersion: vpc.githedgehog.com/v1beta1
    +kind: VPC
    +metadata:
    +  name: vpc-3
    +  namespace: default
    +spec:
    +  ipv4Namespace: ipns-2
    +  subnets:
    +    default:
    +      dhcp:
    +        enable: true
    +        range:
    +          start: 10.0.1.10
    +      subnet: 10.0.1.0/24
    +      vlan: 2001
    +  vlanNamespace: default
    +EOF
     
    -core@control-1 ~ $ kubectl apply -f vpc-3.yaml
    +core@control-1 ~ $ kubectl apply -f vpc-3.yaml
     

    At that point you can set up networking on server-03 the same way as you did for server-01 and server-02 in a previous section. Once you have configured networking, server-01 and @@ -1819,7 +1819,7 @@
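    As a sketch of that server-side setup (the interface name enp2s1 is illustrative; VLAN 2001 matches the vpc-3 subnet definition above, and your environment may use a different attachment method):

    ```shell
    # On server-03: create a VLAN sub-interface for the VPC attachment and
    # request an address via DHCP from the VPC's DHCP range
    # (enp2s1 is a hypothetical interface name; VLAN 2001 comes from vpc-3)
    sudo ip link add link enp2s1 name enp2s1.2001 type vlan id 2001
    sudo ip link set enp2s1.2001 up
    sudo dhclient enp2s1.2001
    ```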

    Using VPCs with overlapping subnets Last update: - October 24, 2024 + October 31, 2024
    Created: diff --git a/beta-1/vlab/overview/index.html b/beta-1/vlab/overview/index.html index 83b90e8e..c9623dd9 100644 --- a/beta-1/vlab/overview/index.html +++ b/beta-1/vlab/overview/index.html @@ -26,7 +26,7 @@ - Overview - Open Network Fabric + VLAB Overview - Open Network Fabric @@ -102,7 +102,7 @@
    - + Skip to content @@ -153,7 +153,7 @@
    - Overview + VLAB Overview
    @@ -406,7 +406,7 @@ - Overview + VLAB Overview @@ -417,7 +417,7 @@ - Overview + VLAB Overview @@ -439,8 +439,8 @@
    • - - Overview + + HHFAB
    • @@ -454,9 +454,43 @@
    • - Installing prerequisites + Installing Prerequisites + + + +
    • @@ -1232,8 +1266,8 @@
      • - - Overview + + HHFAB
      • @@ -1247,9 +1281,43 @@
      • - Installing prerequisites + Installing Prerequisites + + + +
      • @@ -1282,13 +1350,13 @@ -

        Overview

        +

        VLAB Overview

        It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out Fabric and learn about its look and feel, API, and capabilities. It's not suitable for any data plane or performance testing, or for production use.

        In the VLAB all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery, boot and installation process like on real hardware.

        -

        Overview

        +

        HHFAB

    The hhfab CLI provides a special command vlab to manage virtual labs. It allows you to run a set of virtual machines that simulates the Fabric infrastructure, including the control node, switches, and test servers, and it automatically runs the installer to get the Fabric up and running.

        @@ -1296,7 +1364,7 @@

        Overview

        System Requirements

    Currently, VLAB is only tested on Ubuntu 22.04 LTS, but it should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.

        -

        The following packages needs to be installed: qemu-kvm swtpm-tools tpm2-tools socat. Docker is also required, to login +

    The following packages need to be installed: qemu-kvm socat. Docker is also required, to log in to the OCI registry.

    By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf. Optionally, you can choose to run the Collapsed Core topology using the flag --fabric-mode collapsed-core (or -m collapsed-core) @@ -1340,13 +1408,15 @@

        System Requirements

    Usually, none of the VMs reaches 100% utilization of its allocated resources, but as a rule of thumb you should make sure you have enough RAM and disk space allocated for all VMs.

        NVMe SSD for VM disks is highly recommended.

        -

        Installing prerequisites

        -

        On Ubuntu 22.04 LTS you can install all required packages using the following commands:

        +

        Installing Prerequisites

        +

    To run VLAB, your system needs docker, qemu, kvm, and hhfab. On Ubuntu 22.04 LTS you can install all required packages using the following commands:

        +

        Docker

        curl -fsSL https://get.docker.com -o install-docker.sh
         sudo sh install-docker.sh
         sudo usermod -aG docker $USER
         newgrp docker
         
        +

        Qemu/KVM

        sudo apt install -y qemu-kvm swtpm-tools tpm2-tools socat
         sudo usermod -aG kvm $USER
         newgrp kvm
        @@ -1357,9 +1427,21 @@ 

        Installing prerequisites

        INFO: /dev/kvm exists KVM acceleration can be used
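    The KVM check shown above can be reproduced with kvm-ok, which on Ubuntu is provided by the cpu-checker package (an assumption for other distributions):

    ```shell
    # Verify that hardware virtualization is available to QEMU/KVM
    # (cpu-checker provides the kvm-ok helper on Ubuntu)
    sudo apt install -y cpu-checker
    sudo kvm-ok
    ```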
        +

        Oras

        +

        For convenience Hedgehog provides a script to install oras:

        +
        curl -fsSL https://i.hhdev.io/oras | bash
        +
        +

        Hhfab

        +

        Hedgehog maintains a utility to install and configure VLAB, called hhfab.

        +

    You need a GitHub access token to download hhfab; to obtain one, please submit a ticket using the Hedgehog Support Portal. Once in possession of the credentials, use the provided username and token to log in to the GitHub container registry:

        +
        docker login ghcr.io --username provided_username --password provided_token
        +
        +

        Once logged in, download and run the script:

        +
        curl -fsSL https://i.hhdev.io/hhfab | bash
        +
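    After the script finishes, you can sanity-check that hhfab is on your PATH (the exact help text and version will differ from run to run):

    ```shell
    # Confirm the hhfab binary is installed and executable
    hhfab --help
    ```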

        Next steps


        @@ -1367,7 +1449,7 @@

        Next steps

        Last update: - May 6, 2024 + November 8, 2024
        Created: diff --git a/beta-1/vlab/running/index.html b/beta-1/vlab/running/index.html index 6f81a239..91a8706b 100644 --- a/beta-1/vlab/running/index.html +++ b/beta-1/vlab/running/index.html @@ -397,7 +397,7 @@ - Overview + VLAB Overview @@ -500,31 +500,50 @@
      • - - Configuring VLAB VMs + + Enable Outside connectivity from VLAB VMs
      • + + Accessing the VLAB + + +
      • + + Use Kubectl to Interact with the Fabric + + + +
      • @@ -1328,31 +1347,50 @@
      • - - Configuring VLAB VMs + + Enable Outside connectivity from VLAB VMs
      • + + Accessing the VLAB + + +
      • + + Use Kubectl to Interact with the Fabric + + + +
      • @@ -1396,7 +1434,7 @@

        Running VLAB

        Make sure to follow the prerequisites and check system requirements in the VLAB Overview section before running VLAB.

        Initialize VLAB

        -

        First, initialize Fabricator by running hhfab init --dev. This command supports several customization options that are listed in the output of hhfab init --help.

        +

        First, initialize Fabricator by running hhfab init --dev. This command creates the fab.yaml file, which is the main configuration file for the fabric. This command supports several customization options that are listed in the output of hhfab init --help.

        ubuntu@docs:~$ hhfab init --dev
         11:26:52 INF Hedgehog Fabricator version=v0.30.0
         11:26:52 INF Generated initial config
        @@ -1404,8 +1442,8 @@ 

        Initialize VLAB

        11:26:52 INF Include wiring files (.yaml) or adjust imported ones dir=include

        VLAB Topology

        -

        By default, the command creates 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links as well as 2 loopbacks per leaf for implementing VPC loopback workaround. To generate the preceding topology, hhfab vlab gen. You can also configure the number of spines, leafs, connections, and so on. For example, flags --spines-count and --mclag-leafs-count allow you to set the number of spines and MCLAG leaves, respectively. For complete options, hhfab vlab gen -h.

        -
        ubuntu@docs:~$ hhfab vlab gen
        +

    By default, hhfab init creates 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate the preceding topology, run hhfab vlab gen. You can also configure the number of spines, leaves, connections, and so on. For example, the flags --spines-count and --mclag-leafs-count allow you to set the number of spines and MCLAG leaves, respectively. For complete options, see hhfab vlab gen -h.

        +

        ubuntu@docs:~$ hhfab vlab gen
         21:27:16 INF Hedgehog Fabricator version=v0.30.0
         21:27:16 INF Building VLAB wiring diagram fabricMode=spine-leaf
         21:27:16 INF >>> spinesCount=2 fabricLinksCount=2
        @@ -1415,9 +1453,9 @@ 

        VLAB Topology

        21:27:16 INF >>> mclagServers=2 eslagServers=2 unbundledServers=1 bundledServers=1 21:27:16 INF Generated wiring file name=vlab.generated.yaml
        +You can jump to the instructions to start VLAB, or see the next section for customizing the topology.

        Collapsed Core

        -

        If a Collapsed Core topology is desired, after the hhfab init --dev step, edit the resulting fab.yaml file and change the mode: spine-leaf to mode: collapsed-core. -Or if you want to run Collapsed Core topology with 2 MCLAG switches:

        +

    If a Collapsed Core topology is desired, after the hhfab init --dev step edit the resulting fab.yaml file and change mode: spine-leaf to mode: collapsed-core:

        ubuntu@docs:~$ hhfab vlab gen
         11:39:02 INF Hedgehog Fabricator version=v0.30.0
         11:39:02 INF Building VLAB wiring diagram fabricMode=collapsed-core
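    The mode change can also be made non-interactively (a sketch that assumes the mode line appears exactly once in fab.yaml; editing the file by hand works just as well):

    ```shell
    # Flip the fabric mode in fab.yaml from spine-leaf to collapsed-core,
    # keeping a backup of the original file
    sed -i.bak 's/mode: spine-leaf/mode: collapsed-core/' fab.yaml
    ```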
        @@ -1444,7 +1482,7 @@ 

        Custom Spine Leaf

        automatically downloads all required artifacts from the OCI registry and builds the installer and all other prerequisites for running the VLAB.

        Build the Installer and Start VLAB

        -

        In VLAB the build and run step are combined into one command for simplicity, hhfab vlab up. For successive runs use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. This command does not return, it runs as long as the VLAB is up. This is done so that shutdown is a simple ctrl + c. +

        To build and start the virtual machines, use hhfab vlab up. For successive runs, use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. hhfab vlab up runs in the foreground and does not return, which allows you to stop all VLAB VMs by simply pressing Ctrl + C.

        ubuntu@docs:~$ hhfab vlab up
         11:48:22 INF Hedgehog Fabricator version=v0.30.0
         11:48:22 INF Wiring hydrated successfully mode=if-not-present
        @@ -1491,18 +1529,12 @@ 

    Build the Installer and Start VLAB

    INF Control node is ready vm=control-1 type=control from the installer's output means that the installer has finished. After this line has been displayed, you can get into the control node and other VMs to watch the Fabric coming up and the switches getting provisioned. See Accessing the VLAB.

        -

        Configuring VLAB VMs

        +

        Enable Outside connectivity from VLAB VMs

    By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can enable connectivity using hhfab vlab up --restrict-servers=false to allow the test servers to access the Internet and the host. When you enable connectivity, VMs get a default route pointing to the host, which means that in the case of VPC peering you need to configure the test server VMs to use the VPC attachment as the default route (or just for some specific subnets).
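    For example, to start VLAB with outside connectivity enabled for the test servers:

    ```shell
    # Allow test server VMs to reach the host and the Internet
    hhfab vlab up --restrict-servers=false
    ```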

        -

        Default credentials

        -

        Fabricator creates default users and keys for you to login into the control node and test servers as well as for the -SONiC Virtual Switches.

        -

        Default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!. -Admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!. -Read-only, non-sudo user with access only to the switch CLI for the switches is op with password HHFab.Op!.

        Accessing the VLAB

        The hhfab vlab command provides ssh and serial subcommands to access the VMs. You can use ssh to get into the control node and test servers after the VMs are started. You can use serial to get into the switch VMs while they are @@ -1530,8 +1562,15 @@
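    For example (assuming the -n/--name selector documented for other hhfab vlab subcommands above; control-1 appears in the output below, while the switch name is illustrative):

    ```shell
    # SSH into the control node
    hhfab vlab ssh -n control-1
    # Attach to a switch's serial console (switch name is hypothetical)
    hhfab vlab serial -n leaf-01
    ```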

        Accessing the VLAB

        Ready: true Basedir: .hhfab/vlab-vms/control-1
        -

        On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. You can find information -about the switches provisioning by running kubectl get agents -o wide. It usually takes about 10-15 minutes for the +

        Default credentials

        +

        Fabricator creates default users and keys for you to login into the control node and test servers as well as for the +SONiC Virtual Switches.

        +

        The default user with password-less sudo for the control node and test servers is core with password HHFab.Admin!. +The admin user with full access and password-less sudo for the switches is admin with password HHFab.Admin!. +The read-only, non-sudo user with access to the switch CLI is op with password HHFab.Op!.

        +

        Use Kubectl to Interact with the Fabric

        +

    On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. To view information about the switches, run kubectl get agents -o wide. After the control node is available, it usually takes about 10-15 minutes for the switches to get installed.

        After the switches are provisioned, the command returns something like this:

        core@control-1 ~ $ kubectl get agents -o wide
        @@ -1548,7 +1587,7 @@ 

        Accessing the VLAB

        AppliedG and CurrentG mean that the switch is in the process of applying the configuration.

        At that point Fabric is ready and you can use kubectl and kubectl fabric to manage the Fabric. You can find more about managing the Fabric in the Running Demo and User Guide sections.

        -

        Getting main Fabric objects

        +

        Getting main Fabric objects

        You can list the main Fabric objects by running kubectl get on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.
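    For example, using the resource names that appear elsewhere in this guide (agents, vpc, and the ipns short name for IPv4Namespace):

    ```shell
    # List the main Fabric objects on the control node
    kubectl get agents
    kubectl get vpc
    kubectl get ipns
    ```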

        @@ -1602,7 +1641,7 @@

        Getting main Fabric objects

        default 6h12m

        Reset VLAB

        -

        To reset VLAB and start over directory and run hhfab init -f which will force overwrite your existing configuration, fab.yaml.

        +

    If VLAB is currently running, press Ctrl + C to stop it. To reset VLAB and start over, run hhfab init -f; the -f flag forces hhfab to overwrite your existing configuration in fab.yaml.
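    A full reset might therefore look like the following (assuming the dev setup used throughout this guide):

    ```shell
    # Re-initialize, overwriting the existing fab.yaml, then regenerate
    # the wiring diagram and bring VLAB back up
    hhfab init -f --dev
    hhfab vlab gen
    hhfab vlab up
    ```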

        Next steps

        • Running Demo
        • @@ -1613,7 +1652,7 @@

          Next steps

          Last update: - October 24, 2024 + October 31, 2024
          Created: