This section is meant to help the reader understand how to assemble the primitives presented by the Fabric API into a functional fabric.
VPC

A VPC provides isolation at layer 3 and is the main building block for users when creating their architecture. Hosts inside a VPC belong to the same broadcast domain and can communicate with each other; if desired, a single VPC can be configured with multiple broadcast domains. The hosts inside a VPC will likely need to connect to other VPCs or the outside world; to allow communication between two VPCs, a peering must be created. A VPC can be a logical separation of workloads, and separating workloads this way makes additional controls available. The logical separation doesn't have to be the traditional database, web, and compute layers: it could be development teams who need isolation, tenants inside an office building, or any separation that allows for better control of the network. Once your VPCs are decided, the rest of the fabric comes together: traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows a VPC to span more than one switch, which provides great flexibility.
A connection represents the physical wires in your data center. They connect switches to other switches or switches to servers.
Server Connections
MCLAG - Two cables going to two different switches, also called dual homing. The switches will need a fabric link between them.
ESLAG - Two to four cables going to different switches, also called multi-homing. If four links are used, four switches will be connected to a single server with four NIC ports.
When there is no dedicated border/peering switch available in the fabric, local VPC peering can be used. This kind of peering sends traffic between the two VPCs on a switch where either of the VPCs has workloads attached. Due to a limitation in the SONiC network operating system, the bandwidth of this kind of peering is limited by the number of VPC loopbacks selected while initializing the fabric. Traffic between the VPCs uses the loopback interface, so the bandwidth of this connection is equal to the bandwidth of the ports used in the loopback.
The dotted line in the diagram shows the traffic flow for local peering. The traffic originates in VPC 2, travels to the switch, travels out the first loopback port, into the second loopback port, and finally out the port destined for VPC 1.
Remote VPC Peering
Remote peering is used when you need a high-bandwidth connection between the VPCs; you dedicate a switch to the peering traffic. This is done either on the border leaf or on a switch where neither of the VPCs is present. This kind of peering allows traffic between different VPCs at line rate and is only limited by fabric bandwidth. Remote peering introduces a few additional hops in the traffic path and may cause a small increase in latency.
The dotted line in the diagram shows the traffic flow for remote peering. The traffic could take a different path because of ECMP. It is important to note that Leaf 3 cannot have any servers from VPC 1 or VPC 2 on it, but it can have a different VPC attached to it.
VPC Loopback
A VPC loopback is a physical cable with both ends plugged into the same switch; the two ports are suggested, but not required, to be adjacent. This loopback allows two different VPCs on the same switch to communicate with each other; it is required due to a Broadcom limitation.
In order to provision and manage the switches that comprise the fabric, an out-of-band management switch must also be installed. This network is used exclusively by the control node and the fabric switches; no other access is permitted. This switch (or switches) is not managed by the fabric. It is recommended that this switch have at least one 10GbE port and that this port connect to the control node.
Control Node
Fast SSDs for system/root are mandatory for Control Nodes
NVMe SSDs are recommended
DRAM-less NAND SSDs are not supported (e.g. Crucial BX500)
10 GbE port for connection to management network is recommended
Minimal (non-HA) setup is a single Control Node
(Future) Full (HA) setup is at least 3 Control Nodes
(Future) Extra nodes could be used for things like Logging, Monitoring, Alerting stack, and more
In internal testing Hedgehog uses a server with the following specifications:

CPU - AMD EPYC 4344P
Memory - 32 GiB DDR5 ECC 4800MT/s
Storage - PCIe Gen 4 NVMe M.2 400GB
Network - AOC-STG-i4S Intel X710-BM1 controller
Motherboard - H13SAE-MF
Non-HA (minimal) setup - 1 Control Node
Control Node runs non-HA Kubernetes Control Plane installation with non-HA Hedgehog Fabric Control Plane on top of it
The Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments. It comes without vendor lock-in.
The Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to public cloud offerings. It provides a multi-tenant API to define the user intent on network isolation and connectivity, which is automatically transformed into configuration for switches and software appliances.
You can read more about its concepts and architecture in the documentation.
You can find out how to download and try the Fabric on the self-hosted fully virtualized lab or on hardware.
The Hedgehog Open Network Fabric is an open-source network architecture that provides connectivity between virtual and physical workloads and a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides a standard Kubernetes interface to manage the elements in the physical network and a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric provides isolation between different groups of workloads by placing them in different virtual networks called VPCs. To achieve this, it defines different abstractions starting from the physical network, where a set of Connection objects defines how a physical server on the network connects to a physical switch on the fabric.
A collapsed core topology is just a pair of switches connected in a MCLAG configuration with no other network elements. All workloads attach to these two switches.
The leaves in this setup are configured as an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as gateways for workloads attached to them. The configuration of the underlay in the collapsed core is very simple, which makes it ideal for very small deployments.
A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and the spines providing connectivity between different leaves.
This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. RFC 7938 was used as the reference for establishing the underlay network.
The overlay network runs on top of the underlay network to create virtual networks. The overlay isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic in VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP EVPN/VXLAN to enable the creation and management of virtual networks on top of the physical one. The fabric supports multiple virtual networks over the same underlay network to support multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. The following subsections contain a high-level overview of how VPCs and their associated objects are implemented in the Hedgehog Fabric.
The previous subsections have described what a VPC is and how to attach workloads to a specific VPC. The following bullet points describe how VPCs are actually implemented in the network to ensure a private view of the network.
Each VPC is modeled as a VRF on each switch where there are VPC attachments defined for this VPC. The VRF is allocated its own VNI. The VRF is local to each switch and the VNI is global for the entire fabric. By mapping the VRF to a VNI and configuring an EVPN instance in each VRF, a shared L3VNI is established across the entire fabric. All VRFs participating in this VNI can freely communicate with each other without the need for a policy. A VLAN is allocated for each VRF which functions as an IRB VLAN for the VRF.
The VRF created on each switch corresponding to a VPC configures a BGP instance with EVPN to advertise its locally attached subnets and import routes from its peered VPCs. The BGP instance in the tenant VRFs does not establish neighbor relationships and is purely used to advertise locally attached routes into the VPC (all VRFs with the same L3VNI) across leaves in the network.
A VPC can have multiple subnets. Each subnet in the VPC is modeled as a VLAN on the switch. The VLAN is only locally significant, and a given subnet might have different VLANs on different leaves in the network. A globally significant VNI is assigned to each subnet. This VNI is used to extend the subnet across different leaves in the network and provides a view of a single stretched L2 domain if the applications need it.
The Hedgehog Fabric has a built-in DHCP server which will automatically assign IP addresses to each workload depending on the VPC it belongs to. This is achieved by configuring a DHCP relay on each of the server facing VLANs. The DHCP server is accessible through the underlay network and is shared by all VPCs in the fabric. The inbuilt DHCP server is capable of identifying the source VPC of the request and assigning IP addresses from a pool allocated to the VPC at creation.
A VPC by default cannot communicate to anyone outside the VPC and specific peering rules are required to allow communication to external networks or to other VPCs.
To enable communication between 2 different VPCs, one needs to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.
Local Peering: A local peering directly imports routes from another VPC locally. This is achieved by a simple import route from the peer VPC. In case there are no locally attached workloads to the peer VPC the fabric automatically creates a stub VPC for peering and imports routes from it. This allows VPCs to peer with each other without the need for a dedicated peering leaf. If a local peering is done for a pair of VPCs which have locally attached workloads, the fabric automatically allocates a pair of ports on the switch to route traffic between these VRFs using static routes. This is required because of limitations in the underlying platform. The net result of these limitations is that the bandwidth between these 2 VPCs is limited by the bandwidth of the loopback interfaces allocated on the switch. Traffic between the peered VPCs will not leave the switch that connects them.
Remote Peering: Remote peering is implemented using a dedicated peering switch or switches used as a rendezvous point for the two VPCs in the fabric. The set of switches to be used for peering is determined by configuration in the peering policy. When a remote peering policy is applied for a pair of VPCs, the VRFs corresponding to these VPCs on the peering switch advertise default routes into their specific VRFs, identified by the L3VNI. All traffic that does not belong to the VPCs is forwarded to the peering switch, which has routes to the other VPCs, and gets forwarded from there. The bandwidth limitation that exists in the local peering solution is solved here, as the bandwidth between the two VPCs is determined by the fabric cross-section bandwidth.
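As an illustration, a peering between two VPCs is expressed through the VPCPeering API described earlier. This is a sketch, not a verified manifest: the exact field names (permit, remote) and the names vpc-1, vpc-2, and border are assumptions.

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
spec:
  permit: # allow traffic between the two VPCs
    - vpc-1: {}
      vpc-2: {}
  # remote: border # hypothetical: select remote peering on the
  #                # switch group named 'border' instead of local peering
```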
Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.
Hedgehog Fabric consists of the following components:
Fabricator - special tool to install and configure Fabric, or to run virtual labs
Control Node - one or more Kubernetes nodes in a single cluster running Fabric software:
Fabric Controller - main control plane component that manages Fabric resources
Fabric Kubectl plugin (Fabric CLI) - kubectl plugin to manage Fabric resources in an easy way
Fabric Agent - runs on every switch and manages switch configuration
All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) called the Wiring Diagram. With this representation, Fabric defines switches, servers, control nodes, external systems, and the connections between them in a single place, then uses these definitions to deploy and manage the whole infrastructure. On top of the Wiring Diagram, Fabric provides a set of APIs to manage VPCs, the connections between them, and connectivity between VPCs and External systems.
VPC: Virtual Private Cloud, similar to a public cloud VPC, provides an isolated private network for the resources, with support for multiple subnets, each with user-defined VLANs and optional DHCP service
VPCAttachment: represents a specific VPC subnet assignment to the Connection object which means exact server port to a VPC binding
VPCPeering: enables VPC-to-VPC connectivity (could be Local where VPCs are used or Remote peering on the border/mixed leaves)
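A minimal VPC object matching the description above might look like the following sketch. The subnet, VLAN, and DHCP field names are assumptions for illustration and should be checked against the API reference:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
spec:
  subnets:
    default: # subnet name; multiple subnets are supported
      subnet: 10.0.1.0/24
      vlan: 1001 # user-defined VLAN for this subnet
      dhcp:
        enable: true # use the optional built-in DHCP service
```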
External API
External: definition of the "external system" to peer with (could be one or multiple devices such as edge/provider routers)
ExternalAttachment: configuration for a specific switch (using Connection object) describing how it connects to an external system
ExternalPeering: provides VPC with External connectivity by exposing specific VPC subnets to the external system and allowing inbound routes from it
This documentation is built using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.
In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine, as well as any editor of your choice, preferably one supporting Markdown preview. You can run the preview server using the following command:
make serve
Now you can open a continuously updated preview of your edits in a browser at http://127.0.0.1:8000. Pages will be automatically updated while you're editing.
Additionally you can run
make build
to make sure that your changes build correctly and don't break the documentation.
If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It'll open the page in the GitHub editor. You can edit it and create a pull request with your changes.
Please never push to the master or release/* branches directly. Always create a pull request and wait for review.
Each pull request will be automatically built and a preview will be deployed. You can find the link to the preview in the pull request comments.
Documentation is organized in per-release branches:
master - ongoing development, not released yet, referenced as dev version in the documentation
release/alpha-1, release/alpha-2 - alpha releases, referenced as alpha-1/alpha-2 versions in the documentation; if patches are released for alpha-1, they'll be merged into the release/alpha-1 branch
release/v1.0 - first stable release, referenced as v1.0 version in the documentation; if patches (e.g. v1.0.1) are released for v1.0, they'll be merged into the release/v1.0 branch
Latest release branch is referenced as latest version in the documentation and will be used by default when you open the documentation.
All documentation files are located in docs directory. Each file is a Markdown file with .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.
For example, top-level .pages in this repository looks like this:
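The actual file contents aren't reproduced here; as an illustrative sketch based on the description that follows (index.md and Wiring Diagram: wiring come from that description, getting-started is a hypothetical directory name):

```yaml
nav:
  - index.md # page title taken from the file's first line with '#'
  - getting-started # a directory, rendered as a nested section
  - Wiring Diagram: wiring # custom title for the 'wiring' file/directory
```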
There you can add pages by file name, like index.md, and the page title will be taken from the file (the first line with #). Additionally, you can reference a whole directory to create a nested section in the navigation. You can also set custom titles by using the : separator, like Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.
You can find abbreviations in includes/abbreviations.md file. You can add various abbreviations there and all usages of the defined words in the documentation will get a highlight.
For example, we have the following in includes/abbreviations.md:
*[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric
It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.
We're using the MkDocs Material theme with multiple extensions enabled. You can find a detailed reference here; below are some of the most useful ones.
To view code for examples, please, check the source code of this page.
Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible, and comments can be added inline.
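These features use Critic Markup syntax (supported in Material for MkDocs via the Critic extension); a short sketch mirroring the sentence above:

```markdown
Text can be {--deleted--} and replacement text {++added++}.
This can also be combined into {~~one~>a single~~} operation.
{==Highlighting==} is also possible and {>>comments can be added inline<<}.
```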
Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.
Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.
Lorem ipsum
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.
Simple code block with line nums and highlighted lines:
bubble_sort.py
```python
def bubble_sort(items):
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
```
Tables

| Method | Description     |
| ------ | --------------- |
| GET    | Fetch resource  |
| PUT    | Update resource |
| DELETE | Delete resource |

Diagrams
You can directly include Mermaid diagrams in your Markdown files. Details can be found here.
```mermaid
graph LR
    A[Start] --> B{Error?};
    B -->|Yes| C[Hmm...];
    C --> D[Debug];
    D --> B;
    B ---->|No| E[Yay!];
```

```mermaid
sequenceDiagram
    autonumber
    Alice->>John: Hello John, how are you?
    loop Healthcheck
        John->>John: Fight against hypochondria
    end
    Note right of John: Rational thoughts!
    John-->>Alice: Great!
    John->>Bob: How about you?
    Bob-->>John: Jolly good!
```
The main entry point for the software is the Hedgehog Fabricator CLI named hhfab. It is a command-line tool that allows you to build installers for the Hedgehog Fabric, upgrade an existing installation, or run the Virtual LAB.
Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.
After that you will be provided with the credentials to access the software on GitHub Package. In order to use the software, log in to the registry using the following command:
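The exact command isn't reproduced here; assuming the packages are hosted on GitHub's container registry (ghcr.io) and Docker is used as the client, the login would look like this sketch:

```shell
# Hypothetical invocation: replace the placeholder with the username
# from the credentials you were provided; you'll be prompted for the token.
docker login ghcr.io --username <your-username>
```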
Currently hhfab is supported on Linux x86/arm64 (tested on Ubuntu 22.04) and macOS x86/arm64 for building installers/upgraders. It may work on Windows WSL2 (with Ubuntu), but this is not tested. For running VLAB, only Linux x86 is currently supported.
All software is published to the OCI registry GitHub Package, including binaries, container images, and Helm charts. Download the latest stable hhfab binary from GitHub Package using the following command; it requires ORAS to be installed (see below):
curl -fsSL https://i.hhdev.io/hhfab | bash
Or download a specific version (e.g. beta-1) using the following command:
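Based on the note that follows about the VERSION environment variable, a version-pinned download might look like:

```shell
# VERSION selects the release series or specific release to download
curl -fsSL https://i.hhdev.io/hhfab | VERSION=beta-1 bash
```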
Use the VERSION environment variable to specify the version of the software to download. By default, the latest stable release is downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.
The download script requires ORAS to be installed. ORAS is used to download the binary from the OCI registry and can be installed using following command:
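The command itself is missing here; assuming a companion install script exists at i.hhdev.io alongside the hhfab one (an assumption — verify against the official docs or install ORAS from oras.land), it could be:

```shell
# Assumed install-script URL, mirroring the hhfab download script above
curl -fsSL https://i.hhdev.io/oras | bash
```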
A wiring diagram is a YAML file that is a digital representation of your network. You can find more YAML-level details in the User Guide sections on switch features and port naming and in the API reference. It's mandatory for all switches to reference a SwitchProfile in the spec.profile field of the Switch object. Only port naming defined by switch profiles can be used in the wiring diagram; NOS (or any other) port names aren't supported.
In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, run the sample generator that produces working wiring diagrams:
```
ubuntu@sl-dev:~$ hhfab sample -h

NAME:
   hhfab sample - generate sample wiring diagram

USAGE:
   hhfab sample command [command options]

COMMANDS:
   spine-leaf, sl      generate sample spine-leaf wiring diagram
   collapsed-core, cc  generate sample collapsed-core wiring diagram
   help, h             Shows a list of commands or help for one command

OPTIONS:
   --help, -h  show help
```
Or you can generate a wiring diagram for a VLAB environment, with flags to customize the number of switches, links, servers, etc.:
```
ubuntu@sl-dev:~$ hhfab vlab gen --help
NAME:
   hhfab vlab generate - generate VLAB wiring diagram

USAGE:
   hhfab vlab generate [command options]

OPTIONS:
   --bundled-servers value      number of bundled servers to generate for switches (only for one of the second switch in the redundancy group or orphan switch) (default: 1)
   --eslag-leaf-groups value    eslag leaf groups (comma separated list of number of ESLAG switches in each group, should be 2-4 per group, e.g. 2,4,2 for 3 groups with 2, 4 and 2 switches)
   --eslag-servers value        number of ESLAG servers to generate for ESLAG switches (default: 2)
   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)
   --help, -h                   show help
   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)
   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)
   --mclag-servers value        number of MCLAG servers to generate for MCLAG switches (default: 2)
   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)
   --no-switches                do not generate any switches (default: false)
   --orphan-leafs-count value   number of orphan leafs (default: 0)
   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)
   --unbundled-servers value    number of unbundled servers to generate for switches (only for one of the first switch in the redundancy group or orphan switch) (default: 1)
   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)
```
A VPC provides isolation at layer 3 and is the main building block for users when creating their architecture. Hosts inside a VPC belong to the same broadcast domain and can communicate with each other; if desired, a single VPC can be configured with multiple broadcast domains. The hosts inside a VPC will likely need to connect to other VPCs or the outside world; to allow communication between two VPCs, a peering must be created. A VPC can be a logical separation of workloads, and separating workloads this way makes additional controls available. The logical separation doesn't have to be the traditional database, web, and compute layers: it could be development teams who need isolation, tenants inside an office building, or any separation that allows for better control of the network. Once your VPCs are decided, the rest of the fabric comes together: traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows a VPC to span more than one switch, which provides great flexibility, for instance for workload mobility.
A server connection is a connection used to connect servers to the fabric. The fabric will configure the server-facing port according to the type of the connection (MLAG, Bundle, etc). The configuration of the actual server needs to be done by the server administrator. The server port names are not validated by the fabric and used as metadata to identify the connection. A server connection can be one of:
Unbundled - A single cable connecting switch to server.
Bundled - Two or more cables going to a single switch, a LAG or similar.
MCLAG - Two cables going to two different switches, also called dual homing. The switches will need a fabric link between them.
ESLAG - Two to four cables going to different switches, also called multi-homing. If four links are used, four switches will be connected to a single server with four NIC ports.
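As a sketch, a dual-homed (MCLAG) server connection could be expressed through the Connection object like this. The server, switch, and port names below are hypothetical, and the field layout is an illustration to be checked against the wiring API reference:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-1--mclag--leaf-1--leaf-2
spec:
  mclag:
    links: # two cables going to two different switches
      - server:
          port: server-1/enp2s1
        switch:
          port: leaf-1/E1/1
      - server:
          port: server-1/enp2s2
        switch:
          port: leaf-2/E1/1
```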
When there is no dedicated border/peering switch available in the fabric, local VPC peering can be used. This kind of peering sends traffic between the two VPCs on a switch where either of the VPCs has workloads attached. Due to a limitation in the SONiC network operating system, the bandwidth of this kind of peering is limited by the number of VPC loopbacks selected while initializing the fabric. Traffic between the VPCs uses the loopback interface, so the bandwidth of this connection is equal to the bandwidth of the ports used in the loopback.
Remote peering is used when you need a high-bandwidth connection between the VPCs; you dedicate a switch to the peering traffic. This is done either on the border leaf or on a switch where neither of the VPCs is present. This kind of peering allows traffic between different VPCs at line rate and is only limited by fabric bandwidth. Remote peering introduces a few additional hops in the traffic path and may cause a small increase in latency.
A VPC loopback is a physical cable with both ends plugged into the same switch; the two ports are suggested, but not required, to be adjacent. This loopback allows two different VPCs on the same switch to communicate with each other; it is required due to a Broadcom limitation.
The fab.yaml file is the configuration file for the fabric. It supplies the configuration of the users, their credentials, logging, telemetry, and other non-wiring-related settings. The fab.yaml file is composed of multiple YAML documents inside a single file. Per the YAML spec, three hyphens (---) on a single line separate the end of one document from the beginning of the next. There are two YAML documents in the fab.yaml file. For more information about how to use hhfab init, run hhfab init --help.
Typical HHFAB workflows

HHFAB for VLAB
For a VLAB user, the typical workflow with hhfab is:
hhfab init --dev
hhfab vlab gen
hhfab vlab up
The above workflow will get a user up and running with a spine-leaf VLAB.
HHFAB for Physical Machines
It's possible to start from scratch:
hhfab init (see the different flags to customize the initial configuration)
After the above workflow a user will have a .img file suitable for installing the control node, then bringing up the switches which comprise the fabric.
Fab.yaml

Configure control node and switch users
Configuring control node and switch users is done either by passing --default-password-hash to hhfab init or by editing the resulting fab.yaml file emitted by hhfab init. You can specify users to be configured on the control node(s) and switches in the following format:
```yaml
spec:
  config:
    control:
      defaultUser: # user 'core' on all control nodes
        password: "hashhashhashhashhash" # password hash
        authorizedKeys:
          - "ssh-ed25519 SecREKeyJumblE"

    fabric:
      mode: spine-leaf # "spine-leaf" or "collapsed-core"

      defaultSwitchUsers:
        admin: # at least one user with name 'admin' and role 'admin'
          role: admin
          #password: "$5$8nAYPGcl4..." # password hash
          #authorizedKeys: # optional SSH authorized keys
          #  - "ssh-ed25519 AAAAC3Nza..."
        op: # optional read-only user
          role: operator
          #password: "$5$8nAYPGcl4..." # password hash
          #authorizedKeys: # optional SSH authorized keys
          #  - "ssh-ed25519 AAAAC3Nza..."
```
The control node user is always named core.
The operator role grants read-only access to the sonic-cli command on the switches. In order to avoid conflicts, do not use the following usernames: operator, hhagent, netops.
NTP and DHCP
The control node uses public NTP servers from Cloudflare and Google by default. The control node runs a DHCP server on the management network. See the example file.
The control node is the host that manages all the switches, runs k3s, and serves images. This is the YAML document that configures the control node:
```yaml
apiVersion: fabricator.githedgehog.com/v1beta1
kind: ControlNode
metadata:
  name: control-1
  namespace: fab
spec:
  bootstrap:
    disk: "/dev/sda" # disk to install OS on, e.g. "sda" or "nvme0n1"
  external:
    interface: enp2s0 # interface for external
    ip: dhcp # IP address for external interface
  management:
    interface: enp2s1 # interface for management

# Currently only one ControlNode is supported
```
The management interface is used by the control node to manage the fabric switches, not for end-user management of the control node itself. For end-user management of the control node, specify the external interface name.

Forward switch metrics and logs
There is an option to enable Grafana Alloy on all switches to forward metrics and logs to the configured targets using the Prometheus Remote-Write API and the Loki API. If those APIs are available from the Control Node(s), but not from the switches, it's possible to enable an HTTP proxy on the Control Node(s) that will be used by Grafana Alloy running on the switches to access the configured targets. This can be done by passing --control-proxy=true to hhfab init.
Metrics includes port speeds, counters, errors, operational status, transceivers, fans, power supplies, temperature sensors, BGP neighbors, LLDP neighbors, and more. Logs include agent logs.
Configuring the exporters and targets is currently only possible by editing the fab.yaml configuration file. An example configuration is provided below:
spec:
  config:
    ...
    defaultAlloyConfig:
      agentScrapeIntervalSeconds: 120
      unixScrapeIntervalSeconds: 120
      unixExporterEnabled: true
      lokiTargets:
        grafana_cloud: # target name, multiple targets can be configured
          basicAuth: # optional
            password: "<password>"
            username: "<username>"
          labels: # labels to be added to all logs
            env: env-1
          url: https://logs-prod-021.grafana.net/loki/api/v1/push
          useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy
      prometheusTargets:
        grafana_cloud: # target name, multiple targets can be configured
          basicAuth: # optional
            password: "<password>"
            username: "<username>"
          labels: # labels to be added to all metrics
            env: env-1
          sendIntervalSeconds: 120
          url: https://prometheus-prod-36-prod-us-west-0.grafana.net/api/prom/push
          useControlProxy: true # if the Prometheus API is not available from the switches directly, use the Control Node as a proxy
      unixExporterCollectors: # list of node-exporter collectors to enable, https://grafana.com/docs/alloy/latest/reference/components/prometheus.exporter.unix/#collectors-list
        - cpu
        - filesystem
        - loadavg
        - meminfo
      collectSyslogEnabled: true # collect /var/log/syslog on switches and forward to the lokiTargets
For additional options, see the AlloyConfig struct in Fabric repo.
"},{"location":"install-upgrade/config/#complete-example-file","title":"Complete Example File","text":"
apiVersion: fabricator.githedgehog.com/v1beta1
kind: Fabricator
metadata:
  name: default
  namespace: fab
spec:
  config:
    control:
      tlsSAN: # IPs and DNS names to access API
        - "customer.site.io"

      ntpServers:
        - time.cloudflare.com
        - time1.google.com

      defaultUser: # user 'core' on all control nodes
        password: "hash..." # password hash
        authorizedKeys:
          - "ssh-ed25519 hash..."

    fabric:
      mode: spine-leaf # "spine-leaf" or "collapsed-core"
      includeONIE: true
      defaultSwitchUsers:
        admin: # at least one user with name 'admin' and role 'admin'
          role: admin
          password: "hash..." # password hash
          authorizedKeys:
            - "ssh-ed25519 hash..."
        op: # optional read-only user
          role: operator
          password: "hash..." # password hash
          authorizedKeys:
            - "ssh-ed25519 hash..."

      defaultAlloyConfig:
        agentScrapeIntervalSeconds: 120
        unixScrapeIntervalSeconds: 120
        unixExporterEnabled: true
        collectSyslogEnabled: true
        lokiTargets:
          lab:
            url: http://url.io:3100/loki/api/v1/push
            useControlProxy: true
            labels:
              descriptive: name
        prometheusTargets:
          lab:
            url: http://url.io:9100/api/v1/push
            useControlProxy: true
            labels:
              descriptive: name
            sendIntervalSeconds: 120

---
apiVersion: fabricator.githedgehog.com/v1beta1
kind: ControlNode
metadata:
  name: control-1
  namespace: fab
spec:
  bootstrap:
    disk: "/dev/sda" # disk to install OS on, e.g. "sda" or "nvme0n1"
  external:
    interface: eno2 # interface for external
    ip: dhcp # IP address for external interface
  management:
    interface: eno1

# Currently only one ControlNode is supported
A machine with access to the Internet to run Fabricator and build the installer, with at least 8 GB RAM and 25 GB of disk space
A 16 GB USB flash drive, if you are not using virtual media
A machine to function as the Fabric Control Node that meets the System Requirements, as well as IPMI access to it to install the OS
A management switch with at least one 10GbE port is recommended
Enough supported switches for your Fabric
"},{"location":"install-upgrade/overview/#overview-of-install-process","title":"Overview of Install Process","text":"
This section is dedicated to installing the Hedgehog Fabric on bare-metal control node(s) and switches, including their preparation and configuration. To install the VLAB, see VLAB Overview.
Download and install hhfab following instructions from the Download section.
The main steps to install Fabric are:
Install hhfab on a machine with access to the Internet
Prepare Wiring Diagram
Select Fabric Configuration
Build Control Node configuration and installer
Install Control Node
Insert USB with control-os image into Fabric Control Node
Boot the node off the USB to initiate the installation
Prepare Management Network
Connect management switch to Fabric control node
Connect 1GbE Management port of switches to management switch
Prepare supported switches
Ensure switch serial numbers and/or first management interface MAC addresses are recorded in the wiring diagram
Boot them into ONIE Install Mode to have them automatically provisioned
"},{"location":"install-upgrade/overview/#build-control-node-configuration-and-installer","title":"Build Control Node configuration and Installer","text":"
Hedgehog has created a command-line utility, called hhfab, that helps generate the wiring diagram and fabric configuration, validates the supplied configurations, and generates an installation image (.img) suitable for writing to a USB flash drive or mounting via IPMI virtual media. The first hhfab command to run is hhfab init. This generates the main configuration file, fab.yaml. fab.yaml is responsible for almost all configuration of the fabric, with the exception of the wiring. Each command and subcommand has a usage message; simply supply the -h flag to see the available options, for example hhfab vlab -h or hhfab vlab gen -h.
"},{"location":"install-upgrade/overview/#hhfab-commands-to-make-a-bootable-image","title":"HHFAB commands to make a bootable image","text":"
hhfab init --wiring wiring-lab.yaml
The init command generates a fab.yaml file; edit fab.yaml to suit your needs
Ensure the correct boot disk (e.g. /dev/sda) and control node NIC names are supplied
hhfab validate
hhfab build
The installer for the fabric is generated in $CWD/result/. The installation image is named control-1-install-usb.img and is approximately 7.5 GB in size. Once the image is created, you can write it to a USB drive or mount it via virtual media.
"},{"location":"install-upgrade/overview/#write-usb-image-to-disk","title":"Write USB Image to Disk","text":"
This will erase data on the USB disk.
Insert the USB drive into your machine
Identify the path to your USB stick, for example: /dev/sdc
Issue the command to write the image to the USB drive
There are utilities that assist with this process, such as Etcher.
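On Linux, for example, dd can write the image directly; the device path below is illustrative and must be replaced with the USB device identified in the previous step:

```shell
# WARNING: this irreversibly overwrites the target device.
# Replace /dev/sdc with your actual USB device path.
sudo dd if=result/control-1-install-usb.img of=/dev/sdc bs=4M status=progress conv=fsync
```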
"},{"location":"install-upgrade/overview/#install-control-node","title":"Install Control Node","text":"
The control node should be given a static IP address, either via a static DHCP lease or assigned statically.
Configure the server to use UEFI boot without secure boot
Attach the image to the server either by inserting via USB, or attaching via virtual media
Select to boot from the attached media; the installation process is automated
Once the control node has booted, it logs in automatically and begins the installation process
Optionally use journalctl -f -u flatcar-install.service to monitor progress
Once the installation is complete, the system automatically reboots.
After the system has shut down, but before the boot process reaches the operating system, remove the USB drive from the system. Removal during the UEFI boot screen is acceptable.
Upon booting into the freshly installed system, the fabric installation will automatically begin
If the insecure --dev flag was passed to hhfab init, the password for the core user is HHFab.Admin!, and the switches have two users created: admin and op. The admin user has administrator privileges and password HHFab.Admin!, whereas the op user is a read-only, non-sudo user with password HHFab.Op!.
Optionally this can be monitored with journalctl -f -u fabric-install.service
The install is complete when the log emits "Control Node installation complete". Additionally, systemctl status will show inactive (dead), indicating that the executable has finished.
The control node is dual-homed. It has a 10GbE interface that connects to the management network; the other link, called external in the fab.yaml file, is for the customer to access the control node. The management network is used for command and control of the switches that comprise the fabric. This management network can be a simple broadcast domain with layer 2 connectivity. The control node runs a DHCP server and a small HTTP server on it. The management network is not accessible to machines or devices not associated with the fabric.
Now that the install has finished, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, all pre-installed as part of the Control Node installer.
At this stage, the fabric hands out DHCP addresses to the switches via the management network. Optionally, you can monitor this process by going through the following steps:
- enter k9s at the command prompt
- use the arrow keys to select the pod named fabric-boot
- the logs of the pod will be displayed, showing the DHCP lease process
- to see the switches, type :switches (like a vim command) into k9s
- use the heartbeat column of the switches screen to verify the connection between switch and controller
"},{"location":"install-upgrade/requirements/","title":"System Requirements","text":""},{"location":"install-upgrade/requirements/#out-of-band-management-network","title":"Out of Band Management Network","text":"
In order to provision and manage the switches that comprise the fabric, an out-of-band management switch must also be installed. This network is used exclusively by the control node and the fabric switches; no other access is permitted. This switch (or switches) is not managed by the fabric. It is recommended that this switch have at least one 10GbE port and that this port connect to the control node.
"},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":"
(Future) Each participating device is part of the Kubernetes cluster, so it runs the Kubernetes kubelet
Additionally, it runs the Hedgehog Fabric Agent that controls the device's configuration
The following resources should be available on a device to run in the Hedgehog Fabric (beyond what other software such as SONiC uses):
CPU: 1 minimal, 2 recommended; RAM: 1 GB minimal, 1.5 GB recommended; Disk: 5 GB minimal, 10 GB recommended"},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":"
You can find detailed information about devices in the Switch Profiles Catalog and in the User Guide switch features and port naming.
Package v1beta1 contains API Schema definitions for the agent v1beta1 API group. This is the internal API group for the switch and control node agents. Not intended to be modified by the user.
Possible values: up, down, testing"},{"location":"reference/api/#agent","title":"Agent","text":"
Agent is an internal API object used by the controller to pass all relevant information to the agent running on a specific switch in order to fully configure it and manage its lifecycle. It is not intended to be used directly by users. Spec of the object isn't user-editable, it is managed by the controller. Status of the object is updated by the agent and is used by the controller to track the state of the agent and the switch it is running on. Name of the Agent object is the same as the name of the switch it is running on and it's created in the same namespace as the Switch object.
Field Description Default Validation apiVersion string agent.githedgehog.com/v1beta1 kind string Agent metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. status AgentStatus Status is the observed state of the Agent"},{"location":"reference/api/#agentstatus","title":"AgentStatus","text":"
AgentStatus defines the observed state of the agent running on a specific switch and includes information about the switch itself as well as the state of the agent and applied configuration.
Appears in: - Agent
Field Description Default Validation version string Current running agent version installID string ID of the agent installation, used to track NOS re-installs runID string ID of the agent run, used to track NOS reboots lastHeartbeat Time Time of the last heartbeat from the agent lastAttemptTime Time Time of the last attempt to apply configuration lastAttemptGen integer Generation of the last attempt to apply configuration lastAppliedTime Time Time of the last successful configuration application lastAppliedGen integer Generation of the last successful configuration application state SwitchState Detailed switch state updated with each heartbeat conditions Condition array Conditions of the agent, includes readiness marker for use with kubectl wait"},{"location":"reference/api/#bgpmessages","title":"BGPMessages","text":"
Appears in: - SwitchStateBGPNeighbor
Field Description Default Validation received BGPMessagesCounters sent BGPMessagesCounters"},{"location":"reference/api/#bgpmessagescounters","title":"BGPMessagesCounters","text":"
Appears in: - BGPMessages
Field Description Default Validation capability integer keepalive integer notification integer open integer routeRefresh integer update integer"},{"location":"reference/api/#bgpneighborsessionstate","title":"BGPNeighborSessionState","text":"
Underlying type: string
Appears in: - SwitchStateBGPNeighbor
Possible values: idle, connect, active, openSent, openConfirm, established"},{"location":"reference/api/#bgppeertype","title":"BGPPeerType","text":"
Underlying type: string
Appears in: - SwitchStateBGPNeighbor
Possible values: internal, external"},{"location":"reference/api/#operstatus","title":"OperStatus","text":"
Underlying type: string
Appears in: - SwitchStateInterface
Possible values: up, down, testing, unknown, dormant, notPresent, lowerLayerDown"},{"location":"reference/api/#switchstate","title":"SwitchState","text":"
Appears in: - AgentStatus
Field Description Default Validation nos SwitchStateNOS Information about the switch and NOS interfaces object (keys:string, values:SwitchStateInterface) Switch interfaces state (incl. physical, management and port channels) breakouts object (keys:string, values:SwitchStateBreakout) Breakout ports state (port -> breakout state) bgpNeighbors object (keys:string, values:map[string]SwitchStateBGPNeighbor) State of all BGP neighbors (VRF -> neighbor address -> state) platform SwitchStatePlatform State of the switch platform (fans, PSUs, sensors) criticalResources SwitchStateCRM State of the critical resources (ACLs, routes, etc.)"},{"location":"reference/api/#switchstatebgpneighbor","title":"SwitchStateBGPNeighbor","text":"
Appears in: - SwitchState
Field Description Default Validation connectionsDropped integer enabled boolean establishedTransitions integer lastEstablished Time lastRead Time lastResetReason string lastResetTime Time lastWrite Time localAS integer messages BGPMessages peerAS integer peerGroup string peerPort integer peerType BGPPeerType remoteRouterID string sessionState BGPNeighborSessionState shutdownMessage string prefixes object (keys:string, values:SwitchStateBGPNeighborPrefixes)"},{"location":"reference/api/#switchstatebgpneighborprefixes","title":"SwitchStateBGPNeighborPrefixes","text":"
Appears in: - SwitchStateBGPNeighbor
Field Description Default Validation received integer receivedPrePolicy integer sent integer"},{"location":"reference/api/#switchstatebreakout","title":"SwitchStateBreakout","text":"
Appears in: - SwitchState
Field Description Default Validation mode string nosMembers string array status string"},{"location":"reference/api/#switchstatecrm","title":"SwitchStateCRM","text":"
Appears in: - SwitchState
Field Description Default Validation aclStats SwitchStateCRMACLStats stats SwitchStateCRMStats"},{"location":"reference/api/#switchstatecrmacldetails","title":"SwitchStateCRMACLDetails","text":"
Field Description Default Validation lag SwitchStateCRMACLDetails port SwitchStateCRMACLDetails rif SwitchStateCRMACLDetails switch SwitchStateCRMACLDetails vlan SwitchStateCRMACLDetails"},{"location":"reference/api/#switchstatecrmaclstats","title":"SwitchStateCRMACLStats","text":"
Appears in: - SwitchStateCRM
Field Description Default Validation egress SwitchStateCRMACLInfo ingress SwitchStateCRMACLInfo"},{"location":"reference/api/#switchstatecrmstats","title":"SwitchStateCRMStats","text":"
Field Description Default Validation chassisID string systemName string systemDescription string portID string portDescription string manufacturer string model string serialNumber string"},{"location":"reference/api/#switchstatenos","title":"SwitchStateNOS","text":"
SwitchStateNOS contains information about the switch and NOS received from the switch itself by the agent
Appears in: - SwitchState
Field Description Default Validation asicVersion string ASIC name, such as \"broadcom\" or \"vs\" buildCommit string NOS build commit buildDate string NOS build date builtBy string NOS build user configDbVersion string NOS config DB version, such as \"version_4_2_1\" distributionVersion string Distribution version, such as \"Debian 10.13\" hardwareVersion string Hardware version, such as \"X01\" hwskuVersion string Hwsku version, such as \"DellEMC-S5248f-P-25G-DPB\" kernelVersion string Kernel version, such as \"5.10.0-21-amd64\" mfgName string Manufacturer name, such as \"Dell EMC\" platformName string Platform name, such as \"x86_64-dellemc_s5248f_c3538-r0\" productDescription string NOS product description, such as \"Enterprise SONiC Distribution by Broadcom - Enterprise Base package\" productVersion string NOS product version, empty for Broadcom SONiC serialNumber string Switch serial number softwareVersion string NOS software version, such as \"4.2.0-Enterprise_Base\" uptime string Switch uptime, such as \"21:21:27 up 1 day, 23:26, 0 users, load average: 1.92, 1.99, 2.00 \""},{"location":"reference/api/#switchstateplatform","title":"SwitchStatePlatform","text":"
Appears in: - SwitchState
Field Description Default Validation fans object (keys:string, values:SwitchStatePlatformFan) psus object (keys:string, values:SwitchStatePlatformPSU) temperature object (keys:string, values:SwitchStatePlatformTemperature)"},{"location":"reference/api/#switchstateplatformfan","title":"SwitchStatePlatformFan","text":"
Appears in: - SwitchStatePlatform
Field Description Default Validation direction string speed float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformpsu","title":"SwitchStatePlatformPSU","text":"
Appears in: - SwitchStatePlatform
Field Description Default Validation inputCurrent float inputPower float inputVoltage float outputCurrent float outputPower float outputVoltage float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformtemperature","title":"SwitchStatePlatformTemperature","text":"
Appears in: - SwitchStatePlatform
Field Description Default Validation temperature float alarms string highThreshold float criticalHighThreshold float lowThreshold float criticalLowThreshold float"},{"location":"reference/api/#switchstatetransceiver","title":"SwitchStateTransceiver","text":"
Package v1beta1 contains API Schema definitions for the dhcp v1beta1 API group. It is the primary internal API group for Hedgehog DHCP server configuration; it stores leases and makes them available to the end user through the API. Not intended to be modified by the user.
DHCPAllocated is a single allocated IP with expiry time and hostname from DHCP requests, it's effectively a DHCP lease
Appears in: - DHCPSubnetStatus
Field Description Default Validation ip string Allocated IP address expiry Time Expiry time of the lease hostname string Hostname from DHCP request"},{"location":"reference/api/#dhcpsubnet","title":"DHCPSubnet","text":"
DHCPSubnet is the configuration (spec) for the Hedgehog DHCP server and storage for the leases (status). It is primarily an internal API group, but it makes allocated IP/lease information available to the end user through the API. Not intended to be modified by the user.
Field Description Default Validation apiVersion string dhcp.githedgehog.com/v1beta1 kind string DHCPSubnet metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec DHCPSubnetSpec Spec is the desired state of the DHCPSubnet status DHCPSubnetStatus Status is the observed state of the DHCPSubnet"},{"location":"reference/api/#dhcpsubnetspec","title":"DHCPSubnetSpec","text":"
DHCPSubnetSpec defines the desired state of DHCPSubnet
Appears in: - DHCPSubnet
Field Description Default Validation subnet string Full VPC subnet name (including VPC name), such as \"vpc-0/default\" cidrBlock string CIDR block to use for VPC subnet, such as \"10.10.10.0/24\" gateway string Gateway, such as 10.10.10.1 startIP string Start IP from the CIDRBlock to allocate IPs, such as 10.10.10.10 endIP string End IP from the CIDRBlock to allocate IPs, such as 10.10.10.99 vrf string VRF name to identify specific VPC (will be added to DHCP packets by DHCP relay in suboption 151), such as \"VrfVvpc-1\" as it's named on switch circuitID string VLAN ID to identify specific subnet within the VPC, such as \"Vlan1000\" as it's named on switch pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option. defaultURL string DefaultURL (optional) is the option 114 \"default-url\" to be sent to the clients"},{"location":"reference/api/#dhcpsubnetstatus","title":"DHCPSubnetStatus","text":"
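As an illustration of how the DHCPSubnetSpec fields fit together, a DHCPSubnet object (normally managed by the Fabric itself, not created by the user) might look like this sketch; all names and addresses are examples taken from the field descriptions above:

```yaml
apiVersion: dhcp.githedgehog.com/v1beta1
kind: DHCPSubnet
metadata:
  name: vpc-0--default # illustrative object name
spec:
  subnet: vpc-0/default # full VPC subnet name, including the VPC name
  cidrBlock: 10.10.10.0/24
  gateway: 10.10.10.1
  startIP: 10.10.10.10
  endIP: 10.10.10.99
  vrf: VrfVvpc-1 # VRF name as it appears on the switch
  circuitID: Vlan1000 # VLAN as it appears on the switch
  dnsServers:
    - 10.10.10.1
  timeServers:
    - 10.10.10.1
  interfaceMTU: 1500 # clients may or may not honor this option
```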
DHCPSubnetStatus defines the observed state of DHCPSubnet
Appears in: - DHCPSubnet
Field Description Default Validation allocated object (keys:string, values:DHCPAllocated) Allocated is a map of allocated IPs with expiry time and hostname from DHCP requests"},{"location":"reference/api/#vpcgithedgehogcomv1beta1","title":"vpc.githedgehog.com/v1beta1","text":"
Package v1beta1 contains API Schema definitions for the vpc v1beta1 API group. It is public API group for the VPCs and Externals APIs. Intended to be used by the user.
External object represents an external system connected to the Fabric and available to a specific IPv4Namespace. Users can set up external peering with the external system by specifying the name of the External object, without needing to worry about the details of how the external system is attached to the Fabric.
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string External metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalSpec Spec is the desired state of the External status ExternalStatus Status is the observed state of the External"},{"location":"reference/api/#externalattachment","title":"ExternalAttachment","text":"
ExternalAttachment is a definition of how a specific switch is connected with an external system (External object). Effectively it represents a BGP peering between the switch and the external system, including all needed configuration.
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalAttachmentSpec Spec is the desired state of the ExternalAttachment status ExternalAttachmentStatus Status is the observed state of the ExternalAttachment"},{"location":"reference/api/#externalattachmentneighbor","title":"ExternalAttachmentNeighbor","text":"
ExternalAttachmentNeighbor defines the BGP neighbor configuration for the external attachment
Appears in: - ExternalAttachmentSpec
Field Description Default Validation asn integer ASN is the ASN of the BGP neighbor ip string IP is the IP address of the BGP neighbor to peer with"},{"location":"reference/api/#externalattachmentspec","title":"ExternalAttachmentSpec","text":"
ExternalAttachmentSpec defines the desired state of ExternalAttachment
Appears in: - ExternalAttachment
Field Description Default Validation external string External is the name of the External object this attachment belongs to connection string Connection is the name of the Connection object this attachment belongs to (essentially the name of the switch/port) switch ExternalAttachmentSwitch Switch is the switch port configuration for the external attachment neighbor ExternalAttachmentNeighbor Neighbor is the BGP neighbor configuration for the external attachment"},{"location":"reference/api/#externalattachmentstatus","title":"ExternalAttachmentStatus","text":"
ExternalAttachmentStatus defines the observed state of ExternalAttachment
ExternalAttachmentSwitch defines the switch port configuration for the external attachment
Appears in: - ExternalAttachmentSpec
Field Description Default Validation vlan integer VLAN (optional) is the VLAN ID used for the subinterface on a switch port specified in the connection, set to 0 if no VLAN is used ip string IP is the IP address of the subinterface on a switch port specified in the connection"},{"location":"reference/api/#externalpeering","title":"ExternalPeering","text":"
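Putting the ExternalAttachment fields together, a sketch of such an object; the object name, connection name, and address values are illustrative, not taken from a real fabric:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalAttachment
metadata:
  name: external-1--switch-1 # illustrative name
spec:
  external: external-1 # name of the External object this attachment belongs to
  connection: switch-1--external-1 # illustrative Connection object name (switch/port)
  switch:
    vlan: 100 # set to 0 if no VLAN is used for the subinterface
    ip: 192.168.100.1/24 # IP of the subinterface on the switch port
  neighbor:
    asn: 65102 # ASN of the BGP neighbor
    ip: 192.168.100.2 # IP of the BGP neighbor to peer with
```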
ExternalPeering is the Schema for the externalpeerings API
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string ExternalPeering metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalPeeringSpec Spec is the desired state of the ExternalPeering status ExternalPeeringStatus Status is the observed state of the ExternalPeering"},{"location":"reference/api/#externalpeeringspec","title":"ExternalPeeringSpec","text":"
ExternalPeeringSpec defines the desired state of ExternalPeering
Appears in: - ExternalPeering
Field Description Default Validation permit ExternalPeeringSpecPermit Permit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit"},{"location":"reference/api/#externalpeeringspecexternal","title":"ExternalPeeringSpecExternal","text":"
ExternalPeeringSpecExternal defines the External-side of the configuration to peer with
Appears in: - ExternalPeeringSpecPermit
Field Description Default Validation name string Name is the name of the External to peer with prefixes ExternalPeeringSpecPrefix array Prefixes is the list of prefixes to permit from the External to the VPC"},{"location":"reference/api/#externalpeeringspecpermit","title":"ExternalPeeringSpecPermit","text":"
ExternalPeeringSpecPermit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit
Appears in: - ExternalPeeringSpec
Field Description Default Validation vpc ExternalPeeringSpecVPC VPC is the VPC-side of the configuration to peer with external ExternalPeeringSpecExternal External is the External-side of the configuration to peer with"},{"location":"reference/api/#externalpeeringspecprefix","title":"ExternalPeeringSpecPrefix","text":"
ExternalPeeringSpecPrefix defines the prefix to permit from the External to the VPC
Appears in: - ExternalPeeringSpecExternal
Field Description Default Validation prefix string Prefix is the subnet to permit from the External to the VPC, e.g. 0.0.0.0/0 for any route including the default route. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one."},{"location":"reference/api/#externalpeeringspecvpc","title":"ExternalPeeringSpecVPC","text":"
ExternalPeeringSpecVPC defines the VPC-side of the configuration to peer with
Appears in: - ExternalPeeringSpecPermit
Field Description Default Validation name string Name is the name of the VPC to peer with subnets string array Subnets is the list of subnets to advertise from VPC to the External"},{"location":"reference/api/#externalpeeringstatus","title":"ExternalPeeringStatus","text":"
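Combining the permit fields above, a sketch of an ExternalPeering object; the VPC, External, and subnet names are illustrative:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: vpc-1--external-1 # illustrative name
spec:
  permit:
    vpc:
      name: vpc-1
      subnets: # subnets to advertise from the VPC to the External
        - default
    external:
      name: external-1
      prefixes: # prefixes to permit from the External to the VPC
        - prefix: 0.0.0.0/0 # any route, including the default route
```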
ExternalPeeringStatus defines the observed state of ExternalPeering
ExternalSpec describes IPv4 namespace External belongs to and inbound/outbound communities which are used to filter routes from/to the external system.
Appears in: - External
Field Description Default Validation ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this External belongs to inboundCommunity string InboundCommunity is the inbound community to filter routes from the external system (e.g. 65102:5000) outboundCommunity string OutboundCommunity is the outbound community that all outbound routes will be stamped with (e.g. 50000:50001)"},{"location":"reference/api/#externalstatus","title":"ExternalStatus","text":"
ExternalStatus defines the observed state of External
IPv4Namespace represents a namespace for VPC subnet allocation. All VPC subnets within a single IPv4Namespace are non-overlapping. Users can create multiple IPv4Namespaces to reuse the same VPC subnets.
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string IPv4Namespace metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec IPv4NamespaceSpec Spec is the desired state of the IPv4Namespace status IPv4NamespaceStatus Status is the observed state of the IPv4Namespace"},{"location":"reference/api/#ipv4namespacespec","title":"IPv4NamespaceSpec","text":"
IPv4NamespaceSpec defines the desired state of IPv4Namespace
Appears in: - IPv4Namespace
Field Description Default Validation subnets string array Subnets is the list of subnets to allocate VPC subnets from, couldn't overlap between each other and with Fabric reserved subnets MaxItems: 20 MinItems: 1"},{"location":"reference/api/#ipv4namespacestatus","title":"IPv4NamespaceStatus","text":"
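A minimal IPv4Namespace sketch; the name and subnet values are illustrative:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: default # illustrative name
spec:
  subnets: # 1-20 subnets, non-overlapping with each other and with Fabric reserved subnets
    - 10.10.0.0/16
```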
IPv4NamespaceStatus defines the observed state of IPv4Namespace
A VPC is a Virtual Private Cloud; similar to a public cloud VPC, it provides an isolated private network for resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPC metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCSpec Spec is the desired state of the VPC status VPCStatus Status is the observed state of the VPC"},{"location":"reference/api/#vpcattachment","title":"VPCAttachment","text":"
VPCAttachment is the Schema for the vpcattachments API
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1 kind string VPCAttachment metadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCAttachmentSpec Spec is the desired state of the VPCAttachment status VPCAttachmentStatus Status is the observed state of the VPCAttachment"},{"location":"reference/api/#vpcattachmentspec","title":"VPCAttachmentSpec","text":"
VPCAttachmentSpec defines the desired state of VPCAttachment
Appears in: - VPCAttachment
Field Description Default Validation subnet string Subnet is the full name of the VPC subnet to attach to, such as \"vpc-1/default\" connection string Connection is the name of the connection to attach to the VPC nativeVLAN boolean NativeVLAN is the flag to indicate if the native VLAN should be used for attaching the VPC subnet"},{"location":"reference/api/#vpcattachmentstatus","title":"VPCAttachmentStatus","text":"
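A sketch of a VPCAttachment tying a VPC subnet to a connection; both the VPC subnet name and the connection name are illustrative:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  name: vpc-1--server-1 # illustrative name
spec:
  subnet: vpc-1/default # full name of the VPC subnet to attach to
  connection: server-1--unbundled # illustrative connection name
  nativeVLAN: false # set true to use the native VLAN for attaching the subnet
```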
VPCAttachmentStatus defines the observed state of VPCAttachment
VPCDHCP defines the on-demand DHCP configuration for the subnet
Appears in: - VPCSubnet
- `relay` (string): Relay is the DHCP relay IP address; if specified, the DHCP server will be disabled.
- `enable` (boolean): Enable enables the DHCP server for the subnet.
- `range` (VPCDHCPRange, optional): Range is the DHCP range for the subnet if the DHCP server is enabled.
- `options` (VPCDHCPOptions, optional): Options is the DHCP options for the subnet if the DHCP server is enabled.

### VPCDHCPOptions
VPCDHCPOptions defines the DHCP options for the subnet if DHCP server is enabled
Appears in: - VPCDHCP
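The DHCP fields documented in this section combine roughly as follows; the addresses and the PXE URL are illustrative, taken from the examples in the field descriptions:

```yaml
dhcp:
  enable: true
  range:
    start: 10.0.1.10
    end: 10.0.1.99
  options:
    pxeURL: tftp://10.10.10.99/bootfilename
    dnsServers: [10.10.10.1, 10.10.10.2]
    timeServers: [10.10.10.1, 10.10.10.2]
    interfaceMTU: 1500     # honoring this is up to the client
```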
- `pxeURL` (string, optional): PXEURL identifies the PXE server used to boot hosts connected to this segment, such as `http://10.10.10.99/bootfilename` or `tftp://10.10.10.99/bootfilename`; HTTP query strings are not supported.
- `dnsServers` (string array, optional): Domain Name Servers to configure for this particular segment, such as 10.10.10.1, 10.10.10.2.
- `timeServers` (string array, optional): NTP server addresses to configure as time servers for this particular segment, such as 10.10.10.1, 10.10.10.2.
- `interfaceMTU` (integer, optional): The MTU setting that the DHCP server will send to clients; honoring this option is up to the client.

### VPCDHCPRange
VPCDHCPRange defines the DHCP range for the subnet if DHCP server is enabled
Appears in: - VPCDHCP
- `start` (string): Start is the start IP address of the DHCP range.
- `end` (string): End is the end IP address of the DHCP range.

### VPCPeer
Appears in: - VPCPeeringSpec
- `subnets` (string array): Subnets is the list of subnets to advertise from the current VPC to the peer VPC. Validation: MinItems: 1, MaxItems: 10.

### VPCPeering
VPCPeering represents a peering between two VPCs with corresponding filtering rules. Minimal example of the VPC peering showing vpc-1 to vpc-2 peering with all subnets allowed:
```yaml
spec:
  permit:
    - vpc-1: {}
      vpc-2: {}
```
- `apiVersion` (string): `vpc.githedgehog.com/v1beta1`
- `kind` (string): `VPCPeering`
- `metadata` (ObjectMeta): Refer to Kubernetes API documentation for fields of metadata.
- `spec` (VPCPeeringSpec): Spec is the desired state of the VPCPeering.
- `status` (VPCPeeringStatus): Status is the observed state of the VPCPeering.

### VPCPeeringSpec
VPCPeeringSpec defines the desired state of VPCPeering
Appears in: - VPCPeering
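A sketch of a permit policy restricted to specific subnets using the VPCPeer `subnets` field; the subnet names are illustrative:

```yaml
spec:
  permit:
    - vpc-1:
        subnets: [default]            # advertise only "default" from vpc-1
      vpc-2:
        subnets: [default, backend]   # advertise these subnets from vpc-2
```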
- `remote` (string)
- `permit` (map[string]VPCPeer array): Permit defines a list of the peering policies, i.e. which VPC subnets will have access to the peer VPC subnets. Validation: MinItems: 1, MaxItems: 10.

### VPCPeeringStatus
VPCPeeringStatus defines the observed state of VPCPeering
### VPCSpec

VPCSpec defines the desired state of VPC. At least one subnet is required.
Appears in: - VPC
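As a sketch, the VPCSpec fields described in this section can be combined into a spec with two subnets, isolation on by default, and a permit policy that re-opens access between them; all names and addresses are illustrative:

```yaml
spec:
  ipv4Namespace: default
  vlanNamespace: default
  defaultIsolated: true       # subnets are isolated unless permitted
  subnets:
    frontend:
      subnet: 10.0.1.0/24
      vlan: 1001
    backend:
      subnet: 10.0.2.0/24
      vlan: 1002
  permit:
    - [frontend, backend]     # allow these two subnets to reach each other
```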
- `subnets` (object, keys: string, values: VPCSubnet): Subnets is the list of VPC subnets to configure.
- `ipv4Namespace` (string): IPv4Namespace is the name of the IPv4Namespace this VPC belongs to ("default" is used if not specified).
- `vlanNamespace` (string): VLANNamespace is the name of the VLANNamespace this VPC belongs to ("default" is used if not specified).
- `defaultIsolated` (boolean): DefaultIsolated sets the default behavior for isolated mode for the subnets (disabled by default).
- `defaultRestricted` (boolean): DefaultRestricted sets the default behavior for restricted mode for the subnets (disabled by default).
- `permit` (string array array): Permit defines a list of access policies between the subnets within the VPC; each policy is a list of subnets that have access to each other. It's applied on top of the subnet isolation flag: a subnet that isn't isolated doesn't need to appear in a permit list, while a subnet that is marked as isolated must appear in a permit list to have access to other subnets.
- `staticRoutes` (VPCStaticRoute array): StaticRoutes is the list of additional static routes for the VPC.

### VPCStaticRoute
VPCStaticRoute defines the static route for the VPC
Appears in: - VPCSpec
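Using the example values from the field descriptions, a static route entry in a VPC spec might look like:

```yaml
spec:
  staticRoutes:
    - prefix: 10.42.0.0/24   # mandatory
      nextHops:              # at least one is required
        - 10.99.0.0
```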
- `prefix` (string): Prefix for the static route (mandatory), e.g. 10.42.0.0/24.
- `nextHops` (string array): NextHops for the static route (at least one is required), e.g. 10.99.0.0.

### VPCStatus
### VPCSubnet

- `subnet` (string): Subnet is the subnet CIDR block, such as "10.0.0.0/24"; it should belong to the IPv4Namespace and be unique within the namespace.
- `gateway` (string, optional): Gateway for the subnet; if not specified, the first IP (e.g. 10.0.0.1) in the subnet is used as the gateway.
- `dhcp` (VPCDHCP): DHCP is the on-demand DHCP configuration for the subnet.
- `vlan` (integer): VLAN is the VLAN ID for the subnet; it should belong to the VLANNamespace and be unique within the namespace.
- `isolated` (boolean): Isolated is the flag to enable isolated mode for the subnet, which means no access to and from the other subnets within the VPC.
- `restricted` (boolean): Restricted is the flag to enable restricted mode for the subnet, which means no access between hosts within the subnet itself.

### wiring.githedgehog.com/v1beta1
Package v1beta1 contains API Schema definitions for the wiring v1beta1 API group. It is a public API group mainly for the underlay definition, including Switches, Servers, and the wiring between them. Intended to be used by the user.
### BasePortName

- `port` (string): Port defines the full name of the switch port in the format "device/port", such as "spine-1/Ethernet1". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object.

### ConnBundled
ConnBundled defines the bundled connection (port channel, single server to a single switch with multiple links)
Appears in: - ConnectionSpec
- `links` (ServerToSwitchLink array): Links is the list of server-to-switch links.
- `mtu` (integer): MTU is the MTU to be configured on the switch port or port channel.

### ConnESLAG
ConnESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)
Appears in: - ConnectionSpec
- `links` (ServerToSwitchLink array): Links is the list of server-to-switch links. Validation: MinItems: 2.
- `mtu` (integer): MTU is the MTU to be configured on the switch port or port channel.
- `fallback` (boolean): Fallback is an optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link.

### ConnExternal
ConnExternal defines the external connection (single switch to a single external device with a single link)
Appears in: - ConnectionSpec
- `link` (ConnExternalLink): Link is the external connection link.

### ConnExternalLink
ConnExternalLink defines the external connection link
Appears in: - ConnExternal
- `switch` (BasePortName)

### ConnFabric
ConnFabric defines the fabric connection (single spine to a single leaf with at least one link)
Appears in: - ConnectionSpec
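A sketch of a fabric connection spec with a single spine-to-leaf link; the switch names, ports, and the /31 underlay addressing are illustrative:

```yaml
spec:
  fabric:
    links:
      - spine:
          port: spine-1/Ethernet1
          ip: 172.30.128.0/31     # hypothetical underlay addressing
        leaf:
          port: leaf-1/Ethernet49
          ip: 172.30.128.1/31
```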
- `links` (FabricLink array): Links is the list of spine-to-leaf links. Validation: MinItems: 1.

### ConnFabricLinkSwitch
ConnFabricLinkSwitch defines the switch side of the fabric link
Appears in: - FabricLink
- `port` (string): Port defines the full name of the switch port in the format "device/port", such as "spine-1/Ethernet1". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object.
- `ip` (string): IP is the IP address of the switch side of the fabric link (switch port configuration). Pattern: `^((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.?\b){4}/([1-2]?[0-9]|3[0-2])$`

### ConnMCLAG
ConnMCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links)
Appears in: - ConnectionSpec
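A sketch of an MCLAG connection spec with the minimum of two links to a pair of switches; the server, switch, and port names are illustrative:

```yaml
spec:
  mclag:
    links:
      - server:
          port: server-1/enp2s1   # hypothetical server port name
        switch:
          port: leaf-1/Ethernet1
      - server:
          port: server-1/enp2s2
        switch:
          port: leaf-2/Ethernet1
    fallback: false
```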
- `links` (ServerToSwitchLink array): Links is the list of server-to-switch links. Validation: MinItems: 2.
- `mtu` (integer): MTU is the MTU to be configured on the switch port or port channel.
- `fallback` (boolean): Fallback is an optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link.

### ConnMCLAGDomain
ConnMCLAGDomain defines the MCLAG domain connection, which makes two switches into a single logical switch or redundancy group and allows the use of MCLAG connections to connect servers in a multi-homed way.
Appears in: - ConnectionSpec
- `peerLinks` (SwitchToSwitchLink array): PeerLinks is the list of peer links between the switches, used to pass server traffic between switches. Validation: MinItems: 1.
- `sessionLinks` (SwitchToSwitchLink array): SessionLinks is the list of session links between the switches, used only to pass MCLAG control plane and BGP traffic between switches. Validation: MinItems: 1.

### ConnStaticExternal
ConnStaticExternal defines the static external connection (single switch to a single external device with a single link)
Appears in: - ConnectionSpec
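A sketch of a static external connection provisioned within a VPC VRF; all names, addresses, and the default-route subnet are illustrative:

```yaml
spec:
  staticExternal:
    withinVPC: vpc-1              # optional; provision within this VPC's VRF
    link:
      switch:
        port: leaf-1/Ethernet56
        ip: 192.168.100.1/24      # switch port configuration
        nextHop: 192.168.100.254  # next hop for the static routes
        subnets:
          - 0.0.0.0/0             # subnets routed via the next hop
        vlan: 100                 # optional VLAN on the switch port
```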
- `link` (ConnStaticExternalLink): Link is the static external connection link.
- `withinVPC` (string): WithinVPC is the optional VPC name; when set, the static external connection is provisioned within the VPC VRF instead of the default one, making the resource available to that specific VPC.

### ConnStaticExternalLink
ConnStaticExternalLink defines the static external connection link
Appears in: - ConnStaticExternal
- `switch` (ConnStaticExternalLinkSwitch): Switch is the switch side of the static external connection link.

### ConnStaticExternalLinkSwitch
ConnStaticExternalLinkSwitch defines the switch side of the static external connection link
Appears in: - ConnStaticExternalLink
- `port` (string): Port defines the full name of the switch port in the format "device/port", such as "spine-1/Ethernet1". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object.
- `ip` (string): IP is the IP address of the switch side of the static external connection link (switch port configuration). Pattern: `^((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.?\b){4}/([1-2]?[0-9]|3[0-2])$`
- `nextHop` (string): NextHop is the next hop IP address for the static routes that will be created for the subnets. Pattern: `^((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.?\b){4}$`
- `subnets` (string array): Subnets is the list of subnets that will get static routes using the specified next hop.
- `vlan` (integer): VLAN is the optional VLAN ID to be configured on the switch port.

### ConnUnbundled
ConnUnbundled defines the unbundled connection (no port channel, single server to a single switch with a single link)
Appears in: - ConnectionSpec
- `link` (ServerToSwitchLink): Link is the server-to-switch link.
- `mtu` (integer): MTU is the MTU to be configured on the switch port or port channel.

### ConnVPCLoopback
ConnVPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch), which enables an automated workaround named "VPC Loopback" that makes it possible to avoid switch hardware limitations and, in some cases, traffic going through the CPU.
Appears in: - ConnectionSpec
- `links` (SwitchToSwitchLink array): Links is the list of VPC loopback links. Validation: MinItems: 1.

### Connection
A Connection object represents a logical or physical connection between any devices in the Fabric (Switch, Server and External objects). It's needed to define all physical and logical connections between the devices in the Wiring Diagram. The connection type is defined by the top-level field in the ConnectionSpec; exactly one of them can be used in a single Connection object.
- `apiVersion` (string): `wiring.githedgehog.com/v1beta1`
- `kind` (string): `Connection`
- `metadata` (ObjectMeta): Refer to Kubernetes API documentation for fields of metadata.
- `spec` (ConnectionSpec): Spec is the desired state of the Connection.
- `status` (ConnectionStatus): Status is the observed state of the Connection.

### ConnectionSpec
ConnectionSpec defines the desired state of Connection
Appears in: - Connection
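Putting ConnectionSpec together, here is a sketch of a full Connection object for an unbundled server link; the object, server, and port names are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-1--unbundled--leaf-1   # illustrative name
spec:
  unbundled:                          # exactly one connection type per object
    link:
      server:
        port: server-1/enp2s1         # hypothetical server port name
      switch:
        port: leaf-1/Ethernet1
```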
- `unbundled` (ConnUnbundled): Unbundled defines the unbundled connection (no port channel, single server to a single switch with a single link).
- `bundled` (ConnBundled): Bundled defines the bundled connection (port channel, single server to a single switch with multiple links).
- `mclag` (ConnMCLAG): MCLAG defines the MCLAG connection (port channel, single server to a pair of switches with multiple links).
- `eslag` (ConnESLAG): ESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links).
- `mclagDomain` (ConnMCLAGDomain): MCLAGDomain defines the MCLAG domain connection, which makes two switches into a single logical switch for server multi-homing.
- `fabric` (ConnFabric): Fabric defines the fabric connection (single spine to a single leaf with at least one link).
- `vpcLoopback` (ConnVPCLoopback): VPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) for the automated workaround.
- `external` (ConnExternal): External defines the external connection (single switch to a single external device with a single link).
- `staticExternal` (ConnStaticExternal): StaticExternal defines the static external connection (single switch to a single external device with a single link).

### ConnectionStatus
ConnectionStatus defines the observed state of Connection
### FabricLink

- `spine` (ConnFabricLinkSwitch): Spine is the spine side of the fabric link.
- `leaf` (ConnFabricLinkSwitch): Leaf is the leaf side of the fabric link.

### Server
Server is the Schema for the servers API
- `apiVersion` (string): `wiring.githedgehog.com/v1beta1`
- `kind` (string): `Server`
- `metadata` (ObjectMeta): Refer to Kubernetes API documentation for fields of metadata.
- `spec` (ServerSpec): Spec is the desired state of the server.
- `status` (ServerStatus): Status is the observed state of the server.

### ServerFacingConnectionConfig
ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration
- `mtu` (integer): MTU is the MTU to be configured on the switch port or port channel.

### ServerSpec
ServerSpec defines the desired state of Server
Appears in: - Server
- `description` (string): Description is a description of the server.
- `profile` (string): Profile is the profile of the server, the name of the ServerProfile object to be used for this server; currently not used by the Fabric.

### ServerStatus
### ServerToSwitchLink

- `server` (BasePortName): Server is the server side of the connection.
- `switch` (BasePortName): Switch is the switch side of the connection.

### Switch
Switch is the Schema for the switches API
- `apiVersion` (string): `wiring.githedgehog.com/v1beta1`
- `kind` (string): `Switch`
- `metadata` (ObjectMeta): Refer to Kubernetes API documentation for fields of metadata.
- `spec` (SwitchSpec): Spec is the desired state of the switch.
- `status` (SwitchStatus): Status is the observed state of the switch.

### SwitchBoot
Appears in: - SwitchSpec
- `serial` (string): Identify the switch by serial number.
- `mac` (string): Identify the switch by the MAC address of the management port.

### SwitchGroup
SwitchGroup is the marker API object to group switches together; a switch can belong to multiple groups.
- `apiVersion` (string): `wiring.githedgehog.com/v1beta1`
- `kind` (string): `SwitchGroup`
- `metadata` (ObjectMeta): Refer to Kubernetes API documentation for fields of metadata.
- `spec` (SwitchGroupSpec): Spec is the desired state of the SwitchGroup.
- `status` (SwitchGroupStatus): Status is the observed state of the SwitchGroup.

### SwitchGroupSpec
SwitchGroupSpec defines the desired state of SwitchGroup
### SwitchProfile

SwitchProfile represents switch capabilities and configuration
- `apiVersion` (string): `wiring.githedgehog.com/v1beta1`
- `kind` (string): `SwitchProfile`
- `metadata` (ObjectMeta): Refer to Kubernetes API documentation for fields of metadata.
- `spec` (SwitchProfileSpec)
- `status` (SwitchProfileStatus)

### SwitchProfileConfig
Defines switch-specific configuration options
Appears in: - SwitchProfileSpec
- `maxPathsEBGP` (integer): MaxPathsIBGP defines the maximum number of IBGP paths to be configured.

### SwitchProfileFeatures
Defines features supported by a specific switch which is later used for roles and Fabric API features usage validation
Appears in: - SwitchProfileSpec
- `subinterfaces` (boolean): Subinterfaces defines if the switch supports subinterfaces.
- `vxlan` (boolean): VXLAN defines if the switch supports VXLANs.
- `acls` (boolean): ACLs defines if the switch supports ACLs.

### SwitchProfilePort
Defines a switch port configuration. Only one of Profile or Group can be set.
Appears in: - SwitchProfileSpec
- `nos` (string): NOSName defines how the port is named in the NOS.
- `baseNOSName` (string): BaseNOSName defines the base NOS name that can be used together with the profile to generate the actual NOS name (e.g. for breakouts).
- `label` (string): Label defines the physical port label you can see on the actual switch.
- `group` (string): If the port isn't directly manageable, group defines the group it belongs to; exclusive with profile.
- `profile` (string): If the port is directly configurable, profile defines the profile it belongs to; exclusive with group.
- `management` (boolean): Management defines if the port is a management port; it's a special case and it can't have a group or profile.
- `oniePortName` (string): OniePortName defines the ONIE port name, for management ports only.

### SwitchProfilePortGroup
Defines a switch port group configuration
Appears in: - SwitchProfileSpec
- `nos` (string): NOSName defines how the group is named in the NOS.
- `profile` (string): Profile defines the possible configuration profile for the group; it can only have a speed profile.

### SwitchProfilePortProfile
Defines a switch port profile configuration
Appears in: - SwitchProfileSpec
- `speed` (SwitchProfilePortProfileSpeed): Speed defines the speed configuration for the profile; exclusive with breakout.
- `breakout` (SwitchProfilePortProfileBreakout): Breakout defines the breakout configuration for the profile; exclusive with speed.
- `autoNegAllowed` (boolean): AutoNegAllowed defines if configuring auto-negotiation is allowed for the port.
- `autoNegDefault` (boolean): AutoNegDefault defines the default auto-negotiation state for the port.

### SwitchProfilePortProfileBreakout
Defines a switch port profile breakout configuration
Appears in: - SwitchProfilePortProfile
- `default` (string): Default defines the default breakout mode for the profile.
- `supported` (object, keys: string, values: SwitchProfilePortProfileBreakoutMode): Supported defines the supported breakout modes for the profile with the NOS name offsets.

### SwitchProfilePortProfileBreakoutMode
Defines a switch port profile breakout mode configuration
Appears in: - SwitchProfilePortProfileBreakout
- `offsets` (string array): Offsets defines the breakout NOS port name offset from the port NOS name for each breakout mode.

### SwitchProfilePortProfileSpeed
Defines a switch port profile speed configuration
Appears in: - SwitchProfilePortProfile
- `default` (string): Default defines the default speed for the profile.
- `supported` (string array): Supported defines the supported speeds for the profile.

### SwitchProfileSpec
SwitchProfileSpec defines the desired state of SwitchProfile
Appears in: - SwitchProfile
- `displayName` (string): DisplayName defines the human-readable name of the switch.
- `otherNames` (string array): OtherNames defines alternative names for the switch.
- `features` (SwitchProfileFeatures): Features defines the features supported by the switch.
- `config` (SwitchProfileConfig): Config defines the switch-specific configuration options.
- `ports` (object, keys: string, values: SwitchProfilePort): Ports defines the switch port configuration.
- `portGroups` (object, keys: string, values: SwitchProfilePortGroup): PortGroups defines the switch port group configuration.
- `portProfiles` (object, keys: string, values: SwitchProfilePortProfile): PortProfiles defines the switch port profile configuration.
- `nosType` (NOSType): NOSType defines the NOS type to be used for the switch.
- `platform` (string): Platform is what is expected to be requested by ONIE and displayed in the NOS.

### SwitchProfileStatus
SwitchProfileStatus defines the observed state of SwitchProfile
### SwitchRedundancy

SwitchRedundancy is the switch redundancy configuration, which includes the name of the redundancy group the switch belongs to and its type; it is used for both MCLAG and ESLAG connections. It defines how redundancy will be configured and handled on the switch, as well as which connection types will be available. If not specified, the switch will not be part of any redundancy group. If the name isn't empty, the type must be specified as well, and the name should be the same as one of the SwitchGroup objects.
Appears in: - SwitchSpec
- `group` (string): Group is the name of the redundancy group the switch belongs to.
- `type` (RedundancyType): Type is the type of the redundancy group; it can be mclag or eslag.

### SwitchRole
Underlying type: string
SwitchRole is the role of the switch; it can be spine, server-leaf, border-leaf or mixed-leaf.
Values: `spine`, `server-leaf`, `border-leaf`, `mixed-leaf`, `virtual-edge`

### SwitchSpec
SwitchSpec defines the desired state of Switch
Appears in: - Switch
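A sketch of a Switch object assembled from the fields in this section; the name is illustrative, while the profile and the speed/breakout examples are taken from the field descriptions and the supported-switches list:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: leaf-1              # illustrative name
spec:
  role: server-leaf
  profile: dell-s5248f-on   # from the switch profiles reference
  groups: [some-group]      # hypothetical SwitchGroup name
  vlanNamespaces: [default]
  portGroupSpeeds:
    "2": 10G                # example from the field description
  portBreakouts:
    "1/55": 4x25G           # example from the field description
```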
- `role` (SwitchRole): Role is the role of the switch; it can be spine, server-leaf, border-leaf or mixed-leaf. Validation: Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge], Required.
- `description` (string): Description is a description of the switch.
- `profile` (string): Profile is the profile of the switch, the name of the SwitchProfile object to be used for this switch; currently not used by the Fabric.
- `groups` (string array): Groups is a list of switch groups the switch belongs to.
- `redundancy` (SwitchRedundancy): Redundancy is the switch redundancy configuration, including the name of the redundancy group the switch belongs to and its type; used for both MCLAG and ESLAG connections.
- `vlanNamespaces` (string array): VLANNamespaces is a list of VLAN namespaces the switch is part of; their VLAN ranges must not overlap.
- `asn` (integer): ASN is the ASN of the switch.
- `ip` (string): IP is the IP of the switch that can be used to access it from other switches and control nodes in the Fabric.
- `vtepIP` (string): VTEPIP is the VTEP IP of the switch.
- `protocolIP` (string): ProtocolIP is used as the BGP Router ID for switch configuration.
- `portGroupSpeeds` (object, keys: string, values: string): PortGroupSpeeds is a map of port group speeds; the key is the port group name, the value is the speed, such as '"2": 10G'.
- `portSpeeds` (object, keys: string, values: string): PortSpeeds is a map of port speeds; the key is the port name, the value is the speed.
- `portBreakouts` (object, keys: string, values: string): PortBreakouts is a map of port breakouts; the key is the port name, the value is the breakout configuration, such as "1/55: 4x25G".
- `portAutoNegs` (object, keys: string, values: boolean): PortAutoNegs is a map of port auto-negotiation settings; the key is the port name, the value is true or false.
- `boot` (SwitchBoot): Boot is the boot/provisioning information of the switch.

### SwitchStatus
### SwitchToSwitchLink

SwitchToSwitchLink defines the switch-to-switch link
Appears in: - ConnMCLAGDomain - ConnVPCLoopback
- `switch1` (BasePortName): Switch1 is the first switch side of the connection.
- `switch2` (BasePortName): Switch2 is the second switch side of the connection.

### VLANNamespace
VLANNamespace is the Schema for the vlannamespaces API
- `apiVersion` (string): `wiring.githedgehog.com/v1beta1`
- `kind` (string): `VLANNamespace`
- `metadata` (ObjectMeta): Refer to Kubernetes API documentation for fields of metadata.
- `spec` (VLANNamespaceSpec): Spec is the desired state of the VLANNamespace.
- `status` (VLANNamespaceStatus): Status is the observed state of the VLANNamespace.

### VLANNamespaceSpec
VLANNamespaceSpec defines the desired state of VLANNamespace
Appears in: - VLANNamespace
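A sketch of a VLANNamespace object; note that the VLANRange fields are not defined in this excerpt, so the `from`/`to` bounds shown here are an assumption, and the name and range values are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: VLANNamespace
metadata:
  name: default
spec:
  ranges:
    - from: 1000   # assumed VLANRange field name
      to: 2999     # assumed VLANRange field name
```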
- `ranges` (VLANRange array): Ranges is a list of VLAN ranges to be used in this namespace; they must not overlap with each other or with Fabric reserved VLAN ranges. Validation: MinItems: 1, MaxItems: 20.

### VLANNamespaceStatus
VLANNamespaceStatus defines the observed state of VLANNamespace
Currently, the Fabric CLI is represented by a kubectl plugin, kubectl-fabric, automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client which allows managing Fabric resources in a more convenient way. The Fabric CLI only provides a subset of the functionality available via the Fabric API and is focused on simplifying object creation and some manipulation of already existing objects, while the main get/list/update operations are expected to be done using kubectl.
```
core@control-1 ~ $ kubectl fabric
NAME:
   hhfctl - Hedgehog Fabric user client

USAGE:
   hhfctl [global options] command [command options] [arguments...]

VERSION:
   v0.23.0

COMMANDS:
   vpc                VPC commands
   switch, sw, agent  Switch/Agent commands
   connection, conn   Connection commands
   switchgroup, sg    SwitchGroup commands
   external           External commands
   help, h            Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --verbose, -v  verbose output (includes debug) (default: true)
   --help, -h     show help
   --version, -V  print the version
```
The following is a list of all supported switches. Please make sure to use the version of the documentation that matches your environment to get an up-to-date list of supported switches, their features, and port naming scheme.
### Dell S5248F-ON

Profile Name (to use in switch.spec.profile): dell-s5248f-on
Supported features:
Subinterfaces: true
VXLAN: true
ACLs: true
Available Ports:
Label column is a port label on a physical switch.
| Port | Label | Type | Group | Default | Supported |
|------|-------|------|-------|---------|-----------|
| M1 | | Management | | | |
| E1/1-E1/4 | 1-4 | Port Group | 1 | 25G | 10G, 25G |
| E1/5-E1/8 | 5-8 | Port Group | 2 | 25G | 10G, 25G |
| E1/9-E1/12 | 9-12 | Port Group | 3 | 25G | 10G, 25G |
| E1/13-E1/16 | 13-16 | Port Group | 4 | 25G | 10G, 25G |
| E1/17-E1/20 | 17-20 | Port Group | 5 | 25G | 10G, 25G |
| E1/21-E1/24 | 21-24 | Port Group | 6 | 25G | 10G, 25G |
| E1/25-E1/28 | 25-28 | Port Group | 7 | 25G | 10G, 25G |
| E1/29-E1/32 | 29-32 | Port Group | 8 | 25G | 10G, 25G |
| E1/33-E1/36 | 33-36 | Port Group | 9 | 25G | 10G, 25G |
| E1/37-E1/40 | 37-40 | Port Group | 10 | 25G | 10G, 25G |
| E1/41-E1/44 | 41-44 | Port Group | 11 | 25G | 10G, 25G |
| E1/45-E1/48 | 45-48 | Port Group | 12 | 25G | 10G, 25G |
| E1/49-E1/56 | 49-56 | Breakout | | 1x100G | 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G |

### Edgecore DCS203
Profile Name (to use in switch.spec.profile): edgecore-dcs203
Other names: Edgecore AS7326-56X
Supported features:
Subinterfaces: true
VXLAN: true
ACLs: true
Available Ports:
Label column is a port label on a physical switch.
| Port | Label | Type | Group | Default | Supported |
|------|-------|------|-------|---------|-----------|
| M1 | | Management | | | |
| E1/1-E1/12 | 1-12 | Port Group | 1 | 25G | 10G, 25G |
| E1/13-E1/24 | 13-24 | Port Group | 2 | 25G | 10G, 25G |
| E1/25-E1/36 | 25-36 | Port Group | 3 | 25G | 10G, 25G |
| E1/37-E1/48 | 37-48 | Port Group | 4 | 25G | 10G, 25G |
| E1/49-E1/55 | 49-55 | Breakout | | 1x100G | 1x100G, 1x40G, 4x10G, 4x25G |
| E1/56 | 56 | Direct | | 100G | 40G, 100G |
| E1/57-E1/58 | 57-58 | Direct | | 10G | 1G, 10G |

### Edgecore DCS204
Profile Name (to use in switch.spec.profile): edgecore-dcs204
Other names: Edgecore AS7726-32X
Supported features:
Subinterfaces: true
VXLAN: true
ACLs: true
Available Ports:
Label column is a port label on a physical switch.
| Port | Label | Type | Group | Default | Supported |
|------|-------|------|-------|---------|-----------|
| M1 | | Management | | | |
| E1/1-E1/4 | 1-4 | Port Group | 1 | 25G | 10G, 25G |
| E1/5-E1/8 | 5-8 | Port Group | 2 | 25G | 10G, 25G |
| E1/9-E1/12 | 9-12 | Port Group | 3 | 25G | 10G, 25G |
| E1/13-E1/16 | 13-16 | Port Group | 4 | 25G | 10G, 25G |
| E1/17-E1/20 | 17-20 | Port Group | 5 | 25G | 10G, 25G |
| E1/21-E1/24 | 21-24 | Port Group | 6 | 25G | 10G, 25G |
| E1/25-E1/28 | 25-28 | Port Group | 7 | 25G | 10G, 25G |
| E1/29-E1/32 | 29-32 | Port Group | 8 | 25G | 10G, 25G |
| E1/33-E1/36 | 33-36 | Port Group | 9 | 25G | 10G, 25G |
| E1/37-E1/40 | 37-40 | Port Group | 10 | 25G | 10G, 25G |
| E1/41-E1/44 | 41-44 | Port Group | 11 | 25G | 10G, 25G |
| E1/45-E1/48 | 45-48 | Port Group | 12 | 25G | 10G, 25G |
| E1/49-E1/56 | 49-56 | Breakout | | 1x100G | 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G |

# Release notes

## Beta-1

### Device support
Groups LEAF switches to provide multi-homed connectivity to the Fabric
2-4 switches per group
Support for MCLAG and ESLAG (EVPN MH / ESI)
A single redundancy group can only support multi-homing of one type (ESLAG or MCLAG)
Multiple types of redundancy groups can be used in the fabric simultaneously
"},{"location":"release-notes/#improved-vpc-security-policy-better-zero-trust","title":"Improved VPC security policy - better Zero Trust","text":"
Inter-VPC
Allow inter-VPC and external peering with per subnet control
Intra-VPC intra-subnet policies
Isolated Subnets
subnets isolated by default from other subnets in the VPC
require an explicit user-defined permit list to allow communication with other subnets within the VPC
can be set on individual subnets within a VPC or for the entire VPC; off by default
Inter-VPC and external peering configurations are not affected and work the same as before
Restricted Subnets
Hosts within a subnet have no mutual reachability
Hosts within a subnet can be reached by members of other subnets or peered VPCs as specified by the policy
Inter-VPC and external peering configurations are not affected and work the same as before
Permit Lists
Intra-VPC Permit Lists govern connectivity between subnets within the VPC for isolated subnets
Inter-VPC Permit Lists govern which subnets of one VPC have access to some subnets of the other VPC for finer-grained control of inter-VPC and external peering
For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.
Spine controller connectivity is not supported.
For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.
Management port connectivity can also be supported for CLOS/LEAF-SPINE topology but requires all switches connected to the controllers via management ports. No chain booting is possible for this configuration.
A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.
"},{"location":"release-notes/#multiple-ip-address-namespaces","title":"Multiple IP address namespaces","text":"
Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.
VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to the VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.
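As a sketch, the two namespace objects might look like the following; the exact field names (such as subnets and ranges) are assumptions based on the API groups used elsewhere in this guide, so consult the Fabric API reference for the authoritative schema:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: default
  namespace: default
spec:
  subnets: # No subnet overlaps allowed within a single IPv4 namespace
    - 10.10.0.0/16
---
apiVersion: wiring.githedgehog.com/v1beta1
kind: VLANNamespace
metadata:
  name: default
  namespace: default
spec:
  ranges: # Non-overlapping VLAN ranges for the participating switches
    - from: 1000
      to: 2999
```

Each VPC then references one namespace of each kind via its ipv4Namespace and vlanNamespace fields.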
This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.
External peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of border or mixed leaf switches. Multiple "external systems" with multiple devices/links in each of them are supported.
The user controls what subnets/prefixes to import and export from/to the external system.
No NAT function is supported for external peering.
[ silicon platform limitation in Trident 3; limits to number of endpoints in the fabric ]
Total VPCs per switch: up to 1000
[ Including VPCs attached at the given switch and VPCs peered with ]
Total VPCs per VLAN namespace: up to 3000
[ assuming 1 subnet per VPC ]
Total VPCs per fabric: unlimited
[ if using multiple VLAN namespaces ]
VPC subnets per switch: up to 3000
VPC subnets per VLAN namespace: up to 3000
Subnets per VPC: up to 20
[ just a validation; the current design allows up to 100, but it could be increased even more in the future ]
VPC Slots per remote peering @ switch: 2
Max VPC loopbacks per switch: 500
[ VPC loopback workarounds per switch are needed for local peering when both VPCs are attached to the switch or for external peering with VPC attached on the same switch that is peering with external ]
Fabric MTU is 9100 and not configurable right now (A3 planned)
Server-facing MTU is 9136 and not configurable right now (A3+)
No support for Access VLANs for attaching servers (A3 planned)
VPC peering is enabled on all subnets of the participating VPCs. No subnet selection for peering. (A3 planned)
Peering with external is only possible with a VLAN (by design)
If you have VPCs with remote peering on a switch group, you can't attach those VPCs on that switch group (by definition of remote peering)
If a group of VPCs has remote peering on a switch group, any other VPC that will peer with those VPCs remotely needs to use the same switch group (by design)
If a VPC peers with an external, it can only be remotely peered with on the same switches that have a connection to that external (by design)
The server-facing connection object is immutable, as it's very easy to get into a deadlock; re-create it to change it (A3+)
A single controller connecting to each switch management port. No redundancy.
Controller requirements:
One 1 Gbps port per switch
One or more 1 Gbps (or faster) ports connecting to the external management network
4 Cores, 12GB RAM, 100GB SSD.
Seeder:
Seeder and Controller functions co-resident on the control node. Switch booting and ZTP on management ports directly connected to the controller.
HHFab - the fabricator:
An operational tool to generate, initiate, and maintain the fabric software appliance. Allows fabrication of the environment-specific image with all of the required underlay and security configuration baked in.
DHCP Service:
A simple DHCP server for assigning IP addresses to hosts connecting to the fabric, optimized for use with VPC overlay.
Topology:
Support for a Collapsed Core topology with 2 switch nodes.
Underlay:
A simple single-VRF network with a BGP control plane. IPv4 support only.
External connectivity:
An edge router must be connected to selected ports of one or both switches. IPv4 support only.
Dual-homing:
L2 Dual homing with MCLAG is implemented to connect servers, storage, and other devices in the data center. NIC bonding and LACP configuration at the host are required.
VPC overlay implementation:
VPC is implemented as a set of ACLs within the underlay VRF. External connectivity to the VRF is performed via internally managed VLANs. IPv4 support only.
VPC Peering:
VPC peering is performed via ACLs with no fine-grained control.
NAT:
DNAT + SNAT are supported per VPC; however, SNAT and DNAT can't be enabled simultaneously for the same VPC.
Hardware support:
Please see the supported hardware list.
Virtual Lab:
A simulation of the two-node Collapsed Core Topology as a virtual environment. Designed for use as a network simulation, a configuration scratchpad, or a training/demonstration tool. Minimum requirements: 8 cores, 24GB RAM, 100GB SSD
Limitations:
40 VPCs max
50 VPC peerings
[ 768 ACL entry platform limitation from Broadcom ]
Connection objects represent logical and physical connections between the devices in the Fabric (Switch, Server and External objects) and are needed to define all the connections in the Wiring Diagram.
All connections reference switch or server ports. Only port names defined by switch profiles can be used in the wiring diagram for the switches. NOS (or any other) port names aren't supported. Currently, server ports aren't validated by the Fabric API other than for uniqueness. See the Switch Profiles and Port Naming section for more details.
There are several types of connections.
"},{"location":"user-guide/connections/#workload-server-connections","title":"Workload server connections","text":"
Server connections are used to connect workload servers to switches.
Unbundled server connections are used to connect servers to a single switch using a single port.
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-4--unbundled--s5248-02
  namespace: default
spec:
  unbundled:
    link: # Defines a single link between a server and a switch
      server:
        port: server-4/enp2s1
      switch:
        port: s5248-02/Ethernet3
```
Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-3--bundled--s5248-01
  namespace: default
spec:
  bundled:
    links: # Defines multiple links between a single server and a single switch
      - server:
          port: server-3/enp2s1
        switch:
          port: s5248-01/Ethernet3
      - server:
          port: server-3/enp2s2
        switch:
          port: s5248-01/Ethernet4
```
MCLAG server connections are used to connect servers to a pair of switches using multiple ports (Dual-homing). Switches should be configured as an MCLAG pair which requires them to be in a single redundancy group of type mclag and a Connection with type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-1--mclag--s5248-01--s5248-02
  namespace: default
spec:
  mclag:
    links: # Defines multiple links between a single server and a pair of switches
      - server:
          port: server-1/enp2s1
        switch:
          port: s5248-01/Ethernet1
      - server:
          port: server-1/enp2s2
        switch:
          port: s5248-02/Ethernet1
```
ESLAG server connections are used to connect servers to 2-4 switches using multiple ports (multi-homing). Switches should belong to the same redundancy group with type eslag, but contrary to the MCLAG case, no other configuration is required.
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-1--eslag--s5248-01--s5248-02
  namespace: default
spec:
  eslag:
    links: # Defines multiple links between a single server and 2-4 switches
      - server:
          port: server-1/enp2s1
        switch:
          port: s5248-01/Ethernet1
      - server:
          port: server-1/enp2s2
        switch:
          port: s5248-02/Ethernet1
```
MCLAG-Domain connections define a pair of MCLAG switches with a Session and Peer link between them. Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and a Connection with type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: s5248-01--mclag-domain--s5248-02
  namespace: default
spec:
  mclagDomain:
    peerLinks: # Defines multiple links between a pair of MCLAG switches for the Peer link
      - switch1:
          port: s5248-01/Ethernet72
        switch2:
          port: s5248-02/Ethernet72
      - switch1:
          port: s5248-01/Ethernet73
        switch2:
          port: s5248-02/Ethernet73
    sessionLinks: # Defines multiple links between a pair of MCLAG switches for the Session link
      - switch1:
          port: s5248-01/Ethernet74
        switch2:
          port: s5248-02/Ethernet74
      - switch1:
          port: s5248-01/Ethernet75
        switch2:
          port: s5248-02/Ethernet75
```
VPC-Loopback connections are required in order to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is needed due to a hardware limitation of the currently supported switches.
"},{"location":"user-guide/connections/#connecting-fabric-to-the-outside-world","title":"Connecting Fabric to the outside world","text":"
Connections in this section provide connectivity to the outside world. For example, they can be connections to the Internet, to other networks, or to some other systems such as DHCP, NTP, LMA, or AAA services.
StaticExternal connections provide a simple way to connect things like DHCP servers directly to the Fabric by connecting them to specific switch ports.
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: third-party-dhcp-server--static-external--s5248-04
  namespace: default
spec:
  staticExternal:
    link:
      switch:
        port: s5248-04/Ethernet1 # Switch port to use
        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port
        vlan: 1005 # Optional VLAN ID to use for the switch port; if 0, no VLAN is configured
        subnets: # List of subnets to route to the switch port using static routes and next hop
          - 10.99.0.1/24
          - 10.199.0.100/32
        nextHop: 172.30.50.1 # Next hop IP address to use when configuring static routes for the "subnets" list
```
Additionally, it's possible to configure StaticExternal within the VPC to provide access to the third-party resources within a specific VPC, with the rest of the YAML configuration remaining unchanged.
```yaml
...
spec:
  staticExternal:
    withinVPC: vpc-1 # VPC name to attach the static external to
    link:
      ...
```
External connections are used to connect to external systems, such as edge/provider routers, using BGP peering. They allow configuring inbound/outbound communities as well as granularly controlling what gets advertised and which routes are accepted.
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: s5248-03--external--5835
  namespace: default
spec:
  external:
    link: # Defines a single link between a switch and an external system
      switch:
        port: s5248-03/Ethernet3
```
"},{"location":"user-guide/devices/","title":"Switches and Servers","text":"
All devices in a Hedgehog Fabric are divided into two groups: switches and servers, represented by the corresponding Switch and Server objects in the API. These objects are needed to define all of the participants of the Fabric and their roles in the Wiring Diagram, together with Connection objects (see Connections).
Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API. These objects consist of basic metadata like name, description, role, serial, management port mac, as well as port group speeds, port breakouts, ASN, IP addresses, and more. Additionally, a Switch contains a reference to a SwitchProfile object that defines the switch model and capabilities. More details can be found in the Switch Profiles and Port Naming section.
In order for the Fabric to manage a switch, either the serial or the mac needs to be defined in the YAML document.
```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: s5248-01
  namespace: default
spec:
  boot: # at least one of serial or mac needs to be defined
    serial: XYZPDQ1234
    mac: 00:11:22:33:44:55 # Usually the first management port MAC address
  profile: dell-s5248f-on # Mandatory reference to the SwitchProfile object defining the switch model and capabilities
  asn: 65101 # ASN of the switch
  description: leaf-1
  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node
  portBreakouts: # Configures port breakouts for the switch, see the SwitchProfile for available options
    E1/55: 4x25G
  portGroupSpeeds: # Configures port group speeds for the switch, see the SwitchProfile for available options
    "1": 10G
    "2": 10G
  portSpeeds: # Configures port speeds for the switch, see the SwitchProfile for available options
    E1/1: 25G
  protocolIP: 172.30.11.100/32 # Used as BGP router ID
  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf
  vlanNamespaces: # Defines which VLANs could be used to attach servers
    - default
  vtepIP: 172.30.12.100/32
  groups: # Defines which groups the switch belongs to, by referring to SwitchGroup objects
    - some-group
  redundancy: # Optional field to define that the switch belongs to a redundancy group
    group: eslag-1 # Name of the redundancy group
    type: eslag # Type of the redundancy group, one of mclag or eslag
```
The SwitchGroup is just a marker at that point and doesn't have any configuration options.
Redundancy groups are used to define the redundancy between switches. A redundancy group is a regular SwitchGroup used by multiple switches, and currently it can be of type MCLAG or ESLAG (EVPN MH / ESI). A switch can only belong to a single redundancy group.
MCLAG is only supported for pairs of switches and ESLAG is supported for up to 4 switches. Multiple types of redundancy groups can be used in the fabric simultaneously.
Connections with types mclag and eslag are used to define the servers connections to switches. They are only supported if the switch belongs to a redundancy group with the corresponding type.
In order to define a MCLAG or ESLAG redundancy group, you need to create a SwitchGroup object and assign it to the switches using the redundancy field.
In case of MCLAG it's required to have a special connection with type mclag-domain that defines the peer and session links between switches. For more details, see Connections.
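Putting the pieces together, a minimal MCLAG redundancy group might be sketched roughly as follows. The names here are illustrative, and this is only a partial sketch: the mclag-domain Connection and the matching spec.asn/spec.vtepIP described above are also required.

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: SwitchGroup
metadata:
  name: mclag-1 # The group itself is just a marker with no configuration
  namespace: default
---
# Each switch in the pair references the group via the redundancy field
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: s5248-01
  namespace: default
spec:
  profile: dell-s5248f-on
  redundancy:
    group: mclag-1 # Name of the redundancy group
    type: mclag # MCLAG is limited to pairs of switches
```

The second switch of the pair would carry the same redundancy block.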
Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and provide L3 connectivity. The External Peering feature allows you to set up an external peering endpoint and to enforce several policies between internal and external endpoints.
Note
Hedgehog Fabric does not operate Edge side devices.
Traffic exits from the Fabric on Border Leaves that are connected with Edge devices. Border Leaves are suitable to terminate L2VPN connections, to distinguish VPC L3 routable traffic towards Edge devices, and to land VPC servers. Border Leaves (or Borders) can connect to several Edge devices.
Note
External Peering is only available on switch devices that support subinterfaces.
"},{"location":"user-guide/external/#connect-border-leaf-to-edge-device","title":"Connect Border Leaf to Edge device","text":"
In order to distinguish VPC traffic, an Edge device should be able to:
Set up BGP IPv4 to advertise and receive routes from the Fabric
Connect to a Fabric Border Leaf over VLAN
Be able to mark egress routes towards the Fabric with BGP Communities
Be able to filter ingress routes from the Fabric by BGP Communities
All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.
VPC L3 routable traffic will be tagged with VLAN and sent to Edge device. Later processing of VPC traffic (NAT, PBR, etc) should happen on Edge devices.
"},{"location":"user-guide/external/#vpc-access-to-edge-device","title":"VPC access to Edge device","text":"
Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that the VPC can export to Edge devices and import from the Edge devices.
"},{"location":"user-guide/external/#api-and-implementation","title":"API and implementation","text":""},{"location":"user-guide/external/#external","title":"External","text":"
General configuration starts with the specification of External objects. Each object of type External can represent a set of Edge devices, a single BGP instance on an Edge device, or any other combined Edge entities that can be described with the following configuration:
Name of External
Inbound routes marked with the dedicated BGP community
Outbound routes marked with the dedicated community
Each External should be bound to some VPC IP Namespace; otherwise, prefix overlaps may occur.
```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: External
metadata:
  name: default--5835
spec:
  ipv4Namespace: # VPC IP Namespace
  inboundCommunity: # BGP Standard Community of routes from Edge devices
  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from the Fabric
```
External Attachment defines BGP Peering and traffic connectivity between a Border leaf and External. Attachments are bound to a Connection with type external and they specify an optional vlan that will be used to segregate particular Edge peering.
```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalAttachment
metadata:
  name: #
spec:
  connection: # Name of the Connection with type external
  external: # Name of the External to pick config
  neighbor:
    asn: # Edge device ASN
    ip: # IP address of the Edge device to peer with
  switch:
    ip: # IP address on the Border Leaf to set up BGP peering
    vlan: # VLAN (optional) ID to tag control and data traffic, use 0 for untagged
```
Several External Attachments can be configured for the same Connection, but for different VLANs.
To allow a specific VPC to have access to Edge devices, bind the VPC to a specific External object. To do so, define an External Peering object.
```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: # Name of the ExternalPeering
spec:
  permit:
    external:
      name: # External name
      prefixes: # List of prefixes (routes) allowed to be picked up from the External
        - # IPv4 prefix
    vpc:
      name: # VPC name
      subnets: # List of VPC subnet names allowed to have access to the External (Edge)
        - # Name of the subnet within the VPC
```
Prefixes is the list of subnets to permit from the External to the VPC. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one. Use 0.0.0.0/0 for any route, including the default route.
This example allows any IPv4 prefix that came from External:
```yaml
spec:
  permit:
    external:
      name: ###
      prefixes:
        - prefix: 0.0.0.0/0 # Any route will be allowed, including the default route
```
This example allows all prefixes that fall within the specified prefix, with any prefix length:
```yaml
spec:
  permit:
    external:
      name: ###
      prefixes:
        - prefix: 77.0.0.0/8 # Any route that belongs to the specified prefix is allowed (such as 77.0.0.0/8 or 77.1.2.0/24)
```
This example shows how to peer with the External object with name HedgeEdge, given a Fabric VPC with name vpc-1 on the Border Leaf switchBorder that has a cable connecting it to an Edge device on the port Ethernet42. Specifying vpc-1 is required to receive any prefixes advertised from the External.
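A sketch of the ExternalPeering object for that scenario, using the names from the description above; the subnet name default and the 0.0.0.0/0 prefix are assumptions for illustration:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: vpc-1--HedgeEdge
spec:
  permit:
    external:
      name: HedgeEdge
      prefixes:
        - prefix: 0.0.0.0/0 # Accept any prefix advertised by HedgeEdge
    vpc:
      name: vpc-1
      subnets:
        - default # VPC subnet(s) allowed to reach the External
```

The Connection from switchBorder port Ethernet42 to the Edge device would be referenced by a corresponding ExternalAttachment, as shown earlier in this section.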
"},{"location":"user-guide/external/#fabric-api-configuration","title":"Fabric API configuration","text":""},{"location":"user-guide/external/#external_1","title":"External","text":"
"},{"location":"user-guide/external/#example-edge-side-bgp-configuration-based-on-sonic-os","title":"Example Edge side BGP configuration based on SONiC OS","text":"
Warning
Hedgehog does not recommend using the following configuration for production. It is only provided as an example of Edge Peer configuration.
Interface configuration:
```
interface Ethernet2.100
 encapsulation dot1q vlan-id 100
 description switchBorder--Ethernet42
 no shutdown
 ip vrf forwarding VrfHedge
 ip address 100.100.0.6/24
```
```
route-map HedgeIn permit 10
 match community Hedgehog
!
route-map HedgeOut permit 10
 set community 65102:5000
!

bgp community-list standard HedgeIn permit 5000:65102
```
To provide monitoring of the most critical metrics from the switches managed by Hedgehog Fabric, several dashboards are available for use in Grafana deployments. Make sure that you've enabled metrics and log collection for the switches in the Fabric, as described in the Fabric Config section.
Grafana Node Exporter Full is an open-source Grafana dashboard that provides visualizations for monitoring Linux nodes. In this case, Node Exporter is used to track SONiC OS stats such as:
Memory/disks usage
CPU/System utilization
Networking stats (traffic that hits SONiC interfaces) ...
"},{"location":"user-guide/harvester/","title":"Using VPCs with Harvester","text":"
This section contains an example of how Hedgehog Fabric can be used with Harvester or any hypervisor on the servers connected to Fabric. It assumes that you have already installed Fabric and have some servers running Harvester attached to it.
You need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.
You can have multiple VPCs created and attached to the Connections to the servers to make them available to the VMs in Harvester or any other hypervisor.
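As a sketch, a dual-homed Harvester node could be described with a Server object and an MCLAG Connection like the following; all names and ports here are illustrative, and the Connection mirrors the MCLAG example from the Connections section:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Server
metadata:
  name: server-5 # Harvester node (illustrative name)
  namespace: default
spec:
  description: harvester-node-1
---
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-5--mclag--s5248-01--s5248-02
  namespace: default
spec:
  mclag:
    links: # Dual-homing the Harvester node to an MCLAG pair
      - server:
          port: server-5/enp2s1
        switch:
          port: s5248-01/Ethernet5
      - server:
          port: server-5/enp2s2
        switch:
          port: s5248-02/Ethernet5
```

VPCs attached to this Connection then become available as networks for the VMs running on the node.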
"},{"location":"user-guide/harvester/#configure-harvester","title":"Configure Harvester","text":""},{"location":"user-guide/harvester/#add-a-cluster-network","title":"Add a Cluster Network","text":"
From the \"Cluster Networks/Configs\" side menu, create a new Cluster Network.
Here is a cleaned-up version of what the CRD looks like:
This chapter gives an overview of the main features of Hedgehog Fabric and their usage.
"},{"location":"user-guide/profiles/","title":"Switch Profiles and Port Naming","text":""},{"location":"user-guide/profiles/#switch-profiles","title":"Switch Profiles","text":"
All supported switches have a SwitchProfile that defines the switch model, supported features, and available ports with supported configurations such as port group and speeds as well as port breakouts. SwitchProfiles available in-cluster or generated documentation can be found in the Reference section.
Each switch used in the wiring diagram should have a SwitchProfile referenced in the spec.profile of the Switch object.
Switch profile defines what features and ports are available on the switch. Based on the ports data in the profile, it's possible to set port speeds (for non-breakout and non-group ports), port group speeds and port breakout modes in the Switch object in the Fabric API.
<asic-or-chassis-number> is the ASIC or chassis number (usually only one, named 1, for most switches)
<port-number> is the port number on the ASIC or chassis, starting from 1
optional /<breakout> is the breakout number for the port, starting from 1, only for breakout ports and always consecutive numbers independent of the lanes allocation and other implementation details
optional .<subinterface> is the subinterface number for the port
Examples of port names:
M1 - management port
E1/1 - port 1 on the ASIC or chassis 1, usually a first port on the switch
E1/55/1 - first breakout port of the switch port 55 on the ASIC or chassis 1
Non-breakout and non-group ports. Would have a reference to the port profile with default and available speeds. Could be configured by setting the speed in the Switch object in the Fabric API:
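Such a port is configured via portSpeeds; this sketch mirrors the field shown in the Switch object example earlier in this guide:

```yaml
.spec:
  portSpeeds:
    E1/1: 25G
```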
Ports that belong to a port group, non-breakout and not directly configurable. Would have a reference to the port group, which in turn references the port profile with default and available speeds. Such a port can't be configured directly; the speed configuration is applied to the whole group in the Switch object in the Fabric API:
```yaml
.spec:
  portGroupSpeeds:
    "1": 10G
```
It'll set the speed of all ports in the group 1 to 10G, e.g. if the group 1 contains ports E1/1, E1/2, E1/3 and E1/4, all of them will be set to 10G speed.
Ports that are breakouts and non-group ports. Would have a reference to the port profile with default and available breakout modes. Could be configured by setting the breakout mode in the Switch object in the Fabric API:
```yaml
.spec:
  portBreakouts:
    E1/55: 4x25G
```
Configuring a port breakout mode will make \"breakout\" ports available for use in the wiring diagram. The breakout ports are named as E<asic-or-chassis-number>/<port-number>/<breakout>, e.g. E1/55/1, E1/55/2, E1/55/3, E1/55/4 for the example above. Omitting the breakout number is allowed for the first breakout port, e.g. E1/55 is the same as E1/55/1. The breakout ports are always consecutive numbers independent of the lanes allocation and other implementation details.
This section provides a brief overview of how to add or remove switches within the fabric using Hedgehog Fabric API, and how to manage connections between them.
Manipulating API objects is done with the assumption that target devices are correctly cabled and connected.
This article uses terms that can be found in the Hedgehog Concepts, the User Guide documentation, and the Fabric API reference.
"},{"location":"user-guide/shrink-expand/#add-a-switch-to-the-existing-fabric","title":"Add a switch to the existing fabric","text":"
In order to be added to the Hedgehog Fabric, a switch should have a corresponding Switch object. An example of how to define this object is available in the User Guide.
Note
If the Switch will be used in ESLAG or MCLAG groups, the appropriate groups should exist. Redundancy groups should be specified in the Switch object before creation.
After the Switch object has been created, you can define and create dedicated device Connections. The types of the connections may differ based on the Switch role given to the device. For more details, refer to Connections section.
Note
Switch devices should be booted in ONIE installation mode to install SONiC OS and configure the Fabric Agent.
Ensure the management port of the switch is connected to the fabric management network.
"},{"location":"user-guide/shrink-expand/#remove-a-switch-from-the-existing-fabric","title":"Remove a switch from the existing fabric","text":"
Before you decommission a switch from the Hedgehog Fabric, several preparation steps are necessary.
Warning
Currently the Wiring diagram used for initial deployment is saved in /var/lib/rancher/k3s/server/manifests/hh-wiring.yaml on the Control node. Fabric will sustain objects within the original wiring diagram. In order to remove any object, first remove the dedicated API objects from this file. It is recommended to reapply hh-wiring.yaml after changing its internals.
If the Switch is a Leaf switch (including Mixed and Border leaf configurations), remove all VPCAttachments bound to any of the switch's Connections.
If the Switch was used for ExternalPeering, remove all ExternalAttachment objects that are bound to the Connections of the Switch.
Remove all connections of the Switch.
Finally, remove the Switch and Agent objects.
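The removal sequence can be sketched with kubectl (object names below are hypothetical; list yours first with kubectl get vpcattachments, kubectl get connections, and so on):

```shell
# 1. Remove VPC attachments bound to the switch's connections
kubectl delete vpcattachment vpc-1-server-1--mclag--s5248-01--s5248-02

# 2. Remove the switch's connections
kubectl delete connection server-1--mclag--s5248-01--s5248-02

# 3. Finally, remove the Switch and Agent objects
kubectl delete switch s5248-01
kubectl delete agent s5248-01
```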
"},{"location":"user-guide/vpcs/","title":"VPCs and Namespaces","text":""},{"location":"user-guide/vpcs/#vpc","title":"VPC","text":"
A Virtual Private Cloud (VPC) is similar to a public cloud VPC. It provides an isolated private network with support for multiple subnets, each with user-defined VLANs and optional DHCP services.
apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPC\nmetadata:\n name: vpc-1\n namespace: default\nspec:\n ipv4Namespace: default # Limits which subnets the VPC can use to guarantee non-overlapping IPv4 ranges\n vlanNamespace: default # Limits which VLAN IDs the VPC can use to guarantee non-overlapping VLANs\n\n defaultIsolated: true # Sets default behavior for the current VPC subnets to be isolated\n defaultRestricted: true # Sets default behavior for the current VPC subnets to be restricted\n\n subnets:\n default: # Each subnet is named, \"default\" subnet isn't required, but actively used by CLI\n dhcp:\n enable: true # On-demand DHCP server\n range: # Optionally, start/end range could be specified, otherwise all available IPs are used\n start: 10.10.1.10\n end: 10.10.1.99\n options: # Optional, additional DHCP options to enable for DHCP server, only available when enable is true\n pxeURL: tftp://10.10.10.99/bootfilename # PXEURL (optional) to identify the PXE server to use to boot hosts; HTTP query strings are not supported\n dnsServers: # (optional) configure DNS servers\n - 1.1.1.1\n timeServers: # (optional) configure Time (NTP) Servers\n - 1.1.1.1\n interfaceMTU: 1500 # (optional) configure the MTU (default is 9036); doesn't affect the actual MTU of the switch interfaces\n subnet: 10.10.1.0/24 # User-defined subnet from ipv4 namespace\n gateway: 10.10.1.1 # User-defined gateway (optional, default is .1)\n vlan: 1001 # User-defined VLAN from VLAN namespace\n isolated: true # Makes subnet isolated from other subnets within the VPC (doesn't affect VPC peering)\n restricted: true # Causes all hosts in the subnet to be isolated from each other\n\n third-party-dhcp: # Another subnet\n dhcp:\n relay: 10.99.0.100/24 # Use third-party DHCP server (DHCP relay configuration), access to it could be enabled using StaticExternal connection\n subnet: \"10.10.2.0/24\"\n vlan: 1002\n\n another-subnet: # Minimal configuration is just a name, subnet and VLAN\n subnet: 
10.10.100.0/24\n vlan: 1100\n\n permit: # Defines which subnets of the current VPC can communicate with each other, applied on top of subnets \"isolated\" flag (doesn't affect VPC peering)\n - [subnet-1, subnet-2, subnet-3] # Subnets 1, 2 and 3 can communicate with each other\n - [subnet-4, subnet-5] # Possible to define multiple lists\n\n staticRoutes: # Optional, static routes to be added to the VPC\n - prefix: 10.100.0.0/24 # Destination prefix\n nextHops: # Next hop IP addresses\n - 10.200.0.0\n
"},{"location":"user-guide/vpcs/#isolated-and-restricted-subnets-permit-lists","title":"Isolated and restricted subnets, permit lists","text":"
Subnets can be isolated and restricted, with the ability to define permit lists to allow communication between specific isolated subnets. The permit list is applied on top of the isolated flag and doesn't affect VPC peering.
An isolated subnet has no connectivity with other subnets within the VPC, though communication can still be allowed by permit lists.
A restricted subnet is one in which all hosts are isolated from each other within the subnet.
A permit list is a list of sets: each set names subnets that are allowed to communicate with each other.
"},{"location":"user-guide/vpcs/#third-party-dhcp-server-configuration","title":"Third-party DHCP server configuration","text":"
If you use a third-party DHCP server, configured by setting spec.subnets.<subnet>.dhcp.relay, additional information is added to the DHCP packet forwarded to the DHCP server so that the VPC and subnet can be identified. This information is added under RelayAgentInfo (option 82) in the DHCP packet. The relay sets two suboptions in the packet:
VirtualSubnetSelection (suboption 151) is populated with the VRF which uniquely identifies a VPC on the Hedgehog Fabric and will be in VrfV<VPC-name> format, for example VrfVvpc-1 for a VPC named vpc-1 in the Fabric API.
CircuitID (suboption 1) identifies the VLAN which, together with the VRF (VPC) name, maps to a specific VPC subnet.
A VPCAttachment represents the assignment of a specific VPC subnet to a Connection object, meaning a binding between exact server ports and a VPC. It results in the VPC being available on the specified server port(s) on the subnet's VLAN.
A VPC can only be attached via a switch that belongs to the VLAN namespace used by the VPC.
apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCAttachment\nmetadata:\n name: vpc-1-server-1--mclag--s5248-01--s5248-02\n namespace: default\nspec:\n connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n subnet: vpc-1/default # VPC subnet name\n nativeVLAN: true # (Optional) if true, the port will be configured as a native VLAN port (untagged)\n
apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n name: vpc-1--vpc-2\n namespace: default\nspec:\n permit: # Defines a pair of VPCs to peer\n - vpc-1: {} # Meaning all subnets of two VPCs will be able to communicate with each other\n vpc-2: {} # See \"Subnet filtering\" for more advanced configuration\n
It's possible to specify which subnets of the peered VPCs can communicate with each other using the permit field.
apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n name: vpc-1--vpc-2\n namespace: default\nspec:\n permit: # subnet-1 and subnet-2 of vpc-1 can communicate with subnet-3 of vpc-2, and subnet-4 of vpc-1 can communicate with subnet-5 and subnet-6 of vpc-2\n - vpc-1:\n subnets: [subnet-1, subnet-2]\n vpc-2:\n subnets: [subnet-3]\n - vpc-1:\n subnets: [subnet-4]\n vpc-2:\n subnets: [subnet-5, subnet-6]\n
An IPv4Namespace defines a set of (non-overlapping) IPv4 address ranges available for use by VPC subnets. Each VPC belongs to a specific IPv4 namespace. Therefore, its subnet prefixes must be from that IPv4 namespace.
apiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n name: default\n namespace: default\nspec:\n subnets: # List of prefixes that VPCs can pick their subnets from\n - 10.10.0.0/16\n
A VLANNamespace defines a set of VLAN ranges available for attaching servers to switches. Each switch can belong to one or more disjoint VLANNamespaces.
apiVersion: wiring.githedgehog.com/v1beta1\nkind: VLANNamespace\nmetadata:\n name: default\n namespace: default\nspec:\n ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n - from: 1000\n to: 2999\n
"},{"location":"vlab/demo/","title":"Demo on VLAB","text":""},{"location":"vlab/demo/#goals","title":"Goals","text":"
The goal of this demo is to show how to create VPCs, attach servers to them, peer them, and test connectivity between the servers. Examples are based on the default VLAB topology.
You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.
The default topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using flag --fabric-mode collapsed-core (or -m collapsed-core) which only consists of 2 switches.
For more details on customizing topologies see the Running VLAB section.
In the default topology, the following Control Node and Switch VMs are created. The Control Node is connected to every switch; these links are omitted from the diagram for clarity:
"},{"location":"vlab/demo/#manual-vpc-creation","title":"Manual VPC creation","text":""},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"
You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the Control Node, or from outside the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs with a single subnet each (with a DHCP server enabled and an optional IP address range start defined) and to attach them to some of the test servers:
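The commands might look like the following sketch (flag names as in the Fabric CLI demo; connection names are from the default VLAB topology and may differ in your environment, so verify with --help and kubectl get connections):

```shell
kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10
kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10

kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02
kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02
```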
The VPC subnet should belong to an IPv4Namespace, the default one in the VLAB is 10.0.0.0/16:
core@control-1 ~ $ kubectl get ipns\nNAME SUBNETS AGE\ndefault [\"10.0.0.0/16\"] 5h14m\n
After you have created the VPCs and VPCAttachments, check the status of the agents to make sure that the requested configuration was applied to the switches:
In this example, the values in the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration has been applied.
"},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"
You can use hhfab vlab ssh on the host to SSH into the test servers and configure networking there. For example, for server-01 (MCLAG-attached to both leaf-01 and leaf-02) we need to configure a bond with a VLAN on top of it, while for server-05 (single-homed, unbundled, attached to leaf-03) we need to configure just a VLAN; both will get an IP address from the DHCP server. You can use the ip command to configure networking on the servers, or use hhnet, the little helper pre-installed by Fabricator on the test servers.
For server-01:
core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n valid_lft 86396sec preferred_lft 86396sec\n inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n valid_lft forever preferred_lft forever\n
And for server-02:
core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n valid_lft 86185sec preferred_lft 86185sec\n inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n valid_lft forever preferred_lft forever\n
"},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"
You can test connectivity between the servers before creating the VPC peering using the ping command:
core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
"},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"
To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:
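For example (VPC names as created earlier; verify flags with kubectl fabric vpc peer --help):

```shell
kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2
```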
Make sure to wait until the peering is applied to the switches using the kubectl get agents command. After that, you can test connectivity between the servers again:
core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n
If you delete the VPC peering by applying kubectl delete to the relevant object and wait for the agent to apply the configuration on the switches, you can observe that connectivity is lost again:
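The deletion can be sketched as follows (the object name matches the peering shown earlier):

```shell
kubectl delete vpcpeering vpc-1--vpc-2
```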
core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
You may see duplicate packets in the output of the ping command between some of the servers. This is expected behavior caused by limitations of the VLAB environment:
core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
"},{"location":"vlab/demo/#utility-based-vpc-creation","title":"Utility based VPC creation","text":""},{"location":"vlab/demo/#setup-vpcs","title":"Setup VPCs","text":"
hhfab vlab includes a utility to create VPCs in VLAB, available as a sub-command: hhfab vlab setup-vpcs.
NAME:\n hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them\n\nUSAGE:\n hhfab vlab setup-vpcs [command options]\n\nOPTIONS:\n --dns-servers value, --dns value [ --dns-servers value, --dns value ] DNS servers for VPCs advertised by DHCP\n --force-clenup, -f start with removing all existing VPCs and VPCAttachments (default: false)\n --help, -h show help\n --interface-mtu value, --mtu value interface MTU for VPCs advertised by DHCP (default: 0)\n --ipns value IPv4 namespace for VPCs (default: \"default\")\n --name value, -n value name of the VM or HW to access\n --servers-per-subnet value, --servers value number of servers per subnet (default: 1)\n --subnets-per-vpc value, --subnets value number of subnets per VPC (default: 1)\n --time-servers value, --ntp value [ --time-servers value, --ntp value ] Time servers for VPCs advertised by DHCP\n --vlanns value VLAN namespace for VPCs (default: \"default\")\n --wait-switches-ready, --wait wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)\n\n Global options:\n\n --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n --cache-dir DIR use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
hhfab vlab includes a utility to create VPC peerings in VLAB, available as a sub-command: hhfab vlab setup-peerings.
NAME:\n hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)\n\nUSAGE:\n Setup test scenario with VPC/External Peerings by specifying requests in the format described below.\n\n Example command:\n\n $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24\n\n Which will produce:\n 1. VPC peering between vpc-01 and vpc-02\n 2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border\n 3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted\n 4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route\n from external permitted as well any route that belongs to 22.22.22.0/24\n\n VPC Peerings:\n\n 1+2 -- VPC peering between vpc-01 and vpc-02\n demo-1+demo-2 -- VPC peering between demo-1 and demo-2\n 1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present\n 1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border\n 1+2:remote=border -- same as above\n\n External Peerings:\n\n 1~as5835 -- external peering for vpc-01 with External as5835\n 1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing\n default subnet and any route from external\n 1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and\n default route from external permitted\n 1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details\n 1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above\n\nOPTIONS:\n --help, -h show help\n --name value, -n value name of the VM or HW to access\n --wait-switches-ready, --wait wait for switches to be ready before before and after configuring peerings (default: true)\n\n Global 
options:\n\n --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n --cache-dir DIR use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
hhfab vlab includes a utility to test connectivity between servers inside VLAB, available as a sub-command: hhfab vlab test-connectivity.
NAME:\n hhfab vlab test-connectivity - test connectivity between all servers\n\nUSAGE:\n hhfab vlab test-connectivity [command options]\n\nOPTIONS:\n --curls value number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)\n --help, -h show help\n --iperfs value seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)\n --iperfs-speed value minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)\n --name value, -n value name of the VM or HW to access\n --pings value number of pings to send between each pair of servers (0 to disable) (default: 5)\n --wait-switches-ready, --wait wait for switches to be ready before testing connectivity (default: true)\n\n Global options:\n\n --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n --cache-dir DIR use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
"},{"location":"vlab/demo/#using-vpcs-with-overlapping-subnets","title":"Using VPCs with overlapping subnets","text":"
First, create a second IPv4Namespace with the same subnet as the default one:
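A sketch of such an object (the name ipns-2 is illustrative; the prefix matches the default VLAB IPv4Namespace shown earlier):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: ipns-2
  namespace: default
spec:
  subnets: # Same prefix as the default IPv4Namespace, but in a separate namespace
    - 10.0.0.0/16
```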
Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now we can create vpc-3 with the same subnet as vpc-1 (but in a different IPv4Namespace) and attach it to server-03:
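The commands might look like the following sketch (the IPv4Namespace name and the connection name are hypothetical; note the subnet identical to vpc-1's):

```shell
kubectl fabric vpc create --name vpc-3 --ipns ipns-2 --subnet 10.0.1.0/24 --vlan 1003 --dhcp --dhcp-start 10.0.1.10
kubectl fabric vpc attach --vpc-subnet vpc-3/default --connection server-03--unbundled--leaf-03
```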
At this point you can set up networking on server-03 the same way you did for server-01 and server-02 in the previous section. Once you have configured networking, server-01 and server-03 have IP addresses from the same subnet.
It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out Fabric and learn about its look and feel, API, and capabilities. It's not suitable for any data plane or performance testing, or for production use.
In the VLAB all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery, boot and installation process like on real hardware.
The hhfab CLI provides a special command, vlab, to manage the virtual labs. It allows you to run sets of virtual machines that simulate the Fabric infrastructure, including the control node, switches, and test servers, and it automatically runs the installer to get Fabric up and running.
You can find more information about getting hhfab in the download section.
Currently, it's only tested on Ubuntu 22.04 LTS, but it should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.
The following packages need to be installed: qemu-kvm, swtpm-tools, tpm2-tools, and socat. Docker is also required to log in to the OCI registry.
By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using flag --fabric-mode collapsed-core (or -m collapsed-core) which only consists of 2 switches.
You can calculate the system requirements based on the allocated resources to the VMs using the following table:
Device vCPU RAM Disk Control Node 6 6GB 100GB Test Server 2 768MB 10GB Switch 4 5GB 50GB
These numbers give approximately the following requirements for the default topologies:
Spine-Leaf: 38 vCPUs, 36352 MB, 410 GB disk
Collapsed Core: 22 vCPUs, 19456 MB, 240 GB disk
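The totals above can be recomputed from the per-VM table; a quick sanity check (VM counts inferred from the totals: 1 control node, 5 switches, and 6 test servers for Spine-Leaf; 1 control node, 2 switches, and 4 test servers for Collapsed Core):

```shell
# Spine-Leaf: 1 control + 5 switches + 6 servers
echo "$((6*1 + 4*5 + 2*6)) vCPUs, $((6144*1 + 5120*5 + 768*6)) MB, $((100*1 + 50*5 + 10*6)) GB"
# Collapsed Core: 1 control + 2 switches + 4 servers
echo "$((6*1 + 4*2 + 2*4)) vCPUs, $((6144*1 + 5120*2 + 768*4)) MB, $((100*1 + 50*2 + 10*4)) GB"
```

This yields 38 vCPUs, 36352 MB, 410 GB and 22 vCPUs, 19456 MB, 240 GB respectively, matching the figures above.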
Usually, none of the VMs will reach 100% utilization of the allocated resources, but as a rule of thumb you should make sure that the RAM and disk space allocated to all VMs is actually available.
First, initialize Fabricator by running hhfab init --dev. This command supports several customization options that are listed in the output of hhfab init --help.
By default, the command creates 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate the preceding topology, run hhfab vlab gen. You can also configure the number of spines, leaves, connections, and so on. For example, the flags --spines-count and --mclag-leafs-count allow you to set the number of spines and MCLAG leaves, respectively. For complete options, run hhfab vlab gen -h.
If a Collapsed Core topology is desired, after the hhfab init --dev step, edit the resulting fab.yaml file and change mode: spine-leaf to mode: collapsed-core to run a Collapsed Core topology with 2 MCLAG switches.
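The edit is a one-line change in fab.yaml; the surrounding structure sketched below is illustrative and may differ between Fabricator versions:

```yaml
spec:
  config:
    fabric:
      mode: collapsed-core # changed from spine-leaf
```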
Additionally, you can pass extra Fabric configuration items using flags on init command or by passing a configuration file. For more information, refer to the Fabric Configuration section.
Once you have initialized the VLAB, download the artifacts and build the installer using hhfab build. This command automatically downloads all required artifacts from the OCI registry and builds the installer and all other prerequisites for running the VLAB.
"},{"location":"vlab/running/#build-the-installer-and-start-vlab","title":"Build the Installer and Start VLAB","text":"
In VLAB, the build and run steps are combined into one command for simplicity: hhfab vlab up. For successive runs, use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. This command does not return; it runs as long as the VLAB is up, so that shutdown is a simple ctrl + c.
ubuntu@docs:~$ hhfab vlab up\n11:48:22 INF Hedgehog Fabricator version=v0.30.0\n11:48:22 INF Wiring hydrated successfully mode=if-not-present\n11:48:22 INF VLAB config created file=vlab/config.yaml\n11:48:22 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:22 INF Building installer control=control-1\n11:48:22 INF Adding recipe bin to installer control=control-1\n11:48:24 INF Adding k3s and tools to installer control=control-1\n11:48:25 INF Adding zot to installer control=control-1\n11:48:25 INF Adding cert-manager to installer control=control-1\n11:48:26 INF Adding config and included wiring to installer control=control-1\n11:48:26 INF Adding airgap artifacts to installer control=control-1\n11:48:36 INF Archiving installer path=/home/ubuntu/result/control-1-install.tgz control=control-1\n11:48:45 INF Creating ignition path=/home/ubuntu/result/control-1-install.ign control=control-1\n11:48:46 INF Taps and bridge are ready count=8\n11:48:46 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:46 INF Preparing new vm=control-1 type=control\n11:48:51 INF Preparing new vm=server-01 type=server\n11:48:52 INF Preparing new vm=server-02 type=server\n11:48:54 INF Preparing new vm=server-03 type=server\n11:48:55 INF Preparing new vm=server-04 type=server\n11:48:57 INF Preparing new vm=server-05 type=server\n11:48:58 INF Preparing new vm=server-06 type=server\n11:49:00 INF Preparing new vm=server-07 type=server\n11:49:01 INF Preparing new vm=server-08 type=server\n11:49:03 INF Preparing new vm=server-09 type=server\n11:49:04 INF Preparing new vm=server-10 type=server\n11:49:05 INF Preparing new vm=leaf-01 type=switch\n11:49:06 INF Preparing new vm=leaf-02 type=switch\n11:49:06 INF Preparing new vm=leaf-03 type=switch\n11:49:06 INF Preparing new vm=leaf-04 type=switch\n11:49:06 INF Preparing new vm=leaf-05 type=switch\n11:49:06 INF Preparing new vm=spine-01 type=switch\n11:49:06 INF Preparing new 
vm=spine-02 type=switch\n11:49:06 INF Starting VMs count=18 cpu=\"54 vCPUs\" ram=\"49664 MB\" disk=\"550 GB\"\n11:49:59 INF Uploading control install vm=control-1 type=control\n11:53:39 INF Running control install vm=control-1 type=control\n11:53:40 INF control-install: 01:53:39 INF Hedgehog Fabricator Recipe version=v0.30.0 vm=control-1\n11:53:40 INF control-install: 01:53:39 INF Running control node installation vm=control-1\n12:00:32 INF control-install: 02:00:31 INF Control node installation complete vm=control-1\n12:00:32 INF Control node is ready vm=control-1 type=control\n12:00:32 INF All VMs are ready\n
The message INF Control node is ready vm=control-1 type=control in the installer's output means that the installer has finished. After this line has been displayed, you can get into the control node and other VMs to watch the Fabric coming up and the switches getting provisioned. See Accessing the VLAB."},{"location":"vlab/running/#configuring-vlab-vms","title":"Configuring VLAB VMs","text":"
By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can enable connectivity using hhfab vlab up --restrict-servers=false to allow the test servers to access the Internet and the host. When you enable connectivity, VMs get a default route pointing to the host, which means that in case of VPC peering you need to configure the test server VMs to use the VPC attachment as the default route (or just for some specific subnets).
Fabricator creates default users and keys for you to log in to the control node and test servers, as well as to the SONiC Virtual Switches.
The default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access only to the switch CLI is op with password HHFab.Op!.
"},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"
The hhfab vlab command provides ssh and serial subcommands to access the VMs. Use ssh to get into the control node and test servers after the VMs have started. Use serial to get into the switch VMs while they are provisioning and installing the software; after the switches are installed, you can use ssh to get into them as well.
You can select the device you want to access or pass its name using the --vm flag.
ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190 and / toggles search\nSSH to VM:\n \ud83e\udd94 control-1\n server-01\n server-02\n server-03\n server-04\n server-05\n server-06\n leaf-01\n leaf-02\n leaf-03\n spine-01\n spine-02\n\n----------- VM Details ------------\nID: 0\nName: control-1\nReady: true\nBasedir: .hhfab/vlab-vms/control-1\n
On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. You can find information about the switches provisioning by running kubectl get agents -o wide. It usually takes about 10-15 minutes for the switches to get installed.
After the switches are provisioned, the command returns something like this:
The Heartbeat column shows how long ago the switch last sent a heartbeat to the control node. The Applied column shows how long ago the switch applied the configuration. AppliedG shows the generation of the configuration that has been applied, while CurrentG shows the generation the switch is supposed to run. Different values for AppliedG and CurrentG mean that the switch is still in the process of applying the configuration.
At that point the Fabric is ready, and you can use kubectl and kubectl fabric to manage it. You can find more about managing the Fabric in the Running Demo and User Guide sections.
"},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"
You can list the main Fabric objects by running kubectl get on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.
"}]}
{"location":"","title":"Introduction","text":"
The Hedgehog Open Network Fabric is an open networking platform that brings the user experience enjoyed by so many in the public cloud to private environments. It comes without vendor lock-in.
The Fabric is built around the concept of VPCs (Virtual Private Clouds), similar to public cloud offerings. It provides a multi-tenant API to define the user intent on network isolation and connectivity, which is automatically transformed into configuration for switches and software appliances.
You can read more about its concepts and architecture in the documentation.
You can find out how to download and try the Fabric on the self-hosted fully virtualized lab or on hardware.
The Hedgehog Open Network Fabric is an open-source network architecture that provides connectivity between virtual and physical workloads, and a way to achieve network isolation between different groups of workloads using standard BGP EVPN and VXLAN technology. The fabric provides a standard Kubernetes interface to manage the elements in the physical network, and a mechanism to configure virtual networks and define attachments to these virtual networks. The Hedgehog Fabric isolates different groups of workloads by placing them in different virtual networks called VPCs. To achieve this, it defines abstractions starting from the physical network, where a set of Connection objects defines how a physical server on the network connects to a physical switch on the fabric.
A collapsed core topology is just a pair of switches connected in an MCLAG configuration, with no other network elements. All workloads attach to these two switches.
The leaves in this setup are configured as an MCLAG pair, and servers can either be connected to both switches as an MCLAG port channel or as orphan ports connected to only one switch. Both leaves peer with external networks using BGP and act as gateways for the workloads attached to them. The underlay configuration in a collapsed core is very simple, making it ideal for very small deployments.
A spine-leaf topology is a standard Clos network with workloads attaching to leaf switches and the spines providing connectivity between different leaves.
This kind of topology is useful for bigger deployments and provides all the advantages of a typical Clos network. The underlay network is established using eBGP, where each leaf has a separate ASN and peers with all spines in the network. RFC 7938 was used as the reference for establishing the underlay network.
The overlay network runs on top of the underlay network to create virtual networks. It isolates control and data plane traffic between different virtual networks and the underlay network. Virtualization is achieved in the Hedgehog Fabric by encapsulating workload traffic in VXLAN tunnels that are sourced and terminated on the leaf switches in the network. The fabric uses BGP EVPN/VXLAN to enable the creation and management of virtual networks on top of the physical one, and it supports multiple virtual networks over the same underlay network to provide multi-tenancy. Each virtual network in the Hedgehog Fabric is identified by a VPC. The following subsections contain a high-level overview of how VPCs and their associated objects are implemented in the Hedgehog Fabric.
The previous subsections have described what a VPC is, and how to attach workloads to a specific VPC. The following bullet points describe how VPCs are actually implemented in the network to ensure a private view of the network.
Each VPC is modeled as a VRF on each switch where there are VPC attachments defined for this VPC. The VRF is allocated its own VNI. The VRF is local to each switch and the VNI is global for the entire fabric. By mapping the VRF to a VNI and configuring an EVPN instance in each VRF, a shared L3VNI is established across the entire fabric. All VRFs participating in this VNI can freely communicate with each other without the need for a policy. A VLAN is allocated for each VRF which functions as an IRB VLAN for the VRF.
The VRF created on each switch corresponding to a VPC configures a BGP instance with EVPN to advertise its locally attached subnets and import routes from its peered VPCs. The BGP instance in the tenant VRFs does not establish neighbor relationships and is purely used to advertise locally attached routes into the VPC (all VRFs with the same L3VNI) across leaves in the network.
A VPC can have multiple subnets. Each subnet in the VPC is modeled as a VLAN on the switch. The VLAN is only locally significant and a given subnet might have different VLANs on different leaves on the network. A globally significant VNI is assigned to each subnet. This VNI is used to extend the subnet across different leaves in the network and provides a view of single stretched L2 domain if the applications need it.
The Hedgehog Fabric has a built-in DHCP server which automatically assigns IP addresses to each workload depending on the VPC it belongs to. This is achieved by configuring a DHCP relay on each of the server-facing VLANs. The DHCP server is reachable through the underlay network and is shared by all VPCs in the fabric. The built-in DHCP server can identify the source VPC of a request and assign an IP address from the pool allocated to that VPC at creation.
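To make the subnet and DHCP behavior above concrete, here is a sketch of what a VPC definition with multiple subnets might look like. The API group/version and exact field names are illustrative assumptions; consult the Fabric API reference for the authoritative schema.

```yaml
# Illustrative sketch only - API version and field names are assumptions.
apiVersion: vpc.githedgehog.com/v1alpha2
kind: VPC
metadata:
  name: vpc-1
spec:
  subnets:
    frontend:
      subnet: 10.0.1.0/24
      vlan: 1001
      dhcp:
        enable: true          # served by the fabric's built-in DHCP server
        range:
          start: 10.0.1.10
          end: 10.0.1.99
    backend:                  # a second broadcast domain in the same VPC
      subnet: 10.0.2.0/24
      vlan: 1002
```

Each subnet maps to a locally significant VLAN on the leaves and a globally significant VNI, as described above.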
A VPC by default cannot communicate with anything outside the VPC; specific peering rules are required to allow communication with external networks or with other VPCs.
To enable communication between two different VPCs, one needs to configure a VPC peering policy. The Hedgehog Fabric supports two different peering modes.
Local Peering: A local peering directly imports routes from another VPC locally. This is achieved by a simple route import from the peer VPC. If there are no locally attached workloads for the peer VPC, the fabric automatically creates a stub VPC for peering and imports routes from it. This allows VPCs to peer with each other without the need for a dedicated peering leaf. If local peering is configured for a pair of VPCs that both have locally attached workloads, the fabric automatically allocates a pair of ports on the switch to route traffic between these VRFs using static routes. This is required because of limitations in the underlying platform; the net result is that the bandwidth between these two VPCs is limited by the bandwidth of the loopback interfaces allocated on the switch. Traffic between the peered VPCs does not leave the switch that connects them.
Remote Peering: Remote peering is implemented using one or more dedicated peering switches used as a rendezvous point for the two VPCs in the fabric. The set of switches used for peering is determined by configuration in the peering policy. When a remote peering policy is applied for a pair of VPCs, the VRFs corresponding to these VPCs on the peering switch advertise default routes into their specific VRFs, identified by the L3VNI. All traffic that does not belong to the VPCs is forwarded to the peering switch, which has routes to the other VPCs and forwards the traffic from there. The bandwidth limitation of the local peering solution does not apply here, as the bandwidth between the two VPCs is determined by the fabric's cross-section bandwidth.
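A peering policy for either mode might be expressed along these lines; the field names and the switch-group name `border` are illustrative assumptions, not the definitive schema.

```yaml
# Illustrative sketch only - field names are assumptions.
apiVersion: vpc.githedgehog.com/v1alpha2
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
spec:
  permit:            # allow traffic between the two VPCs
    - vpc-1: {}
      vpc-2: {}
  # For remote peering, pin the peering to a group of border/peering
  # switches; omitting this would make the peering local.
  remote: border
```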
Hedgehog Open Network Fabric is built on top of Kubernetes and uses the Kubernetes API to manage its resources. This means that all user-facing APIs are Kubernetes Custom Resources (CRDs), so you can use standard Kubernetes tools to manage Fabric resources.
Hedgehog Fabric consists of the following components:
Fabricator - special tool to install and configure Fabric, or to run virtual labs
Control Node - one or more Kubernetes nodes in a single cluster running Fabric software:
Fabric Controller - main control plane component that manages Fabric resources
Fabric Kubectl plugin (Fabric CLI) - kubectl plugin to manage Fabric resources in an easy way
Fabric Agent - runs on every switch and manages switch configuration
All infrastructure is represented as a set of Fabric resources (Kubernetes CRDs) called the Wiring Diagram. With this representation, Fabric defines switches, servers, control nodes, external systems, and the connections between them in a single place, then uses these definitions to deploy and manage the whole infrastructure. On top of the Wiring Diagram, Fabric provides a set of APIs to manage the VPCs, the connections between them, and the connections between VPCs and External systems.
VPC: Virtual Private Cloud, similar to a public cloud VPC, provides an isolated private network for the resources, with support for multiple subnets, each with user-defined VLANs and optional DHCP service
VPCAttachment: represents the assignment of a specific VPC subnet to a Connection object, i.e. the binding of an exact server port to a VPC
VPCPeering: enables VPC-to-VPC connectivity (could be Local where VPCs are used or Remote peering on the border/mixed leaves)
External API
External: definition of the \"external system\" to peer with (could be one or multiple devices such as edge/provider routers)
ExternalAttachment: configuration for a specific switch (using Connection object) describing how it connects to an external system
ExternalPeering: provides VPC with External connectivity by exposing specific VPC subnets to the external system and allowing inbound routes from it
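Tying the External API objects together, an ExternalPeering might look roughly like this; the field names, VPC name, and external name are illustrative assumptions rather than the definitive schema.

```yaml
# Illustrative sketch only - field names are assumptions.
apiVersion: vpc.githedgehog.com/v1alpha2
kind: ExternalPeering
metadata:
  name: vpc-1--default
spec:
  permit:
    vpc:
      name: vpc-1
      subnets:
        - frontend            # VPC subnets exposed to the external system
    external:
      name: default
      prefixes:
        - prefix: 0.0.0.0/0   # external prefixes reachable from the VPC
```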
This documentation is built using MkDocs with multiple plugins enabled. It's based on Markdown; you can find a basic syntax overview here.
In order to contribute to the documentation, you'll need to have Git and Docker installed on your machine, as well as any editor of your choice, preferably one supporting Markdown preview. You can run the preview server using the following command:
```
make serve
```
Now you can open a continuously updated preview of your edits in your browser at http://127.0.0.1:8000. Pages will be automatically updated while you're editing.
Additionally, you can run

```
make build
```

to make sure that your changes build correctly and don't break the documentation.
If you want to quickly edit any page in the documentation, you can press the Edit this page icon at the top right of the page. It opens the page in the GitHub editor, where you can edit it and create a pull request with your changes.
Please, never push to the master or release/* branches directly. Always create a pull request and wait for the review.
Each pull request will be automatically built and a preview will be deployed. You can find the link to the preview in the comments on the pull request.
Documentation is organized in per-release branches:
master - ongoing development, not released yet, referenced as dev version in the documentation
release/alpha-1, release/alpha-2 - alpha releases, referenced as the alpha-1/alpha-2 versions in the documentation; if patches are released for alpha-1, they'll be merged into the release/alpha-1 branch
release/v1.0 - first stable release, referenced as v1.0 version in the documentation, if patches (e.g. v1.0.1) released for v1.0, they'll be merged into release/v1.0 branch
Latest release branch is referenced as latest version in the documentation and will be used by default when you open the documentation.
All documentation files are located in docs directory. Each file is a Markdown file with .md extension. You can create subdirectories to organize your files. Each directory can have a .pages file that overrides the default navigation order and titles.
For example, top-level .pages in this repository looks like this:
Where you can add pages by file name, like index.md, and the page title will be taken from the file (the first line starting with #). Additionally, you can reference a whole directory to create a nested section in the navigation. You can also set custom titles by using the : separator, like Wiring Diagram: wiring, where Wiring Diagram is the title and wiring is a file/directory name.
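As an illustration of that syntax (not the actual contents of this repository's .pages file), such a file might look like:

```yaml
nav:
  - index.md                 # title taken from the file's first '# ...' line
  - getting-started          # a directory becomes a nested section
  - Wiring Diagram: wiring   # custom title for the 'wiring' directory
```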
You can find abbreviations in includes/abbreviations.md file. You can add various abbreviations there and all usages of the defined words in the documentation will get a highlight.
For example, we have the following in includes/abbreviations.md:

```
*[HHFab]: Hedgehog Fabricator - a tool for building Hedgehog Fabric
```
It'll highlight all usages of HHFab in the documentation and show a tooltip with the definition like this: HHFab.
We're using the MkDocs Material theme with multiple extensions enabled. You can find a detailed reference here, but below are some of the most useful ones.
To view code for examples, please, check the source code of this page.
Text can be deleted and replacement text added. This can also be combined into a single operation. Highlighting is also possible, and comments can be added inline.
Formatting can also be applied to blocks by putting the opening and closing tags on separate lines and adding new lines between the tags and the content.
Admonitions, also known as call-outs, are an excellent choice for including side content without significantly interrupting the document flow. Different types of admonitions are available, each with a unique icon and color. Details can be found here.
Lorem ipsum
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla et euismod nulla. Curabitur feugiat, tortor non consequat finibus, justo purus auctor massa, nec semper lorem quam in massa.
Simple code block with line nums and highlighted lines:
bubble_sort.py
```python
def bubble_sort(items):
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
```
## Tables

| Method | Description     |
| ------ | --------------- |
| GET    | Fetch resource  |
| PUT    | Update resource |
| DELETE | Delete resource |

## Diagrams
You can directly include Mermaid diagrams in your Markdown files. Details can be found here.
```mermaid
graph LR
  A[Start] --> B{Error?};
  B -->|Yes| C[Hmm...];
  C --> D[Debug];
  D --> B;
  B ---->|No| E[Yay!];
```
```mermaid
sequenceDiagram
  autonumber
  Alice->>John: Hello John, how are you?
  loop Healthcheck
    John->>John: Fight against hypochondria
  end
  Note right of John: Rational thoughts!
  John-->>Alice: Great!
  John->>Bob: How about you?
  Bob-->>John: Jolly good!
```
The main entry point for the software is the Hedgehog Fabricator CLI, named hhfab. It is a command-line tool that allows you to build an installer for the Hedgehog Fabric, upgrade an existing installation, or run the Virtual LAB.
Prior to General Availability, access to the full software is limited and requires a Design Partner Agreement. Please submit a ticket with the request using the Hedgehog Support Portal.
After that you will be provided with the credentials to access the software on GitHub Package. In order to use the software, log in to the registry using the following command:
Currently, hhfab is supported on Linux x86/arm64 (tested on Ubuntu 22.04) and macOS x86/arm64 for building installers/upgraders. It may work on Windows WSL2 (with Ubuntu), but this is not tested. For running VLAB, only Linux x86 is currently supported.
All software is published to the GitHub Packages OCI registry, including binaries, container images, and Helm charts. Download the latest stable hhfab binary from GitHub Packages using the following command; it requires ORAS to be installed (see below):
```
curl -fsSL https://i.hhdev.io/hhfab | bash
```
Or download a specific version (e.g. beta-1) using the following command:
Use the VERSION environment variable to specify the version of the software to download. By default, the latest stable release is downloaded. You can pick a specific release series (e.g. alpha-2) or a specific release.
The download script requires ORAS to be installed. ORAS is used to download the binary from the OCI registry and can be installed using the following command:
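Combining the download script above with the VERSION environment variable, fetching a specific release might look like this; the version tag shown is only an example:

```
# Download a specific release series or release (version tag is illustrative)
VERSION=beta-1 curl -fsSL https://i.hhdev.io/hhfab | bash
```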
A wiring diagram is a YAML file that is a digital representation of your network. You can find more YAML-level details in the User Guide sections on switch features and port naming and in the API reference. It's mandatory for all switches to reference a SwitchProfile in the spec.profile of the Switch object. Only port naming defined by switch profiles can be used in the wiring diagram; NOS (or any other) port names aren't supported.
In the meantime, to have a look at a working wiring diagram for the Hedgehog Fabric, run the sample generator, which produces working wiring diagrams:
```
ubuntu@sl-dev:~$ hhfab sample -h

NAME:
   hhfab sample - generate sample wiring diagram

USAGE:
   hhfab sample command [command options]

COMMANDS:
   spine-leaf, sl      generate sample spine-leaf wiring diagram
   collapsed-core, cc  generate sample collapsed-core wiring diagram
   help, h             Shows a list of commands or help for one command

OPTIONS:
   --help, -h  show help
```
Or you can generate a wiring diagram for a VLAB environment, with flags to customize the number of switches, links, servers, etc.:
```
ubuntu@sl-dev:~$ hhfab vlab gen --help
NAME:
   hhfab vlab generate - generate VLAB wiring diagram

USAGE:
   hhfab vlab generate [command options]

OPTIONS:
   --bundled-servers value      number of bundled servers to generate for switches (only for one of the second switch in the redundancy group or orphan switch) (default: 1)
   --eslag-leaf-groups value    eslag leaf groups (comma separated list of number of ESLAG switches in each group, should be 2-4 per group, e.g. 2,4,2 for 3 groups with 2, 4 and 2 switches)
   --eslag-servers value        number of ESLAG servers to generate for ESLAG switches (default: 2)
   --fabric-links-count value   number of fabric links if fabric mode is spine-leaf (default: 0)
   --help, -h                   show help
   --mclag-leafs-count value    number of mclag leafs (should be even) (default: 0)
   --mclag-peer-links value     number of mclag peer links for each mclag leaf (default: 0)
   --mclag-servers value        number of MCLAG servers to generate for MCLAG switches (default: 2)
   --mclag-session-links value  number of mclag session links for each mclag leaf (default: 0)
   --no-switches                do not generate any switches (default: false)
   --orphan-leafs-count value   number of orphan leafs (default: 0)
   --spines-count value         number of spines if fabric mode is spine-leaf (default: 0)
   --unbundled-servers value    number of unbundled servers to generate for switches (only for one of the first switch in the redundancy group or orphan switch) (default: 1)
   --vpc-loopbacks value        number of vpc loopbacks for each switch (default: 0)
```
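Using the flags from the help output above, a small spine-leaf VLAB topology could be generated like this; the particular counts chosen here are just an example:

```
hhfab vlab gen --spines-count 2 --fabric-links-count 2 \
  --mclag-leafs-count 2 --mclag-session-links 2 --mclag-peer-links 2 \
  --orphan-leafs-count 1 --vpc-loopbacks 2
```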
A VPC allows for isolation at layer 3. This is the main building block for users when creating their architecture. Hosts inside of a VPC belong to the same broadcast domain and can communicate with each other; if desired, a single VPC can be configured with multiple broadcast domains. The hosts inside of a VPC will likely need to connect to other VPCs or the outside world. To communicate between two VPCs, a peering needs to be created. A VPC can be a logical separation of workloads; by separating these workloads, additional controls become available. The logical separation doesn't have to be the traditional database, web, and compute layers: it could be development teams who need isolation, tenants inside of an office building, or any separation that allows for better control of the network. Once your VPCs are decided, the rest of the fabric will come together. With the VPCs decided, traffic can be prioritized, security can be put into place, and the wiring can begin. The fabric allows a VPC to span more than one switch, which provides great flexibility.
A server connection is a connection used to connect servers to the fabric. The fabric configures the server-facing port according to the type of the connection (MCLAG, Bundle, etc.). The configuration of the actual server needs to be done by the server administrator. The server port names are not validated by the fabric and are used as metadata to identify the connection. A server connection can be one of:
Unbundled - A single cable connecting switch to server.
Bundled - Two or more cables going to a single switch, a LAG or similar.
MCLAG - Two cables going to two different switches, also called dual homing. The switches will need a fabric link between them.
ESLAG - Two to four cables going to different switches, also called multi-homing. If four links are used, there need to be four switches connected to a single server with four NIC ports.
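As a sketch of how such a server connection might be expressed in the wiring diagram, here is an MCLAG example; the API version, field names, and port names are illustrative assumptions (port names must follow the switch profile naming, as noted above):

```yaml
# Illustrative sketch only - field and port names are assumptions.
apiVersion: wiring.githedgehog.com/v1alpha2
kind: Connection
metadata:
  name: server-01--mclag--leaf-01--leaf-02
spec:
  mclag:
    links:
      - server:
          port: server-01/enp2s1   # server port names are metadata only
        switch:
          port: leaf-01/E1/1       # must match the switch profile naming
      - server:
          port: server-01/enp2s2
        switch:
          port: leaf-02/E1/1
```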
When there is no dedicated border/peering switch available in the fabric, you can use local VPC peering. This kind of peering sends traffic between the two VPCs on a switch where either of the VPCs has workloads attached. Due to a limitation in the SONiC network operating system, the bandwidth of this kind of peering is limited by the number of VPC loopbacks you selected while initializing the fabric. Traffic between the VPCs uses the loopback interface; the bandwidth of this connection equals the bandwidth of the port used in the loopback.
The dotted line in the diagram shows the traffic flow for local peering. The traffic originates in VPC 2, travels to the switch, travels out the first loopback port, into the second loopback port, and finally out the port destined for VPC 1.

## Remote VPC Peering
Remote peering is used when you need a high-bandwidth connection between the VPCs; you dedicate a switch to the peering traffic. This is either done on a border leaf or on a switch where neither of the VPCs is present. This kind of peering allows peer traffic between different VPCs at line rate and is only limited by fabric bandwidth. Remote peering introduces a few additional hops in the traffic and may cause a small increase in latency.
The dotted line in the diagram shows the traffic flow for remote peering. The traffic could take a different path because of ECMP. It is important to note that Leaf 3 cannot have any servers from VPC 1 or VPC 2 on it, but it can have a different VPC attached to it.

## VPC Loopback
A VPC loopback is a physical cable with both ends plugged into the same switch, suggested, but not required, to be adjacent ports. This loopback allows two different VPCs to communicate with each other. It is needed due to a Broadcom limitation.
The fab.yaml file is the configuration file for the fabric. It supplies the configuration of the users, their credentials, logging, telemetry, and other non-wiring-related settings. The fab.yaml file is composed of multiple YAML documents inside a single file. Per the YAML spec, three hyphens (---) on a single line separate the end of one document from the beginning of the next. There are two YAML documents in the fab.yaml file. For more information about how to use hhfab init, run hhfab init --help.
## Typical HHFAB workflows

### HHFAB for VLAB
For a VLAB user, the typical workflow with hhfab is:
hhfab init --dev
hhfab vlab gen
hhfab vlab up
The above workflow will get a user up and running with a spine-leaf VLAB.
### HHFAB for Physical Machines
It's possible to start from scratch:
hhfab init (see the different flags to customize the initial configuration)
After the above workflow a user will have a .img file suitable for installing the control node, then bringing up the switches which comprise the fabric.
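Putting those steps together, a minimal sequence looks like the following; the wiring file name is an example, and each step is described in more detail later in this guide:

```
hhfab init --wiring wiring.yaml   # generate fab.yaml; edit it for your environment
hhfab validate                    # check the wiring diagram and configuration
hhfab build                       # produce the installer image in $CWD/result/
```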
## Fab.yaml

### Configure control node and switch users
Configuring control node and switch users is done either by passing --default-password-hash to hhfab init or by editing the resulting fab.yaml file emitted by hhfab init. You can specify users to be configured on the control node(s) and switches in the following format:
```yaml
spec:
  config:
    control:
      defaultUser: # user 'core' on all control nodes
        password: "hashhashhashhashhash" # password hash
        authorizedKeys:
          - "ssh-ed25519 SecREKeyJumblE"

    fabric:
      mode: spine-leaf # "spine-leaf" or "collapsed-core"

      defaultSwitchUsers:
        admin: # at least one user with name 'admin' and role 'admin'
          role: admin
          #password: "$5$8nAYPGcl4..." # password hash
          #authorizedKeys: # optional SSH authorized keys
          #  - "ssh-ed25519 AAAAC3Nza..."
        op: # optional read-only user
          role: operator
          #password: "$5$8nAYPGcl4..." # password hash
          #authorizedKeys: # optional SSH authorized keys
          #  - "ssh-ed25519 AAAAC3Nza..."
```
The control node user is always named core.
The operator role gives read-only access to the sonic-cli command on the switches. To avoid conflicts, do not use the following usernames: operator, hhagent, netops.
### NTP and DHCP
The control node uses public NTP servers from Cloudflare and Google by default. The control node runs a DHCP server on the management network. See the example file.
The control node is the host that manages all the switches, runs k3s, and serves images. This is the YAML document that configures the control node:
```yaml
apiVersion: fabricator.githedgehog.com/v1beta1
kind: ControlNode
metadata:
  name: control-1
  namespace: fab
spec:
  bootstrap:
    disk: "/dev/sda" # disk to install OS on, e.g. "sda" or "nvme0n1"
  external:
    interface: enp2s0 # interface for external
    ip: dhcp # IP address for external interface
  management:
    interface: enp2s1 # interface for management

# Currently only one ControlNode is supported
```
The management interface is for the control node to manage the fabric switches, not for end-user management of the control node. For end-user management of the control node, specify the external interface name.

### Forward switch metrics and logs
There is an option to enable Grafana Alloy on all switches to forward metrics and logs to the configured targets using the Prometheus Remote-Write API and the Loki API. If those APIs are available from the control node(s), but not from the switches, it's possible to enable an HTTP proxy on the control node(s) that will be used by the Grafana Alloy instances running on the switches to access the configured targets. This can be done by passing --control-proxy=true to hhfab init.
Metrics include port speeds, counters, errors, operational status, transceivers, fans, power supplies, temperature sensors, BGP neighbors, LLDP neighbors, and more. Logs include agent logs.
Configuring the exporters and targets is currently only possible by editing the fab.yaml configuration file. An example configuration is provided below:
```yaml
spec:
  config:
    ...
    defaultAlloyConfig:
      agentScrapeIntervalSeconds: 120
      unixScrapeIntervalSeconds: 120
      unixExporterEnabled: true
      lokiTargets:
        grafana_cloud: # target name, multiple targets can be configured
          basicAuth: # optional
            password: "<password>"
            username: "<username>"
          labels: # labels to be added to all logs
            env: env-1
          url: https://logs-prod-021.grafana.net/loki/api/v1/push
          useControlProxy: true # if the Loki API is not available from the switches directly, use the Control Node as a proxy
      prometheusTargets:
        grafana_cloud: # target name, multiple targets can be configured
          basicAuth: # optional
            password: "<password>"
            username: "<username>"
          labels: # labels to be added to all metrics
            env: env-1
          sendIntervalSeconds: 120
          url: https://prometheus-prod-36-prod-us-west-0.grafana.net/api/prom/push
          useControlProxy: true # if the Prometheus API is not available from the switches directly, use the Control Node as a proxy
      unixExporterCollectors: # list of node-exporter collectors to enable, https://grafana.com/docs/alloy/latest/reference/components/prometheus.exporter.unix/#collectors-list
        - cpu
        - filesystem
        - loadavg
        - meminfo
      collectSyslogEnabled: true # collect /var/log/syslog on switches and forward to the lokiTargets
```
For additional options, see the AlloyConfig struct in Fabric repo.
### Complete Example File
```yaml
apiVersion: fabricator.githedgehog.com/v1beta1
kind: Fabricator
metadata:
  name: default
  namespace: fab
spec:
  config:
    control:
      tlsSAN: # IPs and DNS names to access API
        - "customer.site.io"

      ntpServers:
        - time.cloudflare.com
        - time1.google.com

      defaultUser: # user 'core' on all control nodes
        password: "hash..." # password hash
        authorizedKeys:
          - "ssh-ed25519 hash..."

    fabric:
      mode: spine-leaf # "spine-leaf" or "collapsed-core"
      includeONIE: true
      defaultSwitchUsers:
        admin: # at least one user with name 'admin' and role 'admin'
          role: admin
          password: "hash..." # password hash
          authorizedKeys:
            - "ssh-ed25519 hash..."
        op: # optional read-only user
          role: operator
          password: "hash..." # password hash
          authorizedKeys:
            - "ssh-ed25519 hash..."

    defaultAlloyConfig:
      agentScrapeIntervalSeconds: 120
      unixScrapeIntervalSeconds: 120
      unixExporterEnabled: true
      collectSyslogEnabled: true
      lokiTargets:
        lab:
          url: http://url.io:3100/loki/api/v1/push
          useControlProxy: true
          labels:
            descriptive: name
      prometheusTargets:
        lab:
          url: http://url.io:9100/api/v1/push
          useControlProxy: true
          labels:
            descriptive: name
          sendIntervalSeconds: 120

---
apiVersion: fabricator.githedgehog.com/v1beta1
kind: ControlNode
metadata:
  name: control-1
  namespace: fab
spec:
  bootstrap:
    disk: "/dev/sda" # disk to install OS on, e.g. "sda" or "nvme0n1"
  external:
    interface: eno2 # interface for external
    ip: dhcp # IP address for external interface
  management:
    interface: eno1

# Currently only one ControlNode is supported
```
A machine with access to the Internet to use Fabricator and build the installer, with at least 8 GB RAM and 25 GB of disk space
A 16 GB USB flash drive, if you are not using virtual media
A machine to function as the Fabric Control Node (see System Requirements), as well as IPMI access to it to install the OS
A management switch with at least one 10GbE port is recommended
Enough Supported Switches for your Fabric
## Overview of Install Process
This section is dedicated to the Hedgehog Fabric installation on bare-metal control node(s) and switches, their preparation and configuration. To install the VLAB see VLAB Overview.
Download and install hhfab following instructions from the Download section.
The main steps to install Fabric are:
Install hhfab on the machines with access to the Internet
Prepare Wiring Diagram
Select Fabric Configuration
Build Control Node configuration and installer
Install Control Node
Insert USB with control-os image into Fabric Control Node
Boot the node off the USB to initiate the installation
Prepare Management Network
Connect management switch to Fabric control node
Connect 1GbE Management port of switches to management switch
Prepare supported switches
Ensure switch serial numbers and / or first management interface MAC addresses are recorded in wiring diagram
Boot them into ONIE Install Mode to have them automatically provisioned
## Build Control Node configuration and Installer
Hedgehog has created a command-line utility, called hhfab, that helps generate the wiring diagram and fabric configuration, validate the supplied configurations, and generate an installation image (.img) suitable for writing to a USB flash drive or mounting via IPMI virtual media. The first hhfab command to run is hhfab init. This generates the main configuration file, fab.yaml. fab.yaml is responsible for almost every configuration of the fabric, with the exception of the wiring. Each command and subcommand has a usage message; simply supply the -h flag to your command or subcommand to see the available options. For example, hhfab vlab -h and hhfab vlab gen -h.
"},{"location":"install-upgrade/overview/#hhfab-commands-to-make-a-bootable-image","title":"HHFAB commands to make a bootable image","text":"
hhfab init --wiring wiring-lab.yaml
The init command generates a fab.yaml file; edit fab.yaml to suit your needs
ensure the correct boot disk (e.g. /dev/sda) and control node NIC names are supplied
hhfab validate
hhfab build
The installer for the fabric is generated in $CWD/result/. This installation image is named control-1-install-usb.img and is 7.5 GB in size. Once the image is created, you can write it to a USB drive, or mount it via virtual media.
"},{"location":"install-upgrade/overview/#write-usb-image-to-disk","title":"Write USB Image to Disk","text":"
This will erase data on the USB disk.
Insert the USB to your machine
Identify the path to your USB stick, for example: /dev/sdc
Issue the command to write the image to the USB drive
There are utilities that assist with this process, such as Etcher.
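On Linux, the image can also be written with the standard dd utility. The sketch below is illustrative and not from the official docs: TARGET is set to a stand-in file so the sketch is safe to run as-is; for a real write, replace it with the device path identified above (for example /dev/sdc) and run the dd command with sudo.

```shell
# Hedged sketch: write the installer image to a USB drive with dd.
IMG=control-1-install-usb.img
TARGET=/tmp/usb-target.img   # stand-in file; replace with /dev/sdX for a real write

# Create a small stand-in image if the real installer image is not present,
# so the sketch can be exercised without the 7.5 GB build output.
[ -f "$IMG" ] || head -c 1048576 /dev/zero > "$IMG"

# bs=4M copies in large blocks for speed; conv=fsync flushes all data to the
# target before dd exits, so the drive is safe to remove when it returns.
dd if="$IMG" of="$TARGET" bs=4M conv=fsync
```

Double-check the device path with lsblk before a real write: dd overwrites the target without confirmation.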
"},{"location":"install-upgrade/overview/#install-control-node","title":"Install Control Node","text":"
The control node should be given a stable IP address, either via a static DHCP lease or static assignment.
Configure the server to use UEFI boot without secure boot
Attach the image to the server, either by inserting it via USB or by attaching it via virtual media
Boot off the attached media; the installation process is automated
Once the control node has booted, it logs in automatically and begins the installation process
Optionally use journalctl -f -u flatcar-install.service to monitor progress
Once the installation is complete, the system automatically reboots.
After the system has shut down, but before the boot process reaches the operating system, remove the USB image from the system. Removal during the UEFI boot screen is acceptable.
Upon booting into the freshly installed system, the fabric installation will automatically begin
If the insecure --dev flag was passed to hhfab init, the password for the core user is HHFab.Admin!. Two users are created on the switches: admin, which has administrator privileges and password HHFab.Admin!, and op, a read-only, non-sudo user with password HHFab.Op!.
Optionally this can be monitored with journalctl -f -u fabric-install.service
The install is complete when the log emits "Control Node installation complete". Additionally, systemctl status will show inactive (dead), indicating that the executable has finished.
The control node is dual-homed. It has a 10GbE interface that connects to the management network. The other link, called external in the fab.yaml file, is for the customer to access the control node. The management network is used for command and control of the switches that comprise the fabric. This management network can be a simple broadcast domain with layer 2 connectivity. The control node runs a DHCP server and a small HTTP server. The management network is not accessible to machines or devices not associated with the fabric.
Now that the install has finished, you can start interacting with the Fabric using kubectl, kubectl fabric and k9s, all pre-installed as part of the Control Node installer.
At this stage, the fabric hands out DHCP addresses to the switches via the management network. Optionally, you can monitor this process by going through the following steps: - enter k9s at the command prompt - use the arrow keys to select the pod named fabric-boot - the logs of the pod will be displayed, showing the DHCP lease process - to see the switches, type :switches (like a vim command) into k9s - use the heartbeat column of the switches screen to verify the connection between switch and controller
"},{"location":"install-upgrade/requirements/","title":"System Requirements","text":""},{"location":"install-upgrade/requirements/#out-of-band-management-network","title":"Out of Band Management Network","text":"
In order to provision and manage the switches that comprise the fabric, an out-of-band management switch must also be installed. This network is used exclusively by the control node and the fabric switches; no other access is permitted. This switch (or switches) is not managed by the fabric. It is recommended that this switch have at least one 10GbE port, connected to the control node.
"},{"location":"install-upgrade/requirements/#device-participating-in-the-hedgehog-fabric-eg-switch","title":"Device participating in the Hedgehog Fabric (e.g. switch)","text":"
(Future) Each participating device is part of the Kubernetes cluster, so it runs Kubernetes kubelet
Additionally, it runs the Hedgehog Fabric Agent that controls the device's configuration
The following resources should be available on a device to run in the Hedgehog Fabric (in addition to the usage of other software such as SONiC):
Minimal: 1 CPU, 1 GB RAM, 5 GB disk. Recommended: 2 CPUs, 1.5 GB RAM, 10 GB disk."},{"location":"install-upgrade/supported-devices/","title":"Supported Devices","text":"
You can find detailed information about devices in the Switch Profiles Catalog and in the User Guide switch features and port naming.
Package v1beta1 contains API Schema definitions for the agent v1beta1 API group. This is the internal API group for the switch and control node agents. Not intended to be modified by the user.
Field Description up, down, testing"},{"location":"reference/api/#agent","title":"Agent","text":"
Agent is an internal API object used by the controller to pass all relevant information to the agent running on a specific switch in order to fully configure it and manage its lifecycle. It is not intended to be used directly by users. Spec of the object isn't user-editable, it is managed by the controller. Status of the object is updated by the agent and is used by the controller to track the state of the agent and the switch it is running on. Name of the Agent object is the same as the name of the switch it is running on and it's created in the same namespace as the Switch object.
Field Description Default Validation apiVersion string agent.githedgehog.com/v1beta1kind string Agentmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. status AgentStatus Status is the observed state of the Agent"},{"location":"reference/api/#agentstatus","title":"AgentStatus","text":"
AgentStatus defines the observed state of the agent running on a specific switch and includes information about the switch itself as well as the state of the agent and applied configuration.
Appears in: - Agent
Field Description Default Validation version string Current running agent version installID string ID of the agent installation, used to track NOS re-installs runID string ID of the agent run, used to track NOS reboots lastHeartbeat Time Time of the last heartbeat from the agent lastAttemptTime Time Time of the last attempt to apply configuration lastAttemptGen integer Generation of the last attempt to apply configuration lastAppliedTime Time Time of the last successful configuration application lastAppliedGen integer Generation of the last successful configuration application state SwitchState Detailed switch state updated with each heartbeat conditions Condition array Conditions of the agent, includes readiness marker for use with kubectl wait"},{"location":"reference/api/#bgpmessages","title":"BGPMessages","text":"
Appears in: - SwitchStateBGPNeighbor
Field Description Default Validation received BGPMessagesCounters sent BGPMessagesCounters"},{"location":"reference/api/#bgpmessagescounters","title":"BGPMessagesCounters","text":"
Appears in: - BGPMessages
Field Description Default Validation capability integer keepalive integer notification integer open integer routeRefresh integer update integer"},{"location":"reference/api/#bgpneighborsessionstate","title":"BGPNeighborSessionState","text":"
Underlying type: string
Appears in: - SwitchStateBGPNeighbor
Field Description idle, connect, active, openSent, openConfirm, established"},{"location":"reference/api/#bgppeertype","title":"BGPPeerType","text":"
Underlying type: string
Appears in: - SwitchStateBGPNeighbor
Field Description internal, external"},{"location":"reference/api/#operstatus","title":"OperStatus","text":"
Underlying type: string
Appears in: - SwitchStateInterface
Field Description up, down, testing, unknown, dormant, notPresent, lowerLayerDown"},{"location":"reference/api/#switchstate","title":"SwitchState","text":"
Appears in: - AgentStatus
Field Description Default Validation nos SwitchStateNOS Information about the switch and NOS interfaces object (keys:string, values:SwitchStateInterface) Switch interfaces state (incl. physical, management and port channels) breakouts object (keys:string, values:SwitchStateBreakout) Breakout ports state (port -> breakout state) bgpNeighbors object (keys:string, values:map[string]SwitchStateBGPNeighbor) State of all BGP neighbors (VRF -> neighbor address -> state) platform SwitchStatePlatform State of the switch platform (fans, PSUs, sensors) criticalResources SwitchStateCRM State of the critical resources (ACLs, routes, etc.)"},{"location":"reference/api/#switchstatebgpneighbor","title":"SwitchStateBGPNeighbor","text":"
Appears in: - SwitchState
Field Description Default Validation connectionsDropped integer enabled boolean establishedTransitions integer lastEstablished Time lastRead Time lastResetReason string lastResetTime Time lastWrite Time localAS integer messages BGPMessages peerAS integer peerGroup string peerPort integer peerType BGPPeerType remoteRouterID string sessionState BGPNeighborSessionState shutdownMessage string prefixes object (keys:string, values:SwitchStateBGPNeighborPrefixes)"},{"location":"reference/api/#switchstatebgpneighborprefixes","title":"SwitchStateBGPNeighborPrefixes","text":"
Appears in: - SwitchStateBGPNeighbor
Field Description Default Validation received integer receivedPrePolicy integer sent integer"},{"location":"reference/api/#switchstatebreakout","title":"SwitchStateBreakout","text":"
Appears in: - SwitchState
Field Description Default Validation mode string nosMembers string array status string"},{"location":"reference/api/#switchstatecrm","title":"SwitchStateCRM","text":"
Appears in: - SwitchState
Field Description Default Validation aclStats SwitchStateCRMACLStats stats SwitchStateCRMStats"},{"location":"reference/api/#switchstatecrmacldetails","title":"SwitchStateCRMACLDetails","text":"
Field Description Default Validation lag SwitchStateCRMACLDetails port SwitchStateCRMACLDetails rif SwitchStateCRMACLDetails switch SwitchStateCRMACLDetails vlan SwitchStateCRMACLDetails"},{"location":"reference/api/#switchstatecrmaclstats","title":"SwitchStateCRMACLStats","text":"
Appears in: - SwitchStateCRM
Field Description Default Validation egress SwitchStateCRMACLInfo ingress SwitchStateCRMACLInfo"},{"location":"reference/api/#switchstatecrmstats","title":"SwitchStateCRMStats","text":"
Field Description Default Validation chassisID string systemName string systemDescription string portID string portDescription string manufacturer string model string serialNumber string"},{"location":"reference/api/#switchstatenos","title":"SwitchStateNOS","text":"
SwitchStateNOS contains information about the switch and NOS received from the switch itself by the agent
Appears in: - SwitchState
Field Description Default Validation asicVersion string ASIC name, such as \"broadcom\" or \"vs\" buildCommit string NOS build commit buildDate string NOS build date builtBy string NOS build user configDbVersion string NOS config DB version, such as \"version_4_2_1\" distributionVersion string Distribution version, such as \"Debian 10.13\" hardwareVersion string Hardware version, such as \"X01\" hwskuVersion string Hwsku version, such as \"DellEMC-S5248f-P-25G-DPB\" kernelVersion string Kernel version, such as \"5.10.0-21-amd64\" mfgName string Manufacturer name, such as \"Dell EMC\" platformName string Platform name, such as \"x86_64-dellemc_s5248f_c3538-r0\" productDescription string NOS product description, such as \"Enterprise SONiC Distribution by Broadcom - Enterprise Base package\" productVersion string NOS product version, empty for Broadcom SONiC serialNumber string Switch serial number softwareVersion string NOS software version, such as \"4.2.0-Enterprise_Base\" uptime string Switch uptime, such as \"21:21:27 up 1 day, 23:26, 0 users, load average: 1.92, 1.99, 2.00 \""},{"location":"reference/api/#switchstateplatform","title":"SwitchStatePlatform","text":"
Appears in: - SwitchState
Field Description Default Validation fans object (keys:string, values:SwitchStatePlatformFan) psus object (keys:string, values:SwitchStatePlatformPSU) temperature object (keys:string, values:SwitchStatePlatformTemperature)"},{"location":"reference/api/#switchstateplatformfan","title":"SwitchStatePlatformFan","text":"
Appears in: - SwitchStatePlatform
Field Description Default Validation direction string speed float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformpsu","title":"SwitchStatePlatformPSU","text":"
Appears in: - SwitchStatePlatform
Field Description Default Validation inputCurrent float inputPower float inputVoltage float outputCurrent float outputPower float outputVoltage float presence boolean status boolean"},{"location":"reference/api/#switchstateplatformtemperature","title":"SwitchStatePlatformTemperature","text":"
Appears in: - SwitchStatePlatform
Field Description Default Validation temperature float alarms string highThreshold float criticalHighThreshold float lowThreshold float criticalLowThreshold float"},{"location":"reference/api/#switchstatetransceiver","title":"SwitchStateTransceiver","text":"
Package v1beta1 contains API Schema definitions for the dhcp v1beta1 API group. It is the primary internal API group for the Hedgehog DHCP server configuration and for storing leases, as well as making them available to the end user through the API. Not intended to be modified by the user.
DHCPAllocated is a single allocated IP with expiry time and hostname from DHCP requests, it's effectively a DHCP lease
Appears in: - DHCPSubnetStatus
Field Description Default Validation ip string Allocated IP address expiry Time Expiry time of the lease hostname string Hostname from DHCP request"},{"location":"reference/api/#dhcpsubnet","title":"DHCPSubnet","text":"
DHCPSubnet is the configuration (spec) for the Hedgehog DHCP server and storage for the leases (status). It is primarily an internal API group, but it makes allocated IP / lease information available to the end user through the API. Not intended to be modified by the user.
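For illustration, a DHCPSubnet object read back from the API might look like the sketch below. The object name and namespace are hypothetical, and the addresses reuse the examples from the field descriptions; this object is managed by the Fabric and shown here for reading only.

```yaml
apiVersion: dhcp.githedgehog.com/v1beta1
kind: DHCPSubnet
metadata:
  name: vpc-0--default   # hypothetical object name
spec:
  subnet: vpc-0/default  # full VPC subnet name, including the VPC name
  cidrBlock: 10.10.10.0/24
  gateway: 10.10.10.1
  startIP: 10.10.10.10
  endIP: 10.10.10.99
  vrf: VrfVvpc-1         # VRF name as it appears on the switch
  circuitID: Vlan1000    # VLAN as it appears on the switch
```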
Field Description Default Validation apiVersion string dhcp.githedgehog.com/v1beta1kind string DHCPSubnetmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec DHCPSubnetSpec Spec is the desired state of the DHCPSubnet status DHCPSubnetStatus Status is the observed state of the DHCPSubnet"},{"location":"reference/api/#dhcpsubnetspec","title":"DHCPSubnetSpec","text":"
DHCPSubnetSpec defines the desired state of DHCPSubnet
Appears in: - DHCPSubnet
Field Description Default Validation subnet string Full VPC subnet name (including VPC name), such as \"vpc-0/default\" cidrBlock string CIDR block to use for VPC subnet, such as \"10.10.10.0/24\" gateway string Gateway, such as 10.10.10.1 startIP string Start IP from the CIDRBlock to allocate IPs, such as 10.10.10.10 endIP string End IP from the CIDRBlock to allocate IPs, such as 10.10.10.99 vrf string VRF name to identify specific VPC (will be added to DHCP packets by DHCP relay in suboption 151), such as \"VrfVvpc-1\" as it's named on switch circuitID string VLAN ID to identify specific subnet within the VPC, such as \"Vlan1000\" as it's named on switch pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option. defaultURL string DefaultURL (optional) is the option 114 \"default-url\" to be sent to the clients"},{"location":"reference/api/#dhcpsubnetstatus","title":"DHCPSubnetStatus","text":"
DHCPSubnetStatus defines the observed state of DHCPSubnet
Appears in: - DHCPSubnet
Field Description Default Validation allocated object (keys:string, values:DHCPAllocated) Allocated is a map of allocated IPs with expiry time and hostname from DHCP requests"},{"location":"reference/api/#vpcgithedgehogcomv1beta1","title":"vpc.githedgehog.com/v1beta1","text":"
Package v1beta1 contains API Schema definitions for the vpc v1beta1 API group. It is a public API group for the VPC and External APIs. Intended to be used by the user.
External object represents an external system connected to the Fabric and available to the specific IPv4Namespace. Users can peer with the external system by specifying the name of the External object, without needing to worry about the details of how the external system is attached to the Fabric.
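A minimal External object might look like the following sketch; the object name is hypothetical, and the community values reuse the examples from the ExternalSpec field descriptions:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: External
metadata:
  name: external-1              # hypothetical name
spec:
  ipv4Namespace: default        # IPv4Namespace this External belongs to
  inboundCommunity: 65102:5000  # filter routes coming from the external system
  outboundCommunity: 50000:50001  # stamped on all outbound routes
```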
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1kind string Externalmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalSpec Spec is the desired state of the External status ExternalStatus Status is the observed state of the External"},{"location":"reference/api/#externalattachment","title":"ExternalAttachment","text":"
ExternalAttachment is a definition of how a specific switch is connected to an external system (External object). Effectively it represents a BGP peering between the switch and the external system, including all needed configuration.
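Assembling the ExternalAttachment fields documented below, a sketch might look like this; all names, VLAN, ASN, and addresses are hypothetical placeholders:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalAttachment
metadata:
  name: external-1--switch-1      # hypothetical name
spec:
  external: external-1            # External object this attachment belongs to
  connection: switch-1--external-1  # hypothetical Connection object name
  switch:
    vlan: 100                     # VLAN for the subinterface (0 if untagged)
    ip: 172.30.50.2               # IP of the subinterface on the switch port
  neighbor:
    asn: 65100                    # ASN of the BGP neighbor
    ip: 172.30.50.3               # IP of the BGP neighbor to peer with
```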
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1kind string ExternalAttachmentmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalAttachmentSpec Spec is the desired state of the ExternalAttachment status ExternalAttachmentStatus Status is the observed state of the ExternalAttachment"},{"location":"reference/api/#externalattachmentneighbor","title":"ExternalAttachmentNeighbor","text":"
ExternalAttachmentNeighbor defines the BGP neighbor configuration for the external attachment
Appears in: - ExternalAttachmentSpec
Field Description Default Validation asn integer ASN is the ASN of the BGP neighbor ip string IP is the IP address of the BGP neighbor to peer with"},{"location":"reference/api/#externalattachmentspec","title":"ExternalAttachmentSpec","text":"
ExternalAttachmentSpec defines the desired state of ExternalAttachment
Appears in: - ExternalAttachment
Field Description Default Validation external string External is the name of the External object this attachment belongs to connection string Connection is the name of the Connection object this attachment belongs to (essentially the name of the switch/port) switch ExternalAttachmentSwitch Switch is the switch port configuration for the external attachment neighbor ExternalAttachmentNeighbor Neighbor is the BGP neighbor configuration for the external attachment"},{"location":"reference/api/#externalattachmentstatus","title":"ExternalAttachmentStatus","text":"
ExternalAttachmentStatus defines the observed state of ExternalAttachment
ExternalAttachmentSwitch defines the switch port configuration for the external attachment
Appears in: - ExternalAttachmentSpec
Field Description Default Validation vlan integer VLAN (optional) is the VLAN ID used for the subinterface on a switch port specified in the connection, set to 0 if no VLAN is used ip string IP is the IP address of the subinterface on a switch port specified in the connection"},{"location":"reference/api/#externalpeering","title":"ExternalPeering","text":"
ExternalPeering is the Schema for the externalpeerings API
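Putting the ExternalPeeringSpec fields together, a sketch of a peering between a VPC and an External might look like this (names are hypothetical; the 0.0.0.0/0 prefix reuses the example from the field description):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: vpc-1--external-1   # hypothetical name
spec:
  permit:
    vpc:
      name: vpc-1
      subnets:
        - default           # VPC subnets advertised to the External
    external:
      name: external-1
      prefixes:
        - prefix: 0.0.0.0/0 # permit any route, including the default route
```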
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1kind string ExternalPeeringmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ExternalPeeringSpec Spec is the desired state of the ExternalPeering status ExternalPeeringStatus Status is the observed state of the ExternalPeering"},{"location":"reference/api/#externalpeeringspec","title":"ExternalPeeringSpec","text":"
ExternalPeeringSpec defines the desired state of ExternalPeering
Appears in: - ExternalPeering
Field Description Default Validation permit ExternalPeeringSpecPermit Permit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit"},{"location":"reference/api/#externalpeeringspecexternal","title":"ExternalPeeringSpecExternal","text":"
ExternalPeeringSpecExternal defines the External-side of the configuration to peer with
Appears in: - ExternalPeeringSpecPermit
Field Description Default Validation name string Name is the name of the External to peer with prefixes ExternalPeeringSpecPrefix array Prefixes is the list of prefixes to permit from the External to the VPC"},{"location":"reference/api/#externalpeeringspecpermit","title":"ExternalPeeringSpecPermit","text":"
ExternalPeeringSpecPermit defines the peering policy - which VPC and External to peer with and which subnets/prefixes to permit
Appears in: - ExternalPeeringSpec
Field Description Default Validation vpc ExternalPeeringSpecVPC VPC is the VPC-side of the configuration to peer with external ExternalPeeringSpecExternal External is the External-side of the configuration to peer with"},{"location":"reference/api/#externalpeeringspecprefix","title":"ExternalPeeringSpecPrefix","text":"
ExternalPeeringSpecPrefix defines the prefix to permit from the External to the VPC
Appears in: - ExternalPeeringSpecExternal
Field Description Default Validation prefix string Prefix is the subnet to permit from the External to the VPC, e.g. 0.0.0.0/0 for any route including the default route. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one."},{"location":"reference/api/#externalpeeringspecvpc","title":"ExternalPeeringSpecVPC","text":"
ExternalPeeringSpecVPC defines the VPC-side of the configuration to peer with
Appears in: - ExternalPeeringSpecPermit
Field Description Default Validation name string Name is the name of the VPC to peer with subnets string array Subnets is the list of subnets to advertise from VPC to the External"},{"location":"reference/api/#externalpeeringstatus","title":"ExternalPeeringStatus","text":"
ExternalPeeringStatus defines the observed state of ExternalPeering
ExternalSpec describes the IPv4 namespace the External belongs to and the inbound/outbound communities which are used to filter routes from/to the external system.
Appears in: - External
Field Description Default Validation ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this External belongs to inboundCommunity string InboundCommunity is the inbound community to filter routes from the external system (e.g. 65102:5000) outboundCommunity string OutboundCommunity is the outbound community that all outbound routes will be stamped with (e.g. 50000:50001)"},{"location":"reference/api/#externalstatus","title":"ExternalStatus","text":"
ExternalStatus defines the observed state of External
IPv4Namespace represents a namespace for VPC subnet allocation. All VPC subnets within a single IPv4Namespace are non-overlapping. Users can create multiple IPv4Namespaces to allocate the same VPC subnets in different namespaces.
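A minimal IPv4Namespace sketch, assuming a single aggregate block to carve VPC subnets from (the block itself is a hypothetical example):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: default
spec:
  subnets:
    - 10.10.0.0/16  # VPC subnets are allocated from this range
```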
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1kind string IPv4Namespacemetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec IPv4NamespaceSpec Spec is the desired state of the IPv4Namespace status IPv4NamespaceStatus Status is the observed state of the IPv4Namespace"},{"location":"reference/api/#ipv4namespacespec","title":"IPv4NamespaceSpec","text":"
IPv4NamespaceSpec defines the desired state of IPv4Namespace
Appears in: - IPv4Namespace
Field Description Default Validation subnets string array Subnets is the list of subnets to allocate VPC subnets from; they must not overlap with each other or with Fabric reserved subnets MaxItems: 20 MinItems: 1"},{"location":"reference/api/#ipv4namespacestatus","title":"IPv4NamespaceStatus","text":"
IPv4NamespaceStatus defines the observed state of IPv4Namespace
A VPC is a Virtual Private Cloud; similar to a public cloud VPC, it provides an isolated private network for the resources, with support for multiple subnets, each with user-provided VLANs and on-demand DHCP.
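Combining the VPCSpec, VPCSubnet, VPCDHCP, and VPCStaticRoute fields documented below, a VPC with one DHCP-enabled subnet and a static route might look like the following sketch; the name, VLAN, and addresses are illustrative (the route values reuse the examples from the field descriptions):

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
spec:
  ipv4Namespace: default   # defaults to "default" if omitted
  vlanNamespace: default   # defaults to "default" if omitted
  subnets:
    default:
      subnet: 10.0.0.0/24  # must belong to the IPv4Namespace
      gateway: 10.0.0.1    # first IP of the subnet is used if omitted
      vlan: 1000           # must belong to the VLANNamespace
      dhcp:
        enable: true
        range:
          start: 10.0.0.10
          end: 10.0.0.99
  staticRoutes:
    - prefix: 10.42.0.0/24
      nextHops:
        - 10.99.0.0
```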
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1kind string VPCmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCSpec Spec is the desired state of the VPC status VPCStatus Status is the observed state of the VPC"},{"location":"reference/api/#vpcattachment","title":"VPCAttachment","text":"
VPCAttachment is the Schema for the vpcattachments API
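A VPCAttachment sketch built from the spec fields below; the object and Connection names are hypothetical placeholders:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  name: vpc-1--server-1    # hypothetical name
spec:
  subnet: vpc-1/default    # full name of the VPC subnet to attach
  connection: server-1--conn-1  # hypothetical Connection object name
  nativeVLAN: false        # set true to attach the subnet untagged
```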
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1kind string VPCAttachmentmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCAttachmentSpec Spec is the desired state of the VPCAttachment status VPCAttachmentStatus Status is the observed state of the VPCAttachment"},{"location":"reference/api/#vpcattachmentspec","title":"VPCAttachmentSpec","text":"
VPCAttachmentSpec defines the desired state of VPCAttachment
Appears in: - VPCAttachment
Field Description Default Validation subnet string Subnet is the full name of the VPC subnet to attach to, such as \"vpc-1/default\" connection string Connection is the name of the connection to attach to the VPC nativeVLAN boolean NativeVLAN is the flag to indicate if the native VLAN should be used for attaching the VPC subnet"},{"location":"reference/api/#vpcattachmentstatus","title":"VPCAttachmentStatus","text":"
VPCAttachmentStatus defines the observed state of VPCAttachment
VPCDHCP defines the on-demand DHCP configuration for the subnet
Appears in: - VPCSubnet
Field Description Default Validation relay string Relay is the DHCP relay IP address, if specified, DHCP server will be disabled enable boolean Enable enables DHCP server for the subnet range VPCDHCPRange Range (optional) is the DHCP range for the subnet if DHCP server is enabled options VPCDHCPOptions Options (optional) is the DHCP options for the subnet if DHCP server is enabled"},{"location":"reference/api/#vpcdhcpoptions","title":"VPCDHCPOptions","text":"
VPCDHCPOptions defines the DHCP options for the subnet if DHCP server is enabled
Appears in: - VPCDHCP
Field Description Default Validation pxeURL string PXEURL (optional) to identify the pxe server to use to boot hosts connected to this segment such as http://10.10.10.99/bootfilename or tftp://10.10.10.99/bootfilename, http query strings are not supported dnsServers string array DNSservers (optional) to configure Domain Name Servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} timeServers string array TimeServers (optional) NTP server addresses to configure for time servers for this particular segment such as: 10.10.10.1, 10.10.10.2 Optional: {} interfaceMTU integer InterfaceMTU (optional) is the MTU setting that the dhcp server will send to the clients. It is dependent on the client to honor this option."},{"location":"reference/api/#vpcdhcprange","title":"VPCDHCPRange","text":"
VPCDHCPRange defines the DHCP range for the subnet if DHCP server is enabled
Appears in: - VPCDHCP
Field Description Default Validation start string Start is the start IP address of the DHCP range end string End is the end IP address of the DHCP range"},{"location":"reference/api/#vpcpeer","title":"VPCPeer","text":"
Appears in: - VPCPeeringSpec
Field Description Default Validation subnets string array Subnets is the list of subnets to advertise from current VPC to the peer VPC MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeering","title":"VPCPeering","text":"
VPCPeering represents a peering between two VPCs with corresponding filtering rules. Minimal example of the VPC peering showing vpc-1 to vpc-2 peering with all subnets allowed:
spec:
  permit:
    - vpc-1: {}
      vpc-2: {}
Field Description Default Validation apiVersion string vpc.githedgehog.com/v1beta1kind string VPCPeeringmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VPCPeeringSpec Spec is the desired state of the VPCPeering status VPCPeeringStatus Status is the observed state of the VPCPeering"},{"location":"reference/api/#vpcpeeringspec","title":"VPCPeeringSpec","text":"
VPCPeeringSpec defines the desired state of VPCPeering
Appears in: - VPCPeering
Field Description Default Validation remote string permit map[string]VPCPeer array Permit defines a list of the peering policies - which VPC subnets will have access to the peer VPC subnets. MaxItems: 10 MinItems: 1"},{"location":"reference/api/#vpcpeeringstatus","title":"VPCPeeringStatus","text":"
VPCPeeringStatus defines the observed state of VPCPeering
VPCSpec defines the desired state of VPC. At least one subnet is required.
Appears in: - VPC
Field Description Default Validation subnets object (keys:string, values:VPCSubnet) Subnets is the list of VPC subnets to configure ipv4Namespace string IPv4Namespace is the name of the IPv4Namespace this VPC belongs to (if not specified, "default" is used) vlanNamespace string VLANNamespace is the name of the VLANNamespace this VPC belongs to (if not specified, "default" is used) defaultIsolated boolean DefaultIsolated sets default behavior for isolated mode for the subnets (disabled by default) defaultRestricted boolean DefaultRestricted sets default behavior for restricted mode for the subnets (disabled by default) permit string array array Permit defines a list of the access policies between the subnets within the VPC - each policy is a list of subnets that have access to each other. It's applied on top of the subnet isolation flag: if a subnet isn't isolated, it isn't required to be in a permit list, while if it is marked as isolated, it must be in a permit list to have access to other subnets. staticRoutes VPCStaticRoute array StaticRoutes is the list of additional static routes for the VPC"},{"location":"reference/api/#vpcstaticroute","title":"VPCStaticRoute","text":"
VPCStaticRoute defines the static route for the VPC
Appears in: - VPCSpec
Field Description Default Validation prefix string Prefix for the static route (mandatory), e.g. 10.42.0.0/24 nextHops string array NextHops for the static route (at least one is required), e.g. 10.99.0.0"},{"location":"reference/api/#vpcstatus","title":"VPCStatus","text":"
Field Description Default Validation subnet string Subnet is the subnet CIDR block, such as \"10.0.0.0/24\", should belong to the IPv4Namespace and be unique within the namespace gateway string Gateway (optional) for the subnet, if not specified, the first IP (e.g. 10.0.0.1) in the subnet is used as the gateway dhcp VPCDHCP DHCP is the on-demand DHCP configuration for the subnet vlan integer VLAN is the VLAN ID for the subnet, should belong to the VLANNamespace and be unique within the namespace isolated boolean Isolated is the flag to enable isolated mode for the subnet which means no access to and from the other subnets within the VPC restricted boolean Restricted is the flag to enable restricted mode for the subnet which means no access between hosts within the subnet itself"},{"location":"reference/api/#wiringgithedgehogcomv1beta1","title":"wiring.githedgehog.com/v1beta1","text":"
Package v1beta1 contains API Schema definitions for the wiring v1beta1 API group. It is a public API group mainly for the underlay definition, including Switches, Servers, and the wiring between them. Intended to be used by the user.
Field Description Default Validation port string Port defines the full name of the switch port in the format of "device/port", such as "spine-1/Ethernet1". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object."},{"location":"reference/api/#connbundled","title":"ConnBundled","text":"
ConnBundled defines the bundled connection (port channel, single server to a single switch with multiple links)
Appears in: - ConnectionSpec
Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#conneslag","title":"ConnESLAG","text":"
ConnESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links)
Appears in: - ConnectionSpec
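A hedged sketch of an ESLAG Connection between one server and two switches, using the fields defined in this section. The object name, server NIC names, and switch ports are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-1--eslag--leaf-1--leaf-2
spec:
  eslag:
    links:                       # at least 2 links (MinItems: 2)
      - server:
          port: server-1/enp2s1  # hypothetical server NIC name
        switch:
          port: leaf-1/Ethernet1
      - server:
          port: server-1/enp2s2
        switch:
          port: leaf-2/Ethernet1
    fallback: false              # optionally mark one link as LACP fallback
```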
Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is the optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connexternal","title":"ConnExternal","text":"
ConnExternal defines the external connection (single switch to a single external device with a single link)
Appears in: - ConnectionSpec
Field Description Default Validation link ConnExternalLink Link is the external connection link"},{"location":"reference/api/#connexternallink","title":"ConnExternalLink","text":"
ConnExternalLink defines the external connection link
Appears in: - ConnExternal
Field Description Default Validation switch BasePortName"},{"location":"reference/api/#connfabric","title":"ConnFabric","text":"
ConnFabric defines the fabric connection (single spine to a single leaf with at least one link)
Appears in: - ConnectionSpec
Field Description Default Validation links FabricLink array Links is the list of spine-to-leaf links MinItems: 1"},{"location":"reference/api/#connfabriclinkswitch","title":"ConnFabricLinkSwitch","text":"
ConnFabricLinkSwitch defines the switch side of the fabric link
Appears in: - FabricLink
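A hedged sketch of a fabric Connection between a spine and a leaf, combining the switch-side fields from this section with the spine/leaf link structure. Port names and point-to-point addressing are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: spine-1--fabric--leaf-1
spec:
  fabric:
    links:                        # at least 1 spine-to-leaf link
      - spine:
          port: spine-1/Ethernet1
          ip: 172.30.128.0/31     # illustrative /31 point-to-point addressing
        leaf:
          port: leaf-1/Ethernet49
          ip: 172.30.128.1/31
```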
Field Description Default Validation port string Port defines the full name of the switch port in the format of "device/port", such as "spine-1/Ethernet1". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. ip string IP is the IP address of the switch side of the fabric link (switch port configuration) Pattern: ^((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.?\b){4}/([1-2]?[0-9]|3[0-2])$"},{"location":"reference/api/#connmclag","title":"ConnMCLAG","text":"
ConnMCLAG defines the MCLAG connection (port channel, single server to pair of switches with multiple links)
Appears in: - ConnectionSpec
Field Description Default Validation links ServerToSwitchLink array Links is the list of server-to-switch links MinItems: 2 mtu integer MTU is the MTU to be configured on the switch port or port channel fallback boolean Fallback is the optional flag used to indicate that one of the links in the LACP port channel should be used as a fallback link"},{"location":"reference/api/#connmclagdomain","title":"ConnMCLAGDomain","text":"
ConnMCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch or redundancy group and allows the use of MCLAG connections to connect servers in a multi-homed way.
Appears in: - ConnectionSpec
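A hedged sketch of an MCLAG domain Connection pairing two leaf switches, using the peer/session link fields from this section. Switch and port names are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: leaf-1--mclag-domain--leaf-2
spec:
  mclagDomain:
    peerLinks:                   # pass server traffic between the switches
      - switch1:
          port: leaf-1/Ethernet10
        switch2:
          port: leaf-2/Ethernet10
    sessionLinks:                # pass only MCLAG control plane and BGP traffic
      - switch1:
          port: leaf-1/Ethernet11
        switch2:
          port: leaf-2/Ethernet11
```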
Field Description Default Validation peerLinks SwitchToSwitchLink array PeerLinks is the list of peer links between the switches, used to pass server traffic between switches MinItems: 1 sessionLinks SwitchToSwitchLink array SessionLinks is the list of session links between the switches, used only to pass MCLAG control plane and BGP traffic between switches MinItems: 1"},{"location":"reference/api/#connstaticexternal","title":"ConnStaticExternal","text":"
ConnStaticExternal defines the static external connection (single switch to a single external device with a single link)
Appears in: - ConnectionSpec
Field Description Default Validation link ConnStaticExternalLink Link is the static external connection link withinVPC string WithinVPC is the optional VPC name to provision the static external connection within the VPC VRF instead of default one to make resource available to the specific VPC"},{"location":"reference/api/#connstaticexternallink","title":"ConnStaticExternalLink","text":"
ConnStaticExternalLink defines the static external connection link
Appears in: - ConnStaticExternal
Field Description Default Validation switch ConnStaticExternalLinkSwitch Switch is the switch side of the static external connection link"},{"location":"reference/api/#connstaticexternallinkswitch","title":"ConnStaticExternalLinkSwitch","text":"
ConnStaticExternalLinkSwitch defines the switch side of the static external connection link
Appears in: - ConnStaticExternalLink
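A hedged sketch of a static external Connection, using the switch-side fields defined in this section. All names, addresses, and the optional `withinVPC` value are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: leaf-1--static-external--edge-1
spec:
  staticExternal:
    withinVPC: vpc-1           # optional: provision inside this VPC's VRF
    link:
      switch:
        port: leaf-1/Ethernet50
        ip: 172.31.1.1/24      # switch-side port configuration
        nextHop: 172.31.1.254  # next hop for the static routes below
        subnets:
          - 10.99.0.0/24       # subnets routed via the next hop
        vlan: 100              # optional VLAN on the switch port
```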
| Field | Description | Default | Validation |
|---|---|---|---|
| `port` (string) | Port defines the full name of the switch port in the format of "device/port", such as "spine-1/Ethernet1". The SONiC port name is used as the port name, and the switch name should be the same as the name of the Switch object. | | |
| `ip` (string) | IP is the IP address of the switch side of the static external connection link (switch port configuration) | | Pattern: `^((25[0-5]\|(2[0-4]\|1\d\|[1-9]\|)\d)\.?\b){4}/([1-2]?[0-9]\|3[0-2])$` |
| `nextHop` (string) | NextHop is the next hop IP address for static routes that will be created for the subnets | | Pattern: `^((25[0-5]\|(2[0-4]\|1\d\|[1-9]\|)\d)\.?\b){4}$` |
| `subnets` (string array) | Subnets is the list of subnets that will get static routes using the specified next hop | | |
| `vlan` (integer) | VLAN is the optional VLAN ID to be configured on the switch port | | |

### ConnUnbundled
ConnUnbundled defines the unbundled connection (no port channel, single server to a single switch with a single link)
Appears in: - ConnectionSpec
Field Description Default Validation link ServerToSwitchLink Link is the server-to-switch link mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#connvpcloopback","title":"ConnVPCLoopback","text":"
ConnVPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) that enables the automated workaround named "VPC Loopback", which makes it possible to avoid switch hardware limitations and traffic going through the CPU in some cases
Appears in: - ConnectionSpec
Field Description Default Validation links SwitchToSwitchLink array Links is the list of VPC loopback links MinItems: 1"},{"location":"reference/api/#connection","title":"Connection","text":"
Connection object represents logical and physical connections between any devices in the Fabric (Switch, Server and External objects). It's needed to define all physical and logical connections between the devices in the Wiring Diagram. The connection type is defined by the top-level field in the ConnectionSpec; exactly one of them can be used in a single Connection object.
Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1kind string Connectionmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ConnectionSpec Spec is the desired state of the Connection status ConnectionStatus Status is the observed state of the Connection"},{"location":"reference/api/#connectionspec","title":"ConnectionSpec","text":"
ConnectionSpec defines the desired state of Connection
Appears in: - Connection
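Since exactly one connection type can be set per object, the simplest Connection is an unbundled server-to-switch link. A hedged sketch with illustrative names:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-2--unbundled--leaf-1
spec:
  unbundled:                 # exactly one connection type per Connection object
    link:                    # a single server-to-switch link, no port channel
      server:
        port: server-2/enp2s1
      switch:
        port: leaf-1/Ethernet2
```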
| Field | Description | Default | Validation |
|---|---|---|---|
| `unbundled` (ConnUnbundled) | Unbundled defines the unbundled connection (no port channel, single server to a single switch with a single link) | | |
| `bundled` (ConnBundled) | Bundled defines the bundled connection (port channel, single server to a single switch with multiple links) | | |
| `mclag` (ConnMCLAG) | MCLAG defines the MCLAG connection (port channel, single server to a pair of switches with multiple links) | | |
| `eslag` (ConnESLAG) | ESLAG defines the ESLAG connection (port channel, single server to 2-4 switches with multiple links) | | |
| `mclagDomain` (ConnMCLAGDomain) | MCLAGDomain defines the MCLAG domain connection which makes two switches into a single logical switch for server multi-homing | | |
| `fabric` (ConnFabric) | Fabric defines the fabric connection (single spine to a single leaf with at least one link) | | |
| `vpcLoopback` (ConnVPCLoopback) | VPCLoopback defines the VPC loopback connection (multiple port pairs on a single switch) for the automated workaround | | |
| `external` (ConnExternal) | External defines the external connection (single switch to a single external device with a single link) | | |
| `staticExternal` (ConnStaticExternal) | StaticExternal defines the static external connection (single switch to a single external device with a single link) | | |

### ConnectionStatus
ConnectionStatus defines the observed state of Connection
Field Description Default Validation spine ConnFabricLinkSwitch Spine is the spine side of the fabric link leaf ConnFabricLinkSwitch Leaf is the leaf side of the fabric link"},{"location":"reference/api/#server","title":"Server","text":"
Server is the Schema for the servers API
Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1kind string Servermetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec ServerSpec Spec is desired state of the server status ServerStatus Status is the observed state of the server"},{"location":"reference/api/#serverfacingconnectionconfig","title":"ServerFacingConnectionConfig","text":"
ServerFacingConnectionConfig defines any server-facing connection (unbundled, bundled, mclag, etc.) configuration
Field Description Default Validation mtu integer MTU is the MTU to be configured on the switch port or port channel"},{"location":"reference/api/#serverspec","title":"ServerSpec","text":"
ServerSpec defines the desired state of Server
Appears in: - Server
Field Description Default Validation description string Description is a description of the server profile string Profile is the profile of the server, name of the ServerProfile object to be used for this server, currently not used by the Fabric"},{"location":"reference/api/#serverstatus","title":"ServerStatus","text":"
Field Description Default Validation server BasePortName Server is the server side of the connection switch BasePortName Switch is the switch side of the connection"},{"location":"reference/api/#switch","title":"Switch","text":"
Switch is the Schema for the switches API
Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1kind string Switchmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchSpec Spec is desired state of the switch status SwitchStatus Status is the observed state of the switch"},{"location":"reference/api/#switchboot","title":"SwitchBoot","text":"
Appears in: - SwitchSpec
Field Description Default Validation serial string Identify switch by serial number mac string Identify switch by MAC address of the management port"},{"location":"reference/api/#switchgroup","title":"SwitchGroup","text":"
SwitchGroup is the marker API object to group switches together; a switch can belong to multiple groups
Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1kind string SwitchGroupmetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchGroupSpec Spec is the desired state of the SwitchGroup status SwitchGroupStatus Status is the observed state of the SwitchGroup"},{"location":"reference/api/#switchgroupspec","title":"SwitchGroupSpec","text":"
SwitchGroupSpec defines the desired state of SwitchGroup
SwitchProfile represents switch capabilities and configuration
Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1kind string SwitchProfilemetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec SwitchProfileSpec status SwitchProfileStatus"},{"location":"reference/api/#switchprofileconfig","title":"SwitchProfileConfig","text":"
Defines switch-specific configuration options
Appears in: - SwitchProfileSpec
Field Description Default Validation maxPathsEBGP integer MaxPathsEBGP defines the maximum number of EBGP paths to be configured"},{"location":"reference/api/#switchprofilefeatures","title":"SwitchProfileFeatures","text":"
Defines features supported by a specific switch which is later used for roles and Fabric API features usage validation
Appears in: - SwitchProfileSpec
Field Description Default Validation subinterfaces boolean Subinterfaces defines if switch supports subinterfaces vxlan boolean VXLAN defines if switch supports VXLANs acls boolean ACLs defines if switch supports ACLs"},{"location":"reference/api/#switchprofileport","title":"SwitchProfilePort","text":"
Defines a switch port configuration. Only one of Profile or Group can be set
Appears in: - SwitchProfileSpec
Field Description Default Validation nos string NOSName defines how port is named in the NOS baseNOSName string BaseNOSName defines the base NOS name that could be used together with the profile to generate the actual NOS name (e.g. breakouts) label string Label defines the physical port label you can see on the actual switch group string If port isn't directly manageable, group defines the group it belongs to, exclusive with profile profile string If port is directly configurable, profile defines the profile it belongs to, exclusive with group management boolean Management defines if port is a management port, it's a special case and it can't have a group or profile oniePortName string OniePortName defines the ONIE port name for management ports only"},{"location":"reference/api/#switchprofileportgroup","title":"SwitchProfilePortGroup","text":"
Defines a switch port group configuration
Appears in: - SwitchProfileSpec
Field Description Default Validation nos string NOSName defines how group is named in the NOS profile string Profile defines the possible configuration profile for the group, could only have speed profile"},{"location":"reference/api/#switchprofileportprofile","title":"SwitchProfilePortProfile","text":"
Defines a switch port profile configuration
Appears in: - SwitchProfileSpec
Field Description Default Validation speed SwitchProfilePortProfileSpeed Speed defines the speed configuration for the profile, exclusive with breakout breakout SwitchProfilePortProfileBreakout Breakout defines the breakout configuration for the profile, exclusive with speed autoNegAllowed boolean AutoNegAllowed defines if configuring auto-negotiation is allowed for the port autoNegDefault boolean AutoNegDefault defines the default auto-negotiation state for the port"},{"location":"reference/api/#switchprofileportprofilebreakout","title":"SwitchProfilePortProfileBreakout","text":"
Defines a switch port profile breakout configuration
Appears in: - SwitchProfilePortProfile
Field Description Default Validation default string Default defines the default breakout mode for the profile supported object (keys:string, values:SwitchProfilePortProfileBreakoutMode) Supported defines the supported breakout modes for the profile with the NOS name offsets"},{"location":"reference/api/#switchprofileportprofilebreakoutmode","title":"SwitchProfilePortProfileBreakoutMode","text":"
Defines a switch port profile breakout mode configuration
Appears in: - SwitchProfilePortProfileBreakout
Field Description Default Validation offsets string array Offsets defines the breakout NOS port name offset from the port NOS Name for each breakout mode"},{"location":"reference/api/#switchprofileportprofilespeed","title":"SwitchProfilePortProfileSpeed","text":"
Defines a switch port profile speed configuration
Appears in: - SwitchProfilePortProfile
Field Description Default Validation default string Default defines the default speed for the profile supported string array Supported defines the supported speeds for the profile"},{"location":"reference/api/#switchprofilespec","title":"SwitchProfileSpec","text":"
SwitchProfileSpec defines the desired state of SwitchProfile
Appears in: - SwitchProfile
Field Description Default Validation displayName string DisplayName defines the human-readable name of the switch otherNames string array OtherNames defines alternative names for the switch features SwitchProfileFeatures Features defines the features supported by the switch config SwitchProfileConfig Config defines the switch-specific configuration options ports object (keys:string, values:SwitchProfilePort) Ports defines the switch port configuration portGroups object (keys:string, values:SwitchProfilePortGroup) PortGroups defines the switch port group configuration portProfiles object (keys:string, values:SwitchProfilePortProfile) PortProfiles defines the switch port profile configuration nosType NOSType NOSType defines the NOS type to be used for the switch platform string Platform is what expected to be request by ONIE and displayed in the NOS"},{"location":"reference/api/#switchprofilestatus","title":"SwitchProfileStatus","text":"
SwitchProfileStatus defines the observed state of SwitchProfile
SwitchRedundancy is the switch redundancy configuration, which includes the name of the redundancy group the switch belongs to and its type, used both for MCLAG and ESLAG connections. It defines how redundancy will be configured and handled on the switch as well as which connection types will be available. If not specified, the switch will not be part of any redundancy group. If the name isn't empty, the type must be specified as well, and the name should match one of the SwitchGroup objects.
Appears in: - SwitchSpec
Field Description Default Validation group string Group is the name of the redundancy group switch belongs to type RedundancyType Type is the type of the redundancy group, could be mclag or eslag"},{"location":"reference/api/#switchrole","title":"SwitchRole","text":"
Underlying type: string
SwitchRole is the role of the switch; it can be spine, server-leaf, border-leaf, or mixed-leaf
Possible values: `spine`, `server-leaf`, `border-leaf`, `mixed-leaf`, `virtual-edge`

### SwitchSpec
SwitchSpec defines the desired state of Switch
Appears in: - Switch
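A hedged sketch of a Switch object using the fields from this section. All values (ASN, IPs, serial, group names, and the profile) are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: leaf-1
spec:
  role: server-leaf
  description: leaf-1
  profile: dell-s5248f-on   # name of a SwitchProfile object
  redundancy:
    group: mclag-1          # should match a SwitchGroup object
    type: mclag
  vlanNamespaces:
    - default
  asn: 65101                # illustrative underlay ASN
  ip: 172.30.0.10/21
  protocolIP: 172.30.10.1/32  # used as BGP Router ID
  vtepIP: 172.30.11.1/32
  portGroupSpeeds:
    "1": 10G                # port group name -> speed
  portBreakouts:
    E1/55: 4x25G            # port name -> breakout mode
  boot:
    serial: ABC1234567      # identify the switch by serial (or mac)
```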
| Field | Description | Default | Validation |
|---|---|---|---|
| `role` (SwitchRole) | Role is the role of the switch; it can be spine, server-leaf, border-leaf, or mixed-leaf | | Enum: [spine server-leaf border-leaf mixed-leaf virtual-edge] Required: {} |
| `description` (string) | Description is a description of the switch | | |
| `profile` (string) | Profile is the profile of the switch, the name of the SwitchProfile object to be used for this switch, currently not used by the Fabric | | |
| `groups` (string array) | Groups is a list of switch groups the switch belongs to | | |
| `redundancy` (SwitchRedundancy) | Redundancy is the switch redundancy configuration, including the name of the redundancy group the switch belongs to and its type, used both for MCLAG and ESLAG connections | | |
| `vlanNamespaces` (string array) | VLANNamespaces is a list of VLAN namespaces the switch is part of; their VLAN ranges could not overlap | | |
| `asn` (integer) | ASN is the ASN of the switch | | |
| `ip` (string) | IP is the IP of the switch that could be used to access it from other switches and control nodes in the Fabric | | |
| `vtepIP` (string) | VTEPIP is the VTEP IP of the switch | | |
| `protocolIP` (string) | ProtocolIP is used as the BGP Router ID for switch configuration | | |
| `portGroupSpeeds` (object, keys:string, values:string) | PortGroupSpeeds is a map of port group speeds; the key is the port group name, the value is the speed, such as '"2": 10G' | | |
| `portSpeeds` (object, keys:string, values:string) | PortSpeeds is a map of port speeds; the key is the port name, the value is the speed | | |
| `portBreakouts` (object, keys:string, values:string) | PortBreakouts is a map of port breakouts; the key is the port name, the value is the breakout configuration, such as "1/55: 4x25G" | | |
| `portAutoNegs` (object, keys:string, values:boolean) | PortAutoNegs is a map of port auto-negotiation; the key is the port name, the value is true or false | | |
| `boot` (SwitchBoot) | Boot is the boot/provisioning information of the switch | | |

### SwitchStatus
SwitchToSwitchLink defines the switch-to-switch link
Appears in: - ConnMCLAGDomain - ConnVPCLoopback
Field Description Default Validation switch1 BasePortName Switch1 is the first switch side of the connection switch2 BasePortName Switch2 is the second switch side of the connection"},{"location":"reference/api/#vlannamespace","title":"VLANNamespace","text":"
VLANNamespace is the Schema for the vlannamespaces API
Field Description Default Validation apiVersion string wiring.githedgehog.com/v1beta1kind string VLANNamespacemetadata ObjectMeta Refer to Kubernetes API documentation for fields of metadata. spec VLANNamespaceSpec Spec is the desired state of the VLANNamespace status VLANNamespaceStatus Status is the observed state of the VLANNamespace"},{"location":"reference/api/#vlannamespacespec","title":"VLANNamespaceSpec","text":"
VLANNamespaceSpec defines the desired state of VLANNamespace
Appears in: - VLANNamespace
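A hedged sketch of a VLANNamespace with a single range. The `from`/`to` field names inside VLANRange are an assumption (they are not defined in this section), and the range values are illustrative:

```yaml
apiVersion: wiring.githedgehog.com/v1beta1
kind: VLANNamespace
metadata:
  name: default
spec:
  ranges:        # 1-20 ranges; must not overlap each other or Fabric reserved VLANs
    - from: 1000 # assumed VLANRange field names
      to: 2999
```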
Field Description Default Validation ranges VLANRange array Ranges is a list of VLAN ranges to be used in this namespace, couldn't overlap between each other and with Fabric reserved VLAN ranges MaxItems: 20 MinItems: 1"},{"location":"reference/api/#vlannamespacestatus","title":"VLANNamespaceStatus","text":"
VLANNamespaceStatus defines the observed state of VLANNamespace
Currently, the Fabric CLI is represented by a kubectl plugin, kubectl-fabric, automatically installed on the Control Node. It is a wrapper around kubectl and the Kubernetes client which allows managing Fabric resources in a more convenient way. The Fabric CLI only provides a subset of the functionality available via the Fabric API and is focused on simplifying object creation and some manipulation of already existing objects, while the main get/list/update operations are expected to be done using kubectl.
```
core@control-1 ~ $ kubectl fabric
NAME:
   hhfctl - Hedgehog Fabric user client

USAGE:
   hhfctl [global options] command [command options] [arguments...]

VERSION:
   v0.23.0

COMMANDS:
   vpc                VPC commands
   switch, sw, agent  Switch/Agent commands
   connection, conn   Connection commands
   switchgroup, sg    SwitchGroup commands
   external           External commands
   help, h            Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --verbose, -v  verbose output (includes debug) (default: true)
   --help, -h     show help
   --version, -V  print the version
```
The following is a list of all supported switches. Please make sure to use the version of the documentation that matches your environment to get an up-to-date list of supported switches, their features, and port naming scheme.
Profile Name (to use in switch.spec.profile): dell-s5248f-on
Supported features:
Subinterfaces: true
VXLAN: true
ACLs: true
Available Ports:
Label column is a port label on a physical switch.
Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 2 25G 10G, 25G E1/6 6 Port Group 2 25G 10G, 25G E1/7 7 Port Group 2 25G 10G, 25G E1/8 8 Port Group 2 25G 10G, 25G E1/9 9 Port Group 3 25G 10G, 25G E1/10 10 Port Group 3 25G 10G, 25G E1/11 11 Port Group 3 25G 10G, 25G E1/12 12 Port Group 3 25G 10G, 25G E1/13 13 Port Group 4 25G 10G, 25G E1/14 14 Port Group 4 25G 10G, 25G E1/15 15 Port Group 4 25G 10G, 25G E1/16 16 Port Group 4 25G 10G, 25G E1/17 17 Port Group 5 25G 10G, 25G E1/18 18 Port Group 5 25G 10G, 25G E1/19 19 Port Group 5 25G 10G, 25G E1/20 20 Port Group 5 25G 10G, 25G E1/21 21 Port Group 6 25G 10G, 25G E1/22 22 Port Group 6 25G 10G, 25G E1/23 23 Port Group 6 25G 10G, 25G E1/24 24 Port Group 6 25G 10G, 25G E1/25 25 Port Group 7 25G 10G, 25G E1/26 26 Port Group 7 25G 10G, 25G E1/27 27 Port Group 7 25G 10G, 25G E1/28 28 Port Group 7 25G 10G, 25G E1/29 29 Port Group 8 25G 10G, 25G E1/30 30 Port Group 8 25G 10G, 25G E1/31 31 Port Group 8 25G 10G, 25G E1/32 32 Port Group 8 25G 10G, 25G E1/33 33 Port Group 9 25G 10G, 25G E1/34 34 Port Group 9 25G 10G, 25G E1/35 35 Port Group 9 25G 10G, 25G E1/36 36 Port Group 9 25G 10G, 25G E1/37 37 Port Group 10 25G 10G, 25G E1/38 38 Port Group 10 25G 10G, 25G E1/39 39 Port Group 10 25G 10G, 25G E1/40 40 Port Group 10 25G 10G, 25G E1/41 41 Port Group 11 25G 10G, 25G E1/42 42 Port Group 11 25G 10G, 25G E1/43 43 Port Group 11 25G 10G, 25G E1/44 44 Port Group 11 25G 10G, 25G E1/45 45 Port Group 12 25G 10G, 25G E1/46 46 Port Group 12 25G 10G, 25G E1/47 47 Port Group 12 25G 10G, 25G E1/48 48 Port Group 12 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/52 52 Breakout 1x100G 
1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/54 54 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/56 56 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G"},{"location":"reference/profiles/#edgecore-dcs203","title":"Edgecore DCS203","text":"
Profile Name (to use in switch.spec.profile): edgecore-dcs203
Other names: Edgecore AS7326-56X
Supported features:
Subinterfaces: true
VXLAN: true
ACLs: true
Available Ports:
Label column is a port label on a physical switch.
Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 1 25G 10G, 25G E1/6 6 Port Group 1 25G 10G, 25G E1/7 7 Port Group 1 25G 10G, 25G E1/8 8 Port Group 1 25G 10G, 25G E1/9 9 Port Group 1 25G 10G, 25G E1/10 10 Port Group 1 25G 10G, 25G E1/11 11 Port Group 1 25G 10G, 25G E1/12 12 Port Group 1 25G 10G, 25G E1/13 13 Port Group 2 25G 10G, 25G E1/14 14 Port Group 2 25G 10G, 25G E1/15 15 Port Group 2 25G 10G, 25G E1/16 16 Port Group 2 25G 10G, 25G E1/17 17 Port Group 2 25G 10G, 25G E1/18 18 Port Group 2 25G 10G, 25G E1/19 19 Port Group 2 25G 10G, 25G E1/20 20 Port Group 2 25G 10G, 25G E1/21 21 Port Group 2 25G 10G, 25G E1/22 22 Port Group 2 25G 10G, 25G E1/23 23 Port Group 2 25G 10G, 25G E1/24 24 Port Group 2 25G 10G, 25G E1/25 25 Port Group 3 25G 10G, 25G E1/26 26 Port Group 3 25G 10G, 25G E1/27 27 Port Group 3 25G 10G, 25G E1/28 28 Port Group 3 25G 10G, 25G E1/29 29 Port Group 3 25G 10G, 25G E1/30 30 Port Group 3 25G 10G, 25G E1/31 31 Port Group 3 25G 10G, 25G E1/32 32 Port Group 3 25G 10G, 25G E1/33 33 Port Group 3 25G 10G, 25G E1/34 34 Port Group 3 25G 10G, 25G E1/35 35 Port Group 3 25G 10G, 25G E1/36 36 Port Group 3 25G 10G, 25G E1/37 37 Port Group 4 25G 10G, 25G E1/38 38 Port Group 4 25G 10G, 25G E1/39 39 Port Group 4 25G 10G, 25G E1/40 40 Port Group 4 25G 10G, 25G E1/41 41 Port Group 4 25G 10G, 25G E1/42 42 Port Group 4 25G 10G, 25G E1/43 43 Port Group 4 25G 10G, 25G E1/44 44 Port Group 4 25G 10G, 25G E1/45 45 Port Group 4 25G 10G, 25G E1/46 46 Port Group 4 25G 10G, 25G E1/47 47 Port Group 4 25G 10G, 25G E1/48 48 Port Group 4 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/52 52 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/54 54 Breakout 
1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x40G, 4x10G, 4x25G E1/56 56 Direct 100G 40G, 100G E1/57 57 Direct 10G 1G, 10G E1/58 58 Direct 10G 1G, 10G"},{"location":"reference/profiles/#edgecore-dcs204","title":"Edgecore DCS204","text":"
Profile Name (to use in switch.spec.profile): edgecore-dcs204
Other names: Edgecore AS7726-32X
Supported features:
Subinterfaces: true
VXLAN: true
ACLs: true
Available Ports:
Label column is a port label on a physical switch.
Port Label Type Group Default Supported M1 Management E1/1 1 Port Group 1 25G 10G, 25G E1/2 2 Port Group 1 25G 10G, 25G E1/3 3 Port Group 1 25G 10G, 25G E1/4 4 Port Group 1 25G 10G, 25G E1/5 5 Port Group 2 25G 10G, 25G E1/6 6 Port Group 2 25G 10G, 25G E1/7 7 Port Group 2 25G 10G, 25G E1/8 8 Port Group 2 25G 10G, 25G E1/9 9 Port Group 3 25G 10G, 25G E1/10 10 Port Group 3 25G 10G, 25G E1/11 11 Port Group 3 25G 10G, 25G E1/12 12 Port Group 3 25G 10G, 25G E1/13 13 Port Group 4 25G 10G, 25G E1/14 14 Port Group 4 25G 10G, 25G E1/15 15 Port Group 4 25G 10G, 25G E1/16 16 Port Group 4 25G 10G, 25G E1/17 17 Port Group 5 25G 10G, 25G E1/18 18 Port Group 5 25G 10G, 25G E1/19 19 Port Group 5 25G 10G, 25G E1/20 20 Port Group 5 25G 10G, 25G E1/21 21 Port Group 6 25G 10G, 25G E1/22 22 Port Group 6 25G 10G, 25G E1/23 23 Port Group 6 25G 10G, 25G E1/24 24 Port Group 6 25G 10G, 25G E1/25 25 Port Group 7 25G 10G, 25G E1/26 26 Port Group 7 25G 10G, 25G E1/27 27 Port Group 7 25G 10G, 25G E1/28 28 Port Group 7 25G 10G, 25G E1/29 29 Port Group 8 25G 10G, 25G E1/30 30 Port Group 8 25G 10G, 25G E1/31 31 Port Group 8 25G 10G, 25G E1/32 32 Port Group 8 25G 10G, 25G E1/33 33 Port Group 9 25G 10G, 25G E1/34 34 Port Group 9 25G 10G, 25G E1/35 35 Port Group 9 25G 10G, 25G E1/36 36 Port Group 9 25G 10G, 25G E1/37 37 Port Group 10 25G 10G, 25G E1/38 38 Port Group 10 25G 10G, 25G E1/39 39 Port Group 10 25G 10G, 25G E1/40 40 Port Group 10 25G 10G, 25G E1/41 41 Port Group 11 25G 10G, 25G E1/42 42 Port Group 11 25G 10G, 25G E1/43 43 Port Group 11 25G 10G, 25G E1/44 44 Port Group 11 25G 10G, 25G E1/45 45 Port Group 12 25G 10G, 25G E1/46 46 Port Group 12 25G 10G, 25G E1/47 47 Port Group 12 25G 10G, 25G E1/48 48 Port Group 12 25G 10G, 25G E1/49 49 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/50 50 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/51 51 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/52 52 Breakout 1x100G 
1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/53 53 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/54 54 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/55 55 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G E1/56 56 Breakout 1x100G 1x100G, 1x10G, 1x25G, 1x40G, 1x50G, 2x50G, 4x10G, 4x25G"},{"location":"release-notes/","title":"Release notes","text":""},{"location":"release-notes/#beta-1","title":"Beta-1","text":""},{"location":"release-notes/#device-support","title":"Device support","text":"
Groups LEAF switches to provide multi-homed connectivity to the Fabric
2-4 switches per group
Support for MCLAG and ESLAG (EVPN MH / ESI)
A single redundancy group can only support multi-homing of one type (ESLAG or MCLAG)
Multiple types of redundancy groups can be used in the fabric simultaneously
"},{"location":"release-notes/#improved-vpc-security-policy-better-zero-trust","title":"Improved VPC security policy - better Zero Trust","text":"
Inter-VPC
Allow inter-VPC and external peering with per subnet control
Intra-VPC intra-subnet policies
Isolated Subnets
subnets isolated by default from other subnets in the VPC
require a user-defined explicitly permit list to allow communications to other subnets within the VPC
can be set on individual subnets within VPC or per entire VPC - off by default
Inter-VPC and external peering configurations are not affected and work the same as before
Restricted Subnets
Hosts within a subnet have no mutual reachability
Hosts within a subnet can be reached by members of other subnets or peered VPCs as specified by the policy
Inter-VPC and external peering configurations are not affected and work the same as before
Permit Lists
Intra-VPC Permit Lists govern connectivity between subnets within the VPC for isolated subnets
Inter-VPC Permit Lists govern which subnets of one VPC have access to some subnets of the other VPC for finer-grained control of inter-VPC and external peering
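In the Fabric API, these controls surface on the VPC object; a minimal sketch with illustrative subnet names (see the VPCs user guide for the full spec):

```
spec:
  defaultIsolated: true # subnets are isolated unless explicitly permitted
  permit: # Intra-VPC Permit List
  - [subnet-1, subnet-2] # these subnets can communicate with each other
```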
For CLOS/LEAF-SPINE fabrics, it is recommended that the controller connects to one or more leaf switches in the fabric on front-facing data ports. Connection to two or more leaf switches is recommended for redundancy and performance. No port break-out functionality is supported for controller connectivity.
Spine controller connectivity is not supported.
For Collapsed Core topology, the controller can connect on front-facing data ports, as described above, or on management ports. Note that every switch in the collapsed core topology must be connected to the controller.
Management port connectivity can also be supported for CLOS/LEAF-SPINE topology but requires all switches connected to the controllers via management ports. No chain booting is possible for this configuration.
A VPC consists of subnets, each with a user-specified VLAN for external host/server connectivity.
Multiple IP address namespaces
Multiple IP address namespaces are supported per fabric. Each VPC belongs to the corresponding IPv4 namespace. There are no subnet overlaps within a single IPv4 namespace. IP address namespaces can mutually overlap.
VLAN Namespaces guarantee the uniqueness of VLANs for a set of participating devices. Each switch belongs to a list of VLAN namespaces with non-overlapping VLAN ranges. Each VPC belongs to the VLAN namespace. There are no VLAN overlaps within a single VLAN namespace.
This feature is useful when multiple VM-management domains (like separate VMware clusters) connect to the fabric.
External peering provides the means of peering with external networking devices (edge routers, firewalls, or data center interconnects). VPC egress/ingress is pinned to a specific group of the border or mixed leaf switches. Multiple "external systems" with multiple devices/links in each of them are supported.
The user controls what subnets/prefixes to import and export from/to the external system.
No NAT function is supported for external peering.
[ silicon platform limitation in Trident 3; limits to number of endpoints in the fabric ]
Total VPCs per switch: up to 1000
[ Including VPCs attached at the given switch and VPCs peered with ]
Total VPCs per VLAN namespace: up to 3000
[ assuming 1 subnet per VPC ]
Total VPCs per fabric: unlimited
[ if using multiple VLAN namespaces ]
VPC subnets per switch: up to 3000
VPC subnets per VLAN namespace: up to 3000
Subnets per VPC: up to 20
[ just a validation; the current design allows up to 100, but it could be increased even more in the future ]
VPC Slots per remote peering @ switch: 2
Max VPC loopbacks per switch: 500
[ VPC loopback workarounds per switch are needed for local peering when both VPCs are attached to the switch or for external peering with VPC attached on the same switch that is peering with external ]
Fabric MTU is 9100 and not configurable right now (A3 planned)
Server-facing MTU is 9136 and not configurable right now (A3+)
no support for Access VLANs for attaching servers (A3 planned)
VPC peering is enabled on all subnets of the participating VPCs. No subnet selection for peering. (A3 planned)
peering with external is only possible with a VLAN (by design)
If you have VPCs with remote peering on a switch group, you can't attach those VPCs on that switch group (by definition of remote peering)
if a group of VPCs has remote peering on a switch group, any other VPC that will peer with those VPCs remotely will need to use the same switch group (by design)
if VPC peers with external, it can only be remotely peered with on the same switches that have a connection to that external (by design)
the server-facing connection object is immutable as it's very easy to get into a deadlock; re-create it to change it (A3+)
A single controller connecting to each switch management port. No redundancy.
Controller requirements:
One 1 gig port per switch
One or more 1 Gb or faster ports connecting to the external management network.
4 Cores, 12GB RAM, 100GB SSD.
Seeder:
Seeder and Controller functions co-resident on the control node. Switch booting and ZTP on management ports directly connected to the controller.
HHFab - the fabricator:
An operational tool to generate, initiate, and maintain the fabric software appliance. Allows fabrication of the environment-specific image with all of the required underlay and security configuration baked in.
DHCP Service:
A simple DHCP server for assigning IP addresses to hosts connecting to the fabric, optimized for use with VPC overlay.
Topology:
Support for a Collapsed Core topology with 2 switch nodes.
Underlay:
A simple single-VRF network with a BGP control plane. IPv4 support only.
External connectivity:
An edge router must be connected to selected ports of one or both switches. IPv4 support only.
Dual-homing:
L2 Dual homing with MCLAG is implemented to connect servers, storage, and other devices in the data center. NIC bonding and LACP configuration at the host are required.
VPC overlay implementation:
VPC is implemented as a set of ACLs within the underlay VRF. External connectivity to the VRF is performed via internally managed VLANs. IPv4 support only.
VPC Peering:
VPC peering is performed via ACLs with no fine-grained control.
NAT
DNAT + SNAT are supported per VPC. SNAT and DNAT can't be enabled per VPC simultaneously.
Hardware support:
Please see the supported hardware list.
Virtual Lab:
A simulation of the two-node Collapsed Core Topology as a virtual environment. Designed for use as a network simulation, a configuration scratchpad, or a training/demonstration tool. Minimum requirements: 8 cores, 24GB RAM, 100GB SSD
Limitations:
40 VPCs max
50 VPC peerings
[ 768 ACL entry platform limitation from Broadcom ]
Connection objects represent logical and physical connections between the devices in the Fabric (Switch, Server and External objects) and are needed to define all the connections in the Wiring Diagram.
All connections reference switch or server ports. Only port names defined by switch profiles can be used in the wiring diagram for the switches. NOS (or any other) port names aren't supported. Currently, server ports aren't validated by the Fabric API other than for uniqueness. See the Switch Profiles and Port Naming section for more details.
There are several types of connections.
Workload server connections
Server connections are used to connect workload servers to switches.
Unbundled server connections are used to connect servers to a single switch using a single port.
```
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-4--unbundled--s5248-02
  namespace: default
spec:
  unbundled:
    link: # Defines a single link between a server and a switch
      server:
        port: server-4/enp2s1
      switch:
        port: s5248-02/Ethernet3
```
Bundled server connections are used to connect servers to a single switch using multiple ports (port channel, LAG).
```
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-3--bundled--s5248-01
  namespace: default
spec:
  bundled:
    links: # Defines multiple links between a single server and a single switch
    - server:
        port: server-3/enp2s1
      switch:
        port: s5248-01/Ethernet3
    - server:
        port: server-3/enp2s2
      switch:
        port: s5248-01/Ethernet4
```
MCLAG server connections are used to connect servers to a pair of switches using multiple ports (dual-homing). Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection of type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.
```
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-1--mclag--s5248-01--s5248-02
  namespace: default
spec:
  mclag:
    links: # Defines multiple links between a single server and a pair of switches
    - server:
        port: server-1/enp2s1
      switch:
        port: s5248-01/Ethernet1
    - server:
        port: server-1/enp2s2
      switch:
        port: s5248-02/Ethernet1
```
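On the server side of an MCLAG connection like the one above, the host needs NIC bonding with LACP. As an illustration only (the interface names and the use of netplan are assumptions, not part of the Fabric API), such a bond could be configured as:

```
network:
  version: 2
  ethernets:
    enp2s1: {} # links cabled to s5248-01 and s5248-02
    enp2s2: {}
  bonds:
    bond0:
      interfaces: [enp2s1, enp2s2]
      parameters:
        mode: 802.3ad # LACP, required for MCLAG port channels
        lacp-rate: fast
      dhcp4: true
```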
ESLAG server connections are used to connect servers to 2-4 switches using multiple ports (multi-homing). Switches should belong to the same redundancy group of type eslag, but contrary to the MCLAG case, no other configuration is required.
```
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: server-1--eslag--s5248-01--s5248-02
  namespace: default
spec:
  eslag:
    links: # Defines multiple links between a single server and 2-4 switches
    - server:
        port: server-1/enp2s1
      switch:
        port: s5248-01/Ethernet1
    - server:
        port: server-1/enp2s2
      switch:
        port: s5248-02/Ethernet1
```
MCLAG-Domain connections define a pair of MCLAG switches with the Session and Peer links between them. Switches should be configured as an MCLAG pair, which requires them to be in a single redundancy group of type mclag and to have a Connection of type mclag-domain between them. MCLAG switches should also have the same spec.ASN and spec.VTEPIP.
```
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: s5248-01--mclag-domain--s5248-02
  namespace: default
spec:
  mclagDomain:
    peerLinks: # Defines multiple links between a pair of MCLAG switches for the Peer link
    - switch1:
        port: s5248-01/Ethernet72
      switch2:
        port: s5248-02/Ethernet72
    - switch1:
        port: s5248-01/Ethernet73
      switch2:
        port: s5248-02/Ethernet73
    sessionLinks: # Defines multiple links between a pair of MCLAG switches for the Session link
    - switch1:
        port: s5248-01/Ethernet74
      switch2:
        port: s5248-02/Ethernet74
    - switch1:
        port: s5248-01/Ethernet75
      switch2:
        port: s5248-02/Ethernet75
```
VPC-Loopback connections are required in order to implement a workaround for local VPC peering (when both VPCs are attached to the same switch), which is needed due to a hardware limitation of the currently supported switches.
Connecting Fabric to the outside world
Connections in this section provide connectivity to the outside world. For example, they can be connections to the Internet, to other networks, or to some other systems such as DHCP, NTP, LMA, or AAA services.
StaticExternal connections provide a simple way to connect things like DHCP servers directly to the Fabric by connecting them to specific switch ports.
```
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: third-party-dhcp-server--static-external--s5248-04
  namespace: default
spec:
  staticExternal:
    link:
      switch:
        port: s5248-04/Ethernet1 # Switch port to use
        ip: 172.30.50.5/24 # IP address that will be assigned to the switch port
        vlan: 1005 # Optional VLAN ID to use for the switch port; if 0, no VLAN is configured
        subnets: # List of subnets to route to the switch port using static routes and next hop
        - 10.99.0.1/24
        - 10.199.0.100/32
        nextHop: 172.30.50.1 # Next hop IP address to use when configuring static routes for the "subnets" list
```
Additionally, it's possible to configure StaticExternal within the VPC to provide access to the third-party resources within a specific VPC, with the rest of the YAML configuration remaining unchanged.
```
...
spec:
  staticExternal:
    withinVPC: vpc-1 # VPC name to attach the static external to
    link:
      ...
```
External connections are used to connect to external systems, such as edge/provider routers, using BGP peering and configuring inbound/outbound communities, as well as granularly controlling what gets advertised and which routes are accepted.
```
apiVersion: wiring.githedgehog.com/v1beta1
kind: Connection
metadata:
  name: s5248-03--external--5835
  namespace: default
spec:
  external:
    link: # Defines a single link between a switch and an external system
      switch:
        port: s5248-03/Ethernet3
```
Switches and Servers
All devices in a Hedgehog Fabric are divided into two groups: switches and servers, represented by the corresponding Switch and Server objects in the API. These objects are needed to define all of the participants of the Fabric and their roles in the Wiring Diagram, together with Connection objects (see Connections).
Switches are the main building blocks of the Fabric. They are represented by Switch objects in the API. These objects consist of basic metadata like name, description, role, serial, management port mac, as well as port group speeds, port breakouts, ASN, IP addresses, and more. Additionally, a Switch contains a reference to a SwitchProfile object that defines the switch model and capabilities. More details can be found in the Switch Profiles and Port Naming section.
In order for the fabric to manage a switch, either the serial number or the MAC address needs to be defined in the Switch object YAML.
```
apiVersion: wiring.githedgehog.com/v1beta1
kind: Switch
metadata:
  name: s5248-01
  namespace: default
spec:
  boot: # At least one of serial or mac needs to be defined
    serial: XYZPDQ1234
    mac: 00:11:22:33:44:55 # Usually the first management port MAC address
  profile: dell-s5248f-on # Mandatory reference to the SwitchProfile object defining the switch model and capabilities
  asn: 65101 # ASN of the switch
  description: leaf-1
  ip: 172.30.10.100/32 # Switch IP that will be accessible from the Control Node
  portBreakouts: # Configures port breakouts for the switch, see the SwitchProfile for available options
    E1/55: 4x25G
  portGroupSpeeds: # Configures port group speeds for the switch, see the SwitchProfile for available options
    "1": 10G
    "2": 10G
  portSpeeds: # Configures port speeds for the switch, see the SwitchProfile for available options
    E1/1: 25G
  protocolIP: 172.30.11.100/32 # Used as BGP router ID
  role: server-leaf # Role of the switch, one of server-leaf, border-leaf and mixed-leaf
  vlanNamespaces: # Defines which VLANs could be used to attach servers
  - default
  vtepIP: 172.30.12.100/32
  groups: # Defines which groups the switch belongs to, by referring to SwitchGroup objects
  - some-group
  redundancy: # Optional field to define that the switch belongs to a redundancy group
    group: eslag-1 # Name of the redundancy group
    type: eslag # Type of the redundancy group, one of mclag or eslag
```
The SwitchGroup is just a marker at that point and doesn't have any configuration options.
Redundancy groups are used to define the redundancy between switches. A redundancy group is a regular SwitchGroup used by multiple switches, and currently its type can be either MCLAG or ESLAG (EVPN MH / ESI). A switch can only belong to a single redundancy group.
MCLAG is only supported for pairs of switches and ESLAG is supported for up to 4 switches. Multiple types of redundancy groups can be used in the fabric simultaneously.
Connections with types mclag and eslag are used to define server connections to switches. They are only supported if the switch belongs to a redundancy group with the corresponding type.
In order to define a MCLAG or ESLAG redundancy group, you need to create a SwitchGroup object and assign it to the switches using the redundancy field.
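A minimal sketch of such a group, with an illustrative name; the redundancy stanza mirrors the one in the Switch object example above:

```
apiVersion: wiring.githedgehog.com/v1beta1
kind: SwitchGroup
metadata:
  name: mclag-1
  namespace: default
```

Each member switch then references the group:

```
spec:
  redundancy:
    group: mclag-1
    type: mclag # one of mclag or eslag
```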
In the case of MCLAG, a special connection of type mclag-domain that defines the peer and session links between the switches is required. For more details, see Connections.
Hedgehog Fabric uses the Border Leaf concept to exchange VPC routes outside the Fabric and provide L3 connectivity. The External Peering feature allows you to set up an external peering endpoint and to enforce several policies between internal and external endpoints.
Note
Hedgehog Fabric does not operate Edge side devices.
Traffic exits from the Fabric on Border Leaves that are connected with Edge devices. Border Leaves are suitable to terminate L2VPN connections, to distinguish VPC L3 routable traffic towards Edge devices, and to land VPC servers. Border Leaves (or Borders) can connect to several Edge devices.
Note
External Peering is only available on switch devices that are capable of sub-interfaces.
Connect Border Leaf to Edge device
In order to distinguish VPC traffic, an Edge device should be able to:
Set up BGP IPv4 to advertise and receive routes from the Fabric
Connect to a Fabric Border Leaf over VLAN
Be able to mark egress routes towards the Fabric with BGP Communities
Be able to filter ingress routes from the Fabric by BGP Communities
All other filtering and processing of L3 Routed Fabric traffic should be done on the Edge devices.
VPC L3 routable traffic will be tagged with a VLAN and sent to the Edge device. Later processing of VPC traffic (NAT, PBR, etc.) should happen on the Edge devices.
VPC access to Edge device
Each VPC within the Fabric can be allowed to access Edge devices. Additional filtering can be applied to the routes that the VPC can export to Edge devices and import from the Edge devices.
API and implementation

External
General configuration starts with the specification of External objects. Each object of type External can represent a set of Edge devices, a single BGP instance on an Edge device, or any other combined Edge entities that can be described with the following configuration:
Name of External
Inbound routes marked with the dedicated BGP community
Outbound routes marked with the dedicated community
Each External should be bound to some VPC IP namespace, otherwise prefix overlaps may occur.
```
apiVersion: vpc.githedgehog.com/v1beta1
kind: External
metadata:
  name: default--5835
spec:
  ipv4Namespace: # VPC IP Namespace
  inboundCommunity: # BGP Standard Community of routes from Edge devices
  outboundCommunity: # BGP Standard Community required to be assigned on prefixes advertised from the Fabric
```
External Attachment defines BGP Peering and traffic connectivity between a Border leaf and External. Attachments are bound to a Connection with type external and they specify an optional vlan that will be used to segregate particular Edge peering.
```
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalAttachment
metadata:
  name: #
spec:
  connection: # Name of the Connection with type external
  external: # Name of the External to pick config from
  neighbor:
    asn: # Edge device ASN
    ip: # IP address of the Edge device to peer with
  switch:
    ip: # IP address on the Border Leaf to set up BGP peering
    vlan: # VLAN (optional) ID to tag control and data traffic, use 0 for untagged
```
Several ExternalAttachments can be configured for the same Connection, each with a different vlan.
To allow a specific VPC to have access to Edge devices, bind the VPC to a specific External object. To do so, define an External Peering object.
```
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: # Name of the ExternalPeering
spec:
  permit:
    external:
      name: # External name
      prefixes: # List of prefixes (routes) allowed to be picked up from the External
      - # IPv4 prefix
    vpc:
      name: # VPC name
      subnets: # List of VPC subnet names allowed to have access to the External (Edge)
      - # Name of the subnet within the VPC
```
Prefixes is the list of subnets to permit from the External to the VPC. It matches any prefix length less than or equal to 32, effectively permitting all prefixes within the specified one. Use 0.0.0.0/0 for any route, including the default route.
This example allows any IPv4 prefix that came from External:
```
spec:
  permit:
    external:
      name: ###
      prefixes:
      - prefix: 0.0.0.0/0 # Any route will be allowed, including the default route
```
This example allows any route that belongs to the specified 77.0.0.0/8 prefix, with any prefix length:
```
spec:
  permit:
    external:
      name: ###
      prefixes:
      - prefix: 77.0.0.0/8 # Any route that belongs to the specified prefix is allowed (such as 77.0.0.0/8 or 77.1.2.0/24)
```
This example shows how to peer with the External object with name HedgeEdge, given a Fabric VPC with name vpc-1 on the Border Leaf switchBorder that has a cable connecting it to an Edge device on the port Ethernet42. Specifying vpc-1 is required to receive any prefixes advertised from the External.
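Under those assumptions, the ExternalPeering object could look like the following sketch (the object name, subnet name, and prefix list are illustrative):

```
apiVersion: vpc.githedgehog.com/v1beta1
kind: ExternalPeering
metadata:
  name: vpc-1--HedgeEdge
spec:
  permit:
    external:
      name: HedgeEdge
      prefixes:
      - prefix: 0.0.0.0/0 # accept anything advertised from the External
    vpc:
      name: vpc-1
      subnets:
      - default # VPC subnet allowed to reach the External
```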
Fabric API configuration

External
Example Edge side BGP configuration based on SONiC OS
Warning
Hedgehog does not recommend using the following configuration for production. It is only provided as an example of Edge Peer configuration.
Interface configuration:
```
interface Ethernet2.100
 encapsulation dot1q vlan-id 100
 description switchBorder--Ethernet42
 no shutdown
 ip vrf forwarding VrfHedge
 ip address 100.100.0.6/24
```
```
route-map HedgeIn permit 10
 match community Hedgehog
!
route-map HedgeOut permit 10
 set community 65102:5000
!

bgp community-list standard HedgeIn permit 5000:65102
```
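To complete the sketch, the route-maps would be applied to the BGP session toward the Border Leaf. The ASNs and neighbor address below are placeholders for illustration, not values taken from this document:

```
router bgp 65100 vrf VrfHedge
 neighbor 100.100.0.1 remote-as 65102
 !
 address-family ipv4 unicast
  neighbor 100.100.0.1 activate
  neighbor 100.100.0.1 route-map HedgeIn in
  neighbor 100.100.0.1 route-map HedgeOut out
```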
To provide monitoring of the most critical metrics from the switches managed by the Hedgehog Fabric, several dashboards can be used in Grafana deployments. Make sure that you have enabled metrics and logs collection for the switches in the Fabric, as described in the Fabric Config section.
Grafana Node Exporter Full is an open-source Grafana dashboard that provides visualizations for monitoring Linux nodes. In this case, Node Exporter is used to track SONiC OS stats such as:
Memory/disks usage
CPU/System utilization
Networking stats (traffic that hits SONiC interfaces), and more
Using VPCs with Harvester
This section contains an example of how Hedgehog Fabric can be used with Harvester or any hypervisor on the servers connected to Fabric. It assumes that you have already installed Fabric and have some servers running Harvester attached to it.
You need to define a Server object for each server running Harvester and a Connection object for each server connection to the switches.
You can have multiple VPCs created and attached to the Connections to the servers to make them available to the VMs in Harvester or any other hypervisor.
Configure Harvester

Add a Cluster Network
From the \"Cluster Networks/Configs\" side menu, create a new Cluster Network.
Here is a cleaned-up version of what the CRD looks like:
This chapter gives an overview of the main features of Hedgehog Fabric and their usage.
Switch Profiles and Port Naming

Switch Profiles
All supported switches have a SwitchProfile that defines the switch model, supported features, and available ports with supported configurations such as port groups and speeds, as well as port breakouts. The SwitchProfiles available in-cluster, as well as the generated documentation for them, can be found in the Reference section.
Each switch used in the wiring diagram should have a SwitchProfile referenced in the spec.profile of the Switch object.
The switch profile defines which features and ports are available on the switch. Based on the ports data in the profile, it's possible to set port speeds (for non-breakout and non-group ports), port group speeds, and port breakout modes in the Switch object in the Fabric API.
Port names follow the E<asic-or-chassis-number>/<port-number>[/<breakout>][.<subinterface>] pattern, where:
<asic-or-chassis-number> is the ASIC or chassis number (usually only one, named 1, for most switches)
<port-number> is the port number on the ASIC or chassis, starting from 1
optional /<breakout> is the breakout number for the port, starting from 1, only for breakout ports and always consecutive numbers independent of the lanes allocation and other implementation details
optional .<subinterface> is the subinterface number for the port
Examples of port names:
M1 - management port
E1/1 - port 1 on the ASIC or chassis 1, usually a first port on the switch
E1/55/1 - first breakout port of the switch port 55 on the ASIC or chassis 1
Non-breakout and non-group ports have a reference to the port profile with default and available speeds. They can be configured by setting the speed in the Switch object in the Fabric API:
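Mirroring the portSpeeds field from the Switch object shown in the user guide:

```
.spec:
  portSpeeds:
    E1/1: 25G
```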
Ports that belong to a port group are non-breakout and not directly configurable. They have a reference to the port group, which in turn references the port profile with default and available speeds. Such a port can't be configured directly; the speed configuration is applied to the whole group in the Switch object in the Fabric API:
```
.spec:
  portGroupSpeeds:
    "1": 10G
```
This sets the speed of all ports in group 1 to 10G; e.g., if group 1 contains ports E1/1, E1/2, E1/3 and E1/4, all of them will be set to 10G.
Ports that are breakout and non-group ports have a reference to the port profile with default and available breakout modes. They can be configured by setting the breakout mode in the Switch object in the Fabric API:
```
.spec:
  portBreakouts:
    E1/55: 4x25G
```
Configuring a port breakout mode will make \"breakout\" ports available for use in the wiring diagram. The breakout ports are named as E<asic-or-chassis-number>/<port-number>/<breakout>, e.g. E1/55/1, E1/55/2, E1/55/3, E1/55/4 for the example above. Omitting the breakout number is allowed for the first breakout port, e.g. E1/55 is the same as E1/55/1. The breakout ports are always consecutive numbers independent of the lanes allocation and other implementation details.
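Once broken out, such a port can be referenced in the wiring diagram like any other port; a sketch with an illustrative server name and port:

```
spec:
  unbundled:
    link:
      server:
        port: server-9/enp2s1
      switch:
        port: s5248-01/E1/55/1 # first breakout port of E1/55
```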
This section provides a brief overview of how to add or remove switches within the fabric using Hedgehog Fabric API, and how to manage connections between them.
Manipulating API objects is done with the assumption that target devices are correctly cabled and connected.
This article uses terms that can be found in the Hedgehog Concepts, the User Guide documentation, and the Fabric API reference.
Add a switch to the existing fabric
In order to be added to the Hedgehog Fabric, a switch should have a corresponding Switch object. An example of how to define this object is available in the User Guide.
Note
If the Switch will be used in ESLAG or MCLAG groups, the appropriate groups should exist. Redundancy groups should be specified in the Switch object before creation.
After the Switch object has been created, you can define and create dedicated device Connections. The types of the connections may differ based on the Switch role given to the device. For more details, refer to the Connections section.
Note
Switch devices should be booted in ONIE installation mode to install SONiC OS and configure the Fabric Agent.
Ensure the management port of the switch is connected to the fabric management network.
Remove a switch from the existing fabric
Before you decommission a switch from the Hedgehog Fabric, several preparation steps are necessary.
Warning
Currently, the wiring diagram used for the initial deployment is saved in /var/lib/rancher/k3s/server/manifests/hh-wiring.yaml on the Control node. The Fabric will maintain the objects defined in the original wiring diagram. In order to remove any object, first remove the dedicated API objects from this file. It is recommended to reapply hh-wiring.yaml after changing its contents.
If the Switch is a Leaf switch (including Mixed and Border leaf configurations), remove all VPCAttachments bound to the switch's Connections.
If the Switch was used for ExternalPeering, remove all ExternalAttachment objects that are bound to the Connections of the Switch.
Remove all connections of the Switch.
Finally, remove the Switch and Agent objects.
VPCs and Namespaces

VPC
A Virtual Private Cloud (VPC) is similar to a public cloud VPC. It provides an isolated private network with support for multiple subnets, each with user-defined VLANs and optional DHCP services.
```
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
  namespace: default
spec:
  ipv4Namespace: default # Limits which subnets the VPC can use to guarantee non-overlapping IPv4 ranges
  vlanNamespace: default # Limits which VLAN IDs the VPC can use to guarantee non-overlapping VLANs

  defaultIsolated: true # Sets default behavior for the current VPC subnets to be isolated
  defaultRestricted: true # Sets default behavior for the current VPC subnets to be restricted

  subnets:
    default: # Each subnet is named; the "default" subnet isn't required, but is actively used by the CLI
      dhcp:
        enable: true # On-demand DHCP server
        range: # Optionally, a start/end range could be specified, otherwise all available IPs are used
          start: 10.10.1.10
          end: 10.10.1.99
        options: # Optional, additional DHCP options to enable for the DHCP server, only available when enable is true
          pxeURL: tftp://10.10.10.99/bootfilename # PXEURL (optional) to identify the PXE server to use to boot hosts; HTTP query strings are not supported
          dnsServers: # (optional) configure DNS servers
          - 1.1.1.1
          timeServers: # (optional) configure Time (NTP) Servers
          - 1.1.1.1
          interfaceMTU: 1500 # (optional) configure the MTU (default is 9036); doesn't affect the actual MTU of the switch interfaces
      subnet: 10.10.1.0/24 # User-defined subnet from the IPv4 namespace
      gateway: 10.10.1.1 # User-defined gateway (optional, default is .1)
      vlan: 1001 # User-defined VLAN from the VLAN namespace
      isolated: true # Makes the subnet isolated from other subnets within the VPC (doesn't affect VPC peering)
      restricted: true # Causes all hosts in the subnet to be isolated from each other

    third-party-dhcp: # Another subnet
      dhcp:
        relay: 10.99.0.100/24 # Use a third-party DHCP server (DHCP relay configuration); access to it could be enabled using a StaticExternal connection
      subnet: "10.10.2.0/24"
      vlan: 1002

    another-subnet: # Minimal configuration is just a name, subnet and VLAN
      subnet: 10.10.100.0/24
      vlan: 1100

  permit: # Defines which subnets of the current VPC can communicate to each other, applied on top of the subnets' "isolated" flag (doesn't affect VPC peering)
  - [subnet-1, subnet-2, subnet-3] # Subnets 1, 2 and 3 can communicate with each other
  - [subnet-4, subnet-5] # It's possible to define multiple lists

  staticRoutes: # Optional, static routes to be added to the VPC
  - prefix: 10.100.0.0/24 # Destination prefix
    nextHops: # Next hop IP addresses
    - 10.200.0.0
```
Isolated and restricted subnets, permit lists
Subnets can be isolated and restricted, with the ability to define permit lists to allow communication between specific isolated subnets. The permit list is applied on top of the isolated flag and doesn't affect VPC peering.
Isolated subnet means that the subnet has no connectivity with other subnets within the VPC, but it could still be allowed by permit lists.
Restricted subnet means that all hosts in the subnet are isolated from each other within the subnet.
A permit list contains a list of sets; every set consists of subnets that can communicate with each other.
Third-party DHCP server configuration
If you use a third-party DHCP server, configured with spec.subnets.<subnet>.dhcp.relay, additional information is added to the DHCP packets forwarded to that server so that it can identify the VPC and subnet. This information is carried in the RelayAgentInfo option (option 82) of the DHCP packet. The relay sets two suboptions in the packet:
VirtualSubnetSelection (suboption 151) is populated with the VRF name, which uniquely identifies a VPC on the Hedgehog Fabric. It has the format VrfV<VPC-name>, for example VrfVvpc-1 for a VPC named vpc-1 in the Fabric API.
CircuitID (suboption 1) identifies the VLAN which, together with the VRF (VPC) name, maps to a specific VPC subnet.
A VPCAttachment assigns a specific VPC subnet to a Connection object, creating a binding between an exact server port and a VPC. As a result, the VPC becomes available on the specified server port(s) on the subnet's VLAN.
A VPC can only be attached to a switch that is part of the VLAN namespace used by the VPC.
apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCAttachment\nmetadata:\n name: vpc-1-server-1--mclag--s5248-01--s5248-02\n namespace: default\nspec:\n connection: server-1--mclag--s5248-01--s5248-02 # Connection name representing the server port(s)\n subnet: vpc-1/default # VPC subnet name\n nativeVLAN: true # (Optional) if true, the port will be configured as a native VLAN port (untagged)\n
apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n name: vpc-1--vpc-2\n namespace: default\nspec:\n permit: # Defines a pair of VPCs to peer\n - vpc-1: {} # Meaning all subnets of two VPCs will be able to communicate with each other\n vpc-2: {} # See \"Subnet filtering\" for more advanced configuration\n
It's possible to specify which subnets of the peering VPCs can communicate with each other using the permit field.
apiVersion: vpc.githedgehog.com/v1beta1\nkind: VPCPeering\nmetadata:\n name: vpc-1--vpc-2\n namespace: default\nspec:\n permit: # subnet-1 and subnet-2 of vpc-1 can communicate with subnet-3 of vpc-2, and subnet-4 of vpc-1 can communicate with subnet-5 and subnet-6 of vpc-2\n - vpc-1:\n subnets: [subnet-1, subnet-2]\n vpc-2:\n subnets: [subnet-3]\n - vpc-1:\n subnets: [subnet-4]\n vpc-2:\n subnets: [subnet-5, subnet-6]\n
An IPv4Namespace defines a set of (non-overlapping) IPv4 address ranges available for use by VPC subnets. Each VPC belongs to a specific IPv4 namespace. Therefore, its subnet prefixes must be from that IPv4 namespace.
apiVersion: vpc.githedgehog.com/v1beta1\nkind: IPv4Namespace\nmetadata:\n name: default\n namespace: default\nspec:\n subnets: # List of prefixes that VPCs can pick their subnets from\n - 10.10.0.0/16\n
A VLANNamespace defines a set of VLAN ranges available for attaching servers to switches. Each switch can belong to one or more disjoint VLANNamespaces.
apiVersion: wiring.githedgehog.com/v1beta1\nkind: VLANNamespace\nmetadata:\n name: default\n namespace: default\nspec:\n ranges: # List of VLAN ranges that VPCs can pick their subnet VLANs from\n - from: 1000\n to: 2999\n
"},{"location":"vlab/demo/","title":"Demo on VLAB","text":""},{"location":"vlab/demo/#goals","title":"Goals","text":"
The goal of this demo is to show how to create VPCs, attach and peer them, and test connectivity between the servers. Examples are based on the default VLAB topology.
You can find instructions on how to set up VLAB in the Overview and Running VLAB sections.
The default topology is Spine-Leaf with 2 spines, 2 MCLAG leaves, 2 ESLAG leaves and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using flag --fabric-mode collapsed-core (or -m collapsed-core) which only consists of 2 switches.
For more details on customizing topologies see the Running VLAB section.
In the default topology, the following Control Node and Switch VMs are created. The Control Node is connected to every switch; these links are omitted for clarity:
"},{"location":"vlab/demo/#utility-based-vpc-creation","title":"Utility based VPC creation","text":""},{"location":"vlab/demo/#setup-vpcs","title":"Setup VPCs","text":"
hhfab vlab includes a utility to create VPCs in VLAB: the hhfab vlab setup-vpcs sub-command.
NAME:\n hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them\n\nUSAGE:\n hhfab vlab setup-vpcs [command options]\n\nOPTIONS:\n --dns-servers value, --dns value [ --dns-servers value, --dns value ] DNS servers for VPCs advertised by DHCP\n --force-clenup, -f start with removing all existing VPCs and VPCAttachments (default: false)\n --help, -h show help\n --interface-mtu value, --mtu value interface MTU for VPCs advertised by DHCP (default: 0)\n --ipns value IPv4 namespace for VPCs (default: \"default\")\n --name value, -n value name of the VM or HW to access\n --servers-per-subnet value, --servers value number of servers per subnet (default: 1)\n --subnets-per-vpc value, --subnets value number of subnets per VPC (default: 1)\n --time-servers value, --ntp value [ --time-servers value, --ntp value ] Time servers for VPCs advertised by DHCP\n --vlanns value VLAN namespace for VPCs (default: \"default\")\n --wait-switches-ready, --wait wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)\n\n Global options:\n\n --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n --cache-dir DIR use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
hhfab vlab includes a utility to create VPC peerings in VLAB: the hhfab vlab setup-peerings sub-command.
NAME:\n hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)\n\nUSAGE:\n Setup test scenario with VPC/External Peerings by specifying requests in the format described below.\n\n Example command:\n\n $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24\n\n Which will produce:\n 1. VPC peering between vpc-01 and vpc-02\n 2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border\n 3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted\n 4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route\n from external permitted as well any route that belongs to 22.22.22.0/24\n\n VPC Peerings:\n\n 1+2 -- VPC peering between vpc-01 and vpc-02\n demo-1+demo-2 -- VPC peering between demo-1 and demo-2\n 1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present\n 1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border\n 1+2:remote=border -- same as above\n\n External Peerings:\n\n 1~as5835 -- external peering for vpc-01 with External as5835\n 1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing\n default subnet and any route from external\n 1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and\n default route from external permitted\n 1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details\n 1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above\n\nOPTIONS:\n --help, -h show help\n --name value, -n value name of the VM or HW to access\n --wait-switches-ready, --wait wait for switches to be ready before before and after configuring peerings (default: true)\n\n Global 
options:\n\n --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n --cache-dir DIR use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
hhfab vlab includes a utility to test connectivity between servers inside VLAB: the hhfab vlab test-connectivity sub-command.
NAME:\n hhfab vlab test-connectivity - test connectivity between all servers\n\nUSAGE:\n hhfab vlab test-connectivity [command options]\n\nOPTIONS:\n --curls value number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)\n --help, -h show help\n --iperfs value seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)\n --iperfs-speed value minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)\n --name value, -n value name of the VM or HW to access\n --pings value number of pings to send between each pair of servers (0 to disable) (default: 5)\n --wait-switches-ready, --wait wait for switches to be ready before testing connectivity (default: true)\n\n Global options:\n\n --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]\n --cache-dir DIR use cache dir DIR for caching downloaded files (default: \"/home/ubuntu/.hhfab-cache\") [$HHFAB_CACHE_DIR]\n --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]\n --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: \"/home/ubuntu\") [$HHFAB_WORK_DIR]\n
"},{"location":"vlab/demo/#manual-vpc-creation","title":"Manual VPC creation","text":""},{"location":"vlab/demo/#creating-and-attaching-vpcs","title":"Creating and attaching VPCs","text":"
You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the Control Node, or from outside the cluster using the kubeconfig. For example, run the following commands to create 2 VPCs with a single subnet each and a DHCP server enabled with a defined start of its IP address range, and to attach them to some of the test servers:
The VPC subnet must belong to an IPv4Namespace; the default one in the VLAB is 10.0.0.0/16:
core@control-1 ~ $ kubectl get ipns\nNAME SUBNETS AGE\ndefault [\"10.0.0.0/16\"] 5h14m\n
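The exact CLI invocations are not shown above; as an assumed-equivalent declarative form (the attachment and connection names below are hypothetical and must match a Connection from your wiring diagram), the first VPC and its attachment could look like:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPC
metadata:
  name: vpc-1
  namespace: default
spec:
  subnets:
    default:
      subnet: 10.0.1.0/24     # must come from the default IPv4Namespace (10.0.0.0/16)
      vlan: 1001
      dhcp:
        enable: true
        range:
          start: 10.0.1.10    # optional start of the DHCP range
---
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCAttachment
metadata:
  name: vpc-1--server-01      # hypothetical name
  namespace: default
spec:
  connection: server-01--mclag--leaf-01--leaf-02  # hypothetical connection name
  subnet: vpc-1/default
```

Applying such manifests with kubectl achieves the same result as the corresponding Fabric CLI commands.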
After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested configuration was applied to the switches:
In this example, the values in the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration has been applied.
"},{"location":"vlab/demo/#setting-up-networking-on-test-servers","title":"Setting up networking on test servers","text":"
You can use hhfab vlab ssh on the host to SSH into the test servers and configure networking there. For example, server-01 (MCLAG-attached to both leaf-01 and leaf-02) needs a bond with a VLAN on top of it, while server-05 (single-homed, unbundled, attached to leaf-03) needs just a VLAN; both will then get an IP address from the DHCP server. You can use the ip command to configure networking on the servers, or use hhnet, a small helper pre-installed by Fabricator on the test servers.
For server-01:
core@server-01 ~ $ hhnet cleanup\ncore@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2\n10.0.1.10/24\ncore@server-01 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02\n6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n valid_lft forever preferred_lft forever\n7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff\n inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001\n valid_lft 86396sec preferred_lft 86396sec\n inet6 fe80::45a:e8ff:fe38:3bea/64 scope link\n valid_lft forever preferred_lft forever\n
And for server-02:
core@server-02 ~ $ hhnet cleanup\ncore@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2\n10.0.2.10/24\ncore@server-02 ~ $ ip a\n...\n3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01\n4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000\n link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02\n8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n valid_lft forever preferred_lft forever\n9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000\n link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff\n inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002\n valid_lft 86185sec preferred_lft 86185sec\n inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link\n valid_lft forever preferred_lft forever\n
"},{"location":"vlab/demo/#testing-connectivity-before-peering","title":"Testing connectivity before peering","text":"
You can test connectivity between the servers before peering the switches using the ping command:
core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms\n
core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\nFrom 10.0.2.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.2.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
"},{"location":"vlab/demo/#peering-vpcs-and-testing-connectivity","title":"Peering VPCs and testing connectivity","text":"
To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:
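The peering created here corresponds to a VPCPeering object that permits all subnets of both VPCs, as described in the User Guide:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: VPCPeering
metadata:
  name: vpc-1--vpc-2
  namespace: default
spec:
  permit:
    - vpc-1: {}   # empty selectors mean all subnets of both VPCs
      vpc-2: {}   # can communicate with each other
```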
Make sure to wait until the peering is applied to the switches using the kubectl get agents command. After that, you can test connectivity between the servers again:
core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\n64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms\n64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms\n64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms\n
core@server-02 ~ $ ping 10.0.1.10\nPING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.\n64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms\n64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms\n64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms\n^C\n--- 10.0.1.10 ping statistics ---\n3 packets transmitted, 3 received, 0% packet loss, time 2004ms\nrtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms\n
If you delete the VPC peering with kubectl delete applied to the relevant object and wait for the agent to apply the configuration on the switches, you can observe that connectivity is lost again:
core@server-01 ~ $ ping 10.0.2.10\nPING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.\nFrom 10.0.1.1 icmp_seq=1 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=2 Destination Net Unreachable\nFrom 10.0.1.1 icmp_seq=3 Destination Net Unreachable\n^C\n--- 10.0.2.10 ping statistics ---\n3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms\n
You can see duplicate packets in the output of the ping command between some of the servers. This is expected behavior, caused by limitations of the VLAB environment.
core@server-01 ~ $ ping 10.0.5.10\nPING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms\n64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms\n64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms\n64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)\n^C\n--- 10.0.5.10 ping statistics ---\n3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms\nrtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms\n
"},{"location":"vlab/demo/#using-vpcs-with-overlapping-subnets","title":"Using VPCs with overlapping subnets","text":"
First, create a second IPv4Namespace with the same subnet as the default one:
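A sketch of such a namespace, based on the IPv4Namespace example earlier (the name ipns-2 is hypothetical), reusing the same prefix as the default namespace:

```yaml
apiVersion: vpc.githedgehog.com/v1beta1
kind: IPv4Namespace
metadata:
  name: ipns-2        # hypothetical name for the second namespace
  namespace: default
spec:
  subnets:
    - 10.0.0.0/16     # deliberately the same range as the default IPv4Namespace
```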
Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs). Now you can create vpc-3 with the same subnet as vpc-1 (but in a different IPv4Namespace) and attach it to server-03:
At that point, you can set up networking on server-03 the same way you did for server-01 and server-02 in the previous section. Once networking is configured, server-01 and server-03 have IP addresses from the same subnet.
It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's a great way to try out Fabric and learn about its look and feel, API, and capabilities. It's not suitable for any data plane or performance testing, or for production use.
In the VLAB all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery, boot and installation process like on real hardware.
The hhfab CLI provides a special command vlab to manage the virtual labs. It allows you to run sets of virtual machines to simulate the Fabric infrastructure including control node, switches, test servers and it automatically runs the installer to get Fabric up and running.
You can find more information about getting hhfab in the download section.
Currently, it's only tested on Ubuntu 22.04 LTS, but should work on any Linux distribution with QEMU/KVM support and fairly up-to-date packages.
The following packages need to be installed: qemu-kvm and socat. Docker is also required, to log in to the OCI registry.
By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf. Optionally, you can choose to run the default Collapsed Core topology using flag --fabric-mode collapsed-core (or -m collapsed-core) which only consists of 2 switches.
You can calculate the system requirements based on the allocated resources to the VMs using the following table:
| Device | vCPU | RAM | Disk |
|---|---|---|---|
| Control Node | 6 | 6 GB | 100 GB |
| Test Server | 2 | 768 MB | 10 GB |
| Switch | 4 | 5 GB | 50 GB |
These numbers give approximately the following requirements for the default topologies:
Spine-Leaf: 38 vCPUs, 36352 MB, 410 GB disk
Collapsed Core: 22 vCPUs, 19456 MB, 240 GB disk
Usually, none of the VMs will reach 100% utilization of the allocated resources, but as a rule of thumb you should make sure that the total RAM and disk space allocated to all VMs is actually available on the host.
Hedgehog maintains a utility to install and configure VLAB, called hhfab.
You need a GitHub access token to download hhfab, please submit a ticket using the Hedgehog Support Portal. Once in possession of the credentials, use the provided username and token to log into the GitHub container registry:
First, initialize Fabricator by running hhfab init --dev. This command creates the fab.yaml file, which is the main configuration file for the fabric. This command supports several customization options that are listed in the output of hhfab init --help.
By default, hhfab init creates 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate the preceding topology, run hhfab vlab gen. You can also configure the number of spines, leaves, connections, and so on. For example, the --spines-count and --mclag-leafs-count flags allow you to set the number of spines and MCLAG leaves, respectively. For the complete list of options, run hhfab vlab gen -h.
You can jump to the instructions to start VLAB, or see the next section for customizing the topology."},{"location":"vlab/running/#collapsed-core","title":"Collapsed Core","text":"
If a Collapsed Core topology is desired, after the hhfab init --dev step, edit the resulting fab.yaml file and change the mode: spine-leaf to mode: collapsed-core:
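The exact layout of fab.yaml depends on the Fabricator version; assuming the fabric settings live under spec.config.fabric (an assumption here, not confirmed by this page), the edit looks like:

```yaml
# fab.yaml (fragment; the surrounding structure is an assumption)
spec:
  config:
    fabric:
      mode: collapsed-core   # changed from: mode: spine-leaf
```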
Additionally, you can pass extra Fabric configuration items using flags on init command or by passing a configuration file. For more information, refer to the Fabric Configuration section.
Once you have initialized the VLAB, download the artifacts and build the installer using hhfab build. This command automatically downloads all required artifacts from the OCI registry and builds the installer and all other prerequisites for running the VLAB.
"},{"location":"vlab/running/#build-the-installer-and-start-vlab","title":"Build the Installer and Start VLAB","text":"
To build and start the virtual machines, use hhfab vlab up. For successive runs, use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. hhfab vlab up runs in the foreground and does not return, which allows you to stop all VLAB VMs by simply pressing Ctrl + C.
ubuntu@docs:~$ hhfab vlab up\n11:48:22 INF Hedgehog Fabricator version=v0.30.0\n11:48:22 INF Wiring hydrated successfully mode=if-not-present\n11:48:22 INF VLAB config created file=vlab/config.yaml\n11:48:22 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:22 INF Building installer control=control-1\n11:48:22 INF Adding recipe bin to installer control=control-1\n11:48:24 INF Adding k3s and tools to installer control=control-1\n11:48:25 INF Adding zot to installer control=control-1\n11:48:25 INF Adding cert-manager to installer control=control-1\n11:48:26 INF Adding config and included wiring to installer control=control-1\n11:48:26 INF Adding airgap artifacts to installer control=control-1\n11:48:36 INF Archiving installer path=/home/ubuntu/result/control-1-install.tgz control=control-1\n11:48:45 INF Creating ignition path=/home/ubuntu/result/control-1-install.ign control=control-1\n11:48:46 INF Taps and bridge are ready count=8\n11:48:46 INF Downloader cache=/home/ubuntu/.hhfab-cache/v1 repo=ghcr.io prefix=githedgehog\n11:48:46 INF Preparing new vm=control-1 type=control\n11:48:51 INF Preparing new vm=server-01 type=server\n11:48:52 INF Preparing new vm=server-02 type=server\n11:48:54 INF Preparing new vm=server-03 type=server\n11:48:55 INF Preparing new vm=server-04 type=server\n11:48:57 INF Preparing new vm=server-05 type=server\n11:48:58 INF Preparing new vm=server-06 type=server\n11:49:00 INF Preparing new vm=server-07 type=server\n11:49:01 INF Preparing new vm=server-08 type=server\n11:49:03 INF Preparing new vm=server-09 type=server\n11:49:04 INF Preparing new vm=server-10 type=server\n11:49:05 INF Preparing new vm=leaf-01 type=switch\n11:49:06 INF Preparing new vm=leaf-02 type=switch\n11:49:06 INF Preparing new vm=leaf-03 type=switch\n11:49:06 INF Preparing new vm=leaf-04 type=switch\n11:49:06 INF Preparing new vm=leaf-05 type=switch\n11:49:06 INF Preparing new vm=spine-01 type=switch\n11:49:06 INF Preparing new 
vm=spine-02 type=switch\n11:49:06 INF Starting VMs count=18 cpu=\"54 vCPUs\" ram=\"49664 MB\" disk=\"550 GB\"\n11:49:59 INF Uploading control install vm=control-1 type=control\n11:53:39 INF Running control install vm=control-1 type=control\n11:53:40 INF control-install: 01:53:39 INF Hedgehog Fabricator Recipe version=v0.30.0 vm=control-1\n11:53:40 INF control-install: 01:53:39 INF Running control node installation vm=control-1\n12:00:32 INF control-install: 02:00:31 INF Control node installation complete vm=control-1\n12:00:32 INF Control node is ready vm=control-1 type=control\n12:00:32 INF All VMs are ready\n
The message INF Control node is ready vm=control-1 type=control in the installer's output means that the installer has finished. After this line has been displayed, you can get into the control node and other VMs to watch the Fabric coming up and switches getting provisioned. See Accessing the VLAB."},{"location":"vlab/running/#enable-outside-connectivity-from-vlab-vms","title":"Enable Outside connectivity from VLAB VMs","text":"
By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can enable connectivity using hhfab vlab up --restrict-servers=false to allow the test servers to access the Internet and the host. When connectivity is enabled, the VMs get a default route pointing to the host, which means that in case of VPC peering you need to configure the test server VMs to route traffic through the VPC attachment instead (either as the default route or just for specific subnets).
"},{"location":"vlab/running/#accessing-the-vlab","title":"Accessing the VLAB","text":"
The hhfab vlab command provides ssh and serial subcommands to access the VMs. Use ssh to get into the control node and test servers after the VMs have started. Use serial to get into the switch VMs while they are provisioning and installing the software; after the switches are installed, you can use ssh to get into them as well.
You can select the device you want to access interactively, or pass its name using the --vm flag.
ubuntu@docs:~$ hhfab vlab ssh\nUse the arrow keys to navigate: \u2193 \u2191 \u2192 \u2190 and / toggles search\nSSH to VM:\n \ud83e\udd94 control-1\n server-01\n server-02\n server-03\n server-04\n server-05\n server-06\n leaf-01\n leaf-02\n leaf-03\n spine-01\n spine-02\n\n----------- VM Details ------------\nID: 0\nName: control-1\nReady: true\nBasedir: .hhfab/vlab-vms/control-1\n
Fabricator creates default users and keys for you to login into the control node and test servers as well as for the SONiC Virtual Switches.
The default user with password-less sudo for the control node and test servers is core with password HHFab.Admin!. The admin user with full access and password-less sudo for the switches is admin with password HHFab.Admin!. The read-only, non-sudo user with access to the switch CLI is op with password HHFab.Op!.
"},{"location":"vlab/running/#use-kubectl-to-interact-with-the-fabric","title":"Use Kubectl to Interact with the Fabric","text":"
On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. To view information about the switches, run kubectl get agents -o wide. After the control node is available, it usually takes about 10-15 minutes for the switches to get installed.
After the switches are provisioned, the command returns something like this:
The Heartbeat column shows how long ago the switch sent its last heartbeat to the control node. The Applied column shows how long ago the switch applied its configuration. AppliedG shows the generation of the configuration applied, and CurrentG shows the generation of the configuration the switch is supposed to run. Different values for AppliedG and CurrentG mean that the switch is in the process of applying the configuration.
At that point Fabric is ready and you can use kubectl and kubectl fabric to manage the Fabric. You can find more about managing the Fabric in the Running Demo and User Guide sections.
"},{"location":"vlab/running/#getting-main-fabric-objects","title":"Getting main Fabric objects","text":"
You can list the main Fabric objects by running kubectl get on the control node. You can find more details about using the Fabric in the User Guide, Fabric API and Fabric CLI sections.
If VLAB is currently running, press Ctrl + C to stop it. To reset VLAB and start over, run hhfab init -f. This option forces the process to overwrite your existing configuration in fab.yaml.
hhfab vlab includes a utility to create VPCs in vlab. This utility is a hhfab vlab sub-command. hhfab vlab setup-vpcs.
+
NAME:
+ hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them
+
+USAGE:
+ hhfab vlab setup-vpcs [command options]
+
+OPTIONS:
+ --dns-servers value, --dns value [ --dns-servers value, --dns value ] DNS servers for VPCs advertised by DHCP
+ --force-clenup, -f start with removing all existing VPCs and VPCAttachments (default: false)
+ --help, -h show help
+ --interface-mtu value, --mtu value interface MTU for VPCs advertised by DHCP (default: 0)
+ --ipns value IPv4 namespace for VPCs (default: "default")
+ --name value, -n value name of the VM or HW to access
+ --servers-per-subnet value, --servers value number of servers per subnet (default: 1)
+ --subnets-per-vpc value, --subnets value number of subnets per VPC (default: 1)
+ --time-servers value, --ntp value [ --time-servers value, --ntp value ] Time servers for VPCs advertised by DHCP
+ --vlanns value VLAN namespace for VPCs (default: "default")
+ --wait-switches-ready, --wait wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)
+
+ Global options:
+
+ --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
+ --cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
+ --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
+ --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+
+
Setup Peering
+
hhfab vlab includes a utility to create VPC peerings in VLAB. This utility is a hhfab vlab sub-command. hhfab vlab setup-peerings.
+
NAME:
+ hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)
+
+USAGE:
+ Setup test scenario with VPC/External Peerings by specifying requests in the format described below.
+
+ Example command:
+
+ $ hhfab vlab setup-peerings 1+2 2+4:r=border 1~as5835 2~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24
+
+ Which will produce:
+ 1. VPC peering between vpc-01 and vpc-02
+ 2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border
+ 3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted
+ 4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route
+ from external permitted as well any route that belongs to 22.22.22.0/24
+
+ VPC Peerings:
+
+ 1+2 -- VPC peering between vpc-01 and vpc-02
+ demo-1+demo-2 -- VPC peering between demo-1 and demo-2
+ 1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present
+ 1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border
+ 1+2:remote=border -- same as above
+
+ External Peerings:
+
+ 1~as5835 -- external peering for vpc-01 with External as5835
+ 1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing
+ default subnet and any route from external
+ 1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and
+ default route from external permitted
+ 1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details
+ 1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above
+
+OPTIONS:
+ --help, -h show help
+ --name value, -n value name of the VM or HW to access
+ --wait-switches-ready, --wait wait for switches to be ready before and after configuring peerings (default: true)
+
+ Global options:
+
+ --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
+ --cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
+ --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
+ --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+
+
Test Connectivity
+
hhfab vlab includes a utility to test connectivity between servers inside VLAB, available as the sub-command hhfab vlab test-connectivity.
+
NAME:
+ hhfab vlab test-connectivity - test connectivity between all servers
+
+USAGE:
+ hhfab vlab test-connectivity [command options]
+
+OPTIONS:
+ --curls value number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)
+ --help, -h show help
+ --iperfs value seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)
+ --iperfs-speed value minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)
+ --name value, -n value name of the VM or HW to access
+ --pings value number of pings to send between each pair of servers (0 to disable) (default: 5)
+ --wait-switches-ready, --wait wait for switches to be ready before testing connectivity (default: true)
+
+ Global options:
+
+ --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
+ --cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
+ --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
+ --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
+
Manual VPC creation
Creating and attaching VPCs
You can create and attach VPCs to the VMs using the kubectl fabric vpc command on the Control Node, or from outside the
cluster using the kubeconfig. For example, run the following commands to create two VPCs with a single subnet each, a DHCP
server enabled with its optional IP address range start defined, and to attach them to some of the test servers:
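The commands themselves are elided on this page; a sketch of what they might look like follows. The flag names and connection names are assumptions based on the VLAB defaults (VLANs 1001/1002 and subnets 10.0.1.0/24, 10.0.2.0/24 seen later in this section), not confirmed here; check kubectl fabric vpc --help.

```shell
# Hypothetical sketch: create two VPCs, each with one subnet and DHCP enabled
# with an explicit range start. Flag names are assumptions.
kubectl fabric vpc create --name vpc-1 --subnet 10.0.1.0/24 --vlan 1001 --dhcp --dhcp-start 10.0.1.10
kubectl fabric vpc create --name vpc-2 --subnet 10.0.2.0/24 --vlan 1002 --dhcp --dhcp-start 10.0.2.10

# Attach each VPC's default subnet to a test server connection.
# Connection names are assumptions following the VLAB naming pattern.
kubectl fabric vpc attach --vpc-subnet vpc-1/default --connection server-01--mclag--leaf-01--leaf-02
kubectl fabric vpc attach --vpc-subnet vpc-2/default --connection server-02--mclag--leaf-01--leaf-02
```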
core@control-1 ~ $ kubectl get ipns
+NAME SUBNETS AGE
+default ["10.0.0.0/16"] 5h14m
After you have created the VPCs and VPCAttachments, you can check the status of the agents to make sure that the requested
configuration was applied to the switches:
In this example, the values in the APPLIEDG and CURRENTG columns are equal, which means that the requested configuration
has been applied.
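The check itself uses the agent listing command referenced later in this document (column layout may differ between releases):

```shell
# List switch agents and compare the APPLIEDG and CURRENTG columns;
# equal values mean the requested configuration has been applied.
kubectl get agents -o wide
```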
@@ -1540,275 +1642,173 @@
Setting up networking on test servers
Servers attached to a VPC will get an IP address from the DHCP server. You can use the ip command to configure networking on the servers or use
the little helper pre-installed by Fabricator on test servers, hhnet.
For server-01:
-
core@server-01 ~ $ hhnetcleanup
-core@server-01 ~ $ hhnetbond1001enp2s1enp2s2
-10.0.1.10/24
-core@server-01 ~ $ ipa
-...
-3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
- link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01
-4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
- link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02
-6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
- inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
- valid_lft forever preferred_lft forever
-7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
- inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001
- valid_lft 86396sec preferred_lft 86396sec
- inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
- valid_lft forever preferred_lft forever
+
core@server-01 ~ $ hhnet cleanup
+core@server-01 ~ $ hhnet bond 1001 enp2s1 enp2s2
+10.0.1.10/24
+core@server-01 ~ $ ip a
+...
+3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
+ link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:01
+4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
+ link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:01:02
+6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
+ link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
+ inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
+ valid_lft forever preferred_lft forever
+7: bond0.1001@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
+ link/ether 06:5a:e8:38:3b:ea brd ff:ff:ff:ff:ff:ff
+ inet 10.0.1.10/24 metric 1024 brd 10.0.1.255 scope global dynamic bond0.1001
+ valid_lft 86396sec preferred_lft 86396sec
+ inet6 fe80::45a:e8ff:fe38:3bea/64 scope link
+ valid_lft forever preferred_lft forever
And for server-02:
-
core@server-02 ~ $ hhnetcleanup
-core@server-02 ~ $ hhnetbond1002enp2s1enp2s2
-10.0.2.10/24
-core@server-02 ~ $ ipa
-...
-3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
- link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01
-4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
- link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02
-8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
- inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
- valid_lft forever preferred_lft forever
-9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
- link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
- inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002
- valid_lft 86185sec preferred_lft 86185sec
- inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
- valid_lft forever preferred_lft forever
+
core@server-02 ~ $ hhnet cleanup
+core@server-02 ~ $ hhnet bond 1002 enp2s1 enp2s2
+10.0.2.10/24
+core@server-02 ~ $ ip a
+...
+3: enp2s1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
+ link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:01
+4: enp2s2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
+ link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff permaddr 0c:20:12:fe:02:02
+8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
+ link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
+ inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
+ valid_lft forever preferred_lft forever
+9: bond0.1002@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
+ link/ether 5e:10:b1:f7:d0:4c brd ff:ff:ff:ff:ff:ff
+ inet 10.0.2.10/24 metric 1024 brd 10.0.2.255 scope global dynamic bond0.1002
+ valid_lft 86185sec preferred_lft 86185sec
+ inet6 fe80::5c10:b1ff:fef7:d04c/64 scope link
+ valid_lft forever preferred_lft forever
Testing connectivity before peering
You can test connectivity between the servers before peering the switches using the ping command:
-
core@server-01 ~ $ ping10.0.2.10
-PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
-From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
-From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
-From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
-^C
---- 10.0.2.10 ping statistics ---
-3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms
+
core@server-01 ~ $ ping 10.0.2.10
+PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
+From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
+From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
+From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
+^C
+--- 10.0.2.10 ping statistics ---
+3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2003ms
-
core@server-02 ~ $ ping10.0.1.10
-PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
-From 10.0.2.1 icmp_seq=1 Destination Net Unreachable
-From 10.0.2.1 icmp_seq=2 Destination Net Unreachable
-From 10.0.2.1 icmp_seq=3 Destination Net Unreachable
-^C
---- 10.0.1.10 ping statistics ---
-3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms
+
core@server-02 ~ $ ping 10.0.1.10
+PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
+From 10.0.2.1 icmp_seq=1 Destination Net Unreachable
+From 10.0.2.1 icmp_seq=2 Destination Net Unreachable
+From 10.0.2.1 icmp_seq=3 Destination Net Unreachable
+^C
+--- 10.0.1.10 ping statistics ---
+3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms
Peering VPCs and testing connectivity
To enable connectivity between the VPCs, peer them using kubectl fabric vpc peer:
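For reference, the peering command used here is:

```shell
# Peer vpc-1 and vpc-2; the Fabric creates a VPCPeering object named vpc-1--vpc-2.
kubectl fabric vpc peer --vpc vpc-1 --vpc vpc-2
```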
-
core@control-1 ~ $ kubectlfabricvpcpeer--vpcvpc-1--vpcvpc-2
-07:04:58 INF VPCPeering created name=vpc-1--vpc-2
+
Make sure to wait until the peering is applied to the switches using the kubectl get agents command. After that, you can
test connectivity between the servers again:
-
core@server-01 ~ $ ping10.0.2.10
-PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
-64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms
-64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms
-64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms
-^C
---- 10.0.2.10 ping statistics ---
-3 packets transmitted, 3 received, 0% packet loss, time 2004ms
-rtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms
+
core@server-01 ~ $ ping 10.0.2.10
+PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
+64 bytes from 10.0.2.10: icmp_seq=1 ttl=62 time=6.25 ms
+64 bytes from 10.0.2.10: icmp_seq=2 ttl=62 time=7.60 ms
+64 bytes from 10.0.2.10: icmp_seq=3 ttl=62 time=8.60 ms
+^C
+--- 10.0.2.10 ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 2004ms
+rtt min/avg/max/mdev = 6.245/7.481/8.601/0.965 ms
-
core@server-02 ~ $ ping10.0.1.10
-PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
-64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms
-64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms
-64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms
-^C
---- 10.0.1.10 ping statistics ---
-3 packets transmitted, 3 received, 0% packet loss, time 2004ms
-rtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms
+
core@server-02 ~ $ ping 10.0.1.10
+PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
+64 bytes from 10.0.1.10: icmp_seq=1 ttl=62 time=5.44 ms
+64 bytes from 10.0.1.10: icmp_seq=2 ttl=62 time=6.66 ms
+64 bytes from 10.0.1.10: icmp_seq=3 ttl=62 time=4.49 ms
+^C
+--- 10.0.1.10 ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 2004ms
+rtt min/avg/max/mdev = 4.489/5.529/6.656/0.886 ms
If you delete the VPC peering by running kubectl delete on the relevant object and wait for the agent to apply the
configuration on the switches, you can observe that connectivity is lost again:
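The deletion step might look like the following sketch. The object name vpc-1--vpc-2 comes from the peering created earlier; the exact resource name accepted by kubectl is an assumption:

```shell
# Hypothetical sketch: delete the VPCPeering object created by "kubectl fabric vpc peer".
kubectl delete vpcpeering vpc-1--vpc-2
```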
core@server-01 ~ $ ping10.0.2.10
-PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
-From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
-From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
-From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
-^C
---- 10.0.2.10 ping statistics ---
-3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms
+
core@server-01 ~ $ ping 10.0.2.10
+PING 10.0.2.10 (10.0.2.10) 56(84) bytes of data.
+From 10.0.1.1 icmp_seq=1 Destination Net Unreachable
+From 10.0.1.1 icmp_seq=2 Destination Net Unreachable
+From 10.0.1.1 icmp_seq=3 Destination Net Unreachable
+^C
+--- 10.0.2.10 ping statistics ---
+3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2004ms
You may see duplicate packets in the output of the ping command between some of the servers. This is expected
behavior, caused by limitations of the VLAB environment.
-
core@server-01 ~ $ ping10.0.5.10
-PING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.
-64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms
-64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)
-64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms
-64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)
-64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms
-64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)
-^C
---- 10.0.5.10 ping statistics ---
-3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
-rtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms
+
core@server-01 ~ $ ping 10.0.5.10
+PING 10.0.5.10 (10.0.5.10) 56(84) bytes of data.
+64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms
+64 bytes from 10.0.5.10: icmp_seq=1 ttl=62 time=9.58 ms (DUP!)
+64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms
+64 bytes from 10.0.5.10: icmp_seq=2 ttl=62 time=6.99 ms (DUP!)
+64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.59 ms
+64 bytes from 10.0.5.10: icmp_seq=3 ttl=62 time=9.60 ms (DUP!)
+^C
+--- 10.0.5.10 ping statistics ---
+3 packets transmitted, 3 received, +3 duplicates, 0% packet loss, time 2003ms
+rtt min/avg/max/mdev = 6.987/8.720/9.595/1.226 ms
-
Utility based VPC creation
-
Setup VPCs
-
hhfab vlab includes a utility to create VPCs in vlab. This utility is a hhfab vlab sub-command. hhfab vlab setup-vpcs.
-
NAME:
- hhfab vlab setup-vpcs - setup VPCs and VPCAttachments for all servers and configure networking on them
-
-USAGE:
- hhfab vlab setup-vpcs [command options]
-
-OPTIONS:
- --dns-servers value, --dns value [ --dns-servers value, --dns value ] DNS servers for VPCs advertised by DHCP
- --force-clenup, -f start with removing all existing VPCs and VPCAttachments (default: false)
- --help, -h show help
- --interface-mtu value, --mtu value interface MTU for VPCs advertised by DHCP (default: 0)
- --ipns value IPv4 namespace for VPCs (default: "default")
- --name value, -n value name of the VM or HW to access
- --servers-per-subnet value, --servers value number of servers per subnet (default: 1)
- --subnets-per-vpc value, --subnets value number of subnets per VPC (default: 1)
- --time-servers value, --ntp value [ --time-servers value, --ntp value ] Time servers for VPCs advertised by DHCP
- --vlanns value VLAN namespace for VPCs (default: "default")
- --wait-switches-ready, --wait wait for switches to be ready before and after configuring VPCs and VPCAttachments (default: true)
-
- Global options:
-
- --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
- --cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
- --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
- --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
-
-
Setup Peering
-
hhfab vlab includes a utility to create VPC peerings in VLAB. This utility is a hhfab vlab sub-command. hhfab vlab setup-peerings.
-
NAME:
- hhfab vlab setup-peerings - setup VPC and External Peerings per requests (remove all if empty)
-
-USAGE:
- Setup test scenario with VPC/External Peerings by specifying requests in the format described below.
-
- Example command:
-
- $ hhfabvlabsetup-peerings1+22+4:r=border1~as58352~as5835:subnets=sub1,sub2:prefixes=0.0.0.0/0,22.22.22.0/24
-
- Which will produce:
- 1. VPC peering between vpc-01 and vpc-02
- 2. Remote VPC peering between vpc-02 and vpc-04 on switch group named border
- 3. External peering for vpc-01 with External as5835 with default vpc subnet and any routes from external permitted
- 4. External peering for vpc-02 with External as5835 with subnets sub1 and sub2 exposed from vpc-02 and default route
- from external permitted as well any route that belongs to 22.22.22.0/24
-
- VPC Peerings:
-
- 1+2 -- VPC peering between vpc-01 and vpc-02
- demo-1+demo-2 -- VPC peering between demo-1 and demo-2
- 1+2:r -- remote VPC peering between vpc-01 and vpc-02 on switch group if only one switch group is present
- 1+2:r=border -- remote VPC peering between vpc-01 and vpc-02 on switch group named border
- 1+2:remote=border -- same as above
-
- External Peerings:
-
- 1~as5835 -- external peering for vpc-01 with External as5835
- 1~ -- external peering for vpc-1 with external if only one external is present for ipv4 namespace of vpc-01, allowing
- default subnet and any route from external
- 1~:subnets=default@prefixes=0.0.0.0/0 -- external peering for vpc-1 with auth external with default vpc subnet and
- default route from external permitted
- 1~as5835:subnets=default,other:prefixes=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same but with more details
- 1~as5835:s=default,other:p=0.0.0.0/0_le32_ge32,22.22.22.0/24 -- same as above
-
-OPTIONS:
- --help, -h show help
- --name value, -n value name of the VM or HW to access
- --wait-switches-ready, --wait wait for switches to be ready before before and after configuring peerings (default: true)
-
- Global options:
-
- --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
- --cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
- --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
- --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
-
-
Test Connectivity
-
hhfab vlab includes a utility to test connectivity between servers inside VLAB. This utility is a hhfab vlab sub-command. hhfab vlab test-connectivity.
-
NAME:
- hhfab vlab test-connectivity - test connectivity between all servers
-
-USAGE:
- hhfab vlab test-connectivity [command options]
-
-OPTIONS:
- --curls value number of curl tests to run for each server to test external connectivity (0 to disable) (default: 3)
- --help, -h show help
- --iperfs value seconds of iperf3 test to run between each pair of reachable servers (0 to disable) (default: 10)
- --iperfs-speed value minimum speed in Mbits/s for iperf3 test to consider successful (0 to not check speeds) (default: 7000)
- --name value, -n value name of the VM or HW to access
- --pings value number of pings to send between each pair of servers (0 to disable) (default: 5)
- --wait-switches-ready, --wait wait for switches to be ready before testing connectivity (default: true)
-
- Global options:
-
- --brief, -b brief output (only warn and error) (default: false) [$HHFAB_BRIEF]
- --cache-dir DIR use cache dir DIR for caching downloaded files (default: "/home/ubuntu/.hhfab-cache") [$HHFAB_CACHE_DIR]
- --verbose, -v verbose output (includes debug) (default: false) [$HHFAB_VERBOSE]
- --workdir PATH run as if hhfab was started in PATH instead of the current working directory (default: "/home/ubuntu") [$HHFAB_WORK_DIR]
-
Using VPCs with overlapping subnets
First, create a second IPv4Namespace with the same subnet as the default one:
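The manifest itself is elided here; a hedged sketch of what such an IPv4Namespace might look like follows. The apiVersion and field names are assumptions and should be verified against the Fabric API reference:

```shell
# Hypothetical sketch: a second IPv4Namespace reusing the default 10.0.0.0/16 subnet.
# apiVersion and field names are assumptions; verify against the Fabric API docs.
kubectl apply -f - <<EOF
apiVersion: vpc.githedgehog.com/v1alpha2
kind: IPv4Namespace
metadata:
  name: ipns-2
  namespace: default
spec:
  subnets:
    - 10.0.0.0/16
EOF
```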
Let's assume that vpc-1 already exists and is attached to server-01 (see Creating and attaching VPCs).
Now we can create vpc-3 with the same subnet as vpc-1 (but in the different IPv4Namespace) and attach it to the
server-03:
At that point, you can set up networking on server-03 the same way you did for server-01 and server-02 in
a previous section. Once you have configured networking, server-01 and
@@ -1819,7 +1819,7 @@
Using VPCs with overlapping subnets
Last update:
- October 24, 2024
+ October 31, 2024
Created:
diff --git a/beta-1/vlab/overview/index.html b/beta-1/vlab/overview/index.html
index 83b90e8e..c9623dd9 100644
--- a/beta-1/vlab/overview/index.html
+++ b/beta-1/vlab/overview/index.html
@@ -26,7 +26,7 @@
- Overview - Open Network Fabric
+ VLAB Overview - Open Network Fabric
@@ -102,7 +102,7 @@
It's possible to run Hedgehog Fabric in a fully virtual environment using QEMU/KVM and SONiC Virtual Switch (VS). It's
a great way to try out Fabric and learn about its look and feel, API, and capabilities. It's not suitable for any
data plane or performance testing, or for production use.
In the VLAB all switches start as empty VMs with only the ONIE image on them, and they go through the whole discovery,
boot and installation process like on real hardware.
-
Overview
+
HHFAB
The hhfab CLI provides a special command vlab to manage the virtual labs. It allows you to run sets of virtual
machines to simulate the Fabric infrastructure including control node, switches, test servers and it automatically runs
the installer to get Fabric up and running.
@@ -1296,7 +1364,7 @@
Overview
System Requirements
Currently, it's only tested on Ubuntu 22.04 LTS, but should work on any Linux distribution with QEMU/KVM support and fairly
up-to-date packages.
-
The following packages needs to be installed: qemu-kvm swtpm-tools tpm2-tools socat. Docker is also required, to login
+
The following packages need to be installed: qemu-kvm and socat. Docker is also required, to log in
to the OCI registry.
By default, the VLAB topology is Spine-Leaf with 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf. Optionally, you can
choose to run the default Collapsed Core topology using flag --fabric-mode collapsed-core (or -m collapsed-core)
@@ -1340,13 +1408,15 @@
System Requirements
Usually, none of the VMs will reach 100% utilization of the allocated resources, but as a rule of thumb you should make
sure that you have at least allocated RAM and disk space for all VMs.
NVMe SSD for VM disks is highly recommended.
-
Installing prerequisites
-
On Ubuntu 22.04 LTS you can install all required packages using the following commands:
+
Installing Prerequisites
+
To run VLAB, your system needs docker, qemu, kvm, and hhfab. On Ubuntu 22.04 LTS you can install all required packages using the following commands:
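The install commands are not shown on this page; based on the package list above, they presumably look like the following. Docker's install method here is an assumption (its convenience script), not confirmed by this page:

```shell
# Install the QEMU/KVM and socat packages listed above.
sudo apt update
sudo apt install -y qemu-kvm socat

# Docker is also required; install it by your preferred method,
# e.g. Docker's convenience script (assumption).
curl -fsSL https://get.docker.com | sudo bash
```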
For convenience Hedgehog provides a script to install oras:
+
curl -fsSL https://i.hhdev.io/oras | bash
+
+
Hhfab
+
Hedgehog maintains a utility to install and configure VLAB, called hhfab.
+
You need a GitHub access token to download hhfab, please submit a ticket using the Hedgehog Support Portal. Once in possession of the credentials, use the provided username and token to log into the GitHub container registry:
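The login command itself is elided here; with Docker it would typically be the following (the registry host is assumed to be GitHub's ghcr.io):

```shell
# Log in to the GitHub container registry with the provided credentials;
# enter the access token when prompted for a password.
docker login ghcr.io --username <provided-username>
```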
Make sure to follow the prerequisites and check system requirements in the VLAB Overview section
before running VLAB.
Initialize VLAB
-
First, initialize Fabricator by running hhfab init --dev. This command supports several customization options that are listed in the output of hhfab init --help.
+
First, initialize Fabricator by running hhfab init --dev. This command creates the fab.yaml file, which is the main configuration file for the fabric. It supports several customization options that are listed in the output of hhfab init --help.
11:26:52 INF Include wiring files (.yaml) or adjust imported ones dir=include
VLAB Topology
-
By default, the command creates 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links as well as 2 loopbacks per leaf for implementing VPC loopback workaround. To generate the preceding topology, hhfab vlab gen. You can also configure the number of spines, leafs, connections, and so on. For example, flags --spines-count and --mclag-leafs-count allow you to set the number of spines and MCLAG leaves, respectively. For complete options, hhfab vlab gen -h.
-
ubuntu@docs:~$ hhfabvlabgen
+
By default, hhfab init creates 2 spines, 2 MCLAG leaves and 1 non-MCLAG leaf with 2 fabric connections (between each spine and leaf), 2 MCLAG peer links and 2 MCLAG session links, as well as 2 loopbacks per leaf for implementing the VPC loopback workaround. To generate the preceding topology, run hhfab vlab gen. You can also configure the number of spines, leaves, connections, and so on. For example, the flags --spines-count and --mclag-leafs-count allow you to set the number of spines and MCLAG leaves, respectively. For complete options, run hhfab vlab gen -h.
+You can jump to the instructions to start VLAB, or see the next section for customizing the topology.
Collapsed Core
-
If a Collapsed Core topology is desired, after the hhfab init --dev step, edit the resulting fab.yaml file and change the mode: spine-leaf to mode: collapsed-core.
-Or if you want to run Collapsed Core topology with 2 MCLAG switches:
+
If a Collapsed Core topology is desired, after the hhfab init --dev step, edit the resulting fab.yaml file and change the mode: spine-leaf to mode: collapsed-core:
automatically downloads all required artifacts from the OCI registry and builds the installer and all other
prerequisites for running the VLAB.
Build the Installer and Start VLAB
-
In VLAB the build and run step are combined into one command for simplicity, hhfab vlab up. For successive runs use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. This command does not return, it runs as long as the VLAB is up. This is done so that shutdown is a simple ctrl + c.
+
To build and start the virtual machines, use hhfab vlab up. For successive runs, use the --kill-stale flag to ensure that any virtual machines from a previous run are gone. hhfab vlab up runs in the foreground and does not return, which allows you to stop all VLAB VMs by simply pressing Ctrl + C.
Build the Installer and Start VLAB
The message INF Control node is ready vm=control-1 type=control in the installer's output means that the installer has finished. After this line
has been displayed, you can get into the control node and other VMs to watch the Fabric coming up and the switches getting
provisioned. See Accessing the VLAB.
-
Configuring VLAB VMs
+
Enable Outside connectivity from VLAB VMs
By default, all test server VMs are isolated and have no connectivity to the host or the Internet. You can
enable connectivity using hhfab vlab up --restrict-servers=false to allow the test servers to access the Internet and
the host. When you enable connectivity, VMs get a default route pointing to the host, which means that in case of
VPC peering you need to configure the test server VMs to use the VPC attachment as a default route (or as routes for some specific
subnets).
-
Default credentials
-
Fabricator creates default users and keys for you to login into the control node and test servers as well as for the
-SONiC Virtual Switches.
-
Default user with passwordless sudo for the control node and test servers is core with password HHFab.Admin!.
-Admin user with full access and passwordless sudo for the switches is admin with password HHFab.Admin!.
-Read-only, non-sudo user with access only to the switch CLI for the switches is op with password HHFab.Op!.
Accessing the VLAB
The hhfab vlab command provides ssh and serial subcommands to access the VMs. You can use ssh to get into the
control node and test servers after the VMs are started. You can use serial to get into the switch VMs while they are
@@ -1530,8 +1562,15 @@
Accessing the VLAB
Ready: trueBasedir: .hhfab/vlab-vms/control-1
-
On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. You can find information
-about the switches provisioning by running kubectl get agents -o wide. It usually takes about 10-15 minutes for the
+
Default credentials
+
Fabricator creates default users and keys for you to login into the control node and test servers as well as for the
+SONiC Virtual Switches.
+
The default user with password-less sudo for the control node and test servers is core with password HHFab.Admin!.
+The admin user with full access and password-less sudo for the switches is admin with password HHFab.Admin!.
+The read-only, non-sudo user with access to the switch CLI is op with password HHFab.Op!.
+
Use Kubectl to Interact with the Fabric
+
On the control node you have access to kubectl, Fabric CLI, and k9s to manage the Fabric. To view information
+about the switches, run kubectl get agents -o wide. After the control node is available, it usually takes about 10-15 minutes for the
switches to get installed.
After the switches are provisioned, the command returns something like this:
Differing AppliedG and CurrentG values mean that the switch is in the process of applying the configuration.
At that point Fabric is ready and you can use kubectl and kubectl fabric to manage the Fabric. You can find more
about managing the Fabric in the Running Demo and User Guide sections.
-
Getting main Fabric objects
+
Getting main Fabric objects
You can list the main Fabric objects by running kubectl get on the control node. You can find more details about
using the Fabric in the User Guide, Fabric API and
Fabric CLI sections.
@@ -1602,7 +1641,7 @@
Getting main Fabric objects
default 6h12m
Reset VLAB
-
To reset VLAB and start over directory and run hhfab init -f which will force overwrite your existing configuration, fab.yaml.
+
If VLAB is currently running, press Ctrl + C to stop it. To reset VLAB and start over, run hhfab init -f. This option forces hhfab to overwrite your existing configuration in fab.yaml.