This documentation describes how to create a working EdgeNet cluster in your environment. You can refer to this tutorial for deploying EdgeNet software to a local, sandbox, or production cluster.
To deploy EdgeNet to a Kubernetes cluster, you need an existing cluster and a kubectl installation configured to access it.
There are many alternatives for creating a cluster for test purposes, such as minikube; however, this documentation does not cover cluster creation.
You will install cert-manager and clone the EdgeNet repository. Then you will install the desired features.
EdgeNet requires cert-manager to work. Please deploy cert-manager before proceeding.
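For example, cert-manager can be deployed with its official manifest; the version below is only an example, so check the cert-manager releases page for the current one.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml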
You need to clone the official EdgeNet repository to your local filesystem. Use the cd command to move into an empty directory of your choice, then use the following command to clone the EdgeNet repository and enter it.
git clone https://github.com/EdgeNet-project/edgenet.git && cd ./edgenet
After cloning, you may want to switch to the latest release branch. You can find EdgeNet's releases here. To switch to a release branch, you can use the command below.
git checkout release-1.0
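If you are unsure which release branches exist, you can list them from your clone; release branches follow the release-X.Y naming shown above.
git branch -r | grep release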
A handful of CRDs, controllers, and additional objects are required for EdgeNet to function. All of these declarations are organized in yaml files under build/yamls/kubernetes/. The files represent the different feature packs of EdgeNet. They are briefly explained below:
- multi-tenancy.yaml contains the CRDs, controllers, etc. for enabling single-instance native multi-tenancy. Please refer to the multi-tenancy documentation for more information.
- multi-provider.yaml contains the CRDs, controllers, etc. that give the cluster multi-provider functionality. Please refer to the multi-provider section for more information.
- notifier.yaml contains the notification manager. It is used for sending notifications about events in the cluster such as tenant requests, role requests, cluster role requests, etc. These notifications are sent via mail and/or Slack.
- location-based-node-selection.yaml contains a set of features that allow deployments to be made using the geographical information of nodes. Please refer to the location-based node selection section for additional information.
- federation-manager.yaml contains the CRDs, controllers, etc. for federation features intended for manager clusters. Note that manager clusters have to track caches, etc. that workload clusters do not, so it would be redundant for workload clusters to have these definitions. Please refer to the federation section for more information.
- federation-workload.yaml contains the CRDs, controllers, etc. for federation features intended for workload clusters.
- all-in-one.yaml contains the combined definitions for multi-tenancy.yaml, multi-provider.yaml, and location-based-node-selection.yaml. Federation and notifier definitions are not included. We recommend installing features separately.
EdgeNet is designed to be portable, so if you only require certain features, it is possible to install EdgeNet without performing a full install. Below you can find the different sets of features:
- Install only the multi-tenancy features
- Install only the multi-provider features
- Install only the location-based-node-selection features
- Install only the notifier
- Install the federation features
The yaml file for multi-tenancy is located in build/yamls/kubernetes/multi-tenancy.yaml.
Since it does not contain any configuration, you can directly apply it and start using its features. Run the following kubectl command to apply the yaml file.
kubectl apply -f build/yamls/kubernetes/multi-tenancy.yaml
Wait until the custom controllers are created and running. You can then test multi-tenancy by registering a tenant.
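Before registering a tenant, you can check that the custom resources and controllers are in place with the commands below (a quick sanity check; the edgenet namespace is an assumption here, so match it to the namespace used in the applied yaml).
# List the API resources registered by EdgeNet
kubectl api-resources | grep -i edge
# Wait for the controller deployments to become available (namespace assumed to be edgenet)
kubectl wait --for=condition=Available deployment --all -n edgenet --timeout=300s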
The yaml file for multi-provider is located in build/yamls/kubernetes/multi-provider.yaml.
Unlike multi-tenancy, the multi-provider features need some configuration in order to work. You can edit the yaml file. Note that the API keys, tokens, etc. of the external services that EdgeNet uses need to be encoded in base64. You can use the following command to encode the secrets (the -n flag prevents a trailing newline from being encoded).
echo "<token-or-secret>" | base64
The following fields in the multi-provider.yaml
file can be configured:
# Used for the DNS service; not strictly required for EdgeNet to work
namecheap.yaml: |
# Provide the namecheap credentials for DNS records.
# app: "<App name>"
# apiUser : "<API user>"
# apiToken : "<API Token>"
# username : "<Username>"
# Used for the node-labeler; if left empty, the node-labeler cannot label nodes by their geoIPs
maxmind-account-id: "<MaxMind GeoIP2 precision API account id>"
maxmind-license-key: "<MaxMind GeoIP2 precision API license key>"
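For example, following the base64 requirement described above, you can encode each MaxMind value and paste the result into the file (the account id and license key below are placeholders, not real credentials).
echo -n "123456" | base64          # prints MTIzNDU2
echo -n "my-license-key" | base64  # prints bXktbGljZW5zZS1rZXk=
# maxmind-account-id: "MTIzNDU2"
# maxmind-license-key: "bXktbGljZW5zZS1rZXk="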
After you edit the file, you can use the following command to apply the CRDs and deploy the custom controllers.
kubectl apply -f ./build/yamls/kubernetes/multi-provider.yaml
Wait until the custom controllers are created and running.
The yaml file for location-based-node-selection is located in build/yamls/kubernetes/location-based-node-selection.yaml.
Unlike multi-tenancy, the location-based-node-selection features need some configuration in order to work. You can edit the yaml file. Note that the API keys, tokens, etc. of the external services that EdgeNet uses need to be encoded in base64. You can use the following command to encode the secrets.
echo "<token-or-secret>" | base64
The following fields in the location-based-node-selection.yaml
file can be configured:
# Used for the node-labeler; if left empty, the node-labeler cannot label nodes by their geoIPs
maxmind-account-id: "<MaxMind GeoIP2 precision API account id>"
maxmind-license-key: "<MaxMind GeoIP2 precision API license key>"
After you edit the file, you can use the following command to apply the CRDs and deploy the custom controllers.
kubectl apply -f ./build/yamls/kubernetes/location-based-node-selection.yaml
Wait until the custom controllers are created and running.
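Once the node-labeler is running, you can check that geographical labels appear on your nodes; the exact label keys depend on the node-labeler, so inspect the full label set.
kubectl get nodes --show-labels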
The yaml file for the notifier is located in build/yamls/kubernetes/notifier.yaml.
Unlike multi-tenancy, the notifier needs some configuration in order to work. You can edit the yaml file. Note that the API keys, tokens, etc. of the external services that EdgeNet uses need to be encoded in base64.
The notifier can handle email, Slack, and console notifications. You need to create a Slack bot for Slack notifications and an email client for emails. Additionally, the EdgeNet Console is used with the EdgeNet testbed. You can leave the fields empty if you don't plan to use those features.
You can use the following command to encode the secrets.
echo -n "<token-or-secret>" | base64
The following fields in the notifier.yaml
file can be configured:
headnode.yaml: |
# DNS should contain the root domain consisting of the domain name and top-level domain.
# dns: "<Root domain>"
# ip: "<IP address of the control plane node>"
smtp.yaml: |
# SMTP settings for the mailer service. The 'to' field indicates the email address that receives the emails
# concerning cluster administration.
# host: ""
# port: ""
# from: ""
# username : ""
# password : ""
# to: ""
console.yaml: |
# URL to the console if you deploy on your cluster. For example, https://console.edge-net.org.
# url: "<URL of the console>"
# Below is another secret object for Slack
data:
  token: "<auth token>"
  channelid: "<channel ID>"
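As with the other credentials, the Slack values must be base64-encoded before being placed in the secret. Alternatively, kubectl can generate the encoded manifest for you; the secret name and namespace below are placeholders, so match them to the ones used in notifier.yaml.
kubectl create secret generic slack-secret \
  --from-literal=token="<auth token>" \
  --from-literal=channelid="<channel ID>" \
  --namespace edgenet --dry-run=client -o yaml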
After you edit the file, you can use the following command to apply the CRDs and deploy the custom controllers.
kubectl apply -f ./build/yamls/kubernetes/notifier.yaml
Wait until the custom controllers are created and running.
The federation features are actively being worked on and are experimental. They are built on top of multi-tenancy, so before installing them make sure you have installed the multi-tenancy features in your Kubernetes cluster.
You can have two types of clusters: manager clusters and workload clusters. Manager clusters federate multiple workload clusters. They can send and receive workloads in the form of selective deployments.
The federation framework can be installed without any configuration. However, a manager cluster should have federation-manager.yaml installed, while a workload cluster should have federation-workload.yaml installed.
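The apply commands below use kubectl contexts to address each cluster. You can list the contexts configured on your machine with the command below and substitute their names for <MANAGER> and <WORKLOAD>.
kubectl config get-contexts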
You can use the commands below to deploy the CRDs, custom controllers, etc. to the clusters.
kubectl apply -f ./build/yamls/kubernetes/federation-manager.yaml --context <MANAGER>
kubectl apply -f ./build/yamls/kubernetes/federation-workload.yaml --context <WORKLOAD>
Wait until the custom controllers are created and running.
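You can verify that the federation resources were registered in each cluster using the same contexts as above; this assumes the federation CRDs expose "federation" in their API group, otherwise list all api-resources and look for the EdgeNet groups.
kubectl api-resources --context <MANAGER> | grep -i federation
kubectl api-resources --context <WORKLOAD> | grep -i federation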
After installing the federation extensions on your manager and workload clusters, we recommend installing fedmanctl for automated federation. Additionally, you can refer to the federation tutorial.