API Documentation
In this section you will find links to the API documentation of metal-stack components.
Our public-facing APIs are built on Swagger, which allows you to generate API clients in all sorts of programming languages.
For the metal-api we officially support the following client libraries:
This document describes the way we want to contribute code to the projects of metal-stack, which are hosted on github.com/metal-stack.
The document is meant to be understood as a general guideline for contributions, not as a burden to be placed on a developer. Use your best judgment when contributing code. Try to be as clean and precise as possible when writing code, and try to make your code as maintainable and understandable as possible for other people.
Even if it should go without saying, we maintain an open culture of discussion in which everybody is welcome to participate. We treat every contribution with respect and objectivity, with the general aim of writing quality software.
If you want, feel free to propose changes to this document in a pull request.
Open a GitHub issue in the project to which you would like to contribute. Within the issue, your idea can be discussed. It is also possible to directly create a pull request when the set of changes is relatively small.
The process described here has several goals:
This section contains language-agnostic topics that all metal-stack projects are trying to follow.
The code base is owned by the entire team, and every member is allowed to contribute changes to any of the projects. This is known as collective code ownership[1].
Naturally, there are people in a project who already have experience with the sources. These are defined directly in the repository's CODEOWNERS file. If you want to merge changes into the master branch, it is advisable to include the code owners in the process of discussion and merging.
One major ambition of metal-stack is to follow the idea of microservices. In this way, we want to ensure that we can
We are generally open to writing code in any language that best fits the function of the software. However, we encourage golang as the main language of metal-stack, as we think that development is faster when we do not establish too many different languages in our architecture. The reason for this is that we strive for consistent behavior across the microservices, similar to what has been described for the Twelve-Factor App (see 12 Factor). We help enforce unified behavior by allowing a small layer of shared code for every programming language. We will refer to this shared code as "libraries" for the rest of this document.
Artifacts are always produced by a CI process (GitHub Actions).
Docker images are published on the GitHub Container Registry of the metal-stack organization.
Binary artifacts or OS images can be uploaded to images.metal-stack.io if necessary.
When building Docker images, please consider our build tool docker-make or the corresponding docker-make action, respectively.
We are currently making use of Swagger when exposing traditional REST APIs for end-users. This helps us stay technology-agnostic, as we can generate clients in almost any language using go-swagger. Swagger additionally simplifies the documentation of our APIs.
Most APIs, though, are not required to be user-facing but are of a technical nature. These are preferably implemented using grpc.
Artifacts are versioned by tagging the respective repository with a tag starting with the letter v, followed by a valid semantic version (e.g. v0.8.1).
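As a small illustration of this convention, here is a minimal Go sketch using golang.org/x/mod/semver; this library is merely one way to check such tags, not something the projects prescribe:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	// semver.IsValid requires the leading "v", matching the tagging convention.
	for _, tag := range []string{"v0.8.1", "0.8.1", "v1.2.3-rc.1"} {
		fmt.Printf("%-12s valid: %v\n", tag, semver.IsValid(tag))
	}
}
```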
In order to make it easier for others to understand a project, we document general information and usage instructions in a README.md in every project.
In addition to that, we document a microservice in the docs repository. The documentation should contain the reasoning why this service exists and why it was implemented the way it was. The aim of this procedure is to reduce the time it takes contributors to comprehend architectural decisions made while writing the software, and to clarify the general purpose of this service in the context of the entire software.
This chapter describes general guidelines on how to develop and contribute code for a certain programming language.
Development follows the official guide to:
metal-stack maintains several libraries that you should utilize in your project in order to unify common behavior. Some of these projects are:
From the server side, you should ensure that you return the common error JSON struct in case of an error, as defined in metal-lib/httperrors. Ensure you are using go-restful >= v2.9.1 and go-restful-openapi >= v0.13.1 (allows default responses with error codes other than 200).
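For illustration, here is a minimal sketch of how a handler could return such a common error JSON struct. The field names of HTTPErrorResponse below are assumptions for this example; the authoritative definition lives in metal-lib/httperrors:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// HTTPErrorResponse mirrors the idea of the common error struct described
// above; the field names are illustrative assumptions, see
// metal-lib/httperrors for the canonical definition.
type HTTPErrorResponse struct {
	StatusCode int    `json:"statuscode"`
	Message    string `json:"message"`
}

// sendError writes the common error JSON struct with the given status code,
// so that every endpoint fails in the same, machine-readable way.
func sendError(w http.ResponseWriter, code int, msg string) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	_ = json.NewEncoder(w).Encode(HTTPErrorResponse{StatusCode: code, Message: msg})
}

func main() {
	http.HandleFunc("/v1/machine", func(w http.ResponseWriter, r *http.Request) {
		// Example of a default response with an error code other than 200.
		sendError(w, http.StatusUnprocessableEntity, "machine not allocatable")
	})
	_ = http.ListenAndServe(":8080", nil)
}
```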
We want to share knowledge and keep things simple. If things cannot be kept simple, we want to enable everybody to understand them by:
<THE WHAT> to <THE TO>").
Development follows the official guide to:
We face the situation that we argue for running bare metal on premises because this way the customers can control where and how their software and data are processed and stored. On the other hand, we have currently decided that our metal-api control plane components run on a Kubernetes cluster (in our case, on a cluster provided by one of the available hyperscalers).
Running the control plane on Kubernetes has the following benefits:
Using a Kubernetes-as-a-service offering from one of the hyperscalers enables us to focus on using Kubernetes instead of having to maintain it as well.
It would be much saner if metal-stack had no, or only minimal, dependencies on external services. Imagine a metal-stack deployment in a plant: it would be optimal if we only had to deliver a single rack with servers and networking gear installed and wired, plug that rack into the power supply and an internet uplink, and it is ready to go.
Have a second plant that you want to be part of all your plants? Just tell both that they are part of something bigger, and metal-api knows of two partitions.
We can think of two different solutions to this vision:
As we can see, the first approach does not really address the problem; therefore, I will describe solution #2 in more detail.
Every distributed system struggles with handling state in a scalable, fast, and correct way. To decide how to cope with the state, we first must identify which state can be treated as partition-local only and which state must be synchronous for reads and writes across partitions.
Affected states:
Now we can see that the most critical state to hold and synchronize is the IPAM data, because these entities must be guaranteed to be updated synchronously while also being updated frequently.
Datastores:
We use three different types of datastores to persist the states of the metal application.
These are the easy part: all of our stateless services can be scaled up and down without any impact on functionality. Even the stateful services like masterdata and metal-api rely fully on the underlying datastore and can therefore also be scaled up and down to meet scalability requirements.
However, most of these services need to be placed behind a load balancer which does the L4/L7 balancing across the started/available replicas of the service for the clients talking to it. This is already provided by Kubernetes with either service type LoadBalancer or type ClusterIP.
One exception is the metal-console service, which must now have the partition in its DNS name, because there is no direct network connectivity between the management networks of the partitions. See "Network Setup".
In order to replicate certain data which must be available across all partitions, we can use one of the existing open-source databases that enable such a setup. There are a few available out there; the following incomplete list highlights the pros and cons of each.
RethinkDB
We already store most of our data in RethinkDB, and it already provides the ability to synchronize the data in a distributed manner with different guarantees for consistency and latency. This is described here: Scaling, Sharding and replication. But because RethinkDB has a rough history and an uncertain future, with the last release having taken more than a year, the team has already considered that we must eventually move away from RethinkDB.
PostgreSQL
Postgres does not offer multi-datacenter replication with writes in both directions; it can only make the remote instance store the same data.
CockroachDB
CockroachDB is a PostgreSQL-compatible database engine on the wire. It gives you both ACID guarantees and geo-replication, with writes allowed from all connected members. It is even possible to configure Follow the Workload and Geo Partitioning and Replication.
If we migrate all metal-api entities to be stored the same way we store masterdata, we could use CockroachDB to store all metal entities in one or more databases spread across all partitions and still ensure consistency and high availability.
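Since CockroachDB speaks the PostgreSQL wire protocol, existing Postgres-based persistence code could keep working largely unchanged. Here is a minimal Go sketch; the connection string (host, user, database) is a made-up placeholder:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // a plain Postgres driver is enough for CockroachDB
)

func main() {
	// Placeholder connection string; 26257 is CockroachDB's default SQL port.
	db, err := sql.Open("postgres",
		"postgresql://metal@cockroach.partition-a.example:26257/metal?sslmode=verify-full")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Verify connectivity through the standard database/sql interface.
	var version string
	if err := db.QueryRow("SELECT version()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to:", version)
}
```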
A simple setup showing how this would look is shown here.
go-ipam was modified in an example PR here: PR 17
In order to make the metal-api accessible for API users like cloud-api or metalctl as easily as it is today, some effort has to be taken. One possible approach would be to use an external load balancer which spreads the requests evenly across all metal-api endpoints in all partitions. Because all data is accessible from all partitions, an API request going to partition A with a request to create a machine in partition B will still work. If, on the other hand, partition B is not in a connected state because the interconnection between both partitions is broken, then of course the request will fail.
IMPORTANT The NSQ message to inform metal-core must end up in the correct partition.
To provide such an external load balancer, we have several options:
Another setup would place a small gateway behind the metal-api address which forwards to the metal-api in the partition where the request must be executed. This gateway, metal-api-router, must inspect the payload, extract the desired partition, and forward the request without any modifications to the metal-api endpoint in that partition. This can be done for all requests or, if we want to optimize, only for write accesses. A minimal sketch of such a gateway is shown below.
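This Go sketch illustrates the routing idea; the payload field name ("partitionid") and the backend addresses are assumptions made for this example, not the actual metal-api wire format:

```go
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// partitionEndpoints maps partition IDs to the metal-api endpoint of the
// respective partition. The addresses are placeholders for illustration.
var partitionEndpoints = map[string]string{
	"partition-a": "http://metal-api.partition-a.example:8080",
	"partition-b": "http://metal-api.partition-b.example:8080",
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Read the body and restore it so it can be forwarded unmodified.
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "cannot read body", http.StatusBadRequest)
			return
		}
		r.Body = io.NopCloser(bytes.NewReader(body))

		// Extract the desired partition from the payload; "partitionid"
		// is an assumed field name for this sketch.
		var payload struct {
			PartitionID string `json:"partitionid"`
		}
		_ = json.Unmarshal(body, &payload)

		endpoint, ok := partitionEndpoints[payload.PartitionID]
		if !ok {
			http.Error(w, "unknown partition", http.StatusBadRequest)
			return
		}

		// Forward the request without modification to the partition-local metal-api.
		target, _ := url.Parse(endpoint)
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```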
In order to keep the impact on the overall security concept as minimal as possible, I would not modify the current network setup. The only modifications that have to be made are:
A simple setup showing how this would look is shown here; it does not work, though, because of the aforementioned NSQ issue.
Therefore we need the metal-api-router:
The deployment of our components in a partition will differ substantially from the deployment we have today. Deploying them in Kubernetes in the partition would be very difficult to achieve, because we have no sane way to deploy Kubernetes on physical machines without an underlying API. I would therefore suggest deploying our components in the same way we do for the services running on the management server: use systemd to start Docker containers, for example with a unit like the sketch below.
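As a rough sketch of what that could look like, here is a systemd unit starting a Docker container; the unit name, image reference, and flags are placeholders, not the actual deployment:

```ini
# /etc/systemd/system/metal-api.service (illustrative placeholder)
[Unit]
Description=metal-api started as a Docker container
After=docker.service
Requires=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f metal-api
ExecStart=/usr/bin/docker run --name metal-api -p 8080:8080 \
    ghcr.io/metal-stack/metal-api:latest
ExecStop=/usr/bin/docker stop metal-api

[Install]
WantedBy=multi-user.target
```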
IP forwarding is deactivated on eth0, and no IP masquerading is configured.