Merge pull request #475 from orozery/website
website: nit fixes
orozery authored Apr 1, 2024
2 parents 3be1c2e + 5f59df3 commit dbc45c4
Showing 9 changed files with 140 additions and 129 deletions.
2 changes: 1 addition & 1 deletion website/Makefile
@@ -11,7 +11,7 @@ build: ; $(info building site...)
@hugo

# delete hugo modules under clean?
clean: ; $(info cleaning website output folder...)
clean: ; $(info cleaning website output directory...)
@rm -rf ./public

serve: ; $(info running locally with env=production...)
13 changes: 8 additions & 5 deletions website/content/en/docs/concepts/fabric.md
@@ -4,7 +4,7 @@ description: Defining a ClusterLink fabric
weight: 10
---

The concept of a `Fabric` encapsulates a set of cooperating [peers]({{< ref "peers" >}}/).
The concept of a `Fabric` encapsulates a set of cooperating [peers]({{< ref "peers" >}}).
All peers in a fabric can communicate and may share [services]({{< ref "services" >}})
between them, with access governed by [policies]({{< ref "policies" >}}).
The `Fabric` acts as a root of trust for peer to peer communications (i.e.,
@@ -17,7 +17,7 @@ Currently, the concept of a `Fabric` is just that - a concept. It is not represe
One could potentially consider a more elaborate implementation where a central
management entity explicitly deals with `Fabric` life cycle, association of peers to
a fabric, etc. The role of this central management component in ClusterLink is currently
delegated to users who are responsible for coordinating the transfer to certificates
delegated to users who are responsible for coordinating the transfer of certificates
between peers, out of band.

## Initializing a new fabric
@@ -28,15 +28,18 @@ The following assume that you have access to the `clusterlink` CLI and one or mo
peers (i.e., clusters) where you'll deploy ClusterLink. The CLI can be downloaded
from the ClusterLink [releases page on GitHub](https://github.com/clusterlink-net/clusterlink/releases/latest).

### Create a new Fabric CA
### Create a new fabric CA

To create a new Fabric certificate authority (CA), execute the following CLI command:
To create a new fabric certificate authority (CA), execute the following CLI command:

```sh
clusterlink create fabric --name <fabric_name>
```

This command will create the CA files `cert.pem` and `key.pem` in a folder named <fabric_name>. The `--name` option is optional, and by default, "default_fabric" will be used. While you will need access to these files to create the peers` gateway certificates later, the private key file should be protected and not shared with others.
This command will create the CA files `cert.pem` and `key.pem` in a directory named `<fabric_name>`.
The `--name` option is optional, and by default, "default_fabric" will be used.
While you will need access to these files to create the peers' gateway certificates later,
the private key file should be protected and not shared with others.
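For example, using a hypothetical fabric name, the command and resulting layout would look as follows (a sketch; only the two CA files are guaranteed by the text above):

```sh
clusterlink create fabric --name my-fabric
# Expected layout (illustrative):
#   my-fabric/
#   ├── cert.pem   # CA certificate, needed later for creating peer certificates
#   └── key.pem    # CA private key, keep protected and do not share
```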

## Related tasks

84 changes: 42 additions & 42 deletions website/content/en/docs/concepts/peers.md
@@ -1,23 +1,23 @@
---
title: Peers
description: Defining ClusterLink Peers as part of Fabric
description: Defining ClusterLink peers as part of a fabric
weight: 20
---

A `Peer` represents a location, such as a Kubernetes cluster, participating in a
[Fabric]({{< ref "fabric" >}}). Each peer may host one or more [Services]({{< ref "services" >}})
[fabric]({{< ref "fabric" >}}). Each peer may host one or more [services]({{< ref "services" >}})
it wishes to share with other peers. A peer is managed by a peer administrator,
which is responsible for running the ClusterLink control and data planes. The
administrator will typically deploy the ClusterLink components by configuring
the [deployment CRD]({{< ref "users#deploy-crd-instance" >}}). They may also wish to provide
the [deployment CR]({{< ref "users#deploy-cr-instance" >}}). They may also wish to provide
(often) coarse-grained access policies in accordance with high level corporate
policies (e.g., "production peers should only communicate with other production peers").

Once a Peer has been added to a Fabric, it can communicate with any other Peer
Once a peer has been added to a fabric, it can communicate with any other peer
belonging to it. All configuration relating to service sharing (e.g., the exporting
and importing of Services, and the setting of fine grained application policies) can be
and importing of services, and the setting of fine grained application policies) can be
done with lowered privileges (e.g., by users, such as application owners). Remote peers are
represented by the `Peer` Custom Resource Definition (CRD). Each Peer CRD instance
represented by the `Peer` Custom Resource Definition (CRD). Each Peer CR instance
defines a remote cluster and the network endpoints of its ClusterLink gateways.

## Prerequisites
@@ -26,30 +26,31 @@ The following assume that you have access to the `clusterlink` CLI and one or mo
peers (i.e., clusters) where you'll deploy ClusterLink. The CLI can be downloaded
from the ClusterLink [releases page on GitHub](https://github.com/clusterlink-net/clusterlink/releases/latest).
It also assumes that you have access to the [previously created]({{< ref "fabric#create-a-new-fabric-ca" >}})
Fabric CA files.
fabric CA files.

## Initializing a new Peer
## Initializing a new peer

{{< notice warning >}}
Creating a new Peer is a **Fabric administrator** level operation and should be appropriately
Creating a new peer is a **fabric administrator** level operation and should be appropriately
protected.
{{< /notice >}}

### Create a new Peer certificate
### Create a new peer certificate

To create a new Peer certificate belonging to a fabric, confirm that the Fabric CA files
To create a new peer certificate belonging to a fabric, confirm that the fabric CA files
are available in the current working directory, and then execute the following CLI command:

```sh
clusterlink create peer-cert --name <peer_name> --fabric <fabric_name>
```

{{< notice tip >}}
The Fabric CA files (certificate and private key) are expected to be in a subdirectory (i.e., `./<fabric_name>/cert.name` and `./<fabric_name>/key.pem`).
The fabric CA files (certificate and private key) are expected to be in a subdirectory
(i.e., `./<fabric_name>/cert.pem` and `./<fabric_name>/key.pem`).
{{< /notice >}}

This will create the certificate and private key files (`cert.pem` and
`key.pem`, respectively) for the new peer. By default, the files are
`key.pem`, respectively) of the new peer. By default, the files are
created in a subdirectory named `<peer_name>` under the fabric directory `<fabric_name>`.
You can override the default by setting the `--output <path>` option.
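For example, a minimal sketch using hypothetical peer and fabric names:

```sh
# Creates ./my-fabric/peer1/cert.pem and ./my-fabric/peer1/key.pem by default:
clusterlink create peer-cert --name peer1 --fabric my-fabric

# Write the files somewhere else instead:
clusterlink create peer-cert --name peer1 --fabric my-fabric --output /secure/peer1-certs
```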

@@ -59,11 +60,11 @@ You will need the CA certificate (but **not** the CA private key) and the peer c
peer administrator.
{{< /notice >}}

## Deploy ClusterLink to a new Peer
## Deploy ClusterLink to a new peer

{{< notice info >}}
This operation is typically done by a local **Peer administrator**, usually different
than the **Fabric administrator**.
This operation is typically done by a local **peer administrator**, usually different
than the **fabric administrator**.
{{< /notice >}}

Before proceeding, ensure that the CA certificate (the CA private key is not needed),
@@ -72,23 +73,24 @@ Before proceeding, ensure that the CA certificate (the CA private key is not nee

### Install the ClusterLink deployment operator

Install the ClusterLink operator by running the following command
Install the ClusterLink operator by running the following command:

```sh
clusterlink peer init
```
<!-- TODO: is this the right command -->

The command assumes that kubectl is set to the correct context and credentials
and that the certificates were created in the local folder. If they were not,
add the `-f <path>` CLI option to set the correct path to the certificate files.
and that the certificates were created in respective sub-directories
under the current working directory.
If they were not, add the `-f <path>` CLI option to set the correct path.
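For example (paths are hypothetical):

```sh
# Certificates are under the current working directory:
clusterlink peer init

# Certificates were generated elsewhere:
clusterlink peer init -f /secure/peer1-certs
```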

This command will deploy the ClusterLink deployment CRDs using the current
kubectl context. The operation requires cluster administrator privileges
in order to install CRDs into the cluster.
The ClusterLink operator is installed to the `clusterlink-operator` namespace
and the CA and peer certificate and key are set as Kubernetes secrets
in the namespace. You can confirm the successful completion of the step using
and the CA and peer certificate and private key are set as K8s secrets
in the namespace. You can confirm the successful completion of this step using
the following commands:

```sh
@@ -109,50 +111,48 @@ multiline output of `kubectl get secret --namespace clusterlink-operator` comman

{{% /expand %}}
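As a quick sanity check, the secret listing referenced above should include entries for the CA certificate and the peer certificate and private key (exact secret names may vary between releases):

```sh
kubectl get secret --namespace clusterlink-operator
```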

### Deploy ClusterLink via the Operator and ClusterLink CRD
### Deploy ClusterLink via the operator and ClusterLink CR

After the operator is installed, you can deploy ClusterLink by applying
the ClusterLink instance CRD. This will cause the ClusterLink operator to
the ClusterLink CR. This will cause the ClusterLink operator to
attempt reconciliation of the actual and intended ClusterLink deployment.
By default, the operator will install the ClusterLink control and data plane
components into a dedicated and privileged namespace (defaults to `clusterlink-system`).
Configurations affecting the entire peer, such as the list of known Peers, are also maintained
Configurations affecting the entire peer, such as the list of known peers, are also maintained
in the same namespace.

Refer to the [getting started guide]({{< ref "users#setup" >}}) for a description
of the ClusterLink instance CRD's fields.
of the ClusterLink CR fields.

<!-- TODO expand the sample CRD file? -->
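For orientation, a minimal instance sketch may look as follows; the `kind`, `apiVersion`, and field names here are assumptions for illustration only, and the getting started guide linked above is authoritative:

```yaml
# Illustrative sketch only; kind, apiVersion and all spec fields are assumptions.
apiVersion: clusterlink.net/v1alpha1
kind: Instance
metadata:
  name: peer1                      # hypothetical peer name
  namespace: clusterlink-operator  # the operator's namespace
spec:
  namespace: clusterlink-system    # target namespace for the control and data planes
  ingress:
    type: LoadBalancer             # or NodePort for local (e.g., kind) clusters
```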

## Add or remove Peers
## Add or remove peers

{{< notice info >}}
This operation is typically done by a local **Peer administrator**, usually different
than the **Fabric administrator**.
This operation is typically done by a local **peer administrator**, usually different
than the **fabric administrator**.
{{< /notice >}}

Managing peers is done by creating, deleting and updating Peer CRD instances
Managing peers is done by creating, deleting and updating peer CRs
in the dedicated ClusterLink namespace (typically, `clusterlink-system`). Peers are
added to the ClusterLink namespace by the peer administrator. Information
regarding peer gateways and attributes is communicated out of band (e.g., provided
by the Fabric or remote Peer administrator over email). In the future, these may
by the fabric or remote peer administrator over email). In the future, these may
be configured via a management plane.

There are two fundamental attributes in the Peer CRD: the Peer's name and the list of
ClusterLink gateway endpoints through which the remote peer's Services are available.
There are two fundamental attributes in the peer CRD: the peer name and the list of
ClusterLink gateway endpoints through which the remote peer's services are available.
Peer names are unique and must align with the Subject name present in their certificate
during connection establishment. The name is used by importers in referencing an export
(see [here]({{< ref "services" >}}) for details).

A gateway endpoint would typically be implemented via a `NodePort` or `LoadBalancer`
Kubernetes Service. A `NodePort` Service would typically be used in local deployments
(e.g., when running in KIND clusters during development) and a `LoadBalancer` Service
would be used in Cloud based deployments. These can be automatically configured and
created via the [operator CRD]{{< ref "#deploy-clusterlink-via-the-operator-and-clusterlink-crd" >}}.
Not having any gateways is an error and will be reported in the Peer's Status.
In addition, the Status section includes other useful attributes, such a `Reachable`
(or `Seen`) indicating whether the Peer is currently reachable, the last time it
successfully responded to heartbeats, etc.
K8s service. A `NodePort` service would typically be used in local deployments
(e.g., when running in kind clusters during development) and a `LoadBalancer` service
would be used in cloud based deployments. These can be automatically configured and
created via the [ClusterLink CR]({{< ref "#deploy-clusterlink-via-the-operator-and-clusterlink-cr" >}}).
The peer's status section includes a `Reachable` condition indicating whether the peer is currently reachable,
and, if it is not reachable, the last time it was.

{{% expand summary="Example YAML for `kubectl apply -f <peer_file>`" %}}
{{< readfile file="/static/files/peer_crd_sample.yaml" code="true" lang="yaml" >}}
@@ -161,6 +161,6 @@ Gateway endpoint would typically be a implemented via a `NodePort` or `LoadBalan
## Related tasks

Once a peer has been created and initialized with the ClusterLink control and data
planes as well as one or more remote Peers, you can proceed with configuring
planes as well as one or more remote peers, you can proceed with configuring
[services]({{< ref "services" >}}) and [policies]({{< ref "policies" >}}).
For a complete end to end use case, refer to [iperf toturial]({{< ref "iperf" >}}).
For a complete end-to-end use case, refer to the [iperf tutorial]({{< ref "iperf" >}}).
4 changes: 2 additions & 2 deletions website/content/en/docs/concepts/policies.md
@@ -1,6 +1,6 @@
---
title: Access Policies
description: Controlling Service access across peers
description: Controlling service access across peers
weight: 40
---

@@ -50,7 +50,7 @@ The following assumes that you have `kubectl` access to two or more clusters whe
### Creating access policies
Recall that a connection is dropped if it does not match any access policy.
Hence, for a connection to be allowed, an access policy with an `allow` action must be created on both sides of the connection.
Creating an access policy is accomplished by creating an `AccessPolicy` CRD instance in the relevant namespace (see Note above).
Creating an access policy is accomplished by creating an `AccessPolicy` CR in the relevant namespace (see Note above).
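For illustration, a minimal sketch of a policy allowing all connections within a namespace is shown below; the `action`, `from`, `to`, and `workloadSelector` field names are assumptions, so consult the custom resource definition that follows for the authoritative schema:

```yaml
# Illustrative sketch; kind, apiVersion and field names are assumptions.
apiVersion: clusterlink.net/v1alpha1
kind: AccessPolicy
metadata:
  name: allow-all
  namespace: demo            # hypothetical namespace
spec:
  action: allow              # connections matching no policy are dropped
  from:
    - workloadSelector: {}   # empty selector: any source workload
  to:
    - workloadSelector: {}   # empty selector: any destination workload
```

Recall that an equivalent `allow` policy must exist on both sides for the connection to be admitted.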

{{% expand summary="Export Custom Resource" %}}

32 changes: 16 additions & 16 deletions website/content/en/docs/concepts/services.md
@@ -1,15 +1,15 @@
---
title: Services
description: Sharing Services
description: Sharing services
weight: 30
---

ClusterLink uses services as the unit of sharing between peers.
One or more peers can expose an (internal) Kubernetes Service to
One or more peers can expose an (internal) K8s Service to
be consumed by other [peers]({{% ref "peers" %}}) in the [fabric]({{% ref "fabric" %}}).
A service is exposed by creating an `Export` CRD instance referencing it in the
A service is exposed by creating an `Export` CR referencing it in the
source cluster. Similarly, the exported service can be made accessible to workloads
in a peer by defining an `Import` CRD instance in the destination cluster[^KEP-1645].
in a peer by defining an `Import` CR in the destination cluster[^KEP-1645].
Thus, service sharing is an explicit operation. Services are not automatically
shared by peers in the fabric. Note that the exporting cluster must be
[configured as a peer]({{% ref "peers#add-or-remove-peers" %}}) of the importing
@@ -52,8 +52,8 @@ The following assume that you have `kubectl` access to two or more clusters wher

In order to make a service potentially accessible by other clusters, it must be
explicitly configured for remote access via ClusterLink. Exporting is
accomplished by creating an `Export` CRD instance in the **same** namespace
as the service being exposed. The CRD instance acts as a marker for enabling
accomplished by creating an `Export` CR in the **same** namespace
as the service being exposed. The CR acts as a marker for enabling
remote access to the service via ClusterLink.
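For illustration, a minimal export sketch is shown below; the `kind`, `apiVersion`, and spec fields are assumptions, and the custom resource definition that follows is authoritative:

```yaml
# Illustrative sketch; kind, apiVersion and spec fields are assumptions.
apiVersion: clusterlink.net/v1alpha1
kind: Export
metadata:
  name: iperf-server   # with Host left empty, exports the Service named "iperf-server"
  namespace: demo      # must be the namespace of the Service being exported
spec:
  port: 5000           # assumed field: the service port being exposed
```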

{{% expand summary="Export Custom Resource" %}}
@@ -82,7 +82,7 @@ type ExportStatus struct {
The ExportSpec defines the following fields:

- **Host** (string, optional): the name of the service being exported. The service
must be defined in the same namespace as the Export CRD instance. If empty,
must be defined in the same namespace as the Export CR. If empty,
the export shall refer to a Kubernetes Service with the same name as the instance's
`metadata.name`. It is an error to refer to a non-existent service or one that is
not present in the local namespace. The error will be reflected in the CRD's status.
@@ -101,9 +101,9 @@ Note that exporting a Service does not automatically make it accessible to other

### Importing a service

Exposing remote services to a peer is accomplished by creating an `Import` CRD
instance to a namespace. The CRD representing the imported service and its
available backends across all peers. In response to an Import CRD, ClusterLink
Exposing remote services to a peer is accomplished by creating an `Import` CR
in a namespace. The CR represents the imported service and its
available backends across all peers. In response to an Import CR, the ClusterLink
control plane will create a local Kubernetes Service selecting the ClusterLink
data plane Pods. The use of native Kubernetes constructs allows ClusterLink
to work with any compliant cluster and CNI, transparently.
@@ -112,7 +112,7 @@ The Import instance creates the service endpoint in the same namespace as it is
defined in. The created service will have the Import's `metadata.Name`. This
allows maintaining independent names for services between peers. Alternatively,
you may use the same name for the import and related source exports.
You can define multiple Import CRDs for the same set of Exports in different
You can define multiple Import CRs for the same set of Exports in different
namespaces. These are independent of each other.

{{% expand summary="Import Custom Resource" %}}
@@ -148,7 +148,7 @@ type ImportStatus struct {

The ImportSpec defines the following fields (a combined sketch follows the list):

- **Port** (integer, required): the imported, user facing, port number define
- **Port** (integer, required): the imported, user facing, port number defined
on the created service object.
- **TargetPort** (integer, optional): this is the internal listening port
used by the ClusterLink data plane pods to represent the remote services. Typically the
@@ -159,20 +159,20 @@ The ImportSpec defines the following fields:
[port conflicts](https://kubernetes.io/docs/concepts/services-networking/service/#avoid-nodeport-collisions)
as is done for NodePort services.
- **Sources** (source array, required): references to remote exports providing backends
for the Import. Each reference names a different export through the combination of
for the Import. Each reference names a different export through the combination of:
- *Peer* (string, required): name of ClusterLink peer where the export is defined.
- *ExportNamespace* (string, required): name of the namespace on the remote peer where
the export is defined.
- *ExportName* (string, required): name of the remote export.
- **LBScheme** (string, optional): load balancing method to select between different
Sources defined. The default policy is `round-robin`, but you could override it to use
`random` or `static` (fixed) assignment.
Sources defined. The default policy is `random`, but you could override it to use
`round-robin` or `static` (fixed) assignment.
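Putting the fields together, a sketch of an import backed by exports from two peers might look as follows (all names are hypothetical, and the `kind`, `apiVersion`, and YAML casing of the fields are assumed):

```yaml
# Illustrative sketch; apiVersion, kind and exact field casing are assumptions.
apiVersion: clusterlink.net/v1alpha1
kind: Import
metadata:
  name: iperf-server        # the created local Service will carry this name
  namespace: demo
spec:
  port: 5000                # user-facing port on the created Service
  sources:                  # remote exports backing this import
    - peer: peer2
      exportNamespace: demo
      exportName: iperf-server
    - peer: peer3
      exportNamespace: demo
      exportName: iperf-server
  lbScheme: round-robin     # overrides the default "random"
```

Applying a similar Import in a different namespace would create a second, independent local Service, per the note above.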

<!-- Importing multiport? It is not possible... Could use merge in future?
perhaps, but might requires explicit service name so can merge correctly
or use port set instead of individual port per export/import -->

As with exports, importing a Service does not automatically make it accessible by
As with exports, importing a service does not automatically make it accessible by
workloads, but only enables *potential* access. To complete service sharing,
you must define at least one [access control policy]({{% ref "policies" %}}) that
allows access in the importing cluster. To grant access, a connection must be