Merge pull request #122 from n-boshnakov/fix-broken-links-2024.05
Fixed broken links - 2024.05
unmarshall authored Jun 21, 2024
2 parents bbfe965 + 4684f4e commit 2f9de7f
Showing 3 changed files with 13 additions and 13 deletions.
6 changes: 3 additions & 3 deletions docs/concepts/prober.md
@@ -5,7 +5,7 @@

Prober starts asynchronous and periodic probes for every shoot cluster. The first probe is the api-server probe, which checks the reachability of the API Server from the control plane. The second probe is the lease probe, which runs after the api-server probe succeeds and checks whether the number of expired node leases is below a certain threshold.
If the lease probe fails, it will scale down the dependent kubernetes resources. Once the connectivity to `kube-apiserver` is reestablished and the number of expired node leases is within the accepted threshold, the prober will proactively scale up the dependent kubernetes resources it had scaled down earlier. The failure threshold fraction for the lease probe
-and dependent kubernetes resources are defined in [configuration](/example/04-dwd-prober-configmap.yaml) that is passed to the prober.
+and dependent kubernetes resources are defined in [configuration](/example/01-dwd-prober-configmap.yaml) that is passed to the prober.
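The scale-down/scale-up behaviour described above can be condensed into a rough sketch. This is illustrative only: the function and variable names below are made up for the example and are not the prober's actual API.

```go
package main

import "fmt"

// decideAction is a hypothetical condensation of the prober behaviour
// described above: the lease probe only runs after the api-server probe
// succeeds; too many expired leases triggers a scale-down, and a healthy
// lease count triggers a scale-up of previously scaled-down resources.
func decideAction(apiServerReachable bool, expiredLeaseFraction, threshold float64) string {
	if !apiServerReachable {
		// No lease probe without a successful api-server probe.
		return "skip-lease-probe"
	}
	if expiredLeaseFraction >= threshold {
		return "scale-down"
	}
	return "scale-up"
}

func main() {
	fmt.Println(decideAction(false, 0, 0.6))
	fmt.Println(decideAction(true, 0.8, 0.6))
	fmt.Println(decideAction(true, 0.1, 0.6))
}
```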

### Origin

@@ -61,12 +61,12 @@
If the lease probe fails, the error could be due to a failure in listing the leases. In this case, no scaling operations are performed. If the error in listing the leases is a `TooManyRequests` error because requests to the `kube-apiserver` are being throttled,
then the probes are retried after a backOff of `backOffDurationForThrottledRequests`.

-If there is no error in listing the leases, then the Lease probe fails if the number of expired leases reaches the threshold fraction specified in the [configuration](/example/04-dwd-prober-configmap.yaml).
+If there is no error in listing the leases, then the Lease probe fails if the number of expired leases reaches the threshold fraction specified in the [configuration](/example/01-dwd-prober-configmap.yaml).
A lease is considered expired in the following scenario:
```
time.Now() >= lease.Spec.RenewTime + (p.config.KCMNodeMonitorGraceDuration.Duration * expiryBufferFraction)
```
-Here, `lease.Spec.RenewTime` is the time when the current holder of a lease last updated the lease. `config` is the probe config generated from the [configuration](/example/04-dwd-prober-configmap.yaml) and
+Here, `lease.Spec.RenewTime` is the time when the current holder of a lease last updated the lease. `config` is the probe config generated from the [configuration](/example/01-dwd-prober-configmap.yaml) and
`KCMNodeMonitorGraceDuration` is the amount of time which KCM allows a running Node to be unresponsive before marking it unhealthy (see [ref](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/#:~:text=Amount%20of%20time%20which%20we%20allow%20running%20Node%20to%20be%20unresponsive%20before%20marking%20it%20unhealthy.%20Must%20be%20N%20times%20more%20than%20kubelet%27s%20nodeStatusUpdateFrequency%2C%20where%20N%20means%20number%20of%20retries%20allowed%20for%20kubelet%20to%20post%20node%20status.)).
`expiryBufferFraction` is a hard-coded value of `0.75`. Using this fraction allows the prober to intervene before KCM marks a node as unknown, while still allowing the kubelet sufficient retries to renew the node lease (the kubelet renews the lease every `10s`, see [ref](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#:~:text=The%20lease%20is%20currently%20renewed%20every%2010s%2C%20per%20KEP%2D0009.)).

8 changes: 4 additions & 4 deletions docs/deployment/configure.md
@@ -22,14 +22,14 @@ Prober can be configured via the following flags:
| leader-elect-renew-deadline | time.Duration | No | 10s | The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration. This is only applicable if leader election is enabled. |
| leader-elect-retry-period | time.Duration | No | 2s | The duration the clients should wait between attempting acquisition and renewal of a leadership. This is only applicable if leader election is enabled. |

-You can view an example kubernetes prober [deployment](../../example/01-dwd-prober-deployment.yaml) YAML to see how these command line args are configured.
+You can view an example kubernetes prober [deployment](../../example/03-dwd-prober-deployment.yaml) YAML to see how these command line args are configured.


### Prober Configuration

A probe configuration is mounted as a `ConfigMap` to the container. The path to the config file is configured via the `config-file` command line argument as mentioned above. Prober will start one probe per Shoot control plane hosted within the Seed cluster. Each such probe runs asynchronously and periodically connects to the Kube ApiServer of the Shoot. The configuration below influences each such probe.

-You can view an example YAML configuration provided as `data` in a `ConfigMap` [here](../../example/04-dwd-prober-configmap.yaml).
+You can view an example YAML configuration provided as `data` in a `ConfigMap` [here](../../example/01-dwd-prober-configmap.yaml).

| Name | Type | Required | Default Value | Description |
|-----------------------------|--------------------------------|----------|---------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -144,13 +144,13 @@
### Command Line Arguments

Weeder can be configured with the same flags as the prober, described in the [command-line-arguments](#command-line-arguments) section.
-You can find an example weeder [deployment](../../example/02-dwd-weeder-deployment.yaml) YAML to see how these command line args are configured.
+You can find an example weeder [deployment](../../example/04-dwd-weeder-deployment.yaml) YAML to see how these command line args are configured.

### Weeder Configuration

Weeder configuration is mounted as a `ConfigMap` to the container. The path to the config file is configured via the `config-file` command line argument as mentioned above. Weeder will start one go routine per podSelector per endpoint on an endpoint event, as described in [weeder internal concepts](../concepts/weeder.md#internals).

-You can view the example YAML configuration provided as `data` in a `ConfigMap` [here](../../example/03-dwd-weeder-configmap.yaml).
+You can view the example YAML configuration provided as `data` in a `ConfigMap` [here](../../example/02-dwd-weeder-configmap.yaml).

| Name | Type | Required | Default Value | Description |
|-------------------------------|-------------------------------|----------|---------------|----------------------------------------------------------------------------------------------------------|
12 changes: 6 additions & 6 deletions docs/development/testing.md
@@ -25,16 +25,16 @@

### Common for All Kinds
- For naming the individual tests (`TestXxx` and `testXxx` methods) and helper methods, make sure that the name describes the implementation of the method. For example, `testScalingWhenMandatoryResourceNotFound` tests the behaviour of the `scaler` when a mandatory resource (the KCM deployment) is not present.
-- Maintain proper logging in tests. Use the `t.Log()` method to add appropriate messages wherever necessary to describe the flow of the test. See [this](../../controllers/endpoints_controller_test.go) for examples.
+- Maintain proper logging in tests. Use the `t.Log()` method to add appropriate messages wherever necessary to describe the flow of the test. See [this](../../controllers/endpoint/endpoints_controller_test.go) for examples.
- Make use of the `testdata` directory for storing arbitrary sample data needed by tests (YAML manifests, etc.). See [this](../../controllers) package for examples.
- From https://pkg.go.dev/cmd/go/internal/test:
> The go tool will ignore a directory named "testdata", making it available to hold ancillary data needed by the tests.
### Table-driven tests
We need a tabular structure in two cases:

-- **When we have multiple tests which require the same kind of setup**: In this case we have a `TestXxxSuite` method which will do the setup and run all the tests. We have a slice of `test` structs which holds all the tests (typically a `title` and a `run` method). We use a `for` loop to run all the tests one by one. See [this](../../controllers/cluster_controller_test.go) for examples.
-- **When we have the same code path and multiple possible values to check**: In this case we have the arguments and expectations in a struct. We iterate through the slice of all such structs, passing the arguments to appropriate methods and checking if the expectation is met. See [this](../../internal/prober/scaler_test.go) for examples.
+- **When we have multiple tests which require the same kind of setup**: In this case we have a `TestXxxSuite` method which will do the setup and run all the tests. We have a slice of `test` structs which holds all the tests (typically a `title` and a `run` method). We use a `for` loop to run all the tests one by one. See [this](../../controllers/cluster/cluster_controller_test.go) for examples.
+- **When we have the same code path and multiple possible values to check**: In this case we have the arguments and expectations in a struct. We iterate through the slice of all such structs, passing the arguments to appropriate methods and checking if the expectation is met. See [this](../../internal/prober/scaler/scaler_test.go) for examples.
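The second pattern can be sketched generically as follows. The function under test (`clampFraction`) and the case values are made up for the illustration; a real test would run the same loop against `*testing.T` instead of counting failures.

```go
package main

import "fmt"

// clampFraction is a made-up function under test that clamps a value
// into the [0, 1] range.
func clampFraction(f float64) float64 {
	if f < 0 {
		return 0
	}
	if f > 1 {
		return 1
	}
	return f
}

// checkAll iterates a table of arguments and expectations, the same
// shape a table-driven test uses with t.Run/t.Errorf.
func checkAll() int {
	cases := []struct {
		name string
		in   float64
		want float64
	}{
		{"negative clamps to zero", -0.5, 0},
		{"in range unchanged", 0.75, 0.75},
		{"above one clamps to one", 1.5, 1},
	}
	failures := 0
	for _, tc := range cases {
		if got := clampFraction(tc.in); got != tc.want {
			fmt.Printf("%s: got %v, want %v\n", tc.name, got, tc.want)
			failures++
		}
	}
	return failures
}

func main() {
	fmt.Println("failures:", checkAll())
}
```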

### Env Tests
Env tests in Dependency Watchdog use the `sigs.k8s.io/controller-runtime/pkg/envtest` package. It sets up a temporary control plane (etcd + kube-apiserver) and runs the tests against it. The code to set up and tear down the environment can be found [here](../../internal/test/testenv.go).
@@ -44,8 +44,8 @@ These are the points to be followed while writing tests that use `envtest` setup:
1. tests with a common environment (`testXxxCommonEnvTests`)
2. tests which need a dedicated environment for each one (`testXxxDedicatedEnvTests`)

-They should be contained within the `TestXxxSuite` method. See [this](../../controllers/cluster_controller_test.go) for examples. If all tests are of one kind then this is not needed.
-- Create a method named `setUpXxxTest` for performing setup tasks before all/each test. It should either return a method or have a separate method to perform teardown tasks. See [this](../../controllers/cluster_controller_test.go) for examples.
+They should be contained within the `TestXxxSuite` method. See [this](../../controllers/cluster/cluster_controller_test.go) for examples. If all tests are of one kind then this is not needed.
+- Create a method named `setUpXxxTest` for performing setup tasks before all/each test. It should either return a method or have a separate method to perform teardown tasks. See [this](../../controllers/cluster/cluster_controller_test.go) for examples.
- The tests run by the suite can be table-driven as well.
- Use the `envtest` setup when there is a need of an environment close to an actual setup. Eg: start controllers against a real Kubernetes control plane to catch bugs that can only happen when talking to a real API server.
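The `setUpXxxTest` pattern — setup that returns its own teardown — can be sketched like this. The names (`setUpScalerTest`, `testEnv`) are illustrative; the real helpers live in the test files linked above.

```go
package main

import "fmt"

// testEnv is a stand-in for whatever the setup creates (an envtest
// environment, fake clients, temp directories, ...).
type testEnv struct{ started bool }

// setUpScalerTest performs setup and returns a teardown closure, so a
// test can `defer teardown()` immediately after calling the setup.
func setUpScalerTest() (*testEnv, func()) {
	env := &testEnv{started: true}
	teardown := func() { env.started = false }
	return env, teardown
}

func main() {
	env, teardown := setUpScalerTest()
	fmt.Println("started:", env.started)
	teardown()
	fmt.Println("started:", env.started)
}
```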

@@ -58,7 +58,7 @@ You can check out the code for this setup [here](../../internal/test/kind.go).
These are the points to be followed while writing tests that use `Vanilla Kind Cluster` setup:

- Use this setup only if there is a need for an actual Kubernetes cluster (api server + control plane + etcd) to write the tests, because this is slower than the normal `envTest` setup.
-- Create `setUpXxxTest` similar to the one in `envTest`. Follow the same structural pattern used in `envTest` for writing these tests. See [this](../../internal/prober/scaler_test.go) for examples.
+- Create `setUpXxxTest` similar to the one in `envTest`. Follow the same structural pattern used in `envTest` for writing these tests. See [this](../../internal/prober/scaler/scaler_test.go) for examples.


## Run Tests
