docs: add webhooks and functional-tests/assertions documentation (#397)
NivLipetz authored Sep 20, 2020
1 parent 3f0ec12 commit 5fef4ac
Showing 8 changed files with 102 additions and 5 deletions.
4 changes: 1 addition & 3 deletions docs/devguide/docs/configuration.md
Original file line number Diff line number Diff line change
Expand Up @@ -13,7 +13,6 @@ Below are variables Predator can be configured with.
| RUNNER_CPU | runner_cpu | Number of CPUs used by each runner || 1 |
| RUNNER_MEMORY | runner_memory | Max memory to use by each runner || 256 |
| DEFAULT_EMAIL_ADDRESS | default_email_address | Default email to send final report to, address can be configured || |
| DEFAULT_WEBHOOK_URL | default_webhook_url | Default webhook url to send live report statistics to || |
| ALLOW_INSECURE_TLS | allow_insecure_tls | If true, don't fail requests on unverified server certificate errors || false |
| DELAY_RUNNER_MS | delay_runner_ms | Delay the predator runner from sending http requests (ms) || |
| INTERVAL_CLEANUP_FINISHED_CONTAINERS_MS | interval_cleanup_finished_containers_ms | Interval (in ms) to search and delete finished tests containers. Value of 0 means no auto clearing enabled || 0 |
Expand Down Expand Up @@ -64,7 +63,6 @@ Additional parameters for the following chosen databases:
| Environment Variable | Configuration key | Description | Configurable from UI/API | Default value |
|---------------------- |---------------------- |----------------------------------------------------------- |-------------------------- |--------------- |
| BENCHMARK_THRESHOLD | benchmark_threshold | Minimum acceptable score of tests, if a score is less than this value, a webhook will be sent to the threshold webhook url || |
| BENCHMARK_THRESHOLD_WEBHOOK_URL | benchmark_threshold_webhook_url | URL to send webhooks to in case a test receives a score less than the benchmark threshold || |
| | benchmark_weights.percentile_ninety_five.percentage | Percentage of the score affected by p95 results || 20 |
| | benchmark_weights.percentile_fifty.percentage | Percentage of the score affected by median results || 20 |
| | benchmark_weights.server_errors_ratio.percentage | Percentage of the score affected by server errors ratio || 20 |
Expand All @@ -83,7 +81,7 @@ Additional parameters for the following chosen databases:
| | prometheus_metrics.buckets_sizes | Bucket sizes to configure prometheus || |
| | prometheus_metrics.labels | Labels will be passed to the push gateway || |

#### InfluxDB
| Environment Variable | Configuration key | Description | Configurable from UI/API | Default value |
|---------------------- |------------------------- |-------------------- |-------------------------- |--------------- |
| | influx_metrics.host | InfluxDB host || |
Expand Down
37 changes: 37 additions & 0 deletions docs/devguide/docs/functionalandassertions.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,37 @@
# Functional Tests and Assertions
!!! TIP "Supported from version zooz/predator:1.5.0"

Load tests stress the system and show how it handles that stress, but they do not usually verify actual functionality or acceptance criteria.
For this, Predator offers running **functional tests with assertions**.

## Creating a Functional Test
Functional tests are created in the same manner as load tests. When creating functional tests,
it's advised to add expectations so that the report will inform you whether the acceptance tests passed.
These expectations will be checked for each response received in the test and the report will display the assertion results.

### Expectations
The possible expectations that can be created to assert the response received:

- statusCode
- contentType
- hasProperty
- hasHeader
- equals
- matchesRegexp

### Example

A simple functional test that sends a GET request to `http://www.google.com/` and asserts that the response received has:

- statusCode: `200`
- contentType: `application/json`
- hasProperty: `body`

![functional-test](images/expectations.png)
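As a rough sketch, the expectations shown in the screenshot above could correspond to a snippet like the following in a test definition (the exact schema is an assumption; consult the <u>[API Reference](apireference.md)</u> for the authoritative shape):

```json
"expect": [
  { "statusCode": 200 },
  { "contentType": "application/json" },
  { "hasProperty": "body" }
]
```

Each expectation is evaluated against every response received during the run, and the assertion results appear in the report.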

## Running a Functional Test
The only difference between running the created test as a functional test rather than a load test is the load profile.
To do this, create a job with type `functional_test` and `arrival_count` as the rate parameter (instead of `arrival_rate`, which is used in load tests).
This means that over the test duration, the test will send the number of requests configured in the `arrival_count` parameter.

**Example**: A test duration of 10 minutes with `arrival_count = 600` will result in 1 request per second.
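For illustration, a job body for the example above might look roughly like this sketch (field names other than `type` and `arrival_count` are assumptions, and the duration is assumed to be in seconds — see the <u>[API Reference](apireference.md)</u> for the exact schema):

```json
{
  "test_id": "<your-test-id>",
  "type": "functional_test",
  "duration": 600,
  "arrival_count": 600,
  "environment": "test"
}
```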
Binary file added docs/devguide/docs/images/create-webhook.png
Binary file added docs/devguide/docs/images/expectations.png
8 changes: 7 additions & 1 deletion docs/devguide/docs/plugins.md
Original file line number Diff line number Diff line change
Expand Up @@ -6,11 +6,17 @@ and to send final test results straight to your email upon test completion.

## SMTP Server - Email Notifier

Set up a connection to your SMTP server to receive email notifications. Please refer to the <u>[configuration section](configuration.md#smtp-server)</u> to see the required variables.

## Prometheus

Set up a connection to your Prometheus Push-Gateway to receive test run metrics. Please refer to <u>[configuration section](configuration.md#prometheus)</u> to see required variables.

For reference, Predator uses the following <u>[plugin](https://github.com/enudler/artillery-plugin-prometheus)</u> to export Prometheus metrics.

## InfluxDB

Set up a connection to your InfluxDB to receive test run metrics. Please refer to <u>[configuration section](configuration.md#influxdb)</u> to see required variables.

For reference, Predator uses the following <u>[plugin](https://github.com/Nordstrom/artillery-plugin-influxdb)</u> to export InfluxDB metrics.
9 changes: 8 additions & 1 deletion docs/devguide/docs/schedulesandreports.md
Original file line number Diff line number Diff line change
Expand Up @@ -15,4 +15,11 @@ Predator's test reports give in-depth performance metrics in real-time, while ag
!!! TIP "Supported from version zooz/predator:1.3.0"

Easily compare two or more test results in one predator report dashboard. This is available under `Last Reports` page
and after selecting the desired tests, clicking on `Compare Reports` will display the test results side by side on the same graphs.

### Favorite Reports

!!! TIP "Supported from version zooz/predator:1.5.0"

Want to save a report and find it easily later? In the report you would like to favorite, click the star to add it to the favorites list.
Under the reports page you can then filter for only the favorite reports and find them in one click.
46 changes: 46 additions & 0 deletions docs/devguide/docs/webhooks.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,46 @@
# Webhooks
!!! TIP "Supported from version zooz/predator:1.5.0"

Webhooks are events that notify you of test progress.
Webhooks are supported in Slack or JSON format for easy server-to-server integration.
You can define a global webhook, which is enabled for all tests system-wide, or an ad hoc webhook, which can optionally be attached to a specific test run.

## Webhook Events
The following test run events are supported when configuring a webhook:

- **Started**: Sent when a test starts.
- **In progress**: Sent after a test run receives its first statistics.
- **API Failure**: Sent when the test receives responses with 5xx status codes.
- **Aborted**: Sent when a test is aborted.
- **Failed**: Sent when a test fails to run.
- **Finished**: Sent when a test finishes successfully.
- **Benchmark Passed**: Sent when a test finishes successfully and receives an equal or higher score than the allowed threshold.
- **Benchmark Failed**: Sent when a test finishes successfully and receives a lower score than the allowed threshold.

## Setting Up
Webhooks can be set up both in the UI and in the API. For further info please see our <u>[API Reference](apireference.md)</u>.

### Global Webhook
Global webhooks are invoked on all test runs.

### Ad hoc Webhook
Ad hoc webhooks can be paired with a specific test run (either by API/UI).

## Webhook Formats

### Slack
Webhooks can be sent as a Slack message to any Slack channel with a proper Slack webhook URL.

### JSON
For server-to-server integration, webhooks can also be sent as an HTTP `POST` request to a configured webhook URL, with a JSON body containing relevant data about the test's progress and results.
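Purely as an illustration of the idea (every field name below is hypothetical, not Predator's actual schema — see the <u>[API Reference](apireference.md)</u>), a `finished` event could carry a body along these lines:

```json
{
  "event": "finished",
  "test_name": "my-test",
  "report_url": "http://predator.example.com/reports/abc123"
}
```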

## Example
A global webhook created in Slack format that sends a message to the configured Slack webhook URL on every test run that reaches the following phases:

- started
- in_progress
- aborted
- failed
- finished

![webhooks](images/create-webhook.png)
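As a sketch, creating the global webhook above via the API could look like a `POST` to the webhooks endpoint with a body along these lines (the endpoint path and field names are assumptions; consult the <u>[API Reference](apireference.md)</u> for the actual schema):

```json
{
  "name": "slack-notifier",
  "url": "https://hooks.slack.com/services/<your-slack-webhook-path>",
  "format_type": "slack",
  "global": true,
  "events": ["started", "in_progress", "aborted", "failed", "finished"]
}
```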
3 changes: 3 additions & 0 deletions docs/devguide/mkdocs.yml
Original file line number Diff line number Diff line change
Expand Up @@ -11,9 +11,12 @@ nav:
- Installation: installation.md
- My First Test: myfirsttest.md
- Advanced Test Setup: tests.md
- Functional Tests and Assertions: functionalandassertions.md
- Processor Creation: processors.md
- Benchmarks: benchmarks.md
- Webhooks: webhooks.md
- Schedules and Reports: schedulesandreports.md
- Plugins: plugins.md
- Configuration: configuration.md
- FAQ: faq.md
- Contributing: contributing.md
Expand Down
