
Docs update before ph pt.2 (#284)
* Update more images and copy

* casing
fabiomaienschein authored Oct 10, 2024
1 parent 7f70fd8 commit f619b90
Showing 14 changed files with 40 additions and 25 deletions.
2 changes: 1 addition & 1 deletion edit-test-case.mdx
@@ -42,7 +42,7 @@ Use the **virtual locator picker** to change the type of the locator. Your optio

Click on `add step` and drag the `new step` to its respective place in the step sequence. The snapshot will adapt so you can use the **virtual locator picker** to select the interaction or assertion type.

Delete steps by clicking on the `trash` icon in the upper right corner of the test step view.
Delete steps by clicking on the `trash` icon in the upper right corner of the test step view, or by pressing the backspace or delete (`DEL`) key.

<Frame caption="Adding a step with the virtual locator picker, 7/2024">
<img
8 changes: 4 additions & 4 deletions generate-more-test-cases.mdx
@@ -1,12 +1,12 @@
---
title: AI-generate more tests
description: "Use AI generation to add more test cases to your test suite."
description: "Use AI generation to add more test cases to your test suite and increase your coverage."
icon: "code-branch"
---

## Have the AI agent auto-generate more tests

Similar to the initial **AI test generation** during project setup, you can use the generate more feature to expand your test case suite.
Similar to the initial **AI test generation** during project setup, you can use the `generate more` feature to expand your test case suite.

Go into the test case view and hover over the `generate more` icon - a button will appear. Click on it.

@@ -17,7 +17,7 @@ Go into the test case view and hover over the `generate more` icon - a button wi
/>
</Frame>

Our AI agent will generate up to 3 new tests following up on the test case you launched it from. The original test will be added as a **dependency** automatically. It will auto-generate steps for every generated test case.
Our AI agent will generate up to 3 new tests following up on the test case you launched it from. The original test will be added as a **dependency** automatically. It will start generating steps for every generated test case.

You can follow the progress of these newly created tests in the stack.

@@ -28,7 +28,7 @@ You can follow the progress of these newly created tests in the stack.
/>
</Frame>

For each test, the AI agent has auto-generated test steps, validated them and where the validation succeeded, it turned the test **ON** - [into active mode](/execute-test-cases).
At the end of the agent run, the AI agent should have auto-generated test steps and validated them for each test case. Where validation succeeded, it turned the test **ON** - [into active mode](/execute-test-cases).

Help the AI agent when it couldn't quite nail the auto-generation - a **yellow alert** highlights a failed step generation. See how in the [edit test case](/edit-test-case) section.

Binary file added images/editing/active-test-cases.png
Binary file added images/editing/set-as-active.png
Binary file modified images/executing/schedule.png
Binary file modified images/setup/setup-4-overview.png
Binary file modified images/setup/setup-8-first-tests.png
2 changes: 1 addition & 1 deletion maintain-tests.mdx
@@ -1,5 +1,5 @@
---
title: Maintain your tests
title: Handle broken tests
description: "How to fix your broken tests easily"
icon: "traffic-cone"
---
25 changes: 25 additions & 0 deletions mark-tests-as-active.mdx
@@ -0,0 +1,25 @@
---
title: Mark tests as active
description: "How to decide what to include in your test suite"
icon: "toggle-on"
---

## Deciding on which parts of your test suite are executed

The more your test suite grows, the more important it becomes to keep an overview of which of your tests are executed and included in the newest test report.
In Octomind, we speak of `active` test cases: all the tests that run when a test report is triggered.

<Frame caption="Active test cases, screenshot 10/2024">
<img src="/images/editing/active-test-cases.png" alt="Active test cases" />
</Frame>

### Activate your test case

Our agent sets newly created tests to active by default, but you can always change the active status to your liking. Use the toggle button to switch the test between `ON` and `OFF`.

<Frame caption="Toggle button to activate your test case, screenshot 07/2024">
<img
src="/images/editing/set-as-active.png"
alt="Toggle button to activate test"
/>
</Frame>
3 changes: 2 additions & 1 deletion mint.json
@@ -87,7 +87,8 @@
"group": "Maintain Your Tests",
"pages": [
"edit-test-case",
"maintain-tests"
"maintain-tests",
"mark-tests-as-active"
]
},
{
2 changes: 1 addition & 1 deletion more-info/faq.mdx
@@ -55,7 +55,7 @@ You can also run us [programmatically without using a CI](/execution-without-ci)
## 9. How can I get in touch with you?

Either use [our discord server](https://discord.gg/3ShnZMKRfA)
or [write us an email](mailto:[email protected])
or [write us an email](mailto:[email protected]).

## 10. From which IP addresses are your tests run?

19 changes: 4 additions & 15 deletions new-test-case.mdx
@@ -6,7 +6,7 @@ icon: "plus"

## Prompt our AI agent to generate new tests

The fastest way to create a new bespoke test is to have it AI generated from your prompt. You give us a short prompt and we deploy our `AI agent` to find the interactions leading towards a desired user flow.
If you want to translate a custom user flow into a test, you can have it AI-generated from your prompt. Give us a short prompt and we deploy our `AI agent` to find the interactions leading towards your desired user flow.

<Frame caption="Create new test case by prompting the AI agent, screenshot 07/2024">
<img
@@ -26,8 +26,8 @@ While our `AI agent` generates the steps of your test cases, it informs you abou

If the AI agent generated wrong steps or signals a failed step with a yellow alert, you can:

1. **Restart the AI generation process.** Select the last step you want to keep. All subsequent steps will be replaced with the new AI agent output.
2. Try a **different prompt** and restart the AI generation process.
1. **Restart the AI generation process by clicking `regenerate steps`.** Select the last step you want to keep. All subsequent steps will be replaced with the new AI agent output.
2. Try a **different prompt** and restart the AI generation process by clicking `regenerate steps`.
3. **Add, edit and remove steps manually.** Our virtual locator picker helps you edit tests in no time. Learn how to [edit test steps](/edit-test-case.mdx).

<Frame caption="A generated test step failed, highlighted by yellow alert, screenshot 07/2024">
@@ -61,7 +61,7 @@ This test is also created on the fly for a **new project**, if we detect a login

A user flow is virtually a chain of test cases. When using the AI agent, you can use a dependency to chain test cases together. Shorter flows are faster, more specific and more reliable.

We fill in some dependencies for you set-up Octomind. They are the **cookies banner test** if cookie banners are used and **required login test** if you need to be logged in to operate the app.
We fill in some dependencies for you when you set up Octomind. These are the **cookies banner test** (in case your site has a cookie banner) and the **required login test** if you need to be logged in to operate the app.
You can keep, remove or add other dependencies if you wish.

This is how you do it:
@@ -100,17 +100,6 @@ This is how you do it:
- We cannot handle captchas yet.
- Our AI agent might get blocked by robot detection on some high traffic sites. Sites / apps in production are more bot-protected than staging / test systems.

### Activate your test case

**Check if your test is `on`.** Use the toggle button to switch the test to "on". This means the test is active - it was added to your active test suite and will run when a test report is triggered.

<Frame caption="Toggle button to activate your test case, screenshot 07/2024">
<img
src="/images/prompting/toggle-button.png"
alt="Toggle button to activate test"
/>
</Frame>

## Record a test case

For more manual control and for cases that the AI model struggles with, we offer the option to enter code directly. Ideally, produce your code with Playwright Codegen.
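
For illustration, code entered this way might look like the minimal Playwright sketch below. This is an assumption about the expected shape of Codegen output, not part of the original docs; the URL, labels and assertion are placeholders.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical output of Playwright Codegen for a simple login flow.
// The URL, field labels and success heading are placeholders.
test('user can log in', async ({ page }) => {
  await page.goto('https://staging.example.com/login');
  await page.getByLabel('Email').fill('qa@example.com');
  await page.getByLabel('Password').fill('super-secret');
  await page.getByRole('button', { name: 'Log in' }).click();
  await expect(page.getByRole('heading', { name: 'Welcome back' })).toBeVisible();
});
```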
2 changes: 1 addition & 1 deletion scheduled-execution.mdx
@@ -12,7 +12,7 @@ There, you can pick between the following run intervals:
- weekly
- bi-weekly

<Frame caption="Scheduling test run, screenshot 07/2024">
<Frame caption="Scheduling test run, screenshot 10/2024">
<img
src="/images/executing/schedule.png"
alt="scheduling regular test runs"
2 changes: 1 addition & 1 deletion test-case-creation-strategy.mdx
@@ -1,5 +1,5 @@
---
title: Best Practices
title: Best practices
description: "How to nail creating new tests for your project"
icon: "lightbulb"
---
