Docs update before ph pt.1 (#282)
* first draft of rework

* Remove more instances of "discover"

* Update screenshots and copy

* remove further mentions of discover and generate

* lint

* Swap order
fabiomaienschein authored Oct 10, 2024
1 parent 3aeb658 commit 7f70fd8
Showing 26 changed files with 158 additions and 147 deletions.
2 changes: 1 addition & 1 deletion account-view/test-cases.mdx
Original file line number Diff line number Diff line change
@@ -6,7 +6,7 @@ icon: "database"

A **test case** is a set of steps that mimic a user flow. You can find an overview of all your test cases in the `test case` section of the `project overview` page or in the `test cases` page accessible from the left sidebar.

It is the place where you can add, edit or delete test cases. From here, you can also grow your test suite by [having our AI agent suggest new tests](/new-test-case#have-ai-agent-suggest-and-auto-generate-more-tests) based on specific test cases.
It is the place where you can add, edit or delete test cases. From here, you can also grow your test suite by [having our AI agent generate new tests](/generate-more-test-cases) based on specific test cases.

<Frame caption="'test cases' section in 'project overview', 08/2024">
<img
13 changes: 13 additions & 0 deletions changelog.mdx
@@ -4,6 +4,19 @@ description: "All the nice stuff you asked for"
icon: "sparkles"
---

## 2024-10-07

- new `pro plan`: you can now upgrade to our pro plan for unlimited projects, test plans and use of our AI features.

## 2024-09-24

- `Test Report Overview`: test reports have a new home: the test report overview. Here you can see past test reports, start new ones and schedule them.

## 2024-09-12

- `Dependency View`: There's a new way to see your test cases: the dependency view! Now you can visually grasp which test cases depend on other test cases (when executed).
- `Private Location Worker`: If you want to test a webapp that is not publicly available, we have something ready for you: [private location worker](/proxy/private-location.mdx)

## 2024-09-04

- `Editing` - We added **force click** and **right click** interactions to the visual locator picker.
2 changes: 1 addition & 1 deletion edit-test-case.mdx
@@ -72,7 +72,7 @@ Octomind test steps can be flexibly manipulated. You can **drag and drop** steps

You can edit the prompt and have the AI agent `regenerate` test steps based on the new prompt. We will ask you to select **the last step** in the sequence you want to keep. The AI agent will regenerate test steps from this step onwards.

Or you have simply hit the `regenerate` button to rediscover the steps based on the same prompt.
Or you can simply hit the `regenerate` button to regenerate the steps based on the same prompt.

<Frame caption="Edit prompt and restart AI auto-generation, 7/2024">
<img
91 changes: 42 additions & 49 deletions first-steps.mdx
@@ -1,14 +1,14 @@
---
title: First steps
description: "Generate your first tests and run them. Find bugs in your app before your user does."
description: "Generate your first tests and run them. Find bugs in your app before your users do."
icon: "lightbulb"
---

## 1. Give us a URL

We'll ask for a URL to create test cases. The URL has to be publicly accessible. We can test both in staging and in production, as long as we can access the site.
We'll ask for a URL to generate test cases. The URL has to be publicly accessible. We can test both in staging and in production, as long as we can access the site.

<Frame caption="First page of the setup flow - link to your website, screenshot 07/2024">
<Frame caption="First page of the setup flow - link to your website, screenshot 10/2024">
<img src="/images/setup/setup-1-url.png" alt="enter your url screen" />
</Frame>

@@ -30,21 +30,23 @@ But you can choose your own.

Now we need to sign you up. Please, give us your email, so we can get in touch.

<Frame caption="Third page of the setup flow - create account, screenshot 07/2024">
<Frame caption="Third page of the setup flow - create account, screenshot 10/2024">
<img
src="/images/setup/setup-3-create-account.png"
alt="create account, screenshot 08/2023"
alt="create account, screenshot 10/2024"
/>
</Frame>

You should receive a confirmation email that contains a link for you to confirm your email and set your password.

## 4. Open the Octomind app for the first time

When you sign in to Octomind app again, you will land on your **project overview** page.
After being redirected back to [app.octomind.dev](https://app.octomind.dev/), you will land on your **project overview** page.

<Frame caption="Project overview page after sign-up, screenshot 07/2024">
<Frame caption="Project overview page after sign-up, screenshot 10/2024">
<img
src="/images/setup/setup-4-overview.png"
alt="Opening the Octomind app on project overview for the first time, screenshot 07/2024"
alt="Opening the Octomind app on project overview for the first time, screenshot 10/2024"
/>
</Frame>

@@ -64,77 +66,68 @@ These credentials are only usable for username/password logins not for social lo
/>
</Frame>

## 5. Discover more test cases
## 5. We are auto-generating test cases for you

Here comes the really cool part. You can let the AI agent browse through your site to discover and generate more tests. Click on `discover`.
Here comes the really cool part. Once we have finished searching for a potential login and cookie banner test, we start generating 3 test cases for you automatically. You can follow the generation progress in the stack:

<Frame caption="Clicking this button will let our AI agent discover test cases, screenshot 07/2024">
<Frame caption="Stack showing ongoing AI-tasks, screenshot 10/2024">
<img
src="/images/setup/setup-6-discover.png"
alt="AI discovery button, screenshot 07/2024"
src="/images/setup/setup-6-stack.png"
alt="Stack showing ongoing AI-tasks, screenshot 10/2024"
/>
</Frame>

When it's done, the AI agent will let you know. You will find your tests on the `test cases` page.
## 6. Check your site against the generated test suite

We will [execute your generated test cases](/execute-test-cases.mdx) and create several test reports containing your test results. These runs verify that the generated test cases pass when executed on your site.

<Frame caption="AI agent discovered test cases and you will find them in the test cases view, screenshot 07/2024">
<Frame caption="Automatically executed test reports - screenshot 10/2024">
<img
src="/images/setup/setup-7-discovery-done.png"
alt="AI discovery has finished, screenshot 07/2024"
src="/images/setup/setup-9-run-tests.png"
alt="Automatically executed test reports - screenshot 10/2024"
/>
</Frame>

## 6. View your AI auto-generated test cases
## 7. Evaluate your test results

You should have your **first active test cases** generated. The AI agent has discovered the test cases, it auto-generated the test steps and has run them to validate whether they work.
Inside each test report, you can find the test results for the executed test cases:

They are `on` which means they will run once you trigger a **test report**.

<Frame caption="First active test cases discovered and auto-generated - screenshot 07/2024">
<img
src="/images/setup/setup-8-first-tests.png"
alt="First active test cases AI discovered and AI auto-generated - screenshot 07/2024 "
/>
</Frame>
- a green test result indicates a successful test run, meaning that your site passed the test described in the test case
- a red test result indicates a test failure, meaning we could not successfully run the test case steps. Click on it to see which step in the app is broken.

If the agent stumbled over a test step, you might have to lend a hand to review and [edit a test step](/edit-test-case.mdx).
Find out more about [test reports](/test-reports.mdx) and [debugging your tests](/debugtopus.mdx).

<Frame caption="Test cases that need review, discovered and auto-generated - screenshot 07/2024">
<Frame caption="First test report - screenshot 07/2024">
<img
src="/images/setup/setup-8-review-steps.png"
alt="Test cases that need review, AI discovered and AI auto-generated - screenshot 07/2024 "
src="/images/setup/setup-10-test-report.png"
    alt="First test report - screenshot 07/2024"
/>
</Frame>

## 7. Test your website for the first time
## 8. Go to test cases to generate more tests

Once you have active test cases - they are **on**, you can **run** them. Create your first test report. Click on `run all` button and then `run in app` and see if they found a bug.
You can grow your test suite by adding more test cases. For this, you can jump straight into the test case view using the "go to test cases" button in the test report.

<Frame caption="Run your tests for the first time - screenshot 07/2024">
<Frame caption="Go to the test case section to see the generated test cases - screenshot 10/2024">
<img
src="/images/setup/setup-9-run-tests.png"
alt="Run your first test report - screenshot 07/2024"
src="/images/setup/setup-11-go-to-test-cases.png"
alt="Go to the test case section to see the generated test cases - screenshot 10/2024 "
/>
</Frame>

## 8. See your first test results
You should have your **first active test cases** generated. The AI agent has found several test cases, auto-generated the test steps and run them to validate whether they work.

You have created your first test report, congratulations! If your test is red we will pinpoint you to step in the app that is broken. Find out more about [test reports](/test-reports.mdx) and [debugging your tests](/debugtopus.mdx).
Functioning tests are turned `on`, which means they will run once you trigger a **test report**.

<Frame caption="First test report - screenshot 07/2024">
<Frame caption="First active test cases auto-generated - screenshot 07/2024">
<img
src="/images/setup/setup-10-test-report.png"
alt="First first test report - screenshot 07/2024"
src="/images/setup/setup-8-first-tests.png"
alt="First active test cases AI auto-generated - screenshot 07/2024 "
/>
</Frame>

## 9. Create more tests
## Next steps

You can grow your test suite by adding more test cases. Prompt our AI agent or use the test recorder [to create new ones](/new-test-case.mdx).
Use auto-generation to [generate more test cases](/generate-more-test-cases.mdx) based on existing ones or prompt our AI agent [to create new ones](/new-test-case.mdx).

<Frame caption="Adding new test case by prompting - screenshot 07/2024">
<img
src="/images/setup/setup-11-new-test-case.png"
alt="Adding new test case by prompting 07/2024"
/>
</Frame>
If you are happy with the test cases we generated for you, you can [set up scheduling](/scheduled-execution) to periodically run your tests and ensure your site doesn't break.
37 changes: 37 additions & 0 deletions generate-more-test-cases.mdx
@@ -0,0 +1,37 @@
---
title: AI-generate more tests
description: "Use AI generation to add more test cases to your test suite."
icon: "code-branch"
---

## Have the AI agent auto-generate more tests

Similar to the initial **AI test generation** during project setup, you can use the `generate more` feature to expand your test suite.

Go into the test case view and hover over the `generate more` icon - a button will appear. Click on it.

<Frame caption="Have the AI agent generate more tests, 10/2024">
<img
src="/images/expand/generate-more.png"
alt="have AI generate more tests"
/>
</Frame>

Our AI agent will generate up to 3 new tests following up on the test case you launched it from. The original test will be added as a **dependency** automatically. The agent will auto-generate steps for every generated test case.

You can follow the progress of these newly created tests in the stack.

<Frame caption="AI agent informs what it does, 10/2024">
<img
src="/images/expand/generating-more-agent-running.png"
alt="AI agent status indicator"
/>
</Frame>

For each test, the AI agent has auto-generated test steps, validated them and, where the validation succeeded, turned the test **ON** - [into active mode](/execute-test-cases).

Help the AI agent when it couldn't quite nail the auto-generation - **yellow alert** highlights a failed step generation. See how in the [edit test case](/edit-test-case) section.

<Frame caption="New generated tests, 10/2024">
<img src="/images/expand/new-generated-tests.png" alt="New generated tests" />
</Frame>
Binary file added images/expand/generate-more.png
Binary file added images/expand/generating-more-agent-running.png
Binary file added images/expand/new-generated-tests.png
Binary file modified images/setup/setup-1-url.png
Binary file added images/setup/setup-11-go-to-test-cases.png
Binary file removed images/setup/setup-11-suggest-more.png
Binary file modified images/setup/setup-3-create-account.png
Binary file modified images/setup/setup-4-overview.png
Binary file removed images/setup/setup-6-discover.png
Binary file added images/setup/setup-6-stack.png
Binary file modified images/setup/setup-9-run-tests.png
47 changes: 27 additions & 20 deletions mint.json
@@ -1,26 +1,21 @@
{
"$schema": "https://mintlify.com/schema.json",
"name": "Octomind Docs",

"logo": {
"dark": "/logo/dark.svg",
"light": "/logo/light.svg",
"href": "https://www.octomind.dev/"
},

"analytics": {
"posthog": {
"apiKey": "phc_DZrVg5kgD45m4Au5nFt6m9rykBTg3mAkeHNY3atWNbW",
"apiHost": "https://eu.posthog.com"
}
},

"modeToggle": {
"default": "dark"
},

"favicon": "/logo/favicon.svg",

"colors": {
"primary": "#00E2AA",
"light": "#00CE9B",
@@ -30,13 +25,11 @@
"light": "#FFFFFF"
}
},

"topAnchor": {
"name": "Documentation",
"icon": "octopus-deploy",
"iconType": "solid"
},

"anchors": [
{
"name": "GitHub",
@@ -55,37 +48,47 @@
"icon": "square-terminal"
}
],

"topbarLinks": [
{
"name": "About",
"url": "https://www.octomind.dev/about"
}
],

"topbarCtaButton": {
"name": "Go to app",
"url": "https://app.octomind.dev/setup/url?utm_source=docs&utm_medium=txt-lnk"
},

"navigation": [
{
"group": "Get Started",
"pages": [
"first-steps",
"first-steps"
]
},
{
"group": "Expand Your Test Suite",
"pages": [
"generate-more-test-cases",
"new-test-case",
"edit-test-case",
"test-case-creation-strategy"
]
},
{
"group": "Execute Your Tests",
"pages": [
"execute-test-cases",
"debugtopus",
"scheduled-execution",
"integrations-overview",
"execution-without-ci",
"maintain-tests"
"execution-without-ci"
]
},
{
"group": "Best Practices",
"pages": ["test-case-creation-strategy"]
"group": "Maintain Your Tests",
"pages": [
"edit-test-case",
"maintain-tests"
]
},
{
"group": "CI integrations",
@@ -118,7 +121,9 @@
},
{
"group": "Changelog",
"pages": ["changelog"]
"pages": [
"changelog"
]
},
{
"group": "More Info",
@@ -130,17 +135,19 @@
},
{
"group": "Data Governance",
"pages": ["data-governance/no-code-access", "data-governance/nda"]
"pages": [
"data-governance/no-code-access",
"data-governance/nda"
]
}
],
"feedback": {
"suggestEdit": true,
"raiseIssue": true
},

"footerSocials": {
"twitter": "https://twitter.com/Octomind_dev",
"github": "https://github.com/OctoMind-dev",
"discord": "https://discord.gg/3ShnZMKRfA"
}
}
}
8 changes: 4 additions & 4 deletions more-info/faq.mdx
@@ -12,9 +12,9 @@ We will add building blocks which allow for more demanding scenarios over time,

## 2. How are my tests generated?

We are using our AI agents for test case discovery and test step generation. We'll discover the interaction chain of the test case in an intermediate representation. We'll generate a corresponding Playwright code on the fly and execute on a manual trigger, a schedule or against your pull request.
We are using our AI agents for finding test cases and for generating test steps. We'll generate an interaction chain of the test case in an intermediate representation. We'll then generate corresponding Playwright code on the fly and execute it on a manual trigger, a schedule or against your pull request.

We generate tests on [sign-up](/first-steps#4-open-the-octomind-app-for-the-first-time), when you launch [test discovery](/first-steps#5-discover-more-test-cases) and when you ask our AI agents to [suggest more](http://localhost:3000/new-test-case#have-ai-agent-suggest-and-auto-generate-more-tests) tests.
We generate tests on [sign-up](/first-steps#4-open-the-octomind-app-for-the-first-time), when [we auto-generate 3 initial test cases](/first-steps#5-we-are-auto-generating-test-cases-for-you) and when you ask our AI agents to [generate more](/generate-more-test-cases) tests.
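As a rough mental model of that pipeline (a toy sketch written for this answer - the step shapes, function names and the example URL are all invented and do not reflect Octomind's actual internals), an interaction chain in an intermediate representation can be compiled down to Playwright code like this:

```typescript
// Toy sketch: an invented intermediate representation of an interaction
// chain, plus a compiler that emits Playwright test code from it.
type Step =
  | { kind: "goto"; url: string }
  | { kind: "fill"; label: string; value: string }
  | { kind: "click"; role: string; name: string }
  | { kind: "expectVisible"; text: string };

function compileToPlaywright(title: string, steps: Step[]): string {
  const body = steps.map((s) => {
    switch (s.kind) {
      case "goto":
        return `  await page.goto('${s.url}');`;
      case "fill":
        return `  await page.getByLabel('${s.label}').fill('${s.value}');`;
      case "click":
        return `  await page.getByRole('${s.role}', { name: '${s.name}' }).click();`;
      case "expectVisible":
        return `  await expect(page.getByText('${s.text}')).toBeVisible();`;
    }
  });
  return [`test('${title}', async ({ page }) => {`, ...body, "});"].join("\n");
}

const code = compileToPlaywright("user can log in", [
  { kind: "goto", url: "https://example.com/login" },
  { kind: "fill", label: "Email", value: "user@example.com" },
  { kind: "click", role: "button", name: "Sign in" },
  { kind: "expectVisible", text: "Welcome back" },
]);
console.log(code);
```

The emitted string is an ordinary `@playwright/test` test, which is why the result can run on any trigger that can run Playwright.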

## 3. What code are you using for your tests?

@@ -27,15 +27,15 @@ End-to-end tests are notoriously flaky. Some of our strategies to fight flakines
- Smart learning based retries
- Active interaction timing (sleeps)
- AI based analysis of unexpected circumstances
- Rediscovery in case of user flow changes
- Regeneration in case of user flow changes
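The retry-plus-timing idea can be sketched in isolation. Below is a minimal retry helper with a growing wait between attempts - an illustration written for this doc; the default attempt count, delays and error handling are assumptions, not Octomind's actual retry logic:

```typescript
// Illustrative only: retry an async action a few times, waiting a bit
// longer before each new attempt, and rethrow the last error on failure.
async function withRetries<T>(
  action: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await action();
    } catch (err) {
      lastError = err;
      // "Active interaction timing": give the page time to settle
      // before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * (i + 1)));
    }
  }
  throw lastError;
}
```

In a Playwright context you would wrap a flaky interaction, e.g. `await withRetries(() => page.getByRole('button', { name: 'Save' }).click())`.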

## 5. How can I run your tests locally?

Our open source tool [Debugtopus](https://github.com/OctoMind-dev/debugtopus) can pull the latest test case from our repository and execute it against your local environment. [Learn how.](/debugtopus)

## 6. How does the auto-maintenance work?

This feature is under active development and not publicly accesible yet. We will follow a playbook to find out if a test failure is caused by a behavioral change of your user flows, the test code itself or a bug in your code.
This feature is under active development and not publicly accessible yet. We will follow a playbook to find out if a test failure is caused by a behavioral change of your user flows, the test code itself or a bug in your code.

In case of a behavioral change, we pinpoint the failing interaction. We apply machine learning to find out what the new desired interaction is to achieve the original goal of the test case.
The interaction chain of this test case will be adjusted permanently to the new behavior as a result.
