many links fixed and vids added #346

Merged
merged 1 commit into from
Dec 12, 2024
2 changes: 2 additions & 0 deletions account-view/environments.mdx
@@ -36,3 +36,5 @@ In the new environment setup:
1. Name the Environment – Give a unique name to differentiate it from others.
2. Set the Target URL – Specify the URL which describes the "default" deployment of this environment - you can still override the URL at runtime.
3. Optional: Add Credentials – Provide a username and password if the URL requires login.

To clarify the difference between an `environment` and a `project` we created this [short guide](/guides/project-environments.mdx).
15 changes: 2 additions & 13 deletions account-view/projects.mdx
@@ -6,7 +6,7 @@ icon: "plus"

A project is a collection of test cases and test reports tied to a website or web app. The **test target URL** you define for it is used as a basis for generating test steps. You can extend it to subpaths of your website during test case creation.

You can run your test **against a different URL** while keeping the subpaths - the URL is "templated". This allows you to [run your test cases against multiple deployments](/execute-test-cases#1-trigger-a-test-run) of the same application.
You can run your test **against a different URL** while keeping the subpaths - the URL is "templated". This allows you to [run your test cases against multiple deployments](/get-started/execute-test-cases#1-trigger-a-test-run) of the same application.

<Frame caption="test target url of a project, 08/2024">
<img
@@ -64,15 +64,4 @@ In the upper right corner of the project main view, click on the `more` three do
/>
</Frame>

## Page purpose

One of our AI agents is specialized in assessing the purpose of a web page. This is a crucial piece of information that helps us identify critical user flows to be thoroughly end-to-end tested.

Rate the agent's description of your page to improve it by giving it a `thumbs up` or `thumbs down`.

<Frame caption="AI agent assessing the purpose of your page, 08/2024">
<img
src="/images/accounts/page-purpose.gif"
alt="AI agent assessing the purpose of a web app"
/>
</Frame>
To clarify the difference between an `environment` and a `project` we created this [short guide](/guides/project-environments.mdx).
2 changes: 1 addition & 1 deletion account-view/test-cases.mdx
@@ -6,7 +6,7 @@ icon: "database"

A **test case** is a set of steps that mimic a user flow. You can find an overview of all your test cases in the `test case` section of the `project overview` page or in the `test cases` page accessible from the left sidebar.

It is the place where you can add, edit or delete test cases. From here, you can also grow your test suite by [having our AI agent generate new tests](/generate-more-test-cases) based on specific test cases.
It is the place where you can add, edit or delete test cases. From here, you can also grow your test suite by [having our AI agent generate new tests](/get-started/generate-more-test-cases) based on specific test cases.

<Frame caption="'test cases' section in 'project overview', 08/2024">
<img
8 changes: 4 additions & 4 deletions account-view/test-reports.mdx
@@ -19,7 +19,7 @@ Test reports are tied to a single target execution URL, so one specific deployme

## Scheduled test runs

If you want to [run your tests on a fixed schedule](/scheduled-execution), the `test reports` section in the `project overview` gives you the option of a daily, weekly or bi-weekly interval.
If you want to [run your tests on a fixed schedule](/get-started/scheduled-execution), the `test reports` section in the `project overview` gives you the option of a daily, weekly or bi-weekly interval.

This ensures that your app works flawlessly over time. We will send you an email notification if any of your tests fail.

@@ -29,7 +29,7 @@ This ensures that your app works flawlessly over time. We will send you an email

## Test reports in your CI/CD pipeline

If you integrated Octomind into your [CI pipeline](/integrations-overview), we will comment test results back in your pull request after a completed test run.
If you integrated Octomind into your [CI pipeline](/integrations/integrations-overview), we will comment test results back in your pull request after a completed test run.

<Frame caption="Example of Octomind test results in a commit comment, 09/2023">
<img
@@ -82,7 +82,7 @@ It will give showcase test step snapshots to understand what the execution did e
/>
</Frame>

The `failure` state indicates that your app was not working correctly when executing the test. We will help you understand what went wrong and debug either the app or your test itself if a [test is red](/execute-test-cases#3-why-is-a-test-red).
The `failure` state indicates that your app was not working correctly when executing the test. We will help you understand what went wrong and debug either the app or your test itself if a [test is red](/get-started/execute-test-cases#3-why-is-a-test-red).

## Debugging a failed test

@@ -105,4 +105,4 @@ The first step when debugging should be inspecting the execution using the `insp
/>
</Frame>

To run your test on your own machine against your local dev environment, use the `run locally` button. Find out [how to run tests locally and debug using our open source Debugtopus](/debugtopus).
To run your test on your own machine against your local dev environment, use the `run locally` button. Find out [how to run tests locally and debug using our open source Debugtopus](/get-started/debugtopus).
15 changes: 13 additions & 2 deletions advanced/2fa.mdx
@@ -4,7 +4,18 @@ description: "Logging in with a second factor"
icon: "mobile"
---

# One-Time-Password handling
## One-Time-Password handling

<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/h_3KefBcVyQ?si=x8djQYmEbNJdTsVd"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>

Octomind supports logins that require a 2nd factor using a so-called one-time-password (OTP). See an example of an
app implementing an OTP login.
@@ -49,4 +60,4 @@ to the "environments" section. Then enter the initialization key that you copied
</Frame>

Now both in prompts and test case steps you will be able to use the template <pre>$OCTO_TOTP</pre>. Find out more about
variable usage in our [variables documentation](/variables).
variable usage in our [variables documentation](/advanced/variables).
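
As a rough illustration of what `$OCTO_TOTP` resolves to, here is a sketch that derives a standard time-based one-time password from an enrollment secret. It uses the open source `otplib` package and a placeholder key; it is not Octomind code.

```ts
// Illustrative only - not Octomind internals.
// Derives a time-based one-time password (TOTP) from an enrollment secret.
import { authenticator } from "otplib";

// Placeholder for the base32 initialization key shown during 2FA enrollment.
const initializationKey = "JBSWY3DPEHPK3PXP";

// A template like $OCTO_TOTP resolves to the current 6-digit code at
// execution time; the code rotates roughly every 30 seconds.
const code = authenticator.generate(initializationKey);
console.log(`current OTP: ${code}`);
```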
13 changes: 12 additions & 1 deletion advanced/basic-authentication.mdx
@@ -16,4 +16,15 @@ In settings you can enable basic authentication per environment.
</Frame>

If you have a basic authentication protected environment, just create a new environment, enable basic authentication
and provide username and password.
and provide username and password. Check out this short video on how to set it up:

<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/WkHudQnMgjU?si=fDAhSsq3hGsRNXWG"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>
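
For context, HTTP basic authentication simply sends the stored credentials as a base64-encoded `Authorization` header with every request. A minimal generic sketch (not Octomind-specific code; hostname and credentials are placeholders):

```ts
// Generic illustration of HTTP basic authentication - not Octomind internals.
const username = "staging-user";     // placeholder credentials
const password = "staging-secret";

// Credentials are joined with ":" and base64-encoded into the header value.
const authHeader =
  "Basic " + Buffer.from(`${username}:${password}`).toString("base64");

const response = await fetch("https://staging.example.com/", {
  headers: { Authorization: authHeader },
});
console.log(response.status); // 200 once the credentials are accepted
```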
4 changes: 2 additions & 2 deletions advanced/private-location.mdx
@@ -18,12 +18,12 @@ The worker acts as a combination of a proxy and tunneling mechanism, connecting

### Tunneling with FRP

The tunneling is managed using [FRP](https://github.com/fatedier/frp), connecting the workers frp-client to Octominds proxy servers (EU or US).
The tunneling is managed using [FRP](https://github.com/fatedier/frp), connecting the worker's frp-client to Octomind's proxy servers (EU or US).
It requires access permission to specific public IPs: `35.192.162.70`, `34.159.153.198` or `34.129.193.156`.

### Access to private web applications

The embedded Squid proxy server allows requests from the private network, mimicking public proxy requests from within the customers network.
The embedded Squid proxy server allows requests from the private network, mimicking public proxy requests from within the customer's network.

### Test execution and generation

15 changes: 14 additions & 1 deletion advanced/variables.mdx
@@ -6,6 +6,19 @@ icon: "square-root-variable"

Create custom variables that we will fill in for you. Reference them in the **visual locator picker** or when **prompting the AI agent** using `$` (e.g. $firstname) by adjusting the `enter text` step.

<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/Oww0XDhA0XA?si=SrvRn4vtPGfjGTCK"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>

<div class="mt-8" />

<Frame caption="Example of custom variables, 11/2024">
<img
src="/images/advanced/variables.png"
@@ -45,4 +58,4 @@ We use also several predefined variables that do not show up in the default list
- `$OCTO_URL`: the URL of your page
- `$OCTO_STABLE_UUID`: a random combination of letters and numbers that will be consistent through multiple references across a single run, but different if you run the agent again - full uuid of 36 characters
- `$OCTO_STABLE_UUID_SHORT`: a random combination of letters and numbers that will be consistent through multiple references across a single run, but different if you run the agent again - maximum length of 8 characters.
- `$OCTO_TOTP`: 2-Factor Authentication code if you require it for the login. Find out more about [2-FA enrollment](./2fa)
- `$OCTO_TOTP`: 2-Factor Authentication code if you require it for the login. Find out more about [2-FA enrollment](/advanced/2fa)
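
To make the stable-UUID behaviour above concrete, here is a conceptual sketch (not how Octomind implements it): the value is generated once per run and every reference resolves to that same value, so you can build consistent test data such as unique e-mail addresses.

```ts
// Conceptual sketch of $OCTO_STABLE_UUID / $OCTO_STABLE_UUID_SHORT - not Octomind code.
import { randomUUID } from "node:crypto";

const stableUuid = randomUUID();                 // 36 characters, fixed for this run
const stableUuidShort = stableUuid.slice(0, 8);  // at most 8 characters

// Every reference within the same run resolves to the same value:
const email = `qa+${stableUuidShort}@example.com`;
const username = `user-${stableUuidShort}`;

// A new run generates a different UUID, giving fresh test data each time.
```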
2 changes: 1 addition & 1 deletion data-governance/no-code-access.mdx
@@ -57,6 +57,6 @@ If you want us to sign an NDA you can find all information in our [NDA section](
## Run Octomind tests locally

We provide an option to run test cases locally from your dev machine against any test target with an open source tool called Debugtopus. You can either run a single test
case or all of them at once. To do so, please check out the [run tests locally and debug](/debugtopus) section.
case or all of them at once. To do so, please check out the [run tests locally and debug](/get-started/debugtopus) section.

Since this component is running on your local machine its code is open sourced so that you can run an audit on it. Check out the [Debugtopus repository on GitHub](https://github.com/OctoMind-dev/debugtopus).
13 changes: 13 additions & 0 deletions get-started/debugtopus.mdx
@@ -79,3 +79,16 @@ Now the Playwright UI is shown.
alt="Playwright UI screenshot"
/>
</Frame>

Watch this video for more insight on how to run Octomind tests:

<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/j5w5ylOyW28?si=8-naTzE211JAeanl"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>
14 changes: 7 additions & 7 deletions get-started/execute-test-cases.mdx
@@ -12,7 +12,7 @@ To check if there are any bugs in your new build or in production preventing you

### 1. Trigger a test run

To run all active tests instantly, click `run all` and then `in app`. This will trigger a test run resulting in a [test report](/test-reports.mdx) visible in the Octomind app. We run the tests in parallel to make sure the run completes in under 20 minutes, regardless of the number of your active test cases.
To run all active tests instantly, click `run all` and then `in app`. This will trigger a test run resulting in a [test report](/account-view/test-reports.mdx) visible in the Octomind app. We run the tests in parallel to make sure the run completes in under 20 minutes, regardless of the number of your active test cases.

Useful for **branch deployments** - you can run your tests against any other accessible URL. Select `in app on a different URL` instead.

@@ -36,14 +36,14 @@ Useful for **branch deployments** - you can run your tests against any other acc

Every test run will produce a **test report** where you will see all test results. They will tell you if everything runs as it's supposed to or if something is broken.

Go to the [test report](/test-reports.mdx) section to learn about test reports.
Go to the [test report](/account-view/test-reports.mdx) section to learn about test reports.

### 3. Why is a test red?

Red signals the test has failed. This might have 3 reasons:

1. You have a bug in your app
We will pinpoint the exact moment where it broke. Swipe through the **report snapshots** or launch the **trace viewer** for more details. For even more precise debugging, we created the open source [Debugtopus](/debugtopus.mdx) to use locally on your machine.
We will pinpoint the exact moment where it broke. Swipe through the **report snapshots** or launch the **trace viewer** for more details. For even more precise debugging, we created the open source [Debugtopus](/get-started/debugtopus.mdx) to use locally on your machine.

<Frame caption="Snapshots in test reports showing what happened during the test run, 7/2024">
<img
@@ -79,7 +79,7 @@

## Schedule regular test runs

The point of software testing is to test regularly. You can [schedule tests](/scheduled-execution.mdx) by clicking the `schedule` button in the `project overview`.
The point of software testing is to test regularly. You can [schedule tests](/get-started/scheduled-execution.mdx) by clicking the `schedule` button in the `project overview`.
This is a great strategy for synthetic monitoring of your app in production.

<Frame caption="Scheduling test runs 7/2024">
@@ -88,12 +88,12 @@ This is a great strategy for synthetic monitoring of your app in production.

## Trigger test runs via curl command

If you do not use pipelines and want to manually trigger the test execution from outside our app, e.g. from your terminal, you can do it with a cURL command. [Learn how.](/execution-without-ci.mdx)
If you do not use pipelines and want to manually trigger the test execution from outside our app, e.g. from your terminal, you can do it with a cURL command. [Learn how.](/get-started/execution-without-ci.mdx)
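
As a rough sketch of what such a trigger looks like programmatically (the real endpoint, header and payload are documented in the linked guide and the API Reference — the host, path and field names below are placeholders, not the authoritative contract):

```ts
// Hypothetical sketch of triggering a test run over HTTP - placeholders only,
// see the "execution without CI" guide / API Reference for the real contract.
const response = await fetch("https://<octomind-api-host>/<execute-endpoint>", {
  method: "PUT",
  headers: {
    "X-API-Key": process.env.OCTOMIND_API_KEY ?? "", // API key created in the app
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    testTargetId: "<your-test-target-id>",  // which project / test target to run
    url: "https://staging.example.com",     // deployment URL to run the tests against
  }),
});
console.log(response.status, await response.json());
```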

## Trigger test runs from your CI

Integrating our tests to your CI pipeline is a great way to make sure you didn't break the app with new releases. Learn [how to integrate Octomind into your CI/CD](/integrations-overview.mdx).
Integrating our tests to your CI pipeline is a great way to make sure you didn't break the app with new releases. Learn [how to integrate Octomind into your CI/CD](/integrations/integrations-overview.mdx).

## Run your tests locally

Octomind tests are written in standard Playwright code. You can download it and run the test locally. This is how you use our open source [Debugtopus](/debugtopus.mdx) to do so.
Octomind tests are written in standard Playwright code. You can download it and run the test locally. This is how you use our open source [Debugtopus](/get-started/debugtopus.mdx) to do so.
13 changes: 13 additions & 0 deletions get-started/execution-without-ci.mdx
@@ -37,3 +37,16 @@ See the [API Reference](/api-reference) or a reference implementation in our [Gi
### Create an API key

<Snippet file="snippet-apiKey.mdx" />

Watch this video for more insight on how to run Octomind tests:

<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/j5w5ylOyW28?si=8-naTzE211JAeanl"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>
8 changes: 4 additions & 4 deletions get-started/first-steps.mdx
@@ -79,7 +79,7 @@ Here comes the really cool part. Once we finished searching for a potential log-

## 6. Check your site against the generated test suite

We will [execute your generated test cases](/execute-test-cases.mdx) and create several test reports containing your test results. These will ensure that they successfully pass when executed on your site.
We will [execute your generated test cases](/get-started/execute-test-cases.mdx) and create several test reports containing your test results. These will ensure that they successfully pass when executed on your site.

<Frame caption="Automatically executed test reports - screenshot 10/2024">
<img
@@ -95,7 +95,7 @@ Inside each test report, you can find the test results for the executed test cas
- a green test result indicates a successful test run, meaning that your site passed the test described in the test case
- a red test result indicates a test failure, meaning we could not successfully run the test case steps. Click on it to see in which step in the app that is broken.

Find out more about [test reports](/test-reports.mdx) and [debugging your tests](/debugtopus.mdx).
Find out more about [test reports](/account-view/test-reports.mdx) and [debugging your tests](/get-started/debugtopus.mdx).

<Frame caption="First test report - screenshot 07/2024">
<img
@@ -128,6 +128,6 @@ Functioning tests are turned `on` which means they will run once you trigger a *

## Next steps

Use auto-generation to [generate more test cases](/generate-more-test-cases.mdx) based off existing ones or prompt our AI agent [to create new ones](/new-test-case.mdx).
Use auto-generation to [generate more test cases](/get-started/generate-more-test-cases.mdx) based off existing ones or prompt our AI agent [to create new ones](get-started/new-test-case.mdx).

If you are happy with the test cases we generated for you, you can [set up scheduling](/scheduled-execution) to periodically run your tests and ensure your site doesn't break.
If you are happy with the test cases we generated for you, you can [set up scheduling](/get-started/scheduled-execution) to periodically run your tests and ensure your site doesn't break.
4 changes: 2 additions & 2 deletions get-started/generate-more-test-cases.mdx
@@ -28,9 +28,9 @@ You can follow the progress of these newly created tests in the stack.
/>
</Frame>

At the end of the agent run, the AI agent should have auto-generated test steps and validated them for each test case. For those where validation succeeded, it turned the test **ON** - [into active mode](/execute-test-cases).
At the end of the agent run, the AI agent should have auto-generated test steps and validated them for each test case. For those where validation succeeded, it turned the test **ON** - [into active mode](/get-started/execute-test-cases).

Help the AI agent when it couldn't quite nail the auto-generation - **yellow alert** highlights a failed step generation. See how in the [edit test case](/edit-test-case) section.
Help the AI agent when it couldn't quite nail the auto-generation - **yellow alert** highlights a failed step generation. See how in the [edit test case](/get-started/edit-test-case) section.

<Frame caption="New generated tests, 10/2024">
<img src="/images/expand/new-generated-tests.png" alt="New generated tests" />
15 changes: 14 additions & 1 deletion get-started/new-test-case.mdx
@@ -28,7 +28,7 @@ If the AI agent generated wrong steps or signals a failed step with a yellow ale

1. **Restart the AI generation process by clicking `regenerate steps`.** Select the last step you want to keep. All subsequent steps will be replaced with the new AI agent output.
2. Try a **different prompt** and restart the AI generation process by clicking `regenerate steps`.
3. **Add, edit and remove steps manually.** Our virtual locator picker helps you edit tests in no time. Learn how to [edit test steps](/edit-test-case.mdx).
3. **Add, edit and remove steps manually.** Our virtual locator picker helps you edit tests in no time. Learn how to [edit test steps](/get-started/edit-test-case.mdx).

<Frame caption="A generated test step failed, highlighted by yellow alert, screenshot 07/2024">
<img
@@ -57,6 +57,19 @@ This test is also created on the fly for a **new project**, if we detect a login
/>
</Frame>

Or check out this video instead:

<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/zUMeZ7sXdqc?si=a1nm8Pv_Hez3HNjK"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>

### Chaining tests

A user flow is virtually a chain of test cases. When using the AI agent, you can use a dependency to chain test cases together. Shorter flows are faster, more specific and more reliable.