add support for behave BDD test framework #278

Open: wants to merge 19 commits into base `master`

Changes from 18 commits
6 changes: 5 additions & 1 deletion README.md
@@ -4,7 +4,7 @@
[![Azure Pipelines CI](https://dev.azure.com/kondratyev-nv/Python%20Test%20Explorer%20for%20Visual%20Studio%20Code/_apis/build/status/Python%20Test%20Explorer%20for%20Visual%20Studio%20Code%20CI?branchName=master)](https://dev.azure.com/kondratyev-nv/Python%20Test%20Explorer%20for%20Visual%20Studio%20Code/_build/latest?definitionId=1&branchName=master)
[![Dependencies Status](https://david-dm.org/kondratyev-nv/vscode-python-unittest-adapter/status.svg)](https://david-dm.org/kondratyev-nv/vscode-python-unittest-adapter)

This extension allows you to run your Python [Unittest](https://docs.python.org/3/library/unittest.html#module-unittest), [Pytest](https://docs.pytest.org/en/latest/) or [Testplan](https://testplan.readthedocs.io/)
This extension allows you to run your Python [Unittest](https://docs.python.org/3/library/unittest.html#module-unittest), [Pytest](https://docs.pytest.org/en/latest/), [Testplan](https://testplan.readthedocs.io/) or [Behave](https://behave.readthedocs.io/en/latest/)
tests with the [Test Explorer UI](https://marketplace.visualstudio.com/items?itemName=hbenl.vscode-test-explorer).

![Screenshot](img/screenshot.png)
@@ -17,6 +17,7 @@ tests with the [Test Explorer UI](https://marketplace.visualstudio.com/items?itemName=hbenl.vscode-test-explorer).
* [Unittest documentation](https://docs.python.org/3/library/unittest.html#module-unittest)
* [Pytest documentation](https://docs.pytest.org/en/latest/getting-started.html)
* [Testplan documentation](https://testplan.readthedocs.io/en/latest/getting_started.html)
* [Behave documentation](https://behave.readthedocs.io/en/latest/)
* Open Test View sidebar
* Run your tests using the ![Run](img/run-button.png) icon in the Test Explorer

@@ -62,6 +63,9 @@ Property | Description
`python.testing.pyTestEnabled` | Whether to enable or disable unit testing using pytest (enables or disables test discovery for Test Explorer).
`python.testing.pytestPath` | Path to pytest executable or a pytest compatible module.
`python.testing.pyTestArgs` | Arguments passed to pytest. Each argument is a separate item in the array.
`python.testing.behaveEnabled` | Whether to enable or disable testing using behave (enables or disables test discovery for Test Explorer).
`python.testing.behavePath` | Path to behave executable.
`python.testing.behaveArgs` | Arguments passed to behave. Each argument is a separate item in the array.
`python.testing.autoTestDiscoverOnSaveEnabled` | When `true` tests will be automatically rediscovered when saving a test file.
`pythonTestExplorer.testFramework` | Test framework to use (overrides Python extension properties `python.testing.unittestEnabled` and `python.testing.pyTestEnabled`).
`pythonTestExplorer.testplanPath` | Relative path to testplan main suite.
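To make the new behave properties concrete, they can be set alongside the existing testing options in `settings.json`. The values below are illustrative examples, not defaults:

```json
{
  "python.testing.behaveEnabled": true,
  "python.testing.behavePath": "behave",
  "python.testing.behaveArgs": [
    "--no-color",
    "--tags=-skip"
  ]
}
```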
4 changes: 3 additions & 1 deletion package.json
@@ -40,7 +40,8 @@
"test",
"testing",
"unittest",
"pytest"
"pytest",
"behave"
],
"scripts": {
"clean": "rimraf out *.vsix **/*.pyc **/__pycache__ **/.pytest_cache **/.some_venv **/.venv",
@@ -119,6 +120,7 @@
"unittest",
"pytest",
"testplan",
"behave",
null
],
"default": null,
5 changes: 4 additions & 1 deletion requirements.txt
@@ -5,4 +5,7 @@ https://github.com/morganstanley/testplan/archive/main.zip; python_version > '3.
plotly
kondratyev-nv marked this conversation as resolved.

# temporary fix for testplan
markupsafe==2.0.1; python_version > '3.6'

# behave test framework
behave
116 changes: 116 additions & 0 deletions src/behave/behaveTestJsonParser.ts
@@ -0,0 +1,116 @@
import * as path from 'path';

import { TestInfo, TestSuiteInfo } from 'vscode-test-adapter-api';
import { TestEvent } from 'vscode-test-adapter-api';

// Typescript interfaces for behave json output
type IStatus = 'passed' | 'failed' | 'skipped';

interface IScenario {
type: string;
keyword: string;
name: string;
tags: any[];
location: string;
steps: IStep[];
status: IStatus;
}

interface IFeature {
keyword: string;
name: string;
tags: any[];
location: string;
status: IStatus;
elements?: IScenario[];
}
interface IStep {
keyword: string;
step_type: string;
name: string;
location: string;
match: any;
result: IResult;
text?: string[];
}
interface IResult {
status: IStatus;
duration: number;
error_message?: string[];
}

function safeJsonParse(text: string) : IFeature[] {
try {
return JSON.parse(text);
} catch (err) {
// JSON parsing failed; return an empty array
return [];
}
}
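To make the interfaces above concrete, here is a hand-written (illustrative, not captured from a real run) sample of the JSON that `behave -f json` emits, with the nesting this parser expects: features hold scenarios under `elements`, and scenarios hold `steps`, each with a `location` of the form `file:line`:

```typescript
// Illustrative sample of behave's JSON output; field names match the
// IFeature / IScenario / IStep / IResult interfaces above.
const sample = `[
  {
    "keyword": "Feature",
    "name": "Login",
    "tags": [],
    "location": "features/login.feature:1",
    "status": "passed",
    "elements": [
      {
        "type": "scenario",
        "keyword": "Scenario",
        "name": "Valid user",
        "tags": [],
        "location": "features/login.feature:3",
        "status": "passed",
        "steps": [
          {
            "keyword": "Given",
            "step_type": "given",
            "name": "a registered user",
            "location": "features/steps/login_steps.py:5",
            "match": null,
            "result": { "status": "passed", "duration": 0.001 }
          }
        ]
      }
    ]
  }
]`;

const features = JSON.parse(sample);
console.log(features[0].elements[0].steps[0].result.status); // "passed"
```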

export function parseTestSuites(content: string, cwd: string): (TestSuiteInfo | TestInfo)[] {
const discoveryResult = safeJsonParse(content);

let stepid = 0;
const suites = discoveryResult.map(feature => <TestSuiteInfo | TestInfo>({
type: 'suite' as 'suite',
id: feature.location,
label: feature.name,
file: extractFile(feature.location, cwd),
line: extractLine(feature.location),
tooltip: feature.location,
children: (feature.elements || []).map(scenario => ({
type: 'suite' as 'suite',
id: scenario.location,
label: scenario.name,
file: extractFile(scenario.location, cwd),
line: extractLine(scenario.location),
tooltip: scenario.location,
children: scenario.steps.map(step => ({
type: 'test' as 'test',
id: 'step' + (stepid += 1),
label: step.name,
file: extractFile(step.location, cwd),
line: extractLine(step.location),
tooltip: step.location,
})),
})),
}));

return suites;
}

function extractLine(text: string) : number {
const separatorIndex = text.indexOf(':');
return parseInt(text.substring(separatorIndex + 1), 10);
}

function extractFile(text: string, cwd : string) {
const separatorIndex = text.indexOf(':');
return path.resolve(cwd, text.substring(0, separatorIndex));
}
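The two helpers above can be exercised in isolation. This sketch duplicates their logic to show how a behave location string such as `features/login.feature:12` splits into a file path (resolved against the workspace cwd) and a line number:

```typescript
import * as path from 'path';

// behave reports each feature/scenario/step location as "<relative file>:<line>".
function extractLine(text: string): number {
    return parseInt(text.substring(text.indexOf(':') + 1), 10);
}

function extractFile(text: string, cwd: string): string {
    return path.resolve(cwd, text.substring(0, text.indexOf(':')));
}

console.log(extractLine('features/login.feature:12')); // 12
console.log(extractFile('features/login.feature:12', '/repo'));
```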

export function parseTestStates(content: string): TestEvent[] {
const runtestResult = safeJsonParse(content);

let states : TestEvent[] = [];

let stepid = 0;

runtestResult.forEach( feature => {
(feature.elements || []).forEach( scenario => {
const steps = scenario.steps.map( (step) : TestEvent => ({
type: 'test' as 'test',
state: step.result.status,
test: 'step' + (stepid += 1),
message: (step.result.error_message ? step.result.error_message.join('\n') : ''),
decorations: [],
description: undefined,
}));
states = states.concat(steps);
});
});

return states;
}

209 changes: 209 additions & 0 deletions src/behave/behaveTestRunner.ts
@@ -0,0 +1,209 @@
import * as path from 'path';

import {
TestEvent, TestSuiteInfo
} from 'vscode-test-adapter-api';

import { ArgumentParser } from 'argparse';
import { IWorkspaceConfiguration } from '../configuration/workspaceConfiguration';
import { IEnvironmentVariables, EnvironmentVariablesLoader } from '../environmentVariablesLoader';
import { ILogger } from '../logging/logger';
import { IProcessExecution, runProcess } from '../processRunner';
import { IDebugConfiguration, ITestRunner } from '../testRunner';
import { empty } from '../utilities/collections';
import { setDescriptionForEqualLabels } from '../utilities/tests';
import { parseTestStates, parseTestSuites } from './behaveTestJsonParser';
import { runModule } from '../pythonRunner';

// --- Behave Exit Codes ---
// 0: All tests were collected and passed successfully
// 1: Some tests have failed
const BEHAVE_NON_ERROR_EXIT_CODES = [0, 1];

const DISCOVERY_OUTPUT_PLUGIN_INFO = {
PACKAGE_PATH: path.resolve(__dirname, '../../resources/python'),
MODULE_NAME: 'vscode_python_test_adapter.behave.discovery_output_plugin',
};

Owner: I think this is not actually used, right? Can you please try removing this?

Author: Not sure what you mean. Behave exit codes are not documented (to my current knowledge). I have seen 0 and 1 as valid exit codes and unfortunately 0 when an error occurs (that then spits out an error string instead of json data).

Owner: I meant the DISCOVERY_OUTPUT_PLUGIN_INFO constant 🙂

Author: It is used in loadEnvironmentVariables in the same file (line 154 and 159).

Owner: Yes, it's referenced, but it makes sense only for pytest - pytest is run with a custom plugin (in the resources/python folder). Since behave does not need any custom plugins, this constant and its usages are not actually useful. If you have any doubts, I can suggest writing tests first and then trying to remove it; I'm pretty sure the tests won't break.

interface IBehaveArguments {
argumentsToPass: string[];
locations: string[];
}


export class BehaveTestRunner implements ITestRunner {

private readonly testExecutions: Map<string, IProcessExecution> = new Map<string, IProcessExecution>();

constructor(
public readonly adapterId: string,
private readonly logger: ILogger
) { }

public cancel(): void {
this.testExecutions.forEach((execution, test) => {
this.logger.log('info', `Cancelling execution of ${test}`);
try {
execution.cancel();
} catch (error) {
this.logger.log('crit', `Cancelling execution of ${test} failed: ${error}`);
}
});
}

public async debugConfiguration(config: IWorkspaceConfiguration, test: string): Promise<IDebugConfiguration> {
const additionalEnvironment = await this.loadEnvironmentVariables(config);
const runArguments = this.getRunArguments(test, config.getBehaveConfiguration().behaveArguments);
const params = [ ...runArguments.argumentsToPass, ...runArguments.locations];
return {
module: 'behave',
cwd: config.getCwd(),
args: params,
env: additionalEnvironment,
};
}

public async load(config: IWorkspaceConfiguration): Promise<TestSuiteInfo | undefined> {
if (!config.getBehaveConfiguration().isBehaveEnabled) {
this.logger.log('info', 'Behave test discovery is disabled');
return undefined;
}
const additionalEnvironment = await this.loadEnvironmentVariables(config);
this.logger.log('info', `Discovering tests using python path '${config.pythonPath()}' in ${config.getCwd()}`);

const discoveryArguments = this.getDiscoveryArguments(config.getBehaveConfiguration().behaveArguments);
this.logger.log('info', `Running behave with arguments: ${discoveryArguments.argumentsToPass.join(', ')}`);
this.logger.log('info', `Running behave with locations: ${discoveryArguments.locations.join(', ')}`);

const params = [ ...discoveryArguments.argumentsToPass, ...discoveryArguments.locations];

const result = await this.runBehave(config, additionalEnvironment, params).complete();
const tests = parseTestSuites(result.output, config.getCwd());
if (empty(tests)) {
this.logger.log('warn', 'No tests discovered');
return undefined;
}

setDescriptionForEqualLabels(tests, path.sep);
return {
type: 'suite',
id: this.adapterId,
label: 'Behave tests',
children: tests,
};
}

public async run(config: IWorkspaceConfiguration, test: string): Promise<TestEvent[]> {
if (!config.getBehaveConfiguration().isBehaveEnabled) {
this.logger.log('info', 'Behave test execution is disabled');
return [];
}
const additionalEnvironment = await this.loadEnvironmentVariables(config);
this.logger.log('info', `Running tests using python path '${config.pythonPath()}' in ${config.getCwd()}`);

const testRunArguments = this.getRunArguments(test, config.getBehaveConfiguration().behaveArguments);
this.logger.log('info', `Running behave with arguments: ${testRunArguments.argumentsToPass.join(', ')}`);
this.logger.log('info', `Running behave with locations: ${testRunArguments.locations.join(', ')}`);

const params = [ ...testRunArguments.argumentsToPass, ...testRunArguments.locations];

const result = await this.runBehave(config, additionalEnvironment, params).complete();
const states = parseTestStates(result.output);
if (empty(states)) {
// maybe an error occurred
this.logger.log('warn', 'No tests run');
this.logger.log('warn', `Output: ${result.output}`);
}

return states;
}

private runBehave(config: IWorkspaceConfiguration, env: IEnvironmentVariables, args: string[]): IProcessExecution {
const behavePath = config.getBehaveConfiguration().behavePath();
if (behavePath === path.basename(behavePath)) {
this.logger.log('info', `Running ${behavePath} as a Python module`);
return runModule({
pythonPath: config.pythonPath(),
module: config.getBehaveConfiguration().behavePath(),
environment: env,
args,
cwd: config.getCwd(),
acceptedExitCodes: BEHAVE_NON_ERROR_EXIT_CODES,
});
}

this.logger.log('info', `Running ${behavePath} as an executable`);
return runProcess(
behavePath,
args,
{
cwd: config.getCwd(),
environment: env,
acceptedExitCodes: BEHAVE_NON_ERROR_EXIT_CODES,
});
}
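The branch in `runBehave` relies on a simple heuristic: if the configured `behavePath` contains no directory component, it is treated as a module name and launched via `python -m`; otherwise it is spawned directly as an executable. A minimal standalone sketch of that check:

```typescript
import * as path from 'path';

// A bare name ("behave") equals its own basename, so it is run as a Python
// module; a path with directories ("/usr/local/bin/behave") is spawned as an
// executable instead.
function runAsModule(behavePath: string): boolean {
    return behavePath === path.basename(behavePath);
}

console.log(runAsModule('behave'));                // true
console.log(runAsModule('/usr/local/bin/behave')); // false
```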

private async loadEnvironmentVariables(config: IWorkspaceConfiguration): Promise<IEnvironmentVariables> {
const envFileEnvironment = await EnvironmentVariablesLoader.load(config.envFile(), process.env, this.logger);

const updatedPythonPath = [
config.getCwd(),
envFileEnvironment.PYTHONPATH,
process.env.PYTHONPATH,
DISCOVERY_OUTPUT_PLUGIN_INFO.PACKAGE_PATH
].filter(item => item).join(path.delimiter);

const updatedBehavePlugins = [
envFileEnvironment.BEHAVE_PLUGINS,
DISCOVERY_OUTPUT_PLUGIN_INFO.MODULE_NAME
].filter(item => item).join(',');

return {
...envFileEnvironment,
PYTHONPATH: updatedPythonPath,
BEHAVE_PLUGINS: updatedBehavePlugins,
};
}

private getDiscoveryArguments(rawBehaveArguments: string[]): IBehaveArguments {
const argumentParser = this.configureCommonArgumentParser();
const [knownArguments, argumentsToPass] = argumentParser.parse_known_args(rawBehaveArguments);
return {
locations: (knownArguments as { locations?: string[] }).locations || [],
argumentsToPass: ['-d', '-f', 'json', '--no-summary', '--no-snippets'].concat(argumentsToPass),
};
}
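For discovery, behave is effectively invoked as `behave -d -f json --no-summary --no-snippets <locations>`; `-d` is behave's dry-run flag, so tests are listed without any steps executing. The hypothetical helper below (a simplification of the argparse-based splitting above, ignoring value-taking flags like `-D`/`-e`/`-i`, which is why the real code uses `parse_known_args`) illustrates how positional locations are separated from pass-through arguments:

```typescript
// Simplified sketch of getDiscoveryArguments: positionals become locations,
// everything else is passed through after the fixed discovery flags.
function splitDiscoveryArguments(raw: string[]): { locations: string[]; argumentsToPass: string[] } {
    const locations = raw.filter(arg => !arg.startsWith('-'));
    const flags = raw.filter(arg => arg.startsWith('-'));
    return {
        locations,
        argumentsToPass: ['-d', '-f', 'json', '--no-summary', '--no-snippets'].concat(flags),
    };
}

const result = splitDiscoveryArguments(['features/auth', '--no-color']);
console.log(result.locations);       // ['features/auth']
console.log(result.argumentsToPass); // ['-d', '-f', 'json', '--no-summary', '--no-snippets', '--no-color']
```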

// @ts-expect-error

Owner: Can this be removed?

Author: Yes, should be doable. Currently the test parameter is not used, but it should be.

private getRunArguments(test: string, rawBehaveArguments: string[]): IBehaveArguments {
const argumentParser = this.configureCommonArgumentParser();
const [knownArguments, argumentsToPass] = argumentParser.parse_known_args(rawBehaveArguments);
return {
locations: (knownArguments as { locations?: string[] }).locations || [],
argumentsToPass: ['-f', 'json', '--no-summary', '--no-snippets'].concat(argumentsToPass),
};
}

private configureCommonArgumentParser() {
const argumentParser = new ArgumentParser({
exit_on_error: false,
});
argumentParser.add_argument(
'-D', '--define',
{ action: 'store', dest: 'define' });
argumentParser.add_argument(
'-e', '--exclude',
{ action: 'store', dest: 'exclude' });
argumentParser.add_argument(
'-i', '--include',
{ action: 'store', dest: 'include' });

// Handle positional arguments (list of testsuite directories to run behave in).
argumentParser.add_argument(
'locations',
{ nargs: '*' });

return argumentParser;
}
}