add support for behave BDD test framework #278

Open · wants to merge 19 commits into master
Conversation

@oberhofer

I want to add support for the behave BDD test framework. I like the clean structure of your project and I hope I got everything covered to support behave (https://github.com/behave/behave).

@kondratyev-nv (Owner)

Hi! Thanks for the PR! It seems some lint errors are still there. If you want to reproduce this locally, you can run npm run lint. I'll be reviewing your changes this week.

@oberhofer (Author)

I will check the lint errors.
In addition, I'm not satisfied with my (non-existent) behave test suite enumeration. Currently it only works for one test suite in the top-level directory. Maybe you could tell me more about how this is supposed to work.
Behave test suites have a dedicated "features" directory containing *.feature files. So in theory I have to scan a project for all directories that contain a "features" directory and call behave there to get a JSON file with the test results for that suite.
If you call behave in a directory without a "features" subdir, you get a "ConfigError".
Do I have to implement this scheme in load/run, or is there another entity that is meant to scan directories?
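For reference, a typical behave layout looks like this (the names are illustrative):

```
myproject/
└── features/
    ├── login.feature
    └── steps/
        └── login_steps.py
```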

- wrap json parsing to catch failures
@kondratyev-nv (Owner)

I'm not familiar with behave, but I see two options here.

  1. You can try to replicate the behave CLI behavior. Since behave does not search for '.feature' files by itself, let the user specify which folders to include (with a 'python.testing.behaveArgs' setting). This way it'd be straightforward to configure the extension if you previously called behave with arguments like 'behave folder1 folder2': IMO it'd be rather obvious to set 'python.testing.behaveArgs' to ["folder1", "folder2"] (see the settings example after this list).
  2. Another option is to actually scan folders for '.feature' files. You can scan them and save the result in a private property of the adapter, see something similar here - https://github.com/kondratyev-nv/vscode-python-test-adapter/blob/master/src/pythonTestAdapter.ts#L76. Then pass this list to the test runner somehow (maybe another property on the config object?) and construct the list of behave arguments based on that.
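For example, option 1 would look like this in settings.json (the folder names are illustrative):

```json
{
    "python.testing.behaveArgs": ["folder1", "folder2"]
}
```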

IMO it's good to start with option 1 as it's simple. If it works and you feel like changing it afterward, you can upgrade the code following option 2. Let me know if this makes sense and what your thoughts are on this.
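If you do end up going with option 2 later, the folder scan could look roughly like this (a sketch only, using Node's built-in fs and path modules; findBehaveSuiteRoots is an illustrative name, not code from this PR):

```typescript
// Sketch: collect every directory that has a 'features' subdirectory,
// so behave can be invoked once per suite root.
import * as fs from 'fs';
import * as path from 'path';

function findBehaveSuiteRoots(dir: string, found: string[] = []): string[] {
    const entries = fs.readdirSync(dir, { withFileTypes: true });
    if (entries.some(e => e.isDirectory() && e.name === 'features')) {
        found.push(dir);
    }
    for (const entry of entries) {
        // Skip hidden folders; descend into everything else.
        if (entry.isDirectory() && !entry.name.startsWith('.')) {
            findBehaveSuiteRoots(path.join(dir, entry.name), found);
        }
    }
    return found;
}
```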

@kondratyev-nv (Owner)

btw, can you please merge the latest master branch into your PR? It should fix most of the CI and test errors.

@oberhofer (Author)

> IMO it's good to start with option 1 as it's simple. If it works and you feel like changing it afterward, you can upgrade the code following option 2. Let me know if this makes sense and what your thoughts are on this.

Sounds good. I will go with option 1, as behave supports positional arguments where you can specify feature directories, single feature files, or specific features with file:linenumber.
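For illustration, the positional forms look like this (the paths are hypothetical):

```bash
behave features                    # all features in a directory
behave features/login.feature      # a single feature file
behave features/login.feature:12   # the scenario starting at line 12
```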

@oberhofer (Author)

Ok, seems to work at least on my side, and I can specify multiple locations for my tests. Thanks for your support.

@kondratyev-nv (Owner) left a comment


Some nits and comments.

It would be great if you could add some tests. Let me know if you need any help with that.

try {
    return JSON.parse(text);
} catch (err) {
    // this.logger.log('warn', `parse json failed: ${text}`);
}
@kondratyev-nv (Owner)


Should this be uncommented?

@oberhofer (Author)


I will turn this into a comment, as there is no logger accessible here (at the moment).
I'll also look into adding some tests, and I may need some help with that.
Which test scenarios would you like to see covered?
Are there any existing tests that may serve as a blueprint?

@kondratyev-nv (Owner)


Some examples are

  • test\tests\pytestGeneral.test.ts
  • test\tests\testplanGeneral.test.ts
  • test\tests\unittestGeneral.test.ts

The minimal coverage, I would say, is to replicate the core UI functionality and user flow (a rough sketch follows the list below), for example:

  • Discovery of tests (set testing.behaveEnabled and maybe args) and verify that all expected tests are discovered (see 'should discover tests' tests).
  • Running tests on multiple levels - run all, run a suite, run a single test (see 'Run pytest tests' tests).
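As a very rough illustration of the 'should discover tests' shape (loadBehaveTests is a hypothetical stand-in, not the project's real helper; compare test\tests\pytestGeneral.test.ts):

```typescript
import { expect } from 'chai';
import 'mocha';

interface TestSuiteInfo {
    label: string;
    children: TestSuiteInfo[];
}

// Hypothetical helper: runs discovery with the given behave arguments.
declare function loadBehaveTests(behaveArgs: string[]): Promise<TestSuiteInfo | undefined>;

suite('Behave test discovery', () => {
    test('should discover tests', async () => {
        const mainSuite = await loadBehaveTests(['features']);
        expect(mainSuite).to.not.be.undefined;
        // Every expected feature should appear as a child suite.
        const labels = mainSuite!.children.map(child => child.label);
        expect(labels).to.not.be.empty;
    });
});
```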

src/behave/behaveTestJsonParser.ts (outdated review comments, resolved)
// 1: Some tests have failed
const BEHAVE_NON_ERROR_EXIT_CODES = [0, 1];

const DISCOVERY_OUTPUT_PLUGIN_INFO = {
@kondratyev-nv (Owner)


I think this is not actually used, right? Can you please try removing this?

@oberhofer (Author)


Not sure what you mean. Behave exit codes are not documented (to my current knowledge). I have seen 0 and 1 as valid exit codes, and unfortunately also 0 when an error occurs (behave then emits an error string instead of JSON data).
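To illustrate the point (an illustrative guard only, not this PR's actual code; exitCode and output are assumed to come from the spawned behave process):

```typescript
// Treat 0 and 1 as "behave ran" exit codes, but still guard the JSON
// parse, because behave can exit with 0 and print an error string
// instead of a JSON report.
const BEHAVE_NON_ERROR_EXIT_CODES = [0, 1];

function parseBehaveOutput(exitCode: number, output: string): unknown {
    if (!BEHAVE_NON_ERROR_EXIT_CODES.includes(exitCode)) {
        return undefined; // a hard behave error, no report to parse
    }
    try {
        return JSON.parse(output);
    } catch {
        return undefined; // exit code 0/1, but the output was not JSON
    }
}
```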

@kondratyev-nv (Owner)


I meant the DISCOVERY_OUTPUT_PLUGIN_INFO constant 🙂

@oberhofer (Author)


It is used in loadEnvironmentVariables in the same file (lines 154 and 159).

@kondratyev-nv (Owner)


Yes, it's referenced, but it makes sense only for pytest: pytest is run with a custom plugin (in the resources/python folder). Since behave does not need any custom plugins, this constant and its usages are not actually useful. If you have any doubts, I suggest writing the tests first and then trying to remove this; I'm pretty sure the tests won't break.

src/configuration/vscodeWorkspaceConfiguration.ts (outdated review comments, resolved)
@@ -0,0 +1,10 @@
#!/bin/bash
@kondratyev-nv (Owner)


I don't see any tests that check running behave as a script, so I assume these 2 scripts can be safely removed?

@oberhofer (Author)


I took the pytest directory as a template; that's what spawned their existence. Currently they show how to call behave, which may be handy. Maybe I should add a test to justify their presence :-)

@kondratyev-nv (Owner)


The pytest script is used in test\tests\pytestScript.test.ts. If you're not planning to replicate similar tests, I would suggest removing these scripts. Up to you 🙂

@oberhofer (Author)


I will add a test.

requirements.txt (resolved)
};
}

// @ts-expect-error
@kondratyev-nv (Owner)


Can this be removed?

@oberhofer (Author)


Yes, that should be doable. Currently the test parameter is not used, but it should be.
