
Feature/new hooks #177

Closed
wants to merge 19 commits into from

Conversation

michaelboulton
Copy link
Member

Adding some basic hooks which could be expanded further down the line. After thinking about it, I think this is much better than having a fixture.

At the time of writing this is just a simple hook which prints a response (not fully implemented). Some other obvious ones are:

  • called with request parameters
  • called before each stage
  • called after each stage
  • ?

They will all basically just be called with the test name, stage name, stage number, and then a dictionary of any extra relevant information (the schema of which is undefined).

This is not the same as #115: these hooks are always called.
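The hook contract described above (test name, stage name, stage number, plus an open-ended dictionary of extra information) could be sketched roughly as below. The hook name and signature here are illustrative only, based on this comment, not Tavern's actual API:

```python
# Hypothetical sketch of the proposed hook contract: every hook receives
# the test name, stage name, stage index, and a dict of extra relevant
# information whose schema is deliberately left undefined.
# (The name pytest_tavern_after_stage is an assumption for illustration.)

def pytest_tavern_after_stage(test_name, stage_name, stage_index, info):
    """Example user-defined hook that would live in a conftest.py."""
    status = info.get("status_code", "unknown")
    return f"{test_name}[{stage_index}] {stage_name}: status={status}"


# Simulating how the plugin machinery might invoke it:
result = pytest_tavern_after_stage(
    "test_login", "send login request", 0, {"status_code": 200}
)
print(result)  # test_login[0] send login request: status=200
```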

@michaelboulton michaelboulton added the Status: Work In Progress Not ready to merge label Aug 20, 2018
@michaelboulton michaelboulton mentioned this pull request Aug 20, 2018
@benhowes
Copy link
Member

I'm trying to understand the use cases and how someone would use this. Can you add multiple before/after hooks?

From the code I can see you mention being able to do timing, and there's some stuff about logging. Are there other things which could be done too? Some docs would be helpful. I'd be happy to write them if you can expand on it a bit here.

@michaelboulton
Copy link
Member Author

The use case is mainly to provide people with a way to implement their own handling of test results. For example, pytest-html lets you create an HTML report from pytest tests, but it won't let you fully express the concept of the headers/response body/status code and dump it to a machine-readable format.

If we add something like a pytest_tavern_stage_failed hook, then people could make it dump out an XML document describing exactly what went wrong (e.g. incorrect header, error in body, etc.), or, if it's being used for uptime monitoring, it could post to a server which displays the status of the service (500 response, timeout, etc.).

On top of that, there is some behaviour like delay_after and delay_before which is baked into each test. This is an attempt to make it so that this kind of thing doesn't need to be handled in the core of the library and can be provided by third-party plugins.
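The failure-reporting idea above could look something like this. The hook name pytest_tavern_stage_failed comes from the comment; its signature is an assumption, and JSON is used here for brevity where the comment suggests XML:

```python
import json

# Illustrative only: a hook like pytest_tavern_stage_failed could serialise
# the failure to a machine-readable document (JSON here for brevity; the
# comment suggests XML). The signature is assumed, not Tavern's real API.

def pytest_tavern_stage_failed(test_name, stage_name, stage_index, info):
    report = {
        "test": test_name,
        "stage": stage_name,
        "index": stage_index,
        "error": info.get("error"),
        "status_code": info.get("status_code"),
    }
    return json.dumps(report)


doc = pytest_tavern_stage_failed(
    "test_health", "check uptime", 1,
    {"error": "timeout", "status_code": None},
)
print(doc)
```

For uptime monitoring, the same hook body could instead POST the report to a status server rather than writing it to disk.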

@kvenkat88
Copy link

1. It would be good to provide a Unicode string validator like !anyunicode or something similar. As far as I know from the documentation there is no such option. Has anything like this been implemented? I am currently using an external function to validate this.
2. It would be good if usefixtures supported extra_kwargs so arguments could be passed at runtime. As of now fixtures don't have this feature (correct me if I am wrong). I am not referring to fixture parametrization (I know pytest has issues with fixture parametrization).
3. I read in comments on another issue that $ext (external functions) are going to be replaced with fixtures. It would be good if the replacement fixture/hook supported returning an object directly instead of saving it and using it later.
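Regarding point 1 above, an external validation function is indeed the usual workaround: Tavern can hand the HTTP response object to a user-supplied function for custom checks. This is a minimal sketch of such a function; the field name "username" and the FakeResponse stand-in are purely illustrative:

```python
# Sketch of an external validation function for checking that a response
# field is a (unicode) string. Tavern passes the HTTP response object to
# the function; FakeResponse below is only a stand-in for demonstration.

def check_username_is_str(response):
    body = response.json()
    # In Python 3 every str is a unicode string, so isinstance(..., str)
    # covers the "!anyunicode" case described above.
    assert isinstance(body["username"], str), "username must be a string"


class FakeResponse:
    """Minimal stand-in exposing the .json() accessor the check uses."""

    def json(self):
        return {"username": "alice"}


check_username_is_str(FakeResponse())  # passes silently
```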

@michaelboulton
Copy link
Member Author

#415

@michaelboulton michaelboulton deleted the feature/new-hooks branch December 9, 2019 15:00