Action passes even upon test execution failure #28
Comments
Hi there @ivohashamov! Glad to hear you're enjoying Maestro and Maestro Cloud. The reason your job is not marked as a failure is that the upload is marked as canceled rather than failed, which in this case doesn't make much sense. Originally this behaviour was implemented so that Flows that time out or are canceled for other reasons don't block CI execution by default. The fix here would be to add an argument that allows failing the job on cancellation, which is something I will try to get to next week. Will let you know when I have an update to share!
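For illustration only, usage of such an argument could look roughly like the sketch below. The input name `fail-on-cancellation` is hypothetical, and the action reference, secret name, and app path are assumptions rather than details taken from this thread:

```yaml
# Hypothetical sketch - fail-on-cancellation is not an existing input of the action
- name: Run Flows on Maestro Cloud
  uses: mobile-dev-inc/action-maestro-cloud@v1     # action reference/version are assumptions
  with:
    api-key: ${{ secrets.MAESTRO_CLOUD_API_KEY }}  # secret name is an assumption
    app-file: app.apk                              # path is an assumption
    fail-on-cancellation: true                     # hypothetical new argument described above
```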
Update here: I won't have time to work on this for the foreseeable future due to changing priorities on our side.
Thanks for the update! For the time being, we added an ugly workaround step to our workflow:

```yaml
- name: Check For Errors
  run: |
    if [[ "$MAESTRO_CLOUD_FLOW_RESULTS" == *"ERROR"* || "$MAESTRO_CLOUD_FLOW_RESULTS" == *"CANCELED"* ]]; then
      echo "One or more tests failed or were canceled."
      exit 1
    fi
  env:
    MAESTRO_CLOUD_FLOW_RESULTS: ${{ steps.upload.outputs.MAESTRO_CLOUD_FLOW_RESULTS }}
```

It works quite well for now.
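Note that the `steps.upload.outputs.MAESTRO_CLOUD_FLOW_RESULTS` expression above only resolves if the upload step carries `id: upload`. A minimal sketch of such a step, assuming the mobile-dev-inc/action-maestro-cloud action is used; the version tag, secret name, and app path are illustrative, not copied from the workflow in question:

```yaml
- name: Upload to Maestro Cloud
  id: upload   # the id referenced by steps.upload.outputs.*
  uses: mobile-dev-inc/action-maestro-cloud@v1          # version tag is illustrative
  with:
    api-key: ${{ secrets.MAESTRO_CLOUD_API_KEY }}        # secret name is an assumption
    app-file: app/build/outputs/apk/debug/app-debug.apk  # path is an assumption
```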
Experienced the same issue due to a test timeout. Bugs could easily slip through when everything looks green. @ivohashamov's workaround worked fine.
Yes, we have false positives on test timeouts as well.
We've experienced the same issue recently. @ivohashamov thanks for the workaround! @axelniklasson I obviously don't know about your internal priorities, but I'd say that a test automation system that doesn't report failures reliably is pretty much useless. This looks like a very high priority to me tbh.
@loremattei I want to reiterate what I stated above - when an upload is marked as failed, the GitHub workflow run will also be marked as failed. The reason it is not being done in this case is due to it being canceled, which is intentional since it might be due to test timeouts or other issues and we did not want to block people's CI pipelines. We do welcome pull requests, so if you (or anyone else following along) are open to raising a PR that adds a new argument called |
@axelniklasson I'm sorry, I was probably too short in my description above. Let me try to expand and explain the critical issue here. What we are experiencing is that:
Moreover, this seems to happen consistently when we retry a failed test (if the test failed because of a missing element), i.e.:
which gives the devs the impression of a flaky test, when it is actually Maestro itself that is flaky. This is what we see in the log:
So, this is not a matter of adding an optional parameter (and I don't agree with the default behaviour anyway: I want my tests to be green only when everything ran successfully, not when something got cancelled. At a minimum, I would make failing on cancellation the default and add a parameter to make the workflow pass on cancelled tests); rather, this seems to be a system-wide problem between the Maestro Cloud action and the Maestro Cloud CI system.
@loremattei Roger that, thanks for expanding on what you mean - it is much clearer now!
My team and I have been using Maestro for the past few weeks and have enjoyed the experience a lot! We use the action-maestro-cloud action to run our E2E tests daily.
However, we just noticed that the action still passes whenever there's a failure in the test run in the cloud. This is problematic, as we always have to manually check if the tests actually passed, instead of getting notified by GitHub.
Below is a screenshot of the workflow run. As you can see, the first tests failed, canceling the execution of all follow-up tests, but the step and the whole job still passed.
We know that the upload and running of our tests work with the current setup, as we already had plenty of runs where everything went through successfully. Any ideas about what might be causing the issue?
Below you can see our job YAML as well.