Rerun Tests After Remediation #101
I think we would just have to rerun lines 93-94. Do you think that will solve the issue? (Lines 109-110 show the updated code)
At a base level, yep, that's it! We may want to kick off
I added in the try/catch and tested it. I made it so that if the check after the remediation fails, it will output:
How I tested: below is an example output. I think it could be helpful to have all of this output for troubleshooting purposes, but you guys might think it is overkill or too verbose.
I will open a PR after #114 is closed, since this change will affect that file as well.
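The flow being discussed — wrap the fix in a try/catch, then re-run the original comparison so a silent no-op isn't reported as success — could be sketched roughly like this. All names here ($Get, $Fix, $MTU, etc.) are illustrative placeholders, not Vester's actual internals:

```powershell
# Stand-ins for a real setting: $Get reads the value, $Fix remediates it.
# These are hypothetical placeholders, not Vester's real code.
$script:MTU = 9000
$Get = { $script:MTU }
$Fix = { $script:MTU = $Desired }

$Desired = 1500
$Actual  = & $Get

If ($Actual -ne $Desired) {
    Try {
        & $Fix
        # Re-run the check instead of trusting "no error was thrown"
        $Actual = & $Get
        If ($Actual -ne $Desired) {
            Throw "Remediation completed without error, but the value is still $Actual (wanted $Desired)"
        }
    } Catch {
        Write-Warning "Remediation failed: $_"
    }
}
"Final value: $Actual"
```

The point of the second `& $Get` is exactly the concern raised later in this thread: the try/catch only proves nothing threw, not that the setting actually changed.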
Random question: what happens if the remediation fails and the test returns fail a second time? Is it now in a loop, or does it just exit after the second check?
Sorry, another question: is there a need to run the second test again? Or should we just capture whether the '$fix' was successful? I think the latter is cleaner.
That's the part I was testing. It runs the test, but after that part, it continues on to the next test.
I like the second attempt. After all, "Success" doesn't necessarily mean that the fix was applied. It just means there was no discernible error that caused the try/catch to fail, right?
I see what you are saying.
Hmm, I would think that since the goal is to have a specific test, with a known state, value, etc., a retest after a fix would be a waste. The fix either works or it needs to be re-coded. The only reason I'm asking is that we just spent time on 'performance', so this sounds like a step in the wrong direction as far as efficiency goes. If the fix is not working right and returns that it was successful, it will be picked up in the next round of tests.
I think something that would potentially be more useful would be a rerun after all tests complete, only on the items that were remediated. Kind of a second round, but with a limited scope. It would also leave fewer results for someone to review.
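That "limited second round" could look something like the sketch below: track which items were remediated during the main pass, then loop back over just those. The function name, item names, and structure are hypothetical, purely for illustration:

```powershell
# Collect the names of items that were remediated during the main run.
$Remediated = New-Object System.Collections.ArrayList

function Invoke-Check {
    param([string]$Name, $Actual, $Desired)
    if ($Actual -ne $Desired) {
        # ...remediation would happen here...
        [void]$Remediated.Add($Name)
    }
}

# Main pass: one item out of spec, one already compliant
Invoke-Check -Name 'VM.MemoryMB' -Actual 2048 -Desired 4096
Invoke-Check -Name 'VM.NumCpu'   -Actual 2    -Desired 2

# Second, limited round: only re-test what was touched
foreach ($Name in $Remediated) {
    "Re-testing $Name"
}
```

Only the remediated item makes it into the second round, so the extra cost scales with the number of fixes, not the total number of tests.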
At that point I'd leave it to the higher-ups to decide.
Technically it would only affect performance on remediations, though.
True, I don't think it's much of a time loss. But I'm wondering what the XML looks like now too. Are you able to post a snippet of that?
Oh, I'm not saying I'm right or wrong, just throwing some thoughts out there.
I've never run Vester with XML output before. I should try it out though.
That and NUnit.
XML is awesome! Said no one ever, lol. But it has its uses. I use it with Vester as a way to enhance the results and share with my team.
This is going back to adding more info in the try/catch of the remediate. I was testing out some new tests and mine failed with this: Desired: 1500 - int It was really nice to know that it needed a 'long' and not an 'int'. In this scenario, I ran GetType() on the object and it showed that it was an 'Int64', which is a long. I figured 'int' would work, but apparently not.
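The 'long' vs 'int' surprise is easy to reproduce: PowerShell happily compares the two values as equal, but the underlying .NET types differ, which trips up any check that is strict about type. A small standalone illustration (not Vester code):

```powershell
$Actual  = [long]1500   # what GetType() reported: Int64, i.e. a 'long'
$Desired = [int]1500    # what the test was declared with: Int32, an 'int'

$Actual -eq $Desired                        # True  - the values match
$Actual.GetType().Name                      # Int64
$Desired.GetType().Name                     # Int32
$Actual.GetType() -eq $Desired.GetType()    # False - a strict type check fails
```

So a plain `-eq` passes, but anything that compares types (or casts the desired value before comparing) needs the config to declare the value as a long.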
I've been doing some more testing on this and wanted to bring something up for discussion. Say we are running a VM test. In the desired, if we are just specifying If you run a test and it fails and you remediate it, the value should be updated. That being said, in the I would personally vote against re-running tests after remediate. As a side note, I would still really like to have a more verbose output on the Meaning instead of what it is currently:
to change it to:
The reason being: currently, if a test fails, all we know is that it failed. The error it throws is: When the Here is the line:
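As a rough sketch of the more verbose failure message being proposed here — echoing the desired value together with its .NET type, so a type mismatch like the Int64/Int32 one above is obvious at a glance (variable names hypothetical, not the actual template code):

```powershell
$Desired = [long]1500
$Actual  = 9000

if ($Actual -ne $Desired) {
    # Include both the desired value and its type in the failure message
    $Message = "Desired: $Desired - $($Desired.GetType().Name)"
    $Message   # prints "Desired: 1500 - Int64"
}
```

Surfacing the type costs nothing at runtime but saves the GetType() detective work described above when a test fails for non-obvious reasons.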
Expected Behavior
When running a Vester test with the -remediate flag, the command should re-execute the initial condition test after the remediation action - this ensures that the test results are actually as expected post-remediation. This should be true for all published/included tests.
Current Behavior
At least some tests do not rerun the comparison post-remediation.
Possible Solution
Update the template to rerun the test - test-writers then don't have to think about it; they get it for free from the template.
Context
This would ensure that if a remediation completes without throwing an error the test will still fail if the conditions specified are not met.
Your Environment
Vester version: 1.0.1
OS build: 10.0.14393
PowerShell version: 5.1.14393.693