HLA-1060: Fix formerly working HAP PyTest in test_align.py #1632
Conversation
Codecov Report
Additional details and impacted files
@@ Coverage Diff @@
##            master    #1632      +/-  ##
==========================================
- Coverage    36.88%   32.40%   -4.48%
==========================================
  Files          159      159
  Lines        35063    35049      -14
  Branches      5697        0    -5697
==========================================
- Hits         12932    11359    -1573
- Misses       21789    23690    +1901
+ Partials       342        0     -342
☔ View full report in Codecov by Sentry.
The code changes look great. This looks to be a very old bug, so there might be changes seen in other regression test data.
I am not sure why you removed the "no fit information" datafile from the test. We will presumably get a different answer and have to update the expected output, but the code should now handle this case properly as-is. Including this file makes it a better test, in my opinion.
Comments?
I think I understand, and I agree that it would be better: you mean leaving the file in as an expected failure. I could try to do that as a stand-alone file in a separate filelist, but I'm not sure I can test one expected failure within a set of successes. How does that sound to you?
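For what it's worth, pytest does support a single expected failure inside an otherwise-passing parameterized set via `pytest.param(..., marks=pytest.mark.xfail)`. A minimal sketch, assuming hypothetical dataset names and a placeholder test body (this is not the actual `test_align.py` code):

```python
import pytest

# Illustrative only: two passing datasets plus the troublesome file
# marked as an expected failure. The other entries must still pass.
DATASETS = [
    "dataset_ok_1",
    "dataset_ok_2",
    pytest.param(
        "j8ura1jbq_flt.fits",
        marks=pytest.mark.xfail(reason="no fit information"),
    ),
]

@pytest.mark.parametrize("dataset", DATASETS)
def test_align(dataset):
    # Placeholder standing in for the real alignment checks.
    assert dataset
```

With this, a failure on the marked dataset is reported as `xfail` while failures on the other datasets still fail the run.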
In real-time discussion, we decided to remove the troublesome dataset from the parameterized test entirely. Two Jira tickets will be written: (1) a new PyTest focused on the processing of this one dataset, and (2) a new PyTest focused on just the troublesome image.
@mdlpstsci I ran Black on the two files related to this PR and removed a couple of unused imports and variables. Otherwise the logic of the code hasn't changed. When you find the time, let me know if it looks good to merge.
Just a few syntactic changes. Also, one dataset was removed from processing per our Drizzlepac Meeting discussion. Creating the new tickets is not part of this review.
@mdlpstsci I think I misunderstood the two new planned tests, but I understand now and fully agree. The other formatting changes have been made as well.
Changes look fine.
As discussed in Jira, I've removed the test on "j8ura1jbq_flt.fits", which was ultimately failing due to a lack of matches. The checks that were supposed to tell us what was failing were also not working, because the num_xmaches variable was not being reset on each loop iteration; the previous value was being carried over, allowing the checks to pass. I've made sure to set the check flags before using the continue statement, to ensure that the correct information is printed in the final summary of the run. In the future, we should try to break up large functions like this to avoid reusing the same variable names.
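The bug pattern described above can be sketched in a few lines. This is an illustrative reduction only, with hypothetical names (`summarize`, `fit_ok`), not the actual drizzlepac code:

```python
# Sketch of the stale-loop-variable bug and its fix: per-dataset
# state must be re-initialized at the top of each iteration, and the
# check flags must be set before any `continue`, so the final summary
# reports the current dataset rather than the previous one's values.
def summarize(catalogs):
    results = []
    for cat in catalogs:
        num_xmatches = 0   # reset every iteration (the buggy version didn't)
        fit_ok = False     # set flags before any early exit
        matches = cat.get("matches", [])
        if not matches:
            # Flags already reflect *this* dataset, so it is safe to skip.
            results.append((cat["name"], num_xmatches, fit_ok))
            continue
        num_xmatches = len(matches)
        fit_ok = num_xmatches >= 2
        results.append((cat["name"], num_xmatches, fit_ok))
    return results
```

Without the per-iteration reset, a dataset with no matches would inherit the previous dataset's `num_xmatches` and pass the check it should have failed.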