Fix flaky test_batching_equivalence
#35564
Conversation
For context: see the previous similar fix in #34995.
if hasattr(test_case.model_tester, "out_features") or hasattr(test_case.model_tester, "out_indices"):
    target_num_hidden_layers = None
For some vision models it is hard to adjust the number of layers, because other parameters sometimes have to be changed at the same time.
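To illustrate how this guard fits together, here is a minimal sketch of the layer-shrinking logic around the diff lines above. The helper name `set_model_tester_for_less_flaky_test` and the `num_hidden_layers` attribute are assumptions for illustration, not necessarily the exact code in this PR:

```python
def set_model_tester_for_less_flaky_test(test_case):
    # Sketch only (assumed helper name and fields): use a single hidden layer so
    # numerical noise between the batched and unbatched forward passes stays small.
    target_num_hidden_layers = 1

    # For vision backbones, out_features / out_indices are tied to the number of
    # stages, so the layer count cannot be changed in isolation; leave it alone.
    if hasattr(test_case.model_tester, "out_features") or hasattr(test_case.model_tester, "out_indices"):
        target_num_hidden_layers = None

    if target_num_hidden_layers is not None and hasattr(test_case.model_tester, "num_hidden_layers"):
        test_case.model_tester.num_hidden_layers = target_num_hidden_layers
```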
Yay, fewer flaky tests! Thanks!
Some model test cases overwrite the common one with their own version, as sketched below.
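For reference, the override pattern looks roughly like this. The class names and tolerance values below are made up for illustration; only the shadowing of the shared test matters:

```python
class CommonModelTesterMixin:
    # Stand-in for the shared test defined once for all models.
    def test_batching_equivalence(self, atol=1e-5, rtol=1e-5):
        ...  # compare batched vs. per-sample outputs within the tolerances


class SomeModelTest(CommonModelTesterMixin):
    # A model-specific class shadows the common test, e.g. to loosen tolerances
    # for that model's numerics.
    def test_batching_equivalence(self, atol=3e-4, rtol=3e-4):
        super().test_batching_equivalence(atol=atol, rtol=rtol)
```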
What does this PR do?
I am the serial killer of flaky tests in transformers! The ratios of failures:
tests/models/flaubert/test_modeling_flaubert.py::FlaubertModelTest::test_batching_equivalence
tests/models/mobilevitv2/test_modeling_mobilevitv2.py::MobileViTV2ModelTest::test_batching_equivalence
tests/models/xlm/test_modeling_xlm.py::XLMModelTest::test_batching_equivalence

I will apply the same changes to overwritten tests wherever they are flaky too.
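For readers unfamiliar with the test: roughly, test_batching_equivalence feeds the model a batched input and each sample individually, then checks that the corresponding outputs match within a tolerance. The sketch below is an assumed simplification of that check (the output attribute and tolerances are illustrative), not the actual implementation in tests/test_modeling_common.py:

```python
import torch


def check_batching_equivalence(model, batched_inputs, atol=1e-5, rtol=1e-5):
    # Simplified, assumed sketch: the real test also handles nested outputs,
    # devices, and many model-specific cases.
    model.eval()
    with torch.no_grad():
        batched_out = model(**batched_inputs).last_hidden_state

        # Run the same samples one at a time and stack the results.
        single_outs = []
        batch_size = next(iter(batched_inputs.values())).shape[0]
        for i in range(batch_size):
            single = {k: v[i : i + 1] for k, v in batched_inputs.items()}
            single_outs.append(model(**single).last_hidden_state)
        single_out = torch.cat(single_outs, dim=0)

    # Flakiness shows up here: tiny numerical differences between the batched
    # and unbatched paths can exceed a too-tight tolerance.
    torch.testing.assert_close(batched_out, single_out, atol=atol, rtol=rtol)
```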