
Revert "Allow benchmark runner config: openxla for inference runs. (#5939)" #6096

Closed

Conversation

frgossen
Collaborator

This reverts commit dcdd66e.

#5939 introduced openxla as an unfold option for both training and inference. This is not intended and leads to duplicate experiment runs.
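The duplication can be illustrated with a minimal sketch of config unfolding (the names and structure here are hypothetical, not the actual benchmark runner code): if `openxla` is unfolded for every mode on top of the per-mode backends that already include it for training, the train/openxla combination is generated twice.

```python
# Hypothetical sketch of experiment-config unfolding; illustrative names,
# not the actual PyTorch/XLA benchmark runner implementation.
modes = ["train", "eval"]
per_mode_backends = {"train": ["openxla"], "eval": ["openxla_eval"]}

# Suppose "openxla" is additionally unfolded for every mode:
extra_backends = ["openxla"]

runs = []
for mode in modes:
    for backend in per_mode_backends[mode] + extra_backends:
        runs.append((mode, backend))

# ("train", "openxla") now appears twice: a duplicate experiment run.
print(runs)
```

Deduplicating the unfolded pairs (e.g. with a set) would mask the problem, but the cleaner fix discussed below is to only unfold intended (mode, backend) combinations in the first place.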

@frgossen frgossen requested a review from ysiraichi December 11, 2023 20:52
@frgossen
Collaborator Author

Will introduce tests in #6093

@ysiraichi
Collaborator

This is not intended and leads to duplicate experiment runs.

I remember @JackCaoG mentioning we were trying to get rid of openxla_eval (comment). Is that not the case?

@JackCaoG
Collaborator

@ysiraichi it is still the plan, but I don't think we will be able to do that for the 2.2 release. We might as well just keep testing it until we deprecate it.

@frgossen
Collaborator Author

I agree, replacing openxla_eval with openxla is generally a good idea. This change is unfortunately not doing that exactly.

@ysiraichi
Collaborator

Correct. #5939 only adds openxla as a possible backend for eval benchmark runs, i.e. reverting that PR would make it impossible to run models in eval mode with the openxla dynamo backend. I guess the real question is: do we want to disallow that?
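One way to frame the question is as an allow-list of (mode, backend) pairs that the runner accepts. The sketch below is hypothetical (it is not the runner's actual validation logic), but it shows what #5939 added and what reverting it would remove:

```python
# Hypothetical allow-list of (mode, backend) pairs; illustrative only,
# not the actual benchmark runner's validation code.
ALLOWED_BEFORE_5939 = {("train", "openxla"), ("eval", "openxla_eval")}
ALLOWED_AFTER_5939 = ALLOWED_BEFORE_5939 | {("eval", "openxla")}


def is_allowed(mode, backend, allowed):
    """Return True if this (mode, backend) combination may run."""
    return (mode, backend) in allowed


# Reverting #5939 would disallow eval with the openxla dynamo backend:
print(is_allowed("eval", "openxla", ALLOWED_BEFORE_5939))  # False
print(is_allowed("eval", "openxla", ALLOWED_AFTER_5939))   # True
```

Under this framing, the revert question reduces to whether ("eval", "openxla") belongs in the allow-list at all, which the rest of the thread answers.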

@frgossen
Collaborator Author

Reverting this brings us back to the previous state, where eval goes together with openxla_eval and train goes together with openxla? Is there any difference between openxla_eval+eval and openxla+eval?

@ysiraichi
Collaborator

ysiraichi commented Dec 11, 2023

Yes. openxla_eval doesn't use AOTAutograd, i.e. it doesn't get an nn.Module-traced dynamo graph. This makes the set of models that run/fail different as well.

@frgossen
Collaborator Author

Thanks for explaining that. So we actually want to have these different combinations, and eventually we want to get rid of the openxla_eval backend. Is that correct?

@ysiraichi
Collaborator

As far as I understand, that's the idea. Maybe @JackCaoG can confirm this.

@frgossen
Collaborator Author

Thanks. In that case, I will just close this PR and update the test cases instead.

@frgossen frgossen closed this Dec 11, 2023
@frgossen frgossen deleted the revert-allow-openxla-config branch December 11, 2023 22:30