Allow benchmark runner config: `openxla` for inference runs. #5939
Conversation
Nice. Can you update the README accordingly?
@frgossen Let me know if that is good enough.
Thank you! Needs one fix in readme.
```diff
@@ -32,9 +32,10 @@ python xla/benchmarks/experiment_runner.py \
 You can change the flags to add the configurations you are interested in. The
 `experiment_runner.py` will expand the options to all supported configurations.
 For example, in the case above, it will consider all the possible combinations
-among the flags `--dynamo`, `--xla`, and `--test`, 4 of which are supported:
+among the flags `--dynamo`, `--xla`, and `--test`, 5 of which are supported:
 - `dynamo=openxla_eval`, `xla=PJRT`, `test=eval`
```
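The expand-then-filter behavior the README describes can be sketched as follows. This is a minimal standalone model, not the actual `experiment_runner.py` logic: the option lists and the `is_supported` helper are assumptions made up for illustration.

```python
import itertools

# Hypothetical option lists; the real runner takes these from CLI flags.
DYNAMO_OPTIONS = [None, "openxla", "openxla_eval"]
XLA_OPTIONS = ["PJRT"]
TEST_OPTIONS = ["train", "eval"]

def is_supported(dynamo, xla, test):
    # `openxla_eval` is an eval-only backend; `openxla` covers both
    # training and (after this PR) inference runs.
    if dynamo == "openxla_eval" and test != "eval":
        return False
    return True

# Expand all combinations, then keep only the supported ones.
configs = [
    (d, x, t)
    for d, x, t in itertools.product(DYNAMO_OPTIONS, XLA_OPTIONS, TEST_OPTIONS)
    if is_supported(d, x, t)
]
print(len(configs))  # -> 5
```

With these particular option lists, 6 raw combinations collapse to 5 supported ones, matching the updated README count.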
I think you have to remove this line. It will still be 4 configs overall.
Not sure if I get why that is the case. Isn't the following combination supported in addition to the ones already specified in the readme? `dynamo=openxla`, `xla=PJRT`, `test=eval`
Maybe I misunderstand this change then. I thought you were getting rid of `openxla_eval` in favor of `openxla`. My thought was we'd have to remove `dynamo=openxla_eval` here.
benchmarks/benchmark_experiment.py
Outdated
```python
  if experiment_config["dynamo"] == "openxla_eval" and not (
      experiment_config["xla"] and experiment_config["test"] == "eval"):
    return False
```
I think I need to revert this deletion. `openxla_eval` is only supposed to work with the `eval` test.
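The restored rule can be exercised in isolation as below. This is a self-contained sketch: the function name `is_available` and the minimal config dicts are assumptions for illustration, not the actual `benchmark_experiment.py` API.

```python
def is_available(experiment_config):
    # Restored rule from the snippet above: `openxla_eval` pairs only
    # with an XLA device and the `eval` test. All other configs are
    # treated as valid for the purposes of this sketch.
    if experiment_config["dynamo"] == "openxla_eval" and not (
        experiment_config["xla"] and experiment_config["test"] == "eval"):
        return False
    return True

# `openxla_eval` with a training run is rejected...
print(is_available({"dynamo": "openxla_eval", "xla": "PJRT", "test": "train"}))  # -> False
# ...while `openxla` is fine for inference (eval) runs.
print(is_available({"dynamo": "openxla", "xla": "PJRT", "test": "eval"}))  # -> True
```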
LGTM
…ytorch#5939)" This reverts commit dcdd66e.
…#5939) * Allow openxla for eval. * Update readme. * Revert `openxla_eval` rule.
This PR modifies the benchmarking script, enabling the `openxla` dynamo backend for inference runs.