
[FR] Support configuring torch.compile()/TorchDynamoPlugin with dynamic=None #3284

IrineSistiana opened this issue Dec 9, 2024
Hi accelerate devs.

The `dynamic` parameter in `torch.compile()` accepts three values: `True`, `False`, or `None`. According to the PyTorch documentation, the default is `None`.

Doc: https://pytorch.org/docs/2.5/generated/torch.compile.html#torch.compile

> dynamic (bool or None) – Use dynamic shape tracing. When this is True, we will up-front attempt to generate a kernel that is as dynamic as possible to avoid recompilations when sizes change. This may not always work as some operations/optimizations will force specialization; use TORCH_LOGS=dynamic to debug overspecialization. When this is False, we will NEVER generate dynamic kernels, we will always specialize. By default (None), we automatically detect if dynamism has occurred and compile a more dynamic kernel upon recompile.
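For illustration, here is a minimal sketch of the three modes with a toy model:

```python
import torch

model = torch.nn.Linear(8, 8)

# dynamic=True: attempt an up-front dynamic-shape kernel.
compiled_dynamic = torch.compile(model, dynamic=True)

# dynamic=False: always specialize, never generate dynamic kernels.
compiled_static = torch.compile(model, dynamic=False)

# dynamic=None (the default): detect dynamism automatically and only
# compile a more dynamic kernel upon recompile.
compiled_auto = torch.compile(model, dynamic=None)
```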

If my understanding is correct, accelerate can currently only configure it as `True` or `False`:

```python
# In accelerate's TorchDynamoPlugin: a user-supplied `dynamic=None` is
# overwritten with a bool parsed from the environment, so `None` can
# never reach torch.compile().
if self.dynamic is None:
    self.dynamic = str_to_bool(os.environ.get(prefix + "USE_DYNAMIC", "False")) == 1

# Later, when the model is compiled:
model = torch.compile(model, **self.state.dynamo_plugin.to_kwargs())
```
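One possible way to support this (just a sketch, not accelerate's actual code; the helper name is made up) would be to treat the env var as tri-state, falling back to `None` when it is unset:

```python
import os
from typing import Optional

def parse_tri_state(value: Optional[str]) -> Optional[bool]:
    # Hypothetical helper: an unset/empty env var maps to None so the
    # torch.compile() default (automatic dynamism) is preserved.
    if not value:
        return None
    return value.lower() in ("1", "true", "yes")

# e.g. keep self.dynamic as None unless the user explicitly set it:
# if self.dynamic is None:
#     self.dynamic = parse_tri_state(os.environ.get(prefix + "USE_DYNAMIC"))
```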

In our development environment, using dynamic = None gives slightly better results. It would be nice if accelerate supported it out of the box, ideally with the same default value/behavior as torch.compile().
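Concretely, the behavior I am hoping for (hypothetical, this is exactly what does not work today):

```python
from accelerate.utils import TorchDynamoPlugin

# Desired: a `dynamic` left as None is passed through to torch.compile()
# unchanged instead of being coerced to a bool.
plugin = TorchDynamoPlugin(backend="inductor", dynamic=None)
assert plugin.to_kwargs().get("dynamic") is None  # what this FR asks for
```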
