Hi accelerate devs.
The `dynamic` parameter of `torch.compile()` has three possible values: `True`, `False`, and `None`. According to the torch documentation, the default is `None`. With `True`, torch attempts up front to generate a kernel that is as dynamic as possible, to avoid recompilations when sizes change; with `False`, it never generates dynamic kernels and always specializes; with `None`, it automatically detects whether dynamism has occurred and compiles a more dynamic kernel upon recompile.

Doc: https://pytorch.org/docs/2.5/generated/torch.compile.html#torch.compile
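To make the difference between the three modes concrete, here is a toy model of the recompilation behavior described above. This is NOT real PyTorch internals, just a plain-Python illustration of how many "kernels" each mode ends up compiling as input sizes vary:

```python
# Toy model of torch.compile's three `dynamic` modes (NOT real PyTorch
# internals -- just an illustration of the recompilation behavior).
from typing import Optional


class ToyCompiler:
    def __init__(self, dynamic: Optional[bool] = None):
        self.dynamic = dynamic
        self.kernels = {}        # cache key -> "compiled" kernel
        self.seen_sizes = set()

    def __call__(self, size: int) -> str:
        if self.dynamic is True:
            key = "dynamic"      # one kernel covers all sizes
        elif self.dynamic is False:
            key = size           # always specialize on the exact size
        else:
            # None: specialize at first, switch to a dynamic kernel
            # once a second size is observed (dynamism detected).
            self.seen_sizes.add(size)
            key = "dynamic" if len(self.seen_sizes) > 1 else size
        if key not in self.kernels:
            self.kernels[key] = f"kernel[{key}]"   # "compile" on miss
        return self.kernels[key]


c = ToyCompiler(dynamic=None)
c(4); c(8); c(16)
print(len(c.kernels))  # 2: one specialized for size 4, then one dynamic
```

With `dynamic=False` the same call sequence compiles three kernels (one per size), and with `dynamic=True` it compiles only one.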
If my understanding is correct, accelerate can currently only configure it as `True` or `False`:
- `accelerate/src/accelerate/utils/dataclasses.py`, lines 964 to 965 (at commit `cb8b7c6`)
- `accelerate/src/accelerate/accelerator.py`, line 1613 (at commit `cb8b7c6`)
In our development environment, using `dynamic=None` gives slightly better results. It would be nice if accelerate supported it out of the box, ideally with the same default value/behavior as `torch.compile()`.
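To make the request concrete, here is a minimal sketch of what tri-state support could look like. All names here (`DynamoPluginSketch`, `to_kwargs`) are hypothetical illustrations, not accelerate's actual API; the point is that `dynamic` is kept as `Optional[bool]` and forwarded to `torch.compile()` unchanged, so `None` preserves torch's auto-detect default:

```python
# Hypothetical sketch (names are illustrative, NOT accelerate's actual
# API): keep `dynamic` tri-state and forward it to torch.compile()
# unchanged, so that None retains torch's auto-detect behavior.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DynamoPluginSketch:
    backend: str = "inductor"
    mode: str = "default"
    dynamic: Optional[bool] = None  # True / False / None, like torch.compile

    def to_kwargs(self) -> dict:
        kwargs = {"backend": self.backend, "mode": self.mode}
        # Pass `dynamic` through even when it is None, instead of
        # coercing it to a bool: torch.compile(dynamic=None) means
        # "auto-detect dynamism and recompile more dynamically".
        kwargs["dynamic"] = self.dynamic
        return kwargs


print(DynamoPluginSketch().to_kwargs()["dynamic"])  # None
```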