Force specialize_float = True for torch_xla (#8404)
Signed-off-by: Edward Z. Yang <[email protected]>
ezyang authored Nov 21, 2024
1 parent d572aeb commit d503ca5
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions torch_xla/__init__.py
@@ -234,6 +234,9 @@ def _init_xla_lazy_backend():
# keep PyTorch/XLA CI healthy.
# TODO @wonjoo come up with a long term fix in Dynamo.
torch._dynamo.config.automatic_dynamic_shapes = False
+# Unspecialized float is not friendly to XLA; force specialize_float to True
+# until XLA can better compile F64 scalar tensors.
+torch._dynamo.config.specialize_float = True

# Activate view-replay on AOTAutograd.
# See: https://github.com/pytorch/pytorch/pull/124488
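
For reference, a minimal sketch of how the change takes effect, assuming a working PyTorch/XLA installation; the settings appear unindented in the diff, i.e., at module level in torch_xla/__init__.py, so they should run on import:

    import torch
    import torch_xla  # importing torch_xla applies the Dynamo config settings above

    # Both flags configured for XLA should now be in effect.
    assert torch._dynamo.config.automatic_dynamic_shapes is False
    assert torch._dynamo.config.specialize_float is True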