Copy input tensors before async transfer #5830

Merged: 4 commits, Nov 27, 2023
5 changes: 2 additions & 3 deletions torch_xla/csrc/runtime/tensor_source.h

@@ -53,10 +53,9 @@ class AtenSource : public TensorSource {
     at::ScalarType target_torch_type = TorchTypeFromXlaType(primitive_type());
     if (target_torch_type != tensor.type().scalarType()) {
       TORCH_LAZY_COUNTER("AtenSourceDowncasts", 1);
-      tensor_ = std::move(tensor.to(target_torch_type).contiguous());
-    } else {
-      tensor_ = std::move(tensor.contiguous());
     }
+    tensor_ = std::move(tensor.to(target_torch_type, /*non_blocking=*/false,
+                                  /*copy=*/true, at::MemoryFormat::Contiguous));
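For context, a minimal libtorch sketch (not part of this PR) of why the forced copy matters: `.contiguous()` returns an alias when the tensor is already contiguous, whereas `/*copy=*/true` always materializes a fresh buffer, so an async transfer reading from it cannot race with later in-place writes to the caller's tensor.

```cpp
#include <torch/torch.h>

#include <iostream>

int main() {
  // Already contiguous and already the target dtype.
  at::Tensor t = torch::ones({4, 4}, torch::kFloat32);

  // Old path: .contiguous() is a no-op here and returns an alias, so an
  // in-flight transfer reading `alias` would race with later writes to `t`.
  at::Tensor alias = t.contiguous();
  std::cout << (alias.data_ptr() == t.data_ptr()) << std::endl;  // 1

  // New path: /*copy=*/true forces a fresh allocation even when the dtype
  // and memory format already match, detaching the transfer from `t`.
  at::Tensor copy = t.to(torch::kFloat32, /*non_blocking=*/false,
                         /*copy=*/true, at::MemoryFormat::Contiguous);
  std::cout << (copy.data_ptr() == t.data_ptr()) << std::endl;  // 0
}
```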
Collaborator:
Thinking... we have two options: either copy the tensor on the CPU, or only return control to Python after we've started the transfer (or finished the transfer? It's hard for me to tell at which stage the original tensor is no longer needed).

If I understand correctly, instead of creating an xla::Literal we now perform a copy of the CPU tensor, and copying the CPU tensor is faster?

Collaborator (Author):

> Thinking... we have two options: either copy the tensor on the CPU, or only return control to Python after we've started the transfer (or finished the transfer? It's hard for me to tell at which stage the original tensor is no longer needed).

Correct. We don't have any tools to "lock" the input tensor, so we either block until the transfer is done or make a CPU copy here (which is also blocking, but faster than the CPU -> TPU copy).

The third option is to let the caller decide via a non_blocking argument. Whether the CPU tensor will ever be modified during the transfer is context dependent, so the caller can decide whether an unsafe concurrent copy is okay. We very likely want to set non_blocking=True in our data loader, for example. The default case (non_blocking=False) would skip the copy when the tensor is already contiguous and has the correct dtype, saving host memory. This makes the default case slower, but it also makes it safer (avoiding OOMs and races) and more consistent with upstream/eager, where .to is blocking by default.
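A rough sketch of that third option (not what this PR implements; `StartTransferToDevice` is a made-up stand-in for the runtime's async host-to-device copy):

```cpp
#include <torch/torch.h>

#include <future>

// Hypothetical stand-in for the runtime's async host-to-device copy; it just
// reads the buffer on another thread so the example is self-contained.
std::future<void> StartTransferToDevice(at::Tensor source) {
  return std::async(std::launch::async, [t = std::move(source)] {
    volatile float sink = t.sum().item<float>();  // pretend to consume bytes
    (void)sink;
  });
}

// The caller picks the policy, mirroring torch.Tensor.to(non_blocking=...):
// the blocking default is safe; non_blocking returns early and trusts the
// caller not to mutate the tensor until the returned future completes.
std::future<void> Transfer(const at::Tensor& tensor, bool non_blocking) {
  std::future<void> done = StartTransferToDevice(tensor.contiguous());
  if (!non_blocking) {
    done.wait();  // safe default: the buffer is free to reuse on return
  }
  return done;
}

int main() {
  at::Tensor t = torch::rand({1024, 1024});
  Transfer(t, /*non_blocking=*/false);  // safe: blocks until "transfer" ends
  std::future<void> pending = Transfer(t, /*non_blocking=*/true);
  pending.wait();  // caller must wait before touching `t` again
}
```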

> If I understand correctly, instead of creating an xla::Literal we now perform a copy of the CPU tensor, and copying the CPU tensor is faster?

Yeah. I couldn't tell you why this is faster, but it is.

   }

   const void* data() const override { return tensor_.const_data_ptr(); }