
Use regular torch.Tensor for CPU tensors #8416

Merged: 4 commits into master, Nov 27, 2024

Conversation

@qihqi (Collaborator) commented Nov 26, 2024:

This tests the waters for encouraging the enable_globally() usage pattern: ideally torch_xla2 should not conflict with regular (non-compute) torch usage, such as data loading, so we make the type conversion explicit.
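
A minimal sketch of the intended pattern. `enable_globally()` and `default_env()` are named in this thread; `env.to_xla(...)` is assumed here as the explicit-conversion step and is not confirmed by the PR:

```python
import torch
import torch_xla2

# Regular (non-compute) torch usage, e.g. dataloading, keeps producing
# plain CPU torch.Tensor objects and is untouched by torch_xla2.
batch = torch.randn(1, 2048)

# Compute runs inside the torch_xla2 environment; the CPU -> XLA tensor
# conversion is explicit rather than happening implicitly on creation.
env = torch_xla2.default_env()
with env:
    xla_batch = env.to_xla(batch)  # assumed explicit-conversion API
    out = torch.nn.functional.relu(xla_batch)
```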

@qihqi changed the title from "Unit test pass" to "Use regular torch.Tensor for CPU tensors" on Nov 26, 2024
@qihqi marked this pull request as ready for review on Nov 26, 2024 22:06
@qihqi requested review from GleasonK and ManfeiBai on Nov 26, 2024 22:06
context_length = 2048
input_shape_prefill = (1, context_length)
input_shape_decode = (1, 1)
model_args = model_exportable.ModelArgs(

A Collaborator commented on this hunk:

Why is there an indent here for this function? Is it for `with torch_xla2.default_env():`?
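
Presumably the indentation being asked about looks like the following (a sketch reconstructed from the hunk above; `model_exportable` comes from the surrounding file and is commented out to keep the snippet self-contained):

```python
import torch_xla2

# The body below is indented because it runs inside the torch_xla2
# environment: ops executed under the `with` block use torch_xla2's tensors.
with torch_xla2.default_env():
    context_length = 2048
    input_shape_prefill = (1, context_length)
    input_shape_decode = (1, 1)
    # model_args = model_exportable.ModelArgs(...)  # continues as in the diff
```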

@@ -133,6 +133,7 @@ def _aten_copy(x, y, memory_format=None):


 @op(torch.ops.aten.clone)
+@op(torch.ops.aten.clone.default)

A Collaborator commented:

What is the .default change for?

@qihqi (Collaborator, Author) replied:

Actually not needed; removed.
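
Background on the question (a gloss, not from the thread): in PyTorch, `torch.ops.aten.clone` is an OpOverloadPacket and `.default` selects its default OpOverload, so the added decorator targeted a specific overload of the same op, which is presumably why it turned out to be redundant. A quick check:

```python
import torch

packet = torch.ops.aten.clone            # OpOverloadPacket: groups all overloads of aten::clone
overload = torch.ops.aten.clone.default  # OpOverload: the specific default overload

print(type(packet).__name__)    # OpOverloadPacket
print(type(overload).__name__)  # OpOverload
```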

@ManfeiBai (Collaborator) left a review comment:

Thanks, LGTM

@qihqi merged commit 20f5166 into master on Nov 27, 2024; 3 checks passed.
@qihqi deleted the hanq_torchxla2 branch on Nov 27, 2024 23:10.
rpsilva-aws pushed a commit to rpsilva-aws/xla that referenced this pull request on Dec 6, 2024.