Now that PyTorch-XLA 2.1 has just been released, with many new features added compared to PyTorch-XLA 1.xx, I wonder if you have any plans to integrate those features into this codebase? The useful features include torch.compile, the PJRT runtime, AMP, and so on. On the user end, torch.compile may be no different between GPU and TPU.
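For reference, here is a rough sketch of what the 2.x-style usage might look like on a TPU host, based on my reading of the 2.1 docs (the model, data, and flags below are just placeholders, not anything from this repo):

```python
# Minimal sketch: torch.compile + PJRT + AMP with PyTorch-XLA 2.x on TPU.
# Assumes torch_xla >= 2.1 is installed; names follow the 2.1 documentation.
import os
import torch
import torch_xla.core.xla_model as xm

os.environ.setdefault("PJRT_DEVICE", "TPU")   # select the PJRT runtime

device = xm.xla_device()                       # an xla:0 device under PJRT
model = torch.nn.Linear(128, 10).to(device)    # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# torch.compile routes through the XLA bridge via the "openxla" backend;
# on a CUDA box the same call would just use the default inductor backend.
compiled = torch.compile(model, backend="openxla")

x = torch.randn(32, 128, device=device)
y = torch.randint(0, 10, (32,), device=device)

# AMP on XLA devices uses autocast with the "xla" device type (bf16 on TPU).
with torch.autocast("xla", dtype=torch.bfloat16):
    loss = torch.nn.functional.cross_entropy(compiled(x), y)
loss.backward()
xm.optimizer_step(opt, barrier=True)  # step the optimizer and mark the XLA step boundary
```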
@zw615 there were lots of nice updates to XLA on the horizon when I was still using it regularly via the bits_and_tpu branch. I was excited to start testing PJRT in XLA when my v4 TPU access was suddenly reduced in # of devices and downgraded to preemptible. That basically made developing, supporting/debugging, and training w/ PT XLA impractical given my time constraints :(
I know PT XLA can be used on GPU, but at least prior to 2.x it wasn't as good on GPU as PyTorch eager (especially with compile), so it didn't make much sense to try to support it without developing on TPUs...
Sigh. In that case, it indeed does not make sense to support PyTorch-XLA 2.0. I can see the last commit to the bits_and_tpu branch was made last year. Hopefully your TPU access can be restored in the near future.
Thanks for your wonderful work!