From d1ce72d59e53e45ac753bd4fea3de6bb88686e7f Mon Sep 17 00:00:00 2001
From: Liyang90
Date: Wed, 29 Nov 2023 12:23:08 -0800
Subject: [PATCH] Update pjrt.md (#5941)

Update some missing changes from `GPU` to `CUDA`
---
 docs/pjrt.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/pjrt.md b/docs/pjrt.md
index fca27cca6837..1d0b50fff6d8 100644
--- a/docs/pjrt.md
+++ b/docs/pjrt.md
@@ -34,7 +34,7 @@ _New features in PyTorch/XLA r2.0_:
 ## TL;DR
 
 * To use the PJRT preview runtime, set the `PJRT_DEVICE` environment variable to
-  `CPU`, `TPU`, or `GPU`
+  `CPU`, `TPU`, or `CUDA`
 * In XRT, all distributed workloads are multiprocess, with one process per
   device. On TPU v2 and v3 in PJRT, workloads are multiprocess and multithreaded
   (4 processes with 2 threads each), so your workload should be thread-safe. See
@@ -112,7 +112,7 @@ Sample diff from XRT to PJRT:
 
 ## Benefits
 
-* Simple runtime configuration: just set `PJRT_DEVICE` to `TPU`, `CPU`, or `GPU`
+* Simple runtime configuration: just set `PJRT_DEVICE` to `TPU`, `CPU`, or `CUDA`
   and start using XLA! Or, let PJRT select a device automatically based on your
   environment.
 * Improved performance: reduced overhead from gRPC means faster end-to-end
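
The doc change above renames the `GPU` device string to `CUDA`. As a minimal sketch (not part of the patch), selecting the PJRT runtime from Python would then look like the following; the `torch_xla` usage in the comments is illustrative and assumes the library is installed:

```python
import os

# Configure the PJRT runtime before importing torch_xla.
# Per this patch, valid values are "CPU", "TPU", or "CUDA"
# (the last of which was previously spelled "GPU").
os.environ["PJRT_DEVICE"] = "CUDA"

# With torch_xla installed, the XLA device would then be obtained via:
#   import torch_xla.core.xla_model as xm
#   device = xm.xla_device()
print(os.environ["PJRT_DEVICE"])  # CUDA
```

Equivalently, the variable can be set on the command line, e.g. `PJRT_DEVICE=CUDA python train.py`.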