From 497eb3cf867f9ebf10423f33680eb5f9d53aa6b5 Mon Sep 17 00:00:00 2001
From: Fanli Lin
Date: Thu, 31 Oct 2024 21:08:20 +0800
Subject: [PATCH] fix bug (#3166)

---
 docs/source/basic_tutorials/launch.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/docs/source/basic_tutorials/launch.md b/docs/source/basic_tutorials/launch.md
index 3dfcbfb6fa4..449d72d6685 100644
--- a/docs/source/basic_tutorials/launch.md
+++ b/docs/source/basic_tutorials/launch.md
@@ -97,7 +97,10 @@ Since this runs the various torch spawn methods, all of the expected environment
 
 For example, here is how to use `accelerate launch` with a single GPU:
 ```bash
+# for cuda device:
 CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...
+# for xpu device:
+ZE_AFFINITY_MASK="0" accelerate launch {script_name.py} --arg1 --arg2 ...
 ```
 
 You can also use `accelerate launch` without performing `accelerate config` first, but you may need to manually pass in the right configuration parameters.
@@ -136,7 +139,7 @@ accelerate launch -h
 
 For a visualization of this difference, that earlier `accelerate launch` on multi-gpu would look something like so with `torchrun`:
 ```bash
-MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --num_machines=1 {script_name.py} {--arg1} {--arg2} ...
+MIXED_PRECISION="fp16" torchrun --nproc_per_node=2 --nnodes=1 {script_name.py} {--arg1} {--arg2} ...
 ```
 
 You can also launch your script utilizing the launch CLI as a python module itself, enabling the ability to pass in other python-specific
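
As a quick sanity check of the device-selection examples this patch touches, here is a minimal sketch; the script name `check_device.py` and its one-line contents are illustrative and not part of the repository or this patch:

```bash
# check_device.py (illustrative) would contain only:
#   from accelerate import Accelerator; print(Accelerator().device)

# Select GPU 0 on a CUDA machine; the script should report a cuda device.
CUDA_VISIBLE_DEVICES="0" accelerate launch check_device.py

# Select XPU 0 on an Intel GPU machine (assumes an XPU-enabled PyTorch build);
# the script should report an xpu device.
ZE_AFFINITY_MASK="0" accelerate launch check_device.py
```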