System Info
Information
Tasks
One of the scripts in the examples/ folder of Accelerate or an officially supported no_trainer script in the examples folder of the transformers repo (such as run_no_trainer_glue.py)
Reproduction
When I used the example provided by Hugging Face and set the device_map to 'balanced', I encountered an error.
code example:
error:
Loading pipeline components...: 14%|██████ | 1/7 [00:00<00:02, 2.18it/s]
Traceback (most recent call last):
  File "/data/baymax/test_diffusers.py", line 4, in <module>
    pipe = FluxPipeline.from_pretrained("/data/baymax/models/FLUX.1-dev",
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 896, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 704, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 886, in from_pretrained
    accelerate.load_checkpoint_and_dispatch(
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/accelerate/big_modeling.py", line 613, in load_checkpoint_and_dispatch
    load_checkpoint_in_model(
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1749, in load_checkpoint_in_model
    loaded_checkpoint = load_state_dict(checkpoint_file, device_map=device_map)
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1471, in load_state_dict
    return safe_load_file(checkpoint_file, device=target_device)
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/safetensors/torch.py", line 315, in load_file
    result[k] = f.get_tensor(k)
  File "/root/miniconda3/envs/baymax/lib/python3.10/site-packages/torch/cuda/__init__.py", line 289, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
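The "code example" itself was not captured in the page. A hypothetical reconstruction from the traceback (line 4 of /data/baymax/test_diffusers.py, plus the device_map='balanced' setting the reporter mentions) might look like the sketch below; the torch_dtype argument is an assumption, not visible in the traceback:

```python
def load_flux_balanced():
    # Hypothetical reconstruction of the reporter's script -- the actual
    # code example was not captured. The pipeline class and checkpoint
    # path come from the traceback; torch_dtype is an assumption.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "/data/baymax/models/FLUX.1-dev",  # local checkpoint path from the traceback
        torch_dtype=torch.bfloat16,        # assumption: common choice for FLUX, not shown in the traceback
        device_map="balanced",             # the setting the reporter says triggers the error
    )
    return pipe
```

On a CPU-only PyTorch build, device_map="balanced" still appears to make accelerate dispatch weights to CUDA devices, which would explain why safetensors' get_tensor call reaches torch.cuda._lazy_init and raises the AssertionError above.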
Expected behavior
Runs normally without any errors.