RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu) #16

Open
zyddnys opened this issue Nov 25, 2022 · 5 comments

zyddnys commented Nov 25, 2022

Traceback (most recent call last):
  File "G:\workspace\DreamArtist-stable-diffusion\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "G:\workspace\DreamArtist-stable-diffusion\webui.py", line 54, in f
    res = func(*args, **kwargs)
  File "G:\workspace\DreamArtist-stable-diffusion\modules\dream_artist\ui.py", line 36, in train_embedding
    embedding, filename = modules.dream_artist.cptuning.train_embedding(*args)
  File "G:\workspace\DreamArtist-stable-diffusion\modules\dream_artist\cptuning.py", line 436, in train_embedding
    output = shared.sd_model(x, c_in, scale=cfg_scale)
  File "C:\Users\unknown\miniconda3\envs\pytorch-1.13\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\workspace\DreamArtist-stable-diffusion\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 879, in forward
    return self.p_losses(x, c, t, *args, **kwargs)
  File "G:\workspace\DreamArtist-stable-diffusion\modules\dream_artist\cptuning.py", line 286, in p_losses_hook
    logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)

Do you know why this is happening? I can fix it by changing that line to `logvar_t = self.logvar.to(self.device)[t]`, but I don't know why `self.logvar` isn't moved to the GPU in the first place.
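
For reference, the environment in the traceback is PyTorch 1.13, which refuses to index a CPU tensor with CUDA indices; here `self.logvar` stays on the CPU while the timesteps `t` are sampled on the GPU, which is exactly that situation. A minimal standalone reproduction (assumes a CUDA device is available; the variable names only mirror the extension's code):

```python
import torch

# Stand-ins for self.logvar (left on the CPU) and the GPU-sampled timesteps t.
logvar = torch.zeros(1000)
t = torch.randint(0, 1000, (4,), device="cuda")

# Mirrors the failing line and raises:
#   RuntimeError: indices should be either on cpu or on the same device
#   as the indexed tensor (cpu)
# logvar_t = logvar[t].to("cuda")

# The workaround from above: move the tensor to the GPU before indexing it.
logvar_t = logvar.to("cuda")[t]
print(logvar_t.shape)  # torch.Size([4])
```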

@xITmasterx
Same here, I've been having that problem too.

JPPhoto commented Dec 19, 2022

Try changing the line in question to `logvar_t = self.logvar[t.cpu()].to(self.device)` and see if that helps.
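
For what it's worth, indexing a CPU tensor with CPU indices is always allowed, so moving only `t` to the CPU avoids the device check, and the result is then moved back to `self.device`. A quick standalone check (again assuming a CUDA device is available) that this and the fix from the opening comment give the same values:

```python
import torch

logvar = torch.randn(1000)                       # stand-in for self.logvar on the CPU
t = torch.randint(0, 1000, (4,), device="cuda")  # stand-in for the GPU timestep indices

a = logvar.to("cuda")[t]        # opening comment: move the tensor, index on the GPU
b = logvar[t.cpu()].to("cuda")  # this suggestion: index on the CPU, move the result

assert torch.equal(a, b)        # both fixes produce the same logvar_t
```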

xITmasterx commented Dec 19, 2022

Well, now I'm running into this kind of problem instead:

Got any ideas on how to solve it?

Arguments: ('Vex', '0.003', 1, '/content/gdrive/MyDrive/Images/AIVEX', 'dream_artist', 512, 704, 1500, 500, 500, '/content/gdrive/MyDrive/sd/stable-diffusion-webui/textual_inversion_templates/style_filewords.txt', True, False, '', '', 20, 0, 7, -1.0, 512, 512, '5.0', '', True, True, 1, 1, 1.0, 25.0, 1.0, 25.0, 0.9, 0.999, False, 1, False, '0.000005') {}
Traceback (most recent call last):
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 45, in f
    res = list(func(*args, **kwargs))
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 28, in f
    res = func(*args, **kwargs)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/ui.py", line 30, in train_embedding
    embedding, filename = dream_artist.cptuning.train_embedding(*args)
  File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/DreamArtist-sd-webui-extension/scripts/dream_artist/cptuning.py", line 543, in train_embedding
    loss.backward()
  File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper__convolution_backward)

@xITmasterx

More details on the problem: this error is only thrown when I enable the "Train with reconstruction" option.
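
That backward-pass error usually means some module weights are still on the CPU while the activations are on cuda:0; with reconstruction enabled, the extra loss term presumably pulls more modules (such as the VAE decoder) into the graph, so an offloaded module that didn't matter before now breaks the backward pass. A generic PyTorch debugging helper, not part of the webui or this extension, for listing any parameters or buffers left off the expected device:

```python
import torch

def report_offdevice_tensors(model: torch.nn.Module, expected: str = "cuda:0") -> None:
    """Print any parameters or buffers that are not on the expected device.

    Generic PyTorch debugging aid: run it on the model right before
    loss.backward() to spot weights left on the CPU, e.g. by
    lowvram/medvram-style offloading.
    """
    expected_dev = torch.device(expected)
    for name, param in model.named_parameters():
        if param.device != expected_dev:
            print(f"parameter {name} is on {param.device}")
    for name, buf in model.named_buffers():
        if buf.device != expected_dev:
            print(f"buffer {name} is on {buf.device}")

# Hypothetical usage from inside the training code:
# report_offdevice_tensors(shared.sd_model, "cuda:0")
```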

TumnusB mentioned this issue Jan 2, 2023
@a-cold-bird

> Well, now I'm running into this kind of problem instead:
>
> RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument weight in method wrapper__convolution_backward)
>
> Got any ideas on how to solve it?

I ran into this problem too. Uninstalling accelerate made it disappear, but I still haven't solved the other error, "RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)". If anyone figures it out, please let me know, it's driving me crazy :(
