
I'm using ZLUDA for Stable Diffusion, but when I click "create" I get this error in the terminal: #10

Open
AragornT opened this issue Apr 6, 2024 · 1 comment


AragornT commented Apr 6, 2024

Stable diffusion model failed to load
Exception in thread MemMon:
Traceback (most recent call last):
File "C:\Users\xgevr\miniconda3\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\memmon.py", line 43, in run
torch.cuda.reset_peak_memory_stats()
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\cuda\memory.py", line 309, in reset_peak_memory_stats
return torch._C._cuda_resetPeakMemoryStats(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using already loaded model v1-5-pruned-emaonly.safetensors [6ce0161689]: done in 0.0s
RuntimeError: invalid argument to reset_peak_memory_stats
*** Error completing request
*** Arguments: ('task(jpo8trnumwzeiw5)', <gradio.routes.Request object at 0x0000021B1676FED0>, 'A dog', '', [], 20, 'DPM++ 2M Karras', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\txt2img.py", line 110, in txt2img
processed = processing.process_images(p)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\processing.py", line 787, in process_images
res = process_images_inner(p)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\processing.py", line 940, in process_images_inner
model_hijack.embedding_db.load_textual_inversion_embeddings()
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 224, in load_textual_inversion_embeddings
self.expected_shape = self.get_expected_shape()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\textual_inversion\textual_inversion.py", line 156, in get_expected_shape
vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\sd_hijack_clip.py", line 344, in encode_embedding_init_text
embedded = embedding_layer.token_embedding.wrapped(ids.to(embedding_layer.token_embedding.wrapped.weight.device)).squeeze(0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\nn\modules\sparse.py", line 163, in forward
return F.embedding(
^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\nn\functional.py", line 2237, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
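As the traceback itself hints, CUDA errors can surface at a later, unrelated API call; setting `CUDA_LAUNCH_BLOCKING=1` makes kernel launches synchronous so the failing call is reported at its true site. A rough Windows cmd session (the `webui.bat` launcher name is an assumption about this setup):

```shell
:: Illustrative only: force synchronous CUDA kernel launches for debugging,
:: then start the webui so the next traceback points at the real failing call.
set CUDA_LAUNCH_BLOCKING=1
webui.bat
```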


Traceback (most recent call last):
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\call_queue.py", line 95, in f
mem_stats = {k: -(v//-(1024*1024)) for k, v in shared.mem_mon.stop().items()}
^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\memmon.py", line 99, in stop
return self.read()
^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\modules\memmon.py", line 81, in read
torch_stats = torch.cuda.memory_stats(self.device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\cuda\memory.py", line 258, in memory_stats
stats = memory_stats_as_nested_dict(device=device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\sd-test\Zluda\stable-diffusion-webui-directml\venv\Lib\site-packages\torch\cuda\memory.py", line 270, in memory_stats_as_nested_dict
return torch._C._cuda_memoryStats(device)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: invalid argument to memory_allocated
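For context on the byte-to-MiB conversion in `modules/call_queue.py` above: `-(v // -(1024*1024))` is Python's negate-floor-divide-negate idiom for ceiling division. A minimal standalone sketch (the function name is mine, not from the webui code):

```python
# Ceiling division without floats: for positive n, ceil(v / n) == -(v // -n),
# because floor division of a negated numerator rounds toward -infinity.
def bytes_to_mib_ceil(v: int, mib: int = 1024 * 1024) -> int:
    """Convert a byte count to whole MiB, rounding up."""
    return -(v // -mib)

print(bytes_to_mib_ceil(0))        # 0
print(bytes_to_mib_ceil(1))        # 1 (any nonzero remainder rounds up)
print(bytes_to_mib_ceil(1048576))  # 1 (exactly 1 MiB)
print(bytes_to_mib_ceil(1048577))  # 2
```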


CS1o commented Apr 11, 2024

What's your GPU model?
