
Error on latest development version of A1111 (reporting in case it helps prevent an upcoming problem) #44

Closed
CCpt5 opened this issue Jun 7, 2023 · 6 comments


CCpt5 commented Jun 7, 2023

Thanks again for creating such an amazing extension!

I wanted to report an issue running the extension on the latest development branch of A1111. I have both the current main-branch commit and the latest development version installed; the latter fixed an unrelated issue, so I've switched to it. "Inpaint Anything" works fine on the current version, but on the dev version (available here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/commits/dev) I receive the following error when trying to "Run Segment Anything":

(Two attempts with different images; both work fine on the current main branch of A1111.)

input_image: (540, 960, 3) uint8
SamAutomaticMaskGenerator sam_vit_l_0b3195.pth
Traceback (most recent call last):
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\extensions\sd-webui-inpaint-anything\scripts\main.py", line 236, in run_sam
sam_masks = sam_mask_generator.generate(input_image)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\segment_anything\automatic_mask_generator.py", line 163, in generate
mask_data = self._generate_masks(image)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\segment_anything\automatic_mask_generator.py", line 206, in _generate_masks
crop_data = self._process_crop(image, crop_box, layer_idx, orig_size)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\segment_anything\automatic_mask_generator.py", line 251, in _process_crop
keep_by_nms = batched_nms(
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 75, in batched_nms
return _batched_nms_coordinate_trick(boxes, scores, idxs, iou_threshold)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch\jit_trace.py", line 1220, in wrapper
return fn(*args, **kwargs)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 94, in _batched_nms_coordinate_trick
keep = nms(boxes_for_nms, scores, iou_threshold)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
File "D:\Stable-Diffusion-Webui-Dev\stable-diffusion-webui\venv\lib\site-packages\torch_ops.py", line 502, in call
return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at ..\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]

input_image: (1024, 1024, 3) uint8
SamAutomaticMaskGenerator sam_vit_l_0b3195.pth
(Traceback identical to the first attempt above: the same NotImplementedError for 'torchvision::nms' on the CUDA backend, with the same backend registration list.)


I realize this might not be an issue you'd address before the development commits are officially merged into the main branch, but I wanted to give a heads-up in case it's an error you may have to face in the near future. It'd be great if there's an easy solution (since I could use a fix for the dev build I'm running), but regardless, I wanted to mention it. Please close this if it's out of scope at this point.

Thanks!!

(BTW, I did try toggling the new "Enable offline network Inpainting" option in settings, and that didn't resolve the error.)
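
For anyone hitting the same error: "NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend" typically means the installed torchvision wheel was built without CUDA support while torch itself is a CUDA build. A minimal diagnostic sketch, using standard torch/torchvision APIs (not part of the original report):

import torch
import torchvision
from torchvision.ops import nms

print("torch:", torch.__version__, "| compiled for CUDA:", torch.version.cuda)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())

# If torchvision is a CPU-only build, this raises the same
# NotImplementedError seen in the traceback above.
boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0]], device="cuda")
scores = torch.tensor([0.9, 0.8], device="cuda")
print("nms keep indices:", nms(boxes, scores, iou_threshold=0.5))

If the check fails, reinstalling torchvision from the CUDA index matching the torch build (e.g. pip install torchvision --index-url https://download.pytorch.org/whl/cu118 for a +cu118 torch) usually resolves it.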

Uminosachi (Owner) commented:

Thank you for reporting the bug on the dev branch. I haven't been able to reproduce it yet, but I noticed an increase in VRAM usage on the dev branch and took precautionary measures immediately. After doing this, I will remove the venv and attempt to reproduce the issue.

Uminosachi (Owner) commented:

It may depend on your CUDA version, but I can't reproduce the issue in my environment.
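
(A quick way to compare environments: the "Torch env info" report quoted in the next comment is produced by torch's built-in environment collector, which either side can run. Shown here for reference, using the standard torch utility:)

# Prints the same "Torch env info" summary shown below: torch/CUDA
# versions, NVIDIA driver, and the installed torch-family packages.
from torch.utils.collect_env import main as collect_env

collect_env()

(Equivalently, run python -m torch.utils.collect_env from inside the venv.)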


CCpt5 commented Jun 8, 2023

Great, thanks! I'll see whether I still have this problem once there's another merge into the main branch. Until then, I'll close this.

For what it's worth, these are the CUDA references from my system settings report (a new tab in the settings menu in this upcoming update):

"Torch env info": {
    "torch_version": "2.0.1+cu118",
    "is_debug_build": "False",
    "cuda_compiled_version": "11.8",
    "gcc_version": null,
    "clang_version": null,
    "cmake_version": null,
    "os": "Microsoft Windows 10 Pro",
    "libc_version": "N/A",
    "python_version": "3.10.9 | packaged by conda-forge | (main, Jan 11 2023, 15:15:40) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)",
    "python_platform": "Windows-10-10.0.19045-SP0",
    "is_cuda_available": "True",
    "cuda_runtime_version": "11.8.89\r",
    "cuda_module_loading": "LAZY",
    "nvidia_driver_version": "531.41",
    "nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 4090",
    "cudnn_version": null,
    "pip_version": "pip3",
    "pip_packages": [
        "mypy-extensions==1.0.0",
        "numpy==1.23.5",
        "open-clip-torch==2.7.0",
        "pytorch-lightning==1.9.4",
        "torch==2.0.1+cu118",
        "torchaudio==2.0.2+cu118",
        "torchdiffeq==0.2.3",
        "torchmetrics==0.11.4",
        "torchsde==0.2.5",
        "torchvision==0.15.2"
    ],
    "conda_packages": [
        "blas                      1.0                         mkl  ",
        "mkl                       2023.1.0         h8bd8f75_46356  ",
        "mkl-service               2.4.0           py310h2bbff1b_1  ",
        "mkl_fft                   1.3.6           py310h4ed8f06_1  ",
        "mkl_random                1.2.2           py310h4ed8f06_1  ",
        "numpy                     1.24.3          py310h055cbcc_1  ",
        "numpy-base                1.24.3          py310h65a83cf_1  ",
        "pytorch                   1.13.1          py3.10_cuda11.7_cudnn8_0    pytorch",
        "pytorch-cuda              11.7                 h67b0de4_0    pytorch",
        "pytorch-lightning         1.9.5                    pypi_0    pypi",
        "pytorch-mutex             1.0                        cuda    pytorch",
        "torch                     1.13.1+cu117             pypi_0    pypi",
        "torch-fidelity            0.3.0                    pypi_0    pypi",
        "torchaudio                0.13.1                   pypi_0    pypi",
        "torchdiffeq               0.2.3                    pypi_0    pypi",
        "torchmetrics              0.11.4                   pypi_0    pypi",
        "torchsde                  0.2.5                    pypi_0    pypi",
        "torchvision               0.14.1                   pypi_0    pypi"
    ],
    "hip_compiled_version": "N/A",
    "hip_runtime_version": "N/A",
    "miopen_runtime_version": "N/A",
    "caching_allocator_config": "",
    "is_xnnpack_available": "True",
    "cpu_info": [
        "Architecture=9",
        "CurrentClockSpeed=3000",
        "DeviceID=CPU0",
        "Family=207",
        "L2CacheSize=16384",
        "L2CacheSpeed=",
        "Manufacturer=GenuineIntel",
        "MaxClockSpeed=3000",
        "Name=13th Gen Intel(R) Core(TM) i9-13900K",
        "ProcessorType=3",
        "Revision="
    ]
},
"Exceptions": [],
"CPU": {
    "model": "Intel64 Family 6 Model 183 Stepping 1, GenuineIntel",
    "count logical": 32,
    "count physical": 24
},
"RAM": {
    "total": "64GB",
    "used": "16GB",
    "free": "48GB"

CCpt5 closed this as completed Jun 8, 2023
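
(Editorial note on the report above: torch==2.0.1+cu118 and torchaudio==2.0.2+cu118 carry a CUDA suffix, while torchvision==0.15.2 does not. On Windows that is consistent with a CPU-only torchvision wheel pulled from PyPI, which would explain exactly the nms error reported. A two-line check, assuming the version modules in released torch/torchvision wheels, which expose a cuda attribute:)

import torch
import torchvision

# A CUDA build reports the toolkit it was compiled against (e.g. "11.8");
# a CPU-only wheel reports None.
print("torch compiled for CUDA:", torch.version.cuda)
print("torchvision compiled for CUDA:", torchvision.version.cuda)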

CCpt5 commented Jun 8, 2023

Just an FYI: I updated to the latest commit this morning, and now it doesn't error, but it never stops "processing". Not sure which is better :) I wanted to report it since I know you made a change above.

One other question: do you want this extension on the in-app (A1111) list of extensions? It doesn't seem to be there. I mentioned it in a thread on the GitHub repo that indexes extensions, and I'm not sure whether you asked them not to include it or whether it just needs approval. (Re: the last post of this thread: AUTOMATIC1111/stable-diffusion-webui-extensions#59.)

Uminosachi (Owner) commented:

Thank you for sharing your report. I've only just learned about that extensions list. I would be delighted if this extension could be included on it.

Uminosachi (Owner) commented:

Thank you for introducing this extension.
