How to infer for images of small size #4
Comments
@GiannakopoulosIlias:
Hello, thanks a lot for the input on this. For (1), I agree that interpolation might lead to sub-optimal results.
I think DiffusionEdge can handle this type of image when trained on similar datasets (small images); however, I don't know if such a dataset exists. If you want to train DiffusionEdge on this type of dataset (e.g. 32x32), you do not have to use the latent diffusion model; you can just train the model in image space.
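The size error itself comes from the U-Net's skip connections: each downsampling stage halves the spatial dimensions, and when the input height or width is not divisible by the total downsampling factor, the upsampled feature maps no longer match the saved skip tensors in `torch.cat`. A minimal sketch of a padding workaround (the helper `pad_to_multiple` and the factor 32 are assumptions, not part of DiffusionEdge's API; the correct factor depends on the network depth):

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x, multiple=32):
    """Pad a (B, C, H, W) tensor on the right/bottom so that H and W are
    multiples of `multiple` (the assumed total downsampling factor of the
    U-Net). Returns the padded tensor and the original size for cropping."""
    _, _, h, w = x.shape
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # Reflect-pad to avoid hard zero borders that could create spurious edges.
    x_padded = F.pad(x, (0, pad_w, 0, pad_h), mode="reflect")
    return x_padded, (h, w)

# Usage: pad, run inference, then crop the prediction back.
x = torch.rand(1, 3, 47, 38)            # a small input like the 38x47 JPEGs
x_padded, (h, w) = pad_to_multiple(x)   # padded to (1, 3, 64, 64)
# edges = model(x_padded)[..., :h, :w]  # hypothetical model call, then crop
```

This keeps the original pixels untouched (cropping recovers them exactly), but the model still sees padded borders at inference time, so results near the image edge should be treated with some caution.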
Thanks again! I will experiment a bit further with resizing, as it seems simpler for now.
Hello, how did you solve this in the end? I ran into a similar problem, but I don't have a comparable small-sized training dataset either.
Hello,
I have a custom dataset of small JPEGs (38x47, to be precise).
Is it possible to adjust the network to infer edges for these images?
If I run without any modifications in the network, I get this error:
File "./DiffusionEdge-main/denoising_diffusion_pytorch/mask_cond_unet.py", line 952, in forward
x = torch.cat((x, h.pop()), dim = 1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 4 but got size 5 for tensor number 1 in the list.
Thanks!