batch size > 1 #29
Hi! I believe that all of the methods in this codebase are set up to take batched inputs. (However, that is off the top of my head, and none of the examples in this repo uses padded batches, so be wary :) If you try it and run into any issues, please write back here to let us know.)
Hi @nmwsharp! Sorry for replying on an old issue, but I thought it would be best to use one that was already open. I am trying to work with padded triangular meshes in batches (e.g. positions B×V×3 and faces B×F×3). I essentially implemented zero-padding to make sure each mesh has the same number of faces and vertices. When I try to run the code, however, I get a number of errors about dimensions. For instance, I had to modify `face_coords` as follows:

```python
def face_coords(verts, faces):
    # Gather per-face vertex coordinates, handling an optional leading batch dim.
    if verts.dim() > 2:
        coords = torch.zeros((verts.shape[0], faces.shape[1], 3, 3))
        for index in range(verts.shape[0]):
            coords[index] = verts[index][faces[index]]
    else:
        coords = verts[faces]
    return coords
```

I was initially trying to use the default torch_geometric batching, as you also mention in #10; however, it forces me to recompute the operators for every batch, since the vertices/faces in the matrix are shuffled by the dataloader. Would it be possible to make batched training work? It's a shame, as the model is really light and efficient and I really want to take advantage of batching. Thank you in advance :)

Edit: I just noticed that the error only occurs when computing the operators. If I compute them separately and then concatenate them, the rest of the code works, so the fix might be simpler than I initially thought.
Hi, thanks for sharing your research.
I need to stack vertices and faces into a batch, but I notice that all of your experiments in the repository use batch size = None. Have you tried stacking vertices and faces into a batch, adding padding (as in PyTorch3D) to avoid shape mismatches? Can batching with padding produce errors or bad behavior in DiffusionNet?
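For reference, PyTorch3D-style padding can be sketched roughly like this (a hypothetical helper, not part of this repo; padded faces index vertex 0, producing degenerate zero-area triangles that downstream code would need to mask or tolerate):

```python
import torch

def pad_meshes(verts_list, faces_list):
    """Zero-pad a list of (verts [Vi, 3], faces [Fi, 3]) to common sizes.

    Hypothetical sketch: padded vertex rows are all zeros, and padded face
    rows index vertex 0, so they form degenerate zero-area triangles.
    """
    V = max(v.shape[0] for v in verts_list)
    F = max(f.shape[0] for f in faces_list)
    verts = torch.zeros(len(verts_list), V, 3)
    faces = torch.zeros(len(faces_list), F, 3, dtype=torch.long)
    for i, (v, f) in enumerate(zip(verts_list, faces_list)):
        verts[i, : v.shape[0]] = v
        faces[i, : f.shape[0]] = f
    return verts, faces
```

PyTorch3D's `Meshes` structure handles this bookkeeping for you (padded, packed, and list representations), which is likely the cleaner route if you already depend on it.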