I was using a batch size of 32, and once I switched to the Lovász loss I had to decrease the batch size to 8 to be able to train the same model; otherwise an OOM error is thrown.
Is this expected?
Hello,
Normally the memory usage should be about the same, but I have only experimented with PyTorch. Maybe the sort step is using more memory than necessary; I might investigate at some point if I have time.
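For context, here is a minimal sketch of the binary Lovász hinge in PyTorch, based on the standard formulation from the paper (the function names are illustrative, not necessarily this repo's API). The full `torch.sort` over every pixel in the flattened batch, plus the cumulative sums it feeds, is the step suspected above of costing more memory than a pointwise loss like cross-entropy:

```python
import torch

def lovasz_grad(gt_sorted):
    # Gradient of the Lovasz extension w.r.t. sorted errors.
    p = len(gt_sorted)
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1.0 - gt_sorted).cumsum(0)
    jaccard = 1.0 - intersection / union
    if p > 1:
        jaccard[1:p] = jaccard[1:p] - jaccard[0:-1]
    return jaccard

def lovasz_hinge_flat(logits, labels):
    # logits, labels: flat 1-D float tensors over every pixel in the batch.
    signs = 2.0 * labels - 1.0
    errors = 1.0 - logits * signs
    # Full sort over all pixels: unlike cross-entropy, this materialises
    # sorted copies (and their autograd buffers) of batch-sized tensors.
    errors_sorted, perm = torch.sort(errors, dim=0, descending=True)
    gt_sorted = labels[perm]
    grad = lovasz_grad(gt_sorted)
    return torch.dot(torch.relu(errors_sorted), grad)
```

Usage would be along the lines of `loss = lovasz_hinge_flat(logits.view(-1), labels.view(-1).float())`, so all intermediates (errors, the sorted copy, the two cumulative sums) live at the full batch×H×W size.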
Can confirm that the Lovász loss eats up a lot more memory than, say, cross-entropy. This is on TF 2.1. I'll see if there's any low-hanging fruit for this contributions-welcome tag ;-)
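If it helps with profiling, on the PyTorch side (where the peak-memory APIs are straightforward) a rough way to quantify the gap is to compare peak CUDA allocations for one forward/backward pass of each loss. This sketch assumes the hypothetical `lovasz_hinge_flat` from the snippet above; the TF 2.1 path would need its own instrumentation:

```python
import torch
import torch.nn.functional as F

def peak_memory_mb(loss_fn, logits, labels):
    # Peak CUDA memory (MiB) for one forward/backward of a loss.
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    loss = loss_fn(logits, labels)
    loss.backward()
    return torch.cuda.max_memory_allocated() / 2**20

logits = torch.randn(32, 1, 512, 512, device="cuda", requires_grad=True)
labels = torch.randint(0, 2, (32, 1, 512, 512), device="cuda").float()

bce = peak_memory_mb(F.binary_cross_entropy_with_logits, logits, labels)
logits.grad = None  # drop accumulated gradients between runs
lov = peak_memory_mb(
    lambda x, y: lovasz_hinge_flat(x.view(-1), y.view(-1)), logits, labels)
print(f"BCE peak: {bce:.0f} MiB, Lovasz peak: {lov:.0f} MiB")
```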