Feature request: support mask for padded batches #4
Comments
The other option would be to use
Given that this code is entirely vectorized, applying a mask will likely not be trivial. And in the end it may not result in a performance improvement anyway, because you're basically asking the GPU to let the masked streaming multiprocessors sleep and wait until the unmasked ones are done with the batch. To get "true" performance gains, you may have to sort, truncate, and batch your samples by their length before putting them through torch-pesq. Or is there another reason besides performance why you want this? In the meantime, can't you just filter the output of torch-pesq to ignore the zeros?
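The "sort and batch by length" idea above can be sketched as follows. This is a hypothetical illustration, not part of the torch-pesq API: `bucket_by_length` is an invented helper that groups variable-length samples so each group can be stacked into a batch with no zero padding at all.

```python
from collections import defaultdict

def bucket_by_length(samples):
    """Group 1-D samples by length (hypothetical helper, not torch-pesq API).

    Every sample in a bucket has the same length, so a bucket can be
    stacked into a dense batch without zero padding and without a mask.
    """
    buckets = defaultdict(list)
    for s in samples:
        buckets[len(s)].append(s)
    return dict(buckets)

samples = [[1, 2], [3, 4, 5], [6, 7], [8, 9, 10]]
buckets = bucket_by_length(samples)
# buckets maps length -> list of samples of that length,
# e.g. buckets[2] == [[1, 2], [6, 7]]
```

Each bucket can then be passed through torch-pesq as its own batch; the trade-off is smaller, uneven batch sizes instead of one padded batch.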
Oh yeah, I could maybe ignore the zeros. However, if two inputs match exactly, the output would be zero and we would incorrectly ignore it. Unlikely, but it can still happen.
Sorry, I should have said "ignore the masked values".
How do I "ignore the masked values"? The output of the loss function is a batch of numbers. I.e. if What I have done is if I have an additional
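One way to read the "ignore the masked values" suggestion: keep a per-item validity mask alongside the batch and use it to select which entries of the loss vector count, instead of dropping exact zeros. A minimal sketch, with an invented `masked_mean` helper (not part of torch-pesq) and made-up numbers:

```python
def masked_mean(losses, valid):
    """Average only the loss entries whose mask flag is True.

    Hypothetical helper: `losses` is the per-item output of the loss
    function, `valid` marks which batch items are real (True) versus
    padding-only (False).
    """
    kept = [loss for loss, keep in zip(losses, valid) if keep]
    return sum(kept) / len(kept) if kept else 0.0

losses = [0.8, 0.0, 1.2]      # 0.0 may be a genuine perfect match
valid = [True, True, False]   # third item is padding only
print(masked_mean(losses, valid))  # 0.4
```

Unlike filtering zeros out of the output, this keeps a legitimate zero loss (two identical inputs) while still excluding padded entries.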
Ah, you're of course right, I misunderstood.
I have a use case where each batch item is zero-padded to fit the batch. I would like to run the PESQ metric on the whole batch, but not on the padded zeros. Enhancing the algorithm to support a mask would solve that problem.