By default the validation batch size is the same as the training batch size, but sometimes there are only a few validation images, and this raises an error when the batch size is larger than the number of validation images. I think we should add a check to make sure this does not happen.

Here is what I suggest. Add an extra value in the `__init__` for the val_batch_size: if the training batch size is smaller than the size of the validation dataset, this value is just the batch size; otherwise we take something like half the size of the validation dataset. This will require adding the batch size as an argument to the `_get_dataset` method and using the correct one when we build each dataset (a sketch of this is shown below).
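A minimal sketch of what this could look like. Only `__init__` and `_get_dataset` come from the issue; the class name, attribute names, and the `split` argument are hypothetical placeholders for illustration.

```python
class Trainer:
    def __init__(self, batch_size, val_dataset_size, **kwargs):
        self.batch_size = batch_size
        # Cap the validation batch size: reuse the training batch size when it
        # fits, otherwise fall back to roughly half the validation set size.
        if batch_size < val_dataset_size:
            self.val_batch_size = batch_size
        else:
            self.val_batch_size = max(1, val_dataset_size // 2)

    def _get_dataset(self, split, batch_size):
        # batch_size is now an explicit argument, so the caller can pass
        # self.batch_size for training and self.val_batch_size for validation.
        ...

    def train(self):
        train_ds = self._get_dataset("train", self.batch_size)
        val_ds = self._get_dataset("val", self.val_batch_size)
        ...
```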