The current version of the code crops input images uniformly during both training and testing by default. In practice, at test time we often want to avoid extra operations on the image, cropping included.
Without cropping, an unforeseen problem arises. For an odd-sized input, a stride-2 nn.Conv2d (or pooling) layer floors its output size, effectively dropping an edge row or column so that the feature maps stay even-sized as they flow through the network. In a U-Net, however, this makes the upsampled feature map smaller than its skip-connection counterpart, so the concatenation eventually raises a size-mismatch error.
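A minimal sketch of the failure mode (this is not the repository's actual model, just an illustrative stride-2 down/up pair on an odd-sized tensor):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 321, 481)                    # odd height and width
down = nn.Conv2d(8, 8, kernel_size=2, stride=2)    # stride-2 conv floors odd sizes
up = nn.ConvTranspose2d(8, 8, kernel_size=2, stride=2)

skip = x                                           # feature map kept for the skip connection
y = up(down(x))                                    # downsample then upsample
print(skip.shape, y.shape)                         # [1, 8, 321, 481] vs [1, 8, 320, 480]

# Concatenating along the channel dimension now fails, because the spatial
# sizes no longer match:
# torch.cat([skip, y], dim=1)
# RuntimeError: Sizes of tensors must match except in dimension 1
```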
A possible solution is to trim the edges of odd-sized images in the data provider at test time, preprocessing them to even dimensions.
This change has almost no effect on the original method. Dropping at most one row or column at the edge does not meaningfully affect the evaluation of the model's denoising performance, and since most test images already have even dimensions, trimming the few odd-sized ones adds negligible overhead.
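A hedged sketch of the proposed fix in the test data provider; the helper name `trim_to_even` and the HWC numpy layout are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def trim_to_even(img: np.ndarray) -> np.ndarray:
    """Drop at most one row and one column so height and width become even."""
    h, w = img.shape[:2]
    return img[: h - (h % 2), : w - (w % 2)]

noisy = np.random.rand(321, 481, 3).astype(np.float32)   # odd-sized test image
print(trim_to_even(noisy).shape)                          # (320, 480, 3)
```

Both the noisy input and its ground-truth reference would need the same trim so that the evaluation metrics are computed on aligned pixels.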