I observe with my image set (1 class + background) that I achieve much better segmentation accuracy for the validation (val) images during training (pixelAcc = 97%) than during testing (pixelAcc = 89%). By default, fcnTrain.m and fcnTest.m are set up to use the same val images. My understanding is that testing should actually be done on a fully independent (third) set of images, but when running the default setup I would expect the same accuracy from fcnTrain and fcnTest for a given epoch. I confirmed, by outputting images from the predictions in SegmentationAccuracy.m, that the segmentations for the val images look better during fcnTrain than during fcnTest.
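For context, pixel accuracy here is simply the fraction of labeled pixels whose predicted class matches the ground truth. A minimal MATLAB sketch (the function name and the convention that label 0 means "unlabeled" are my assumptions, not taken from SegmentationAccuracy.m):

```matlab
% Hypothetical sketch: pixel accuracy over a label map, ignoring
% unlabeled pixels (assumed to be marked with label 0).
function acc = pixelAccuracy(pred, gt)
  valid = gt > 0;                                 % mask of labeled pixels
  acc = nnz(pred(valid) == gt(valid)) / nnz(valid);
end
```

If fcnTrain and fcnTest compute this over the same val images with the same preprocessing, the numbers should agree; a gap like 97% vs. 89% points to a difference in how the images are fed to the network, not in the metric itself.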
I guess you should set the random seed in fcnTest.m the same way fcnTrain.m does, based on the epoch and the seed used during training, so that you successfully regenerate the results previously produced, like:
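A minimal sketch of what that re-seeding could look like. The variable names (`epoch`, `opts.randomSeed`) are assumptions for illustration and should be checked against the actual fcnTrain.m; the point is only that seeding the generator identically before evaluation reproduces any random preprocessing (e.g. cropping or flipping) applied during training:

```matlab
% Hypothetical sketch: seed the global random stream the same way
% before testing as fcnTrain.m does before each training epoch.
% Same seed => same random stream => same augmented inputs.
rng(epoch + opts.randomSeed);
```

With the streams matched, fcnTest.m should see the same val inputs as the in-training evaluation, and the per-epoch accuracies should agree.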