Segmentation accuracy is better during training than testing for validation (val) images #59

Open
briangow opened this issue Jun 28, 2017 · 1 comment

@briangow

I observe with my image set (1 class + background) that I achieve much better segmentation accuracy for the validation (val) images during training (pixelAcc = 97%) than during testing (pixelAcc = 89%). By default, fcnTrain.m and fcnTest.m are set up to use the same val images. My understanding is that testing should really be done on a fully independent (third) set of images, but with the default setup I would expect fcnTrain and fcnTest to report the same accuracy for a given epoch. I confirmed, by outputting the predicted segmentations from SegmentationAccuracy.m, that the val images are segmented better during fcnTrain than during fcnTest.

@ibcny

ibcny commented Aug 18, 2017

I guess you need to set the random seed the same way fcnTrain.m does, based on the epoch and the seed used during training, in order to reproduce the results previously produced, e.g.:

rng(epoch+opts.randomseed);
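
For example, a minimal sketch of reseeding before evaluation in fcnTest.m (the names epoch and opts.randomseed follow the snippet above and are assumptions, not fields guaranteed to exist in the stock scripts):

% A minimal sketch, not the stock fcnTest.m: reseed the RNG with the same
% value fcnTrain.m used for the epoch being evaluated, so any randomized
% steps (e.g. image jittering) match the training-time evaluation.
epoch = 50 ;                      % epoch whose checkpoint is being tested (assumed)
opts.randomseed = 0 ;             % seed used when training was launched (assumed)
rng(epoch + opts.randomseed) ;    % reproduce the training-time RNG state
% ... then run the usual evaluation loop over the val images ...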
