Can anyone reproduce the PSPNet result? #166
Hi @sanweiliti,
You will get 78.65/96.31 (mIoU/pAcc) on the Cityscapes validation set.
Hi @adam9500370,
These are my results from running validate.py after changing things as mentioned above: Overall Acc: 0.8243936553782907. Any tips? @adam9500370
I have received it, thank you.
I finally got normal results at https://github.com/hszhao/semseg after testing multiple codes. The key is to transform the gt labels correctly (for training and evaluation):

```python
def segm_transform(self, segm):
    # to tensor, -1 to 149
    segm = torch.from_numpy(np.array(segm)).long() - 1
    return segm
```

I have to calculate mIoU/mAcc/allAcc for my task, but semantic segmentation is not my research area, so it really took me some time to work out these metrics. Hope this is helpful to others.
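For anyone else who needs the mIoU/mAcc/allAcc numbers mentioned above, they can all be read off a single confusion matrix. This is a minimal NumPy sketch, not the evaluation code of either repo; the function names and the `ignore_index=-1` convention (matching the `- 1` label shift in the snippet above) are my own assumptions:

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_index=-1):
    # Accumulate a num_classes x num_classes confusion matrix,
    # skipping pixels whose gt label equals ignore_index.
    mask = (gt != ignore_index)
    pred, gt = pred[mask], gt[mask]
    return np.bincount(
        num_classes * gt.astype(int) + pred.astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def scores(conf):
    # mIoU / mAcc / allAcc from a confusion matrix
    # (rows = ground truth, columns = prediction).
    tp = np.diag(conf).astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        per_class_iou = tp / (conf.sum(axis=0) + conf.sum(axis=1) - tp)
        per_class_acc = tp / conf.sum(axis=1)
    return (np.nanmean(per_class_iou),   # mIoU, averaged over present classes
            np.nanmean(per_class_acc),   # mAcc
            tp.sum() / conf.sum())       # allAcc (overall pixel accuracy)
```

For a whole validation set, sum the per-image confusion matrices first and call `scores` once at the end; averaging per-image IoUs gives a different (and non-standard) number.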
I can only achieve a mIoU of ~52% on the Cityscapes val set with this PSPNet model, even at full resolution, which is far from the reported result. I didn't train; I only validated with the pretrained pspnet101_cityscapes.caffemodel published by the PSPNet team, so it should directly reproduce the reported result. Has anyone reproduced it successfully?
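A common cause of a large mIoU gap like this is evaluating against raw Cityscapes label IDs instead of the 19 train IDs the model was trained on. As a sanity check, here is a minimal sketch of the standard 19-class remapping; this repo's Cityscapes loader has its own `encode_segmap`, so the names and the loop-based implementation below are illustrative only:

```python
import numpy as np

# Cityscapes label IDs of the 19 evaluated classes, in train-ID order:
# road, sidewalk, building, wall, fence, pole, traffic light, traffic sign,
# vegetation, terrain, sky, person, rider, car, truck, bus, train,
# motorcycle, bicycle.
VALID_CLASSES = [7, 8, 11, 12, 13, 17, 19, 20, 21, 22, 23, 24, 25, 26,
                 27, 28, 31, 32, 33]
IGNORE_INDEX = 255

def encode_segmap(label):
    # Map a raw Cityscapes label-ID mask to train IDs 0..18;
    # every other pixel becomes IGNORE_INDEX and is skipped in scoring.
    out = np.full_like(label, IGNORE_INDEX)
    for train_id, label_id in enumerate(VALID_CLASSES):
        out[label == label_id] = train_id
    return out
```

If the ground truth is remapped this way and the score still sits near 52%, the gap is elsewhere (e.g. missing multi-scale/sliding-window inference or wrong input normalization).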