
Any one can reproduce pspnet result? #166

Open
sanweiliti opened this issue Nov 25, 2018 · 6 comments

Comments

@sanweiliti

I can only reach a mIoU of ~52% on the Cityscapes val set with this PSP model, even at full resolution, which is far from the reported result. I didn't train anything; I only validated the pspnet101_cityscapes.caffemodel published by the PSPNet team, which should directly reproduce the reported numbers. Has anyone reproduced the result successfully?

@adam9500370
Contributor

adam9500370 commented Nov 25, 2018

Hi, @sanweiliti .
You can do the following:

  • Download the converted Caffe pretrained weights here
  • Set img_norm=False and version="pascal" arguments in data_loader (to match the data preprocessing of the original Caffe implementation)
  • Replace ptsemseg/models/pspnet.py#L134 with x = F.interpolate(x, size=inp_shape, mode='bilinear', align_corners=True)
  • Set the corresponding settings in your config file (e.g., img_rows: 1025, img_cols: 2049, resume: pspnet_101_cityscapes.pth)

You will get 78.65/96.31 (mIoU/pAcc) on the Cityscapes validation set.
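For reference, the align_corners=True change in the interpolation step matters: it maps input corners exactly onto output corners, matching the Caffe-style bilinear upsampling the original PSPNet weights were trained with. A standalone PyTorch sketch (not from this repo) showing the difference on a tiny tensor:

```python
import torch
import torch.nn.functional as F

# A 1x1x2x2 feature map with known corner values.
x = torch.tensor([[[[0.0, 1.0],
                    [2.0, 3.0]]]])

# align_corners=True: input corners map exactly onto output corners,
# matching the original Caffe bilinear upsampling used by PSPNet.
up_true = F.interpolate(x, size=(4, 4), mode='bilinear', align_corners=True)

# align_corners=False (the PyTorch default): pixel *centers* are aligned
# instead, so interior values are interpolated differently.
up_false = F.interpolate(x, size=(4, 4), mode='bilinear', align_corners=False)

print(up_true[0, 0])
print(up_false[0, 0])
```

With align_corners=True the second value in the first row is 1/3 (corners pinned, uniform spacing between them); with align_corners=False it is 0.25. Mixing the two conventions between training and inference shifts every upsampled feature map slightly, which is enough to degrade mIoU with converted weights.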

@sanweiliti
Author

Hi @adam9500370,
Finally got the result! It turns out I had forgotten to set the data loader version to 'pascal' earlier.

@fido20160817

fido20160817 commented May 31, 2023

Hi, @sanweiliti . You can do the followings:

  • Download the converted Caffe pretrained weights here
  • Set img_norm=False and version="pascal" arguments in data_loader (due to data preprocessing of original Caffe implementation)
  • Replace ptsemseg/models/pspnet.py#L134 with x = F.interpolate(x, size=inp_shape, mode='bilinear', align_corners=True)
  • Set the corresponding settings in your config file (e.g., img_rows: 1025, img_cols: 2049, resume: pspnet_101_cityscapes.pth)

You will get 78.65/96.31 (mIoU/pAcc) on the Cityscapes validation set.

This is my results by running validate.py after changing things according to those mentioned above:

Overall Acc: 0.8243936553782907
Mean Acc : 0.46220855109731096
FreqW Acc : 0.7202854165179821
Mean IoU : 0.36694185899912246

Any tips? @adam9500370

@hjhjb

hjhjb commented May 31, 2023 via email

@fido20160817

And after setting img_rows: 713, img_cols: 713:

Overall Acc: 0.9424106973275609
Mean Acc : 0.7539844599183159
FreqW Acc : 0.895777388036502
Mean IoU : 0.676881206358246

@fido20160817

I finally got normal results with https://github.com/hszhao/semseg after testing multiple codebases. The key is to transform the gt labels correctly (for both training and evaluation).
For Cityscapes: https://github.com/fyu/drn/tree/master/datasets/cityscapes (you can transform the gt labels ahead of time).
For ADE20K: https://github.com/CSAILVision/semantic-segmentation-pytorch/blob/master/mit_semseg/dataset.py#L70 (the gt is transformed inside the code, not before training or evaluation):

def segm_transform(self, segm):
    # convert the segmentation map to a tensor and shift the
    # labels from [0, 150] down to [-1, 149] (-1 = ignore)
    segm = torch.from_numpy(np.array(segm)).long() - 1
    return segm
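For Cityscapes, the label transformation the drn script performs is a remapping of the 34 raw label ids onto the 19 evaluated train ids, with 255 as the ignore label. A minimal sketch of that remapping (helper name is hypothetical; the id/trainId table comes from the official cityscapesScripts labels):

```python
import numpy as np

# Lookup table: raw Cityscapes label id -> train id (255 = ignore).
# The 19 evaluated classes, in train-id order (road, sidewalk, building, ...):
ID_TO_TRAINID = 255 * np.ones(34, dtype=np.uint8)
for train_id, label_id in enumerate(
        [7, 8, 11, 12, 13, 17, 19, 20, 21, 22,
         23, 24, 25, 26, 27, 28, 31, 32, 33]):
    ID_TO_TRAINID[label_id] = train_id

def encode_cityscapes_labels(mask: np.ndarray) -> np.ndarray:
    """Remap a raw Cityscapes label mask to the 19 training ids."""
    return ID_TO_TRAINID[mask]
```

For example, raw id 7 (road) becomes train id 0, while raw id 0 (unlabeled) becomes 255 and is excluded from the metrics. Evaluating against raw ids instead of train ids produces exactly the kind of depressed mIoU reported above.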

I have to calculate mIoU/mAcc/allAcc for my task, but semantic segmentation is not my research area, so it really took me some time to compute these metrics. I hope this is helpful to others.
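For anyone else needing these metrics, they can all be derived from a single confusion matrix accumulated over the predictions. A minimal NumPy sketch (function names are hypothetical, not from this repo):

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    # Accumulate a num_classes x num_classes confusion matrix
    # (rows = ground truth, cols = prediction), skipping ignored pixels.
    valid = gt != ignore_index
    idx = num_classes * gt[valid].astype(int) + pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(
        num_classes, num_classes)

def metrics(cm):
    # allAcc: overall pixel accuracy; mAcc: mean per-class accuracy;
    # mIoU: mean intersection-over-union across classes.
    tp = np.diag(cm).astype(float)
    all_acc = tp.sum() / cm.sum()
    m_acc = np.nanmean(tp / cm.sum(axis=1))
    m_iou = np.nanmean(tp / (cm.sum(axis=1) + cm.sum(axis=0) - tp))
    return m_iou, m_acc, all_acc
```

In practice the confusion matrix is summed over all validation images before computing the means, so classes absent from a single image still contribute correctly to the dataset-level scores.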
