
Failed to reproduce DilatedNet performance #12

Open
DonghyunK opened this issue Apr 17, 2017 · 2 comments
@DonghyunK

Hi,

I am trying to reproduce DilatedNet.

However, my training results show that
pixel acc : 72.4%
mean acc: 38.6%
mean iou: 28.7%.

Further training does not show improvement.

I am using a pre-trained net and multiple GPUs with a mini-batch size of 8. I did not use augmentation, as the paper does not explain what augmentations were used. I expect augmentation affects the results only slightly; otherwise you would probably have described it in the paper.

(1) Could you explain what augmentations are used and how much does it improve results?

(2) Could you provide training and validation log files?

Thank you so much.

@hangzhaomit
Collaborator

Augmentation only helps a little (<2%); we only did flipping during training.
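For anyone reproducing this: in segmentation, a flip has to be applied to the image and its label map together, or the pixel annotations go out of alignment. A minimal framework-free sketch (the `hflip` helper name is mine, not from this repo; during training you would typically apply it with probability 0.5):

```python
def hflip(image, label):
    """Mirror an image and its per-pixel label map left-to-right.

    image: H x W (or H x W x C) nested lists; label: H x W nested lists.
    Both are flipped together so annotations stay aligned with pixels.
    """
    return [row[::-1] for row in image], [row[::-1] for row in label]


# Example: a 2x3 image and its label map, flipped jointly.
img = [[1, 2, 3], [4, 5, 6]]
lbl = [[0, 1, 2], [3, 4, 5]]
flipped_img, flipped_lbl = hflip(img, lbl)
```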
Try to initialize the model with a VGG network pretrained on ImageNet; do not add layers like batch normalization.
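On why the pretrained weights transfer: a dilated 3x3 convolution has exactly the same weight tensor shape as a standard 3x3 convolution; dilation only changes *where* the kernel samples the input, not the kernel itself, so ImageNet-pretrained VGG weights load into the dilated layers unchanged. A single-channel pure-Python sketch of this (the `dilated_conv2d` name is mine, for illustration only):

```python
def dilated_conv2d(x, w, dilation=1):
    """'Valid' 2-D cross-correlation of single-channel input x with kernel w,
    sampling input positions at the given dilation rate.

    The kernel w is the same size for every dilation rate, which is why
    weights pretrained with dilation=1 (standard VGG) can be reused directly
    in dilated layers.
    """
    kh, kw = len(w), len(w[0])
    H, W = len(x), len(x[0])
    # Effective receptive field grows with dilation; output shrinks accordingly.
    oh = H - dilation * (kh - 1)
    ow = W - dilation * (kw - 1)
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += w[a][b] * x[i + a * dilation][j + b * dilation]
            out[i][j] = s
    return out


# The same 3x3 kernel works at dilation 1 and dilation 2 on a 5x5 input.
x = [[1.0] * 5 for _ in range(5)]
w = [[1.0] * 3 for _ in range(3)]
y1 = dilated_conv2d(x, w, dilation=1)  # 3x3 output
y2 = dilated_conv2d(x, w, dilation=2)  # 1x1 output, same kernel
```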

@balloch

balloch commented Jun 22, 2017

@DonghyunK, can you comment on whether the above worked? Also, @hangzhaomit, what do you mean by initializing the model with a VGG pretrained on ImageNet? Is DilatedNet just a standard VGG? Won't the difference in convolution type cause an incompatibility?
