
First order gradients are used for outer loop optimization #3

Open
sebamenabar opened this issue Jul 30, 2020 · 0 comments

@sebamenabar

Hi, there was a bug in the original OML code where the computation graph was not created for the inner-loop updates. Looking at the ANML code, it seems to have the same issue; specifically, here it is currently

grad = torch.autograd.grad(loss, fast_weights, allow_unused=False)

but to correctly backpropagate through the inner optimization it should be

grad = torch.autograd.grad(loss, fast_weights, allow_unused=False, create_graph=True)
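
For context, here is a minimal MAML-style sketch of why `create_graph=True` matters (the tensor names and toy data are illustrative only, not taken from the ANML code):

```python
# Illustrative sketch: with create_graph=True the inner-loop update stays in
# the autograd graph, so the outer (meta) gradient includes second-order terms.
# Without it, the graph is cut at the inner step and only first-order
# gradients reach the meta-parameters.
import torch

torch.manual_seed(0)
w = torch.randn(5, requires_grad=True)        # meta-parameters
x, y = torch.randn(8, 5), torch.randn(8)      # toy support data
x_q, y_q = torch.randn(8, 5), torch.randn(8)  # toy query data
inner_lr = 0.1

# Inner loop: one SGD step on the support loss
inner_loss = ((x @ w - y) ** 2).mean()
grad = torch.autograd.grad(inner_loss, w, create_graph=True)[0]
fast_w = w - inner_lr * grad                  # fast weights, still differentiable w.r.t. w

# Outer loop: query loss evaluated with the fast weights
outer_loss = ((x_q @ fast_w - y_q) ** 2).mean()
outer_loss.backward()                         # w.grad now carries second-order terms
print(w.grad)
```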

I was wondering which version was used for the results in the paper. OML's author said fixing this bug improved performance and reduced training time.
Thanks

P.S.: congrats on the work, it's really cool.
