Use Manopt as the optimization package #105
base: master
Conversation
This hopefully shouldn't be too much work. I will try to finish this up at the beginning of next week.
Codecov Report
Attention: Patch coverage is …
I got … One annoying thing was that Manopt doesn't support supplying a function which provides both the cost and the gradient for … While Manopt seems more powerful and versatile for sure, it does take a bit of love to make it work, it seems...
I glanced over everything; I have to admit that I can't immediately see what's going wrong. Did you by any chance try the simplest option without caching? That should be slow, but still work.
At first glance, that shouldn't be a big problem, right? Simply specifying both should be possible, and we can then avoid the recalculation with some kind of cache afterwards (I would first focus on getting it to work, though).
I think it does? From what I gather, you can use any linesearch in LineSearches.jl, which does contain Hager-Zhang: https://julianlsolvers.github.io/LineSearches.jl/latest/reference/linesearch.html#LineSearches.HagerZhang
That's what I do at the moment, and it works in principle. The thing that is a bit ugly about it is that, in order to record things from the cache during optimization, the cost function needs the functor struct so that one can access it using …
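For illustration only (this is not the PR's actual code): a minimal sketch of such a caching functor, assuming Zygote provides the combined cost/pullback evaluation and Manopt's allocating `cost(M, p)` / `grad(M, p)` calling convention. The `CostGradCache` name, its fields, and the toy sphere problem are purely hypothetical.

```julia
using Manopt, Manifolds, Zygote

mutable struct CostGradCache{F}
    f::F             # plain cost function: p -> scalar
    last_p::Any      # point at which the cached values are valid
    last_cost::Float64
    last_grad::Any
end
CostGradCache(f) = CostGradCache(f, nothing, NaN, nothing)

# Compute cost and Euclidean gradient together, but only when the point changed.
function evaluate!(c::CostGradCache, p)
    if c.last_p === nothing || c.last_p != p
        cost, back = Zygote.pullback(c.f, p)
        c.last_cost = cost
        c.last_grad = first(back(one(cost)))
        c.last_p = copy(p)
    end
    return c
end

# Manopt expects separate allocating callables cost(M, p) and grad(M, p).
manopt_cost(c::CostGradCache) = (M, p) -> (evaluate!(c, p); c.last_cost)
manopt_grad(c::CostGradCache) = (M, p) -> (evaluate!(c, p); project(M, p, c.last_grad))

# Usage sketch on a toy sphere problem.
M = Sphere(2)
cache = CostGradCache(p -> sum(abs2, p .- [1.0, 0.0, 0.0]))
p_opt = quasi_Newton(M, manopt_cost(cache), manopt_grad(cache), [0.0, 0.0, 1.0])
```

Both callables funnel through the same `evaluate!`, so the expensive backwards pass runs at most once per point even though Manopt calls cost and gradient separately.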
Oh, I wasn't aware, that is really neat. Then I'll try Hager-Zhang and see if that solves the problems I currently have with the optimization.
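For reference, a hedged sketch of how that wiring might look. `Manopt.LineSearchesStepsize` is, to my understanding, the wrapper Manopt provides for LineSearches.jl algorithms, but the exact constructor (in particular whether the manifold must be passed first) may differ between versions, and the manifold, cost, and gradient below are placeholders.

```julia
using Manopt, Manifolds, LineSearches

M = Sphere(2)                                   # placeholder manifold
f(M, p) = sum(abs2, p .- [1.0, 0.0, 0.0])       # placeholder cost
grad_f(M, p) = project(M, p, 2 .* (p .- [1.0, 0.0, 0.0]))  # Riemannian gradient

# Hager-Zhang from LineSearches.jl wrapped as a Manopt stepsize; in some Manopt
# versions the manifold may have to be passed as the first argument instead.
hz = Manopt.LineSearchesStepsize(LineSearches.HagerZhang())

p_opt = quasi_Newton(M, f, grad_f, [0.0, 0.0, 1.0]; stepsize=hz)
```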
Somehow this is a tough nut to crack. I was trying things out today but with no luck; even the simplest option without caching won't optimize. I think the next thing I'll try is to go back to OptimKit but with the current cost/gradient/cache setup, just to rule out that the problems are with that and not with the optimization/linesearching parameters. There's also a really weird Zygote error (independent of Manopt, I believe) which only comes up the first time I execute …
Any ideas on this, @lkdvos?
Update: it was a typo in the …
Do you know if Manopt might be doing some in-place copying which is overwriting some tensors? This is somehow the only thing I can think of, because otherwise it should really be doing the same thing. It might be worth checking gradient descent first, since that should be slow but very simple.
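A hedged sketch of that sanity check with plain Riemannian gradient descent and Manopt's debug printing; the manifold, cost, and gradient are placeholders, not the PEPS problem itself.

```julia
using Manopt, Manifolds

M = Sphere(2)                                   # placeholder manifold
f(M, p) = sum(abs2, p .- [1.0, 0.0, 0.0])       # placeholder cost
grad_f(M, p) = project(M, p, 2 .* (p .- [1.0, 0.0, 0.0]))  # Riemannian gradient

# Plain gradient descent with per-iteration debug printing, to check that the
# cost actually decreases before suspecting the quasi-Newton/linesearch setup.
p_opt = gradient_descent(
    M, f, grad_f, [0.0, 0.0, 1.0];
    debug=[:Iteration, " | ", :Cost, "\n", :Stop],
    stopping_criterion=StopAfterIteration(200) | StopWhenGradientNormLess(1e-8),
)
```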
As far as I can tell, it doesn't do that by default; all in-place functionality needs to be enabled explicitly. But by fixing that typo the optimization now works and seems to be consistent with OptimKit!

When comparing Manopt and OptimKit, I took a closer look at the available linesearching algorithms and found that the Hager-Zhang from LineSearches behaves slightly differently from the one in OptimKit. The reason is that the OptimKit implementation has a kwarg … Long story short: I will need to look for a good default linesearching algorithm, and from the first tests it seems that …
Okay, I would say the PR is mostly finished, but there are a few things that still need to be resolved: …
I'm not sure if I'll get that in before Christmas, but otherwise I'll finish it up next year :-)
This PR will switch out OptimKit for Manopt to enable more advanced optimization features and better control over optimization settings and outputs.
Since we haven't yet defined a PEPS manifold using the ManifoldsBase.jl interface, we will resort to vectorizing the PEPS using `to_vec` and optimizing the resulting vector; this should not incur significant overhead cost.
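A hypothetical sketch of that vectorization approach, assuming a `to_vec` in the spirit of FiniteDifferences.to_vec (returning the flat vector together with a reconstruction closure) and plain allocating cost/gradient functions; `optimize_vectorized` and its arguments are illustrative names, not the PR's API.

```julia
using Manopt, Manifolds
using FiniteDifferences: to_vec   # stand-in for whatever `to_vec` the PR uses

# Flatten the PEPS into a plain vector, optimize on a Euclidean manifold,
# and reconstruct the PEPS from the optimized vector afterwards.
# Assumes grad_peps(peps) flattens to a vector of the same length as the PEPS.
function optimize_vectorized(peps0, cost_peps, grad_peps)
    v0, from_vec = to_vec(peps0)            # flat vector + reconstruction closure
    M = Euclidean(length(v0))

    cost(M, v) = cost_peps(from_vec(v))                 # cost of the rebuilt PEPS
    grad(M, v) = first(to_vec(grad_peps(from_vec(v))))  # gradient, flattened the same way

    v_opt = quasi_Newton(M, cost, grad, v0)
    return from_vec(v_opt)
end
```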