Expose algorithm kwargs for Grassmann methods #11
Conversation
fixed tiny bug
These test failures seem unrelated; we should probably check whether master even runs... As a side comment, this is not really a bug: the SDD algorithm should be a lot faster for large matrices, so this change really does have performance implications. Nevertheless, I agree that in our case, since the stability of SDD turns out to be insufficient for the cases at hand, we should probably default to the (slower) but more stable SVD algorithm.
We should probably adopt a
A singular value decomposition is probably sufficiently costly in most cases that the
@lkdvos I'm quite certain that the failing tests are indeed unrelated, since I've run many hours of gradient descent with this change without any issues. @Jutho This was my original patch; the problem is that SDD already changes the input tensor, so we should really do:
In this case without the exclamation mark,
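The distinction being discussed is TensorKit's in-place `tsvd!` versus the copying `tsvd`. A minimal sketch of the difference (assuming a plain `TensorMap`; qualify the algorithm types as `TensorKit.SVD()`/`TensorKit.SDD()` if they are not exported in your version):

```julia
using TensorKit

t = TensorMap(randn, ComplexF64, ℂ^4 ⊗ ℂ^4, ℂ^4)

# tsvd! is allowed to destroy its input (SDD in particular works
# in place), so `t` cannot safely be reused afterwards:
# U, S, V, ϵ = tsvd!(t; alg=SDD())

# Without the exclamation mark the input tensor is left untouched:
U, S, V, ϵ = tsvd(t; alg=SDD())
```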
SVD/SDD
Hi all, I updated the PR. From the TensorKitManifolds perspective, the only changes compared to master are that all methods (I think) now take the alg keyword, which can be SVD or SDD. I simultaneously made a PR in MPSKit which adds an extra argument svd_alg to the GradientGrassman struct. This can be set to SVD or SDD and is passed on to the underlying TensorKitManifolds algorithms.
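As described, the new keyword would be threaded through roughly like this (a sketch based on the comment, not on the actual diff; the exact call signatures are assumptions):

```julia
using MPSKit, TensorKitManifolds

# MPSKit side: pick the algorithm once when constructing the optimizer,
# via the new svd_alg argument
opt = GradientGrassman(; svd_alg=SVD())  # or SDD() for speed on large tensors

# TensorKitManifolds side: the same choice can be passed directly
# to the manifold methods via the new `alg` keyword, e.g.
# Grassmann.retract(W, Δ, α; alg=SVD())
```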
I removed some unnecessary comments and reinstated alg=nothing in those cases where alg is not used anyway.
I added some changes to make the tests run again:
This is probably a ridiculous edge case, but I'm wondering what might happen if a user sets the TensorKitManifolds default svd algorithm to SDD and then uses the dispatching in MPSKit to use SVD everywhere. In that case the SDD call inside getproperty might still lead to a crash? Or do you think users should simply not use SDD anymore?
The current approach does not actually have an option to set the default algorithm; it is a compile-time constant. Note that your approach also did not really expose the alg keyword in the … Thus, the current implementation has the stable one as default, and if you wanted to change that, it would require quite a bit of rewriting. I am more or less okay with anything, so I would wait for @Jutho's opinion.
Ok, that makes sense, thanks!
My apologies for not getting to this sooner, which will probably result in double work. I have the following requests / suggestions:
This makes all svd calls in TensorKitManifolds.jl completely hard-coded, but you can still change it by redefining … Ultimately, we should have algorithm structures at the level of …
Thanks @Jutho for the suggestions and @lkdvos for implementing them faster than I could even read them 😓 Does this mean that

```julia
using TensorKitManifolds
TensorKitManifolds.default_svd_alg(::TensorMap) = SDD()
using MPSKit
# rest of my code
```

will result in using the stable version of svd inside all
No,
Oh yes, indeed. Thanks for pointing that out!
Great. I will merge when the lights turn green. There seems to be one … Also, we should probably switch to the …
I saw the talk about ScopedValues at JuliaCon last week; it seems like these kinds of defaults and configurations are precisely what is being targeted there, and from what I understand this would solve our issues. Just something to keep in the back of our heads when we revisit this.
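For reference, a minimal sketch of what a scoped default might look like, using `Base.ScopedValues` (available from Julia 1.11); the name `DEFAULT_SVD_ALG` and the `:svd`/`:sdd` placeholders are illustrative, not part of any package:

```julia
using Base.ScopedValues

# A dynamically scoped default: the global fallback is :svd
const DEFAULT_SVD_ALG = ScopedValue(:svd)

# Any code that needs the default reads it with DEFAULT_SVD_ALG[]
current_alg() = DEFAULT_SVD_ALG[]

current_alg()                    # :svd (the fallback)

# Override only within this dynamic scope, including nested calls:
with(DEFAULT_SVD_ALG => :sdd) do
    current_alg()                # :sdd inside the scope
end

current_alg()                    # :svd again outside the scope
```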