To take the gradient of a fixed-point function, there are alternative approaches to simply applying AD to the loop of the fixed-point procedure. One such approach is described in Differentiable Programming Tensor Networks, where a fixed point of the form

T^* = f(T^*, \theta)

is considered. The adjoint is then (according to this paper)

\bar{\theta} = \sum_{n=0}^\infty \bar{T^*} \left[\frac{\partial f(T^*, \theta)}{\partial T^*} \right]^n \frac{\partial f(T^*, \theta)}{\partial \theta}
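As a concrete illustration of the geometric-series adjoint (a toy NumPy sketch, not the package's implementation): for a hypothetical linear map f(T, \theta) = A T + \theta b with the spectral radius of A below one, \bar{\theta} can be accumulated with plain vector-Jacobian products and checked against the closed form \bar{T}(1 - A)^{-1} b.

```python
import numpy as np

# Toy linear fixed-point map f(T, theta) = A @ T + theta * b (hypothetical).
# A is scaled so its spectral radius is well below 1 and the series converges.
rng = np.random.default_rng(0)
n = 5
A = 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Truncated series  theta_bar = sum_n  Tbar [df/dT]^n  df/dtheta,
# where df/dT = A and df/dtheta = b for this map.
Tbar = rng.standard_normal(n)      # incoming cotangent of the fixed point
v = Tbar.copy()
theta_bar = 0.0
for _ in range(200):               # terms decay geometrically
    theta_bar += v @ b             # contributes  v [df/dT]^n df/dtheta
    v = v @ A                      # one more vector-Jacobian product

# Closed form for the linear case: theta_bar = Tbar (I - A)^{-1} b.
exact = Tbar @ np.linalg.solve(np.eye(n) - A, b)
print(np.isclose(theta_bar, exact))
```

The loop only ever needs vector-Jacobian products with \partial f / \partial T^*, which is what makes the approach attractive when A is a large implicitly-defined operator.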
This was implemented at this commit, which adds the custom adjoint for fixedpoint. Additionally, it is necessary to gauge-fix the edge tensor, using
# gauge fix
c *= sign(c[1])
signs = sign.(t[:, 2, 1])
t = t .* reshape(signs, :, 1, 1) .* reshape(signs, 1, 1, :)
but since that didn't work for complex numbers, we ultimately removed it. To reintroduce it to the package, it needs to be made to work with complex numbers.
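A possible starting point for the complex case (a hypothetical NumPy sketch, not the package's code): for complex z, Julia's sign(z) is the unit phase z/|z|, so multiplying by sign(c[1]) adds the phase instead of cancelling it; a complex-safe gauge fix multiplies by the conjugate phase.

```python
import numpy as np

# Hypothetical complex generalization of the sign-based gauge fix:
# replace sign(x) by the unit phase x/|x| and multiply by its conjugate.
def phase(z):
    return z / np.abs(z)

rng = np.random.default_rng(0)
c = rng.standard_normal(4) + 1j * rng.standard_normal(4)
t = rng.standard_normal((4, 3, 4)) + 1j * rng.standard_normal((4, 3, 4))

c = c * np.conj(phase(c[0]))    # c[0] becomes real and non-negative

# 0-based analogue of the reference column t[:, 2, 1] in the Julia code
phases = phase(t[:, 1, 0])
t = t * np.conj(phases)[:, None, None] * phases[None, None, :]
# every entry of the reference column now shares one common phase
```

This reduces to the original sign-based fix when the tensors are real, but whether fixing only these phases is a sufficient gauge choice for the package's use case would need to be checked.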
Ignoring complex numbers, some potentially interesting input was given in this comment:
That’s a bad way to compute the adjoint, which is really just solving a linear system. The formula is saying that (1-A)^-1 = sum of A^n, which is a crude way to solve it: a better one is to use an iterative method, as implemented in eg https://github.com/JuliaMath/IterativeSolvers.jl 1. The nonlinear case is: solving T = f(T) by Tn+1 = f(Tn) is bad, use something better (eg Anderson acceleration, implemented in https://github.com/JuliaNLSolvers/ 1)
It would be interesting to explore that option.
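Following that suggestion (a sketch on the same toy linear map f(T, \theta) = A T + \theta b as above, with SciPy's GMRES standing in for IterativeSolvers.jl): instead of summing \bar{T} \sum_n A^n, solve the transposed linear system (1 - A)^T v = \bar{T} matrix-free and contract v with \partial f / \partial \theta.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy linear fixed-point map f(T, theta) = A @ T + theta * b (hypothetical).
rng = np.random.default_rng(0)
n = 5
A = 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
Tbar = rng.standard_normal(n)

# sum_n Tbar A^n is just Tbar (I - A)^{-1}; solve the transposed system
# (I - A)^T v = Tbar with a matrix-free Krylov method instead of iterating.
op = LinearOperator((n, n), matvec=lambda v: v - A.T @ v)
v, info = gmres(op, Tbar, atol=1e-12)
assert info == 0                 # 0 means GMRES converged

theta_bar = v @ b                # contract with df/dtheta
# Agrees with the direct dense solve.
exact = Tbar @ np.linalg.solve(np.eye(n) - A, b)
print(np.isclose(theta_bar, exact))
```

Only transpose-Jacobian products are required, so the same structure carries over when the Jacobian is an implicitly-defined operator; the nonlinear forward iteration could analogously be replaced by Anderson acceleration as the comment suggests.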