Releases: ITMO-NSS-team/torch_DE_solver
v0.4.0
In previous episodes we added Fourier layers. As the big guys say, they do not work without adaptive lambdas magic.
We fully revamped the adaptive lambdas routine usual for PINNs so that it is now computed directly from the dispersion (variance) part using Sobol indices (one may refer to the https://github.com/ITMO-NSS-team/torch_DE_solver/blob/adaptive_lambdas_sobol/examples/adaptive_disp_ODE.py and https://github.com/ITMO-NSS-team/torch_DE_solver/blob/adaptive_lambdas_sobol/examples/adaptive_disp_wave_eq.py examples with my experiments), not from a neural tangent kernel eigenvalue analogue. This is done because NTK does not work for anything except a single PDE; in the NTK case we would have left ODEs and systems out.
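A minimal sketch of the idea, assuming the total loss is a lambda-weighted sum of per-term residuals; `sobol_style_lambdas` and the term names are illustrative, not the library API:

```python
import torch

def sobol_style_lambdas(residuals: dict) -> dict:
    """Hypothetical helper: weight each loss term by its share of the total
    residual variance, in the spirit of first-order Sobol indices
    S_i = V_i / sum_j V_j (the dispersion part mentioned above)."""
    variances = {name: r.detach().var() for name, r in residuals.items()}
    total = sum(variances.values())
    return {name: (v / total).item() for name, v in variances.items()}

# Illustrative usage with per-point residual tensors:
residuals = {'operator': torch.randn(100), 'dirichlet': torch.randn(20) * 0.1}
lambdas = sobol_style_lambdas(residuals)  # the high-variance term gets most weight
```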
Secondly, we reworked the loss: it is now computed in two flavours, one weighted with lambdas for gradient descent and one normalized for the stopping criterion. Even though this pulls things apart a bit (the training process is no longer directly tied to the stopping criterion), it benefits parameter unification.
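A sketch of this split under the same assumptions as above (hypothetical helper, not the library's loss class):

```python
import torch

def two_faced_loss(residuals: dict, lambdas: dict):
    """Hypothetical split: a lambda-weighted loss drives gradient descent,
    while a normalized, lambda-free loss feeds the stopping criterion."""
    weighted = sum(lambdas[k] * (residuals[k] ** 2).mean() for k in residuals)
    normalized = sum((residuals[k] ** 2).mean() for k in residuals) / len(residuals)
    return weighted, normalized

# weighted.backward() drives the optimizer; normalized.item() < tol stops training,
# so the same stopping threshold can be reused across problems and lambda settings.
```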
Additionally, we split Dirichlet and initial conditions in terms of lambdas, like the big guys do (and we went a step further and also split Dirichlet, operator, and periodic conditions). Adaptive lambdas are split accordingly.
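In practice this means one lambda per condition type rather than a single boundary weight; the keys below are illustrative, not the library's configuration names:

```python
# One (adaptive) lambda per condition type instead of a single boundary weight.
lambdas = {
    'operator': 1.0,
    'dirichlet': 100.0,
    'initial': 100.0,
    'periodic': 1.0,
}
```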
So in this release:
- Adaptive lambdas
Minor:
- More predictable cache location and behaviour
v0.3.0
New layers and better performance
We added Fourier layers and fixed some performance issues, such as better work with the cache.
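For context, a Fourier layer in the PINN sense typically passes inputs through fixed sinusoidal features before the MLP; a minimal sketch (illustrative class, not the library's implementation):

```python
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Map x to [sin(2*pi*xB), cos(2*pi*xB)] with a fixed random projection B,
    which helps MLPs capture high-frequency parts of PDE solutions."""
    def __init__(self, in_dim: int, n_features: int, scale: float = 1.0):
        super().__init__()
        self.register_buffer('B', torch.randn(in_dim, n_features) * scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = 2 * torch.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

# Usage: prepend to an MLP so the network sees 2 * n_features inputs.
net = nn.Sequential(FourierFeatures(2, 64), nn.Linear(128, 64), nn.Tanh(), nn.Linear(64, 1))
```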