What happened?

Hi there @jcmgray,
when performing a time evolution of a stationary system with the SLEPc backend, there seems to be a memory leak: the resident size scales linearly with the number of timesteps!

As a minimal example, I evolve a 20-spin sparse Heisenberg Hamiltonian with a random initial state. In my actual application, the Hamiltonian takes around 600 MB in memory, and at each timestep the resident size grows by the same amount, so it seems that an additional copy of the Hamiltonian is created at every timestep and never destroyed/garbage-collected afterwards.

Using a profiler, I tracked the problem down to the function mfn_multiply_slepc() at /linalg/slepc_linalg.py:687. In this function the PETSc-converted matrix takes extra space, but even after mfn.solve() and mfn.destroy() the resident size stays larger than before.
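For context, here is a rough sketch of the MFN call pattern that mfn_multiply_slepc() appears to follow each timestep. This is my own simplified reconstruction, not quimb's actual code; the conversion step and option handling are assumptions:

```python
# Sketch only: roughly what seems to happen per timestep.
# Requires a complex PETSc/SLEPc build; names are illustrative.
from petsc4py import PETSc
from slepc4py import SLEPc

def expm_multiply_sketch(mat, vec, t):
    # convert the scipy.sparse CSR matrix to a PETSc AIJ matrix
    # (this allocation is where the resident size seems to grow)
    A = PETSc.Mat().createAIJ(size=mat.shape,
                              csr=(mat.indptr, mat.indices, mat.data))
    b = A.createVecRight()
    b.setArray(vec)
    x = A.createVecLeft()

    # matrix function solver: x = exp(-i t A) @ b
    mfn = SLEPc.MFN().create()
    mfn.setOperator(A)
    fn = mfn.getFN()
    fn.setType(SLEPc.FN.Type.EXP)
    fn.setScale(-1j * t)
    mfn.setFromOptions()
    mfn.solve(b, x)

    out = x.getArray().copy()

    # in principle these should release the C-level allocations again,
    # but the resident size does not go back down
    mfn.destroy()
    A.destroy()
    b.destroy()
    x.destroy()
    return out
```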
What did you expect to happen?
If I am not completely mistaken, the resident size should remain roughly constant throughout the calculation.
Minimal Complete Verifiable Example
```python
import numpy as np
import quimb
from quimb import Evolution

ham = quimb.ham_heis(20, sparse=True)
psi_init = quimb.rand_ket(2**20)
evo_kwargs = {"method": "expm", "expm_backend": "SLEPC"}

t0 = 0.0
evo = Evolution(psi_init, ham, t0=t0, **evo_kwargs)

# try different values; for small step sizes, the scaling
# of the resident size is linear
n_steps = 8
t_arr = np.linspace(t0, t0 + 10, n_steps, endpoint=True)
evo_gen = evo.at_times(t_arr)

for t, p in zip(t_arr, evo_gen):
    print(t)

print("done")  # compare the resident size here
```
Relevant log output
No response
Anything else we need to know?
I also tried looping over evo.update_to(), which yields the same behavior. I even tried destroying the Evolution object and reinitializing it with the current system time at each timestep, with the same result! A sketch of that variant is below.
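This is a minimal sketch of the update_to() variant, reusing ham, psi_init, evo_kwargs and t_arr from the example above (evo.pt for the current state is how I read quimb's API):

```python
# same leak: the resident size grows at every update_to() call
evo = Evolution(psi_init, ham, t0=t_arr[0], **evo_kwargs)
for t in t_arr:
    evo.update_to(t)
    p = evo.pt  # current state
    print(t)
```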
The same calculation with the SciPy backend has a stable resident size. In the minimal example with 8 timesteps, the process was taking 2.5 GB after the loop in the SLEPc case versus 750 MB with SciPy, so somehow memory is not being released where it should be. For comparison, the only change is the backend keyword:
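```python
# identical setup, only the expm backend differs
# (assuming "SCIPY" is the accepted spelling, mirroring "SLEPC" above)
evo_kwargs = {"method": "expm", "expm_backend": "SCIPY"}
```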
Environment
python 3.11.5
PETSc 3.19.5
SLEPc 3.19.1