Commit
merge branch 'dev' for 0.4.0 release
GregDMeyer committed Apr 1, 2024
2 parents 9447c8d + 0302e63 commit 98a1911
Showing 82 changed files with 4,744 additions and 892 deletions.
2 changes: 0 additions & 2 deletions .dockerignore
@@ -2,8 +2,6 @@
docker/
docs/
.gitignore
LICENSE.txt
README.md

# other files that might be generated
build/
26 changes: 26 additions & 0 deletions .zenodo.json
@@ -0,0 +1,26 @@
{
"license": "MIT",
"creators": [
{
"orcid": "0000-0003-2174-3308",
"affiliation": "Massachusetts Institute of Technology",
"name": "Gregory D. Kahanamoku-Meyer"
},
{
"orcid": "0000-0002-0512-4139",
"affiliation": "Harvard University",
"name": "Julia Wei"
}
],
"keywords": [
"quantum",
"parallel",
"Python",
"Krylov",
"PETSc",
"SLEPc",
"GPU",
"CUDA",
"MPI"
]
}
32 changes: 32 additions & 0 deletions CHANGELOG.md
@@ -1,6 +1,38 @@

# Changelog

## 0.3.2 - IN PROGRESS

### Added
- Detailed example scripts (in `examples/scripts`)
- `Operator.expectation()`, a convenience function that computes the expectation value of the operator with respect to a state
- `dynamite.tools.MPI_COMM_WORLD()` which returns PETSc's MPI communicator object
- `Operator.precompute_diagonal` flag, which lets the user control whether the matrix diagonal is precomputed and saved for shell matrices
- `State.entanglement_entropy` member function (a more convenient way of using `computations.entanglement_entropy`, which also remains)
- `tools.get_memory_usage`, which can measure memory usage on a total, per-rank, or per-node basis
- Multi-GPU parallelism via GPU-aware MPI
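
The new `State.entanglement_entropy` convenience wraps `computations.entanglement_entropy`. As a rough picture of the quantity involved, here is a dense NumPy sketch for a pure state (the helper below is ours, for illustration only; it is not dynamite's parallel implementation):

```python
import numpy as np

def entanglement_entropy(psi, keep, L):
    # reshape the 2**L state vector into one axis per spin
    psi = np.asarray(psi).reshape([2] * L)
    rest = [i for i in range(L) if i not in keep]
    # bipartition: kept spins as rows, traced-out spins as columns
    m = np.transpose(psi, list(keep) + rest).reshape(2 ** len(keep), -1)
    # squared singular values are the Schmidt coefficients, i.e. the
    # eigenvalues of the reduced density matrix on the kept spins
    p = np.linalg.svd(m, compute_uv=False) ** 2
    p = p[p > 1e-12]
    return -np.sum(p * np.log(p))

# Bell pair (|00> + |11>)/sqrt(2): maximally entangled, entropy = ln 2
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(entanglement_entropy(bell, keep=[0], L=2))  # ~0.693
```

Dynamite computes the same quantity without ever gathering the full state vector to one rank, which is the point of the built-in method.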

### Removed
- `--track_memory` flag of `benchmark.py`; memory usage is now always reported by the benchmarking script
- `tools.get_max_memory_usage` and `tools.get_cur_memory_usage` in favor of a single function `tools.get_memory_usage`
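
For a sense of what the consolidated `tools.get_memory_usage` reports at the single-process level, here is a stdlib-only sketch of a peak-RSS reading (illustrative only; the helper name is ours, and dynamite additionally aggregates across MPI ranks and nodes):

```python
import resource
import sys

def max_memory_usage_gb():
    # peak resident set size of this process;
    # ru_maxrss is kilobytes on Linux but bytes on macOS
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == 'darwin':
        rss /= 1024
    return rss / 1024 ** 2

print(f'peak memory: {max_memory_usage_gb():.3f} GB')
```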

### Changed
- `Operator.msc_size` renamed to `Operator.nterms`, which now also calls `Operator.reduce_msc()`
- Shell matrix-vector multiplications are now considerably faster
- Improved automatic version check; no longer leaves `.dynamite` files in working directory
- GPU builds now automatically switch to CPU if a GPU is not found (and print a warning)
- Changed default bind mount location for Docker images to the container user's home directory, `/home/dnm`
- Renamed some values of the `which` argument of `eigsolve()`: `smallest` → `lowest` and `largest` → `highest`
- Shift-invert ("target") eigensolving is disabled on GPU, as PETSc does not support it well
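
The renamed `which` values select eigenvalues by their position in the spectrum. In dense NumPy terms (a toy analog of the interface, not dynamite's SLEPc-backed solver):

```python
import numpy as np

def eigsolve_dense(H, nev=1, which='lowest'):
    # np.linalg.eigvalsh returns eigenvalues in ascending order
    evals = np.linalg.eigvalsh(H)
    if which == 'lowest':
        return evals[:nev]
    if which == 'highest':
        return evals[::-1][:nev]
    raise ValueError(f'unknown which: {which}')

H = np.diag([3.0, -1.0, 2.0])
print(eigsolve_dense(H, which='lowest'))   # [-1.]
print(eigsolve_dense(H, which='highest'))  # [3.]
```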

### Fixed
- Explicit subspaces sometimes failed the conservation check even when the operator actually conserved them
- Build was broken with Cython 3
- Work around broken `petsc4py` and `slepc4py` builds with `pip>=23.1` (see [PETSc issue](https://gitlab.com/petsc/petsc/-/issues/1369))
- `Operator.__str__` and `Operator.table()` were formatted poorly for operators with complex coefficients
- Various issues in `dynamite.extras`
- Performance was poor on Ampere (e.g. A100) GPUs unless a particular SLEPc flag was set; the flag is now set automatically

## 0.3.1 - 2023-03-07

### Fixed
3 changes: 2 additions & 1 deletion README.md
@@ -3,4 +3,5 @@ Dynamite

[![Documentation Status](https://readthedocs.org/projects/dynamite/badge/?version=latest)](https://dynamite.readthedocs.io/en/latest/?badge=latest)

Welcome to `dynamite`, which provides fast, massively parallel evolution and eigensolving for spin chain Hamiltonians. It uses the PETSc/SLEPc libraries as a backend. Visit the [ReadTheDocs](https://dynamite.readthedocs.io)!
Welcome to `dynamite`, which provides fast, massively parallel evolution and eigensolving for spin chain Hamiltonians.
Visit the [ReadTheDocs](https://dynamite.readthedocs.io)!
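
For a sense of what "evolution for spin chain Hamiltonians" looks like at toy scale, here is a dense NumPy sketch of a two-spin transverse-field Ising model (dynamite performs the same computation with massively parallel Krylov methods through PETSc/SLEPc, without forming a dense matrix):

```python
import numpy as np

# single-spin Pauli matrices
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

# transverse-field Ising Hamiltonian on two spins:
# H = sz(0)*sz(1) + 0.2*(sx(0) + sx(1))
H = np.kron(sz, sz) + 0.2 * (np.kron(sx, I2) + np.kron(I2, sx))

# evolve |00> to exp(-iHt)|00> by exact diagonalization
t = 1.0
evals, evecs = np.linalg.eigh(H)
psi0 = np.zeros(4)
psi0[0] = 1.0
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

print(np.linalg.norm(psi_t))  # unitary evolution: norm stays 1.0
```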
110 changes: 65 additions & 45 deletions benchmarking/benchmark.py
@@ -3,14 +3,15 @@
from random import uniform,seed
from timeit import default_timer
from itertools import combinations
import numpy as np

from dynamite import config
from dynamite.states import State
from dynamite.operators import sigmax, sigmay, sigmaz
from dynamite.operators import op_sum, op_product, index_sum
from dynamite.extras import majorana
from dynamite.subspaces import Full, Parity, SpinConserve, Auto, XParity
from dynamite.tools import track_memory, get_max_memory_usage
from dynamite.tools import track_memory, get_memory_usage, mpi_print
from dynamite.computations import reduced_density_matrix


@@ -26,13 +27,13 @@ def parse_args(argv=None):

parser.add_argument('--shell', action='store_true',
help='Make a shell matrix instead of a regular matrix.')
parser.add_argument('--no-precompute-diagonal', action='store_true',
help='Turn off precomputation of the matrix diagonal for shell matrices.')
parser.add_argument('--gpu', action='store_true',
help='Run computations on GPU instead of CPU.')

parser.add_argument('--slepc_args', type=str, default='',
help='Arguments to pass to SLEPc.')
parser.add_argument('--track_memory', action='store_true',
help='Whether to compute max memory usage')

parser.add_argument('--subspace', choices=['full', 'parity',
'spinconserve',
@@ -51,8 +52,12 @@

parser.add_argument('--evolve', action='store_true',
help='Request that the Hamiltonian evolves a state.')
parser.add_argument('-t', type=float, default=1.0,
parser.add_argument('-t', type=float, default=50.0,
help='The time to evolve for.')
parser.add_argument('--no_normalize_t', action='store_true',
help='Turn off the default behavior of dividing the evolve time by the '
'matrix norm, which should yield a fairer comparison across models'
' and system sizes.')

parser.add_argument('--mult', action='store_true',
help='Simply multiply the Hamiltonian by a vector.')
@@ -76,9 +81,16 @@
'RDM computation. By default, the first half are kept.')

parser.add_argument('--check-conserves', action='store_true',
help='Check whether the given subspace is conserved by the matrix.')
help='Benchmark the check for whether the given subspace is conserved by '
'the matrix.')

return parser.parse_args(argv)
args = parser.parse_args(argv)

# we need the norm anyway for this; might as well benchmark it
if args.evolve and not args.no_normalize_t:
args.norm = True

return args

def build_subspace(params, hamiltonian=None):
space = params.which_space
@@ -118,19 +130,19 @@ def build_hamiltonian(params):

if params.H == 'MBL':
# dipolar interaction
rtn = index_sum(op_sum(s(0)*s(1) for s in (sigmax, sigmay, sigmaz)))
rtn = index_sum(op_sum(0.25*s(0)*s(1) for s in (sigmax, sigmay, sigmaz)))
# quenched disorder in z direction
seed(0)
for i in range(params.L):
rtn += uniform(-1, 1) * sigmaz(i)
rtn += uniform(-3, 3) * 0.5 * sigmaz(i)

elif params.H == 'long_range':
# long-range ZZ interaction
rtn = op_sum(index_sum(sigmaz(0)*sigmaz(i)) for i in range(1, params.L))
rtn = op_sum(index_sum(0.25*sigmaz(0)*sigmaz(i)) for i in range(1, params.L))
# nearest neighbor XX
rtn += 0.5 * index_sum(sigmax(0)*sigmax(1))
rtn += 0.5 * index_sum(0.25*sigmax(0)*sigmax(1))
# some other fields
rtn += sum(0.1*index_sum(s()) for s in [sigmax, sigmay, sigmaz])
rtn += sum(0.05*index_sum(s()) for s in [sigmax, sigmay, sigmaz])

elif params.H == 'SYK':
seed(0)
@@ -145,31 +157,39 @@ def gen_products(L):
yield p

rtn = op_sum(gen_products(params.L))
rtn.scale(np.sqrt(6/(params.L*2)**3))

elif params.H == 'ising':
rtn = index_sum(sigmaz(0)*sigmaz(1)) + 0.2*index_sum(sigmax())
rtn = index_sum(0.25*sigmaz(0)*sigmaz(1)) + 0.1*index_sum(sigmax())

elif params.H == 'XX':
rtn = index_sum(sigmax(0)*sigmax(1))
rtn = index_sum(0.25*sigmax(0)*sigmax(1))

elif params.H == 'heisenberg':
rtn = index_sum(sigmax(0)*sigmax(1) + sigmay(0)*sigmay(1) + sigmaz(0)*sigmaz(1))
rtn = index_sum(op_sum(0.25*s(0)*s(1) for s in (sigmax, sigmay, sigmaz)))

else:
raise ValueError('Unrecognized Hamiltonian.')

# conservation check can take a long time; we benchmark it separately
# TODO: speed up CheckConserves and remove this
rtn.allow_projection = True

return rtn

def compute_norm(hamiltonian):
config._initialize()
from petsc4py.PETSc import NormType
return hamiltonian.get_mat().norm(NormType.INFINITY)
return hamiltonian.infinity_norm()

def do_eigsolve(params, hamiltonian):
hamiltonian.eigsolve(nev=params.nev,target=params.target)

def do_evolve(params, hamiltonian, state, result):
hamiltonian.evolve(state, t=params.t, result=result)
# norm should be precomputed by now so the following shouldn't affect
# the measured cost of time evolution
t = params.t
if not params.no_normalize_t:
t /= hamiltonian.infinity_norm()
hamiltonian.evolve(state, t=t, result=result)

def do_mult(params, hamiltonian, state, result):
for _ in range(params.mult_count):
@@ -183,25 +203,21 @@ def do_check_conserves(hamiltonian):

# this decorator keeps track of and times function calls
def log_call(function, stat_dict, alt_name=None):
config._initialize()
from petsc4py.PETSc import Sys
Print = Sys.Print

if alt_name is None:
fn_name = function.__name__
else:
fn_name = alt_name

def rtn(*args, **kwargs):
if __debug__:
Print('beginning', fn_name)
mpi_print('beginning', fn_name)

tick = default_timer()
rtn_val = function(*args, **kwargs)
tock = default_timer()

if __debug__:
Print('completed', fn_name)
mpi_print('completed', fn_name)

stat_dict[fn_name] = tock-tick

@@ -210,22 +226,20 @@ def rtn(*args, **kwargs):
return rtn

def main():
main_start = default_timer()

arg_params = parse_args()
slepc_args = arg_params.slepc_args.split(' ')
config.initialize(slepc_args, gpu=arg_params.gpu)
config.L = arg_params.L
config.shell = arg_params.shell

from petsc4py.PETSc import Sys
Print = Sys.Print

if not __debug__:
Print('---ARGUMENTS---')
mpi_print('---ARGUMENTS---')
for k,v in vars(arg_params).items():
Print(str(k)+','+str(v))
mpi_print(str(k)+','+str(v))

if arg_params.track_memory:
track_memory()
track_memory()

stats = {}

@@ -242,11 +256,15 @@ def main():
subspace = log_call(build_subspace, stats)(arg_params, H)
if H is not None:
H.subspace = subspace
Print('H statistics:')
Print(' dim:', H.dim[0])
Print(' nnz:', H.nnz)
Print(' density:', H.density)
Print(' MSC size:', H.msc_size)

if arg_params.no_precompute_diagonal:
H.precompute_diagonal = False

mpi_print('H statistics:')
mpi_print(' dim:', H.dim[0])
mpi_print(' nnz:', H.nnz)
mpi_print(' density:', H.density)
mpi_print(' nterms:', H.nterms)
log_call(H.build_mat, stats)()

# build some states to use in the computations
@@ -269,6 +287,7 @@

if arg_params.mult:
log_call(do_mult, stats)(arg_params, H, in_state, out_state)
stats['avg_mult_time'] = stats['do_mult'] / arg_params.mult_count

if arg_params.rdm:
keep_idxs = arg_params.keep
@@ -279,18 +298,19 @@
if arg_params.check_conserves:
log_call(do_check_conserves, stats)(H)

if arg_params.track_memory:
# trigger memory measurement
if H is not None:
H.destroy_mat()
elif in_state is not None:
in_state.vec.destroy()
# trigger memory measurement
if H is not None:
H.destroy_mat()
elif in_state is not None:
in_state.vec.destroy()

stats['Gb_memory'] = get_max_memory_usage()
# sum the memory usage from all ranks
stats['Gb_memory'] = get_memory_usage(group_by='all', max_usage=True)
stats['total_time'] = default_timer() - main_start

Print('---RESULTS---')
mpi_print('---RESULTS---')
for k,v in stats.items():
Print('{0}, {1:0.4f}'.format(k, v))
mpi_print('{0}, {1:0.4f}'.format(k, v))

if __name__ == '__main__':
main()
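
The `log_call` helper in the benchmark above is a plain timing-decorator pattern; with the MPI-aware printing stripped out, its core reduces to:

```python
from timeit import default_timer

def log_call(function, stat_dict, alt_name=None):
    # record the wall-clock duration of each wrapped call into stat_dict
    fn_name = alt_name if alt_name is not None else function.__name__

    def wrapped(*args, **kwargs):
        tick = default_timer()
        rtn_val = function(*args, **kwargs)
        stat_dict[fn_name] = default_timer() - tick
        return rtn_val

    return wrapped

stats = {}
total = log_call(sum, stats)([1, 2, 3])
print(total, stats)  # 6, {'sum': <elapsed seconds>}
```

Wrapping each benchmark stage this way is what lets the script print one `name, seconds` row per stage in its `---RESULTS---` section.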