Issue

Somewhere highly visible to users we should give guidance on environment configuration (or detect bad configurations). For example, for Intel MPI inside Docker we know that something like

I_MPI_FABRICS=shm ONEAPI_DEVICE_SELECTOR=host mpirun -n <nproc> <neso-executable>

is probably required. Other known bad launch examples exist. All of these will (most likely) launch N MPI ranks that each try to use a thread per core, which leads to N-times over-subscription.
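As a sketch of what "detect" could mean here (a hypothetical wrapper script, not existing NESO functionality): compare the requested ranks-times-threads count against the cores visible on the launch node and warn before over-subscribing. NRANKS and the executable are positional placeholders, and the check only sees the local node, so it is indicative rather than exact for multi-node jobs.

#!/bin/sh
# Hypothetical pre-launch sanity check (single-node view only).
NRANKS="$1"
EXE="$2"
CORES=$(nproc)
# An unset OMP_NUM_THREADS typically means one thread per core.
THREADS="${OMP_NUM_THREADS:-$CORES}"
if [ $((NRANKS * THREADS)) -gt "$CORES" ]; then
  echo "warning: $NRANKS ranks x $THREADS threads > $CORES cores (over-subscription)" >&2
fi
exec mpirun -n "$NRANKS" "$EXE"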
Possible solutions
On install, print example launch commands, e.g. for hipSYCL and for Intel. For Intel, something like:

ONEAPI_DEVICE_SELECTOR=host mpirun -n <nproc> <neso-executable>
# if using docker (can we detect this?)
I_MPI_FABRICS=shm ONEAPI_DEVICE_SELECTOR=host mpirun -n <nproc> <neso-executable>
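On the "# if using docker" question above: one heuristic sketch (an assumption, not existing NESO tooling) is to test for the /.dockerenv file that Docker creates inside containers, and only force the shared-memory fabric when it is present:

# Hypothetical launch wrapper: detect Docker, then set I_MPI_FABRICS.
if [ -f /.dockerenv ]; then
  # We appear to be inside a Docker container; Intel MPI's network
  # fabrics are typically unavailable, so restrict it to shared memory.
  export I_MPI_FABRICS=shm
fi
ONEAPI_DEVICE_SELECTOR=host mpirun -n <nproc> <neso-executable>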
I expect that we will end up with a repository of example launch commands/configurations/submission scripts for different machines. These launch commands will become more complex with MPI+OpenMP (i.e. more than one thread), as the pinning/placement of the threads is often controlled through mpiexec and varies per MPI implementation, as illustrated below.
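To illustrate how thread pinning diverges across implementations (the flags below are illustrative assumptions based on each implementation's documented options, not tested NESO configurations):

# Intel MPI: size each rank's pinning domain by the OpenMP thread count.
OMP_NUM_THREADS=4 I_MPI_PIN_DOMAIN=omp mpirun -n <nproc> <neso-executable>

# Open MPI: give each rank four processing elements, bound to cores.
OMP_NUM_THREADS=4 mpirun -n <nproc> --map-by slot:PE=4 --bind-to core <neso-executable>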
As Nektar++ has no threading, we can assume no threading for NESO. NESO-Particles can use threading; address this separately?