
Custom coef solver #2

Open · wants to merge 3,258 commits into base: `master`

Conversation

cdmccombs

No description provided.

RemiLehe and others added 30 commits July 3, 2024 14:03
…pX#5024)

* Update default behavior for gathering with direct deposition

* Modify condition under which to use galerkin interpolation

* Update condition

* Update benchmarks

---------

Co-authored-by: Edoardo Zoni <[email protected]>
* AMReX: 24.07

* pyAMReX: 24.07

* WarpX: 24.07
While working on another PR, I noticed that the CI test `Langmuir_fluid_2D` was compiled in debug mode.

The test was added originally in ECP-WarpX#3991 and it might be that the debug build type was left over unintentionally.

In general, I think we avoid running CI tests in debug mode, in order to keep the runtime of the tests as low as possible (the current runtime of this test in debug mode is around 80 seconds). 

However, we might have changed policy in recent months and I might not be up to date, so feel free to let me know and close this PR without merging if it is not relevant.
* Update documentation

* Allocate fields

* Always parse filename

* Add option "FromFile" in external particles fields

* Create two separate examples

* Update documentation

* Allocate aux array

* Update aux to take into account copy

* Add external fields from file to auxiliary data

* Update tests

* Remove debugging Print statements

* If averaged PSATD is used, we copy Efield_avg_fp to Efield_aux, otherwise copy Efield_fp.

* Do not access cells that have invalid data when summing external fields

* Update path of checkpoint files

* Update comments

* Update number of components

* Fix clang tidy error

* Make the code compatible with momentum-conserving gather

* Add Python interface and tests

* Use PICMI version that defines `LoadAppliedField`

* Update file names

* Update PICMI standard version

* Allocate dedicated MultiFabs for external particle fields

* Some clean-up

* Performance optimization: do not use external fields in gather kernel

* Allow external particle fields and external grid fields to be simultaneously defined

* External fields are incompatible with the moving window

* Support load balancing

* Update Source/Initialization/WarpXInitData.cpp

* Update Source/Initialization/WarpXInitData.cpp

---------

Co-authored-by: Remi Lehe <[email protected]>
* Turn on particle comparison

* New value for checksum
…X#5043)

* CI: remove unused params, check particles in `collisionXYZ`

* Update benchmark

---------

Co-authored-by: Remi Lehe <[email protected]>
…ons (ECP-WarpX#5045)

* shuffling the full list of particles for intra-species binary collisions

* fixing merge issue with collision json files.

* moved checksum to before error assert for intraspecies DD fusion regression test analysis script.

* updating benchmark values for DD_fusion_3D_intraspecies.

* adding fixed random seed to DD_3D_intraspecies fusion regression test.

* putting checksum back after assert.

* adjusting json.

* updating json.
Update TOSS3 install scripts.
Use the correct ABLASTR (not WarpX) CMake options and compiler
defines. E.g., for ImpactX, we only control ABLASTR, not WarpX.
* AMReX: Weekly Update

* pyAMReX: Weekly Update
`pybind11::lto` is only defined if `CMAKE_INTERPROCEDURAL_OPTIMIZATION`
is not set in `pybind11Common.cmake`. Package managers like Spack
use the latter.
Sadly, the TOSS4 transition was not completed before the IBM contract ran out and Lassen stays with TOSS3. This simplifies the documentation again.
* moved density and temperature calc from each pair to once for each cell.
* pulled out n12 calc to cell level.
* fixed dV issue for cyl geom.
* AMReX: Weekly Update

* pyAMReX: Weekly Update
…w moves. (ECP-WarpX#5082)

* Recompute the macroscopic properties every time the moving window moves.

* Minor cleanup

* Separate allocation and initialization
* update profile script for Frontier

* satisfy dependency for module load

* change module versions and manually install adios2

* fix bug

* fix bug

* update instructions

* update scripts

* remove line for LibEnsemble
ax3l and others added 30 commits October 10, 2024 12:52
Attempt to fix 1D SYCL EB compile errors (throw not allowed on device).

X-ref: spack/spack#46765 (comment)
`isAnyBoundaryPML` is used only inside `WarpX.cpp`. It does not need to be a member function of the WarpX class and it can be moved into an anonymous namespace inside `WarpX.cpp`.
…X#5208)

# Overview

This PR implements flux injection of particles from the embedded
boundary.

It also adds a test that emits particles from a sphere in 3D as
represented here:

![movie](https://github.com/user-attachments/assets/1e76cf87-fd7d-4fa3-8c83-363956226a42)
as well as RZ and 2D versions of this test. (In 2D, the particles are
emitted from a cylinder.)

As can be seen in the above movie, particles are emitted from a single
point within each cell (the centroid of the EB), instead of being
emitted uniformly on the surface of the EB within the cell. This could
be improved in a future PR.

The implementation as well as the user interface largely re-use the
infrastructure for the flux injection from a plane. However, as a
result, the user interface is perhaps not very intuitive. In particular,
when specifying the velocity distribution, `uz` represents the direction
normal to the EB while `ux`, `uy` represent the tangential directions.
This will again be improved in a follow-up PR.

# Follow-up PRs

- [ ] Change the interface of `gaussianflux` so as to specify the
tangential and normal distribution. In other words, instead of:
```
electron.momentum_distribution_type = gaussianflux
electron.ux_th = 0.01
electron.uy_th = 0.01
electron.uz_th = 0.1
electron.uz_m = 0.07
```
we would do:
```
electron.momentum_distribution_type = gaussianflux
electron.u_tangential_th = 0.01  # Tangential to the emitting surface
electron.u_normal_th = 0.1   # Normal to the emitting surface
electron.u_normal_m = 0.07
```

- [ ] Change the interface so that the user does not need to specify the
number of macroparticles per cell (which is problematic for EB, since
different cells contain different amounts of EB surface and should in
general emit different numbers of macroparticles). Instead, we would
specify the weight of macroparticles, i.e. instead of
```
electron.injection_style = NFluxPerCell
electron.num_particles_per_cell = 100
electron.flux_function(x,y,z,t) = "1."
```
we would do
```
electron.injection_style = NFluxPerCell
electron.flux_macroweight = 200   # Number of physical particles per macroparticle
electron.flux_function(x,y,z,t) = "4e12" # Number of physical particles emitted per unit time and surface
```

- [ ] Add a way for the user to specify the total flux across the whole
emitting surface.
Example:
```
electron.flux_function(x,y,z,t) = "(x>-1)*(x<1)"
electron.total_flux = 4e12 # physical particle / second (not per unit area)
```
(In that case, `flux_function` would be rescaled internally by WarpX so
as to emit the right number of particles.)

- [ ] Add PICMI interface
- [ ] Emit the particles uniformly from the surface of the EB within one
cell
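The internal rescaling mentioned for `total_flux` could look like the following minimal sketch. This is purely illustrative (the function and argument names are assumptions, not WarpX code): the user profile is integrated over the emitting surface, then scaled so the total matches the requested flux.

```python
import numpy as np

# Hypothetical sketch, not WarpX's implementation: scale a per-cell flux
# profile so that its surface integral equals the requested total flux.
def rescale_flux(profile_values, areas, total_flux):
    """Scale per-cell profile values so that sum(profile * area) == total_flux."""
    profile_values = np.asarray(profile_values, dtype=float)
    integral = np.sum(profile_values * areas)  # surface integral of raw profile
    return profile_values * (total_flux / integral)

# Two emitting cells of area 0.5 m^2 each, uniform profile, total flux 4e12 /s:
fluxes = rescale_flux([1.0, 1.0], np.array([0.5, 0.5]), 4.0e12)
```

With a uniform profile, each cell simply receives the same rescaled flux; a non-uniform `flux_function` would keep its shape but emit the right total.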
BTD diagnostics sometimes show artifacts at the edge of the range of
collected data. (See for instance the red curve below.)

My understanding is that this happens because the BTD collection planes
use data from the guard cells outside of the simulation domain, when
interpolating fields that are then used for the Lorentz back-transform.
The guard cell data may not be physically correct (e.g. it may not have
the right cancellation between `E` and `B`), and could thus cause this
artifact.

This PR avoids this issue by preventing the collection planes from
collecting data when they are within half a cell of the edge of the
simulation domain.

See the example below, taken from
ECP-WarpX#5337 (plot of the laser field,
from the BTD diagnostic)
![Figure 40](https://github.com/user-attachments/assets/e4549856-4182-4a87-aa26-2d3bc6ac8e2c)
The BTD diagnostics values are identical with this PR, except for the
problematic point appearing at the edge of the domain.
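The half-cell exclusion described above amounts to a simple position check; a minimal sketch (illustrative names, not the WarpX implementation) could be:

```python
# Hypothetical sketch of the half-cell exclusion: a BTD collection plane at
# position z_plane only collects data when it is at least half a cell away
# from both edges of the simulation domain, so it never reads guard cells.
def plane_can_collect(z_plane, z_lo, z_hi, dz):
    """Return True if the collection plane is >= dz/2 inside the domain."""
    return (z_lo + 0.5 * dz) <= z_plane <= (z_hi - 0.5 * dz)

# Example: domain [0, 10) with dz = 1; a plane at z = 0.25 sits in the
# guard-influenced half cell and is skipped, while z = 5.0 collects normally.
```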
…ing window velocity (ECP-WarpX#5341)

In the `development` branch, the `BackTransformed` diagnostics assume
that the moving window moves exactly at the speed of light. This PR
generalizes the code for arbitrary moving window velocity.

This PR does not add an automated test, but the upcoming PR ECP-WarpX#5337 will
add a test which features a moving window with a speed different from
`c`.

This is a follow-up of ECP-WarpX#5226, which modified the transformation of the
simulation box coordinates for arbitrary moving window velocity, but did
not yet update the `BackTransformed` diagnostic code.
This adds an example for how to run FEL simulations with the
boosted-frame technique.

https://warpx--5337.org.readthedocs.build/en/5337/usage/examples/free_electron_laser/README.html

---------

Co-authored-by: Brian Naranjo <[email protected]>
Co-authored-by: Edoardo Zoni <[email protected]>
Could we do this to make sure that we run the GitHub Actions and Azure
jobs (build, test) only if _at least one file outside the_ `Docs`
_directory_ is modified, i.e., skip those jobs if only files in the
`Docs` directory are modified?

I think it would be safe to do so (and a bit of a waste of resources to
not do so...), but I leave it open for discussion.

If merged, we could test this rebasing ECP-WarpX#5386 and seeing if the correct
CI jobs are skipped.

Note that this PR leaves the other CI jobs untouched, e.g., `source`,
`docs`, `CodeQL`, etc.
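One common way to express this in GitHub Actions is a `paths-ignore` filter on the triggering events. The snippet below is only a hedged sketch of that mechanism (the workflow name and branch are placeholders, not necessarily what this PR changes):

```yaml
# Sketch only: skip build/test jobs when a push or PR touches nothing
# outside the Docs directory.
name: build
on:
  push:
    branches: [development]
    paths-ignore:
      - "Docs/**"
  pull_request:
    paths-ignore:
      - "Docs/**"
```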
This environment variable was used for Perlmutter when `--cpus-per-task=N` did not yet work. It was copied around to other templates.

These days, `--cpus-per-task` should work, and the environment variable was renamed in SLURM to `SLURM_CPUS_PER_TASK`.
https://slurm.schedmd.com/sbatch.html#OPT_SLURM_CPUS_PER_TASK

Thanks to NERSC engineers for reporting this update!
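In a batch template this means relying on the SLURM-provided variable rather than exporting a custom one. A minimal sketch (the demo assignment stands in for what SLURM exports when `--cpus-per-task` is given):

```shell
#!/bin/sh
# Sketch: SLURM exports SLURM_CPUS_PER_TASK when --cpus-per-task=N is set;
# forward it to OpenMP instead of hard-coding a custom variable per template.
SLURM_CPUS_PER_TASK=16            # normally set by SLURM; assigned here for demo
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"   # fall back to 1 outside a job
echo "$OMP_NUM_THREADS"
```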
…pX#5394)

The fix introduced in ECP-WarpX#5308 was not correct for Azure pipelines.

In GitHub Actions we trigger a run on the `push` event only for the
`development` branch.

The Azure equivalent of that is triggering a run on the `trigger` event
only for the `development` branch. However, since the `trigger` event
was completely absent from the Azure pipeline file (that is, the default
setup was being used), I had erroneously added the filter branch to the
`pr` event instead, unlike what I did for GitHub actions where the
`push` was exposed in the YAML files.

This was originally aimed at avoiding duplicate runs for "individual CI"
when `pre-commit` opens a pull request by pushing to a secondary branch
`pre-commit-ci-update-config` in the main repo (instead of a fork).

The new setup is tested in ECP-WarpX#5393, where I copied these changes and where
one can see that a commit pushed to that PR does not trigger an
"individual CI" Azure pipeline anymore, but only a "PR automated" one.

Hopefully this is correct for the merge commits that get pushed to
`development` once a PR is closed, but we'll be able to test this only
after merging a PR.
<!--pre-commit.ci start-->
updates:
- [github.com/mgedmin/check-manifest: 0.49 →
0.50](mgedmin/check-manifest@0.49...0.50)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
As suggested by @WeiqunZhang:
We should move `CXXFLAGS: "-Werror -Wno-error=pass-failed"` to when
WarpX builds. It is picked up by `pip`. It didn't fail before, probably
because there was a cached version of the Python packages. Now there is
probably a new version of something that requires rebuilding some packages.
- Weekly update to latest AMReX:
```console
./Tools/Release/updateAMReX.py
```
- Weekly update to latest pyAMReX:
```console
./Tools/Release/updatepyAMReX.py
```
- Weekly update to latest PICSAR (no changes):
```console
./Tools/Release/updatePICSAR.py
```
<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.6.9 →
v0.7.0](astral-sh/ruff-pre-commit@v0.6.9...v0.7.0)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The new version of picmistandard is compatible with NumPy version 2.
This PR adds time-averaged field diagnostics to the WarpX output.

To-do:
- [x] code
- [x] docs
- [x] tests
- [x] example

Follow-up PRs:
- meta-data
- make compatible with adaptive time stepping

This PR is based on work performed during the *2024 WarpX Refactoring
Hackathon* and was created together with @RevathiJambunathan.

Successfully merging this pull request may close ECP-WarpX#5165.

---------

Co-authored-by: RevathiJambunathan <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Edoardo Zoni <[email protected]>
Co-authored-by: Edoardo Zoni <[email protected]>
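The core of a time-averaged field diagnostic is a running mean over an averaging window; a minimal sketch (illustrative only, not the WarpX implementation) could be:

```python
import numpy as np

# Hypothetical sketch of time-averaged field output: accumulate a running
# mean of field snapshots over an averaging window, so the full history
# never needs to be stored.
def time_averaged(snapshots):
    """Incremental mean of a sequence of equally shaped field arrays."""
    avg = np.zeros_like(snapshots[0], dtype=float)
    for n, field in enumerate(snapshots, start=1):
        avg += (field - avg) / n  # running-mean update, O(1) extra memory
    return avg

# Example: averaging two snapshots of a 1D scalar field
avg = time_averaged([np.array([0.0, 2.0]), np.array([2.0, 4.0])])
```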
This PR adds details in the beam-beam collision example about how to
generate the QED lookup tables.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The CI checks `Intel / oneAPI ICX SP` and `Intel / oneAPI DPC++ SP` have
been failing for a few days.

This is likely because the GitHub Actions runner now installs
IntelLLVM 2025.0.0 instead of IntelLLVM 2024.2.1, which it used until a
few days ago.

This causes the following issue when building openPMD:
```console
/home/runner/work/WarpX/WarpX/build_sp/_deps/fetchedopenpmd-src/include/openPMD/backend/Container.hpp:263:32: error: no member named 'm_container' in 'Container<T, T_key, T_container>'
  263 |         container().swap(other.m_container);
      |                          ~~~~~ ^
1 error generated.
```

We can try to install the previous version of IntelLLVM manually and see
if that fixes the issue.
…5421)

Our `CMakeLists` to set up the `ctest` executable had a logic error when
`WarpX_APP=OFF` and `WarpX_PYTHON=ON`, in that it was trying to install
executable tests without an executable application.

The error message looked something like
```console
  Error evaluating generator expression:
    $<TARGET_FILE:app_3d>
  No target "app_3d"
```
…pX#5423)

This PR updates the instructions to compile WarpX on the Adastra
supercomputer (CINES, France)
<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.7.0 →
v0.7.1](astral-sh/ruff-pre-commit@v0.7.0...v0.7.1)
<!--pre-commit.ci end-->

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
- Weekly update to latest AMReX:
```console
./Tools/Release/updateAMReX.py
```
- Weekly update to latest pyAMReX:
```console
./Tools/Release/updatepyAMReX.py
```
- Weekly update to latest PICSAR (no changes):
```console
./Tools/Release/updatePICSAR.py
```
)

This adds the option to inject particles from the embedded boundary with
PICMI.