
<p align="center">
<a href="#installation">Installation</a> •
<a href="#quickstart">Quickstart</a>
<a href="#quickstart">Quickstart</a> •
<a href="#background">Background</a> •
<a href="#features">Features</a> •
<a href="#boundary-conditions">Boundary Conditions</a> •
<a href="#constructors">Constructors</a> •
<a href="#related">Related</a> •
<a href="#license">License</a>
</p>

<p align="center">
Expand All @@ -27,13 +33,6 @@ pip install .

Requires Python 3.10+ and JAX 0.4.13+. 👉 [JAX install guide](https://jax.readthedocs.io/en/latest/installation.html).


## Quickstart

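A minimal end-to-end training sketch (the `pdx.arch.ConvNet` arguments, the
dummy data, and the hyperparameters are illustrative assumptions; consult
`pdequinox.arch` for the actual signatures):

```python
import jax
import jax.numpy as jnp
import equinox as eqx
import optax  # `pip install optax`
from tqdm import tqdm  # `pip install tqdm`

import pdequinox as pdx

# Dummy data: 32 samples of a single-channel 1D field on 64 points.
key = jax.random.PRNGKey(0)
data_key, model_key = jax.random.split(key)
x = jax.random.normal(data_key, (32, 1, 64))
y = jnp.roll(x, shift=1, axis=-1)  # stand-in target; use real simulation data

# Hypothetical arguments; check `pdequinox.arch` for the actual signature.
model = pdx.arch.ConvNet(1, 1, 1, key=model_key)

optimizer = optax.adam(3e-4)
opt_state = optimizer.init(eqx.filter(model, eqx.is_array))

def loss_fn(model, x, y):
    # pdequinox models are single-batch by design, so vmap over the samples
    pred = jax.vmap(model)(x)
    return jnp.mean((pred - y) ** 2)

@eqx.filter_jit
def make_step(model, opt_state, x, y):
    loss, grads = eqx.filter_value_and_grad(loss_fn)(model, x, y)
    updates, opt_state = optimizer.update(
        grads, opt_state, eqx.filter(model, eqx.is_array)
    )
    model = eqx.apply_updates(model, updates)
    return model, opt_state, loss

for epoch in tqdm(range(100)):
    model, opt_state, loss = make_step(model, opt_state, x, y)
```
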
## Background

Neural Emulators are networks trained to efficiently predict physical
phenomena, often associated with PDEs. In the simplest case this can be a
linear advection equation, all the way up to more complicated Navier-Stokes
cases. If we work on Uniform Cartesian grids* (which this package assumes), one
can borrow plenty of architectures from image-to-image tasks in computer vision
(e.g., for segmentation). This includes:

* Standard Feedforward ConvNets
* Convolutional ResNets ([He et al.](https://arxiv.org/abs/1512.03385))
* U-Nets ([Ronneberger et al.](https://arxiv.org/abs/1505.04597))
* Dilated ResNets ([Yu et al.](https://arxiv.org/abs/1511.07122), [Stachenfeld et al.](https://arxiv.org/abs/2112.15275))
* Fourier Neural Operators ([Li et al.](https://arxiv.org/abs/2010.08895))

It is interesting to note that most of these architectures resemble classical
numerical methods.

\* i.e., domains of the form $\Omega = (0, L)^D$ with an equidistant
discretization

## Features

* Based on [JAX](https://github.com/google/jax):
  * One of the best Automatic Differentiation engines (forward & reverse)
  * Automatic vectorization
  * Backend-agnostic code (run on CPU, GPU, and TPU)
* Based on [Equinox](https://github.com/patrick-kidger/equinox):
  * Single-Batch by design
  * Integration into the Equinox SciML ecosystem
* Agnostic to the spatial dimension (works for 1D, 2D, and 3D)
* Agnostic to the boundary condition (works for Dirichlet, Neumann, and periodic
BCs)
* Composability
* Tools to count parameters and assess receptive fields
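Regarding the last point, a parameter count can, for instance, be assembled
from Equinox/JAX primitives alone (a sketch; the package's own helpers may be
named differently):

```python
import equinox as eqx
import jax

def count_parameters(model) -> int:
    """Total number of trainable array entries in an Equinox module."""
    params = eqx.filter(model, eqx.is_array)
    return sum(leaf.size for leaf in jax.tree_util.tree_leaves(params))
```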

## Boundary Conditions

Architectures in `PDEquinox` are designed with physical fields in mind; each
network can select its boundary mode out of the following options:
* `periodic`
* `dirichlet`
* `neumann`

For higher-dimensional problems, it is assumed that the mode is the same for
all boundaries.

Note how the boundary condition changes what is considered a degree of freedom:
Dirichlet boundaries fully eliminate degrees of freedom on the boundary.
Periodic boundaries only keep one end of the domain as a degree of freedom
(this package follows the convention that the left boundary is the degree of
freedom). Neumann boundaries keep both ends as degrees of freedom.
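As a sketch of how the mode is selected (the exact keyword name is an
assumption; consult the constructor signatures):

```python
import jax
import pdequinox as pdx

# `boundary_mode` as a keyword is an assumption for illustration.
net = pdx.arch.ConvNet(
    num_spatial_dims=1,
    in_channels=1,
    out_channels=1,
    boundary_mode="dirichlet",
    key=jax.random.PRNGKey(0),
)
```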

## Constructors

There are two primary architectural constructors for Sequential and
Hierarchical Networks that allow for composability with the `PDEquinox` blocks.
You find many common architectures in the `pdequinox.arch` submodule; they
build on these two paradigms.

### Sequential Constructor

![](img/sequential_net.svg)


The sequential network constructor is defined by:
* a lifting block $\mathcal{L}$
* $N$ blocks $\left \{ \mathcal{B}_i \right\}_{i=1}^N$
* a projection block $\mathcal{P}$
* the hidden channels within the sequential processing
* the number of blocks $N$ (one can also supply a list of hidden channels if they should differ between blocks); see the structural sketch below
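
Structurally, the sequential paradigm amounts to the following (an illustrative
Equinox module, not the package's actual constructor API):

```python
import equinox as eqx

class SequentialEmulator(eqx.Module):
    """Illustrative: lifting -> N processing blocks -> projection."""
    lifting: eqx.Module     # L: maps input channels to hidden channels
    blocks: tuple           # (B_1, ..., B_N), operating on the hidden channels
    projection: eqx.Module  # P: maps hidden channels to output channels

    def __call__(self, x):
        x = self.lifting(x)
        for block in self.blocks:
            x = block(x)
        return self.projection(x)
```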

### Hierarchical Constructor

![](img/hierarchical_net.svg)

The hierarchical network constructor is defined by:
* a lifting block $\mathcal{L}$
* The number of levels $D$ (i.e., the number of additional hierarchies). Setting $D = 0$ recovers the sequential processing.
* the hidden channels (if a single integer is provided, this is assumed to be
  the number of hidden channels in the highest hierarchy)
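
For intuition, a single additional level ($D = 1$) can be sketched as follows
(illustrative only; the down-/up-sampling choices and the skip merge are
assumptions, not the package's actual constructor API):

```python
import equinox as eqx

class OneLevelHierarchy(eqx.Module):
    """Illustrative: process on a coarser level, then merge via a skip."""
    lifting: eqx.Module
    down: eqx.Module        # e.g., a strided convolution to the coarser level
    inner: eqx.Module       # processing on the coarse hierarchy
    up: eqx.Module          # e.g., a transposed convolution back up
    outer: eqx.Module       # fine-level processing after the skip merge
    projection: eqx.Module

    def __call__(self, x):
        x = self.lifting(x)
        skip = x
        # assumes `up ∘ down` preserves the spatial shape
        x = self.up(self.inner(self.down(x)))
        x = self.outer(x + skip)  # skip merge; concatenation is also common
        return self.projection(x)
```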

### Beyond Architectural Constructors

For completeness, `pdequinox.arch` also provides a `ConvNet`, which is a simple
feed-forward convolutional network, and an `MLP`, which is a dense network that
requires pre-defining the number of resolution points.
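
For example (a hypothetical instantiation; the `num_points` argument name is an
assumption reflecting that a dense network needs the resolution fixed ahead of
time):

```python
import jax
import pdequinox as pdx

# Hypothetical arguments; an MLP flattens the field, so the number of
# resolution points must be known at construction time.
mlp = pdx.arch.MLP(
    num_spatial_dims=1,
    in_channels=1,
    out_channels=1,
    num_points=64,
    key=jax.random.PRNGKey(0),
)
```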

## Related

Similar packages that provide a collection of emulator architectures are
[PDEBench](https://github.com/pdebench/PDEBench) and
[PDEArena](https://github.com/pdearena/pdearena). With a focus on
Physics-informed Neural Networks and Neural Operators, there are also
[DeepXDE](https://github.com/lululxvi/deepxde) and [NVIDIA
Modulus](https://developer.nvidia.com/modulus).

## License

MIT, see [here](LICENSE.txt)

---

> [fkoehler.site](https://fkoehler.site/) &nbsp;&middot;&nbsp;
> GitHub [@ceyron](https://github.com/ceyron) &nbsp;&middot;&nbsp;
> X [@felix_m_koehler](https://twitter.com/felix_m_koehler)
