
We held our sixth GPU4GEO Julia hackathon on October 07-11, 2024 in the Black Forest (DE), focussing on a wide range of Julia topics. Below is a glimpse into the progress made by some participants on various Julia-related projects, along with some visual impressions.

> 🚧 more news to come!

## Chmy.jl - Finite differences and staggered grids on CPUs and GPUs

*You Wu, Ivan Utkin, Ludovic Räss*

It has been a fruitful week, during which we restructured the package and further expanded the documentation of [Chmy.jl](https://github.com/PTsolvers/Chmy.jl), focussing on its distributed usage.
In order to allow users to use all submodules with a single `using Chmy` statement, the submodules are now re-exported at the top level of the package.
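A minimal sketch of this re-export pattern is shown below; it is illustrative only (the package and type names are placeholders, and Chmy.jl's actual source may organise its exports differently):

```julia
# Illustrative only: a top-level module that re-exports its submodules' names,
# so downstream users need a single `using` statement (placeholder names).
module MyPkg

module Grids
    export UniformGrid
    struct UniformGrid end
end

module Fields
    export Field
    struct Field end
end

# Bring the submodules' exported names into the top-level namespace and
# re-export them, so `using MyPkg` alone makes them available downstream.
using .Grids, .Fields
export UniformGrid, Field

end # module

# In the same session:
# julia> using .MyPkg
# julia> g = UniformGrid(); f = Field()
```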
With [PR #56](https://github.com/PTsolvers/Chmy.jl/pull/56), we aim to provide comprehensive yet beginner-friendly documentation on the distributed usage of Chmy.jl. To this end, we provide a general conceptual introduction to distributed computing in the [`Distributed`](https://ptsolvers.github.io/Chmy.jl/dev/concepts/distributed/) section. More experienced users can start directly with a simple script for solving a 2D diffusion example in the section [Using Chmy.jl with MPI](https://ptsolvers.github.io/Chmy.jl/dev/using_chmy_with_mpi/).

## Convection code

*Paul Tackley*

**A Julia spherical annulus convection program.** The program solves the 2D spherical annulus variable-viscosity equations given in [Hernlund & Tackley (2008)](https://doi.org/10.1016/j.pepi.2008.07.037) on a staggered grid, using the direct solver (`\`). Some anomalous behaviour is observed relative to the test cases reported in that paper, so more testing and debugging is needed. Once perfected, it will be posted online for general use.

~~~
<center>
<img src="../../assets/images/convect_annulus.png" title="Annulus convection" alt="Annulus convection" width="75%">
</center>
~~~

> Almost producing the right result.
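For readers unfamiliar with the approach, the core pattern of "assemble a sparse operator, solve it with `\`" is sketched below on a toy 1D variable-coefficient diffusion problem. This is illustrative only and not the annulus convection code; the grid size, coefficients, and boundary treatment are made up for the example.

```julia
# Toy illustration of the direct-solver approach (not the annulus program):
# steady 1D variable-viscosity diffusion d/dx(η dT/dx) = -1 with zero values
# just outside the domain, with η living on cell faces (staggered layout).
using SparseArrays, LinearAlgebra

n  = 101                         # number of cell centres
dx = 1.0 / n
η  = 1.0 .+ 0.5 .* rand(n + 1)   # viscosity on the n+1 cell faces

I_, J_, V_ = Int[], Int[], Float64[]
rhs = fill(-1.0, n)
for i in 1:n
    ηW, ηE = η[i], η[i + 1]      # west/east face coefficients
    if i > 1
        push!(I_, i); push!(J_, i - 1); push!(V_, ηW / dx^2)
    end
    if i < n
        push!(I_, i); push!(J_, i + 1); push!(V_, ηE / dx^2)
    end
    push!(I_, i); push!(J_, i); push!(V_, -(ηW + ηE) / dx^2)
end
A = sparse(I_, J_, V_, n, n)

T = A \ rhs                      # direct sparse solve (UMFPACK under the hood)
```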
## Permeability in GeoParams

*Pascal Aellig, Jacob Frasunkiewicz*

Over the course of the week, we discussed and added permeability laws to [GeoParams.jl](https://github.com/JuliaGeodynamics/GeoParams.jl). Currently, four laws can be added and called from the `MaterialParams` structure. Part one of many has been merged in PR [#225](https://github.com/JuliaGeodynamics/GeoParams.jl/pull/225), so stay tuned for more over the coming weeks as we implement computational routines to facilitate writing two-phase codes.
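As a rough illustration of how such laws plug into `MaterialParams`: the law and function names below (`ConstantPermeability`, `compute_permeability`) are assumptions based on GeoParams' usual naming conventions and may differ from what was merged in PR #225.

```julia
# Hypothetical usage sketch; ConstantPermeability and compute_permeability are
# assumed names for illustration and may not match the merged GeoParams API.
using GeoParams

rock = SetMaterialParams(;
    Name         = "porous rock",
    Phase        = 1,
    Density      = ConstantDensity(ρ = 2700kg / m^3),
    Permeability = ConstantPermeability(k = 1e-12m^2),
)

# Evaluating the law for a given phase would then follow the usual
# compute_* pattern of GeoParams, e.g.:
# k = compute_permeability(rock.Permeability[1], (; ϕ = 0.05))
```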

## Implicit solvers with Enzyme.jl

*Lorenzo Candioti, Valentin Churavy*

We developed a workflow to solve partial differential equations (PDEs) with implicit schemes using the automatic differentiation package [Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl). Using Enzyme to solve PDEs typically involves spelling out the residual form of the equations and differentiating this function with respect to the solution variable. The resulting vector-Jacobian product (VJP) or Jacobian-vector product (JVP) is then used to assemble the sparse Jacobian needed to solve the equations. The newly developed workflow instead relies on Krylov solvers, which only need the JVP (or VJP) as input to solve the system of equations, thus avoiding the computationally expensive assembly of the full Jacobian. Tested on a simple 1D diffusion equation, the new workflow is ca. 1.5x faster than the full Jacobian assembly approach.
```
Matrix-free
──────────────────────────────────────────────────────────────────────
                             Time                    Allocations
                    ───────────────────────   ────────────────────────
 Tot / % measured:      989ms /  100.0%           1.07MiB /   71.3%

Section     ncalls     time    %tot     avg     alloc    %tot      avg
──────────────────────────────────────────────────────────────────────
iteration        1    989ms  100.0%   989ms    785KiB  100.0%   785KiB
gmres            9    988ms  100.0%   110ms   78.9KiB   10.1%  8.77KiB
jvp          43.3k    276ms   28.0%  6.38μs     0.00B    0.0%    0.00B
forward         10   82.2μs    0.0%  8.22μs     0.00B    0.0%    0.00B
inc              9   44.4μs    0.0%  4.93μs     0.00B    0.0%    0.00B
──────────────────────────────────────────────────────────────────────

Jacobian assembly
───────────────────────────────────────────────────────────────────────
                             Time                    Allocations
                    ───────────────────────   ────────────────────────
 Tot / % measured:      1.43s /  100.0%           56.7MiB /   99.2%

Section     ncalls     time    %tot     avg     alloc    %tot      avg
───────────────────────────────────────────────────────────────────────
iteration        1    1.43s  100.0%   1.43s   56.2MiB  100.0%  56.2MiB
assembly         9    1.40s   98.2%   156ms   15.0MiB   26.6%  1.66MiB
jvp          90.0k    625ms   43.8%  6.94μs     0.00B    0.0%    0.00B
solve            9   24.8ms    1.7%  2.76ms   40.5MiB   72.1%  4.50MiB
forward         10   73.0μs    0.0%  7.30μs     0.00B    0.0%    0.00B
───────────────────────────────────────────────────────────────────────
```
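Below is a minimal sketch of the matrix-free idea, assuming an in-place residual for a backward-Euler 1D diffusion step, with the JVP obtained from Enzyme forward mode and passed to Krylov.jl's GMRES. It is illustrative only, not the hackathon code; names such as `residual!` and `JVPOperator` are ours, and Enzyme's `autodiff` calling convention may differ slightly between versions.

```julia
# Illustrative matrix-free implicit step (not the hackathon code): the
# Jacobian-vector product comes from Enzyme forward mode and is fed to GMRES,
# so the sparse Jacobian is never assembled.
using Enzyme, Krylov, LinearAlgebra

# Residual of a backward-Euler step: R = (U - Uold)/dt - D * d²U/dx²
function residual!(R, U, Uold, D, dt, dx)
    @inbounds for i in 2:length(U)-1
        R[i] = (U[i] - Uold[i]) / dt - D * (U[i-1] - 2U[i] + U[i+1]) / dx^2
    end
    R[1]   = U[1]   - Uold[1]    # Dirichlet boundaries (kept fixed)
    R[end] = U[end] - Uold[end]
    return nothing
end

# Operator applying v ↦ J(U) * v via one forward-mode sweep, never forming J
struct JVPOperator{T}
    U::Vector{T}
    Uold::Vector{T}
    R::Vector{T}
    D::T
    dt::T
    dx::T
end
Base.size(A::JVPOperator)        = (length(A.U), length(A.U))
Base.size(A::JVPOperator, ::Int) = length(A.U)
Base.eltype(::JVPOperator{T}) where {T} = T

function LinearAlgebra.mul!(y::Vector, A::JVPOperator, v::Vector)
    fill!(y, 0)
    # The shadow of the overwritten residual receives the directional derivative J*v
    Enzyme.autodiff(Enzyme.Forward, residual!,
                    Duplicated(A.R, y), Duplicated(A.U, copy(v)),
                    Const(A.Uold), Const(A.D), Const(A.dt), Const(A.dx))
    return y
end

# One implicit step: solve J δU = -R with GMRES and update U
n, dx, dt, D = 101, 0.01, 0.1, 1.0
Uold = [exp(-((i - 1) * dx - 0.5)^2 / 0.01) for i in 1:n]
U, R = copy(Uold), zeros(n)
residual!(R, U, Uold, D, dt, dx)
A = JVPOperator(U, Uold, zeros(n), D, dt, dx)
δU, stats = Krylov.gmres(A, -R)   # matrix-free Krylov solve
U .+= δU                          # the problem is linear, so one step suffices
```

For a nonlinear residual, the same operator idea extends to Newton iterations, with the linearization point `U` updated between successive GMRES solves.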
