Merge branch 'main' into maintenance/python_3_13_release
ecomodeller authored Oct 24, 2024
2 parents cfcf360 + a333f8a commit 4c973fb
Showing 50 changed files with 1,153 additions and 594 deletions.
5 changes: 3 additions & 2 deletions .github/workflows/full_test.yml
```diff
@@ -11,11 +11,12 @@ on:
 
 jobs:
   build:
-    runs-on: ubuntu-latest
+    runs-on: ${{ matrix.os }}
 
     strategy:
       matrix:
-        python-version: ["3.9", "3.13.0"]
+        os: [ubuntu-latest, windows-latest]
+        python-version: ["3.9", "3.13"]
 
     steps:
       - uses: actions/checkout@v4
```
36 changes: 35 additions & 1 deletion .github/workflows/python-publish.yml
```diff
@@ -9,8 +9,42 @@ on:
   workflow_dispatch:
 
 jobs:
-  deploy:
+  test:
+    runs-on: ${{ matrix.os }}
+    strategy:
+      matrix:
+        os: [ubuntu-latest, windows-latest]
+        python-version: [3.9, "3.12"]
+
+    steps:
+      - uses: actions/checkout@v4
+      - uses: chartboost/ruff-action@v1 # Fail fast if there are any linting errors
+        with:
+          version: 0.6.2 # consistent with pyproject.toml ?
+          src: mikeio # ignore notebooks
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v5
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install pytest pytest-cov
+      - name: Install mikeio
+        run: |
+          pip install .[test]
+      - name: Test with pytest
+        run: |
+          pytest --cov=mikeio tests --ignore tests/performance/ --ignore tests/notebooks/ --disable-warnings
+      - name: Test docstrings with doctest
+        run: make doctest
+      - name: Static type check
+        run: make typecheck
+
+  deploy:
+    needs: test
     runs-on: ubuntu-latest
     permissions:
       # IMPORTANT: this permission is mandatory for trusted publishing
```
5 changes: 4 additions & 1 deletion .gitignore
```diff
@@ -35,4 +35,7 @@ docs/api/
 
 .testmondata
 objects.json
-.jupyter_cache/
+.jupyter_cache/
+
+# direnv
+.envrc
```
13 changes: 12 additions & 1 deletion docs/_quarto.yml
```diff
@@ -49,12 +49,20 @@ website:
       - section: Examples
         href: examples/index.qmd
         contents:
+          - section: Dfs0
+            href: examples/dfs0/index.qmd
+            contents:
+              - examples/dfs0/cmems_insitu.qmd
           - section: Dfs2
             href: examples/dfs2/index.qmd
             contents:
               - examples/dfs2/bathy.qmd
               - examples/dfs2/gfs.qmd
-          - examples/Dfsu-2D-interpolation.qmd
+          - section: Dfsu
+            href: examples/dfsu/index.qmd
+            contents:
+              - examples/dfsu/spatial_interpolation.qmd
+              - examples/dfsu/merge_subdomains.qmd
           - examples/Time-interpolation.qmd
           - examples/Generic.qmd
           - text: Design philosophy
@@ -164,3 +172,6 @@ format:
   html:
     theme: cosmo
     toc: true
+  ipynb:
+    theme: cosmo
+    toc: true
```
95 changes: 95 additions & 0 deletions docs/examples/dfs0/cmems_insitu.qmd
@@ -0,0 +1,95 @@
---
title: Dfs0 - CMEMS *in-situ* data
jupyter: python3
---

Copernicus Marine provides access to a wide range of model and [*in-situ* data](https://marine.copernicus.eu/about/producers/insitu-tac). In this example we will look at how to access the *in-situ* data and convert it to a MIKE IO dataset.

```{python}
import pandas as pd
import xarray as xr
import mikeio
```

```{python}
fino = xr.open_dataset("../../data/NO_TS_MO_FINO1_202209.nc")
fino
```

CMEMS *in-situ* data is provided in a [standardised format](https://archimer.ifremer.fr/doc/00488/59938/).
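
The standardisation extends to quality control: each parameter is typically paired with a `<PARAM>_QC` flag variable. As a minimal sketch (assuming this file follows that convention, with flag value 1 meaning good data), the temperature could be masked like this:

```{python}
# keep only temperature values flagged as good; assumes the conventional
# TEMP_QC companion variable is present in this file (flag 1 = good data)
fino["TEMP"].where(fino["TEMP_QC"] == 1)
```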

First, find out which variables are available, so we can decide which ones to extract:

```{python}
data = [
    {
        "name": fino[var].name,
        "standard_name": fino[var].standard_name,
        "units": fino[var].units,
    }
    for var in fino.data_vars
    if hasattr(fino[var], "units")
]
pd.DataFrame(data)
```

The data have a DEPTH dimension, even though each variable is only measured at a single level, and the measurement depth does not vary in time, although the format allows for it.

E.g. temperature (TEMP) is available at level 1 (0.5 m).
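
To work with just that level, select it by index (a minimal sketch, assuming the level index stated above):

```{python}
# select temperature at its single measured level (index 1, i.e. 0.5 m)
fino["TEMP"].isel(DEPTH=1)
```

The measurement depths themselves are recorded in the DEPH variable: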

```{python}
fino.DEPH.plot.line(x="TIME")
```

```{python}
fino['TEMP'].plot.line("-^",x='TIME')
```

```{python}
fino['VHM0'].plot.line("-^",x='TIME')
```

Wave data are only available at the surface.

```{python}
fino[['VHM0','VTZA','VPED']].isel(DEPTH=0)
```

```{python}
df = fino[['VHM0','VTZA','VPED']].isel(DEPTH=0).to_dataframe()
```

The three wave variables are stored on the same, concurrent timesteps.

```{python}
df[['VHM0','VTZA','VPED']].head()
```

```{python}
df[['VHM0','VTZA']].plot(style='+')
```

Convert the wave height data to a mikeio dataset.

```{python}
ds = mikeio.from_pandas(
    df[["VHM0"]].dropna(), items=mikeio.ItemInfo(mikeio.EUMType.Significant_wave_height)
)
ds
```

Store the results in Dfs0 format.

```{python}
ds.to_dfs("FINO1_VHM0.dfs0")
```

Read the file again to check the result:

```{python}
ds = mikeio.read("FINO1_VHM0.dfs0")
ds
```


7 changes: 7 additions & 0 deletions docs/examples/dfs0/index.qmd
@@ -0,0 +1,7 @@
---
title: Dfs0 examples
---

A collection of specific examples of working with dfs0 files. For a general introduction to dfs0 see the [user guide](../../user-guide/dfs0.qmd) and the [API reference](../../api/#dfs).

* [CMEMS *In-situ* data](cmems_insitu.qmd)
2 changes: 2 additions & 0 deletions docs/examples/dfs2/index.qmd
```diff
@@ -2,6 +2,8 @@
 title: Dfs2 examples
 ---
 
+A collection of specific examples of working with dfs2 files. For a general introduction to dfs2 see the [user guide](../../user-guide/dfs2.qmd) and the [API reference](../../api/#dfs).
+
 * [Bathymetry](bathy.qmd)
 * [Meteo data](gfs.qmd)
```

9 changes: 9 additions & 0 deletions docs/examples/dfsu/index.qmd
@@ -0,0 +1,9 @@
---
title: Dfsu examples
---

A collection of specific examples of working with dfsu files. For a general introduction to dfsu see the [user guide](../../user-guide/dfsu.qmd) and the [API reference](../../api/#dfs).


* [2D spatial interpolation](spatial_interpolation.qmd)
* [Merging subdomain dfsu files](merge_subdomains.qmd)
153 changes: 153 additions & 0 deletions docs/examples/dfsu/merge_subdomains.qmd
@@ -0,0 +1,153 @@
---
title: Merging subdomain dfsu files
jupyter: python3
---

During a simulation, MIKE commonly splits the domain into subdomains and outputs one result file per subdomain with a `_p#` suffix. This script merges dfsu files of this type into a single file.

Note: the implementation below assumes a 2D dfsu file. For a 3D dfsu file, the script needs to be modified accordingly.


## Import libraries

```{python}
import mikeio
import numpy as np
from mikeio.spatial import GeometryFM2D
```

```{python}
# (optional) check first file, items etc.
mikeio.open("../../data/SimA_HD_p0.dfsu")
```

## Choose items to process

```{python}
# choose items to process (when in doubt look at one of the files you want to process with mikeio.open)
items = ["Surface elevation", "Current speed", "Current direction"]
```

## Read files

Option A: automatically find all files with the `_p#` suffix

```{python}
import glob
import os

basename = "../../data/SimA_HD"  # basename of the dfsu files


def find_dfsu_files(basename):
    pattern = f"{basename}_p*.dfsu"
    files = sorted(glob.glob(pattern))
    if not files:
        raise ValueError(f"No files found matching the pattern: {pattern}")
    return files


dfs_files = find_dfsu_files(basename)
print(f"Found {len(dfs_files)} files:")
for file in dfs_files:
    print(f" - {os.path.basename(file)}")

dfs_list = [mikeio.read(file, items=items) for file in dfs_files]
```

Option B: manually select files

```{python}
# List of input dfsu files
dfs_files = [
    "../../data/SimA_HD_p0.dfsu",
    "../../data/SimA_HD_p1.dfsu",
    "../../data/SimA_HD_p2.dfsu",
    "../../data/SimA_HD_p3.dfsu",
]
# read all dfsu files
dfs_list = [mikeio.read(file, items=items) for file in dfs_files]
```

## Extract data of all subdomains

```{python}
# Create a dictionary to store data for each item
data_dict = {item: [] for item in items}

# Get time steps (assuming all files have the same time steps)
time_steps = dfs_list[0][items[0]].time

# loop over items and time steps and concatenate data from all subdomains
for item in items:
    for i in range(len(time_steps)):
        # Extract and combine data for the current time step from all subdomains
        combined_data = np.concatenate([dfs[item].values[i, :] for dfs in dfs_list])
        data_dict[item].append(combined_data)

    # Convert the list to a numpy array
    data_dict[item] = np.array(data_dict[item])

# Prepare merged data
merged_data = np.array([data_dict[item] for item in items])
```
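
The block above assumes that all subdomain files share the same time axis. A minimal sanity check of that assumption could look like this:

```{python}
# verify the assumption that every subdomain file shares the same time axis
for dfs in dfs_list[1:]:
    if not dfs[items[0]].time.equals(time_steps):
        raise ValueError("Subdomain files have different time axes")
```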

## Merge geometry of all subdomains

```{python}
geometries = [dfs.geometry for dfs in dfs_list]

combined_node_coordinates = []
combined_element_table = []
node_offset = 0

# loop through geometries to combine nodes and elements of all subdomains
for geom in geometries:
    current_node_coordinates = geom.node_coordinates
    current_element_table = geom.element_table

    combined_node_coordinates.extend(current_node_coordinates)
    adjusted_element_table = [element + node_offset for element in current_element_table]
    combined_element_table.extend(adjusted_element_table)

    node_offset += len(current_node_coordinates)

combined_node_coordinates = np.array(combined_node_coordinates)
combined_element_table = np.array(combined_element_table, dtype=object)
projection = geometries[0]._projstr

# create combined geometry
combined_geometry = GeometryFM2D(
    node_coordinates=combined_node_coordinates,
    element_table=combined_element_table,
    projection=projection,
)
```

```{python}
combined_geometry.plot()
```
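
Before assembling the dataset, a quick consistency check (a minimal sketch) confirms that the merged data and the merged geometry agree on the number of elements:

```{python}
# the element count of the merged geometry must match the last axis of the data
assert merged_data.shape[-1] == combined_geometry.n_elements
```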

## Merge everything into dataset

```{python}
ds_out = mikeio.Dataset(
    data=merged_data,  # n_items, timesteps, n_elements
    items=items,
    time=time_steps,
    geometry=combined_geometry,
)
```

```{python}
ds_out[items[0]].isel(time=0).plot()  # plot the first time step of the first item
```

## Write output to single file

```{python}
output_file = "area_merged.dfsu"
ds_out.to_dfs(output_file)
```
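
As a final check (a minimal sketch), the merged file can be opened again to confirm that items, timesteps and geometry look right:

```{python}
# open the merged file and inspect its items, time axis and geometry
mikeio.open(output_file)
```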

