# Coarser data and more examples

We downsample the ECMWF data from 0.05 to 0.2 degrees.
In previous experiments we used a 0.1 degree resolution, as this is the same as the live ECMWF data.

By reducing the resolution we can increase the number of samples we have to train on.
This is approximately 5 times more samples than the previous experiments.
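
As a rough illustration of the coarsening step, here is a minimal sketch using xarray; the zarr paths, dimension names, and the choice of a block mean are assumptions, not taken from the experiment code.

```python
# Minimal sketch of coarsening NWP data with xarray, assuming a regular
# 0.05 degree latitude/longitude grid. Paths and dims are hypothetical.
import xarray as xr

ds = xr.open_zarr("ecmwf_india.zarr")  # hypothetical dataset path

# 0.05 -> 0.2 degrees is a factor of 4 in each horizontal dimension,
# so average over 4x4 blocks of grid cells.
coarse = ds.coarsen(latitude=4, longitude=4, boundary="trim").mean()
coarse.to_zarr("ecmwf_india_coarse.zarr")  # hypothetical output path
```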


### b8_s1
Batch size 8, with 0.2 degree NWP data.
https://wandb.ai/openclimatefix/india/runs/w85hftb6


### b8_s2
Batch size 8, different seed, with 0.2 degree NWP data.
https://wandb.ai/openclimatefix/india/runs/k4x1tunj

### b32_s3
https://wandb.ai/openclimatefix/india/runs/a5nkkzj6


### old
Old experiment with 0.1 degree NWP data.
https://wandb.ai/openclimatefix/india/runs/m46wdrr7
Note the validation batches are different from the experiments above.

Interestingly, the GPU memory usage did not increase much between experiments 2 and 3.
We need to check that batches of 32 were actually being passed through.
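
A quick sanity check along these lines could confirm the effective batch size; this is a generic PyTorch sketch with a stand-in dataset, not the experiment's data pipeline.

```python
# Generic batch-size sanity check with a stand-in dataset; the real
# experiments would use their own Dataset/DataLoader.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(256, 10))  # dummy data
loader = DataLoader(dataset, batch_size=32)

(batch,) = next(iter(loader))
assert batch.shape[0] == 32, batch.shape  # 32 examples per step, as expected
print(batch.shape)  # torch.Size([32, 10])
```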

## Results

Coarsening the data does seem to improve the experiment results in the first 10 hours of the forecast.
The DA forecast looks very similar. Note that the 0-hour forecast has a large amount of variation.
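
For reference, the per-timestep MAE % figures in the table below could be computed along these lines; the array names and the assumption that predictions and targets are already normalised by capacity are mine, not the repo's.

```python
# Sketch of per-timestep MAE %, assuming predictions and targets of shape
# (n_samples, n_forecast_steps), already normalised to [0, 1] by capacity.
import numpy as np

rng = np.random.default_rng(0)
preds = rng.random((100, 48))    # stand-in forecasts
targets = rng.random((100, 48))  # stand-in observations

mae_pct = 100 * np.abs(preds - targets).mean(axis=0)  # one value per step
print(mae_pct[:4])
```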



There are still spikes in the results of the individual runs.

| Timestep | b8_s1 MAE % | b8_s2 MAE % | b32_s3 MAE % | epochs MAE % | small MAE % | mae/val MAE % | old MAE % |
|---|---|---|---|---|---|---|---|
![](mae_step_smooth.png "mae_steps")

I think it's worth noting the model training MAE is around `3`% and the validation MAE is about `7`%, so there is good reason to believe that the model is overfit to the training set.
It would be good to plot some of the training examples, to see if they are less spiky.
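
A minimal sketch for that follow-up is below, assuming each training example carries a normalised generation time series; the shapes and names are hypothetical stand-ins.

```python
# Plot a handful of training examples to eyeball spikiness; the examples
# here are random stand-ins for real normalised generation series.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
examples = rng.random((5, 48))  # 5 hypothetical examples, 48 timesteps

fig, ax = plt.subplots()
for series in examples:
    ax.plot(series, alpha=0.7)
ax.set_xlabel("timestep")
ax.set_ylabel("normalised generation")
fig.savefig("train_examples.png")
```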
