From a5e000f569a44df3cccb6a292e84e2196033b210 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 18 Jun 2024 11:01:09 +0000
Subject: [PATCH] [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
---
 experiments/india/008_coarse4/readme.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/experiments/india/008_coarse4/readme.md b/experiments/india/008_coarse4/readme.md
index 0946a59b..462d3068 100644
--- a/experiments/india/008_coarse4/readme.md
+++ b/experiments/india/008_coarse4/readme.md
@@ -1,6 +1,6 @@
 # Coarser data and more examples
 
-We down samples the ECMWF data from 0.05 to 0.2. 
+We downsample the ECMWF data from 0.05 to 0.2 degrees.
 In previous experiments we used a 0.1 resolution, as this is the same as the live ECMWF data.
 By reducing the resolution we can increase the number of samples we have to train on.
 
@@ -11,12 +11,12 @@
 This is approximately 5 times more samples than the previous experiments.
 
 ### b8_s1
-Batche size 8, with 0.2 degree NWP data. 
+Batch size 8, with 0.2 degree NWP data.
 https://wandb.ai/openclimatefix/india/runs/w85hftb6
 
 ### b8_s2
 
-Batch size 8, different seed, with 0.2 degree NWP data. 
+Batch size 8, different seed, with 0.2 degree NWP data.
 https://wandb.ai/openclimatefix/india/runs/k4x1tunj
 
 ### b32_s3
@@ -24,17 +24,17 @@
 Batch size 32, with 0.2 degree NWP data. Also kept the learning rate a bit higher.
 https://wandb.ai/openclimatefix/india/runs/ktale7pa
 
 ### old
-Old experiment with 0.1 degree NWP data. 
+Old experiment with 0.1 degree NWP data.
 https://wandb.ai/openclimatefix/india/runs/m46wdrr7.
 Note the validation batches are different that the experiments above.
 
-Interesting the GPU memory did not increase much better experiments 2 and 3. 
-Need to check that 32 batches were being passed through. 
+Interestingly, the GPU memory usage did not increase much between experiments 2 and 3.
+We need to check that batches of 32 were being passed through.
 
 ## Results
 
 The coarsening data does seem to improve the experiments results in the first 10 hours of the forecast.
-DA forecast looks very similar. 
+The DA forecast looks very similar.
 
 Still spike results in the individual runs
 
@@ -55,4 +55,4 @@
 
 ![](mae_step.png "mae_steps")
 
-![](mae_step_smooth.png "mae_steps")
\ No newline at end of file
+![](mae_step_smooth.png "mae_steps")
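
The coarsening described in the patched readme (0.05 degrees down to 0.2 degrees) amounts to averaging blocks of 4 x 4 grid cells. Below is a minimal sketch of that block-mean on a toy NumPy array; the 20 x 20 grid and its values are illustrative assumptions, not the real ECMWF data from the repo.

```python
# Sketch of coarsening a gridded field from 0.05 to 0.2 degrees by
# block-averaging 4x4 groups of cells. The 20x20 toy array below is
# an illustrative stand-in for the real ECMWF NWP data.
import numpy as np

factor = 4  # 0.2 / 0.05 = 4 cells per block along each axis

field = np.arange(20 * 20, dtype=float).reshape(20, 20)

ny, nx = field.shape
assert ny % factor == 0 and nx % factor == 0, "grid must divide evenly"

# Reshape to (blocks_y, factor, blocks_x, factor) and average each block
coarse = field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

print(coarse.shape)  # (5, 5)
```

On labelled, zarr-backed datasets the same operation is more likely done with xarray, e.g. `ds.coarsen(latitude=4, longitude=4, boundary="trim").mean()`, which also averages the coordinate values of each block.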