diff --git a/experiments/india/008_coarse4/readme.md b/experiments/india/008_coarse4/readme.md
index b3915361..404d0a91 100644
--- a/experiments/india/008_coarse4/readme.md
+++ b/experiments/india/008_coarse4/readme.md
@@ -1,6 +1,6 @@
 # Coarser data and more examples
 
-We down samples the ECMWF data from 0.05 to 0.2.
+We downsample the ECMWF data from 0.05 to 0.2 degrees.
 In previous experiments we used a 0.1 degree resolution, as this is the same as the live ECMWF data.
 By reducing the resolution we can increase the number of samples we have to train on.
 
@@ -11,12 +11,12 @@ This is approximately 5 times more samples than in the previous experiments.
 
 ### b8_s1
 
-Batche size 8, with 0.2 degree NWP data.
+Batch size 8, with 0.2 degree NWP data.
 https://wandb.ai/openclimatefix/india/runs/w85hftb6
 
 ### b8_s2
 
-Batch size 8, different seed, with 0.2 degree NWP data. 
+Batch size 8, different seed, with 0.2 degree NWP data.
 https://wandb.ai/openclimatefix/india/runs/k4x1tunj
 
 ### b32_s3
@@ -38,18 +38,20 @@ https://wandb.ai/openclimatefix/india/runs/a5nkkzj6
 
 ### old
 
-Old experiment with 0.1 degree NWP data. 
+Old experiment with 0.1 degree NWP data.
 https://wandb.ai/openclimatefix/india/runs/m46wdrr7.
 Note the validation batches are different from those in the experiments above.
 
-Interesting the GPU memory did not increase much better experiments 2 and 3.
-Need to check that 32 batches were being passed through.
+Interestingly, the GPU memory did not increase much between experiments 2 and 3.
+We need to check that 32 batches were being passed through.
 
 ## Results
 
 Coarsening the data does seem to improve the results in the first 10 hours of the forecast.
 The DA forecast looks very similar.
 Note the 0 hour forecast has a large amount of variation.
+
+
 There are still spiky results in the individual runs.
 
 | Timestep | b8_s1 MAE % | b8_s2 MAE % | b32_s3 MAE % | epochs MAE % | small MAE % | mae/val MAE % | old MAE % |
@@ -72,4 +74,5 @@
 ![](mae_step_smooth.png "mae_steps")
 
 I think it's worth noting the model training MAE is around `3`% and the validation MAE is about `7`%, so there is good reason to believe that the model is overfit to the training set.
-It would be good to plot some of the trainin examples, to see if they are less spiky.
\ No newline at end of file
+It would be good to plot some of the training examples, to see if they are less spiky.
+
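The 0.05 to 0.2 degree coarsening described in this diff amounts to block-averaging each 4×4 patch of fine grid cells into one coarse cell. A minimal numpy sketch of that idea (the function name and toy array are illustrative only, not taken from the experiment code):

```python
import numpy as np

def coarsen(field: np.ndarray, factor: int) -> np.ndarray:
    """Block-mean a 2-D grid by `factor` along both axes.

    E.g. factor=4 turns a 0.05 degree grid into a 0.2 degree grid.
    """
    h, w = field.shape
    assert h % factor == 0 and w % factor == 0, "grid must divide evenly"
    # Split each axis into (blocks, factor) and average within each block.
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Toy 4x4 "fine" grid; one 4x coarsening step collapses it to a single cell.
fine = np.arange(16.0).reshape(4, 4)
coarse = coarsen(fine, 4)
print(coarse.shape)  # (1, 1)
```

In practice the real pipeline would operate on xarray/zarr NWP arrays rather than raw numpy, but the block-mean is the same reduction.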