Replies: 2 comments
-
When you work on different variables and different dimensions, Dask is able to compute and write these files simultaneously, which considerably accelerates any kind of computation. For example (variable names, shapes and file names here are invented), you can build the writes lazily from a Dask-chunked Dataset and let Dask execute them in one parallel pass:
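```python
import numpy as np
import xarray as xr
import dask

# Hypothetical Dataset with two variables on different dimensions, chunked with Dask.
ds = xr.Dataset(
    {
        "temperature": (("time", "depth"), np.random.rand(1000, 50)),
        "salinity": (("time", "lat", "lon"), np.random.rand(1000, 20, 30)),
    }
).chunk({"time": 100})

# Build the writes lazily (compute=False returns delayed objects),
# then let Dask compute and write both files in parallel.
writes = [
    ds[["temperature"]].to_netcdf("temperature.nc", compute=False),
    ds[["salinity"]].to_netcdf("salinity.nc", compute=False),
]
dask.compute(*writes)
```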
-
Sorry, it seems like that would only work as a composite dataset, with each source kept as its own xarray Dataset. Roughly something like this (names and shapes are invented), stacking the per-source Datasets along a new source dimension so each source stays addressable:
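```python
import numpy as np
import xarray as xr

# Hypothetical per-source results, each kept as its own xarray Dataset.
ds_a = xr.Dataset({"lon": (("trajectory", "time"), np.random.rand(50, 10))})
ds_b = xr.Dataset({"lon": (("trajectory", "time"), np.random.rand(50, 10))})

# Stack them along a new "source" dimension; each source stays selectable,
# e.g. composite.sel(source="a").
composite = xr.concat([ds_a, ds_b], dim="source")
composite = composite.assign_coords(source=["a", "b"])
```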
-
Hi,
I have now tested quite a few things to integrate xarray into the OpenDrift workflow, so that I can run simulations in a cloud environment, as I am currently limited by my small cluster.
I can read and write xarray Datasets with OpenDrift, but to fully optimise it I believe there are a few things to consider.
In essence, the data structure is different from classic arrays and allows Dask to distribute the workload more effectively.
For this to work, the origin of each particle should become a string name stored as a variable inside the Dataset (see the sketch below). This could already have been the case in the NetCDF output. It seems to me like an important first step.
Is there any problem with that in general terms?
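A minimal sketch of what I mean (the variable names and origin labels are just placeholders): the origin is stored as a per-trajectory string variable, so selecting particles by origin becomes a plain xarray operation that Dask can evaluate lazily on chunked data.
```python
import numpy as np
import xarray as xr

n_traj, n_time = 1000, 240

# Hypothetical OpenDrift-like output: the origin of each particle is stored as a
# per-trajectory string variable instead of being split across files or attributes.
ds = xr.Dataset(
    {
        "lon": (("trajectory", "time"), np.random.rand(n_traj, n_time)),
        "lat": (("trajectory", "time"), np.random.rand(n_traj, n_time)),
        "origin": (
            "trajectory",
            np.where(np.arange(n_traj) < 600, "river_mouth", "offshore_rig"),
        ),
    }
)

# Filtering by origin is then an ordinary xarray operation.
river = ds.where(ds["origin"] == "river_mouth", drop=True)
```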