Converting notebooks from COSIMA Cookbook to ACCESS-NRI intake catalog #313

Open · 28 of 42 tasks
navidcy opened this issue Jan 8, 2024 · 75 comments
Labels: 🕹️ hackathon 4.0 · 🛸 updating (An existing notebook needs to be updated)

Comments

@navidcy (Collaborator) commented Jan 8, 2024

Why do we need two ways for people to load output? At the moment, after #298, we have tutorials for both the cookbook and the ACCESS-NRI Intake catalog:

https://cosima-recipes.readthedocs.io/en/latest/tutorials.html

Let's keep just the better of the two. If the ACCESS-NRI Intake catalog is the future, let's make it the default in all examples and drop the cookbook.

cc @angus-g, @aidanheerdegen, @AndyHoggANU, @aekiss, @adele-morrison, @anton-seaice, @PaulSpence, @rmholmes, @edoddridge, @micaeljtoliveira

Recipes

Tutorials

ACCESS-OM2-GMD-Paper-Figs

@navidcy added the "❓ question (Further information is requested)" label Jan 8, 2024
@navidcy changed the title from "Cookbook _or_ ACCESS-NRI Intake Catalog" to "Cookbook OR ACCESS-NRI Intake Catalog" Jan 8, 2024
@navidcy (Collaborator, Author) commented Jan 8, 2024

#310 seems relevant.

@aidanheerdegen (Contributor) commented:

Largely this is up to the COSIMA community, but it is likely a choice you will be forced to make for a few reasons:

  1. The Cookbook Database does not scale well, particularly in the time taken to index data
  2. The infrastructure which runs the indexing will likely cease to exist by the end of the first quarter of 2024 (the accessdev Virtual Machine is scheduled to be shuttered, and this is what provides the outward facing web address for the jenkins service)
  3. There are no resources to do more than just maintain the Cookbook DB (@micaeljtoliveira does what he can, but he has many other high priority tasks)
  4. ACCESS-NRI has chosen to support the Intake Catalogue (in part for some of the reasons above)

So I'd suggest taking this to a COSIMA meeting and making the community aware that a decision needs to be made, and probably the sooner the better.

One downside of dropping the Cookbook is that there is currently no GUI for searching the intake catalogue. However, there are some powerful methods for searching (filtering) it:

https://access-nri-intake-catalog.readthedocs.io/en/latest/usage/quickstart.html#data-discovery

Also there are plans for some tools to assist with data discovery from ACCESS-NRI, and we're working on prototypes we hope to share soon.
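For reference, the quickstart linked above describes a search-and-load pattern along these lines. This is a sketch only: the experiment and variable names are placeholders, and it assumes an environment where the ACCESS-NRI catalogue is installed (e.g. the analysis environments on Gadi).

```python
# Sketch of catalogue data discovery, following the quickstart linked above.
# The import is deferred so this only needs intake + the ACCESS-NRI catalogue
# at call time; experiment/variable names are placeholders.
def find_and_load(experiment, variable, frequency="1mon"):
    """Search an experiment's datastore for a variable and open it with dask."""
    import intake  # requires the access_nri_intake catalogue to be installed

    catalog = intake.cat.access_nri          # the ACCESS-NRI catalogue entry point
    esm_datastore = catalog[experiment]      # per-experiment Intake-ESM datastore
    return esm_datastore.search(variable=variable, frequency=frequency).to_dask()
```

The `search` call filters the datastore's metadata table, so it is cheap; the cost is in `to_dask()`, which opens and concatenates the matching files.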

@navidcy (Collaborator, Author) commented Jan 8, 2024

Thanks @aidanheerdegen!
I'm happy to hear voices from others here too, and then take it to a COSIMA meeting -- sounds good!

@adele-morrison (Collaborator) commented:

The advice we had from ACCESS-NRI around the workshop last year was that, at that time, they didn't advise switching everything to the intake catalogue, because it was so new and untested. But @aidanheerdegen @dougiesquire, if you think the intake catalogue is ready for us to switch everything over completely, I think we should do that. We discussed this previously at a COSIMA meeting and agreed we should switch when the time was right (which is maybe now?).

@anton-seaice (Collaborator) commented:

Not much to add from me:

  • There hasn't been much uptake of the intake catalogue, so there may still be bugs, missing datasets, etc. that will need attention.
  • We should keep the cookbook going in its current state (i.e. without adding new data or updates) for existing users, and keep the cookbook tutorials available for some time for people who are halfway through projects. This could be as simple as a tag on the repository, a fork/branch of the repo, or a separate folder in the main branch (depending on whether we want the tutorials on the Read the Docs version or are happy with them only on GitHub).
  • It would be nice to have the test pipeline (which also relies on accessdev) replaced prior to the switch, which could make reviewing notebooks more robust too.

@dougiesquire (Collaborator) commented:

I think @max-anu has been modifying the existing recipes to use the catalog. It would be good to get an update from him. There are definitely datasets that are in the cookbook but not yet in the catalog. I think @max-anu will probably have the best idea of what these are.

@aidanheerdegen (Contributor) commented:

@anton-seaice makes a good point about retaining existing capability for those who rely on it, identifying deficiencies and setting up a new testing infrastructure.

@dougiesquire is also correct that datasets that are missing from intake should be identified.

@adele-morrison I'm not in a position to say if the intake catalogue is ready or not, but we're rapidly approaching a point where there isn't really an option, so we need to make sure it is.

So, synthesising from above, some steps to consider:

  • Decide on deprecation plan for cookbook DB (timing to stop ingesting new data and freeze DB, where/how to archive CC DB recipes for medium/long term support)
  • Set up testing infrastructure for intake recipes
  • Document datasets missing from intake
  • Decide which datasets need to be added, particularly if there are any that must be added before swapping to intake
  • Set proposed time-frame for replacing CC DB recipes with intake versions
  • Communicate plan to community, and get users testing out intake catalogue to find weak points
  • Deprecate CC DB and replace with intake recipes in main and docs

Might be good to break this out into a GitHub project, make some issues etc. I would do this, but I'm not in a position to assist much I'm afraid.

@Thomas-Moore-Creative (Collaborator) commented:

Could the culture shift going forward so that, when someone or a group runs an experiment meant to be open to the community, creating an Intake-ESM datastore is part of the process of making it available and documented? My view is that the "authors" of specific experiments are best placed to create and update datastores for the catalog.

@dougiesquire appears to have done a great job documenting how that can be done for those who've never built a catalog before: "I want to catalog my own data to make it easy to find and load".

This new catalog is a big step forward, IMO, but it will require a community effort to maintain.
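To sketch what that per-experiment responsibility might look like in practice: the builder class and arguments below are assumptions based on the access-nri-intake documentation linked above, not a verified recipe, so check the "I want to catalog my own data" guide before relying on any of it.

```python
# Hypothetical sketch: build and save an Intake-ESM datastore for a new
# ACCESS-OM2 experiment. AccessOm2Builder and its arguments are assumptions
# from the access-nri-intake docs; paths and names are placeholders.
def build_datastore(output_dir, name, description):
    """Index an experiment's output directory into a saveable datastore."""
    # deferred import: needs the access_nri_intake package installed
    from access_nri_intake.source.builders import AccessOm2Builder

    builder = AccessOm2Builder(path=[output_dir]).build()  # scan and parse the files
    builder.save(name=name, description=description)       # write the datastore to disk
    return builder
```

If experiment authors ran something like this as a final step of each run, the catalogue would stay current without a central team having to chase every new dataset.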

@rbeucher (Contributor) commented:

Hi All,

So, from the MED team perspective: I asked @max-anu to look at converting the recipes to use the intake catalog.
The main reason for going ahead with this was precisely to identify the gaps mentioned above (missing functionality, datasets, etc.) and to push for broader use of the intake catalog. @max-anu should be able to give us an update.

#310 is supposed to be a draft and everyone is welcome to participate and/or comment on it.

Happy to discuss the plan going ahead.

@rbeucher (Contributor) commented:

I have started a repo on the ACCESS-NRI organisation to automate testing of the recipes:
https://github.com/ACCESS-NRI/COSIMA-recipes-workflow

This is based on what I have set up for the ESMValTool recipes:

https://github.com/ACCESS-NRI/ESMValTool-workflow?tab=readme-ov-file#recipes-current-status

Will keep you posted.

@access-hive-bot commented:

This issue has been mentioned on ACCESS Hive Community Forum. There might be relevant details there:

https://forum.access-hive.org.au/t/workshop-on-community-driven-practices-for-analysing-coding-and-sharing-cosima-hackathon-v4-0/2047/6

@dougiesquire (Collaborator) commented:

> Will keep you posted.

@rbeucher do you have any updates? It would be great to hear where @max-anu has got to, so we can try to progress this in the upcoming hackathon.

@rbeucher (Contributor) commented May 1, 2024

@max-anu may have some updates.

@AndyHoggANU (Contributor) commented:

I've had a quick look at this, and it seems that many of the changes @max-anu made are in the max-updating-notebooks branch. So, to a large degree, this task depends on taking up what has already been done. It would be good to do that in a way which retains Max's contributions. Does anybody know the best way to do that?

My only suggestion is for others to check the existing branch and then to merge?

@rbeucher (Contributor) commented Jun 5, 2024

It's possible to split @max-anu's work into multiple PRs if that helps for the workshop. We can do one PR per notebook.

@AndyHoggANU (Contributor) commented:

Yeah, I think that would be perfect!

@navidcy (Collaborator, Author) commented Jun 6, 2024

Agree!!

@adele-morrison (Collaborator) commented:

Maybe this could be good practice for the group doing PR review training to hone their skills on during the hackathon?

@edoddridge (Collaborator) commented:

> It's possible to split @max-anu's work into multiple PRs if that helps for the workshop. We can do one PR per notebook.

@rbeucher - you are much braver at messing about with git than I am. Are you envisioning cloning the branch multiple times and then resetting all but one of the files on each branch? 😬

@navidcy (Collaborator, Author) commented Jun 6, 2024

> Maybe this could be good practice for the group doing PR review training to hone their skills on during the hackathon?

Sure. But a bunch of smaller PRs would be even better, and more manageable in general. Even a "PR review master" might have trouble dealing with a PR for which they need to run all the notebooks.

@adele-morrison (Collaborator) commented:

Oh yes, that's what I meant. A bunch of small PRs, and then each person in the training could work on reviewing one each.

@rbeucher (Contributor) commented Jun 6, 2024

I'll have a look today or tomorrow.

@rbeucher (Contributor) commented Jun 6, 2024

That's one way to do it, @edoddridge. I want to keep @max-anu's contributions. I'll have a look and report back to you.

@edoddridge (Collaborator) commented:

I think that level of usage is very much in the noise. To put some numbers on this, the estimated usage from ARE for an 8-hour session with XX-Large is:

SU estimate: 28 CPU cores + 252 GB mem on the normalbw queue (1.25 SUs/core/h) for 8 h = 28 × 1.25 × 8 = 280 SUs

It's not a big deal.

@navidcy (Collaborator, Author) commented Jul 23, 2024

I think you/we/everybody should make full use of the ARE resources and save your time and effort for analyses, thinking, fun activities, etc.

@julia-neme (Collaborator) commented:

Haha, great! I might be scarred from my beginner days, when I was told to get better at dask instead of opening bigger gadi_jupyter notebooks 😝 I'll up my CPU game.

@navidcy added the "🛸 updating (An existing notebook needs to be updated)" label and removed the "❓ question (Further information is requested)" label Jul 29, 2024
@navidcy pinned this issue Jul 29, 2024
@navidcy (Collaborator, Author) commented Jul 30, 2024

@anton-seaice, @dougiesquire:

I'm trying to understand how to proceed here.

Personally, I'm hitting big slowdown issues, much worse than the 2x reported above. @julia-neme seems to be as well, and we have an open channel of communication providing mental support to each other.

But if @julia-neme and I are hitting these, then I expect most COSIMA users will hit them as well.

I see a bunch of kwargs here proposed by @anton-seaice. Do we need those? Should we always have them? If so, we need to have them in the PRs for the intake conversion... Otherwise they are hidden in a comment in an issue.

```python
data_ic = catalog[experiment].search(
    variable=variable,
    frequency="1mon"
).to_dask()

data_ic_times = data_ic.sel(time=slice(start_time, end_time))

# ... do the calculation
```

> If that to_dask step is slow, it may be due to there being many files to concatenate. You can make your instance bigger, make sure you start a dask cluster (with threads_per_worker=1) before running to_dask, and add these keywords to the .to_dask() call:

```python
xarray_combine_by_coords_kwargs = dict(
    compat="override",
    data_vars="minimal",
    coords="minimal"
)

catalog[experiment].search(
    variable=variable,
).to_dask(
    xarray_combine_by_coords_kwargs=xarray_combine_by_coords_kwargs,
)
```
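The cluster setup mentioned in that advice can be sketched as follows (a minimal sketch, assuming dask.distributed is available; the worker count is a placeholder to tune to your ARE instance size):

```python
# Minimal sketch: start a local dask cluster before calling .to_dask().
# threads_per_worker=1 follows the suggestion above; n_workers is a placeholder.
def start_cluster(n_workers=4):
    """Return a dask.distributed Client backed by a local cluster."""
    from dask.distributed import Client  # deferred: needs dask.distributed installed

    return Client(n_workers=n_workers, threads_per_worker=1)
```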

I'm a bit in despair. I'm intrigued and interested in understanding the nitty-gritty details of dask and xarray, but in terms of what we suggest all COSIMAers do, I would like to give them something that works. (Also for myself, btw: most of the time I'd just like to copy-paste a line and load the data -- I don't want to be thinking about xarray internals etc.) If we should have these kwargs, then let's have them there all the time?

At the moment I'm trying to push through several of the open PRs for the intake conversion and I'm hitting these issues. Perhaps we should pause on those PRs until I understand what's happening?

@navidcy (Collaborator, Author) commented Jul 30, 2024

Btw, I'm hitting these issues even when I'm trying to load one variable... so no concatenation (or at least no concatenation, as far as I understand!)

@dougiesquire (Collaborator) commented:

@navidcy could you please point me to an example of your issue(s)?

@navidcy (Collaborator, Author) commented Jul 30, 2024

I'll try to do that later... Or @adele-morrison / @julia-neme, feel free to do so?

#344 (comment) might be one?

@adele-morrison (Collaborator) commented:

I did some more timing tests, and I actually find the kwargs don't help at all. There is just a change in timing due to caching on the 2nd run using Intake. I'm getting that Intake is about 2x slower than the cookbook when both are run for the first time.

This one runs the kwargs case first (50 s for Intake compared with 28 s for cookbook):

[screenshot: timing comparison, kwargs case first]

This one I changed the order and ran the no-kwargs case first (52 s for Intake vs 25 s for cookbook):

[screenshot: timing comparison, no-kwargs case first]

Running Intake a second time, with either kwargs or no kwargs, is much faster than the first time.

So I don't think it will help us to use kwargs.

@anton-seaice (Collaborator) commented:

> I see a bunch of kwargs here proposed by @anton-seaice. Do we need those? Should we always have them? If so, we need to have them in the PRs for the intake conversion... Otherwise they are hidden in a comment in an issue.

Apologies for the confusion about this. These kwargs help with sea-ice (cice) data but not with ocean (mom) data.

It's similar to the decode_coords=False setting which we used to apply for cice data with the cookbook.

MOM only adds 1D coordinates (i.e. xt_ocean, yt_ocean and time) to its output:

[screenshot: MOM output with 1D coordinates]

Whilst cice adds 2D coordinates to its output (geographic lons/lats at T and U points):

[screenshot: cice output with 2D coordinates]

When .to_dask() is called, it concatenates all the files for that variable. By default, as part of the concatenation xarray checks that the coordinates in the files are consistent (i.e. the times don't overlap, and the other coordinates (lat/lon) are identical). Doing this check with 2D arrays from cice is much slower than doing it with 1D arrays from MOM. As we know the coordinates are consistent (they come from the same model at all times), we can tell xarray to skip the consistency checks by setting the xarray_combine_by_coords_kwargs argument, and therefore speed up loading cice data.
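The effect of those kwargs can be demonstrated on toy data. The snippet below builds synthetic datasets standing in for cice files (a variable with 2D TLON/TLAT coordinates, as described above; not real COSIMA output) and combines them the way the kwargs tell xarray to:

```python
# Toy illustration of why the kwargs help: with compat="override" and
# coords="minimal", xarray takes the 2D TLON/TLAT coordinates from the first
# dataset instead of comparing them element-wise across every file.
import numpy as np
import xarray as xr


def make_file(t0):
    """Mimic one cice output file: a time-varying field with 2D coordinates."""
    nj, ni = 3, 4
    return xr.Dataset(
        {"aice": (("time", "nj", "ni"), np.random.rand(2, nj, ni))},
        coords={
            "time": [t0, t0 + 1],
            "TLON": (("nj", "ni"), np.tile(np.arange(ni, dtype=float), (nj, 1))),
            "TLAT": (("nj", "ni"), np.repeat(np.arange(nj, dtype=float)[:, None], ni, axis=1)),
        },
    )


combined = xr.combine_by_coords(
    [make_file(0), make_file(2)],
    compat="override",    # don't compare non-concatenated variables across files
    data_vars="minimal",  # only concatenate variables that actually have a time dim
    coords="minimal",     # don't concatenate (or check) the 2D coordinates
)
print(combined.sizes["time"])  # 4 -- the two 2-step files concatenated along time
```

On two tiny files the saving is invisible, but across hundreds of model output files the skipped element-wise coordinate comparisons are exactly where the slow `to_dask()` time goes.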

@dougiesquire (Collaborator) commented Jul 30, 2024

> Personally I'm hitting big slowdown issues

> #344 (comment) might be one?

@navidcy, I've looked into this and replied, but I didn't encounter the big slowdown issues you refer to here. I think this one might mostly have been a case of comparing apples to oranges.

@marc-white (Contributor) commented:

I can see a large number of INTAKE branches that aren't currently referenced in the issue header - does anyone mind if I do a bit of issue admin and collate them all together?

@julia-neme (Collaborator) commented Aug 30, 2024

Would collating the branches collate the pull requests too? Working through these, I've found that most of the notebooks require more work than just an intake conversion, so merging into one pull request would potentially generate discussion on different topics, all mixed up in one PR.

@marc-white (Contributor) commented:

@julia-neme oh no, absolutely not. I just meant doing the administrative task of working out where all the various branches/PRs are up to, and putting that information in the issue description, given a lot of them aren't cross-linked (although I've just realized I can't edit the initial issue text myself).

@julia-neme (Collaborator) commented:

Ohh I guess there are a lot of branches with conversions already complete that could be cleaned up?

@navidcy (Collaborator, Author) commented Aug 30, 2024

@julia-neme could you edit the first comment on this issue and mark with ticks the recipes you know have been converted?

@marc-white (Contributor) commented:

> @julia-neme could you edit the first comment on this issue and mark with ticks the recipes you know have been converted?

@julia-neme could we also link in the existing PRs (both open and closed) against the relevant recipe? The list I came up with so far is:

@julia-neme (Collaborator) commented:

Yep! I'll do it on Monday. Note that not all notebooks have a PR requesting a conversion. I'm not sure how all those PRs/branches happened.

@marc-white (Contributor) commented:

> I'm not sure how all those PRs/branches happened.

Not that I've ever tried it before, so take this suggestion with a grain of salt, but we could even stand up sub-issues for each recipe that hasn't been completed yet: https://dev.to/keracudmore/create-sub-issues-in-github-issues-409m

@julia-neme (Collaborator) commented:

Hey @marc-white, I think we've updated the list at the beginning with @navidcy and should have all the links. Hope it is useful.

@marc-white (Contributor) commented Sep 16, 2024

Could someone with the appropriate superpowers please add me to the project? I was just stymied in my attempt to push my changes back to the #356 branch.

@navidcy (Collaborator, Author) commented Sep 16, 2024

Added you; let me know if you still have issues.

@marc-white (Contributor) commented:

Apart from my Outlook client's stubborn refusal to open the invitation email at all, that seems to have done it! Thanks @navidcy !
