Update dependencies (Jarmo-fix) #50
Conversation
See if moving away from Python 3.9 solves the GitHub Actions `black` linter error.
Change in behaviour:
- For the "very-high" meta classification, there is no longer a requirement to report at least 9 variables. Instead, I added sulfur. The reasons are (a) the code is simpler, and (b) there was little basis for the original requirement and I don't think anyone used it.
- The auxiliary function `count_variables_very_high()` is deleted.
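Purely to illustrate the behaviour change described above: the check becomes membership in a fixed set of required variables (now including sulfur) rather than counting at least 9 reported variables. The variable names below are illustrative placeholders, not the actual list used in `add_completeness_category()`.

```python
# Illustrative sketch only: the real required-variable list lives in the
# package; these IAMC-style names are examples, not the actual set.
REQUIRED_VERY_HIGH = {
    "Emissions|CO2|Energy and Industrial Processes",
    "Emissions|CO2|AFOLU",
    "Emissions|CH4",
    "Emissions|N2O",
    "Emissions|Sulfur",  # newly required, replacing the ">= 9 variables" count
}


def is_very_high(reported_variables):
    """Return True if all required variables are reported."""
    return REQUIRED_VERY_HIGH.issubset(set(reported_variables))
```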
This function now actually throws an error, and does not seem to be necessary anymore? @znicholls probably wrote this bit, so might want to double-check.
Whoops, I still have one remaining local test failing. I'll dive back in, so hold your horses on the review for a second.
@jkikstra and @znicholls are there any updates on this? Would be great to have this resolved before the end of the year. If there's anything I can do to help please just let me know :)
…ith --update-expected-files" This reverts commit 16a1491.
Indeed, the numbers shouldn't change. I now tried with a fresh conda environment (using python 3.10), but I'm still getting these KeyError issues. I'll dig a bit deeper and also try with 3.10, since that's what the GHA workflow was running on.
Hmm, okay, so I keep getting stuck on this "KeyError" issue on my laptop for
Besides that, there are still a few other test failures too. So Philip, I would gladly take you up on your offer - let's connect tomorrow if you have time (I'll write to you on Teams).
Okay, @phackstock and I sat down. There are a few things.
Local installation issues on my Windows laptop
Additionally, I had to do `` instead of
Meta issue: exclude column
A quick deletion of the column in one file indeed seems to have resolved this issue (see the sketch below). Way to resolve:
Data issue: different climate outcomes
Somehow, it looks like the climate output values are different from what is expected. We see large differences in e.g. 2100 temperature in CICERO (about 0.8 K), and noticeable differences in FaIR (about 0.05 K). Way to resolve: @phackstock will now take over this branch.
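As an illustration of the "meta issue" fix described above, here is a minimal sketch of dropping the legacy `exclude` column from an expected meta file. The file path and CSV format are assumptions for illustration, not the repository's actual layout.

```python
# Sketch: remove the "exclude" column from an expected meta file so the
# test comparison no longer trips over it. The path below is hypothetical.
import pandas as pd

path = "tests/test-data/expected-output/meta.csv"  # hypothetical location
meta = pd.read_csv(path)

# errors="ignore" makes this a no-op if the column is already gone
meta = meta.drop(columns=["exclude"], errors="ignore")
meta.to_csv(path, index=False)
```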
Yikes, nice detective work.
Weird. Do whatever in terms of removing `exclude` and changing to `int` or not (it's easy enough to `git diff` to check that only that column changes)
Yes. I guess if there is a difference in the emissions, we have to fix that. If there isn't a difference in the emissions, something has changed in the post-processing. That is possible with pandas 2.0. @phackstock if you find that the emissions are in fact identical ping me and I can go diving into emulator land. If only the climate tests are failing (and all the other processing tests e.g. harmonisation and infilling related tests are passing) then this suggests to me already a problem in the emulators/post-processing, but I will wait until the emissions checks are done before diving in just in case my instinct is wrong.
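To make the "check the emissions first" step above concrete, here is a minimal sketch of the kind of comparison being suggested, assuming the harmonised/infilled emissions are written to CSV files (the paths are hypothetical).

```python
# Sketch: compare newly produced emissions against the expected test data
# before suspecting the emulators/post-processing. Paths are hypothetical.
import pandas as pd

expected = pd.read_csv("tests/test-data/expected-output/emissions.csv")
actual = pd.read_csv("output/emissions.csv")

# Sort columns so the comparison doesn't fail on ordering alone, and allow
# a small relative tolerance for floating-point noise.
pd.testing.assert_frame_equal(
    actual.sort_index(axis=1),
    expected.sort_index(axis=1),
    check_exact=False,
    rtol=1e-6,
)
```

If this passes but the climate tests still fail, the difference is indeed downstream of the emissions (emulators or post-processing).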
Fixed the first issue on your list @jkikstra. The issue ended up being both the extra
Fixed the
@znicholls you can take over here; fixing 1. should be a matter of seconds for you, fingers crossed for 2.
Alrighty, we're done. The CICERO test is failing, but that is because we fixed a bug in CICERO's heat uptake handling in Openscm-runner way back (https://github.com/openscm/openscm-runner/blob/main/docs/source/changelog.md#openscm-runner-v093-2022-01-19). So, if you have a Linux machine and want to just re-run the failing test to update it, then go for it. Otherwise, I can do that Tuesday. As usual, a weird pandas bug was the culprit. This package is also just such a mess, though; maintenance is going to be an ongoing headache.
@znicholls thanks a lot for your work 👍 amazing that we'll now be compatible with the latest version of the Scenario Explorer processing.
Tests are green (https://github.com/iiasa/climate-assessment/actions/runs/7797092361/job/21263037772), let's go
This is a breaking change and means we'll have to remove the nightly CICERO tests (they can no longer be reproduced, which is kind of fine as the results were wrong anyway).
Continuation of #47, which itself was a continuation of #46.
It's already (re)based on the current #49 (as of 15:50, 30.11.2023).
The idea is that merging only this branch should do the trick. The other PRs can then be closed.
The PR does the following:
- Update dependencies, most notably `pyam`, where `pyam.IamDataFrame.require_variable()` has been replaced by `pyam.IamDataFrame.require_data()`, noting that it is not a one-to-one replacement (see the sketch below this list). Additionally, there seem to have been some changes in functionality around empty filtered outputs.
- Following this, rewrite `reclassify_waste_and_other_co2_ar6()`, and delete `require_var_allyears`, which was previously only used in `reclassify_waste_and_other_co2_ar6()`.
- Also delete `drop_broken_stuff`, which does not seem necessary anymore (would be great if @znicholls can check that this is OK).
- Building on "Fix if all emissions data starts in 2015, in :func:`add_year_historical_percentage_offset`" (#49), there are no issues anymore.
- Add a test, `test_add_completeness_category`, which ensures that `add_completeness_category()` runs.
- Rewrite `add_completeness_category()`. NOTE: this causes a minor change in functionality, in that we don't count the variables anymore (this functionality wasn't really used anyway, and was more of a legacy of explorations during an earlier project), so I deleted the auxiliary function `count_variables_very_high()`, which is not necessary anymore. I am not aware of anyone using this "completeness_category" functionality, and we didn't guarantee its continuation.
- Update some test-data expected files, because a new error occurred regarding the "exclude" column in the output meta file (which may have had to do with updated pyam behaviour, too?).
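As referenced in the first bullet above, here is a rough sketch of the `require_variable()` → `require_data()` migration. Exact signatures and return values should be checked against the pyam documentation for the installed version; the input file and variable name used here are placeholders, not data from this repository.

```python
# Sketch of the pyam API change described above. "scenarios.csv" and the
# variable/year are placeholders for illustration only.
import pyam

df = pyam.IamDataFrame("scenarios.csv")

# Old (removed) API: listed model/scenario combinations missing a variable
# missing = df.require_variable(variable="Emissions|CO2")

# New API: returns the index entries that lack the required data
# (or None if everything is present) -- a different return value,
# so call sites cannot simply be renamed.
missing = df.require_data(variable="Emissions|CO2", year=2015)
if missing is not None:
    print("Missing required data for:")
    print(missing)
```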
- Tests added
- Documentation added
- Example added (in the documentation, to an existing notebook, or in a new notebook)
- Description in `CHANGELOG.rst` added (single line such as: ``(`#XX <https://github.com/iiasa/climate-assessment/pull/XX>`_) Added feature which does something``)