Refactor: make conditionals more declarative #31
Open
gtempus wants to merge 9 commits into master from refactor--make-conditionals-more-declarative
Conversation
* docker-compose works fine and I can enter the Docker container locally. Haven't tried running the model with updated GDAL and Python yet.
* Successfully installs the required Python packages. Haven't tried running a test tile yet.
* Runs test tile 00N_000E. Didn't check that outputs were correct but did verify that the output rasters load and have values in ArcMap.
* Froze all dependencies in `requirements.txt` to their current versions. Also added a testing folder and file but haven't tried using them yet.
* Used pylint on mp_create_carbon_pools.py. Addressed pretty much all the messages I wanted to.
* Continued delinting in create_carbon_pools. Changed all obvious print statements in mp_create_carbon_pools.py and create_carbon_pools.py to f-string prints. Added docstrings to each function. Testing carbon pool creation for 00N_000E seems to work fine.
* Experimenting with setting some variables as global so that I don't have to pass them as arguments: sensit_type, no_upload, save_intermediates, etc. For saving and modifying variables between files, this page seems to be helpful: https://thewebdev.info/2021/10/19/how-to-use-global-variables-between-files-in-python/#:~:text=To%20use%20global%20variables%20between%20files%20in%20Python%2C%20we%20can,reference%20the%20global%20variable%20directly.&text=We%20import%20the%20settings%20and,Then%20we%20call%20settings.
* Testing global variables with no_upload. I seem to be able to reset the global variable from run_full_model.py, including in the log. Need to make sure this actually carries through to uploading.
* Added global variables to constants_and_names.py and the top of run_full_model.py.
* Changed run_full_model.py through the carbon pool step.
* Changed carbon pool creation to use global variables. Decided to have functions pass carbon_pool_extent because it's a key parameter of carbon pool creation.
* Changed all model stages to use global variables from the command line. Still testing that I didn't break anything in local runs.
* Changed some universal_util.py functions to use the global variables instead of passing arguments to them.
* Starting to change print statements to f-string print statements throughout the model.
* Changed to f-string print statements for the model extent and forest age category steps.
* Changed to f-string print statements for the entire removals model.
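The shared-globals pattern described above can be sketched as a settings module whose attributes are reset once at startup and then read directly by other modules. The module and function names below are illustrative stand-ins, not the repository's actual ones (in the real repo the globals live in files like constants_and_names.py):

```python
import sys
import types

# Create a stand-in "settings" module so this sketch is self-contained;
# normally this would just be a small .py file that everything imports.
settings = types.ModuleType("settings")
settings.no_upload = False
sys.modules["settings"] = settings

def apply_cli_flags(no_upload_flag: bool) -> None:
    """Mimics run_full_model.py: reset the shared global once, up front."""
    import settings
    settings.no_upload = no_upload_flag  # every importer sees this change

def upload_tile(tile_id: str) -> str:
    """Mimics a helper that reads the global directly instead of
    receiving no_upload as a function argument."""
    import settings
    if settings.no_upload:
        return f"skipped upload of {tile_id}"
    return f"uploaded {tile_id}"

apply_cli_flags(no_upload_flag=True)
print(upload_tile("00N_000E"))  # -> skipped upload of 00N_000E
```

This works because all importers share the same module object, so mutating `settings.no_upload` propagates everywhere; a `from settings import no_upload` binding, by contrast, would not see later changes.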
* Changed to f-string print statements for carbon, emissions, and analyses. Haven't changed universal_util or constants_and_names. Haven't checked that everything is working correctly.
* Changed to f-string print statements for universal_util.py. Didn't change arguments to gdal commands for the most part, though.
* Used pylint on all regular model steps and run_full_model.py. Fixed most messages that weren't about imports, too many variables, too many statements, or too many branches. I'll work on those structural issues later.
* Testing 00N_000E locally with linting of run_full_model.py and all model stages through net flux. Going to try running it on an EC2 instance now.
* 00N_000E works in a full local run and 00N_020E works in a full EC2 run. I've linted enough for now.
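The print-statement conversion mentioned in these commits is mechanical; a minimal before/after (variable names are illustrative):

```python
tile_id = "00N_000E"
window_count = 16

# Before: string concatenation with explicit str() conversions
print("Processing tile " + tile_id + " with " + str(window_count) + " windows")

# After: an f-string interpolates values directly and stays readable
print(f"Processing tile {tile_id} with {window_count} windows")
```

Both lines print the same text; the f-string form also avoids the TypeError that concatenation raises when a non-string value is not wrapped in str().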
* Added a command line argument `--single-processor` or `-sp` to run_full_model.py and each model step through net flux that sets whether tile processing uses the multiprocessing module or not. This involved adding another if...else statement (or sometimes statements) to each step so it takes the correct processing route. Also changed readme.md to add the new argument.
* Ran 00N_000E locally for all model steps with the single-processing and multiprocessing options to make sure both still worked after this reconfiguration. Both worked. Single processing took 1 hour 23 minutes (no uploading of outputs); multiprocessing took 1 hour 11 minutes (no uploading of outputs).
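A minimal sketch of how such a flag might route between the two processing paths. The flag names (`--single-processor`/`-sp`) come from the PR; the function and tile names are illustrative:

```python
import argparse
from multiprocessing import Pool

def process_tile(tile_id: str) -> str:
    # Placeholder for the per-tile work done by each model step.
    return f"processed {tile_id}"

def run(tile_ids, single_processor: bool):
    if single_processor:
        # Sequential route: slower on many tiles, but easier to debug.
        return [process_tile(t) for t in tile_ids]
    # Multiprocessing route: the original behavior.
    with Pool(processes=2) as pool:
        return pool.map(process_tile, tile_ids)

parser = argparse.ArgumentParser()
parser.add_argument("--single-processor", "-sp", action="store_true",
                    help="process tiles one at a time instead of with multiprocessing")
args = parser.parse_args(["--single-processor"])  # simulating the CLI flag here
print(run(["00N_000E", "00N_010E"], args.single_processor))
```

`action="store_true"` makes the flag a simple boolean that defaults to False, so existing multiprocessing invocations keep working unchanged.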
* ✅ test(Carbon Pools): Mark failing tests with `xfail`. This is handy if we're writing the tests first or we have a large batch of tests failing for some reason and we want to cut down on the error output generated during a test run.
* 🎨 refactor(Carbon Pools): Extract `deadwood_litter_equations`. This refactoring pattern is described here: https://refactoring.guru/extract-method
* 🎨 style(Carbon Pools): Add proper spacing between functions
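Marking a known-failing test with pytest's `xfail` looks like this (the test name and reason string are illustrative):

```python
import pytest

@pytest.mark.xfail(reason="equations not yet decoupled from 3rd-party dependencies")
def test_deadwood_litter_pools():
    # Written test-first; expected to fail until the refactoring lands,
    # so pytest reports it as "xfail" instead of printing a full traceback.
    assert False
```

In the test summary these show up as `x` rather than `F`, which keeps the error output of a run with many pending tests manageable.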
* Testing not working: import errors.
* Testing works when I run pytest from usr/local/app/test. Added deadwood and litter pool tests for the simple numpy operations that represent the five categories of domain/elevation/precipitation. The tests run on 1x1 numpy arrays to keep things simple (not on actual tiles). Doing this involved refactoring the numpy parts of create_deadwood_litter into their own function that inputs and outputs just arrays of any dimension.
* Carbon pool creation still works, even with the deadwood and litter equations factored out. All tests of the different equations pass, too.
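The extract-method refactoring described above might look like the following pure array-in/array-out function. The signature, threshold values, and coefficients are illustrative only, not the model's real equations; the point is that a function operating on plain numpy arrays works identically on 1x1 test arrays and full tiles:

```python
import numpy as np

def deadwood_litter_equations(elevation, precip, agb):
    """Hypothetical extracted function: takes and returns numpy arrays
    of any dimension, with no tile I/O or 3rd-party dependencies."""
    # Illustrative rules: higher elevation -> more deadwood,
    # drier climate -> more litter (coefficients are made up).
    deadwood = np.where(elevation > 2000, agb * 0.10, agb * 0.02)
    litter = np.where(precip < 1000, agb * 0.05, agb * 0.01)
    return deadwood, litter

# 1x1 arrays keep the unit test tiny, mirroring the PR's approach.
dw, lt = deadwood_litter_equations(np.array([[2500.0]]),
                                   np.array([[800.0]]),
                                   np.array([[100.0]]))
print(dw, lt)  # -> [[10.]] [[5.]]
```

Because the function never touches rasters directly, the production code can pass it full tile-sized arrays while the tests pass it single-element arrays with hand-computed expected values.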
This is a good first step toward creating more intention-revealing code.
Holding off on making this test pass. We have more work to do to decouple the function from some third-party dependencies.
Pull request checklist
Please check if your PR fulfills the following requirements:
Pull request type
Please check the type of change your PR introduces:
What is the current behavior?
Issue Number: N/A
What is the new behavior?
Does this introduce a breaking change?
Other information