Model v 1.2.0 (#10)
* Couldn't open hdf files for burned area using osgeo/gdal:ubuntu-small-3.0.4. I think that it just didn't have the hdf driver. I'm trying to build the docker with osgeo/gdal:ubuntu-full-3.0.4 instead to see if that can read hdf files.

* Changing to full gdal build for Docker allowed gdal to read hdf. I don't know if that affects anything else in the model. I will see when I do later testing. (A quick GDAL driver check is sketched after this list.)

* Going ahead with burned area analysis now that the gdal-hdf issue is sorted out.

* Working on next step of updating burned area through 2019 for emissions model.

* I think I have the clip_year_tiles step working. Running on all tiles now, from 2000 to 2019.

* On the final stage of the burned area analysis update.

* Changed burned area final step to skip tiles without loss.

* Ran test tile through entire model on my computer (standard model only). Now going to try three test tiles on spot machine.

* Still working on getting the carbon pool generation to work again in the full model run context. Testing three tiles on a spot machine.

* Still working on getting the carbon pool generation to work again in the full model run context. Issues with soil carbon. Testing three tiles on a spot machine.

* Still working on test run of three tiles on spot machine. Carbon pool generation works now. On to emissions.

* Still working on test run of three tiles on spot machine. Making sure emissions runs.

* Issue was with where the c++ was being compiled. It wasn't in the project folder, which was messing me up. Think emissions are working now.

* Emissions, net flux, and aggregate worked. Fixing path issue for per pixel output step.

* Flux model seems to be ready for full model run for model v.1.2.0 on a spot machine. Ran all stages in Docker locally on one test tile and on Docker on spot machine on three test tiles.

* Flux model seems to be ready for full model run for model v.1.2.0 on a spot machine. Ran all stages in Docker locally on one test tile and on Docker on spot machine on three test tiles. Changed output directory dates to reflect intended date run.

* Realized that metadata tags weren't writing to the aggregated rasters, so I fixed that. Switched from adding tags with rasterio to adding them with gdal_edit. Used the same method for adding metadata tags to emissions outputs. (A gdal_edit sketch is included after this list.)

* Changed the number of allocated processors for each model step. Just guessing how much to use for r5d.24xlarge.

* Model extent tiles without any data were being uploaded to s3, so adding function to delete the empty ones before uploading.

* Fixing issues with full model run as I go and fine tuning number of processors. Had to make exceptions for loss tiles not existing in the age category script.

* Model stopped on IPCC default rate creation because one continent-ecozone tile didn't exist. Revised step to skip tiles that don't have ecozone-continent and age category tiles.

* Model stopped on merging all the gain year count rasters because some tiles didn't have all four. Fixing that and restarting with gain year count.

* Somehow skipped the annual gain rate all forest type step before. Going back to do that now.

* Adjusting number of processors for gross removals step.

* Fiddling with carbon pool generation step now.

* I got each stage running on all tiles through deadwood/litter emission carbon pool generation. I haven't gotten to testing it beyond that but I hope it works. Decreased allocated processors for remaining steps to reduce likelihood of running out of memory. Trying to run model on all tiles for model v1.2.0 standard run now.

* Full model run ran out of storage during gain year count step. Adding tile deletion before that step to make room.

* Running tile statistics on standard v1.2.0 for everything until carbon pools.

* Deadwood/litter c pool creation issue. Restarting model there.

* Allocated too many processors for total carbon step. Restarting from there.

* Done with carbon pool creation. Issue with cleaning unneeded tiles before emissions step.

* Having trouble creating blank tiles for missing emissions inputs.

* Fixed the tool for making blank tiles for missing gross emissions inputs. Running on all tiles now.

* Getting tile statistics for carbon pools in emissions year, gross emissions from biomass and soil, and net flux.

* Need to redo emissions steps onwards. Metadata writing was old version and overwrote results in emissions tiles.

* Doing more tile statistics.

* Needed to fix net flux script.

* Running carbon pool 2000 generation and soil only emissions now.

* Had to fix deadwood and litter in emissions year carbon pool generation. They weren't producing values before. Need to rerun the model from pool generation!

* Had to fix deadwood and litter in emissions year carbon pool generation. They weren't producing values before. Need to rerun the model from pool generation! Also, fixed metadata tags for per pixel output step.

* Redoing tile statistics for standard model v1.2.0.

* Creating carbon pools in 2000 for full model v1.2.0.

* Internet died during soil C creation in emissions year for full run of standard model v1.2.0. Resuming from there.

* Creating carbon pools in 2000 for full standard model v1.2.0.

* Creating deadwood C 2000 and onwards. Spot machine ran out of storage, so had to add some lines to clean up tiles more frequently.

* Creating soil only gross emissions for model v1.2.0.

* Starting to work on creating uncertainty for soil C using SoilGrids250 CI5 and CI95 data.

* Working on soil C stdev. All input SoilGrids tiles downloaded. Working on calculating stdev from them. (A sketch of one way to compute the stdev is after this list.)

* Creating mangrove removal factor stdev for select tiles again. Some had erroneously high values.

* Creating removal factor standard deviation for US tiles and composite RF standard deviation for all forest types.

* Found problems with US tiles. Rerunning model for US tiles only.

* Creating global aggregate maps for standard model v1.2.0 with corrected US tiles.

* Rerunning aggregate step for standard model v1.2.0 because I changed which pixels are included in aggregation. Now includes all pixels with tcd>30, all Hansen gain pixels, and all mangrove pixels. (A sketch of the inclusion mask is after this list.)

* Running no_shifting_ag sensitivity analysis for model v1.2.0 (emissions onwards).

* Ready to run maxgain sensitivity analysis for model v1.2.0.

* Net flux missed tiles without Hansen loss for no_shifting_ag and convert_to_grassland sensitivity analyses. Fixed to include full tile set and rerunning those models from net flux onwards.

* Going to run model v1.2.0 biomass_swap sensitivity analysis now.

* Running biomass swap sensitivity analysis again using just the JPL tile set.

* Running biomass swap sensitivity analysis again using just the JPL tile set. Reduced number of processors for many steps because biomass_swap seems to take more memory than using WHRC.

* Running no_primary_gain sensitivity analysis now.

* Running the rest of biomass_swap sensitivity analysis. Had to reduce the number of processors for carbon pools onwards.

* Needed to redo no_primary_gain sensitivity analysis.

* Running biomass_swap sensitivity analysis again.

* Running US_removals sensitivity analysis.

* Trying to finish up biomass_swap sensitivity analysis. Randomly stopped before.

* Creating PRODES legal Amazon loss tiles for 2001-2019.

* Working on legal_Amazon_loss sensitivity analysis.

* Added a few notes here and there.
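
For the note above about switching to the full GDAL Docker build: a quick way to confirm whether a given GDAL build can read HDF is to check for the HDF drivers. This is a minimal sketch added for illustration, not code from the repository.

```python
from osgeo import gdal

# The burned-area inputs are HDF files; small GDAL builds may omit these drivers.
for driver_name in ('HDF4', 'HDF5'):
    driver = gdal.GetDriverByName(driver_name)
    print(driver_name, 'available' if driver is not None else 'missing')
```
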
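
For the note above about switching from rasterio to gdal_edit for metadata tags: `gdal_edit.py` writes metadata with its `-mo KEY=VALUE` option. The file name and tag names below are hypothetical placeholders, not the model's actual tags.

```python
import subprocess

out_tile = 'example_output_tile.tif'  # placeholder file name
tags = {'units': 'Mg CO2e/ha', 'model_version': '1.2.0'}  # placeholder tags

# Build one gdal_edit.py call with a -mo argument per tag
cmd = ['gdal_edit.py', out_tile]
for key, value in tags.items():
    cmd += ['-mo', '{}={}'.format(key, value)]
subprocess.check_call(cmd)
```
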
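
For the soil C standard deviation note above: if the per-pixel uncertainty is treated as roughly normal, the SoilGrids 5th–95th percentile interval spans about 2 × 1.645 standard deviations. That normality assumption is mine, so this is only a sketch of one plausible calculation, not necessarily what the model does.

```python
import numpy as np

Z_90 = 1.645  # z-score bounding the central 90% (5th to 95th percentiles) of a normal distribution

def soil_c_stdev(ci5, ci95):
    """Approximate per-pixel standard deviation from SoilGrids CI5 and CI95 arrays."""
    ci5 = np.asarray(ci5, dtype='float32')
    ci95 = np.asarray(ci95, dtype='float32')
    return (ci95 - ci5) / (2 * Z_90)
```
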
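
For the aggregation note above: the inclusion rule (tree cover density > 30, any Hansen gain pixel, any mangrove pixel) can be written as a simple boolean mask. The array names are hypothetical.

```python
import numpy as np

def aggregation_mask(tcd, gain, mangrove_biomass):
    """Pixels included in aggregation: tcd > 30, or Hansen gain, or mangrove (placeholder inputs)."""
    return (tcd > 30) | (gain == 1) | (mangrove_biomass > 0)
```
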
dagibbs22 authored Feb 8, 2021
1 parent 1365183 commit 786a766
Showing 4 changed files with 34 additions and 9 deletions.
9 changes: 6 additions & 3 deletions analyses/aggregate_results_to_4_km.py
@@ -300,13 +300,16 @@ def percent_diff(std_aggreg_flux, sensit_aggreg_flux, sensit_type):
date = datetime.datetime.now()
date_formatted = date.strftime("%Y_%m_%d")

uu.print_log(std_aggreg_flux)
uu.print_log(sensit_aggreg_flux)
uu.print_log(std_aggreg_flux)

# CO2 gain uses non-mangrove non-planted biomass:carbon ratio
# This produces errors about dividing by 0. As far as I can tell, those are fine. It's just trying to divide NoData
# pixels by NoData pixels, and it doesn't affect the output.
perc_diff_calc = '--calc=(A-B)/absolute(B)*100'.format(sensit_aggreg_flux, std_aggreg_flux)
# For model v1.2.0, this kept producing incorrect values for the biomass_swap analysis. I don't know why. I ended
# up just using raster calculator in ArcMap to create the percent diff raster for biomass_swap. It worked
# fine for all the other analyses, though (including legal_Amazon_loss).
# Maybe that divide by 0 is throwing off other values now.
perc_diff_calc = '--calc=(A-B)/absolute(B)*100'
perc_diff_outfilename = '{0}_{1}_{2}.tif'.format(cn.pattern_aggreg_sensit_perc_diff, sensit_type, date_formatted)
perc_diff_outfilearg = '--outfile={}'.format(perc_diff_outfilename)
# cmd = ['gdal_calc.py', '-A', sensit_aggreg_flux, '-B', std_aggreg_flux, perc_diff_calc, perc_diff_outfilearg,
1 change: 0 additions & 1 deletion constants_and_names.py
@@ -179,7 +179,6 @@
pattern_planted_forest_type_unmasked = 'plantation_type_oilpalm_woodfiber_other_unmasked'
planted_forest_type_unmasked_dir = os.path.join(s3_base_dir, 'other_emissions_inputs/plantation_type/standard/20200730/')


# Peat mask inputs
peat_unprocessed_dir = os.path.join(s3_base_dir, 'other_emissions_inputs/peatlands/raw/')
cifor_peat_file = 'cifor_peat_mask.tif'
10 changes: 5 additions & 5 deletions emissions/mp_calculate_gross_emissions.py
@@ -154,11 +154,11 @@ def mp_calculate_gross_emissions(sensit_type, tile_id_list, emitted_pools, run_d
uu.exception_log('Pool and/or sensitivity analysis option not valid')


# # Downloads input files or entire directories, depending on how many tiles are in the tile_id_list
# for key, values in download_dict.items():
# dir = key
# pattern = values[0]
# uu.s3_flexible_download(dir, pattern, folder, sensit_type, tile_id_list)
# Downloads input files or entire directories, depending on how many tiles are in the tile_id_list
for key, values in download_dict.items():
dir = key
pattern = values[0]
uu.s3_flexible_download(dir, pattern, folder, sensit_type, tile_id_list)


# If the model run isn't the standard one, the output directory and file names are changed
23 changes: 23 additions & 0 deletions readme.md
@@ -92,17 +92,39 @@ and often the number of processors being used is 1/2 or 1/3 of the actual number
If the tiles were smaller (e.g., 1x1 degree), more processors could be used but then there'd also be more tiles to process, so I'm not sure that would be any faster.
Users can track memory usage in realtime using the `htop` command line utility.

The model runs inside a Docker container. Once you have Docker configured on your system and have cloned this repository, the following commands will open a command line inside the Docker container.

For runs on my local computer, I use Docker Compose so that the Docker container is mapped to my computer's drives.
I do this for development and testing.
`docker-compose build`
`docker-compose run --rm -e AWS_SECRET_ACCESS_KEY=... -e AWS_ACCESS_KEY_ID=... carbon-budget`

For runs on an AWS r5d spot machine (for full model runs), I use `docker build`.
`docker build . -t gfw/carbon-budget`
`docker run --rm -it -e AWS_SECRET_ACCESS_KEY=... -e AWS_ACCESS_KEY_ID=... gfw/carbon-budget`

Before doing a model run, confirm that the dates of the relevant input and output s3 folders are correct in `constants_and_names.py`.
Depending on what exactly is being run, the user may have to change many dates in the s3 folders or none at all.
Unfortunately, I can't really give better guidance than that; it really depends on what part of the model is being run and how.
(I want to make the situations under which users change folder dates more consistent eventually.)
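
As an illustration of the pattern, the s3 locations in `constants_and_names.py` end in a dated folder (see the `planted_forest_type_unmasked_dir` line in the constants_and_names.py diff above); the values below are placeholders, not the file's real contents.

```python
import os

s3_base_dir = 's3://my-bucket/carbon_model/'  # placeholder; the real value is set in constants_and_names.py

# The trailing folder is the run date; it usually must be updated before a new run
# (e.g., .../standard/20200730/ -> .../standard/20210208/).
example_output_dir = os.path.join(s3_base_dir, 'other_emissions_inputs/plantation_type/standard/20200730/')
```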

##### Individual scripts
The flux model is composed of many separate scripts, each of which can be run on its own and
has its own inputs and output(s). There are several data preparation
scripts, several for the removals (sequestration/gain) model, a few to generate carbon pools, one for calculating
gross emissions, one for calculating net flux, and one for aggregating key results into coarser
resolution rasters for mapping. The order in which these must be run is very specific; many scripts depend on
the outputs of other scripts. Looking at the files that must be downloaded for a
script to run will show what files must already have been created and therefore what scripts must already have been
run. The date component of the output directory on s3 generally must be changed in `constants_and_names.py`
for each output file unless a date argument is provided on the command line.

Each script can be run either using multiple processors or one processor. The former is for full model runs,
while the latter is for model development. The user can switch between these two versions by commenting out
the appropriate code chunks.
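
A minimal sketch of what those two chunks typically look like; the function and tile IDs are placeholders rather than the model's actual code.

```python
from functools import partial
from multiprocessing import Pool

def process_tile(tile_id, sensit_type='std'):
    """Placeholder for one tile's worth of work."""
    print('Processing', tile_id, 'for sensitivity analysis', sensit_type)

tile_id_list = ['00N_000E', '00N_110E', '30N_080W']  # placeholder tile IDs

if __name__ == '__main__':
    # Full model runs: process tiles in parallel
    with Pool(processes=3) as pool:
        pool.map(partial(process_tile, sensit_type='std'), tile_id_list)

    # Model development: comment out the Pool block above and use a plain loop instead
    # for tile_id in tile_id_list:
    #     process_tile(tile_id, sensit_type='std')
```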

##### Master script
A master script will run through all of the non-preparatory scripts in the model: some removal factor creation, gross removals, carbon
pool generation, gross emissions, net flux, and aggregation. It includes all the arguments needed to run
@@ -111,6 +133,7 @@ the output directories. The emissions C++ code has to be compiled before running

`python run_full_model.py -t std -s all -r true -d 20200822 -l all -ce loss -p biomass_soil -tcd 30 -ma true -us true -ln "This will run the entire standard model, including creating mangrove and US removal factor tiles, on all tiles and output everything in s3 folders with the date 20200822."`


##### Running the emissions model
The gross emissions script is the only part that uses C++. Thus, it must be manually compiled before running.
There are a few different versions of the emissions script: one for the standard model and a few others for
