Version 1.2.2 2021 TCL update (#22)
* Updated model constants (primarily loss years/model years) for the annual update. Also tentatively updated input and output folder dates on s3 from 2021 to 2022. Finally, refactored the "gain" folder to "removals" (not certain the refactor worked entirely correctly, but it did result in "gain" being changed to "removals" in comments and variables throughout the model).

* Updated burned area scripts for the 2022 model run.

* Ran the model successfully on 00N_000E locally in Docker, with a combination of constants from v1.2.1 and v1.2.2 so that it would run to completion (though not necessarily produce reasonable results). Now going to run on a spot machine.

* Updating MODIS burned area for 2021 tree cover loss. Only running steps 1-3 for now because step 4 needs the TCL update.

* Working on burn year update for 2021.

* Working on burn year update for 2021. On step 3 now.

* Completed steps 1-3 of the burn year update. Waiting on 2021 TCL for step 4.

* Experimenting with tracking memory during the model run. I can't figure out how to have a memory profiler run concurrently with the model processing (tracemalloc doesn't seem to do what I want), so I'm just checking memory at specific points and hoping that's representative of peak memory usage.

* Experimenting with tracking memory during the model run. Using my new psutil-based memory tracker in uu.end_of_fx_summary seems to capture peak memory usage. I checked that in a local test and on an r5d.4xlarge instance running 3 tiles, both using the mp_model_extent stage. Haven't tried memory tracking with other stages yet.

* Added memory tracking to all model steps used in run_full_model.py. I've only tested it in mp_model_extent.py, but it should work similarly elsewhere. Used the htop memory graph to validate the psutil memory values.
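
(A minimal sketch of the kind of psutil-based check described above; the actual uu.end_of_fx_summary code isn't part of this diff, so the names and details here are illustrative.)

```python
import psutil

def log_memory_summary(step_name):
    # Reports system-wide memory use at the end of a processing step.
    # Checked against the htop memory graph, this end-of-step reading
    # tracked peak usage closely in local Docker and r5d.4xlarge tests.
    mem = psutil.virtual_memory()
    used_GB = (mem.total - mem.available) / 1024 ** 3
    total_GB = mem.total / 1024 ** 3
    print(f"{step_name}: {used_GB:.2f} of {total_GB:.2f} GB used ({mem.percent}%)")
```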

* Changed the postgres steps for plantation preprocessing to work in an r5d instance. The Dockerfile now sets up the correct environment and permissions, but the actual database creation and postgis extension addition have to be done manually in the container. I can't figure out how to get the container to launch with postgres running, the database created, or the postgis extension added. Anyhow, this is still workable. I tested adding plantation data to the postgres database and it seemed fine, although I didn't then try running the plantation script to rasterize it. So there could easily be issues with the postgres database; I just don't know.
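
(The manual container steps described above presumably look something like the following. These are standard postgres commands, shown here run via subprocess in the same style the model uses for shell commands; they are not code from this commit.)

```python
import subprocess

# Start postgres manually, since it isn't running when the container opens
subprocess.check_call(['service', 'postgresql', 'start'])

# Create the database; 'ubuntu' matches ENV PGDATABASE in the Dockerfile,
# and PGUSER=postgres means these commands connect as user postgres
subprocess.check_call(['createdb', 'ubuntu'])

# Add the postgis extension to the new database
subprocess.check_call(['psql', '-d', 'ubuntu', '-c', 'CREATE EXTENSION IF NOT EXISTS postgis;'])
```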

* Test commit; changed plantation preparation.

* Trying a full model run with 2021 TCL. Theoretically it will run all the way through, but using 2020 drivers and burned area. Will update burned area through 2021 later on.

* Changing memory warning threshold to be higher.

* Changing processor count for model extent.

* Issue with full model run: not finding the s3 folder correctly. Trying with two test tiles first.

* Experimenting with full model run again.

* The 96% memory warning threshold led to unnecessary program termination. Increasing it to 99%, because the percentage reported by psutil.virtual_memory() doesn't exactly match what's shown in htop.
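
(A sketch of the threshold logic that uu.check_memory(), called throughout the diff below, presumably applies; its implementation isn't shown in this commit, so this is illustrative only.)

```python
import psutil

def check_memory(threshold_pct=99):
    # psutil's percentage doesn't exactly match htop's memory bar, so the
    # warning threshold needs headroom: 96% triggered false terminations.
    pct = psutil.virtual_memory().percent
    if pct > threshold_pct:
        raise SystemExit(f"Memory at {pct}% exceeds {threshold_pct}%; stopping before a crash")
```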

* Reducing processor count for deadwood/litter creation.

* Finishing burn year update for 2021 now that TCL 2021 is available.

* Changed directory and names for updated drivers.

* Trying to rewindow the tcd, gain, etc. tiles again.

* Trying to run aggregation step again.

* Issue with 20N_030W in the emissions aggregation step: it was never rewindowed and I can't tell why.

* Issue with 10N_150E net flux rewindowing: it randomly didn't happen. Trying net flux tile aggregation again.

* Having rewindowed area tiles from the aggregation step on the spot machine made it think it already had the area tiles, even though it only had the rewindowed versions. So I'm adding deletion of all rewindowed tiles after the aggregate maps are made (see the glob-and-remove addition to mp_aggregate_results_to_4_km.py below).

* Wrote a script that downloads the tiles for a list of tile_ids using a list of directories and patterns, then builds overviews for them for easy viewing in ArcMap. This just simplifies downloading all the intermediate and final outputs of the model for QCing (no need to download them individually using Cloudberry).
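
(For reference, per the argparse flags in the new script shown in the diff below, an invocation presumably looks like `python download_tile_set.py -t std -l 00N_000E,00N_110E -d 20220322`; the model type and date values here are illustrative.)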

* Running full model, hopefully from start to finish. Using 2021 TCL with drivers and burn year updated.

* Running full model, from gross emissions aggregation onwards. Emissions aggregation skipped 20N_030W again, so the model stopped. This happened with the same tile before. Still don't know what the deal is.

* Running full model, from net flux aggregation onwards. Net flux aggregation skipped another tile again (I think it was 10N_150E, but I didn't record it, oops). This happened with net flux before. Still don't know what the deal is.

* Revised download_tile_set.py to create smaller overviews (.ovr) for the tile set, via DEFLATE compression (gdal.SetConfigOption('COMPRESS_OVERVIEW', 'DEFLATE'), as shown in the diff below).

* Rerunning drivers preparation and emissions onwards because of corrected drivers map.

* Rerunning emissions aggregation onwards because of corrected drivers map.

* Rerunning net flux aggregation onwards because of corrected drivers map.

* Creating soil C emissions for 2021 update (v1.2.2).

* Updated README for the model update with 2021 TCL (v1.2.2). This is the model version as I ran it for the 2021 update.
dagibbs22 authored Mar 22, 2022
1 parent 11780aa commit 58fdef4
Showing 51 changed files with 1,039 additions and 538 deletions.
29 changes: 24 additions & 5 deletions Dockerfile
@@ -1,6 +1,8 @@
# Use osgeo GDAL image. It builds off Ubuntu 18.04 and uses GDAL 3.0.4
FROM osgeo/gdal:ubuntu-small-3.0.4
#FROM osgeo/gdal:ubuntu-full-3.0.4 # Use this if downloading hdf files for burn year analysis

# # Use this if downloading hdf files for burn year analysis
# FROM osgeo/gdal:ubuntu-full-3.0.4

ENV DIR=/usr/local/app
ENV TMP=/usr/local/tmp
@@ -39,11 +41,28 @@ RUN mkdir -p ${TILES}
WORKDIR ${DIR}
COPY . .

# set environment variables

# Set environment variables
ENV AWS_SHARED_CREDENTIALS_FILE $SECRETS_PATH/.aws/credentials
ENV AWS_CONFIG_FILE $SECRETS_PATH/.aws/config
# https://www.postgresql.org/docs/current/libpq-envars.html
ENV PGUSER postgres
ENV PGDATABASE=ubuntu


#######################################
# Activate postgres and enable connection to it
# Copies config file that allows user postgres to enter psql shell,
# as shown here: https://stackoverflow.com/a/26735105 (change peer to trust).
# Commented out the start/restart commands because even with running them, postgres isn't running when the container is created.
# So there's no point in starting postgres here if it's not active when the instance opens.
#######################################
RUN cp pg_hba.conf /etc/postgresql/10/main/
# RUN pg_ctlcluster 10 main start
# RUN service postgresql restart


# Install missing python dependencies
# Install missing Python dependencies
RUN pip3 install -r requirements.txt

# Link gdal libraries
@@ -57,10 +76,10 @@ RUN ln -s /usr/bin/python3 /usr/bin/python
RUN git config --global user.email [email protected]

## Check out the branch that I'm currently using for model development
#RUN git checkout model_v_1.2.1
#RUN git checkout model_v_1.2.2
#
## Makes sure the latest version of the current branch is downloaded
#RUN git pull origin model_v_1.2.1
#RUN git pull origin model_v_1.2.2

## Compile C++ scripts
#RUN g++ /usr/local/app/emissions/cpp_util/calc_gross_emissions_generic.cpp -o /usr/local/app/emissions/cpp_util/calc_gross_emissions_generic.exe -lgdal && \
12 changes: 7 additions & 5 deletions analyses/aggregate_results_to_4_km.py
@@ -13,7 +13,7 @@
For sensitivity analysis runs, it only processes outputs which actually have a sensitivity analysis version.
The user has to supply a tcd threshold for which forest pixels to include in the results. Defaults to cn.canopy_threshold.
For sensitivity analysis, the s3 folder with the aggregations for the standard model must be specified.
sample command: python mp_aggregate_results_to_4_km.py -tcd 30 -t no_shifting_ag -sagg s3://gfw2-data/climate/carbon_model/0_4deg_output_aggregation/biomass_soil/standard/20200901/net_flux_Mt_CO2e_biomass_soil_per_year_tcd30_0_4deg_modelv1_2_0_std_20200901.tif
sample command: python mp_aggregate_results_to_4_km.py -tcd 30 -t no_shifting_ag -sagg s3://gfw2-data/climate/carbon_model/0_04deg_output_aggregation/biomass_soil/standard/20200901/net_flux_Mt_CO2e_biomass_soil_per_year_tcd30_0_4deg_modelv1_2_0_std_20200901.tif
'''


@@ -71,7 +71,9 @@ def aggregate(tile, thresh, sensit_type, no_upload):
#2D array in which the 0.04x0.04 deg aggregated sums will be stored
sum_array = np.zeros([250,250], 'float32')

out_raster = "{0}_{1}_0_4deg.tif".format(tile_id, tile_type)
out_raster = "{0}_{1}_0_04deg.tif".format(tile_id, tile_type)

uu.check_memory()

# Iterates across the windows (160x160 30m pixels) of the input tile
for idx, window in windows:
@@ -103,11 +105,11 @@
sum_array[idx[0], idx[1]] = non_zero_pixel_sum


# Converts the annual carbon gain values to annual gain in megatonnes and makes negative (because removals are negative)
# Converts the annual carbon removals values to annual removals in megatonnes and makes negative (because removals are negative)
if cn.pattern_annual_gain_AGC_all_types in tile_type:
sum_array = sum_array / cn.tonnes_to_megatonnes * -1

# Converts the cumulative CO2 gain values to annualized CO2 in megatonnes and makes negative (because removals are negative)
# Converts the cumulative CO2 removals values to annualized CO2 in megatonnes and makes negative (because removals are negative)
if cn.pattern_cumul_gain_AGCO2_BGCO2_all_types in tile_type:
sum_array = sum_array / cn.loss_years / cn.tonnes_to_megatonnes * -1

@@ -183,7 +185,7 @@ def aggregate(tile, thresh, sensit_type, no_upload):
# aggregated.close()

# Prints information about the tile that was just processed
uu.end_of_fx_summary(start, tile_id, '{}_0_4deg'.format(tile_type), no_upload)
uu.end_of_fx_summary(start, tile_id, '{}_0_04deg'.format(tile_type), no_upload)


# Calculates the percent difference between the standard model's net flux output
1 change: 1 addition & 0 deletions analyses/create_supplementary_outputs.py
@@ -112,6 +112,7 @@ def create_supplementary_outputs(tile_id, input_pattern, output_patterns, sensit
per_pixel_forest_extent_dst.update_tags(
scale='Negative values are net sinks. Positive values are net sources.')

uu.check_memory()

# Iterates across the windows of the input tiles
for idx, window in windows:
123 changes: 123 additions & 0 deletions analyses/download_tile_set.py
@@ -0,0 +1,123 @@
'''
This script downloads the listed tiles and creates overviews for them for easy viewing in ArcMap.
It must be run in the Docker container, and so tiles are downloaded to and overviewed in the folder of the Docker container where
all other tiles are downloaded.
'''

import multiprocessing
from functools import partial
from osgeo import gdal
import pandas as pd
import datetime
import argparse
import glob
from subprocess import Popen, PIPE, STDOUT, check_call
import os
import sys
sys.path.append('../')
import constants_and_names as cn
import universal_util as uu

def download_tile_set(sensit_type, tile_id_list):

uu.print_log("Downloading all tiles for: ", tile_id_list)

wd = os.path.join(cn.docker_base_dir,"spot_download")

os.chdir(wd)

download_dict = {
cn.model_extent_dir: [cn.pattern_model_extent],
cn.age_cat_IPCC_dir: [cn.pattern_age_cat_IPCC],
cn.annual_gain_AGB_IPCC_defaults_dir: [cn.pattern_annual_gain_AGB_IPCC_defaults],
cn.annual_gain_BGB_IPCC_defaults_dir: [cn.pattern_annual_gain_BGB_IPCC_defaults],
cn.stdev_annual_gain_AGB_IPCC_defaults_dir: [cn.pattern_stdev_annual_gain_AGB_IPCC_defaults],
cn.removal_forest_type_dir: [cn.pattern_removal_forest_type],
cn.annual_gain_AGC_all_types_dir: [cn.pattern_annual_gain_AGC_all_types],
cn.annual_gain_BGC_all_types_dir: [cn.pattern_annual_gain_BGC_all_types],
cn.annual_gain_AGC_BGC_all_types_dir: [cn.pattern_annual_gain_AGC_BGC_all_types],
cn.stdev_annual_gain_AGC_all_types_dir: [cn.pattern_stdev_annual_gain_AGC_all_types],
cn.gain_year_count_dir: [cn.pattern_gain_year_count],
cn.cumul_gain_AGCO2_all_types_dir: [cn.pattern_cumul_gain_AGCO2_all_types],
cn.cumul_gain_BGCO2_all_types_dir: [cn.pattern_cumul_gain_BGCO2_all_types],
cn.cumul_gain_AGCO2_BGCO2_all_types_dir: [cn.pattern_cumul_gain_AGCO2_BGCO2_all_types],
cn.AGC_emis_year_dir: [cn.pattern_AGC_emis_year],
cn.BGC_emis_year_dir: [cn.pattern_BGC_emis_year],
cn.deadwood_emis_year_2000_dir: [cn.pattern_deadwood_emis_year_2000],
cn.litter_emis_year_2000_dir: [cn.pattern_litter_emis_year_2000],
cn.soil_C_emis_year_2000_dir: [cn.pattern_soil_C_emis_year_2000],
cn.total_C_emis_year_dir: [cn.pattern_total_C_emis_year],

# cn.gross_emis_commod_biomass_soil_dir: [cn.pattern_gross_emis_commod_biomass_soil],
# cn.gross_emis_shifting_ag_biomass_soil_dir: [cn.pattern_gross_emis_shifting_ag_biomass_soil],
# cn.gross_emis_forestry_biomass_soil_dir: [cn.pattern_gross_emis_forestry_biomass_soil],
# cn.gross_emis_wildfire_biomass_soil_dir: [cn.pattern_gross_emis_wildfire_biomass_soil],
# cn.gross_emis_urban_biomass_soil_dir: [cn.pattern_gross_emis_urban_biomass_soil],
# cn.gross_emis_no_driver_biomass_soil_dir: [cn.pattern_gross_emis_all_gases_all_drivers_biomass_soil],

cn.gross_emis_all_gases_all_drivers_biomass_soil_dir: [cn.pattern_gross_emis_all_gases_all_drivers_biomass_soil],
cn.gross_emis_co2_only_all_drivers_biomass_soil_dir: [cn.pattern_gross_emis_co2_only_all_drivers_biomass_soil],
cn.gross_emis_non_co2_all_drivers_biomass_soil_dir: [cn.pattern_gross_emis_non_co2_all_drivers_biomass_soil],
cn.gross_emis_nodes_biomass_soil_dir: [cn.pattern_gross_emis_nodes_biomass_soil],
cn.net_flux_dir: [cn.pattern_net_flux],
cn.cumul_gain_AGCO2_BGCO2_all_types_per_pixel_full_extent_dir: [cn.pattern_cumul_gain_AGCO2_BGCO2_all_types_per_pixel_full_extent],
cn.cumul_gain_AGCO2_BGCO2_all_types_forest_extent_dir: [cn.pattern_cumul_gain_AGCO2_BGCO2_all_types_forest_extent],
cn.cumul_gain_AGCO2_BGCO2_all_types_per_pixel_forest_extent_dir: [cn.pattern_cumul_gain_AGCO2_BGCO2_all_types_per_pixel_forest_extent],
cn.gross_emis_all_gases_all_drivers_biomass_soil_per_pixel_full_extent_dir: [cn.pattern_gross_emis_all_gases_all_drivers_biomass_soil_per_pixel_full_extent],
cn.gross_emis_all_gases_all_drivers_biomass_soil_forest_extent_dir: [cn.pattern_gross_emis_all_gases_all_drivers_biomass_soil_forest_extent],
cn.gross_emis_all_gases_all_drivers_biomass_soil_per_pixel_forest_extent_dir: [cn.pattern_gross_emis_all_gases_all_drivers_biomass_soil_per_pixel_forest_extent],
cn.net_flux_per_pixel_full_extent_dir: [cn.pattern_net_flux_per_pixel_full_extent],
cn.net_flux_forest_extent_dir: [cn.pattern_net_flux_forest_extent],
cn.net_flux_per_pixel_forest_extent_dir: [cn.pattern_net_flux_per_pixel_forest_extent]
}

# Downloads input files or entire directories, depending on how many tiles are in the tile_id_list
for key, values in download_dict.items():
dir = key
pattern = values[0]
uu.s3_flexible_download(dir, pattern, wd, sensit_type, tile_id_list)

cmd = ['aws', 's3', 'cp', cn.output_aggreg_dir, wd, '--recursive']
uu.log_subprocess_output_full(cmd)

tile_list = glob.glob('*tif')
uu.print_log("Tiles for pyramiding: ", tile_list)

# https://gis.stackexchange.com/questions/160459/comparing-use-of-gdal-to-build-raster-pyramids-or-overviews-versus-arcmap
# Example 3 from https://gdal.org/programs/gdaladdo.html
# https://stackoverflow.com/questions/33158526/how-to-correctly-use-gdaladdo-in-a-python-program
for tile in tile_list:
uu.print_log("Pyramiding ", tile)
Image = gdal.Open(tile, 0) # 0 = read-only, 1 = read-write.
gdal.SetConfigOption('COMPRESS_OVERVIEW', 'DEFLATE')
Image.BuildOverviews('NEAREST', [2, 4, 8, 16, 32], gdal.TermProgress_nocb)
del Image # close the dataset (Python object and pointers)

uu.print_log("Pyramiding done")


if __name__ == '__main__':

# The arguments for what kind of model run is being run (standard conditions or a sensitivity analysis) and
# the tiles to include
parser = argparse.ArgumentParser(
description='Download model outputs for specific tile')
parser.add_argument('--model-type', '-t', required=True,
help='{}'.format(cn.model_type_arg_help))
parser.add_argument('--tile_id_list', '-l', required=True,
help='List of tile ids to use in the model. Should be of form 00N_110E or 00N_110E,00N_120E or all.')
parser.add_argument('--run-date', '-d', required=False,
help='Date of run. Must be format YYYYMMDD.')
args = parser.parse_args()
sensit_type = args.model_type
tile_id_list = args.tile_id_list
run_date = args.run_date

# Create the output log
uu.initiate_log(tile_id_list=tile_id_list, sensit_type=sensit_type, run_date=run_date)

# Checks whether the sensitivity analysis and tile_id_list arguments are valid
uu.check_sensit_type(sensit_type)
tile_id_list = uu.tile_id_list_check(tile_id_list)

download_tile_set(sensit_type=sensit_type, tile_id_list=tile_id_list)
18 changes: 12 additions & 6 deletions analyses/mp_aggregate_results_to_4_km.py
@@ -13,7 +13,7 @@
For sensitivity analysis runs, it only processes outputs which actually have a sensitivity analysis version.
The user has to supply a tcd threshold for which forest pixels to include in the results. Defaults to cn.canopy_threshold.
For sensitivity analysis, the s3 folder with the aggregations for the standard model must be specified.
sample command: python mp_aggregate_results_to_4_km.py -tcd 30 -t no_shifting_ag -sagg s3://gfw2-data/climate/carbon_model/0_4deg_output_aggregation/biomass_soil/standard/20200901/net_flux_Mt_CO2e_biomass_soil_per_year_tcd30_0_4deg_modelv1_2_0_std_20200901.tif
sample command: python mp_aggregate_results_to_4_km.py -tcd 30 -t no_shifting_ag -sagg s3://gfw2-data/climate/carbon_model/0_04deg_output_aggregation/biomass_soil/standard/20200901/net_flux_Mt_CO2e_biomass_soil_per_year_tcd30_0_4deg_modelv1_2_0_std_20200901.tif
'''


@@ -81,7 +81,7 @@ def mp_aggregate_results_to_4_km(sensit_type, thresh, tile_id_list, std_net_flux

# Downloads input files or entire directories, depending on how many tiles are in the tile_id_list
uu.s3_flexible_download(dir, download_pattern_name, cn.docker_base_dir, sensit_type, tile_id_list)


if tile_id_list == 'all':
# List of tiles to run in the model
@@ -110,7 +110,7 @@ def mp_aggregate_results_to_4_km(sensit_type, thresh, tile_id_list, std_net_flux
# from https://stackoverflow.com/questions/12666897/removing-an-item-from-list-matching-a-substring
tile_list = [i for i in tile_list if not ('hanson_2013' in i)]
tile_list = [i for i in tile_list if not ('rewindow' in i)]
tile_list = [i for i in tile_list if not ('0_4deg' in i)]
tile_list = [i for i in tile_list if not ('0_04deg' in i)]
tile_list = [i for i in tile_list if not ('.ovr' in i)]

# tile_list = ['00N_070W_cumul_gain_AGCO2_BGCO2_t_ha_all_forest_types_2001_15_biomass_swap.tif'] # test tiles
@@ -169,8 +169,8 @@ def mp_aggregate_results_to_4_km(sensit_type, thresh, tile_id_list, std_net_flux
# aggregate_results_to_4_km.aggregate(tile, thresh, sensit_type, no_upload)

# Makes a vrt of all the output 10x10 tiles (10 km resolution)
out_vrt = "{}_0_4deg.vrt".format(pattern)
os.system('gdalbuildvrt -tr 0.04 0.04 {0} *{1}_0_4deg*.tif'.format(out_vrt, pattern))
out_vrt = "{}_0_04deg.vrt".format(pattern)
os.system('gdalbuildvrt -tr 0.04 0.04 {0} *{1}_0_04deg*.tif'.format(out_vrt, pattern))

# Creates the output name for the 10km map
out_pattern = uu.name_aggregated_output(download_pattern_name, thresh, sensit_type)
@@ -221,7 +221,13 @@ def mp_aggregate_results_to_4_km(sensit_type, thresh, tile_id_list, std_net_flux
for tile_name in tile_list:
tile_id = uu.get_tile_id(tile_name)
os.remove('{0}_{1}_rewindow.tif'.format(tile_id, pattern))
os.remove('{0}_{1}_0_4deg.tif'.format(tile_id, pattern))
os.remove('{0}_{1}_0_04deg.tif'.format(tile_id, pattern))

# Need to delete rewindowed tiles so they aren't confused with the normal tiles for creation of supplementary outputs
rewindow_list = glob.glob('*rewindow*tif')
for rewindow_tile in rewindow_list:
os.remove(rewindow_tile)
uu.print_log("Deleted all rewindowed tiles")


# Compares the net flux from the standard model and the sensitivity analysis in two ways.
2 changes: 1 addition & 1 deletion analyses/mp_tile_statistics.py
@@ -195,7 +195,7 @@ def mp_tile_statistics(sensit_type, tile_id_list):
if __name__ == '__main__':

parser = argparse.ArgumentParser(
description='Create tiles of the annual AGB and BGB gain rates for mangrove forests')
description='Create tiles of the annual AGB and BGB removals rates for mangrove forests')
parser.add_argument('--model-type', '-t', required=True,
help='{}'.format(cn.model_type_arg_help))
parser.add_argument('--tile_id_list', '-l', required=True,
6 changes: 4 additions & 2 deletions analyses/net_flux.py
@@ -16,7 +16,7 @@ def net_calc(tile_id, pattern, sensit_type, no_upload):
# Start time
start = datetime.datetime.now()

# Names of the gain and emissions tiles
# Names of the removals and emissions tiles
removals_in = uu.sensit_tile_rename(sensit_type, tile_id, cn.pattern_cumul_gain_AGCO2_BGCO2_all_types)
emissions_in = uu.sensit_tile_rename(sensit_type, tile_id, cn.pattern_gross_emis_all_gases_all_drivers_biomass_soil)

@@ -73,6 +73,8 @@ def net_calc(tile_id, pattern, sensit_type, no_upload):
net_flux_dst.update_tags(
scale='Negative values are net sinks. Positive values are net sources.')

uu.check_memory()

# Iterates across the windows (1 pixel strips) of the input tile
for idx, window in windows:

@@ -86,7 +88,7 @@
except:
emissions_window = np.zeros((window.height, window.width)).astype('float32')

# Subtracts gain that from loss
# Subtracts removals from emissions to calculate net flux (negative is net sink, positive is net source)
dst_data = emissions_window - removals_window

net_flux_dst.write_band(1, dst_data, window=window)