pypromice v1.4.0 (#291)

* Update .gitignore

* L2 split from L3 CLI processing

* unit tests moved to separate module

* file writing functions moved to separate module

* Loading functions moved to separate module

* Handling and reformatting functions moved

* resampling functions moved

* aws module updated with structure changes

* get_l2 and l2_to_l3 process test added

* data prep and write function moved out of AWS class

* stations for testing changed

* creating folders before writing files, writing hourly/daily/monthly files out in L2toL3, trying not to re-write the sorted tx file if already sorted

* update get_l3 to add historical data

* resampling frequency specified

* renamed join_levels to join_l2 because join_l3 will have a different merging function and attribute management, and will use site_id and list_station_id

* skipping resample after join_l2, fixed setup.py for join_l2

* fixing test

* fixed function names

* update get_l3 to add historical data

* update get_l3 to add historical data

* Create get_l3_new.py

* further work on join_l3, variable_aliases in ressource folder

* cleaning up debug code in join_l3

* small fix in join_l3

* working version

* delete encoding info after reading netcdf, debug of getColNames

* delete get_l3.py

* removing variables and output files metadata

* new variable description files

* added back ressource files, use level attributes for output definition

* make list of sites from station_config, switched print to logger.info

* removing get_l3, removing inst. values from averaged files, fixes to logging, attributes and tests

* Updates to numpy dependency version and pandas deprecation warnings (#258)

* numpy dependency <2.0

* resample rules updated (deprecation warning)

* fillna replaced with ffill (deprecation warning)

* get_l3 called directly rather than from file

* process action restructured

* small changes following review, restored variable.csv history

renamed new variable.csv

moved old variable.csv

renamed new variables.csv

recreate variables.csv

* building a station list instead of a station_dict

* renamed l3m to l3_merged, reintroduced getVars and getMeta

* moving gcnet_postprocessing as part of readNead

* sorting out the station_list in reverse chronological order

* using tilde notation in setup.py

* better initialisation of station_attributes attribute

* moved addMeta, addVars, roundValues, reformatTime, reformatLon to write.py

* Inline comment describing encoding attribute removal when reading a netcdf

* loading toml file as dictionary within join_l3

instead of just reading the stid to join

* ressources renamed to resources (#261)

* using the project attribute of a station to locate the AWS file and specify whether it's a NEAD file

* update test after moving addVars and addMeta

* fixed logger message in resample

* better definition of monthly sample rates in addMeta

* dummy dataset built in unit test now has 'level' attribute

* not storing timestamp_max for each station but pulling the info directly from the dataset when sorting
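
A minimal sketch of that sorting step, assuming `station_list` is a list of xarray datasets with a `time` coordinate (names are illustrative, not the exact join_l3 code):

```python
import xarray as xr
from typing import List

def sort_newest_first(station_list: List[xr.Dataset]) -> List[xr.Dataset]:
    # Read the last timestamp directly from each dataset's time coordinate
    # instead of keeping a separately stored timestamp_max per station.
    return sorted(station_list, key=lambda ds: ds.time.values[-1], reverse=True)
```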

* removing unnecessary import of addMeta, roundValues

* make CLI scripts usable within python

* return result in join_l2 and join_l3

* removing args from join_l2 function

* proper removal of encoding info when reading netcdf
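
A sketch of the removal pattern being described, assuming xarray (the exact pypromice code may differ):

```python
import xarray as xr

def strip_encoding(ds: xr.Dataset) -> xr.Dataset:
    # Clear the encoding inherited from the source netCDF so that later
    # writes do not reuse stale compression, chunking or dtype settings.
    ds.encoding = {}
    for var in ds.variables:
        ds[var].encoding = {}
    return ds
```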

* Refactored and Organized Test Modules

- Moved test modules and data from the package directory to the root-level tests directory.
- Updated directory structure to ensure clear separation of source code and tests.
- Updated import statements in test modules to reflect new paths.
- Restructured the tests module:
  - Renamed original automatic tests to `e2e` as they primarily test the main CLI scripts.
  - Added `unit` directory for unit tests.
  - Created `data` directory for shared test data files.

This comprehensive refactoring improves project organization by clearly separating test code from application code. It facilitates easier test discovery and enhances maintainability by following common best practices.

* Limited the CI tests to only run e2e

* naming conventions changed

* Feature/smoothing and extrapolating gps coordinates (#268)

* implemented GPS postprocessing on top of #262

This update:
- cleans up the SHF/LHF calculation
- reads dates of station relocations (when station coordinates are discontinuous) from `aws-l0/metadata/station_configurations`
- for each interval between station relocations, fits a linear function to the GPS observations of latitude, longitude and altitude, which is then used to interpolate and extrapolate the GPS observations (see the sketch below)
- these new smoothed and gap-free coordinates become the variables `lat, lon, alt`
- for bedrock stations (like KAN_B), static coordinates are used to build `lat, lon, alt`
- eventually `lat_avg`, `lon_avg`, `alt_avg` are calculated from `lat, lon, alt` and added as attributes to the netcdf files.
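
A minimal sketch of the per-interval fitting described above, assuming a sorted, datetime-indexed series of one GPS coordinate; the function and variable names are illustrative, not the actual pypromice API:

```python
import numpy as np
import pandas as pd

def smooth_coord(gps: pd.Series, relocations: list) -> pd.Series:
    """Piecewise-linear fit of one GPS coordinate (lat, lon or alt):
    within each interval between station relocations, fit a line to the
    observations and evaluate it on every timestamp, yielding a smoothed,
    gap-free series."""
    out = pd.Series(np.nan, index=gps.index)
    edges = [gps.index[0], *pd.to_datetime(relocations), gps.index[-1]]
    t0 = gps.index[0]
    for start, end in zip(edges[:-1], edges[1:]):
        obs = gps.loc[start:end].dropna()
        if len(obs) < 2:
            continue  # too few GPS fixes to fit a line in this interval
        x = (obs.index - t0).total_seconds()
        slope, intercept = np.polyfit(x, obs.values, 1)
        x_full = (out.loc[start:end].index - t0).total_seconds()
        out.loc[start:end] = slope * x_full + intercept
    return out
```

Fitting each relocation interval separately preserves the documented discontinuities at relocation dates while smoothing out GPS noise within each interval.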

Several minor fixes were also brought in:
- better encoding info removal when reading netcdf
- variables consisting entirely of NaNs are written to the `L2` and `L3/stations` files, but not to the `L3/sites` files
- dirWindSpd is recalculated if needed for historical data
- due to the xarray version, new columns need to be added manually before concatenating different datasets in join_l3

* Updated persistence.py to use explicit variable thresholds

Avoided applying the persistence filter on averaged pressure variables (`p_u` and `p_l`), because their 0-decimal precision often leads to incorrect filtering.

* Fixed bug in persistence QC where initial repetitions were ignored

* Relocated unit persistence tests
* Added explicit test for `get_duration_consecutive_true`
* Renamed `duration_consecutive_true` to `get_duration_consecutive_true` for imperative clarity (sketched below)
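
A sketch of the renamed helper and of a duration-based persistence mask built on it; the real implementation and the per-variable thresholds live in `pypromice.qc.persistence` and differ from this illustration:

```python
import pandas as pd

def get_duration_consecutive_true(flags: pd.Series) -> pd.Series:
    """Hours elapsed since the current run of True values began; 0.0 where False."""
    ts = pd.Series(flags.index, index=flags.index)
    run_start = ts.where(~flags).ffill()
    # A series that starts with True has no preceding False observation;
    # falling back to the first timestamp is what the bug fix above addresses.
    run_start = run_start.fillna(ts.iloc[0])
    hours = (ts - run_start).dt.total_seconds() / 3600
    return hours.where(flags, 0.0)

def persistence_mask(values: pd.Series, max_hours: float) -> pd.Series:
    """Flag values that stay exactly constant for longer than max_hours."""
    repeating = values.diff() == 0
    return get_duration_consecutive_true(repeating) >= max_hours
```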

* Updated python version in unittest

* Fixed bug in get_bufr

Configuration variables were too strictly validated.
* Made bufr_integration_test explicit

* Added __all__ to get_bufr.py

* Applied black code formatting

* Made bufr_to_csv a CLI script in setup.py

* Updated read_bufr_file to use wmo_id as index

* Added script to recreate bufr files

* Added corresponding unit tests
* Added flag to raise exceptions on errors
* Added create_bufr_files.py to setup

* Updated tests parameters

Updated station config:
* Added sonic_ranger_from_gps
* Changed height_of_gps_from_station_ground from 0 to 1

* Added test for missing data in get_bufr

- Ensure get_bufr_variables raises AttributeError when station dimensions are missing

* Updated get_bufr to support static GPS heights.

* Bedrock stations shouldn’t depend on the noisy GPS signal for elevation.
* Added station dimension values for WEG_B
* Added corresponding unittest

* Updated github/workflow to run unittests

Added eccodes installation

* Updated get_bufr to support station config files in folder

* Removed station_configurations.toml from repository
* Updated bufr_utilities.set_station to validate wmo id
* Implemented StationConfig io tests
* Extracted StationConfiguration utils from get_bufr
* Added support for loading multiple station configuration files

Other
* Made ArgumentParser instantiation inline

* Updated BUFRVariables with scales and descriptions

* Added detailed descriptions with references to the attributes in BUFRVariables
* Changed the attribute order to align with the exported schema
* Changed variable roundings to align with the scales defined in the BUFR schemas (illustrated below):
  * Latitude and longitude are set to 5. Was 6
  * heightOfStationGroundAboveMeanSeaLevel is set to 1. Was 2
  * heightOfBarometerAboveMeanSeaLevel is set to 1. Was 2
  * pressure is set to -1. Was 1. Note: The BUFRVariables unit is Pa and not hPa
  * airTemperature is set to 2. Was 1.
  * heightOfSensorAboveLocalGroundOrDeckOfMarinePlatformTempRH is set to 2. Was 4
  * heightOfSensorAboveLocalGroundOrDeckOfMarinePlatformWSPD is set to 2. Was 4
* Added unit tests to test the roundings
* Updated existing unit tests to align with corrected precision
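
A self-contained sketch of how an attrs converter pins a field to its BUFR scale; the real `round_converter` in bufr_utilities.py (visible in the diff below) may differ, e.g. in NaN handling:

```python
import attrs

def round_converter(ndigits: int):
    def _round(value: float) -> float:
        return round(value, ndigits)  # ndigits=-1 rounds to tens, matching scale -1
    return _round

@attrs.define
class PressureOnly:  # illustrative subset of BUFRVariables
    # BUFR scale -1, unit Pa
    nonCoordinatePressure: float = attrs.field(converter=round_converter(-1))

print(PressureOnly(nonCoordinatePressure=100123.4).nonCoordinatePressure)  # 100120.0
```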

* Increased the real_time_utilities rounding precisions

* Updated get_bufr to separate station position from bufr

* The station position determination (AWS_latest_locations) is separated from the bufr file export
* Updated the unit tests

Corrected minimum data check to allow p_i or t_i to be nan

Renamed process_station parameters for readability
* Rename now_timestamp -> target_timestamp
* Rename time_limit -> linear_regression_time_limit

Applied black

* Minor cleanup

* Updated StationConfiguration IO to handle unknown attributes from input

* Updated docstring in create_bufr_files.py

* Renamed e2e unittest methods

Added missing "test" prefix required by the unittest framework.

* Feature/surface heights and thermistor depths (#278)

* processes the surface height variables `z_surf_combined`, `z_ice_surf`, `snow_height`, and the thermistor depths `d_t_i_1-11`
* `variable.csv` was updated accordingly
* some clean-up of turbulent fluxes calculation, including renaming functions
* handling empty station configuration files and making errors understandable
* updated join_l3 so that surface height and thermistor depths in historical data are no longer ignored and to adjust the surface height between the merged datasets

* static values `latitude`, `longitude` and `altitude`, calculated either from `gps_lat, gps_lon, gps_alt` or from `lat, lon, alt`, are saved as attributes, along with `latitude_origin`, `longitude_origin` and `altitude_origin`, which state whether they come from the gappy observations `gps_lat, gps_lon, gps_alt` or from the gap-filled postprocessed `lat, lon, alt`
* changed "H" to "h" in pandas and added ".iloc" when necessary to remove deprecation warnings

* made `make_metadata_csv.py` update the latest locations in `aws-l3/AWS_station_metadata.csv` and `aws-l3/AWS_sites_metadata.csv`

---------

Co-authored-by: Penny How <[email protected]>

* L2toL3 test added (#282)

* 3.8 and 3.9 tests removed, tests only for 3.10 and 3.11

* echo syntax changed

* updated input file paths
---------

* better adjustment of surface height in join_l3, also adjusting z_ice_surf (#289)

* different decoding of GPS data if "L" is in GPS string (#288)

* Updated pressure field for BUFR output files

* Updated get_l2 to use aws.vars and aws.meta

get_l2 was previously also loading vars and meta in addition to AWS.
AWS populates meta with source information during instantiation.

* Removed static processing level attribute from file_attributes

* Run black on write.py

* Implemented alternative helper functions for reading variables and metadata files

* Refactor getVar getMeta
* Use pypromice.resources instead of pkg_resources

* Select format from multiple L0 input files

The format string was previously selected from the last L0 file.

* Updated attribute metadata

* Added test case for output metadata
* Added json formatted source string to attributes
* Added title string to attributes
* Updated ID string to include level
* Added utility function for fetching git commit id
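
A hypothetical sketch of such a helper; the actual function name and location in pypromice may differ:

```python
import subprocess

def get_commit_id(repo_path: str = ".") -> str:
    """Return the current git commit hash, or 'unknown' outside a git repo."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], cwd=repo_path, text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"
```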

* Updated test_process with full pipeline test

* Added test station configuration
* Cleanup test data files

* Removed station configuration generation

* Renamed folder in temporary test directory

* Added data issues repository path as an explicit parameter to AWS

* Added data issues path to process_test.yml

* Applied black on join_l3

* Updated join_l3 to generate source attribute for sites

Validate attribute keys in e2e test

* job name changed

* Bugfix/passing adj dir to l3 processing plus attribute fix (#292)

* passing adjustment_dir to L2toL3.py

* fixing attributes in join_l3

- station_attributes containing info from the merged dataset was lost when concatenating the datasets
- The key "source" is not present in the attributes of the old GC-Net files so `station_source = json.loads(station_attributes["source"])` was throwing an error

* give data_issues_path to get_l2tol3 in test_process

* using data_adjustments_dir as input in AWS.getL3

* adding path to dummy data_issues folder to process_test

* making sure data_issues_path is a Path in get_l2tol3

---------

Co-authored-by: PennyHow <[email protected]>
Co-authored-by: Mads Christian Lund <[email protected]>
3 people authored Aug 20, 2024
1 parent af09818 commit 735a8ba
Showing 69 changed files with 38,615 additions and 35,916 deletions.
20 changes: 16 additions & 4 deletions .github/workflows/process_test.yml

@@ -11,7 +11,7 @@ jobs:
     - name: Install Python
       uses: actions/setup-python@v4
       with:
-        python-version: "3.8"
+        python-version: "3.10"
     - name: Checkout repo
       uses: actions/checkout@v3
       with:
@@ -31,14 +31,26 @@ jobs:
       run: |
         cd $GITHUB_WORKSPACE
         git clone --depth 1 https://oauth2:${{ env.GITLAB_TOKEN }}@geusgitlab.geus.dk/glaciology-and-climate/promice/aws-l0.git
-    - name: Run data processing
+    - name: Run L0 to L2 processing
       env:
-        TEST_STATION: KPC_U CEN2 JAR
+        TEST_STATION: KAN_U HUM
       shell: bash
       run: |
         mkdir $GITHUB_WORKSPACE/out/
+        mkdir $GITHUB_WORKSPACE/out/L0toL2/
+        mkdir $GITHUB_WORKSPACE/data_issues
         for i in $(echo ${{ env.TEST_STATION }} | tr ' ' '\n'); do
-          python3 $GITHUB_WORKSPACE/main/src/pypromice/process/get_l3.py -v $GITHUB_WORKSPACE/main/src/pypromice/process/variables.csv -m $GITHUB_WORKSPACE/main/src/pypromice/process/metadata.csv -c $GITHUB_WORKSPACE/aws-l0/raw/config/$i.toml -i $GITHUB_WORKSPACE/aws-l0/raw -o $GITHUB_WORKSPACE/out/
+          python3 $GITHUB_WORKSPACE/main/src/pypromice/process/get_l2.py -c $GITHUB_WORKSPACE/aws-l0/tx/config/$i.toml -i $GITHUB_WORKSPACE/aws-l0/tx --issues $GITHUB_WORKSPACE/data_issues -o $GITHUB_WORKSPACE/out/L0toL2/ --data_issues_path $GITHUB_WORKSPACE/data_issues
         done
+    - name: Run L2 to L3 processing
+      env:
+        TEST_STATION: KAN_U HUM
+      shell: bash
+      run: |
+        mkdir $GITHUB_WORKSPACE/out/L2toL3/
+        for i in $(echo ${{ env.TEST_STATION }} | tr ' ' '\n'); do
+          echo ${i}_hour.nc
+          python3 $GITHUB_WORKSPACE/main/src/pypromice/process/get_l2tol3.py -c $GITHUB_WORKSPACE/aws-l0/metadata/station_configurations/ -i $GITHUB_WORKSPACE/out/L0toL2/${i}/${i}_hour.nc -o $GITHUB_WORKSPACE/out/L2toL3/ --data_issues_path $GITHUB_WORKSPACE/data_issues
+        done
     - name: Upload test output
       uses: actions/upload-artifact@v3
9 changes: 6 additions & 3 deletions .github/workflows/unit_test.yml

@@ -4,12 +4,12 @@ on:
   workflow_dispatch:
 
 jobs:
-  build:
+  test:
     name: unit_test
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python_version: ['3.8','3.9','3.10']
+        python_version: ['3.10', '3.11']
     steps:
       - name: Install Python
         uses: actions/setup-python@v4
@@ -19,6 +19,9 @@ jobs:
         uses: actions/checkout@v3
         with:
           token: ${{ secrets.GITHUB_TOKEN }}
+      - name: Install eccodes
+        run : |
+          sudo apt-get install -y libeccodes-dev
       - name: Install dependencies
         shell: bash
         run: |
@@ -30,4 +33,4 @@ jobs:
       - name: Run unit tests
         shell: bash
         run: |
-          python3 -m unittest discover pypromice
+          python3 -m unittest discover tests
1 change: 1 addition & 0 deletions MANIFEST.in

@@ -1 +1,2 @@
 include src/pypromice/test/*
+include src/pypromice/resources/*
16 changes: 10 additions & 6 deletions setup.py

@@ -5,7 +5,7 @@
 
 setuptools.setup(
     name="pypromice",
-    version="1.3.6",
+    version="1.4.0",
     author="GEUS Glaciology and Climate",
     description="PROMICE/GC-Net data processing toolbox",
     long_description=long_description,
@@ -31,21 +31,25 @@
     packages=setuptools.find_packages(where="src"),
     python_requires=">=3.8",
     package_data={
-        "pypromice.process": ["metadata.csv", "variables.csv"],
         "pypromice.tx": ["payload_formats.csv", "payload_types.csv"],
         "pypromice.qc.percentiles": ["thresholds.csv"],
-        "pypromice.postprocess": ["station_configurations.toml", "positions_seed.csv"],
+        "pypromice.postprocess": ["positions_seed.csv"],
     },
-    install_requires=['numpy>=1.23.0', 'pandas>=1.5.0', 'xarray>=2022.6.0', 'toml', 'scipy>=1.9.0', 'Bottleneck', 'netcdf4', 'pyDataverse', 'eccodes','scikit-learn>=1.1.0'],
+    install_requires=['numpy~=1.23', 'pandas>=1.5.0', 'xarray>=2022.6.0', 'toml', 'scipy>=1.9.0', 'Bottleneck', 'netcdf4', 'pyDataverse==0.3.1', 'eccodes', 'scikit-learn>=1.1.0'],
     # extras_require={'postprocess': ['eccodes','scikit-learn>=1.1.0']},
     entry_points={
         'console_scripts': [
             'get_promice_data = pypromice.get.get_promice_data:get_promice_data',
             'get_l0tx = pypromice.tx.get_l0tx:get_l0tx',
-            'get_l3 = pypromice.process.get_l3:get_l3',
-            'join_l3 = pypromice.process.join_l3:join_l3',
+            'join_l2 = pypromice.process.join_l2:main',
+            'join_l3 = pypromice.process.join_l3:main',
+            'get_l2 = pypromice.process.get_l2:main',
+            'get_l2tol3 = pypromice.process.get_l2tol3:main',
+            'make_metadata_csv = pypromice.postprocess.make_metadata_csv:main',
             'get_watsontx = pypromice.tx.get_watsontx:get_watsontx',
             'get_bufr = pypromice.postprocess.get_bufr:main',
+            'create_bufr_files = pypromice.postprocess.create_bufr_files:main',
+            'bufr_to_csv = pypromice.postprocess.bufr_to_csv:main',
             'get_msg = pypromice.tx.get_msg:get_msg'
         ],
     },
7 changes: 6 additions & 1 deletion src/pypromice/postprocess/bufr_to_csv.py

@@ -3,9 +3,14 @@
 
 from pypromice.postprocess.bufr_utilities import read_bufr_file
 
-if __name__ == "__main__":
+
+def main():
     parser = argparse.ArgumentParser("BUFR to CSV converter")
     parser.add_argument("path", type=Path)
     args = parser.parse_args()
 
     print(read_bufr_file(args.path).to_csv())
+
+
+if __name__ == "__main__":
+    main()
109 changes: 91 additions & 18 deletions src/pypromice/postprocess/bufr_utilities.py

@@ -45,6 +45,7 @@ def round(value: float):
 
     return round
 
+
 # Enforce precision
 # Note the sensor accuracies listed here:
 # https://essd.copernicus.org/articles/13/3819/2021/#section8
@@ -64,28 +65,82 @@ class BUFRVariables:
     * heightOfSensorAboveLocalGroundOrDeckOfMarinePlatformWSPD: Corresponds to "#7#heightOfSensorAboveLocalGroundOrDeckOfMarinePlatform" which is height if anemometer relative to ground or deck of marine platform.
     """
-    wmo_id: str
+
+    # Station type: "mobile" or "land"
+    # ===============================
+    # Fixed land station schema: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/307080
+    # Mobile station schema: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/307090
+
     station_type: str
+
+    # WMO station identifier
+    # Land stations: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/301090
+    # Mobile stations: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/301092
+    # ======================================================================================================
+    wmo_id: str
     timestamp: datetime.datetime
-    relativeHumidity: float = attrs.field(converter=round_converter(0))
-    airTemperature: float = attrs.field(converter=round_converter(1))
-    pressure: float = attrs.field(converter=round_converter(1))
-    windDirection: float = attrs.field(converter=round_converter(0))
-    windSpeed: float = attrs.field(converter=round_converter(1))
-    latitude: float = attrs.field(converter=round_converter(6))
-    longitude: float = attrs.field(converter=round_converter(6))
+
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/005001
+    # Scale: 5, unit: degrees
+    # TODO: Test if eccodes does the rounding as well. The rounding is was 6 which is larger that the scale.
+    latitude: float = attrs.field(converter=round_converter(5))
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/006001
+    # Scale: 5, unit: degrees
+    longitude: float = attrs.field(converter=round_converter(5))
+
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/007030
+    # Scale: 1, unit: m
     heightOfStationGroundAboveMeanSeaLevel: float = attrs.field(
-        converter=round_converter(2)
+        converter=round_converter(1)
     )
+    #
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/007031
+    # Scale: 1, unit: m
     heightOfBarometerAboveMeanSeaLevel: float = attrs.field(
-        converter=round_converter(2),
+        converter=round_converter(1),
     )
+
+    # Pressure information
+    # ====================
+    # Definition table: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/302031
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/010004
+    # Scale: -1, unit: Pa
+    nonCoordinatePressure: float = attrs.field(converter=round_converter(-1))
+    # There are two other pressure variables in the template: 007004 - pressure and 010062 24-hour pressure change
+
+    # Basic synoptic "instantaneous" data
+    # ===================================
+    # Definition table: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/302035
+    # This section only include the temperature and humidity data (302032).
+    # Precipitation and cloud data are currently ignored.
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/007032
+    # Scale: 2, unit: m
+    # This is the first appearance of this variable id.
     heightOfSensorAboveLocalGroundOrDeckOfMarinePlatformTempRH: float = attrs.field(
-        converter=round_converter(4),
+        converter=round_converter(2),
     )
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/012101
+    # Scale: 2, unit: K
+    airTemperature: float = attrs.field(converter=round_converter(2))
+    # There is also a Dewpoint temperature in this template: 012103 which is currently unused.
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/012103
+    # Scale: 0, unit: %
+    relativeHumidity: float = attrs.field(converter=round_converter(0))
+
+    # Basic synoptic "period" data
+    # ============================
+    # Definition table: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/302043
+    # Wind data: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/302042
+    # Wind direction: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/011001
+    # Scale: 0, unit: degrees
+    windDirection: float = attrs.field(converter=round_converter(0))
+    # Wind speed: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/011002
+    # Scale: 1, unit: m/s
+    windSpeed: float = attrs.field(converter=round_converter(1))
+    # https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/007032
+    # Scale: 2, unit: m
+    # This is the 7th appearance of this variable id.
     heightOfSensorAboveLocalGroundOrDeckOfMarinePlatformWSPD: float = attrs.field(
-        converter=round_converter(4)
+        converter=round_converter(2)
     )
 
     def as_series(self) -> pd.Series:
@@ -129,6 +184,7 @@ def __eq__(self, other: "BUFRVariables"):
 
 BUFR_TEMPLATES = {
     "mobile": {
+        # Template definition: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/307090
         "unexpandedDescriptors": (307090),  # message template, "synopMobil"
         "edition": 4,  # latest edition
         "masterTableNumber": 0,
@@ -144,6 +200,7 @@ def __eq__(self, other: "BUFRVariables"):
         "compressedData": 0,
     },
     "land": {
+        # Template definition: https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_D/307080
        "unexpandedDescriptors": (307080),  # message template, "synopLand"
        "edition": 4,  # latest edition
        "masterTableNumber": 0,
@@ -246,6 +303,11 @@ def set_station(ibufr, station_type: str, wmo_id: str):
     elif station_type == "land":
         # StationNumber for land stations are integeres
         wmo_id_int = int(wmo_id)
+        if wmo_id_int >= 1024:
+            raise ValueError(
+                f"Invalid WMO ID {wmo_id}. Land station number must be less than 1024."
+                "See https://vocabulary-manager.eumetsat.int/vocabularies/BUFR/WMO/32/TABLE_B/001002"
+            )
         station_config = dict(stationNumber=wmo_id_int)
     else:
         raise Exception(f"Unsupported station station type {station_type}")
@@ -280,7 +342,7 @@ def set_AWS_variables(
 
     set_bufr_value(ibufr, "relativeHumidity", variables.relativeHumidity)
     set_bufr_value(ibufr, "airTemperature", variables.airTemperature)
-    set_bufr_value(ibufr, "pressure", variables.pressure)
+    set_bufr_value(ibufr, "nonCoordinatePressure", variables.nonCoordinatePressure)
     set_bufr_value(ibufr, "windDirection", variables.windDirection)
     set_bufr_value(ibufr, "windSpeed", variables.windSpeed)
 
@@ -372,7 +434,7 @@ def get_bufr_value(msgid: int, key: str) -> float:
     raise ValueError(f"Unsupported BUFR value type {type(value)} for key {key}")
 
 
-def read_bufr_message(fp: BinaryIO) -> Optional[BUFRVariables]:
+def read_bufr_message(fp: BinaryIO, backwards_compatible: bool = False) -> Optional[BUFRVariables]:
     """
     Read and parse BUFR message from binary IO stream.
@@ -383,6 +445,8 @@ def read_bufr_message(fp: BinaryIO) -> Optional[BUFRVariables]:
     ----------
     fp
         Readable binary io stream
+    backwards_compatible
+        Use legacy pressure if nonCoordinatePressure is nan
 
     Returns
     -------
@@ -435,11 +499,19 @@ def read_bufr_message(fp: BinaryIO) -> Optional[BUFRVariables]:
             f"Unknown BUFR template unexpandedDescriptors: {unexpanded_descriptors}"
         )
 
+    nonCoordinatePressure = get_bufr_value(ibufr, "nonCoordinatePressure")
+    if math.isnan(nonCoordinatePressure) and backwards_compatible:
+        nonCoordinatePressure = get_bufr_value(ibufr, "pressure")
+        if not math.isnan(nonCoordinatePressure):
+            logger.warning(
+                f"nonCoordinatePressure is nan, using legacy pressure instead"
+            )
+
     variables = BUFRVariables(
         timestamp=timestamp,
         relativeHumidity=get_bufr_value(ibufr, "relativeHumidity"),
         airTemperature=get_bufr_value(ibufr, "airTemperature"),
-        pressure=get_bufr_value(ibufr, "pressure"),
+        nonCoordinatePressure=nonCoordinatePressure,
         windDirection=get_bufr_value(ibufr, "windDirection"),
         windSpeed=get_bufr_value(ibufr, "windSpeed"),
         latitude=get_bufr_value(ibufr, "latitude"),
@@ -485,5 +557,6 @@ def read_bufr_file(path: PathLike) -> pd.DataFrame:
         message_vars = read_bufr_message(fp)
         if message_vars is None:
             break
-        lines.append(message_vars)
-    return pd.DataFrame(lines).rename_axis("message_index")
+        lines.append(message_vars.as_series())
+    data_frame = pd.DataFrame(lines).set_index("wmo_id")
+    return data_frame