
Create hydranet main #26

Merged: 137 commits, Jun 12, 2024

Commits
7c7890e
debug test run
Polichinel May 23, 2024
19ac021
added setup_artifact_path
Polichinel May 23, 2024
5348675
new path_art..
Polichinel May 23, 2024
d840fc7
removed comments
Polichinel May 23, 2024
e8c55c2
first main for P-A
Polichinel May 23, 2024
6392cef
model_type to run_type
Polichinel May 23, 2024
0e14f93
nornir to viewspipeline
Polichinel May 23, 2024
c582bdc
nornir to views_pipeline
Polichinel May 23, 2024
9bd3c25
for debug
Polichinel May 23, 2024
eb700e1
added get_data to import
Polichinel May 23, 2024
41c8c4b
new run_type_dict
Polichinel May 23, 2024
e3b2e22
argparse solution
Polichinel May 24, 2024
9348e4b
starting on sweep
Polichinel May 24, 2024
321ee62
sweep back in
Polichinel May 24, 2024
7825fe7
added time_steps here
Polichinel May 24, 2024
76673b0
added get_posterior
Polichinel May 24, 2024
ed2e070
posterior...
Polichinel May 24, 2024
c012f18
fix?
Polichinel May 24, 2024
1114d81
fix??
Polichinel May 24, 2024
e0bd460
unet -> model
Polichinel May 24, 2024
1a63be0
validate args
Polichinel May 24, 2024
5046b36
better help and warnings
Polichinel May 24, 2024
11e45fe
new parser script
Polichinel May 24, 2024
6db4f8c
fix?
Polichinel May 24, 2024
ae4bbe4
import sys
Polichinel May 24, 2024
8ab6d81
removed comments...
Polichinel May 24, 2024
78ad92c
extended logic
Polichinel May 24, 2024
02cae7c
sweep right now?
Polichinel May 24, 2024
3a66151
debug...
Polichinel May 24, 2024
8db0a55
now with action
Polichinel May 24, 2024
a28dbd2
move start time
Polichinel May 24, 2024
f6f748c
forecastin place holder
Polichinel May 24, 2024
a4f1be4
full sweeps test
Polichinel May 24, 2024
6ef1607
now with eval
Polichinel May 28, 2024
7f8278f
no forcing of t or e for s
Polichinel May 28, 2024
77c1252
utils to find the last art
Polichinel May 28, 2024
76bbd54
time stamped arts in mp
Polichinel May 28, 2024
f4aa7b4
better print for debug
Polichinel May 28, 2024
523a43f
artifact name can now be passed
Polichinel May 28, 2024
7ea89b7
can now pass art name
Polichinel May 28, 2024
f8c425f
notes on stepshifted models
Polichinel May 28, 2024
d579870
fix typo...
Polichinel May 28, 2024
9b8a4a1
debug prints
Polichinel May 28, 2024
01b652e
if/if not sweep
Polichinel May 28, 2024
7ab48e6
test sweep
Polichinel May 28, 2024
6077bea
correct path now?
Polichinel May 28, 2024
1c478c9
fixed loop?
Polichinel May 28, 2024
f1f9938
added /
Polichinel May 29, 2024
bf109f8
more generel for single and sweep
Polichinel May 29, 2024
22ade30
use evalution.py
Polichinel May 29, 2024
357df2b
timedtapm to pickle
Polichinel May 29, 2024
17c57bb
fixed?
Polichinel May 29, 2024
3b3bc95
full run single model
Polichinel May 29, 2024
e3ec2ab
note on one script
Polichinel May 29, 2024
51579ae
some comments
Polichinel May 29, 2024
3831e49
new (old) name
Polichinel May 29, 2024
8fca699
sweep enabled again
Polichinel May 29, 2024
02db210
fixed?
Polichinel May 29, 2024
d54df88
test run
Polichinel May 29, 2024
5b71050
renamed test_tensor to full
Polichinel May 29, 2024
0ee00c7
test_tensor to full tensor
Polichinel May 29, 2024
514412b
test_tensoer to full_tensor
Polichinel May 29, 2024
2646ab0
test_tensor to full
Polichinel May 29, 2024
c998959
changed print
Polichinel May 29, 2024
20422b5
hold_out setting
Polichinel May 30, 2024
b09b963
debugging print
Polichinel May 30, 2024
064e299
checking
Polichinel May 30, 2024
d575001
test the new solution
Polichinel May 30, 2024
5882ff2
dump print shit
Polichinel May 30, 2024
032f719
just a test:w
Polichinel May 30, 2024
75a06f3
moved pred stuf to new utils
Polichinel May 30, 2024
a2760bc
better printing?
Polichinel May 30, 2024
1afaebb
better printing
Polichinel May 30, 2024
525b2c2
get_posterior to evaluate_posterior
Polichinel May 30, 2024
93880be
first commit
Polichinel May 30, 2024
8882742
thinking about forecastng
Polichinel May 30, 2024
34a2219
full sweep test
Polichinel May 30, 2024
28d4e4c
much improved modularity - see if works
Polichinel May 30, 2024
a54c636
fixed a typo...
Polichinel May 30, 2024
d20b857
Better now?
Polichinel May 30, 2024
abeb7d9
now mayhaps?
Polichinel May 30, 2024
a1e1ba4
now?
Polichinel May 30, 2024
55a8b67
forecastin error
Polichinel May 30, 2024
a3d24c3
sweep to see if error also there...
Polichinel May 30, 2024
7b27e6e
added debug print
Polichinel May 30, 2024
f018555
larger test...
Polichinel May 30, 2024
8b325ed
full run
Polichinel May 31, 2024
496351a
added handle_training.py
Polichinel May 31, 2024
fcd44f9
added handle_evaluation
Polichinel May 31, 2024
1b7c3f1
moved handle_forecast here
Polichinel May 31, 2024
8608b31
moved handle functions
Polichinel May 31, 2024
bf04fdb
migrate code co modular scripts
Polichinel May 31, 2024
9a09b0a
imported handlers
Polichinel May 31, 2024
3068ce3
corrected imprt
Polichinel May 31, 2024
f9dd01d
corrected import
Polichinel May 31, 2024
5a9fc94
corrected script name
Polichinel May 31, 2024
1bd47fa
full run
Polichinel Jun 1, 2024
bf6d241
added function
Polichinel Jun 1, 2024
9b3d51c
removed comments
Polichinel Jun 3, 2024
a5b94a5
get_data comment?
Polichinel Jun 3, 2024
0eb9afb
added comment
Polichinel Jun 3, 2024
0b2c6b4
fixed time_stamp?
Polichinel Jun 3, 2024
3abd39b
added not on pickled files being overwritten...
Polichinel Jun 3, 2024
835c6d7
note on print statement
Polichinel Jun 3, 2024
c3a4645
Merge branch 'main' into create_hydranet_main
Polichinel Jun 3, 2024
bd8a362
model file extensions for model
Polichinel Jun 10, 2024
91f1a21
abstracted out model and root path
Polichinel Jun 10, 2024
bc9d599
better naming 01
Polichinel Jun 10, 2024
05cd666
doc strings
Polichinel Jun 10, 2024
dcee74f
add management to paths
Polichinel Jun 10, 2024
9e6cdd1
change name 02 and location
Polichinel Jun 10, 2024
4bab662
more renaming 03
Polichinel Jun 10, 2024
2ed7b53
fixed a thing...
Polichinel Jun 10, 2024
f6e7c99
log_monthly_metric in w&b utils
Polichinel Jun 10, 2024
800c938
fixed print?
Polichinel Jun 10, 2024
d4617ab
os to pathlib
Polichinel Jun 10, 2024
81f258c
better docstrings
Polichinel Jun 10, 2024
d58701a
os -> pathlib
Polichinel Jun 10, 2024
8c799a8
fixed print?
Polichinel Jun 10, 2024
09462b7
print better now?
Polichinel Jun 10, 2024
6014f71
300 run
Polichinel Jun 11, 2024
5b717b9
new combined dataloader
Polichinel Jun 11, 2024
8a7af6d
updated for the new single dataloader
Polichinel Jun 11, 2024
707313c
new config_setup
Polichinel Jun 11, 2024
a05d118
removed double stuff
Polichinel Jun 11, 2024
2fe4a34
added comment regarding stuff
Polichinel Jun 11, 2024
d07d4c7
config_input_data added
Polichinel Jun 11, 2024
481a1c4
naive first viewers 6 test...
Polichinel Jun 11, 2024
93ebc19
better help
Polichinel Jun 11, 2024
fc8f71f
better help
Polichinel Jun 11, 2024
05b0346
better help
Polichinel Jun 11, 2024
ef68903
fixe typo
Polichinel Jun 11, 2024
ee8b818
right loa now?
Polichinel Jun 11, 2024
ca2a714
seems correct
Polichinel Jun 11, 2024
1c66fa3
small comment
Polichinel Jun 11, 2024
2252551
one dataloader to rule them all
Polichinel Jun 11, 2024
1b9b9cd
set entity for sweep - I think
Polichinel Jun 12, 2024
65 changes: 65 additions & 0 deletions common_utils/artifacts_utils.py
@@ -0,0 +1,65 @@
import os


def get_model_files(path, run_type):
    """
    Retrieve model files from a directory that match the given run type and common extensions.

    Args:
        path (str): The directory path where model files are stored.
        run_type (str): The type of run (e.g., calibration, testing).

    Returns:
        list: List of matching model file names.
    """
    # Define the common model file extensions - more can be added as needed
    common_extensions = ['.pt', '.pth', '.h5', '.hdf5', '.pkl', '.json', '.bst', '.txt', '.bin', '.cbm', '.onnx']

    # Retrieve files that start with run_type and end with any of the common extensions
    model_files = [f for f in os.listdir(path) if f.startswith(f"{run_type}_model_") and any(f.endswith(ext) for ext in common_extensions)]

    return model_files


def get_latest_model_artifact(path, run_type):
    """
    Retrieve the latest model artifact for a given run type, based on the timestamp embedded in the filename.

    Args:
        path (str): The model-specific directory path where artifacts are stored,
            i.e. PATH_ARTIFACTS = setup_artifacts_paths(PATH), executed in the model-specific
            main.py script, where PATH = Path(__file__).
        run_type (str): The type of run (e.g., calibration, testing, forecasting).

    Returns:
        str: The path to the latest model artifact for the given run type.

    Raises:
        FileNotFoundError: If no model artifacts are found for the given run type.
    """
    # List all model files for the given run_type with the expected filename pattern
    model_files = get_model_files(path, run_type)

    if not model_files:
        raise FileNotFoundError(f"No model artifacts found for run type '{run_type}' in path '{path}'")

    # Sort the files on the timestamp embedded in the filename (format %Y%m%d_%H%M%S,
    # e.g. '20210831_123456.pt'); zero-padded timestamps sort correctly as plain strings.
    model_files.sort(reverse=True)

    # Print statements for debugging
    print(f"artifacts available: {model_files}")
    print(f"artifact used: {model_files[0]}")

    # Return the latest model file
    return os.path.join(path, model_files[0])

# Notes on stepshifted models:
# There will be some thinking to do here regarding how we store, denote (naming convention), and retrieve the model artifacts from stepshifted models.
# It is not a big issue, but it is something to consider so we don't do anything heedless.
# A possible format could be: <run_type>_model_s<step>_<timestamp>.pt, e.g. calibration_model_s00_20210831_123456.pt, calibration_model_s01_20210831_123456.pt, etc.
# The rest of the code would then be made to handle this naming convention without any issues. Could be a simple fix.
# Alternatively, we could store the model artifacts in a subfolder per stepshifted model. That would make the artifacts easier to handle, but harder to retrieve the latest one for a given run type.
# Lastly, the solution Xiaolong is working on might allow us to store multiple models (steps) in one artifact, which would make this whole discussion obsolete and be the best solution.
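As a quick sanity check, the timestamp-based selection in `get_latest_model_artifact` can be exercised against a temporary directory of dummy files. This is a sketch only: the function body is inlined so the snippet is self-contained, since the real import path depends on the pipeline layout.

```python
import os
import tempfile

# Inlined copy of get_latest_model_artifact (trimmed extension list) for a
# self-contained demo of the filename-based "latest artifact" selection.
def get_latest_model_artifact(path, run_type):
    common_extensions = ['.pt', '.pth', '.h5', '.pkl', '.onnx']
    model_files = [f for f in os.listdir(path)
                   if f.startswith(f"{run_type}_model_")
                   and any(f.endswith(ext) for ext in common_extensions)]
    if not model_files:
        raise FileNotFoundError(f"No model artifacts found for run type '{run_type}' in path '{path}'")
    # Zero-padded %Y%m%d_%H%M%S timestamps sort lexicographically, newest first
    model_files.sort(reverse=True)
    return os.path.join(path, model_files[0])

with tempfile.TemporaryDirectory() as tmp:
    for name in ('calibration_model_20240101_000000.pt',
                 'calibration_model_20240601_120000.pt',
                 'testing_model_20240701_000000.pt'):
        open(os.path.join(tmp, name), 'w').close()
    latest = get_latest_model_artifact(tmp, 'calibration')
    # Only calibration artifacts are considered; the 20240601 file wins
    print(os.path.basename(latest))
```

Note that the string sort only works because the timestamps are zero-padded and ordered year-month-day; a different timestamp format would need a real datetime parse.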

76 changes: 76 additions & 0 deletions common_utils/cli_parser_utils.py
@@ -0,0 +1,76 @@
import sys
import argparse


def parse_args():
    """
    CLI parser for model-specific main.py scripts.
    """
    parser = argparse.ArgumentParser(description='Run model pipeline with specified run type.')

    parser.add_argument('-r', '--run_type',
                        choices=['calibration', 'testing', 'forecasting'],
                        type=str,
                        default='calibration',
                        help='Choose the run type for the model: calibration, testing, or forecasting. Default is calibration. '
                             'Note: If --sweep is flagged, --run_type must be calibration.')

    parser.add_argument('-s', '--sweep',
                        action='store_true',
                        help='Set flag to run the model pipeline as part of a sweep. No explicit flag means no sweep. '
                             'Note: If --sweep is flagged, --run_type must be calibration, and both training and evaluation are automatically implied.')

    parser.add_argument('-t', '--train',
                        action='store_true',
                        help='Flag to indicate if a new model should be trained. '
                             'Note: If --sweep is flagged, --train will also automatically be flagged.')

    parser.add_argument('-e', '--evaluate',
                        action='store_true',
                        help='Flag to indicate if the model should be evaluated. '
                             'Note: If --sweep is specified, --evaluate will also automatically be flagged. '
                             'Cannot be used with --run_type forecasting.')

    parser.add_argument('-a', '--artifact_name',
                        type=str,
                        help='Specify the name of the model artifact to be used for evaluation. '
                             'The file extension will be added in main to fit the specific model algorithm. '
                             'The artifact name should be in the format: <run_type>_model_<timestamp>.pt, '
                             'where <run_type> is calibration, testing, or forecasting, and <timestamp> is in the format YMD_HMS. '
                             'If not provided, the latest artifact will be used by default.')

    return parser.parse_args()


def validate_arguments(args):
    if args.sweep:
        if args.run_type != 'calibration':
            print("Error: Sweep runs must have --run_type set to 'calibration'. Exiting.")
            print("To fix: Use --run_type calibration when --sweep is flagged.")
            sys.exit(1)

    if args.run_type in ['testing', 'forecasting'] and args.sweep:
        print("Error: Sweep cannot be performed with testing or forecasting run types. Exiting.")
        print("To fix: Remove --sweep flag or set --run_type to 'calibration'.")
        sys.exit(1)

    if args.run_type == 'forecasting' and args.evaluate:
        print("Error: Forecasting runs cannot evaluate. Exiting.")
        print("To fix: Remove --evaluate flag when --run_type is 'forecasting'.")
        sys.exit(1)

    if args.run_type in ['calibration', 'testing'] and not args.train and not args.evaluate and not args.sweep:
        print(f"Error: Run type is {args.run_type} but neither --train, --evaluate, nor --sweep flag is set. Nothing to do... Exiting.")
        print("To fix: Add --train and/or --evaluate flag. Or use --sweep to run both training and evaluation in a W&B sweep loop.")
        sys.exit(1)


# Notes on stepshifted models:
# There will be some thinking to do here regarding how we store, denote (naming convention), and retrieve the model artifacts from stepshifted models.
# It is not a big issue, but it is something to consider so we don't do anything heedless.
# A possible format could be: <run_type>_model_s<step>_<timestamp>.pt, e.g. calibration_model_s00_20210831_123456.pt, calibration_model_s01_20210831_123456.pt, etc.
# The rest of the code would then be made to handle this naming convention without any issues. Could be a simple fix.
# Alternatively, we could store the model artifacts in a subfolder per stepshifted model. That would make the artifacts easier to handle, but harder to retrieve the latest one for a given run type.
# Lastly, the solution Xiaolong is working on might allow us to store multiple models (steps) in one artifact, which would make this whole discussion obsolete and be the best solution.
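The guard logic can be illustrated by driving `validate_arguments` with `argparse.Namespace` objects instead of real CLI input. The sketch below inlines a trimmed copy of the checks so it runs standalone; the messages and the full check list live in the module above.

```python
import sys
from argparse import Namespace

# Trimmed copy of validate_arguments' guard logic, for demonstration only.
def validate_arguments(args):
    if args.sweep and args.run_type != 'calibration':
        sys.exit(1)  # sweeps are only meaningful for calibration runs
    if args.run_type == 'forecasting' and args.evaluate:
        sys.exit(1)  # forecasting runs have no held-out data to evaluate on
    if args.run_type in ('calibration', 'testing') and not (args.train or args.evaluate or args.sweep):
        sys.exit(1)  # nothing to do

# A sweep with the default calibration run type passes silently...
validate_arguments(Namespace(run_type='calibration', sweep=True, train=False, evaluate=False))

# ...while a sweep combined with a testing run is rejected via SystemExit.
try:
    validate_arguments(Namespace(run_type='testing', sweep=True, train=False, evaluate=False))
    print('accepted')
except SystemExit:
    print('rejected')
```

In the real pipeline the equivalent invocations would be `main.py -s` (valid) versus `main.py -r testing -s` (rejected with an explanatory message).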


4 changes: 2 additions & 2 deletions models/purple_alien/configs/config_hyperparameters.py
@@ -8,7 +8,7 @@ def get_hp_config():
'scheduler' : 'WarmupDecay', # 'CosineAnnealingLR' 'OneCycleLR'
'total_hidden_channels' : 32,
'min_events' : 5,
'samples': 600, # 10 just for debug
'samples': 600, # 600 for actual training, 10 for debug
'batch_size': 3,
'dropout_rate' : 0.125,
'learning_rate' : 0.001,
@@ -24,7 +24,7 @@ def get_hp_config():
'loss_reg': 'b',
'loss_reg_a' : 258,
'loss_reg_c' : 0.001, # 0.05 works...
'test_samples': 128,
'test_samples': 128, # 128 for actual testing, 10 for debug
'np_seed' : 4,
'torch_seed' : 4,
'window_dim' : 32,
5 changes: 3 additions & 2 deletions models/purple_alien/configs/config_sweep.py
@@ -17,7 +17,7 @@ def get_swep_config():
'scheduler' : {'value': 'WarmupDecay'}, #CosineAnnealingLR004 'CosineAnnealingLR' 'OneCycleLR'
'total_hidden_channels': {'value': 32}, # you likely need 32, it seems from qualitative results
'min_events': {'value': 5},
'samples': {'value': 600}, # should be a function of batches becaus batch 3 and sample 1000 = 3000....
'samples': {'value': 600}, # 600 for a full run, 10 for debug. Should be a function of batch size, because batch 3 and sample 1000 = 3000...
'batch_size': {'value': 3}, # just speed running here..
"dropout_rate" : {'value' : 0.125},
'learning_rate': {'value' : 0.001}, #0.001 default, but 0.005 might be better
@@ -33,7 +33,7 @@ def get_swep_config():
'loss_reg' : { 'value' : 'b'},
'loss_reg_a' : { 'value' : 256},
'loss_reg_c' : { 'value' : 0.001},
'test_samples': { 'value' : 128},
'test_samples': { 'value' : 128}, # 128 for actual testing, 10 for debug
'np_seed' : {'values' : [4,8]},
'torch_seed' : {'values' : [4,8]},
'window_dim' : {'value' : 32},
Expand All @@ -43,6 +43,7 @@ def get_swep_config():
'first_feature_idx' : {'value' : 5},
'norm_target' : {'value' : False},
'freeze_h' : {'value' : "hl"},
'time_steps' : {'value' : 36}
}

sweep_config['parameters'] = parameters_dict
57 changes: 57 additions & 0 deletions models/purple_alien/main.py
@@ -0,0 +1,57 @@
import time

import wandb

import sys
from pathlib import Path

PATH = Path(__file__)
sys.path.insert(0, str(Path(*[i for i in PATH.parts[:PATH.parts.index("views_pipeline")+1]]) / "common_utils")) # PATH_COMMON_UTILS
from set_path import setup_project_paths, setup_artifacts_paths
setup_project_paths(PATH)

from cli_parser_utils import parse_args, validate_arguments
#from artifacts_utils import get_latest_model_artifact

from model_run_handlers import handle_sweep_run, handle_single_run

#from mode_run_manager import model_run_manager

if __name__ == "__main__":

    # Parse the CLI arguments (new argparse solution)
    args = parse_args()

    # Validate the parsed arguments to ensure they conform to the required logic and combinations
    validate_arguments(args)

    # wandb login
    wandb.login()

    start_t = time.time()

    # TODO: Test if and why a model_metadata_dict.py was saved in the artifacts folder.

    # Check for a sweep first, because a sweep overrides the train and evaluate flags
    if args.sweep:
        handle_sweep_run(args)
    else:
        handle_single_run(args)

    end_t = time.time()
    minutes = (end_t - start_t)/60
    print(f'Done. Runtime: {minutes:.3f} minutes')

# Notes on stepshifted models:
# There will be some thinking to do here regarding how we store, denote (naming convention), and retrieve the model artifacts from stepshifted models.
# It is not a big issue, but it is something to consider so we don't do anything heedless.
# A possible format could be: <run_type>_model_s<step>_<timestamp>.pt, e.g. calibration_model_s00_20210831_123456.pt, calibration_model_s01_20210831_123456.pt, etc.
# The rest of the code would then be made to handle this naming convention without any issues. Could be a simple fix.
# Alternatively, we could store the model artifacts in a subfolder per stepshifted model. That would make the artifacts easier to handle, but harder to retrieve the latest one for a given run type.
# Lastly, the solution Xiaolong is working on might allow us to store multiple models (steps) in one artifact, which would make this whole discussion obsolete and be the best solution.
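The `<run_type>_model_s<step>_<timestamp>.pt` convention proposed in the note above can be prototyped in a few lines. This is a hypothetical sketch only (the pipeline does not implement stepshifted naming yet): it parses such filenames and keeps the newest timestamp per step.

```python
import re

# Hypothetical naming convention from the note: <run_type>_model_s<step>_<timestamp>.pt
PATTERN = re.compile(r'^(?P<run_type>\w+?)_model_s(?P<step>\d+)_(?P<ts>\d{8}_\d{6})\.pt$')

def latest_per_step(filenames, run_type):
    """Return {step: filename}, keeping only the newest timestamp per step."""
    latest = {}
    for name in filenames:
        m = PATTERN.match(name)
        if m is None or m.group('run_type') != run_type:
            continue
        step = int(m.group('step'))
        # Zero-padded %Y%m%d_%H%M%S timestamps compare correctly as strings
        if step not in latest or m.group('ts') > PATTERN.match(latest[step]).group('ts'):
            latest[step] = name
    return latest

files = ['calibration_model_s00_20210831_123456.pt',
         'calibration_model_s00_20210901_000000.pt',
         'calibration_model_s01_20210831_123456.pt',
         'testing_model_s00_20210831_123456.pt']
print(latest_per_step(files, 'calibration'))
```

Since step and timestamp are both recoverable from the name, the flat-folder option in the note would not actually block per-step retrieval, which weakens the case for per-step subfolders.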


Empty file.
149 changes: 149 additions & 0 deletions models/purple_alien/src/forecasting/generate_forecast.py
@@ -0,0 +1,149 @@
import os

import numpy as np
import pickle
import time
import functools

import torch
import torch.nn as nn
import torch.nn.functional as F

import wandb

import sys
from pathlib import Path

PATH = Path(__file__)
sys.path.insert(0, str(Path(*[i for i in PATH.parts[:PATH.parts.index("views_pipeline")+1]]) / "common_utils")) # PATH_COMMON_UTILS
from set_path import setup_project_paths, setup_data_paths
setup_project_paths(PATH)


from utils import choose_model, choose_loss, choose_sheduler, get_train_tensors, get_full_tensor, apply_dropout, execute_freeze_h_option, get_log_dict, train_log, init_weights, get_data
from utils_prediction import predict, sample_posterior
from config_hyperparameters import get_hp_config


def generate_forecast(model, views_vol, config, device, PATH):
    """
    Generate a forecast using the provided model and views_vol,
    and save the generated posterior distributions and out-of-sample volumes.

    Args:
        model: The trained model used for forecasting.
        views_vol: The input data tensor for forecasting.
        config: Configuration object containing settings.
        device: The device (CPU or GPU) to run the predictions on.
        PATH: The base path where generated data will be saved.

    Returns:
        None
    """
    # Put the model in evaluation mode, then re-enable dropout for posterior sampling
    model.eval()
    model.apply(apply_dropout)

    # Generate posterior samples and out-of-sample volumes (the discarded value is the full tensor)
    posterior_list, posterior_list_class, out_of_sample_vol, _ = sample_posterior(model, views_vol, config, device)

    # I suspect you'll need the out_of_sample_vol to create the df (it has pg and ocean info).
    # However, I see in the test_prediction_store notebook in the "conflictnet" repo that I load the
    # "calibration_vol" from the pickle file... Investigate...

    # Set up paths for storing generated data
    _, _, PATH_GENERATED = setup_data_paths(PATH)

    # Create the directory if it does not exist
    os.makedirs(PATH_GENERATED, exist_ok=True)

    # Print the path for debugging
    print(f'PATH to generated data: {PATH_GENERATED}')

    # Create a dictionary to store posterior data
    posterior_dict = {
        'posterior_list': posterior_list,
        'posterior_list_class': posterior_list_class,
        'out_of_sample_vol': out_of_sample_vol  # likely needed for df creation before predstore; see notebook test_to_prediction_store.ipynb
    }

    # Save the posterior data to a pickle file
    filename = f'posterior_dict_{config.time_steps}_{config.run_type}_{config.model_time_stamp}.pkl'
    with open(os.path.join(PATH_GENERATED, filename), 'wb') as file:
        pickle.dump(posterior_dict, file)

    print('Posterior dict and test vol pickled and dumped!')
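For downstream analysis, the pickled dictionary can be read back using the same naming scheme. A minimal round-trip sketch, where the config-derived filename fields (`time_steps`, `run_type`, `model_time_stamp`) are stand-in values and the payload is dummy data:

```python
import os
import pickle
import tempfile

# Stand-in values for the config fields that generate_forecast embeds in the filename
time_steps, run_type, model_time_stamp = 36, 'calibration', '20240612_120000'
filename = f'posterior_dict_{time_steps}_{run_type}_{model_time_stamp}.pkl'

# Dummy payload with the same keys generate_forecast pickles
posterior_dict = {'posterior_list': [], 'posterior_list_class': [], 'out_of_sample_vol': None}

with tempfile.TemporaryDirectory() as path_generated:
    # Write, mirroring the save step in generate_forecast
    with open(os.path.join(path_generated, filename), 'wb') as f:
        pickle.dump(posterior_dict, f)

    # Read back, as a downstream notebook or script would
    with open(os.path.join(path_generated, filename), 'rb') as f:
        loaded = pickle.load(f)
    print(sorted(loaded.keys()))
```

Note that because the filename carries the run type and timestamp, repeated runs with the same timestamp silently overwrite earlier pickles, which matches the overwrite concern raised in the commit history.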


def handle_forecasting(config, device, views_vol, PATH_ARTIFACTS, artifact_name=None):
    # generate_forecast above might work, but it needs to be tested thoroughly first
    raise NotImplementedError('Forecasting not implemented yet')




# Ensure utils_prediction.py and any other dependencies are imported correctly
# from utils_prediction import sample_posterior, apply_dropout
# from utils_data import setup_data_paths


















## you always load an artifact for forecasting - like with the evaluate you take the latest artifact unless you specify another one
## But that is done in main.py - just passed to here as an argument
#
## Then the load the offical forescasting partition
## And the first steps must be usign the function from utils_prediction.py to get the predictions and the posetrior
#
## model, views_vol, config, device should be passed as arguments to this function
#
#def generate_forecast(model, views_vol, config, device):
#
#
# # THIS IS ALL PURE MESS RIGHT NOW!!!
#
#
# posterior_list, posterior_list_class, out_of_sample_vol, full_tensor = sample_posterior(model, views_vol, config, device)
#
## then to prediction store I guess? Or perhaps just the generated data for now...
#
# _ , _, PATH_GENERATED = setup_data_paths(PATH)
#
# # if the path does not exist, create it
#
# if not os.path.exists(PATH_GENERATED):
#
# os.makedirs(PATH_GENERATED)
#
# # print for debugging
# print(f'PATH to generated data: {PATH_GENERATED}')
#
# # pickle the posterior dict, metric dict, and test vol
#
# # Should be time_steps and run_type in the name....
# posterior_dict = {'posterior_list' : posterior_list, 'posterior_list_class': posterior_list_class, 'out_of_sample_vol' : out_of_sample_vol}
#
#
# with open(f'{PATH_GENERATED}/posterior_dict_{config.time_steps}_{config.run_type}_{config.model_time_stamp}.pkl', 'wb') as file:
#
# pickle.dump(posterior_dict, file)
#
#
# print('Posterior dict, metric dict and test vol pickled and dumped!')
#
#