Black_lodge model/Production model template #37
base: production
Conversation
Copied from src/utils in @xiaolong0728's blank_space. This script might also be a good place to do input data checks.
Getting an error though, probably an issue with views6: `TypeError: __init__() got an unexpected keyword argument 'from_loa'`
Logic taken from viewsforecasting notebook
Adapted from purple_alien
Not yet tested
"steps": [*range(1, 36 + 1, 1)],
"deployment_status": "production",
"creator": "Sara",
"preprocessing": "float_it",  # new
where is the float_it function?
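`float_it` is referenced in the config but not defined anywhere in this PR. A minimal sketch of what such a preprocessing hook might look like, assuming it simply casts the numeric feature columns to float (the name, signature, and behavior are all guesses, not the author's actual implementation):

```python
import pandas as pd


def float_it(df: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical preprocessing hook: cast all numeric columns to float64.

    Non-numeric columns (e.g. identifiers) are left untouched.
    """
    numeric_cols = df.select_dtypes(include="number").columns
    df[numeric_cols] = df[numeric_cols].astype("float64")
    return df
```

If the real `float_it` lives in another branch or repo, it should be imported or copied in before this config can run.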
month_first = partitioner_dict['train'][0]

if partition == 'forecasting':
    month_last = partitioner_dict['train'][1] + 1  # no need to get the predict months as these are empty
To make the current stepshifter work, we still need the predict months even if they are empty (otherwise the predictions might have some problems)
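The fix the reviewer is asking for could look like the sketch below: extend `month_last` through the predict window instead of stopping at the end of train. The `partitioner_dict` layout (`'train'` and `'predict'` as `(first, last)` month tuples) is assumed from the quoted diff, not confirmed by the source:

```python
def get_month_range(partitioner_dict, partition):
    """Return (month_first, month_last_exclusive) for a partition.

    Assumed layout: {'train': (t0, t1), 'predict': (p0, p1)}.
    Per the review comment, the forecasting partition must include the
    predict months even if those rows are empty, so the stepshifter
    sees the full horizon.
    """
    month_first = partitioner_dict['train'][0]
    if partition == 'forecasting':
        # include the (possibly empty) predict months
        month_last = partitioner_dict['predict'][1] + 1
    else:
        month_last = partitioner_dict['train'][1] + 1
    return month_first, month_last
```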
from config_hyperparameters import get_hp_config


def train(model_config, para_config):
You can refer to my new training function (I haven't merged it into main yet, so it's on the more_model branch; check orange_pasta). In short, we don't train three models together; instead we train a specific one based on the arguments.
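A rough sketch of the argument-based dispatch the reviewer describes: the caller picks one run type and only that model is trained. The `run_type` values, config shapes, and returned summary are illustrative assumptions, not the actual `more_model` implementation:

```python
def train(model_config, para_config, run_type):
    """Train only the model selected by run_type (hypothetical sketch).

    Instead of fitting calibration, testing, and forecasting models in
    one call, the caller chooses one via run_type.
    """
    valid = ("calibration", "testing", "forecasting")
    if run_type not in valid:
        raise ValueError(f"run_type must be one of {valid}, got {run_type!r}")
    hp = para_config[run_type]  # assumed per-run-type hyperparameter layout
    # ... fit the actual model here; return a summary for illustration
    return {"model": model_config["name"], "run_type": run_type, "hp": hp}
```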
[Not ready to be merged]
Let's use this PR to clean up the CM model code for replicating the production ensemble
To-dos are in the model readme