-
Yes, the …
-
Thank you for the confirmation. I was trying to use the training part (the skipped one) of the zero-shot datasets to fine-tune the models, and then check the performance on the test portion of the same dataset after fine-tuning on its training part. ds = datasets.load_dataset( …
-
I see that while training on the in-domain Monash datasets you set the context length to 512 (for all datasets) and the prediction length to 64, which is greater than the prediction lengths of all datasets considered for evaluation. As you mentioned in your previous response, I can keep the context length at 512, or smaller depending on the length of the series in the specific dataset picked for fine-tuning. My question is: when I compare the results (MASE) of the fine-tuned tiny/mini model with its zero-shot performance on that same dataset, will this context length not affect the results? For example, the NN5 weekly dataset has a series length of only 113, so we need to set the context length to something below 512, say 48 or 36, and the prediction length to 8 for this dataset. I would then evaluate the fine-tuned model on this dataset with offset and prediction length set to -8 and 8 respectively, and compare the MASE to the zero-shot performance on this dataset, which according to the table in the paper is 0.927. Is this comparison fair? Will the shorter context length I set while fine-tuning have any impact?
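As a sanity check on the comparison itself: MASE is typically defined as the forecast MAE scaled by the in-sample MAE of a seasonal-naive forecast, so the metric does not directly depend on the context length used at inference time. A minimal sketch (the repo's evaluate.py is the authoritative implementation; the seasonal period m here is an assumption):

```python
def mase(y_true, y_pred, y_train, m=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the in-sample
    MAE of the seasonal-naive forecast with period m. An illustrative
    re-implementation, not the code used in the paper's evaluation."""
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    # Scale: mean absolute error of predicting y[i - m] for y[i] on the
    # training portion of the series.
    scale = sum(abs(y_train[i] - y_train[i - m])
                for i in range(m, len(y_train))) / (len(y_train) - m)
    return mae / scale
```

For example, with y_train = [1, 2, 3, 4], the naive scale is 1, so the MASE equals the forecast MAE.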
-
I don't have any specific dataset for fine-tuning, so I am using the zero-shot datasets from your paper where the series length is at least 512, to keep the context length at 512 as set in the pre-trained model. I am specifically using traffic, weather, fred-md, and australian electricity, to name a few.
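One way to pre-screen candidate datasets is to require that each series can hold a full context window plus the held-out forecast horizon. A trivial helper (a hypothetical function name, not from the repo):

```python
def long_enough(series_lengths, context_length=512, prediction_length=8):
    """Return, per series, whether it can supply a full training context
    of `context_length` points and still leave `prediction_length` points
    held out for backtesting. Assumed screening logic, not repo code."""
    return [n >= context_length + prediction_length for n in series_lengths]
```

With the defaults above, NN5 weekly (length 113) fails the check, which is why its context length has to be reduced.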
-
Regarding in-domain.yaml, which says "# Backtest configs for the 15 "in-domain" datasets. The training portion of these datasets was part of the training corpus for Chronos models.":
I just wanted to confirm that the training part you used as the training dataset is the portion returned (and discarded as _) by _, test_template = split(gts_dataset, offset=offset) in the load_and_split_dataset() function used in evaluate.py.
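For anyone reading along, the offset-based split can be sketched like this (an illustrative re-implementation, assuming the gluonts convention that a negative offset counts from the end of each series; the real logic lives in gluonts.dataset.split.split):

```python
def split_series(values, offset):
    """Split one series at `offset`: everything before the cut point is
    the training portion, the remainder is held out for backtesting.
    A negative offset counts from the end, so offset=-8 with
    prediction_length=8 holds out exactly the last 8 points.
    Illustrative only, not the library implementation."""
    cut = offset if offset >= 0 else len(values) + offset
    return values[:cut], values[cut:]
```

So with offset=-8, the first return value is the training part referred to above, and the last 8 observations form the test window.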