Varying calibration / prediction results #22
This variation is actually a feature of GluonTS. But there is a solution: set the MXNet seed (discussed here and below).

**Why is variation a feature?** Because GluonTS bills itself as "Probabilistic Forecasting Software". This varies by algorithm, but for DeepAR you get a probabilistic forecast.

**What's happening under the hood that makes it probabilistic?** We are actually taking the mean prediction across the many sample paths that DeepAR generates. We only use the mean inside of Modeltime, but if you forecast with GluonTS DeepAR in Python you can actually get quantiles around the forecast. We don't use that feature since it doesn't fit into the Modeltime framework (currently we use quantiles around a calibration forecast instead). That's for another day... for now, just recognize that your forecast variance is a result of how DeepAR forecasts, not an error.

**Solution**

GluonTS internally uses a seed from the MXNet library. We can set this via reticulate.
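A minimal sketch of setting the MXNet seed from R via reticulate, assuming the `mxnet` Python module is importable from the same Python environment that modeltime.gluonts uses (the seed value here is just an example):

```r
library(reticulate)

# Import the Python 'mxnet' module that GluonTS uses under the hood.
# This must target the same Python environment that modeltime.gluonts uses.
mx <- import("mxnet")

# Fix MXNet's random seed so the sample paths DeepAR draws (and therefore
# the mean forecast) are reproducible; 123 is an arbitrary example seed.
mx$random$seed(123L)
```

Setting the seed immediately before each calibration / forecast call should keep repeated runs aligned.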
Hey Matt, I was surprised to see that even the forecast is probabilistic. I tried the MXNet seed solution, but it is still giving different forecasts.
OK, that's odd. I would have thought the MXNet random seed would solve it. I'll close for now, but can reopen if something is amiss.
I think not being able to get reproducible results will pose a significant obstacle to using this package.
Every calibration / prediction for the DeepAR model gives a slightly different result. Below, `forecast1` and `forecast2` will never be exactly the same. I was aware that deep learning model estimates cannot be exactly replicated, but I did not know the forecast would vary each time too, even when using the same parameters and estimates. Do you know the reason?
A disclaimer about that in the documentation might be useful for non-experts in deep learning like myself.
Thanks again
I saw that you just launched modeltime.h2o, very cool stuff. I will explore that now.