We do our best to avoid the introduction of breaking changes, but cannot always guarantee backwards compatibility. Changes that may break code which uses a previous release of Darts are marked with a "🔴".
Improved
- Improvements to `ForecastingModel`: #2269 by Felix Divo.
  - Renamed the private `_is_probabilistic` property to a public `supports_probabilistic_prediction`.
- Improvements to `DataTransformer`: #2267 by Alicja Krzeminska-Sciga.
  - `InvertibleDataTransformer` now supports parallelized inverse transformation for `series` being a list of lists of `TimeSeries` (`Sequence[Sequence[TimeSeries]]`). This `series` type represents, for example, the output from `historical_forecasts()` when using multiple series.
Fixed
- Fixed a type hint warning "Unexpected argument" when calling `historical_forecasts()` caused by the `_with_sanity_checks` decorator. The type hinting is now properly configured to expect any input arguments and to return the output type of the method for which the sanity checks are performed. #2286 by Dennis Bader.
Dependencies
0.28.0 (2024-03-05)
Improved
- Improvements to `GlobalForecastingModel`:
  - 🚀🚀🚀 All global models (regression and torch models) now support shifted predictions with model creation parameter `output_chunk_shift`. This will shift the output chunk for training and prediction by `output_chunk_shift` steps into the future. #2176 by Dennis Bader.
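  A minimal sketch of the shifted output chunk (the model class and chunk sizes are illustrative choices, not part of the release notes):

  ```python
  from darts.datasets import AirPassengersDataset
  from darts.models import LinearRegressionModel

  series = AirPassengersDataset().load()

  # the 6 predicted steps start 12 steps after the end of the input window
  model = LinearRegressionModel(
      lags=24,
      output_chunk_length=6,
      output_chunk_shift=12,
  )
  model.fit(series)
  pred = model.predict(n=6)  # forecast index starts 12 steps after the series end
  ```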
- Improvements to `TimeSeries`: #2196 by Dennis Bader.
  - 🚀🚀🚀 Significant performance boosts for several `TimeSeries` methods, resulting in increased efficiency across the entire Darts library. Up to 2x faster creation times for series indexed with "regular" frequencies (e.g. daily, hourly, ...), and >100x for series indexed with "special" frequencies (e.g. "W-MON", ...). Affects:
    - All `TimeSeries` creation methods
    - Additional boosts for slicing with integers and Timestamps
    - Additional boosts for `from_group_dataframe()` by performing some of the heavy-duty computations on the entire DataFrame, rather than iteratively on the group level.
  - Added option to exclude some `group_cols` from being added as static covariates when using `TimeSeries.from_group_dataframe()` with parameter `drop_group_cols`.
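  A minimal sketch of `drop_group_cols` (the column names are made up for illustration):

  ```python
  import pandas as pd
  from darts import TimeSeries

  df = pd.DataFrame({
      "time": list(pd.date_range("2024-01-01", periods=5)) * 2,
      "store": ["A"] * 5 + ["B"] * 5,
      "region": ["north"] * 5 + ["south"] * 5,
      "sales": range(10),
  })

  # one series per ("store", "region") group; "store" is excluded from the static covariates
  series_list = TimeSeries.from_group_dataframe(
      df,
      time_col="time",
      group_cols=["store", "region"],
      value_cols="sales",
      drop_group_cols=["store"],
  )
  ```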
- 🚀 New global baseline models that use fixed input and output chunks for prediction. This offers support for univariate, multivariate, single and multiple target series prediction, one-shot or autoregressive/moving forecasts, optimized historical forecasts, batch prediction, prediction from datasets, and more. #2261 by Dennis Bader.
  - `GlobalNaiveAggregate`: Computes an aggregate (using a custom or built-in `torch` function) for each target component over the last `input_chunk_length` points, and repeats the values `output_chunk_length` times for prediction. Depending on the parameters, this model can be equivalent to `NaiveMean` and `NaiveMovingAverage`.
  - `GlobalNaiveDrift`: Takes the slope of each target component over the last `input_chunk_length` points and projects the trend over the next `output_chunk_length` points for prediction. Depending on the parameters, this model can be equivalent to `NaiveDrift`.
  - `GlobalNaiveSeasonal`: Takes the target component value at the `input_chunk_length`-th point before the end of the target `series`, and repeats the values `output_chunk_length` times for prediction. Depending on the parameters, this model can be equivalent to `NaiveSeasonal`.
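  A minimal sketch of the new baselines (the chunk sizes are illustrative; with these settings the models only roughly mirror their local counterparts):

  ```python
  from darts.datasets import AirPassengersDataset
  from darts.models import GlobalNaiveAggregate, GlobalNaiveDrift, GlobalNaiveSeasonal

  series = AirPassengersDataset().load()

  # all three share the same fixed-chunk interface
  for model_cls in (GlobalNaiveAggregate, GlobalNaiveDrift, GlobalNaiveSeasonal):
      model = model_cls(input_chunk_length=12, output_chunk_length=6)
      model.fit(series)
      pred = model.predict(n=6)
  ```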
- Improvements to `TorchForecastingModel`:
  - Added support for additional lr scheduler configuration parameters for more control ("interval", "frequency", "monitor", "strict", "name"). #2218 by Dennis Bader.
- Improvements to `RegressionModel`: #2246 by Antoine Madrona.
  - Added a `get_estimator()` method to access the underlying estimator
  - Added attribute `lagged_label_names` to identify the forecasted step and component of each estimator
  - Updated the docstring of `get_multioutput_estimator()`
- Other improvements:
  - Added argument `keep_names` to `WindowTransformer` and `window_transform` to indicate whether the original component names should be kept. #2207 by Antoine Madrona.
  - Added new helper function `darts.utils.utils.n_steps_between()` to efficiently compute the number of time steps (periods) between two points with a given frequency. Improves efficiency for regression model tabularization by avoiding `pd.date_range()`. #2176 by Dennis Bader. (See the sketch after this list.)
  - 🔴 Changed the default `start` value in `ForecastingModel.gridsearch()` from `0.5` to `None`, to make it consistent with `historical_forecasts` and other methods. #2243 by Thomas Kientz.
  - Improvements to `ARIMA` documentation: Specified possible `p`, `d`, `P`, `D`, `trend` advanced options that are available in statsmodels. More explanations on the behaviour of the parameters were added. #2142 by MarcBresson.
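A minimal sketch of the `n_steps_between()` helper mentioned above (the argument names and order follow the assumption `end`, `start`, `freq`):

```python
import pandas as pd

from darts.utils.utils import n_steps_between

# number of weekly ("W-MON") steps between two Mondays
n = n_steps_between(
    end=pd.Timestamp("2024-03-04"),
    start=pd.Timestamp("2024-01-01"),
    freq="W-MON",
)
print(n)
```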
Fixed
- Fixed a bug when using `RegressionModel` with `lags=None`, some `lags_*covariates`, and the covariates starting after or at the same time as the first predictable time step; the lags were not extracted from the correct indices. #2176 by Dennis Bader.
- Fixed a bug when calling `window_transform` on a `TimeSeries` with a hierarchy. The hierarchy is now only preserved for single transformations applied to all components, or removed otherwise. #2207 by Antoine Madrona.
- Fixed a bug in probabilistic `LinearRegressionModel.fit()`, where the `model` attribute was not pointing to all underlying estimators. #2205 by Antoine Madrona.
- Raise an error in `RegressionEnsembleModel` when the `regression_model` was created with `multi_models=False` (not supported). #2205 by Antoine Madrona.
- Fixed a bug in `coefficient_of_variation()` with `intersect=True`, where the coefficient was not computed on the intersection. #2202 by Antoine Madrona.
- Fixed a bug in `gridsearch()` with `use_fitted_values=True`, where the model was not properly instantiated for sanity checks. #2222 by Antoine Madrona.
- Fixed a bug in `TimeSeries.append/prepend_values()`, where the component names and the hierarchy were dropped. #2237 by Antoine Madrona.
- Fixed a bug in `get_multioutput_estimator()`, where the index of the estimator was incorrectly calculated. #2246 by Antoine Madrona.
- 🔴 Fixed a bug in `datetime_attribute_timeseries()`, where 1-indexed attributes were not properly handled. Also, 0-indexing is now enforced for all the generated encodings. #2242 by Antoine Madrona.
Dependencies
- Removed upper version cap (<=v2.1.2) for PyTorch Lightning. #2251 by Dennis Bader.
- Updated pre-commit hooks to the latest version using `pre-commit autoupdate`. Changed the `pyupgrade` pre-commit hook argument to `--py38-plus`. #2228 by MarcBresson.
- Bumped dev dependencies to newest versions: #2248 by Dennis Bader.
  - black[jupyter]: from 22.3.0 to 24.1.1
  - flake8: from 4.0.1 to 7.0.0
  - isort: from 5.11.5 to 5.13.2
  - pyupgrade: from v2.31.0 to v3.15.0
0.27.2 (2024-01-21)
Improved
- Added `darts.utils.statistics.plot_ccf` that can be used to plot the cross correlation between a time series (e.g. target series) and the lagged values of another time series (e.g. covariates series). #2122 by Dennis Bader. (See the sketch after this list.)
- Improvements to `TimeSeries`: Improved the time series frequency inference when using slices or pandas DatetimeIndex as keys for `__getitem__`. #2152 by DavidKleindienst.
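A minimal sketch of `plot_ccf` mentioned above (assuming the first two arguments are the target series and the other series; the generated series are illustrative):

```python
from darts.utils.statistics import plot_ccf
from darts.utils.timeseries_generation import sine_timeseries

target = sine_timeseries(length=200, value_frequency=0.02)
other = sine_timeseries(length=200, value_frequency=0.02, value_phase=0.25)

# cross correlation between `target` and lagged values of `other`
plot_ccf(target, other)
```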
Fixed
- Fixed a bug when using a `TorchForecastingModel` with `use_reversible_instance_norm=True` and predicting with `n > output_chunk_length`. The input was normalized multiple times. #2160 by FourierMourier.
0.27.1 (2023-12-10)
Improved
- 🔴 Added `CustomRNNModule` and `CustomBlockRNNModule` for defining custom RNN modules that can be used with `RNNModel` and `BlockRNNModel`. The custom `model` must now be a subclass of the custom modules. #2088 by Dennis Bader.
Fixed
- Fixed a bug in historical forecasts, where some `fit/predict_kwargs` were not passed to the underlying model's fit/predict methods. #2103 by Dennis Bader.
- Fixed an import error when trying to create a `TorchForecastingModel` with PyTorch Lightning v<2.0.0. #2087 by Eschibli.
- Fixed a bug when creating a `RNNModel` with a custom `model`. #2088 by Dennis Bader.
- Added a folder `docs/generated_api` to define custom .rst files for generating the documentation. #2115 by Dennis Bader.
0.27.0 (2023-11-18)
Improved
- Improvements to `TorchForecastingModel`:
  - 🚀🚀 We optimized `historical_forecasts()` for pre-trained `TorchForecastingModel`, running up to 20 times faster than before (and even more when tuning the batch size)! #2013 by Dennis Bader.
  - Added callback `darts.utils.callbacks.TFMProgressBar` to customize at which model stages to display the progress bar. #2020 by Dennis Bader.
  - All `InferenceDataset`s now support strided forecasts with parameters `stride`, `bounds`. These datasets can be used with `TorchForecastingModel.predict_from_dataset()`. #2013 by Dennis Bader.
- Improvements to `RegressionModel`:
  - New example notebook for the `RegressionModels` explaining features such as (component-specific) lags, `output_chunk_length` in relation with `multi_models`, multivariate support, and more. #2039 by Antoine Madrona.
  - `XGBModel` now leverages XGBoost's native Quantile Regression support that was released in version 2.0.0 for improved probabilistic forecasts. #2051 by Dennis Bader.
- Improvements to `LocalForecastingModel`:
  - Added optional keyword arguments dict `kwargs` to `ExponentialSmoothing` that will be passed to the constructor of the underlying `statsmodels.tsa.holtwinters.ExponentialSmoothing` model. #2059 by Antoine Madrona.
- General model improvements:
  - Added new arguments `fit_kwargs` and `predict_kwargs` to `historical_forecasts()`, `backtest()` and `gridsearch()` that will be passed to the model's `fit()` and/or `predict` methods. E.g., you can now set a batch size, static validation series, ... depending on the model support. #2050 by Antoine Madrona. (See the sketch below.)
  - For transparency, we issue a (removable) warning when performing auto-regressive forecasts with past covariates (with `n >= output_chunk_length`) to inform users that future values of past covariates will be accessed. #2049 by Antoine Madrona.
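  A minimal sketch of the new `fit_kwargs` / `predict_kwargs` (the model and the forwarded kwargs are illustrative and must be supported by the underlying model):

  ```python
  from darts.datasets import AirPassengersDataset
  from darts.models import NBEATSModel

  series = AirPassengersDataset().load()
  model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=1)

  # kwargs are forwarded to the model's fit() / predict() at each retraining / forecast
  hist_fc = model.historical_forecasts(
      series,
      start=0.9,
      forecast_horizon=12,
      fit_kwargs={"max_samples_per_ts": 100},
      predict_kwargs={"batch_size": 32},
  )
  ```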
- Other improvements:
  - Added support for time index time zone conversion with parameter `tz` before generating/computing holidays and datetime attributes. Support was added to all Time Axis Encoders, standalone encoders and forecasting models' `add_encoders`, time series generation utils functions `holidays_timeseries()` and `datetime_attribute_timeseries()`, and `TimeSeries` methods `add_datetime_attribute()` and `add_holidays()`. #2054 by Dennis Bader. (See the sketch below.)
  - Added new data transformer: `MIDAS`, which uses mixed-data sampling to convert `TimeSeries` from high frequency to low frequency (and back). #1820 by Boyd Biersteker, Antoine Madrona and Dennis Bader.
  - Added new dataset `ElectricityConsumptionZurichDataset`: The dataset contains the electricity consumption of households in Zurich, Switzerland from 2015-2022 on different grid levels. We also added weather measurements for Zurich which can be used as covariates for modelling. #2039 by Antoine Madrona and Dennis Bader.
  - Adapted the example notebooks to properly apply data transformers and avoid look-ahead bias. #2020 by Samriddhi Singh.
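  A minimal sketch of the new `tz` parameter (the generated series and attribute are illustrative):

  ```python
  import pandas as pd

  from darts.utils.timeseries_generation import datetime_attribute_timeseries, linear_timeseries

  series = linear_timeseries(start=pd.Timestamp("2020-01-01"), length=48, freq="h")

  hour_naive = datetime_attribute_timeseries(series, attribute="hour")
  # convert the (timezone-naive) time index to CET before extracting the attribute
  hour_cet = datetime_attribute_timeseries(series, attribute="hour", tz="CET")
  ```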
Fixed
- Fixed a bug when calling `historical_forecasts()` with `overlap_end=False` that did not generate the last possible forecast. #2013 by Dennis Bader.
- Fixed a bug when calling optimized `historical_forecasts()` for a `RegressionModel` trained with varying component-specific lags. #2040 by Antoine Madrona.
- Fixed a bug when using encoders with `RegressionModel` and series with a non-evenly spaced frequency (e.g. Month Begin). This raised an error during lagged data creation when trying to divide a pd.Timedelta by the ambiguous frequency. #2034 by Antoine Madrona.
- Fixed a bug when loading the weights of a `TorchForecastingModel` that was trained with a precision other than `float64`. #2046 by Freddie Hsin-Fu Huang.
- Fixed broken links in the `Transfer learning` example notebook with publicly hosted version of the three datasets. #2067 by Antoine Madrona.
- Fixed a bug when using `NLinearModel` on multivariate series with covariates and `normalize=True`. #2072 by Antoine Madrona.
- Fixed a bug when using `DLinearModel` and `NLinearModel` on multivariate series with static covariates shared across components and `use_static_covariates=True`. #2070 by Antoine Madrona.
No changes.
0.26.0 (2023-09-16)
Improved
- Improvements to `RegressionModel`: #1962 by Antoine Madrona.
  - 🚀🚀 All models now support component/column-specific lags for target, past, and future covariates series.
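  A minimal sketch of component-specific lags (the component names are made up, and the dict form is assumed to map component names to their lags):

  ```python
  from darts.models import LinearRegressionModel
  from darts.utils.timeseries_generation import linear_timeseries, sine_timeseries

  # two-component target series with hypothetical component names
  target = linear_timeseries(length=100, column_name="comp_a").stack(
      sine_timeseries(length=100, column_name="comp_b")
  )

  # each target component gets its own set of lags
  model = LinearRegressionModel(lags={"comp_a": 3, "comp_b": [-1, -12]}, output_chunk_length=6)
  model.fit(target)
  pred = model.predict(n=6)
  ```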
- Improvements to `TorchForecastingModel`:
  - 🚀 Added `RINorm` (Reversible Instance Norm) as an input normalization option for all models except `RNNModel`. Activate it with model creation parameter `use_reversible_instance_norm`. #1969 by Dennis Bader.
  - 🔴 Added past covariates feature projection to `TiDEModel` with parameter `temporal_width_past` following the advice of the model architect. Parameter `temporal_width` was renamed to `temporal_width_future`. Additionally, added the option to bypass the feature projection with `temporal_width_past/future=0`. #1993 by Dennis Bader.
- Improvements to `EnsembleModel`: #1815 by Antoine Madrona and Dennis Bader.
  - 🔴 Renamed model constructor argument `models` to `forecasting_models`.
  - 🚀🚀 Added support for pre-trained `GlobalForecastingModel` as `forecasting_models` to avoid re-training when ensembling. This requires all models to be pre-trained global models.
  - 🚀 Added support for generating the `forecasting_model` forecasts (used to train the ensemble model) with historical forecasts rather than direct (auto-regressive) predictions. Enable it with `train_using_historical_forecasts=True` at model creation. (See the sketch below.)
  - Added an example notebook for ensemble models.
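  A minimal sketch of the historical-forecast-based ensemble training (the models and parameters are illustrative):

  ```python
  from darts.datasets import AirPassengersDataset
  from darts.models import LinearRegressionModel, RegressionEnsembleModel

  series = AirPassengersDataset().load()

  model = RegressionEnsembleModel(
      forecasting_models=[LinearRegressionModel(lags=12), LinearRegressionModel(lags=24)],
      regression_train_n_points=24,
      train_using_historical_forecasts=True,
  )
  model.fit(series)
  pred = model.predict(n=12)
  ```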
- Improvements to historical forecasts, backtest and gridsearch: #1866 by Antoine Madrona.
  - Added support for negative `start` values to start historical forecasts relative to the end of the target series.
  - Added a new argument `start_format` that allows to use an integer `start` either as the index position or index value/label for `series` indexed with a `pd.RangeIndex`.
  - Added support for `TimeSeries` with a `RangeIndex` starting at a negative integer.
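  A minimal sketch of the new `start` options (the model and series are illustrative):

  ```python
  from darts.models import LinearRegressionModel
  from darts.utils.timeseries_generation import linear_timeseries

  series = linear_timeseries(start=0, length=120)  # integer (RangeIndex) indexed series
  model = LinearRegressionModel(lags=12)

  # negative start: begin the historical forecasts 36 steps before the end of `series`
  hf_neg = model.historical_forecasts(series, start=-36, forecast_horizon=6)

  # integer start interpreted as a positional index rather than an index value
  hf_pos = model.historical_forecasts(series, start=60, start_format="position", forecast_horizon=6)
  ```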
- Other improvements:
  - Reduced the size of the Darts docker image `unit8/darts:latest`, and included all optional models as well as dev requirements. #1878 by Alex Colpitts.
  - Added short examples in the docstring of all the models, including covariates usage and some model-specific parameters. #1956 by Antoine Madrona.
  - Added method `TimeSeries.cumsum()` to get the cumulative sum of the time series along the time axis. #1988 by Eliot Zubkoff.
Fixed
- Fixed a bug in `TimeSeries.from_dataframe()` when using a pandas.DataFrame with `df.columns.name != None`. #1938 by Antoine Madrona.
- Fixed a bug in `RegressionEnsembleModel.extreme_lags` when the forecasting models have only covariates lags. #1942 by Antoine Madrona.
- Fixed a bug when using `TFTExplainer` with a `TFTModel` running on GPU. #1949 by Dennis Bader.
- Fixed a bug in `TorchForecastingModel.load_weights()` that raised an error when loading the weights from a valid architecture. #1952 by Antoine Madrona.
- Fixed a bug in `NLinearModel` where `normalize=True` and past covariates could not be used at the same time. #1873 by Eliot Zubkoff.
- Raise an error when an `EnsembleModel` containing at least one `LocalForecastingModel` is calling `historical_forecasts` with `retrain=False`. #1815 by Antoine Madrona.
- 🔴 Dropped support for lambda functions in `add_encoders`'s "custom" encoder in favor of named functions to ensure that models can be exported. #1957 by Antoine Madrona.
Improved
- Refactored all tests to use pytest instead of unittest. #1950 by Dennis Bader.
0.25.0 (2023-08-04)
Installation
- 🔴 Removed Prophet, LightGBM, and CatBoost dependencies from PyPI packages (`darts`, `u8darts`, `u8darts[torch]`), and conda-forge packages (`u8darts`, `u8darts-torch`) to avoid installation issues that some users were facing (installation on Apple M1/M2 devices, ...). #1589 by Julien Herzen and Dennis Bader.
  - The models are still supported by installing the required packages as described in our installation guide.
  - The Darts package including all dependencies can still be installed with PyPI package `u8darts[all]` or conda-forge package `u8darts-all`.
  - Added new PyPI flavor `u8darts[notorch]`, and conda-forge flavor `u8darts-notorch` which are equivalent to the old `u8darts` installation (all dependencies except neural networks).
- 🔴 Removed support for Python 3.7 #1864 by Dennis Bader.
Improved
- General model improvements:
  - 🚀🚀 Optimized `historical_forecasts()` for `RegressionModel` when `retrain=False` and `forecast_horizon <= output_chunk_length` by vectorizing the prediction. This can run up to 700 times faster than before! #1885 by Antoine Madrona.
  - Improved efficiency of `historical_forecasts()` and `backtest()` for all models giving significant process time reduction for larger number of predict iterations and series. #1801 by Dennis Bader.
  - 🚀🚀 Added support for direct prediction of the likelihood parameters to probabilistic models using a likelihood (regression and torch models). Set `predict_likelihood_parameters=True` when calling `predict()`. #1811 by Antoine Madrona. (See the sketch below.)
  - 🚀🚀 New forecasting model: `TiDEModel` as proposed in this paper. An MLP based encoder-decoder model that is said to outperform many Transformer-based architectures. #1727 by Alex Colpitts.
  - `Prophet` now supports conditional seasonalities, and properly handles all parameters passed to `Prophet.add_seasonality()` and model creation parameter `add_seasonalities`. #1829 by Idan Shilon.
  - Added method `generate_fit_predict_encodings()` to generate the encodings (from `add_encoders` at model creation) required for training and prediction. #1925 by Dennis Bader.
  - Added support for `PathLike` to the `save()` and `load()` functions of all non-deep learning based models. #1754 by Simon Sudrich.
  - Added model property `ForecastingModel.supports_multivariate` to indicate whether the model supports multivariate forecasting. #1848 by Felix Divo.
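  A minimal sketch of direct likelihood-parameter prediction (the model and quantiles are illustrative):

  ```python
  from darts.datasets import AirPassengersDataset
  from darts.models import LinearRegressionModel

  series = AirPassengersDataset().load()

  model = LinearRegressionModel(lags=24, likelihood="quantile", quantiles=[0.05, 0.5, 0.95])
  model.fit(series)

  # returns the fitted quantiles directly instead of sampled forecasts
  params = model.predict(n=12, predict_likelihood_parameters=True)
  ```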
- Improvements to `EnsembleModel`:
  - Model creation parameter `forecasting_models` now supports a mix of `LocalForecastingModel` and `GlobalForecastingModel` (single `TimeSeries` training/inference only, due to the local models). #1745 by Antoine Madrona.
  - Future and past covariates can now be used even if `forecasting_models` have different covariates support. The covariates passed to `fit()`/`predict()` are used only by models that support it. #1745 by Antoine Madrona.
  - `RegressionEnsembleModel` and `NaiveEnsembleModel` can generate probabilistic forecasts, probabilistic `forecasting_models` can be sampled to train the `regression_model`, updated the documentation (stacking technique). #1692 by Antoine Madrona.
- Improvements to `Explainability` module:
  - 🚀🚀 New forecasting model explainer: `TFTExplainer` for `TFTModel`. You can now access and visualize the trained model's feature importances and self attention. #1392 by Sebastian Cattes and Dennis Bader.
  - Added static covariates support to `ShapExplainer`. #1803 by Anne de Vries and Dennis Bader.
- Improvements to documentation #1904 by Dennis Bader:
  - made model sections in README.md, covariates user guide and forecasting model API Reference more user friendly by adding model links and reorganizing them into model categories.
  - added the Dynamic Time Warping (DTW) module and improved its appearance.
- Other improvements:
  - Improved static covariates column naming when using `StaticCovariatesTransformer` with a `sklearn.preprocessing.OneHotEncoder`. #1863 by Anne de Vries.
  - Added `MSTL` (Season-Trend decomposition using LOESS for multiple seasonalities) as a `method` option for `extract_trend_and_seasonality()`. #1879 by Alex Colpitts.
  - Added `RINorm` (Reversible Instance Norm) as a new input normalization option for `TorchForecastingModel`. So far only `TiDEModel` supports it with model creation parameter `use_reversible_instance_norm`. #1865 by Alex Colpitts.
  - Improvements to `TimeSeries.plot()`: custom axes are now properly supported with parameter `ax`. Axis is now returned for downstream tasks. #1916 by Dennis Bader.
Fixed
- Fixed an issue not considering original component names for `TimeSeries.plot()` when providing a label prefix. #1783 by Simon Sudrich.
- Fixed an issue with the string representation of `ForecastingModel` when using array-likes at model creation. #1749 by Antoine Madrona.
- Fixed an issue with `TorchForecastingModel.load_from_checkpoint()` not properly loading the loss function and metrics. #1759 by Antoine Madrona.
- Fixed a bug when loading the weights of a `TorchForecastingModel` trained with encoders or a Likelihood. #1744 by Antoine Madrona.
- Fixed a bug when using selected `target_components` with `ShapExplainer`. #1803 by Dennis Bader.
- Fixed `TimeSeries.__getitem__()` for series with a RangeIndex with start != 0 and freq != 1. #1868 by Dennis Bader.
- Fixed an issue where `DTWAlignment.plot_alignment()` was not plotting the alignment plot of series with a RangeIndex correctly. #1880 by Ahmet Zamanis and Dennis Bader.
- Fixed an issue when calling `ARIMA.predict()` and `num_samples > 1` (probabilistic forecasting), where the start point of the simulation was not anchored to the end of the target series. #1893 by Dennis Bader.
- Fixed an issue when using `TFTModel.predict()` with `full_attention=True` where the attention mask was not applied properly. #1392 by Dennis Bader.
Improvements
- Refactored the `ForecastingModelExplainer` and `ExplainabilityResult` to simplify implementation of new explainers. #1392 by Dennis Bader.
- Adapted all unit tests to run successfully on M1 devices. #1933 by Dennis Bader.
0.24.0 (2023-04-12)
Improved
- General model improvements:
  - New baseline forecasting model `NaiveMovingAverage`. #1557 by Janek Fidor.
  - New models `StatsForecastAutoCES` and `StatsForecastAutoTheta` from Nixtla's statsforecasts library as local forecasting models without covariates support. AutoTheta supports probabilistic forecasts. #1476 by Boyd Biersteker.
  - Added support for future covariates, and probabilistic forecasts to `StatsForecastAutoETS`. #1476 by Boyd Biersteker.
  - Added support for logistic growth to `Prophet` with parameters `growth`, `cap`, `floor`. #1419 by David Kleindienst.
  - Improved the model string / object representation style similar to scikit-learn models. #1590 by Janek Fidor.
  - 🔴 Renamed `MovingAverage` to `MovingAverageFilter` to avoid confusion with the new `NaiveMovingAverage` model. #1557 by Janek Fidor.
- Improvements to `RegressionModel`:
  - Optimized lagged data creation for fit/predict sets achieving a drastic speed-up. #1399 by Matt Bilton.
  - Added support for categorical past/future/static covariates to `LightGBMModel` with model creation parameters `categorical_*_covariates`. #1585 by Rijk van der Meulen. (See the sketch below.)
  - Added lagged feature names for better interpretability; accessible with model property `lagged_feature_names`. #1679 by Antoine Madrona.
  - 🔴 New `use_static_covariates` option for all models: When True (default), models use static covariates if available at fitting time and enforce identical static covariate shapes across all target `series` used for training or prediction; when False, models ignore static covariates. #1700 by Dennis Bader.
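  A minimal sketch of categorical covariates with `LightGBMModel` (requires the `lightgbm` package; the covariate and its assumed component name "month" are illustrative):

  ```python
  from darts.datasets import AirPassengersDataset
  from darts.models import LightGBMModel
  from darts.utils.timeseries_generation import datetime_attribute_timeseries

  series = AirPassengersDataset().load()
  # integer-coded month of year, used as a future covariate (assumed component name: "month")
  month = datetime_attribute_timeseries(series, attribute="month", add_length=12)

  model = LightGBMModel(
      lags=24,
      lags_future_covariates=[0],
      categorical_future_covariates=["month"],
  )
  model.fit(series, future_covariates=month)
  pred = model.predict(n=12, future_covariates=month)
  ```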
- Improvements to `TorchForecastingModel`:
  - New methods `load_weights()` and `load_weights_from_checkpoint()` for loading only the weights from a manually saved model or checkpoint. This allows to fine-tune the pre-trained models with different optimizers or learning rate schedulers. #1501 by Antoine Madrona.
  - New method `lr_find()` that helps to find a good initial learning rate for your forecasting problem. #1609 by Levente Szabados and Dennis Bader.
  - Improved the user guide and added new sections about saving/loading (checkpoints, manual save/load, loading weights only), and callbacks. #1661 by Antoine Madrona.
  - 🔴 Replaced `":"` in save file names with `"_"` to avoid issues on some operating systems. For loading models saved on earlier Darts versions, try to rename the file names by replacing `":"` with `"_"`. #1501 by Antoine Madrona.
  - 🔴 New `use_static_covariates` option for `TFTModel`, `DLinearModel` and `NLinearModel`: When True (default), models use static covariates if available at fitting time and enforce identical static covariate shapes across all target `series` used for training or prediction; when False, models ignore static covariates. #1700 by Dennis Bader.
- Improvements to `TimeSeries`:
  - Added support for integer indexed input to `from_*` factory methods, if the index can be converted to a pandas.RangeIndex. #1527 by Dennis Bader.
  - Added support for integer indexed input with step sizes (freq) other than 1. #1527 by Dennis Bader.
  - Optimized time series creation with `fill_missing_dates=True` achieving a drastic speed-up. #1527 by Dennis Bader.
  - `from_group_dataframe()` now warns the user if there is suspicion of a "bad" time index (not monotonically increasing). #1628 by Dennis Bader.
- Added a parameter to give a custom function name to the transformed output of `WindowTransformer`; improved the explanation of the `window` parameter. #1676 and #1666 by Jing Qiang Goh.
- Added `historical_forecasts` parameter to `backtest()` that allows to use precomputed historical forecasts from `historical_forecasts()`. #1597 by Janek Fidor. (See the sketch after this list.)
- Added feature values and SHAP object to `ShapExplainabilityResult`, giving easy user access to all SHAP-specific explainability results. #1545 by Rijk van der Meulen.
- New `quantile_loss()` (pinball loss) metric for probabilistic forecasts. #1559 by Janek Fidor.
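A minimal sketch of reusing precomputed historical forecasts in `backtest()`, as referenced above (the model and metric are illustrative):

```python
from darts.datasets import AirPassengersDataset
from darts.metrics import mape
from darts.models import LinearRegressionModel

series = AirPassengersDataset().load()
model = LinearRegressionModel(lags=24)

# compute the historical forecasts once ...
hist_fc = model.historical_forecasts(series, start=0.75, forecast_horizon=12)

# ... and reuse them for evaluation instead of forecasting again
score = model.backtest(series, historical_forecasts=hist_fc, metric=mape)
```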
Fixed
- Fixed an issue in `BottomUp/TopDownReconciliator` where the order of the series components was not taken into account. #1592 by David Kleindienst.
- Fixed an issue with `DLinearModel` not supporting even numbered `kernel_size`. #1695 by Antoine Madrona.
- Fixed an issue with `RegressionEnsembleModel` not using future covariates during training. #1660 by Rajesh Balakrishnan.
- Fixed an issue where `NaiveEnsembleModel` prediction did not transfer the series' component name. #1602 by David Kleindienst.
- Fixed an issue in `TorchForecastingModel` that prevented from using multi GPU training. #1509 by Levente Szabados.
- Fixed a bug when saving a `FFT` model with `trend=None`. #1594 by Antoine Madrona.
- Fixed some issues with PyTorch-Lightning version 2.0.0. #1651 by Dennis Bader.
- Fixed a bug in `QuantileDetector` which raised an error when low and high quantiles had identical values. #1553 by Julien Adda.
- Fixed an issue preventing `TimeSeries` from being empty. #1359 by Antoine Madrona.
- Fixed an issue when using `backtest()` on multiple series. #1517 by Julien Herzen.
- General fixes to `historical_forecasts()`:
  - Fixed an issue where `retrain` functions were not handled properly; improved handling of `start` and `train_length` parameters; better interpretability with warnings and improved error messages (warnings can be turned off with `show_warnings=False`). #1675 by Antoine Madrona and Dennis Bader.
  - Fixed an issue for several models (mainly ensemble and local models) where automatic `start` did not respect the minimum required training lengths. #1616 by Janek Fidor and Dennis Bader.
  - Fixed an issue when using a `RegressionModel` with future covariates lags only. #1685 by Maxime Dumonal.
Improvements
- Option to skip slow tests locally with `pytest . --no-cov -m "not slow"`. #1625 by Blazej Nowicki.
- Major refactor of data transformers which simplifies implementation of new transformers. #1409 by Matt Bilton.
0.23.1 (2023-01-12)
Patch release
Fixed
- Fix an issue in `TimeSeries` which made it incompatible with Python 3.7. #1449 by Dennis Bader.
- Fix an issue with static covariates when series have variable lengths with `RegressionModel`s. #1469 by Eliane Maalouf.
- Fix an issue with PyTorch Lightning trainer handling. #1459 by Dennis Bader.
- Fix an issue with `historical_forecasts()` retraining PyTorch models iteratively instead of from scratch. #1465 by Dennis Bader.
- Fix an issue with `historical_forecasts()` not working in some cases when `future_covariates` are provided and `start` is not specified. #1481 by Maxime Dumonal.
- Fix an issue with `slice_n_points` functions on integer indexes. #1482 by Julien Herzen.
0.23.0 (2022-12-23)
Improved
- 🚀🚀🚀 Brand new Darts module dedicated to anomaly detection on time series: `darts.ad`. More info on the API doc page: https://unit8co.github.io/darts/generated_api/darts.ad.html. #1256 by Julien Adda and Julien Herzen.
- New forecasting models: `DLinearModel` and `NLinearModel` as proposed in this paper. #1139 by Julien Herzen and Greg DeVos.
- New forecasting model: `XGBModel` implementing XGBoost. #1405 by Julien Herzen.
- New `multi_models` option for all `RegressionModel`s: when set to False, uses only a single underlying estimator for multi-step forecasting, which can drastically increase computational efficiency. #1291 by Eliane Maalouf.
- All `RegressionModel`s (incl. LightGBM, Catboost, XGBoost, Random Forest, ...) now support static covariates. #1412 by Eliane Maalouf.
- `historical_forecasts()` and `backtest()` now work on multiple series, too. #1318 by Maxime Dumonal.
- New window transformation capabilities: `TimeSeries.window_transform()` and a new `WindowTransformer` which allow to easily create window features. #1269 by Eliane Maalouf. (See the sketch after this list.)
- 🔴 Improvements to `TorchForecastingModels`: Load models directly to CPU that were trained on GPU. Save file size reduced. Improved PyTorch Lightning Trainer handling fixing several minor issues. Removed deprecated methods `load_model` and `save_model`. #1371 by Dennis Bader.
- Improvements to encoders: Added support for encoders to all models with covariate support through `add_encoders` at model creation. Encoders now generate the correct minimum required covariate time spans for all models. #1338 by Dennis Bader.
- New datasets available in `darts.datasets` (`ILINetDataset`, `ExchangeRateDataset`, `TrafficDataset`, `WeatherDataset`). #1298 by Kamil Wierciak. #1291 by Eliane Maalouf.
- New `Diff` transformer, which can difference and "undifference" series. #1380 by Matt Bilton.
- Improvements to `KalmanForecaster`: The model now accepts different TimeSeries for prediction than the ones used to fit the model. #1338 by Dennis Bader.
- Backtest functions can now accept a list of metric functions. #1333 by Antoine Madrona.
- Extension of baseline models to work on multivariate series. #1373 by Błażej Nowicki.
- Improvement to `TimeSeries.gaps()`. #1265 by Antoine Madrona.
- Speedup of `TimeSeries.quantile_timeseries()` method. #1351 by @tranquilitysmile.
- Some dependencies which can be hard to install (LightGBM, Catboost, XGBoost, Prophet, Statsforecast) are not required anymore (if not installed the corresponding models will not be available). #1360 by Antoine Madrona.
- Removed `IPython` as a dependency. #1331 by Erik Hasse.
- Allow the creation of empty `TimeSeries`. #1359 by Antoine Madrona.
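A minimal sketch of the new window transformation capability mentioned above (the transform specification is illustrative):

```python
from darts.datasets import AirPassengersDataset

series = AirPassengersDataset().load()

# 12-step rolling mean of every component
smoothed = series.window_transform(
    transforms={"function": "mean", "mode": "rolling", "window": 12}
)
```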
Fixed
- Fixed edge case in ShapExplainer for regression models where covariates series > target series. #1310 by Rijk van der Meulen.
- Fixed a bug in `TimeSeries.resample()`. #1350 by Antoine Madrona.
- Fixed splitting methods when split point is not in the series. #1415 by @DavidKleindienst.
- Fixed issues with `append_values()` and `prepend_values()` not correctly extending `RangeIndex`es. #1435 by Matt Bilton.
- Fixed some issues with time zones. #1343 by Antoine Madrona.
- Fixed some issues when using a single target series with `RegressionEnsembleModel`. #1357 by Dennis Bader.
- Fixed treatment of stochastic models in ensemble models. #1423 by Eliane Maalouf.
0.22.0 (2022-10-04)
Improved
- New explainability feature. The class `ShapExplainer` in `darts.explainability` can provide Shap-value explanations of the importance of each lag and each dimension in producing each forecasting lag for `RegressionModel`s. #909 by Maxime Dumonal.
- New model: `StatsForecastETS`. Similarly to `StatsForecastAutoARIMA`, this model offers the ETS model from Nixtla's `statsforecasts` library as a local forecasting model supporting future covariates. #1171 by Julien Herzen.
- Added support for past and future covariates to `residuals()` function. #1223 by Eliane Maalouf.
- Added support for retraining model(s) every `n` iteration and on custom conditions in `historical_forecasts` method of `ForecastingModel`s. #1139 by Francesco Bruzzesi.
- Added support for beta-NLL in `GaussianLikelihood`s, as proposed in this paper. #1162 by Julien Herzen.
- New LayerNorm alternatives, RMSNorm and LayerNormNoBias. #1113 by Greg DeVos.
- 🔴 Improvements to encoders: improve fitting behavior of encoders' transformers and solve a couple of issues. Remove support for absolute index encoding. #1257 by Dennis Bader.
- Overwrite min_train_series_length for Catboost and LightGBM. #1214 by Anne de Vries.
- New example notebook showcasing an end-to-end example of hyperparameter optimization with Optuna. #1242 by Julien Herzen.
- New user guide section on hyperparameter optimization with Optuna and Ray Tune. #1242 by Julien Herzen.
- Documentation on model saving and loading. #1210 by Amadej Kocbek.
- 🔴 `torch_device_str` has been removed from all torch models in favor of PyTorch Lightning's `pl_trainer_kwargs` method. #1244 by Greg DeVos.
Fixed
- An issue with `add_encoders` in `RegressionModel`s when fit/predict were called with a single target series. #1193 by Dennis Bader.
- Some issues with integer-indexed series. #1191 by Julien Herzen.
- A bug when using the latest versions (>=1.1.1) of Prophet. #1208 by Julien Herzen.
- An issue with calling `fit_transform()` on reconciliators. #1165 by Julien Herzen.
- A bug in `GaussianLikelihood` object causing issues with confidence intervals. #1162 by Julien Herzen.
- An issue which prevented plotting `TimeSeries` of length 1. #1206 by Julien Herzen.
- Type hinting for ExponentialSmoothing model. #1185 by Rijk van der Meulen.
0.21.0 (2022-08-12)
Improved
- New model: Catboost, incl. `quantile`, `poisson` and `gaussian` likelihoods support. #1007, #1044 by Jonas Racine.
- Extension of the `add_encoders` option to `RegressionModel`s. It is now straightforward to add calendar based or custom past or future covariates to these models, similar to torch models. #1093 by Dennis Bader.
- Introduction of `StaticCovariatesTransformer`, categorical static covariate support for `TFTModel`, example and user-guide updates on static covariates. #1081 by Dennis Bader.
- ARIMA and VARIMA models now support being applied to a new series, different than the one used for training. #1036 by Samuele Giuliano Piazzetta.
- All Darts forecasting models now have unified `save()` and `load()` methods. #1070 by Dustin Brunner.
- Improvements in logging. #1034 by Dustin Brunner.
- Re-integrating Prophet >= 1.1 in core dependencies (as it does not depend on PyStan anymore). #1054 by Julien Herzen.
- Added a new `AustralianTourismDataset`. #1141 by Julien Herzen.
- Added a new notebook demonstrating hierarchical reconciliation. #1147 by Julien Herzen.
- Added `drop_columns()` method to `TimeSeries`. #1040 by @shaido987.
- Speedup static covariates when no casting is needed. #1053 by Julien Herzen.
- Implemented the min_train_series_length method for the FourTheta and Theta models that overwrites the minimum default of 3 training samples by 2*seasonal_period when appropriate. #1101 by Rijk van der Meulen.
- Make default formatting optional in plots. #1056 by Colin Delahunty.
- Introduce `retrain` option in `residuals()` method. #1066 by Julien Herzen.
- Improved error messages. #1066 by Julien Herzen.
- Small readability improvements to user guide. #1039, #1046 by Ryan Russell.
Fixed
- Fixed an error when loading torch forecasting models. #1124 by Dennis Bader.
- 🔴 Renamed `ignore_time_axes` into `ignore_time_axis` in `TimeSeries.concatenate()`. #1073 by Thomas Kientz.
- Propagate static covs and hierarchy in missing value filler. #1076 by Julien Herzen.
- Fixed an issue where num_stacks is used instead of self.num_stacks in the NBEATSModel. Also, a few mistakes in API reference docs. #1103 by Rijk van der Meulen.
- Fixed `univariate_component()` method to propagate static covariates and drop hierarchy. #1128 by Julien Herzen.
- Fixed various issues. #1106 by Julien Herzen.
- Fixed an issue with `residuals` on `RNNModel`. #1066 by Julien Herzen.
0.20.0 (2022-06-22)
Improved
- Added support for static covariates in `TimeSeries` class. #966 by Dennis Bader.
- Added support for static covariates in TFT model. #966 by Dennis Bader.
- Support for storing hierarchy of components in `TimeSeries` (in view of hierarchical reconciliation). #1012 by Julien Herzen.
- New Reconciliation transformers for forecast reconciliation: bottom up, top down and MinT. #1012 by Julien Herzen.
- Added support for Monte Carlo Dropout, as a way to capture model uncertainty with torch models at inference time. #1013 by Julien Herzen.
- New datasets: ETT and Electricity. #617 by Greg DeVos
- New dataset: Uber TLC. #1003 by Greg DeVos.
- Model Improvements: Option for changing activation function for NHiTs and NBEATS. NBEATS support for dropout. NHiTs Support for AvgPooling1d. #955 by Greg DeVos.
- Implemented "GLU Variants Improve Transformer" for transformer based models (transformer and TFT). #959 by Greg DeVos.
- Added support for torch metrics during training and validation. #996 by Greg DeVos.
- Better handling of logging #1010 by Dustin Brunner.
- Better support for Python 3.10, and dropping `prophet` as a dependency (`Prophet` model still works if the `prophet` package is installed separately). #1023 by Julien Herzen.
- Option to avoid global matplotlib configuration changes. #924 by Mike Richman.
- 🔴 `HNiTSModel` renamed to `HNiTS`. #1000 by Greg DeVos.
Fixed
- A bug with `tail()` and `head()`. #942 by Julien Herzen.
- An issue with arguments being reverted for the `metric` function of gridsearch and backtest. #989 by Clara Grotehans.
- An error checking whether `fit()` has been called in global models. #944 by Julien Herzen.
- An error in Gaussian Process filter happening with newer versions of sklearn. #963 by Julien Herzen.
Fixed
- An issue with LinearLR scheduler in tests. #928 by Dennis Bader.
0.19.0 (2022-04-13)
Improved
- New model: `NHiTS` implementing the N-HiTS model. #898 by Julien Herzen.
- New model: `StatsForecastAutoARIMA` implementing the (faster) AutoARIMA version of statsforecast. #893 by Julien Herzen.
- New model: `Croston` method. #893 by Julien Herzen.
- Better way to represent stochastic `TimeSeries` from distributions specified by quantiles. #899 by Gian Wiher.
- Better sampling of trajectories for stochastic `RegressionModel`s. #899 by Gian Wiher.
- Improved user guide with more sections. #905 by Julien Herzen.
- New notebook showcasing transfer learning and training forecasting models on large time series datasets. #885 by Julien Herzen.
Fixed
- Some issues with PyTorch Lightning >= 1.6.0 #888 by Julien Herzen.
0.18.0 (2022-03-22)
Improved
- `LinearRegressionModel` and `LightGBMModel` can now be probabilistic, supporting quantile and poisson regression. #831, #853 by Gian Wiher.
- New models: `BATS` and `TBATS`, based on tbats. #816 by Julien Herzen.
- Handling of stochastic inputs in PyTorch based models. #833 by Julien Herzen.
- GPU and TPU user guide. #826 by @gsamaras.
- Added train and validation loss to PyTorch Lightning progress bar. #825 by Dennis Bader.
- More losses available in `darts.utils.losses` for PyTorch-based models: `SmapeLoss`, `MapeLoss` and `MAELoss`. #845 by Julien Herzen.
- Improvement to the seasonal decomposition. #862 by Gian Wiher.
- The `gridsearch()` method can now return best metric score. #822 by @nlhkh.
- Removed needless checkpoint loading when predicting. #821 by Dennis Bader.
- Changed default number of epochs for validation from 10 to 1. #825 by Dennis Bader.
Fixed
- Fixed some issues with encoders in `fit_from_dataset()`. #829 by Julien Herzen.
- Fixed an issue with covariates slicing for `DualCovariatesForecastingModels`. #858 by Dennis Bader.
0.17.1 (2022-02-17)
Patch release
Fixed
- Fixed issues with (now deprecated) `torch_device_str` parameter, and improved documentation related to using devices with PyTorch Lightning. #806 by Dennis Bader.
- Fixed an issue with `ReduceLROnPlateau`. #806 by Dennis Bader.
- Fixed an issue with the periodic basis functions of N-BEATS. #804 by Vladimir Chernykh.
- Relaxed requirements for `pandas`; from `pandas>=1.1.0` to `pandas>=1.0.5`. #800 by @adelnick.
0.17.0 (2022-02-15)
Improved
- 🚀 Support for PyTorch Lightning: All deep learning models are now implemented using PyTorch Lightning. This means that many more features are now available via PyTorch Lightning trainers functionalities; such as tailored callbacks, or multi-GPU training. #702 by Dennis Bader.
- The `RegressionModel`s now accept an `output_chunk_length` parameter; meaning that they can be trained to predict more than one time step in advance (and used auto-regressively to predict on longer horizons). #761 by Dustin Brunner.
- 🔴 `TimeSeries` "simple statistics" methods (such as `mean()`, `max()`, `min()` etc, ...) have been refactored to work natively on stochastic `TimeSeries`, and over configurable axes. #773 by Gian Wiher.
- 🔴 `TimeSeries` now support only pandas `RangeIndex` as an integer index, and does not support `Int64Index` anymore, as it became deprecated with pandas 1.4.0. This also now brings the guarantee that `TimeSeries` do not have missing "dates" even when indexed with integers. #777 by Julien Herzen.
- New model: `KalmanForecaster` is a new probabilistic model, working on multivariate series, accepting future covariates, and which works by running the state-space model of a given Kalman filter into the future. The `fit()` function uses the N4SID algorithm for system identification. #743 by Julien Herzen.
- The `KalmanFilter` now also works on `TimeSeries` containing missing values. #743 by Julien Herzen.
- The estimators (forecasting and filtering models) now also return their own instance when calling `fit()`, which allows chaining calls. #741 by Julien Herzen.
Fixed
- Fixed an issue with tensorboard and gridsearch when `model_name` is provided. #759 by @gdevos010.
- Fixed issues with pip-tools. #762 by Tomas Van Pottelbergh.
- Some linting checks have been added to the CI pipeline. #749 by Tomas Van Pottelbergh.
0.16.1 (2022-01-24)
Patch release
- Fixed an incompatibility with latest version of Pandas (#752) by Julien Herzen.
- Fixed non contiguous error when using lstm_layers > 1 on GPU. (#740) by Dennis Bader.
- Small improvement in type annotations in API documentation (#744) by Dustin Brunner.
- Added flake8 tests to CI pipelines (#749, #748, #745) by Tomas Van Pottelbergh and Dennis Bader.
0.16.0 (2022-01-13)
Improved
- The documentation page has been revamped and now contains a brand new Quickstart guide, as well as a User Guide section, which will be populated over time.
- The API documentation has been revamped and improved, notably using `numpydoc`.
- The datasets building procedure has been improved in `RegressionModel`, which yields dramatic speed improvements.
Added
- The `KalmanFilter` can now do system identification using `fit()` (using nfoursid).
Fixed
- Catch a potentially problematic case in ensemble models.
- Fixed support for `ReduceLROnPlateau` scheduler.
- We have switched to black for code formatting (this is checked by the CI pipeline).
0.15.0 (2021-12-24)
Added:
- On-the-fly encoding of position and calendar information in Torch-based models. Torch-based models now accept an optional `add_encoders` parameter, specifying how to use certain calendar and position information as past and/or future covariates on-the-fly.

  Example:

  ```python
  from darts.dataprocessing.transformers import Scaler

  add_encoders = {
      'cyclic': {'future': ['month']},
      'datetime_attribute': {'past': ['hour', 'dayofweek']},
      'position': {'past': ['absolute'], 'future': ['relative']},
      'custom': {'past': [lambda idx: (idx.year - 1950) / 50]},
      'transformer': Scaler(),
  }
  ```

  This will add a cyclic encoding of the month as future covariates, add some datetime attributes as past and future covariates, an absolute/relative position (index), and even some custom mapping of the index (such as a function of the year). A `Scaler` will be applied to fit/transform all of these covariates both during training and inference.
The scalers can now also be applied on stochastic
TimeSeries
. -
There is now a new argument
max_samples_per_ts
to the :func:fit()
method of Torch-based models, which can be used to limit the number of samples contained in the underlying training dataset, by taking (at most) the most recentmax_samples_per_ts
training samples per time series. -
All local forecasting models that support covariates (Prophet, ARIMA, VARIMA, AutoARIMA) now handle covariate slicing themselves; this means that you don't need to make sure your covariates have the exact right time span. As long as they contain the right time span, the models will slice them for you.
-
TimeSeries.map()
and mappers data transformers now work on stochasticTimeSeries
. -
Granger causality function:
utils.statistics.granger_causality_tests
can test if one univariateTimeSeries
"granger causes" another. -
New stationarity tests for univariate
TimeSeries
:darts.utils.statistics.stationarity_tests
,darts.utils.statistics.stationarity_test_adf
anddarts.utils.statistics.stationarity_test_kpss
. -
New test coverage badge 🦄
Fixed:
- Fixed various issues in different notebooks.
- Fixed a bug handling frequencies in Prophet model.
- Fixed an issue causing `PastCovariatesTorchModels` (such as `NBEATSModel`) prediction to fail when `n > output_chunk_length` AND `n` not being a multiple of `output_chunk_length`.
- Fixed an issue in backtesting which was causing untrained models not to be trained on the initial window when `retrain=False`.
- Fixed an issue causing `residuals()` to fail for Torch-based models.
- Updated the contribution guidelines
- The unit tests have been re-organised with submodules following that of the library.
- All relative import paths have been removed and replaced by absolute paths.
- pytest and pytest-cov are now used to run tests and compute coverage.
0.14.0 (2021-11-28)
Added:
- Probabilistic N-BEATS: The `NBEATSModel` can now produce probabilistic forecasts, in a similar way as all the other deep learning models in Darts (specifying a `likelihood` and predicting with `num_samples` >> 1).
- We have improved the speed of the data loading functionalities for PyTorch-based models. This should speed up training, typically by a few percent.
- Added `num_loader_workers` parameters to `fit()` and `predict()` methods of PyTorch-based models, in order to control the `num_workers` of PyTorch DataLoaders. This can sometimes result in drastic speedups.
- New method `TimeSeries.astype()` which allows to easily cast (e.g. between `np.float64` and `np.float32`).
- Added `dtype` as an option to the time series generation modules.
- Added a small performance guide for PyTorch-based models.
- Possibility to specify a (relative) time index to be used as future covariates in the TFT Model. Future covariates don't have to be specified when this is used.
- New TFT example notebook.
- Less strict dependencies: we have loosened the required dependencies versions.
Fixed:
- A small fix on the Temporal Fusion Transformer `TFTModel`, which should improve performance.
- A small fix in the random state of some unit tests.
- Fixed a typo in Transformer example notebook.
0.13.1 (2021-11-08)
Added:
- Factory methods in `TimeSeries` are now `classmethods`, which makes inheritance of `TimeSeries` more convenient.
Fixed:
- An issue which was causing some of the flavours installations not to work
0.13.0 (2021-11-07)
Added:
- New forecasting model: Temporal Fusion Transformer (`TFTModel`). A new deep learning model supporting both past and future covariates.
- Improved support for Facebook Prophet model (`Prophet`):
  - Added support for fit & predict with future covariates. For instance: `model.fit(train, future_covariates=train_covariates)` and `model.predict(n=len(test), num_samples=1, future_covariates=test_covariates)`
  - Added stochastic forecasting, for instance: `model.predict(n=len(test), num_samples=200)`
  - Added user-defined seasonalities either at model creation with kwarg `add_seasonality` (`Prophet(add_seasonality=kwargs_dict)`) or pre-fit with `model.add_seasonality(kwargs)`. For more information on how to add seasonalities, see the Prophet docs.
  - Added possibility to predict and return the base model's raw output with `model.predict_raw()`. Note that this returns a pd.DataFrame `pred_df`, which will not be supported for further processing with the Darts API. But it is possible to access Prophet's methods such as plots with `model.model.plot_components(pred_df)`.
- New `n_random_samples` in `gridsearch()` method, which allows to specify a number of (random) hyperparameter combinations to be tried, in order mainly to limit the gridsearch time.
- Improvements in the checkpointing and saving of Torch models.
  - Now models don't save checkpoints by default anymore. Set `save_checkpoints=True` to enable them.
  - Models can be manually saved with `YourTorchModel.save_model(file_path)` (file_path pointing to the .pth.tar file).
  - Models can be manually loaded with `YourTorchModel.load_model(file_path)` or the original method `YourTorchModel.load_from_checkpoint()`.
- New `QuantileRegression` Likelihood class in `darts.utils.likelihood_models`. Allows to apply quantile regression loss, and get probabilistic forecasts on all deep learning models supporting likelihoods. Used by default in the Temporal Fusion Transformer.
Fixed:
- Some issues with `darts.concatenate()`.
- Fixed some bugs with `RegressionModel`s applied on multivariate series.
- An issue with the confidence bounds computation in ACF plot.
- Added a check for some models that do not support `retrain=False` for `historical_forecasts()`.
- Small fixes in install instructions.
- Some rendering issues with bullet points lists in examples.
0.12.0 (2021-09-25)
Added:
- Improved probabilistic forecasting with neural networks
  - Now all neural networks based forecasting models (except `NBEATSModel`) support probabilistic forecasting, by providing the `likelihood` parameter to the model's constructor method.
  - `darts.utils.likelihood_models` now contains many more distributions. The complete list of likelihoods available to train neural networks based models is available here: https://unit8co.github.io/darts/generated_api/darts.utils.likelihood_models.html
  - Many of the available likelihood models now offer the possibility to specify "priors" on the distribution's parameters. Specifying such priors will regularize the training loss to make the output distribution more like the one specified by the prior parameters values.
- Performance improvements on `TimeSeries` creation. Creating `TimeSeries` is now significantly faster, especially for large series, and filling missing dates has also been significantly sped up.
- New rho-risk metric for probabilistic forecasts.
- New method `darts.utils.statistics.plot_hist()` to plot histograms of time series data (e.g. backtest errors).
- New argument `fillna_value` to `TimeSeries` factory methods, allowing to specify a value to fill missing dates (instead of `np.nan`).
- Synthetic `TimeSeries` generated with `darts.utils.timeseries_generation` methods can now be integer-indexed (just pass an integer instead of a timestamp for the `start` argument).
- Removed some deprecation warnings
- Updated conda installation instructions
Fixed:
- Removed extra 1x1 convolutions in TCN Model.
- Fixed an issue with linewidth parameter when plotting `TimeSeries`.
- Fixed a column name issue in datetime attribute time series.
- We have removed the `develop` branch.
- We force sklearn<1.0 as we have observed issues with pmdarima and sklearn==1.0
0.11.0 (2021-09-04)
Added:
- New model: `LightGBMModel` is a new regression model. Regression models allow to predict future values of the target, given arbitrary lags of the target as well as past and/or future covariates. `RegressionModel` already works with any scikit-learn regression model, and now `LightGBMModel` does the same with LightGBM. If you want to activate LightGBM support in Darts, please read the detailed install notes on the README carefully.
- Added stride support to gridsearch
Fixed:
- A bug which was causing issues when training on a GPU with a validation set
- Some issues with custom-provided RNN modules in `RNNModel`.
- Properly handle `kwargs` in the `fit` function of `RegressionModel`s.
- Fixed an issue which was causing problems with latest versions of Matplotlib.
- An issue causing errors in the FFT notebook
0.10.1 (2021-08-19)
Fixed:
- A bug with memory pinning that was causing issues with training models on GPUs.
Changed:
- Clarified conda support on the README
0.10.0 (2021-08-13)
Added:
- 🔴 Improvement of the covariates support. Before, some models were accepting a `covariates` (or `exog`) argument, but it wasn't always clear whether this represented "past-observed" or "future-known" covariates. We have made this clearer. Now all covariate-aware models support `past_covariates` and/or `future_covariates` argument in their `fit()` and `predict()` methods, which makes it clear what series is used as a past or future covariate. We recommend this article for more information and examples.
- 🔴 Significant improvement of `RegressionModel` (incl. `LinearRegressionModel` and `RandomForest`). These models now support training on multiple (possibly multivariate) time series. They also support both `past_covariates` and `future_covariates`. It makes it easier than ever to fit arbitrary regression models (e.g. from scikit-learn) on multiple series, to predict the future of a target series based on arbitrary lags of the target and the past/future covariates. The signature of these models changed: it's not using "`exog`" keyword arguments, but `past_covariates` and `future_covariates` instead.
- Dynamic Time Warping. There is a brand new `darts.dataprocessing.dtw` submodule that implements Dynamic Time Warping between two `TimeSeries`. It's also coming with a new `dtw` metric in `darts.metrics`. We recommend going over the new DTW example notebook for a good overview of the new functionalities.
- Conda forge installation support (fully supported with Python 3.7 only for now). You can now `conda install u8darts-all`.
- `TimeSeries.from_csv()` allows to obtain a `TimeSeries` from a CSV file directly.
- Optional cyclic encoding of the datetime attributes future covariates; for instance it's now possible to call `my_series.add_datetime_attribute('weekday', cyclic=True)`, which will add two columns containing a sin/cos encoding of the weekday.
- Default seasonality inference in `ExponentialSmoothing`. If left to `None`, the `seasonal_periods` is inferred from the `freq` of the provided series.
- Various documentation improvements.
Fixed:
- Now transformations and forecasting maintain the columns' names of the `TimeSeries`. The generation module `darts.utils.timeseries_generation` also comes with better default columns names.
- Some issues with our Docker build process
- A bug with GPU usage
Changed:
- For probabilistic PyTorch based models, the generation of multiple samples (and series) at prediction time is now vectorized, which improves inference performance.
0.9.1 (2021-07-17)
Added:
- Improved `GaussianProcessFilter`, now handling missing values, and better handling time series indexed by datetimes.
- Improved Gaussian Process notebook.
Fixed:
- `TimeSeries` now supports indexing using `pandas.Int64Index` and not just `pandas.RangeIndex`, which solves some indexing issues.
- We have changed all factory methods of `TimeSeries` to have `fill_missing_dates=False` by default. This is because in some cases inferring the frequency for missing dates and resampling the series is causing significant performance overhead.
- Fixed a bug that was causing inference to crash on GPUs for some models.
- Fixed the default folder name, which was causing issues on Windows systems.
- We have slightly improved the documentation rendering and fixed the titles of the documentation pages for `RNNModel` and `BlockRNNModel` to distinguish them.
Changed:
- The dependencies are not pinned to some exact versions anymore.
- We have fixed the building process.
0.9.0 (2021-07-09)
Added:
- Multiple forecasting models can now produce probabilistic forecasts by specifying a `num_samples` parameter when calling `predict()`. Stochastic forecasts are stored by utilizing the new `samples` dimension in the refactored `TimeSeries` class (see the 'Changed' section). Models supporting probabilistic predictions so far are `ARIMA`, `ExponentialSmoothing`, `RNNModel` and `TCNModel` (a short sketch follows this list).
- Introduced `LikelihoodModel` class which is used by probabilistic `TorchForecastingModel` classes in order to make predictions in the form of parametrized distributions of different types.
- Added new abstract class `TorchParametricProbabilisticForecastingModel` to serve as parent class for probabilistic models.
- Introduced new `FilteringModel` abstract class alongside `MovingAverage`, `KalmanFilter` and `GaussianProcessFilter` as concrete implementations (a filtering sketch also follows this list).
- Future covariates are now utilized by `TorchForecastingModels` when the forecasting horizon exceeds the `output_chunk_length` of the model. Before, `TorchForecastingModel` instances could only predict beyond their `output_chunk_length` if they were not trained on covariates, i.e. if they predicted all the data they needed as input. This restriction has now been lifted by letting a model not only consume its own output when producing long predictions, but also utilize the covariates known in the future, if available.
- Added a new `RNNModel` class which utilizes an RNN module as both encoder and decoder. This new class natively supports the use of the most recent future covariates when making a forecast. See the documentation for more details.
- Introduced optional `epochs` parameter to the `TorchForecastingModel.predict()` method which, if provided, overrides the `n_epochs` attribute in that particular model instance and training session.
- Added support for `TimeSeries` with a `pandas.RangeIndex` instead of just allowing `pandas.DatetimeIndex`.
- `ForecastingModel.gridsearch` now makes use of parallel computation.
- Introduced a new `force_reset` parameter to `TorchForecastingModel.__init__()` which, if left to False, will prevent the user from overriding model data with the same name and directory.
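A minimal sketch of the new probabilistic forecasting API (`series` is a placeholder `TimeSeries`; the rest follows the entries above):

```python
from darts.models import ExponentialSmoothing

model = ExponentialSmoothing()
model.fit(series)
# Draw 500 Monte Carlo sample paths; the result is a stochastic TimeSeries
# whose new `samples` dimension has size 500.
probabilistic_forecast = model.predict(n=36, num_samples=500)
```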
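And a hedged sketch of the new filtering API; the `dim_x` argument and the `filter()` method name follow the current `KalmanFilter` interface and are assumptions here, with `noisy_series` as a placeholder `TimeSeries`:

```python
from darts.models import KalmanFilter

kf = KalmanFilter(dim_x=1)                  # dimensionality of the Kalman state (assumed name)
kf.fit(noisy_series)                        # fit the underlying filter to the series
filtered_series = kf.filter(noisy_series)   # returns a denoised TimeSeries
```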
Fixed:
- Solved bug occurring when training `NBEATSModel` on a GPU.
- Fixed crash when running `NBEATSModel` with `log_tensorboard=True`.
- Solved bug occurring when training a `TorchForecastingModel` instance with a `batch_size` bigger than the available number of training samples.
- Some fixes in the documentation, including adding more details.
- Other minor bug fixes
Changed:
- 🔴 The `TimeSeries` class has been refactored to support stochastic time series representation by adding an additional dimension to a time series, namely `samples`. A time series is now based on a 3-dimensional `xarray.DataArray` with shape `(n_timesteps, n_components, n_samples)`. This overhaul also includes a change of the constructor which is incompatible with the old one. However, factory methods have been added to create a `TimeSeries` instance from a variety of data types, including `pd.DataFrame` (a short sketch follows this list). Please refer to the documentation of `TimeSeries` for more information.
- 🔴 The old version of `RNNModel` has been renamed to `BlockRNNModel`.
- The `historical_forecast()` and `backtest()` methods of `ForecastingModel` have been reorganized a bit by making use of new wrapper methods to fit and predict models.
- Updated `README.md` to reflect the new additions to the library.
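A small sketch of building a `TimeSeries` through a factory method and inspecting the new three-dimensional representation (`from_dataframe` and `all_values` follow the current API):

```python
import pandas as pd
from darts import TimeSeries

df = pd.DataFrame({
    "time": pd.date_range("2021-01-01", periods=10, freq="D"),
    "value": range(10),
})
series = TimeSeries.from_dataframe(df, time_col="time", value_cols="value")
# Deterministic series have a single sample: (n_timesteps, n_components, n_samples)
print(series.all_values().shape)  # (10, 1, 1)
```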
0.8.1 (2021-05-22)
Fixed:
- Some fixes in the documentation
Changed:
- The way to instantiate Dataset classes; datasets should now be used like this:
  ```python
  from darts import TimeSeries
  from darts.datasets import AirPassengers

  ts: TimeSeries = AirPassengers().load()
  ```
0.8.0 (2021-05-21)
Added:
- `RandomForest` algorithm implemented. Uses the scikit-learn `RandomForestRegressor` to predict future values from (lagged) exogenous variables and lagged values of the target.
- `darts.datasets` is a new submodule allowing to easily download, cache and import some commonly used time series.
- Better support for processing sequences of `TimeSeries`.
  - The Transformers, Pipelines and metrics have been adapted to be used on sequences of `TimeSeries` (rather than isolated series).
  - The inference of neural networks on sequences of series has been improved.
- There is a new utils function `darts.utils.model_selection.train_test_split` which allows splitting a `TimeSeries` or a sequence of `TimeSeries` into train and test sets, either along the sample axis or along the time axis. It also optionally allows to do "model-aware" splitting, where the split reclaims as much data as possible for the training set.
- Our implementation of N-BEATS, `NBEATSModel`, now supports multivariate time series, as well as covariates.
Changed
- `RegressionModel` is now a user-exposed class. It acts as a wrapper around any regression model with a `fit()` and `predict()` method. It enables the flexible usage of lagged values of the target variable as well as lagged values of multiple exogenous variables. Allowed values for the `lags` argument are positive integers or a list of positive integers indicating which lags should be used during training and prediction, e.g. `lags=12` translates to training with the last 12 lagged values of the target variable; `lags=[1, 4, 8, 12]` translates to training with the previous value, the value at lag 4, lag 8 and lag 12 (a short sketch follows this list).
- 🔴 `StandardRegressionModel` is now called `LinearRegressionModel`. It implements a linear regression model from `sklearn.linear_model.LinearRegression`. Users who still need to use the former `StandardRegressionModel` with another sklearn model should use the `RegressionModel` now.
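A hedged sketch of wrapping an arbitrary scikit-learn regressor; the `model=` keyword follows the current `RegressionModel` API, and `series` is a placeholder `TimeSeries`:

```python
from sklearn.ensemble import GradientBoostingRegressor
from darts.models import RegressionModel

# Wrap any regressor exposing fit()/predict(); train on the last 12 lagged target values.
model = RegressionModel(lags=12, model=GradientBoostingRegressor())
model.fit(series)
forecast = model.predict(n=6)
```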
Fixed
- We have fixed a bug arising when multiple scalers were used.
- We have fixed a small issue in the TCN architecture, which makes our implementation follow the original paper more closely.
Added:
- We have added some contribution guidelines.
0.7.0 (2021-04-14)
Added:
- `darts` PyPI package. It is now possible to `pip install darts`. The older name `u8darts` is still maintained and provides the different flavours for lighter installs.
- New forecasting model available: VARIMA (Vector Autoregressive Moving Average).
- Support for exogenous variables in ARIMA, AutoARIMA and VARIMA (optional `exog` parameter in `fit()` and `predict()` methods).
- New argument `dummy_index` for `TimeSeries` creation. If a series is just composed of a sequence of numbers without timestamps, setting this flag will allow to create a `TimeSeries` which uses a "dummy time index" behind the scenes. This simplifies the creation of `TimeSeries` in such cases, and makes it possible to use all forecasting models, except those that explicitly rely on dates.
- New method `TimeSeries.diff()` returning a differenced `TimeSeries`.
- Added an example of `RegressionEnsembleModel` in the intro notebook.
Changed:
- Improved N-BEATS example notebook.
- Methods `TimeSeries.split_before()` and `split_after()` now also accept integer or float arguments (in addition to timestamp) for the breaking point (e.g. specify 0.8 in order to obtain an 80%/20% split); see the sketch after this list.
- Argument `value_cols` no longer has to be provided if not necessary when creating a `TimeSeries` from a `DataFrame`.
- Update of dependency requirements to more recent versions.
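A minimal sketch of the fractional split (`series` is a placeholder `TimeSeries`; the tuple return follows the current API):

```python
# Split at 80% of the series length; a pd.Timestamp or an integer index also works.
train, val = series.split_before(0.8)
```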
Fixed:
- Fix issue with MAX_TORCH_SEED_VALUE on 32-bit architectures (unit8co#235).
- Corrected a bug in TCN inference, which should improve accuracy.
- Fix historical forecasts not returning last point.
- Fixed bug when calling the `TimeSeries.gaps()` function for non-regular time frequencies.
- Many small bug fixes.
0.6.0 (2021-02-02)
Added:
- `Pipeline.invertible()`, a getter which returns whether the pipeline is invertible or not.
- `TimeSeries.to_json()` and `TimeSeries.from_json()` methods to convert `TimeSeries` to/from a JSON string.
- New base class `GlobalForecastingModel` for all models supporting training on multiple time series, as well as covariates. All PyTorch models are now `GlobalForecastingModel`s.
- As a consequence of the above, the `fit()` function of PyTorch models (all neural networks) can optionally be called with a sequence of time series (instead of a single time series).
- Similarly, the `predict()` function of these models also accepts a specification of which series should be forecasted (see the sketch after this list).
- A new `TrainingDataset` base class.
- Some implementations of `TrainingDataset` containing some slicing logic for the training of neural networks on several time series.
- A new `TimeSeriesInferenceDataset` base class.
- An implementation `SimpleInferenceDataset` of `TimeSeriesInferenceDataset`.
- All PyTorch models have a new `fit_from_dataset()` method which allows to directly fit the model from a specified `TrainingDataset` instance (instead of using a default instance when going via the `fit()` method).
- A new explanatory notebook for global models: https://github.com/unit8co/darts/blob/master/examples/02-multi-time-series-and-covariates.ipynb
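A hedged sketch of training one global model on several series; the `series=` keyword in `predict()` follows the current API, `series_a`/`series_b` are placeholder `TimeSeries`, and the hyperparameters are purely illustrative:

```python
from darts.models import TCNModel

# One global model trained on two series at once.
model = TCNModel(input_chunk_length=24, output_chunk_length=12, n_epochs=20)
model.fit([series_a, series_b])

# Specify which of the training series the forecast should continue.
forecast = model.predict(n=12, series=series_a)
```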
Changed:
- 🔴 Removed the arguments `training_series` and `target_series` in `ForecastingModel`s. Please consult the API documentation of forecasting models to see the new signatures.
- 🔴 Removed `UnivariateForecastingModel` and `MultivariateForecastingModel` base classes. This distinction does not exist anymore. Instead, some models are now "global" (can be trained on multiple series) or "local" (they cannot). All implementations of `GlobalForecastingModel`s support multivariate time series out of the box, except N-BEATS.
- Improved the documentation and README.
- Re-ordered the example notebooks to improve the flow of examples.
Fixed:
- Many small bug fixes.
- Unit test speedup by about 15x.
0.5.0 (2020-11-09)
Added:
- Ensemble models, a new kind of `ForecastingModel` which allows to ensemble multiple models to make predictions:
  - `EnsembleModel` is the abstract base class for ensemble models. Classes deriving from `EnsembleModel` must implement the `ensemble()` method, which takes in a `List[TimeSeries]` of predictions from the constituent models, and returns the ensembled prediction (a single `TimeSeries` object).
  - `RegressionEnsembleModel`, a concrete implementation of `EnsembleModel` which allows to specify any regression model (providing `fit()` and `predict()` methods) to use to ensemble the constituent models' predictions (a short sketch follows this list).
- A new method to `TorchForecastingModel`: `untrained_model()` returns the model as it was initially created, allowing to retrain the exact same model from scratch. Works both when specifying a `random_state` or not.
- New `ForecastingModel.backtest()` and `RegressionModel.backtest()` functions which by default compute a single error score from the historical forecasts the model would have produced.
  - A new `reduction` parameter allows to specify whether to compute the mean/median/… of errors or (when `reduction` is set to `None`) to return a list of historical errors.
  - The previous `backtest()` functionality still exists but has been renamed `historical_forecasts()`.
- Added a new `last_points_only` parameter to `historical_forecasts()`, `backtest()` and `gridsearch()`.
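A hedged sketch of an ensemble of two baselines; the constructor arguments `forecasting_models` and `regression_train_n_points` follow the current API (they may have differed at the time), and `series` is a placeholder `TimeSeries`:

```python
from darts.models import NaiveDrift, NaiveSeasonal, RegressionEnsembleModel

# A linear regression learns how to combine the two baseline forecasts.
ensemble = RegressionEnsembleModel(
    forecasting_models=[NaiveDrift(), NaiveSeasonal(K=12)],
    regression_train_n_points=24,
)
ensemble.fit(series)
forecast = ensemble.predict(n=12)
```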
Changed:
- 🔴 Renamed `backtest()` into `historical_forecasts()`.
- `fill_missing_values()` and `MissingValuesFiller` used to remove the variable names when used with `fill='auto'` – not anymore.
- Modified the default plotting style to increase contrast and make plots lighter.
Fixed:
- Small mistake in the `NaiveDrift` model implementation which caused the first predicted value to repeat the last training value.
Changed:
- `@random_method` decorator now always assigns a `_random_instance` field to decorated methods (seeded with a random seed). This doesn't change the observed behavior, but allows to deterministically "reset" `TorchForecastingModel` by saving `_random_instance` along with the other parameters of the model upon creation.
0.4.0 (2020-10-28)
Added:
- Data (pre) processing abilities using `DataTransformer`, `Pipeline`:
  - `DataTransformer` provides a unified interface to apply transformations on `TimeSeries`, using their `transform()` method
  - `Pipeline`:
    - allows chaining of `DataTransformers`
    - provides `fit()`, `transform()`, `fit_transform()` and `inverse_transform()` methods (a short pipeline sketch follows this list).
- Implementing your own data transformers:
  - Data transformers which need to be fitted first should derive from the `FittableDataTransformer` base class and implement a `fit()` method. Fittable transformers also provide a `fit_transform()` method, which fits the transformer and then transforms the data with a single call.
  - Data transformers which perform an invertible transformation should derive from the `InvertibleDataTransformer` base class and implement an `inverse_transform()` method.
  - Data transformers which are neither fittable nor invertible should derive from the `BaseDataTransformer` base class.
  - All data transformers must implement a `transform()` method.
- Concrete `DataTransformer` implementations:
  - `MissingValuesFiller` wraps around `fill_missing_values()` and allows to fill missing values using either a constant value or the `pd.interpolate()` method.
  - `Mapper` and `InvertibleMapper` allow to easily perform the equivalent of a `map()` function on a TimeSeries, and can be made part of a `Pipeline`.
  - `BoxCox` allows to apply a Box-Cox transformation to the data.
- Extended `map()` on `TimeSeries` to accept functions which use both a value and its timestamp to compute a new value, e.g. `f(timestamp, datapoint) = new_datapoint`.
- Two new forecasting models:
  - `TransformerModel`, an implementation based on the architecture described in Attention Is All You Need by Vaswani et al. (2017).
  - `NBEATSModel`, an implementation based on the N-BEATS architecture described in N-BEATS: Neural basis expansion analysis for interpretable time series forecasting by Boris N. Oreshkin et al. (2019).
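A small, hedged sketch of chaining transformers in a `Pipeline` (the transformer classes are named in this changelog; `series` is a placeholder `TimeSeries` with strictly positive values, as required by the Box-Cox transform):

```python
from darts.dataprocessing import Pipeline
from darts.dataprocessing.transformers import BoxCox, Scaler

# Chain two invertible transformers; the pipeline exposes fit(), transform(),
# fit_transform() and inverse_transform().
pipeline = Pipeline([BoxCox(), Scaler()])
transformed = pipeline.fit_transform(series)
restored = pipeline.inverse_transform(transformed)  # back to the original scale
```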
Changed:
- 🔴 Removed `cols` parameter from `map()`. Using indexing on `TimeSeries` is preferred.
  ```python
  # Assuming a multivariate TimeSeries named series with 3 columns or variables.
  # To apply fn to columns with names '0' and '2':

  # old syntax:
  series.map(fn, cols=['0', '2'])  # returned a time series with 3 columns

  # new syntax:
  series[['0', '2']].map(fn)  # returns a time series with only 2 columns
  ```
- 🔴 Renamed `ScalerWrapper` into `Scaler`
- 🔴 Renamed the `preprocessing` module into `dataprocessing`
- 🔴 Unified `auto_fillna()` and `fillna()` into a single `fill_missing_values()` function
  ```python
  # old syntax:
  fillna(series, fill=0)

  # new syntax:
  fill_missing_values(series, fill=0)

  # old syntax:
  auto_fillna(series, **interpolate_kwargs)

  # new syntax:
  fill_missing_values(series, fill='auto', **interpolate_kwargs)
  fill_missing_values(series, **interpolate_kwargs)  # fill='auto' by default
  ```
Changed:
- GitHub release workflow is now triggered manually from the GitHub "Actions" tab in the repository, providing a `#major`, `#minor`, or `#patch` argument. #211
- (A limited number of) notebook examples are now run as part of the GitHub PR workflow.
0.3.0 (2020-10-05)
Added:
- Better indexing on TimeSeries (support for column/component indexing) #150
- New `FourTheta` forecasting model #123, #156
- `map()` method for TimeSeries #121, #166
- Further improved the backtesting functions #111:
  - Added support for multivariate TimeSeries and models
  - Added `retrain` and `stride` parameters
- Custom style for matplotlib plots #191
- sMAPE metric #129
- Option to specify a `random_state` at model creation using the `@random_method` decorator on models using neural networks to allow reproducibility of results #118
Changed:
- 🔴 Refactored backtesting #184
  - Moved backtesting functionalities inside `ForecastingModel` and `RegressionModel`
    ```python
    # old syntax:
    backtest_forecasting(forecasting_model, *args, **kwargs)

    # new syntax:
    forecasting_model.backtest(*args, **kwargs)

    # old syntax:
    backtest_regression(regression_model, *args, **kwargs)

    # new syntax:
    regression_model.backtest(*args, **kwargs)
    ```
  - Consequently removed the `backtesting` module
- 🔴 `ForecastingModel` `fit()` method syntax using TimeSeries indexing instead of additional parameters #161
  ```python
  # old syntax:
  multivariate_model.fit(multivariate_series, target_indices=[0, 1])

  # new syntax:
  multivariate_model.fit(multivariate_series, multivariate_series[["0", "1"]])

  # old syntax:
  univariate_model.fit(multivariate_series, component_index=2)

  # new syntax:
  univariate_model.fit(multivariate_series["2"])
  ```
Fixed:
- Solved issue of TorchForecastingModel.predict(n) throwing an error at n=1. #108
- Fixed MASE metrics #129
- [BUG] ForecastingModel.backtest: Can bypass sanity checks #188
- ForecastingModel.backtest() fails if forecast_horizon isn't provided #186
Added:
- Gradle to build docs, docker image, run tests, … #112, #127, #159
- M4 competition benchmark and notebook to the examples #138
- Check of test coverage #141
Changed:
Fixed:
- Passed the `freq` parameter to the `TimeSeries` constructor in all TimeSeries generating functions #157