Fix NLinear normalization to support past covariates (#1873)
* adding test for past covariates, need to check with darts about past covariates needing to end in the future

* slicing typo

* fix slicing

* add explicit comment from @felixdivo

Co-authored-by: Felix Divo <[email protected]>

* update test to use future covariates in predict function due to autoregressive predictions

* linting

* fix tests

* making changes proposed by @dennisbader

* update comment

* remove double denormalization

* update CHANGELOG.md

---------

Co-authored-by: eliot <[email protected]>
Co-authored-by: Felix Divo <[email protected]>
Co-authored-by: dennisbader <[email protected]>
4 people authored Sep 7, 2023
1 parent f5259b9 commit 74ed2bb
Showing 3 changed files with 39 additions and 6 deletions.
3 changes: 2 additions & 1 deletion — CHANGELOG.md

@@ -21,7 +21,8 @@ but cannot always guarantee backwards compatibility. Changes that may **break co
 - Fixed a bug in `RegressionEnsembleModel.extreme_lags` when the forecasting models have only covariates lags. [#1942](https://github.com/unit8co/darts/pull/1942) by [Antoine Madrona](https://github.com/madtoinou).
 - Fixed a bug when using `TFTExplainer` with a `TFTModel` running on GPU. [#1949](https://github.com/unit8co/darts/pull/1949) by [Dennis Bader](https://github.com/dennisbader).
 - Fixed a bug in `TorchForecastingModel.load_weights()` that raised an error when loading the weights from a valid architecture. [#1952](https://github.com/unit8co/darts/pull/1952) by [Antoine Madrona](https://github.com/madtoinou).
-- 🔴 Dropped support for lambda functions in `add_encoders`’s “custom” encoder in favor of named functions to ensure that models can be exported. [#1957](https://github.com/unit8co/darts/pull/1957) by [Antoine Madrona]
+- 🔴 Dropped support for lambda functions in `add_encoders`’s “custom” encoder in favor of named functions to ensure that models can be exported. [#1957](https://github.com/unit8co/darts/pull/1957) by [Antoine Madrona](https://github.com/madtoinou).
+- Fixed a bug in `NLinearModel` where `normalize=True` and past covariates could not be used at the same time. [#1873](https://github.com/unit8co/darts/pull/1873) by [Eliot Zubkoff](https://github.com/Eliotdoesprogramming).

 ### For developers of the library:
9 changes: 4 additions & 5 deletions — darts/models/forecasting/nlinear.py

@@ -142,17 +142,15 @@ def forward(
             x = x.permute(0, 2, 1, 3)
         else:
             if self.normalize:
-                seq_last = x[:, -1:, :].detach()  # (batch, 1, in_dim)
+                # get last values only for target features
+                seq_last = x[:, -1:, : self.output_dim].detach()
                 x = x - seq_last

             x = self.layer(x.view(batch, -1))  # (batch, out_len * out_dim * nr_params)
             x = x.view(
                 batch, self.output_chunk_length, self.output_dim * self.nr_params
             )

-            if self.normalize:
-                x = x + seq_last  # Note: works only when nr_params == 1
-
             if self.future_cov_dim != 0:
                 # x_future might be shorter than output_chunk_length when n < output_chunk_length
                 # so we need to pad it with zeros at the end to match the output_chunk_length
@@ -175,7 +173,8 @@ def forward(
             )

             x = x.view(batch, self.output_chunk_length, self.output_dim, self.nr_params)
-
+            if self.normalize:
+                x = x + seq_last.view(seq_last.shape + (1,))
         return x
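The shape logic of the patched normalization can be sketched with plain numpy stand-ins for the tensors in `forward` (all dimensions below are made-up examples, assuming a single target component and `nr_params > 1`):

```python
import numpy as np

batch, in_len, out_len = 4, 8, 3
output_dim, past_cov_dim, nr_params = 1, 2, 2
in_dim = output_dim + past_cov_dim  # target + past covariates stacked on the feature axis

rng = np.random.default_rng(42)
x = rng.normal(size=(batch, in_len, in_dim))

# normalize: offset the input window by the last observed values of the *target*
# features only. The pre-fix code used x[:, -1:, :], whose trailing dimension
# (in_dim) no longer matches the forecast once past covariates are stacked in.
seq_last = x[:, -1:, :output_dim]  # (batch, 1, output_dim)
x = x - seq_last                   # broadcasts over channels when output_dim == 1

# stand-in for the linear head: any map to (batch, out_len, output_dim, nr_params)
raw = rng.normal(size=(batch, out_len, output_dim, nr_params))

# denormalize once, after the likelihood-parameter axis exists; seq_last gains a
# trailing axis so the same offset is applied to every nr_params slice
out = raw + seq_last.reshape(seq_last.shape + (1,))

print(out.shape)  # (4, 3, 1, 2)
```

Moving the single denormalization after the final `view` is what removes the "works only when nr_params == 1" restriction noted in the deleted comment.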
33 changes: 33 additions & 0 deletions — darts/tests/models/forecasting/test_dlinear_nlinear.py

@@ -174,8 +174,13 @@ def _eval_model(
             val2,
             fut_cov1,
             fut_cov2,
+            past_cov1=None,
+            past_cov2=None,
+            val_past_cov1=None,
+            val_past_cov2=None,
             cls=DLinearModel,
             lkl=None,
+            **kwargs
         ):
             model = cls(
                 input_chunk_length=50,
@@ -189,6 +194,12 @@ def _eval_model(

             model.fit(
                 [train1, train2],
+                past_covariates=[past_cov1, past_cov2]
+                if past_cov1 is not None
+                else None,
+                val_past_covariates=[val_past_cov1, val_past_cov2]
+                if val_past_cov1 is not None
+                else None,
                 future_covariates=[fut_cov1, fut_cov2]
                 if fut_cov1 is not None
                 else None,
@@ -200,6 +211,9 @@ def _eval_model(
                 future_covariates=[fut_cov1, fut_cov2]
                 if fut_cov1 is not None
                 else None,
+                past_covariates=[fut_cov1, fut_cov2]
+                if past_cov1 is not None
+                else None,
                 n=len(val1),
                 num_samples=500 if lkl is not None else 1,
             )
@@ -211,6 +225,10 @@ def _eval_model(

         train1, val1 = series1.split_after(0.7)
         train2, val2 = series2.split_after(0.7)
+        past_cov1 = train1.copy()
+        past_cov2 = train2.copy()
+        val_past_cov1 = val1.copy()
+        val_past_cov2 = val2.copy()

         for model, lkl in product(
             [DLinearModel, NLinearModel], [None, GaussianLikelihood()]
@@ -254,6 +272,21 @@ def _eval_model(
             assert e1 <= 0.40
             assert e2 <= 0.34

+        e1, e2 = _eval_model(
+            train1,
+            train2,
+            val1,
+            val2,
+            fut_cov1,
+            fut_cov2,
+            past_cov1=past_cov1,
+            past_cov2=past_cov2,
+            val_past_cov1=val_past_cov1,
+            val_past_cov2=val_past_cov2,
+            cls=NLinearModel,
+            lkl=None,
+            normalize=True,
+        )
         # can only fit models with past/future covariates when shared_weights=False
         for model in [DLinearModel, NLinearModel]:
             for shared_weights in [True, False]:
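The failure mode this test now guards against reduces to a broadcasting check: with past covariates stacked into the input, the old all-channel offset no longer lines up with the forecast. A minimal numpy illustration (hypothetical shapes, numpy in place of torch):

```python
import numpy as np

batch, in_len, out_len = 2, 8, 3
output_dim, past_cov_dim = 2, 1  # two target components, one past covariate
x = np.arange(batch * in_len * (output_dim + past_cov_dim), dtype=float).reshape(
    batch, in_len, output_dim + past_cov_dim
)
forecast = np.zeros((batch, out_len, output_dim))

# pre-fix behaviour: the offset kept every input channel, past covariates included
seq_last_old = x[:, -1:, :]  # (2, 1, 3)
try:
    forecast + seq_last_old  # (2, 3, 2) + (2, 1, 3): trailing dims clash
except ValueError as err:
    print("old slicing fails:", err)

# fixed behaviour: only the target channels enter the offset
seq_last = x[:, -1:, :output_dim]  # (2, 1, 2)
restored = forecast + seq_last     # (2, 3, 2), matches the forecast shape
print(restored.shape)  # (2, 3, 2)
```

This also motivates the `predict` change above: because predictions are auto-regressive, past covariates must extend beyond the training window, which is why the test passes the future-reaching series as `past_covariates`.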
