Summary:
Currently, we apply the input transforms in `train` mode at the `forward` call, and in `eval` mode at the `posterior` call. We also use a `transform_train_inputs` call at the `eval`/`train` calls to make sure that at `eval` time the `train_inputs` are stored as transformed (since they don't pass through `posterior`). This design supports `ExactGP` models, and supports specifying where to apply which input transform via the flags (so that one-to-many transforms are only applied to test inputs). However, this does not work well with approximate GP models, since this setup does not transform the inducing points at `eval` time.

This refactor splits out one-to-many transforms as
`InputAugmentationTransform`, allowing us to revert to simply applying `transform_inputs` in the `forward` pass (at all times). We still need to apply one-to-many transforms (now called `InputAugmentationTransform`) in `posterior`, so we introduce an `augment_inputs` method.

(Inspired by the public-private APIs of Ax.) In order to minimize the transform-related knowledge expected from developers, this introduces a
`Model.forward` call that applies `transform_inputs` and calls `self._forward`. `<AnyGivenModel>._forward` is the usual `forward` call that computes the prior, except that it no longer has to worry about transforms.

Similarly, for the
`posterior`, this makes `Model.posterior` into a simple wrapper around `Model._posterior`, which applies the `augment_inputs` call and the `posterior_transform`. Again, `<AnyGivenModel>._posterior` becomes the usual posterior call that no longer has to worry about the input or posterior transforms (it still has to deal with the outcome transform in the current implementation, though we can fix this by bringing back the `fantasize` flag).

This diff presents a minimal implementation around the
`SingleTaskGP` model.

Differential Revision: D35129407
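The public-private split described above can be sketched in plain Python. This is a hypothetical illustration of the pattern, not the actual BoTorch implementation: the transform classes here are simplified stand-ins, and only the method names (`transform_inputs`, `augment_inputs`, `forward`/`_forward`, `posterior`/`_posterior`) follow the description in this diff.

```python
import math


class Log10InputTransform:
    """Stand-in one-to-one input transform, applied in `forward` at all times."""

    def __call__(self, X):
        return [math.log10(x) for x in X]


class AppendFidelityAugmentation:
    """Stand-in one-to-many `InputAugmentationTransform`: pairs each test
    point with a fixed fidelity value. Applied only in `posterior`."""

    def __init__(self, fidelity):
        self.fidelity = fidelity

    def __call__(self, X):
        return [(x, self.fidelity) for x in X]


class Model:
    """Base class that owns all transform logic, so that subclasses only
    need to implement `_forward` and `_posterior`."""

    def __init__(self, input_transform=None, augmentation_transform=None):
        self.input_transform = input_transform
        self.augmentation_transform = augmentation_transform

    def transform_inputs(self, X):
        return self.input_transform(X) if self.input_transform else X

    def augment_inputs(self, X):
        return self.augmentation_transform(X) if self.augmentation_transform else X

    def forward(self, X):
        # Public entry point: apply input transforms, then delegate to the
        # subclass's `_forward`, which no longer worries about transforms.
        return self._forward(self.transform_inputs(X))

    def posterior(self, X, posterior_transform=None):
        # Public entry point: augment the inputs, delegate to `_posterior`,
        # then apply the posterior transform.
        post = self._posterior(self.augment_inputs(X))
        return posterior_transform(post) if posterior_transform else post


class ToyModel(Model):
    """A concrete model: `_forward` and `_posterior` see already-transformed
    (resp. already-augmented) inputs."""

    def _forward(self, X):
        return [x * 2 for x in X]  # stand-in "prior" computation

    def _posterior(self, X):
        # `_posterior` receives augmented (x, fidelity) pairs.
        return [x + f for x, f in X]
```

For example, `ToyModel(input_transform=Log10InputTransform()).forward([10.0, 100.0])` first maps the inputs to `[1.0, 2.0]` before `_forward` sees them. The design choice mirrors the diff's goal: a model author writes only `_forward`/`_posterior`, and the base class guarantees the transforms are applied consistently in both code paths.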