
[Feature Request] Trainable parametric outcome transforms #1174

Open
sdaulton opened this issue Apr 14, 2022 · 3 comments
Assignees
Labels
enhancement New feature or request

Comments

@sdaulton (Contributor)

🚀 Feature Request

Currently, input transforms can be parametric, and their parameters can be optimized jointly with the other hyperparameters of the GP (input warping does this, for example). Outcome transforms, on the other hand, are applied once at model initialization, so one cannot currently implement a parametric outcome transform and optimize its parameters jointly with the GP hyperparameters.
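To make the gap concrete, here is a minimal sketch of what a trainable outcome transform could look like: a learnable affine standardization whose scale and shift are registered as `nn.Parameter`s, so an optimizer could fit them jointly with the GP hyperparameters. This is purely illustrative, with hypothetical names; it is not current BoTorch API, and the integration point into model training is exactly what this issue is about.

```python
import torch
from torch import nn


class LearnableAffineOutcomeTransform(nn.Module):
    """Hypothetical sketch of a parametric outcome transform.

    Unlike the current outcome transforms (applied once at init), the
    scale and shift here are trainable parameters.
    """

    def __init__(self, num_outputs: int = 1) -> None:
        super().__init__()
        # Raw (unconstrained) scale; softplus keeps the effective scale positive.
        self.raw_scale = nn.Parameter(torch.zeros(num_outputs))
        self.shift = nn.Parameter(torch.zeros(num_outputs))

    def _scale(self) -> torch.Tensor:
        return nn.functional.softplus(self.raw_scale) + 1e-6

    def forward(self, Y: torch.Tensor) -> torch.Tensor:
        # Transform raw outcomes into the model's training space.
        return (Y - self.shift) / self._scale()

    def untransform(self, Y: torch.Tensor) -> torch.Tensor:
        # Map model-space values back to the original outcome scale.
        return Y * self._scale() + self.shift
```

Because both `raw_scale` and `shift` are module parameters, they would show up in `model.parameters()` and thus in the same optimization loop as the GP hyperparameters, assuming the transform were applied wherever the training loss is computed.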

Motivation

Support new parametric outcome transforms.

Pitch

Support for inferring parametric outcome transforms similar to input transforms.

Describe alternatives you've considered

None

Are you willing to open a pull request? (See CONTRIBUTING)

When time allows...

cc @Balandat @dme65 @saitcakmak

@sdaulton sdaulton added the enhancement New feature or request label Apr 14, 2022
@sdaulton sdaulton self-assigned this Apr 14, 2022
@sdaulton (Contributor, Author)

cc @bajgar

@saitcakmak (Contributor)

Interesting idea. Would we need to move the application of the outcome transform into the `forward` call then? `forward` doesn't really deal with the training outcomes, so maybe it belongs somewhere else, where we actually compute the training loss.

I just exported #1176, which proposes a refactor of how input transforms are applied. We had some discussion around it internally, and it'd be nice to get more feedback. TL;DR: it proposes making the current `forward` methods private (`_forward`), and likewise `posterior` (`_posterior`), defining the public `forward` in `Model` and applying the input transforms there, which eliminates the need to handle transforms in every model. To make this work with one-to-many transforms, the idea is to split those into their own class, separate from the other input transforms, and apply them only in the `posterior` call.
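A rough sketch of the refactor pattern described above, with hypothetical names (this is not the actual #1176 implementation): the base class owns the public `forward`, applies the input transform there, and delegates to a private `_forward` that subclasses implement.

```python
import torch
from torch import nn


class Model(nn.Module):
    """Hypothetical base class: transforms are handled once, here."""

    def __init__(self, input_transform=None) -> None:
        super().__init__()
        self.input_transform = input_transform

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        # Apply the input transform centrally, so individual models
        # no longer need to deal with it.
        if self.input_transform is not None:
            X = self.input_transform(X)
        return self._forward(X)

    def _forward(self, X: torch.Tensor) -> torch.Tensor:
        # Subclasses implement their model-specific logic here.
        raise NotImplementedError


class MyModel(Model):
    def _forward(self, X: torch.Tensor) -> torch.Tensor:
        # Toy model: just sum the (already-transformed) inputs.
        return X.sum(dim=-1)
```

With this structure, a parametric outcome transform could plausibly be applied in the same central place where the training loss is computed, rather than once at initialization.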

@Balandat
Copy link
Contributor

Yeah, I think it would be great to have that. I do concur with @saitcakmak on this not being a straightforward extension of what we do for the input transforms.
