
What does m.name[0].tie_to(other) do? #34

Open
g-bassetto opened this issue Nov 19, 2019 · 1 comment
@g-bassetto

Hi everybody,
In the documentation of the Param class I found a mention of a method called tie_to, which, as I understand it, should tie a parameter in one model to another parameter in the same or another model, but I cannot find where this is implemented anywhere in the code. Maybe you can give me some pointers?

Some more details about what I am trying to do.
Say we have a custom Parameterized class, called A, with a parameter alpha and two fields field1 and field2, both instances of another custom Parameterized class B, each having its own set of parameters [kappa, theta]. I would like to tie the value of A.alpha to A.field1.kappa and A.field2.kappa, while leaving theta independent for the two fields. Ideally, the parameters of A should be [alpha, kappa1, kappa2]. Is it somehow possible to implement this behaviour?
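For concreteness, this is a minimal sketch of the structure being described (no tying yet), assuming paramz's Parameterized and Param; the class and parameter names come from the question above, while the default values and the exact import paths are my assumptions:

from paramz import Parameterized, Param

class B(Parameterized):
    # Inner component with its own parameters kappa and theta.
    def __init__(self, kappa=1.0, theta=1.0, name='B'):
        super().__init__(name=name)
        self.kappa = Param('kappa', kappa)
        self.theta = Param('theta', theta)
        self.link_parameters(self.kappa, self.theta)

class A(Parameterized):
    # Outer object: the goal is for alpha to drive field1.kappa and
    # field2.kappa, while field1.theta and field2.theta stay independent.
    def __init__(self, alpha=1.0, name='A'):
        super().__init__(name=name)
        self.alpha = Param('alpha', alpha)
        self.field1 = B(kappa=alpha, name='field1')
        self.field2 = B(kappa=alpha, name='field2')
        self.link_parameters(self.alpha, self.field1, self.field2)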

Thanks and best regards,

Giacomo

@bela127

bela127 commented Jan 22, 2024

Hey,
I had a similar problem.
While the method "tie_to" is mentioned in the documentation, it was never implemented.
So this is a documentation issue / feature request.

Still, it is possible to implement the desired behavior yourself.
Here is an example GPy kernel I implemented that does what you want:

# Imports assumed for this snippet (not shown in the original comment):
import GPy
from GPy.kern import Kern
from paramz import Param
from paramz.transformations import Logexp
from paramz.caching import Cache_this


class Combined(Kern):
    """
    Kernel combining an additive RBF term with a Brownian * RBF product,
    with the RBF and Brownian hyperparameters tied across the sub-kernels.
    """
    def __init__(self, input_dim=2, active_dims=None, rbf_variance=10, rbf_lengthscale=0.4, brown_variance=10, name='Combined'):

        super().__init__(input_dim, active_dims, name)

        # Sub-kernels; they are NOT linked as parameters here, so their own
        # Params do not appear on the model. Their values are driven by the
        # shared Params linked below.
        self.brown = GPy.kern.Brownian(variance=brown_variance, active_dims=[0])
        self.rbf = GPy.kern.RBF(variance=rbf_variance, lengthscale=rbf_lengthscale, input_dim=1, active_dims=[1])
        self.rbf_add = GPy.kern.RBF(variance=rbf_variance, lengthscale=rbf_lengthscale, input_dim=1, active_dims=[1])

        # Shared (tied) parameters, exposed once on the Combined kernel.
        self.rbf_variance = Param('rbf_variance', rbf_variance, Logexp())
        self.link_parameter(self.rbf_variance)
        self.rbf_lengthscale = Param('rbf_lengthscale', rbf_lengthscale, Logexp())
        self.link_parameter(self.rbf_lengthscale)
        self.brown_variance = Param('brown_variance', brown_variance, Logexp())
        self.link_parameter(self.brown_variance)

    def parameters_changed(self):
        # Called by paramz whenever any parameter changes: push the shared
        # (tied) values down into the sub-kernels.
        self.rbf.variance = self.rbf_add.variance = self.rbf_variance
        self.rbf.lengthscale = self.rbf_add.lengthscale = self.rbf_lengthscale
        self.brown.variance = self.brown_variance

    @Cache_this(limit = 3)
    def K(self, X, X2 = None):
        return self.rbf_add.K(X, X2) + self.brown.K(X, X2) * self.rbf.K(X, X2)

    @Cache_this(limit = 3)
    def Kdiag(self, X):
        return self.rbf_add.Kdiag(X) + self.brown.Kdiag(X) * self.rbf.Kdiag(X)

    # NOTE ON OPTIMISATION:
    #   Should be able to get away with only optimising the parameters of one sigmoidal kernel and propagating them

    def update_gradients_full(self, dL_dK, X, X2 = None): # See NOTE ON OPTIMISATION
        self.brown.update_gradients_full(dL_dK * self.rbf.K(X, X2), X, X2)
        self.rbf.update_gradients_full(dL_dK * self.brown.K(X, X2), X, X2)

        self.rbf_add.update_gradients_full(dL_dK, X, X2)

        self.rbf_variance.gradient = self.rbf.variance.gradient + self.rbf_add.variance.gradient
        self.rbf_lengthscale.gradient = self.rbf.lengthscale.gradient + self.rbf_add.lengthscale.gradient
        self.brown_variance.gradient = self.brown.variance.gradient


    def update_gradients_diag(self, dL_dK, X):
        self.brown.update_gradients_diag(dL_dK * self.rbf.Kdiag(X), X)
        self.rbf.update_gradients_diag(dL_dK * self.brown.Kdiag(X), X)

        self.rbf_add.update_gradients_diag(dL_dK, X)

        self.rbf_variance.gradient = self.rbf.variance.gradient + self.rbf_add.variance.gradient
        self.rbf_lengthscale.gradient = self.rbf.lengthscale.gradient + self.rbf_add.lengthscale.gradient
        self.brown_variance.gradient = self.brown.variance.gradient

The important method is "parameters_changed".
It is called whenever a parameter changes and synchronizes the parameters of the sub-kernels.
Further, the sub-kernel gradients are simply added up to obtain the gradient of each shared parameter.
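For completeness, here is a usage sketch of the Combined kernel above; the data, model setup, and parameter values are illustrative assumptions, not from the original comment:

import numpy as np
import GPy

# Illustrative data: two input columns, matching input_dim=2 above.
X = np.random.rand(50, 2)
Y = np.sin(6 * X[:, :1]) + 0.05 * np.random.randn(50, 1)

kernel = Combined(input_dim=2)
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize()

# The tied parameters (rbf_variance, rbf_lengthscale, brown_variance) appear
# exactly once in the parameter table; parameters_changed() keeps the inner
# rbf / rbf_add / brown kernels in sync during optimization.
print(model)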
