How to have different kernels for different output dimensions in GPyTorch

I'm still new to GPyTorch and was wondering if it is possible to have a different kernel for each output dimension in a multitask setting. Say I have a multitask SVGP as described in the docs (https://docs.gpytorch.ai/en/stable/examples/04_Variational_and_Approximate_GPs/SVGP_Multitask_GP_Regression.html), and I want a linear kernel for the first output dimension, an RBF kernel for the second, and so on.

In GPflow this is possible using the SeparateIndependent kernel for independent outputs. However, I can't figure out how to do the same in GPyTorch. I'm assuming it could be done by querying different kernels in the forward method and somehow combining their outputs, but I have been unable to get this to work.
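For reference, this is roughly the GPflow setup I have in mind (a minimal sketch; the kernel choices, the inducing locations, and num_latent_gps are placeholders for my actual setup):

import gpflow
import numpy as np

# One kernel per output: linear for the first dimension, RBF for the second
kernel = gpflow.kernels.SeparateIndependent(
    [gpflow.kernels.Linear(), gpflow.kernels.SquaredExponential()]
)

# Shared (placeholder) inducing locations used by both latent GPs
Z = np.linspace(0.0, 1.0, 16)[:, None]
inducing_variable = gpflow.inducing_variables.SharedIndependentInducingVariables(
    gpflow.inducing_variables.InducingPoints(Z)
)

model = gpflow.models.SVGP(
    kernel=kernel,
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=inducing_variable,
    num_latent_gps=2,
)

My naive attempt at the equivalent in GPyTorch, for two latent dimensions, looks like this: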

def forward(self, x):
    # The forward function should be written as if we were dealing with each
    # output dimension in batch; with batched inducing points, x presumably
    # arrives here with shape (num_latents, N, d)
    mean_x = self.mean_module(x)  # (num_latents, N)
    covar_1 = self.covar_module_1(x[0])  # linear kernel on the first latent
    covar_2 = self.covar_module_2(x[1])  # RBF kernel on the second latent
    # Stack the dense kernel matrices into a (2, N, N) batch
    # (.evaluate() is called .to_dense() in newer GPyTorch versions)
    covar_x = torch.stack([covar_1.evaluate(), covar_2.evaluate()])

    return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

Is the only solution here to create a separate GP for each output and then manually put the results together?
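For completeness, here is the full model I'm experimenting with, following the batched LMC setup from the linked docs example (the number of latents and tasks, the inducing-point shapes, and the two kernel choices are just my assumptions):

import torch
import gpytorch

num_latents = 2
num_tasks = 2

class MultitaskGPModel(gpytorch.models.ApproximateGP):
    def __init__(self):
        # One set of inducing points per latent function
        inducing_points = torch.rand(num_latents, 16, 1)

        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(-2), batch_shape=torch.Size([num_latents])
        )
        variational_strategy = gpytorch.variational.LMCVariationalStrategy(
            gpytorch.variational.VariationalStrategy(
                self, inducing_points, variational_distribution,
                learn_inducing_locations=True,
            ),
            num_tasks=num_tasks,
            num_latents=num_latents,
            latent_dim=-1,
        )
        super().__init__(variational_strategy)

        self.mean_module = gpytorch.means.ConstantMean(batch_shape=torch.Size([num_latents]))
        # A separate kernel per latent dimension instead of one batched kernel
        self.covar_module_1 = gpytorch.kernels.ScaleKernel(gpytorch.kernels.LinearKernel())
        self.covar_module_2 = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_1 = self.covar_module_1(x[0])
        covar_2 = self.covar_module_2(x[1])
        covar_x = torch.stack([covar_1.evaluate(), covar_2.evaluate()])
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

Training would then use gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=num_tasks) and gpytorch.mlls.VariationalELBO, as in the docs example.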
