gpytorchwrapper.src.kernels.linearxmatern_kernel_perminv

Classes

LinearxMaternKernelPermInv(n_atoms, ...[, ...])

class gpytorchwrapper.src.kernels.linearxmatern_kernel_perminv.LinearxMaternKernelPermInv(n_atoms: int, idx_equiv_atoms: list[list[int]], select_dims: list[int] = None, nu: float = 2.5, ard: bool = False, representation: str = 'invdist', variance_prior: Prior | None = None, variance_constraint: Interval | None = None, **kwargs)[source]

Bases: PermInvKernel
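A minimal construction sketch (illustrative only, not taken from the package documentation): the atom ordering, the indices in idx_equiv_atoms, and the assumption that representation='invdist' expects one inverse interatomic distance per atom pair (D = n_atoms * (n_atoms - 1) / 2 input features) are all assumptions.

    import torch
    from gpytorchwrapper.src.kernels.linearxmatern_kernel_perminv import (
        LinearxMaternKernelPermInv,
    )

    # Water-like system: 3 atoms, where atoms 1 and 2 (the two hydrogens in an
    # assumed O, H, H ordering) are treated as interchangeable.
    kernel = LinearxMaternKernelPermInv(
        n_atoms=3,
        idx_equiv_atoms=[[1, 2]],
        nu=2.5,                    # Matern smoothness (default)
        ard=False,                 # one shared lengthscale
        representation="invdist",  # inverse-distance features (assumed D = 3 here)
    )

    x = torch.rand(10, 3)          # 10 geometries, 3 inverse distances each
    covar = kernel(x)              # lazily evaluated 10 x 10 covariance matrix
    print(covar.shape)             # torch.Size([10, 10])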

forward(x1, x2, diag=False, last_dim_is_batch: bool | None = False, **params)[source]

Computes the covariance between x1 and x2. This method should be implemented by all Kernel subclasses.

Parameters:
  • x1 – First set of data (… x N x D).

  • x2 – Second set of data (… x M x D).

  • diag – Should the Kernel compute the whole kernel, or just the diag? If True, it must be the case that x1 == x2. (Default: False.)

  • last_dim_is_batch – If True, treat the last dimension of x1 and x2 as another batch dimension. (Useful for additive structure over the dimensions.) (Default: False.)

Returns:

The kernel matrix or vector. The shape depends on the kernel’s evaluation mode:

  • full_covar: … x N x M

  • full_covar with last_dim_is_batch=True: … x K x N x M

  • diag: … x N

  • diag with last_dim_is_batch=True: … x K x N
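The shape conventions above can be illustrated with a short, hedged sketch. In GPyTorch a kernel is normally evaluated through its __call__ method, which delegates to forward(); the feature dimension D = 3 below is an arbitrary illustrative choice.

    import torch
    from gpytorchwrapper.src.kernels.linearxmatern_kernel_perminv import (
        LinearxMaternKernelPermInv,
    )

    kernel = LinearxMaternKernelPermInv(n_atoms=3, idx_equiv_atoms=[[1, 2]])

    x1 = torch.rand(8, 3)          # ... x N x D with N = 8
    x2 = torch.rand(5, 3)          # ... x M x D with M = 5

    full = kernel(x1, x2)          # full_covar: ... x N x M  ->  8 x 5
    diag = kernel(x1, diag=True)   # diag:       ... x N      ->  8
    print(full.shape, diag.shape)  # torch.Size([8, 5]) torch.Size([8])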

has_lengthscale = True

linear_kernel(x1, x2, diag, last_dim_is_batch, **params)[source]

matern_kernel(x1, x2, diag, **params)[source]

property variance: Tensor
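The members listed above can be inspected directly on an instance. The sketch below is hypothetical: the GammaPrior values are arbitrary, and lengthscale is the parameter registered by the GPyTorch Kernel base class because has_lengthscale is True.

    from gpytorch.priors import GammaPrior
    from gpytorchwrapper.src.kernels.linearxmatern_kernel_perminv import (
        LinearxMaternKernelPermInv,
    )

    kernel = LinearxMaternKernelPermInv(
        n_atoms=3,
        idx_equiv_atoms=[[1, 2]],
        variance_prior=GammaPrior(2.0, 0.15),  # optional prior on the variance
    )

    print(kernel.has_lengthscale)  # True
    print(kernel.lengthscale)      # lengthscale registered via has_lengthscale
    print(kernel.variance)         # the Tensor property documented above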