gpytorchwrapper.src.kernels.matern_kernel_perminv
Classes
- class gpytorchwrapper.src.kernels.matern_kernel_perminv.MaternKernelPermInv(n_atoms: int, idx_equiv_atoms: list[list[int]], select_dims: list[int] = None, nu: float = 2.5, ard: bool = False, representation: str = 'invdist', **kwargs)[source]
Bases: PermInvKernel
- forward(x1, x2, diag=False, last_dim_is_batch: bool | None = False, **params)[source]
Computes the covariance between \(\mathbf x_1\) and \(\mathbf x_2\). This method should be implemented by all Kernel subclasses.
- Parameters:
x1 – First set of data (… x N x D).
x2 – Second set of data (… x M x D).
diag – Should the Kernel compute the whole kernel, or just the diag? If True, it must be the case that x1 == x2. (Default: False.)
last_dim_is_batch – If True, treat the last dimension of x1 and x2 as another batch dimension. (Useful for additive structure over the dimensions). (Default: False.)
- Returns:
The kernel matrix or vector. The shape depends on the kernel's evaluation mode:
- full_covar: … x N x M
- full_covar with last_dim_is_batch=True: … x K x N x M
- diag: … x N
- diag with last_dim_is_batch=True: … x K x N
- has_lengthscale = True
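As a usage sketch, the kernel can be instantiated for a small system with one group of interchangeable atoms and then evaluated in full and diagonal modes. The atom indexing and the input layout below are illustrative assumptions, not taken from this module's documentation; the meaning of the feature dimension D depends on the chosen representation.

```python
import torch
from gpytorchwrapper.src.kernels.matern_kernel_perminv import MaternKernelPermInv

# Hypothetical 3-atom system in which atoms 1 and 2 are interchangeable.
n_atoms = 3
idx_equiv_atoms = [[1, 2]]  # assumed 0-based atom indices

kernel = MaternKernelPermInv(
    n_atoms=n_atoms,
    idx_equiv_atoms=idx_equiv_atoms,
    nu=2.5,                    # Matérn smoothness
    ard=False,                 # single shared lengthscale
    representation="invdist",  # default representation
)

# Toy inputs of shape (N, D); flattened Cartesian coordinates (3 * n_atoms)
# are assumed here — adjust D to whatever the chosen representation expects.
x1 = torch.randn(5, 3 * n_atoms)
x2 = torch.randn(4, 3 * n_atoms)

full = kernel(x1, x2).to_dense()   # full covariance, shape 5 x 4
                                   # (use .evaluate() on older GPyTorch versions)
diag = kernel(x1, x1, diag=True)   # diagonal only, shape 5
print(full.shape, diag.shape)
```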
- class gpytorchwrapper.src.kernels.matern_kernel_perminv.Model(train_x, train_y, likelihood)[source]
Bases: ExactGP
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
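The docstring above is inherited from torch.nn.Module. In GPyTorch, the concrete forward of an ExactGP subclass typically builds a mean and a covariance and returns a MultivariateNormal. The following is a minimal sketch of that pattern, not necessarily this module's exact implementation; the kernel settings and data shapes are illustrative assumptions.

```python
import torch
import gpytorch
from gpytorchwrapper.src.kernels.matern_kernel_perminv import MaternKernelPermInv

class SketchModel(gpytorch.models.ExactGP):
    """Minimal ExactGP following the usual GPyTorch pattern; the actual Model
    in this module may configure its mean and kernel differently."""

    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # Illustrative settings for a 3-atom system with two equivalent atoms.
        self.covar_module = gpytorch.kernels.ScaleKernel(
            MaternKernelPermInv(n_atoms=3, idx_equiv_atoms=[[1, 2]])
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

# Toy training data (assumed layout: flattened Cartesian coordinates).
train_x = torch.randn(20, 9)
train_y = torch.randn(20)
likelihood = gpytch = gpytorch.likelihoods.GaussianLikelihood()
model = SketchModel(train_x, train_y, likelihood)

# Call the module instance (not .forward) so hooks and ExactGP caching run.
model.train()
output = model(train_x)  # a MultivariateNormal over the training inputs
```

Note that `likelihood = gpytch = gpytorch.likelihoods.GaussianLikelihood()` above should read `likelihood = gpytorch.likelihoods.GaussianLikelihood()`.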