Helper utilities

RxGP exports several utility functions that are used throughout the message-passing rules and are also available to end users.

Mean function application

RxGP.apply_mean_fn (Function)
apply_mean_fn(x, mf)

Apply a scalar mean function mf to input x, handling different input types:

  • If x is a scalar, apply mf directly, e.g. mf(x) = x^2 gives apply_mean_fn(3, mf) == 9.
  • If x is a length-1 vector of numbers, apply mf to the single element.
  • If x is a longer vector of numbers, apply mf to the entire vector.
  • If x is a vector of vectors, use apply_mean_fn.(x, mf) to broadcast over sub-vectors.

Applies the scalar mean function mf to an input x. Handles dispatch for PointMass, Distribution, scalar, and vector inputs.
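The dispatch behaviour above can be sketched in plain Julia. This is an illustrative re-implementation under a hypothetical name, not RxGP's actual method table, which additionally covers PointMass and Distribution inputs:

```julia
# Illustrative re-implementation of the dispatch described above;
# the real apply_mean_fn also has methods for PointMass and Distribution.
apply_mean_fn_sketch(x::Real, mf) = mf(x)            # scalar: apply directly
apply_mean_fn_sketch(x::AbstractVector{<:Real}, mf) =
    length(x) == 1 ? mf(x[1]) : mf(x)                # length-1 vs. longer vectors
apply_mean_fn_sketch(x::AbstractVector{<:AbstractVector}, mf) =
    apply_mean_fn_sketch.(x, mf)                     # broadcast over sub-vectors

mf(x) = x^2
apply_mean_fn_sketch(3, mf)    # 9
apply_mean_fn_sketch([3], mf)  # 9
```

Note that in the broadcast case the function `mf` is treated as a scalar by Julia's broadcasting machinery, so only the outer vector is iterated.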

Mean and covariance extraction

RxGP.mean_cov_scalar_matrix (Function)
mean_cov_scalar_matrix(x)

Return (mean, cov) where mean is a scalar and cov is a 1×1 matrix. Input x must be one-dimensional (scalar, length-1 vector, PointMass, or univariate distribution).

RxGP.mean_cov_vector_matrix (Function)
mean_cov_vector_matrix(x)

Return (mean, cov) where mean is a vector and cov is a matrix, regardless of the dimensionality of x. Accepts Real, AbstractVector, PointMass, or any NormalDistributionsFamily.


These convenience functions extract (mean, cov) from various input types (PointMass, Real, distribution) and normalise the output shapes:

Function                  Returns
mean_cov_scalar_matrix    (scalar, 1×1 matrix)
mean_cov_vector_matrix    (vector, matrix)
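A minimal sketch of the shape normalisation, covering only Real and vector inputs (the real functions also accept PointMass and distribution types, where the covariance comes from the distribution rather than being zero); the `_sketch` names are hypothetical:

```julia
# Hypothetical sketch: normalise (mean, cov) shapes for deterministic inputs.
# A point value carries zero covariance; distributional inputs are handled
# by extra methods in the real package.
mean_cov_scalar_matrix_sketch(x::Real) = (x, zeros(1, 1))
mean_cov_scalar_matrix_sketch(x::AbstractVector{<:Real}) = (only(x), zeros(1, 1))

mean_cov_vector_matrix_sketch(x::Real) = ([x], zeros(1, 1))
mean_cov_vector_matrix_sketch(x::AbstractVector{<:Real}) =
    (collect(x), zeros(length(x), length(x)))

mean_cov_scalar_matrix_sketch([2.5])   # (2.5, 1×1 zero matrix)
```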

Linear algebra helpers

RxGP.jdotavx (Function)
jdotavx(a, b)

Vectorised dot product of arrays a and b using LoopVectorization.@turbo.

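The docstring matches the standard `@turbo` dot-product pattern from the LoopVectorization documentation; a likely shape is sketched below (treat it as an illustration, not RxGP's exact source):

```julia
using LoopVectorization

# Vectorised dot product in the style of LoopVectorization's jdotavx example.
function jdotavx_sketch(a, b)
    s = zero(eltype(a))
    @turbo for i in eachindex(a, b)
        s += a[i] * b[i]
    end
    return s
end

jdotavx_sketch([1.0, 2.0], [3.0, 4.0])  # 11.0
```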

Inducing variable buffer

RxGP.BufferUniSGP (Type)
BufferUniSGP{D,M}

Wraps an inducing-variable message qv together with its UniSGPMeta. When the product of all N incoming messages has been accumulated (tracked via meta.counter), updates the Cholesky factor meta.Uv.


BufferUniSGP wraps an inducing-variable message together with its meta object. Once the product of all incoming messages to v has been accumulated (tracked via meta.counter), it refreshes the Cholesky factor of the second moment, meta.Uv. Recomputing the factor once per sweep, rather than once per message, keeps the free-energy computation efficient.
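The counter-gated update can be illustrated with a stripped-down stand-in. Field names loosely follow the docs (counter, Uv), but the real BufferUniSGP{D,M} stores message and meta objects rather than raw arrays:

```julia
using LinearAlgebra

# Stand-in for the counter-gated pattern: count N incoming messages,
# then refresh the Cholesky factor of the second moment E[vvᵀ] = Σ + μμᵀ.
mutable struct BufferSketch
    μ::Vector{Float64}   # mean of the accumulated message product
    Σ::Matrix{Float64}   # its covariance
    counter::Int
    N::Int               # number of incoming messages to wait for
    Uv::Matrix{Float64}  # upper Cholesky factor, refreshed once per sweep
end

function on_message!(buf::BufferSketch)
    buf.counter += 1
    if buf.counter == buf.N  # product of all N messages is complete
        buf.Uv = Matrix(cholesky(Symmetric(buf.Σ + buf.μ * buf.μ')).U)
        buf.counter = 0
    end
    return buf
end
```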

Hyperparameter optimisation

The following functions compute the negative log backward message and its gradient with respect to the kernel hyperparameters $\theta$. They are designed to be used with gradient-based optimisers (e.g. Optim.jl or Flux.jl).
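As a dependency-free illustration of the intended M-step pattern, here is plain gradient descent on a stand-in objective. In practice one would hand one of the neg_log_backwardmess_* objectives and its gradient to Optim.jl; `f`, `∇f`, and `descend` below are all hypothetical:

```julia
# Stand-in objective over θ = (log ℓ, log σ); the real objective is one of
# the neg_log_backwardmess_* functions.
f(θ) = (θ[1] - 0.5)^2 + 2 * (θ[2] + 1.0)^2
∇f(θ) = [2 * (θ[1] - 0.5), 4 * (θ[2] + 1.0)]

# Minimal gradient descent; Optim.jl's LBFGS would replace this in practice.
function descend(∇f, θ0; η = 0.1, iters = 200)
    θ = copy(θ0)
    for _ in 1:iters
        θ .-= η .* ∇f(θ)
    end
    return θ
end

descend(∇f, [0.0, 0.0])   # converges to the minimiser [0.5, -1.0]
```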

RxGP.neg_log_backwardmess_fast (Function)
neg_log_backwardmess_fast(θ; y_data, x_y_data, q_v, q_w, kernel, mean_fn, Xu, ...)

Negative log backward message toward θ for the univariate VSGP. Supports function-value observations (y_data/x_y_data), gradient observations (ω_data/x_ω_data), and either point or distributional inputs. Used as the M-step objective in grad_llh_default!.

RxGP.neg_log_backwardmess_uncertain (Function)
neg_log_backwardmess_uncertain(θ; y_data, qx, v, Uv, w, kernel, Xu, method)

Negative log backward message toward θ when inputs qx are distributions. Uses Gauss–Hermite cubature to compute kernel expectations Ψ0, Ψ1, Ψ2.

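Gauss–Hermite cubature itself is easy to sketch without RxGP: Golub–Welsch nodes and weights from the symmetric tridiagonal Jacobi matrix, then an expectation under N(μ, σ²). All names here are illustrative; RxGP's kernel expectations Ψ0, Ψ1, Ψ2 apply the same quadrature rule to kernel evaluations rather than to a generic f:

```julia
using LinearAlgebra

# Golub–Welsch: Gauss–Hermite nodes and weights (weight function e^(-t²)).
function gausshermite_sketch(n)
    β = sqrt.((1:n-1) ./ 2)            # Hermite three-term recurrence coefficients
    λ, V = eigen(SymTridiagonal(zeros(n), β))
    return λ, sqrt(π) .* V[1, :] .^ 2  # nodes, weights (weights sum to √π)
end

# E[f(x)] for x ~ N(μ, σ²) via the change of variables x = μ + √2·σ·t.
function expect(f, μ, σ; n = 20)
    t, w = gausshermite_sketch(n)
    return sum(w .* f.(μ .+ sqrt(2) * σ .* t)) / sqrt(π)
end

expect(x -> x^2, 1.0, 2.0)   # ≈ μ² + σ² = 5.0
```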
RxGP.neg_log_backwardmess_msg (Function)
neg_log_backwardmess_msg(θ; in_data_, out_data_, q_v, q_W_, meta_)

Negative log backward message toward θ computed via @call_rule on UniSGP_dID(:θ, Marginalisation). Sums log-pdf contributions over all data points and batches in in_data_.

RxGP.neg_log_backwardmess_multi (Function)
neg_log_backwardmess_multi(θ; y_data, qx, sumRv_Wbar, v, W, tr_W, kernel, Xu, method)

Negative log backward message toward θ for the multivariate (MultiSGP) VSGP with C = I. Expects pre-computed sumRv_Wbar = sum(Rv_blk .* W) and tr_W = tr(W) for efficiency.
