Delta node

The delta node encodes a deterministic functional relationship between variables. Where a stochastic node represents p(y | x), a delta node asserts that y = f(x₁, …, xₙ) exactly. Any Julia function f can be used.

z ~ f(x, y)   # z is deterministically f(x, y)

Because f is not a probability distribution, the standard closed-form message computation does not apply. The engine must approximate the outgoing messages. The approximation method is specified via DeltaMeta:

z ~ f(x, y) where { meta = DeltaMeta(method = Linearization()) }
z ~ f(x, y) where { meta = DeltaMeta(method = Unscented()) }
z ~ f(x, y) where { meta = DeltaMeta(method = CVI(...)) }

Choosing an approximation method

  • Linearization — best for smooth f that is approximately linear near the operating point. Needs the Jacobian, computed automatically via ForwardDiff.
  • Unscented / UT — best for nonlinear but smooth f in moderate dimension. Uses sigma points; no derivatives required.
  • CVI — best for black-box or non-differentiable f in high dimension. Needs a stochastic gradient estimator and an optimizer.
  • CVIProjection — same as CVI, with the result projected onto an exponential-family member. Needs the same as CVI.
  • LaplaceApproximation — best for unimodal posteriors with differentiable f. Uses a second-order Taylor expansion at the mode.
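Putting the pieces together, here is a minimal model sketch, assuming RxInfer's @model macro and a hypothetical user-defined nonlinearity g (names are illustrative); per the guidance above, the unscented transform is a reasonable default for a smooth but nonlinear function:

```julia
using RxInfer

# Hypothetical smooth, nonlinear deterministic transformation
g(x) = sin(x) + x^2

@model function nonlinear_model(y)
    x ~ Normal(mean = 0.0, variance = 1.0)
    # Delta node: z is deterministically g(x); approximate with sigma points
    z ~ g(x) where { meta = DeltaMeta(method = Unscented()) }
    y ~ Normal(mean = z, variance = 0.1)
end
```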

When f has a known analytical inverse f⁻¹, you can pass it as the inverse keyword to skip the backward approximation entirely:

z ~ f(x) where { meta = DeltaMeta(method = Linearization(), inverse = f_inv) }

Without an inverse, the backward (input) messages are computed via the RTS smoother (Petersen et al., 2018).
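For instance, a minimal sketch with a hypothetical elementwise exponential, whose inverse is known analytically:

```julia
f(z) = exp.(z)       # elementwise exponential
f_inv(z) = log.(z)   # its analytical inverse

z ~ f(x) where { meta = DeltaMeta(method = Linearization(), inverse = f_inv) }
```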

Multi-input delta nodes

When a delta node has more than one input, the @rule macro receives the inputs bundled in a ReactiveMP.ManyOf container. This lets the rule dispatch on the collection of input messages rather than on each message individually:

@rule DeltaFn{typeof(f)}(:out, Marginalisation) (
    m_ins::ReactiveMP.ManyOf,
    meta::DeltaMeta{<:Linearization},
) = begin
    # m_ins[1], m_ins[2], ... are the individual input messages
    ...
end

See the Message update rules page for how to define rules with @rule.

Note

The delta node is Deterministic and does not contribute to the Bethe free energy directly. It only transforms information between variables.

For the full API of approximation methods (CVI, Unscented, Linearization, etc.), see Approximation methods.

ReactiveMP.DeltaMetaType
DeltaMeta(method = ..., [ inverse = ... ])

DeltaMeta structure specifies the approximation method for the outbound messages in the DeltaFn node.

Arguments

  • method: required, the approximation method, currently supported methods are Linearization, Unscented and CVI.
  • inverse: optional, if no inverse provided, the backward rule will be computed based on RTS (Petersen et al. 2018; On Approximate Delta Gaussian Message Passing on Factor Graphs)

It is also possible to pass the AbstractApproximationMethod to the meta of the delta node directly. In this case inverse is set to nothing.
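A minimal sketch of this shorthand (g is a hypothetical user function):

```julia
# Equivalent to DeltaMeta(method = Linearization(), inverse = nothing)
z ~ g(x) where { meta = Linearization() }
```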

source
ReactiveMP.ManyOfType

Some nodes use IndexedInterface; the ManyOf structure reflects a collection of marginals gathered from the collection of IndexedInterfaces. The @rule macro also treats ManyOf specially.

source
ReactiveMP.smoothRTSFunction

RTS smoother update for inbound marginal; based on (Petersen et al. 2018; On Approximate Delta Gaussian Message Passing on Factor Graphs)

source
ReactiveMP.CVIApproximationDeltaFnRuleLayoutType
CVIApproximationDeltaFnRuleLayout

Custom rule layout for the Delta node in case of the CVI approximation method:

Layout

In order to compute:

  • q_out: mirrors the posterior marginal on the out edge
  • q_ins: uses inbound message on the out edge and all inbound messages on the ins edges
  • m_out: uses the joint over the ins edges
  • m_in_k: uses the inbound message on the in_k edge and q_ins

source
ReactiveMP.log_approximateFunction

This function calculates the log of the Gauss–Laguerre integral by making use of the log of the integrand:

ln(∫ exp(-x) f(x) dx) ≈ ln(∑ wᵢ f(xᵢ))
                      = ln(∑ exp(ln(wᵢ) + log f(xᵢ)))
                      = ln(∑ exp(yᵢ))
                      = max(yᵢ) + ln(∑ exp(yᵢ - max(yᵢ)))

where the last step is the numerically stable log-sum-exp trick: https://en.wikipedia.org/wiki/LogSumExp
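The derivation above can be sketched in plain Julia with a hardcoded two-point Gauss–Laguerre rule (nodes 2 ∓ √2 with the standard weights); this is an illustration of the technique, not the library implementation:

```julia
# Two-point Gauss–Laguerre rule: approximates ∫₀^∞ exp(-x) f(x) dx ≈ ∑ wᵢ f(xᵢ)
nodes   = [2 - sqrt(2), 2 + sqrt(2)]
weights = [(2 + sqrt(2)) / 4, (2 - sqrt(2)) / 4]

# Compute log(∫ exp(-x) f(x) dx) stably from log f, via log-sum-exp
function log_gauss_laguerre(logf)
    y = log.(weights) .+ logf.(nodes)   # yᵢ = ln(wᵢ) + log f(xᵢ)
    m = maximum(y)
    return m + log(sum(exp.(y .- m)))   # max(yᵢ) + ln(∑ exp(yᵢ - max(yᵢ)))
end

log_gauss_laguerre(x -> 0.0)        # f(x) = 1: ∫ exp(-x) dx = 1, so log ≈ 0
log_gauss_laguerre(x -> log(x))     # f(x) = x: ∫ x exp(-x) dx = 1, so log ≈ 0
```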

source
ReactiveMP.DeltaFnDefaultRuleLayoutType
DeltaFnDefaultRuleLayout

Default rule layout for the Delta node:

Layout

In order to compute:

  • q_out: mirrors the posterior marginal on the out edge
  • q_ins: uses inbound message on the out edge and all inbound messages on the ins edges
  • m_out: uses all inbound messages on the ins edges
  • m_in_k: uses the inbound message on the in_k edge and q_ins

See also: ReactiveMP.DeltaFnDefaultKnownInverseRuleLayout

source
ReactiveMP.DeltaFnDefaultKnownInverseRuleLayoutType
DeltaFnDefaultKnownInverseRuleLayout

Default rule layout for the Delta node when the inverse of f is known:

Layout

In order to compute:

  • q_out: mirrors the posterior marginal on the out edge (same as the DeltaFnDefaultRuleLayout)
  • q_ins: uses inbound message on the out edge and all inbound messages on the ins edges (same as the DeltaFnDefaultRuleLayout)
  • m_out: uses all inbound messages on the ins edges (same as the DeltaFnDefaultRuleLayout)
  • m_in_k: uses inbound message on the out edge and inbound messages on the ins edges except k

source
ReactiveMP.SoftDotType
SoftDot

The SoftDot node can be used as a substitute for the dot product delta node (the outgoing variable is the dot product of two others). It softens the delta constraint by adding Gaussian noise as follows:

y ~ N(dot(θ, x), γ^(-1))

Interfaces:

  1. y - result of the "soft" dot product,
  2. θ - first variable to be multiplied,
  3. x - second variable to be multiplied,
  4. γ - precision of the Gaussian noise.

The advantage of using SoftDot is that it offers tractable, optimized closed-form message updates for both Belief Propagation and Variational Message Passing.
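A hedged usage sketch, assuming the softdot alias mentioned below is available inside an RxInfer @model (distribution choices here are illustrative):

```julia
@model function soft_regression(y, x)
    θ ~ MvNormal(mean = zeros(2), covariance = diageye(2))
    γ ~ Gamma(shape = 1.0, rate = 1.0)
    # "Soft" dot product: y ~ N(dot(θ, x), γ⁻¹) with closed-form messages
    y ~ softdot(θ, x, γ)
end
```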

See also: softdot

source