Delta node
The delta node encodes a deterministic functional relationship between variables. Where a stochastic node represents p(y | x), a delta node asserts that y = f(x₁, …, xₙ) exactly. Any Julia function f can be used.
```julia
z ~ f(x, y) # z is deterministically f(x, y)
```
Because f is not a probability distribution, the standard closed-form message computation does not apply. The engine must approximate the outgoing messages. The approximation method is specified via `DeltaMeta`:
```julia
z ~ f(x, y) where { meta = DeltaMeta(method = Linearization()) }
z ~ f(x, y) where { meta = DeltaMeta(method = Unscented()) }
z ~ f(x, y) where { meta = DeltaMeta(method = CVI(...)) }
```
Choosing an approximation method
| Method | Best for | What it needs |
|---|---|---|
| Linearization | Smooth f that is approximately linear near the operating point | Jacobian, computed automatically via ForwardDiff |
| Unscented / UT | Nonlinear but smooth f in moderate dimension | Sigma points; no derivatives required |
| CVI | Black-box or non-differentiable f, high dimension | Stochastic gradient estimator; requires an optimizer |
| CVIProjection | Same as CVI, with the result projected onto an exponential-family member | Same as CVI |
| LaplaceApproximation | Unimodal posteriors; f differentiable | Second-order Taylor expansion at the mode |
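To make the Linearization row concrete, here is a minimal pure-Julia sketch of what a first-order (Taylor) linearization does to a scalar Gaussian message. This is an illustration of the idea only, not ReactiveMP's implementation, which obtains the Jacobian via ForwardDiff and handles the multivariate case:

```julia
# Propagate a scalar Gaussian message N(m, v) through f via linearization.
f(x)  = exp(x)
df(x) = exp(x)      # derivative of f, written by hand here for clarity

m, v = 0.0, 0.25    # mean and variance of the incoming message

# A first-order Taylor expansion of f around m maps the Gaussian to
# another Gaussian: mean f(m), variance f'(m)^2 * v.
m_out = f(m)          # 1.0
v_out = df(m)^2 * v   # 0.25
```

The quality of this approximation degrades as f becomes more curved over the effective support of the incoming message, which is why the Unscented and CVI methods exist.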
When f has a known analytical inverse f⁻¹, you can pass it via the `inverse` keyword to skip the backward approximation entirely:
```julia
z ~ f(x) where { meta = DeltaMeta(method = Linearization(), inverse = f_inv) }
```
Without an inverse, the backward (input) messages are computed via the RTS smoother (Petersen et al., 2018).
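For example, with f = exp a natural analytical inverse is log. The pair below is a sketch of what one might define and pass; the commented model line repeats the syntax shown above for context:

```julia
f(x)     = exp(x)
f_inv(z) = log(z)   # analytical inverse of f

# In a model (sketch, using the syntax from this page):
# z ~ f(x) where { meta = DeltaMeta(method = Linearization(), inverse = f_inv) }

f_inv(f(2.0))  # round-trips back to 2.0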
Multi-input delta nodes
When a delta node has more than one input, the @rule macro receives the inputs bundled in a ReactiveMP.ManyOf container. This lets the rule dispatch on the collection of input messages rather than individually:
```julia
@rule DeltaFn{typeof(f)}(:out, Marginalisation) (
    m_ins::ReactiveMP.ManyOf,
    meta::DeltaMeta{<:Linearization},
) = begin
    # m_ins[1], m_ins[2], ... are the individual input messages
    ...
end
```
See the Message update rules page for how to define rules with `@rule`.
The delta node is Deterministic and does not contribute to the Bethe free energy directly. It only transforms information between variables.
For the full API of approximation methods (CVI, Unscented, Linearization, etc.), see Approximation methods.
ReactiveMP.DeltaMeta — Type
```julia
DeltaMeta(method = ..., [ inverse = ... ])
```
The `DeltaMeta` structure specifies the approximation method for the outbound messages in the `DeltaFn` node.

Arguments

- `method`: required, the approximation method; currently supported methods are `Linearization`, `Unscented` and `CVI`.
- `inverse`: optional; if no inverse is provided, the backward rule is computed based on RTS (Petersen et al. 2018; On Approximate Delta Gaussian Message Passing on Factor Graphs).
It is also possible to pass an `AbstractApproximationMethod` to the meta of the delta node directly. In this case `inverse` is set to `nothing`.
ReactiveMP.ManyOf — Type
Some nodes use `IndexedInterface`. The `ManyOf` structure represents a collection of marginals from a collection of `IndexedInterface`s. The `@rule` macro also treats `ManyOf` specially.
ReactiveMP.smoothRTS — Function
RTS smoother update for inbound marginal; based on (Petersen et al. 2018; On Approximate Delta Gaussian Message Passing on Factor Graphs)
ReactiveMP.CVIApproximationDeltaFnRuleLayout — Type
```julia
CVIApproximationDeltaFnRuleLayout
```
Custom rule layout for the Delta node in case of the CVI approximation method:
Layout
In order to compute:
- `q_out`: mirrors the posterior marginal on the `out` edge
- `q_ins`: uses the inbound message on the `out` edge and all inbound messages on the `ins` edges
- `m_out`: uses the joint over the `ins` edges
- `m_in_k`: uses the inbound message on the `in_k` edge and `q_ins`
ReactiveMP.log_approximate — Function
This function calculates the log of the Gauss-Laguerre integral by making use of the log of the integrand:

ln( ∫ exp(-x) f(x) dx ) ≈ ln( ∑ᵢ wᵢ f(xᵢ) )
                        = ln( ∑ᵢ exp( ln(wᵢ) + ln f(xᵢ) ) )
                        = ln( ∑ᵢ exp(yᵢ) )
                        = max(yᵢ) + ln( ∑ᵢ exp( yᵢ - max(yᵢ) ) )

where the last step is the numerically stable log-sum-exp trick: https://en.wikipedia.org/wiki/LogSumExp
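The computation above can be sketched in plain Julia. The weights and nodes below are illustrative placeholders, not an actual Gauss-Laguerre quadrature rule:

```julia
logf(x) = -x^2                  # log of the integrand, here f(x) = exp(-x^2)
logw    = log.([0.7, 0.2, 0.1]) # log-weights ln(wᵢ)
xs      = [0.1, 1.0, 2.5]       # quadrature nodes xᵢ

y = logw .+ logf.(xs)           # yᵢ = ln(wᵢ) + ln f(xᵢ)

# Numerically stable log-sum-exp: subtract the maximum before exponentiating,
# so no term overflows and at least one term is exp(0) = 1.
ymax   = maximum(y)
result = ymax + log(sum(exp.(y .- ymax)))
# `result` equals ln(Σᵢ wᵢ f(xᵢ)) while avoiding overflow/underflow
```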
ReactiveMP.DeltaFnDefaultRuleLayout — Type
```julia
DeltaFnDefaultRuleLayout
```
Default rule layout for the Delta node:
Layout
In order to compute:
- `q_out`: mirrors the posterior marginal on the `out` edge
- `q_ins`: uses the inbound message on the `out` edge and all inbound messages on the `ins` edges
- `m_out`: uses all inbound messages on the `ins` edges
- `m_in_k`: uses the inbound message on the `in_k` edge and `q_ins`
ReactiveMP.DeltaFnDefaultKnownInverseRuleLayout — Type
```julia
DeltaFnDefaultKnownInverseRuleLayout
```
Default rule layout for the Delta node with a known inverse:
Layout
In order to compute:
- `q_out`: mirrors the posterior marginal on the `out` edge (same as the `DeltaFnDefaultRuleLayout`)
- `q_ins`: uses the inbound message on the `out` edge and all inbound messages on the `ins` edges (same as the `DeltaFnDefaultRuleLayout`)
- `m_out`: uses all inbound messages on the `ins` edges (same as the `DeltaFnDefaultRuleLayout`)
- `m_in_k`: uses the inbound message on the `out` edge and the inbound messages on the `ins` edges except `k`
ReactiveMP.SoftDot — Type
```julia
SoftDot
```
The `SoftDot` node can be used as a substitute for the dot product delta node (the outgoing variable is the dot product of two others). It softens the delta constraint by adding Gaussian noise as follows:

```julia
y ~ N(dot(θ, x), γ^(-1))
```
Interfaces:
- y - result of the "soft" dot product,
- θ - first variable to be multiplied,
- x - second variable to be multiplied,
- γ - precision of the Gaussian noise.
The advantage of using SoftDot is that it offers tractable and optimized closed-form variational messages for both Belief Propagation and Variational Message Passing.
See also: softdot
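The generative relation can be sketched in plain Julia; this is an illustration of the distribution the node encodes, not of its message-passing rules:

```julia
using LinearAlgebra, Random

Random.seed!(1)
θ = [1.0, -2.0]   # first factor
x = [0.5,  0.25]  # second factor
γ = 100.0         # precision of the Gaussian noise

# SoftDot's generative model: y ~ N(dot(θ, x), γ⁻¹)
y = dot(θ, x) + randn() / sqrt(γ)
# For large γ the noise is small, so y stays close to dot(θ, x)
```

As γ → ∞ the noise vanishes and the node recovers the hard delta constraint y = dot(θ, x).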
ReactiveMP.softdot — Type
Alias for the SoftDot node.