Inference Engine

Overview

The InferenceEngine is your gateway to probabilistic inference in Cortex.jl. It wraps a model engine and provides a unified interface for computing and updating the messages and marginals required for inference.

Under the hood, the engine uses a reactive signal system to track dependencies between computations and update only what is necessary when data changes. It identifies which parts of the model need updates and manages the computation order for efficiency.

The engine also provides built-in tracing for debugging and performance analysis: it records the timing of signal computations, tracks value changes during inference, and monitors the execution order of computations.
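
A typical session looks roughly like the sketch below. Here `graph` stands in for a model engine (e.g., a BipartiteFactorGraph) constructed separately, so this illustrates the call pattern rather than a complete runnable program:

```julia
using Cortex

# `graph` is assumed to be an already-constructed model engine,
# e.g. a BipartiteFactorGraph describing your model.
engine = Cortex.InferenceEngine(model_engine = graph, trace = true)

# Compute (or refresh) the marginal of variable 1.
Cortex.update_marginals!(engine, 1)

# Accessors mirror the underlying model engine.
v = Cortex.get_variable(engine, 1)
```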

API Reference

Engine Management

Cortex.InferenceEngineType
InferenceEngine{M}

Core structure for managing and executing probabilistic inference.

Fields

  • model_engine::M: The underlying model engine (e.g., a BipartiteFactorGraph).
  • dependency_resolver: Resolves dependencies between signals during inference.
  • inference_request_processor: Processes inference requests and manages computation order.
  • tracer: Optional tracer for monitoring inference execution.
  • warnings: Collection of InferenceEngineWarnings generated during inference.

Constructor

InferenceEngine(;
    model_engine::M,
    dependency_resolver = DefaultDependencyResolver(),
    inference_request_processor = InferenceRequestScanner(),
    prepare_signals_metadata::Bool = true,
    resolve_dependencies::Bool = true,
    trace::Bool = false
) where {M}

Arguments

  • model_engine: An instance of a supported model engine.
  • dependency_resolver: Custom dependency resolver (optional).
  • inference_request_processor: Custom request processor (optional).
  • prepare_signals_metadata: Whether to initialize signal variants.
  • resolve_dependencies: Whether to resolve signal dependencies on creation.
  • trace: Whether to enable inference execution tracing.

See also: get_model_engine, update_marginals!, request_inference_for

source
Cortex.get_model_engineFunction
get_model_engine(engine::InferenceEngine)

Retrieves the underlying model engine from the InferenceEngine.

Arguments

  • engine::InferenceEngine: The inference engine instance.

Returns

The model engine object stored within the engine.

source

Variable Operations

Cortex.get_variableMethod
get_variable(engine::InferenceEngine, variable_id::Int)

Alias for get_variable(get_model_engine(engine), variable_id).

source

Factor Operations

Cortex.get_factorMethod
get_factor(engine::InferenceEngine, factor_id::Int)

Alias for get_factor(get_model_engine(engine), factor_id).

source

Connection and Message Passing Interface

Cortex.get_connectionMethod
get_connection(engine::InferenceEngine, variable_id::Int, factor_id::Int)

Alias for get_connection(get_model_engine(engine), variable_id, factor_id).

source
Cortex.get_connection_message_to_variableMethod
get_connection_message_to_variable(engine::InferenceEngine, variable_id::Int, factor_id::Int)

Alias for get_connection_message_to_variable(get_connection(engine, variable_id, factor_id)::Connection)::InferenceSignal.

source
Cortex.get_connection_message_to_factorMethod
get_connection_message_to_factor(engine::InferenceEngine, variable_id::Int, factor_id::Int)

Alias for get_connection_message_to_factor(get_connection(engine, variable_id, factor_id)::Connection)::InferenceSignal.

source
Cortex.get_connected_factor_idsMethod
get_connected_factor_ids(engine::InferenceEngine, variable_id::Int)

Alias for get_connected_factor_ids(get_model_engine(engine), variable_id).

source
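
As a sketch of how these accessors compose (assuming `engine` is an existing InferenceEngine and 7 is a valid variable ID):

```julia
# Walk every factor attached to variable 7 and fetch the message
# signal flowing from that factor back to the variable.
for factor_id in Cortex.get_connected_factor_ids(engine, 7)
    signal = Cortex.get_connection_message_to_variable(engine, 7, factor_id)
    # `signal` is the InferenceSignal tied to this connection.
end
```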

Signal Variants

The engine uses different signal variants to manage various aspects of inference:

Cortex.InferenceSignalVariantsModule
InferenceSignalVariants

A module containing the different variants of inference signals used in the inference engine. These variants define the structure and behavior of messages passed between variables and factors.

source
Cortex.InferenceSignalVariants.ProductOfMessagesType
ProductOfMessages(variable_id::Int, range::UnitRange{Int}, factors_connected_to_variable::Vector{Int})

A signal variant representing the product of multiple messages for a specific variable.

Fields

  • variable_id::Int: The ID of the source variable
  • range::UnitRange{Int}: Range selecting which of the connected factors' messages to include
  • factors_connected_to_variable::Vector{Int}: Complete list of factor IDs connected to the variable

See also Cortex.compute_product_of_messages!.

source
Cortex.InferenceSignalVariants.JointMarginalType
JointMarginal(factor_id::Int, variable_ids::Vector{Int})

A signal variant representing the joint marginal distribution over multiple variables connected to a specific factor.

Fields

  • factor_id::Int: The ID of the factor around which the joint marginal is computed
  • variable_ids::Vector{Int}: The IDs of variables included in the joint marginal

See also Cortex.compute_joint_marginal!.

source
Cortex.InferenceSignalVariantType
InferenceSignalVariant

A Union type representing all possible variants of an inference signal.

This type alias encompasses all the concrete variant types defined in the InferenceSignalVariants module, providing type-safe signal classification for the inference engine.

source
Cortex.set_signals_variants!Function
set_signals_variants!(engine::InferenceEngine)

Initializes the variant field for relevant signals within the InferenceEngine.

This function iterates through the variables and factors in the model backend, assigning the appropriate variant (e.g., InferenceSignalVariants.MessageToVariable or InferenceSignalVariants.IndividualMarginal) to each signal.

This setup is typically done once upon engine creation and is crucial for dispatching the appropriate computation rules during inference.

Arguments

  • engine::InferenceEngine: The inference engine instance whose signals are to be prepared.

source

Running Inference

Cortex.request_inference_forFunction
request_inference_for(engine::InferenceEngine, variable_id_or_ids)

Creates an InferenceRequest to compute the marginals for the specified variable_id_or_ids.

This function prepares the necessary signals by marking their dependencies as potentially pending. It supports requesting inference for a single variable ID or a collection (Tuple or AbstractVector) of variable IDs.

Arguments

  • engine::InferenceEngine: The inference engine instance.
  • variable_id_or_ids: A single variable identifier or a collection of variable identifiers.

Returns

  • InferenceRequest: An internal structure representing the inference request.

source
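
Most users call update_marginals! directly, but a request can also be created explicitly, e.g. for inspection. A sketch, assuming `engine` is an existing InferenceEngine:

```julia
# Mark dependencies as potentially pending for variables 1 and 2.
request = Cortex.request_inference_for(engine, (1, 2))

# A single variable ID works as well.
request_single = Cortex.request_inference_for(engine, 1)
```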
Cortex.InferenceRequestType
InferenceRequest{E,V,M}

Internal structure representing a request to perform inference for a set of variables.

Fields

  • engine::E: The inference engine instance.
  • variable_ids::V: Collection of variable identifiers to compute marginals for.
  • marginals::M: Collection of marginal signals corresponding to the variables.
  • readines_status::BitVector: Tracks which variables have been processed.

See also: request_inference_for, update_marginals!

source
Cortex.AbstractInferenceRequestProcessorType
AbstractInferenceRequestProcessor

Abstract type for inference request processors that handle different types of inference signals. Subtypes must implement methods for processing various signal variants.

source
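
A custom processor is a subtype of AbstractInferenceRequestProcessor with methods for the signal variants it supports. The sketch below logs each factor-to-variable message computation; `my_message_rule` is a placeholder for your own computation rule, not a Cortex.jl function:

```julia
struct LoggingProcessor <: Cortex.AbstractInferenceRequestProcessor end

function Cortex.compute_message_to_variable!(
    processor::LoggingProcessor,
    engine::Cortex.InferenceEngine,
    variant::Cortex.InferenceSignalVariants.MessageToVariable,
    signal,
    dependencies
)
    @info "Computing message to variable" variant
    return my_message_rule(variant, dependencies)  # placeholder rule
end

# Plug it in at construction time:
# engine = Cortex.InferenceEngine(
#     model_engine = graph,
#     inference_request_processor = LoggingProcessor(),
# )
```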
Cortex.compute_message_to_variable!Function
compute_message_to_variable!(processor, engine, variant, signal, dependencies)

Compute a message from a factor to a variable.

Arguments

  • processor::AbstractInferenceRequestProcessor: The processor instance
  • engine::InferenceEngine: The inference engine
  • variant::InferenceSignalVariants.MessageToVariable: The message variant
  • signal::InferenceSignal: The signal to compute
  • dependencies: The dependencies of the signal

Returns

The computed message value.

Throws

Error if not implemented by the processor.

source
Cortex.compute_message_to_factor!Function
compute_message_to_factor!(processor, engine, variant, signal, dependencies)

Compute a message from a variable to a factor.

Arguments

  • processor::AbstractInferenceRequestProcessor: The processor instance
  • engine::InferenceEngine: The inference engine
  • variant::InferenceSignalVariants.MessageToFactor: The message variant
  • signal::InferenceSignal: The signal to compute
  • dependencies: The dependencies of the signal

Returns

The computed message value.

Throws

Error if not implemented by the processor.

source
Cortex.compute_individual_marginal!Function
compute_individual_marginal!(processor, engine, variant, signal, dependencies)

Compute an individual marginal for a variable.

Arguments

  • processor::AbstractInferenceRequestProcessor: The processor instance
  • engine::InferenceEngine: The inference engine
  • variant::InferenceSignalVariants.IndividualMarginal: The marginal variant
  • signal::InferenceSignal: The signal to compute
  • dependencies: The dependencies of the signal

Returns

The computed marginal value.

Throws

Error if not implemented by the processor.

source
Cortex.compute_product_of_messages!Function
compute_product_of_messages!(processor, engine, variant, signal, dependencies)

Compute the product of multiple messages.

Arguments

  • processor::AbstractInferenceRequestProcessor: The processor instance
  • engine::InferenceEngine: The inference engine
  • variant::InferenceSignalVariants.ProductOfMessages: The product variant
  • signal::InferenceSignal: The signal to compute
  • dependencies: The dependencies of the signal

Returns

The computed product value.

Throws

Error if not implemented by the processor.

source
Cortex.compute_joint_marginal!Function
compute_joint_marginal!(processor, engine, variant, signal, dependencies)

Compute a joint marginal for multiple variables.

Arguments

  • processor::AbstractInferenceRequestProcessor: The processor instance
  • engine::InferenceEngine: The inference engine
  • variant::InferenceSignalVariants.JointMarginal: The joint marginal variant
  • signal::InferenceSignal: The signal to compute
  • dependencies: The dependencies of the signal

Returns

The computed joint marginal value.

Throws

Error if not implemented by the processor.

source
Cortex.update_marginals!Function
update_marginals!(engine::InferenceEngine, variable_id_or_ids)

Updates the marginals for the specified variable_id_or_ids.

source
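
As with request_inference_for, update_marginals! accepts a single variable ID or a collection. A sketch, assuming `engine` is an existing InferenceEngine:

```julia
# One variable.
Cortex.update_marginals!(engine, 4)

# Several at once, as a tuple or vector.
Cortex.update_marginals!(engine, [4, 5, 6])
```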

Tracing and Debugging

Cortex.InferenceEngineWarningType
InferenceEngineWarning

A warning message generated during inference execution.

Fields

  • description::String: A human-readable description of the warning.
  • context::Any: Additional context or data related to the warning.
source
Cortex.TracedInferenceExecutionType
TracedInferenceExecution

A record of a single signal computation during inference.

Fields

  • engine::InferenceEngine: The inference engine instance.
  • variable_id: The identifier of the variable being processed.
  • signal::InferenceSignal: The signal that was computed.
  • total_time_in_ns::UInt64: Total computation time in nanoseconds.
  • value_before_execution: Signal value before computation.
  • value_after_execution: Signal value after computation.
source
Cortex.TracedInferenceRoundType
TracedInferenceRound

A record of a single round of inference computations.

Fields

  • engine::InferenceEngine: The inference engine instance.
  • total_time_in_ns::UInt64: Total round time in nanoseconds.
  • executions::Vector{TracedInferenceExecution}: List of signal computations performed.
source
Cortex.TracedInferenceRequestType
TracedInferenceRequest

A complete record of an inference request execution.

Fields

  • engine::InferenceEngine: The inference engine instance.
  • total_time_in_ns::UInt64: Total request processing time in nanoseconds.
  • request::InferenceRequest: The original inference request.
  • rounds::Vector{TracedInferenceRound}: List of inference rounds performed.
source
Cortex.InferenceEngineTracerType
InferenceEngineTracer

Tracer for monitoring and debugging inference execution.

Fields

  • inference_requests::Vector{TracedInferenceRequest}: History of traced inference requests.

The tracer records:

  • Signal computations and their timing
  • Value changes during inference
  • Execution order of computations
source
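
With trace = true at construction, the tracer accumulates one TracedInferenceRequest per executed request. The sketch below walks the recorded hierarchy using the documented fields; reading the tracer via direct field access (engine.tracer) is an assumption, since no accessor function is documented here:

```julia
engine = Cortex.InferenceEngine(model_engine = graph, trace = true)
Cortex.update_marginals!(engine, 1)

for traced_request in engine.tracer.inference_requests
    println("request: ", traced_request.total_time_in_ns, " ns, ",
            length(traced_request.rounds), " round(s)")
    for round in traced_request.rounds
        for execution in round.executions
            println("  variable ", execution.variable_id, ": ",
                    execution.total_time_in_ns, " ns")
        end
    end
end
```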