ExponentialFamilyProjection
The ExponentialFamilyProjection.jl package offers a suite of functions for projecting an arbitrary (un-normalized) log probability density function onto a specified member of the exponential family (e.g., Gaussian, Beta, Bernoulli). This is achieved by optimizing the natural parameters of the exponential family member within a defined manifold. The library leverages Manopt.jl for optimization and utilizes ExponentialFamilyManifolds.jl to define the manifolds corresponding to the members of the exponential family.
Projection parameters
In order to project a log probability density function onto a member of the exponential family, the user first needs to specify projection parameters:
ExponentialFamilyProjection.ProjectionParameters — Type

ProjectionParameters(; kwargs...)

A type to hold the parameters for the projection procedure. The following parameters are available:
- strategy = ExponentialFamilyProjection.DefaultStrategy(): The strategy to use to compute the gradients.
- niterations = 100: The number of iterations for the optimization procedure.
- tolerance = 1e-6: The tolerance for the norm of the gradient.
- stepsize = ConstantLength(0.1): The stepsize for the optimization procedure. Accepts stepsizes from Manopt.jl.
- seed: Optional; seed for the rng.
- rng: Optional; random number generator.
- direction = BoundedNormUpdateRule(static(1.0)): Direction update rule. Accepts a Manopt.DirectionUpdateRule from Manopt.jl.
ExponentialFamilyProjection.DefaultProjectionParameters — Function

DefaultProjectionParameters()

Return the default parameters for the projection procedure.
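For example, a customized set of parameters can be constructed as follows (a minimal sketch; the keyword values are illustrative):

using ExponentialFamilyProjection

# Run the optimization longer and with a tighter gradient tolerance
parameters = ProjectionParameters(niterations = 500, tolerance = 1e-8)

# Or simply use the defaults
parameters = DefaultProjectionParameters()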
ExponentialFamilyProjection.getinitialpoint — Function

getinitialpoint(strategy, M::AbstractManifold, parameters::ProjectionParameters)

Returns an initial point to start optimization from. By default returns a random point from M, but different strategies may implement their own methods.
Read more about the different optimization strategies in the Optimization strategies section below.
Projection family
After the parameters have been specified, the user can proceed to specify the projection type (the exponential family member), its dimensionality, and (optionally) the conditioner.
ExponentialFamilyProjection.ProjectedTo — Type

ProjectedTo(::Type{T}, dims...; conditioner = nothing, parameters = DefaultProjectionParameters)

A specification of a projection to an exponential family distribution.
The following arguments are required:
- Type{T}: the type of the exponential family member to project to, e.g. Beta
- dims...: the dimensions of the distribution, e.g. 2 for MvNormal
The following arguments are optional:
- conditioner = nothing: a conditioner to use for the projection; not all exponential family members require a conditioner, but some do, e.g. Laplace
- parameters = DefaultProjectionParameters: parameters for the projection procedure
- kwargs = nothing: additional arguments passed to Manopt.gradient_descent! (optional). For details on gradient_descent! parameters, see the Manopt.jl documentation. Note that kwargs passed to project_to take precedence over kwargs specified in the parameters.
julia> using ExponentialFamily
julia> projected_to = ProjectedTo(Beta)
ProjectedTo(Beta)
julia> projected_to = ProjectedTo(Beta, parameters = ProjectionParameters(niterations = 10))
ProjectedTo(Beta)
julia> projected_to = ProjectedTo(MvNormalMeanCovariance, 2)
ProjectedTo(MvNormalMeanCovariance, dims = 2)
julia> projected_to = ProjectedTo(Laplace, conditioner = 2.0)
ProjectedTo(Laplace, conditioner = 2.0)

Projection
The projection is performed by calling the project_to function with the specified ExponentialFamilyProjection.ProjectedTo as the first argument and a log probability density function or a set of data points as the second argument.
ExponentialFamilyProjection.project_to — Function

project_to(to::ProjectedTo, argument::F, supplementary..., initialpoint, kwargs...)

Finds the closest projection of argument onto the exponential family distribution specified by to.
Arguments
- to::ProjectedTo: Configuration for the projection. Refer to ProjectedTo for detailed information.
- argument::F: An (un-normalized) function representing the log-PDF of an arbitrary distribution, or a list of samples.
- supplementary...: Additional distributions to project the product of argument and these distributions (optional).
- initialpoint: Starting point for the optimization process (optional).
- kwargs...: Additional arguments passed to Manopt.gradient_descent! (optional; see the sketch below). For details on gradient_descent! parameters, see the Manopt.jl documentation.
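As a minimal sketch of the keyword forwarding (assuming prj and f are defined as in the examples below, and using Manopt's standard debug option):

result = project_to(prj, f; debug = [:Iteration, :Cost, "\n"])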
Supplementary
The supplementary distributions must match the type and conditioner of the target distribution specified in to. Including supplementary distributions is equivalent to modifying the argument function as follows:
f_modified = (x) -> argument(x) + logpdf(supplementary[1], x) + logpdf(supplementary[2], x) + ...

julia> using ExponentialFamily, BayesBase
julia> f = (x) -> logpdf(Beta(30.14, 2.71), x);
julia> prj = ProjectedTo(Beta; parameters = ProjectionParameters(niterations = 500))
ProjectedTo(Beta)
julia> project_to(prj, f) isa ExponentialFamily.Beta
true

julia> using ExponentialFamily, BayesBase, StableRNGs
julia> samples = rand(StableRNG(42), Beta(30.14, 2.71), 1_000);
julia> prj = ProjectedTo(Beta; parameters = ProjectionParameters(tolerance = 1e-2))
ProjectedTo(Beta)
julia> project_to(prj, samples) isa ExponentialFamily.Beta
true

Optimization strategies
The optimization procedure requires computing the expectation of the gradient to perform gradient descent in the natural parameters space. Currently, the library provides the following strategies for computing these expectations:
ExponentialFamilyProjection.DefaultStrategy — Type

DefaultStrategy

The DefaultStrategy selects the optimal projection strategy based on the type of the second argument provided to the project_to function.
Rules:
- If the second argument is an AbstractArray, use MLEStrategy.
- For all other types, use ControlVariateStrategy.
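For example, the same projection configuration dispatches to either strategy depending on the argument type (a sketch):

using ExponentialFamily, ExponentialFamilyProjection, BayesBase

prj = ProjectedTo(Beta)
project_to(prj, (x) -> logpdf(Beta(30.14, 2.71), x)) # a function → ControlVariateStrategy
project_to(prj, rand(Beta(30.14, 2.71), 1_000))      # an array of samples → MLEStrategy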
ExponentialFamilyProjection.ControlVariateStrategy — Type

ControlVariateStrategy(; kwargs...)

A strategy for gradient descent optimization and gradient computations that resembles the REINFORCE gradient estimator.
The following parameters are available:
- nsamples = 2000: The number of samples to use for estimates
- buffer = Bumper.SlabBuffer(): Advanced option; a buffer for temporary computations
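The sample budget can be adjusted through ProjectionParameters (a sketch; the nsamples value is illustrative):

using ExponentialFamilyProjection

parameters = ProjectionParameters(
    strategy = ExponentialFamilyProjection.ControlVariateStrategy(nsamples = 5000),
)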
ExponentialFamilyProjection.MLEStrategy — Type

MLEStrategy()

A strategy for gradient descent optimization and gradient computations that resembles MLE estimation.
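MLEStrategy can also be selected explicitly instead of relying on DefaultStrategy dispatch (a sketch; samples is assumed to be an array of data points):

using ExponentialFamily, ExponentialFamilyProjection

parameters = ProjectionParameters(strategy = ExponentialFamilyProjection.MLEStrategy())
prj = ProjectedTo(Beta; parameters = parameters)
# result = project_to(prj, samples)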
ExponentialFamilyProjection.BonnetStrategy — Type

BonnetStrategy{S, TL}

A strategy for gradient descent optimization and gradient computations that resembles the Bonnet gradient estimator, which works for normal distributions. It is based on equations (10) and (11) in Khan, 2024.
The following parameters are available:
- nsamples = 2000: The number of samples to use for estimates
This strategy requires a function as the argument for project_to and cannot project a collection of samples (use MLEStrategy for that). The supplied logpdf function must be convertible to an InplaceLogpdfGradHess object, and the strategy works only on the normal manifold.
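A sketch of selecting this strategy, assuming it accepts an nsamples keyword like ControlVariateStrategy; it must target a normal manifold:

using ExponentialFamily, ExponentialFamilyProjection

parameters = ProjectionParameters(
    strategy = ExponentialFamilyProjection.BonnetStrategy(nsamples = 1000),
)
prj = ProjectedTo(NormalMeanVariance; parameters = parameters)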
ExponentialFamilyProjection.GaussNewton — Type

GaussNewton{S,TL}

A deterministic strategy that resembles the Bonnet gradient with a point-mass approximation (no sampling). For normal distributions, it evaluates the logpdf, gradient, and Hessian once at the current mean and takes a step. It implements an update akin to Eq. (13) in Khan, 2024.
Like BonnetStrategy, this strategy requires a function as the argument for project_to and cannot project a collection of samples (use MLEStrategy for that). The supplied logpdf function must be convertible to an InplaceLogpdfGradHess object, and the strategy works only on the normal manifold.
ExponentialFamilyProjection.preprocess_strategy_argument — Function

preprocess_strategy_argument(strategy, argument)

Checks the compatibility of strategy with argument and returns a modified strategy and argument if needed.
ExponentialFamilyProjection.create_state! — Function

create_state!(
strategy,
M::AbstractManifold,
parameters::ProjectionParameters,
projection_argument,
initial_ef,
supplementary_η,
)

Creates, initializes, and returns a state for the strategy with the given parameters.
ExponentialFamilyProjection.prepare_state! — Function

prepare_state!(
strategy,
state,
M::AbstractManifold,
parameters::ProjectionParameters,
projection_argument,
distribution,
supplementary_η,
)

Prepares an existing state of the strategy for a new optimization iteration by setting or updating its internal parameters.
ExponentialFamilyProjection.compute_cost — Function

compute_cost(
M::AbstractManifold,
strategy,
state,
η,
logpartition,
gradlogpartition,
inv_fisher,
)

Compute the cost using the provided strategy.
Arguments
- M::AbstractManifold: The manifold on which the computations are performed.
- strategy: The strategy used for computation of the cost value.
- state: The current state for the strategy.
- η: Parameter vector.
- logpartition: The log partition of the current point (η).
- gradlogpartition: The gradient of the log partition of the current point (η).
- inv_fisher: The inverse Fisher information matrix of the current point (η).
Returns
cost: The computed cost value.
ExponentialFamilyProjection.compute_gradient! — Function

compute_gradient!(
M::AbstractManifold,
strategy,
state,
X,
η,
logpartition,
gradlogpartition,
inv_fisher,
)

Updates the gradient X in-place using the provided strategy.
Arguments
- M::AbstractManifold: The manifold on which the computations are performed.
- strategy: The strategy used for computation of the gradient value.
- state: The current state for the strategy.
- X: The storage for the gradient.
- η: Parameter vector.
- logpartition: The log partition of the current point (η).
- gradlogpartition: The gradient of the log partition of the current point (η).
- inv_fisher: The inverse Fisher information matrix of the current point (η).
Returns
X: The computed gradient (updated in-place)
In-place logpdf/grad/Hessian adapters
The library provides convenient wrappers to evaluate log-density, gradient, and Hessian in-place, and an adapter to combine separate grad!/hess! into a single grad_hess!.
ExponentialFamilyProjection.InplaceLogpdfGradHess — Type

InplaceLogpdfGradHess(logpdf!, grad_hess!)

Wraps logpdf! and the unified grad_hess! function in a type used for dispatch. The unified interface evaluates the gradient and Hessian together for efficiency.
Arguments
- logpdf!: Function that takes (out_logpdf, x) and writes the logpdf to out_logpdf
- grad_hess!: Function that takes (out_grad, out_hess, x) and writes the gradient and Hessian
Methods
- logpdf!(structure, out, x)
- grad_hess!(structure, out_grad, out_hess, x)
All methods expect pre-allocated containers of appropriate dimensions.
ExponentialFamilyProjection.InplaceLogpdfGradHess — Method

InplaceLogpdfGradHess(logpdf!, grad!, hess!)

Outer convenience constructor that accepts separate grad! and hess! functions. Internally it wraps them with NaiveGradHess to provide a unified grad_hess! implementation and returns an InplaceLogpdfGradHess instance.
Arguments
- logpdf!: Function (out_logpdf, x) -> writes the log-density into out_logpdf
- grad!: Function (out_grad, x) -> writes the gradient into out_grad
- hess!: Function (out_hess, x) -> writes the Hessian into out_hess
See also
- NaiveGradHess: adapter that combines separate grad!/hess! into grad_hess!.
ExponentialFamilyProjection.NaiveGradHess — Type

NaiveGradHess{G, H}

Adapter that exposes only grad_hess! by calling the provided grad! and hess! sequentially. Useful as a fallback when a combined implementation is not available.
ExponentialFamilyProjection.logpdf! — Method

logpdf!(inplace::InplaceLogpdfGradHess, out, x)

Evaluate the log probability density function at point x, writing the result to the pre-allocated container out.
ExponentialFamilyProjection.grad_hess! — Method

grad_hess!(inplace::InplaceLogpdfGradHess, out_grad, out_hess, x)

Evaluate the gradient and the Hessian at point x, writing the results to the pre-allocated containers out_grad and out_hess.
ExponentialFamilyProjection.grad_hess! — Method

grad_hess!(inplace::NaiveGradHess, out_grad, out_hess, x)

Evaluate the gradient and the Hessian at point x using the provided separate implementations.
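As a small sketch of the adapter API (hypothetical helper names; the conventions mirror the logistic-regression example below, where x and the output containers are pre-allocated arrays):

using ExponentialFamilyProjection

# Unnormalized standard-normal log-density in one dimension
mylogpdf!(out, x) = (out[1] = -0.5 * x[1]^2; out)
mygrad!(out, x)   = (out[1] = -x[1]; out)
myhess!(out, x)   = (out[1, 1] = -1.0; out)

# Separate grad!/hess! are combined internally via NaiveGradHess
inplace = ExponentialFamilyProjection.InplaceLogpdfGradHess(mylogpdf!, mygrad!, myhess!)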
For high-dimensional distributions, adjusting the default number of samples might be necessary to achieve better performance.
Examples
Gaussian projection
In this example we project an arbitrary log probability density function onto a Gaussian distribution. The log probability density function is defined using another Gaussian, but it can be any function:
using ExponentialFamilyProjection, ExponentialFamily, BayesBase
hiddengaussian = NormalMeanVariance(3.14, 2.71)
targetf = (x) -> logpdf(hiddengaussian, x)
prj = ProjectedTo(NormalMeanVariance)
result = project_to(prj, targetf)

ExponentialFamily.NormalMeanVariance{Float64}(μ=3.147287071387545, v=2.6741746757950247)

We can see that the estimated result is pretty close to the actual hiddengaussian used to define the targetf. We can also visualise the results using the Plots.jl package.
using Plots
plot(-6.0:0.1:12.0, x -> pdf(hiddengaussian, x), label="real distribution", fill = 0, fillalpha = 0.2)
plot!(-6.0:0.1:12.0, x -> pdf(result, x), label="estimated projection", fill = 0, fillalpha = 0.2)

Let's also try to project an arbitrary unnormalized log probability density function onto a Gaussian distribution:
# `+ 100` to ensure that the function is unnormalized
targetf = (x) -> -0.5 * (x - 3.14)^2 + 100
result = project_to(prj, targetf)

ExponentialFamily.NormalMeanVariance{Float64}(μ=3.022854932263283, v=0.8218806444470579)

In this case, targetf does not define a valid probability distribution since it is unnormalized, but the project_to function is still able to project it onto the closest possible Gaussian distribution. We can again visualize the results using the Plots.jl package:
plot(-40.0:0.1:40.0, targetf, label="unnormalized logpdf", fill = 0, fillalpha = 0.2)
plot!(-40.0:0.1:40.0, (x) -> logpdf(result, x), label="estimated logpdf of a Gaussian", fill = 0, fillalpha = 0.2)

Beta projection
The experiment can be performed for other members of the exponential family as well. For example, let's project an arbitrary log probability density function onto a Beta distribution:
hiddenbeta = Beta(10, 3)
targetf = (x) -> logpdf(hiddenbeta, x)
prj = ProjectedTo(Beta)
result = project_to(prj, targetf)

Distributions.Beta{Float64}(α=8.97671074447669, β=2.7316968890241995)

And let's visualize the result using the Plots.jl package:
plot(0.0:0.01:1.0, x -> pdf(hiddenbeta, x), label="real distribution", fill = 0, fillalpha = 0.2)
plot!(0.0:0.01:1.0, x -> pdf(result, x), label="estimated projection", fill = 0, fillalpha = 0.2)

Multivariate Gaussian projection
The library also supports multivariate distributions. Let's project an arbitrary log probability density function onto a multivariate Gaussian distribution.
hiddengaussian = MvNormalMeanCovariance(
[ 3.14, 2.17 ],
[ 2.0 -0.1; -0.1 3.0 ]
)
targetf = (x) -> logpdf(hiddengaussian, x)
prj = ProjectedTo(MvNormalMeanCovariance, 2)
result = project_to(prj, targetf)

MvNormalMeanCovariance(
μ: [3.046220659005711, 2.215291316607296]
Σ: [1.8986008425872856 0.002457042029684182; 0.002457042029684182 2.9753353173865795]
)
As in the previous examples, the result is pretty close to the actual hiddengaussian used to define the targetf.
Gauss–Newton strategy (logistic regression)
The Gauss–Newton strategy uses first and second derivatives of the target log-density to form a deterministic update, avoiding Monte Carlo sampling. This is useful when you can provide in-place logpdf!, grad!, and hess! for your target. Below we demonstrate projecting a Bayesian logistic regression model (which is not a normalized distribution) onto a multivariate Gaussian using the Gauss–Newton strategy (GaussNewton).
We split this example into small steps and use a shared example environment so that variables (including a stable RNG) persist between blocks.
In the following block we sample X (our features) and y (binary outputs).
using LinearAlgebra
using StableRNGs
using Distributions
using ExponentialFamily
using ExponentialFamilyProjection
using Plots
# 1) Generate a reproducible dataset (shared RNG)
rng = StableRNG(42)
n = 600
input_dim = 2
d = input_dim + 1
X_feat = randn(rng, n, input_dim)
X = hcat(ones(n), X_feat)
β_true = [0.5, 2.0, -1.5]
σ(z) = 1 / (1 + exp(-z))
p = map(σ, X * β_true)
y = rand.(Ref(rng), Bernoulli.(p));

We created a binary logistic regression dataset with an intercept and a fixed rng for reproducibility.
# 2) Define in-place log-posterior, gradient, and Hessian
function logpost!(out::AbstractVector{T}, β::AbstractVector{T}) where {T<:Real}
Xβ = X * β
@inline function log1pexp(z)
z > 0 ? z + log1p(exp(-z)) : log1p(exp(z))
end
s = zero(T)
@inbounds for i in 1:n
s += y[i] * Xβ[i] - log1pexp(Xβ[i])
end
# standard normal prior on β
s += -0.5 * dot(β, β)
out[1] = s
return out
end
function grad!(out::AbstractVector{T}, β::AbstractVector{T}) where {T<:Real}
fill!(out, 0)
Xβ = X * β
@inbounds for i in 1:n
pi = 1 / (1 + exp(-Xβ[i]))
@views out[:] .+= (y[i] - pi) .* X[i, :]
end
return out
end
function hess!(out::AbstractMatrix{T}, β::AbstractVector{T}) where {T<:Real}
Xβ = X * β
fill!(out, 0)
@inbounds for i in 1:n
pi = 1 / (1 + exp(-Xβ[i]))
wi = pi * (1 - pi)
@views out .-= wi .* (X[i, :] * transpose(X[i, :]))
end
return out
end

hess! (generic function with 1 method)

These in-place routines allow Gauss–Newton to form deterministic updates without Monte Carlo sampling.
# 3) Wrap and run Gauss–Newton projection
inplace = ExponentialFamilyProjection.InplaceLogpdfGradHess(logpost!, grad!, hess!)
params = ProjectionParameters(
tolerance = 1e-8,
strategy = ExponentialFamilyProjection.GaussNewton(nsamples = 1), # deterministic
)
prj = ProjectedTo(MvNormalMeanCovariance, d; parameters = params)
result = project_to(prj, inplace)

MvNormalMeanCovariance(
μ: [0.1899735085900649, 0.8944230856095603, -0.48689068986755923]
Σ: [0.09654291677579087 0.00036433306504875685 0.0027047450407188234; 0.00036433306504875685 0.10658332461101037 -0.01129930001921165; 0.0027047450407188234 -0.01129930001921165 0.09522152710566681]
)
This projects the posterior to an MvNormalMeanCovariance parameterization using Gauss–Newton updates.
# 4) Inspect the projection result
μ = mean(result)
Σ = cov(result)

([0.1899735085900649, 0.8944230856095603, -0.48689068986755923], (3, 3))

Now we visualize the posterior-mean decision boundary and probability map. We compute a grid over feature space and evaluate the mean prediction σ(μ₀ + μ₁x₁ + μ₂x₂).
# 5) Build grid and compute posterior-mean probabilities
x1_min = minimum(X[:, 2]) - 3.0
x1_max = maximum(X[:, 2]) + 3.0
x2_min = minimum(X[:, 3]) - 3.0
x2_max = maximum(X[:, 3]) + 3.0
xs = range(x1_min, x1_max; length = 200)
ys = range(x2_min, x2_max; length = 200)
Z = Array{Float64}(undef, length(xs), length(ys))
for (i, x1) in enumerate(xs)
for (j, x2) in enumerate(ys)
z = μ[1] + μ[2] * x1 + μ[3] * x2
Z[i, j] = 1.0 / (1.0 + exp(-z))
end
end

# 6) Render probability heatmap and 0.5 decision contour with data overlay
plt_mean = contourf(
xs, ys, Z';
levels = 0:0.05:1,
c = cgrad([:red, :green]),
alpha = 0.65,
colorbar_title = "P(y=1)",
contour_lines = false,
linecolor = :transparent,
linewidth = 0,
size = (650, 500),
)
contour!(xs, ys, Z'; levels = [0.5], linecolor = :black, linewidth = 3, label = nothing)
scatter!(
X[y .== 0, 2], X[y .== 0, 3];
markersize = 6,
markerstrokecolor = :white,
markerstrokewidth = 0.8,
label = "y = 0",
color = :red4,
)
scatter!(
X[y .== 1, 2], X[y .== 1, 3];
markersize = 6,
markerstrokecolor = :white,
markerstrokewidth = 0.8,
label = "y = 1",
color = :green4,
)
xlabel!("x₁")
ylabel!("x₂")
title!("mean boundary")To account for parameter uncertainty, we can estimate the predictive probability by Monte Carlo: sample coefficients β from the Gaussian posterior result ~ N(μ, Σ) and average σ(β₀ + β₁ x₁ + β₂ x₂) over samples. This yields a boundary reflecting posterior spread.
# 7) Monte Carlo-averaged predictive map from posterior β ~ N(μ, Σ)
nsamples_pred = 200
Zmc = zeros(length(xs), length(ys))
mvn_post = MvNormal(μ, Symmetric(Σ))
for s in 1:nsamples_pred
βs = rand(rng, mvn_post)
for (i, x1) in enumerate(xs)
for (j, x2) in enumerate(ys)
z = βs[1] + βs[2] * x1 + βs[3] * x2
Zmc[i, j] += 1.0 / (1.0 + exp(-z))
end
end
end
Zmc ./= nsamples_pred

# 8) Render MC-averaged probability heatmap and decision contour
plt_mc = contourf(
xs, ys, Zmc';
levels = 0:0.05:1,
c = cgrad([:red, :green]),
alpha = 0.65,
colorbar_title = "E[P(y=1)]",
contour_lines = false,
linecolor = :transparent,
linewidth = 0,
size = (650, 500),
)
contour!(xs, ys, Zmc'; levels = [0.5], linecolor = :black, linewidth = 3, label = nothing)
scatter!(
X[y .== 0, 2], X[y .== 0, 3];
markersize = 6,
markerstrokecolor = :white,
markerstrokewidth = 0.8,
label = "y = 0",
color = :red4,
)
scatter!(
X[y .== 1, 2], X[y .== 1, 3];
markersize = 6,
markerstrokecolor = :white,
markerstrokewidth = 0.8,
label = "y = 1",
color = :green4,
)
xlabel!("x₁")
ylabel!("x₂")
title!("full posterior boundary")
plt_mc

# 9) Optional: side-by-side comparison
plot(plt_mean, plt_mc; layout = (1, 2), size = (1100, 450))

Projection with samples
The projection can also be done given a set of samples instead of a function. For example, let's project a set of samples onto a Beta distribution:
using StableRNGs
hiddenbeta = Beta(10, 3)
samples = rand(StableRNG(42), hiddenbeta, 1_000)
prj = ProjectedTo(Beta)
result = project_to(prj, samples)

Distributions.Beta{Float64}(α=9.934683749459355, β=2.844620239774742)

plot(0.0:0.01:1.0, x -> pdf(hiddenbeta, x), label="real distribution", fill = 0, fillalpha = 0.2)
histogram!(samples, label = "samples", normalize = :pdf, fillalpha = 0.2)
plot!(0.0:0.01:1.0, x -> pdf(result, x), label="estimated projection", fill = 0, fillalpha = 0.2)

Other
Manopt extensions
ExponentialFamilyProjection.ProjectionCostGradientObjective — Type

ProjectionCostGradientObjective

This structure provides an interface for Manopt to compute the cost and gradients required for the optimization procedure based on manifold projection. The actual computation of costs and gradients is defined by the strategy argument.
Arguments
- projection_parameters: The parameters for projection; must be of type ProjectionParameters
- projection_argument: The second argument of the project_to function.
- current_η: Current optimization point.
- supplementary_η: A tuple of additional natural parameters subtracted from the current point in each optimization iteration.
- strategy: Specifies the method for computing costs and gradients, which may support different projection_argument values.
- strategy_state: The state for the strategy, usually created with create_state!
Bounded direction update rule
The ExponentialFamilyProjection.jl package implements a specialized gradient direction rule that limits the norm (manifold-specific) of the gradient to a pre-specified value.
ExponentialFamilyProjection.BoundedNormUpdateRule — Type

BoundedNormUpdateRule(limit; direction = Manopt.IdentityUpdateRule())

A Manopt.DirectionUpdateRule that constrains the norm of the direction to a specified limit.
This rule operates in two steps:

- Initial direction computation: it first applies the specified direction update rule to compute an initial direction.
- Norm check and scaling: the norm of the resulting direction vector is checked using Manopt.norm(M, p, d), where M is the manifold on which the optimization is running, p is the point at which the direction was computed, and d is the computed direction. If this norm exceeds the specified limit, the direction vector is scaled down so that its new norm exactly equals the limit. This scaling preserves the direction of the gradient while controlling its magnitude.
Read more about Manopt.DirectionUpdateRule in the Manopt.jl documentation.
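A sketch of plugging the rule into the projection parameters (the limit value 0.5 is illustrative):

using ExponentialFamilyProjection

parameters = ProjectionParameters(
    direction = ExponentialFamilyProjection.BoundedNormUpdateRule(0.5),
)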
Index
- ExponentialFamilyProjection.BonnetStrategy
- ExponentialFamilyProjection.BoundedNormUpdateRule
- ExponentialFamilyProjection.ControlVariateStrategy
- ExponentialFamilyProjection.DefaultStrategy
- ExponentialFamilyProjection.GaussNewton
- ExponentialFamilyProjection.InplaceLogpdfGradHess (Type)
- ExponentialFamilyProjection.InplaceLogpdfGradHess (Method)
- ExponentialFamilyProjection.MLEStrategy
- ExponentialFamilyProjection.NaiveGradHess
- ExponentialFamilyProjection.ProjectedTo
- ExponentialFamilyProjection.ProjectionCostGradientObjective
- ExponentialFamilyProjection.ProjectionParameters
- ExponentialFamilyProjection.DefaultProjectionParameters
- ExponentialFamilyProjection.compute_cost
- ExponentialFamilyProjection.compute_gradient!
- ExponentialFamilyProjection.create_state!
- ExponentialFamilyProjection.getinitialpoint
- ExponentialFamilyProjection.grad_hess! (InplaceLogpdfGradHess)
- ExponentialFamilyProjection.grad_hess! (NaiveGradHess)
- ExponentialFamilyProjection.logpdf!
- ExponentialFamilyProjection.prepare_state!
- ExponentialFamilyProjection.preprocess_strategy_argument
- ExponentialFamilyProjection.project_to