
Using optimizers directly

A PEtabODEProblem contains all the information needed to use an optimizer directly for parameter estimation. While PEtab.jl provides high-level functions for single-start estimation (calibrate) and multi-start estimation (calibrate_multistart), using an optimizer directly can be useful for optimization packages that are not supported out of the box, or for custom estimation workflows.

This tutorial demonstrates how to use the IPNewton method from Optim.jl directly via a PEtabODEProblem. As a running example, the Michaelis–Menten model from the starting tutorial is used.

julia
using Catalyst, PEtab
rn = @reaction_network begin
    @parameters S0 c3=3.0
    @species begin
        S(t) = S0
        E(t) = 50.0
        SE(t) = 0.0
        P(t) = 0.0
    end
    @observables begin
        obs1 ~ S + E
        obs2 ~ P
    end
    c1, S + E --> SE
    c2, SE --> S + E
    c3, SE --> P + E
end

# Observables
@parameters sigma
petab_obs1 = PEtabObservable(:petab_obs1, :obs1, 3.0)
petab_obs2 = PEtabObservable(:petab_obs2, :obs2, sigma)
observables = [petab_obs1, petab_obs2]

# Parameters to estimate
p_c1 = PEtabParameter(:c1)
p_c2 = PEtabParameter(:c2)
p_S0 = PEtabParameter(:S0)
p_sigma = PEtabParameter(:sigma)
pest = [p_c1, p_c2, p_S0, p_sigma]

# Measurements; simulate with 'true' parameters
using DataFrames, OrdinaryDiffEqRosenbrock
ps = [:c1 => 1.0, :c2 => 10.0, :c3 => 1.0, :S0 => 100.0]
u0 = [:S => 100.0, :E => 50.0, :SE => 0.0, :P => 0.0]
tspan = (0.0, 10.0)
oprob = ODEProblem(rn, u0, tspan, ps)
sol = solve(oprob, Rodas5P(); saveat = 0:0.5:10.0)
obs1 = (sol[:S] + sol[:E]) .+ randn(length(sol[:E]))
obs2 = sol[:P] .+ randn(length(sol[:P]))
df1 = DataFrame(obs_id = "petab_obs1", time = sol.t, measurement = obs1)
df2 = DataFrame(obs_id = "petab_obs2", time = sol.t, measurement = obs2)
measurements = vcat(df1, df2)

# Create the PEtabODEProblem
petab_model = PEtabModel(rn, observables, measurements, pest)
petab_prob = PEtabODEProblem(petab_model)
PEtabODEProblem ReactionSystemModel: 4 parameters to estimate
(for more statistics, call `describe(petab_prob)`)
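
As the printout notes, further statistics about the constructed problem can be printed with describe:

julia
describe(petab_prob)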

Extracting relevant functions from a PEtabODEProblem

A numerical optimizer requires an objective function, often a gradient function, and sometimes a Hessian function. PEtab.jl works with likelihoods, so the objective corresponds to the negative log-likelihood (nllh):

julia
x = get_x(petab_prob)
nllh = petab_prob.nllh(x; prior = true)
21508.975038367662
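
For reference, the same objective can also be evaluated without the prior contribution (for example, to compare against a pure likelihood value); a minimal sketch using the prior keyword:

julia
# Negative log-likelihood without the parameter-prior contribution
nllh_no_prior = petab_prob.nllh(x; prior = false)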

The keyword argument prior = true (default) includes parameter priors (if defined). In addition, the PEtabODEProblem provides both in-place and out-of-place gradients:

julia
g_inplace = similar(x)
petab_prob.grad!(g_inplace, x)
g_outplace = petab_prob.grad(x)
ComponentVector{Float64}(log10_c1 = -174.86023059455079, log10_c2 = 173.58924897473662, log10_S0 = 168612.90863694134, log10_sigma = 24.71503988866759)

and Hessians:

julia
h_inplace = zeros(length(x), length(x))
petab_prob.hess!(h_inplace, x)
h_outplace = petab_prob.hess(x)
4×4 Matrix{Float64}:
   484.913     -481.687     -1005.78        -0.137188
  -481.687      481.387      1009.0          0.136463
 -1005.78      1009.0           1.0961e6  -117.111
    -0.137188     0.136463   -117.111      108.863
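
In both cases the in-place and out-of-place variants compute the same quantity; the only difference is whether the result array is preallocated. The component names in the gradient (log10_c1, ...) also reflect that, following the PEtab default, the parameters are estimated on a log10 scale. A quick consistency check:

julia
# In-place and out-of-place results should agree
g_inplace ≈ g_outplace
h_inplace ≈ h_outplace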

The input x is typically a ComponentArray, but plain Vector inputs are also supported. More details on what is available in a PEtabODEProblem can be found in the API documentation.
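
Since plain Vector input is supported, the functions above can also be called with collect-ed values, which is convenient for optimizers that do not handle ComponentArrays; a small sketch:

julia
x_vec = collect(x)      # plain Vector{Float64}, same ordering as x
petab_prob.nllh(x_vec)  # same value as petab_prob.nllh(x)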

Lastly, for ODE models, parameter bounds are often important: without bounds, the optimizer often explores regions of parameter space where the ODE solver fails, which increases runtime [1]. The bounds in petab_prob can be accessed with:

julia
lb, ub = petab_prob.lower_bounds, petab_prob.upper_bounds

lb and ub are ComponentArrays. If an optimizer does not support ComponentArray inputs, convert them to plain Vectors with collect (see below).
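
For instance, a quick sanity check before optimizing is that the start values returned by get_x lie within these bounds (a minimal sketch, reusing x from above):

julia
# Check that the nominal start point respects the box constraints
all(lb .<= x) && all(x .<= ub)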

Wrapping Optim.jl IPNewton

Optim.jl’s IPNewton expects an objective, gradient, Hessian, and box constraints (bounds provided as plain vectors). Using the functions in a PEtabODEProblem (see above), this can be set up as:

julia
using Optim
x0 = collect(get_x(petab_prob))
df  = TwiceDifferentiable(
    petab_prob.nllh, petab_prob.grad!, petab_prob.hess!, x0
)
dfc = TwiceDifferentiableConstraints(
    collect(lb), collect(ub)
)

Here, collect converts ComponentArrays to Vectors for Optim.jl. The optimization can then be run from x0:

julia
opt_res = Optim.optimize(df, dfc, x0, IPNewton())
 * Status: success

 * Candidate solution
    Final objective value:     1.570974e+02

 * Found with
    Algorithm:     Interior Point Newton

 * Convergence measures
    |x - x'|               = 0.00e+00 ≤ 0.0e+00
    |x - x'|/|x'|          = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|         = 0.00e+00 ≤ 0.0e+00
    |f(x) - f(x')|/|f(x')| = 0.00e+00 ≤ 0.0e+00
    |g(x)|                 = 5.12e-03 ≰ 1.0e-08

 * Work counters
    Seconds run:   2  (vs limit Inf)
    Iterations:    43
    f(x) calls:    92
    ∇f(x) calls:   92
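
After the run, the estimated parameters and the final objective value can be extracted with Optim.jl's standard accessors and, if convenient, copied back into the named layout returned by get_x; a sketch:

julia
xmin = Optim.minimizer(opt_res)  # estimated parameters as a plain Vector
fmin = Optim.minimum(opt_res)    # final negative log-likelihood
xbest = get_x(petab_prob)
xbest .= xmin                    # named ComponentVector with the estimated values

For single- and multi-start estimation without this manual setup, the high-level calibrate and calibrate_multistart functions mentioned in the introduction wrap a similar workflow.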

References

  1. F. Fröhlich and P. K. Sorger. Fides: Reliable trust-region optimization for parameter estimation of ordinary differential equation models. PLoS Computational Biology 18, e1010322 (2022).