API

PEtab.PEtabModel - Type
PEtabModel(path_yaml::String;
           build_julia_files::Bool=false,
           verbose::Bool=true,
           ifelse_to_event::Bool=true,
           write_to_file::Bool=true,
           jlfile_path::String="")::PEtabModel

Create a PEtabModel from a PEtab-specified problem, with the problem YAML file located at path_yaml.

When parsing a PEtab problem, several things happen under the hood:

  1. The SBML file is translated into ModelingToolkit.jl format to allow for symbolic computations of the ODE-model Jacobian. Piecewise and model events are further written into DifferentialEquations.jl callbacks.
  2. The observable PEtab table is translated into a Julia file with functions for computing the observable (h), noise parameter (σ), and initial values (u0).
  3. To allow gradients via adjoint sensitivity analysis and/or forward sensitivity equations, the gradients of h and σ are computed symbolically with respect to the ODE model's states (u) and parameters (ode_problem.p).

All of this happens automatically, and the resulting files are stored under petab_model.dir_julia assuming write_to_file=true. To save time, build_julia_files=false by default, which means that Julia files are not rebuilt if they already exist.

Arguments

  • path_yaml::String: Path to the PEtab problem YAML file.
  • build_julia_files::Bool=false: If true, forces the creation of Julia files for the problem even if they already exist.
  • verbose::Bool=true: If true, displays verbose output during parsing.
  • ifelse_to_event::Bool=true: If true, rewrites if-else statements in the SBML model as event-based callbacks.
  • write_to_file::Bool=true: If true, writes the built Julia files to disk (recommended).

Example

petab_model = PEtabModel("path_to_petab_problem_yaml")
PEtabModel(system::Union{ReactionSystem, ODESystem},
           simulation_conditions::Dict{String, Dict},
           observables::Dict{String, PEtabObservable},
           measurements::DataFrame,
           petab_parameters::Vector{PEtabParameter};
           state_map::Union{Nothing, Vector{Pair}}=nothing,
           parameter_map::Union{Nothing, Vector{Pair}}=nothing,
           events::Union{Nothing, PEtabEvent, AbstractVector}=nothing,
           verbose::Bool=false)::PEtabModel

Create a PEtabModel directly in Julia from a Catalyst reaction system or MTK ODESystem.

For additional information on the input format, see the main documentation.

Arguments

  • system::Union{ReactionSystem, ODESystem}: A Catalyst reaction system or a ModelingToolkit ODESystem.
  • simulation_conditions::Dict{String, Dict}: A dictionary specifying values for control parameters/species per simulation condition.
  • observables::Dict{String, PEtab.PEtabObservable}: A dictionary specifying the observable and noise formulas linking the model to data.
  • measurements::DataFrame: Measurement data to calibrate the model against.
  • petab_parameters::Vector{PEtab.PEtabParameter}: Parameters to estimate in PEtabParameter format.
  • state_map=nothing: An optional state map for setting initial species values that are constant across all simulation conditions.
  • parameter_map=nothing: An optional parameter map for setting parameter values that are constant across all simulation conditions.
  • events=nothing: Model events (callbacks) defined via PEtabEvent. In case of several events, provide a vector of PEtabEvent.
  • verbose::Bool=false: Whether to print progress when building the model.

Example

using Catalyst, DataFrames, ModelingToolkit, PEtab

# Define a reaction network model
rn = @reaction_network begin
    @parameters a0 b0
    @species A(t)=a0 B(t)=b0
    (k1, k2), A <--> B
end

# Alternatively we can use an ODESystem (just use sys in PEtabModel)
@parameters a0 b0 k1 k2
@variables t A(t) B(t)
D = Differential(t)
eqs = [
    D(A) ~ -k1*A + k2*B
    D(B) ~ k1*A - k2*B
]
@named sys = ODESystem(eqs; defaults=Dict(A => a0, B => b0))

# Measurement data
measurements = DataFrame(
    simulation_id=["c0", "c0"],
    obs_id=["obs_a", "obs_a"],
    time=[0, 10.0],
    measurement=[0.7, 0.1],
    noise_parameters=0.5
)

# Single experimental condition
simulation_conditions = Dict("c0" => Dict())

# PEtab-parameters to estimate
petab_parameters = [
    PEtabParameter(:a0, value=1.0, scale=:lin),
    PEtabParameter(:b0, value=0.0, scale=:lin),
    PEtabParameter(:k1, value=0.8, scale=:lin),
    PEtabParameter(:k2, value=0.6, scale=:lin)
]

# Observable equation
@unpack A = rn
observables = Dict("obs_a" => PEtabObservable(A, 0.5))

# Create a PEtabModel
petab_model = PEtabModel(
    rn, simulation_conditions, observables, measurements,
    petab_parameters, verbose=false
)
PEtabModel(system::Union{ReactionSystem, ODESystem},
           observables::Dict{String, PEtabObservable},
           measurements::DataFrame,
           petab_parameters::Vector{PEtabParameter};
           state_map::Union{Nothing, Vector{Pair}}=nothing,
           parameter_map::Union{Nothing, Vector{Pair}}=nothing,
           events::Union{Nothing, PEtabEvent, AbstractVector}=nothing,
           verbose::Bool=false)::PEtabModel

Create a PEtabModel directly in Julia from a Catalyst ReactionSystem or MTK ODESystem without simulation conditions.

For how to provide simulation conditions, and for a description of all arguments, see the method above.
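
Example

A minimal sketch reusing rn, observables, and petab_parameters from the example above; it is assumed the measurement table can omit the simulation_id column when no simulation conditions are given:

measurements = DataFrame(obs_id=["obs_a", "obs_a"], time=[0, 10.0],
                         measurement=[0.7, 0.1], noise_parameters=0.5)
petab_model = PEtabModel(rn, observables, measurements, petab_parameters)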

PEtab.PEtabODEProblem - Type
PEtabODEProblem

Everything needed to set up an optimization problem (compute the cost, gradient, and Hessian, and retrieve parameter bounds) for a PEtab model.

For the constructor, see below.

Note

The parameter vector θ is always assumed to be on the parameter scale specified in the PEtab parameters file. If needed, θ is transformed to the linear scale inside the function call.

Fields

  • compute_cost: For θ, computes the negative log-likelihood (the objective to minimize)
  • compute_chi2: For θ, computes the χ2 value
  • compute_gradient!: For θ, computes the gradient in place: compute_gradient!(gradient, θ)
  • compute_gradient: For θ, computes the gradient out of place: gradient = compute_gradient(θ)
  • compute_hessian!: For θ, computes the Hessian (or an approximation) in place: compute_hessian!(hessian, θ)
  • compute_hessian: For θ, computes the Hessian (or an approximation) out of place: hessian = compute_hessian(θ)
  • compute_FIM!: For θ, computes the empirical Fisher Information Matrix (FIM), the Hessian of the negative log-likelihood, in place: compute_FIM!(FIM, θ)
  • compute_FIM: For θ, computes the FIM out of place: FIM = compute_FIM(θ)
  • compute_simulated_values: For θ, computes the model (simulated) values corresponding to the measurements, in the same order as in the PEtab measurements table
  • compute_residuals: For θ, computes the residuals (hmodel - hobserved)^2 / σ^2 in the same order as in the PEtab measurements table
  • gradient_method: The method used to compute the gradient (either :ForwardDiff, :ForwardEquations, :Adjoint, or :Zygote).
  • hessian_method: The method used to compute or approximate the Hessian (either :ForwardDiff, :BlockForwardDiff, or :GaussNewton).
  • FIM_method: The method used to compute the FIM, either :ForwardDiff (full Hessian) or :GaussNewton (only recommended for models with >100 parameters)
  • n_parameters_esimtate: The number of parameters to estimate.
  • θ_names: The names of the parameters in θ.
  • θ_nominal: The nominal values of θ as specified in the PEtab parameters file.
  • θ_nominalT: The nominal values of θ on the parameter scale (e.g., log) as specified in the PEtab parameters file.
  • lower_bounds: The lower parameter bounds on the parameter scale for θ as specified in the PEtab parameters file.
  • upper_bounds: The upper parameter bounds on the parameter scale for θ as specified in the PEtab parameters file.
  • petab_model: The PEtabModel used to construct the PEtabODEProblem.
  • ode_solver: The options for the ODE solver specified when creating the PEtabODEProblem.
  • ode_solver_gradient: The options for the ODE solver gradient specified when creating the PEtabODEProblem.
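
For example, the fields above are callable functions stored in the problem (a minimal sketch using the nominal parameter vector, which is already on the PEtab scale):

θ = petab_problem.θ_nominalT
cost = petab_problem.compute_cost(θ)
gradient = zeros(length(θ))
petab_problem.compute_gradient!(gradient, θ)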

Constructor

PEtabODEProblem(petab_model::PEtabModel; <keyword arguments>)

Given a PEtabModel, creates a PEtabODEProblem with potential user-specified options.

The keyword arguments (described below) allow you to choose cost, gradient, and Hessian methods, ODE solver options, and other tunable options that can potentially make computations more efficient for some "edge-case" models. Please refer to the documentation for guidance on selecting the most efficient options for different types of models. If a keyword argument is not set, a suitable default option is chosen based on the number of model parameters. A minimal construction example is shown after the keyword arguments below.

Once created, a PEtabODEProblem contains everything needed to perform parameter estimation (see above).

Note

Every problem is unique, so even though the default settings often work well they might not be optimal.

Keyword arguments

  • ode_solver::ODESolver: Options for the ODE solver when computing the cost, such as solver and tolerances.
  • ode_solver_gradient::ODESolver: Options for the ODE solver when computing the gradient, such as the ODE solver options used in adjoint sensitivity analysis. Defaults to ode_solver if not set to nothing.
  • ss_solver::SteadyStateSolver: Options for finding steady-state for models with pre-equilibrium. Steady-state can be found via simulation or rootfinding, which can be set using SteadyStateSolver (see documentation). If not set, defaults to simulation with wrms < 1 termination.
  • ss_solver_gradient::SteadyStateSolver: Options for finding steady-state for models with pre-equilibrium when computing gradients. Defaults to ss_solver value if not set.
  • cost_method::Symbol=:Standard: Method for computing the cost (objective). Two options are available: :Standard, which is the most efficient, and :Zygote, which is less efficient but compatible with the Zygote automatic differentiation library.
  • gradient_method=nothing: Method for computing the gradient of the objective. Four options are available:
    • :ForwardDiff: Compute the gradient via forward-mode automatic differentiation using ForwardDiff.jl. Most efficient for models with ≤50 parameters. The number of chunks can be optionally set using chunksize.
    • :ForwardEquations: Compute the gradient via the model sensitivities, where sensealg specifies how to solve for the sensitivities. Most efficient when the Hessian is approximated using the Gauss-Newton method and when the optimizer can reuse the sensitivities (reuse_sensitivities) from gradient computations in Hessian computations (e.g., when the optimizer always computes the gradient before the Hessian).
    • :Adjoint: Compute the gradient via adjoint sensitivity analysis, where sensealg specifies which algorithm to use. Most efficient for large models (≥75 parameters).
    • :Zygote: Compute the gradient via the Zygote package, where sensealg specifies which sensitivity algorithm to use when solving the ODE model. This is the most inefficient option and not recommended.
  • hessian_method=nothing: method for computing the Hessian of the cost. There are three available options:
    • :ForwardDiff: Compute the Hessian via forward-mode automatic differentiation using ForwardDiff.jl. This is often only computationally feasible for models with ≤20 parameters but can greatly improve optimizer convergence.
    • :BlockForwardDiff: Compute the Hessian block approximation via forward-mode automatic differentiation using ForwardDiff.jl. The approximation consists of two block matrices: the first is the Hessian for only the dynamic parameters (parameter part of the ODE system), and the second is for the non-dynamic parameters (e.g., noise parameters). This is computationally feasible for models with ≤20 dynamic parameters and often performs better than BFGS methods.
    • :GaussNewton: Approximate the Hessian via the Gauss-Newton method, which often performs better than the BFGS method. If we can reuse the sensitivities from the gradient in the optimizer (see reuse_sensitivities), this method is best paired with gradient_method=:ForwardEquations.
  • FIM_method=nothing: Method for computing the empirical Fisher-Information-Matrix (FIM), can be:
    • :ForwardDiff - use ForwardDiff to compute the full Hessian (FIM) matrix, the default for models with ≤100 parameters
    • :GaussNewton - approximate the FIM with the Gauss-Newton Hessian approximation (only recommended when ForwardDiff is computationally infeasible)
  • sparse_jacobian::Bool=false: When solving the ODE du/dt=f(u, p, t), whether implicit solvers use a sparse Jacobian. Sparse Jacobian often performs best for large models (≥100 states).
  • specialize_level=SciMLBase.FullSpecialize: Specialization level when building the ODE problem. It is not recommended to change this parameter (see https://docs.sciml.ai/SciMLBase/stable/interfaces/Problems/).
  • sensealg: Sensitivity algorithm for gradient computations. The available options for each gradient method are:
    • :ForwardDiff: None (as ForwardDiff takes care of all computation steps).
    • :ForwardEquations: :ForwardDiff (uses ForwardDiff.jl and typically performs best) or ForwardDiffSensitivity() and ForwardSensitivity() from SciMLSensitivity.jl (https://github.com/SciML/SciMLSensitivity.jl).
    • :Adjoint: InterpolatingAdjoint() and QuadratureAdjoint() from SciMLSensitivity.jl.
    • :Zygote: All sensealg in SciMLSensitivity.jl.
  • sensealg_ss=nothing: Sensitivity algorithm for adjoint gradient computations for steady-state simulations. The available options are SteadyStateAdjoint(), InterpolatingAdjoint(), and QuadratureAdjoint() from SciMLSensitivity.jl. SteadyStateAdjoint() is the most efficient but requires a non-singular Jacobian; in the case of a singular Jacobian, the code automatically switches to InterpolatingAdjoint().
  • chunksize=nothing: Chunk-size for ForwardDiff.jl when computing the gradient and Hessian via forward-mode automatic differentiation. If nothing is provided, the default value is used. Tuning chunksize is non-trivial, and we plan to add automatic functionality for this.
  • split_over_conditions::Bool=false: For gradient and Hessian via ForwardDiff.jl, whether or not to split calls to ForwardDiff across experimental (simulation) conditions. This parameter should only be set to true if the model has many parameters specific to an experimental condition; otherwise, the overhead from the calls will increase run time. See the Beer example for a case where this is needed.
  • reuse_sensitivities::Bool=false : If set to true, reuse the sensitivities computed during gradient computations for the Gauss-Newton Hessian approximation. This option is only applicable when using hessian_method=:GaussNewton and gradient_method=:ForwardEquations. Note that it should only be used when the optimizer always computes the gradient before the Hessian.
  • verbose::Bool=true : If set to true, print progress messages while setting up the PEtabODEProblem.
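
Example

A minimal construction sketch (Rodas5P is assumed to be loaded from OrdinaryDiffEq.jl; the chosen methods are illustrative, not general recommendations):

using OrdinaryDiffEq
petab_problem = PEtabODEProblem(petab_model;
                                ode_solver=ODESolver(Rodas5P(), abstol=1e-8, reltol=1e-8),
                                gradient_method=:ForwardDiff,
                                hessian_method=:ForwardDiff)
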
PEtab.PEtabObservable - Type
PEtabObservable(obs_formula, noise_formula; transformation::Symbol=:lin)

Links a model to measurements using an observable formula and measurement noise formula.

The transformation argument can take one of three values: :lin (for normal measurement noise), :log, or :log10 (for log-normal measurement noise). For a full description of options, including how to define measurement-specific observable and noise parameters, see the main documentation.

Examples

# Example 1: Log-normal measurement noise with known error σ=3.0
@unpack X = rn  # 'rn' is the dynamic model
PEtabObservable(X, 3.0, transformation=:log)
# Example 2: Normal measurement noise with estimation of σ (defined as PEtabParameter)
@unpack X, Y = rn  # 'rn' is the dynamic model
@parameters sigma
PEtabObservable((X + Y) / X, sigma)
# Example 3: Normal measurement noise with measurement-specific noiseParameter
@unpack X, Y = rn  # 'rn' is the dynamic model
@parameters noiseParameter1  # Must be in the format 'noiseParameter'
PEtabObservable(X, noiseParameter1 * X)
PEtab.PEtabParameter - Type
PEtabParameter(id::Union{Num, Symbol}; <keyword arguments>)

Represents a parameter to be estimated in a PEtab model calibration problem.

Keyword Arguments

  • estimate::Bool=true: Specifies whether the parameter should be estimated (default) or set as constant.
  • value::Union{Nothing, Float64}=nothing: The parameter value to use if estimate=false. Defaults to the midpoint between lb and ub.
  • scale::Symbol=:log10: The scale on which to estimate the parameter. Allowed options are :log10 (default), :log, and :lin.
  • lb::Float64=1e-3: The lower parameter bound in parameter estimation (default: 1e-3).
  • ub::Float64=1e3: The upper parameter bound in parameter estimation (default: 1e3).
  • prior=nothing: An optional continuous prior distribution from the Distributions package.
  • prior_on_linear_scale::Bool=true: Specifies whether the prior is on the linear scale (default) or the transformed scale, e.g., log10-scale.
  • sample_from_prior::Bool=true: Whether to sample the parameter from the prior distribution when generating startguesses for model calibration.

Examples

using Distributions

# Example 1: Parameter with a Log-Normal prior (LN(μ=3.0, σ=1.0)) estimated on the log10 scale
PEtabParameter(:c1, prior=LogNormal(3.0, 1.0))
# Example 2: Parameter estimated on the log scale with a Normal prior (N(0.0, 1.0)) on the log scale
PEtabParameter(:c1, scale=:log, prior=Normal(0.0, 1.0), prior_on_linear_scale=false)
PEtab.PEtabEvent - Type
PEtabEvent(condition, affect, target)

An event triggered by condition that sets the value of target to that of affect.

If condition is a single value or model parameter (e.g., c1 or 1.0), the event is triggered when time reaches that value (e.g., t == c1 or t == 1.0). Condition can also depend on model states, for example, S == 2.0 will trigger the event when the state S reaches the value 2.0. In contrast, S > 2.0 will trigger the event when S increases from below 2.0 (specifically, the event is triggered when the condition changes from false to true). Note that the condition can contain model parameter values or species, e.g., S > c1.

affect can be a constant value (e.g., 1.0) or an algebraic expression of model parameters/states. For example, to add 5.0 to the state S, write S + 5. In case an event affects several parameters and/or states provide affect as a Vector, for example [S + 5, 1.0].

target is either a model state or parameter that the event acts on. In case an event affects several states and/or parameters, provide target as a Vector, where target[i] is the target of affect[i].

For more details, see the documentation.

Note

If the condition and target are single parameters or states, they can be specified as a Num (obtained via @unpack) or a Symbol. If the event involves multiple parameters or states, you must provide them as either a Num (as shown below) or a String.

Examples

using Catalyst
# Trigger event at t = 3.0, add 5 to A
rn = @reaction_network begin
    (k1, k2), A <--> B
end
@unpack A = rn
event = PEtabEvent(3.0, A + 5.0, A)
using Catalyst
# Trigger event at t = k1, set k2 to 3
rn = @reaction_network begin
    (k1, k2), A <--> B
end
event = PEtabEvent(:k1, 3.0, :k2)
using Catalyst
# Trigger event when A == 0.2, set B to 2.0
rn = @reaction_network begin
    (k1, k2), A <--> B
end
@unpack A, B = rn
event = PEtabEvent(A == 0.2, 2.0, B)
using Catalyst
# Trigger event when A == 0.2, set B to 2.0 and A += 2
rn = @reaction_network begin
    (k1, k2), A <--> B
end
@unpack A, B = rn
event = PEtabEvent(A == 0.2, [A + 2, 2.0], [A, B])
PEtab.ODESolver - Type
ODESolver(solver, <keyword arguments>)

ODE-solver options (solver, tolerances, etc.) to use when computing the gradient/cost for a PEtabODEProblem.

More information about the available options and solvers can be found in the documentation for DifferentialEquations.jl (https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve/). Recommended settings for which solver and options to use for different problems can be found below and in the documentation.

Arguments

  • solver: Any of the ODE solvers in DifferentialEquations.jl. For small (≤20 states) mildly stiff models, composite solvers such as AutoVern7(Rodas5P()) perform well. For stiff small models, Rodas5P() performs well. For medium-sized models (≤75 states), QNDF(), FBDF(), and CVODE_BDF() perform well. CVODE_BDF() is not compatible with automatic differentiation and thus cannot be used if the gradient is computed via automatic differentiation or if the Gauss-Newton Hessian approximation is used. If the gradient is computed via adjoint sensitivity analysis, CVODE_BDF() is often the best choice as it is typically more reliable than QNDF() and FBDF() (fails less often).
  • abstol=1e-8: Absolute tolerance when solving the ODE system. Not recommended to increase above 1e-6 for gradients.
  • reltol=1e-8: Relative tolerance when solving the ODE system. Not recommended to increase above 1e-6 for gradients.
  • force_dtmin=false: Whether or not to force dtmin when solving the ODE system.
  • dtmin=nothing: Minimal acceptable step-size when solving the ODE system.
  • maxiters=10000: Maximum number of iterations when solving the ODE system. Increasing above the default value can cause the optimization to take substantial time.
  • verbose::Bool=true: Whether or not warnings are displayed if the solver exits early. true is recommended in order to detect if a suboptimal ODE solver was chosen.
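
Example

A hedged sketch (the solver comes from OrdinaryDiffEq.jl and the tolerances are illustrative; petab_model is an existing PEtabModel):

using OrdinaryDiffEq
# Stiff solver with tighter-than-default tolerances
ode_solver = ODESolver(Rodas5P(), abstol=1e-10, reltol=1e-10)
petab_problem = PEtabODEProblem(petab_model; ode_solver=ode_solver)
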
PEtab.SteadyStateSolver - Type
SteadyStateSolver(method::Symbol;
                  check_simulation_steady_state::Symbol=:wrms,
                  rootfinding_alg=nothing,
                  abstol=nothing,
                  reltol=nothing,
                  maxiters=nothing)

Setup options for finding steady-state via either method=:Rootfinding or method=:Simulate.

For method=:Rootfinding, the steady-state u* is found by solving the problem du = f(u, p, t) ≈ 0 with tolerances abstol and reltol via an automatically chosen root-finding algorithm (rootfinding_alg=nothing) or via any algorithm provided from NonlinearSolve.jl.

For method=:Simulate, the steady-state u* is found by simulating the ODE system until du = f(u, p, t) ≈ 0. Two options are available for check_simulation_steady_state:

  • :wrms: Weighted root-mean square: √(∑((du ./ (reltol * u .+ abstol)).^2) / length(u)) < 1
  • :Newton: The Newton step Δu is sufficiently small: √(∑((Δu ./ (reltol * u .+ abstol)).^2) / length(u)) < 1. Newton often performs better but requires an invertible Jacobian; if the Jacobian is not invertible, the code automatically switches to :wrms.

maxiters refers to either the maximum number of rootfinding steps or the maximum number of integration steps, depending on the chosen method.
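
Example

A minimal sketch for a model with pre-equilibration (petab_model is an existing PEtabModel):

# Find the steady state by simulation, terminating on the wrms criterion
ss_solver = SteadyStateSolver(:Simulate, check_simulation_steady_state=:wrms)
petab_problem = PEtabODEProblem(petab_model; ss_solver=ss_solver)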

PEtab.remake_PEtab_problem - Function
remake_PEtab_problem(petab_problem::PEtabODEProblem, parameters_change::Dict) :: PEtabODEProblem

Fixate model parameters for a given PEtabODEProblem without recompiling the problem.

This function allows you to modify parameters without the need to recompile the underlying code, resulting in reduced latency. To fixate the parameter k1, you can use parameters_change=Dict(:k1 => 1.0).

If model derivatives are computed using ForwardDiff.jl with a chunk-size of N, the new PEtabODEProblem will only evaluate the necessary number of chunks of size N to compute the full gradient for the remade problem.
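
Example

For example, to fix the parameter k1 to 1.0 (a sketch assuming petab_problem estimates a parameter named k1):

petab_problem_fixed = remake_PEtab_problem(petab_problem, Dict(:k1 => 1.0))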

PEtab.Fides - Type
Fides(hessian_method::Union{Nothing, Symbol}; verbose::Bool=false)

Fides is a Python Newton-trust region optimizer designed for box-bounded optimization problems.

It excels when the full Hessian is too computationally expensive, but a Gauss-Newton Hessian approximation can be calculated. When constructed with Fides(verbose=true), it displays optimization progress during estimation.

Hessian Methods

If hessian_method=nothing, the Hessian method from the PEtabODEProblem is used, which can be either exact or a Gauss-Newton approximation. Additionally, Fides supports the following Hessian approximation methods:

  • :BB: Broyden's "bad" method
  • :BFGS: Broyden-Fletcher-Goldfarb-Shanno update strategy
  • :BG: Broyden's "good" method
  • :Broyden: Broyden-class update scheme
  • :SR1: Symmetric Rank 1 update
  • :SSM: Structured Secant Method
  • :TSSM: Totally Structured Secant Method

For more information on each method, see the Fides documentation.

Examples

# Fides with the Hessian method as in the PEtabProblem
fides_opt = Fides(nothing)
# Fides with the BFGS Hessian approximation, with progress printing
fides_opt = Fides(:BFGS; verbose=true)
PEtab.OptimizationProblem - Function
OptimizationProblem(petab_problem::PEtabODEProblem;
                    interior_point_alg::Bool = false,
                    box_constraints::Bool = true)

Create an Optimization.jl OptimizationProblem from a PEtabODEProblem.

To utilize interior-point Newton methods (e.g. Optim IPNewton or Ipopt), set interior_point_alg to true.

To use algorithms not compatible with box constraints (e.g., NewtonTrustRegion), set box_constraints to false. Note that with this option optimizers may move outside the parameter bounds in the petab_problem, which can negatively impact performance.

Examples

# Use IPNewton with startguess u0
using OptimizationOptimJL
prob = PEtab.OptimizationProblem(petab_problem, interior_point_alg=true)
prob.u0 .= u0
sol = solve(prob, IPNewton())
# Use Optim ParticleSwarm with startguess u0
using OptimizationOptimJL
prob = PEtab.OptimizationProblem(petab_problem)
prob.u0 .= u0
sol = solve(prob, Optim.ParticleSwarm())
PEtab.IpoptOptions - Type
IpoptOptions(;print_level::Int64=0,
             max_iter::Int64=1000,
             tol::Float64=1e-8,
             acceptable_tol::Float64=1e-6,
             max_wall_time::Float64=1e20,
             acceptable_obj_change_tol::Float64=1e20)

Wrapper for a subset of Ipopt options to set during parameter estimation.

For more information about each option, see the Ipopt documentation.

Arguments

  • print_level: Output verbosity level (valid values are 0 ≤ print_level ≤ 12)
  • max_iter: Maximum number of iterations
  • tol: Relative convergence tolerance
  • acceptable_tol: Acceptable relative convergence tolerance
  • max_wall_time: Maximum wall time the optimization is allowed to run
  • acceptable_obj_change_tol: Acceptance stopping criterion based on objective function change.

See also calibrate_model and calibrate_model_multistart.
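
Example

A minimal sketch of passing Ipopt options to calibrate_model (an existing petab_problem and start guess p0 are assumed):

using Ipopt
opt = IpoptOptions(max_iter=500, tol=1e-8)
res = calibrate_model(petab_problem, p0, IpoptOptimiser(false); options=opt)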

PEtab.IpoptOptimiser - Type
IpoptOptimiser(LBFGS::Bool)

Ipopt is an Interior-point Newton method designed for nonlinear optimization.

Ipopt can be configured to use either the Hessian method from the PEtabODEProblem (LBFGS=false) or an LBFGS scheme (LBFGS=true). For setting Ipopt options, see IpoptOptions.

See also calibrate_model and calibrate_model_multistart.

Examples

# Ipopt with the Hessian method as in the PEtabProblem
ipopt_opt = IpoptOptimiser(false)
# Ipopt with LBFGS Hessian approximation
ipopt_opt = IpoptOptimiser(true)
PEtab.calibrate_model - Function
calibrate_model(petab_problem::PEtabODEProblem,
                p0::Vector{Float64},
                alg;
                save_trace::Bool=false,
                options=algOptions)::PEtabOptimisationResult

Estimate the parameters of a PEtabODEProblem using the optimization algorithm alg, starting from the initial guess p0.

The optimization algorithm alg can be an Optim.jl optimizer (e.g., Optim.IPNewton() or Optim.LBFGS()), IpoptOptimiser, or Fides (see the Note and Examples below).

For how to use an OptimizationProblem from Optimization.jl, see the second method below.

Each algorithm accepts specific optimizer options in the format of the respective package. For a comprehensive list of available options, please refer to the main documentation.

If you want the optimizer to return parameter and objective trace information, set save_trace=true. Results are returned as a PEtabOptimisationResult, which includes the following information: minimum parameter values found (xmin), smallest objective value (fmin), number of iterations, runtime, whether the optimizer converged, and optionally, the trace.

calibrate_model(optimization_problem::OptimizationProblem,
                petab_problem::PEtabODEProblem,
                p0::Vector{Float64},
                alg;
                kwargs...)

Perform parameter estimation for an OptimizationProblem using algorithm alg and startguess p0.

To create an OptimizationProblem from a PEtabODEProblem, see PEtab.OptimizationProblem. All algorithms from Optimization.jl are supported. However, depending on the algorithm, different options must be specified when creating the OptimizationProblem.

Solver options are provided via keyword arguments, and a list can be found in the Optimization.jl documentation. To, for example, run calibration with reltol=1e-8, use calibrate_model(prob, p0, alg; reltol=1e-8).

Note

To use Optim optimizers, you must load Optim with using Optim. To use Ipopt, you must load Ipopt with using Ipopt. To use Fides, load PyCall with using PyCall and ensure Fides is installed (see the documentation for setup). To use Optimization.jl, load it with using Optimization.

Examples

# Perform parameter estimation using Optim's IPNewton with a given initial guess
using Optim
res = calibrate_model(petab_problem, p0, Optim.IPNewton();
                     options=Optim.Options(iterations = 1000))
# Perform parameter estimation using Fides with a given initial guess
using PyCall
res = calibrate_model(petab_problem, p0, Fides(nothing);
                     options=py"{'maxiter' : 1000}"o)
# Perform parameter estimation using Ipopt and save the trace
using Ipopt
res = calibrate_model(petab_problem, p0, IpoptOptimiser(false);
                     options=IpoptOptions(max_iter = 1000),
                     save_trace=true)
# Perform parameter estimation using Optimization
using Optimization
using OptimizationOptimJL
prob = PEtab.OptimizationProblem(petab_problem, interior_point_alg=true)
res = calibrate_model(prob, petab_problem, p0, IPNewton())
PEtab.calibrate_model_multistart - Function
calibrate_model_multistart(petab_problem::PEtabODEProblem,
                           alg,
                           n_multistarts::Signed,
                           dir_save::Union{Nothing, String};
                           sampling_method=QuasiMonteCarlo.LatinHypercubeSample(),
                           sample_from_prior::Bool=true,
                           options=options,
                           seed=nothing,
                           save_trace::Bool=false)::PEtabMultistartOptimisationResult

Perform multistart optimization for a PEtabODEProblem using the algorithm alg.

The optimization algorithm alg can be an Optim.jl optimizer (e.g., Optim.IPNewton() or Optim.LBFGS()), IpoptOptimiser, or Fides (see the Note and Examples below).

For each algorithm, optimizer options can be provided in the format of the respective package. For a comprehensive list of available options, please refer to the main documentation. If you want the optimizer to return parameter and objective trace information, set save_trace=true.

Multistart optimization involves generating multiple starting points for optimization runs. These starting points are generated using the specified sampling_method from QuasiMonteCarlo.jl, with the default being LatinHypercubeSample, a method that typically produces better results than random sampling. If sample_from_prior=true (default), then for parameters with priors, samples are taken from the prior distribution, where the distribution is clipped/truncated at the parameter's lower and upper bounds. For reproducibility, you can set a random number generator seed using the seed parameter.

If dir_save is provided as nothing, results are not written to disk. Otherwise, if a directory path is provided, results are written to disk. Writing results to disk is recommended in case the optimization process is terminated before all runs have finished.

The results are returned as a PEtabMultistartOptimisationResult, which stores the best-found minimum (xmin), the smallest objective value (fmin), as well as the optimization results for each run.

calibrate_model_multistart(optimization_problem::OptimizationProblem,
                           alg,
                           n_multistarts::Signed,
                           dir_save::Union{Nothing, String};
                           sampling_method=QuasiMonteCarlo.LatinHypercubeSample(),
                           sample_from_prior::Bool=true,
                           seed::Union{Nothing, Integer}=nothing,
                           kwargs...)::PEtabMultistartOptimisationResult

Perform multistart optimization for an OptimizationProblem using the algorithm alg.

To create an OptimizationProblem from a PEtabODEProblem, see PEtab.OptimizationProblem. All algorithms from Optimization.jl are supported. However, depending on the algorithm, different options must be specified when creating the OptimizationProblem.

Solver options are provided via keyword arguments, and a list can be found in the Optimization.jl documentation. To, for example, run calibration with reltol=1e-8, use calibrate_model_multistart(prob, alg, n, dir_save; reltol=1e-8).

Note

To use Optim optimizers, you must load Optim with using Optim. To use Ipopt, you must load Ipopt with using Ipopt. To use Fides, load PyCall with using PyCall and ensure Fides is installed (see the documentation for setup). To use Optimization.jl, load it with using Optimization.

Examples

# Perform 100 optimization runs using Optim's IPNewton, save results in dir_save
using Optim
dir_save = joinpath(@__DIR__, "Results")
res = calibrate_model_multistart(petab_problem, Optim.IPNewton(), 100, dir_save;
                               options=Optim.Options(iterations = 1000))
# Perform 100 optimization runs using Fides, save results in dir_save
using PyCall
dir_save = joinpath(@__DIR__, "Results")
res = calibrate_model_multistart(petab_problem, Fides(nothing), 100, dir_save;
                               options=py"{'maxiter' : 1000}"o)
# Perform 100 optimization runs using Ipopt, save results in dir_save. For each
# run save the trace
using Ipopt
dir_save = joinpath(@__DIR__, "Results")
res = calibrate_model_multistart(petab_problem, IpoptOptimiser(false), 100, dir_save;
                               options=IpoptOptions(max_iter = 1000),
                               save_trace=true)
# Perform 100 optimization runs using Optimization with IPNewton, save results in dir_save.
using Optimization
using OptimizationOptimJL
prob = PEtab.OptimizationProblem(petab_problem, interior_point_alg=true)
res = calibrate_model_multistart(prob, IPNewton(), 100, dir_save;
                                 reltol=1e-8)
PEtab.PEtabMultistartOptimisationResult - Type
PEtabMultistartOptimisationResult(dir_res::String; which_run::String="1")

Read PEtab multistart optimization results saved at dir_res.

Each time a new optimization run is performed, results are saved with unique numerical endings appended to the directory specified by dir_res. Results from a specific run can be retrieved by specifying the numerical ending via which_run. For example, to access results from the second run, set which_run="2".
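
Example

For example, to read back saved multistart results (dir_res is assumed to be the directory given to calibrate_model_multistart):

res = PEtabMultistartOptimisationResult(dir_res)
# Results from the second run saved under dir_res
res2 = PEtabMultistartOptimisationResult(dir_res; which_run="2")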

PEtab.run_PEtab_select - Function
run_PEtab_select(path_yaml, alg; <keyword arguments>)

Given a PEtab-select YAML file, perform model selection with the algorithms specified in the YAML file.

Results are written to a YAML file in the same directory as the PEtab-select YAML file.

Each candidate model produced during the model selection undergoes parameter estimation using local multi-start optimization. Three options for alg are supported: Fides() (Fides Newton trust-region), IPNewton() from Optim.jl, and LBFGS() from Optim.jl. Additional keywords for the optimization are n_multistarts::Int, the number of multi-starts for parameter estimation (defaults to 100), and optimizationSamplingMethod, any sampling method from QuasiMonteCarlo.jl for generating start guesses (defaults to LatinHypercubeSample).

Simulation options can be set using any keyword argument accepted by the PEtabODEProblem function. For example, setting gradient_method=:ForwardDiff specifies the use of forward-mode automatic differentiation for gradient computation. If left blank, we automatically select appropriate options based on the size of the problem.

Note

To use Optim optimizers, you must load Optim with using Optim. To use Ipopt, you must load Ipopt with using Ipopt. To use Fides, load PyCall with using PyCall and ensure Fides is installed (see documentation for setup).
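
Example

A minimal sketch (path_yaml is assumed to point to a PEtab-select problem YAML file):

using Optim
run_PEtab_select(path_yaml, IPNewton(); n_multistarts=50, gradient_method=:ForwardDiff)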

PEtab.generate_startguesses - Function
generate_startguesses(petab_problem::PEtabODEProblem,
                      n_multistarts::Int64;
                      sampling_method::T=QuasiMonteCarlo.LatinHypercubeSample(),
                      sample_from_prior::Bool=true,
                      allow_inf_for_startguess::Bool=false,
                      verbose::Bool=false)::Array{Float64} where T <: QuasiMonteCarlo.SamplingAlgorithm

Generate n_multistarts initial parameter guesses within the parameter bounds of the petab_problem using sampling_method.

Any sampling algorithm from QuasiMonteCarlo is supported, but LatinHypercubeSample is recommended as it usually performs well. If sample_from_prior=true (default), then for parameters with priors, samples are taken from the prior distribution, where the distribution is clipped/truncated at the parameter's lower and upper bounds.

If n_multistarts is set to 1, a single random vector within the parameter bounds is returned. For n_multistarts > 1, a matrix is returned, with each column representing a different initial guess.

By default allow_inf_for_startguess=false, meaning only initial guesses that yield a finite cost are returned. If allow_inf_for_startguess=true, initial guesses whose cost evaluates to Inf are also allowed.

Example

# Generate a single initial guess within the parameter bounds
start_guess = generate_startguesses(petab_problem, 1)
# Generate 10 initial guesses using Sobol sampling
start_guess = generate_startguesses(petab_problem, 10,
                                    sampling_method=QuasiMonteCarlo.SobolSample())
PEtab.get_ps - Function
get_ps(res::Union{PEtabOptimisationResult, PEtabMultistartOptimisationResult, Vector{Float64}},
       petab_problem::PEtabODEProblem;
       condition_id::Union{String, Symbol, Nothing}=nothing,
       retmap::Bool=true)

From a fitted PEtab model or a parameter vector, retrieve the ODE parameters needed to simulate the model for the specified condition_id.

If condition_id is provided, the parameters are extracted for that specific simulation condition. If not provided, parameters for the first (default) simulation condition are returned.

If a parameter vector is provided it must have the parameters in the same order as petab_problem.θ_names.

If retmap=true, a parameter map is returned; otherwise, a vector is returned.
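
Example

A minimal sketch (res from a calibration and the condition id "cond1" are assumed):

ps = get_ps(res, petab_problem; condition_id="cond1")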

PEtab.get_u0 - Function
get_u0(res::Union{PEtabOptimisationResult, PEtabMultistartOptimisationResult, Vector{Float64}},
       petab_problem::PEtabODEProblem;
       condition_id::Union{String, Symbol, Nothing}=nothing,
       pre_eq_id::Union{String, Symbol, Nothing}=nothing,
       retmap::Bool=true)

From a fitted PEtab model or a parameter vector, retrieve the initial values (u0) needed to simulate the model for the specified condition_id.

If condition_id is provided, the initial values are extracted for that specific simulation condition. If not provided, initial values for the first (default) simulation condition are returned.

If a pre_eq_id is provided, the initial values are taken from the pre-equilibration simulation corresponding to pre_eq_id. If there are potential overrides of initial values in the simulation conditions, they take priority over the pre-equilibrium simulation.

If a parameter vector is provided it must have the parameters in the same order as petab_problem.θ_names.

If retmap=true, a map of initial values is returned; otherwise, a vector is returned.
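
Example

A minimal sketch (res from a calibration and the condition id "cond1" are assumed):

u0 = get_u0(res, petab_problem; condition_id="cond1", retmap=false)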

PEtab.get_odeproblem - Function
get_odeproblem(res::Union{PEtabOptimisationResult, PEtabMultistartOptimisationResult, Vector{Float64}},
               petab_problem::PEtabODEProblem;
               condition_id::Union{String, Symbol, Nothing}=nothing,
               pre_eq_id::Union{String, Symbol, Nothing}=nothing)

From a fitted PEtab model or a parameter vector, retrieve the ODEProblem and callbacks needed to simulate the model for the specified condition_id.

If condition_id is provided, the parameters are extracted for that specific simulation condition. If not provided, parameters for the first (default) simulation condition are returned.

If a pre_eq_id is provided, the initial values are taken from the pre-equilibration simulation corresponding to pre_eq_id.

If a parameter vector is provided it must have the parameters in the same order as petab_problem.θ_names.

Potential events (callbacks) are returned as the second argument, and potential event times (tstops) are returned as the third argument.

Example

using OrdinaryDiffEq
# Solve the model with callbacks
prob, cb, tstops = get_odeproblem(res, petab_problem, condition_id="cond1")
sol = solve(prob, Rodas5P(), callback=cb, tstops=tstops)
PEtab.get_odesol - Function
get_odesol(res::Union{PEtabOptimisationResult, PEtabMultistartOptimisationResult, Vector{Float64}},
           petab_problem::PEtabODEProblem;
           condition_id::Union{String, Symbol, Nothing}=nothing,
           pre_eq_id::Union{String, Symbol, Nothing}=nothing)

From a fitted PEtab model or a parameter vector, retrieve the ODE solution for the specified condition_id.

If condition_id is provided, the parameters are extracted for that specific simulation condition. If not provided, parameters for the first (default) simulation condition are returned.

If a pre_eq_id is provided, the initial values are taken from the pre-equilibration simulation corresponding to pre_eq_id.

If a parameter vector is provided it must have the parameters in the same order as petab_problem.θ_names.

Potential events are accounted for when solving the ODE model. The ODE solver options specified when creating the petab_problem are used for solving the model.
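
Example

A minimal sketch (res from a calibration and the condition id "cond1" are assumed):

sol = get_odesol(res, petab_problem; condition_id="cond1")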

PEtab.solve_all_conditions - Function
solve_all_conditions(xpetab, petab_problem::PEtabODEProblem, solver; <keyword arguments>)

Simulates the ODE model for all simulation conditions using the provided ODE solver and parameter vector xpetab.

The parameter vector xpetab should be provided on the PEtab scale (default log10).

Keyword Arguments

  • abstol=1e-8: Absolute tolerance for the ODE solver.
  • reltol=1e-8: Relative tolerance for the ODE solver.
  • maxiters=1e4: Maximum iterations for the ODE solver.
  • n_timepoints_save=0: Specifies the number of time points at which to save the ODE solution for each condition. A value of 0 means the solution is saved at the solver's default time points.
  • save_at_observed_t=false: When set to true, this option overrides n_timepoints_save and saves the ODE solution only at the time points where measurement data are available.

Returns

  • odesols: A dictionary containing the ODESolution for each condition.
  • could_solve: A boolean value indicating whether the model was successfully solved for all conditions.
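
Example

A minimal sketch (Rodas5P from OrdinaryDiffEq.jl; the nominal parameters on the PEtab scale are used as xpetab, and the return order follows the Returns list above):

using OrdinaryDiffEq
xpetab = petab_problem.θ_nominalT
odesols, could_solve = solve_all_conditions(xpetab, petab_problem, Rodas5P();
                                            save_at_observed_t=true)
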
PEtab.compute_runtime_accuracy - Function
compute_runtime_accuracy(xpetab, petab_problem, solver; <keyword arguments>)

Get runtime and accuracy for an ODE solver when simulating a model across all simulation conditions with parameter vector xpetab.

The parameter vector xpetab should be provided on the PEtab scale (default log10).

Keyword Arguments

  • abstol=1e-8: Absolute tolerance for the ODE solver.
  • reltol=1e-8: Relative tolerance for the ODE solver.
  • solver_high_acc=Rodas4P(): The ODE solver used to generate a high-accuracy reference solution against which accuracy is computed.
  • abstol_highacc=1e-12: Absolute tolerance for the high accuracy ODE solver.
  • reltol_highacc=1e-12: Relative tolerance for the high accuracy ODE solver.
  • compute_acc=true: If set to false, accuracy is not evaluated (0 is returned).
  • ntimes_solve=5: Number of times to simulate the model when determining the average runtime.

Returns

  • runtime: The average time taken to solve the model across all conditions, in seconds.
  • acc: The solver's accuracy, determined by comparison with the high accuracy ODE solver.
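
Example

A minimal sketch (Rodas5P from OrdinaryDiffEq.jl, with the nominal parameters on the PEtab scale as xpetab):

using OrdinaryDiffEq
xpetab = petab_problem.θ_nominalT
runtime, acc = compute_runtime_accuracy(xpetab, petab_problem, Rodas5P(); ntimes_solve=3)
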
PEtab.PEtabLogDensity - Type
PEtabLogDensity(prob::PEtabODEProblem)

Construct a LogDensityProblem using the likelihood and gradient method from the PEtabODEProblem.

This LogDensityProblem method defines everything needed to perform Bayesian inference with libraries such as AdvancedHMC.jl (which includes algorithms like NUTS, used by Turing.jl), AdaptiveMCMC.jl for adaptive Markov Chain Monte Carlo methods, and Pigeons.jl for parallel tempering methods. For examples on how to perform inference, see the documentation.
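
Example

A minimal sketch of the standard LogDensityProblems.jl interface (the required packages follow the Note under to_prior_scale below):

using Bijectors, LogDensityProblems, LogDensityProblemsAD
target = PEtabLogDensity(petab_problem)
# Evaluate the log-density at the nominal parameters, transformed to the inference (prior) scale
x = to_prior_scale(petab_problem.θ_nominalT, target)
logp = LogDensityProblems.logdensity(target, x)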

PEtab.PEtabPigeonReference - Type
PEtabPigeonReference(prob::PEtabODEProblem)

Construct a reference distribution (the prior) for parallel tempering with Pigeons.jl.

This reference distribution is used when setting up parallel tempering for a PEtab problem with Pigeons.jl. For examples on how to perform Bayesian inference, see the documentation.

PEtab.to_prior_scale - Function
to_prior_scale(xpetab, target::PEtabLogDensity)::AbstractVector

Transforms the parameter vector xpetab from the PEtab problem scale to the prior scale.

This conversion is essential for Bayesian inference, as in PEtab.jl Bayesian inference is performed on the prior scale.

Note

To use this function, Bijectors, LogDensityProblems, and LogDensityProblemsAD must be loaded: using Bijectors, LogDensityProblems, LogDensityProblemsAD
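
Example

For example (a sketch using the nominal parameter vector on the PEtab scale):

using Bijectors, LogDensityProblems, LogDensityProblemsAD
target = PEtabLogDensity(petab_problem)
x_prior = to_prior_scale(petab_problem.θ_nominalT, target)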

PEtab.to_chains - Function
to_chains(res, target::PEtabLogDensity; start_time=nothing, end_time=nothing)::MCMCChains

Converts Bayesian inference results obtained with a PEtabLogDensity into an MCMCChains.Chains object.

res can be the inference results from AdvancedHMC.jl, AdaptiveMCMC.jl, or Pigeons.jl. The output chain has the inferred parameters on the prior scale.

Keyword Arguments

  • start_time: Optional starting time for the inference, obtained with now().
  • end_time: Optional ending time for the inference, obtained with now().

Note

To use this function, MCMCChains must be loaded: using MCMCChains
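
Example

A minimal sketch (res is assumed to be the output of an inference run with, e.g., AdvancedHMC.jl against target; now() comes from Dates):

using Dates, MCMCChains
start_time = now()
# ... run inference with AdvancedHMC.jl, AdaptiveMCMC.jl, or Pigeons.jl to obtain res ...
end_time = now()
chains = to_chains(res, target; start_time=start_time, end_time=end_time)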
