Optimization#

class pysolver_view.optimization.OptimizationProblem[source]#

Bases: ABC

Abstract base class for optimization problems.

Derived optimization problem objects must implement the following methods:

  • fitness: Evaluate the objective function and constraints for a given set of decision variables.

  • get_bounds: Get the bounds for each decision variable.

  • get_nec: Return the number of equality constraints associated with the problem.

  • get_nic: Return the number of inequality constraints associated with the problem.

Additionally, specific problem classes can define the gradient method to compute the Jacobians. If this method is not present in the derived class, the solver will revert to using forward finite differences for Jacobian calculations.
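For illustration, here is a minimal sketch of a derived problem class (a hypothetical example; an unconstrained problem returns a one-element fitness vector containing only the objective):

import numpy as np
from pysolver_view.optimization import OptimizationProblem

class RosenbrockProblem(OptimizationProblem):
    """Hypothetical example: unconstrained 2D Rosenbrock problem."""

    def fitness(self, x):
        # Objective only; no equality or inequality constraints
        f = 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
        return np.array([f])

    def get_bounds(self):
        # ([lower bounds], [upper bounds]) in Pygmo format
        return ([-2.0, -2.0], [2.0, 2.0])

    def get_nec(self):
        return 0  # number of equality constraints

    def get_nic(self):
        return 0  # number of inequality constraints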

Parameters:
problem_scale : float, optional

Scaling factor for normalization. Default is 1.0.

variable_names : list of str, optional

Names of the decision variables. Used for reporting and debugging.

Methods

fitness(x)

Evaluate the objective function and constraints for a given set of decision variables.

get_bounds()

Get the bounds for each decision variable.

get_nec()

Return the number of equality constraints associated with the problem.

get_nic()

Return the number of inequality constraints associated with the problem.

clip_to_bounds(x_physical, logger=None)[source]#

Clip physical variable values to lie within specified bounds.

Parameters:
x_physical : array-like

Input vector in physical space.

logger : logging.Logger or None, optional

Logger for outputting warnings. If None, warnings are printed to standard output.

Returns:
np.ndarray

Clipped vector.
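For example, a small sketch (assuming the hypothetical RosenbrockProblem defined earlier, whose bounds are [-2, 2] for each variable):

problem = RosenbrockProblem()
x_clipped = problem.clip_to_bounds([3.0, 0.5])  # expected: array([2. , 0.5])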

abstract fitness(x)[source]#

Evaluate the objective function and constraints for given decision variables.

Parameters:
x : array-like

Vector of independent variables (i.e., degrees of freedom).

Returns:
array_like

Vector containing the objective function, equality constraints, and inequality constraints.

fitness_normalized_input(x_norm)[source]#

Evaluate fitness starting from normalized input.

Parameters:
x_norm : array-like

Normalized input vector.

Returns:
array_like

Output of fitness evaluated on the corresponding physical vector.

abstract get_bounds()[source]#

Get the bounds for each decision variable (Pygmo format).

Returns:
bounds : tuple of lists

A tuple of two items where the first item is the list of lower bounds and the second item is the list of upper bounds for the vector of decision variables. For example, ([-2, -1], [2, 1]) indicates that the first decision variable has bounds between -2 and 2, and the second has bounds between -1 and 1.

get_bounds_normalized()[source]#

Return normalized bounds in [0, problem_scale]. If no scaling is applied (i.e., problem_scale is None), the physical bounds are returned instead.

Returns:
tuple of lists

Tuple (lb, ub) in normalized space or the original physical bounds if problem_scale is None.

abstract get_nec()[source]#

Return the number of equality constraints associated with the problem.

Returns:
neq : int

Number of equality constraints.

abstract get_nic()[source]#

Return the number of inequality constraints associated with the problem.

Returns:
nineq : int

Number of inequality constraints.

gradient_normalized_input(x_norm)[source]#

Compute the gradient of the objective and constraints with respect to normalized variables.

This function applies the chain rule to convert the gradient computed in the physical space to the corresponding gradient in the normalized space. If problem_scale is set to None, the problem is considered unscaled and the gradient is returned directly.

Parameters:
x_norm : array-like

Normalized input vector (typically in [0, problem_scale]).

Returns:
np.ndarray

Gradient with respect to the normalized variables. The shape is:

  • (n,) for a scalar objective or flat constraint vectors.

  • (m, n) for vector-valued constraints with m rows and n design variables.

Notes

The chain rule is applied as follows:

x_phys = lb + (ub - lb) * (x_norm / problem_scale)

∇f(x_norm) = ∇f(x_phys) * d(x_phys)/d(x_norm)

where:

d(x_phys)/d(x_norm) = (ub - lb) / problem_scale [elementwise]

This correction ensures that exact user-defined gradients are consistent with the scaling applied during optimization.
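As a small numerical illustration of these relations (assumed values, using NumPy directly rather than the solver):

import numpy as np

lb, ub = np.array([-2.0, -1.0]), np.array([2.0, 1.0])
problem_scale = 10.0

x_norm = np.array([5.0, 2.5])                       # point in [0, problem_scale]
x_phys = lb + (ub - lb) * (x_norm / problem_scale)  # -> array([ 0. , -0.5])

# Elementwise derivative of the normalized-to-physical map:
dxphys_dxnorm = (ub - lb) / problem_scale           # -> array([0.4, 0.2])
# A physical-space gradient is converted as: grad_norm = grad_phys * dxphys_dxnorm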

scale_normalized_to_physical(x_norm)[source]#

Convert normalized values in the range [0, problem_scale] back to physical variable values. If self.problem_scale is None, no scaling is applied.

The method uses the bounds returned by self.get_bounds() and the internal self.problem_scale.

Parameters:
x_norm : array-like

Normalized values of the decision variables.

Returns:
np.ndarray

Physical values corresponding to the normalized input, or the original values if no scaling.

scale_physical_to_normalized(x_phys)[source]#

Convert physical design variable values to normalized values in the range [0, problem_scale]. If self.problem_scale is None, no scaling is applied.

The method uses the bounds returned by self.get_bounds() and the internal self.problem_scale. It automatically handles the case of fixed variables (i.e., upper bound = lower bound) by returning 0.0.

Parameters:
x_phys : array-like

Physical values of the decision variables.

Returns:
np.ndarray

Normalized values in the range [0, problem_scale] if scaling is applied, otherwise the original physical values.
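A round-trip sketch between the two spaces (assuming the hypothetical RosenbrockProblem defined earlier and the default problem_scale of 1.0):

problem = RosenbrockProblem()
x_norm = problem.scale_physical_to_normalized([0.0, 1.0])  # expected: array([0.5 , 0.75])
x_phys = problem.scale_normalized_to_physical(x_norm)      # expected: array([0., 1.])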

class pysolver_view.optimization.OptimizationSolver(problem, library='scipy', method='slsqp', tolerance=1e-06, max_iterations=100, extra_options={}, derivative_method='2-point', derivative_abs_step=None, problem_scale=None, print_convergence=True, plot_convergence=False, plot_scale_objective='linear', plot_scale_constraints='linear', logger=None, update_on='gradient', callback_functions=None, plot_improvement_only=False, tolerance_check_cache=None)[source]#

Bases: object

Solver class for general nonlinear programming problems.

The solver is designed to handle constrained optimization problems of the form:

Minimize:

\[f(\mathbf{x}) \; \mathrm{with} \; \mathbf{x} \in \mathbb{R}^n\]

Subject to:

\[c_{\mathrm{eq}}(\mathbf{x}) = 0\]
\[c_{\mathrm{in}}(\mathbf{x}) \leq 0\]
\[\mathbf{x}_l \leq \mathbf{x} \leq \mathbf{x}_u\]

where:

  • \(\mathbf{x}\) is the vector of decision variables (i.e., the degrees of freedom).

  • \(f(\mathbf{x})\) is the objective function to be minimized. Maximization problems can be cast into minimization problems by changing the sign of the objective function.

  • \(c_{\mathrm{eq}}(\mathbf{x})\) are the equality constraints of the problem.

  • \(c_{\mathrm{in}}(\mathbf{x})\) are the inequality constraints of the problem. Constraints of type \(c_{\mathrm{in}}(\mathbf{x}) \geq 0\) can be cast into the \(c_{\mathrm{in}}(\mathbf{x}) \leq 0\) form by changing the sign of the constraint functions.

  • \(\mathbf{x}_l\) and \(\mathbf{x}_u\) are the lower and upper bounds on the decision variables.

The class interfaces with various optimization methods provided by libraries such as scipy and pygmo to solve the problem and provides a structured framework for initialization, solution monitoring, and post-processing.

This class employs a caching mechanism to avoid redundant evaluations. For a given set of independent variables, x, the optimizer requires the objective function, equality constraints, and inequality constraints to be provided separately. When working with complex models, these values are typically calculated all at once. If x hasn’t changed from a previous evaluation, the caching system ensures that previously computed values are used, preventing unnecessary recalculations.
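A typical usage sketch (assuming the hypothetical RosenbrockProblem defined earlier; the keyword names follow the parameter list below):

from pysolver_view.optimization import OptimizationSolver

problem = RosenbrockProblem()
solver = OptimizationSolver(problem, library="scipy", method="slsqp")
x_opt = solver.solve([-1.5, 1.0])  # initial guess x0
solver.print_convergence_history()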

Parameters:
problem : OptimizationProblem

An instance of the optimization problem to be solved. The problem should be defined in physical space, with its own bounds and (optionally) analytic derivatives.

library : str, optional

The library to use for solving the optimization problem (default is ‘scipy’).

method : str, optional

The optimization method to use from the specified library (default is ‘slsqp’).

tolerance : float, optional

Tolerance for termination. The minimization algorithm sets solver-specific tolerances equal to this value (default is 1e-6).

max_iterations : int, optional

Maximum number of iterations for the optimizer (default is 100).

extra_options : dict, optional

A dictionary of solver-specific options that takes precedence over tolerance and max_iterations.

derivative_method : str, optional

Method to use for derivative calculation (default is ‘2-point’).

derivative_abs_step : float, optional

Finite difference absolute step size used when the problem Jacobian is not provided. The default depends on the derivative method.

problem_scale : float or None, optional

Scaling factor used to normalize the problem. This parameter controls the transformation of physical variables into a normalized domain. Specifically, for a physical variable x, with lower and upper bounds lb and ub, the normalized variable is computed as:

x_norm = problem_scale * (x - lb) / (ub - lb)

  • If a numeric value is provided (e.g. 1.0, 10.0, etc.), the problem is scaled accordingly. Increasing problem_scale reduces the relative step sizes taken in the normalized space during line searches, which can improve convergence by making the optimization less aggressive. The rationale for the scaling is that the initial line-search step size of many solvers is 1.0 (see the sketch after this parameter list).

  • If set to None, no scaling is applied and the problem is solved in its original physical units. This might be preferred if the problem is already well-conditioned or if the user wishes to preserve the exact scale of the original formulation.

print_convergence : bool, optional

If True, displays the convergence progress (default is True).

plot_convergence : bool, optional

If True, plots the convergence progress (default is False).

plot_scale_objective : str, optional

Specifies the scale of the objective function axis in the convergence plot (default is ‘linear’).

plot_scale_constraints : str, optional

Specifies the scale of the constraint violation axis in the convergence plot (default is ‘linear’).

logger : logging.Logger, optional

Logger object to which logging messages will be directed. Logging is disabled if logger is None.

update_on : str, optional

Specifies if the convergence report should be updated based on new function evaluations or gradient evaluations (default is ‘gradient’, alternative is ‘function’).

callback_functions : list of callable or callable, optional

Optional list of callback functions to pass to the solver.

plot_improvement_only : bool, optional

If True, the plots display only iterations that improve the objective function value, which is useful for gradient-free optimizers. Default is False.
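As referenced in the problem_scale description above, a hedged sketch of the two scaling modes:

solver_scaled = OptimizationSolver(problem, problem_scale=10.0)    # normalized to [0, 10]
solver_physical = OptimizationSolver(problem, problem_scale=None)  # original physical units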

Methods

solve(x0):

Solve the optimization problem using the specified initial guess x0.

fitness(x):

Evaluates the optimization problem objective function and constraints at a given point x.

gradient(x):

Evaluates the Jacobians of the optimization problem at a given point x.

print_convergence_history():

Print the final result and convergence history of the optimization problem.

plot_convergence_history():

Plot the convergence history of the optimization problem.

evaluate_kkt_conditions(x_norm, tol)[source]#

Evaluate the raw quantities required for KKT condition checks.

This method performs all necessary calculations to evaluate:
  • Lagrangian gradient

  • Constraint violations (equality and inequality)

  • Lagrange multipliers for active constraints

  • Complementary slackness products

It does not apply any tolerance threshold; that is handled separately.

Returns:
dict

A dictionary containing raw values needed to assess the KKT conditions.

fitness(x_norm, called_from_grad=False)[source]#

Evaluates the optimization problem values at a given point x.

This method queries the fitness method of the OptimizationProblem class to compute the objective function value and constraint values. It first checks the cache to avoid redundant evaluations. If no matching cached result exists, it proceeds to evaluate the objective function and constraints.

Parameters:
x_norm : array-like

Vector of independent variables (i.e., degrees of freedom).

called_from_grad : bool, optional

Flag used to indicate if the method is called during gradient evaluation. This helps in preventing redundant increments in evaluation counts during finite-differences gradient calculations. Default is False.

Returns:
fitness : numpy.ndarray

A 1D array containing the objective function, equality constraints, and inequality constraints at x.

get_constraint_data(x_norm, tol)[source]#
Return a list of dicts with keys name, type ('=', '<'), target, value, and satisfied for all equality and inequality constraints. If self.constraint_data already exists, validate and return it.

gradient(x_norm)[source]#

Evaluates the Jacobian matrix of the optimization problem at the given point x.

This method utilizes the gradient method of the OptimizationProblem class if implemented. If the gradient method is not implemented, the Jacobian is approximated using forward finite differences.

To prevent redundant calculations, cached results are checked first. If a matching cached result is found, it is returned; otherwise, a fresh calculation is performed.

Parameters:
x_norm : array-like

Vector of independent variables (i.e., degrees of freedom).

Returns:
numpy.ndarray

A 2D array representing the Jacobian matrix of the optimization problem at x_norm. The Jacobian matrix includes:

  • Gradient of the objective function

  • Jacobian of equality constraints

  • Jacobian of inequality constraints

make_constraint_report(x_norm, tol)[source]#

Generate a formatted constraint report at the given point, using get_constraint_data to build and validate the entries.

make_kkt_optimality_report(x_norm, tol)[source]#

Generate a detailed KKT condition satisfaction report (80-character width).

This report includes five key KKT checks:

  • First-order optimality: ∥∇L(x, λ)∥ ≤ tol

  • Equality feasibility: max |c_eq(x)| ≤ tol

  • Inequality feasibility: max(0, c_ineq(x)) ≤ tol

  • Dual feasibility: min(λ_ineq) ≥ 0

  • Complementary slackness: max |λ_i * c_i| ≤ tol

For each condition, the report shows:

  • Actual computed value

  • Comparison direction and target (tolerance or 0)

  • Satisfaction status

Parameters:
x_norm : array-like

Normalized decision variable vector to evaluate the KKT conditions at.

tol : float

Numerical tolerance used for comparisons in optimality and feasibility checks.

Returns:
str

A formatted report string summarizing KKT condition satisfaction.

make_lagrange_multipliers_report(x_norm, tol)[source]#

Generate a report of all Lagrange multipliers:

  • Equalities: always included

  • Inequalities and bounds: show the value if active, 'inactive' otherwise

Returns:
str

The formatted multipliers report as a single string.

make_variables_report(x_norm, normalized=True)[source]#

Generate design variable report as a string.

Parameters:
x_norm : array-like

Normalized design variables (input to the solver).

normalized : bool, optional

Whether to report in normalized or physical values. Default is True.

Returns:
str

Formatted string report.

plot_convergence_history(savefile=False, filename=None, output_dir='output', showfig=True)[source]#

Plot the convergence history of the problem.

This method plots the optimization progress against the number of iterations:
  • Objective function value (left y-axis)

  • Maximum constraint violation (right y-axis)

The constraint violation is displayed only if the problem has nonlinear constraints.

This method should be called only after the optimization problem has been solved, as it relies on data generated by the solving process.

Parameters:
savefile : bool, optional

If True, the plot is saved to a file instead of being displayed. Default is False.

filename : str, optional

The name of the file to save the plot to. If not specified, the filename is automatically generated using the problem name and the start datetime. The file extension is not required.

output_dir : str, optional

The directory where the plot file will be saved if savefile is True. Default is “output”.

Returns:
matplotlib.figure.Figure

The Matplotlib figure object for the plot. This can be used for further customization or display.

Raises:
ValueError

If this method is called before the problem has been solved.

print_convergence_history(savefile=False, filename=None, output_dir='output', to_console=True)[source]#

Print or save the convergence history of the optimization process.

This function prints (or saves) a report of the optimization convergence progress. It includes information collected at each iteration:

  • Number of gradient evaluations

  • Number of function evaluations

  • Objective function value

  • Maximum constraint violation

  • Two-norm of the update step

It also includes a summary at the end of the run with:

  • Exit message

  • Success flag

  • Total solution time (in seconds)

Note

This method must be called after solve() has been executed. Otherwise, the convergence report is unavailable and a ValueError is raised.

Parameters:
savefile : bool, optional

If True, the report is saved to a file. Otherwise, it is printed to the screen. Default is False.

filename : str or None, optional

The name of the file to save the report to. If None, a default name is generated based on the problem class name and the optimization start datetime.

output_dir : str, optional

Directory where the report file is saved if savefile=True. Default is “output”.

Raises:
ValueError

If the method is called before the optimization problem has been solved.

print_optimization_report(x=None, tol=None, include_design_variables=True, include_constraints=True, include_kkt_conditions=False, include_multipliers=False, savefile=False, filename=None, output_dir='output', to_console=True)[source]#

Generate and print or save a complete optimization report with customizable content.

This method assembles a detailed summary of the optimization process and results, allowing fine-grained control over which components to include. It supports outputting the report to the console or saving it to a file.

The report may include the following sections:

  • Convergence history: the number of function and gradient evaluations, the objective function value, the maximum constraint violation, and the two-norm of the update step at each iteration, together with the exit message, success status, and total execution time.

  • Design variables: Final values, shown in physical units and (if applicable) normalized space.

  • Constraints: Numerical values and satisfaction status of all constraints.

  • KKT conditions: Checks of the Karush-Kuhn-Tucker optimality conditions.

  • Lagrange multipliers: Values of multipliers for equality, inequality, and bound constraints.

This method is intended to be called after the solve() method has been completed.
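For example, a hedged usage sketch (assuming a solver on which solve() has already been called; the keyword names are taken from the signature above):

solver.print_optimization_report(
    include_kkt_conditions=True,
    include_multipliers=True,
    tol=1e-6,
)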

Parameters:
x : array-like or None, optional
tol : float or None, optional (tolerance for constraint and KKT checks)
include_design_variables : bool, optional
include_constraints : bool, optional
include_kkt_conditions : bool, optional
include_multipliers : bool, optional
savefile : bool, optional
filename : str or None, optional
output_dir : str, optional (directory for saving the file)
to_console : bool, optional

solve(x0)[source]#

Solve the optimization problem using the specified library and solver.

This method initializes the optimization process, manages the flow of the optimization, and handles the results, utilizing the solver from a specified library such as scipy or pygmo.

Parameters:
x0 : array-like

Initial guess for the solution of the optimization problem.

Returns:
x_finalarray-like

An array with the optimal vector of design variables.

pysolver_view.optimization.combine_objective_and_constraints(f, c_eq=None, c_ineq=None)[source]#

Combine an objective function with its associated equality and inequality constraints.

This function takes an objective function value, a set of equality constraints, and a set of inequality constraints, and returns them combined into a single NumPy array. The constraints can be given as a list, tuple, NumPy array, or as individual values.
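A typical use is inside a problem's fitness method; a hedged sketch with hypothetical constraint values:

from pysolver_view.optimization import combine_objective_and_constraints

def fitness(x):
    f = x[0] ** 2 + x[1] ** 2          # objective
    c_eq = x[0] + x[1] - 1.0           # enforced as c_eq = 0
    c_ineq = [x[0] - 0.8, x[1] - 0.8]  # enforced as c_ineq <= 0
    return combine_objective_and_constraints(f, c_eq, c_ineq)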

Parameters:
f : float

The value of the objective function.

c_eq : float, list, tuple, np.ndarray, or None

The equality constraint(s). This can be a single value or a collection of values. If None, no equality constraints will be added.

c_ineq : float, list, tuple, np.ndarray, or None

The inequality constraint(s). This can be a single value or a collection of values. If None, no inequality constraints will be added.

Returns:
np.ndarray

A numpy array consisting of the objective function value followed by equality and inequality constraints.

Examples

>>> combine_objective_and_constraints(1.0, [0.5, 0.6], [0.7, 0.8])
array([1. , 0.5, 0.6, 0.7, 0.8])
>>> combine_objective_and_constraints(1.0, 0.5, 0.7)
array([1. , 0.5, 0.7])

pysolver_view.optimization.count_constraints(var)[source]#

Retrieve the number of constraints based on the provided input.

This function returns the count of constraints based on the nature of the input:

  • None returns 0

  • Scalar values return 1

  • Array-like structures return their length

Parameters:
var : None, scalar, or array-like (list, tuple, np.ndarray)

The input representing the constraint(s). This can be None, a scalar value, or an array-like structure containing multiple constraints.

Returns:
int

The number of constraints:

  • 0 for None

  • 1 for scalar values

  • Length of the array-like for array-like inputs

Examples

>>> count_constraints(None)
0
>>> count_constraints(5.0)
1
>>> count_constraints([1.0, 2.0, 3.0])
3