turboflow.pysolver_view.optimization module
- class turboflow.pysolver_view.optimization.OptimizationProblem[source]
Bases: ABC
Abstract base class for optimization problems.
Derived optimization problem objects must implement the following methods:
fitness: Evaluate the objective function and constraints for a given set of decision variables.
get_bounds: Get the bounds for each decision variable.
get_neq: Return the number of equality constraints associated with the problem.
get_nineq: Return the number of inequality constraints associated with the problem.
Additionally, specific problem classes can define the gradient method to compute the Jacobians. If this method is not present in the derived class, the solver will revert to using forward finite differences for Jacobian calculations.
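As an illustration, a minimal sketch of a derived problem class is given below. The problem itself (a Rosenbrock-style objective with one equality and one inequality constraint) and the class name are illustrative, not part of the library:

from turboflow.pysolver_view.optimization import (
    OptimizationProblem,
    combine_objective_and_constraints,
)

class ConstrainedRosenbrock(OptimizationProblem):
    """Illustrative problem: Rosenbrock objective with two constraints."""

    def fitness(self, x):
        f = 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2  # objective
        c_eq = x[0] ** 2 + x[1] ** 2 - 1.0  # equality constraint: points on the unit circle
        c_ineq = x[0] - 0.5  # inequality constraint: x[0] - 0.5 <= 0
        return combine_objective_and_constraints(f, c_eq, c_ineq)

    def get_bounds(self):
        return ([-2.0, -2.0], [2.0, 2.0])  # (lower bounds, upper bounds)

    def get_neq(self):
        return 1  # one equality constraint

    def get_nineq(self):
        return 1  # one inequality constraint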
Methods
fitness(x)
Evaluate the objective function and constraints for a given set of decision variables.
get_bounds()
Get the bounds for each decision variable.
get_neq()
Return the number of equality constraints associated with the problem.
get_nineq()
Return the number of inequality constraints associated with the problem.
- abstract fitness(x)[source]
Evaluate the objective function and constraints for given decision variables.
- Parameters:
- x : array-like
Vector of independent variables (i.e., degrees of freedom).
- Returns:
- array_like
Vector containing the objective function, equality constraints, and inequality constraints.
- abstract get_bounds()[source]
Get the bounds for each decision variable (Pygmo format).
- Returns:
- bounds : tuple of lists
A tuple of two items where the first item is the list of lower bounds and the second item is the list of upper bounds for the vector of decision variables. For example, ([-2, -1], [2, 1]) indicates that the first decision variable has bounds between -2 and 2, and the second has bounds between -1 and 1.
- class turboflow.pysolver_view.optimization.OptimizationSolver(problem, library='scipy', method='slsqp', options={}, derivative_method='2-point', derivative_abs_step=None, print_convergence=True, plot_convergence=False, plot_scale_objective='linear', plot_scale_constraints='linear', logger=None, update_on='gradient', callback_functions=None, plot_improvement_only=False)[source]
Bases: object
Solver class for general nonlinear programming problems.
The solver is designed to handle constrained optimization problems of the form:
Minimize:
\[f(\mathbf{x}) \; \mathrm{with} \; \mathbf{x} \in \mathbb{R}^n\]
Subject to:
\[c_{\mathrm{eq}}(\mathbf{x}) = 0\]
\[c_{\mathrm{in}}(\mathbf{x}) \leq 0\]
\[\mathbf{x}_l \leq \mathbf{x} \leq \mathbf{x}_u\]
where:
\(\mathbf{x}\) is the vector of decision variables (i.e., degrees of freedom).
\(f(\mathbf{x})\) is the objective function to be minimized. Maximization problems can be recast as minimization problems by changing the sign of the objective function.
\(c_{\mathrm{eq}}(\mathbf{x})\) are the equality constraints of the problem.
\(c_{\mathrm{in}}(\mathbf{x})\) are the inequality constraints of the problem. Constraints of the type \(c_{\mathrm{in}}(\mathbf{x}) \geq 0\) can be recast into the \(c_{\mathrm{in}}(\mathbf{x}) \leq 0\) form by changing the sign of the constraint functions.
\(\mathbf{x}_l\) and \(\mathbf{x}_u\) are the lower and upper bounds on the decision variables.
The class interfaces with various optimization methods provided by libraries such as scipy and pygmo to solve the problem and provides a structured framework for initialization, solution monitoring, and post-processing.
This class employs a caching mechanism to avoid redundant evaluations. For a given set of independent variables, x, the optimizer requires the objective function, equality constraints, and inequality constraints to be provided separately. When working with complex models, these values are typically calculated all at once. If x hasn’t changed from a previous evaluation, the caching system ensures that previously computed values are used, preventing unnecessary recalculations.
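The snippet below sketches this caching idea in isolation. It is a simplified illustration, not the class's internal code, and the CachedEvaluator name is hypothetical:

import numpy as np

class CachedEvaluator:
    """Hypothetical sketch: reuse the last model evaluation when x is unchanged."""

    def __init__(self, model):
        self.model = model  # expensive callable returning objective and constraints together
        self._x_last = None
        self._output_last = None

    def evaluate(self, x):
        x = np.asarray(x, dtype=float)
        # Recompute only if x differs from the previously evaluated point
        if self._x_last is None or not np.array_equal(x, self._x_last):
            self._output_last = self.model(x)
            self._x_last = x.copy()
        return self._output_last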
- Parameters:
- problem : OptimizationProblem
An instance of the optimization problem to be solved.
- library : str, optional
The library to use for solving the optimization problem (default is ‘scipy’).
- method : str, optional
The optimization method to use from the specified library (default is ‘slsqp’).
- tol : float, optional
Tolerance for termination. The selected minimization algorithm sets its relevant solver-specific tolerance(s) equal to tol. The termination tolerances can be fine-tuned through the options dictionary (default is 1e-5).
- max_iter : int, optional
Maximum number of iterations for the optimizer (default is 100).
- options : dict, optional
A dictionary of solver-specific options that takes precedence over ‘tol’ and ‘max_iter’.
- derivative_method : str, optional
Method to use for derivative calculation (default is ‘2-point’).
- derivative_abs_step : float, optional
Finite-difference absolute step size used when the problem Jacobian is not provided (default is 1e-6).
- print_convergence : bool, optional
If True, displays the convergence progress (default is True).
- plot_convergence : bool, optional
If True, plots the convergence progress (default is False).
- plot_scale_objective : str, optional
Specifies the scale of the objective function axis in the convergence plot (default is ‘linear’).
- plot_scale_constraints : str, optional
Specifies the scale of the constraint violation axis in the convergence plot (default is ‘linear’).
- logger : logging.Logger, optional
Logger object to which logging messages will be directed. Logging is disabled if logger is None.
- update_on : str, optional
Specifies whether the convergence report should be updated on new function evaluations or gradient evaluations (default is ‘gradient’; alternative is ‘function’).
- callback_functions : list of callable or callable, optional
Optional list of callback functions to pass to the solver.
- plot_improvement_only : bool, optional
If True, the convergence plot only displays iterations that improve the objective function value (useful for gradient-free optimizers) (default is False).
Methods
solve(x0):
Solve the optimization problem using the specified initial guess x0.
fitness(x):
Evaluates the optimization problem objective function and constraints at a given point x.
gradient(x):
Evaluates the Jacobians of the optimization problem at a given point x.
print_convergence_history():
Print the final result and convergence history of the optimization problem.
plot_convergence_history():
Plot the convergence history of the optimization problem.
- fitness(x, called_from_grad=False)[source]
Evaluates the optimization problem values at a given point x.
This method queries the fitness method of the OptimizationProblem class to compute the objective function value and constraint values. It first checks the cache to avoid redundant evaluations. If no matching cached result exists, it proceeds to evaluate the objective function and constraints.
- Parameters:
- x : array-like
Vector of independent variables (i.e., degrees of freedom).
- called_from_grad : bool, optional
Flag used to indicate if the method is called during gradient evaluation. This helps in preventing redundant increments in evaluation counts during finite-differences gradient calculations. Default is False.
- Returns:
- fitness : numpy.ndarray
A 1D array containing the objective function, equality constraints, and inequality constraints at x.
- gradient(x)[source]
Evaluates the Jacobian matrix of the optimization problem at the given point x.
This method utilizes the gradient method of the OptimizationProblem class if implemented. If the gradient method is not implemented, the Jacobian is approximated using forward finite differences.
To prevent redundant calculations, cached results are checked first. If a matching cached result is found, it is returned; otherwise, a fresh calculation is performed.
- Parameters:
- x : array-like
Vector of independent variables (i.e., degrees of freedom).
- Returns:
- numpy.ndarray
A 2D array representing the Jacobian matrix of the optimization problem at x. The Jacobian matrix includes:
Gradient of the objective function
Jacobian of equality constraints
Jacobian of inequality constraints
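As an illustration of the forward finite-difference fallback described above, a minimal stand-alone sketch is given below. The function name is illustrative and the solver's internal implementation may differ:

import numpy as np

def forward_difference_jacobian(fitness, x, abs_step=1e-6):
    # Approximate the Jacobian of a vector-valued fitness function
    # using 2-point (forward) finite differences.
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(fitness(x))  # baseline: [objective, c_eq..., c_ineq...]
    jac = np.zeros((f0.size, x.size))
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += abs_step  # perturb one decision variable at a time
        jac[:, i] = (np.atleast_1d(fitness(x_pert)) - f0) / abs_step
    return jac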
- plot_convergence_history(savefile=False, filename=None, output_dir='output')[source]
Plot the convergence history of the problem.
- This method plots the optimization progress against the number of iterations:
Objective function value (left y-axis)
Maximum constraint violation (right y-axis)
The constraint violation is only displayed if the problem has nonlinear constraints.
This method should be called only after the optimization problem has been solved, as it relies on data generated by the solving process.
- Parameters:
- savefile : bool, optional
If True, the plot is saved to a file instead of being displayed. Default is False.
- filename : str, optional
The name of the file to save the plot to. If not specified, the filename is automatically generated using the problem name and the start datetime. The file extension is not required.
- output_dir : str, optional
The directory where the plot file will be saved if savefile is True. Default is “output”.
- Returns:
- matplotlib.figure.Figure
The Matplotlib figure object for the plot. This can be used for further customization or display.
- Raises:
- ValueError
If this method is called before the problem has been solved.
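A hypothetical usage example, assuming a solver instance whose solve method has already completed:

>>> fig = solver.plot_convergence_history(savefile=True, filename="convergence", output_dir="output")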
- print_convergence_history(savefile=False, filename=None, output_dir='output')[source]
Print the convergence history of the problem.
- The convergence history includes:
Number of function evaluations
Number of gradient evaluations
Objective function value
Maximum constraint violation
Two-norm of the update step
- The method provides a detailed report on:
Exit message
Success status
Execution time
This method should be called only after the optimization problem has been solved, as it relies on data generated by the solving process.
- Parameters:
- savefile : bool, optional
If True, the convergence history will be saved to a file, otherwise printed to standard output. Default is False.
- filename : str, optional
The name of the file to save the convergence history. If not specified, the filename is automatically generated using the problem name and the start datetime. The file extension is not required.
- output_dir : str, optional
The directory where the report file will be saved if savefile is True. Default is “output”.
- Raises:
- ValueError
If this method is called before the problem has been solved.
- solve(x0)[source]
Solve the optimization problem using the specified library and solver.
This method initializes the optimization process, manages the flow of the optimization, and handles the results, utilizing the solver from a specified library such as scipy or pygmo.
- Parameters:
- x0 : array-like
Initial guess for the solution of the optimization problem.
- Returns:
- x_final : array-like
An array with the optimal vector of design variables.
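An end-to-end sketch, reusing the illustrative ConstrainedRosenbrock problem defined earlier (the initial guess and solver settings are arbitrary):

import numpy as np
from turboflow.pysolver_view.optimization import OptimizationSolver

problem = ConstrainedRosenbrock()  # illustrative problem sketched above
solver = OptimizationSolver(problem, library="scipy", method="slsqp")
x_final = solver.solve(x0=np.array([0.7, 0.7]))  # initial guess
solver.print_convergence_history()  # evaluations, objective, constraint violation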
- turboflow.pysolver_view.optimization.combine_objective_and_constraints(f, c_eq=None, c_ineq=None)[source]
Combine an objective function with its associated equality and inequality constraints.
This function takes an objective function value, a set of equality constraints, and a set of inequality constraints, and returns a combined NumPy array of these values. The constraints can be given as a list, tuple, NumPy array, or as individual values.
- Parameters:
- f : float
The value of the objective function.
- c_eq : float, list, tuple, np.ndarray, or None
The equality constraint(s). This can be a single value or a collection of values. If None, no equality constraints will be added.
- c_ineq : float, list, tuple, np.ndarray, or None
The inequality constraint(s). This can be a single value or a collection of values. If None, no inequality constraints will be added.
- Returns:
- np.ndarray
A numpy array consisting of the objective function value followed by equality and inequality constraints.
Examples
>>> combine_objective_and_constraints(1.0, [0.5, 0.6], [0.7, 0.8])
array([1. , 0.5, 0.6, 0.7, 0.8])
>>> combine_objective_and_constraints(1.0, 0.5, 0.7)
array([1. , 0.5, 0.7])
- turboflow.pysolver_view.optimization.count_constraints(var)[source]
Retrieve the number of constraints based on the provided input.
This function returns the count of constraints based on the nature of the input:
None returns 0
Scalar values return 1
Array-like structures return their length
- Parameters:
- var : None, scalar, or array-like (list, tuple, np.ndarray)
The input representing the constraint(s). This can be None, a scalar value, or an array-like structure containing multiple constraints.
- Returns:
- int
The number of constraints:
0 for None
1 for scalar values
Length of the array-like for array-like inputs
Examples
>>> count_constraints(None)
0
>>> count_constraints(5.0)
1
>>> count_constraints([1.0, 2.0, 3.0])
3