simsopt.objectives package¶
Submodules¶
simsopt.objectives.functions module¶
This module provides a few minimal optimizable objects, each representing a function. These functions are mostly used for testing.
- class simsopt.objectives.functions.Adder(n=3)¶
Bases: simsopt._core.optimizable.Optimizable
This class defines a minimal object that can be optimized. It has n degrees of freedom and a function that returns their sum. This class is used for testing.
- J()¶
Returns the sum of the degrees of freedom.
- dJ()¶
- property df¶
Same as the function dJ(), but a property instead of a function.
- property f¶
Same as the function J(), but a property instead of a function.
- get_dofs()¶
Returns the degrees of freedom of this object as an array.
- set_dofs(xin)¶
Sets the degrees of freedom of this object to the values in xin.
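Adder's behavior can be illustrated with a minimal standalone sketch (the class below is hypothetical and does not reproduce the actual simsopt Optimizable machinery):

```python
class AdderSketch:
    """Illustrative stand-in for simsopt.objectives.functions.Adder."""

    def __init__(self, n=3):
        # The n degrees of freedom, initialized to zero.
        self.x = [0.0] * n

    def J(self):
        # The objective is simply the sum of the degrees of freedom.
        return sum(self.x)

    def dJ(self):
        # The gradient of a sum is a vector of ones, one entry per dof.
        return [1.0] * len(self.x)

a = AdderSketch(n=3)
a.x = [1.0, 2.0, 3.0]
print(a.J())   # 6.0
print(a.dJ())  # [1.0, 1.0, 1.0]
```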
- class simsopt.objectives.functions.Affine(nparams, nvals)¶
Bases: simsopt._core.optimizable.Optimizable
This class represents a random affine (i.e. linear plus constant) transformation from R^n to R^m.
- J()¶
- __init__(nparams, nvals)¶
nparams = number of independent variables. nvals = number of dependent variables.
- dJ()¶
- get_dofs()¶
Returns the degrees of freedom of this object as an array.
- set_dofs(x)¶
Sets the degrees of freedom of this object to the values in x.
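The idea of a random affine transformation can be sketched in pure Python (names here are illustrative; simsopt's implementation uses numpy internally):

```python
import random

class AffineSketch:
    """Illustrative stand-in for Affine: a random affine map
    from R^nparams to R^nvals, J(x) = A @ x + B."""

    def __init__(self, nparams, nvals, seed=0):
        rng = random.Random(seed)
        # Random matrix A (nvals x nparams) and constant vector B.
        self.A = [[rng.gauss(0, 1) for _ in range(nparams)] for _ in range(nvals)]
        self.B = [rng.gauss(0, 1) for _ in range(nvals)]
        # The independent variables, initialized to zero.
        self.x = [0.0] * nparams

    def J(self):
        # Matrix-vector product plus the constant term.
        return [sum(a_ij * x_j for a_ij, x_j in zip(row, self.x)) + b_i
                for row, b_i in zip(self.A, self.B)]

aff = AffineSketch(nparams=2, nvals=3)
print(len(aff.J()))  # 3; with x = 0 the output equals the constant part B
```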
- class simsopt.objectives.functions.Beale¶
Bases: simsopt._core.optimizable.Optimizable
This is a test function which does not supply derivatives. It is taken from https://en.wikipedia.org/wiki/Test_functions_for_optimization
- J()¶
- get_dofs()¶
Returns the degrees of freedom of this object as an array.
- set_dofs(x)¶
Sets the degrees of freedom of this object to the values in x.
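For reference, the standard Beale function from the Wikipedia page cited above is (shown here as a plain function, independent of the simsopt API):

```python
def beale(x, y):
    """The Beale test function. Its global minimum is f(3, 0.5) = 0."""
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)

print(beale(3.0, 0.5))  # 0.0
```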
- class simsopt.objectives.functions.Failer(nparams: int = 2, nvals: int = 3, fail_index: int = 2)¶
Bases: simsopt._core.optimizable.Optimizable
This class is used for testing failures of the objective function. The objective always returns a vector with all entries equal to 1.0, except that ObjectiveFailure is raised on a specified evaluation.
- Parameters
nparams – Number of input values.
nvals – Number of entries in the return vector.
fail_index – Which function evaluation to fail on.
- J()¶
- get_dofs()¶
Returns the degrees of freedom of this object as an array.
- set_dofs(x)¶
Sets the degrees of freedom of this object to the values in x.
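The fail_index mechanism can be sketched as a simple evaluation counter (the class and exception names below are illustrative stand-ins, not the simsopt API):

```python
class ObjectiveFailureSketch(Exception):
    """Stand-in for simsopt's ObjectiveFailure exception."""

class FailerSketch:
    """Illustrative stand-in for Failer: returns a vector of ones,
    except that the fail_index-th evaluation raises."""

    def __init__(self, nvals=3, fail_index=2):
        self.nvals = nvals
        self.fail_index = fail_index
        self.nevals = 0  # how many times J() has been called

    def J(self):
        self.nevals += 1
        if self.nevals == self.fail_index:
            raise ObjectiveFailureSketch("planned failure")
        return [1.0] * self.nvals

f = FailerSketch(nvals=3, fail_index=2)
print(f.J())  # first evaluation succeeds: [1.0, 1.0, 1.0]
try:
    f.J()     # second evaluation raises, since fail_index=2
except ObjectiveFailureSketch:
    print("evaluation 2 failed as requested")
```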
- class simsopt.objectives.functions.Identity(x=0.0)¶
Bases: simsopt._core.optimizable.Optimizable
This class represents a term in an objective function which is just the identity. It has one degree of freedom, and the output of the function is equal to this degree of freedom.
- J()¶
- dJ()¶
- property df¶
Same as the function dJ(), but a property instead of a function.
- property f¶
Same as the function J(), but a property instead of a function.
- get_dofs()¶
Returns the degrees of freedom of this object as an array.
- set_dofs(xin)¶
Sets the degrees of freedom of this object to the values in xin.
- class simsopt.objectives.functions.Rosenbrock(b=100.0, x=0.0, y=0.0)¶
Bases: simsopt._core.optimizable.Optimizable
This class defines a minimal object that can be optimized.
- dterm1()¶
Returns the gradient of term1
- property dterm1prop¶
Same as dterm1, but a property rather than a callable function.
- dterm2()¶
Returns the gradient of term2
- property dterm2prop¶
Same as dterm2, but a property rather than a callable function.
- dterms()¶
Returns the 2x2 Jacobian for term1 and term2.
- f()¶
Returns the total function, squaring and summing the two terms.
- get_dofs()¶
Returns the degrees of freedom of this object as an array.
- set_dofs(xin)¶
Sets the degrees of freedom of this object to the values in xin.
- term1()¶
Returns the first of the two quantities that is squared and summed.
- property term1prop¶
Same as term1, but a property rather than a callable function.
- term2()¶
Returns the second of the two quantities that is squared and summed.
- property term2prop¶
Same as term2, but a property rather than a callable function.
- terms()¶
Returns term1 and term2 together as a 2-element numpy vector.
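The term structure can be sketched as follows. This is one decomposition consistent with the Rosenbrock form f = (1-x)^2 + b(y-x^2)^2 used by this class; the exact sign conventions in simsopt's term1/term2 may differ, so treat the functions below as illustrative:

```python
import math

b = 100.0  # the standard Rosenbrock b parameter

def term1(x, y):
    # First quantity to be squared and summed; its square gives (x - 1)^2.
    return x - 1.0

def term2(x, y):
    # Second quantity, scaled so that its square gives b*(y - x^2)^2.
    return math.sqrt(b) * (y - x * x)

def f(x, y):
    # Total objective: square and sum the two terms.
    return term1(x, y) ** 2 + term2(x, y) ** 2

print(f(1.0, 1.0))  # 0.0 at the global minimum (1, 1)
print(f(0.0, 0.0))  # 1.0
```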
- class simsopt.objectives.functions.RosenbrockWithFailures(*args, fail_interval=8, **kwargs)¶
Bases: simsopt.objectives.functions.Rosenbrock
This class is similar to the Rosenbrock class, except that it fails (raising ObjectiveFailure) at regular intervals. This is useful for testing that the simsopt infrastructure handles failures in the expected way.
- term1()¶
Returns the first of the two quantities that is squared and summed.
- class simsopt.objectives.functions.TestObject1(val)¶
Bases: simsopt._core.optimizable.Optimizable
This is an optimizable object used for testing. It depends on two sub-objects, both of type Adder.
- J()¶
- dJ()¶
- property df¶
Same as dJ() but a property instead of a function.
- property f¶
Same as J() but a property instead of a function.
- get_dofs()¶
Returns the degrees of freedom of this object as an array.
- set_dofs(x)¶
Sets the degrees of freedom of this object to the values in x.
- class simsopt.objectives.functions.TestObject2(val1, val2)¶
Bases: simsopt._core.optimizable.Optimizable
This is an optimizable object used for testing. It depends on two sub-objects, both of type Adder.
- J()¶
- dJ()¶
- property df¶
Same as dJ() but a property instead of a function.
- property f¶
Same as J() but a property instead of a function.
- get_dofs()¶
Returns the degrees of freedom of this object as an array.
- set_dofs(x)¶
Sets the degrees of freedom of this object to the values in x.
simsopt.objectives.graph_functions module¶
This module provides a few minimal optimizable objects, each representing a function. These functions are mostly used for testing.
- class simsopt.objectives.graph_functions.Adder(n=3, x0=None, dof_names=None)¶
Bases: simsopt._core.graph_optimizable.Optimizable
Defines a minimal graph-based Optimizable object that can be optimized. It has n degrees of freedom.
The method sum returns the sum of these dofs. The call hook internally calls the sum method.
- Parameters
n – Number of degrees of freedom (DOFs).
x0 – Initial values of the DOFs. If not given, the DOFs are initialized to zeros.
dof_names – Identifiers for the DOFs.
- dJ()¶
- property df¶
Same as the function dJ(), but a property instead of a function.
- return_fn_map: Dict[str, Callable] = {'sum': <function Adder.sum>}¶
- sum()¶
Sums the DOFs
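The call-hook pattern mentioned above can be sketched as follows (a hypothetical stand-in class; the real graph framework handles registration and parent tracking automatically):

```python
class GraphAdderSketch:
    """Illustrative sketch of the graph-framework call hook: calling
    the instance dispatches to the return function registered as 'sum'."""

    def __init__(self, n=3, x0=None, dof_names=None):
        # n degrees of freedom, optionally initialized from x0.
        self.x = list(x0) if x0 is not None else [0.0] * n
        self.dof_names = dof_names

    def sum(self):
        # Sums the DOFs.
        return sum(self.x)

    def __call__(self):
        # The call hook looks up the registered return function and invokes it.
        return self.return_fn_map["sum"](self)

# Mirrors the documented attribute: return_fn_map = {'sum': <function Adder.sum>}
GraphAdderSketch.return_fn_map = {"sum": GraphAdderSketch.sum}

adder = GraphAdderSketch(n=3, x0=[1.0, 2.0, 3.0])
print(adder() == adder.sum())  # True; both return 6.0
```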
- class simsopt.objectives.graph_functions.Affine(nparams, nvals)¶
Bases: simsopt._core.graph_optimizable.Optimizable
Implements a random affine (i.e. linear plus constant) transformation from R^n to R^m. The n inputs to the transformation are initially set to zero.
- Parameters
nparams – number of independent variables.
nvals – number of dependent variables.
- dJ()¶
- f()¶
- return_fn_map: Dict[str, Callable] = {'f': <function Affine.f>}¶
- class simsopt.objectives.graph_functions.Identity(x: numbers.Real = 0.0, dof_name: Optional[str] = None, dof_fixed: bool = False)¶
Bases: simsopt._core.graph_optimizable.Optimizable
Represents a term in an objective function which is just the identity. It has one degree of freedom and conforms to the experimental graph-based Optimizable framework.
The output of the method f is equal to this degree of freedom. The call hook internally calls the method f. It does not have any parent Optimizable nodes.
- Parameters
x – Value of the DOF
dof_name – Identifier for the DOF
dof_fixed – To specify if the dof is fixed
- dJ(x: Optional[Union[Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]]] = None)¶
- f()¶
Returns the value of the DOF
- return_fn_map: Dict[str, Callable] = {'f': <function Identity.f>}¶
- class simsopt.objectives.graph_functions.Rosenbrock(b=100.0, x=0.0, y=0.0)¶
Bases: simsopt._core.graph_optimizable.Optimizable
Implements the Rosenbrock function using the graph-based optimization framework. The Rosenbrock function is defined as
\[f(x,y) = (a-x)^2 + b(y-x^2)^2\]
The parameter a is fixed to 1; the parameter b can be given as input.
- Parameters
b – The b parameter of Rosenbrock function
x – x coordinate
y – y coordinate
- property dterm1¶
Returns the gradient of term1
- property dterm2¶
Returns the gradient of term2
- dterms()¶
Returns the 2x2 Jacobian for term1 and term2.
- f(x=None)¶
Returns the total function, squaring and summing the two terms.
- return_fn_map: Dict[str, Callable] = {'f': <function Rosenbrock.f>}¶
- property term1¶
Returns the first of the two quantities that is squared and summed.
- property term2¶
Returns the second of the two quantities that is squared and summed.
- property terms¶
Returns term1 and term2 together as a 2-element numpy vector.
- class simsopt.objectives.graph_functions.TestObject1(val: numbers.Real, opts: Optional[Sequence[simsopt._core.graph_optimizable.Optimizable]] = None)¶
Bases: simsopt._core.graph_optimizable.Optimizable
Implements a graph-based Optimizable with a single degree of freedom and parent Optimizable nodes. Mainly used for testing.
The output method is named f. The call hook internally calls the method f.
- Parameters
val – Degree of freedom
opts – Parent optimizable objects. If not given, two Adder objects are added as parents
- dJ()¶
- f()¶
Implements an objective function
- return_fn_map: Dict[str, Callable] = {'f': <function TestObject1.f>}¶
- class simsopt.objectives.graph_functions.TestObject2(val1, val2)¶
Bases: simsopt._core.graph_optimizable.Optimizable
Implements a graph-based Optimizable with two degrees of freedom and two parent Optimizable nodes. Mainly used for testing.
The output method is named f. The call hook internally calls the method f.
- Parameters
val1 – First degree of freedom
val2 – Second degree of freedom
- dJ()¶
- f()¶
- return_fn_map: Dict[str, Callable] = {'f': <function TestObject2.f>}¶
simsopt.objectives.graph_least_squares module¶
Provides the LeastSquaresProblem class implemented using the new graph-based optimization framework.
- class simsopt.objectives.graph_least_squares.LeastSquaresProblem(goals: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], weights: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[simsopt._core.graph_optimizable.Optimizable, Sequence[simsopt._core.graph_optimizable.Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None)¶
Bases: simsopt._core.graph_optimizable.Optimizable
Represents a nonlinear-least-squares problem implemented using the new graph-based optimization framework. A LeastSquaresProblem instance has 3 basic attributes: a set of functions (f_in), target values for each of the functions (goal), and weights. The residual (f_out) for each of the f_in is defined as:
\[f_{out} = weight * (f_{in} - goal) ^ 2\]
- Parameters
goals – Targets for the residuals in the optimization.
weights – Weight associated with each residual.
funcs_in – Input functions (generally the output functions of the Optimizable instances).
depends_on – (Alternative initialization) Instead of specifying funcs_in, one can specify the Optimizable objects.
opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object.
- classmethod from_sigma(goals: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], sigma: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[simsopt._core.graph_optimizable.Optimizable, Sequence[simsopt._core.graph_optimizable.Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None) → simsopt.objectives.graph_least_squares.LeastSquaresProblem¶
Define the LeastSquaresProblem with
\[\begin{split}\sigma = 1/\sqrt{weight}, \text{so} \\ f_{out} = \left(\frac{f_{in} - goal}{\sigma}\right) ^ 2.\end{split}\]
- Parameters
goals – Targets for the residuals in the optimization.
sigma – Inverse of the square root of the weight associated with each residual.
funcs_in – Input functions (generally the output functions of the Optimizable instances).
depends_on – (Alternative initialization) Instead of specifying funcs_in, one can specify the Optimizable objects.
opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object.
- classmethod from_tuples(tuples: Sequence[Tuple[Callable, numbers.Real, numbers.Real]]) → simsopt.objectives.graph_least_squares.LeastSquaresProblem¶
Initializes a graph-based LeastSquaresProblem from a sequence of tuples containing f_in, goal, and weight.
- Parameters
tuples – A sequence of tuples containing (f_in, goal, weight) in each tuple (the specified order matters).
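The objective built from such tuples can be sketched directly from the residual definition quoted above, f_out = weight * (f_in - goal)^2, summed over all terms (the functions below are hypothetical scalar stand-ins for Optimizable outputs):

```python
def objective_from_tuples(tuples):
    """Sketch of the least-squares objective assembled from
    (f_in, goal, weight) tuples: sum of weight * (f_in() - goal)**2."""
    return sum(weight * (f() - goal) ** 2 for f, goal, weight in tuples)

# Two hypothetical scalar functions standing in for Optimizable outputs:
f1 = lambda: 3.0
f2 = lambda: -1.0
tuples = [(f1, 1.0, 2.0),   # contributes 2.0 * (3.0 - 1.0)**2 = 8.0
          (f2, 0.0, 1.0)]   # contributes 1.0 * (-1.0 - 0.0)**2 = 1.0
print(objective_from_tuples(tuples))  # 9.0
```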
- objective(x=None, *args, **kwargs)¶
Return the least squares sum
- Parameters
x – Degrees of freedom or state
args – Any additional arguments
kwargs – Keyword arguments
- residuals(x=None, *args, **kwargs)¶
Return the weighted residuals
- Parameters
x – Degrees of freedom or state
args – Any additional arguments
kwargs – Keyword arguments
- return_fn_map: Dict[str, Callable] = {'objective': <function LeastSquaresProblem.objective>, 'residuals': <function LeastSquaresProblem.residuals>}¶
- unweighted_residuals(x=None, *args, **kwargs)¶
Return the unweighted residuals (f_in - goal)
- Parameters
x – Degrees of freedom or state
args – Any additional arguments
kwargs – Keyword arguments
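The relation between the unweighted and weighted residuals can be sketched numerically. This assumes the conventional scaling in which each residual carries a factor of sqrt(weight), so that the sum of squared weighted residuals reproduces the objective; simsopt's exact convention should be checked against the source:

```python
import math

goals = [1.0, 0.0]
weights = [2.0, 1.0]
values = [3.0, -1.0]   # hypothetical f_in values

# Unweighted residuals: f_in - goal.
unweighted = [f - g for f, g in zip(values, goals)]

# Weighting by sqrt(weight) makes the sum of squared residuals
# equal sum_i weight_i * (f_i - goal_i)**2.
weighted = [math.sqrt(w) * r for w, r in zip(weights, unweighted)]
objective = sum(r * r for r in weighted)

print(unweighted)  # [2.0, -1.0]
print(objective)   # approximately 9.0
```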
simsopt.objectives.least_squares module¶
This module provides the LeastSquaresProblem class, as well as the associated class LeastSquaresTerm.
- class simsopt.objectives.least_squares.LeastSquaresProblem(terms, **kwargs)¶
Bases: object
This class represents a nonlinear-least-squares optimization problem. The class stores a list of LeastSquaresTerm objects.
- Parameters
terms – Must be convertible to a list by the list() subroutine. Each entry of the resulting list must either have type LeastSquaresTerm or else be a list or tuple of the form (function, goal, weight) or (object, attribute_str, goal, weight).
kwargs – Any additional arguments will be passed to the Dofs constructor. This is useful for passing fail, abs_step, rel_step, and diff_method. See Dofs for details.
- _init()¶
Calls collect_dofs() on the list of terms to set x, mins, maxs, names, etc. This is done both when the object is created, so objective() works immediately, and also at the start of solve().
- f(x=None)¶
This method returns the vector of residuals for a given state vector x. This function is passed to scipy.optimize, and could be passed to other optimization algorithms too. This function differs from Dofs.f() because it shifts and scales the terms.
If the argument x is not supplied, the residuals will be evaluated for the present state vector. If x is supplied, then first set_dofs() will be called for each object to set the global state vector to x.
- f_from_unshifted(f_unshifted)¶
This function takes a vector of function values, as returned by dofs, and shifts and scales them. This function does not actually evaluate the dofs.
- jac(x=None, **kwargs)¶
This method gives the Jacobian of the residuals with respect to the parameters, if it is available, given the state vector x. This function is passed to scipy.optimize, and could be passed to other optimization algorithms too. This Jacobian differs from the one returned by Dofs() because it accounts for the ‘weight’ scale factors.
If the argument x is not supplied, the Jacobian will be evaluated for the present state vector. If x is supplied, then first set_dofs() will be called for each object to set the global state vector to x.
kwargs is passed to Dofs.fd_jac().
- objective(x=None)¶
Return the value of the total objective function, by summing the terms.
If the argument x is not supplied, the objective will be evaluated for the present state vector. If x is supplied, then first set_dofs() will be called for each object to set the global state vector to x.
- objective_from_shifted_f(f)¶
Given a vector of functions that has already been evaluated, and already shifted and scaled, convert the result to the overall scalar objective function, without any further function evaluations. This routine is useful if we have already evaluated the residuals and we want to convert the result to the overall scalar objective without the computational expense of further function evaluations.
- objective_from_unshifted_f(f_unshifted)¶
Given a vector of functions that has already been evaluated, but not yet shifted and scaled, convert the result to the overall scalar objective function, without any further function evaluations. This routine is useful if we have already evaluated the residuals and we want to convert the result to the overall scalar objective without the computational expense of further function evaluations.
- scale_dofs_jac(jmat)¶
Given a Jacobian matrix j for the Dofs() associated to this least-squares problem, return the scaled Jacobian matrix for the least-squares residual terms. This function does not actually compute the Dofs() Jacobian, since sometimes we would compute that directly whereas other times we might compute it with serial or parallel finite differences. The provided jmat is scaled in-place.
- property x¶
Return the state vector.
- class simsopt.objectives.least_squares.LeastSquaresTerm(f_in, goal, weight)¶
Bases: object
This class represents one term in a nonlinear-least-squares problem. A LeastSquaresTerm instance has 3 basic attributes: a function (called f_in), a goal value (called goal), and a weight (called weight). The overall value of the term is:
f_out = weight * (f_in - goal) ** 2.
- f_out()¶
Return the overall value of this least-squares term.
- classmethod from_sigma(f_in, goal, sigma)¶
Define the LeastSquaresTerm with sigma = 1 / sqrt(weight), so
f_out = ((f_in - goal) / sigma) ** 2.
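The equivalence of the two formulations follows from weight = 1/sigma**2, as a quick numerical check shows (the values below are hypothetical):

```python
f_in, goal, sigma = 3.0, 1.0, 0.5   # hypothetical values
weight = 1.0 / sigma ** 2           # from_sigma sets weight = 1 / sigma**2

via_weight = weight * (f_in - goal) ** 2
via_sigma = ((f_in - goal) / sigma) ** 2
print(via_weight, via_sigma)  # both 16.0
```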
Module contents¶
- class simsopt.objectives.LeastSquaresProblem(terms, **kwargs)¶
Bases: object
This class represents a nonlinear-least-squares optimization problem. The class stores a list of LeastSquaresTerm objects.
- Parameters
terms – Must be convertible to a list by the list() subroutine. Each entry of the resulting list must either have type LeastSquaresTerm or else be a list or tuple of the form (function, goal, weight) or (object, attribute_str, goal, weight).
kwargs – Any additional arguments will be passed to the Dofs constructor. This is useful for passing fail, abs_step, rel_step, and diff_method. See Dofs for details.
- _init()¶
Calls collect_dofs() on the list of terms to set x, mins, maxs, names, etc. This is done both when the object is created, so objective() works immediately, and also at the start of solve().
- f(x=None)¶
This method returns the vector of residuals for a given state vector x. This function is passed to scipy.optimize, and could be passed to other optimization algorithms too. This function differs from Dofs.f() because it shifts and scales the terms.
If the argument x is not supplied, the residuals will be evaluated for the present state vector. If x is supplied, then first set_dofs() will be called for each object to set the global state vector to x.
- f_from_unshifted(f_unshifted)¶
This function takes a vector of function values, as returned by dofs, and shifts and scales them. This function does not actually evaluate the dofs.
- jac(x=None, **kwargs)¶
This method gives the Jacobian of the residuals with respect to the parameters, if it is available, given the state vector x. This function is passed to scipy.optimize, and could be passed to other optimization algorithms too. This Jacobian differs from the one returned by Dofs() because it accounts for the ‘weight’ scale factors.
If the argument x is not supplied, the Jacobian will be evaluated for the present state vector. If x is supplied, then first set_dofs() will be called for each object to set the global state vector to x.
kwargs is passed to Dofs.fd_jac().
- objective(x=None)¶
Return the value of the total objective function, by summing the terms.
If the argument x is not supplied, the objective will be evaluated for the present state vector. If x is supplied, then first set_dofs() will be called for each object to set the global state vector to x.
- objective_from_shifted_f(f)¶
Given a vector of functions that has already been evaluated, and already shifted and scaled, convert the result to the overall scalar objective function, without any further function evaluations. This routine is useful if we have already evaluated the residuals and we want to convert the result to the overall scalar objective without the computational expense of further function evaluations.
- objective_from_unshifted_f(f_unshifted)¶
Given a vector of functions that has already been evaluated, but not yet shifted and scaled, convert the result to the overall scalar objective function, without any further function evaluations. This routine is useful if we have already evaluated the residuals and we want to convert the result to the overall scalar objective without the computational expense of further function evaluations.
- scale_dofs_jac(jmat)¶
Given a Jacobian matrix j for the Dofs() associated to this least-squares problem, return the scaled Jacobian matrix for the least-squares residual terms. This function does not actually compute the Dofs() Jacobian, since sometimes we would compute that directly whereas other times we might compute it with serial or parallel finite differences. The provided jmat is scaled in-place.
- property x¶
Return the state vector.
- class simsopt.objectives.LeastSquaresTerm(f_in, goal, weight)¶
Bases: object
This class represents one term in a nonlinear-least-squares problem. A LeastSquaresTerm instance has 3 basic attributes: a function (called f_in), a goal value (called goal), and a weight (called weight). The overall value of the term is:
f_out = weight * (f_in - goal) ** 2.
- f_out()¶
Return the overall value of this least-squares term.
- classmethod from_sigma(f_in, goal, sigma)¶
Define the LeastSquaresTerm with sigma = 1 / sqrt(weight), so
f_out = ((f_in - goal) / sigma) ** 2.