simsopt.objectives package
Submodules
simsopt.objectives.fluxobjective module
- class simsopt.objectives.fluxobjective.SquaredFlux(surface, field, target=None)
Bases:
Optimizable
Objective representing the quadratic flux of a field on a surface, that is
\[\frac12 \int_{S} (\mathbf{B}\cdot \mathbf{n} - B_T)^2 ds\]where \(\mathbf{n}\) is the surface unit normal vector and \(B_T\) is an optional (zero by default) target value for the magnetic field.
- Parameters
surface – A simsopt.geo.surface.Surface object on which to compute the flux.
field – A simsopt.field.magneticfield.MagneticField for which to compute the flux.
target – An nphi x ntheta numpy array containing target values for the flux. Here nphi and ntheta correspond to the number of quadrature points on surface in the phi and theta directions.
- J()
- dJ(*args, partials=False, **kwargs)
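As a hedged illustration of the integral above (not the simsopt implementation), the quadratic flux can be approximated on a quadrature grid; the function name squared_flux and its arguments are hypothetical:

```python
import numpy as np

def squared_flux(B_dot_n, dA, B_target=0.0):
    """Discrete approximation of (1/2) * integral of (B.n - B_T)^2 ds.

    B_dot_n : (nphi, ntheta) array of B.n at the quadrature points
    dA      : (nphi, ntheta) array of surface area elements
    B_target: scalar or (nphi, ntheta) array of target normal-field values
    """
    return 0.5 * np.sum((B_dot_n - B_target) ** 2 * dA)

# A field whose normal component exactly matches the target gives zero:
Bn = np.full((8, 8), 1.5)
dA = np.full((8, 8), 0.01)
squared_flux(Bn, dA, B_target=1.5)  # -> 0.0
```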
simsopt.objectives.functions module
This module provides a few minimal optimizable objects, each representing a function. These functions are mostly used for testing.
- class simsopt.objectives.functions.Adder(n=3, **kwargs)
Bases:
Optimizable
Defines a minimal graph based Optimizable object that can be optimized. It has n degrees of freedom.
The method sum returns the sum of these dofs. The call hook internally calls the sum method.
- Parameters
n – Number of degrees of freedom (DOFs)
x0 – Initial values of the DOFs. If not given, defaults to zeros
dof_names – Identifiers for the DOFs
- J()
- dJ()
- property df
Same as the function dJ(), but a property instead of a function.
- return_fn_map: Dict[str, Callable] = {'sum': <function Adder.sum>}
- sum()
Sums the DOFs
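The documented behaviour (J returns the sum of the n DOFs) can be sketched with a minimal stand-in class; AdderSketch is a hypothetical illustration, not the simsopt class:

```python
import numpy as np

class AdderSketch:
    """Stand-in mimicking the documented behaviour of Adder:
    n degrees of freedom whose objective J is their sum."""
    def __init__(self, n=3, x0=None):
        self.x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)

    def sum(self):
        return float(self.x.sum())

    def J(self):  # the call hook delegates to sum()
        return self.sum()

AdderSketch(n=3, x0=[1.0, 2.0, 3.0]).J()  # -> 6.0
```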
- class simsopt.objectives.functions.Affine(nparams, nvals)
Bases:
Optimizable
Implements a random affine (i.e. linear plus constant) transformation from R^n to R^m. The n inputs to the transformation are initially set to zeros.
- Parameters
nparams – number of independent variables.
nvals – number of dependent variables.
- dJ()
- f()
- return_fn_map: Dict[str, Callable] = {'f': <function Affine.f>}
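A random affine map f(x) = A x + b has the constant matrix A as its Jacobian, which is what dJ returns. The sketch below (AffineSketch is a hypothetical name) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)

class AffineSketch:
    """Stand-in for Affine: a random map f(x) = A x + b from R^nparams
    to R^nvals, with the inputs x initialised to zeros."""
    def __init__(self, nparams, nvals):
        self.A = rng.standard_normal((nvals, nparams))
        self.b = rng.standard_normal(nvals)
        self.x = np.zeros(nparams)

    def f(self):
        return self.A @ self.x + self.b

    def dJ(self):
        # The Jacobian of an affine map is the constant matrix A.
        return self.A

aff = AffineSketch(nparams=2, nvals=3)
np.allclose(aff.f(), aff.b)  # True: x starts at zero, so f(0) = b
```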
- class simsopt.objectives.functions.Beale(x0=None, **kwargs)
Bases:
Optimizable
This is a test function which does not supply derivatives. It is taken from https://en.wikipedia.org/wiki/Test_functions_for_optimization
- J()
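The Beale function from the referenced Wikipedia page has the closed form below, with global minimum f(3, 0.5) = 0; this standalone sketch evaluates the formula directly rather than via the simsopt class:

```python
def beale(x, y):
    """Beale test function; global minimum is f(3, 0.5) = 0."""
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)

beale(3.0, 0.5)  # -> 0.0
```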
- class simsopt.objectives.functions.Failer(nparams: int = 2, nvals: int = 3, fail_index: int = 2)
Bases:
Optimizable
This class is used for testing failures of the objective function. It always returns a vector with all entries 1.0, except that ObjectiveFailure is raised on a specified evaluation.
- Parameters
nparams – Number of input values.
nvals – Number of entries in the return vector.
fail_index – Which function evaluation to fail on.
- J()
- get_dofs()
- set_dofs(x)
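The documented behaviour (a constant vector of ones, with one evaluation raising ObjectiveFailure) amounts to counting calls; FailerSketch and the local ObjectiveFailure class below are hypothetical stand-ins for illustration:

```python
class ObjectiveFailure(Exception):
    """Stand-in for simsopt's ObjectiveFailure exception."""

class FailerSketch:
    """Mimics the documented behaviour of Failer: J() returns a vector
    of ones, except the fail_index-th call raises ObjectiveFailure."""
    def __init__(self, nvals=3, fail_index=2):
        self.nvals = nvals
        self.fail_index = fail_index
        self.ncalls = 0

    def J(self):
        self.ncalls += 1
        if self.ncalls == self.fail_index:
            raise ObjectiveFailure(f"failing on evaluation {self.ncalls}")
        return [1.0] * self.nvals
```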
- class simsopt.objectives.functions.Identity(x: Real = 0.0, dof_name: Optional[str] = None, dof_fixed: bool = False)
Bases:
Optimizable
Represents a term in an objective function which is just the identity. It has one degree of freedom. Conforms to the graph based Optimizable framework.
The output of the method f is equal to this degree of freedom. The call hook internally calls method f. It does not have any parent Optimizable nodes.
- Parameters
x – Value of the DOF
dof_name – Identifier for the DOF
dof_fixed – To specify if the dof is fixed
- as_dict(serial_objs_dict: dict) dict
A JSON serializable dict representation of an object.
- dJ(x: Optional[Union[Sequence[Real], NDArray[Any, float64]]] = None)
- f()
Returns the value of the DOF
- classmethod from_dict(d, serial_objs_dict, recon_objs)
- Parameters
d – Dict representation.
- Returns
GSONable class.
- return_fn_map: Dict[str, Callable] = {'f': <function Identity.f>}
- class simsopt.objectives.functions.Rosenbrock(b=100.0, x0=[0.0, 0.0], **kwargs)
Bases:
Optimizable
Implements Rosenbrock function using the graph based optimization framework. The Rosenbrock function is defined as
\[f(x,y) = (a-x)^2 + b(y-x^2)^2\]The parameter a is fixed to 1, while b can be given as input.
- Parameters
b – The b parameter of Rosenbrock function
x0 – x, y coordinates
- property b
- property dterm1
Returns the gradient of term1
- property dterm2
Returns the gradient of term2
- dterms()
Returns the 2x2 Jacobian for term1 and term2.
- f(x=None)
Returns the total function, squaring and summing the two terms.
- return_fn_map: Dict[str, Callable] = {'f': <function Rosenbrock.f>}
- property term1
Returns the first of the two quantities that is squared and summed.
- property term2
Returns the second of the two quantities that is squared and summed.
- property terms
Returns term1 and term2 together as a 2-element numpy vector.
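The formula above can be evaluated directly; this sketch (hypothetical function names, not the simsopt class) also shows the two quantities that are squared and summed, with the global minimum at (a, a^2):

```python
import numpy as np

def rosenbrock(x, y, b=100.0, a=1.0):
    """f(x, y) = (a - x)^2 + b (y - x^2)^2; a is fixed to 1 in
    simsopt's class. Global minimum f(a, a^2) = 0."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

def rosenbrock_terms(x, y, b=100.0, a=1.0):
    """The two quantities (term1, term2) that are squared and summed."""
    return a - x, np.sqrt(b) * (y - x ** 2)

rosenbrock(1.0, 1.0)  # -> 0.0
```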
- class simsopt.objectives.functions.TestObject1(x0: Real, depends_on: Optional[Sequence[Optimizable]] = None, **kwargs)
Bases:
Optimizable
Implements a graph based optimizable object with a single degree of freedom and parent optimizable nodes. Mainly used for testing.
The output method is named f. The call hook internally calls method f.
- Parameters
val – Degree of freedom
opts – Parent optimizable objects. If not given, two Adder objects are added as parents
- dJ()
- property depends_on
- f()
Implements an objective function
- return_fn_map: Dict[str, Callable] = {'f': <function TestObject1.f>}
- class simsopt.objectives.functions.TestObject2(val1, val2)
Bases:
Optimizable
Implements a graph based optimizable object with two degrees of freedom and two parent optimizable nodes. Mainly used for testing.
The output method is named f. The call hook internally calls method f.
- Parameters
val1 – First degree of freedom
val2 – Second degree of freedom
- dJ()
- f()
- return_fn_map: Dict[str, Callable] = {'f': <function TestObject2.f>}
simsopt.objectives.least_squares module
Provides the LeastSquaresProblem class implemented using the new graph based optimization framework.
- class simsopt.objectives.least_squares.LeastSquaresProblem(goals: Union[Real, Sequence[Real], NDArray[Any, float64]], weights: Union[Real, Sequence[Real], NDArray[Any, float64]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[Optimizable, Sequence[Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None, fail: Union[None, float] = 1000000000000.0)
Bases:
Optimizable
Represents a nonlinear-least-squares problem implemented using the graph based optimization framework. A LeastSquaresProblem instance has 3 basic attributes: a set of functions (f_in), target values for each of the functions (goal), and weights. The residual (f_out) for each of the f_in is defined as:
\[f_{out} = weight * (f_{in} - goal) ^ 2\]- Parameters
goals – Targets for residuals in optimization
weights – Weight associated with each of the residuals
funcs_in – Input functions (generally one of the output functions of the Optimizable instances)
depends_on – (Alternative initialization) Instead of specifying funcs_in, one could specify the Optimizable objects
opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object
- classmethod from_sigma(goals: Union[Real, Sequence[Real], NDArray[Any, float64]], sigma: Union[Real, Sequence[Real], NDArray[Any, float64]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[Optimizable, Sequence[Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None, fail: Union[None, float] = 1000000000000.0) LeastSquaresProblem
Define the LeastSquaresProblem with
\[\begin{split}\sigma = 1/\sqrt{weight}, \text{so} \\ f_{out} = \left(\frac{f_{in} - goal}{\sigma}\right) ^ 2.\end{split}\]- Parameters
goals – Targets for residuals in optimization
sigma – Inverse of the square root of the weight associated with each residual
funcs_in – Input functions (generally one of the output functions of the Optimizable instances)
depends_on – (Alternative initialization) Instead of specifying funcs_in, one could specify the Optimizable objects
opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object
- classmethod from_tuples(tuples: Sequence[Tuple[Callable, Real, Real]], fail: Union[None, float] = 1000000000000.0) LeastSquaresProblem
Initializes a graph based LeastSquaresProblem from a sequence of tuples, each containing f_in, goal, and weight.
- Parameters
tuples – A sequence of tuples containing (f_in, goal, weight) in each tuple (the specified order matters).
- objective(x=None, *args, **kwargs)
Return the least squares sum
- Parameters
x – Degrees of freedom or state
args – Any additional arguments
kwargs – Keyword arguments
- residuals(x=None, *args, **kwargs)
Return the weighted residuals
- Parameters
x – Degrees of freedom or state
args – Any additional arguments
kwargs – Keyword arguments
- return_fn_map: Dict[str, Callable] = {'objective': <function LeastSquaresProblem.objective>, 'residuals': <function LeastSquaresProblem.residuals>}
- unweighted_residuals(x=None, *args, **kwargs)
Return the unweighted residuals (f_in - goal)
- Parameters
x – Degrees of freedom or state
args – Any additional arguments
kwargs – Keyword arguments
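The objective documented above, and its from_sigma variant with weight = 1/sigma^2, can be sketched in plain NumPy; ls_objective and ls_from_sigma are hypothetical names illustrating the formulas, not the simsopt API:

```python
import numpy as np

def ls_objective(f_vals, goals, weights):
    """Least-squares sum per the documented formula:
    sum over i of weights[i] * (f_vals[i] - goals[i])**2."""
    f_vals, goals, weights = map(np.asarray, (f_vals, goals, weights))
    return float(np.sum(weights * (f_vals - goals) ** 2))

def ls_from_sigma(f_vals, goals, sigmas):
    """from_sigma variant: sigma = 1/sqrt(weight), i.e. weight = 1/sigma^2."""
    sigmas = np.asarray(sigmas, dtype=float)
    return ls_objective(f_vals, goals, 1.0 / sigmas ** 2)

ls_objective([1.0, 2.0], goals=[0.0, 0.0], weights=[1.0, 1.0])  # -> 5.0
```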
simsopt.objectives.utilities module
- class simsopt.objectives.utilities.MPIObjective(objectives, comm, needs_splitting=False)
Bases:
Optimizable
- J()
- __init__(objectives, comm, needs_splitting=False)
Compute the mean of a list of objectives in parallel using MPI.
- Parameters
objectives – A python list of objectives that provide .J() and .dJ() functions.
comm – The MPI communicator to use.
needs_splitting – If set to True, the list of objectives is split into disjoint partitions and only one part is worked on per MPI rank. If set to False, we assume that the user constructed the list of objectives so that it only contains the objectives relevant to that MPI rank.
- dJ(*args, partials=False, **kwargs)
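The needs_splitting=True behaviour described above (partitioning the objective list into disjoint chunks, one per rank) can be sketched without MPI; split_objectives is a hypothetical helper, not the simsopt implementation:

```python
def split_objectives(objectives, nranks, rank):
    """Partition a list into disjoint, near-equal chunks, one per MPI rank.
    Earlier ranks absorb the remainder when len(objectives) % nranks != 0."""
    n = len(objectives)
    counts = [n // nranks + (1 if i < n % nranks else 0) for i in range(nranks)]
    start = sum(counts[:rank])
    return objectives[start:start + counts[rank]]

# 5 objectives over 2 ranks: rank 0 gets 3, rank 1 gets 2
split_objectives(list("abcde"), nranks=2, rank=0)  # -> ['a', 'b', 'c']
```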
- class simsopt.objectives.utilities.QuadraticPenalty(obj, cons=0.0, f='min')
Bases:
Optimizable
- J()
- __init__(obj, cons=0.0, f='min')
A quadratic penalty function of the form \(0.5f(\text{obj}.J() - \text{cons})^2\) for an underlying objective obj and wrapping function f. This can be used to implement a penalty function for (in)equality constrained optimization problems. The wrapping function defaults to ‘min’.
- Parameters
obj – The underlying objective. It should provide .J() and .dJ() functions.
cons – The constant
f – The function that wraps the difference \(\text{obj}-\text{cons}\). The options are ‘min’, ‘max’, or ‘identity’, which respectively return \(\min(\text{obj}-\text{cons}, 0)\), \(\max(\text{obj}-\text{cons}, 0)\), and \(\text{obj}-\text{cons}\).
- dJ(*args, partials=False, **kwargs)
- return_fn_map: Dict[str, Callable] = {'J': <function QuadraticPenalty.J>, 'dJ': <function derivative_dec.<locals>._derivative_dec>}
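The three wrapping options can be illustrated with a scalar sketch of the documented formula; quadratic_penalty is a hypothetical name operating on a pre-evaluated objective value, not the simsopt class:

```python
def quadratic_penalty(obj_val, cons=0.0, f='min'):
    """0.5 * f(obj - cons)^2, with f one of 'min', 'max', 'identity'
    as documented above."""
    d = obj_val - cons
    if f == 'min':
        d = min(d, 0.0)
    elif f == 'max':
        d = max(d, 0.0)
    elif f != 'identity':
        raise ValueError(f"unknown wrapping function {f!r}")
    return 0.5 * d * d

# 'max' penalises violations of obj <= cons; obj below cons costs nothing:
quadratic_penalty(3.0, cons=1.0, f='max')  # -> 2.0
quadratic_penalty(0.5, cons=1.0, f='max')  # -> 0.0
```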
- class simsopt.objectives.utilities.Weight(value)
Bases:
object
- simsopt.objectives.utilities.forward_backward(P, L, U, rhs, iterative_refinement=False)
Solve a linear system of the form (PLU)^T*adj = rhs for adj.
- Parameters
P – permutation matrix
L – lower triangular matrix
U – upper triangular matrix
rhs – right-hand side of the linear system
iterative_refinement – when true, applies iterative refinement, which can improve the accuracy of the computed solution when the matrix is particularly ill-conditioned.
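Since (PLU)^T = U^T L^T P^T, the system splits into two triangular solves and a permutation; this sketch (forward_backward_sketch is a hypothetical name, using np.linalg.solve for brevity where a real implementation would use dedicated triangular solvers) illustrates the idea:

```python
import numpy as np

def forward_backward_sketch(P, L, U, rhs):
    """Solve (P L U)^T adj = rhs: solve U^T y = rhs, then L^T w = y,
    then adj = P w (P is orthogonal, so P^T adj = w implies adj = P w)."""
    y = np.linalg.solve(U.T, rhs)  # U^T is lower triangular: forward substitution
    w = np.linalg.solve(L.T, y)    # L^T is upper triangular: backward substitution
    return P @ w

# Verify on a hand-built factorisation A = P L U:
P = np.array([[0., 1.], [1., 0.]])
L = np.array([[1., 0.], [0.5, 1.]])
U = np.array([[2., 1.], [0., 3.]])
A = P @ L @ U
adj = forward_backward_sketch(P, L, U, A.T @ np.array([1.0, 2.0]))
# adj recovers [1.0, 2.0]
```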