simsopt.objectives package

Submodules

simsopt.objectives.fluxobjective module

class simsopt.objectives.fluxobjective.SquaredFlux(surface, field, target=None)

Bases: simsopt._core.graph_optimizable.Optimizable

Objective representing the quadratic flux of a field on a surface, that is

\[\frac12 \int_{S} (\mathbf{B}\cdot \mathbf{n} - B_T)^2 ds\]

where \(\mathbf{n}\) is the surface unit normal vector and \(B_T\) is an optional (zero by default) target value for the magnetic field.

Parameters
  • surface – A simsopt.geo.surface.Surface object on which to compute the flux

  • field – A simsopt.field.magneticfield.MagneticField for which to compute the flux.

  • target – A nphi x ntheta numpy array containing target values for the flux. Here nphi and ntheta correspond to the number of quadrature points on surface in the phi and theta directions.

J()
dJ(*args, partials=False, **kwargs)
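The integral above can be illustrated with a plain NumPy sketch, using hypothetical arrays in place of the Surface and MagneticField objects the class actually takes (the function name and array layout here are illustrative, not the class's internals):

```python
import numpy as np

def squared_flux(B, n, ds, B_target=0.0):
    """Approximate 0.5 * \\int (B . n_hat - B_T)^2 ds on an
    nphi x ntheta quadrature grid.

    B        : (nphi, ntheta, 3) field values at the quadrature points
    n        : (nphi, ntheta, 3) surface normal vectors (not necessarily unit)
    ds       : (nphi, ntheta) area element times quadrature weight
    B_target : scalar or (nphi, ntheta) target normal field
    """
    n_hat = n / np.linalg.norm(n, axis=-1, keepdims=True)
    Bn = np.einsum('ijk,ijk->ij', B, n_hat)          # B . n_hat at each point
    return 0.5 * np.sum((Bn - B_target) ** 2 * ds)

# A field everywhere tangent to the surface gives zero quadratic flux:
nphi, ntheta = 4, 5
n = np.zeros((nphi, ntheta, 3)); n[..., 2] = 1.0     # normals along z
B = np.zeros((nphi, ntheta, 3)); B[..., 0] = 1.0     # field along x
ds = np.full((nphi, ntheta), 0.1)
print(squared_flux(B, n, ds))  # 0.0
```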

simsopt.objectives.graph_functions module

This module provides a few minimal optimizable objects, each representing a function. These functions are mostly used for testing.

class simsopt.objectives.graph_functions.Adder(n=3, x0=None, dof_names=None)

Bases: simsopt._core.graph_optimizable.Optimizable

Defines a minimal graph based Optimizable object that can be optimized. It has n degrees of freedom.

The method sum returns the sum of these DOFs. The call hook internally calls the sum method.

Parameters
  • n – Number of degrees of freedom (DOFs)

  • x0 – Initial values of the DOFs. If not given, the DOFs are initialized to zero

  • dof_names – Identifiers for the DOFs

dJ()
property df

Same as the function dJ(), but a property instead of a function.

return_fn_map: Dict[str, Callable] = {'sum': <function Adder.sum>}
sum()

Sums the DOFs

class simsopt.objectives.graph_functions.Affine(nparams, nvals)

Bases: simsopt._core.graph_optimizable.Optimizable

Implements a random affine (i.e. linear plus constant) transformation from R^n to R^m. The n inputs to the transformation are initially set to zero.

Parameters
  • nparams – number of independent variables.

  • nvals – number of dependent variables.

dJ()
f()
return_fn_map: Dict[str, Callable] = {'f': <function Affine.f>}
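An affine map of this shape can be sketched directly with NumPy; the names A and b below are illustrative, not the class's internal attributes:

```python
import numpy as np

rng = np.random.default_rng(0)
nparams, nvals = 3, 2
A = rng.standard_normal((nvals, nparams))  # random linear part
b = rng.standard_normal(nvals)             # random constant part

def affine(x):
    """R^nparams -> R^nvals affine transformation: A @ x + b."""
    return A @ x + b

# The inputs start at zero, so the initial output is just the constant term:
x0 = np.zeros(nparams)
print(np.allclose(affine(x0), b))  # True
```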
class simsopt.objectives.graph_functions.Beale

Bases: simsopt._core.graph_optimizable.Optimizable

This is a test function which does not supply derivatives. It is taken from https://en.wikipedia.org/wiki/Test_functions_for_optimization

J()
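The Beale function referenced above has the standard form from the linked page, with global minimum f(3, 0.5) = 0; a direct sketch:

```python
def beale(x, y):
    """Beale test function; global minimum is f(3, 0.5) = 0."""
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)

print(beale(3.0, 0.5))  # 0.0
```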
class simsopt.objectives.graph_functions.Failer(nparams: int = 2, nvals: int = 3, fail_index: int = 2)

Bases: simsopt._core.graph_optimizable.Optimizable

This class is used for testing failures of the objective function. This function always returns a vector with entries all 1.0, except that ObjectiveFailure will be raised on a specified evaluation.

Parameters
  • nparams – Number of input values.

  • nvals – Number of entries in the return vector.

  • fail_index – Which function evaluation to fail on.

J()
get_dofs()
set_dofs(x)
class simsopt.objectives.graph_functions.Identity(x: numbers.Real = 0.0, dof_name: Optional[str] = None, dof_fixed: bool = False)

Bases: simsopt._core.graph_optimizable.Optimizable

Represents a term in an objective function which is just the identity. It has one degree of freedom. Conforms to the experimental graph based Optimizable framework.

The output of the method f is equal to this degree of freedom. The call hook internally calls method f. It does not have any parent Optimizable nodes.

Parameters
  • x – Value of the DOF

  • dof_name – Identifier for the DOF

  • dof_fixed – To specify if the dof is fixed

dJ(x: Optional[Union[Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]]] = None)
f()

Returns the value of the DOF

return_fn_map: Dict[str, Callable] = {'f': <function Identity.f>}
class simsopt.objectives.graph_functions.Rosenbrock(b=100.0, x=0.0, y=0.0)

Bases: simsopt._core.graph_optimizable.Optimizable

Implements Rosenbrock function using the graph based optimization framework. The Rosenbrock function is defined as

\[f(x,y) = (a-x)^2 + b(y-x^2)^2\]

The parameter a is fixed to 1, and the parameter b can be given as input.

Parameters
  • b – The b parameter of Rosenbrock function

  • x – x coordinate

  • y – y coordinate
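With a fixed to 1 the function has its minimum at (1, 1), which a direct NumPy-free sketch confirms:

```python
def rosenbrock(x, y, a=1.0, b=100.0):
    """Rosenbrock function (a - x)^2 + b (y - x^2)^2; minimum at (a, a^2)."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

print(rosenbrock(1.0, 1.0))  # 0.0 at the minimum
print(rosenbrock(0.0, 0.0))  # 1.0
```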

property dterm1

Returns the gradient of term1

property dterm2

Returns the gradient of term2

dterms()

Returns the 2x2 Jacobian for term1 and term2.

f(x=None)

Returns the total function, squaring and summing the two terms.

return_fn_map: Dict[str, Callable] = {'f': <function Rosenbrock.f>}
property term1

Returns the first of the two quantities that is squared and summed.

property term2

Returns the second of the two quantities that is squared and summed.

property terms

Returns term1 and term2 together as a 2-element numpy vector.

class simsopt.objectives.graph_functions.TestObject1(val: numbers.Real, opts: Optional[Sequence[simsopt._core.graph_optimizable.Optimizable]] = None)

Bases: simsopt._core.graph_optimizable.Optimizable

Implements a graph based Optimizable with a single degree of freedom and parent Optimizable nodes. Mainly used for testing.

The output method is named f. Call hook internally calls method f.

Parameters
  • val – Degree of freedom

  • opts – Parent optimizable objects. If not given, two Adder objects are added as parents

dJ()
property df

Same as dJ(), but a property instead of a function.

f()

Implements an objective function

return_fn_map: Dict[str, Callable] = {'f': <function TestObject1.f>}
class simsopt.objectives.graph_functions.TestObject2(val1, val2)

Bases: simsopt._core.graph_optimizable.Optimizable

Implements a graph based Optimizable with two degrees of freedom and two parent Optimizable nodes. Mainly used for testing.

The output method is named f. Call hook internally calls method f.

Parameters
  • val1 – First degree of freedom

  • val2 – Second degree of freedom

dJ()
f()
return_fn_map: Dict[str, Callable] = {'f': <function TestObject2.f>}

simsopt.objectives.graph_least_squares module

Provides the LeastSquaresProblem class implemented using the new graph based optimization framework.

class simsopt.objectives.graph_least_squares.LeastSquaresProblem(goals: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], weights: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[simsopt._core.graph_optimizable.Optimizable, Sequence[simsopt._core.graph_optimizable.Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None, fail: Union[None, float] = 1000000000000.0)

Bases: simsopt._core.graph_optimizable.Optimizable

Represents a nonlinear-least-squares problem implemented using the new graph based optimization framework. A LeastSquaresProblem instance has 3 basic attributes: a set of functions (f_in), target values for each of the functions (goal), and weights. The residual (f_out) for each of the f_in is defined as:

\[f_{out} = weight * (f_{in} - goal) ^ 2\]
Parameters
  • goals – Targets for residuals in optimization

  • weights – Weight associated with each residual

  • funcs_in – Input functions (generally one of the output functions of the Optimizable instances)

  • depends_on – (Alternative initialization) Instead of specifying funcs_in, one could specify the Optimizable objects

  • opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object
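A minimal NumPy sketch of the residual formula above (the formula only, not the class implementation; the function name is illustrative):

```python
import numpy as np

def least_squares_terms(f_in, goals, weights):
    """Per the formula above: f_out = weight * (f_in - goal)^2,
    and the scalar objective is the sum of the terms."""
    f_in, goals, weights = map(np.asarray, (f_in, goals, weights))
    terms = weights * (f_in - goals) ** 2
    return terms, terms.sum()

terms, obj = least_squares_terms(f_in=[1.0, 2.0], goals=[0.0, 0.0],
                                 weights=[1.0, 0.5])
print(terms, obj)  # [1. 2.] 3.0
```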

classmethod from_sigma(goals: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], sigma: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[simsopt._core.graph_optimizable.Optimizable, Sequence[simsopt._core.graph_optimizable.Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None, fail: Union[None, float] = 1000000000000.0) simsopt.objectives.graph_least_squares.LeastSquaresProblem

Define the LeastSquaresProblem with

\[\begin{split}\sigma = 1/\sqrt{weight}, \text{so} \\ f_{out} = \left(\frac{f_{in} - goal}{\sigma}\right) ^ 2.\end{split}\]
Parameters
  • goals – Targets for residuals in optimization

  • sigma – Inverse of the square root of the weight associated with each residual

  • funcs_in – Input functions (generally one of the output functions of the Optimizable instances)

  • depends_on – (Alternative initialization) Instead of specifying funcs_in, one could specify the Optimizable objects

  • opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object
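The equivalence of the weight and sigma parameterizations can be checked numerically (a standalone sketch, not the classmethod itself):

```python
import numpy as np

sigma = np.array([0.5, 2.0])
weights = 1.0 / sigma ** 2        # sigma = 1/sqrt(weight)

f_in = np.array([1.0, 3.0])
goals = np.array([0.0, 1.0])

# Both parameterizations produce the same residual terms:
via_weight = weights * (f_in - goals) ** 2
via_sigma = ((f_in - goals) / sigma) ** 2
print(np.allclose(via_weight, via_sigma))  # True
```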

classmethod from_tuples(tuples: Sequence[Tuple[Callable, numbers.Real, numbers.Real]], fail: Union[None, float] = 1000000000000.0) simsopt.objectives.graph_least_squares.LeastSquaresProblem

Initializes graph based LeastSquaresProblem from a sequence of tuples containing f_in, goal, and weight.

Parameters
  • tuples – A sequence of tuples containing (f_in, goal, weight) in each tuple (the specified order matters).

objective(x=None, *args, **kwargs)

Return the least squares sum

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments

residuals(x=None, *args, **kwargs)

Return the weighted residuals

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments

return_fn_map: Dict[str, Callable] = {'objective': <function LeastSquaresProblem.objective>, 'residuals': <function LeastSquaresProblem.residuals>}
unweighted_residuals(x=None, *args, **kwargs)

Return the unweighted residuals (f_in - goal)

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments

simsopt.objectives.utilities module

class simsopt.objectives.utilities.MPIObjective(objectives, comm, needs_splitting=False)

Bases: simsopt._core.graph_optimizable.Optimizable

J()
__init__(objectives, comm, needs_splitting=False)

Compute the mean of a list of objectives in parallel using MPI.

Parameters
  • objectives – A python list of objectives that provide .J() and .dJ() functions.

  • comm – The MPI communicator to use.

  • needs_splitting – If set to True, the list of objectives is split into disjoint partitions and only one partition is worked on per MPI rank. If set to False, it is assumed that the user constructed the list of objectives so that it contains only the objectives relevant to that MPI rank.

dJ(*args, partials=False, **kwargs)
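The effect of needs_splitting can be sketched serially, without MPI; the helper below is hypothetical and only illustrates how a list of objectives might be divided into disjoint, nearly equal partitions, one per rank:

```python
def split_objectives(objectives, size, rank):
    """Partition a list into `size` disjoint, nearly equal parts and
    return the part belonging to `rank`."""
    n = len(objectives)
    counts = [n // size + (1 if r < n % size else 0) for r in range(size)]
    start = sum(counts[:rank])
    return objectives[start:start + counts[rank]]

objs = list(range(7))
parts = [split_objectives(objs, 3, r) for r in range(3)]
print(parts)  # [[0, 1, 2], [3, 4], [5, 6]]
# Every objective lands in exactly one partition:
assert sum(parts, []) == objs
```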
class simsopt.objectives.utilities.QuadraticPenalty(obj, threshold=0.0)

Bases: simsopt._core.graph_optimizable.Optimizable

J()
__init__(obj, threshold=0.0)

A penalty function of the form \(\max(J - \text{threshold}, 0)^2\) for an underlying objective J.

Parameters
  • obj – the underlying objective. It should provide a .J() and .dJ() function.

  • threshold – the threshold past which values should be penalized.

dJ(*args, partials=False, **kwargs)
return_fn_map: Dict[str, Callable] = {'J': <function QuadraticPenalty.J>, 'dJ': <function derivative_dec.<locals>._derivative_dec>}
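A scalar sketch of the penalty formula (the class wraps an Optimizable objective providing .J(), rather than a plain number):

```python
def quadratic_penalty(J, threshold=0.0):
    """max(J - threshold, 0)^2: zero at or below the threshold,
    growing quadratically above it."""
    return max(J - threshold, 0.0) ** 2

print(quadratic_penalty(0.5, threshold=1.0))  # 0.0  (below threshold)
print(quadratic_penalty(3.0, threshold=1.0))  # 4.0  (2.0 squared)
```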
simsopt.objectives.utilities.sum_across_comm(derivative, comm)

Compute the sum of simsopt._core.derivative.Derivative objects from several MPI ranks. This implementation is fairly basic and requires that the derivative dictionaries contain the same keys on all ranks.
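The reduction can be sketched serially: derivative dictionaries with identical keys on every rank are summed key-wise. Here plain dicts stand in for simsopt Derivative objects and a list stands in for the MPI ranks; the helper name is illustrative:

```python
def sum_derivative_dicts(dicts):
    """Sum a list of dictionaries key-wise; all dicts must share the
    same keys, mirroring the requirement stated above."""
    keys = dicts[0].keys()
    assert all(d.keys() == keys for d in dicts), "same keys required on all ranks"
    return {k: sum(d[k] for d in dicts) for k in keys}

rank_derivs = [{"coil1": 1.0, "coil2": 2.0},
               {"coil1": 0.5, "coil2": -1.0}]
print(sum_derivative_dicts(rank_derivs))  # {'coil1': 1.5, 'coil2': 1.0}
```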

Module contents

class simsopt.objectives.LeastSquaresProblem(goals: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], weights: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[simsopt._core.graph_optimizable.Optimizable, Sequence[simsopt._core.graph_optimizable.Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None, fail: Union[None, float] = 1000000000000.0)

Bases: simsopt._core.graph_optimizable.Optimizable

Represents a nonlinear-least-squares problem implemented using the new graph based optimization framework. A LeastSquaresProblem instance has 3 basic attributes: a set of functions (f_in), target values for each of the functions (goal), and weights. The residual (f_out) for each of the f_in is defined as:

\[f_{out} = weight * (f_{in} - goal) ^ 2\]
Parameters
  • goals – Targets for residuals in optimization

  • weights – Weight associated with each residual

  • funcs_in – Input functions (generally one of the output functions of the Optimizable instances)

  • depends_on – (Alternative initialization) Instead of specifying funcs_in, one could specify the Optimizable objects

  • opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object

classmethod from_sigma(goals: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], sigma: Union[numbers.Real, Sequence[numbers.Real], nptyping.types._ndarray.NDArray[None, nptyping.types._number.Float[float, numpy.floating]]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[simsopt._core.graph_optimizable.Optimizable, Sequence[simsopt._core.graph_optimizable.Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None, fail: Union[None, float] = 1000000000000.0) simsopt.objectives.graph_least_squares.LeastSquaresProblem

Define the LeastSquaresProblem with

\[\begin{split}\sigma = 1/\sqrt{weight}, \text{so} \\ f_{out} = \left(\frac{f_{in} - goal}{\sigma}\right) ^ 2.\end{split}\]
Parameters
  • goals – Targets for residuals in optimization

  • sigma – Inverse of the square root of the weight associated with each residual

  • funcs_in – Input functions (generally one of the output functions of the Optimizable instances)

  • depends_on – (Alternative initialization) Instead of specifying funcs_in, one could specify the Optimizable objects

  • opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object

classmethod from_tuples(tuples: Sequence[Tuple[Callable, numbers.Real, numbers.Real]], fail: Union[None, float] = 1000000000000.0) simsopt.objectives.graph_least_squares.LeastSquaresProblem

Initializes graph based LeastSquaresProblem from a sequence of tuples containing f_in, goal, and weight.

Parameters
  • tuples – A sequence of tuples containing (f_in, goal, weight) in each tuple (the specified order matters).

objective(x=None, *args, **kwargs)

Return the least squares sum

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments

residuals(x=None, *args, **kwargs)

Return the weighted residuals

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments

return_fn_map: Dict[str, Callable] = {'objective': <function LeastSquaresProblem.objective>, 'residuals': <function LeastSquaresProblem.residuals>}
unweighted_residuals(x=None, *args, **kwargs)

Return the unweighted residuals (f_in - goal)

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments