simsopt.objectives package

class simsopt.objectives.LeastSquaresProblem(goals: Union[Real, Sequence[Real], NDArray[Any, float64]], weights: Union[Real, Sequence[Real], NDArray[Any, float64]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[Optimizable, Sequence[Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None, fail: Union[None, float] = 1000000000000.0)

Bases: Optimizable

Represents a nonlinear least-squares problem implemented using the graph-based optimization framework. A LeastSquaresProblem instance has three basic attributes: a set of input functions (f_in), target values for each function (goals), and weights. The residual (f_out) for each of the f_in is defined as:

\[f_{out} = weight * (f_{in} - goal) ^ 2\]
Parameters
  • goals – Targets for residuals in optimization

  • weights – Weight associated with each residual

  • funcs_in – Input functions (generally one of the output functions of the Optimizable instances)

  • depends_on – (Alternative initialization) Instead of specifying funcs_in, one could specify the Optimizable objects

  • opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object
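The residual definition above can be sketched with plain NumPy (an illustration of the formula only, not simsopt's implementation; the array values are hypothetical):

```python
import numpy as np

# Hypothetical inputs: values of the input functions, targets, and weights.
f_in = np.array([1.0, 2.0, 3.5])
goals = np.array([1.0, 2.5, 3.0])
weights = np.array([1.0, 4.0, 2.0])

# Residuals as defined in the docstring: f_out = weight * (f_in - goal)^2
f_out = weights * (f_in - goals) ** 2
print(f_out)  # [0.  1.  0.5]
```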

classmethod from_sigma(goals: Union[Real, Sequence[Real], NDArray[Any, float64]], sigma: Union[Real, Sequence[Real], NDArray[Any, float64]], funcs_in: Optional[Sequence[Callable]] = None, depends_on: Optional[Union[Optimizable, Sequence[Optimizable]]] = None, opt_return_fns: Optional[Union[Sequence, Sequence[Sequence[str]]]] = None, fail: Union[None, float] = 1000000000000.0) LeastSquaresProblem

Define the LeastSquaresProblem with

\[\begin{split}\sigma = 1/\sqrt{weight}, \text{so} \\ f_{out} = \left(\frac{f_{in} - goal}{\sigma}\right) ^ 2.\end{split}\]
Parameters
  • goals – Targets for residuals in optimization

  • sigma – The inverse square root of the weight associated with each residual

  • funcs_in – Input functions (generally one of the output functions of the Optimizable instances)

  • depends_on – (Alternative initialization) Instead of specifying funcs_in, one could specify the Optimizable objects

  • opt_return_fns – (Alternative initialization) If using depends_on, specify the return functions associated with each Optimizable object
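As a quick numerical check of the relation above (pure NumPy, independent of simsopt; the values are fabricated):

```python
import numpy as np

f_in = np.array([1.2, 0.8])
goals = np.array([1.0, 1.0])
sigma = np.array([0.5, 2.0])

# weight = 1 / sigma^2, so both formulations give the same residuals
weights = 1.0 / sigma ** 2
via_weight = weights * (f_in - goals) ** 2
via_sigma = ((f_in - goals) / sigma) ** 2
print(np.allclose(via_weight, via_sigma))  # True
```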

classmethod from_tuples(tuples: Sequence[Tuple[Callable, Real, Real]], fail: Union[None, float] = 1000000000000.0) LeastSquaresProblem

Initializes a graph-based LeastSquaresProblem from a sequence of tuples, each containing f_in, goal, and weight.

Parameters

  • tuples – A sequence of tuples, each containing (f_in, goal, weight); the order within each tuple matters.
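The semantics of the (f_in, goal, weight) tuples can be sketched with a standalone helper (the helper name is hypothetical and this is not simsopt code, only an illustration of how the tuples combine into a least-squares sum):

```python
def least_squares_from_tuples(tuples, x):
    """Evaluate sum of weight * (f_in(x) - goal)^2 over (f_in, goal, weight) tuples."""
    return sum(w * (f(x) - g) ** 2 for f, g, w in tuples)

# Hypothetical input functions of a single degree of freedom x
tuples = [
    (lambda x: x, 1.0, 1.0),       # drive x toward 1
    (lambda x: x ** 2, 4.0, 0.5),  # drive x^2 toward 4
]
print(least_squares_from_tuples(tuples, 2.0))  # 1.0
```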

objective(x=None, *args, **kwargs)

Return the least squares sum

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments

residuals(x=None, *args, **kwargs)

Return the weighted residuals

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments

return_fn_map: Dict[str, Callable] = {'objective': <function LeastSquaresProblem.objective>, 'residuals': <function LeastSquaresProblem.residuals>}
unweighted_residuals(x=None, *args, **kwargs)

Return the unweighted residuals (f_in - goal)

Parameters
  • x – Degrees of freedom or state

  • args – Any additional arguments

  • kwargs – Keyword arguments

class simsopt.objectives.MPIObjective(objectives, comm, needs_splitting=False)

Bases: Optimizable

J()
__init__(objectives, comm, needs_splitting=False)

Compute the mean of a list of objectives in parallel using MPI.

Parameters
  • objectives – A python list of objectives that provide .J() and .dJ() functions.

  • comm – The MPI communicator to use.

  • needs_splitting – If set to True, the list of objectives is split into disjoint partitions and each MPI rank works on only one partition. If set to False, the user is assumed to have constructed the list so that it contains only the objectives relevant to that MPI rank.
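The splitting described for needs_splitting=True can be sketched with a plain partition function (a hypothetical helper, shown without MPI; simsopt's actual partitioning scheme may differ, but the partitions must be disjoint and cover the list):

```python
def split_objectives(objectives, n_ranks, rank):
    """Assign a disjoint, round-robin slice of the objectives list to each rank."""
    return objectives[rank::n_ranks]

objs = ["J0", "J1", "J2", "J3", "J4"]
parts = [split_objectives(objs, 2, r) for r in range(2)]
print(parts)  # [['J0', 'J2', 'J4'], ['J1', 'J3']]
```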

dJ(*args, partials=False, **kwargs)
class simsopt.objectives.QuadraticPenalty(obj, cons=0.0, f='min')

Bases: Optimizable

J()
__init__(obj, cons=0.0, f='min')

A quadratic penalty function of the form \(0.5f(\text{obj}.J() - \text{cons})^2\) for an underlying objective obj and wrapping function f. This can be used to implement a penalty function for equality- or inequality-constrained optimization problems. The wrapping function defaults to ‘min’.

Parameters
  • obj – the underlying objective. It should provide a .J() and .dJ() function.

  • cons – constant

  • f – the function that wraps the difference \(obj-\text{cons}\). The options are ‘min’, ‘max’, or ‘identity’ which respectively return \(\min(\text{obj}-\text{cons}, 0)\), \(\max(\text{obj}-\text{cons}, 0)\), and \(\text{obj}-\text{cons}\).
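The three wrapping options can be sketched as a standalone function (an illustration of the formula only, not simsopt's implementation; obj is passed as a plain value rather than an Optimizable):

```python
def quadratic_penalty(obj_value, cons=0.0, f="min"):
    """0.5 * f(obj - cons)^2 with f one of 'min', 'max', or 'identity'."""
    diff = obj_value - cons
    if f == "min":
        diff = min(diff, 0.0)
    elif f == "max":
        diff = max(diff, 0.0)
    # 'identity' leaves diff unchanged
    return 0.5 * diff ** 2

# With f='max' only violations of obj <= cons are penalized
print(quadratic_penalty(3.0, cons=2.0, f="max"))  # 0.5
print(quadratic_penalty(1.0, cons=2.0, f="max"))  # 0.0
```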

dJ(*args, partials=False, **kwargs)
return_fn_map: Dict[str, Callable] = {'J': <function QuadraticPenalty.J>, 'dJ': <function derivative_dec.<locals>._derivative_dec>}
class simsopt.objectives.SquaredFlux(surface, field, target=None)

Bases: Optimizable

Objective representing the quadratic flux of a field on a surface, that is

\[\frac12 \int_{S} (\mathbf{B}\cdot \mathbf{n} - B_T)^2 ds\]

where \(\mathbf{n}\) is the surface unit normal vector and \(B_T\) is an optional (zero by default) target value for the magnetic field.

Parameters
  • surface – A simsopt.geo.surface.Surface object on which to compute the flux

  • field – A simsopt.field.magneticfield.MagneticField for which to compute the flux.

  • target – An nphi x ntheta numpy array containing target values for the flux, where nphi and ntheta are the numbers of quadrature points on surface in the phi and theta directions.
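The integral can be approximated as a quadrature sum over a uniform (phi, theta) grid. A minimal NumPy sketch (not simsopt's implementation; the normal field component and area element here are fabricated test data):

```python
import numpy as np

nphi, ntheta = 16, 16
phi = np.linspace(0, 2 * np.pi, nphi, endpoint=False)
theta = np.linspace(0, 2 * np.pi, ntheta, endpoint=False)
P, T = np.meshgrid(phi, theta, indexing="ij")

# Fabricated B.n on the surface and a constant area element per point
Bn = np.sin(T)          # normal field component, nphi x ntheta
B_target = 0.0          # default (zero) target value
ds = np.ones_like(Bn)   # area element at each quadrature point

# 0.5 * integral of (B.n - B_T)^2 ds, discretized as a quadrature sum
dphi, dtheta = 2 * np.pi / nphi, 2 * np.pi / ntheta
J = 0.5 * np.sum((Bn - B_target) ** 2 * ds) * dphi * dtheta
print(J)  # pi**2, approx 9.8696
```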

J()
dJ(*args, partials=False, **kwargs)
class simsopt.objectives.Weight(value)

Bases: object

simsopt.objectives.forward_backward(P, L, U, rhs, iterative_refinement=False)

Solve a linear system of the form (PLU)^T*adj = rhs for adj.

Parameters
  • P – permutation matrix

  • L – lower triangular matrix

  • U – upper triangular matrix

  • rhs – right-hand side of the linear system

  • iterative_refinement – when True, applies iterative refinement, which can improve the accuracy of the computed solution when the matrix is ill-conditioned.
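Since \((PLU)^T = U^T L^T P^T\), the solve amounts to a forward substitution with \(U^T\) (lower triangular), a back substitution with \(L^T\) (upper triangular), and an application of the permutation. A NumPy sketch on a small fabricated system (not simsopt's implementation, and omitting the refinement step; np.linalg.solve is used for brevity in place of dedicated triangular solvers):

```python
import numpy as np

def forward_backward_sketch(P, L, U, rhs):
    """Solve (P L U)^T adj = rhs, i.e. U^T L^T P^T adj = rhs."""
    z = np.linalg.solve(U.T, rhs)  # forward-substitution step: U^T is lower triangular
    w = np.linalg.solve(L.T, z)    # back-substitution step: L^T is upper triangular
    # P^T adj = w and P is an orthogonal permutation matrix, so adj = P w
    return P @ w

# Small fabricated LU factors
P = np.array([[0.0, 1.0], [1.0, 0.0]])
L = np.array([[1.0, 0.0], [0.5, 1.0]])
U = np.array([[2.0, 1.0], [0.0, 3.0]])
rhs = np.array([1.0, 2.0])

adj = forward_backward_sketch(P, L, U, rhs)
print(np.allclose((P @ L @ U).T @ adj, rhs))  # True
```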