simsopt.solve package

simsopt.solve.GPMO(pm_opt, algorithm='baseline', **kwargs)

GPMO is a greedy algorithm for the permanent magnet optimization problem.

GPMO is an alternative to the relax-and-split algorithm. Full-strength magnets are placed one by one so as to minimize the MSE (fB). The function accepts a number of keyword arguments that enable basic backtracking (error correction) and placing magnets together so that no isolated magnets occur.

Parameters:
  • pm_opt – The grid of permanent magnets to optimize. PermanentMagnetGrid instance

  • algorithm

    The type of greedy algorithm to use. Options are

    baseline:

    the simple implementation of GPMO.

    multi:

    GPMO, but placing multiple magnets per iteration.

    backtracking:

    backtrack every few hundred iterations to improve the solution.

    ArbVec:

    the simple implementation of GPMO, but with arbitrarily oriented polarization vectors.

    ArbVec_backtracking:

    same as ‘ArbVec’, but with backtracking.

    The easiest algorithm to use is ‘baseline’; the most effective is ‘ArbVec_backtracking’.

  • kwargs

    Keyword arguments for the GPMO algorithm and its variants. The following variables can be passed:

    K: integer

    Maximum number of GPMO iterations to run.

    nhistory: integer

    Every ‘nhistory’ iterations, the loss terms are recorded.

    Nadjacent: integer

    Number of neighbor cells to consider ‘adjacent’ to one another, for the purposes of placing multiple magnets or doing backtracking. Not to be used with ‘baseline’ and ‘ArbVec’.

    dipole_grid_xyz: 2D numpy array, shape (ndipoles, 3).

    XYZ coordinates of the permanent magnet locations. Needed for figuring out which permanent magnets are adjacent to one another. Not a keyword argument for ‘baseline’ and ‘ArbVec’.

    max_nMagnets: integer.

    Maximum number of magnets to place before algorithm quits. Only a keyword argument for ‘backtracking’ and ‘ArbVec_backtracking’ since without any backtracking, this is the same parameter as ‘K’.

    backtracking: integer.

    Every ‘backtracking’ iterations, a backtracking is performed to remove suboptimal magnets. Only a keyword argument for ‘backtracking’ and ‘ArbVec_backtracking’ algorithms.

    thresh_angle: float.

    If the angle between adjacent dipole moments > thresh_angle, these dipoles are considered suboptimal and liable to be removed during a backtracking step. Only a keyword argument for the ‘ArbVec_backtracking’ algorithm.

    single_direction: int, must be = 0, 1, or 2.

    Specify to only place magnets with orientations in a single direction, e.g. only magnets pointing in the ±x direction. Keyword argument only for ‘baseline’, ‘multi’, and ‘backtracking’, since the ‘ArbVec…’ algorithms use local coordinate systems and can express this constraint (and much more) via the ‘pol_vectors’ argument.

    reg_l2: float.

    L2 regularization value, applied through the mmax argument in the GPMO algorithm. See the paper for how this works.

    verbose: bool.

    If True, print out the algorithm progress every ‘nhistory’ iterations. Also needed to record the algorithm history.

Returns:

Tuple of (errors, Bn_errors, m_history)

errors:

Total optimization loss values, recorded every ‘nhistory’ iterations.

Bn_errors:

|Bn| errors, recorded every ‘nhistory’ iterations.

m_history:

Solution for the permanent magnets, recorded every ‘nhistory’ iterations.
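
A minimal usage sketch (hedged: pm_opt is assumed to be an already-configured PermanentMagnetGrid, and the keyword values are illustrative rather than recommended defaults):

    from simsopt.solve import GPMO

    # 'baseline' needs no adjacency information; K sets the iteration budget
    # and nhistory controls how often the loss terms are recorded.
    kwargs = {"K": 20000, "nhistory": 100, "verbose": True}
    errors, Bn_errors, m_history = GPMO(pm_opt, algorithm="baseline", **kwargs)

The ‘backtracking’ and ‘ArbVec_backtracking’ variants additionally require dipole_grid_xyz and Nadjacent, as described above.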

simsopt.solve.constrained_mpi_solve(prob: ConstrainedProblem, mpi: MpiPartition, grad: bool = False, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', opt_method: str = 'SLSQP', options: dict | None = None)

Solve a constrained minimization problem using MPI. All MPI processes (including group leaders and workers) should call this function.

Parameters:
  • prob – ConstrainedProblem object defining the objective function, parameter space, and constraints.

  • mpi – A MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other values, an error is raised.

  • opt_method – Constrained solver to use: One of "SLSQP", "trust-constr", or "COBYLA". Use "COBYLA" for derivative-free optimization. See scipy.optimize.minimize for a description of the methods.

  • options – A dict passed as the options keyword to scipy.optimize.minimize.
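
A hedged sketch of the calling pattern (all MPI processes run this script; f_obj and f_constr are hypothetical scalar Optimizable functions, and the ConstrainedProblem constructor call assumes its nonlinear-constraint tuple interface):

    from simsopt.util import MpiPartition
    from simsopt.objectives import ConstrainedProblem
    from simsopt.solve import constrained_mpi_solve

    mpi = MpiPartition()  # divide the MPI pool into worker groups
    # Constrain f_constr to lie in [-1, 1] while minimizing f_obj.
    prob = ConstrainedProblem(f_obj, tuples_nlc=[(f_constr, -1.0, 1.0)])
    constrained_mpi_solve(prob, mpi, grad=True, opt_method="SLSQP",
                          options={"maxiter": 200})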

simsopt.solve.constrained_serial_solve(prob: ConstrainedProblem, grad: bool | None = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', opt_method: str = 'SLSQP', options: dict | None = None)

Solve a constrained minimization problem using scipy.optimize, and without using any parallelization.

Parameters:
  • prob – ConstrainedProblem object defining the objective function, parameter space, and constraints.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other settings, an error is raised.

  • opt_method – Constrained solver to use: one of "SLSQP", "trust-constr", or "COBYLA". Use "COBYLA" for derivative-free optimization. See scipy.optimize.minimize for a description of the methods.

  • options – A dict passed as the options keyword to scipy.optimize.minimize.
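
The serial variant follows the same pattern without an MpiPartition (again a hedged sketch, with f_obj and f_constr hypothetical Optimizable functions):

    from simsopt.objectives import ConstrainedProblem
    from simsopt.solve import constrained_serial_solve

    prob = ConstrainedProblem(f_obj, tuples_nlc=[(f_constr, -1.0, 1.0)])
    constrained_serial_solve(prob, grad=True, opt_method="trust-constr",
                             options={"maxiter": 200})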

simsopt.solve.least_squares_mpi_solve(prob: LeastSquaresProblem, mpi: MpiPartition, grad: bool = False, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', **kwargs)

Solve a nonlinear-least-squares minimization problem using MPI. All MPI processes (including group leaders and workers) should call this function.

Parameters:
  • prob – LeastSquaresProblem object defining the objective function(s) and parameter space.

  • mpi – A MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other values, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100. Or, you can supply method to choose the optimization algorithm.
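
A hedged sketch of the usual MPI calling pattern (vmec stands in for any Optimizable providing the target quantities; every process constructs the problem and calls the solver):

    from simsopt.util import MpiPartition
    from simsopt.objectives import LeastSquaresProblem
    from simsopt.solve import least_squares_mpi_solve

    mpi = MpiPartition(ngroups=4)  # split the MPI pool into 4 worker groups
    # Each tuple is (function, goal, weight): target aspect ratio 7 with weight 1.
    prob = LeastSquaresProblem.from_tuples([(vmec.aspect, 7.0, 1.0)])
    least_squares_mpi_solve(prob, mpi, grad=True, diff_method="forward",
                            max_nfev=100)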

simsopt.solve.least_squares_serial_solve(prob: LeastSquaresProblem, grad: bool | None = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', **kwargs)

Solve a nonlinear-least-squares minimization problem using scipy.optimize, and without using any parallelization.

Parameters:
  • prob – LeastSquaresProblem object defining the objective function(s) and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other values, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100. Or, you can supply method to choose the optimization algorithm.
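
The serial variant is the same minus the MpiPartition (hedged sketch; vmec again stands in for any Optimizable supplying the targets):

    from simsopt.objectives import LeastSquaresProblem
    from simsopt.solve import least_squares_serial_solve

    prob = LeastSquaresProblem.from_tuples([(vmec.aspect, 7.0, 1.0)])
    least_squares_serial_solve(prob, grad=False, max_nfev=100)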

simsopt.solve.relax_and_split(pm_opt, m0=None, **kwargs)

Uses a relax-and-split algorithm to solve the permanent magnet optimization problem, splitting it into a convex part and a nonconvex part that are solved separately.

Defaults to the MwPGP convex step and no nonconvex step. If a nonconvexity is specified, the associated prox function must be defined in this file. Relax-and-split allows for speedy algorithms for both steps and the imposition of convex equality and inequality constraints (including the required constraint on the strengths of the dipole moments).

Parameters:
  • pm_opt – The grid of permanent magnets to optimize.

  • m0 – Initial guess for the permanent magnet dipole moments. Defaults to a starting guess of all zeros. This vector must lie in the hypersurface spanned by the L2 ball constraints. Note that if the algorithm is used properly, the end result should be independent of the choice of initial condition.

  • kwargs

    Keyword arguments to pass to the algorithm. The following arguments can be passed to the MwPGP algorithm:

    epsilon:

    Error tolerance for the convex part of the algorithm (MwPGP).

    nu:

    Hyperparameter used for the relax-and-split least-squares. Set nu >> 1 to reduce the importance of nonconvexity in the problem.

    reg_l0:

    Regularization value for the L0 nonconvex term in the optimization. This value is automatically scaled based on the max dipole moment values, so that reg_l0 = 1 corresponds to reg_l0 = np.max(m_maxima). It follows that users should choose reg_l0 in [0, 1].

    reg_l1:

    Regularization value for the L1 nonsmooth term in the optimization.

    reg_l2:

    Regularization value for any convex regularizers in the optimization problem, such as the often-used L2 norm.

    max_iter_MwPGP:

    Maximum iterations to perform during a run of the convex part of the relax-and-split algorithm (MwPGP).

    max_iter_RS:

    Maximum number of iterations of the overall relax-and-split algorithm; this is therefore also the number of times MwPGP is called and the number of times a prox operator is computed.

    verbose:

    Prints out all the loss term errors separately.

Returns:

A tuple of optimization loss, solution at each step, and sparse solution.

The tuple contains

errors:

Total optimization loss after each convex sub-problem is solved.

m_history:

Solution for the permanent magnets after each convex sub-problem is solved.

m_proxy_history:

Sparse solution for the permanent magnets after each convex sub-problem is solved.
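
A hedged usage sketch (pm_opt is an already-configured PermanentMagnetGrid; the keyword values are illustrative only):

    from simsopt.solve import relax_and_split

    errors, m_history, m_proxy_history = relax_and_split(
        pm_opt,
        m0=None,              # start from all-zero dipole moments
        reg_l0=0.1,           # L0 regularization; sensible values lie in [0, 1]
        nu=1e2,               # relax-and-split hyperparameter (nu >> 1 de-emphasizes the nonconvex term)
        max_iter_MwPGP=1000,  # iterations per convex (MwPGP) sub-problem
        max_iter_RS=20,       # outer relax-and-split iterations
    )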

simsopt.solve.serial_solve(prob: Optimizable | Callable, grad: bool | None = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'centered', **kwargs)

Solve a general minimization problem (i.e. one that need not be of least-squares form) using scipy.optimize.minimize, and without using any parallelization.

Parameters:
  • prob – Optimizable object defining the objective function(s) and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-based algorithm will be used if prob has gradient information available; otherwise a gradient-free algorithm will be used by default. If you set grad=True for a problem in which gradient information is not available, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other values, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.minimize. For instance, you can supply method to choose the optimization algorithm, or options (e.g. options={'maxiter': 100}) to control solver-specific settings such as the maximum number of iterations.
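
A hedged sketch (obj stands in for an Optimizable, or a plain callable of the degrees of freedom, returning the scalar objective; the pass-through keywords shown are standard scipy.optimize.minimize arguments):

    from simsopt.solve import serial_solve

    serial_solve(obj, grad=True, diff_method="centered",
                 method="L-BFGS-B", options={"maxiter": 100})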