simsopt.solve package

simsopt.solve.constrained_mpi_solve(prob: ConstrainedProblem, mpi: MpiPartition, grad: bool = False, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', opt_method: str = 'SLSQP', options: dict | None = None)

Solve a constrained minimization problem using MPI. All MPI processes (including group leaders and workers) should call this function.

Parameters:
  • prob – ConstrainedProblem object defining the objective function, parameter space, and constraints.

  • mpi – A MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite-difference Jacobian evaluation.

  • rel_step – Relative step size for finite-difference Jacobian evaluation.

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other values, an error is raised.

  • opt_method – Constrained solver to use: one of "SLSQP", "trust-constr", or "COBYLA". Use "COBYLA" for derivative-free optimization. See scipy.optimize.minimize for a description of the methods.

  • options – dict of options passed to scipy.optimize.minimize.
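
For example, the following sketch shows the intended call pattern. The toy objective class, the (func, lhs, rhs) form of the nonlinear-constraint tuples, and the import paths are assumptions based on recent simsopt releases, not guarantees of this API; the script must be launched under mpirun so that every process reaches the solve call.

    import numpy as np
    from simsopt._core import Optimizable
    from simsopt.objectives import ConstrainedProblem
    from simsopt.util import MpiPartition
    from simsopt.solve import constrained_mpi_solve

    # Hypothetical one-parameter objective, used only for illustration.
    class Line(Optimizable):
        def __init__(self):
            super().__init__(x0=np.array([0.0]), names=["x"])

        def f(self):
            return (self.x[0] - 3.0) ** 2  # unconstrained minimum at x = 3

        def g(self):
            return self.x[0]  # constrained quantity

    obj = Line()
    # Assumed constraint form: lhs <= g(x) <= rhs, here 0 <= x <= 2.
    prob = ConstrainedProblem(obj.f, tuples_nlc=[(obj.g, 0.0, 2.0)])

    # Split the MPI pool into 2 worker groups for parallel
    # finite-difference gradient evaluations.
    mpi = MpiPartition(2)

    # Every MPI process (group leaders and workers) makes this call:
    constrained_mpi_solve(prob, mpi, grad=True, opt_method="SLSQP",
                          options={"maxiter": 50})

Launched as, e.g., mpirun -n 4 python example.py, the solver should stop at the constraint boundary x = 2.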

simsopt.solve.constrained_serial_solve(prob: ConstrainedProblem, grad: bool | None = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', opt_method: str = 'SLSQP', options: dict | None = None)

Solve a constrained minimization problem using scipy.optimize, and without using any parallelization.

Parameters:
  • prob – ConstrainedProblem object defining the objective function, parameter space, and constraints.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite-difference Jacobian evaluation.

  • rel_step – Relative step size for finite-difference Jacobian evaluation.

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other settings, an error is raised.

  • opt_method – Constrained solver to use: one of "SLSQP", "trust-constr", or "COBYLA". Use "COBYLA" for derivative-free optimization. See scipy.optimize.minimize for a description of the methods.

  • options – dict of options passed to scipy.optimize.minimize.
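
A serial sketch under the same assumptions about the constraint-tuple form and import paths as in the MPI example above; here the toy problem is minimizing (x - 1)^2 + (y - 2)^2 subject to x + y <= 2:

    import numpy as np
    from simsopt._core import Optimizable
    from simsopt.objectives import ConstrainedProblem
    from simsopt.solve import constrained_serial_solve

    # Hypothetical two-parameter objective, used only for illustration.
    class Quadratic(Optimizable):
        def __init__(self):
            super().__init__(x0=np.zeros(2), names=["x", "y"])

        def f(self):
            x, y = self.x
            return (x - 1.0) ** 2 + (y - 2.0) ** 2

        def g(self):
            return float(np.sum(self.x))  # x + y

    obj = Quadratic()
    # Nonlinear constraint -inf <= g(x, y) <= 2, i.e. x + y <= 2:
    prob = ConstrainedProblem(obj.f, tuples_nlc=[(obj.g, -np.inf, 2.0)])
    constrained_serial_solve(prob, grad=True, opt_method="SLSQP")
    print(prob.x)  # expected: approximately [0.5, 1.5]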

simsopt.solve.least_squares_mpi_solve(prob: LeastSquaresProblem, mpi: MpiPartition, grad: bool = False, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', **kwargs)

Solve a nonlinear-least-squares minimization problem using MPI. All MPI processes (including group leaders and workers) should call this function.

Parameters:
  • prob – LeastSquaresProblem object defining the objective function(s) and parameter space.

  • mpi – A MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite-difference Jacobian evaluation.

  • rel_step – Relative step size for finite-difference Jacobian evaluation.

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other values, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100. Or, you can supply method to choose the optimization algorithm.
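
For example, the sketch below builds a toy least-squares problem from (function, goal, weight) tuples and solves it in parallel. The Point class is hypothetical and for illustration only; the from_tuples construction and import paths follow recent simsopt releases, and the script must run under mpirun so that all processes enter the function.

    import numpy as np
    from simsopt._core import Optimizable
    from simsopt.objectives import LeastSquaresProblem
    from simsopt.util import MpiPartition
    from simsopt.solve import least_squares_mpi_solve

    # Hypothetical two-parameter Optimizable, used only for illustration.
    class Point(Optimizable):
        def __init__(self):
            super().__init__(x0=np.zeros(2), names=["x", "y"])

        def fx(self):
            return self.x[0]

        def fy(self):
            return self.x[1]

    p = Point()
    # Objective: 1*(fx - 3)^2 + 2*(fy + 1)^2, from (function, goal, weight):
    prob = LeastSquaresProblem.from_tuples([(p.fx, 3.0, 1.0),
                                            (p.fy, -1.0, 2.0)])

    # 3 worker groups evaluate finite-difference steps concurrently;
    # every MPI process must reach this call.
    mpi = MpiPartition(3)
    least_squares_mpi_solve(prob, mpi, grad=True,
                            diff_method="centered", max_nfev=50)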

simsopt.solve.least_squares_serial_solve(prob: LeastSquaresProblem, grad: bool | None = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', **kwargs)

Solve a nonlinear-least-squares minimization problem using scipy.optimize, and without using any parallelization.

Parameters:
  • prob – LeastSquaresProblem object defining the objective function(s) and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite-difference Jacobian evaluation.

  • rel_step – Relative step size for finite-difference Jacobian evaluation.

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other values, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100. Or, you can supply method to choose the optimization algorithm.
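
A minimal serial sketch of the same pattern (the toy Scalar class is hypothetical; from_tuples and the import paths follow recent simsopt releases):

    import numpy as np
    from simsopt._core import Optimizable
    from simsopt.objectives import LeastSquaresProblem
    from simsopt.solve import least_squares_serial_solve

    # Hypothetical one-parameter Optimizable, used only for illustration.
    class Scalar(Optimizable):
        def __init__(self):
            super().__init__(x0=np.array([0.0]), names=["x"])

        def f(self):
            return self.x[0]

    s = Scalar()
    # One term (function, goal, weight): minimize (f - 5)^2.
    prob = LeastSquaresProblem.from_tuples([(s.f, 5.0, 1.0)])
    least_squares_serial_solve(prob, grad=False, max_nfev=100)
    print(prob.x)  # expected: approximately [5.0]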

simsopt.solve.serial_solve(prob: Optimizable | Callable, grad: bool | None = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'centered', **kwargs)

Solve a general minimization problem (i.e. one that need not be of least-squares form) using scipy.optimize.minimize, and without using any parallelization.

Parameters:
  • prob – Optimizable object (or a callable) defining the objective function and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-based algorithm will be used if prob has gradient information available; otherwise a gradient-free algorithm will be used by default. If you set grad=True for a problem in which gradient information is not available, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite-difference Jacobian evaluation.

  • rel_step – Relative step size for finite-difference Jacobian evaluation.

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. For other values, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.minimize. For instance, you can supply method to choose the optimization algorithm, or tol to set the termination tolerance.
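
A minimal sketch, assuming (in line with the usual simsopt convention for objectives) that the scalar to minimize is exposed through the object's J() method; the Paraboloid class is hypothetical and for illustration only:

    import numpy as np
    from simsopt._core import Optimizable
    from simsopt.solve import serial_solve

    # Hypothetical scalar objective; J() returning the value to minimize
    # is an assumed convention here, matching other simsopt objectives.
    class Paraboloid(Optimizable):
        def __init__(self):
            super().__init__(x0=np.zeros(2), names=["x", "y"])

        def J(self):
            x, y = self.x
            return (x - 2.0) ** 2 + (y + 1.0) ** 2

    obj = Paraboloid()
    serial_solve(obj, grad=False)
    print(obj.x)  # expected: approximately [2.0, -1.0]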