simsopt.solve package

Submodules

simsopt.solve.graph_mpi module

This module provides two main functions, fd_jac_mpi() and least_squares_mpi_solve(). Also included are some functions that help in the operation of these main functions.

simsopt.solve.graph_mpi._mpi_workers_task(mpi: simsopt.util.mpi.MpiPartition, prob: simsopt._core.graph_optimizable.Optimizable)

This function is called by worker processes when MpiPartition.workers_loop() receives a signal to do something.

Parameters
  • mpi – A simsopt.util.mpi.MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.

  • prob – Optimizable object defining the objective function(s) and parameter space.

simsopt.solve.graph_mpi.least_squares_mpi_solve(prob: simsopt.objectives.graph_least_squares.LeastSquaresProblem, mpi: simsopt.util.mpi.MpiPartition, grad: bool = False, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', **kwargs)

Solve a nonlinear-least-squares minimization problem using MPI. All MPI processes (including group leaders and workers) should call this function.

Parameters
  • prob – Optimizable object defining the objective function(s) and parameter space.

  • mpi – A MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. Otherwise, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100. Or, you can supply method to choose the optimization algorithm.
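
As a point of reference, here is a minimal sketch of how this function might be driven. The toy problem built from Identity objects follows the pattern used in the simsopt documentation; the construction of prob is purely illustrative, and the script is assumed to be launched under mpiexec so that several MPI processes are available.

    # Run with, e.g.: mpiexec -n 4 python example.py
    from simsopt.util.mpi import MpiPartition
    from simsopt.objectives.functions import Identity
    from simsopt.objectives.graph_least_squares import LeastSquaresProblem
    from simsopt.solve.graph_mpi import least_squares_mpi_solve

    # Toy objective: drive three Identity dofs toward the targets 1, 2, and 3.
    iden1, iden2, iden3 = Identity(), Identity(), Identity()
    prob = LeastSquaresProblem.from_tuples(
        [(iden1.f, 1.0, 1.0), (iden2.f, 2.0, 1.0), (iden3.f, 3.0, 1.0)])

    # Split the MPI processes into two worker groups. Every process,
    # group leaders and workers alike, must call the solver.
    mpi = MpiPartition(ngroups=2)
    least_squares_mpi_solve(prob, mpi, grad=True, diff_method="centered")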

simsopt.solve.graph_serial module

This module provides the least_squares_serial_solve function, as well as the serial_solve function for general optimization problems.

simsopt.solve.graph_serial.least_squares_serial_solve(prob: simsopt.objectives.graph_least_squares.LeastSquaresProblem, grad: Optional[bool] = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', **kwargs)

Solve a nonlinear-least-squares minimization problem using scipy.optimize, and without using any parallelization.

Parameters
  • prob – LeastSquaresProblem object defining the objective function(s) and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. Otherwise, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100. Or, you can supply method to choose the optimization algorithm.
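
For example, a serial solve of a toy problem (again using the Identity pattern from the simsopt documentation; the problem construction is illustrative only) might look like this:

    from simsopt.objectives.functions import Identity
    from simsopt.objectives.graph_least_squares import LeastSquaresProblem
    from simsopt.solve.graph_serial import least_squares_serial_solve

    # Toy objective: drive two Identity dofs toward the targets 1 and 2.
    iden1, iden2 = Identity(), Identity()
    prob = LeastSquaresProblem.from_tuples(
        [(iden1.f, 1.0, 1.0), (iden2.f, 2.0, 1.0)])

    # max_nfev is passed through to scipy.optimize.least_squares.
    least_squares_serial_solve(prob, max_nfev=100)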

simsopt.solve.graph_serial.serial_solve(prob: Union[simsopt._core.graph_optimizable.Optimizable, Callable], grad: Optional[bool] = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'centered', **kwargs)

Solve a general minimization problem (i.e. one that need not be of least-squares form) using scipy.optimize.minimize, and without using any parallelization.

Parameters
  • prob – Optimizable object defining the objective function(s) and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-based algorithm will be used if prob has gradient information available; otherwise a gradient-free algorithm will be used by default. If you set grad=True for a problem in which gradient information is not available, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. Otherwise, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.minimize. For instance, you can supply method to choose the optimization algorithm, or options={'maxiter': 100} to limit the number of iterations.
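
A sketch of a call might look as follows; here obj stands for any Optimizable with a scalar objective (its construction is problem-specific and elided), and both keyword arguments are simply forwarded to scipy.optimize.minimize:

    from simsopt.solve.graph_serial import serial_solve

    # `obj` is assumed to be an Optimizable (or callable) whose objective
    # returns a scalar; building one is problem-specific and elided here.
    serial_solve(obj, grad=False,
                 method="Nelder-Mead",       # choose the scipy algorithm
                 options={"maxiter": 200})   # cap the number of iterations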

simsopt.solve.mpi module

This module provides two main functions, fd_jac_mpi() and least_squares_mpi_solve(). Also included are some functions that help in the operation of these main functions.

simsopt.solve.mpi._mpi_leaders_task(mpi, dofs, data)

This function is called by group leaders when MpiPartition.leaders_loop() receives a signal to do something.

This function must accept a "data" argument, but since there is only one task for the leaders to perform, the value is not used.

simsopt.solve.mpi._mpi_workers_task(mpi, dofs, data)

This function is called by worker processes when MpiPartition.workers_loop() receives a signal to do something.

simsopt.solve.mpi.fd_jac_mpi(dofs: simsopt._core.dofs.Dofs, mpi: simsopt.util.mpi.MpiPartition, x: Optional[numpy.ndarray] = None) → tuple

Compute the finite-difference Jacobian of the functions in dofs with respect to all non-fixed degrees of freedom. Parallel function evaluations will be used.

The attributes abs_step, rel_step, and diff_method of the Dofs object will be queried and used to set the finite difference step sizes, using simsopt._core.util.finite_difference_steps().

If the argument x is not supplied, the Jacobian will be evaluated for the present state vector. If x is supplied, then set_dofs() will first be called for each object to set the global state vector to x.

There are two ways to call this function. In the first, all processes (including workers) call this function (so mpi.is_apart is False); in this case, the worker loop will be started automatically. In the second, the worker loop has already been started before this function is called, as would be the case in least_squares_mpi_solve(); then only the group leaders call this function.

Parameters
  • dofs – The map from \(\mathbb{R}^n \to \mathbb{R}^m\) for which you want to compute the Jacobian.

  • mpi – A simsopt.util.mpi.MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.

  • x – The 1D state vector at which you wish to evaluate the Jacobian. If None, the Jacobian will be evaluated at the present state vector.

Returns

tuple containing

  • jac (numpy.ndarray) – The Jacobian matrix.

  • xmat (numpy.ndarray) – A matrix, the columns of which give all the values of x at which the functions were evaluated.

  • fmat (numpy.ndarray) – A matrix, the columns of which give the corresponding values of the functions.
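
A sketch of the first calling method (all processes call the function), assuming dofs is a Dofs object wrapping the functions of interest; its construction is elided here:

    from simsopt.util.mpi import MpiPartition
    from simsopt.solve.mpi import fd_jac_mpi

    # `dofs` is assumed to be a simsopt._core.dofs.Dofs object built elsewhere.
    mpi = MpiPartition()
    jac, xmat, fmat = fd_jac_mpi(dofs, mpi)
    # jac is the finite-difference Jacobian; the columns of xmat record the
    # state vectors at which the functions were evaluated, and the columns
    # of fmat the corresponding function values.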

simsopt.solve.mpi.least_squares_mpi_solve(prob: simsopt.objectives.least_squares.LeastSquaresProblem, mpi: simsopt.util.mpi.MpiPartition, grad: bool = None, **kwargs)

Solve a nonlinear-least-squares minimization problem using MPI. All MPI processes (including group leaders and workers) should call this function.

Parameters
  • prob – An instance of LeastSquaresProblem, defining the objective function(s) and parameter space.

  • mpi – A simsopt.util.mpi.MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-based algorithm will be used if prob has gradient information available, otherwise a gradient-free algorithm will be used by default. If you set grad=True for a problem in which gradient information is not available, finite-difference gradients will be used.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100. Or, you can supply method to choose the optimization algorithm.
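
As a brief sketch (with prob a LeastSquaresProblem constructed elsewhere), the keyword arguments pass straight through to scipy.optimize.least_squares:

    from simsopt.util.mpi import MpiPartition
    from simsopt.solve.mpi import least_squares_mpi_solve

    # Three worker groups evaluate finite-difference gradient terms in parallel.
    mpi = MpiPartition(ngroups=3)
    # `prob` is assumed to be a LeastSquaresProblem built elsewhere.
    least_squares_mpi_solve(prob, mpi, grad=True,
                            method="lm",     # Levenberg-Marquardt
                            max_nfev=100)    # both forwarded to scipy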

simsopt.solve.serial module

This module provides functions for solving least-squares and general optimization problems, without parallelization in the optimization algorithm itself, and without parallelized finite-difference gradients. These functions can still be used for cases in which there is parallelization within the objective function evaluations. They are essentially interfaces between a simsopt.objectives.least_squares.LeastSquaresProblem object and scipy.optimize.least_squares. The functions here also create a log file with the history of the objective function evaluations.

If you want parallelized finite-difference gradient evaluations, you should instead use simsopt.solve.mpi.least_squares_mpi_solve(). If not, the functions here may be preferable due to their greater simplicity.

simsopt.solve.serial.least_squares_serial_solve(prob: simsopt.objectives.least_squares.LeastSquaresProblem, grad: bool = None, **kwargs)

Solve a nonlinear-least-squares minimization problem.

Parameters
  • prob – An instance of LeastSquaresProblem, defining the objective function(s) and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-based algorithm will be used if prob has gradient information available, otherwise a gradient-free algorithm will be used by default. If you set grad=True for a problem in which gradient information is not available, finite-difference gradients will be used.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100.

simsopt.solve.serial.serial_solve(prob, grad=None, **kwargs)

Solve a general minimization problem (i.e. one that need not be of least-squares form) using scipy.optimize.minimize, and without using any parallelization.

prob should be a simsopt problem.

kwargs allows you to pass any arguments to scipy.optimize.minimize.
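
For example (with prob a simsopt problem built elsewhere), one might select a gradient-free algorithm:

    from simsopt.solve.serial import serial_solve

    # `prob` is assumed to be a simsopt problem object constructed elsewhere.
    serial_solve(prob, grad=False, method="Nelder-Mead")  # kwargs go to scipy.optimize.minimize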

Module contents

simsopt.solve.least_squares_serial_solve(prob: simsopt.objectives.graph_least_squares.LeastSquaresProblem, grad: Optional[bool] = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'forward', **kwargs)

Solve a nonlinear-least-squares minimization problem using scipy.optimize, and without using any parallelization.

Parameters
  • prob – LeastSquaresProblem object defining the objective function(s) and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-free algorithm will be used by default. If you set grad=True, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. Otherwise, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100. Or, you can supply method to choose the optimization algorithm.

simsopt.solve.serial_solve(prob: Union[simsopt._core.graph_optimizable.Optimizable, Callable], grad: Optional[bool] = None, abs_step: float = 1e-07, rel_step: float = 0.0, diff_method: str = 'centered', **kwargs)

Solve a general minimization problem (i.e. one that need not be of least-squares form) using scipy.optimize.minimize, and without using any parallelization.

Parameters
  • prob – Optimizable object defining the objective function(s) and parameter space.

  • grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-based algorithm will be used if prob has gradient information available; otherwise a gradient-free algorithm will be used by default. If you set grad=True for a problem in which gradient information is not available, finite-difference gradients will be used.

  • abs_step – Absolute step size for finite difference jac evaluation

  • rel_step – Relative step size for finite difference jac evaluation

  • diff_method – Differentiation strategy. Options are "centered" and "forward". If "centered", centered finite differences will be used. If "forward", one-sided finite differences will be used. Otherwise, an error is raised.

  • kwargs – Any arguments to pass to scipy.optimize.minimize. For instance, you can supply method to choose the optimization algorithm, or options={'maxiter': 100} to limit the number of iterations.