simsopt.solve package
Submodules
simsopt.solve.mpi module
This module provides two main functions, fd_jac_mpi() and least_squares_mpi_solve(), along with several helper functions used in their operation.
simsopt.solve.mpi.fd_jac_mpi(dofs: simsopt._core.dofs.Dofs, mpi: simsopt.util.mpi.MpiPartition, x: Optional[numpy.ndarray] = None, eps: float = 1e-07, centered: bool = False) → tuple

Compute the finite-difference Jacobian of the functions in dofs with respect to all non-fixed degrees of freedom. Parallel function evaluations will be used.
If the argument x is not supplied, the Jacobian will be evaluated at the present state vector. If x is supplied, then set_dofs() will first be called for each object to set the global state vector to x.
There are two ways to call this function. In method 1, all processes (including workers) call this function (so mpi.is_apart is False); in this case, the worker loop will be started automatically. In method 2, the worker loop has already been started before this function is called, as would be the case in least_squares_mpi_solve(); then only the group leaders call this function.
- Parameters
dofs – The map from \(\mathbb{R}^n \to \mathbb{R}^m\) for which you want to compute the Jacobian.
mpi – A simsopt.util.mpi.MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.
x – The 1D state vector at which you wish to evaluate the Jacobian. If None, the Jacobian will be evaluated at the present state vector.
eps – Step size for finite differences.
centered – If True, centered finite differences will be used. If False, one-sided finite differences will be used.
- Returns
tuple containing
jac (numpy.ndarray) – The Jacobian matrix.
xmat (numpy.ndarray) – A matrix, the columns of which give all the values of x at which the functions were evaluated.
fmat (numpy.ndarray) – A matrix, the columns of which give the corresponding values of the functions.
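To illustrate the jac entry of the returned tuple, here is a serial finite-difference Jacobian computed the same way (one-sided or centered differences), but without any MPI parallelization. This is an illustrative sketch only, not the simsopt implementation: fd_jac_mpi distributes these function evaluations over the MPI worker groups.

```python
import numpy as np

def fd_jac(f, x, eps=1e-7, centered=False):
    """Finite-difference Jacobian of f: R^n -> R^m, evaluated at x.

    Serial sketch of the quantity fd_jac_mpi computes in parallel.
    """
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    jac = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        if centered:
            xm = x.copy()
            xm[j] -= eps
            # Centered difference: error O(eps^2)
            jac[:, j] = (np.asarray(f(xp)) - np.asarray(f(xm))) / (2 * eps)
        else:
            # One-sided difference: error O(eps), but one fewer evaluation
            jac[:, j] = (np.asarray(f(xp)) - f0) / eps
    return jac

# Example: f(x) = [x0^2, x0*x1] has exact Jacobian [[2*x0, 0], [x1, x0]]
f = lambda x: np.array([x[0]**2, x[0] * x[1]])
J = fd_jac(f, [1.0, 2.0], centered=True)  # approximately [[2., 0.], [2., 1.]]
```

For an m-component function of n degrees of freedom, one-sided differences require n + 1 evaluations and centered differences require 2n, which is why distributing the evaluations over worker groups pays off.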
simsopt.solve.mpi.least_squares_mpi_solve(prob: simsopt.objectives.least_squares.LeastSquaresProblem, mpi: simsopt.util.mpi.MpiPartition, grad: Optional[bool] = None, **kwargs)

Solve a nonlinear-least-squares minimization problem using MPI. All MPI processes (including group leaders and workers) should call this function.
- Parameters
prob – An instance of LeastSquaresProblem, defining the objective function(s) and parameter space.
mpi – A simsopt.util.mpi.MpiPartition object, storing the information about how the pool of MPI processes is divided into worker groups.
grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-based algorithm will be used if prob has gradient information available; otherwise a gradient-free algorithm will be used by default. If you set grad=True for a problem in which gradient information is not available, finite-difference gradients will be used.
kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100, or method to choose the optimization algorithm.
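Since the keyword arguments are forwarded unchanged to scipy.optimize.least_squares, their effect can be seen in a standalone scipy call. This is a minimal sketch with an invented residual function (fitting a slope to three data points), not a simsopt problem:

```python
import numpy as np
from scipy.optimize import least_squares

# Invented data for illustration: y = 2*x sampled at three points.
xdata = np.array([1.0, 2.0, 3.0])
ydata = 2.0 * xdata

def residuals(p):
    """Residual vector for the model y = p[0]*x."""
    return p[0] * xdata - ydata

# max_nfev and method are exactly the kwargs that least_squares_mpi_solve
# would forward to scipy.optimize.least_squares.
result = least_squares(residuals, x0=[0.5], method="lm", max_nfev=100)
```

Here method="lm" selects Levenberg-Marquardt; result.x holds the optimal parameter vector (close to 2.0 for this data).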
simsopt.solve.serial module
This module provides functions for solving least-squares and general optimization problems, without parallelization in the optimization algorithm itself and without parallelized finite-difference gradients. These functions can still be used for cases in which there is parallelization within the objective function evaluations. They are essentially interfaces between a simsopt.objectives.least_squares.LeastSquaresProblem object and scipy.optimize.least_squares, and they also create a log file with the history of objective function evaluations. If you want parallelized finite-difference gradient evaluations, you should instead use simsopt.solve.mpi.least_squares_mpi_solve(). If not, the functions here may be preferable due to their greater simplicity.
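The evaluation history mentioned above can be pictured with a small wrapper that records each objective evaluation. This is an illustrative sketch only: the wrapper class and its in-memory history list are invented here, whereas the simsopt solvers write their history to a log file in their own format.

```python
import numpy as np
from scipy.optimize import least_squares

class LoggedResiduals:
    """Wrap a residual function, recording every evaluation.

    Illustrative stand-in for the solvers' log file of objective
    function evaluations; not the simsopt implementation.
    """
    def __init__(self, fun):
        self.fun = fun
        self.history = []  # list of (state vector, total objective) pairs

    def __call__(self, x):
        r = np.asarray(self.fun(x))
        # The least-squares objective is the sum of squared residuals.
        self.history.append((np.copy(x), float(np.dot(r, r))))
        return r

# Invented residuals for illustration; minimum at x = (1, -2).
logged = LoggedResiduals(lambda x: np.array([x[0] - 1.0, x[1] + 2.0]))
result = least_squares(logged, x0=[0.0, 0.0])
```

After the solve, logged.history contains one entry per function evaluation, which is the kind of record the serial solvers persist to their log file.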
simsopt.solve.serial.least_squares_serial_solve(prob: simsopt.objectives.least_squares.LeastSquaresProblem, grad: Optional[bool] = None, **kwargs)

Solve a nonlinear-least-squares minimization problem.
- Parameters
prob – An instance of LeastSquaresProblem, defining the objective function(s) and parameter space.
grad – Whether to use a gradient-based optimization algorithm, as opposed to a gradient-free algorithm. If unspecified, a gradient-based algorithm will be used if prob has gradient information available; otherwise a gradient-free algorithm will be used by default. If you set grad=True for a problem in which gradient information is not available, finite-difference gradients will be used.
kwargs – Any arguments to pass to scipy.optimize.least_squares. For instance, you can supply max_nfev=100 to set the maximum number of function evaluations (not counting finite-difference gradient evaluations) to 100.
simsopt.solve.serial.serial_solve(prob, grad=None, **kwargs)

Solve a general minimization problem (i.e. one that need not be of least-squares form) using scipy.optimize.minimize, without using any parallelization.

prob should be a simsopt problem. kwargs allows you to pass any arguments to scipy.optimize.minimize.
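Because serial_solve() wraps scipy.optimize.minimize, a direct scipy call shows the kind of general objective and kwargs involved. This is a minimal sketch with an invented scalar objective (the Rosenbrock function), not a simsopt problem:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """A general (non-least-squares) scalar objective: Rosenbrock."""
    return (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2

# method is an example of a kwarg that serial_solve would forward
# unchanged to scipy.optimize.minimize; Nelder-Mead is gradient-free.
result = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
```

Choosing a gradient-free method here mirrors the grad=None behavior described above: when no gradient information is available, a gradient-free algorithm is the default.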
Module contents

For convenience, the functions documented above are re-exported at the package level: simsopt.solve.fd_jac_mpi(), simsopt.solve.least_squares_mpi_solve(), simsopt.solve.least_squares_serial_solve(), and simsopt.solve.serial_solve(). Their signatures and behavior are identical to the submodule versions.