Optimize

class SimPEG.Optimization.StoppingCriteria[source]

Bases: object

Stopping criteria used by the Minimize classes.

iteration = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : maxIter = %3d <= iter = %3d', 'left': <function <lambda>>}
iterationLS = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : maxIterLS = %3d <= iterLS = %3d', 'left': <function <lambda>>}
armijoGoldstein = {'stopType': 'optimal', 'right': <function <lambda>>, 'str': '%d : ft = %1.4e <= alp*descent = %1.4e', 'left': <function <lambda>>}
tolerance_f = {'stopType': 'optimal', 'right': <function <lambda>>, 'str': '%d : |fc-fOld| = %1.4e <= tolF*(1+|f0|) = %1.4e', 'left': <function <lambda>>}
moving_x = {'stopType': 'optimal', 'right': <function <lambda>>, 'str': '%d : |xc-x_last| = %1.4e <= tolX*(1+|x0|) = %1.4e', 'left': <function <lambda>>}
tolerance_g = {'stopType': 'optimal', 'right': <function <lambda>>, 'str': '%d : |proj(x-g)-x| = %1.4e <= tolG = %1.4e', 'left': <function <lambda>>}
norm_g = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : |proj(x-g)-x| = %1.4e <= 1e3*eps = %1.4e', 'left': <function <lambda>>}
bindingSet = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : probSize = %3d <= bindingSet = %3d', 'left': <function <lambda>>}
bindingSet_LS = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : probSize = %3d <= bindingSet = %3d', 'left': <function <lambda>>}
phi_d_target_Minimize = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : phi_d = %1.4e <= phi_d_target = %1.4e ', 'left': <function <lambda>>}
phi_d_target_Inversion = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : phi_d = %1.4e <= phi_d_target = %1.4e ', 'left': <function <lambda>>}
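
Each stopper above follows one convention: left and right are functions of the Minimize instance M, and the criterion fires when left(M) <= right(M); str formats the logged message. A hedged sketch of a custom stopper, inferred from the entries above rather than copied from the source:

max_iteration = {
    'stopType': 'critical',  # a 'critical' stopper can halt on its own;
                             # all 'optimal' stoppers must hold together
    'str': '%d : maxIter = %3d <= iter = %3d',
    'left': lambda M: M.maxIter,
    'right': lambda M: M.iter,
}
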
class SimPEG.Optimization.IterationPrinters[source]

Bases: object

Printers used to format the iteration log of the Minimize classes.

iteration = {'width': 5, 'format': '%3d', 'value': <function <lambda>>, 'title': '#'}
f = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'f'}
norm_g = {'width': 15, 'format': '%1.2e', 'value': <function <lambda>>, 'title': '|proj(x-g)-x|'}
totalLS = {'width': 5, 'format': '%d', 'value': <function <lambda>>, 'title': 'LS'}
iterationLS = {'width': 5, 'format': '%3d.%d', 'value': <function <lambda>>, 'title': '#'}
LS_ft = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'ft'}
LS_t = {'width': 10, 'format': '%0.5f', 'value': <function <lambda>>, 'title': 't'}
LS_armijoGoldstein = {'width': 16, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'f + alp*g.T*p'}
itType = {'width': 8, 'format': '%s', 'value': <function <lambda>>, 'title': 'itType'}
aSet = {'width': 8, 'format': '%d', 'value': <function <lambda>>, 'title': 'aSet'}
bSet = {'width': 8, 'format': '%d', 'value': <function <lambda>>, 'title': 'bSet'}
comment = {'width': 12, 'format': '%s', 'value': <function <lambda>>, 'title': 'Comment'}
beta = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'beta'}
phi_d = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'phi_d'}
phi_m = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'phi_m'}
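
Each printer is a dict giving a column title, a value function of the Minimize instance, a column width, and a printf-style format. A minimal sketch of a custom printer following this convention (the alpha attribute is hypothetical):

step_length = {
    'title': 'alpha',            # column header
    'value': lambda M: M.alpha,  # hypothetical attribute on the Minimize instance
    'width': 10,                 # column width in characters
    'format': '%1.2e',           # printf-style format for the value
}
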
class SimPEG.Optimization.Minimize(**kwargs)[source]

Bases: object

Minimize is a general class for derivative-based optimization.

name = 'General Optimization Algorithm'

The name of the optimization algorithm

maxIter = 20

Maximum number of iterations

maxIterLS = 10

Maximum number of iterations for the line-search

maxStep = inf

Maximum step possible; used to scale the search direction back before the line-search.

LSreduction = 0.0001

Expected decrease in the line-search

LSshorten = 0.5

The line-search step is multiplied by this factor each time it is shortened.

tolF = 0.1

Tolerance on function value decrease

tolX = 0.1

Tolerance on norm(x) movement

tolG = 0.1

Tolerance on gradient norm

eps = 1e-05

Small value used in the stopping criteria (e.g. the norm_g check).

stopNextIteration = False

Set to True to stop the optimization gracefully after the current iteration.

debug = False

Print debugging information

debugLS = False

Print debugging information for the line-search

comment = ''

Used by some methods to indicate what is going on in the algorithm; shown by the comment printer in the iteration log.

counter = None

Set this to a SimPEG.Utils.Counter() if you want to count things

parent = None

This is the parent of the optimization routine.

callback
minimize(evalFunction, x0)[source]

Minimizes the function (evalFunction) starting at the location x0.

Parameters:
  • evalFunction (callable) – function handle that evaluates: f, g, H = F(x)
  • x0 (numpy.ndarray) – starting location
Return type:

numpy.ndarray

Returns:

x, the last iterate of the optimization algorithm

evalFunction is a function handle:

(f[, g][, H]) = evalFunction(x, return_g=False, return_H=False)

def evalFunction(x, return_g=False, return_H=False):
    out = (f,)
    if return_g:
        out += (g,)
    if return_H:
        out += (H,)
    return out if len(out) > 1 else out[0]

The algorithm for general minimization is as follows:

startup(x0)
printInit()

while True:
    doStartIteration()
    f, g, H = evalFunction(xc)
    printIter()
    if stoppingCriteria(): break
    p = findSearchDirection()
    p = scaleSearchDirection(p)
    xt, passLS = modifySearchDirection(p)
    if not passLS:
        xt, caught = modifySearchDirectionBreak(p)
        if not caught: return xc
    doEndIteration(xt)

printDone()
finish()
return xc
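
For example, minimizing a simple quadratic with SteepestDescent (documented below) could look like the following hedged sketch; the import path is taken from the class headers on this page:

import numpy as np
from SimPEG import Optimization

def evalFunction(x, return_g=False, return_H=False):
    # f(x) = 0.5*||x||^2, with gradient g = x and Hessian H = I
    out = (0.5 * float(x.dot(x)),)
    if return_g:
        out += (x,)
    if return_H:
        out += (np.identity(x.size),)
    return out if len(out) > 1 else out[0]

opt = Optimization.SteepestDescent(maxIter=20)
xopt = opt.minimize(evalFunction, np.array([1.0, -2.0]))
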
startup(*args, **kwargs)[source]

startup is called at the start of any new minimize call.

This will set:

x0 = x0
xc = x0
iter = iterLS = 0
Parameters:x0 (numpy.ndarray) – initial x
Return type:None
Returns:None

If you have things that also need to run in the method startup, you can create a method:

def _startup*(self, ... ):
    pass

Where the * can be any string. If present, _startup* will be called at the start of the default startup call. You may also completely overwrite this function.
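
A hypothetical example of such a hook (the name _startupLogging is arbitrary):

import numpy as np
from SimPEG.Optimization import Minimize

class LoggedMinimize(Minimize):
    def _startupLogging(self, x0):
        # discovered and called by the default startup(x0)
        print('starting at |x0| = %1.2e' % np.linalg.norm(x0))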

doStartIteration()[source]

doStartIteration is called at the start of each minimize iteration.

Return type:None
Returns:None

If you have things that also need to run in the method doStartIteration, you can create a method:

def _doStartIteration*(self, ... ):
    pass

Where the * can be any string. If present, _doStartIteration* will be called at the start of the default doStartIteration call. You may also completely overwrite this function.

printInit(inLS=False)[source]

printInit is called at the beginning of the optimization routine.

If there is a parent object, printInit will check for a parent.printInit function and call that.

printIter(*args, **kwargs)[source]

printIter is called directly after function evaluations.

If there is a parent object, printIter will check for a parent.printIter function and call that.

If you have things that also need to run in the method printIter, you can create a method:

def _printIter*(self, ... ):
    pass

Where the * can be any string. If present, _printIter* will be called at the start of the default printIter call. You may also completely overwrite this function.

printDone(inLS=False)[source]

printDone is called at the end of the optimization routine.

If there is a parent object, printDone will check for a parent.printDone function and call that.

finish()[source]

finish is called at the end of the optimization.

Return type:None
Returns:None

If you have things that also need to run in the method finish, you can create a method:

def _finish*(self, ... ):
    pass

Where the * can be any string. If present, _finish* will be called at the start of the default finish call. You may also completely overwrite this function.

stoppingCriteria(inLS=False)[source]
projection(p)[source]

Projects the search direction.

By default, no projection is applied.

Parameters:p (numpy.ndarray) – searchDirection
Return type:numpy.ndarray
Returns:p, projected search direction

If you have things that also need to run in the method projection, you can create a method:

def _projection*(self, ... ):
    pass

Where the * can be any string. If present, _projection* will be called at the start of the default projection call. You may also completely overwrite this function.

findSearchDirection()[source]

findSearchDirection should return an approximation of:

\[H p = - g\]

where p is the search direction being solved for.

The default is:

\[H = I, \qquad p = -g\]

which corresponds to SteepestDescent.

The latest function evaluations are present in:

self.f, self.g, self.H
Return type:numpy.ndarray
Returns:p, Search Direction
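
As a hedged sketch, a subclass could override this method to scale the gradient by a diagonal Hessian approximation (the Hdiag attribute is hypothetical):

from SimPEG.Optimization import Minimize

class DiagonallyScaledDescent(Minimize):
    Hdiag = None  # hypothetical: a vector approximating diag(H)

    def findSearchDirection(self):
        # the default is p = -g; scale elementwise when a diagonal is given
        if self.Hdiag is None:
            return -self.g
        return -self.g / self.Hdiag
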
scaleSearchDirection(p)[source]

scaleSearchDirection should scale the search direction if appropriate.

Set the maxStep parameter on the Minimize object to scale the search direction back to a maximum size.

Parameters:p (numpy.ndarray) – searchDirection
Return type:numpy.ndarray
Returns:p, Scaled Search Direction
nameLS = 'Armijo linesearch'

The line-search name

modifySearchDirection(p)[source]

modifySearchDirection changes the search direction based on some sort of linesearch or trust-region criteria.

By default, an Armijo backtracking linesearch is performed with the following parameters (a standalone sketch follows this method's description):

  • maxIterLS, the maximum number of linesearch iterations
  • LSreduction, the expected reduction, default: 1e-4
  • LSshorten, the factor by which the step is shortened each time, default: 0.5

If the linesearch completes and a descent direction is found, passLS is returned as True.

Otherwise, modifySearchDirectionBreak is called.

Parameters:p (numpy.ndarray) – searchDirection
Return type:tuple
Returns:(xt, passLS) numpy.ndarray, bool
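
A minimal standalone sketch of the Armijo backtracking rule described above (the standard algorithm, not SimPEG's exact code):

import numpy as np

def armijo_backtrack(evalFunction, xc, f, g, p,
                     LSreduction=1e-4, LSshorten=0.5, maxIterLS=10):
    # shrink the step t until f(xc + t*p) <= f + LSreduction * t * (g.p)
    descent = float(g.dot(p))
    t = 1.0
    for _ in range(maxIterLS):
        xt = xc + t * p
        if evalFunction(xt) <= f + LSreduction * t * descent:
            return xt, True   # sufficient decrease found
        t *= LSshorten        # shorten the step and retry
    return xt, False          # linesearch failed
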
modifySearchDirectionBreak(p)[source]

Called if modifySearchDirection fails to find a descent direction.

The search direction is passed as input, and this function must return both a new iterate and a flag indicating whether the break has been caught.

By default, no additional work is done and False is returned, indicating the break was not caught.

Parameters:p (numpy.ndarray) – searchDirection
Return type:tuple
Returns:(xt, breakCaught) numpy.ndarray, bool
doEndIteration(xt)[source]

doEndIteration is called at the end of each minimize iteration.

By default, function values and x locations are shuffled to store one past iteration in memory.

self.xc must be updated in this code.

Parameters:xt (numpy.ndarray) – tested new iterate that ensures a descent direction.
Return type:None
Returns:None

If you have things that also need to run in the method doEndIteration, you can create a method:

def _doEndIteration*(self, ... ):
    pass

Where the * can be any string. If present, _doEndIteration* will be called at the start of the default doEndIteration call. You may also completely overwrite this function.

save(group)[source]
class SimPEG.Optimization.Remember[source]

Bases: object

This mixin remembers all the things you tend to forget.

You can remember parameters directly by naming the attribute on Minimize as a string, or by passing a tuple of a name and a function that takes the Minimize object.

For example:

opt.remember('f', ('norm_g', lambda M: np.linalg.norm(M.g)))
opt.minimize(evalFunction, x0)
opt.recall('f')

The parameter name (str) can also be located on the parent (if there are no conflicts); the parent is searched by default.

_rememberThese = []
remember(*args)[source]
recall(param)[source]
_startupRemember(x0)[source]
_doEndIterationRemember(*args)[source]
class SimPEG.Optimization.ProjectedGradient(**kwargs)[source]

Bases: SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

name = 'Projected Gradient'
maxIterCG = 5
tolCG = 0.1
lower = -inf
upper = inf
_startup(x0)[source]
projection(x)[source]

Projects x onto the feasible set defined by lower and upper.

activeSet(x)[source]

Returns the set of indices where x lies on a bound.

inactiveSet(x)[source]

Returns the set of free variables, i.e. those not on a bound.

bindingSet(x)[source]

Returns the set of indices where x is on a bound and the negative gradient points away from the feasible set.

This is the optimality condition (it satisfies the Karush-Kuhn-Tucker conditions) of Moré and Toraldo (1991).
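
A hedged sketch of these sets for simple bound constraints, as boolean masks over the variables:

import numpy as np

def active_set(x, lower, upper, tol=1e-10):
    # variables sitting on a bound
    return (x <= lower + tol) | (x >= upper - tol)

def binding_set(x, g, lower, upper, tol=1e-10):
    # on a bound AND the negative gradient points out of the feasible set
    on_lower = (x <= lower + tol) & (g >= 0)
    on_upper = (x >= upper - tol) & (g <= 0)
    return on_lower | on_upper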

findSearchDirection()[source]

Finds the search direction based on either CG or steepest descent.

_doEndIteration_ProjectedGradient(xt)[source]
class SimPEG.Optimization.BFGS(**kwargs)[source]

Bases: SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

name = 'BFGS'
nbfgs = 10
bfgsH0

Approximate Hessian used in preconditioning the problem.

Must be a SimPEG.Solver

_startup_BFGS(x0)[source]
bfgs(d)[source]
bfgsrec(k, n, nn, S, Y, d)[source]

BFGS recursion
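
The textbook limited-memory two-loop recursion is a reasonable mental model for this method; a standalone sketch, not SimPEG's exact signature:

import numpy as np

def lbfgs_two_loop(g, S, Y):
    # S, Y hold the stored steps s_k = x_{k+1} - x_k and gradient
    # differences y_k = g_{k+1} - g_k, oldest first.
    q = g.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in reversed(list(zip(S, Y, rhos))):
        a = rho * s.dot(q)
        alphas.append(a)                 # stored newest-first
        q -= a * y
    if S:                                # scale by gamma = s.y / y.y
        r = S[-1].dot(Y[-1]) / Y[-1].dot(Y[-1]) * q
    else:
        r = q
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        b = rho * y.dot(r)
        r += (a - b) * s
    return r                             # approximately H^{-1} g; use p = -r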

findSearchDirection()[source]
_doEndIteration_BFGS(xt)[source]
class SimPEG.Optimization.GaussNewton(**kwargs)[source]

Bases: SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

name = 'Gauss Newton'
findSearchDirection(*args, **kwargs)[source]
class SimPEG.Optimization.InexactGaussNewton(**kwargs)[source]

Bases: SimPEG.Optimization.BFGS, SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

Minimizes using CG as the inexact solver of

\[\mathbf{H p = -g}\]

By default BFGS is used as the preconditioner.

Use nbfgs to set the memory limit of BFGS.

To set the initial H0 to be used in BFGS, set bfgsH0 to be a SimPEG.Solver

name = 'Inexact Gauss Newton'
maxIterCG = 5
tolCG = 0.1
approxHinv

The approximate Hessian inverse is used to precondition CG.

Default uses BFGS, with an initial H0 of bfgsH0.

Must be a scipy.sparse.linalg.LinearOperator
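
A hedged sketch of the inexact solve this class performs, using SciPy's CG with an optional preconditioner (standalone, not SimPEG's exact code):

import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def inexact_search_direction(H, g, approxHinv=None, maxIterCG=5):
    # approximately solve H p = -g with (preconditioned) CG
    n = g.size
    A = LinearOperator((n, n), matvec=lambda v: H.dot(v))
    M = None
    if approxHinv is not None:
        # assumes approxHinv is a callable v -> (approximate) H^{-1} v
        M = LinearOperator((n, n), matvec=approxHinv)
    p, info = cg(A, -g, maxiter=maxIterCG, M=M)
    return p  # info > 0 only means the iteration cap was hit (inexact solve)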

findSearchDirection(*args, **kwargs)[source]
class SimPEG.Optimization.SteepestDescent(**kwargs)[source]

Bases: SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

name = 'Steepest Descent'
findSearchDirection(*args, **kwargs)[source]
class SimPEG.Optimization.NewtonRoot(**kwargs)[source]

Bases: object

Newton Method - Root Finding

root = NewtonRoot().root(fun, x)

where fun is a function that returns both the function value and the Jacobian (gradient).

For iterative solving of J dh = -r, set solveTol = TOL. For direct solves, set solveTol = 0 (the default).

Rowan Cockett 16-May-2013 16:29:51 University of British Columbia rcockett@eos.ubc.ca

tol = 1e-06
maxIter = 20
stepDcr = 0.5
maxLS = 30
comments = False
doLS = True
class Solver(A, **kwargs)

Bases: object

clean()
NewtonRoot.solverOpts = {}
NewtonRoot.root(fun, x)[source]

The function should have the form:

def evalFunction(x, return_g=False):
    out = (f,)
    if return_g:
        out += (g,)
    return out if len(out) > 1 else out[0]
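
A standalone sketch of a damped Newton iteration matching the attributes above (tol, maxIter, stepDcr, maxLS); not SimPEG's exact code:

import numpy as np

def newton_root(fun, x, tol=1e-6, maxIter=20, stepDcr=0.5, maxLS=30):
    for _ in range(maxIter):
        f, J = fun(x, return_g=True)
        if np.linalg.norm(f) < tol:
            break
        dh = np.linalg.solve(np.atleast_2d(J), -np.atleast_1d(f))
        t = 1.0
        for _ in range(maxLS):           # backtrack until the residual drops
            if np.linalg.norm(fun(x + t * dh)) < np.linalg.norm(f):
                break
            t *= stepDcr
        x = x + t * dh
    return x

def fun(x, return_g=False):
    f = x ** 3 - 2                       # find the cube root of 2
    return (f, np.diag(3 * x ** 2)) if return_g else f

print(newton_root(fun, np.array([1.0])))  # ~ [1.2599]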