Optimize

class SimPEG.Optimization.Minimize(**kwargs)

Bases: object

Minimize is a general class for derivative-based optimization.

LSreduction = 0.0001
LSshorten = 0.5
callback
comment = ''
counter = None
debug = False
debugLS = False
doEndIteration(xt)

doEndIteration is called at the end of each minimize iteration.

By default, function values and x locations are shuffled to store 1 past iteration in memory.

self.xc must be updated in this code.

Parameters:xt (numpy.ndarray) – tested new iterate that ensures a descent direction.
Return type:None
Returns:None

If you have things that also need to run in the method doEndIteration, you can create a method:

def _doEndIteration*(self, ... ):
    pass

Where the * can be any string. If present, _doEndIteration* will be called at the start of the default doEndIteration call. You may also completely override this function.
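
The hook dispatch can be sketched in plain Python; Base and WithHook below are illustrative stand-ins, not SimPEG classes, and the dispatch loop only mimics the real implementation:

```python
class Base(object):
    def doEndIteration(self, xt):
        # Call every method whose name starts with "_doEndIteration"
        # before doing the default bookkeeping.
        for name in sorted(dir(self)):
            if name.startswith('_doEndIteration'):
                getattr(self, name)(xt)
        self.xc = xt  # the default updates the current iterate

class WithHook(Base):
    # The suffix after "_doEndIteration" can be any string.
    def _doEndIterationLogging(self, xt):
        self.log = getattr(self, 'log', []) + [xt]
```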

doStartIteration()

doStartIteration is called at the start of each minimize iteration.

Return type:None
Returns:None

If you have things that also need to run in the method doStartIteration, you can create a method:

def _doStartIteration*(self, ... ):
    pass

Where the * can be any string. If present, _doStartIteration* will be called at the start of the default doStartIteration call. You may also completely override this function.

eps = 1e-05
findSearchDirection()

findSearchDirection should return an approximation of:

\[H p = - g\]

where you are solving for the search direction, p.

The default is:

\[ \begin{align}\begin{aligned}H = I\\p = - g\end{aligned}\end{align} \]

which corresponds to SteepestDescent.

The latest function evaluations are present in:

self.f, self.g, self.H
Return type:numpy.ndarray
Returns:p, Search Direction
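
The default choice amounts to the following minimal numpy sketch (the function names are illustrative; a subclass could instead solve against the true Hessian):

```python
import numpy as np

def steepest_descent_direction(g):
    # Default: with H = I, solving H p = -g reduces to p = -g.
    return -g

def newton_direction(H, g):
    # An overriding subclass could solve H p = -g with the true Hessian.
    return np.linalg.solve(H, -g)
```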
finish()

finish is called at the end of the optimization.

Return type:None
Returns:None

If you have things that also need to run in the method finish, you can create a method:

def _finish*(self, ... ):
    pass

Where the * can be any string. If present, _finish* will be called at the start of the default finish call. You may also completely override this function.

maxIter = 20
maxIterLS = 10
maxStep = inf
minimize(evalFunction, x0)

Minimizes the function (evalFunction) starting at the location x0.

Parameters:
  • evalFunction (callable) – function handle that evaluates: f, g, H = F(x)
  • x0 (numpy.ndarray) – starting location
Return type: numpy.ndarray
Returns: x, the last iterate of the optimization algorithm

evalFunction is a function handle:

(f[, g][, H]) = evalFunction(x, return_g=False, return_H=False)

def evalFunction(x, return_g=False, return_H=False):
    out = (f,)
    if return_g:
        out += (g,)
    if return_H:
        out += (H,)
    return out if len(out) > 1 else out[0]

The algorithm for general minimization is as follows:

startup(x0)
printInit()

while True:
    doStartIteration()
    f, g, H = evalFunction(xc)
    printIter()
    if stoppingCriteria(): break
    p = findSearchDirection()
    p = scaleSearchDirection(p)
    xt, passLS = modifySearchDirection(p)
    if not passLS:
        xt, caught = modifySearchDirectionBreak(p)
        if not caught: return xc
    doEndIteration(xt)

printDone()
finish()
return xc
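
As a concrete illustration, an evalFunction for the quadratic f(x) = ½‖x‖² can be written to match the pattern above; the commented optimizer call shows how it would plug into SteepestDescent:

```python
import numpy as np

def evalFunction(x, return_g=False, return_H=False):
    f = 0.5 * x.dot(x)            # objective value
    out = (f,)
    if return_g:
        out += (x,)               # gradient of 0.5*||x||^2 is x
    if return_H:
        out += (np.eye(x.size),)  # Hessian is the identity
    return out if len(out) > 1 else out[0]

# With SimPEG this would be driven by, e.g.:
#   opt = SimPEG.Optimization.SteepestDescent()
#   xopt = opt.minimize(evalFunction, x0)
```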
modifySearchDirection(p)

modifySearchDirection changes the search direction based on some sort of linesearch or trust-region criteria.

By default, an Armijo backtracking linesearch is performed with the following parameters:

  • maxIterLS, the maximum number of linesearch iterations
  • LSreduction, the expected reduction in the objective, default: 1e-4
  • LSshorten, the factor by which the step is shortened, default: 0.5

If the linesearch completes and a descent direction is found, passLS is returned as True.

Otherwise, a modifySearchDirectionBreak call is performed.

Parameters:p (numpy.ndarray) – searchDirection
Return type:tuple
Returns:(xt, passLS) numpy.ndarray, bool
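
A minimal standalone sketch of such an Armijo backtracking step, using the same parameter names (this is an illustration, not SimPEG's implementation):

```python
import numpy as np

def armijo(evalFunction, xc, f, g, p,
           maxIterLS=10, LSreduction=1e-4, LSshorten=0.5):
    """Backtrack along p until sufficient decrease, or give up."""
    t = 1.0
    descent = LSreduction * g.dot(p)   # expected decrease per unit step
    for _ in range(maxIterLS):
        xt = xc + t * p
        if evalFunction(xt) <= f + t * descent:
            return xt, True            # linesearch succeeded
        t *= LSshorten                 # shorten the step and retry
    return xt, False                   # break: no sufficient decrease found
```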
modifySearchDirectionBreak(p)

Called if modifySearchDirection fails to find a descent direction.

The search direction is passed as input, and this function must return both a new search direction and a flag indicating whether the break has been caught.

By default, no additional work is done, and False is returned, indicating the break was not caught.

Parameters:p (numpy.ndarray) – searchDirection
Return type:tuple
Returns:(xt, breakCaught) numpy.ndarray, bool
name = 'General Optimization Algorithm'
nameLS = 'Armijo linesearch'
parent = None
printDone(inLS=False)

printDone is called at the end of the optimization routine.

If there is a parent object, printDone will check for a parent.printDone function and call that.

printInit(inLS=False)

printInit is called at the beginning of the optimization routine.

If there is a parent object, printInit will check for a parent.printInit function and call that.

printIter(*args, **kwargs)

printIter is called directly after function evaluations.

If there is a parent object, printIter will check for a parent.printIter function and call that.

If you have things that also need to run in the method printIter, you can create a method:

def _printIter*(self, ... ):
    pass

Where the * can be any string. If present, _printIter* will be called at the start of the default printIter call. You may also completely override this function.

projection(p)

Projects the search direction.

By default, no projection is applied.

Parameters:p (numpy.ndarray) – searchDirection
Return type:numpy.ndarray
Returns:p, projected search direction

If you have things that also need to run in the method projection, you can create a method:

def _projection*(self, ... ):
    pass

Where the * can be any string. If present, _projection* will be called at the start of the default projection call. You may also completely override this function.

save(group)
scaleSearchDirection(p)

scaleSearchDirection should scale the search direction if appropriate.

Set the maxStep parameter in the minimize object to scale the search direction back to a maximum size.

Parameters:p (numpy.ndarray) – searchDirection
Return type:numpy.ndarray
Returns:p, Scaled Search Direction
startup(*args, **kwargs)

startup is called at the start of any new minimize call.

This will set:

x0 = x0
xc = x0
iter = iterLS = 0
Parameters:x0 (numpy.ndarray) – initial x
Return type:None
Returns:None

If you have things that also need to run in the method startup, you can create a method:

def _startup*(self, ... ):
    pass

Where the * can be any string. If present, _startup* will be called at the start of the default startup call. You may also completely override this function.

stopNextIteration = False
stoppingCriteria(inLS=False)
tolF = 0.1
tolG = 0.1
tolX = 0.1
class SimPEG.Optimization.Remember

Bases: object

This mixin remembers all the things you tend to forget.

You can remember parameters directly, naming the str in Minimize, or pass a tuple with the name and the function that takes Minimize.

For example:

opt.remember('f', ('norm_g', lambda M: np.linalg.norm(M.g)))
opt.minimize(evalFunction, x0)
opt.recall('f')

The param name (str) can also be located in the parent (if no conflicts), and it will be looked up by default.

_doEndIterationRemember(*args)
_rememberThese = []
_startupRemember(x0)
recall(param)
remember(*args)
class SimPEG.Optimization.SteepestDescent(**kwargs)

Bases: SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

findSearchDirection(*args, **kwargs)
name = 'Steepest Descent'
class SimPEG.Optimization.BFGS(**kwargs)

Bases: SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

_doEndIteration_BFGS(xt)
_startup_BFGS(x0)
bfgs(d)
bfgsH0

Approximate Hessian used in preconditioning the problem.

Must be a SimPEG.Solver

bfgsrec(k, n, nn, S, Y, d)

BFGS recursion

findSearchDirection()
name = 'BFGS'
nbfgs = 10
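
The limited-memory idea behind bfgs and bfgsrec can be sketched with the standard L-BFGS two-loop recursion (a standalone illustration, not SimPEG's recursive implementation):

```python
import numpy as np

def lbfgs_two_loop(g, S, Y, H0diag=None):
    """Apply the L-BFGS inverse-Hessian approximation to g.

    S, Y hold the last few step and gradient-difference vectors
    (newest last); their length plays the role of nbfgs.
    """
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(S), reversed(Y)):     # newest pair first
        rho = 1.0 / y.dot(s)
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    # Initial inverse-Hessian guess; SimPEG's bfgsH0 plays this role.
    r = q if H0diag is None else H0diag * q
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):  # oldest pair first
        rho = 1.0 / y.dot(s)
        b = rho * y.dot(r)
        r += (a - b) * s
    return r
```

With one stored pair satisfying the secant equation y = H s, applying the recursion to y returns s, as the BFGS update requires.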
class SimPEG.Optimization.GaussNewton(**kwargs)

Bases: SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

findSearchDirection(*args, **kwargs)
name = 'Gauss Newton'
class SimPEG.Optimization.InexactGaussNewton(**kwargs)

Bases: SimPEG.Optimization.BFGS, SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

Minimizes using CG as the inexact solver of

\[\mathbf{H p = -g}\]

By default BFGS is used as the preconditioner.

Use nbfgs to set the memory limitation of BFGS.

To set the initial H0 to be used in BFGS, set bfgsH0 to be a SimPEG.Solver

approxHinv

The approximate Hessian inverse is used to precondition CG.

Default uses BFGS, with an initial H0 of bfgsH0.

Must be a scipy.sparse.linalg.LinearOperator

findSearchDirection(*args, **kwargs)
maxIterCG = 5
name = 'Inexact Gauss Newton'
tolCG = 0.1
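
The inexact solve of H p = -g can be sketched with a hand-rolled, early-stopped conjugate gradient (illustrative only; SimPEG additionally preconditions CG with approxHinv):

```python
import numpy as np

def cg_solve(H, b, maxiter=5, tol=0.1):
    """Plain conjugate gradient for H x = b, stopped early (inexact)."""
    x = np.zeros_like(b)
    r = b - H.dot(x)
    p = r.copy()
    rr = r.dot(r)
    for _ in range(maxiter):
        if np.sqrt(rr) <= tol * np.linalg.norm(b):
            break                      # loose tolerance => inexact solve
        Hp = H.dot(p)
        alpha = rr / p.dot(Hp)
        x += alpha * p
        r -= alpha * Hp
        rr_new = r.dot(r)
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

def inexact_gauss_newton_direction(H, g, maxIterCG=5, tolCG=0.1):
    # Approximately solve H p = -g, as InexactGaussNewton does with CG.
    return cg_solve(H, -g, maxiter=maxIterCG, tol=tolCG)
```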
class SimPEG.Optimization.ProjectedGradient(**kwargs)

Bases: SimPEG.Optimization.Minimize, SimPEG.Optimization.Remember

_doEndIteration_ProjectedGradient(xt)
_startup(x0)
activeSet(x)

The set of variables that are on a bound.

bindingSet(x)

The set of variables that are on a bound and whose negative gradient points away from the feasible set.

This is the optimality condition (it satisfies the Kuhn-Tucker conditions); see MoreToraldo91.

findSearchDirection()

Finds the search direction based on either CG or steepest descent.

inactiveSet(x)

The free variables.

lower = -inf
maxIterCG = 5
name = 'Projected Gradient'
projection(x)

Projects x back onto the feasible set.

tolCG = 0.1
upper = inf
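
For simple box constraints, the three index sets and the projection can be sketched as boolean masks (a standalone illustration using the lower and upper bounds by analogy):

```python
import numpy as np

def active_set(x, lower, upper):
    # Variables sitting on a bound.
    return (x <= lower) | (x >= upper)

def binding_set(x, g, lower, upper):
    # On a bound AND the negative gradient points out of the feasible
    # set (the Kuhn-Tucker optimality condition).
    return ((x <= lower) & (g >= 0)) | ((x >= upper) & (g <= 0))

def projection(x, lower, upper):
    # Clip back into the feasible box.
    return np.clip(x, lower, upper)
```

The inactive (free) set is simply the complement of the active set.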
class SimPEG.Optimization.NewtonRoot(**kwargs)

Bases: object

Newton Method - Root Finding

root = newtonRoot(fun, x)

where fun is the function that returns the function value as well as the gradient.

For iterative solves of dh = -Jr, set solveTol = TOL. For direct solves, use solveTol = 0 (the default).

Rowan Cockett 16-May-2013 16:29:51 University of British Columbia rcockett@eos.ubc.ca

class Solver(A, **kwargs)

Bases: object

clean()
comments = False
doLS = True
maxIter = 20
maxLS = 30
root(fun, x)

The function should have the form:

def evalFunction(x, return_g=False):
    out = (f,)
    if return_g:
        out += (g,)
    return out if len(out) > 1 else out[0]
solverOpts = {}
stepDcr = 0.5
tol = 1e-06
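
Newton root finding with a direct solve (solveTol = 0) and no linesearch can be sketched following the evalFunction pattern above (illustrative only; the real root method also supports a linesearch via doLS):

```python
import numpy as np

def newton_root(fun, x, maxIter=20, tol=1e-6):
    """Find a root of fun, where fun returns (value, Jacobian)."""
    for _ in range(maxIter):
        r, J = fun(x, return_g=True)
        if np.linalg.norm(np.atleast_1d(r)) < tol:
            break                       # residual small enough: done
        # Direct solve of dh = -J r (no iterative solver, no linesearch).
        dh = np.linalg.solve(np.atleast_2d(J), -np.atleast_1d(r))
        x = x + dh                      # full Newton step
    return x
```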
class SimPEG.Optimization.StoppingCriteria

Bases: object

A collection of stopping-criteria definitions used by Minimize.

armijoGoldstein = {'stopType': 'optimal', 'right': <function <lambda>>, 'str': '%d : ft = %1.4e <= alp*descent = %1.4e', 'left': <function <lambda>>}
bindingSet = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : probSize = %3d <= bindingSet = %3d', 'left': <function <lambda>>}
bindingSet_LS = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : probSize = %3d <= bindingSet = %3d', 'left': <function <lambda>>}
iteration = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : maxIter = %3d <= iter = %3d', 'left': <function <lambda>>}
iterationLS = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : maxIterLS = %3d <= iterLS = %3d', 'left': <function <lambda>>}
moving_x = {'stopType': 'optimal', 'right': <function <lambda>>, 'str': '%d : |xc-x_last| = %1.4e <= tolX*(1+|x0|) = %1.4e', 'left': <function <lambda>>}
norm_g = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : |proj(x-g)-x| = %1.4e <= 1e3*eps = %1.4e', 'left': <function <lambda>>}
phi_d_target_Inversion = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : phi_d = %1.4e <= phi_d_target = %1.4e ', 'left': <function <lambda>>}
phi_d_target_Minimize = {'stopType': 'critical', 'right': <function <lambda>>, 'str': '%d : phi_d = %1.4e <= phi_d_target = %1.4e ', 'left': <function <lambda>>}
tolerance_f = {'stopType': 'optimal', 'right': <function <lambda>>, 'str': '%d : |fc-fOld| = %1.4e <= tolF*(1+|f0|) = %1.4e', 'left': <function <lambda>>}
tolerance_g = {'stopType': 'optimal', 'right': <function <lambda>>, 'str': '%d : |proj(x-g)-x| = %1.4e <= tolG = %1.4e', 'left': <function <lambda>>}
class SimPEG.Optimization.IterationPrinters

Bases: object

A collection of printer definitions used to format Minimize iteration output.

LS_armijoGoldstein = {'width': 16, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'f + alp*g.T*p'}
LS_ft = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'ft'}
LS_t = {'width': 10, 'format': '%0.5f', 'value': <function <lambda>>, 'title': 't'}
aSet = {'width': 8, 'format': '%d', 'value': <function <lambda>>, 'title': 'aSet'}
bSet = {'width': 8, 'format': '%d', 'value': <function <lambda>>, 'title': 'bSet'}
beta = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'beta'}
comment = {'width': 12, 'format': '%s', 'value': <function <lambda>>, 'title': 'Comment'}
f = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'f'}
itType = {'width': 8, 'format': '%s', 'value': <function <lambda>>, 'title': 'itType'}
iteration = {'width': 5, 'format': '%3d', 'value': <function <lambda>>, 'title': '#'}
iterationLS = {'width': 5, 'format': '%3d.%d', 'value': <function <lambda>>, 'title': '#'}
norm_g = {'width': 15, 'format': '%1.2e', 'value': <function <lambda>>, 'title': '|proj(x-g)-x|'}
phi_d = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'phi_d'}
phi_m = {'width': 10, 'format': '%1.2e', 'value': <function <lambda>>, 'title': 'phi_m'}
totalLS = {'width': 5, 'format': '%d', 'value': <function <lambda>>, 'title': 'LS'}