simpeg.regularization.BaseSparse.get_lp_weights

BaseSparse.get_lp_weights(f_m)

Compute and return iteratively re-weighted least-squares (IRLS) weights.

For a regularization kernel function f_m(m) evaluated at model m, compute and return the IRLS weights. See Smallness.f_m() and SmoothnessFirstOrder.f_m() for examples of least-squares regularization kernels.

For SparseSmallness, f_m is a (n_cells, ) numpy.ndarray. For SparseSmoothness, f_m is a numpy.ndarray whose length corresponds to the number of faces along a particular orientation; e.g. for smoothness along x, the length is (n_faces_x, ).

Parameters:
f_m : numpy.ndarray

The regularization kernel function evaluated at the current model.
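
A minimal usage sketch, assuming a small 1D discretize TensorMesh and SimPEG's SparseSmallness regularization; the mesh size, per-cell norm, and threshold values below are illustrative, not defaults:

```python
import numpy as np
from discretize import TensorMesh
from simpeg.regularization import SparseSmallness

# Illustrative setup: an 8-cell 1D mesh and a sparse smallness term
mesh = TensorMesh([8])
reg = SparseSmallness(mesh)
reg.norm = 0.5             # approximate p-norm applied in each cell (illustrative value)
reg.irls_threshold = 1e-2  # the small stabilizing constant epsilon

# Evaluate the least-squares kernel at a model, then compute the IRLS weights
model = np.random.default_rng(0).normal(size=mesh.n_cells)
f_m = reg.f_m(model)           # regularization kernel, shape (n_cells,)
w_r = reg.get_lp_weights(f_m)  # IRLS weights, shape (n_cells,)
```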

Notes

For a regularization kernel function f_m evaluated at model m, the IRLS weights are computed via:

$$\mathbf{w_r} = \boldsymbol{\lambda} \oslash \Big[ \mathbf{f_m}^2 + \epsilon^2 \Big]^{1 - \frac{p}{2}}$$

where ⊘ represents elementwise division, ϵ is a small constant added for stability of the algorithm (set using the irls_threshold property), and p defines the norm at each cell.
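
The same expression can be sketched in plain NumPy; here f_m, p, and eps are illustrative stand-ins for the kernel values, per-cell norms, and irls_threshold rather than SimPEG attributes:

```python
import numpy as np

f_m = np.array([0.0, 0.1, -0.5, 2.0])  # kernel values at the current model
p = np.zeros_like(f_m)                 # per-cell norm (0 approximates an l0 measure)
eps = 1e-2                             # stand-in for irls_threshold

lam = np.ones_like(f_m)                # scaling vector; all ones when irls_scaled is False
w_r = lam / (f_m**2 + eps**2) ** (1.0 - p / 2.0)
```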

λ applies optional scaling to the IRLS weights (when the irls_scaled property is True). The scaling preserves the balance between the data misfit and the components of the regularization based on the derivative of the l2-norm measure, and it assists convergence by ensuring the model does not deviate aggressively from the global 2-norm solution during the first few IRLS iterations.

To apply elementwise scaling, let

$$f_{max} = \big\| \, \mathbf{f_m} \, \big\|_\infty$$

and define a vector f̃_max such that:

$$\tilde{f}_{i,max} =
\begin{cases}
f_{max} & \text{for } p_i \geq 1 \\[4pt]
\dfrac{\epsilon}{\sqrt{1 - p_i}} & \text{for } p_i < 1
\end{cases}$$

The elementwise scaling vector λ is:

$$\boldsymbol{\lambda} =
\Bigg[ \frac{f_{max}}{\mathbf{\tilde{f}}_{max}} \Bigg] \odot
\Big[ \mathbf{\tilde{f}}_{max}^{\,2} + \epsilon^2 \Big]^{1 - \frac{p}{2}}$$

where ⊙ denotes elementwise multiplication.
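
The scaling can likewise be sketched in NumPy using the formulas above; the arrays below are illustrative and do not come from SimPEG internals:

```python
import numpy as np

f_m = np.array([0.0, 0.1, -0.5, 2.0])  # kernel values (illustrative)
p = np.array([2.0, 1.0, 0.5, 0.0])     # mixed per-cell norms
eps = 1e-2                             # stand-in for irls_threshold

f_max = np.linalg.norm(f_m, np.inf)    # f_max = ||f_m||_inf
f_tilde = np.full_like(f_m, f_max)     # f_tilde_i = f_max where p_i >= 1
f_tilde[p < 1] = eps / np.sqrt(1.0 - p[p < 1])

# Elementwise scaling vector lambda, then the scaled IRLS weights
lam = (f_max / f_tilde) * (f_tilde**2 + eps**2) ** (1.0 - p / 2.0)
w_r = lam / (f_m**2 + eps**2) ** (1.0 - p / 2.0)
```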