simpeg.regularization.SparseSmallness.get_lp_weights

SparseSmallness.get_lp_weights(f_m)

Compute and return iteratively re-weighted least-squares (IRLS) weights.

For a regularization kernel function \(\mathbf{f_m}(\mathbf{m})\) evaluated at model \(\mathbf{m}\), compute and return the IRLS weights. See Smallness.f_m() and SmoothnessFirstOrder.f_m() for examples of least-squares regularization kernels.

For SparseSmallness, f_m is a numpy.ndarray of shape (n_cells, ). For SparseSmoothness, f_m is a numpy.ndarray whose length equals the number of faces along a particular orientation; e.g. for smoothness along x, its shape is (n_faces_x, ).

Parameters:
f_m : numpy.ndarray

The regularization kernel function evaluated at the current model.

Returns:
numpy.ndarray

The IRLS weights evaluated at the current model.

Notes

For a regularization kernel function \(\mathbf{f_m}\) evaluated at model \(\mathbf{m}\), the IRLS weights are computed via:

\[\mathbf{w_r} = \boldsymbol{\lambda} \oslash \Big [ \mathbf{f_m}^{\!\! 2} + \epsilon^2 \Big ]^{1 - \mathbf{p}/2}\]

where \(\oslash\) represents elementwise division, \(\epsilon\) is a small constant added for stability of the algorithm (set using the irls_threshold property), and \(\mathbf{p}\) defines the norm at each cell.
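The unscaled weights can be sketched in a few lines of NumPy. This is an illustrative sketch, not the simpeg implementation; the function name and signature are hypothetical:

```python
import numpy as np

def lp_weights_unscaled(f_m, p, eps):
    """Hypothetical sketch of the unscaled IRLS weights.

    w_r = 1 / (f_m**2 + eps**2)**(1 - p/2), elementwise.

    f_m : (n,) kernel values at the current model
    p   : (n,) norm at each cell
    eps : small stabilizing constant (the irls_threshold property)
    """
    return 1.0 / (f_m**2 + eps**2) ** (1.0 - p / 2.0)

# For p = 2 everywhere, the exponent is zero and all weights equal 1,
# recovering the standard least-squares (l2) case.
w = lp_weights_unscaled(np.array([0.0, 0.5, 1.0]), np.full(3, 2.0), 1e-2)
```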

\(\boldsymbol{\lambda}\) applies optional scaling to the IRLS weights (when the irls_scaled property is True). The scaling preserves the balance between the data misfit and the components of the regularization, based on the derivative of the l2-norm measure, and it aids convergence by ensuring the model does not deviate aggressively from the global 2-norm solution during the first few IRLS iterations.

To apply elementwise scaling, let

\[f_{max} = \big \| \, \mathbf{f_m} \, \big \|_\infty\]

and define a vector \(\mathbf{\tilde{f}_{\! max}}\) such that:

\[\begin{split}\tilde{f}_{\! i,max} = \begin{cases} f_{max} & \textrm{for} \; p_i \geq 1 \\ \dfrac{\epsilon}{\sqrt{1 - p_i}} & \textrm{for} \; p_i < 1 \end{cases}\end{split}\]

The elementwise scaling vector \(\boldsymbol{\lambda}\) is:

\[\boldsymbol{\lambda} = \bigg [ \frac{f_{max}}{\mathbf{\tilde{f}_{\! max}}} \bigg ] \odot \bigg [ \mathbf{\tilde{f}_{\! max}}^{\!\! 2} + \epsilon^2 \bigg ]^{1 - \mathbf{p}/2}\]
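The scaling vector can likewise be sketched in NumPy, following the definitions of \(f_{max}\) and \(\mathbf{\tilde{f}_{\! max}}\) above. Again a hypothetical sketch, not the library's code:

```python
import numpy as np

def lp_scaling(f_m, p, eps):
    """Hypothetical sketch of the elementwise scaling vector lambda.

    f_m : (n,) kernel values, p : (n,) norm per cell, eps : irls_threshold.
    """
    # f_max = infinity norm of f_m
    f_max = np.max(np.abs(f_m))
    # f_tilde equals f_max where p >= 1 ...
    f_tilde = np.full_like(np.asarray(f_m, dtype=float), f_max)
    # ... and eps / sqrt(1 - p) where p < 1
    below = p < 1
    f_tilde[below] = eps / np.sqrt(1.0 - p[below])
    # lambda = (f_max / f_tilde) * (f_tilde**2 + eps**2)**(1 - p/2)
    return (f_max / f_tilde) * (f_tilde**2 + eps**2) ** (1.0 - p / 2.0)

# For p = 2 everywhere, f_tilde = f_max and the exponent is zero,
# so the scaling is the identity (all ones).
lam = lp_scaling(np.array([0.0, 0.5, 1.0]), np.full(3, 2.0), 1e-2)
```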