simpeg.regularization.SparseSmallness.update_weights
- SparseSmallness.update_weights(m)
Update the IRLS weights for sparse smallness regularization.
- Parameters:
  - m : numpy.ndarray
    The model used to update the IRLS weights.
Notes
For the model \(\mathbf{m}\) provided, the regularization kernel function for sparse smallness is given by:
\[\mathbf{f_m}(\mathbf{m}) = \mathbf{m} - \mathbf{m}^{(ref)}\]
where \(\mathbf{m}^{(ref)}\) is the reference model; see Smallness.f_m() for a more comprehensive definition.
The IRLS weights are computed via:
\[\mathbf{w_r} = \boldsymbol{\lambda} \oslash \Big [ \mathbf{f_m}^{\!\! 2} + \epsilon^2 \Big ]^{1 - \mathbf{p}/2}\]
where \(\oslash\) represents elementwise division, \(\epsilon\) is a small constant added for stability of the algorithm (set using the irls_threshold property), and \(\mathbf{p}\) defines the norm for each cell (defined using the norm property).
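The weighting formula above can be sketched in NumPy. Note that `irls_weights` below is an illustrative helper, not part of the SimPEG API (SimPEG applies this computation internally when `update_weights` is called), and the scaling \(\boldsymbol{\lambda}\) discussed next is omitted here:

```python
import numpy as np

def irls_weights(f_m, p, epsilon):
    """Unscaled IRLS weights: 1 / (f_m**2 + epsilon**2)**(1 - p/2).

    f_m     : regularization kernel, i.e. m - m_ref, per cell
    p       : norm for each cell (scalar or array); p = 2 recovers
              uniform unit weights, p < 2 promotes sparsity
    epsilon : small IRLS threshold added for numerical stability
    """
    return 1.0 / (f_m**2 + epsilon**2) ** (1.0 - p / 2.0)

# Cells with small residuals receive large weights (they are cheap to
# keep small), while large residuals are down-weighted, which is what
# drives the sparse (compact) solution.
f_m = np.array([0.0, 0.1, 1.0])
w = irls_weights(f_m, p=0.0, epsilon=1e-2)
```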
\(\boldsymbol{\lambda}\) applies optional scaling to the IRLS weights (when the irls_scaled property is True). The scaling acts to preserve the balance between the data misfit and the components of the regularization based on the derivative of the l2-norm measure. It also assists convergence by ensuring the model does not deviate aggressively from the global 2-norm solution during the first few IRLS iterations.
To compute the scaling, let
\[f_{max} = \big \| \, \mathbf{f_m} \, \big \|_\infty\]
and define a vector array \(\mathbf{\tilde{f}_{\! max}}\) such that:
\[\begin{split}\tilde{f}_{\! i,max} = \begin{cases} f_{max} \;\;\;\;\; \textrm{for} \; p_i \geq 1 \\ \frac{\epsilon}{\sqrt{1 - p_i}} \;\;\; \textrm{for} \; p_i < 1 \end{cases}\end{split}\]
The scaling quantity \(\boldsymbol{\lambda}\) is:
\[\boldsymbol{\lambda} = \Bigg [ \frac{f_{max}}{\mathbf{\tilde{f}_{max}}} \Bigg ] \odot \Big [ \mathbf{\tilde{f}_{max}}^{\!\! 2} + \epsilon^2 \Big ]^{1 - \mathbf{p}/2}\]
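The scaling expressions above translate directly to NumPy. As before, `irls_scaling` is an illustrative helper written from the formulas, not the SimPEG implementation:

```python
import numpy as np

def irls_scaling(f_m, p, epsilon):
    """Scaling lambda built from f_max and the per-cell f_tilde_max.

    f_m     : regularization kernel values, per cell
    p       : norm for each cell (scalar or array)
    epsilon : IRLS threshold
    """
    f_max = np.linalg.norm(f_m, np.inf)          # infinity norm of f_m
    p = np.broadcast_to(p, f_m.shape).astype(float)
    # f_tilde: f_max where p >= 1, else epsilon / sqrt(1 - p)
    # (clip keeps the sqrt argument positive where the branch is unused)
    f_tilde = np.where(
        p >= 1, f_max, epsilon / np.sqrt(np.clip(1.0 - p, 1e-30, None))
    )
    return (f_max / f_tilde) * (f_tilde**2 + epsilon**2) ** (1.0 - p / 2.0)
```

For p = 2 the exponent vanishes and the ratio is 1, so the scaling reduces to unity and the ordinary least-squares weighting is recovered.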