simpeg.regularization.AmplitudeSmoothnessFirstOrder.update_weights

AmplitudeSmoothnessFirstOrder.update_weights(m)[source]

Update the IRLS weights for sparse smoothness regularization.

Parameters:
m : numpy.ndarray

The model used to update the IRLS weights.

Notes

Let us consider the IRLS weights for sparse smoothness along the x-direction. When the class property `gradient_type` is `'components'`, the IRLS weights are computed using the regularization kernel function, and we define:

$$\mathbf{f_m} = \mathbf{G_x} \big[ \mathbf{m} - \mathbf{m}^{(ref)} \big]$$

where $\mathbf{m}$ is the model provided, $\mathbf{G_x}$ is the partial cell-gradient operator along x (i.e., the x-derivative), and $\mathbf{m}^{(ref)}$ is a reference model (optional, activated using `reference_model_in_smooth`). See SmoothnessFirstOrder.f_m() for a more comprehensive definition of the regularization kernel function.
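As a hedged illustration of this kernel (not the library's actual operators, which are built from the mesh by discretize), a 1D sketch with a hypothetical first-difference matrix standing in for $\mathbf{G_x}$ might look like:

```python
import numpy as np

# Hypothetical 1D example: G_x as a first-difference operator mapping
# n cell-centered values to n - 1 interior faces (unit cell widths assumed).
n = 5
G_x = (np.diag(np.ones(n - 1), k=1) - np.eye(n))[:-1, :]  # shape (n-1, n)

m = np.array([0.0, 0.0, 1.0, 1.0, 0.0])  # model
m_ref = np.zeros(n)                      # optional reference model

# Regularization kernel: nonzero only where the model changes between cells.
f_m = G_x @ (m - m_ref)
print(f_m)  # → [ 0.  1.  0. -1.]
```

The kernel lives on faces, one value per pair of adjacent cells, which is why the IRLS weights derived from it target jumps in the model rather than the model values themselves.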

However, when the class property `gradient_type` is `'total'`, the IRLS weights are computed using the magnitude of the total gradient, and we define:

$$\mathbf{f_m} = \mathbf{A_{cx}} \sum_{j=x,y,z} \Big| \mathbf{A_j} \mathbf{G_j} \big[ \mathbf{m} - \mathbf{m}^{(ref)} \big] \Big|$$

where $\mathbf{A_j}$ for $j = x, y, z$ averages the partial gradients from their respective faces to cell centers, and $\mathbf{A_{cx}}$ averages the sum of the absolute values back to the appropriate faces.
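The averaging steps can be sketched in 1D, where the sum over $j$ reduces to $j = x$ alone. The averaging matrices below are hypothetical stand-ins built by hand; the library constructs the real ones from the mesh:

```python
import numpy as np

# Hypothetical 1D illustration: gradients live on the n - 1 interior faces,
# A_x averages face values to cell centers, A_cx averages cell values to faces.
n = 5
G_x = (np.diag(np.ones(n - 1), k=1) - np.eye(n))[:-1, :]  # cells -> faces

A_x = np.zeros((n, n - 1))         # faces -> cell centers
A_x[0, 0] = A_x[-1, -1] = 1.0      # boundary cells touch a single face
for i in range(1, n - 1):
    A_x[i, i - 1] = A_x[i, i] = 0.5

A_cx = np.zeros((n - 1, n))        # cell centers -> faces
for i in range(n - 1):
    A_cx[i, i] = A_cx[i, i + 1] = 0.5

m = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
m_ref = np.zeros(n)

# Magnitude-of-total-gradient kernel, on faces.
f_m = A_cx @ np.abs(A_x @ G_x @ (m - m_ref))
print(f_m)  # → [0.25 0.5  0.5  0.75]
```

Because the absolute value is taken at cell centers before averaging back to faces, $\mathbf{f_m}$ here is nonnegative and varies more smoothly than the signed per-face gradient.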

Once $\mathbf{f_m}$ is obtained, the IRLS weights are computed via:

$$\mathbf{w_r} = \boldsymbol{\lambda} \oslash \big[ \mathbf{f_m}^{2} + \epsilon^2 \big]^{1 - p/2}$$

where $\oslash$ represents elementwise division, $\epsilon$ is a small constant added for stability of the algorithm (set using the `irls_threshold` property), and $p$ defines the norm for each element (set using the `norm` property).
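Numerically, the weight formula is a single elementwise expression. A minimal sketch (with the scaling $\boldsymbol{\lambda}$ set to 1, i.e. scaling disabled; `eps` and `p` play the roles of `irls_threshold` and `norm`):

```python
import numpy as np

f_m = np.array([0.0, 1.0, 0.0, -1.0])  # regularization kernel values
eps = 1e-2                             # irls_threshold: stabilizing constant
p = 0.0                                # norm: p < 2 promotes sparsity
lam = 1.0                              # scaling disabled for this sketch

# w_r = lam / (f_m**2 + eps**2)**(1 - p/2), elementwise
w_r = lam / (f_m**2 + eps**2) ** (1.0 - p / 2.0)
```

Note that elements where $\mathbf{f_m}$ is near zero receive large weights, which is what drives those gradients toward zero (sparsity) in subsequent iterations, while $p = 2$ makes the exponent zero and recovers uniform weights, i.e. the standard least-squares regularization.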

$\boldsymbol{\lambda}$ applies optional scaling to the IRLS weights (when the `irls_scaled` property is `True`). The scaling preserves the balance between the data misfit and the components of the regularization, based on the derivative of the $\ell_2$-norm measure, and it aids convergence by ensuring the model does not deviate aggressively from the global 2-norm solution during the first few IRLS iterations.

To apply the scaling, let

$$f_{max} = \big\| \, \mathbf{f_m} \, \big\|_\infty$$

and define a vector array $\tilde{\mathbf{f}}_{max}$ such that:

$$\tilde{f}_{i,max} = \begin{cases} f_{max} & \text{for } p_i \geq 1 \\[4pt] \dfrac{\epsilon}{\sqrt{1 - p_i}} & \text{for } p_i < 1 \end{cases}$$

The scaling vector λ is:

$$\boldsymbol{\lambda} = \Bigg[ \frac{f_{max}}{\tilde{\mathbf{f}}_{max}} \Bigg] \odot \Big[ \tilde{\mathbf{f}}_{max}^{2} + \epsilon^2 \Big]^{1 - p/2}$$
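Putting the scaling and the weights together, a sketch consistent with the formulas above (variable names are illustrative; the library's internals may differ):

```python
import numpy as np

f_m = np.array([0.0, 1.0, 0.0, -1.0])  # regularization kernel values
eps = 1e-2                             # irls_threshold
p = np.zeros_like(f_m)                 # per-element norms (norm property)

# f_max = || f_m ||_inf
f_max = np.linalg.norm(f_m, np.inf)

# f_tilde: f_max where p_i >= 1, eps / sqrt(1 - p_i) where p_i < 1
f_tilde = np.full_like(f_m, f_max)
f_tilde[p < 1] = eps / np.sqrt(1.0 - p[p < 1])

# Scaling vector lambda, then the scaled IRLS weights.
lam = (f_max / f_tilde) * (f_tilde**2 + eps**2) ** (1.0 - p / 2.0)
w_r = lam / (f_m**2 + eps**2) ** (1.0 - p / 2.0)
```

With $p_i = 0$ everywhere, every element of $\boldsymbol{\lambda}$ is identical here, so the scaling rescales all the weights uniformly toward the magnitude implied by the 2-norm solution rather than reshaping their relative sizes.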