At some point in many geophysical workflows, an inversion is a necessary step for answering the geoscientific question at hand, whether it is recovering a reflectivity series from a seismic trace in a deconvolution problem, finding a susceptibility model from magnetic data, or recovering conductivity from an electromagnetic survey. This is particularly true when working with data sets where it may not even be clear how to plot the data: 3D direct current resistivity and induced polarization surveys (it is not necessarily clear how to organize data into a pseudosection) or multicomponent data, such as electromagnetic data (we can measure three spatial components of electric and/or magnetic fields through time over a range of frequencies). Inversion is a tool for translating these data into a model we can interpret. The goal of the inversion is to find a “model” — some description of the earth's physical properties — that is consistent with both the data and geologic knowledge.
In a general inverse problem, we start from a forward problem, of the form ℱ[m] = d, where ℱ is the forward operator (the mathematical description of the physics/problem), d is our data, and m is our earth model (an array of numbers that describes the physical properties of the earth). Matt Hall kicked off the discussion of inversions in The Leading Edge in his Linear Inversion tutorial (Hall, 2016). He walked through how to solve the classic linear inverse problem in which the forward simulation takes the form ℱ[m] = Gm = d. The example he demonstrated is a deconvolution problem; in that case, G is a convolution matrix, m is the reflectivity series, and d is a seismic trace. He introduced the concepts of an underdetermined problem, motivated the need for regularization, formulated the inversion in terms of an optimization problem, and solved the linear inverse problem (in true polyglot fashion, using Python, Lua, Julia, and R). In this tutorial, we will pick up from there and explore a nonlinear forward problem, of the form ℱ[m] = d; in this case, our forward operator is a function of the model. In the accompanying notebooks (https://github.com/seg), we use SimPEG (http://simpeg.xyz) for the Python implementation of the physics simulations, optimization, and structure necessary to perform an inversion (Cockett et al., 2015).
In summary, to implement the forward simulation for the MT problem (ℱ[m] = d), we break it into two steps:
solve A(m)u = b; and
compute the impedance data d = P(u).
In the first notebook, we provide details on how each step is performed using a finite difference approach. If you are looking for more on numerical discretization, we wrote a tutorial on finite volume methods (Cockett et al., 2016).
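The notebooks implement the forward simulation with a finite difference discretization. As a rough, self-contained sketch of the same two-step physics, the 1D MT response of a layered earth can also be computed with the classic impedance recursion; the function names and sign conventions below are our own, not those of the notebooks:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def mt1d_forward(sigma, thicknesses, freqs):
    """Surface impedance of a 1D layered earth.

    sigma       : conductivities (S/m), top layer first; the last entry is
                  the basement halfspace
    thicknesses : thicknesses (m) of all layers except the basement
    freqs       : frequencies (Hz)
    Returns the complex impedance Z at the surface for each frequency.
    """
    omega = 2.0 * np.pi * np.asarray(freqs, dtype=float)
    Z = np.empty(omega.shape, dtype=complex)
    for i, w in enumerate(omega):
        k = np.sqrt(1j * w * MU0 * np.asarray(sigma, dtype=float))  # wavenumbers
        Z0 = 1j * w * MU0 / k                                       # intrinsic impedances
        z = Z0[-1]                                                  # basement halfspace
        for j in range(len(thicknesses) - 1, -1, -1):               # recurse upward
            t = np.tanh(k[j] * thicknesses[j])
            z = Z0[j] * (z + Z0[j] * t) / (Z0[j] + z * t)
        Z[i] = z
    return Z

def apparent_resistivity_phase(Z, freqs):
    """Convert complex impedance to apparent resistivity (ohm-m) and phase (deg)."""
    omega = 2.0 * np.pi * np.asarray(freqs, dtype=float)
    rho_a = np.abs(Z) ** 2 / (omega * MU0)
    phase = np.degrees(np.angle(Z))
    return rho_a, phase
```

A useful sanity check before inverting anything: for a uniform halfspace of conductivity σ, this returns ρa = 1/σ at every frequency, with a phase of 45°.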
The inversion aims to recover a model by evaluating ℱ⁻¹[d]. Just as in the linear problem, we require regularization to select a model from the infinitely many that can fit the data. Before we tackle this ill-posed inverse problem, let's explore an example of nonuniqueness: how can different models give us the same data?
A classic example that demonstrates the nonuniqueness of MT data is the equivalence of the conductivity-thickness product (conductance) of a thin layer. If we start with a layer that has a conductivity of σ, halve its thickness, and double its conductivity, the resulting data will be similar. In Figure 1, we show apparent resistivity and phase data for five models, each of which has the same conductance. In all of the simulations, the data show a decrease in apparent resistivity and an increase in phase starting at ∼10 Hz. Thus, in all of the data we have evidence of a conductive layer, and the frequency range at which it appears is an indicator of the depth of the layer (you can explore by changing the depth variable in the model setup of the second notebook). However, all scenarios produce similar data. Even with a small amount of noise, we cannot expect an inversion code to separate the conductivity and thickness of a conductive unit without incorporating additional information. When setting up the inverse problem and defining regularization (next up), it is important to realize that the choices we make there will influence the character of the model we recover, as the data alone do not provide us with a unique model.
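The connection between the frequency at which the conductor appears and its depth comes from the skin depth, δ = √(2ρ/(ωμ0)) ≈ 503 √(ρ/f) m. A minimal sketch (the numbers below are illustrative and not tied to the models in Figure 1):

```python
import numpy as np

def skin_depth(rho, f):
    """Skin depth (m) in a halfspace of resistivity rho (ohm-m) at frequency f (Hz).

    delta = sqrt(2 * rho / (omega * mu0)) ~= 503 * sqrt(rho / f)
    """
    mu0 = 4e-7 * np.pi
    return np.sqrt(2.0 * rho / (2.0 * np.pi * f * mu0))
```

At 10 Hz in a 100 Ω·m background, δ ≈ 1.6 km, which sets the rough depth scale that a signature appearing at 10 Hz can sense.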
The first term is often referred to as the “smallness” as it measures the “size” of the model (in the l2 sense). The matrix Ws is generally taken to be a diagonal matrix that may contain information about the length scales of the model or be used to weight the relative importance of various parameters in the model. The scalar αs weights the relative importance of this term in the regularization. Notice that we include a reference model, mref. Often this is defined as a constant value, but if more information is known about the background, that can be used to construct a more intricate reference model. Here, we will not delve too far into how the reference model impacts the recovered results, but you are encouraged to change mref in the notebooks and investigate its impact.
The second term is often referred to as the “smoothness.” The matrix Wz approximates the derivative of the model with respect to depth, and is hence a measure of how “smooth” the model is. The term αz weights the relative importance of smoothness in the regularization.
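Putting the two terms together, a simplified numpy sketch of this regularization on a uniform 1D mesh might look like the following. Taking Ws as a volume-scaled identity and Wz as a scaled first-difference operator is an assumption for illustration; the notebooks use SimPEG's regularization classes instead:

```python
import numpy as np

def tikhonov_regularization(m, m_ref, dz, alpha_s=1e-2, alpha_z=1.0):
    """phi_m = alpha_s * ||Ws (m - m_ref)||^2 + alpha_z * ||Wz m||^2.

    A simplified sketch on a uniform mesh with cell size dz:
    Ws is a volume-weighted identity (the "smallness" weights), and
    Wz approximates d/dz with first differences (the "smoothness" weights).
    """
    n = len(m)
    Ws = np.sqrt(dz) * np.eye(n)
    D = (np.eye(n - 1, n, 1) - np.eye(n - 1, n)) / dz  # first differences
    Wz = np.sqrt(dz) * D
    smallness = alpha_s * np.sum((Ws @ (np.asarray(m) - np.asarray(m_ref))) ** 2)
    smoothness = alpha_z * np.sum((Wz @ np.asarray(m)) ** 2)
    return smallness + smoothness
```

A model equal to the reference model and constant with depth incurs no penalty at all; any departure from mref or any gradient with depth increases ϕm.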
From this setup, we see that there are quite a number of choices to make: defining uncertainties on the data (Wd), selecting a reference model (mref), choosing the importance of smallness and smoothness (αs and αz), and selecting a trade-off parameter (β). Let's start by assuming a known noise model, fixing αs and αz, and exploring the impact of the trade-off parameter β. Our forward problem depends on the electrical conductivity; for the inverse problem, however, we are free to use any function of the conductivity as a parameter. The electrical conductivity of earth materials varies over many orders of magnitude and is strictly positive, so it is advantageous to use log(σ) as the model in the inverse problem. For a nonlinear problem, we also have the additional choice of the initial model m0 at which to start the inversion. Although we will not discuss the choice of m0 here, you are encouraged to change the initial model in the notebooks and examine its impact, which can be significant.
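As a tiny illustration of the log parameterization (the conductivity values are arbitrary):

```python
import numpy as np

# Parameterize the inversion with m = log(sigma): sigma = exp(m) is strictly
# positive by construction, and conductivities spanning many orders of
# magnitude become comparable in size in model space.
sigma = np.array([1e-2, 1e0, 1e-3])  # conductivities (S/m)
m = np.log(sigma)                    # the model the inversion works with
sigma_back = np.exp(m)               # mapping back always yields sigma > 0
```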
The β knob. If the noise is Gaussian, then the sum of squares (our data misfit) follows a chi-squared distribution, which has an expected value of Ndata (in our case, we divide this by two to match our definition of ϕd). Thus, the ideal choice of β is one that gives us ϕd = Ndata/2. To demonstrate the effect of β, we consider a five-layer model, originally shown in Whittall and Oldenburg (1992), and show inversions that achieve the target misfit, underfit the data, and overfit the data. The conductivity model used is the solid black line in Figure 2a. For these inversions, we fix the regularization parameters to αs = 10⁻², αz = 1 and set mref = log(10⁻² S/m) and the initial model m0 = mref (feel free to change them in the notebook). We start the inversion with a large β and decrease its value to plot the trade-off, or Tikhonov, curve (Figure 2b). In blue, we show the inversion that is stopped when the data misfit approximately equals the target misfit (the star in Figure 2b). Figures 2c and 2d show the data as apparent resistivity and phase, a visualization of our complex-valued impedance data. The blue line in Figure 2a shows the recovered model, which identifies the general structure and conductivity values of the five layers. In this case, we are employing a smooth regularization, so we expect to recover smoothly varying structures.
If we instead choose a larger β, reducing the contribution of the data misfit to the objective function, we underfit the data, as shown in orange in Figure 2. Although we still see evidence of two conductive structures, we do not recover their amplitudes, and we do a poor job of resolving the location and widths of the conductive layers. (If you had to pick the top of the first layer, where would it be?) Examining the plots in Figures 2c and 2d, we see that more insight about the subsurface conductivity can be gained by pushing the inversion to extract more from the data.
At the other extreme, we can choose a very small β and try to fit all of the details in the data. Doing this, we obtain the results shown in green in Figure 2. When we push the inversion to fit the (noisy) data very closely, we end up fitting the noise: the inversion exaggerates conductivity contrasts and introduces oscillatory, erroneous conductivity structures.
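The target misfit ϕd = Ndata/2 can be checked numerically: normalized Gaussian residuals give a chi-squared statistic whose mean is Ndata. A quick sketch (the noise level and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n_data, n_trials = 50, 2000
std = 0.05  # assumed standard deviation of the Gaussian noise

# Residuals of pure Gaussian noise, normalized by the standard deviation
# (the role of Wd): the sum of squares is chi-squared distributed with
# expected value n_data, so phi_d = 0.5 * ||Wd r||^2 has expected value
# n_data / 2 -- the target misfit.
residuals = rng.normal(0.0, std, size=(n_trials, n_data))
phi_d = 0.5 * np.sum((residuals / std) ** 2, axis=1)
print(phi_d.mean())  # close to n_data / 2 = 25
```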
The α knobs. For the inversions shown in Figure 2, we prescribed the values of αs and αz. What impact do they have on the character of the model we recover?
In Figure 3, we compare two inversions with different regularization parameters: (1) a “smooth” inversion (blue line) with αs = 10⁻⁵ and αz = 1 and (2) a “small” inversion (orange line) with αs = 1 and αz = 10⁻⁵. In both, β was chosen so that the target misfit was achieved. The smooth inversion penalizes large gradients; the resulting model has two smooth peaks. Note that we smooth over the resistive third layer, overestimating its conductivity. The small inversion instead favors models that are close to the reference model; this model has more structure. The resistivity of the first layer matches well, and the conductivity of the third layer is closer to its true value, but additional oscillatory structures are introduced at depth. In the third notebook, you can explore the impact of these parameters yourself.
In practice, these parameters are often determined by experimentation; strategies such as examining length scales are often successfully adopted (see page 38 in Oldenburg and Li, 2005). Changing the relative values of αs and αz is one way to bring in a priori information. If we know very little, often starting with a smooth inversion is a good option; this penalizes structure (high gradients) while showing general trends. If more structure is expected, or a reliable reference model can be built from additional data such as physical property measurements, well logs, or additional geophysical/geologic data, then the influence of the smallness term may be increased. There are a few other ways to bring in additional a priori information. If we are expecting a more “blocky” model, we can choose a different norm (such as an l1 norm), or if we have structural constraints, we can introduce other weighting structures (e.g., on the smoothness); these are knobs for another tutorial, and there is discussion in Oldenburg and Li (2005).
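To see the effect of the α weights without the nonlinear MT machinery, here is a toy linear Tikhonov solve in the spirit of the earlier linear tutorial; the kernel, sizes, and noise level are all made up for illustration:

```python
import numpy as np

def solve_tikhonov(G, d, m_ref, beta, alpha_s, alpha_z):
    """Minimize ||G m - d||^2 + beta * (alpha_s ||m - m_ref||^2 + alpha_z ||D m||^2)
    in closed form via the normal equations (D is a first-difference matrix)."""
    n = G.shape[1]
    D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)
    A = G.T @ G + beta * (alpha_s * np.eye(n) + alpha_z * D.T @ D)
    b = G.T @ d + beta * alpha_s * m_ref
    return np.linalg.solve(A, b)

# Toy linear "forward": decaying averaging kernels over a 1D model
n = 40
z = np.linspace(0.0, 1.0, n)
G = np.array([np.exp(-p * z) for p in np.linspace(1.0, 20.0, 15)])
m_true = np.where((z > 0.3) & (z < 0.5), 1.0, 0.0)  # blocky true model
d = G @ m_true + 0.01 * np.random.default_rng(0).normal(size=G.shape[0])

# "Smooth" inversion: smoothness dominates; "small" inversion: smallness dominates
m_smooth = solve_tikhonov(G, d, np.zeros(n), beta=1.0, alpha_s=1e-5, alpha_z=1.0)
m_small = solve_tikhonov(G, d, np.zeros(n), beta=1.0, alpha_s=1.0, alpha_z=1e-5)
```

You can plot m_smooth and m_small against m_true to see the same character difference discussed for Figure 3: one regularization favors gentle gradients, the other favors staying near the reference model.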
In this tutorial, we have introduced the forward simulation for MT and explored a few aspects of the inverse problem. Prior to jumping into an inversion, it is important to know the limitations of the survey and data, and what you can and cannot resolve, even if there is no noise. Forward modeling is a powerful tool for setting realistic expectations of an inversion.
To set up and solve the inverse problem, we posed the inversion as an optimization problem that searches for a model of the earth that minimizes an objective function consisting of a data misfit and a regularization term. There are many choices to be made in defining the various elements of the inverse problem, including how to assign uncertainties, selecting a trade-off parameter, defining the regularization function, and choosing initial and reference models. In this tutorial we explored two of the knobs: (1) the trade-off parameter and (2) the relative importance of smallness and smoothness contributions in Tikhonov regularization. The interactive notebooks that are provided allow you to change parameters and experiment with their impact.