In some situations, seismic data are difficult to image even when factors such as illumination, anisotropy, and attenuation are resolvable. An often-overlooked factor is the geometric complexity of subsurface reflectors, which we propose can play a dominant role in the quality of seismic imaging. This has been documented in subsalt regions with poor image quality where 3D vertical seismic profiles (VSPs) are available. VSPs often demonstrate that there is good illumination below salt, but the complexity of the observed downgoing wave and diffractions cannot be simulated with smooth earth models. We use synthetic examples with high geometric complexity and no other complications to analyze the imaging process: a 2D sediment-salt model with rough salt-top topography and a 1D model with complex reflectivity. In these models, it is difficult to estimate velocity and challenging to image the data. These synthetic examples help us understand the limitations of imaging with smooth models, create guidelines to identify geometric complexity in field data, and develop possible mitigations to improve imaging. Further real-data examples will be needed to test for geometric complexity.

With significant advances in acquisition and processing technologies such as reverse time migration (RTM), full-waveform inversion, broadband acquisition and processing, tomography, and least-squares migration, seismic imaging has experienced notable improvements that have led to velocity models with impressive levels of detail and to unprecedented high-resolution images in areas of complex geology. However, there are still situations where imaging is challenging in areas with complex reflectivity. These are regions where it is difficult to identify reflections that are essential in guiding the velocity estimation and the imaging processes. Li et al. (2014) show examples of poor imaging in the presence of complex salt geometry in the deepwater Gulf of Mexico. Factors such as lack of illumination, high-order anisotropy, attenuation, and elastic wave propagation, among others, are often quoted as being responsible for poor results, but while it is possible that these might be contributing factors, they might not always be the dominant factors at play.

Etgen et al. (2014) and Luo et al. (2017) used synthetic models to demonstrate the detrimental effects of high-contrast, short-wavelength velocity variations in imaging. Bakulin et al. (2022) and Stork (2021) have also demonstrated the deleterious effects in a non-salt-involved area. Evidence of geometric complexity near salt domes has been identified in the Gulf of Mexico with 3D vertical seismic profile (VSP) data. For example, Lee and Guo (2016) present an excellent case of 3D VSP processing at the Mad Dog Field. In their paper, VSPs show no lack of downgoing wave energy propagating subsalt; however, VSP data show high geometric complexity in the observed downgoing wave, including diffractions (Figures 2 and 4 of their paper). This complexity cannot be recovered with velocity models inverted from surface seismic data. The roughness and complexity of the high-impedance salt interface need to be known; otherwise, we cannot duplicate the true downgoing wave packet in imaging.

Geometric complexity is a frequently overlooked but important factor having a strong impact on imaging seismic data. In some situations, geometric complexity might be the dominant factor in areas of poor imaging even when illumination is adequate. For this reason, we would like to investigate and understand the implications that geometric complexity has in wave propagation, in reflectivity of the subsurface, and consequently in seismic imaging. We use simple synthetic examples to demonstrate the important role vertical and lateral geometric complexity plays in wave propagation, and we discuss their implications for velocity model building and seismic imaging.

To better understand what we mean by geometric complexity, notice that seismic imaging is challenged by vertical complexity (too many closely spaced reflections) and lateral complexity (the roughness of reflector surfaces). The end members of these classes are sparse reflectivity, which is easy to image, and fractal reflectivity (Mandelbrot, 1982; Barton and La Pointe, 1995), which is highly complex at all scales and difficult to image. In this paper we avoid these two extremes. Instead, we ask whether there is a more unifying concept that stands between very simple and very complex. We present a unified view of geometric complexity by introducing the concept of locally dense but globally sparse (LG) reflectivity. LG is not as simple as sparse reflectivity (reflectors far apart) but not as complex as fractal reflectivity (dense at all scales).

We use a 2D example and a 1D example to illustrate the concept. While we recognize the earth is three-dimensional, 2D and 1D examples provide simplicity and are sufficient to illustrate the interplay among lateral and vertical geometric complexity, wave propagation, and seismic imaging. In particular, the 1D example provides a clear demonstration that even in some situations when the structure of reflectors is not an issue, vertical complexity can be detrimental to the resolution of inversion of seismic data.

Inversion challenges and limitations. Seismic imaging depends on an earth model that accurately predicts, through modeling, the characteristics of recorded seismic data: traveltimes, amplitudes, and waveforms. It is well understood that seismic data alone are not enough to estimate an accurate detailed earth model without additional interpretational constraints and prior information. Due to the nonuniqueness and instability of seismic inversion, numerical regularization and numerical constraints are used to stabilize the inversion. Unfortunately, these types of purely numerical constraints decrease resolution and tend to create geometric artifacts. Unchecked, these numerical artifacts can create detailed earth models that perfectly fit the data but might not be geologically plausible.
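The resolution cost of numerical regularization can be seen in a toy damped least-squares deconvolution. This is an illustrative sketch with made-up numbers, not any specific production workflow: increasing the damping weight stabilizes the estimate but smears the spiky reflectivity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
tw = np.arange(-10, 11)
w = np.exp(-0.5 * (tw / 3.0) ** 2)            # Gaussian wavelet, 21 taps

# Convolution matrix W (rows = data samples)
W = np.zeros((n, n))
for i in range(n):
    for k, wk in zip(tw, w):
        j = i + int(k)
        if 0 <= j < n:
            W[i, j] = wk

r = np.zeros(n)
r[[20, 24, 70]] = [1.0, -0.8, 0.5]            # spiky "true" reflectivity
d = W @ r + 0.01 * rng.standard_normal(n)     # noisy data

def tikhonov(W, d, lam):
    """Damped least squares: argmin ||W r - d||^2 + lam ||r||^2."""
    return np.linalg.solve(W.T @ W + lam * np.eye(W.shape[1]), W.T @ d)

r_light = tikhonov(W, d, 1e-4)                # weak damping: detailed but unstable
r_heavy = tikhonov(W, d, 1e1)                 # strong damping: stable but smeared
```

The heavily damped model has a smaller norm and a larger data residual: stability is bought with resolution, which is exactly the trade-off described above.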

Mathematically, there is a limit to what is achievable by inversion methods. The Riemann-Lebesgue lemma gives us an important insight into what is mathematically invertible, that is, the solvability of geometric complexity by inversion (Lau and Yin, 2017a). This lemma states that the frequency and/or wavenumber components of a given function become more difficult to invert as they increase. It also shows that oscillations leaking from the null space into the inverted result might create models that fit the data but are geologically unrealistic, distorting the true geometry of imaged reflectors. Appendix A illustrates these properties with a simple example.

Measures of complex geometry (Betti numbers). Seismic inversion relies on calculating the error between observed data and computed data. The error term does not account for geometric artifacts: we see nongeologic “blobs” or “wings” in the velocity model and “halos” or “swings” in the migrated image. If we have many inversion model results, then a measure of geometric complexity can be used to pick the model with the lowest complexity measure (i.e., fewer artifacts). One such measure of complex geometry/topology is the concept of Betti numbers. A high Betti number means that there are many “holes” in the model or in the image; low Betti numbers are more favorable to inversion and imaging (see Lau and Yin, 2012). When we use only low frequencies in a Fourier decomposition, we are decreasing geometric complexity. When we use beams to decompose data, we are also decreasing geometric complexity. Betti numbers are by no means the only way to measure geometric complexity, but asking the error term to minimize geometric artifacts is a tall order. See Appendix B for an illustration of Betti numbers and synthetic data.

Statics and thin-lens term, wavefront alignment. A seismic wavefront propagating in a region with high heterogeneity becomes misaligned and incoherent. In the near surface, given that it is not possible to estimate an accurate enough velocity model to resolve high heterogeneity with the wave equation, a solution is to align wavefronts in the time domain with static shifts, effectively replacing the highly variable near-surface velocity with a smooth replacement velocity. A similar process for the subsurface, seismic adaptive optics, was proposed by Etgen (Etgen et al., 2014; Luo et al., 2017) to compensate for high-contrast, short-wavelength velocity variations that cannot be resolved with inversion.
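A minimal time-domain sketch of this alignment idea, assuming a known pilot trace and integer-sample shifts (real residual statics are surface-consistent and estimated iteratively; the names and parameters here are illustrative):

```python
import numpy as np

def align_to_pilot(traces, pilot, max_lag=20):
    """Estimate a static shift for each trace by cross-correlating it with
    a pilot trace, then apply the best integer-lag shift. A toy version of
    residual-statics alignment."""
    lags = np.arange(-max_lag, max_lag + 1)
    aligned = np.empty_like(traces)
    shifts = []
    for i, tr in enumerate(traces):
        cc = [np.dot(np.roll(tr, -l), pilot) for l in lags]
        best = int(lags[int(np.argmax(cc))])
        shifts.append(best)
        aligned[i] = np.roll(tr, -best)   # circular shift; fine away from edges
    return aligned, shifts

# Demo: a Gaussian pulse misaligned by known static shifts of +4 and -6 samples.
t = np.arange(256)
base = np.exp(-0.5 * ((t - 100) / 3.0) ** 2)
traces = np.stack([np.roll(base, 4), np.roll(base, -6), base])
aligned, shifts = align_to_pilot(traces, base)
```

After alignment, the stack of the traces is coherent again; in field processing the pilot itself would be re-estimated from the stack and the procedure iterated.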

Mathematically, when the exact earth model is known, the wave equation exactly matches the seismic data during migration. However, when the exact earth model is unknown, the wave equation approximately separates into two terms: a diffraction term, applied using the known earth model, and a thin-lens term that accounts for the difference between the exact velocity and the estimated velocity (Claerbout, 1985). The purpose of the thin-lens term is to align the propagating wavefronts when the velocity model cannot exactly predict the traveltimes in the recorded data. An accurate migration algorithm should recursively apply the diffraction term followed by the thin-lens term at every extrapolation step. In practice, this is never done due to cost and difficulty. The consequence of ignoring wavefront alignment when facing high geometric complexity in the subsurface is loss of resolution and, in extreme situations, a complete loss of reflections in the image.
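In one-way wave-equation notation, this separation can be written schematically as follows (following Claerbout, 1985; the shorthand with slowness s = 1/v and background slowness s0 = 1/v0 is ours, not the original notation):

```latex
\frac{\partial P}{\partial z}
  \;\approx\;
  \underbrace{\; i\,\sqrt{\omega^{2} s_{0}^{2} - k_{x}^{2}}\; P \;}_{\text{diffraction term (estimated model)}}
  \;+\;
  \underbrace{\; i\,\omega \left( s(x,z) - s_{0} \right) P \;}_{\text{thin-lens term (wavefront alignment)}}
```

Per extrapolation step Δz, the thin-lens factor is a pure phase shift exp[iω(s − s0)Δz]; skipping it does not merely weaken the wavefront, it misaligns it, which is why the error compounds with depth.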

To illustrate the impact of geometric complexity in seismic imaging, our first example is a 2D synthetic salt model. We limited complexity to the top-of-salt horizon only, as this helps us isolate its wave propagation effects. The model has a horizontal bottom of salt and four horizontal subsalt reflectors. Figure 1 shows the layered velocity model. The top-of-salt horizon was created with a fractal algorithm to produce rugosity at all scales (Turcotte, 1997; Bradley et al., 2014). See Appendix C for details.

Figure 2 shows a representative shot gather at the center of the model. The complexity of the top-of-salt reflection is noticeable. Strong-energy events are refractions from the top of salt; notice how these arrive even at near offsets. Base-of-salt and subsalt reflections are weak but still visible in the gather. They all show complex moveouts.

A different perspective of wave propagation is gained by looking at the zero-offset VSP in Figure 3. In the first sediment layer, above 200 ms, the direct downgoing wave remains strong with a narrow pulse, but the pulse rapidly widens while propagating through salt and subsalt sediments. Notice how the base-of-salt reflection and all four subsalt reflectors are weak and become difficult to track once they travel above salt. The strong train of arrivals above salt consists entirely of salt refractions. Also, notice that below salt there are a significant number of down-propagating wavefronts generated in the overburden. Importantly, there is no lack of illumination below salt.

RTM common-offset gathers can also illustrate imaging challenges. Figure 4 shows two RTM common-offset gathers at locations x = 2000 m and x = 3000 m. Notice the high complexity of arrivals in the gathers. There is considerable energy from diving-wave refractions arriving at early times above the top-of-salt reflection at approximately 400 m and 450 m. This energy needs to be attenuated or muted in the final image, otherwise it will mask the much weaker primary reflections. There is a clear bottom-of-salt reflection in the first gather at 850 m depth, but it is very difficult to identify in the second gather. Subsalt sediment reflections are also difficult to see in these two gathers.

Further insight is gained by looking at the snapshots at 550 ms in Figure 5, taken at the middle of the model, the same location of the gather in Figure 2. Figure 5a is the wavefield propagating in the exact model. The strong shallow wavefronts, above 500 m, are all refractions propagating close to the horizontal direction. Primary reflections are weak in comparison. Notice in particular the complexity of all wavefronts traveling downward and illuminating the subsalt reflectors. The first-arrival wavefront is strong but quite asymmetric and low in frequency (white boxes). The complexity of the top-of-salt horizon has created multiple branches in the wavefront penetrating the salt and those traveling below salt.

To compare the wavefront from an alternative smoother model, Figure 5b shows the snapshot using an alternative model with smooth top of salt. RTM is usually run with some degree of smoothness of the model because of uncertainty in the exact shape of the top of salt. Notice that the first arrival is now quite symmetric (white boxes). The figure demonstrates that the smooth model cannot duplicate the asymmetry of the wavefront below salt, which may create problems for imaging subsalt reflectors.

The waveform of a plane wave propagating through the earth changes shape in response to its interaction with short-period multiple reflections (O'Doherty and Anstey, 1971). In our 1D example, the primaries + multiples seismogram has weaker amplitude than the primaries-only seismogram, a counterintuitive result given the expectation that adding multiples should add energy.

To better understand how complex reflectivity can still be an issue in simpler situations with no lateral heterogeneity, our second example is a 1D model with perfectly flat geologic layers. The model has an uneven distribution of reflectivity. This simple example illustrates that an LG distribution, with sparse reflections interspersed with some locally dense reflections, will cause significant problems, such as short-period multiples, for seismic imaging and inversion. The reflectivity is nowhere near the complexity of a fractal distribution, but it has enough uneven geometric variability to significantly challenge seismic inversion. This situation has been observed in areas where coal beds interspersed with clastic layers produce complex stratigraphic filtering (Qi and Hilterman, 2017).

The challenge here is that an LG vertical distribution of primaries makes internal multiples very difficult to remove (demultiple). The LG 1D example looks simple at first, but it needs only a few packages of reflectors that are locally dense (close to each other). We have modeled primaries only and primaries + multiples.

Figure 6 shows that, for a complex LG model, a synthetic of primaries + multiples has weaker amplitude than primaries only. Notice the counterintuitive result that the primaries-only seismogram has stronger amplitude than the primaries + multiples seismogram. Figure 7 is the zero-offset VSP showing first-arrival packages stretched with many trailing cycles. Downgoing waveforms at different depths (white boxes) show significant stretch even though there is no attenuation in the model.
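The primaries-versus-full-response comparison can be reproduced in miniature with a normal-incidence, equal-traveltime (Goupillaud-type) layer recursion. The sketch below is illustrative, with made-up reflection coefficients rather than the paper's model, and uses energy-flux-normalized transmission t = sqrt(1 − r²):

```python
import numpy as np

def full_response(R, nsteps):
    """Normal-incidence impulse response (primaries plus all internal
    multiples) of equal-traveltime layers with interface reflection
    coefficients R. No free surface: upgoing energy escaping above the
    first interface is recorded and absorbed."""
    R = np.asarray(R, dtype=float)
    N = len(R)
    t = np.sqrt(1.0 - R ** 2)          # energy-flux-normalized transmission
    D = np.zeros(N + 1)                # downgoing wave about to hit interface j
    U = np.zeros(N + 1)                # upgoing wave about to exit layer j at its top
    D[0] = 1.0                         # unit impulse injected from the surface
    seis = np.zeros(nsteps)
    for step in range(nsteps):         # one step = one one-way layer traveltime
        D2, U2 = np.zeros(N + 1), np.zeros(N + 1)
        for j in range(N):
            d_in, u_in = D[j], U[j + 1]
            U2[j] += R[j] * d_in + t[j] * u_in      # scattered upward
            D2[j + 1] += t[j] * d_in - R[j] * u_in  # scattered downward (note -R)
        seis[step] = U2[0]             # wave escaping into the upper half-space
        D, U = D2, U2
    return seis

def primaries_only(R, nsteps):
    """Primary arrivals with two-way transmission loss, no multiples."""
    seis = np.zeros(nsteps)
    trans = 1.0
    for j, r in enumerate(R):
        if 2 * j < nsteps:
            seis[2 * j] = r * trans    # primary from interface j arrives at step 2j
        trans *= 1.0 - r ** 2
    return seis

# LG-style reflectivity: sparse singles plus one locally dense packet.
R = [0.1, 0.0, 0.0, 0.3, -0.3, 0.3, -0.3, 0.0, 0.0, 0.1]
full = full_response(R, 60)
prim = primaries_only(R, 60)
```

The sign flip (−R[j]) on downward-reflected upgoing waves is the mechanism behind the destructive interference of short-period multiples described by O'Doherty and Anstey (1971).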

There are situations in seismic imaging when data are difficult to image even though there is good illumination and no complexities such as high-order anisotropy or strong attenuation. In some of these situations, complex geometry of the subsurface geology might play a major role. To study them, we introduced LG as a concept between very simple and very complex geologic geometries, encompassing cases where imaging might still be possible but with strategies different from current practice. We propose the use of Betti numbers as a measure of geometric complexity, with processing aiming to decrease Betti numbers to facilitate imaging and inversion, but without compromising resolution.

Numerical regularization and stabilization techniques, commonly used in seismic imaging and inversion, are convenient because they are easy to apply and require minimum human intervention (interpretation), so they are suitable for processing large amounts of data. However, when facing challenging imaging situations, these numerical tools do not help us understand the cause of poor imaging and, furthermore, can be detrimental, creating numerical artifacts that compromise the geologic validity of inverted models and data.

Our 2D synthetic example illustrated the challenge of building an accurate model to image subsalt reflectors when the top-of-salt horizon has fractal complexity, that is, roughness at all scales. In this synthetic model, as demonstrated with VSP synthetics, there is no illumination problem that hinders recording reflected data below salt. It is the complexity of the top of salt that creates strong refractions that dominate the seismic data. These refractions are not limited to far offsets, so they cannot be muted in common-offset RTM gathers. In addition, the wavefronts illuminating the subsalt reflectors are complex in amplitude and phase, and their bandwidth is narrowed by propagation through the complex overburden.

Our 1D synthetic example shows that even when lateral complexity is nonexistent, complex reflectivity sequences can lead to destructive interference, hiding the true reflectivity in the seismic data. In this case, inversion to recover the true reflectivity will not work. Instead, modeling using external information from well logs is necessary to understand the geology and its seismic response.

Mitigation. Propagating seismic wavefronts in areas of high subsurface heterogeneity can become highly complex. The higher the complexity, the more important wavefront alignment is for wave-equation imaging; otherwise, illuminating wavefronts might accumulate errors to the point of no return, making it impossible to image subsurface reflectors. This would be similar to omitting statics corrections in cases of near-surface velocity heterogeneity.

There are processing methods that could mitigate vertical and horizontal complexity problems to improve imaging. These are ad hoc methods that could be helpful because they do not try to estimate the complexity of the downgoing wave. One such mitigation approach is to alternate between wave-equation datuming and residual statics (Lau and Yin, 2017b). In addition, for cases of high heterogeneity, azimuth- and dip-dependent statics might be needed. Other possible methods are “virtual datuming” (Lau, 1997) and seismic adaptive optics (Etgen et al., 2014; Luo et al., 2017).

The power of redatuming the data is illustrated in Figures 8 and 9. While it is true that redatuming has limitations, it has advantages as a strong filter of diving waves and refracted energy. Figure 8 shows two common-offset sections after downward continuation to 400 m. Notice the clarity of all events — top of salt, bottom of salt, and subsalt reflections — after refractions have been filtered out.
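The downward continuation behind these figures can be sketched, for one temporal frequency and a constant velocity between datums, with the classical phase-shift method. Grid and frequency values below are illustrative, not those of the actual model:

```python
import numpy as np

def phase_shift(P, dx, v, omega, dz):
    """Downward-continue a monochromatic wavefield P(x) by dz using the
    phase-shift method: multiply by exp(i*kz*dz) in the kx domain, with
    kz = sqrt(omega^2/v^2 - kx^2). Evanescent components are zeroed."""
    kx = 2 * np.pi * np.fft.fftfreq(P.size, dx)
    kz2 = (omega / v) ** 2 - kx ** 2
    prop = kz2 > 0                                  # propagating band only
    shift = np.zeros(P.size, dtype=complex)
    shift[prop] = np.exp(1j * np.sqrt(kz2[prop]) * dz)
    return np.fft.ifft(np.fft.fft(P) * shift)

# Hypothetical demo: a Gaussian wavefront at the surface, 25 Hz, 2000 m/s.
dx, v, omega, dz = 10.0, 2000.0, 2 * np.pi * 25.0, 400.0
x = np.arange(512) * dx
P0 = np.exp(-0.5 * ((x - 2560.0) / 200.0) ** 2).astype(complex)
P_dn = phase_shift(P0, dx, v, omega, dz)      # datum at 400 m depth
P_up = phase_shift(P_dn, dx, v, omega, -dz)   # and back to the surface
```

Running the same routine with −dz brings the datumed field back, which also serves as a sanity check: down-then-up is the identity on the propagating (non-evanescent) part of the wavefield.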

Figure 9 shows four downward-continued gathers to 400 m depth at the locations marked by the arrows in Figure 8. In these gathers, it is easier to identify reflections, and if needed, a static-shift alignment can be applied before finishing the imaging process.

The topic of solvability in imaging and inverse problems is still an area of active research, as there is a theoretical limitation to what mathematical methods as a whole can do. See Lau (2018, 2019) for a more general discussion of these ideas. Interestingly, some scientists still question determinism in classical physics (van Strien, 2021).

There are two alternatives to image data in complex situations. One is making the modeling equation more complex; however, two drawbacks to this approach are that it might cause further instability and that a larger number of parameters need to be estimated in the earth model. Instead, we can keep the modeling equation simpler and use processing such as redatuming and statics to overcome complexity.

We illustrated with two synthetic examples how the 1D (vertical variations) and 2D (horizontal variations) models achieve their complexity with only simple rock properties. The effect in their seismic response is due entirely to geometric complexity. We could mitigate vertical and horizontal geometric complexity, but there are no general solutions, especially for estimating the downgoing wave.

When 3D VSPs are available, the geometric complexity of the observed downgoing wave rarely matches in detail the downgoing wave of the final velocity model, but it could be used for qualitative risk analysis of step-out wells. For inversion, the observed complexity of the downgoing wave in VSPs should be compared with the predicted downgoing wave from the inverted velocity model for quality control.

Appendix A: Riemann-Lebesgue lemma

The Riemann-Lebesgue lemma (Lau and Yin, 2017a) states that if f is Lebesgue integrable (the integral of |f| is finite), then the Fourier transform of f satisfies

\[ F(\mathbf{k}) \;=\; \int_{\mathbb{R}^d} f(\mathbf{x})\, e^{-i\,\mathbf{k}\cdot\mathbf{x}}\, d\mathbf{x} \;\longrightarrow\; 0 \quad \text{as } |\mathbf{k}| \to \infty. \]

This is equivalent to saying the following for 1D real functions f: [a, b] → R:

\[ \lim_{k \to \infty} \int_a^b f(x)\,\sin(kx)\, dx \;=\; \lim_{k \to \infty} \int_a^b f(x)\,\cos(kx)\, dx \;=\; 0. \]

A practical implication of the Riemann-Lebesgue lemma on geophysical inversion can be demonstrated with the familiar 1D convolutional model, which can be written as

\[ d(t) \;=\; \int_0^t w(t-\tau)\, r(\tau)\, d\tau, \]

where w(t) is the wavelet function, r(t) the reflectivity function, and d(t) the seismogram. According to the Riemann-Lebesgue lemma, a different reflectivity function would also yield the identical seismogram:

\[ \lim_{\beta \to \infty} \int_0^t w(t-\tau)\,\bigl[ r(\tau) + \alpha \sin(\beta \tau) \bigr]\, d\tau \;=\; \int_0^t w(t-\tau)\, r(\tau)\, d\tau \;=\; d(t). \]

Figures A-1, A-2, and A-3 are numerical and graphical demonstrations of this analytical result, where Figure A-1 uses the original reflectivity. Figures A-2 and A-3 use nonzero values for α and β: α = 0.2, β = 0.9 and α = 0.9, β = 0.9, respectively. This example is a reminder that there is a huge null space even for just a 1D inversion using the convolutional model.
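This null-space behavior is easy to verify numerically. The sketch below uses illustrative parameters, not the actual values behind Figures A-1 through A-3: convolving a Ricker wavelet with r(τ) and with r(τ) + α sin(βτ) gives nearly identical seismograms once β lies well above the wavelet band.

```python
import numpy as np

dt = 0.002                                      # 2 ms sampling
tw = np.arange(-0.1, 0.1 + dt / 2, dt)
fp = 25.0                                       # Ricker peak frequency (Hz)
w = (1 - 2 * (np.pi * fp * tw) ** 2) * np.exp(-(np.pi * fp * tw) ** 2)

tau = np.arange(0.0, 1.0, dt)                   # 1 s of reflectivity
r = np.zeros_like(tau)
r[[100, 180, 300]] = [1.0, -0.6, 0.4]           # sparse reflectivity spikes

alpha, beta = 0.5, 2 * np.pi * 120.0            # strong 120 Hz perturbation
d1 = np.convolve(r, w, mode="same")
d2 = np.convolve(r + alpha * np.sin(beta * tau), w, mode="same")

# Compare away from the window edges, where truncation of the sine
# spoils the oscillatory cancellation:
interior = slice(60, 440)
gap = np.max(np.abs(d1[interior] - d2[interior])) / np.max(np.abs(d1))
```

Despite the perturbation amplitude being half the largest reflection coefficient, the two seismograms differ by a negligible fraction in the interior of the window.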

A direct consequence of the Riemann-Lebesgue lemma is that higher frequencies of a given function become more difficult to invert. It is important to notice that the integral will go to zero at high frequencies regardless of the magnitude of f(x).
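A quick numerical check of this magnitude independence (the function and wavenumbers are chosen arbitrarily for illustration):

```python
import numpy as np

def osc_integral(f, k, a=0.0, b=1.0, n=200001):
    """Trapezoid approximation of the Riemann-Lebesgue integral
    int_a^b f(x) sin(kx) dx."""
    x = np.linspace(a, b, n)
    y = f(x) * np.sin(k * x)
    return float(np.sum((y[:-1] + y[1:]) * 0.5 * (x[1] - x[0])))

# A large-magnitude (but smooth) f: the integral still decays like O(1/k).
f = lambda x: 50.0 * (1.0 + x ** 2)
vals = {k: abs(osc_integral(f, k)) for k in (1, 10, 100, 1000)}
```

For smooth f the decay is O(1/k), however large f is scaled, which is exactly the point: the high-wavenumber content is invisible to the data regardless of its magnitude in the model.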

For seismic inversion with noisy data, the objective is not to reduce residuals between modeled data and recorded data to zero. Instead, a limit is set based on noise levels, and the inversion process is stopped when the residual falls below a threshold level. If x_N is the final model after N inversion iterations, and f(x_N) is the predicted data, then

\[ \left\| d - f(x_N) \right\|_{L_p} \;\leq\; \varepsilon, \]
where Lp is the norm used in the inversion. In least-squares inversion, which is a weighted-averaging process, the inversion will distribute energy to all wavenumbers in the solution space while bringing the residual below the threshold ε in the data space. This averaging process makes the result highly nonunique. Furthermore, components might leak from the null space, manifesting themselves as oscillations, particularly at higher wavenumbers. These oscillations are not necessarily small, leading to models that fit the data but are geologically unrealistic and distort the true geometry of imaged reflectors.

Appendix B: Betti numbers and synthetic data

A measure of complex geometry is the concept of Betti numbers. Betti numbers are defined in multidimensional space and increase with data complexity (Lau and Yin, 2012).

For a 2D seismic image, only B0 and B1 are nontrivial; higher Betti numbers are zero, Bn = 0 for n > 1. The 0th Betti number, B0, counts the number of connected components. The 1st Betti number, B1, counts the number of “holes.” For example, Figure B-1 is an abstraction of three diffractors as seen on time slices at different times. The three dots in Figure B-1(a) are the diffraction tops. There are three connected components, B0 = 3, and no holes, B1 = 0. At a later time slice, Figure B-1(b), the diffractors appear as nonoverlapping semicircles. Here there are three connected components, each with a hole, so B0 = 3 and B1 = 3. At a still later time slice, Figure B-1(c), the diffractors appear as three overlapping circles. Here there is one connected component but with seven holes, so B0 = 1 and B1 = 7. Migration and wave-equation datuming try to simplify geometric complexity, moving from 7 to 3 to 0 holes by collapsing diffractions.
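For a binary 2D image, both numbers can be computed directly: B0 by connected-component counting and B1 from the Euler characteristic χ = V − E + F of the complex formed by the on-pixels, since B1 = B0 − χ. A self-contained sketch, assuming 4-connectivity:

```python
import numpy as np
from collections import deque

def betti_2d(img):
    """B0 and B1 of a binary 2D image: B0 by 4-connected flood fill,
    B1 = B0 - chi, with chi = V - E + F counted on the cubical complex
    of on-pixels (vertices, unit edges, pixel faces)."""
    img = np.asarray(img, dtype=bool)
    H, W = img.shape
    # --- B0: flood fill over 4-connected components
    seen = np.zeros_like(img)
    b0 = 0
    for si in range(H):
        for sj in range(W):
            if img[si, sj] and not seen[si, sj]:
                b0 += 1
                q = deque([(si, sj)])
                seen[si, sj] = True
                while q:
                    i, j = q.popleft()
                    for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                        if 0 <= ni < H and 0 <= nj < W and img[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            q.append((ni, nj))
    # --- Euler characteristic chi = V - E + F
    p = np.pad(img, 1)
    F = int(img.sum())
    V = int((p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]).sum())
    E = int((p[:-1, 1:-1] | p[1:, 1:-1]).sum() + (p[1:-1, :-1] | p[1:-1, 1:]).sum())
    return b0, b0 - (V - E + F)
```

For instance, three disjoint pixel rings return B0 = 3 and B1 = 3, the counts of the Figure B-1(b) abstraction, while a filled block has B0 = 1 and B1 = 0.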

Figure B-2 shows synthetic seismic data from diffractors placed randomly in space. All diffractors are aligned in a plane. Figure B-3 shows a time slice through the diffractors and the associated Betti numbers. In Figure B-3(a) there is no smoothing; in Figure B-3(b) a 7 × 7 point smoothing has been applied. Smoothing reduces complexity, and this reduction can be captured by Betti numbers. Figure B-3(c) shows Betti numbers B0 and B1 as a function of the mixing radius (number of points). Notice the rapid decline in Betti number B1 at the 7 × 7 point.

Seismic imaging outputs a 3D volume, and the input seismic data are 5D. Betti numbers are defined for all dimensions: B0, B1, B2, etc. So, we could compute higher Betti numbers to measure geometric complexity even if we cannot plot 5D data.

The wave equation is the preferred rigorous approach to seismic imaging, but the error term in seismic inversion does not address geometric complexity or artifacts. In practice, we use “enhancements” such as smoothing for velocity models, filtering, dip scanning, seismic attributes (instantaneous phase), etc., to simplify geometry. The Betti number is one step toward quantifying geometric complexity.

Appendix C: Rugose top-of-salt model

Our 2D model consists of flat horizontal reflectors below a rugose top-of-salt horizon. To generate the top-of-salt horizon, we used the random midpoint displacement fractal algorithm, which is commonly used to simulate landscapes and rough surface maps (Turcotte, 1997; Bradley et al., 2014).
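A one-dimensional version of the algorithm is only a few lines. The level count, Hurst exponent, and seed below are illustrative choices, with the output rescaled to the model's 243-564 m top-of-salt depth range:

```python
import numpy as np

def midpoint_displacement(levels, H=0.7, rng=None):
    """1D random midpoint displacement: repeatedly insert midpoints
    perturbed by Gaussian noise whose amplitude shrinks by 2**(-H) per
    level, giving roughness at all scales (smaller H = rougher)."""
    rng = np.random.default_rng(rng)
    z = np.zeros(2)                     # endpoint elevations
    sigma = 1.0
    for _ in range(levels):
        mid = 0.5 * (z[:-1] + z[1:]) + rng.normal(0.0, sigma, z.size - 1)
        out = np.empty(2 * z.size - 1)
        out[0::2], out[1::2] = z, mid   # interleave old points and midpoints
        z, sigma = out, sigma * 2.0 ** (-H)
    return z

# Rescale to the depth range of the paper's top-of-salt horizon (243-564 m).
z = midpoint_displacement(10, H=0.7, rng=42)
tos = 243.0 + (z - z.min()) * (564.0 - 243.0) / (z.max() - z.min())
```

Each level doubles the number of segments, so 10 levels give 1025 points; halving the noise more slowly (smaller H) produces a rougher horizon.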

Figure 1 shows the top-of-salt horizon in the velocity model. It has a minimum depth of 243 m and a maximum depth of 564 m. The average depth is 398 m. The lateral extent is 5000 m. The velocity model has sediment velocity of 2000 m/s above salt. Subsalt velocities increase from 2500 m/s to a maximum of 4500 m/s. Salt velocity is 4800 m/s. The salt bottom is at 850 m, and the deepest reflector is at 1400 m. The model grid is 5 × 5 m. The synthetic seismic data were generated using a time-domain two-way finite-difference wave-equation algorithm simulating land acquisition. Five hundred one receivers were spaced at 10 m intervals along the entire line.

Data associated with this research are available and can be obtained by contacting the corresponding author.

All article content, except where otherwise noted (including republished material), is licensed under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC) license. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its digital object identifier (DOI). Commercial reuse and distribution of derivatives are not permitted.