Freely available online through the SEG open-access option.


Various postprocessing methods can be applied to seismic data to extend the spectral bandwidth and potentially increase the seismic resolution. Frequency invention techniques, including phase acceleration and loop reconvolution, produce spectrally broadened seismic sections but arbitrarily create high frequencies without a physical basis. Tests in extending the bandwidth of low-frequency synthetics using these methods indicate that the invented frequencies do not tie high-frequency synthetics generated from the same reflectivity series. Furthermore, synthetic wedge models indicate that the invented high-frequency seismic traces do not improve thin-layer resolution. Frequency invention outputs may serve as useful attributes, but they should not be used for quantitative work and do not improve actual resolution. On the other hand, under appropriate circumstances, layer frequency responses can be extrapolated to frequencies outside the band of the original data using spectral periodicities determined from within the original seismic bandwidth. This can be accomplished by harmonic extrapolation. For blocky earth structures, synthetic tests show that such spectral extrapolation can readily double the bandwidth, even in the presence of noise. Wedge models illustrate the resulting resolution improvement. Synthetic tests suggest that the more complicated the earth structure, the less valid the bandwidth extension that harmonic extrapolation can achieve. 
Tests of the frequency invention methods and harmonic extrapolation on field seismic data demonstrate that (1) the frequency invention methods modify the original seismic band such that the original data cannot be recovered by simple band-pass filtering, whereas harmonic extrapolation can be filtered back to the original band with good fidelity and (2) harmonic extrapolation exhibits acceptable ties between real and synthetic seismic data outside the original seismic band, whereas frequency invention methods have unfavorable well ties in the cases studied.


Resolution of seismic data is a function of data bandwidth and dominant frequency (Widess, 1973; Kallweit and Wood, 1982). Because high frequencies are attenuated during wave propagation in the earth, the seismic frequency content may not be adequate for interpretation purposes in specific cases. A seismically thin layer is commonly defined to be a layer thinner than approximately one-fourth of the wavelength at the dominant frequency of the data. For such layers, the reflection events of opposite sign from the top and base of the layer cannot be independently mapped, and the time difference between them is generally not sensitive to the actual thickness of a thin layer (Widess, 1973). In practice, because many reservoirs or flow units within reservoirs are seismically thin, seismic resolution improvement is often desirable. The best way to accomplish this is to design better data acquisition and to process the data in such a way as to maximize resolution; however, the desired resolution may not be achievable given practical constraints. Given an existing data set of a given resolution, any postprocessing steps that can be applied to improve resolution are, if valid, desirable.

Resolution improves as the signal dominant frequency is increased and as its bandwidth is widened. However, one can conceive of many data manipulations that transform seismic data to higher frequencies without actually improving seismic resolution (e.g., Young and Wild, 2005; Stark, 2009). We refer to such methods that arbitrarily create amplitude at frequencies outside the band of the original data as frequency invention techniques because they are not based on the physics of seismic wave propagation. Chávez-Pérez (2015) discusses the fact that these methods are indeed used in the industry and uses canonical examples to show their lack of efficacy.

That bandwidth extension is possible under appropriate circumstances should not be a matter of debate: The only question should be the general applicability of a given method. Spectral extrapolation is recognized in telecommunications as a viable means of recovering high-frequency speech components that are not transmitted (see, e.g., Avendano et al., 1995; Pyke, 1997). In the case of seismic data, there is a strong link between the amplitudes of various frequency components of the reflectivity spectrum for individual layers out to the Nyquist frequency (see, e.g., Partyka et al., 1999; Marfurt and Kirlin, 2001). Consequently, missing frequencies in the data spectrum can potentially be extrapolated by applying predictive operators directly in the frequency domain. Oldenburg et al. (1983) state “the Fourier transform of a reflectivity function for a layered earth can be modeled as an autoregressive (AR) process. The missing high and low frequencies can thus be predicted from the band‐limited reflectivity function by standard techniques.” Walker and Ulrych (1983) deal with novel extensions to the AR algorithm and demonstrate the efficacy of the method for sparse reflectivity series. In this tutorial, we discuss the use of sparse seismic inversion to achieve the bandwidth extension.

In the special case in which the earth model contributing to a given seismic reflector is a single layer, Puryear and Castagna (2008) show that, with a known wavelet and in the absence of noise, the reflectivity spectrum associated with that layer, which is periodic in frequency, can be determined. Thus, the frequency response of that layer reflectivity can be extrapolated outside of the original band. Other simple layer models can also be readily extrapolated if there is enough bandwidth in the recorded signal to properly characterize the layering. Puryear and Castagna (2008) and Zhang and Castagna (2011) show how sparse inversions can be used to produce bandwidth-extended seismic sections, even when layers are not isolated, by solving for many superposing layer responses simultaneously. Hargreaves et al. (2013) explain that all sparse inversion methods can extend the seismic bandwidth to some extent. Zhang and Castagna (2011) show that sparse layer inversion can resolve thin layers better than sparse spike inversion (Taylor et al., 1979; Oldenburg et al., 1983) and should be more effective for spectral extension. Spectra can also be extended, with some reported success, directly by extrapolating time-frequency analyses produced by spectral decomposition (Smith et al., 2008). That bandwidth extension can be achieved should not be surprising; what may be unexpected is how useful such extrapolated frequencies can be. Bandwidth extension does not increase the information content of the data beyond the original information of the data plus any information resulting from the assumptions made to accomplish that extrapolation (such as assuming a blocky earth model). Bandwidth extension should thus be considered not a means of recovering lost signal, but a means of transforming the data to have useful characteristics, such as possibly improved resolution.

When confined to digital filtering (without invoking sparsity constraints), once a given frequency has been filtered away, no linear digital filter can restore it. Thus, where earth filtering has attenuated away high-frequency information, those frequencies can never be recovered by filtering alone. Deconvolution, for example, will merely amplify noise at such missing frequencies. This inverse digital-filtering paradigm has led to the misconception that prediction of those missing frequencies cannot be validly performed by any method. However, since the inception of sparse inversion methods, it has been well-known that inverted earth models contain frequencies not present in the original data (e.g., van Riel and Berkhout, 1985). In seismic inversion, it is well-established that a priori information and constraints can restore missing frequencies (Tarantola, 2005). The issue then becomes how adequate, how correct, and how useful the constraints are in the presence of noise.

A more damaging contribution to the confusion associated with bandwidth extension is the practice of frequency invention. Because frequencies outside the band of the seismic data are filtered away by the seismic wavelet, any frequency content can be added to this null space and still be compatible with the original seismic trace. Various schemes can be imagined that create high-frequency data that track seismic reflectors. The result is a high-frequency section that may look geologic, but in which the high frequencies amount to no more than fictitious coherent noise. We will describe two representative frequency invention methods in common practice: cosmetic loop reconvolution (Young and Wild, 2005) and phase acceleration (Mallat, 2009; Stark, 2009). These methods invent frequencies outside the band of the data without a physical basis. We will show using synthetic wedge models that such methods do not improve resolution. As pointed out by Wild and Young (2005), high-frequency seismic sections produced by frequency invention may serve as a useful seismic attribute for certain interpretational purposes, but we will show in this tutorial that such outputs should not be wrongly attributed with the properties of those produced by true bandwidth extension techniques.

The purpose of this tutorial paper is to help reduce the confusion relating to these issues to avoid improper use or avoidance of bandwidth extension techniques. We illustrate extension of the data bandwidth using frequency periodic layer responses; we refer to this as harmonic extrapolation and show that it can improve seismic resolution under certain conditions. The broadband reflectivity series for a blocky earth model can be obtained using a variety of approaches (Taylor et al., 1979; Puryear and Castagna, 2008; Zhang and Castagna, 2011) other than the algorithm we will describe here. We do not claim that our harmonic extrapolation method is the optimal means of inverting for the layer model, only that it is a representative way of accomplishing the bandwidth extension. We will demonstrate the applicability of harmonic extrapolation with a series of comparative studies using synthetic data, which is the only circumstance in which the “correct answer” is perfectly known. We start with low-frequency synthetic data, use various bandwidth extension methods, and compare to perfect high-frequency synthetics. Resolution improvement, or lack thereof, will be demonstrated on synthetic wedge models. These synthetic tests will address the theoretical validity of the methods in a perfect world; real-world application in the presence of imperfect data is, of course, another matter. We will test the efficacy of the methods on synthetics with added noise and on real data. We will show (1) that frequency invention does not increase seismic resolution and (2) that under the right circumstances, valid spectral broadening can be achieved to some extent using harmonic extrapolation.


We assume that the processed seismic data can be represented by the simple convolutional model, in which the seismogram s(t) is the convolution of a constant seismic wavelet w(t) with a reflectivity series r(t), plus random noise n(t):

s(t) = w(t) ∗ r(t) + n(t). (1)
This model implicitly assumes the subsurface geology to be horizontally layered with constant rock properties within the layers and the reflections to be generated at the seismic impedance contrast boundaries between adjacent layers; it ignores many wave-propagation effects, which are assumed to be corrected in processing. The convolutional model will be used to generate synthetic traces and to invert seismic traces into band-limited reflectivity. Any real-world deviations from this model are implicitly contained in the noise term.
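As a concrete illustration, the convolutional model of equation 1 can be sketched in a few lines of numpy. This is our own minimal sketch: the Ricker wavelet, the 2 ms sampling, and the function names are assumptions for the example, not part of any specific processing system.

```python
import numpy as np

def ricker(f0, dt, n=65):
    """Zero-phase Ricker wavelet with dominant frequency f0 (Hz), n odd."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(r, f0=30.0, dt=0.002, noise_std=0.0, seed=0):
    """s(t) = w(t) * r(t) + n(t): band-limit the reflectivity, add noise."""
    s = np.convolve(r, ricker(f0, dt), mode="same")
    if noise_std > 0.0:
        s = s + np.random.default_rng(seed).normal(0.0, noise_std, s.size)
    return s

# A 50 ms layer: equal-magnitude, opposite-sign reflection coefficients.
r = np.zeros(251)
r[100], r[125] = 0.1, -0.1      # 25 samples * 2 ms = 50 ms apart
s = synthetic_trace(r)
```

At 50 ms thickness the two wavelet responses barely interfere; thinning the layer toward tuning makes the top and base events merge, which is the resolution problem discussed below.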

Frequency invention methods

Some of the inspiration for seismic bandwidth broadening methods stems from basic Fourier theorems (Bracewell, 1965) and related variants commonly seen in the field of electrical engineering. For example, the Fourier shift theorem shows that a given signal can be converted to a higher frequency signal by multiplying the signal with high-frequency sinusoids. The resulting higher frequencies can be added back to the original signal to produce a broader band waveform. Whether a broader spectral band created in this way, irrespective of the underlying physics, is theoretically valid in accordance with the convolutional model should be questioned.

One such method, called phase acceleration, produces high frequencies using instantaneous frequency (Stark, 2009). Instantaneous frequency is a useful attribute related to rock and fluid properties as well as a measure for evaluating layer thickness (Barnes, 2007). In audio processing, it is a common practice to transpose the signal frequency by scaling the phase to shift the harmonics and modify the sound properties (Mallat, 2009). The phase acceleration technique uses the instantaneous frequency of the original data as a fundamental frequency and increases the seismic frequency by increasing the rate of change of the instantaneous phase.

Based on complex trace analysis, the narrowband seismic signal can be viewed as the real component of a complex number representation (called the analytic trace), which provides a definitive separation between the trace-envelope amplitude and the local phase content (Taner et al., 1979). This representation permits explicit calculation and modification of the instantaneous frequency, which is the first derivative of the instantaneous phase in the time domain. The analytic signal S(t), of which the seismogram s(t) is the real part, is given by

S(t) = s(t) + i·s⊥(t) = A(t)·e^{iφ(t)}, (2)
where φ(t) is the instantaneous phase, A(t) is the instantaneous amplitude, and s⊥(t) is the quadrature series (Hilbert transform) of the real seismic trace. Barnes (2007) shows that the instantaneous frequency at the peak of the envelope is the dominant frequency of the data. Consequently, the instantaneous frequency and the dominant frequency of the original trace can be directly changed by multiplying the phase term by any desired amount. The frequency-extrapolated trace sE(t) can then be calculated using

sE(t) = Σ_i A(t)·cos(m_i·φ(t)), (3)
where mi can be varied to sum an arbitrary number of invented bands to the original data. For instance, a seismic trace sampled at 2 ms with a central frequency of 30 Hz can be broadened to a wider bandwidth with the factors mi (k = 7) taking the integers 0, 1, 2, 3, 4, 5, 6, 7, of which m1 = 1 denotes the original trace. In the case of an invariant instantaneous phase, the summation will produce beating sinusoids. Because the beating sinusoids have their own envelope, with maxima spacing determined by the beat frequency, the extrapolated trace may exhibit events with a modified envelope. When applied to real data, the duration of events may thus be shorter and appear to have better resolution. Figure 1a–1h shows the application of phase acceleration to a 30 Hz Ricker wavelet. The phase is accelerated by factors of two and three, yielding higher frequency wavelets (Figure 1b and 1c). Notice that these wavelets have a higher center frequency, but they have the same envelope (dashed lines) and bandwidth as the original waveform (Figure 1e–1g). These are summed in equation 3 with the original signal to produce a waveform with a broad frequency response, high center frequency, and short envelope width (Figure 1d and 1h).
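The operation in equation 3 can be sketched as follows. This is our own minimal numpy illustration (the analytic trace is computed with an FFT-based Hilbert transform), not the implementation of Stark (2009); note that the envelope A(t) is reused unchanged, so no new subsurface information enters.

```python
import numpy as np

def analytic_trace(s):
    """Analytic signal S(t): zero the negative frequencies of the FFT."""
    n = s.size
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(s) * h)

def phase_acceleration(s, factors=(1, 2, 3)):
    """Sum copies of the trace whose instantaneous phase is scaled by each
    m_i (equation 3). Only the phase rate changes; the envelope does not."""
    z = analytic_trace(s)
    amp, phi = np.abs(z), np.unwrap(np.angle(z))
    return sum(amp * np.cos(m * phi) for m in factors)
```

Applied to a pure 10 Hz cosine with a scaling factor of 2, the output is simply a 20 Hz cosine: the "new" frequency is manufactured from the phase, not predicted from the reflectivity.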

In the loop reconvolution method (Young and Wild, 2005), a spike representation of the seismic data is created first, and then the resulting broadband spike series is convolved with a high-frequency wavelet (Figure 2a–2c). Young and Wild (2005) oversample the input data to avoid aliasing and zero out all samples of the seismic trace other than samples representing the original peaks and troughs. This produces a "spike train"; unfortunately, this is sometimes referred to by practitioners as a sparse reflectivity, which incorrectly implies that the spikes represent reflection coefficients. In no case, even with only isolated reflectors, is this spike train representative of the true reflectivity because all wavelet side lobes also appear as spikes, which are always located at waveform peaks and troughs (referred to as the "loops" of the waveform). The "high-frequency" seismic trace is achieved by convolving the spike series with an arbitrary high-frequency wavelet of choice. This method produces output that is correlated with the original peaks and troughs, which can lead to a misleadingly high correlation to synthetic data if the original well tie is also good. This may occur for isolated or major events, for which peaks and troughs of the high-frequency synthetic remain aligned with the loop reconvolution result if the low-frequency synthetic and input seismic data originally had aligned peaks and troughs.
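A bare-bones version of loop reconvolution, as we understand the procedure of Young and Wild (2005), can be sketched as follows; oversampling is omitted for brevity, and all names are our own.

```python
import numpy as np

def loop_reconvolution(s, hf_wavelet):
    """Keep only the local peaks and troughs ("loops") of the trace as
    spikes, then convolve that spike train with a higher-frequency wavelet.
    The spikes mark waveform extrema, not true reflection coefficients."""
    spikes = np.zeros_like(s)
    is_peak = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])
    is_trough = (s[1:-1] < s[:-2]) & (s[1:-1] < s[2:])
    idx = np.where(is_peak | is_trough)[0] + 1   # shift back to trace index
    spikes[idx] = s[idx]
    return np.convolve(spikes, hf_wavelet, mode="same"), spikes
```

Because wavelet side lobes are also extrema, they too become spikes, which is exactly why the spike train must not be read as a reflectivity series.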

These frequency invention methods are fast and relatively easy to implement. They may have utility as seismic attributes — for example, faults with throw less than one-half the period of the original data may visually become clearer because peaks may then be juxtaposed against troughs on the higher frequency outputs (Young and Wild, 2005). A purpose of this tutorial is to show that, although producing potentially useful attributes, the value of frequency invention to increase resolution or for quantitative analysis is highly questionable at best.

Harmonic extrapolation method

Bandwidth extension can be achieved by extrapolation of the spectrum outside the original sampled seismic bandwidth. We rely on the fact that any transient signal has an infinite frequency response that is potentially predictable to some extent if a sufficient portion of the spectrum is sampled. Our ability to do this of course depends on how simple and characterizable that spectrum is, how clear the frequency periodicities are, the original bandwidth of the data, and the noise level.

To the extent that the convolutional model adequately simulates seismic wave propagation and data processing, we can view seismic data as a band-limited version of the reflection coefficient time series; the reflectivity spectrum within the data band is shaped by the wavelet spectrum, whereas the frequencies outside the band of the wavelet are zeroed out, causing the ambiguity in resolving seismic reflections. A blocky earth structure provides a physical basis for a valid bandwidth extension as the reflectivity spectrum is a superposition of sinusoidal layer frequency responses (Puryear and Castagna, 2008).

As illustrated by Partyka et al. (1999), and quantitatively studied by Marfurt and Kirlin (2001), any layer with a single reflection coefficient at the top and base can be represented as an impulse-pair reflectivity series. As discussed below, one can model different types of layers using weighted sums of the even and odd impulse-pair symbols (Bracewell, 1965), written here as unit impulses at times ±Δt/2:

II_e(t/Δt) = δ(t + Δt/2) + δ(t − Δt/2) (4)

and

II_o(t/Δt) = δ(t + Δt/2) − δ(t − Δt/2), (5)
where δ(t) is the Dirac delta function referred to as a unit impulse at time zero in the context of signal processing. A layer with time thickness Δt and reflection coefficients r of equal magnitude and sign at the top and base can be represented by an even impulse pair r·II_e(t/Δt), with the reflectivity spectrum given by a real function:

R(f) = 2r·cos(π·f·Δt), (6)
which is periodic in frequency. For odd impulse pairs, the spectrum is an imaginary sine function:

R(f) = 2i·r·sin(π·f·Δt). (7)

Figure 3 shows the reflectivity amplitude spectrum of a 50 ms thick layer consisting of two reflection coefficients with equal unit magnitude, but with opposite sign. The hypothetical seismic bandwidth, band-limited by the seismic wavelet, is denoted by dotted lines. In practice, because the convolution of the reflectivity with the seismic wavelet in the time domain is equivalent to multiplication with the wavelet spectrum in the frequency domain, the reflectivity spectrum is obtained over the band of the data by dividing the seismic data spectrum by the wavelet spectrum, which is assumed to be known in this discussion and must be well-determined a priori in practice. The amplitudes of the Fourier frequencies of the reflectivity spectrum obtained over the seismic band in this way are indicated by solid circles in Figure 3. In the case of a single layer, these amplitudes would readily reveal the sinusoidal shape of the reflectivity spectrum and could thus be fit to the derived reflectivity spectrum in a least mean-square-error (LMSE) sense (solid line in Figure 3). For this simple case, the frequency spectrum outside the band of the data can be readily predicted by extrapolating the resulting LMSE sinusoidal function outside the band of the original data.
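For this single-layer case, the fit-and-extrapolate idea can be sketched directly. The toy numpy example below is our own construction: it scans candidate layer thicknesses, solves for the reflection coefficient in an LMSE sense at each candidate (the wavelet is assumed known and already divided out), and extends the fitted sinusoid outside the band.

```python
import numpy as np

# In-band samples of an odd impulse pair's spectrum, R(f) = 2*r*sin(pi*f*T)
# (the imaginary part of equation 7, shown as a real-valued array here).
r_true, T_true = 0.1, 0.050                  # 50 ms layer, r = 0.1
f_band = np.arange(10.0, 61.0, 2.0)          # usable 10-60 Hz band
R_band = 2.0 * r_true * np.sin(np.pi * f_band * T_true)

# LMSE fit: for each candidate thickness, the best r has a closed form.
best_err, T_est, r_est = np.inf, None, None
for T in np.arange(0.001, 0.101, 0.001):
    atom = 2.0 * np.sin(np.pi * f_band * T)
    r_fit = (atom @ R_band) / (atom @ atom)
    err = np.sum((R_band - r_fit * atom) ** 2)
    if err < best_err:
        best_err, T_est, r_est = err, T, r_fit

# Extrapolate the fitted sinusoid over a full 0-125 Hz band.
f_full = np.arange(0.0, 126.0, 1.0)
R_extrap = 2.0 * r_est * np.sin(np.pi * f_full * T_est)
```

In this noise-free setting the scan recovers the true thickness and reflection coefficient, and the extrapolated spectrum matches the exact layer response outside the original band.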

The important reciprocal relationship between the spectral periodicity and the time thickness for the real and imaginary components (equations 6 and 7) is dependent upon the symmetrical location of time zero at the center of the layer. In practice, when composite reflections occur, although it is not practical to define a time zero that can be centered for all such physical layers, any reflectivity series can still be represented as a sum of even and odd impulse pairs referenced to an analysis point. Therefore, all reflectivity spectra are sums of such sinusoidal frequency responses, and they are fundamentally predictable if those impulse pairs are known. To clarify what happens in a general multilayer case, we construct a sparse reflectivity sequence (Figure 4a) composed of three individual components: a predominantly even impulse pair (Figure 4b), a predominately odd impulse pair (Figure 4c), and a single impulse (Figure 4d). Because any pair of reflection coefficients can be uniquely represented by the weighted sum of an even pair and an odd pair (Bracewell, 1965), the complex spectrum of each component can be characterized by a pure cosine function (equation 6) as the real part (solid line) and a pure sine function (equation 7) as the imaginary part (dashed line). In this example, the predominantly even reflector pair has a real spectrum with larger amplitude than the imaginary spectrum (Figure 4f) and vice versa for the predominantly odd pair (Figure 4g), whereas those for the single reflector are of the same amplitude (Figure 4h). The even and odd spectra for each individual event show the same frequency periodicity, but they are shifted by one-quarter of the spectral period. The composite reflectivity spectrum (Figure 4e) is a linear combination of those component spectral sinusoids, producing irregular peaks and troughs by local superposition. 
The frequencies within the hypothetical data bandwidth (denoted with red) can then be used to determine the individual sinusoidal components of the spectrum (the red curves in Figure 4f–4h), and these can then be extended beyond the original bandwidth (the black curves in Figure 4f–4h). These components are then added to obtain the bandwidth-extended spectrum shown as the black curve in Figure 4e.

Beginning with a temporal analysis window of 2N+1 discrete points with sampling interval dt, any zero-time-centered impulse pair g(t,n) can be expressed as

g(t,n) = re·[δ(t + n·dt) + δ(t − n·dt)] + ro·[δ(t + n·dt) − δ(t − n·dt)], (8)
where n·dt is half the time thickness of the impulse pair, with n varying from 0 to N. The reflector pair can be decomposed into a unit even pair II_e(t/(2n·dt)) and a unit odd pair II_o(t/(2n·dt)) weighted by coefficients re and ro, respectively. The two reflection coefficients for the impulse pair can thus be represented as re + ro and re − ro, either of which ranges from −1 to 1 and could be zero, denoting a single impulse. Correspondingly, the frequency spectrum of the impulse pair is given by (equations 6 and 7)
G(f,n) = 2re·cos(2π·n·dt·f) + 2i·ro·sin(2π·n·dt·f). (9)
A reflectivity series with 2N+1 samples corresponds to K = N+1 nested pairs of reflection coefficients. A time sample can have zero reflectivity if re and ro are zero, but also if elements of the even and odd reflectivity pairs are equal and opposite at a given time. Thus, K is the number of impulse pairs, including all possible time thicknesses (including a zero time thickness) in the analysis window. The broadband spectrum ranging from zero to the Nyquist frequency of any reflectivity series within the analysis window can thus be explicitly expressed as

R(f) = Σ_{n=0}^{N} [2re(n)·cos(2π·n·dt·f) + 2i·ro(n)·sin(2π·n·dt·f)]. (10)
For a real seismic trace, the narrowband data spectrum associated with a sum of reflector pairs is a sum of frequency sinusoids times the wavelet spectrum. Within a usable spectral band of seismic data with an adequate signal-to-noise ratio (S/N), dividing out the wavelet overprint provides a roughly flattened spectrum that coincides with the portion of the broadband reflectivity spectrum (equation 10) within the wavelet spectral limit (the red portion in Figure 4). The relationship between the data spectral constraint (real part and imaginary part separately) and the sinusoidal elements of varying frequency periodicities within the same spectral limit can be formulated as the forward model

Sd = ϕa + ϵ, (11)
where Sd is the wavelet-flattened data spectrum, ϕ is the kernel matrix consisting of sinusoidal atoms, a is the coefficient vector containing ro(n) and re(n), and ϵ is the prediction residual. The kernel has dimension M×K, where M is the number of frequencies determined by the available bandwidth and the frequency sampling rate, and K is the number of layers and thus also the number of cosine (real part) or sine (imaginary part) elements indicated by equation 10, which are varied to include all possible frequency periodicities.
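The kernel ϕ of equation 11 is simply a matrix of sampled cosine and sine atoms. A minimal numpy sketch (the function name and band are our own assumptions):

```python
import numpy as np

def build_kernel(freqs, N, dt):
    """Sinusoidal atoms for equation 11: column n holds the frequency
    response of an impulse pair with half-thickness n*dt (equation 9).
    Returns the real-part (cosine, even-pair) and imaginary-part (sine,
    odd-pair) blocks, each of shape M x K with K = N + 1."""
    n = np.arange(N + 1)
    arg = 2.0 * np.pi * np.outer(freqs, n * dt)   # M x K phase arguments
    return 2.0 * np.cos(arg), 2.0 * np.sin(arg)

freqs = np.arange(10.0, 61.0, 2.0)                # usable band, Hz
C, S = build_kernel(freqs, N=50, dt=0.002)        # 51 candidate pairs
```

The n = 0 columns correspond to a single impulse at the analysis point: a constant cosine atom and a vanishing sine atom.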

We use the basis-pursuit method (Chen et al., 2001) to decompose the original band-limited data spectrum into a sparse superposition of layer responses. The frequency extrapolation is accomplished by decomposing the spectrum into sinusoidal frequency responses (atoms), extending them beyond the band of the seismic data, and then summing them to form the broadband seismic trace. The basis-pursuit method solves for the coefficients of all sinusoidal atoms in frequency in a direct manner by L1-norm global optimization. Zhang and Castagna (2011) describe how the Chen et al. (2001) basis-pursuit method can be used to decompose layer responses in the time domain. We use the same algorithm to do the decomposition in the frequency domain.

Basis pursuit solves for the coefficients of the superposed sinusoidal frequency responses in equation 11 by minimizing the L2-norm of the prediction residual, regularized by the L1-norm of the solution scaled by a regularization (trade-off) parameter λ:

min_a ||Sd − ϕa||_2^2 + λ||a||_1. (12)
Increasing this trade-off factor λ produces results with higher sparsity, whereas decreasing it may amplify the inversion noise. In real applications, a proper regularization factor for the inversion is data and model dependent and is empirically determined by comparing results with well logs used for validation.
In our implementation, the coefficients re(n) and ro(n) are the output solutions of the basis pursuit inversion, corresponding to the real part and the imaginary part of the available spectrum, respectively. Any missing frequency outside the original bandwidth can then be computed using equation 10. Because equations 8 and 9 share the same coefficients, the resulting reflectivity series can thus be calculated by

r(t) = Σ_{n=0}^{N} g(t,n) = Σ_{n=0}^{N} {re(n)·[δ(t + n·dt) + δ(t − n·dt)] + ro(n)·[δ(t + n·dt) − δ(t − n·dt)]}. (13)
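As a stand-in for the basis-pursuit solver of Chen et al. (2001), the L1-regularized problem of equation 12 can be solved with a few lines of iterative soft thresholding (ISTA). This is our own simplified sketch under that substitution, not the authors' production algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, y, lam, n_iter=5000):
    """Minimize ||Phi a - y||_2^2 + lam * ||a||_1 by iterative soft
    thresholding; lam plays the role of the trade-off parameter lambda."""
    L = 2.0 * np.linalg.norm(Phi, 2) ** 2   # Lipschitz constant of gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * Phi.T @ (Phi @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

With Phi taken as the cosine or sine kernel of equation 11 and y the corresponding wavelet-flattened spectrum, the recovered coefficients play the role of re(n) or ro(n); a larger lam yields a sparser layer model.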

It is important to note that, theoretically, there can be frequency components of the reflectivity spectrum that are entirely outside the band of the original data that violate our assumptions, and they can never be recovered. The extrapolated frequencies are correct to the extent that the layer model used to generate them is a good representation of the true subsurface reflectivity; any deviation from a sparse blocky earth model assumption may give rise to layers not properly constrained by the original band of the data. For example, layers that are too thin or are of high complexity are less likely to be resolved by the layer inversion. Therefore, a successful extrapolation of the complete frequency responses depends on the reasonableness and accuracy of the decomposition of the complex spectrum into the temporal impulse pair frequency spectra.

Practical implementation of the harmonic extrapolation procedure

These steps are followed in the implementation of harmonic extrapolation:

1) Check the validity of the method assumptions/constraints: Is blocky layering an adequate description of the true earth structure? Is the wavelet well-known? Does convolutional modeling provide a good synthetic tie within the original band of the data? If no well logs are available to ensure that the assumptions are applicable, that the wavelet is correct, and to validate the extended frequencies, the extension will have to be interpreted with great caution. For example, should the actual wavelet spectrum have notches, or be of a narrower band than assumed, the result will be noise magnification at certain frequencies. In addition, uncharacterized periodicities in the wavelet spectrum could result in the inversion producing false layering to reproduce those periodicities.

2) Check seismic data quality: Is the seismic data quality sufficient such that there is at least a one-octave passband with a high S/N? Again, this is best checked by synthetic seismogram construction.

3) Set parameters: Is the sample rate sufficient for extension objectives? If not, resample the original data by Fourier transform to increase the sampling frequency to an acceptable level relative to the desired output frequency band. Select a regularization parameter. For noise-free cases, we use 0.0001. For noisy cases, a regularization parameter on the order of 0.01 is usually acceptable (basis pursuit denoising). This is empirically determined by a best correlation between bandwidth-extended and synthetic data in blind well validation tests. Select a temporal window length.

4) Compute spectra and deconvolve the wavelet: For each seismic trace, slide the analysis window along the trace and compute the Fourier transform at every window location. Divide the spectra by the wavelet spectrum (the wavelet spectra can be time varying if necessary). The real part of the resulting spectrum is the spectrum of the even (symmetrical) part of the windowed trace divided by the wavelet spectrum, and it is the superposition of different cosine functions with different periodicities, each representing an even reflection coefficient impulse pair of a given thickness. The imaginary part of the resulting spectrum is the spectrum of the odd (antisymmetrical) part of the windowed trace divided by the wavelet spectrum, and it is the superposition of different sine functions with different periodicities, each representing an odd reflection coefficient impulse pair of a given thickness.

5) Decompose local spectra into a superposition of layer spectra: Use a basis-pursuit atomic decomposition algorithm (Chen et al., 2001; for further discussion, see Zhang and Castagna, 2011) to decompose the real and imaginary parts of the spectrum into summations of cosines and sines. At this point, a regularization parameter is required to weight the sparsity constraint (the selection criteria are discussed above).

6) Extend the bandwidth of the data: Extrapolate the decomposed sinusoids to the band of interest, multiply by the desired output wavelet spectrum, and inverse Fourier transform to get a time series associated with each window position. Sum the time series over all window positions to form the bandwidth-extended trace. The windows are tapered such that summing in the overlap region conserves energy.
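The energy-conserving taper in the final step can be illustrated with a periodic Hann window at 50% overlap, whose shifted copies sum exactly to one (the constant-overlap-add property). This small numpy check is our own illustration of the idea, not the authors' windowing code.

```python
import numpy as np

def periodic_hann(win_len):
    """Periodic Hann taper: copies shifted by hop = win_len // 2 sum to 1."""
    return 0.5 - 0.5 * np.cos(2.0 * np.pi * np.arange(win_len) / win_len)

def overlap_add(pieces, starts, total_len):
    """Sum tapered window outputs back into a single trace."""
    out = np.zeros(total_len)
    for piece, start in zip(pieces, starts):
        out[start:start + piece.size] += piece
    return out

win_len, hop, n = 40, 20, 200
w = periodic_hann(win_len)
trace = np.ones(n)
starts = list(range(0, n - win_len + 1, hop))
pieces = [trace[s:s + win_len] * w for s in starts]
recon = overlap_add(pieces, starts, n)
# Away from the trace ends, the tapered windows sum back to the input.
```

Any windowed per-position result (here simply the windowed trace itself) can be summed this way without gaining or losing amplitude in the overlap regions.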

We will test the efficacy of the harmonic extrapolation synthetically on blocky earth structures, with and without noise. We will see that resolution enhancement is readily demonstrated on wedge models. However, because the method relies on the simplicity and periodicity of the reflectivity spectrum, we also test the method for a highly interbedded earth model with locally highly complex spectra. The ability of harmonic extrapolation to achieve useful bandwidth extension for a blocky earth model, and its superiority over frequency invention methods, will be demonstrated in the following comparative studies (synthetic and real data examples).


The theoretical validity of the various spectral extension methods in increasing seismic resolution can be demonstrated with synthetic tests, in which a low-frequency synthetic is bandwidth extended and compared with a high-frequency synthetic derived from the same earth-reflectivity model. The resulting bandwidth-extended data can be visually compared with known variations in layer thickness to see whether additional resolution is achieved. Because a perfect bandwidth extension should predict the high-frequency synthetic perfectly, the residual between calculated and predicted high-frequency seismograms provides a firm test of the efficacy of the various methods and indicates the magnitude of the errors involved. All the traces are plotted with the same relative amplitude scale to allow fair comparison, and the reported root-mean-square (rms) errors are relative to the rms amplitude of the high-frequency target synthetic.
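
For concreteness, the error metric used throughout can be written in a few lines (a sketch; the function name is ours):

```python
import numpy as np

def relative_rms_error(predicted, target):
    """Relative rms error as used in the text: the rms of the residual
    divided by the rms amplitude of the target synthetic."""
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    residual = predicted - target
    return np.sqrt(np.mean(residual**2)) / np.sqrt(np.mean(target**2))
```

A perfect prediction gives 0; predicting 90% of the target amplitude everywhere gives 0.1, i.e., a 10% relative rms error.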

Wedge models

Wedge models are commonly used to study resolution as a layer is thinned from above tuning to zero thickness. Owing to the finite data bandwidth, it is not possible for all sinusoidal atoms of the decomposition to be completely uncorrelated. The correlation between these atoms over the limited seismic band is an inherent difficulty in determining the exact solution for all reflection coefficients. Figure 5 illustrates the inherent resolving limitation of harmonic extrapolation using a thinning wedge model with equal and opposite reflection coefficients at the top and base. The wedge layer time thicknesses vary from 50 to 1 ms in 1 ms increments, and the amplitude spectra for each trace (Figure 5a) show the gradually varying frequency periodicities. The spectra (Figure 5b) extrapolated from the response of this model to a 30 Hz wavelet indicate that, although the method can resolve layers below the original tuning thickness, which is approximately 13 ms in this case (Chung and Lawton, 1995), the inverted and extrapolated spectra deteriorate at thicknesses less than 7 ms. This ambiguity arises from the indistinct frequency periodicities produced by the closely spaced reflectors. Data distortion due to noise or incorrect wavelet determination likewise interferes with the decomposition of the contributing frequency periodicities within the usable band of the data. These factors limit the resolution that can be achieved by harmonic extrapolation.
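
The periodicity argument can be made concrete. For an odd pair of time thickness T, the layer spectrum is proportional to sin(πfT), so spectral notches repeat every 1/T Hz and the first spectral peak falls at 1/(2T). These helper functions (ours, for illustration only) show why a 7 ms layer is hard to calibrate from a 30 Hz band:

```python
# For a layer with equal and opposite reflection coefficients of time
# thickness T (an odd pair), the reflectivity spectrum is proportional to
# sin(pi*f*T): notches repeat every 1/T Hz and the first peak falls at
# 1/(2T).
def notch_spacing_hz(thickness_s):
    return 1.0 / thickness_s

def first_spectral_peak_hz(thickness_s):
    return 1.0 / (2.0 * thickness_s)
```

At 7 ms the first spectral peak lies near 71 Hz, well above the usable band of a 30 Hz wavelet, so little of the layer's periodicity is expressed within the band available for calibration.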

Figure 6 shows a wedge model with equal reflection coefficients of the same sign at the top and base of the layer. The time thicknesses of the reflector pairs for each synthetic trace vary from 25 to 0.5 ms, representing a thinning wedge, with an interval of 0.5 ms, which is the sample rate of the data. A 30 Hz Ricker wavelet is used to model the low-frequency response of the wedge (Figure 6a). The top and bottom reflectors can be visually distinguished at or above a time thickness of 12.5 ms at this wavelet center frequency, using the Ricker criterion (flat topping of the waveform). The phase acceleration, loop reconvolution, and harmonic extrapolation methods are applied to the original low-frequency synthetic to restore the missing high frequencies. In practice, doubling the original seismic center frequency is regarded as a significant improvement in resolution. All the bandwidth-extended outputs are thus spectrally shaped to the spectrum of a 60 Hz standard Ricker wavelet by multiplying by the wavelet spectrum. In this case, the low frequencies in the original signal are reduced in amplitude but are still preserved, and thus remain available to be shaped back to the original spectrum. The phase acceleration and loop reconvolution methods produce sharper events (Figure 6b and 6c), but neither improves the temporal resolution (defined as the ability to distinguish separate events from the top and base reflectors) relative to the original synthetic. Furthermore, the loop reconvolution method produces false events above and below the actual reflectors (Figure 6c). Harmonic extrapolation to 60 Hz, on the other hand, sharpens the events and resolves the reflections at approximately 7 ms, not quite a twofold improvement (Figure 6d).
The resulting harmonic extrapolation wedge is very similar to, though not exactly the same as, the true high-frequency synthetic target (Figure 7a), calculated directly from the known reflection coefficients and a 60 Hz Ricker wavelet.
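
As a cross-check on the quoted resolution numbers, the flat-top condition for two like-polarity Ricker arrivals can be worked out in closed form; this is our own derivation, not a formula from the paper:

```python
import numpy as np

# For two like-polarity Ricker arrivals, the composite first flat-tops when
# each wavelet's inflection point reaches the midpoint. Setting the second
# derivative of w(t) = (1 - 2*(pi*f*t)**2) * exp(-(pi*f*t)**2) to zero gives
# (pi*f*t)**2 = (12 - sqrt(96)) / 8, so the flat-top separation 2t is
# roughly 1/(3*f_peak).
def ricker_flat_top_separation(f_peak_hz):
    u = (12.0 - np.sqrt(96.0)) / 8.0             # (pi*f*t)^2 at the inflection point
    return 2.0 * np.sqrt(u) / (np.pi * f_peak_hz)
```

This gives approximately 11.1 ms at 30 Hz, in rough agreement with the 12.5 ms visual pick above, and approximately 5.6 ms at 60 Hz, consistent with events being resolved near 7 ms after extrapolation.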

It is further revealing to examine the prediction residuals of the various methods. Harmonic extrapolation reproduces the high-frequency section with reasonable fidelity, with a relative rms error of only 9.5% (Figure 7d). That harmonic extrapolation passes this simple test is to be expected. However, failing the test, as phase acceleration (an rms error of 87.5% in Figure 7b) and loop reconvolution (an rms error of 80.8% in Figure 7c) do, is significant. It shows that these methods must be used with great caution and should not be used for quantitative work.

Blocky structured sparse reflectivity series

A more realistic, but still blocky, synthetic reflectivity series designed to simulate an earth model with multiple layers (Figure 8a–8l) has a sampling rate of 2 ms, resulting in a Nyquist frequency of 250 Hz (Figure 9a–9l). A low-frequency synthetic seismogram (Figure 8b) is generated by convolving the reflectivity series with a 30 Hz Ricker wavelet. A noisy low-frequency synthetic seismogram (Figure 8h) is obtained by adding a band-limited random noise series whose power is 10% of that of the noise-free synthetic seismogram. A higher frequency version of the reflectivity response (Figure 8g), obtained by convolving with a 60 Hz Ricker wavelet, is used as the standard against which predicted high-frequency data are evaluated. In the frequency domain, phase acceleration and loop reconvolution produce broader spectra that are comparable with the high-frequency target (Figure 9g) for noise-free (Figure 9c and 9e) and noisy (Figure 9i and 9k) data. However, even without noise, the misfits (Figure 8d and 8f) between the synthetic and extrapolated high-frequency traces are of the same order of magnitude as the original data. Such large errors would undermine the reliability of subsequent quantitative seismic trace analysis (such as seismic inversion or multiattribute analysis). The frequency invention methods, though not very reliable, are relatively immune to noise because they are driven primarily by the location of events, which is for the most part the same on the original and bandwidth-extended data. If the noise is not sufficient to alter the event locations, it has little influence on the residual, which is dominated by how colocated and similar in shape the events are relative to the synthetic.
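
The noise level used here is easy to reproduce: scaling a random series so that its power is 10% of the signal power gives a relative rms of √0.1 ≈ 31.6%. A minimal sketch (our own; the band-limiting step applied in the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1000)               # stand-in for the noise-free synthetic
noise = rng.standard_normal(signal.size)

# Scale the noise so its power is 10% of the signal power.
target_power_ratio = 0.10
noise *= np.sqrt(target_power_ratio * np.mean(signal**2) / np.mean(noise**2))

power_ratio = np.mean(noise**2) / np.mean(signal**2)   # 0.10 by construction
noisy = signal + noise
```

The square root of the power ratio is the relative rms of the added noise, the 31.6% figure referred to later in the text.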

Using the same band-limited low-frequency synthetic seismograms (Figure 10a and 10g), the broadband synthetic (Figure 10b) with the 60 Hz Ricker wavelet is recovered by harmonic extrapolation with good fidelity (Figure 10c) and only a 6.3% rms error (Figure 10d) for the noise-free data. This noise-free extrapolation error arises from the nonuniqueness in solving for the frequency periodicities within the limited bandwidth. For the noisy result (Figure 10e), the residual of the prediction to 60 Hz increases significantly, to a 32.5% rms error (Figure 10f), mainly due to the error in the extrapolation caused by the low-frequency noise in the usable band. However, the incremental rms error of only 26.2% suggests that the original noise (a power ratio of 10% corresponds to a relative rms of 31.6%) has been somewhat suppressed by a relatively large regularization parameter in the basis-pursuit inversion. More importantly, the fact that the prediction error is as randomly distributed as the original noise implies that harmonic extrapolation performs robustly against data distortion when the useful signal is dominant.

Extrapolating to higher frequencies, a broader band synthetic (Figure 10h) using a 90 Hz Ricker wavelet is also well reproduced (Figure 10i), with good, though somewhat reduced, fidelity (a 14.3% rms error; Figure 10j). The fidelity reduction from 60 to 90 Hz in the noise-free case occurs because, for an imperfectly matched frequency periodicity determined from the usable bandwidth, the mismatch grows at frequencies farther from the calibration frequencies. The prediction residual (a 36.9% rms error in Figure 10l) for the 90 Hz extrapolated result in the presence of noise (Figure 10k) is thus, as expected, larger than that of the corresponding 60 Hz case. However, the percentage fidelity reduction from 60 to 90 Hz is less in the noisy case than in the noise-free case because the error due to noise is added to the inversion error and dominates the fidelity percentage. The corresponding amplitude spectra (Figure 11a–11l) show that the original bandwidth can be at least doubled with minor residuals.

Figure 12 shows what happens when recovery of the reflectivity spectrum out to the Nyquist frequency is attempted. The full-band spectrum (Figure 12c) extrapolated from the low-frequency noisy synthetic by harmonic extrapolation and the corresponding inverted reflectivity (Figure 12d) show good recovery of the original reflectivity series (Figure 12a and 12b), giving a prediction residual (Figure 12e and 12f) with an rms error of 41.7%. This is, as expected, larger than that of the corresponding 60 and 90 Hz broadband cases. The residual spectrum shows a roughly increasing trend from low to high frequency.

These synthetic tests suggest that the harmonic extrapolation method can stably predict the missing frequencies for a blocky earth model, provided that the noise level is small and the bandwidth extension is limited.

Random number series for harmonic extrapolation

These favorable harmonic extrapolation results are not unexpected for a blocky earth model, in which we observe readily characterizable frequency periodicities. As a worst-case scenario (Figure 13a–13h), we construct an earth model from completely random numbers (sampling rate of 2 ms). The sparse-structure representation is then not appropriate because the low-frequency seismogram (30 Hz) is a superposed response of too many very thin interbedded layers to separate (Figure 13e). The high-frequency recoveries by spectral extrapolation in this case are not acceptable; the error (Figure 13d) in recovery of the 60 Hz result is nearly of the same magnitude (an rms error of 68.4%) as the high-frequency synthetic (Figure 13b), and it increases greatly (an rms error of 110.7%) for the 90 Hz result (Figure 13h).

Inspection of the corresponding error spectra (Figure 14a–14h) reveals an interesting fact: The error is low at frequencies at which there is any amplitude in the original spectrum, and it blows up immediately above the highest frequency in the input wavelet (Figure 14d and 14h). This good recovery of the reflectivity spectrum within the data spectral limit is a great virtue for seismic interpretation because the observed behavior of measured data from real wells has shown the reflectivity spectra to be of “blue” color in a global sense, meaning that spectral amplitudes tend to increase with frequency (Walden and Hosken, 1985). Therefore, bluing is an important process to accentuate high frequencies such that the wavelet-overprinted seismic spectrum can be similar in shape to the reflectivity spectra. The random number series test suggests that, unlike seismic deconvolution that boosts noise and useful signal by the same factor at any given frequency, harmonic extrapolation, although unable to extrapolate outside the original band in this case, could be useful in boosting the high-frequency signal more than the high-frequency noise in spectral bluing, thus improving the seismic resolution within the original band of the data.


Unlike the synthetic examples in which the original earth model is known and broadband objective reflections are readily simulated for bandwidth extension evaluation, the complete frequency components of the actual reflectivity series in the real world can only be partially represented by a limited number of available wells, which cannot be assumed to be entirely accurate. In this sense, various band-pass versions of the reflectivity series derived from well logs can be used as criteria, only to a certain extent, to determine the validity of newly generated frequency components by those spectral extension techniques. We require a data set with an initial good well tie because any small error in time-depth conversion will become more significant at higher frequencies. Because the initial time-depth function deemed adequate at low frequency will also be used at high frequency, some correlation reduction for this reason alone is to be expected. The Vietnam data set studied by Ha (2014) met our criterion of a good low-frequency well tie (correlation coefficient = 0.87; see Figure 15 obtained using the Hampson-Russell synthetic modeling package, 2008). The wavelet (Figure 16) was extracted using the well-log reflectivity and assuming a constant phase as a function of frequency. The resulting extracted wavelet was nearly zero phase.

Figure 17 shows the results of applying the frequency invention methods and harmonic extrapolation to a seismic line through the well location. The original band of the seismic section (Figure 17a) is roughly contained within a 0–10–65–70 Hz band pass, and the spectrally broadened results (Figure 17b for phase acceleration, Figure 17c for loop reconvolution, and Figure 17d for harmonic extrapolation) are filtered to the same broadband 0–10–120–130 Hz, which approximately doubles the frequency content of the original data and yields sharper events. From the sections alone, the validity, if any, of the bandwidth extension is not clear because the broadband earth model is unknown. Nevertheless, the original narrowband data themselves provide a basic criterion for evaluating the effects of the different methods because fidelity should not be greatly degraded within the original data band by a valid spectral extension operation. If the original frequencies are preserved, the high-frequency data can be low-cut filtered and spectrally shaped to restore the original spectrum; we can thus loosely test whether the methods are invertible to the original data by simple band-pass filtering back to the original frequency band. The significant residuals against the original data for phase acceleration (a 53.8% rms error in Figure 18a) and loop reconvolution (a 30.8% rms error in Figure 18b) imply that the original frequency components are greatly altered; the frequency invention methods badly fail the test. The harmonic extrapolation result (Figure 18c), however, shows only a minor error (10.0% rms).
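
The filter-back test can be sketched with a zero-phase trapezoidal band pass of the kind specified by the corner-frequency quadruples above (a simple FFT implementation of our own, not the paper's processing flow):

```python
import numpy as np

# Zero-phase trapezoidal band-pass filter with corner frequencies
# f1-f2-f3-f4 (Hz): a linear ramp up from f1 to f2, full pass from f2 to
# f3, and a linear ramp down from f3 to f4.
def trapezoid_bandpass(trace, dt, f1, f2, f3, f4):
    n = trace.size
    f = np.abs(np.fft.fftfreq(n, d=dt))          # Hz, symmetric about zero
    ramp_up = np.clip((f - f1) / max(f2 - f1, 1e-12), 0.0, 1.0)
    ramp_dn = np.clip((f4 - f) / max(f4 - f3, 1e-12), 0.0, 1.0)
    taper = np.minimum(ramp_up, ramp_dn)         # trapezoid in frequency
    return np.fft.ifft(np.fft.fft(trace) * taper).real
```

Applying such a filter with corners 0–10–65–70 Hz to valid bandwidth-extended data should reproduce the original section up to spectral shaping; large residuals, as for the frequency invention methods, indicate that the original band has been altered.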

Another test is to evaluate the synthetic tie for only frequencies (70–80–120–130 Hz) outside the band of the original data. For the purpose of this high-frequency observation, we choose an analysis window of the original synthetic tie (Figure 19a) at the well location with an initial excellent correlation coefficient of 0.96. The correlation has been reduced to 0.72 for the harmonic extrapolation result (Figure 19d), and it is entirely inadequate for the other methods (0.20 for phase acceleration [Figure 19b] and 0.45 for loop reconvolution [Figure 19c]). Possible causes for the reduced correlation coefficient of harmonic extrapolation relative to the original data include: (1) small time-depth errors, (2) magnification of noise, and (3) deviations from the presumed blocky earth model evident in the original well logs (Figure 15).

We can investigate the statistical significance of the correlations between those high-frequency extrapolated data and the well-log synthetic using an f-test (Snedecor and Cochran, 1989). This f-statistic for validating the prediction model is designed to test the null hypothesis that there is no relationship between two measured phenomena (Fisher, 1925). The Fisher f coefficient is given by

f = (r^2 / K) / [(1 - r^2) / (n - K - 1)],
where r is the correlation coefficient, K is the number of model parameters, and n is the number of data points. We take the number of free parameters to be the number of samples in the seismic wavelet. From this perspective, each well-log reflection coefficient is assumed to be independent of the others and, based on the convolutional model, the wavelet amplitudes are the least mean-square-error coefficients by which the time-shifted reflection coefficients are multiplied to produce a weighted sum predicting the seismic amplitude at each time. To the extent that the reflection coefficients are indeed independent, an f value significantly greater than unity suggests that the model variance dominates the residual variance, implying that the extrapolated high frequencies giving the correlation are consistent with the underlying model. For the harmonic extrapolation data shown in Figure 19, r = 0.72, n = 100, and K = 25 yield f = 3.19, suggesting that the 0.72 correlation is unlikely to be spurious. On the other hand, for the frequency invention methods, even if their very poor correlations were thought acceptable, the f-test (f = 0.12 for phase acceleration and f = 0.75 for loop reconvolution) indicates poor statistical significance.
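
The quoted f values can be reproduced directly; the statistic below is reconstructed from the numbers given in the text (an assumption as to the exact form used):

```python
def fisher_f(r, n, K):
    """f statistic for a prediction model with K parameters fit to n data
    points, given correlation coefficient r (form reconstructed from the
    values quoted in the text)."""
    return (r**2 / K) / ((1.0 - r**2) / (n - K - 1))
```

With n = 100 and K = 25, this gives f ≈ 3.19 for r = 0.72 (harmonic extrapolation), f ≈ 0.12 for r = 0.20 (phase acceleration), and f ≈ 0.75 for r = 0.45 (loop reconvolution), matching the values quoted.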

Table 1 includes all the parameters used in the comparative studies for the various methods. Note that for the loop reconvolution method, all the data are oversampled to avoid aliasing and are readily resampled back to any desired sampling rate with an acceptable Nyquist frequency after the reconvolution.


The subject of bandwidth extension is controversial. We know from digital signal processing that once a given frequency within a signal has been zeroed out, it is not recoverable by inverse linear filtering. Earth attenuation and noise limit the frequency band over which we can obtain positive S/N. Thus, if linear deconvolution operators are pushed too hard to boost high frequencies, the result is primarily noise magnification. This fact can easily lead one to the conclusion that bandwidth extension is theoretically not possible — a false and overstated conclusion in the opinion of these authors.

We also know that there are a variety of ways in which a seismic trace can be manipulated to produce high-frequency traces. Two of these methods are phase acceleration and loop reconvolution. These methods are not based on physics and essentially invent frequency content outside the seismic band. Wedge models demonstrate that neither of these methods, although sharpening existing events, improves the resolution. The frequency invention methods also exhibit other deleterious consequences, such as creating false events (in the case of loop reconvolution) and not filtering back to the input data (in both cases), indicating that the convolutional relationship among the data, the reflectivity, and the wavelet has been destroyed within the original frequency band. Synthetic tests show poor correlation of the invented frequencies with high-frequency synthetics. Such methods may produce useful attributes for visualization, but we urge extreme caution if quantitative analysis, such as impedance inversion, is to be applied to such bandwidth-extended outputs.

Harmonic extrapolation can be a valid bandwidth extension technique, based on physics, that is possibly applicable when the earth conforms well to the a priori assumption of a blocky impedance structure. A sparsely layered earth structure results in correlations in the reflectivity spectrum between frequencies across the digital spectrum out to the Nyquist frequency. In the simplest case of a single layer with two reflection coefficients, the reflectivity spectrum is sinusoidal in frequency, and generally a superposition of a sine and cosine related to the odd and even parts of the reflection coefficient pair. In the more general case of many superposed layer responses, the periodicity may or may not be adequately characterized within the seismic band. If the layers are thick and few enough, and the data are sufficiently noise free, it may be possible to decompose the spectrum to a summation of a limited number of sinusoids if the seismic wavelet spectrum is known. In this situation, frequencies outside the band are readily extrapolated. However, even in a noise-free case, if the reflectivity series is too complicated, as in the case of a random series of closely spaced reflection coefficients, bandwidth extension may not be achievable without additional constraints. Of note, even in this worst-case scenario, harmonic extrapolation can be applied to blue the spectrum within the band of the data, without boosting the high-frequency noise to the extent that would result from a simple spectral shaping or deconvolution.

A real data example shows that harmonic extrapolation produces potentially useful bandwidth-extended data that filter back to the original data with good fidelity. For the real data case studied, outside the seismic band, harmonic extrapolation produces a statistically significant correlation coefficient with a synthetic computed over the same bandwidth, whereas frequency-invention methods exhibit very poor and unacceptable correlation over the same band.


The authors are very grateful to M. Sacchi for his assistance and valuable suggestions. Thanks also to E. Gebretsadik and V. Ha for their efforts on the real data example. We also wish to thank Hampson-Russell for use of their software.