Freely available online through the SEG open-access option.


Seismic attributes are routinely used to accelerate and quantify the interpretation of tectonic features in 3D seismic data. Coherence (or variance) cubes delineate the edges of megablocks and faulted strata, curvature delineates folds and flexures, while spectral components delineate lateral changes in thickness and lithology. Seismic attributes are at their best in extracting subtle, easy-to-overlook features from high-quality seismic data. However, seismic attributes can also exacerbate otherwise subtle effects such as acquisition footprint and velocity pull-up/push-down, as well as small processing and velocity errors in seismic imaging. As a result, the chance that an interpreter will suffer a pitfall is inversely proportional to his or her experience. Interpreters with a history of making conventional maps from vertical seismic sections will have previously encountered problems associated with acquisition, processing, and imaging. Because they know that attributes are a direct measure of the seismic amplitude data, they are not surprised that such attributes “accurately” represent these familiar errors. Less experienced interpreters may encounter these errors for the first time. Regardless of their level of experience, all interpreters are faced with increasingly larger seismic data volumes in which seismic attributes become valuable tools that aid in mapping and communicating geologic features of interest to their colleagues. In terms of attributes, structural pitfalls fall into two general categories: false structures due to seismic noise and processing errors, including velocity pull-up/push-down caused by lateral velocity variations in the overburden, and errors made in attribute computation by not accounting for structural dip. We evaluate these errors using 3D data volumes and identify areas where present-day attributes do not provide the images we want.


Simple-to-make errors, or pitfalls, confront every seismic interpreter. The more common interpretation pitfalls range from miscorrelating seismic reflections across faults, to applying an overly simple geologic model or hypothesis (e.g., that all channels should be filled with sand), to processing pitfalls whereby truly chaotic geology is “filtered” to look more continuous. While seismic attributes provide a means of recognizing potential pitfalls, they also provide a means of creating new ones. Although seismic attributes such as coherence and curvature have been in use for 10–20 years, not all interpreters include them in their workflows, either because they are already adept at efficient map making or because they do not have access to the software. Importantly, seismic attributes are more heavily favored by younger, less experienced interpreters in situations in which attributes increase their productivity and allow them to extract and quantify subtle features that would otherwise take years of interpretation experience to recognize. The new generation of interpreters is more familiar with sequence stratigraphy, impedance inversion, and anisotropy, but less experienced with seismic acquisition and processing than their predecessors. Most of these less experienced interpreters have not had the time or “opportunity” to stumble into the pitfalls of velocity pull-up/push-down, footprint, and migration artifacts encountered in conventional seismic interpretation. Rather, they fall into the same pits using seismic attributes.

In this paper, we break the pitfalls into two large groups. We begin with pitfalls associated with seismic data quality, examining the appearance of acquisition footprint, processing artifacts, and velocity pull-up/push-down on seismic attributes. We then discuss algorithmic pitfalls that are more closely tied to the numerical implementation of coherence and dip estimation attributes. We conclude with a short discussion on signal and noise with the conjecture that one interpreter’s noise may be another interpreter’s lithologic indicator.

Attribute pitfalls due to seismic data quality

False structure due to seismic noise and processing errors

Acquisition footprint and false “fractures” on legacy 3D data volumes

Acquisition footprint is a phenomenon well known to any interpreter who has generated a root-mean-square (rms) amplitude extraction on a shallow horizon. For reasons of efficiency, most surveys are acquired in a roll-along mode, in which the seismic shot and receiver patterns are moved in a relatively continuous manner along the earth’s surface. In the shallow section, many of the farther offset traces may be muted, giving rise to low-fold data and hence less suppression of horizontally traveling ground roll and other noise. Furthermore, the combinations of source-receiver offsets and azimuths that image a given bin vary in a periodic manner (defined by the shot-line and receiver-line separation) across the survey. Each combination of source-receiver offsets and azimuths, when corrected (using either normal moveout [NMO] or prestack migration) and later stacked, will have a different “stack array” signal-to-noise ratio (S/N). Such periodic patterns are commonly seen on shallow time slices through seismic rms-amplitude and coherence volumes.

Less well recognized is the effect that acquisition and the stack array have on structural curvature. Curvature measures the lateral change in dip, so false changes in dip will be exacerbated by curvature. To illustrate this, Figure 1 shows a relatively low-fold seismic data volume acquired over Vacuum Field, New Mexico, in the mid-1990s. The time slice at t=1.724s (Figure 1a) through the coherence and most-positive and most-negative curvature volumes shows a quite complicated deformation pattern. However, correlating the time slice with vertical slices through seismic amplitude corendered with most-positive and most-negative curvature provides confidence that we are mapping geology. The red anticlinal and blue synclinal features seen on the time slice correlate exactly to anticlines and synclines seen on the vertical seismic amplitude slices. Deeper salt in this survey gives rise to two intersecting systems of folds, producing an “egg-crate” pattern deeper down.
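To make concrete what curvature measures, the following numpy sketch (our own illustration, not code from any commercial package discussed here) computes most-positive and most-negative curvature from a gridded horizon using a widely used local quadratic-surface formulation; the grid, spacing, and test surface are illustrative assumptions.

```python
import numpy as np

def principal_curvatures(z, dx=1.0):
    """Most-positive and most-negative curvature of a gridded horizon z(y, x).

    Fits the local quadratic z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to each
    interior 3x3 neighborhood via finite differences; the curvature pair is
    (a + b) +/- sqrt((a - b)^2 + c^2).
    """
    zi = z[1:-1, 1:-1]
    a = (z[1:-1, 2:] - 2.0 * zi + z[1:-1, :-2]) / (2.0 * dx**2)  # 0.5 * d2z/dx2
    b = (z[2:, 1:-1] - 2.0 * zi + z[:-2, 1:-1]) / (2.0 * dx**2)  # 0.5 * d2z/dy2
    c = (z[2:, 2:] - z[2:, :-2] - z[:-2, 2:] + z[:-2, :-2]) / (4.0 * dx**2)
    root = np.sqrt((a - b) ** 2 + c**2)
    return (a + b) + root, (a + b) - root  # most-positive, most-negative

# illustrative horizon: a cylindrical flexure (k_pos = 2, k_neg = 0)
x = np.linspace(-1.0, 1.0, 11)
X, Y = np.meshgrid(x, x)
k_pos, k_neg = principal_curvatures(X**2, dx=x[1] - x[0])
```

Because the curvature pair depends on second differences of the picked surface, any periodic error in reflector position, such as the footprint discussed above, maps directly into periodic curvature anomalies.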

The shallow time slice at t=0.400s in Figure 1b should be readily recognized as acquisition footprint. There is some geology that peeks through, but our confidence in interpreting any structure or stratigraphy on this time slice is low. The intermediate time slice at t=0.800s in Figure 1c shows the potential pitfall. Note the shelf edge that appears as a red, positive curvature anticlinal feature on the time slice. One of the objectives on the carbonate shelf is to map natural fractures. Curvature is an excellent measure of strain, which in turn is correlated with fractures if the rock is sufficiently brittle to break. Folds and natural fractures are often locally periodic. However, the north–south and east–west patterns in the northern part of the survey are suspect, as are the northeast–southwest and northwest–southeast-trending patterns in the southern part of the survey.

The proper way to avoid this pitfall is to animate a suite of attribute slices from shallow to deep. For almost all P-wave seismic data, footprint artifacts will diminish with depth. This decrease in footprint is associated with higher fold (caused by less muting, resulting in greater noise suppression) and better reflector alignment, because a 5% velocity error at a deeper, faster level gives rise to less residual moveout than a 5% error at a shallower, slower level.
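The moveout argument can be checked with the hyperbolic NMO equation t(x) = sqrt(t0² + x²/v²). The short sketch below (an illustration with assumed times, velocities, and offset, not values from the Vacuum Field survey) shows that a 5% velocity error leaves roughly an order of magnitude less residual moveout on a deep, fast reflector than on a shallow, slow one.

```python
import math

def residual_moveout(t0, v, offset, v_error=0.05):
    """Residual moveout (s) at one offset when the NMO velocity is too slow.

    t(x) = sqrt(t0^2 + x^2 / v^2); the residual is the traveltime difference
    between the erroneous and the true moveout hyperbolas, i.e., the amount
    by which the event is mis-flattened after NMO correction.
    """
    t_true = math.sqrt(t0**2 + (offset / v) ** 2)
    v_bad = v * (1.0 - v_error)  # picked velocity v_error (5%) too slow
    t_bad = math.sqrt(t0**2 + (offset / v_bad) ** 2)
    return t_bad - t_true

# shallow, slow versus deep, fast reflector at the same 2000-m offset
shallow = residual_moveout(t0=0.5, v=2000.0, offset=2000.0)  # tens of ms
deep = residual_moveout(t0=2.5, v=4000.0, offset=2000.0)     # a few ms
```

The shallow residual (tens of milliseconds) is a significant fraction of a period and degrades the stack; the deep residual is only a few milliseconds, which is why footprint heals with depth.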

Figure 1d shows the shallower part of the vertical slice shown previously on the eastern side of the survey. Note the repetitive “U-shaped” pattern that runs across the survey, part of which is highlighted by magenta picks. The bottom of each “U” gives rise to a negative curvature anomaly, whereas the top gives rise to a positive curvature anomaly. Because the periodicity of these anomalies is nearly identical to that of the footprint seen in coherence (Figure 1b), these artifacts constitute a “new” type of acquisition footprint. Our hypothesis is that each NMO bin contains a different collection of offsets and azimuths. Let us assume that the NMO velocity chosen was a little too slow, such that the farther offsets are slightly overcorrected. If a given bin has more far offsets than near offsets, the reflector will be imaged slightly too high. If a given bin has fewer far offsets than near offsets, the reflector will be closer to the correct zero-offset position.

The image shown here as a potential interpretation pitfall can also be useful to the seismic processor. The image shown in Figure 1b is quick to generate and can thereby serve as a quality control measurement. Assuming that reprocessing is not an option, we wish to avoid falling into the pit of interpreting these artifacts as geology.

Another artifact of this type is shown in Figure 1e in the form of large depth steps applied to seismic data during processing, which generated a strong fabric on seismic reflections. The seismic section in the figure shows near-seafloor strata (comprising Quaternary hemipelagites and volcaniclastic intervals) from offshore southeast Japan that were depth migrated using a 5-m vertical increment, too coarse to reconstruct the steeply dipping, slow water-bottom sediments (see Moore et al., 2009). This aliasing gives rise to a repetitive steplike geometry near the water bottom that could be misinterpreted as sand waves or contourites on attribute horizon slices. Although aliasing is always in the mind of seismic processors, it is an easy pitfall for the unwary interpreter who may decimate the data vertically and laterally to fit a very large data volume into limited workstation memory. Such trace decimation is commonly used to highlight relatively wide structures such as channels, mass-wasting deposits, and listric faults. Attributes computed from such now-aliased data volumes can introduce a fabric on the data that does not correspond to geology.

Noise bursts and “funny-looking things” on attribute time slices

Not all acquisition and processing artifacts need be regular. Although modern acquisition and processing workflows attempt to suppress ground roll, traffic, and other noise, sometimes noise will leak in through the seismic source, receiver, and migration stack arrays. Strong noise bursts that fall above a processor-defined threshold will be eliminated during the trace-editing step of processing. Noise bursts that are relatively strong but fall below this threshold will be retained. Because prestack time migration maps each seismic data sample onto an ellipsoid, a noise burst will appear as a high-amplitude ellipse on time and horizon slices. If such a noise burst is mapped to a slightly shallower level than the reflector being analyzed, it will give rise to an elliptical structural high and an elliptical positive-curvature anomaly.
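For the simplest zero-offset, constant-velocity 2D case, the locus along which migration smears a noise burst can be sketched as follows; the prestack response is a fuller ellipsoid, and the velocity and times below are illustrative assumptions, not survey values.

```python
import numpy as np

def migration_impulse_locus(x0, t0, v, x):
    """Zero-offset time-migration impulse response of a spike at (x0, t0).

    A noise burst at two-way time t0 smears along
    t(x) = sqrt(t0^2 - 4 * (x - x0)^2 / v^2), real where the argument is >= 0,
    i.e., an arc whose horizontal half-width is v * t0 / 2.
    """
    arg = t0**2 - 4.0 * (x - x0) ** 2 / v**2
    t = np.full_like(x, np.nan, dtype=float)  # NaN outside the aperture
    mask = arg >= 0.0
    t[mask] = np.sqrt(arg[mask])
    return t

# a burst at x0 = 1000 m, t0 = 1.0 s, migrated with v = 2500 m/s
x = np.linspace(0.0, 3000.0, 301)
locus = migration_impulse_locus(1000.0, 1.0, 2500.0, x)
```

The apex of the arc sits at the burst location, which is why an isolated burst above a mapped horizon masquerades as an elliptical structural high on slices through that horizon.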

Attribute expression of processing artifacts on converted wave data

The Mississippi Lime play of northern Oklahoma and southern Kansas, USA, is one of the more recent resource plays of North America. The objective is to map tripolite (high-porosity, diagenetically altered chert) that forms reservoir sweet spots, as well as natural fractures that can provide conduits through the otherwise tight (usually underlying) fractured chert (Dowdell et al., 2013). In this survey from Kansas, there is also a spiculitic (sponge-derived) component to the chert (the Cowley Formation). The operator in this area used state-of-the-art multicomponent acquisition to evaluate whether converted (PS) waves might better differentiate chert from carbonate facies.

Figure 2a shows seismic amplitude and attributes computed from the PP data volume. The structure is representative of other Mississippi Lime plays, with low coherence indicating diagenetically altered and perhaps fractured facies. In turn, curvature images areas that are more folded and hence more likely to host natural fractures.

The PS seismic amplitude data volume shown in Figure 2b is of good quality, with subtle faults being better imaged. However, the corresponding attribute overlays indicate contamination by nearly vertical curvature artifacts. Like acquisition footprint, the artifacts appear to be rather periodic. Unlike acquisition footprint, these artifacts do not heal and in some cases become worse as we go deeper into the data volume. Such “structural” behavior is inconsistent with deformation in this part of the world. Given his own (limited) experience in PS processing, the first author attributes these artifacts to subtle but easy-to-make errors in common-conversion-point processing and velocity analysis. By examining vertical sections, it is easy to recognize and avoid the pitfall of interpreting these features as fractures illuminated by new technology.

Velocity pull-up and push-down

Time-migrated data will suffer from lateral changes in apparent structure due to overlying lateral changes in velocity. Higher velocity anomalies such as carbonate buildups will give rise to velocity pull-up, whereas lower velocity anomalies such as incised channels and shallow gas give rise to velocity push-down. Vertical changes in velocity can be laterally offset by nonvertical faulting, giving rise to “fault shadows” (Fagin, 1996), or to large velocity pull-ups at the base of “faster” strata such as those of isolated salt diapirs and carbonate platforms (Figure 3a and 3b). These figures exhibit subcircular features resembling “uplifted” structural highs. In northwest Australia, isolated carbonate buildups in Miocene strata of the Browse Basin have VP velocities that contrast with those of the surrounding units, which are essentially sandy and marly. The higher VP velocities in the carbonate buildups generate subvertical “faults” and subcircular “horsts” that are hard to distinguish from real structures. We stress that some of the oceanward flanks of these buildups coincide with the modern shelf-edge region, which is bounded by large faults. The superposition of velocity-driven fabric on real structures makes any interpretation in these areas exceedingly difficult, as seismic attributes accurately represent artifacts and existing structures together. A similar effect is observed below salt diapirs in regions such as the North Sea, where >2000-m-thick evaporites cause significant velocity pull-ups in presalt strata (Figure 3b). The most common effect of velocity pull-up is thus to introduce false structure on time-migrated seismic data. Fagin (1996) shows that prestack depth migration (PSDM) using an accurate velocity model correctly images the data and removes the false structure.
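The magnitude of such pull-up is easy to estimate for vertical raypaths: a reflector beneath a fast body of thickness h arrives early by roughly 2h(1/v_sediment − 1/v_body) in two-way time. The sketch below uses illustrative velocities and thicknesses, not measured Browse Basin or North Sea values.

```python
def pullup_twt(h, v_body, v_sediment):
    """Two-way-time pull-up (s) beneath a body of thickness h (m) whose
    interval velocity v_body (m/s) differs from the laterally adjacent
    sediment velocity v_sediment (m/s).

    dt = 2 * h * (1 / v_sediment - 1 / v_body); positive dt is pull-up
    (fast body), negative dt is push-down (slow body).
    """
    return 2.0 * h * (1.0 / v_sediment - 1.0 / v_body)

# ~2000 m of fast evaporites (assumed ~4500 m/s) replacing ~3000 m/s basin fill
dt_salt = pullup_twt(2000.0, 4500.0, 3000.0)   # large positive pull-up, ~0.44 s
# a thin, slow gas-charged zone produces the opposite sign (push-down)
dt_gas = pullup_twt(500.0, 1500.0, 2000.0)
```

Even with these rough numbers, the apparent relief approaches half a second of two-way time, which on a time-migrated volume is indistinguishable from genuine structure without depth migration.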

Figure 4 shows a less common effect of lateral changes in velocity. This Fort Worth Basin survey, previously described by Aktepe et al. (2008), shows the effect of an erosional unconformity on the fast Ellenburger Dolomite. Much of the shallower faulting in the Fort Worth Basin is controlled by deeper basement faults (e.g., Sullivan et al., 2006; Khatiwada et al., 2013). For simplicity, let us assume the Ellenburger Dolomite was laid down with an originally flat top surface over a relatively flat basement. Basement-controlled tectonic deformation resulted in basement/Ellenburger blocks being uplifted and eroded, leaving a thinner high-velocity Ellenburger Dolomite over the structurally high basement. This thinning of the Ellenburger gives rise to a velocity push-down, resulting in the relatively flat images seen on the seismic amplitude and attributes from the prestack time-migrated data volume shown in Figure 4a. Note that the basement in the time-migrated data looks quite flat on the amplitude data, as seen by the low- to moderate-amplitude curvature anomalies on the time slice. In contrast, the corresponding PSDM seismic amplitude and attributes shown in Figure 4b exhibit much stronger deformation. Simply stated, the deeper basement is significantly more deformed than the time-migrated data would indicate. This interpretation pitfall would be very difficult to avoid without having run PSDM with careful velocity analysis. In the case of overlying carbonate reefs and shale-filled channels, velocity pull-up and push-down give rise to apparent curvature anomalies that correctly represent the inaccurately imaged data, but not the true structure.

Attribute pitfalls associated with algorithmic design

The effect of dip on attribute computation

Coherence computed on time slices versus along dip

Default parameters in computer programs are typically designed to minimize the computation time needed to generate a good image. The value of computing coherence (and many other multitrace attributes such as Sobel filters, amplitude gradients, and gray-level co-occurrence textures) along structural dip has been known for some time (e.g., Marfurt et al., 1999). However, because most interpreters compute their attributes on their desktop, the default parameters are often set up to minimize run time. Figure 5a shows the result of using the defaults in a commercial coherence computation. Note the artifacts associated with steep dip that give rise to a “contour” appearance. Such artifacts are informally called “structural leakage” by interpreters and should not be misinterpreted as discontinuities. The pitfall is in interpreting these algorithmic artifacts as geologic discontinuities, as exemplified by coherence data from a mass-transport complex (MTC) in southeast Brazil (Figure 6).

The solution is quite simple: Compute coherence along structural dip. This choice happens to be an option in the same commercial package. The resulting image in Figure 5b accurately represents the seismic data seen on the vertical section, showing coherent, steeply dipping areas on the time slice, and incoherent areas in which the S/N is lower or the geology is more chaotic (representative of MTCs) as seen on the vertical slice KK′ shown in Figure 5c.

Apparent versus true tuning effects using spectral decomposition

Spectral decomposition is computed vertically, trace by trace. Even for depth-migrated data, spectral decomposition will provide spectra that are measured in apparent vertical frequency rather than in the true frequency perpendicular to the reflector of interest (Lin et al., 2013). This pitfall was first brought to our attention by Mike Helton, who applied spectral decomposition to an Amoco survey acquired in the Andean foothills in the late 1990s. On the dipping limbs of the folds, he found that the peak spectral frequency of his target layer shifted with respect to that seen at the structural crests and troughs. At that point in time, it was an artifact to be noted and a pitfall to be avoided. Lin et al. (2013) show how one can correct for most of these changes by scaling the measured frequencies for a reflector dip θ by 1/cosθ. We show the correction for the listric fault shown in the previous images in Figure 7a and 7b.
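A minimal sketch of this correction (our own illustration of the 1/cosθ scaling, not the Lin et al. implementation):

```python
import math

def true_frequency(f_apparent, dip_degrees):
    """Map an apparent (vertically measured) frequency to the frequency
    perpendicular to a reflector dipping at dip_degrees.

    Measured along a vertical trace, a dipping reflector's wavelength is
    stretched by 1/cos(theta), so f_apparent = f_true * cos(theta); we
    invert this by scaling the apparent frequency by 1/cos(theta).
    """
    return f_apparent / math.cos(math.radians(dip_degrees))

# a 30-Hz apparent peak frequency on a 60-degree dipping reflector
# corresponds to a 60-Hz true (reflector-perpendicular) frequency
f_corrected = true_frequency(30.0, 60.0)
```

Note that the correction is negligible for gentle dips (cos 10° ≈ 0.985) but becomes substantial on steep fold limbs and rotated fault blocks.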

Dip computations of events other than stratigraphic “reflectors”

Volumetric dip is currently computed using at least four different methods based on (1) instantaneous frequency and wavenumber, (2) the gradient structure tensor (GST), (3) semblance scans, and (4) plane-wave destructors, summaries of which can be found in Chopra and Marfurt (2007). What we usually want from a dip calculation is the dip of stratigraphic boundaries. What the algorithm actually provides is an estimate of the dominant dip within the analysis window. Other dipping events, such as backscattered ground roll, migration artifacts, and (common in depth-migrated data) fault-plane reflectors (FPRs), overprint the reflectors of interest. In the GST algorithm, the normal to the dipping reflector is computed as the first eigenvector of the GST matrix. By definition, the first eigenvector represents the direction of greatest data variability, which in this case is the direction of the greatest change in amplitude. Thus, the GST algorithm common to many commercial software packages will estimate the dip of the strongest reflector within the analysis window (Figure 8b). The algorithmically simpler but computationally more intensive semblance-scan algorithms have the advantage of limiting the range of the dip search, thereby reducing the chance of measuring a stronger, but steeply dipping, event cutting the weaker event of interest (Figure 8c). Because many attributes are computed along structural dip, errors can cascade. For example, structure-oriented filtering is applied along the local dip estimate; application of a structure-oriented filter along the dip of a noise event will enhance the noise-to-signal ratio rather than the S/N. Curvature computations may also appear erratic. If it is possible in the software you use, it is good practice to visually examine the 3D dip volume before computing any subsequent attributes.
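A minimal numpy sketch of the GST idea (our own illustration on a synthetic plane-wave window; the window size and wavenumber are assumptions): the tensor is built from amplitude gradients, and the eigenvector of the largest eigenvalue gives the average normal to the strongest locally planar event, which is also why the strongest event in the window wins.

```python
import numpy as np

def gst_dip(section, dz=1.0, dx=1.0):
    """Estimate the dominant apparent dip (degrees) in a 2D window using the
    gradient structure tensor: the eigenvector belonging to the largest
    eigenvalue is the average normal to the strongest planar event.
    """
    gz, gx = np.gradient(section, dz, dx)  # axis 0 = depth/time, axis 1 = x
    t = np.array([[np.mean(gx * gx), np.mean(gx * gz)],
                  [np.mean(gx * gz), np.mean(gz * gz)]])
    w, v = np.linalg.eigh(t)               # eigenvalues in ascending order
    nx, nz = v[:, -1]                      # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(abs(nx), abs(nz)))

# synthetic window: a plane-wave "reflector" dipping 20 degrees
theta = np.radians(20.0)
z, x = np.mgrid[0:64, 0:64]
window = np.cos(0.3 * (x * np.sin(theta) + z * np.cos(theta)))
```

If a stronger, steeper event (e.g., a fault-plane reflection) were added to `window`, its gradients would dominate the tensor and the returned dip would track the stronger event, exactly the pitfall described above.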

Algorithmic limitations

Coherencelike algorithms compare adjacent traces using a variety of methods: crosscorrelation, semblance, eigenstructure, GST, Sobel filters, and lateral Hilbert transforms, among others. Some of these algorithms (e.g., Sobel filter) are sensitive to lateral variation in amplitude, whereas others (e.g., crosscorrelation and eigenstructure) are only sensitive to lateral changes in waveform. All present-day coherence algorithms work in relatively small windows using somewhere between five and perhaps two dozen traces and 3–21 vertical samples.
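As an illustration of the waveform sensitivity noted above, here is a minimal semblance-based coherence sketch (our own, not any vendor's implementation); it operates on a small window of a few traces and samples, as the algorithms described here do.

```python
import numpy as np

def semblance_coherence(window):
    """Semblance-based coherence of a small analysis window.

    window: 2D array with rows = vertical samples, columns = traces.
    Returns the energy of the average trace divided by the average energy
    of the traces: 1.0 for identical traces, lower as waveforms differ.
    """
    n_traces = window.shape[1]
    num = np.sum(np.sum(window, axis=1) ** 2)
    den = n_traces * np.sum(window**2)
    return num / den if den > 0 else 1.0

# five identical traces are perfectly coherent; flipping the polarity of
# one trace (a crude stand-in for a discontinuity) lowers the semblance
t = np.linspace(0.0, 1.0, 21)
wavelet = np.sin(2.0 * np.pi * 5.0 * t)
coherent = np.column_stack([wavelet] * 5)
broken = coherent.copy()
broken[:, 2] *= -1.0
```

Because the measure compares waveforms rather than amplitudes, a uniform lateral amplitude scaling leaves the semblance unchanged; Sobel-filter-style measures respond to exactly that scaling instead.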

Figure 9a shows the vertical section shown in the previous suite of images, with three picked listric faults. The goal is to map these listric faults using coherence, experimenting with a 3×3-trace lateral window and a vertical window of 5, 15 (the default in the commercial package used here), and 31 samples. The coherence image generated using 15 samples was previously shown on the time slice in Figure 5b and did a nice job of mapping many of the near-vertical faults and MTCs. However, although the two MTCs indicated by the block arrows in Figure 9a are nicely delineated in Figure 9b, the listric faults are not. Increasing the size of the vertical analysis window does not help map these listric faults; rather, such an increase vertically smears the strongest lateral discontinuity, giving rise to a “stair-step” pattern. If more than one stair step is smeared across a time or horizon slice, the fault anomaly is seen more than once, making the image look much more complicated than it really is.

The “obvious” algorithmic solution is to rotate the coherence analysis window to be parallel to the hypothesized fault. Although such a rotation may be correlated with the reflector dip for listric faults, such a correlation of reflector dip to fault plane dip is unclear for conjugate faults, reverse faults, and pop-up features. The local discontinuity patterns seen by a human interpreter are within the context of adjacent discontinuities that reflect specific, often predetermined, geologic (fault) models. Although attributes will provide an ingredient to future fault interpretation algorithms, such computer-assisted multiscale analysis of discontinuities is in its infancy.

Pitfalls and artifacts associated with signal and noise

Seismic noise, geologic noise, and noise indicators of geologic anomalies

Seismic volumes contain three kinds of noise. The first type of noise is purely seismic and includes acquisition footprint, backscattered ground roll, migration operator aliasing, aliased shallow diffractions, multiples, and low reflectivity that falls below the ambient noise level. The expression of these noise features has negative value in mapping geology; such noise is also exacerbated by seismic attributes. At best, such artifacts can provide quantitative quality control for more precise selection of processing parameters. At worst, they form pits into which the interpreter falls.

The second type of noise is geologic and includes chaotic features such as MTCs, turbidites, fault damage zones, and karst-collapse features. These incoherent features exhibit an easy-to-recognize pattern in seismic attributes. Skilled interpreters are able to recognize these features and use them as architectural elements that fit within a larger tectonic, stratigraphic, or diagenetic framework. These features should always be preserved; here, the pitfall is to overprocess the seismic data and remove them.

The third type of noise is a mix of the two, with limitations in seismic imaging being a direct function of the (usually overlying) geology and/or fluids. Velocity pull-ups and push-downs underneath carbonate buildups are a classic carbonate reef indicator (Bubb and Hatlelid, 1977). Velocity push-down beneath shallow gas is a classic hydrocarbon indicator. The chaotic nature of karst collapse can be due to the leakage of shallow gas, which gives rise to anomalously low velocities, which in turn give rise to poorly focused reflectors within the collapse chimney (Story et al., 1999). Listric faults healing out into overpressured shales give rise to highly deformed low-reflectivity reflectors. Because these reflectors are discontinuous, they are difficult to pick in velocity analysis. The velocity is then interpolated between easier-to-pick shallower and deeper reflectors, resulting in a poorly focused overpressured-shale image. In all these cases, the “noise” in the seismic image is an indicator of geologic information.

We conclude with the coherence image shown in Figure 10. This early implementation of v(z) PSDM followed by eigenstructure coherence accurately mapped a system of orthogonal faults in a transpressional tectonic regime. Strong currents gave rise to significant cable feathering and acquisition footprint (orange arrow). Not accounting for the steep dip associated with the Galeota Ridge in the coherence computation gave rise to the structural leakage (cyan arrow). In hindsight, the low-coherence anomaly indicated by the red ellipse is an indirect indicator of geology. The faults in this survey provide strong pressure compartmentalization, which gives rise not only to thick gas columns, but also to high lateral variability in velocity, such that the v(z) migration produced poor-quality images in the area of the red ellipse.


Conclusions

Pitfalls in the structural interpretation of seismic attributes are essentially the same pitfalls encountered in the structural interpretation of conventional seismic amplitude data. However, attributes allow us to fall into these pits faster and in 24-bit color. In essence, seismic attributes enhance subtle features in seismic data that might otherwise be overlooked. For the same reason, seismic attributes enhance subtle noise in the data that might otherwise be ignored. Erroneous anomalies such as chaotic noise and spatially periodic acquisition footprint are easily recognized in attributes with which the interpreter is familiar, such as instantaneous frequency and rms amplitude. Attributes that may be less familiar to a given interpreter, such as spectral components and curvature, have their own responses to seismic noise. Velocity pull-up and push-down give rise to apparent structure that is mapped as ridges and valleys by curvature. Accurate depth migration will minimize such artificial structures. Spectral components measure apparent frequency vertically down the trace, not the true frequency perpendicular to a reflector having dip θ. Conversion to true frequencies can be approximated by scaling the measured frequencies by 1/cosθ.

The punch line of our analysis is that the interpreter does not need to be able to program an algorithm, but he or she should be familiar with its assumptions. Coherence computed along time slices will run faster than coherence computed along structural dip, but it generates serious artifacts where dipping reflections cause the peaks on one trace to align horizontally with the troughs on a neighboring trace. Longer window coherence operators will “vertically” stack discontinuities but smear them along dipping faults. Finally, many of today’s attributes are still imperfect and only approximate the mind of a human interpreter. Human interpreters can differentiate among a fault-plane reflection, crosscutting migration noise, and a formation reflection of interest. Computer algorithms cannot yet do so and may estimate the dip to be that of the strongest amplitude or the most coherent event.

Most of today’s attributes are computed within small analysis windows: perhaps one trace by one period for spectral components, 5 traces by 20 ms for coherence, and 200 traces by 100 ms for curvature. Human interpreters examine data at multiple scales. It is human nature to line up the suite of individual small-scale discontinuities along the listric faults shown in Figure 9a into a large-scale continuous pattern. Such pattern recognition helped our ancestors anticipate predators hiding in the forest. However, computer algorithms are only now being developed to extract such large-scale patterns, exposing less experienced interpreters (relatively new in the “forest” of seismic data) to ever-present pitfalls as hardware and computer algorithms keep up with advances in data acquisition and processing.


Acknowledgments

Thanks go to the multitude of talented former students at the University of Houston, The University of Oklahoma (OU), and Cardiff University (CU) who have inadvertently fallen into most of these pits. The influence of reflector dip on spectral components was first brought to the attention of the first author by M. Helton, who was working a survey in the Andean foreland while with Amoco. This research used data provided by the Integrated Ocean Drilling Program. T. M. Alves acknowledges Geoscience Australia, CGG, and Petroleum Geo-Services for the seismic examples shown in this paper. Schlumberger is acknowledged for the support provided to the 3D Seismic Lab, CU, and for providing licenses to Petrel for use in research and education at OU.

Kurt Marfurt began his geophysical career as an assistant professor teaching mining geophysics at Columbia University’s Henry Krumb School of Mines in New York. After five years, he joined Amoco at its Tulsa Research Center. Through successive reorganizations at Amoco, he obtained diverse experience in seismic modeling, migration, signal analysis, basin analysis, seismic-attribute analysis, and multicomponent analysis. While at Amoco, he obtained five patents, two of them in seismic coherence technology. He joined the University of Houston in 1999 as a professor in the Department of Geosciences and as director of the Center for Applied Geosciences and Energy, where his primary emphasis was on the development and calibration of new seismic-attribute technology. He subsequently joined The University of Oklahoma as the Frank and Henrietta Schultz Chair and Professor of Geophysics. His recent work has focused on applying coherence, spectral decomposition, structure-oriented filtering, and volumetric curvature to mapping fractures and karst, as well as on attribute-assisted processing.


Tiago M. Alves received a B.Sc. (1997) in engineering geology and a Ph.D. (2002) from the University of Manchester, UK. He has headed the 3D Seismic Lab at Cardiff University since 2012, and he is also a seagoing marine researcher with close ties to the Integrated Ocean Drilling Program. His research interests include deep-water continental margins, fluid migration in sedimentary basins, and reservoir engineering.