Interpretation of 2-D Seismic Data
In previous times, the interpretation phase of a seismic project was considered to begin when the processing people delivered a “final stack” (perhaps not even migrated) to the geophysical interpreter. Today it is realized that interpretation truly begins at the survey design phase, when choices about offsets, line orientation, source characteristics, etc., are made. These choices can influence the interpretability of the resultant data. For example, a survey designed for deep targets may lack the high frequencies or fold needed to image stratigraphic details at shallow levels. Alternatively, the spacing between midpoints (seismic traces) might be too great to image subsurface features of interest (e.g., “shoestring” sandstones). Interpretive choices continue through the processing phase, as processors make decisions (often based on time and money considerations) that influence the character, and hence the interpretability, of the stacked seismic data. Recognizing the importance of processing, some larger companies routinely send their field data to two or more processing shops and compare the results.
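The limit that midpoint spacing places on imaging can be made concrete with the standard spatial-aliasing relation f_max = v / (4 Δx sin θ), where v is velocity, Δx is trace (midpoint) spacing, and θ is the dip of the reflector. The sketch below is illustrative only; the velocity, spacing, and dip values are assumptions chosen for the example, not values from the text.

```python
import math

def max_unaliased_frequency(velocity_ms, midpoint_spacing_m, dip_degrees):
    """Highest frequency (Hz) a dipping event can carry before it is
    spatially aliased: f_max = v / (4 * dx * sin(theta))."""
    return velocity_ms / (4.0 * midpoint_spacing_m *
                          math.sin(math.radians(dip_degrees)))

# Illustrative values: 2000 m/s medium, 25 m midpoint spacing, 30-degree dip
f_max = max_unaliased_frequency(2000.0, 25.0, 30.0)
print(f"{f_max:.1f} Hz")  # → 40.0 Hz
```

Halving the midpoint spacing doubles f_max, which is why a spacing chosen for deep, gently dipping targets can be too coarse for steep or small shallow features.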
Another change from previous times is that an increasing amount of processing is now done interactively by the interpreter during the interpretation phase. As noted at the end of the last chapter, interpreters can now interactively evaluate the effects of different processing routines (filtering, trace balancing, deconvolution, etc.) on stacked, migrated data sets (“post-stack processing”). This type of analysis might be employed to enhance certain aspects of the data, remove unwanted noise, or match two or more data sets of different vintages.
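One of the simplest post-stack operations mentioned above, frequency filtering, can be sketched as a zero-phase band-pass applied to a single stacked trace. This is a minimal illustration assuming SciPy is available; the sample rate, pass band, and synthetic “signal plus noise” trace are invented for the example and do not come from any particular data set.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trace, fs, low_hz, high_hz, order=4):
    """Zero-phase Butterworth band-pass, a common post-stack filter.
    filtfilt runs the filter forward and backward, so reflection
    timing is not shifted."""
    nyq = 0.5 * fs
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, trace)

# Synthetic stacked trace: a 30 Hz "reflection" plus 150 Hz "noise",
# sampled at 2 ms (500 Hz) -- purely illustrative values.
fs = 500.0
t = np.arange(0, 1.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 150 * t)

filtered = bandpass(trace, fs, 10.0, 60.0)
```

Trying several pass bands interactively and comparing the filtered sections is exactly the kind of post-stack experiment the interpreter can now perform at the workstation.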