Sampling theory became of interest to the geophysical industry in the 1960s. Before then, seismic data were recorded as a continuous analog signal and processed with analog computers of considerable sophistication. Early digital seismic data were not quite as good as the analog results, but as more sophisticated algorithms, such as deconvolution, were implemented on digital data, analog processing was rapidly replaced by computing centers working on digital data from the new digital field recorders.
The sampling of data in the time domain was the first concern. Geophysicists became acquainted with the Nyquist rule and other basics of sampling continuous data. The first task was to determine an appropriate sampling rate for signals in the seismic bandwidth. Seismic signals generally peak around 30 Hz and seldom exceed 100 Hz. Most sampled-data applications of that era, such as radar, involved much higher frequencies and much faster sampling rates than those needed for seismic data; satellite signals, for example, are in the 6 GHz range.
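The Nyquist rule bounds these choices: sampling at an interval dt captures frequencies only up to 1/(2·dt). The sketch below works that arithmetic for the seismic bandwidth just described; the 2 ms and 4 ms intervals are common field values used here for illustration, not figures from the text.

```python
def nyquist_frequency(dt_s):
    """Highest unaliased frequency (Hz) for a sample interval dt_s in seconds."""
    return 1.0 / (2.0 * dt_s)

def max_sample_interval(f_max_hz):
    """Largest sample interval (s) that still captures frequencies up to f_max_hz."""
    return 1.0 / (2.0 * f_max_hz)

# Seismic signals seldom exceed 100 Hz, so the sample interval
# must be no longer than 5 ms:
print(max_sample_interval(100.0) * 1000.0, "ms")  # 5.0 ms

# Illustrative field intervals (assumed, not from the text) and
# their Nyquist frequencies:
print(nyquist_frequency(0.002), "Hz")  # 250.0 Hz at 2 ms
print(nyquist_frequency(0.004), "Hz")  # 125.0 Hz at 4 ms
```

In practice the anti-alias filter in the recorder rolls off well below the Nyquist frequency, so the usable bandwidth is somewhat less than these limits.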
In seismic surveys, there are two types of sampling. One is the sampling of a single channel of data in the time domain. The other is the spatial sampling that results from geophone arrays, group intervals, line spacing, and line lengths. As with other problems in geophysics, the sampling rules are complicated by targets that dip and curve, and by a velocity field that varies both laterally and vertically. Seismic sampling is therefore more complicated than sampling for communication applications.
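Spatial sampling obeys an analogous Nyquist rule in the wavenumber domain: a dipping event with frequency f, medium velocity v, and dip angle theta has horizontal wavenumber f·sin(theta)/v, and it aliases once that exceeds 1/(2·dx) for group interval dx. A minimal sketch of this relation; the velocity, dip, and group interval below are illustrative assumptions, not values from the text.

```python
import math

def max_unaliased_frequency(v_mps, dip_deg, dx_m):
    """Highest frequency (Hz) a dipping event can carry before spatial
    aliasing, using f_max = v / (2 * dx * sin(dip)) for medium velocity
    v_mps (m/s), dip angle dip_deg (degrees), and group interval dx_m (m)."""
    return v_mps / (2.0 * dx_m * math.sin(math.radians(dip_deg)))

# Assumed illustrative numbers: a 2000 m/s medium, a 30-degree dip,
# and a 25 m group interval.
print(max_unaliased_frequency(2000.0, 30.0, 25.0), "Hz")  # 80.0 Hz
```

Note how the steeper the dip, the lower the frequency at which spatial aliasing sets in, which is one reason dipping targets drive group intervals smaller than flat-layer reasoning would suggest.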