Below is the user area of Tmigcvp.sh for the Taiwan data.
Remember, cdpmin and cdpmax refer to the range of CMPs that will have DMO applied to them. Previously, we decided that we would always let sustolt apply DMO to all stack traces (Section 9.2.1).
The CDP bin distance, provided to us with the data files, is 16.667 m (parameter dxcdp, line 22).
Because we do not know the rms velocities for these data, we set vscale=1.0 (line 25). We are processing for the image, not for velocities.
Our first velocity is 1000 m/s and our last is 4000 m/s. Our velocity increment is 100 m/s. Here, we choose a large velocity range because we think the migration changes will be subtle.
The output of Tmigcvp.sh is 31 constant-velocity migration panels, totaling 128 Mbytes.
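The velocity parameters above determine the panel count. A minimal sketch of how those user-area values might be written, with the panel count computed from them (illustrative only; variable names and line numbers differ from the actual Tmigcvp.sh):

```shell
#!/bin/sh
# Hypothetical sketch of the Tmigcvp.sh user-area values discussed above.

dxcdp=16.667   # CDP bin distance in meters (supplied with the data)
vscale=1.0     # rms velocities unknown, so no velocity scaling
firstv=1000    # first migration velocity (m/s)
lastv=4000     # last migration velocity (m/s)
dv=100         # velocity increment (m/s)

# Number of constant-velocity panels the migration loop produces:
numv=$(( ( lastv - firstv ) / dv + 1 ))
echo "panels = $numv"    # 31, matching the output described above
```

With firstv=1000, lastv=4000, and dv=100, the arithmetic gives (4000 − 1000)/100 + 1 = 31 panels.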
Below is script Tmigmovie.sh for the Taiwan data. We set loop=1 (line 12) to run the movie forward continuously. Below, fframe=1000 and dframe=100 because we re-ran the migration (Tmigcvp.sh) for velocities 1000-4000 m/s at 100 m/s increment.
Below is the surange output of Tmigcvp.su. Several facts from this output are useful for supplying values to Tmigmovie.sh. On line 21, you have to supply the number of time samples – the value of key ns. On line 17, you have to supply the time sample interval – the value of key dt.
The panel velocities are in the offset key and the number of velocities is in the nvs key. Using these two keys, you can calculate the values for fframe and dframe.
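The calculation can be sketched as follows. This is a hypothetical illustration, not part of the actual scripts; the key values (first and last offset, nvs) are assumed from the velocity range described above:

```shell
#!/bin/sh
# Hypothetical sketch: derive the Tmigmovie.sh frame parameters from the
# header keys reported by surange (values assumed from the text above).

first_offset=1000   # offset key of the first panel = first velocity (m/s)
last_offset=4000    # offset key of the last panel  = last velocity (m/s)
nvs=31              # number of velocity panels (key nvs)

fframe=$first_offset                                         # first frame
dframe=$(( ( last_offset - first_offset ) / ( nvs - 1 ) ))   # frame increment
echo "fframe=$fframe dframe=$dframe"    # fframe=1000 dframe=100
```

These are exactly the fframe=1000 and dframe=100 values used in Tmigmovie.sh above.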
Our objective is to introduce you to the fundamentals of seismic data processing with a learn-by-doing approach. We do this with Seismic Un*x (SU), a free software package maintained and distributed by the Center for Wave Phenomena (CWP) at the Colorado School of Mines (CSM). At the outset, we want to express our gratitude to John Stockwell of the CWP for his expert counsel.
SU runs on several operating systems, including Unix, Microsoft Windows, and Apple Macintosh. However, we discuss SU only on Unix.
Detailed discussions of wave propagation, convolution, cross- and auto-correlation, Fourier transforms, semblance, and migration are too advanced for this Primer. Instead, we suggest you refer to other publications of the Society of Exploration Geophysicists, such as “Digital Processing of Geophysical Data – A Review” by Roy O. Lindseth and one of the two books by Ozdogan Yilmaz: “Seismic Data Processing” (1987) and “Seismic Data Analysis” (2001).
Our goal is to give you the experience and tools to continue exploring the concepts of seismic data processing on your own.
This Primer covers all processing steps necessary to produce a time migrated section from a 2-D seismic line. We use three sources of input data:
Synthetic data generated by SU;
Real shot gathers from the Oz Yilmaz collection at the Colorado School of Mines (ftp://ftp.cwp.mines.edu/pub/data); and
Real 2-D marine lines provided courtesy of Prof. Greg Moore of the University of Hawaii: the “Nankai” data set and the “Taiwan” data set.
The University of Texas, the University of Tulsa, and the University of Tokyo collected the Nankai data. The U.S. National Science Foundation and the government of Japan funded acquisition of the Nankai data.
The University of Hawaii, San Jose State University, and National Taiwan University collected the Taiwan data. The U.S. National Science Foundation and the National Science Council of Taiwan funded acquisition of the Taiwan data.
Chapters 1–3 introduce the Unix system and Seismic Un*x.
Chapters 4–5 build three simple models (complexity slowly increases) and acquire a 2-D line over each model. (These chapters may be skipped if you are only interested in processing.)
Chapters 6–9 build a model based on the previous three, acquire a 2-D line over that model, and process the line through migration.
Chapters 10–11 start with a real 2-D seismic line of shot gathers (Nankai) and process it through migration.
Chapters 12–13 and 15–16 start with a real 2-D line of shot gathers (Taiwan) and process it through migration.