Abstract

This paper presents preliminary results from direct imaging of P to S conversion data from the Earthscope Transportable Array (TA). Input data are receiver function estimates from the Earthscope Automated Receiver Function Survey (EARS). The waveforms from EARS were first stacked to produce composite events from 60 different source regions defined by a radial grid with a center in the western United States (U.S.). The composite waveforms were imaged with a three-dimensional, prestack migration method using a plane wave decomposition and an inverse generalized Radon transform algorithm to yield estimates of radial and transverse component scattering potential from each composite event. Data deficiencies limited this processing to 50 of the composite events. The migrated results from these 50 events were stacked to produce a final three-dimensional image volume examined in this paper through three-dimensional visualization techniques. The crust-mantle boundary, the 410 km, and 660 km discontinuities are imaged. An erratic negative conversion at the depth that has been suggested as the seismic lithosphere-asthenosphere boundary is observed in these results, but I avoid interpreting this feature as it may be biased by interference with first-order, free-surface multiples from the Moho. The most significant new feature imaged by this technique is a series of east-dipping conversions directly above the 410 km discontinuity. This feature is imaged in the region immediately above the transition zone across the entire western U.S., although it becomes partly obscured in Arizona by interference with what I interpret as sediment reverberations that contaminate some stations in southern California and the Rio Grande Rift. Overlaying these results with published tomography models suggests that this feature is linked to the Juan de Fuca–Farallon slab. I follow this hypothesis to build a three-dimensional model of the top of the Farallon slab that honors the new data and results from published tomography models. The model clarifies that these dipping features are consistent with a bending plate model of the slab. Using a simple kinematic model of the geometry of the slab window linked to the widening of the San Andreas system, I argue that the same dipping features seen under Nevada are consistent with those seen to the north, suggesting that there is a continuous, relatively smooth slab-related boundary layer under most of the western U.S.

INTRODUCTION

One of the main components of Earthscope (http://www.earthscope.org/) is a large aperture seismic array called the Transportable Array (TA) (part of the USArray; http://www.iris.edu/USArray/). The TA is, by most metrics, the largest passive seismic array ever deployed. The only serious competition for this title is the large aperture seismic array (LASA) deployed as a nuclear monitoring experiment in the 1960s (Bates, 1982). Advances in instrumentation, however, have created a data set with a quality so far beyond LASA that any such comparison borders on ridiculous. The most significant feature of the TA for this study is that it provides high-fidelity measurements of waveforms from teleseismic earthquakes. This allows a form of imaging closely akin to modern three-dimensional (3D) seismic reflection imaging, but at a spatial scale about two orders of magnitude larger than typical reflection seismic image volumes. The technology used here is unique and is the result of a development effort by my students and me spanning more than a decade. The approach used is a form of three-dimensional, prestack migration designed for application to teleseismic P wave data. It is best viewed as a 3D implementation (Pavlis, 2011) of the plane wave migration method described in Poppeliers and Pavlis (2003a). It is a significant advance for the USArray because this approach has a much higher resolution than existing tomography models.

The TA began deployment in 2004 and grew in place to the full deployment of ∼400 sensors across the western United States (U.S.). Since then the array has been in the process of moving in a west to east roll-along fashion (see http://www.usarray.org/). The experiment is planned to cover the entire lower 48 states by 2013. At the time of this writing the array is straddling the eastern boundary of the Cordillera. A significant body of work already exists in the application of modern seismology imaging techniques to TA data. The most relevant to this paper are teleseismic body-wave tomography methods, because they provide the highest resolution images of the upper mantle yet produced from these data. The first P wave tomography results for the TA were published by Burdick et al. (2008) and Sigloch et al. (2008). Burdick et al. (2008, updates in 2009, 2010) used first-arrival picks produced by the USArray's Array Network Facility. Sigloch et al. (2008) used a more sophisticated method utilizing both arrival times and amplitudes of P waves in multiple frequency bands to estimate a comparable P wave model. Similar methods were used by Tian et al. (2009) to produce a closely related SH model. In addition, Roth et al. (2008) and West et al. (2009) published independently determined P wave models for a more restricted region centered on the Pacific Northwest. Abstracts of recent meetings show that other seismic tomography efforts are under way and in various stages of publication. The Burdick et al. (2008) and Sigloch et al. (2008) papers are particularly important to this paper as both models are available online and were used for cross-validation of the results from this paper.

My results are among the first comprehensive, converted wave image results from the TA. Smaller scale studies (e.g., Boyd et al., 2004; Zandt et al., 2004) using a comparable technology have been published. In addition, I am aware of several other papers based on results by Abt et al. (2008), Miller et al. (2009), Levander et al. (2008), and Cao et al. (2008) that are in the publication process. The approach used here, however, is different from any wavefield imaging approach being applied by others. I will show that the method applied here is capable of resolving details of mantle structure at least an order of magnitude smaller than any existing tomography model. The amount of information in these results is so huge that the 3D data volumes could be used as a component of a large number of possible follow-up papers. Consequently, an important purpose of this paper is to document the nature of the technology I used to produce these image volumes and put the data in a form that can be utilized easily by others for further studies; this was done in the spirit of data sharing that is the backbone of Earthscope as a community facility. I emphasize that this paper would have been impossible without three data products produced by others: (1) waveform data from the Earthscope Automated Receiver Function Survey (EARS); (2) P wave tomography models (Burdick et al., 2008, 2009); and (3) the P wave model of Sigloch et al. (2008). Similarly, I argue that publication of this paper is important to provide a similar resource for others. The results shown here could be utilized for a number of purposes, but I have elected here to focus mainly on one issue, the geometry of the Farallon slab. I made this choice because, as shown in the following, these results provide independent supporting evidence for models of the Farallon slab geometry described by Sigloch et al. (2008) and Tian et al. (2009). I believe that this is the correct focus for this paper because this is an entirely new technology, and it needs to be validated against more mature methods. To do anything else runs a serious risk of interpreting noise as something real. There is a long list of intriguing features that could be teased from these results, but validation requires testing and comparison with independent results that are best left as future work. As a result, in the spirit of my colleagues (Burdick et al., 2008, 2009, 2010), I plan to update the scattered wave images found here as new data are acquired and new technologies are developed for improving the results. For the present, most of the data files used in this paper are contained in Supplemental File 1 (see footnote 1). Most of the figures in this paper can be recreated with data files and a public domain visualization package as described in Supplemental File 1 (see footnote 1). Some readers may choose to download these data files directly and manipulate them in the 3D visualization package in preference to many of the figures in this paper.

DATA PROCESSING METHODOLOGY

The imaging method I used for this paper is described in another paper (Pavlis, 2011), which presents the computational implementation of an approach defined from a more theoretical perspective in Poppeliers and Pavlis (2003a). The reader is referred to these background papers for details, but I summarize the key concepts here for convenience.

Input data are assumed to be three-component seismograms derived from deconvolution of teleseismic P wave data from known source locations, thus assuming that the input data are an approximation of the impulse response of the Earth to an incident teleseismic P wavefield. The method aims to image P to S conversions from subsurface structures. This allows the use of an approximation to the impulse response (Pavlis, 2005) commonly called receiver functions. This implementation, however, explicitly treats the data as vector (three component) data, anticipating that future developments could improve data quality over current generation receiver function estimates.

The raw data are teleseismic P wave data recorded by the Earthscope TA. The data I used here were processed with an automated system in a project called EARS (Crotwell and Owens, 2005). The EARS data are conventional receiver function estimates produced by applying the iterative deconvolution method of Ligorria and Ammon (1999). Crotwell and Owens (2005) applied a set of auto-editing rules to allow automated processing of the raw waveform data. Crotwell generously helped me acquire all the receiver function waveform data produced by EARS through May 2009; I used only data that had passed their automated quality control checks. This is an example of how well-defined data products produced by Earthscope can help reduce effort for other investigators. Months of work would have been required to produce a comparable database by the more conventional eyes-on model of data processing. The cost, however, is no quality control on the input beyond that produced by a robot.

The entire EARS data set consisted of 218,032 seismograms. However, the EARS project processed data from all broadband stations available through the IRIS (Incorporated Research Institutions for Seismology) Data Management Center. When I reduced the data to only include stations in the western U.S., the number of seismograms was reduced to 89,536 associated with a total of 1740 events. A large fraction of these events, however, had data for only a handful of stations because of the automated editing features used in their processing flow. Because the method I employed here is a prestack migration method, the ideal data set is deconvolved data recorded at every station for every event. Experience has shown that irregularly sampled data create artifacts (see following) that limit the resolving power of the data and seriously complicate interpretations. For this reason I developed a common-receiver stacking algorithm that produced composite, common event gathers from different source regions (Fig. 1). This algorithm has two elements. First, one defines a specific source grid geometry. I used a radial grid that is best understood as being equivalent to putting the pole at a specific point on the Earth. The center point for this study is the location in the western U.S., illustrated in Figure 1 (40.5°N, 112.5°W). The grid is then defined by epicentral distance and receiver-to-source back azimuth measured at the grid center. For this study the grid was made uniform in distance and azimuth angles with spacings of 10° and 30°, respectively. The grid points illustrated in Figure 1 are center positions for each radial segment. Every event in the EARS data set was then associated with one of these radial grid areas. The events that were associated with each cell were grouped by station. The receiver function data from each station in each such group were aligned on the P wave arrival time, rotated to standard geographic coordinates (north-south, east-west), and stacked with no moveout correction. The stacked seismograms were then associated with a pseudoevent with a hypocenter location defined by the centroid of the events in each grid cell and an artificial origin time (Fig. 1). The P wave arrival time for each stacked seismogram was then set to the theoretical P wave arrival time computed from IASPEI91 (International Association of Seismology and Physics of the Earth's Interior model) from the centroid position to each receiver. This yielded 60 pseudoevents, illustrated in Figure 1.
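
To make the binning and stacking procedure concrete, the following is a minimal sketch of the two elements described above. The function names and data layout are hypothetical and are not the code actually used; only the 10° by 30° cell geometry and the stacking of aligned, rotated traces come from the text.

```python
import numpy as np

# radial source grid: 10 deg rings in distance, 30 deg sectors in back azimuth,
# both measured relative to the grid center at 40.5N, 112.5W
def grid_cell(delta_deg, baz_deg, d_delta=10.0, d_baz=30.0):
    """Return the (distance ring, azimuth sector) cell index for one event."""
    return int(delta_deg // d_delta), int((baz_deg % 360.0) // d_baz)

def composite_trace(receiver_functions):
    """Stack receiver functions from one station for all events in one cell.
    Each trace is assumed to have the same length, to be aligned on the P
    arrival, and to be rotated to geographic components already."""
    return np.mean(np.stack(receiver_functions), axis=0)
```

The stacked traces for each station are then attached to a pseudoevent whose hypocenter is the centroid of the real events in that cell, with the P arrival time reset to the IASPEI91 prediction from the centroid, exactly as described above.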

The second phase in this data processing flow is estimation of a local plane wave decomposition. In this paper I used a 3D implementation of the pseudostation stacking method (Neal and Pavlis, 2001; Poppeliers and Pavlis, 2003a) in a program called pwstack (described in detail in Pavlis, 2011). This program uses a Gaussian window function to estimate a plane wave decomposition on a regular grid of image points. This method effectively breaks the actual array geometry into a collection of overlapping, virtual arrays centered on each image point of this regular grid of pseudostation points. In Neal and Pavlis (2001), it was shown that this procedure can be viewed as a spatial convolution of the incident wavefield with this window function. This is an important insight to understand. In this context it means the entire wavefield is smoothed laterally by this windowing operator. When used to estimate a plane wave decomposition, this algorithm can further be viewed as a combined spatial antialiasing filter and wavenumber filter. The wavenumber limit depends upon the choice used for the plane wave phasing applied to the collection of virtual arrays. In particular, all results in this paper used an 11 × 11 rectangular slowness grid centered on the incident wave slowness computed by IASPEI91 with spacing of 0.01 s/km and a range of ±0.05 s/km. I argue that a strength of this approach is that the well-defined spatial smoothing and wavenumber limits defined by this method improve the signal-to-noise ratio of the ultimate result because all smoothers tend to reduce incoherent components of the data. In addition, the plane wave decomposition with a limited range of slowness vectors reduces the impact of wave modes that are aliased (e.g., steeply dipping interfaces or body wave to surface wave scattered modes). The cost is a reduction in lateral resolution and dip resolution. An approximate limit on the maximum resolvable dip, ɸ, by this method is the following: 
$$\phi \;\approx\; \tan^{-1}\!\left(\frac{u_{max}}{u_S - u_P}\right).$$
This formula is easily derived from a dipping, planar scattering body with dip ɸ, assuming that umax is the maximum slowness in the plane wave decomposition and uS and uP are nominal S and P wave slowness values, respectively. The poorest dip resolution is obtained for vertical incidence where umax is just the range of the plane wave decomposition. Thus, if we assume a nominal upper mantle P velocity of 8.1 km/s and S velocity of 4.5 km/s, we can compute a rough lower bound on dip resolution with umax = 0.05 s/km as 27°. The actual dip that may be present in the final image is always larger than this, however, because the data are phased around the incident wave slowness vector, which always increases umax. However, this also shows that dip resolution at any point is azimuth dependent because individual events are migrated into a limited range of dip angles. This prestack smoothing and wavenumber-limiting feature is a core idea in this method, and whether it is a good or bad thing is subject to debate. Time will tell how alternative algorithms for imaging with this type of data compare, but I argue that this method is the only way yet conceived to properly deal with the fact that the USArray data are spatially aliased by any definition.
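
The 27° figure quoted above follows directly from this bound. A quick numerical check with the nominal values stated in the text (arithmetic only, not part of the processing code):

```python
import math

u_p = 1.0 / 8.1    # nominal upper mantle P slowness (s/km)
u_s = 1.0 / 4.5    # nominal upper mantle S slowness (s/km)
u_max = 0.05       # half-width of the plane wave decomposition slowness grid (s/km)

phi = math.degrees(math.atan(u_max / (u_s - u_p)))
print(f"approximate maximum resolvable dip: {phi:.0f} degrees")   # ~27
```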

There remain two implementation details about how the plane wave decomposition was accomplished that are important. (1) I used the variable aperture, Fresnel zone method (Pavlis, 2011) with P and S velocities of 8.1 and 4.5 km/s, respectively, with the wave period set to 5 s. The actual data have an upper band limit closer to 1 s, but 5 s provides an aperture appropriate for the 70 km nominal spacing of the TA. To make this less abstract, this translates to a radius of 11.25 km at the surface, ∼30 km at the Moho, and ∼100 km at the 410 km discontinuity. I set the absolute cutoff radius (Pavlis, 2011) as twice the Fresnel zone width. These parameters provide a reasonable level of spatial antialiasing to stabilize the solution and make the lateral resolution depth dependent. With this method the aperture of each virtual array is time variable and based on a diffraction formula that makes the smoothing parameter less ad hoc than the time-invariant window function in the original paper (Neal and Pavlis, 2001). (2) The second implementation detail relates to handling of coverage irregularities. Early results from these data were vastly inferior until this issue was addressed with a set of automatic editing features in the plane wave decomposition code (described in Pavlis, 2011). To provide minimal beam steering resolution at each image point, I used a five-station cutoff parameter. That is, any image point for which the maximum aperture for the pwstack program contained fewer than 5 stations was discarded. Similarly, any point for which the centroid of the actual set of stations inside the computed aperture was more than 35 km (half the nominal TA station spacing) from the target position was also discarded. These criteria led to 10 of the 60 composite events having no usable data (Fig. 1). An additional coverage problem was uncovered by making fold maps for each image point. These maps showed that only 17 of the 50 remaining composite events had anything close to full coverage of the study area. As a result I divided the data into the 17 best coverage events to contrast with results from all 50. I henceforth refer to the former as the “best events subset.”
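
The depth dependence of the aperture quoted above can be reproduced with a simple Fresnel-zone expression. The exact formula used by pwstack is given in Pavlis (2011) and may differ in detail; the form below is an assumption chosen only because it reproduces the three radii quoted in the text (11.25 km at the surface, ∼30 km at the Moho, ∼100 km at 410 km) for an S wavelength of 22.5 km (4.5 km/s × 5 s):

```python
import numpy as np

V_S, PERIOD = 4.5, 5.0            # km/s and s, the values quoted above
WAVELENGTH = V_S * PERIOD          # 22.5 km

def fresnel_radius(depth_km):
    """First Fresnel zone radius (km) for a conversion at depth_km.
    Hypothetical form: r = sqrt(lambda*z + lambda**2/4), which reduces to
    lambda/2 = 11.25 km at the surface."""
    return np.sqrt(WAVELENGTH * depth_km + WAVELENGTH ** 2 / 4.0)

for z in (0.0, 35.0, 410.0):
    r = fresnel_radius(z)
    # the absolute cutoff radius was set to twice the Fresnel zone width
    print(f"depth {z:5.0f} km: aperture radius ~{r:5.1f} km, cutoff ~{2 * r:5.1f} km")
```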

The third phase of this processing can be viewed as a form of prestack, depth migration or alternatively a procedure commonly called migration-inversion in the exploration literature (e.g., see special section with introduction by Gray, 1999). The latter applies because the algorithm uses an analytic inverse called the inverse generalized Radon transform (IGRT). The IGRT has also been used to produce 2D images from P to S conversion data by several groups using the method of Bostock et al. (2001), but their approach has a more ambitious objective: inversion for actual P and S velocities. Here the final output is best understood as an image of P to S conversion scattering amplitude in a longitudinal, radial, transverse coordinate system. This new method has several features that need to be recognized.

1. The method is a fully 3D, vector data method. That is, it was designed to work with areal arrays of broadband instruments like the TA. It is thus theoretically capable of resolving dipping interfaces within limits imposed by the TA station spacing and the range of illumination angles provided by the actual data. Others have implemented 2D prestack Kirchhoff methods (Sheehan et al., 2000; Aprea et al., 2002; Wilson and Aster, 2005; Levander et al., 2005; Boyd et al., 2007), but this is the first application of a fully 3D, three-component method to the TA. Note that in this context 3D means that scattering bodies outside the vertical plane aligned with the propagation direction can be properly positioned in space. Fully vector means that the method treats all data as intrinsically made up of three components. This is in contrast to the common practice of treating the problem as a scalar wavefield defined by the radial component data. The radial wavefield approximation, strictly speaking, is valid only for radially symmetric Earth models with isotropic layers. Imaging the tangential data, for example, has potential for defining edges of isotropic layers and scattering linked to anisotropy.

2. This is a prestack method. As noted above, although the TA uses a large number of modern, broadband sensors, the station spacing (nominally 70 km) renders the data spatially aliased in a Fourier sense for anything but low frequencies and/or nearly flat interfaces. This is problematic for most algorithms used in seismic reflection processing. Wavefield imaging results published for the TA to date (e.g., Abt et al., 2008; Cao et al., 2008) have mainly focused on common conversion point (CCP) stacks (Dueker and Sheehan, 1997, 1998). One reason this has been the focus of most work to date is that CCP stacking allows data to be resampled onto a grid that is finer than the station spacing. This allows a form of poststack imaging similar to that used in reflection processing (Ryberg and Weber, 2000), and subject to the same fundamental limitations (e.g., Yilmaz, 2001). Prestack imaging is now the preferred method in reflection seismic imaging because prestack methods are known to do a superior job of resolving horizons with conflicting dips (Yilmaz, 2001).

3. The IGRT formula was implemented using a plane wave summation approach (described in Pavlis, 2011). This means that the entire imaging process can be conceptualized as four distinct steps: (1) assembly of data into plane wave components associated with each pseudostation point; (2) back projection of data from each pseudostation point along a direction of look defined by the vector sum of the incident wave slowness vector and the slowness vector used in the plane wave decomposition computed by pwstack; (3) weighting to honor the IGRT formula and an (optional) coordinate rotation on a sample by sample basis; and (4) summation of back-projected data into a master image volume. The use of IGRT weighting of each data sample adds significant computational effort, but is justified to scale the amplitudes of each image point correctly.

4. The method uses ray-based propagators. This provides a theoretical framework internally consistent with the asymptotic formula of the IGRT and makes the forward model for complex media more computationally tractable. I use a further approximation in this implementation that is a descendent of the approximate ray-tracing method originally developed in a completely different context by Thurber and Ellsworth (1980). That is, back projection of data is made on ray paths computed from a radially symmetric Earth model. Given the dependence of this method on pseudostation stacking, I argue that this approximation is unlikely to have a major impact on the results. The most important thing this approximation adds is that it provides a computationally tractable way to utilize 3D Earth models to properly position any scattering body. As shown in Poppeliers and Pavlis (2003a), this calculation then reduces to a large number of line integrals through the Earth model.
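
Points 3 and 4 can be made more concrete with a short schematic. The sketch below is not the implementation described in Pavlis (2011); all names, data structures, and the straight-segment quadrature are invented for illustration. It shows (a) a travel time evaluated as a line integral of slowness through a 3D model along a ray path traced in the reference model, and (b) the overall structure of the four-step plane wave summation, with the IGRT weighting and back projection supplied as callables.

```python
import numpy as np

def path_time(ray_points, slowness_of):
    """Travel time as a line integral of slowness along a precomputed ray path.
    ray_points: (N, 3) array of points (km) along a ray traced in the 1D
    reference model; slowness_of: callable returning slowness (s/km) from the
    3D Earth model at a point."""
    segments = np.diff(ray_points, axis=0)
    lengths = np.linalg.norm(segments, axis=1)
    midpoints = 0.5 * (ray_points[:-1] + ray_points[1:])
    return float(np.sum(lengths * np.array([slowness_of(p) for p in midpoints])))

def migrate_event(plane_wave_gathers, image, igrt_weight, backproject):
    """Schematic of the four-step plane wave summation for one composite event.
    plane_wave_gathers: dict keyed by slowness offset (du_x, du_y); each value
        is a list of (pseudostation_position, incident_slowness, trace) tuples
        from the plane wave decomposition (step 1).
    backproject: callable mapping one trace to (image_cell, amplitude) pairs
        along its direction of look (step 2).
    igrt_weight: callable returning the IGRT weight for a cell and look
        direction (step 3).
    image: 3D array that accumulates the migrated result (step 4)."""
    for du, gather in plane_wave_gathers.items():
        for position, u_incident, trace in gather:
            # direction of look = incident slowness + plane-wave slowness offset
            u_look = np.asarray(u_incident) + np.asarray(du)
            for cell, amplitude in backproject(position, u_look, trace):
                image[cell] += igrt_weight(cell, u_look) * amplitude
    return image
```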

Since this is a prestack method, the final step is stacking the data from multiple events. A significant issue in passive array imaging is that the source geometry is not controlled. As a result illumination is stronger in some directions than others. In Poppeliers and Pavlis (2003b) we argued that this is an important advantage of a prestack method, and discussed some means for balancing the stack. These concepts have been implemented along with some new alternatives in the current implementation (Pavlis, 2011). The stacking algorithm described in Pavlis (2011) has the added capability of computing measures of repeatability. In particular, in the results following, I show an example of stacking coherence that can be used to appraise the relative reliability of different sections of the stacked image volume.

Appendix 1 contains elements of parameter tests run to examine the stability of the results, and discusses several additional practical issues on the sensitivity of the results to some of the parameters that can be varied in this algorithm. I have found that with these data the most essential feature is to apply a cutoff on viewing angle range in the final stack. The method uses the IGRT formula to form the image at each point in space, but the IGRT formula involves an integral over solid angles controlled by the range of illumination angles of the data. Irregular source and receiver geometry causes this quantity to vary strongly with position. In addition, previous applications of the IGRT (e.g., Miller et al., 1987) illustrate that the intrinsic coverage limitations of an array constrained to Earth's free surface causes data to be smeared along isochron surfaces. A practical issue illustrated in Appendix 1 is that it is useful to apply a solid angle cutoff in the grid summation step to keep the result from being contaminated by the data with poor coverage. A further clarification that this is a necessary constraint on the solution is also shown in Appendix 1. That is, the stack of the best event subset and the complete data set with a reasonable solid angle cutoff are nearly indistinguishable. I have relegated these to an appendix because the primary purpose of this paper is to present the results as they exist with the currently available data and state of processing. Future work will be aimed at improving these results and increasing the level of validation of the results. Appendix 1 should be viewed as initial efforts in these directions. The rest of this paper focuses on preliminary geologic interpretations from the results of the current best processing results.

INTERPRETATIONS

Mantle Discontinuities

Figure 2 is a static view that can serve as an anchor point for a series of animations that show the complete results for this study. Focus first on three discontinuities highlighted in Figure 2: (1) the Moho, (2) the 410 km discontinuity, and (3) the 660 km discontinuity. The importance of these features for this paper is the simple fact that they are completely expected. That is, these three discontinuities are universally accepted as being present everywhere on Earth. The fact that we image these horizons and they are migrated to the correct depths is a first-order validation of this method. Furthermore, as Figure 3 and Animation 1 show, the 410 and 660 km discontinuities are not well imaged on the transverse component solution. This is consistent with the standard model of these discontinuities as resulting from phase changes of olivine, and suggests that the materials on both sides of both boundaries are approximately isotropic. The reality of these features is further demonstrated in Animation 2, where coherence of the stacks increases as we slice through both depths. However, the 410 and 660 km discontinuities are more diffuse in this solution than might be expected when the result is compared to a more conventional CCP stacking result illustrated in Figure 4 (Animation 3). This observation is clarified by the simulation result shown in Figure 5 (Animation 4). That simulation was produced by assuming positive valued conversion horizons at 30, 200, 410, and 660 km and a single negative conversion horizon at 100 km depth. The simulation data were computed to exactly duplicate the recording geometry of the real data and were passed through the same processing stream as the actual data. In the simulation, layers are focused and correctly migrated to the proper depth. The result is subject to some artifacts where the station coverage is irregular, but for most of the area this produces random hash above each horizon. In some places (notably the edges of the image) this produces smile-shaped artifacts (Fig. 5C). This issue is discussed in more depth in Appendix 1, but note that this is a limitation of these results and a potential pitfall in interpretation. Holes in coverage created by the automated editing scheme used by EARS create the smile-shaped artifacts seen in Figure 5C. These artifacts limit the resolution of the results and may be one reason the 410 and 660 km discontinuities are more diffuse than the simulation results. However, while the simulation data have approximately the same bandwidth as the real data, the boundaries imaged in the simulation result are notably thinner and more regular than those in the results with the real data. Four hypotheses can explain this result: (1) the data are noisy and this is a noise artifact; (2) unresolved larger scale velocity variations are defocusing the image; (3) high S wave attenuation in the upper mantle broadens the pulse from both horizons; or (4) both discontinuities have structure at a scale near the upper bandwidth of these data. The first is always conceivable, but I argue that it is highly unlikely. Truly random noise would fold into the solution as random smile shapes (the migration impulse response) of alternating polarity to form a random background with horizontal smearing at the scale of the pseudostation array aperture. More problematic is signal-generated noise, which could conceivably be created by scattering near the Earth's surface. This effect, however, would not be expected to be depth dependent. The relative sharpness of the observed Moho suggests that this is not a significant process.

The second hypothesis cannot be ruled out completely, but sensitivity tests described in Appendix 1 and summarized in Figure 6 suggest that it is unlikely. That is, I show that at least for the Burdick et al. (2009) model, the differences between results computed with a radially symmetric model and with the tomography model are small. In retrospect this is not surprising, since most of the Burdick et al. (2009) tomography model has velocity variations <1%. Since in this method the data are propagated to depth using line integrals through the tomography model, it follows immediately that an upper bound on the depth deviations linked to a 1% deviation is on the order of ±7 km at the 660 km discontinuity and ±4 km for the 410 km discontinuity. Given the bandwidth of these data (∼1 s duration pulse), a perfectly sharp horizon would at best be imaged as an ∼10-km-thick positive pulse. This is verified by the simulation in Figure 5. Hence, the fact that depth variations with the Burdick et al. (2009) model compared to a radially symmetric model are small is a simple consequence of the relatively small variations in these velocities. A larger potential issue, however, is that the S model used to generate Figure 3 was generated by scaling the P model by an average Vp/Vs for ak135 (the background radially symmetric model used in Burdick et al., 2009). If one used a completely independent S wave model, larger differences might be apparent. However, the mapping of lag time to depth in any valid migration method with this type of data is always controlled by the amplitude of the P and S velocity fluctuations, and as long as those fluctuations are of the order of 1%, the simple scaling relation should hold. Large variations in Vp/Vs, however, could locally lead to much larger errors with a correspondingly larger impact. Additional work is needed with a range of independently determined P and S velocity models to completely rule out this hypothesis.
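
The numbers in this paragraph follow from simple scaling arguments; the short check below reproduces them under the nominal upper mantle velocities used earlier in the paper (arithmetic only, not part of the processing):

```python
V_P, V_S = 8.1, 4.5                      # nominal upper mantle velocities (km/s)

# thickness of the image pulse produced by a perfectly sharp interface:
# 1 s of P-to-S lag maps to roughly 1/(1/V_S - 1/V_P) km of depth
km_per_second_of_lag = 1.0 / (1.0 / V_S - 1.0 / V_P)
print(f"a ~1 s pulse maps to ~{km_per_second_of_lag:.0f} km of apparent thickness")

# upper bound on depth error from a 1% velocity perturbation along the path
for depth in (410.0, 660.0):
    print(f"{depth:.0f} km discontinuity: depth error bound ~ +/-{0.01 * depth:.0f} km")
```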

This leaves attenuation and smaller scale structural variations as the most viable candidates to explain these observations. Significant additional work is needed to sort out which is more important. What will be needed to address this issue is a set of variations on the simulation used to create Figure 5. That is, forward models for attenuation or models of wavefield interaction (e.g., transitional layering or irregular surface scattering models) at the discontinuities could be used to test these alternative hypotheses. Such simulations are well beyond the scope of this paper, but are put forward to encourage future work in this area. As with all inverse problems, validation of a hypothesis linked to a particular solution is a more difficult problem than producing the solution.

The upper section of the image volume is dominated by the signal from the crust. As expected, the crust is imaged clearly everywhere on the radial component and is the highest amplitude signal observed. The transverse component result is irregular with a correlating peak at the same time as the radial component, although not necessarily the same polarity everywhere and not always present. It is not clear how much of the irregularity on the transverse component is due to crustal anisotropy and how much is due to irregularities in illumination. Furthermore, consumers of the result of this study must recognize that this technique is not ideal for imaging the crust-mantle boundary, because the upper 2 s of the data are affected by a top mute used to separate the incident wavefield (as described in Poppeliers and Pavlis, 2003a; Pavlis, 2011). In addition, the pseudostation stacking aperture used here is relatively large (σ = 28 km with an outer cutoff radius of 56 km) at the depth of the crust-mantle boundary, leading to more lateral smoothing than is desirable at that depth. Consequently, interpretations of crustal thickness from these data should be viewed skeptically and used only in combination with other constraints.

A number of papers in recent years have addressed the issue of the seismic lithosphere-asthenosphere boundary (LAB) interpreted from S to P (Kumar et al., 2005a, 2005b, 2006; Landes et al., 2007) and P to S receiver function data (e.g., Dueker and Sheehan, 1997, 1998; Boyd et al., 2007; Rychert and Shearer, 2009). The seismic LAB by this definition is a negative polarity conversion at depths between ∼60 and 120 km. Rychert and Shearer (2009) conducted a global survey of inferred LAB depth and grouped data into three regimes: oceanic lithosphere, tectonic lithosphere, and shields. Since the western U.S. is unambiguously tectonically altered lithosphere, their results would predict a LAB depth of 81 ± 2 km. The results here show an irregular negative conversion in this depth range throughout most, but not all, of the western U.S. That is, most of the area shows a negative polarity signal of variable amplitude below the Moho, which transitions at depths below 100 km to a continuous, positive conversion horizon. Making a physical interpretation of this feature is risky from these data alone, however, for two reasons. First, as pointed out by Romanowicz (2009), we don't really know how the seismic LAB is related to upper mantle processes, particularly under continents where there is an inconsistency between results from surface wave inversions and scattered wave results like these. For this reason I only refer to this feature with the “seismic” modifier to clarify that it may have absolutely nothing to do with lithospheric thickness. Second, an interpretation of the seismic LAB from these data is premature because the result seen in Figure 3 (Animations 1 and 2) may be seriously overprinted with a signal from crustal multiples. Figure 2 shows two horizons with a question mark labeled PpMs and PsMs. PpMs and PsMs are free surface multiples from the Moho. For a typical crustal thickness, PpMs arrives ∼10 s after P with a positive polarity, while PsMs has a negative polarity and arrives ∼18.5 s after P (e.g., see Fig. 5 of Mercier et al., 2006). When treated as primaries in this migration algorithm these multiples would map to the approximate depths of the two features tagged with question marks in Figure 2. A further hint that these horizons are multiples is that, if one steps carefully through any of the animations, both horizons tend to track each other and mirror crustal structure. Thus, although it is tempting to try to interpret this very clear pair of horizons as upper mantle structure, there is a strong possibility these are a pure artifact. It is also possible that these multiples are exceptionally bright in this result because the EARS automated processing scheme may have a tendency to favor data with strong multiples. The reason is that the EARS processing passes the data through an h–k stacking algorithm (Zhu and Kanamori, 2000), which is used to estimate crustal thickness. In any case, the results between 100 and 250 km depths for any P to S conversion method like this should be viewed with skepticism until a suitable algorithm is found to remove these multiples. Furthermore, where the crust is thin (e.g., in the Basin and Range), I suspect that PpMs may interfere with any real seismic LAB arrival to distort the inferred seismic LAB depth.
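
A rough check of where these multiples land when migrated as primaries follows from the same lag-to-depth factor used earlier. Taking the ∼10 s and ∼18.5 s delays quoted above at face value (the exact values depend on the crustal model assumed), the apparent depths fall squarely in the 100 to 250 km range flagged above. This is illustrative arithmetic only; the actual migration uses the full ray geometry.

```python
V_P, V_S = 8.1, 4.5                                    # nominal mantle velocities (km/s)
km_per_second_of_lag = 1.0 / (1.0 / V_S - 1.0 / V_P)   # ~10 km of depth per s of Ps lag

for phase, lag_s in (("PpMs", 10.0), ("PsMs", 18.5)):
    depth = lag_s * km_per_second_of_lag
    print(f"{phase}: ~{depth:.0f} km apparent depth when migrated as a primary")
```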

It is also important that the reader recognize a different multiple problem that affects these data. Rondenay et al. (2005) described issues with sedimentary multiples and P to Rg scattering on direct imaging methods with P receiver function data in the Los Angeles Regional Seismic Experiment (LARSE). I suggest that the same problem is distorting these results in southern California. Free surface and interbed multiples and possible P to Rg scattering in sedimentary basins appear to create reverberations that can extend for tens of seconds after the direct P arrival. It appears that the receiver function estimates from stations in the Los Angeles basin are contaminated by problems in deconvolving data from these stations. This is revealed as a cone-shaped blemish in the output image with its apex pointing upward toward the Los Angeles basin. This blemish effectively destroys the image in most of southern California. It should be possible to eliminate this problem by additional data editing, using alternative deconvolution methods, and/or development of appropriate multiple suppression algorithms like that given in Fan et al. (2006b). The known issue with sedimentary reverberations in the Los Angeles basin and its impact on the results help in understanding similar blemishes elsewhere in this image volume. The clearest examples are stations in the Rio Grande Rift that appear to be associated with comparable smile-shaped artifacts. The artifacts there, however, are significantly less pronounced than in the Los Angeles basin. A hypothesis to explain this observation is that the problem is worse for the Los Angeles basin because of the large number of stations deployed there for earthquake monitoring. Image points that enclose the Los Angeles basin appear to be dominated by stations subject to sedimentary reverberations.

Constraints on Farallon Slab Geometry

Among the early seismic tomography results from the TA are new constraints on the geometry of the Juan de Fuca–Farallon slab beneath western North America. In Burdick et al. (2009) we compared results of our updated P wave model with independent studies by Sigloch et al. (2008) and Roth et al. (2008). These and a related S wave model determined by Tian et al. (2009) all agree on at least two key components of the subducting slab geometry: (1) the slab dips steeply in northern California and flattens to a very low angle under northern Nevada and Utah, where it appears to be suspended above the 660 km discontinuity (see Fig. 7); and (2) the slab anomaly weakens beneath the High Lava Plains in eastern Oregon to form what Roth et al. (2008) called a slab window. The results here provide an additional, completely independent test of these ideas.

Figure 7 shows a cross section through the converted wave image volume along a line similar to section B-B′ of Burdick et al. (2009). The comparison in Figure 7 with the Burdick et al. (2009) and Sigloch et al. (2008) models demonstrates the strong connection between east-dipping horizons imaged here and high-velocity bodies in the upper mantle imaged by tomography. That section demonstrates a pattern seen for the entire region in Animation 1. That is, this technique reveals a set of east-dipping bodies directly above the 410 km discontinuity that are continuous throughout the entire western U.S. The nature of the heterogeneities that generate the scattered waves imaged here is unknown, but Figure 7 suggests strongly that the scattering is linked to the upper surface of the subducting Juan de Fuca–Farallon plate. Animations 5 and 6 were constructed based on a hypothesis that these dipping features are markers for the top of this subducting plate. Figure 10 is a complementary representation using a contour plot of the depth to this surface in a standard map projection.

The surface illustrated in Figures 9 and 10 was created by merging available data into a kinematic model of the slab geometry. The control points used to define this model (Fig. 8) in the upper mantle were obtained by a procedure similar to standard interpretation methods used in seismic reflection imaging. That is, I worked through the volume picking horizons in the scattered wave image data, guided by an overlay display of the two tomography models shown in Figure 7. In the central part of the model the dipping horizons that are the focus of the model provide strong constraints, but to the east and west auxiliary constraints are needed. To the east the model should be viewed as an extrapolation guided by common features of the two tomography models. To the west the continuous horizons (that I argued herein may be multiples) prevent a direct interpretation from the converted wave image in the mantle shallower than ∼200 km. I thus made no picks from these data above the continuous negative horizon I suggest may be PsMs. Strong constraints to the west are imposed, however, by two important geometric components: (1) the location of the trench and (2) the location of active volcanoes in the Pacific Northwest. The former are hard constraints created by digitizing points along the active trench north of the triple junction at Cape Mendocino. The latter provide a softer constraint because of ambiguity in the actual point of origin of magma relative to the active vent. I used the map location of the peaks of Holocene volcanoes and defined control points for this model at a depth of 125 km below these points. I chose 125 km as a round number average for the source depth of calc-alkaline volcanics given by Gill (1981). More recent work by England et al. (2004) and England and Katz (2010) suggests that 125 km may be a bit deep, but since there is no seismicity to constrain this depth it remains a softer constraint than might be possible in other areas of the world. Furthermore, at the scale of these figures, variations through a range of feasible values for this depth constraint are nearly invisible and provide a constraint at least as strong as the picks made from the seismic data.

I fit a surface through the full set of control points illustrated in Figure 8 using a Delaunay triangulation method with no smoothing. I then created the mesh shown in Figure 9 using a natural set of generalized coordinates for such a surface. The east-west curves of this mesh, which I refer to as the x1 coordinate, are surface flow lines defined by the current pole of rotation between Juan de Fuca and North America using NUVEL-1 (DeMets et al., 1990). I refer to the complementary generalized coordinate as x2. The curve defined by x2 with x1 = 0 is the western limit of this surface and is defined by the position of the trench. I scale the x1 axis by time to satisfy the surface plate rotation geometry (flow lines seen in Fig. 9) and an assumption of no longitudinal (parallel to flow lines) strain. If we let R0 denote the distance from the pole of rotation (e.g., Cox and Hart, 1986) to a flow line (line of constant x2) and ω the angular rate of rotation about that pole, the surface plate relative velocity along an x1 flow line (the local plate motion velocity) is vpm = ωR0. The scaling of the x1 axis to time is then made using a three-dimensional line integral
$$x_1(t) \;=\; \int_0^{t} v_{pm}\,\cos\!\left[\phi(\tau)\right]\,d\tau \;=\; \int_0^{t} \omega R_0\,\cos\!\left[\phi(\tau)\right]\,d\tau,$$
where φ is the local dip for the surface at time τ computed by tracing a path along a surface flow line. In words this means we warp the plate motion path radially onto the surface at depth preserving distance. The value of this scaling is that within the limits of the no longitudinal strain approximation and the constant plate motion assumption, lines of constant x1 are approximate lines of constant time since the top surface of the slab was at the trench. The constant plate motion assumption is demonstrably wrong. Longer term motion would require stage pole data such as those of Doubrovine and Tarduno (2008), and a more elaborate kinematic model. Nonetheless, I argue that the wire frame defined by this coordinate system (Fig. 9) provides a useful way to visualize the geometry implied by my interpretations of the combined scattered wave image and tomography data. Keep in mind, however, that the flow lines (x1 coordinate direction lines) are meaningful in terms of current plate motion but the complementary x2 direction curves are valid as time lines only until ca. 6 Ma.
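
Because the warping preserves distance along the surface, the time assigned to a node on a flow line is simply the accumulated three-dimensional arc length along the interpreted surface divided by vpm. A minimal sketch of that bookkeeping, under the same constant plate motion and no-longitudinal-strain assumptions (the names and units are illustrative and not the code used to build Figure 9):

```python
import numpy as np

def times_along_flowline(points_xyz_km, omega_rad_per_myr, r0_km):
    """Assign a time (m.y. since the point was at the trench) to each node of a
    flow line warped onto the slab surface. points_xyz_km: (N, 3) array of 3D
    positions along the surface flow line, starting at the trench."""
    segment_lengths = np.linalg.norm(np.diff(points_xyz_km, axis=0), axis=1)
    v_pm = omega_rad_per_myr * r0_km          # local plate motion speed (km/m.y.)
    arc_length = np.concatenate(([0.0], np.cumsum(segment_lengths)))
    return arc_length / v_pm
```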

Extension of this model south of the triple junction near Cape Mendocino in California is more problematic. Northward migration of the triple junction has led to the creation of the San Andreas fault and left in its wake a slab window (e.g., Severinghaus and Atwater, 1990; Dickinson, 1997; Zandt and Humphreys, 2008; McQuarrie and Oskin, 2010) as the boundary switched from a subduction zone to a transform fault boundary. I modeled this process (Fig. 9) with a secondary slab model created with a similar generalized coordinate system. For this part of the model I constructed a surface constrained to pass through picks I made from the converted wave image volume and tomography data plus a set of control points at the current map location of the San Andreas set at a depth of 25 km (a crude average crustal thickness). This is based on an assumption that the slab once intersected the base of the crust near the current plate boundary. This completely neglects extension of the Basin and Range (e.g., McQuarrie and Wernicke, 2005; McQuarrie and Oskin, 2010), which distorts this coordinate system if we project it backward in time. Careful accounting of Basin and Range extension would steepen the dip of this surface at lithospheric depths with a correction that would increase progressively from south to north. This also grossly simplifies the geometry as it does not account for irregularities in the ridge system illustrated, for example, by Dickinson (1997). The model is founded on an assumption that the slab maintains its integrity and undergoes only bending strains. This assumption is unquestionably wrong at some depth where the slab rigidity becomes small. Nonetheless, as for the Pacific Northwest this model should be viewed as a simple starting model used to evaluate the hypothesis it was designed to test: are the dipping features seen above the 410 km discontinuity feasible as a marker of the top of the Farallon slab? Alternatively, do they at least mark a surface through which the top of the slab once passed?

The San Andreas section of the model uses the same flow line geometry to define lines of constant x2 as described above. An initial set of x2 = constant flow line curves was constructed in three dimensions in the same way I did for the Pacific Northwest section, but using the base of the crust at the map location of the San Andreas as tie points comparable to the trench. I then translated the origin for each of the x2 = constant curves to positions shown in Figure 9 using a simplified kinematic constraint based on the current relative motion vectors for the Pacific and Juan de Fuca plates relative to North America. That is, for each of the flow line curves I computed the arc length, sCM, of the path from Cape Mendocino to the trial origin along the path defined by the current trace of the San Andreas system (Fig. 9). This was converted to a time shift, tshift, as
$$t_{shift} \;=\; \frac{s_{CM}}{v_{PacNA}},$$
where vPacNA is an average Pacific–North America relative velocity across the San Andreas (I used 45.9 mm/yr based on the NUVEL-1A rate; DeMets et al., 1990) at a central point chosen as 36.8°N and 121.4°W. The origin of each flow line in the model was then translated by the distance ωR0tshift along the 3D path following the interpreted surface (Fig. 9). The result should be viewed as a rough quantitative estimate of the western edge of the top of the Farallon slab based on the interpretation I made of the combined scattered wave and tomography data. Like the rest of this model, note that this is a first-order attempt to evaluate these data and the model contains approximations that are questionable. It should be viewed as an initial starting point that can be improved with additional geologic constraints and validation against other tomography models developed from USArray data.
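
The translation applied to each flow-line origin is just the plate motion distance accumulated over that time shift. A small illustrative calculation follows; the helper names, the example arc length, and the unit handling are my own and are not taken from the processing described above.

```python
def t_shift_myr(s_cm_km, v_pac_na_mm_per_yr=45.9):
    """Time (m.y.) since the Mendocino triple junction passed a point located
    s_cm_km down the San Andreas trace from Cape Mendocino.
    Note that 1 mm/yr equals 1 km/m.y., so no unit conversion is needed."""
    return s_cm_km / v_pac_na_mm_per_yr

def origin_translation_km(s_cm_km, omega_rad_per_myr, r0_km):
    """Distance the flow-line origin is translated along the interpreted
    surface: local plate motion speed (omega * R0) times the time shift."""
    return omega_rad_per_myr * r0_km * t_shift_myr(s_cm_km)

# example: a hypothetical point 300 km down the San Andreas from Cape Mendocino
print(f"t_shift ~ {t_shift_myr(300.0):.1f} m.y.")
```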

The most compelling argument to favor this model is the simplicity of the geometry it produces. The surface is not contorted or wildly variable and shows unambiguously that the dipping features it was aimed to evaluate are consistent with geometric constraints on the subduction system that are difficult to dispute. This suggests both the model and the scattered wave image data have elements of truth that should affect our thinking about the geometry of this system. The reader can further evaluate this model relative to the complete scattered wave imaging model and the Burdick et al. (2009) and Sigloch et al. (2008) tomography models in Animations 5 and 6. The slab model geometry and the scattered wave image volume data are also available in Supplemental File 1 (see footnote 1) to allow independent evaluation of this issue.

A second argument that strengthens the results of this paper is that the slab model I developed is crude, but does a surprisingly good job of explaining the available data. Hence, I argue that the scattered wave image data provide an independent validation of the model of the geometry of the Farallon slab described by Sigloch et al. (2008). The slab model (Fig. 9 and Animations 5 and 6) is effectively a joint interpretation of these multiple data sets aimed at evaluating the Sigloch et al. (2008) model, but contains elements that would not have been possible with the tomography data alone, because these two technologies have highly complementary resolution characteristics. The vertical resolution of the scattered wave image result is more than an order of magnitude finer than any existing form of tomography because this is a direct, wavefield imaging method limited ultimately only by the bandwidth of the data. The lateral resolution, however, is no better than the tomography images, because the station spacing of the TA relative to the data bandwidth requires a wide aperture to stabilize the plane wave decomposition algorithm. The TA data are seriously spatially aliased in a Fourier sense for anything but near vertical incidence. Although, as I stated herein, lateral smoothing is an important reason for the success of this approach, it accomplishes this at a cost, i.e., dependence on extensive horizontal averaging. This introduces an inherently large difference in horizontal versus vertical resolution. A simple way to understand the implications is to look carefully at the transverse component in Animation 1. Much of that image volume is pure noise, so cross-sectional images from the volume can, in most cases, be viewed as a random collection of smile-shaped artifacts with a lateral scale length of ∼100 km and a vertical scale of ∼10 km. This is a reflection of the migration impulse response (Schneider, 1999), which for this method takes the form of these smile shapes. The smile shape is defined by an isochron surface associated with a single datum in a single seismogram. The lateral extent of these smile-shaped artifacts is controlled by the range of plane waves used to define the overall solution, including individual migrated event volumes and the overall stack. The vertical dimension is controlled by the deconvolved data pulse width.

The discussion of the migration impulse response and smile-shaped artifacts was included to emphasize an important disclaimer for all potential users of these results. My interpretation was guided by an assumption that the conjecture I made above concerning sedimentary basin effects linked to the Los Angeles basin and the Rio Grande Rift was true. That is, picks I made essentially ignored the data volume beneath all of southern California and most of New Mexico. This puts two limitations on the result that may be possible to fix with additional data and improved processing. First, the upper limit of the Farallon slab predicted by this model, unfortunately, is everywhere within an area with data issues. Where coverage is adequate in northern California and Nevada, the predicted upper edge of the slab is within the region of the upper mantle affected by crustal, free-surface multiples. Where the predicted depth drops below that depth, there are large holes in the coverage of EARS data that were usable by the plane wave migration method. Reprocessing of the raw data and inclusion of the Sierra Nevada Earthscope Project data (Frassetto et al., 2010; Zandt et al., 2004), which were recently released, may allow us to work around this problem.

Burdick et al. (2008, 2009), Sigloch et al. (2008), and Tian et al. (2009) all noted the weakening of the slab along a section similar to the right side of Figure 7. The tomography data shown there demonstrate this clearly, but note that unlike the Cape Mendocino section shown to the left, the tomography models disagree dramatically along this section line. In this case, if we accept the assertion here that the east-dipping horizons imaged above the 410 km discontinuity mark some measure of the top of the subducting slab, then the results here are more definitive than any of the tomography models. I make this claim because that structure is as clear here as anywhere else in the data, in spite of the fact that the slab appears washed out in the tomography results.

DATA PRODUCT

Earthscope was founded on a basic principle of data sharing. The results in this paper are not perfect; this is a work in progress. Nonetheless, a primary purpose of this paper is to provide these data in a form that will allow others in the community to move forward faster. I reiterate that this paper, in fact, would have been impossible without three data products produced by others: (1) the EARS waveform data of Crotwell and Owens (2005); (2) the Burdick et al. (2008, 2009) tomography models; and (3) the Sigloch et al. (2008) model. Furthermore, as for many groups using the TA, these results will naturally evolve for years as new data are collected and new approaches are developed to get the most out of the available data. As that happens I intend to post new results (in the spirit of Burdick et al., 2008, 2009) at http://geology.indiana.edu/pavlis/USArray/. In addition, a critical item that is lacking for many data products is a direct means for someone else to duplicate the results of a study like this. The waveform data used to produce the results herein are now being archived as a data product through the IRIS Data Management Center (http://www.iris.washington.edu/ears/). Furthermore, the version of the code used to produce these results is expected to be available as part of the Pavlis (2011) publication, but links to updated versions of the software will also be available with the most current results.

ACKNOWLEDGMENTS

I thank Phil Crotwell for providing me a mechanism to obtain all the Earthscope Automated Receiver Function Survey waveform data before this was possible through IRIS (Incorporated Research Institutions for Seismology). The original teleseismic event data came mainly from the Transportable Array component of the USArray facility supported by the National Science Foundation's Major Research Facility program under cooperative agreement EAR-0350030. This work profited greatly from many conversations with my colleague and collaborator on this project, Art Weglein. This paper would not have been possible without support to me by the National Science Foundation under grants CMG-0327827 and EAR-0934289, and advanced computing resources provided by the National Science Foundation's TeraGrid program at Indiana University.

APPENDIX 1. PARAMETER TESTS

Anyone who has done seismic reflection processing knows that all migration schemes contain a fair number of tunable parameters that can have a dramatic impact on the quality of the output. This method is no exception, and I have spent considerable effort attempting to improve the results by adjusting different parameters. It is important to summarize the results of these tests because doing so could significantly reduce the effort required for someone to duplicate this work, given that the computations summarized here consumed multiple central processing unit (CPU) years of computer time.

Anyone who has processed seismic reflection data also knows that automatic gain control (AGC) operators and cleverly chosen color maps can hide a lot of blemishes in real seismic images. Figure 11 is a single frame from Animation 7, which displays the same result as Figure 3 (Animation 1) at true amplitude. It shows three things about the main image volume (the focus of this paper) that should be recognized by any potential consumers of these results.

1. There is a huge difference in amplitude between the top 200 km of this image volume and conversions from deeper horizons. The true amplitude result in Figure 11 has to be saturated (clipped) in the upper 200 km or structures in the transition zone region are completely invisible. At this stage in the development of this methodology, however, the observed amplitude variation should be viewed cautiously because I have not yet done simulations to fully validate that the end-to-end amplitude scaling is correct.

2. I argued herein that the portion of the image in southern California is contaminated by reverberations in the Los Angeles basin. The true amplitude results show that these reverberations have a large relative amplitude; when projected to mantle depths they are comparable to the Moho arrival. This further clarifies why this artifact is so dominant and why it destroys the image of transition zone structure in that region.

3. On a more positive note, the true amplitude display reveals features such as the east-dipping conversions seen on the transverse component in Figure 11 under northwestern Nevada. These correlate directly with the location of the slab inferred from the combination of these results with published tomography models (Fig. 7). This is a classic example of how true amplitude displays are useful complements to results, like Figure 3, that have passed through an AGC operator (see the sketch following this list). Users of the results of this paper are urged never to interpret anything without comparing the true amplitude and AGC results.
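
To make the contrast between the two display modes concrete, the following is a minimal sketch of a boxcar-windowed AGC operator applied to a single trace. It is illustrative only: the window length, the epsilon water level, and the function name are my own choices, not the operator used to produce the figures in this paper.

```python
import numpy as np

def agc(trace, window_samples=101, eps=1e-12):
    """Scale each sample by the RMS amplitude in a centered sliding window.

    window_samples should be odd so the output length matches the input.
    """
    trace = np.asarray(trace, dtype=float)
    half = window_samples // 2
    padded = np.pad(trace, half, mode="edge")
    kernel = np.ones(window_samples) / window_samples
    rms = np.sqrt(np.convolve(padded ** 2, kernel, mode="valid"))
    return trace / (rms + eps)

# A weak deep conversion becomes visible after AGC, while the true-amplitude
# trace preserves the large relative strength of the shallow (Moho-like) arrival.
trace = np.zeros(1000)
trace[100] = 1.0    # strong shallow conversion
trace[700] = 0.05   # weak deep conversion
balanced = agc(trace)
```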

Because the method used here is a prestack migration method, an important issue is how the migrated image volumes are stacked. Each event in the data set defines a particular direction of illumination. In addition, the plane wave migration can be viewed as phasing the array around a range of angles centered on the incident wavefield's propagation direction. I have found that one of the most effective ways to reduce artifacts from irregular coverage is to discard data from the stack that have a limited range of illumination angles. In Pavlis (2011) I show that the image at each point in space is computed as an integral over all data that satisfy the timing constraint for P to S conversion from that point. The integral involves a weighting term derived from the inverse generalized Radon transform and a solid angle term, defined by the local isochron surface for each datum, with a magnitude given by the rate of change of the scattering angle with the slowness vector. A more intuitive perspective is that the accumulated solid angle at each image point measures the range of viewing angles that contribute to the image at that point. The inverse Radon transform is an analytic inverse transformation that is exact only if the range of viewing angles is complete. That condition is not met anywhere in these results, but the computed solid angle term is a measure of the relative range of illumination at each image point. It is for this reason that the stacking program I used has a parameter to exclude data from the stack for which the accumulated solid angle is below a threshold.

Figures 12 and 13 illustrate the importance of this issue in two ways. Figure 12 shows that migration of events with nearly complete station coverage produces an image volume with nearly uniform illumination throughout most of the interior of the volume. In contrast, poorly recorded events have irregular coverage that causes the illumination to be highly variable throughout the volume. Figure 13 (Animation 8) compares the results of stacking all the data with and without the solid angle cutoff parameter. The solution without a cutoff is clearly inferior to that using a fairly modest cutoff of 0.1. The reason is that the image volume constructed by this method can be thought of as a superposition of a large number of the smile shapes that define the impulse response for each sample in the data. When the coverage is more uniform, these shapes combine constructively at real interfaces and interfere destructively elsewhere. With irregular coverage this does not happen, and the result is subject to smile-shaped artifacts. Given the clear advantages of using a solid angle cutoff, all of the results of this paper were produced using a cutoff of 0.1 radians squared.
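
The sketch below illustrates the cutoff logic described above, under the assumption that each migrated event supplies an image volume and a matching volume of accumulated solid angle per image point. The function name, data layout, and averaging scheme are hypothetical simplifications for illustration; this is not the stacking code of Pavlis (2011).

```python
import numpy as np

def stack_with_cutoff(event_images, event_solid_angles, cutoff=0.1):
    """Stack per-event image volumes, excluding poorly illuminated points.

    event_images       : list of equally shaped 3D image volumes, one per event
    event_solid_angles : list of matching volumes of accumulated solid angle
    cutoff             : minimum accumulated solid angle for a sample to be stacked
    """
    total = np.zeros_like(event_images[0], dtype=float)
    count = np.zeros_like(event_images[0], dtype=float)
    for image, omega in zip(event_images, event_solid_angles):
        keep = omega >= cutoff          # drop samples with too narrow illumination
        total[keep] += image[keep]
        count[keep] += 1.0
    # average only where at least one event passed the cutoff
    return np.where(count > 0, total / np.maximum(count, 1.0), 0.0)
```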

Given the results shown in Figure 13 (Animation 8), it is helpful to consider the amount of useful information actually contributed by events with poor coverage. Figure 1 shows a division of the event composites used here into those with nearly complete coverage and those with very irregular coverage. Figure 14 (Animation 9) shows that when the 0.1 solid angle cutoff is used, the difference between stacking all the data and stacking only the 17 best events is small. This is important for two reasons. First, it further emphasizes the importance of using a solid angle cutoff to stabilize the results, because without it one can expect a poorly focused image like that shown in Figure 13. Second, it suggests that migration of events with poor coverage is largely a waste of resources. This matters because the migration is not a trivial calculation. It requires 3–8 h of CPU time on a current-generation supercomputer to migrate one event, so it takes ∼3 times more computing time to produce a result with all 50 composite events than with the 17 best events. Most of the results shown in this paper still use all 50 events, but potential users of this software on other data sets are cautioned to consider this issue in developing a data processing strategy.
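
The rough arithmetic below, using only the 3–8 h per event figure quoted above and the event counts from Figure 14, shows where the factor of ∼3 comes from.

```python
# Back-of-the-envelope cost comparison; the per-event hours are from the text.
hours_per_event = (3, 8)
for label, n_events in (("all 50 composite events", 50), ("17 best events", 17)):
    low, high = (n_events * h for h in hours_per_event)
    print(f"{label}: roughly {low}-{high} CPU hours")
```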

The pwstack program (described in Pavlis, 2011) was used to estimate a plane wave decomposition as the input for all the results in this paper. The pwstack program is best thought of as a variable-aperture, slant stack algorithm. Experience has shown that there is a fundamental tradeoff between the aperture width parameters and the stability of the results. Aperture widths below the nominal station spacing produce total junk in the output because the only leverage on dipping layers is three-component polarization. Extremely large apertures, in contrast, heavily smooth the image and produce an equally useless result. The opposite end member to a very narrow aperture is a full array stack, which would reduce the data for each event to a single average seismogram. Clearly the optimum aperture is some compromise between these end members. The process of finding this optimum aperture has direct parallels to finding the optimum regularization constant in a seismic tomography inversion. For the sake of brevity I do not show results of the extensive testing I have done to evaluate the range of appropriate apertures for this data set. Users of this code should recognize, however, that a series of runs with variable aperture is a necessary initial step for any experimental geometry.
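
As a rough illustration of the idea, the sketch below implements a weighted, variable-aperture slant stack at a single pseudostation point for one plane-wave slowness. The Gaussian taper, the hard cutoff at three aperture widths, and all names are illustrative assumptions; this is not the pwstack algorithm itself, which is documented in Pavlis (2011).

```python
import numpy as np

def pseudostation_slant_stack(traces, station_xy, pseudo_xy, slowness, dt, aperture_km):
    """Aperture-limited slant stack of aligned seismograms at one grid point.

    traces      : (nsta, nsamp) array of aligned seismograms
    station_xy  : (nsta, 2) station coordinates, km
    pseudo_xy   : (2,) pseudostation (grid) point, km
    slowness    : (2,) horizontal slowness of one plane-wave component, s/km
    dt          : sample interval, s
    aperture_km : width of the Gaussian taper controlling the effective aperture
    """
    traces = np.asarray(traces, dtype=float)
    offsets = np.asarray(station_xy, dtype=float) - np.asarray(pseudo_xy, dtype=float)
    dist = np.linalg.norm(offsets, axis=1)
    weights = np.exp(-0.5 * (dist / aperture_km) ** 2)   # taper with distance
    weights[dist > 3.0 * aperture_km] = 0.0              # hard aperture cutoff
    delays = offsets @ np.asarray(slowness, dtype=float) # plane-wave moveout, s
    stack = np.zeros(traces.shape[1])
    for trace, w, tau in zip(traces, weights, delays):
        if w == 0.0:
            continue
        shift = int(round(tau / dt))          # nearest-sample moveout correction
        stack += w * np.roll(trace, -shift)   # roll wraps ends; acceptable for a sketch
    total = weights.sum()
    return stack / total if total > 0 else stack
```

Shrinking aperture_km toward the station spacing reproduces the unstable narrow-aperture end member described above, while a very large value approaches a full array stack.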

Animation 10 is a complete comparison of results computed using PREM (the radially symmetric Preliminary Reference Earth Model; Dziewonski and Anderson, 1981) and the result used for the rest of the paper, which was imaged using the tomography 1 model of Burdick et al. (2009). Figure 6 was produced from one frame of Animation 10. This demonstrates that the results using this particular three-dimensional model and PREM are virtually indistinguishable throughout the entire image volume. Further tests (not shown) indicate that systematic errors in the VP/VS ratio create systematic depth errors but do not alter the basic geometry of the image. This presumably happens because such systematic errors mainly distort the image volume in the vertical direction and have a small impact for the range of dips preserved in this image volume.
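
A simple vertical-incidence calculation illustrates why a VP/VS bias maps into a systematic depth shift without changing the geometry of the image. The worked example below is my own, using an illustrative constant mantle velocity; it is not part of the processing chain used for the figures.

```python
def ps_delay(z_km, vp, vp_vs):
    """Ps - P delay (s) for a conversion at depth z; constant velocities, vertical rays."""
    vs = vp / vp_vs
    return z_km * (1.0 / vs - 1.0 / vp)

def depth_from_delay(t_s, vp, vp_vs):
    """Depth (km) implied by a Ps delay under an assumed VP/VS ratio."""
    vs = vp / vp_vs
    return t_s / (1.0 / vs - 1.0 / vp)

vp = 8.0                                   # km/s, illustrative mantle average
t_410 = ps_delay(410.0, vp, vp_vs=1.80)    # delay for a conversion at 410 km depth
for vp_vs in (1.75, 1.80, 1.85):           # candidate model VP/VS ratios
    print(vp_vs, round(depth_from_delay(t_410, vp, vp_vs), 1), "km")
```

For this example the same delay time maps to roughly 437, 410, and 386 km for the three ratios: a vertical stretch of a few tens of kilometers, with dips essentially unchanged, consistent with the statement above.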

Supplemental File 1. Zipped file containing .vtk, .txt, .xml, and .pdf files. The full graphical content of this paper is actually in this supplement. Most of the figures in the paper are static versions of one of the three-dimensional visualizations described in this supplement. Definitions of files and links to software that can be used to view these files are given in the Supplement Description PDF within the zipped file. If you are viewing the PDF of this paper or reading it offline, please visit http://dx.doi.org/10.1130/GES00590.S11 or the full-text article on www.gsapubs.org to view Supplemental File 1.