ABSTRACT

Focal-time analysis is a straightforward data-driven method to obtain robust stratigraphic depth control for microseismicity or induced seismic events. The method eliminates the necessity to build an explicit, calibrated velocity model for hypocenter depth estimation, although it requires multicomponent 3D seismic data that are colocated with surface or near-surface microseismic observations. Event focal depths are initially expressed in terms of zero-offset focal time (two-way P-P reflection time) to facilitate registration and visualization with 3D seismic data. Application of the focal-time method requires (1) high-quality P- and S-wave time picks, which are extrapolated to zero offset, and (2) registration of correlative P-P and P-S reflections to provide VP and VS time-depth control. We demonstrate the utility of this method by applying it to a microseismic and induced-seismicity data set recorded with a shallow-borehole monitoring array in Alberta, Canada, combined with high-quality multicomponent surface seismic data. The calculated depth distribution of events is in good agreement with hypocenter locations obtained independently using a nonlinear global-search method. Our results reveal that individual event clusters have distinct depth distributions that can provide important clues about the mechanisms of fault activation.

INTRODUCTION

With the recent growth in shale gas and tight oil development, hydraulic-fracturing well completions have become the norm in the exploitation of many unconventional reservoirs. In certain areas, it has been shown that a causal relationship may exist between hydraulic-fracturing operations and anomalous induced seismicity (e.g., Holland, 2013; Skoumal et al., 2015; Atkinson et al., 2016; Lei et al., 2017). To understand fault-activation processes and, in some areas, to comply with regulatory provisions, passive recording systems are deployed to monitor hydraulic-fracturing programs. Such passive-seismic monitoring systems are also widely used as a hydraulic-fracturing surveillance technology, for assessment of the stimulated reservoir region and to optimize completions design (e.g., Maxwell, 2014; Eaton, 2018).

In passive-seismic monitoring, focal depth is an important source parameter that can be used to distinguish induced events from natural earthquakes (Zhang et al., 2016). Accurate stratigraphic control on focal depth, such as determining whether induced events are located within the basement or more proximal to the reservoir, is also key for interpreting the depth extent of fault zones and/or the extent of the subsurface region affected by stress changes during stimulation (Bao and Eaton, 2016). Detailed interpretations may be hampered, however, by uncertainties in focal depth, especially if surface or near-surface sensor arrays are used (Eisner et al., 2009). With currently available methods, hypocenter locations (and thus focal depths) are dependent on the accuracy of the background velocity model, which may lead to inaccurate event locations, especially in the vicinity of strong velocity contrasts (Lomax et al., 2014). A background velocity model can often be fine-tuned and validated using calibration shots (sources with known locations; Maxwell et al., 2010), including perforation shots or sleeve-opening events recorded during a hydraulic-fracturing program (Eaton, 2018). Given the noisy environment associated with hydraulic-fracturing programs, calibration shots are sometimes challenging to detect using a surface or near-surface geophone array (Chambers et al., 2010). In the case of induced-seismicity data recorded using broadband regional seismograph networks, calibration shots are rarely discernible and focal-depth uncertainties are correspondingly greater (e.g., Schultz et al., 2017).

If colocated seismic control is available, such as a 3D seismic volume, the ability to accurately position event hypocenters within a seismic image offers a potentially powerful interpretive tool. Indeed, stratigraphic control on focal depth of induced events is, arguably, as important for purposes of interpretation as the absolute depth of events. (The adjective “induced” implies that an event is anthropogenic, i.e., that it was triggered by human activity. Although some authors limit the term “microseismic” to events with magnitude below zero (Eaton, 2018), in this paper, the terms “microseismic events” and “induced events” are used interchangeably. The term “passive-seismic data” is also used to describe both of these.) In this context, the two-way P-wave time, rather than the depth, represents a natural choice for the primary location parameter because recorded microseismic P- and S-wavefields and seismic reflections propagate through the same medium, so their arrival times are closely related. This concept is illustrated in Figure 1, in which the P-P and P-S reflection-seismic wavefields are generated by a surface source, whereas the source of passive seismic wavefields is a microseismic event due to, for example, shear slip on a fault or fracture. Furthermore, accurately positioning hypocenters within a time-domain seismic volume enables existing methods for time-depth conversion of seismic data to be leveraged effectively.

In this paper, we introduce a data-driven approach that does not require calibration shots, nor even the construction of an explicit velocity model for microseismic processing. Our method relies on joint interpretation of passive-seismic and 3D-3C seismic data, and it is therefore limited, in its application, to areas where both of these types of surveys are colocated. Nevertheless, our new method provides benefits that could warrant additional data acquisition and processing effort. In particular, it implicitly accounts for factors that are often ill constrained in most microseismic velocity models, such as seismic anisotropy of the medium, because these factors affect the microseismic arrival times and the reflection times in a similar way. After outlining the basic method and its theoretical underpinnings, we present a case study from western Canada that illustrates the application of this method in practice, as well as benefits that this approach can provide for interpreting the architecture of preexisting fault systems.

THEORY

In a microseismic survey, when a seismic event is recorded in a continuous data stream, it is generally identified by a compressional (qP) wave followed by a shear (qS) wavetrain (we use the prefix “q” to denote quasi-P- and quasi-S-waves, as a reminder of the potentially strong effects of seismic anisotropy for unconventional plays; e.g., Ong et al., 2016). The one-way arrival times of the direct qP and qS waves can be expressed as  
\[
\tau_P(x_S, x_R) = \tau_0 + \int_{L_P} \frac{dx}{V_P}
\]
(1)
and  
\[
\tau_S(x_S, x_R) = \tau_0 + \int_{L_S} \frac{dx}{V_S},
\]
(2)
where xS and xR denote the source and receiver locations, τ0 is the event origin time, x denotes the position along the raypath, VP and VS are the qP and qS velocities along the path, and LP and LS represent the respective raypaths from the source to the receiver. The origin time can be eliminated by taking the traveltime difference,  
\[
\tau_S - \tau_P = \frac{|L_S|}{\bar{V}_S} - \frac{|L_P|}{\bar{V}_P},
\]
(3)
where $\bar{V}_S$ and $\bar{V}_P$ are the average velocities along each path. Assuming flat or gently dipping geology, we can make the approximation that $|L_S| \approx |L_P| \approx |L|$. In this case, equation 3 can be written as  
\[
\tau_S - \tau_P = \frac{\bar{V}_P - \bar{V}_S}{\bar{V}_P \bar{V}_S}\,|L|.
\]
(4)
At the epicenter (i.e., the location at the surface, vertically above the hypocenter), this reduces to the zero-offset time difference  
\[
\Delta\tau_M(\Delta = 0, z) = \frac{\bar{V}_P - \bar{V}_S}{\bar{V}_P \bar{V}_S}\,z,
\]
(5)
where Δ denotes the offset (or epicentral distance, the distance measured at the surface from the epicenter to a receiver) and z is a parameter that specifies the focal depth of the event.
Then, we consider the two-way time for a P-wave reflection, which can be expressed as  
\[
t_{PP}(z) = \frac{2z}{\bar{V}_P} = \frac{2\bar{V}_S}{\bar{V}_P \bar{V}_S}\,z,
\]
(6)
where z denotes the depth of the interface. Similarly, the P-S reflection time can be expressed as  
\[
t_{PS}(z) = \frac{z}{\bar{V}_P} + \frac{z}{\bar{V}_S} = \frac{\bar{V}_P + \bar{V}_S}{\bar{V}_P \bar{V}_S}\,z.
\]
(7)
For an interface at depth z, we can therefore write the difference between tPS and tPP as  
\[
\Delta t_S \equiv t_{PS} - t_{PP} = \frac{\bar{V}_P - \bar{V}_S}{\bar{V}_P \bar{V}_S}\,z,
\]
(8)
which has the same form as the expression for ΔτM. Therefore, equations 5 and 8 can be regarded as parametric equations with respect to a depth parameter z. This relationship suggests a straightforward algorithm for estimation of focal depth based on microseismic observations, namely, by determining the depth at which ΔτM = ΔtS. Without loss of generality, it is convenient to cast this method in terms of two-way P-wave traveltime (focal time), rather than depth. This choice recognizes that, except in cases in which the velocity model is known with exceptional accuracy, the P-wave traveltime is better constrained than the depth, and it does not limit the applicability for joint interpretation with seismic data. The use of focal time, rather than depth, also allows for the subsequent use of existing methods for time-depth conversion that have been developed and honed for seismic-reflection data (Yilmaz, 2001).
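As an illustration of this parametric relationship, the following sketch (not part of the data-driven workflow itself, which deliberately avoids explicit velocities) assumes hypothetical average-velocity profiles to show how a zero-offset S-P delay maps to a unique focal depth and two-way P-P focal time; all names are illustrative:

```python
import numpy as np

def focal_time_from_sp_delay(dt_sp, z, vp_avg, vs_avg):
    """Map a zero-offset S-P time difference to focal depth and two-way
    P-P focal time using the parametric equations 5-8: both the
    microseismic delay (eq. 5) and the reflection-time difference
    (eq. 8) equal z * (VP - VS) / (VP * VS), and t_PP(z) = 2 z / VP.

    z      : depth samples (m), increasing from the datum
    vp_avg : average P velocity from datum to each depth (m/s)
    vs_avg : average S velocity from datum to each depth (m/s)
    """
    dt_curve = z * (vp_avg - vs_avg) / (vp_avg * vs_avg)  # eq. 5 / eq. 8
    t_pp = 2.0 * z / vp_avg                               # eq. 6
    # dt_curve increases monotonically with depth, so 1D interpolation
    # recovers the unique depth (and focal time) matching the delay
    return np.interp(dt_sp, dt_curve, z), np.interp(dt_sp, dt_curve, t_pp)
```

For constant VP = 4000 m/s and VS = 2000 m/s, an S-P delay of 0.5 s maps to a depth of 2000 m and a focal time of 1.0 s.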

METHOD

The focal-time method requires the following types of input data:

  1) The 3D multicomponent surface seismic data, processed into P-P and P-S migrated volumes with a common datum. The spatial extent of seismic control should encompass the entire region containing epicenters for all of the induced events of interest.

  2) Sonic logs (P- and S-wave) for computing synthetic seismograms to identify selected geologic horizons in the P-P and P-S seismic data. The geologic horizons should bracket the depth zone of interest.

  3) Source parameters and other information for induced events, including pick times for qP- and qS-wave arrivals on the radial component (which may be either fast or slow qS waves), event epicenter locations, and receiver coordinates. The focal-time method is primarily suited for use with passive-seismic data acquired using a surface or near-surface sensor array, rather than a downhole system (for a description of these different acquisition scenarios, see Maxwell, 2014).

The overall workflow is illustrated in Figure 2. The seismic-reflection and passive-seismic data are initially processed using parallel streams that merge at a point in the workflow in which the focal time is determined for each event. The first step in the reflection-seismic workflow is to correlate (register) equivalent marker reflections that are clearly discernible and can be accurately picked in the P-P and P-S seismic volumes. This correlation is used to build a lookup table for focal-time analysis, and it also provides independent information about VP and VS average velocities using the Garotta equation (Garotta and Grange, 1987):  
\[
\frac{V_P}{V_S} = 2\left(\frac{t_{PS}}{t_{PP}}\right) - 1.
\]
(9)
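Equation 9 reduces to a one-line computation per registered interface; a minimal helper (with hypothetical reflection times in the example) might look like:

```python
def vp_vs_ratio(t_ps, t_pp):
    """VP/VS from registered P-S and P-P two-way reflection times for
    the same interface, after Garotta and Grange (1987), equation 9."""
    return 2.0 * (t_ps / t_pp) - 1.0

# Example: a P-P pick at 2.0 s registered to a P-S pick at 3.0 s
# implies VP/VS = 2 * (3.0 / 2.0) - 1 = 2.0 over that interval.
```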
Although a crude correlation can be achieved by adjusting the plot time scale using an estimated VP/VS function derived from well logs, it is recommended to tie the P-P and P-S seismic data to sonic and density well logs using synthetic seismograms (Lawton and Howell, 1992). In the process of tying the seismic data to synthetic seismograms, distinct wavelets should be used, with bandwidth derived from the usable bandwidth of the P-P and P-S sections. It should be noted that P-P and P-S seismic processing requires an explicit velocity model for processing steps such as moveout correction and migration. Although this independently derived velocity model could be considered an implicit feature of our focal-time method, it is not explicitly used to locate event hypocenters. Furthermore, 3D seismic velocity models do not generally provide interval velocities (e.g., Cameron et al., 2007) and may not be directly applicable for computing hypocenter locations without the use of calibration shots.

Once equivalent horizons have been registered between the P-P and P-S volumes, the next step in the reflection-seismic workflow is to build a lookup table that relates the tPS − tPP time difference to the tPP time. A smooth interpolation method is desirable because the tPS and tPP times are derived from average velocities and will therefore each (as well as their difference) be smooth functions of depth, even in the presence of sharp velocity contrasts. A lookup table that is smooth and that honors control points can be achieved using spline interpolation between picked horizons. By picking horizons throughout the 3D data volumes, the lookup table varies with the x- and y-positions throughout the region of interest. Thus, this approach provides a data-driven means to incorporate the 3D subsurface heterogeneity in velocity, anisotropy, etc.
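As a sketch of this step, the following builds a smooth, control-point-honoring lookup for a single bin from hypothetical horizon picks and inverts it for focal time. The control-point values, the function names, and the assumption that the time difference increases monotonically with tPP are illustrative, not taken from the case study:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical control points for one (x, y) bin: two-way P-P horizon
# times (s) and the registered t_PS - t_PP differences (s). Real values
# would come from the picked P-P and P-S horizons.
t_pp_picks = np.array([0.4, 0.9, 1.4, 1.9, 2.3])
dt_s_picks = np.array([0.35, 0.80, 1.22, 1.61, 1.93])

# Smooth spline lookup that honors the control points (t_PP -> dt_S)
lookup = CubicSpline(t_pp_picks, dt_s_picks)

def focal_time(dt_m, n=2001):
    """Invert the lookup: find the t_PP at which dt_S equals the
    event's redatumed zero-offset S-P delay. Dense sampling plus 1D
    interpolation avoids an explicit root solve and is valid because
    dt_S increases monotonically with t_PP for these control points."""
    t_dense = np.linspace(t_pp_picks[0], t_pp_picks[-1], n)
    return float(np.interp(dt_m, lookup(t_dense), t_dense))
```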

Turning next to the microseismic workflow (Figure 2), the first step is to apply a relative datum shift to the P- and S-wave time picks. This step is necessary because the seismic-reflection datum is generally different from the vertical position of passive seismic sensors, which may be located at the surface or in the shallow subsurface. In our current implementation, the static correction applied to microseismic events assumes vertical raypaths in the near surface. A seismic datum is schematically illustrated in Figure 1. If the microseismic sensor is located below the surface, near-surface velocity models can also be inferred from near-vertical arrival velocities obtained using a shallow borehole array (Rodríguez-Pradilla and Eaton, 2019).

The next step in the microseismic workflow is to calculate ΔτM. We carry out this step using a simple t²-x² method, where x represents the epicentral distance (Δ) for each sensor. For a layered medium, the P- and S-wave moveout for induced events is approximately hyperbolic, so the use of a t²-x² plot results in a roughly linear relationship for the respective time picks. This linear relationship is used to estimate the Δ = 0 intercept and its associated uncertainty based on linear regression analysis. As elaborated below, the intercept uncertainty can generally be reduced significantly by excluding a small number of outliers in either the P- or S-wave time picks.
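The t²-x² intercept estimation can be sketched as follows; the hyperbolic moveout model t² = t0² + x²/v², the assumption that the picks are referenced to the event origin time, and all names are illustrative:

```python
import numpy as np

def zero_offset_time(t_picks, x_epi):
    """Zero-offset intercept time from a t^2 - x^2 fit. For roughly
    hyperbolic moveout, t^2 = t0^2 + x^2 / v^2, so regressing t^2
    against x^2 yields intercept t0^2 and slope 1/v^2.

    t_picks : P- (or S-) arrival times (s), assumed relative to the
              event origin for this sketch
    x_epi   : epicentral distances to each sensor (m)
    """
    slope, intercept = np.polyfit(np.asarray(x_epi) ** 2,
                                  np.asarray(t_picks) ** 2, 1)
    return np.sqrt(intercept), np.sqrt(1.0 / slope)
```

In the focal-time workflow only the difference between the S and P intercepts is used, so a common shift of both pick sets cancels.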

Then, the two separate streams are merged to obtain an estimate of the focal time for each event (Figure 2). In this step, the interpolated lookup table at the event epicenter is used to map the redatumed ΔτM values into the corresponding tPP values. As with any data-processing scheme, error analysis is a critical step in the procedure. We use 95% confidence limits on the intercept value, derived from linear regression, to estimate the corresponding uncertainty in focal time. The hypocenter depths share the velocity uncertainty of the 3D seismic time-depth conversion, but because the microseismic wavefields travel through the same stratigraphy as the reflected wavefields, events can be located reliably relative to the P-P horizons. Interpretation can therefore be carried out in P-P time, which eliminates the depth uncertainty inherited from the 3D seismic data. In principle, additional uncertainty arising from the various time picks and from the approach used to construct the lookup table could also be considered, but in practice we have found that these other sources of error tend to be small compared with the uncertainty arising from scatter in the microseismic time picks.
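Because the lookup is monotonic, propagating the regression confidence limits through it reduces to evaluating its inverse at the interval endpoints. A hedged sketch, with `lookup_inverse` standing in (as a hypothetical callable) for the interpolated lookup table evaluated at the event epicenter:

```python
def focal_time_with_bounds(dt_m, dt_err, lookup_inverse):
    """Propagate the 95% confidence half-width of the zero-offset S-P
    delay through the lookup table to bound the focal time. Valid
    because the lookup is monotonic, so interval endpoints map to
    interval endpoints. `lookup_inverse` is a hypothetical callable
    mapping a dt_S value to the corresponding t_PP at the epicenter."""
    return (lookup_inverse(dt_m),
            lookup_inverse(dt_m - dt_err),
            lookup_inverse(dt_m + dt_err))
```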

CASE STUDY

To illustrate some of the key aspects and challenges of the procedure, we have applied the focal-time method to a data set from western Canada. The microseismic data are from the Tony Creek dual Microseismic Experiment (ToC2ME), a research-focused field program acquired by the University of Calgary in Fall 2016 (Eaton et al., 2018) using a diverse range of sensors, including a near-surface configuration with 68 shallow-wellbore arrays (Figure 3). This system was used to monitor a four-well hydraulic-fracturing program within the Duvernay resource play of Alberta, Canada, located in an area where anomalous seismicity induced by hydraulic fracturing has been documented (Atkinson et al., 2016; Bao and Eaton, 2016; Schultz et al., 2017). Processing of the ToC2ME data set yielded more than 14,000 event detections (Eaton et al., 2018), of which 13,016 are used in the analysis presented here. Events were selected for analysis based on the criteria that at least 10 high-quality (i.e., with signal-to-noise ratio [S/N] > 3.0) P-wave picks and 10 high-quality S-wave picks were available. In Figure 4, raw 3C waveforms from one station are plotted for four representative events, ranging from a high-S/N event with moment magnitude MW 1.0 to a low-S/N event (MW 0.2). As described by Eaton et al. (2018), epicenters are assumed to coincide with the location at the surface where the S-wave arrival has a minimum traveltime. The traveltime is interpolated between stations to provide finer spatial resolution. One of the challenges with this data set is that perforation shots, which are often used for calibration of velocity models for microseismic processing (Maxwell, 2014), failed to yield accurate S-wave time picks (Eaton et al., 2018). In the absence of such calibration, depth uncertainties may be unusually large when conventional traveltime-based methods are used to determine hypocenters.

With the increasing adoption of 3C acquisition technology in western Canada (e.g., Weir et al., 2018, 2019), it is no longer rare for P-P and P-S data sets to be colocated with a microseismic surface acquisition. This creates opportunities, at least in this region, to leverage P-S data for application of the focal-time method. For this study, a colocated 3D multicomponent seismic survey was provided by TGS Canada Corp., covering most of the area of the ToC2ME field deployment (Figure 3b). The 3D data were acquired in January 2015 and processed using amplitude-variation-with-offset-compliant workflows for the P-P and P-S data sets, as described by Weir et al. (2018). Seismic horizons were tied to well control with a synthetic seismogram that was computed using a statistical wavelet extracted from the data over the zone of interest (Weir et al., 2018). Reflections observed in the P-S data volume were then registered to the P-P data, as illustrated in Figure 4. The P-S and P-P data were compared to refine the correlations and to tie selected events from the P-S data volume to the P-P data volume. Once the correlations were established, the horizons were picked throughout both of the data volumes. As an example, Figure 5 shows the calculated time difference between the P-S time pick and the P-P time pick for the top of the Swan Hills (the Devonian Beaverhill Lake Group member), a prominent seismic horizon that directly underlies the reservoir unit (the Duvernay Formation) in our study area.

The 2D time-difference grids, such as the one illustrated in Figure 6, were then combined and interpolated in two-way time to create a 3D lookup table relating tS − tP times to tPP times throughout the full data volume. Figure 7 shows an example of a lookup function for one bin from the 3D data volume, as well as a time-depth function derived from the well-log data and well tie. As noted above, the near-surface weathering model plays a significant role in seismic processing because the near-surface layer has the slowest velocities in the geologic column, and small errors in the estimated velocity of this layer can create large time errors in the data. During processing of the P-P and P-S reflection-seismic data, a processing datum was chosen above the highest elevation point in the survey. To adjust the microseismic traveltimes to the same datum, we applied static corrections to the P- and S-waves that were the same as the receiver statics from the 3D multicomponent seismic processing. However, because the microseismic geophones were buried to 27 m (Figure 3), an additional correction was applied using the P-wave near-surface velocity from the 3D seismic refraction analysis and assuming vertical raypaths within that interval. At each station, the S-wave near-surface velocity was determined from S-wave uphole times recorded using the geophone array.
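The additional vertical-raypath correction for a buried geophone amounts to a one-line elevation static. A sketch, in which the sign convention and all names are illustrative assumptions rather than the processing-flow convention used in the study:

```python
def buried_geophone_static(datum_elev, surface_elev, sensor_depth, v_near):
    """One-way vertical-raypath static (s) to shift an arrival recorded
    by a buried geophone to the seismic processing datum. The sign
    convention (positive when the datum lies above the sensor) is an
    illustrative assumption; v_near is the near-surface velocity over
    the sensor-to-datum interval.

    datum_elev, surface_elev : elevations (m)
    sensor_depth             : burial depth below surface (m)
    v_near                   : near-surface velocity (m/s)
    """
    sensor_elev = surface_elev - sensor_depth
    return (datum_elev - sensor_elev) / v_near
```

For a datum at 950 m, ground surface at 900 m, a geophone buried at 27 m, and a near-surface velocity of 1000 m/s, the shift is (950 − 873)/1000 = 0.077 s.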

A linear regression in t²-x² space was carried out to determine the intercept times, where x denotes the offset distance from the event epicenter to the recording stations. Inspection of the picked times showed that the hyperbolic approximation for P- and S-wave moveout breaks down for offsets greater than 3500 m. Consequently, prior to regression analysis, time picks were discarded from stations at epicentral distances greater than 3500 m. Figure 5b shows an example of a linear regression for zero-offset determination, in which a random sample consensus (RANSAC) algorithm (Meer et al., 1991) was used to discard outliers in the data. Because the qP and qS time picks are the largest source of uncertainty, error analysis was focused on the linear regression. The 95% confidence limits for the qP and qS intercept errors were estimated based on the RANSAC analysis (see Figure 5). The errors were then propagated to tPP time and depth, as shown in Figure 7. The mean error for the data set (13,105 events) was 36 ms, corresponding to an uncertainty in tPP of approximately 25 ms. As a final step, computed focal times were converted to depth based on the synthetic well tie, resulting in an average depth uncertainty of approximately 88 m. Although this level of uncertainty is seemingly greater than the generalized depth uncertainties of tens of meters that have been reported for near-surface acquisition geometries (Eisner et al., 2009), it should be noted that depth uncertainties reported in microseismic processing are often confined to aleatory uncertainty (i.e., analytical values resulting from the estimated error distribution for arrival-time picks) and do not incorporate estimates of epistemic uncertainty (i.e., resulting from uncertainty in the velocity model). These types of uncertainty are discussed in detail by Eaton (2018).
Simply put, for most hypocenter location methods, the absolute focal-depth uncertainty is primarily sensitive to the accuracy of the velocity model such that even small adjustments to the velocity model may alter the estimated focal depths to a significantly larger degree than implied by the analytical (aleatory) uncertainty derived from time picks. On the other hand, at least in terms of the stratigraphic level of events inferred by corendering event locations with 3D seismic data, the focal-time method is primarily dependent only on the quality of the P and S time picks. This aspect of depth uncertainty is considered in more detail below.
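A minimal RANSAC-style intercept fit in t²-x² space might look like the following; this is a generic consensus-fitting sketch (not the Meer et al., 1991 implementation used in the study), with illustrative names and tolerances:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_intercept(x2, t2, n_iter=200, tol=0.05):
    """Minimal RANSAC line fit in t^2 - x^2 space: repeatedly fit an
    exact line through a random pair of points, keep the consensus set
    of picks within `tol` (in t^2 units) of that line, and refit by
    least squares to the largest inlier set. Returns the zero-offset
    time t0 = sqrt(intercept) and the inlier mask."""
    best = np.zeros(len(x2), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(x2), size=2, replace=False)
        if x2[i] == x2[j]:
            continue
        slope = (t2[j] - t2[i]) / (x2[j] - x2[i])
        icept = t2[i] - slope * x2[i]
        inliers = np.abs(t2 - (slope * x2 + icept)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    slope, icept = np.polyfit(x2[best], t2[best], 1)
    return np.sqrt(icept), best
```

The 95% confidence limits on the intercept would then come from standard regression statistics applied to the final inlier set.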

The method outlined above was applied to 13,105 events from the ToC2ME program. The resulting P-P focal depths are displayed in Figure 8a, along with the wellbore path of one of the treatment wells and seismic horizon surfaces for correlatable interfaces, including the top of the Ireton Formation, the top of the Swan Hills member of the Beaverhill Lake Group, and the Gilwood member of the Elk Point Group (Figure 3d). The Swan Hills and Ireton markers bracket the Duvernay zone in which the injection occurred. The calculated hypocenters span a stratigraphic interval from the Ireton Formation down to the Gilwood, with most events located at or above the treatment well (Figure 7a). As noted by Eaton et al. (2018), due to the moderate magnitudes considered, the event catalog used here does not include the preponderance of weaker, operationally induced microseismicity that is expected to be concentrated close to the depth of the treatment well.

To benchmark the focal-time results using a robust independent method, we also determined hypocenter locations using NonLinLoc, a probabilistic, global-search nonlinear location package (Lomax et al., 2014; Lomax, 2018). Using a systematic grid-search algorithm, NonLinLoc generates a maximum-likelihood hypocenter location based on the estimated posterior probability density function for each event. NonLinLoc was applied using a 1D velocity model, which was derived by smoothing well-log sonic slowness curves, as illustrated in Figure 9. Focal-depth distributions obtained using NonLinLoc and our new focal-time method are compared in Figure 10. The focal-time depth distribution is concentrated in a range from 3200 to 3600 m, bracketing the treatment depth of approximately 3450 m, whereas in the depth distribution obtained using NonLinLoc, the induced events appear to be concentrated mainly above the treatment well, at approximately 3300 m. We note that this unusual tendency for induced seismicity to be located above the treatment depth agrees with another nearby data set from the Duvernay resource play (Eyre et al., 2019) that made use of a similar passive-monitoring acquisition geometry but, unlike our current study, was calibrated using high-S/N perforation shots.

It should be emphasized that the focal-time depths were obtained using a data-driven approach and thus do not use an explicit velocity model for event location, whereas the NonLinLoc depth calculations rely on a 1D velocity model that was created using sonic logs from the nearest available well with P- and S-wave sonic data. The velocity model used with NonLinLoc was not calibrated using perforation shots due to the poor quality of the associated S-wave time picks; hence, some mismatch in depth between the two methods may be expected, as shown in Figure 10. Moreover, it is important to realize that the relatively narrow depth uncertainty expressed by the focal-depth distribution for NonLinLoc (Figure 10b) incorporates neither lateral heterogeneity in the velocity model nor model uncertainty.

Event epicenters used for the focal-time analysis are shown in Figure 11. As discussed by Eaton et al. (2018), the epicentral distribution (derived independently from the focal-time analysis) exhibits clear spatial and temporal clustering in relation to the treatment stage location and time. This provides valuable insights about the underlying geometry of faults and fracture systems near the treatment wells. Based on a spectral fitting method applied to the broadband seismograph stations, Eaton et al. (2018) estimate the magnitude of completeness to be approximately moment magnitude MW 0.5. Although events are still detected below this level, the catalog is not considered to be complete for weaker events. Thus, it is important to note that the events considered in this study primarily constitute anomalous induced sequences and do not include the vast majority of microseismic events that occurred within the Duvernay Formation during hydraulic fracturing treatment, which are typically expected to have magnitudes that are generally less than zero (Eaton, 2018). A conspicuous north–south linear trend west of the treatment wells can be linked to strike-slip activation of a preexisting fault (Zhang et al., 2019). The largest event in this cluster was characterized by magnitude MW 3.2, with several other events of magnitude greater than MW 2.5 (Igonin et al., 2018). Other clusters of events are interpreted to represent activation of systems of fractures within a regional, north–south-trending flower structure (Eaton et al., 2018), including highly fractured zones localized within the damage zones around existing faults (Eaton, 2018).

To test whether spatial clustering of event epicenters is also manifested in the depth distributions, Figure 11 shows histograms of the depth distribution for three selected clusters. The north–south linear event cluster on the west side of the treatment program is marked by the blue dots. The calculated depths for this cluster, which has been interpreted as sequential strike-slip activation of a preexisting fault (Eaton et al., 2018), place the fault nucleation zone approximately 150 m above the treatment level (approximately 3450 m). On the other hand, focal depths for a cluster of events marked by the green dots bracket the injection zone. Epicenters for events in this cluster appear to be aligned along distinct lineations that strike approximately N30°E, compared with the approximately N45°E regional direction of the maximum horizontal stress (SHmax) in this area (Reiter et al., 2014). The aforementioned magnitudes (generally above zero) and the obliquity of these lineations with respect to SHmax, the expected orientation for hydraulic-fracture growth (Eaton, 2018), suggest that they may represent activation of a fracture system rather than conventional operational microseismicity during hydraulic fracturing. Finally, a cluster of events marked by the red dots in Figure 11, located east of the treatment program, forms two approximately north–south lineations that appear to straddle a seismically defined fault that crosscuts the Duvernay Formation (Eaton et al., 2018). Although this cluster is approximately 300–400 m east of the nearest treatment well, the focal depths of these events lie close to the depth of the hydraulic-fracturing fluid injections. Taken together, these observations demonstrate that the focal-time algorithm yields coherent, cluster-specific depth estimates.

DISCUSSION

The inferred nucleation of strike-slip fault activation above the treatment zone is somewhat surprising, considering that most fluid-induced earthquakes observed elsewhere are found to nucleate within the underlying crystalline basement (e.g., Ellsworth, 2013; Skoumal et al., 2015; Walsh and Zoback, 2016). Based on seismic interpretation (Weir et al., 2018) and well logs used for developing the velocity model (Figure 9), this depth interval places the events within calcareous rocks near the top of the Ireton Formation, close to the interface with the overlying Wabamun Formation (limestone). We speculate that, based on the generally higher mechanical strength of carbonates compared with clastic rocks (Hart and Wang, 1995) and shale rocks (Rybacki et al., 2015), the nucleation zone of these events could be considered mechanically similar to crystalline basement rocks. It is important to note that, although the focal-time method eliminates some sources of uncertainty, the uncertainty in estimated focal depths is nevertheless an important consideration. For example, the average 95% uncertainty for focal depth in this data set is approximately 89 m, which compares with an approximately 50 m thickness of the Duvernay Formation. As outlined above, this uncertainty in focal depth derives from the uncertainty in the intercept times from regression analysis, which explains much of the observed scatter in depth distributions.

The NonLinLoc hypocenters used for benchmarking our results are impacted by the accuracy of the velocity model, which in this case was not constrained by calibration shots due to the noisy character of the S-wave arrivals. For simplicity, a 1D velocity model derived from well-log data was used for this purpose; however, this velocity model neglects heterogeneity arising from faults, damage zones, reef edges, and a regional dip of several degrees (Weir et al., 2018). Indeed, one of the significant advantages of the focal-time method is that it is entirely data driven and no velocity model is required. Hence, the likely presence of 3D velocity heterogeneity is implicitly incorporated into the solutions based on various assumptions, including that the kinematics of P-P and P-S reflections reflect the same velocity model as the microseismic events, and that the epicenter locations and statics corrections are sufficiently well-determined. The same considerations apply to seismic anisotropy, which may be poorly constrained in the case of velocity models calibrated using relatively sparse calibration-shot information.

Several studies have used double-difference and/or waveform crosscorrelation methods to obtain remarkably precise relative hypocenter locations for induced seismic events (Skoumal et al., 2015; Bao and Eaton, 2016; Schoenball and Ellsworth, 2017). By design, the double-difference algorithm reduces the impact of velocity-model uncertainty through the use of groups of events with similar source-receiver raypaths (Waldhauser and Ellsworth, 2000). Although this approach often yields much smaller uncertainties than our method, in terms of relative hypocenter locations (including depth), it does not reduce the uncertainty in the absolute location unless calibration events with known locations are available (Eaton, 2018). Furthermore, unlike our focal-time method, the double-difference method does not leverage independent constraints provided by the correlation of P-P and P-S horizons. Thus, in terms of location uncertainty, the advantages and disadvantages of the double-difference method are entirely different from the focal-time method.

Finally, our experience indicates that uncertainties arising from statics corrections applied to the microseismic observations may be problematic. If a near-surface VP and VS model is available, it can be used to compute the necessary one-way time shifts. In some cases, however, sufficient detail to construct a near-surface velocity model is not available. For example, the method of Cary and Eaton (1993) for computing large converted-wave statics uses correlative P-wave reflections to estimate the long-wavelength component of the statics to be applied to the P-S data. In such cases, we recommend a data-driven (rather than model-based) approach based on common-receiver gathers with only shot statics applied. Using this approach, marker reflections identified in stacked common-receiver gathers can be correlated with those in the seismic volume to infer the static shift from the receiver to the seismic datum. As discussed by Rodríguez-Pradilla and Eaton (2019), final datum corrections for shallow buried geophone arrays can be adjusted using observed S-wave uphole times.
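The data-driven receiver-statics approach described above amounts to estimating, for each receiver, the time shift that aligns a marker reflection in its stacked common-receiver gather with the same reflection in the 3D seismic volume. A minimal sketch of one way to estimate such a shift, using simple cross-correlation on synthetic traces (all trace definitions and numbers here are hypothetical illustrations, not data or code from this study):

```python
import numpy as np

def receiver_static(stack_trace, reference_trace, dt):
    """Estimate a receiver static as the lag (in seconds) that best aligns
    a common-receiver stack trace with the corresponding reference trace
    from the 3D seismic volume."""
    # Full cross-correlation; the peak lag gives the relative time shift.
    xcorr = np.correlate(stack_trace, reference_trace, mode="full")
    lag = int(np.argmax(xcorr)) - (len(reference_trace) - 1)
    return lag * dt

# Hypothetical example: a marker reflection and a copy delayed by 8 ms.
dt = 0.002                                   # 2 ms sampling interval
t = np.arange(200) * dt
ref = np.exp(-((t - 0.2) / 0.02) ** 2)       # Gaussian "marker reflection"
shifted = np.roll(ref, 4)                    # delayed by 4 samples = 8 ms
print(receiver_static(shifted, ref, dt))     # recovers ~0.008 s shift
```

In practice, the correlation would be restricted to a window around the marker reflections and checked against neighboring receivers for consistency, but the peak-lag principle is the same.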

CONCLUSION

This paper outlines the theory and method for a novel approach to obtain direct focal-time estimates for seismic events, including anomalous induced events. The utility of the method is illustrated by a case study in the Duvernay play of central Alberta, Canada. The method requires colocated, multicomponent surface seismic data so that P-P and P-S reflection times can be correlated. These correlated reflection times are used to construct a lookup table for the difference between the S- and P-arrival times of induced events, extrapolated to zero offset. Care is required to ensure that the microseismic time picks are referenced to the same processing datum as the multicomponent seismic data. Once the focal times of events have been calculated, the stratigraphic levels of the hypocenters can be readily determined through joint interpretation with seismic data. Focal depths can be estimated using existing methods for time-to-depth conversion, such as from well ties to seismic data.
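To make the lookup-table construction concrete: at zero offset, the two-way P-P time of a horizon is 2tP and its P-S time is tP + tS, so an event located at that horizon has an S-P delay of tS - tP, which equals the horizon's P-S time minus its P-P time. Each pair of correlated horizons therefore yields one (S-P delay, focal time) control point. The following minimal sketch, with hypothetical horizon times and simple linear interpolation, illustrates the idea; it is not the processing code used in this study:

```python
import numpy as np

# Hypothetical registered horizon times (s) from colocated multicomponent
# data: two-way P-P time and P-S time for the same correlated reflectors.
t_pp = np.array([0.40, 0.80, 1.20, 1.60, 2.00])   # P-P two-way times
t_ps = np.array([0.65, 1.30, 1.95, 2.60, 3.25])   # P-S times, same horizons

# Zero-offset kinematics: for an event at a horizon's depth,
#   tS - tP = (tP + tS) - 2*tP = T_PS - T_PP,
# so each horizon pair yields one (S-P delay, focal time) control point.
dt_sp = t_ps - t_pp

def focal_time(sp_delay):
    """Map an observed zero-offset S-P delay to focal time (P-P time)
    by linear interpolation between horizon control points."""
    return np.interp(sp_delay, dt_sp, t_pp)

# Example: an event with a 0.50 s S-P delay lies at the second horizon,
# giving a focal time of 0.80 s (directly comparable with the P-P volume).
print(focal_time(0.50))
```

Because the focal time is expressed as a two-way P-P time, the result can be posted directly onto the migrated P-P volume for joint stratigraphic interpretation.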

Our method has several distinct advantages over traditional approaches to focal-depth estimation, which ultimately rely on the accuracy of the velocity model. Because the same seismic velocities govern wave propagation for seismic reflections and for waves radiated by induced events, various poorly constrained parameters, such as anisotropy, are handled implicitly. The use of focal-time analysis results in relatively precise stratigraphic depth control for induced events, which is of considerable importance for interpretation of the results. Our results suggest that there may be considerable value to the industry in acquiring P-P and P-S seismic surveys more routinely and using them in joint processing of hydraulic-fracturing data sets.

ACKNOWLEDGMENTS

Sponsors of the Microseismic Industry Consortium are sincerely thanked for their support of this initiative. This work was supported by funding to DWE for the NSERC/Chevron Industrial Research Chair in Microseismic System Dynamics. We thank TGS Canada Corp. for providing multicomponent 3D seismic data for this study. We also thank two anonymous companies for providing access to the shallow buried array used for the ToC2ME field program, as well as Nanometrics and Terra-Sine for installation of the monitoring equipment used for the study. The BuriedArray acquisition method employed for ToC2ME was used by the University of Calgary under license from Microseismic Inc.

DATA AND MATERIALS AVAILABILITY

The data used in this paper will be released in July 2020. To access these data, please e-mail the corresponding author.

Freely available online through the SEG open-access option.