A priori assessment of the expected location accuracy of a sensor network is typically done through inversion of the traveltime spatial gradients. This approach assumes that the applied location algorithm successfully recovers the global minimum of the objective function. However, even for accurate and precise phase picks, complexity in the velocity model and limitations in the network layout may inhibit the finding of a global minimum. The location algorithms may end up in a local minimum instead. We compare the location performance of various objective functions and minimization approaches. Although most of the analyzed location approaches lead to good location results, none of them recovered the correct location for all event locations. This implies that (microseismic) event location estimates include an additional inherent error, which is linked to the applied location approach. This aspect is often neglected when interpreting event locations. Our site comprises a sensor network of two vertical strings, in a 1D velocity model of the Groningen gas field (The Netherlands), complicated by two thin, very high-velocity layers. For a series of synthetic event locations, we calculate arrival times, add picking errors, and then feed these synthetic picks into a set of different location routines. We also present a novel way to quantify this approach-dependent location error of the sensor network for the given velocity model and a set of event locations: We compare the distances between a set of assumed event locations and the resulting locations in the target region of the subsurface. From the cumulative distribution function of that mislocation distance, we determine the 1σ and 2σ confidence distances for each method. This results in scalar values representing the location confidence distances for a given method. In turn, these can be used to easily compare the location capabilities of different sensor layouts.


The location of seismic events is the first step toward understanding the causal processes of a rupture. In the case of induced seismicity, the exact depth of an event is of special interest because a location within the cap-rock may indicate possible leakage. Events at the reservoir level affect the drainage capability of the reservoir. Moreover, events below the reservoir may lead to a breakthrough of the underlying aquifers. Therefore, operators and regulators need to understand that there is always uncertainty in the depth estimates of a reported event location.

In contrast to migration-style location schemes (e.g., Gharti et al., 2010), most phase-picking event location schemes are based on the approach of Geiger (1910), who proposed minimizing residuals between observed and theoretical phase arrival times. He applied matrix inversion to solve a linearized set of equations to efficiently find the minimum of an objective function, which is based on the minimum difference in absolute arrival times |t_observed − t_theoretical|. More generally, an objective function should include a set of observed data, which is compared with predictions based on theoretical models. The typically available data derived from recorded waveforms are phase arrival times and angles (back-azimuths and dip). Observed traveltimes are typically obtained through manual picks (e.g., Phillips, 2000) or automated schemes on the waveforms (for a review, see Akram and Eaton, 2016). The incidence angles can, e.g., be determined through principal component analysis of the hodograms or from stacking-based methods.

Uncertainty in event location can be attributed to various causes. Foremost, there is uncertainty in picking and identifying the arrival time in the waveforms at the sensors. This typically relates in a nonlinear fashion to the signal-to-noise ratio and the frequency content. The sensor geometry also plays a crucial role (e.g., Eisner et al., 2010). Furthermore, the available sensor network geometry in microseismic projects often requires the addition of back azimuth as a constraint to derive well-constrained locations. These back-azimuth estimates, obtained from hodogram analysis (e.g., Eisner et al., 2010), rely on a correction for sensor alignment in the borehole (Oye and Ellsworth, 2005; van Dok et al., 2016). Another significant source of uncertainty in the event location is introduced by necessary assumptions in the velocity model (e.g., Blias and Grechka, 2013; Poliannikov et al., 2013; Usher et al., 2013; Angus et al., 2014): Every velocity model has intrinsic uncertainty, but can the subsurface reliably be approximated by a 1D velocity model, or is a 3D model required? What is an acceptable grid spacing of the lookup tables of traveltime, and how many interfaces (i.e., layers) are appropriate? Is anisotropy required (e.g., Wuestefeld et al., 2011), and to what degree (P- and S-wave anisotropy; hexagonal or orthorhombic)? Note also that additional model uncertainty is introduced by the choice of method to calculate the traveltimes. Typically, these are calculated either through eikonal solvers (e.g., Podvin and Lecomte, 1991), ray tracing (e.g., Červený, 2001), or wavefront construction (Vinje et al., 1993). Uncertainty in event location is then introduced through the choice of grid spacing and smoothing (for eikonal solvers), or through exaggerated shadow zones caused by the inherent high-frequency approximation underlying ray tracing and wavefront construction.

Hence, this study focuses solely on the algorithmic aspects of the event location procedure. We analyze only the combined effect of the objective function and the minimization algorithm applied to find its minimum. Thus, we combine the approaches of Blias and Grechka (2013) and Zimmer and Jin (2011). We choose a realistic, isotropic 1D velocity model, complicated by two thin, very high-velocity layers. We choose a 1D model to demonstrate a simplified case and to avoid additional error sources introduced by traveltime table generation in 3D models (e.g., due to spatial smoothing). The radial symmetry of the 1D model allows us to generate 2D traveltime tables and locate events in 3D. We consider various objective functions, calculated from these traveltimes. Uniformly distributed traveltime noise is added on top of the synthetic phase picks.

Although the above aspects of location uncertainty are interesting in their own right, here we have full control over the considered sources of uncertainty. This allows us to evaluate the optimization approaches. A (microseismic) data set is often analyzed by different teams, e.g., universities or service companies. This often results in discrepancies in event location (Grandi et al., 2011). Leaving all other influences constant, we show the effect of different minimization methods and objective functions on event location. In addition, we compare the event locations with locations found with the widely used NonLinLoc software (Lomax et al., 2000) to have an external control. As discussed above, the choice of traveltime calculation method introduces additional variability to the event location, a fact that is often disregarded. However, because here we use the same method to determine theoretical picks as in the actual location algorithm, this variability does not affect our analysis. We also assume that all phases have been observed and correctly identified.

For the sensor setup, we consider a down-hole monitoring system, which was in place at the Groningen gas field and we use the most relevant 1D velocity model of the region. This sensor setup consists of two nearly vertical geophone strings of 10 and 7 sensors, respectively. This is a rather sparse, but realistic, setup in microseismic monitoring of hydraulic fracturing. Furthermore, we analyze the benefits of including head waves to constrain the event depth. Zimmer and Jin (2011) and Coffin et al. (2012) discuss in detail the effects of including head waves when locating microseismic events. They conclude that adding head wave phase picks improves the location accuracy because the apparent sensor aperture is increased. However, this advantage is often significantly reduced because head waves are very sensitive to the velocity in the refracting layer (which is often poorly calibrated). Furthermore, the signal-to-noise ratio for head waves is lower compared with direct waves, which might affect the accuracy of the arrival-time pick.

Concretely, for each analyzed hypothetical event location and the associated traveltimes and back azimuths to the sensors, we first use a full-grid calculation of the objective functions and analyze their spatial structure, including the local and global minima. Second, we investigate how the different minimization algorithms perform in finding the objective function’s global minimum. Note that the artificial noise added to the time picks (and back azimuths) can alter the objective function such that the minimum occurs at a location slightly different from the true, synthetic event location.

The motivation of this study is threefold: First, we provide a template of how to benchmark location algorithms against each other. By keeping all other parameters fixed, it is possible to evaluate the performance of a given location algorithm objectively. In addition, this approach can be used to objectively adjust algorithm parameters to the target zone for any velocity model. Second, our study highlights the differences in resulting event locations obtained through various optimization algorithms. The scatter of the resulting locations around the correct (known) location is perhaps surprisingly large. In theory, given the simple velocity model and accurate phase picks, all analyzed optimization algorithms should lead to the same minimum location. We show that there is no single best approach; instead, careful synthetic analysis is needed prior to processing the real data to evaluate the quality of the results. Third, we introduce a novel parameter, the confidence length, to objectively evaluate the performance of algorithms. This can in turn also be used to evaluate the suitability of a sensor network for a given task. Finally, we present our approach of how to combine traveltime picks and back azimuths for event location.
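To make the third point concrete, the confidence length can be computed from the empirical cumulative distribution of the mislocation distances. The following sketch (function and variable names are our illustrative choices, not part of any published software) returns the distances below which 68.3% and 95.4% of the tested events fall:

```python
import numpy as np

def confidence_distances(mislocation, levels=(0.683, 0.954)):
    """Read the 1-sigma and 2-sigma confidence distances off the empirical
    cumulative distribution function of the mislocation distances (in meters,
    true location to estimated location, one value per tested event)."""
    d = np.sort(np.asarray(mislocation, dtype=float))
    return tuple(float(np.quantile(d, q)) for q in levels)
```

A single pair of scalars per method then allows a direct comparison of location algorithms, or of candidate sensor layouts.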


The global minimum of the objective function determines the event location estimate. In its simplest form for location, it is a function of the traveltime residuals (i.e., the difference between theoretical and observed phase arrival times) for each sensor’s phase pick. The objective function's scalar value is typically the least-squares fit (LSQ), sometimes also referred to as the root-mean-square or L2-norm, of all measurement residuals. Sometimes, other norms, such as the L1-norm, are applied to reduce the influence of outliers (e.g., Schweitzer, 2001; Shearer, 2009).
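A toy example (with hypothetical residual values) illustrates why the L1-norm is more robust: a single outlier pick dominates the squared misfit far more than the absolute misfit:

```python
import numpy as np

# Hypothetical traveltime residuals in seconds; the last pick is an outlier:
residuals = np.array([0.001, -0.002, 0.001, 0.050])

l2 = np.sqrt(np.mean(residuals ** 2))   # RMS (L2-type) misfit
l1 = np.mean(np.abs(residuals))         # L1-type misfit

# Fraction of the total misfit contributed by the outlier under each norm:
share_l2 = residuals[-1] ** 2 / np.sum(residuals ** 2)        # ~99.8%
share_l1 = np.abs(residuals[-1]) / np.sum(np.abs(residuals))  # ~92.6%
```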

Because this study is intended for (downhole) microseismic monitoring settings, we include the back azimuth in the objective function. This additional information derived from the recorded waveforms can greatly improve event locations — especially in cases in which the sensor network has poor lateral spread, for example, in borehole installations. Note that in laterally heterogeneous media, or in the presence of anisotropy, the theoretical back azimuth can deviate significantly from the geometrical “straight-line” direction. This can be addressed by instead considering the (theoretical) back-azimuths extracted from ray tracers capable of handling such model complexity. To reconcile the measurements of different units (traveltime [s] and back azimuth [°]) and to take the uncertainty in each measurement into account, we follow the approach of Schweitzer (2001) and normalize the residuals by the expected measurement standard deviations, σt and σβ, respectively. Note that these errors can vary between stations and phase picks. The LSQ objective function for location with back-azimuth included is
where t̄ denotes the mean traveltime residual, which is introduced to remove the absolute traveltime offset. We use a simplified summing notation here: The sums are over all available phase picks from all sensors. Note that we introduced a weighting factor ω to further adjust the influence of either contribution. Relative weighting between the stations is achieved through the expected measurement standard deviations, σt and σβ.
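In code, this LSQ objective can be sketched as follows (a sketch only: scalar σt and σβ are assumed for brevity, although, as noted above, they can vary between stations and picks):

```python
import numpy as np

def lsq_misfit(t_obs, t_theo, b_obs, b_theo, sigma_t, sigma_b, omega=1.0):
    """LSQ objective with back azimuth: normalized, mean-removed traveltime
    residuals plus omega-weighted normalized back-azimuth residuals."""
    dt = t_obs - t_theo
    dt = dt - dt.mean()     # subtract the mean: removes the absolute traveltime offset
    misfit_t = np.sum((dt / sigma_t) ** 2)
    db = b_obs - b_theo     # angle periodicity must be handled (see the atan2 note below)
    misfit_b = omega * np.sum((db / sigma_b) ** 2)
    return misfit_t + misfit_b
```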
Furthermore, we test a heuristic alteration to equation 1. This approach, here called 1Plus, takes into account that a small angle uncertainty will, at large distances, nonetheless result in large location uncertainties. Thus, we scale the back-azimuth misfit with the traveltime misfit. Note that this specific approach might not work for all sensor setups:
In Appendix A, we present a comparison of the misfit maps of the different functions. Although equations 1 and 2 represent LSQ-type functions, the second type of objective function that we consider is very robust in the presence of outliers (equation 3). The equal differential time (EDT) formulation (Zhou, 1994) is implemented in the widely used NonLinLoc package (Lomax et al., 2000). Here, the differences between the observed traveltimes and the differences between theoretical traveltimes are calculated for all possible combinations of sensor pairs. The motivation for this approach is that the traveltimes are nonlinear with respect to variations in hypocentral distances (Zhou, 1994):
The misfit is then based on the difference between the observed and the theoretical differences for each phase pick or back-azimuth estimate pair. This expression is zero at the real hypocenter, where the two differences are equal, thus the name equal differential time. Again, we use a simplified summing notation. The symbol Σa,b indicates a sum over all N combinations of stations for identical phases. The back-azimuths are included in this expression as the weighted LSQ values. Compared with the simple differences for individual stations, this expression implies significantly more calculations. In our case with 17 stations, the number of combinations is given by the binomial coefficient as N=136, compared with N=17 calculations in the standard LSQ approach.
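The traveltime part of the EDT misfit can be sketched as follows (traveltime differences only; the back-azimuth term enters as the weighted LSQ contribution described above). Note that the unknown origin time cancels automatically in the pairwise differences:

```python
from itertools import combinations

import numpy as np

def edt_misfit(t_obs, t_theo, sigma_t):
    """Equal differential time misfit: compare observed and theoretical
    arrival-time differences over all sensor pairs (a, b) of the same phase."""
    total = 0.0
    for a, b in combinations(range(len(t_obs)), 2):
        d_obs = t_obs[a] - t_obs[b]
        d_theo = t_theo[a] - t_theo[b]
        total += ((d_obs - d_theo) / sigma_t) ** 2
    return total
```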
Note that when calculating the differences in angles (i.e., βobs − βtheo), the periodicity of the angles must be considered. Simple trigonometry thus gives an angular difference δ defined as
The expression atan2 is the two-argument arc-tangent function present in most computer languages. It accounts for the correct signs of the input and returns the correct quadrant, thus allowing for a 360° periodicity.
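In code, the wrapped angular difference reads (a direct transcription of the atan2 construction; the result is in degrees, in the interval (−180°, 180°]):

```python
import math

def angle_diff(beta_obs, beta_theo):
    """Periodic difference between two azimuths in degrees, computed via the
    two-argument arc tangent so that the result lands in the correct quadrant."""
    d = math.radians(beta_obs - beta_theo)
    return math.degrees(math.atan2(math.sin(d), math.cos(d)))
```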


The purpose of global optimization is to find the absolute minimum of the (possibly nonlinear) objective function, in the (possible or known) presence of multiple local minima. Different objective function minimization methods can be applied to obtain the smallest misfit between observed and predicted data (see also Zimmer and Jin, 2011). There is always a trade-off between accuracy and calculation speed. This is especially true in highly nonlinear problems, such as earthquake location within complex velocity models. The minimization method must not become trapped in a local minimum; rather, the event location is to be identified at the global minimum.

Here, we consider three families of optimization methods: exhaustive, heuristic, and statistical. Exhaustive methods systematically enumerate all possible candidate solutions and check if the problem’s statement is satisfied. Although such a brute-force approach is straightforward to implement programmatically and will always find a solution if it exists, its computational cost is proportional to the number of candidate solutions (i.e., test event locations). Furthermore, these methods can only find the exact solution for discrete problems; i.e., for continuous problems, such as earthquake location, the accuracy is limited by the search grid spacing. Heuristic (Greek for “I discover”) methods trade optimality, completeness, accuracy, or precision for speed. Some heuristics have a strong underlying theory, whereas others are just rules of thumb learned empirically. These often imitate nature in deciding which search branch to follow (see Pintér, 2017). Notably, evolution strategies mimic “populations” of candidate solutions, and each iteration step involves a competitive selection that drops a poor solution. The remaining pool of candidates with higher “fitness value” are then “recombined” with other solutions by swapping components with one another; they can also be “mutated” by making some smaller scale change to a candidate. Finally, the most commonly used statistical methods are Bayesian algorithms, which are based on postulated statistical information (e.g., Lomax et al., 2000). During optimization, the problem-instance characteristics are adaptively estimated and updated. Pintér (2017) cautions that “theoretically, convergence to the optimal solution set is guaranteed only by generating an everywhere dense set of search points. One of the obvious challenges of using statistical methods is the choice and verification of an ‘appropriate’ statistical model, for the class of problems to which they are applied. In addition, it seems to be difficult to implement rigorous, computationally efficient versions of these algorithms for higher dimensional optimization problems.”

In this study, we evaluate the performance of six methods to minimize the objective function(s). In Appendix A, we show a comparison of these methods in selected examples:

  • The most robust minimization method is a global full grid search (FGS). The user defines a search volume and grid spacing. For each of the grid points, the objective function value is calculated. For fine grid resolutions and sufficiently large volumes, this is a very thorough but computationally expensive scheme. There is a trade-off between computation time and location accuracy, which depends on the grid resolution. This is a “brute force” approach because the full potential search space is investigated with no additional approximations (for example, linearization) being made to the objective function. One way to improve the efficiency of an FGS is to use a nested grid-search approach with an initial coarse global grid search, find the initial minimum, and then use successively smaller, nested local search cubes around the minimum with finer grid spacing. Typically, three iterations should be sufficient. The only downside of this iterative process is that the initial coarse grid calculations might miss the global minimum in case of fine-scale variations. Hence, the secondary iterations might concentrate on local minima. Again, there is a trade-off between computation time and accuracy.

  • Directed grid search (DGS) is a gradient-based algorithm (e.g., Sundhararajan et al., 1998). From a given start location, the gradient of the objective function toward the eight nearest grid points in the traveltime table is evaluated. The node in the direction of the lowest gradient is selected as the starting point for the next iteration. The process is terminated after a certain maximum number of iterations or at a threshold misfit value. This method is well-suited for monotonic functions, i.e., when no local minima are expected. For the location of earthquakes, this means simple velocity models with no high-velocity zones. The geometry of the station network also plays a role in the complexity of the objective function. Finally, the start location for the gradient search can be crucial to finding the global minimum. A more complete description of the DGS is given in, e.g., Oye and Roth (2003).

  • We also consider an OcTree approach, modified from the version described in the NonLinLoc software package (Lomax et al., 2000). An initial very coarse grid is defined (approximately 1000 nodes). For each node, a misfit through the objective function is calculated. The node with the minimum misfit value is taken as the center for a new cube with the node distance slightly smaller (here 80%) than the original grid spacing (Figure 1). This approach allows event locations outside the initial cube. In an iterative process, new cubes are built until a set number of iterations is reached or the grid spacing is smaller than half the traveltime table spacing. In addition, we avoid local minima by using the lowest three initial grid points as seed nodes. The lowest misfit value of those three trees is then the location.

  • Another popular family of global optimizers is the genetic algorithms. Here, we consider the differential evolution approach (Storn and Price, 1997). Differential evolution optimizes a problem by maintaining a population of candidate solutions (in this case, the event locations). A new generation of candidate solutions is created by combining members of the current population (i.e., through random “mutation”). Successful mutations in event locations have lower misfits. Random cross-mutation ensures that a large search space is covered and that the method is not trapped in a local minimum. As the stopping criterion, we use the standard deviation of the misfit values across all population members (Figure 2). A sufficiently large starting population is required; Storn and Price (1997) suggest 30 population members for a 3D problem.

  • Simulated annealing mimics the physical process of crystallization (Kirkpatrick et al., 1983). The method models the physical process of heating a material and then slowly lowering the temperature to decrease defects, thus minimizing the system energy. Here, the energy in thermodynamics corresponds to the objective function in simulated annealing, and thermodynamic states correspond to event locations. At each iteration of the simulated annealing algorithm, a new point is randomly generated. The distance of the new point from the current point, or the extent of the search, is based on a probability distribution with a scale proportional to the temperature. The cooling schedule (e.g., Nourani and Andresen, 1998) describes how the temperature is reduced, i.e., how fast a solution is accepted. The simulated annealing algorithm accepts all new points that lower the objective, but also, with a certain probability, points that raise the objective. By accepting points that raise the objective, the algorithm avoids being trapped in local minima in early iterations and is able to explore globally for better solutions. This is similar to the differential evolution approach. The best solution is the one with minimum energy (in our case, the traveltime residual). The advantage is that candidate locations are not limited to the eight corners of a cube but are drawn as Gaussian-distributed random values. Thus, we can sample much finer and therefore also identify minima in thin layers. The user chooses the start point, the initial search radius, and the cooling schedule.

  • In addition to the above optimization methods, which are all built into the same in-house software framework, we also compare with the independent location software package NonLinLoc (Lomax et al., 2000). This follows a probabilistic formulation of location inversion. Note that here we used the default settings of the software for local problems.


The Groningen gas field in the Netherlands has been exploited since the 1960s. In recent years, increased seismicity has been observed, possibly related to the gas production from the reservoir (Van Thienen-Visser and Breunese, 2015; NAM, 2016). Accurately and confidently determining the depths of microseismic events is of importance to determine possible changes in operational strategy to mitigate earthquake risk (Daniel et al., 2016). Notably, whether an event occurs in the reservoir, cap rock, or the basement has different implications to the mitigation strategy.

The Groningen-area subsurface is characterized by a complex velocity structure. There are two high-velocity layers above the reservoir, one high-velocity zone at 2100 m and a basal anhydrite layer capping the reservoir at approximately 2700 m depth. Two nearly vertical wells are instrumented with down-hole geophones at reservoir depth (2770–3040 m). The wellhead distance at surface is approximately 3000 m. Well SDM-1 is instrumented with 10 sensors at 30 m spacing with the top sensor at 2770 m vertical depth. Well ZRP-1 is instrumented with seven sensors at 30 m spacing and a top sensor at 2780 m depth. Figure 3 shows the 1D P-velocity, extracted from a 3D velocity model at the wellhead location of well SDM-1. Note the two thin, high-velocity layers at 2099 and 2720 m, both with a thickness of approximately 45 m.

We use this model setup to benchmark the location algorithms. Note that in this study we test only the influence of the location algorithm and objective function. We keep all other parameters fixed. Notably, the traveltimes are accurate. We calculate the (synthetic) traveltimes based on our implementation of the fast-marching method (Sethian, 1996) to solve the eikonal equation. By design, eikonal solvers return the first arrival (P or S). Depending on the velocity structure and the depth relation between source and receiver, this may be the direct or a head wave. In the case of a horizontally layered 1D model, it is, however, possible to calculate both arrival times (head and direct): We build temporary velocity models containing only the layers between each source-receiver combination; any layer above or below the direct (geometric straight-line) path is thus ignored, and no head waves are possible. Thus, we can extract the direct and head wave arrival times and evaluate the benefit of including either phase in the location scheme. Figure 3 shows the raypaths of direct and head waves for an example event for the setup of the Groningen downhole instrumentation.

Our test consists of a series of event location depths between 2200 and 3200 m at 20 m spacing, and at nine epicentral (i.e., horizontal) locations, labeled A to I (Figure 4). Note that these distances between the sensor strings and the event locations are rather extreme. Typical down-hole installations, especially in hydraulic fracturing scenarios, are designed with shorter distances. For each event location, we evaluate the performance of different location procedures. To mimic phase-picking errors, we define “observed” picks by calculating the theoretical arrival times and adding uniform random perturbations to them. The maximum of the uniform distribution is chosen as ±2 ms for the pick error standard deviation σt and 5° for the back-azimuth error standard deviation σβ. For repeatability, each simulation resets the seed value of the random number generator to ensure identical variation for each test depth. These perturbed phase picks are then used as inputs to various minimization schemes and objective functions to evaluate how well these recover the input event location.
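The synthetic-pick generation described above can be sketched as follows (the function name and seed value are illustrative; resetting the seed before each simulation reproduces the identical noise realization):

```python
import numpy as np

def perturb_picks(t_theo, beta_theo, max_dt=0.002, max_db=5.0, seed=42):
    """Add uniform random errors (here up to +/-2 ms and +/-5 degrees, as in
    the text) to theoretical arrival times and back azimuths."""
    rng = np.random.default_rng(seed)   # reset the seed for repeatability
    t_obs = t_theo + rng.uniform(-max_dt, max_dt, size=len(t_theo))
    beta_obs = (beta_theo + rng.uniform(-max_db, max_db, size=len(beta_theo))) % 360.0
    return t_obs, beta_obs
```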

Parameter settings for minimization schemes to locate events

We choose for the FGS a fine grid of 10 m spacing and a cube with a 100 m side length horizontally around the event position. This results in 11×11×11=1331 nodes to be tested for each trial location. This makes use of the fact that we know the event location and search only within a comparatively small, localized cube. We also ran tests covering the full target area with coarser cells (100×100×25 m side length, i.e., 51×51×56=145,656 nodes). However, because the minimum of the objective function is well-constrained due to the simplicity of the sensor setup and velocity model, we decided to run a fine full-grid search (10 m side length) to capture small-scale deviations from the “correct” location. For real applications, a nested-grid approach would be more efficient.
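A nested-grid search of the kind recommended above can be sketched as follows (the per-iteration shrink factor and node count are illustrative assumptions, and the misfit callable is assumed to be vectorized over coordinate arrays):

```python
import numpy as np

def nested_grid_search(misfit, center, half_width, n=11, iterations=3, shrink=0.2):
    """Nested full-grid search: evaluate the objective on a cube of n**3
    nodes, re-center on the minimum node, shrink the cube, and repeat."""
    best = np.asarray(center, dtype=float)
    for _ in range(iterations):
        axes = [np.linspace(c - half_width, c + half_width, n) for c in best]
        X, Y, Z = np.meshgrid(*axes, indexing="ij")
        values = misfit(X, Y, Z)
        i, j, k = np.unravel_index(np.argmin(values), values.shape)
        best = np.array([X[i, j, k], Y[i, j, k], Z[i, j, k]])
        half_width *= shrink    # finer grid spacing around the current minimum
    return best
```

As noted above, a coarse first pass may miss the global minimum of a fine-scale objective function, in which case the later, finer passes refine the wrong local minimum.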

The initial OcTree search uses a grid of the same extent as the FGS; however, we use only 500 m spacing horizontally (between −2500 and +2500 m in the x- and y-directions) and 200 m vertically (between 2000 and 3400 m depth). This results in 11×11×8=968 initial nodes. Note that with a coarser vertical sampling, some event depths could not be resolved properly. The three nodes with the smallest misfit values are used as center points in the next iteration, with the side lengths of each cube successively reduced to 80%.
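This OcTree-style scheme can be sketched as follows (the 5-node-per-axis refinement cubes and the function names are our illustrative choices; the initial 11×11×8 grid, the three seed nodes, the 80% shrink, and the 5 m stopping spacing follow the text):

```python
import numpy as np

def octree_like_search(misfit, lo, hi, n=(11, 11, 8), shrink=0.8,
                       min_spacing=5.0, n_seeds=3):
    """Coarse initial grid, then a separate shrinking-cube refinement from
    each of the n_seeds lowest-misfit nodes (to avoid local minima); the
    best of the refined solutions is returned."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    axes = [np.linspace(a, b, k) for a, b, k in zip(lo, hi, n)]
    nodes = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    values = np.array([misfit(p) for p in nodes])
    seeds = nodes[np.argsort(values)[:n_seeds]]
    spacing = (hi - lo) / (np.array(n) - 1.0)
    best_val, best_loc = np.inf, seeds[0]
    for center in seeds:
        c, hw = center.copy(), spacing.copy()
        while hw.max() > min_spacing:
            axes = [np.linspace(ci - h, ci + h, 5) for ci, h in zip(c, hw)]
            cube = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
            c = cube[np.argmin([misfit(p) for p in cube])]
            hw *= shrink    # cubes shrink; centers may drift outside the initial cube
        v = misfit(c)
        if v < best_val:
            best_val, best_loc = v, c
    return best_loc
```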

In simulated annealing, we choose an exponential cooling schedule. This means that the temperature decrease is defined by multiplying the initial temperature T0 by a factor that decreases exponentially with temperature cycle k, here Tk = T0 × 0.9^k (Kirkpatrick et al., 1983). For both simulated annealing and DGS, the start position is at the center of the two strings. We set the initial search radius to 2000 m, with the radius reduced to 80% of its value with each cooling step.
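With these settings, the simulated annealing loop can be sketched as follows (the Metropolis acceptance rule and Gaussian steps follow the description above; the cycle and step counts are illustrative assumptions):

```python
import math
import random

def simulated_annealing(misfit, start, t0=1.0, radius=2000.0,
                        n_cycles=60, steps_per_cycle=50, seed=0):
    """Exponential cooling Tk = T0 * 0.9**k with the search radius reduced
    to 80% per cooling step; worse points are accepted with probability
    exp(-delta/T) to escape local minima in early iterations."""
    rng = random.Random(seed)
    x, fx = list(start), misfit(start)
    best, fbest = list(x), fx
    for k in range(n_cycles):
        temp = t0 * 0.9 ** k            # exponential cooling schedule
        sigma = radius * 0.8 ** k       # shrinking Gaussian step scale
        for _ in range(steps_per_cycle):
            cand = [xi + rng.gauss(0.0, sigma) for xi in x]
            fc = misfit(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
                x, fx = cand, fc        # accept the candidate point
                if fx < fbest:
                    best, fbest = list(x), fx
    return best
```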

The differential evolution starts with a population of 30 candidate solutions, randomly distributed within the same volume as the OcTree. In all tests, the back-azimuth weighting factor is set to ω=1 (see equations 1–3).
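The differential-evolution loop itself can be sketched as follows (the mutation and crossover factors are common defaults in the spirit of Storn and Price (1997), not necessarily those of our software; the population size and the standard-deviation stopping rule follow the text):

```python
import numpy as np

def differential_evolution(misfit, bounds, n_pop=30, f=0.8, cr=0.7,
                           max_gen=300, tol=1e-8, seed=None):
    """Each candidate is challenged by a trial vector built from the scaled
    difference of two random members added to a third (mutation), mixed with
    the candidate itself (crossover); the trial survives only if its misfit
    is lower. Iteration stops when the misfit spread falls below tol."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_pop, lo.size))
    fit = np.array([misfit(p) for p in pop])
    for _ in range(max_gen):
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, size=3, replace=False)]
            trial = np.where(rng.random(lo.size) < cr, a + f * (b - c), pop[i])
            trial = np.clip(trial, lo, hi)
            ft = misfit(trial)
            if ft < fit[i]:            # successful mutation: lower misfit
                pop[i], fit[i] = trial, ft
        if fit.std() < tol:            # population has converged
            break
    return pop[np.argmin(fit)]
```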


Figure 5 shows the comparison of depth mislocation for different minimization approaches and objective functions for trial event depths along profile E. The DGS fails to successfully locate events within and above the high-velocity cap rock. The events locate consistently deeper than the input trial location. A magnification (Figure 6) shows detailed small-scale variation between and within the approaches.

The DGS shows large scatter, locating the events consistently too deep. This reflects the limitation of gradient-based algorithms in passing through velocity inversions. In contrast, the FGS correctly retrieves the input depths in most cases. Only within the fast layer and for the deepest events (for EDT) are the mislocations off by one or two grid cells (Figure 6), which is most likely due to the imposed perturbations on the exact picks: Although we impose a uniform random error on the correct arrival times, the error is not evenly distributed to either side (earlier and later picks). The 17 available stations are probably too few to avoid a bias, thus resulting in a small mislocation.
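This trapping behavior can be reproduced with a minimal neighbor-descent sketch (all names are illustrative): on a misfit profile with a local minimum between the start node and the global minimum, the walk stops early:

```python
import numpy as np

def directed_grid_search(misfit_grid, start, max_iter=1000):
    """Step to the neighboring grid node with the lowest misfit until no
    neighbor improves, i.e., until a (possibly local) minimum is reached."""
    idx = tuple(start)
    for _ in range(max_iter):
        i, j, k = idx
        best_val, best_idx = misfit_grid[idx], idx
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    ni, nj, nk = i + di, j + dj, k + dk
                    if (0 <= ni < misfit_grid.shape[0]
                            and 0 <= nj < misfit_grid.shape[1]
                            and 0 <= nk < misfit_grid.shape[2]
                            and misfit_grid[ni, nj, nk] < best_val):
                        best_val, best_idx = misfit_grid[ni, nj, nk], (ni, nj, nk)
        if best_idx == idx:     # no neighbor is lower: a minimum is reached
            return idx
        idx = best_idx
    return idx

# A 1D misfit profile (replicated laterally): local minimum at i=2, global at i=8.
profile = np.array([5.0, 4.0, 3.0, 4.0, 5.0, 4.0, 2.0, 1.0, 0.0, 1.0])
grid = profile[:, None, None] * np.ones((1, 4, 4))
```

Started at i=0, the descent stops in the local minimum at i=2 and never reaches the global minimum at i=8; started at i=9, it succeeds, consistent with the start-location dependence of the DGS discussed above.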

Note that the input locations in our tests coincide with the search grid nodes. Realistic applications would result in an inherent uncertainty in location, and the location accuracy is limited by the choice of grid spacing. In contrast, the OcTree, differential evolution, and simulated annealing approaches are not confined to predefined nodes. They nonetheless retrieve the input locations successfully within half the traveltime grid spacing (here 5 m) in most cases. Note that the choice of pick error directly translates into mislocation distance. Events within the high-velocity cap rock are mostly not recovered correctly; only EDT seems better suited for such challenging locations. This implies that cap-rock events cannot be reliably located with the sensor setup tested here. If cap-rock events within such a thin high-velocity zone are of interest (e.g., in leakage monitoring for CO2 storage sites), this effect should be considered in future design studies for the monitoring network (e.g., Eisner et al., 2010). A new potential network setup could then be reevaluated in a study similar to the one presented here.

In the next step, we included head-wave arrival times in our simulations (Figure 7). Additional phases are expected to improve the location capability, especially in such cases with limited sensor distribution. Indeed, the scatter is reduced, especially in the reservoir and basement. The 1Plus objective function seems to generally perform better than EDT for the different minimization methods, and LSQ likewise performs better than EDT. Note also that including head waves does improve the performance of the DGS at the reservoir level for event locations between the sensor strings. Later in the paper, we quantify the performance of the different methods and objective functions.

Now, we investigate the performance of the location algorithms for all profiles defined in Figure 4. Figure 8 shows the depth mislocation along profiles A to I, whereas Figure 9 shows the total mislocation, i.e., the 3D Euclidean distance between input and output event locations. For clarity, we omit the NonLinLoc and DGS tests here and show the results for LSQ without head waves.
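
The distinction between the two figures amounts to the following, shown here with hypothetical input and output coordinates (metres) rather than values from the study:

```python
import numpy as np

# Hypothetical input (true) and output (located) coordinates: x, y, depth in m.
loc_in = np.array([1000.0, 2000.0, 2800.0])
loc_out = np.array([1030.0, 1990.0, 2840.0])

# Depth mislocation (the quantity mapped in Figure 8).
depth_misloc = abs(loc_out[2] - loc_in[2])

# Total 3D Euclidean mislocation (the quantity mapped in Figure 9).
total_misloc = np.linalg.norm(loc_out - loc_in)

print(depth_misloc, total_misloc)  # 40.0 and sqrt(2600) ~ 51.0
```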

For all investigated methods, the depths of events along profile E are recovered best. This is expected because profile E is located directly between the two vertical sensor strings. The FGS recovers all depths correctly, confirming that a global minimum of the objective function exists. Interestingly, for several profiles (E, F, G, and I; see Figure 9), the FGS mislocates events from above the reservoir. Because the depths are recovered correctly (Figure 8), these 3D mislocations result purely from horizontal mislocation.

Above the thin, fast layer, the event depths are generally poorly recovered. Simulated annealing and OcTree produce some strong deviations from the input depths. Differential evolution generally works better, although that may partly result from the longer processing time it was allowed (Figure 10). Note that only the FGS was performed on a limited grid; because it resulted in constant processing times, it is not shown here. The other methods were allowed to search the whole target area (OcTree) or were fully unconstrained (DGS, SimAn, and DiffEv). The timing results in Figure 10 are specific to our parameter choices, which we mainly left at the defaults suggested by the original authors. These could be tuned to perform better in specific cases, but such optimization is beyond the scope of this study.

Given the relatively symmetric setup of sensors and event profiles, the strong variability among the depth profiles in Figure 8 is somewhat surprising. It may be partly due to the randomly generated trial locations in differential evolution and simulated annealing; differential evolution in particular should be sensitive to the randomness of the start locations of its first generation. In summary, event locations in the reservoir and below are recovered correctly for the chosen velocity model and sensor setup in the Groningen field. Events located above the fast velocity layer are characterized by large scatter and cannot be determined with high confidence, either in depth or horizontally.

Finally, we aim to quantify the reliability of the different methods and algorithms. We sort the mislocation distances and calculate kernel density estimates (Botev et al., 2010) to obtain smoothed probability densities of the mislocation distance, from which we compute cumulative distribution functions (CDFs; Figure 11). Each CDF gives the probability that an event is located within a certain distance of its correct location. From these, it is straightforward to determine the confidence distance within which events can be located with a given probability. Figure 11 shows the individual CDFs for each objective function (panels) and method (colored lines). As seen in the earlier discussion, the DGS and NonLinLoc methods perform rather poorly. Differential evolution and OcTree perform better for EDT than for LSQ, and better still for 1Plus. Table 1 lists the 68% and 95% confidence distances in length and depth for the studied approaches.
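
The confidence-distance workflow can be sketched as follows. We use scipy's Gaussian KDE as a convenient stand-in for the Botev et al. (2010) estimator, and the mislocation distances below are synthetic placeholders, not values from the study:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical mislocation distances (m) for one method; in the study these
# come from the synthetic relocation tests.
misloc = np.abs(rng.normal(0.0, 20.0, size=200))

# Smoothed PDF of the mislocation distance.
kde = gaussian_kde(misloc)
d = np.linspace(0.0, misloc.max() * 1.5, 2000)
pdf = kde(d)

# CDF by trapezoidal integration, normalized to end at 1.
cdf = np.cumsum((pdf[1:] + pdf[:-1]) * 0.5 * np.diff(d))
cdf /= cdf[-1]

# Confidence distances: smallest distance whose CDF reaches 68% / 95%.
d68 = d[1:][np.searchsorted(cdf, 0.68)]
d95 = d[1:][np.searchsorted(cdf, 0.95)]
print(f"68% confidence distance: {d68:.1f} m, 95%: {d95:.1f} m")
```

Applying the same reduction per method and objective function yields the kind of scalar comparison summarized in Table 1.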

Note that these results apply only to the studied setup and event locations, and the performance of each method could be optimized further, although changing the parameters of an optimization method often involves trade-offs. This statistical approach of calculating a single scalar confidence distance thus offers a novel and robust measure to evaluate the performance of a network.


In this paper, we considered three different objective functions for earthquake location in a down-hole acquisition scenario, each incorporating the back-azimuth estimate as an additional constraint. We compared these objective functions using six optimization schemes for a set of events in the setting of the Groningen field in The Netherlands. Although all optimization schemes should find the global minimum (i.e., the correct event location) accurately, the estimated event locations sometimes deviate strongly from the correct location. Notably, in the area above the high-velocity zone, the location capability deteriorates significantly, in depth and horizontally. Including head waves improves the location accuracy. Such large differences between the minimization methods, which should in theory all lead to the same minimum, were indeed surprising, given that the objective function space is rather smooth. The differences are more pronounced for events outside the sensor network. Our results highlight that there is no single best algorithmic approach; careful synthetic analysis is needed prior to processing the real data to evaluate the quality of the results.

We find that the EDT objective function generally provides better locations, but at a higher computational cost than the simple least squares. We introduced a modified least squares, named 1Plus, which in the studied setting performs as well as the EDT but without the added computational cost. Only the FGS method obtains the correct event locations in most cases, but at a prohibitive computational cost. These trade-offs must be carefully balanced and justified.

Also, not all methods may be suited for all combinations of sensor setup and velocity model. We recommend that a synthetic study based on our template be performed routinely before processing real data. This helps in selecting the best-suited method and processing parameters. In turn, analyses similar to the ones presented in this work can also be applied in the network design and evaluation stage of a project. Finally, we introduced a method to quantify a single, objective value describing the location performance of the system. This confidence distance is a statistical measure over the whole target region and can be a useful tool in the design stage of a project to define specifications of the sensor system. Specifications could, for example, state the allowed mislocation distance at a certain probability.


We are grateful to Nederlandse Aardolie Maatschappij for providing the velocity model and other data and information used. This project has been funded through Gassnova by the Norwegian National Program for research, development, and demonstration of CO2 capture and storage technology under CLIMIT-demo project number 242007: Assessing the capabilities of induced seismicity monitoring for CO2 storage. We also thank the three anonymous reviewers for their comments that helped to improve this paper.


In this section, we present how the three objective functions introduced in equations 1–3 differ. Figure A-1 shows the resulting misfit functions for a single event location on a depth slice, for a single sensor string (well ZRP) and for two strings (wells ZRP and SDM; see Figure 3). We also compare various weighting factors. The setup and velocity model are identical to the case discussed in the paper body. The results shown in Figure A-1 indicate that the 1Plus method is less sensitive to the choice of weighting factor, which is advantageous in practice because it reduces the number of free parameters to choose. The LSQ and EDT do not show significant differences at this depth and for these setups. The 1Plus misfit surfaces show a somewhat less focused minimum and a clear influence of the back-azimuth. These features may guide the different optimization methods toward the minimum more quickly and thus explain the higher success rate of the 1Plus method in the location process.
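
For orientation, the sketch below implements a demeaned least-squares misfit and an EDT misfit in their standard textbook forms; these are illustrative and not necessarily identical to equations 1–3, the back-azimuth term is omitted, and the 1Plus modification is deliberately not reproduced here. Pick times and the assumed pick uncertainty are hypothetical.

```python
import numpy as np
from itertools import combinations

def lsq_misfit(t_obs, t_calc):
    """Least-squares misfit; the unknown origin time is removed by demeaning."""
    r = (t_obs - t_obs.mean()) - (t_calc - t_calc.mean())
    return np.sum(r ** 2)

def edt_misfit(t_obs, t_calc, sigma=0.002):
    """Equal-differential-time misfit (standard form): origin-time free by
    construction; larger sums mean a better fit, so the negative is returned
    so that the location is found by minimization."""
    fit = 0.0
    for i, j in combinations(range(len(t_obs)), 2):
        dd = (t_obs[i] - t_obs[j]) - (t_calc[i] - t_calc[j])
        fit += np.exp(-dd ** 2 / (2.0 * sigma ** 2))
    return -fit

# Toy check: both misfits are minimal when the calculated times match the
# observed times up to a constant origin-time shift.
t_obs = np.array([0.100, 0.140, 0.195, 0.260])
shifted = t_obs + 1.0
print(lsq_misfit(t_obs, shifted), edt_misfit(t_obs, shifted))
```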

Figure A-2 compares the misfit function maps for different event locations at 2800 m depth. Here, we fixed the weighting factor to ω=1. Furthermore, we show the trial locations of the different minimization methods (OcTree, simulated annealing, and differential evolution). These graphics illustrate how the algorithms work toward the minimum. The OcTree often shows multiple star-shaped features, which are related to the three trial start points after the initial grid search. Interestingly, for 1Plus, the start locations do not separate, and they all focus around the correct location. Similarly, the simulated annealing minimization for 1Plus shows a well-focused point cloud around the correct location, whereas LSQ and EDT minimization produce more scatter in the trial locations. Note that the choice of search distance and start location influences the success. Here, we start at the center of the two strings, with a search distance of 1000 m. Finally, differential evolution shows good success in finding events at this depth because the initial population samples the search space well.
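
As an illustration of the differential-evolution search, the following toy example relocates a hypothetical event from noise-free picks in a deliberately simplified constant-velocity model with two vertical strings. The geometry, velocity, and solver settings are assumptions for this sketch, not the study's configuration:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Two hypothetical vertical sensor strings (x, y, depth in m), 2700-3000 m.
depths = np.arange(2700.0, 3050.0, 50.0)
sensors = np.array([[x, 0.0, z] for x in (0.0, 2000.0) for z in depths])

v = 2500.0                                   # assumed constant velocity (m/s)
true_loc = np.array([300.0, 0.0, 2850.0])    # hypothetical event location

def tt(loc):
    """Straight-ray traveltimes from a trial location to all sensors."""
    return np.linalg.norm(sensors - loc, axis=1) / v

t_obs = tt(true_loc)                         # noise-free synthetic picks

def misfit(loc):
    # Demeaned least squares, i.e. origin-time free.
    t_calc = tt(loc)
    r = (t_obs - t_obs.mean()) - (t_calc - t_calc.mean())
    return np.sum(r ** 2)

# Search box mirroring the 1000 m search distance used in this appendix.
bounds = [(-1000.0, 3000.0), (-1000.0, 1000.0), (2000.0, 3500.0)]
result = differential_evolution(misfit, bounds, seed=7, tol=1e-12)
print(result.x)  # should fall close to true_loc
```

Because the initial population is spread over the whole search box, the algorithm samples the misfit surface broadly before contracting toward the minimum, which is the behavior visible in Figure A-2.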

Freely available online through the SEG open-access option.