Abstract
The size distribution of earthquakes is well described by an exponential law, whose single parameter is called the "b-value" in seismology. Several studies indicate that this parameter varies across stress regimes, tectonic settings, and focal mechanisms of earthquakes. Seismic events are usually classified into two types: background earthquakes, that is, events that occur spontaneously as an effect of tectonic loading; and triggered earthquakes, that is, events that are triggered by previous earthquakes. A key question, with important implications for seismic forecasting, is whether the two categories of earthquakes have the same b-value. In seismology, algorithms that can separate these two types of seismicity are called declustering methods. In this work, using a global catalog of strong events and a stochastic declustering method, we demonstrate that background and triggered seismicity have similar b-values. This implies that the b-value reflects intrinsic properties of the faults or of the crustal material, rather than the mechanism responsible for the type of event.
Introduction
The Gutenberg–Richter b-value (Gutenberg and Richter, 1944) has been one of the most studied parameters in seismology in recent decades. Indeed, numerous studies (Scholz, 1968; Wyss, 1973; Amitrano, 2003; Gulia and Wiemer, 2010; Scholz, 2015) have suggested a correlation between the b-value and the stress state of the Earth's crust. This hypothesis has motivated extensive research into spatiotemporal variations in the b-value, which are considered potential indicators of stress changes induced by significant foreshocks and of precursory patterns (Wiemer and Wyss, 1997, 2002; Gulia and Wiemer, 2010; Nanjo et al., 2012; Tormann et al., 2014, 2015; Gulia and Wiemer, 2019; Gulia et al., 2020; Nanjo, 2020). More recently, interest in this field has grown further with the advent of machine-learning-based seismic catalogs, which have significantly expanded access to larger datasets, facilitating more comprehensive analyses (Ross et al., 2018; Kong et al., 2019). Aftershock sequences seem to have a larger-than-usual b-value once the strongest event of the sequence has already occurred, and a smaller-than-usual b-value when a strong event is still impending (Gulia et al., 2018; Gulia and Wiemer, 2019). However, such temporal variation before and after strong events is a matter of debate (Dascher-Cousineau et al., 2020; Gulia and Wiemer, 2021; Lombardi, 2023; Godano, Tramelli, et al., 2024). Moreover, the classical classification of the events in a seismic sequence, for example, into mainshocks and aftershocks, leads to biased estimates of the b-value (Mizrahi et al., 2021): mainshocks tend to have a lower b-value simply because their selection is not random. Novel stochastic algorithms interpret the seismicity in a slightly different way, distinguishing background from triggered events (Zhuang et al., 2002, 2004). Such stochastic declustering does not bias the b-value estimation (Mizrahi et al., 2021) and can be used to separate background and triggered seismicity and then retrieve their b-values. Marsan and Lengline (2008), using a simplified stochastic declustering approach, showed that for California seismicity the b-value of the complete catalog (i.e., all the events) is similar to the b-value of the background seismicity. However, for a particular fault system in Italy, it was shown using small-magnitude seismicity that the b-value of triggered seismicity is significantly lower than that of background seismicity (Taroni et al., 2024). The goal of this work is to analyze strong worldwide earthquakes (Mw ≥ 5.7), separating background and triggered seismicity with a proper stochastic declustering algorithm (Xiong and Zhuang, 2023), and then to estimate their b-values.
Data and Methods
Worldwide seismic catalog
In this work, we used the Global Centroid Moment Tensor (Global CMT) catalog (Dziewonski et al., 1981; Ekström et al., 2012) from 1980 to 2023, with a maximum depth of 50 km (as in Schorlemmer et al., 2005 and Petruccelli et al., 2019). It contains 7908 events with magnitude Mw ≥ 5.7 (see the Magnitude of completeness estimation section for details). This catalog is the most widely used in global surveys of b-value estimation (Schorlemmer et al., 2005; Petruccelli et al., 2019).
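As an illustration of these selection criteria only, the filtering could be written as follows in Python; the file name and column names are hypothetical, and the actual Global CMT files require their own parser.

import pandas as pd

# Hypothetical flat-file version of the Global CMT catalog.
cat = pd.read_csv("gcmt_catalog.csv", parse_dates=["time"])
selection = cat[
    (cat["time"].dt.year >= 1980) & (cat["time"].dt.year <= 2023)
    & (cat["depth_km"] <= 50.0)   # maximum depth of 50 km
    & (cat["Mw"] >= 5.7)          # completeness threshold used in this work
]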
Magnitude of completeness estimation
The magnitude of completeness (Mc) represents the smallest magnitude at which earthquakes are consistently recorded and accurately reported in seismic catalogs (Rydelek and Sacks, 1989). Accurately determining Mc is crucial because underestimating it can lead to an underestimation of the b-value, whereas overestimating it results in a loss of valuable data and introduces bias by narrowing the range of magnitudes available for analysis. Numerous approaches have been developed to estimate Mc (e.g., Cao and Gao, 2002; Godano, 2017; Wiemer and Wyss, 2000; Godano and Petrillo, 2023; Godano, Petrillo, et al., 2024). Here, we used the method of Taroni (2023), which is indicated for catalogs that contain subsets of events with possibly different b-values. To avoid bias in the estimation, we split the catalog into two temporal windows, 1980–2003 and 2004–2023, because in 2004 the methods for event detection were improved (Ekström et al., 2012). We estimated Mc separately in the two periods and adopted a threshold valid for the whole catalog. However, we must also consider short-term catalog incompleteness, which is present in the global catalog as well (Kagan, 2004; Iwata, 2008). Using the approach of Zhuang et al. (2017), we found that a threshold of Mw 5.7 also removes short-term incompleteness (see Fig. S1, available in the supplemental material to this article).
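The two-window check can be sketched as follows; for the sake of a self-contained example, the estimator of Taroni (2023) is replaced here by a simple maximum-curvature proxy, and the variable names are hypothetical.

import numpy as np

def mc_max_curvature(mags, dm=0.1, correction=0.2):
    # Mode of the binned magnitude histogram plus an empirical correction
    # (a common proxy; not the estimator actually used in this work).
    edges = np.arange(mags.min(), mags.max() + dm, dm)
    counts, _ = np.histogram(mags, bins=edges)
    return edges[np.argmax(counts)] + correction

# years, mags: one entry per event in the raw catalog.
# mc_early = mc_max_curvature(mags[years < 2004])
# mc_late = mc_max_curvature(mags[years >= 2004])
# mc = max(mc_early, mc_late)  # threshold valid for the whole catalog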
SETAS and stochastic declustering algorithm
The spherical epidemic-type aftershock sequence (SETAS) model is a reformulation of the classic space–time ETAS model designed to work on a spherical surface rather than on a plane (Xiong and Zhuang, 2023). This modification addresses the spatial distortion inherent in the planar model, particularly at high latitudes or over large geographical areas. The conditional intensity function of the spherical ETAS model is expressed in terms of the great-circle distance (in radians) between seismic events, allowing for more accurate modeling of global seismic interactions. The temporal and magnitude components remain the same as in the original ETAS model, but the spatial component is adapted to the geometry of a spherical surface, ensuring that the model reflects the curvature of the Earth.
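As a reference, the conditional intensity can be sketched as follows, assuming the standard ETAS decomposition with the planar distance replaced by the great-circle distance; the exact spherical normalization of the spatial kernel follows Xiong and Zhuang (2023) and is omitted here:
\[
\lambda(t, \mathbf{x} \mid \mathcal{H}_t) = \mu(\mathbf{x}) + \sum_{i:\, t_i < t} \kappa(m_i)\, g(t - t_i)\, f\!\left(r(\mathbf{x}, \mathbf{x}_i); m_i\right),
\]
where \(\mu\) is the background rate, \(\kappa(m)\) the expected productivity of an event of magnitude \(m\), \(g\) the Omori-type temporal kernel, \(f\) the spatial kernel, and \(r(\mathbf{x}, \mathbf{x}_i)\) the great-circle distance (in radians) between the location \(\mathbf{x}\) and the epicenter of the \(i\)-th triggering event.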
The stochastic declustering method, first introduced by Zhuang et al. (2002), is a probabilistic approach used to separate background seismicity from triggered events. In this method, each earthquake has an associated probability of being classified as a background event, which is determined based on the conditional intensity function of the SETAS model.
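In the notation of the sketch above, the probability that the \(j\)-th event is a background event, and the probability that it was triggered by a previous event \(i\), follow the standard formulation of Zhuang et al. (2002):
\[
\varphi_j = \frac{\mu(\mathbf{x}_j)}{\lambda(t_j, \mathbf{x}_j \mid \mathcal{H}_{t_j})}, \qquad
\rho_{ij} = \frac{\kappa(m_i)\, g(t_j - t_i)\, f\!\left(r(\mathbf{x}_j, \mathbf{x}_i); m_i\right)}{\lambda(t_j, \mathbf{x}_j \mid \mathcal{H}_{t_j})},
\]
so that \(\varphi_j + \sum_{i:\, t_i < t_j} \rho_{ij} = 1\) for every event \(j\).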
At each iteration, these probabilities are used to re-estimate the background rate (for example, by kernel smoothing of the events weighted by their background probabilities), and the SETAS parameters are then re-estimated by maximum likelihood. This iterative process continues until convergence, providing estimates of both the background rate and the parameters of the SETAS model.
The first events in a seismic catalog are always classified as background events, because there are no previous events that could have triggered them. Usually, these events are therefore excluded from subsequent computations (see, e.g., Taroni et al., 2024); here, we removed the first year of observations (155 events), assuming that this period is long enough to avoid the aforementioned problem.
b‐value estimation
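The b-values of background and triggered seismicity are estimated with the weighted-likelihood approach described in the Results and Discussions section. As a reference, a weighted Aki–Utsu-type estimator in the spirit of Pastoressa et al. (2023) can be sketched as follows; the exact treatment of magnitude binning adopted in this work may differ:
\[
\hat{b} = \frac{\log_{10} e}{\displaystyle \sum_i w_i \left(m_i - m_c + \Delta m / 2\right) \Big/ \sum_i w_i},
\]
where \(m_i\) are the magnitudes above the completeness threshold \(m_c\), \(\Delta m\) is the magnitude binning of the catalog, and the weights \(w_i\) are the probabilities of being independent (for the background b-value) or their complements (for the triggered b-value).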
Results and Discussions
The stochastic declustering algorithm assigns to each event in the catalog a value between 0 and 1, which corresponds to the probability of being independent (PBI) of that event. Background events without any foreshock activity will have PBI values near 1, whereas events occurring during aftershock sequences, triggered by previous earthquakes, will have values close to 0. We applied this algorithm to the global catalog (Global CMT catalog; Dziewonski et al., 1981; Ekström et al., 2012): Figure 1a shows the spatial distribution of the epicenters of the global catalog, along with their PBI. We carefully performed the magnitude of completeness estimation and checked for possible time-incompleteness problems, because they can affect the estimation of the PBIs and of the b-values. We then used these PBIs to estimate the b-values of the background and triggered seismicity, applying a weighted-likelihood estimator to properly take into account the information carried by the PBI (Pastoressa et al., 2023). Figure 1b shows the magnitude–frequency distributions of background and triggered events: there is a clear overlap of the distributions up to about magnitude 7.5, suggesting that the b-values (i.e., the slopes of the distributions) are similar. The main result is presented in Figure 1c: the b-values of background and triggered events are very similar, with a large overlap of their 95% confidence intervals, for a wide range of completeness thresholds (from 5.7 to 6.0). The Utsu (1966) test on b-value differences yields very high p-values (Fig. S3), demonstrating the similarity of the two estimates. Thus, the two b-values cannot be considered significantly different. Varying the minimum magnitude does not affect our findings, suggesting that the influence of earthquakes below the completeness threshold is not substantial.
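To illustrate how the PBIs enter the estimation, a minimal Python sketch of the weighted estimator given above could look as follows; the array names, the completeness threshold, and the binning are assumptions for the example.

import numpy as np

def weighted_b_value(mags, weights, m_c, dm=0.1):
    # Weighted Aki-Utsu estimator: each magnitude contributes in proportion
    # to its weight; dm is the magnitude binning of the catalog.
    mean_excess = np.sum(weights * (mags - m_c + dm / 2.0)) / np.sum(weights)
    return np.log10(np.e) / mean_excess

# mags: magnitudes above the completeness threshold m_c = 5.7
# phi: probabilities of being independent (PBI) from the stochastic declustering
# b_background = weighted_b_value(mags, phi, 5.7)
# b_triggered = weighted_b_value(mags, 1.0 - phi, 5.7)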
The similarity between the b-values is in contrast with a previous study on the same topic (Taroni et al., 2024); however, here we used strong global events (Mw ≥ 5.7), whereas Taroni et al. (2024) used small events in a limited area. Thus, Taroni et al. (2024) captured a particular behavior of the microseismicity in a specific fault system, whereas here we find a general global behavior of strong seismicity. In the supplemental material, we also estimate the b-values in different tectonic zones (active continent, ridge, and trench, from Bird, 2003), always finding similar b-values for background and triggered seismicity.
A possible explanation for the similarity in b-values between background and triggered seismicity lies in the fundamental mechanics governing earthquake generation (Helmstetter et al., 2003; Scholz, 2015). Both background and triggered earthquakes may be controlled by the same physical laws of rock fracturing and fault rupture, suggesting that the process leading to stress release and, consequently, to the seismic energy emitted follows a similar distribution regardless of the triggering mechanism (Ben-Zion and Rice, 1993; Scholz, 2019). This implies that the b-value reflects intrinsic properties of the fault system or of the crustal material, rather than the type of event. For larger events, it is also possible that similar physical constraints, such as fault dimensions and energy release mechanisms, govern both background and triggered earthquakes. Large fault ruptures tend to be limited by mechanical properties and by scaling laws of seismicity that apply universally, leading to comparable statistical distributions of magnitudes (Landes and Lippiello, 2016). This would naturally result in consistent b-values across both background and triggered seismic events, regardless of how the rupture is initiated.
In a critical system, the size distribution of events typically follows a power law (equivalent to an exponential distribution for magnitudes), which is a footprint of scale-invariant behavior (de Arcangelis et al., 2016). This means that, regardless of whether an earthquake is triggered by a preceding event or occurs spontaneously, the system as a whole operates near the same critical point, leading to similar statistical properties for both types of events. In this context, the b-value can be viewed as an emergent property of the system that reflects its critical state, rather than being specific to the triggering process.
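For reference, the equivalence follows from the standard scaling between seismic moment and magnitude: if magnitudes follow the Gutenberg–Richter exponential law and the moment scales as \(M_0 \propto 10^{1.5 m}\), then
\[
N(\geq m) \propto 10^{-b m} \quad \Longleftrightarrow \quad N(\geq M_0) \propto M_0^{-2b/3},
\]
that is, an exponential distribution of magnitudes corresponds to a power-law distribution of seismic moments (or released energies), with an exponent proportional to the b-value.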
Conclusions
These results have an immediate and important impact on both long- and short-term earthquake forecasting models. They demonstrate that background and triggered events pertain to the same seismic process, at least as far as their size distributions are concerned. Thus, trying to separate these two types of events to better estimate their b-values is not recommended. For long-term models, our results justify the use of the complete catalog, instead of the declustered one, to estimate the b-values for hazard purposes (Taroni and Akinci, 2021; Gerstenberger et al., 2024). Furthermore, a complete catalog contains more events than its declustered version, making the b-value estimation more robust. For short-term models, the same argument applies: these models, for example, ETAS-like models, should use a unique b-value for all the events. However, the applicability of our findings to smaller magnitude events remains uncertain, because this study focuses exclusively on large events (Mw ≥ 5.7) at a global scale. The high magnitude of completeness used in this work could have masked the triggering effect of smaller events. Therefore, further research is necessary to explore the potential variability in b-values across different magnitudes and types of seismic events.
Data and Resources
The data and code used in this article are freely available at zenodo.org/records/14978198 (last accessed March 2025). The supplemental material includes five figures that clarify and better explain some of the results shown in the main text.
Declaration of Competing Interests
The authors acknowledge that there are no conflicts of interest recorded.
Acknowledgments
The authors thank one anonymous reviewer and Paolo Gasperini for their precise and detailed comments, which have increased the value of this work. This research was supported by the Seismic Hazard Center (Centro di Pericolosità Sismica [CPS]) at the Istituto Nazionale di Geofisica e Vulcanologia (INGV), and partially by the research project “ARtificial Intelligence and STOchasTic simulation for the rEsiLience of critical infrastructurES ARISTOTELES” funded by MIUR—Italian Ministry for Scientific Research under the PRIN 2022 program (Grant Number 2022TAFXZ5) and by the European Union (EU) —Next Generation EU program. G. P. would like to acknowledge the Earth Observatory of Singapore (EOS), and the Singapore Ministry of Education Tier 3b project “Investigating Volcano and Earthquake Science and Technology (InVEST).” M. T., G. P., and E. L. outlined the project, M. T. and G. P. performed the statistical analyses, M. T. prepared the figure, M. T. and G. P. wrote the first draft of the article, and E. L. revised the article.