Abstract
Known earthquake sources include mapped active faults and locations where historical earthquakes have occurred. We propose a long‐term earthquake likelihood model that makes use of both types of sources but does not invoke the concepts of fault segmentation or characteristic earthquakes. The model has two components, one based on proximity to the locations of past earthquakes, taking into account their magnitudes, and another based on proximity to mapped faults, taking into account their slip rates. The Gutenberg–Richter law is invoked for earthquake magnitudes and an inverse power law for the diminution of earthquake rate density with distance from past earthquakes and mapped faults. The model is applied to the New Zealand region using the National Seismic Hazard Model fault‐source model and the earthquake catalog since 1951. The parameters of each component are optimized for the period 1987–1996, and the rate density of the combined model is estimated as an optimal linear combination of the two components. In a test on the independent period 1997–2006, the combined model has an information gain (log‐likelihood increase) per earthquake of about 0.1 over each of the component models. The model is also optimized over the longer period 1987–2006 for comparison with other long‐term models in the New Zealand Earthquake Forecast Testing Centre.
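The sketch below is not the authors' implementation; it is a minimal illustration, under assumed forms and parameter values, of the kind of calculation the abstract describes: a Gutenberg–Richter weighting of past earthquakes, an inverse power-law spatial kernel, a linear combination of the earthquake-based and fault-based components, and an information gain per earthquake between two models. All function names, kernel shapes, and parameter values (`d_km`, `q`, `b`, `mu`) are hypothetical.

```python
import numpy as np

def spatial_kernel(dist_km, d_km=5.0, q=1.5):
    # Assumed inverse power-law decay of rate density with distance;
    # d_km keeps the kernel finite at zero distance.
    return (1.0 + dist_km / d_km) ** (-q)

def gr_weight(magnitude, b=1.0, m_min=5.0):
    # Illustrative Gutenberg-Richter-style weighting: larger past
    # earthquakes contribute more to the smoothed rate density.
    return 10.0 ** (b * (magnitude - m_min))

def component_rate(grid_xy, source_xy, source_weight):
    # Sum weighted kernel contributions from each source (past epicentre
    # or fault-trace point) at every grid cell, then normalize spatially.
    rate = np.zeros(len(grid_xy))
    for (sx, sy), w in zip(source_xy, source_weight):
        dist = np.hypot(grid_xy[:, 0] - sx, grid_xy[:, 1] - sy)
        rate += w * spatial_kernel(dist)
    return rate / rate.sum()

def combine(rate_eq, rate_fault, mu=0.5):
    # Linear combination of the two components; in the model described
    # above, the mixing weight is chosen to maximize the likelihood
    # over a fitting period.
    return mu * rate_eq + (1.0 - mu) * rate_fault

def information_gain_per_eq(rate_a, rate_b, eq_cells):
    # Mean log-likelihood difference per earthquake between two models,
    # evaluated at the grid cells containing the test-period earthquakes.
    return np.mean(np.log(rate_a[eq_cells]) - np.log(rate_b[eq_cells]))
```

In such a sketch, the earthquake-based component would take `gr_weight(magnitudes)` of the cataloged events as its source weights, while the fault-based component would use weights derived from the mapped faults' slip rates; the combined model would then be scored against each component on an independent test period via `information_gain_per_eq`.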