Abstract
Discriminating low-yield underground nuclear explosions from small earthquakes is a key task in monitoring nuclear test ban treaties. P/S amplitude ratios have been an effective discriminant for moderate-sized events recorded at regional distances, but it is unclear whether they are as effective in discriminating small seismic events recorded at local distances (<150 km). The difference between local magnitude ($M_L$) and coda duration magnitude ($M_C$) has been proposed as a new discriminant that may complement P/S amplitude ratios at local distances. Here, we calculate high-frequency (up to ∼4 Hz) synthetic seismograms at epicentral distances of 0–30 km in realistic models of the Salt Lake basin (Utah, United States) to better understand how variations in source type and depth affect $M_L - M_C$ values. The Earth models incorporate simplified 1D and deterministic 3D structures, small-wavelength stochastic velocity perturbations, and surface topography. Coda waves are enhanced in the more complicated models compared to the base 1D model, but they still underpredict observed durations by about a factor of two, which results in overprediction of amplitude-to-duration ratios (i.e., $M_L - M_C$ values) for a near-surface explosion and a 7 km deep earthquake. For both source types, the predicted $M_L$ and $M_C$ values decrease as source depth increases, and $M_L - M_C$ shows only minor variation with depth; however, $M_L - M_C$ is on average ∼0.5 units smaller for explosions than for earthquakes. This finding may imply that $M_L - M_C$ is sensitive to source type, in addition to being a depth discriminant, but more modeling is needed given the limitations of the current study. Future modeling should incorporate higher-frequency (≳5 Hz) simulations over a larger distance range (0–150 km), where $M_L$ and $M_C$ are commonly measured, while honoring low shear velocities (<300 m/s) near the surface and sampling a wider range of earthquake and explosion source mechanisms.
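For context, a minimal sketch of the magnitude definitions underlying the discriminant, assuming a generic Richter-style local magnitude and a standard coda duration magnitude form; the calibration constants $a$, $b$, and $c$ are network-specific and are not values from this study:

$$
M_L = \log_{10} A - \log_{10} A_0(\Delta), \qquad
M_C = a + b \, \log_{10} \tau + c \, \Delta,
$$

where $A$ is the peak (Wood–Anderson) amplitude, $A_0(\Delta)$ is an empirical distance correction, $\tau$ is the coda duration, and $\Delta$ is epicentral distance. Because $M_L$ scales with peak amplitude while $M_C$ scales with duration, the difference $M_L - M_C$ acts as a proxy for the amplitude-to-duration ratio referenced above: events with relatively strong peak amplitudes but short codas yield larger $M_L - M_C$ values.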