Mapping the digital record of a seismograph into true ground motion requires correcting the data using a description of the instrument’s response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time‐invariant epochs. The SEED format also allows instrument response errors to be published, but these typically have not been estimated or provided to users.
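To illustrate what such a pole–zero model represents, the following sketch evaluates a Laplace-domain pole–zero–gain response at a set of frequencies. The pole, zero, and gain values are hypothetical placeholders for illustration only, not taken from any actual SEED metadata; the evaluation uses `scipy.signal.freqs_zpk` rather than the paper's MATLAB code.

```python
import numpy as np
from scipy.signal import freqs_zpk

# Hypothetical pole-zero-gain model in the Laplace (s) domain, rad/s.
zeros = [0.0, 0.0]                            # two zeros at the origin (velocity output)
poles = [-0.037 + 0.037j, -0.037 - 0.037j]    # hypothetical long-period corner pair
gain = 2304.0                                 # hypothetical overall gain

freqs_hz = np.logspace(-3, 1, 200)            # 0.001 Hz to 10 Hz
w = 2 * np.pi * freqs_hz                      # angular frequency, rad/s

# Evaluate H(s) = gain * prod(s - z_i) / prod(s - p_j) along s = i*w.
_, response = freqs_zpk(zeros, poles, gain, worN=w)
amplitude = np.abs(response)
phase = np.angle(response)
```

Dividing the spectrum of a recorded trace by such a response (with care near spectral nulls) is the usual route from digital counts back toward ground motion.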
We present an iterative three‐step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least‐squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local‐minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least‐squares best‐fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two‐thirds octave band centered at each best‐fit pole–zero frequency. This procedure yields error estimates at the 99% confidence level. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions for Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
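The first two steps can be sketched as follows, under heavily simplified assumptions: a single complex-conjugate pole pair (parameterized by a corner frequency and damping) is fit to a synthetic noisy response, with a coarse grid search supplying the starting point for `scipy.optimize.least_squares`. All names and values are illustrative, not the paper's actual parameterization or data.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import freqs_zpk

def pole_pair(corner, damp):
    # Conjugate pole pair for a corner frequency (rad/s) and damping ratio.
    re = -damp * corner
    im = corner * np.sqrt(1.0 - min(damp, 1.0) ** 2)
    return [re + 1j * im, re - 1j * im]

# Synthetic "observed" calibration response from a known (hypothetical)
# 120 s corner with 0.707 damping, perturbed by 1% amplitude noise.
true_corner, true_damp = 2 * np.pi / 120.0, 0.707
w = 2 * np.pi * np.logspace(-3, 0, 100)
_, observed = freqs_zpk([0.0, 0.0], pole_pair(true_corner, true_damp), 1.0, worN=w)
rng = np.random.default_rng(0)
observed = observed * (1 + 0.01 * rng.standard_normal(len(w)))

def misfit(params):
    # Residual between the model response and the observed response,
    # stacked as real and imaginary parts for real-valued least squares.
    corner, damp = params
    _, model = freqs_zpk([0.0, 0.0], pole_pair(corner, damp), 1.0, worN=w)
    r = model - observed
    return np.concatenate([r.real, r.imag])

# Step 1: coarse grid search to avoid settling into a local minimum.
corners = 2 * np.pi / np.linspace(60.0, 240.0, 20)
damps = np.linspace(0.4, 0.9, 10)
grid = [(c, d) for c in corners for d in damps]
start = min(grid, key=lambda p: np.sum(misfit(p) ** 2))

# Step 2: iterative nonlinear least squares from the grid-search start.
fit = least_squares(misfit, start, bounds=([1e-4, 0.1], [10.0, 1.0]))
best_corner, best_damp = fit.x
```

The third step in the paper then repeats such an inversion frequency by frequency within a two-thirds octave band around each fitted pole–zero frequency, invoking the central limit theorem to attach 99% confidence intervals; that band-by-band machinery is omitted here for brevity.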
Online Material: MATLAB code for calibration analysis.