We present a new method to accelerate the process of matched filtering (template matching) of seismic waveforms by efficient calculation of (cross‐) correlation coefficients. The cross‐correlation method is commonly used to analyze seismic data, for example, to detect repeating or similar seismic waveform signals, earthquake swarms, foreshocks, aftershocks, low‐frequency earthquakes (LFEs), and nonvolcanic tremor. Recent growth in the density and coverage of seismic instrumentation demands fast and accurate methods to analyze the corresponding large volumes of data generated. Historically, two approaches have been used to perform matched filtering: one in the time domain and the other in the frequency domain. Recent studies reveal that time domain matched filtering is memory efficient and frequency domain matched filtering is time efficient, given the same computational resources.
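To make the frequency domain approach concrete, the following is a minimal sketch (not the authors' code) of valid‐mode cross‐correlation computed with the overlap–add FFT method in Python/NumPy; the block length `L` is an arbitrary tuning assumption:

```python
import numpy as np

def oa_crosscorr(data, template):
    """Valid-mode cross-correlation of `data` with `template`
    via the overlap-add FFT method (an illustrative sketch)."""
    n = len(template)
    t_rev = template[::-1]                # correlation = convolution with reversed template
    L = max(4 * n, 256)                   # block length; an arbitrary tuning choice
    N = 1 << (L + n - 2).bit_length()     # FFT size >= L + n - 1 (next power of two)
    T = np.fft.rfft(t_rev, N)             # template spectrum, computed once
    out = np.zeros(len(data) + n - 1)
    for start in range(0, len(data), L):
        block = data[start:start + L]
        # zero-padded FFT product gives the linear (not circular) convolution
        y = np.fft.irfft(np.fft.rfft(block, N) * T, N)[:len(block) + n - 1]
        out[start:start + len(block) + n - 1] += y   # overlap-add accumulation
    return out[n - 1:len(data)]           # keep only fully overlapping lags
```

The time domain equivalent is `np.correlate(data, template, mode='valid')`; the FFT route trades memory for speed, which is the trade‐off discussed above.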
We show that our super‐efficient cross‐correlation (SEC‐C) method—a frequency domain method that optimizes computations using the overlap–add method, vectorization, and fast normalization—is not only more time efficient than existing frequency domain methods when run on the same number of central processing unit (CPU) threads but also more memory efficient than time domain methods in our test cases. For example, using 30 channels of data with a sample rate of 50 Hz and 30 templates, each with a duration of 8 s, SEC‐C uses only 2.3 GB of memory, whereas other frequency domain codes use three times as much and parallelized time‐domain codes use even more. We have implemented a precise, fully normalized version of SEC‐C that removes the mean of the data in each sliding window, and thus can be applied to raw seismic data. Another strength of the SEC‐C method is that it can be used to search for repeating seismic events in a concatenated stack of individual event waveforms. In this use case, our method is more than one order of magnitude faster than conventional methods. The SEC‐C method does not require specialized hardware to achieve its computation speed; instead it exploits algorithmic ideas that are both time‐ and memory‐efficient and are thus suitable for use on off‐the‐shelf desktop machines.
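The fully normalized, per‐window demeaned correlation described above can be sketched as follows. This is the standard sliding‐window normalization built from cumulative sums, shown for illustration only; it is not the SEC‐C implementation, and in practice the numerator dot products would themselves be computed via FFTs as in the overlap–add scheme:

```python
import numpy as np

def normalized_cc(data, template):
    """Fully normalized sliding-window cross-correlation coefficients.
    The mean of the data is removed in every window, so raw (un-demeaned)
    seismic data can be used directly. A textbook sketch, not SEC-C itself."""
    n = len(template)
    t = template - template.mean()            # zero-mean template
    t_norm = np.sqrt(np.sum(t ** 2))
    # numerator: sum over each window of (x - window mean) * t; because t is
    # zero-mean this equals the plain sliding dot product of the data with t
    num = np.correlate(data, t, mode='valid')
    # sliding-window sums via cumulative sums ("fast normalization")
    c1 = np.concatenate(([0.0], np.cumsum(data)))
    c2 = np.concatenate(([0.0], np.cumsum(data ** 2)))
    win_sum = c1[n:] - c1[:-n]
    win_sq = c2[n:] - c2[:-n]
    # n times the variance of each window: sum(x^2) - (sum x)^2 / n
    win_energy = win_sq - win_sum ** 2 / n
    return num / (np.sqrt(win_energy) * t_norm)
```

Each output value is the Pearson correlation coefficient between the template and one data window, bounded in [-1, 1]; computing the window energies from two cumulative-sum passes instead of per-window loops is what makes the normalization cheap.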