## ABSTRACT

Spatial transformation of an irregularly sampled data series to a regularly sampled data series is a challenging problem in many areas such as seismology. The discrete Fourier analysis is limited to regularly sampled data series. On the other hand, the least-squares spectral analysis (LSSA) can analyze an irregularly sampled data series. Although the LSSA method takes into account the correlation among the sinusoidal basis functions of irregularly spaced series, it still suffers from the problem of spectral leakage: Energy leaks from one spectral peak into another. We have developed an iterative method called antileakage LSSA to attenuate the spectral leakage and consequently regularize irregular data series. In this method, we first search for a spectral peak with the highest energy, and then we remove (suppress) it from the original data series. In the next step, we search for a new peak with the highest energy in the residual data series and remove the new and the old components simultaneously from the original data series using a least-squares method. We repeat this procedure until all significant spectral peaks are estimated and removed simultaneously from the original data series. In addition, we address another problem, which is random noise attenuation in the data series, by applying a certain confidence level for significant peaks in the spectrum. We demonstrate the robustness of our method on irregularly sampled synthetic and real data sets, and we compare the results with the antileakage Fourier transform and arbitrarily sampled Fourier transform.

## INTRODUCTION

Regularization (a typical spectral interpolation/extrapolation) of irregularly sampled (unequally spaced) data is a crucial problem in seismology. Marine seismic data sets are usually irregularly sampled along the spatial direction because of cable feathering, editing bad traces, etc. Regularly sampled seismic data are required for various purposes, such as wave-equation migration, seismic inversion, amplitude variation with azimuth or offset analyses, and surface-related multiple elimination (Weglein et al., 1997; Xu et al., 2010; Chen et al., 2015). There are several effective algorithms proposed by researchers to interpolate seismic data such as the $f$-$x$ and $t$-$x$ prediction error filters (PEFs) (Spitz, 1991; Crawley, 2000; Fomel, 2002; Wang, 2002), masking in the frequency-wavenumber domain (Gülünay, 2003), projection onto convex sets (POCS) and its modifications (Abma and Kabir, 2006; Yang et al., 2012; Wang et al., 2016), minimum weighted norm interpolation (MWNI) (Liu and Sacchi, 2004; Trad, 2009), shaping regularization (Fomel, 2007; Chen et al., 2015), antileakage Fourier transform (ALFT) (Xu et al., 2005, 2010, 2011), and arbitrarily sampled Fourier transform (ASFT) (Guo et al., 2015).

In the PEF theory, the data must be wide-sense stationary and regularly sampled. The $f$-$x$ PEFs represent a spectral approach that uses the available traces to predict the linear events, and it is suitable for aliased seismic data. For seismic data, windowing is required to approximately fulfill the stationarity condition of the PEFs. Nonstationary PEFs have also been used in seismic data interpolation to handle curved events (Crawley, 2000; Liu and Chen, 2017). The POCS method, widely used in industry, is based on the Gerchberg-Saxton iterative algorithm (Gerchberg and Saxton, 1972). Its performance on noisy data and its computational efficiency have recently been improved (Ge et al., 2015; Wang et al., 2016). Abma and Kabir (2006) use a linear threshold in the iterations of the POCS method and point out that the threshold is the key parameter in this method. Thereafter, further work has been done on the appropriate selection of a threshold to improve the result of the POCS method (Gao et al., 2013). The MWNI method solves an underdetermined system of equations subject to suitable prior information for the purpose of regularization. The linear shaping regularization is another approach for regularizing an underdetermined geophysical inverse problem that uses a linear shaping operator. The nonlinear shaping regularization allows the shaping operator to be nonlinear, making it more flexible in implementing the iterative framework.

The ALFT and ASFT methods estimate the Fourier coefficients of the data series first by searching for the peak with maximum energy and subtracting its component from the data series and then by repeating the procedure on the residual data series until all the significant Fourier coefficients are estimated. This way, the spectral leakages of Fourier coefficients emerging from the nonorthogonality of the global Fourier basis functions will be attenuated. However, these methods usually cannot find the correct location of a peak with maximum energy from a preselected set of wavenumbers because of the correlation between the sinusoids, and so they do not effectively reduce the leakage. This shortcoming becomes more severe when a data series has more spectral components. Moreover, the constituents of known forms such as datum shifts and trends are not explicitly considered in these algorithms. Therefore, if the trends or datum shifts are present in a data series, then the ALFT and ASFT methods implicitly approximate them by sinusoids of various wavenumbers (Xu et al., 2010). Hollander et al. (2012) propose a method that uses an orthogonal matching pursuit (OMP) to improve the ALFT results. In the OMP approach, the coefficients of all previously selected Fourier components are reoptimized, producing better interpolation results compared with the ALFT results.

Least-squares spectral analysis (LSSA) was introduced by Vaníček (1969) to analyze unequally spaced time series. It basically estimates a frequency spectrum based on the least-squares fit of sinusoids to the time series by accounting for measurement errors, trends, and constituents of known forms. In the LSSA method, because the selected frequencies are examined one by one (out of context), the spectral leakages still appear and result in interpolation inaccuracy, although the nonorthogonality between the sine and cosine basis functions is taken into account for each frequency. The statistical properties of the least-squares spectrum (LSS) were discussed later by Lomb (1976), Craymer (1998), and Pagiatakis (1999). Seismic data are well-sampled along the time direction (regularly sampled), but they are usually poorly sampled along the spatial direction (irregularly spaced). Note that the LSSA method and its applications are mainly discussed for unequally spaced time series in the literature; however, we can equally apply its concept in the spatial direction, which is based on wavenumber (cycles per unit distance) rather than frequency (cycles per unit time). Therefore, when we use the term “data series,” we mean a series of data points sampled along the spatial direction.

In this contribution, we apply the idea of maximum energy in the LSSA method to develop the antileakage LSSA (ALLSSA) method that mitigates the spectral leakages significantly. We further show how the ALLSSA method attenuates the random noise present in the data down to a certain confidence level. The main assumption in the ALFT, ASFT, LSSA, and ALLSSA methods is that the data must be wide-sense stationary (Xu et al., 2010; Ghaderpour and Pagiatakis, 2017). However, we note that the LSSA method considers the constituents of known forms such as trends and/or datum shifts explicitly, so it is somewhat adaptable to nonstationary data series (Craymer, 1998; Ghaderpour and Pagiatakis, 2017). In the synthetic data examples here, we show that if the data are wide-sense stationary, then the ALLSSA method is an excellent and very accurate method for regularization and random noise attenuation. Özbek et al. (2009) propose an iterative algorithm called interpolation by matching pursuit (IMAP) to regularize irregularly sampled seismic data. The IMAP method improves on the original Lomb spectrum result, and we will briefly compare it with the ALLSSA method.

However, seismic data are often nonstationary and contain many nonlinear events (Crawley, 2000). It is traditional to use windowing of seismic images in $t$-$x$ and $f$-$x$ prediction (Abma and Claerbout, 1995). The $f$-$x$ domain interpolation algorithms for regularly sampled data also need data windowing. A windowing strategy makes the nonlinear events approximately linear and usually works well when the data do not have a complex geologic structure. Liu et al. (2012) apply regularized nonstationary autoregression in the $f$-$x$ domain, which does not require windowing strategies in the spatial direction. Liu et al. (2012) propose a method called local parallel radial-trace time-frequency peak filtering (TFPF), which is less sensitive to the fixed window length used in the TFPF and performs well for random noise attenuation.

The windowing strategy also significantly decreases the computational time of the ALFT method; however, spatial aliasing is more likely to occur within the windows (Schonewille et al., 2009; Xu et al., 2010). Because the events are approximately linear within the windows, an antialiasing scheme (Schonewille et al., 2009) may be used inside the ALFT method (and similarly inside the ASFT and ALLSSA methods) to improve the regularization result. In this paper, we will also use an appropriate weight matrix defined by the Gaussian function in the ALLSSA method to regularize the irregularly spaced seismic data more accurately. In addition, we shall show in our field data example that repeating the ALLSSA method can attenuate the random noise down to a certain level (typically, 10% of seismic data might be random noise).

It is customary to transform each trace from the time domain to the frequency domain using the fast Fourier transform (FFT) (Spitz, 1991; Abma and Claerbout, 1995; Xu et al., 2010; Guo et al., 2015). Then for each frequency, we generate a data series whose data points, located at the trace locations, are the Fourier coefficients of that frequency (a temporal frequency slice). For irregularly spaced seismic data, each temporal frequency slice is an irregularly sampled data series, and the ALFT, ASFT, or ALLSSA method may be used to regularize the data series; then, each trace can be transformed back from the frequency domain to the time domain by the inverse FFT. Note that in this work, all the analyses are performed on the temporal frequency slices. We now begin by revisiting the ALFT, ASFT, and LSSA methods in more detail, and then we combine the ideas of these methods to introduce the ALLSSA method.
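
The trace-by-trace workflow above can be sketched as follows. This is a minimal illustration in Python/NumPy (the paper's implementation is in MATLAB), and the helper names `to_frequency_slices` and `from_frequency_slices` are hypothetical; a regularization step (ALFT, ASFT, or ALLSSA) would operate on each row of `slices` before the inverse FFT.

```python
import numpy as np

def to_frequency_slices(d):
    """FFT each trace along time (axis 0); row k of the result is the
    temporal frequency slice for frequency bin k, with one complex value
    per trace location."""
    return np.fft.rfft(d, axis=0)

def from_frequency_slices(slices, nt):
    """Transform the (possibly regularized) slices back to the time domain."""
    return np.fft.irfft(slices, n=nt, axis=0)

# round trip on toy data (64 time samples, 12 traces); with no
# regularization applied, the original data are recovered exactly
rng = np.random.default_rng(0)
d = rng.standard_normal((64, 12))
slices = to_frequency_slices(d)
recon = from_frequency_slices(slices, d.shape[0])
```

For real-valued traces, `rfft` keeps only the nonnegative frequency bins (33 slices for 64 time samples), which halves the number of frequency slices to regularize.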

## METHODS

### Antileakage Fourier transform

If the $x_\ell$'s are regularly spaced, then the estimation of the Fourier coefficient $\hat{f}(\omega_k)$ has no effect on the estimations of other Fourier coefficients because the sinusoids are orthogonal; i.e., the dot product between the sinusoids is zero (see Appendix A). However, the orthogonality condition fails for an irregularly sampled data series, resulting in spectral leakages. To mitigate these leakages, Xu et al. (2005) propose the ALFT method, which has the following three steps:

- 1)
Compute all Fourier coefficients of the input data series using equation 1. Note that samples of function $f$ in equation 1 can be the samples of a frequency slice in seismic applications.

- 2)
Select the Fourier coefficient with the maximum energy.

- 3)
Subtract the contribution of this coefficient (equation 2) from the input data series to obtain the residual data series. Use this residual data series as the new input and go to step 1. Continue this procedure until the L2 norm of the residual data series becomes approximately zero (below a threshold).

Although the ALFT method is fairly efficient and effective in practical applications, it has some shortcomings. The main shortcoming of the ALFT method is that it only uses wavenumbers in a preselected set of wavenumbers (e.g., set $K$) and does not search for the actual wavenumbers (real numbers) of components in a data series that may not be in the preselected set of wavenumbers. In other words, the wavenumber content of the data may not fall on one of the wavenumber samples used in set $K$. Therefore, for a better regularization result, a denser set of wavenumbers may be selected in the ALFT method (Xu et al., 2011). For a data series of size $n$ with $N$ wavenumbers, the ALFT method requires the computation of $N$ Fourier coefficients for each iteration, which has an order of $nN$ floating operations, and it usually takes $N$ iterations to converge. Thus, the total computational cost of the ALFT method is on the order of $nN^2$ (Xu et al., 2005, 2010). Therefore, a denser set of wavenumbers will greatly increase the computational cost, and the actual wavenumbers of the components still cannot be accurately estimated. In practice, however, one may need fewer iterations when dealing with data that have by nature only a few different spectral components.
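
The three steps above can be sketched in a few lines. The `alft` function below is a minimal illustration, not the authors' implementation; the $1/n$ normalization of the coefficients and the iteration cap are assumptions made here for the sketch.

```python
import numpy as np

def alft(x, f, K, tol=1e-8, max_iter=500):
    """x: sample positions; f: complex data samples; K: preselected
    integer wavenumbers. Returns a dict mapping each selected wavenumber
    to its accumulated Fourier coefficient."""
    r = np.asarray(f, dtype=complex).copy()
    n = len(x)
    coeffs = {}
    for _ in range(max_iter):
        # step 1: Fourier coefficients of the current residual
        c = np.array([np.sum(r * np.exp(-2j * np.pi * k * x)) / n for k in K])
        # step 2: pick the coefficient with maximum energy
        j = int(np.argmax(np.abs(c)))
        # step 3: subtract its contribution and iterate on the residual
        coeffs[K[j]] = coeffs.get(K[j], 0) + c[j]
        r -= c[j] * np.exp(2j * np.pi * K[j] * x)
        if np.linalg.norm(r) < tol:
            break
    return coeffs

# with regular sampling and an on-grid wavenumber, one iteration suffices
x = np.arange(32) / 32
f = np.exp(2j * np.pi * 5 * x)
K = list(range(-16, 16))
coeffs = alft(x, f, K)
```

On irregular samples the same loop runs for many iterations, since each subtraction leaks energy into the other candidate wavenumbers; this is exactly the $nN^2$ cost discussed above.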

### Arbitrarily sampled Fourier transform

- 1)
For each $\omega_k \in K$, compute $\hat{f}(\omega_k)$ and find the wavenumber $\omega_k$ such that $\Lambda(\omega_k)$ is minimum.

- 2)
Find wavenumber $h_k$ in $I=[\omega_k-b,\omega_k+b]$ such that $\Lambda(h_k)$ is minimum. Because the wavenumbers in $K$ are integers, we may choose $b=0.5$.

- 3)
Subtract the contribution $\hat{f}(h_k)e^{2\pi i h_k x_\ell}$ from the input data series to obtain the residual data series. Use this residual data series as the new input, and go to step 1. Continue this procedure until the L2 norm of the residual data series becomes approximately zero (below a threshold).

Estimating $h_k$ up to a chosen decimal place in step 2 may simply be done by an appropriate partitioning of interval $I$ as follows: for $b=0.5$, compute $\Lambda(h_{k_1})$ for all wavenumbers $h_{k_1}$ in the set $\{\omega_k-0.5+0.1j;\ j=1,\ldots,9\}$ and choose the wavenumber $h_{k_1}$ for which $\Lambda(h_{k_1})$ is minimum. Note that wavenumber $h_{k_1}$ is estimated up to one decimal place. Next, compute $\Lambda(h_{k_2})$ for all wavenumbers $h_{k_2}$ in the set $\{h_{k_1}-0.1+0.01j;\ j=1,\ldots,19\}$ and choose the wavenumber $h_{k_2}$ for which $\Lambda(h_{k_2})$ is minimum. Note that wavenumber $h_{k_2}$ is now estimated up to two decimal places. We may continue this process to estimate the wavenumber $h_k$ up to any chosen decimal place. Estimating $h_k$ up to four decimal places using this technique requires $9+3(19)=66$ evaluations of equation 3, which is computationally much faster than the 9999 evaluations needed for all equally spaced numbers in interval $I$, and it is also practically acceptable. For a fair comparison between the ASFT and ALLSSA methods, we use this partitioning technique for both methods. One may also apply other techniques such as the conjugate gradient or the Broyden-Fletcher-Goldfarb-Shanno algorithm for step 2 (Guo et al., 2015) or other robust minimizers such as simulated annealing (Chen and Luk, 1999).
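
The successive-refinement partitioning can be written as a small generic routine; a sketch follows, assuming $b=0.5$ and taking the cost function (equation 3 for the ASFT, or the negated spectral value for the ALLSSA) as a callable argument. With `places=4`, it performs exactly the $9+3(19)=66$ evaluations counted above.

```python
def refine(cost, center, places=4):
    """Refine the wavenumber that minimizes `cost`, one decimal place per
    pass, inside [center - 0.5, center + 0.5]."""
    # first pass: 9 candidates at 0.1 spacing (one decimal place)
    best = min([center - 0.5 + 0.1 * j for j in range(1, 10)], key=cost)
    step = 0.1
    # each later pass: 19 candidates at a tenth of the previous spacing
    for _ in range(places - 1):
        best = min([best - step + (step / 10) * j for j in range(1, 20)],
                   key=cost)
        step /= 10
    return best
```

For example, minimizing a quadratic cost centered at 3.1416 from the integer candidate 3 recovers the minimizer to four decimal places.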

Note that the wavenumbers estimated in the ASFT method are real numbers obtained directly from the cost-function minimization, which makes them easier to interpret (Guo et al., 2015). One of the advantages of the ASFT method over the ALFT method is that the wavenumbers are estimated more accurately at each step. Therefore, if a threshold is defined for the L2 norm of the residual data series, then the ASFT method is able to arrive at that threshold in a smaller number of iterations compared with the ALFT method, resulting in more accuracy and less computational cost. In other words, for a data series of size $n$ with $n$ preselected wavenumbers, the ASFT method has an order of $n^2m$ floating operations, where $m$ is the number of iterations, which is usually smaller than $n$ when an appropriate threshold is selected. For an irregularly sampled data series with several components, although the ASFT method can estimate the wavenumbers of the components more accurately than the ALFT method, its spectrum still shows spectral leakages caused by the correlations between the sinusoids. Therefore, the wavenumbers of the data components cannot be estimated very accurately, and its spectrum contains many wavenumbers.

### Least-squares spectral analysis

The design matrix may contain the column vector of all ones $[\mathbf{1}]$ and/or the position column vector $[\mathbf{x}]$ to explicitly account for any datum shift and/or linear trend and/or the column vectors of the sinusoids of particular wavenumbers.

To demonstrate the estimated amplitudes and phases simultaneously, we may show the LSS in terms of wavenumber amplitude by $\hat{c}_k$ in equation 9. In other words, for each $\omega_k \in \Omega$, we calculate the square root of the sum of squares of the first and second elements of $\hat{c}_k$ (separately for the real and imaginary parts of $\hat{c}_k$ if $f$ is complex). This way, the LSS can be compared with the Fourier spectrum expressed in terms of absolute values of the Fourier coefficients, as will be described in our synthetic data series example.
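
As a small worked illustration of this amplitude convention (a sketch, with the function name chosen here), the first two elements of $\hat{c}_k$ are the cosine and sine coefficients, and their root-sum-of-squares gives the wavenumber amplitude:

```python
import numpy as np

def wavenumber_amplitude(c_k):
    """c_k holds the estimated cosine and sine coefficients for one
    wavenumber (first and second elements). For complex data, the real
    and imaginary parts are handled separately, as described in the text."""
    c_k = np.asarray(c_k)
    if np.iscomplexobj(c_k):
        amp_re = np.hypot(c_k[0].real, c_k[1].real)
        amp_im = np.hypot(c_k[0].imag, c_k[1].imag)
        return amp_re, amp_im
    return np.hypot(c_k[0], c_k[1])
```

For example, cosine and sine coefficients of 3 and 4 give an amplitude of 5, regardless of the phase split between the two basis functions.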

It is shown (Pagiatakis, 1999) that if $f$ has been derived from a population of random variables following the multidimensional normal distribution, then the spectrum given in equation 10 follows the beta distribution with parameters $\alpha=1$ and $\beta=(n-q-2)/2$, where $q$ is the number of constituents of known forms and $n$ is the number of data points. We refer the reader to Craig et al. (2013) for a definition of the beta distribution. The confidence level (threshold) for the spectrum is then equal to $1-c^{1/\beta}$, where $c$ is the critical value (usually 0.05 or 0.01). The confidence level is independent of the weight matrix and wavenumbers (Pagiatakis, 1999; Ghaderpour and Pagiatakis, 2017).
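
The threshold $1-c^{1/\beta}$ is cheap to evaluate directly; a one-function sketch (with a function name chosen here) is:

```python
def spectrum_threshold(n, q, c=0.01):
    """Critical spectral value at the (1 - c) confidence level for the LSS,
    which follows a beta(1, (n - q - 2)/2) distribution under the normality
    assumption (Pagiatakis, 1999)."""
    beta = (n - q - 2) / 2.0
    return 1.0 - c ** (1.0 / beta)
```

For instance, with $n=102$ data points, no known constituents ($q=0$), and $c=0.01$, a spectral peak is significant at the 99% level only if it explains more than about 8.8% of the residual variance. Note that the threshold shrinks as $n$ grows, so longer series resolve weaker peaks.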

### Antileakage LSSA

We apply the idea of maximum energy in the algorithm of the ALFT method to develop a new algorithm for the LSSA method that significantly reduces the spectral leakages and consequently regularizes the irregularly sampled data series very accurately. We may assume that $\Omega=\{1,2,\ldots,\eta\}$, where $\eta$ may be chosen as the largest positive integer less than half of the sampling rate. Beginning with an initial $\varphi$ that may be selected as $\varphi=[\mathbf{1},\mathbf{x}]$, the algorithm for the ALLSSA method has the following four steps:

- 1)
Given $\varphi$, compute $\hat{c}$ and $g$ (equations 7 and 8).

- 2)
For all $\omega_k \in \Omega$, calculate $\hat{c}_k$ (equation 9) and find $\omega_u$ in $\Omega$ such that $s(\omega_u)$ (equation 10) is maximum.

- 3)
Eliminate the cosine and sine basis functions of any wavenumber $h$ in $\varphi$ such that $|h-\omega_u|<b$ (if it exists). Assuming that the difference between any two consecutive actual wavenumbers of the components in the data series is greater than one, we may choose $b=0.5$ to resolve the peaks and avoid singularity of the matrix of normal equations.

- 4)
Find $h$ in $I=[\omega_u-b,\omega_u+b]$ such that $s(h)$ (equation 10) is maximum. Concatenate the cosine and sine column vectors of wavenumber $h$ to $\varphi$, and then go to step 1. Finding $h$ up to a chosen decimal place can be done by an appropriate partitioning of $I$. Terminate the process if $s(h)$ is no longer significant at a certain confidence level (usually 95% or 99%).

When we consider the weight matrix $P$, the method is called the weighted ALLSSA. Removing the constituent of a particular wavenumber with maximum energy from the data series reduces the spectral leakages in the residual spectrum. On the other hand, the elimination of the basis functions of a wavenumber $h$ in step 3 is crucial in the ALLSSA method because considering the correlation between the sinusoids of different wavenumbers results in a more accurate $h$ in the next step. This correlation is taken into account as the column dimension of $\varphi$ increases during the process. The L2 norm of the residual data series $g$ approaches zero rapidly because the wavenumbers are accurately estimated recursively, which also prevents the column dimension of $\varphi$ from growing in many practical applications, especially when we apply the confidence level. We define the antileakage LSS (ALLSS) in terms of wavenumber amplitude by $\hat{c}$ in step 1 after the process is terminated.

Similar to the discussion in the previous section, $\hat{c}$ is a real or complex column vector (depending on $f$) whose elements are the estimated intercept and slope of the linear trend (if the initial $\varphi$ is $[\mathbf{1},\mathbf{x}]$) and the estimated amplitudes of the sine and cosine basis functions of the estimated wavenumbers. We use this $\hat{c}$ for the regularization of a stationary data series, which also attenuates the random noise present in the data series down to a certain confidence level. For a nonstationary data series, we may also treat the final residual data series as a new input data series, perform the ALLSSA method again (by applying the confidence level), and repeat this procedure until the L2 norm of the final residual data series approaches zero (below a threshold). Note that the actual wavenumbers of the data series will be better approximated if the initial $\varphi$ at least contains the column vector of all ones (Foster, 1996; Ghaderpour and Pagiatakis, 2017).

The computational complexity of the ALLSSA method depends on the estimation of $\hat{c}$ and the matrix of normal equations in each iteration. For a data series of size $n$ with $\eta$ wavenumbers (e.g., $\eta=(n/2)-1$ if $n$ is even and $\eta=(n-1)/2$ if $n$ is odd), the process has an order of $n\eta m$ floating operations, where $m$ mainly depends on the number of significant spectral peaks. When applying the confidence level, the total number of significant peaks is usually small, making $m$ small, so the computational efficiency can be much better than that of the ALFT and ASFT methods for stationary data series.

In the IMAP method, a wavenumber $k_1$ corresponding to the largest peak in the Lomb spectrum (Lomb, 1976) is selected. Then, the contribution of the sinusoidal basis functions of wavenumber $k_1$ to the data series is subtracted from it to obtain the residual data series. In the next step, the residual data series is treated as a new input data series to choose a wavenumber $k_2$ corresponding to the largest peak in the new Lomb spectrum, and this procedure continues until the desired wavenumbers are estimated. This iterative method mitigates the spectral leakages in the Lomb spectrum and results in a faster convergence (Özbek et al., 2009). Although the IMAP method improves the spectral leakages in the Lomb spectrum, it does not explicitly account for the correlations among the sinusoids of different wavenumbers. Vassallo et al. (2010) have also studied estimating many wavenumbers per iteration and have documented several dealiasing schemes.
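
For contrast with the joint refitting of the ALLSSA method, one IMAP-style iteration can be sketched as below (a simplified illustration, not Özbek et al.'s implementation): each candidate wavenumber is fit independently on the residual, the best pair is subtracted, and no previously selected component is re-optimized.

```python
import numpy as np

def imap_step(x, r, K):
    """One subtract-and-iterate step: fit a cosine/sine pair at each
    candidate wavenumber independently, pick the one that removes the most
    residual energy, and subtract its contribution (no joint refit)."""
    best_k, best_red, best_c = None, -np.inf, None
    r2 = r @ r
    for k in K:
        A = np.column_stack([np.cos(2 * np.pi * k * x),
                             np.sin(2 * np.pi * k * x)])
        c, *_ = np.linalg.lstsq(A, r, rcond=None)
        red = r2 - np.sum((r - A @ c) ** 2)
        if red > best_red:
            best_k, best_red, best_c = k, red, c
    A = np.column_stack([np.cos(2 * np.pi * best_k * x),
                         np.sin(2 * np.pi * best_k * x)])
    return best_k, r - A @ best_c

# a single on-grid sinusoid is removed in one step
x = np.arange(64) / 64
r = np.sin(2 * np.pi * 5 * x)
k, r_new = imap_step(x, r, range(1, 11))
```

Repeating `imap_step` on `r_new` reproduces the iteration described above; the earlier coefficients stay frozen, which is precisely where leakage among correlated sinusoids can persist.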

On the other hand, in the ALLSSA method, the sinusoidal basis functions of different wavenumbers, corresponding to the largest statistically significant peaks in the spectrum, are added to the columns of design matrix $\varphi _$ in an iterative manner. Therefore, the wavenumbers will be more accurately estimated in the next step by considering the correlation among the columns. The accuracy of estimated wavenumbers significantly reduces the number of iterations and results in more accurate regularization.

## SYNTHETIC DATA SERIES EXAMPLE

This data series is inherently unequally spaced, and so the Nyquist wavenumber does not explicitly exist. Therefore, there is no restriction on the selection of the maximum wavenumber in this example. However, because our goal is to regularize this data series onto a series with regular spacing $1/128$ (the Nyquist wavenumber of this regular series is 64), we choose the initial set of wavenumbers as $\Omega=\{1,2,3,\ldots,63\}$ for the LSSA and ALLSSA methods, and we estimate the wavenumbers up to four decimal places. Note that the data series has three wavenumber components with wavenumbers $25.6/(2\pi) \approx 4.0743665$, $128/(2\pi) \approx 20.3718327$, and $140/(2\pi) \approx 22.2816920$, rounded to seven decimal places.

For the ALFT and ASFT methods, we also choose $K=\{-64,-63,\ldots,63\}$. We illustrate their spectra with green and blue dots in Figure 1a, respectively. The ALFT and ASFT methods implicitly account for the linear trend using the sinusoids of various wavenumbers, including zero. Figure 1a clearly shows the presence of the spectral leakages in the ALFT and ASFT methods (with the threshold 0.5). However, the actual wavenumbers of the data series are estimated more accurately in the ASFT method than in the ALFT method (see the numbers shown in blue).

For the LSSA and ALLSSA methods, we select the initial $\varphi$ as $\varphi=[\mathbf{1},\mathbf{x}]$ to explicitly account for the linear trend present in the data series, where $\mathbf{1}$ and $\mathbf{x}$ are the column vector of all ones and the position vector, respectively. Because there is no covariance matrix associated with the data series, we do not consider the square weight matrix $P$ in the calculation. We illustrate the LSS (black) and the ALLSS (red) in terms of wavenumber amplitude, shown as “Amplitude (Abs)” on the $y$-axis, in Figure 1b. The LSS shows one strong peak at wavenumber 4, which is the closest integer to the actual value 4.0743665; however, the other two signal peaks are buried in the leakage spectrum. On the other hand, it can be seen from the ALLSS that only the actual signal peaks are estimated, and very accurately, at a 99% confidence level. Note that we estimated the signal peaks up to only four decimal places, and we choose this accuracy for our synthetic and field data examples in this contribution. The result of the ALLSSA regularization for the data series without the linear trend is also more accurate than that of the ALFT and ASFT regularizations (not shown here).

Table 2 shows the result of the ALLSSA method. In the first iteration, the wavenumber 4.0384 is estimated, which differs by approximately 0.036 from its actual value 4.0743665. This is a shortcoming of the LSSA method (out-of-context estimation) caused by the presence of other constituents in the data series. In the second and third iterations, the other two wavenumbers are estimated; by removing their corresponding components from the data series simultaneously, the first wavenumber is better approximated, and so forth (see the highlighted numbers in Table 2).

We illustrate the ideal data series and the regularization results using the three methods in Figure 2a. We also show the differences between each ideal data point and the estimated one for the three methods in Figure 2b. The L2 norm of the ideal regularized data series is 57.382. The L2 norms of the differences between the ideal data series and the regularized data series for the ALFT, ASFT, and ALLSSA methods are 9.864, 1.26, and 0.004, respectively. For the ALLSSA method, we only used the estimated slope and intercept of the linear trend as well as the estimated amplitudes and phases of the three wavenumbers for the regularization. Note that when a smaller threshold is selected for the ASFT method, the computational time significantly increases, its spectrum will have many more wavenumbers with very low amplitudes (absolute values of Fourier coefficients), and it still cannot reach the accuracy of the ALLSSA method.

## SYNTHETIC DATA EXAMPLE

We now apply the ALFT, ASFT, and ALLSSA methods to the edited data shown in Figure 3b, and we regularize the temporal frequency slices. For a fair comparison, we apply the same threshold (0.05) for the ALFT, ASFT, and ALLSSA methods. We also amplify the difference between the original synthetic data and the regularized data (the error) by a factor of 30, so the $f$-$k$ spectrums of the errors correspond to the amplified errors for all the methods. The same partitioning for wavenumbers is used for the ASFT and ALLSSA methods to estimate the wavenumbers up to four decimal places. All the methods are programmed in the MATLAB language, and we run them on a computer with two cores (processors). We also denote the sum of the L2 norms of the differences between the original and interpolated traces by $\epsilon$.

First, we choose $K=\{-50,-49,\ldots,48,49\}$ for the ALFT method and show its regularization result and error in Figure 4a and 4c, respectively. The computational time is 7 s, and $\epsilon=37.29$. It can be seen that the missing traces are not very well reconstructed, especially for the extrapolation (the traces toward the edges). This is mainly due to the lack of accuracy of the wavenumbers. We next double the number of wavenumbers, i.e., $K=\{-50,-49.5,-49,\ldots,48.5,49,49.5\}$, and we show the ALFT regularization result and error in Figure 4b and 4d, respectively. The computational time increases to 14 s, but $\epsilon=6.76$. We observe that the traces are reconstructed more accurately than with the previous choice of $K$. We also illustrate the $f$-$k$ spectrums corresponding to Figure 4 in Figure 5.

Next, we choose the preselected set of wavenumbers in the ASFT method as $K=\{-50,-49,\ldots,48,49\}$, and we illustrate its regularization result and error in Figure 6a and 6c, respectively. The computational time is 13 s, and $\epsilon=2.6$, which is more accurate and computationally faster than the ALFT method with the denser set of wavenumbers.

For the ALLSSA method, we choose $\Omega=\{1,2,\ldots,49\}$. To ensure that all the real wavenumbers are considered up to the Nyquist wavenumber 50, we allow the partitioning for wavenumber 49 to cover the range $[49, 50)$, and we choose $\varphi=[\mathbf{1},\mathbf{x}]$ to explicitly account for any linear trend. If we choose $\varphi=[\mathbf{1}]$, we may also allow the partitioning for wavenumber 1 to cover the range $(0,1]$, so the sinusoids of lower wavenumbers can approximate the trends. We show the ALLSSA regularization result and error in Figure 6b and 6d, respectively. The computational time is 3 s, and $\epsilon=0.3$. It can be seen that the ALLSSA method is able to fill in the missing traces much more accurately and faster than the ALFT and ASFT methods. We show the $f$-$k$ spectrums corresponding to Figure 6 in Figure 7.

To show how the ALLSSA method statistically attenuates the random noise, we add distinct Gaussian random noise to each trace in Figure 3b. As a result, approximately 20% of the synthetic data is random noise. We illustrate only trace 1 for the extrapolation, trace 15 for the interpolation, and trace 92 for noise attenuation purposes in Figure 8. The set $K=\{-50,-49,\ldots,48,49\}$ is selected for the ALFT and ASFT methods. Comparing the ALLSSA method with the ALFT and ASFT methods, one can observe that the ALLSSA method performs much better in reconstructing the signals (the events) and attenuates the random noise down to a certain confidence level. The results of the ALFT and ASFT methods for traces 1 and 15 look noisier because, aside from the accuracy of the actual wavenumbers, more noise is interpolated into the traces. In the ALLSSA method, when the 99% confidence level is used, the noise is attenuated slightly more than when the 95% level is used; however, we recommend using the 95% confidence level for real seismic data because of their nonstationary behavior. Note that there is no rigorous statistical criterion for choosing a confidence level in the ALFT and ASFT methods, so a threshold was selected based on the amount of random noise for these methods.

To see the sensitivity of the ALLSSA method to spatially aliased events, we remove the traces of odd numbers except a few of them (e.g., trace numbers 5, 21, 83, 91) and show the result in Figure 9a. We select a dense set of wavenumbers $K=\{-50,-49.9,\ldots,49.8,49.9\}$ (1000 wavenumbers) for the ALFT method. In this case, the ALFT and ASFT methods have approximately the same computational time (57 s), with $\epsilon=109.3$ and $\epsilon=93.4$, respectively. We only illustrate the ALFT result because the ASFT result is very similar (see Figure 9). Note that the error is amplified by a factor of 10 in Figure 9e. To show the aliasing, we illustrate the $f$-$k$ spectrum of this synthetic data in Figure 9b. Because the dominant trace spacing is 20 m, all of the events are spatially aliased. We also show the $f$-$k$ spectrums of the ALFT regularization result and its amplified error in Figure 9d and 9f, respectively. One can clearly observe the effect of the aliased events in the spectrums. The computational time for the ALLSSA method is 2 s, and $\epsilon=0.9$, which is significantly smaller than for the ALFT and ASFT methods. The final regularization result is almost the same as Figure 6b (not shown here). The reason for this accuracy is that the ALLSSA method very accurately estimates the positive wavenumbers, and the few extra traces (four traces) prevent the matrix of normal equations from being singular. In the event that this matrix is singular, one may compute its pseudoinverse (Rao and Mitra, 1972). Note that if we remove all the traces of odd numbers, the Nyquist wavenumber will be 25 c/km, and aliasing will occur for the wavenumbers beyond 25 c/km. In this situation, an antialiasing technique similar to the one in Schonewille et al. (2009) may be used.

## FIELD DATA APPLICATIONS

The field data example is a marine 2D shot gather from the deep water of the Gulf of Mexico, the same one used by Crawley (2000), Fomel (2002), and Chen et al. (2015). The time sampling rate is 250 samples per second. To compare our interpolation approach with the nonlinear shaping regularization method, we remove the same traces (30%) as in Chen et al. (2015) (see Figure 10). Note that the normalized wavenumber axis in the $f$-$k$ spectrums is obtained by considering $\Delta x=1$. We illustrate the nonlinear shaping regularization result after 15 iterations and its error in Figure 11a and 11c, respectively.

Because stationarity of the data series is the main assumption of the ALFT, ASFT, and ALLSSA methods (i.e., the seismic events must be approximately linear), we use a windowing strategy. Therefore, we divide the seismic data into six spatial windows and apply the same threshold for the ALFT and ALLSSA methods. Within each window, we choose $K=\{-15,-14.9,\ldots,14.8,14.9\}$ (300 wavenumbers) for the ALFT method, and $\Omega=\{1,\ldots,14\}$ and initial $\varphi=[1]$ for the ALLSSA method. We illustrate the regularization result of the ALFT method and its error in Figure 11b and 11d. The $f$-$k$ spectrums corresponding to the nonlinear shaping regularization and ALFT methods are shown in Figure 12. The regularization result using the ALLSSA method and its error are shown in Figure 13a and 13c, respectively. Figure 14a and 14c shows the $f$-$k$ spectrums corresponding to Figure 13a and 13c, respectively.

It can be seen that the ALFT and ALLSSA methods perform well in interpolating the missing traces compared with the nonlinear shaping regularization method; however, they still produce some small artifacts (compare the magnified sections, for example). The data are very poorly sampled along the spatial direction within some of the windows (e.g., half of the traces are missing within the magnified section), so these artifacts are mainly caused by spatial aliasing. An antialiasing scheme within the windows may further improve these results (Schonewille et al., 2009; Xu et al., 2010). We next show an alternative approach that yields a better interpolation result.

For a better comparison between the ALLSSA and weighted ALLSSA methods, we select $\lambda$ in equation 13 in such a way that the Gaussian bell curve is almost truncated to zero at the margins of the translating windows, whose lengths are the same as the window length used above. We show the result of the weighted ALLSSA method and its error in Figure 13b and 13d, respectively. The $f$-$k$ spectrums corresponding to Figure 13b and 13d are illustrated in Figure 14b and 14d, respectively. One can observe that the events are smoothly constructed, and the random noise is attenuated down to the 95% confidence level (e.g., the blue arrows in Figure 13). The almost linear event in the southeast part of the seismic data is clearly constructed (compare the events within the magnified sections shown in Figure 13a and 13b and the corresponding $f$-$k$ spectrums shown in Figure 14a and 14b, white arrows). All the methods show small artifacts above the water-bottom reflections that are not significant. We used a mask to set the values of the northeast part of the images shown in Figures 11 and 13 to zero after the interpolation.
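One plausible way to pick $\lambda$ so that the Gaussian bell is almost truncated to zero at the window margins can be sketched as follows. We do not reproduce equation 13 here; the sketch assumes a weight of the form $w(x)=\exp(-\lambda(x-x_0)^2)$, and the 1% margin value is our illustrative choice, not the authors' exact setting.

```python
import numpy as np

def gaussian_weight(x, x0, L, margin_value=0.01):
    """Gaussian bell centered on x0 that has decayed to `margin_value`
    (a fraction of its peak) at the margins of a translating window of
    length L. Assumed form w(x) = exp(-lam * (x - x0)^2)."""
    lam = -np.log(margin_value) / (L / 2.0) ** 2  # solve w(x0 + L/2) = margin_value
    return np.exp(-lam * (x - x0) ** 2)

# Weight over a window of length 100 (trace-position units) centered at 0:
x = np.linspace(-100.0, 100.0, 201)
w = gaussian_weight(x, x0=0.0, L=100.0)  # w = 1 at the center, 0.01 at x = +/-50
```

Sliding $x_0$ across the traces gives the translating windows; within each, the weight de-emphasizes samples near the margins so adjacent windows blend smoothly.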

We emphasize that the traces within each of the six spatial windows are estimated simultaneously in the ALFT and ALLSSA methods, and so the existing traces are almost fully reconstructed, which propagates their errors to the other traces (Figures 11b and 13a). However, in the weighted ALLSSA method, each trace is approximated individually by applying a 95% confidence level, and so the events are smoothly constructed and the random noise is attenuated down to a 95% confidence level across the entire seismic data set.

Finally, for each temporal frequency slice of the original seismic data (regularly spaced), one may repeat the ALLSSA method until the L2 norm of the residual series becomes less than approximately 10% of the total L2 norm of the data series and use the inverse FFT to reconstruct the traces. We show the result of this process, its difference from the original seismic data (amplified by a factor of 20), and their corresponding $f$-$k$ spectrums in Figure 15. One can observe that the geologic structures are clearly reconstructed, and the random noise is attenuated (a 95% confidence level is used in each iteration).
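The stopping rule described above can be sketched as follows. The peak-removal step here is a plain DFT pick (valid for the regularly spaced original data of this example) standing in for the full ALLSSA update with simultaneous re-estimation; the 10% tolerance follows the text, and the iteration cap is our assumption.

```python
import numpy as np

def regularize_slice(f, max_iter=50, tol=0.10):
    """Extract dominant spectral components from a (complex) frequency
    slice until the residual L2 norm drops below tol * ||f||.
    Sketch only: a DFT peak pick replaces the full ALLSSA step."""
    n = len(f)
    residual = f.astype(complex).copy()
    model = np.zeros(n, dtype=complex)
    total = np.linalg.norm(f)
    for _ in range(max_iter):
        if np.linalg.norm(residual) < tol * total:
            break
        spec = np.fft.fft(residual) / n            # DFT coefficients
        k = np.argmax(np.abs(spec))                # dominant wavenumber bin
        comp = spec[k] * np.exp(2j * np.pi * k * np.arange(n) / n)
        model += comp                              # accumulate the component
        residual -= comp                           # and remove it from the data
    return model, residual

# Two-component synthetic frequency slice on a regular grid of 64 traces:
t = np.arange(64)
f = np.exp(2j * np.pi * 3 * t / 64) + 0.3 * np.exp(2j * np.pi * 10 * t / 64)
model, res = regularize_slice(f)   # converges in two iterations here
```

After the loop, the modeled slices for all temporal frequencies would be passed to the inverse FFT to reconstruct the traces, as described above.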

## CONCLUSIONS

The ALLSSA method is a fast and very accurate new method of regularizing stationary data series that can also attenuate the random noise down to a certain confidence level. This method takes into account the correlations among the sinusoidal basis functions as well as the constituents of known forms by considering the covariance matrix associated with the data series. We demonstrated how an appropriate selection of the weight matrix (the inverse of the covariance matrix) adapts the method for regularizing nonstationary data series.

Similar to the ALFT and ASFT methods, the ALLSSA method can be used for seismic data regularization. Because the ALLSSA method estimates the wavenumbers of the data series more accurately than the ALFT and ASFT methods do, up to a certain confidence level, it is computationally faster and more accurate than those methods for regularization. The numerical data examples in this contribution show promising results. Applications of the ALLSSA method are not restricted to seismology; it may also prove useful in other fields such as geodesy, astronomy, medical sciences, and finance.

The key point of the ALLSSA method is that it takes the correlation among the sinusoids into account; this correlation is accounted for when the matrix of normal equations is inverted. In practice, this matrix is usually regular because the traces are inherently unequally spaced. However, one may calculate the pseudoinverse of this matrix when it is singular. For seismic data with very complex geologic structures that exhibit severe aliasing, the ALLSSA method may not be reliable for regularization because, similar to the ALFT, ASFT, and IMAP methods, it is established mainly for wide-sense stationary seismic data. In future work, we will extend the method to higher dimensional cases and show its capability in regularization. We expect the generalization of the ALLSSA method to higher dimensions to perform even better in regularization.

## ACKNOWLEDGMENTS

The authors would like to thank the editors, N. Gülünay and I. Moore, and the anonymous reviewers for their very useful and constructive comments that significantly improved the presentation of the paper. We would also like to thank Y. Chen for providing the seismic data and S. D. Pagiatakis and D. Trad for useful discussion. This research has been financially supported by the Pacific Institute for the Mathematical Sciences, the Natural Sciences and Engineering Research Council of Canada, and the postdoctoral program at the Department of Mathematics and Statistics, University of Calgary.

### LSS WHEN SPATIAL SAMPLING IS UNIFORM

Note that the first column of design matrix $\varphi$ above corresponds to the zero wavenumber. In fact, we could have two column vectors associated with the zero wavenumber: the first column for the cosine, whose elements are $\cos(0)=1$, and the second column for the sine, whose elements are $\sin(0)=0$. However, we avoid writing the zero column vector because it would make matrix $N$ singular. We emphasize that matrix $N$ is a real square matrix, and it is regular as long as the column vectors of $\varphi$ are linearly independent. We may also include the column vector of $\cos(2\pi(n/2)x_\ell)=\cos(2\pi(n/2)(\ell/n))=\cos(\ell\pi)=\pm 1$, $\ell=1,\ldots,n$, in matrix $\varphi$ above when $n$ is even. This way, $f$ will be reconstructed accurately if $f$ contains constituents of the Nyquist wavenumber $n/2$. Note that we do not include the corresponding sine column vector because $\sin(2\pi(n/2)x_\ell)=\sin(2\pi(n/2)(\ell/n))=\sin(\ell\pi)=0$ for $\ell=1,\ldots,n$. To avoid the singularity of square matrix $N$, the negative wavenumbers are excluded from matrix $\varphi$. Craymer (1998) discusses that when $f$ is real, we may consider only the positive wavenumbers. However, little work has been done on the LSSA method for the case in which $f$ is complex. Our analyses show that if $f$ is complex, we may still consider only the positive wavenumbers for regularization (see below for further details).
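The construction described above can be sketched as follows, assuming unit-interval sampling positions $x_\ell=\ell/n$ with $\ell=1,\ldots,n$ and even $n$ (the function name is ours). For uniform sampling the columns are mutually orthogonal, so $N=\varphi^{T}\varphi$ comes out diagonal and regular.

```python
import numpy as np

def design_matrix(n):
    """Design matrix for a regularly sampled series of even length n at
    x_l = l/n: one DC column, cosine/sine pairs for wavenumbers
    1..n/2-1, and a single cosine column at the Nyquist wavenumber n/2
    (the corresponding sine column would be identically zero)."""
    l = np.arange(1, n + 1)
    x = l / n
    cols = [np.ones(n)]                        # zero wavenumber: cos(0) = 1
    for k in range(1, n // 2):
        cols.append(np.cos(2 * np.pi * k * x))
        cols.append(np.sin(2 * np.pi * k * x))
    cols.append(np.cos(2 * np.pi * (n // 2) * x))  # Nyquist cosine = +/-1
    return np.column_stack(cols)

Phi = design_matrix(8)     # square (8 x 8) matrix
N = Phi.T @ Phi            # diagonal: the columns are orthogonal
```

With one DC column, $n/2-1$ cosine/sine pairs, and one Nyquist column, $\varphi$ has exactly $n$ linearly independent columns, which is why $N$ is regular in the uniform case.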

The elements of column vector $\hat{c}$ in equation A-7 from top to bottom are the estimated coefficients for the columns of $\varphi$ in equation A-3 from left to right, respectively. Note that Vaníček (1969), Craymer (1998), and Pagiatakis (1999) discuss the LSSA method mainly for a real time series (i.e., $f$ is a real vector), with applications in geodesy and geodynamics. However, because design matrix $\varphi$ is a real matrix, $f$ can also be a complex vector (e.g., a temporal frequency slice). In this case, $\hat{c}$ in equation A-7 will be a complex vector whose real and imaginary parts contain the estimated amplitudes corresponding to the real and imaginary parts of $f$, respectively. One may also apply equation A-7 to the real and imaginary parts of $f$ individually (treating each part as a real data series) and then form vector $\hat{c}$ by combining the estimated amplitudes of each part, producing regularization results similar to those of the previous case.
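The equivalence of the two routes described above can be checked numerically. The following sketch (illustrative wavenumbers and amplitudes only) solves the normal equations once with the complex data vector and once per part, and the two coefficient vectors agree because the design matrix is real.

```python
import numpy as np

# Irregular positions and a real design matrix with two wavenumbers (2 and 5):
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 30))
Phi = np.column_stack([np.cos(2 * np.pi * 2 * x), np.sin(2 * np.pi * 2 * x),
                       np.cos(2 * np.pi * 5 * x), np.sin(2 * np.pi * 5 * x)])

# Complex data vector (e.g., a temporal frequency slice) with known amplitudes:
f = (Phi @ np.array([1.0, -0.5, 0.3, 0.2])
     + 1j * (Phi @ np.array([0.4, 0.8, -0.1, 0.6])))

# Route 1: solve the normal equations directly with the complex vector f.
c_direct = np.linalg.solve(Phi.T @ Phi, Phi.T @ f)

# Route 2: solve for the real and imaginary parts separately, then combine.
c_split = (np.linalg.solve(Phi.T @ Phi, Phi.T @ f.real)
           + 1j * np.linalg.solve(Phi.T @ Phi, Phi.T @ f.imag))
```

Since the solve is linear and $\varphi$ is real, `c_direct` and `c_split` coincide, and their real/imaginary parts recover the amplitudes of the real/imaginary parts of $f$.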

For an irregularly sampled data series, equations A-4–A-6 are no longer valid; i.e., the sinusoidal basis functions are no longer orthogonal. However, the correlation between them is taken into account when matrix $N$ is inverted. The selection of wavenumbers is a critical task for the simultaneous estimation of the LSS. The correlation among the sinusoids must be taken into account carefully; otherwise, ill-conditioning in $N$ can produce incorrect results (Craymer, 1998). In the ALLSSA method, the wavenumbers are accurately estimated in an iterative manner, producing an accurate one-sided spectrum for stationary data series. We emphasize that in the ALLSSA method, we estimate only the coefficients of the significant components of a data series of size $n$, and so the number of columns in equation A-3 may be much smaller than $n$. Also, note that we do not use FFT-based fast methods for the ALFT, ASFT, and ALLSSA methods.
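The effect of this lost orthogonality can be made concrete with a small numerical sketch (illustrative positions and wavenumbers only): for irregular sampling, $N$ has nonzero off-diagonal entries, and it is the full inversion of $N$ that accounts for the correlation, whereas a naive projection that keeps only the diagonal of $N$ mis-estimates the coefficients.

```python
import numpy as np

# Irregular sampling positions and a design matrix with two wavenumber pairs:
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 1.0, 50))
Phi = np.column_stack([np.cos(2 * np.pi * 3 * x), np.sin(2 * np.pi * 3 * x),
                       np.cos(2 * np.pi * 4 * x), np.sin(2 * np.pi * 4 * x)])

# The data contain only the cosine at wavenumber 3:
true_c = np.array([1.0, 0.0, 0.0, 0.0])
f = Phi @ true_c

N = Phi.T @ Phi
off_diag = N - np.diag(np.diag(N))       # nonzero for irregular sampling

# Full solve: the correlation among the sinusoids is accounted for.
c_full = np.linalg.solve(N, Phi.T @ f)

# Naive projection: ignores the off-diagonal terms of N, so energy from
# the wavenumber-3 cosine leaks into the other coefficients.
c_naive = (Phi.T @ f) / np.diag(N)
```

The full solve recovers `true_c`, while the naive estimate is contaminated by the leaked correlations, which is the spectral-leakage mechanism the ALLSSA iteration is designed to suppress.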