ABSTRACT

When aeromagnetic data are interpolated to make a gridded image, thin, linear features can result in “boudinage” or “string of beads” artifacts if the anomalies are at acute angles to the traverse lines. These artifacts are due to the undersampling of these types of features across the flight lines, making it difficult for most interpolation methods to effectively maintain the linear nature of the features without user guidance. The magnetic responses of dikes and dike swarms are typical examples of the type of geologic feature that can cause these artifacts; thus, these features are often difficult to interpret. Many interpretation methods use various enhancements of the gridded data, such as horizontal or vertical derivatives, and these artifacts are often exacerbated by the processing. Therefore, interpolation methods that are free of these artifacts are necessary for advanced interpretation and analysis of thin, linear features. We have developed a new interpolation method that iteratively enhances linear trends across flight lines, ensuring that linear features are evident on the interpolated grid. Using a Taylor derivative expansion and structure tensors allows the method to continually analyze and interpolate data along anisotropic trends, while honoring the original flight line data. We applied this method to synthetic and field data; in both cases, the results show improvement over standard bidirectional gridding, minimum curvature, and kriging methods for interpolating thin, linear features at acute angles to the flight lines. These improved results are also apparent in the vertical derivative enhancement of the field data. The source code for this method has been made publicly available.

INTRODUCTION

A defining characteristic of many geophysical surveys is that the acquired data are spatially dense along traverse lines and entirely absent between these lines. This poses a unique interpolation challenge: to exploit the high-density line data and honor the measured values while avoiding artifacts in the interpolated map, image, or grid. At its core, it is an aliasing issue, in which features that occur across lines must be handled appropriately or artifacts may result (Reid, 1980). One such artifact arises in the aeromagnetic response of thin, linear features, like those often produced by dikes and dike swarms (Pilkington and Roest, 1998). If these linear features trend at nonperpendicular angles with respect to the flight lines, the response often manifests as a string-of-beads or boudinage artifact on the interpolated map (Keating, 1997; Smith and O’Connell, 2005; Guo et al., 2012; Geng et al., 2014). This is particularly prone to occur when data are interpolated using the methods most common in mining geophysics, such as bidirectional splines (Bhattacharyya, 1969; Akima, 1970), minimum curvature (Briggs, 1974; Swain, 1976; Smith and Wessel, 1990), and kriging (Hansen, 1993). Bidirectional splines interpolate along flight lines and then across them, inherently developing a grid with some directional bias (Keating, 1997). This leads to effective interpolation of anomalies that are perpendicular to the flight lines; however, it can produce beading artifacts when a linear anomaly is at an acute angle to the line data. These artifacts are also inherent in minimum curvature interpolation because a trend at an acute angle to the flight line data is undersampled (Geng et al., 2014), leaving little data to constrain the interpolation. This lack of constraint typically results in a circular anomaly. Kriging results are similar to those of bidirectional methods in that regional trends can be handled effectively (Hansen, 1993), but local trends are not accounted for (Keating, 1997; Guo et al., 2012). Other methods (Cordell, 1992; Mendonca and Silva, 1994, 1995; Billings et al., 2002) have similar issues due to undersampled anomalies (Guo et al., 2012). However, Billings et al. (2002) describe how the thin-plate spline continuous global surface (CGS) method may produce better results than minimum curvature when an exact fit to the data is not ideal, suggesting that, with some adjustment, undersampled trends could perhaps be accounted for.

Therefore, what is required to address this type of artifact is a filter or interpolation method that can account for multidirectional, overlapping, undersampled trends oriented at a variety of angles with respect to the flight lines. There have been several fairly effective treatments of the issue, both by applying postinterpolation filters and by developing entirely new interpolation methods. Keating (1997) inserts trend lines as new data between flight lines when local maxima and minima are detected, and these trend lines include nearby real-data maxima and minima. This circumvents the beading issue by essentially trending features as entities separate from the rest of the interpolation process. Yunxuan (1993) and Sykes and Das (2000) use the Radon transform (also known as the slant stack in seismic applications) for a variety of trend-based processes, including the enhancement of linear trends. Guo et al. (2012) develop an inverse interpolation methodology, which shows reductions in the beading artifacts of trends when compared with minimum curvature results, particularly in the vertical derivative enhancements. Smith and O’Connell (2005) apply an anisotropic diffusion enhancement that analyzes the structure of the data using structure tensors and iteratively smooths the data along linear trends. This approach was later improved by Geng et al. (2014) by constraining the process to only those highly anisotropic locations that contain thin, linear trends.

The method proposed here is most similar to these last two methods because part of the process uses structure tensors to analyze the interpolated data and to iteratively enforce trends. The eigenvalues and eigenvectors of structure tensors have been used effectively in seismic applications (Fehmers and Höcker, 2003; Hale, 2010; Wu, 2017) to describe the strength and direction of anisotropy, and therefore they can provide useful information on trends once an interpolation process has been applied. However, unlike other methods, we base our interpolation on a discrete version of the Taylor series expansion of two variables. This was chosen because, similar to the spline methods, it provides a simple yet flexible mathematical basis for the formulation of the problem and will inherently enhance trends by extrapolating features across flight lines. A further advantage of this approach is that the data do not necessarily need to be acquired along straight-line traverses (as with bidirectional methods). In addition, because the formulation uses numerical derivatives, any derivative-based filter applied to the data at a later time should be mostly continuous.

We begin by describing the Taylor series interpolation, as well as the method of using structure tensors for trend analysis. We then describe how we implement these two features in combination with a process that we refer to as “normalizing” the data to develop an iterative interpolation methodology. Finally, we show the capability of our method by applying it to synthetic and field data sets, and we compare the results with the maps produced by the readily available and commonly used techniques of bidirectional splines, minimum curvature, and kriging. For ease of use, the C# source code has been made available (see Data and Materials Availability).

METHOD

The core of the interpolation process is based on a Taylor series expansion of two variables (adapted from Abramowitz and Stegun, 1970, p. 880, equation 25.2.24):  
f(i+m,\, j+n) \approx f(i,j) + m \frac{\partial f(i,j)}{\partial x} + n \frac{\partial f(i,j)}{\partial y} + \frac{1}{2}\left[ m^{2} \frac{\partial^{2} f(i,j)}{\partial x^{2}} + 2mn \frac{\partial^{2} f(i,j)}{\partial x \partial y} + n^{2} \frac{\partial^{2} f(i,j)}{\partial y^{2}} \right],
(1)
where f(i,j) is a data cell in a grid whose horizontal coordinates are x = i and y = j, and m and n are the offsets in the x- and y-directions, respectively. The directional derivatives are defined as (Abramowitz and Stegun, 1970, pp. 883–884)
\frac{\partial f(i,j)}{\partial x} = \frac{1}{2h}\left[ f(i+1,j) - f(i-1,j) \right],
(2)
 
\frac{\partial f(i,j)}{\partial y} = \frac{1}{2h}\left[ f(i,j+1) - f(i,j-1) \right],
(3)
 
\frac{\partial^{2} f(i,j)}{\partial x^{2}} = \frac{1}{h^{2}}\left[ f(i+1,j) - 2f(i,j) + f(i-1,j) \right],
(4)
 
\frac{\partial^{2} f(i,j)}{\partial y^{2}} = \frac{1}{h^{2}}\left[ f(i,j+1) - 2f(i,j) + f(i,j-1) \right],
(5)
 
\frac{\partial^{2} f(i,j)}{\partial x \partial y} = \frac{1}{4h^{2}}\left[ f(i+1,j+1) - f(i-1,j+1) - f(i+1,j-1) + f(i-1,j-1) \right],
(6)
where h is defined as the absolute distance (in meters) between each point on the equispaced grid. In this implementation, the spacing in the x- and y-directions is assumed to be the same. By rearranging equation 1 for the eight combinations of (m, n) that are adjacent to location (i, j), we can solve for f(i, j). Because there are eight surrounding cells, there are eight separate estimates of f(i, j). A trimmed mean filter (Hall, 2007) with α = 25% is applied to these eight estimates (the two smallest and the two largest values are removed from the mean calculation), and the resulting value is recorded for location (i, j) in a new grid, f_NS(x, y). This new cell value is a combination of the cell’s previous value and the directional derivatives of the surrounding cells (and hence the surrounding values). By applying this to every cell, a new grid with enhanced directional information is developed. However, to mitigate any potential discontinuities, cells that contain real data must also go through this process, which changes them from their original (measured) values. To revert them and properly honor the measured data, the cells surrounding them must also change, because a direct replacement would itself cause discontinuities. Our method approaches this problem by applying a scaling factor to all cells, such that the real data cells are returned to their original values and the surrounding interpolated data are “pulled” along with them. We refer to this process as normalizing the data (Figure 1).
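Before turning to the details of the normalization, the Taylor update at a single cell can be made concrete. The following Python sketch computes the eight estimates of f(i, j) and applies the 25% trimmed mean; it assumes that the derivatives of equations 2–6 are evaluated at each neighboring cell, which then extrapolates back toward (i, j). This is one plausible reading of the rearrangement of equation 1, offered for illustration only; it is not the published C# implementation.

```python
import numpy as np

def taylor_estimate(f, i, j, h=1.0):
    """Estimate f[i, j] from its eight neighbors using the second-order
    Taylor expansion (equation 1) and a 25% trimmed mean.

    f is a 2-D array indexed as f[i, j], with i along x and j along y;
    (i, j) must be at least two cells from the grid edge."""
    estimates = []
    for m in (-1, 0, 1):
        for n in (-1, 0, 1):
            if m == 0 and n == 0:
                continue
            ni, nj = i + m, j + n  # neighbor supplying this estimate
            # central differences at the neighbor (equations 2-6)
            fx = (f[ni + 1, nj] - f[ni - 1, nj]) / (2 * h)
            fy = (f[ni, nj + 1] - f[ni, nj - 1]) / (2 * h)
            fxx = (f[ni + 1, nj] - 2 * f[ni, nj] + f[ni - 1, nj]) / h**2
            fyy = (f[ni, nj + 1] - 2 * f[ni, nj] + f[ni, nj - 1]) / h**2
            fxy = (f[ni + 1, nj + 1] - f[ni - 1, nj + 1]
                   - f[ni + 1, nj - 1] + f[ni - 1, nj - 1]) / (4 * h**2)
            dm, dn = -m, -n  # offset from the neighbor back to (i, j)
            estimates.append(f[ni, nj] + dm * fx + dn * fy
                             + 0.5 * (dm**2 * fxx + 2 * dm * dn * fxy + dn**2 * fyy))
    estimates.sort()
    return float(np.mean(estimates[2:-2]))  # drop the two smallest and two largest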
To accomplish this normalization, we develop a grid of multipliers, σ(x, y), composed of real-data multipliers σ_r(x, y) and interpolated-data multipliers σ_i(x, y). The real-data multipliers are defined as
\sigma_{r}(i,j) = \left| \frac{f_{r}(i,j)}{f_{NS}(i,j)} \right|,
(7)
which is the absolute value of the ratio of the original real data value, f_r(i, j), to the cell’s current value, f_NS(i, j). The value of σ_i(x, y) at each interpolated location comes from nearby σ_r(x, y) values, which we find by searching along the path of greatest anisotropy. Although a simpler approach, such as a mean of the nearest-neighbor σ_r(x, y) values, would be computationally quicker and less prone to noise, the goal of this method is to ensure that anisotropic linear trends are enforced; following lines of anisotropy helps achieve this objective. To find the angle of greatest anisotropy, we calculate structure tensors S(i, j) at each interpolated data cell on the grid f_NS(x, y). A 2 × 2 structure tensor is defined as (Smith and O’Connell, 2005)
S(i,j) = \nabla f(i,j)\, \nabla f(i,j)^{T} = \begin{bmatrix} \frac{\partial f(i,j)}{\partial x} \cdot \frac{\partial f(i,j)}{\partial x} & \frac{\partial f(i,j)}{\partial x} \cdot \frac{\partial f(i,j)}{\partial y} \\ \frac{\partial f(i,j)}{\partial y} \cdot \frac{\partial f(i,j)}{\partial x} & \frac{\partial f(i,j)}{\partial y} \cdot \frac{\partial f(i,j)}{\partial y} \end{bmatrix},
(8)
where the dot symbol indicates scalar multiplication. We then calculate the eigenvalues and eigenvectors of the tensor at each grid point because they describe the strength and direction of any trend within that cell. Searching adjacent points along the trend in both the positive and negative directions, we then calculate the interpolated cell’s multiplier as an inverse distance-weighted average of any σ_r(x, y) values found in those directions:
\sigma_{i}(i,j) = \frac{ d_{2}\left( \frac{\sigma_{r11} + \sigma_{r12}}{2} \right) + d_{1}\left( \frac{\sigma_{r21} + \sigma_{r22}}{2} \right) }{ d_{1} + d_{2} },
(9)
where σ_r11 is the real data cell found in the positive direction, σ_r12 is the real data cell “closest” to σ_r11 in the direction most perpendicular to the search path, σ_r21 is the real data cell found in the negative direction, σ_r22 is the real data cell closest to σ_r21 in a similar way, and d_1 and d_2 are the straight-path distances from the location of σ_i(i, j) to σ_r11 and σ_r21, respectively. Figure 2 shows an example of this calculation. Note that although averaging in σ_r12 and σ_r22 may lessen the effect of the normalization where a strong trend is found, they are included to smooth the normalization process and to ensure that no erratic multipliers occur. If no data are found in either direction (i.e., the edge of the grid is reached, or a maximum interpolation distance φ is reached before a real data cell is found), the eigenvector search path is rotated by the angle θ, a user-defined number of degrees, and the process is repeated until successful. A new grid f_FS(x, y) is then developed by applying all multipliers to their associated cells:
f_{FS}(x,y) = f_{NS}(x,y) \cdot \left( \tau \cdot \sigma(x,y) \right),
(10)
where τ is a user-defined “trend strength.” By repeatedly applying equation 10 and recalculating the Taylor interpolation and structure tensors at each iteration, a final interpolated grid can be developed following the flowchart in Figure 3. This new grid is composed of real data cells that honor the flight line data and interpolated data cells whose values are enhanced to enforce linear trends across the flight lines.
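As an illustration of the trend analysis and the multiplier application, the following Python sketch extracts the trend direction at a cell from the structure tensor of equation 8 and applies equation 10. Averaging the gradient products over a small window is an assumption made here so that the eigenvalues carry meaningful anisotropy information (a single-cell tensor is rank one); the published implementation may proceed differently.

```python
import numpy as np

def trend_direction(f, i, j, h=1.0, w=1):
    """Dominant trend direction at cell (i, j) from the structure tensor
    (equation 8). The gradient products are averaged over a (2w+1)x(2w+1)
    window; the window size is an assumption of this sketch."""
    Sxx = Sxy = Syy = 0.0
    for a in range(i - w, i + w + 1):
        for b in range(j - w, j + w + 1):
            fx = (f[a + 1, b] - f[a - 1, b]) / (2 * h)
            fy = (f[a, b + 1] - f[a, b - 1]) / (2 * h)
            Sxx += fx * fx
            Sxy += fx * fy
            Syy += fy * fy
    S = np.array([[Sxx, Sxy], [Sxy, Syy]]) / (2 * w + 1) ** 2
    eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues in ascending order
    # The eigenvector of the smaller eigenvalue points along the trend (the
    # direction of least change) and defines the search path used when
    # locating the sigma_r values of equation 9; the eigenvalue spread
    # measures the strength of the anisotropy.
    return eigvecs[:, 0], eigvals[1] - eigvals[0]

def apply_multipliers(f_NS, sigma, tau):
    """Equation 10: scale every cell of the current grid by its multiplier,
    modulated by the trend strength tau (a value or grid in [0, 1])."""
    return f_NS * (tau * sigma)
```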

User-defined variables

As seen in the previous section, several parameters in this process must be defined by the user, and their settings can have significant consequences for the final interpolation. Through extensive empirical testing, we have developed some basic guidelines to assist a user of this method. The maximum interpolation distance φ is the maximum distance the method will search along the trend direction before stopping and varying the angle by θ, another user-defined variable. A maximum distance is required because an interpolated data cell’s eigenvector may be parallel to the flight lines, so that no real data cell lies along its search path. We have found that 50%–75% of the average flight line spacing is generally an effective value for φ. However, this is left as a user-defined variable because an astute user may wish to investigate trends that pass through the flight lines at very acute angles. These types of trends may require larger φ values to ensure that real data cells in the flight lines on both sides of an interpolated data cell are reached and used during the normalization process. The interpolation angle θ defines how much the trend direction changes if the maximum interpolation distance φ is reached before a real data cell is found during the normalization process. This angle should be small, generally 5°–10°, to ensure that relevant real data cells are not skipped during the search; however, a smaller value increases the computation time of the method.

The trend strength τ is another user-defined variable. In equation 10, τ can range from 0 to 1, depending on the value entered by the user and the overall anisotropy of the data set. The user enters a value between 0 and 100, which specifies how much of the data will be trended to the full effect. For example, with a trend factor of 75, the 75% most anisotropic data (as measured by a statistical analysis of the structure tensor eigenvalues) are trended to the full extent, whereas the remaining 25% of the data have a sliding reduction factor applied. The goal of this trend factor is to give the user flexibility in what is trended, such that more or less strongly anisotropic features can be trended at varying strengths.
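One plausible reading of this mapping is sketched below: cells whose anisotropy lies within the top τ percent of the data receive the full multiplier effect, and the remainder receive a reduced value. The linear ramp used for the reduction is an assumption of the sketch, as the text specifies only that a sliding reduction factor is applied.

```python
import numpy as np

def per_cell_tau(anisotropy, trend_factor):
    """Map the user's trend factor (0-100) to a per-cell tau in [0, 1].

    Cells whose anisotropy falls within the top `trend_factor` percent of
    the data are trended at full strength (tau = 1); the remaining cells
    receive a reduced tau. The linear ramp used for that reduction is an
    assumption of this sketch."""
    threshold = np.percentile(anisotropy, 100 - trend_factor)
    tau = np.ones_like(anisotropy, dtype=float)
    below = anisotropy < threshold
    if threshold > 0:
        tau[below] = anisotropy[below] / threshold  # assumed linear reduction
    return tau
```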

Two user-defined steps not described in the previous section, but shown in the final flowchart (Figure 3), are the automatic stopping criterion and the subsampling process. The interpolation loop may be set by the user to run a specific number of iterations; however, an automatic stopping method is also available. After the current iteration n is completed, the resulting “corrected grid” f_FS,n(x, y) can be analyzed by calculating a “difference grid”:
f_{D,n}(x,y) = \left| f_{FS,n}(x,y) - f_{FS,n-1}(x,y) \right|.
(11)
This grid f_D,n(x, y) is compared with the previous difference grid f_D,n−1(x, y), and the average change is checked to determine whether the differences are converging. If they are, the pass is recorded, and once this has occurred three times (in total), the interpolation loop ends. These three checks ensure that the method has properly converged. The other step is the option of subsampling the final grid. Following the standard convention (Reid, 1980), the output cell size should be one-fourth or one-fifth of the line spacing. However, because this method essentially “smears” data along trend directions rather than developing a line of best fit across flight lines, we have found through extensive testing that setting the interpolation cell size to half the conventional output cell size (i.e., one-eighth to one-tenth of the flight line spacing) often assists the interpolation and trending process. Once the interpolation is complete, the method can subsample the entire grid up to the larger, more appropriate cell size. For strong linear features, the smaller cell size can help the features trend more completely due to the effect of subsampling. However, a small cell size must be used with caution because it can remove weaker linear features, which now have “farther” to trend when connecting the trends found in the real data cells.
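A minimal sketch of the automatic stopping check is given below. The specific convergence test and tolerance are assumptions, because the text states only that the average change between successive difference grids is examined and that three convergence passes are required.

```python
import numpy as np

def check_convergence(f_curr, f_prev, prev_mean_diff, passes, tol=0.01):
    """Automatic stopping check run after each iteration (equation 11).

    The mean of the difference grid is compared with the previous iteration's
    mean; when the relative change falls below `tol`, a convergence pass is
    recorded, and three passes end the loop. The test and the tolerance are
    assumptions of this sketch."""
    mean_diff = float(np.mean(np.abs(f_curr - f_prev)))  # mean of f_D,n
    if prev_mean_diff is not None and prev_mean_diff > 0:
        if abs(mean_diff - prev_mean_diff) < tol * prev_mean_diff:
            passes += 1
    return passes >= 3, mean_diff, passes
```

The caller carries the pass count and the previous mean difference from one iteration to the next.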

SYNTHETIC DATA TEST

To test the new method, a 3 × 3 km synthetic aeromagnetic data set (Figure 4) was built using PyGMI (Cole, 2015). The data set, similar to the one used by Geng et al. (2014), consists of four trends along the southern extent at angles of 0°, 15°, 30°, and 45° with respect to north, another trend along the northern extent running east–west, and three isolated blocks in the northeast. All sources are at a depth of 100 m and have a depth extent of 50 m. The isolated blocks are 35 × 35 m in lateral extent, and the dikes are 5 m wide. The data are represented as though they were measured at a flying height of 100 m, the magnetic inclination and declination are 72.10° and −10.12°, respectively, the earth’s magnetic field intensity was set to 55,000 nT, and the anomalies have a magnetic susceptibility of 1 SI. After the synthetic data set was generated with a 5 m cell size, it was corrupted with Gaussian noise with a standard deviation of 1 nT, a value suggested by Smith and Salem (2005) as typical of a large amount of aeromagnetic noise. The data set was then subsampled as though it had been flown at a 250 m line spacing, indicated by the solid black lines. The data set was interpolated onto a 50 m grid using bidirectional gridding (Figure 5), minimum curvature (Figure 6), kriging (Figure 7), and our new method (Figure 8). All maps were interpolated in Geosoft’s Oasis Montaj software (Geosoft, 2018).

All of the algorithms trended the east–west feature at the top of the image well. The three isolated features in the top right were not trended by any algorithm. Note that the bidirectional, minimum curvature, and kriging algorithms recover the amplitude of the middle isolated feature, which is crossed by a flight line, but they shift the other two features to lie on a flight line. The new algorithm does a better job of recovering the amplitude of the left-most circular feature, placing it between the flight lines; however, it does not recover the center circular feature as well as the other methods. The 45° and 30° linear features at the bottom of the image show boudinage artifacts on the bidirectional, minimum curvature, and kriging grids, whereas the new method has effectively removed these artifacts. The 15° linear feature is essentially gridded as two separate anomalies in the bidirectional gridding and kriging results, whereas minimum curvature and the new method have joined them. Figure 9 shows the residual plots of the methods, and Table 1 shows the minimum, maximum, mean, median, and standard deviation of these residuals. Unsurprisingly, bidirectional gridding has the most accurate result along the linear feature that runs perpendicular to the flight path, whereas the new method has a higher residual along the top edge of that feature. This is likely due to the influence of the low values along the edge of the model, which causes the new method to develop two thinner linear features of opposing values rather than a single large feature with a gradual falloff in value. The three standard methods have fairly similar residual grids, with minimum curvature being the most accurate. Overall, however, the new method’s residual values along the linear features are much smaller than those of the other three methods. In addition, the standard deviation of the residual is smallest for the new method. It should be noted, however, that there are larger residuals along several areas of real data in the new method. This is because the new method samples the real data in a different, likely simpler, way than the other three methods, an effect accentuated by the large amount of noise added to the data set.

FIELD DATA TEST

We then applied the method to a real-world example. An aeromagnetic data set from Nunavut, Canada (Overby-Duggan), was downloaded from the Natural Resources Canada geophysical data repository (Geological Survey of Canada, 2018), and a small section containing linear features was extracted. The survey was flown at a line spacing of 400 m; therefore, it was interpolated with a cell size of 80 m using minimum curvature (Figure 10) and our new method (Figure 11). With some experimentation, we found that our method’s results were better trended when the interpolation was completed with a 40 m cell size and then subsampled up to the final 80 m cell size. In addition, φ was set to half the line spacing (200 m), θ was set to 5°, and τ was set to 100% to ensure that the full trending effect would be applied. The automatic stopping option was used, completing after 97 iterations. Note that all coordinates are eastings and northings in UTM zone 13N.
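For reference, the parameter choices used for this example are collected below as an illustrative configuration; the keyword names are placeholders and do not correspond to identifiers in the published C# code.

```python
# Parameter choices used for the Overby-Duggan subset; names are placeholders.
params = dict(
    line_spacing=400.0,            # m, survey line spacing
    interpolation_cell_size=40.0,  # m, half the final cell size
    output_cell_size=80.0,         # m, final grid (one-fifth the line spacing)
    max_search_distance=200.0,     # m, phi: half the line spacing
    search_angle_step=5.0,         # degrees, theta
    trend_strength=100.0,          # tau entered as 0-100; full trending effect
    auto_stop=True,                # converged after 97 iterations here
)
```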

Comparing our new method’s result with the minimum curvature result, many of the beading artifacts have been removed, particularly the strong trends in the southwestern corner, along the central-eastern side, and in the northeastern corner. Furthermore, the linear trends appear as thinner, sharper features with the new method, whereas the rest of the image remains similar to the minimum curvature result. We then applied a vertical derivative enhancement to both maps: Figure 12 shows the vertical derivative of the minimum curvature grid, and Figure 13 shows the vertical derivative calculated from the new grid. There is a clear improvement in the result, with linear features less “beaded” and breaks in trends now connected. There are also some areas where trending has been applied even though no clear trend is present (e.g., compare the minimum curvature results with the new method’s results in the area around 7380000N, 467500E); this is most noticeable in the vertical derivative enhancement. If this were an area of interest, a lower trend strength may assist in interpretation; however, because it is an area of little linear structure, minimum curvature is likely the more appropriate interpolation method to use.

CONCLUSION

When processing aeromagnetic data, many standard gridding methods have difficulty interpolating thin, linear features that lie at nonperpendicular angles to the flight lines. The resulting interpolation artifacts, often referred to as “beading” or boudinage, can make interpreting the data difficult, particularly when using analysis methods that involve derivatives, such as the vertical derivative enhancement. The goal of this research has been to develop a new interpolation method aimed specifically at data sets that contain thin, linear features, such that the resulting grids are free of these artifacts. This iterative method uses a Taylor expansion of derivatives to interpolate data across the flight lines while maintaining linear features. However, to mitigate any discontinuities, this methodology must be applied to the real data cells as well as the interpolated data cells. To honor the flight line data, we then apply a “normalization” process, which returns the real data to their original values while pulling the interpolated data along with them. To further enhance trends across flight lines, we apply this normalization along the paths of highest anisotropy, as calculated using structure tensors.

Testing on synthetic and field data shows that this new method improves the interpolation of this type of feature when compared with the widely used interpolation methods of bidirectional gridding, minimum curvature, and kriging. Linear features that cross the flight lines at acute angles are maintained without the usual beading artifacts in both the total field and the vertical derivative enhancement results.

This method involves several user-defined variables, and as such, it will generally require some user experimentation. However, most data sets will yield fairly effective interpolations if the guidelines described here are followed. In addition, because this method has been developed explicitly to address the issues that often affect thin, linear features, areas of data with little linear structure may acquire weak trends that are noticeable in enhancements such as the vertical derivative. This trending can be reduced by decreasing the trend strength τ.

ACKNOWLEDGMENTS

We would like to thank NSERC for funding the Ph.D. that this research is a part of, Natural Resources Canada for the publicly available aeromagnetic data, P. Cole for his assistance with PyGMI, and M. Lee of the National Research Council Canada for the helpful discussions. Finally, we would like to thank the editor, associate editor, and three reviewers for their helpful comments and assistance in extensively improving this paper.

DATA AND MATERIALS AVAILABILITY

The data used are in the public domain and can be found by following the link in the references section of the paper. The code for the method developed in this paper can be found at https://github.com/TomasNaprstek/Naprstek-Smith-Interpolation. It can be run as a stand-alone method or integrated into another program (e.g., we have included a custom DLL for Geosoft’s Oasis Montaj).
