ABSTRACT
Numerical modeling and inversion of electromagnetic (EM) data are computationally intensive tasks. To achieve efficiency, we have developed algorithms constructed from the smallest practical computational unit. This “atomic” building block, which yields the solution of Maxwell’s equations for a single time or frequency datum due to an infinitesimal current or magnetic dipole, is a self-contained EM problem that can be solved independently and inexpensively on a single CPU core. Any EM data set can then be assembled from these units through superposition. This approach takes advantage of the rapidly expanding capability of multiprocessor computation. Our decomposition has allowed us to handle the computational complexity that arises from the physical size of the survey, the large number of transmitters, and the wide range of times or frequencies in a data set, by modeling every datum separately on customized local meshes with local time-stepping schemes. The price of this efficiency is that the number of independent subproblems can become very large. We have realized, however, that not all of the data need to be considered at all stages of the inversion. Rather, the data can be significantly downsampled at late times or low frequencies, and at the early stages of inversion when only long-wavelength signals are sought. We have therefore developed a random data subsampling approach, in conjunction with cross-validation, that selects data in accordance with the spatial scales of the EM induction and the degree of regularization. Alternatively, for many EM surveys, the atomic units can be combined into larger subproblems, reducing the number of subproblems needed. These trade-offs were explored for airborne and ground large-loop systems with specific survey configurations.
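Because each atomic subproblem is independent, the decomposition is embarrassingly parallel. A minimal sketch of the assembly-by-superposition idea follows; all names and the toy "solver" are hypothetical illustrations, not the paper's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_atomic(datum):
    """Stand-in for solving Maxwell's equations for one dipole source at one
    time or frequency on a small local mesh -- a cheap, self-contained task.
    Here a datum is just (source_moment, geometric_factor), purely for
    illustration."""
    source_moment, geometric_factor = datum
    return source_moment * geometric_factor

def forward_all(data_spec, max_workers=4):
    # Each atomic subproblem is independent, so the full predicted data
    # vector is assembled by mapping over the subproblems in parallel
    # (in practice, across many CPU cores or cluster nodes).
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(solve_atomic, data_spec))
```

The key design point is that no subproblem communicates with any other, so the work scales out to as many processors as there are data.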
Our synthetic and field examples showed that the proposed framework produces 3D inversion results of uncompromised quality in a more scalable manner.
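The random subsampling strategy can be illustrated with a minimal sketch; the sampling fractions and staging below are hypothetical, chosen only to show the mechanism of drawing a small random subset early and growing it as the inversion proceeds:

```python
import random

def subsample(data, frac, seed=0):
    """Randomly draw a fraction of the data without replacement."""
    rng = random.Random(seed)
    k = max(1, round(frac * len(data)))
    return rng.sample(data, k)

# Early iterations, where only long-wavelength structure is sought, can use
# a small fraction; the fraction grows as the inversion refines the model.
# An illustrative schedule of per-stage sampling fractions:
schedule = [0.05, 0.1, 0.25, 0.5, 1.0]
```

In practice a held-out subset (cross-validation) would be used to check that the misfit of the subsample remains representative of the full data misfit before the fraction is increased.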