Linearized inversion methods often suffer from dependence on the initial model: when the initial model is far from the global minimum, the optimization is likely to converge to a local minimum. Optimization problems with nonlinear relationships between data and model typically have more than one local minimum. Such problems are solved effectively by global-optimization methods, which are exhaustive search techniques and hence computationally expensive. As model dimensionality increases, the search space grows large and convergence becomes very slow. We propose a new global-optimization scheme that incorporates a priori knowledge by preconditioning the model space with edge-preserving smoothing operators. These nonlinear operators favorably precondition, or bias, the model space toward blocky solutions. The approach not only speeds convergence but also retrieves blocky solutions. We apply the algorithm to estimate layer parameters from amplitude-variation-with-offset (AVO) data. The results indicate that global optimization with model-space-preconditioning operators converges faster and yields a more accurate blocky solution that is consistent with the a priori information.
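The preconditioning idea can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: `eps` is a simple minimum-variance edge-preserving smoother, the forward operator is a stand-in moving-average filter rather than an AVO model, and a greedy random search stands in for a full global-optimization scheme such as simulated annealing.

```python
import numpy as np

def eps(m, L=5):
    """Edge-preserving smoothing (EPS): replace each sample by the mean
    of the length-L window containing it whose variance is smallest.
    Averaging stays inside homogeneous blocks, so noise is suppressed
    while layer boundaries remain sharp."""
    n = len(m)
    out = np.empty(n)
    for i in range(n):
        best_var, best_mean = np.inf, m[i]
        # all length-L windows that contain sample i
        for s in range(max(0, i - L + 1), min(i, n - L) + 1):
            w = m[s:s + L]
            if w.var() < best_var:
                best_var, best_mean = w.var(), w.mean()
        out[i] = best_mean
    return out

def invert(d, forward, n, iters=3000, step=0.3, seed=0):
    """Toy preconditioned global search: random model perturbations,
    each candidate passed through EPS (the model-space preconditioner)
    before the misfit test, so only blocky models are evaluated."""
    rng = np.random.default_rng(seed)
    m = np.zeros(n)
    best = np.sum((forward(m) - d) ** 2)
    for _ in range(iters):
        cand = eps(m + rng.normal(0.0, step, n))
        mis = np.sum((forward(cand) - d) ** 2)
        if mis < best:           # greedy accept; a full annealing
            m, best = cand, mis  # scheme would allow uphill moves too
    return m, best

# Synthetic three-layer (blocky) model; the "forward operator" here is
# just a band-limiting moving average, a hypothetical stand-in for AVO.
true_m = np.concatenate([np.full(20, 1.0), np.full(20, 3.0), np.full(20, 2.0)])
forward = lambda x: np.convolve(x, np.ones(7) / 7.0, mode="same")
d = forward(true_m)

init_misfit = np.sum((forward(np.zeros(60)) - d) ** 2)
m_est, final_misfit = invert(d, forward, 60)
```

Because every candidate is smoothed before evaluation, the search never wastes iterations on oscillatory models, which is the mechanism behind the faster convergence and blocky solutions described above.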
