Interferometric modeling of wave propagation in inhomogeneous elastic media using time reversal and reciprocity
Abstract Time reversal of arbitrary, elastodynamic wavefields in partially open media can be achieved by measuring the wavefield on a surface surrounding the medium and applying the time reverse of those measurements as a boundary condition. We use a representation theorem to derive an expression for the time-reversed wavefield at arbitrary points in the interior. When this expression is used to compute, at a second point, the time-reversed wavefield originating from a point source, the time-reversed Green’s function between the two points is observed. By invoking reciprocity, we obtain an expression that is suitable for modeling of wave propagation through the medium. From this we develop an efficient and flexible two-stage modeling scheme. In the initial phase, the model is illuminated systematically from a surface surrounding the medium using a sequence of conventional forward-modeling runs. Full waveforms are stored for as many points in the interior as possible. In the second phase, Green’s functions between arbitrary points in the volume can be computed by crosscorrelation and summation of data computed in the initial phase. We illustrate the method with a simple acoustic example and then apply it to a complex region of the elastic Pluto model. It is particularly efficient when Green’s functions are desired between a large number of points, but where there are few common source or receiver points. The method relies on interference of multiply scattered waves, but it is stable. We show that encoding the boundary sources using pseudonoise sequences and exciting them simultaneously, akin to daylight imaging, is inefficient and in all explored cases leads to relatively high noise levels.
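The second stage of the scheme described in this abstract reduces to a crosscorrelation and a sum over boundary sources. The following minimal Python/NumPy sketch shows that step, assuming the stage-one waveforms recorded at two interior points are stored as plain arrays; the function name, array shapes, and the unweighted summation are illustrative choices, not the authors' implementation (the actual boundary integral carries medium-dependent weights omitted here).

```python
import numpy as np

def interferometric_greens(u_a, u_b, dt):
    """Sketch: estimate the Green's function between interior points A and B.

    u_a, u_b : arrays of shape (n_boundary_sources, n_samples) holding the
               waveforms stored at A and B during the stage-one runs.
    dt       : time sampling interval (s), used to approximate the integral.
    """
    n_src, n_t = u_a.shape
    g = np.zeros(2 * n_t - 1)
    for s in range(n_src):
        # Crosscorrelate the two stored records for one boundary source
        g += np.correlate(u_a[s], u_b[s], mode="full")
    # Summing over all boundary sources approximates the surface integral
    return g * dt
```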
Abstract An exact boundary condition is presented for scattering problems involving spatially limited perturbations of arbitrary magnitude to a background model in generally inhomogeneous acoustic media. The boundary condition decouples the wave propagation on a perturbed domain while maintaining all interactions with the background model, thus eliminating the need to regenerate the wave field response on the full model. The method, which is explicit, relies on a Kirchhoff-type integral extrapolation to update the boundary condition at every time step of the simulation. The Green’s functions required for extrapolation through the background model are computed efficiently using wave field interferometry.
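As a rough illustration of the explicit update this abstract describes, the sketch below advances the boundary values of the perturbed subdomain by a discrete time convolution of precomputed background Green's functions with the wavefield history recorded on the enclosing surface. The shapes, names, and the omission of the Kirchhoff weighting terms are simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def update_boundary(G, history, t):
    """Sketch: extrapolate the recorded wavefield to the boundary at step t.

    G       : (n_boundary, n_surface, n_lags) discretized Green's functions
              through the background model (obtainable by interferometry).
    history : (n_surface, n_steps) wavefield recorded on the surface that
              encloses the perturbation, filled up to time step t.
    """
    b = np.zeros(G.shape[0])
    for lag in range(min(G.shape[2], t + 1)):
        # Discrete convolution: past surface values weighted by G at this lag
        b += G[:, :, lag] @ history[:, t - lag]
    return b
```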
Chapter 14. Seismic Data Acquisition: Recent advances in optimized geophysical survey design
Abstract Survey design ultimately dictates the quality of subsurface information provided by practical implementations of geophysical methods. It is therefore critical to design experimental procedures that cost-effectively produce those data that maximize the desired information. This review cites recent advances in statistical experimental design techniques applied in the earth sciences. Examples from geoelectrical, crosswell and surface seismic, and microseismic monitoring methods are included. Using overdetermined 1D and 2D geoelectrical examples, a minor subset of judiciously chosen measurements provides a large percentage of the information content theoretically offered by the geoelectrical method. In contrast, an underdetermined 2D seismic traveltime tomography design study indicates that the information content increases almost linearly with the amount of traveltime data (source-receiver pairs) considered until the underdeterminacy is reduced substantially. An experimental design study of frequency-domain seismic-waveform inversion experiments reveals that a few optimally chosen frequencies offer as much subsurface information as the full bandwidth. A nonlinear experimental design for a seismic amplitude-versus-angle survey identifies those incidence angles most important for characterizing a reservoir. A nonlinear design example shows that designing microseismic monitoring surveys based on array aperture is a poor strategy that almost certainly leads to suboptimal designs.
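A small, self-contained illustration of the linearized design idea surveyed here is to greedily pick the subset of candidate measurements that maximizes the determinant of the Fisher information matrix J^T J (D-optimality). The random Jacobian below is a stand-in for a real geoelectrical or tomographic sensitivity kernel, and the greedy strategy and damping value are illustrative assumptions.

```python
import numpy as np

def greedy_d_optimal(J, k, damping=1e-6):
    """Greedily pick k rows of Jacobian J maximizing det of the info matrix."""
    n_data, n_model = J.shape
    chosen = []
    info = damping * np.eye(n_model)   # damping keeps the matrix nonsingular
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in range(n_data):
            if i in chosen:
                continue
            _, logdet = np.linalg.slogdet(info + np.outer(J[i], J[i]))
            if logdet > best_logdet:
                best_i, best_logdet = i, logdet
        chosen.append(best_i)
        info += np.outer(J[best_i], J[best_i])
    return chosen

rng = np.random.default_rng(0)
J = rng.standard_normal((50, 5))   # 50 candidate measurements, 5 parameters
print(greedy_d_optimal(J, k=8))    # indices of the 8 most informative rows
```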
Abstract In the 1990s, the method of time-reversed acoustics was developed. This method exploits the fact that the acoustic wave equation for a lossless medium is invariant for time reversal. When ultrasonic responses recorded by piezoelectric transducers are reversed in time and fed simultaneously as source signals to the transducers, they focus at the position of the original source, even when the medium is very complex. In seismic interferometry the time-reversed responses are not physically sent into the earth, but they are convolved with other measured responses. The effect is essentially the same: The time-reversed signals focus and create a virtual source which radiates waves into the medium that are subsequently recorded by receivers. A mathematical derivation, based on reciprocity theory, formalizes this principle: The crosscorrelation of responses at two receivers, integrated over different sources, gives the Green’s function emitted by a virtual source at the position of one of the receivers and observed by the other receiver. This Green’s function representation for seismic interferometry is based on the assumption that the medium is lossless and nonmoving. Recent developments, circumventing these assumptions, include interferometric representations for attenuating and/or moving media, as well as unified representations for waves and diffusion phenomena, bending waves, quantum mechanical scattering, potential fields, elastodynamic, electromagnetic, poroelastic, and electroseismic waves. Significant improvements in the quality of the retrieved Green’s functions have been obtained with interferometry by deconvolution. A trace-by-trace deconvolution process compensates for complex source functions and the attenuation of the medium. Interferometry by multidimensional deconvolution also compensates for the effects of one-sided and/or irregular illumination.
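The trace-by-trace "interferometry by deconvolution" mentioned at the end of this abstract can be sketched in a few lines: in the frequency domain, dividing the response at one receiver by the response at the other cancels the unknown source function, turning the first receiver into a virtual source. The water-level stabilization and parameter values below are common illustrative choices, not those of any particular paper.

```python
import numpy as np

def deconvolution_interferometry(u_a, u_b, eps=1e-3):
    """Sketch: deconvolve trace u_b by trace u_a (virtual source at A)."""
    U_a, U_b = np.fft.rfft(u_a), np.fft.rfft(u_b)
    denom = np.abs(U_a) ** 2
    water = eps * denom.max()          # water level avoids division by ~0
    D = U_b * np.conj(U_a) / (denom + water)
    return np.fft.irfft(D, n=len(u_a))
```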
Seismic interferometry—Turning noise into signal
Interferometric surface-wave isolation and removal
Abstract The removal of surface waves (ground roll) from land seismic data is critical in seismic processing because these waves tend to mask informative body-wave arrivals. Removal becomes difficult when surface waves are scattered, and data quality is often impaired. We apply a method of seismic interferometry, using both sources and receivers at the surface, to estimate the surface-wave component of the Green’s function between any two points. These estimates are subtracted adaptively from seismic survey data, providing a new method of ground-roll removal that is not limited to nonscattering regions.
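The adaptive subtraction step can be illustrated with a least-squares matching filter that shapes the interferometric surface-wave estimate to the recorded trace before subtraction. The single-trace formulation and filter length below are illustrative assumptions; practical schemes typically operate in overlapping space-time windows.

```python
import numpy as np
from scipy.linalg import toeplitz

def adaptive_subtract(data, gr_estimate, flen=21):
    """Sketch: subtract an LS-matched ground-roll estimate from one trace.

    data, gr_estimate : equal-length 1-D arrays (recorded trace and the
                        interferometric surface-wave estimate).
    """
    n = len(data)
    # Convolution matrix of the ground-roll estimate (full convolution)
    col = np.r_[gr_estimate, np.zeros(flen - 1)]
    row = np.r_[gr_estimate[0], np.zeros(flen - 1)]
    C = toeplitz(col, row)
    d = np.r_[data, np.zeros(flen - 1)]
    f, *_ = np.linalg.lstsq(C, d, rcond=None)   # matching filter
    return (d - C @ f)[:n]                      # data minus matched estimate
```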
Abstract Geological information can be used to solve many practical and theoretical problems, both within and outside of the discipline of geology. These include analysis of ground stability, predicting subsurface water or hydrocarbon reserves, assessment of risk due to natural hazards, and many others. In many cases, geological information is provided as an a priori component of the solution (i.e. information that existed before the solution was formed and which is incorporated into the solution). Such information is termed ‘geological prior information’. The key to the successful solution of such problems is to use only reliable geological information. In turn this requires that: (1) multiple geological experts are consulted and any conflicting views reconciled, (2) all prior information includes measures of confidence or uncertainty (without which its reliability and worth are unknown), and (3) as much information as possible is quantitative, and qualitative information or assumptions are clearly defined so that uncertainty or risk in the final result can be evaluated. This paper discusses each of these components and proposes a probabilistic framework for the use and understanding of prior information. We demonstrate the methodology implicit within this framework with an example: this shows how prior information about typical stacking patterns of sedimentary sequences allows aspects of 2-D platform architecture to be derived from 1-D geological data alone, such as that obtained from an outcrop section or vertical well. This example establishes how the extraction of quantitative, multi-dimensional, geological interpretations is possible using lower dimensional data. The final probabilistic description of the multi-dimensional architecture could then be used as prior information sequentially for a subsequent problem using exactly the same method.
Abstract Opinion of geological experts is often formed despite a paucity of data and is usually based on prior experience. In such situations humans employ heuristics (rules of thumb) to aid analysis and interpretation of data. As a result, future judgements are bootstrapped from, and hence biased by, both the heuristics employed and prior opinion. This paper reviews the causes of bias and error inherent in prior information derived from the probabilistic judgements of people. Parallels are developed between the evolution of scientific opinion on one hand and the limits on rational behaviour on the other. We show that the combination of data paucity and commonly employed heuristics can lead to herding behaviour within groups of experts. Elicitation theory mitigates the effects of such behaviour, but a method to estimate reliable uncertainties on expert judgements remains elusive. We have also identified several key directions in which future research is likely to lead to methods that reduce such emergent group behaviour, thereby increasing the probability that the stock of common knowledge will converge in a stable manner towards facts about the Earth as it really is. These include: (1) measuring the frequency with which different heuristics tend to be employed by experts within the geosciences; (2) developing geoscience-specific methods to reduce biases originating from the use of such heuristics; (3) creating methods to detect scientific herding behaviour; and (4) researching how best to reconcile opinions from multiple experts in order to obtain the best probabilistic description of an unknown, objective reality (in cases where one exists).
Abstract Among the more challenging questions in geology are those concerning the anatomy of sedimentary bodies and related stratal surfaces. Though significant progress has been made on the interpretation of depositional environments, little systematic data are available on their dimensions and geometry. With the recent advances in computer power, software development and accuracy of affordable positioning equipment, it has now become possible to extract high-resolution quantitative data on the anatomy of sedimentary bodies. In Asturias, northwestern Spain, aerial photography provides continuous 2-D cross-sections of a seismic-scale, rotated to vertical, carbonate platform margin of the Early Carboniferous. Digital elevation models, orthorectified aerial photographic imagery and ground verification of stratal surfaces generated the elements that are required to reconstruct the true dimensions, angular relationships of bedding planes and the spatial distribution of facies units in this platform margin. Along with biostratigraphy this provides sufficient constraints to estimate rates of progradation, aggradation, growth and removal of sediments in the slope environment. Here we present a methodology to create outcrop models and integrate complementary types of data that provide new insights in sedimentology that were previously unattainable.
Digital field data acquisition: towards increased quantification of uncertainty during geological mapping
Abstract Traditional methods of geological mapping were developed within the inherent constraints imposed by paper-based publishing. These methods are still dominant in the earth sciences, despite recent advances in digital technology in a range of fields, including global-positioning systems, geographical information systems (GIS), 3-D computer visualization, portable computer devices, knowledge engineering and artificial intelligence. Digital geological mapping has the potential to overcome some serious limitations of paper-based maps. Although geological maps are usually highly interpretive, traditional maps show little of the raw field data collected or the reasoning used during interpretation. In geological mapping, interpretation typically relies on the prior experience and prior knowledge of the mapper, but this input is rarely published explicitly with the final printed map. Digital mapping techniques open up new possibilities for publishing maps digitally in a GIS format, together with spatially referenced raw field data, field photographs, explanation of the interpretation process and background information relevant to the map area. Having field data in a digital form allows the use of interpolation methods based on fuzzy logic to quantify some types of uncertainty associated with subsurface interpretation, and the use of this uncertainty to evaluate the validity of competing interpretations.
Three-dimensional geological models from outcrop data using digital data collection techniques: an example from the Tanqua Karoo depocentre, South Africa
Abstract Recent technological advances have made the collection of digital geological data from outcrops a realistic and efficient proposition. The world-class exposures of Permian basin-floor turbidite fans of the Tanqua depocentre, Karoo Basin, South Africa have been the focus of one such study. These outcrops are faulted at a subseismic scale (displacements of up to 40 m), with continuous exposures of up to 40 km in depositional dip and 20 km strike directions. Digital data collection has been undertaken using a variety of methods: differential global-positioning systems (DGPS) mapping, surveying using laser total station and laser rangefinders, ground- and helicopter-based digital photography and photogrammetry, and digital sedimentary outcrop logging as well as geophysical data from boreholes. These data have then been integrated into several 3-D geological models of the study area, built using a subsurface reservoir-modelling system. The integrated dataset provides insights into the stratigraphic evolution of a deep-water fan complex by allowing true 3-D analysis and interpretation of data collected in the field. The improved understanding of these deep-water fan systems will improve existing models of offshore analogues by enhancing understanding of geometries and trends not resolvable from existing offshore data and by identifying potential problematic areas for fluid flow. Initial results from this approach have been used successfully to condition stochastic geological models of a subsurface deep-water reservoir from the North Sea.
Sensitive dependence, divergence and unpredictable behaviour in a stratigraphic forward model of a carbonate system
Abstract Although conceptual models of carbonate systems typically assume a dominance of external forcing and linear behaviour to generate metre-scale carbonate parasequences, there is no reason to preclude autocyclic and non-linear behaviour in such systems. Component parts of the carbonate system represented in this numerical forward model are entirely deterministic, but several parts are non-linear and exhibit complex interactions. Onshore sediment transport during relative sea-level rise generates autocyclic quasi-periodic shallowing-upward parasequences, but model behaviour is sufficiently complex that water-depth evolution and parasequence thickness distributions are not predictable in any detail. The model shows sensitive dependence on initial conditions, resulting in divergence of two model cases, despite only a small difference in starting topography. Divergence in water-depth history at one point takes ~10 ka, and for the whole model grid takes ~100 ka. Fischer plots from the two cases show that divergence leads to entirely different parasequence thickness evolution in each case. Chaotic behaviour is a specific type of sensitive dependence, and calculation of trajectory divergence in a 3-D pseudo-phase space indicates that water-depth evolution is not truly chaotic. If sensitive dependence, divergence and complex processes generating random products turn out to be common in real carbonate systems, predictions should be limited to elements of the system unaffected by these phenomena, or limited to cases where an element of periodic external forcing overrides their effects. These results also suggest that increasingly complex and sophisticated stratigraphic forward models are not necessarily going to lead directly to more deterministic predictive power, although they may well be useful sources of statistical data on carbonate strata.
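The trajectory-divergence test this abstract mentions can be sketched by embedding two water-depth histories in a 3-D delay-coordinate (pseudo-phase) space and tracking their separation through time. The delay, the embedding dimension, and the synthetic sine-wave inputs below are illustrative stand-ins for the model output.

```python
import numpy as np

def delay_embed(x, delay):
    """3-D delay-coordinate embedding of a scalar time series."""
    n = len(x) - 2 * delay
    return np.column_stack([x[:n], x[delay:n + delay], x[2 * delay:]])

def trajectory_separation(x1, x2, delay=5):
    """Distance between the two embedded trajectories at each time step."""
    e1, e2 = delay_embed(x1, delay), delay_embed(x2, delay)
    return np.linalg.norm(e1 - e2, axis=1)

t = np.linspace(0.0, 100.0, 2000)
sep = trajectory_separation(np.sin(t), np.sin(t + 1e-6))
# Roughly exponential growth of `sep` would indicate chaos; bounded or
# sub-exponential growth matches "sensitive dependence but not truly chaotic".
```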
Input uncertainty and conditioning in siliciclastic process modelling
Abstract Deterministic forward sedimentary process models enhance our quantitative understanding of sedimentary systems. They are also being used increasingly to assist in the reconstruction of the geological past and the inference of the present configuration of sedimentary deposits. Such usage presents the challenge of having to establish the initial and boundary conditions that will cause the model’s output to match present-day observations. This constitutes an inversion problem that can sometimes be handled by traditional optimization methods. Clastic sedimentation models, however, often incorporate complex non-linear relationships that preclude the use of these techniques. The problem must then be handled statistically by relaxing the requirement of honouring exact observations and matching only the spatial variability of the observed deposits. Recent advances in control of non-linear dynamic systems may also shed light on possible solutions to the inversion problem in siliciclastic models. This paper reviews known approaches to problems related to input uncertainty and conditioning, and presents original preliminary results on control of sedimentation models.
Abstract The uncertainty of knowledge, in contrast to that of data, can be assessed by its probability in the logical sense. The logical concept of probability has been developed since the 1930s but, to date, no complete and accepted framework has been found. This paper approaches this problem from the point of view of logical entailment and natural sequential calculus of Classical logic. It is shown herein that probability can be comprehended in terms of a set of formal theories built in similar language. This measure is compliant with general understanding of probability, can be both conditional and unconditional, accounts for learning new evidence and complements Bayes’ rule. The approach suggested is practically infeasible at present and requires further theoretical research in the domain of geoscience. Nevertheless, even within the framework of existing methods of expert judgement processing, there is a way of implementing logic that will improve the quality of judgements. Also, to reach the state of formalization necessary to use logical probability, techniques of knowledge engineering are required; this paper explains how logical probabilistic methods relate to such techniques, and shows that the perfect formalization of a domain of knowledge requires them. Hence, the lines for future research should be: (1) the development of a strategy of co-application of existing expert judgement-processing techniques, knowledge engineering and classical logic; and (2) further research into logic enabling the development of formal languages and theories in geoscience.
Abstract It is often desirable to describe information derived from the cumulative experience of human experts in a quantitative and probabilistic form. Pertinent examples include assessing the reliability of alternative models or methods of data analysis, estimating the reliability of data in cases where this cannot be measured, and estimating ranges and probable distributions of rock properties and architectures in complex geological settings. This paper presents a method to design an optimized process of elicitation (interrogation of experts for information) in real time, using all available information elicited previously to help in designing future elicitation trials. The method maximizes expected information during each trial using experimental design theory. We demonstrate this method in a simple experiment in which the conditional probability distribution or relative likelihood of a suite of nine possible 3-D models of fluvial-deltaic geologies was elicited from a geographically remote expert. Although a geological example is used, the method is general and can be applied in any situation in which estimates of expected probabilities of occurrence of a set of discrete models are desired.
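A toy version of the design principle this abstract describes: given a discrete suite of models and a set of candidate binary questions with assumed answer likelihoods, choose the question whose answer is expected to reduce Shannon entropy the most. The likelihood table below is a hypothetical stand-in for a calibrated elicitation model.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_question(prior, like):
    """like[q, m] = assumed P(answer 'yes' to question q | model m)."""
    gains = []
    for q in range(like.shape[0]):
        p_yes = like[q] @ prior
        post_yes = like[q] * prior / p_yes              # Bayes update, 'yes'
        post_no = (1 - like[q]) * prior / (1 - p_yes)   # Bayes update, 'no'
        expected_h = p_yes * entropy(post_yes) + (1 - p_yes) * entropy(post_no)
        gains.append(entropy(prior) - expected_h)       # expected info gain
    return int(np.argmax(gains))

prior = np.full(9, 1 / 9)                    # nine equally likely 3-D models
rng = np.random.default_rng(1)
like = rng.uniform(0.05, 0.95, size=(6, 9))  # six candidate questions
print(best_question(prior, like))            # the question to ask next
```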
Abstract An effective inverse scheme that can be applied to complex 3-D hydrodynamic forward models has so far proved elusive. In this paper we investigate an interactive inverse methodology that may offer a possible way forward. The scheme builds on previous work in linking expert review of alternate output to rapid modification of input variables. This was tested using the SEDSIM 3-D stratigraphic forward-modelling program, varying nine input variables in a synthetic example. Ten SEDSIM simulations were generated, with subtle differences in input, and five dip sections (fences) were displayed for each simulation. A geoscientist ranked the lithological distribution in order of similarity to the true sections (the true input values were not disclosed during the experiment). The two or three highest-ranked simulations then acted as seeds for the next round of ten simulations, which were compared in turn. After 90 simulations a satisfactory match between the target and the model was found and the experiment was terminated. Subsequent analysis showed that the estimated input values were ‘close’ to the true values.
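Schematically, the interactive loop amounts to a seeded search, as in the sketch below. The geoscientist's visual ranking is replaced here by a distance-to-target proxy and SEDSIM by a trivial stand-in, so the code illustrates only the control flow: ten initial runs plus eight rounds of ten gives the 90 simulations reported.

```python
import numpy as np

rng = np.random.default_rng(3)
target = rng.uniform(size=9)            # hidden "true" input values

def expert_rank(pop):
    # Proxy for the geoscientist's ranking: distance to the true inputs
    return sorted(pop, key=lambda m: float(np.linalg.norm(m - target)))

pop = [rng.uniform(size=9) for _ in range(10)]   # round 1: ten simulations
for _ in range(8):                               # eight further rounds
    seeds = expert_rank(pop)[:3]                 # best two or three as seeds
    pop = [s + rng.normal(scale=0.05, size=9)
           for s in seeds for _ in range(4)][:10]
best = expert_rank(pop)[0]                       # 90 simulations in total
```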
Abstract Geological models are required because we do not have complete knowledge, in time or space, of the system of interest. Models are constructed in an attempt to represent the system and its behaviour based on interpretation of observations and measurements (samples) of the system, combined with informed judgement (expert opinion) and, generally, constrained for convenience by the limitations of the modelling medium. Geological models are inherently uncertain; broadly, those uncertainties can be classified as epistemic (knowledge based) or aleatory (reflecting the variability of the system). Traditional, quantitative methods of uncertainty analysis address the aleatory uncertainties but do not recognize incompleteness and ignorance. Evidence-based uncertainty analysis provides a framework within which both types of uncertainty can be represented explicitly and the dependability of models tested through rigorous examination, not just of data and data quality, but also of the modelling process and the quality of that process. The inclusion of human judgement in the interpretation and modelling process means that there will be frequent differences of opinion and the possibility of alternative, inconsistent and/or conflicting interpretations. The analysis presented here uses evidence-based uncertainty analysis to formulate a complete expression of the geological model, including presentation of the supporting and the conflicting or refuting evidence, representation of the remaining uncertainties and an audit trail to the observations and measurements that underpin the currently preferred and the possible alternative hypotheses.
Abstract We argue for and present a reformulation of the seismic surface-wave inverse problem in terms of a thermal model of the upper mantle and apply the method to estimate lithospheric structure across much of the Canadian Shield. The reformulation is based on a steady-state temperature model, which we show to be justified for the studied region. The inverse problem is cast in terms of three thermal parameters: temperature in the uppermost mantle directly beneath the Moho, mantle temperature gradient, and the potential temperature of the sublithospheric convecting mantle. In addition to the steady-state constraint, prior physical information on these model parameters is based on surface heat flow and heat production measurements, the condition that melting temperatures were not reached in the crust in Proterozoic times, and other theoretical considerations. We present the results of a Monte Carlo inversion of surface-wave data with this ‘thermal parameterization’ subject to the physical constraints for upper mantle shear velocity and temperature, from which we also estimate lithospheric thickness and mantle heat flux. The Monte Carlo inversion gives an ensemble of models that fit the data, providing estimates of uncertainties in model parameters. We also estimate the effect of uncertainties in the interconversion between temperature and seismic velocity. Variations in lithospheric temperature and shear velocity are not well correlated with geological province or surface tectonic history. Mantle heat flow and lithospheric thickness are anti-correlated and vary across the studied region, from 11 mW/m² and nearly 400 km in the northwest to about 24 mW/m² and less than 150 km in the southeast. The relation between lithospheric thickness and mantle heat flow is consistent with a power law relation similar to that proposed by Jaupart et al. (1998), who argued that the lithosphere and asthenosphere beneath the Canadian Shield are in thermal equilibrium and heat flux into the deep lithosphere is governed by small-scale sublithospheric convection.
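A minimal sketch of the Monte Carlo strategy this abstract describes: random-walk sampling over the three thermal parameters, with hard bounds standing in for the physical prior constraints. The forward operator, data values, and bounds below are placeholders rather than the paper's geotherm and dispersion physics.

```python
import numpy as np

rng = np.random.default_rng(2)
bounds = np.array([[300.0, 800.0],     # T_m: temperature beneath the Moho (C)
                   [0.1, 10.0],        # dT/dz: lithospheric gradient (C/km)
                   [1250.0, 1450.0]])  # T_p: potential temperature (C)

def forward(m):
    """Placeholder for the temperature -> shear velocity -> dispersion map."""
    return np.array([1e-3 * m[0] + m[1], 1e-2 * m[2] - m[1]])

d_obs, sigma = np.array([3.0, 12.0]), 0.5     # synthetic data and error

def chi2(m):
    return np.sum(((forward(m) - d_obs) / sigma) ** 2)

m, ensemble = bounds.mean(axis=1), []
for _ in range(5000):
    trial = m + rng.normal(scale=0.02 * (bounds[:, 1] - bounds[:, 0]))
    if np.any(trial < bounds[:, 0]) or np.any(trial > bounds[:, 1]):
        continue                       # hard physical bounds act as the prior
    if rng.random() < np.exp(0.5 * (chi2(m) - chi2(trial))):
        m = trial                      # Metropolis acceptance
    ensemble.append(m.copy())
# Percentiles of `ensemble` estimate the parameter uncertainties.
```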