Abstract

Advances in data capture and computer technology have made possible the collection of three-dimensional, high-resolution, digital geological data from outcrop analogs. This paper presents new methodologies for the acquisition and utilization of three-dimensional information generated by ground-based laser scanning (lidar) of outcrops. A complete workflow is documented—from outcrop selection through data collection, processing and building of virtual outcrops—to geological interpretation and the building of geocellular models using industry-standard reservoir-modeling software. Data sets from the Roda Sandstone in the Spanish Pyrenees and the Grabens region of Canyonlands National Park, Utah, USA, are used to illustrate the application of the workflow to sedimentary and structural problems at a reservoir scale.

Subsurface reservoir models are limited by available geological data. Outcrop analogs from comparable systems, such as the Roda Sandstone and the Grabens, are commonly used to provide additional input to models of the subsurface. Outcrop geocellular models can be analyzed both statically and dynamically, wherein static examination involves visual inspection and the extraction of quantitative data on body geometry, and dynamic investigation involves the simulation of fluid flow through the analog model.

The work presented in this study demonstrates the utility of lidar as a data collection technique for the building of more accurate outcrop-based geocellular models. The aim of this publication is to present the first documentation of a complete workflow that extends from outcrop selection to model investigation through the presentation of two worked data sets.

INTRODUCTION

The intention of this study is to present new methodologies for the acquisition and utilization of three-dimensional (3D) information generated by the ground-based laser scanning (lidar) of geological outcrops. In particular, the focus is on (1) the accurate representation of geological entities from outcrops on a computer (referred to in this paper as a “virtual outcrop”); (2) utilizing the virtual outcrop to extract data for the building and testing of 3D geocellular models using conventional hydrocarbon reservoir-modeling software; and (3) other applications of the collected data and the virtual outcrop. Since the pioneering work of Bellian et al. (2005), there has been a rapid increase in the application of lidar to the study and characterization of geological outcrops. Numerous groups are now working with such data (e.g., Adams et al., 2007; Aiken, 2006; Deveugle et al., 2007; Enge et al., 2006, 2007; Howell et al., 2006, 2007; Jones et al., 2007; Lee et al., 2007; Martinsen et al., 2007; Monsen, 2006; Oftedal et al., 2007; Olariu et al., 2005; Pedersen et al., 2007; Thurmond, 2006), although, to date, no systematic methodologies for the collection, processing, and utilization of these data have been published. This paper documents a complete workflow, from outcrop selection through data collection, processing, and interpretation, to the building of the geocellular model. The workflow is illustrated with two case studies that demonstrate its application to sedimentary and structural reservoir-geology problems.

Lidar, which stands for light detection and ranging, includes both aerial and ground-based techniques (Ackermann, 1999; Buckley et al., 2006; Wehr and Lohr, 1999). Originally developed for aerial surveying, especially topographic mapping, the technique allows the rapid collection of spatially constrained point data that can capture the shape of a scanned feature (Baltsavias, 1999; Baltsavias et al., 2001; Nagihara et al., 2004). A geocellular model is a computer-based representation of a geological volume, typically a subsurface reservoir. The model comprises mapped surfaces that define zones. Zones are populated with cells, which, in turn, are assigned parameters such as porosity, permeability, facies, etc. Such models are routinely used to visualize and simulate the subsurface in the oil industry. Given the poor resolution of seismic data (e.g., Pickup and Hern, 2002) and the sparse well spacing in most oil fields (typically ∼1 km), outcrop data are commonly used to provide information on interwell facies and structural architectures (e.g., Alexander, 1993; Dreyer et al., 1993; Pickup and Hern, 2002; Reynolds, 1999) (Fig. 1). Reservoir modeling software has long been used to represent geological outcrops (Bryant et al., 2000; Bryant and Flint, 1993; Dreyer et al., 1993; Joseph et al., 1993), both for direct reservoir analogs and as a tool for capturing structural and stratigraphic architecture (e.g., Bellian et al., 2005; Weber, 1986; White et al., 2004; Willis and White, 2000). Key issues with the utilization of outcrop data have been: (1) the collection of sufficient volumes of spatially accurate data; (2) correlation of surfaces over long distances and between individual outcrops; (3) the recognition of subtle dip and strike changes in the field; (4) safe access to vertical and sub-vertical portions of the outcrop; (5) the ability to iterate between the outcrop and the model during the model-building phase; and (6) the ability to illustrate the model and outcrop side by side for training and teaching purposes. The collection of ground-based lidar data and the building of virtual outcrops provide a means to address these issues.

Review of Previous Work

The application of digital data collection techniques for outcrop studies is not new. Stafleu et al. (1996) acquired photogrammetric stereopairs of carbonate rock outcrops to form digital elevation models (DEMs). These were then linked with petrophysical data to identify a relationship between erosion and rock impedance. Xu et al. (2000, 2001) used the Global Positioning System (GPS) and a reflectorless laser to collect outcrop data and construct surfaces. Adams et al. (2005) used real-time kinematic GPS and a total station for recording 3D data points from the outcrop. These were combined with a DEM created from photogrammetry to form the basis for a geocellular outcrop model.

Recently, the use of modern data collection techniques in field geology has increased rapidly in popularity (McCaffrey et al., 2005). These methods were reviewed by Pringle et al. (2006) and include a variety of techniques for producing data of different resolutions and accuracies. The application of laser scanning as a method for ground-based geological fieldwork is now proven (Bellian et al., 2005; Buckley et al., 2006; Leren, 2007; Pringle et al., 2004a; Pringle et al., 2006; Redfern et al., 2007). The employment of lidar and the creation of virtual outcrops from the point clouds provide a means for the rapid collection and interpretation of large volumes of accurate geometric outcrop data. A particular advantage of terrestrial lidar scanning is that resultant surfaces are more efficient to produce and have a higher accuracy potential than photogrammetric surfaces, especially in areas that exhibit high relief, such as good quality geological outcrops (Baltsavias et al., 2001; Buckley et al., 2006).

The techniques for collecting, preparing, and presenting scan data in a geologically meaningful context have been reviewed by several authors (McCaffrey et al., 2005; Pringle et al., 2004a; Pringle et al., 2006). Other examples (e.g., Bryant et al., 2000; Pringle et al., 2004b) show that the use of digital spatial information in outcrop modeling is increasing. The utilization of the collected data, especially for the building of geocellular models, is only beginning to be addressed (Dreyer et al., 1993; Løseth et al., 2003). While recent studies by authors such as Bryant et al. (2000) and Bellian et al. (2005) have discussed the possibilities for broader geological application, as yet very little has been published other than “state-of-the-art” papers describing the potential of the technology.

Overview of the Paper

This paper documents for the first time a systematic workflow from the collection of raw scan data to utilization of the final virtual outcrop and the building and testing of models. The workflow is illustrated by the construction of two detailed geocellular models. The resulting models range in size from 100 × 100 × 2 m to several kilometers wide and tens to hundreds of meters thick, and illustrate the utility of virtual outcrop data. A key aspect of this paper is to document the workflow for the use of virtual data to solve specific geological problems. The workflows are illustrated with reference to the two outcrop data sets, the background geology of which is summarized briefly in the next section. The workflow is then illustrated, including the stages from the collection of the data and assembling of the virtual outcrop, to its utilization in reservoir modeling within computer-based tools. The former stage includes outcrop selection, data collection, a brief summary of data processing, and the generation of virtual outcrops ready for geological interpretation. The latter stage incorporates export of data to geocellular modeling software, model building, grid creation and population, and finally, model investigation and flow simulation to test the sensitivity of the model to reservoir fluid flow.

BRIEF SUMMARY OF THE GEOLOGY OF THE FEATURED DATA SETS

Laser scanning, combined with traditional field techniques, has been used to collect two high-resolution data sets from Spain and the United States. The geological backgrounds to these study areas are described below.

Both outcrops have been the subject of extensive previous study (Leren, 2005; Leren, 2007; Lopez-Blanco, 1996; Lopez-Blanco et al., 2003; Molenaar and Martinius, 1990, 1996; Moore and Schultz, 1999; Peacock and Sanderson, 1991; Rotevatn et al., 2007; Trudgill and Cartwright, 1994; Yang and Nio, 1985), but both include issues that can potentially be resolved with more accurate geospatial data. Ground-based laser scanning provides an opportunity for the collection of very high resolution data that can be targeted to address these issues. Within the Roda Sandstone, these problems include the detailed delta-clinoform and bedset geometries, and the correlation across unexposed areas. In the Grabens area, the detailed architecture of the fault overlap zone and the distribution of antithetic structures are issues of interest. Digitally collecting the spatial data also eases the export of the data to geocellular modeling software.

Roda Sandstone, Spanish Pyrenees

The Eocene Roda Sandstone crops out in the Spanish Pyrenees and is interpreted as a predominantly siliciclastic, wave- and tide-influenced, Gilbert-type delta system (Leren, 2005; Leren, 2007; Lopez-Blanco, 1996; Lopez-Blanco et al., 2003; Molenaar and Martinius, 1996) (Fig. 2). The Roda comprises a series of Gilbert-type lobes with steeply dipping clinoforms. The entire unit comprises six seaward (southwestward) stepping, delta-front bodies. This study focuses on two of these packages. Individual lobes are separated by cemented hardgrounds, and the distal toeset deposits are reworked by strong west-northwest-directed, ebb-tidal currents (Molenaar, 1990; Yang and Nio, 1985). Virtual outcrops and 3D geocellular models illustrate the lateral geometry within the lobes and their constituent clinoforms. Compared with a conventional photograph, inspection of these geometries is substantially eased by the three-dimensionality of the virtual outcrop (Fig. 3). Also of interest is the lateral and down-dip transition from steeply dipping foresets to large sub-tidal bars.

The Grabens, Canyonlands National Park, Utah

The Devil's Lane area in the Canyonlands Grabens was studied to test the feasibility of lidar technology for collecting data that could be used to address issues in structural geology (Fig. 4A). As the name suggests, the Grabens region of Canyonlands National Park is a heavily faulted area that has undergone deformation throughout the last 15 m.y., due to regional uplift and the collapse of a subsurface layer of salt (Moore and Schultz, 1999; Trudgill and Cartwright, 1994). The area features a series of interconnected systems of horsts and grabens, and a configuration of faults and fault blocks that is geometrically analogous to many subsurface hydrocarbon reservoirs, e.g., in the North Sea (Færseth, 1996). The host rock to the faulting is the predominantly aeolian, Permian-aged Cedar Mesa Sandstone. The main feature of interest is a graben system featuring a right-lateral step or shift of the bounding faults, resulting in a right-lateral step of the entire graben (Fig. 4B). This type of stepping or shifting is common in graben systems and is related to the evolution of the faults through segment growth and linkage (Peacock and Sanderson, 1991; Rotevatn et al., 2007). In the step-over area, the bounding faults constrain two oppositely dipping relay ramps, both of which are cut by an array of smaller faults and fractures. It is this structural complexity that the lidar survey sought to capture.

Geological Issues to be Addressed

The Roda Sandstone shows an excellent example of seaward-dipping delta-front clinoforms. Within shoreface and shallow-water delta systems, clinoforms typically dip between 1° and 3°, while in deeper water and bedload-dominated deltas, they may dip at up to 30° (Anderson et al., 2004; Bhattacharya, 2006; Gani and Bhattacharya, 2005; Gilbert, 1885; Nemec and Steel, 1988). The clinoforms record the basinward migration of the shoreline through time (Hampson, 2000; Howell et al., 2006). Because clinoform surfaces are frequently draped with mudstone, or are cemented, they are potentially important barriers to horizontal and vertical fluid flow within subsurface hydrocarbon reservoirs. Understanding their geometry is critical to the modeling of intrazone reservoir heterogeneities within systems such as the lower Brent Group of the North Sea, the Halten Terrace, and the Tampen regions (e.g., Brekke et al., 2001; Corfield et al., 2001; Helland-Hansen et al., 1992). Clinoforms can also be used to map the evolving shoreline trajectory. Recent studies (Hampson and Storms, 2003) have highlighted the importance of documenting clinoform evolution through time as a means of predicting medium- to short-term beach evolution on modern coasts. The accurate measurement of clinoform geometry is very difficult in the field. The collection of lidar data and the building of virtual outcrops address that issue and allow the study of individual clinoform bodies.

Arrays of normal faults are connected through relay zones (Cartwright et al., 1996; Childs et al., 1995; Peacock and Sanderson, 1994). In addition to providing important information on the evolution of fault systems, relay structures may also provide conduits for fluid flow across fault zones within hydrocarbon reservoirs. Previous studies of flow through fault zones have used synthetic and theoretical relationships when accounting for relay zones in the determination of flow across faults (Childs et al., 1995). The aim of this study was to collect a spatially accurate data set from an outcropping example and to dynamically test the effects of this one case on simulated flow.

OUTCROP TO MODELING WORKFLOW

The modeling workflow includes a stepwise procedure from the selection of outcrops through the collection of data to the creation of a virtual outcrop, geological interpretation, and finally, building and testing of geocellular models based on these data.

Outcrop Selection

Outcrops are selected based on four criteria: suitability to the problem, level of three-dimensionality, outcrop quality, and accessibility. It is important that the problem drives the selection of outcrop; i.e., a geological or reservoir issue is identified and the optimal outcrop(s) selected to address that problem, rather than data simply being collected because the outcrop quality is good. The term “3D outcrop” has become common in some areas of the geological community to describe outcrops in which there are a number of different orientations to the outcrop faces, and in which geological surfaces and bodies can be easily extrapolated. While such outcrops are clearly not 3D volumes, the key aspect is that they have a far greater utility than simple, single, straight cliff sections that provide a two-dimensional section through the geology. To quantify the level of three-dimensionality that an outcrop expresses, a new parameter, termed the Outcrop Area Ratio (OAR), is proposed. This is the ratio between the plan-view length of exposure and the plan-view area it occupies. The Roda Sandstone has an OAR of 2.13 and the Grabens an OAR of 2.4; both values are good (Figs. 3–5). Whereas the OAR provides information about the level of three-dimensionality that an outcrop expresses, it does not quantify the quality of the outcrop. An outcrop can have a good OAR but still be poorly exposed (e.g., the Roda Sandstone has a good OAR but variable outcrop quality, especially in the more proximal parts). Outcrop quality is a function of vegetation and/or scree cover and is also considered. Preferably, the exposure should include sections oriented close to both strike and dip (structural or depositional), as well as sections that intersect these directions obliquely. If these criteria are met, the geology can be better represented in a 3D model.
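
As an illustration of the OAR concept, the ratio can be computed directly from a digitized plan-view trace of the exposure. The following minimal Python sketch assumes the trace is available as (x, y) coordinates and approximates the occupied plan-view area by the convex hull of the trace; the paper does not specify how that area is delimited, so the hull is an illustrative assumption, and consistent units must be used because the ratio is not dimensionless.

```python
# Minimal sketch: Outcrop Area Ratio (OAR) from a digitized plan-view trace.
# The occupied plan-view area is approximated by the convex hull of the
# trace -- an illustrative choice; the paper does not specify the method.
from shapely.geometry import LineString

def outcrop_area_ratio(xy):
    """xy: list of (x, y) plan-view coordinates along the exposure trace.
    Use consistent units (e.g., kilometres) for length and area."""
    trace = LineString(xy)
    length = trace.length             # plan-view length of exposure
    area = trace.convex_hull.area     # plan-view area it occupies (proxy)
    return length / area

# Hypothetical sinuous exposure trace, coordinates in kilometres:
trace = [(0.0, 0.0), (1.0, 0.4), (1.6, 0.0), (2.2, 0.6), (3.0, 0.2)]
print(f"OAR = {outcrop_area_ratio(trace):.2f}")
```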

In a subsurface study, it is typical either to model the entire field or, in the case of very large fields, to model a portion or sector. The size of the model is commonly limited by the computing and software capacity and is also dependent on the planned application of the model. Typical subsurface models are between 4 and 100 km2 and between 20 and 200 m thick. When modeling outcrops, the size of the model is also dependent on the purpose. The smallest outcrop models (dm to m) are commonly used to address fluid-flow behavior in individual bedforms (Jackson et al., 2005). Models at the scale of hundreds of square meters have been used to study bedforms and individual architectural elements (Falivene et al., 2006; Pedersen, 2005; Vipond, 2005). Larger models, from interwell scale (1–2 km2) to entire oil fields (up to ∼100 km2), have also been built. Long-range lidar as a means of data collection and capture lends itself to all but the smallest of these scales.

The effective range of a typical, reflectorless laser scanner on rock is currently ∼600 m, although this varies depending upon the instrumentation, the reflectivity of the rock, the angle of the scan, and the atmospheric conditions. Buckley et al. (2006) have described how the outcrop must be within range of the laser scanner, both horizontally and vertically. If the distance is too great, then the returns will be scattered and not representative. The same is true if the angle between scanner and target is too wide. This is especially an issue when looking up at steep cliff faces from below. Preferably, the scanner should be positioned at a level at or close to half the total height of the outcrop. This position ensures the lowest angle between the laser beam and the rock, giving the strongest laser return and, hence, a better representation. The nature of the topography may require that the scanner is positioned above or below this level, but outcrops should be selected to minimize such effects. Corners and bends in the outcrop will also result in less than optimal angles and in shadows where not all of the outcrop can be seen from a single position. This will typically require the outcrop to be captured from several positions with a high degree of overlap, both vertically and horizontally.
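
The benefit of placing the scanner at half the outcrop height can be illustrated with simple trigonometry. The hypothetical worked example below (not from the paper) computes the worst-case inclination of the beam from horizontal for a vertical cliff; for a vertical face, this inclination equals the deviation from a perpendicular hit, so smaller values mean stronger returns.

```python
# Illustrative geometry only: beam inclination to a vertical cliff face.
# Smaller inclination from horizontal means a more perpendicular hit on a
# vertical face, and hence a stronger return (cf. Buckley et al., 2006).
import math

def max_inclination_deg(cliff_height, scan_distance, scanner_height):
    """Worst-case beam inclination (degrees from horizontal) needed to reach
    the top or the base of the cliff from a scanner at scanner_height."""
    to_top = math.degrees(math.atan2(cliff_height - scanner_height, scan_distance))
    to_base = math.degrees(math.atan2(scanner_height, scan_distance))
    return max(to_top, to_base)

h, d = 100.0, 150.0   # hypothetical 100 m cliff scanned from 150 m away
print(max_inclination_deg(h, d, scanner_height=0.0))      # at the base: ~33.7
print(max_inclination_deg(h, d, scanner_height=h / 2.0))  # at half height: ~18.4
```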

The laser return can be obstructed by any object in the line of sight between the scanner and the outcrop. Consequently, care should be taken to select scan positions where the outcrop is not obscured by obstacles, such as shrubs, trees or other vegetation, large boulders, and masts or other man-made objects. Shadowing from minor obstacles can be avoided by scanning from different positions. However, laser returns from the obstacles may need to be manually removed during post processing. This can be time consuming and should be taken into consideration during outcrop selection. Precipitation will reflect or scatter the laser beam, making arid areas the most suitable for the technique.

The final consideration for the selection of study areas is the portability of the system and access to the outcrops. The total weight for the system used in this study is around 70 kg, with batteries generally needing to be charged daily. While lighter systems exist and helicopters can go anywhere, our studies suggest that it is practical to work within ∼2 km of vehicle access (which may include off-road jeeps and quad bikes). This can significantly affect outcrop selection.

In summary, suitable outcrops: (1) address the geological problem; (2) are accessible by vehicle; (3) have a high outcrop-area ratio (OAR); (4) can be scanned at a close to horizontal orientation; (5) have limited vegetation cover; and (6) are in arid areas.

Data Collection

Data collection in the field requires a laser scanner, a digital single-lens reflex (SLR) camera with photogrammetrically calibrated lenses, a dGPS setup, a laptop computer, tripod, mounting, batteries, and cables. The camera and one of the dGPS antennas are mounted on the scanner head, while the second dGPS antenna is located at a semi-permanent base station (Fig. 5). Software on the laptop controls both the scanner and the camera, and records the scans and the images taken for later texturing of the virtual outcrop. The GPS readings are stored in the GPS receivers and downloaded to the project at a later stage. Use of dGPS gives all of the scans a common coordinate system, which is extremely useful during later processing when results from numerous scan locations are merged.

During data collection, the scanner continuously emits a low-energy laser beam at the outcrop as it slowly rotates around its own axis, up to 360° but normally less than 180° for each scan position. The travel time of the reflected light is used to calculate the distance to the point of reflection on the outcrop. Together, the points produce a 3D point cloud. Using the azimuth, inclination, and distance of the laser return, the software calculates the XYZ coordinate for each scan point and ensures a consistent registration. The geometric relationship between the scanner and the mounted SLR camera is calibrated at the scan site by using reflectors (typically, six to eight) that are positioned in different locations within the scan. Recording the reflectors using the scanner and camera allows the mounting calibration to be updated, accounting for the very small change in camera position that occurs when the camera is removed to change a lens.
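
The conversion from a raw return to a coordinate is a standard spherical-to-Cartesian transformation. In the minimal sketch below, the angle conventions (azimuth clockwise from north, inclination upward from horizontal) are assumptions for illustration; actual instruments differ in their conventions.

```python
# Sketch: convert a laser return (azimuth, inclination, range) to local XYZ.
# Angle conventions are assumed: azimuth clockwise from north (+y),
# inclination upward from horizontal; real scanners differ in convention.
import numpy as np

def return_to_xyz(azimuth_deg, inclination_deg, distance, origin=(0.0, 0.0, 0.0)):
    az = np.radians(azimuth_deg)
    inc = np.radians(inclination_deg)
    horizontal = distance * np.cos(inc)   # projection onto the ground plane
    x = horizontal * np.sin(az)           # east
    y = horizontal * np.cos(az)           # north
    z = distance * np.sin(inc)            # up
    return np.asarray(origin) + np.array([x, y, z])

print(return_to_xyz(45.0, 10.0, 350.0))   # one hypothetical return
```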

Typically, a data set will have millions of points, each with an accuracy of around ±0.02 m at ranges of up to 600 m (Riegl, 2006). One or more series of automatically registered digital photos are taken together with each scan by the mounted camera. The scanner has a vertical spread of 80°, while the field of view of the camera lens is commonly less, requiring additional photos to be taken with the camera mount tilted. In case of poor lighting conditions during scanning, the camera can be dismounted from the scanner, and photos can be taken separately, even at a different time. These photos can be manually referenced, but must be taken with a camera that has been calibrated. The photos are used to color the point cloud (i.e., assign an RGB property to each point) and also to texture the processed outcrop model (i.e., they are draped onto the surface). The quality of the photographs is a key aspect that controls the quality of the final virtual outcrop. It is important that photos are taken in optimal lighting conditions, without strong shadows or haze. Obtaining optimal lighting conditions for each scan position will typically dictate fieldwork planning.

The time taken to collect data from a single scan position depends on the resolution and the width of the scan. Under typical field operating conditions, scanning and associated photography take around one hour, making it possible to collect eight to ten scans in a working day.

This project used a Riegl LMS-Z420i scanner (Riegl, 2006), together with a Nikon D100 camera and a set of dGPS receivers. Ashtech Solutions 2.70 was used for processing the GPS data. Riegl's own commercial software, RiSCAN PRO version 1.4.1, was used both to control the scanner and camera and for post processing of point clouds and generation of the virtual outcrop. Registration of the different scan positions was carried out in PolyWorks version 9.0.2, using a surface-matching approach to adjust the scan positions based on the overlap of each scan.

Data Processing and Generation of the Virtual Outcrop

The data processing that leads to the creation of a finished virtual outcrop is, at present, very labor intensive. The procedure comprises the following stages (Figs. 6 and 7):

Stage 1. Post Processing of the GPS Data to Include the Differential Correction

The GPS data for each scan position are processed relative to the static base station, so that errors are minimized.

Stage 2. Combination of Data from Single Scans into One Project

A single project will typically contain data from between three and twenty scan locations. These data are combined into one data set that will typically contain millions of data points. The use of a single project coordinate system makes it possible to combine an unlimited number of scan positions and also allows the integration of other data, such as sedimentary logs, within one reference system.
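
Conceptually, the combination step applies one rigid-body transformation per scan position, mapping scanner-local coordinates into the shared project system. A minimal sketch, assuming each scan's registration is already known as a rotation matrix and translation vector (derived in practice from the dGPS positions and surface matching):

```python
# Sketch: merge scans into one project coordinate system.
# Assumes each scan position's registration is known as a 3x3 rotation R
# and a translation t (e.g., from dGPS plus surface matching).
import numpy as np

def to_project_system(points, R, t):
    """points: (N, 3) array in scanner-local coordinates."""
    return points @ np.asarray(R).T + np.asarray(t)

def merge_scans(scans):
    """scans: iterable of (points, R, t) tuples; returns one (M, 3) cloud."""
    return np.vstack([to_project_system(p, R, t) for p, R, t in scans])

# Two hypothetical scans: identity registration, and a 90-degree
# rotation plus an offset for the second scan position.
R90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
scan_a = (np.random.rand(1000, 3) * 100.0, np.eye(3), np.zeros(3))
scan_b = (np.random.rand(1000, 3) * 100.0, R90, np.array([500.0, 120.0, 0.0]))
cloud = merge_scans([scan_a, scan_b])
print(cloud.shape)   # (2000, 3)
```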

Stage 3. Coloring of the Point Cloud

Data from the photographs can be used to add RGB (red, green, blue) values to each of the points. This produces an image resembling a somewhat grainy photograph. The colored point cloud can be used for the mapping and correlation of key surfaces and the identification of larger geobodies. Many groups working with lidar data focus almost exclusively on the colored point cloud. In the present study, a higher degree of detail was required than is obtainable using the point cloud alone, and, therefore, virtual outcrops based on textured surfaces were generated.
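
In principle, coloring amounts to projecting each 3D point into a calibrated photograph and sampling the pixel beneath it. The sketch below uses a plain pinhole-camera model and ignores lens distortion, which calibrated workflows correct for; it illustrates the idea, not the software's implementation.

```python
# Sketch: assign an RGB value to each point by projecting it into a
# calibrated photograph (pinhole model; lens distortion ignored).
import numpy as np

def color_points(points, image, K, R, t):
    """points: (N, 3) world coords; image: (H, W, 3) uint8;
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation/translation."""
    cam = points @ R.T + t                 # into camera coordinates
    uvw = cam @ K.T                        # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]          # pixel coordinates
    h, w = image.shape[:2]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    ok = (uvw[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    rgb = np.zeros((len(points), 3), dtype=np.uint8)   # unseen points stay black
    rgb[ok] = image[v[ok], u[ok]]
    return rgb

# Tiny demo with a synthetic grey 10x10 image and a simple camera:
img = np.full((10, 10, 3), 128, dtype=np.uint8)
K = np.array([[5.0, 0.0, 5.0], [0.0, 5.0, 5.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [0.3, -0.2, 4.0]])
print(color_points(pts, img, K, np.eye(3), np.zeros(3)))
```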

Stage 4. Point-Cloud Cleaning and Decimation

Given the limitations of currently available software, the raw, combined point cloud has to be modified before it can be triangulated. This involves a combination of both automated and manual processing. The procedure includes cleaning, through the manual removal of vegetation, and the fine tuning of sharp changes in topography and other features that can produce unwanted triangulation effects.

To be able to generate a useable triangulated model, the point cloud must be decimated. This involves removal of a certain proportion of the points to enable the surface to be triangulated and visualized on a typical computer. It is not unusual to remove 50% of the points, although this is not carried out in a uniform way. Built-in filter modes can perform different decimation operations, e.g., octree filtering, and manual editing can ensure that points are preferentially removed from areas of little interest (e.g., scree slopes and foreground), while detail is maintained in areas where it is required. Normally, processing using an octree filter will produce a satisfactory result. The raw data are stored so that higher resolution, triangulated virtual outcrops utilizing all available data can be built for smaller areas of special interest, if required.
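
Octree filtering is, in essence, adaptive spatial binning: the cloud is partitioned into cells and each occupied cell is reduced to a single representative point. The sketch below implements the uniform (voxel-grid) version of this idea as an illustration; the adaptive octree used by the processing software refines cell size locally, which this simplification does not.

```python
# Sketch: voxel-grid decimation -- a uniform simplification of the adaptive
# octree filtering described in the text. Each occupied cell of side
# `voxel` is reduced to the centroid of its points.
import numpy as np

def decimate(points, voxel=0.25):
    """points: (N, 3) array; returns one centroid per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)   # voxel index per point
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()                          # guard against shape quirks
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, points)                   # accumulate per voxel
    counts = np.bincount(inverse, minlength=len(uniq))
    return sums / counts[:, None]

cloud = np.random.rand(100_000, 3) * 50.0   # hypothetical 50 m cube of points
print(len(decimate(cloud, voxel=0.5)), "points kept of", len(cloud))
```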

Stage 5. Triangulation and the Creation of the DEM

The points of the decimated cloud are connected by triangles in a triangulation operation to form a mesh surface, or digital elevation model (DEM), that can be textured (see Bellian et al., 2005, and Buckley et al., 2006, for details). While this process is largely automated, it involves a series of user-defined parameters that are required to produce a reasonable surface, such as manually setting the maximum edge length and the angle between two adjacent triangles. In this project, RiSCAN PRO version 1.4.1 has been used for this purpose, although experimental triangulation has been performed using different software packages.
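
The role of the maximum-edge-length parameter can be illustrated with an open-source stand-in. The sketch below triangulates a near-planar (DEM-like) cloud in the horizontal plane with SciPy and discards triangles whose 3D edges exceed a cutoff, which suppresses spurious triangles bridging gaps in coverage; RiSCAN PRO's internal algorithm is proprietary and will differ, particularly on steep faces.

```python
# Sketch: 2.5D triangulation of a decimated cloud with a maximum-edge-length
# filter, illustrating one user-defined constraint mentioned in the text.
# (Not the RiSCAN PRO algorithm; valid only for near-planar clouds.)
import numpy as np
from scipy.spatial import Delaunay

def triangulate(points, max_edge=2.0):
    """points: (N, 3); triangulates in the XY plane and drops any triangle
    with a 3D edge longer than max_edge (metres)."""
    tri = Delaunay(points[:, :2]).simplices       # (M, 3) vertex indices
    a, b, c = points[tri[:, 0]], points[tri[:, 1]], points[tri[:, 2]]
    edges = np.stack([np.linalg.norm(a - b, axis=1),
                      np.linalg.norm(b - c, axis=1),
                      np.linalg.norm(c - a, axis=1)])
    return tri[edges.max(axis=0) <= max_edge]

pts = np.random.rand(500, 3) * [100.0, 100.0, 5.0]   # hypothetical gentle surface
print(len(triangulate(pts, max_edge=12.0)), "triangles kept")
```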

After the triangulated DEM is produced, it is prudent to carry out a visual quality check. It is commonly necessary to manually adjust the triangulated surface due to erroneous points and errors in the triangulation procedure. In-house software has been developed to create a difference surface that records the spatial difference between the DEM and the original point cloud. This surface illustrates the degree of spatial error that has been introduced by the triangulation process. It can be used to determine whether the quality of the virtual outcrop within the areas of interest matches the original, higher resolution point cloud (Fig. 8). Acceptable errors depend upon the proposed application of the virtual outcrop.
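
A simple approximation of such a difference check, assuming only generic tools rather than the in-house software, is to measure the distance from every original point to the nearest DEM vertex with a k-d tree. Nearest-vertex distance is an upper bound on the true point-to-mesh distance, so it serves as a conservative screen:

```python
# Sketch: QC the triangulated DEM against the original (undecimated) cloud.
# Nearest-vertex distance overestimates true point-to-mesh distance, so
# this is a conservative screen, not the in-house difference surface itself.
import numpy as np
from scipy.spatial import cKDTree

def triangulation_error(original_points, mesh_vertices):
    dist, _ = cKDTree(mesh_vertices).query(original_points)
    return dist

cloud = np.random.rand(50_000, 3) * 100.0   # hypothetical raw cloud
mesh_nodes = cloud[::10]                    # stand-in for the DEM vertices
err = triangulation_error(cloud, mesh_nodes)
print(f"median {np.median(err):.3f} m, 95th pct {np.percentile(err, 95):.3f} m")
```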

Stage 6. Texturing the DEM

The high-resolution digital imagery captured with the scans, or added to the project later, is used to render the triangulated mesh. The greater resolution of the image data allows continuous coverage of the required geological features. The rendering of the images is carried out automatically in RiSCAN PRO, which selects the optimum photograph for each triangle. This is typically efficient, but results are variable because the lighting conditions and quality of the image portions selected for adjacent triangles are often different. This problem can be partially mitigated by removing very poor photos from the project. It is also possible to manually adjust the colors and lighting of photos so that they are more similar. This is done using image-editing software, such as Adobe Photoshop.

The resultant textured DEM (the virtual outcrop) captures the outcrop morphology and detail. The virtual outcrop can be loaded into a viewer and examined from any angle, and used for correlation and training. As each pixel in the virtual outcrop has an XYZ position, measurements can be made and surfaces and features can be traced and digitized.

Working with the Virtual Outcrop

A variety of both commercial and freely available software can be used to visualize the virtual outcrop, although nothing designed specifically for geological study is yet available, and consequently, all have their limitations. In addition, in-house software has been developed that permits rapid viewing and manipulation of the large volumes of data on a typical personal computer.

Visual inspection of the data allows improved understanding of bedforms and bedform geometries, the correlation of key surfaces, and, depending upon resolution, improved understanding of facies geometries and transitions (Fig. 3). In addition to qualitative visual inspection, a key utility of the virtual outcrop is the ability to extract quantitative spatial data—either manually or in an automated fashion.

Manual data extraction involves the user viewing the virtual outcrop and manually digitizing points along a surface such as a bed boundary or fault plane. The points can then be stored and exported as individual points or polylines. Other forms of manual data extraction involve the measuring of surface strikes and dips using three user-selected points and the creation of sedimentary logs. In the latter, a polyline representing the log trace is highlighted on the virtual outcrop. Points along the line that represent bed boundaries are picked and used to generate a sedimentary log. The properties of the beds within the log are interpreted from the photographs and, ideally, calibrated to true field logs. Such logs can also be digitized and loaded into the reservoir modeling system as wells (Falivene et al., 2006). Faults can be mapped as planes, and accurate measurements of fault displacement along strike can also be made directly from the virtual outcrop, provided at least one continuous reference bed exists.
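
The three-point strike-and-dip measurement reduces to fitting a plane through the picked points. A minimal sketch, assuming an east-north-up coordinate system and a right-hand-rule strike convention (both assumptions for illustration):

```python
# Sketch: strike and dip from three points digitized on a surface.
# Assumes east-north-up coordinates; strike follows the right-hand rule.
import numpy as np

def strike_dip(p1, p2, p3):
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)        # normal to the plane
    if n[2] < 0:
        n = -n                            # force the normal to point upward
    dip = np.degrees(np.arccos(n[2] / np.linalg.norm(n)))
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0   # azimuth of dip
    strike = (dip_dir - 90.0) % 360.0
    return strike, dip, dip_dir

# Hypothetical picks on a surface dipping gently north (metres):
print(strike_dip((0, 0, 10), (100, 0, 10), (0, 100, 0)))
# -> strike 270, dip ~5.7 degrees, dip direction 000
```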

There is currently no commercially available software for the automated extraction of geologically meaningful spatial data from the virtual outcrop. Several research groups are working to create software using a number of novel approaches including algorithms similar to those used in the automated interpretation of seismic data (e.g., Monsen, 2006). This will be a significant growth area in the near future.

Once mapped and interpreted in three dimensions, the point, polyline, and log data can be exported to reservoir-modeling software to allow the construction of surface-based geocellular models. Data are typically exported in ASCII (American Standard Code for Information Interchange) formats that are suitable for the chosen software.
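
The export itself can be as simple as writing whitespace-delimited XYZ rows tagged with a polyline identifier. The column layout in the sketch below is hypothetical and would need to be matched to the import format of the chosen package:

```python
# Sketch: export digitized polylines as a simple ASCII XYZ file.
# Column layout (x y z line_id) is illustrative only; check the import
# format expected by the target modeling package.
def export_polylines(polylines, path):
    """polylines: dict mapping a line name to a list of (x, y, z) tuples."""
    with open(path, "w") as f:
        for name, pts in polylines.items():
            for x, y, z in pts:
                f.write(f"{x:.3f} {y:.3f} {z:.3f} {name}\n")

picks = {"clinoform_top_3": [(1000.0, 2000.0, 55.2), (1012.5, 2003.1, 54.8)]}
export_polylines(picks, "clinoform_picks.txt")
```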

Export to the Reservoir Modeling Package

Irap Reservoir Modeling System (Irap RMS) is a commercial reservoir-modeling package from Roxar that is widely used in the oil industry for the visualization and simulation of subsurface oil-field data. This package can also be used, with some modification, for handling and visualizing data extracted from the virtual outcrop. The definition and mapping of the intersection between a geological surface and the outcrop in the virtual outcrop produces a series of points and lines that are exported to Irap RMS as DXF or text files, using the Irap RMS internal data format (Roxar, 2006). Other data types, such as sedimentary logs, can also be digitized and loaded into the modeling software.

Model Building

A reservoir model is a 3D quantitative representation of a volume of rock within a computer. Reservoir modeling has become a necessary and integrated part of predicting, planning, and updating information concerning subsurface reservoirs, and the model also serves as a database comprising a large amount of geological, petrophysical, and general production data. The models have a wide range of applications and are used for calculating volumes, planning wells, and predicting the paths of fluids during production. Models are limited by the available geological data that are used to build them. Outcrop analogs from comparable systems can be used to provide additional input to models, especially in an early stage of field development when subsurface data are limited and there are no production data. During later field life, analogs are more commonly used to improve understanding of the geological system that has controlled production and as a quality check on history-matched dynamic models. Outcrops can be used to provide direct inputs for property modeling (e.g., shale bed lengths, fluvial channel width versus thickness; see Reynolds, 1999) and can be modeled to understand the behavior of a particular type of system.

The challenges associated with modeling outcrops are somewhat different from those faced when dealing with the subsurface. First, most reservoir modeling packages are designed for modeling subsurface reservoirs on a scale of several to tens of kilometers; although most of the tools and algorithms are scale independent, some adjustments are necessary when working with outcrop data. Second, outcrops provide very high resolution information that is spatially limited. While this is superior to the data available from well logs in the subsurface, considerable extrapolation is required away from, and between, cliff sections. Additionally, models built to study stratigraphic issues require the removal of later tectonic deformation (tilting, folding, or even faulting) that is not relevant to the use of the stratigraphic architecture as an analog. This is commonly done with specific structural restoration packages, such as 3DMove (Fernandez et al., 2004). Finally, many of the algorithms used for simulating fluid flow are dependent on certain pressure, depth, and temperature relationships. It can therefore be necessary to “move” the outcrop model to a typical reservoir depth (e.g., around 2000 m) for these calculations to be relevant to reservoir-related issues.

Building the models involves a series of stages, which are broadly similar to the procedure for modeling a subsurface data set (Fig. 9). These are discussed below.

Surfaces

When working with subsurface data, the first stage of the RMS modeling workflow is to import seismically mapped stratigraphic and structural (fault) surfaces. These surfaces are visually checked and tied in to the well data. The imported data are then used to build a structural, surface-based framework for the modeling. Surfaces form the framework and zone boundaries of the reservoir model and represent limits where changes in lithology and petrophysical properties occur. Faults are also represented by surfaces.

Virtual outcrop data are somewhat different. Polylines that represent the outcrop expression of a surface are not in themselves continuous surfaces; therefore, the surfaces need to be generated from them. This is done statistically, and RMS contains a variety of algorithms for the extrapolation of surfaces, each one producing somewhat different results from the same input data. A visual quality check and comparison with the conceptual geological model are used to determine which algorithm produces the best results. As a general rule, the global b-spline produces the most geologically realistic results. In many cases, manual editing of the surfaces away from the control points is required to further satisfy the conceptual model. RMS requires that all of the surfaces cover the entire model area and do not cross each other (Fig. 10). Editing is done by introducing guide points and guide contours, and by using trends to guide the surfaces in the correct direction.
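
The principle of generating a surface from polyline picks can be illustrated with generic scattered-data interpolation. In the sketch below, SciPy's thin-plate-spline interpolator stands in for RMS's global b-spline; the algorithms differ, so this shows only the concept of extrapolating a gridded surface from sparse control points, not the RMS implementation.

```python
# Sketch: extrapolate a gridded surface from digitized polyline points.
# SciPy's thin-plate spline is a stand-in for RMS's global b-spline.
import numpy as np
from scipy.interpolate import RBFInterpolator

def surface_from_polylines(control_xyz, grid_x, grid_y):
    """control_xyz: (N, 3) digitized points; returns z on the given grid."""
    xy, z = control_xyz[:, :2], control_xyz[:, 2]
    gx, gy = np.meshgrid(grid_x, grid_y)
    targets = np.column_stack([gx.ravel(), gy.ravel()])
    rbf = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=0.1)
    return rbf(targets).reshape(gx.shape)

# Hypothetical control points from two polylines on one stratigraphic surface:
ctl = np.array([[0, 0, 50.0], [50, 5, 48.5], [100, 10, 47.0],
                [0, 200, 52.0], [50, 205, 50.5], [100, 210, 49.0]])
z = surface_from_polylines(ctl, np.arange(0, 101, 10.0), np.arange(0, 211, 10.0))
print(z.shape)
```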

It is useful to generate a surface that represents the present-day topography to assist in the quality control of the stratigraphic surfaces (Figs. 10E and 10F). If required, the removal of tectonic dip can be performed in a separate software package, as discussed earlier in this paper. If suitable, stochastic algorithms can be used to reintroduce small-scale irregularities that are lost between outcrops (Falivene et al., 2004). When the surfaces have been created, scalar operations (e.g., depth = depth − 2000 m) can be used to move the surfaces (and thus the model) into a typical depth regime for a reservoir.

Building a Fault Framework

In traditional reservoir modeling of subsurface reservoirs, a key step after the initial data import and stratigraphic surface generation is building a structural framework based on the imported fault surfaces. In this paper, faults are a key feature of the Canyonlands case study (Figs. 4 and 11B); a minor fault has also been modeled in the Roda study area. The point data imported from the Canyonlands virtual outcrop represent fault polygons extracted from the exposed fault scarps and fault surfaces in the virtual outcrop. Having extracted the fault data and measured the displacement changes along strike in the virtual outcrop, we produced the fault model using the preexisting algorithms in the RMS software package, editing the resulting structural model as necessary (Fig. 11). The fault model is then used to re-grid the stratigraphic surfaces, accounting for the displacement. The fault model is also an important input to the modeling grid.

Grids and Grid Population

After the surfaces are generated and adjusted, they are used to create modeling zones. The 3D grid is created within each of the zones (Figs. 10B and 10C). The 3D grid is the cellular framework in which all of the facies and property modeling within RMS take place. Grid scale and design are based upon the scale and nature of the geology that is being modeled, and there is a degree of flexibility in the way in which a grid can be built. To create a modeling grid, it is necessary to define the grid type, the horizontal and vertical layout, and the cell truncation. The resolution selected is usually a compromise between the necessary resolution and computer memory limitations.

The grids need to be populated with properties; in most models, these are facies based (Fig. 12). In virtual outcrop models, properties at the outcrop are interpreted and placed directly into the appropriate grid cells or added from the sedimentary logs. Sedimentary logs imported as deviated wells help to constrain the model, and facies modeling can be conditioned on wells. Logs have to be “blocked,” or averaged, so that each cell in the grid contains only one property. There are a number of different ways in which this can be achieved; one common rule is sketched below.
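
One widely used blocking rule for discrete properties is "most-of" averaging: each cell takes the facies that occupies the greatest thickness within it. A minimal sketch under that assumption (the paper does not state which rule was used):

```python
# Sketch: "most-of" blocking of a facies log into grid layers -- each cell
# takes the facies occupying the greatest thickness within it. This is one
# of several possible blocking rules mentioned in the text.
import numpy as np

def block_log(sample_depths, facies_codes, layer_tops):
    """sample_depths: (N,) regularly sampled depths; facies_codes: (N,) ints;
    layer_tops: (M+1,) cell boundaries, increasing downward."""
    blocked = []
    for top, base in zip(layer_tops[:-1], layer_tops[1:]):
        codes = facies_codes[(sample_depths >= top) & (sample_depths < base)]
        blocked.append(np.bincount(codes).argmax() if codes.size else -1)
    return np.array(blocked)   # -1 marks cells the log does not penetrate

depths = np.arange(0.0, 10.0, 0.1)        # 0.1 m log sampling
facies = np.where(depths < 6.3, 1, 2)     # 1 = delta front, 2 = toeset
print(block_log(depths, facies, np.arange(0.0, 10.5, 0.5)))
```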

The population of grid cells away from the outcrop involves a degree of interpretation. Depending upon the conceptual geological model and prior knowledge, this can be achieved in a number of stochastic ways (using either Gaussian or Boolean-type approaches—e.g., Falivene et al., 2006; Holden et al., 1998; MacDonald and Halland, 1993) or by simple extrapolation of the facies body margins. In all cases, the data are conditioned to the outcrop observations. Grids are normally designed to follow the key geological heterogeneities specifically, because these control fluid flow from formation/reservoir level down to lamina and pore level (Weber, 1986). Finally, both surfaces and grids are adjusted to the mapped faults, and the grid is displaced accordingly. These faults potentially have a major influence on fluid flow.

Current limitations of computer hardware and software restrict the number of cells that can be represented within a cellular reservoir simulation model and, consequently, the resolution of the input data. To streamline the models and save memory, the 3D grids are often designed with a very large X and Y spacing and a much smaller Z spacing (e.g., 50 × 50 × 0.5 m). This design reflects the fact that, in most sedimentary systems, the properties are more homogeneous in the X and Y directions, and it is the Z direction that needs to be captured at higher resolution.

Clinoforms of the Roda Sandstone exhibit a systematic facies transition from delta front to toesets, with no sharp vertical facies changes (Figs. 5 and 12). Their thickness varies from up to a few meters in the proximal, up-dip portion to zero or close to zero in the distal, down-dip portion. This allows the individual clinoforms to be represented in the model as zones that are one cell thick. For example, a model covering 200 × 200 × 30 m and containing ten clinoforms may contain as few as 50–100 cells, although several thousand would be more typical. This is in contrast to a model of an entire delta lobe of 2000 × 2000 × 50 m, in which several hundred thousand cells may be used.

Model Investigation

The final models can be analyzed both statically and dynamically. Static examination involves the visual inspection and extraction of quantitative data on body geometry, including the extraction of body-size data such as those presented in Reynolds (1999), which can be used for the population of subsurface models where such data are not available. Dynamic investigation involves simulating the flow of fluids through the model to understand how it would behave as a reservoir. Dynamic simulation requires the assignment of petrophysical properties to the grid cells. Petrophysical data from analogous subsurface systems can be used to populate the models using a facies-based approach. Petrophysical measurements from outcrops do not necessarily give the desired analog values and can be difficult to collect correctly due to weathering or accessibility issues. On the other hand, petrophysical data from outcrops can give a higher sample resolution and better control, if collected in a systematic manner, e.g., by a facies approach (e.g., Forster et al., 2004).

All faults are also assigned values for transmissibility. This value determines the degree to which a fault in the model permits fluids to pass across it. Fault transmissibility has a significant impact on reservoir performance, but the lack of such data for subsurface faults often makes the assignment of transmissibility values little more than guesswork. By using realistic values based on empirical data from actual faults in the field, a greater understanding of how faults affect fluid flow in subsurface reservoirs can be achieved.

For the Canyonlands reservoir model, fluid-flow simulation has been conducted, demonstrating the final part of the workflow. As an initial approach, an experiment was devised to investigate the effect of the fault framework and the presence of relay ramps on flow. Both two-phase fluid-flow and streamline simulation models were run. Streamlines follow the pathway of a particle of fluid through the volume at different time steps. To isolate the effects of the faults, the bulk rock properties were set as a homogeneous volume. Porosity was set at 30% for the entire model, and permeability was set to 1000 mD (Kh) and 100 mD (Kv). Net/gross ratio was set at one. Fault transmissibility was set to zero to make the faults completely sealing.
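
The paper does not state which simulator was used; purely as an illustration, the homogeneous setup above maps onto a handful of property keywords in an ECLIPSE-style input deck, generated here from Python. The grid dimensions and fault names are hypothetical, and the MULTFLT multipliers of zero reproduce the completely sealing faults.

```python
# Sketch: write the homogeneous property setup as ECLIPSE-style keywords.
# Grid dimensions and fault names are hypothetical; the paper does not
# state which simulator was used.
NX, NY, NZ = 40, 40, 20
n = NX * NY * NZ

with open("props.inc", "w") as f:
    f.write(f"PORO\n{n}*0.30 /\n\n")       # 30% porosity everywhere
    f.write(f"PERMX\n{n}*1000.0 /\n\n")    # Kh = 1000 mD
    f.write(f"PERMY\n{n}*1000.0 /\n\n")
    f.write(f"PERMZ\n{n}*100.0 /\n\n")     # Kv = 100 mD
    f.write(f"NTG\n{n}*1.0 /\n\n")         # net/gross ratio of one
    f.write("MULTFLT\n'F_EAST' 0.0 /\n'F_WEST' 0.0 /\n/\n")  # sealing faults
```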

Two wells were placed on opposite sides of the overstepping fault system—one injection well and one production well (Fig. 13). A profile between the two wells illustrates the lack of two-dimensional lateral connectivity due to the faults, while a visual inspection of the 3D model would predict some flow between the wells; flow simulation allows this to be quantified (Figs. 13 and 14). Water is injected into the injection well while the production well produces fluid (oil) until the water reaches it (water breakthrough). The results shown in Figures 15 and 17 demonstrate that, despite the apparent connectivity problems shown in Figure 14, communication is preserved, due to the overlap between the faults in the graben overstep area. Simulation of a comparable but unfaulted volume illustrates that the presence of the faults increases the tortuosity of the flow path and delays the time of breakthrough (Fig. 16).

The influence of the faults can be quantified by running streamline simulations on unfaulted and faulted models. This simulation was undertaken with three different petrophysical setups in the host rock (Table 1) to determine whether the faults have more or less effect in lower (or higher) permeability settings.

The simulation shows that the lower the permeability and porosity, the greater the dissimilarity between the faulted and unfaulted reservoir. In other words, in this particular case, better reservoir quality lowers the influence of the faults (Table 1). The streamline results were confirmed by two-phase fluid-flow simulations (Fig. 17). These detailed results are beyond the scope of this paper and will be published elsewhere.

Error Examination

There is high potential for errors to exist and propagate throughout the workflow. The aim of this work is a general improvement in reservoir modeling by using accurate spatial data, while minimizing the error sources at each processing stage to minimize the overall uncertainty in the final model. In the past, with only approximations of geometric information taken at discrete intervals during sedimentary logging, undefined error could be introduced into the extrapolation of geological surfaces (e.g., Jones et al., 2004; Pringle et al., 2004b). This, in turn, could result in further error when a 3D grid was made, affecting the geometry of the resulting model, any volumetric calculations made, and fluid-flow simulations. Modeling of the outcrop geometry using terrestrial laser scanning gives far better constraints on the available outcrop exposure. The stratigraphic layers can be followed continuously, instead of being sampled only at discrete intervals. This means that the geological surfaces are likely to be defined with higher accuracy, which in turn allows more accurate reservoir models.

CONCLUSIONS

Geometric data from outcrops and the modeling of outcrops using subsurface technology have started to bridge the gap between wellbore and seismic methods and to fill the gaps in our understanding of the 3D geometries of geological subsurface entities. Qualitative and quantitative outcrop analog studies can be used for this purpose.

In this study, we have presented new methodologies for the acquisition and utilization of 3D information generated by the ground-based laser scanning (lidar) of geological outcrops.

In particular, the focus has been on (1) the accurate representation of geological entities from outcrops within a computer (referred to in this paper as a virtual outcrop); (2) utilizing the virtual outcrop to extract data for building and testing 3D geocellular models using conventional hydrocarbon reservoir-modeling software; and (3) applications of the collected data and the virtual outcrop.

This paper documents a complete workflow—from outcrop selection through data collection, processing and building of a virtual outcrop, and geological interpretation—to the building of the 3D geocellular models. The workflow is illustrated with two case studies that demonstrate its application to sedimentary and structural reservoir-geology problems.

Outcrops are selected based on four criteria: suitability to the problem, level of three-dimensionality, outcrop quality, and accessibility.

The data processing that leads to the creation of a finished virtual outcrop is, at present, very labor intensive. The procedure comprises the following stages: (1) post processing of the GPS data to include the differential correction; (2) combination of data from single scans into one project; (3) coloring of the point cloud; (4) point-cloud cleaning and decimation; (5) triangulation and the creation of the DEM; and (6) texturing the DEM.

A variety of commercial, freely available, and in-house software is used to visualize and process the virtual outcrop. Once mapped and interpreted in three dimensions, the point, polyline, and log data can be exported to reservoir-modeling software to allow the building of surface-based geocellular models.

Models are limited by the availability of the geological data that are used to build them. Outcrop analogs from comparable systems can be used to provide additional input to models, especially in an early stage of field development when subsurface data are limited. During later field life, analogs are more commonly used to improve understanding of the geological system and for quality checking history-matched dynamic models. Outcrops can be used to provide direct inputs for property modeling. They can also be modeled to understand the behavior of a particular type of system. Building the models involves a series of stages, which are broadly similar to the procedure for modeling a subsurface data set.

In contrast to seismically mapped stratigraphic and structural (fault) surfaces from the subsurface, polylines that represent the outcrop expression of a surface are not in themselves continuous surfaces. Therefore, the surfaces need to be statistically generated from them. After the surfaces are generated and adjusted to faults, they are used to create modeling zones. The 3D grid is created within each of the zones and is the cellular framework in which all of the facies and property modeling within RMS take place.

The final models can be analyzed both statically and dynamically. Static examination involves the visual inspection and the extraction of quantitative data on body geometry. Dynamic investigation involves simulating the flow of fluids through the model to understand how it would behave as a reservoir. Dynamic simulation requires the assignment of petrophysical properties to the grid cells.

There is high potential for errors to exist and propagate throughout the workflow. The aim of this work is a general improvement in reservoir modeling by using accurate spatial data, while minimizing the error sources at each processing stage to minimize the overall uncertainty in the final model. Consequently, geometric data from outcrops, and, most recently, the modeling of outcrops, has started to bridge the gaps in our understanding of the 3D geometries of geological subsurface entities.

ACKNOWLEDGMENTS

This work was sponsored by the Norwegian Research Council and Statoil via the RUM project. We are grateful to Tony Reynolds and Jamie Pringle for constructive reviews of the initial version of this manuscript. We acknowledge all of our collaborators within the Virtual Outcrop Geology program at the Centre for Integrated Petroleum Research for assistance with fieldwork and software issues. We thank Statoil for permission to use the satellite photo in Figure 4B. Thanks to Roxar for providing the RMS software package and to Midland Valley for supplying 3DMove.