Abstract

Lidar (light detection and ranging) data provide a centimeter-scale–resolution digital outcrop model. This technology supplements and improves conventional outcrop investigations by providing ways for geoscientists to digitally visit and analyze outcrops on their computers or workstations.

Our current processing workflow includes creation of an optimized, triangulated surface, onto which high-resolution photographs are rectified and draped. For optimal resolution, lidar data should be acquired along a direction perpendicular to the outcrop face. Field constraints, such as sea cliffs or exposures without a good vantage point, sometimes necessitate scanning the outcrop obliquely. However, acquiring lidar data from an oblique direction creates large shadows (zones of no data) and anomalously elongated triangles. Using a three-dimensional transformation matrix that modifies the direction of triangulation, we can correct for the elongated triangles. In addition, combining multiple scans shot from different angles minimizes data shadows. This procedure yields an optimized triangulated surface as if the outcrop had been scanned from the inaccessible, ideal vantage point, without altering the position or density of the original digital data. This angle-correction method is essential to accurate photo draping and virtual-reality model creation.

INTRODUCTION

Ground-based lidar scanning technology is becoming more widely used across a range of earth science disciplines that require accurate surveying of Earth's topography. Uses include mining surveys (Fardin et al. 2004), civil engineering (Louden 2002; Frueh and Zakhor 2003; Hsiao et al. 2004), forestry (Lovell et al. 2003), and geologic outcrop studies (Westerman et al. 2003; Bawden et al. 2004; Bellian et al. 2005; McCaffrey et al. 2005; Janson et al. 2007). This paper focuses on outcrop applications.

Ground-based lidar technology is based on the travel time of a laser beam between the source (scanner) and the target (outcrop) (Optech 2006a). Sweeping the laser beam across the scanning window allows creation of a three-dimensional (3D) point cloud of the outcropping rock face at 1-cm resolution. These point clouds are then commonly converted into triangulated surfaces onto which high-resolution digital photographs are draped to create virtual-reality models of Earth's surface. The triangulation step is critical to the quality and accuracy of the final, photo-draped, virtual model.

Quality of the virtual-reality model also depends on (1) the density of points and (2) the scanning angle used in the triangulation. In ideal acquisition conditions, the lidar scan is acquired perpendicular to the outcrop face. The resulting density of points is optimal, and the final triangulated surface has a minimum of holes resulting from acquisition shadows and a minimum of elongated triangles. In difficult-to-access outcrops (e.g., a high or remote sea cliff), scanning normal to the outcrop face is commonly not possible. Scans are instead taken at an angle oblique to the rock face, which decreases the density of points available for triangulation. Moreover, data holes, or acquisition shadows, are more numerous, resulting in a lower-quality final triangulated model with data gaps and elongated triangles.

Researchers at the Bureau of Economic Geology, The University of Texas at Austin, have developed a methodology for improving the final triangulated mesh and reducing elongated triangles on the basis of a correction of the 3D transformation matrix used for triangulation. The result is a triangulated mesh that corrects for scans shot at severely oblique angles.

In this paper, we first review acquisition and processing of ground-based lidar data in optimal conditions (Bellian et al. 2005). We then present a method that corrects for non-normal point clouds and renders an optimized triangulated grid of difficult-to-access outcrops; the method corrects the distortion effects that arise when outcrops are scanned obliquely. We demonstrate the benefit of this method using two examples, one from a sea-cliff outcrop and one from an outcrop partly masked by trees.

ACQUISITION AND PROCESSING OF GROUND-BASED LIDAR DATA

Instrument and Acquisition Method (Figure 1)

The ground-based lidar instrument used in these studies is an Optech ILRIS-3D, which uses a class 1 laser at 1500-nm wavelength and a data sampling rate of 2500 points/second (Optech 2006b). A beam of laser light is directed at the outcrop, and the time for its return is measured and converted into a distance. Using the distance and the laser-beam angle, each reflection point is accurately positioned in space. The process is repeated thousands of times per second to produce a 3D point cloud.
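
To make the geometry concrete, the sketch below shows one way a single laser return could be converted into a 3D point in the scanner's own coordinate frame, assuming a simple time-of-flight range and a spherical-to-Cartesian angle convention; the function name and conventions are illustrative, not the instrument's actual internal processing.

```python
import numpy as np

C = 299_792_458.0  # speed of light, approximated by its vacuum value (m/s)

def point_from_return(two_way_time_s, azimuth_rad, elevation_rad):
    """Convert one laser return into a 3D point in the scanner frame.

    Hypothetical sketch: range from two-way travel time, then a
    spherical-to-Cartesian conversion using the beam angles.
    """
    rng = C * two_way_time_s / 2.0                          # one-way distance (m)
    x = rng * np.cos(elevation_rad) * np.sin(azimuth_rad)
    y = rng * np.sin(elevation_rad)                         # vertical component
    z = rng * np.cos(elevation_rad) * np.cos(azimuth_rad)   # along the line of sight
    return np.array([x, y, z])
```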

Ground-based lidar is commonly used to scan outcrops consisting of near-vertical cliffs and rocky slopes along canyon walls and hillsides. The first step in the acquisition process is to set up the scanner, aim it at the outcrop, and define a scanning window and a scanning resolution (typically 2–7 cm between points). A high-resolution photograph of the same area is taken and used later for photo draping. Typical scanning time for a 50-m × 150-m scene ∼500 m from the scanner at 5-cm point spacing is ∼20 min.
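
As a quick plausibility check on the quoted acquisition time, using only figures stated in the text (5-cm spacing, 2500 points/second):

```python
# 150-m-wide by 50-m-high scene sampled every 5 cm
points_horizontal = 150 / 0.05                         # 3000 samples across
points_vertical = 50 / 0.05                            # 1000 samples down
total_points = points_horizontal * points_vertical     # 3.0 million points
minutes = total_points / 2500 / 60                     # at 2500 points/second
print(minutes)                                         # -> 20.0, matching the ~20 min quoted
```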

Because the ILRIS-3D scanner field of view is 40° both horizontally and vertically, large outcrops are acquired by a succession of individual scans (Fig. 1). Approximately 10% overlap between successive scans is required for the registration process of multiple scans to produce a mosaic (Bellian et al. 2005) (Fig. 1).

Registration (Figure 2)

Individual scans consist of a point cloud that has its own relative coordinate system, the origin of which is the scanner itself. The merging process permits the transfer of multiple scans into a single coordinate system by imposing a transformation matrix onto successive point clouds. This process involves repositioning one scan with reference to another using an iterative closest-point algorithm (ICP) (Besl and McKay 1992) and is implemented using the Polyworks IMAlign software. The transformation matrix describing the position of each scan in space contains both the 3D rotation component and the 3D translation component (Shirley 2002) (red and green boxes, respectively, in Fig. 2).
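
The transformation matrix can be pictured as a standard 4 × 4 homogeneous matrix whose upper-left 3 × 3 block holds the rotation and whose last column holds the translation (the red and green components of Fig. 2). The sketch below, with hypothetical helper names, shows this generic construction and how it maps points between coordinate systems; it is not the Polyworks internal representation.

```python
import numpy as np

def make_transform(rotation_3x3, translation_xyz):
    """Assemble a 4x4 homogeneous transformation matrix from its parts."""
    T = np.eye(4)
    T[:3, :3] = rotation_3x3      # 3D rotation block
    T[:3, 3] = translation_xyz    # 3D translation column
    return T

def apply_transform(T, points_nx3):
    """Map an (N, 3) array of points into the reference coordinate system."""
    homogeneous = np.c_[points_nx3, np.ones(len(points_nx3))]   # (N, 4)
    return (homogeneous @ T.T)[:, :3]
```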

The first scan loaded in Polyworks IMAlign software serves as a reference (blue scan in Fig. 2). The subsequent adjacent scan (yellow scan in Fig. 2) has its own coordinate system and therefore does not have the correct position in the reference scan space.

The transformation matrix associated with each point cloud initially describes the scan's own coordinate system. Therefore, the matrix describing the 3D rotation and translation is initially an identity matrix, a square matrix with ones on the main diagonal and zeros elsewhere.

To register the second scan in the reference space, we manually pick several common points on the two scans and use them to calculate initial rotation and translation values for the ICP algorithm. From these initial values, the ICP algorithm mathematically optimizes the fit between the two scans by minimizing a mean-square distance metric over the six degrees of freedom (Besl and McKay 1992) until it reaches a user-defined RMS (root mean square) error. The result of the registration is stored as the transformation matrix, and the reference scan keeps its original identity matrix.
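
The sketch below illustrates the closest-point iteration in a generic form, refining an initial rotation and translation derived from the manually picked tie points; the tolerance, iteration cap, and function names are illustrative and do not reproduce the IMAlign implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(moving, reference, R0, t0, rms_tol=0.02, max_iter=50):
    """Minimal ICP: iteratively match closest points and re-solve the transform."""
    tree = cKDTree(reference)
    R, t = R0, t0
    for _ in range(max_iter):
        moved = moving @ R.T + t
        dist, idx = tree.query(moved)                  # closest reference point per point
        R, t = best_fit_transform(moving, reference[idx])
        if np.sqrt(np.mean(dist ** 2)) < rms_tol:      # stop at the RMS threshold
            break
    return R, t
```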

Triangulation (Figure 3)

Photographs and lidar scans can be merged to create a virtual-reality model of an outcrop that can be displayed in a standard Web viewer. The first step toward this product is the triangulation and optimization of the full-resolution, merged, 3D point cloud.

Using in-house software and each scan's transformation matrix, we translate and rotate the data back to their original coordinate systems. At this stage, to ensure that there are no overlapping Z values, the points are also collapsed onto an imaginary two-dimensional (2D) plane perpendicular to the scanning direction, based on the information stored in the transformation matrix (Fig. 3B). The perpendicular plane is calculated using the translation component of the transformation matrix. This step is crucial in our triangulation process because the quality of the triangulation ultimately depends on this projection, which is made along the shooting direction.
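
One way to implement this collapse, assuming the shooting direction is taken from the scanner position stored in the translation component toward the center of the scan (names and conventions are illustrative):

```python
import numpy as np

def collapse_onto_plane(points_nx3, scanner_position, scan_center):
    """Project points onto a plane perpendicular to the shooting direction.

    Returns in-plane (u, v) coordinates plus the depth along the shooting
    direction, which becomes the single-valued "Z" used for regridding.
    Assumes the shooting direction is not vertical.
    """
    d = scan_center - scanner_position
    d = d / np.linalg.norm(d)                       # unit shooting direction
    up = np.array([0.0, 0.0, 1.0])
    u = np.cross(up, d); u /= np.linalg.norm(u)     # horizontal in-plane axis
    v = np.cross(d, u)                              # near-vertical in-plane axis
    rel = points_nx3 - scanner_position
    return rel @ u, rel @ v, rel @ d
```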

The irregular point cloud is then regridded onto this pseudo-2D plane using a weighted inverse-distance algorithm, and a triangulated mesh is created that honors the regridded point cloud (Figs. 3C and 3D). The triangulated surface is then optimized by reducing the number of triangles using a quadric error metric algorithm (Garland and Heckbert 1997) implemented in the Qslim software (Fig. 3E). The algorithm reduces the number of triangles by vertex-pair contraction, which leads to fewer triangles on flat surfaces and more triangles where surface rugosity is high, thereby preserving the precise shape of the surface (Fig. 3E). Finally, each Qslim output file is converted to a standard VRML (Virtual Reality Modeling Language) file for viewing in a 3D-enabled Web browser. During this conversion, the original geometry of each data set is restored by reapplying the transformation matrices and reversing the steps by which the data were collapsed onto a plane.
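
A sketch of the inverse-distance-weighted regridding of the collapsed coordinates onto a regular grid is shown below; the cell size, neighbor count, and power are assumptions, and the resulting regular grid can then be triangulated cell by cell (two triangles per cell) before decimation in Qslim.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_regrid(u, v, depth, cell=0.05, k=8, power=2.0):
    """Resample scattered (u, v, depth) samples onto a regular 2D grid
    by inverse-distance weighting of the k nearest samples."""
    depth = np.asarray(depth)
    tree = cKDTree(np.c_[u, v])
    gu = np.arange(u.min(), u.max(), cell)
    gv = np.arange(v.min(), v.max(), cell)
    UU, VV = np.meshgrid(gu, gv)
    dist, idx = tree.query(np.c_[UU.ravel(), VV.ravel()], k=k)
    w = 1.0 / np.maximum(dist, 1e-12) ** power     # avoid division by zero
    grid = (w * depth[idx]).sum(axis=1) / w.sum(axis=1)
    return UU, VV, grid.reshape(UU.shape)
```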

ACQUISITION AND PROCESSING OF GROUND-BASED LIDAR DATA IN DIFFICULT-TO-ACCESS OUTCROP

Sometimes, constraints such as human-made obstructions, trees, and sea cliffs prevent scanning of the outcrop along the optimal direction, perpendicular to the outcrop face (Fig. 4).

In the example shown in Figure 4, station #2 cannot be used to scan area #2 because of the trees obstructing the outcrop. Area #2 requires an oblique scan from station #1, station #3, or both. The following section first describes how a triangulated surface is created using a single oblique scan, with and without correction for the oblique scanning direction. It then describes how a triangulated surface is created using a combination of two oblique scans with a scanning-direction correction.

Triangulated Surface Using a Single Scan and Two Different Transformation Matrices

Acquisition of lidar data at an oblique angle results in fewer scanned points per outcrop area. Objects or topography in the foreground can mask areas in the background and create zones of no data, or data shadows. These shadows produce holes in the 3D point cloud that are not apparent if the point cloud is examined along the scanning direction (Fig. 5).

Figure 6A shows a point cloud triangulated using the original transformation matrix, which corresponds to an oblique shooting position. The triangulation process includes regridding the point-cloud data onto a regular 2D rectangular grid, represented by the blue vertical plane. The surface on the right shows the resulting triangulated surface. Because the projection plane is oblique to the outcrop face, the resulting triangulation has highly elongated triangles oriented along this oblique direction.

Because the information stored in the transformation matrix is used to collapse and regrid the point cloud, the matrix can be modified artificially to collapse the point cloud onto a more appropriately oriented plane. To modify the virtual scanning position, only the values corresponding to the translation are changed in the transformation matrix. Figure 6B shows the same point cloud triangulated using a transformation matrix with a "modified" scanner position that mimics ideal shooting conditions; in essence, it artificially reproduces the shooting conditions from station #2. Modifying the translation values in the transformation matrix does not change the data, only the angle of projection used for regridding. Note that the resulting triangulated surface (Fig. 6B) does not show the anomalously stretched triangles but still has a data shadow because a single point cloud is used.
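
In terms of the 4 × 4 matrix sketched earlier, this manipulation amounts to overwriting only the translation column with a virtual scanner position; a minimal schematic, with hypothetical names:

```python
import numpy as np

def virtual_station(transform_4x4, virtual_scanner_position):
    """Copy a transformation matrix and replace only its translation component.

    The rotation block and the point data are untouched; only the projection
    direction used for collapsing and regridding changes.
    """
    T = transform_4x4.copy()
    T[:3, 3] = virtual_scanner_position   # e.g., a point directly in front of area #2
    return T
```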

Triangulated Surface Using a Composite Point Cloud and a Modified Transformation Matrix

To maximize data coverage of the target area, two or more scans shot obliquely can be used together. First, the point clouds are concatenated to form one composite point cloud, which does not initially have a transformation matrix associated with it. A composite transformation matrix is then created by combining the rotation values from the initial point cloud with an optimized translation value that mimics a shooting direction perpendicular to the composite point cloud. Figure 7 shows a composite point cloud created from two scans shot from stations #1 and #3, respectively (see Fig. 4 for the position of the stations). The transformation matrix used for triangulation is the same optimized matrix used to create the surface of Figure 6B. The resulting triangulated surface (Fig. 7B) has fewer data shadows and a better triangulation than either surface shown in Figure 6.
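
Schematically, and assuming both scans have already been registered into the reference frame, the composite cloud and its matrix could be built as follows (names are illustrative):

```python
import numpy as np

def composite_cloud(cloud_a, cloud_b, rotation_3x3, perpendicular_position):
    """Concatenate two registered point clouds and build a composite
    transformation matrix: rotation reused from the initial scan, translation
    set to a virtual station perpendicular to the combined cloud."""
    merged = np.vstack([cloud_a, cloud_b])
    T = np.eye(4)
    T[:3, :3] = rotation_3x3
    T[:3, 3] = perpendicular_position
    return merged, T
```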

APPLICATION/EXAMPLE/LIMITATION

The following section presents two applications of the optimized workflow in which it was not possible to shoot perpendicular to the outcrop face because obstacles blocked the face-on view; both required acquisition of oblique scans.

Sea-Cliff Outcrop: Ross Sandstone, Ireland

The Ross Sandstone Formation crops out along dipping sea cliffs in western Ireland. Because the ground-based lidar scanner requires a static base, it is not possible to shoot from a moving platform, and a boat therefore could not be used to acquire optimally oriented data. Consequently, the outcrop required more than 50 scans taken from various oblique vantage points (Fig. 8).

A composite point cloud was created from the multiple scans. Figure 9B shows the improved triangulated surface created using a modified transformation matrix, whereas Figure 9A shows the triangulated surface created using one of the existing oblique transformation matrices. The optimized surface reproduces the bedding architecture with much greater fidelity than does the uncorrected surface.

Outcrop Partly Masked by Trees: Dana Point, California

The second example used to illustrate the improved processing workflow comes from exposures of Tertiary sandstone cropping out along the seashore near Dana Point, California. This outcrop is partly masked by trees; in Figure 10A, the line shows the upper limit of the vegetation.

A single scan acquired perpendicular to the outcrop would result in a significant proportion of the exposed area being lost because of the obstructing vegetation (Fig. 10B). To image the masked area, several scans were shot from oblique locations (Fig. 11A). Using all of these oblique scans, we created a composite point cloud that provides data on the area behind the obstruction. Figure 11B shows the resulting triangulated surface, created using the transformation matrix from the perpendicular scan, which renders both the data from the original scan and those masked by vegetation.

Limitation

There is no theoretical limit to the obliquity that can be corrected. In practice, however, a single scan shot from an extremely oblique angle will result in a large data shadow and poor image resolution once reprojected to a perpendicular position. With increasing shooting angle, an increasing number of scans should be used to create the composite point cloud for angle-corrected triangulation.

The process of merging different lidar scans of the same area to create a composite point cloud is valid only if each individual scan is accurately registered. Any registration error would be carried into the composite scan and potentially create an artifact during the triangulation of the composite.

Potential limitations of the methods described here arise where the topography involves overhangs, caverns, or reentrants. These features create multiple Z values for a single X–Y position, where Z is parallel to the shooting direction, X is perpendicular to the shooting direction, and Y is vertical. A composite point cloud made of multiple scans shot from various directions will image these features better. The triangulation method described in this paper, however, is a global triangulation that projects the point cloud along the Z direction onto a plane; this projection cannot be used for triangulation where multiple, ambiguous Z values occur at a given X–Y location. To reproduce topography with multiple Z values adequately, local implicit triangulation methods should be used (Bloomenthal 1988; Linsen and Prautzsch 2001).
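
As a crude diagnostic, not part of the published workflow, one could flag locations where the projection becomes multivalued by binning the point cloud in X–Y and measuring the spread of Z within each bin; the cell size and threshold below are arbitrary.

```python
import numpy as np

def flag_overhangs(x, y, z, cell=0.25, max_spread=0.5):
    """Return the set of X-Y cells whose Z range exceeds max_spread (meters),
    indicating overhangs or reentrants where a single-valued projection
    (and hence a global 2.5D triangulation) breaks down."""
    cells = {}
    ix = np.floor(np.asarray(x) / cell).astype(int)
    iy = np.floor(np.asarray(y) / cell).astype(int)
    for i, j, zz in zip(ix, iy, np.asarray(z)):
        lo, hi = cells.get((i, j), (zz, zz))
        cells[(i, j)] = (min(lo, zz), max(hi, zz))
    return {key for key, (lo, hi) in cells.items() if hi - lo > max_spread}
```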

SUMMARY

Ground-based lidar data are increasingly being used to create high-resolution models of Earth's surface. These high-resolution data are ultimately converted into optimized triangulated surfaces used for photo draping or for 3D visualization and interpretation. In optimal conditions, ground-based lidar data are acquired perpendicular to the outcrop face. Doing so minimizes data shadows and distortion and produces a relatively simple triangulated surface. A standard triangulation algorithm commonly requires regridding of the initial, irregularly sampled point cloud, using the scanning direction as an input parameter for the projection and regridding of the data. However, acquisition normal to the rock face is not always possible, and scans are often acquired from an oblique position. We have developed a workflow that produces optimized triangulated surfaces by merging multiple oblique scans and correcting the distortion through manipulation of the transformation matrix.

This workflow allows us to create optimized virtual-reality models for difficult-to-access outcrops such as sea cliffs or outcrops where exposure is limited by trees or buildings. Without this method, virtual-reality models of such outcrops would have poor data coverage and/or extremely deformed triangulated surfaces. The methodology is especially beneficial where stratigraphic interpretation is carried out directly on photo-draped 3D models. In addition, vegetation and other obstacles that alter the view of exposures can be digitally removed to create a more continuous outcrop rendering that enhances stratigraphic analysis and mapping of sedimentary bodies.

ACKNOWLEDGMENTS

The authors wish to thank the Laser consortium industrial sponsors. Xavier Janson is thanked for his help editing the manuscript. We also thank Jerry Bellian, Marc Tomasso, Renaud Bouroullec, and David Pyles for their help with acquisition and processing of the lidar data. This manuscript benefited greatly from thorough editing by Lana Dietrich. Publication authorized by the Director of the Bureau of Economic Geology. Peter White, an anonymous reviewer, and Geosphere associate editor Richard Jones are gratefully acknowledged for their comments, which helped improve the quality of this article.