Abstract
Three-dimensional (3D) slip vectors recorded by displaced landforms are difficult to constrain across complex fault zones, and the uncertainties associated with such measurements become increasingly challenging to assess as landforms degrade over time. We approach this problem from a remote sensing perspective by using terrestrial laser scanning (TLS) and 3D structural analysis. We have developed an integrated TLS data collection and point-based analysis workflow that incorporates accurate assessments of aleatoric and epistemic uncertainties using experimental surveys, Monte Carlo simulations, and iterative site reconstructions. Our scanning workflow and equipment requirements are optimized for single-operator surveying, and our data analysis process is largely completed using new point-based computing tools in an immersive 3D virtual reality environment. In a case study, we measured slip vector orientations at two sites along the rupture trace of the 1954 Dixie Valley earthquake (central Nevada, United States), yielding measurements that are the first direct constraints on the 3D slip vector for this event. These observations are consistent with a previous approximation of net extension direction for this event. We find that errors introduced by variables in our survey method result in <2.5 cm of variability in components of displacement, and are eclipsed by the 10–60 cm epistemic errors introduced by reconstructing the field sites to their pre-erosion geometries. Although the higher resolution TLS data sets enabled visualization and data interactivity critical for reconstructing the 3D slip vector and for assessing uncertainties, dense topographic constraints alone were not sufficient to significantly narrow the wide (<26°) range of allowable slip vector orientations that resulted from accounting for epistemic uncertainties.
INTRODUCTION
Terrestrial laser scanning (TLS) is increasingly used to address geomorphic and geologic problems requiring high-density (10²–10⁴ points/m²) topographic quantification of surface morphology. Examples include investigations of earthquake rupture dynamics (e.g., Brodsky et al., 2011; Jones et al., 2009; Renard et al., 2006; Sagy and Brodsky, 2009; Sagy et al., 2007), deformation from surface-rupturing earthquakes (e.g., Gold et al., 2010; Gold et al., 2011; Oldow and Singleton, 2008; Wei et al., 2010; Wilkinson et al., 2010), structural and stratigraphic architecture of sediments and stacked flood basalts (e.g., Nelson et al., 2011; Wilson et al., 2009), and fluid-reservoir characteristics (e.g., Bellian et al., 2007, 2005; Enge et al., 2007; Labourdette and Jones, 2007; Olariu et al., 2008). In addition, repeat or time series TLS scans have been used to capture changes during hillslope denudation (Wawrzyniec et al., 2007) and ocean beach erosion (Pietro et al., 2008), as well as rock-fall volumes (Rabatel et al., 2008; Stock et al., 2011), landslide kinematics (Teza et al., 2007, 2008), and postseismic deformation (Wilkinson et al., 2010).
Though TLS has been used to address a broad spectrum of scientific questions, common challenges persist regarding (1) logistics for efficient site scanning, (2) effective use of large point cloud data sets, and (3) quantification of uncertainties in measurements and interpretations. First, TLS system portability is important in studies of, for example, ancient fault ruptures where evidence is typically best preserved at remote sites removed from human modification. Similarly, scanning ruptures immediately following an earthquake requires TLS systems that can be rapidly mobilized before features are modified (Gold et al., 2010; Wilkinson et al., 2010). Second, effectively managing and analyzing large point cloud data sets is a computational challenge often addressed by reducing the data density and then converting the point cloud to a continuous surface, such as a digital elevation model. This surface-based approach often fails to exploit the full resolution of the data, and it is unclear to what degree accuracy may be compromised for measurements based on an interpreted surface rather than on the base data. Alternative approaches enabling point-based visualization and analysis are emerging (e.g., Kreylos et al., 2008; Wilson et al., 2009), but have not yet gained wide use. The third problem is accurately quantifying uncertainties, which we divide into two classes. Aleatoric, or measurement, uncertainties reflect the reproducibility of a set of measurements. In our study, these uncertainties quantify the variations in point positions obtained from different scans of the same feature that are the product of instrumental limitations, different workflows, or imperfect scan registration. Epistemic, or definition, uncertainties result from incomplete knowledge regarding the features being scanned, and quantify the uncertainty in interpretations of the TLS cloud. In this study, such uncertainties result from using the posterosion point cloud to interpret the pre-erosion geometries of our field sites.
We addressed these challenges by designing our data acquisition workflow to be manageable by a single surveyor at remote but vehicle-accessible field sites and to balance the need for portability with the need to maximize workflow speed and flexibility without compromising registration quality, a requirement that generally means more equipment. Our analysis workflow takes advantage of the full-resolution point cloud by using a point-based method for determining three-dimensional (3D) slip vectors with the new software tool LidarViewer (Kreylos et al., 2008) in an immersive virtual reality environment. To quantify epistemic uncertainties, we determine the range in slip vector orientations that results from iterative reconstructions of models of the field sites that are constructed from multiple interpretations of the TLS data. We also use Monte Carlo experiments and experimental surveys to evaluate the impact of aleatoric uncertainties on the 3D slip vector measurements.
To develop and test our collection and analysis workflows, we undertook case studies at two sites along the trace of the Ms 6.8 1954 rupture of the Dixie Valley fault in central Nevada (United States). The averaged horizontal slip direction has been estimated for this event from observations of structurally controlled lateral shear indicators and offsets, but this measurement is associated with a large error (±15°; Caskey et al., 1996). Precise 3D slip vector measurements along the Dixie Valley fault would provide important constraints on the direction and magnitude of coseismic motion, information needed to understand how slip is shared among faults in this seismically active region. Thus we chose the Dixie Valley fault both because it is a geologically relevant place to test our methods and because the primary features of the earthquake rupture remain relatively well preserved.
The primary purpose of this paper is to describe our TLS-based approach to measuring 3D slip vectors across complex fault zones using point-based analyses, and to describe how we quantify uncertainties associated with these measurements. We hope that our descriptions may serve as a starting point for those new to TLS who see a potential application for this tool and point-based analyses in their own research; some of the methodological material discussed herein will likely be common knowledge to experienced TLS users.
We review the role of TLS in recent fault-rupture studies and introduce our point-based analysis approach. We then report the methods used in our field workflow, experiments to evaluate scan registration errors, and data analysis to derive 3D slip vectors and their associated uncertainties. We report our results, starting with the scan registration experiments and then presenting the aleatoric and epistemic uncertainties affecting the 3D slip vector at 2 sites along the Dixie Valley fault. We conclude by discussing the registration experiments and the error analysis, and find that the uncertainty associated with instrument measurement error is negligible compared to that associated with identifying the piercing points used to measure the 3D slip vector.
BACKGROUND
TLS in Neotectonic Studies
TLS has been used to investigate displacements accumulated over both multiple surface ruptures and single events. For example, Oldow and Singleton (2008) measured displacement magnitudes using scans of pluvial lake shorelines, which are difficult to locate precisely in the field, but less challenging to visualize and measure using TLS data. Likewise, the contested Holocene slip rate along the active, left-slip Altyn Tagh fault in northwestern China was clarified (Gold et al., 2011, 2009) by using TLS surveys of offset fluvial terrace risers to conduct coupled structural and geomorphic reconstructions at two sites. TLS scans have also proven useful in recording features exposed in paleoseismic trenches and in assessing potential seismic hazard with morphometric analysis of precariously balanced rocks (Haddad et al., 2010).
Several recent examples where TLS data have been used to immediately preserve new fault ruptures show the importance of this survey tool in furthering our understanding of coseismic surface rupture. Following the 23 October 2004 Mw 6.6 Chuetsu earthquake in Japan, Kayen et al. (2006) quantified damaged infrastructure (e.g., railroads and tunnels) as well as seismically induced surface failures, such as landslides. In response to the 12 May 2008 Mw 7.9 Wenchuan earthquake in China, Wei et al. (2010) scanned rupture faces to analyze roughness and length relationships of coseismic free-face striations. In a study of the 6 April 2009, Mw 6.3 L'Aquila earthquake in central Italy, Wilkinson et al. (2010) collected repeat scans of a coseismic surface rupture that revealed postseismic deformation totaling >50% of the coseismic displacement. Likewise, sections of the 4 April 2010, Mw 7.2 El Mayor–Cucapah earthquake in northern Baja California, Mexico, were surveyed (Gold et al., 2010), and a variety of rupture styles of variable scale and complexity as well as free-face striations were scanned.
Point-Based Analyses
TLS data sets contain millions to hundreds of millions of individual points, making processing and visualization a computational challenge. One method to simplify data sets is to reduce point density and then generate a surface model of the data using an interpolation process such as a triangulated irregular network. Such surfaces enhance visualization of TLS data sets, which can be a challenge due to spatially variable point densities. Because digital surface models represent a continuous surface (i.e., all areas are represented by an elevation value), they enable operations such as calculation of slope and aspect maps, hillshades, contours, or data differencing, in addition to a range of more complex calculations. Point-based methods are an emerging alternative (Bellian et al., 2007; Kovac and Zalik, 2010; Teza et al., 2007, 2008; Wawrzyniec et al., 2007; Wilson et al., 2009), but are less common because fewer established tools exist to support such approaches. Point-based analyses use the individual point measurements rather than a model and do not require the cloud to be decimated, thus preserving the very high point densities achievable with TLS, a primary motivation for using this surveying method. In this study we explore point-based analysis as an alternative to surface-based methods in order to assess the utility of recent advances in software tools that enable point-based visualization in tandem with quantitative structural analysis.
To explore the utility of point-based methods as a functional alternative, we use LidarViewer (Kreylos et al., 2008) to conduct structural analyses and measure 3D slip vectors from TLS data. LidarViewer is an open-source software application that runs on Unix-based operating systems (e.g., Mac OS X, Ubuntu) and has been made freely available by the W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES; http://www.keckcaves.org). The software was developed to enable analysis of multibillion-point lidar data sets without requiring binning, subsampling, decimation, or digital surface modeling. LidarViewer enables real-time interactive visualization and analysis of data sets arbitrarily larger than a computer's main memory by optimizing data handling using hierarchical data structures and view-dependent, multiresolution, out-of-core rendering. Point clouds are visualized with intensity coloring, user-defined RGB (red, green, blue) coloring, or, to enhance feature recognition, a dynamic hillshading effect for which sun azimuth and elevation are adjustable in real time. The point cloud remains fully manipulable in 3D at all times, so that interpretations and measurements can be evaluated from multiple viewpoints. In addition to interactive navigation, LidarViewer supports real-time point selection and extraction, fitting of geometric primitives (lines, planes, spheres, cylinders) to selected points, measurement of point locations and distances, visualization of point distances from a user-defined plane via plane-perpendicular adjustable color gradients, and extraction of profile curves. LidarViewer communicates with proprietary data management programs so that quantitative measurements can be easily exported, managed, and integrated into a data analysis workflow.
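As an illustration of the primitive-fitting step, the short sketch below shows a standard least-squares (singular value decomposition) plane fit to a selection of points and the conversion of the fitted normal to strike and dip. This is a generic construction written for the easting-northing-up frame used later in this paper; it is not LidarViewer's internal implementation, and the function names and the synthetic example are ours.

import numpy as np

def fit_plane(points):
    # Least-squares plane through an (N, 3) array of easting, northing, up coordinates.
    # The plane normal is the singular vector associated with the smallest singular value.
    centroid = points.mean(axis=0)
    normal = np.linalg.svd(points - centroid, full_matrices=False)[2][-1]
    if normal[2] < 0:
        normal = -normal          # force the normal to point upward
    return centroid, normal

def strike_dip(normal):
    # Strike (right-hand rule) and dip, in degrees, from an upward unit normal.
    n_e, n_n, n_u = normal
    dip = np.degrees(np.arccos(np.clip(n_u, -1.0, 1.0)))
    dip_direction = np.degrees(np.arctan2(n_e, n_n)) % 360.0
    return (dip_direction - 90.0) % 360.0, dip

# Example: a synthetic scarp face dipping ~30 degrees east, with ~7 mm of vertical scatter.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 10.0, size=(5000, 2))
z = -np.tan(np.radians(30.0)) * xy[:, 0] + rng.normal(0.0, 0.007, 5000)
centroid, normal = fit_plane(np.column_stack([xy, z]))
print(strike_dip(normal))   # approximately (0.0, 30.0)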
METHODS
Instrumentation and Field Workflow
A goal of this study was to design and implement a TLS field survey workflow that is practically managed by a single operator without additional field help to aid in transportation and setup. We use two primary instruments, a Trimble GX DR200+ terrestrial laser scanner and a Leica TCR407 Power total station (Fig. 1), although the total station is not required in all variations of our workflow. Manufacturer-reported accuracy and precision values for single-point measurements are provided in Supplemental Table 1 in the Supplemental File (see footnote 1). All functions of the scanner (e.g., defining scan boundaries, measuring targets, initiating scans, evaluating the composite point cloud for completeness) are completed using PointScape (Trimble field software) installed on a ruggedized laptop (Supplemental Table 1 [see footnote 1]). Supporting field equipment for this study included 4 reflective paddle targets and rotating pucks, 5 tripods, and 5 optical tribrachs for mounting the targets and scanner on the tripods. For the total station measurements, we include a survey prism, prism pole and bipod (for single-operator surveying), and a handheld data collector for the total station. Because the laser scanner has a dual-axis compensator and electronic level, scans can be registered using only a single target: thus the minimum equipment necessary is the scanner; 2 tripods; 1 target, tribrach and puck; the computer; and the power system. To georeference the surveys, we used 2 Trimble R7 global positioning system (GPS) receivers and antennae to survey at least 3 ground control points at each site for an average of 3 h. We processed the static GPS data using the National Geodetic Survey (NGS) Online Positioning User Service (http://www.ngs.noaa.gov/OPUS) and geoid corrections from the NGS GEOID09 converter.
We have two systems for powering the scanner and laptop (Fig. 1). The most convenient source is a Honda 1000 W generator, which allows unlimited scan time. An alternative source is six 12 V batteries (55 Ah, sealed lead-acid). In this case, two batteries power the scanner and a third powers the laptop, while the remaining three are charged throughout the day with collapsible solar panels. With direct sunlight, we have scanned for as many as 6 consecutive 8 h days on this battery system without interruption.
Although general strategies for TLS methodology have been identified and TLS workflows have been proposed (e.g., Bonnaffe et al., 2007; Enge et al., 2007), they tend to focus on data processing rather than data collection. In the following, we describe our field approach in some detail because explicit descriptions of workflows for individual applications are needed to reveal common strategies and sources of uncertainty, leading to a set of best practices. In addition, we hope that new users may find the following description informative. We developed components of this workflow over three years of using TLS to image active faults in China, Mexico, and the United States, particularly fault displacements on alluvial fans and along range fronts at field sites typically <50,000 m². Given comparable field settings, we have found the approach presented here to be appropriate for operating other moderate-range instruments that use reflective targets for scan registration (e.g., Riegl VZ-400).
Field sites often require multiple scans from different locations in the survey area to minimize data shadows. To support scan registration, we typically establish a network of ground control points (GCP) that we independently survey using the total station. To both locate the scanner in this network and register the scans, reflective paddle targets are mounted on tripods placed over these control points, and the scanner is positioned so that 3–4 targets can be scanned from each scan station. By assigning the appropriate point coordinates to each target, each scan is placed within the independently surveyed GCP network as the data are gathered, and scans are registered immediately so that we can evaluate the data set in terms of coverage and point density before leaving the field. In our study, a single surveyor (P. Gold) collected data at each site using the following workflow, which involved three steps.
To begin, the surveyor prioritizes the features to be scanned, plans roughly where the scanner stations will need to be to image these features, and determines where to place the registration targets so that 3–4 targets are visible from each planned scan station. At each target location the surveyor installs a GCP marker (either a section of rebar or a survey nail). The surveyor then moves the survey equipment from the vehicle in two trips, leaving the total station, targets, and 4 tripods at whichever benchmark is closest and has a line of sight to all others. The scanner, computer, generator, extension cord, and 1 tripod and tribrach are carried in a single trip to the first scanner station, which is the location within the field area from which the largest and longest overview scan can be collected. The scanner is set up on a tripod with a leveled tribrach, and is not located on a benchmark, to save time. As the scanner autocalibrates (∼5 min), the surveyor performs fine leveling, and then uses the scanner's internal camera to collect a color panoramic photograph of the site. Scan coverage is defined using this photo. We precisely define the extent of each scan by using a nonrectangular, multinode polygonal framing tool, which avoids losing time collecting unnecessary data, and we generally delineate several subscans with different parameters to reduce gradients in point density. The scan is then started. In areas with vegetation, we do not use distance averaging (i.e., only one measurement is collected per laser pulse) to prevent collection of spurious data points.
The second step is to establish a GCP network by surveying the marker locations and setting up the registration targets. This process usually takes ∼20 min per GCP, and can often be accomplished while the scanner collects its first set of scans, as was the case in this study. This approach to measuring the GCPs provides the most flexibility in carrying out the scan, but can be simplified to decrease survey time and equipment. We start by piecing together 4 target assemblies (tripod, tribrach, puck, and paddle target). The surveyor positions the total station over one control point and the prism over a second (using the survey pole and bipod), and then measures the point. The surveyor then returns to the second control point, places a target assembly over the point, measures the target height, and orients it to face the scanner. This process is repeated until all control points are surveyed and available targets are placed. The GCP coordinates are then ported to the scanner software.
The third step is completion of the TLS site survey. When the first scans are finished, the surveyor scans the registration targets and links each with the GCP position as measured with the total station as well as the target height above the GCP point. The surveyor then moves the scanner to the next scan station and starts by scanning the registration targets to locate the scanner within the GCP network and the prior data set. New scans are then set up and run from this station, and the process is repeated for the duration of the project. When only one person is conducting the survey, this workflow is best suited to smaller sites (<10,000 m²), though the practical size limit is highly site specific. (Additional information and discussion concerning this workflow, variations on this workflow, single-surveyor scanning, and the use of point-based methods can be found in Section S1 in the Supplemental File [see footnote 1].)
Scan Registration and Data Preparation
We used RealWorks (Trimble postprocessing software) to register scans from each station into a composite point cloud and to georeference the TLS data using the GPS measurements. We exported the full point clouds in ASCII (xyzrgb) format for visualization, manual classification, and analysis in LidarViewer. Vegetation removal, visualization, and feature measurement were completed using LidarViewer in a 4-sided, 3 × 3 × 2.5 m CAVE immersive visualization environment, in which point data appear as independent 3D objects that can be interactively manipulated, explored, and analyzed. In the CAVE, stereoscopic images are projected onto three walls and the floor, and a wireless tracking system synchronizes the 3D display with the 3D position and orientation of the user's head. Users interact with data using a position-tracked, six-button handheld wand. This facility provides an immersive 3D environment for real-time interaction with the complete point cloud.
We manually classify most vegetation returns in the TLS clouds to isolate the ground surface, which is more time-consuming, but ensures that removed points define structures visually identifiable as vegetation and avoids the loss of ground-surface data. Manual classification can be accomplished using tools in 2D or 3D flat-screen environments, but we find we can more reliably identify, and rapidly select, vegetation points in the CAVE because the realistic depth perception of a position-tracked environment (as opposed to that resulting from motion parallax alone) aids in distinguishing vegetation points from a proximal background of closely spaced and similarly colored ground points. Removing vegetation is commonly accomplished using automated point classification algorithms, which we have found to be an efficient method for removing large brush or trees that have abruptly higher elevations than the immediately surrounding ground surface. However, in our experience these processes cannot distinguish between ground and vegetation returns at sites characterized by abrupt topography or detailed surface textures, and thus either remove important surface data or preserve unwanted vegetation points. An added benefit of manual classification is that the user gains a unique familiarity with the data set, especially with respect to which areas are most severely affected by registration errors. In this study, we performed coarse vegetation removal using an automated topographic sampling process in RealWorks to remove large brush, and then detailed vegetation removal using the LidarViewer real-time point selection tool in the CAVE.
Scan Registration Experiments
We conducted a simple experiment similar in scale to the Dixie Valley site surveys to evaluate the registration errors expected to be produced by the field workflow (see Instrumentation and Field Workflow discussion). Registration error is the misalignment of two overlapping scans of the same surface and is the result of imperfect alignment of the center coordinates of registration targets scanned from each scanner station. Several variables in our workflow may influence the magnitude of registration errors and may need to be factored into the uncertainties of our slip vector measurements.
To begin evaluating how different workflow variables influence scan registration, we varied (1) how many targets were used to register scans and where they were placed relative to the scanner and a scanned object, (2) whether target scans were tied to an externally measured ground control network and, if so, (3) how the ground control network was measured and merged with the TLS project. Furthermore, though registration errors are the result of imperfect target alignment, they are a defect of the composite point cloud. Automatically assessing registration errors in the point cloud is not straightforward, and as a proxy, postprocessing software such as RealWorks (Trimble) and RiScan (Riegl) typically measures registration errors by reporting the misalignment of registration target coordinates. We designed this experiment so that we could independently assess the registration errors in the point clouds caused by target misalignment and so that we could compare these values to the mismatch in target coordinates reported by RealWorks.
Because the experiment is primarily designed to evaluate registration errors associated with the workflow (see Instrumentation and Field Workflow discussion), we began by installing and measuring a ground control network over which registration targets, but not the scanner, were set up. We used the total station, set up at a single location (Fig. 2A), to measure each control point in the network in three ways: (method 1) using a prism on a survey pole supported by a bipod and leveled over the control point with a circular level (the single-surveyor method); (method 2) with the prism mounted on a tripod and leveled over the control point with a tribrach; and (method 3) using a leveled paddle target on the tripod and measuring its center with the total station in reflectorless mode. In a fourth step, the scanner calculated the center of each target from regular target scans (i.e., not by a single laser pulse) as it does in the field. In the first method, we used the prism pole and total station heights to calculate the coordinates of the GCP on the ground, because the center of the prism on the pole is not at the same elevation as the center of the prism or target on the tripod. An additional measurement of the target height above the GCP was then required during registration. All other methods directly measured the coordinates of the center of the prism or target. We expected the total station measurement of the target center (method 3) to be the most accurate because the total station is capable of more precise and accurate measurements than the scanner, and because in this case we measured the target center directly, rather than measuring a proxy for the target center (e.g., the prism).
In the second step of the experiment, we placed the scanner within the network of four registration targets (t1–t4, Fig. 2A), which were measured by the scanner from each of two locations (s1 and s2, Fig. 2A). Targets were rotated about their vertical axes to face each scanner location in turn, which ensured equal-quality laser returns for all the target scans from both scanner locations while holding the center point coordinates of the targets stationary (e.g., Soudarissanane et al., 2011). We then separately scanned a pyramid (Fig. 2B) constructed of white melamine-coated particleboard that was mounted on a frame and oriented so that its apex pointed in a direction approximately bisecting the two scanner positions. We used a pyramid because scans of all three sides can be approximated by planes that intersect at a single point, the position of which in 3D space can be determined and compared between scans collected with different survey parameters. The pyramid remained stationary, and was anchored to the ground and weighted to avoid erroneous error measurements caused by any movement. Because it is unclear at what stage the PointScape field software uses coordinate information to begin preliminary automatic target registration, at this stage in the experiment we entered no information about the relative positions of the scanner and targets other than what the scanner collects via the target scans, to avoid unknowingly introducing any bias.
We then used RealWorks to compute different coregistrations of the two pyramid scans for seven different target configurations simulated by iteratively deleting target measurements from the project (Fig. 2D). In the registration processes, scans were registered by linking the target scans collected at each station to each other (method 4), or to the total station-derived coordinates from methods 1, 2, or 3, resulting in a total of 28 different pairs of registered point clouds. One measure of the registration error was provided by RealWorks as the residual target mismatch values. However, to obtain a measurement within the composite cloud we exported each scan individually and determined registration errors in the CAVE by measuring the distance between the pyramid apex positions. Results of these experiments are reported and discussed herein; individual measurements are reported in Supplemental Tables 2 and 3 (see footnote 1).
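For readers who want to reproduce the within-cloud error measurement, the sketch below shows one way to recover the pyramid apex from a registered scan: fit a plane to the points selected on each of the three faces and solve the resulting 3 × 3 linear system for their common intersection. This is a generic construction of our own (the face arrays in the commented usage are placeholders), not a routine in LidarViewer or RealWorks; the registration error is then simply the distance between the apexes recovered from the two scans.

import numpy as np

def fit_plane(points):
    # Best-fit plane n . x = d through an (N, 3) selection of face points.
    centroid = points.mean(axis=0)
    normal = np.linalg.svd(points - centroid, full_matrices=False)[2][-1]
    return normal, normal @ centroid

def pyramid_apex(face_points):
    # face_points: three (N, 3) arrays, one per scanned pyramid face.
    normals, offsets = zip(*(fit_plane(p) for p in face_points))
    # The apex satisfies n_i . x = d_i for all three faces simultaneously.
    return np.linalg.solve(np.vstack(normals), np.array(offsets))

# Within-cloud registration error between two registered scans of the same pyramid:
# apex_a = pyramid_apex([face1_a, face2_a, face3_a])
# apex_b = pyramid_apex([face1_b, face2_b, face3_b])
# registration_error = np.linalg.norm(apex_a - apex_b)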
Slip Vectors from Manual Reconstruction
The slip vector is calculated as the line between two piercing points (x₁, y₁, z₁ and x₂, y₂, z₂ in Fig. 3A) defined by the intersection of a displaced linear feature with the fault plane. To measure the 3D slip vector recorded by faulted landforms at our sites, we followed essentially the same steps as those we would follow in the field, but instead virtually interacted with and measured the field area as represented by the TLS point cloud using LidarViewer in the CAVE virtual reality environment. It is possible to carry out the following measurement process on a 3D-enabled display or even a conventional 2D screen, but we have more confidence in measurements made in the CAVE because interaction with the field area is more intuitive and visual inspection more straightforward.
Postevent erosion and degradation have destroyed or covered the actual piercing points, meaning that they are not captured in the point cloud and thus cannot be measured directly, as would be possible in the straightforward case illustrated in Figure 3A. Instead, we used the TLS point cloud as the basis for reconstructing the field site to its pre-erosion geometry (Fig. 3B). We began by measuring the orientation of the offset linear feature and estimating the orientation of the pre-erosion fault plane. This requires selecting populations of TLS points that either characterize the offset feature or that we interpret to define the intersection of the fault plane with the surface. We then calculated best-fit lines or planes (line and plane primitives) to each point selection. Point selection and plane and/or line primitive fitting was performed interactively without exiting the data analysis environment, so we could visually confirm that each primitive was a reasonable approximation of the feature it represents. The result was a pre-erosion representation of the key features at the field site composed of lines and planes, the intersections of which define the critical piercing points (x₁, y₁, z₁ and x₂, y₂, z₂ in Fig. 3B) needed to calculate a slip vector. While these piercing points are not actually represented in the point cloud, we distinguish this method as point based rather than surface based because the pre-erosion reconstruction is based solely on the point data. In this way the extra processing step of converting a reduced TLS cloud to a digital surface model is eliminated; this may lend a higher degree of accuracy to our measurements because the pre-erosion model is based on the raw data, not on a model of it.
An additional complicating factor at our sites is that slip occurred on synthetic and antithetic faults, producing an intervening graben (Fig. 3C). In this case it was necessary to calculate slip vectors for displacement on both faults, and then sum these vectors to obtain the total slip vector, as shown in Figure 3C. The equations used to calculate the slip vector from the piercing points and to convert these measured points to vector trend and plunge are detailed in Section S2 in the Supplemental File (see footnote 1). Calculating the uncertainties associated with our site interpretations in the data analysis environment is at best as approximate as doing so in the field; we therefore addressed this challenge with the more quantitative approaches described in the following section (see discussion of Uncertainties in 3D Slip Vectors).
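The full equations are given in Section S2 of the Supplemental File; as a compact illustration (not a transcription of those equations), the sketch below computes each piercing point as the intersection of a fitted piercing line with a fitted fault plane, sums the synthetic- and antithetic-fault slip vectors across the graben, and converts the total to trend and plunge. The function names and argument layout are our own; we assume the easting-northing-up frame used elsewhere in this paper, and report plunge as negative below horizontal, consistent with the sign of the plunges listed in Table 2.

import numpy as np

def line_plane_intersection(line_point, line_direction, plane_point, plane_normal):
    # Piercing point: intersection of a fitted line with a fitted fault plane.
    t = np.dot(plane_point - line_point, plane_normal) / np.dot(line_direction, plane_normal)
    return line_point + t * line_direction

def trend_plunge(v):
    # Trend (azimuth, degrees) and plunge (degrees, negative below horizontal)
    # of an easting-northing-up vector.
    e, n, u = v
    trend = np.degrees(np.arctan2(e, n)) % 360.0
    plunge = np.degrees(np.arctan2(u, np.hypot(e, n)))
    return trend, plunge

def total_graben_slip(fan_line, graben_line, hw_line, synthetic_plane, antithetic_plane):
    # Lines are (point, direction) pairs fitted to the offset landform on the fan,
    # graben floor, and far hanging-wall surfaces; planes are (point, normal) pairs.
    p1 = line_plane_intersection(*fan_line, *synthetic_plane)      # footwall piercing point
    p2 = line_plane_intersection(*graben_line, *synthetic_plane)   # graben side of synthetic fault
    p3 = line_plane_intersection(*graben_line, *antithetic_plane)  # graben side of antithetic fault
    p4 = line_plane_intersection(*hw_line, *antithetic_plane)      # hanging-wall piercing point
    # Slip across each fault, chained so the sum is the motion of the far block relative to the fan.
    return (p2 - p1) + (p4 - p3)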
Uncertainty Calculations
To estimate aleatoric uncertainty in the 3D slip vector orientations, we performed Monte Carlo experiments to evaluate the effects of instrument accuracy and registration errors. To evaluate the main epistemic uncertainties in our method, we calculated the range in slip vector solutions resulting from iterative reconstructions of the field sites. To enable these calculations, we constructed a geometric model of the key features measured from the TLS cloud at each site, including the faulted linear landform and the surfaces of the fan, graben floor, and synthetic and antithetic fault faces. In this analysis we computed the slip vector by first determining the amount of fault-perpendicular motion needed to restore the fan and graben surfaces so they are coplanar (i.e., the dip-slip component), and then computing the amount of lateral motion needed to realign the faulted linear landform (i.e., the strike-slip component).
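This two-step restoration can also be posed as a single small linear solve. The sketch below is our own reformulation under simplifying assumptions (parallel fan and graben surfaces, parallel landform segments lying on those surfaces), and is not necessarily the implementation behind the numbers reported later; as noted in the Discussion, the method used for the reported results also accounts for landform segments that are not perfectly parallel. The restoration vector u must lie in the fault plane, return the graben surface to the plane of the fan surface (the dip-slip condition), and realign the faulted landform (the strike-slip condition); the coseismic slip vector is then -u.

import numpy as np

def restoration_vector(fault_normal, fan_point, fan_normal,
                       graben_point, fw_line_point, hw_line_point, line_direction):
    # Rigid translation u of the downthrown block that (1) stays in the fault plane,
    # (2) restores the graben surface into the plane of the fan surface, and
    # (3) realigns the two segments of the faulted linear landform.
    # All inputs are easting-northing-up vectors; normals and line_direction are unit vectors.
    lateral = np.cross(line_direction, fan_normal)   # in-surface direction perpendicular to the landform
    coefficients = np.vstack([fault_normal, fan_normal, lateral])
    constants = np.array([
        0.0,                                          # u lies in the fault plane
        fan_normal @ (fan_point - graben_point),      # coplanarity of graben and fan surfaces
        lateral @ (fw_line_point - hw_line_point),    # alignment of the landform segments
    ])
    return np.linalg.solve(coefficients, constants)

# Example: a fault dipping 60 degrees east, 3 m of throw, and 2 m of left-lateral
# separation of an east-trending landform (all coordinates in meters).
u = restoration_vector(
    fault_normal=np.array([np.sin(np.radians(60.0)), 0.0, np.cos(np.radians(60.0))]),
    fan_point=np.array([0.0, 0.0, 10.0]), fan_normal=np.array([0.0, 0.0, 1.0]),
    graben_point=np.array([5.0, 0.0, 7.0]),
    fw_line_point=np.array([0.0, 0.0, 10.0]),
    hw_line_point=np.array([5.0, 2.0, 7.0]),
    line_direction=np.array([1.0, 0.0, 0.0]))
print(-u)   # coseismic slip of the downthrown block relative to the footwall, ~(1.73, 2.0, -3.0)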
The first Monte Carlo experiment estimated uncertainties in the slip vector stemming from errors in the point positions due to scanner accuracy (i.e., aleatoric uncertainty). A practical example of the effect of scanner inaccuracy is that a point cloud of a planar surface will define a planar volume rather than a true plane. In this test we used a geometry roughly based on the dimensions of Site 3 (see Fig. 4). However, to ensure reliability of the slip reconstruction in this first experiment, we idealized the site geometry so that the surfaces of the footwall, graben, and hanging wall were pairwise parallel, the linear landforms and the planes containing them were parallel, and the two fault planes had identical strikes. First we generated a synthetic point cloud over this idealized geometric model by uniformly sampling the primitives at 4800 points/m² (i.e., the average density of the Site 3 cloud) and with an accuracy of ±7 mm (i.e., the reported scanner accuracy), using a uniform error model. Points were randomly positioned around the primitives within the bounds of the accuracy value to simulate the variability in point positions inherent to TLS scans. Next, we fit a new set of best-fit planes and lines to this randomized synthetic point cloud and ran the slip vector calculation on these planes and lines. To calculate the error interval we repeated the cloud generation, fits, and slip vector calculation 100 times to determine the range in slip vector solutions.
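A minimal sketch of one trial of this procedure is shown below for a single fault-face plane, assuming for illustration that the uniform ±7 mm error is applied independently to each coordinate: synthetic points are scattered about the reference plane, the plane is refit, and the spread of recovered dips over 100 trials is tabulated. Extending the same loop to all of the planes and lines in the idealized model, and rerunning the slip calculation each trial, reproduces the structure of the experiment. With tens of thousands of points per plane, the recovered orientations vary by only a few thousandths of a degree or less, consistent with our finding that the fault planes are well constrained even in the noisy synthetic cloud.

import numpy as np

rng = np.random.default_rng(42)

def fit_plane_normal(points):
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    return normal if normal[2] > 0 else -normal

# Reference geometry: a 10 m x 10 m fault face dipping 30 degrees east.
# (A much smaller sample than 4800 points/m^2 keeps the sketch fast.)
n_points = 50_000
xy = rng.uniform(0.0, 10.0, size=(n_points, 2))
ideal = np.column_stack([xy, -np.tan(np.radians(30.0)) * xy[:, 0]])

dips = []
for _ in range(100):                                # 100 realizations, as in the experiment
    noisy = ideal + rng.uniform(-0.007, 0.007, size=ideal.shape)   # +/- 7 mm on each axis
    n = fit_plane_normal(noisy)
    dips.append(np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0))))

dips = np.array(dips)
print(f"fitted dip: mean {dips.mean():.4f} deg, full range {np.ptp(dips):.4f} deg")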
The second Monte Carlo experiment evaluated the impact of errors in target measurements and target alignment during scan registration. In this case we created a virtual registration target network within an area of 30 m × 80 m, roughly corresponding to the areas of the scan registration experiments and the field sites. In this experiment, no synthetic point cloud representing a field site was generated, and only randomized target coordinates were compared. We created two synthetic point data sets in which the coordinates of the ideal targets were randomly repositioned within an uncertainty space defined by the accuracy of the method used to measure target coordinates. In the first we randomly repositioned target points using a standard deviation of 0.003 m to simulate a laser scanner measurement, and in the second we used a standard deviation of 0.02 m to simulate a total station measurement with a 2 cm misalignment of the pole or tripod over the GCP. We then aligned the two point sets using a least-squares rigid-body transformation and calculated distances between corresponding target positions. We repeated the experiment 10,000 times.
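The least-squares rigid-body alignment is the standard singular value decomposition (Kabsch) construction. The following is a minimal sketch of one way to set up this experiment, assuming the standard deviations above apply independently to each coordinate and using an illustrative four-target layout of our own; it is not the exact code used to generate the numbers reported below.

import numpy as np

rng = np.random.default_rng(1)

def rigid_align(source, target):
    # Least-squares rotation and translation mapping source points onto target points (Kabsch).
    source_centroid, target_centroid = source.mean(axis=0), target.mean(axis=0)
    u, _, vt = np.linalg.svd((source - source_centroid).T @ (target - target_centroid))
    if np.linalg.det(vt.T @ u.T) < 0:      # guard against a reflection
        vt[-1] *= -1.0
    rotation = vt.T @ u.T
    return rotation, target_centroid - rotation @ source_centroid

# Illustrative target layout spanning roughly 30 m x 80 m (easting, northing, up; meters).
targets = np.array([[5.0, 10.0, 1.5], [25.0, 15.0, 1.6], [10.0, 60.0, 1.4], [28.0, 75.0, 1.7]])

mismatches = []
for _ in range(10_000):
    tls_targets = targets + rng.normal(0.0, 0.003, targets.shape)   # scanner-derived target centers
    ts_targets = targets + rng.normal(0.0, 0.020, targets.shape)    # total-station target centers
    rotation, translation = rigid_align(tls_targets, ts_targets)
    aligned = tls_targets @ rotation.T + translation
    mismatches.append(np.linalg.norm(aligned - ts_targets, axis=1))

mismatches = np.asarray(mismatches)
print("mean per-target mismatch (mm):", np.round(1000.0 * mismatches.mean(axis=0), 1))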
The iterative site reconstructions yielded an estimate of the uncertainty in the slip vector stemming from incomplete knowledge of the pre-erosion site geometry (i.e., epistemic uncertainty). In this case, we used the CAVE to interactively create several primitives for the graben floor and each fault scarp, based on different interpretations of the data and different selections of raw lidar points. We held the primitives for the fan surfaces and piercing lines fixed because the range of best-fit primitives for these features is very limited in contrast to the wider allowable range of best-fit fault and graben planes. We then ran the slip vector calculation for all possible field site reconstructions created from different combinations of primitives and calculated the range of all reported results. This process is described in more detail in Section S3 and Figure S1 of the Supplemental File (see footnote 1).
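The bookkeeping for this step amounts to looping over every combination of the alternative primitives and collecting the spread of the resulting slip vectors. A schematic sketch follows; compute_slip_vector is a hypothetical placeholder for the slip calculation described above, and the argument grouping is illustrative rather than a published interface.

import itertools
import numpy as np

def slip_vector_range(graben_fits, synthetic_fits, antithetic_fits,
                      fixed_fan, fixed_lines, compute_slip_vector):
    # Run the slip calculation for every combination of alternative primitives
    # (e.g., 10 x 10 x 10 = 1000 combinations at Site 3; 2 x 5 x 6 = 60 at Site 6)
    # and return the component-wise minimum and maximum of the results.
    results = [compute_slip_vector(fixed_fan, fixed_lines, graben, synthetic, antithetic)
               for graben, synthetic, antithetic
               in itertools.product(graben_fits, synthetic_fits, antithetic_fits)]
    results = np.asarray(results)
    return results.min(axis=0), results.max(axis=0)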
RESULTS
Registration Experiments
Results from the registration experiments quantify both the variability in GCP measurements determined using different survey methods (Fig. 2C; Supplemental Table 2 [see footnote 1]) and the registration errors that result when each of those methods is used with different target geometries (Figs. 2D, 2E; Supplemental Table 3 [see footnote 1]). Figure 2C shows results from the four different methods for measuring GCP positions by comparing 3D distances measured between different target coordinate pairs. The data are ranked on the x-axis by increasing distance between targets. For each target pair we plot the absolute value of the difference between the 3D distances determined using method 3 (target on tripod) and each of the distances measured using methods 1 (prism on pole), 2 (prism on tripod), and 4 (TLS of targets). All methods yield intertarget distances that are within 6.5 mm of each other, except for 2 points from method 4 (pairs 1–2 and 2–4). Both of those points involve target 2, which may reflect an inaccurate TLS scan of that relatively distant position. The differences do not vary systematically with intertarget distance, e.g., differences for the shortest pairs (2–3 or 1–4) are similar to those for the longest pairs (2–4 or 1–3). In contrast, there is some variability by method: on average method 1 has the smallest difference relative to method 3 (i.e., average difference is ∼3 mm), whereas method 4 has the largest (i.e., average difference is ∼5 mm). Method 2 yields measurements with the lowest amount of variability (i.e., all points are between 2 and 6 mm).
Registration errors for 28 different configurations are shown in Figure 2E, as pyramid-apex offsets (black points) and residual target errors reported by RealWorks (gray points) for pairs of scans registered against GCP positions measured with the 4 different survey methods and the 7 different target geometries shown in Figure 2D. Target configuration 4 is notable for producing pyramid offsets that are more than twice as large as those resulting from the other configurations. For methods 1 through 3, the residual target errors overestimate, or are equal to, the within-cloud registration errors determined by the pyramid offsets, except for configuration 4. However, method 4 yields the opposite result, with residual target errors generally smaller than the pyramid offset values. In summary, with the exception of target geometries 4 and 6, all 4 methods for measuring GCP positions yield pyramid offsets that are <5 mm, and no systematic differences between the different methods are evident.
Aleatoric Uncertainties
The Monte Carlo experiments estimate uncertainties in the slip vector stemming from two sources of aleatoric uncertainty, i.e., scanner accuracy and errors in GCP measurements. Here we report the uncertainties in the slip vector as brackets around the full range of results, independently in x (easting), y (northing), and z (vertical). In Monte Carlo experiment 1, the fault strike was parallel to the x-axis and the average slip vector we used was approximately −5.3 m E, −6.6 m N, and −0.5 m V, meaning 5.3 m of right slip, 6.6 m of horizontal extension, and a 0.5 m lowering of the hanging wall relative to the footwall. The error interval of the slip vectors due to random point variability in the synthetic cloud was approximately ±3 mm E, ±0.4 mm N, and ±3 mm V. The horizontal extension is more accurate because it only depends on the fault planes, which are well constrained even within the synthetic point cloud, whereas the lateral and vertical components depend on the linear features, which are less well constrained in the synthetic cloud. We view these numbers as order-of-magnitude estimates of the slip vector uncertainty due to scanner measurement errors. This analysis indicates that these errors are of the same order of magnitude as the per-point measurement error. The uncertainty is largely derived from the random variability in the points on the linear features, because if only the fault plane points are randomized, and not the linear feature points, then the error intervals drop to ∼±0.07 mm E and 0.14 mm N (the vertical error is zero because there is no variability in the lines).
In Monte Carlo experiment 2, we found that randomizing the target positions in the simulated total station and TLS data sets and then registering the scans yielded uniform position errors on all targets. Averaged over all 10,000 runs, the mismatches between TLS and total station target positions ranged from 13.8 ± 7.9 mm to 16.0 ± 9.9 mm (1 standard deviation), with no clear systematic variation as a function of target distance from the scanner and/or total station. In a single run, errors ranged from 11.2 mm to 31.5 mm.
Slip Vectors and Associated Epistemic Uncertainties
Both Sites 3 and 6 were surveyed by a single worker (P. Gold). Hillshaded point clouds and the locations of the scan stations and targets are shown in Figure 4, and details of each survey are listed in Table 1. We computed 3D slip vectors from these composite clouds by manually reconstructing the sites to pre-erosion geometries using the point-based approach (see discussion of Slip Vectors from Manual Reconstruction), and we computed slip vectors with uncertainties using the automated iterative site reconstructions (see discussion of Uncertainty Calculations). For the latter analysis at Site 3, we used 10 different measurements for each of 3 different surfaces (graben floor and both synthetic and antithetic faults), for a total of 1000 iterations. In contrast, at Site 6 we used 5 synthetic planes, 6 antithetic surfaces, and 2 measurements of the graben floor, for a total of 60 iterations. Results are reported in Table 2. This analysis indicates that the epistemic uncertainties lead to errors in the slip vector components that range from 10 to 60 cm. These errors largely stem from variability in the reconstructed fault scarps, which vary by ∼5° to 15° in dip, depending on the site, and by ∼2° in strike at both sites.
Manual reconstruction of Site 3 (Fig. 4A) yielded a slip vector with a trend of 281° and plunge of −22.3° (Table 2). By comparison, the iterative reconstruction analysis yielded a vector oriented 283.1° ± 12.6° and −28.9° ± 8° that overlaps with the manual reconstruction within error. The subsurface fault dips calculated presuming that the Dixie Valley fault strikes 015° are 24.5°E and 32.0° ± 6.8°E for the manual and iterative methods, respectively. At Site 6 (Fig. 4B) manual reconstruction yielded a slip vector oriented 279.3° and −26.1°, compared to 279.7° ± 2.3° and −26.1° ± 4.1° as determined using the iterative reconstructions. Using the same Dixie Valley fault strike (015°) yields subsurface fault dips of 28.2°E and 28.3° ± 4.3°E for the point-based and iterative methods, respectively. All uncertainties reflect the full range in possible slip vectors.
DISCUSSION
Registration Experiments and Aleatoric Uncertainties
The registration experiments indicate that the registration errors measured in the point cloud are generally small (∼5 mm), and do not clearly depend on which of the four different methods is used to measure the GCP positions. Likewise, the number and geometry of targets have minimal effect for most, but not all, configurations. In only one case (configuration 4 in Fig. 2E) does target geometry appear to cause a meaningful change in registration error. In this case, registration error sharply increased when the targets were close to, but on the opposite side of, the scanner relative to the feature of interest (i.e., the pyramid). Geometries such as this produce a leveraging effect that amplifies registration errors, supporting the common practice of distributing targets evenly throughout the field area.
Though registration errors often increase to more significant magnitudes over larger spatial scales than those covered in this study, those resulting from this experiment are 1–2 orders of magnitude smaller than the uncertainties in the slip vector resulting from epistemic uncertainties. For this reason we conclude they can be ignored in most studies making measurements of comparable resolution using instruments and workflows similar to those described here, but note that they are likely large enough to be important for detailed studies of fault or surface roughness or of change detection using repeat scans. The generally ambiguous relationship between target number, survey geometry, and registration error revealed by these experiments allows us to make several recommendations for registration methods focused on maximizing workflow efficiency or flexibility, rather than on the expected errors. Establishing an independent GCP network is always wise because (1) it allows a target array to be precisely replicated so that new scans can be immediately integrated with previous scans should a site require additional surveys or expansion, (2) independent target coordinates streamline the field registration process by removing ambiguities in target identification, and (3) even if not used in the registration process, independent verification of target coordinates can aid in troubleshooting during postprocessing steps. For these reasons, we typically do not establish a target network based solely on TLS target scans (i.e., method 4). If it is likely that targets will remain stationary over a GCP throughout a survey (i.e., target height does not change), the best practice is to directly measure the target center point with the total station (method 3). This reduces field gear by eliminating the need for a prism, survey pole, and bipod, and eliminates the possibility of introducing error with the intervening steps of replacing prisms with targets or translating prism coordinates to target coordinates via imprecise height measurements. However, if it is likely that a GCP may be reoccupied by a target more than once, the best practice is to measure the GCP coordinate (rather than the target center coordinate); this negates the need to resurvey the target center, which will be over the same GCP, but at a slightly different elevation each time a target is moved and set up. Measuring the GCP coordinate is most straightforward with method 1 (prism on survey pole), but can also be done with methods 2 and 3. Measuring the GCP coordinate allows targets to be freely moved and relocated within a GCP network; this adds flexibility when scanning complex sites, conducting multiday scans, or scanning with a minimum of tripods and targets.
The registration experiments indicate that the target residual errors reported by proprietary scan registration software do not accurately reflect the registration errors contained within the cloud, and should not be considered reliable proxies for internal point cloud precision. Simply put, target mismatch is not representative of scan misregistration. In the case where external GCP measurements are used, the target residuals tend to overestimate the registration errors, in which case the target residuals are a conservative approximation of the true registration errors. However, in the case where the GCPs are not measured independently, the target residuals generally underestimate the registration errors. More experiments are needed to evaluate how registration error scales with distance and to test for scanner bias that might result from systematic errors in distance or angle measurements. Such systematic errors would cause imaged features to be magnified, reduced, or distorted in the scan relative to their true geometry.
As with the registration tests, the results of the Monte Carlo experiments yield errors smaller than those produced by the epistemic uncertainties. The first experiment indicates that random errors in the scanned positions of points relative to the true locations of the points on the surface should produce errors of a few millimeters in the slip vector. Errors in the determinations of GCP positions due to prism-pole misalignment or differences in the location of the paddle target relative to the prism can lead to target mismatches on the order of 1–2 cm. These errors are of the same order of magnitude as the residual target errors observed in the registration experiments. We ignore these errors because they are significantly smaller than the uncertainties in the slip vector stemming from epistemic uncertainties associated with the site reconstructions.
Uncertainties in 3D Slip Vectors
The results from the iterative site reconstructions show that accounting for epistemic uncertainties has a significant impact on the slip vector uncertainty at both sites, although the effect is larger at Site 3. In addition to providing more information about the magnitude and direction of coseismic slip, our direct measurements of slip vector orientations provide an important confirmation of the net extension azimuth (∼274°) previously approximated for the length of the rupture by Caskey et al. (1996). At Site 6, the full error envelope on our measurement (4.6°) places significantly tighter constraints on the coseismic net extension direction for the Dixie Valley earthquake, which has been associated with a 30° error envelope (Caskey et al., 1996). In contrast, the error envelope associated with our measurement at Site 3 (25.2°) does not represent a significant reduction. We see several possible explanations for this difference in precision of our analysis.
The most likely possibility is that the difference in slip vector uncertainty is due to differences in the geometry at the two sites. Site 3 is characterized by more subtle topographic relief than Site 6, where the fault traces curve as they climb a riser on the north side of the site (Fig. 4B). At Site 6 the nonlinear fault traces more narrowly constrain fault orientations (essentially by enabling visual 3-point solutions), resulting in a less variable population of pre-erosion fault planes for the iterative reconstructions. In addition, at Site 3 the piercing line intersects the graben obliquely (∼60°), meaning that variations in fault dip will produce greater variations in slip vector trend at this site relative to Site 6, where the piercing line intersects the graben at a higher angle (∼80°). The method for calculating fault-parallel slip in our iterative reconstructions accounts for errors introduced if the feature lines are not perfectly parallel. This error increases with graben width, which is greater at Site 3. Thus we expect higher uncertainty at Site 3 due to the combined effects of (1) a greater variation in fault plane dip (∼3 times that at Site 6), which contributes to dip-slip error, (2) a more oblique intersection of the piercing line and graben also contributing to dip-slip error, and (3) a wider graben, which contributes to lateral-slip error.
An alternative possibility is that our iterative reconstructions underestimated the true epistemic uncertainty. One way we could have underestimated such uncertainty was by using an overly simplified model at Site 6 relative to that at Site 3. However, we think this is unlikely given that both sites were similarly simplified and that our analysis does not force primitive orientations to be parallel or coplanar (e.g., fault strikes can be different). It is not clear why such simplifications would produce the difference we see between the errors at Sites 3 and 6. In addition, the 3D slip vectors determined by the iterative reconstructions match those we obtained by manual reconstruction, suggesting that the simplified geometric models accurately represent the true site geometries. Another way we could have underestimated the epistemic uncertainty was by not calculating a sufficient number of iterations to determine the full range in epistemic uncertainty at Site 6 (60, versus 1000 at Site 3). It is possible that this 94% reduction in the number of iterations could account for the ∼80% reduction in the error in the trend of the slip vector, and the ∼50% reduction in the error in the plunge. Because the graben floor is not well preserved or imaged in the TLS survey at Site 6, we used only 2 different fits to the graben floor; this prevented us from calculating as many iterations as we did for Site 3. Especially at Site 6, where the feature intersects the faults at a high angle (80°), the orientation of the graben floor mainly affects the slip vector plunge, and so the lack of variable graben planes could be contributing to the lower uncertainty in the plunge, though not the trend, of the slip vector at Site 6. However, the overall variability of the 10 graben planes measured at Site 3 was small (1.5° in strike, 0.5° in dip), and thus contributed insignificantly to the reconstruction uncertainty at Site 3 as compared to that introduced by fault plane variability. Thus, we do not think that the difference in uncertainty between the two sites is related to the simplifications used in the iterative reconstructions or to the different number of graben planes at Site 6. In summary, although the most likely explanation is the difference in site geometry, future work will test whether using more fault planes in our reconstructions of Site 6 could equalize the uncertainty, and will explore the effects of increasing the number of fault planes at both sites as well as varying other key features.
CONCLUSIONS
TLS surveys uniquely facilitate rapid collection of topographic data that are critical for measurements of 3D fault kinematics. The following conclusions can be drawn from our study.
1. It is straightforward for a single surveyor to image fault ruptures using the workflow we have developed and describe here. This approach balances the need for portability, rapid data collection, and high precision and accuracy in multiscan data sets.
2. To assess the errors that are likely to result from variables in our workflow, we carried out a set of simple registration experiments using four different survey methods to measure GCP locations. The registration experiments indicate that the registration errors measured in the point cloud are generally small (∼5 mm), do not depend on the survey method used to measure the GCP, but show some dependence on the GCP geometry. The registration experiments also indicate that the target residual errors reported by proprietary scan registration software do not accurately reflect the actual registration errors contained within the cloud, and should not be considered reliable proxies for internal point cloud precision.
3. To estimate uncertainties in the 3D slip vector determinations due to scanner accuracy or random errors in GCP measurement, we performed two Monte Carlo experiments in which we generated synthetic point clouds, and then measured the resulting 3D slip vectors. Errors from these variables range from 0.5 to 10 mm. We do not consider these errors in our slip vector analysis because they are typically more than an order of magnitude smaller than the uncertainties in the slip vector stemming from epistemic uncertainties associated with the site reconstructions.
4. We present methods for determining coseismic 3D slip vectors and associated uncertainties using both point-based manual reconstructions of faulted landforms and iterative reconstructions of modeled sites. We illustrate both the workflow and slip analysis using scans at two sites of linear landforms displaced by the 1954 Dixie Valley earthquake. Manual reconstruction of Site 3 yields a slip vector oriented 281.0°, −22.3° and iterative reconstructions yielded a slip vector oriented 283.1° ± 12.6° and −28.9° ± 8°. Subsurface fault dips are 24.5°E and 32.0° ± 6.8°E for the manual and iterative methods, respectively. At Site 6, manual and iterative reconstructions yield slip vectors oriented 279.3°, −26.1° and 279.7° ± 2.3°, −26.1° ± 4.1°, with subsurface fault dips of 28.2°E and 28.3° ± 4.3°E, respectively.
5. We find that the dense survey measurements provided by TLS data are important for both visual analysis of site geometry and generation of accurate primitives to determine the 3D slip vector and its associated uncertainty. However, our study demonstrates that higher resolution data alone are unlikely to resolve uncertainties in site reconstruction that result from incomplete knowledge of the original site geometry at the time of rupture, even where primary rupture features are still relatively well preserved.
6. We evaluate the functionality of new point-based analysis and immersive 3D visualization tools, and find that point-based methods are an appropriate alternative to surface-based methods for quantitative structural analysis.
ACKNOWLEDGMENTS
This work was supported in part by National Science Foundation grants EAR-0610107 and OCI-0753407, the W.M. Keck Foundation, the University of California Davis, the Geological Society of America Graduate Student Research Grant Program, and the UC Davis Department of Geology Durrell Fund. We thank Kurt Frankel, George Hilley, Doug Walker, and Rich Briggs for their comments, which greatly improved this manuscript. Previous versions of this manuscript benefited from reviews by Mike Oskin and Sarah Roeske.