Terrestrial laser scanner (TLS) images provide assessment of geomorphic surfaces at a centimeter scale, but quantitative analysis requires an understanding of the uncertainty budget and the limit of image resolution. We conducted two experiments to assess the contributions of instrumental, georeferencing, and surface modeling methods to the uncertainty budget and to establish the relation between reference network uncertainty and the repeatability and resolution of imaged natural surfaces. Combinations of Riegl LMS-Z620 and LPM-800HA instruments were used to image fault scarps and erosional ravines in Panamint Valley and the San Gabriel Mountains of California (USA), respectively. In both experiments, a control network of reflectors was surveyed using a total station (TS) and georeferenced with the Global Navigation Satellite System (GNSS) in real time kinematic (RTK) and static (S) modes in the first and second experiment, respectively. For successive scans, we tested the impact of (1) a fixed network of control reflectors and fixed scan positions, (2) variable scan positions within a fixed reflector network, and (3) variable scan and reflector network configurations. The geometry of the reflector network in both experiments was established using a TS to within ±0.005 m and, in addition, to ±0.006 m using S-GNSS occupations during the second experiment. TLS repeatability in a local frame is ±0.028 m, with uncertainty increasing to ±0.032 m and ±0.038 m using S-GNSS and RTK-GNSS, respectively. Point-cloud interpolation, where vegetation effects were mitigated, contributed ±0.01 m to the total error budget. We document that the combined uncertainty for the reference network and surface interpolation represents the repeatability of an imaged natural surface.
The capacity to acquire precise and high-resolution three-dimensional (3D) surface images by terrestrial laser scanners (TLS) is becoming more prevalent in the analysis of geomorphic surfaces, change detection, and geological processes. TLS enables high-resolution quantitative characterization of natural surfaces and is used in various applications, which include monitoring landslides (Bitelli et al., 2004; Rosser et al., 2005; Lim et al., 2005; Teza et al., 2007; Du and Hung-Chao, 2007; Oppikofer et al., 2009), assessing the dynamics of river systems (Milan et al., 2007; Brasington et al., 2007; Hodge et al., 2009a, 2009b), estimating fault displacement rates from the offset of geomorphic features (Oldow and Singleton, 2008), and monitoring postseismic surface deformation (Wilkinson et al., 2010). The high data density over broad areas made possible by TLS contributes to landslide characterization and monitoring by providing surface images acquired over a range of time scales. These measurements facilitate computation of displacement rates, internal strain, and estimates of volumetric change (Bitelli et al., 2004; Rosser et al., 2005; Lim et al., 2005; Teza et al., 2007; Du and Hung-Chao, 2007; Oppikofer et al., 2009), typically determined from centimeter- to decimeter-scale variation in surface morphology over areas of ∼15,000 m² (Oppikofer et al., 2009). The high data-acquisition rates of TLS allow characterization of centimeter- to decimeter-scale changes in the spatial pattern of erosion and deposition within braided river systems in areas of ∼6000 m² over daily time scales (Milan et al., 2007). These observations document river channel response to short-duration changes in discharge rate and sediment supply. Geospatially registered digital elevation models (DEMs) derived from centimeter-scale TLS point clouds also allow decimeter-scale measurement of fault displacements recorded by the offset of ancient pluvial shorelines.
The measurements provide the means to calculate integrated displacement rates over time scales of 10⁴ to 10⁵ yr, and thus enable evaluation of the correspondence between geodetic and geologic deformation rates over a range of 10⁶ yr (Oldow and Singleton, 2008). TLS has been used to characterize postseismic creep over durations of a few months by time series analysis of subcentimeter displacements of a fault scarp following an earthquake (Wilkinson et al., 2010). The DEMs derived from TLS data allow quantification of different components of postseismic deformation that can be used to accurately estimate the magnitude of paleoearthquakes (Wilkinson et al., 2010).
All of these studies demand sampling precision at the centimeter scale, typically over areas ranging from several hundred square meters to 1000 m². These requirements raise obvious questions about the reliability of the images in characterizing geomorphic surfaces and focus attention on the repeatability of TLS images for modeling natural surfaces and their change through time. In the past, most studies using TLS to image natural surfaces (Bitelli et al., 2004; Du and Hung-Chao, 2007; Oppikofer et al., 2009; Hodge et al., 2009a, 2009b; Wilkinson et al., 2010) provided very optimistic estimates of uncertainty derived from manufacturer-quoted repeatability. Recognition of the potential for underestimation of various sources of error led some researchers (Mechelke et al., 2007; Milan et al., 2007; Cuartero et al., 2010) to test the precision of the images by comparing TLS-determined coordinates of targets with positions calculated from total station (TS) measurements. We expanded upon this approach and evaluated the repeatability of recurring TLS imaging using fixed and variable scan positions employing the same and different scanners. In addition, we addressed the uncertainty budget associated with relative versus georeferenced Earth Centered Earth Fixed (ECEF) reference frames. The final aspect of our analysis examined the impact of point-cloud interpolation during the construction of continuous surfaces.
A broad range of TLS instruments is available and the reported instrumental repeatability varies substantially due to differences in scanning mechanisms (for a good overview of TLS technology, classification, processing methods, and applications, see Shan and Toth, 2008). Assessments of scan uncertainty budgets for different TLS instruments were addressed in several papers (Lichti et al., 2000; Boehler et al., 2003; Mechelke et al., 2007; Cuartero et al., 2010), and these studies emphasize the necessity of investigating the precision of each scanner before use in a specific application. In this paper we document the uncertainty budget for long-range TLS, the Riegl LMS-Z620 and Riegl LPM-800HA (http://www.riegl.com/), used to characterize natural surfaces. We evaluated the performance of both scanners and developed a procedure to test componential uncertainty budgets of the TLS system used in imaging natural surfaces. Although the results presented here are instrument specific, they provide reasonable minimum estimates of the repeatability of a wide range of TLS systems operating in natural conditions and, more importantly, provide a protocol that can be used to assess total uncertainty budgets for older and next-generation TLS instruments used in geoscience applications.
The use of TLS to produce centimeter-scale images of geomorphic surfaces requires assessment of the total and componential uncertainty budget, particularly when used to monitor change. Sources of error are derived from instrumental capabilities, environmental conditions during observation, and subsequent data-analysis methods. In addition, georeferencing error must be considered if the position of a TLS-derived surface is to be delineated in an ECEF reference frame typically needed for integration with other geospatially registered datasets during analysis.
For natural surfaces, the repeatability of TLS images is influenced by the combined effects of scanner precision, environmental conditions, the complexity of the surface, and the interaction of the surface with the laser beam. Manufacturer-reported scanner precision is based on controlled laboratory conditions and does not address the contribution of extrinsic parameters to the total uncertainty budget encountered in field conditions (Boehler et al., 2003; Kersten et al., 2005). TLS precision is degraded by factors such as atmospheric pressure, temperature, humidity, and vibration (Boehler et al., 2003). Environmental conditions may affect the speed of light and delay the time of flight recorded by the scanner due to the attenuated amplitude of the reflected laser pulse, with a resulting error in range measurement (Lichti and Harvey, 2002; Lichti et al., 2002; Boehler et al., 2003). The characteristics of the scanned surface such as color, reflectance, and surface roughness also can greatly influence the quality of scan data (Lichti and Harvey, 2002; Boehler et al., 2003; Kersten et al., 2005). Previous studies (Lichti et al., 2002; Lichti and Harvey, 2002; Rosser et al., 2005; Mechelke et al., 2007) documented the impact of these surface properties on the intensity of reflected laser pulses due to diffuse scattering, absorption, and refraction. Furthermore, the angle of incidence of a laser beam with a surface contributes to the reflectance of the laser beam and results in variations in beam spot size and shape that change the averaging characteristics of a sampled area (Lichti et al., 2002; Lichti and Harvey, 2002; Rosser et al., 2005; Mechelke et al., 2007). The ellipticity of the beam spot increases with the angle of incidence of the laser beam to the target, and the beam spot enlarges with increasing distance, degrading measurement quality. More reliable and precise measurements are obtained with a small, circular beam spot.
The complexity of many geologically interesting surfaces together with the presence of vegetation results in occluded areas, or gaps, within the scan coverage. These gaps significantly degrade the quality of subsequent data analysis. The problem is exacerbated by the difficulty of precisely realigning the scanner in subsequent occupations, which produces different occluded areas in each scan and reduces the spatial correspondence between scans.
For many geoscience applications, the need to correlate geospatially referenced datasets requires registration of TLS images in an ECEF frame. Analysis of geomorphic surfaces typically is combined with other geophysical and geological observations, space and airborne imagery, and digital elevation data from a wide range of sources and with different resolutions. For change detection, repeat observations require a reference network, either supplied by local monuments or by reference to the ECEF frame. In light of well-established issues associated with installation, recoverability, and stability of local monuments (Wyatt, 1982; Sylvester, 1984; Langbein et al., 1995; Langbein and Johnson, 1997), coupled with the positioning resolution readily available through the Global Navigation Satellite System (GNSS), it seems advisable to use the ECEF frame for most studies.
When depicted and analyzed in an ECEF frame, the repeatability of TLS surface models must include GNSS-derived positioning uncertainty in the total error budget. The GNSS position uncertainties arising from ionospheric and tropospheric conditions, satellite and receiver clock bias, and delays associated with multipath signals are well understood and largely compensated for by use of differential dual-frequency GNSS receivers (Hofmann et al., 1997; Sickle, 2001). Over short baselines of hundreds of meters to tens of kilometers, GNSS positioning relative to local reference stations can achieve uncertainties of 0.025 m (Featherstone and Stewart, 2001; Mekik and Arslanoglu, 2009). High-precision georeferencing of local GNSS observations requires processing local measurements in a global reference network using geodetic-quality processing available from several platforms (GAMIT, GIPSY, BERNESE; King and Bock, 2000; Zumberge, 1999; Hugentobler et al., 2001; Dach et al., 2007), but for most applications transformation to the ECEF frame is possible by uploading local measurements to the Online Position User Service (OPUS) provided free of charge by the National Geodetic Survey (NGS; http://www.ngs.noaa.gov/OPUS). There are two products of OPUS provided by the NGS, OPUS-S (static) and OPUS-RS (rapid static). OPUS-S is available for GNSS data collected throughout the world, whereas OPUS-RS works only for the conterminous United States and its territories (Mader et al., 2003; Rick Foote, 2015, personal commun.). OPUS solutions tie local measurements into the U.S. Continuously Operating Reference Station (CORS) network and/or the International GNSS Service (IGS) network and provide ECEF solutions in various reference frames together with positioning uncertainty. Whatever the source of transformation from local to global coordinates, the uncertainties must be propagated together with other sources of error into the total uncertainty budget.
The output of TLS imaging, the point cloud, consists of a cluster of irregularly spaced points with 3D coordinate attributes but without topological context, and is adequate for some qualitative studies. For studies that require quantitative analysis, however, production of an interpolated continuous surface is required to take full advantage of the rich TLS datasets (Rosser et al., 2005). TLS images produce dense point approximations to an original continuous surface for which there is no knowledge of the initial geometry, and the final measured geometry is dependent upon instrumental precision, environmental effects, scanner location, point-acquisition density, and surface complexity. For change-detection studies, these issues are amplified by the fact that successive measurements will not image the same points on natural surfaces. Fortunately, various interpolation methods are available, with applicability dependent upon computational capacity, user objectives, the nature of the measured data points, and the precision requirement for the work (Chang, 2008). All methods must strike a compromise between scan density and the resolution limit of the interpolated surface, which, based on the Nyquist sampling theorem, is coarser by a factor of two than the point spacing of the point cloud (Olshausen, 2000; Proakis and Manolakis, 2006). Surface interpolation of point-cloud data provides the capacity to measure natural surface characteristics such as aspect, slope, and area, and to provide uncertainty measures by producing residual maps tracking the differences between interpolated surfaces and point-cloud distributions. Analysis of residual distributions provides significant insight into the spatial contributions to surface repeatability and statistically represented error.
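The trade-off between point density and interpolated-surface resolution can be illustrated with a minimal gridding sketch. The point cloud and surface below are entirely hypothetical; the grid cell size is set to twice the mean point spacing following the Nyquist criterion, and a residual map tracks the misfit between the points and the interpolated (cell-mean) surface:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical TLS point cloud: irregular x, y samples of a smooth surface z.
n = 2000
xy = rng.uniform(0.0, 10.0, size=(n, 2))
z = 0.05 * xy[:, 0] + 0.1 * np.sin(xy[:, 1])

# Mean point spacing ~ sqrt(area / n); per the Nyquist criterion, the
# grid cell should be no finer than twice the mean spacing.
mean_spacing = np.sqrt(100.0 / n)
cell = 2.0 * mean_spacing

# Cell-mean interpolation onto the grid, then a residual map
# (point z minus the interpolated z of the cell containing the point).
edges = np.arange(0.0, 10.0 + cell, cell)
ix = np.clip(np.digitize(xy[:, 0], edges) - 1, 0, len(edges) - 2)
iy = np.clip(np.digitize(xy[:, 1], edges) - 1, 0, len(edges) - 2)
grid_sum = np.zeros((len(edges) - 1, len(edges) - 1))
grid_cnt = np.zeros_like(grid_sum)
np.add.at(grid_sum, (ix, iy), z)
np.add.at(grid_cnt, (ix, iy), 1.0)
grid_z = np.where(grid_cnt > 0, grid_sum / np.maximum(grid_cnt, 1.0), np.nan)

residuals = z - grid_z[ix, iy]
rmse = np.sqrt(np.mean(residuals**2))
```

The residual array is the sketch's analogue of a residual map: its spatial distribution shows where the interpolated surface fails to capture the point cloud, and its RMSE is the interpolation contribution to the error budget.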
To estimate the componential contributions to the total TLS uncertainty budget, we conducted two experiments at different times and locations. In the first experiment, we used two Riegl LMS-Z620 scanners to image the Happy Canyon fault system, which cuts a Holocene alluvial fan in Panamint Valley (southeast California; Fig. 1). In the second experiment, we used a Riegl LMS-Z620 and a Riegl LPM-800HA to produce repeat scans of a geomorphic slope in the San Gabriel Mountains, southwest California (Fig. 2). The Happy Canyon project provided a robust assessment of the interscanner repeatability using comparable instruments during successive fault surface images. In contrast, the San Gabriel project provided interscanner repeatability using different types of instruments. In addition, we explored the impact of fixed or varied scanning positions during subsequent occupations and used different GNSS schemes to georeference the control network. In one experiment, real time kinematic (RTK) GNSS measurement of the control network was carried out periodically during scanning, and in the other we employed continuous measurement of the network using static GNSS occupation of each control reflector. A TS survey of the control network was used in both experiments to establish a reference frame, and served as the baseline for comparison between instruments and techniques employed during the experiments.
The Riegl LMS-Z620 is a long-range laser scanner that uses a pulse-ranging technique based on the time-of-flight measuring principle. The scanners have a range of 2–2000 m for objects with a reflectance of 80% and have a manufacturer-reported range measurement repeatability of ±10 mm for a single shot and ±5 mm for the average of multiple measurements at 100 m range. Measurements are acquired at a rate of between 8000 and 11,000 points/s, depending upon operation mode (oscillation versus rotation), and are made in horizontal scans by rotating the complete optical head around a vertical axis as much as 360° and in vertical scans as much as 80° through the oscillation or rotation of a polygonal mirror about a horizontal axis. The laser beam divergence specification is 0.15 mrad, which corresponds to a 15 mm increase of beam width per 100 m of range. The Riegl LMS-Z620 uses a tie-point alignment technique and a 3D transformation matrix supported by Riegl software (RiSCAN-PRO) to merge multiple scan positions into a common reference system.
The Riegl LPM-800HA is also a long-range laser scanner that uses a pulse-ranging technique; it has a range of 10–800 m for objects with a reflectance of 80%. Manufacturer-reported range measurement repeatability is ±15 mm at 50 m range with an acquisition rate of 1000 points/s. The instrument performs horizontal scans up to 180° by rotation of the scanner assembly about a vertical axis and makes vertical scans of up to 140° by oscillating the scanner assembly about a horizontal axis. The laser beam divergence specification is 1.3 mrad, which corresponds to a 130 mm increase of beam width per 100 m of range. As such, the beam width of the LPM-800HA is ∼9 times larger than that of the LMS-Z620. The LPM-800HA also uses a tie-point alignment technique and a 3D transformation matrix to merge multiple scan positions into a common reference system.
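The divergence figures quoted for the two scanners follow from the small-angle relation between beam divergence and footprint growth. A short sketch (neglecting the exit-aperture diameter, which adds only a constant offset to the footprint) reproduces the comparison:

```python
# Beam footprint growth from divergence, using the small-angle
# approximation: 1 mrad of divergence widens the beam by 1 mm per
# meter of range (exit-aperture diameter neglected).
def spot_growth_mm(divergence_mrad: float, range_m: float) -> float:
    return divergence_mrad * range_m

lms_z620 = spot_growth_mm(0.15, 100.0)   # 15 mm per 100 m, as specified
lpm_800ha = spot_growth_mm(1.3, 100.0)   # 130 mm per 100 m, as specified
ratio = lpm_800ha / lms_z620             # ~8.7, i.e. the ~9x difference
```

The ratio depends only on the two divergences, which is why the ∼9× footprint difference holds at any range.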
In both experiments we used a Topcon GPT-3000 (http://global.topcon.com/) series TS with an angular resolution of 5 arc-seconds to establish the geometry of the reference network, which consisted of reflectors mounted on tripods or survey poles. The TS can measure to distances of as much as 250 m in reflectorless mode and as much as 1200 m in long-range mode using reflectors. Manufacturer-reported instrumental precision is 3 mm + 2 ppm at 1 km baseline distance.
Georeferencing the control network of reflectors in a relative and ECEF frame was carried out by GNSS. In the first experiment (Panamint Valley), Topcon HiperLite dual-frequency receivers were used in RTK-GNSS mode. In the second experiment (San Gabriel Mountains), S-GNSS measurements were acquired using Leica SR530 dual-frequency receivers. Manufacturer specifications for RTK-GNSS measurements for horizontal and vertical position precision are ±10 mm + 1 ppm and ±15 mm + 1 ppm, respectively, and for S-GNSS measurements are ±3 mm + 0.5 ppm baseline rms (root mean squared) and ±6 mm + 0.5 ppm baseline rms, respectively.
In the first experiment, the alluvial fault scarp at Happy Canyon in Panamint Valley was imaged by two consecutive scans from three positions using two Riegl LMS-Z620 instruments. The three scan positions and a reference network of seven reflectors (Fig. 3) were fixed during the two-day survey. The reference network of seven cylindrical reflectors (10 cm diameter and 10 cm height) coated with retroreflective material was distributed around the study area in a geometry that provided high-angle intersection of baselines, with distances from each scan position ranging from 10 m to 525 m (Fig. 3). The geometry of the reflector network used to control TLS imaging was established through 10 repeat measurements, made by 2 methods, of each reflector using a TS from a single location. The TS-determined position of each reflector was made by direct measurement of the estimated center and compared to a computed center obtained by measuring the top and the bottom of each reflector.
To geospatially reference the relative network, the reflector positions were determined by RTK-GNSS, with each reflector being measured 5 times for 1 min occupations separated by ∼2 h. The reflectors were referenced to a GNSS base-station position established by 1 s epochs during two occupations of 7 and 5 h each. The base station was referenced to the NGS CORS using OPUS, and the RTK-GNSS positions for reflectors were postprocessed in the ECEF frame using Topcon software.
The Happy Canyon fault scarp was imaged over an area of ∼400 m × 300 m from each scan location by the 2 Riegl LMS-Z620 scanners, one from the University of Texas at Dallas (UTD) and the other from the University of Kansas (KU). Both scanners were adjusted to the same acquisition parameters, with a scanning resolution of 0.025°, which resulted in a spot spacing of 0.05 m at 100 m from the scanner. During successive scans, the TLS instruments occupied the same tripods through the measurement circuit. Although the scanners occupied the same tripods, perfect alignment was not attempted between consecutive occupations; this resulted in a slight variation in spot position on the ground for each scan. Use of three scan positions reduced coverage gaps and enhanced the spatial coverage of the study area. Multiple scans also improved the image resolution by reducing the spacing between measured points on the ground and by increasing the spatial density of areas imaged by small beam spots. (For detailed information on the work flow of TLS imaging, see the Supplemental File1.)
In the San Gabriel Mountains study, a geomorphic slope exposed after a recent wildfire was scanned twice in one day and the slope was rescanned three days later. Each deployment covered the same area of 100 m × 100 m, but used different scan and reference network positions. During the first deployment, both scanners were set up at three different positions, for a total of six scan positions during the survey. The scan area was located within a control network of five disk-shaped reflectors (10 cm diameter) with a center mark coated with retroreflective material. The control network (Fig. 4) was fixed for the duration of the first deployment. The scan area was reoccupied three days later and imaged from three scan locations different from those of the first occupation using the Riegl LMS-Z620. During the second survey, the control network consisted of five reflectors, which were not placed in the same locations used in the previous deployment. The geometry of the control network was established during each occupation by TS surveys consisting of three measurements of each reflector center. Alignment of the relative position of each reference network in the ECEF frame was accomplished using S-GNSS by measuring each reflector for 6 h at 15 s epochs. This experiment enabled a comparison of the uncertainty budget of simultaneous scans using different instruments, the Riegl LMS-Z620 and Riegl LPM-800HA, and of repeat scans using the Riegl LMS-Z620.
Calculating the measurement uncertainty budget of an instrument requires a well-established reference frame derived from a precise measurement tool that characterizes spatial position at a higher resolution than the tested instrument (Schmidt and Wong, 1985). In these experiments, the geometry of the reflector network was used as the reference frame for repeatability analysis. The geometry of each control network was established in a relative and an absolute frame by using TS and GNSS measurements, respectively. Both instruments have precisions substantially greater than that of TLS. When the S-GNSS scheme was employed to define the relative position of each reflector in the network, TS and GNSS measurements were comparable. In contrast, the precision of RTK-GNSS measurement of the reflector network position in a relative frame was substantially degraded from the S-GNSS positioning. In both cases, georeferencing the reflector networks in the ECEF frame was established by registration of the survey base station to the nearest NGS CORS using OPUS and resulted in additional positional uncertainty. In both experiments, the TS-derived reflector network was used as a fiducial reference frame to determine the measurement uncertainty of all instruments.
The coordinate system used in the experiments was a right-handed Cartesian coordinate system. This coordinate system is a primary output of both TS and GNSS and was used to define the 3D position of each reflector of the reference network in both a local and an absolute reference frame (Langley, 1998; White, 2010; Lemmens, 2011). Because the origin of the 3D coordinate system for the TS is local whereas that for GNSS is the Earth’s center of mass, we designate 3D positions defined in the local and absolute (ECEF) reference frames by x, y, z and X, Y, Z coordinates, respectively. A major advantage of using the unprojected ECEF reference frame is that the curvature of the Earth does not affect the comparison of laser and GNSS positioning. The Cartesian coordinate system provides a simple, worldwide standardization and a convenient method for accuracy calculation (Lemmens, 2011).
Fiducial Reference Frame Uncertainty
During the first experiment, we used cylindrical reflectors; the centroid of each was estimated as the mean of a center position computed from top and bottom measurements and a direct measurement of the center of the reflector surface. In the second experiment, we used flat reflectors with clearly marked centers, which were directly measured three times. The statistical analysis of reflector centers yielded an RMSE of ±0.005 m for the cylindrical reflectors and ±0.005 m for the flat reflectors. The calculated mean values of reflector centers using both methods are listed in Table 1.
In both experiments, the use of the TS as the fiducial reference was confirmed, and the contribution of covariance to the repeatability of the measurements was negligible. In the first experiment, the combined standard deviation of 10 repeated measurements of each of 7 reflectors established the network geometry with an error of ±0.005 m. In the second experiment, the 3 repeated TS measurements of each reflector likewise yielded ±0.005 m. The variance in the vertical coordinate z is usually larger than the variance in the horizontal coordinates x and y (Table 1). The magnitudes of the variances, which are at the millimeter level, are on average 10 times larger than the covariances, and the covariance is considered insignificant. Our tests show that variation in range to the reflector, to distances of 500 m, does not affect the repeatability of the TS (Fig. 5).
Georeferencing Uncertainty of the Reflector Network
The reference network was georeferenced using GNSS, with both RTK-GNSS and S-GNSS methods being investigated. In the first experiment, the RTK-GNSS measurement of the reflector network in a relative frame was transformed into an absolute reference frame by postprocessing all the reflector positions with reference to the base station, which was registered to the nearest CORS using OPUS. In the second experiment, each reflector position was obtained as an individual base station using S-GNSS. The entire set of reflector positions was postprocessed and network adjusted with reference to the reflector position that had the minimum OPUS-reported positional error. The resultant reflector coordinates in the ECEF frame for both experiments are listed in Table 2.
The repeatability of GNSS measurement of the reflector network in a local frame was estimated by transforming the results into the TS fiducial frame using a six-parameter Helmert transformation. The six-parameter transformation was selected over a seven-parameter transformation because the unscaled misfit between coordinate values represents a measure of repeatability. In the case of GNSS and TS measurement of the reflector network, the observations are independent and calculation of covariance is not required. The combined positional uncertainty (Δcomb) for each reflector is represented as the quadratic sum of the residuals in x, y, z and the precision of the GNSS measurement is determined by the RMSE in the local frame. For expansion to the ECEF frame, the local RMSE was combined with the uncertainty of the OPUS position referenced to CORS in the NAD83 (North American Datum 1983) datum frame (Snay et al., 2011a).
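As a sketch of this procedure (with synthetic coordinates, not the survey data), a six-parameter Helmert transformation — rotation plus translation, no scale — can be estimated by the closed-form SVD (Kabsch) solution, with Δcomb taken as the quadratic sum of the per-axis residuals and the network RMSE computed from the Δcomb values:

```python
import numpy as np

def helmert6(src, dst):
    """Best-fit six-parameter (rotation + translation, no scale)
    transformation mapping src onto dst, via the SVD (Kabsch) solution."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
    t = dc - R @ sc
    return R, t

# Synthetic reflector network: a TS fiducial frame and a "GNSS" frame
# related by an arbitrary rigid motion plus ~5 mm per-axis noise.
rng = np.random.default_rng(1)
ts = rng.uniform(0.0, 100.0, size=(7, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                              # ensure a proper rotation
gnss = ts @ Q.T + np.array([10.0, -5.0, 2.0])
gnss += rng.normal(scale=0.005, size=gnss.shape)

R, t = helmert6(gnss, ts)
res = ts - (gnss @ R.T + t)            # per-axis residuals x, y, z
d_comb = np.linalg.norm(res, axis=1)   # quadratic sum per reflector
rmse = np.sqrt(np.mean(d_comb**2))     # network RMSE
```

Because the transformation is unscaled, the residuals measure repeatability directly rather than being absorbed into a scale factor, which is the rationale for preferring six parameters over seven here.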
The repeatability analysis shows that the S-GNSS strategy produces a precision three times greater than comparable measurements using RTK-GNSS. In addition, we found that the S-GNSS measurements recorded in a local frame have the same level of precision as the TS measurements. The precision of the RTK-GNSS measurement is ±0.016 m, whereas with S-GNSS the precision improves to ±0.005 m (Fig. 6). To obtain the repeatability of GNSS measurements in the ECEF frame, the base station uncertainty in the CORS network was propagated into the total error budget. The RTK-GNSS base station uncertainty reported by OPUS is 0.022 m. During the second experiment, the S-GNSS station (SGD1C01) with a repeatability of ±0.014 m (Table 3) was used as the base. The calculated repeatability of the RTK-GNSS and S-GNSS in the ECEF frame is ±0.027 m and ±0.015 m, respectively.
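The ECEF values reported above are consistent with a quadratic (root-sum-square) combination of the local-frame repeatability and the base station uncertainty, which independent error components permit. A minimal sketch using the reported numbers:

```python
import math

def rss(*components_m):
    """Root-sum-square combination of independent error components (m)."""
    return math.sqrt(sum(c * c for c in components_m))

# RTK-GNSS: local repeatability +/-0.016 m, OPUS base uncertainty 0.022 m
rtk_ecef = rss(0.016, 0.022)   # ~0.027 m
# S-GNSS: local repeatability +/-0.005 m, base repeatability +/-0.014 m
s_ecef = rss(0.005, 0.014)     # ~0.015 m
```

The same combination rule extends to additional independent terms, such as the surface-interpolation contribution, when building the total error budget.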
We also confirmed the OPUS-computed uncertainty of ∼0.02 m (Table 3) reported by the NGS. During the OPUS processing of our S-GNSS network solutions made in the San Gabriel experiment, the base station uncertainty was derived from the combination of the peak-to-peak error in each axial component of the 3D position (Snay et al., 2011b). We postprocessed all four GNSS-measured reflector positions by fixing one of the positions as a base and performing a network adjustment for the others relative to the base. The process was repeated four times by alternating the base among the four reflector positions. The repeatability of ∼0.02 m (Table 3) obtained from the repeated positioning of each reflector is identical to the OPUS-reported uncertainty.
TLS REFERENCE FRAME REPEATABILITY
For our experiments, the fiducial reference network established by TS measurement provides the basis for estimating the repeatability of the two long-range scanners, the Riegl LMS-Z620 and LPM-800HA. The TS characterization of the reference network, with an RMSE of ±0.005 m, exceeds the manufacturer-reported uncertainty for the TLS of ±0.01 m, and coordinate residuals from transformation of TLS positions into the TS frame are viewed as limits of TLS repeatability imposed by instrumental and environmental inputs. The results show that TLS repeatability in a field environment is almost three times worse than the manufacturer-reported precision.
In both experiments, the reflector network positions acquired from each scan position were compared with the TS-derived positions separately. The residuals between coordinate pairs were calculated, and the combined positional uncertainty (Δcomb) for each reflector was estimated. The observed positional coordinates and the residuals are presented in Tables 4 and 5. The combined positional uncertainties obtained from all scan positions were incorporated to yield the total network RMSE. The RMSE reflects the instrumental repeatability together with the uncertainty of point-cloud alignment from multiple scan positions. The georeferencing uncertainty was combined with the TLS relative repeatability to compute the network RMSE in the ECEF frame.
In the first experiment, two LMS-Z620 instruments (UTD-Z620 and KU-Z620) successively scanned the reference network and fault scarps from the same three positions. For both instruments, there is no relationship between positional uncertainty and the distance between scanner and reflector, which ranged between 10 m and 525 m (Fig. 7). The results of the survey, however, indicate a significant disparity between instruments. The UTD-Z620 and KU-Z620 scanners yield relative network RMSEs of ±0.053 m and ±0.027 m, respectively. The difference in repeatability between the KU-Z620 and UTD-Z620 is ∼0.02 m, and the discrepancy is clearly displayed in Figure 7. The UTD-Z620 uncertainties are larger than those of the KU-Z620 instrument, with residual values ranging from 0.01 m to 0.1 m and from 0.006 m to 0.053 m for the UTD and KU scanners, respectively. In the case of the UTD-Z620 scanner, the reflector network repeatability is ∼0.06 m, which is twice that of the KU-Z620 at every scan position, with a larger variance of ±0.03 m in the positional uncertainty of the reflector positions. For the KU-Z620 scanner, the RMSE (∼0.03 m) for the reflector network was uniform for each scan position and the variance for 80% of the measurements was ±0.01 m. There were only two outliers, with positional uncertainties of ±0.053 m and ±0.006 m (Table 4).
In the second experiment, the repeatability of the UTD-Z620 instrument was greatly improved (following recalibration by Riegl), with an uncertainty of ±0.029 m for both the day-one and day-four occupations. Within the range of 16 m to 207 m, the residual values are independent of the distance between scanner and reflector for both scanning periods. The residual values range from ±0.012 m to ±0.045 m in both cases, with only one outlier of ±0.051 m during the day-four deployment (Table 5). The use of different scanning configurations between occupations did not contribute additional error, and the RMSE (∼0.03 m) for the reflector network was uniform for each scan position during both deployments. The results of this experiment confirm that the Riegl LMS-Z620 TLS can deliver measurement repeatability of ±0.028 m (an average of all three results) in a local reference frame in the field environment.
Based on the repeatability results of the first experiment, it is clear that the UTD-Z620 scanner malfunctioned during the Panamint Valley deployment. Although both scanners are mechanically identical and operated under the same conditions, the repeatability discrepancy between the UTD and KU scanners was not observed between repeat scans of the UTD scanner during the second experiment (Fig. 8). We conclude that the UTD scanner lost calibration in transit to the site or at the onset of the first experiment. As standard procedure, the UTD-Z620 was returned to Riegl for calibration before the second experiment in the San Gabriel Mountains and during that experiment produced the expected repeatability. As a cautionary note, we recommend that users have scanners calibrated both before and after scanning deployments to assess instrumental repeatability.
In addition to the problem with the UTD-Z620, we discovered that the measurement precision of the Riegl LMS-Z620 (both instruments) was sensitive to the relative elevation of the reflector with respect to the scan position. During the first experiment, reflectors elevated above the scan position had greater uncertainty than those at the same elevation as, or below, the scanner. The impact of vertical angle on measurement uncertainty was investigated over a range from 0° to ∼±5°, where positive and negative angles define the position of the control reflector above and below the horizontal axis of the scanner, respectively. When the control reflectors are below the horizontal axis, 45% of the UTD-Z620 positional uncertainties are ≥±0.03 m and 55% are <±0.03 m. The uncertainties for the KU-Z620 are smaller but follow the same pattern, with 18% of the positional uncertainties ≥±0.03 m and 82% <±0.03 m. In contrast, when the control reflectors are above the horizontal axis, 100% of the positional uncertainties are >±0.03 m for the UTD-Z620; for the KU-Z620, 63% are >±0.03 m and 37% are <±0.03 m.
During the second experiment, we estimated the measurement repeatability of the LPM-800HA at ±0.070 m in a relative frame. As with the other instruments, positional uncertainty is not correlated with the distance between the scanner and the reflector. The residuals between the TLS and TS realizations of the reference network range from ±0.014 m to ±0.13 m (Table 5). The RMSE for the reflector network, however, was inconsistent among the three scan positions (SP): the values of ±0.019 m, ±0.087 m, and ±0.064 m for SP01, SP02, and SP03, respectively, show significant variance. The uncertainty in each reflector position shows that SP01 produced a result comparable to the LMS-Z620 scanner and consistently better than the results from SP02 and SP03. The positional uncertainty of the reflector positions shows a large variance, ±0.04 m, which arises from the mechanical limitations of the LPM-800HA scanner and is consistent with the manufacturer specifications. The LPM-800HA has a spatial resolution of 13 cm at 100 m distance, and in our experiment most of the reflector positions were ∼100 to 200 m distant.
TLS REPEATABILITY OF NATURAL SURFACES
The primary motive for this investigation is to characterize the repeatability of TLS images of natural surfaces. Of particular interest is establishing the limits of repeatability during recurring measurement of a surface to estimate detection limits for characterizing changes to geomorphic features. The TLS repeatability using the reference network, measured in both relative and ECEF frames, provides a quantitative basis for repeatability analysis but does not adequately address the repeatability of a natural surface. Natural surfaces are, by their nature, irregular and composed of material with wide variations in reflectivity. Further complicating the assessment of repeatability is the fact that successive TLS scans will not image the same positions on the surface, because even slight variations in the origin of the scanner axes change the distribution of points imaged and represented in a point cloud. This inability to replicate measurements of specific features on a surface makes rigorous uncertainty analysis of point-cloud data intrinsically complicated, if not impractical.
Without a priori knowledge of surface properties, including surface roughness, reflectivity, and slope aspect, the most practical means of estimating the repeatability of a natural surface is by analyzing and comparing an interpolated surface to point-cloud data. To this end, we evaluated acquisition methods to understand the parameters that control the variations in natural surface analysis. In the first experiment, we estimated TLS repeatability under constant environmental conditions: all scan positions and the reflector network were fixed during successive scans, and two point-cloud models of the natural surface were acquired. During the second experiment, we compared different instruments and examined the impact of using different reference networks and scan positions in surface characterization. During the first occupation, the reflector network was fixed but different scan positions were used for the LMS-Z620 and LPM-800HA instruments. During reoccupation, three days later, we deployed the LMS-Z620 but used new positions for the reference network and for the scan positions.
Before analyzing the repeatability of the scanned surfaces, individual scans must be merged, the data distribution assessed in the context of surface resolution, the data geospatially referenced, and the points interpolated into a continuous surface. The multiple point-cloud images acquired from the various scan positions were aligned into single point clouds. The data density of the merged point cloud was evaluated to determine the resolution of the interpolated surface that can be produced and, if needed, the data were segmented into domains for interpolation. The point-cloud data were output in ASCII (American Standard Code for Information Interchange) format and transformed into a plane-projected UTM (Universal Transverse Mercator) coordinate system. Following a series of tests, a radial basis function (RBF) was selected as the gridding method. Continuous surfaces were created and their repeatability analyzed using ArcGIS (https://www.arcgis.com/) software. In the following we describe the data reduction process, the evaluation and selection of the interpolation routine, and the analysis of the interpolated surfaces. From the surface analysis, we document that TLS can image a natural surface at centimeter-level precision.
The first step in the data reduction is to merge the scans from all scan positions into a common fiducial reference frame using either vendor software (RiScan-Pro for Riegl instruments) or a six-parameter Helmert transformation (UTD software). For optimal alignment, individual images should have some surface overlap, and all require a minimum of three, and preferably more, common targets of the reference network. Although any scan position can be used as a reference for image combination, we strongly advocate using an external fiducial reference frame established by either TS or S-GNSS to minimize the alignment uncertainty (for the alignment procedure, see the Supplemental File). In our processing, we used RiScan-Pro to produce a merged point cloud.
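The merge step amounts to fitting a six-parameter (three rotations, three translations) transformation between the common reflector targets seen from two scan positions. A minimal sketch using the standard Kabsch/Procrustes least-squares solution is shown below; this is a generic illustration, not the RiScan-Pro or UTD implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares six-parameter transform (rotation R, translation t)
    mapping src points onto dst via the Kabsch/Procrustes method.
    Requires at least three noncollinear common targets."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Applying `src @ R.T + t` then expresses one scan's reflector coordinates in the other frame; the residuals of that fit feed directly into the alignment uncertainty discussed above.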
The assessment of the size and data density of the point cloud is needed to determine data partitioning during processing and to establish the surface resolution that can be produced from the dataset. One of the major issues in processing TLS datasets is handling large files that typically contain millions of points and occupy several hundred megabytes of computer memory. Merged scans were cropped and all unnecessary points outside the designated study area removed. Large datasets (containing more than 10⁷ points and occupying over 300 MB) were segmented, ensuring a 15% overlap between segments to avoid edge effects when the continuous surfaces created from the segments are recombined to build the complete surface model of the study area. In the first experiment, each scan had a spatial resolution of 0.05 m to 0.15 m on the ground. The combination of three scans yielded an average point spacing of ∼0.05 m and a dataset containing ∼13 × 10⁶ points covering an area of ∼300 m × 290 m. Similarly, in the second experiment the combined scan had an average point spacing of ∼0.025 m, and the LMS-Z620 datasets hold ∼14 × 10⁶ points covering an area of ∼100 m × 100 m. The LPM-800HA dataset contains ∼5 × 10⁶ points, with an average spatial resolution of ∼0.03 m on the ground, covering the same areal extent. The data from Panamint Valley did not require division, but the San Gabriel datasets were too large to process as a single entity and were divided using utilities within RiScan-Pro. The data from both areas were exported as point clouds in ASCII format.
The merged scans are transformed into an ECEF frame and subsequently converted into the UTM coordinate system via a six-parameter Helmert transformation. Depiction of the merged scans in UTM coordinates improves the capacity to compare the data with other georeferenced information. Using a fixed reference network, the two merged point clouds for Panamint Valley can be compared in either a relative or an absolute frame. In contrast, the lack of a common reference network for successive measurements in the San Gabriel experiment necessitated use of the absolute (ECEF) frame for point-cloud comparison.
The TLS datasets in the ECEF frame were converted into plane-projected UTM coordinates to aid computation and interpretation of a surface model (Langley, 1998). The geospatially referenced TLS datasets were transformed into the NAD83 UTM Zone 11N coordinate system. The transformation of the projected TLS datasets yields RMSE values of ±0.077 m and ±0.04 m for the Panamint Valley and San Gabriel Mountains datasets, respectively. The distortion associated with projection into UTM does not affect our analysis of repeatability, which is based on differencing successive observations.
The georeferenced point clouds were interpolated in ArcGIS, and all surface analysis was conducted within ArcMap because of the program's capacity to analyze and manage an extensive range of spatial datasets. The interpolation provides a predicted surface that is compared with the input spatial data to produce a residual map and a total RMSE. From the computed surface, a digital elevation model (DEM) was extracted at a resolution set by the original data density to honor the Nyquist sampling criterion. We obtained a 0.1 m resolution DEM for Panamint Valley and a 0.05 m resolution DEM for the San Gabriel Mountains dataset.
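As a rough sketch of the sampling logic, a DEM cell size of about twice the mean point spacing keeps the grid at or below the Nyquist limit of the data; the function below is illustrative and is not the ArcGIS computation.

```python
import math

def dem_cell_size(area_m2, n_points):
    """Estimate a Nyquist-respecting DEM cell size (m): roughly twice the
    mean point spacing implied by the point density over the scanned area."""
    spacing = math.sqrt(area_m2 / n_points)
    return 2.0 * spacing
```

A point spacing of ∼0.05 m, for example, supports a 0.1 m DEM, consistent with the Panamint Valley resolution quoted above.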
The repeatability of the DEMs was estimated by differencing the surface scans acquired with two instruments (Panamint Valley) and through successive scans with a single instrument (San Gabriel Mountains). The DEMs of a common area were cropped to the same spatial extent as float rasters, which were then converted into integer rasters for differencing. In addition to a difference map, value attribute tables were computed that enabled calculation of the residual between the elevation values at each pixel location and an estimate of the RMSE. To minimize surface degradation during the conversion from float raster to integer raster, the threshold value was set at the millimeter scale.
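The float-to-integer conversion can be done at millimeter precision so that rounding degrades the surface by no more than 0.0005 m; a minimal sketch with illustrative elevations follows.

```python
import numpy as np

# Illustrative 2 x 2 DEMs of the same cropped extent (elevations in m).
dem_a = np.array([[10.001, 10.052], [10.103, 10.148]])
dem_b = np.array([[10.004, 10.050], [10.108, 10.151]])

# Scale to millimeters before integer conversion (threshold at millimeter scale).
ia = np.rint(dem_a * 1000).astype(np.int64)
ib = np.rint(dem_b * 1000).astype(np.int64)

diff_mm = ia - ib                                   # per-pixel difference map
rmse = np.sqrt(np.mean((diff_mm / 1000.0) ** 2))    # repeatability estimate (m)
```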
Evaluation and Selection of Interpolation Method
In this section we assess the merits of the various interpolation tools available in ArcGIS. Options include triangulated irregular network (TIN), inverse distance weighted (IDW), radial basis function (RBF), local polynomial (LP), global polynomial (GP), and Kriging (K), all of which are described in detail by Chang (2008). Each interpolation routine has characteristic strengths and weaknesses, and we chose the method best suited to interpolate across data gaps. To select an appropriate interpolation method, we evaluated and cross-validated the available interpolation tools using a small segment of data from the Happy Canyon scarp surface.
The experimental results show that, with some exceptions (GP and TIN), the selection of interpolation method does not make a significant difference in the production of an interpolated surface. The high data density of the point clouds suppresses the impact of the different characteristics of the interpolation routines, and only the GP and TIN methods produced unacceptable results. Comparison of the IDW, RBF, LP, and K methods, by differencing the raster interpolations of each, shows negligible differences. In contrast, the GP method produced large residuals because it is best suited to evaluating general (long-wavelength) trends and does not adequately characterize short-wavelength surface irregularity. Similarly, the TIN method, which uses Delaunay triangulation, is not practical for repeatability analysis because the continuous surface is created from contiguous nonoverlapping triangles using each data point as a node (Childs, 2004; Chang, 2008). The interpolated surface lacks cross-validation statistics and cannot interpolate across data gaps.
The DEMs generated by the RBF, IDW, and K methods are virtually indistinguishable (Fig. 9) but differ from those produced by LP. Comparing the point cloud with the interpolated surfaces produced by the RBF, IDW, and K methods yields an RMSE of ∼±0.03 m, slightly less variance than the LP DEM with an RMSE of ±0.04 m. The LP method does not smoothly interpolate across data gaps and produces spikes in the continuous surface at these locations; the problem is exacerbated when the polynomial order is increased above second order. The greatest discrepancy is highlighted in the comparison of the interpolated surfaces, where the DEMs produced by RBF, IDW, and K differ sharply from that produced by LP (Fig. 10). The RMSE misfit of ∼±0.34 m is produced by outliers in scan areas with data gaps and originates from surface spikes. In contrast, the DEMs produced by RBF, IDW, and K have RMSE surface differences, or misfits, of ∼±0.01 m (Fig. 10). Where the terrain surface is relatively smooth, all the interpolation methods produced surface repeatability of ∼±0.005 m. The greatest variations between the DEMs produced by these interpolation routines occur in relatively rough areas, around channels, and in areas with data gaps produced by filtering vegetation.
In our analysis, we selected RBF for gridding the TLS data because of the method's ability to efficiently handle large datasets and its capacity to interpolate across data gaps. Kriging was not employed because it is relatively slow; the K method is a stochastic interpolator that uses statistical probability in the development of a predicted surface and is best suited to sparse datasets. Both RBF and IDW are computationally fast and well suited to large datasets; both are statistically robust and produce outputs that are nearly indistinguishable. The methods differ slightly in the propensity of IDW to produce minor bull's-eye patterns in the interpolated surfaces, and so we prefer RBF for surface construction.
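Our gridding was done in ArcGIS, but the RBF approach itself is easy to demonstrate; the sketch below uses SciPy's `RBFInterpolator` on synthetic points, with parameter choices (kernel, neighborhood size) that are ours for illustration only.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(200, 2))    # synthetic bare-earth x, y (m)
z = np.sin(pts[:, 0]) + 0.1 * pts[:, 1]        # synthetic elevations (m)

# Local RBF fit: each evaluation uses the 25 nearest data points.
rbf = RBFInterpolator(pts, z, neighbors=25, kernel='thin_plate_spline')

# Grid the surface at 0.5 m resolution, interpolating across small data gaps.
gx, gy = np.meshgrid(np.arange(0, 10, 0.5), np.arange(0, 10, 0.5))
dem = rbf(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Cross-validation style misfit between input points and the fitted surface.
rmse = np.sqrt(np.mean((z - rbf(pts)) ** 2))
```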
Analysis of Interpolated Surfaces
The geospatially referenced TLS point clouds for both experiments were interpolated as continuous surfaces using RBF, and the repeatability of the recurring scans under different acquisition schemes was estimated. We calculated the uncertainty of the interpolated surface models (DEMs) by mapping the residual between the point-cloud data and the computed surfaces. When the effect of vegetation is removed from the point-cloud data, RBF interpolation creates a surface model with centimeter-level precision. The repeatability of successive DEMs for each study area was determined by differencing the raster surfaces. Importantly, our analysis indicates that the repeatability of the continuous surfaces is equivalent to the total uncertainty budget associated with TLS surface modeling: the surface residuals are consistent with the summation of error components arising from the field deployment of the TLS, georeferencing, and interpolation of continuous surfaces.
Vegetation is the greatest natural source of noise in surface analysis, and cleaned datasets reduced the interpolation error by as much as a factor of 10. This is well established by comparison of the Panamint Valley and San Gabriel Mountains experiments. Panamint Valley, unlike the San Gabriel Mountains study area, which had recently experienced a major fire, retained a natural vegetation cover. The Panamint Valley interpolated surfaces exhibited RMSE misfits with the point clouds of ∼±0.2 m for both scanners (UTD-TLS and KU-TLS). Analysis of the interpolated surfaces using residual maps created in ArcMap (Fig. 11) shows that the highest residuals occur in areas of vegetation. Production of bare-earth point clouds using the TerraScan software (https://www.terrasolid.com/) reduced the surface–point-cloud RMSE to ±0.02 m in the worst case and typically to ∼±0.01 m.
The TerraScan program provides an efficient means to classify the raw LiDAR (light detection and ranging) point-cloud data into different point classes, such as ground points, vegetation points, and erroneous low points, by using a set of algorithms with user-defined tolerances. Among the classifying algorithms, the most applicable for TLS data is the ground point classification routine, which requires the user to establish a dispersion limit and a reference area where the ground surface is not obscured by vegetation. Although the algorithm is best suited for airborne LiDAR datasets, it is useful to run the ground routine to establish a bare-earth point classification in TLS data. This procedure helps segregate the vegetative part of the point cloud from the ground surface and establishes a means of stripping the vegetation during the development of a bare-earth surface model. The issue in implementing this procedure is to discriminate between the ground surface and other features (vegetation and spurious low-point outliers) in the dataset. This process is aided by the observation that within the 3D point cloud data, areas of evenly distributed points that mirror the stepping function of the laser beam during acquisition denote a relatively smooth, continuous surface that corresponds to the ground surface. In contrast, in areas covered by vegetation, points above a more or less continuous surface are highly dispersed and form data shadows along the axis of the scan direction. Once classified, vegetative cover and spurious low points are cleaned from the dataset manually and no longer used in the analysis. Using TerraScan, however, does not delete the data points, which can be retrieved for further analysis if needed.
Even in areas not affected by vegetation, natural surfaces composed of material with wide variations in reflectivity and grain size will produce differences in the surface dispersion of data points. We observe a modest dispersion of ∼±0.05 m in the bare-earth surface in Panamint Valley, but the points retain the definitive grid pattern typical of TLS data (Boehler et al., 2003; Mechelke et al., 2007; White, 2010). This compares favorably with the San Gabriel data, which were not cleaned because no vegetation existed and which had a point dispersion of ±0.025 m. The reduced dispersion in the San Gabriel data arises from the fine-grained nature of the surface cover, which contrasts with the pebble-cobble surface in Panamint Valley. In the Panamint Valley study site, any points within a dispersion limit of ±0.05 m and close to the selected ground-point reference area were classified as bare-earth points, and any isolated point or small group of points below or above the ground surface was classified as low points or vegetation points, respectively. Once the classification was completed, the ground points were extracted and exported separately as a text file. Although the process is laborious, it is a very effective means of obtaining cleaned bare-earth point clouds because TerraScan is a robust interface that can easily handle large point-cloud datasets.
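The dispersion test described above can be caricatured in a few lines; this is a simplified sketch of the classification logic, not TerraScan's algorithm, and the ±0.05 m default mirrors the Panamint Valley limit.

```python
import numpy as np

def classify_points(z, ref_z, dispersion=0.05):
    """Label each point relative to a local reference ground surface:
    'ground' within +/- dispersion, 'vegetation' above, 'low' below."""
    labels = np.full(z.shape, 'ground', dtype=object)
    labels[z > ref_z + dispersion] = 'vegetation'
    labels[z < ref_z - dispersion] = 'low'
    return labels
```

Ground points are then extracted and exported for interpolation, while vegetation and low points are set aside.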
The experiments performed here show that the choice of acquisition scheme, such as variable or fixed look directions and reflector-network locations, has no impact on the repeatability of TLS surface characterization. The experiment in the San Gabriel Mountains yielded a difference between two raster surfaces, produced by imaging the same surface with variable scanner and reflector network positions after a three-day interval, with an RMSE of ±0.046 m. The residual map (Fig. 12) shows that an uncertainty of ±0.05 m is evenly distributed over the entire surface, with the most significant outliers occurring where data gaps were produced by filtering vegetation. In the Panamint Valley experiment, where a constant scan position and reflector network setup were used, the residual map (Fig. 13) reveals the same spatial distribution but a larger RMSE of ±0.070 m. The increase in RMSE is due to the ±0.02 m calibration variation between the KU-Z620 and UTD-Z620. Discounting the ±0.02 m calibration error in the UTD-Z620, the distributions of surface residuals in the Panamint Valley and San Gabriel Mountains experiments are essentially the same. Where the effects of vegetation and data gaps are mitigated, the difference in acquisition protocol does not affect surface repeatability.
Changing the resolution of the interpolated surfaces does not substantively affect the residuals produced by differencing. We recalculated the repeatability of the two surfaces acquired with repeat scans of the UTD-Z620 in the San Gabriel Mountains experiment. The interpolated surfaces were resampled at resolutions ranging from 0.01 m to 20.0 m using the nearest-neighbor technique. For interpolation values near the Nyquist sampling frequency, the surface repeatability is not affected (Fig. 14), and it remains constant up to 0.30 m. From 0.30 m to 20 m, we observed slight fluctuations in the residuals of +0.001 m to −0.005 m, but given the resolution of the Riegl scanner, these variations are statistically insignificant.
Of particular note is the experimental documentation that the residual between DEMs acquired by successive surface scans is the same as the aggregate error determined for the reference network of each scan dataset. The experimentally determined surface repeatability for the raster datasets coincides with the quadratic sum of the observed errors associated with TLS instrumental, georeferencing, and interpolation uncertainty. Total propagated errors for the reference network were ±0.047 m and ±0.073 m for the San Gabriel and Panamint Valley experiments, respectively. These values are equivalent to the surface repeatability obtained in both experiments, ±0.046 m and ±0.070 m for the LMS-Z620 datasets from the San Gabriel Mountains and Panamint Valley. Furthermore, the mixed LMS-Z620 and LPM-800HA survey, using different scan positions simultaneously for the same surface, yields a surface residual of ±0.083 m, which is consistent with the combined reference error of ±0.080 m for both instruments.
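The quadratic (root-sum-square) combination of independent error terms is the standard propagation used here; the component values in the example are illustrative magnitudes, not the tabulated experiment results.

```python
import math

def total_uncertainty(components):
    """Root-sum-square combination of independent error components (m)."""
    return math.sqrt(sum(c * c for c in components))

# Illustrative instrumental, georeferencing, and interpolation terms (m).
total = total_uncertainty([0.042, 0.018, 0.010])
```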
DISCUSSION AND CONCLUSIONS
The results of these experiments allow us to characterize the repeatability of TLS images of natural surfaces and provide insight into the detection threshold for characterizing changes in geomorphic surfaces. In addition, we present acquisition and processing protocols that minimize the error budget in TLS surface modeling. Of the three components of analytical error estimated in this investigation, instrumental repeatability is the primary contributor. Georeferencing uncertainty represents only a small component of the total uncertainty budget, particularly if positioning of the reflector reference network is done using S-GNSS occupations in lieu of RTK-GNSS methods. Similarly, although interpolation of continuous surfaces from point-cloud data contributes only a small fraction of the error budget, its use provides the means to rigorously compute measurement uncertainty and the limits of surface resolution critical to geomorphic change detection.
Based on the results of our experiments, the instrumental repeatability of TLS instruments used to image natural surfaces in field conditions is substantially degraded relative to measurements made in laboratory settings, confirming the concerns expressed in several previous studies about using manufacturer specifications as a basis for error analysis (Lichti et al., 2002; Rosser et al., 2005; Boehler et al., 2003; Milan et al., 2007; Buckley et al., 2008). The Riegl LMS-Z620 and LPM-800HA used in our experiments both have computed repeatabilities two to three times greater than those reported in manufacturer specifications. Newer instrumentation will undoubtedly provide greater reported precision, but caution is advised in using manufacturer specifications to estimate positioning repeatability. Although in most cases manufacturer assessments of repeatability appear to be optimistic, the instruments provide robust imaging capacity at the centimeter scale and show no systematic degradation of precision with distance. Using a reflector network with a geometry established by TS, the LMS-Z620 TLS exhibited an uncertainty of ±0.028 m in a local frame and the LPM-800HA had a positional uncertainty of ±0.068 m. The results are encouraging in that instrumental repeatability is consistent under varying field environments (Fig. 8) and does not depend on the selection of reflector type (cylinder or flat) or distance to the reflector, at least to distances of 500 m. Our repeatability analysis is straightforward, easily incorporated into typical field deployments, and does not require complex computations.
The use of older versions of TLS scanners in these experiments may limit direct use of the uncertainty values cited here, but it is doubtful that the relative contributions to the total error budget will change even when newer scanners with better reported precisions are used. We surmise from our experimental results that instrumental repeatability in field conditions will be degraded by at least a factor of two relative to manufacturer specifications. The recognition during our analysis that one of the LMS-Z620 scanners had lost calibration on the way to, or during, one of our experiments highlights the advisability of checking instrumental calibration both before and after all field deployments. Fortunately, the loss of calibration of one of the instruments did not render the observations useless, and the data were still of use at a lower, quantitatively determined resolution.
During our analysis we discovered that the precision of the Riegl LMS-Z620 depends slightly on the relative elevation of the reflector with respect to the scan position. When the uncertainty results from Panamint Valley were reviewed, the positioning errors of reflectors located above the scan position typically were greater than those of reflectors at lower elevations. For the KU-TLS, which retained calibration during this experiment, the discrepancy was ∼±0.01 m; for the UTD-TLS, which lost calibration, the discrepancy was ∼±0.03 m. To minimize the instrumental contribution to the repeatability budget, we recommend using a reference network located at or below the level of all scan positions.
Our experience indicates that many researchers using TLS imaging are concerned about potential resolution degradation when point-cloud positions are transformed into a geospatially referenced frame. For single-occupation studies, the lack of an absolute reference frame affects only the ability to combine the TLS results with other geospatially referenced data, and working in a relative frame may be warranted if a slight increase in repeatability is demanded. For change-detection studies consisting of repeat measurements, the benefit of an absolute reference is much greater. Relocation of a reference network established in a relative frame requires the use of permanent monuments. In addition to the time and effort required to set stable monuments, a substantial effort is required to address monument stability (Langbein and Johnson, 1997). Without independently established monument stability and an assessment of instrument relocation uncertainty, contributions to the error budget from monument stability and relocation must rely on an arbitrary, or at best poorly understood, estimate. We contend that enrichment of the TLS dataset by adding geospatial attributes is worth the minor reduction in resolution, which can contribute as little as ±0.015 m if S-GNSS positioning is used.
Another source of controversy in the TLS community is the advisability of performing surface analysis with the raw point-cloud data or with derivative interpolated surfaces. At the center of our advocacy for the use of gridded data is the realization that, for natural surfaces, quantitative analysis of point-cloud characterizations cannot provide realistic repeatability assessments from discontinuous, irregularly spaced data. This issue is exacerbated in change-detection studies because successive point-cloud characterizations do not actually measure the same points (Boehler et al., 2003). In addition to providing a quantitative assessment of repeatability, interpolation of geospatially registered point-cloud data provides the means to merge and analyze very large datasets that exceed the limits of conventional computational capacity. Currently, point clouds of ∼10⁷ points represent the practical limit for data processing in ArcGIS software on readily available computer platforms. Data densities exceeding this limit by more than a factor of four are easily achieved in high-resolution scans of areas as small as several hundred meters on a side. Data-acquisition strategies that segment a study area produce datasets that are easily managed and, if geospatially registered and interpolated, can be merged and analyzed in ArcGIS without compromising surface resolution. Interpolation of point-cloud data, so long as the Nyquist sampling criterion is honored, produces well-understood limits on image resolution and the capacity to assess goodness of fit.
A wide range of interpolation routines is available in ArcGIS and provides a straightforward means of transforming point-cloud data to gridded surfaces. Our tests indicate that the high density of TLS data makes the choice among several well-understood interpolation routines relatively unimportant, although the RBF method is slightly advantageous. Although the RBF, IDW, and Kriging methods produced surface models that were statistically indistinguishable, the computational speed and absence of small surface irregularities (bull's-eye patterns) make the RBF method compelling.
Possibly the most significant finding of our experiments is the documentation that uncertainty assessment of the control network used during TLS scanning provides a very good estimate of the repeatability of the target natural surface. In all of our results, the quadratic sum of all contributors to the error budget for the control network, the aggregate of instrumental uncertainty, georeferencing uncertainty, and interpolation error, is indistinguishable from the residuals between different scans of a common surface. The repeatability of LMS-Z620 images of the reference network is ±0.073 m and ±0.047 m in the first and second experiments, respectively. These values are essentially the same as the surface residuals, ±0.070 m and ±0.046 m, for the same experiments. This conclusion is further supported by the mixed-scanner imaging using the LMS-Z620 and LPM-800HA, which produced a control network resolution of ±0.080 m. Differencing the surfaces generated by the two instruments, which share a control network but use different scan positions, yields a residual of ±0.083 m.
ACKNOWLEDGMENTS
We thank A.J. Herrs (University of Kansas) and Alexander Biholar, Brian Burnham, Graham Mills, and Jeff Dunham (University of Texas at Dallas) for assistance in the field and Jarvis Cline and Lionel White for providing transformation software. We acknowledge the insightful reviews and discussions of John Ferguson on the statistical methods used for the error analysis. This research was partially funded by National Science Foundation grants EAR-0650855 and EAR-0922270.