The application of terrestrial laser scanning (TLS) for measuring Earth surface features is increasing. However, TLS surveys require users to choose and specify certain properties of the scan (i.e., resolution, height, distance, number of scan positions), often with limited understanding of how these properties affect the accuracy of the data. This paper presents results from an experiment that quantifies the effects of different scan settings and survey configurations on the measurement of centimeter-scale surface roughness. The main goal is to provide quantitative evidence to help guide and optimize field-based surface roughness measurements involving TLS data. The experiment involved an array of artificial roughness elements placed on an asphalt surface, similar to the approach of using inverted buckets in boundary layer experiments to simulate a rocky or sparsely vegetated surface with smooth interspaces. The independent variables consisted of laser point spacing, number of scan positions, and the height and distance of the scanner relative to the roughness array. The dependent variables were roughness element height, data occlusion, relative vertical accuracy, the root mean square height of the cup array, and the relative roughness of the asphalt surface. Two roughness patterns were tested, isotropic and anisotropic. Results show that when the laser point spacing was greater than the size of the individual roughness elements, their calculated height was between 32% and 73% below their actual height, but with a smaller spacing the calculated height was either equivalent to their actual height or only slightly lower. Therefore, before a TLS survey is undertaken, manual measurements of roughness elements should be used to determine the size of the smallest roughness elements of interest, thus guiding the selection of laser point spacing.
Larger point spacing also decreased the vertical accuracy of surfaces interpolated from the point clouds compared to global positioning system points. By combining point clouds from three scan stations arranged in a triangular network around the roughness array, the proportion of data occlusion decreased to as little as 0.24%, but due to error associated with point cloud registration, the accuracy of roughness element height decreased and the roughness of the asphalt surface artificially increased. For the most accurate measurements of the roughness element height and interspace roughness our results suggest that high-resolution point clouds obtained from one vantage point should be used. In regard to scanner height and distance, we measured a doubling of the occluded area when the scanner height decreased and distance to the array increased, thus increasing the angle of incidence. Reducing the angle of incidence decreases occlusion but also limits the areal coverage of each TLS scan and increases the number of scan stations at different vantage points required to cover large areas, which ultimately affects the accuracy of other roughness metrics. Overall, this case study demonstrates that there are trade-offs in that the optimization of one metric (e.g., roughness element height) can have negative effects on another (e.g., data occlusion). The choice of TLS settings and survey configuration, therefore, influences the accuracy of surface roughness measurements.
Surface roughness is a measure of the magnitude of topographic variability at different scales, including broad-scale roughness related to the structure and organization of landforms as well as small-scale roughness related to features such as vegetation, rocks, and soil (Shepard et al., 2001; Grohmann et al., 2010). From a geoscience perspective, the quantification of surface roughness (see review by Shepard et al., 2001) is important for a number of mapping applications and process parameterizations. Roughness metrics such as root mean square height (rmsh) and the Hurst exponent can be used to map material properties (e.g., Cord et al., 2007; Morris et al., 2008), constrain the relative age of geological surfaces (e.g., Frankel and Dolan, 2007), and help us to infer landform activity (e.g., McKean and Roering, 2004; Glenn et al., 2006). In the process domain, surface roughness is important for characterizing fluid-driven flows because it directly affects drag and sediment transport. For rivers, roughness is often expressed as a coefficient for use in hydraulic computations, including, among others, Manning’s n, Chézy’s C, and the Colebrook-White friction factor. For eolian processes, aerodynamic roughness, z0, controls the transfer of shear stress to wind-driven sediment flux and is important for predicting wind erosion. Considering the wide implications of surface roughness, it is not surprising that a number of methods have been developed to measure it.
Field techniques to measure surface roughness vary considerably in terms of the technological requirements and the scale of roughness that can be resolved. For centimeter- to millimeter-scale roughness associated with, for example, sedimentary particles or small-scale surface features like ripples, there is typically a trade-off between spatial resolution and sample area. Specifically, higher-resolution measurements tend to have small footprints (e.g., Huang and Bradford, 1990; Jester and Klik, 2005; Mazzarini et al., 2008). Translating these measurements to larger scales requires that roughness is truly homogeneous; this assumption is often implied but rarely tested. Newer techniques such as terrestrial laser scanning (TLS) are emerging as viable options for increasing the footprint of high-resolution surface roughness measurements, especially in the context of process research where field sites might cover 101–104 m2. Recent examples involving the use of TLS for surface roughness measurement include parameterization for hydraulic modeling (Milan, 2009), mapping river bed grain size and sedimentary facies (Brasington et al., 2012), estimating Manning’s n (Pignatelli et al., 2010), characterizing the roughness of pyroclastic deposits (Mazzarini et al., 2008), quantifying roughness effects on concentrated flow (Eitel et al., 2011), quantifying post-fire desert surface roughness (Soulard et al., 2013), developing new quantitative methods (Pollyea and Fairley, 2011), and, among other applications, measuring eolian saltation (Nield and Wiggs, 2011). While the ultimate focus of these studies may vary, they each rely on TLS to provide the data for calculating surface roughness. The problem is that few studies justify the selection of different settings and scan configurations for acquiring the TLS data, which may affect the calculations and potentially render the data incommensurate from one study to another.
Before the first laser pulse is fired TLS operators must choose a number of settings that determine how the instrument acquires measurements (e.g., point spacing, atmospheric correction factors, scan speed, field of view). They must also decide on the placement of the TLS in order to maximize laser returns (e.g., scan boundaries, angle, scanner height, number of scan positions or vantage points around the target area, scan overlap). Simple geometric considerations probably drive the majority of decisions about the settings and configuration, but there are few quantitative data to demonstrate how these decisions affect the accuracy or quality of the data for measuring surface roughness and allied metrics. For example, setting the point spacing to its minimum (e.g., 1 mm on many commercial TLSs) may lead to oversampling and smoothing of fine-scale features that contribute to the surface roughness; this occurs if the beam width is large relative to the point spacing (Lichti and Jamtsho, 2006). Furthermore, because these properties change with distance in most TLSs, there may be variations in the spatial resolution of the data within a single scan.
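The oversampling effect noted above can be illustrated with a simple footprint model: the effective ground resolution is limited by the larger of the point spacing and the laser footprint, which grows with range. The beam width and divergence values below are hypothetical, not the specification of the instrument used in this study:

```python
def effective_resolution(point_spacing, exit_beam_width, divergence_mrad, distance):
    """Effective ground resolution (m): the larger of the user-defined
    point spacing and the beam footprint at the target distance.
    Footprint = exit beam width + distance * divergence (small angle)."""
    footprint = exit_beam_width + distance * divergence_mrad * 1e-3
    return max(point_spacing, footprint)

# A 1 mm point spacing oversamples once the footprint exceeds it
# (hypothetical 3 mm exit beam, 0.25 mrad divergence, 30 m range):
print(round(effective_resolution(0.001, 0.003, 0.25, 30.0), 4))  # -> 0.0105
# A 0.03 m point spacing is still the limiting factor at the same range:
print(round(effective_resolution(0.03, 0.003, 0.25, 30.0), 4))   # -> 0.03
```

In the first case the stored points are spaced far more finely than the footprint, so adjacent samples overlap and fine-scale relief is smoothed rather than resolved.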
The understanding of TLS scan geometry and target properties is steadily increasing through theoretical and controlled experiments (e.g., Lichti and Harvey, 2002; Boehler et al., 2003; Lichti and Jamtsho, 2006; Henning and Radtke, 2008; Keightley and Bawden, 2010; Soudarissanane et al., 2011; Pollyea and Fairley, 2012). In the context of field-based geoscience research, however, there is little quantitative evidence to justify the selection of user-defined settings and survey configurations that optimize the accuracy of surface roughness measurements. To this end, this research attempts to provide a quantitative basis for informing field-based TLS surveys of surface roughness, building on previous theoretical and laboratory work. Our approach involved a controlled field experiment with artificial roughness elements distributed on a smooth surface. The occurrence of alternating roughness elements and smooth interspaces is representative of natural surfaces in deserts and semiarid settings where the elements are rocks and/or sparse vegetation, and the interspace is sediment (e.g., Raupach et al., 1992; Wolfe and Nickling, 1993; Lancaster and Baas, 1998). The idea for this experimental approach stems from research involving the use of inverted plastic buckets to investigate the effects of solid roughness elements on shear stress partitioning in the atmospheric inertial sublayer (Gillies et al., 2007).
We focus on three TLS survey properties that require specification by the operator each time a survey is conducted: point density, number of scan positions, and the height and distance of the scanner relative to the array of targets. These properties were selected because they must be defined in virtually all TLS surveys regardless of the type or brand of TLS system being used. Based on anecdotal evidence from previous field research using a TLS, we hypothesized that the experiment would reveal trade-offs in the measurement accuracy of roughness and related geometric properties of the artificial targets. Specifically, maximizing the accuracy of one measurement, such as roughness element height, may adversely affect the accuracy of another measurement (e.g., data occlusion). We tested the hypothesis by calculating several metrics from the point cloud data: roughness element height, data occlusions (proportion of data voids), root mean square error (rmse) of the interpolated point clouds, rmsh of the cup array, and the relative roughness of the asphalt surface. While roughness is typically defined as topographic variability (e.g., rmsh), all of the calculations considered in this investigation have direct or indirect implications on the accuracy of the topographic data used to calculate it.
Experimental Design and Instrument
The experiment (Fig. 1) was designed to reduce the complexity of the surface and targets, allowing the effects of different scan settings and geometries to be isolated and evaluated. The cups (0.090 m diameter, 0.132 m height) were filled with sand to ensure that their position remained constant during the experiment. The independent variables consisted of point density, number of scan positions or vantage points, and the height and distance of the scanner relative to the array of targets. These variables must be established before each TLS survey regardless of the commercial brand of TLS used. The dependent variables examined were target height, data occlusions (proportion of data voids), the rmse of the interpolated point clouds, the rmsh of the cup array, and the relative roughness of the asphalt. We tested how changes in the independent variables affected the accuracy of the dependent variables. We also investigated how the roughness element spacing and orientation affected the foregoing measurements by changing the density of cups from 1.25 cups m−2 to 2.36 cups m−2 (Fig. 1C). The low-density array (1.25 cups m−2) consisted of an isotropic pattern, whereas the higher density array (2.36 cups m−2) involved an anisotropic pattern; the latter is analogous to the periodic or oriented roughness associated with tilled agricultural fields. We define accuracy in this investigation as the degree to which the TLS measurements correctly represent the real-world construct to which they refer. Specific accuracy criteria are explained in the Data Analysis discussion.
For the experiment we used a Trimble GX 3D scanner, which uses a pulsed laser with a wavelength of 532 nm (green). The GX 3D scanner measures single returns and is capable of scanning as many as 5000 points s−1 with a vendor-reported single point positional accuracy of 0.012 m (at 100 m) and a distance accuracy of 0.007 m. The manufacturer reports a systematic error of ∼0.006 m after compensations. A dual-axis compensator is used to level the instrument over a known point. The dual-axis compensator actively corrects the horizontal and vertical angles during the scanning.
While previous research espoused the importance of scanning targets from multiple vantage points in order to reduce data occlusions (e.g., Bitelli et al., 2004; Nagihara et al., 2004; Schmid et al., 2005; Heritage and Hetherington, 2007; Buckley et al., 2008; Guarnieri et al., 2009), there is little quantitative, field-based evidence to demonstrate how much improvement is actually achieved, or whether there are trade-offs that affect the accuracy of related measurements. As part of the experiment a triangular network of scan stations was established around the array of cups where the TLS was positioned for each scan (Fig. 1B). The vertices of the inner and outer triangles are labeled A, B, or C. The distance between the vertices of the inner triangle was 30 m, while the distance between the vertices of the outer triangle was 60 m.
There are several types of TLS survey design methodologies that can be used when multiple scan stations or vantage points are needed. If a series of stable reflectors with known coordinates is deployed around the survey site the operator can calculate the position of the scanner relative to the targets. This provides on-the-fly point cloud coregistration. Alternatively, the reflectors can be used to merge individual point clouds into a composite during post-processing through coregistration. Another method involves the use of a single reflector and requires that the operator measures the ground coordinates of each new scan station before relocating the scanner to the new vantage point. Backsighting to the previous control point is then used to assess positional error and correct the scanner position prior to each new scan as needed to maintain a high level of accuracy. This approach is similar to the traditional total station workflow (Lemmon and Biddiscombe, 2006).
In this investigation we used the single-reflector approach and established the coordinates of each scan station by locking on the reflector positioned over the new control point and measuring its ground coordinate. For each triangular network we measured the coordinates of two vertices with a Trimble R7 real-time kinematic global positioning system (GPS). Once the coordinates were measured the scanner was placed over one of the vertices, while the second was used to orient the scanner. The coordinates of the third vertex were measured by the scanner. Despite our best efforts we measured as much as 0.026 m of vertical error and 0.014 m of horizontal error when the scanner was set up over the vertices, but on average the error was 0.016 m in the vertical and 0.007 m in the horizontal. GPS error of control point positions was also unavoidable. For the two GPS-measured control points, the maximum horizontal errors were 0.009 and 0.007 m, and the maximum vertical errors were 0.012 and 0.008 m. In addition to GPS it is important to recognize that there are many sources of positional inaccuracy in TLS point clouds regardless of the survey method chosen, including scan coregistration error, point cloud georeferencing error, reflector position definition error, and internal system error (Coveney and Fotheringham, 2011).
During the experiment scans were completed with a laser point spacing of 0.03 m for the 30 m and 60 m distances. At the 30 m distance scans were also completed with a laser point spacing of 0.30 m. Hereafter the 0.03 m data are referred to as the high-resolution data, and the 0.30 m data are termed the low-resolution data. The reason for choosing these values was to determine the difference between a point spacing that was larger than the roughness element versus one that was smaller (recall cup diameter = 0.090 m and cup height = 0.132 m). Also, given the horizontal and vertical errors associated with the GPS and scanner setup, a smaller point spacing (e.g., ∼0.01 m) could lead to vertical overlap of points when combining point clouds from different scan stations, which requires additional processing before interpolation.
We modified the scanner height to determine how the setup geometry affected surface roughness measurements. Previous research has demonstrated or suggested that increasing scanner height, effectively decreasing the angle of incidence, improves point accuracy (Schaefer and Inkpen, 2010; Soudarissanane et al., 2007, 2009, 2011; Pollyea and Fairley, 2012) and can reduce data occlusion in rocky and vegetated terrain (Heritage and Hetherington, 2007; Guarnieri et al., 2009; Heritage and Milan, 2009). We chose two nominal setup heights for the scanner above the asphalt surface, ∼1.8 m and ∼2.8 m. We used an extra-tall tripod (Fig. 1A) capable of extending to 3.8 m; however, we chose the more conservative height of ∼2.8 m because we found that the scanner vibrated when the tripod was fully extended, even with added bracing, which introduces an additional source of positional error to the point cloud. Moreover, our ability to control setup error increased as the tripod height decreased. We hereafter refer to the ∼1.8 m height as the low setup and the ∼2.8 m height as the high setup. Scanning from the inner triangle (30 m) was completed only with the high setup, while measurements with both high and low setups were completed from the outer triangle (60 m).
After all scans were completed a GPS survey of the test surface was undertaken (n = 404 points) in order to acquire a large sample of points for gauging the relative vertical accuracy of interpolated surfaces created from the point clouds. The coordinates of each cup were also measured in both cup patterns. The average horizontal and vertical error of the GPS points was 0.016 and 0.019 m, respectively, while the standard deviation was 0.008 and 0.009 m, respectively.
Post-processing and analysis of the TLS point cloud data were performed to isolate the dependent variables of cup height, data occlusion, vertical rmse, rmsh of the cup array, and the relative roughness of the asphalt surface. These measurements were used to gauge the effects of changes in the independent variables. Cup height was compared to the true height (0.132 m). Data occlusion is a relative measurement that approaches zero for optimal data quality. The vertical offset between GPS points and the raster surface interpolated from the point cloud was used to calculate the rmse, which quantifies the relative vertical accuracy of the latter. The rmsh was calculated for the point clouds and compared to the actual cup height and spacing. The asphalt surface exhibits millimeter-scale microtopography owing to the dominance of sand and pea-sized gravel inclusions (Fig. 2); thus, for optimal accuracy the relative roughness measured from the point cloud data should be no more than the maximum height of the largest pea-sized gravel inclusion (0.003 m).
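The relative vertical accuracy check reduces to a simple computation once the interpolated surface has been sampled at the GPS check-point locations. A minimal sketch (the check-point values below are illustrative, not data from the experiment):

```python
import numpy as np

def vertical_rmse(z_gps, z_surface):
    """Root mean square of the vertical offsets between GPS check points
    and the surface interpolated from the TLS point cloud."""
    dz = np.asarray(z_gps) - np.asarray(z_surface)
    return float(np.sqrt(np.mean(dz ** 2)))

# Hypothetical elevations (m) at four check points: GPS vs. interpolated raster.
z_gps = [0.512, 0.498, 0.505, 0.490]
z_surf = [0.530, 0.480, 0.505, 0.500]
print(round(vertical_rmse(z_gps, z_surf), 4))  # -> 0.0137
```

In the experiment this offset was computed for n = 404 GPS points against each interpolated raster.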
Scan data were processed using Trimble Geomatics RealWorks (http://www.trimble.com/3D-laser-scanning/software.aspx) and ArcGIS v10 (http://www.esri.com/software/arcgis) software. First, each scan was cropped to the same extent in order to isolate a consistent area encompassing the roughness array. This was done because each scan acquired measurements of slightly different areas comprising the roughness array and surrounding asphalt surface. The next step was to edit the point cloud data manually in Trimble Geomatics RealWorks, removing all points acquired from the laser target while orienting the scanner, as well as all anomalous data acquired from erroneous reflections. Anomalous returns were infrequent (n < 30), but primarily included points located well above the cups. These may represent extrinsic (random) or intrinsic (systematic) noise in the TLS.
Filtering was used to discriminate the cups from the underlying asphalt surface in the point cloud. Following Mundt et al. (2006), Streutker and Glenn (2006), Guarnieri et al. (2009), and Wang et al. (2009), a moving kernel was used to determine local minima and maxima. The selected minima or maxima were assigned to the cell centered in the middle of the kernel, thereby creating two surfaces representing the asphalt (minima) and top of cups (maxima). The difference between these surfaces yielded a map of cup height. Because each cup was represented by several points in the high-resolution data, which denote the top and sides, we ran a second filter (0.09 m diameter, which is the diameter of the cup) in order to identify the local maximum point representing each cup. We used the Sample tool in ArcGIS to derive the height of each cup; from this we calculated statistics of the absolute difference between the actual (0.132 m) and TLS-derived cup heights.
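The minimum/maximum filtering step can be sketched as follows. This is a simplified stand-in for the kernel filtering performed in ArcGIS, applied to a toy gridded elevation array rather than real point cloud data:

```python
import numpy as np

def local_extrema(z, k):
    """Moving-kernel local minima and maxima over a gridded elevation
    array z; k is the (odd) kernel width in cells. Edge cells use the
    available neighborhood (a simplification of the published method)."""
    n, m = z.shape
    zmin = np.empty_like(z)
    zmax = np.empty_like(z)
    r = k // 2
    for i in range(n):
        for j in range(m):
            win = z[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
            zmin[i, j] = win.min()   # asphalt (minimum) surface
            zmax[i, j] = win.max()   # cup-top (maximum) surface
    return zmin, zmax

# Toy 5x5 grid: flat asphalt at 0.0 m with one 0.132 m cup in the center.
z = np.zeros((5, 5))
z[2, 2] = 0.132
zmin, zmax = local_extrema(z, 3)
height = zmax - zmin                 # cup-height map
print(height.max())                  # -> 0.132
```

Differencing the two filtered surfaces recovers the cup height, as in the published workflow; the second-pass 0.09 m filter then isolates one maximum point per cup.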
Due to line-of-sight restrictions (Schmid et al., 2005; Heritage and Hetherington, 2005, 2007; Guarnieri et al., 2009; Heritage and Milan, 2009), the presence of roughness elements can obstruct the laser pulse, causing data occlusion and affecting the accuracy and quality of surface roughness measurements. By reducing the amount of occlusion the three-dimensional representation of the surface is more complete. As expected, observations of point clouds derived from the experiment showed consistent patterns of data occlusion behind cups; however, there were also missing data within the interspaces that could not be attributed to blocking by cups; this suggests that a proportion of laser pulses were not returned to the TLS. To determine the occluded area we developed a three-step procedure involving (1) filtering, (2) classification, and (3) calculation. For this analysis only the high-resolution data were examined. Before filtering we developed a procedure to identify a threshold kernel size that would determine if an occlusion was present or not. We could have used a square kernel of 0.03 × 0.03 m based on the predefined point spacing, but this assumes reflection from all surfaces, which is not the case. Instead, for each single-scan point cloud we calculated the average point density in three 25 m2 areas along the primary edges of the cup array. From these averages we calculated a scan average and from all single scans we determined the maximum average point spacing, which was 0.039 m. This threshold value, which is slightly greater than the point spacing, establishes the minimum distance for detecting missing points. Thus, as the kernel moves over the data it calculates point spacing and outputs a raster surface. Cells with a value of zero (i.e., point spacing >0.039 m) are then classified as NODATA, while those with values greater than zero are classified as DATA.
The total area of each class was then calculated, with the NODATA area representing the total occluded area. The Intersect tool in ArcGIS was used to calculate the total occluded area for composite point clouds comprising coregistered data from two or three scan stations.
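The three-step occlusion procedure (filter, classify, calculate) can be approximated by binning returns into cells of the 0.039 m threshold spacing and treating empty cells as NODATA. The sketch below uses a synthetic regular point grid with one simulated shadow zone; it illustrates the logic, not the ArcGIS workflow itself:

```python
import numpy as np

def occlusion_percent(xy, xmin, ymin, xmax, ymax, cell=0.039):
    """Bin point returns into cells of the threshold spacing; cells
    receiving no returns are NODATA (occluded). Returns the occluded
    area as a percentage of the clipped extent."""
    nx = int(np.ceil((xmax - xmin) / cell))
    ny = int(np.ceil((ymax - ymin) / cell))
    grid = np.zeros((ny, nx), dtype=bool)
    ix = ((xy[:, 0] - xmin) / cell).astype(int).clip(0, nx - 1)
    iy = ((xy[:, 1] - ymin) / cell).astype(int).clip(0, ny - 1)
    grid[iy, ix] = True                  # DATA cells
    return 100.0 * (1.0 - grid.mean())   # NODATA percentage

# Regular 0.02 m point grid over a 1 m x 1 m patch, with returns removed
# from one quadrant to simulate shadowing behind a roughness element.
g = np.arange(0.01, 1.0, 0.02)
xx, yy = np.meshgrid(g, g)
xy = np.column_stack([xx.ravel(), yy.ravel()])
xy = xy[~((xy[:, 0] > 0.5) & (xy[:, 1] > 0.5))]
pct = occlusion_percent(xy, 0.0, 0.0, 1.0, 1.0)
print(round(pct, 2))  # -> 25.0
```

Intersecting the NODATA areas of coregistered scans, as done with the ArcGIS Intersect tool, then yields the residual occlusion of a composite.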
The final analysis was to determine how different scan settings and survey configurations affected the relative roughness of the asphalt surface (the interspace). This helps identify the minimum resolution of surface roughness that can be accurately detected from the TLS point clouds. The analysis was completed by manually clipping and removing all points representing the cups in the raw point cloud data. Point clouds from different scan stations were then combined, and the vertical variability of points was calculated using the Range function in ArcGIS. This function determines the maximum elevation difference within a moving kernel, which is then assigned to the cell in the center of the kernel. For this analysis a slightly larger kernel (0.042 m) was used than for the occlusion analysis in order to ensure that a minimum of two points would be encapsulated by the kernel. The maximum vertical variation was extracted for each point cloud.
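The Range calculation can be emulated with a cell-based sketch: group points into kernel-sized cells and report the largest elevation range found. The point coordinates below are hypothetical and chosen to show how a single registration-offset point inflates the apparent interspace roughness:

```python
import numpy as np

def range_roughness(points, cell=0.042):
    """Maximum elevation range (max z - min z) among points grouped into
    kernel-sized cells, after cup returns have been clipped out. A
    simplified stand-in for the ArcGIS Range neighborhood statistic."""
    ix = np.floor(points[:, 0] / cell).astype(int)
    iy = np.floor(points[:, 1] / cell).astype(int)
    ranges = []
    for key in set(zip(ix, iy)):
        z = points[(ix == key[0]) & (iy == key[1]), 2]
        if len(z) >= 2:                  # need two points for a range
            ranges.append(z.max() - z.min())
    return max(ranges) if ranges else 0.0

# Illustrative asphalt points (x, y, z in m): millimeter-scale texture
# plus one vertically offset point simulating a merged-scan artifact.
pts = np.array([
    [0.010, 0.010, 0.000],
    [0.030, 0.020, 0.002],   # same cell as above: range 0.002 m
    [0.100, 0.100, 0.001],
    [0.110, 0.120, 0.013],   # offset point inflates range to 0.012 m
])
print(round(range_roughness(pts), 3))  # -> 0.012
```

With a true asphalt microtopography of at most 0.003 m, the 0.012 m result would flag artificial roughness introduced by point cloud merging rather than real relief.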
Table 1 shows that the absolute difference between the actual and the TLS-derived cup height is in the millimeter range for the high-resolution data (0.03 m point spacing) and in the centimeter range for the low-resolution data (0.30 m point spacing). In some cases the absolute difference is close to the actual cup height for the low-resolution data, which suggests this resolution is simply too coarse to measure the height of the targets accurately. This can also be visualized in Figure 3, which shows that individual cups are easily defined with the high-resolution data, but they are poorly constrained with the low-resolution data and sometimes they are not detected.
According to Table 1 there was no significant difference between results from the 30 and 60 m scan distances, which is to be expected because the Trimble GX 3D scanner uses patented technology that automatically adjusts the scan parameters to ensure equivalent point spacing with changing distance to targets. Other types of scanners lacking this functionality might yield different results. There was also little overall difference in the results between the two cup patterns and scanner heights. However, when individual point clouds from different scan stations were combined into two- or three-scan composites, which produced smaller point spacing, the absolute difference in cup height decreased in the low-resolution data, and was inconsistent in the high-resolution data. For the latter the magnitude of the difference depended on which combination of scans was used, as well as the distance of the scanner. At the 30 m distance the smallest difference occurred when point clouds from scan stations B and C were combined for the isotropic cup pattern and high scanner position, but this did not repeat at the 60 m distance. Presumably, this represents a situation in which the positional error cancelled out when the point clouds were combined from stations B and C, whereas the error was additive for other scan combinations.
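The distance compensation described here follows from simple geometry: holding point spacing fixed requires the angular step between pulses to shrink in proportion to range. A small-angle sketch (not the scanner's actual control algorithm):

```python
import math

def angular_step_deg(spacing, distance):
    """Angular step (degrees) needed to achieve a given point spacing (m)
    at a given range (m), using the small-angle approximation; real
    scanners quantize this step to the instrument's angular resolution."""
    return math.degrees(spacing / distance)

# For 0.03 m spacing the step must halve when range doubles from 30 to 60 m.
print(round(angular_step_deg(0.03, 30.0), 4))  # -> 0.0573
print(round(angular_step_deg(0.03, 60.0), 4))  # -> 0.0286
```

A scanner that instead holds the angular step constant would deliver 0.06 m spacing at 60 m, which is why instruments lacking this compensation might yield distance-dependent results.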
As expected, the analysis of data occlusion showed that the area of missing data decreased when two or three point clouds from separate scan stations were combined (Fig. 4); however, it was also found that the magnitude of the reduction changed according to the survey configuration and cup pattern. Most of the occlusion remaining in the three-scan point cloud composites was located around the cups (Fig. 4C). However, in addition to shadowing effects, there was also evidence of data occlusion in the asphalt interspace, where the transmitted laser pulse was not returned to the scanner or was too weak to be detected. This was likely caused by surface interactions such as deflection and/or scattering of laser pulses, which decreases signal return strength and signal to noise ratio (Pfeifer et al., 2007; Soudarissanane et al., 2011).
Results shown in Figure 5 indicate that the anisotropic pattern consistently produced a larger occluded area than the isotropic pattern; however, the magnitude of the difference between the two roughness patterns was much smaller in the three-scan point cloud composites. In the low setup height at 60 m the occluded area in the point cloud from scan station B was almost half the total area of the cup array (48.21%), which illustrates the geometric effect of scanning perpendicular to the roughness pattern from a high angle of incidence (see Fig. 1). Single-scan point clouds from scan station C consistently produced a lower occluded area. This is due to the parallel alignment of the scanner relative to the rows of cups (see Fig. 1), which also affects the size of the occlusion area according to which combinations of scans are used in the calculation. Similar outcomes could be expected in an agricultural setting consisting of row crops or ridge till due to the anisotropic patterns.
By increasing the distance between the scanner and the cup array, which is equivalent to increasing the angle of incidence, the percentage of data occlusion more than doubled, on average. Thus, data occlusion can be reduced considerably by decreasing the angle of incidence between the roughness array and the scanner (Fig. 5C). There are, however, physical limitations on how much the angle of incidence can be reduced, either in terms of positioning the scanner proximal to the target or placing it high above the target. Furthermore, by reducing the angle of incidence the potential trade-off is that the operator will reduce the areal coverage of a scan. This stems from the fact that the field of view of most TLSs is greater in the horizontal plane than the vertical plane.
Relative Vertical Accuracy
The rmse values in Table 2 show that the cup pattern and scan geometry had little impact on the relative vertical accuracy of the interpolated point clouds. We tested several different interpolation algorithms, but ultimately spline with tension yielded the smallest rmse values (cf. Starek et al., 2011). The average rmse for the high-resolution data was 0.016 m, with a standard deviation of 0.002. For the low-resolution data the average rmse was 0.033 m, with a standard deviation of 0.018. These results indicate that resolution is the principal mediator of the relative accuracy of the interpolated point cloud. By reducing the point cloud resolution the relative surface accuracy becomes more sensitive to different survey configurations. For example, in the anisotropic cup pattern rmse values range from 0.022 m to 0.079 m (Table 2). In general, point clouds acquired from one scan station have higher rmse values for the low-resolution data than do the composites. The reason the low-resolution data yield higher and more variable rmse values is that the surface interpolated from the point cloud contains fewer points reflected directly from the asphalt interspace (see Fig. 3). Thus, many of the GPS ground points were located in the interpolated region between laser points, which may increase the potential for vertical offset. By combining point clouds from different scan stations the number of reflections from the asphalt interspaces increases for the low-resolution data, which lowers the rmse compared to those from only one scan station.
The rmsh values shown in Table 3 indicate that the calculated scene-wide roughness was minimally influenced by scan geometry and point cloud merging, which is encouraging in the context of data commensurability. Values were slightly greater for the anisotropic cup pattern, which is expected given the greater density of cups; however, overall the values were relatively consistent for each pattern, implying that scanning from multiple vantage points around the cup arrays produced no net improvement. This could change if there were a range of cup sizes with tighter spacing because some of the cups would not be detected due to occlusion. In both patterns rmsh values derived from the actual cup height and spacing (i.e., 0.0093 m for the isotropic pattern and 0.0129 m for the anisotropic pattern) were smaller than those calculated from the point cloud data. The difference increased further for the low-resolution data, which shows that the roughness value derived from Equation 5 is resolution-dependent and therefore sensitive to the user-defined point spacing in the TLS settings.
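Equation 5 is not reproduced in this excerpt, so the sketch below assumes the common rmsh definition: the standard deviation of detrended elevations (after Shepard et al., 2001). It illustrates the resolution dependence noted above: as finer point spacing places proportionally more returns on the cup tops, the computed rmsh rises even though the surface is unchanged:

```python
import numpy as np

def rmsh(z):
    """Root mean square height: root mean square deviation of elevations
    about their mean (the standard rmsh form; the paper's Equation 5 is
    not reproduced here and may differ in detail)."""
    z = np.asarray(z, dtype=float)
    return float(np.sqrt(np.mean((z - z.mean()) ** 2)))

# Hypothetical samples of the same cup-on-asphalt surface (z in m):
# interspace returns at 0.0 m, cup-top returns at 0.132 m.
coarse = [0.0] * 98 + [0.132] * 2     # 2% of returns hit cup tops
fine = [0.0] * 90 + [0.132] * 10      # 10% of returns hit cup tops
print(round(rmsh(coarse), 4), round(rmsh(fine), 4))  # -> 0.0185 0.0396
```

The same surface thus yields more than double the rmsh when sampled finely enough to capture the cups, consistent with the resolution dependence reported in Table 3.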
Figure 6 shows that the maximum vertical variation of point clouds from single scan stations was millimeters regardless of the height and distance of the scanner relative to the cup array, but when point clouds from different scan stations were combined to create composites the variation increased, reaching 0.021 m for the isotropic pattern with the point cloud composite created from all three scan stations. The actual vertical variation of the asphalt surface is no more than 0.003 m (Fig. 2); thus, the high-resolution point clouds from individual scan stations were consistently more representative than the composites. However, these values are in the range of the systematic error of the instrument (∼0.006 m), so it is unclear if these changes represent real topographic variations or instrument noise.
The values for point cloud composites from two scan stations varied according to which point clouds were combined. Some showed as much as 0.02 m of maximum vertical variation, while others were closer to the actual variation. When three point clouds were combined the maximum vertical variation was higher than when two point clouds were combined. Overall, it appears that by combining two or more point clouds of the same scene from different vantage points artificial small-scale topographic variability (noise) was introduced that exceeded the actual roughness of the asphalt surface. This is also shown in Figure 3A. Presumably this represents the effects of positional error during scanner relocation.
The results from our field-based experiment demonstrate that the selection of user-defined scan properties and geometry can influence the measurement of surface roughness, including individual roughness elements and the interspaces. Thus, the ultimate goal of a TLS survey should guide the selection of scan resolution, height, distance, and the number of scan positions. In this regard, our experiment provides some initial direction for making these selections. Our results provide a quantitative gauge of the degree to which user-defined TLS settings and survey configuration affect the accuracy of roughness measurements, and indirectly demonstrate trade-offs in that the optimization of one type of surface roughness measurement can have a negative effect on another.
If a goal of a TLS survey is to optimize the measurement of roughness element height, our results indicate that the most accurate measurements can be achieved using high-resolution point clouds obtained from a single scan station. Combining or coregistering point clouds acquired from different scan stations or vantage points inevitably introduces some level of positional error on top of the instrument's systematic error. Ultimately, this decreases the accuracy of measurements of roughness height when the elements are relatively uniform. However, for surfaces with a wider range of roughness element heights, TLS operators might need to combine scans from different vantage points to ensure that the smaller elements are not underrepresented in the sample.
Regardless of which survey approach is used, each time the scanner is moved to a new scan station positional error is introduced to the point cloud (cf. Coveney and Fotheringham, 2011), thus misrepresenting the scale of roughness under investigation, and potentially rendering smooth surfaces artificially rougher in the point cloud than they are in reality. Depending on how the positional error is distributed, some combinations of point clouds from different scan stations can cancel it, while others increase it.
Point spacing, or scan resolution, influenced the accuracy of roughness element height: increasing the laser point spacing, which effectively decreases point cloud resolution, reduced the probability that the top of each roughness element was struck by the laser. Ultimately, this setting should be determined in advance such that the laser point spacing is much smaller than the size and spacing of the roughness elements. Thus, while the low-resolution setting used in this investigation (0.30 m) was not suitable for the inverted cups, it might be adequate for measuring the height of much larger elements, such as large boulders or large topographic features. In this regard, manual measurements of roughness elements prior to surveying could be used to define the laser point spacing that satisfies the goal of the TLS survey.
Field-based geomorphological applications of TLS often involve challenging terrain conditions that restrict line of sight and therefore lead to data occlusion. This simplified experimental approach confirms the importance of scanning surface roughness from multiple vantage points, as advocated in previous literature (e.g., Keightley and Bawden, 2010): occlusion was considerably reduced by combining point clouds acquired from three different scan stations arranged in a triangular network around the roughness array. In the most extreme case in this study the occluded area decreased from 48.21% for a point cloud acquired from a single scan station to 3.73% when point clouds from three separate scan stations were combined (Fig. 5A). By scanning at close range and mounting the scanner as high as possible (2.8 m), the occluded area was further reduced to a minimum of 0.24%; thus, decreasing the angle of incidence and combining scans acquired from three separate positions around the roughness array can virtually eliminate data occlusion. Extra-tall tripods are one way to overcome some of the incidence-angle effects in the field, but caution is warranted because positional error may be introduced through mechanical vibration unless the tripod is sufficiently braced.
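The influence of scanner height and distance on occlusion can be approximated with similar triangles: a thin element of height h at horizontal distance d from a scanner mounted at height H casts an occluded shadow of roughly h·d/(H − h) on flat ground. A minimal sketch under those simplifying assumptions (the function name and values are illustrative, not from the experiment):

```python
def shadow_length(h_elem, d, h_scanner):
    """Approximate length (m) of the occluded zone behind a thin
    vertical element of height h_elem (m) at horizontal distance d (m)
    from a scanner mounted at height h_scanner (m), over flat ground."""
    if h_scanner <= h_elem:
        raise ValueError("scanner must be mounted above the element")
    # By similar triangles, the grazing ray over the element top
    # reaches the ground at H*d/(H - h); the shadow is the excess
    # beyond the element's own position d.
    return h_elem * d / (h_scanner - h_elem)

# Illustrative: a 0.1 m element 10 m from the scanner. Halving the
# mount height from 2.8 m to 1.4 m roughly doubles the shadow.
high_mount = shadow_length(0.1, 10.0, 2.8)
low_mount = shadow_length(0.1, 10.0, 1.4)
print(round(high_mount, 3), round(low_mount, 3))
```

This simple geometry is consistent with the observation that lowering the scanner or increasing its distance from the array, both of which increase the angle of incidence, enlarges the occluded area.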
While combining scans from different vantage points reduces data occlusion, our results show that this process can introduce artificial (erroneous) topographic variability to the point cloud, rendering relatively smooth surfaces rougher than they are. This effect is caused by error in the registration of point clouds from separate scans. The bulk of the registration error is produced from setup error and GPS error, which create a misalignment between known scan station positions (Lichti et al., 2005; Hodge, 2010). By combining point clouds acquired from different scan stations, we showed that the average maximum vertical variation of an otherwise smooth asphalt surface was as much as 0.021 m. This is much larger than the actual roughness, which is on the scale of a few millimeters. Thus, if the focus of a TLS survey is to measure the roughness of the interspaces between larger objects (e.g., Sankey et al., 2011), point clouds from one scan station or vantage point are likely to yield the most accurate data.
Results of this investigation demonstrate how user-defined TLS settings and scan geometry influence the accuracy of surface roughness measurements. While the experiment does not cover the diversity of roughness conditions encountered in nature, it allows us to constrain the effects of a selection of independent variables affecting the accuracy of the TLS point cloud data. The following conclusions outline some rules of thumb and trade-offs that can guide the selection of TLS settings and scan geometry for measuring surface roughness in the field.
The choice of the laser point spacing should be determined by the size of the roughness elements under investigation. If the laser point spacing is greater than the size of the individual roughness elements, their height will be misrepresented and the vertical accuracy of surfaces interpolated from the point clouds will be low. Ideally, the laser point spacing should be considerably smaller than the smallest element of interest. One approach to define the laser point spacing is to measure a sample of roughness elements before the TLS survey is undertaken so as to identify the dimensions of the smallest elements contributing to the roughness.
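In practice, the point spacing delivered at a target surface is set by the scanner's angular increment and the working range; under a small-angle approximation, spacing ≈ range × angular step (in radians). A minimal sketch for back-calculating the angular setting from a desired spacing (the function name and numbers are hypothetical):

```python
import math

def angular_step_deg(target_spacing, scan_range):
    """Angular increment (degrees) that yields approximately
    `target_spacing` (m) between adjacent laser points at a distance
    of `scan_range` (m), via the small-angle approximation."""
    return math.degrees(target_spacing / scan_range)

# Hypothetical case: to resolve elements roughly 0.09 m across, aim
# for a point spacing well below the element size, e.g., one-third
# of it (0.03 m), at a 15 m working range.
print(round(angular_step_deg(0.03, 15.0), 3))
```

Because spacing grows linearly with range, a setting chosen for the nearest part of a scene will be coarser at its far edge, which is worth checking before committing to a survey configuration.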
In order to reduce data occlusion to minimal levels TLS users should scan from at least three vantage points arranged in a triangular network around the roughness array. While this type of survey configuration can reduce occlusion to as low as 0.24%, it is unlikely that zero occlusion can be achieved because not all laser pulses are returned to the scanner due to interactions that produce scattering and deflection.
There are trade-offs in that the selection of one scan setting or survey configuration can adversely affect measurements of other roughness characteristics. We found that reducing data occlusion by scanning from multiple vantage points increased the positional error of composite point clouds relative to those acquired from one scan station. Although the causes of positional error are numerous, its effects are clear: it degrades the accuracy of roughness element height calculations and artificially increases the apparent roughness of otherwise smooth interspaces. Point clouds acquired from one scan station (i.e., one vantage point) therefore provide more accurate representations of roughness element height and interspace roughness.
The height and distance of the scanner relative to the roughness array play a role in data occlusion. While it is desirable to reduce the angle of incidence by scanning roughness elements at close range and from a high vantage point, this type of survey configuration necessarily reduces the areal coverage of TLS point clouds, requiring users to relocate the scanner to different vantage points in order to acquire data over large areas. The trade-off is that attempts to minimize occlusion make the data less accurate for other roughness metrics.
While surface roughness parameterization remains an ongoing research challenge for many geomorphological applications, this case study indicates that TLS measurements can be used to supplement other forms of roughness measurement, so long as the limitations of the technology and user-defined settings are understood. Ultimately, users must configure their settings and survey design to optimize the desired roughness metric.
This research was funded by a Natural Sciences and Engineering Research Council of Canada Discovery Grant and Alberta Innovates Award to Hugenholtz. We acknowledge the assistance of Dan Koenig during field measurements. Comments from Ian Walker, the Associate Editor, and an anonymous reviewer greatly improved this paper. Research presented here is based on Brown’s Master of Science thesis.