Abstract

This paper presents a low-cost true color terrestrial laser scanning system, described in terms of the hardware and software elements necessary to add color capability to existing non-color laser scanners. A purpose-built camera mount allows a digital camera to be positioned coincident and collinear with the beam detector of the laser scanner device, such that mismatch between color data and laser scan points is minimized. An analytic mapping implemented in a Matlab toolbox registers the photographs, after rectification, with the laser scan point cloud, by solving for two tie-points per image. The resulting true color point clouds are accurate and easily interpretable. The adjustable camera mount fits a range of cameras and, with minor adaptation, could be used with different laser scanner devices. A detailed error analysis will allow comparisons to be drawn with alternative technologies, such as photogrammetry and commercial true color laser scanners. This low-cost, accurate, and flexible true color laser scanning technology has the potential to make a significant improvement to existing methods of spatial data acquisition.

INTRODUCTION

Terrestrial laser scanning (TLS) is a powerful method for the acquisition of detailed positional data, and is now routinely used as a standard tool in civil engineering and in as-built surveying (Jacobs, 2005; Dunn, 2005). In recent years the use of TLS has expanded widely with applications in many other fields, including mining (Ahlgren and Holmlund, 2002), geomechanics (e.g., Slob and Hack, 2004), geological surveying (Jones et al., 2004; McCaffrey et al., 2005), erosion and landslide monitoring (Haneberg, 2004; Lim et al., 2005; Rosser et al., 2005), petroleum reservoir modeling (Løseth et al., 2003; Bellian et al., 2005; Pringle et al., 2006; Jones et al., 2008), architecture (El-Hakim et al., 2005), archaeology and heritage conservation (Barber et al., 2006), forestry (Thies et al., 2004), city modeling (Hunter et al., 2006), and many others.

The overall principle of terrestrial laser scanners is to measure the return traveltime (and hence distance) for an emitted laser beam to hit and reflect off the target object. The scanner automatically rotates continuously while the laser beam is fired repeatedly (with a frequency up to 100,000 points/s, depending on the type of scanner used). In this way, a detailed three-dimensional (3D) image of the surface of the target object is captured (Figs. 1A, 1B). With conventional TLS, measured points in the laser scan point cloud can be assigned different shades of gray according to the intensity of the reflected laser beam (Figs. 1B, 1E), or can be mapped to an arbitrary color ramp according to intensity or a positional attribute such as relative elevation or distance from the scan position (Fig. 1C). For some kinds of analysis (particularly purely geometric studies such as volumetric calculations), the resulting gray scale or false color point cloud is adequate. However, while gray scale and false color rendering can provide some help during interpretation of the laser scan data, they are often less useful for many kinds of more detailed analysis. For example, we find that the additional visual cues provided by true color data (Figs. 1D, 1F) are important for the correct identification of geological features from high-resolution TLS point clouds (Clegg et al., 2005; Trinks et al., 2005; Waggott et al., 2005). Added color is equally beneficial when interpreting scans taken at short range (e.g., Kokkalas et al., 2007) and longer ranges of several hundred meters (e.g., Labourdette and Jones, 2007). Here we use the term “true color” to mean high-resolution (32 bit) color typically acquired by a good-quality digital camera; i.e., in the opposite sense of the “false color” outcrops shown in Figure 1C.
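The time-of-flight principle described above can be sketched in a few lines of code. This is an illustrative sketch only (in Python rather than the Matlab used later in the paper); the numerical example is ours, not from the paper:

```python
# Illustrative sketch: the scanner measures the two-way travel time of each
# laser pulse and converts it to a range.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def tof_to_range(round_trip_time_s):
    """Range to the target from the two-way travel time of a laser pulse."""
    return C * round_trip_time_s / 2.0

# A return after ~467 ns corresponds to a target roughly 70 m away.
print(round(tof_to_range(467e-9), 1))  # -> 70.0
```

Repeating this measurement up to 100,000 times per second while the scanner rotates yields the dense point clouds described in the text.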

True color terrestrial laser scanning (TCTLS) requires undistorted digital photographs to be precisely mapped onto the laser scan point cloud data, thus producing a true color 3D model. A number of TCTLS devices are already manufactured and marketed, but generally capture low-quality color data, and are much more expensive than conventional non-color TLS equipment. This paper presents an alternative, relatively low cost method for adding high-resolution color capability to standard terrestrial laser scanners. In this system, the color information is captured by a digital camera and registered with the point cloud using an analytic mapping implemented in Matlab. The key component in the system is a specially designed camera mount that enables the color information to be captured from precisely the same point as that from which the laser scanner captures its spatial information.

In this paper, the examples we show all use a common approach in which each point in the laser-scan point cloud is allocated a color value from the nearest pixel in the corresponding mapped photo; i.e., the output as shown here is a colored point cloud, with brighter, more accurate color rendition compared with most existing TCTLS devices (which typically use a low-resolution onboard video camera to provide color data). This approach can also be extended to allow the mapped images to be draped onto a triangulated (meshed) surface derived from the point cloud (cf. Xu et al., 2000; Bellian et al., 2005), so that the output is a textured model of the surface of the outcrop. This gives a model with even greater pixel resolution and visible detail.
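The nearest-pixel coloring approach described above can be sketched as follows. This is a minimal illustration in Python (the paper's toolbox is in Matlab), assuming the mapping from each point to fractional pixel coordinates (u, v) has already been computed; function and parameter names are ours:

```python
import numpy as np

def color_point_cloud(points_uv, image):
    """Assign each point the RGB value of its nearest pixel.

    points_uv : (N, 2) array of fractional pixel coordinates (u, v)
    image     : (H, W, 3) RGB array
    Points whose mapped pixel falls outside the image get color (0, 0, 0).
    """
    h, w, _ = image.shape
    u = np.rint(points_uv[:, 0]).astype(int)  # nearest pixel column
    v = np.rint(points_uv[:, 1]).astype(int)  # nearest pixel row
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points_uv), 3), dtype=image.dtype)
    colors[inside] = image[v[inside], u[inside]]
    return colors
```

The same per-point lookup generalizes naturally to texture draping, where the mapped image is sampled per mesh triangle rather than per point.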

While this work was particularly motivated by the aim to demonstrate a prototype solution for a low-cost alternative to high-end commercial TCTLS, there were a number of additional reasons behind the project:

  1. To be able to provide the best possible color-mapped point clouds as input for new methods for automated feature recognition (e.g., automated removal of vegetation; extraction of fracture planes from the point cloud, cf. Slob et al., 2005). Existing methods typically use only geometrical analysis (e.g., edge and corner detection) and/or intensity contrasts. Supplementing feature recognition algorithms to exploit additional color information has very interesting potential, but clearly relies on a precise matching of the color data and point cloud as input.

  2. To provide a platform to allow a wider range of digital cameras to be used. For example, this has allowed us to test the use of lenses with longer focal lengths to capture very detailed photographs that can then be draped onto meshed surfaces derived from the point cloud.

  3. To allow for future development and testing of improved color mapping algorithms (e.g., to develop smoother color mapping in areas of overlapping images).

  4. To be able to test the sensitivity of positioning of the camera with respect to the laser scanner sensor. At present, some TCTLS devices (e.g., Riegl LMS scanners) use cameras mounted on top of the scanner body (i.e., non-collinear with the laser scanner beam), and this introduces an error during matching of the photos with the point cloud data.

  5. To provide calibrated color outcrop models for benchmark comparisons with other methods of building 3D outcrop models such as digital photogrammetry (e.g., Poropat, 2001; Pringle et al., 2001, 2004).

  6. To have a precise, flexible system that is capable of mapping spatial data and imagery captured with alternative kinds of scanner and/or camera device: e.g., for testing prototype scanners built with different wavelengths or other laser properties, or for use with other types of camera such as infrared or multi-spectral equipment (see also Bellian et al., 2007).

  7. To develop the necessary expertise to enable independent calibration of camera lenses and testing of TCTLS performance.

While a number of scanner manufacturers clearly have their own proprietary methods to provide TCTLS, there is relatively little information available in the public domain that describes existing methods for color mapping of laser scan data. Balis et al. (2004) outlined how mapping is achieved using a Mensi GS200 scanner, which has internal onboard video circuitry. Abmayr et al. (2004) used a specialized line-scanning chromatic recorder to gather color information from the scanned area. Some workers have developed methods to combine scan data with photos taken with a standard digital single-lens reflex (SLR) camera from unspecified positions (i.e., scanner and camera need not be coincident and collinear). Such methods typically minimize errors by using a large number of manually defined tie-points for each image (tie-points are points that can be located in both the rectified photographs and the point cloud). Examples of this approach include Xu et al. (2000), Grammatikopoulos et al. (2004), and Abdelhafiz et al. (2005), although the details of their methods are not all fully published. Abmayr et al. (2005) give a comprehensive description of the use of such a method with the Z+F 5003 scanner.

CAMERA MOUNT

In order to minimize potential mismatch between the scan data and color photos, a special camera mount was designed so that the optical center of the camera can be positioned to align with the center of the laser scanner. The essential feature of the mount is that it ensures that the center of the entrance pupil of the camera lens remains coincident with the laser scanner center even when the camera is panned and tilted. A prototype mount was designed to be used with a Measurement Devices Ltd. (MDL) LaserAce 600 scanner. The mount accepts a wide range of cameras, and the camera can easily be adjusted on the mount within a range of positions, although minor modification to the dimensions of the mount would be needed to make it suitable for use with some other makes of scanner. The mount was designed to fit the same standard tribrach fitting as the MDL scanner and maintain the camera lens pivot point at the same height above the tribrach as the scanner center. Thus the scanner can be removed from the tripod once scanning is completed, and the camera mount attached in its place, thereby ensuring that the laser scanner center and camera lens pivot point are coincident.

To facilitate prototype design and fabrication, a rotating tribrach adapter (www.surveying.com/products/details.asp?prodID=2020-00) was used as the base of the camera mount. As well as fitting the scanner tribrach, the rotating tribrach adapter provided a flat surface with a good-sized protruding thread in the center, and the rotation mechanism necessary for panning the camera. The rest of the mount, fabricated from aluminum, was built on top of the adapter. A camera attaches to the mount using the standard ¼ in Whitworth (6.35 mm) thread tripod socket found on the base of most cameras. Figure 2 shows a rendered AutoCAD model of the as-built mount and a photograph of the mount with an SLR camera attached.

The mount holds the camera in portrait format, partly because if a wide-enough lens is used, a single photograph at each pan position will often be sufficient, and partly because it would be difficult to design and manufacture a mount that would hold the camera in landscape format and allow the necessary panning and tilting while keeping the position of the center of the entrance pupil constant. The design process was simplified by the fact that the majority of SLR cameras have their tripod sockets situated directly below the longitudinal lens axis (when the camera is viewed in landscape orientation). This meant that in the plate to which the camera is attached, a simple slot with its centerline passing through the horizontal axis of rotation was sufficient to allow the center of the entrance pupil to be positioned in the horizontal rotational axis. Adjustments are therefore only required in two axes in order to position the pivot point of the camera lens in the vertical rotational axis.

The prototype camera mount described and used in this paper is non-automated, such that the operator must manually reposition the camera around its rotation axes between each overlapping consecutive shot. Work is under way to prototype a motorized version of the mount that will automatically take the photographs needed to cover a prescribed area.

CAMERA CALIBRATION AND IMAGE RECTIFICATION

Camera calibration and image rectification are well documented in relation to applications in computer vision. Camera calibration is the process of determining the internal camera geometric and optical characteristics (intrinsic parameters) and/or the 3D position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters) (Tsai, 1987). It is not necessary to determine the extrinsic parameters in the TCTLS system described in this paper, because the camera is mounted at the center of rotation of the laser scanner and tie-points are used to register the images with the point cloud.

In order to register the photographs with the point cloud, the photographs must be rectified such that they appear to have been captured with a pinhole camera, the pinhole of which is coincident with the laser-beam detector inside the laser scanner device. Figure 3 shows the pinhole camera model with lens distortion, as well as how the laser scanner and image coordinate systems relate to each other. The orientation of the scanner coordinate system about the vertical axis is arbitrary (for the MDL scanner used in this study it is defined by the direction in which the device is pointing when it is turned on).

According to Heikkilä and Silvén (1997), the pinhole camera model is based on the principle of collinearity, where each point in the object space is projected by a straight line through the projection center into the image plane. After undistortion the photographs appear to have been captured with a pinhole camera; if the camera-lens system is positioned such that the pinhole is coincident with the center of rotation of the laser scanner, then no further image rectification is required. It is therefore necessary to find the pinhole point for the camera-lens system. This point is the center of perspective of the lens, about which the camera-lens system can be pivoted without introducing any parallax error between photographs (Kerr, 2005). According to Kerr (2005), the correct pivot point is the center of the entrance pupil of the lens and not, contrary to popular belief, the front nodal point. The center of the entrance pupil is easily located empirically by observing the relative movement of the background and foreground as the camera is pivoted about different points. The point at which there is no relative movement between the background and foreground is the correct pivot point.

The intrinsic parameters required to rectify the image to achieve this pinhole camera model are usually the effective focal length, scale factor, and image center (principal point), as well as those needed to correct for the systematically distorted image coordinates (Heikkilä and Silvén, 1997). The main type of distortion is radial distortion, in which the actual image point is shifted radially in the image plane. In addition, a decentring distortion with both radial and tangential components occurs due to the centers of curvature of lens surfaces not being collinear. Figure 4 illustrates the effects of radial and decentring distortions on a synthetic image.
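The radial and decentring (tangential) distortions just described are commonly modeled with a Brown-Conrady polynomial. The following is a minimal sketch in Python, assuming normalized image coordinates and two radial plus two tangential coefficients; this is a generic textbook model, not necessarily the exact parameterization used by any particular calibration toolbox:

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentring/tangential (p1, p2) distortion
    to normalized image coordinates (generic Brown-Conrady model)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the coordinates are unchanged (no distortion).
print(distort(0.1, 0.2, 0.0, 0.0, 0.0, 0.0))
```

Image rectification is the inverse problem: given measured (distorted) pixel positions and calibrated coefficients, recover the undistorted coordinates, typically by iterative inversion of a model such as this.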

A number of methods have been documented for determining the distortion parameters empirically. One example of an application for camera calibration and image rectification is the GML (Graphics and Media Lab) C++ Camera Calibration Toolbox (v. 0.31 or later) by Vezhnevets and Velizhev (2005, 2006). This application has been tested with the TCTLS solution presented in this paper, although any alternative method that adequately removes distortion from the photographs can be used as part of the system. Vezhnevets and Velizhev's (2005, 2006) toolbox is an implementation in C++ of a Matlab toolbox by J.-Y. Bouguet (www.vision.caltech.edu/bouguetj/calib_doc/download/TOOLBOX_calib.zip), with the addition of another corner detection algorithm. Both toolboxes use an intrinsic camera model inspired by Heikkilä and Silvén (1997), and a main initialization phase partially inspired by Zhang (2000) and partially developed by Bouguet (both available from the Intel Open Source Computer Vision Library at www.intel.com/technology/computing/opencv). Unlike Bouguet's toolbox, Vezhnevets and Velizhev's (2005, 2006) software is capable of rapidly undistorting large, high-resolution images.

In order to calibrate a camera, a set of photographs is taken of a planar “checkerboard” calibration target. The toolbox is able to detect the corners of the squares in the photographs of the target and use their positions to calculate the intrinsic calibration parameters. Any photographs taken with the same camera and lens (at the same focal length, if it is a zoom lens) can then be undistorted using the toolbox, whether or not the photographs were taken before the calibration was carried out.

COREGISTRATION OF COLOR IMAGES AND SCAN DATA

In order to register an image with a point cloud, a mapping must be determined that gives expressions for the image pixel coordinates u and v (Fig. 5), in terms of the azimuth (horizontal) and inclination (vertical) angles, θ and φ, respectively, from the laser scanner center to the corresponding point in the cloud.

It is important to note the way in which laser scanners measure the azimuth and inclination angles. The azimuth is measured as the scanner rotates about the vertical axis and is, therefore, the angle between the azimuth zero line in the horizontal plane and the projection of the subject point onto the same plane. This is not the same as the horizontal angle between the subject point and the vertical plane that passes through the azimuth zero line. The inclination is simply the direct angle between the horizontal plane passing through the scanner center and the subject point.

The implications of the method of operation of the laser scanner can be understood by considering the lines traced out by the laser beam on the inside of a virtual sphere when θ or φ is kept constant, given that the laser scanner center is at the center of the sphere. If the inclination is kept constant while the azimuth is varied, a line of “latitude” is traced on the surface of the sphere. If, instead, the azimuth is kept constant while the inclination is varied, a line of “longitude” is traced on the surface.

The interaction of the laser scanner and the camera can be understood by imagining the projections of the lines of latitude and longitude onto a virtual plane external to the sphere by straight lines passing through the center of the sphere. If the plane is vertical, the lines of latitude above the equator map to upward-curving lines on the plane, and those below the equator map to downward-curving lines. The lines of longitude, however, map to vertical lines on the plane. If the plane is now tilted upward, the projected lines of latitude all curve upward, as long as the bottom of the virtual plane is above the equatorial plane. The projections of the lines of longitude remain straight, but splay out toward the bottom of the plane. If the plane is tilted downward, the projected lines of latitude curve downward and the projections of the lines of longitude splay out toward the top of the plane. This is illustrated in Figure 6.

Since the scanner is an inherently spherical system using polar coordinates, and the camera sensor is a plane, the scanner-camera system conforms to this behavior, with the virtual image plane corresponding to the camera's sensor. When the virtual plane in Figure 6 is vertical, this corresponds to the case in which the camera's sensor is vertical, i.e., the longitudinal axis of the lens is pointing horizontally. Similarly, when the virtual plane is inclined, this corresponds to the situation in which the camera is tilted about a horizontal line passing through the laser-beam detector (i.e., center of the scanner) and the camera lens pivot point. From Figure 6 it can be deduced that in the case where the lens is pointing horizontally, the expression for v will be a function of both θ and φ, while u will be a function of θ only. In the tilted case, however, both u and v are functions of both θ and φ. This information is summarized in Table 1. A full mathematical derivation of the actual mapping functions is given in the Appendix.
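The horizontal-lens behavior (u depending on θ only, v on both angles) can be checked numerically with a central projection onto a vertical plane. This sketch uses our own sign conventions and a unit virtual distance, so it illustrates the geometry only; the full derivation in the Appendix may differ in detail:

```python
import math

def project_horizontal(theta, phi, d=1.0):
    """Project the direction (theta = azimuth, phi = inclination, in radians)
    through the scanner center onto a vertical image plane at distance d,
    with the camera axis horizontal at theta = 0."""
    u = d * math.tan(theta)                  # depends on theta only
    v = d * math.tan(phi) / math.cos(theta)  # depends on both angles
    return u, v

# u is unchanged when only the inclination varies...
u1, _ = project_horizontal(0.3, 0.1)
u2, _ = project_horizontal(0.3, 0.4)
print(abs(u1 - u2) < 1e-12)  # -> True
# ...but v changes when only the azimuth varies (the curved lines of latitude).
_, v1 = project_horizontal(0.0, 0.2)
_, v2 = project_horizontal(0.5, 0.2)
print(abs(v1 - v2) > 1e-6)   # -> True
```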

COREGISTRATION TOOLBOX

The Matlab programming environment was used to create a set of tools that can be used to carry out the registration process of a set of photographs with its associated point cloud. The toolbox comprises a number of individual programs collectively controlled through a common Graphical User Interface (GUI). The toolbox encompasses the following functionality.

  1. Loading and parsing of the raw ASCII laser scan point cloud file (comprising the x, y, and z Cartesian coordinates for each point).

  2. Conversion of the points from Cartesian coordinates to a spherical reference frame with the center of the scanner (and camera) at the origin. The azimuth and inclination coordinates are calculated from the x, y, and z coordinates using the following formulae:  
    θ = tan⁻¹(y/x) (evaluated as a four-quadrant arctangent)
    and  
    φ = tan⁻¹(z/√(x² + y²))
  3. Loading of the corrected (i.e., undistorted) images.

  4. Input of the tie-point coordinates.

  5. Mapping color values from the images onto the point cloud. This is done by using Newton's method to solve the three simultaneous nonlinear equations as derived in the Appendix.

  6. Saving an output file of point cloud data, now comprising x, y, and z coordinates and RGB (red green blue) color values.
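The Cartesian-to-spherical conversion in step 2 can be sketched as follows (in Python rather than the toolbox's Matlab; the axis conventions are assumed here, since, as noted earlier, the scanner's azimuth zero direction is arbitrary):

```python
import math

def cart_to_spherical(x, y, z):
    """Convert scanner-centered Cartesian coordinates to azimuth (theta)
    and inclination (phi), both in radians, plus range."""
    theta = math.atan2(y, x)               # azimuth about the vertical axis
    phi = math.atan2(z, math.hypot(x, y))  # inclination from the horizontal plane
    r = math.sqrt(x * x + y * y + z * z)
    return theta, phi, r

# A point in the horizontal plane, 10 m along the y-axis:
theta, phi, r = cart_to_spherical(0.0, 10.0, 0.0)
print(round(math.degrees(theta)), round(math.degrees(phi)), round(r))  # -> 90 0 10
```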

The point cloud data can thus be suitably colored and can be imported into appropriate visualization software, such as the open-source program ParaView (www.paraview.org).
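The use of Newton's method in step 5 can be illustrated generically. A toy two-equation system stands in for the actual mapping equations (which are derived in the Appendix); the function names and the example system are ours:

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-10, max_iter=50):
    """Generic multivariate Newton iteration: solve f(x) = 0 given the
    Jacobian jac(x), starting from the initial guess x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), f(x))  # solve J * step = f(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy system: x^2 + y^2 = 4 and x = y, whose positive root is x = y = sqrt(2).
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
root = newton_system(f, jac, [1.0, 2.0])
print(np.allclose(root, [np.sqrt(2), np.sqrt(2)]))  # -> True
```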

TESTING THE SYSTEM

A number of field tests were performed to test the system, using a variety of laser scanners and cameras. In the first tests, buildings were chosen as the main target objects, because they have distinct geometric features that can readily be checked in both the scan data and photographs. Later tests used geological outcrops.

In the first field test illustrated here, part of Durham Castle was chosen to provide a target with complicated geometry. For this test the laser scanner chosen was a Riegl LMS-Z420i, which has its own in-built function for high-quality color matching through the use of a separate precision-mounted digital SLR on top of the scanner. A 360° scan was acquired, consisting of 8 × 10⁶ points. Two separate sets of photographs were acquired, to allow direct comparison between our method of color matching and that provided by Riegl. The first set of 7 photographs was taken with a top-mounted 6.1 mega-pixel Nikon D70 camera (giving a total of 42.7 × 10⁶ pixels), following Riegl's standard acquisition method for the scanner. The scanner was then removed from the tripod, and a second set of 18 photos was taken using an 8 mega-pixel Canon EOS 350D together with the camera mount and color matching method presented in this paper (total pixels = 144 × 10⁶). Figure 7 shows the quality of the colored point cloud matching for part of the castle. Color matching is very precise in all areas of the photos, including objects in the near foreground and far background (this is difficult to achieve when the camera is not collinear with the scanner center, as in the top-mounted Riegl scanners). This benchmark test shows that the color matching method we present in this paper can perform at least as well as high-end commercial solutions.

The second example shown here is from the Jurassic coastal cliff sections at Staithes, North Yorkshire. This site is part of a long-term project using laser scanning to monitor coastal erosion (Rosser et al., 2005; Lim et al., 2005). In this test an MDL LaserAce 600 scanner and an Olympus Camedia E-20P digital SLR camera were used. The results (Fig. 8) show that color matching has high precision, and that the addition of color to the point cloud greatly increases the amount of geological detail visible in the virtual outcrop model.

ERROR ANALYSIS

There are many stages in the data capture and image registration processes, some of which introduce errors. Quantitative aspects of the following error analysis are specific to the MDL LaserAce 600 scanner and Olympus Camedia E-20P camera, but other scanner and camera combinations are conceptually similar.

A significant source of error can arise because of dispersion of the scanner's laser beam. The scanner assumes that the strongest part of the returning beam originates from the center of the laser-beam footprint on the target, but when the footprint overlaps areas of contrasting reflectance, this concentric weighting is distorted. The effects of this error are usually most noticeable at the edges of objects that have sky (or distant background beyond the range of the scanner) behind them, and are often seen as a band of sky-colored points (typically one point wide) extending around the edge of the scanned object. When a dispersed beam hits the edge of an object, part of the beam footprint is reflected and recorded by the scanner, even though the center of the beam footprint was in the sky. The scanner therefore captures a data point beyond the edge of the object, even though that point does not actually exist. When the resolution of the accompanying photo is high relative to the spacing of points in the point cloud, a thin band of sky is mapped onto the extra trace of spurious points that flank the true edge of the object. This source of error is common to all scanners that have wide beam dispersion. It needs to be taken into consideration because the error is caused by an intrinsic property of the scanner, and is not related to poor camera calibration or image matching.
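A crude post-processing check for such mixed-pixel points is to flag isolated range jumps along a scan line. This sketch is purely illustrative (the paper does not describe such a filter), and the jump threshold is an arbitrary assumption:

```python
import numpy as np

def flag_edge_artifacts(ranges, jump=1.0):
    """Flag points along a scan line whose range jumps sharply relative to
    BOTH neighbors: a crude indicator of possible mixed-pixel returns at
    object edges. `jump` is the range-discontinuity threshold in meters."""
    r = np.asarray(ranges, dtype=float)
    flags = np.zeros(len(r), dtype=bool)
    for i in range(1, len(r) - 1):
        if abs(r[i] - r[i - 1]) > jump and abs(r[i] - r[i + 1]) > jump:
            flags[i] = True
    return flags

# The isolated 9.0 m return among ~5 m neighbors is flagged as suspect.
print(flag_edge_artifacts([5.0, 5.1, 9.0, 5.2, 5.1]))
```

In practice a filter like this would need to account for genuine depth discontinuities; it is shown only to make the edge-artifact geometry concrete.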

A second source of error can arise due to occlusion. Lim et al. (2005) carried out an experiment using the same MDL LaserAce 600 scanner used to capture the point clouds in Figure 8. They scanned a section of cliff at 0.05° resolution twice in immediate succession and quantified the discrepancies between the two scans due to occlusion errors. Occlusion errors arise when scanning a complex surface, where some surface data are missed and must therefore be interpolated. The errors, which are concentrated toward the edges of the scans and on protruding ledges, occur because the occluded data are interpolated differently during successive scans. Despite these errors, Lim et al. (2005) stated that point cloud data acquired with the MDL LaserAce 600, over ranges typically associated with scanning cliff faces, are capable of accuracies within ±0.06 m. This error is the maximum error between the position of a subject point in space and the coordinates of the corresponding point in the point cloud. The magnitude of this error is purely related to the scanner; a higher quality scanner would give greater accuracy.

Other sources of error relate to the discrepancy between the color of the point in the point cloud and the color of the corresponding point in the subject, i.e., errors in the registration of the images with the point cloud. In order to quantify the maximum registration error, approximate expressions were derived for the angular errors at different stages in the capture and registration of the color information.

The maximum angular registration error, αreg, is given by:  
αreg = αpos + αrot + αrect + αtie,
where αpos is the angular error due to positioning the center of the entrance pupil of the lens coincident with the beam detector of the laser scanner; αrot is due to the deviation from vertical of the plate to which the camera attaches on the mount; αrect is the error in the rectification process using the GML toolbox; and αtie is due to the human judgment error in locating the tie-points in the gray scale or false color point cloud. αpos and αrect were quantified by making assumptions about the magnitude of tolerances and analytically deriving expressions for the resulting errors. αrot was determined by empirically measuring the effect of an artificially applied rotation and αtie was based on empirical observation.
When the expressions for these individual errors are substituted into the above equation, the following expression is obtained for αreg, in degrees:  
formula
where D is the distance to the subject in meters; β is the rotation of the camera from vertical in degrees; and G is a subjective “geometricity” coefficient. For a subject with distinct geometric features, such as a simple building, G would take the value 1, whereas for a very irregular subject, such as a cliff face, G would take a value of 0. If necessary, however, αtie can be virtually eliminated by adjusting the image registration by altering the three mapping parameters. To obtain the approximate maximum registration error in terms of a number of points, αreg should be divided by the resolution of the scanner in degrees.
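The final step above, converting the angular error into an approximate number of point spacings, is a simple division (illustrative Python; the 0.25° and 0.05° values echo the worked figures in the text):

```python
def error_in_points(alpha_reg_deg, scan_resolution_deg):
    """Express the maximum angular registration error as an approximate
    number of point spacings, by dividing by the scan resolution (degrees)."""
    return alpha_reg_deg / scan_resolution_deg

# A 0.25 deg registration error at 0.05 deg scan resolution spans ~5 points.
print(round(error_in_points(0.25, 0.05)))  # -> 5
```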

Although it was necessary to make some fairly crude assumptions in order to derive some of the individual errors, the resulting expression for αreg is conservative. With G equal to unity, by far the most significant term in the above expression is αrot, contributing 57% of the maximum registration error of 0.16° when β = 0.2° and D = 5 m, and 72% of the 0.25° maximum registration error when β = 0.4° and D = 70 m. To combat this error, the camera mount design could be modified so that it is stiffened, in order to reduce the distortion due to the weight of the camera. Better still would be the addition of a mechanism for providing a fine adjustment to ensure that the camera is truly vertical. This would greatly reduce the rotation error, and therefore the registration error.

The above expression was derived to give an approximate maximum limit on the registration error. In practice, the errors observed are significantly less than αreg over most of the point cloud. Over those areas of the point cloud colored by pixels from near edges of the photographs, the errors approach αreg. This error analysis will allow comparisons to be drawn with associated technologies, such as photogrammetry, as well as helping to prevent inappropriate use of the data.

CONCLUSIONS

A low-cost true color terrestrial laser scanning (TCTLS) system has been developed for use in a wide variety of applications. The low cost of this system makes TCTLS technology more financially viable to a larger number of people in a greater range of fields. The flexibility of the system also provides a useful platform for further research into TCTLS to test performance of a range of digital cameras, lenses, and laser scanner devices.

True color capability can be added to existing terrestrial laser scanners using a purpose-built camera mount and coregistration software. The camera mount allows the photographs to be captured from exactly the same point as the point cloud and, in doing so, reduces one source of error present in some commercial laser scanners that use top-mounted cameras. Using a digital SLR, rather than onboard video circuitry mounted inside the scanner, provides higher resolution images, and gives more faithful chromatic data.

The use of an adjustable camera mount, which takes a range of cameras, allows the user flexibility over the hardware used to capture the data. Further work is under way to construct an automated system prototype that applies the same principles of image and/or scan coregistration to a motorized version of the camera mount.

The calibration of the camera is simple and quick, using a stiff planar checkered calibration target and standard camera calibration algorithms. There is no requirement for the camera to be calibrated prior to taking the photographs, and each camera and lens need only be calibrated once. The undistortion process is fast and allows a batch of images to be undistorted with one click of a mouse.

The level of accuracy with which the photographs can be registered with the point cloud is such that the resulting true color point cloud can support very detailed analysis and data interpretation for many applications. The ability to adjust the registration of the photographs with the point cloud has considerable benefits when scanning irregular subjects such as cliff faces, which generally have few geometrically regular features.

Detailed error analysis enables comparisons to be drawn with related technologies, as well as with commercially available true color terrestrial laser scanners. The accuracy of the registration is comparable with that of commercially available TCTLS devices. This novel, low-cost, accurate, and flexible TCTLS technology has the potential to make a significant impact on the spatial data acquisition capabilities of many companies and academic institutions. We hope that this paper will provide a basis for further research and development in low-cost TCTLS and the analytic mapping approach, and that it may lead to the commercial implementation of a user-friendly, low-cost, true color terrestrial laser scanning system.

*White: Present address: Arup, Central Square, Forth Street, Newcastle upon Tyne, NE1 3PL, UK; peter.white@arup.com. Jones: Corresponding Author: richard@geospatial-research.co.uk

APPENDIX

First, the mapping functions for the horizontal case will be derived. These functions are then extended to account for the case where the camera is tilted. Finally, a method is determined for solving the resulting functions for a pair of tie-points for each image. Figure 9 shows the relationship between the laser scanner coordinate system and the virtual image plane with its coordinate system. The laser scanner center, O, and the virtual image plane are separated by a virtual distance, d, perpendicular to the plane. θc is the angle from the azimuth zero line to the center, C, of the virtual image plane, and u′ and v′ define an alternative image coordinate system with respect to an origin at the center of the virtual image plane.

Deriving an expression for u in the horizontal case, considering triangle OAC:

\tan(\theta - \theta_c) = \frac{u'}{d} \qquad (5)
Converting between u′ and u, where n_u is the horizontal pixel dimension of the image:

u' = u - \frac{n_u}{2} \qquad (6)
Substituting for u′ in equation 5:

\tan(\theta - \theta_c) = \frac{u - n_u/2}{d} \qquad (7)
Rearranging:

u = d\tan(\theta - \theta_c) + \frac{n_u}{2} \qquad (8)
Deriving an expression for v in the horizontal case, considering the triangle defined by O, A, and the point (u, v), where l is the distance OA:

\tan\phi = \frac{v'}{l} \qquad (9)
Considering triangle OAC:

\cos(\theta - \theta_c) = \frac{d}{l} \qquad (10)
Rearranging:

l = d\sec(\theta - \theta_c) \qquad (11)
Converting between v′ and v, where n_v is the vertical pixel dimension of the image:

v' = v - \frac{n_v}{2} \qquad (12)
Substituting for v′ and l in equation 9:

\tan\phi = \frac{v - n_v/2}{d\sec(\theta - \theta_c)} \qquad (13)
Rearranging:

v = d\sec(\theta - \theta_c)\tan\phi + \frac{n_v}{2} \qquad (14)
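To make the horizontal-case mapping concrete, the following sketch implements equations 8 and 14 and verifies that the mapping can be inverted exactly. It is written in Python rather than the paper's Matlab, and the values of d, the image pixel dimensions n_u and n_v, the camera azimuth, and the test angles are illustrative, not taken from the paper:

```python
import math

def horizontal_map(theta, phi, theta_c, d, n_u, n_v):
    """Map spherical scan angles (radians) to image pixel coordinates
    for a level (untilted) camera: u = d*tan(theta - theta_c) + n_u/2,
    v = d*sec(theta - theta_c)*tan(phi) + n_v/2."""
    u = d * math.tan(theta - theta_c) + n_u / 2.0
    v = d / math.cos(theta - theta_c) * math.tan(phi) + n_v / 2.0
    return u, v

def horizontal_unmap(u, v, theta_c, d, n_u, n_v):
    """Invert the mapping: recover (theta, phi) from pixel coordinates."""
    theta = theta_c + math.atan((u - n_u / 2.0) / d)
    phi = math.atan((v - n_v / 2.0) * math.cos(theta - theta_c) / d)
    return theta, phi

# Illustrative values (not from the paper): a 3000 x 2000 pixel image,
# virtual distance d expressed in pixels, camera azimuth 30 degrees.
d, n_u, n_v = 2500.0, 3000.0, 2000.0
theta_c = math.radians(30.0)
theta, phi = math.radians(42.0), math.radians(-8.0)

u, v = horizontal_map(theta, phi, theta_c, d, n_u, n_v)
t2, p2 = horizontal_unmap(u, v, theta_c, d, n_u, n_v)
print(round(math.degrees(t2), 6), round(math.degrees(p2), 6))
```

The round trip recovers the original scan angles to machine precision, which is a quick self-check that the two expressions are mutually consistent.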
Next, the general case in which the camera can be tilted at an angle to the horizontal is considered. Figure 10 shows the relationship between the laser scanner coordinate system and the image plane coordinate system in the tilted case.
First, converting between Cartesian coordinates and spherical polar coordinates as defined in Figure 11:

\begin{pmatrix} x \\ y \\ z \end{pmatrix} = r\begin{pmatrix} \cos\phi\cos\theta \\ \cos\phi\sin\theta \\ \sin\phi \end{pmatrix} \qquad (15)
In order to derive expressions for u and v in the case where the camera is tilted at an angle ϕc to the horizontal, the coordinate system shall be rotated by ϕc about the y axis. The rotation matrix required to perform the rotation by angle ϕc about the y axis is:

R_y(\phi_c) = \begin{pmatrix} \cos\phi_c & 0 & \sin\phi_c \\ 0 & 1 & 0 \\ -\sin\phi_c & 0 & \cos\phi_c \end{pmatrix} \qquad (16)
Therefore, premultiplying the vector on the right side of equation 15 (taken as a unit direction, r = 1, with the azimuth measured relative to the camera azimuth θc) by the rotation matrix above:

\begin{pmatrix} \cos\phi'\cos(\theta' - \theta_c) \\ \cos\phi'\sin(\theta' - \theta_c) \\ \sin\phi' \end{pmatrix} = \begin{pmatrix} \cos\phi_c & 0 & \sin\phi_c \\ 0 & 1 & 0 \\ -\sin\phi_c & 0 & \cos\phi_c \end{pmatrix}\begin{pmatrix} \cos\phi\cos(\theta - \theta_c) \\ \cos\phi\sin(\theta - \theta_c) \\ \sin\phi \end{pmatrix}

so that

\cos\phi'\cos(\theta' - \theta_c) = \cos\phi_c\cos\phi\cos(\theta - \theta_c) + \sin\phi_c\sin\phi, \quad \cos\phi'\sin(\theta' - \theta_c) = \cos\phi\sin(\theta - \theta_c), \quad \sin\phi' = \cos\phi_c\sin\phi - \sin\phi_c\cos\phi\cos(\theta - \theta_c) \qquad (17)
The equations corresponding to equations 8 and 14 for the tilted case are:

u = d\tan(\theta' - \theta_c) + \frac{n_u}{2} \qquad (18)

and

v = d\sec(\theta' - \theta_c)\tan\phi' + \frac{n_v}{2} \qquad (19)
Using equation 17 to express tan(θ′ − θc) in terms of θ, ϕ, θc, and ϕc:

\tan(\theta' - \theta_c) = \frac{\cos\phi'\sin(\theta' - \theta_c)}{\cos\phi'\cos(\theta' - \theta_c)}

= \frac{\cos\phi\sin(\theta - \theta_c)}{\cos\phi_c\cos\phi\cos(\theta - \theta_c) + \sin\phi_c\sin\phi} \qquad (20)
Similarly, using equation 17 to express sec(θ′ − θc) tan ϕ′ in terms of θ, ϕ, θc, and ϕc:

\sec(\theta' - \theta_c)\tan\phi' = \frac{\sin\phi'}{\cos\phi'\cos(\theta' - \theta_c)}

= \frac{\cos\phi_c\sin\phi - \sin\phi_c\cos\phi\cos(\theta - \theta_c)}{\cos\phi_c\cos\phi\cos(\theta - \theta_c) + \sin\phi_c\sin\phi} \qquad (21)
Multiplying the numerator and denominator of both of the above expressions by sec ϕc sec(θ − θc) sec ϕ:

\tan(\theta' - \theta_c) = \frac{\sec\phi_c\tan(\theta - \theta_c)}{1 + \tan\phi_c\tan\phi\sec(\theta - \theta_c)} \qquad (22)

\sec(\theta' - \theta_c)\tan\phi' = \frac{\sec(\theta - \theta_c)\tan\phi - \tan\phi_c}{1 + \tan\phi_c\tan\phi\sec(\theta - \theta_c)} \qquad (23)
Substituting for tan(θ′ − θc) from equation 22 into equation 18, and for sec(θ′ − θc) tan ϕ′ from equation 23 into equation 19, gives expressions for u and v in the tilted case:

u = \frac{d\sec\phi_c\tan(\theta - \theta_c)}{1 + \tan\phi_c\tan\phi\sec(\theta - \theta_c)} + \frac{n_u}{2} \qquad (24)

v = \frac{d\left[\sec(\theta - \theta_c)\tan\phi - \tan\phi_c\right]}{1 + \tan\phi_c\tan\phi\sec(\theta - \theta_c)} + \frac{n_v}{2} \qquad (25)
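Equations 24 and 25 can be cross-checked numerically against the underlying construction: rotating the scan direction by the camera tilt and projecting onto the virtual image plane at distance d must give the same pixel coordinates as the closed-form expressions. The sketch below does this in Python (the paper's toolbox is in Matlab); all numeric values, and the symbols n_u and n_v for the image pixel dimensions, are illustrative assumptions:

```python
import math

def tilted_map_closed(theta, phi, theta_c, phi_c, d, n_u, n_v):
    """Closed-form mapping for a tilted camera (equations 24 and 25)."""
    sec = lambda a: 1.0 / math.cos(a)
    denom = 1.0 + math.tan(phi_c) * math.tan(phi) * sec(theta - theta_c)
    u = d * sec(phi_c) * math.tan(theta - theta_c) / denom + n_u / 2.0
    v = d * (sec(theta - theta_c) * math.tan(phi) - math.tan(phi_c)) / denom + n_v / 2.0
    return u, v

def tilted_map_rotation(theta, phi, theta_c, phi_c, d, n_u, n_v):
    """Same mapping built from first principles: rotate the unit scan
    direction by the tilt angle about the y axis, then project
    perspectively onto the image plane at distance d."""
    # Unit direction of the scan point, azimuth measured from the camera axis.
    x = math.cos(phi) * math.cos(theta - theta_c)
    y = math.cos(phi) * math.sin(theta - theta_c)
    z = math.sin(phi)
    # Rotation by phi_c about the y axis.
    xp = math.cos(phi_c) * x + math.sin(phi_c) * z
    zp = -math.sin(phi_c) * x + math.cos(phi_c) * z
    # Perspective projection onto the plane at distance d.
    u = d * y / xp + n_u / 2.0
    v = d * zp / xp + n_v / 2.0
    return u, v

d, n_u, n_v = 2500.0, 3000.0, 2000.0   # illustrative values, not from the paper
theta_c, phi_c = math.radians(30.0), math.radians(12.0)
theta, phi = math.radians(38.0), math.radians(5.0)

ua, va = tilted_map_closed(theta, phi, theta_c, phi_c, d, n_u, n_v)
ub, vb = tilted_map_rotation(theta, phi, theta_c, phi_c, d, n_u, n_v)
print(abs(ua - ub) < 1e-9, abs(va - vb) < 1e-9)
```

Setting the tilt to zero reduces the closed forms to the horizontal-case expressions of equations 8 and 14, which provides a second sanity check.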
A method of solving equations 24 and 25 that can be implemented algorithmically (e.g., in a Matlab program) is now described.

For each image, if the u, v coordinates of two points in the image and the corresponding θ, ϕ coordinates of the same two points in the point cloud are known, and given the horizontal and vertical pixel dimensions of the image, n_u and n_v, then four equations can be formed containing only three unknown mapping parameters, θc, ϕc, and d:

For tie-points (u_1, v_1, \theta_1, \phi_1) and (u_2, v_2, \theta_2, \phi_2):

u_1 = \frac{d\sec\phi_c\tan(\theta_1 - \theta_c)}{1 + \tan\phi_c\tan\phi_1\sec(\theta_1 - \theta_c)} + \frac{n_u}{2} \qquad (26)

v_1 = \frac{d\left[\sec(\theta_1 - \theta_c)\tan\phi_1 - \tan\phi_c\right]}{1 + \tan\phi_c\tan\phi_1\sec(\theta_1 - \theta_c)} + \frac{n_v}{2} \qquad (27)

u_2 = \frac{d\sec\phi_c\tan(\theta_2 - \theta_c)}{1 + \tan\phi_c\tan\phi_2\sec(\theta_2 - \theta_c)} + \frac{n_u}{2} \qquad (28)

v_2 = \frac{d\left[\sec(\theta_2 - \theta_c)\tan\phi_2 - \tan\phi_c\right]}{1 + \tan\phi_c\tan\phi_2\sec(\theta_2 - \theta_c)} + \frac{n_v}{2} \qquad (29)
Only three of these equations are required in order to solve for the three mapping parameters.
If the three simultaneous nonlinear equations 26, 27, and 28 are rearranged thus:

f_1(\theta_c, \phi_c, d) = d\sec\phi_c\tan(\theta_1 - \theta_c) - \left(u_1 - \frac{n_u}{2}\right)\left[1 + \tan\phi_c\tan\phi_1\sec(\theta_1 - \theta_c)\right] = 0 \qquad (30)

f_2(\theta_c, \phi_c, d) = d\left[\sec(\theta_1 - \theta_c)\tan\phi_1 - \tan\phi_c\right] - \left(v_1 - \frac{n_v}{2}\right)\left[1 + \tan\phi_c\tan\phi_1\sec(\theta_1 - \theta_c)\right] = 0 \qquad (31)

f_3(\theta_c, \phi_c, d) = d\sec\phi_c\tan(\theta_2 - \theta_c) - \left(u_2 - \frac{n_u}{2}\right)\left[1 + \tan\phi_c\tan\phi_2\sec(\theta_2 - \theta_c)\right] = 0 \qquad (32)
then they can be solved by using Newton's method (http://numbers.computation.free.fr/Constants/Algorithms/newton.html) in the following form:

\mathbf{x}_{n+1} = \mathbf{x}_n - J^{-1}\,\mathbf{f}(\mathbf{x}_n), \qquad \mathbf{x} = (\theta_c, \phi_c, d)^{\mathrm{T}}, \quad \mathbf{f} = (f_1, f_2, f_3)^{\mathrm{T}} \qquad (33)
where J is the Jacobian matrix:

J = \begin{pmatrix} \dfrac{\partial f_1}{\partial \theta_c} & \dfrac{\partial f_1}{\partial \phi_c} & \dfrac{\partial f_1}{\partial d} \\ \dfrac{\partial f_2}{\partial \theta_c} & \dfrac{\partial f_2}{\partial \phi_c} & \dfrac{\partial f_2}{\partial d} \\ \dfrac{\partial f_3}{\partial \theta_c} & \dfrac{\partial f_3}{\partial \phi_c} & \dfrac{\partial f_3}{\partial d} \end{pmatrix} \qquad (34)
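The Newton solution can be sketched end to end: form the three residual functions from the rearranged tie-point equations, build the Jacobian, and iterate. The sketch below is a Python illustration of the approach, not the paper's Matlab implementation; it substitutes a finite-difference Jacobian for analytic derivatives, the `forward` helper is a hypothetical function used only to generate synthetic tie-points from known parameters, and all numeric values are illustrative:

```python
import math

def residuals(params, ties, n_u, n_v):
    """The three rearranged tie-point equations f1, f2, f3: each is zero
    when (theta_c, phi_c, d) are consistent with the tie-points."""
    theta_c, phi_c, d = params
    (u1, v1, t1, p1), (u2, v2, t2, p2) = ties
    sec = lambda a: 1.0 / math.cos(a)
    b1 = 1.0 + math.tan(phi_c) * math.tan(p1) * sec(t1 - theta_c)
    b2 = 1.0 + math.tan(phi_c) * math.tan(p2) * sec(t2 - theta_c)
    return [
        d * sec(phi_c) * math.tan(t1 - theta_c) - (u1 - n_u / 2.0) * b1,
        d * (sec(t1 - theta_c) * math.tan(p1) - math.tan(phi_c)) - (v1 - n_v / 2.0) * b1,
        d * sec(phi_c) * math.tan(t2 - theta_c) - (u2 - n_u / 2.0) * b2,
    ]

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            k = M[r][c] / M[c][c]
            for cc in range(c, 4):
                M[r][cc] -= k * M[c][cc]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def newton_solve(ties, n_u, n_v, guess, iters=30):
    """Newton iteration x_{n+1} = x_n - J^{-1} f(x_n), with the Jacobian
    approximated by forward finite differences."""
    x = list(guess)
    for _ in range(iters):
        f = residuals(x, ties, n_u, n_v)
        J = [[0.0] * 3 for _ in range(3)]
        for j in range(3):
            h = 1e-6 * max(1.0, abs(x[j]))
            xh = list(x)
            xh[j] += h
            fh = residuals(xh, ties, n_u, n_v)
            for i in range(3):
                J[i][j] = (fh[i] - f[i]) / h
        step = solve3(J, [-fi for fi in f])
        x = [xi + si for xi, si in zip(x, step)]
    return x

# Hypothetical forward mapping used only to generate synthetic tie-points
# from known parameters (the tilted-case closed-form mapping).
def forward(theta, phi, theta_c, phi_c, d, n_u, n_v):
    sec = lambda a: 1.0 / math.cos(a)
    b = 1.0 + math.tan(phi_c) * math.tan(phi) * sec(theta - theta_c)
    return (d * sec(phi_c) * math.tan(theta - theta_c) / b + n_u / 2.0,
            d * (sec(theta - theta_c) * math.tan(phi) - math.tan(phi_c)) / b + n_v / 2.0)

n_u, n_v = 3000.0, 2000.0                      # illustrative image size (pixels)
true = (math.radians(25.0), math.radians(10.0), 2500.0)
pts = [(math.radians(18.0), math.radians(4.0)),
       (math.radians(33.0), math.radians(15.0))]
ties = [forward(t, p, *true, n_u, n_v) + (t, p) for t, p in pts]

est = newton_solve(ties, n_u, n_v, guess=(math.radians(20.0), math.radians(5.0), 2000.0))
print([round(math.degrees(est[0]), 4), round(math.degrees(est[1]), 4), round(est[2], 2)])
```

Starting from a rough guess, the iteration recovers the parameters used to generate the synthetic tie-points, mirroring the two-tie-points-per-image registration described in the text.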

ACKNOWLEDGMENTS

We thank John Parker for his help with the analytical mapping, and Nick Rosser, Alan Purvis, David Toll, Roger Little, Nick Holliman, and Steve Waggott (Halcrow) for their help and insight. We also thank Jerry Bellian and Klaus Gessner for useful reviews, as well as Tim Wawrzyniec and Randy Keller for editorial input. Aspects of material presented in this paper are protected under UK and International Patent laws.