When present, fractures tend to dominate fluid flow through rock bodies, and characterizing fracture networks is necessary for understanding these flow regimes. X-ray computed tomography (CT) has long been successfully used to image fractures in solid samples, but interpretation of CT data is complicated by the inevitable blurring that occurs when fractures are thin compared to the data resolution. This issue is particularly acute when attempting to quantify fine fractures in scans of larger samples, as typically required for characterizing flow systems on a meaningful scale. A number of methods have been proposed to account for CT blurring, but none accounts for material inhomogeneity and fracture orientation. We here propose an improved method for fracture measurement that consists of characterizing the blurring as a point-spread function (PSF), and using it, in combination with a calibration for the CT number for void space, in an iterative procedure to reconstruct the fracture and material configuration; we call this the inverse PSF (IPSF) method. Tests on CT scans of homogeneous natural samples show that the IPSF method provides more precise results than others. Further testing demonstrates that it can also recover accurate measurements in heterogeneous materials, although particularly severe inhomogeneities may lead to a locally noisy signal. The accuracy, generality, and adaptability of the IPSF method make it very well suited for characterizing fractures and fracture surfaces in natural materials. The principles behind the IPSF method also apply to the reverse problem of measuring thin features that are denser than their surroundings, such as veins or membranes, when they have one dimension that is small compared to CT data resolution.


Fractures and fracture networks are three-dimensional (3D) features that have 3D effects on the flow properties of natural materials. Because fractures are characteristically rough and uneven, flow through them tends to follow a meandering path of least resistance defined by local pressure gradients, which are in turn largely determined by the fracture topology, in particular variation in aperture and surface roughness. As a result, fractures are optimally studied and characterized in three dimensions and in situ, and accurate depiction of fracture aperture and roughness is important for both intuitive understanding and meaningful numerical modeling.

X-ray computed tomography (CT) is a nondestructive imaging technique that provides 3D imagery of the interiors of solid objects based on X-ray attenuation, which is to first order a function of density. A wide variety of CT configurations exists, from synchrotron-hosted systems that provide submicron-scale data on relatively small (<1 cm) samples (Gouze et al., 2003; Noiriel et al., 2007), to high-energy industrial systems that can penetrate several centimeters to decimeters of solid rock. CT has been used by various research groups to image and measure fractures in solid samples (Bertels et al., 2001; Johns et al., 1993; Keller, 1998; Mazumder et al., 2006; Van Geet and Swennen, 2001; Vandersteen et al., 2003; Yoshino et al., 2004) with promising results. Because of the great density contrast between void space and solid rock, fractures tend to image very clearly, and can be observed even when their aperture is a fraction of an image pixel width (e.g., Ketcham and Carlson, 2001, their fig. 6).

An additional benefit of CT is that it can image fractures in situ within experimental configurations, allowing them to be measured and then subjected to flow testing (Thompson, 2005; Weerakone and Wong, 2006). Because CT captures 3D information it also has the potential to characterize 3D networks, rather than simply individual fractures or surfaces. Such applications by their nature require measuring potentially very fine fractures within comparatively large specimens, which must be scanned at coarse resolution to be imaged in their entirety. In general CT produces voxels (volume elements, or 3D pixels) that are a factor of ∼10³ smaller than the imaged field of view, and so if one wants to characterize 100-mm-scale samples one is forced to use 100-μm-scale voxels, putting a premium on methods that maximize the range of fracture sizes that can be quantified.

Previously published methods for extracting precise fracture aperture measurements from CT data share a number of limitations that inhibit their robust application. First, they confine their analyses to individual slice images, and thus are at best measures of apparent aperture along the image plane rather than true aperture, unless the fracture is nearly perpendicular to the plane. Second, although these methods have in some cases been employed on heterogeneous materials (e.g., polymineralic rocks such as granite), they do not account for heterogeneity in their calibration or formulation. Third, the sensitivity of these methods to various CT imaging artifacts has not been well documented. Fourth, although the accuracy of aperture measurement using CT has been demonstrated, precision has not been as carefully evaluated with respect to aperture or fracture surface estimation. Insofar as method-induced variability in such determinations could introduce a spurious component of necking or roughness, more careful attention to this point is warranted.

This paper proposes generalized concepts and new methods for measuring true 3D aperture and orientation in heterogeneous natural materials that address these issues. Particular attention is given to how some of the limitations of CT scanning, such as finite resolution and image artifacts, can affect results. These methods will allow routine and rigorous evaluation of fracture network geometry, and set the stage for improved experimental and numerical assessment of the effects of fracture roughness on fluid flow.


Acquisition of CT data generally consists of projecting X-rays through an object from a range of orientations to measure X-ray attenuation along thousands to millions of paths through it. These data are then processed to determine the spatial distribution of X-ray attenuation within the object, which is primarily a function of density, atomic number, and X-ray energy. Below X-ray energies of ∼100–150 keV attenuation varies with atomic number Z as a function of ∼Z⁴, making it very composition dependent; at higher energies it is a linear function of Z, and thus primarily a function of density (Markowicz, 1993).

CT data commonly are provided as a set of slice images, each of which depicts X-ray attenuation along a plane with a thickness determined by scanning conditions and geometry. The basic element of CT data is the voxel, a 3D pixel with volume reflecting this thickness. Voxel thickness, or z-dimension, need not be the same as the in-plane, or x-y, dimension. The value associated with each voxel is generically termed a CT number. Although a CT number reflects X-ray attenuation, in most cases it does not correspond to a particular material X-ray attenuation coefficient owing to the X-ray energy dependence of attenuation and the fact that most laboratory CT scanners use polychromatic X-rays. Although medical scanners produce standardized data based on Hounsfield units, industrial instruments typically employ more arbitrary scales that are usually optimized for the material being imaged (Ketcham and Carlson, 2001).

A persistent feature of most CT scanning is beam-hardening artifacts, which occur when using polychromatic X-rays and are caused by the differential attenuation of high-energy and low-energy X-rays. Because low-energy X-rays are more easily attenuated, the mean energy of an X-ray beam will become higher as it passes through a dense material, even as the overall beam intensity drops. Beam-hardening artifacts are in many cases manifested as bright rims and dark centers of objects; more generally, the centers of long ray paths tend to be dark compared to the ends. Beam-hardening artifacts are more prevalent when low-energy X-rays are used for scanning denser objects (e.g., Bertels et al., 2001). They can be reduced by filtering the X-rays, applying software corrections, or via some calibration procedures; a calibration approach is demonstrated and utilized in this study.

Another fundamental and unavoidable feature of CT imagery is that, regardless of how crisp it appears in any particular instance, it is a blurry approximation of the true material configuration. This blurring has two components. The first is the partial-volume effect, in which any given data voxel that encompasses multiple materials must have some averaged attenuation. The second stems from the many nonidealities of scanning, such as the finite sizes of the X-ray focal spot and detectors, detector crosstalk, averaging due to motion during data acquisition, and the finite resolution of reconstruction algorithms. The majority of blurring is caused by this second component, and it is through technological improvements in each of the areas mentioned that CT scanner resolution continues to improve.

The blurring of CT can be usefully summarized as a point-spread function (PSF), a mathematical construct portraying how a signal originating at a single point is spread over its neighborhood. For the purposes of this work it can be conceptualized as a smoothing window that is convolved with reality (G) to produce the data actually obtained (F), with some noise overprint (ε), i.e.,

F = PSF ⊗ G + ε.    (1)
The form of the PSF is undetermined. ASTM (1992) use a cylindrical box function for illustrative purposes, but other forms may be more faithful to the data. Computer simulations indicate that a Gaussian function is often appropriate, a result corroborated by our study. For slice-based CT the PSF is a 2D function, whereas for any volume CT system it is a possibly anisotropic 3D function that may change with position within a scan volume. When considered along a linear traverse, the PSF reduces to 1D, which is an adequate approximation when the features of interest vary predominantly in the direction of the traverse. This study demonstrates a means of using a 1D PSF while accounting for fractures that are nonorthogonal to the traverse direction.
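As a numerical sketch of this convolution model (all values are illustrative assumptions, including the Gaussian window form and its half-width), the dispersal of a thin fracture's signal and the conservation of its summed attenuation deficit can be demonstrated in a few lines:

```python
import numpy as np

def gaussian_psf(r):
    """Unit-area 1D smoothing window of half-width r (in voxel widths):
    a Gaussian with sigma = r/2, truncated at about two sigma per side."""
    rc = int(np.ceil(r))
    x = np.arange(2 * rc + 1)
    g = np.exp(-2.0 * (x - rc) ** 2 / r ** 2)
    return g / g.sum()

# "Reality" G: matrix with CT number 3000 and a one-voxel air-filled
# fracture with CT number 800 (illustrative values).
G = np.full(41, 3000.0)
G[20] = 800.0

psf = gaussian_psf(4.0)                # assumed PSF half-width of 4 voxels
F = np.convolve(G, psf, mode="valid")  # noise-free data; "valid" avoids padding

# The anomaly minimum is far above the true air value because the signal
# is dispersed, but the summed attenuation deficit is conserved:
deficit = (3000.0 - F).sum()  # equals (3000 - 800) * 1 voxel = 2200
shallowest = F.min()          # well above 800
```

This mirrors the behavior of the thinnest fractures in Figure 1: the minimum CT number never reaches the air value, yet the total missing attenuation still records the true aperture.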

The effect of the PSF on CT imaging of fractures is shown in Figure 1, with a series of CT scans (voxel x-y dimension 46.9 μm) of artificial fractures of progressively increasing aperture, and example traverses across each one. It is evident that the limestone matrix has a mean CT number in these images of ∼3000, and the air in the fracture has a CT number of ∼800. Two interesting features are apparent among the thinner fractures. First, the smallest-aperture traverse has a minimum CT number that is only faintly distinguishable from the matrix noise; as the apertures increase, the minimum CT number falls until it reaches the value for air. Second, the rim-to-rim widths of the negative anomalies in the CT numbers for the two smallest apertures are about the same, even though the larger is more than four times wider than the smaller; notably, both rim-to-rim widths of ∼400 μm far exceed the true apertures. Both phenomena are direct manifestations of the PSF. While the aperture is on the order of a voxel width or less, the dispersion of its signal corresponds directly to the PSF size and shape. As the aperture grows, more air is included, lowering the minimum, until ultimately the fracture aperture exceeds the PSF radius and the end-member CT number for air is observed.

We can infer from Figure 1 that the PSF radius for these scans is ∼200 μm, based on both halving the rim-to-rim width of the anomaly for the smallest aperture and estimating how wide an aperture must be to achieve the CT number for air. The size of the PSF is influenced by many factors, and will vary with instrument characteristics and settings, and even from scan to scan. Parameterization of the PSF will be discussed in section 3.3, and measurement in section 4.2.1.
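The rim-to-rim estimate of the PSF radius can be sketched numerically; the profile, threshold, and PSF half-width below are assumptions chosen to resemble the Figure 1 traverses, not values taken from them:

```python
import numpy as np

# Synthetic traverse: matrix CT 3000, with the blurred anomaly of a
# one-voxel fracture (all values illustrative, not taken from Figure 1).
voxel_um = 46.9
r_true = 4.0                          # PSF half-width in voxels (assumed)
rc = int(np.ceil(r_true))
x = np.arange(2 * rc + 1)
psf = np.exp(-2.0 * (x - rc) ** 2 / r_true ** 2)
psf /= psf.sum()

G = np.full(61, 3000.0)
G[30] = 800.0                         # aperture much thinner than the PSF
traverse = np.convolve(G, psf, mode="valid")

# Rim-to-rim width of the negative anomaly, taken where the profile drops
# detectably below the matrix level (the 50-unit threshold is an assumption),
# halved to estimate the PSF radius:
anomaly = np.where(traverse < 3000.0 - 50.0)[0]
rim_to_rim_um = (anomaly[-1] - anomaly[0]) * voxel_um
psf_radius_um = rim_to_rim_um / 2.0   # recovers r_true * voxel_um
```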

The blurring effect has both advantages and disadvantages for fracture analysis. The most substantial advantage is that it allows measurement of fracture apertures to within a fraction of the voxel width, as illustrated in Figure 1. However, it will smooth out natural roughness at a scale corresponding to the PSF radius.

A reasonable operating assumption for most CT imagery is that the PSF is attenuation conserving, redistributing the signal for any given feature but leaving net attenuation effectively unchanged (Wellington and Vinegar, 1987). A corollary assumption is that X-ray attenuation is the only factor influencing CT numbers. However, some artifacts in CT data are not attenuation conserving, including beam hardening, the exponential edge-gradient effect (EEGE; De Man et al., 1999; Joseph and Spital, 1981), and scattering. Accounting for the former two is discussed herein; scattering is less frequently encountered, but may arise in some applications, for example when noncollimated, high-energy (>150 keV) X-rays are used, as in some cone-beam volume CT systems.

Fractures can be visually enhanced in parallel-beam or high-spatial-coherence CT imagery by the phase-contrast effect (Cloetens et al., 1996; Wilkins et al., 1996). In some cases scans are configured to maximize this effect when attenuation contrasts are otherwise very weak (e.g., Cloetens et al., 1997; Heethoff et al., 2009; Marinoni et al., 2009). However, such enhancement violates the attenuation-conservation assumption, and the methods presented here are not appropriate for phase-contrasted imagery. It is possible that further processing using a Bronnikov-type filter (e.g., De Witte et al., 2009), which seeks to remove the phase contrast signal from the data, may sufficiently restore the imagery to an attenuation-only state to allow use of the methods developed here, although this has yet to be tested.

CT ring artifacts can also affect measurements when they intersect fractures. There are a number of correction methods, and the data in this study were processed using the sinogram-based methods outlined in Ketcham (2006b), which do not leave residual artifacts.


Three approaches have been used for measuring aperture from CT images: defining a constant threshold differentiating air from rock; using the minimum value in a traverse across the fracture, corresponding to a peak height (PH; Vandersteen et al., 2003); and summing the missing attenuation (MA) associated with a fracture and converting it into a nonblurred equivalent (Johns et al., 1993; Keller, 1998; Van Geet and Swennen, 2001). These three methods are illustrated in Figure 2.

For thresholding, the standard method for choosing the threshold value is the full-width–half-maximum (FWHM) criterion; in the case of aperture measurement, this corresponds to using the midpoint CT number between air and the rock matrix. This is also referred to as the half-maximum height (HMH) criterion. This method is correct when the fracture is wider than the PSF (i.e., Fig. 1). However, it is flawed when used for thinner apertures, as shown in Figure 2; the appropriate threshold value for the larger aperture does not capture the smaller one. Worse, the tempting remedy of manually adjusting the threshold to capture smaller fractures results in essentially arbitrary values for their apertures, while at the same time systematically and inappropriately thickening larger fractures.
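This failure mode is easy to reproduce numerically. The sketch below uses assumed values (matrix 3000, air 800, Gaussian PSF of half-width 4 voxels): the HMH threshold recovers a wide fracture correctly but finds nothing at all for a sub-PSF fracture.

```python
import numpy as np

CT_MAT, CT_AIR = 3000.0, 800.0   # illustrative end members

def gaussian_psf(r):
    rc = int(np.ceil(r))
    x = np.arange(2 * rc + 1)
    g = np.exp(-2.0 * (x - rc) ** 2 / r ** 2)
    return g / g.sum()

psf = gaussian_psf(4.0)  # assumed PSF half-width of 4 voxels

def blurred_fracture(aperture_vox, n=81):
    """Blurred traverse across a fracture of the given aperture (voxels)."""
    G = np.full(n, CT_MAT)
    lo = n // 2 - aperture_vox // 2
    G[lo:lo + aperture_vox] = CT_AIR
    return np.convolve(G, psf, mode="valid")

def hmh_aperture(profile):
    """Count voxels below the half-maximum-height (midpoint) threshold."""
    threshold = 0.5 * (CT_MAT + CT_AIR)
    return int(np.sum(profile < threshold))

wide = blurred_fracture(12)   # wider than the PSF: HMH recovers 12 voxels
thin = blurred_fracture(1)    # thinner than the PSF: HMH finds nothing
```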

3.1. MA Method

Assuming all X-ray attenuation is conserved in a CT image, the true dimension of an object can be ascertained if its total attenuation signal can be delineated and summed, and then converted into a nonblurred equivalent (Johns et al., 1993; Van Geet and Swennen, 2001). The general equation describing this method is:

dfoi = dvox Σ pfoi + C,    (2)
where dfoi is the dimension (length or, in other applications, area or volume) of the feature of interest; dvox is the corresponding dimension of a data voxel; pfoi is the proportion of the attenuation value in a voxel attributable to the feature of interest, which is summed over all affected voxels; and C is a constant that should ideally be zero but in practice can compensate for nonideal behavior.

For the MA method to work for aperture estimation, all that needs to be present is an attenuation anomaly associated with a fracture (i.e., some darkening of the CT image) that can be calibrated. This is in theory straightforward for homogeneous materials, but not necessarily for heterogeneous ones, in particular if different materials line each edge of the fracture, as might be expected when fractures form predominantly on grain boundaries. This method also can be affected by scan artifacts, notably beam hardening (Bertels et al., 2001).

The anomaly magnitude in each voxel can be calculated based on a relative or absolute scale. An absolute calibration can be derived by determining end-member CT numbers for rock matrix and air, CTmat and CTair, resulting in:

pi = (CTmat − CTi) / (CTmat − CTair),    (3)
where pi is the anomaly for a voxel with a value CTi. A relative calibration is based on the difference between matrix and air without the requirement that the CT number for matrix be fixed, but can locally vary in parallel with air. The resulting formulation is:

pi = (CTmat − CTi) / R,    (4)
where R is the relative calibration factor. In both cases, CTmat can be determined globally (across a whole scan image or volume), or locally, such as based on the end points of a traverse. A local determination is potentially more able to account for material inhomogeneity, but may also make measurements more susceptible to image noise. All previous studies utilizing MA have employed a relative calibration. The two methods are equivalent when the matrix is homogeneous and beam hardening is absent, and each has utility in certain situations.
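Both calibrations reduce to a short summation. The sketch below (function names and numbers are hypothetical; matrix 3000, air 800, Gaussian-blurred synthetic traverse) shows the two forms agreeing for a homogeneous matrix:

```python
import numpy as np

def ma_aperture_absolute(traverse, ct_mat, ct_air, d_vox, C=0.0):
    """Absolute calibration: p_i = (CT_mat - CT_i) / (CT_mat - CT_air)."""
    p = (ct_mat - traverse) / (ct_mat - ct_air)
    return d_vox * p.sum() + C

def ma_aperture_relative(traverse, ct_mat_local, R, d_vox, C=0.0):
    """Relative calibration: p_i = (CT_mat - CT_i) / R, CT_mat taken locally."""
    p = (ct_mat_local - traverse) / R
    return d_vox * p.sum() + C

# Synthetic blurred traverse (assumed values): matrix 3000, air 800,
# true aperture 3 voxels at 0.0469 mm per voxel.
rc, r = 4, 4.0
x = np.arange(2 * rc + 1)
psf = np.exp(-2.0 * (x - rc) ** 2 / r ** 2)
psf /= psf.sum()
G = np.full(41, 3000.0)
G[19:22] = 800.0
traverse = np.convolve(G, psf, mode="valid")  # "valid" avoids end padding

d_vox = 0.0469  # mm
ap_abs = ma_aperture_absolute(traverse, 3000.0, 800.0, d_vox)
# With a homogeneous matrix the two coincide when R = CT_mat - CT_air:
ap_rel = ma_aperture_relative(traverse, traverse[0], 2200.0, d_vox)
```

Here the local matrix value is simply taken from the traverse end point, illustrating the trade-off noted above between accommodating inhomogeneity and susceptibility to noise.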

3.2. PH Method

The PH method calibrates aperture width to the height of the negative anomaly (Fig. 2). Vandersteen et al. (2003) fitted a Gaussian PSF to the anomaly profile and interpolated the minimum point, essentially utilizing more information to obtain a more robust determination. They found that the PH method is less noisy than MA for small apertures, but MA is better for large ones, with the crossover point being instrument dependent. Once the aperture width exceeds the PSF width and the CT number for air is reached, the PH method becomes inapplicable, though in such cases an alternative method such as MA can be used (e.g., Mazumder et al., 2006). As posed, the PH method utilizes a relative calibration, and is thus only appropriate for homogeneous materials. Because of this, it is not examined further in this paper.
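A simplified numerical sketch of the PH idea follows, substituting a linear height-versus-aperture calibration for the Gaussian fit of Vandersteen et al. (2003); all CT values and the PSF half-width are assumptions:

```python
import numpy as np

CT_MAT, CT_AIR = 3000.0, 800.0   # illustrative end members

rc, r = 4, 4.0                   # assumed Gaussian PSF, half-width 4 voxels
x = np.arange(2 * rc + 1)
psf = np.exp(-2.0 * (x - rc) ** 2 / r ** 2)
psf /= psf.sum()

def peak_height(aperture_vox, n=41):
    """Depth of the blurred negative anomaly for a given aperture."""
    G = np.full(n, CT_MAT)
    lo = n // 2 - aperture_vox // 2
    G[lo:lo + aperture_vox] = CT_AIR
    F = np.convolve(G, psf, mode="valid")
    return CT_MAT - F.min()

# Calibrate height against known apertures (1 and 3 voxels), then invert
# the line to estimate an "unknown" 2-voxel aperture from its peak height.
h1, h3 = peak_height(1), peak_height(3)
slope = (h3 - h1) / 2.0
intercept = h1 - slope
estimated = (peak_height(2) - intercept) / slope
```

For sub-PSF apertures the height grows nearly linearly with aperture, which is why the simple inversion recovers the 2-voxel case closely; once the anomaly bottoms out at the air value, the height saturates and the method fails, as noted above.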

3.3. Inverse PSF Method

The inverse PSF (IPSF) method proposed here is a hybrid between the PH and MA approaches. If the PSF is consistent throughout a CT image, or if its variability can be characterized, then its magnitude can be used as a constraint for measurement, rather than a freely varying parameter. When combined with an absolute calibration for CTair, it can serve as the basis for a constrained deconvolution in which the CT numbers for the solid are allowed to vary freely, providing a material-independent means of calculating missing attenuation. Deconvolution is done by posing a simplified model of rock-void-rock and using an iterative procedure to find the aperture width, position, and (optionally) rock CT numbers that best reproduce the data traverse (Fig. 3). Because the PSF defines the spatial limit of influence of a fracture on the CT data, it can also be used to refine the length of the data traverse so that only the minimum necessary depth of penetration into the solid is examined, allowing small-scale inhomogeneities (i.e., mineral grains) to be better accounted for.

We stress that the IPSF method as implemented here essentially employs a hard-wired deconvolution based on an assumed model configuration, rather than a general numerical deconvolution based on the Fourier transform, such as the Richardson-Lucy method (Lucy, 1974). As a result it is more computationally expensive, but it adheres more closely to the actual conditions being characterized and gives more accurate and more easily interpreted results, as the fracture boundaries are explicit outputs of the procedure.

The form of the PSF used here is a function g defining a smoothing window:

gi = exp[−2(xi − rc)² / r²] / Σj exp[−2(xj − rc)² / r²],    (5)
where gi is the PSF value at voxel coordinate i, r is the radius or half-width of the PSF (hereafter termed the PSF parameter) in voxel widths, rc is the smallest integer ≥ r, and xi are integer coordinates from 0 to 2rc. The only free parameter is r. This definition corresponds to a Gaussian function out to approximately four standard deviations, which ensures smoothness and also allows the r parameter to represent the diameter of the smoothing window, as 95% of the signal from a voxel will be inside this region.
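The pieces above can be combined into a minimal sketch of the IPSF idea: a rock-void-rock forward model, blurred with the PSF window just described, fitted to a data traverse. This version fixes the matrix CT number and uses a brute-force grid search in place of the full iterative optimizer (which also frees the rock CT numbers and trims the traverse); all names and numbers are hypothetical.

```python
import numpy as np

CT_AIR = 800.0  # calibrated end-member CT number for void space (assumed)

def gaussian_psf(r):
    rc = int(np.ceil(r))
    x = np.arange(2 * rc + 1)
    g = np.exp(-2.0 * (x - rc) ** 2 / r ** 2)
    return g / g.sum()

def model_traverse(n, center, width, ct_mat, psf):
    """Rock-void-rock model: each voxel receives the fractional air coverage
    of the interval [center - width/2, center + width/2], then the profile
    is blurred with the PSF."""
    edges = np.arange(n + 1.0)
    lo, hi = center - width / 2.0, center + width / 2.0
    frac = np.clip(np.minimum(edges[1:], hi) - np.maximum(edges[:-1], lo),
                   0.0, 1.0)
    G = ct_mat * (1.0 - frac) + CT_AIR * frac
    return np.convolve(G, psf, mode="same")

def ipsf_fit(data, ct_mat, psf, widths, centers):
    """Find the aperture width and position whose blurred model best
    reproduces the data traverse (least squares, brute-force search)."""
    best = (np.inf, None, None)
    for w in widths:
        for c in centers:
            model = model_traverse(len(data), c, w, ct_mat, psf)
            misfit = np.sum((model - data) ** 2)
            if misfit < best[0]:
                best = (misfit, w, c)
    return best[1], best[2]

# Synthetic check: recover a 1.5-voxel aperture centered at 20.3 voxels.
psf = gaussian_psf(4.0)
data = model_traverse(41, 20.3, 1.5, 3000.0, psf)
w_fit, c_fit = ipsf_fit(data, 3000.0, psf,
                        widths=np.arange(0.5, 3.01, 0.05),
                        centers=np.arange(19.0, 22.0, 0.05))
```

Because the fracture boundaries (center and width) are explicit parameters of the fit, the output is subvoxel aperture and position, rather than a deconvolved image that still requires thresholding.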

3.4. Aperture Tracing and Orientation Determination

The algorithmic problem of tracing a fracture to obtain measurements at various points along its extent has not been part of the discussion of aperture measurement, largely because the issue was trivial for the cases considered. Scans were generally conducted so apertures were close to vertical or horizontal in the imagery, and apertures were easy to recognize because they were large and/or the material being imaged was relatively homogeneous. In this study, and with geological materials in general, these latter assumptions no longer necessarily hold. When material inhomogeneity is severe and fractures are small, even knowing where a fracture is located along a given data traverse is not straightforward, as will be seen in section 4.3. In such cases, a tracing method is particularly important for helping direct the measurement procedure.

Aperture tracing in this study employed a simple procedure of using a line fit to two or three points along a trace to estimate where the next would be, and then searching a limited neighborhood around the estimate. In general the bounds of this neighborhood could be set widely to accommodate bends in the fracture, but when materials are very heterogeneous tighter bounds are required to ensure that the trace does not get off track due to local complexities. Generating the initial points to help find subsequent ones can generally be done by starting a trace where a fracture is clearly visible and then tracing into more complex areas, much as one would do visually.
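A minimal version of this predict-and-search step might look as follows; the window half-width, step size, and function names are assumptions, not the study's implementation:

```python
import numpy as np

def predict_next(rows, cols, step=1.0):
    """Fit a line to the last two or three trace points and extrapolate
    one step ahead to centre the next search window."""
    r = np.asarray(rows[-3:], dtype=float)
    c = np.asarray(cols[-3:], dtype=float)
    slope, intercept = np.polyfit(r, c, 1)
    next_row = r[-1] + step
    return next_row, slope * next_row + intercept

def search_window(col_estimate, half_width):
    """Column bounds to search; tighter bounds keep the trace on track
    in heterogeneous material."""
    return (int(np.floor(col_estimate - half_width)),
            int(np.ceil(col_estimate + half_width)))

# Continue a gently dipping trace: three known points, predict the fourth.
nr, nc = predict_next([10, 11, 12], [40.0, 40.5, 41.0])
lo_col, hi_col = search_window(nc, 2.0)
```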

Related to the problem of tracing is determining 3D fracture orientation, which is necessary both for tracing fractures through a CT data volume and for determining true versus apparent aperture when the fracture is not perpendicular to the image plane. This can be accomplished relatively easily by measuring the fracture across multiple slice images and doing a geometric conversion. Alternatively, by sampling in 3D all CT values surrounding the aperture midpoint in a radius somewhat larger than the aperture apparent thickness, the fracture normal can be estimated as the tertiary eigenvector of an orientation matrix (e.g., Ketcham and Ryan, 2004).
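The eigenvector approach can be sketched as follows; the details of the orientation matrix in Ketcham and Ryan (2004) may differ, so treat this as a minimal stand-in using the scatter matrix of sampled void-voxel coordinates:

```python
import numpy as np

def fracture_normal(points):
    """Estimate the fracture normal as the eigenvector of the orientation
    (scatter) matrix having the smallest eigenvalue; points are (N, 3)
    coordinates of void voxels sampled around the aperture midpoint."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    scatter = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(scatter)  # eigenvalues ascending
    return eigvecs[:, 0]                        # tertiary eigenvector

# Synthetic planar voxel cloud lying in the plane z = 0.5 * x; the true
# unit normal is (-0.5, 0, 1) / sqrt(1.25), up to sign.
xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
pts = np.column_stack([xx.ravel(), yy.ravel(), 0.5 * xx.ravel()])
normal = fracture_normal(pts)
```

The smallest-eigenvalue direction is the one along which the sampled void voxels scatter least, which for a roughly planar fracture segment is the pole to the fracture plane.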


The aperture measurement methods were compared using scans on two CT subsystems at the University of Texas High-Resolution CT Facility (UTCT). The microfocal system consists of a 200 or 225 kV Feinfocus source capable of a spot size of 5 μm, combined with an image intensifier and 512 × 512 or 1024 × 1024 CCD video camera; scans for this study were acquired over several years, during which time the system was upgraded a number of times. The high-energy subsystem used a 420 kV Pantak X-ray source and 512-channel linear detector with detector pitch of 0.381 mm. The UTCT scanners are described in Ketcham and Carlson (2001). All data were gathered using single-slice acquisition and reconstructed using standard backprojection with the Ram-Lak filter.

Rock phantoms were made from 25 mm cores of homogeneous lithographic limestone and a series of granitic rocks of varying composition and mineralogy. A high-precision saw was used to make specimens with vertical cuts and, for the limestone, cuts at 67.5° and 22.5° to vertical. When scanning, aperture was controlled using a series of plastic shims (Artus Corporation, Englewood, New Jersey) from 0.015 to 0.777 mm in thickness (Table 1). Measurements with high-precision mechanical calipers were found to disagree slightly with the manufacturer's specifications; the measured values were used for this work.

In general, the samples were scanned so that the aperture and fracture traces were vertical in the CT images (e.g., Fig. 1). Measurements were then made using the methods described herein along a series of traverses normal to the apertures, and reproducibility was gauged by comparing multiple determinations along the fracture traces. In this study 60–400 determinations were made per image, depending on image dimensions, taking care to avoid any irregularities along the sample cut surfaces.

4.1. MA Method

Different sets of experiments illustrate and test different aspects of the principles discussed in the previous section. We examine various instances of the MA method first, and then compare these with the IPSF method in the next section.

4.1.1. Homogeneous Matrix, Vertical Aperture, Different Energy Calibrations

The first example consists of imaging the vertically cut lithographic limestone sample using different calibration conditions to examine their effect on aperture measurement. One of the calibrations necessary for CT imaging is a detector reading with X-rays on for signal normalization. In most cases this signal is collected with the sample chamber empty (i.e., through air), but some CT systems allow it to be collected through a phantom, ideally with geometry and X-ray attenuation characteristics similar to those of the object being scanned. This is termed a wedge calibration, and the idea behind it is to make the X-ray spectrum measured for calibration as similar as possible to the spectrum detected during imaging (Ketcham and Carlson, 2001), reducing beam hardening and ring artifacts.

Figure 4 shows two example images; the left image (Fig. 4A) was collected using air as the wedge material and for Figure 4B, the wedge calibration used an uncut limestone core. The bright rim of the image in Figure 4A is a typical manifestation of beam hardening, while the evenness of the image in Figure 4B reflects the compensating effect of the wedge calibration. Sets of scans with varying apertures acquired with each method were used to compute calibration factors for the MA method. Proper calibration of the beam-hardened images required use of the relative calibration scheme (equation 4), whereas the scans that employed a limestone wedge used an absolute calibration (equation 3). In the former case, the R factor is calculated as the slope of CT number deficit versus aperture. An absolute calibration for CTair can be obtained by subtracting the slope from the mean matrix CT number, which can be found by inspecting the image. This method is often more reliable than inspecting the image to find CTair directly, for reasons discussed in the next section.
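The slope-based calibration can be demonstrated numerically; with synthetic noise-free scans (all numbers illustrative), the slope of summed CT deficit versus aperture recovers the air end member exactly when subtracted from the matrix value:

```python
import numpy as np

CT_MAT, CT_AIR_TRUE = 3000.0, 800.0   # illustrative end members

rc, r = 4, 4.0                        # assumed Gaussian PSF, half-width 4 voxels
x = np.arange(2 * rc + 1)
psf = np.exp(-2.0 * (x - rc) ** 2 / r ** 2)
psf /= psf.sum()

def summed_deficit(aperture_vox, n=41):
    """Summed CT-number deficit of a synthetic blurred fracture."""
    G = np.full(n, CT_MAT)
    lo = n // 2 - aperture_vox // 2
    G[lo:lo + aperture_vox] = CT_AIR_TRUE
    F = np.convolve(G, psf, mode="valid")
    return np.sum(CT_MAT - F)

# The calibration factor is the slope of summed deficit versus aperture
# (here in voxel units); subtracting it from the matrix CT number
# recovers the air end member without reading CT_air off the image.
apertures = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
deficits = np.array([summed_deficit(int(a)) for a in apertures])
slope = np.polyfit(apertures, deficits, 1)[0]
ct_air_calibrated = CT_MAT - slope
```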

Figure 4C shows that aperture measurements by each method were equivalent and consistent along the entire trace of the fracture, despite the marked change in limestone CT number from center to edge in the air-wedge image (Fig. 4D). This is because the difference in CT number between air and matrix remained largely unchanged despite beam hardening, as captured by the relative MA calibration. However, the relative method can only be used if the matrix is homogeneous. In addition, aperture determinations from the beam-hardened scans are somewhat noisier: standard deviations on <0.2 mm apertures averaged 42% higher.

4.1.2. Homogeneous Matrix, Vertical Aperture, Different Scanners

Microfocal scanners can image at very high resolution, but are confined to relatively small (generally <5 cm diameter) specimens. To examine samples of sufficient size to be useful for flow testing experiments, higher energy but lower resolution scanners must be employed. A series of scans at varying apertures were acquired on the UTCT high-energy subsystem, and an absolute MA calibration was calculated. For these scans an air wedge was used, but the X-ray beam was prefiltered with a 1.6-mm-thick brass plate to eliminate the low-energy part of the spectrum, reducing beam-hardening and ring artifacts.

To better simulate the effect of a fracture only subtending part of a larger specimen, scan series were obtained with the limestone phantom surrounded by either a PVC (polyvinyl chloride) sleeve or a thick sandstone annulus (Fig. 5). These images illustrate some pitfalls in calibrating CT numbers. When the limestone is surrounded by sandstone, its characteristic CT number falls relative to its value within PVC, even though there are no obvious beam-hardening artifacts except for a slight brightening at the annulus rim. The cause is the increased beam hardening due to the sandstone, which preferentially strips out the low-energy X-rays, leaving higher energy X-rays that are not as strongly attenuated by the limestone, in turn lowering its CT number. Conversely, and more worrying with respect to calibration, air brightens when it is surrounded by attenuating material. Multiple mechanisms underlie this effect, including beam hardening and nonuniqueness in CT reconstruction resulting in dispersion of some of the attenuation signal into enclosed void space. Fortunately, for a given sample lithology and geometry the effect is generally spatially consistent from center to rim, as evidenced in this study by the radial consistency of air calibration values and aperture determinations. This may not be true in all cases, particularly if a polynomial transformation method is utilized to reduce apparent beam hardening during reconstruction; such processing can homogenize the CT numbers for the rock matrix while, less apparently, inducing a radial gradient in air values. In practice, this means that if one calibrates CTair by direct inspection of the imagery, one should use fully enclosed void spaces at various radial positions.

The results of the high-energy analyses are compared to those for the limestone-wedge microfocal calibration in Table 2 and Figure 6, which shows the residuals of predicted versus actual aperture width. Both residual trends show no structure, indicating a robust solution. The high-energy results are less noisy in terms of pixel widths, but uncertainties in millimeters are higher due to the large voxel size. Apertures are reliably measured down to one-tenth of a voxel width; the smallest aperture, 0.0152 mm, is ∼0.06 of a voxel width, and although the mean of these measurements is close to the correct answer, the standard deviation exceeds one-tenth of a voxel width, indicating a low reliability for individual measurements.

The better voxel-width precision of the high-energy results signifies a difference in the innate resolution of the two systems: although the microfocal scanner features smaller voxels, evidently a high-energy-system voxel carries more information, which in turn compensates somewhat for its greater size. This difference is in part due to reduced noise, as the high-energy X-ray source has 50–100× higher intensity than the microfocal one. Part of the difference also stems from choices in selecting scanning parameters; the uncertainties for the microfocal determinations could be reduced by increasing acquisition time, which could both reduce noise and increase angular sampling. The remainder can be traced to differences in the detectors: the high-energy detectors are large, but very efficient and highly collimated, eliminating all crosstalk, whereas the area detector used by the microfocal system results in a more dispersed signal. In summary, the high-energy CT system has a smaller PSF when stated in pixels, but larger in millimeters.

4.1.3. Homogeneous Matrix, Dipping Aperture

Previous implementations of fracture measurement worked only with single images, and thus could provide only an apparent aperture in the image plane. If the dip θ of the fracture with respect to the scan plane is known, then the true width can be found by multiplying by sin(θ). As shown in Figure 7A and Table 3, the resulting measurements are accurate when the aperture is small, but an error arises at apertures above 0.2 mm on the microfocal subsystem. This error is unrelated to the sine correction; it stems from scanning conditions and the increasing severity of an exponential edge-gradient effect (EEGE) artifact.
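As a minimal worked example of the sine correction (the apparent aperture and dip values here are hypothetical):

```python
import math

# Hypothetical apparent aperture measured in the scan plane (mm) for a
# fracture dipping at 60 degrees with respect to the scan plane.
apparent_mm = 0.30
dip_deg = 60.0

# True width = apparent width multiplied by sin(dip).
true_mm = apparent_mm * math.sin(math.radians(dip_deg))
print(round(true_mm, 3))  # 0.26
```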

The EEGE artifact (Fig. 8) occurs whenever there is a long straight edge with high X-ray attenuation contrast (Joseph and Spital, 1981). It arises because CT reconstruction assumes that the signal reaching a detector reflects the average of the attenuation coefficients along the beam path. This is usually an adequate approximation, but fails if the paths are very different, as the signal corresponds to the negative exponential of the attenuation (i.e., Beer's Law), not the attenuation itself. The net artifact is always negative, but can also lead to an alternating dark-light pattern (De Man et al., 1999).
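The failure of the averaging assumption can be illustrated numerically. The sketch below, using arbitrary attenuation line integrals, compares the value that reconstruction assumes (the mean of the path integrals) with the apparent attenuation implied by the mean transmitted intensity under Beer's law:

```python
import math

# Two parallel sub-beams within one detector's view: one crosses rock
# (high attenuation line integral), one passes through the open fracture.
# Values are illustrative, dimensionless line integrals.
mu_rock, mu_air = 3.0, 0.0

# What CT reconstruction assumes the detector reflects: the mean attenuation.
assumed = (mu_rock + mu_air) / 2

# What the detector actually records: the mean transmitted intensity
# (Beer's law), converted back to an apparent attenuation.
actual = -math.log((math.exp(-mu_rock) + math.exp(-mu_air)) / 2)

# The apparent attenuation is always lower, so the net artifact is negative.
print(assumed, actual)  # actual < assumed
```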

As shown in Figure 8, the EEGE varies depending on where along the edge one examines it. In this case, it creates a relatively high attenuation rim artifact in the matrix, and an X-shaped low-attenuation artifact in the fracture in which CT numbers fall far below the value for air. The effect of these anomalies on aperture measurement can be reduced if the high-attenuation rim is included in the traverse and counted as a negative contributor to the width, to partially offset the negative anomaly; as shown in Figure 7B, this reduces the error in the MA calculation by more than half for the cases studied here. These cases are more severe than will typically be encountered when scanning natural fractures, which are not straight, and thus this example serves primarily as a cautionary end member. In addition, it illustrates a pitfall to avoid when attempting to measure the PSF diameter, as discussed in the next section.

4.2. IPSF Method, Homogeneous Matrix

In addition to a CT number calibration for air, which the MA method of the previous section also requires, the IPSF method requires a calibration for the PSF parameter r. Although this section examines only the homogeneous materials discussed in section 4.1, the IPSF method as implemented here does not presume homogeneity, and CTmat is allowed to vary.

4.2.1. Calibrating the PSF and CTair

The 1D cross section of the 2D PSF can be measured by scanning a feature of known configuration and iteratively finding the r parameter value that, when convolved with an idealized version of the feature, provides the best match with the data. The feature can be a calibrated aperture (e.g., Fig. 1) or a simple vertical air-solid boundary, provided it is not affected by scan artifacts (e.g., Fig. 8). To avoid EEGE artifacts, it is often a good idea to use a slightly curved interface. Because of image noise, several determinations typically must be averaged to obtain a reliable r value and confidence interval. The PSF will vary with scanning conditions, including geometric variables such as source focal spot size, source-object distance, source-detector distance, number and size of detectors, center-of-rotation offset, and number of views, as well as practical machine limitations such as focal spot migration and blurring, misalignments, detector latency and crosstalk, and noise. As a result, it is important to acquire the calibration images using the same imaging parameters that will be used on unknowns, ideally at the same time.
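The iterative fit can be sketched as follows, treating r as the sigma of a Gaussian PSF for illustration (the actual parameterization may differ) and using a synthetic, noise-free air-solid edge with hypothetical CT numbers in place of measured data:

```python
import math

def blurred_edge(x, ct_air, ct_mat, r):
    """CT profile across an ideal air/rock edge at x=0, convolved with a
    Gaussian PSF whose parameter r is used here as the sigma, in pixels."""
    f = 0.5 * (1.0 + math.erf(x / (math.sqrt(2.0) * r)))
    return ct_air + (ct_mat - ct_air) * f

# Synthetic "measured" traverse across an edge, generated with a known r.
r_true, ct_air, ct_mat = 2.5, -1000.0, 1800.0
xs = [i - 20 for i in range(41)]  # traverse positions in pixels
data = [blurred_edge(x, ct_air, ct_mat, r_true) for x in xs]

# Calibration: find the r whose modeled profile best matches the data.
def misfit(r):
    return sum((blurred_edge(x, ct_air, ct_mat, r) - d) ** 2
               for x, d in zip(xs, data))

r_best = min((0.1 * k for k in range(5, 81)), key=misfit)
print(r_best)  # recovers ~2.5
```

With real data, repeating this fit on several edges and averaging yields the r value and confidence interval described above.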

The PSF parameter measurements for the various CT systems and configurations used in this study are listed in Table 4, along with the sections in which their respective data are used. Values measured using edges or apertures of various widths were found to be reproducible to within ∼10% or better, which is adequate for this method to yield the benefits documented in the following. In fact, the IPSF method implementation includes the option of varying the PSF parameter within confined bounds as part of the fitting procedure, and fits are typically improved somewhat by setting this interval to about one standard deviation.

The CTair value can be found using the same calibration procedure employed for the MA method, by scanning a set of known apertures and finding the slope of the CT number deficit versus aperture, and subtracting this value from the matrix CT number. It can be estimated more roughly but more conveniently by inspecting enclosed voids large enough to have regions not affected by blurring from nearby interfaces, taking care to avoid areas contaminated by scanning artifacts.
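The slope-based calibration can be sketched as follows, under the assumption that blurring spreads the CT-number deficit out but conserves its total, so the integrated deficit grows linearly with aperture; all values are illustrative:

```python
# Hypothetical calibration: the total CT-number deficit summed across a
# traverse grows linearly with aperture. The slope gives (CTmat - CTair)
# per pixel of aperture, so CTair = CTmat - slope.
ct_mat, ct_air_true = 1800.0, -1000.0

# Known aperture widths (in pixels) and their integrated deficits.
apertures = [0.1, 0.25, 0.5, 1.0, 2.0]
deficits = [a * (ct_mat - ct_air_true) for a in apertures]

# Least-squares slope through the origin.
slope = (sum(a * d for a, d in zip(apertures, deficits))
         / sum(a * a for a in apertures))
ct_air_est = ct_mat - slope
print(ct_air_est)  # recovers -1000
```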

4.2.2. Vertical Apertures, Different Scanners

Processing the data analyzed in section 4.1.2 using the IPSF method (Table 2) demonstrates that it achieves accuracy comparable to that of the MA method with significantly greater precision. On average, for the microfocal (200 kV) data the standard deviation of aperture determinations was reduced by 39% using the IPSF method. On the high-energy system the average reduction in standard deviation was 20%. Residuals for the high-energy IPSF determinations were slightly positive, averaging 10 μm across all length scales, which can be accounted for by C in equation 2.

4.2.3. Dipping Apertures, Different Scanners

The simplified deconvolution used for the IPSF method presumes that the interface is normal to the traverse direction. If the traverse intersects the fracture at a significantly different angle (for example, if the fracture is not vertical with respect to the scan plane), it will induce an additional component of blurring to the interface beyond that characterized by the PSF. There are three potential ways to account for this eventuality: adjust the traverse direction to be normal to the fracture; include the interface angle in the deconvolution; or broaden the PSF to account for the additional blurring. All three options require determination of the fracture angle. The first option requires a 3D traverse through the data volume, and also needs to account for how the PSF changes normal to the scan plane; however, it may ultimately be the most appropriate method for tracing complex fracture networks. The second and third options are broadly similar, and the third was chosen as simplest to implement for this study. Convolving a dipping interface with a Gaussian PSF shows that, for a fracture dip angle θ from 90° to 20°, an effective PSF parameter (re) can be estimated as: 
where AR is the voxel aspect ratio (slice spacing/pixel spacing). The smaller the PSF, the more significant this effect is: a 0.2 pixel increase in the PSF r parameter occurs at a dip of 67° if the r parameter is 4 pixels, and at a dip of 50° for an r of 8. However, it is apparent that fractures can be considerably nonorthogonal without requiring a significant adjustment to r.

Reprocessing the data from section 4.1.3 (Fig. 7C; Table 3) with the IPSF method reveals that, at apertures not affected by the EEGE artifact, the deconvolution method again achieves comparable accuracy and significantly better precision than the MA method. When the EEGE artifact becomes significant in the microfocal CT data, the IPSF method actually provides improved accuracy as well. This is not because the IPSF method accounts for the darkening or brightening caused by the EEGE artifact; rather, when apertures are large it is by its nature generally most sensitive to the position of the CT number transition between rock and air, not to absolute gray values. The apparent decrease in precision is caused by a systematic effect, as the transition appears to move slightly due to the EEGE artifact.

Data acquired on the UTCT high-energy system are less prone to the EEGE artifact. Table 5 shows MA and IPSF analyses of fractures dipping at 66.5° and 22.5°. For both methods, correcting the determination by the sine of the dip angle results in fairly accurate true aperture measurements, although residuals tend to be slightly positive. In both cases, the IPSF method delivers significantly better precision, with an average standard deviation reduction of ∼48%.

4.3. IPSF Method, Heterogeneous Matrix

Although the ability of the IPSF method to make more accurate measurements in homogeneous materials is beneficial, the main impetus behind the creation of the method was to properly characterize heterogeneous geological materials. The robustness of the IPSF method in cases of extreme inhomogeneity was tested using various granites with different compositions and modal mineralogies imaged with the microfocal system, and a welded tuff imaged using the high-energy system.

4.3.1. Granites, Microfocal Scanner

Figure 9 shows CT images of a pair of granites, along with aperture determinations using the MA and IPSF methods. Apertures were set using 0.040 mm shims, and scans were made at 100 kV to achieve a high degree of contrast among phases. Figure 9A shows the results on a graphic granite, in which the two primary phases crossed by the aperture are quartz and orthoclase. With the IPSF method there is no sign of an effect on measurements when phase boundaries are crossed, demonstrating that the IPSF method successfully accounts for these modest attenuation contrasts. The mean aperture determination is 0.037 mm and the standard deviation is 0.006 mm, only marginally above the 0.005 mm obtained for homogeneous material. When the MA method is used with a relative calibration assuming a homogeneous matrix (equation 4) based on the difference between rock and air, results are much less consistent, as this difference changes from one mineral to the next. Results are improved if an absolute calibration is used (equation 3), in which only the CT number of air is assumed constant and the value for solid material can be determined locally from the end points of the traverse. However, determinations are significantly noisier (standard deviation 0.10) than for the IPSF method, and there is still some evidence of systematic variation with mineral properties reflected in the correlations of highs and lows with the relative calibration determinations, although the overall mean (0.035 mm) is close to the correct answer.
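One plausible reading of the absolute-calibration variant described above can be sketched as follows: the missing attenuation is integrated along the traverse, CTmat is taken locally from the traverse end points, and only CTair is held fixed. All CT numbers and the pixel size below are hypothetical, and the exact form of equation 3 may differ:

```python
# Hedged sketch of a missing-attenuation (MA) style aperture estimate with
# a local ("absolute") calibration: CTmat comes from the traverse end
# points, while only CTair is assumed constant. Values are illustrative.
ct_air = -1000.0
traverse = [1800.0, 1795.0, 1400.0, 600.0, 1390.0, 1802.0, 1798.0]
pixel_mm = 0.05

ct_mat = (traverse[0] + traverse[-1]) / 2   # local matrix value
deficit = sum(ct_mat - v for v in traverse)  # total missing attenuation
aperture_mm = deficit / (ct_mat - ct_air) * pixel_mm
print(round(aperture_mm, 4))
```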

A more challenging case is shown in Figure 9B, consisting of a coarse-grained granite with generally euhedral 1–6 mm crystals of quartz, plagioclase, potassium feldspar, biotite, amphibole, and magnetite, in order of increasing X-ray attenuation. When using the IPSF method, aperture measurements in regions of high contrast are considerably noisier, but still fall within reasonably tight bounds. However, neither MA method is able to cope with this degree of variation; the traditional relative calibration returns absurd values because the fracture can attain CT numbers higher than solid quartz due to blurring from higher attenuation phases. Using an absolute calibration and taking local values for CTmat improves matters, but the measurements remain unreliable.

The particular challenges of measuring apertures in very heterogeneous natural materials are illustrated in Figure 10, showing four traverses over the highlighted part of the scan image in Figure 9, and the corresponding IPSF measurements. Traverse A shows a typical situation for a relatively homogeneous region consisting of quartz and/or plagioclase. Traverse B passes through denser amphibole, for which the higher attenuation in combination with blurring made CT numbers within the aperture even higher than those for solid quartzofeldspathic material in traverse A. In traverse C magnetite is separated from quartzofeldspathic groundmass, and there is barely any visual evidence of an attenuation deficit. However, the IPSF procedure was still able to measure the aperture well, owing to the information it was able to bring to bear: the length scale of the PSF, the CT number for air, and the tracing of the fracture up to this point, which essentially told the algorithm where to look. In traverse D the aperture mostly crosses magnetite, though the CT image shows signs of small-scale variability that this scan cannot resolve in detail.

All apertures in Figure 10 should be the same thickness (0.040 mm) and occur at the same traverse position, so the variation in both is a sign of the limitations of the IPSF method. The principal cause of divergent results stems from the simplified assumptions employed by the model: that the fracture is bounded by two materials that may be different but are themselves homogeneous, at least within a PSF half-width of the aperture. Smaller scale variation violates this assumption, although the constraint imposed by the PSF being determined prior to the measurement serves to limit the resulting negative effects. The case studied here of an artificial cut is actually worse than would typically be encountered with natural fractures, which preferentially form along grain boundaries.

4.3.2. Welded Tuff, High-Energy Scanner

Higher X-ray energies are necessary for imaging specimens large enough to conduct reasonable flow-testing experiments. A beneficial side effect of these conditions is that they tend to diminish X-ray attenuation contrasts among materials. However, significant variation can still be present. The welded tuff in Figure 11 is such a case, in which the compact quartzofeldspathic groundmass includes a heterogeneous assemblage of more attenuating and less attenuating rock fragments, including extremely porous pumice. The sample features a well-preserved natural fracture with little weathering, and was collected to serve as a basis for both physical (Slottke et al., 2008) and numerical (Cardenas et al., 2007, 2009) flow test experiments.

The IPSF algorithm was able to construct a detailed map of the fracture aperture (Fig. 11B) and surface (Fig. 11C) that successfully accounted for and overcame the complexities of the sample. The associated animation (Animation 1) shows a simplified version of the algorithm at work as it interprets an example traverse. The surface and distribution of apertures reveal a spatially variable distribution of open spaces, pathways, and choke points that will certainly have a first-order effect on fluid flow. Open spaces were probably caused by selective disaggregation and loss of pumice.

5. Discussion


The primary purpose of this paper is to set out a framework for quantifying features within CT images that are small with respect to the blurring that is innate to all CT data. In the case of fractures, by quantifying and taking this blurring explicitly into account using a point-spread function, measurements of aperture and position in homogeneous materials can be made more reliably and more reproducibly than with previous methods. The IPSF method introduced here does an excellent job of overcoming effects from limited heterogeneity, and even produces reasonable results when heterogeneity is much more extreme than typically encountered in most natural materials. The success of the method stems from the extent, albeit still limited, to which it considers CT data in a realistic way, as an outcome from measuring a physical system with a particular device rather than an abstract set of grayscales and gradients.

A corollary to this success is that it demonstrates the endemic limitations of any image-processing method for CT data that ignores blurring. Figure 2 demonstrates how using a global threshold is incapable of producing a correct answer simultaneously for features across a range of scales, even in homogeneous materials. Figure 10 provides an even starker example in the heterogeneous case: the CT numbers in the center of the fractures in traverses B and D are higher than the CT numbers for the solid quartz + feldspar groundmass, illustrating that there is no single threshold value that can capture, much less characterize, the fracture in this image. The key point is that the fracture CT numbers can only be interpreted correctly when considered in the context of the other materials in their immediate neighborhood, the scale of which can be ascertained by measuring the size of the PSF. Any image-processing method that neglects or loses this local information is crippled from the start in its ability to accurately characterize small or thin features.
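The impossibility of a single threshold can be demonstrated with a 1D toy model: a thin air-filled fracture blurred by a Gaussian PSF (an assumption of this sketch, with illustrative CT numbers) takes on a higher value inside a dense mineral than solid quartz does, so no global cutoff isolates it in both regions:

```python
import math

# 1D toy profile: a 1-pixel air-filled fracture cut first through
# quartz-like matrix, then through a much denser mineral.
quartz, dense, air = 1500.0, 3000.0, -1000.0
profile = ([quartz] * 20 + [air] + [quartz] * 9
           + [dense] * 20 + [air] + [dense] * 9)

# Blur with a Gaussian PSF (sigma = 2 px), as CT imaging would.
sigma = 2.0
kernel = [math.exp(-0.5 * (k / sigma) ** 2) for k in range(-8, 9)]
ksum = sum(kernel)
kernel = [w / ksum for w in kernel]
blurred = [sum(w * profile[min(max(i + k - 8, 0), len(profile) - 1)]
               for k, w in enumerate(kernel)) for i in range(len(profile))]

# The blurred fracture center in the dense region (index 50) is brighter
# than solid quartz, so no global threshold captures both fracture segments.
print(blurred[50] > quartz)  # True
print(blurred[20] < quartz)  # True: same fracture, opposite side of cutoff
```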

A particular case in point is segmentation methods that make direct use of image histograms, which interpret the data as a mixture of two or more normal or binomial curves whose degrees of dispersion are considered to reflect image noise or small-scale heterogeneity. Thin fractures and small pores can confound this model because they have no characteristic CT number described by an a priori curve. Although both void and solid may have underlying normal CT-number distributions, the void curve may not be directly manifested on the histogram because most or all void space is too close to a material interface to escape boundary effects. In such cases, the distribution of CT numbers is controlled by the aperture and surface area of cracks and pores. Image-processing algorithms that presume that this distribution reflects a distinct and separable component begin from an incorrect assumption when applied to such cases. For example, the high sensitivity of the popular indicator kriging segmentation method to initial user-entered settings when applied to high-surface-area cases (Oh and Lindquist, 1999) can be traced to this circumstance. Although both kriging and region-growing algorithms, which utilize additional information such as autocorrelation or slope to aid segmentation, can have better success in tracing fractures than thresholding alone, in so doing they still may lose the metrical information embedded in the CT data that are necessary to achieve the most accurate measurements.

It also bears repeating that this and previous work on CT measurement of fractures demonstrates that apertures can be accurately determined down to one-tenth of a voxel width. Insofar as most segmentation schemes consist of classifying each voxel as either solid or void, any subvoxel-width fracture will either be overestimated or omitted, and slightly larger apertures will suffer a pixelation effect in which measured widths rise by integer increments. Whether such discrepancies matter depends on the problem being considered. Because flow tends to be dominated by the largest apertures, misclassification of small apertures may not have much effect. However, if there are choke points, or if connectivity is an important issue, the consequences may be more severe.
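The contrast between voxel classification and deficit integration can be sketched as follows, again treating the PSF as a Gaussian of sigma r (an assumption of this illustration) and working with a normalized attenuation deficit:

```python
import math

def blurred_fracture(x, width, r):
    """Normalized attenuation deficit at position x for an air gap of the
    given width centered at 0, blurred by a Gaussian PSF of sigma r."""
    a, b = -width / 2.0, width / 2.0
    s = math.sqrt(2.0) * r
    return 0.5 * (math.erf((x - a) / s) - math.erf((x - b) / s))

r = 2.0
for width in [0.3, 1.4, 3.0]:  # true apertures in voxel widths
    profile = [blurred_fracture(i - 15, width, r) for i in range(31)]
    # Voxel-classification width: count voxels that are more than half void.
    binary_w = sum(1 for v in profile if v > 0.5)
    # Deficit-integration width: total missing attenuation, in voxels.
    integral_w = sum(profile)
    print(width, binary_w, round(integral_w, 2))
```

Classification omits the subvoxel apertures entirely and pixelates the larger one, while integrating the deficit recovers the fractional widths.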

These same principles apply to the similar problem of measuring small or thin features that are denser than the surrounding material, such as mineralized veins or small high-density mineral grains (Ketcham, 2006a; Kyle et al., 2008), or thin bone membranes surrounded by tissue or air. The concepts of the point-spread function and deconvolution, and the equations describing them, are the same whether one is measuring a feature with greater or lesser attenuation than the surrounding material, and any accurate measurement schema must take them into account. With some materials, such as gold or lead, the possibility of additional CT artifacts due to their extremely high X-ray attenuation may require additional caution.

The IPSF method as outlined here is appropriate only for features with a single small dimension that can be measured using a linear traverse (i.e., normal to the fracture wall). Features that are small in two or three dimensions, such as individual pores, require a more fully 2D or 3D strategy that accounts for blurring in all relevant directions. However, the underlying concept of using the PSF to estimate how far the attenuation anomaly attributable to a particular feature extends will remain the same.

Although the IPSF method is very general, there are some cases for which it may not be necessary or advisable. One limiting case could be where the CT data are so sharp that blurring is negligible; this could be verified by measuring and reporting the PSF scale. Even in such a case, using the IPSF or a similar method would be necessary to obtain subvoxel-precision measurements, and to measure subvoxel-scale apertures. Another circumstance that may rule out use of the IPSF method would be where image artifacts or properties violate the assumption of attenuation conservation, for example, as discussed with regard to EEGE artifacts and phase contrast imaging.

Naturally, the necessity for accounting for blurring can also be superseded by imaging at a high-enough resolution so that all features of interest are large compared to the PSF. For many applications and for many measurements, this condition is met. However, as CT imagery reaches higher resolution, additional smaller-scale features of the material being imaged can become apparent at the edge of detectability. Furthermore, there will always be a need for measuring fine features across large objects that must be scanned at relatively coarse resolution. Inevitably, in such cases, we will want to measure what we see.

Acknowledgments

This work was supported by National Science Foundation grants EAR-0113480 and EAR-0439806. Greg Thompson's assistance in preparing the rock samples was greatly appreciated. We thank D. Sahagian and an anonymous reviewer for constructive comments, as well as G. Gualda for helpful comments and editorial work.
