Porosity plays an important role in geology. It controls fluid storage in aquifers, oil and gas fields, and geothermal systems, and the extent and connectivity of the pore structure control fluid flow and transport through geological formations, as well as the relationship between the properties of individual minerals and the bulk properties of the rock. In order to quantify the relationships between porosity, storage, transport and rock properties, however, the pore structure must be measured and quantitatively described. The overall importance of porosity, at least with respect to the use of rocks as building stone, was recognized by T.S. Hunt in his “Chemical and Geological Essays” (1875, reviewed by J.D. Dana 1875), who noted:

“Other things being equal, it may properly be said that the value of a stone for building purposes is inversely as its porosity or absorbing power.”

In a Geological Survey report prepared for the U.S. Atomic Energy Commission, Manger (1963) summarized porosity and bulk density measurements for sedimentary rocks. He tabulated more than 900 items of porosity and bulk density data for sedimentary rocks with up to 2,109 porosity determinations per item. Amongst these he summarized several early studies, including those of Schwarz (1870–1871), Cook (1878), Wheeler (1896), Buckley (1898), Gary (1898), Moore (1904), Fuller (1906), Sorby (1908), Hirschwald (1912), Grubenmann et al. (1915), and Kessler (1919), many of which were concerned with rocks and clays of commercial utility. There have, of course, been many more such determinations since that time.

There are a large number of methods for quantifying porosity, and an increasingly complex idea of what it means to do so. Manger (1963) listed the techniques by which the porosity determinations he summarized were made. He separated these into seven methods for obtaining total porosity, sixteen for determining apparent (connected) porosity, and five where the methods used were, for one reason or another, uncertain. Most of the total porosity measurements are variations on bulk volume/grain volume or bulk density/grain density approaches, and the apparent porosity measurements were made by variations of absorption methods for different fluids or gases. These techniques were augmented just before and soon after Manger’s report by point counting approaches (Chayes 1956; Manger 1963; Gazzi 1966; Dickinson 1970; Folk 1974). For downhole petrophysical analysis Archie’s Law (Archie 1942, 1947, 1950, 1952; cf. Rider 1999; Ellis and Singer 2008; Peters 2012; Tiab and Donaldson 2012) provided a relationship between electrical conductivity/resistivity, porosity and brine saturation, and porosity information is also provided by density, sonic and neutron logs (Peters 2012; Tiab and Donaldson 2012).

Porosity itself is a rather easy parameter to define—the fraction of void volume over total volume—but certainly not so easy to quantify. The reason is that “void” space in earth materials can span over 8 orders of magnitude in length scale, from nanometers to tens or even hundreds of centimeters or larger. As we describe below, there is really no one method that can adequately cover this enormous range in scale. Whether rocks are formed by crystallization of a melt, or deposition of sediments in a stream, lake or ocean, they initially contain some inherent primary porosity. This porosity can then be modified by a variety of processes such as deformation (including fracture), metamorphism, hydrothermal alteration, diagenesis and weathering, producing secondary or fracture porosity. In a hydrologic sense we have two types of porosity—effective or open porosity that permits flow of fluids or volatiles, and ineffective or closed porosity that does not. As our ability to quantify the nature of porosity in complex heterogeneous matrices has improved through the application of sophisticated techniques such as electron microscopy and X-ray and neutron scattering, so too has our need for a finer breakdown in the definition of pore types. Leaders in the field such as Loucks (Loucks et al. 2012) have, by necessity, expanded our understanding of porosity as we examine in detail very fine-grained, tight formations such as shale for hydrocarbon potential. In this context we now have a hierarchy of pore types that ranges from interparticle and intraparticle mineral-matrix pores, to organic-matter pores, to fracture pores. The primary goal in pore assessment is to quantify these pores, not just in terms of shape and size, but also in terms of how they contribute to the overall fabric of the rock, its ability to transmit fluids, and the bulk physical properties of the rock itself.

The goal of this paper is to summarize, in a manner the reader can use for future experimental work, many of the techniques for analyzing the porosity of rocks or other porous materials. The paper is divided into four sections. The first summarizes petrophysical approaches. While some of these have been known since the early work just mentioned, such as direct measurements of volume and density, pore saturation, gas expansion and mercury intrusion, others, like focused ion beam milling, QEMSCAN imaging, and nuclear magnetic resonance approaches, are relatively new and may be less familiar.

The next section of this paper discusses scattering methods for porosity analysis. While the first scattering studies of rock materials were published in the 1980s (e.g., Hall et al. 1983), it is only recently (cf. Anovitz et al. 2009) that these have been used to study porosity changes in larger geologic contexts. This section discusses the theoretical basis of scattering experiments, sample preparation, the nature of scattering instruments, contrast matching experiments, data reduction and analysis. Tomographic approaches are covered elsewhere in this volume (Noiriel 2015, this volume).

The third section of the paper discusses image analysis approaches other than the optical petrophysical techniques discussed in the first section. These include methods and limitations of obtaining binary pore/rock images, calculations to use imaging to extend the size range of scattering data using one- and two-point correlations, and the potential utility of three-point correlation analysis. Fractal approaches to image quantification, including mono- and multifractal analysis, lacunarity and succolarity, as well as other types of correlations are also discussed.

Finally, it should be kept in mind that different techniques are based on different inherent assumptions. This is true both in terms of how they function and in terms of what they are trying to, or are capable of, measuring. Thus, different approaches to pore analysis may, in fact, be likely to yield differing results. This may be advantageous, as the combination of several techniques, each with its own goals, may elucidate differing features of the pore structure, but few such comparisons, especially involving scattering studies, are available. In the concluding section of the paper, therefore, a few examples comparing the results of pore structure analysis using different techniques are considered.

Conventional petrophysics involves the characterization of reservoirs for their hydrocarbon potential through the quantitative assessment of rock properties. Of primary interest is how pores (and not uncommonly fractures) are interconnected, thus controlling migration and accumulation of hydrocarbon fluids and gases. The key properties of interest include lithology (e.g., mineralogy, grain fabric), water saturation, porosity, permeability and density. Three broad categories of measurements are typically made: (a) on core or crushed rock, (b) within the borehole via well logging tools and (c) seismic. In this chapter we will briefly address only those methods used in (a) and (b) that lead to quantification of porosity or pore features. In category (a) we can further subdivide the methods into direct measurements on solids and indirect measurements via imaging of a rock core or chip. A number of these overlap with respect to length scales, yet complement methods such as neutron scattering (Fig. 1).

Direct methods

Several methods can be employed to directly measure rock porosity: (1) saturation or imbibition, (2) buoyancy, (3) gas expansion (He porosimetry), (4) gas adsorption (BET) and (5) mercury intrusion porosimetry (e.g., Tiab and Donaldson 2004; Sondergeld et al. 2010; Clarkson et al. 2012c). All five of these methods only measure the effective (connected) porosity of the rock sample, similar in a general way to the contrast-match method used in neutron scattering to assess connected pores (see section on Contrast Matching below).

Saturation (or imbibition)

In the saturation (or imbibition) method, a clean, dry rock is weighed prior to full saturation with a wetting fluid (Wdry). Typically toluene or dichloromethane have been used in the past but more recently it has been common to saturate a rock with a synthetic brine that has a cation and anion composition similar to that observed in the reservoir of interest. The weight of the saturated sample (Wsat) is determined after excess brine is removed from the surface of the sample. The bulk volume of the sample (Vbulk) in the form of a cylinder or cube can be determined by geometric means using a caliper. One must also know the density of the saturating fluid (ρfluid) or determine it by weighing a known volume of the fluid.

The porosity, φ, is then given by:

$$\phi = \frac{W_{sat} - W_{dry}}{\rho_{fluid}\,V_{bulk}} \tag{1}$$

It is also possible to use these types of data to quantify the mean grain density (ρma) of the sample using this relationship.

$$\rho_{ma} = \frac{W_{dry}}{V_{bulk} - \dfrac{W_{sat} - W_{dry}}{\rho_{fluid}}} \tag{2}$$

One key limitation of this method is the difficulty the imbibing fluid has in displacing air from the smallest, nm-sized pores.
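
A minimal numerical sketch of Equations (1) and (2) follows; the weights, fluid density and bulk volume are hypothetical values chosen only to illustrate the arithmetic.

```python
# Saturation (imbibition) method: porosity and grain density from weights.
# All input values are hypothetical.

W_dry = 225.0       # dry weight of the sample, g
W_sat = 240.3       # brine-saturated weight, g
rho_fluid = 1.02    # density of the saturating brine, g/cm^3
V_bulk = 100.0      # bulk volume from caliper measurement, cm^3

V_pore = (W_sat - W_dry) / rho_fluid   # pore volume filled by the brine, cm^3
phi = V_pore / V_bulk                  # porosity, Equation (1)
rho_ma = W_dry / (V_bulk - V_pore)     # mean grain density, Equation (2), g/cm^3

print(f"porosity = {phi:.3f}, grain density = {rho_ma:.2f} g/cm^3")
```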

Buoyancy

The buoyancy method is somewhat similar to the saturation method in that one determines the dry weight (Wdry) and then saturates the rock sample with a wetting fluid of known density (ρfluid) as described above. The bulk volume of the rock (Vbulk) is determined as described above. In this approach, however, the saturated sample is suspended in a bath of the same fluid with which it was saturated to yield its suspended weight, Wsus. The weight of the cradle (Wcrad) used to suspend the sample is also required so that the actual weight of the sample and cradle suspended in the fluid is (Wsus + Wcrad), i.e., the cradle weight can be accounted for in the final calculation of porosity. The porosity is given by the relationship:

$$\phi = \frac{V_{bulk} - \dfrac{W_{dry} - W_{sus}}{\rho_{fluid}}}{V_{bulk}} \tag{3}$$

The mean grain density (ρma) can also be quantified as

$$\rho_{ma} = \frac{W_{dry}\,\rho_{fluid}}{W_{dry} - W_{sus}} \tag{4}$$
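
A comparable sketch for the buoyancy method (Equations 3 and 4), again with hypothetical values; the suspended weight is assumed to have the cradle weight already subtracted.

```python
# Buoyancy method: grain volume by Archimedes' principle, then porosity and grain density.
# All input values are hypothetical; W_sus has the cradle weight already removed.

W_dry = 225.0       # dry weight, g
W_sus = 140.0       # weight of the saturated sample suspended in the fluid, g
rho_fluid = 1.02    # fluid density, g/cm^3
V_bulk = 100.0      # bulk volume from caliper measurement, cm^3

V_grain = (W_dry - W_sus) / rho_fluid   # grain volume, cm^3
phi = (V_bulk - V_grain) / V_bulk       # porosity, Equation (3)
rho_ma = W_dry / V_grain                # mean grain density, Equation (4), g/cm^3

print(f"porosity = {phi:.3f}, grain density = {rho_ma:.2f} g/cm^3")
```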

Saturation and buoyancy methods that employ fluid saturation are not recommended for assessment of mudstones (shale) and very dense carbonates because of their low permeability (Newsham and Rushing 2001).

Gas expansion

Gas expansion methods employing Boyle’s law, most notably helium (He) porosimetry, are considered among the most accurate techniques for measuring effective porosity in low-permeability rocks as well as in lithologies such as sandstone. The helium gas stored in a reference cell is isothermally expanded into a sample cell. After expansion, the resultant equilibrium pressure is measured. Helium has advantages over other gases because: (1) its small molecules rapidly penetrate small pores, (2) it is inert and does not adsorb on rock surfaces as H2O or CO2 in air may do, (3) helium can be considered an ideal gas (i.e., z = 1.0) for the pressures and temperatures usually employed in the test, and (4) helium has a high diffusivity and therefore affords a useful means for determining the porosity of low permeability rocks.

The method consists of placing a dry core (or crushed rock) of known bulk volume (Vbulk, as determined by methods described above) in a container of known volume (Va). This volume is connected with another container of known volume (Vb) that is evacuated. He gas is introduced into Va and the pressure (P1) is set to an arbitrary value, typically around 100 psi. This He gas is then released into Vb and allowed to equilibrate throughout both chambers. The helium gas penetrates into the connected pores of the rock sample. During this process the pressure will decrease to a new stable level (P2). Using the ideal gas law, the volume of the pores can be calculated from

$$V_V = \frac{P_2\,(V_a + V_b - V_{bulk}) - P_1\,(V_a - V_{bulk})}{P_1 - P_2} \tag{5}$$

From this relationship one can determine the porosity, φ = VV/Vbulk. There are, however, a couple of caveats. First, the use of crushed versus core samples can lead to some error, especially if the sample has low matrix permeability, such as a tight sandstone or shale. Early work by Luffel and Guidry (1992) showed that for shale, porosities measured on crushed samples were generally 0.1–0.2% higher than those measured on core. More recently Karastathis (2007) noted that sample cleaning and mass conservation during the crushing and grain volume measurement were crucial to an accurate measurement. Second, this method provides slightly higher porosity values for any given rock because the He kinetic diameter is smaller than that of most reservoir gases, so a greater percentage of the connected porosity will be interrogated compared to typical reservoir gases (Bustin et al. 2008). These are commonly referred to as pycnometry methods, which have some similarity to the BET methods described in the early work of Brunauer, Emmett, and Teller (1938).
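
The Boyle’s-law bookkeeping behind Equation (5) can be sketched as follows, assuming the helium occupies the connected pore space at both pressure steps and behaves ideally; the chamber volumes and pressures are hypothetical.

```python
# He porosimetry sketch: solve P1*(V_a - V_bulk + V_V) = P2*(V_a + V_b - V_bulk + V_V)
# for the connected pore volume V_V. All values are hypothetical.

V_a = 120.0     # sample-chamber volume, cm^3
V_b = 100.0     # initially evacuated expansion-chamber volume, cm^3
V_bulk = 50.0   # bulk volume of the core, cm^3
P1 = 100.0      # initial pressure in V_a, psi
P2 = 43.7       # equilibrium pressure after expansion into V_b, psi

V_V = (P2 * (V_a + V_b - V_bulk) - P1 * (V_a - V_bulk)) / (P1 - P2)   # Equation (5)
phi = V_V / V_bulk
print(f"pore volume = {V_V:.1f} cm^3, porosity = {phi:.3f}")
```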

Gas adsorption

Low-pressure gas adsorption techniques can also be used to characterize pore surface area and volume in geologic materials, although to date these methods have been used far less for porosity assessment than those described above and the MICP approach described below. Subcritical N2-gas adsorption techniques are best suited for investigation of materials with fine pores in the range of about 2–300 nm, similar to those prevalent in mudrocks and coals. Ross and Bustin (2009) studied the pore structure of Devonian–Mississippian and Jurassic mudrocks from western Canadian sedimentary basins using both subcritical gas adsorption and MIP. Adesida et al. (2011), Chalmers et al. (2012) and Clarkson et al. (2012a,b, 2013) demonstrated the usefulness of gas adsorption techniques to evaluate the quantitative pore-structure parameters of mudrocks from different ‘gas shale’ plays in North America. There are numerous references that provide far greater detail about the various aspects of gas adsorption than we can cover in this chapter—such as Webb and Orr (1997) and Rouquerol et al. (1999)—but a brief description follows that is relevant to the assessment of surface area and porosity in rocks.

In general terms, this method involves bringing a gas or vapor into contact with a solid (as powder or rock chips) wherein part of the gas is taken up inside the material and the remainder interacts with the outside surface. These are physisorption (i.e., physical adsorption) techniques, although instruments may permit chemisorption (i.e., chemical adsorption) measurements as well. In physisorption, there is a weak van der Waals attraction between the adsorbate and the solid. In low-pressure adsorption experiments the temperature–pressure regime is below the critical point of the fluid used. These experiments yield valuable information about the textural properties of porous materials, such as surface area and pore structure. Since the gas is below its critical point, capillary condensation becomes important in these experiments, which provides pore size information. N2 (at 77 K) is the most commonly used gas for surface area and mesopore characterization; however, alternative gases can also be used, such as krypton (at 77 K), argon (at 87 K), and carbon dioxide (at 273 K).

Physisorption has a number of attributes that make it attractive for assessing the porosity and surface area of complex rock materials. It exhibits low heats of adsorption, so there are no violent or disruptive changes to the material. It can involve multiple layers of adsorbate, thus allowing for pore measurements because pores, at least small ones, can be filled. Low temperatures are used to enhance the adsorption process, and adsorption equilibrium is achieved quickly since little or no activation energy is generally required. Finally, physical adsorption is fully reversible, allowing the adsorbate to fully adsorb and desorb, revealing potential hysteresis behavior.

Subcritical N2-gas adsorption experiments are performed using various procedures. In most applications, the amount of gas adsorbed is determined using a discontinuous static volumetric method (see Webb and Orr 1997 for more detail). A degassed, evacuated sample is exposed to N2 at liquid-N2 temperature. The quantity of gas adsorbed on the solid surface is measured at discrete pressure (P) steps over the relative equilibrium pressure (P/P0) range of 0.0075 to 0.995 at constant temperature, where P0 is the condensation pressure at the temperature of the experiment. The experiment systematically increases pressure up to the condensation pressure (adsorption branch), followed by reduction of pressure from P0 (desorption branch), and the data are reported as the adsorption isotherm: the quantity of gas adsorbed per unit mass, expressed as moles or as volume in cm3 g−1 (STP), as a function of relative equilibrium pressure (P/P0).

The shape of the isotherm and its hysteresis pattern provide useful information about the physisorption mechanism, the solid and gas interactions, and can be used to qualitatively predict the types of pores present in the adsorbent. IUPAC (Sing et al. 1985) classified the adsorption isotherms into six types (Type I to VI), along with four hysteresis pattern types (H1 to H4). The different hysteresis patterns H1 to H4 are characteristic of different mesopore shapes. A detailed description of the IUPAC isotherm classification is presented in Sing et al. (1985) and Rouquerol et al. (1999). Three isotherm types especially important for low permeability formations such as gas shale (Fig. 2) are described here.

Type I: A concave-shaped curve is indicative of a material dominated by micropores with the exposed surface residing almost exclusively inside the micropores. Once these fill with adsorbate there is little or no external surface remaining for further adsorption, hence the plateau region in the curve.

Type II: Materials that are nonporous or only contain pores with diameters exceeding micropore dimensions will exhibit this curve shape. The inflection point occurs near the completion of monolayer coverage and the beginning of multilayer sorption. The adsorption and desorption branches follow the exact same path—i.e., there is no hysteresis.

Type IV: Dominantly meso- to macroporous materials (~2–100 nm) will have a characteristic hysteresis loop, which is associated with capillary condensation and evaporation in the mesopores. As with Type II materials, monolayer adsorption takes place at lower relative pressures, but it transitions to multilayer behavior, with a pseudo-plateau region indicative of near-complete filling of the mesopores.

The size and shape of the hysteresis loop can also be used qualitatively to infer details about the types of pores encountered by N2 during a sorption experiment. For example, the loop shown in Figure 2 is indicative of a narrow distribution of cylindrical or tubular pores. A broader plateau region of the loop at higher P/P0 represents the presence of a complex, interconnected pore network with narrow pore openings. A very narrow hysteresis loop that extends across part of the plateau region of the adsorption curve to somewhat lower P/P0 is typical of materials with slit-shaped pores. Figure 3 illustrates some of these trends through a comparison of N2-isotherms for montmorillonite and kaolinite (Kuila 2013).

A number of models are used to quantify surface area; the most common of these is that developed by Brunauer, Emmett, and Teller (the BET method; 1938), which is an extension of the Langmuir model of monolayer adsorption to multilayer adsorption. The fundamental assumption is that the forces active in the condensation of the gases are also responsible for the binding energy in multi-molecular adsorption (Webb and Orr 1997). By equating the rate of condensation of gas molecules onto an already adsorbed layer to the rate of evaporation from that layer and summing for an infinite number of layers, the following linear expression can be written:

$$\frac{P}{V_a (P_0 - P)} = \frac{1}{V_m C} + \frac{C-1}{V_m C}\,\frac{P}{P_0} \tag{6}$$

where Va is the volume of gas adsorbed, P/P0 is the relative pressure, Vm is the volume of adsorbate as a monolayer and C is the BET constant. A plot of P/(Va(P0 − P)) versus P/P0 can yield a straight line with intercept i = 1/(VmC), slope s = (C − 1)/(VmC) and monolayer volume Vm = 1/(s + i). The values of C and Vm can be obtained from linear regression of the data.

The total surface area (St) can then be derived from

$$S_t = \frac{V_m\,N_{Av}\,A_{cs}}{M} \tag{7}$$

where NAv is Avogadro’s number (6.022 × 10²³ mol−1), M is the molecular weight of the adsorbate and Acs is the adsorbate cross-sectional area (16.2 Å² for N2). The specific surface area, S, is then determined from St by dividing by the sample weight. One can use either the single-point BET method, typically taken at a P/P0 value of 0.3, or a multi-point BET fit (minimum of three points), with the realization that some error will be introduced by using the former approach, the magnitude of which grows as the value of C decreases.
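
As a sketch of how Equations (6) and (7) are applied in practice, the short script below fits the linearized BET form to a handful of hypothetical (P/P0, Va) pairs and converts the monolayer capacity, here expressed in cm3 (STP), to a specific surface area via the molar gas volume.

```python
# Multi-point BET fit (Equation 6) and surface area (Equation 7) for N2 at 77 K.
# The isotherm points are hypothetical; v_ads is in cm^3 (STP) per gram of sample.
import numpy as np

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])   # relative pressures P/P0
v_ads = np.array([2.10, 2.45, 2.72, 2.98, 3.24, 3.52])   # adsorbed volume, cm^3(STP)/g

y = p_rel / (v_ads * (1.0 - p_rel))       # linearized ordinate P/(Va(P0 - P))
slope, intercept = np.polyfit(p_rel, y, 1)

V_m = 1.0 / (slope + intercept)           # monolayer capacity, cm^3(STP)/g
C = slope / intercept + 1.0               # BET constant

N_Av = 6.022e23                           # Avogadro's number, 1/mol
A_cs = 16.2e-20                           # N2 cross-sectional area, m^2 per molecule
S = (V_m / 22414.0) * N_Av * A_cs         # specific surface area, m^2/g
print(f"V_m = {V_m:.2f} cm^3(STP)/g, C = {C:.0f}, S = {S:.1f} m^2/g")
```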

Total pore volume can be derived from the amount of vapor adsorbed (Vads) at a relative pressure close to unity, assuming the pores are then filled with liquid adsorbate of volume Vliq:

$$V_{liq} = \frac{P_a\,V_{ads}\,V_M}{R\,T} \tag{8}$$

where Pa is ambient pressure, VM is the molar volume of the liquid adsorbate, R is the gas constant and T is the temperature in K. The average pore radius (rp) can be estimated from the pore volume, assuming a cylindrical pore geometry, using the relationship

$$r_p = \frac{2\,V_{liq}}{S} \tag{9}$$
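
A brief sketch of Equations (8) and (9) follows, converting the uptake near saturation to an equivalent liquid pore volume and, for assumed cylindrical pores, an average pore radius; all numerical inputs are hypothetical.

```python
# Total pore volume (Equation 8) and average cylindrical pore radius (Equation 9).
# Values are hypothetical; SI units are used throughout and converted for printing.

P_a = 101325.0      # ambient pressure, Pa
R = 8.314           # gas constant, J/(mol K)
T = 273.15          # temperature at which V_ads is expressed, K
V_M = 34.7e-6       # molar volume of liquid N2, m^3/mol

V_ads = 12.0e-6     # gas volume adsorbed near P/P0 ~ 1, m^3 per gram of sample
S = 11.0            # BET specific surface area, m^2/g

V_liq = (P_a * V_ads / (R * T)) * V_M     # liquid-equivalent pore volume, m^3/g
r_p = 2.0 * V_liq / S                     # average pore radius for cylinders, m
print(f"pore volume = {V_liq*1e6:.3f} cm^3/g, average pore radius = {r_p*1e9:.1f} nm")
```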

There are a host of other models that can be used to assess surface area and pore volume, as detailed in Webb and Orr (1997). Of these, the Barrett–Joyner–Halenda (BJH) theory (1951) has become a popular choice for assessing gas shale materials (Clarkson et al. 2012b). Density Functional Theory (DFT) is a technique that uses modern statistical thermodynamics to assess materials dominated by micropores—i.e., those pores less than 2 nm that are typically encountered in gas shale lithologies (Do and Do 2003). A recent study by Adesida et al. (2011) used DFT to analyze the pore structure of kerogen in the Barnett shale, but made no comparison between these results and traditional BJH or MICP methods.

Mercury intrusion capillary pressure (MICP)

Mercury intrusion capillary pressure (MICP) measurements are the standard method for characterizing pore features, particularly pore throat size distributions in porous media from the micron scale (up to about 350 μm) to the nano-scale (below 1 μm to about 3 nm)—five orders of magnitude (e.g., Eigmati et al. 2011; Josh et al. 2012; Ortega and Aguilera 2014). This is equivalent to using the same tool to measure with accuracy and precision for length scales varying from the diameter of a grain of sand to the height of a 30-story building (Webb 2001). Not only is mercury porosimetry applicable over a wide range of pore sizes, but the fundamental data it produces is also indicative of various characteristics of the pore space and is used to reveal a variety of physical properties of the solid material itself. Applications can be divided into three broad categories: (1) information gained using volume and mass measurements only—material volume and density, interstitial void volume, percent porosity and percent porosity filled; (2) information obtained from Washburn’s Equation—pore volume distribution by pore size, pore area and number of pores; and (3) information obtained from special or multiple methods—pore cavity to pore throat size ratio, distribution of pore cavities associated with a pore throat size, material permeability and pore fractal dimensions (Webb and Orr 1997; Webb 2001). Discussing all of these in detail is beyond the scope of this chapter but we will cover the theory behind Washburn’s Equation, how it is used and a brief explanation about how the measurement is made.

Mercury is considered the best example of a non-wetting phase. It will not enter pores by capillary action, and it can only access interconnected pores. The volume of mercury that can enter the pore space is limited by the maximum pressure attained during analysis, which for many instruments is 60,000 psi. Entry pressure is inversely proportional to opening size. Liquid mercury has a high interfacial surface tension, γ—i.e., the molecular force (485 dyne cm−1) in its surface film tends to contract its volume into the form with the least possible surface area. Mercury also exhibits a high contact angle (θ) against most solids, ranging between 112° and 142°, with 130° being the most widely accepted value for use in an advancing-stage experiment (imbibition) (Fig. 4). Receding (drainage) angles are typically about 30° less in magnitude—i.e., 80 to 110°.

For a circularly shaped pore the surface attraction of mercury acts along the circle of contact for a length equal to the perimeter of the circle. The force with which mercury resists entering the pore is equal to −πDγ cosθ, where D is the pore diameter. The negative sign appears because for θ > 90° the term is intrinsically negative. An externally applied pressure produces a force that acts over the area of the circle of contact and is expressed as πD²P/4, where P is the applied pressure. At equilibrium, where the applied force is equal to the resisting force, we have

$$-\pi D \gamma \cos\theta = \frac{\pi D^2 P}{4} \tag{10}$$

Simplifying this equation yields:

$$D = \frac{-4\,\gamma \cos\theta}{P} \tag{11}$$

This is known as the Washburn Equation. Assuming a contact angle of 130° and a surface tension of 485 dyne cm−1, it takes a pressure of only 0.5 psi for mercury to enter pores approximately 360 μm in diameter. For smaller pores, such as those encountered in tight sands or shale, 60,000 psi of pressure can result in mercury accessing pores as small as 3 nm in diameter. These examples assume a cylindrical pore shape, which is, of course, an oversimplification. For a case where the rock is dominated by phyllosilicates, so that slit-like pore openings dominate, the Washburn expression can be modified to

$$W = \frac{-2\,\gamma \cos\theta}{P} \tag{12}$$

where W is the width between plates.
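
A small sketch of the pressure-to-size conversion implied by Equations (11) and (12), using the contact angle and surface tension quoted above; the applied pressures are arbitrary examples.

```python
# Washburn conversion from applied pressure to pore-throat diameter (cylinders)
# and slit width (plates), using gamma = 485 dyne/cm and theta = 130 degrees.
import numpy as np

gamma = 0.485                    # Hg surface tension, N/m (485 dyne/cm)
theta = np.radians(130.0)        # advancing Hg contact angle
PSI_TO_PA = 6894.76

P_psi = np.array([0.5, 100.0, 10000.0, 60000.0])   # example pressures, psi
P_pa = P_psi * PSI_TO_PA
D = -4.0 * gamma * np.cos(theta) / P_pa            # Equation (11), m
W = -2.0 * gamma * np.cos(theta) / P_pa            # Equation (12), m

for p, d, w in zip(P_psi, D, W):
    print(f"{p:>8.1f} psi  ->  D = {d*1e6:10.4f} um,  W = {w*1e6:10.4f} um")
```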

In a typical mercury intrusion porosimetry experiment a dry sample is placed into a container, which is then evacuated to remove contaminant gases and vapors (usually water). While the container is still evacuated, mercury is allowed to fill the container. This creates a system that consists of a solid, a non-wetting liquid (mercury), and mercury vapor. In the next step the pressure is increased toward ambient. This causes mercury to enter the larger openings in the sample, and the amount that does so is reflected in a volume change (Jennings 1987; Pittman 1992). At this point, at ambient pressure, pores with diameters larger than about 12 μm have been filled. The sample container is then placed in a pressure vessel and attached to a pressurization system that allows the pressure on the system to be increased up to approximately 60,000 psi (414 MPa), a typical maximum value for commercial instruments. As per Equation (11) this will force mercury into pores as small as approximately 0.003 μm in diameter. Regardless of the pore geometry and the model employed to quantify it, the volume of mercury forced into the pores (and other void spaces) increases as pressure increases. Therefore, increasing the applied pressure on the mercury surrounding the porous sample produces unique pressure–volume curves such as those shown in Figure 5 for the Dolgeville formation, a Utica shale (Eigmati et al. 2011). These curves are dependent on (a) pore size distribution and tightness, (b) rock type and (c) saturation history (intrusion versus the extrusion process).

According to Ramakrishnan et al. (1998), self-consistent pore-body/pore-throat ratios can be obtained from the measured hysteresis between the intrusion and extrusion curves. The intrusion curve is controlled by pore throats, whereas the extrusion curve is controlled by pore radii and pore connectivity. This hysteresis behavior can be attributed to variations in the saturation process, alterations due to advancing and receding contact angles, and mercury trapped in pores. These curves are typically normalized to the total amount of mercury in the sample at the end of the intrusion cycle (Venkataramanan et al. 2014). An initial estimate of the pore-body/pore-throat ratio is the ratio of radii for a given mercury-saturation volume. This initial ratio can further be averaged over all mercury saturations. The initial estimate can be refined by correcting the measured extrusion curve for mercury trapped in the sample. At any point along the extrusion curve, the mercury saturation in the sample (Simb) is the sum of the disconnected (Sdc) and connected (Sc) network mercury saturations

$$S_{imb} = S_{dc} + S_c \tag{13}$$

At the end of the intrusion cycle, the amount of mercury saturation in the sample, Si, is unity. At the end of the extrusion cycle, there is a residual disconnected network of mercury in the sample, Sres. Using the initial and final saturations with Land’s Equation, which relates the residual to the initial saturation (Ramakrishnan and Wasan 1986), the amount of connected and disconnected saturations at any point along the extrusion curve can be calculated:

$$\frac{1}{S_{res}} - \frac{1}{S_i} = C \tag{14}$$

The total disconnected mercury saturation at the end of the extrusion cycle is the sum of the disconnected mercury saturation at any point and the disconnected saturation that could arise from the connected phase from further extrusion:

$$S_{res} = S_{dc} + \frac{S_c}{1 + C\,S_c} \tag{15}$$

From Equation (14), with Si = 1, the variable C is calculated. Equations (13) and (15) lead to a quadratic equation of the form, aSdc2 + bSdc + k = 0, where a = C, b = −C(Sres + Simb) and k = Simb[CSres − 1] + Sres. Typically there is a fairly constant horizontal shift between the intrusion and modified extrusion curves when plotting cumulative saturation against capillary pressure. Procedurally, a single pore-body/pore-throat ratio is estimated by computing a scaling factor that matches the measured intrusion and corrected extrusion curves.
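
As an illustration of the bookkeeping above, the short script below computes the Land constant from the residual saturation (Equation 14 with Si = 1) and splits a measured extrusion-curve saturation into its connected and disconnected parts by solving the quadratic; the two saturation values are hypothetical.

```python
# Split an extrusion-curve mercury saturation S_imb into disconnected (S_dc) and
# connected (S_c) parts using Equations (13)-(15). Input saturations are hypothetical.
import numpy as np

S_res = 0.40    # residual (disconnected) saturation at the end of extrusion
S_imb = 0.70    # measured mercury saturation at a point on the extrusion curve

C = 1.0 / S_res - 1.0                    # Land constant, Equation (14) with S_i = 1

# Quadratic a*S_dc^2 + b*S_dc + k = 0 from combining Equations (13) and (15)
a = C
b = -C * (S_res + S_imb)
k = S_imb * (C * S_res - 1.0) + S_res

roots = np.roots([a, b, k]).real
S_dc = min(r for r in roots if 0.0 <= r <= min(S_imb, S_res))   # physical root
S_c = S_imb - S_dc
print(f"S_dc = {S_dc:.3f}, S_c = {S_c:.3f}")
```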

It is possible to transpose the mercury injection data to represent water–oil (w-o) or water-air (w-a) capillary pressure curves using the Leverett J-function (Tiab and Donaldson 2004, 2012).

$$J(S_w) = \frac{P_c}{\gamma \cos\theta}\sqrt{\frac{k}{\phi}} \tag{16}$$

where Pc is the pressure difference between the non-wetting and wetting phase, k is the permeability in darcies (measured prior to mercury intrusion), φ is the porosity (also measured prior to mercury intrusion), γ is the interfacial surface tension and θ is the contact angle. From this expression we can generate equalities that lead to estimates of other capillary behavior based on mercury intrusion data.

$$\left(\frac{P_c}{\gamma \cos\theta}\right)_{w\text{-}o} = \left(\frac{P_c}{\gamma \cos\theta}\right)_{w\text{-}a} = \left(\frac{P_c}{\gamma \cos\theta}\right)_{Hg} \tag{17}$$
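
Because k and φ in Equation (16) are properties of the same rock for either fluid pair, the J-functions can be equated and the capillary pressures scale with γ cosθ. A minimal sketch of that conversion, using commonly quoted laboratory values for the fluid pairs as assumptions, follows.

```python
# Transpose a mercury-intrusion capillary pressure to an equivalent air-brine value
# by equating the Leverett J-functions (Equations 16-17). Fluid-pair properties are
# assumed, commonly quoted laboratory values; the measured Hg pressure is hypothetical.
import numpy as np

P_c_Hg = 1000.0                                  # measured Hg capillary pressure, psi
gamma_Hg, theta_Hg = 485.0, np.radians(130.0)    # mercury-air pair, dyne/cm
gamma_wa, theta_wa = 72.0, np.radians(0.0)       # air-brine pair (water-wet), dyne/cm

P_c_wa = P_c_Hg * (gamma_wa * np.cos(theta_wa)) / (gamma_Hg * abs(np.cos(theta_Hg)))
print(f"equivalent air-brine capillary pressure ~ {P_c_wa:.0f} psi")
```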

It is important to keep in mind that mercury porosimetry suffers as a measure of pore size for several reasons (Clarkson et al. 2012a). First, as noted above, it is limited to pore sizes generally greater than about 2 nm, which excludes the micropores observed in organic-rich shales that contribute significant matrix porosity. Second, the method can distort, compress and damage the pore structure of highly compressible clay-rich samples, and possibly others, because of the high intrusion pressures required for measurement (up to 60,000 psi). Finally, the method requires reasonable estimates of the interfacial surface tension and contact angle for pore size calculations from the Washburn Equation, and these are not well constrained for shale.

Imaging methods

A broad range of direct imaging methods are available to describe the nature of porosity and its association with minerals in rock materials. These include optical light microscopy, standard scanning electron microscopy (SEM) with energy dispersive X-ray spectroscopy (EDX), focused ion beam SEM (FIB SEM), transmission electron microscopy (TEM), nuclear magnetic resonance imaging (NMRI) and X-ray tomography. In this section we provide background and examples of all but TEM and tomography, the latter of which is covered in detail by Noiriel (2015, this volume).

Optical petrology

Optical petrology is the most straightforward, accurate and repeatable means of evaluating the pore system and associated mineralogy of seals and reservoir rock samples (core, sidewall core, drill cuttings and outcrop). In order to better identify pores and fractures, one of two types of epoxy, normal blue or rhodamine-B for fluorescence under ultraviolet light, is impregnated into the rock to highlight the pore system. The finished thin section is viewed under plane-polarized, cross-polarized and/or ultraviolet light to examine a two-dimensional cross section through a rock, estimate the bulk mineral composition, and make important observations regarding grain fabric and texture by point counting or image analysis of the mineralogy, texture, diagenesis, pore system and reservoir quality of the sample. However, despite the large amount of information available, the actual three-dimensional grain relationships and details of the intergranular pore structure have always been beyond the reach of this technique.

While digital image analysis is not new, important recent advances in petrography include pattern recognition and pattern classification software for the description and quantification of rock-ore geometric characteristics. These approaches built on the early work of Ehrlich and co-workers (Ehrlich et al. 1984; Ehrlich and Etris 1990), who pioneered the arena of Pore Image Analysis (PIA) to determine the size, shape and relative proportions of different pore types through computer-based thin-section porosity analysis. It is possible to define several hundred variables for each field of view using this technique. PIA is used in conjunction with MICP data to develop physical models for the determination of capillary pressure characteristics related to pore type and pore-throat size (Fig. 6). Currently the two most popular freeware platforms for implementing PIA are Fiji/ImageJ™ and Image-Pro™.

To facilitate mineral assessments in concert with pore identification, carbonate stains (alizarin Red-S for calcite and potassium ferricyanide for ferroan carbonate) and/or a feldspar stain are applied to the 30-μm thin section (Warne 1981). Applying such stains to the rock chip before gluing and sectioning leaves the polished surface pristine for other analytical approaches as needed. Any sedimentary structures, morphology, bioclasts, crystal habits, textures and fabric of the thin section are also noted. From the calculated mineralogy and pore system estimation, the reservoir or seal quality can be estimated with reference to potential problems (acid, fines migration and fresh water sensitivity).

Scanning electron microscopy (SEM)

Through the application of SEM and EDX systems, earth scientists are now able to go one step beyond thin section analysis—to look down into the pores, identify the smallest minerals, and examine the distribution of these minerals within the pores (cf. Welton 2003). Other advantages of the SEM over optical petrography are ease of sample preparation (for certain types of applications), greater depth of field and resolution, and a significantly higher magnification range (most SEM analysis of rocks involves magnifications between 10× and 20,000×). When examining an SEM micrograph for the first time, the major problem is one of scale. However, with minimal training and experience, the user can soon identify minerals and textures previously observed only in thin section. This is not to say that the SEM replaces thin section analysis; instead, the SEM complements thin section analysis by providing a different type of information that—when used in combination with other techniques—helps characterize pore features in rocks.

Huang et al. (2013) provide a very nice summary of SEM operational aspects and selected imaging applications with emphasis on shale. As they note, the mechanics of the modern scanning electron microscope (SEM) system allow for various imaging and detecting techniques that can be used to study different aspects of the composition of shale and other rock materials at very high resolution. Scanning electron microscopy, unlike conventional light microscopy, produces images by recording various signals resulting from interactions of an electron beam with the sample as it is scanned in a raster pattern across the sample surface. A fine electron probe, with a spot size from a few angstroms to several hundred nanometers, is generated by focusing electrons emanating from an electron source (conventionally called the electron gun) onto the surface of the specimen using a series of electro-magnetic lenses. The primary electron beam interaction with the sample generates a number of different types of signals: (a) secondary electrons useful for 3-D textural assessment, (b) backscattered electrons used for characterizing composition and crystalline structure, (c) characteristic X-rays used for element-specific mapping and mineral identification and (d) photons resulting in cathodoluminescence (CL) indicative of certain trace impurities in minerals (Goldstein et al. 2003).

Figure 7 illustrates the value of combining different SEM approaches to describe a geologic sample, in this case a gas shale from the Utica formation of the Appalachian Basin, USA. In this analysis FEI QEMSCAN™ software was used to generate a very detailed mineral map of an organic-rich shale, demonstrating the power of combining X-ray emission assessment, which yields individual elemental signal intensity, with backscattered-electron signals based on average atomic number contrast. The resolution for this type of analysis is typically on the order of 1–2 μm per pixel (Swift et al. 2014). It cannot be overstated, however, that when imaging clay-rich samples or other fine-grained or chemically complex materials it is paramount to have confirmatory data from X-ray diffraction for accurate mineral identification.

Another important consideration is the quality of the polish. Even with the utmost care during grinding, there is nearly always the chance of creating artificial pores and fractures that, once imaged, yield an inaccurate accounting of the true porosity. In many cases the softer clays can be smeared and stretched, producing unnatural textures (Fig. 8). Additionally, artificial pores are commonly created at the contact between soft phases such as clays and more resilient phases such as quartz or feldspar. To circumvent this problem many labs now use argon ion milling to complete the polishing process, thus preserving true mineral textures and pore structures (Erdman and Drenzek 2013). High-resolution SEM of ion-milled samples has yielded remarkable new insights, such as the occurrence of pores and pore networks contained within the organic matter of gas shale (e.g., Loucks et al. 2009, 2012; Ambrose et al. 2010; Klimentidis et al. 2010; Sondergeld et al. 2010; Curtis et al. 2012). Interestingly, in some over-mature organic-rich rocks, it appears that as much as 50% of the volume of the original organic matter may consist of pores smaller than 100 nm (e.g., Passey et al. 2010; Heath et al. 2011). An example of this is given in Figure 9, which shows submicron pores in organic matter intercalated with clay and carbonate in a very mature shale from the Utica formation, Wood County, West Virginia. What is interesting is that wherever organic matter is “protected” by resilient grains of carbonate or quartz, we see little or no development of submicron pores. Conversely, where organic matter is wrapped by clay, we see abundant pore formation. This suggests that the clay can deform and accommodate the volume expansion due to gas generation, whereas the stiffer grains prohibit the volume expansion, hence the lack of pores. The point here is that SEM coupled with ion milling is very valuable in revealing details that reflect the evolution of the pore system in complex lithologies.

Focused ion beam (FIB) SEM

Focused ion beam (FIB) SEM is finding a growing number of applications in the earth sciences (Goldstein et al. 2003). This method uses serial sectioning and imaging in order to produce sets of sequential SEM images (generally several hundred) that permit a three-dimensional (3-D) visualization of minerals, organics and pores. From these 3-D images one can calculate porosity, pore-size distribution, kerogen volume percentage, and permeability (e.g., Heath et al. 2011; Zhang and Klimentidis 2011; Curtis et al. 2012; Landrot et al. 2012; Huang et al. 2013).

In a typical FIB-SEM system, an extraction field is applied to a gallium (Ga) liquid metal ion source to field emit Ga ions and form a Ga beam. Due to its relatively high atomic mass, the Ga beam not only can be used to generate electron and ion images, but also may be used to mill samples to remove material. A cross-section of the sample is milled by a Ga FIB beam and is imaged simultaneously by the SEM (Fig. 10, Curtis et al. 2012). Thicknesses of the milled slices depend on the milling ion current, with high currents (on the order of nA) milling thicker and less precise slices greater than about 25 nm. In most FIB/SEM systems, the electron beam and ion beam are positioned between 50 and 54° from each other (Fig. 10). Because of this configuration, any reasonably flat location on a sample can be milled and imaged (Silin and Kneafsey 2012). To avoid repositioning the sample so that the surface is orthogonal to the SEM following each FIB slice, the SEM images are adjusted to account for this angle.

For most advanced studies, measurements take advantage of multiple detector types for detecting secondary electrons (SE), backscattered electrons (BSE) and X-rays. A BSE detector is preferable for imaging because it minimizes surface electron charging. However, an SE detector can also produce satisfactory images. Because most geological materials are nonconductive, excess electron charge can build up on the sample surface. To mitigate this issue, one must use a low-voltage electron beam (2–5 kV) on the SEM side when a high electron beam current (0.17–1.4 nA) is applied.

Silin and Kneafsey (2012) provide a good discussion of some of the issues encountered with FIB/SEM applications, which we summarize here. One of the major limitations of FIB/SEM is the extremely small size of the sample area. Therefore, when performing nanometer-scale interrogations of fine-grained, low-porosity materials like shale, it is important to consider the scale of the observation in the context of the scale of interest. Volumes of 20 μm × 10 μm × 5 μm are typically imaged, whereas the rock unit the sample came from may be 10s of m thick with a lateral extent of 10s or 100s of km. Hence the sample may be 20 to nearly 30 orders of magnitude smaller in volume than the lithologic unit. A case in point is illustrated in the images shown in Figure 11A–C. This FIB/SEM sample comes from the Utica formation, Wood County, West Virginia, at a depth of 9,502.7 feet (Arthur and Cole 2014). In this deep part of the Appalachian Basin the Utica is roughly 300 feet thick, and the formation extends north, northwest and northeast for several hundred km. The pores (Fig. 11C) within this sample occur primarily within the kerogen (Fig. 11B) and exhibit a fair degree of connectivity, but the image is only ~20 μm across.

A second issue impacting pore assessment is that sampling bias must be taken into account. Most geologic materials exhibit some form of heterogeneity that may cross a variety of length scales. For example, shales are usually anisotropic in the form of thin laminae and contain pores ranging in scale from 10s of μm down to below 10 nm (see Fig. 9). The porosity may vary within a given layer and between layers, as may the permeability, both of which are also typically anisotropic.

The rheologic integrity of the sample may also be affected by the nature of the sampling, sample handling, desiccation (especially for clay-rich materials) and machining, all of which place stress on the sample. Further, even the FIB milling process, particularly at high beam currents, can adversely impact the sample, leading to varying forms of artificial porosity. Small cracks in the sample may result from machining, so insights into their nature, such as the presence of clay particles within the fracture, need to be pursued and their origins evaluated. Lower currents are recommended for polishing and prior to image collection.

Determination of the size of the representative elementary volume is required to effectively see the 3-D pore structure for flow simulations and to scale up pore-scale results to answer reservoir-scale questions. Clearly this is a difficult task given the extent of heterogeneity observed in most geological materials. Silin and Kneafsey (2012) point out that for shale samples this can be done independently of the fracture network, which imposes another scale of interest.

Despite these various issues, nanoscale imaging via FIB/SEM has a number of advantages. As noted above, from the images one can obtain a fundamental understanding of the 3-D nature of pore space (Fig. 11C), pore connectivity, and the location and distribution of mineral and organic phases (Fig. 11B). The images provide a foundation for conceptual model building that leads to quantification of permeability and fluid flow. Recently this approach has been used to estimate the accessible surface area in the Lower Tuscaloosa sandstone (Landrot et al. 2012). Mineral distributions mapped in 2-D by SEM/EDX were coupled with dual-beam FIB/SEM and X-ray-based microtomography of select regions within the samples to quantify the connected pore network.

Core-scale NMR imaging

A wide range of NMR spectroscopic methods are available for non-destructive characterization of porous materials. According to Mitchell and Fordham (2014) the majority of NMR spectrometers are high field instruments (magnetic field strength B0> 1.5 T) used for chemical analysis and biomedical studies. In these applications, the use of a high magnetic field offers several advantages. The inherent signal-to-noise ratio (SNR) of the measurement is improved by increasing the field strength, enabling better spectral (chemical) and spatial (image) resolution. Furthermore, nuclei with lower gyromagnetic ratios than 1H are more readily accessed (e.g., 23Na, 19F, 31P, 13C, 2H) for studies of molecular structure and chemical reaction monitoring (Mitchell and Fordham 2014). High-field NMR also offers the advantage of shorter radio frequency (rf) probe recovery times, allowing the detection of short relaxation time components in solids. Unfortunately, high field strengths can bring complications, especially in studies of heterogeneous materials (e.g., liquid-saturated porous media). The solid/fluid magnetic susceptibility contrast in such samples results in pore-scale magnetic field distortions (so-called “internal gradients”). Molecular diffusion through these internal gradients leads to an enhanced decay of transverse magnetization. Additionally, the field dependence of relaxation times prevents high field measurements from being compared directly to low field studies.

In the majority of chemical and medical applications, the advantages of high field significantly outweigh the disadvantages. However, this is not the case for laboratory studies of fluids in rock. It is the magnetic field dependence of relaxation times that necessitates laboratory instruments and logging tools to operate at similar frequencies; if laboratory data are to be used for log calibration, the measurements must be based on consistent spin physics. Consequently, the industry standard for laboratory NMR core analysis has been set at an 1H resonance frequency of ν0 = 2 MHz, corresponding to a magnetic field strength of B0 = 0.05 T. The use of low field also limits the influence of internal gradients, enabling quantitative analysis (Mitchell et al. 2010). More recently researchers have been exploring the use of other field strengths to assess core such as 0.3 T as reviewed by Mitchell and Fordham (2014).

Regarding pore assessment in rock cores, low-field NMR measures the response of hydrogen protons inside an external magnetic field. Therefore the signal response comes from the water or oil saturated in the rock and not the rock itself (Bryan et al. 2013). The protons in the oil or water are polarized in the direction of this static magnetic field, called the longitudinal direction. Another magnetic field is then applied as a radio frequency pulse to “tip” the protons onto the perpendicular transverse plane, where they rotate in phase with one another. As the protons give off energy to one another and to their surroundings, the magnetic signal in the transverse plane decays. This is known as transverse relaxation, or T2 (Coates et al. 1999).

In homogeneous magnetic fields such as those generated in NMR laboratory instruments, two types of relaxation exist in fluids: bulk relaxation and surface relaxation (Straley et al. 1997). When a bulk fluid is placed in the NMR the measured transverse relaxation is bulk relaxation, or energy transferred between protons in the fluid. Bulk relaxation is a property of the fluid, related to local motions such as molecular tumbling and diffusion (Kleinberg and Vinegar 1996; Straley et al. 1997). If solids are present, surface relaxation occurs at the fluid–solid interface where the hydrogen protons are constricted by the grain surfaces and therefore transfer energy to these surfaces. When samples of saturated porous media are measured, the amplitude of the T2 measurement is directly proportional to porosity, and the decay rate is related to the pore sizes and the fluid type and its viscosity in the pore space. Short T2 times generally indicate small pores with large surface-to-volume ratios and low permeability. Conversely, longer T2 times indicate larger pores with higher permeability.

Hydrogen nuclei in thin interlayers of clay water experience high NMR relaxation rates because the water protons are close to grain surfaces and interact with those surfaces frequently. Additionally, if the pore volumes are small enough that water is able to diffuse easily back and forth across the water-filled pores, then the relaxation will reflect the surface-to-volume ratio of the pores. Thus, water in small clay pores with larger surface-to-volume ratios will exhibit fast relaxation rates and therefore short-T2 porosity components. Because the pore populations within a given lithofacies are not all alike, especially in one with a significant mix of clays and clastic grains, capillary-bound or clay-bound waters are not very mobile, whereas free water can be. This can set up a scenario of two approximately equal porosities but with entirely different mobility regimes that can be distinguished by their T2 time distributions.

Figure 12A shows an example of T2 behavior of water in different sizes of pores. One observes a much faster relaxation time for water contained in the smaller pores, in this case, clay. Water alone is a low viscosity fluid, thus its bulk relaxation is slow, on the order of approximately 2000 ms. However, when water is imbibed into pores of varying size, the water T2 distribution is essentially analogous to a pore size distribution because surface relaxation is so much faster than bulk relaxation of water. Heavy oil and bitumen are highly viscous and therefore exhibit very fast relaxation times. Figure 12B compares the spectrum of a bulk sample of bitumen to its signal inside sand. Due to the high viscosity, the relaxation times for heavy oil and bitumen occur at approximately the same T2 locations whether the fluid exists in bulk form or in a porous matrix. The presence of gas in pores poses an interesting problem for NMR that has been discussed for example by Sigal and Odusina (2001). Gas has much less hydrogen per unit volume than liquids such as water or oil and as such will yield a neutron-derived porosity that is too low. By comparing the density logs described below against NMR derived measurements one can distinguish liquid-filled versus gas-filled pore systems.

A pore-size distribution can be obtained from a fully water-saturated rock core using the NMR T2 distribution as follows:

$$\frac{1}{T_2} = \rho\,\frac{A}{V} + \frac{1}{T_{2,B}} \tag{18}$$

where ρ is the relaxivity, A/V is the ratio of the surface area to the volume of a pore (an uncertain value for fractal pore surfaces), and T2,B is the relaxation time of the bulk fluid. For spherical pores with radius r, A/V ≈ 3/r. Therefore, the NMR T2 distribution can be converted to a pore-size (r) distribution by suitable selection of the relaxivity ρ. Typically, for natural porous matrices the magnitude of ρ ranges between 1 and 10 μm s−1.
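
A minimal sketch of inverting Equation (18) for a single T2 value, assuming spherical pores (A/V = 3/r); the relaxivity and relaxation times below are hypothetical but lie within the ranges quoted in the text.

```python
# Convert a measured T2 to an equivalent spherical pore radius via Equation (18).
# All values are hypothetical.

rho = 5.0e-6       # surface relaxivity, m/s (5 um/s, within the quoted 1-10 um/s range)
T2 = 0.050         # measured transverse relaxation time, s (50 ms)
T2_B = 2.0         # bulk-fluid relaxation time of water, s (~2000 ms)

surface_rate = 1.0 / T2 - 1.0 / T2_B    # surface term of Equation (18), 1/s
r = 3.0 * rho / surface_rate            # spherical pore radius, m
print(f"pore radius ~ {r*1e6:.2f} um")
```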

NMR Cryoporometry (NMRC) is a recent technique developed at the University of Kent in the UK for measuring total porosity and pore size distributions (Strange et al. 1993). It makes use of the Gibbs-Thomson effect wherein small crystals of a liquid in the pores melt at a lower temperature than the bulk liquid. The melting point depression is inversely proportional to the pore size. The technique is closely related to that of the use of gas adsorption to measure pore sizes (Kelvin Equation). Both techniques are particular cases of the Gibbs Equations; the Kelvin Equation is the constant temperature case, and the Gibbs–Thomson Equation is the constant pressure case (Mitchell et al. 2008). To make a cryoporometry measurement, a liquid is imbibed into the porous sample, the sample cooled until all the liquid is frozen, and then warmed slowly while measuring the quantity of the liquid that has melted. According to Mitchell et al. (2008), it is similar to DSC thermoporosimetry, but has higher resolution, as the signal detection does not rely on transient heat flows, and the measurement can be made arbitrarily slowly (Fig. 13). It is suitable for measuring pore diameters in the range 2 nm–2 μm. NMR may be used as a convenient method of measuring the quantity of liquid that has melted, as a function of temperature, making use of the fact that the T2 relaxation time in a frozen material is usually much shorter than that in a mobile liquid. It is also possible to adapt the basic NMR experiment to provide structural resolution in spatially dependent pore size distributions (Strange and Webber 1997) or to provide behavioral information about the confined liquid (Alnaimi et al. 2004).

Atomic Force Microscopy

Atomic force microscopy (AFM) is a relatively new tool being used to characterize pores and pore features in complex rock matrices down to the atomic scale. This method can not only be used to obtain topographic images of surfaces, but can also simultaneously identify different materials on surfaces at high resolution (e.g., Javadpour 2009; Javadpour et al. 2012). This recent interest in AFM application for rock characterization can be traced to the emergence of unconventional shale-gas reservoirs and the interest reservoir engineers have in quantifying the wetting and flow behavior of hydrocarbon gases and fluids at the nanoscale. The unique capabilities of the AFM make it ideal for interrogating nanopores, organic materials (e.g., kerogen), minerals and diagenetic microfractures in shale. Additionally, it can be used to measure the localized bulk modulus of elasticity on a surface, which has implications for geophysical modeling and even designs for hydraulic fracturing (e.g., Kopycinska-Müller et al. 2007).

In AFM, a flexible cantilever acts as a spring to determine the net force between a coating at the tip of the cantilever and a sample, or substrate. Local attractive and repulsive forces between the tip and the sample bend the cantilever arm. Deflection of the cantilever is optically detected and converted into an electrical signal to determine force versus distance curves using Hooke’s law. The detection system that has become the standard employs a focused laser beam that is reflected from the rear of the cantilever onto a detector. By the optical lever principle, a small displacement of the cantilever is converted to a measurably large deflection in the position of the reflected spot on a photodetector. Vertical deflection can be quantified by comparing the spot location from the top and bottom halves of the detector (e.g., Javadpour 2009).

An example of nondestructive AFM imaging of a nanoporous system is shown in Figure 14. Nanopores and nanogrooves can be seen in this mudrock sample, where the imaging dimensions were 4 × 4 × 0.6 μm3. This rock sample was cut using a cryogenic ultramicrotome. The flat surface was scanned using a triangular AFM tip (7–10 nm tip radius, 50 N m−1 spring constant) to reveal the topography and pore network. Pores on the order of 30 nm in diameter and nanogrooves with widths of approximately 60 nm were observed in a larger depression that may have been occupied by a single grain that was plucked from the sample during preparation. This is, in fact, one key issue that can hamper the study of nanoporous samples: the creation of artificial pores when preparing surfaces by grinding and polishing or by using microtome techniques. The use of ion milling can generally help mitigate this problem.

Images of nanopore features and grain boundaries derived from AFM can form the basis of models designed to assess the nature of fluid and gas flow in nanoscale networks (e.g., Javadpour et al. 2007; Javadpour 2009). Furthermore, AFM can be used to characterize the surfaces of discrete phases such as micas that have been reacted with fluids not in equilibrium with the phase to quantify the extent of dissolution and generation of surface pores (e.g., Hu et al. 2011a; Shao et al. 2011). Coverage of the voluminous AFM literature on mineral dissolution leading to pore evolution is beyond the scope of this article.

Downhole porosity logs

There are a number of downhole logging methods used to determine porosity: resistivity, density, neutron, NMR and sonic logs. Below we briefly describe the first four of these in rather general terms. Further details on these methods can be found in Tiab and Donaldson (2004, 2012), Ellis and Singer (2008) and Crain’s Petrophysical Handbook (https://spec2000.net/01-index.htm).

Resistivity logs

Resistivity logging characterizes the rock or sediment in a borehole by measuring its electrical resistivity, a fundamental material property that represents how strongly the material opposes the flow of electric current. This helps to differentiate between formations filled with salty waters (good conductors of electricity) and those filled with hydrocarbons (poor conductors of electricity). In these logs, resistivity is measured using four electrical probes to eliminate the resistance of the contact leads. The log must be run in holes containing electrically conductive mud or water. Resistivity and porosity measurements are used to calculate water saturation. Resistivity is expressed in ohm-meters (Ω·m), and is frequently charted on a logarithmic scale versus depth because of its large range. The distance from the borehole penetrated by the current varies with the tool, from a few centimeters to one meter.

The foundation for using resistivity to assess formation properties is derived from the empirical Archie’s law (Archie 1942, 1947; Tiab and Donaldson 2004; Ellis and Singer 2008) that relates the in situ electrical conductivity of a sedimentary rock to its porosity and brine saturation:

Ct = (1/a) Cw φ^m Sw^n        (19)

where φ denotes the porosity, Ct the electrical conductivity of the fluid-saturated rock, Cw the electrical conductivity of the brine, Sw the brine saturation, m the cementation exponent of the rock, n the saturation exponent and a the tortuosity factor. The cementation exponent models how much the pore network increases the resistivity, as the rock itself is assumed to be non-conductive. For unconsolidated sand the m value is roughly 1.3; for sandstones it is between 1.8 and 2, and for limestones, 1.7 to 4. The saturation exponent n models the dependence of the conductivity on the presence of non-conductive fluid (hydrocarbons) in the pore space, and is related to the wettability of the rock. Its value is usually very close to 2. These exponents are typically determined from measurements on core plugs, whereas the brine conductivity can be measured on produced water. The tortuosity factor a is meant to correct for variation in compaction, pore structure and grain size, and lies between 0.5 and 1.5. Recast in terms of electrical resistivity, the equation reads

Rt = a Rw / (φ^m Sw^n)        (20)

where Rt refers to the fluid saturated rock resistivity, and Rw represents brine resistivity.
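
In practice Equation (20) is solved for the brine saturation; a minimal sketch (Python) is given below. The Archie parameters a, m, and n used as defaults are illustrative assumptions only, since in real interpretation they come from core measurements for the formation of interest.

```python
# Minimal sketch: solving Archie's law (Eq. 20) for brine saturation Sw.
# The default parameters a, m, n are illustrative; formation-specific values
# determined from core plugs should be used in practice.

def archie_sw(Rt, Rw, phi, a=1.0, m=2.0, n=2.0):
    """Brine saturation from rock resistivity Rt, brine resistivity Rw, porosity phi."""
    return (a * Rw / (phi**m * Rt)) ** (1.0 / n)

# Example: Rt = 20 ohm-m, Rw = 0.05 ohm-m, phi = 0.20 -> Sw ~ 0.25 (hydrocarbon-bearing)
print(round(archie_sw(Rt=20.0, Rw=0.05, phi=0.20), 2))
```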

The factor

F = R0 / Rw = a / φ^m        (21)

is also called the formation factor, where R0 is the resistivity of the rock filled with only water (Sw = 1). The factor

I = Rt / R0 = Sw^−n        (22)

is also called the resistivity index.

As noted above Archie’s law is a purely empirical relation attempting to describe ion flow (mostly sodium and chloride) in clean, consolidated sands, with varying intergranular porosity. Electrical conduction is assumed not to be present within the rock grains or in fluids other than water. For brine-saturated intervals (Sw = 1) Archie’s law can be written:

Ct = (1/a) Cw φ^m        (23)

Therefore, plotting the logarithm of the measured in situ electrical conductivity against the logarithm of the measured in situ porosity (a so-called Pickett plot) should yield a straight-line relationship according to Archie's law. The slope is equal to the cementation exponent m and the intercept is equal to the logarithm of the in situ brine conductivity (assuming a = 1). This relationship, however, breaks down when dealing with rocks containing appreciable amounts of clay, in which case one can apply the Waxman–Smits Equation, which attempts to correct for the clay contribution (Waxman and Smits 1968).
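
A minimal sketch of such a fit (Python/NumPy) is shown below; the porosity and conductivity pairs are synthetic stand-ins for log data, and a = 1 is assumed so that the slope and intercept give m and Cw directly.

```python
import numpy as np

# Minimal sketch of a Pickett-style fit for a brine-saturated interval (Sw = 1,
# a = 1 assumed): log10(Ct) = m*log10(phi) + log10(Cw). Data are synthetic.

phi = np.array([0.08, 0.12, 0.18, 0.25])      # in situ porosity (fraction)
Ct = np.array([0.032, 0.072, 0.162, 0.312])   # rock conductivity (S/m), synthetic

m, log_Cw = np.polyfit(np.log10(phi), np.log10(Ct), 1)
print(f"cementation exponent m = {m:.2f}, brine conductivity Cw = {10**log_Cw:.2f} S/m")
```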

Density and neutron logs

Density logs take advantage of the well-known linear relationship between density and porosity (e.g., Ellis et al. 2003). In the case of a binary system of a framework of rock with a density ρma and a portion of the volume filled with a fluid of density ρf it is given by

ρb = φ ρf + (1 − φ) ρma        (24)

where ρb is the bulk density of the formation and φ is the porosity, the volume fraction that is not rock (or "matrix") and is assumed to be saturated with a fluid of known density. This relationship can be recast to determine porosity

φ = (ρma − ρb) / (ρma − ρf) = a − b ρb        (25)

where the scaling terms a and b are not true constants but depend on the formation parameters specific to the zone being investigated:

a = ρma / (ρma − ρf),  b = 1 / (ρma − ρf)        (26)

Thus, to estimate porosity properly, two important parameters must be known: the rock matrix (or grain) density (ρma) and the density of the saturating fluid (ρf) since they determine the slope and intercept of this simple relationship.
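
A minimal sketch of the density-porosity calculation (Python) follows; the quartz matrix and fresh-water fluid densities are illustrative assumptions for a water-filled sandstone, not universal values.

```python
# Minimal sketch of density-log porosity: phi = (rho_ma - rho_b) / (rho_ma - rho_f).
# The matrix and fluid densities are zone-specific; quartz (~2.65 g/cm3) and
# fresh water (~1.0 g/cm3) are assumed here purely for illustration.

def density_porosity(rho_b, rho_matrix=2.65, rho_fluid=1.0):
    """Porosity (fraction) from bulk density rho_b (g/cm3)."""
    return (rho_matrix - rho_b) / (rho_matrix - rho_fluid)

print(round(density_porosity(2.40), 3))  # ~0.152 for a bulk density of 2.40 g/cm3
```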

Density logs measure the electron density of the formation, which is related to the formation density. The logging tools contain a cesium-137 γ-ray source that irradiates the formation with 662 keV γ-rays. These γ-rays interact with electrons in the formation through Compton scattering (inelastic scattering of a photon by a quasi-free charged particle) and lose energy. Once the energy of the γ-rays has fallen below 100 keV, photoelectric absorption dominates and the γ-rays are eventually absorbed by the formation. The amount of energy lost by Compton scattering is related to the number of electrons per unit volume of formation. Since, for most elements of interest (below Z = 20), the ratio of atomic weight, A, to atomic number, Z, is close to 2, γ-ray energy loss is related to the amount of matter per unit volume, i.e., formation density. A γ-ray detector located some distance from the source detects surviving γ-rays and sorts them into several energy windows. The number of high-energy γ-rays is controlled by Compton scattering, hence by formation density. The number of low-energy γ-rays is controlled by photoelectric absorption, which is directly related to the average atomic number, Z, of the formation, hence to lithology. Modern density logging tools include two or three detectors, which allow compensation for some borehole effects, in particular the presence of mud cake between the tool and the formation. Since there is a large contrast between the density of the minerals in the formation and the density of pore fluids, porosity can easily be derived from measured formation bulk density if both the mineral and fluid densities are known.

Neutron porosity measurements employ a neutron source to measure the Hydrogen Index in a reservoir, which is directly related to porosity (e.g., Ellis et al. 2003, 2004). The Hydrogen Index (HI) of a material is defined as the ratio of the concentration of hydrogen atoms per cm3 in the material to that of pure water at 75 °F. As hydrogen atoms are present in both water- and oil-filled reservoirs, measurement of this ratio allows estimation of the amount of liquid-filled porosity. Neutron porosity logging tools contain an americium–beryllium neutron source, which irradiates the formation with neutrons. These neutrons lose energy through elastic collisions with nuclei in the formation. Once their energy has decreased to thermal levels (see below), they diffuse randomly away from the source and are ultimately absorbed by a nucleus. Hydrogen atoms have essentially the same mass as the neutron; therefore hydrogen is the main contributor to the slowing down of neutrons. A detector at some distance from the source records the number of neutrons reaching this point. Neutrons that have been slowed to thermal levels have a high probability of being absorbed by the formation before reaching the detector. The neutron count rate is therefore inversely related to the amount of hydrogen in the formation. Since hydrogen is mostly present in pore fluids (water, hydrocarbons), the count rate can be converted into apparent porosity. Modern neutron logging tools usually include two detectors to compensate for some borehole effects; porosity is derived from the ratio of count rates at these two detectors rather than from the count rate at a single detector. For a discussion of neutron logs beyond this simple overview, consult Gilchrist (2008) for details on interpretation of compensated neutron log responses and Fricke et al. (2008) for a summary of downhole pulsed neutron techniques.

The combination of neutron and density logs takes advantage of the fact that lithology has opposite effects on these two porosity measurements. The average of neutron and density porosity values, therefore, is usually close to the true porosity, regardless of lithology. Another advantage of this combination is the “gas effect.” Gas, being less dense than liquids, translates into a density-derived porosity that is too high. Gas, on the other hand, has much less hydrogen per unit volume than liquids: neutron-derived porosity, which is based on the amount of hydrogen is, therefore, too low. If both logs are displayed on compatible scales, they overlay each other in liquid-filled clean formations and are widely separated in gas-filled formations.
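
The minimal sketch below (Python) illustrates this combination: the simple average as a lithology-compensated porosity estimate, plus a crude gas flag based on the density-neutron separation. The 0.05 separation threshold is an arbitrary illustrative choice, not a published cutoff.

```python
# Minimal sketch of combining neutron and density porosities. The average is a
# common lithology-compensated estimate; a large positive density-minus-neutron
# separation suggests gas. The 0.05 threshold is arbitrary and for illustration.

def combined_porosity(phi_density, phi_neutron, gas_flag_threshold=0.05):
    phi_avg = 0.5 * (phi_density + phi_neutron)
    possible_gas = (phi_density - phi_neutron) > gas_flag_threshold
    return phi_avg, possible_gas

print(combined_porosity(0.22, 0.10))  # -> (0.16, True): gas-like separation
```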

NMR logs

NMR logging, a subcategory of electromagnetic logging, measures the induced magnetic moment of hydrogen nuclei (protons) contained within the fluid-filled pore space of porous media (reservoir rocks). Unlike conventional logging measurements (e.g., acoustic, density, neutron, and resistivity), which respond to both the rock matrix and fluid properties and are strongly dependent on mineralogy, NMR-logging measurements only respond to the presence of hydrogen protons. Because these protons primarily occur in pore fluids, borehole NMR measurements can provide different types of formation porosity-related information (e.g., Allen et al. 1997). First, they tell how much fluid is in the formation. Second, they provide details about the formation pore size and structure that are not usually available from conventional porosity logging tools. This can lead to a better description of fluid viscosity and mobility—whether the fluid is bound by the formation or free to flow. Finally, in some cases, NMR logs can be used to determine the type of fluid—water, oil or gas.

As noted above, the NMR behavior of a fluid in the pore space of a reservoir rock is different from the NMR behavior of the fluid in bulk form. For example, as the size of pores containing water decreases down to the micron scale, the differences between the apparent NMR properties of the water in the pores and the water in bulk form increase (e.g., Cole et al. 2013). Micro-porosity associated with clays and with some other minerals typically contains water that, from an NMR perspective, appears almost like a solid. Water in such micro-pores has a very rapid "relaxation time." Because of this rapid relaxation, this water is more difficult to see with NMR logging tools than, for example, producible water associated with larger pores. Fortunately, modern NMR logging methods can see essentially all the fluids in the pore space, and the porosity measurement made by these tools is thus characterized as being a "total-porosity" measurement (Coates et al. 1999). An added feature of NMR is the fact that measurements of the formation made when the magnetic resonance imaging logging (MRIL) tool is in the wellbore can be duplicated in the laboratory by NMR measurements made on rock cores recovered from the formation. This ability to make repeatable measurements under very different conditions is what makes it possible for researchers to calibrate the NMR measurements to the petrophysical properties of interest (such as pore size) for the end user of MRIL data (Murphy 1995; Cherry 1997).

Although conventional porosity tools, such as neutron, density, and sonic, exhibit a bulk response to all components of the volumetric model (i.e., matrix plus pore-fluid), they are more sensitive to matrix materials than to pore fluids (e.g., Coates et al. 1999). Furthermore, the responses of these tools are highly affected by the borehole and mudcake, and the sensitive volumes of these tools are not as well defined as that of the NMR imaging tool. NMR porosity is essentially matrix-independent—that is, the tools are sensitive only to pore fluids. The difference in various NMR properties—such as relaxation times (T1 and T2) and diffusivity (D)—among various fluids makes it possible to distinguish (in the zone of investigation) among bound water, mobile water, gas, light oil, medium-viscosity oil, and heavy oil (Fig. 15). Clay-bound water, capillary-bound water, and movable water occupy different pore sizes and locations. Hydrocarbon fluids differ from brine in their locations in the pore space, usually occupying the larger pores. They also differ from each other and brine in viscosity and diffusivity. NMR logging uses these differences to characterize the fluids in the pore space.

In terms of the measurement process, before a formation is interrogated with an NMR tool, the protons in the formation fluids are randomly oriented. When the tool passes through the formation, the tool generates magnetic fields that activate those protons. First, the tool’s permanent magnetic field aligns, or polarizes, the spin axes of the protons in a particular direction. Then the tool’s oscillating field is applied to tip these protons away from their new equilibrium position. When the oscillating field is subsequently removed, the protons begin tipping back, or relaxing, toward the original direction in which the static magnetic field aligned them (Fukushima and Roeder 1981). Specified pulse sequences are used to generate a series of so-called spin echoes, which are measured by the NMR logging tool and are displayed on logs as spin-echo trains (e.g., Coates et al. 1999; Kleinberg 1999; Kleinberg and Jackson 2001). These spin-echo trains constitute the raw NMR data. To generate a spin-echo train an NMR tool measures the amplitude of the spin echoes as a function of time. Because the spin echoes are measured over a short time, an NMR tool travels no more than a few inches in the well while recording the spin-echo train. The recorded spin-echo trains can be displayed on a log as a function of depth. The initial amplitude of the spin-echo train is proportional to the number of hydrogen nuclei associated with the fluids in the pores within the sensitive volume. Thus, this amplitude can be calibrated to give a porosity.

The amplitude of the spin-echo-train decay can be fit very well by a sum of decaying exponentials, each with a different decay constant. The set of all the decay constants forms the decay spectrum or transverse-relaxation-time (T2) distribution (see discussion above for core-based NMR measurements). In water-saturated rocks, the decay associated with a single pore is a single exponential with a decay constant proportional to pore size; that is, small pores have small T2 values and large pores have large T2 values (Kenyon 1997). At any depth in the wellbore, the rock samples probed by the NMR tool will have a distribution of pore sizes. In essence, a key function of the NMR tool and its associated data-acquisition software is to provide an accurate description of the T2 distribution at every depth in the wellbore. Hence, the multi-exponential decay represents the distribution of pore sizes at that depth, with each T2 value corresponding to a different pore size. Properly defined, the area under the T2-distribution curve is equal to the initial amplitude of the spin-echo train. Hence, the T2 distribution can be directly calibrated in terms of porosity.
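
A minimal sketch of such a multi-exponential inversion is given below (Python/NumPy/SciPy), using a synthetic echo train and a plain non-negative least-squares fit on a log-spaced T2 grid; production log processing adds regularization and more careful noise handling, so this is only an illustration of the idea.

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch: invert a spin-echo train into a T2 distribution by
# non-negative least squares on a log-spaced T2 grid. The echo train is
# synthetic; real NMR-log inversion uses regularization and noise models.

t = np.arange(1, 501) * 1.0e-3                 # echo times (s), 1 ms spacing
true_T2 = [0.01, 0.1]                          # two synthetic pore-size populations
echo = 0.6 * np.exp(-t / true_T2[0]) + 0.4 * np.exp(-t / true_T2[1])
echo += np.random.default_rng(0).normal(0.0, 0.005, t.size)   # add noise

T2_grid = np.logspace(-3, 0, 40)               # candidate T2 values (s)
kernel = np.exp(-t[:, None] / T2_grid[None, :])
amplitudes, _ = nnls(kernel, echo)             # non-negative T2 distribution

total_amplitude = amplitudes.sum()             # proportional to total porosity
print(round(total_amplitude, 2))               # ~1.0 for this synthetic example
```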

Small and ultrasmall angle scattering (U)SAS techniques provide powerful, relatively new, uniquely useful tools for characterizing rock porosity and the properties of confined fluids. In wide-angle X-ray diffraction experiments, familiar to many geochemists in the context of laboratory-scale or synchrotron radiation sources, one probes the structure of materials on an atomic-length scale. Each crystallographic phase of a mineral produces a distinctive wide-angle X-ray diffraction pattern, which serves as a characteristic "fingerprint" of that mineral phase. A typical laboratory-scale X-ray diffraction instrument might measure diffraction at angles above 5° 2θ and, for CuKα radiation, d-spacings below about 17.67 Å. On the other hand, using one of several different geometries, small-angle scattering experiments analyze nano-scale structures (typically greater than ~10 Å) by measuring the intensity of the diffracted beam at significantly smaller angles and/or longer wavelengths. Although natural materials are typically disordered and heterogeneous at these scales, the contrast in scattering length density between the mineral phases and the pore space of a rock produces a scattering signal in the small-angle regime that reflects the pore structure of the rock as a whole, in part because the contrast between the minerals and the pores is significantly larger than that between the mineral phases themselves. In this section, we explain the principles of small-angle diffraction experiments, how the data may be analyzed to obtain information about the pore space, and what opportunities and obstacles are presented to the experimenter.

While most SAS techniques, with the exception of spin-echo approaches, provide data in inverse space (as do other, more familiar diffraction experiments), all interrogate and average relatively large volumes when compared to standard microscopy. While transmission electron microscopy (TEM) provides an obvious and very useful method of characterizing nanoscale porosity, it is difficult to use it to statistically quantify the structures of porous materials given the wide variation in length scales involved. This is because, while electron microscopy can provide detailed images of pores at high magnifications, the total volume of the rock imaged is, of necessity, very small. In fact, Howard and Reed (2005) calculated that if all the material that has ever been in focus in all of the transmission electron microscopes in the world were gathered together it would total less than 1 cm3. Thus tools that can provide a more statistically representative quantification of the pore structure are highly useful complements to high-magnification imaging techniques. This is provided by small angle scattering experiments, which allow characterization of porosity over a very wide range of scales, from nanometers to tens of micrometers (extendable to cm ranges with the integration of low magnification, high resolution imaging techniques, see below), and interrogate all types of void space in the rock, from nanopores to fractures, for sample sizes from < ~1 mm to ~2.5 cm, integrating volumes up to ~30 mm3 (Anovitz et al. 2009).

Scattering techniques can be broadly classified either in terms of the instrument geometry and data acquisition scheme used (e.g., pinhole, Bonse–Hart, spin-echo, time-of-flight etc.) or the type of radiation employed (neutron, X-ray or light), but all interrogate all types of void space in the rock. X-ray sources range from laboratory-sized instruments to those at synchrotron sources and typically have relatively small beam sizes. Synchrotron sources typically have high flux rates as well, which has the advantages of short counting times, good statistics, and the ability to map variations within a sample. Neutron sources can be continuous, typically reactor-generated (e.g., HFIR/ORNL, NCNR/NIST, ILL), or pulsed, including spallation sources (e.g., LANSCE/LANL, ISIS, SNS/ORNL) or pulsed reactors (IBR-2, Dubna, Russia). The latter, or the addition of choppers at the former, allows measurements to be made in time-of-flight mode, potentially providing the opportunity for analysis of fast reaction kinetics. Neutron sources typically have much lower intensities than X-ray sources, however, and thus neutron scattering experiments require longer counting times, but they typically use larger beam sizes and thus interrogate and average a larger sample volume (~30 mm3, Anovitz et al. 2009). The latter can be advantageous, as the beam is typically much larger than the grain sizes of typical (although not all) rock materials and thus neutron scattering may provide a more statistically meaningful, quantified understanding of pore structures.

There are also significant and useful differences in how X-rays and neutrons interact with the sample. X-rays interact electromagnetically with the electron clouds around atoms. In contrast, neutrons interact either with atomic nuclei via the short-range strong force, or with unpaired orbital electrons via a magnetic dipole interaction. Thus, while X-ray scattering intensity is a function of atomic number, neutron scattering is not, allowing the two techniques to provide complementary information. Because the neutron-nucleus potential is approximately a δ-function, only spherically symmetrical scattering occurs from a single-fixed nucleus. That is, neutrons have a constant form factor with scattering angle. In X-ray scattering studies of condensed matter, however, the distribution of electrical charge within the atom produces a form factor so that the amplitude of a scattered photon depends upon scattering angle. In addition, there is no obvious or simple pattern connecting the neutron scattering length b to the atomic number of the atoms Z, as there is for X-rays.

Because neutron scattering is a nuclear, rather than an electronic effect, it is sensitive to isotopic variations, and two isotopes of the same element, such as 1H and 2H, may have dramatically different neutron scattering cross-sections despite having the same chemical identity. Therefore, the scattering length density of the sample, or of a fluid in it, can be experimentally controlled. For instance, if the sample is soaked in an H2O/D2O mixture with a scattering length density matched to the rock matrix, connected (effective) porosity and total porosity can be separated. In addition, because of the nature of the neutron/sample interaction, neutrons are much more penetrating than X-rays, and can be used to study magnetic (Shull and Smart 1949) as well as structural effects. However, unlike X-rays, neutrons can activate elements in a sample, and samples must be scanned for radioactivity after having been in a neutron beam. In rare cases activation may be large enough to prevent release of the sample after analysis. Despite this potential difficulty, these differences make X-ray and neutron scattering complementary approaches, presenting an opportunity for an experimenter investigating a complicated material such as natural porous media.
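
A minimal sketch of the contrast-matching calculation is shown below (Python). The H2O and D2O scattering length densities are commonly quoted room-temperature values, and the matrix SLD used in the example is an assumed quartz-like value; both should be checked against standard tables for real work.

```python
# Minimal sketch: volume fraction of D2O in an H2O/D2O mixture needed to match
# a rock-matrix scattering length density (SLD). Values in units of 1e10 cm^-2;
# H2O ~ -0.56 and D2O ~ 6.36 are commonly quoted, the matrix SLD is assumed.

SLD_H2O = -0.56
SLD_D2O = 6.36

def d2o_fraction_to_match(sld_matrix):
    """Volume fraction of D2O giving a mixture SLD equal to sld_matrix."""
    x = (sld_matrix - SLD_H2O) / (SLD_D2O - SLD_H2O)
    if not 0.0 <= x <= 1.0:
        raise ValueError("Matrix SLD cannot be matched by an H2O/D2O mixture.")
    return x

print(round(d2o_fraction_to_match(4.18), 2))  # ~0.69 for a quartz-like matrix
```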

The initial studies of small angle scattering were those of Guinier (1937). He and others pioneered the theoretical and practical basis of the approach (cf. Kratky 1938; Debye and Bueche 1949; Guinier and Fournet 1955; Debye et al. 1957; see also the comprehensive review of Hammouda 2008). In these techniques collimated radiation is deflected by small angles from areas in a sample across which there are variations in scattering length density (see below), and is measured as a differential cross section dσ/dΩ, the number of particles (neutrons or photons) scattered per unit time per unit solid angle, divided by the incident flux.

Small angle scattering experiments measure the angular dependence of scattering intensity, either without considering changes in the energy of the scattered particle or rejecting particles that are not scattered elastically. In neutron scattering studies of porous media, the incident neutrons interact with the sample by scattering from the short-range potential of the atomic nuclei making up the sample. The wavelengths of neutrons used in scattering studies of condensed matter are much longer than the range of these nuclear forces. As a result, the neutron-nucleus potential may be described by a simplified phenomenological model that takes the potential to be a δ-function located at the position of each nucleus and assigned a strength (b). The scattering length b is an empirically determined property of the neutron-nucleus interaction that depends upon the nuclear isotope and spin state. This makes neutron scattering sensitive to nuclear isotopes, which is the physical basis for the contrast matching techniques described below.

In a neutron diffraction experiment, the measured differential cross section dσ/dΩ may be decomposed into two distinct contributions, known as coherent and incoherent scattering as:

dσ/dΩ = (dσ/dΩ)coh + (dσ/dΩ)inc        (27)

The terms ‘coherent’ and ‘incoherent’ refer to interference effects in the wavefunction of the scattered neutron. The size of each component depends upon the strength and variation of the scattering lengths b throughout the sample. The first component, coherent scattering, provides information about the structure of the material. The relative distances between the atoms making up the sample determine the phase factors e^(iQ·rl), the superposition of the scattered wavelets making the amplitude of the coherent scattering dependent upon scattering angle. The strength of coherent scattering is given by the average of the scattering lengths in the sample 〈b〉. The second component, incoherent scattering, provides no information about the relative distances between atoms in the sample. Unlike coherently scattered neutrons, the intensity of the incoherently scattered neutrons is independent of angle. In this case, one may think of the incident neutrons as interacting with each nucleus of the sample separately. The intensity of incoherent scattering is given by the variation in scattering lengths, 〈b^2〉 − 〈b〉^2, whether because of nuclear spin state or isotopic identity. In small-angle diffraction studies, incoherent scattering produces a flat, ‘background-like’ signal which may limit the analysis of the scattering data at larger angles (smaller scales). One important consequence of this effect is that the presence of hydrogen in a sample, for which the incoherent scattering cross section is very large, tends to generate a large background. In some experiments this can be overcome by substituting D2O for H2O in experimentally altered or synthetic materials. This is not, of course, possible for analysis of natural rock materials. Similarly, strong incoherent scattering from vanadium is often used for calibration of detectors in various kinds of experiments. Coherent and incoherent scattering also occur during the scattering of X-rays by electrons, the incoherent scattering in this case corresponding to free recoil scattering of individual electrons under the impact of a photon (Compton scattering). Because the photons in Compton scattering may be considered to be interacting with each electron in the sample separately, there are no coherent interference effects in the wavefunction of the scattered photon that reveal the relative positions of electrons in the sample.

The coherent and incoherent scattering cross sections, which may be found in standard tables, are defined in terms of the scattering lengths. The coherent cross section is given by:

σcoh = 4π 〈b〉^2        (28)

and the incoherent cross section is given by:

σinc = 4π (〈b^2〉 − 〈b〉^2)        (29)

These tables typically provide the bound scattering lengths, which assume that the scattering nucleus is fixed in the sample.

In a neutron scattering study, the incident neutrons enter a potential field produced by the sample, given by a sum of δ-function scatterers in space. For wide-angle diffraction studies one is probing length scales comparable to interatomic distances. In a small-angle study, however, one does not resolve interatomic distances and a coarse-grained picture of the experimental system may be adopted. The scattering length density (ρ(r), see below for a definition) is a continuous function of position that assigns a scattering length to every point in space. As discussed further below, the measured scattering intensity, given as a function of angle, is related to local differences in scattering length density ρ(r) by means of a Fourier transform. In particular, for a porous rock, the differences in scattering length density ρ(r) between the pore space and different mineral phases produce its distinctive small-angle diffraction pattern. In this way, one may test or parameterize a model of the pore space and distribution of minerals in a rock by performing small-angle X-ray and neutron diffraction experiments.

Neutron and X-ray techniques commonly examine size ranges from approximately 1 nm to 10 μm, which can be adjusted somewhat by selecting appropriate wavelengths. X-ray sources typically use energies from 10–30 keV (0.124–0.041 nm wavelengths). Neutron energy ranges are typically described as hot, in thermal equilibrium with the temperature of a hot graphite block in the reactor core (λ ~ 0.04–0.1 nm); thermal, in thermal equilibrium with the temperature of the cooling water around the reactor core (λ ~ 0.1–0.3 nm); and cold, in thermal equilibrium with the temperature of a cold source such as liquid H2 (λ ~ 0.3–2.0 nm). It is typically the longer wavelength thermal and cold neutrons that are used for small angle scattering experiments. The kinetic energies of neutrons are in the meV range, with:

E (meV) = 81.81 / λ^2        (30)

for λ in Å, much lower than those of photons of similar wavelengths. Small angle light scattering techniques (e.g., Jung et al. 1995; Alexander and Hallett 1999; Bittelli et al. 1999; Cipelletti and Weitz 1999; Holoubek et al. 1999; Stone et al. 2002; Chou and Hong 2004, 2008; Liao et al. 2005) have been less commonly employed for analysis of porous samples, but have the potential to extend the size range investigated because the wavelengths of the visible laser light used are significantly longer than typical X-ray and neutron values. As in standard petrographic analysis, however, samples for SALS analysis must be thin enough to permit transparency and reduce multiple scattering, given the long wavelengths used. In addition, a combination of scattering and imaging techniques allows characterization of porosity over a yet wider range of scales, from nanometers to centimeters.

Scattering data yield information about the shape and size of scatterers, including the scattering contrast and volume, overall and cumulative porosities, pore distribution geometry (mass fractal behavior), the nature of the pore/rock interface (surface fractal behavior), characteristic lengths, and the surface area to volume ratio. A number of early studies (Debye and Bueche 1949; Riseman 1952; Vineyard 1954; Guinier et al. 1955; Debye et al. 1957; Blech and Averbach 1965; Tchoubar and Méring 1969; Vonk 1976) defined the mathematics of the scattering experiment, and began examining particle size distributions and porous materials. To our knowledge, however, the earliest study to report SAS data on rock material is that of Hall et al. (1983), who used neutron scattering to examine the pore size distribution of shales. They noted the asymmetry of the scattering and its relationship to the bedding plane of the samples, calculated pore size distributions and cumulative pore volumes, and found that the pore volume distribution appeared to be bimodal, a result confirmed in later work (e.g., Swift et al. 2014).

A number of rock materials have been studied (cf. Schmidt 1989; Radlinski 2006), including coals and hydrocarbon source rocks (Bale and Schmidt 1984; Reich et al. 1990; Haenel 1992; Radlinski et al. 1996, 1999, 2000a,b, 2004; Radlinski and Radlinska 1999; Sastry et al. 2000; Sen et al. 2001, 2002a; McMahon et al. 2002; Prinz et al. 2004; Avdeev et al. 2006; Connolly et al. 2006; Radlinski 2006; Mares et al. 2009; Melnichenko et al. 2009, 2012; Sakurovs et al. 2009; Mastalerz et al. 2012, 2013; Cai et al. 2014; Thomas et al. 2014), sandstones, shales, and carbonates (Hall et al. 1983, 1986; Mildner et al. 1986; Wong and Howard 1986; Triolo et al. 2000, 2006; Sen et al. 2002b; Lebedev et al. 2004, 2006, 2007; Giordano et al. 2007; Anovitz et al. 2009, 2011, 2013a,b, 2014a,b, 2015a,b; Favvas et al. 2009; Jin et al. 2011, 2013; Clarkson et al. 2012a,b,c, 2013; Clarkson and Haghshenas 2013; Ruppert et al. 2013; Wang et al. 2013; Bahadur et al. 2014; Barbera et al. 2014; Swift et al. 2014), clays and soils (Borkovec et al. 1993; Knudsen et al. 2004), sulfides (Xia et al. 2014), and igneous rocks (Lucido et al. 1985, 1988, 1991; Floriano et al. 1994; Kahle et al. 2004, 2006; Anovitz et al. 2011; Bazilevskaya et al. 2013; Navarre-Sitchler et al. 2013). Only the more recent of these studies, however, have transitioned from examining small angle scattering of rocks as materials, often noting the apparent fractal character of pore surfaces, to using these data to understand geological processes in broader contexts.

Theoretical basis of scattering experiments

As noted above, the primary variable in the analysis of scattering data is the scattering length density. The average scattering length density ρ for a particle is simply the sum of the scattering lengths (b) per unit volume, i.e., the local average of scattering strength over a local subvolume. Thus:

ρ = [Σi=1..n bc(i)] / Vm        (31)

where bc(i) is the bound coherent scattering length for the ith of n atoms in the formula, and Vm is the molecular volume such that:

Vm = m / (NAv d)        (32)

where m is the molecular mass of the phase, NAv is Avogadro’s number, and d is the phase density. For X-rays bc(i) is replaced by Zi re, where Zi is the atomic number of the ith atom, and re is the classical radius of the electron (2.81 × 10−13 cm).
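
A minimal sketch of Equations (31)–(32) is given below (Python), computing the neutron scattering length density of quartz. The bound coherent scattering lengths and density used are approximate literature values and should be checked against standard tables for real work.

```python
# Minimal sketch of Eqns (31)-(32): neutron scattering length density of quartz.
# The scattering lengths (fm) and density are approximate tabulated values,
# used here only to illustrate the arithmetic.

N_AV = 6.022e23                        # Avogadro's number
B_C_FM = {"Si": 4.149, "O": 5.803}     # bound coherent scattering lengths, fm

def sld_cm2(formula, molar_mass_g, density_g_cm3):
    """Scattering length density (cm^-2) from a {element: count} formula."""
    b_total_cm = sum(n * B_C_FM[el] * 1e-13 for el, n in formula.items())  # fm -> cm
    v_molecular_cm3 = molar_mass_g / (N_AV * density_g_cm3)                # Eq. (32)
    return b_total_cm / v_molecular_cm3                                    # Eq. (31)

print(f"{sld_cm2({'Si': 1, 'O': 2}, 60.08, 2.65):.2e}")  # ~4.2e10 cm^-2 for quartz
```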

For a group of identical, randomly oriented particles the intensity of the coherent, elastic scattering is dependent only on the magnitude of Q, the scattering vector, and is given as:

I(Q) = N V^2 (Δρ)^2 P(Q) S(Q)        (33)

where N is the number of particles per unit volume, V is the volume of the particles, P(Q) is a form factor that depends on the shape of the particles, S(Q) is a structure factor that describes the inter-particle correlation structure, and

Δρ = ρparticle − ρsolvent        (34)

the scattering length density difference between the scattering particles and the surrounding solvent. Q is the scattering momentum transfer. For neutron or photon scattering, Q is defined as the difference between the incident and scattered wavevectors, ki − kf. Since elastic interactions are characterized by zero energy transfer, |ki| = |kf|, and Q is thus an inverse-space distance, defined as:

Q = |Q| = (4π/λ) sin θ = 2π/d        (35)

where d is the real-space correlation length, 2θ is the angle between the incoming and scattered beams (the scattering angle), and λ is the wavelength of the neutrons or photons. (Note that in some cases the scattering angle is defined as θ, in which case sin θ above becomes sin(θ/2).) Scattering data are commonly presented as intensity as a function of Q, or as some transform of that plot (see below).
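
A minimal sketch of Equation (35) (Python) converting between scattering angle, Q, and real-space length:

```python
import math

# Minimal sketch of Eq. (35): Q = (4*pi/lambda)*sin(theta), with 2*theta the
# scattering angle, and the corresponding real-space length d = 2*pi/Q.

def q_inv_nm(two_theta_deg, wavelength_nm):
    theta = math.radians(two_theta_deg / 2.0)
    return 4.0 * math.pi * math.sin(theta) / wavelength_nm

def d_nm(q):
    return 2.0 * math.pi / q

q = q_inv_nm(0.1, 0.5)     # 0.1 degree scattering angle, 0.5 nm (cold) neutrons
print(q, d_nm(q))          # Q ~ 0.022 nm^-1, d ~ 290 nm
```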

Scattering intensity is derived from the correlation function γ(r) of the material. In standard diffraction analysis, diffraction intensity is derived from the lattice structure of the material under study, and the static structure factor S(Q) is, therefore, composed of a sum of δ-function peaks. However, for a system without long-range order, there are no δ-function peaks. Thus, as noted by Radlinski (2006, see there for additional review of scattering concepts discussed below), for a disordered material the correlation function γ(r) plays the role that the lattice structure plays for a crystal. Following Debye and Bueche (1949) and Debye et al. (1957) the correlation function is defined as:

γ(r) = 〈η(x) η(x + r)〉 / 〈η^2〉        (36)

where η refers to the value of the property (e.g., scattering length density for neutrons, electron density for X-rays, or the dielectric constant for light) whose variation provides scattering contrast, referred to as scattering length density for both X-rays and neutrons. From this basic relationship it can be derived (Guinier and Fournet 1955) that:

I(Q) = 〈η^2〉 ∫ γ(r) e^(iQ·r) dV        (37)

For isotropic media this becomes:

I(Q) = 4π 〈η^2〉 ∫ γ(r) [sin(Qr)/(Qr)] r^2 dr        (38)

The key point here is that the correlation function and the scattering function are Fourier pairs. This concept should be familiar to most geochemists, as diffraction patterns and crystal structures are simply another example of the same relationship. In wide-angle diffraction, one is typically studying arrangements of point scatterers while in small-angle diffraction one examines a coarse-grained picture. In both regimes, one may find either disorder or long-range order.
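
The minimal sketch below (Python/NumPy) makes this Fourier-pair relationship concrete by numerically transforming an assumed exponential correlation function, the classic Debye–Bueche form, into a relative I(Q) via the isotropic relation in Equation (38). The correlation length and grids are illustrative choices, not values from any particular sample.

```python
import numpy as np

# Minimal sketch of Eq. (38): numerically transform an assumed isotropic
# correlation function gamma(r) into a (relative) scattering intensity I(Q).
# gamma(r) = exp(-r/a), the Debye-Bueche form, with an assumed length a.

a = 50.0                                  # assumed correlation length, nm
r = np.linspace(1e-3, 2000.0, 20000)      # radial grid, nm
dr = r[1] - r[0]
gamma = np.exp(-r / a)

def intensity(Q):
    """I(Q) ~ 4*pi * sum of gamma(r)*sin(Qr)/(Qr)*r^2 dr (arbitrary units)."""
    integrand = gamma * np.sinc(Q * r / np.pi) * r**2   # np.sinc(x) = sin(pi*x)/(pi*x)
    return 4.0 * np.pi * np.sum(integrand) * dr

for Q in (0.005, 0.02, 0.1):              # nm^-1
    print(Q, intensity(Q))
```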

The two-phase approximation and its limitations

In his discussion of small angle neutron scattering Radlinski (2006) noted that “for a wide range of substances, the SAS data for geological materials and porous media can generally be interpreted using a two-phase approximation.” This is because the scattering length density of most minerals is both roughly the same (3–7 × 10^10 cm−2) and significantly greater than that for an empty pore (~0 cm−2), and the scattering intensity is a function of the square of the difference. This greatly simplifies analysis of scattering data. As will be discussed below this also facilitates combination of scattering and imaging data, as the images need only be considered in binary (mineral/pore) form.

While the two-phase approach based on a combination of scattering and imaging data is computationally sound, Anovitz et al. (2009, 2013a) pointed out a caveat in its application that must be considered when analyzing scattering data from mineralogically complex rocks. Table 1 shows the scattering length densities of several minerals and their contrasts relative to vacuum (empty pores) and to dolomite. If we consider a calcite-dolomite-pore system with all three phases of the same size, then 1/3 of the scattering surfaces are calcite-pore, 1/3 are dolomite-pore, and 1/3 are calcite-dolomite. From Table 1 it is clear that the contrast due to the calcite-dolomite scattering is much smaller than that from the mineral-pore interfaces for both X-ray and neutron scattering. However, for neutron scattering mineral-mineral scattering can become much more important where hydrous minerals (e.g., brucite in Table 1) are involved. Similarly, as the pore fraction becomes small the influence of otherwise relatively weak, but abundant, mineral-mineral boundaries becomes more significant. It is also clear from Table 1 that the ratio between X-ray and neutron scattering is quite variable, depending on the minerals involved. Thus for rocks with complex mineralogy the possibility exists to combine SAXS and SANS to better understand the structure of the sample.

Sample preparation

A key factor in obtaining high-quality SAS results is the method used for sample preparation. This is because several factors compete to improve or adversely affect the data. The first is counting statistics. For a given sample, the larger the area, and the thicker the sample, the greater the scattering, but also, for increased thickness, the greater the absorption and probability of multiple scattering. For X-ray scattering this is typically not a problem, as the large flux, especially on synchrotron-based instruments, often makes counting times quite short. For neutrons, however, increasing the count rate can be a significant advantage, as neutron experiments are often “flux limited”. This is because neutrons are relatively weakly interacting with most materials, and the flux of even the most intense neutron sources (currently the Spallation Neutron Source at the Oak Ridge National Laboratory, Tennessee, USA) is relatively low, especially when compared with those from modern synchrotron X-ray sources.

While sample area is limited only by beam size and available sample size, the thickness of the sample cannot be increased indefinitely. This is both because of beam attenuation and because of multiple scattering effects that distort the scattering pattern in ways that, while generally predictable (scattering intensity is shifted from low to higher Q, especially at lower Q), are difficult to model and correct for.

The macroscopic bound atom scattering cross section of a material may be calculated as:

∑b = Σk ρk σk        (39)

ρk = nk ρn        (40)

ρn = ρ NAV / Σk (nk mk)        (41)

∑b = ρ NAV [Σk (nk σk)] / [Σk (nk mk)]        (42)

where mk is the atomic mass of element k, nk is the number of atoms of element k per scattering unit, NAV is Avogadro’s number, ρ is the mass density, ρn is the number density of the scattering units, ρk is the number density of atoms of element k, and σk is the total (coherent plus incoherent) bound atom scattering cross section for element k. The standard unit of absorption cross section is the barn. 1 barn = 10−28 m2 or 10−24 cm2.

The macroscopic bound atom scattering cross section is not, however, equivalent to the macroscopic total scattering cross section ∑S, which depends on a number of other factors such as temperature, neutron energy, and the structure and dynamics of the sample. This is given as:

∑S = ∫∫ (d^2∑/dΩ dEf) dΩ dEf        (43)

where (d^2∑/dΩ dEf) is the macroscopic differential cross section for scattering into a solid angle Ω and energy Ef. ∑S may be significantly different from ∑b, especially for hydrous materials. The macroscopic total cross section ∑T is then defined as the sum of the macroscopic total scattering cross section ∑S and the macroscopic absorption cross section ∑A. As with all transmission techniques, absorption from this source may be described using the well-known Beer–Lambert law as:

I/I0 = e^(−∑A t)        (44)

where ∑A is the macroscopic absorption cross section in units of inverse distance, and t is the distance through or into the material.

The value of the absorption cross section, however, depends on the wavelength and type of the radiation employed. As noted above, neutrons primarily interact with the nucleus or with unpaired orbital electrons. For neutrons, then, there are several types of interaction, depending on the isotope involved. An isotope may capture neutrons (characterized by the capture cross section), undergo fission (the fission cross section), or scatter neutrons (the scattering cross section). As noted above, scattering can be split into coherent and incoherent interactions, and the macroscopic scattering cross sections are just the product of the microscopic cross section per molecule and the number of molecules per unit volume. Typically neutron moderators (e.g., 1H) have large scattering cross sections, absorbers (e.g., 10B, 113Cd) have large capture cross sections, and fuels (e.g., 235U, 238U, 239Pu) have large fission cross sections. The products of capture may or may not be radioactive; such decay, typically emitted as gamma or beta radiation, is the origin of the activation observed for some materials that have been in a neutron beam. In calculating the absorption cross section one typically assumes natural isotope abundances unless the sample has been specifically modified; differences due to natural isotopic partitioning are typically too small to have much effect. For most materials the total cross section, then, is just the sum of the scattering and absorption cross sections. The latter also depends strongly on the energy of the neutron, and increases at low energies, typically as the inverse of the neutron velocity for lower energy neutrons. Thus cold neutrons are advantageous for studying many material properties as they interact more strongly with the sample. Absorption is also somewhat temperature-dependent, but this typically makes little difference for most (U)SANS studies. For most minerals the linear attenuation factor for a combination of absorption effects is relatively small. Thus it is multiple scattering, not absorption, that is the primary limitation in the preparation of most samples for neutron studies.

For electromagnetic radiation such as light and X-rays, however, absorption is primarily due to interactions with the electrons around each atom in a given material. Both scattering and absorption processes occur, but fission cross sections need not be considered. Energy level transitions in the electron orbitals can also be observed, permitting spectroscopic analysis as well (similar transitions can be observed using inelastic neutron techniques but, again, the absorption cross sections are very different; see Loong 2006). The interactions of light with matter have been considered extensively in a previous volume of this series (Fenter 2002) and will not be covered in detail here. However, in the context of sample preparation for SAXS/USAXS/SALS studies, increased absorption with thickness, especially at longer wavelengths, must be considered.

Given the above definitions, we may now calculate the total scattering probability for a slab sample of thickness t at an angle α to the beam as:

Pscatt = (∑S/∑T) [1 − e^(−∑T t / sin α)]        (45)

If we know the scattering probability, we can then solve for the thickness of a slab perpendicular to the beam as:

t = −(1/∑T) ln(1 − Pscatt ∑T/∑S)        (46)

which has no solution if:

Pscatt ≥ ∑S/∑T        (47)

(http://www.ncnr.nist.gov/instruments/dcs/dcs_usersguide/how_thick_sample/#cross_sections)

The second factor that must be considered is multiple scattering. If a neutron or electromagnetic wave can be scattered once during its transit through a sample, it stands to reason that it can be scattered more than once. Multiple scattering thus has two effects, both deleterious: it attenuates scattering that should be going in a given direction, and it intensifies the signal at another angle. Typically this transfers intensity from low-Q to higher-Q without significantly changing the integrated intensity. The relationship between the differential cross section and the pair correlation function only holds in the limit of single scattering. When an incident X-ray or neutron scatters multiple times from the sample, the straightforward relationship between the structure of the material and the measured scattering signal is lost.

There are two basic approaches to dealing with this problem: minimize the effect, or correct for it. For a given material, the thinner the sample and the shorter the transmission path, the less likely this effect is to be significant. Although it is not always clear, a priori, what this sample thickness should be, a rule-of-thumb is that if transmissions are greater than 90 percent multiple scattering effects are small. Shorter wavelengths also reduce multiple scattering effects, as they are less absorbed by the sample. A second approach (Sears 1975) is to subdivide the sample into a series of smaller samples using absorbing spacers parallel to the incident beam.

Alternatively, there have been several suggestions of data processing approaches to correcting for multiple scattering effects. These can be broken down into two groups, analytical approximations (e.g., Vineyard 1954; Blech and Averbach 1965; Sears 1975; Schelten and Schmatz 1980; Soper and Egelstaff 1980; Goyal et al. 1983; Berk and Hardman-Rhyne 1988; Andreani et al. 1989; Mazumdar et al. 2003) and Monte Carlo simulations (e.g., Copley 1988; Dawidowski et al. 1994; Rodríguez Palomino et al. 2007; Mancinelli 2012). In cases where the experimental design requires a thick sample in which multiple scattering is likely, it may also be possible to correct for the effect, at least in part, using an empirical approach (e.g., Sabine and Bertram 1999; Connolly et al. 2006), in which measurements are made on pieces of the same sample of various thicknesses. These can be fitted, possibly using the equation described by Vineyard (1954), and the multiple scattering from a sample of known thickness corrected for. Sabine and Bertram (1999) also suggest that measurements made at various thicknesses and wavelengths can be used to obtain absolute values for the scattering cross section for a material, but the reliability of this approach is uncertain.

For the purposes of sample preparation, the results of the earliest of these studies (Vineyard 1954) provide a good starting point. Vineyard (1954) considered an infinite slab of some thickness and assumed that only first- and second-order scattering were of importance, a monoenergetic beam, elastic scattering, and a quasi-isotropic approximation. Figure 16 shows the results of his model of the fraction of multiple scattering as a function of the angle of the neutron beam relative to the slab normal and a thickness parameter σTt, where σT is the total cross section (scattering plus absorption) and t is the thickness. He concludes that for modest scattering angles the ratio of the multiple scattering fraction to σTt is almost independent of scattering angle, and that the value of σTt must be smaller than approximately 0.05 if the fraction of multiple scattering is to be kept less than 10 percent. He also notes that, while the fraction of multiple scattering does decrease with thickness, it does so as σTt ln(σTt) and not linearly with t.
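
A minimal sketch of this rule of thumb (Python) is shown below, computing the slab thickness at which σTt reaches 0.05 and the corresponding transmission; the total macroscopic cross section used is an assumed, sample-specific value chosen only for illustration.

```python
import math

# Minimal sketch of the Vineyard (1954) rule of thumb: keep Sigma_T * t below
# ~0.05 so multiple scattering stays under roughly 10 percent. The total
# macroscopic cross section is an assumed, sample-specific value.

SIGMA_T_PER_CM = 2.0      # assumed total (scattering + absorption) cross section, cm^-1

max_thickness_cm = 0.05 / SIGMA_T_PER_CM
transmission = math.exp(-SIGMA_T_PER_CM * max_thickness_cm)

print(f"max thickness ~ {max_thickness_cm * 1e4:.0f} micrometers, "
      f"transmission ~ {transmission:.2f}")   # ~250 um, T ~ 0.95 for this assumption
```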

Figure 17 shows the sample preparation strategies developed by Anovitz et al. (2009) for (U)SANS. These have also been used successfully for USAXS measurements at the APS, and thus probably form a reasonable starting point for those interested in neutron and X-ray small angle studies of geological and ceramic materials. The figures on the left in Figure 17 show the original technique, in which samples were mounted on glass plates with superglue, ground to thickness, then floated off the glass using acetone to dissolve the glue and remounted on Cd masks. This was successful but difficult, as the thin samples tended to break. An alternative strategy of mounting the samples permanently on quartz glass plates is shown on the right of Figure 17. This is very simple to use and has been quite successful. In addition, as shown on the right-hand figure as well, powders or well cuttings can be cast in epoxy, then remounted on the quartz glass and ground to thickness. Initial experiments suggested that a thickness of approximately 150 μm yielded significant scattering intensity with minimal multiple scattering. This is illustrated in Figure 18, which shows the transmission measurements for a series of samples from the Eagle Ford shale as a function of thickness. As can be seen, near a thickness of 150 μm the transmission exceeds 90 percent, the value suggested by Vineyard (1954). Tests have shown this to be far superior to the alternative of filling 1-mm-wide quartz glass Hellma™ “banjo” or “lollipop” cells, in which multiple scattering, especially at the USANS scale, can be significant.

Figure 17 also shows the samples mounted on Cd masks. This is necessary to define the beam in (U)SANS, but is not needed in USAXS where the beam can be focused or masked before the sample. Typically USAXS beam sizes are fairly small (< 1 mm2 at APS), while those used at (U)SANS instruments are much larger (up to nearly 1 in2) to accommodate lower flux rates. In the latter case, however, specialized masks, such as rectangular, slit (cf. Navarre-Sitchler et al. 2013), or annular (Anovitz et al. 2015b) shapes, can be used.

Geometrical principles of small-angle scattering experiments

There are five basic geometries/approaches used for small angle scattering (SAS) experiments: pinhole (SAXS, SANS, V-SANS, SALS), Bonse-Hart (USANS, USAXS), Kratky, spin-echo (SESANS) and time of flight (TOF-SANS). Depending on the wavelength of the incident energy each covers a specific size range. Thus, one or more are often used in combination to extend the range of pore scales interrogated. For neutron facilities a world directory of SANS instruments is maintained by the Large Scale Structures Group at the Institut Laue-Langevin at: http://www.ill.eu/instruments-support/instruments-groups/groups/lss/more/world-directory-of-sans-instruments/

Pinhole SAS

The basic geometry of a two-dimensional pin-hole SAS system is shown in Figure 19. The scale of the instrumentation for pin-hole geometry instruments varies dramatically. SANS spectrometers can be as long as 80 m (D11 at the Institut Laue–Langevin), and laboratory-scale SAXS instruments may be only cabinet-sized. In addition, the type of detector must be selected for the energy type (X-rays, neutrons, light) of interest.

These instruments are, indeed, very similar in design to a standard pin-hole camera. As noted above, the scattering variable, Q, is defined as Q = (4π/λ) sin(θ). Thus, like the more familiar X-ray diffraction (XRD), scattering data are measured in reciprocal space. However, unlike XRD data they are derived not from the absolute square of the Fourier transform of the structure, but rather from the Fourier transform of the density-density correlation function.

For SAS instruments using a two-dimensional area detector, typical results of a scattering experiment look like those shown in Figure 20. Figure 20a shows an example of a sample of the Garfield Oil shale, which is typical of most patterns obtained for rocks. Such patterns may or may not be circular (this one is slightly elliptical, reflecting bedding structure in the shale), and more complex features may occur that represent large-scale repeating structures in the material. However, for simple isotropic systems the results are typically circular, or nearly so, and can be radially integrated, where the intensity I is often given as d∑c/dΩ, the macroscopic coherent differential scattering cross section. When normalized to an absolute scale (see below) this is given in units of inverse thickness (1/cm). Figure 20b, on the other hand, shows scattering from a powder sample of the synthetic mesoporous silica MCM-41. The pores of this material are arranged in a regular lattice structure, and the first two Debye-Scherrer rings can be directly observed in the pattern.
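
A minimal sketch of such a radial (azimuthal) integration is given below (Python/NumPy), binning a synthetic 2-D detector image into a 1-D I(Q) profile. The detector geometry (pixel size, sample-detector distance, wavelength) is an assumed configuration, and real data reduction also applies transmission, background, efficiency, and absolute-scale corrections.

```python
import numpy as np

# Minimal sketch: radially average a synthetic 2-D detector image into I(Q).
# Geometry values are assumptions for illustration; real reductions also apply
# transmission, background, and absolute-intensity corrections.

npix = 128
y, x = np.indices((npix, npix))
r_pix = np.hypot(x - npix / 2, y - npix / 2)          # radius (pixels) from beam center
image = 1.0 / (1.0 + r_pix**2)                        # synthetic isotropic pattern

pixel_mm, distance_mm, wavelength_nm = 5.0, 15000.0, 0.6   # assumed geometry
two_theta = np.arctan(r_pix * pixel_mm / distance_mm)
q = 4.0 * np.pi * np.sin(two_theta / 2.0) / wavelength_nm  # nm^-1

q_bins = np.linspace(q[q > 0].min(), q.max(), 50)
which = np.digitize(q.ravel(), q_bins)
intensity = np.array([image.ravel()[which == i].mean() for i in range(1, len(q_bins))])
q_mid = 0.5 * (q_bins[1:] + q_bins[:-1])
print(q_mid[:3], intensity[:3])
```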

It is often useful to extend the range of a SANS measurement to lower Q in order to better overlap the USANS data. The combined ranges of SANS (e.g., 0.008 nm−1 to 7.0 nm−1 for NG7 SANS at NIST/NCNR) and USANS (e.g., 0.0003 nm−1 to ~0.1 nm−1 for BT5 USANS at NIST (Barker et al. 2005), Q > 0.0002 nm−1 at HANARO/KIST (M.H. Kim, pers. comm.), Q > 0.00014 nm−1 at Kookaburra/ANSTO, Rehm et al. 2013) cover a wide range of scales, from approximately 1 nm to tens of micrometers. Combination of data from the two approaches is, however, somewhat limited by the uncertainties in both instruments in the overlap range. For typical rock materials this is not a factor for USAXS, where this region is covered by the USAXS instrument itself, although even in that case there tends to be greater noise in the overlap region between the pinhole SAXS and USAXS at much smaller sizes (Q ~ 0.1–0.2 nm−1).

One method to extend the Q-range of a SANS instrument employs a set of biconcave MgF2 lenses placed in the beam before the sample (Eskildsen et al. 1998; Choi et al. 2000; Susuki et al. 2003; Littrell 2004; Oku et al. 2004; Mildner 2005; Hammouda and Mildner 2007). These have the effect of shrinking the neutron spot size on the detector, thus lowering the Q range and increasing the intensity at low Q. Unlike the case for light, however, the refractive index of most materials for neutrons is less than one, but only by a few parts in 10^5 for cold (~10 Å) neutrons. Thus concave, rather than convex, lenses are convergent, but a number of them are needed for significant focusing to occur. Nonetheless, this technique has become a successful method to extend the minimum Q range of pinhole SANS instruments.

Another method that improves the quality of SANS data in the overlap region for neutron studies is the VSANS (Very Small Angle Neutron Scattering) instrument. There are several different designs for a stand-alone VSANS instrument: extremely long pinhole designs, such as the 80-m D11 instrument at the Institut Laue–Langevin, Grenoble, France (Q > 0.005 nm−1; Lindner et al. 1992; Lieutenant et al. 2007); focusing instruments, such as the KWS3 instrument at the Forschungs-Neutronenquelle Heinz Maier–Leibnitz (FRM II) near Munich, Germany (Alefeld et al. 1997, 2000a,b; Fig. 21), for which there are two sample positions (the 9.5 m position covers 0.001 nm−1 < Q < 0.03 nm−1, and the 1.3 m position extends the high end of the Q range to 0.2 nm−1); and multiaperture converging pinhole collimator designs (Nunes 1978; Carpenter and Faber 1978; Glinka et al. 1986; Thiyagarajan et al. 1997; Barker 2006; Brûlet et al. 2007; Désert et al. 2007; Hammouda 2008) such as the V16 instrument at BER II, Helmholtz-Zentrum Berlin (Clemens 2005; Vogtt et al. 2014; Q > 0.03 nm−1), the G 5-4 instrument (PAXE) at the Laboratoire Léon Brillouin, Saclay, France (Q > 0.01 nm−1), and the VSANS instrument under construction at NCNR/NIST, some of which are also combined time-of-flight instruments.

As an example, the VSANS instrument at NCNR/NIST (Fig. 22) is 45 m in total length, and uses a high-resolution (1.2 mm fwhm) 2-D detector along with a longer flight path (45 m, as opposed to 30 m for the more standard NG7 SANS instrument at the NCNR) to cover 0.002 nm−1 ≤ Q ≤ 7 nm−1. To enhance the count rate at lower Q, either larger samples with converging-beam collimation or relaxed resolution with slit collimation can be used. The instrument has three detectors that can be placed independently at different distances from the sample, allowing the full Q-range to be measured in one setting. The incident wavelength and wavelength resolution are controlled over a wide range with either a standard-resolution mechanical velocity selector (Δλ/λ = 12%), a high-resolution graphite monochromator (Δλ/λ = 2%), or a low-resolution filtered beam covering 0.4 nm ≤ λ ≤ 0.8 nm using a Be filter and guide deflector. The instrument has a large 2-m sample area permitting large sample environments to be used, and full beam polarization using a 3He analyzer is also available (Barker et al. pers. comm.).

A third type of pinhole instrument is a small angle light scattering (SALS) system, which uses a laser as the radiation source. While optical techniques have a long and honorable tradition in the analysis of geological materials, to our knowledge only one study (Liao et al. 2005) has, as yet, applied SALS to the analysis of porosity in rocks (coal), although the technique is well known in the study of aggregated particles, including soils. While some materials (black shales, sulfide ores) clearly will not lend themselves to such analysis, others, especially those typically analyzed in thin section by transmitted light, would appear to be good candidates. The longer wavelength of light (relative to X-rays) will extend the low-Q range of available data (cf. Zhou et al. 1991; Weigel et al. 1996; Burns et al. 1997; Alexander and Hallett 1999; Cipelletti and Weitz 1999; Holoubek et al. 1999; Bushell and Amal 2000; Bushell et al. 2002; Gerson 2001; Stone 2002; Chou and Hong 2004, 2008; Nishida et al. 2008; Romo-Uribe et al. 2010). Liao et al. (2005) used several techniques, including SALS, light obscuration, settling, and 2-D and 3-D imaging, to estimate the mass fractal dimensions of coal aggregates. They conducted over 50 tests, and achieved results in reasonable agreement with 3-D structural analysis, while noting that SALS was a much faster method of analysis. This suggests that the potential applicability of this approach to the analysis of geological materials deserves to be more fully explored.

Bonse–Hart

The Bonse–Hart instrument (Compton and Allison 1935; Fankuchen and Jellinek 1945; Bonse and Hart 1965; Shull 1973; Schwahn et al. 1985; Agamalian et al. 1997; Bellmann et al. 1997; Takahashi et al. 1999; Borbely et al. 2000; Hainbuchner et al. 2000; Treimer et al. 2001; Jericha et al. 2003; Villa et al. 2003; Barker et al. 2005; Hammouda 2008; M.H. Kim, pers comm.) is often referred to as a USANS or USAXS instrument. The “U” in this case stands for ultra, and refers to the instruments’ ability to measure scattering patterns at very low Q values. This range varies somewhat from instrument to instrument. As noted above, the USANS instruments at NCNR (Barker et al. 2005) and HANARO (M.H. Kim, pers. comm.) can reach Q values down to 0.0003 and 0.0002 nm−1, respectively. The USAXS instrument with a combined pinSAXS for high-Q data at the APS (Ilavsky and Jemian 2009) covers a range from 0.001 to 12 nm−1 at 10–18 keV. These instruments thus probe significantly larger scales than can be achieved with pinhole instruments of reasonably achievable lengths. Available USANS instruments include those at the NIST Center for Neutron Research (BT5, Barker et al. 2005), ANSTO, Australia (Kookaburra, Rehm et al. 2013), the Institut Laue–Langevin, Grenoble, France (S18, Hainbuchner et al. 2000), the Paul Scherrer Institute (ECHO), the Institute for Solid State Physics, Tokyo, Japan (C1-3 ULS, Aizawa and Tomimitsu 1995), and the Korean Institute of Science and Technology, South Korea (Kist-USANS, HANARO Cold Guide Hall). USAXS instruments include those at the Stanford Linear Accelerator (BL4-2, Smolsky et al. 2007), the European Synchrotron Radiation Facility (ID02, Narayanan et al. 2001), and the Advanced Photon Source (Ilavsky et al. 2009).

Figure 23 shows a schematic of the USANS instrument at the NCNR, which will be used as a general example of Bonse–Hart instruments (cf. Barker et al. 2005). The design of this instrument begins with sapphire and pyrolytic graphite prefilters and a premonochromator to remove higher energy components of the neutron spectrum and reduce radiation levels. The monochromator and analyzer are channel-cut, triple-bounce silicon single crystals. The (220) reflection selects a neutron wavelength of 2.4 Å, and the triple-bounce geometry dramatically reduces the width of the reflection. It was this latter innovation (Schwahn et al. 1985), coupled with cutting the crystal and adding absorbers between the reflecting surfaces (Agamalian et al. 1997), that permitted low-background rocking curves to be obtained and successful scattering curves to be measured on more weakly scattering materials.

Unlike a SANS/SAXS instrument where a range of Q values is measured simultaneously, in a Bonse–Hart instrument the Q value is varied during analysis by rotating the analyzer in small increments. If no sample is present the combined Bragg reflections require a very precise alignment for neutrons to pass through the instrument. If, however, a scattering sample is placed between the two crystals the alignment condition becomes satisfied for neutrons scattered at a given angle. Five end-window counters placed in the final reflection direction provide neutron detection.

A key factor in understanding and analyzing data obtained using a Bonse–Hart instrument is the effect of slit geometry. A standard SANS instrument uses a two-dimensional detector, thus explicitly measuring the scattering pattern at all observable angles. A Bonse–Hart instrument, by contrast, uses a one-dimensional slit geometry. Under these conditions the two-dimensional pattern is compressed, or “smeared”, into one dimension. As noted by Hammouda (2008), for the NIST instrument the slit geometry provides a very tight standard deviation in Q resolution (approximately 2.25 × 10−5 Å−1) in the horizontal direction, and a much wider one (approximately 0.022 Å−1) in the vertical direction. This is illustrated in Figure 24. While scattering from a sample is typically radial, if not necessarily circularly symmetric, the slit geometry integrates the actual scattering over a narrow horizontal range but a wide vertical range, thus including in that integration intensities from greater radial dimensions than the measured Q value (along the x-axis). For anisotropic samples it may be possible to account for the anisotropy using asymmetry values measured at the lowest Q values on the SANS instrument; this assumes, however, that the effect is not Q-dependent, which is unlikely.

One limitation of most USAS instruments, already mentioned, is that the one-dimensional detection geometry limits the ability to analyze non-isotropic scattering. The USAXS instrument at the APS permits direct measurement of the asymmetry of the scattering pattern at low Q (Ilavsky et al. 2009). This instrument adds a second set of channel-cut, two-reflection, Si (220) crystals before and after the sample. The main crystals are oriented vertically, and the second, inner set horizontally, thus collimating the beam both horizontally and vertically. The sample can be rotated, allowing measurements in multiple directions, either by fixing the sample angle (α) and varying Q, as in standard USAS experiments, or by fixing Q and varying α. A two-dimensional pattern (obtained point-by-point) can be obtained by varying both Q and α. The additional collimation reduces the intensity, however, which limits the practical range of Q to 10−4 Å−1 < Q < 0.1 Å−1.

The slit-smearing integration is given as (Barker et al. 2005; Hammouda 2008):

Is(Q) = (1/ΔQv) ∫₀^ΔQv I(√(Q² + u²)) du      (48)

where Is(Q) is the measured, slit-smeared intensity and ΔQv is the vertical slit half-width.

The resulting data may either be fit as is, accounting for the observed smearing, or desmeared before fitting using one of several available algorithms (e.g., Kline 2006; Ilavsky and Jemian 2009). The latter makes association with SANS results at higher Q, and with results obtained from image analysis at lower Q, easier, but may introduce additional noise and uncertainty, especially if the scattering pattern is not circular.
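As an illustration of fitting slit-smeared data directly, the smearing integral can be applied numerically to any trial model before comparison with the measured curve. The following sketch is not taken from any instrument's reduction software; it assumes an infinite-slit approximation with a vertical half-width ΔQv, and the model function and all parameter values are hypothetical:

import numpy as np

def slit_smear(model, q, dqv, n=501):
    """Numerically apply the slit-smearing integral of Equation (48):
    the model is averaged over a vertical band of half-width dqv."""
    u = np.linspace(0.0, dqv, n)                      # vertical-Q grid
    q_eff = np.sqrt(q[:, None]**2 + u[None, :]**2)    # effective |Q| values
    return np.trapz(model(q_eff), u, axis=1) / dqv

# hypothetical Porod-plus-background model and an assumed half-width (1/Angstrom)
porod_model = lambda q: 1.0e-9 * q**-4 + 0.1
q = np.logspace(-4, -2, 50)
i_smeared = slit_smear(porod_model, q, dqv=0.022)

Fitting then proceeds by adjusting the model parameters until the smeared model matches the measured slit-smeared curve.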

TOF-SANS

The difference between continuous-source SAS instruments and time-of-flight (TOF) instruments lies less in the design of the instrument itself than in the nature of the source. For neutrons, continuous sources are typically reactors (e.g., NCNR, HFIR), while pulsed sources are either spallation sources (WNR/LANSCE, SNS, ISIS), continuous sources to which a neutron chopper has been added, or the pulsed IBR-2 reactor in Dubna, Russia. As the name implies, in TOF-SAS instruments the initial flux of radiation hits the sample in a single pulse of some known time width and intensity, usually using a wide wavelength range simultaneously. Each pixel in the detector must, therefore, measure the intensity as a function of time relative to the time the pulse hits the sample, and the time signal for each neutron can be recorded (time-stamped). Continuous sources, by contrast, typically operate in an integrating mode. For most geological applications there is not much difference between continuous and TOF instruments, although the wide wavelength range can complicate the use of sample-environment materials with a Bragg edge, such as a sapphire window. However, TOF instruments do provide the opportunity to measure the kinetics of fast processes, and may be particularly useful for dynamic imaging. Examples of such instruments are the EQ-SANS and TOF-SANS instruments at the SNS at Oak Ridge National Laboratory, LOQ and SANS2d at ISIS, REFSANS at the FRM II, and LQD at LANSCE, Los Alamos National Laboratory.

Kratky geometry

The Kratky geometry, often seen in commercial SAXS instruments, uses a line source and slit-block collimation, rather than a pinhole (Kratky and Skala 1958). This allows for smaller laboratory-scale instruments, and often an increased flux on the sample. However, the line geometry induces smearing, much as the Bonse–Hart geometry does. The scattering is highly collimated perpendicular to the slit direction, but allowed to broaden parallel to it, although some designs use a focused line geometry that minimizes smearing.

GISAS

Unlike transmission geometries, grazing-incidence SAS is a surface-sensitive technique, commonly used for the analysis of nanostructured thin films. GISAS provides the opportunity to study surfaces using small angle techniques in cases where the intensities obtained from normal transmission geometries would be very small. These measurements can be performed in situ and, at least for GISAXS where the fluxes are suitably high, can be done in a time-resolved manner to study reaction kinetics. They can also be used to study buried structures non-destructively (Naudon 1995). Most importantly, because the areas illuminated for both X-ray and neutron studies are fairly large, GISAS techniques probe a statistically relevant surface area of square millimeters or larger. As this technique has been recently reviewed in this series (De Yoreo et al. 2013) it will be only briefly discussed here.

The first GISAS experiments were done using X-ray instruments (GISAXS, Levine et al. 1989, 1991; Müller-Buschbaum et al. 1997, 2003; Naudon and Thiaudiere 1997), and the technique has become well known for X-ray applications (cf. Rauscher et al. 1999; Lazzari 2002; Doshi et al. 2003; Forster et al. 2005; Henry 2005; Lee et al. 2005; Roth et al. 2006; Urban et al. 2006; Sanchez et al. 2008; Renaud et al. 2009). For neutron sources GISANS is in a more developmental stage. It was first reported in 1999 (Müller-Buschbaum et al. 1999a,b) but has become much more widely applied since, largely for polymer applications (Müller-Buschbaum et al. 2003, 2004, 2006, 2008; Wunnicke et al. 2003; Wolff et al. 2005; Ruderer et al. 2012). A time-of-flight version has also been developed (Forster et al. 2005; Kampmann et al. 2006; Müller-Buschbaum et al. 2009; Kaune et al. 2010; Müller-Buschbaum 2013).

The general geometry of a GISAS experiment is shown in Figure 25. The beam is directed at the sample at a low incident angle (αi), and the reflections are detected at both a final angle (αf) and an out-of-plane angle (2θ) which, as above, define a momentum transfer vector (Q) with units of inverse distance. The scattering pattern typically contains a peak for specular reflection (where αf intersects the detector in Figure 25), as well as a Yoneda peak defined by the critical angle for total external reflection of the material. In many cases both reflected and transmitted data are detected (cf. Lee et al. 2005). At angles less than the critical angle a certain amount of sample penetration occurs (the so-called evanescent wave), giving this technique the very limited depth penetration (typically only a few nanometers) needed for surface and near-surface analysis. This depth is sensitive to the incident angle. Form factors (defined by the shape of individual scatterers, see below) typically dominate the GISAS pattern for randomly oriented nanoparticles with well-defined shapes, while structure factors (defined by the relationship between the particles) tend to dominate scattering for ordered layers (e.g., polymer thin films, reacted surface layers). While reflectometry techniques have been extensively applied to analysis of mineral surfaces (e.g., Fenter et al. 2000a,b,c, 2001; Cheng et al. 2001a; Teng et al. 2001; Fenter 2002; Schlegel et al. 2002, 2006; Fenter and Sturchio 2004; Geissbuhler et al. 2004; Predota et al. 2004; Zhang et al. 2004, 2006, 2008; Park et al. 2006; Vlcek et al. 2007), experiments using GISAS for the analysis of surface reactions and pore precipitation have been more limited (e.g., Jun et al. 2010; Fernandez-Martinez et al. 2012a,b, 2013; De Yoreo et al. 2013; Panduro et al. 2014), and we know of none using GISANS. These studies have shown, however, that the potential applications of this technique for the analysis of precipitation in pores or on mineral surfaces are significant.

SESANS

The final type of small angle scattering instrument to be discussed here is the spin-echo SANS experiment (SESANS; Pynn 1980; Keller et al. 1995; Gähler et al. 1996; Rekveldt 1996; Bouwman and Rekveldt 2000; Bouwman et al. 2000, 2004, 2005, 2008; Krouglov et al. 2003a,b,c; Rekveldt et al. 2003, 2005; Uca et al. 2003; Pynn et al. 2005; Grigoriev et al. 2006; Plomp et al. 2007; Andersson et al. 2008a,b; Li et al. 2010; Washington et al. 2014). As with other SAS experiments the spin-echo technique also measures elastic scattering, but begins with a polarized neutron beam, and is based on the Larmor precession of neutron spins in a magnetic field (Mezei 1972, 1980). In a spin-echo instrument there are two identical magnetic fields with opposite orientations along the beam path: one before and one after the sample position. In the absence of a sample the neutron precesses through some angle θ1 in the first field, and this precession is reversed in the second field so that Δθ = 0 and the neutron polarization is returned to its original state. If a sample is present between the two fields, however, small angle scattering by the sample breaks this symmetry, depolarizing the beam, because the path lengths in the second field are no longer equal to those in the first. This is measured using a second polarizer (an analyzer) after the second Larmor device. The polarization of the neutron beam P(z) is then a direct function of the projection G(z) of the autocorrelation function γ(r) of the density distribution of the sample ρ(r), where z is the spin-echo length (in μm). In SANS, by contrast, the intensity distribution I(Q) is the Fourier transform of the autocorrelation function (Andersson et al. 2008a,b). The relationships between these various functions are summarized in Figure 26. While SESANS typically covers a size range similar to that of USANS (typically from tens of nm up to several μm) it has several advantages. The flux is much higher, improving the counting statistics and shortening counting times. No desmearing is required, and multiple scattering is easily accounted for, allowing much thicker samples to be used. In addition, the results are obtained in real, rather than inverse, space. Because of this, however, the data do not directly overlap with pinhole SANS at higher Q (cf. Rehm et al. 2013).

To date, however, there has been very little work on rock materials using SESANS. Figure 27 shows preliminary data (Anovitz and Bouwman, unpub.) obtained from samples analyzed using (U)SANS by Anovitz et al. (2009). It is clear from these data that SESANS can be successfully applied to rock materials, and that there is significant opportunity to utilize this approach for geologic applications.

Magnetic SANS

Another SANS technique that has received little attention for its potential geological applications is magnetic scattering. As mentioned above, neutrons interact not just with the nucleus of an atom, but with unpaired orbital electrons as well. Thus they are highly suited for studying the magnetic structure of materials, and there is a significant literature on this topic (e.g., Scharpf 1978a,b; Cebula et al. 1981; Dormann et al. 1997; Ohoyama et al. 1998; Wiedenmann 2005; Zhu 2005; Michels and Weissmüller 2008). As the focus of this article is pore structures, however, we will not discuss this approach in any further detail, except to comment that its utility in understanding geomagnetism (and possibly particle transport in porous media using magnetic test particles) has yet to be explored.

Figure 28 shows an example of the use of SANS to investigate magnetic systems. In type II superconductors an externally imposed magnetic field may form a flux lattice on the surface of the crystal. The magnetic field lines form filaments or vortices with a quantized magnetic flux that penetrate the superconductor in a regular lattice structure. The lattice constants of these vortex structures are on the order of a few nm. The sensitivity of neutrons to magnetic ordering implies that, in a small-angle neutron scattering experiment, these flux lattices produce single-crystal diffraction patterns. The magnetic diffraction pattern in Figure 28 shows the inverse-space lattice for YBa2Cu3O7−δ (YBCO) taken at 2 K in a 4-T applied field after field cooling. Other examples of small-angle magnetic scattering include analysis of magnetic nanoparticles (e.g., Krycka et al. 2010). To our knowledge, however, small-angle scattering has yet to be advantageously used to study geomagnetism.

Contrast matching

Contrast matching is a very useful technique in small angle scattering studies that provides a method to separate connected from unconnected porosity. In addition (Anovitz, unpub.) it can also be used in multiphase materials to explore the question of distinguishing bulk mineralogy from reactive mineralogy as a function of pore size. Figure 29 shows the basics of this approach. One of the key differences between techniques such as BET and MIP and scattering approaches is that scattering sees all of the porosity in the rock (as well, possibly, as effects from grain/grain boundaries, see above), while sorption/intrusion techniques interrogate only accessible porosity, which may be limited by intrusion pressures for non-wetting fluids as described by the Young–Laplace (Washburn’s) Equation (Eqn. 11 above). It is, therefore, of significant interest to separate connected from unconnected porosity in order to relate scattering measurements to phenomena such as permeability and mass transport.

As discussed above, the intensity of scattering is a function of the square of the scattering length density difference at an interface. Thus, assuming a two-phase system, if a rock is soaked in a wetting fluid (so that the fluid can be assumed to soak into all accessible pores) with a scattering length density equal to that of the matrix, all of the accessible pores will “disappear” during the scattering experiment, and scattering will only be observed from unconnected pores.

In many cases the contrast-match point may be unknown. This is especially true when more than one phase is present in the system. In this case a series of fluid mixtures with different scattering length densities can be used. Because contrast is a function of the square of the difference in scattering length density, to a first approximation the intensity should be a parabolic function of the fluid scattering length density. If more than one solid phase is present, however, and if the phases exposed vary with pore size, several parabolas may be needed to fit the data, and these may vary with Q. Alternatively, one can plot the square root of the intensity, often either a value projected to I(Q = 0) or the value at a minimum Q, as a function of scattering length density; this plot should be composed of two linear trends.
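As a sketch of how such a contrast series might be reduced, the match point can be estimated from the vertex of a parabola fitted to the low-Q intensity as a function of fluid scattering length density. A single match point (single solid phase) is assumed, and the intensities below are invented for illustration:

import numpy as np

# scattering length densities of H2O/D2O mixtures (10^10 cm^-2), spanning the
# H2O-D2O range quoted below, and hypothetical low-Q intensities for each
rho_fluid = np.array([-0.56, 1.0, 2.5, 4.0, 5.5, 6.39])
i_low_q = np.array([31.0, 13.5, 2.1, 0.4, 5.8, 14.7])     # invented values

# I is proportional to (rho_matrix - rho_fluid)^2, so fit a parabola in rho
a, b, c = np.polyfit(rho_fluid, i_low_q, 2)
rho_match = -b / (2.0 * a)          # vertex of the parabola = match point
print("estimated matrix sld: %.2f x 10^10 cm^-2" % rho_match)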

This approach has largely been applied in (U)SANS experiments (cf. Stuhrmann and Kirste 1965; Stuhrmann 1974, 2008, 2012; Ibel and Stuhrmann 1975; Stuhrmann et al. 1976, 1977, 1978; Beaudry et al. 1976; Williams et al. 1979; Akcasu et al. 1980; Jahshan and Summerfield 1980; Koberstein 1982; Hadziioannou et al. 1982; Bates et al. 1983; Hasegawa et al. 1985, 1987; Allen 1991; Hua et al. 1994; Radlinski et al. 1999; Smarsley et al. 2001; Littrell et al. 2002; Connolly et al. 2006; Stuhrmann and Heinrich 2007; Clarkson et al. 2013; Ruppert et al. 2013; Thomas et al. 2014). This is because the scattering length density for neutrons depends on the isotope, rather than just the element involved, and there is a very large difference in scattering length density between hydrogen and deuterium, and thus between H2O (neutron sld = −0.56 × 10¹⁰ cm−2) and D2O (neutron sld = 6.392 × 10¹⁰ cm−2). Where useful, hydrogenated/deuterated methanol or other solvents can be used (cf. Allen et al. 2007). This range covers that of most minerals, allowing a range of compositions to be matched. While a similar approach is possible for USAXS by adding a highly soluble, high-Z material to water, molecular liquids, metals or other fluids (cf. Smith 1971; Tolbert 1971; Strijkers et al. 1999; Dore et al. 2002; Laszlo et al. 2005; Laszlo and Geissler 2006; Jahnert et al. 2009; Mter et al. 2009; Kraus 2010), it has not, to our knowledge, been tried for geological materials other than coal (Smith et al. 1995).

An example of the utility of this approach is shown in Figure 30. Littrell et al. (2002) characterized a series of activated carbons produced from paper mill sludge using ZnCl2. They found that the surface area of the carbons increased as the concentration of ZnCl2 was increased. Contrast matching experiments were used to demonstrate the presence of two phases, a zinc-rich particle and a nanoporous carbon, the relative sizes of which were determined from the Q-dependence of the contrast curves. Such an approach (Anovitz et al. 2015b) can also be used to analyze the pore-surface mineralogy as a function of pore size, providing a link between porosity, overall mineralogy, and reactive mineralogy as a function of pore size and concentration.

Reduction and analysis of SAS data

Once SAS data have been obtained they must be processed prior to analysis (see Hammouda 2008 and Ilavsky and Jemian 2009 for discussions of data reduction and analysis). The extent of this processing depends on the research goals of the project. For instance, if all that is of interest is the sizes of scatterers in some solution, then a simple radial averaging may be all that is needed to determine the Q value of peaks in the data. However, for most geological applications, where the concentration of scatterers in a given rock volume is of interest (i.e., the pore fraction or absolute pore volume distribution), the data must first be corrected for various effects and normalized to an absolute intensity scale (units of cm−1; Wignall and Bates 1987; Russell et al. 1988; Heenan et al. 1997; Glinka et al. 1998; Orthaber et al. 2000; Hu et al. 2011b; Wignall et al. 2012). This can be done either by referencing the results to a precalibrated sample or relative to the intensity of the direct beam. This must be decided before the SAS experiment is performed, as it requires that certain additional data be available, some of which must be acquired during the SAS experiment.

The first corrections that must be made are for the effects of dead time, nonuniformities in detector pixel efficiency, scattering from the empty cell, and blocked-beam or dark-current scattering, which is a measurement made with the beam blocked by a strong absorber (e.g., boron nitride for neutrons) or by a closed shutter. Dead-time corrections are made by normalizing the total counts to the beam-monitor counts, and pixel-efficiency corrections by dividing each pixel intensity by that for an isotropic scatterer such as water or plexiglass normalized to 1 count/pixel (Glinka et al. 1998). Then, for each pixel in the scattering data, one calculates

Icor(Q) = [Isample+cell(Q) − Ibgd(Q)] − (Tsample+cell/Tcell)[Icell(Q) − Ibgd(Q)]      (49)

where I is the intensity of the scattering, and T is the transmission, the measurement of which is usually made at one detector distance for each wavelength used in the measurement. For samples mounted on quartz glass the “cell” is a quartz glass slide mounted on a mask of the same diameter (for neutron measurements) with no sample on the slide. This also corrects for other effects such as scattering from beam windows, aperture edges, air in the beam path, and spillage of the direct beam around the beam stop (Glinka et al. 1998). Transmission measurements are measurements of the fraction of the incident beam that is not scattered by the sample, and are the ratio of the transmitted beam intensity, integrated only over the area of the beam spot, to that of the incident beam measured with no sample or cell present. They are often measured with an attenuator in place to avoid damaging the detector. If the sample is mounted on/in a cell, transmissions must be measured for both the sample and the empty cell, as shown in Equation (49) above.
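A minimal sketch of this correction step, assuming the empty-cell and blocked-beam subtraction written in the form of Equation (49) and detector arrays already normalized to monitor counts, might look like the following (all array names, counts, and transmission values are hypothetical):

import numpy as np

def correct_scattering(i_sample_cell, i_cell, i_blocked, t_sample_cell, t_cell):
    """Empty-cell and blocked-beam (dark-current) correction, Eqn. (49) form."""
    return (i_sample_cell - i_blocked) - (t_sample_cell / t_cell) * (i_cell - i_blocked)

shape = (128, 128)                                    # toy 2-D detector
i_sam = np.random.poisson(50.0, shape).astype(float)  # sample + cell run
i_emp = np.random.poisson(20.0, shape).astype(float)  # empty-cell run
i_blk = np.random.poisson(2.0, shape).astype(float)   # blocked-beam run
i_cor = correct_scattering(i_sam, i_emp, i_blk, t_sample_cell=0.85, t_cell=0.95)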

The final step is to normalize the data to an absolute scale. For SANS the absolute scattering cross section (dΣ/dΩ(Q); Turchin 1965; Glinka et al. 1998; Wignall et al. 2012) is defined as the number of neutrons scattered per second into a unit solid angle divided by the incident neutron flux (n/(cm² s)). Normalized to sample volume this has units of cm²/cm³, or cm−1. The relationship between the cross section and the adjusted count rate I(Q) is then:

dΣ/dΩ(Q) = I(Q) r² / (I0 Δa ε A t T)      (50)

where I0 is the incident intensity (flux) on the sample, Δa is the area of a detector element, r is the distance between the sample and the detector, ɛ is the detector efficiency discussed above, A is the sample area, t is the sample thickness, and T is the measured transmission. At high Q (small sample-to-detector distances) corrections for geometric effects may be needed as well.

The result of the above calculations is a two-dimensional scattering pattern similar to that shown in Figure 20 in which all of the values are on an absolute scale. While the data may be analyzed in two-dimensional form using a program such as SasView (http://www.sasview.org/), it is more common to convert them to one-dimensional form by angularly averaging the results. This yields a single curve showing intensity as a function of Q. There are, however, a few caveats to this process. Examples of two of these are shown in Figure 31. The left-hand image shows scattering (un-normalized) from a shale with the bedding oriented horizontally and parallel to the beam. The asymmetry of the result is clear (cf. Anovitz et al. 2014). A fully radial average of this sample would, therefore, smear out these differences. The figure on the right shows an example of asterisms. In this case these may be caused by oriented micas in the sample, but fractures, either natural or accidental, may cause similar results.

In both cases the solution is to replace complete angular averaging with sector averaging. Data reduction packages include a function to allow averaging of only a selected angular range of the data. For oriented samples like shales this permits analysis of scattering perpendicular to, or parallel to bedding, shear planes, or other oriented structural fabrics in the sample. For samples with unwanted asterisms these angles can be avoided. An alternative approach for a sample with asterisms is to mask out the directional scattering, and radially average the remainder.

Figure 32 shows an example of an integrated scattering curve (Anovitz et al. 2015a) for a sample of St. Peter sandstone with experimentally generated quartz overgrowths. This presentation, log(I(Q)) plotted as a function of log(Q), is sometimes referred to as a Porod plot. The data shown are a combination of data from three sources: SANS, USANS, and calculations from backscattered electron images taken on a scanning electron microscope (BSE/SEM). The Q-ranges over which these data were obtained are indicated, although they are approximate because results from the three techniques overlap. The data in Figure 32 show several typical features. The intensity at high Q is apparently independent of Q. This reflects the incoherent background, and in most geological samples is primarily a function of the hydrogen (water or hydroxyl) content of the sample. In the mid-Q range the data can be fitted to a power-law slope. While we have shown that, in many cases, there are actually significant details in this region (Anovitz et al. 2013a, 2015a), to a first approximation the log–log slope represents the fractal properties of the sample. Several scattering studies suggest that the length correlations of pore–grain interfaces can often be described by self-similar fractals with non-universal dimensions (2 < D < 3) (cf. Bale and Schmidt 1984; Mildner and Hall 1986; Wong et al. 1986; Radlinski et al. 1999; Connolly et al. 2006; Anovitz et al. 2009, 2011, 2013a,b, 2014a,b, 2015a,b; Jin et al. 2011; Mastalerz et al. 2012; Melnichenko et al. 2012; Navarre-Sitchler et al. 2013; Wang et al. 2013; Swift et al. 2014). This leads to a non-integer power law for the scattering given by I(Q) = I0Q^−x + B, where B is the incoherent background. As summarized by Radlinski (2006) the magnitudes of these slopes are determined by the surface from which scattering occurs. Slopes between −2 and −3 are characteristic of mass fractal systems, those between −3 and −4 of surface fractal systems, and those between −4 and −5 of non-fractal “fuzzy” interfaces. These may be interfaces in which the scattering length density varies monotonically between two phases, or ones in which this appears, on average, to be the case, such as needle-like scatterers imperfectly aligned towards the beam. For a volume or mass fractal scatterer, therefore, Dm = x, and for a surface fractal Ds = 6 − x (Bale and Schmidt 1984). Smooth interfaces give rise to scattering with a power-law slope of −4, which is referred to as Porod scattering.
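A simple way to extract the power-law exponent, and from it an apparent fractal dimension, is to fit the mid-Q data to the I(Q) = I0Q^−x + B form given above. The sketch below uses synthetic data and hypothetical parameter values; real data would first be restricted to the Q-range over which a single power law holds:

import numpy as np
from scipy.optimize import curve_fit

def power_plus_bkg(q, i0, x, b):
    """I(Q) = I0 * Q^-x + B, the power-law-plus-background form used above."""
    return i0 * q**(-x) + b

# synthetic mid-Q data resembling a surface-fractal scatterer (x = 3.4)
q = np.logspace(-3, -1, 60)                     # 1/Angstrom
i_obs = 1.0e-7 * q**-3.4 + 0.05                 # 1/cm

popt, _ = curve_fit(power_plus_bkg, q, i_obs, p0=(1.0e-7, 3.5, 0.05))
x = popt[1]
if 3.0 < x < 4.0:
    print("surface fractal, Ds =", 6.0 - x)
elif 2.0 < x <= 3.0:
    print("mass fractal, Dm =", x)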

Such suggestions of fractal surface and mass scaling are common in scattering studies of rock materials. In general, a surface fractal is an object whose surface area scales in a non-integer manner with its radius (or some other selected ruler length), as:

S ∝ r^Ds      (51)

For a non-fractal, three-dimensional object Ds = 2 (as in the surface area of a sphere, A = 4πr²) and for a surface fractal 2 < Ds < 3. As noted by Anovitz et al. (2013a), however, while the ranges for a two-dimensional surface fractal are one less than the range just given, the relationship between the fractal dimension of a three-dimensional object and a two-dimensional slice through it is uncertain. For a mass fractal, it is the mass (or volume) that scales in a non-integer manner:

M ∝ r^Dm      (52)

where Dm = 3 for a non-fractal object, as in the volume of a sphere (V = (4/3)πr³), and for a three-dimensional mass fractal object again 2 < Dm < 3. The question becomes, however, how both can co-exist in a given rock. Figure 33 shows one deterministic example. In Figure 33a (left) the individual particles are represented by a simple, three-level, two-dimensional surface fractal object. Figure 33b (right) shows how these can be combined into a two-dimensional mass fractal, a Sierpinski carpet, with a mass fractal dimension of 1.8928.

There may also be several inflection points in the data. These include a point, the surface-fractal correlation length r0, which forms the upper scaling limit of surface fractal behavior. Below this Q-range the scaling exponent is dominated by mass fractal behavior. A second point may separate the mass-fractal scattering region from a fuzzy-scattering region (this is sometimes found at high Q as well, cf. Naudon et al. 1994). At yet lower Q-values, corresponding to length scales greater than the largest aggregates (the mass-fractal correlation length), the slope of I vs. Q should flatten. This “Guinier region” is not commonly observed in direct scattering data for rock samples, but is present in data obtained from image analysis. The latter, however, may reflect the scale of the images, rather than any maximum in the samples themselves.

The surface fractal dimension can also be used to determine the surface area to volume ratio as (Allen 1991):

(S/V) = (S/V)0 (r/r0)^(2 − Ds)      (53)

This equation is based on the assumption that the fractal surface is self-affine (i.e., the structure is invariant under an anisotropic scaling transformation). Because the surfaces of these materials are fractal, the magnitude of the surface area depends on the size of the “ruler” used to measure it. (S/V)0 is the surface area to volume ratio for a smooth particle, r is the fractal “ruler” length, r0 is the correlation length representing the upper limit of surface fractal behavior, and Ds is the surface fractal dimension. Anovitz et al. (2009) noted, however, that these surface areas represent only those surfaces that scatter neutrons, and therefore this represents primarily pore/grain boundaries although, as noted above, large concentrations of two-mineral grain boundaries may contribute. Because the rocks under consideration consisted largely of calcite, they selected a value of 7.165 Å for r, the cube root of the calcite unit-cell volume (367.85 Å³). Values for r0 and Ds were taken from the SANS/USANS data. Wang et al. (2013) modified this approach slightly, again selecting the crossover length (called 2l there) between the regions of surface and mass fractal scattering for r0, and 7 Å, the size of the N2 gas molecule used for adsorption studies, as r (called d in Wang et al. 2013).

As can be seen in Figure 32, another inflection point occurs at high Q. This is determined by the intensity of the incoherent background. In some cases Bragg peaks, typically due to the large d-spacings of clays, are observed through the background, which complicates background subtraction, but no such peaks are observed in the data for the sample in Figure 32. The first step in analyzing the data is often to subtract this background. At high Q the Porod law (Porod 1951, 1952) provides the relationship between the specific surface area S of the smooth interface bounding an ideal two-phase system and the scattered intensity as Q goes to infinity:

I(Q) = 2π(Δρ)²S Q^−4 + Ib      (54)

where Ib is the background and S is the specific surface area (surface area per unit volume, units of cm²/cm³ or 1/cm). Figure 34 shows one way to obtain the background value (Glatter and Kratky 1982). The slope of a plot of Q⁴I(Q) as a function of Q⁴ has units of 1/cm, and is dominated by the data at high Q. From Equation (54) this slope, therefore, defines the background. This plot also provides a convenient way to judge whether Bragg peaks, which are often rather broad in this region for SANS data, are present. If so, this approach cannot be used to determine incoherent background values.
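In practice the background can be estimated directly from the high-Q tail by a linear fit of Q⁴I(Q) against Q⁴, as in Figure 34; the slope is the background and the intercept is the Porod constant discussed below (Eqn. 55). The data in this sketch are synthetic:

import numpy as np

# synthetic high-Q data: Porod term plus flat incoherent background
q = np.logspace(-2, 0, 100)                    # 1/Angstrom, high-Q portion
i_q = 5.0e-9 * q**-4 + 0.12                    # 1/cm

slope, intercept = np.polyfit(q**4, q**4 * i_q, 1)
background = slope                             # incoherent background, 1/cm
porod_constant = intercept                     # Cp, 1/(cm Angstrom^4)
i_net = i_q - background                       # background-subtracted curve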

The intercept of the line defined by Equation (54):

Cp = 2π(Δρ)²S      (55)

is called the Porod constant. Multiplying by πφ(1 − φ) and dividing by the invariant (Y, Eqn. 57 below) yields the surface area to volume ratio (Glatter and Kratky 1982):

S/V = πφ(1 − φ)Cp/Y      (56)

Porod (1951, 1952) also showed that, for any sample, the integral of Q²I vs. Q should be a constant, irrespective of the details of the structure. If parts of the system are deformed the diffraction pattern may change, but the integral remains invariant (Glatter and Kratky 1982). The plot of this transform, shown in Figure 35, is known as the Kratky transform, and

Y = ∫₀^∞ Q²I(Q) dQ = 2π²(Δρ)²φ(1 − φ)      (57)

where Y is called the Porod invariant, and φ is the volume fraction of scatterers or, in the case of a two-phase (rock–pore) system, the pore fraction. A critical factor in this calculation is the extent to which the Kratky transform is “closed”. The integral is extremely sensitive to values at high Q, and if the transform has not gone sufficiently to zero the results will be incorrect. Thus appropriate background subtraction is critical. Figure 35 shows the Kratky transform of the data in Figure 32.
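The porosity then follows from numerical integration of the background-subtracted, absolutely calibrated curve. The sketch below assumes a single quartz/empty-pore contrast and a synthetic power-law curve; the contrast value and all data are illustrative only:

import numpy as np

q = np.logspace(-4, 0, 500)                         # 1/Angstrom
i_net = 2.0e-6 * q**-3.6                            # synthetic, background-subtracted, 1/cm

y = np.trapz(q**2 * i_net, q)                       # Porod invariant, 1/(cm Angstrom^3)
y_cm = y * 1.0e24                                   # convert Angstrom^-3 to cm^-3
drho = 4.18e10                                      # assumed quartz-pore contrast, cm^-2
phi_1mphi = y_cm / (2.0 * np.pi**2 * drho**2)       # = phi * (1 - phi), Eqn. (57)
phi = 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * phi_1mphi))  # smaller root = porosity
print("porosity in the probed size range: %.3f" % phi)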

Another useful simple transform is the Guinier plot, which yields the radius of gyration of the scattering particles. At low Q, for a smooth spherical, or at least isotropic, scatterer (e.g., a polymer chain):

I(Q) = I0 exp(−Q²Rg²/3)      (58)

where Rg is the radius of gyration of the scatterer. Therefore,

ln(I(Q)) = ln(I0) − Q²Rg²/3      (59)

As this is the equation of a straight line, the values of I0 and Rg are easily determined from a plot of ln(I(Q)) vs. Q². For cylindrical objects of length L and radius R, however (Glatter and Kratky 1982), this equation is valid at low Q with:

Rg² = L²/12 + R²/2      (60)

but at intermediate Q-values

QI(Q) = Ic exp(−Q²Rc²/2), where Rc² = R²/2      (61)

and the appropriate plot is ln(QI(Q)) vs. Q². For lamellar objects of thickness T, the appropriate equation becomes:

Q²I(Q) = It exp(−Q²Rt²)      (62)

where

Rt² = T²/12      (63)

and the plot is ln(Q²I(Q)) vs. Q². However, as noted above, a Guinier region (flat at low Q in a plot of I(Q) vs. Q) is seldom observed in scattering data from rock samples, and the fractal nature of the mineral/pore interfaces in most rock materials tends to make this approach less useful for geological purposes.
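Where a Guinier region is present, Rg follows from the slope of ln(I) against Q² at low Q (Eqns. 58–59). The synthetic example below assumes an isotropic scatterer with Rg = 50 Å and keeps QRg below about one:

import numpy as np

rg_true = 50.0                                    # Angstrom, assumed
q = np.linspace(0.002, 0.02, 30)                  # 1/Angstrom, Q*Rg <= 1
i_q = 100.0 * np.exp(-(q * rg_true)**2 / 3.0)     # synthetic Guinier curve

slope, intercept = np.polyfit(q**2, np.log(i_q), 1)
rg = np.sqrt(-3.0 * slope)                        # recovered radius of gyration
i0 = np.exp(intercept)                            # forward-scattering intensity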

Finally, as was discussed by Anovitz et al. (2013a), for many geological samples a simple Porod plot of I(Q) vs. Q is often not very convenient. This is because the near-Q⁻⁴ slope requires scaling of both the x and y axes in such a way that details of the data are often hard to discern. They suggested, therefore, that data be plotted as a semi-Porod transform (Q⁴I(Q) as a function of Q) instead. This has the effect of rotating the data so that a Q⁻⁴ slope becomes horizontal, allowing significant magnification of the data and careful examination, both of changes between individual samples and of details for an individual sample previously hidden in the Porod plot. Figure 36 shows the semi-Porod transform for the same dataset as in the Porod plot in Figure 32. It is clear in this presentation that the data in the central part of the Q-range do not fall on a single fractal slope but, rather, are separated into several regions.
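Producing the semi-Porod transform from a reduced one-dimensional dataset is a one-line rescaling; the brief sketch below (with synthetic data) simply plots Q⁴I(Q) against Q on logarithmic axes:

import numpy as np
import matplotlib.pyplot as plt

q = np.logspace(-4, 0, 300)                       # 1/Angstrom, synthetic data
i_q = 1.0e-6 * q**-3.7 + 0.08                     # 1/cm

plt.loglog(q, q**4 * i_q)                         # Porod (Q^-4) regions plot flat
plt.xlabel("Q (1/Angstrom)")
plt.ylabel("Q^4 I(Q)")
plt.show()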

While the total porosity in the SAS size range can be calculated from the invariant as shown above, it is clearly of interest to derive the pore volume distribution or, if possible, the pore size distribution. There are, however, at least two caveats that must be considered. First, while a number of methods have been suggested for making these calculations, the results may be model-dependent. Several are available as pre-programmed software, making them relatively easy to use, but the user is cautioned to understand the assumptions and methodologies of any technique adopted, and to consider these limitations in interpreting the results.

Second, as noted by Anovitz et al. (2009, 2013a), conversions from volume distributions to pore-size distributions are highly problematic. To do so requires one or more assumptions about the shape of the pores involved. Figure 37 shows TEM images of a selection of pores from the Marble Canyon contact aureole, west Texas (Anovitz et al. 2009). It is clearly evident that the pores are neither solely spherical nor solely laminar, but vary significantly. Thus, interpretations of pore sizes based on pore volumes can be problematic.

A number of approaches have been suggested for calculating pore volume/size distributions. These include the smooth surface approach (Anovitz et al. 2013a), the polydisperse hard sphere model (PRINSAS, Hinde 2004; Radlinski 2006), Maximum Entropy approaches (Jaynes 1983; Skilling and Bryan 1984; Culverson and Clarke 1986; Potton et al. 1986, 1988a,b; Hansen and Pedersen 1991; Jemian et al. 1991; Semenyuk and Svergun 1991), regularization or maximum smoothness (Glatter 1977, 1979; Moore 1980; Svergun 1991; Pederson 1994), total non-negative least squares (Merrit and Zhang 2004; Ilavsky and Jemian 2009), Bayesian (Hansen 2000) and Monte Carlo (Martelli and Di Nunzio 2002; Di Nunzio et al. 2004; Pauw et al. 2013) methods. There are also methods based on Titchmarsh transforms for determining size distributions (Fedorova and Schmidt 1978; Mulato and Chambouleyron 1996; Botet and Cabane 2012), and the structure interference method (Krauthäuser et al. 1996). Maximum entropy, regularization, and total non-negative least squares are available in the IRENA program (Ilavsky and Jemian 2009).

While a detailed explanation of these techniques is beyond the scope of this review, it is worth illustrating the potential differences among them. Figure 38 shows the initial (U)SANS data (Anovitz, unpub.) from a sample of dolostone from the Ordovician Kingsport Formation and three pore distributions calculated using the total non-negative least squares (TNNLS), maximum entropy, and regularization approaches as implemented in IRENA. The overall similarities and detailed differences amongst the three approaches are apparent. The TNNLS and regularization approaches provide smoother estimates of the pore distributions, but all three suggest four, or perhaps five, subdistributions within the pore structure. In discussing these three approaches, however, Ilavsky and Jemian (2009) note that the regularization approach does not necessarily guarantee non-negative results for each bin. A similar multi-distribution pattern has been observed in sandstones (Anovitz et al. 2013a, 2015a), suggesting that modeling sandstones as a continuous fractal distribution is inappropriate.

In the end, however, solutions such as those above are limited by any number of assumptions, including in most cases those of a single pore shape and contrast. More detailed analysis of SAS data requires modeling of the scattering results. For a dilute solution the intensity of a SAS pattern is described as:

I(Q) = |Δρ|²N ∫₀^∞ P(r) V(r)² |F(Q,r)|² dr      (64)

where |Δρ|² is the contrast, F(Q,r), the form factor, is a function that represents the shape of the individual scatterers, V(r) is the particle volume, N is the total number of scatterers, and P(r) is the size distribution, the probability of a given particle having size r. For non-dilute solutions the structure factor, which describes the interactions amongst the particles, must be considered. For example, Anovitz et al. (2009) noted that modeling carbonates often requires both surface fractal (form factor) and mass fractal (structure factor) components, and Jin et al. (2011) obtained similar results from shales. Equations for the structure factor can be combined with form factor results as:

I(Q) = |Δρ|²N ∫₀^∞ P(r) V(r)² |F(Q,r)|² S(Q) dr      (65)

or, combining the various constants into a single empirical variable:

I(Q) = C |F(Q,r)|² S(Q)      (66)

although the application of this approach to polydisperse systems can be more complex. Additional factors can also be added for backgrounds, Bragg peaks, fuzzy scattering or other features as needed. Because there are a large number of possible scattering geometries, a large number of possible structure factors, form factors, and size distributions, many derived for polymers, particles of known shapes, or complex fluids, have been developed and are available in standard data-fitting packages. These are described in more detail in several publications (see Kline 2006; Hammouda 2008; Ilavsky and Jemian 2009).
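As a concrete illustration of Equation (64), the sketch below builds the dilute-limit intensity for polydisperse spheres using the standard analytical sphere form factor and an assumed log-normal pore-size distribution; the contrast, number density, and distribution parameters are hypothetical, the units are left schematic, and no structure factor is included:

import numpy as np

def sphere_form_factor(q, r):
    """Normalized sphere form-factor amplitude F(Q,r); rows are Q, columns r."""
    qr = np.outer(q, r)
    return 3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3

def dilute_intensity(q, r, p_r, n, drho):
    """Eqn. (64): I(Q) = |drho|^2 N Int P(r) V(r)^2 |F(Q,r)|^2 dr (dilute limit)."""
    v = (4.0 / 3.0) * np.pi * r**3
    integrand = p_r * v**2 * sphere_form_factor(q, r)**2
    return drho**2 * n * np.trapz(integrand, r, axis=1)

# hypothetical log-normal distribution of pore radii (Angstrom)
r = np.linspace(5.0, 500.0, 400)
sigma, r_med = 0.3, 100.0
p_r = np.exp(-0.5 * (np.log(r / r_med) / sigma)**2) / (r * sigma * np.sqrt(2.0 * np.pi))

q = np.logspace(-3, -0.5, 80)                    # 1/Angstrom
i_model = dilute_intensity(q, r, p_r, n=1.0e-9, drho=4.18e10)   # arbitrary units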

It is far beyond the scope of this review to even begin an analysis of the applications of image analysis to geological samples. However, in the context of analyzing and quantifying pore structures some discussion is appropriate, because analysis of low-magnification SEM/BSE or X-ray computed tomographic images can be used to extend the scale range analyzed by SAS experiments, and thus imagery can be used for pore characterization beyond that provided by point counting. In addition, in the process of obtaining and processing these data one generates binary images of the pore structure of the rock, typically at scales greater than approximately 1 μm, that can then be used to provide further quantification of the pore structure at these scales using other statistical techniques that require the two- or three-dimensional data available in the images themselves.

Sample preparation and image acquisition

A key requirement of many forms of pore-structure image analysis is that they require binary images showing pore space vs. non-pore space (mineral phases). These are typically obtained by thresholding grey-scale SEM/BSE or X-ray tomographic images to separate the two phases. Figures 39 and 40 show a BSE and binary image pair for sample 04Wi17b (100 ºC, 8 weeks) from Anovitz et al. (2015a). A significant caveat should be mentioned at this point with respect to obtaining the binary images necessary for many of the image-based calculations discussed here. Even for a simple system (essentially just quartz and pores, with the pores filled with epoxy to yield a smoother, more two-dimensional result in the BSE images) the process of image segmentation does not necessarily yield a unique solution. There are several reasons for this. First, the background, considered here as the grey-scale level for quartz or pores, may not be “flat” across the image. This is often a function of the instrument settings on the SEM and must be corrected for before thresholding/segmentation. More importantly, however, even within a given grain there are often variations in grey-scale level, adding to the noise, and pixels at the boundaries between grains will have grey-scale values that average the values of both phases.

Selecting the appropriate method for segmenting an image, whether simply choosing a threshold (the grey level separating the two phases) for a 2-D or 3-D image or using a more complex segmentation approach, may therefore lead to significant uncertainty. While threshold selection can be done manually, it is unlikely that this approach will lead to consistent results. Wildenschild and Sheppard (2013) note that, even in cases where simple thresholding is appropriate, selection by hand has been shown to be subject to significant operator subjectivity. On the other hand, while there is agreement that automated methods are preferable, it is quite common that thresholding-based methods also fail to provide consistent results when applied to slightly varied images of the same material. For thresholdable images we have often found that reasonable consistency can be obtained by trying a number of methods and selecting a threshold value near the median, but the statistical reliability of such an approach remains to be tested. Simple thresholding is also intolerant of image noise, and subject to uncertainties for pixels that straddle grain boundaries. This becomes even more complex in multiphase systems.
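One simple way to implement the median-of-several-methods idea described above is sketched below using automated threshold estimators from scikit-image; the estimator functions are real scikit-image routines, but treating darker pixels as epoxy-filled pores is an assumption that depends on the imaging conditions:

import numpy as np
from skimage import filters

def consensus_threshold(gray):
    """Median of several automated threshold estimates for a grey-scale image."""
    estimators = (filters.threshold_otsu, filters.threshold_li,
                  filters.threshold_yen, filters.threshold_isodata,
                  filters.threshold_triangle)
    return float(np.median([est(gray) for est in estimators]))

def segment_pores(gray):
    """Return a boolean binary image with True for pore (dark) pixels."""
    return gray < consensus_threshold(gray)

# gray would be a flattened, background-corrected BSE image (2-D float array)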

Given the complexity of this issue, a careful examination and comparison of the various approaches is clearly beyond the scope of this review. There are, in fact, a very large number of algorithms for selecting a threshold. Sezgin and Sankur (2004), for instance, review forty different approaches in six categories: histogram shape, clustering, entropy, object attribute, spatial methods, and local methods. Iassonov et al. (2009) reviewed segmentation methods and provided some comparison with thresholding techniques, and Wildenschild and Sheppard (2013) summarized and referenced a number of approaches to thresholding and segmentation (see also Noiriel 2015, this volume). Exclusive of those aimed primarily at medical imaging, other reviews include: Pal and Pal (1993), Cheng et al. (2001b), Munoz et al. (2003), Udupa and Saha (2003), Cardoso and Corte-Real (2005), Cremers et al. (2007), Ilea and Whelan (2011), and Schlüter et al. (2014). Readers are encouraged to evaluate these methodologies for their specific applications, but care must be taken in any event in order to obtain reasonable, consistent, and unbiased values.

The materials from which the original rock is composed may also make it difficult to create suitable binaries showing the pore structure. The technique of impregnating the pores with epoxy, yielding a low-backscatter-contrast, flat material in the pores, works very well as long as material with a similar average atomic number is not already present. This is, however, not the case for materials with a significant organic content such as coals or tight oil/gas shales, which often contain kerogen or bitumen. An alternative approach, suggested by several authors (Swanson 1979; Dullien 1981; Hildenbrand and Urai 2003; Dultz et al. 2006; Kauffman 2009, 2010; Hu et al. 2012), is to impregnate the pores with Wood’s metal, an alloy of approximately 50% Bi, 25% Pb, 12.5% Sn and 12.5% Cd with a melting point of only 78 ºC, yielding pores that are bright in backscattered imaging and thus stand out from the dark organic matter. Wood’s metal does not wet silicates, however, and thus the minimum pore size that it will enter is a function of injection pressure (similar to MIP) as described by the Young–Laplace (Washburn’s) Equation. As scattering describes the smaller pores, however, it is not really necessary to inject the metal into pores smaller than about 1 μm in this case. Given a contact angle of 130° and a surface tension of 0.4 N/m (Darot and Reuschle 1999; Hu et al. 2012) this only requires a pressure of about 10 bars (145 psi).

A difficulty with this approach is that, because of its Pb and Cd content, Wood’s metal is hazardous to use, and potentially difficult to dispose of correctly. A potentially safer alternative (Anovitz, unpub.) is Field’s metal, a fusible eutectic alloy of bismuth, indium, and tin (32.5 wt. % Bi, 51 wt. % In, 16.5 wt. % Sn). It melts at a lower temperature than Wood’s metal, becoming liquid at approximately 62 ºC (144 °F), and, as it contains neither lead nor cadmium, is marketed as a non-toxic alternative to Wood’s metal.

Combining imaging and scattering data

A key feature of all scattering approaches is that the range of pore sizes they can interrogate is inherently limited by the design of the instrument. While the combination of small-angle, very-small-angle, ultra-small-angle, and potentially light scattering and even wide-angle techniques can cover a very wide range of scales, even this is inherently smaller than the real range of porosity in geological materials, which stretches from the structural pores in such phases as beryl and cordierite (cf. Anovitz et al. 2013c; Kolesnikov et al. 2014), dioptase, hemimorphite, zeolites, etc. (Ferraris and Merlino 2005), which may be as small as several angstroms, to many meters or even miles in length if the definition of a pore is extended, sensu lato, to cave systems.

In order to extend the quantification of pore systems to larger scales, therefore, the results of another approach must be combined with those from the scattering data. To do so, we combine the scattering data with the results of image analysis, whether of two-dimensional images, usually obtained using backscattered electron imaging on an SEM, or of three-dimensional X-ray computed tomographic images. This approach has the distinct advantage that it allows binary images of pore systems obtained at low magnifications using imaging techniques to be added to data obtained from scattering experiments. As backscattered electron images can easily be obtained that cover several square centimeters with μm- or sub-μm resolution, this allows the scales quantifiably analyzed using this extended “scattering” analysis to extend from the nanometer to the centimeter range—7 orders of magnitude.

In this case the correlation function, the Fourier pair of the scattering function, becomes identical to the two-phase autocorrelation function, and can be described explicitly (cf. Anovitz et al. 2013a; Wang et al. 2013). To do so, following Berryman (1985), Berryman and Blair (1986) and Blair et al. (1996), we begin by defining a characteristic function f(x), which has values of either 0 or 1. This is equivalent to a binary image, and thus it is this same relationship that allows us to quantitatively connect backscattered electron, X-ray CT, or other imagery of the sample to the scattering data and, thereby, extend the range of the scattering data to cm scales. Torquato (2002a,b) defined this in terms of an indicator function I(i)(x) where:

I(i)(x) = 1 if x lies in Vi (phase i); I(i)(x) = 0 otherwise      (67)

where Vi is the volume occupied by phase i, and the complementary volume is that occupied by the other phase (rock). As summarized by Anovitz et al. (2013a), who used f(x) instead of I(i)(x), if we then let f(x) = 1 for the pores and 0 for the solid, then the first two void–void correlation functions (1- and 2-point) for an isotropic material are given by

S1 = ⟨f(x)⟩ = φ      (68)

and

S2(r) = ⟨f(x)f(x + r)⟩      (69)

where the brackets are a volume average over x, r is a lag distance and r = |r| for an isotropic material, and φ is the pore fraction. Berryman (1985) showed that:

S2(0) = φ      (70)

so that the zero intercept of the second correlation function is the porosity, and at the limit of large r

lim(r→∞) S2(r) = φ²      (71)

In addition, the specific surface area (s), defined as the ratio of the total surface of the pore–grain interface to the total volume of the grains, can be derived as:

s = −4 [dS2(r)/dr] at r = 0      (72)

and the effective pore size is given as

rc = 4φ(1 − φ)/s      (73)

which is the value of r at which a line tangent to the S2(r) curve at its zero intercept crosses S2(r) = φ². A quantitative estimate of the average grain size can also be obtained, but this depends on the sorting and arrangement of the grains in an individual sample (Blair et al. 1996).

Alternatively, correlation probabilities can be represented using the related autocovariance and/or autocorrelation coefficient functions:

X(r) = ⟨[f(x) − φp][f(x + r) − φp]⟩ = S2(r) − φp²      (74)

and

c(r) = X(r)/(φpφg)      (75)

where φg is the volume fraction of the grain phase, φp + φg = 1, and c(r) is a normalized version of X(r). In a statistically homogeneous two-phase system (isotropic or anisotropic), X(r) has limiting values of X(0) = φpφg and X(∞) = 0, and is bounded in the range −min(φp², φg²) ≤ X(r) ≤ φpφg. Normalizing X(r) by φpφg puts c(r) into a range between one and some negative value. Thus, in c(r) form, a value of one at a given r means perfect correlation, zero means no correlation, and negative values mean anticorrelation. Following Adler et al. (1990), Radlinski (2006) associated c(r) with g(r).

On the basis of this function Debye and Bueche (1949) and later studies (e.g., Guinier et al. 1955; Debye et al. 1957; Glatter 1980; Glatter and Kratky 1982; Adler et al. 1990; Lindner and Zemb 1991; Radlinski et al. 2004; Radlinski 2006) showed that small-angle scattering measurements can be used to obtain the autocorrelation coefficient of two-phase media. They showed that the normalized scattering intensity per unit sample volume V at wave number Q for a three-dimensional (3-D), isotropic, two-phase system composed of solid and pore phases is proportional to the Fourier transform of the autocorrelation coefficient as:

I(Q) = 4π(Δρ)²φpφg ∫₀^∞ c(r) [sin(Qr)/(Qr)] r² dr      (76)

c(r) = [2π²(Δρ)²φpφg]⁻¹ ∫₀^∞ I(Q) [sin(Qr)/(Qr)] Q² dQ      (77)

where (Δρ)² is the scattering length density contrast, and Q is the scattering vector magnitude as defined above.

The simplest method for calculating S2(r) is to compute, for each value of r, the fraction of pixels for which both ends of a line segment of that length fall on the phase of interest. Alternatively, a Monte Carlo approach can be used to randomly select a suite of starting pixels and angles. The problem with such approaches, however, is that they are computationally slow. Anovitz et al. (2013a, 2015) therefore used an alternative approach based on radial integration of the power spectrum of the Fourier transform of the image (after extending the image size to avoid artifacts due to periodic boundary conditions), assuming that the image shows a random part of a much larger area having the same autocorrelation. This is based on the Wiener–Khinchin theorem (Wiener 1930, 1964; Khintchine 1934; Goodman 1985; Champeney 1987; Chatfield 1989; Hannan 1990; Couch 2001; Ricker 2003; Iniewski 2007). This shows that:

(78)

(79)

and

(80)

Thus, the correlation function c(r) can be calculated quickly by taking the Fourier transform of an image, multiplying it by its complex conjugate, and then back transforming the result. The result is normalized by the autocorrelation of a function equal to 1 in the image area and 0 outside, which corresponds to the denominator in the usual equation for the autocorrelation of discrete 1-D data sets. The autocorrelation is scaled such that zero means ‘no correlation’ and one means ‘perfect correlation’; thus, at a distance of r = 0 the value is always one. This differs slightly from the function as defined by Berryman (1985) and Berryman and Blair (1986), in which the value at r = 0 is φ and at large r is φ², but the scaling between the two results is linear.
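As an illustration of this procedure, the following minimal Python/NumPy sketch (our own code, not that of Anovitz et al. 2013a; the function and variable names are ours) pads a binary pore image, applies the Fourier transform, multiplies by the complex conjugate, back transforms, normalizes by the autocorrelation of the image support, and radially averages to obtain S2(r) and c(r):

import numpy as np

def two_point_correlation(binary_image):
    # Radially averaged two-point correlation S2(r) and normalized c(r) of a
    # 2-D binary (pore = 1) image via the Wiener-Khinchin theorem.  The image
    # is zero-padded to twice its size so the periodic boundaries implicit in
    # the FFT do not introduce spurious correlations.
    img = np.asarray(binary_image, dtype=float)
    ny, nx = img.shape
    pad = (2 * ny, 2 * nx)

    # Power spectrum -> (unnormalized) autocovariance of the padded image.
    F = np.fft.rfft2(img, pad)
    acov = np.fft.irfft2(F * np.conj(F), pad)

    # Normalize by the autocorrelation of the image support (1 inside the
    # image, 0 in the padding), i.e., the number of valid pixel pairs per lag.
    M = np.fft.rfft2(np.ones_like(img), pad)
    overlap = np.fft.irfft2(M * np.conj(M), pad)
    s2_map = np.where(overlap > 0.5, acov / np.maximum(overlap, 1.0), 0.0)

    # Radially average over lag vectors (isotropy assumed).
    y = np.fft.fftfreq(pad[0]) * pad[0]
    x = np.fft.fftfreq(pad[1]) * pad[1]
    r = np.hypot(*np.meshgrid(y, x, indexing="ij")).ravel()
    rr = np.rint(r).astype(int)
    nbins = min(ny, nx)                       # keep lags shorter than the image
    keep = rr < nbins
    sums = np.bincount(rr[keep], weights=s2_map.ravel()[keep], minlength=nbins)
    counts = np.bincount(rr[keep], minlength=nbins)
    s2 = sums / np.maximum(counts, 1)

    phi = img.mean()                          # porosity; S2(0) should equal phi
    c = (s2 - phi ** 2) / (phi - phi ** 2)    # rescaled so c(0) = 1, c -> 0 at large r
    return np.arange(nbins), s2, c, phi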

One limitation to either approach is that statistical noise necessitates truncation of the autocorrelation spectrum. The results at large r are often not a smoothly decreasing sinusoidal function. Because of these fluctuations at large radii, failure to truncate the data prior to calculation of the scattering intensity will introduce artifacts into the result. The noise in these results at high Q can be reduced by appropriate re-extrapolation of the truncated data (cf. Debye 1957; Anovitz et al. 2013a; Wang et al. 2013).

Three-point correlations

The one-point and two-point correlation functions just described are the first and second moments of the probability distribution of the pore/grain system: the mean (porosity) and the variance. As in any such system, there are an infinite number of related correlations that can be applied to porosity analysis. These are the n-point correlation functions, of which the 1- and 2-point correlations discussed above form a part. The reason for concern with at least the 3-point correlation (cf. Beran 1968; Corson 1974a,b,c,d; Berryman 1985; Torquato 2002a; Jiao et al. 2007, 2008, 2009, 2010, 2013; Singh et al. 2012; Jiao and Chawla 2014) was stated by Berryman (1985):

“An elaborate theoretical machinery is available for calculating the properties of heterogeneous materials if certain spatial correlation functions for the materials are known. Formulas have been published for calculating bounds on dielectric constants, magnetic permeabilities, electrical and thermal conductivities, fluid permeabilities, and elastic constants if the two-point and three-point correlation functions are known. (Brown 1955; Prager 1961; Beran 1968)”

This “theoretical machinery” has been further developed since that time (cf. Berryman and Milton 1988; Bergman and Stroud 1992; Helsing 1995a,b; Blair et al. 1996; Torquato 2002a,b; Saheli et al. 2004; Prodanovic et al. 2007; Wang et al. 2007; Politis et al. 2008; Wang and Pan 2008; Yin et al. 2008; Deng et al. 2012; Wildenschild and Sheppard 2013), as have the facilities for two- and three-dimensional imaging (FIB, XCT, NCT) that provide the analyzable data. While these higher-order correlation statistics are well known in fields such as astronomy (e.g., Baumgart and Fry 1991; Coles and Jones 1991; Gangui et al. 1994; Takada and Jain 2004; Seery and Lidsey 2005; Zehavi et al. 2005; Ade et al. 2014; Fu et al. 2014; Moresco et al. 2014), they have been applied less often to geological media, despite their potential for calculating important rock properties.

Geometrically, the three-point correlation function is directly analogous to the two-point function described above. In this case it provides the probability that the three points describing the corners of a triangle of a given size and orientation (i.e., two vectors r1 and r2 sharing an initial, movable point) all fall on a single phase.

A clear explanation of the three-point correlation function was provided by Berryman (1985, 1988; see also Velasquez et al. 2010), and the discussion here is summarized from those sources. Paralleling the definitions of the one- and two-point correlations, for a given phase in a homogeneous material in which only the differences in coordinate values are important, not the absolute locations:

(81)

where an oriented triangle is defined by the two vectors r1 and r2, and the triangular brackets represent a volume average over the range of x. If the material is further assumed to be isotropic, so that absolute angle is also unimportant (not necessarily true in geological materials, especially shales), then, letting r = |r|, so that r1 = |x2 − x1| and r2 = |x3 − x1|:

(82)

where:

(83)

We therefore have the three variables that define a triangle: the side lengths r1 and r2, and the angle θ between them. This function has the following properties:

(84)

(85)

If there is no long-range order then:

(86)

and the three-point correlation function is bounded by:

(87)

where:

(88)

Berryman (1985) also suggested, and Berryman (1988) and Velasquez et al. (2010) modified, a method for calculating the three-point correlation for an image. Berryman (1985) noted that a minimal set of grid-commensurate triangles (ones in which all the corners fall on lattice points, or pixels in the case of an image), labeled with three integers (k, m, n) with k the length of the largest side, can be constructed as follows (note that Berryman used (l, m, n) rather than (k, m, n); this has been modified here for clarity). First, the longest side is placed along the x-axis, defining a coordinate system with the intersection of the longest and shortest sides at (0, 0) and the second vertex at (k, 0). The third vertex is then located in the first quadrant at (m, n). Because the shortest side shares the (0, 0) vertex, the third vertex lies at x ≤ k/2 and within a circle of radius k centered on (k, 0). Berryman (1988) modified this to provide greater accuracy by considering all lattice points for the third vertex at:

(89)

(90)

This construction is illustrated in Figure 41. For a homogeneous, isotropic system, rotations of these triangles are not needed. One can then either calculate the correlation for a given triangle by testing every possible point (N) within the image (which is fewer than the total number of pixels, as the size of the triangle makes a certain number of points on the right and top of the image inaccessible as (0, 0)), finding the number of times that all three corners of the triangle land on a single phase, and calculating S3(k, m, n) as

(91)

or, as suggested by both Berryman (1988) and Velasquez et al. (2010), adopt a Monte Carlo scheme and randomly drop a given triangle on the image N times. While the size of their images is unclear, Velasquez et al. (2010) found that 100,000 drops were sufficient. Berryman (1988) discusses interpolation schemes to be used with a dataset of this type, for triangles such as those suggested in Figure 41.
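As a concrete illustration of the Monte Carlo scheme, the sketch below (our own construction, not code from Berryman 1988 or Velasquez et al. 2010; the function name and default number of drops are ours) drops a single grid-commensurate triangle (k, m, n) at random positions on a binary image and counts the fraction of drops for which all three vertices land on the pore phase:

import numpy as np

def s3_monte_carlo(binary_image, k, m, n, n_drops=100_000, seed=None):
    # Monte Carlo estimate of the three-point correlation S3(k, m, n) for a
    # binary (pore = 1) image.  The triangle vertices are (0, 0), (k, 0) and
    # (m, n) in pixel units; only translations are sampled, as is appropriate
    # for a homogeneous, isotropic medium.
    rng = np.random.default_rng(seed)
    img = np.asarray(binary_image, dtype=bool)
    ny, nx = img.shape

    # Restrict the movable (0, 0) vertex so the whole triangle stays in frame.
    x_max = nx - max(k, m)
    y_max = ny - n
    assert x_max > 0 and y_max > 0, "triangle does not fit in the image"
    x0 = rng.integers(0, x_max, n_drops)
    y0 = rng.integers(0, y_max, n_drops)

    # A "hit" requires all three corners to land on the pore phase.
    hits = img[y0, x0] & img[y0, x0 + k] & img[y0 + n, x0 + m]
    return hits.mean()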

Even with the simplifications achieved by this approach, however, there remain a large number of possibilities to consider because of the large number of possible triangles involved, each of which is characterized by the three parameters k, m, and n (or r1, r2, and θ) and by S3(k, m, n). While a full analysis seems optimally suited to parallelization, simpler schemes, such as that adopted by Velasquez et al. (2010) of choosing one or a few triangle shapes and investigating the effect of scaling as:

(92)

where p is an integer, and plotting the resultant S3(k, m, n)* as a function of p, can provide more easily plotted and analyzed results.

As with the two-point correlation, there are Fourier methods for calculating the three-point correlation. While the Fourier transform of the two-point correlation is the power spectrum, that for the three-point correlation is the bispectrum where, for two vectors r1 and r2 that define a triangle of given size and orientation:

(93)

where F* again refers to the complex conjugate. The bispectrum remains difficult to determine, however, because its parameter space, the set of all triangles, is very large.

The two- and three-point correlation approaches described by Berryman (1985, 1987, 1988), Berryman and Blair (1986, 1987), and Berryman and Milton (1988), although described earlier in other contexts, have been cited in a number of studies of different materials, including bone (e.g., Hwang et al. 1997; Wehrli et al. 1998; Wald et al. 2007), cements and concretes (e.g., Lange et al. 1994; Bentz 1997; Sumanasooriya and Neithalath 2009, 2011; Sumanasooriya et al. 2009, 2010; Erdogan 2013), asphalts (e.g., Velasquez et al. 2010; Falchetto et al. 2012, 2013, 2014; Moon et al. 2013, 2014a,b,c), fuel cells (e.g., Mukherjee and Wang 2007; Mukherjee et al. 2011), composites (e.g., Torquato 1985; Smith and Torquato 1989; Helsing 1995b; Terada et al. 1997; Spowart et al. 2001; Reuteler et al. 2011), digital reconstruction of porous materials (e.g., Roberts 1997; Kainourgiakis et al. 2000), and others, including several studies of geological materials (Blair et al. 1996; Coker et al. 1996; Ioannidis et al. 1996; Meng 1996; Virgin et al. 1996; Berge et al. 1998; Masad and Muhumthan 1998, 2000; Quenard et al. 1998; Fredrich 1999; Lebron et al. 1999; Saar and Manga 1999; Ikeda et al. 2000; Schaap and Lebron 2001; Vervoort and Cattle 2003; Rozenbaum et al. 2007; Chen et al. 2009; Torabi and Fossen 2009; Anovitz et al. 2013a, 2015a; Wang et al. 2013; Nabawy 2014).

Monofractals and multifractals

As noted above, SAS data suggest that pore structures in rocks exhibit both surface and mass fractal behavior. While the scattering data do not directly show what those structures look like, structure and form factor models such as those suggested by Beaucage (1995, 1996) and Beaucage et al. (1995, 2004) are based on models of this structure. Imaging data provide the opportunity to extend this analysis to a consideration of direct box-counting fractal (Block et al. 1990) and multifractal behavior based on actual observations.

Monofractal analysis is essentially binary in nature. In the box-counting method of measuring the fractal dimension, a binary image is subdivided into a series of boxes of size ɛ, and the number of boxes, n, that contain at least some of the image is counted. The box size is then reduced, and the procedure repeated. A plot of log(n), the number of “on” boxes, as a function of log(ɛ), the box size, is then created, and the slope of the line, which gives the scaling behavior of the system, is the fractal dimension.
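A minimal box-counting sketch in Python (our own code, assuming a square binary image whose side is a power of two so that the boxes tile it exactly) illustrates the log–log regression just described:

import numpy as np

def box_counting_dimension(binary_image):
    # Monofractal (box-counting) dimension of a binary image.  The image is
    # assumed square with a power-of-two side so that boxes tile it exactly.
    img = np.asarray(binary_image, dtype=bool)
    size = img.shape[0]
    eps, counts = [], []
    box = size
    while box >= 1:
        nb = size // box
        # Number of boxes of side `box` containing at least one "on" pixel.
        occupied = img.reshape(nb, box, nb, box).any(axis=(1, 3)).sum()
        eps.append(box)
        counts.append(occupied)
        box //= 2
    # The fractal dimension is minus the slope of log(n) versus log(eps).
    slope, _ = np.polyfit(np.log(eps), np.log(counts), 1)
    return -slope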

The limitation in monofractal analysis is that a significant amount of the available information is ignored. In counting each “on” box it does not account for the number of pixels that are “on” or, in another version of this metric, the relative grayscale of each box. Thus, nonuniform variations in the overall density of the image are not accounted for. The multifractal approach (Mandelbrot 1989; Evertsz and Mandelbrot 1992) is an expansion of the original fractal description (Mandelbrot 1977, 1983) that considers this additional information. In multifractal systems a single exponent is not sufficient to describe the system. Rather, an array of exponents, known as the singularity spectrum, is used. A number of studies have used this approach to study sandstones (Muller and McCauley 1992; Anovitz et al. 2013a, 2015a), soils (Perfect 1997; Grout et al. 1998; Posadas et al. 2001, 2003; Caniego et al. 2003; Martin et al. 2005, 2006, 2009; Bird et al. 2006; Dathe et al. 2006; Kravchenko et al. 2009; Paz Ferriero and Vidal Vázquez 2010; San José Martinez et al. 2007) chalk (Muller 1992, 1994, 1996; Muller et al. 1995), and others (Block et al. 1991).

As summarized by Anovitz et al. (2013a), there are several, interrelated mathematical descriptors of multifractal structures, of which the most common are Hölder exponents (α) and Rényi dimensions D(q) (note that a lower case q is used in this case to separate it from the reciprocal space dimension Q in the neutron scattering data). As with the monofractal dimension we first begin by defining the length of one side of our measuring box as ɛ. We then define the total number of boxes of a given size as n(ɛ), and the “measure” of the box as μɛ, which can be any appropriate measure of its density, the number of “on” pixels, the grey scale, the fraction of all “on” pixels in the box, etc. We then further define, for each box

(94)

or

(95)

In the limit, as ɛ→0

(96)

where α is the Hölder exponent. Note that αcoarse and α are not necessarily, or even likely to be, identical. If a given box containing four “on” pixels is divided into quarters, the resultant four boxes might each contain one “on” pixel, or all four might be in one box, etc., depending on their distribution.

For any given box size ɛ, we can define Nɛ(αcoarse) as the number of boxes with a given value of αcoarse, and define the multifractal distribution (singularity exponent) as:

(97)

and

(98)

In this description it is the singularity spectrum, f(α) as a function of α, and especially the values of αmin, αmax, f(α)max, α at f(α)max, and the asymmetry of the spectrum:

(99)

that describe the statistical distribution of the measure in the image. In a monofractal the singularity spectrum is reduced to a single point.

The alternative, but related approach is the Rényi dimension D(q). In this case we begin by defining a generating function:

(100)

where q covers some range, usually approximately ± 10 to ± 15, and the sum represents the probability that q random points fall in the same box (Peitgen et al. 2004). We can then define the qth mass exponent

(101)

and the generalized or Rényi dimension

(102)

except at q = 1 where

(103)

In general

(104)

A plot of D(q) vs. q is referred to as the Rényi spectrum. If D(q) strictly decreases with increasing q for q > 0, the fractal is called inhomogeneous, or multifractal (Peitgen et al. 2004). If D(q) is constant as a function of q, the system is a monofractal. In this description D(0), referred to as the capacity dimension, is equivalent to the monofractal dimension. D(1) has the same form as the microscopic description of entropy from statistical mechanics; it describes the entropy of the system, and is called the entropy or information dimension. Similarly, τ(q) can be analogized to the free energy of the system, χ(q, ɛ) to the partition function, and q⁻¹ to the temperature (Stanley and Meakin 1988; Arnéodo et al. 1995; Bershadskii 1998; Farge et al. 2004). D(2) is the correlation dimension, and gives the probability of finding pixels on an object within a given distance when starting from a pixel on the object; it is thus related to the autocorrelation curve described above. These dimensions are related to the τ(q) curve as:

(105)

For q > 0, D(q) is dominated by large μ(i) and therefore by areas with a high density of the measure. For q < 0, D(q) is dominated by small μ(i) and therefore by areas of low density. It must be remembered, however, that the “measure” involved is the porosity, and thus “high density” refers to a high density of pores, not mass.
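The following Python sketch (our own code, under the common convention D(q) = τ(q)/(q − 1) with the information-dimension form used at q = 1, and again assuming a square, power-of-two binary image) estimates the Rényi spectrum by box counting, using the fraction of all “on” pixels in each box as the measure:

import numpy as np

def renyi_spectrum(binary_image, q_values=np.arange(-10, 10.5, 0.5)):
    # Generalized (Renyi) dimensions D(q) by box counting.  The measure mu_i of
    # each box is the fraction of all "on" pixels it contains; a square image
    # with a power-of-two side is assumed.  D(1) uses the entropy (information)
    # form of the limit; otherwise D(q) = tau(q) / (q - 1).
    img = np.asarray(binary_image, dtype=float)
    size = img.shape[0]
    total = img.sum()

    log_eps = []
    log_chi = {float(q): [] for q in q_values}
    box = size // 2
    while box >= 1:
        nb = size // box
        counts = img.reshape(nb, box, nb, box).sum(axis=(1, 3))
        mu = counts[counts > 0] / total          # empty boxes are excluded
        log_eps.append(np.log(box / size))       # normalized box size
        for q in q_values:
            if np.isclose(q, 1.0):
                log_chi[float(q)].append(np.sum(mu * np.log(mu)))
            else:
                log_chi[float(q)].append(np.log(np.sum(mu ** q)))
        box //= 2

    dq = []
    for q in q_values:
        slope, _ = np.polyfit(log_eps, log_chi[float(q)], 1)
        dq.append(slope if np.isclose(q, 1.0) else slope / (q - 1.0))
    return np.asarray(q_values, dtype=float), np.asarray(dq)

As the text notes, estimates for strongly negative q are dominated by the smallest box measures and are therefore the noisiest part of the spectrum.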

The singularity spectrum f(α) vs. α and the Rényi spectrum D(q) vs. q are not independent. They are related through τ(q) as:

(106)

and by the Legendre transformation

(107)

so that:

(108)

where f(α(0)) is the value of f(α(q)) at the maximum of the f(α(q)) vs α(q) curve,

(109)

(110)

At values of α(q) > α(0), q < 0, and at values of α(q) < α(0), q > 0. While these descriptions are not independent, they provide useful alternative approaches.

Lacunarity, succolarity, and other correlations

While fractal and multifractal formalisms are excellent metrics to describe the scaling behavior of porous systems, they are not, in themselves, sufficient to fully quantify the pore structure. The reason is that they do not fully describe how a fractal structure fills space, that is, the texture of the pore structure. Within a given area a fractal structure may be more, or less, heterogeneous while still having the same scaling behavior. In his classic book “The Fractal Geometry of Nature,” Mandelbrot (1977) began to address this limitation, originally noticed in his studies of galactic structures, by defining two additional parameters: the lacunarity, or gappiness, and the succolarity, or connectivity, of the pore structure. As with fractal dimensions, these were originally proposed by Mandelbrot (1977, 1994, 1995) as a method of discerning amongst systems for which the fractal scalings are otherwise similar.

The term lacunarity comes from the Latin word lacuna, meaning a gap or lake. The term should be generally familiar to geologists from its use to mean a gap in the stratigraphic record (Gignoux 1955; Wheeler 1958). Lacunarity is a quantitative measure of how clustered the pore structure is, and serves as an addition to the concept of fractal analysis (cf. Mandelbrot 1983, 1994, 1995). It can be seen as representing the homogeneity, or the translational and rotational invariance, of the system, and thus as a measure of the translational homogeneity of an image. From the point of view of understanding the relationship between porosity and permeability, therefore, this provides a quantification of how isolated each pore, or group of pores, is from others.

While much of the application of this approach has been in fields such as geography and organic/biological systems, several authors have investigated the utility of this measure for evaluating porosity and permeability in reservoir modeling (Garrison et al. 1993a,b; Cai et al. 2014), soils (Zeng et al. 1996; Millán 2004; Chun et al. 2008; Zamora-Castro et al. 2008; Luo and Lin 2009; Torres-Arguelles et al. 2010; Ulthayakumar et al. 2011), fractures (Miranda-Martinez et al. 2006), porous silica (Denoyel et al. 2006), sediments (Bube et al. 2007), oil mobilization (Hamida and Babadagli 2008), and sandstones (Anovitz et al. 2013a, 2015a) in both two and three dimensions.

There are a number of methods for calculating the lacunarity, several of which have been coded into the FracLac (Karperien 1999–2013) plugin for ImageJ (Abramoff et al. 2004; Rasband 1997–2014; Schneider et al. 2012). The simplest method, based on that used for fractal dimensions, is again box counting. A box of size ɛ is slid over the image, either sequentially in a fixed grid or in an overlapping pattern (sliding box counting). For a binary image divided into a given number of boxes of a given box size, ɛ, and grid orientation, g, the box-size-specific lacunarity (λ) is calculated from the mean, μ, and standard deviation, σ, of the number of pixels “turned on” in each box as:

(111)

The overall lacunarity of the image, Λ, is then the average of the single box size lacunarities (λ) over all box sizes and grid positions, although this can be done in terms of just box size or just orientation. Analysis of these values is, of course, limited by the resolution of the images in question.
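A gliding-box sketch of this calculation is given below (our own Python code; we take the box-size lacunarity as λ = 1 + (σ/μ)², one common form of the definition, which we assume corresponds to Eqn. 111):

import numpy as np

def lacunarity(binary_image, box_sizes):
    # Gliding-box lacunarity of a binary (pore = 1) image.  For each box size
    # the box mass M is the number of "on" pixels it contains, and the box-size
    # lacunarity is taken as lambda = 1 + (sigma/mu)^2 (equivalently
    # <M^2>/<M>^2); the overall value is the average over box sizes.
    img = np.asarray(binary_image, dtype=float)
    # A summed-area table gives the mass of every overlapping box position.
    sat = np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    lams = []
    for b in box_sizes:
        mass = sat[b:, b:] - sat[:-b, b:] - sat[b:, :-b] + sat[:-b, :-b]
        mu, sigma = mass.mean(), mass.std()
        lams.append(1.0 + (sigma / mu) ** 2 if mu > 0 else np.nan)
    return np.asarray(lams), np.nanmean(lams)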

There are several other definitions of lacunarity. One is based on the prefactor in the box-counting method for defining the fractal dimension. If the ln-ln scaling of the number of “on” boxes (N) as a function of the box size (ɛ) is given as:

(112)

where A is calculated for a given orientation (g) of the image from the y-intercept of the ln–ln curve as:

(113)

averaging over the (G) available orientations:

(114)

then the prefactor lacunarity (PΛ) can be defined as (Mandelbrot 1977):

(115)

Figures 42–45 show examples of the utility of both the multifractal and lacunarity approaches in the analysis of variations in pore structures (Anovitz et al. 2013a). These data were obtained from samples of the St. Peter sandstone from SW Wisconsin originally collected and reported on by Kelly et al. (2007). Each sample contains both initial detrital quartz grains and optically continuous quartz overgrowths (although analysis by Anovitz et al. (2015a) suggests that there may be significant differences between the initial and overgrowth quartz). It is believed that these samples were never buried to any appreciable depth, and that the quartz overgrowths formed as silcretes, by precipitation of dissolved silica.

Figures 42 and 43 show examples of BSE/SEM images from low- and high-porosity samples, and Figures 44 and 45 show how the scattering-scale (slope and subslope Ds) and imaging-scale fractality (D(0)), as well as the correlation dimension (D(2)), the multifractality (D(0)–D(2)), and the lacunarity change with porosity. Since the primary geologic process here is overgrowth formation, which decreases porosity, the process variable increases to the left in these figures. It is clear that there are distinctive, consistent changes in both the multifractal and lacunarity behavior of these sandstones as a function of overgrowth formation, as well as differences between the changes observed at submicron (scattering) and supramicron (imaging) scales. At the imaging scale the overall fractality decreases with overgrowth formation, but the scale-dependence of the fractal behavior, the multifractality, and the inhomogeneity of the pore distribution increase. At scattering scales, however, the fractal dimension increases with overgrowth formation. These effects and their potential origins were described in more detail by Anovitz et al. (2013a).

In his original descriptions of fractal systems, Mandelbrot (1977) not only described fractal and multifractal behavior and lacunarity, but also a fourth variable he called succolarity. This was originally defined by Mandelbrot (1977) as follows:

“a succolating fractal is one that “nearly” includes the filaments that would have allowed percolation; since percolare means “to flow through” in Latin …, succolare (sub-colare) seems the proper neo-Latin for “to almost flow through.”

This clearly relates to a fractal description of such concepts as connectivity, tortuosity, and percolation, and thus, for our purposes, to the relationship between porosity and permeability. At the time, however, Mandelbrot did not further define this parameter.

To our knowledge the first attempt to quantify the concept of succolarity was that of de Melo (2007; see also de Melo and Conci 2008, 2013). De Melo notes that succolarity is, in fact, part of the very large field of percolation theory, which, generally speaking, asks what the probability is that a fluid poured on top of a porous medium will be able to reach the bottom. De Melo then defines succolarity as “the percolation degree of an image (how much of a given fluid can flow through this image)”. This process is, therefore, directional; for a rectangular image, connected sections beginning at the top, right, left, and bottom of the image are not necessarily identical. The calculation, therefore, begins by flooding all open pixels along one edge and determining all the open pixels connected to those across pixel edges. The image is then divided into (N) equal-sized boxes of edge length (ɛ), and for each box of a given size the fraction of “on” pixels, the occupation fraction (O(n)), is calculated. Each box is then assigned a “pressure” (P(n)) equal to the number of pixels from the input side to the centroid (which may be the middle of a pixel) of the box. The sum of the products of the occupation fraction and pressure over all boxes is then calculated and normalized to the sum obtained when the occupation fraction of each box is 1, yielding the succolarity for a given flow direction as:

(116)

Interestingly, de Melo’s results (2007; de Melo and Conci 2008, 2013) suggest that this value is essentially independent of the box size, but not of flow direction.
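A minimal Python implementation of the directional calculation just described is sketched below (our own code, using 4-connected flooding from the entry edge and the box occupancy and pressure definitions given above; the function name and parameters are ours):

import numpy as np
from scipy.ndimage import label

def succolarity(binary_image, box_size, direction="top"):
    # Directional succolarity following the procedure described in the text.
    # Pores = 1.  The image is "flooded" from one edge: only pore pixels
    # connected (edge-sharing) to that edge are kept.  Occupancy times pressure
    # is then summed over boxes and normalized by the value obtained when every
    # box is fully occupied.
    img = np.asarray(binary_image, dtype=bool)
    if direction == "left":
        img = img.T
    elif direction == "bottom":
        img = img[::-1, :]
    elif direction == "right":
        img = img.T[::-1, :]
    # "top": use the image as-is; flow enters along the first row.

    labels, _ = label(img)                       # 4-connected pore clusters
    entry = np.unique(labels[0, labels[0] > 0])  # clusters touching the entry edge
    flooded = np.isin(labels, entry)

    ny, nx = flooded.shape
    nby, nbx = ny // box_size, nx // box_size    # any ragged edge is ignored
    boxes = flooded[:nby * box_size, :nbx * box_size]
    occ = boxes.reshape(nby, box_size, nbx, box_size).mean(axis=(1, 3))

    # "Pressure" = distance in pixels from the entry edge to each box centroid.
    pressure = (np.arange(nby) + 0.5) * box_size
    num = (occ * pressure[:, None]).sum()
    den = (np.ones_like(occ) * pressure[:, None]).sum()
    return num / den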

Despite de Melo’s quantification of the concept, succolarity as a measure of fractal texture has to date received much less attention than fractality and lacunarity, although there are studies of drainage systems (Shahzad et al. 2010; Mahmood et al. 2011), biomedical analysis (Ichim and Dobrescu 2013; Cattani and Pierro 2013; N’Diaye et al. 2013, 2015; Sangeetha et al. 2013), and image recognition (Abiyev and Kilic 2010). While there have, as yet, been no studies that we know of quantifying the succolarity of inorganic porous materials and rocks, the potential utility of this approach is clear, and it bears further testing in geological systems.

While the succolarity as defined by de Melo (2007) may have potential for defining percolation-related properties, other functions may also serve this purpose. Jiao et al. (2007, 2008, 2009, 2010, 2013) and Jiao and Chawla (2014), for instance, have suggested that the two-point cluster function (C2(r), see Torquato et al. 1988; Torquato 2002a,b) is also sensitive to topological connectedness. In fact, it should be noted that the approaches discussed above for quantifying the nature of a pore structure from imaging data (fractal, multifractal, lacunarity, succolarity, two-point autocorrelation) are only a fraction of the methods (cf. Heilbronner and Barrett 2014) and statistics (cf. Torquato 2002a) available for such analysis. Several other types of correlations have been derived for random, homogeneous materials. Few, however, to our knowledge, have been applied to geological materials in a systematic way, although Jiao et al. (2007, 2008) suggested that these may be a pragmatic alternative to the analysis of harder-to-calculate higher-order correlation functions. Torquato (2002a) discussed a number of these statistics and their relationships to deriving bulk material properties for porous materials. These include: surface correlation functions (between a point on a surface and a point in a pore, or between a point on a surface and another point on the surface; applicable to trapping and flow problems), lineal path functions (the probability that a line of length (l) lies wholly in a single phase; provides linear connectedness information), chord-length density functions (the probability of finding a chord, a line segment between phase boundaries along a line through the image, of length (l); defines discrete free paths for transport: see Thompson et al. (1987) for an application to sedimentary rocks), pore size functions (the probability that a randomly chosen point in a pore lies at a distance r from the pore/solid interface), percolation and cluster functions (the probability of finding two points in the same cluster of a phase or pores), nearest-neighbor functions (for particles suspended in a medium, the probability of finding the nearest-neighbor particle at a given distance from a reference particle), point/q-particle correlation functions (for particles in an inhomogeneous medium, the probability of finding a point in phase i at position x, the center of a sphere in some volume dr1 around point r1, …, and the center of another sphere in a volume drq around point rq; related to conductivity, elastic moduli, trapping and permeability), and surface/particle correlation functions (again for particles in an inhomogeneous medium, the probability of the center of a particle being a distance r from a point on the surface; related to permeability through random beds of spheres). Further discussion of these correlation functions is beyond the scope of this review, and the interested reader is referred to the work of Torquato (2002a) for more information.

A combination of these statistical techniques, together with the characterization approaches described above, should provide methods not only to describe the bulk properties of porous materials from those of the minerals of which they are composed, but also to provide real-world, multiscale descriptions of porous materials that can be used to model, in silico, reservoir flow, oil and gas recovery, transport of heat and contaminants, aquifers, and other properties of porous reservoirs of significant interest. A proper statistical representation of meso- and micro-pore morphology, in the form of a three-dimensional, angstrom-to-millimeter scale model of rocks and other porous materials, is crucial for the study of fluids in porous rocks, as well as for upscaling atomic-scale mineral growth and dissolution rates to pore, hand-sample, and reservoir scales, as this will provide a realistic multiscale matrix for these efforts. There have been several attempts to provide such models. The simplest, based on geometrical idealizations (e.g., Thovert et al. 2001), provide a useful tool but are not sufficiently detailed representations of real rocks. The scattering and imaging data described above, however, provide direct input to more general but also more computationally demanding reverse Monte Carlo (RMC) techniques (McGreevy and Pusztai 1988). Salazar and Gelb (2007) showed that scattering and adsorption experiments provide complementary information that can yield more realistic RMC models. As just noted, Torquato (2002a) investigated various simple ‘structural descriptors’ to identify those containing the most information about pore structure and connectivity (Torquato 2002a; Jiao et al. 2010). Mariethoz et al. (2010) and Mariethoz and Kelly (2011) showed how multiple-point statistics (MPS), based on experimental data or simple physically based structural motifs and a Monte Carlo algorithm, can be used to develop structure models that typically appear more realistic. ‘Soft’ data may be integrated within a Bayesian statistical scheme (Lu et al. 2009). Other approaches employ simulations of the physical processes generating a particular material, such as mimicking sedimentation followed by compaction and cementation of sandstone (Oren and Bakke 2002).

Comparisons of multiple techniques

As noted above, it is valuable (and frequently necessary) to combine porosity and pore feature information from different complementary techniques. A case in point is the use of mercury intrusion porosimetry (MIP) and neutron scattering (e.g., Clarkson et al. 2012; Swift et al. 2014). MIP provides information on effective or accessible pore throat size distribution whereas (U)SANS reveals details about pore size, porosity, pore volume, surface area to pore volume ratios and fractal behavior. A good example of this was presented by Swift et al. (2014) who documented the relationship between mineralogy and porosity in the Eau Claire formation, a middle- to upper Cambrian regional mudstone located in the mid-continent of the U.S. The Eau Claire is the seal rock for the Mt. Simon formation used at the Decatur site in Illinois for CO2 storage demonstration. This study utilized MIP, (U)SANS and SEM to quantify the pore features of three subfacies in this formation, an illitic-shale facies (Fig. 46a), one rich in carbonate (Fig. 46b) and one enriched in glauconite (Fig. 46c).

As a first step in the comparison between MIP and neutron scattering, one can use the cumulative pore volumes derived from the neutron scattering data as identified in Swift et al. (2014) to calculate pore size distributions. This should be done with caution, however, as it requires an assumption with regard to pore shape. As shown by Anovitz et al. (2009, 2011), TEM examination of nanoporosity in rocks suggests that pore shapes are extremely variable. In addition, the fractal nature of the pore/solid boundary further complicates the assumptions. However, by taking the derivative of the cumulative porosity curves the pore volume distribution can be obtained without assumptions as to pore shape. These distributions are shown for the three samples in Figure 46a–c along with examples of the mineral maps produced by the SEM QEMSCAN method. As can be seen, pore scales fall into several groups, not all of which are present in each material. At the nanoscale, two pore regimes near 25 and 135 Å, and a broad, larger-scale regime centered around 10–20 μm occur in both of the mudstones. The porosity of the illitic shale is dominated by the first of the two nanoscale distributions. Microscale pores form only a small fraction of the total in this sample and are polydisperse, with a broad hump around 2 μm. Although nanoscale pores appear to be present in both mudstones, only the glauconitic mudstone has a significant peak near 10 nm, mirroring the weaker pore size cluster at that scale in the shale. These clearly reflect the high-Q “humps” in the I(Q) versus Q plots given in Swift et al. (2014). In the microscale regime, the glauconite-rich mudstone has a larger peak at 20 μm, which may have a shoulder around 115 μm. The carbonate mudstone has a narrower peak at 30 μm that may correspond to part of the wider distribution observed in the glauconitic sample.
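The derivative step is straightforward; a minimal sketch (our own, assuming the cumulative pore volume is tabulated against pore radius) is:

import numpy as np

def pore_volume_distribution(radius, cumulative_volume):
    # Pore volume distribution dV/dlog10(r) from a cumulative pore-volume
    # curve; no assumption about pore shape is required.
    r = np.asarray(radius, dtype=float)
    v = np.asarray(cumulative_volume, dtype=float)
    return np.gradient(v, np.log10(r))

Here the distribution is expressed per decade of pore radius, so no pore-shape model enters the calculation.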

By comparison, the MIP results indicate that the connected porosity accessible by mercury through pore throats ranging between 4 nm and 50 μm in equivalent circular diameter is 2.2% of the rock volume for the shale, 0.2% for the carbonate-rich mudstone, and 4.5% for the glauconite-rich mudstone. As shown in Figure 46, the connected porosity of the illitic shale is dominated by pores having pore throats smaller than 0.1 μm, with a peak at or below 4 nm. The carbonate-rich mudstone has almost no connected porosity, a finding matching SEM observations of that sample. The glauconite-rich mudstone, by contrast, has substantial connected porosity and a bimodal distribution of pore throats, with clear peaks at roughly 10 nm and 700 nm. By combining the information provided by these two methods one can estimate pore size to pore-throat ratios, a proxy for pore accessibility.

As we have observed in the case of the mudstone results, there is some similarity between the pore size patterns, and overlap in the pore feature dimensions retrieved from scattering and those produced by MIP. This is due in large part to the fact that the pores themselves approach the size of the pore throats, especially at the smaller length scales. However, when one examines the pore to pore-throat dimensions in coarser-grained clastic rocks like sandstone, the difference can be more dramatic. A case in point is shown in Figure 47, where pore volumes from MIP data are plotted with similar data from neutron scattering against pore throat size and pore size, respectively, for the Mt. Simon sandstone from Ohio. It is clear that there is a significant difference between the dimensions of the pore throats and the actual pore sizes. In this example, the pore to pore-throat ratio is on the order of 100, which is a typical value reported for many sandstones (cf. Wardlaw and Cassan 1979; Nelson 2009; Anovitz et al. 2015a). The take-away message is that by combining not just two methods, such as MIP and neutron scattering, but adding another approach such as SEM imaging, one can begin not only to quantify the pore features but also to visually identify, and even classify, what these look like in detail.

Another excellent example of using multiple techniques to describe pores and pore throats was presented by Beckingham et al. (2013). In this study they described pore-network modeling of two kinds of samples: an experimental column of reacted coarse sediment (221–300 μm diameter) from Hanford, WA, and a sandstone from the Viking formation in the western Canadian sedimentary basin. The modeling was based on the analysis of 2-D SEM images of thin sections coupled with 3-D X-ray computed microtomography (CMT) data (Fig. 48). X-ray CT imaging has the advantage of reconstructing a 3-D pore network, while 2-D SEM imaging can easily analyze sub-grain and intragranular variations in mineralogy. Refer to Noiriel (2015) for details on the CMT technique. Pore-network models informed by analyses of 2-D and 3-D images at comparable resolutions produced permeability estimates in relatively good agreement. In cases with less adequate overlap in resolution between the methods, discrepancies in permeability of orders of magnitude were observed. Comparison of permeability predictions with expected and measured permeability values showed that the largest discrepancies resulted from the highest resolution images, and that the best predictions of permeability result from images with resolutions between 2 and 4 μm.

The pore structures of natural materials (rocks, soils, etc.), as well as those of many synthetic materials, play a critical role in controlling the physical properties of and processes in rocks (Emmanuel et al. 2015; Navarre-Sitchler et al. 2015; Royne and Jamtveit 2015, this volume), and the interactions between them and the fluids that are stored in, flow through, precipitate in (Stack 2015, this volume), and react with (Liu et al. 2015; Molins 2015; Putnis 2015, this volume) them. The better we understand and can quantify those porous structures, the better will be our ability to model, understand and predict the evolution of geological environments, either under natural conditions or under conditions such as CO2 or radiological waste sequestration, or the addition or removal of other fluids from geological reservoirs. The goal of this paper has been to present an overview of techniques for the measurement, description and quantification of pore structures in rocks and rock-like materials such as cements and ceramics. These make up the three-dimensional description of the critical pore/solid interface above atomic scales, and data on their structure provide a crucial basis for our understanding of permeation, transport and storage of fluids, as well as of various types of solid, liquid and gas contaminants. They also provide a link between the physical properties of the minerals that make up the rock and those of the rock as a whole. To make this connection, however, we must understand not only the fraction of the rock occupied by pore space, but a number of its properties. These include the connectivity, surface area and roughness, size distribution, lacunarity, and other aspects of the texture of the pore structure.

It is clear that, since the earliest attempts to quantify the porosity of geologic materials, there has been a significant increase in the complexity of analysis and quantification approaches. Early approaches required little that was not readily available, or that could not be made available, in a small laboratory. More recent approaches require expensive equipment that may or may not be available at a given institution, or that requires travel to large, specialty user facilities. In many cases, however, far from supplanting the earlier methodologies, newer approaches provide complementary data that extend our ability to quantitatively describe the pore structure of geologic materials. This quantification, in turn, is providing bridges to attempts to describe the properties of macroscopic systems from those of the mineral grains of which they are composed, and detailed in silico models of porous systems with statistical properties equivalent to those of real rocks, and thus opportunities for finer scale and more realistic models of percolation and reactive and non-reactive transport.

The availability of these new, multiscale quantification techniques has also opened up a number of new avenues for understanding fluid/rock interactions. Approaches that provide statistical analysis of relatively large rock volumes can be correlated with images of the pores themselves, as can the spatial relationships, at many scales, between pore types (especially accessible vs. inaccessible porosity) and mineralogy. Newer techniques, as yet little explored or unexplored in geological contexts (e.g., SESANS, magnetic SANS), provide opportunities to probe such questions as the microscopic origins of geomagnetism and the nature of particulate transport, and to parallel inverse-space with real-space measurements. In many cases the basic theories and application methodologies of these techniques already exist for non-geological materials and problems, but it is also possible that they may require significant modification for application to geological materials and problems. Thus, while it is our hope that this summary, and descriptions elsewhere in this volume (Noiriel 2015, this volume), will provide a useful reference for those interested in the analysis of porous materials, it is clear that future developments are likely to further expand this toolkit of approaches to measuring, characterizing and quantifying the structure of natural and synthetic porous materials.

This effort was funded by the Department of Energy Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences through the Energy Frontier Research Center - Nanoscale Control of Geologic CO2. We acknowledge the support of the National Institute of Standards and Technology, Center for Neutron Research, U.S. Department of Commerce, which is supported in part by the National Science Foundation under agreement No. DMR-0944772; the High-Flux Isotope Reactor at the Oak Ridge National Laboratory, sponsored by the Scientific User Facilities Division, Office of Basic Energy Sciences, US Department of Energy; the Manuel Lujan, Jr. Neutron Scattering Center at Los Alamos National Laboratory, which is funded by the Department of Energy’s Office of Basic Energy Sciences. Los Alamos National Laboratory is operated by Los Alamos National Security LLC under DOE Contract DE-AC52-06NA25396; the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357; the Hoger Onderwijs Reactor, Delft University of Technology, The Netherlands; and the JCNS at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz, Garching, Germany, in providing the X-ray and neutron research facilities used in this work. We would especially like to thank our many other colleagues and co-authors at the above institutions for their help and guidance. Certain commercial equipment, instruments, materials and software may be identified in this paper to foster understanding. Such identification does not imply recommendation or endorsement by the National Institute of Standards and Technology, the Department of Energy, or the Oak Ridge National Laboratory, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose. We would like to thank Timothy Prisk, Wim Bouwman, Jan Ilavsky, Roger Pynn, Bill Hamilton, Lauren Beckingham, and Alexis Navarre-Sitchler for their helpful reviews and suggestions. Prose on multifractal calculations and imaging/scattering calculations from Anovitz et al. (2013a), and Figures 37, 41, 42, 43, 44 and 45 from there, Berryman (1988), and Anovitz et al. (2015a), are used with permission from Elsevier. We would also like to thank colleagues who provided or agreed to our use of unpublished figures, including John Barker (Figure 22 and description of the NCNR VSANS), Vitaliy Pipich (Fig. 21), Boualem Hammouda (Fig. 23 and original of Fig. 24), and Alex Swift (Fig. 47), or measured data (Chris Duif and Wim Bouwman performed the SESANS measurements in Figure 27 at the Delft University of Technology, and Ken Littrell provided the original data from Littrell et al. (2002) used to plot Figure 30).