Abstract

Knowledge transfer has gained political and social momentum in the twenty-first century. The emphasis of this momentum has been on encouraging the transfer of scientific expertise between academia and industry, and on informing the public. The widespread use of the World Wide Web has provided a mechanism for sharing large volumes of information, which enables knowledge transfer between all sections of society. In geoscience, this trend in online knowledge transfer, combined with a move to digital data acquisition, processing, and interpretation technologies, has provided a unique opportunity for rapid progression of the science and its understanding by the public. To maximize these opportunities, the geoscience community needs to embrace technologies in Web and data management, as well as consider how best to combine and share data sources and data interpretations in a digital world. To achieve effective knowledge transfer, geoscientists first need to understand the benefits and limitations of digital data acquisition, processing, and interpretation. In this paper, we consider four aspects of the impact of the “digital revolution” on geological workflows and knowledge transfer: sources of uncertainty in digital data workflows, combining data sources, presentation of data and models in 3D visualization environments, and the use of Web and data management systems for knowledge transfer. In considering these aspects, we have focused on the collection, processing, and management of field data and the implications for data analysis and decision making. Understanding the benefits and limitations of digital data collection will place the community in a better position to represent data and geological models in a digital environment through online resources for effective knowledge transfer.

INTRODUCTION

Digital data acquisition has been part of geoscience for many years, but it has only been in the early twenty-first century that digital data acquisition techniques in field geoscience have become widely used and promoted (Jones et al., 2004; McCaffrey et al., 2005). Geoscientists use digital methods to improve the resolution, precision, and speed of information collection (e.g., Evans et al., 1994; Trimby et al., 2002; Dingley, 2004), and the acquisition of digital information has led to an increase in data volumes, quality, and experimental precision. Perhaps most important for field geoscience has been the integration of digital spatial information with photography, elevation models, structural data, and other information to build 3D data sets and virtual worlds. Through the use of digital technologies, the ability of geoscientists to visualize and represent geological structures in 3D has changed the focus of research and opened up new research avenues within the discipline (e.g., Wu and Xu, 2003; Pringle et al., 2004; McCaffrey et al., 2005). This change, combined with an increase in the use of digital databases (e.g., Broome et al., 1993; Ram et al., 1999), has meant that geoscientists can now collect field data with greater precision and in greater volumes, in a form that is easier to share both within and outside the geoscience community.

Our paper aims to discuss issues arising from advances in digital technologies for the acquisition, visualization, and management of field data for knowledge transfer in the geosciences. The format and style of scientific digital data presentation are crucial to its subsequent value. The presentation of data can affect its meaning and usefulness and, consequently, its worth for decision making (Tufte, 1997, 2003). Understanding the benefits and limitations of digital field data acquisition for the study of Earth systems has important implications for data presentation, and the presentation of data, interpretations, and their uncertainties in turn has implications for knowledge transfer in geoscience. There are many challenges that the geoscience community must address to make the most of digital data, 3D visualization tools, and the transfer of data and knowledge through digital environments. We consider four of these challenges within the context of knowledge transfer:

  • sources of uncertainty in digital data workflows (including human and environmental bias and the unconstrained 3D nature of geological systems);

  • interpretation, presentation, and analysis of digital data (focusing on geological model creation and interpretation in 3D visualization environments, including uncertainty visualization);

  • combining data sets (including data collected by different geoscientists, processed by different methods and of different vintages); and

  • use of Web and data management systems to transfer knowledge.

We present these discussions for consideration by field geoscientists and academics. Many of these issues are routinely considered in applied fields such as reservoir engineering (e.g., Guocheng, 1997; Hesthammer and Fossen, 2000; Floris et al., 2001) but are only just being considered in field and academic geoscience.

DIGITAL FIELD MAPPING—FASTER AND MORE PRECISE?

Digital mapping technologies have the potential to improve the speed and precision of field-based geological mapping (Jones et al., 2004; McCaffrey et al., 2005). Traditional techniques relied on a compass, a base map, and the user's ability to locate themselves in the landscape. These techniques are now supplemented by digital equipment such as total stations, which can provide a relative spatial precision of 1 mm over distances of up to 2 km, and differential Global Positioning System (dGPS) receivers, which provide global georeferencing with a spatial precision of 10–20 mm or better. An exercise undertaken at the Digital Technologies Penrose Conference in 2006 illustrates the improved precision of digital techniques. GPS location data were compared with traditional map-reading skills at two locations on a wave-cut platform at Cullercoats, NE England. Positions determined by three handheld GPS receivers were compared to the positions determined by 31 conference participants using a 1:2500 base map (Fig. 1). The average discrepancy between the positions identified by the participants and the mean GPS location was ∼20 m. Considering that the terrain is not a difficult area for map location, with diagnostic features such as piers in line of sight, GPS technology provided substantially better precision than 31 skilled geoscientists.
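
The discrepancy statistic from this exercise is simple to reproduce. The minimal sketch below, with invented easting/northing coordinates standing in for the actual Cullercoats measurements, computes the average offset between each participant's map-read position and the mean GPS location.

```python
import math

# Hypothetical easting/northing coordinates (metres), for illustration only;
# the actual Cullercoats exercise data are not reproduced here.
gps_fixes = [(424531.0, 571208.0), (424529.5, 571206.2), (424532.1, 571209.4)]
participant_fixes = [(424548.0, 571221.0), (424512.3, 571190.7), (424551.9, 571186.1)]

# Mean GPS position used as the reference location.
mean_gps = (sum(x for x, _ in gps_fixes) / len(gps_fixes),
            sum(y for _, y in gps_fixes) / len(gps_fixes))

# Average straight-line discrepancy between each participant's position
# and the mean GPS location.
discrepancies = [math.hypot(x - mean_gps[0], y - mean_gps[1])
                 for x, y in participant_fixes]
print(f"mean discrepancy: {sum(discrepancies) / len(discrepancies):.1f} m")
```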

In addition to increased locational accuracy and precision, digital techniques allow data to be acquired much faster and in much greater volumes. The Chimney Rock fault array, Utah, was first mapped using traditional field surveying techniques by Krantz (1988). Cowie and Shipton (1998) mapped 0.5 km2 of the fault array in three weeks with two people using a total station. In contrast, using dGPS, one person mapped 20 km2 of the fault array in 15 days (Maerten et al., 2001). Figure 2 shows that the fault throw profiles calculated from these three data sets are identical within errors; however, the speed of data acquisition dramatically improved with the use of dGPS. The evolution in digital technologies continues to increase the speed of data acquisition in earth sciences. Efficient capture of very detailed geospatial data sets is now possible with airborne and terrestrial “lidar” laser scanning (e.g., Cunningham et al., 2006; Pringle et al., 2006). Terrestrial lidar scanners can capture up to 12,000 data points per second.

The volume of data collected from an outcrop may now be limited by the computing power for processing and storing the data, rather than the time in the field needed to acquire the data. In essence, we can now produce a virtual outcrop on our computer screen (Clegg et al., 2005). However, virtual outcrops still need a human to interpret them. Interpretation in the field allows the geoscientist to build a model, make predictions, and test these in the field in real time. The geologist can visit outcrops and revisit them in the light of a revised model or new information. The integrated nature of geological systems means that the broader geological context to our observations and hypotheses is important. Observations of other elements of the geological system at a smaller and larger scale can provide crucial contextual information for interpretation that is often absent from digitally acquired data sets and the virtual outcrop. A virtual outcrop can be used as an aid to interpretation and for ease of data sharing and visualization for the data collector and others. For instance, a virtual outcrop can be used to allow others to input into an interpretation and can allow the data to be reinterpreted at a later date. Remote surveying techniques can additionally allow the user to capture data from inaccessible areas such as cliff faces.

INTERPRETATION AND MODEL BUILDING: ENVIRONMENTAL AND HUMAN BIAS

Uncertainties are introduced at each stage in geological data acquisition by human and environmental bias. For instance, detailed fault zone studies such as those by Krantz (1988), Cowie and Shipton (1998), and Maerten et al. (2001) tend to take place in areas where the climate produces exceptional exposure (e.g., the Utah desert). Additionally, field areas may be selected for their accessibility, especially if digital surveying equipment and power supplies need to be transported. The acquisition of large volumes of digital data from a limited set of examples has the potential to bias our understanding of geological systems, logistically restricting geoscience to areas that are well exposed and easily accessible. Such bias limits the confidence with which these data sets can be used as analogs in geological models.

Scientific methodology means that we collect information that matches a hypothesis (e.g., Chamberlin, 1890). Digital surveying and data collection techniques could be perceived as mitigating human bias, because they collect the full range of data available from the sampled area (e.g., a lidar scan will systematically collect data across an area and produce a point cloud of information). But a sampling site will be chosen to measure a particular set of features, and the experiment or data collection strategy will be based on a hypothesis that the scientist aims to test. Acknowledging such bias in scientific methodology is important. Moreover, at the analysis and interpretation stage of the workflow, an individual's prior knowledge will inform the concepts they fit to the data (Bond et al., 2007). Individuals may also be biased in their interpretation and analysis of data by irrelevant contextual information, preconceived notions, and dominant experience (Tversky and Kahneman, 1974). For example, a geoscientist may predict a greater amount of fault rock in an unsampled volume if they have recently surveyed an area of high fault density.

Assumptions made during data processing also introduce uncertainties, for instance, when algorithms are used to automatically remove vegetation or other noise from lidar-scanned surfaces (Zhang et al., 2003). These assumptions are required because of the physical constraints of the natural environment in which most geoscience data are collected. An example of errors due to data processing can be seen in the previous example of fault throw calculated from both traditional and digital data sets. In Figure 2, fault throw profiles were constructed by comparing the relative elevation of bedding across the fault surfaces (Krantz, 1988; Cowie and Shipton, 1998; Maerten et al., 2001). This required that bedding be projected along strike onto the fault plane, i.e., assuming that the beds were not folded close to the fault. The assumption that bedding had a constant strike direction introduced much larger uncertainty into the derived fault throw profiles than the resolution of either the dGPS or total station surveying techniques.
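
To make the role of this assumption concrete, the sketch below derives a throw profile by projecting surveyed bed points along an assumed constant strike direction onto an east-west fault trace. All coordinates and the strike value are hypothetical, and the projection holds elevation fixed, which is only valid if the beds are planar and unfolded near the fault; this is exactly the assumption that dominates the uncertainty budget.

```python
import numpy as np

def project_along_strike(points, strike_deg):
    """Project surveyed bed points (x, y, z) onto a fault trace along y = 0,
    sliding each point along the assumed constant strike direction. Elevation
    is held fixed: strike lines are horizontal on a planar, unfolded bed."""
    sx, sy = np.sin(np.radians(strike_deg)), np.cos(np.radians(strike_deg))
    x, y, z = points.T
    t = -y / sy                      # distance travelled to reach the trace
    return x + t * sx, z             # along-fault position, bed elevation

# Hypothetical survey points on the same bed, either side of an E-W fault.
footwall = np.array([[10.0, 40.0, 112.0], [60.0, 55.0, 113.5], [120.0, 35.0, 115.0]])
hangingwall = np.array([[15.0, -30.0, 104.0], [70.0, -45.0, 105.0], [125.0, -25.0, 106.5]])

strike = 30.0                        # assumed constant bed strike (deg from N)
fw_d, fw_z = project_along_strike(footwall, strike)
hw_d, hw_z = project_along_strike(hangingwall, strike)

# Sample both projected bed traces at common along-fault positions; the
# elevation difference between them is the throw.
d = np.linspace(max(fw_d.min(), hw_d.min()), min(fw_d.max(), hw_d.max()), 5)
throw = (np.interp(d, np.sort(fw_d), fw_z[np.argsort(fw_d)])
         - np.interp(d, np.sort(hw_d), hw_z[np.argsort(hw_d)]))
print(np.round(throw, 2))            # ~9-10 m along this hypothetical profile
```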

To produce a geological model from data collected on surfaces, assumptions must be made to project rock structures and properties into the 3D volume of interest. The ability to visualize information in a 3D environment, i.e., viewing the data from the sampled planes in their relative spatial context, may give the interpreter a better appreciation of the unsampled volume and the possible concepts that could be applied to the data to create a geological model (e.g., Pundt and Brinkkotter-Runde, 2000). However, although a lidar scan of a surface is very high resolution, uncertainty remains due to projection of outcrop data into the surrounding rock volume. For instance, hypothetical outcrop scans of opposite walls of a canyon show two slices through a 3D fault zone volume (Fig. 3). At this scale, there is little to no constraint on how to link the anastomosing fault strands mapped in detail in the canyon walls, particularly as the floor of the canyon is covered in sediment. Through-going fault strands could link both walls, or the faults may all terminate within the unsampled volume (Fig. 3B). To build a geological model for the fault zone, assumptions must be made to facilitate the interpretation and to project the faults into the unsampled volume. The interpreter uses experience and knowledge from other areas as analogs to help in developing 3D models from 2D data. Ultimately, these models can be tested in other field areas and against other data sets to build understanding of geological systems.

3D VISUALIZATION OF UNCERTAINTY

The interpretation stage of a geological workflow may be aided by 3D visualization and model building packages (Pundt and Brinkkotter-Runde, 2000). Capturing the uncertainty in the models produced from this process provides an additional challenge. 3D digital models have an inherent slickness, which may mask the underlying uncertainty in the data acquisition and interpretation (Pang et al., 1997). How geoscientists represent uncertainty in a model is crucial to its value. Most studies of uncertainty concentrate on mathematical algorithms for calculating uncertainty (e.g., Bárdossy and Fodor, 2001; Cortazar et al., 2001), but few consider how to represent interpretational uncertainty visually in 3D geological models. A summary of the visualization techniques developed for the representation of uncertainty in images is given in Hearnshaw and Unwin (1994) and references therein, as well as in Pang et al. (1997).

With the increasing prevalence of 3D data sets, such as the seismic data volumes used in industry, more attention is being paid to the presentation and visualization of uncertainties in 3D geological models. The 3D geological model in Figure 4A was developed from well and seismic data. Within the range of uncertainty of the well positions, the depths of each of the colored horizons are known. Geoscientists use the most “realistic” geometries to connect the well intersection points, relying on their training, experience of analog sites, stratigraphy and basin architecture, and judgment to “correctly” correlate the horizons. Figure 4B shows the uncertainty in the well positions and the propagated uncertainties in the horizon positions represented by probability density distributions, adding “fuzziness” to the slickly presented model. An alternative suggestion is to use color intensity (Davis and Keller, 1997; Penrose Conference, 2006). For instance, in Figure 4B, the red horizon would be deep red (100% intensity) at the well intersection, fading to less than 1% intensity where the location of the horizon is effectively unconstrained. It must be remembered, however, that Figure 4B represents the positional uncertainty in only one possible realization of the modeled horizons, because the horizons could be linked to the wells in a different geometrical arrangement from that presented. The uncertainty created by applying different concepts to a geological data set at the interpretation stage (e.g., Fig. 3B) can potentially be large (Bond et al., 2007).
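
The color-intensity idea is straightforward to prototype. The sketch below is not a reproduction of Figure 4B; the well picks are invented. It draws one linearly correlated horizon between two wells and fades its color toward the midpoint, where the geometry is least constrained.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical well intersections of one horizon (distance along section, depth).
wells_x, wells_z = np.array([0.0, 1000.0]), np.array([-1200.0, -1350.0])

x = np.linspace(wells_x[0], wells_x[1], 200)
z = np.interp(x, wells_x, wells_z)  # one possible (linear) correlation

# Color intensity as a proxy for confidence: full intensity at the wells,
# fading towards the midpoint where the horizon is least constrained.
confidence = 1.0 - (np.minimum(np.abs(x - wells_x[0]), np.abs(x - wells_x[1]))
                    / (0.5 * (wells_x[1] - wells_x[0])))

fig, ax = plt.subplots()
for i in range(len(x) - 1):
    # Draw the horizon segment by segment so each piece carries its own alpha.
    ax.plot(x[i:i + 2], z[i:i + 2], color="red", alpha=max(confidence[i], 0.01))
ax.scatter(wells_x, wells_z, color="black", zorder=3, label="well picks")
ax.set_xlabel("distance along section (m)")
ax.set_ylabel("depth (m)")
ax.legend()
plt.show()
```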

Other suggestions for visualizing uncertainty include changing focus (MacEachren, 1992, 1994), texture change (Goodchild et al., 1994; Forssell and Cohen, 1995), pseudo-coloring (Hagen et al., 1992), and the use of glyphs (characters of different shapes or sizes that represent the extent of uncertainty; Wittenbrink et al., 1996; Pang et al., 1997). Pang et al. (1997) also discuss the use of geometry, animation (see also Gershon, 1992), sonification, and psycho-visual approaches (sounds or messages that evoke a psychological response) to “visualize” uncertainty. Color maps, texture changes, fuzziness, geometry, and animation are probably more applicable to geoscience studies than sonification and psycho-visual approaches. In analyzing their own psycho-visual experiments for the representation of uncertainty, Pang et al. (1997) note the difficulty in achieving a consistent response from participants. This observation may be equally true of the more accepted methods of uncertainty visualization. In attempting to show the uncertainty in models, we may add further uncertainty and bias to the interpretation of the data.

COMBINING DATA

To develop a rigorous understanding of a geological system that has predictive use, 3D models must be tested against other field examples or data sets (e.g., Mattioni et al., 2007). Using knowledge from many field sites is essential to constrain 2D to 3D interpretations and to aid in understanding conceptual uncertainty; yet, combining data sets brings new uncertainties. The advent of online data-sharing resources has created new opportunities to compare and combine data. We consider two simple examples of combining data sets that highlight key concerns for data sharing in a digital world.

The first example is a hypothetical map, similar to those published by geological surveys, which has been created by compiling data collected by several geoscientists, at different time periods, undertaking different elements of research. In this example (Fig. 5), the four geoscientists have created very different maps because the focus of their studies and their reasons for mapping the terrain were diverse. The compilation therefore contains varying levels of detail, for example, in dike geometries and chemistry, depending on who mapped each area. In the Moine Thrust Belt of NW Scotland, the original survey mapping (Peach et al., 1907) was reinterpreted by Elliott and Johnson (1980) in their seminal analysis of thrust system evolution. However, many of the Peach et al. (1907) structural geometries were incompatible with Elliott and Johnson's cross sections, prompting remapping with new interpretations, notably by Coward (e.g., Coward, 1980, 1982, 1983, 1984). It is not that the geological outcrops move or change radically between the various maps, but most of the boundaries must be inferred rather than observed directly, so the nature and geometry of the interpreted boundaries differ. On all geological maps, the model directly influences the map pattern: data and interpretation are inextricably linked, and as hypotheses change, so do the maps (Butler, 2007). Therefore, understanding why data were collected is crucial for the interpretation and collation of data using digital and non-digital techniques.

The second example considers the importance of definitions and metadata (data about data) in collecting and combining data. Shipton et al. (2006) discuss the problems of combining fault zone width versus fault displacement data collected by different geoscientists. Scaling of fault zone width versus displacement is often used by structural geologists to make predictions about fault permeability (e.g., Manzocchi et al., 1999), and by seismologists to make predictions about the width of active fault slip zones during an earthquake. A large volume of data is required to make statistically valid predictions. Many presentations of fault width versus displacement combine data sets where research teams have used different definitions for fault properties (Fig. 6). To gain useful information from combining data sources, standard criteria or measurement definitions are needed (Lunn et al., 2007). Alternatively, scientists need to publish their measurement criteria and assumptions (i.e., create metadata) and link these to interpretations and data sources.
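
One lightweight way to attach such criteria to the numbers themselves is to make the definitions part of the record. The sketch below is a hypothetical schema, not the format used by Shipton et al. (2006); it stores each fault zone measurement together with the definitions under which it was made, so that data from different research teams can be combined meaningfully.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FaultZoneMeasurement:
    """One fault-zone observation with its measurement criteria attached."""
    fault_name: str
    displacement_m: float
    zone_width_m: float
    width_definition: str    # e.g. "fault core only" vs "core plus damage zone"
    displacement_basis: str  # e.g. "stratigraphic separation of a marker bed"
    collected_by: str
    collected_on: str        # ISO 8601 date

# Hypothetical record for illustration; the definitions are the metadata
# that must travel with the numbers for the data sets to be comparable.
m = FaultZoneMeasurement(
    fault_name="Fault A", displacement_m=12.0, zone_width_m=3.5,
    width_definition="fault core plus damage zone, normal to slip surface",
    displacement_basis="offset of sandstone marker bed",
    collected_by="J. Smith", collected_on="2006-07-14")

print(json.dumps(asdict(m), indent=2))
```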

The two examples used are of field data that could have been acquired digitally or non-digitally. There are similar, but distinct, issues for the combination of processed digital data, such as different vintages of GPS and geodetic data (Featherstone, 2006), seismic data (Bishop and Nunns, 1994; Rickett and Lumley, 2001), or gravity and magnetic data (Fairhead and Somerton, 1998; Allsop et al., 2002). For geospatial information, it is often desirable to spatially compare or combine data stored on different systems or in different formats. Techniques have been developed to combine spatial information based on international geogrid systems. Many of these systems use the power of computing grids, such as the Grid Access Data Service (GADS), which uses the Web Services framework to compare climate and oceanographic information (Bower et al., 2003). As digital technologies develop, commonly used data formats change, and active curation of digital data is becoming increasingly important (Brown, 2007). How data from different sources are processed and combined has important implications for the certainty and uncertainty of interpretations of those data. Documenting processing information and the assumptions and criteria used for data combination as metadata increases the future worth of data and interpretations. Such provenance information should be easier to collect and store with the increasing use of digital data and e-science experiments, enabling easier replication and increased experimental rigor. Provenance experiments are being undertaken to determine how best to design software architectures that capture provenance information (e.g., Barga and Digiampietri, 2007; Miles et al., 2007). How metadata, such as the hypothesis being tested and the aim of the final interpretation, are assigned to our models is crucial for effective knowledge transfer.
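
At the most basic level, combining geospatial data sets means reconciling coordinate reference systems. A minimal sketch, using the third-party pyproj library and an approximate location chosen purely for illustration, reprojects a WGS84 longitude/latitude fix into British National Grid coordinates so that it can be overlaid on a survey stored in that system.

```python
from pyproj import Transformer  # third-party; pip install pyproj

# Reproject a point recorded as WGS84 longitude/latitude (EPSG:4326) into
# British National Grid eastings/northings (EPSG:27700) so it can be
# combined with a survey stored in that system.
to_bng = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)

lon, lat = -1.43, 55.03  # approximately Cullercoats, for illustration only
easting, northing = to_bng.transform(lon, lat)
print(f"{easting:.1f} E, {northing:.1f} N")
```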

ONLINE KNOWLEDGE TRANSFER

The World Wide Web and online resources have revolutionized accessibility to data and resources (Berners-Lee, 1996; Berners-Lee and Fischetti, 1999). Online resources are used to inform choices, helping individuals and businesses to make decisions and plan events and enabling them to share information with others in an accessible format. Access to such data sources should make decisions both quicker and easier. However, access to too much data of unknown quality can introduce new problems, such as difficulty in locating relevant information and in assessing its relevance and quality for decision making. The format and mechanisms used for data management and digital information sharing have implications for knowledge transfer in science as well as in everyday scenarios.

The ability to host and share information in a digital environment means that data accessibility is not physically limited, but is controlled by the environment and user interface of online information stores. How we search and access such data sources is increasingly important as the volume of data and information we collect through digital data acquisition techniques increases (e.g., Fairhead and Somerton, 1998). Resource types can range from simple databases with a minimal user interface, requiring certain knowledge and experience, to sophisticated data searching and user interface environments. For effective knowledge transfer, it is important to know your users, and hence the language and search criteria they will use. Online databases and resources need to be set up to be fit for purpose. A tiered system of information allows accessibility at a range of levels and specialties, where users can define the level and amount of information they require. For instance, a user may want just the interpretation, may need to access the data behind the interpretation, or may wish to examine the assumptions used to process and acquire the data. As more data are collected digitally, we should be better placed to present data and metadata alongside interpretations in an accessible, easy-to-visualize manner, and to combine multiple data sets to increase understanding of geological systems and processes.
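
The tiered idea can be made concrete with a toy record store. The sketch below is an illustrative assumption, not any existing system's schema: each record carries an interpretation, the underlying data, and its provenance, and the user chooses how deep to drill.

```python
# A minimal sketch of a tiered information store. The record structure and
# field names are illustrative assumptions.
RECORD = {
    "interpretation": "Normal fault zone, ~9 m throw, segmented at surface.",
    "data": {"survey_points": "survey_points.csv", "instrument": "dGPS"},
    "provenance": {
        "processing": "vegetation filtered; points gridded at 0.5 m",
        "assumptions": ["constant bed strike", "no near-fault folding"],
    },
}

TIERS = {"summary": ["interpretation"],
         "data": ["interpretation", "data"],
         "full": ["interpretation", "data", "provenance"]}

def fetch(record: dict, tier: str = "summary") -> dict:
    """Return only the levels of information the user asked for."""
    return {key: record[key] for key in TIERS[tier]}

print(fetch(RECORD, "summary"))
print(fetch(RECORD, "full"))
```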

The World Stress Map (WSM) project (Reinecker et al., 2005) (http://www.world-stress-map.org) is an example of a tiered online resource of geological information that contains linked interpretations and data. Although the resource contains specialized information, it has a simple user interface, and a proficient PC user can view the stress map for their region of the world and receive a file of the plotted stress data. More advanced users can download the complete World Stress Map 2005 database as a zipped dBase (wsm2005dbf.zip), Excel spreadsheet (wsm2005xls.zip), or ASCII (American Standard Code for Information Interchange) file (wsm2005csv.zip). The World Stress Map project relies on the geoscience community to input their own databases on four categories of data, with standardized measurements detailed in Zoback and Zoback (1980, 1991), Zoback et al. (1989), and Sperner et al. (2003). When uploading data, the user must also define the data quality, i.e., provide metadata about the information they are uploading. The quality ranking scheme is based mainly on the number, accuracy, and depth of the measurements (Zoback and Zoback, 1989; Sperner et al., 2003). The output interpretation image shows the category and quality of the data by color coding of data points and by flag lengths, respectively. Information on data processing, references, and other material is available alongside the data. The World Stress Map thus provides different levels of information that can be utilized by users with varying requirements.
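
For an advanced user, working with the downloaded database might look like the following sketch. The column names ("LAT", "LON", "QUALITY") and the unzipped file name are assumptions made for illustration and should be checked against the actual release.

```python
import pandas as pd

# Load the downloadable WSM database (assumed here to unzip to wsm2005.csv)
# and keep only the better-quality stress indicators. Column names are
# assumptions; verify them against the file actually downloaded.
wsm = pd.read_csv("wsm2005.csv")
good = wsm[wsm["QUALITY"].isin(["A", "B", "C"])]

# Restrict to a region of interest, e.g. the British Isles.
region = good[good["LAT"].between(49, 61) & good["LON"].between(-11, 2)]
print(f"{len(region)} quality A-C records in the region")
```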

The WSM project is a good example of a tiered system of information, with the output accessible online. However, specialist knowledge is required to understand the information it provides. A school science teacher or a journalist using the World Stress Map to inform the public about earthquake prediction might easily be misled into believing that earthquakes can be predicted from the images produced by the database. How the interested public should interpret the data is often not addressed in online geological databases. For example, the Harvard Centroid-Moment-Tensor (CMT) catalog (http://www.globalcmt.org/) of earthquake information requires those wanting to download information in map form to also be a Generic Mapping Tools (GMT) user (see http://www.soest.hawaii.edu/gmt/). Databases for the specialist do not engage the wider public, or policy makers, in their knowledge transfer.

In fields where specialized technical data are used to inform commercial or social decision making, data and idea sharing initiatives are now being funded to revolutionize knowledge transfer both within and outside specialist communities. In the UK, over 500 million pounds of government money has been made available for knowledge transfer schemes that aim to enable universities to apply their knowledge, ideas, and expertise in response to market needs and for public benefit, making scientific resources and data more widely available (Higher Education Innovation Fund, 2005). These initiatives highlight the political recognition of the benefits of idea and resource sharing.

Some online databases are specifically designed to inform the public—an embodiment of the knowledge transfer culture and of policies giving public access to data and information. These types of resources are generally used for informing decision making at a range of levels. Examples that belong to this category include mapping and line scan data (e.g., British Geological Survey marine magnetic surveys, http://www.bgs.ac.uk/geophysics/marmag.html), from which commercial decisions will be made, or the UK's Environment Agency flood maps (http://www.environment-agency.gov.uk/subjects/flood) that have a potential impact on social policy, planning, house prices, and insurance. The flood maps show areas with a 0.1% chance of flooding, based on a range of data from modeling to historical flood information. Similar impacts on planning policy and house and insurance prices could be predicted for decisions arising from delineating faults on a U.S. Geological Survey (USGS) geologic map of California, or from a map of predicted ground shaking around an active fault zone (e.g., Wong et al., 2002).

The usefulness of these map-based resources will depend on decisions made about how to combine data while ensuring consistency of the information collected by different people and from different sources. How data are combined and processed, and how the uncertainties are calculated, have social and commercial impacts, for example, in flooding and other geohazard environments. Despite sophisticated computing algorithms for calculating risk, data modeling always contains assumptions and uncertainties. Some of the uncertainties caused by human bias in flood risk analysis are discussed by Pappenberger et al. (2007), who show that choices of model and of processing, such as traditional area-averaging techniques, can be inadequate for flood hazard studies. Uncertainties in combining map and field data collected by different geoscientists are highlighted in Figures 5 and 6 of this paper, and in Shipton et al. (2006) and Butler (2007).

Many initiatives funded through knowledge transfer schemes have taken up the challenge of using online resources to facilitate knowledge transfer. New initiatives are using the evolution of Web-based search systems (Berners-Lee et al., 2006) to search for and manage information more effectively. As we acquire more data, digital data management is crucial to our ability to maximize the benefits of the information (Fairhead and Somerton, 1998), for example, in finding analogs against which we can test our own models and data or combine them with others. One such project currently under development is the Virtual Seismic Atlas (VSA). The Atlas will make available online a database of seismic images to form a visual atlas. The images will have associated metadata on location, processing of the data, borehole information, etc. The more metadata and contextual information contributors supply to the Atlas, the more useful the resource will be for searching and acquiring relevant information. The “indexing” of the Atlas will use current expertise in Web and database content management systems to allow the non-expert or first-time user to find the information that is most relevant to them. The search will use metadata and data combined with contextual information (Voelker, 2005) to perform multidimensional searches. The search system will not rely on an existing taxonomy or metadata structure that forms a linked chain, but will make a web of valid pathways to results. Even with the most up-to-date Web technology, the challenge for the VSA is in providing a single user interface behind which effectively lies a tailored index system for each user.
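
The flavor of such a multidimensional search can be suggested with a toy faceted index. The records and facet names below are illustrative assumptions, not the VSA's actual schema: any combination of facets forms a valid pathway to an image, rather than a single fixed taxonomy.

```python
# A minimal sketch of multidimensional (faceted) metadata search: every
# facet a contributor supplies becomes a valid path to the image.
ATLAS = [
    {"id": "img_001", "basin": "North Sea", "structure": "listric fault",
     "processing": "pre-stack depth migration", "has_wells": True},
    {"id": "img_002", "basin": "Gulf of Mexico", "structure": "salt diapir",
     "processing": "post-stack time migration", "has_wells": False},
]

def search(atlas, **facets):
    """Return records matching every supplied facet, in any combination."""
    return [rec for rec in atlas
            if all(rec.get(key) == value for key, value in facets.items())]

print(search(ATLAS, structure="listric fault"))
print(search(ATLAS, basin="North Sea", has_wells=True))
```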

The VSA will enable members of the geoscience community to interpret and upload seismic sections, as well as compare and rate other interpretations. In this system, the interpretations are continually peer reviewed and can evolve as ideas change. Such online resources not only make data easily accessible to the community, but also capture and encourage debate and idea sharing. This community-based resource and review system is effectively an expansion of the online peer review discussions currently being tested in online journals such as Geosphere (Geological Society of America) and G3 (Geochemistry, Geophysics, Geosystems), the American Geophysical Union's online publication, or of the ranking systems found online for hotels and restaurants. An environment in which the geoscience community can review its own input and the evolution of ideas and understanding allows peer-driven quality control. For earth science, online data sharing should provide an effective mechanism for knowledge transfer between the public, academic and industrial geoscientists, and associated scientific communities. Web-based knowledge transfer initiatives may mark a move toward collaborative community-based science and could provide the type of environment that enables “scientific revolution” (Kuhn, 1962), maximizing the evolution and progression of ideas and understanding as well as informing policy and the public.

CONCLUSIONS

The degree to which the geoscience community embraces digital technologies will depend on its ability to apply technology to geological problems. As an interdisciplinary science, geology will not be at the frontier of technology development, but the geoscience community's aptitude for being among the first to apply technological developments to the subject will determine the rate of progression of knowledge and understanding in geoscience. Digital technologies have improved data collection at geological field sites by increasing data density and precision, increasing the speed of data collection, and improving our ability to locate positions on Earth's surface. Analysis and interpretation are enhanced by 3D visualization, which provides spatial context in situations or at scales that may be difficult to visualize in the field. At the same time, Web and content management system technologies provide a mechanism for knowledge transfer and information sharing.

The improvements in the digital collection of field data and the subsequent analysis and interpretation of geological environments discussed in this paper have also provided the geoscience community with challenges. These challenges include:

  • collection of data from a wider range of geological environments and field sites, to increase the number and variety of analogs and to reduce sampling and environmental bias;

  • methods for using field-based digital technologies to predict geological features and properties in unsampled rock volumes;

  • presentation of uncertainty in 3D geological models;

  • digitally capturing metadata and data provenance throughout geological workflows to increase the value of interpretations and data; and

  • embracing new technologies and developments within computer science to effectively use digital knowledge transfer environments.

As with any other scientific tool, digital data cannot be used uncritically. In this paper, we have raised issues and challenges for the critical use of digital data. These challenges should provide a focus for further work in this area, so that geoscience maximizes the benefits of digital technologies for knowledge transfer in the future.

ACKNOWLEDGMENTS

The authors thank the participants of the “Unlocking 3D Earth Systems—Harnessing New Digital Technologies to Revolutionize Multi-Scale Geologic Models” Penrose Conference at Durham in 2006. Many of the ideas presented in this paper evolved from discussions that took place during the conference. Participants of the conference are also thanked for their involvement in the map location exercise at Cullercoats. CEB is supported by a Scottish Executive SCORE grant. The VSA is funded by Natural Environment Research Council Knowledge Transfer Grant NE/E002803/1, together with the Petroleum Exploration Society of Great Britain, BHP-Billiton, Shell, Amerada-Hess, Statoil, and Hydro. The 3D geological models in Figure 4 were created in the Midland Valley software package 3DMove. This paper was improved by the comments of two anonymous reviewers.