The value of information (VOI) metric requires geophysicists to clearly link their measurements to important subsurface parameters in geothermal exploration, such as temperature, permeability, and fluid information. The metric quantifies the increase in the expected (average) value outcome of a decision made with a particular type of additional information, relative to the same decision made without it. VOI faces challenges similar to those in machine learning: labeled data sets are needed in which the labels are the decision variables and the features are the geophysical observables. This paper presents three examples of how the statistical relationship, or reliability, may be calculated for geothermal exploration applications. VOI provides a way to document successes and identify opportunities for improvement in derisking geothermal prospecting with geophysical information.

As its name suggests, the value of information (VOI) metric focuses on value. VOI is defined by two connotations of value, termed loosely here as dollars and reliability. This paper provides examples of both, with more emphasis on the latter. This introduction explores the dollar value of geothermal and the challenges of using geophysical data for geothermal exploration. The core of the paper consists of three examples of reliability methodologies. Reliability and the VOI metric compel geophysicists to quantify, communicate, and improve the utility of geophysical methods for geothermal exploration. The paper concludes with a discussion on the importance of disassociating VOI from cost.

Value is most often expressed in monetary units. This is the most intuitive type of unit for evaluating “good” and “bad” outcomes from different decision scenarios (Pratt et al., 1995). Drilling where there is oil is a good outcome. A barrel of oil provides approximately 5,000,000 kJ of energy. A barrel of 300°C geothermal fluid provides approximately 150,000 kJ of thermal energy. This presents a value of US$0.50 per barrel for geothermal fluid. The contrast between the value of oil and hot water greatly impacts the dollar part of the VOI calculation. Decisions with higher payoffs can culminate in higher VOIs, irrespective of the type of information. However, the value of geothermal energy should include economic externalities, given that emissions of geothermal power plants are significantly lower than those of fossil fuel power plants (Bhatia, 2014). The dollar side of VOI for geothermal is more in the jurisdiction of policy makers.
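The back-of-envelope energy comparison above can be reproduced in a few lines. The barrel volume, heat capacity, and reference temperature used here are my assumptions for illustration, not values taken from the text.

```python
# Rough thermal energy in one barrel of 300°C geothermal fluid.
# Assumed (not from the text): 1 barrel ~ 159 L, fluid treated as water
# (density ~ 1 kg/L, cp ~ 4.2 kJ/kg·K), heat extracted down to 25°C ambient.
BARREL_LITERS = 159.0
CP_KJ_PER_KG_K = 4.2
mass_kg = BARREL_LITERS * 1.0          # ~1 kg per liter
delta_t_k = 300.0 - 25.0               # cooling from 300°C to ambient
thermal_kj = mass_kg * CP_KJ_PER_KG_K * delta_t_k
print(round(thermal_kj))               # 183645 kJ -- same order as the
                                       # 150,000 kJ figure cited in the text
```

The exact figure depends on the assumed reference temperature and heat capacity; the point is that it sits one to two orders of magnitude below the ~5,000,000 kJ in a barrel of oil.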

Most geothermal reservoir engineers will attest that the real dollar value of a resource is driven by the reservoir's extent and temperature. The reservoir temperature is measured directly by bottom-hole temperature measurements. However, like all well measurements, they are spatially sparse and may not be representative of the entire reservoir. Geophysical measurements can provide a more spatially complete information source. Previous studies have demonstrated how distributed temperature sensing, electrical resistivity tomography, and the self-potential method can provide spatial and temporal indications of temperature changes but not absolute temperatures. Additionally, these three methods are limited in their extrapolation in depth and are only reliable up to 100 m depth (Hermans et al., 2014). Therefore, many geothermal developers doubt the value of geophysics for geothermal prospecting and derisking, preferring large temperature resources (dollar value) and direct temperature measurements (reliability) to validate their investment in power plant infrastructure.

What data type can reliably measure geologic properties that are most associated with the dollar value? In conventional petroleum systems, no geophysical technique directly measures the hydrocarbons. The seismic response is a function of the total compressibility of the rock-fluid system. After decades of research and development, many seismic technical advances (e.g., amplitude variation with offset analysis, depth migration, multicomponent, and full-waveform inversion [FWI]) have led to a consensus that the use of 3D seismic at least doubles the probability of drilling successful wells for conventional petroleum systems (Gray, 2011).

In simplified terms, a conventional geothermal reservoir consists of a heat source (temperature), permeability, viable fluids, cap, and outflow or fluid discharge area (Coolbaugh et al., 2015). Similar to petroleum geophysics, no geophysical technique will directly measure the volume of fluids at a certain temperature. However, geophysical data can detect potential proxies (past or modern indicators) of permeability and fluids, such as faults and clay caps. Even though geophysics does not directly detect geothermal fluids or permeability, geophysical information can reduce exploration risk, especially through cutting-edge imaging, data integration, and/or processing techniques. This is precisely what VOI captures by statistically assessing how each information type could successfully identify one or more of the mechanisms of a geothermal resource.

The challenge is to connect geophysical observables to the property that provides the dollar value and to quantify the reliability of the connection. This is comparable to the current challenge of having high-quality labeled seismic data sets for machine learning. In the machine learning case, the label is the true geologic property, and the features may be migration models, which are constructed by observed seismic data (Wu et al., 2020). VOI similarly needs labeled data sets. However, they can come from a variety of methods, such as numerous observations from a deterministic result (shown in example 2). Like all statistical methods, the calculated VOI will become more useful and stable with more observations. Example 3 will demonstrate a machine learning example for calculating VOI.

The distinction between VOI reliability and the features-labels relationship in machine learning is that the “label” must be subsequently linked to the dollar value. Let's call this the label-to-decision variable connection. For example, if a fault exists, it is [X] more likely that deeper and hotter geothermal fluids will flow (Carranza et al., 2008). If there is an illite-smectite clay cap, it is [Y] more likely that fluids greater than 160°C exist (Rejeki et al., 2010). In these two examples, one must convert the geothermal fluids and/or temperature into dollar amounts. This is true for any reliability. It must quantify how accurately and consistently the geophysical observations identify certain subsurface conditions, which affect the dollars in the decision (e.g., heat and fluid recovered). Ultimately, the reliability must be expressed with a statistical relationship (Trainor-Guitton, 2014; Trainor-Guitton et al., 2014).

Geophysicists are suited to understand, quantify, and refine the associations of geophysical observations (features) with geothermal indicators (labels such as temperature, steam flow, or proxies for permeability). However, complex geothermal reservoirs make it difficult to have ample labeled 3D/4D data sets. This paper presents three varied approaches to VOI analysis that are applied to geophysical data for geothermal exploration.

The first example serves as a tutorial for VOI calculations and utilizes empirical relationships between electrical resistivity and temperature, porosity, and salinity. Simple thresholds define three classes of geothermal reservoirs: economic, marginal, and uneconomic. The second example utilizes real field data from Darajat, Indonesia, including steam flow rates and an inversion model of electrical resistivity. The third example is the first VOI methodology to include seismic migration models and reliability statistics from convolutional neural networks (CNNs).

The objective is to give readers intuition and ideas of how to design their own VOI analysis. The three examples provided contain simplifications and shortcomings. The first leaves out the spatial dimension. The second comes from an exceptional, steam-dominated geothermal field. The third focuses on fault existence, when the fault's dip angle is likely more consequential for geothermal drilling decisions. Even though the VOI attempts are imperfect, they offer a mechanism for dialog among diverse expertise including geologists, geophysicists, and reservoir engineers. Using VOI and focusing on value can help identify the most prudent exploration practices.

This example provides the most straightforward demonstration of VOI by focusing on the nonunique relationship between three geothermal reservoir properties (temperature, porosity, and salinity) and one geophysical attribute (electrical resistivity). The first simplification is the label-to-decision variable connection. Two thresholds for temperature (150°C and 200°C) and porosity (1.68% and 5%) define three geothermal reservoir categories (the decision variable): economic, marginal, and uneconomic. These categories are represented by green, yellow, and red in Figure 1a. The x-axis is temperature, and the y-axis is log10 electrical resistivity. The reliability will describe how well electrical resistivity can distinguish between the three geothermal categories.
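The label-to-decision variable simplification can be sketched directly in code. Note that the rule for how the temperature and porosity thresholds combine into a single category is my assumption for illustration, since Figure 1a is not reproduced here.

```python
def reservoir_category(temp_c, porosity_pct):
    """Map temperature and porosity to a decision-variable class using
    the example's thresholds (150°C/200°C and 1.68%/5% porosity).
    How the two thresholds combine into one class is assumed here."""
    if temp_c >= 200 and porosity_pct >= 5:
        return "economic"
    if temp_c < 150 or porosity_pct < 1.68:
        return "uneconomic"
    return "marginal"

print(reservoir_category(250, 5))   # economic
print(reservoir_category(175, 2))   # marginal
print(reservoir_category(100, 5))   # uneconomic
```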

The second simplification is the generation of possible electrical resistivity values using empirical formulas (Ucok et al., 1980). In Figure 1a, resistivity was calculated for brine temperatures (ranging from 25°C to 375°C), three different porosity values (0.95%, 1.68%, and 5% shown by blue, pink, and yellow lines), and three different salinities (3%, 10%, and 20% concentrations shown by circle, diamond, and x markers). Using the Kozeny-Carman relationship, these porosity values could correspond to 10⁻⁶, 10⁻⁸, and 10⁻⁹ Darcy. Salinity is not included in the definition of the geothermal category but is included to further demonstrate ambiguity between electrical resistivity and the geothermal category. For example, all three categories can produce a log10 electrical resistivity between 1.3 and 1.6. These overlaps illustrate that electrical resistivity cannot uniquely determine if the reservoir is hot and will flow.

Let's use the resistivity value log10ρ = 1.6 as a demonstration. Figure 1a graphically shows the intersection of log10ρ = 1.6 and the three geothermal categories. Therefore, the resistivity value log10ρ = 1.6 may belong to any of the three categories.

Suppose we have a calibrated (labeled) resistivity data set (e.g., we know the reservoir category that each datum belongs to), and we want to calculate the likeliness of a particular log10ρ value. This is known as the likelihood. A general expression for the likelihood is:

Pr(G = gj | Θ = θi) = (number of observations of gj within category θi) / (total number of observations within category θi).

Figure 1b graphically depicts how the likelihood is calculated for log10ρ = 1.6 by using the Venn diagram in Figure 1a. Specifically, the likelihood that log10ρ = 1.6 is observed given an economic reservoir is calculated by:

Pr(G = g1.6 | Θ = θeconomic) = (number of economic observations with log10ρ = 1.6) / (total number of economic observations) = 0.6.

For all categories, the number of observations in each category (denominators in the likelihood) is represented by three shapes with dashed outlines (the number of calculated resistivity values from the empirical formulas): ∼ Pr(Θ = θuneconomic) = 40%, ∼ Pr(Θ = θmarginal) = 30%, and ∼ Pr(Θ = θeconomic) = 30%.
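The counting logic behind the likelihood can be sketched as follows. The labeled data set here is a toy stand-in, not the resistivity values generated from the empirical formulas.

```python
def empirical_likelihood(labeled, category, bin_value):
    """Pr(G = bin_value | Theta = category): the count of labeled
    observations falling in the bin, divided by all observations carrying
    that label (the denominator shapes in Figure 1b)."""
    in_cat = [g for g, theta in labeled if theta == category]
    return in_cat.count(bin_value) / len(in_cat)

# Toy labeled pairs (log10 resistivity bin, category) -- illustrative only.
data = [(1.6, "economic"), (1.6, "economic"), (2.2, "economic"),
        (1.0, "uneconomic"), (1.6, "uneconomic"), (1.0, "uneconomic"),
        (1.6, "marginal"), (2.2, "marginal")]
print(round(empirical_likelihood(data, "economic", 1.6), 3))  # 0.667
```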

Suppose we do not want to use these specific probabilities to represent our chances in the geothermal lottery. VOI relies on conditional and Bayesian statistics, which require assigning an a priori belief to the existence of the three geothermal categories. This can be different from the number of resistivity values generated for each category (represented by ∼ Pr(Θ = θi)). The notion of “prior” can be unsatisfying to objective geoscientists. However, the alternative is frequentist statistics, which depends on and varies with the number of experiments or observations made (Efron, 2012). For this case and in general, the prior provides flexibility in rescaling the likelihood, which is built from our observations and is often biased: many times, the calibrated data set is collected with the intention to sample only areas of hot, flowable geothermal resources. We will see this in example 2 with the Darajat data.

We will assume that we want our a priori probabilities to be Pr(Θ = θuneconomic) = 50%, Pr(Θ = θmarginal) = 30%, and Pr(Θ = θeconomic) = 20%. We can now calculate the posterior. The posterior quantifies how likely any of the three categories is given a certain, observed resistivity value. The posterior is calculated as:

Pr(Θ = θi | G = gj) = [Pr(G = gj | Θ = θi) Pr(Θ = θi)] / [Σk Pr(G = gj | Θ = θk) Pr(Θ = θk)],

where the scaled marginal probability is in the denominator. The marginal probability, which quantifies how frequently a particular resistivity observation occurs, is the product sum of the prior and likelihood (Figure 1).

Figure 2a graphically represents the posterior calculation for the resistivity bin value log10ρ = 1.6 with the likelihoods from Figure 1b. The scaled marginal (sum(prior ∗ likelihood)) is: 0.5 ∗ 0.25 (uneconomic) + 0.3 ∗ 0.15 (marginal) + 0.2 ∗ 0.6 (economic) = 0.29. This is shown in the denominator of Figure 2a. The reliability of electrical resistivity data at value log10ρ = 1.6 gives 43%, 15%, and 42% probabilities of encountering an uneconomic, marginal, or economic geothermal reservoir, respectively. Figure 2b contains a graphical example of what the full posterior would look like, which includes a probability for each of the three reservoir categories (depicted by the color of each bar) at each of the resistivity bin values (x-axis).
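The posterior calculation for log10ρ = 1.6 can be checked numerically with the priors and likelihoods quoted above:

```python
# Priors and likelihoods quoted in the text for the bin log10(rho) = 1.6.
priors      = {"uneconomic": 0.50, "marginal": 0.30, "economic": 0.20}
likelihoods = {"uneconomic": 0.25, "marginal": 0.15, "economic": 0.60}

# Scaled marginal: sum over categories of prior * likelihood.
marginal = sum(priors[c] * likelihoods[c] for c in priors)
# Bayes' rule: posterior = prior * likelihood / marginal.
posterior = {c: priors[c] * likelihoods[c] / marginal for c in priors}

print(round(marginal, 2))  # 0.29
print({c: round(p, 2) for c, p in posterior.items()})
# {'uneconomic': 0.43, 'marginal': 0.16, 'economic': 0.41}
# (the text rounds these to 43%, 15%, and 42%)
```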

The dollar or value outcomes must be determined. For this simplified case, I provide nominal but intuitive numbers to represent that drilling in an uneconomic, marginal, or economic reservoir will result in monetary loss, slight gain, or high gain, respectively. Table 1 contains six possible value outcomes for all combinations of drilling and not drilling (represented by a) in the three reservoir categories (represented by θi). The values in this table are represented by νa(θi) in equation 4.

Lastly, the reliability (equation 3) and dollars (Table 1) are combined in the value with imperfect information:

Vimperfect = Σj Pr(G = gj) maxa Σi νa(θi) Pr(Θ = θi | G = gj).

Figure 3 shows a demonstration of the calculation of Vimperfect by using only three log10ρ resistivity bins: 1.0, 1.6, and 2.2. These are shown as three branches on the left-hand side of the decision tree in Figure 3. Chronologically, moving left to right in Figure 3, we will interpret one of the three resistivity bins, decide whether to drill or not (blue squares), and experience a specific value outcome (Table 1). Note that the mathematical operations of equation 4 are performed from right to left, as shown by the numbers at the top of Figure 3. Step 2 is the inner weighted average operation, depicted by six dollar amounts on the decision action branches. Step 3 identifies which action a (e.g., drill or not drill) will, on average, result in the higher outcome (e.g., maxa). This is identified with bold type. The outer weighted average (step 4) uses the updated marginal probabilities as weights for the three bold dollar values. For this demonstration, Vimperfect = 200 ∗ 0.2 + 61.4 ∗ 0.29 + 0 ∗ 0.51 = $57.8. The scaled marginal is calculated as shown in the denominator of equation 3.
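The right-to-left operations of the decision tree can be sketched generically. The payoff table and the likelihoods for bins 1.0 and 2.2 below are hypothetical placeholders (Table 1 and the full likelihood table are not reproduced here), so the resulting dollar values are illustrative only.

```python
def v_prior(priors, payoffs):
    """Best expected value when the decision is made with no new data."""
    return max(sum(priors[t] * v[t] for t in priors)
               for v in payoffs.values())

def v_imperfect(priors, likelihoods, payoffs):
    """Steps 2-4 of the tree: inner expectation under each bin's
    posterior, max over actions, then the outer average weighted by
    each bin's scaled marginal probability."""
    total = 0.0
    for lik in likelihoods.values():               # one entry per data bin
        marginal = sum(priors[t] * lik[t] for t in priors)
        posterior = {t: priors[t] * lik[t] / marginal for t in priors}
        best = max(sum(posterior[t] * v[t] for t in posterior)
                   for v in payoffs.values())
        total += marginal * best
    return total

# Hypothetical numbers -- not the paper's Table 1 or Figure 3 values.
priors  = {"uneco": 0.5, "marg": 0.3, "econ": 0.2}
payoffs = {"drill":    {"uneco": -100, "marg": 50, "econ": 200},
           "no drill": {"uneco": 0, "marg": 0, "econ": 0}}
liks = {1.0: {"uneco": 0.60, "marg": 0.25, "econ": 0.10},
        1.6: {"uneco": 0.25, "marg": 0.15, "econ": 0.60},
        2.2: {"uneco": 0.15, "marg": 0.60, "econ": 0.30}}

vp = v_prior(priors, payoffs)
vi = v_imperfect(priors, liks, payoffs)
print(vp, round(vi, 2), round(vi - vp, 2))  # VOI = Vimperfect - Vprior
```

Note that each category's likelihoods sum to one across the three bins, as a proper likelihood function must.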

VOI quantifies how much higher (better) Vimperfect is in comparison to if the decision had been made without the information. What kind of outcome can we expect without additional information? We calculate another weighted average by using prior probabilities and Table 1:

Vprior = maxa Σi νa(θi) Pr(Θ = θi).

For this demonstration, Vprior is $19. To compare how much better we did with resistivity information, we subtract this uninformed decision outcome from Vimperfect:

VOIimperfect = Vimperfect − Vprior = $57.8 − $19 = $38.8.

As emphasized earlier, VOI is not about the cost of information. VOI integrates the magnitude of decision outcomes (Table 1) and how reliably the information can distinguish parameters that determine different outcomes (equation 3). Consequently, information that is precise and relevant to decisions will have a higher VOI. It can be compared to the cost of an information source that is being considered. If VOI is greater than the cost of the information considered (e.g., geophysical survey), it is deemed a reasonable decision to purchase the information.

Serving as a tutorial, the first example does not provide the most realistic representation of geophysical information. It leaves out the advantages and disadvantages of field geophysical data (e.g., the spatial information, noise, and nonuniqueness that geophysical inversion must manage). Actual field data must be labeled or calibrated, such that statistical relationships can be made between the observed data and the decision variable. How to do this efficiently is an active area of research.

Trainor-Guitton et al. (2017) demonstrate a VOI approach that uses data from Darajat, a well-developed geothermal field. The magnetotelluric (MT) data are calibrated to numerous observed steam flow measurements. In this ideal scenario, steam flow is both the label and the decision variable, as it was directly measured and determines the power generated.

The MT data used for this analysis consist of 85 stations, which were distributed over and outside the boundaries of the Darajat geothermal field. The data were collected in order to interpret the distribution and extensions of the electrically conductive clay cap beyond the first development area (Rejeki et al., 2010). A 3D electrical conductivity model was obtained by inverting the MT data. The conductivity model (overlying the steam flow measurements) is used to determine possible relationships between the electrical conductivity property and steam flow magnitude. Typically, locations of high conductivity can be used to estimate the likely margins of the geothermal system (Cumming, 2009). We attempt to assess whether conductance information (the product of thickness and electrical conductivity) of the clay cap can be used to distinguish between higher and lower steam flow.

Defining reliability with spatial data requires defining the clay cap within the inversion model and collocating each steam flow measurement to a volume below the clay cap. In Trainor-Guitton et al. (2017), different conductivity thresholds are defined that delineate several possible clay caps with different thicknesses. Figure 4 shows an example of a clay cap defined at σ = 0.12 S/m. Next, the steam flow measurements, which generally originate below the cap, must be collocated to representative conductance values. We suggest that steam flow measurements closer to the clay cap are more likely to influence the electrical conductivities and geometry of the clay cap. Therefore, we expect a stronger relationship for the steam flow measurements that are closer to the clay cap. We define 750 m as the maximum distance between a steam flow measurement and any point within the clay cap. We choose this distance because it represents the lower quartile of all distances between clay cap conductivities and steam flow locations.
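The collocation step can be sketched as a distance filter. The coordinates and conductance values below are synthetic stand-ins for the Darajat model, and the choice of the median as the representative summary statistic is my assumption.

```python
import math
import random

# Synthetic stand-ins for the Darajat model: clay-cap cells carrying a
# conductance value, and steam flow measurement locations (all in meters).
random.seed(0)
cap = [((random.uniform(0, 2000), random.uniform(0, 2000),
         random.uniform(500, 1500)), random.uniform(1, 100))
       for _ in range(500)]                      # ((x, y, z), conductance in S)
wells = [(random.uniform(0, 2000), random.uniform(0, 2000),
          random.uniform(1000, 2000)) for _ in range(5)]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

MAX_DIST_M = 750.0   # lower-quartile cutoff from the text

def representative_conductance(well):
    """Collocate one steam flow location with clay-cap conductances
    within the cutoff; returns the median (an assumed summary choice)."""
    nearby = sorted(c for xyz, c in cap if dist(xyz, well) <= MAX_DIST_M)
    return nearby[len(nearby) // 2] if nearby else None

reps = [representative_conductance(w) for w in wells]
print(sum(r is not None for r in reps), "of", len(wells), "wells collocated")
```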

The statistics of the extracted conductance values and their collocated steam flow categories are plotted for the clay cap defined at 0.12 S/m in the box plots in Figure 5. The median conductance is in red, the quartile range is represented by the blue box, and the whiskers (dashed lines) are standard deviations for each of the seven steam flow categories. With the exception of the highest steam flow (greater than 30 kg/s) and the 15–20 kg/s category, lower conductance values correlate with higher steam flow rates. This is expected. Steam has a high resistivity. A high flow rate is indicative of high porosity and permeability, which also have higher resistivity (low conductivity). Geochemical alterations must be kept in mind. Higher temperatures (approximately 220°C) will change highly conductive smectite clays into more resistive illitic or chloritic clays (Ussher et al., 2000). Ultimately, the relationships visualized in Figure 5 can be used to define the reliability between steam flow and electrical conductance: Pr(Θ = θi | G = gj).

This case study is an example of why prior probabilities are useful. Table 2 shows the overall percentage of how the steam flow data are represented in each of the seven categories for the Darajat data set (∼Pr(Θ = θi)). As described previously, the preference is to drill for high steam flow. Therefore, the category θi > 30 has the highest percentage at 26%. However, we want to calculate the VOI of MT for a new development field, where the risk of drilling lower steam flow is higher in an unknown area. To represent a greenfield exploration scenario, an alternative prior is proposed in the second row of Table 2. With less exploration information, we assign our chances in the geothermal lottery as 40% in the lowest steam flow category (θi ≤ 5) and 10% in the six other categories.

The bottom half of Table 2 also contains the value outcomes used for the study, where again nominal values are assigned to each of the steam flow categories, going from high gain to negative gain for high to low steam flow. Table 3 contains the Vprior, Vimperfect, and VOIimperfect for both priors. The reliability used for both is the same, originating from the calibrated data set shown in Figure 5. The greenfield prior removes the sampling bias (preferential sampling of the high pay zone). By doing so, it gives more value to MT geophysics ($48,775 versus $11,030), which reflects how additional information will be more valuable in the higher-risk scenario of early stage geothermal prospecting. Remember, we compare the cost of collecting and processing MT into a conductivity model to VOIimperfect in Table 3. If VOIimperfect >> cost of MT, it's a sound decision to purchase.

The greenfield VOI also assumes that a similar geophysical observation-to-decision-variable relationship is viable at the new greenfield site. Darajat is one of four steam-only geothermal reservoirs known globally. Most geothermal reservoirs are a liquid-steam combination. Therefore, when possible, a reliability generated from liquid-steam fields should be used when exploring greenfields with expected liquid-steam reservoirs.

Migration modeling that includes wavefield simulation can be computationally expensive, and obtaining the statistics needed for VOI can be challenging. This final example describes a novel and efficient VOI methodology, the first to include 2D migration models and obtain information statistics from machine learning, specifically U-net CNNs. Details of this work can be found in Jreij et al. (2020).

This example is inspired by the Brady geothermal field in Nevada, where faults assist in recharge of the reservoir and provide good drilling locations (Folsom et al., 2018). Synthetic data were simulated using the survey configuration from March 2016 (Feigl et al., 2018). During this survey, horizontal distributed acoustic sensing (DAS) and sparse geophone sensors recorded seismic observations from vertical and orthogonal horizontal vibroseis sources. This example uses VOI to compare the spatially dense but single-component DAS to spatially sparse two-component (2C) geophones. We use synthetic data both because they can produce many ground truth training models for statistical learning and reliability and because of issues with the real PoroTomo active source data set.

For statistical learning, we use CNNs to provide classification accuracies to compare migration models constructed by horizontal DAS versus 2C geophones (Ronneberger et al., 2015). Although vertical DAS field data from the active source surveys were useful for migration (Trainor-Guitton et al., 2019), the sparse geophones and short offset of the survey (maximum 1100 m) created spatial aliasing and a lack of moveout observed in the PoroTomo data. Hence, this study evaluates the potential of horizontal DAS when geophones are too sparse.

The workflow of the novel VOI methodology is shown in Figure 6. Reverse time migration is used to evaluate the reliability of two seismic sensors. How well can faults be located using receiver information from the horizontal DAS versus 2C geophones? As shown in Figure 6a, faults are deemed as both the decision parameter (as a proxy for geothermal permeability) and the labels for machine learning. Faults are a potential seismic target and have been previously interpreted from geophone and vertical DAS sensors at Brady (Queen et al., 2016; Trainor-Guitton et al., 2019). From a 3D a priori fault model (Siler and Faulds, 2013), 183 2D reflectivity models were constructed.

For this example, the reliability (equation 3) quantifies how often interpretations of faults (θj = Fint) align with the actual presence of faults (θi = F) and vice versa. At each location within a migrated image, there is a classification of a fault (F) or absence of a fault (NF). For this study, posteriors are calculated for each combination of two sources (S = horizontal, vertical) and two receiver types (R = geophones, DAS), resulting in four possible posteriors.

Two-dimensional elastic forward modeling is used to produce strain (measured by DAS) and displacement (measured by geophones) data along the surface of our 2D example, which is separately excited by horizontal and vertical forces. The acquisition geometry includes 150 m source spacing (for both vertical and horizontal excitation), 100 m geophone spacing, and 10 m DAS gauge length. The raw point strain data were simulated by the finite-difference code after Ning and Sava (2018). Both displacement and strain modeling utilize the Madagascar software package (Fomel et al., 2013). The vertical and horizontal force sources are modeled to represent vertical and horizontal vibe sources. In total, 183 × 4 2D prestack depth migration experiments are implemented. We use reverse time migration to produce images from the simulated seismic measurements. Since surface horizontal DAS is sensitive to the horizontal component of particle differential displacement, short-offset P-wave reflections will not be recorded on surface DAS, assuming a flat-layered earth. This is not the case for our experiments, which are modeled after the steeply dipping faults at Brady (see the fault model in Figure 6a).

Reliability, a measure of the imperfectness of the seismic image, is obtained via a machine learning approach. CNNs are applied to detect faults within the seismic image and to return the Bayesian statistics needed for a VOI evaluation. The common application of CNNs to images is to predict labels from features (Szegedy et al., 2016). In slight contrast to the mainstream application, the role of CNNs in our study is to compare the quality of images across the four source and receiver combinations. We do this by separately training the CNN on each combination. The U-net CNN is used because the goal is a semantic segmentation. Every pixel in our model is classified as either fault or nonfault. The images are resampled, cropped, or rescaled, enabling the U-net to capture features at different scales (Ronneberger et al., 2015).

We separately train and test CNN models for the four groups of images. To ensure that accuracy metrics were not specific to one particular training and test split, three-fold cross validation was performed. A 67/33 training-testing split was used, yielding three sets of 122 images for training and 61 for testing. The same three training-testing splits were used for all four source-receiver models to ensure consistency. Training continued until the loss had not improved for five epochs; our chosen loss function is the dice coefficient (Milletari et al., 2016). Then, the average of the three sets of evaluation metrics was used to compare the four source-receiver pairs.
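The dice coefficient driving the loss can be sketched for flat binary masks, a simplified stand-in for the 2D segmentation maps used in the study:

```python
def dice_coefficient(pred, truth, eps=1e-6):
    """Dice overlap between two binary masks (flattened 0/1 lists);
    the U-net training described above minimizes 1 - dice.
    eps guards against division by zero for empty masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return (2.0 * inter + eps) / (sum(pred) + sum(truth) + eps)

pred  = [1, 1, 0, 0, 1, 0]   # predicted fault pixels (toy example)
truth = [1, 0, 0, 0, 1, 1]   # labeled fault pixels
print(round(dice_coefficient(pred, truth), 3))  # 0.667
```

Unlike per-pixel accuracy, dice is insensitive to the large number of true-negative (nonfault) pixels, which is why it suits sparse fault masks.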

Figure 7 depicts the standard confusion matrix for the binary case that is used to assess the accuracy of a classification algorithm. The rows show the true class (e.g., nonfault or fault), and the columns organize the resulting interpretations. Therefore, the quantities of correct classifications are along the diagonal (true negatives and true positives). The incorrect classifications are on the off diagonals (false negatives and false positives). We compare the predicted class of the test set, where the predictions are based on models developed on the training samples.

Equations 7–10 contain the four likelihoods (equation 2) for this binary case:

Pr(θj = Fint | θi = F; S, R) (true positive rate),

Pr(θj = NFint | θi = F; S, R) (false negative rate),

Pr(θj = Fint | θi = NF; S, R) (false positive rate),

Pr(θj = NFint | θi = NF; S, R) (true negative rate),
where there is one for each of the four different trained CNN models (S = {vertical, horizontal}, R = {DAS, geophone}). The likelihoods indicate the ability of these sensors to record seismic signals, which enable the migration algorithms to differentiate faults from nonfaults. These likelihoods can be transformed into the posterior (equation 3) and used to calculate the value with imperfect information (equation 4). From these comparisons, we can quantitatively compare single-component horizontal DAS and 2C geophones.
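Row-normalizing the confusion matrix of Figure 7 yields the four likelihoods, which Bayes' rule then turns into a posterior. The pixel counts below are hypothetical, not the study's results.

```python
def likelihoods_from_confusion(tn, fp, fn, tp):
    """Rows of the confusion matrix are true classes (NF, F); normalizing
    each row gives the four likelihoods Pr(interpretation | truth)."""
    return {("NFint", "NF"): tn / (tn + fp),
            ("Fint",  "NF"): fp / (tn + fp),
            ("NFint", "F"):  fn / (fn + tp),
            ("Fint",  "F"):  tp / (fn + tp)}

def posterior_fault(lik, prior_f=0.5):
    """Pr(F | Fint) via Bayes' rule, with a 50/50 fault/nonfault prior."""
    num = lik[("Fint", "F")] * prior_f
    den = num + lik[("Fint", "NF")] * (1.0 - prior_f)
    return num / den

# Hypothetical pixel counts from one trained CNN (not the paper's numbers).
lik = likelihoods_from_confusion(tn=900, fp=100, fn=50, tp=450)
print(round(lik[("Fint", "F")], 2))     # 0.9  true-positive likelihood
print(round(posterior_fault(lik), 2))   # 0.9  Pr(fault | fault interpreted)
```

Repeating this for each of the four source-receiver CNN models gives the four 2 × 2 posteriors that feed the Vimperfect comparison.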

Using a prior of 50/50 for fault/nonfault and the likelihoods, the 2 × 2 posteriors for the four combinations are obtained. Figure 8 contains four posteriors for the source and receiver combinations, where we see that horizontal DAS does better than geophones and that the horizontal source yields a higher diagonal value for both receiver types. In general, in our noise-free synthetic images, the geophone images suffered more migration artifacts than images produced by DAS due to insufficient sampling of the wavefield. We will discuss how this would differ for field observations. For both receiver cases, the images produced from the horizontal force better resolve the deeper reflectors. The S-wave source may provide more variety of incidence angles with the deeper reflector. Additionally, the horizontal source produces more of the slower S-waves, which have smaller wavelengths (versus P-waves), providing better resolution of deeper reflectors.

Figure 9 contains the training loss value for each of the 2D cross sections for the four CNN models, providing insight into the physics of each source-receiver combination. All training losses are the highest on the right side of the plot, which are cross sections on the south side of the a priori fault model. The fault's location and its changing dip do not provide adequate reflection energy to the geophones or DAS. The middle sections (X = 92 and X = 105) also have very short faults that are difficult to image. In this case, the horizontal source combined with the high spatial density of DAS does the best job (cyan triangles).

As in the previous examples, the two decision actions are deemed as drill or do not drill. The Vimperfect for the four groups is shown in Figure 8, below their respective posteriors. The values in Table 4 and posteriors in Figure 8 result in DAS with the horizontal source having the highest Vimperfect (39.60), followed by the geophone-horizontal source (38.24), then the DAS-vertical (36.82) and geophone-vertical (27.94) combinations. High spatial sampling of DAS with the horizontal polarization source provides the most reliable images of the faults. Previous VOI work has not assessed spatial models built from seismic data.

Synthetic simulations cannot capture all of the challenges that will encumber seismic data, especially data measured by DAS, which will have weaker signals due to coupling issues and larger broadside insensitivity. We emphasize that the current DAS and geophone comparison is an illustration of a methodology, which can be used to optimize survey design via the VOI metric. This methodology is transferable to other scenarios beyond the experiments presented here. It can provide guidance for other data types that produce spatial images of important structures in the subsurface.

The geophysical community is consistently challenged to show value. In the unconventional space, some argue that cheap processing of land seismic does not demonstrate the advantage of collecting this type of data (Chief Geophysicists Forum, 2016; Duhailan and Badri, 2019). Similar to the pivot during the unconventional peak, geophysics must evolve to ensure it can bring value to geothermal. We must innovate and listen to geothermal experts. Which advanced techniques and technologies could bring more reliability to geothermal prospecting? It likely will not be FWI but instead a joint inversion of geochemistry and another geophysical technique. Sharing this document with geothermal engineers has sparked new ideas for improved data collection. It also has emphasized that timely 3D interpretation is of utmost importance when financing new geothermal fields.

Despite geothermal's general economic disadvantage over petroleum, geophysicists must keep their focus on effectiveness over efficiency. Communication is key. We must document our successes and communicate and understand our shortcomings. Let's make geophysics as indispensable for geothermal as it is for petroleum.

Thank you to all of my geothermal mentors: Jeff Roberts, G. Michael Hoversten, Herb Wang, Doug Hollett, Egill Juliusson, and most of all, Robert Stacey.

Data associated with this research are available and can be obtained by contacting the corresponding author.

All article content, except where otherwise noted (including republished material), is licensed under a Creative Commons Attribution 4.0 International License (CC BY). Distribution or reproduction of this work in whole or in part commercially or noncommercially requires full attribution of the original publication, including its digital object identifier (DOI).