Abstract

Training images (TIs) are important for multiple-point statistics (MPS) simulation, since they capture the spatial geological patterns of the target reservoir to be modeled. Generally, one optimal TI is selected before applying MPS by evaluating the similarities between candidate TIs and the well interpretations of the target reservoir. In this paper, we propose a new training image optimization approach based on the convolutional neural network (CNN). First, the candidate TIs are randomly sampled several times to obtain a sample dataset. Then, a CNN is trained on all samples via transfer learning, and finally, the trained CNN model selects the optimal TI for the conditioning well data. By taking advantage of the strong ability of CNNs in image feature recognition, the proposed method can automatically identify differences in spatial features between the conditioning well data and samples of the training images. Hence, it effectively resolves the difficulty of spatial matching between discrete datapoints and grid structures. We demonstrate the applicability of our method via 2D and 3D training image selection examples. The proposed method effectively selected the appropriate TI, and a pretreatment technique was developed to improve the accuracy of continuous TI selection. Moreover, the method was successfully applied to training image selection for a discrete fracture network model. Finally, sensitivity analysis shows that a sufficient volume of conditioning data can reduce the uncertainty of the selection results. Comparison with the improved MDevD method verifies the advantages of the new method in terms of efficiency and reliability.

1. Introduction

Based on training images, which represent the a priori geological model, the MPS method can effectively reconstruct the complex geometry of the reservoir while satisfying the conditioning well data. Since the first MPS method, extended normal equation simulation (ENESIM) [1], was proposed, MPS has rapidly gained attention in the field of reservoir modeling and has been widely applied in a variety of areas such as modeling fluvial [2–4] and deltaic [5] reservoirs, microscopic pore modeling [6, 7], and other petroleum-related topics. To date, several MPS algorithms have been introduced: for example, the probabilistic modeling algorithm implemented in the SNESIM program [8], the pattern similarity matching algorithm represented by SIMPAT [9], the direct sampling algorithm DS [10], the image quilting algorithm represented by CIQ [11], and other algorithms [12–14], as well as optimization and improvement methods targeting prediction accuracy, efficiency, memory, and nonstationarity problems [15–19].

As an a priori geological model that effectively describes the internal structure, geometry, and distribution of sedimentary facies in the reservoir, the TI is the key input for MPS. Therefore, appropriate training images must be provided before MPS modeling is carried out. At present, training image acquisition methods mainly include manual drawing based on geological knowledge, object-based modeling [20, 21], methods based on sedimentation [22, 23], and methods based on simulated depositional processes [24, 25]. In addition, Comunian et al. [26] proposed a method to construct a 3D TI from two-dimensional TIs. Later, the s2Dcd algorithm was improved to divide the model into multiple regions and retrieve probability information only within the region of interest [27], reducing the influence of nonstationarity on the global space. Fadlelmula et al. [28] subsequently published software for constructing training images. Various TIs can be obtained with these methods and tools, but not all of them suit a specific modeling task. Even when candidate TIs appear very similar, there are differences in their compatibility with the geological body to be simulated, such as the width of the sand bodies and their distribution density. Therefore, quantitatively evaluating the matching degree between the conditioning well data from the region of interest and the candidate TIs is one of the important issues affecting the modeling quality of MPS.

To the best of our knowledge, few selection methods currently exist for training images. Boisvert et al. [29] proposed two evaluation indexes, based on the runs distribution and the multipoint density function, to compare the matching degree between conditioning data and training images and thereby select a TI. Pérez et al. [30] proposed two compatibility ranking indexes for TI selection, which serve as measures of the relative and absolute compatibility between training images and conditioning data in terms of spatial structure. Feng et al. [31] proposed a TI selection method based on the estimation of the minimum data event distance: first, the minimum distance between data events of the conditioning data and each candidate TI is calculated, and then the candidate TIs are ranked by a statistic of these distances. Furthermore, Feng et al. [32] proposed a TI optimization method based on the seismic attribute volume (SAV) correlation coefficient, in which the SAV replaces highly sparse data for TI selection. Wang et al. [33] proposed a TI optimization method based on the repetition probability of data events: the lower the nonmatching rate, the smaller the variance of the repetition probability, and the better the match between the TI and the conditioning data.

Ultimately, selecting the training image that best matches the conditioning data is a problem of quantitatively evaluating the spatial feature similarity between discrete points and a regular grid. In classical statistics, sampling is commonly used to infer the characteristics of a population [34], indicating a statistical consistency between a population and its samples. In this study, we utilize deep learning to address the matching issue and propose a new training image selection method based on a deep CNN. The proposed method treats each TI as a population and samples it randomly multiple times, with the number of datapoints in each sample equal to the number of datapoints in the conditioning data. The closer the spatial characteristics of the samples are to the conditioning data, the closer the spatial characteristics of the TI are to the conditioning data. To accurately quantify the spatial difference between the conditioning data and the sampled points of each TI, our method trains a convolutional neural network on a large number of random samples of the candidate TIs and then uses the trained network to identify the training image that best matches the conditioning data.

2. Methodology

2.1. Deep Convolution Neural Network and Transfer Learning

In recent years, the application of deep learning [35] in reservoir modeling and oil and gas exploration has become an emerging topic, in addition to image recognition, speech analysis, and natural language processing [36, 37]. Deep learning has been successfully used in seismic and well logging interpretation based on the CNN [38, 39] and in reservoir modeling based on the generative adversarial network (GAN) [40–43]. The success of TI selection hinges on quantifying the matching degree between conditioning data and TIs in terms of spatial features. Compared with traditional indexes such as the variogram and pattern similarity, the main advantage of the deep-learning-based CNN is that spatial features can be extracted through autonomous learning from a large number of samples, making feature recognition more accurate.

A deep convolutional neural network is a feedforward neural network comprising convolution operations and a deep network structure. Common convolutional neural network models include AlexNet, VGG-Net, ResNet, and GoogLeNet. The GoogLeNet-based Inception-ResNet-v2 model introduces the residual-network skip connection on top of Inception V3 [44] and integrates the "residual" structure of ResNet into the Inception structure module, which improves network convergence efficiency and avoids the vanishing gradients caused by deepening the network. The model achieved the best results in the ILSVRC image classification benchmark at the time. The overall structure of the Inception-ResNet-v2 model is shown in Figure 1(a). Inception-ResNet-v2 uses convolution, pooling, and tensor concatenation to extract the features of the input image, and a SoftMax classifier is used to identify the features, thereby predicting the probability of the category the image belongs to. Inception-ResNet-v2 contains multiple convolutional residual modules, namely Inception-ResNet-A, Inception-ResNet-B, and Inception-ResNet-C. As shown in Figures 1(b)–1(d), the feature map first goes through a ReLU activation and then enters the right channel for combinational convolution; after tensor concatenation and the residual structure, a ReLU activation is applied again, where S1 denotes a convolution stride of 1. Note that the Inception-ResNet-B and Inception-ResNet-C modules replace the symmetric 7×7 and 3×3 convolution kernels with the asymmetric 1×7/7×1 and 1×3/3×1 kernels, respectively. After processing by the multichannel convolutional residual modules, the dimension of the feature vector becomes 1×1792. Finally, the input dimension of the SoftMax classifier is 1×1792, and its output dimension is the number of classes.

Assume that the number of input images is $N$, the $i$-th input image is $x_i$, its category label is $y_i$, and the total number of categories output by the model is $m$ ($m \ge 2$). The hypothesis function $f_\theta(x_i)$, which gives the category probabilities $P(y_i = j \mid x_i)$, can then be obtained by the following equation:

$$
f_\theta(x_i) =
\begin{bmatrix}
P(y_i = 1 \mid x_i; \theta) \\
P(y_i = 2 \mid x_i; \theta) \\
\vdots \\
P(y_i = m \mid x_i; \theta)
\end{bmatrix}
= \frac{1}{\sum_{j=1}^{m} e^{\theta_j^{T} x_i}}
\begin{bmatrix}
e^{\theta_1^{T} x_i} \\
e^{\theta_2^{T} x_i} \\
\vdots \\
e^{\theta_m^{T} x_i}
\end{bmatrix},
\tag{1}
$$

where $1 / \sum_{j=1}^{m} e^{\theta_j^{T} x_i}$ normalizes the probability distribution. The dimension of the parameter matrix $\theta$ is $n \times m$, with

$$
\theta =
\begin{bmatrix}
\theta_{11} & \cdots & \theta_{m1} \\
\vdots & \ddots & \vdots \\
\theta_{1n} & \cdots & \theta_{mn}
\end{bmatrix},
\tag{2}
$$

where $n = 1792$. Each column of the matrix $\theta$ is responsible for the prediction of one category, and the loss function is defined as

$$
J(x, y, \theta) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{m}
1\{y_i = j\} \log \frac{e^{\theta_j^{T} x_i}}{\sum_{l=1}^{m} e^{\theta_l^{T} x_i}},
\tag{3}
$$

where $1\{y_i = j\}$ is an indicator function whose value is

$$
1\{y_i = j\} =
\begin{cases}
1, & y_i = j, \\
0, & y_i \neq j.
\end{cases}
\tag{4}
$$
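Equations (1)–(4) are the standard softmax classifier and cross-entropy loss. A minimal NumPy sketch (illustrative only; it uses a toy 4-dimensional feature vector rather than the 1792-dimensional Inception-ResNet-v2 features) is:

```python
import numpy as np

def softmax_probs(theta, x):
    """Hypothesis function f_theta(x) of Eq. (1): class probabilities
    P(y = j | x; theta) for a single feature vector x of length n."""
    scores = theta.T @ x          # theta_j^T x for each class j, shape (m,)
    scores -= scores.max()        # numerical stability; cancels in the ratio
    e = np.exp(scores)
    return e / e.sum()            # normalization 1 / sum_j e^{theta_j^T x}

def cross_entropy_loss(theta, X, y):
    """Loss J(x, y, theta) of Eq. (3) over N samples.
    X: (N, n) feature vectors; y: (N,) integer labels in {0, ..., m-1}."""
    N = X.shape[0]
    total = 0.0
    for i in range(N):
        p = softmax_probs(theta, X[i])
        total += -np.log(p[y[i]])  # the indicator 1{y_i = j} picks the true class
    return total / N

# Toy check with n = 4 features and m = 3 classes.
rng = np.random.default_rng(0)
theta = rng.normal(size=(4, 3))    # parameter matrix of Eq. (2), n x m
X = rng.normal(size=(10, 4))
y = rng.integers(0, 3, size=10)
p = softmax_probs(theta, X[0])
print(round(float(p.sum()), 6))    # probabilities sum to 1
```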

The stochastic gradient descent method is used to minimize the loss function and obtain the final weights of the neural network. In the practical application of deep convolutional neural networks, transfer learning is usually adopted: trained model parameters are transferred to a new model. The basic idea is to take model weights trained on an existing large dataset as initial values and fine-tune them on the dataset of the target problem. Transfer learning avoids the drawbacks of training from scratch; by sharing trained model parameters with the new model, learning is accelerated.
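The transfer-learning step can be illustrated with a minimal NumPy sketch in which a frozen "backbone" stands in for the pretrained Inception-ResNet-v2 layers and only the softmax head is trained by stochastic gradient descent. Everything here (the random projection, dimensions, toy data) is a hypothetical stand-in, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained backbone: a frozen random projection mapping
# 8-dim "images" to 16-dim features (the paper's backbone outputs 1792 dims).
W_frozen = rng.normal(size=(8, 16))
extract_features = lambda X: np.tanh(X @ W_frozen)   # weights never updated

def train_softmax_head(X, y, m, lr=0.5, epochs=200):
    """Fine-tune only the classifier head (the transfer-learning step):
    gradient descent on the softmax cross-entropy loss, backbone frozen."""
    F = extract_features(X)               # (N, 16) frozen features
    theta = np.zeros((F.shape[1], m))     # trainable head, n x m
    N = X.shape[0]
    for _ in range(epochs):
        scores = F @ theta                # (N, m)
        scores -= scores.max(axis=1, keepdims=True)
        P = np.exp(scores)
        P /= P.sum(axis=1, keepdims=True) # softmax probabilities
        G = P.copy()
        G[np.arange(N), y] -= 1.0         # gradient of cross-entropy w.r.t. scores
        theta -= lr * (F.T @ G) / N
    return theta

# Two linearly separable toy classes.
X = np.vstack([rng.normal(-2, 0.5, size=(30, 8)),
               rng.normal(2, 0.5, size=(30, 8))])
y = np.array([0] * 30 + [1] * 30)
theta = train_softmax_head(X, y, m=2)
pred = (extract_features(X) @ theta).argmax(axis=1)
print((pred == y).mean())                 # training accuracy on the toy data
```

Only `theta` changes during training; the frozen projection plays the role of the shared pretrained weights.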

2.2. Training Image Selection Based on Convolutional Neural Network

The essence of selecting the TI that best matches the conditioning data is to quantitatively evaluate the similarity of spatial features between discrete points and a regular grid. As an a priori geological model, the TI does not need to honor the conditioning data, but it must share similar spatial characteristics with it. For instance, for conditioning data from a stationary fluvial region, a nonstationary fan TI is clearly not the best choice. Consequently, the higher the compatibility between a TI and the conditioning data, the higher the similarity between the spatial characteristics of samples drawn from that TI and the conditioning data. Based on the deep convolutional neural network, a new training image selection method is proposed in this paper. As shown in Figure 2, the conditioning data C contains k points, and Model A and Model B are candidate TIs for C. In order to find the TI that best matches C, Model A and Model B are each sampled n times. The number of randomly selected points is equal to k each time, yielding the sampled pointsets A_i and B_i, where i ∈ {1, …, n}. For convenience, pointsets from Model A are labeled "A," and pointsets from Model B are labeled "B." All of the labeled pointsets are then assembled into a training set and used to train the Inception-ResNet-v2 neural network. Finally, the trained CNN model is used to identify the training image that best matches the conditioning data C.

2.3. Steps of the Proposed Algorithm

The current mainstream CNN platform mainly deals with digital image recognition scenarios. When selecting the best TI, training images must first be converted into digital images. The basic principle of mapping values to colors is to enhance the contrast between sedimentary facies in the training image as much as possible, so that the color distinction between different facies is clear and obvious, which helps to improve the accuracy of digital image recognition. In the following examples, for the binary type facies model, type 1 is mapped to white color and type 2 is mapped to black color. For continuous variables or multicategorical variables, the color mapping table is used to obtain the color corresponding to the value. Once the training images are converted to digital images, they can be imported into the CNN neural network in the form of images for training and recognition. Based on our proposed idea (Figure 2), the following steps constitute the algorithm:

    Algorithm 1: The process of training image optimization algorithm based on convolutional neural network.
  • 1. INPUT the real conditioning data C, a set of k points, and convert it to an image usable by the CNN

  • 2. INPUT the M training images, TIm is the m-th training image to be selected, wherein m = 1, … , M

  • 3. DEFINE the pointsets PS used to train the Convolutional Neural Network

  • 4. DEFINE N as the number of pointsets sampled from each training image

  • 5. FOR m FROM 1 TO M STEP 1 DO

  • 6.   FOR n FROM 1 TO N STEP 1 DO

  • 7.    Randomly sampling k points from TIm to obtain the n-th pointset Pm(n)

  • 8.    Assign Pm(n) a label {m} identifying the m-th training image TIm

  • 9.    Add Pm(n) into PS

  • 10.   END FOR

  • 11. END FOR

  • 12. Train the CNN model with PS via transfer learning, and obtain the trained neural network model CNNPS

  • 13. Test C with CNNPS to find the best matching training image
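The sampling loop of steps 3–11 can be sketched in Python as follows (a hypothetical helper, not the authors' implementation; the pointset layout of `(row, col, value)` triples and the labeling scheme are assumptions):

```python
import numpy as np

def build_training_pointsets(training_images, k, n_samples, rng=None):
    """Steps 3-11 of Algorithm 1: for each candidate TI (a 2D array),
    draw n_samples random pointsets of k cells each and label every
    pointset with the index m of the TI it came from."""
    rng = rng if rng is not None else np.random.default_rng()
    PS = []                                   # list of (pointset, label) pairs
    for m, ti in enumerate(training_images, start=1):
        flat = ti.ravel()
        for _ in range(n_samples):
            idx = rng.choice(flat.size, size=k, replace=False)  # k random cells
            rows, cols = np.unravel_index(idx, ti.shape)
            pointset = list(zip(rows.tolist(), cols.tolist(), flat[idx].tolist()))
            PS.append((pointset, m))          # label m identifies TI_m
    return PS

# Two toy 10x10 binary "training images".
rng = np.random.default_rng(1)
tis = [rng.integers(1, 3, size=(10, 10)) for _ in range(2)]
PS = build_training_pointsets(tis, k=5, n_samples=4, rng=rng)
print(len(PS))   # 2 TIs x 4 samples = 8 labeled pointsets
```

In the actual workflow each labeled pointset would then be rendered as an image (step 1's color mapping) before being fed to the CNN.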

3. Examples

3.1. Optimization of Sedimentary Facies

The first example evaluates optimal training image selection for conditioning data and TIs from different depositional environments. As shown in Figures 3(a)–3(c), Models F1, F2, and F3 are training images of three different sedimentary facies, and the similar models F1′, F2′, and F3′ in Figures 3(d)–3(f) are the regions of interest (ROI). To test matching against the candidate TIs, 1% of the data are randomly sampled from the similar models as conditioning data. Based on Algorithm 1, the detailed process is as follows: first, 1% of the datapoints are randomly selected from each training image in Figures 3(a)–3(c). Each training image was sampled 50 times, so a total of 150 samples were obtained from the three training images. Each sample is given a label identifying the training image it belongs to. Then, transfer training on these samples is used to establish a deep convolutional network model. To test the recognition level of the trained CNN model, the samples in Figures 3(g)–3(i) are used as the conditioning datasets: 100 samples are drawn from each similar model in Figures 3(d)–3(f), and the trained CNN model is then used to make the TI selection. According to Equation (5), the recognition accuracy over the 100 tests was calculated, and a statistical histogram was obtained. As shown in Figure 4, the accuracy of recognizing cd_F1 as F1 is about 80%; the corresponding results for cd_F2 and cd_F3 are about 60% and 94%, demonstrating that the proposed method is effective for selecting among training images of different sedimentary facies.
$$
\mathrm{ACC} = \frac{N_r}{N_{\mathrm{all}}},
\tag{5}
$$
where $N_{\mathrm{all}}$ is the total number of sample tests and $N_r$ is the number of correctly identified tests.

3.2. Channel Widths

Sand channels are key to proper well placement; therefore, it is important to predict the channel width in the ROI based on the existing conditioning data [2]. In this example, the proposed method is used to find the best matching training image among candidates with different channel widths, which ultimately enables us to identify the channel width. As shown in Figure 5, models W1 and W1′ have similar channel width statistics; likewise, W2 and W2′, and W3 and W3′, share the same channel width statistics. The values of all widths are listed in Table 1. cd_W1 is a 1% random sample of W1′, which is used as conditioning data to test the recognition of the training images. Finally, the results of 100 iterations are counted; as shown in Figure 6, the TI recognition accuracy for cd_W1, cd_W2, and cd_W3 turns out to be 90%, 75%, and 80%, respectively, indicating the good performance of the proposed method for sand channel recognition.

3.3. Continuous Variable Training Images

In addition to discrete variable models, continuous ones, such as porosity or permeability in reservoir models, are frequently addressed in research. In this example, we use the continuous model [16] in Figure 7(a) as the training image; it displays a mosaic cross section of packed stones, with visible grayscale textures and sharp boundaries. This section proposes a technique to improve the accuracy of continuous TI selection (see Algorithm 2). The technique discretizes continuous variables by splitting them into two or more classes through one or more thresholds. Discretizing a continuous training image not only helps to reduce the effect of noisy data but also reduces the complexity of the training image. This facilitates the extraction of the main target contours and highlights the relationship between global and local feature positions, which benefits the accuracy of identifying the training image that best matches the conditioning data. Using Algorithm 2, the range of the model variables in Figures 7(a)–7(d) is mapped from 0–255 to 1–5. As shown in Figures 7(e)–7(h), the boundary contours of the models are more prominent after discretization. Figures 7(b)–7(d) are used as the training images, and Figure 7(a) is used as the main region of interest. The TIs of Figures 7(b)–7(d) are sampled for CNN training; then, the conditioning data are sampled from Figure 7(a), and the trained CNN model is used to identify the best matching TI. The same procedure is then carried out for the discretized models (Figures 7(e)–7(h)). Finally, a plot of TI recognition accuracy versus the number of conditioning data is established. As shown in Figure 8, the recognition level after discretization is noticeably higher overall than with the original data.

    Algorithm 2: Discrete process of continuous training image.
  • 1. Input the conditioning data C (including k data points) of the main region of interest, and the training image TI (including l data points), where the range of all data is [Omin, Omax].

  • 2. Input the data range after discretization [Dmin, Dmax], where D represents integer value

  • 3. FOR i FROM 1 TO k STEP 1 DO

  • 4.   temp ← (C(i) − Omin) / (Omax − Omin) × (Dmax − Dmin) + Dmin

  • 5.   C (i) ← ⌈temp⌉

  • 6. End FOR

  • 7. FOR j FROM 1 TO l STEP 1 DO

  • 8.   temp ← (TI(j) − Omin) / (Omax − Omin) × (Dmax − Dmin) + Dmin

  • 9.   TI(j) ←⌈temp⌉, ⌈ ⌉ means round up

  • 10. End FOR
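Algorithm 2 amounts to a linear rescale followed by a ceiling. A minimal Python sketch (the function name is ours; it applies the same formula to any list of values, whether conditioning data or TI cells):

```python
import math

def discretize(values, o_min, o_max, d_min, d_max):
    """Algorithm 2's rescale-and-round-up: map each value from the
    original range [o_min, o_max] to an integer class in [d_min, d_max]."""
    out = []
    for v in values:
        # Linear rescale of steps 4/8, then the ceiling of steps 5/9.
        temp = (v - o_min) / (o_max - o_min) * (d_max - d_min) + d_min
        out.append(math.ceil(temp))
    return out

# Map 0-255 gray levels to classes 1-5, as in the example of Section 3.3.
print(discretize([0, 64, 128, 200, 255], 0, 255, 1, 5))   # → [1, 3, 4, 5, 5]
```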

3.4. 3D Scenario

In three dimensions, strata are usually superimposed as layers. Because sediments are deposited gradually, the model of superimposed strata changes gradually, which has practical significance for the analysis of the geological model. When optimizing 3D training images based on the conditioning data of the study area, strata from different sedimentary environments are usually studied separately. Since the depositional models within the same layer are generally similar, stratification must be completed first in traditional modeling. The research framework in this paper does not directly implement three-dimensional convolution; instead, a simple alternative is proposed. Here, sedimentary bodies with consistent vertical depositional patterns are studied as a whole, which means the depositional patterns of the different fine layers in the vertical direction of the 3D training images are consistent. In this case, our method inspects the characteristics of the well data and the training images layer by layer, and the process within a single layer is similar to the 2D TI method. Note that when the convolutional neural network is trained, the training data are horizontal 2D slices of the training images, and the test data are horizontal 2D slices of the conditioning well data. After the best matching training image is found for each 2D slice of conditioning data layer by layer, the results of all layers are counted, and the candidate TI with the most frequent occurrence is selected as the final result. The 3D training image selection procedure is described in detail in Algorithm 3.

    Algorithm 3: Optimization process of 3D training images.
  • 1. INPUT the conditioning data C, which consists of k wells, and the M training images, where TIm is the m-th candidate training image, m = 1, …, M; the conditioning data C and the TIs share the same grid frame, whose dimensions in the three directions are NX, NY, and NZ

  • 2. DEFINE the pointsets PS used to train the CNN model

  • 3. DEFINE N is the number of sampled pointset from one candidate training image

  • 4. FOR m FROM 1 TO M STEP 1 DO

  • 5.   FOR n FROM 1 TO N STEP 1 DO

  • 6.    Randomly extract a horizontal 2D slice TIm_2d along the vertical Z direction from TIm

  • 7.    Randomly sampling k points from TIm_2d to obtain the n-th pointset Pm(n)

  • 8.    Assign Pm(n) a label {m} as a sign of the m-th training image TIm identity

  • 9.    Add Pm(n) into PS

  • 10.   END FOR

  • 11. END FOR

  • 12. training CNN model with PS, and obtain the trained neural network model CNNPS

  • 13. FOR nz FROM 1 TO NZ STEP 1 DO

  • 14.   Get the 2D slice data C(nz) at vertical position nz from the conditioning data C

  • 15.   Test C(nz) with CNNPS to find the sign of the most matching training image

  • 16. END FOR

  • 17. All NZ signs of the most matching TI were counted, and the TI with the largest proportion was selected as the best TI
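The voting of steps 13–17 reduces to a majority vote over the per-layer CNN predictions. A minimal sketch (the per-slice labels below are hypothetical placeholders for the CNN outputs):

```python
from collections import Counter

def vote_best_ti(per_layer_labels):
    """Steps 13-17 of Algorithm 3: each horizontal slice nz of the
    conditioning data is classified independently, and the TI label that
    occurs most often across all NZ slices is selected as the best TI.
    Returns the winning label and its share of the votes."""
    counts = Counter(per_layer_labels)
    label, freq = counts.most_common(1)[0]
    return label, freq / len(per_layer_labels)

# Hypothetical per-slice CNN predictions for NZ = 10 layers.
labels = [1, 1, 2, 1, 3, 1, 1, 2, 1, 1]
best, share = vote_best_ti(labels)
print(best, share)   # TI 1 wins with 70% of the slices
```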

In the following example, the test data came from the object-based modeling method. Three models containing channels and three corresponding similar models were established for verification, using the three groups of parameters in Table 2. The grid dimension of the models is 100×100×50, and the size of a grid unit in the X, Y, and Z directions is 50 m, 50 m, and 1 m, respectively. As shown in Figure 9, Model R1, Model R2, and Model R3 differ significantly in terms of property distribution. These three models are used as training images, and R1′, R2′, and R3′ are used as testing areas. We randomly sampled 1.5% of the wells in the area of interest as conditioning data, which were then used to test the training image selection performance. As shown in Figure 10, after 100 tests, the proportion of conditioning data cd_R1 recognized as training image R1 is up to 90%, and the proportion of cd_R3 recognized as R3 is over 80%. Since the distribution characteristics of the cd_R2 sedimentary facies are similar to those of cd_R1, some cd_R2 are identified as R1; nevertheless, the recognition rate of cd_R2 is still close to 60%. Collectively, the results confirm that the method achieves high accuracy in training image selection in 3D space.

4. Discussion

4.1. DFN Training Image Optimization

A discrete fracture network (DFN) model has been widely used for fractured reservoirs because it can better characterize their heterogeneity and complexity. In order to obtain a DFN model consistent with the real situation, a variety of DFN modeling methods have been proposed. However, quantitatively evaluating the spatial correlation and structural differences between a DFN model and a small number of conditioning data remains a difficult problem in DFN model selection. In this section, the proposed method is applied to the selection of a DFN model. As shown in Figure 11, C1, C2, and C3 are three separate DFN models [46, 47], and C1′, C2′, and C3′ are the corresponding similar models, from which 1.5% of the data are extracted to constitute the conditioning data. We established the CNN based on Algorithm 1 and then carried out the training image selection process on the 1.5% conditioning data. As shown in Figure 12, after 100 runs, most of the conditioning data are correctly recognized as belonging to the corresponding DFN training images, indicating that the proposed method can identify the DFN model with the highest spatial feature compatibility even with a limited amount of conditioning data.

4.2. Sensitivity Analysis on the Number of Conditioning Data

It is evident that the number of conditioning data and their distribution have a large impact on the correct recognition rate during training image selection. In this paper, the conditioning data are uniformly distributed; thus, the number of conditioning data is the main focus of our discussion. In the extreme scenario where only one conditioning datum exists, the selected training image would be meaningless, because such a limited data volume leads to high uncertainty. Matching conditioning data with training images is essentially a problem of evaluating the difference in spatial correlation between discrete datapoints and regular grid datapoints. Usually, the larger the amount of conditioning data, the lower the uncertainty in the reflected spatial features, and as a result, the more reliable the selected training image. To quantitatively understand how the quantity of conditioning data impacts TI selection, a sensitivity analysis based on the two 2D discrete TI selection examples (Sections 3.1 and 3.2) is performed. The proportion of conditioning data is increased from 1% to 10% in steps of 1%. For the example in Section 3.1, as shown in Figure 13, the recognition rate gradually increases from 0.7 to 0.9 as the sampling ratio increases, which illustrates that increasing the conditioning data effectively improves the recognition rate. A similar process is followed for the other example, and the results, depicted in Figure 14, show a similar improvement of the recognition rate. When the amount of data is too small, the method almost fails to identify the correct training image, since the spatial characteristics presented by such sparse data carry notable uncertainty.
In addition to increasing the volume of conditioning data (which is costly), including other types of information might also help address the uncertainty in sparse data; this itself warrants a separate study.

4.3. Comparison with the Method Based on Modified MDevD

Feng et al. [31] proposed a TI selection method based on the minimum data event distance (MDevD) in 2017. Based on a fixed-size template, this method scans cd data events from the conditioning data and finds, for each, the TI data event with the smallest distance in the TI. The cd data event and its compatible TI data event are called a compatible data event pair, and the distance of the pair is called the minimum data event distance. After obtaining the MDevD properties between the conditioning data and the candidate training images, the difference between the conditioning data and each candidate training image is quantified as the average value of the MDevD properties. However, the fixed template used in the original MDevD method to scan cd data events can fail to capture enough datapoints when the spatial distribution density of the conditioning data varies significantly. In this regard, this article uses a flexible template instead: when a smaller template cannot capture enough conditioning data, the search range is expanded and the search continues until enough conditioning data are found, thus improving the accuracy of the MDevD attribute.
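The flexible-template idea can be sketched as follows (a hypothetical implementation under stated assumptions: a circular template grown in fixed radius increments; the actual MDevD distance computation between data events is omitted):

```python
import numpy as np

def flexible_template_event(cond_points, center, min_points, r0, r_max, r_step=5):
    """Grow the search radius from r0 until at least min_points conditioning
    points fall inside the template, or r_max is reached. Returns the radius
    used and the indices of the conditioning points captured."""
    pts = np.asarray(cond_points, dtype=float)
    c = np.asarray(center, dtype=float)
    d = np.linalg.norm(pts - c, axis=1)          # distance of each point to center
    r = r0
    while r <= r_max:
        inside = np.flatnonzero(d <= r)
        if inside.size >= min_points:
            return r, inside                     # enough points found
        r += r_step                              # expand the template and retry
    return r_max, np.flatnonzero(d <= r_max)     # fall back to the largest template

# Sparse conditioning data around a template centered at (0, 0).
cond = [(1, 0), (0, 8), (12, 0), (30, 30)]
r, idx = flexible_template_event(cond, center=(0, 0), min_points=3, r0=5, r_max=30)
print(r, sorted(idx.tolist()))   # radius grows from 5 to 15 to capture 3 points
```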

In this article, we compare the improved MDevD method with the proposed CNN-based method to test the reliability and practicality of the new method. The study uses the data from the example in Section 3.1 (selection of training images for different sedimentary facies patterns). First, 1% of the datapoints are randomly sampled from F1 in Figure 3 as the conditioning data cd_F1; then, the MDevD metrics between cd_F1 and the candidate training images F1, F2, and F3 are calculated. As shown in Figures 15(b)–15(d), the smaller the average value of the MDevD attribute (Figure 15(b)), the higher the compatibility of the training image with the conditioning data. We use the proposed method and the modified MDevD method to perform 100 TI selection tests on F1, F2, and F3, respectively, and obtain the comparison of recognition rates shown in Figure 16. The recognition rates of the two methods are very close: the recognition rate of cd_F1 is slightly lower than that of the MDevD method, while the recognition rates of cd_F2 and cd_F3 are slightly higher, demonstrating the reliability of the proposed method. The MDevD method, which compares the distances between each cd data event and all TI data events, requires heavy computation. For the same 100 experiments, the proposed method (including training the neural network) takes 46 seconds, while the modified MDevD method (in parallel computing mode, with a maximum of 25 conditioning data and a search radius of 30) needs 14.6 hours, showing that the proposed method has great advantages in terms of computational efficiency and practicability.

5. Conclusion

Selecting the training image that best matches the conditioning data amounts to quantitatively evaluating the similarity of spatial features between data with different spatial distribution densities, i.e., discrete datapoints and regular grids. In this study, a new training image selection method based on the CNN was proposed. The method first randomly samples each candidate TI a number of times (more than 50 is recommended) and then carries out CNN training; finally, the trained CNN model is used to identify the training image that best matches the conditioning data. By taking advantage of the CNN's capability in image feature recognition, the new method can quickly and automatically identify the differences in spatial features between conditioning data and TIs. Beyond TI selection, this method can also be used for model evaluation and model parameter selection.

The effectiveness of the proposed method in selecting training images was demonstrated in several 2D and 3D examples. The first example addressed TI selection among different facies patterns; the second identified different sand channel widths. The third example studied the selection of continuous training images via a solution that discretizes continuous variables through threshold values; the results showed that this approach can effectively improve the accuracy of continuous training image selection. In the last example, selection of 3D training images was performed; since the CNN framework used here lacks the ability to recognize 3D images directly, a simple layer-by-layer approach was adopted. Based on these examples, we conclude that the proposed method is effective in selecting the most compatible training image. Additionally, the method was applied to TI selection for discrete fracture networks, and good recognition results were obtained. Finally, parameter sensitivity analysis showed that the number of conditioning data has a crucial impact on the selection accuracy: more conditioning data lead to higher recognition accuracy. Considering the low recognition rate when the conditioning data are sparse, we recommend adopting deep learning of multisource information to improve the training image selection step. In addition, comparison with the modified MDevD method shows that the recognition rate of the new method is very close to that of the MDevD method, while its much higher computational efficiency gives it an advantage.

Data Availability

All data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 42002147) and the Open Foundation of Top Disciplines in Yangtze University (No. 2019KFJJ0818021). We are grateful to Gregoire Mariethoz and Jef Caers for sharing the Training Image Library (http://www.trainingimages.org/training-images-library.html) in the monograph "Multiple-Point Geostatistics: Stochastic Modeling with Training Images" and to Wenjie Feng for his suggestions in the early stage of the work on TI selection. We also thank the original image classification sample code based on ML.NET (https://docs.microsoft.com/zh-cn/dotnet/machine-learning/tutorials/image-classification-api-transfer-learning).

Exclusive Licensee GeoScienceWorld. Distributed under a Creative Commons Attribution License (CC BY 4.0).