During the past two decades, geoscientists have used machine learning (ML) to produce more quantitative reservoir characterizations and to discover hidden patterns in their data. However, as the complexity of these models increases, assessing the sensitivity of their results to the choice of input data becomes more challenging. Measuring how a model uses the input data to perform a classification or regression task provides an understanding of the data-to-geology relationships, which in turn indicates how confident we can be in the predictions. To provide such insight, the ML community has developed tools such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP). In this study, we train a random forest architecture using a suite of seismic attributes as input to differentiate among mass transport deposits (MTDs), salt, and conformal siliciclastic sediments in a Gulf of Mexico data set. We apply SHAP to understand how the model uses the input seismic attributes to identify target seismic facies and examine how variations in the input, such as adding band-limited random noise or applying a Kuwahara filter, impact the model predictions. During our global analysis, we find that attribute importance is dynamic, changing with the quality of the seismic attributes and the seismic facies analyzed. For our data volume and target facies, attributes measuring changes in dip and energy show the largest importance in all cases of our sensitivity analysis. We note that to discriminate between the seismic facies, the ML architecture learns a “set of rules” in multiattribute space, and overlap among MTDs, salt, and conformal sediments may exist depending on the seismic attribute analyzed. Finally, using SHAP at the voxel scale, we examine why certain areas of interest were misclassified by the algorithm and perform an in-context interpretation to analyze how changes in the geology impacted the model predictions.
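To illustrate the additive attribution idea behind SHAP, the following minimal sketch computes exact Shapley values by subset enumeration for a toy three-attribute model. The weights, baseline values, and input are hypothetical stand-ins (not from the study); in practice the trained random forest and a SHAP library would replace the hand-coded linear model.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy "model": a linear combination of three seismic
# attributes, standing in for the trained random forest.
WEIGHTS = [0.5, -1.2, 2.0]
BASELINE = [1.0, 2.0, 0.5]  # background attribute values (e.g., data means)

def model(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def value(x, subset):
    """Model output with attributes outside `subset` set to baseline."""
    masked = [x[i] if i in subset else BASELINE[i] for i in range(len(x))]
    return model(masked)

def shapley_values(x):
    """Exact Shapley value of each attribute via subset enumeration."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += w * (value(x, set(S) | {i}) - value(x, set(S)))
        phis.append(phi)
    return phis

x = [2.0, 1.0, 1.5]
phi = shapley_values(x)
# Efficiency property: attributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (model(x) - model(BASELINE))) < 1e-9
```

For a linear model, each Shapley value reduces to the weight times the deviation from baseline, so the attributions here are 0.5, 1.2, and 2.0; the same additive decomposition underlies the voxel-scale explanations described above, with the random forest in place of the linear toy.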