Abstract
The alignment, or registration, of seismic images is an important and extensively researched task in seismic data processing. It plays a pivotal role in evaluating the evolution of CO2 plumes and assessing the potential for CO2 leakage from the storage reservoir when using 4D, or time-lapse, seismic methodology. The computer vision community has demonstrated the considerable advantages of deep learning for such tasks, particularly when confronted with large shifts, mismatches, and noise in complex images. In this study, we leverage a cutting-edge recurrent deep learning network known as Recurrent All-Pairs Field Transforms (RAFT), with tailored modifications to address seismic matching challenges. We evaluate the network's performance on both synthetic and field 4D CO2 seismic data sets, showing its superior accuracy and effectiveness compared to a widely used traditional method. In particular, in intricate and ambiguous cases where even human vision may struggle to identify corresponding events between images, our deep learning method performs robustly and reliably. The validated results confirm that this lightweight multiscale network, trained through self-supervised learning, offers a fast, parameter-free, end-to-end solution for 4D image matching in subsurface CO2 storage monitoring, with potential applicability to many other seismic registration tasks.