Recently, convolutional neural network (CNN)-based deep learning (DL) for impedance inversion has been extended to multiple dimensions. Training a multidimensional DL inversion requires extracting supervised information from sparse 1D well-log labels. Fully convolutional networks rely on parameter sharing and their receptive fields to do so, but their perceptual range is limited, making it difficult to capture long-range correlations in seismic data. The transformer, a network built entirely on self-attention, has demonstrated remarkable performance across various tasks and domains. However, its suitability for 3D seismic inversion is limited by its high computational cost, fixed input size, and inadequate handling of low-level details. Our primary goal is to reengineer the self-attention mechanism so that it is well suited to seismic impedance inversion. The high-dimensional self-attention is decoupled into dual low-dimensional attention paths, reducing the computation spent on dense connections and matrix dot products. Shared parameters replace full connections, allowing the network to accept inputs of varying size. In addition, local modeling capability is enhanced by integrating the attention with the residual structure of the CNN. We name the resulting structure the Self-Attention ResBlock and use it as the basic unit for constructing TransInver. Comparative experiments indicate that TransInver significantly outperforms 3D methods such as UNet, TransUNet, and HRNet, as well as 1D inversion methods. TransInver produces reliable inversion results using only nine well logs on SEAM Phase I and three well logs on the Netherlands F3 field data set. The backbone network delivers excellent inversion performance without relying on auxiliary means such as low-frequency constraints or semisupervised frameworks.
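The idea of decoupling one high-dimensional self-attention into two low-dimensional attention paths can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes identity query/key/value projections for brevity, and the names `attention_1d` and `decoupled_attention` are illustrative. Instead of flattening an H×W slice into one sequence of length HW (giving an HW×HW attention matrix), attention is applied along each spatial axis in turn, so the largest matrices are only H×H and W×W; a residual connection mirrors the Self-Attention ResBlock structure.

```python
import numpy as np

def attention_1d(x):
    """Scaled dot-product self-attention over the first axis of x (seq, dim).
    Identity Q/K/V projections for brevity; a real block would learn them."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                     # (seq, seq)
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)                     # row-wise softmax
    return w @ x                                      # (seq, dim)

def decoupled_attention(vol):
    """Dual low-dimensional attention paths over an (H, W, C) volume:
    attend along axis 0 for each column, then along axis 1 for each row,
    avoiding the (H*W) x (H*W) matrix of full 2D self-attention."""
    H, W, C = vol.shape
    # path 1: attention along the first spatial axis, per column
    out = np.stack([attention_1d(vol[:, w, :]) for w in range(W)], axis=1)
    # path 2: attention along the second spatial axis, per row
    out = np.stack([attention_1d(out[h, :, :]) for h in range(H)], axis=0)
    # residual connection, as in the Self-Attention ResBlock idea
    return vol + out

x = np.random.default_rng(0).standard_normal((8, 6, 4))
y = decoupled_attention(x)
print(y.shape)  # (8, 6, 4): input size is preserved, and nothing
                # in the computation fixes H or W, so input sizes can vary
```

Because no step depends on a fixed H or W, the same code runs unchanged on volumes of different spatial sizes, loosely mirroring how shared parameters let the network accept variable input sizes.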