Time-lapse seismic is a powerful technology for monitoring a variety of subsurface changes due to reservoir fluid flow. However, the practice can be technically challenging when one seeks to acquire colocated time-lapse surveys with high degrees of replicability among the shot locations. We have determined that under “ideal” circumstances, in which we ignore errors related to taking measurements off the grid, high-quality prestack data can be obtained from randomized subsampled measurements observed from surveys in which we choose not to revisit the same randomly subsampled on-the-grid shot locations. Our acquisition is low cost because our measurements are subsampled. We have found that the recovered finely sampled prestack baseline and monitor data improve significantly when the same on-the-grid shot locations are not revisited. We achieve this result by exploiting the fact that different time-lapse data share information and that nonreplicated (on-the-grid) acquisitions add information when the prestack data are recovered jointly. Whenever the time-lapse data exhibit joint structure, i.e., they are compressible in some transform domain and share information, sparsity-promoting recovery of the “common component” and the “innovations” with respect to this common component outperforms independent recovery of the prestack baseline and monitor data. The recovered time-lapse data are of high enough quality to serve as input for extracting poststack attributes used to compute time-lapse differences. Without joint recovery, artifacts due to the randomized subsampling degrade the repeatability of the time-lapse data. We validated this method with reliable statistics gathered from thousands of repeated experiments. We also confirmed that high degrees of repeatability are achievable for an ocean-bottom cable survey acquired with time-jittered continuous recording.
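The joint structure described above can be illustrated with a minimal toy sketch. The following is not the authors' implementation (which operates on prestack seismic volumes in a transform domain with dedicated sparse solvers); it is a hypothetical 1-D example in which the baseline and monitor signals are sparse directly, each is observed through its own (nonreplicated) randomized measurement operator, and a single stacked system recovers a common component `z0` together with innovations `z1` and `z2`. The dimensions, sparsity levels, and the ISTA solver are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 100, 40  # grid points per survey; subsampled measurements per survey

# Sparse common component shared by both vintages, plus small innovations
z0 = np.zeros(n); z0[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
z1 = np.zeros(n); z1[rng.choice(n, 2, replace=False)] = 0.3 * rng.standard_normal(2)
z2 = np.zeros(n); z2[rng.choice(n, 2, replace=False)] = 0.3 * rng.standard_normal(2)
x_base, x_mon = z0 + z1, z0 + z2  # baseline and monitor signals

# Independent randomized operators: the two surveys do NOT share shot locations
A1 = rng.standard_normal((m, n)) / np.sqrt(m)
A2 = rng.standard_normal((m, n)) / np.sqrt(m)
b = np.concatenate([A1 @ x_base, A2 @ x_mon])

# Joint recovery model: one stacked system in [z0, z1, z2]
#   b1 = A1 (z0 + z1),  b2 = A2 (z0 + z2)
Z = np.zeros((m, n))
A = np.block([[A1, A1, Z], [A2, Z, A2]])

# Sparsity-promoting recovery via ISTA (iterative soft thresholding)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
lam = 1e-3
z = np.zeros(3 * n)
for _ in range(5000):
    g = z - step * (A.T @ (A @ z - b))              # gradient step on the data misfit
    z = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold

z0_hat, z1_hat, z2_hat = z[:n], z[n:2 * n], z[2 * n:]
x_base_hat, x_mon_hat = z0_hat + z1_hat, z0_hat + z2_hat
err_base = np.linalg.norm(x_base_hat - x_base) / np.linalg.norm(x_base)
err_mon = np.linalg.norm(x_mon_hat - x_mon) / np.linalg.norm(x_mon)
print(f"relative errors: baseline {err_base:.3f}, monitor {err_mon:.3f}")
```

Because the common component is observed through the rows of both `A1` and `A2`, nonreplicated measurements effectively double the information available for it, which is the intuition behind why not revisiting shot locations helps joint recovery.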