We introduce a practical measure that predicts frequency dispersion in implicit time-domain finite-difference migration. This dispersion measure can be computed readily as a function of velocity, dip, and the sampling parameters (depth interval, time interval, and trace interval). One result of the analysis is that smaller sampling intervals can often produce poorer results, by unbalancing errors that would otherwise cancel. Another result is that, even if the errors are kept in balance for one event, the dispersion cannot be minimized simultaneously for all events, because many different dips and velocities occur on a typical seismic section. We also extend the computation of the dispersion measure to cascaded finite-difference migration. Cascading does not reduce the magnitude of wavelet dispersion, but it does make dispersion easier to control because it avoids the problem of choosing parameters in the presence of multivalued velocity. We calibrate and confirm our theoretical dispersion measure by means of migration tests on model data and field data.