Grain-size analysis increasingly consists of multivariate comparisons between samples based on class frequencies. This approach follows from the realization that size-frequency distributions are spectra, more akin to X-ray diffractograms than to simple random phenomena. With the assumption that samples consist of mixtures of subdistributions comes the problem of finding the most efficient way to compare and contrast size-frequency data so as to enhance differences between samples without forcing contrasts that do not exist. Two problems arise: 1) determining the optimal number of class intervals and 2) determining the class-interval widths. The first problem remains unsolved, but this paper presents a way to determine class-interval widths, once the number of intervals is chosen, that maximizes information content. Applying the basic concepts of information theory, a procedure is presented that evaluates the relative information content of a set of frequency data when subdivided in various ways. Maximum information is always preserved when "maximum entropy" spectra (unequal class intervals) are used. Evaluation of several schemes of histogram subdivision (phi-based arithmetic, log arithmetic, Z-score, maximum entropy) indicates, not surprisingly, that in some instances equal-interval, phi-based histograms contain the least information.
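The core idea can be sketched in code. The snippet below is an illustrative example, not the paper's own procedure: it computes the Shannon entropy (information content) of a set of class frequencies and compares an equal-width subdivision against a "maximum entropy" subdivision, in which unequal class boundaries are placed at the sample quantiles so that each class holds roughly the same number of observations. The synthetic two-component mixture, the bin count `k`, and all function names are hypothetical choices for illustration.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (bits) of a set of class frequencies."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()          # drop empty classes, normalize
    return float(-(p * np.log2(p)).sum())

def equal_width_counts(data, k):
    """Frequencies for k equal-width class intervals over the data range
    (analogous to an equal-interval, phi-based histogram)."""
    return np.histogram(data, bins=k)[0]

def max_entropy_counts(data, k):
    """Frequencies for k unequal intervals with boundaries at the sample
    quantiles, so each class holds (nearly) the same count -- the
    maximum-entropy subdivision for a fixed number of classes."""
    edges = np.quantile(data, np.linspace(0.0, 1.0, k + 1))
    return np.histogram(data, bins=edges)[0]

# Hypothetical sample: a mixture of two normal subpopulations (phi units),
# standing in for a sediment sample built from two subdistributions.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(2.0, 0.3, 700),
                         rng.normal(4.5, 0.8, 300)])

k = 8
h_equal = shannon_entropy(equal_width_counts(sample, k))
h_maxent = shannon_entropy(max_entropy_counts(sample, k))
# Entropy is bounded above by log2(k); the quantile-based (unequal-interval)
# subdivision comes close to that bound, while the equal-width one falls short.
```

For a clustered, multimodal distribution like this mixture, the equal-width histogram concentrates most observations in a few classes and so carries less information than the quantile-based subdivision, which is the intuition behind preferring unequal class intervals.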