In evaluating exploration prospects, is it better to rely on a few top experts or to solicit estimates from a larger group and take the average? If the evaluators work properly, a statistical compensation should occur between optimistic and pessimistic estimates, so that a group average should be about right. This principle provides a simple explanation for the winner's curse, because the winning bid is based on the highest estimate instead of the mean. But does this principle of statistical compensation of errors apply in prospect evaluation? To answer this question, a data set from the movie industry is used as an analog. The data are forecasts of the number of tickets sold for new movies on the opening day in the Paris area. These forecasts are made every week in a competitive game between movie industry professionals, with the advantage that, unlike in the oil industry, the true values eventually become known.
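The winner's-curse mechanism described above can be illustrated with a minimal simulation. The sketch below assumes unbiased Gaussian estimation errors around a known true value; the parameter values (true value, error spread, number of bidders) are purely illustrative and not drawn from the paper's data. Even though every individual estimate is unbiased, the winning bid, being the highest estimate, systematically overshoots the true value, while the group average does not.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0   # hypothetical true prospect value (illustrative)
NOISE_SD = 20.0      # spread of individual estimation errors (assumed)
N_BIDDERS = 8        # competing evaluators per auction (assumed)
N_AUCTIONS = 10_000

sum_winning, sum_average = 0.0, 0.0
for _ in range(N_AUCTIONS):
    # Each bidder's estimate is unbiased: true value plus symmetric noise.
    estimates = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_BIDDERS)]
    sum_winning += max(estimates)              # the winner bid the highest estimate
    sum_average += sum(estimates) / N_BIDDERS  # the group average of the same estimates

mean_winning = sum_winning / N_AUCTIONS
mean_average = sum_average / N_AUCTIONS
print(f"mean winning bid:   {mean_winning:.1f}")  # systematically above 100
print(f"mean group average: {mean_average:.1f}")  # close to 100
```

The overshoot of the winning bid grows with the number of bidders and with the spread of the estimation errors, which is why the curse is most severe for highly uncertain prospects contested by many parties.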
Several lessons can be learned from this data set that potentially apply to prospect evaluation. The most important is that averaging several independent appraisals of a given prospect generally does not recover the true value of that prospect. However, over a large enough portfolio of prospects, the statistical compensation does occur, and the average of the appraisals delivers the portfolio mean. The movie data indicate that a single expert can outperform the group as a whole, but that averaging the estimates of a few top experts, possibly weighted by credibility, is even better. The data also show that the distribution of the forecasts accurately represents the uncertainty about the true value. Finally, the influence of cognitive biases on estimating is briefly discussed, in particular anchoring and the need to systematically challenge the validity of geological analogs.
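The contrast between single-prospect and portfolio-level compensation can be sketched with a simple error model. The assumption below, which is illustrative and not taken from the paper, is that all evaluators of a given prospect share a common error (for example, reliance on the same flawed geological analog) in addition to their private noise. Averaging the group then cancels the private noise but not the shared error, so the group mean of one prospect stays off; across many prospects, the shared errors themselves average out.

```python
import random

random.seed(0)

N_PROSPECTS = 200
N_EXPERTS = 10
COMMON_SD = 30.0   # error shared by all experts on a prospect (assumed)
PRIVATE_SD = 15.0  # expert-specific noise (assumed)

single_errors = []
portfolio_true, portfolio_est = 0.0, 0.0
for _ in range(N_PROSPECTS):
    true_value = random.uniform(50, 150)
    shared = random.gauss(0, COMMON_SD)  # e.g. everyone anchored on the same analog
    estimates = [true_value + shared + random.gauss(0, PRIVATE_SD)
                 for _ in range(N_EXPERTS)]
    group_mean = sum(estimates) / N_EXPERTS
    single_errors.append(abs(group_mean - true_value))
    portfolio_true += true_value
    portfolio_est += group_mean

mean_single_error = sum(single_errors) / N_PROSPECTS
portfolio_error_per_prospect = abs(portfolio_est - portfolio_true) / N_PROSPECTS
print(f"typical error on one prospect:     {mean_single_error:.1f}")
print(f"portfolio error, per prospect:     {portfolio_error_per_prospect:.1f}")
```

Under these assumptions the per-prospect error of the group mean stays near the shared-error spread, while the portfolio-level error shrinks roughly as one over the square root of the number of prospects, which is the sense in which "the mean is delivered" only over a large enough portfolio.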