Combining forecasts: What information do judges need to outperform the simple average?

Ilan Fischer, Nigel Harvey

Research output: Contribution to journal › Article › peer-review


Previous work has shown that combinations of separate forecasts produced by judgment are inferior to those produced by simple averaging. However, in that research judges were not informed of outcomes after producing each combined forecast. Our first experiment shows that when they are given this information, they learn to weight the separate forecasts appropriately. However, their judgments, though improved, are still not significantly better than the simple average because they contain a random error component. Bootstrapping can be used to remove this inconsistency and produce results that outperform the average. In our second and third experiments, we provided judges with information about errors made by the individual forecasters. Results show that providing information about their mean absolute percentage errors updated each period enables judges to combine their forecasts in a way that outperforms the simple average.
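As an illustrative sketch of the kind of error-weighted combination the abstract describes (the paper's exact weighting rule is not given here; this assumes inverse-MAPE weights updated each period, with the function name, data, and fallback behavior invented for the example):

```python
import numpy as np

def combine_forecasts(forecasts, actuals):
    """Combine k forecasters' predictions period by period.

    forecasts: array of shape (T, k) -- k forecasters over T periods
    actuals:   array of shape (T,)   -- realized outcomes
    Returns (simple_avg, weighted), each of shape (T,).
    Weights for period t are inverse MAPEs computed from periods 0..t-1;
    the first period falls back to the simple average.
    """
    T, k = forecasts.shape
    simple = forecasts.mean(axis=1)
    weighted = np.empty(T)
    for t in range(T):
        if t == 0:
            weighted[t] = simple[t]  # no error history yet
            continue
        # mean absolute percentage error of each forecaster so far
        mape = np.mean(np.abs(forecasts[:t] - actuals[:t, None])
                       / np.abs(actuals[:t, None]), axis=0)
        w = 1.0 / np.maximum(mape, 1e-12)  # inverse-error weights
        weighted[t] = forecasts[t] @ (w / w.sum())
    return simple, weighted

# Toy data: forecaster A is slightly biased, B is badly biased.
actual = np.full(10, 100.0)
fc = np.column_stack([actual * 1.02, actual * 1.20])
simple, weighted = combine_forecasts(fc, actual)
# After the first period, the weighted combination leans toward A
# and its mean absolute error falls below the simple average's.
```

With these toy inputs the simple average sits at 111 every period, while from period 2 onward the inverse-MAPE weights shift roughly 10:1 toward the more accurate forecaster, pulling the combined forecast to about 103.6.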

Original language: English
Pages (from-to): 227-246
Number of pages: 20
Journal: International Journal of Forecasting
Issue number: 3
State: Published - Jul 1999
Externally published: Yes

Bibliographical note

Funding Information:
This research was funded by Economic and Social Research Council Grant R000236827. Parts of it were presented at the International Symposium of Forecasting, Barbados, 1997, and at the 38th Annual Meeting of the Psychonomic Society, Philadelphia, 1997. The authors thank Clare Harries for her comments on an earlier version of this paper.


Keywords

  • Combining forecasts
  • Feedback
  • Forecasting
  • Information integration
  • Judgment

ASJC Scopus subject areas

  • Business and International Management


