Abstract
Previous work has shown that combinations of separate forecasts produced by judgment are inferior to those produced by simple averaging. However, in that research, judges were not informed of outcomes after producing each combined forecast. Our first experiment shows that when they are given this information, they learn to weight the separate forecasts appropriately. Their judgments, though improved, are still not significantly better than the simple average because they contain a random error component. Bootstrapping can be used to remove this inconsistency and produce results that outperform the average. In our second and third experiments, we provided judges with information about the errors made by the individual forecasters. Results show that providing information about their mean absolute percentage errors, updated each period, enables judges to combine their forecasts in a way that outperforms the simple average.
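The abstract contrasts three ways of combining individual forecasts: the simple average, a judgmental weighting informed by each forecaster's mean absolute percentage error (MAPE), and a "bootstrapped" version of the judge that applies the judge's own implicit weights consistently. The Python sketch below is a hypothetical illustration of the kind of calculation each rule involves, not the authors' experimental procedure; all data, function names, and the inverse-MAPE weighting rule are assumptions made for illustration.

```python
import numpy as np

def simple_average(forecasts):
    """Equal-weight combination of the individual forecasts (the benchmark)."""
    return forecasts.mean(axis=1)

def inverse_mape_weights(forecasts, actuals):
    """Illustrative rule: weight each forecaster by the inverse of their MAPE.
    (In the experiments the judges receive MAPE feedback and decide themselves
    how to use it; this function is just one plausible formalisation.)"""
    errors = np.abs((forecasts - actuals[:, None]) / actuals[:, None])
    mape = errors.mean(axis=0)
    w = 1.0 / mape
    return w / w.sum()

def bootstrap_judge(judge_combined, forecasts):
    """'Bootstrapping' the judge: fit a linear model of the judge's own combined
    forecasts on the individual forecasts, then apply that model consistently,
    stripping out the random error component in the judgments."""
    X = np.column_stack([np.ones(len(forecasts)), forecasts])
    coefs, *_ = np.linalg.lstsq(X, judge_combined, rcond=None)
    return X @ coefs

# Toy data: two forecasters over ten periods (all values hypothetical).
rng = np.random.default_rng(0)
actuals = 100 + rng.normal(0, 5, 10)
forecasts = np.column_stack([
    actuals + rng.normal(0, 8, 10),   # noisier forecaster
    actuals + rng.normal(0, 3, 10),   # more accurate forecaster
])

w = inverse_mape_weights(forecasts, actuals)
print("inverse-MAPE weights:", w)                 # more weight on the accurate forecaster
print("simple average:", simple_average(forecasts)[:3])
print("weighted combination:", (forecasts @ w)[:3])
```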
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 227-246 |
| Number of pages | 20 |
| Journal | International Journal of Forecasting |
| Volume | 15 |
| Issue number | 3 |
| DOIs | |
| State | Published - Jul 1999 |
| Externally published | Yes |
Bibliographical note
Funding Information: This research was funded by Economic and Social Research Council Grant R000236827. Parts of it were presented at the International Symposium on Forecasting, Barbados, 1997, and at the 38th Annual Meeting of the Psychonomic Society, Philadelphia, 1997. The authors thank Clare Harries for her comments on an earlier version of this paper.
Keywords
- Combining forecasts
- Feedback
- Forecasting
- Information integration
- Judgment
ASJC Scopus subject areas
- Business and International Management