Abstract
This paper considers two problems in interpreting forecasting competition error statistics. The first concerns the importance of linking the error measure (loss function) used in evaluating a forecasting model with the loss function used in estimating the model. It is argued that, because of the variety of uses to which any single forecast may be put, such matching is impractical. Moreover, there is little evidence that matching would have any impact on comparative forecast performance, however measured. As a consequence, the results of forecasting competitions are not affected by this problem. The second problem concerns interpreting performance when it is evaluated through mean square error (MSE). The authors show that in the Makridakis Competition, good MSE performance is due solely to performance on a small number of the 1001 series, and arises because of the effects of scale. They conclude that comparisons of forecasting accuracy based on MSE are subject to major problems of interpretation.
Original language | English |
---|---|
Pages (from-to) | 545-550 |
Number of pages | 6 |
Journal | International Journal of Forecasting |
Volume | 4 |
Issue number | 4 |
DOIs | |
Publication status | Published - 1 Jan 1988 |
Keywords
- Bayesian forecasting
- Estimation - evaluation
- Loss functions - evaluation
- Loss functions - interpretation
- M-competition
- Time series - transformations