Forecasting and loss functions

Robert Fildes, Spyros Makridakis

    Research output: Contribution to journal › Article › peer-review


    This paper considers two problems in interpreting forecasting competition error statistics. The first concerns the importance of linking the error measure (loss function) used to evaluate a forecasting model with the loss function used to estimate it. It is argued that, because of the variety of uses to which any single forecast is put, such matching is impractical. Secondly, there is little evidence that matching would have any impact on comparative forecast performance, however measured. As a consequence, the results of forecasting competitions are not affected by this problem. The second problem concerns interpreting performance when it is evaluated through M(ean) S(quare) E(rror). The authors show that in the Makridakis Competition, good MSE performance is due solely to performance on a small number of the 1001 series, and arises because of the effects of scale. They conclude that comparisons of forecasting accuracy based on MSE are subject to major problems of interpretation.
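    The scale effect described in the abstract can be illustrated with a small synthetic sketch (not data from the paper): when one series in a collection is orders of magnitude larger than the rest, it dominates the aggregate MSE even if every series has the same relative forecast error. The series counts, scales, and 5% error level below are hypothetical choices for illustration only.

    ```python
    # Illustration (synthetic, not from the paper): aggregate MSE across series
    # of very different scales is dominated by the largest-scale series.
    import random

    random.seed(0)

    def mse(actuals, forecasts):
        """Mean squared error over one series."""
        return sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / len(actuals)

    # Hypothetical collection: 99 small-scale series and 1 large-scale series,
    # each forecast with the same 5% relative error.
    series_scales = [10.0] * 99 + [10_000.0]
    per_series_mse = []
    for scale in series_scales:
        actuals = [scale * (1 + random.gauss(0, 0.01)) for _ in range(20)]
        forecasts = [a * 1.05 for a in actuals]  # identical 5% relative error
        per_series_mse.append(mse(actuals, forecasts))

    overall = sum(per_series_mse) / len(per_series_mse)
    share_of_largest = per_series_mse[-1] / sum(per_series_mse)
    print(f"overall MSE: {overall:.1f}")
    print(f"share of total MSE from the single large series: {share_of_largest:.4f}")
    ```

    Because squared errors grow with the square of the series level, the single large-scale series contributes essentially all of the total MSE here, even though its forecasts are no worse in relative terms than the others.
    
    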

    Original language: English
    Pages (from-to): 545-550
    Number of pages: 6
    Journal: International Journal of Forecasting
    Issue number: 4
    Publication status: Published - 1 Jan 1988


    • Bayesian forecasting
    • Estimation - evaluation
    • Loss functions - evaluation
    • Loss functions - interpretation
    • M-competition
    • Time series - transformations
