The M4 Competition: 100,000 time series and 61 forecasting methods

Spyros Makridakis, Evangelos Spiliotis, Vassilios Assimakopoulos

Research output: Contribution to journal › Article › peer-review

97 Citations (Scopus)


The M4 Competition follows on from the three previous M competitions, the purpose of which was to learn from empirical evidence both how to improve forecasting accuracy and how such learning could be used to advance the theory and practice of forecasting. The aim of M4 was to replicate and extend the three previous competitions by: (a) significantly increasing the number of series, (b) expanding the number of forecasting methods, and (c) including prediction intervals in the evaluation process as well as point forecasts. This paper covers all aspects of M4 in detail, including its organization and running, the presentation of its results, the top-performing methods overall and by category, its major findings and their implications, and the computational requirements of the various methods. Finally, it summarizes the main conclusions and states the expectation that the M4 series will become a testing ground for the evaluation of new methods and the improvement of the practice of forecasting, while also suggesting some ways forward for the field.
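The abstract does not spell out how point forecasts were scored; in M4 the two headline measures were sMAPE and MASE (combined into the OWA ratio against a benchmark). A minimal sketch of the two measures, with illustrative function and variable names not taken from the paper:

```python
import numpy as np

def smape(y, f):
    """Symmetric MAPE (in %): mean of 200*|y-f| / (|y|+|f|)."""
    y, f = np.asarray(y, float), np.asarray(f, float)
    return np.mean(200.0 * np.abs(y - f) / (np.abs(y) + np.abs(f)))

def mase(y, f, y_train, m=1):
    """Mean absolute scaled error.

    Scales the out-of-sample MAE by the in-sample MAE of the
    seasonal naive forecast with period m (m=1 for non-seasonal data).
    """
    y, f, y_train = map(lambda a: np.asarray(a, float), (y, f, y_train))
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y - f)) / scale
```

For example, with training data `[1, 2, 3, 4, 5]` (naive scale 1), actuals `[6, 7]`, and forecasts `[5, 6]`, `mase` returns 1.0: the forecast errs by exactly as much per step as the in-sample naive method.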

Original language: English
Pages (from-to): 54-74
Number of pages: 21
Journal: International Journal of Forecasting
Issue number: 1
Publication status: Published - 1 Jan 2020


  • Benchmarking methods
  • Forecasting accuracy
  • Forecasting competitions
  • M competitions
  • Machine learning methods
  • Practice of forecasting
  • Prediction intervals
  • Time series methods


