A Hardware Acceleration Platform for AI-Based Inference at the Edge

Kimon Karras, Evangelos Pallis, George Mastorakis, Yannis Nikoloudakis, Jordi Mongay Batalla, Constandinos X. Mavromoustakis, Evangelos Markakis

    Research output: Contribution to journal › Article › peer-review

    6 Citations (Scopus)


    Machine learning (ML) algorithms are already transforming the way data are collected and processed in the data center, where some form of AI has permeated most areas of computing. Integrating AI algorithms at the edge is the next logical step and is already under investigation. However, harnessing such algorithms at the edge will require more computing power than current platforms offer. In this paper, we present an FPGA system-on-chip-based architecture that supports the acceleration of ML algorithms in an edge environment. The system supports dynamic deployment of ML functions, driven either locally or remotely, thus achieving a remarkable degree of flexibility. We demonstrate the efficacy of this architecture by executing a version of the well-known YOLO classifier, which achieves competitive performance while requiring a reasonable amount of resources on the device.
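    The dynamic deployment scheme described in the abstract — loading ML functions onto the accelerator on demand, driven either locally or remotely — can be sketched in software as a small kernel registry. This is an illustrative model only, not the paper's implementation; all names (`AcceleratorSlot`, `deploy`) are hypothetical stand-ins for the FPGA-side mechanics.

    ```python
    # Minimal sketch of dynamic ML-function deployment, assuming a single
    # reconfigurable accelerator slot that can hold one kernel at a time.
    # In real hardware, load() would correspond to partial reconfiguration
    # with a new bitstream; here a kernel is just a Python callable.

    class AcceleratorSlot:
        """Stands in for a reconfigurable FPGA region hosting one ML kernel."""

        def __init__(self):
            self.kernel = None  # (name, callable) currently deployed, if any

        def load(self, name, fn):
            # Replace whatever kernel is currently deployed.
            self.kernel = (name, fn)

        def run(self, data):
            if self.kernel is None:
                raise RuntimeError("no kernel deployed")
            _, fn = self.kernel
            return fn(data)

    def deploy(slot, name, fn, source="local"):
        """Deploy a kernel; 'source' models local- vs. remote-driven deployment."""
        if source not in ("local", "remote"):
            raise ValueError("source must be 'local' or 'remote'")
        slot.load(name, fn)
        return f"{name} deployed ({source})"

    # Example: a remote controller deploys a (mock) YOLO-style classifier,
    # then the edge node runs inference on an incoming frame.
    slot = AcceleratorSlot()
    print(deploy(slot, "yolo-tiny", lambda frame: ["person", "car"], source="remote"))
    print(slot.run(b"frame-bytes"))
    ```

    The point of the sketch is the indirection: the edge node's run path is fixed, while the kernel behind it can be swapped at any time by a local or remote deployment request, which is the flexibility the architecture claims.
    
    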

    Original language: English
    Journal: Circuits, Systems, and Signal Processing
    Publication status: Published - 1 Jan 2019


    • Acceleration
    • Acceleration of machine learning
    • AI
    • Computing
    • EDGE
    • Fog
    • ML
    • PCP
    • YOLO


