Evaluating DL Model Scaling Trade-Offs During Inference via an Empirical Benchmark Analysis

Research output: Contribution to journal › Article › peer-review

Abstract

With generative Artificial Intelligence (AI) capturing public attention, the technology sector's appetite for larger and more complex Deep Learning (DL) models continues to grow. Traditionally, the focus in DL model development has been on scaling the neural network's foundational structure to increase computational complexity and enhance the model's representational expressiveness. However, with recent advancements in edge computing and 5G networks, DL models are now being aggressively deployed across the cloud–edge–IoT continuum to realize in situ intelligent IoT services. This paradigm shift poses a growing challenge for AI practitioners, as a focus on inference costs, including latency, computational overhead, and energy efficiency, is long overdue. This work presents a benchmarking framework designed to assess DL model scaling along three key performance axes during model inference: classification accuracy, computational overhead, and latency. The framework's utility is demonstrated through an empirical study involving various model structures and variants, together with publicly available datasets, for three popular DL use cases covering natural language understanding, object detection, and regression analysis.
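The article itself does not reproduce its benchmarking code here; the following is a minimal, hypothetical PyTorch sketch of how the three axes could be instrumented, using trainable parameter count as a proxy for computational overhead (the paper may well use FLOPs or energy measurements instead), wall-clock time per batch for latency, and top-1 accuracy. All function names and the toy scaled MLP are illustrative assumptions, not the authors' framework.

```python
import time
import torch
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Proxy for computational overhead: number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

@torch.no_grad()
def measure_latency(model: nn.Module, example_input: torch.Tensor,
                    warmup: int = 10, runs: int = 100) -> float:
    """Mean per-batch inference latency in milliseconds."""
    model.eval()
    for _ in range(warmup):            # warm-up iterations to stabilise caches
        model(example_input)
    if example_input.is_cuda:
        torch.cuda.synchronize()       # flush pending GPU kernels before timing
    start = time.perf_counter()
    for _ in range(runs):
        model(example_input)
    if example_input.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1e3

@torch.no_grad()
def measure_accuracy(model: nn.Module, loader, device: str = "cpu") -> float:
    """Top-1 classification accuracy over a labelled validation loader."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        preds = model(x.to(device)).argmax(dim=1)
        correct += (preds == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

# Example: contrast two width-scaled variants of a toy MLP on random input.
if __name__ == "__main__":
    for width in (128, 512):           # "scaling" the hidden-layer width
        model = nn.Sequential(nn.Flatten(), nn.Linear(784, width),
                              nn.ReLU(), nn.Linear(width, 10))
        x = torch.randn(32, 1, 28, 28)
        print(f"width={width}: {count_parameters(model):,} params, "
              f"{measure_latency(model, x):.2f} ms/batch")
```

Reporting all three axes per model variant, rather than accuracy alone, is what makes the scaling trade-off visible: a wider variant may gain accuracy while paying measurably in parameters and per-batch latency.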

Original language: English
Article number: 468
Journal: Future Internet
Volume: 16
Issue number: 12
DOIs
Publication status: Published - Dec 2024

Keywords

  • artificial intelligence
  • benchmarking
  • cloud computing
  • deep learning
