Comparing Convolutional Neural Networks and Transformers in a Points-of-Interest Experiment

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This paper addresses a research gap by providing a comparative analysis of the most prevalent Deep Learning (DL) models for image classification, focusing specifically on Points-of-Interest (POI), and discusses their differences. Convolutional Neural Network (CNN) based models are trained on a POI dataset and their accuracy levels are recorded. The paper then compares them with ViT, a recent model based on the Transformer architecture that has the potential to surpass current accuracy levels and bring further innovation to the field of Deep Learning. For this comparative study, a random sample of the Places365 dataset is used, referred to in this paper as the mini-places dataset.
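The paper does not publish its training code here, but the experimental setup it describes can be sketched as follows: fine-tune a CNN baseline and a ViT on a random subset of Places365 and compare accuracy. In this minimal sketch, the specific architectures (ResNet-50 and ViT-B/16), the subset size, and all hyperparameters are illustrative assumptions rather than values taken from the paper; only the overall comparison follows the abstract.

```python
# Hypothetical sketch: fine-tune a CNN (ResNet-50) and a Vision Transformer
# (ViT-B/16) on a random "mini-places" subset of Places365 and compare accuracy.
# Model choices, subset size, and hyperparameters are assumptions, not the paper's.
import random
import torch
from torch import nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

NUM_CLASSES = 365        # full Places365 label set (assumed kept as-is)
SUBSET_SIZE = 50_000     # illustrative sample size for "mini-places"
device = "cuda" if torch.cuda.is_available() else "cpu"

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Random sample of Places365 standing in for the paper's mini-places dataset.
full_train = datasets.Places365(root="data", split="train-standard",
                                small=True, download=True, transform=preprocess)
indices = random.sample(range(len(full_train)), SUBSET_SIZE)
mini_places = Subset(full_train, indices)
loader = DataLoader(mini_places, batch_size=64, shuffle=True, num_workers=4)


def build_models():
    """Return the two competing architectures with fresh 365-way classifier heads."""
    cnn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    cnn.fc = nn.Linear(cnn.fc.in_features, NUM_CLASSES)

    vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
    vit.heads.head = nn.Linear(vit.heads.head.in_features, NUM_CLASSES)
    return {"resnet50": cnn, "vit_b_16": vit}


def train_one_epoch(model, loader):
    """Fine-tune for one epoch and return training accuracy."""
    model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(1) == labels).sum().item()
        total += labels.size(0)
    return correct / total


for name, model in build_models().items():
    acc = train_one_epoch(model, loader)
    print(f"{name}: train accuracy after one epoch = {acc:.3f}")
```

In practice the comparison would run for multiple epochs and report held-out validation accuracy for each architecture; the one-epoch loop above only illustrates the shared training and evaluation pipeline.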

Original language: English
Title of host publication: Proceedings of the 3rd International Conference on Innovations in Computing Research (ICR'24)
Editors: Kevin Daimi, Abeer Al Sadoon
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 153-162
Number of pages: 10
ISBN (Print): 9783031655210
DOIs
Publication status: Published - 2024
Event: 3rd International Conference on Innovations in Computing Research, ICR 2024 - Athens, Greece
Duration: 12 Aug 2024 - 14 Aug 2024

Publication series

Name: Lecture Notes in Networks and Systems
Volume: 1058 LNNS
ISSN (Print): 2367-3370
ISSN (Electronic): 2367-3389

Conference

Conference: 3rd International Conference on Innovations in Computing Research, ICR 2024
Country/Territory: Greece
City: Athens
Period: 12/08/24 - 14/08/24

Keywords

  • Artificial intelligence
  • Convolutional neural networks
  • Deep learning
  • Machine learning
  • Points-of-interest
  • Transformers
