Automated Contextual Tagging in Points-of-Interest Using Bag-of-Objects

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Object detection is a fundamental task in computer vision, with applications ranging from autonomous driving to scene recognition. In the domain of Points-of-Interest (POI), object recognition can aid the analysis of complex urban environments containing diverse infrastructure, people, and activities. This paper addresses that challenge by proposing a Bag-of-Objects (BoO) methodology for contextual tagging of POI scenes. The proposed approach uses a transformer-based model for object detection, followed by a thresholding step that assigns tags to POI scenes. These tags are then used to train a novel multimodal model, WORLD (Weight Optimization for Representation and Labeling Descriptions), which classifies and contextually tags POI scenes based on their visual features.
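As a rough illustration of the BoO tagging stage described in the abstract, the sketch below builds a bag of detected object classes for one POI image. The paper does not name its detector, threshold value, or tag vocabulary, so DETR (via Hugging Face Transformers), the 0.7 confidence cutoff, and the bag_of_objects helper are assumptions for illustration only; the downstream WORLD model is not shown.

# Minimal sketch of the Bag-of-Objects tagging stage. DETR stands in
# for the paper's (unspecified) transformer-based detector; the 0.7
# confidence threshold is an illustrative assumption, not the paper's value.
from collections import Counter

import torch
from PIL import Image
from transformers import DetrForObjectDetection, DetrImageProcessor

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
detector = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

def bag_of_objects(image_path: str, threshold: float = 0.7) -> Counter:
    """Detect objects in a POI scene and return class counts (the 'bag')."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = detector(**inputs)
    # Keep only detections whose confidence clears the threshold;
    # the surviving class names become the scene's contextual tags.
    target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
    results = processor.post_process_object_detection(
        outputs, target_sizes=target_sizes, threshold=threshold
    )[0]
    names = [detector.config.id2label[label.item()] for label in results["labels"]]
    return Counter(names)

# e.g. Counter({'person': 4, 'bench': 2, 'bicycle': 1}) for a park POI
tags = bag_of_objects("poi_scene.jpg")
print(sorted(tags))

In this sketch, the resulting class counts play the role of the bag-of-words histogram in classic BoW pipelines: they can be vectorized over a fixed object vocabulary and used as training targets or features for a downstream classifier such as WORLD.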

Original language: English
Title of host publication: Proceedings of the 4th International Conference on Innovations in Computing Research, ICR 2025
Editors: Kevin Daimi, Abeer Alsadoon
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 62-73
Number of pages: 12
ISBN (Print): 9783031956515
DOIs
Publication status: Published - 2025
Event: 4th International Conference on Innovations in Computing Research, ICR 2025 - London, United Kingdom
Duration: 25 Aug 2025 - 27 Aug 2025

Publication series

Name: Lecture Notes in Networks and Systems
Volume: 1487 LNNS
ISSN (Print): 2367-3370
ISSN (Electronic): 2367-3389

Conference

Conference: 4th International Conference on Innovations in Computing Research, ICR 2025
Country/Territory: United Kingdom
City: London
Period: 25/08/25 - 27/08/25

Keywords

  • Artificial Intelligence
  • Bag-of-Objects
  • Data Science
  • Deep Learning
  • Points-of-Interest
  • Transformers
