Towards cross-media feature extraction

Research output: Chapter in Book or Conference Publication/Proceeding › Conference Publication › peer-review

Abstract

In this paper we describe past and present work on the use of textual resources from which semantic information can be extracted to support the semantic annotation and indexing of associated image or video material. Since the emergence of semantic web technologies and resources, entities, relations and events extracted from textual resources by means of Information Extraction (IE) can be marked up with semantic classes derived from ontologies, and those classes can then be used for the semantic annotation and indexing of related image and video material. More recently, our work additionally aims to take extracted Audio-Video (A/V) features (such as motion, audio pitch, close-up, etc.) into account and combine them with the results of Ontology-Based Information Extraction for the annotation and indexing of specific event types. Since the extraction of A/V features is then supported by textual evidence, and possibly also the other way around, our work can be seen as moving towards a "cross-media feature extraction", which can be guided by shared ontologies (Multimedia, Linguistic and Domain ontologies).

Original language: English
Title of host publication: Multimedia Information Extraction - Papers from the AAAI Fall Symposium, Technical Report
Publisher: American Association for Artificial Intelligence
Pages: 41-45
Number of pages: 5
ISBN (Print): 9781577353973
Publication status: Published - 2008
Externally published: Yes
Event: 2008 AAAI Fall Symposium - Arlington, VA, United States
Duration: 7 Nov 2008 - 9 Nov 2008

Publication series

Name: AAAI Fall Symposium - Technical Report
Volume: FS-08-05

Conference

Conference: 2008 AAAI Fall Symposium
Country/Territory: United States
City: Arlington, VA
Period: 7/11/08 - 9/11/08
