TY - GEN
T1 - Towards cross-media feature extraction
AU - Declerck, Thierry
AU - Buitelaar, Paul
AU - Nemrava, Jan
AU - Sadlier, David
PY - 2008
Y1 - 2008
N2 - In this paper we describe past and present work on the use of textual resources from which semantic information can be extracted in order to provide semantic annotation and indexing of associated image or video material. Since the emergence of semantic web technologies and resources, entities, relations and events extracted from textual resources by means of Information Extraction (IE) can now be marked up with semantic classes derived from ontologies, and those classes can be used for the semantic annotation and indexing of related image and video material. More recently, our work additionally aims to take into account extracted Audio-Video (A/V) features (such as motion, audio pitch, close-up, etc.), to be combined with the results of Ontology-Based Information Extraction for the annotation and indexing of specific event types. As extraction of A/V features is then supported by textual evidence, and possibly vice versa, our work can be considered as moving towards "cross-media feature extraction", which can be guided by shared ontologies (Multimedia, Linguistic and Domain ontologies).
UR - https://www.scopus.com/pages/publications/65649086454
M3 - Conference Publication
AN - SCOPUS:65649086454
SN - 9781577353973
T3 - AAAI Fall Symposium - Technical Report
SP - 41
EP - 45
BT - Multimedia Information Extraction - Papers from the AAAI Fall Symposium, Technical Report
PB - American Association for Artificial Intelligence
T2 - 2008 AAAI Fall Symposium
Y2 - 7 November 2008 through 9 November 2008
ER -