Abstract
In this paper we present several sources of information complementary to audio-visual (A/V) streams and propose using them to enrich A/V data with semantic concepts, bridging the gap between low-level video analysis and high-level analysis. Our aim is to extract cross-media feature descriptors from semantically enriched and aligned resources in order to detect finer-grained events in video. We introduce an architecture for analysing complementary resources and discuss the domain-dependency aspects of this approach with respect to our initial domain, soccer broadcasts.
| Original language | English |
|---|---|
| Pages (from-to) | 7-8 |
| Number of pages | 2 |
| Journal | CEUR Workshop Proceedings |
| Volume | 300 |
| Publication status | Published - 2007 |
| Externally published | Yes |
| Event | 2nd International Conference on Semantic and Digital Media Technologies, SAMT 2007, Genoa, Italy, 5 Dec 2007 – 7 Dec 2007 |
Keywords
- Multimedia databases
- Text processing