ACRONYM: Context metrics for linking people to user-generated media content

Abstract
With the advent of online social networks and User-Generated Content (UGC), the social Web is experiencing an explosion of audio-visual data. However, the usefulness of the collected data is in doubt, given that the means of retrieval are limited by the semantic gap between the data and people's perceived understanding of the memories it represents. Whereas machines interpret UGC media as series of binary audio-visual data, humans perceive the context under which the content is captured and the people, places, and events represented. The Annotation CReatiON for Your Media (ACRONYM) framework addresses the semantic gap by supporting the creation of a layer of explicit machine-interpretable meaning describing UGC context. This paper presents an overview of a use case of ACRONYM for the semantic annotation of personal photographs. The authors define a set of recommendation algorithms employed by ACRONYM to support the annotation of generic UGC multimedia. This paper introduces the context metrics and combination methods that form the recommendation algorithms used by ACRONYM to determine the people represented in multimedia resources. For the photograph annotation use case, these result in an increase in recommendation accuracy. Context-based algorithms provide a cheap and robust means of UGC media annotation that is compatible with and complementary to content-recognition techniques.
| Original language | English |
|---|---|
| Pages (from-to) | 1-35 |
| Number of pages | 35 |
| Journal | International Journal on Semantic Web and Information Systems |
| Volume | 7 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Oct 2011 |
Keywords
- Context mining
- Linked data
- Mobile devices
- Recommender algorithms
- Semantic annotation
- Social media