Abstract
Traditionally, relevance assessments for expert search have been gathered through self-assessment or based on the opinions of co-workers. We introduce three benchmark datasets for expert search that use conference workshops for relevance assessment. Our datasets cover entire research domains as opposed to single institutions. In addition, they provide a larger number of topic-person associations and allow a more objective and fine-grained evaluation of expertise than existing datasets do. We present and discuss baseline results for a language modelling and a topic-centric approach to expert search. We find that the topic-centric approach achieves the best results on domain-specific datasets.
| Original language | English (Ireland) |
|---|---|
| Title of host publication | Proceedings of the workshop on Computational Scientometrics: Theory and Applications, at the 22nd ACM International Conference on Information and Knowledge Management (CIKM 2013) |
| Publication status | Published - 1 Jan 2013 |
Authors
- Georgeta Bordea, Toine Bogers and Paul Buitelaar
Title: Benchmarking Domain-Specific Expert Search Using Workshop Program Committees