Has Adhoc Retrieval Improved Since 1994?
Timothy G. Armstrong, Alistair Moffat, William Webber, and Justin Zobel
Department of Computer Science and Software Engineering,
The University of Melbourne,
Victoria 3010, Australia.
Status
Proc. 32nd Annual International ACM SIGIR Conference on
Research and Development in Information Retrieval, Boston,
July 2009, pages 692-693.
Poster presentation.
Abstract
Evaluation forums such as TREC allow systematic measurement and
comparison of information retrieval techniques.
The goal is consistent improvement, based on reliable comparison of
the effectiveness of different approaches and systems.
In this paper we report experiments to determine whether this goal
has been achieved.
We ran five publicly available search systems, in a total of
seventeen different configurations, against nine TREC adhoc-style
collections spanning 1994 to 2005.
These runsets were then used as a benchmark for reassessing the
relative effectiveness of the original TREC runs for those
collections.
Surprisingly, there appears to have been no overall improvement in
effectiveness for either median or top-end TREC submissions, even
after allowing for several possible confounds.
We therefore question whether the effectiveness of adhoc information
retrieval has improved over the past decade and a half.
Full text
http://doi.acm.org/10.1145/1571941.1572081