Precision-At-Ten Considered Redundant


William Webber
Department of Computer Science and Software Engineering, The University of Melbourne, Victoria 3010, Australia.

Alistair Moffat
Department of Computer Science and Software Engineering, The University of Melbourne, Victoria 3010, Australia.

Justin Zobel
NICTA Victoria Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, Victoria 3010, Australia.

Tetsuya Sakai
NewsWatch, Inc., Japan.


Status

Proc. 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Singapore, July 2008, pages 695-696.

Abstract

Information retrieval systems are compared using evaluation metrics, with researchers commonly reporting results for simple metrics such as precision-at-10 or reciprocal rank together with more complex ones such as average precision or discounted cumulative gain. In this paper, we demonstrate that the complex metrics are as good as or better than the simple metrics at predicting the performance of those same simple metrics on other topics. Reporting results for simple metrics alongside complex ones is therefore redundant.
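For readers unfamiliar with the metrics named in the abstract, the sketch below gives one common formulation of each (precision-at-10, reciprocal rank, average precision, and discounted cumulative gain) over a single ranked list of binary relevance judgments. It is illustrative only, not code from the paper, and the log2 discount shown for DCG is just one of several formulations in use.

import math

def precision_at_k(rels, k=10):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(rels[:k]) / k

def reciprocal_rank(rels):
    """1 / rank of the first relevant document, or 0 if none is retrieved."""
    for rank, rel in enumerate(rels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def average_precision(rels, num_relevant):
    """Mean of the precision values at each relevant document's rank,
    averaged over all relevant documents for the topic."""
    total, hits = 0.0, 0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            total += hits / rank
    return total / num_relevant if num_relevant else 0.0

def dcg(rels, k=10):
    """Discounted cumulative gain with a log2 rank discount
    (one common formulation; others exist)."""
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(rels[:k], start=1))

# Example: 1 marks a relevant document at that rank, 0 a non-relevant one.
run = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(precision_at_k(run),
      reciprocal_rank(run),
      average_precision(run, num_relevant=4),
      dcg(run))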

Software

Web site giving standardization constants for typical TREC experiments

Full text

http://doi.acm.org/10.1145/1390334.1390456