

An evaluation of resource description quality measures

Baillie, M. and Azzopardi, L. and Crestani, F. (2006) An evaluation of resource description quality measures. In: Proceedings of the 2006 ACM Symposium on Applied Computing. ACM, pp. 1110-1111. ISBN 1-59593-108-2


Abstract

An open problem for Distributed Information Retrieval is how to represent large document repositories (known as resources) efficiently. To facilitate resource selection, estimated descriptions of each resource are required, especially when faced with non-cooperative distributed environments [1]. Accurate and efficient resource description estimation is required, as this can have an effect on resource selection and, as a consequence, on retrieval quality. Query-Based Sampling (QBS) has been proposed as a novel solution for resource estimation [2], with subsequent techniques developed thereafter [3]. However, determining whether one QBS technique generates better resource descriptions than another remains an unresolved issue. The initial metrics tested and deployed for measuring resource description quality were the Collection Term Frequency ratio (CTF) and the Spearman Rank Correlation Coefficient (SRCC) [2]. The former provides an indication of the percentage of terms seen, whilst the latter measures the term ranking order; neither, however, considers term frequency, which is important for resource selection. We re-examine this problem and consider measuring the quality of a resource description in the context of resource selection, where an estimate of the probability of a term given the resource is typically required. We believe a natural measure for comparing the estimated resource against the actual resource is the Kullback-Leibler divergence (KL). KL addresses the concerns put forward previously, by not over-representing low-frequency terms and by also considering term order [2]. In this paper, we re-assess the two previous measures alongside KL. Our preliminary investigation revealed that the former metrics display contradictory results, whilst KL suggested that a different QBS technique from that prescribed in [2] would provide better estimates.
This is a significant result, because it now remains unclear which technique will consistently provide better resource descriptions. The remainder of this paper details the three measures, presents the experimental analysis of our preliminary study, and outlines our points of concern along with further research directions.
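To make the contrast between the measures concrete, the sketch below computes the CTF ratio and the KL divergence between an actual resource's term-frequency distribution and a QBS-style estimate. This is an illustration only: the term counts are invented, and the epsilon smoothing applied to terms unseen in the estimate is an assumption of this sketch, not a detail taken from the paper.

```python
from collections import Counter
import math


def ctf_ratio(actual_counts, estimated_counts):
    """Fraction of the resource's total term occurrences whose terms
    appear in the estimated description (the CTF ratio)."""
    seen = sum(f for t, f in actual_counts.items() if t in estimated_counts)
    return seen / sum(actual_counts.values())


def kl_divergence(actual_counts, estimated_counts, epsilon=1e-9):
    """KL(actual || estimated) over the combined vocabulary.

    Terms unseen in the estimate receive a small epsilon probability;
    this smoothing choice is an assumption of the sketch.
    """
    vocab = set(actual_counts) | set(estimated_counts)
    total_a = sum(actual_counts.values())
    total_e = sum(estimated_counts.values()) + epsilon * len(vocab)
    kl = 0.0
    for term in vocab:
        p = actual_counts.get(term, 0) / total_a
        if p == 0.0:
            continue  # terms absent from the actual resource contribute nothing
        q = (estimated_counts.get(term, 0) + epsilon) / total_e
        kl += p * math.log(p / q)
    return kl


# Hypothetical term frequencies for a resource and a sampled estimate of it
actual = Counter({"retrieval": 40, "resource": 30, "sampling": 20, "query": 10})
estimate = Counter({"retrieval": 5, "resource": 4, "sampling": 1})

print(ctf_ratio(actual, estimate))   # share of term mass covered by the estimate
print(kl_divergence(actual, estimate))  # divergence of estimate from the actual
```

Note how the two measures answer different questions: CTF only asks how much term mass the estimate covers, while KL also penalises an estimate whose relative term frequencies are skewed, which is the property the abstract argues matters for resource selection.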