Modelling epistemic uncertainty in IR evaluation

Yakici, M. and Baillie, M. and Ruthven, I. and Crestani, F. (2007) Modelling epistemic uncertainty in IR evaluation. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '07), 2007-07-23 - 2007-07-27. (Unpublished)

Full text not available in this repository.

Abstract

Modern information retrieval (IR) test collections violate the completeness assumption of the Cranfield paradigm. To make the best use of limited assessment resources, only a sample of documents (the pool) is judged for relevance by human assessors. The subsequent evaluation protocol makes no distinction between assessed and unassessed documents: any document not in the pool is assumed to be non-relevant to the topic. This is beneficial from a practical point of view, as relative performance can be compared with confidence provided the experimental conditions are fair for all systems. However, given the incompleteness of the relevance assessments, two forms of uncertainty emerge during evaluation. The first is aleatory uncertainty: variation in system performance across the topic set, which is often addressed through statistical significance tests. The second is epistemic uncertainty: the amount of knowledge (or ignorance) we have about the estimate of a system's performance.
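
To make the protocol concrete, here is a minimal Python sketch (not from the paper) of how a pooled collection is typically scored and compared. Unjudged documents are silently counted as non-relevant, and a paired t-test over per-topic scores is the standard treatment of the aleatory component. The function names and the binary qrels dictionary are illustrative assumptions, not the authors' code.

from scipy import stats

def average_precision(ranking, qrels):
    """Average precision for one topic.

    `qrels` maps judged doc IDs to 0/1 relevance; any document not
    in the pool defaults to non-relevant, which is exactly the
    completeness assumption the abstract questions.
    """
    relevant_seen, precision_sum = 0, 0.0
    total_relevant = sum(qrels.values()) or 1  # guard against empty qrels
    for rank, doc_id in enumerate(ranking, start=1):
        if qrels.get(doc_id, 0):  # unjudged documents score 0 here
            relevant_seen += 1
            precision_sum += relevant_seen / rank
    return precision_sum / total_relevant

def compare_systems(ap_scores_a, ap_scores_b):
    """Aleatory uncertainty: paired t-test on per-topic AP scores."""
    t_stat, p_value = stats.ttest_rel(ap_scores_a, ap_scores_b)
    return t_stat, p_value

One simple way to expose the epistemic component with this sketch is to re-score each system under the opposite assumption (all unjudged documents relevant) and inspect the gap between the two estimates; that bounding idea is offered only as an illustration, not as the model proposed in the paper.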