Conversational gold: evaluating personalized conversational search system using gold nuggets

Abbasiantaeb, Zahra, Lupart, Simon, Azzopardi, Leif, Dalton, Jeffrey and Aliannejadi, Mohammad (2025) Conversational gold: evaluating personalized conversational search system using gold nuggets. In: Ferro, Nicola, Maistro, Maria, Pasi, Gabriella, Alonso, Omar, Trotman, Andrew and Verberne, Suzan (eds.) SIGIR '25: Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery (ACM), Italy, pp. 3455-3465. ISBN 9798400715921 (https://doi.org/10.1145/3726302.3730316)

Text: Abbasiantaeb-etal-ACM-2025-Conversational-gold-evaluating-personalized-conversational-search-system.pdf
Final Published Version
License: Creative Commons Attribution 4.0

Abstract

The rise of personalized conversational search systems has been driven by advancements in Large Language Models (LLMs), enabling these systems to retrieve and generate answers for complex information needs. However, the automatic evaluation of responses generated by Retrieval Augmented Generation (RAG) systems remains an understudied challenge. In this paper, we introduce a new resource for assessing the retrieval effectiveness and relevance of responses generated by RAG systems, using a nugget-based evaluation framework. Built upon the foundation of TREC iKAT 2023, our dataset extends to the TREC iKAT 2024 collection, which includes 17 conversations and 20,575 passage relevance assessments, together with 2,279 extracted gold nuggets and 62 manually written gold answers from NIST assessors. While maintaining the core structure of its predecessor, this new collection enables a deeper exploration of generation tasks in conversational settings. Key improvements in iKAT 2024 include: (1) "gold nuggets" - concise, essential pieces of information extracted from relevant passages of the collection - which serve as a foundation for automatic response evaluation; (2) manually written answers to provide a gold standard for response evaluation; (3) expanded user personas, providing richer contextual grounding; and (4) a transition from Personal Text Knowledge Base (PTKB) ranking to PTKB classification and selection. Built on this resource, we provide a framework for long-form answer generation evaluation, involving nugget extraction and nugget matching, linked to retrieval. This establishes a solid resource for advancing research in personalized conversational search and long-form answer generation. Our resources are publicly available at https://github.com/irlabamsterdam/CONE-RAG.
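To illustrate the nugget-based evaluation idea the abstract describes, the sketch below scores a generated answer by matching gold nuggets against it and reporting nugget recall. The lexical word-overlap matcher, the 0.6 threshold, and the example nuggets are assumptions made for illustration only; the paper's framework performs nugget matching differently (e.g. with LLM-based matchers).

```python
import re

def _content_words(text: str) -> set[str]:
    # Lowercased alphanumeric tokens longer than 3 characters
    # (a crude stand-in for content words).
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3}

def nugget_matches(nugget: str, answer: str, threshold: float = 0.6) -> bool:
    # Naive lexical match: a nugget counts as covered if enough of its
    # content words appear in the answer. Illustrative only; a real
    # matcher would use semantic or LLM-based comparison.
    nugget_words = _content_words(nugget)
    if not nugget_words:
        return False
    overlap = len(nugget_words & _content_words(answer))
    return overlap / len(nugget_words) >= threshold

def nugget_recall(gold_nuggets: list[str], answer: str) -> float:
    # Fraction of gold nuggets covered by the generated answer.
    if not gold_nuggets:
        return 0.0
    covered = sum(nugget_matches(n, answer) for n in gold_nuggets)
    return covered / len(gold_nuggets)

# Hypothetical gold nuggets and system answer (not from the dataset).
gold = [
    "the eiffel tower is located in paris",
    "the tower was completed in 1889",
]
answer = "The Eiffel Tower, completed in 1889, stands in Paris, France."
score = nugget_recall(gold, answer)  # both nuggets are covered here
```

A precision-style counterpart (penalizing unsupported answer content) could be added the same way, but even this recall sketch shows why concise gold nuggets make long-form answer evaluation automatable.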

ORCID iDs

Abbasiantaeb, Zahra; Lupart, Simon; Azzopardi, Leif (ORCID: https://orcid.org/0000-0002-6900-0557); Dalton, Jeffrey; Aliannejadi, Mohammad; Ferro, Nicola; Maistro, Maria; Pasi, Gabriella; Alonso, Omar; Trotman, Andrew; Verberne, Suzan