Analysing mixed initiatives and search strategies during conversational search

Aliannejadi, Mohammad; Azzopardi, Leif; Zamani, Hamed; Kanoulas, Evangelos; Thomas, Paul; Craswell, Nick (2021) Analysing mixed initiatives and search strategies during conversational search. In: CIKM '21: Proceedings of the 30th ACM International Conference on Information & Knowledge Management. ACM, New York, NY, pp. 16-26. ISBN 9781450384469. https://doi.org/10.1145/3459637.3482231

Abstract

Information-seeking conversations between users and Conversational Search Agents (CSAs) consist of multiple turns of interaction. While users initiate a search session, ideally a CSA should sometimes take the lead in the conversation, eliciting feedback from the user by offering query suggestions or asking for query clarifications (i.e., mixed initiative). This creates the potential for more engaging conversational searches, but it substantially increases the complexity of modelling and evaluating such scenarios, owing to the large interaction space coupled with the trade-offs between the costs and benefits of the different interactions. In this paper, we present a model for conversational search, from which we instantiate two observed conversational search strategies, where the agent elicits feedback either (i) before presenting results (Feedback-First) or (ii) after presenting results (Feedback-After). Using 49 TREC WebTrack topics, we analyse how well these strategies combine with two mixed-initiative approaches: (i) query suggestions and (ii) query clarifications. Our analysis reveals no superior or dominant combination; instead, it shows that query clarifications work better when asked first, while query suggestions work better when asked after presenting results. We also show that the best strategy and approach depend on the relative costs of querying versus giving feedback, the performance of the initial query, the number of assessments per query, and the total amount of gain required. While this work highlights the complexities and challenges involved in analysing CSAs, it provides the foundations for evaluating conversational strategies and conversational search agents in batch/offline settings.
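
The paper's formal model is not reproduced on this page, but the trade-off the abstract describes can be illustrated with a toy simulation. The Python sketch below is a hypothetical illustration only, not the authors' model: the costs (QUERY_COST, FEEDBACK_COST, ASSESS_COST), the gain values, and the run_strategy helper are all invented for the example, assuming a fixed interaction budget and that eliciting feedback refines the query and so raises the gain of results assessed afterwards.

```python
# Toy cost/gain simulation of two conversational search strategies.
# All constants below are illustrative assumptions, not values from the paper.

QUERY_COST = 1.0     # assumed cost of issuing a query
FEEDBACK_COST = 0.5  # assumed cost of answering a clarification / picking a suggestion
ASSESS_COST = 0.2    # assumed cost of assessing one returned result


def run_strategy(feedback_first, initial_gain, improved_gain, n_assess, budget):
    """Accumulate gain per turn until the interaction budget is spent.

    feedback_first: elicit feedback before showing results (Feedback-First)
                    rather than after showing results (Feedback-After).
    initial_gain:   gain per assessed result for the unrefined query.
    improved_gain:  gain per assessed result once feedback has refined the query.
    """
    cost = gain = 0.0
    refined = False
    while cost < budget:
        if feedback_first and not refined:
            cost += FEEDBACK_COST  # clarify/suggest before presenting results
            refined = True
        cost += QUERY_COST  # issue the (possibly refined) query
        per_result = improved_gain if refined else initial_gain
        for _ in range(n_assess):  # user assesses up to n results per turn
            if cost >= budget:
                break
            cost += ASSESS_COST
            gain += per_result
        if not refined:  # Feedback-After: elicit feedback once results are shown
            cost += FEEDBACK_COST
            refined = True
    return gain


for first in (True, False):
    g = run_strategy(feedback_first=first, initial_gain=0.2,
                     improved_gain=0.5, n_assess=3, budget=10.0)
    label = "Feedback-First" if first else "Feedback-After"
    print(f"{label}: total gain = {g:.2f}")
```

Varying the relative costs, the quality of the initial query, the number of assessments per turn, or the budget shifts which strategy accumulates more gain, which is the kind of dependency the abstract reports; the specific numbers here are made up and carry no claim about the paper's results.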