Conversational agents trust calibration

Dubiel, Mateusz, Daronnat, Sylvain and Leiva, Luis A. (2022) Conversational agents trust calibration. In: Halvey, Martin, Foster, Mary Ellen, Dalton, Jeff, Munteanu, Cosmin and Trippas, Johanne (eds.) Proceedings of the 4th International Conference on Conversational User Interfaces (CUI 2022). ACM International Conference Proceeding Series. Association for Computing Machinery, GBR. ISBN 9781450397391 (https://doi.org/10.1145/3543829.3544518)

Text. Filename: Dubiel_etal_ACM_2022_Conversational_agents_trust.pdf
Final Published Version
License: Creative Commons Attribution-NonCommercial 4.0

Abstract

Previous work identified trust as one of the key requirements for the adoption and continued use of conversational agents (CAs). Given recent advances in natural language processing and deep learning, it is now possible to execute simple goal-oriented tasks using voice. As CAs start to provide a gateway for purchasing products and booking services online, the question of trust and its impact on users' reliance and agency becomes ever more pertinent. This paper collates trust-related literature and proposes four design suggestions that are illustrated through example conversations. Our goal is to encourage discussion on ethical design practices for developing CAs that are capable of employing trust-calibration techniques which should, when relevant, reduce the user's trust in the agent. We hope that our reflections, based on a synthesis of insights from the fields of human-agent interaction, explainable AI, and information retrieval, can serve as a reminder of the dangers of excessive trust in automation and contribute to more user-centred CA design.

ORCID iDs

Dubiel, Mateusz; Daronnat, Sylvain (ORCID: https://orcid.org/0000-0002-4779-9601); Leiva, Luis A.; Halvey, Martin; Foster, Mary Ellen; Dalton, Jeff; Munteanu, Cosmin; Trippas, Johanne