Can large language models be good companions? An LLM-based eyewear system with conversational common ground

Xu, Zhenyu and Xu, Hailin and Lu, Zhouyang and Zhao, Yingying and Zhu, Rui and Wang, Yujiang and Dong, Mingzhi and Chang, Yuhu and Lv, Qin and Dick, Robert and Yang, Fan and Lu, Tun and Gu, Ning and Shang, Li (2024) Can large language models be good companions? An LLM-based eyewear system with conversational common ground. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8 (2). 87. ISSN 2474-9567 (https://doi.org/10.1145/3659600)

Accepted Author Manuscript (PDF, 6MB): NonACM_Can_Large_Language_Models_Be_Good_Companions_An_LLM-Based_Eyewear_System_with_Conversational_Common_Ground.pdf

Abstract

Developing chatbots as personal companions has long been a goal of artificial intelligence researchers. Recent advances in Large Language Models (LLMs) have delivered a practical solution for endowing chatbots with anthropomorphic language capabilities. However, it takes more than LLMs to enable chatbots that can act as companions. Humans use their understanding of individual personalities to drive conversations. Chatbots also require this capability to enable human-like companionship. They should act based on personalized, real-time, and time-evolving knowledge of their users. We define such essential knowledge as the common ground between chatbots and their users, and we propose OS-1, a common-ground-aware dialogue system built around an LLM-based module, to enable chatbot companionship. Hosted by eyewear, OS-1 can sense the visual and audio signals the user receives and extract real-time contextual semantics. Those semantics are categorized and recorded to formulate historical contexts, from which the user's profile is distilled and evolves over time, i.e., OS-1 gradually learns about its user. OS-1 combines knowledge from real-time semantics, historical contexts, and user-specific profiles to produce a common-ground-aware prompt for the LLM module. The LLM's output is converted to audio and spoken to the wearer when appropriate. We conduct laboratory and in-field studies to assess OS-1's ability to build common ground between the chatbot and its user. The technical feasibility and capabilities of the system are also evaluated. Our results show that by utilizing personal context, OS-1 progressively develops a better understanding of its users. This enhances user satisfaction and potentially enables various personal service scenarios, such as emotional support and assistance.
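The abstract describes a prompt-composition pipeline that merges real-time semantics, historical context, and a distilled user profile before querying the LLM. The sketch below is purely illustrative and is not the authors' implementation; all names (ContextEntry, UserProfile, build_prompt) and the prompt layout are assumptions used to show how such a common-ground-aware prompt might be assembled.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

# Hypothetical data structures; OS-1's actual design is not described in this record.

@dataclass
class ContextEntry:
    """One piece of contextual semantics extracted from the eyewear's sensors."""
    timestamp: datetime
    modality: str   # e.g. "visual" or "audio"
    summary: str    # natural-language description of what was sensed

@dataclass
class UserProfile:
    """User-specific knowledge distilled from historical contexts over time."""
    traits: List[str] = field(default_factory=list)        # e.g. "enjoys hiking"
    recent_topics: List[str] = field(default_factory=list)

def build_prompt(realtime: List[ContextEntry],
                 history: List[ContextEntry],
                 profile: UserProfile,
                 user_utterance: str) -> str:
    """Combine real-time semantics, historical contexts, and the user profile
    into a single common-ground-aware prompt for an LLM."""
    realtime_block = "\n".join(f"- [{c.modality}] {c.summary}" for c in realtime)
    history_block = "\n".join(
        f"- {c.timestamp:%Y-%m-%d}: {c.summary}" for c in history[-5:]
    )
    profile_block = "; ".join(profile.traits) or "unknown"
    return (
        "You are a companion chatbot hosted on smart eyewear.\n"
        f"User profile: {profile_block}\n"
        f"Recent shared history:\n{history_block}\n"
        f"Current situation:\n{realtime_block}\n"
        f"User says: {user_utterance}\n"
        "Reply briefly, in a warm and personal tone."
    )
```

In this reading, the common ground is simply the concatenation of the three knowledge sources, refreshed on every turn; the actual system presumably uses richer selection and summarization than this sketch shows.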

ORCID iDs

Xu, Zhenyu, Xu, Hailin, Lu, Zhouyang, Zhao, Yingying ORCID: https://orcid.org/0000-0001-5902-1306, Zhu, Rui, Wang, Yujiang, Dong, Mingzhi, Chang, Yuhu, Lv, Qin, Dick, Robert, Yang, Fan, Lu, Tun, Gu, Ning and Shang, Li;