What song am I thinking of?

McGuire, Niall and Moshfeghi, Yashar (2024) What song am I thinking of? In: Nicosia, Giuseppe, Ojha, Varun, La Malfa, Emanuele, La Malfa, Gabriele, Pardalos, Panos M. and Umeton, Renato, eds., Machine Learning, Optimization, and Data Science. Lecture Notes in Computer Science. Springer-Verlag, pp. 418-432. ISBN 9783031539664. (https://doi.org/10.1007/978-3-031-53966-4_31)

Text. Filename: McGuire-Moshfeghi-Springer-2024-What-song-am-I-thinking-of.pdf
Accepted Author Manuscript
Restricted to Repository staff only until 15 February 2025.
License: Strathprints license 1.0


Abstract

Information Need (IN) is a complex phenomenon because of the difficulty searchers experience in realising an IN and formulating it as a query. This leads to a semantic gap between the IN and its representation (e.g., the query). Studies have investigated techniques to bridge this gap using neurophysiological features. Music Information Retrieval (MIR) is a sub-field of IR that could greatly benefit from bridging the gap between IN and query, as songs present an acute challenge for IR systems: a searcher may be able to recall or imagine a piece of music they wish to search for, yet be unable to remember the key pieces of information (title, artist, lyrics) needed to formulate a query that an IR system can process. However, if a MIR system could interpret the imagined song directly, it could allow the searcher to better satisfy their IN. In this study, we therefore investigate the possibility of detecting songs from Electroencephalogram (EEG) signals captured while participants “listen” to or “imagine” songs. We employ six machine learning models on the publicly available OpenMIIR dataset. In the model training phase, we devised several experiment scenarios to explore the capabilities of the models and to determine the potential effectiveness of perceived and imagined EEG song data in a MIR system. Our results show, firstly, that we can detect perceived songs from the recorded brain signals with an accuracy of 62.0% (SD 5.4%). Furthermore, we classified imagined songs with an accuracy of 60.8% (SD 13.2%). The additional experiment scenarios presented in this paper also yielded insightful results. Overall, the encouraging results of this study are a crucial step towards information retrieval systems capable of interpreting INs directly from the brain, which would help alleviate the semantic gap’s negative impact on information retrieval.
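To make the kind of pipeline the abstract describes concrete, the sketch below shows song classification from EEG epochs using MNE-Python and scikit-learn. It is a minimal illustration, not the authors' code: the file path, event-ID mapping, epoch length, band-pass range, and the logistic-regression classifier are all illustrative assumptions (the paper itself evaluates six models on OpenMIIR).

# Minimal sketch: classifying which song a participant heard or imagined
# from EEG epochs. Hypothetical file path and event-ID mapping; NOT the
# authors' pipeline from the paper.
import mne
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Load one subject's raw EEG (placeholder filename).
raw = mne.io.read_raw_fif("P01-raw.fif", preload=True)
raw.filter(0.5, 40.0)  # band-pass to remove slow drift and high-frequency noise

# One fixed-length epoch per song presentation. The event codes below are
# assumed; the real OpenMIIR annotations encode song identity and condition
# (perceived vs. imagined), and the recording must contain a stim channel
# for find_events to work.
events = mne.find_events(raw)
event_id = {f"song_{i}": i for i in range(1, 13)}  # 12 stimuli (assumed IDs)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=0.0, tmax=6.0, baseline=None, preload=True)

X = epochs.get_data().reshape(len(epochs), -1)  # flatten channels x time
y = epochs.events[:, 2]                         # song label per epoch

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"accuracy: {scores.mean():.1%} (SD {scores.std():.1%})")

Reporting the mean and standard deviation of accuracy across folds, as in the final line, matches the form in which the abstract quotes its results (e.g., 62.0% with SD 5.4% for perceived songs).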

ORCID iDs

McGuire, Niall; Moshfeghi, Yashar (ORCID: https://orcid.org/0000-0003-4186-1088); Nicosia, Giuseppe; Ojha, Varun; La Malfa, Emanuele; La Malfa, Gabriele; Pardalos, Panos M.; Umeton, Renato