What song am I thinking of?
McGuire, Niall and Moshfeghi, Yashar (2024) What song am I thinking of? In: Nicosia, Giuseppe, Ojha, Varun, La Malfa, Emanuele, La Malfa, Gabriele, Pardalos, Panos M. and Umeton, Renato, eds. Machine Learning, Optimization, and Data Science. Lecture Notes in Computer Science. Springer-Verlag, GBR, pp. 418-432. ISBN 9783031539664 (https://doi.org/10.1007/978-3-031-53966-4_31)
Accepted Author Manuscript. Restricted to Repository staff only until 15 February 2025. License: Strathprints license 1.0
Abstract
Information Need (IN) is a complex phenomenon due to the difficulty experienced when realising and formulating it into a query format. This leads to a semantic gap between the IN and its representation (e.g., the query). Studies have investigated techniques to bridge this gap by using neurophysiological features. Music Information Retrieval (MIR) is a sub-field of IR that could greatly benefit from bridging the gap between IN and query, as songs present an acute challenge for IR systems. A searcher may be able to recall or imagine a piece of music they wish to search for, yet be unable to remember the key pieces of information (title, artist, lyrics) needed to formulate a query that an IR system can process. However, if a MIR system could understand the imagined song, it could allow the searcher to satisfy their IN better. As such, in this study, we aim to investigate the possibility of detecting songs from Electroencephalogram (EEG) signals captured while participants “listen” to or “imagine” songs. We employ six machine learning models on the publicly available data set, OpenMIIR. In the model training phase, we devised several experiment scenarios to explore the capabilities of the models and to determine the potential effectiveness of Perceived and Imagined EEG song data in a MIR system. Our results show that, firstly, we can detect perceived songs using the recorded brain signals, with an accuracy of 62.0% (SD 5.4%). Furthermore, we classified imagined songs with an accuracy of 60.8% (SD 13.2%). Insightful results were also gained from several experiment scenarios presented within this paper. Overall, the encouraging results produced by this study are a crucial step towards information retrieval systems capable of interpreting INs from the brain, which can help alleviate the semantic gap’s negative impact on information retrieval.
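The pipeline the abstract describes (EEG epochs labelled by song, fed to machine learning classifiers) can be sketched in minimal form. This is an illustrative stand-in only: it uses synthetic epochs rather than the actual OpenMIIR recordings, and the spectral feature and logistic-regression classifier are assumptions for demonstration, not the paper's exact models.

```python
# Hedged sketch of song classification from EEG-like epochs.
# OpenMIIR is not loaded here; synthetic data stands in for real
# recordings, with each "song" given a distinct oscillation frequency.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_songs, epochs_per_song = 4, 30
n_channels, n_samples = 8, 256  # channels x time points per epoch

X_epochs, y = [], []
for song in range(n_songs):
    freq = 4 + 3 * song                  # song-specific frequency (Hz)
    t = np.arange(n_samples) / 128.0     # 2 s at a 128 Hz sampling rate
    for _ in range(epochs_per_song):
        signal = np.sin(2 * np.pi * freq * t)
        noise = rng.normal(scale=1.0, size=(n_channels, n_samples))
        X_epochs.append(signal + noise)  # same signal on every channel
        y.append(song)

# Simple feature: channel-averaged magnitude spectrum per epoch
# (a crude proxy for the band-power features often used with EEG).
X = np.array([np.abs(np.fft.rfft(e, axis=1)).mean(axis=0) for e in X_epochs])
y = np.array(y)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```

Because the synthetic songs have cleanly separated spectral peaks, accuracy here is far higher than the ~60% the paper reports on real imagined-song EEG; the point is only the shape of the pipeline (epochs, features, classifier, cross-validation), not the numbers.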
ORCID iDs
McGuire, Niall; Moshfeghi, Yashar ORCID: https://orcid.org/0000-0003-4186-1088; Nicosia, Giuseppe; Ojha, Varun; La Malfa, Emanuele; La Malfa, Gabriele; Pardalos, Panos M.; Umeton, Renato
Item type: Book Section
ID code: 87919
Dates: 15 February 2024 (Published); 10 July 2023 (Accepted)
Notes: Copyright © 2024 Springer-Verlag. This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at https://doi.org/10.1007/978-3-031-53966-4_31
Subjects: Bibliography. Library Science. Information Resources > Library Science. Information Science; Medicine > Internal medicine > Neuroscience. Biological psychiatry. Neuropsychiatry; Science > Mathematics > Electronic computers. Computer science
Department: Faculty of Science > Computer and Information Sciences
Depositing user: Pure Administrator
Date deposited: 26 Jan 2024 09:53
Last modified: 11 Nov 2024 15:34
URI: https://strathprints.strath.ac.uk/id/eprint/87919