Overview of the CLEF eHealth Evaluation Lab 2018

Suominen, Hanna and Kelly, Liadh and Goeuriot, Lorraine and Névéol, Aurélie and Ramadier, Lionel and Robert, Aude and Kanoulas, Evangelos and Spijker, Rene and Azzopardi, Leif and Li, Dan and Jimmy and Palotti, João and Zuccon, Guido; SanJuan, Eric and Murtagh, Fionn and Nie, Jian Yun and Soulier, Laure and Cappellato, Linda and Bellot, Patrice and Mothe, Josiane and Trabelsi, Chiraz and Ferro, Nicola, eds. (2018) Overview of the CLEF eHealth Evaluation Lab 2018. In: Experimental IR Meets Multilinguality, Multimodality, and Interaction - 9th International Conference of the CLEF Association, CLEF 2018, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Springer-Verlag, FRA, pp. 286–301. ISBN 9783319989310 (https://doi.org/10.1007/978-3-319-98932-7_26)

Abstract

In this paper, we provide an overview of the sixth annual edition of the CLEF eHealth evaluation lab. CLEF eHealth 2018 continues our evaluation resource building efforts to support patients, their next of kin, clinical staff, and health scientists in understanding, accessing, and authoring eHealth information in a multilingual setting. This year’s lab offered three tasks: Task 1 on multilingual information extraction, extending last year’s task on French and English corpora to French, Hungarian, and Italian; Task 2 on technologically assisted reviews in empirical medicine, building on last year’s pilot task in English; and Task 3 on Consumer Health Search (CHS) in mono- and multilingual settings, building on the 2013–2017 Information Retrieval tasks. In total, 28 teams took part in these tasks (14 in Task 1, 7 in Task 2, and 7 in Task 3). Herein, we describe the resources created for these tasks, outline the evaluation methodology adopted, and provide a brief summary of this year’s participants and the results obtained. As in previous years, the organizers have made the data and tools associated with the lab tasks available for future research and development.