
Treacle and smallpox: two tests for multi-criteria decision analysis models in health technology assessment

Morton, Alec (2017) Treacle and smallpox: two tests for multi-criteria decision analysis models in health technology assessment. Value in Health, 30 (3). pp. 512-515. ISSN 1524-4733

Accepted Author Manuscript (Morton-VH-2016-two-tests-for-multi-criteria-decision-analysis-models-in-health-technology-assessment)
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0


Multicriteria Decision Analysis (MCDA) is, rightly, receiving increasing attention in Health Technology Assessment. However, a distinguishing feature of the health domain is that technologies must actually improve health, and good performance on other criteria cannot compensate for failure to do so. We argue for two reasonable tests for MCDA models: the treacle test (can a winning intervention be completely ineffective?) and the smallpox test (can a winning intervention be for a disease which no one suffers from?). We explore why models might fail such tests (as the models of some existing published studies would do) and offer some suggestions as to how practice should be improved.
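To see how a model can fail the treacle test, consider a minimal sketch of an additive (weighted-sum) MCDA model. The criteria, weights, and scores below are invented for illustration only and are not taken from the paper; the point is simply that, in a purely compensatory model, an intervention with zero health benefit can still come out on top by scoring well on the remaining criteria.

```python
# Illustrative (hypothetical) additive MCDA model. All criteria, weights,
# and scores are invented; they are not from Morton (2017).

def mcda_score(scores, weights):
    """Additive MCDA value: weighted sum of normalised criterion scores."""
    return sum(w * s for w, s in zip(weights, scores))

# Criteria (hypothetical): [health benefit, cost savings, patient acceptability]
weights = [0.4, 0.3, 0.3]

treacle = [0.0, 0.9, 1.0]  # zero health benefit, but cheap and palatable
vaccine = [0.8, 0.2, 0.4]  # clinically effective, but costly and less pleasant

print(mcda_score(treacle, weights))  # 0.57
print(mcda_score(vaccine, weights))  # 0.50 -- the ineffective option wins
```

Because the weighted sum lets cost savings and acceptability fully compensate for zero effectiveness, the completely ineffective option ranks first, so this model fails the treacle test. Non-compensatory structures (for example, an effectiveness threshold applied before scoring) would block this outcome.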