
Automatically classifying test results by semi-supervised learning

Almaghairbe, Rafig and Roper, Marc (2016) Automatically classifying test results by semi-supervised learning. In: 2016 IEEE 27th International Symposium on Software Reliability Engineering (ISSRE). IEEE, [Piscataway, NJ], pp. 116-126. ISBN 978-1-4673-9003-3

Accepted Author Manuscript (PDF, 189kB)


A key component of software testing is deciding whether a test case has passed or failed: an expensive and error-prone manual activity. We present an approach that automatically classifies passing and failing executions using semi-supervised learning on dynamic execution data (test inputs/outputs and execution traces). A small proportion of the test data is labelled as passing or failing and used, in conjunction with the unlabelled data, to build a classifier that labels the remaining executions as passing or failing. A range of learning algorithms is investigated using several faulty versions of three systems, along with varying types of data (inputs/outputs alone, or in combination with execution traces) and different labelling strategies (both failing and passing tests, or passing tests alone). The results show that in many cases labelling just a small proportion of the test cases – as low as 10% – is sufficient to build a classifier able to correctly categorise the large majority of the remaining test cases. This has important practical potential: when checking the test results from a system, a developer need only examine a small proportion of them and use this information to train a learning algorithm to classify the remainder automatically.
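The core idea – label a small fraction of executions by hand, then let a semi-supervised learner propagate labels to the rest – can be sketched as follows. This is a minimal illustration, not the authors' actual setup: the synthetic two-dimensional "execution features", the nearest-centroid base learner, and the 10% labelling budget are all assumptions for the sketch, whereas the paper evaluates a range of semi-supervised algorithms on real test inputs/outputs and execution traces.

```python
import random
import math

random.seed(0)

def make_runs(n, centre):
    """Synthetic 2-D 'execution feature' vectors clustered around a centre."""
    return [(random.gauss(centre[0], 1.0), random.gauss(centre[1], 1.0))
            for _ in range(n)]

# Hypothetical data: 0 = passing execution, 1 = failing execution.
X = make_runs(100, (0.0, 0.0)) + make_runs(100, (6.0, 6.0))
y_true = [0] * 100 + [1] * 100

# Label only 10% of the runs (10 from each class), mimicking a developer
# who manually checks a small sample of test results.
labelled = random.sample(range(100), 10) + random.sample(range(100, 200), 10)
y = {i: y_true[i] for i in labelled}

def centroid(points):
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Self-training: repeatedly fit a nearest-centroid classifier on the
# currently labelled runs, then absorb the most confident predictions
# for unlabelled runs into the labelled set.
for _ in range(10):
    c_pass = centroid([X[i] for i, lab in y.items() if lab == 0])
    c_fail = centroid([X[i] for i, lab in y.items() if lab == 1])
    unlabelled = [i for i in range(len(X)) if i not in y]
    if not unlabelled:
        break
    # Confidence = margin between the distances to the two centroids.
    by_confidence = sorted(
        unlabelled,
        key=lambda i: -abs(dist(X[i], c_pass) - dist(X[i], c_fail)))
    # Pseudo-label the most confident half each round.
    for i in by_confidence[: max(1, len(by_confidence) // 2)]:
        y[i] = 0 if dist(X[i], c_pass) < dist(X[i], c_fail) else 1

accuracy = sum(y[i] == y_true[i] for i in range(len(X))) / len(X)
print(f"classified {len(X)} runs, accuracy {accuracy:.2f}")
```

Because the two synthetic clusters are well separated, the classifier recovers almost all of the unlabelled results correctly from just 20 hand-labelled ones; on real execution data the attainable accuracy depends on how well the chosen features distinguish passing from failing runs.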