
The Strathprints institutional repository is a digital archive of the University of Strathclyde's Open Access research outputs. Strathprints provides access to thousands of Open Access research papers by University of Strathclyde researchers, including work from the Department of Computer & Information Sciences on mathematically structured programming, similarity and metric search, computer security, software systems, combinatorics and digital health.

The Department also includes the iSchool Research Group, which conducts leading research into socio-technical phenomena and topics such as information retrieval and information-seeking behaviour.


Automatically classifying test results by semi-supervised learning

Almaghairbe, Rafig and Roper, Marc (2016) Automatically classifying test results by semi-supervised learning. In: 2016 IEEE 27th International Symposium on Software Reliability Engineering (ISSRE). IEEE, [Piscataway, NJ], pp. 116-126. ISBN 978-1-4673-9003-3

Text: Almaghairbe_Roper_ISSRE2016_Automatically_classifying_test_results_by_semi_supervised_learning.pdf - Accepted Author Manuscript (189kB)

Abstract

A key component of software testing is deciding whether a test case has passed or failed: an expensive and error-prone manual activity. We present an approach to automatically classify passing and failing executions using semi-supervised learning on dynamic execution data (test inputs/outputs and execution traces). A small proportion of the test data is labelled as passing or failing and used in conjunction with the unlabelled data to build a classifier that labels the remaining outputs as passing or failing tests. A range of learning algorithms is investigated using several faulty versions of three systems, along with varying types of data (inputs/outputs alone, or in combination with execution traces) and different labelling strategies (both failing and passing tests, or passing tests alone). The results show that in many cases labelling just a small proportion of the test cases, as low as 10%, is sufficient to build a classifier able to correctly categorise the large majority of the remaining test cases. This has important practical potential: when checking the test results from a system, a developer need only examine a small proportion of them and use this information to train a learning algorithm to automatically classify the remainder.
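
As a rough illustration of the workflow the abstract describes, the sketch below applies an off-the-shelf semi-supervised learner (scikit-learn's LabelSpreading, which is not necessarily one of the algorithms the paper evaluates) to synthetic data standing in for per-execution features such as test inputs/outputs or trace summaries. Only 10% of the executions are labelled, mirroring the proportion quoted above; everything else about the data and parameters is an assumption made for demonstration.

```python
# Hedged sketch only: synthetic features stand in for the dynamic
# execution data (inputs/outputs, traces) used in the paper, and
# LabelSpreading is one possible semi-supervised learner, not
# necessarily one the authors evaluated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

# Each row is one test execution; label 0 = passing, 1 = failing.
# The 80/20 class weighting assumes failures are the minority.
X, y = make_classification(n_samples=1000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)

# Label only ~10% of the executions; scikit-learn marks
# unlabelled points with -1.
labels = np.full_like(y, -1)
labelled = rng.choice(len(y), size=len(y) // 10, replace=False)
labels[labelled] = y[labelled]

# Fit on the mixed labelled/unlabelled data, then read off the
# transduced labels for the executions never examined by hand.
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X, labels)

mask = labels == -1
print("accuracy on unlabelled executions:",
      accuracy_score(y[mask], model.transduction_[mask]))
```

In the paper's setting the feature vectors would be derived from real test runs, and the developer would hand-label the 10% sample; the classifier's predictions on the remaining 90% then stand in for manual pass/fail checking.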