Building test oracles by clustering failures

Almaghairbe, Rafig and Roper, Marc (2015) Building test oracles by clustering failures. In: 2015 IEEE/ACM 10th International Workshop on Automation of Software Test (AST). IEEE, pp. 3-7.

Almaghairbe_Roper_ICSE2015_building_test_oracles_clustering_failures.pdf - Accepted Author Manuscript - Download (137kB)

Abstract

In recent years, software testing research has produced notable advances in the area of automated test data generation, but the corresponding oracle problem (a mechanism for determining the (in)correctness of an executed test case) remains a major challenge. In this paper, we present a preliminary study which investigates the application of anomaly detection techniques (based on clustering) to automatically build an oracle using a system's input/output pairs, based on the hypothesis that failures will tend to group into small clusters. The fault detection capability of the approach is evaluated on two systems and the findings reveal that failing outputs do indeed tend to congregate in small clusters, suggesting that the approach is feasible and has the potential to reduce by an order of magnitude the number of outputs that would need to be manually examined following a test run.
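
The abstract does not specify the encoding of input/output pairs or the clustering configuration, so the following Python sketch only illustrates the general idea under assumed choices: numeric feature vectors for each test execution, agglomerative clustering, and a fixed manual-inspection budget. All names and parameters here are hypothetical, not the paper's implementation.

```python
# Illustrative sketch of a clustering-based test oracle: cluster test
# executions by their (encoded) input/output pairs and flag members of the
# smallest clusters as likely failures for manual inspection. The feature
# encoding and algorithm choice are assumptions, not the paper's exact setup.
from collections import Counter

import numpy as np
from sklearn.cluster import AgglomerativeClustering


def flag_suspicious_outputs(features: np.ndarray, n_clusters: int, budget: int):
    """Return indices of test executions that fall in the smallest clusters.

    features   -- one row per test execution (encoded input/output pair)
    n_clusters -- number of clusters to form
    budget     -- maximum number of executions to flag for manual checking
    """
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(features)
    sizes = Counter(labels)
    # Smallest clusters first: the hypothesis is that failures congregate there.
    order = sorted(range(len(labels)), key=lambda i: sizes[labels[i]])
    return order[:budget]


if __name__ == "__main__":
    # Synthetic stand-in data: a majority of "passing" executions plus a few
    # outlying "failing" ones that should land in small clusters.
    rng = np.random.default_rng(0)
    passing = rng.normal(0.0, 1.0, size=(95, 4))
    failing = rng.normal(6.0, 0.5, size=(5, 4))
    data = np.vstack([passing, failing])
    # Only the flagged executions would be examined by hand, rather than
    # the entire test run.
    print(flag_suspicious_outputs(data, n_clusters=8, budget=10))
```

Under this framing, the "order of magnitude" reduction claimed in the abstract corresponds to the inspection budget: a tester examines only the handful of executions in the smallest clusters instead of every output produced by the test run.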