Industry relevance: The software industry is changing. Development processes are becoming more agile in the hope of reducing time to market. In this rush, software testing is often neglected. The ongoing question remains: how can we test more while doing less, saving both money and time? One possible solution is software fault prediction. We can use knowledge from the development process and from previous versions of the software to predict faults in future releases. By predicting which parts of the software are fault prone, we can allocate more resources and time to them. Although these methods are not perfect, it makes sense to use them, as they make a valuable contribution to software testing.
Research description: Software fault prediction has two basic input components: a model and data. We began our research by reviewing studies that evaluate fault prediction models (statistical, neural networks, etc.). We found that the differences between models are small and in many cases insignificant. A further review of studies evaluating different software metrics led us to conclude that fault-proneness prediction accuracy depends more on the data than on the model.
We are conducting an evaluation study of code and process metrics used for software fault prediction. Valuable information is gathered in our CMS repository, which can serve as input to our software fault prediction model. We believe static code metrics are good indicators of fault proneness, but that they can perform better when combined with process metrics.
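As a minimal sketch of how a model could combine both metric families, consider a toy logistic regression in pure Python. All feature choices (LOC, cyclomatic complexity, revision count, author count) and all numbers below are illustrative assumptions, not our actual repository data or our evaluated model; the point is only that code and process metrics can be fed jointly into one classifier.

```python
import math

# Hypothetical module-level training data (illustrative only).
# Each row: (LOC, cyclomatic complexity, revisions, distinct authors, faulty?)
# The first two are static code metrics; the next two are process metrics.
ROWS = [
    (120,  4,  2, 1, 0),
    (200,  6,  3, 1, 0),
    (150,  5,  2, 2, 0),
    (900, 30, 20, 6, 1),
    (750, 22, 15, 4, 1),
    (980, 35, 25, 7, 1),
]

def scale(rows):
    """Min-max scale each feature column to [0, 1] so code and
    process metrics share one numeric range."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    scaled = [[(v - l) / (h - l) for v, l, h in zip(r, lo, hi)] for r in rows]
    return scaled, lo, hi

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, epochs=3000, lr=0.5):
    """Logistic regression fitted with stochastic gradient descent."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

features = [r[:4] for r in ROWS]
labels = [r[4] for r in ROWS]
scaled, lo, hi = scale(features)
w, b = train(scaled, labels)

def fault_probability(loc, cc, revs, authors):
    """Predicted probability that a module is fault prone."""
    x = [(v - l) / (h - l) for v, l, h in zip((loc, cc, revs, authors), lo, hi)]
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

print(fault_probability(100, 3, 2, 1))    # small, rarely changed module
print(fault_probability(950, 30, 22, 6))  # large, heavily churned module
```

In practice the process metrics (revisions, authors) would be mined from the version-control history, while the code metrics come from static analysis of each release; the classifier itself is secondary, consistent with our observation that the data matters more than the model.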