Contrastive and counterfactual explanations for test case prioritization: Ideas and challenges





Published in

Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023)

Creative Commons License


As machine learning (ML) is increasingly used in software engineering (SE), explainable artificial intelligence (XAI) is crucial for understanding choices made by opaque, "black-box" models. Test case prioritization (TCP) is an important SE problem that can benefit from ML. In this paper, we explore two approaches for generating explanations in ML-based TCP, contrastive and counterfactual XAI, and present application scenarios where they can enhance testers' comprehension of model outputs. Specifically, we use DiCE, a method for generating counterfactual explanations, as an illustrative example and conclude by discussing open issues.
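To make the counterfactual idea concrete in the TCP setting: a counterfactual explanation tells the tester the smallest change to a test case's features that would flip the model's priority decision. The sketch below is a hypothetical illustration only — the toy features (`recent_failures`, `changed_lines_covered`), the threshold model, and the greedy search are invented for exposition and are not the paper's actual setup, which relies on the DiCE library.

```python
# Hypothetical sketch of a counterfactual explanation for ML-based test
# case prioritization (TCP). Features, model, and search are illustrative
# assumptions, not the paper's method (which uses the DiCE library).

def priority_model(recent_failures, changed_lines_covered):
    """Toy priority model: a test case is high priority (1) when it has
    failed recently or covers many lines changed in the last commit."""
    return 1 if (recent_failures >= 2 or changed_lines_covered >= 50) else 0

def counterfactual(recent_failures, changed_lines_covered, desired=1):
    """Greedy search for the smallest single-feature increase that flips
    the model's output to `desired`."""
    if priority_model(recent_failures, changed_lines_covered) == desired:
        return {}  # already the desired class; nothing to change
    candidates = []
    # Try increasing each feature independently until the prediction flips.
    for rf in range(recent_failures, recent_failures + 10):
        if priority_model(rf, changed_lines_covered) == desired:
            candidates.append({"recent_failures": rf})
            break
    for cl in range(changed_lines_covered, changed_lines_covered + 200):
        if priority_model(recent_failures, cl) == desired:
            candidates.append({"changed_lines_covered": cl})
            break
    # Prefer the candidate with the smallest absolute feature delta.
    def cost(c):
        (name, value), = c.items()
        base = recent_failures if name == "recent_failures" else changed_lines_covered
        return abs(value - base)
    return min(candidates, key=cost)

# A test case with 0 recent failures covering 30 changed lines is ranked
# low priority; the counterfactual tells the tester what would change that.
print(counterfactual(0, 30))  # → {'recent_failures': 2}
```

In this toy example, the explanation "it would be prioritized if it had failed at least twice recently" is exactly the kind of actionable, contrastive answer ("why not high priority?") that the paper argues counterfactual XAI can provide to testers.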


About the author: Ramírez, Aurora

Keywords

Test Case Prioritization, Automated Testing, Machine Learning, Explainable Artificial Intelligence