Search results for Testing
Model Transformation Testing and Debugging: A Survey
Model transformations are the key technique in Model-Driven Engineering (MDE) to manipulate and construct models. As a consequence, the correctness of software systems built with MDE approaches relies mainly on the correctness of model transformations, and thus, detecting and locating bugs in model transformations have been popular research topics in recent years. This surge of work has led to a vast literature on model transformation testing and debugging, which makes it challenging to gain a comprehensive view of the current state of the art. This is an obstacle both for newcomers to the topic and for MDE practitioners who wish to apply these approaches. This paper presents a survey on testing and debugging model transformations based on the analysis of 140 papers on these topics. We explore the trends, advances, and evolution over the years, bringing together previously disparate streams of work and providing a comprehensive view of these thriving areas. In addition, we present a conceptual framework to understand and categorise the different proposals. Finally, we identify several open research challenges and propose specific action points for the model transformation community.
Authors: Javier Troya / Sergio Segura / Lola Burgueño / Manuel Wimmer
Keywords: Debugging - Model Transformation - survey - Testing
Suggesting Model Transformation Repairs for Rule-based Languages using a Contract-based Testing Approach
This work presents MoTES (Model Transformation TEst Specification), an approach that uses contract-based model testing techniques to assist the engineers in charge of evolving and repairing model transformations. MoTES uses contracts to specify the expected behaviour of the model transformation under test. These contracts act as oracles over pairs of elements from the input and output models produced when running the transformation under test on concrete input models. By processing the output model of the test oracle, precision and recall metrics are computed for each output pattern. The results of these metrics are categorised to simplify their interpretation: MoTES defines 8 distinct cases. Moreover, if traceability information of the transformation under test is available for each output pattern, each related transformation rule can be classified according to its impact on the metrics, e.g., the number of true positives generated. MoTES defines 37 cases for this classification, each of them associated with an abstract rule-repair action, such as relaxing the input filter of a rule. This work also presents a thorough evaluation based on the analysis of three different case studies. As main results, we conclude that our approach is able to (1) detect transformation errors, (2) locate the faulty rule, and (3) suggest appropriate repair actions, significantly reducing the effort of test engineers.
Authors: Roberto Rodriguez-Echeverria / Fernando Macías / Adrian Rutle / Jose Maria Conejero
Keywords: Adaptations - Evolution - Fault Localization - Model Transformation - Repairing - Testing - Testing Oracle - Verification
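For illustration, a minimal sketch of how precision and recall could be computed per output pattern, from the element pairs an oracle expects and the pairs the transformation under test actually produces; the data structures and names are assumptions for the example, not MoTES's actual API.

    # Minimal sketch: precision and recall for one output pattern, computed from
    # the pairs (input element, output element) expected by the contract (oracle)
    # and the pairs actually produced by the transformation under test.
    def pattern_metrics(expected_pairs: set, produced_pairs: set) -> tuple:
        true_positives = expected_pairs & produced_pairs      # correct outputs
        precision = len(true_positives) / len(produced_pairs) if produced_pairs else 1.0
        recall = len(true_positives) / len(expected_pairs) if expected_pairs else 1.0
        return precision, recall

    # Hypothetical example for an output pattern "Class -> Table".
    expected = {("Person", "PersonTable"), ("Order", "OrderTable")}
    produced = {("Person", "PersonTable"), ("Invoice", "InvoiceTable")}
    print(pattern_metrics(expected, produced))  # (0.5, 0.5)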
User-driven diverse scenario exploration in model finders
Model finders can build instances of declarative specifications that satisfy a set of correctness constraints. Some model finders ensure some degree of diversity among the instances they compute. Nevertheless, each model finder uses its own definition of diversity, which may or may not match the designer's intent. In this paper, we propose a procedure that enables designers to capture the notion of diversity they are looking for. Using a simple domain-specific language, they can specify what elements in the specification are relevant when comparing the differences between two instances. This information can then be used to make any model finder diversity-aware while using it as a black box. As a proof of concept, this approach has been implemented on top of the Alloy Analyzer.
Authors: Robert Clarisó / Jordi Cabot
Keywords: Clustering - diversity - graph kernels - Model-Driven Engineering - Testing - verification and validation
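As a rough illustration of the underlying idea, the sketch below projects each instance onto the elements the designer declared relevant and keeps only instances that differ enough from the ones already accepted; the projection, distance measure, and threshold are assumptions for the example, not the paper's DSL or exact procedure.

    # Illustrative sketch (not the paper's exact algorithm): filter the instances
    # returned by any black-box model finder so that accepted instances differ in
    # the designer-selected relevant elements.
    def project(instance: dict, relevant_keys: set) -> frozenset:
        # Keep only the parts of the instance the designer declared relevant.
        return frozenset((k, v) for k, v in instance.items() if k in relevant_keys)

    def distance(a: frozenset, b: frozenset) -> int:
        # Symmetric difference of the projected views as a simple diversity measure.
        return len(a ^ b)

    def diverse_filter(instances, relevant_keys, min_distance=2):
        accepted = []
        for inst in instances:                      # instances come from any model finder
            view = project(inst, relevant_keys)
            if all(distance(view, project(a, relevant_keys)) >= min_distance for a in accepted):
                accepted.append(inst)
        return accepted

    # Two instances differing only in an irrelevant element are not considered diverse.
    instances = [{"id": 1, "kind": "A", "noise": 7}, {"id": 2, "kind": "A", "noise": 9}]
    print(diverse_filter(instances, relevant_keys={"kind"}, min_distance=1))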
AMADEUS: Towards the AutoMAteD secUrity teSting
The proper configuration of systems has become a fundamental factor in avoiding cybersecurity risks. The analysis of cybersecurity vulnerabilities is therefore a mandatory task, but the number of vulnerabilities and system configurations that can be threatened is extremely high. In this paper, we propose a method that uses software product line techniques to analyse the vulnerable configurations of systems. We propose a solution, entitled AMADEUS, to enable and support the automatic analysis and testing of cybersecurity vulnerabilities of configuration systems based on feature models. AMADEUS is a holistic solution that is able to automate the analysis of the specific infrastructures in organisations, the existing vulnerabilities, and the possible configurations extracted from the vulnerability repositories. Using this information, AMADEUS automatically generates feature models, which are then used for reasoning and knowledge extraction, such as determining attack vectors with certain features. AMADEUS has been validated by demonstrating the capacity of feature models to support a threat scenario involving a wide variety of vulnerabilities extracted from a real repository. Furthermore, we open the door to new applications where software product line engineering and cybersecurity can empower each other.
Authors: Angel Jesus Varela Vaca / Rafael M. Gasca / José Antonio Carmona-Fombella / Maria Teresa Gómez López
Keywords: cybersecurity - feature model - pentesting - reasoning - Testing - vulnerabilities - vulnerable configuration
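A much-simplified sketch of the kind of reasoning such feature models enable, listing the valid configurations that expose a given vulnerable feature; the features and constraints below are invented for the example and do not come from AMADEUS or any real vulnerability repository.

    # Illustrative sketch: enumerate valid configurations of a tiny feature model
    # that include a given vulnerable feature (a simple notion of attack vector).
    from itertools import combinations

    FEATURES = ["Apache", "TLSv1.0", "TLSv1.3", "mod_ssl"]
    REQUIRES = [("mod_ssl", "Apache")]          # mod_ssl requires Apache
    EXCLUDES = [("TLSv1.0", "TLSv1.3")]         # mutually exclusive protocol versions

    def is_valid(config: set) -> bool:
        ok_requires = all(b in config for a, b in REQUIRES if a in config)
        ok_excludes = all(not (a in config and b in config) for a, b in EXCLUDES)
        return ok_requires and ok_excludes

    def attack_vectors(vulnerable_feature: str):
        # Yield every valid configuration that exposes the vulnerable feature.
        for r in range(1, len(FEATURES) + 1):
            for combo in combinations(FEATURES, r):
                config = set(combo)
                if vulnerable_feature in config and is_valid(config):
                    yield sorted(config)

    print(list(attack_vectors("TLSv1.0")))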
Many-Objective Test Suite Generation for Software Product Lines
A Software Product Line (SPL) is a set of products built from a number of features, the set of valid products being defined by a feature model. Typically, it does not make sense to test all products defined by an SPL and one instead chooses a set of products to test (test selection) and, ideally, derives a good order in which to test them (test prioritisation). Since one cannot know in advance which products will reveal faults, test selection and prioritisation are normally based on objective functions that are known to relate to likely effectiveness or cost. This article introduces a new technique, the grid-based evolution strategy (GrES), which considers several objective functions that assess a selection or prioritisation and aims to optimise on all of these. The problem is thus a many-objective optimisation problem. We use a new approach, in which all of the objective functions are considered but one (pairwise coverage) is seen as the most important. We also derive a novel evolution strategy based on domain knowledge. The results of the evaluation, on randomly generated and realistic feature models, were promising, with GrES outperforming previously proposed techniques and a range of many-objective optimisation algorithms.
Authors: Rob Hierons / Miqing Li / Xiaohui Liu / José Antonio Parejo Maestre / Sergio Segura Rueda / Xin Yao
Keywords: Evolutionary algorithms - many-objectives optimization - Search-Based Software Engineering - software product lines - Testing
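Since pairwise coverage is singled out as the most important objective, the sketch below shows one straightforward way to compute it for a set of selected products; it ignores the validity of feature-value pairs with respect to the feature model and uses invented feature names.

    # Illustrative sketch: pairwise (2-wise) coverage of a product selection, i.e.
    # the fraction of feature-value pairs exercised by the selected products.
    from itertools import combinations

    def pairwise_coverage(products, features):
        covered = set()
        for product in products:
            for f1, f2 in combinations(features, 2):
                covered.add((f1, product[f1], f2, product[f2]))
        total = len(list(combinations(features, 2))) * 4   # 4 value combinations per feature pair
        return len(covered) / total

    products = [
        {"A": True,  "B": False, "C": True},
        {"A": False, "B": True,  "C": True},
    ]
    print(pairwise_coverage(products, ["A", "B", "C"]))  # 0.5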
Repairing user interface tests in Android as a search problem
User interface tests are a very popular technique thanks to their ability to validate the behaviour of the application as the user would experience it, and to the ease with which test cases can be generated. However, one of the most important limitations of this kind of testing is its fragility in the face of changes to the user interface itself, which usually occur during the development of the system. In this article we formulate the repair of these tests after changes to the interface or functionality of the application as a search problem. In addition, we propose a heuristic algorithm based on GRASP to solve it. This proposal has been implemented and validated in the specific domain of mobile applications for Android devices. The results obtained demonstrate its applicability in several case studies covering changes of varying magnitude.
Authors: Adrián Cantón Fernandez / José Antonio Parejo Maestre / Sergio Segura / Antonio Ruiz-Cortés
Keywords: Android - GRASP - SBSE - test case repair - Testing
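For readers unfamiliar with GRASP, the following skeleton shows the general shape of such an algorithm (greedy randomised construction followed by local search); the repair actions, fitness function, and neighbourhood are placeholders rather than the paper's concrete encoding of UI-test repairs.

    # Illustrative GRASP skeleton: iterated greedy randomised construction plus
    # local search, keeping the best repair found. Callers supply the candidate
    # edits, a fitness function, and a neighbourhood generator.
    import random

    def grasp(candidate_edits, fitness, neighbours, iterations=50, rcl_size=3):
        best = None
        for _ in range(iterations):
            # Construction: repeatedly pick one of the best few edits at random
            # until no remaining edit improves the partial repair.
            solution, remaining = [], list(candidate_edits)
            while remaining:
                remaining.sort(key=lambda e: fitness(solution + [e]), reverse=True)
                choice = random.choice(remaining[:rcl_size])   # restricted candidate list
                if solution and fitness(solution + [choice]) <= fitness(solution):
                    break
                solution.append(choice)
                remaining.remove(choice)
            # Local search: move to a better neighbouring repair while one exists.
            improved = True
            while improved:
                improved = False
                for n in neighbours(solution):
                    if fitness(n) > fitness(solution):
                        solution, improved = n, True
                        break
            if best is None or fitness(solution) > fitness(best):
                best = solution
        return best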
Towards the Definition of Test Coverage Criteria for RESTful Web APIs
Web APIs following the REST architectural style (so-called RESTful Web APIs) have become the de-facto standard for software integration. As RESTful APIs gain momentum, so does their testing. However, there is a lack of mechanisms to assess the adequacy of testing approaches in this context, which makes it difficult to measure and compare the effectiveness of different testing techniques. In this work-in-progress paper, we take a step towards a framework for the assessment and comparison of testing approaches for RESTful Web APIs. To that end, we propose a preliminary catalogue of test coverage criteria. These criteria measure the adequacy of test suites based on the degree to which they exercise the different input and output elements of RESTful Web services. To the best of our knowledge, this is the first attempt to measure the adequacy of testing approaches for RESTful Web APIs.
Authors: Alberto Martin-Lopez / Sergio Segura / Antonio Ruiz-Cortés
Keywords: coverage criteria - REST - Testing - web services
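As an example of the kind of criterion such a catalogue may include, the sketch below computes a simple parameter-coverage ratio from a simplified API description and a set of test requests; the data structures and endpoint names are illustrative assumptions, not the paper's actual catalogue.

    # Illustrative sketch of one input-coverage criterion: parameter coverage, the
    # fraction of documented (operation, parameter) pairs exercised by the suite.
    def parameter_coverage(spec, requests):
        documented = {(op, p) for op, params in spec.items() for p in params}
        exercised = {(r["operation"], p) for r in requests for p in r["params"]}
        return len(documented & exercised) / len(documented) if documented else 1.0

    spec = {"GET /pets": ["limit", "tag"], "POST /pets": ["name", "tag"]}
    requests = [
        {"operation": "GET /pets", "params": ["limit"]},
        {"operation": "POST /pets", "params": ["name", "tag"]},
    ]
    print(parameter_coverage(spec, requests))  # 0.75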
Spectrum-Based Fault Localization in Model Transformations
Model transformations play a cornerstone role in Model-Driven Engineering as they provide the essential mechanisms for manipulating and transforming models. The correctness of software built using MDE techniques greatly relies on the correctness of model transformations. However, debugging them is challenging and error-prone, and the situation gets more critical as the size and complexity of model transformations grow, to the point where manual debugging is no longer possible. Spectrum-Based Fault Localization (SBFL) uses the results of test cases and their corresponding code coverage information to estimate the likelihood of each program component (e.g., statements) being faulty. In this paper we present an approach to apply SBFL for locating the faulty rules in model transformations. We evaluate the feasibility and accuracy of the approach by comparing the effectiveness of 18 different state-of-the-art SBFL techniques at locating faults in model transformations. The evaluation results revealed that the best techniques, namely Kulcynski2, Mountford, Ochiai and Zoltar, lead the debugger to inspect a maximum of three rules in order to locate the bug in around 74% of the cases. Furthermore, we compare our approach with a static approach for fault localization in model transformations, observing a clear superiority of the proposed SBFL-based method.
Authors: Javier Troya / Sergio Segura / José Antonio Parejo Maestre / Antonio Ruiz-Cortés
Keywords: Debugging - Fault Localization - Model Transformation - Spectrum-based - Testing
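For reference, the Ochiai score of a component is e_f / sqrt(total_failed * (e_f + e_p)), where e_f and e_p are the numbers of failing and passing tests that exercise it. A minimal sketch applying this formula to transformation rules follows; the coverage data and rule names are invented for the example.

    # Minimal SBFL sketch with the Ochiai formula. "coverage" maps each test case
    # to the set of rules it exercised; "failed" is the set of failing test cases.
    from math import sqrt

    def ochiai(coverage: dict, failed: set) -> dict:
        total_failed = len(failed)
        all_rules = {r for covered in coverage.values() for r in covered}
        scores = {}
        for rule in all_rules:
            ef = sum(1 for t in failed if rule in coverage[t])                        # failing tests covering the rule
            ep = sum(1 for t in coverage if t not in failed and rule in coverage[t])  # passing tests covering the rule
            denom = sqrt(total_failed * (ef + ep))
            scores[rule] = ef / denom if denom else 0.0
        return scores

    coverage = {"t1": {"Class2Table", "Attr2Column"}, "t2": {"Class2Table"}, "t3": {"Attr2Column"}}
    failed = {"t3"}
    # The rule covered only by the failing test ranks highest.
    print(sorted(ochiai(coverage, failed).items(), key=lambda kv: -kv[1]))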
Automatic Testing of Program Slicers
Program slicing is a technique to extract the part of a program (the slice) that influences or is influenced by a set of variables at a given point (the slicing criterion). Computing minimal slices is undecidable in the general case, and obtaining the minimal slice of a given program is normally computationally prohibitive even for very small programs. Therefore, no matter what program slicer we use, in general, we cannot be sure that our slices are minimal. This is probably the fundamental reason why no benchmark collection of minimal program slices exists. In this work, we present a method to automatically produce quasi-minimal slices. Using our method, we have produced a suite of quasi-minimal slices for Erlang, which we have later manually proved to be minimal. We explain the process of constructing the suite, the methodology and tools that were used, and the results obtained. The suite comes with a collection of Erlang benchmarks together with different slicing criteria and the associated minimal slices.
Authors: Sergio Pérez / Josep Sílva / Salvador Tamarit
Keywords: Erlang - Program analysis - Program Slicing - Testing
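One plausible way to use such a suite when testing a slicer is to compare its output against the reference minimal slice, as sketched below; representing slices as sets of program line numbers is an assumption for illustration, not the suite's actual format or tooling.

    # Illustrative sketch: compare a slicer's output against a benchmark's minimal
    # slice. A sound slice must contain the minimal slice; any extra lines measure
    # imprecision with respect to it.
    def compare_slices(produced: set, minimal: set) -> dict:
        return {
            "sound": minimal <= produced,
            "extra_lines": sorted(produced - minimal),
            "missing_lines": sorted(minimal - produced),
        }

    minimal = {3, 5, 8}
    produced = {3, 5, 8, 12}
    print(compare_slices(produced, minimal))
    # {'sound': True, 'extra_lines': [12], 'missing_lines': []}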
SMT-based Test-Case Generation with Complex Preconditions
We present a system which can automatically generate an exhaustive set of black-box test cases, up to a given size, for programs under test requiring complex preconditions. The key of the approach is to translate a formal precondition into a set of constraints belonging to the decidable logics of SMT solvers. When the constraints are satisfiable, the models returned by the solver automatically synthesize the test cases. We also show how to use SMT solvers to automatically check the validity of the test-case results, and to complement the black-box cases with white-box ones. Finally, we use the solver to perform what we call automatic partial verification of the program. In summary, we propose a system in which exhaustive black-box and white-box testing, result checking, and partial verification can all be done automatically. The only extra effort required from programmers is to write formal specifications.
Authors: Ricardo Peña / Jaime Sánchez-Hernández / Miguel Garrido / Javier Sagredo
Keywords: formal specification - SMT solvers - test-case generation - Testing
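A minimal sketch of the core idea using the Z3 SMT solver through its Python bindings (z3-solver package): encode a precondition as constraints and enumerate satisfying models, each of which yields a test case. The precondition shown is an invented example, not one from the paper.

    # Sketch: exhaustively enumerate test cases satisfying a bounded precondition.
    from z3 import Ints, Solver, And, Or, sat

    x, y = Ints("x y")
    precondition = And(0 <= x, x <= y, y <= 3)   # precondition encoded as SMT constraints

    def generate_cases(constraint, variables, limit=20):
        solver = Solver()
        solver.add(constraint)
        cases = []
        while len(cases) < limit and solver.check() == sat:
            model = solver.model()
            cases.append({str(v): model[v].as_long() for v in variables})
            # Block the current model so the next check yields a different test case.
            solver.add(Or([v != model[v] for v in variables]))
        return cases

    print(generate_cases(precondition, [x, y]))   # all cases up to the bound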