Abstract:
A generic LSTM neural network architecture to infer heterogeneous model transformations

Date

2023-09-12

Publisher

Sistedes

Published in

Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023)

Creative Commons License

Abstract

Models capture relevant properties of systems. During the models’ life-cycle, they are subjected to manipulations with different goals, such as managing software evolution, performing analysis, increasing developers’ productivity, and reducing human errors. Typically, these manipulation operations are implemented as model transformations. Examples of these transformations are (i) model-to-model transformations for model evolution, model refactoring, model merging, model migration, model refinement, etc., (ii) model-to-text transformations for code generation, and (iii) text-to-model ones for reverse engineering. These operations are usually implemented manually, using general-purpose languages such as Java, or domain-specific languages (DSLs) such as ATL or Acceleo. Even when using such DSLs, transformations are still time-consuming and error-prone. We propose using advances in artificial intelligence techniques to learn these manipulation operations on models and automate the process, freeing the developer from building specific pieces of code. In particular, our proposal is a generic neural network architecture suitable for heterogeneous model transformations. Our architecture comprises an encoder–decoder long short-term memory (LSTM) network with an attention mechanism. It is trained on pairs of input–output examples and, once trained, automatically produces the expected output for a given input. We present the architecture and illustrate the feasibility and potential of our approach through its application to two main operations on models: model-to-model transformations and code generation. The results confirm that neural networks are able to faithfully learn how to perform these tasks as long as enough data are provided and no contradictory examples are given.
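For readers unfamiliar with this kind of architecture, the following is a minimal sketch (not the authors’ implementation) of how an encoder–decoder LSTM with an attention mechanism over tokenized input–output example pairs could be assembled in Keras. All names, vocabulary sizes, and layer dimensions are illustrative assumptions only.

# Minimal sketch of an encoder-decoder LSTM with attention for
# sequence-to-sequence learning over tokenized model/code sequences.
# Vocabulary sizes and hidden dimensions are assumptions, not the paper's values.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_IN, VOCAB_OUT = 5000, 5000   # assumed token vocabularies
EMB, UNITS = 128, 256              # assumed embedding / hidden sizes

# Encoder: embeds the source token sequence and encodes it with an LSTM.
enc_in = layers.Input(shape=(None,), name="source_tokens")
enc_emb = layers.Embedding(VOCAB_IN, EMB, mask_zero=True)(enc_in)
enc_out, state_h, state_c = layers.LSTM(
    UNITS, return_sequences=True, return_state=True)(enc_emb)

# Decoder: consumes the (teacher-forced) target sequence,
# initialized with the encoder's final state.
dec_in = layers.Input(shape=(None,), name="target_tokens")
dec_emb = layers.Embedding(VOCAB_OUT, EMB, mask_zero=True)(dec_in)
dec_out, _, _ = layers.LSTM(
    UNITS, return_sequences=True, return_state=True)(
        dec_emb, initial_state=[state_h, state_c])

# Attention over the encoder outputs, concatenated with the decoder
# outputs before the per-step token classifier.
context = layers.Attention()([dec_out, enc_out])
concat = layers.Concatenate()([dec_out, context])
logits = layers.Dense(VOCAB_OUT, activation="softmax")(concat)

model = Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

Training such a network on serialized (tokenized) source and target artifacts, and decoding token by token at inference time, mirrors the input–output example setting described in the abstract.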


About Burgueño, Lola

Keywords

Model Manipulation, Code Generation, Model Transformation, Artificial Intelligence, Machine Learning, Neural Networks