Jornadas de Ingeniería del Software y Bases de Datos (JISBD)
The Jornadas de Ingeniería del Software y Bases de Datos (JISBD) are a leading meeting forum where researchers and practitioners from Spain, Portugal and Ibero-America in the fields of Software Engineering and Databases can debate and exchange ideas, create synergies and, above all, learn about the research being carried out in our community.
The impact of JISBD has not stopped growing since the event was first organized in Cáceres in 1999, where two already consolidated national events were unified: on the one hand, the Jornadas de Ingeniería de Software, held in Sevilla, San Sebastián and Murcia in 1996, 1997 and 1998, respectively; on the other, the Jornadas de Investigación y Docencia en Bases de Datos, previously held in A Coruña, Madrid and Valencia in the same years.
Browsing Jornadas de Ingeniería del Software y Bases de Datos (JISBD) by Title
Abstract: A big data-centric architecture metamodel for Industry 4.0
López Martínez, Patricia; Dintén, Ricardo; Zorrilla, Marta Elena; Drake, José María. Actas de las XXVI Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2022), 2022-09-05.
The effective implementation of Industry 4.0 requires the reformulation of industrial processes in order to achieve the vertical and horizontal digitalization of the value chain. For this purpose, it is necessary to provide tools that enable their successful implementation. This paper therefore proposes a data-centric, distributed, dynamically scalable reference architecture that integrates cutting-edge technologies while remaining aware of the legacy technology typically present in these environments. To ease its implementation, we have designed a metamodel that collects the description of all the elements involved in a digital platform (data, resources, applications and monitoring metrics) as well as the information necessary to configure, deploy and execute applications on it. Likewise, we provide a tool compliant with the metamodel that automates the generation of configuration, deployment and launch files and their corresponding transfer and execution on the nodes of the platform. We show the flexibility, extensibility and validity of our software artefacts through their application in two case studies: one that preprocesses and stores pollution data, and a second, more complex one that simulates the management of electric power distribution in a smart city.

Abstract: A compact representation for trips over networks built on self-indexes
Rodríguez Brisaboa, Nieves; Fariña, Antonio; Galaktionov, Daniil; Rodríguez, M. Andrea. Actas de las XXIV Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2019), 2019-09-02.
This work was previously published in Information Systems (ISSN: 0306-4379), vol. 28 (November 2018), pages 1-28, DOI https://doi.org/10.1016/j.is.2018.06.010. The last measured impact factor of that journal is 2.551. Representing the movements of objects (trips) over a network in a compact way, while retaining the capability of exploiting such data effectively, is an important challenge for real applications. We present a new Compact Trip Representation (CTR) that handles the spatio-temporal data associated with users' trips over transportation networks. Depending on the network and the types of queries, nodes in the network can represent intersections, stops, or even street segments. CTR represents separately the sequences of nodes and the time instants at which users traverse them. The spatial component is handled with a data structure based on the well-known Compressed Suffix Array, which provides both a compact representation and interesting indexing capabilities. The temporal component is self-indexed with either a Hu-Tucker-shaped Wavelet Tree or a Wavelet Matrix, which solve range-interval queries efficiently. We show how CTR can solve relevant counting-based spatial, temporal, and spatio-temporal queries over large sets of trips. Experimental results show the space requirements (around 50-70% of the space needed by a compact non-indexed baseline) and query efficiency (most queries are solved in under 1 ms) of CTR.
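The counting machinery underneath CTR's spatial component can be pictured with a small toy: build a suffix array over the concatenated trips and count a node pattern with two binary searches. The Python sketch below uses a plain (uncompressed) suffix array and invented trips; the actual CTR uses a Compressed Suffix Array and also self-indexes the temporal component.

```python
# Toy sketch: count how often a node sequence occurs across trips by
# binary-searching a suffix array over the concatenated trips.
def build_suffix_array(seq):
    return sorted(range(len(seq)), key=lambda i: seq[i:])

def count_pattern(seq, sa, pattern):
    m = len(pattern)
    def bound(upper):
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            s = seq[sa[mid]:sa[mid] + m]
            if s < pattern or (upper and s == pattern):
                lo = mid + 1
            else:
                hi = mid
        return lo
    return bound(True) - bound(False)

# Two trips over a network; "$0"/"$1" are unique trip terminators.
trips = [["A", "B", "C"], ["B", "C", "D"]]
seq = [x for i, t in enumerate(trips) for x in t + [f"${i}"]]
sa = build_suffix_array(seq)
print(count_pattern(seq, sa, ["B", "C"]))  # -> 2 (both trips traverse B->C)
```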
Abstract: A Compact Representation of Indoor Trajectories
Fariña, Antonio; Gutiérrez-Asorey, Pablo; Ladra, Susana; Penabad, Miguel R.; Varela Rodeiro, Tirso. Actas de las XXVI Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2022), 2022-09-05.
We present a system that combines indoor positioning with a compression algorithm for trajectories in the context of a nursing home. Our aim is to gather and effectively represent the location of residents and caregivers over time, while allowing efficient access to those data. We briefly show the system architecture that enables the automatic tracking of users' movements and, consequently, the gathering of their locations. Then, we present indRep, our compact representation for handling positioning data using grammar-based compression, and provide two basic operations that enable pseudo-random access to the data. Finally, we include experiments showing that indRep is competitive with well-known general-purpose compressors in terms of compression effectiveness while also providing fast access to the compressed data. We expect both features to enable exploitation functionalities even on computers with rather low computational resources.
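The combination the abstract describes, grammar-based compression plus pseudo-random access, can be sketched with a simplified Re-Pair-style toy. Everything below (the integer cell identifiers, the greedy pairing, the linear scan in access) is an illustration under assumptions, not the indRep data structure, which adds sampling so that access does not scan the top-level sequence.

```python
# Simplified Re-Pair-style compression of a sequence of (integer) cell
# identifiers, plus access(i) that descends the grammar instead of
# decompressing everything.
from collections import Counter

def compress(seq):
    rules, seq, nxt = {}, list(seq), max(seq) + 1
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs or pairs.most_common(1)[0][1] < 2:
            break
        (a, b), _ = pairs.most_common(1)[0]
        rules[nxt] = (a, b)                      # new nonterminal -> pair
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                out.append(nxt); i += 2
            else:
                out.append(seq[i]); i += 1
        seq, nxt = out, nxt + 1
    return seq, rules

def exp_len(sym, rules, memo):
    """Length of the expansion of a (non)terminal, memoized."""
    if sym not in rules:
        return 1
    if sym not in memo:
        l, r = rules[sym]
        memo[sym] = exp_len(l, rules, memo) + exp_len(r, rules, memo)
    return memo[sym]

def access(comp, rules, i):
    """Return the i-th original symbol without full decompression."""
    memo = {}
    for sym in comp:                 # a real index samples prefix lengths
        n = exp_len(sym, rules, memo)
        if i < n:
            while sym in rules:      # descend the grammar tree
                l, r = rules[sym]
                nl = exp_len(l, rules, memo)
                sym, i = (l, i) if i < nl else (r, i - nl)
            return sym
        i -= n
    raise IndexError(i)

cells = [3, 7, 3, 7, 3, 7, 9, 3, 7]  # invented room/cell ids over time
comp, rules = compress(cells)
assert [access(comp, rules, i) for i in range(len(cells))] == cells
```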
Article: A comparison between traditional and Serverless technologies in a microservices setting
Mera Menéndez, Juan; Labra Gayo, Jose Emilio; Riesgo Canal, Enrique; Echevarría Fernández, Aitor. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Serverless technologies, also known as FaaS (Function as a Service), are promoted as solutions that provide dynamic scalability, speed of development, a cost-per-consumption model, and the ability to focus on the code while the vendor manages the infrastructure. A microservices architecture is defined by the interaction and management of the application state by several independent services, each with a well-defined domain. When implementing software architectures based on microservices, several decisions must be made about the technologies involved and about the possibility of adopting Serverless. In this study, we implement 9 prototypes of the same microservice application using different technologies. Some architectural decisions and their impact on the performance and cost of the result are analysed. We use Amazon Web Services, starting with an application that uses a more traditional deployment environment (Kubernetes); migration to a serverless architecture is then performed by combining different technologies such as AWS ECS Fargate, AWS Lambda, DynamoDB or DocumentDB and analysing the impact (both cost and performance) of each.
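For a flavour of what the FaaS end of such a migration looks like, here is a hypothetical AWS Lambda handler for one endpoint of a microservice, written with boto3; the `orders` table, the event shape and the endpoint itself are invented, not taken from the paper's prototypes.

```python
# Hypothetical AWS Lambda handler for one microservice endpoint
# ("create order"), with DynamoDB as the data store via boto3.
# Table name, event shape and attributes are invented for illustration.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table

def handler(event, context):
    """API Gateway proxies the HTTP request into `event`."""
    body = json.loads(event.get("body") or "{}")
    item = {"orderId": body["orderId"], "status": "CREATED"}
    table.put_item(Item=item)     # billed per request, no idle servers
    return {"statusCode": 201, "body": json.dumps(item)}
```

The Kubernetes counterpart would keep a long-running container serving the same route and pay for idle capacity, which is precisely the cost and performance trade-off the study measures.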
Abstract: A continuous deployment-based approach for the collaborative creation, maintenance, testing and deployment of CityGML models
Prieto, Iñaki; Izkara, Jose Luis; Béjar, Rubén. Actas de las XXIII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2018), 2018-09-17.
Published in: International Journal of Geographical Information Science, Volume 32, 2018, Issue 2, pp. 282-301 (published online 26 Oct 2017), https://doi.org/10.1080/13658816.2017.1393543. The journal ranks 46/146 in COMPUTER SCIENCE, INFORMATION SYSTEMS in the JCR (Q2). Abstract: Georeferenced 3D models are an increasingly common choice to store and display urban data in many application areas. CityGML, an open and standardized data model and exchange format that provides common semantics for 3D city entities and their relations, is one of the most common options for this kind of information. Currently, creating and maintaining CityGML models is costly and difficult. This is in part because both the creation of the geometries and the semantic annotation can be complex processes that require at least some manual work. In fact, many publicly available CityGML models have errors. This paper proposes a method to facilitate the regular maintenance of correct city models in CityGML. The method is based on the continuous deployment strategy and tools used in software development, adapted to the problem of creating, maintaining and deploying CityGML models, even when several people are working on them at the same time. The method requires designing and implementing CityGML deployment pipelines: automatic implementations of the process of building, testing and deploying CityGML models. These pipelines must be run by the maintainers of the models when they make changes that are intended to be shared with others. The pipelines execute increasingly complex automatic tests in order to detect errors as soon as possible, and can even automate the deployment step, where the CityGML models are made available to their end users. To demonstrate the feasibility of this method, and as an example of its application, a CityGML deployment pipeline has been developed for an example scenario where three actors maintain the same city model. This scenario is representative of the kind of problems that this method intends to solve, and it is based on real work in progress. The main benefits of this method are: the automation of model testing (every change to the model is tested in a repeatable way); the automation of model deployment (every change to the model can reach its end users as fast as possible); a systematic approach to integrating changes made by different people working together on the models, including the possibility of keeping parallel versions with a common core; and an automatic record of every change made to the models (who did what and when), with the possibility of undoing some of those changes at any time.
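One plausible stage of such a pipeline, schema validation as an automatic test that gates deployment, might look like the following Python sketch; the file names are placeholders, and the paper does not prescribe any particular implementation language or XML library.

```python
# Sketch of a pipeline test stage: validate a CityGML file against its
# XML schema and fail the run (non-zero exit) if the model is broken.
# File names are placeholders.
import sys
from lxml import etree

def validate_citygml(model_path, schema_path):
    schema = etree.XMLSchema(etree.parse(schema_path))
    doc = etree.parse(model_path)
    if schema.validate(doc):
        return True
    for err in schema.error_log:
        print(f"{err.line}: {err.message}", file=sys.stderr)
    return False

if __name__ == "__main__":
    ok = validate_citygml("city_model.gml", "CityGML.xsd")
    sys.exit(0 if ok else 1)  # a non-zero exit stops the pipeline
```

A real pipeline would chain further, increasingly expensive checks (geometry validity, semantic rules, integration of the other maintainers' changes) before the deployment step.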
Article: A Data-Interoperability Aware Software Architecture
Humanes, Héctor; Yagüe, Agustín; Perez, Jennifer; Garbajosa, Juan; Burgas, Llorenç; Colomer, Joan; Melendez, Joaquim; Pous, Carles. Actas de las XXIII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2018), 2018-09-17.
Making heterogeneous data sources homogeneous manually and off-line can become a highly time-consuming task. This paper presents a software architecture that extends standards-based architectures for heterogeneous sensors with components that also support devices and data that are not compliant with standards. The defined architecture is based on Internet of Things (IoT) layered architectures that establish perception, network, middleware, application, and business as the main layers. To define the architecture, an architectural framework was used; this framework supports the identification of non-compliant data and then provides a different processing path for it. The proposed architecture covers a wide spectrum of data interoperability, addressing the IoT challenge of "Interoperability and Standardization". The implemented solution showed that the processing time between data acquisition and the feeding of analysis algorithms can be reduced from 100% to approximately 1% in systems based on the proposed architecture, compared with those that manage data manually and off-line.

Abstract: A decision-making support system for Enterprise Architecture Modelling
Pérez-Castillo, Ricardo; Ruiz-González, Francisco; Piattini Velthuis, Mario Gerardo. Actas de las XXV Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2021), 2021-09-22.
Companies are increasingly conscious of the importance of Enterprise Architecture (EA) for representing and managing IT and business in a holistic way. EA modelling has become decisive for achieving models that accurately represent the behaviour and assets of companies and lead them to make appropriate business decisions. Although EA representations can be modelled manually by experts, automatic EA modelling methods have been proposed to deal with the drawbacks of manual modelling, such as error-proneness, time consumption, slow and poor re-adaptation, and cost. However, automatic modelling is not effective for the most abstract concepts in EA, such as strategy or motivational aspects. Thus, companies are demanding hybrid approaches that combine automatic and manual modelling. In this context, there are no clear relationships between the input artefacts (and mining techniques) and the target EA viewpoints to be automatically modelled, nor between the experts' roles and the viewpoints to which they might contribute in manual modelling. Consequently, companies cannot make informed decisions regarding expert assignments in EA modelling projects, nor can they choose appropriate mining techniques and their respective input artefacts. This research proposes a decision support system whose core is a genetic algorithm. The proposal first establishes (based on a previous literature review) the missing relationships mentioned above, together with EA model specifications. This information is then fed to a genetic algorithm that decides between automatic, manual or hybrid modelling by selecting the most appropriate input artefacts, mining techniques and experts. The genetic algorithm has been optimized so that the system helps EA architects maximize the accuracy and completeness of EA models while keeping costs (derived from expert assignments and unnecessary automatic generations) under control.
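The genetic-algorithm core of such a system can be pictured with a toy sketch in which each gene fixes how one EA viewpoint is modelled (by an expert or by a mining technique) and the fitness trades estimated model quality against cost. All option names, scores, costs and weights below are invented illustration data, not the paper's calibration.

```python
# Toy genetic algorithm: one gene per EA viewpoint, choosing who/what
# models it; fitness = estimated quality minus weighted cost.
import random

OPTIONS = ["manual:architect", "manual:analyst", "auto:code-mining", "auto:db-mining"]
QUALITY = {"manual:architect": 0.9, "manual:analyst": 0.7,
           "auto:code-mining": 0.6, "auto:db-mining": 0.5}
COST = {"manual:architect": 8, "manual:analyst": 5,
        "auto:code-mining": 2, "auto:db-mining": 2}
N_VIEWPOINTS = 6

def fitness(chromosome):
    quality = sum(QUALITY[g] for g in chromosome)
    cost = sum(COST[g] for g in chromosome)
    return quality - 0.05 * cost          # arbitrary trade-off weight

def evolve(pop_size=40, generations=100, mut_rate=0.1):
    pop = [[random.choice(OPTIONS) for _ in range(N_VIEWPOINTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]               # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_VIEWPOINTS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:            # mutation
                child[random.randrange(N_VIEWPOINTS)] = random.choice(OPTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())  # e.g. a mix of manual and automatic modelling per viewpoint
```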
Abstract: A Delphi Study to Recognize and Assess Systems of Systems Vulnerabilities
Olivero González, Miguel Ángel; Bertolino, Antonia; Domínguez Mayo, Francisco José; Matteucci, Ilaria; Escalona Cuaresma, María José. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
System of Systems (SoS) is an emerging paradigm by which independent systems collaborate, sharing resources and processes, to achieve objectives that they could not achieve on their own. In this context, a number of emergent behaviours may arise that can undermine the security of the constituent systems. We apply the Delphi method with the aim of improving our understanding of SoS security and related problems, and of investigating their possible causes and remedies. Experts on SoS expressed their opinions and reached consensus over a series of rounds following a structured questionnaire. The results show that the experts found more consensus in disagreement than in agreement about some SoS characteristics, and about how SoS vulnerabilities could be identified and prevented. From this study we learn that more work is needed to reach a shared understanding of SoS vulnerabilities, and we leverage the expert feedback to outline some future research directions.

Abstract: A Domain-Specific Language for the specification of UCON policies
Reina-Quintero, Antonia M.; Martínez Pérez, Salvador; Varela Vaca, Ángel Jesús; Gómez-López, María Teresa; Cabot Sagrera, Jordi. Actas de las XXVI Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2022), 2022-09-05.
Security policies constrain the behaviour of all users of an information system. In any non-trivial system, these security policies go beyond simple access control rules and must cover more complex and dynamic scenarios while providing, at the same time, fine-grained decision-making ability. The Usage Control model (UCON) was created for this purpose, but so far the integration of UCON into mainstream software engineering processes has been very limited, hampering its usefulness and popularity among the software and information systems communities. This paper therefore proposes a Domain-Specific Language to facilitate the modelling of UCON policies and their integration into (model-based) development processes. Together with the language, we introduce an exploratory approach for the evaluation and enforcement of the modelled policies via model transformations. These contributions have been defined on top of the Eclipse Modelling Framework, the de facto standard MDE (Model-Driven Engineering) framework, making them freely available and ready to use for any software designer interested in using UCON to define security policies in new development projects.
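The distinguishing semantics that such a DSL must capture, conditions that are re-evaluated during usage and can revoke an ongoing session, can be sketched in a few lines of Python. This is a minimal illustration of UCON-style usage control under invented attributes, not the paper's language or its EMF-based tooling.

```python
# Minimal UCON-style usage control: pre-conditions gate the start of a
# usage session, ongoing conditions are re-checked and can revoke it.
# The clinician policy and its attributes are invented.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class UconPolicy:
    pre_conditions: List[Callable[[dict], bool]]      # checked before usage
    ongoing_conditions: List[Callable[[dict], bool]]  # re-checked during usage

    def try_start(self, ctx):
        return all(cond(ctx) for cond in self.pre_conditions)

    def still_allowed(self, ctx):
        return all(cond(ctx) for cond in self.ongoing_conditions)

policy = UconPolicy(
    pre_conditions=[lambda c: c["role"] == "clinician",
                    lambda c: c["on_duty"]],
    ongoing_conditions=[lambda c: c["on_duty"],
                        lambda c: c["minutes_open"] < 30],
)

ctx = {"role": "clinician", "on_duty": True, "minutes_open": 0}
assert policy.try_start(ctx)          # access granted
ctx["minutes_open"] = 45
assert not policy.still_allowed(ctx)  # session revoked mid-usage
```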
Article: A Federated Approach for Array and Entity Environmental Linked Data
Almobydeen, Shahed Bassam; Ríos Viqueira, José Ramón; Lama Penín, Manuel. Actas de las XXI Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2016), 2016-09-13.
The available environmental and spatial data are growing in size, and ever more application domains take advantage of this fact. The need to access these data through the linked data paradigm has also increased, owing to the interest in combining them with already available linked data repositories. Entity-based environmental data fits the graph data model of RDF perfectly; however, much environmental data is array-based, and such data is clearly not represented efficiently in RDF. In fact, transforming array-based environmental data into RDF triples would, for some datasets, generate huge RDF datasets, and querying them through SPARQL would lead to low-performance solutions. In this paper, we propose a federated architecture that integrates entity- and array-based repositories into a single SPARQL-based framework, where SPARQL queries are translated into SQL and array-based queries. New operations are added to the SPARQL algebra in order to embed those relational and array-based queries into SPARQL query plans. This makes SPARQL able to access two different database paradigms (entity and array) in one query, to answer questions like "What is the predicted average temperature of each municipality of Spain for the next week?"

Abstract: A Fine-Grained Requirement Traceability Evolutionary Algorithm: Kromaia, a Commercial Video Game Case Study
Blasco, Daniel; Cetina Englada, Carlos; Pastor López, Óscar. Actas de las XXV Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2021), 2021-09-22.
Context: Commercial video games usually feature an extensive source code base and requirements that are related to code lines from multiple methods. Traceability is vital for maintenance and content updates, so it is necessary to explore such search spaces properly. Objective: This work presents and evaluates CODFREL (Code Fragment-based Requirement Location), our approach to fine-grained requirement traceability, which relies on an evolutionary algorithm and includes encoding and genetic operators to manipulate code fragments built from source code lines. We compare it with a baseline approach (Regular-LSI) by configuring both approaches with different granularities (code lines / complete methods). Method: We evaluated our approach and Regular-LSI on the Kromaia video game case study, a commercial video game released on PC and PlayStation 4. The approaches are configured with method and code-line granularity and work on 20 requirements provided by the development company. Our approach and Regular-LSI calculate similarities between requirements and code fragments or methods to propose possible solutions and, in the case of CODFREL, to guide the evolutionary algorithm. Results: The results, which compare the code-line and method granularity configurations of CODFREL with different granularity configurations of Regular-LSI, show that our approach outperforms Regular-LSI in precision and recall, with values that are 26 and 8 times better, respectively, even though it does not achieve optimal solutions. We make an open-source implementation of CODFREL available. Conclusions: Since our approach takes into consideration key issues like the source code size of commercial video games and requirement dispersion, it provides better starting points than Regular-LSI in the search for solution candidates for the requirements. However, the results, and the influence of domain-specific language on them, show that more explicit knowledge is required to improve them further.
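The textual-similarity signal that both CODFREL and the Regular-LSI baseline build on can be approximated with a toy term-vector comparison. The sketch below uses plain bag-of-words cosine similarity and invented requirement/code snippets, whereas the paper uses LSI and evolves line-level fragments with genetic operators.

```python
# Toy requirement-to-code similarity: bag-of-words cosine between a
# requirement and candidate code fragments (identifiers are split on
# camelCase). Requirement and fragments are invented.
import math
import re
from collections import Counter

def terms(text):
    text = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)  # split camelCase
    return [t.lower() for t in re.findall(r"[A-Za-z]+", text)]

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

requirement = "The boss loses armor when all weak points are hit"
fragments = {
    "Boss.updateArmor": "void updateArmor() { if (weakPointsHit == total) armor -= loss; }",
    "Hud.renderScore": "void renderScore() { drawText(score); }",
}

req_vec = Counter(terms(requirement))
ranked = sorted(fragments,
                key=lambda k: cosine(req_vec, Counter(terms(fragments[k]))),
                reverse=True)
print(ranked[0])  # -> Boss.updateArmor
```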
Article: A First Approach towards Storage and Query Processing of Big Spatial Networks in Scalable and Distributed Systems
Mena, Manel; Corral, Antonio; Iribarne, Luis. Actas de las XXIII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2018), 2018-09-17.
Due to the ubiquitous use of spatial data applications and the large amounts of spatial data that these applications generate, the processing of large-scale queries in distributed systems is becoming increasingly popular. Complex spatial systems are very often organized in the form of spatial networks, a type of graph whose nodes and edges are embedded in space. Examples of such spatial networks are transportation and mobility networks, mobile phone networks, social and contact networks, etc. When these spatial networks are big enough to exceed the capacity of commonly used spatial computing technologies, we have big spatial networks, and managing them requires distributed graph-parallel systems. In this paper, we describe our emerging work on the design of new storage methods and query processing algorithms for big spatial networks in scalable and distributed systems, which has been a very active research area in recent years.

Article: A First Step Towards Keyword-Based Searching for Recommendation Systems
Rodríguez Hernández, María Del Carmen; Guerra, Francesco; Ilarri, Sergio; Trillo-Lado, Raquel. Actas de las XX Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2015), 2015-09-15.
Due to the high availability of data, users are frequently overloaded with a huge number of alternatives when they need to choose a particular item. This has motivated an increased interest in research on recommendation systems, which filter the options and provide users with suggestions about specific elements (e.g., movies, restaurants, hotels, news) that are estimated to be potentially relevant to the user. Recommendation systems are still an active area of research, and in recent years the concept of context-aware recommendation systems has become popular, due to the interest of considering the user's context in the recommendation process. In this paper, we describe our work in progress on pull-based recommendations (i.e., recommendations about certain types of items that are explicitly requested by the user). In particular, we focus on the problem of detecting the type of item the user is interested in. Due to its popularity, we consider a keyword-based user interface: the user types a few keywords and the system must determine what the user is searching for. Whereas there is extensive work in the field of keyword-based search, which is still a very active research area, keyword searching has so far not been applied in most recommendation contexts.
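A deliberately naive sketch of the item-type detection step: score each candidate item type by vocabulary overlap with the user's keywords. The categories and term lists are invented, and the paper explores far richer keyword-search techniques for this step.

```python
# Naive item-type detection for a pull-based recommender: pick the
# category whose vocabulary best overlaps the user's keywords.
# Categories and vocabularies are invented.
CATEGORY_TERMS = {
    "movie": {"film", "movie", "actor", "director", "cinema"},
    "restaurant": {"restaurant", "food", "dinner", "menu", "vegan"},
    "hotel": {"hotel", "room", "night", "stay", "breakfast"},
}

def detect_item_type(query):
    words = set(query.lower().split())
    scores = {cat: len(words & vocab) for cat, vocab in CATEGORY_TERMS.items()}
    return max(scores, key=scores.get)

print(detect_item_type("cheap vegan dinner near me"))  # -> restaurant
```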
Article: A general approach to Software Product Line testing
Ruiz, Elvira G.; Ayerdi, Jon; Galindo, José A.; Arrieta, Aitor; Sagardui, Goiuria; Benavides Cuevas, David Felipe. Actas de las XXIV Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2019), 2019-09-02.
Variability is a central concept in Software Product Lines (SPLs). How the SPL paradigm can improve both the efficiency of a company and the quality of its products has been studied extensively. Nevertheless, the paradigm brings several challenges when testing an SPL, mainly caused by the potentially huge number of products that can be derived from it. Different studies propose methods for testing SPLs, and there are also secondary studies reviewing and mapping the literature of the existing proposals. Nevertheless, there is a lack of systematic guidelines walking practitioners and researchers through the different steps required to carry out an SPL testing strategy. In this paper, we present a first version of a tutorial that summarizes the existing proposals in the SPL testing area. To the best of our knowledge, there is no similar attempt in the existing literature. Our goal is to discuss this tutorial with the community and enrich it in order to provide a more solid version of it in the future.

Abstract: A generic LSTM neural network architecture to infer heterogeneous model transformations
Burgueño, Lola; Cabot Sagrera, Jordi; Li, Shuai; Gérard, Sébastien. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Models capture relevant properties of systems. During their life-cycle, models are subjected to manipulations with different goals, such as managing software evolution, performing analysis, increasing developers' productivity, and reducing human errors. Typically, these manipulation operations are implemented as model transformations. Examples of these transformations are (i) model-to-model transformations for model evolution, model refactoring, model merging, model migration, model refinement, etc.; (ii) model-to-text transformations for code generation; and (iii) text-to-model ones for reverse engineering. These operations are usually implemented manually, using general-purpose languages such as Java, or domain-specific languages (DSLs) such as ATL or Acceleo. Even when using such DSLs, transformations are still time-consuming and error-prone. We propose using the advances in artificial intelligence techniques to learn these manipulation operations on models and automate the process, freeing the developer from building specific pieces of code. In particular, our proposal is a generic neural network architecture suitable for heterogeneous model transformations. Our architecture comprises an encoder-decoder long short-term memory network with an attention mechanism. It is fed with pairs of input-output examples and, once trained, given an input, automatically produces the expected output. We present the architecture and illustrate the feasibility and potential of our approach through its application to two main operations on models: model-to-model transformations and code generation. The results confirm that neural networks are able to faithfully learn how to perform these tasks as long as enough data are provided and no contradictory examples are given.
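The backbone of the described architecture is a standard encoder-decoder LSTM. The PyTorch skeleton below shows the training-time data flow under simplifying assumptions: no attention mechanism (which the paper adds), random placeholder token ids instead of serialized models, and teacher forcing with unshifted targets.

```python
# Encoder-decoder LSTM skeleton (PyTorch). The paper's architecture adds
# attention; here plain final-state conditioning is used, with random
# token ids standing in for serialized input/output models.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_in, vocab_out, emb=64, hidden=128):
        super().__init__()
        self.emb_in = nn.Embedding(vocab_in, emb)
        self.emb_out = nn.Embedding(vocab_out, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, vocab_out)

    def forward(self, src, tgt):
        _, state = self.encoder(self.emb_in(src))        # summarize input model
        out, _ = self.decoder(self.emb_out(tgt), state)  # teacher forcing
        return self.proj(out)                            # per-token logits

model = Seq2Seq(vocab_in=200, vocab_out=180)
src = torch.randint(0, 200, (8, 30))  # batch of serialized input models
tgt = torch.randint(0, 180, (8, 25))  # target sequences (unshifted here;
                                      # real training shifts them right)
logits = model(src, tgt)              # shape (8, 25, 180)
loss = nn.functional.cross_entropy(logits.reshape(-1, 180), tgt.reshape(-1))
loss.backward()
```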
Article: A Linda-based Platform for the Parallel Execution of Out-place Model Transformations
Burgueño, Lola; Wimmer, Manuel; Vallecillo, Antonio. Actas de las XXII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2017), 2017-07-19.
Context: The performance and scalability of model transformations are gaining interest as industry progressively adopts model-driven techniques and multicore computers become commonplace. However, existing model transformation engines are mostly based on sequential and in-memory execution strategies, so their capabilities for transforming large models in parallel and distributed environments are limited. Objective: This paper presents a solution that provides concurrency and distribution for model transformations. Method: Inspired by the concepts and principles of the Linda coordination language, and using data parallelism to achieve parallelization, a novel Java-based execution platform is introduced. It offers a set of core features for the parallel execution of out-place transformations that can be used as a target for high-level transformation language compilers. Results: Significant gains in the performance and scalability of this platform are reported with regard to existing model transformation solutions. These results are demonstrated by running a model transformation test suite and by comparison against several state-of-the-art model transformation engines. Conclusion: Our Linda-based approach to the concurrent execution of model transformations can serve as a platform for their scalable and efficient implementation in parallel and distributed environments.

Abstract: A method for transforming knowledge discovery metamodel to ArchiMate models
Pérez-Castillo, Ricardo; Delgado, Andrea; Ruiz-González, Francisco; Bacigalupe, Virginia; Piattini Velthuis, Mario Gerardo. Actas de las XXVI Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2022), 2022-09-05.
Enterprise Architecture (EA) has become a driver and enabler of digital transformation in companies, since it makes it possible to manage IT and business in a holistic, integrated way by establishing connections between technological concerns and business strategy. EA modelling is fundamental to represent the business and its IT assets accurately and in an interrelated manner. This modelling is important when companies begin to manage their EA, but also when the EA is remodelled in order to achieve realignment in a changing world. EA is usually modelled manually by a small group of experts, which is error-prone, time-consuming and hinders continuous realignment. In contrast, automatic EA modelling proposals inspect artefacts such as source code, databases, services, etc. Automated modelling proposals to date focus on the analysis of individual artefacts, with isolated transformations towards ArchiMate or other EA notations and/or frameworks. This paper therefore proposes an MDE approach that uses the Knowledge Discovery Metamodel (KDM) to represent all the intermediate information recovered from information-system artefacts, which is then automatically transformed into ArchiMate models. The main contribution of this paper is the model transformation between KDM and ArchiMate. The main implication of this proposal is that ArchiMate models are generated automatically from a common knowledge repository. In this way, the relationships between artefacts of different kinds can be exploited to obtain more complete and precise EA representations, while also favouring their continuous realignment.

Abstract: A methodology to automatically translate user requirements into visualizations: Experimental validation
Lavalle, Ana; Maté, Alejandro; Trujillo, Juan; Teruel, Miguel A.; Rizzi, Stefano. Actas de las XXVI Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2022), 2022-09-05.
Context: Information visualization is paramount for the analysis of Big Data. The volume of data requiring interpretation is continuously growing. However, users are usually not experts in information visualization, so defining the visualization that best suits a given context is a very challenging task for them. Moreover, it is often the case that users do not have a clear idea of the objectives for which they are building the visualizations. Consequently, graphics may be misinterpreted, leading to wrong decisions and missed opportunities. One of the underlying problems in this process is the lack of methodologies and tools that users who are not visualization experts can use to define their objectives and visualizations. Objective: The main objectives of this paper are to (i) enable users who are not experts in data visualization to communicate their analytical needs with little effort, (ii) generate the visualizations that best fit their requirements, and (iii) evaluate the impact of our proposal on a case study, describing an experiment with 97 users who are not data visualization experts. Methods: We propose a methodology that collects user requirements and semi-automatically creates suitable visualizations. Our proposal covers the whole process, from the definition of requirements to the implementation of the visualizations. The methodology has been tested with several groups to measure its effectiveness and perceived usefulness. Results: The experiments increase our confidence in the utility of our methodology. It improves significantly over the case in which users face the same problem manually. Specifically: (i) users are able to cover more analytical questions, (ii) the visualizations produced are more effective, and (iii) the overall satisfaction of the users is higher. Conclusion: By following our proposal, non-expert users are able to express their analytical needs more effectively and obtain the set of visualizations that best suits their goals.

Article: A Methodology to Retire a Software Product Line
Cortiñas, Alejandro; Krüger, Jacob; Lamas Sardiña, Victor Juan; Rodríguez Luaces, Miguel; Pedreira, Oscar. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Software product-line engineering makes it possible to develop a family of software systems customized on top of a common platform. By employing this approach, an organization can configure a system to adapt to changing customer requirements and also reap long-term benefits such as reduced development and maintenance costs. Although typically used for a long-living family of systems that is continuously evolved, a product line may eventually be retired and replaced by a successor, for instance because outdated technology cannot be replaced easily, making it more feasible to develop a new product line. Previous work has mentioned retiring product lines, but without much detail. This paper aims to fill this gap by presenting a process for retiring and replacing a product line, with the aim of helping practitioners retire product lines more systematically and with fewer issues. Additionally, the paper highlights open research directions that need to be addressed in the future.

Article: A model-based proposal for integrating the measures lifecycle within the process lifecycle
Meidan, Ayman; García García, Julián Alberto; Ramos, Isabel; Escalona Cuaresma, María José; Arevalo, Carlos. Actas de las XXII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2017), 2017-07-19.
The software development process (SDP) is a complex and long endeavour; the quality and management of this process affect the quality of its results. Measuring the SDP is essential to gain insight into its performance and to discover improvements. This work proposes using the Model-Driven Engineering (MDE) paradigm to integrate the measures lifecycle within the process lifecycle, so that measures are modelled explicitly and operationally during process modelling. It also defines transformation rules to derive executable code that runs these measures in enterprise tools, thereby supporting the measures lifecycle.
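The core idea of that last proposal, measures modelled explicitly as data from which executable code is derived by transformation rules, can be illustrated with a minimal sketch. The measure "metamodel" and the SQL target below are invented stand-ins for the paper's models and enterprise-tool code.

```python
# A measure as an explicit model, plus a transformation rule deriving
# executable SQL from it. Metamodel fields and the SQL template are
# invented illustrations.
from dataclasses import dataclass

@dataclass
class Measure:              # tiny stand-in for a measure metamodel
    name: str
    entity: str             # the process element being measured
    attribute: str
    aggregation: str        # e.g. AVG, COUNT, SUM

def to_sql(m):
    """Transformation rule: measure model -> executable query."""
    return f"SELECT {m.aggregation}({m.attribute}) AS {m.name} FROM {m.entity};"

cycle_time = Measure(name="avg_cycle_time", entity="task_executions",
                     attribute="duration_hours", aggregation="AVG")
print(to_sql(cycle_time))
# SELECT AVG(duration_hours) AS avg_cycle_time FROM task_executions;
```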