JISBD 2023 (Ciudad Real)
The XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023) were held in Ciudad Real from 12 to 14 September 2023, as part of the Jornadas Sistedes.
The JISBD 2023 program was organized around thematic sessions, or tracks.
Browsing JISBD 2023 (Ciudad Real) by title, showing 1-20 of 118 results.
Article: A comparison between traditional and Serverless technologies in a microservices setting
Mera Menéndez, Juan; Labra Gayo, Jose Emilio; Riesgo Canal, Enrique; Echevarría Fernández, Aitor. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Serverless technologies, also known as FaaS (Function as a Service), are promoted as solutions that provide dynamic scalability, speed of development, a cost-per-consumption model, and the ability to focus on the code while delegating the management of the infrastructure to the vendor. A microservices architecture is defined by the interaction and management of the application state by several independent services, each with a well-defined domain. When implementing software architectures based on microservices, there are several decisions to take about the technologies, including the possibility of adopting Serverless. In this study, we implement nine prototypes of the same microservice application using different technologies, and we analyse some architectural decisions and their impact on the performance and cost of the result. We use Amazon Web Services, start with an application that uses a more traditional deployment environment (Kubernetes), and migrate it to a serverless architecture, analysing the impact (in both cost and performance) of different technologies such as AWS ECS Fargate, AWS Lambda, DynamoDB, and DocumentDB.

Summary: A Delphi Study to Recognize and Assess Systems of Systems Vulnerabilities
Olivero González, Miguel Ángel; Bertolino, Antonia; Domínguez Mayo, Francisco José; Matteucci, Ilaria; Escalona Cuaresma, María José. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
System of Systems (SoS) is an emerging paradigm by which independent systems collaborate by sharing resources and processes to achieve objectives that they could not achieve on their own.
In this context, a number of emergent behaviors may arise that can undermine the security of the constituent systems. We apply the Delphi method with the aim of improving our understanding of SoS security and related problems, and of investigating their possible causes and remedies. Experts on SoS expressed their opinions and reached consensus in a series of rounds by following a structured questionnaire. The results show that the experts found more consensus in disagreement than in agreement about some SoS characteristics and about how SoS vulnerabilities could be identified and prevented. From this study we learn that more work is needed to reach a shared understanding of SoS vulnerabilities, and we leverage expert feedback to outline some future research directions.

Summary: A generic LSTM neural network architecture to infer heterogeneous model transformations
Burgueño, Lola; Cabot Sagrera, Jordi; Li, Shuai; Gérard, Sébastien. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Models capture relevant properties of systems. During their life-cycle, models are subjected to manipulations with different goals, such as managing software evolution, performing analysis, increasing developers' productivity, and reducing human errors. Typically, these manipulation operations are implemented as model transformations. Examples of these transformations are (i) model-to-model transformations for model evolution, model refactoring, model merging, model migration, model refinement, etc.; (ii) model-to-text transformations for code generation; and (iii) text-to-model transformations for reverse engineering. These operations are usually implemented manually, using general-purpose languages such as Java, or domain-specific languages (DSLs) such as ATL or Acceleo. Even when using such DSLs, transformations are still time-consuming and error-prone.
We propose using advances in artificial intelligence to learn these manipulation operations on models and automate the process, freeing the developer from building specific pieces of code. In particular, our proposal is a generic neural network architecture suitable for heterogeneous model transformations. Our architecture comprises an encoder-decoder long short-term memory network with an attention mechanism. It is fed with pairs of input-output examples and, once trained, given an input, automatically produces the expected output. We present the architecture and illustrate the feasibility and potential of our approach through its application to two main operations on models: model-to-model transformations and code generation. The results confirm that neural networks are able to faithfully learn how to perform these tasks as long as enough data are provided and no contradictory examples are given.

Article: A Methodology to Retire a Software Product Line
Cortiñas, Alejandro; Krüger, Jacob; Lamas Sardiña, Victor Juan; Rodríguez Luaces, Miguel; Pedreira, Oscar. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Software product-line engineering makes it possible to develop a family of software systems customized on top of a common platform. By employing this approach, an organization can configure a system to adapt to changing customer requirements and also reap long-term benefits such as reduced development and maintenance costs. Typically used for a long-living family of systems that are continuously evolved, a product line may eventually be retired and replaced by a successor, for instance when outdated technology cannot easily be replaced and developing a new product line becomes more feasible. Previous work has mentioned retiring product lines, but without much detail.
This paper aims to fill this gap by presenting a process for retiring and replacing a product line, with the aim of helping practitioners retire product lines more systematically and with fewer issues. Additionally, the paper highlights open research directions that need to be addressed in the future.

Summary: A model-driven approach for systematic reproducibility and replicability of data science projects
González, Francisco Javier Melchor; Rodríguez-Echeverría, Roberto; Conejero, José María; Prieto Ramos, Álvaro E.; Gutiérrez Gallardo, Juan Diego. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
In recent years there has been a significant increase in the number of tools and approaches for defining pipelines for developing data science projects. These tools support both the definition of the pipeline and the generation of the code needed to execute the project, providing a simple way to carry out such projects even for non-expert users. However, there are still challenges that these tools do not address, such as the ability to execute pipelines in technological environments different from the one where they were defined (reproducibility and replicability), or the identification of inconsistent operations (intentionality). To mitigate these problems, this work presents a model-based framework for defining data science pipelines independently of the execution platform and the concrete tools. The framework separates the pipeline definition into two modeling layers: a conceptual layer, where the data scientist specifies all the data operations that make up the pipeline, and an operational layer, where the data engineer describes the concrete details of the execution environment where the operations will finally be implemented.
Based on this abstract definition and the separation into layers, our proposal enables the use of different tools, thus improving the replicability of the process; the automation of its execution, improving reproducibility; and the definition of model verification rules, providing intentionality constraints.

Summary: A reference framework for the implementation of data governance systems for industry 4.0
Yebenes, Juan; Zorrilla, Marta Elena. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
The fourth industrial revolution, or Industry 4.0, represents a new stage of evolution in the organization, management, and control of the value chain throughout the product or service life cycle. It is mainly based on the digitalization of the industrial environment by means of the convergence of Information Technologies (IT) and Operational Technologies (OT) through cyber-physical systems and the Industrial IoT (IIoT), and on the use of data generated in real time for gaining insights and making decisions. Therefore, data becomes a critical asset for Industry 4.0 and must be managed and governed like a strategic asset. We rely on Data Governance (DG) as a key instrument for carrying out this transformation. This paper presents the design of a specific governance framework for Industry 4.0. First, it contextualizes data governance for Industry 4.0 environments and identifies the requirements that this framework must address, which are conditioned by the specific features of Industry 4.0, among others the intensive use of big data, cloud and edge computing, artificial intelligence, and the current regulations.
Next, we formally define a reference framework for the implementation of Data Governance Systems for Industry 4.0, using international standards and providing several examples of architecture building blocks.

Summary: A systematic review of capability and maturity innovation assessment models: Opportunities and challenges
Gimenez Medina, Manuel; González Enríquez, José; Domínguez Mayo, Francisco José. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Public funding, being the primary source of innovation funding, imposes restrictions caused by a lack of trust between the roles of public funders and organisations in the innovation process. Capability and maturity innovation assessment models can improve the process by combining both roles to create an agile and trusting environment. This paper aims to provide a current description of the state of the art on capability and maturity innovation assessment models in the context of Information and Communication Technologies. To this end, a Systematic Mapping Study was carried out considering high-quality published research from four relevant digital libraries since 2000. The 78 primary studies analysed show several gaps and challenges. In particular, a common ontology has not been achieved, and Innovation Management Systems are scarcely considered. Concepts such as open innovation have not been correctly applied to incorporate all Quadruple Helix stakeholders, especially the government and its role as a public funder. This implies that no studies explore a standard agile public-private maturity model based on capabilities, since the public funders' restrictions have not been considered. Furthermore, although some concepts of innovation capabilities have evolved, none of the studies analysed offers comprehensive coverage of capabilities.
As potential future lines of research, this paper proposes 11 challenges based on the 5 shortcomings found in the literature.

Summary: A Unified Metamodel for NoSQL and Relational Databases
Fernández Candel, Carlos Javier; Sevilla Ruiz, Diego; García Molina, Jesús Joaquín. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
The database field is undergoing significant changes. Although relational systems are still predominant, interest in NoSQL systems is continuously increasing. In this scenario, polyglot persistence is envisioned as the database architecture that will be prevalent in the future. Therefore, database tools and systems are evolving to support several data models. Multi-model database tools normally use a generic or unified metamodel to represent the schemas of the data models that they support. Such metamodels facilitate the development of database utilities, as these can be built on a common representation. Also, the number of mappings required to migrate databases from one data model to another is reduced, and integrability is favored. In this paper, we present the U-Schema unified metamodel, able to represent logical schemas for the four most popular NoSQL paradigms (columnar, document, key-value, and graph) as well as relational schemas. We formally define the mappings between U-Schema and the data model of each database paradigm, discuss how these mappings have been implemented and validated, and show some applications of U-Schema. To achieve the flexibility to respond to data changes, most NoSQL systems are "schema-on-read," and the declaration of schemas is not required. Such an absence of schema declaration makes structural variability possible, i.e., stored data of the same entity type can have different structures.
Moreover, the data relationships supported by each data model differ; for example, document stores have aggregate objects but not relationship types, whereas graph stores offer the opposite. Throughout the paper, we show how all these issues have been tackled in our approach. As far as we know, no proposal exists in the literature of a unified metamodel for the relational and NoSQL paradigms that describes how each individual data model is integrated and mapped. Our metamodel goes beyond existing proposals by distinguishing entity types and relationship types, representing aggregation and reference relationships, and including the notion of structural variability. Our contributions also include developing schema extraction strategies for schemaless systems of each NoSQL data model, and tackling performance and scalability in the implementation for each store.

Article: Alineamiento de trazas de Gemelos. El Caso de Estudio de un Ascensor
Muñoz, Paula; Arrieta, Aitor; Vallecillo, Antonio. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
A problem when developing a digital twin is checking that its behavior is faithful to that of the physical system it replicates. This work establishes fidelity measures using a trace-alignment algorithm and applies them to the case study of an elevator.

Summary: An Empirical Study on the Survival Rate of GitHub Projects
Ait, Adem; Cánovas Izquierdo, Javier Luis; Cabot Sagrera, Jordi. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
The number of open source projects hosted in social coding platforms such as GitHub is constantly growing. However, many of these projects are not regularly maintained, and some are even abandoned shortly after they were created. In this paper we analyze early project development dynamics in software projects hosted on GitHub, including their survival rate.
To this aim, we collected all 1,127 GitHub repositories from four different ecosystems (NPM packages, R packages, WordPress plugins, and Laravel packages) created in 2016. We stored their activity in a time-series database and analyzed how their activity evolved along their lifespan, from 2016 to now. Our results reveal that the prototypical development process consists of intensive coding-driven active periods followed by long periods of inactivity. More importantly, we found that a significant number of projects die in their first year of existence, with the survival rate decreasing year after year. In fact, the probability of surviving longer than five years is less than 50%, though some types of projects have better chances of survival.

Article: Análisis de Expresiones Faciales para la Adaptación Inteligente de Interfaces de Usuario
Carceller Llorens, Fernando; Figueiredo, Daniel Gaspar; Abrahao Gonzales, Silvia; Insfrán Pelozo, Emilio. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Today's software systems must be aware of the context and the needs of their users in order to adapt appropriately. Context information can be collected from the environment, the platform, or the user, and used to drive the adaptation. Although many adaptation approaches have been proposed, user interface adaptation remains a major challenge because of the difficulty of suggesting the right adaptation at the right time and place. This work presents a facial expression analysis tool to be used in the context of a framework for the intelligent adaptation of user interfaces. The developed infrastructure collects information about the user's facial expressions and extracts the dominant emotion.
This information will be used as (positive or negative) feedback for a decision-making process based on reinforcement learning, which will propose improvement actions (adaptations) on the user interface to enhance the user experience.

Article: Analizando la motivación, estrés y rendimiento de los profesionales software en contextos de desarrollo global y trabajo remoto
Suárez, Julio; Vizcaíno, Aurora; García Rubio, Félix Óscar. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
This article presents a study of how remote and global software development work influences human factors. The study is carried out with the goal of later building a system that helps determine which people are best suited for a given project, according to the characteristics of the potential team members and of the project itself, in order to boost productivity and, indirectly, improve software quality. We provide an overview of the research method that has been followed, as well as of the main results obtained through a systematic mapping of the literature and of those expected in the following stages.

Summary: Applying Inter-rater Reliability and Agreement in Collaborative Grounded Theory Studies in Software Engineering
Díaz, Jessica; Pérez, Jorge; Gallardo, Carolina; González-Prieto, Ángel. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Context: Qualitative research in empirical software engineering that uses Grounded Theory (GT) is increasing. The trustworthiness, rigor, and transparency of GT qualitative data analysis can benefit, among others, when multiple analysts juxtapose diverse perspectives and collaborate to develop a common code frame based on a consensual and consistent interpretation.
Inter-Rater Reliability (IRR) and Inter-Rater Agreement (IRA) are commonly used techniques to measure consensus and thus develop a shared interpretation. However, minimal guidance is available on how and when to measure IRR/IRA during the iterative process of GT, so researchers have been using ad hoc methods for years. Objective: This paper presents a process for systematically measuring IRR/IRA in GT studies, when appropriate, which is grounded in a previous systematic mapping study on collaborative GT in the field of software engineering. Method: Meta-science guided us to analyze the issues and challenges of collaborative GT and to formalize a process to measure IRR/IRA in GT. Results: This process guides researchers to incrementally generate a theory while ensuring consensus on the constructs that support it, improving trustworthiness, rigor, and transparency, and promoting the communicability, reflexivity, and replicability of the research. Conclusion: The application of this process to a GT study seems to support its feasibility. Pending further confirmation, this would represent the first step towards a de facto standard to be applied to those GT studies that may benefit from IRR/IRA techniques.

Summary: Architecting Digital Twins Using a Domain-Driven Design-Based Approach
Macías, Aurora; Navarro, Elena; Cuesta, Carlos E.; Zdun, Uwe. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
The Digital Twin (DT) concept has outgrown its initial definition, which was based on a purely descriptive approach focused on modelling physical objects, often using CAD. Today, DT often describes a behavioural approach that can simulate an object's dynamics, monitor its state, and control or predict its behaviour.
Although DTs are attracting significant attention and offer many advantages, especially in the design of cyber-physical systems, most proposals have focused on developing DTs for a specific use case or need without providing a more holistic approach to their design. We propose a domain-agnostic approach for architecting DTs in which DTs are directly supported by Domain-Driven Design's notion of Bounded Contexts (BCs), hiding all the domain-inherent specifications behind BC boundaries. These BCs are also the central abstraction in many microservice architectures and can be used to describe DTs. A wind turbine DT architecture is used as a running example to describe how every relevant DT property can be satisfied following our proposal for architecting digital twins. A qualitative evaluation of this case by five external practitioners shows that our DDD-based proposal consistently outperforms the 5-dimension model used as the reference approach.

Article: Arquitectura de un Framework para la Generación Automatizada de Datasets Temporales en Data Lakes
Sal, Brian; de La Vega, Alfonso; López Martínez, Patricia; García-Saiz, Diego; Grande, Alicia; López, David; Sánchez Barreiro, Pablo. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
In recent years, data lakes have become popular as a solution for the centralized storage of large volumes of heterogeneous data coming from disparate sources. These data usually have a markedly temporal character, since they are typically extracted periodically from various sources at different frequencies and stored directly in raw form. They must therefore be adequately preprocessed before being consumed by the applications that exploit them. This preprocessing task is currently performed manually, by writing scripts in data transformation languages.
This process is laborious, costly and, in general, error-prone. To alleviate this problem, this article presents the initial architecture of Hannah, a framework that seeks to automate the generation of datasets for time-series mining from raw data in data lakes. The goal is that, using as little input information as possible, the framework can retrieve the required data from the data lake and process them so that they fit properly into a dataset.

Article: Arquitectura para la Gestión del Ciclo de Vida de Líneas de Producto Software
de Castro Celard, David; Cortiñas, Alejandro; Rodríguez Luaces, Miguel; Pedreira, Oscar; Saavedra Places, Ángeles. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Application Lifecycle Management (ALM) coordinates the activities needed to develop, maintain, and evolve an application, defining a framework that pursues efficiency and effectiveness at every stage of the life cycle. Although there are numerous publications on ALM, the application of ALM to Software Product Lines (SPLs) has not yet been addressed. This work proposes adapting ALM to SPL development by creating a tool that mainly covers the development and operations areas in order to manage several SPLs and their associated products.

Summary: ARTE: Automated Generation of Realistic Test Inputs for Web APIs
Alonso Valenzuela, Juan Carlos; Martín-López, Alberto; Segura Rueda, Sergio; García, José María; Ruiz Cortés, Antonio. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Automated test case generation for web APIs is a thriving research topic, where test cases are frequently derived from the API specification.
However, this process is only partially automated, since testers are usually obliged to manually set meaningful valid test inputs for each input parameter. In this article, we present ARTE, an approach for the automated extraction of realistic test data for web APIs from knowledge bases like DBpedia. Specifically, ARTE leverages the specification of the API parameters to automatically search for realistic test inputs using natural language processing, search-based, and knowledge extraction techniques. ARTE has been integrated into RESTest, an open-source testing framework for RESTful APIs, fully automating the test case generation process. Evaluation results on 140 operations from 48 real-world web APIs show that ARTE can efficiently generate realistic test inputs for 64.9% of the target parameters, outperforming the state-of-the-art approach SAIGEN (31.8%). More importantly, ARTE supported the generation of over twice as many valid API calls (57.3%) as random generation (20%) and SAIGEN (26%), leading to a higher failure-detection capability and uncovering several real-world bugs. These results show the potential of ARTE for enhancing existing web API testing tools, achieving an unprecedented level of automation.

Summary: Automated Engineering of Domain-Specific Metamorphic Testing Environments
Gómez-Abajo, Pablo; Cañizares, Pablo C.; Núñez, Alberto; Guerra, Esther; de Lara, Juan. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Context: Testing is essential to improve the correctness of software systems. Metamorphic testing (MT) is an approach especially suited to systems under test that lack oracles, or whose oracles are expensive to compute. However, building an MT environment for a particular domain (e.g., cloud simulation, model transformation, machine learning) requires substantial effort. Objective: Our goal is to facilitate the construction of MT environments for specific domains. Method:
We propose a model-driven engineering approach to automate the construction of MT environments. Starting from a meta-model capturing the domain concepts and a description of the domain execution environment, our approach produces an MT environment featuring comprehensive support for the MT process. This includes the definition of domain-specific metamorphic relations, their evaluation, detailed reporting of the testing results, and the automated search-based generation of follow-up test cases. Results: Our method is supported by an extensible platform for Eclipse, called Gotten. We demonstrate its effectiveness by creating an MT environment for simulation-based testing of data centres and comparing it with existing tools; its suitability for conducting MT processes by replicating previous experiments; and its generality by building another MT environment for video-streaming APIs. Conclusion: Gotten is the first platform targeted at reducing the development effort of domain-specific MT environments. The environments created with Gotten facilitate the specification of metamorphic relations, their evaluation, and the generation of new test cases.

Summary: Automatizing Software Cognitive Complexity Reduction
Saborido, Rubén; Ferrer, Javier; Chicano García, José Francisco; Alba, Enrique. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
Software plays a central role in our lives nowadays. We use it almost anywhere, at any time, and for everything: to browse the Internet, to check our emails, and even to access critical services such as health monitoring and banking. Hence, its reliability and general quality are critical. As software increases in complexity, developers spend more time fixing bugs or making code work than designing or writing new code. Thus, improving software understandability and maintainability would translate into economic relief in the total cost of a project.
Different cognitive complexity measures have been proposed to quantify the understandability of a piece of code and, therefore, its maintainability. However, the cognitive complexity metric provided by SonarSource, integrated into SonarCloud and SonarQube, is quickly spreading in the software industry due to the popularity of these well-known static analysis tools for evaluating software quality. Although SonarQube suggests keeping a method's cognitive complexity no greater than 15, reducing a method's complexity is challenging for a human programmer, and there are no approaches to assist developers in this task. We model the cognitive complexity reduction of a method as an optimization problem whose search space contains all sequences of Extract Method refactoring opportunities. We then propose a novel approach that searches for feasible code extractions and allows developers to apply them, all in an automated way. This allows software developers to make informed decisions while reducing the complexity of their code. We evaluated our approach on 10 open-source software projects, and it was able to fix 78% of the 1,050 existing cognitive complexity issues reported by SonarQube. We finally discuss the limitations of the proposed approach and provide interesting findings and guidelines for developers.

Article: Benchmarking del rendimiento de proyectos software de código abierto mediante una herramienta colaborativa
Sánchez Ruiz, José Manuel; Olivero González, Miguel Ángel; Domínguez Mayo, Francisco José; Oriol, Xavier; Benavides Cuevas, David Felipe. Actas de las XXVII Jornadas de Ingeniería del Software y Bases de Datos (JISBD 2023), 2023-09-12.
The growing popularity of, and organizations' dependence on, open source software (OSS) projects makes it essential to ensure their optimal performance.
However, current performance evaluation tools lack adequate collaboration functionality, which hinders the objective evaluation and comparison of performance through standardized metrics and poses challenges for both developers and organizations in the technology market. This article presents Performance-Tracker, a benchmarking tool designed to evaluate and compare the performance of OSS projects through metrics that take into account the specific characteristics of OSS projects. Performance-Tracker uses an initial knowledge base of 50 open source projects and defines a participatory, collaborative contribution model for OSS projects, allowing communities to evaluate their performance objectively. The tool makes it possible to evaluate and compare the performance of OSS projects, providing valuable insights for improvement and laying down an initial framework to foster more efficient development based on participatory, collaborative learning.
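The kind of metric-based comparison described in this last entry can be illustrated with a minimal sketch. The metric names, weights, and min-max normalization scheme below are assumptions chosen for illustration, not Performance-Tracker's actual model:

```python
# Sketch: compare projects by normalizing each metric across projects
# and aggregating with weights. Metric names and weights are hypothetical.

def min_max_normalize(values):
    """Scale a list of raw metric values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]  # all projects tie on this metric
    return [(v - lo) / (hi - lo) for v in values]

def benchmark(projects, weights):
    """Return {project: weighted score in [0, 1]}.

    projects: {name: {metric: raw_value}}, higher raw values are better.
    weights:  {metric: weight}, weights summing to 1.
    """
    names = list(projects)
    normalized = {n: {} for n in names}
    for m in weights:
        # Normalize this metric's column across all projects.
        col = min_max_normalize([projects[n][m] for n in names])
        for n, v in zip(names, col):
            normalized[n][m] = v
    return {n: round(sum(w * normalized[n][m] for m, w in weights.items()), 3)
            for n in names}

scores = benchmark(
    {"proj-a": {"commit_freq": 120, "issue_close_rate": 0.8},
     "proj-b": {"commit_freq": 30, "issue_close_rate": 0.9}},
    weights={"commit_freq": 0.5, "issue_close_rate": 0.5},
)
```

Min-max normalization makes heterogeneous metrics comparable before weighting; in the two-project example above, each project leads on one metric, so both end up with the same overall score of 0.5.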