The definition of process-related key performance indicators (KPIs) is a key part of performance measurement, and one of the most challenging, because there is no single best way to define business-applicable KPIs that are both aligned with the strategic goals the organisation wants to achieve and, at the same time, achievable in its context. It requires the identification of relevant threshold values able to distinguish different levels of process execution quality. However, obtaining these values remains an organisation-specific task based on human abilities, and no consensual technique exists. To overcome this problem, this paper introduces a methodology for threshold determination that considers not only expert opinion but also data from real process executions.
The author Antonio Ruiz-Cortes has published 57 article(s):
Quality of an open source software ecosystem (OSS ecosystem) is key for different ecosystem actors such as contributors or adopters. In fact, considering several quality aspects (e.g., activeness, visibility, interrelatedness) as a whole may provide a measure of the healthiness of OSS ecosystems. The healthier an OSS ecosystem is, the more and better contributors and adopters it will gather. Some research tools have been developed to gather specific quality information from open source community data sources. However, no frameworks are available that can be used to evaluate their quality as a whole in order to obtain the health of an OSS ecosystem. To assess the health of these ecosystems, we propose to adopt robust principles and methods from the Service Oriented Computing field.
Representing performance indicators over business processes facilitates their understanding and the definition of how data are calculated and obtained. However, including several indicators on a process may require incorporating a large number of measurement elements, generating an excess of information and hindering the visual analysis of the data. This article presents an extension of the Visual PPINOT graphical notation, which allows performance indicators to be modelled graphically over business processes. Abstraction elements are incorporated into the notation to facilitate the representation of recurring patterns in indicators and to improve the readability of the process diagram. The implementation is validated using the SCOR Reference Model: a classification of its metrics is proposed, and these are used as a reference to study the differences between modelling with the original notation and with the extended notation.
As APIs become popular for building Service-Based Applications (SBA), API Gateways are increasingly used to facilitate API feature management. They offer API management functionalities such as pricing plan support, user authentication, API versioning or response caching. Some of the information that an API Gateway needs is already included in a Service Level Agreement (SLA), which providers use to describe the rights and obligations of the parties involved in the service. Unfortunately, current API Gateways do not use any SLA representation model or SLA underlying technology, thereby missing potential opportunities. In this paper we analyze the state of the art to explain the current situation and identify some research challenges on the way to SLA-driven API Gateways.
The Cloud Service Market has evolved into a complex landscape that challenges the decision making of users as they develop their purchasing process. In particular, we explore the case of cloud infrastructure (IaaS) providers as an example of the heterogeneous variety of purchasing options and discounts; this variability represents an important drawback in the decision-making process, where there is a need to compare and select the best option. In this work, we define a common model to describe purchasing models from different providers, taking such heterogeneity into account. This purchasing model represents a first step towards the automated support of decision-making problems during the purchasing process. In order to illustrate our approach, we apply the model in a real case study of IaaS purchasing.
Testing variability-intensive systems is a challenge due to the potentially huge number of derivable configurations. To alleviate this problem, many test case selection and prioritization techniques have been proposed with the aim of reducing the number of configurations to be tested and increasing their effectiveness. However, we found that these approaches do not exploit all available information, since they are mainly driven by functional information such as feature coverage. Furthermore, most of these works take a single-objective perspective (e.g. feature coverage), which may not reflect real scenarios where several goals need to be met (e.g. feature coverage and code changes coverage). In this context, we identify an important challenge: to take advantage of all available system information to guide the generation of test cases. As a first step towards a solution, we propose to study all this information, with special emphasis on non-functional properties, and to address test case generation as a multi-objective problem. We also describe some open issues to be explored that we hope will have an important impact on future evaluations.
Feature models represent all the products that can be built under a variability-intensive system such as a software product line, but they are not fully configurable. There is no explicit effort to define configuration models that enable making decisions on attributes and cardinalities in feature models that use these artefacts. In this paper we present configurable feature models, an evolution of feature models that integrates configuration models within them, improving the configurability of variability-intensive systems.
The current software industry is evolving into a service-centric scenario and, consequently, creating reliable service consumptions amongst organizations is a key point. In such a context, the concept of Service Level Agreement (SLA) represents the foundation to express the responsibilities (i.e. rights and obligations) of service consumer and provider during the consumption. However, although there has been a major effort in both academia and industry to develop languages and frameworks to support SLAs, important challenges remain, such as how to automate the detection of a violation of an SLA and how to react accordingly in order to claim a compensation. Specifically, in this paper we focus on the definition of the automated claiming of SLAs problem, characterized as the set of processes of gathering, checking and explaining the evidence associated with the service consumption within the context of an SLA. In order to identify the key requirements to automate the claiming of SLAs, we analyse the real case of the Simple Storage Service (S3) provided by Amazon, which is regulated by an SLA. Based on our analysis we propose a set of extensions to a current prominent SLA language specification (WS-Agreement) and conceptualize a list of research challenges to automate the management of the claiming process.
Resource Assignment Language (RAL) is a language for the selection of organisational resources that can be used, for example, for the assignment of human resources to business process activities. Its formal semantics have allowed the automation of analysis operations in several phases of the business process lifecycle. RAL was designed considering a specific organisational metamodel and pursuing specific purposes. However, it can be extended to deal with similar problems in different domains and under different circumstances. In this paper, a methodology to extend RAL is introduced, and an extension to support another organisational metamodel is described as a proof-of-concept.
Authors: Cristina Cabanillas, Manuel Resinas, Antonio Ruiz-Cortés, Jan Mendling
Keywords: Business Process Management, description logics, RAL, resource assignment, W3C Organisation Ontology
Availability is a key property of computational services and, therefore, is guaranteed by Service Level Agreements (SLAs) for the majority of infrastructure services, such as virtualization (Amazon EC2, Windows Azure, Google Cloud, Joyent, Rackspace, …) and storage (Amazon S3, Google Cloud Storage, …). These SLAs describe availability in natural language, and there are important differences in the scope and penalties that each service provides. Furthermore, the descriptions use domain-specific terms, so they are difficult for service customers to understand. These circumstances make availability analysis a tedious, error-prone and time-consuming task. In this paper, we describe this problem in detail and provide a first approach to deal with these SLAs, supported by current SLA analysis techniques.
Business Process Management Systems (BPMS) are increasingly used to support service composition, typically working with executable BP models that involve resources, including both automatic services and services provided by human resources. The appropriate selection of human resources is critical, as factors such as workload or skills have an impact on work performance. While priorities for automatic services are intensively researched, human resource prioritization has hardly been discussed. In classical workflow management, only resource assignment at BP design time, to select potential performers for activities, and resource allocation at run time, to choose actual performers, are considered. There is no explicit consideration of prioritizing potential performers to facilitate the selection of actual performers. It is also disregarded in professional solutions.
In this paper, we address this research gap and provide two contributions: (i) we conceptually define prioritized allocation based on preferences; and (ii) we propose a concrete way in which preferences over resources can be defined so that a resource priority ranking can be automatically generated. Our solution builds on the adaptation of a user preference model developed for the discovery and ranking of semantic web services, called SOUP [1, 2], to the domain at hand. As a proof of concept, we have extended the resource management tool CRISTAL (http://www.isa.us.es/cristal) with the SOUP component, using RAL [3] for resource selection.
1. J. M. García, D. Ruiz, and A. R. Cortés, "A Model of User Preferences for Semantic Services Discovery and Ranking," in ESWC (2), pp. 1-14, Springer, 2010.
2. J. M. García, M. Junghans, D. Ruiz, S. Agarwal, and A. R. Cortés, "Integrating semantic Web services ranking mechanisms using a common preference model," Knowl.-Based Syst., vol. 49, pp. 22-36, 2013.
3. C. Cabanillas, M. Resinas, and A. Ruiz-Cortés, "Defining and Analysing Resource Assignments in Business Processes with RAL," in ICSOC, vol. 7084, pp. 477-486, Springer, 2011.
This work was published in ICSOC 2013, vol. 8274, 374-388. It was partially supported by the EU-FP7, the EU Commission, the Spanish and the Andalusian R&D&I programmes (grants 318275, 284860, TIN2009-07366, TIN2012-32273, TIC-5906).
The difficulty of deciding on the purchase of an IaaS (Infrastructure as a Service) depends on the complexity of the purchasing options offered by its provider and on the complexity of the plan of the customer who wants to make the purchase. Services of this kind commonly offer many different usage configurations, and for each of them several purchasing options may be available. Deciding on the best purchase thus becomes a time-consuming, tedious and error-prone task. In this initial work, we characterise the problem with an illustrative case study and present the immediate challenges for improving the currently available support tools.
A key aspect in any process-oriented organisation is the evaluation of process performance for the achievement of its strategic and operational goals. Process Performance Indicators (PPIs) are a key asset to carry out this evaluation and, therefore, having an appropriate definition of these PPIs is crucial. After a careful review of the related literature and a study of the current picture in different real organisations, we conclude that no existing proposal allows PPIs to be defined in a way that is unambiguous and highly expressive, understandable by technical and non-technical users, and traceable with the business process (BP). Furthermore, it is also increasingly important to provide these PPI definitions with support for automated analysis, allowing implicit information to be extracted from them and from their relationships with the BP. In this work we present PPINOT, a tool that allows the graphical definition of PPIs together with their corresponding business processes, and their subsequent automated analysis.
There exist many available service ranking implementations, each one providing an ad hoc preference model that offers a different level of expressiveness. Consequently, applying a single implementation to a particular scenario constrains the user to defining preferences based on the underlying formalism. Furthermore, preferences from different ranking implementations' models cannot be combined in general, due to interoperability issues. In this article we present an integrated ranking implementation that enables the combination of three different ranking implementations developed within the EU FP7 SOA4All project. Our solution has been developed using PURI, a Preference-based Universal Ranking Integration framework based on a common, holistic preference model that allows exploiting synergies from the integrated ranking implementations, offering a single user interface to define preferences that acts as a façade to the integrated ranking implementations.
Business process (BP) modelling notations tend to pay little attention to resource management, unlike other aspects such as control flow or even data flow. On the one hand, the languages they offer to assign resources to BP activities are usually either not very expressive or hard to use for non-technical users. On the other hand, they barely address the subsequent analysis of resource assignments, which would enable the detection of problems and/or inefficiencies in the use of the resources available in a company. We present RAL Solver, a tool that addresses the two aforementioned issues and thus: (i) allows the specification of assignments of resources to BP activities in a reasonably simple way; and (ii) provides capabilities to automatically analyse resource assignments at design time, which allows extracting information from BP models and detecting inconsistencies and assignment conflicts.
Business processes (BPs) are often analysed in terms of control flow, temporal constraints, data and resources. Of all these aspects, resources have received much less attention than the others, especially control flow. Even the standard BP modelling notation (BPMN) does not provide concrete definitions for resource-related concepts. However, the participation of people in BPs is of utmost importance, both to supervise the execution of automatic activities and to carry out software-aided and/or manual tasks. Therefore, they should be considered when designing and modelling the BPs used in an organization.
In this paper we address human-resource management (resource management for short) in BP models. Firstly, we deal with the assignment of resources to the activities of a BP model, aiming at easing and improving the way resources can be associated with BP activities. Some approaches addressing a similar purpose have been introduced in recent years, but they are in general either too complex to be used by technically unskilled people, or not expressive enough to provide powerful resource management in workflows (WFs) and BPs.
Software Product Line (SPL) engineering is a reuse strategy to develop families of related systems. From common assets, different software products are assembled, reducing production costs and time-to-market. Products in SPLs are defined in terms of features. A feature is an increment in product functionality. Feature models are widely used to represent all the valid combinations of features (i.e. products) of an SPL in a single model, in terms of features and the relations among them. The automated analysis of feature models deals with the computer-aided extraction of information from feature models. Typical analysis operations allow determining whether a feature model is void (i.e. it represents no products), whether it contains errors (e.g. features that cannot be part of any product) or what the number of products of the SPL represented by the model is. Catalogues with up to 30 analysis operations on feature models and multiple analysis solutions have been reported.
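The analysis operations mentioned above can be illustrated with a brute-force sketch in Python. The toy feature model, its constraints and the three operations shown are illustrative assumptions, not the catalogue's tooling; real analysers translate the model for SAT/CSP/BDD solvers instead of enumerating configurations:

```python
from itertools import product

# Toy feature model: a root with an or-group (gui, cli) and one
# cross-tree constraint (logging requires gui). Purely illustrative.
FEATURES = ["root", "gui", "cli", "logging"]

def is_valid(cfg):
    """Check the toy model's constraints for one configuration."""
    return (cfg["root"]                            # root always selected
            and (cfg["gui"] or cfg["cli"])         # or-group under root
            and (not cfg["logging"] or cfg["gui"]))  # logging => gui

def all_products():
    """Enumerate every valid product (feasible only for tiny models)."""
    prods = []
    for bits in product([False, True], repeat=len(FEATURES)):
        cfg = dict(zip(FEATURES, bits))
        if is_valid(cfg):
            prods.append(frozenset(f for f in FEATURES if cfg[f]))
    return prods

def is_void():
    """'Void model' operation: does the model represent no products?"""
    return len(all_products()) == 0

def dead_features():
    """'Dead feature' operation: features that appear in no product."""
    prods = all_products()
    return {f for f in FEATURES if all(f not in p for p in prods)}
```

For this toy model, `all_products()` yields 5 products, the model is not void, and no feature is dead.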
Business Process (BP) families are made up of BP variants that share commonalities but also show differences to accommodate the specific necessities of different application contexts (i.e., country regulations, industrial domain, etc.). Even though there are modelling techniques to represent these families (e.g., C-EPC, Provop), there is no work aimed at the performance measurement of the different BP variants that make up the family. Process Performance Indicators (PPIs) are commonly used to study and analyse the performance of business processes. However, the application of such indicators in BP families increases the modelling and management complexity of the whole family. To deal with this complexity, this work introduces a modelling solution for managing PPI variability based on the concepts of change patterns for process families (CP4PF). The proposed solution includes a set of patterns aimed at (1) reducing the number of operations required to specify PPIs and (2) ensuring PPI family correctness.
The myriad of cloud service providers, as well as their overwhelming variety of configuration and purchasing options, results in a highly complex purchasing scenario. Furthermore, users may specify their needs for cloud services provisioning with certain scheduling restrictions. There is a need for automatic support for obtaining an appropriate purchasing plan that takes into account both service configurations and scheduling needs, while allowing comparison among different providers and their various offerings. In this work, we present an automatic purchasing plan generator, which analyzes cloud service offerings from several providers to obtain an optimized purchasing plan according to user needs. From the obtained purchasing plan, our solution can provide the corresponding charge plan, possibly including discounts, which serves the purpose of comparing offerings to get the best option.
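As a rough illustration of the kind of comparison such a generator automates, the sketch below picks the cheapest offering for a usage plan. The providers, prices and the single "hours" dimension are invented for the example; real offerings have many more configuration dimensions, scheduling constraints and discount schemes:

```python
# Hypothetical, simplified pricing data for two providers. A "reserved"
# option trades an upfront fee for a lower hourly rate.
OFFERINGS = [
    {"provider": "A", "option": "on-demand", "hourly": 0.10, "upfront": 0.0},
    {"provider": "A", "option": "reserved",  "hourly": 0.04, "upfront": 300.0},
    {"provider": "B", "option": "on-demand", "hourly": 0.09, "upfront": 0.0},
]

def charge(offering, hours):
    """Total charge of one offering for a given number of usage hours."""
    return offering["upfront"] + offering["hourly"] * hours

def best_purchase(hours):
    """Return (offering, total cost) minimising the charge for the plan."""
    return min(((o, charge(o, hours)) for o in OFFERINGS),
               key=lambda pair: pair[1])
```

Note how the best option flips with the usage plan: for 1,000 hours provider B's on-demand rate wins, while for 10,000 hours provider A's reserved option amortises its upfront fee and becomes cheapest.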
In recent years the use of service level agreements (SLAs) has been growing rapidly to describe the rights and obligations of the parties involved in service provisioning (typically the service consumer and the service provider); amongst other information, an SLA may define guarantees associated with service level objectives (SLOs), which normally represent key performance indicators of either the consumer or the provider. In case a guarantee is under- or over-fulfilled, SLAs may also define compensations (i.e. penalties or rewards). In such a context, there have been important steps towards the automation of the analysis of SLAs. One of these steps is a characterization model of SLAs with compensations proposed by the authors in previous work; another is the standardisation effort in SLA notation made by WS-Agreement. However, real-world SLAs include complex concepts that must be considered, namely: (i) SLA terms that specify compensations without an explicit SLO; and (ii) a limit on the compensations. In this paper we extend our prior characterization model to consider these complex concepts. Specifically, (i) we provide up to five real-world scenarios whose SLAs incorporate the aforementioned new concepts; (ii) we extend our model for compensable guarantees to consider terms without an explicit SLO; and (iii) we provide a novel WS-Agreement-based syntax to model SLAs with compensations considering these concepts. These contributions aim to establish a foundation for tools that could provide automated support for the modelling and analysis of SLAs with compensations.
Summary of the article published as:
Cristina Cabanillas, David Knuplesch, Manuel Resinas, Manfred Reichert, Jan Mendling, Antonio Ruiz-Cortés: RALph: A Graphical Notation for Resource Assignments in Business Processes. International Conference on Advanced Information Systems Engineering (CAiSE) 2015: 53-68. DOI: 10.1007/978-3-319-19069-3_4.
Process mining allows the extraction of useful information from event logs and historical data of business processes. This information can improve the performance of these processes, but it is generally obtained after they have finished. Therefore, predictive monitoring of running business process instances is needed in order to provide proactive and corrective actions that improve process performance and mitigate possible risks in real time. This monitoring allows the prediction of evaluation metrics for a running process. In this context, this work describes a general methodology for a business process monitoring system for the prediction of process performance indicators, and its stages, such as the processing and encoding of log events, the calculation of aggregated attributes, and the application of a data mining algorithm.
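A minimal sketch of the encoding stage of such a monitoring pipeline, assuming events are (activity, timestamp) pairs; the aggregated attributes and the hand-written rule standing in for a trained data mining model are illustrative assumptions:

```python
def encode_prefix(events):
    """Aggregate a running instance's prefix of (activity, timestamp)
    events into a fixed-size feature vector for a mining algorithm."""
    activities = [a for a, _ in events]
    timestamps = [t for _, t in events]
    return {
        "n_events": len(events),
        "elapsed": timestamps[-1] - timestamps[0] if events else 0,
        # repeated activities hint at rework loops
        "n_rework": len(activities) - len(set(activities)),
    }

def predict_slow(features, elapsed_threshold=10, rework_threshold=1):
    """Toy stand-in for a trained classifier: flag instances likely to
    violate a cycle-time PPI when already slow or showing rework."""
    return (features["elapsed"] > elapsed_threshold
            or features["n_rework"] >= rework_threshold)
```

In a real system the encoded prefixes of completed instances, labelled with the actual PPI outcome, would train the classifier that `predict_slow` imitates.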
Summary of the article published as:
A. del Río-Ortega et al.: Modelling Service Level Agreements for Business Process Outsourcing Services. In: CAiSE 2015: 485-500.
In the past, elasticity and commitment in business processes were under-explored. But as businesses increasingly exploit pay-per-use resources in the cloud for on-demand needs, elasticity and commitment have become important issues. Here, the authors discuss the value of using elastic resources and commitments to create more dynamic organizations that can easily balance the need to be adaptable and flexible, while also retaining a high level of manageability.
Summary of the article published as:
Pablo Fernandez, Hong-Linh Truong, Schahram Dustdar, and Antonio Ruiz-Cortes: Programming Elasticity and Commitment in Dynamic Processes. IEEE Internet Computing, vol. 19, no. 2, pp. 68-74, 2015.
Predictive monitoring of running business process instances provides proactive and corrective actions to improve process performance and mitigate possible risks in real time. Such monitoring allows the prediction of evaluation metrics or performance indicators of a running process. In this context, this work defines an architecture for the indicator prediction process which, moreover, contemplates updating the predictive model over time.
The term API Economy is increasingly used to describe the change of vision in how APIs can add value to organizations. Furthermore, greater automation of RESTful API management can represent a competitive advantage for a company. New proposals are emerging to automate some API governance tasks and increase ease of use (e.g. generation of code and documentation). Despite this, non-functional aspects are often addressed in a highly specific manner, or no solution for automatic governance exists at all. Nevertheless, these properties are already defined in natural language in the Service Level Agreement (SLA) that customer and provider have established. In this paper, we carry out a study on the *aaS industry and analyze current API modeling and SLA modeling proposals in order to identify the open challenges for automatic RESTful API governance.
The publication and, when possible, automation of public services on the Internet provides advantages for both citizens and government: for the former because it promotes transparency and control over government actions and avoids unneeded in-person visits, and for the latter because information systems help to decrease human resource costs. A number of efforts have been made by public administrations to provide precise service information online. As this service information is incrementally published, manually navigating and querying these services becomes a difficult task that automated mechanisms could support based on service catalogs. In this paper we introduce ongoing work proposing the use of ontologies to enable the automated processing (i.e. search and validation) of these service catalogs.
Monitoring and measuring the performance of business processes are valuable tasks that facilitate the identification of possible improvement areas within the organisation according to the fulfillment of its strategic and business goals. A large number of techniques and tools have been developed with the aim of measuring process performance, but most of them target structured processes, usually defined using BPMN. The objective of this paper is to identify and analyse the feasibility of using an existing mechanism for the definition and modelling of process performance indicators (PPINOT) in a context different from structured BPMN processes, namely Cases, usually modelled using CMMN. This analysis is based on the similarities between CMMN and BPMN, and on the characteristics and attributes used by PPINOT to obtain values from the process.
Aiming to be as competitive as possible, organisations constantly seek to improve their business processes, applying corrective actions when needed. However, the actual analysis and decision making for those actions is typically a challenging task relying on extensive human-in-the-loop expertise. Specifically, this improvement process usually involves: (i) analysing evidence to understand the current behavior; (ii) deciding the actual objectives (usually defined in Service Level Agreements, SLAs, based on intuition); and (iii) establishing the improvement plan. In this ongoing work, we aim to propose a data-driven, intuition-free methodology to define an SLA as a governance element that specifies the service level objectives in an explicit way. Such a methodology considers process performance indicators that are analysed by means of inference, optimization, and simulation techniques. In order to motivate and exemplify our work we address a healthcare scenario.
S. Segura, G. Fraser, A. B. Sanchez and A. Ruiz-Cortés: A Survey on Metamorphic Testing. IEEE Transactions on Software Engineering, vol. 42, no. 9, pp. 805-824, Sept. 2016. https://doi.org/10.1109/TSE.2016.2532875
Quality indicators:
- Reference journal in the Software Engineering area (CS-SE: 20/106).
- It has received 9 citations since its publication in February 2016 (plus another 5-7 citations to appear in the proceedings of the second international workshop on metamorphic testing).
- We have been invited to present the work at ICSE17 as part of the journal-first initiative (see the conference programme).
- International collaboration with Professor Gordon Fraser.
https://www.cs.montana.edu/met17/
http://icse2017.gatech.edu/?q=technical-research-accepted
Test case prioritization schedules test cases for execution in an order that attempts to accelerate the detection of faults. The order of test cases is determined by prioritization objectives such as covering code or critical components as rapidly as possible. The importance of this technique has been recognized in the context of Highly-Configurable Systems (HCSs), where the potentially huge number of configurations makes testing extremely challenging. However, current approaches for test case prioritization in HCSs suffer from two main limitations. First, the prioritization is usually driven by a single objective, which neglects the potential benefits of combining multiple criteria to guide the detection of faults. Second, instead of using industry-strength case studies, evaluations are conducted using synthetic data, which provides no information about the effectiveness of different prioritization objectives. In this paper, we address both limitations by studying 63 combinations of up to three prioritization objectives in accelerating the detection of faults in the Drupal framework. Results show that non-functional properties, such as the number of changes in the features, are more effective than functional metrics extracted from the configuration model. Results also suggest that multi-objective prioritization typically results in faster fault detection than mono-objective prioritization.
Journal quality indicators: Journal of Systems and Software (Elsevier), ISSN: 0164-1212
- 2015 impact factor: 1.424
- 5-year impact factor: 1.767
- Indexed in two categories: Computer Science / Theory & Methods: 31/105 (Q2); Computer Science / Software Engineering: 24/106 (Q1)
Other data:
- CiteScore: 2.93
- Source Normalized Impact per Paper (SNIP): 2.415
- SCImago Journal Rank (SJR): 0.897
Quality indicators of the paper itself:
- Citations according to Google Scholar: 3
- Reads according to ResearchGate: 73
Authors: José Antonio Parejo Maestre, Ana Belén Sánchez Jerez, Sergio Segura, Antonio Ruiz-Cortés, Roberto Erick Lopez-Herrejón, Alexander Egyed
Keywords: multi-objective optimization - test case prioritization - highly-configurable systems
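The greedy combination of prioritization objectives described above can be sketched in a few lines; the configurations, metrics and normalisation constants below are invented for illustration and do not reproduce the paper's experimental setup:

```python
# Sketch of multi-objective test case prioritization over HCS configurations.
# Hypothetical data: each configuration carries a functional metric (feature
# coverage) and a non-functional one (number of changes in its features).

def prioritize(test_cases, objectives):
    """Greedily order test cases by the sum of normalized objective scores."""
    return sorted(test_cases,
                  key=lambda tc: sum(obj(tc) for obj in objectives),
                  reverse=True)

test_cases = [
    {"id": "c1", "features": 5, "changes": 2},
    {"id": "c2", "features": 3, "changes": 7},
    {"id": "c3", "features": 8, "changes": 1},
]

# Two objectives combined, each normalized by its maximum value in the suite.
objectives = [
    lambda tc: tc["features"] / 8,   # functional: feature coverage
    lambda tc: tc["changes"] / 7,    # non-functional: recent changes
]

order = [tc["id"] for tc in prioritize(test_cases, objectives)]
print(order)  # ['c2', 'c3', 'c1']
```

Note how the non-functional objective pushes c2 to the front even though it covers fewer features, which is the kind of trade-off the multi-objective study measures.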
Model transformations play a cornerstone role in Model-Driven Engineering as they provide the essential mechanisms for manipulating and transforming models. The use of assertions for checking their correctness has been proposed in several works. However, it is still challenging and error-prone to locate the faulty rules, and the situation gets more critical as the size and complexity of model transformations grow, where manual debugging is no longer possible. Spectrum-Based Fault Localization (SBFL) is a technique for software debugging that uses the results of test cases and their corresponding code coverage information to estimate the likelihood of each program component (e.g., statements) being faulty. This paper describes a proposal for applying SBFL to locate the faulty rules in ATL model transformations. The approach aims at automatically detecting the transformation rule that makes an assertion fail.
Process performance indicators (PPIs) play an important role in monitoring the performance of operational procedures. Both defining and measuring suitable PPIs are key tasks for aligning strategic business objectives with the operational implementation of a process. A major challenge in this regard is that perspectives on the same real-world phenomenon differ among the stakeholders that are involved in these tasks. Since the formulation of PPIs is typically a managerial concern, there is a risk that these do not match with the exact operational and technical characteristics of business processes. To bridge this gap, the concepts described in PPIs must first be linked to their corresponding process elements. Establishing these links is paramount for the monitoring of process performance.
Without them, the values of PPIs cannot be computed automatically. However, the necessary links must currently be established manually, a task which is tedious and error-prone due to the aforementioned incoherence between the different perspectives. The goal of our work is to overcome the effort involved in the manual creation of links by automating this step. To achieve this, we developed an approach that automatically aligns textual PPI descriptions with the relevant parts of a process model. The approach takes as input a textual PPI description and a process model to which the PPI relates. Given this input, the approach generates an alignment in three steps. (1) Type classification: we use a decision tree classifier to identify the type of a given PPI, which is important because it affects the number and kinds of process model elements that should be aligned to the PPI. (2) PPI parsing: we parse the textual PPI description to extract those phrases that relate to specific parts of a process, making use of natural language processing techniques. (3) Alignment to the process model: finally, given the identified measure type and the extracted phrases, we compute an alignment between the phrases and the process model. A quantitative evaluation with a set of 173 PPIs obtained from industry and reference frameworks demonstrates that our automated approach produces satisfactory results.
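A minimal sketch of the three-step pipeline, with a keyword heuristic standing in for the decision tree classifier and a crude stemming match standing in for the NLP machinery (the PPI text and model elements are invented):

```python
# Toy version of the three-step alignment: type classification, PPI parsing,
# and alignment to process model elements. All rules here are illustrative.

def classify_type(ppi_text):
    """Step 1: decide the PPI measure type from surface cues."""
    if "time" in ppi_text or "duration" in ppi_text:
        return "time"
    if "number of" in ppi_text or "percentage" in ppi_text:
        return "count"
    return "data"

def parse_phrases(ppi_text):
    """Step 2: extract phrases that may refer to process elements."""
    # Stand-in for NLP parsing: drop the leading article and split on 'and'.
    return [p.strip() for p in ppi_text.replace("The ", "").split(" and ")]

def align(phrases, model_elements):
    """Step 3: match each phrase to the most similar model element."""
    def overlap(phrase, element):
        # crude stemming: compare the first five characters of each word
        stems = lambda s: {w.lower()[:5] for w in s.split()}
        return len(stems(phrase) & stems(element))
    return {p: max(model_elements, key=lambda e: overlap(p, e)) for p in phrases}

ppi = "The time between receipt of the order and approval of the order"
elements = ["Receive order", "Approve order", "Ship goods"]
print(classify_type(ppi))                      # time
print(align(parse_phrases(ppi), elements))
```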
S. Segura, J. A. Parejo, J. Troya and A. Ruiz-Cortés, «Metamorphic Testing of RESTful Web APIs», in IEEE Transactions on Software Engineering, Oct 2017 (online), vol. PP, no. 99, pp. 1-1. https://doi.org/10.1109/TSE.2017.2764464 Accepted for presentation at ICSE 2018 in the journal-first category: https://www.icse2018.org/track/icse-2018-Journal-first-papers#event-overview 166 reads on ResearchGate since its publication (118 on IEEE Xplore).
Web service based applications often invoke services provided by third parties in their workflow. The Quality of Service (QoS) provided by the invoked supplier can be expressed in terms of a Service Level Agreement specifying the values contracted for particular aspects like cost or throughput, among others. In this scenario, intelligent systems can support the engineer in scrutinising the service market in order to select those candidates that best fit the expected composition, focusing on different QoS aspects. This search problem, also known as QoS-aware web service composition, is characterised by the presence of many diverse QoS properties to be simultaneously optimised from a multi-objective perspective. Nevertheless, as the number of QoS properties considered during the design phase increases and a larger number of decision factors come into play, it becomes more difficult to find the most suitable candidate solutions, so more sophisticated techniques are required to explore and return diverse, competitive alternatives. With this aim, this paper explores the suitability of many-objective evolutionary algorithms for addressing the binding problem of web services on the basis of a real-world benchmark with 9 QoS properties. A complete comparative study demonstrates that these techniques, never before applied to this problem, can achieve a better trade-off between all the QoS properties, or even promote specific QoS properties while keeping high values for the rest. In addition, this search process can be performed within a reasonable computational cost, enabling its adoption by intelligent and decision-support systems in the field of service-oriented computing. Published in: Expert Systems with Applications, vol. 72, pp. 357-370, 2017. DOI: http://dx.doi.org/10.1016/j.eswa.2016.10.047. IF (2016): 3.928 [18/133 Artificial Intelligence] (Q1).
Authors: Aurora Ramírez, José Antonio Parejo, José Raúl Romero, Sergio Segura, Antonio Ruiz-Cortés
Keywords: many-objective evolutionary algorithms - multi-objective optimization - QoS-aware web service composition
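At the core of many-objective service selection is the Pareto dominance test used to compare candidate bindings; the sketch below uses invented QoS vectors, with all objectives expressed so that lower is better:

```python
# Pareto dominance and non-dominated front extraction, the building block of
# the many-objective evolutionary algorithms discussed above.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# candidate service bindings described by (cost, latency, 1 - availability)
candidates = {
    "s1": (3.0, 120, 0.01),
    "s2": (2.5, 110, 0.01),
    "s3": (4.0, 100, 0.02),
}

front = [n for n, q in candidates.items()
         if not any(dominates(o, q)
                    for m, o in candidates.items() if m != n)]
print(front)  # ['s2', 's3'] -- s1 is dominated by s2 in every objective
```

With 9 QoS properties the front grows quickly, which is precisely why many-objective algorithms are needed instead of classic multi-objective ones.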
Model transformations play a cornerstone role in Model-Driven Engineering (MDE) as they provide the essential mechanisms for manipulating and transforming models. Checking whether the output of a model transformation is correct is a manual and error-prone task, referred to as the oracle problem. Metamorphic testing alleviates the oracle problem by exploiting the relations among different inputs and outputs of the program under test, so-called metamorphic relations (MRs). One of the main challenges in metamorphic testing is the automated inference of likely MRs. This paper proposes an approach to automatically infer likely MRs for ATL model transformations, where the tester does not need to have any knowledge of the transformation. The inferred MRs aim at detecting faults in model transformations in three application scenarios, namely regression testing, incremental transformations and migrations among transformation languages. In the experiments performed, the inferred likely MRs have proved to be quite accurate, with a precision of 96.4% from a total of 4101 true positives out of 4254 MRs inferred. Furthermore, they have been useful for identifying mutants in regression testing scenarios, with a mutation score of 93.3%. Finally, our approach can be used in conjunction with current approaches for the automatic generation of test cases. Article published in The Journal of Systems and Software, Vol. 136, pp. 188-208 (available online May 2017; final published version February 2018) – Q1. http://dx.doi.org/10.1016/j.jss.2017.05.043
Authors: Javier Troya, Sergio Segura, Antonio Ruiz-Cortés
Keywords: Automatic inference - Generic approach - Metamorphic relations - metamorphic testing - model transformations - Model-Driven Engineering
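The idea of a likely MR can be illustrated on a toy transformation; both the transformation and the relation below are invented and far simpler than the ATL rules targeted by the approach:

```python
# A toy "class-to-table" transformation and a likely metamorphic relation:
# adding one class to the source model should add exactly one table to the
# target model. No oracle for the exact output is needed to run this check.

def transform(model):
    """Toy model transformation: one table per source class."""
    return {"tables": [c + "_table" for c in model["classes"]]}

def mr_holds(model, new_class):
    """Likely MR: |tables(after)| == |tables(before)| + 1."""
    before = transform(model)
    after = transform({"classes": model["classes"] + [new_class]})
    return len(after["tables"]) == len(before["tables"]) + 1

print(mr_holds({"classes": ["Person", "Order"]}, "Invoice"))  # True
```

A faulty (e.g., mutated) transformation that drops or duplicates tables would violate the relation, which is how such MRs flag mutants in regression testing.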
There are currently millions of smartphone applications that must run correctly in highly varied and changing software, hardware and connectivity environments. Testing such applications is therefore a major challenge, in which even slight productivity improvements yield great benefits for users and developers. This article presents a first, work-in-progress approach to the automation of functional and performance testing of Android applications using search-based algorithms. The feasibility of the proposal has been validated by applying it to two simple applications, generating test cases that detect application crashes and maximise execution time.
Summary of the contribution
Process performance indicators (PPIs) allow the quantitative evaluation of business processes (BPs), providing essential information for decision making. However, PPI management is not only restricted to the evaluation phase of the BPM lifecycle, but also includes a number of steps that must be carried out throughout the whole lifecycle. PPIs need to be defined, the corresponding BPs must be instrumented, PPI values have to be computed, then they can be monitored and analysed using techniques such as business activity monitoring or process mining, and finally, a PPI redefinition can be required in case of the evolution of either the associated BPs or the PPIs themselves. It is common practice today that BPs and PPIs are modelled separately, using graphical notations for the former and natural language for the latter. This approach makes PPI definitions simple to read and write, but it hinders maintaining consistency between BPs and PPIs. It also requires their manual translation into lower-level implementation languages for their operationalisation, which is a time-consuming, error-prone task because of the ambiguities inherent to natural language definitions. In this article we present Visual PPINOT, a graphical notation for defining PPIs together with BP models, aimed at facilitating and automating PPI management. This is mainly achieved by means of the following features. First, Visual PPINOT is based on the PPINOT metamodel, which provides a precise and unambiguous definition of PPIs, thus allowing their automated processing in the different activities of the lifecycle. Second, Visual PPINOT provides traceability by design between PPIs and BPs, because PPIs must be explicitly connected to BP elements, thus avoiding inconsistencies and promoting their co-evolution.
Finally, Visual PPINOT enables a definition of PPIs that is independent of the platforms used to support them in the BP lifecycle, which reduces vendor lock-in and allows definitions of PPIs encompassing several information systems. In addition, it improves current state-of-the-art proposals in terms of expressiveness and of providing an explicit visualisation of the link between PPIs and BPs. The reference implementation, developed as a complete tool suite, has allowed its validation in a multiple-case study, in which five dimensions were studied: expressiveness, precision, automation, understandability, and traceability.
Summary of the contribution
Predictive monitoring of business processes is a challenging topic of process mining which is concerned with the prediction of process indicators of running process instances. The main value of predictive monitoring is to provide information in order to take proactive and corrective actions to improve process performance and mitigate risks in real time. In this paper, we present an approach for predictive monitoring based on the use of evolutionary algorithms. Our method provides a novel event window-based encoding and generates a set of decision rules for the run-time prediction of process indicators according to event log properties. These rules can be interpreted by users to extract further insight into the business processes while keeping a high level of accuracy. Furthermore, a full software stack, consisting of a tool to support the training phase and a framework that enables the integration of run-time predictions with business process management systems, has been developed. The obtained results show the validity of our proposal for two large real-life datasets: BPI Challenge 2013 and the IT Department of the Andalusian Health Service (SAS).
Authors: Alfonso E. Márquez-Chamorro, Manuel Resinas, Antonio Ruiz-Cortés
Keywords: Business process indicator - Business Process Management - Evolutionary algorithm - Predictive monitoring - Process Mining
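The event window-based encoding and the kind of interpretable rule the evolutionary search could output might look as follows; the event attributes, window size and thresholds are invented for illustration:

```python
# Sketch of run-time predictive monitoring: encode the last events of a
# running trace as features, then evaluate a human-readable decision rule.

def encode_window(trace, size):
    """Encode the last `size` events of a running trace as a feature dict."""
    window = trace[-size:]
    return {
        "last_activity": window[-1]["activity"],
        "n_events": len(trace),
        "elapsed": window[-1]["t"] - window[0]["t"],  # time span of the window
    }

def rule(features):
    """Example rule: predict a violation for long, slow-moving cases."""
    return features["n_events"] > 5 and features["elapsed"] > 48

# A running incident-management trace (timestamps in hours).
trace = [{"activity": "Open", "t": 0}, {"activity": "Assign", "t": 10},
         {"activity": "Wait", "t": 30}, {"activity": "Escalate", "t": 50},
         {"activity": "Wait", "t": 70}, {"activity": "Assign", "t": 90}]

f = encode_window(trace, size=3)
print(f, "-> violation predicted:", rule(f))
```

In the actual approach the thresholds and attribute combinations are evolved from the event log rather than hand-written, but the rules stay readable in exactly this way.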
Software architecture tendencies are shifting to a microservice paradigm. In this context, RESTful APIs are being established as the standard for integration. API designers often identify two key issues to be competitive in such a growing market: on the one hand, the generation of accurate documentation of the behavior and capabilities of the API to promote its usage; on the other hand, the design of a pricing plan that fits the potential API users' needs. However, while an increasing number of API modeling alternatives is emerging, there is a lack of proposals on the definition of the flexible pricing plans usually contained in Service Level Agreements (SLAs). In this paper we propose two different modeling techniques for the description of SLAs in a RESTful API context: iAgree and SLA4OAI.
Summary of the contribution
Process-oriented organisations need to manage the different types of responsibilities their employees may have w.r.t. the activities involved in their business processes. Although several approaches provide support for responsibility modelling, in current Business Process Management Systems (BPMS) the only responsibility considered at run time is the one related to performing the work required for activity completion. Others, like accountability or consultation, must be implemented by manually adding activities to the executable process model, which is time-consuming and error-prone. This paper addresses this limitation by enabling current BPMS to execute processes in which people with different responsibilities interact to complete the activities. A metamodel based on Responsibility Assignment Matrices (RAM) is designed to model the responsibility assignment for each activity, and a template-based mechanism that automatically transforms such information into BPMN elements is developed. The approach is platform-independent and hence the output models can be interpreted and executed by any BPMS that supports BPMN. Furthermore, the original structure of the process model remains unchanged, as the templates for modelling responsibilities are defined at subprocess level. This provides transparency and does not affect the readability of the original model. As our approach does not enforce any specific behaviour, and new templates can be modelled to specify the interaction that best suits the activity requirements, it offers high flexibility and generalisability. Moreover, template libraries can be created and reused in different processes. We provide a reference implementation and build a library of templates for a well-known set of responsibilities.
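A sketch of how a RAM entry could be expanded into the extra interactions an activity implies; the roles, responsibilities and template names are invented placeholders for the BPMN templates described above:

```python
# Toy RACI-style responsibility assignment matrix and its expansion into the
# sub-activities a BPMS would need to execute, beyond the performer's work.

RAM = {  # activity -> role -> responsibility (R/A/C letters as in RACI)
    "Approve budget": {"Manager": "A", "Analyst": "R", "CFO": "C"},
}

TEMPLATES = {  # responsibility -> name of the BPMN fragment to weave in
    "R": "perform-task",
    "A": "approve-result",
    "C": "request-consultation",
}

def expand(activity):
    """List (role, fragment) pairs the subprocess-level template generates."""
    return [(role, TEMPLATES[resp]) for role, resp in RAM[activity].items()]

for role, fragment in expand("Approve budget"):
    print(role, "->", fragment)
```

Because the expansion happens at subprocess level, the top-level process model keeps its original structure, matching the transparency claim above.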
Summary of the contribution
The emergence of the cloud computing paradigm has brought a significant change to the information technology industry, both for service providers and for consumers. Services such as Amazon Elastic Computing Cloud (EC2) or Google Compute Engine offer virtualised computing and storage resources (commonly called Infrastructure as a Service, or IaaS), which customers can acquire to reduce the operating costs of their systems compared with provisioning the same computing infrastructure on premises. However, provisioning cloud services is a very complex task given the overwhelming variety of providers, configurations and purchase options available. In this scenario, further difficulties arise when comparing the offerings of different providers, due to the heterogeneity in the description of configurations, purchase options, or even applicable discounts. Moreover, the specific needs of consumers may include additional constraints to account for a previously defined schedule of the number of IaaS instances they will need at given times. Although some tools and online calculators allow searching for specific IaaS configurations, they do not take into account issues such as scheduling and the optimisation of purchase options. In this work we present an automated analysis framework that can analyse and compare cloud service offerings from different providers to obtain an optimal provisioning plan according to the consumers' needs. This plan specifies the number and type of IaaS instances to be acquired, together with the schedule of their use.
We have developed a prototype that has been validated in a scenario of virtualising laboratory classes, comparing the options of two providers.
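The kind of decision the provisioning plan automates can be sketched for a single instance; the prices and hours below are invented, and the real framework compares many providers and purchase options at once:

```python
# Sketch of one provisioning decision: given a usage schedule (total hours of
# use), choose between on-demand purchases and an up-front reserved instance.

def plan(demand_hours, on_demand_price, reserved_upfront, reserved_hourly):
    """Return the cheaper purchase option and its cost for one instance."""
    on_demand_cost = demand_hours * on_demand_price
    reserved_cost = reserved_upfront + demand_hours * reserved_hourly
    option = "reserved" if reserved_cost < on_demand_cost else "on-demand"
    return option, min(on_demand_cost, reserved_cost)

# A lab-class scenario: 300 hours of use over the term.
print(plan(300, on_demand_price=0.10, reserved_upfront=20, reserved_hourly=0.03))
```

The break-even point between the two options shifts with the schedule, which is why the plan must take the time dimension into account rather than just comparing hourly prices.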
Sergio Segura, Javier Troya, Amador Durán, Antonio Ruiz-Cortés. «Performance Metamorphic Testing: A Proof of Concept». Information and Software Technology, 98:1-4, 2018. https://doi.org/10.1016/j.infsof.2018.01.013 Impact indicators: – JCR IF: 2.62, top 15% (Q1) CS/SE. – 2 citations in Google Scholar (http://bit.ly/2WnrYYj). A preliminary version of this paper was presented in the New Ideas and Emerging Results track at ICSE 2017, with an acceptance rate of 16% (14 papers accepted out of 85 submissions). All four reviewers agreed on the value of the work, with an overall score of 9 (out of 12) and a novelty score of 11 (out of 12). Sergio Segura, Javier Troya, Amador Durán, Antonio Ruiz-Cortés. «Performance metamorphic testing: motivation and challenges». In Proceedings of the 39th International Conference on Software Engineering: New Ideas and Emerging Results (ICSE NIER'17) Track. IEEE Press, Piscataway, NJ, USA, 7-10, 2017. [Acceptance rate: 16%. Main track ranked as Class 1 in SCIE Ranking.] https://doi.org/10.1109/ICSE-NIER.2017.16
Model transformations play a cornerstone role in Model-Driven Engineering as they provide the essential mechanisms for manipulating and transforming models. The correctness of software built using MDE techniques greatly relies on the correctness of model transformations. However, it is challenging and error-prone to debug them, and the situation gets more critical as the size and complexity of model transformations grow, where manual debugging is no longer possible. Spectrum-Based Fault Localization (SBFL) uses the results of test cases and their corresponding code coverage information to estimate the likelihood of each program component (e.g., statements) being faulty. In this paper we present an approach to apply SBFL for locating the faulty rules in model transformations. We evaluate the feasibility and accuracy of the approach by comparing the effectiveness of 18 different state-of-the-art SBFL techniques at locating faults in model transformations. Evaluation results revealed that the best techniques, namely Kulczynski2, Mountford, Ochiai and Zoltar, lead the debugger to inspect a maximum of three rules in order to locate the bug in around 74% of the cases. Furthermore, we compare our approach with a static approach for fault localization in model transformations, observing a clear superiority of the proposed SBFL-based method.
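The Ochiai formula, one of the best-performing techniques reported above, can be illustrated directly; the per-rule spectrum counts below are invented:

```python
import math

# Spectrum-based suspiciousness with the Ochiai formula. For each rule:
#   ef = failing tests that execute the rule
#   nf = failing tests that do not execute it
#   ep = passing tests that execute it

def ochiai(ef, nf, ep):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep))."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# rule -> (ef, nf, ep), collected from running the test suite with coverage
spectrum = {"Rule_A": (3, 0, 1), "Rule_B": (1, 2, 4), "Rule_C": (0, 3, 5)}

ranking = sorted(spectrum, key=lambda r: ochiai(*spectrum[r]), reverse=True)
print(ranking)  # most suspicious rule first: ['Rule_A', 'Rule_B', 'Rule_C']
```

The debugger then inspects rules in ranking order, which is how "a maximum of three rules" becomes a meaningful effectiveness measure.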
Web APIs following the REST architectural style (so-called RESTful Web APIs) have become the de-facto standard for software integration. As RESTful APIs gain momentum, so does their testing. However, there is a lack of mechanisms to assess the adequacy of testing approaches in this context, which makes it difficult to measure and compare the effectiveness of different testing techniques. In this work-in-progress paper, we take a step forward towards a framework for the assessment and comparison of testing approaches for RESTful Web APIs. To that end, we propose a preliminary catalogue of test coverage criteria. These criteria measure the adequacy of test suites based on the degree to which they exercise the different input and output elements of RESTful Web services. To the best of our knowledge, this is the first attempt to measure the adequacy of testing approaches for RESTful Web APIs.
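One input-coverage criterion could, for instance, measure the fraction of documented parameters a suite exercises; the sketch below is our own illustration, not the catalogue's actual definition, and the API spec and suite are invented:

```python
# Sketch of an input-coverage measure for a RESTful API test suite: the share
# of documented (operation, parameter) pairs that the suite actually exercises.

spec = {  # operation -> documented parameters
    "GET /pets": {"limit", "tag"},
    "POST /pets": {"name"},
    "GET /pets/{id}": {"id"},
}

suite = [  # (operation, parameters used) for each request in the test suite
    ("GET /pets", {"limit"}),
    ("GET /pets/{id}", {"id"}),
]

def parameter_coverage(spec, suite):
    exercised = {}
    for op, params in suite:
        exercised.setdefault(op, set()).update(params)
    total = sum(len(p) for p in spec.values())
    covered = sum(len(spec[op] & ps) for op, ps in exercised.items())
    return covered / total

print(f"{parameter_coverage(spec, suite):.0%}")  # 50%
```

Analogous ratios can be defined over operations, status codes or response properties, which is the spirit of measuring both input and output elements.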
User interface testing is a very popular technique thanks to its ability to validate the behaviour of the application as the user would experience it, and to the ease with which test cases can be generated. However, one of the most important limitations of this kind of testing is its fragility in the face of changes to the user interface itself, which usually occur during system development. In this article we formulate the repair of these tests after changes in the interface or functionality of the application as a search problem. Furthermore, we propose a heuristic algorithm for its resolution based on GRASP. This proposal has been implemented and validated in the specific domain of mobile applications for Android devices. The results obtained demonstrate its applicability with several case studies covering changes of varying scope.
Decisions are a key aspect of every business and its processes and their management is of utmost importance for the achievement of strategic and operational goals in any organisational context. Therefore, decisions should be considered as first-class citizens that need to be modelled, measured, analysed, monitored to track their performance, and redesigned if necessary. Existing literature studies the definition of decisions themselves in terms of accuracy, certainty, consistency, covering and correctness. However, to the best of our knowledge, no prior work exists that analyses the relationship between decisions and process performance.
In this paper, we seek to improve the understanding of the relationship between decision management and process performance measurement by analysing the relationship between these two concepts in three ways. First, by analysing the impact of decisions related to business processes on process performance indicators (PPIs), and providing guidelines in the form of a set of steps that can be used to identify decisions that affect process performance. Second, by defining decision performance indicators (DPIs) to measure the performance of decisions related to business processes. And third, by using process performance information in the definition of decisions. Several advantages of explicitly defining these relationships have been identified, such as the provision of important insights regarding possibly dysfunctional decisions from a performance point of view, or the identification of possible actions to be taken to improve performance. We also outline how these relationships can be modelled and supported by extending and integrating PPINOT, a metamodel for the definition and modelling of PPIs, with DMN, a standard that provides constructs to model and decouple decisions from process models.
As distribution models of information systems move to XaaS paradigms, microservice architectures are rapidly emerging, with the RESTful principles as the API model of choice. In this context, the term API Economy is being used to describe the increasing movement of industries to take advantage of exposing their APIs as part of their service offering and expand their business models.
Currently, the industry is adopting standard specifications such as OpenAPI to model APIs in a standard way following the RESTful principles; this shift has supported the proliferation of API execution platforms (API gateways) that allow XaaS providers to optimize their costs. However, from a business point of view, the offering plans of those APIs are mainly modeled ad hoc (or in a platform-dependent way), since no standard model has been proposed. This lack of standardization hinders the creation of API governance tools that provide and automate the management of business models in the XaaS industry.
This work presents a systematic analysis of 69 XaaS in the industry that offer RESTful APIs as part of their business model. Specifically, we review in detail the plans that are part of the XaaS offerings; this review can serve as a first step towards identifying the requirements for an expressive governance model of realistic RESTful APIs. Additionally, we provide an open dataset in order to enable further analysis in this research line.
A Service Level Agreement (SLA) regulates the provisioning of a service by defining a set of guarantees. Each guarantee sets a Service Level Objective (SLO) on some service metrics, and optionally a compensation that is applied when the SLO is unfulfilled (the compensation would be a penalty) or overfulfilled (the compensation would be a reward). For instance, Amazon is penalised with 10% in service credits if the availability of the Elastic Cloud Computing service drops below 99.95%.
Currently, there are software tools and research proposals that use the information about compensations to automate and optimise certain parts of the service management. However, they assume that compensations are well defined, which is too optimistic in some circumstances and can lead to undesirable situations. For example, an unbounded, automated penalty was discarded in 2005 by the UK Royal Mail company after causing a loss of 280 million pounds in one year and a half.
In the article «Automated Validation of Compensable SLAs», published in IEEE Transactions on Services Computing (Early Access), and available at https://doi.org/10.1109/TSC.2018.2885766, we aim at answering the question «How can compensations be automatically validated?». To this end, we build on the compensable SLA model proposed in a previous work to provide a technique that leverages constraint satisfaction problem solvers to automatically validate them. We also present a materialisation of the model in iAgree, a language to specify SLAs, and tooling support that implements our whole approach. Our proposal has been evaluated by modelling and analysing the compensations of 24 SLAs of real-world scenarios including 319 guarantee terms. As a result, our technique has proven useful for detecting mistakes that typically derive not only from the manual specification of SLAs in natural language, but also from the complex nature of compensation definitions. Thus, we found nine guarantees whose compensations were not properly defined in the original SLAs specified in natural language. Specifically, five were wrongly specified by Verizon, and four were wrongly specified in the outsourcing service contracts of two regional governments: the Northwest Territories of Canada, and Andalusia in Spain. Therefore, our proposal can pave the way for using compensable SLAs in a safer and more reliable way.
Authors: Carlos Müller, Antonio Manuel Gutierrez, Pablo Fernandez, Octavio Martín-Díaz, Manuel Resinas, Antonio Ruiz-Cortés
Keywords: Analysis - Compensation - CSP - Penalty - Reward - SLA - validation - WS-Agreement
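A brute-force sketch of what validating a compensable guarantee involves, standing in for the CSP solver used in the article; the penalty function echoes the EC2 example above, while the cap, the extra 99.0% tier and the metric domain are illustrative:

```python
# Validate a compensation definition by exhaustive search over the metric
# domain: the penalty must stay within the agreed cap (bounded) and must not
# decrease as the service level degrades (monotone).

def penalty(availability):
    """Service credits (%) for a given monthly availability (%)."""
    if availability < 99.0:
        return 30
    if availability < 99.95:
        return 10
    return 0

def validate(penalty, domain, cap=100):
    """True if the guarantee is well defined over the (ascending) domain."""
    values = [penalty(v) for v in domain]
    bounded = all(p <= cap for p in values)
    # as availability increases, the penalty must be non-increasing
    monotone = all(a >= b for a, b in zip(values, values[1:]))
    return bounded and monotone

domain = [v / 100 for v in range(9800, 10001)]  # 98.00% .. 100.00%
print(validate(penalty, domain))  # True: this compensation is consistent
```

An unbounded penalty of the kind that hit the UK Royal Mail would fail the `bounded` check, which is exactly the class of mistake the automated validation catches before the SLA is enacted.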
Cloud service providers offer their customers a variety of pricing policies, which range from the simple, yet widely used, pay-as-you-go scheme to complex discounted models. When executing the billing process, stakeholders have to consider usage metrics and service level objectives in order to obtain the correct billing and conform to the service level agreement in place. The more metrics, discount and compensation rules are added to the pricing scheme, the more complex the billing generation becomes. In this paper we present a monitoring-based solution that enables the dynamic definition of both service level objectives and discount rules, so that providers can customise the billing generation process in terms of the service level agreement they offer. We validate our proposal in a real-world scenario, introducing a microservice-based software solution deployed in a Kubernetes cluster.
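The customisable billing step could be sketched as data-driven discount rules evaluated against monitored metrics; the metric names, prices and credits below are invented:

```python
# Sketch of SLA-aware billing: discount rules are plain data, so a provider
# can redefine SLOs and credits without touching the billing code itself.

def bill(usage_hours, price_per_hour, metrics, discount_rules):
    """Base charge minus the credits of every violated SLO."""
    base = usage_hours * price_per_hour
    discount = sum(rule["credit"] for rule in discount_rules
                   if metrics[rule["metric"]] < rule["slo"])
    return base * (1 - discount)

# Metrics observed by the monitoring platform for the billing period.
metrics = {"availability": 99.90, "latency_ok_ratio": 0.99}

# Dynamically defined rules: each pairs an SLO with a service credit.
rules = [
    {"metric": "availability", "slo": 99.95, "credit": 0.10},
    {"metric": "latency_ok_ratio", "slo": 0.95, "credit": 0.05},
]

print(bill(100, 0.5, metrics, rules))  # only the availability SLO is violated
```

Because the rules are data, adding a new compensation clause is a configuration change in the monitoring platform rather than a redeployment of the billing service.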