The definition of process-related key performance indicators (KPIs) is a key part of performance measurement, and one of the most challenging, because there is no single best way to define business-applicable KPIs that are both aligned with the strategic goals the organisation wants to achieve and, at the same time, achievable in its context. It requires the identification of relevant threshold values that can distinguish different levels of process execution quality. However, obtaining these values remains an organisation-specific task based on human judgement, and no consensual technique exists. To overcome this problem, this paper introduces a methodology for threshold determination that considers not only expert opinion but also data from real process executions.
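As a hedged illustration of the data-driven half of such a methodology (the percentile choices stand in for the expert's input; this is not the paper's exact procedure), thresholds can be derived from historical execution values:

```python
def thresholds_from_executions(values, good_pct=0.25, poor_pct=0.75):
    """Derive 'good'/'poor' cut-offs for a lower-is-better measure
    (e.g. cycle time in hours) from observed process executions;
    the percentile choices stand in for the expert's input."""
    ordered = sorted(values)

    def pct(p):
        # Index-based percentile: good enough for an illustration.
        return ordered[min(int(p * len(ordered)), len(ordered) - 1)]

    return {"good_below": pct(good_pct), "poor_above": pct(poor_pct)}

durations = [4, 5, 5, 6, 7, 8, 9, 12, 15, 30]  # hours per process instance
print(thresholds_from_executions(durations))   # {'good_below': 5, 'poor_above': 12}
```

Executions faster than the lower cut-off would count as good quality, slower than the upper one as poor; an expert could then adjust the percentiles rather than invent absolute numbers.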
The author Manuel Resinas has published 28 article(s):
The quality of an open source software (OSS) ecosystem is key for different ecosystem actors such as contributors or adopters. In fact, considering several quality aspects (e.g., activeness, visibility, interrelatedness) as a whole may provide a measure of the healthiness of OSS ecosystems. The healthier an OSS ecosystem is, the more and better contributors and adopters it will attract. Some research tools have been developed to gather specific quality information from open source community data sources. However, no frameworks are available to evaluate this quality as a whole in order to obtain the health of an OSS ecosystem. To assess the health of these ecosystems, we propose to adopt robust principles and methods from the Service Oriented Computing field.
Representing performance indicators over business processes facilitates the understanding and definition of how data are computed and obtained. However, including several indicators on one process may require incorporating a large number of measurement elements, generating an excess of information and hindering the visual analysis of the data. This article presents an extension of the Visual PPINOT graphical notation, which allows performance indicators over business processes to be modelled graphically. Abstraction elements are added to the notation to ease the representation of recurring patterns in indicators and to improve the readability of the process diagram. The implementation is validated using the SCOR Reference Model: a classification of its metrics is proposed, and these metrics are used as a reference to study the differences between modelling with the original notation and with the extended one.
Performance calculation is a key factor in matching corporate goals between different partners in process execution. However, although a number of standard protocols and languages have recently emerged to support business process services in industry, there is no standard for monitoring performance indicators over processes in these systems. As a consequence, BPMSs use proprietary languages to define measures and calculate them over process executions. In this paper, we describe two different approaches to compute performance measures on business processes, decoupled from any specific Business Process Management System (BPMS), using an existing BPMS-independent language (PPINOT) to define indicators over business processes. Finally, some optimization techniques based on computing aggregated measures incrementally are described to increase calculation performance.
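A minimal sketch of the incremental-aggregation idea mentioned above (illustrative only, not the PPINOT implementation): an aggregated measure such as an average can be updated in constant time per new process instance instead of being recomputed over the full history.

```python
class IncrementalAverage:
    """Maintains an aggregated measure (an average) over process
    instance measurements without recomputing from scratch."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def add(self, value):
        # O(1) update of the aggregate per new measurement.
        self.count += 1
        self.total += value

    def value(self):
        return self.total / self.count if self.count else None

avg_duration = IncrementalAverage()
for duration in [12.0, 8.0, 10.0]:   # e.g. instance durations in days
    avg_duration.add(duration)
print(avg_duration.value())           # 10.0
```

The same pattern extends to other decomposable aggregates (sum, count, min, max); holistic ones such as the median need more bookkeeping.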
Authors: Antonio Manuel Gutiérrez-Fernández / Manuel Resinas / Adela del-Río-Ortega / Antonio Ruiz-Cortés /
Keywords: Business Process Management - Complex Event Processing - Key Performance Indicators
The current software industry is evolving into a service-centric scenario, and consequently the creation of reliable service consumptions amongst organizations is a key point. In such a context, the concept of Service Level Agreement (SLA) represents the foundation to express the responsibilities (i.e., rights and obligations) of service consumer and provider during the consumption. However, although there has been a major effort in both academia and industry to develop languages and frameworks to support SLAs, important challenges remain, such as how to automate the detection of a violation of an SLA and how to react accordingly in order to claim a compensation. Specifically, in this paper we focus on the definition of the automated claiming of SLAs, characterized as the set of processes of gathering, checking and explaining the evidence associated with the service consumption within the context of an SLA. In order to identify the key requirements to automate the claiming of SLAs, we analyse the real case of the Simple Storage Service (S3) provided by Amazon, which is regulated by an SLA. Based on our analysis, we propose a set of extensions to a prominent current SLA language specification (WS-Agreement) and conceptualize a list of research challenges to automate the management of the claiming process.
Resource Assignment Language (RAL) is a language for the selection of organisational resources that can be used, for example, for the assignment of human resources to business process activities. Its formal semantics have allowed the automation of analysis operations in several phases of the business process lifecycle. RAL was designed considering a specific organisational metamodel and pursuing specific purposes. However, it can be extended to deal with similar problems in different domains and under different circumstances. In this paper, a methodology to extend RAL is introduced, and an extension to support another organisational metamodel is described as a proof-of-concept.
Authors: Cristina Cabanillas / Manuel Resinas / Antonio Ruiz-Cortés / Jan Mendling /
Keywords: Business Process Management - description logics - RAL - resource assignment - W3C Organisation Ontology
Availability is a key property of computational services and is therefore guaranteed by the Service Level Agreements (SLAs) of most infrastructure services, such as virtualization (Amazon EC2, Windows Azure, Google Cloud, Joyent, Rackspace, …) and storage (Amazon S3, Google Cloud Storage, …). These SLAs describe availability in natural language, and there are important differences in the scope and penalties that each service provides. Furthermore, the descriptions use domain-specific terms, so they are difficult for service customers to understand. These circumstances make availability analysis a tedious, error-prone and time-consuming task. In this paper, we describe this problem in detail and provide a first approach to deal with these SLAs, supported by current SLA analysis techniques.
Business Process Management Systems (BPMSs) are increasingly used to support service composition, typically working with executable BP models that involve resources, including both automatic services and services provided by human resources. The appropriate selection of human resources is critical, as factors such as workload or skills have an impact on work performance. While priorities for automatic services are intensively researched, human resource prioritization has hardly been discussed. In classical workflow management, only resource assignment at BP design time, to select potential performers for activities, and resource allocation at run time, to choose actual performers, are considered. There is no explicit consideration of prioritizing potential performers to facilitate the selection of actual performers. It is also disregarded in professional solutions.
In this paper, we address this research gap and provide two contributions: (i) we conceptually define prioritized allocation based on preferences; and (ii) we propose a concrete way in which preferences over resources can be defined so that a resource priority ranking can be automatically generated. Our solution builds on the adaptation of SOUP [1,2], a user preference model developed for the discovery and ranking of semantic web services, to the domain at hand. As a proof of concept, we have extended the resource management tool CRISTAL (http://www.isa.us.es/cristal) with the SOUP component, using RAL [3] for resource selection.

1. J. M. García, D. Ruiz, and A. R. Cortés, «A Model of User Preferences for Semantic Services Discovery and Ranking», in ESWC (2), pp. 1-14, Springer, 2010.
2. J. M. García, M. Junghans, D. Ruiz, S. Agarwal, and A. R. Cortés, «Integrating semantic Web services ranking mechanisms using a common preference model», Knowl.-Based Syst., vol. 49, pp. 22-36, 2013.
3. C. Cabanillas, M. Resinas, and A. Ruiz-Cortés, «Defining and Analysing Resource Assignments in Business Processes with RAL», in ICSOC, vol. 7084, pp. 477-486, Springer, 2011.
This work was published in ICSOC 2013, vol. 8274, 374-388. It was partially supported by the EU-FP7, the EU Commission, the Spanish and the Andalusian R&D&I programmes (grants 318275, 284860, TIN2009-07366, TIN2012-32273, TIC-5906).
A key aspect in any process-oriented organisation is the evaluation of process performance for the achievement of its strategic and operational goals. Process Performance Indicators (PPIs) are a key asset to carry out this evaluation, and, therefore, having an appropriate definition of these PPIs is crucial. After a careful review of the related literature and a study of the current picture in different real organisations, we conclude that no proposal exists that allows PPIs to be defined in a way that is unambiguous and highly expressive, understandable by technical and non-technical users, and traceable with the business process (BP). Furthermore, it is also increasingly important to provide these PPI definitions with support for automated analysis, allowing implicit information to be extracted from them and from their relationships with the BP. In this work we present PPINOT, a tool that allows the graphical definition of PPIs together with their corresponding business processes, and their subsequent automated analysis.
Business process (BP) modelling notations tend to pay less attention to resource management than to other aspects such as control flow or even data flow. On the one hand, the languages they offer to assign resources to BP activities are usually either not very expressive or hard for non-technical users to use. On the other hand, they barely address the subsequent analysis of resource assignments, which would enable the detection of problems and/or inefficiencies in the use of the resources available in a company. We present RAL Solver, a tool that addresses the two aforementioned issues and thus: (i) allows the specification of assignments of resources to BP activities in a reasonably simple way; and (ii) provides capabilities to automatically analyse resource assignments at design time, which allows information to be extracted from BP models, and inconsistencies and assignment conflicts to be detected.
Business processes (BPs) are often analysed in terms of control flow, temporal constraints, data and resources. Of all these aspects, resources have received much less attention than the others, especially control flow. Even the standard BP modelling notation (BPMN) does not provide concrete definitions for the resource-related concepts. However, the participation of people in BPs is of utmost importance, both to supervise the execution of automatic activities and to carry out software-aided and/or manual tasks. Therefore, they should be considered when designing and modelling the BPs used in an organization.
In this paper we address human-resource management (resource management for short) in BP models. Firstly, we deal with the assignment of resources to the activities of a BP model, aiming at easing and improving the way resources can be associated with BP activities. Some approaches with a similar purpose have been introduced in recent years, but they are in general either too complex to be used by technically unskilled people, or not expressive enough to provide powerful resource management in workflows (WFs) and BPs.
Business Process (BP) families are made up of BP variants that share commonalities but also show differences to accommodate the specific necessities of different application contexts (i.e., country regulations, industrial domain, etc.). Even though there are modelling techniques to represent these families (e.g., C-EPC, Provop), no work has aimed at the performance measurement of the different BP variants that make up the family. Process Performance Indicators (PPIs) are commonly used to study and analyse the performance of business processes. However, the application of such indicators to BP families increases the modelling and management complexity of the whole family. To deal with this complexity, this work introduces a modelling solution for managing PPI variability based on the concepts of change patterns for process families (CP4PF). The proposed solution includes a set of patterns aimed at 1) reducing the number of operations required to specify PPIs and 2) ensuring PPI family correctness.
In recent years, the use of service level agreements (SLAs) to describe the rights and obligations of the parties involved in service provisioning (typically the service consumer and the service provider) has been rising sharply; amongst other information, an SLA can define guarantees associated with service level objectives (SLOs) that normally represent key performance indicators of either the consumer or the provider. In case a guarantee is under- or over-fulfilled, SLAs can also define compensations (i.e., penalties or rewards). In such a context, there have been important steps towards the automation of the analysis of SLAs. One of these steps is a characterization model of SLAs with compensations proposed by the authors in a previous work; another is the standardisation effort in SLA notation made by WS-Agreement. However, real-world SLAs include complex concepts that must be considered, namely: (i) SLA terms that specify compensations without an explicit SLO; and (ii) a limit on the compensations. In this paper we extend our prior characterization model to consider these complex concepts. Specifically, (i) we provide up to five real-world scenarios whose SLAs incorporate the aforementioned new concepts; (ii) we extend our model for compensable guarantees to consider terms without an explicit SLO; and (iii) we provide a novel WS-Agreement-based syntax to model SLAs with compensations considering these concepts. These contributions aim to establish a foundation for tools that could provide automated support for the modelling and analysis of SLAs with compensations.
Summary of the article published as:
Cristina Cabanillas, David Knuplesch, Manuel Resinas, Manfred Reichert, Jan Mendling, Antonio Ruiz-Cortés: RALph: A Graphical Notation for Resource Assignments in Business Processes. International Conference on Advanced Information Systems Engineering (CAiSE) 2015: 53-68. DOI: 10.1007/978-3-319-19069-3_4.
Process mining allows the extraction of useful information from event logs and historical data of business processes. This information can improve the performance of these processes, but is generally obtained after they have finished. Therefore, predictive monitoring of running business process instances is needed in order to take proactive and corrective actions that improve process performance and mitigate possible risks in real time. This monitoring allows the prediction of evaluation metrics for a running process. In this context, this work describes a general methodology for a business process monitoring system for the prediction of process performance indicators, and its stages, such as the processing and encoding of log events, the calculation of aggregated attributes, and the application of a data mining algorithm.
Summary of the article published as:
A. del Río-Ortega et al.: Modelling Service Level Agreements for Business Process Outsourcing Services. In: CAiSE 2015: 485-500.
The publication and, where possible, automation of public services on the Internet provides advantages for both citizens and government: for the former because it promotes transparency and control over government actions and avoids unneeded in-person inquiries, and for the latter because information systems help to decrease human resource costs. A number of efforts have been made by public administrations to provide precise service information online. As this service information is incrementally published, manually navigating and querying these services becomes a difficult task that automated mechanisms could support based on service catalogs. In this paper we introduce ongoing work proposing the use of ontologies to enable the automated processing (i.e., search and validation) of these service catalogs.
Aiming to be as competitive as possible, organisations constantly pursue the improvement of their business processes, applying corrective actions when needed. However, the actual analysis and decision making behind those actions is typically a challenging task relying on extensive human-in-the-loop expertise. Specifically, this improvement process usually involves: (i) analysing evidence to understand the current behaviour; (ii) deciding the actual objectives (usually defined in Service Level Agreements, SLAs, based on intuition); and (iii) establishing the improvement plan. In this ongoing work, we aim to propose a data-driven and intuition-free methodology to define an SLA as a governance element that specifies the service level objectives in an explicit way. Such a methodology considers process performance indicators that are analysed by means of inference, optimization, and simulation techniques. In order to motivate and exemplify our work we address a healthcare scenario.
Process Performance Indicators (PPIs) play an important role in monitoring the performance of operational procedures. Both defining and measuring suitable PPIs are key tasks for aligning strategic business objectives with the operational implementation of a process. A major challenge in this regard is that perspectives on the same real-world phenomenon differ among the stakeholders involved in these tasks. Since the formulation of PPIs is typically a managerial concern, there is a risk that they do not match the exact operational and technical characteristics of business processes. To bridge this gap, the concepts described in PPIs must first be linked to their corresponding process elements. Establishing these links is paramount for the monitoring of process performance.
Without them, the values of PPIs cannot be computed automatically. However, the necessary links must currently be established manually, a task that is tedious and error-prone due to the aforementioned incoherence between the different perspectives. The goal of our work is to avoid the effort involved in the manual creation of links by automating this step. To achieve this, we developed an approach that automatically aligns textual PPI descriptions to the relevant parts of a process model. The approach takes as input a textual PPI description and the process model to which the PPI relates. Given this input, the approach generates an alignment in three steps. (1) Type classification: we use a decision tree classifier to identify the type of a given PPI, which is important because it affects the number and kinds of process model elements that should be aligned to the PPI. (2) PPI parsing: we parse the textual PPI description to extract those phrases that relate to specific parts of a process, making use of natural language processing techniques. (3) Alignment to process model: finally, given the identified measure type and the extracted phrases, we compute an alignment between the phrases and the process model. A quantitative evaluation with a set of 173 PPIs obtained from industry and reference frameworks demonstrates that our automated approach produces satisfactory results.
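The three-step pipeline can be sketched as follows. All three components here are deliberately naive stand-ins (a keyword rule instead of the decision tree, a crude phrase splitter instead of the NLP parser, token overlap as the similarity measure); only the overall structure reflects the approach described above.

```python
def classify_type(ppi_text):
    # Step 1 - hypothetical rule-based stand-in for the decision tree.
    return "time" if "time" in ppi_text.lower() else "count"

def extract_phrases(ppi_text):
    # Step 2 - stand-in for NLP parsing: keep the fragment after "of".
    return [ppi_text.lower().rsplit(" of ", 1)[-1]]

def similarity(phrase, element):
    # Token overlap (Jaccard) between a phrase and an activity label.
    a, b = set(phrase.split()), set(element.lower().split())
    return len(a & b) / len(a | b)

def align_ppi(ppi_text, activities):
    # Step 3 - align each extracted phrase to its most similar element.
    measure_type = classify_type(ppi_text)
    alignment = {p: max(activities, key=lambda e: similarity(p, e))
                 for p in extract_phrases(ppi_text)}
    return measure_type, alignment

print(align_ppi("Average time of register claim",
                ["Register claim", "Assess claim", "Notify customer"]))
# ('time', {'register claim': 'Register claim'})
```

The point of step 1 is visible even in this toy version: knowing the PPI is time-based tells a real implementation how many and which kinds of model elements (e.g. start and end activities) the phrases should map to.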
Despite the importance of services in the economy, tasks such as searching, analysing alternatives, and contracting services under service level agreements (SLAs) are still performed manually. In the so-called Web of services there are alternatives to facilitate the automation of these tasks based on different conceptual models: generic ones such as Linked USDL, or ones focused on a specific aspect, such as WS-Agreement for SLAs. However, the latter mainly cover technical aspects, without providing explicit semantics for SLA terms or complying with the principles of the Web, hindering their adoption and automated analysis.
In this article we present Linked USDL Agreement, an extension of the Linked USDL family of ontologies that provides facilities to specify, manage and share SLA descriptions on the Web. This semantic model avoids the interoperability and heterogeneity problems of current SLA specifications. Moreover, since our model follows the principles of the Web of data, the generated SLA descriptions are easily publishable, shareable and analysable, serving as support for the service life cycle.
Our proposal has been validated both on traditional Web services (e.g., cloud computing) and on non-computational services (e.g., business process outsourcing). The comparison with other existing alternatives, as well as the implementation of a tool that facilitates the creation, publication, and automated analysis of Linked USDL Agreement documents, allows us to affirm that our proposal can fully support the management of the SLA life cycle.
Summary of the contribution
Predictive monitoring of business processes is a challenging topic of process mining concerned with the prediction of process indicators of running process instances. The main value of predictive monitoring is to provide information in order to take proactive and corrective actions to improve process performance and mitigate risks in real time. In this paper, we present an approach for predictive monitoring based on the use of evolutionary algorithms. Our method provides a novel event window-based encoding and generates a set of decision rules for the run-time prediction of process indicators according to event log properties. These rules can be interpreted by users to extract further insight into the business processes while keeping a high level of accuracy. Furthermore, a full software stack consisting of a tool to support the training phase and a framework that enables the integration of run-time predictions with business process management systems has been developed. The results obtained show the validity of our proposal for two large real-life datasets: BPI Challenge 2013 and the IT Department of the Andalusian Health Service (SAS).
Authors: Alfonso E. Márquez-Chamorro / Manuel Resinas / Antonio Ruiz-Cortés /
Keywords: Business process indicator - Business Process Management - Evolutionary algorithm - Predictive monitoring - Process Mining
Modern SLA management includes SLA prediction based on data collected during service operations. Besides overall accuracy of a prediction model, decision makers should be able to measure the reliability of individual predictions before taking important decisions, such as whether to renegotiate an SLA. Measures of reliability of individual predictions provided by machine learning techniques tend to depend strictly on the technique chosen and to neglect the features of the system generating the data used to learn a model, i.e., the service provisioning landscape in this case. In this paper, we define a hybrid measure of reliability of an individual SLA prediction for classification models, which accounts for both the reliability of the chosen prediction technique, if available, and features capturing the variability of the service provisioning scenario. The metric is evaluated empirically using SLAs and event logs of a real world case.
This paper was presented at the ACM Symposium on Applied Computing (SAC) in April 2019 (GGS Class 2).
Decisions are a key aspect of every business and its processes and their management is of utmost importance for the achievement of strategic and operational goals in any organisational context. Therefore, decisions should be considered as first-class citizens that need to be modelled, measured, analysed, monitored to track their performance, and redesigned if necessary. Existing literature studies the definition of decisions themselves in terms of accuracy, certainty, consistency, covering and correctness. However, to the best of our knowledge, no prior work exists that analyses the relationship between decisions and process performance.
In this paper, we seek to improve the understanding of the relationship between decision management and process performance measurement by analysing it in three ways. First, by analysing the impact of decisions related to business processes on process performance indicators (PPIs), using guidelines in the form of a set of steps that can be used to identify decisions that affect process performance. Second, by defining decision performance indicators (DPIs) to measure the performance of decisions related to business processes. And third, by using process performance information in the definition of decisions. Explicitly defining these relationships has proven advantageous, for instance by providing important insights into possibly dysfunctional decisions from a performance point of view, or by identifying possible actions to be taken to improve performance. We also outline how these relationships can be modelled and supported by extending and integrating PPINOT, a metamodel for the definition and modelling of PPIs, with DMN, a standard that provides constructs to model and decouple decisions from process models.
A Service Level Agreement (SLA) regulates the provisioning of a service by defining a set of guarantees. Each guarantee sets a Service Level Objective (SLO) on some service metrics, and optionally a compensation that is applied when the SLO is unfulfilled (the compensation would be a penalty) or overfulfilled (the compensation would be a reward). For instance, Amazon is penalised with a 10% in service credits if the availability of the Elastic Cloud Computing service drops below 99.95%.
Currently, there are software tools and research proposals that use the information about compensations to automate and optimise certain parts of the service management. However, they assume that compensations are well defined, which is too optimistic in some circumstances and can lead to undesirable situations. For example, an unbounded, automated penalty was discarded in 2005 by the UK Royal Mail company after causing a loss of 280 million pounds in one year and a half.
In the article «Automated Validation of Compensable SLAs», published in IEEE Transactions on Services Computing (Early Access) and available at https://doi.org/10.1109/TSC.2018.2885766, we aim to answer the question «How can compensations be automatically validated?». To this end, we build on the compensable SLA model proposed in a previous work to provide a technique that leverages constraint satisfaction problem solvers to validate them automatically. We also present a materialisation of the model in iAgree, a language to specify SLAs, and a tooling support that implements our whole approach. Our proposal has been evaluated by modelling and analysing the compensations of 24 SLAs of real-world scenarios including 319 guarantee terms. As a result, our technique has proven useful for detecting mistakes that are typically derived not only from the manual specification of SLAs in natural language, but also from the complex nature of compensation definitions. Thus, we found nine guarantees with compensations that were not properly defined in the original SLAs specified in natural language: five were wrongly specified by Verizon, and four in the outsourcing service hiring of the regional governments of the Northwest Territories of Canada and of Andalusia in Spain. Therefore, our proposal can pave the way for using compensable SLAs in a safer and more reliable way.
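A toy illustration of the kind of property such validation checks, using brute force over a discretised metric domain instead of a CSP solver (the thresholds mirror the EC2-style guarantee quoted above; a real compensable SLA has many more properties to verify):

```python
def penalty(availability):
    # Service credit (%) as a function of monthly availability (%),
    # mirroring the EC2-style guarantee quoted above.
    if availability < 99.0:
        return 30
    if availability < 99.95:
        return 10
    return 0

def validate(penalty_fn, domain, cap):
    """Check the compensation is bounded by `cap` and never charges a
    smaller penalty for worse availability (monotonicity)."""
    values = [penalty_fn(x) for x in domain]      # domain in ascending order
    bounded = all(v <= cap for v in values)
    # As availability increases, the penalty must not increase.
    monotone = all(values[i] >= values[i + 1] for i in range(len(values) - 1))
    return bounded and monotone

domain = [x / 100 for x in range(9800, 10001)]    # 98.00% .. 100.00%
print(validate(penalty, domain, cap=100))          # True
```

An unbounded penalty (no `cap`) is exactly the failure mode of the Royal Mail example above; a CSP solver performs the same checks symbolically over the whole continuous domain instead of a sampled one.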
Autores: Carlos Müller / Antonio Manuel Gutierrez / Pablo Fernandez / Octavio Martín-Díaz / Manuel Resinas / Antonio Ruiz-Cortés /
Palabras Clave: Analysis - Compensation - CSP - Penalty - Reward - SLA - validation - WS-Agreement
Knowledge-intensive Processes (KIPs) can be defined as a type of process that comprises sequences of activities based on the intensive acquisition, sharing, storage, and (re)use of knowledge, whereby the amount of value added to the organization depends on the knowledge of the actors involved. Among other characteristics, KIPs are usually non-repeatable, collaboration-oriented, unpredictable, and, in many cases, driven by implicit knowledge derived from the capabilities and previous experiences of participants. Despite the growing body of research focused on understanding KIPs and on proposing systems to support them, the question of how to define performance measures in this context remains open.
In this article, we address this issue with a proposal to enable the performance management of KIPs. Our approach comprises an ontology that allows us to define process performance indicators (PPIs) in the context of KIPs, and a methodology that builds on the ontology and the concepts of lead and lag indicators to provide process participants with actionable guidelines that help them conduct the KIP in a way that fulfills a set of performance goals. Both the ontology and the methodology were applied in a case study at a real organization in Brazil, where they were used to manage the performance of the incident management process of an information and communication technology outsourcing company. The insights provided by our approach were considered highly valuable by the company.
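To give the flavour of the lead/lag distinction the methodology builds on, the sketch below models one lag (outcome) and one lead (predictive, actionable) indicator for an incident management KIP in plain Python. All indicator names, targets, and the guideline wording are illustrative assumptions, not the ontology or methodology itself.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PPI:
    """A process performance indicator with a target condition."""
    name: str
    kind: str                       # "lead" (predictive) or "lag" (outcome)
    target: Callable[[float], bool]

    def fulfilled(self, value: float) -> bool:
        return self.target(value)

# Hypothetical indicators for an incident management KIP: the lag indicator
# measures the outcome; the lead indicator something actionable during the KIP.
avg_resolution_hours = PPI("avg resolution time", "lag", lambda v: v <= 8.0)
kb_reuse_rate = PPI("knowledge-base reuse rate", "lead", lambda v: v >= 0.6)

def guideline(lead: PPI, lead_value: float) -> str:
    """Turn a lead-indicator reading into an actionable hint for participants."""
    if lead.fulfilled(lead_value):
        return f"'{lead.name}' on track ({lead_value})"
    return f"act on '{lead.name}': current value {lead_value} misses target"
```

The point of the lead indicator is that participants can still act on it while the process runs, whereas the lag indicator can only be assessed afterwards.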
Autores: Bedilia Estrada-Torres / Pedro Henrique Piccoli Richetti / Adela Del-Río-Ortega / Fernanda Araujo Baião / Manuel Resinas / Flávia Maria Santoro / Antonio Ruiz-Cortés /
Palabras Clave: Knowledge-intensive processes - Performance measure - Process performance indicators
Digital transformation has brought an unprecedented pace of change and an enormous amount of information available to companies. At the same time, it has also created a series of difficulties for knowledge workers, who have to deal with increasingly volatile, uncertain, complex, and ambiguous (VUCA) environments. In this scenario, the use of WorkStream Collaboration (WSC) tools, such as Microsoft Teams or Slack, is proliferating as a way to manage this new form of work and improve the productivity of knowledge workers. However, neither the goals of these WSC tools nor the way to use them are well established, for two reasons: (i) these new work environments pose a set of challenges to working productively that have not been clearly characterised, and (ii) there is no previous experience or solid body of research studying them as a whole to guide the design of WSC-based solutions. In this work, we follow an inductive approach based on the analysis of qualitative and quantitative data from 365 employees of 3 companies (immersed in VUCA environments and digitalisation initiatives with WSC tools) to characterise the productivity challenges in these scenarios. The result is a set of 14 challenges that appear with different intensity in each company, together with an analysis of their implications for the use of these WSC tools.
Autores: Adela del-Río-Ortega / Joaquín Peña / Manuel Resinas / Antonio Ruiz-Cortés /
Palabras Clave: collaborative tools - productivity - knowledge worker - digital transformation - workstream collaboration
It is well known that context impacts the running instances of a process. Thus, defining and using contextual information may help to improve the predictive monitoring of business processes, which is one of the main challenges in process mining. However, identifying this contextual information is not an easy task because it might change depending on the target of the prediction. In this paper, we propose a novel methodology named CAP3 (Context-aware Process Performance indicator Prediction), which involves two phases. The first phase guides process analysts in identifying the context for the predictive monitoring of process performance indicators (PPIs), which are quantifiable metrics focused on measuring the progress of strategic objectives aimed at improving the process. The second phase involves a context-aware predictive monitoring technique that incorporates the relevant context information as input for the prediction. Our methodology leverages context-oriented domain knowledge and experts' feedback to discover the contextual information that improves the quality of PPI prediction, decreasing error rates in most cases, by adding this information as features to the datasets used as input of the predictive monitoring process. We experimentally evaluated our approach with two real-life organizations. Process experts from both organizations applied the CAP3 methodology and identified the contextual information to be used for prediction. The model learned using this information achieved lower error rates in most cases than the model learned without contextual information, confirming the benefits of CAP3. This paper was published in IEEE Access, 2020, Vol. 8, pp. 222050-222063, doi: 10.1109/ACCESS.2020.3044670.
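The core mechanism of the second phase, contextual information entering the predictor simply as additional feature columns, can be sketched as follows. The data, the contextual attribute, and the 1-nearest-neighbour predictor are all illustrative assumptions, not the actual CAP3 models or datasets.

```python
# Minimal sketch of context-aware predictive monitoring: contextual
# information is appended as extra feature columns before training.
# Data, feature meanings, and the predictor are illustrative only.

def predict_1nn(train, query):
    """Predict the PPI value of the most similar training instance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, target = min(train, key=lambda row: dist(row[0], query))
    return target

# Each row: (feature vector, observed PPI value). In the context-aware
# variant, the last feature is a hypothetical contextual attribute
# (e.g. a normalised workload level at the time the instance ran).
base_train = [((2.0, 1.0), 5.0), ((6.0, 3.0), 12.0)]
ctx_train  = [((2.0, 1.0, 0.2), 5.0), ((6.0, 3.0, 0.9), 12.0)]

def mae(train, test):
    """Mean absolute error of the predictor over a labelled test set."""
    return sum(abs(predict_1nn(train, x) - y) for x, y in test) / len(test)
```

The methodology's contribution is deciding *which* contextual attributes are worth appending, using domain knowledge and expert feedback; once chosen, they flow through any standard predictive-monitoring pipeline as ordinary features, and the two variants can be compared by their error rates as above.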
Autores: Alfonso E. Márquez-Chamorro / Kate Revoredo / Manuel Resinas / Adela Del-Río-Ortega / Flavia Santoro / Antonio Ruiz-Cortés /
Palabras Clave: Business Process Management - context-awareness - predictive monitoring - process indicator prediction - Process Mining