Using models to represent business processes provides several advantages, such as the ability to check the correctness of the processes before their implementation. In contrast to traditional process modeling approaches, the artifact-centric approach treats data as a key element of the process, while also considering the tasks or activities performed in it. This paper presents a way to verify and validate the semantic correctness of an artifact-centric business process model defined using a combination of UML and OCL models, a BAUML model. To do so, we provide a method to translate all BAUML components into a set of logic formulas. The resulting translation ensures that the only changes allowed are those specified in the model, and that those changes take place according to the order established by the model. Having obtained this logic representation, these models can be validated by any existing reasoning method able to deal with negation of derived predicates. Moreover, we show how to automatically generate the relevant tests to validate the models, and we prove the feasibility of our approach.
Authors: Montserrat Estañol / Maria-Ribera Sancho / Ernest Teniente /
Keywords: business process modelling - reasoning - tool - UML - validation - verification
This paper presents the Kopernik approach for modeling business processes for digital customers. These processes require a high degree of flexibility in the execution of their tasks or actions. We achieve this by using an artifact-centric approach to process modeling and the use of condition-action rules. The processes modeled following Kopernik can then be implemented in an existing commercial tool, Balandra.
Authors: Montserrat Estañol / Manuel Castro / Silvia Díaz-Montenegro / Ernest Teniente /
Keywords:
Data-services are applications whose main concern is to provide data to their client applications. Data-services play a key role in areas like the Internet of Things (IoT), where smart objects may want to offer/consume data through the Internet, and thus to provide/discover such data-services automatically.

To make data-services discoverable, the usual strategy is to register them in some kind of service-broker, i.e., a marketplace where data-services are publicly offered. Smart objects then query the service-broker, which is responsible for matching each request with its data-services. How to perform this matching automatically is still an open problem in IoT.

In this paper, we propose a framework for specifying data-services so that they can be automatically discovered. To achieve this, we provide unambiguous descriptions of the data-services and the request, together with a mechanism capable of interpreting these descriptions and checking whether they match. Our solution is grounded on ontology-based data integration and can be applied in the IoT context, although it can also be used in any other domain involving the discovery of applications that retrieve data.

In essence, our idea is the following: given a domain ontology describing the real world our data-services speak about, we consider each data-service as a new association in that ontology. Indeed, a data-service consuming some input objects and retrieving some output objects can be modelled as an association from the former to the latter. As expected, ontology constraints must be used to restrict the instances of the association to the input-output instances our data-service expects/provides.

Hence, the problem of matching data-services is reduced to that of automatic reasoning on ontologies (in particular, association subsumption). Thus, contributions in this latter field can be directly applied to the data-service discovery problem.
Authors: Xavier Oriol / Ernest Teniente /
Keywords: Data-service - Data-service discovery - Internet of Things
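The core idea of the abstract above (treating each data-service as an ontology association and reducing matching to subsumption reasoning) can be sketched in a few lines. The class hierarchy, the `readTemp` service, and the matching policy below are hypothetical illustrations, not the paper's actual formalism:

```python
# Minimal sketch: data-services as associations over a toy class hierarchy,
# matched against a request via transitive subsumption checks.

# Hypothetical class hierarchy: child -> parent
SUBCLASS = {
    "TemperatureSensor": "Sensor",
    "Sensor": "Device",
    "Reading": "Data",
}

def is_subsumed(cls, ancestor):
    """True if `cls` equals `ancestor` or is a (transitive) subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS.get(cls)
    return False

# A data-service is modelled as an association (input class -> output class).
services = {
    "readTemp": ("Sensor", "Reading"),
}

def matches(request_in, request_out, service):
    """A service matches a request if it accepts the request's inputs (the
    request's input class is subsumed by the service's input class) and its
    outputs are specific enough (subsumed by the requested output class)."""
    srv_in, srv_out = service
    return is_subsumed(request_in, srv_in) and is_subsumed(srv_out, request_out)

print(matches("TemperatureSensor", "Data", services["readTemp"]))  # True
```

In this toy setting, a request supplying a `TemperatureSensor` and asking for any `Data` is matched by `readTemp`, because its association subsumes the requested one; in the paper this check is delegated to an ontology reasoner rather than a hand-rolled traversal.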
Ontology-based Data Access (OBDA) is gaining importance both scientifically and practically. However, little attention has been paid so far to the problem of updating OBDA systems. This is an essential issue if we want to be able to cope with modifications of data both at the ontology and at the source level, while maintaining the independence of the data sources. In this paper, we propose mechanisms to properly handle updates in this context. We show that updating data both at the ontology and source level is first-order rewritable. We also provide a practical implementation of such updating mechanisms based on non-recursive Datalog.
Authors: Giuseppe De Giacomo / Domenico Lembo / Xavier Oriol / Domenico Fabio Savo / Ernest Teniente /
Keywords: DL-Lite - OBDA - updates
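The kind of update propagation the abstract above describes, from ontology-level insertions down to the underlying sources, might be sketched as follows. The inclusion axiom, the mappings, and the single non-recursive expansion step are hypothetical simplifications for illustration, not the paper's first-order/Datalog rewriting:

```python
# Illustrative sketch: rewriting an ontology-level insertion into the
# source-level inserts it entails, through simple concept-to-table mappings.

# Hypothetical DL-Lite-style inclusion axioms: Student is subsumed by Person.
AXIOMS = {"Student": ["Person"]}

# Hypothetical mappings: ontology concept -> source table.
MAPPINGS = {"Student": "students", "Person": "people"}

def rewrite_insert(concept, individual):
    """Rewrite the insertion of `individual` into `concept` as the set of
    (table, individual) source inserts it entails, applying each inclusion
    axiom once (a single, non-recursive expansion step)."""
    concepts = [concept] + AXIOMS.get(concept, [])
    return {(MAPPINGS[c], individual) for c in concepts if c in MAPPINGS}

print(sorted(rewrite_insert("Student", "ann")))
# [('people', 'ann'), ('students', 'ann')]
```

The point the sketch tries to convey is first-order rewritability: the set of source updates is computed by a bounded, non-recursive expansion over the axioms and mappings, which is why the paper can implement it in non-recursive Datalog.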