The author Juan Trujillo has published 14 articles:

1 - Ontology-Driven Approach for KPI Meta-modelling, Selection and Reasoning

Modern applications of Big Data are transcending from being scalable solutions for data processing and analysis to now providing advanced functionalities with the ability to exploit and understand the underpinning knowledge. This change is promoting the development of tools at the intersection of data processing, data analysis, knowledge extraction and management. In this paper, we propose TITAN, a software platform for managing the whole life cycle of science workflows, from deployment to execution, in the context of Big Data applications. This platform is characterised by a design and operation mode driven by semantics at different levels: data sources, problem domain and workflow components. The proposed platform is developed upon an ontological framework of meta-data that consistently manages processes and models and takes advantage of domain knowledge. TITAN comprises a well-grounded stack of Big Data technologies, including Apache Kafka for inter-component communication, Apache Avro for data serialisation and Apache Spark for data analytics. A series of use cases are conducted for validation, comprising workflow composition and semantic metadata management in academic and real-world fields of human activity recognition and land use monitoring from satellite images.

Authors: Maria del Mar Roldan-Garcia / José García-Nieto / Alejandro Maté / Juan Trujillo / José F. Aldana-Montes
Keywords: Knowledge Extraction - KPI Modelling - Ontology - Reasoning - Semantics - Water Management
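
The abstract names a concrete technology stack (Kafka, Avro, Spark) but gives no code. Below is a minimal sketch of what TITAN-style inter-component messaging could look like: an Avro-serialised record published to a Kafka topic. The schema, topic name and broker address are assumptions for illustration, not details from the paper.

    import io

    from confluent_kafka import Producer
    from fastavro import parse_schema, schemaless_writer

    # Illustrative Avro schema; the paper does not specify message formats.
    schema = parse_schema({
        "name": "SensorReading", "type": "record",
        "fields": [{"name": "sensor_id", "type": "string"},
                   {"name": "value", "type": "double"}],
    })

    def publish(producer: Producer, topic: str, record: dict) -> None:
        buf = io.BytesIO()
        schemaless_writer(buf, schema, record)  # Avro binary encoding
        producer.produce(topic, value=buf.getvalue())

    # Broker address and topic name are assumptions for the sketch.
    producer = Producer({"bootstrap.servers": "localhost:9092"})
    publish(producer, "titan.workflow.input", {"sensor_id": "s-1", "value": 0.7})
    producer.flush()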

2 - Framework for modelling and implementing secure NoSQL document databases

The great amounts of data managed by Big Data technologies have to be properly secured in order to protect critical enterprise and personal information. Nevertheless, current security solutions for Big Data technologies such as NoSQL databases do not take into account the special characteristics of these technologies. In this paper, we focus on securing NoSQL document databases by proposing a framework composed of three stages: (1) the source data set is analysed using Natural Language Processing techniques and ontological resources in order to detect sensitive data; (2) a metamodel for document NoSQL databases is defined that allows designers to specify both structural and security aspects; (3) this model is implemented in a specific document database tool, MongoDB. Finally, we apply the proposed framework to a case study with a dataset from the medical domain.

Authors: Carlos Blanco Bueno / Jesus Peral / Juan Trujillo / Eduardo Fernandez-Medina
Keywords: Big Data - Model - Natural Language Processing - NoSQL - Security
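
The paper's stage (3) implements the security model in MongoDB. As a rough sketch of what such an implementation step could look like, the snippet below hides fields flagged as sensitive behind a projection-only view and a read role; the database, collection, field and role names are hypothetical, standing in for what stages (1)-(2) would produce.

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client["medical"]

    # A view that projects the sensitive fields away (field names are
    # hypothetical placeholders for what stages (1)-(2) would flag).
    db.command("create", "patients_public",
               viewOn="patients",
               pipeline=[{"$project": {"diagnosis": 0, "ssn": 0}}])

    # A role that can only query the sanitised view, never the raw collection.
    db.command(
        "createRole", "readNonSensitive",
        privileges=[{"resource": {"db": "medical",
                                  "collection": "patients_public"},
                     "actions": ["find"]}],
        roles=[],
    )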

3 - Fostering Sustainability through Visualization Techniques for Real-Time IoT Data: A Case Study Based on Gas Turbines for Electricity Production

Improving sustainability is a key concern for industrial development. Industry has recently been benefiting from the rise of IoT technologies, leading to improvements in the monitoring and breakdown prevention of industrial equipment. In order to properly achieve this monitoring and prevention, visualization techniques are of paramount importance. However, the visualization of real-time IoT sensor data has always been challenging, especially when such data originate from sensors of different natures. In order to tackle this issue, we propose a methodology that aims to help users visually locate and understand the failures that could arise in a production process. This methodology collects, in a guided manner, user goals and the requirements of the production process, analyzes the incoming data from IoT sensors and automatically derives the most suitable visualization type for each context. This approach will help users to identify whether the production process is running as well as expected; thus, it will enable them to make the most sustainable decision in each situation. Finally, in order to assess the suitability of our proposal, a case study based on gas turbines for electricity generation is presented.

Authors: Ana Lavalle / Miguel A. Teruel / Alejandro Maté / Juan Trujillo
Keywords: Artificial Intelligence - Big Data Analytics - Data Visualization - Gas Turbines - Internet of Things - Sustainable Production
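
The methodology's key automatic step is deriving the most suitable visualization type for each context. A toy illustration of such a derivation is sketched below; the goals, data kinds and chart choices are invented for the example and are not the paper's actual rules.

    # Goal and data-kind values, and the chart chosen for each pair, are
    # invented; the paper derives the visualization from richer context.
    RULES = {
        ("monitor", "time-series"): "line chart with alert bands",
        ("compare", "categorical"): "grouped bar chart",
        ("locate-failure", "multi-sensor"): "heatmap over turbine components",
    }

    def suggest_visualization(goal: str, data_kind: str) -> str:
        return RULES.get((goal, data_kind), "table (fallback when no rule fits)")

    print(suggest_visualization("locate-failure", "multi-sensor"))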

4 - Improving Sustainability of Smart Cities through Visualization Techniques for Big Data from IoT Devices

Fostering sustainability is paramount for Smart City development. Lately, Smart Cities have been benefiting from the rise of Big Data coming from IoT devices, leading to improvements in monitoring and prevention. However, monitoring and prevention processes require visualization techniques as a key component. Indeed, in order to prevent possible hazards (such as fires, leaks, etc.) and optimize their resources, Smart Cities require adequate visualizations that provide insights to decision makers. Nevertheless, the visualization of Big Data has always been a challenging issue, especially when such data originate in real time. This problem becomes even bigger in Smart City environments, since we have to deal with many different groups of users and multiple heterogeneous data sources. Without a proper visualization methodology, complex dashboards that include data of different natures are difficult to understand. In order to tackle this issue, we propose a methodology based on visualization techniques for Big Data, aimed at improving the evidence-gathering process by assisting users in decision making in the context of Smart Cities. Moreover, in order to assess the impact of our proposal, a case study based on service calls to a fire department is presented. In this sense, our findings are applied to data coming from citizen calls. Thus, the results of this work contribute to the optimization of resources, namely fire extinguishing battalions, helping to improve their effectiveness and, as a result, the sustainability of the Smart City, which can operate better with fewer resources. Finally, in order to evaluate the impact of our proposal, we have performed an experiment with users who are not experts in data visualization.

Authors: Ana Lavalle / Miguel A. Teruel / Alejandro Maté / Juan Trujillo
Keywords: Artificial Intelligence - Big Data Analytics - Dashboards - Data Visualization - Internet of Things - Methodology - Smart City
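
Since the methodology must serve many different user groups drawing on heterogeneous sources, one possible supporting structure is a per-group dashboard composed of source-tagged panels. The sketch below is only an illustration of that idea; group, source and panel names are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Panel:
        title: str
        source: str   # heterogeneous origins: sensor streams, call records...
        chart: str

    @dataclass
    class Dashboard:
        user_group: str
        panels: list = field(default_factory=list)

    # Invented example for one of many user groups in a Smart City.
    command_view = Dashboard("fire-department-command", [
        Panel("Service calls per district", "citizen-calls", "choropleth map"),
        Panel("Battalion availability", "fleet-status", "bar chart"),
    ])
    print(command_view)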

5 - Modelado Seguro de Consultas OLAP y su Evolución

Information security is a critical concern for organizations. Data warehouses manage highly sensitive historical information: besides supporting strategic decision making, they usually include personal data protected by law. This information must therefore be secured, guaranteeing that the end users in charge of decision making neither access nor infer unauthorized information in their queries against the warehouse through OLAP applications. This article presents an approach for the secure modelling of OLAP queries in which both sensitive OLAP queries and their possible evolution through the application of OLAP operations are modelled. The approach thus makes it possible to establish the information that should be provided to each user at every point of their interaction with the warehouse, taking into account the information they have previously obtained, in order to limit the risk of inference.

Authors: Carlos Blanco / Eduardo Fernández-Medina / Juan Trujillo / Jan Jurjens
Keywords: Data Warehouses - Query Evolution - State Model - OLAP - Security
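
The core idea above, tracking what a user has already learned across successive OLAP operations in order to limit inference, can be illustrated with a small session object. The inference test below is a deliberately naive placeholder, not the paper's model.

    class QuerySession:
        """Tracks cells a user has already obtained across OLAP operations."""

        def __init__(self, protected_cells: set):
            self.protected = protected_cells
            self.revealed = set()

        def authorize(self, requested_cells: set) -> bool:
            combined = self.revealed | requested_cells
            if combined & self.protected:  # direct or cumulative disclosure
                return False
            self.revealed = combined
            return True

    s = QuerySession(protected_cells={("patient-42", "diagnosis")})
    print(s.authorize({("ward-A", "avg-stay")}))       # True: harmless query
    print(s.authorize({("patient-42", "diagnosis")}))  # False: protected cell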

6 - Modelado y Generación Automática de Requisitos de Cuadros de Mando

Business Intelligence (BI) uses large amounts of information coming from heterogeneous sources, traditionally integrated in a Data Warehouse (DW). In general, special attention has been paid to the design and implementation of the DW from the point of view of the information to be stored. However, few approaches so far prioritize how decision makers need to exploit that information. As a result, decision makers have the necessary data but are not able to use it optimally, nor to relate it to the business strategy. In this article, we propose a metamodel for the design of dashboards that allows designers to capture the data needs of decision makers and subsequently obtain the corresponding implementation on the target BI platform. In this way, the information and exploitation needs of end users guide the dashboard design process, with the goal of increasing end-user satisfaction.

Authors: Elisa de Gregorio / Alejandro Maté / Hector Llorens / Juan Trujillo
Keywords: Data Warehouses - Dashboards - MDA - Conceptual Modelling - Requirements - Data Visualization
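
In the MDA spirit of the proposal, a captured information need would eventually be transformed into an artefact for the target BI platform. The sketch below illustrates such a model-to-text step with invented class and field names; the paper's metamodel is considerably richer.

    from dataclasses import dataclass

    @dataclass
    class InformationNeed:
        goal: str        # business goal the decision maker tracks
        measure: str     # e.g. "monthly_sales"
        dimension: str   # e.g. "region"

    def to_widget_spec(need: InformationNeed) -> dict:
        """Model-to-text step: emit a platform-neutral widget description."""
        return {"type": "bar", "x": need.dimension, "y": need.measure,
                "title": need.goal}

    print(to_widget_spec(InformationNeed("Grow regional sales",
                                         "monthly_sales", "region")))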

7 - Integración de indicadores internos y externos mediante generación semi-automática de código

Monitoring the Key Performance Indicators of an organization is a crucial task that allows strategic decisions to be made based on reliable quantitative information. In traditional Business Intelligence (BI), this analysis is complex: when an indicator fails, it is not clearly known which business objectives are affected. To these difficulties we must add the increasingly common use of external data, usually hosted by third parties. Analysis thus becomes an amalgam of heterogeneous data provided by internal and external indicators that are rarely aligned with the business strategy and, therefore, make process control even harder. In this article, we propose an approach that enables the integration of a company's external and internal indicators. Thanks to our approach, data from external sources can be queried together with internal data, thereby simplifying the analysis task.

Authors: Elisa de Gregorio / Alejandro Maté / Juan Trujillo
Keywords: API - Business Strategy - Business Intelligence - KPI - Metamodelling - REST
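
To make the integration concrete, the sketch below queries a hypothetical third-party REST endpoint for an external indicator and combines it with an internal one. The URL, JSON shape and figures are assumptions for illustration, not part of the paper.

    import requests

    def external_kpi(indicator: str) -> float:
        # Hypothetical third-party endpoint hosting the external indicator.
        resp = requests.get(f"https://api.example.com/kpi/{indicator}",
                            timeout=10)
        resp.raise_for_status()
        return resp.json()["value"]

    internal_sales = 1_250_000.0                       # from the in-house DW
    sector_sales = external_kpi("sector-total-sales")  # third-party hosted
    market_share = internal_sales / sector_sales       # integrated KPI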

8 - A methodology to automatically translate user requirements into visualizations: Experimental validation

Context: Information visualization is paramount for the analysis of Big Data. The volume of data requiring interpretation is continuously growing. However, users are usually not experts in information visualization, so defining the visualization that best suits a given context is a very challenging task for them. Moreover, it is often the case that users do not have a clear idea of the objectives they are building the visualizations for. Consequently, graphics may be misinterpreted, leading to wrong decisions and missed opportunities. One of the underlying problems in this process is the lack of methodologies and tools that non-expert users in visualization can use to define their objectives and visualizations.

Objective: The main objectives of this paper are to (i) enable non-expert users in data visualization to communicate their analytical needs with little effort, (ii) generate the visualizations that best fit their requirements, and (iii) evaluate the impact of our proposal with reference to a case study, describing an experiment with 97 non-expert users in data visualization.

Methods: We propose a methodology that collects user requirements and semi-automatically creates suitable visualizations. Our proposal covers the whole process, from the definition of requirements to the implementation of visualizations. The methodology has been tested with several groups to measure its effectiveness and perceived usefulness.

Results: The experiments increase our confidence in the utility of our methodology. It improves significantly over the case in which users face the same problem manually. Specifically: (i) users cover more analytical questions, (ii) the visualizations produced are more effective, and (iii) the overall satisfaction of the users is higher.

Conclusion: By following our proposal, non-expert users will be able to express their analytical needs more effectively and obtain the set of visualizations that best suits their goals.

Authors: Ana Lavalle / Alejandro Maté / Juan Trujillo / Miguel A. Teruel / Stefano Rizzi
Keywords: Big Data Analytics - Data Visualization - Experimental Validation - Model-Driven Development - Requirements Engineering
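
As an illustration of the final, semi-automatic step of such a methodology, the sketch below materialises a chart from a requirement captured earlier. The requirement fields, the mapping of "evolution" questions to line charts and the data are assumptions of the example.

    import matplotlib.pyplot as plt

    # A requirement as the methodology might capture it (fields invented).
    requirement = {"question": "Sales per quarter", "kind": "evolution",
                   "x": ["Q1", "Q2", "Q3", "Q4"], "y": [10, 14, 9, 17]}

    # In this sketch, 'evolution' questions map onto a line chart.
    fig, ax = plt.subplots()
    ax.plot(requirement["x"], requirement["y"], marker="o")
    ax.set_title(requirement["question"])
    fig.savefig("sales_per_quarter.png")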

9 - A Trace Metamodel Proposal based on the Model Driven Architecture Framework for the Traceability of User Requirements in Data Warehouses

Data warehouses (DW) integrate several heterogeneous data sources into multidimensional structures (i.e. facts and dimensions) in support of the decision-making process in Business Intelligence. The development of the DW is therefore a complex process that must be carefully planned in order to meet user needs. To develop the DW, three different approaches, similar to the existing ones in Software Engineering (bottom-up or supply-driven, top-down or demand-driven, and hybrid), were proposed [1]. The hybrid approach makes use of both data sources and user requirements, and avoids the problem of overlooking information from one of the two sources until the DW is already built.
However, the hybrid approach raises a new problem. DW elements are merged to incorporate information from both requirements and data sources, each named using a different terminology. As a result, implicit traceability is lost, which hurts requirements validation, makes it impossible to trace each requirement, and dramatically increases the cost of introducing changes.
In order to solve this problem, in this paper we perform a thorough review of the literature on traceability and, due to the special idiosyncrasy of DW development, we propose a novel trace metamodel specifically tailored to face several challenges: (i) connecting multiple sources with multiple targets in a meaningful way, since requirements need to be reconciled with data sources that may or may not match the expectations of the users; (ii) being weakly coupled with DW models, as these models can change since there is no standard; and (iii) minimizing the overhead that traceability introduces into the development process, by defining how traces can be generated automatically and maintained without user intervention wherever possible.
First, we introduce the semantics included in the metamodel, to cover the different relationships involved in DW development. Then, we describe how traces can be integrated within DW development by means of trace models. Afterwards, we show how these trace models can be aligned with the Model Driven Architecture (MDA) framework in order to semi-automatically generate traces within the DW development process. We show how to generate traces from user requirements to conceptual DW models by means of Query/View/Transformation (QVT) rules, thus saving time and costs required to record traces. Furthermore, we also describe how traces can be maintained without requiring human intervention when changes are introduced into the DW. Additionally, we show how the framework can be implemented within the Eclipse platform and how the results are integrated into a DW development approach.
In order to show the applicability of our proposal, we show an example of application based on a real case study with another university that involved designing several data marts for educational analysis. As shown in Figure 1, our framework allows us to trace each requirement, as well as any modifications, to its corresponding elements in the DW. The great benefit of our proposal is the improvement in requirements validation as well as being able to easily assess the impact of changes and regenerate the affected parts.
Our immediate plans are to develop a new set of QVT rules covering the relationships between the conceptual and logical models, and to explore the potential of using the information recorded in the traces to support automated analysis. We will also complete the development of our traceability framework in order to make the maintenance of traces as automatic as possible.
Acknowledgments This work has been partially supported by the MESOLAP (TIN2010-14860) and SERENIDAD (PEII-11-0327-7035) projects from the Spanish Ministry of Education and the Junta de Comunidades de Castilla La Mancha. Alejandro Maté is funded by the Generalitat Valenciana under the grant ACIF/2010/298.

Authors: Alejandro Maté / Juan Trujillo
Keywords:
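
The heart of the proposal is a set of traces from requirements to DW elements that supports impact analysis. Stripped of the metamodel's semantics, the minimal structure could look like the sketch below; requirement and element names are invented for illustration.

    from collections import defaultdict

    traces = defaultdict(set)  # requirement id -> DW elements it traces to

    def add_trace(req_id: str, dw_element: str) -> None:
        traces[req_id].add(dw_element)

    def impact_of_change(req_id: str) -> set:
        """Elements to revisit when the requirement changes."""
        return traces[req_id]

    add_trace("REQ-7", "Fact: Enrolment")
    add_trace("REQ-7", "Dimension: AcademicYear")
    print(impact_of_change("REQ-7"))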

10 - Tactical Business-Process-Decision Support based on KPIs Monitoring and Validation

Key Performance Indicators (KPIs) can be used to evaluate the success of an organization, facilitating the detection of deviations and unexpected evolutions in the behaviour of a company. The difficulty for enterprises is to ascertain what to do when a deviation is detected. In this paper, we propose a modelling approach to improve the operational business level and to ascertain the possible actions that can be executed to keep a company on the right course. For business-process-oriented companies, this entails knowing how KPIs can be affected by the business processes; it implies not only pointing out that a malfunction exists, but also knowing what to do when a deviation is detected. Our proposal presents a methodology that covers: (1) an extension of existing models in order to combine KPIs, company goals and decision variables together with business processes; (2) a methodology based on data mining analysis to verify the correctness of the enriched model against the data stored during business evolution; and (3) a framework to simulate the evolution of the business according to the decisions taken in the governance process, thereby supporting governance activities in achieving the defined objectives by exploiting the goals and KPIs of the proposed model.

Authors: José Miguel Pérez-Álvarez / Alejandro Maté / Maria Teresa Gómez López / Juan Trujillo
Keywords: Business Process - Decision Support - Fuzzy Logic - Governance - KPIs - Modelling Knowledge
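
A minimal illustration of the monitoring side of this proposal is sketched below: a KPI deviation is detected and the business processes and decision variables linked to it in the enriched model are surfaced. All names and thresholds are invented for the example.

    # Enriched-model entry linking a KPI to processes and decision variables.
    KPI_MODEL = {
        "on-time-delivery": {
            "target": 0.95,
            "processes": ["order-picking", "last-mile-routing"],
            "decision_variables": ["extra-shifts", "carrier-mix"],
        },
    }

    def check(kpi: str, observed: float) -> None:
        entry = KPI_MODEL[kpi]
        if observed < entry["target"]:  # deviation detected
            print(f"{kpi} deviated ({observed:.2f} < {entry['target']:.2f}); "
                  f"review {entry['processes']}, act on "
                  f"{entry['decision_variables']}")

    check("on-time-delivery", 0.91)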

11 - Modelado Conceptual basado en Objetivos para la definición de Visualizaciones

Ever larger amounts of data need to be analyzed and interpreted, and information visualization plays a key role in this. Defining a correct, error-free visualization is crucial to understanding and interpreting the patterns and results obtained by analysis algorithms, since a wrong interpretation or incorrect results could cause significant losses for a company. However, defining visualizations is a difficult task for business users: in most cases they are not experts in information visualization and do not know exactly which tools or visualization types are most suitable for measuring their objectives. The main problem they face is the lack of tools and methodologies that help non-expert users define their visualization and data analysis objectives in business terms. To address this problem, we present a model based on the i* language for the specification of data visualizations. Our proposal makes it possible to select the most suitable visualization techniques objectively, with the great advantage of providing non-expert users with the visualizations best suited to their needs and their data, with little effort and without requiring experience in information visualization.

Authors: Ana Lavalle / Alejandro Maté / Juan Trujillo
Keywords: Data Analytics - Goal-Oriented Model - User Requirements - Data Visualization
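
Reduced to plain data structures, an i*-style specification along these lines might associate a business goal with a visualization requirement from which a chart type is derived, as sketched below; the element names and the derivation table are assumptions, not the paper's model.

    from dataclasses import dataclass

    @dataclass
    class Goal:
        name: str

    @dataclass
    class VisualizationRequirement:
        goal: Goal
        interaction: str  # e.g. "compare", "evolve"
        data_kind: str    # e.g. "categorical", "time-series"

    def derive_chart(req: VisualizationRequirement) -> str:
        table = {("compare", "categorical"): "bar chart",
                 ("evolve", "time-series"): "line chart"}
        return table.get((req.interaction, req.data_kind), "table")

    req = VisualizationRequirement(Goal("Reduce churn"), "evolve", "time-series")
    print(derive_chart(req))  # line chart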

12 - Modelado multidimensional para la visualización integrada de Big Data en plataformas de Inteligencia de Negocio

The vast amount of available information, as well as its heterogeneity, has surpassed the capacity of current data management technologies. Dealing with large volumes of structured and unstructured data, often referred to as Big Data, is both a current research topic and a major technological challenge. In this article, we present an approach aimed at enabling OLAP queries over different, heterogeneous data sources, assisted by visualization tools that ease their processing. Our approach is based on the MapReduce paradigm and supports the integration of different formats, such as the novel RDF Data Cube format. The main contributions of our approach are the ability to query and visualize different sources of information while maintaining an integrated, complete view of the available data, as well as a simple Big Data visualization interface. The article also analyzes the advantages, disadvantages and implementation challenges of this approach, and concludes with a case study showing the benefits of the presented approach.

Authors: Roberto Tardío / Elisa de Gregorio / Alejandro Maté / Rafa Muñoz-Terol / Hector Llorens / Juan Trujillo / David Gil
Keywords:
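
The MapReduce idea behind the approach can be illustrated in miniature: map heterogeneous records, here standing in for a warehouse and an RDF Data Cube source, to (dimension, measure) pairs, then reduce them into an OLAP-style aggregate. Record shapes and values are invented.

    from collections import Counter
    from functools import reduce

    # Invented records standing in for heterogeneous sources.
    records = [
        {"source": "warehouse", "region": "ES", "sales": 120.0},
        {"source": "rdf-cube",  "region": "ES", "sales": 80.0},
        {"source": "rdf-cube",  "region": "FR", "sales": 95.0},
    ]

    mapped = [(r["region"], r["sales"]) for r in records]          # map phase
    cube = reduce(lambda acc, kv: acc + Counter({kv[0]: kv[1]}),   # reduce
                  mapped, Counter())
    print(dict(cube))  # {'ES': 200.0, 'FR': 95.0}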

13 - Easing DApp Interaction for Non-Blockchain Users from a Conceptual Modelling Approach

Blockchain decentralized applications (DApps) are applications that run on blockchain nodes. To interact directly with this sort of application, users need a blockchain address, a wallet, and knowledge of how to issue transactions to interact with DApps. The knowledge required to use a DApp can therefore easily make users desist when trying to interact with one. To tackle this issue, we propose a software architecture that sits between the user and the DApp, making users initially unaware of the fact that they are interacting with a DApp. This is achieved by analyzing the relationship between DApps and apps using UML modelling. Next, based on this analysis, we created a middleware that lets users interact with a DApp in the same manner they interact with a traditional web app, i.e., by using usernames, passwords and user interface elements instead of addresses, private keys or transactions. To put the developed middleware into practice, we developed a DApp that makes use of it. This DApp registers the time control of company workers, using blockchain to store the data in a secure and non-modifiable manner. Finally, we performed an experiment, demonstrating that a DApp implementing the proposed middleware improves usability for users with no blockchain experience.

Authors: Miguel A. Teruel / Juan Trujillo
Keywords: Blockchain - Clockchain - Conceptual Modelling - DApp - Ethereum - Middleware - Quorum - Smart Contract - Solidity - UML
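
The middleware concept, username/password interaction on the surface with key management and transaction signing handled underneath, can be sketched as follows. Key handling uses the eth_account library; storage, authentication and the actual transaction submission are simplified placeholders, not the paper's implementation.

    from eth_account import Account

    users = {}  # in production: hashed passwords and secure key storage

    def register(username: str, password: str) -> str:
        acct = Account.create()  # wallet created transparently for the user
        users[username] = {"password": password, "key": acct.key}
        return acct.address

    def clock_in(username: str, password: str) -> None:
        user = users[username]
        assert user["password"] == password  # placeholder authentication
        acct = Account.from_key(user["key"])
        # Here the middleware would build, sign and send the smart-contract
        # transaction (e.g. via web3.py); elided in this sketch.
        print(f"time record signed on behalf of {acct.address}")

    register("alice", "s3cret")
    clock_in("alice", "s3cret")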

14 - Adding Semantic Modules to improve Goal-Oriented Analysis of Data Warehouses using I-star

Requirements elicitation and analysis is a key step in designing and maintaining data warehouses. In order to better support this step, in this paper we (i) propose an extension of the basic goal-oriented metamodel that includes semantic modules, (ii) describe each step followed in the process, and (iii) evaluate the proposal by means of an empirical experiment.

Authors: Alejandro Maté / Juan Trujillo / Xavier Franch
Keywords: Data Warehouses - i-star - User Requirements