The author Ismael Navas-Delgado has published 7 articles:
In recent decades, the growth of information sources across many areas of society, from healthcare to social networks, has highlighted the need for new analysis techniques, a field that has come to be known as Big Data. Classical optimization problems are not immune to this paradigm shift; the Traveling Salesman Problem (TSP), for example, can benefit from the data provided by the sensors deployed in cities, which can be accessed through Open Data portals. When performing analyses on Big Data, whether for optimization or machine learning, one of the most common approaches is to use analysis workflows, which are composed of components that carry out each step of the analysis. The flow of information in these workflows can be annotated and stored using Semantic Web tools to facilitate the reuse of individual components, or even of the complete workflow, in future analyses, thus easing their reuse and, in turn, improving the workflow creation process. To this end, the BIGOWL ontology was created: it traces the data value chain of workflows through semantics and assists the analyst in workflow creation by guiding composition with the information it holds from the annotation of algorithms, data, components, and workflows. The problem that BIGOWL addresses and solves is giving structure to this information so that it can be integrated into the components.
To validate the semantic model, a series of SPARQL queries and reasoning rules are presented to guide the creation and validation process in two case studies: first, streaming processing of real traffic data with Spark for route optimization in the urban environment of New York City; and second, classification of an academic dataset, the Iris flower dataset, using data mining algorithms.
Authors: Cristóbal Barba-González / José García-Nieto / Maria Del Mar Roldan-Garcia / Ismael Navas-Delgado / Antonio J. Nebro / Jose F Aldana Montes
Keywords: Big Data - Machine Learning - Optimization - Semantic Web
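The guidance idea behind BIGOWL can be pictured with a small sketch. This is an illustrative analogy, not BIGOWL's actual OWL model or API: component names and data types are invented, and the ontology's annotations are stood in for by plain Python dictionaries, so that a composer can suggest which component may legally follow another.

```python
# Illustrative sketch (not BIGOWL's real vocabulary): each workflow component
# is annotated with the data type it consumes and the one it produces, so a
# workflow composer can propose compatible successors during composition.
COMPONENTS = {
    "CSVReader":       {"consumes": None,           "produces": "TabularData"},
    "Normalizer":      {"consumes": "TabularData",  "produces": "TabularData"},
    "KMeansClusterer": {"consumes": "TabularData",  "produces": "ClusterModel"},
    "RouteOptimizer":  {"consumes": "TrafficStream", "produces": "Route"},
}

def compatible_successors(component: str) -> list[str]:
    """Return components whose declared input matches this one's output."""
    produced = COMPONENTS[component]["produces"]
    return sorted(name for name, ann in COMPONENTS.items()
                  if ann["consumes"] == produced)
```

For instance, after a `CSVReader` the composer would offer only the components that accept tabular data, which is exactly the kind of guided composition the abstract describes.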
Modern applications of Big Data are transcending from being scalable solutions of data processing and analysis to now providing advanced functionalities with the ability to exploit and understand the underpinning knowledge. This change is promoting the development of tools at the intersection of data processing, data analysis, knowledge extraction and management. In this paper, we propose TITAN, a software platform for managing the whole life cycle of science workflows, from deployment to execution, in the context of Big Data applications. This platform is characterised by a design and operation mode driven by semantics at different levels: data sources, problem domain and workflow components. The proposed platform is developed upon an ontological framework of meta-data, consistently managing processes and models and taking advantage of domain knowledge. TITAN comprises a well-grounded stack of Big Data technologies, including Apache Kafka for inter-component communication, Apache Avro for data serialisation and Apache Spark for data analytics. A series of use cases are conducted for validation, comprising workflow composition and semantic metadata management in academic and real-world fields of human activity recognition and land use monitoring from satellite images.
Authors: Antonio Benítez-Hidalgo / Cristóbal Barba-González / José García-Nieto / Pedro Gutierez-Moncayo / Manuel Paneque / Antonio J. Nebro / Maria del Mar Roldan-Garcia / Jose F. Aldana-Montes / Ismael Navas-Delgado
Keywords: Big Data analytics - Knowledge extraction - Semantics
In the last decade, clinical trial management systems have become an essential support tool for data management and analysis in clinical research. However, these clinical tools have design limitations, since they are currently not able to cover the need to adapt to the continuous changes in the practice of trials, owing to the heterogeneous and dynamic nature of clinical research data. These systems are usually proprietary solutions provided by vendors for specific tasks. In this work, we propose FIMED, a software solution for the flexible management of clinical data from multiple trials, moving towards personalized medicine, which can contribute positively by improving the quality and ease of clinical researchers' work in clinical trials. This tool allows a dynamic and incremental design of patients' profiles in the context of clinical trials, providing a flexible user interface that hides the complexity of using databases. Clinical researchers are able to define personalized data schemas according to their needs and clinical study specifications. Thus, FIMED allows the incorporation of separate clinical data analysis from multiple trials. The efficiency of the software has been demonstrated with a real-world use case for a clinical assay in melanoma, which has been anonymized to provide a user demonstration. FIMED currently provides three data analysis and visualization components for the clinical exploration of gene expression data: heatmap visualization, cluster-heatmap visualization, and gene regulatory network inference and visualization. An instance of this tool is freely available on the web at https://khaos.uma.es/fimed. It can be accessed with a demo user account, «researcher», using the password «demo». Category: COMPUTER SCIENCE, THEORY & METHODS. Ranking: 13/110. Journal: COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE. Year: 2021. DOI: https://doi.org/10.1016/j.cmpb.2021.106496.
Authors: Sandro Hurtado / José García-Nieto / Ismael Navas-Delgado / Jose F Aldana Montes
Keywords: Clinical Research - Clinical Trial Management Systems - Gene Expression Data Analysis - Gene Regulatory Network Inference - NoSQL Database
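The "dynamic and incremental schema" idea can be sketched in a few lines. This is a hypothetical illustration, not FIMED's actual data model: field names (`patient_id`, `braf_mutation`) and the schema format are invented to show how a trial-specific, researcher-defined schema could validate records NoSQL-style, without a fixed relational layout.

```python
# Hypothetical sketch of researcher-defined, per-trial data schemas: each
# trial declares its own fields and types, and patient records are validated
# against that schema before storage. Field names are invented for the example.
def validate(record: dict, schema: dict) -> list[str]:
    """Return a list of validation errors (empty if the record fits the schema)."""
    errors = []
    for field, ftype in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"bad type for {field}: expected {ftype.__name__}")
    return errors

melanoma_schema = {"patient_id": str, "age": int, "braf_mutation": bool}
ok  = validate({"patient_id": "P-001", "age": 58, "braf_mutation": True}, melanoma_schema)
bad = validate({"patient_id": "P-002", "age": "58"}, melanoma_schema)
```

Because the schema is data rather than a compiled table definition, a researcher can add or change fields mid-trial without migrating a database, which is the flexibility the abstract emphasizes.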
In the field of complex problem optimization with metaheuristics, semantics has been used for modeling different aspects, such as problem characterization, parameters, decision-maker's preferences, or algorithms. However, there is a lack of approaches where ontologies are applied in a direct way to the optimization process, with the aim of enhancing it by allowing the systematic incorporation of additional domain knowledge. This is due to the high level of abstraction of ontologies, which makes them difficult to map into the code implementing the problems and/or the specific operators of metaheuristics. In this paper, we present a strategy to inject domain knowledge (by reusing existing ontologies or creating a new one) into a problem implementation that will be optimized using a metaheuristic. Thus, this approach based on accepted ontologies enables building and exploiting complex computing systems in optimization problems. We describe a methodology to automatically induce user choices (taken from the ontology) into the problem implementations provided by the jMetal optimization framework. With the aim of illustrating our proposal, we focus on the urban domain. Concretely, we start by defining an ontology representing the domain semantics for a city (e.g., buildings, bridges, points of interest, routes, etc.) that allows a decision maker to define a-priori preferences in a standard, reusable, and formal (logic-based) way. We validate our proposal with several instances of two use cases, consisting of bi-objective formulations of the Traveling Salesman Problem (TSP) and the Radio Network Design problem (RND), both in the context of an urban scenario. The results of the experiments conducted show how the semantic specification of domain constraints is effectively mapped into feasible solutions of the tackled TSP and RND scenarios.
This proposal represents a step forward towards the automatic modeling and adaptation of optimization problems guided by semantics, where the annotations of a human expert can now be considered during the optimization process.
Authors: Cristobal Barba-Gonzalez / Antonio J. Nebro / José García-Nieto / Maria Del Mar Roldan-Garcia / Ismael Navas-Delgado / Jose F Aldana Montes
Keywords: Decision Making - Domain knowledge - Metaheuristics - Multi-objective optimization - Ontology - Semantic Web technologies
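The mapping of "semantic domain constraints into feasible solutions" can be pictured with a toy TSP. This is a minimal pure-Python illustration, not the paper's jMetal implementation: the city names, distances, and the forbidden edge (standing in for an ontology-annotated constraint such as a closed bridge) are all invented for the example.

```python
# Toy illustration of injecting domain knowledge into a TSP: a forbidden edge,
# which in the paper would come from an urban ontology, filters out infeasible
# tours before the shortest feasible one is chosen by exhaustive search.
import itertools

DIST = {  # symmetric toy distance matrix over four city locations
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4, ("C", "D"): 8,
}

def d(a, b):
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

FORBIDDEN_EDGES = {frozenset({"A", "C"})}  # domain constraint, e.g. a closed bridge

def tour_length(tour):
    return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def is_feasible(tour):
    edges = {frozenset({tour[i], tour[(i + 1) % len(tour)]}) for i in range(len(tour))}
    return not (edges & FORBIDDEN_EDGES)

def best_feasible_tour(cities=("A", "B", "C", "D")):
    candidates = [(cities[0],) + p for p in itertools.permutations(cities[1:])]
    return min((t for t in candidates if is_feasible(t)), key=tour_length)
```

In a metaheuristic, the same feasibility check would typically act inside the evaluation or repair step rather than via exhaustive enumeration; the point is that the constraint is declared as data (here a set, in the paper an ontology) rather than hard-coded into the problem.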
High-throughput experiments have produced large amounts of heterogeneous data in the life sciences. The integration of data in the life sciences is a key component in the analysis of biological processes. These data may contain errors, but the curation of the vast amount of data generated in the «omic» era cannot be done by individual researchers. To address this problem, community-driven tools could be used to assist with data analysis. In this paper, we focus on a tool with social networking capabilities built on top of the SBMM (Systems Biology Metabolic Modelling) Assistant to enable the collaborative improvement of metabolic pathway models (the application is freely available at http://sbmm.uma.es/SPA).
Authors: Ismael Navas-Delgado / Alejandro del Real-Chicharro / Miguel Ángel Medina / Francisca Sánchez-Jiménez / José F. Aldana-Montes
Keywords: Data Integration - Life Sciences - Social Data Curation
Cardiovascular diseases are the leading cause of death in Spain, making it necessary to prevent risk factors such as obesity or high cholesterol levels. Physical activity prevents these problems, and monitoring it with activity wristbands makes it possible to take corrective decisions. This work presents an experiment to evaluate the feasibility of automatically detecting certain activities using supervised Deep Learning algorithms.
Authors: Sandro Hurtado-Requena / Cristobal Barba-Gonzalez / Maciej Rybinski / Francisco J Baron-Lopez / Julia Warnberg / Ismael Navas-Delgado / Jose F Aldana-Montes
Keywords: Classification - Deep Learning - HAR - Wearable sensors
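A standard preprocessing step in this kind of human activity recognition (HAR) pipeline is segmenting the wristband's sensor stream into overlapping windows before classification. The sketch below illustrates that step only; the window size, overlap, and features are illustrative choices, not the experiment's actual parameters, and the sample values are invented.

```python
# Sketch of the sliding-window segmentation typically applied to wearable
# accelerometer streams before feeding a supervised (e.g. Deep Learning)
# classifier. Window size and step are illustrative, not the paper's settings.
def sliding_windows(signal, size=4, step=2):
    """Split a 1-D sample stream into overlapping fixed-length windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def window_features(window):
    """Simple per-window features (mean, range) a baseline classifier could use."""
    return (sum(window) / len(window), max(window) - min(window))

stream = [0.1, 0.2, 0.1, 0.9, 1.1, 1.0, 0.2, 0.1]  # invented accelerometer samples
windows = sliding_windows(stream)
features = [window_features(w) for w in windows]
```

A deep model would consume the raw windows directly instead of hand-crafted features, but the windowing itself is the same.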
Life Sciences have emerged as a key domain in the Linked Data community because of the diversity of data semantics and formats available through a great variety of databases and web technologies. Unfortunately, bioinformaticians are not exploiting the full potential of this technology, and experts in Life Sciences face real difficulties in discovering, understanding and devising how to take advantage of these interlinked data. In this context, we have implemented Bioqueries, a wiki-based portal aimed at community building around biological Linked Data (http://bioqueries.uma.es/). This space offers a collaborative platform in which users can create, modify, execute and share biological SPARQL queries.
Authors: María Jesús García Godoy / Esteban López-Camacho / Ismael Navas-Delgado / José F. Aldana-Montes
Keywords: