Arias, Joaquín


Alternative names

Arias, Joaquin

Known affiliations

Universidad Rey Juan Carlos, Spain
Universidad Politécnica de Madrid and IMDEA Software Institute, Spain
IMDEA Software Institute, Spain
Universidad Politécnica de Madrid and IMDEA Software

Search results

Showing 1 - 2 of 2
  • Article
    Modeling and Reasoning in Event Calculus Using Constraint Answer Set Programming
    Arias, Joaquín; Carro, Manuel. Actas de las XIX Jornadas de Programación y Lenguajes (PROLE 2019), 2019-09-02.
    Automated commonsense reasoning is essential for building human-like AI systems featuring, for example, explainable AI. Event Calculus (EC) is a family of formalisms that model commonsense reasoning with a sound, logical basis. Previous attempts to mechanize reasoning using EC faced difficulties in the treatment of continuous change in dense domains (e.g., time and other physical quantities), constraints among variables, default negation, and the uniform application of different inference methods, among others. We propose the use of s(CASP), a query-driven, top-down execution model for predicate Answer Set Programming with Constraints, to model and reason using EC. We show how EC scenarios can be modeled in s(CASP) and how its expressiveness makes it possible to perform deductive and abductive reasoning tasks in domains featuring, for example, constraints involving dense time and fluents.
  • Article
    fCASP: A forgetting technique for XAI based on goal-directed constraint ASP models
    Fidilio-Allende, Luciana; Arias, Joaquín. Actas de las XXIII Jornadas de Programación y Lenguajes (PROLE 2024), 2024-06-17.
    Artificial Intelligence systems based on machine learning are increasingly used to make decisions that directly affect humans, but they are not able to explain those decisions. On the other hand, Artificial Intelligence systems based on Constraint Answer Set Programming (CASP) provide human-readable justifications, and their models can be audited and/or adapted, e.g., to ensure that they are value-aware. While this explainability is a legal (and ethical) requirement, it can lead to a leak of sensitive information, for example in cases of victims of gender-based violence. Although explanations can be manipulated to avoid leaks, adapting the models requires the application of techniques such as forgetting. However, current forgetting techniques are mostly applied only to propositional ASP programs, and they have limitations in dealing with even loops. In this paper, we present preliminary results of a new forgetting technique, called fCASP, which can be successfully applied to examples that existing techniques are not able to solve correctly. fCASP is based on the dual rules of s(CASP), a goal-directed CASP reasoner, and therefore, we believe that it can be applied to generic CASP programs without grounding. We have validated our proposal by solving flagship examples from the literature, and we plan to use this technique in the context of school place allocation while preserving the privacy of victims of gender-based violence.
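The deductive reasoning described in the first abstract can be illustrated with a minimal, discrete sketch of the Event Calculus in Python. This is not the paper's encoding (which uses s(CASP) with constraints over dense time); the `turn_on`/`turn_off` events and the `light` fluent are invented here purely for illustration of the core axiom: a fluent holds at a time point if some earlier event initiated it and no intervening event terminated it (inertia).

```python
# Toy discrete Event Calculus (illustrative only; the paper's actual
# encoding uses s(CASP) with constraints over dense time).

# Narrative: events happening at integer time points (hypothetical example).
happens = {("turn_on", 2), ("turn_off", 5)}

def initiates(event, fluent):
    # Domain axiom: turning on initiates the 'light' fluent.
    return event == "turn_on" and fluent == "light"

def terminates(event, fluent):
    # Domain axiom: turning off terminates the 'light' fluent.
    return event == "turn_off" and fluent == "light"

def holds_at(fluent, t):
    """A fluent holds at t if some earlier event initiated it and no
    event strictly in between terminated it (the law of inertia)."""
    for (e, t0) in happens:
        if t0 < t and initiates(e, fluent):
            clipped = any(t0 < t1 < t and terminates(e1, fluent)
                          for (e1, t1) in happens)
            if not clipped:
                return True
    return False

print(holds_at("light", 3))  # True: turned on at 2, not yet turned off
print(holds_at("light", 6))  # False: turned off at 5
```

What the s(CASP) encoding adds over this sketch is, among other things, reasoning over dense (non-enumerated) time via constraints and abductive queries, neither of which a forward evaluation like this can express.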
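The notion of forgetting mentioned in the second abstract can be sketched, for the simplest case only, as unfolding in a propositional definite program: every body occurrence of the forgotten atom is replaced by the bodies of the rules that define it, and the defining rules are dropped. This toy sketch is an assumption for illustration; fCASP itself operates on the dual rules of s(CASP) and handles negation and even loops, which this fragment does not.

```python
# Toy unfolding-based forgetting for propositional definite programs
# (hypothetical illustration; NOT the fCASP algorithm, which works on
# s(CASP) dual rules and supports negation and even loops).

def forget(rules, atom):
    """rules: list of (head, [body atoms]) pairs.
    Replace each body occurrence of `atom` by the bodies of the rules
    defining it, then drop the rules whose head is `atom`."""
    defs = [body for head, body in rules if head == atom]
    out = []
    for head, body in rules:
        if head == atom:
            continue  # definitions of the forgotten atom are dropped
        if atom not in body:
            out.append((head, body))
            continue
        rest = [b for b in body if b != atom]
        for d in defs:
            # One new rule per definition of the forgotten atom.
            out.append((head, sorted(set(rest + d))))
    return out

# p :- q.   q :- r.   q :- s.     -- forget q -->     p :- r.   p :- s.
program = [("p", ["q"]), ("q", ["r"]), ("q", ["s"])]
print(forget(program, "q"))  # [('p', ['r']), ('p', ['s'])]
```

The even-loop limitation the abstract refers to arises precisely where this unfolding scheme breaks down, e.g. programs whose semantics depends on cycles through default negation.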