Towards Dynamic Consistency Checking in Goal-directed Predicate Answer Set Programming





Published in

Actas de las XXI Jornadas de Programación y Lenguajes (PROLE 2022)




Goal-directed evaluation of Answer Set Programs is gaining traction thanks to its amenability to building AI systems that can, due to the evaluation mechanism used, generate explanations and justifications. s(CASP) is one of these systems and has already been used to write reasoning systems in several fields. It provides enhanced expressiveness w.r.t. other ASP systems due to its ability to use constraints, data structures, and unbound variables natively. However, the performance of existing s(CASP) implementations is not on par with other ASP systems: model consistency is checked once models have been generated, in keeping with the generate-and-test paradigm. In this work, we present a variation of the top-down evaluation strategy, termed Dynamic Consistency Checking, which interleaves model generation and consistency checking. This makes it possible to determine when a literal is not compatible with the denials associated with the global constraints in the program, prune the current execution branch, and choose a different alternative. This strategy is especially (but not exclusively) relevant in problems with a high combinatorial component. We have experimentally observed speedups of up to 90× w.r.t. the standard versions of s(CASP). The full paper was presented at the 24th International Symposium on Practical Aspects of Declarative Languages (PADL 2022), Philadelphia, PA, USA, January 17–18, 2022 [1].
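To illustrate the kind of program the abstract refers to, the following is a hypothetical sketch (not taken from the paper) of a tiny graph-coloring fragment in ASP syntax. The headless rule at the end is a denial, i.e., a global constraint: under the generate-and-test strategy it is checked only after a candidate model is complete, whereas under Dynamic Consistency Checking a partial model containing, say, `assign(a, red)` would cause the branch attempting `assign(b, red)` to be pruned immediately.

```
% Hypothetical example: 2-coloring two adjacent nodes.
node(a). node(b).
edge(a, b).
color(red). color(green).

% Each node is assigned exactly one color (even-loop choice idiom).
assign(N, C)  :- node(N), color(C), not other(N, C).
other(N, C)   :- node(N), color(C), color(C2), C \= C2, assign(N, C2).

% Denial (global constraint): adjacent nodes may not share a color.
:- edge(X, Y), assign(X, C), assign(Y, C).
```

Queried with `?- assign(a, Ca), assign(b, Cb).`, a goal-directed evaluation with DCC can discard inconsistent partial assignments as soon as the second `assign/2` literal conflicts with the denial, rather than after the whole model has been built.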


About Arias, Joaquín

Keywords

Answer Set Programming, Constraints, Dynamic Consistency Checking, Goal-directed Evaluation