Browsing Jornadas sobre Programación y Lenguajes (PROLE) by publication date
Showing 1 - 20 of 191
Abstract: Satisfiability of Constraint Specifications on XML Documents
Navarro, Marisa; Orejas, Fernando; Pino, Elvira. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
In this work, we present an approach for specifying the structure of XML documents using three kinds of constraints based on XPath, together with a sound and complete method for reasoning about them. Currently, classes of XML documents are typically specified by means of DTDs or XML Schemas. In both cases, we essentially describe the abstract syntax of the class of documents and the types of its attributes. This is quite limited. In particular, we may want to state more complex conditions about the structure of documents in a given class or about their contents. For example, with respect to the structure of documents, we may want to state that if an element includes an attribute with a given content, then these documents should not include some other element. Or, with respect to the contents of documents, we may want to express that the value of some numeric attribute of a certain element is smaller than the value of another attribute of a different element. In this paper, we concentrate on the specification of the structure of documents, paying little attention to their contents. In this sense, we present an abstract approach for the specification of (the structure of) classes of XML documents using sets of constraints that are based on XPath [8, 9] queries, using the concept of tree patterns. Roughly, a tree pattern describes a basic property of the structure of documents. Its root represents the root of documents. Nodes represent elements that must be present in the given documents, and their labels represent their contents, i.e. the names of elements and their values, if any. A wildcard (the symbol ∗) means that we do not know, or do not care about, the contents of that element.
Finally, single edges represent parent/child relations between elements, while double edges represent a descendant relationship between elements. Again, if either of these two relations is included in a tree pattern, then it should also be included in the documents satisfying that property. For instance, on the left of Fig. 1 we show a tree pattern p describing documents D whose root node is labelled a, some child node of the root node in D is labelled b, and some descendant node of the root node in D has two child nodes labelled c and d, respectively. Similarly, we represent, in an abstract way, XML documents using the same kind of trees. The difference between a document and a tree pattern is that a document includes neither double edges nor wildcards. For example, on the right of Fig. 1 we show a document that satisfies the pattern on the left. In particular, we may see that the root of the document is labelled a. Moreover, that root has a child node labelled b and a descendant node (the element labelled f) that has two child nodes labelled c and d, respectively. We consider three kinds of (atomic) constraints. The first kind, positive constraints, are tree patterns. The second are negative constraints, ¬p, where p is a tree pattern, expressing that documents should not satisfy p. Finally, the third kind are conditional constraints, written ∀(c : p → q), where both p and q are tree patterns. Roughly speaking, these constraints express that if a document satisfies p then it must also satisfy q. Moreover, these constraints can be combined using the connectives ∧ and ∨. These kinds of constraints are similar to the graph constraints studied in [6, 7] in the context of graph transformation. The work presented in those papers shows how to use graph constraints as a specification formalism, and how to reason about such specifications. However, the application of these ideas to our setting is not trivial.
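The notion of a document satisfying a tree pattern can be sketched informally as follows (this is our own illustrative code, not the authors' tool; in particular, it checks pattern edges independently rather than via the monomorphisms used in the paper):

```python
# Documents are plain labelled trees; patterns additionally allow the
# wildcard '*' and edges tagged "child" (parent/child, a single edge)
# or "desc" (descendant, a double edge).

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

class PNode:
    def __init__(self, label, edges=None):
        self.label = label        # element name, or '*' for "don't care"
        self.edges = edges or []  # list of (kind, PNode), kind in {"child", "desc"}

def descendants(node):
    """All proper descendants of a document node."""
    for c in node.children:
        yield c
        yield from descendants(c)

def matches(pat, doc):
    """Does document node `doc` satisfy pattern node `pat`?"""
    if pat.label != '*' and pat.label != doc.label:
        return False
    for kind, sub in pat.edges:
        candidates = doc.children if kind == "child" else descendants(doc)
        if not any(matches(sub, d) for d in candidates):
            return False
    return True

# The pattern of Fig. 1: root a with a child b and some descendant
# having two children labelled c and d.
p = PNode('a', [("child", PNode('b')),
                ("desc",  PNode('*', [("child", PNode('c')),
                                      ("child", PNode('d'))]))])
# A document satisfying it: a(b, e(f(c, d)))
D = Node('a', [Node('b'), Node('e', [Node('f', [Node('c'), Node('d')])])])
print(matches(p, D))   # True
```

A document lacking the c and d children anywhere (e.g. just a(b)) would fail the descendant edge and hence not satisfy the pattern.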
Specifically, the descendant relation in our constraints makes the application of these techniques non-trivial, since the descendant relation would be second-order in the logic of graph constraints defined in [6, 7]. Obviously, there are conditions on the structure of XML documents that are not expressible using the kind of constraints studied in this paper. However, our experience in the area of graph transformation [6, 7] shows that, in practice, these constraints are sufficient in most cases. Nevertheless, we believe that the ideas presented here can be extended to a class of XML constraints similar to the class of nested graph conditions that has been shown equivalent to the first-order logic of graphs. However, we also believe that this extension is not straightforward. Since our aim is to be able to reason about these specifications, we present inference rules that are shown, by means of tableaux, to define a sound and complete refutation procedure for checking satisfiability of a given specification. We strongly believe that the satisfiability problem for this class of constraints is only semidecidable, since we believe that it would be similar to the (un)decidability of the satisfiability problem for the Horn clause fragment of first-order logic. As a consequence, if a given specification is inconsistent, we can be sure that our procedure will terminate, establishing its unsatisfiability. However, our procedure may not terminate if the given specification is satisfiable. Nevertheless, we may like to have an idea of the performance of our approach when the procedure terminates. One might think that this performance would be quite poor, since checking whether there is a monomorphism between two trees (a basic operation in our deduction procedure) is an NP-complete problem. Actually, this is not our experience with the tool that we have implemented. We think that the situation is similar to what happens with graph transformation tools.
In these tools, applying a graph transformation rule means finding a subgraph isomorphism, which is also a well-known NP-complete problem. However, the fact that the graphs are typed (in our case, the trees are labelled) considerably reduces the search in practice. Finally, in the future, we plan to extend our approach to also consider cross-references and properties about the contents of documents. The former problem means, in fact, extending our approach to graphs and graph patterns. For the latter, we plan to follow the same approach that we used to extend our results for graphs in [6, 7] to the case of attributed graphs.

Article: Learning a Subclass of Multivalued Dependencies Formulas from Entailments
Hermo, Montserrat; Ozaki, Ana. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
Functional and multivalued dependencies play an important role in the design of relational databases. There is a strong connection between data dependencies and some fragments of propositional logic. In particular, functional dependencies are closely related to Horn formulas. Also, multivalued dependencies are characterized in terms of multivalued formulas. It is known that both Horn formulas and sets of functional dependencies are efficiently learnable in the exact model of learning with queries. In this work, we study the learnability of a non-trivial subclass of multivalued formulas called CRMVDF. We use Angluin’s exact learning model with membership and equivalence queries and present a polynomial-time algorithm which exactly learns CRMVDF from entailments.

Article: A discretized operational semantics for the implementation of Hy-tccp
Gallardo, María del Mar; Panizo, Laura; Titolo, Laura. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
The language Hy-tccp was proposed as an extension of the Timed Concurrent Constraint paradigm (tccp) with continuous time and suitable mechanisms to handle continuous behaviors.
This language provides a powerful model for hybrid and cyber-physical systems, including concurrency and synchronization features. In this paper, we propose a discretized operational semantics for Hy-tccp and an extension of standard LTL to reason about temporal properties of Hy-tccp programs. The semantics and the logic will be the basis for the definition of formal verification and analysis tools, such as model checkers and theorem provers.

Article: Automatic generation of logical models for order-sorted first-order theories in program analysis
Lucas, Salvador. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
Computations are often viewed as proofs of specific sentences in some computational logic describing the operational semantics of the programming language or computational system. Since the semantics of programs (i.e., the set of such specific sentences that are provable in the logic) is usually incomputable, and most program properties undecidable, abstraction is essential in program analysis. Abstractions can be formalized as semantic models which should be automatically generated in a push-the-button-and-wait style of program analysis and verification. We investigate the automatic generation of numerical models for order-sorted first-order logics and its use in program analysis. Our development systematically uses the recently introduced convex domains, which are well suited for representing domains for different sorts; we use them to interpret the ranked symbols of sorted signatures by means of appropriately adapted convex matrix interpretations. Such numerical interpretations permit the use of existing algorithms and tools from linear algebra (e.g., Farkas’ Lemma), real algebraic geometry, and arithmetic constraint solving in the implementation of the analyses.

Abstract: Abstract Diagnosis for tccp using a Linear Temporal Logic
Comini, M.; Titolo, L.; Villanueva, A.
Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
This extended abstract is a summary of a longer work, where we provided an automatic decision method to check whether a given property, specified in a linear temporal logic, is valid w.r.t. a tccp program. Our proposal (based on abstract interpretation techniques) does not require building any model of the program, in contrast with standard verification methods such as model checking. Our results guarantee correctness but, as usual when using an abstract semantics, completeness is lost.

Article: HR-SQL: An SQL Database System with Extended Recursion and Hypothetical Reasoning
Nieva, Susana; Sáenz Pérez, Fernando; Sánchez-Hernández, Jaime. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
In a former work we described the system and language R-SQL, which overcomes some limitations of recursion in the relational database language SQL. Such limitations concern non-linearity, mutual recursion, and some combinations of negation with recursion. In addition, R-SQL improved the termination properties of recursive definitions. Next, this language was extended to include a restricted form of hypothetical relations and queries using assumptions, obtaining the language HR-SQL, and a preliminary implementation was developed for it. Here, we develop a new HR-SQL system from scratch and enhance the former system in several areas. First, hypothetical reasoning is fully integrated with recursive definitions. Second, the Python script generated by the system for computing the extension (materialization) of a database is now targeted to several state-of-the-art relational database systems. Third, the system has been interfaced with the integrated development environment ACIDE, allowing both friendlier user interaction and a graphical view of the dependency graph that shows dependencies between relations.
Fourth, since the system is developed in Prolog, we have targeted it to both SICStus Prolog and SWI-Prolog, also providing standalone executables. Finally, the system has been extended with a bundle of commands, greatly improving its capabilities with respect to the former system.

Abstract: A Haskell Implementation of a Rule-Based Program Transformation for C Programs
Tamarit, Salvador; Vigueras, Guillermo; Carro, Manuel; Mariño, Julio. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
Obtaining good performance when programming heterogeneous computing platforms poses significant challenges for the programmer. We present a program transformation environment, implemented in Haskell, where architecture-agnostic scientific C code is transformed into a functionally equivalent version better suited for a given platform. The transformation rules are formalized in a domain-specific language (STML) that takes care of the syntactic and semantic conditions required to apply a given transformation. STML rules are compiled into Haskell function definitions that operate at the AST level. Program properties, to be matched with rule conditions, can be automatically inferred or, alternatively, stated as annotations in the source code. Early experimental results are described.

Abstract: An Assertional Proof of the Stability and Correctness of Natural Mergesort
Rustan M. Leino, K.; Lucio, Paqui. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
Natural mergesort is a sorting algorithm for linear data structures (arrays and lists) that has been widely studied, mainly due to its good properties. It has O(N log N) worst-case complexity and, even in the case of arrays, is slightly easier to code than heapsort. Further, it performs very well on input data that is already mostly sorted. Another good property is stability. A sorting algorithm is stable if it maintains the relative order of records with equal keys.
The most obvious application of a stable algorithm is sorting using different (primary, secondary, etc.) keys. Stability is, as we show in lemma EqMultisets, stronger than the property of preserving the multiset of elements (from the input list to the sorted output list). Hence, stability, along with sortedness, implies the correctness of sorting algorithms (including the permutation property). Recently, Sternagel published an Isabelle/HOL proof of the correctness and stability of natural mergesort as a proof pearl. Sternagel first specifies the algorithm as a functional program and then formalizes and proves the desired properties using the proof assistant Isabelle/HOL. The proof is non-assertional and uses higher-order constructions. Indeed, it is strongly based on two skillful ad-hoc induction schemes. The first handles the mutually recursive functions involved in splitting the input into ascending sequences. The second induction scheme is related to the merging of the ascending lists. Correctness and stability are deduced from auxiliary lemmas which are proved by means of these induction schemes and with the help of a subtle generalization of the predicate sorted. The definition of that generalization and the induction schemes require the power of higher-order logic. In particular, the stability property is formalized in higher-order logic. More recently, de Gouw et al. discussed a semi-automated formal proof of the correctness and stability of two sorting algorithms on arrays: counting sort and radix sort. This proof is formalized using the theorem prover KeY. The implementation code is written in Java. The specification is written (using the Java Modeling Language, JML) in an extension of first-order logic with permutation predicates, which have recently been added to the KeY system. There are many other formalizations of the natural mergesort algorithm, and also of different sorting algorithms (e.g.
insertion sort, quicksort, heapsort, radix sort, etc.) in various systems, such as Coq, Isabelle/HOL, Why3, ACL2, KeY, etc. However, to the best of our knowledge, stability is only considered in the works of Sternagel and de Gouw et al. mentioned above, and in our assertional proof. In this paper, we present an implementation of natural mergesort over an algebraic data type of lists. The code is enriched with its contract-based specification and a proof of its correctness and stability. Our proof is assertional, i.e. it uses assert statements, inserted in the code, to enable fully automatic verification. The assertions are first-order formulas that explain how and why the program works. The proof is supported by a few definitions that are easy to understand, and a few lemmas that isolate useful properties. Moreover, only non-trivial lemmas have detailed proofs, and these are short and easy to read and understand. Hence, in our opinion, the presented proof is quite clear and elegant. The program-proof is implemented in the state-of-the-art verifier Dafny. The Dafny programming language supports a mixture of imperative, object-oriented programming and functional programming. In this paper, we use mostly functions, methods, and algebraic datatypes. The Dafny specification language includes the usual assertional language for contracts of pre/post conditions, invariants, decreasing expressions for termination proofs, etc. Since Dafny is designed with the main purpose of facilitating the construction of correct code, Dafny notation is compact and easy to understand. For the sake of readability and conciseness, the Dafny proof language includes constructs for structuring proofs, such as lemmas and calculational proofs. Dafny automatically generates executable .NET code for verified programs. The presented proof is made on the basis of some lemmas that ensure natural properties. Most of the proofs are inductive and use calculations when appropriate.
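The algorithm being verified can be sketched as follows (purely as an illustration, in Python rather than Dafny, with no claim to match the authors' verified code): split the input into its maximal ascending runs, then merge adjacent runs pairwise with a left-biased, and hence stable, merge.

```python
def runs(xs, key=lambda x: x):
    """Split xs into its maximal ascending runs."""
    out, i = [], 0
    while i < len(xs):
        j = i + 1
        while j < len(xs) and key(xs[j - 1]) <= key(xs[j]):
            j += 1
        out.append(xs[i:j])
        i = j
    return out

def merge(a, b, key=lambda x: x):
    """Stable merge: on equal keys the element of `a` goes first."""
    res, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if key(a[i]) <= key(b[j]):   # '<=' is what makes the merge stable
            res.append(a[i]); i += 1
        else:
            res.append(b[j]); j += 1
    return res + a[i:] + b[j:]

def natural_mergesort(xs, key=lambda x: x):
    rs = runs(xs, key)
    if not rs:
        return []
    while len(rs) > 1:
        # merge adjacent runs pairwise, left to right, preserving run order
        rs = [merge(rs[k], rs[k + 1], key) if k + 1 < len(rs) else rs[k]
              for k in range(0, len(rs), 2)]
    return rs[0]

pairs = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]
print(natural_mergesort(pairs, key=lambda p: p[0]))
# [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```

Stability shows in the output: among equal keys, (1, 'b') stays before (1, 'd') and (2, 'a') before (2, 'c'), exactly as in the input.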
We believe that our program-proof is a simple and intuitive example of how a practical verification tool can be used by software developers with a minimum of familiarity with contract-based specifications and first-order assertions. We aim to contribute to the spread of the educational use of automatic tools in the development of formally verified software. We are convinced that this kind of example is useful for the introduction of formal software development methods and tools in software engineering courses. To sum up, we present a mechanically verified implementation of the sorting algorithm natural mergesort that consists of a few methods specified by their contracts of pre/post conditions. Methods are annotated with assertions that allow automatic verification of contract satisfaction. Throughout the paper we provide and explain the complete text of the program-proof.

Article: Productivity of rewrite systems without transformations
Lucas, Salvador. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
Termination of programs, i.e., the absence of infinite computations, ensures the existence of normal forms for all initial expressions, thus providing an essential ingredient for the definition of a normalization semantics for functional programs. In lazy functional languages, though, infinite data structures are often delivered as the outcome of computations. For instance, the list of all prime numbers can be returned as a never-ending stream of numerical expressions or data structures. If such streams are allowed, requiring termination is hopeless. In this setting, the notion of productivity can be used to provide an account of computations with infinite data structures, as it “captures the idea of computability, of progress of infinite-list programs” (B.A. Sijtsma, On the Productivity of Recursive List Definitions, ACM Transactions on Programming Languages and Systems 11(4):633–649, 1989).
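The prime-number example is exactly the kind of productive computation meant here: the computation as a whole never terminates, but every finite prefix of the stream is delivered in finite time. A minimal sketch (illustrative only, outside the term-rewriting formalism of the paper) using Python generators:

```python
from itertools import count, islice

def primes():
    """Infinite stream of primes by trial division: never terminates,
    yet productive, since each next prime arrives after finite work."""
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

# Any finite prefix of the infinite stream is computable:
print(list(islice(primes(), 10)))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```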
However, in the realm of Term Rewriting Systems, which can be seen as (first-order, untyped, unconditional) functional programs, termination of Context-Sensitive Rewriting (CSR) has been shown equivalent to productivity of rewrite systems through appropriate transformations. In this way, tools for proving termination of CSR can be used to prove productivity. In term rewriting, CSR is the restriction of rewriting that arises when reductions are allowed on selected arguments of function symbols only. In this paper we show that well-known results about the computational power of CSR are useful to better understand the existing connections between productivity of rewrite systems and termination of CSR, and also to obtain more powerful techniques for proving productivity of rewrite systems.

Abstract: Towards a Formal Semantics-Based Technique for Interprocedural Slicing
Asavoae, Irina Mariuca; Asavoae, Mihail; Riesco, Adrián. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
Interprocedural slicing is a technique applied to programs with procedures which relies on how information is passed at procedure call/return sites. Such a technique computes program slices (i.e. program fragments restricted w.r.t. a given criterion). The existing approaches to interprocedural slicing exploit the particularities of the underlying language semantics in order to compute program slices. In this paper we propose a generic technique for interprocedural slicing. More specifically, our approach works with inferred particularities of a language semantics, given as a rewriting-logic specification, and computes program slices using a term-slicing-based algorithm.

Preliminaries: Keynote: Obscuring code – Unveiling and Veiling Information in Programs
Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
Invited keynote talk at the XV Jornadas de Programación y Lenguajes (PROLE 2015).
Article: A Declarative Semantics for a Fuzzy Logic Language Managing Similarities and Truth Degrees
Julián Iranzo, Pascual; Moreno, Ginés; Penabad, Jaime; Vázquez, Carlos. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
This work proposes a declarative semantics, based on a fuzzy variant of the classical notion of least Herbrand model, for the so-called FASILL language (an acronym of “Fuzzy Aggregators and Similarity Into a Logic Language”), which has recently been designed and implemented in our research group for coping with implicit/explicit truth degree annotations, a great variety of connectives, and unification by similarity.

Article: Property-based Testing of XQuery Programs
Almendros-Jiménez, Jesus M.; Becerra Terón, Antonio. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
In this paper we present the elements of an XQuery testing tool which makes it possible to automatically test XQuery programs. The tool is able to systematically generate XML instances (i.e., test cases) from a given XML schema. The number and type of instances are defined by the human tester. These instances are used to execute the given XQuery program. In addition, the tool makes it possible to provide a user-defined property to be tested against the output of the XQuery program. The property can be specified as a Boolean XQuery function. The tool is implemented as an oracle able to report whether the XQuery program passes the test, that is, whether all the test cases satisfy the property, as well as the number of test cases used for testing. When the XQuery program fails the test, the tool shows counterexamples found among the test cases. The tool has been implemented as an XQuery library, which allows it to be used from any XQuery interpreter.

Article: Constraint Programming Meets SQL
Caballero, Rafael; Ieva, Carlo.
Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
We present a proposal for introducing SQL tuples into the modeling language MINIZINC. The domain of the new decision variables is defined by arbitrary relational database tables indicated by the user. The new setting increases the expressiveness of MINIZINC, allowing the modeler to mix the usual finite domains already existing in the language with string constraints typical of SQL, such as concat, substr, or like. In order to obtain the solutions of these combined models, we first replace the atomic constraints involving strings by Boolean variables. The result is a standard MINIZINC model, which can be solved by any off-the-shelf solver. Then, each individual solution is applied to the remaining string constraints, which are then solved using an SQL query. We discuss how both languages, MINIZINC and SQL, benefit from this combination.

Article: Restricted Predicates for Hypothetical Datalog
Sáenz Pérez, Fernando. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
Hypothetical Datalog is based on an intuitionistic semantics rather than on a classical logic semantics, and embedded implications are allowed in rule bodies. While the usual implication (i.e., the neck of a Horn clause) stands for inferring facts, an embedded implication plays the role of assuming its premise in order to derive its consequence. A former work introduced both a formal framework and a goal-oriented tabled implementation, allowing negation in rule bodies. While in that work positive assumptions for both facts and rules can occur in the premise, negative assumptions are not allowed. In this work, we cover this subject by introducing a new concept: the restricted predicate, which allows negative assumptions by pruning the usual semantics of a predicate. This new setting has been implemented in the deductive system DES.
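The assumption mechanism behind embedded implications can be pictured with a toy sketch (our own illustration; DES's tabled implementation is far more sophisticated, and the fact/rule representation below is invented for the example): evaluating `assume F in G` amounts to deriving the consequences of the current database extended with F.

```python
# Facts are tuples; a rule maps a frozenset of premise facts to one
# conclusion fact (a deliberately minimal, ground-only representation).

def derive(facts, rules):
    """Naive forward chaining to a fixpoint over ground rules."""
    db = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, concl in rules:
            if premises <= db and concl not in db:
                db.add(concl)
                changed = True
    return db

def holds_assuming(facts, rules, assumption, goal):
    """Embedded implication: assume a fact, then check the goal."""
    return goal in derive(facts | {assumption}, rules)

facts = {('edge', 'a', 'b')}
rules = [(frozenset({('edge', 'a', 'b'), ('edge', 'b', 'c')}),
          ('path', 'a', 'c'))]

print(('path', 'a', 'c') in derive(facts, rules))   # False: not derivable
print(holds_assuming(facts, rules,
                     ('edge', 'b', 'c'),
                     ('path', 'a', 'c')))           # True under the assumption
```

A negative assumption, as enabled by the restricted predicates of the paper, would instead prune facts from the database before deriving, which this toy positive-only sketch does not cover.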
Article: A Generic Intermediate Representation for Verification Condition Generation, Work in Progress
Montenegro, Manuel; Peña Marí, Ricardo; Sánchez-Hernández, Jaime. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
As part of a platform for computer-assisted verification, we present an intermediate representation of programs that is both language-independent and appropriate for the generation of verification conditions. We show how many imperative and functional languages can be translated to this generic internal representation, and how the generated conditions faithfully reflect the semantics of the original program. At this representation level, loop invariants and preconditions of recursive functions belonging to the original program are represented by assertions placed at certain edges of a directed graph. The paper defines the generic representation, sketches the transformation algorithms, and describes how the places where the invariants should go are computed. Assuming that, either manually or assisted by the platform, the invariants have been settled, it is shown how the verification conditions are generated. A running example illustrates the process.

Article: Proving Continuity of Coinductive Global Bisimulation Distances: A Never Ending Story
Romero-Hernández, David; de Frutos Escrig, David; Della Monica, Dario. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
We have developed a global approach to defining bisimulation distances which goes somewhat further than the bisimulation distances based on the bisimulation game previously proposed by other authors. Our proposal is based on the cost of transformations: how much we need to modify one of the compared processes to obtain the other. Our original definition only covered finite processes, but a coinductive approach extends it to cover infinite but finitary trees.
We have shown many interesting properties of our distances, and we wanted to prove their continuity with respect to projections, but unfortunately we have not been able to accomplish that task. However, we have obtained several partial results, which we present in this paper.

Article: Analysing the Termination of Term Rewriting Systems using Data Mining
Piris, J.; Fabregat, H.; Ramírez-Quintana, M. J. Actas de las XV Jornadas de Programación y Lenguajes (PROLE 2015), 2015-09-15.
During the last decades, researchers in the field of Term Rewriting Systems (TRS) have devoted considerable effort to developing techniques and methods able to prove the termination property of a TRS. As a consequence, some of the proposed techniques have been implemented, and several termination tools have been developed in order to automate the termination proofs. Since 2004, the annual Termination Competition has been the forum in which research groups compare their tools, trying to provide termination proofs for as many TRSs as possible. This event generates a large amount of information (results obtained by the different tools, time spent on each proof, ...) that is recorded in databases. In this paper, we propose an alternative approach to studying the termination of TRSs: using data mining techniques that, based on the historical information collected in the competition, generate models to explore the termination of a TRS. The goal of our study is not to develop a termination tool but to show, for the first time, what machine learning techniques can offer to the analysis of TRS termination.
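The idea of mining competition history can be pictured with a toy sketch (the features, the data, and the k-nearest-neighbour technique below are invented for illustration; the paper's actual features and mining techniques may differ): recorded outcomes over simple numeric descriptions of rewrite systems are used to predict the outcome for a new system.

```python
import math

# Hypothetical feature vectors for past competition entries:
# (number of rules, max rule depth, has a duplicating rule: 0/1)
history = [
    ((3, 2, 0), 'terminating'),
    ((5, 4, 1), 'non-terminating'),
    ((2, 1, 0), 'terminating'),
    ((8, 5, 1), 'non-terminating'),
]

def predict(features, k=3):
    """Majority vote among the k nearest recorded outcomes."""
    nearest = sorted(history,
                     key=lambda rec: math.dist(rec[0], features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(predict((3, 2, 0)))   # 'terminating'
```

A real study would of course extract far richer features from the TRS databases and validate the learned model against held-out competition results.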