Gopalan Nadathur
University of Minnesota
Publications
Featured research published by Gopalan Nadathur.
Annals of Pure and Applied Logic | 1991
Dale Miller; Gopalan Nadathur; Frank Pfenning; Andre Scedrov
A proof-theoretic characterization of logical languages that form suitable bases for Prolog-like programming languages is provided. This characterization is based on the principle that the declarative meaning of a logic program, provided by provability in a logical system, should coincide with its operational meaning, provided by interpreting logical connectives as simple and fixed search instructions. The operational semantics is formalized by the identification of a class of cut-free sequent proofs called uniform proofs. A uniform proof is one that can be found by a goal-directed search that respects the interpretation of the logical connectives as search instructions. The concept of a uniform proof is used to define the notion of an abstract logic programming language, and it is shown that first-order and higher-order Horn clauses with classical provability are examples of such a language. Horn clauses are then generalized to hereditary Harrop formulas and it is shown that first-order and higher-order versions of this new class of formulas are also abstract logic programming languages if the inference rules are those of either intuitionistic or minimal logic. The programming language significance of the various generalizations to first-order Horn clauses is briefly discussed.
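To give a concrete sense of the extra search instructions that hereditary Harrop formulas add to Horn clauses, here is a small λProlog-style sketch (the sterile-jar vocabulary is illustrative, not taken from the paper): an implication goal directs the interpreter to augment the program with an assumption, and a universal goal directs it to introduce a fresh constant.

  kind  jar     type.
  kind  insect  type.
  type  j        jar.
  type  sterile  jar -> o.
  type  heated   jar -> o.
  type  bug      insect -> o.
  type  dead     insect -> o.
  type  in       insect -> jar -> o.

  % A jar is sterile if any bug that might be in it is dead: the universal
  % goal (pi) introduces a generic new insect and the implication goals (=>)
  % temporarily augment the program with assumptions about it.
  sterile J :- pi x\ (bug x => in x J => dead x).

  dead X :- in X J, heated J, bug X.
  heated j.

  % ?- sterile j.
  %    Succeeds: for a fresh insect c assumed to satisfy (bug c) and (in c j),
  %    the clause for dead yields (dead c) because j is heated.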
international conference on logic programming | 1986
Dale Miller; Gopalan Nadathur
In this paper we consider the problem of extending Prolog to include predicate and function variables and typed λ-terms. For this purpose, we use a higher-order logic to describe a generalization of first-order Horn clauses. We show that this extension possesses certain desirable computational properties. Specifically, we show that the familiar operational and least fixpoint semantics can be given to these clauses. A language, λProlog, that is based on this generalization is then presented, and several examples of its use are provided. We also discuss an interpreter for this language in which new sources of branching and backtracking must be accommodated. An experimental interpreter has been constructed for the language, and all the examples in this paper have been tested using it.
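A typical example of the predicate variables this extension admits is a mapping relation that takes a predicate as an argument; the sketch below is in λProlog syntax, with the person and age vocabulary invented for illustration.

  kind  person  type.
  type  bob  person.
  type  sue  person.
  type  age  person -> int -> o.

  type  mappred  (A -> B -> o) -> list A -> list B -> o.

  age bob 24.
  age sue 23.

  % mappred applies the predicate argument P to corresponding elements of
  % the two lists; P may be any λ-term of the appropriate predicate type.
  mappred P nil nil.
  mappred P (X :: L) (Y :: K) :- P X Y, mappred P L K.

  % ?- mappred age (bob :: sue :: nil) Ages.
  %    Ages = 24 :: 23 :: nil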
Journal of the ACM | 1990
Gopalan Nadathur; Dale Miller
A generalization of Horn clauses to a higher-order logic is described and examined as a basis for logic programming. In qualitative terms, these higher-order Horn clauses are obtained from the first-order ones by replacing first-order terms with simply typed λ-terms and by permitting quantification over all occurrences of function symbols and some occurrences of predicate symbols. Several proof-theoretic results concerning these extended clauses are presented. One result shows that although the substitutions for predicate variables can be quite complex in general, the substitutions necessary in the context of higher-order Horn clauses are tightly constrained. This observation is used to show that these higher-order formulas can specify computations in a fashion similar to first-order Horn clauses. A complete theorem-proving procedure is also described for the extension. This procedure is obtained by interweaving higher-order unification with backchaining and goal reductions, and constitutes a higher-order generalization of SLD-resolution. These results have a practical realization in the higher-order logic programming language called λProlog.
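The constrained but nontrivial role of predicate variables can be seen in the following λProlog-style sketch (the family-relation vocabulary is invented for illustration): one clause builds a composite relation as a λ-term whose body contains logical constants, and answering a query against it interweaves backchaining with higher-order unification.

  kind  person  type.
  type  jane  person.
  type  mary  person.
  type  john  person.
  type  mother   person -> person -> o.
  type  wife     person -> person -> o.
  type  primrel  (person -> person -> o) -> o.
  type  rel      (person -> person -> o) -> o.

  mother jane mary.
  wife john jane.

  primrel mother.
  primrel wife.

  rel R :- primrel R.
  % A relation may also be the composition of two primitive relations; the
  % argument of rel here is a λ-term whose body contains logical constants.
  rel (x\ y\ sigma z\ (R x z, S z y)) :- primrel R, primrel S.

  % ?- rel R, R john mary.
  %    One solution binds R to x\ y\ sigma z\ (wife x z, mother z y).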
conference on automated deduction | 1999
Gopalan Nadathur; Dustin J. Mitchell
The logic programming language λProlog is based on the intuitionistic theory of higher-order hereditary Harrop formulas, a logic that significantly extends the theory of Horn clauses. A systematic exploitation of features in the richer logic endows λProlog with capabilities at the programming level that are not present in traditional logic programming languages. Several studies have established the value of λProlog as a language for implementing systems that manipulate formal objects such as formulas, programs, proofs and types. Towards harnessing these benefits, methods have been developed for realizing this language efficiently. This work has culminated in the description of an abstract machine and compiler-based implementation scheme. An actual implementation of λProlog based on these ideas has recently been completed. The planned presentation will exhibit this system, called Teyjus, and will also illuminate the metalanguage capabilities of λProlog.
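The metalanguage capabilities alluded to here rest on representing object-level binding with meta-level λ-terms; the following λProlog sketch (the constructor names are illustrative) encodes untyped λ-terms and call-by-name evaluation, so that object-level substitution is obtained by meta-level β-conversion. Programs of this kind are the intended workload for a system such as Teyjus.

  kind  tm  type.
  type  app   tm -> tm -> tm.
  type  abs   (tm -> tm) -> tm.
  type  eval  tm -> tm -> o.

  % Abstractions evaluate to themselves; to evaluate an application, the
  % function part is reduced to an abstraction R, and substituting the
  % argument is simply the meta-level application (R N).
  eval (abs R) (abs R).
  eval (app M N) V :- eval M (abs R), eval (R N) V.

  % ?- eval (app (abs (x\ x)) (abs (y\ app y y))) V.
  %    V = abs (y\ app y y)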
conference on automated deduction | 2007
David Baelde; Andrew Gacek; Dale Miller; Gopalan Nadathur; Alwen Tiu
Bedwyr is a generalization of logic programming that allows model checking directly on syntactic expressions possibly containing bindings. This system, written in OCaml, is a direct implementation of two recent advances in the theory of proof search. The first is centered on the fact that both finite success and finite failure can be captured in the sequent calculus by incorporating inference rules for definitions that allow fixed points to be explored. As a result, proof search in such a sequent calculus can capture simple model checking problems as well as may and must behavior in operational semantics. The second is that higher-order abstract syntax is directly supported using term-level λ-binders and the ∇ quantifier. These features allow reasoning directly on expressions containing bound variables.
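As a rough indication of the kind of specification Bedwyr is aimed at, the sketch below encodes a tiny labelled transition system in λProlog-style clause syntax (the process and action names are invented, and Bedwyr's concrete input syntax differs); checking a property such as simulation over it requires exploring both the successes and the finite failures of step, which is what the fixed-point treatment of definitions described above provides.

  kind  proc  type.
  kind  act   type.
  type  a  act.
  type  b  act.
  type  p1  proc.
  type  p2  proc.
  type  q1  proc.
  type  q2  proc.
  type  step  proc -> act -> proc -> o.

  % A two-state transition system: q1 can match every move of p1.
  step p1 a p2.
  step q1 a q2.
  step q1 b q2.

  % Simulation is the largest relation R such that whenever R P Q and
  % step P A P', there is some Q' with step Q A Q' and R P' Q'. Deciding
  % it requires case analysis over all possible steps, i.e. finite failure
  % as well as finite success of the step relation.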
meeting of the association for computational linguistics | 1986
Dale Miller; Gopalan Nadathur
Consideration of the question of meaning in the framework of linguistics often requires an allusion to sets and other higher-order notions. The traditional approach to representing and reasoning about meaning in a computational setting has been to use knowledge representation systems that are either based on first-order logic or that use mechanisms whose formal justifications are to be provided after the fact. In this paper we shall consider the use of a higher-order logic for this task. We first present a version of definite clauses (positive Horn clauses) that is based on this logic. Predicate and function variables may occur in such clauses and the terms in the language are the typed λ-terms. Such term structures have a richness that may be exploited in representing meanings. We also describe a higher-order logic programming language, called λProlog, which represents programs as higher-order definite clauses and interprets them using a depth-first interpreter. A virtue of this language is that it is possible to write programs in it that integrate syntactic and semantic analyses into one computational paradigm. This is to be contrasted with the more common practice of using two entirely different computation paradigms, such as DCGs or ATNs for parsing and frames or semantic nets for semantic processing. We illustrate such an integration in this language by considering a simple example, and we claim that its use makes the task of providing formal justifications for the computations specified much more direct.
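The integration of syntactic and semantic analysis described here can be suggested by a small λProlog-style sketch (the lexical items and meaning constructors are invented for illustration): grammar predicates thread the word list and simultaneously assemble a typed λ-term meaning via meta-level β-conversion.

  kind  ent   type.
  kind  form  type.
  type  john  ent.
  type  walk  ent -> form.

  type  np    list string -> list string -> ent -> o.
  type  vp    list string -> list string -> (ent -> form) -> o.
  type  sent  list string -> list string -> form -> o.

  np ("john" :: Rest) Rest john.
  vp ("walks" :: Rest) Rest (x\ walk x).

  % Parsing and meaning construction happen together: the meaning of the
  % sentence is the VP meaning applied to the NP meaning, assembled by
  % meta-level β-conversion.
  sent In Out M :- np In Mid E, vp Mid Out P, M = (P E).

  % ?- sent ("john" :: "walks" :: nil) nil M.
  %    M = walk john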
Journal of Automated Reasoning | 1992
Gopalan Nadathur
A proof procedure is presented for a class of formulas in intuitionistic logic. These formulas are the so-called goal formulas in the theory of hereditary Harrop formulas. Proof search in intuitionistic logic is complicated by the nonexistence of a Herbrand-like theorem for this logic: formulas cannot in general be preprocessed into a form such as the clausal form, and the construction of a proof is often sensitive to the order in which the connectives and quantifiers are analyzed. An interesting aspect of the formulas we consider here is that this analysis can be carried out in a relatively controlled manner in their context. In particular, the task of finding a proof can be reduced to one of demonstrating that a formula follows from a set of assumptions, with the next step in this process being determined by the structure of the conclusion formula. An acceptable implementation of this observation must utilize unification. However, since our formulas may contain universal and existential quantifiers in mixed order, care must be exercised to ensure the correctness of unification. One way of realizing this requirement involves labelling constants and variables and then using these labels to constrain unification. This form of unification is presented and used in a proof procedure for goal formulas in a first-order version of hereditary Harrop formulas. Modifications to this procedure for the relevant formulas in a higher-order logic are also described. The proof procedure that we present has a practical value in that it provides the basis for an implementation of the logic programming language λProlog.
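The effect that quantifier order has on provability, and hence the constraint that labelled unification must enforce, can be seen in a two-line λProlog-style example (the predicate q is invented for illustration).

  kind  i  type.
  type  q  i -> i -> o.

  q X X.

  % ?- pi x\ sigma Y\ q x Y.
  %    Succeeds: Y is introduced within the scope of x, so it may be bound to x.
  % ?- sigma Y\ pi x\ q x Y.
  %    Fails: Y is introduced before x and so must not be bound to it; labelling
  %    constants and variables is one way to let unification enforce this.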
logic in computer science | 2008
Andrew Gacek; Dale Miller; Gopalan Nadathur
Many semantical aspects of programming languages, such as their operational semantics and their type assignment calculi, are specified by describing appropriate proof systems. Recent research has identified two proof-theoretic features that allow direct, logic-based reasoning about such descriptions: the treatment of atomic judgments as fixed points (recursive definitions) and an encoding of binding constructs via generic judgments. However, the logics encompassing these two features have thus far treated them orthogonally: that is, they do not provide the ability to define object-logic properties that themselves depend on an intrinsic treatment of binding. We propose a new and simple integration of these features within an intuitionistic logic enhanced with induction over natural numbers and we show that the resulting logic is consistent. The pivotal benefit of the integration is that it allows recursive definitions to not just encode simple, traditional forms of atomic judgments but also to capture generic properties pertaining to such judgments. The usefulness of this logic is illustrated by showing how it can provide elegant treatments of object-logic contexts that appear in proofs involving typing calculi and of arbitrarily cascading substitutions that play a role in reducibility arguments.
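A schematic instance of the integration (the predicate and constructor names here are illustrative, not the paper's): a typing context can be characterized by a recursive definition whose second clause binds the subject of the new context entry with a generic quantifier, so that the definition itself records the freshness of that variable.

\[
\begin{array}{rcl}
\mathit{cntx}\ \mathit{nil} & \triangleq & \top\\
\nabla x.\ \mathit{cntx}\ (\mathit{of}\ x\ T :: L) & \triangleq & \mathit{cntx}\ L
\end{array}
\]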
Journal of Automated Reasoning | 2012
Andrew Gacek; Dale Miller; Gopalan Nadathur
Relational descriptions have been used in formalizing diverse computational notions, including, for example, operational semantics, typing, and acceptance by non-deterministic machines. We therefore propose a (restricted) logical theory over relations as a language for specifying such notions. Our specification logic is further characterized by an ability to explicitly treat binding in object languages. Once such a logic is fixed, a natural next question is how we might prove theorems about specifications written in it. We propose to use a second logic, called a reasoning logic, for this purpose. A satisfactory reasoning logic should be able to completely encode the specification logic. Associated with the specification logic are various notions of binding: for quantifiers within formulas, for eigenvariables within sequents, and for abstractions within terms. To provide a natural treatment of these aspects, the reasoning logic must encode binding structures as well as their associated notions of scope, free and bound variables, and capture-avoiding substitution. Further, to support arguments about provability, the reasoning logic should possess strong mechanisms for constructing proofs by induction and co-induction. We provide these capabilities here by using a logic called 𝒢.
foundations of software technology and theoretical computer science | 2005
Radha Jagadeesan; Gopalan Nadathur; Vijay A. Saraswat