A. J. Kfoury
Boston University
Publications
Featured research published by A. J. Kfoury.
Archive | 1991
A. J. Kfoury; Michael A. Arbib; Robert N. Moll
1 Introduction
1.1 Partial Functions and Algorithms
1.2 An Invitation to Computability Theory
1.3 Diagonalization and the Halting Problem
2 The Syntax and Semantics of while-Programs
2.1 The Language of while-Programs
2.2 Macro Statements
2.3 The Computable Functions
3 Enumeration and Universality of the Computable Functions
3.1 The Effective Enumeration of while-Programs
3.2 Universal Functions and Interpreters
3.3 String-Processing Functions
3.4 Pairing Functions
4 Techniques of Elementary Computability Theory
4.1 Algorithmic Specifications
4.2 The s-m-n Theorem
4.3 Undecidable Problems
5 Program Methodology
5.1 An Invitation to Denotational Semantics
5.2 Recursive Programs
5.3* Proof Rules for Program Properties
6 The Recursion Theorem and Properties of Enumerations
6.1 The Recursion Theorem
6.2 Model-Independent Properties of Enumerations
7 Computable Properties of Sets (Part 1)
7.1 Recursive and Recursively Enumerable Sets
7.2 Indexing the Recursively Enumerable Sets
7.3 Gödel's Incompleteness Theorem
8 Computable Properties of Sets (Part 2)
8.1 Rice's Theorem and Related Results
8.2 A Classification of Sets
9 Alternative Approaches to Computability
9.1 The Turing Characterization
9.2 The Kleene Characterization
9.3 Symbol-Manipulation Systems and Formal Languages
References
Notation Index
Author Index
Archive | 1988
Robert N. Moll; Michael A. Arbib; A. J. Kfoury
Stir a cup of coffee with a spoon. It is a well-known fact of popular mathematics that some point in the coffee will be in its initial position after the stir. This drop of coffee is called a fixed point. Fixed points are extremely important objects in mathematics and computer science. In numerical methods, fixed points can be used to find roots of equations; in programming semantics, fixed point methods can be used to find the semantics (the meaning) of a program. Fixed point methods are useful in language theory too.
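To make the numerical use of fixed points concrete, here is a minimal Python sketch (an illustration, not drawn from the book): a root of f(x) = x^2 - 2 is a fixed point of g(x) = (x + 2/x)/2, so iterating g approximates the square root of 2.

```python
# Minimal sketch (illustrative): fixed-point iteration to approximate
# sqrt(2). A root of f(x) = x^2 - 2 is a fixed point of
# g(x) = (x + 2/x) / 2, i.e. a value x with g(x) = x.

def fixed_point(g, x0, tol=1e-12, max_iter=100):
    """Iterate x = g(x) until successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        nxt = g(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x

sqrt2 = fixed_point(lambda x: (x + 2 / x) / 2, x0=1.0)
print(sqrt2)  # approximately 1.4142135623730951
```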
Archive | 1988
Robert N. Moll; Michael A. Arbib; A. J. Kfoury
In this section we establish one of the most important equivalences in theoretical computer science: we prove that a language is context-free if and only if some push-down automaton can accept the language in a sense we shall shortly make precise. This result has practical as well as theoretical importance, because a push-down store is the basis for several algorithms used in parsing context-free languages.
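As an illustration of the push-down idea (not the book's construction), the following Python sketch recognizes the context-free language { a^n b^n : n >= 0 } using an explicit stack, the storage discipline that a push-down automaton formalizes.

```python
# Illustrative sketch: a push-down recognizer for { a^n b^n : n >= 0 }.
# Each 'a' pushes a marker onto the stack; each 'b' pops one. The input
# is accepted when it is exhausted and the stack is empty.

def accepts_anbn(w: str) -> bool:
    stack = []
    i = 0
    # Phase 1: push one marker per leading 'a'.
    while i < len(w) and w[i] == 'a':
        stack.append('A')
        i += 1
    # Phase 2: pop one marker per following 'b'.
    while i < len(w) and w[i] == 'b' and stack:
        stack.pop()
        i += 1
    return i == len(w) and not stack

print(accepts_anbn("aaabbb"))  # True
print(accepts_anbn("aabbb"))   # False
```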
Archive | 1988
Robert N. Moll; Michael A. Arbib; A. J. Kfoury
In this section we consider strong LL grammars. This grammatical class is important because members of the class, and especially members of the class strong LL(1), are easy to parse and easy to write parsers for. In addition, many important programming language fragments can be given a strong LL(1) description, so parsing based on this grammatical characteristic has important applications in computer science.
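To suggest why strong LL(1) grammars are so pleasant to parse, here is a small Python sketch of a recursive-descent recognizer; the grammar S -> ( S ) S | epsilon is an illustrative assumption, not one taken from the text. One symbol of lookahead always determines which production to apply.

```python
# Illustrative sketch: recursive-descent recognizer for the strong LL(1)
# grammar  S -> ( S ) S | epsilon.  Because the grammar is LL(1), the
# single lookahead symbol selects the production at every step.

def parse(w: str) -> bool:
    pos = 0

    def S() -> bool:
        nonlocal pos
        # Lookahead '(' selects S -> ( S ) S; otherwise S -> epsilon.
        if pos < len(w) and w[pos] == '(':
            pos += 1                      # consume '('
            if not S():
                return False
            if pos < len(w) and w[pos] == ')':
                pos += 1                  # consume ')'
            else:
                return False
            return S()
        return True                       # S -> epsilon

    return S() and pos == len(w)

print(parse("(()())"))  # True
print(parse("(()"))     # False
```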
Archive | 1988
Robert N. Moll; Michael A. Arbib; A. J. Kfoury
In this chapter we relate the theory of formal languages to the abstract theory of computing. How can we characterize the set of functions computed by computer programs? This question—which functions are realizable by algorithms, and which are not?—has its most direct roots in the work of Alan Turing in the 1930s. Using what is now called the Turing machine model, Turing showed that certain natural problems in computing cannot be solved by any algorithm, real or imagined. Strictly speaking, Turing showed only that these problems cannot be solved by Turing machines; later investigations by other researchers led to the generally held belief that Turing computability is synonymous with computability in any other sufficiently powerful algorithm specification system.
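The flavor of the unsolvability result can be sketched in Python; this is only an illustration of the diagonal idea with placeholder names, not Turing's formal construction.

```python
# Illustrative sketch of the diagonal argument (assumed names).
# Suppose halts(prog, arg) were a total, correct decider for
# "the program with source text prog halts on input arg".

def halts(prog: str, arg: str) -> bool:
    """Hypothetical decider; the argument shows it cannot exist."""
    raise NotImplementedError("no correct implementation is possible")

def diagonal(prog: str) -> None:
    """Do the opposite of what halts predicts about prog run on itself."""
    if halts(prog, prog):
        while True:   # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# Running diagonal on its own source text would contradict halts'
# prediction either way, so no such total decider can exist.
```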
Archive | 1988
Robert N. Moll; Michael A. Arbib; A. J. Kfoury
In Chapter 1 we had a first look at context-free grammars and context-free languages, and we had a glimpse at two areas of practical interest—parsing and natural language processing—where formal grammatical descriptions play an important role.
Archive | 1988
Robert N. Moll; Michael A. Arbib; A. J. Kfoury
Because unconstrained transformational grammars are capable of generating undecidable sets—apparently more power than is necessary for natural languages—much recent research within linguistics has focused on constraining the power of grammars. Several approaches to this task have been taken, and we will examine two in detail. First, however, in Section 9.1 we review some current developments in linguistic theory that are common to both approaches to be examined. Then, in Section 9.2 we describe the framework of generalized phrase structure grammar (GPSG). Finally, in Section 9.3 we look at the theory of government and binding (GB), the current version of transformational grammar.
Archive | 1988
Robert N. Moll; Michael A. Arbib; A. J. Kfoury
As we mentioned in Chapter 1, the modern theory of formal grammars was in large part initiated by Chomsky’s analysis of the generative power of natural languages, the languages of everyday human usage. In this chapter and the next we will explore in more detail just what formal power natural languages seem to have, and how this is captured in contemporary linguistic theories. We will see that there is no easy answer to the question, What is the generative power of natural languages? There is still some disagreement, in fact, over whether natural languages are recursive or not.
Archive | 1982
A. J. Kfoury; Robert N. Moll; Michael A. Arbib
We began our discussion of the Halting Problem in Chapter 1 by showing how to make a systematic listing, based on ASCII character codes, of all Pascal programs. Each Pascal program had a unique number, or index. This number was obtained by concatenating the ASCII bit strings representing the program's characters and then interpreting the resulting binary sequence as an integer written in binary. Of course, not every number corresponded to a legal program, or even to a legal sequence of ASCII blocks. Under these circumstances an invalid program in our listing was interpreted, by default, to be a particular program for the empty function. In Section 3.1 we follow a similar plan and present an enumeration of the while-programs. Then, in Section 3.2, we develop an “interpreter”, also called a universal program, which can decode the index n of a program P_n and process data just as P_n would have done. Sections 3.3 and 3.4 develop “subroutines” for the construction of the interpreter, showing, respectively, how string-processing functions may be simulated by numerical functions and how vectors of natural numbers may be coded by a single such number.
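As an illustration of coding pairs (and, by iteration, vectors) of natural numbers as single numbers, here is a small Python sketch of the Cantor pairing function and its inverse; the particular coding used in Section 3.4 may differ in detail.

```python
# Illustrative sketch: the Cantor pairing function, a bijection between
# N x N and N, with its inverse. Iterating pair() codes a whole vector
# of natural numbers as one natural number.
import math

def pair(x: int, y: int) -> int:
    """Cantor pairing: <x, y> = (x + y)(x + y + 1)/2 + y."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z: int) -> tuple[int, int]:
    """Recover (x, y) from z = pair(x, y)."""
    # t is the largest n with n(n+1)/2 <= z.
    t = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - t * (t + 1) // 2
    x = t - y
    return x, y

z = pair(3, 5)      # 41
print(unpair(z))    # (3, 5)
```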
Archive | 1982
A. J. Kfoury; Robert N. Moll; Michael A. Arbib
Recursion is an important programming technique in computer science. We examined this notion initially in Section 5.2, where we also proved the First Recursion Theorem. Here we discuss a more general result, the Second Recursion Theorem (or more simply, the Recursion Theorem), perhaps the most fascinating result in elementary computability theory. The Recursion Theorem legitimizes self-reference in the descriptions of algorithms. Self-reference arguments add elegance and economy to the methods of computability theory, and guarantee that certain completely unexpected properties of any effective enumeration of algorithms must hold.
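Self-reference of the kind the Recursion Theorem legitimizes can be made concrete with a quine, a program that prints its own source code; the short Python sketch below is illustrative and not taken from the text.

```python
# Illustrative sketch: a quine. The two code lines below print exactly
# themselves (these comments are not part of the quine). The Recursion
# Theorem guarantees, in effect, that every universal programming system
# admits such self-referential programs.

src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```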