
Publication


Featured research published by Jacques Cohen.


ACM Computing Surveys | 1981

Garbage Collection of Linked Data Structures

Jacques Cohen

A concise and unified view of the numerous existing algorithms for performing garbage collection of linked data structures is presented. The emphasis is on garbage collection proper, rather than on storage allocation. First, the classical garbage collection algorithms are reviewed, and their marking and collecting phases, with and without compacting, are discussed. Algorithms describing these phases are classified according to the type of cells to be collected: those for collecting single-sized cells are simpler than those for varisized cells. Recently proposed algorithms are presented and compared with the classical ones. Special topics in garbage collection are also covered: the use of secondary and virtual storage, the use of reference counters, parallel and real-time collections, analyses of garbage collection algorithms, and language features which influence the design of collectors. The bibliography, with topical annotations, contains over 100 references.
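
To make the marking and collecting phases concrete, here is a minimal mark-and-sweep sketch in Python. It illustrates the general two-phase scheme the survey covers, not any particular algorithm from the paper; the Cell class and collector interface are assumptions made for this example.

```python
# Minimal mark-and-sweep sketch (illustrative; not taken from the survey).
class Cell:
    def __init__(self, *children):
        self.children = list(children)  # references to other cells
        self.marked = False

def collect(roots, heap):
    # Marking phase: flag every cell reachable from the roots.
    stack = list(roots)
    while stack:
        cell = stack.pop()
        if not cell.marked:
            cell.marked = True
            stack.extend(cell.children)
    # Collecting phase: keep marked cells, reclaim the rest.
    live = [c for c in heap if c.marked]
    for c in live:                      # clear marks for the next cycle
        c.marked = False
    return live

a, b, c = Cell(), Cell(), Cell()
a.children = [b]                        # c is unreachable garbage
print(len(collect([a], [a, b, c])))     # -> 2
```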


Communications of the ACM | 1990

Constraint logic programming languages

Jacques Cohen

Constraint Logic Programming (CLP) is an extension of Logic Programming aimed at replacing the pattern matching mechanism of unification, as used in Prolog, by a more general operation called constraint satisfaction. This article provides a panoramic view of the recent work done in designing and implementing CLP languages. It also presents a summary of their theoretical foundations, discusses implementation issues, compares the major CLP languages, and suggests directions for further work.
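
To make the contrast concrete: under plain unification a goal such as X + Y = 10 does not compute anything, whereas a CLP system treats it as a constraint to be satisfied. The Python sketch below mimics that declarative reading with a brute-force finite-domain solver; the solver, domains, and constraints are invented for this illustration, and real CLP systems use far more efficient propagation.

```python
# Brute-force finite-domain constraint solving (illustrative only).
from itertools import product

def solve(domains, constraints):
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        env = dict(zip(names, values))
        if all(c(env) for c in constraints):
            yield env

# X + Y = 10 and X > Y, with X and Y ranging over 0..9
for sol in solve({"X": range(10), "Y": range(10)},
                 [lambda e: e["X"] + e["Y"] == 10,
                  lambda e: e["X"] > e["Y"]]):
    print(sol)   # {'X': 6, 'Y': 4} ... {'X': 9, 'Y': 1}
```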


Journal of the ACM | 1979

Two Algorithms for Determining Volumes of Convex Polyhedra

Jacques Cohen; Timothy J. Hickey

Determining volumes of convex n-dimensional polyhedra defined by a linear system of inequalities is useful in program analysis. Two methods for computing these volumes are proposed: (1) summing the volumes of simplices which form the polyhedron, and (2) summing the volumes of (increasingly smaller) parallelepipeds which can be fit into the polyhedron. Assuming that roundoff errors are small, the first method is analytically exact, whereas the second one converges to the exact solution at the expense of additional computer time. Examples of polyhedra whose volumes were computed by programs representing the algorithms are also provided.
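
A minimal two-dimensional instance of method (1), sketched in Python under the assumption that the polyhedron is given by its vertices in counterclockwise order rather than by inequalities: fan the polygon into triangles (simplices) from the first vertex and sum their areas.

```python
# Sum-of-simplices volume computation, 2-D case (illustrative sketch).
def polygon_area(vertices):
    x0, y0 = vertices[0]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(vertices[1:], vertices[2:]):
        # area of one triangle (simplex) via the determinant formula
        area += abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0
    return area

print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # unit square -> 1.0
```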


ACM Transactions on Programming Languages and Systems | 1987

Parsing and compiling using Prolog

Jacques Cohen; Timothy J. Hickey

This paper presents the material needed for exposing the reader to the advantages of using Prolog as a language for describing succinctly most of the algorithms needed in prototyping and implementing compilers or producing tools that facilitate this task. The available published material on the subject describes one particular approach in implementing compilers using Prolog. It consists of coupling actions to recursive descent parsers to produce syntax-trees which are subsequently utilized in guiding the generation of assembly language code. Although this remains a worthwhile approach, there is a host of possibilities for Prolog usage in compiler construction. The primary aim of this paper is to demonstrate the use of Prolog in parsing and compiling. A second, but equally important, goal of this paper is to show that Prolog is a labor-saving tool in prototyping and implementing many non-numerical algorithms which arise in compiling, and whose description using Prolog is not available in the literature. The paper discusses the use of unification and nondeterminism in compiler writing as well as means to bypass these (costly) features when they are deemed unnecessary. Topics covered include bottom-up and top-down parsers, syntax-directed translation, grammar properties, parser generation, code generation, and optimizations. Newly proposed features that are useful in compiler construction are also discussed. A knowledge of Prolog is assumed.
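
As a minimal illustration of the recursive-descent-with-actions approach the paper describes, here is a parser for arithmetic expressions that builds a syntax tree as it parses. It is written in Python rather than Prolog, and the grammar and names are assumptions made for this sketch.

```python
# Recursive-descent parsing with tree-building actions (illustrative).
import re

def tokenize(src):
    return re.findall(r"\d+|[+*()]", src)

def parse_expr(toks):          # expr -> term ('+' term)*
    tree = parse_term(toks)
    while toks and toks[0] == "+":
        toks.pop(0)
        tree = ("+", tree, parse_term(toks))
    return tree

def parse_term(toks):          # term -> factor ('*' factor)*
    tree = parse_factor(toks)
    while toks and toks[0] == "*":
        toks.pop(0)
        tree = ("*", tree, parse_factor(toks))
    return tree

def parse_factor(toks):        # factor -> number | '(' expr ')'
    tok = toks.pop(0)
    if tok == "(":
        tree = parse_expr(toks)
        toks.pop(0)            # consume ')'
        return tree
    return ("num", int(tok))

print(parse_expr(tokenize("1+2*3")))
# -> ('+', ('num', 1), ('*', ('num', 2), ('num', 3)))
```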


Communications of the ACM | 1985

Describing Prolog by its interpretation and compilation

Jacques Cohen

Since its conception, Prolog has followed a developmental course similar to the early evolution of LISP. Although the version of Prolog described here typifies that currently in use, it should be considered within the framework of language evolution.


SIAM Journal on Computing | 1983

Uniform Random Generation of Strings in a Context-Free Language

Timothy J. Hickey; Jacques Cohen

Let S be the set of all strings of length n generated by a given context-free grammar. A uniform random generator is one which produces strings from S with equal probability. In generating these strings, care must be taken in choosing the disjuncts that form the right-hand side of a grammar rule so that the produced string will have the specified length. Uniform random generators have applications in studying the complexity of parsers, in estimating the average efficiency of theorem provers for the propositional calculus, in establishing a measure of ambiguity of a grammar, etc. Two methods are presented for generating uniform random strings in an unambiguous context-free language. The first method will generate a random string of length n in linear time, but must use a precomputed table of size O(n^{r+1}), where r is the number of nonterminals in the grammar used to specify the language. The second method precomputes part of the table and calculates the other entries as they are called for. It requi...
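
The core idea, counting derivations by length and then choosing productions and split points with probability proportional to those counts, can be sketched in a few lines of Python. The toy grammar and memoized counting below are assumptions for this illustration (closer in spirit to the second method, since table entries are computed as they are called for).

```python
# Uniform random generation from a toy unambiguous CFG (illustrative).
from functools import lru_cache
import random

# S -> '(' S ')' S | epsilon   (balanced parentheses)
GRAMMAR = {"S": [("(", "S", ")", "S"), ()]}

@lru_cache(maxsize=None)
def count(symbols, n):
    # number of length-n strings derivable from the sequence `symbols`
    if not symbols:
        return 1 if n == 0 else 0
    head, rest = symbols[0], symbols[1:]
    if head not in GRAMMAR:                       # terminal symbol
        return count(rest, n - 1) if n >= 1 else 0
    return sum(count_nt(head, k) * count(rest, n - k) for k in range(n + 1))

@lru_cache(maxsize=None)
def count_nt(a, n):
    return sum(count(tuple(p), n) for p in GRAMMAR[a])

def sample(symbols, n):
    # draw uniformly among the length-n strings derivable from `symbols`
    if not symbols:
        return ""
    head, rest = symbols[0], symbols[1:]
    if head not in GRAMMAR:
        return head + sample(rest, n - 1)
    # split the length between `head` and `rest`, weighted by counts
    k = random.choices(range(n + 1),
                       [count_nt(head, j) * count(rest, n - j)
                        for j in range(n + 1)])[0]
    # pick one of head's productions, weighted by its count at length k
    prods = GRAMMAR[head]
    p = random.choices(prods, [count(tuple(q), k) for q in prods])[0]
    return sample(tuple(p), k) + sample(rest, n - k)

print(sample(("S",), 6))   # one of the 5 balanced strings, each with prob 1/5
```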


Journal of the ACM | 1988

Automating program analysis

Timothy J. Hickey; Jacques Cohen

The first part of the paper shows that previous theoretical work on the semantics of probabilistic programs (Kozen) and on the correctness of performance annotated programs (Ramshaw) can be used to automate the average-case analysis of simple programs containing assignments, conditionals, and loops. A performance compiler has been developed using this theoretical foundation. The compiler is described, and it is shown that special cases of symbolic simplifications of formulas play a major role in rendering the system usable. The performance compiler generates a system of recurrence equations derived from a given program whose efficiency one wishes to analyze. This generation is always possible, but the problem of solving the resulting equations may be complex. The second part of the paper presents an original method that generalizes the previous approach and is applicable to functional programs that make use of recursion and complex data structures. Several examples are presented, including an analysis of binary tree sort. A key feature of the analysis of such programs is that distributions on complex data structures are represented using attributed probabilistic grammars.
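
As a concrete instance of the kind of recurrence such a performance compiler emits, the average number of comparisons made while building the tree in binary tree sort from a random permutation satisfies the quicksort-style recurrence C(n) = (n - 1) + (2/n) * sum over k < n of C(k). The Python sketch below, written for this summary rather than taken from the paper, evaluates it exactly.

```python
# Exact evaluation of C(n) = (n-1) + (2/n) * sum_{k<n} C(k)  (illustrative).
from fractions import Fraction

def avg_comparisons(n_max):
    C = [Fraction(0)]                    # C(0) = 0: empty tree, no work
    for n in range(1, n_max + 1):
        C.append(n - 1 + Fraction(2, n) * sum(C))
    return C

print(avg_comparisons(3))   # [0, 0, 1, 8/3], matching 2(n+1)H_n - 4n
```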


Communications of the ACM | 1974

Two languages for estimating program efficiency

Jacques Cohen; Carl Zuckerman

Two languages enabling their users to estimate the efficiency of computer programs are presented. The program whose efficiency one wishes to estimate is written in the first language, a go-to-less programming language which includes most of the features of Algol 60. The second language consists of interactive commands enabling its users to provide additional information about the program written in the first language and to output results estimating its efficiency. Processors for the two languages are also described. The first processor is a syntax-directed translator which compiles a program into a symbolic formula representing the execution time for that program. The second processor is a set of procedures for algebraic manipulation which can be called by the user to operate on the formula produced by the first processor. Examples of the usage of the two languages are included. The limitations of the present system, its relation to Knuth's work on the analysis of algorithms, and some of the directions for further research are also discussed.
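
In the same spirit as the second processor's algebraic manipulation, a modern computer algebra system can turn a loop nest into a closed-form step count. The sketch below is an illustration using sympy, not the system described in the paper.

```python
# Symbolic step count for: for i in 1..n: for j in 1..i: <step>
import sympy as sp

i, j, n = sp.symbols("i j n", integer=True, positive=True)
inner = sp.summation(1, (j, 1, i))       # inner loop body runs i times
total = sp.summation(inner, (i, 1, n))   # total steps over the outer loop
print(sp.factor(total))                  # n*(n + 1)/2
```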


Journal of the ACM | 1982

Upper Bounds for Speedup in Parallel Parsing

Jacques Cohen; Timothy J. Hickey; Joel Katcoff



ACM Computing Surveys | 1979

Non-Deterministic Algorithms

Jacques Cohen

