Christopher W. Fraser
Microsoft
Publication
Featured research published by Christopher W. Fraser.
ACM Letters on Programming Languages and Systems | 1992
Christopher W. Fraser; David R. Hanson; Todd A. Proebsting
Many code-generator generators use tree pattern matching and dynamic programming. This paper describes a simple program that generates matchers that are fast, compact, and easy to understand. It is simpler than common alternatives: 200–700 lines of Icon or 950 lines of C versus 3000 lines of C for Twig and 5000 for burg. Its matchers run up to 25 times faster than Twig's. They are necessarily slower than burg's BURS (bottom-up rewrite system) matchers, but they are more flexible and still practical.
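The core idea can be illustrated with a minimal sketch (this is not the paper's generator): a bottom-up pass matches tree patterns at every node and uses dynamic programming to record the cheapest rule covering it. The rule names, patterns, and costs below are invented for illustration.

```python
# Minimal sketch of tree-pattern instruction selection with dynamic
# programming. Rules, costs, and operator names are invented.
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                      # e.g. "Add", "Load", "Const", "Reg"
    kids: list = field(default_factory=list)
    cost: float = float("inf")   # best cost to compute this node into a register
    rule: str = ""               # name of the winning rule

# Each rule: (name, pattern, cost). A pattern is an op with sub-patterns;
# "reg" matches any subtree already reduced to a register.
RULES = [
    ("add_reg", ("Add", "reg", "reg"),           1),
    ("add_mem", ("Add", "reg", ("Load", "reg")), 1),  # folds the load
    ("load",    ("Load", "reg"),                 1),
    ("const",   ("Const",),                      1),
    ("reg",     ("Reg",),                        0),
]

def match(node, pat):
    """Return cost of matched subtrees if pat matches at node, else None."""
    if pat == "reg":
        return node.cost
    op, *subpats = pat
    if node.op != op or len(node.kids) != len(subpats):
        return None
    total = 0
    for kid, sub in zip(node.kids, subpats):
        c = match(kid, sub)
        if c is None:
            return None
        total += c
    return total

def label(node):
    """Bottom-up pass: pick the cheapest rule covering each node."""
    for kid in node.kids:
        label(kid)
    for name, pat, cost in RULES:
        sub = match(node, pat)
        if sub is not None and sub + cost < node.cost:
            node.cost, node.rule = sub + cost, name

# x + mem[y]: the add_mem rule should beat a separate load plus add.
tree = Node("Add", [Node("Reg"), Node("Load", [Node("Reg")])])
label(tree)
print(tree.rule, tree.cost)   # add_mem 1
```

The bigger rule wins because its total cost (1) undercuts the load-then-add cover (cost 2); that cost comparison is the dynamic-programming step.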
ACM Transactions on Programming Languages and Systems | 1984
Jack W. Davidson; Christopher W. Fraser
This paper shows how thorough object code optimization has simplified a compiler and made it easy to retarget. The retargeted cross-compilers emit code comparable to that of machine-specific compilers.
ACM Transactions on Programming Languages and Systems | 1980
Jack W. Davidson; Christopher W. Fraser
Peephole optimizers improve object code by replacing certain sequences of instructions with better sequences. This paper describes PO, a peephole optimizer that uses a symbolic machine description to simulate pairs of adjacent instructions, replacing them, where possible, with an equivalent single instruction. As a result of this organization, PO is machine independent and can be described formally and concisely: when PO is finished, no instruction, and no pair of adjacent instructions, can be replaced with a cheaper single instruction that has the same effect. This thoroughness allows PO to relieve code generators of much case analysis; for example, they might produce only load/add-register sequences and rely on PO to, where possible, discard them in favor of add-memory, add-immediate, or increment instructions. Experiments indicate that naive code generators can give good code if used with PO.
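The pair-replacement loop can be sketched in a few lines. Note the simplification: PO derives its replacements by simulating instructions through a symbolic machine description, whereas this toy uses a fixed table, and the instruction tuples in it are invented.

```python
# Toy peephole pass: repeatedly replace an adjacent instruction pair
# with one cheaper instruction of the same net effect. The pair table
# stands in for PO's symbolic simulation and is invented.
PAIR_RULES = {
    (("load", "r1", "x"),  ("add", "r0", "r1")): ("addmem", "r0", "x"),
    (("load", "r1", "#1"), ("add", "r0", "r1")): ("inc", "r0"),
}

def peephole(code):
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        out, i = [], 0
        while i < len(code):
            if i + 1 < len(code) and (code[i], code[i + 1]) in PAIR_RULES:
                out.append(PAIR_RULES[(code[i], code[i + 1])])
                i += 2                  # consumed the pair
                changed = True
            else:
                out.append(code[i])
                i += 1
        code = out
    return code

naive = [("load", "r1", "#1"), ("add", "r0", "r1")]
print(peephole(naive))   # [('inc', 'r0')]
```

Iterating to a fixed point is what gives the paper's guarantee its shape: when the loop stops, no adjacent pair left in the code matches a cheaper single instruction.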
Sigplan Notices | 1991
Christopher W. Fraser
lcc is a new retargetable compiler for ANSI C. Versions for the VAX, Motorola 68020, SPARC, and MIPS are in production use at Princeton University and at AT&T Bell Laboratories. With a few exceptions, little about lcc is unusual --- it integrates several well engineered, existing techniques --- but it is smaller and faster than most other C compilers, and it generates code of comparable quality. lcc's target-independent front end performs a few simple, but effective, optimizations that contribute to good code; examples include simulating register declarations and partitioning switch statement cases into dense tables. It also implements target-independent function tracing and expression-level profiling.
compiler construction | 1984
Christopher W. Fraser; Eugene W. Myers; Alan L. Wendt
This paper describes the application of a general data compression algorithm to assembly code. The system is retargetable and generalizes cross-jumping and procedural abstraction. It can be used as a space optimizer that trades time for space, it can turn assembly code into interpretive code, and it can help formalize and automate the traditionally ad hoc design of both real and abstract machines.
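A toy version of procedural abstraction conveys the space trade: find the most frequent repeated instruction sequence of a fixed window length and factor it into a subroutine, paying one call per occurrence plus one shared body. The opcode strings, window size, and "call SUB" convention are all invented, and real assembly would also need care with registers and condition codes that this sketch ignores.

```python
# Toy procedural abstraction over assembly-like text: factor the most
# frequent repeated fixed-length sequence into a subroutine.
def abstract(code, window=3):
    best, pos = None, []
    for i in range(len(code) - window + 1):
        seq = tuple(code[i:i + window])
        hits = [j for j in range(len(code) - window + 1)
                if tuple(code[j:j + window]) == seq]
        picked, last = [], -window            # keep non-overlapping hits only
        for j in hits:
            if j >= last + window:
                picked.append(j)
                last = j
        if len(picked) > 1 and len(picked) > len(pos):
            best, pos = seq, picked
    if best is None:
        return code, None
    out, j, starts = [], 0, set(pos)
    while j < len(code):
        if j in starts:
            out.append("call SUB")            # replace each body with a call
            j += window
        else:
            out.append(code[j])
            j += 1
    return out + ["SUB:"] + list(best) + ["ret"], best

code = ["load r1,a", "load r2,b", "add r1,r2", "shl r1,2", "store r1,x",
        "load r1,a", "load r2,b", "add r1,r2", "shl r1,2", "store r1,y",
        "load r1,a", "load r2,b", "add r1,r2", "shl r1,2", "store r1,z"]
new, body = abstract(code, window=4)
print(len(code), "->", len(new))   # 15 -> 12
```

The same factoring idea, driven further, turns assembly into interpretive code: the "subroutines" become the opcodes of an abstract machine.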
Software - Practice and Experience | 1991
Christopher W. Fraser; David R. Hanson
lcc is a retargetable, production compiler for ANSI C; it has been ported to the VAX, Motorola 68020, SPARC, and MIPS R3000, and some versions have been in use for over a year and a half. It is smaller and faster than generally available alternatives, and its local code is comparable. This paper describes the interface between the target-independent front end and the target-dependent back ends. The interface consists of shared data structures, a few functions, and a dag language. While this approach couples the front and back ends tightly, it results in efficient, compact compilers. The interface is illustrated by detailing a code generator that emits naive VAX code.
symposium on principles of programming languages | 1994
Todd A. Proebsting; Christopher W. Fraser
Most modern CPUs pipeline instructions. When two instructions need the same machine resource (such as a bus, register, or functional unit) at the same time, they suffer a structural hazard, which stalls the pipeline or corrupts a result. Compilers order or schedule instructions to cut structural hazards. A fundamental step in scheduling is detecting whether a series of instructions suffers a structural hazard. This paper describes a method for detecting structural hazards 5–80 times faster than its predecessors, which generally have simulated the pipeline at compile time. It accepts a compact specification of the pipeline and creates a finite-state automaton that can detect structural hazards in one table lookup per instruction. The automaton maintains an integer state that encodes all potential structural hazards for all instructions in the pipe. It accepts an instruction type and a state and either reports a hazard or produces the state that folds in the new instruction and advances the pipeline by one cycle.
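A small sketch of the precomputed automaton, assuming a single shared functional unit: bit i of a state or usage mask means "the unit is reserved i cycles from now". The two instruction classes and their masks are invented; a real table would be derived from the machine's pipeline specification.

```python
# Sketch of a hazard-detection automaton built ahead of time so the
# scheduler pays one table lookup per instruction. Instruction classes
# and resource masks are invented for illustration.
USE = {"mul": 0b011,   # holds the unit this cycle and the next
       "alu": 0b001}   # holds it this cycle only

def build_automaton():
    table = {}                  # (state, insn) -> next state; None = hazard
    work, seen = [0], {0}
    while work:                 # explore all reachable integer states
        s = work.pop()
        for insn, mask in USE.items():
            if s & mask:
                table[(s, insn)] = None          # structural hazard
            else:
                ns = (s | mask) >> 1             # reserve, advance one cycle
                table[(s, insn)] = ns
                if ns not in seen:
                    seen.add(ns)
                    work.append(ns)
    return table

TABLE = build_automaton()

def issue(state, insn):
    """One table lookup per instruction."""
    return TABLE[(state, insn)]

def stall(state):
    """Advance one cycle without issuing anything."""
    return state >> 1

s = issue(0, "mul")
print(s, issue(s, "mul"))   # 1 None -- back-to-back muls collide
```

All the pipeline simulation happens once, in `build_automaton`; at schedule time the integer state is the only thing the compiler carries from instruction to instruction.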
programming language design and implementation | 1999
Christopher W. Fraser
This paper describes experiments that apply machine learning to compress computer programs, formalizing and automating decisions about instruction encoding that have traditionally been made by humans in a more ad hoc manner. A program accepts a large training set of program material in a conventional compiler intermediate representation (IR) and automatically infers a decision tree that separates IR code into streams that compress much better than the undifferentiated whole. Driving a conventional arithmetic compressor with this model yields code 30% smaller than the previous record for IR code compression, and 24% smaller than an ambitious optimizing compiler feeding an ambitious general-purpose data compressor.
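The stream-separation effect can be demonstrated with stand-ins: a trivial token-kind predicate takes the place of the learned decision tree, and zlib takes the place of the arithmetic coder. The IR token stream below is invented.

```python
# Sketch of stream separation before compression: routing homogeneous
# token streams to separate compressors exposes regularity that the
# interleaved stream hides. zlib stands in for an arithmetic coder.
import zlib

ir = []
for i in range(1000):
    ir.append(("op", ["add", "mul", "load"][i % 3]))   # opcode tokens
    ir.append(("lit", str(i * 37 % 256)))              # literal tokens

joint = zlib.compress(" ".join(tok for _, tok in ir).encode())
ops   = zlib.compress(" ".join(tok for kind, tok in ir if kind == "op").encode())
lits  = zlib.compress(" ".join(tok for kind, tok in ir if kind == "lit").encode())

# The separated streams together should be smaller than the mixed one.
print(len(joint), "vs", len(ops) + len(lits))
```

The paper's contribution is inferring the separating predicate automatically from a training set, rather than writing it by hand as here.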
compiler construction | 1984
Jack W. Davidson; Christopher W. Fraser
This paper describes a system that automatically generates peephole optimizations. A general peephole optimizer driven by a machine description produces optimizations at compile-compile time for a fast, pattern-directed, compile-time optimizer. They form part of a compiler that simplifies retargeting by substituting peephole optimization for case analysis.
programming language design and implementation | 2001
William S. Evans; Christopher W. Fraser
This paper describes the design and implementation of a method for producing compact, bytecoded instruction sets and interpreters for them. It accepts a grammar for programs written using a simple bytecoded stack-based instruction set, as well as a training set of sample programs. The system transforms the grammar, creating an expanded grammar that represents the same language as the original grammar, but permits a shorter derivation of the sample programs and others like them. A program's derivation under the expanded grammar forms the compressed bytecode representation of the program. The interpreter for this bytecode is automatically generated from the original bytecode interpreter and the expanded grammar. Programs expressed using compressed bytecode can be substantially smaller than their original bytecode representation and even their machine code representation. For example, compression cuts the bytecode for lcc from 199KB to 58KB but increases the size of the interpreter by just over 11KB.
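The flavor of grammar expansion can be shown with digram substitution, a simpler relative of the paper's technique (the real system transforms a full bytecode grammar): each round adds a production for the most frequent adjacent opcode pair, so common sequences derive from one new symbol. The opcode names and training stream are invented.

```python
# Sketch of grammar expansion by digram substitution: each new symbol
# ("superoperator") derives a frequent adjacent pair from the training
# stream, shortening the derivation of similar programs.
from collections import Counter

def expand_grammar(stream, rounds=2):
    rules = {}                                   # new symbol -> (a, b)
    for i in range(rounds):
        if len(stream) < 2:
            break
        (a, b), n = Counter(zip(stream, stream[1:])).most_common(1)[0]
        if n < 2:
            break                                # nothing repeats; stop
        sym = f"OP{i}"                           # fresh superoperator
        rules[sym] = (a, b)
        out, j = [], 0
        while j < len(stream):
            if j + 1 < len(stream) and (stream[j], stream[j + 1]) == (a, b):
                out.append(sym)                  # derive the pair in one symbol
                j += 2
            else:
                out.append(stream[j])
                j += 1
        stream = out
    return stream, rules

code = ["push", "load", "push", "load", "add", "push", "load", "add"]
compressed, rules = expand_grammar(code)
print(len(code), "->", len(compressed))   # 8 -> 3
```

An interpreter for the expanded set needs one handler per new symbol that simply runs the two original handlers in sequence, which is why the paper's interpreter grows only modestly.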