David Bacon
University of California, Berkeley
Publication
Featured research published by David Bacon.
ACM Computing Surveys | 1994
David Bacon; Susan L. Graham; Oliver Sharp
In the last three decades a large number of compiler transformations for optimizing programs have been implemented. Most optimizations for uniprocessors reduce the number of instructions executed by the program using transformations based on the analysis of scalar quantities and data-flow techniques. In contrast, optimizations for high-performance superscalar, vector, and parallel processors maximize parallelism and memory locality with transformations that rely on tracking the properties of arrays using loop dependence analysis.

This survey is a comprehensive overview of the important high-level program restructuring techniques for imperative languages, such as C and Fortran. Transformations for both sequential and various types of parallel architectures are covered in depth. We describe the purpose of each transformation, explain how to determine if it is legal, and give an example of its application.

Programmers wishing to enhance the performance of their code can use this survey to improve their understanding of the optimizations that compilers can perform, or as a reference for techniques to be applied manually. Students can obtain an overview of optimizing compiler technology. Compiler writers can use this survey as a reference for most of the important optimizations developed to date, and as a bibliographic reference for the details of each optimization. Readers are expected to be familiar with modern computer architecture and basic program compilation techniques.
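As a minimal sketch of the kind of loop restructuring the survey covers, the C example below shows loop interchange applied to improve memory locality on a row-major array; the array size N and the function names are illustrative, not taken from the paper.

#include <stddef.h>

#define N 1024

/* Before: the inner loop walks down a column of a row-major C array,
 * so consecutive iterations touch elements N doubles apart. */
void sum_columns_original(double a[N][N], double col_sum[N]) {
    for (size_t j = 0; j < N; j++) {
        col_sum[j] = 0.0;
        for (size_t i = 0; i < N; i++) {
            col_sum[j] += a[i][j];   /* stride-N access */
        }
    }
}

/* After loop interchange: the i and j loops are swapped, so the inner
 * loop walks consecutive elements of a row (stride-1 access).  The
 * transformation is legal here because accumulations into distinct
 * col_sum[j] entries are independent of the loop order. */
void sum_columns_interchanged(double a[N][N], double col_sum[N]) {
    for (size_t j = 0; j < N; j++) {
        col_sum[j] = 0.0;
    }
    for (size_t i = 0; i < N; i++) {
        for (size_t j = 0; j < N; j++) {
            col_sum[j] += a[i][j];   /* stride-1 access */
        }
    }
}

The same loop body computes the same result in both versions; only the iteration order, and hence the cache behavior, changes, which is the essence of the legality argument the survey formalizes with dependence analysis.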
workshop on parallel & distributed debugging | 1991
David Bacon; Seth Copen Goldstein
Shared-memory parallel programs can be highly non-deterministic due to the unpredictable order in which shared references are satisfied. However, deterministic execution is extremely important for debugging and can also be used for fault tolerance and other replay-based algorithms. We present a hardware/software design that records the order of shared-memory references in a log with minimal interference to the CPUs. This log can then be used along with hardware and software control to replay execution. Simulation of several parallel programs shows that our device records no more than 1.17 MB/second for an application exhibiting fine-grained sharing behavior on a 16-way multiprocessor consisting of 12-MIPS CPUs. In addition, no probe effect or performance degradation is introduced. This represents several orders of magnitude improvement in both performance and log size over purely software-based methods proposed previously.
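A rough software-only sketch of the record/replay idea is given below. The paper's hardware device captures the access ordering transparently; here it is modelled with an atomic counter, and the names log_entry, record_access, and replay_wait are hypothetical, not the paper's interfaces.

#include <stdatomic.h>

/* One entry per shared-memory reference: which CPU issued it and its
 * position in the global order of shared accesses. */
typedef struct {
    int cpu;
    unsigned long seq;
} log_entry;

static atomic_ulong global_seq = 0;

/* Record mode: stamp each shared access with the next global sequence
 * number and store it in this CPU's log. */
void record_access(int cpu, log_entry *out) {
    out->cpu = cpu;
    out->seq = atomic_fetch_add(&global_seq, 1);
}

/* Replay mode: before re-issuing an access, wait until the global
 * counter reaches the sequence number recorded for it, so the original
 * interleaving of shared references is reproduced deterministically. */
void replay_wait(const log_entry *e) {
    while (atomic_load(&global_seq) != e->seq)
        ;                              /* busy-wait for our turn */
    /* ... perform the shared-memory access here ... */
    atomic_fetch_add(&global_seq, 1);
}

Doing this in software for every shared reference is exactly the overhead (and probe effect) that the hardware-assisted design avoids.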
Physical Review A | 2000
Daniel A. Lidar; David Bacon; Julia Kempe; K. Birgitta Whaley
The exchange interaction between identical qubits in a quantum-information processor gives rise to unitary two-qubit errors. It is shown here that decoherence-free subspaces (DFSs) for collective decoherence undergo Pauli errors under exchange, which, however, do not take the decoherence-free states outside of the DFS. In order to protect DFSs against these errors it is sufficient to employ a recently proposed concatenated DFS quantum-error-correcting code scheme [D. A. Lidar, D. Bacon, and K.B. Whaley, Phys. Rev. Lett. 82, 4556 (1999)].
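A minimal worked illustration, using the simpler two-qubit DFS for collective dephasing rather than the full collective-decoherence code of the paper: exchanging the two physical qubits acts as a logical Pauli error but never leaves the subspace.

% Two-qubit decoherence-free subspace for collective dephasing:
% logical states built from the physical basis states |01> and |10>.
\[
  |0_L\rangle = |01\rangle, \qquad |1_L\rangle = |10\rangle .
\]
% The exchange (SWAP) operator on the two physical qubits is
\[
  E_{12} = \tfrac{1}{2}\bigl(I + \vec{\sigma}_1 \cdot \vec{\sigma}_2\bigr),
\]
% and within the DFS it simply swaps the two logical states,
\[
  E_{12}\,|0_L\rangle = |1_L\rangle, \qquad
  E_{12}\,|1_L\rangle = |0_L\rangle
  \;\;\Longrightarrow\;\;
  E_{12}\big|_{\mathrm{DFS}} = \sigma^x_L .
\]

The exchange error is thus a logical Pauli X on the encoded qubit, which is the kind of error the concatenated DFS quantum-error-correcting scheme cited above is designed to correct.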
symposium on reliable distributed systems | 1991
David Bacon
File system operation in a transparently fault-tolerant system that uses checkpointing and message logging is discussed. Logging messages to disk is one of the primary performance costs of such systems. The author has measured the file system operations performed on large timesharing systems running Unix in terms of the level of concurrency (number of consecutive operations that do not change the state of the file system). By performing much of the data analysis online within a modified Unix kernel, statistics were collected over a long period of time with a substantial variation in system load. Using this data, it is demonstrated that a technique called null logging can reduce the number of messages logged to disk by a factor of 10 to 25, depending on the workload. This reduces the overhead of the fault-tolerance mechanism and allows a large fraction of file system operations to commit instantaneously.
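A hedged sketch of the null-logging idea, assuming a hypothetical message-logging layer: operations that cannot change file system state are committed without a disk write. The names fs_op, op_changes_fs_state, and log_operation are illustrative only, not the paper's kernel interfaces.

#include <stdbool.h>

/* Illustrative operation codes for a Unix-like file system. */
typedef enum {
    OP_READ, OP_STAT, OP_LOOKUP,     /* do not change file system state */
    OP_WRITE, OP_CREATE, OP_UNLINK   /* change file system state */
} fs_op;

/* Hypothetical predicate: does this operation modify file system state? */
static bool op_changes_fs_state(fs_op op) {
    switch (op) {
    case OP_WRITE:
    case OP_CREATE:
    case OP_UNLINK:
        return true;
    default:
        return false;
    }
}

/* Null logging: only state-changing operations are forced to the on-disk
 * message log; read-only operations commit instantaneously, which is where
 * the measured 10-25x reduction in logged messages comes from. */
void log_operation(fs_op op) {
    if (op_changes_fs_state(op)) {
        /* write a log record to disk before replying (hypothetical call) */
    }
    /* otherwise: reply immediately, no log record needed */
}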
Archive | 1997
David Bacon; Susan L. Graham
ieee international conference on high performance computing data and analytics | 1993
David Bacon; Susan L. Graham; Oliver Sharp
WorkingUSA | 2000
David Bacon
WorkingUSA | 2012
David Bacon
WorkingUSA | 2001
David Bacon
Archive | 1999
Daniel A. Lidar; David Bacon; Julia Kempe; K. B. Whaley