
Publication


Featured research published by Richard Hughey.


Proteins | 2003

Combining local-structure, fold-recognition, and new fold methods for protein structure prediction

Kevin Karplus; Rachel Karchin; Jenny Draper; Jonathan Casper; Yael Mandel-Gutfreund; Mark Diekhans; Richard Hughey

This article presents an overview of the SAM‐T02 method for protein fold recognition and the UNDERTAKER program for ab initio predictions. The SAM‐T02 server is an automatic method that uses two‐track hidden Markov models (HMMs) to find and align template proteins from PDB to the target protein. The two‐track HMMs use an amino acid alphabet and one of several different local structure alphabets. The UNDERTAKER program is a new fragment‐packing program that can use short or long fragments and alignments to create protein conformations. The HMMs and fold‐recognition alignments from the SAM‐T02 method were used to generate the fragment and alignment libraries used by UNDERTAKER. We present results on a few selected targets for which this combined method worked particularly well: T0129, T0181, T0135, T0130, and T0139. Proteins 2003;53:491–496.
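The "two-track" idea above can be sketched in a few lines: a match state emits both an amino-acid symbol and a local-structure symbol, and the two tracks combine multiplicatively. The tables below are invented for illustration and are not SAM-T02 parameters.

```python
import math

# Hypothetical per-state emission tables for a toy 2-state model.
aa_emit = [
    {"A": 0.6, "G": 0.4},   # state 0: amino-acid track
    {"A": 0.2, "G": 0.8},   # state 1
]
ss_emit = [
    {"H": 0.7, "E": 0.3},   # state 0: local-structure track (helix/strand)
    {"H": 0.1, "E": 0.9},   # state 1
]

def two_track_log_emission(state, aa, ss):
    """Joint log-probability of emitting (aa, ss) from `state`, assuming
    the two tracks are conditionally independent given the state."""
    return math.log(aa_emit[state][aa]) + math.log(ss_emit[state][ss])

# State 0 prefers the pair (A, H); state 1 prefers (G, E), so the same
# residue scores very differently depending on predicted local structure.
```

The extra track is what lets fold recognition reward alignments whose predicted local structure matches the template, not just their residue identities.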


IEEE Transactions on Parallel and Distributed Systems | 2005

The UCSC Kestrel parallel processor

A. Di Blas; David M. Dahle; Mark Diekhans; L. Grate; Jeffrey D. Hirschberg; Kevin Karplus; H. Keller; M. Kendrick; F.J. Mesa-Martinez; D. Pease; Eric Rice; A. Schultz; Don Speck; Richard Hughey

The architectural landscape of high-performance computing stretches from superscalar uniprocessor to explicitly parallel systems, to dedicated hardware implementations of algorithms. Single-purpose hardware can achieve the highest performance and uniprocessors can be the most programmable. Between these extremes, programmable and reconfigurable architectures provide a wide range of choice in flexibility, programmability, computational density, and performance. The UCSC Kestrel parallel processor strives to attain single-purpose performance while maintaining user programmability. Kestrel is a single-instruction stream, multiple-data stream (SIMD) parallel processor with a 512-element linear array of 8-bit processing elements. The system design focuses on efficient high-throughput DNA and protein sequence analysis, but its programmability enables high performance on computational chemistry, image processing, machine learning, and other applications. The Kestrel system has had unexpected longevity in its utility due to a careful design and analysis process. Experience with the system leads to the conclusion that programmable SIMD architectures can excel in both programmability and performance. This work presents the architecture, implementation, applications, and observations of the Kestrel project at the University of California at Santa Cruz.


Bioinformatics | 1998

Reduced space hidden Markov model training

C. Tarnas; Richard Hughey

MOTIVATION: Complete forward-backward (Baum-Welch) hidden Markov model training cannot take advantage of the linear-space, divide-and-conquer sequence alignment algorithms because it examines all possible paths rather than the single best path.

RESULTS: This paper discusses the implementation and performance of checkpoint-based reduced-space sequence alignment in the SAM hidden Markov modeling package. Implementation of the checkpoint algorithm reduced memory usage from O(mn) to O(m√n) with only a 10% slowdown for small m and n, and a vast speed-up for larger values, such as m = n = 2000, that cause excessive paging on a 96-Mbyte workstation. The results are applicable to other types of dynamic programming.

AVAILABILITY: A World-Wide Web server, as well as information on obtaining the Sequence Alignment and Modeling (SAM) software suite, can be found at http://www.cse.ucsc.edu/research/compbio/.
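The checkpoint idea is easiest to see on a simpler recurrence than Baum-Welch. The sketch below (illustrative, not the SAM implementation) applies it to the edit-distance forward sweep: only every k-th DP row (k ≈ √m) is stored, and each block of rows is regenerated from its checkpoint when rows are needed in reverse order, cutting memory from O(mn) to O(n√m) at the cost of roughly one extra forward pass.

```python
import math

def next_row(prev, i, a, b):
    """Edit-distance DP row i, computed from row i - 1."""
    row = [i] + [0] * len(b)
    for j in range(1, len(b) + 1):
        cost = 0 if a[i - 1] == b[j - 1] else 1
        row[j] = min(prev[j] + 1, row[j - 1] + 1, prev[j - 1] + cost)
    return row

def rows_in_reverse(a, b):
    """Yield DP rows len(a), ..., 0 using O(len(b) * sqrt(len(a))) memory."""
    m = len(a)
    k = max(1, math.isqrt(m))
    checkpoints = {0: list(range(len(b) + 1))}   # row 0 is the base case
    row = checkpoints[0]
    for i in range(1, m + 1):                    # forward sweep: keep only
        row = next_row(row, i, a, b)             # every k-th row
        if i % k == 0:
            checkpoints[i] = row
    i = m
    while i >= 0:                                # replay each block of rows
        start = (i // k) * k                     # from its checkpoint, then
        block = [checkpoints[start]]             # emit the block in reverse
        for t in range(start + 1, i + 1):
            block.append(next_row(block[-1], t, a, b))
        yield from reversed(block)
        i = start - 1

rows = list(rows_in_reverse("kitten", "sitting"))
# rows[0] is the final DP row; rows[0][-1] is the edit distance.
```

A backward pass (as in Baum-Welch) consumes exactly this reverse-order stream of forward rows, which is why the technique transfers to HMM training.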


Symposium on Computer Arithmetic | 1997

Multiprecision division on an 8-bit processor

Eric Rice; Richard Hughey

Small processors can be especially useful in massively parallel architectures. This paper considers multiprecision division algorithms on an 8-bit processor (the Kestrel processor, currently in fabrication) that includes a small amount of memory and an 8-bit multiplier. We evaluate several variations of the Newton-Raphson reciprocal approximation methods for use with division. Our final single-precision algorithm requires 41 cycles to divide two 24-bit numbers to produce a 26-bit result. The double-precision version requires 98 cycles to divide two 53-bit numbers to produce a 55-bit result. This low cycle count is the result of several techniques, including low-precision arithmetic, early introduction of dividends, and simple (yet good) initial reciprocal estimates.
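The core of the method above is the Newton-Raphson reciprocal iteration. A minimal sketch in plain Python floats (not the paper's 8-bit fixed-point Kestrel code): each step x ← x·(2 − d·x) roughly doubles the number of correct bits in the estimate of 1/d, so a crude linear initial estimate converges in a handful of iterations, and the quotient is then n·(1/d).

```python
import math

def reciprocal(f, iterations=5):
    """Approximate 1/f for f in [0.5, 1) via Newton-Raphson."""
    assert 0.5 <= f < 1.0
    x = 48 / 17 - 32 / 17 * f        # classic linear initial estimate
    for _ in range(iterations):
        x = x * (2.0 - f * x)        # quadratic convergence
    return x

def divide(n, d):
    """Compute n / d (d > 0 assumed) by scaling d into [0.5, 1)."""
    f, e = math.frexp(d)             # d = f * 2**e with f in [0.5, 1)
    return n * reciprocal(f) * 2.0 ** (-e)
```

The paper's low cycle counts come from refinements beyond this sketch, such as low-precision intermediate arithmetic and introducing the dividend early rather than multiplying at the end.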


International Conference on Application Specific Array Processors | 1995

Parallel sequence comparison and alignment

Richard Hughey

Sequence comparison, a vital research tool in computational biology, is based on a simple O(n²) algorithm that maps easily to a linear array of processors. This paper reviews and compares high-performance sequence analysis on general-purpose supercomputers and on single-purpose, reconfigurable, and programmable co-processors. The difficulty of comparing hardware from published performance figures is also noted.
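Why the O(n²) recurrence maps to a linear array: cells on the same anti-diagonal of the DP matrix depend only on the two previous anti-diagonals, so an entire diagonal can be computed simultaneously, one cell per processing element. The serial sketch below (illustrative, using edit distance) makes that wavefront order explicit; the inner loop is the step a linear array performs in parallel.

```python
def edit_distance_wavefront(a, b):
    """Edit distance computed in anti-diagonal (wavefront) order."""
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for d in range(m + n + 1):           # sweep anti-diagonals d = i + j
        # Every cell on diagonal d is independent of the others:
        # this loop is the parallel step, one cell per processing element.
        for i in range(max(0, d - n), min(m, d) + 1):
            j = d - i
            if i == 0:
                D[i][j] = j
            elif j == 0:
                D[i][j] = i
            else:
                cost = 0 if a[i - 1] == b[j - 1] else 1
                D[i][j] = min(D[i - 1][j] + 1, D[i][j - 1] + 1,
                              D[i - 1][j - 1] + cost)
    return D[m][n]
```

On a linear array, processor i holds column i and the diagonals sweep through the array as data streams past, giving O(m + n) parallel time.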


Conference on Advanced Research in VLSI | 1997

Kestrel: design of an 8-bit SIMD parallel processor

David M. Dahle; Jeffrey D. Hirschberg; Kevin Karplus; Hansjoerg Keller; Eric Rice; Don Speck; Douglas H. Williams; Richard Hughey

Kestrel is a high-performance programmable parallel co-processor. Its design is the result of examination and reexamination of algorithmic, architectural, packaging, and silicon design issues, and the interrelations between them. The final system features a linear array of 8-bit processing elements, each with local memory, an arithmetic logic unit (ALU), a multiplier, and other functional units. Sixty-four Kestrel processing elements fit in a 1.4 million transistor, 60 mm², 0.5 μm CMOS chip with just 84 pins. The planned single-board, 8-chip system will, for some applications, provide supercomputer performance at a fraction of the cost. This paper surveys four of our applications (sequence analysis, neural networks, image compression, and floating-point arithmetic), and discusses the philosophy behind many of the design decisions: the compact instruction encoding and design, the architecture's facility with nested conditionals, and the multiplier's flexibility in performing multiprecision operations. Finally, we discuss the implementation and performance of the Kestrel test chips.


International Conference on Application Specific Array Processors | 1992

Programming systolic arrays

Richard Hughey

This paper presents the New Systolic Language as a general solution to the problem of systolic programming. The language provides a simple programming interface for systolic algorithms suitable for different hardware platforms and software simulators. The New Systolic Language hides the details and potential hazards of inter-processor communication, allowing data flow only via abstract systolic data streams. Data flows and systolic cell programs for the co-processor are integrated with host functions, enabling a single file to specify a complete systolic program.
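The stream abstraction can be sketched as follows (this is an illustrative Python analogue, not New Systolic Language syntax): a cell program never addresses its neighbors directly, it only reads from an input stream and writes to an output stream, and chaining cells yields a linear systolic array. Here each cell holds a filter weight and a one-cycle delay register, so the chain computes an FIR filter systolically.

```python
def make_cell(weight, downstream):
    """One systolic cell: adds weight*x into the passing partial sum and
    forwards x through a one-cycle delay, so the next cell sees an older
    sample. Communication happens only through the (x, y) stream."""
    def cell(stream):
        def shifted():
            delayed = 0.0                        # delay register, starts empty
            for x, y in stream:
                yield delayed, y + weight * x    # (delayed x, updated sum)
                delayed = x
        return downstream(shifted())
    return cell

def sink(stream):
    """Host-side end of the pipeline: collect the finished sums."""
    return [y for _, y in stream]

def systolic_fir(weights, xs):
    """y[t] = sum_i weights[i] * xs[t - i], via a chain of cells."""
    pipeline = sink
    for w in reversed(weights):                  # build chain back to front
        pipeline = make_cell(w, pipeline)
    return pipeline((x, 0.0) for x in xs)
```

Because the cells touch only their streams, the same cell program could run on a simulator or real hardware, which is the portability argument the paper makes.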


Application-Specific Systems, Architectures, and Processors | 2000

Explicit SIMD programming for asynchronous applications

A. Di Blas; Richard Hughey

This paper presents the SIMD Phase Programming Model, a simple approach to solving asynchronous, irregular problems on massively parallel SIMD computers. The novelty of this model consists of a simple, clear method on how to turn a general serial program into an explicitly parallel one for a SIMD machine, transferring a portion of the flow control into the single PEs. Three case studies (the Mandelbrot Set, the N-Queen problem, and a Hopfield neural network that approximates the maximum clique in a graph) will be presented, implemented on two different SIMD computers (the UCSC Kestrel and the MasPar MP-2). Our results so far show good performance with respect to conventional serial CPU computing time and in terms of the high parallel speedup and efficiency achieved.
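The lockstep, masked execution underlying the model can be illustrated on the paper's Mandelbrot case study. In this hedged sketch (a serial simulation, not the Kestrel or MasPar implementation), every simulated PE follows the same instruction sequence, and a per-PE enable flag simply disables further updates once that PE's point has escaped.

```python
def mandelbrot_simd(points, max_iter=50):
    """Iteration counts for complex points, computed in SIMD lockstep."""
    z = [0j] * len(points)
    count = [0] * len(points)
    active = [True] * len(points)          # per-PE enable mask
    for _ in range(max_iter):              # one broadcast instruction stream
        for p in range(len(points)):       # "parallel" across PEs
            if active[p]:                  # masked execution
                z[p] = z[p] * z[p] + points[p]
                count[p] += 1
                if abs(z[p]) > 2.0:
                    active[p] = False      # PE idles for remaining steps
    return count
```

The irregularity (points escape at different times) never reaches the controller: it is absorbed entirely by the per-PE state, which is the essence of moving flow control into the processing elements.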


Signal Processing Systems | 2008

Finding the Next Computational Model: Experience with the UCSC Kestrel

Richard Hughey; Andrea Di Blas

Architects and industry have been searching for the next durable computational model, the next step beyond the standard CPU. Graphics co-processors, though ubiquitous and powerful, can only be effectively used on a limited range of stream-based applications. The UCSC Kestrel parallel processor is part of a continuum of parallel processing architectures, stretching from the application-specific through the application-specialized to the application-unspecific. Kestrel combines an ALU, multiplier, and local memory, with Systolic Shared Registers for seamless merging of communication and computation, and an innovative condition stack for rapid conditionals. The result has been a readily programmable and efficient co-processor for a wide range of applications, including biological sequence analysis, image processing, and irregular problems. Experience with Kestrel indicates that programmable systolic processing, and its natural combination with the Single Instruction-Multiple Data (SIMD) parallel architecture, is the most powerful, flexible, and power-efficient computational model available for a large group of applications. Unlike other approaches that try to displace or replace the standard serial processor, our model recognizes that the expansion in the application landscape and performance requirements simply imply that the most efficient solution is the combination of more than one type of processor. We propose a model in which the CPU and the GPU are complemented by “the third big chip,” a massively-parallel SIMD processor.
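The condition stack mentioned above can be sketched generically (the mechanism in the abstract, not Kestrel's hardware encoding): each PE keeps a stack of enable bits, IF pushes the local condition, ELSE inverts the top bit, ENDIF pops, and a PE executes a broadcast instruction only when every bit on its stack is set. Nesting then falls out for free.

```python
class CondStack:
    """Per-PE enable stack for nested conditionals on a SIMD array."""
    def __init__(self):
        self.stack = []

    def enabled(self):
        return all(self.stack)           # execute only if every level is on

    def if_(self, cond):                 # enter IF: push local condition
        self.stack.append(cond)

    def else_(self):                     # flip the top bit for the ELSE arm
        self.stack[-1] = not self.stack[-1]

    def endif(self):                     # leave the conditional
        self.stack.pop()

def simd_abs_diff(a, b):
    """Per-PE |a - b| via one broadcast IF/ELSE/ENDIF sequence."""
    pes = [CondStack() for _ in a]
    out = [0] * len(a)
    for p, cs in enumerate(pes):
        cs.if_(a[p] >= b[p])
    for p, cs in enumerate(pes):         # THEN arm, broadcast to all PEs
        if cs.enabled():
            out[p] = a[p] - b[p]
    for cs in pes:
        cs.else_()
    for p, cs in enumerate(pes):         # ELSE arm, same broadcast stream
        if cs.enabled():
            out[p] = b[p] - a[p]
    for cs in pes:
        cs.endif()
    return out
```

Both arms of every conditional are always broadcast; the stack just decides, per PE, which arm takes effect, which is what makes divergent code cheap on a SIMD machine.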


International Conference on Parallel Processing | 1991

B-SYS: A 470-Processor Programmable Systolic Array

Richard Hughey; Daniel P. Lopresti

Collaboration


Dive into Richard Hughey's collaboration.

Top Co-Authors

Eric Rice, University of California
Kevin Karplus, University of California
David M. Dahle, University of California
Don Speck, University of California
Mark Diekhans, University of California
Andrea Di Blas, University of California