Albert A. Liddicoat
Stanford University
Publications
Featured research published by Albert A. Liddicoat.
asilomar conference on signals, systems and computers | 2000
Albert A. Liddicoat; Michael J. Flynn
Typically, multipliers are used to compute the square and cube of an operand. A dedicated squaring unit can compute the square of an operand faster and more efficiently than a multiplier. This paper proposes a parallel cubing unit that computes the cube of an operand 25 to 30% faster than a multiplier-based approach. Furthermore, the reduced squaring and cubing units are modeled mathematically, and their performance and area requirements are studied for operands up to 54 bits in length. The applicability of the proposed cubing circuit to current Newton-Raphson and Taylor series function evaluation units is discussed.
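The partial-product folding that makes a dedicated squaring unit cheaper and faster than a general multiplier can be illustrated in a few lines. This is a Python sketch of the underlying bit-level identities, not the paper's circuit, and the function name is ours:

```python
def square_via_reduced_partial_products(x: int, n: int) -> int:
    """Square an n-bit operand using the folded partial-product matrix.

    Identities a hardware squaring unit exploits:
      x_i * x_i = x_i                  (diagonal terms need no AND gate)
      x_i*x_j + x_j*x_i = 2 * x_i*x_j  (each symmetric pair folds into
                                        one term shifted left by one)
    This nearly halves the partial-product matrix versus a multiplier.
    """
    bits = [(x >> i) & 1 for i in range(n)]
    total = 0
    for i in range(n):
        total += bits[i] << (2 * i)                      # diagonal term
        for j in range(i + 1, n):
            total += (bits[i] & bits[j]) << (i + j + 1)  # folded pair
    return total
```

In hardware the payoff is the smaller partial-product matrix fed to the reduction tree; the loop above only demonstrates that the folded matrix still sums to x².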
digital systems design | 2001
Albert A. Liddicoat; Michael J. Flynn
In modern processors, floating point divide operations often take 20 to 25 clock cycles, five times the latency of multiplication. Typically, multiplicative algorithms with quadratic convergence are used for high-performance division. A divide unit based on the multiplicative Newton-Raphson iteration is proposed. This divide unit utilizes a higher-order Newton-Raphson reciprocal approximation to compute the quotient quickly, efficiently, and with high throughput. The divide unit achieves fast execution by computing the square, cube, and higher powers of the approximation directly, much faster than the traditional approach of serial multiplications. Additionally, the second-, third-, and higher-order terms are computed simultaneously, further reducing the divide latency. Significant hardware reductions have been identified that shrink the overall computation and, therefore, the area required for implementation and the power consumed. The proposed hardware unit is designed to achieve the desired quotient precision in a single iteration, allowing the unit to be fully pipelined for maximum throughput.
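The higher-order iteration the abstract refers to can be sketched numerically. In this illustrative Python version (not the paper's hardware design), r0 stands in for a lookup-table seed, and the powers of the error term that hardware would produce with dedicated squaring/cubing units are computed serially:

```python
def reciprocal_high_order(d: float, r0: float, order: int) -> float:
    """One higher-order Newton-Raphson step toward 1/d.

    With an initial approximation r0 ~ 1/d, the residual error is
    e = 1 - d*r0, and exactly 1/d = r0 / (1 - e) = r0 * (1 + e + e^2 + ...).
    Truncating the series at e**order gives convergence of order
    (order + 1), so a single step with enough terms can reach full
    quotient precision -- the basis for a fully pipelined divide unit.
    """
    e = 1.0 - d * r0
    poly = sum(e ** k for k in range(order + 1))  # 1 + e + e^2 + ... + e^order
    return r0 * poly
```

The truncation error is roughly r0 * e**(order+1), so even a crude seed (here two correct digits) reaches near-full double precision in one step.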
asilomar conference on signals, systems and computers | 2001
Hossam A. H. Fahmy; Albert A. Liddicoat; Michael J. Flynn
This work presents several techniques to improve the effectiveness of floating point arithmetic computations. A partially redundant number system is proposed as an internal format for arithmetic operations. The redundant number system enables carry-free arithmetic operations to improve performance. Conversion from the proposed internal format back to the standard IEEE format is done only when an operand is written to memory. Efficient arithmetic units for floating point addition, multiplication, and division are proposed using the redundant number system. The proposed system achieves better overall performance across all of the functional units compared with state-of-the-art designs. The proposed internal format and arithmetic units comply with all the rounding modes of the IEEE 754 floating point standard.
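Carry-free arithmetic of the kind the abstract exploits can be illustrated with carry-save addition, the simplest redundant representation. This is a Python sketch of the general idea, not the paper's specific internal format:

```python
def carry_save_add(s: int, c: int, x: int) -> tuple[int, int]:
    """Add x into a redundant (sum, carry) pair without carry propagation.

    Every bit position is a 3:2 compressor working independently, so the
    delay is constant regardless of operand width -- the core benefit of
    a redundant internal format.  The carry-propagate add is deferred.
    """
    sum_out = s ^ c ^ x                               # per-bit sum (XOR)
    carry_out = ((s & c) | (s & x) | (c & x)) << 1    # per-bit majority, shifted
    return sum_out, carry_out

def resolve(s: int, c: int) -> int:
    """Convert back to standard binary: the one carry-propagate add,
    analogous to converting the internal format to IEEE on a store."""
    return s + c
```

Any number of operands can be accumulated in constant time per addition; only `resolve` pays the carry-propagation delay, mirroring the paper's strategy of converting to IEEE format only on writes to memory.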
conference on advanced signal processing algorithms architectures and implementations | 2002
Hossam A. H. Fahmy; Albert A. Liddicoat; Michael J. Flynn
A parametric time-delay model for comparing floating point unit implementations is proposed. This model is used to compare a previously proposed floating point adder using a redundant number representation with other high-performance implementations. The operand width, the fan-in of the logic gates, and the radix of the redundant format are parameters of the model. The comparison is carried out over a range of operand widths, fan-ins, and radices to show the merits of each implementation.
field programmable logic and applications | 2001
Michael J. Flynn; Albert A. Liddicoat
System and processor architectures depend on changes in technology. Looking ahead, as die density and speed increase, power consumption and on-chip interconnection delay become increasingly important in defining architecture tradeoffs. While technology improvements enable increasingly complex processor implementations, there are physical and program-behavior limits to the usefulness of this complexity at the processor level. The architecture emphasis then shifts to the system: integrating controllers, signal processors, and other components with the processor to achieve enhanced system performance. In dealing with these elements, adaptability or reconfiguration is essential for optimizing system performance in a changing application environment. A hierarchy of adaptation is proposed based on flexible processor architectures, traditional FPL, and a new coarse-grain adaptive arithmetic cell. The adaptive arithmetic cell offers high-performance arithmetic operations while providing computational flexibility, and it supports efficient, dynamic reconfiguration of the arithmetic units. Hybrid fine- and coarse-grain techniques may offer the best path to the continued evolution of the processor-based system.
asilomar conference on signals, systems and computers | 1999
Oskar Mencer; Martin Morf; Albert A. Liddicoat; Michael J. Flynn
Continued fractions (CFs) efficiently compute digit-serial rational function approximations. Traditionally, CFs are used to compute homographic functions such as y = (ax+b)/(cx+d), which map a continued fraction x to a continued fraction y for given integers a, b, c, d. Improvements in the implementation of CF algorithms open up their use to many digital filtering applications in both software and hardware. These improvements include error control, more efficient number representation, and the associated efficient conversions. Continued fractions have been applied to digital filters in the frequency domain; these techniques include methods to compute optimal coefficients for rational transfer functions of digital filters and the realization of ladder forms for digital filtering. We propose a time domain digital filtering technique that incorporates continued fraction arithmetic units. For the FIR filter example chosen, the proposed technique achieves a 45-50% reduction in the mean square error of the transfer function.
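The digit-serial evaluation of y = (ax+b)/(cx+d) on a continued fraction can be sketched in the style of Gosper's CF arithmetic. This is an illustrative Python version assuming a finite regular CF input and positive intermediate states, not the paper's arithmetic unit:

```python
def homographic_cf(x_terms, a, b, c, d):
    """Continued fraction of y = (a*x + b)/(c*x + d), where x is given
    by its finite regular continued-fraction terms.

    Digit-serial sketch: each input term of x is absorbed into the 2x2
    state (a, b, c, d); an output term is emitted as soon as it is
    forced regardless of the unread tail of x.
    """
    out = []
    for t in x_terms:
        # Absorb term t of x (x = t + 1/x'):
        a, b, c, d = a * t + b, a, c * t + d, c
        # Emit output terms while the next integer part is pinned down.
        while c != 0 and d != 0 and a // c == b // d:
            q = a // c
            out.append(q)
            a, b, c, d = c, d, a - q * c, b - q * d
    # Input exhausted: x' -> infinity, so y = a/c; flush via Euclid.
    while c != 0:
        q = a // c
        out.append(q)
        a, c = c, a - q * c
    return out
```

For example, with x = 13/5 = [2; 1, 1, 2] and y = (2x+1)/(x+3) = 31/28, the routine emits [1, 9, 3], the CF of 31/28, consuming x one term at a time.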
Archive | 2002
Michael J. Flynn; Albert A. Liddicoat
Archive | 2001
Albert A. Liddicoat; Michael J. Flynn
Archive | 2000
Albert A. Liddicoat; Michael J. Flynn
Lecture Notes in Computer Science | 2001
Michael J. Flynn; Albert A. Liddicoat