Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Altaf Abdul Gaffar is active.

Publication


Featured research published by Altaf Abdul Gaffar.


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2006

Accuracy-Guaranteed Bit-Width Optimization

Dong-U Lee; Altaf Abdul Gaffar; Ray C. C. Cheung; Oskar Mencer; Wayne Luk; George A. Constantinides

An automated static approach for optimizing bit widths of fixed-point feedforward designs with guaranteed accuracy, called MiniBit, is presented. Methods to minimize both the integer and fraction parts of fixed-point signals with the aim of minimizing the circuit area are described. For range analysis, the technique in this paper identifies the number of integer bits necessary to meet range requirements. For precision analysis, a semianalytical approach with analytical error models in conjunction with adaptive simulated annealing is employed to optimize the number of fraction bits. The analytical models make it possible to guarantee overflow/underflow protection and numerical accuracy for all inputs over the user-specified input intervals. Using a stream compiler for field-programmable gate arrays (FPGAs), the approach in this paper is demonstrated with polynomial approximation, RGB-to-YCbCr conversion, matrix multiplication, B-splines, and discrete cosine transform placed and routed on a Xilinx Virtex-4 FPGA. Improvements for a given design reduce the area and the latency by up to 26% and 12%, respectively, over a design using optimum uniform fraction bit widths. Studies show that MiniBit-optimized designs are within 1% of the area produced from the integer linear programming approach.
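
The precision-analysis step described above can be pictured as a search over fraction bit-widths constrained by an analytical error bound. The sketch below is a toy illustration of that idea, not the MiniBit implementation: the per-signal error weights, the area proxy, and the cooling schedule are all assumptions made for demonstration.

```python
# Toy sketch: adaptive simulated annealing over fraction bit-widths under an
# analytical worst-case error bound (all models below are assumptions).
import math
import random

def error_bound(fracs, weights):
    # Assumed model: truncating signal i to fracs[i] fraction bits contributes
    # weights[i] * 2^-fracs[i] to the worst-case output error.
    return sum(w * 2.0 ** -f for w, f in zip(weights, fracs))

def area(fracs):
    # Crude area proxy: total number of fraction bits (assumption).
    return sum(fracs)

def anneal(weights, max_error, n_iters=20000, t0=5.0):
    fracs = [32] * len(weights)              # start from a safe, wide allocation
    best = list(fracs)
    for i in range(n_iters):
        t = t0 * (1.0 - i / n_iters) + 1e-6  # simple cooling schedule
        cand = list(fracs)
        j = random.randrange(len(cand))
        cand[j] = max(1, cand[j] + random.choice((-1, 1)))
        if error_bound(cand, weights) <= max_error:    # keep the accuracy guarantee
            delta = area(cand) - area(fracs)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                fracs = cand
                if area(fracs) < area(best):
                    best = list(fracs)
    return best

# Example: three signals with hypothetical sensitivity weights, 2^-12 error budget.
print(anneal([1.0, 0.5, 0.25], 2.0 ** -12))
```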


field-programmable custom computing machines | 2004

Unifying bit-width optimisation for fixed-point and floating-point designs

Altaf Abdul Gaffar; Oskar Mencer; Wayne Luk

This paper presents a method that offers a uniform treatment for bit-width optimisation of both fixed-point and floating-point designs. Our work utilises automatic differentiation to compute the sensitivities of outputs to the bit-widths of the various operands in the design. This sensitivity analysis enables us to explore and compare fixed-point and floating-point implementations for a particular design. As a result, we can automate the selection of the optimal number representation for each variable in a design to optimise area and performance. We implement our method in the BitSize tool targeting reconfigurable architectures, which takes user-defined constraints to direct the optimisation procedure. We illustrate our approach using applications such as ray-tracing and function approximation.
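
To make the sensitivity idea concrete, the sketch below assumes the derivatives |dy/dx_i| have already been obtained (for example by automatic differentiation) and uses them to budget error across operands, once under a fixed-point truncation model and once under a floating-point relative-rounding model. The error-splitting rule and the two models are illustrative assumptions, not the BitSize tool's actual procedure.

```python
# Sketch: turn output sensitivities into candidate bit-widths for fixed-point
# and floating-point operands (assumed error models, hypothetical numbers).
import math

def fixed_fraction_bits(sens, budget):
    # Fixed point: truncating signal i to f bits adds roughly sens[i] * 2^-f
    # error, so split the budget evenly and solve for f (assumed model).
    per_signal = budget / len(sens)
    return [max(1, math.ceil(math.log2(s / per_signal))) for s in sens]

def float_mantissa_bits(sens, values, budget):
    # Floating point: rounding x_i to m mantissa bits adds roughly
    # sens[i] * |x_i| * 2^-m error (relative rounding model, an assumption).
    per_signal = budget / len(sens)
    return [max(2, math.ceil(math.log2(s * abs(v) / per_signal)))
            for s, v in zip(sens, values)]

sens = [3.0, 0.2, 1.5]        # hypothetical sensitivities from AD
vals = [10.0, 0.01, 2.0]      # representative operand magnitudes
budget = 1e-3
print("fixed-point fraction bits:", fixed_fraction_bits(sens, budget))
print("floating-point mantissa bits:", float_mantissa_bits(sens, vals, budget))
```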


field-programmable technology | 2002

Floating-point bitwidth analysis via automatic differentiation

Altaf Abdul Gaffar; Oskar Mencer; Wayne Luk; Peter Y. K. Cheung; Nabeel Shirazi

Automatic bitwidth analysis is a key ingredient for high-level programming of FPGAs and high-level synthesis of VLSI circuits. The objective is to find the minimal number of bits to represent a value in order to minimise the circuit area and to improve the efficiency of the respective arithmetic operations, while satisfying user-defined numerical constraints. We present a novel approach to bitwidth or precision analysis for floating-point designs. The approach involves analysing the dataflow graph representation of a design to see how sensitive the output of a node is to changes in the outputs of other nodes: higher sensitivity requires higher precision and hence more output bits. We automate such sensitivity analysis by a mathematical method called automatic differentiation, which involves differentiating variables in a design with respect to other variables. We illustrate our approach by optimising the bitwidth for two examples, a discrete Fourier transform (DFT) implementation and a finite impulse response (FIR) filter implementation.
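
The core mechanism named in the abstract, automatic differentiation, can be demonstrated in a few lines of forward-mode code. The Dual class and the toy dataflow below are an illustration of how output-to-node sensitivities are obtained, not the authors' implementation.

```python
# Forward-mode automatic differentiation with dual numbers: propagate a value
# together with its derivative through the dataflow.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: d(uv) = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sensitivity(f, inputs, i):
    # Seed the i-th input with derivative 1 and propagate through f.
    duals = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(inputs)]
    return f(*duals).der

# Example dataflow: y = x0*x1 + 3*x1 (a stand-in for a DFT/FIR datapath node).
f = lambda x0, x1: x0 * x1 + 3 * x1
x = [2.0, 5.0]
print([sensitivity(f, x, i) for i in range(len(x))])  # [5.0, 5.0]
```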


IEEE Transactions on Computers | 2005

Optimizing hardware function evaluation

Dong-U Lee; Altaf Abdul Gaffar; Oskar Mencer; Wayne Luk

We present a methodology and an automated system for function evaluation unit generation. Our system selects the best function evaluation hardware for a given function, accuracy requirements, technology mapping, and optimization metrics, such as area, throughput, and latency. Function evaluation f(x) typically consists of range reduction and the actual evaluation on a small convenient interval such as [0, π/2) for sin(x). We investigate the impact of hardware function evaluation with range reduction for a given range and precision of x and f(x) on area and speed. An automated bit-width optimization technique for minimizing the sizes of the operators in the data paths is also proposed. We explore a vast design space for fixed-point sin(x), log(x), and √x accurate to one unit in the last place using MATLAB and ASC, a stream compiler for field-programmable gate arrays (FPGAs). In this study, we implement over 2,000 placed-and-routed FPGA designs, resulting in over 100 million application-specific integrated circuit (ASIC) equivalent gates. We provide optimal function evaluation results for range and precision combinations between 8 and 48 bits.
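
As a concrete picture of the range-reduction step mentioned above, the following sketch folds an arbitrary argument of sin(x) into [0, π/2) and reconstructs the result from the quadrant. It is a floating-point software illustration of the idea; the hardware generated by the authors' flow would replace the math.sin/math.cos calls on the reduced interval with a fixed-point approximation.

```python
# Range reduction for sin(x): x = k*(pi/2) + r with r in [0, pi/2); the
# quadrant k mod 4 selects the core function and the sign for reconstruction.
import math

def reduce_sin(x):
    k = math.floor(x / (math.pi / 2))
    r = x - k * (math.pi / 2)
    return k % 4, r

def sin_via_reduction(x):
    quadrant, r = reduce_sin(x)
    # Evaluate only on the small interval; a hardware unit would use a
    # polynomial or table-based approximation here.
    core = math.sin(r) if quadrant in (0, 2) else math.cos(r)
    return core if quadrant in (0, 1) else -core

for x in (0.3, 2.0, 4.0, -7.5):
    print(x, sin_via_reduction(x), math.sin(x))
```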


design automation conference | 2005

MiniBit: bit-width optimization via affine arithmetic

Dong-U Lee; Altaf Abdul Gaffar; Oskar Mencer; Wayne Luk

MiniBit, our automated approach for optimizing bit-widths of fixed-point designs, is based on static analysis via affine arithmetic. We describe methods to minimize both the integer and fraction parts of fixed-point signals with the aim of minimizing circuit area. Our range analysis technique identifies the number of integer bits required. For precision analysis, we employ a semi-analytical approach with analytical error models in conjunction with adaptive simulated annealing to find the optimum number of fraction bits. Improvements for a given design reduce area and latency by up to 20% and 12%, respectively, over optimum uniform fraction bit-widths on a Xilinx Virtex-4 FPGA.
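
The range-analysis step via affine arithmetic can be sketched as follows: every signal is a centre value plus bounded noise terms, operations propagate those terms, and the resulting bound fixes the integer bit-width. The class below uses a simplified, conservative multiplication rule and is an illustration of the technique, not the MiniBit tool itself.

```python
# Minimal affine arithmetic: x = center + sum(coeff_i * eps_i), eps_i in [-1, 1].
import itertools
import math

_fresh = itertools.count()                  # ids for new noise symbols

class Affine:
    def __init__(self, center, noise=None):
        self.center = center
        self.noise = dict(noise or {})      # noise-symbol id -> coefficient
    def rad(self):
        return sum(abs(v) for v in self.noise.values())
    def bounds(self):
        return self.center - self.rad(), self.center + self.rad()
    def __add__(self, other):
        n = dict(self.noise)
        for k, v in other.noise.items():
            n[k] = n.get(k, 0.0) + v
        return Affine(self.center + other.center, n)
    def __mul__(self, other):
        # Linear part of the product plus one fresh noise symbol that
        # conservatively bounds the nonlinear residue.
        n = {k: other.center * v for k, v in self.noise.items()}
        for k, v in other.noise.items():
            n[k] = n.get(k, 0.0) + self.center * v
        n[next(_fresh)] = self.rad() * other.rad()
        return Affine(self.center * other.center, n)

def integer_bits(a, signed=True):
    lo, hi = a.bounds()
    bits = max(1, math.ceil(math.log2(max(abs(lo), abs(hi)) + 1)))
    return bits + (1 if signed else 0)      # one extra bit for the sign

# x in [1, 3] and y in [-2, 2]: how many integer bits does x*y + x need?
x = Affine(2.0, {"ex": 1.0})
y = Affine(0.0, {"ey": 2.0})
print((x * y + x).bounds(), integer_bits(x * y + x))
```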


field-programmable logic and applications | 2002

Automating Customisation of Floating-Point Designs

Altaf Abdul Gaffar; Wayne Luk; Peter Y. K. Cheung; Nabeel Shirazi; James Hwang

This paper describes a method for customising the representation of floating-point numbers that exploits the flexibility of reconfigurable hardware. The method determines the appropriate size of mantissa and exponent for each operation in a design, so that a cost function with a given error specification for the output relative to a reference representation can be satisfied. We adopt an iterative implementation of this method, which supports IEEE single-precision or double-precision floating-point representation as the reference representation. This implementation produces customised floating-point formats with arbitrary-sized mantissa and exponent. The tool follows a generic framework designed to cover a variety of arithmetic representations and their hardware implementations; both combinational and pipelined designs can be developed. Results show that, particularly for calculations involving large dynamic ranges, our tool can produce hardware that is smaller and faster when compared with a design adopting the reference representation.
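
The following sketch shows what a customised floating-point format amounts to in software terms: round a double-precision reference value to an m-bit mantissa and clamp it to an e-bit exponent range, then compare against the reference. The rounding and flush-to-zero rules here are simplified assumptions for illustration.

```python
# Quantise a value to a hypothetical custom floating-point format.
import math

def quantise(x, mant_bits, exp_bits):
    if x == 0.0:
        return 0.0
    mant, exp = math.frexp(x)                 # x = mant * 2**exp, mant in [0.5, 1)
    emax = 2 ** (exp_bits - 1)                # symmetric exponent range (assumed)
    if exp > emax:
        return math.copysign(math.inf, x)     # overflow of the custom exponent
    if exp < -emax:
        return 0.0                            # flush-to-zero underflow (assumed)
    scale = 2.0 ** mant_bits
    return math.ldexp(round(mant * scale) / scale, exp)

ref = math.pi * 1e3
for m, e in ((8, 5), (12, 6), (23, 8)):
    q = quantise(ref, m, e)
    print(f"mant={m:2d} exp={e} value={q:.6f} rel.err={abs(q - ref) / ref:.2e}")
```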


international symposium on circuits and systems | 2006

Fast word-level power models for synthesis of FPGA-based arithmetic

Jonathan A. Clarke; Altaf Abdul Gaffar; George A. Constantinides; Peter Y. K. Cheung

This paper presents power models for multiplication and addition components on FPGAs which can be used at a high-level design description stage to estimate their logic and intra-component routing power consumption. The models presented are parameterized by the word-length of the component and the word-level statistics of its input signals. A key feature of these power models is the ability to handle both zero-mean and non-zero-mean signals. A method for measuring intra-component routing power consumption is presented, enabling the power models to account for both logic and routing power in components. The resulting models are equations which can be used to estimate the power consumed in an arithmetic component in a fraction of a second at the pre-placement stage of the design flow. The models have a mean relative error of 7.2% compared to bit-level power simulation of the placed-and-routed design.
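
To convey the shape of such a model, the sketch below evaluates a hypothetical equation in the component word-length and the mean and standard deviation of its inputs, with a non-zero mean damping the activity of sign-extension bits. The functional form and every coefficient are invented for illustration; the paper's models are fitted to measured FPGA data.

```python
# Hypothetical word-level power model: power as a function of word-length and
# input statistics (mean, standard deviation). Coefficients are made up.
import math

def multiplier_power_mw(word_length, mean, std, coeffs=(0.021, 0.8, 0.35)):
    a, b, c = coeffs                          # hypothetical fitted coefficients
    # Bits that actually toggle scale roughly with log2 of the signal spread;
    # the remaining bits behave like sign-extension bits (assumption).
    active_bits = min(word_length, max(1.0, math.log2(6.0 * std + 1.0)))
    sign_bits = word_length - active_bits
    dc_shift = abs(mean) / (std + 1e-9)       # non-zero mean damps sign activity
    return a * active_bits ** 2 + b * active_bits + c * sign_bits / (1.0 + dc_shift)

# Zero-mean vs strongly non-zero-mean inputs on a 16-bit multiplier.
print(multiplier_power_mw(16, mean=0.0, std=1000.0))
print(multiplier_power_mw(16, mean=8000.0, std=1000.0))
```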


field-programmable logic and applications | 2005

Parameterized logic power consumption models for FPGA-based arithmetic

Jonathan A. Clarke; Altaf Abdul Gaffar; George A. Constantinides

Fast power estimation is a growing requirement for tools that perform power consumption optimization. This paper addresses this requirement by presenting a technique capable of providing a power estimate using only the word-level statistics of signals within an arithmetic hardware design. By abstracting away from the low-level details of a design, it is possible to reduce the time required to calculate power consumption dramatically. Power models for multiplication and addition have been constructed using an experimental method, and the operation of these models is illustrated by estimating the power consumed in logic for two example circuits: a sum of products and a parameterised polynomial evaluation. The proposed method provides estimates within 10% of low-level power estimates given by XPower.
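
A rough sketch of such an experimental construction: characterise the component at several word-lengths and input statistics, record a low-level power measurement for each run, and fit an equation to the samples by least squares. The feature set and the synthetic measurements below are assumptions for illustration only.

```python
# Fit a word-level power equation to hypothetical characterisation data.
import numpy as np

# (word_length, input_std, measured_power_mW) -- invented characterisation runs.
samples = [
    (8, 100.0, 3.1), (8, 1000.0, 4.0),
    (16, 100.0, 7.9), (16, 1000.0, 10.2),
    (24, 100.0, 14.5), (24, 1000.0, 18.8),
]

def features(n, std):
    # Model power as a0 + a1*n + a2*n^2 + a3*log2(std): linear in the
    # coefficients, so ordinary least squares suffices (assumed model form).
    return [1.0, n, n * n, np.log2(std)]

A = np.array([features(n, s) for n, s, _ in samples])
p = np.array([pw for _, _, pw in samples])
coeffs, *_ = np.linalg.lstsq(A, p, rcond=None)

# Use the fitted equation to predict an unseen configuration.
print("coefficients:", coeffs)
print("predicted power at 20 bits, std=500:", float(np.dot(features(20, 500.0), coeffs)))
```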


field-programmable technology | 2006

PowerBit - power aware arithmetic bit-width optimization

Altaf Abdul Gaffar; Jonathan A. Clarke; George A. Constantinides

In this paper we present a novel method for reducing the dynamic power consumption of FPGA-based arithmetic circuits by optimizing the bit-widths of the signals inside the circuit. The proposed method is implemented in the tool PowerBit, which uses macro models parameterized by word-level signal statistics to estimate the circuit power consumption during the optimization process. The power models take into account the generation and propagation of signal glitches through the circuit. The bit-width optimization uses a static analysis technique that provides guaranteed accuracy in the design outputs. We show that, for sample designs implemented on FPGAs, power-optimized designs with multiple bit-widths achieve improvements of over 10% compared to designs allocated uniform bit-widths.
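
The optimisation loop can be pictured as repeatedly shrinking whichever signal saves the most estimated power while a static error bound still holds. The greedy sketch below uses toy power and error models in place of PowerBit's macro models and static analysis.

```python
# Greedy power-aware bit-width search under a guaranteed error bound
# (toy stand-in models, hypothetical numbers).
def error_bound(widths, weights):
    return sum(w * 2.0 ** -b for w, b in zip(weights, widths))

def power_estimate(widths, activity):
    # Hypothetical macro model: power grows with width times switching activity.
    return sum(a * b for a, b in zip(activity, widths))

def optimise(weights, activity, max_error, start=32):
    widths = [start] * len(weights)
    while True:
        best = None
        for i in range(len(widths)):
            trial = list(widths)
            trial[i] -= 1
            if trial[i] >= 1 and error_bound(trial, weights) <= max_error:
                saving = power_estimate(widths, activity) - power_estimate(trial, activity)
                if best is None or saving > best[0]:
                    best = (saving, trial)
        if best is None:
            return widths                    # no further reduction keeps the guarantee
        widths = best[1]

# Three signals with hypothetical error weights and switching activities.
print(optimise(weights=[1.0, 0.5, 0.1], activity=[0.9, 0.4, 0.2], max_error=2 ** -10))
```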


field-programmable custom computing machines | 2002

Customising floating-point designs

Altaf Abdul Gaffar; Wayne Luk; Peter Y. K. Cheung; Nabeel Shirazi

This paper describes a method for customising the representation of floating-point numbers that exploits the flexibility of reconfigurable hardware. The method determines the appropriate size of mantissa and exponent for each operation in a design, so that a cost function with a given error specification for the output relative to a reference representation can be satisfied. Currently our tool, which adopts an iterative implementation of this method, supports single- or double-precision floating-point representation as the reference representation. It produces customised floating-point formats with arbitrary-sized mantissa and exponent. Results show that, for calculations involving large dynamic ranges, our method can achieve significant hardware reduction and speed improvement with respect to a design adopting the reference representation.
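
The iterative implementation mentioned above can be sketched as lowering the mantissa width step by step, re-evaluating the output error against the double-precision reference on sample inputs, and stopping just before the error specification is violated. The rounding helper and the toy datapath are illustrative assumptions.

```python
# Iteratively shrink the mantissa of a toy datapath until the error spec
# (relative to the double-precision reference) would be violated.
import math

def round_mantissa(x, bits):
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                      # x = m * 2**e with m in [0.5, 1)
    scale = 2.0 ** bits
    return math.ldexp(round(m * scale) / scale, e)

def datapath(x, bits=None):
    q = (lambda v: v) if bits is None else (lambda v: round_mantissa(v, bits))
    # Toy design with a large dynamic range: y = x*x + 1/x.
    return q(q(x) * q(x) + q(1.0 / q(x)))

def customise(inputs, max_rel_error, start_bits=52):
    bits = start_bits
    while bits > 2:
        worst = max(abs(datapath(x, bits - 1) - datapath(x)) / abs(datapath(x))
                    for x in inputs)
        if worst > max_rel_error:
            break                             # next reduction would violate the spec
        bits -= 1
    return bits

print(customise([0.001, 0.5, 3.0, 250.0], max_rel_error=1e-4))
```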

Collaboration


Dive into Altaf Abdul Gaffar's collaboration.

Top Co-Authors

Wayne Luk
Imperial College London

Oskar Mencer
Imperial College London

Dong-U Lee
University of California

Ray C. C. Cheung
City University of Hong Kong