
Publication


Featured research published by Babak Zamanlooy.


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2014

Efficient VLSI Implementation of Neural Networks With Hyperbolic Tangent Activation Function

Babak Zamanlooy; Mitra Mirhassani

The nonlinear activation function is one of the main building blocks of artificial neural networks, and hyperbolic tangent and sigmoid are the most widely used nonlinear activation functions. Accurate implementation of these transfer functions in digital networks faces certain challenges. In this paper, an efficient approximation scheme for the hyperbolic tangent function is proposed. The approximation is based on a mathematical analysis that treats the maximum allowable error as a design parameter. A hardware implementation of the proposed approximation scheme is presented, showing that the proposed structure compares favorably with previous architectures in terms of area and delay. The proposed structure requires fewer output bits for the same maximum allowable error than the state of the art. Since the number of output bits of the activation function determines the bit width of the multipliers and adders in the network, the proposed activation function reduces the area, delay, and power of VLSI implementations of artificial neural networks with the hyperbolic tangent activation function.
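
The paper derives its segmentation analytically; as a rough sketch of the design idea, the Python below grows piecewise-linear segments greedily until the chord error against tanh would exceed a chosen maximum allowable error. The greedy search, the probe step, and the [0, 4] input range are all assumptions for illustration, not the authors' scheme.

```python
# Illustrative sketch (not the paper's algorithm): build a piecewise-linear
# approximation of tanh(x) whose maximum error stays below a chosen bound.
import numpy as np

def pwl_tanh_segments(eps, x_max=4.0, probe=1e-3):
    """Greedily grow segments on [0, x_max] so that the chord between the
    segment endpoints deviates from tanh by at most eps (checked on a
    dense sample grid). tanh is odd, so [-x_max, 0) mirrors these segments."""
    segments = []                     # list of (x_start, x_end) breakpoints
    x0 = 0.0
    while x0 < x_max:
        x1 = min(x0 + probe, x_max)
        # extend the segment endpoint while the chord error stays within eps
        while x1 < x_max:
            trial = min(x1 + probe, x_max)
            xs = np.linspace(x0, trial, 64)
            chord = np.interp(xs, [x0, trial], [np.tanh(x0), np.tanh(trial)])
            if np.max(np.abs(chord - np.tanh(xs))) > eps:
                break
            x1 = trial
        segments.append((x0, x1))
        x0 = x1
    return segments

segs = pwl_tanh_segments(eps=0.01)
print(f"{len(segs)} segments meet a 0.01 max-error bound on [0, 4]")
```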


IEEE International NEWCAS Conference | 2012

Efficient hardware implementation of threshold neural networks

Babak Zamanlooy; Mitra Mirhassani

Area and noise-to-signal ratio (NSR) are two main factors in the hardware implementation of neural networks. Despite attempts to reduce the area of sigmoid and hyperbolic tangent activation functions, they cannot match the efficiency of the threshold activation function. A new NSR-efficient architecture for threshold networks is proposed in this paper. The proposed architecture uses a different number of bits for weight storage in each layer, and the optimum number of bits for each layer is found through a mathematical derivation based on a stochastic model. Network training is done using the recently introduced learning algorithm called the Extreme Learning Machine (ELM). A 4-7-4 network is considered as a case study and its hardware implementation for different weight accuracies is investigated. The proposed design is more efficient when area × NSR is used as the performance metric. A VLSI implementation of the proposed architecture in a 0.18 μm CMOS process is presented, showing 44.16%, 58.04%, and 67.30% improvement for a total number of bits equal to 16, 20, and 24, respectively.
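
As a hedged numerical illustration of why the weight bit width matters, the sketch below estimates the NSR a single linear layer incurs when its weights are rounded to b bits; storage area grows with b while the NSR falls, which is the trade the area × NSR metric captures. The paper instead derives the optimum per-layer allocation analytically from a stochastic model; the layer sizes here (loosely echoing the 4-7-4 case study) and the uniform input distribution are assumptions.

```python
# Empirical probe of weight-quantization noise, not the paper's derivation.
import numpy as np

rng = np.random.default_rng(0)

def layer_nsr(bits, n_in=7, n_out=4, n_samples=10_000):
    """NSR of one linear layer whose weights in [-1, 1) are rounded
    to `bits` bits (sign included)."""
    w = rng.uniform(-1, 1, size=(n_out, n_in))
    step = 2.0 ** (1 - bits)           # quantization step covering [-1, 1)
    w_q = np.round(w / step) * step
    x = rng.uniform(-1, 1, size=(n_in, n_samples))
    y, y_q = w @ x, w_q @ x
    return np.mean((y - y_q) ** 2) / np.mean(y ** 2)

for b in (4, 6, 8):
    print(f"{b} bits per weight -> NSR ~ {layer_nsr(b):.2e}")
```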


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2015

CVNS Synapse Multiplier for Robust Neurochips With On-Chip Learning

Babak Zamanlooy; Mitra Mirhassani

Designing low noise-to-signal ratio (NSR) structures is one of the main concerns when implementing hardware-based neural networks. In this paper, a new continuous valued number system (CVNS) multiplication algorithm for low-resolution environments is proposed that produces accurate results. Using the proposed CVNS multiplication algorithm, a VLSI implementation of a high-resolution mixed-signal CVNS synapse multiplier for neurochips with on-chip learning is realized. The proposed algorithm yields structures with a lower NSR and can therefore be exploited to design a robust CVNS Adaline for neurochips with on-chip learning.


Neurocomputing | 2014

Area-efficient robust Madaline based on continuous valued number system

Babak Zamanlooy; Mitra Mirhassani

Achieving a low noise-to-signal ratio (NSR) is one of the major concerns when implementing hardware-based neural networks, and Continuous Valued Number System (CVNS) features have been exploited to improve the NSR. The efficiency of the network model in terms of area, power consumption, and NSR is measured using the product of the total number of neurons in the network and the network NSR, which indicates the number of neurons required to reach a specific NSR. The network proposed in this paper stores the weights in digital registers while the processing is done in the analog domain using CVNS arithmetic. A mathematical analysis and a comparison between the proposed network and previous structures show that the proposed architecture improves on them in terms of both the NSR and the neuron-count × NSR product.


International Conference on Electronic Devices, Systems and Applications | 2010

Design of A 100MHz – 1.66GHz, 0.13µm CMOS phase locked loop

Mehdi Ayat; Behnam Babaei; Reza Ebrahimi Atani; Sattar Mirzakuchaki; Babak Zamanlooy

A fully integrated charge-pump phase-locked loop (PLL) is described. The PLL is designed and simulated in a 0.13 µm CMOS technology. The PLL lock range is from 100 MHz to 1.66 GHz.
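
For intuition about how a charge-pump PLL settles onto a reference, here is a minimal averaged behavioral model. Every parameter value below is an assumption chosen for illustration (a 500 MHz reference inside the reported lock range); none is taken from the paper's 0.13 µm design, and the averaged phase detector ignores cycle slipping.

```python
# Averaged (continuous-time) behavioral model of a charge-pump PLL.
import numpy as np

f_ref = 500e6        # reference frequency, Hz (assumed)
i_cp  = 100e-6       # charge-pump current, A (assumed)
k_vco = 1e9          # VCO gain, Hz/V (assumed)
f_0   = 400e6        # VCO free-running frequency, Hz (assumed)
r, c  = 10e3, 20e-12 # series-RC loop filter (assumed)
dt    = 1e-10        # simulation time step, s

theta_ref = theta_vco = v_cap = 0.0
for _ in range(200_000):                     # simulate 20 us
    phase_err = theta_ref - theta_vco
    i_avg = i_cp * phase_err / (2 * np.pi)   # averaged PFD/charge-pump output
    v_cap += i_avg * dt / c                  # integral path (capacitor)
    v_ctrl = v_cap + i_avg * r               # plus proportional path (resistor)
    f_vco = f_0 + k_vco * v_ctrl             # VCO tuning characteristic
    theta_ref += 2 * np.pi * f_ref * dt
    theta_vco += 2 * np.pi * f_vco * dt

print(f"VCO settles near {f_vco/1e6:.1f} MHz (reference: {f_ref/1e6:.0f} MHz)")
```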


Neurocomputing | 2017

Mixed-signal VLSI neural network based on Continuous Valued Number System

Babak Zamanlooy; Mitra Mirhassani

Mixed-signal neural networks have higher energy efficiency and lower area consumption than their equivalent digital implementations. However, the signal processing precision of mixed-signal implementations is limited. A mixed-signal arithmetic method called the Continuous Valued Number System (CVNS) is employed for the development and implementation of the different functions required in a neural network. The analog digits of this number system enable higher accuracy in analog operations and relate efficiently to digital and binary values. In this paper, the design and implementation of a 2-2-1 mixed-signal CVNS network is presented to validate the developed arithmetic method. In the proposed structure, weights are stored in digital registers while the arithmetic is based on the CVNS. The CVNS features have been exploited to address the limited signal processing precision, since online training of the network requires resolutions higher than 12 bits. The proposed structure satisfies this requirement of neural networks while yielding an optimized network configuration. The proposed network is designed, fabricated, and tested in a 0.18 µm technology.
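
A conceptual sketch of the storage/arithmetic split described above: weights held as signed 12-bit integers (standing in for the digital registers), with the forward pass computed on continuous values standing in for the CVNS analog arithmetic. Only the 2-2-1 topology comes from the paper; the weight values and the 12-bit register width (echoing the cited >12-bit training-resolution requirement) are illustrative assumptions.

```python
# Digital weight storage, continuous-valued computation: a toy model only.
import numpy as np

BITS = 12
SCALE = 2.0 ** (BITS - 1)            # signed fixed-point scaling for [-1, 1)

def to_register(w):
    """Store a real weight in [-1, 1) as a signed BITS-bit integer."""
    return int(np.clip(round(w * SCALE), -SCALE, SCALE - 1))

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# 2-2-1 network: register contents are integers, arithmetic is continuous
w_hidden = [[to_register(w) for w in row] for row in [[0.5, -0.25], [0.75, 0.125]]]
w_out = [to_register(w) for w in [0.625, -0.5]]

def forward(x):
    h = [sigmoid(sum(w / SCALE * xi for w, xi in zip(row, x))) for row in w_hidden]
    return sigmoid(sum(w / SCALE * hi for w, hi in zip(w_out, h)))

print(forward([0.3, -0.7]))
```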


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2017

An Analog CVNS-Based Sigmoid Neuron for Precise Neurochips

Babak Zamanlooy; Mitra Mirhassani

In this paper, the design and implementation of an analog sigmoid neuron are presented. The activation function of the proposed neuron is implemented as a piecewise linear approximation in the analog domain. The proposed neuron provides the required accuracy, which analog neural network implementations in general cannot achieve. The usual digital outputs of a sigmoid neuron are replaced with fewer analog digits of the continuous valued number system (CVNS), while the maximum approximation error is kept the same as in digital architectures. The proposed CVNS neuron results in an efficient ASIC implementation and is suitable for neurochips with on-chip learning. The VLSI implementation of the neuron is carried out using current-mode circuits. The implementation results compare favorably with previously developed structures in terms of area, delay, and power consumption: the proposed neuron structure occupies 28% less area than state-of-the-art methods and has two times lower power and delay.


International Symposium on Multiple-Valued Logic | 2012

Complexity Study of the Continuous Valued Number System Adders

Babak Zamanlooy; Ashley Novak; Mitra Mirhassani

The Continuous Valued Number System (CVNS) is a novel analog-digit number system that employs digit-level analog modular arithmetic. The information redundancy among the digits allows efficient binary operations using analog circuitry with arbitrary accuracy, which in turn reduces the area and the number of required interconnections. CVNS theory opens up a new approach to performing digital arithmetic with classical analog elements, such as current comparators and current mirrors, at arbitrary precision. Addition in the CVNS is digit-wise, and the digits do not intercommunicate. In this paper, the complexity of the two-operand CVNS adder is compared with that of similar CVNS adders as well as conventional threshold adders. The comparisons show that the CVNS adder is more area-efficient than conventional threshold logic adders.
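
A minimal numerical sketch of why CVNS addition can be digit-wise: if a value x in [0, 1) is represented by analog digits d_j = (2^j x) mod 2, then adding corresponding digits modulo 2 reproduces the digits of the sum, because mod 2 distributes over addition. The idealized floating-point model below is an assumption for illustration, not the paper's circuit-level formulation.

```python
# Idealized float model of CVNS-style digits and carry-free addition.

def cvns_digits(x, n_digits=8):
    """Map x in [0, 1) to continuous-valued digits d_j = (2^j * x) mod 2."""
    return [(x * 2.0 ** j) % 2.0 for j in range(n_digits)]

def cvns_add(a, b):
    """Digit-wise modular addition: no carry chain, no digit communication,
    because (u mod 2 + v mod 2) mod 2 == (u + v) mod 2."""
    return [(da + db) % 2.0 for da, db in zip(a, b)]

def to_bits(digits):
    """The integer part of digit j is binary digit j of the represented value."""
    return [int(d) for d in digits]

x, y = 0.40625, 0.28125                       # exactly representable in binary
print(to_bits(cvns_add(cvns_digits(x), cvns_digits(y))))
# -> [0, 1, 0, 1, 1, 0, 0, 0]: the binary digits of x + y = 0.6875 = 0.1011b
```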


Archive | 2017

Robust Analog Arithmetic Based on the Continuous Valued Number System

Babak Zamanlooy; Mitra Mirhassani



International Symposium on Circuits and Systems | 2014

Area efficient low-sensitivity lumped madaline based on Continuous Valued Number System

Babak Zamanlooy; Mitra Mirhassani

