Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Shubha Ramakrishnan is active.

Publication


Featured research published by Shubha Ramakrishnan.


IEEE Journal of Solid-State Circuits | 2010

A Floating-Gate-Based Field-Programmable Analog Array

Arindam Basu; Stephen Brink; Craig Schlottmann; Shubha Ramakrishnan; Csaba Petre; Scott Koziol; I. Faik Baskaya; Christopher M. Twigg; Paul E. Hasler

A field-programmable analog array (FPAA) with 32 computational analog blocks (CABs) and occupying 3 × 3 mm2 in 0.35-μm CMOS is presented. Each CAB has a wide variety of subcircuits ranging in granularity from multipliers and programmable offset wide-linear-range Gm blocks to nMOS and pMOS transistors. The programmable interconnects and circuit elements in the CAB are implemented using floating-gate (FG) transistors, the total number of which exceeds fifty thousand. Using FG devices eliminates the need for SRAM to store configuration bits since the switch stores its own configuration. This system exhibits significant performance enhancements over its predecessor in terms of achievable dynamic range (> 9 b of FG voltage), speed of accurate FG current programming (≈ 20 gates/s), and isolation between ON and OFF switches. An improved routing fabric has been designed that includes nearest-neighbor connections to minimize the penalty on bandwidth due to routing parasitics. A maximum bandwidth of 57 MHz through the switch matrix and around 5 MHz for a first-order low-pass filter is achievable on this chip, the limitation being a “program” mode switch that will be rectified in the next chip. Programming performance is improved drastically by implementing the entire algorithm on-chip with an SPI digital interface. Measured results of the individual subcircuits and two system examples, including an AM receiver and a speech processor, are presented.
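
As a rough illustration of why routing parasitics limit bandwidth, the Python sketch below models a routed signal path as a first-order RC low-pass and computes its 3 dB corner; the switch resistance and parasitic capacitance are assumed placeholder values, not figures measured from this chip.

```python
import math

# Hypothetical values for illustration only; neither figure is taken from the chip.
r_switch_ohm = 10e3      # assumed ON-resistance of one floating-gate routing switch
c_parasitic_f = 300e-15  # assumed parasitic capacitance of the routed line

# A routed path behaves roughly like a first-order RC low-pass:
# f_3dB = 1 / (2 * pi * R * C)
f_3db_hz = 1.0 / (2 * math.pi * r_switch_ohm * c_parasitic_f)
print(f"approx. 3 dB bandwidth of one routed segment: {f_3db_hz / 1e6:.1f} MHz")
```

Adding more switches in series raises the effective R and C of the path, which is why nearest-neighbor connections that bypass the global fabric reduce the bandwidth penalty.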


IEEE Transactions on Biomedical Circuits and Systems | 2011

Floating Gate Synapses With Spike-Time-Dependent Plasticity

Shubha Ramakrishnan; Paul E. Hasler; Christal Gordon

This paper describes a single-transistor floating-gate synapse device that can store a weight in a nonvolatile manner, compute a biological EPSP, and demonstrate biological learning rules such as long-term potentiation (LTP), long-term depression (LTD), and spike-time-dependent plasticity (STDP). We also describe a highly scalable architecture for a matrix of such synapses that implements the described learning rules. Parameters for weight update in the 0.35-μm process have been extracted and can be used to predict the change in weight based on the time difference between pre- and post-synaptic spike times.
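
For readers unfamiliar with the learning rule, the sketch below implements the standard exponential STDP weight update as a function of the pre/post spike-time difference. The amplitudes and time constants are placeholders for illustration, not the parameters extracted in the paper.

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20e-3, tau_minus=20e-3):
    """Standard exponential STDP rule (placeholder parameters, not the paper's).

    If the pre-synaptic spike precedes the post-synaptic spike the weight is
    potentiated (LTP); if it follows, the weight is depressed (LTD).
    """
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post before pre -> depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# Example: pre-synaptic spike 5 ms before (then after) the post-synaptic spike
print(stdp_delta_w(t_pre=0.000, t_post=0.005))   # positive -> LTP
print(stdp_delta_w(t_pre=0.005, t_post=0.000))   # negative -> LTD
```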


IEEE Transactions on Biomedical Circuits and Systems | 2013

A Learning-Enabled Neuron Array IC Based Upon Transistor Channel Models of Biological Phenomena

Stephen Brink; Stephen Nease; Paul E. Hasler; Shubha Ramakrishnan; Richard B. Wunderlich; Arindam Basu; Brian P. Degnan

We present a single-chip array of 100 biologically based electronic neuron models interconnected to each other and to the outside environment through 30,000 synapses. The chip was fabricated in a standard 350 nm CMOS IC process. Our approach uses dense circuit models of synaptic behavior, including biological computation and learning, as well as transistor channel models. We use Address-Event Representation (AER) spike communication for inputs and outputs to this IC. We present the IC architecture and infrastructure, including the IC itself, configuration tools, and the testing platform. We also present measurements of a small network of neurons, of STDP dynamics, and of a compiled spiking-neuron winner-take-all (WTA) topology, all implemented on this IC.
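
Address-Event Representation itself is straightforward to illustrate in software: each spike is transmitted as the address of the neuron that fired, together with a timestamp, and a connectivity table fans the event out to target synapses. The sketch below uses an invented toy connectivity table, not the chip's actual routing.

```python
from collections import namedtuple

# One AER event: the address of the neuron that spiked, and a timestamp
Event = namedtuple("Event", ["timestamp", "address"])

# Toy connectivity table, invented for illustration:
# source neuron address -> list of (target neuron, synaptic weight)
connections = {
    0: [(3, 0.5), (7, 0.2)],
    1: [(3, -0.4)],
}

def deliver(events, connections):
    """Fan each incoming AER event out to its target synapses."""
    for ev in events:
        for target, weight in connections.get(ev.address, []):
            print(f"t={ev.timestamp * 1e3:.1f} ms: neuron {ev.address} -> "
                  f"neuron {target} (weight {weight:+.2f})")

deliver([Event(0.0010, 0), Event(0.0032, 1)], connections)
```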


IEEE Transactions on Biomedical Circuits and Systems | 2010

Neural Dynamics in Reconfigurable Silicon

Arindam Basu; Shubha Ramakrishnan; Csaba Petre; Scott Koziol; Stephen Brink; Paul E. Hasler

A neuromorphic analog chip is presented that is capable of implementing massively parallel neural computations while retaining the programmability of digital systems. We show measurements from neurons with Hopf bifurcations, integrate-and-fire neurons, excitatory and inhibitory synapses, passive dendrite cables, coupled spiking neurons, and central pattern generators implemented on the chip. This chip provides a platform not only for simulating detailed neuron dynamics but also for interfacing the same models with actual cells in applications such as the dynamic clamp. There are 28 computational analog blocks (CABs), each consisting of ion channels with tunable parameters, synapses, winner-take-all elements, current sources, transconductance amplifiers, and capacitors. Four other CABs have programmable bias generators. Programmability is achieved using floating-gate transistors with on-chip programming control. The switch matrix for interconnecting the components in CABs also consists of floating-gate transistors. Emphasis is placed on replicating the detailed dynamics of computational neural models. Massive computational area efficiency is obtained by using the reconfigurable interconnect as synaptic weights, resulting in more than 50,000 possible 9-bit-accurate synapses in 9 mm2.
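
As a point of reference for the integrate-and-fire neurons mentioned above, the sketch below simulates a basic leaky integrate-and-fire neuron; the time constant, threshold, and drive are arbitrary illustrative values, not parameters of the chip's transistor-channel models.

```python
# Illustrative leaky integrate-and-fire simulation; parameters are arbitrary
# and are not taken from the chip's transistor-channel neuron models.
dt = 1e-4            # simulation time step, s
tau = 20e-3          # membrane time constant, s
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
drive = 1.5          # constant input drive, expressed as a steady-state voltage

v = v_rest
spike_times = []
for step in range(int(0.2 / dt)):            # simulate 200 ms
    # Leaky integration toward the drive level: dV/dt = (-(V - V_rest) + drive) / tau
    v += dt / tau * (-(v - v_rest) + drive)
    if v >= v_thresh:                        # threshold crossing: spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms")
```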


IEEE Transactions on Very Large Scale Integration Systems | 2016

A Programmable and Configurable Mixed-Mode FPAA SoC

Suma George; Sihwan Kim; Sahil Shah; Jennifer Hasler; Michelle Collins; Farhan Adil; Richard B. Wunderlich; Stephen Nease; Shubha Ramakrishnan

This paper presents a floating-gate (FG)-based, field-programmable analog array (FPAA) system-on-chip (SoC) that integrates analog and digital programmable and configurable blocks with a 16-bit open-source MSP430 microprocessor (μP) and the resulting interface circuitry. We show the FPAA SoC architecture, experimental results from a range of circuits compiled into this architecture, and system measurements. A compiled analog acoustic command-word classifier on the FPAA SoC requires 23 μW to experimentally recognize the word “dark” in a TIMIT database phrase. This paper jointly optimizes for high parameter density (number of programmable elements per area, process normalized) and high accessibility of the computations due to its data-flow handling; the SoC FPAA is 600,000× higher density than other non-FG approaches.


International Symposium on Circuits and Systems | 2010

Hardware and software infrastructure for a family of floating-gate based FPAAs

Scott Koziol; Craig Schlottmann; Arindam Basu; Stephen Brink; Csaba Petre; Brian P. Degnan; Shubha Ramakrishnan; Paul E. Hasler; Aurele Balavoine

Analog circuits and systems research and education can benefit from the flexibility provided by large-scale Field Programmable Analog Arrays (FPAAs). This paper presents the hardware and software infrastructure supporting the use of a family of floating-gate based FPAAs being developed at Georgia Tech. This infrastructure is compact and portable and provides the user with a comprehensive set of tools for custom analog circuit design and implementation. The infrastructure includes the FPAA IC; discrete ADC, DAC, and amplifier ICs; a 32-bit ARM-based microcontroller for interfacing the FPAA with the user's computer; and Matlab and targeting software. The FPAA hardware communicates with Matlab over a USB connection, which also supplies the hardware's power. The software tools include three major systems: a Matlab Simulink FPAA program, a SPICE-to-FPAA compiler called GRASPER, and a visualization tool called RAT. The hardware consists of two custom PCB designs: a main board used to program and control an FPAA IC, and an FPAA IC adaptor board used to interface a QFP-packaged FPAA IC with the 100-pin ZIF socket on the main programming and control board.


Custom Integrated Circuits Conference | 2008

RASP 2.8: A new generation of floating-gate based field programmable analog array

Arindam Basu; Christopher M. Twigg; Stephen Brink; Paul E. Hasler; Csaba Petre; Shubha Ramakrishnan; Scott Koziol; Craig Schlottmann

The RASP 2.8 is a very powerful reconfigurable analog computing platform with thirty-two computational analog blocks (CABs). Each CAB has a wide variety of sub-circuits ranging in granularity from multipliers and programmable offset wide linear range Gm blocks to NMOS and PMOS transistors. The programmable interconnects and circuit elements in the CAB are implemented using floating gate transistors. This system exhibits significant performance enhancements over its predecessor in terms of achievable signal bandwidth (> 50 MHz), accuracy (> 9 bits), dynamic range (> 7 decades of current), speed of floating-gate programming (> 200 gates/sec) and isolation between ON and OFF switches. The improved bandwidth is primarily due to an improved routing fabric that includes nearest neighbor connections. Programming performance improved drastically by implementing the entire algorithm on-chip with an SPI digital interface. Several complex system examples are presented.


IEEE Transactions on Very Large Scale Integration Systems | 2014

Vector-Matrix Multiply and Winner-Take-All as an Analog Classifier

Shubha Ramakrishnan; Jennifer Hasler

The vector-matrix multiply and winner-take-all structure is presented as a general-purpose, low-power, compact, programmable classifier architecture that is capable of greater computation than a one-layer neural network, and equivalent to a two-layer perceptron. The classifier generates event outputs and is suitable for integration with event-driven systems. The main sources of mismatch, temperature dependence, and methods for compensation are discussed. We present measured data from simple linear and nonlinear classifier structures on a 0.35-μm chip and analyze the power and computing efficiency for scaled structures.
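
The computation this architecture performs can be summarized compactly in software: a vector-matrix multiply projects the input onto a set of programmed weight vectors, and the winner-take-all selects the largest response. The NumPy sketch below is a digital stand-in with random placeholder weights; the chip computes the same operation in the analog domain with floating-gate weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder classifier: 3 classes, 8-dimensional input.
# On the chip the weights are programmed into floating-gate elements;
# here they are just random numbers for illustration.
weights = rng.normal(size=(3, 8))   # one row of weights per class
x = rng.normal(size=8)              # input feature vector

scores = weights @ x                # vector-matrix multiply (VMM)
winner = int(np.argmax(scores))     # winner-take-all (WTA): largest score wins

print("per-class scores:", np.round(scores, 3))
print("WTA output (event for class):", winner)
```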


International Symposium on Circuits and Systems | 2010

Floating gate synapses with spike time dependent plasticity

Shubha Ramakrishnan; Paul E. Hasler; Christal Gordon

This paper demonstrates a single transistor synapse that stores a weight in a non-volatile manner, computes a biological EPSP, and also demonstrates biological learning rules such as LTP, LTD, and STDP. It also describes a highly scalable architecture of an array of synapses that can implement the described learning rules. Parameters for weight update in a 0.35-µm process were extracted and used to predict changes in weight based on the time difference between pre-synaptic and post-synaptic spike times.


IEEE Transactions on Very Large Scale Integration Systems | 2014

Speech Processing on a Reconfigurable Analog Platform

Shubha Ramakrishnan; Arindam Basu; Leung Kin Chiu; Jennifer Hasler; David V. Anderson; Stephen Brink

Real-time implementations of audio processing algorithms involving discrete-time signals tend to be power-intensive. We present an alternative analog implementation of a noise-suppression algorithm on our reconfigurable chip, which also enables future implementations of other applications such as voice-activity detection, hearing compensation, and classifier front-ends.
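
The abstract does not spell out the noise-suppression algorithm, so as a generic software illustration the sketch below implements simple magnitude spectral subtraction: estimate the noise spectrum from a noise-only segment, subtract it from each frame, and resynthesize by overlap-add. This is a common digital baseline, not the analog system implemented on the chip.

```python
import numpy as np

def spectral_subtraction(signal, noise_est, frame_len=256, hop=128):
    """Very simple magnitude spectral subtraction (illustrative only)."""
    window = np.hanning(frame_len)
    noise_mag = np.abs(np.fft.rfft(noise_est[:frame_len] * window))
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * window
        spec = np.fft.rfft(frame)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # subtract the noise floor
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame_len)
        out[start:start + frame_len] += clean * window    # overlap-add resynthesis
    return out

# Toy usage: a 440 Hz tone buried in noise, with a noise-only lead-in for estimation
fs = 8000
t = np.arange(fs) / fs
noise = 0.3 * np.random.default_rng(0).standard_normal(fs)
noisy = noise + np.where(t > 0.25, np.sin(2 * np.pi * 440 * t), 0.0)
cleaned = spectral_subtraction(noisy, noise_est=noisy[: fs // 4])
```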

Collaboration


Dive into Shubha Ramakrishnan's collaboration.

Top Co-Authors

Paul E. Hasler (Georgia Institute of Technology)
Stephen Brink (Georgia Institute of Technology)
Arindam Basu (Nanyang Technological University)
Jennifer Hasler (Georgia Institute of Technology)
Csaba Petre (Georgia Institute of Technology)
Craig Schlottmann (Georgia Institute of Technology)
Richard B. Wunderlich (Georgia Institute of Technology)
Brian P. Degnan (Georgia Institute of Technology)
Christal Gordon (North Carolina State University)