
Publications


Featured research published by Lorenzo Turicchia.


IEEE Transactions on Biomedical Engineering | 2005

An ultra-low-power programmable analog bionic ear processor

Rahul Sarpeshkar; Christopher D. Salthouse; Ji-Jon Sit; Michael W. Baker; Serhii M. Zhak; Timothy K. Lu; Lorenzo Turicchia; Stephanie Balster

We report a programmable analog bionic ear (cochlear implant) processor in a 1.5-μm BiCMOS technology with a power consumption of 211 μW and a 77-dB dynamic range of operation. The 9.58 mm × 9.23 mm processor chip runs on a 2.8-V supply and consumes 25 times less power than state-of-the-art analog-to-digital (A/D)-then-DSP designs. It is suitable for use in future fully implanted cochlear-implant systems, which require decades of operation on a 100-mAh rechargeable battery with a finite number of charge-discharge cycles. It may also be used as an ultra-low-power spectrum-analysis front end in portable speech-recognition systems. The processor's power consumption includes the 100 μW consumed by a JFET-buffered electret microphone and an associated on-chip microphone front end. An automatic gain control circuit compresses the 77-dB input dynamic range into a narrower internal dynamic range (IDR) of 57 dB, at which each of the 16 spectral channels of the processor operates. The output bits of the processor are scanned and reported off chip in a format suitable for continuous-interleaved-sampling stimulation of electrodes. Power-supply-immune biasing circuits ensure robust operation of the processor in the high-RF-noise environment typical of cochlear-implant systems.


International Solid-State Circuits Conference | 2005

An analog bionic ear processor with zero-crossing detection

Rahul Sarpeshkar; Michael W. Baker; Christopher D. Salthouse; Ji-Jon Sit; Lorenzo Turicchia; Serhii M. Zhak

A 75-dB, 251-μW analog speech processor is described that preserves the performance, robustness, and programmability needed by deaf patients at a reduced power consumption compared to that of implementations with A/D conversion and DSP. It also provides zero-crossing outputs for stimulation strategies that use phase information to improve performance.


IEEE Transactions on Speech and Audio Processing | 2005

A bio-inspired companding strategy for spectral enhancement

Lorenzo Turicchia; Rahul Sarpeshkar

This work presents a compressing-and-expanding, i.e., companding, strategy for spectral enhancement inspired by the operation of the auditory system. The companding strategy simulates the two-tone suppression phenomenon of the auditory system and implements a soft, local winner-take-all-like enhancement of the input spectrum. It performs multichannel syllabic compression without degrading spectral contrast. The companding strategy works in an analog fashion without explicit decision making, without the use of the fast Fourier transform, and without any cross-coupling between spectral channels. The strategy is useful in cochlear-implant processors, hearing aids, and speech recognition for enhancing spectral contrast, and it is well suited to low-power analog circuit implementations.
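The channel-by-channel compress-then-expand idea can be sketched in a few lines. The filter shapes, Q values, and exponents below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def compand_channel(x, fs, fc, n=0.3, m=1/0.3, broad_q=2.0, narrow_q=8.0):
    """One companding channel (illustrative): a broadly tuned filter drives
    envelope compression, then a narrowly tuned filter drives expansion."""
    def bandpass(sig, q):
        bw = fc / q
        lo, hi = max(fc - bw / 2, 1.0), min(fc + bw / 2, fs / 2 - 1)
        sos = butter(2, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfilt(sos, sig)

    broad = bandpass(x, broad_q)                 # broadly tuned prefilter
    env_b = np.abs(hilbert(broad)) + 1e-12
    compressed = broad * env_b ** (n - 1.0)      # envelope compression, x -> x^n
    narrow = bandpass(compressed, narrow_q)      # narrowly tuned postfilter
    env_n = np.abs(hilbert(narrow)) + 1e-12
    return narrow * env_n ** (m - 1.0)           # envelope expansion

def compand(x, fs, centers):
    """Sum of independent channels; no cross-coupling between channels."""
    return sum(compand_channel(x, fs, fc) for fc in centers)
```

With m = 1/n, a tone centered in a channel passes at roughly unit gain, while off-peak energy that survives the broad filter but not the narrow one is suppressed, giving the soft winner-take-all-like behavior described above.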


JARO: Journal of the Association for Research in Otolaryngology | 2003

Otoacoustic emissions from residual oscillations of the cochlear basilar membrane in a human ear model

Renato Nobili; Aleš Vetešník; Lorenzo Turicchia; Fabio Mammano

Sounds originating from within the inner ear, known as otoacoustic emissions (OAEs), are widely exploited in clinical practice but the mechanisms underlying their generation are not entirely clear. Here we present simulation results and theoretical considerations based on a hydrodynamic model of the human inner ear. Simulations show that, if the cochlear amplifier (CA) gain is a smooth function of position within the active cochlea, filtering performed by a middle ear with an irregular, i.e., nonsmooth, forward transfer function suffices to produce irregular and long-lasting residual oscillations of cochlear basilar membrane (BM) at selected frequencies. Feeding back to the middle ear through hydrodynamic coupling afforded by the cochlear fluid, these oscillations are detected as transient evoked OAEs in the ear canal. If, in addition, the CA gain profile is affected by irregularities, residual BM oscillations are even more irregular and tend to evolve towards self-sustaining oscillations at the loci of gain irregularities. Correspondingly, the spectrum of transient evoked OAEs exhibits sharp peaks. If both the CA gain and the middle-ear forward transfer function are smooth, residual BM oscillations have regular waveforms and extinguish rapidly. In this case no emissions are produced. Finally, and paradoxically albeit consistent with observations, simulating localized damage to the CA results in self-sustaining BM oscillations at the characteristic frequencies (CFs) of the sites adjacent to the damage region, accompanied by generation of spontaneous OAEs. Under these conditions, stimulus-frequency OAEs, with typical modulation patterns, are also observed for inputs near hearing threshold. This approach can be exploited to provide novel diagnostic tools and a better understanding of key phenomena relevant for hearing science.


Journal of the Acoustical Society of America | 2007

Evaluation of companding-based spectral enhancement using simulated cochlear-implant processing

Andrew J. Oxenham; Andrea M. Simonson; Lorenzo Turicchia; Rahul Sarpeshkar

This study tested a time-domain spectral enhancement algorithm that was recently proposed by Turicchia and Sarpeshkar [IEEE Trans. Speech Audio Proc. 13, 243-253 (2005)]. The algorithm uses a filter bank, with each filter channel comprising broadly tuned amplitude compression, followed by more narrowly tuned expansion (companding). Normal-hearing listeners were tested in their ability to recognize sentences processed through a noise-excited envelope vocoder that simulates aspects of cochlear-implant processing. The sentences were presented in a steady background noise at signal-to-noise ratios of 0, 3, and 6 dB and were either passed directly through an envelope vocoder, or were first processed by the companding algorithm. Using an eight-channel envelope vocoder, companding produced small but significant improvements in speech reception. Parametric variations of the companding algorithm showed that the improvement in intelligibility was robust to changes in filter tuning, whereas decreases in the time constants resulted in a decrease in intelligibility. Companding continued to provide a benefit when the number of vocoder frequency channels was increased to sixteen. When integrated within a sixteen-channel cochlear-implant simulator, companding also led to significant improvements in sentence recognition. Thus, companding may represent a readily implementable way to provide some speech recognition benefits to current cochlear-implant users.


IEEE Transactions on Biomedical Circuits and Systems | 2008

An Analog Integrated-Circuit Vocal Tract

Keng Hoong Wee; Lorenzo Turicchia; Rahul Sarpeshkar

We present the first experimental integrated-circuit vocal tract by mapping fluid volume velocity to current, fluid pressure to voltage, and linear and nonlinear mechanical impedances to linear and nonlinear electrical impedances. The 275-μW analog vocal tract chip includes a 16-stage cascade of two-port π-elements that forms a tunable transmission line, electronically variable impedances, and a current source as the glottal source. A nonlinear resistor models laminar and turbulent flow in the vocal tract. The measured SNR at the output of the analog vocal tract is 64, 66, and 63 dB for the first three formant resonances of a vocal tract with uniform cross-sectional area. The analog vocal tract can be used with auditory processors in a feedback speech-locked loop, analogous to a phase-locked loop, to implement speech recognition that is potentially robust in noise. Our use of a physiological model of the human vocal tract enables the analog vocal tract chip to synthesize speech signals of interest, using articulatory parameters that are intrinsically compact and linearly interpolatable.


PLOS ONE | 2012

Efficient Universal Computing Architectures for Decoding Neural Activity

Benjamin I. Rapoport; Lorenzo Turicchia; Woradorn Wattanapanitch; Thomas J. Davidson; Rahul Sarpeshkar

The ability to decode neural activity into meaningful control signals for prosthetic devices is critical to the development of clinically useful brain–machine interfaces (BMIs). Such systems require input from tens to hundreds of brain-implanted recording electrodes in order to deliver robust and accurate performance; in serving that primary function they should also minimize power dissipation in order to avoid damaging neural tissue; and they should transmit data wirelessly in order to minimize the risk of infection associated with chronic, transcutaneous implants. Electronic architectures for brain–machine interfaces must therefore minimize size and power consumption, while maximizing the ability to compress data to be transmitted over limited-bandwidth wireless channels. Here we present a system of extremely low computational complexity, designed for real-time decoding of neural signals, and suited for highly scalable implantable systems. Our programmable architecture is an explicit implementation of a universal computing machine emulating the dynamics of a network of integrate-and-fire neurons; it requires no arithmetic operations except for counting, and decodes neural signals using only computationally inexpensive logic operations. The simplicity of this architecture does not compromise its ability to achieve substantial compression of raw neural data. We describe a set of decoding algorithms based on this computational architecture, one designed to operate within an implanted system, minimizing its power consumption and data transmission bandwidth; and a complementary set of algorithms for learning, programming the decoder, and postprocessing the decoded output, designed to operate in an external, nonimplanted unit. The implementation of the implantable portion is estimated to require fewer than 5000 operations per second. A proof-of-concept, 32-channel field-programmable gate array (FPGA) implementation of this portion is consequently energy efficient. We validate the performance of our overall system by decoding electrophysiologic data from a behaving rodent.
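The counting-only decoding style described above can be caricatured as follows; the weights, threshold, and leak here are made-up integers, not values from the paper:

```python
class CountingIFDecoder:
    """Sketch of an integrate-and-fire decoder that uses only counters and
    comparisons, in the spirit of the architecture described above. Each
    input channel's spike count in a time bin increments a membrane counter
    with an integer weight (realizable by repeated increments, i.e., by
    counting); the counter leaks by decrement and fires on threshold."""

    def __init__(self, weights, threshold, leak=1):
        self.weights = weights      # illustrative integer weight per channel
        self.threshold = threshold  # illustrative firing threshold
        self.leak = leak            # decrement per time bin
        self.acc = 0                # membrane counter

    def step(self, spike_counts):
        for w, c in zip(self.weights, spike_counts):
            self.acc += w * c                       # integrate by counting
        self.acc = max(self.acc - self.leak, 0)     # leaky decay by decrement
        if self.acc >= self.threshold:              # fire and subtract threshold
            self.acc -= self.threshold
            return 1
        return 0
```

The decoder's output spike train (or its count over a window) would then stand in for the decoded control signal; learning the weights happens in the external unit.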


Wearable and Implantable Body Sensor Networks | 2009

A Battery-Free Tag for Wireless Monitoring of Heart Sounds

Soumyajit Mandal; Lorenzo Turicchia; Rahul Sarpeshkar

We have developed a wearable, battery-free tag that monitors heart sounds. The tag powers up by harvesting ambient RF energy and contains a low-power integrated circuit, an antenna, and up to four microphones. The chip, which consumes only 1.0 μW of power, generates digital events when the output of any of the microphones exceeds a programmable threshold voltage, combines such events using a programmable logic array, and transmits them to a base station using backscatter modulation. The chip can also be programmed to trade off microphone sensitivity against power consumption. In this paper, we demonstrate that the tag, when attached to the chest, can reliably measure heart rate at distances of up to 7 m from an FCC-compliant RF power source. We also suggest how delays between signals measured by microphones at the wrist and neck can be used to provide information about relative blood-pressure variations.
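The tag's per-bin event logic amounts to thresholding plus programmable combination. A toy sketch, where Python's `any`/`all` stand in for the on-chip programmable logic array and the threshold values are illustrative:

```python
def tag_events(mic_samples, thresholds, combine=any):
    """Illustrative event logic: each microphone sample is compared with its
    programmable threshold; the per-microphone event bits are merged by a
    programmable boolean function into the single bit that the tag would
    backscatter to the base station."""
    events = [abs(s) > t for s, t in zip(mic_samples, thresholds)]
    return int(combine(events))
```

Thresholding instead of digitizing full waveforms is what keeps the event stream, and hence the transmit power, so low.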


EURASIP Journal on Audio, Speech, and Music Processing | 2007

An FFT-based companding front end for noise-robust automatic speech recognition

Bhiksha Raj; Lorenzo Turicchia; Bent Schmidt-Nielsen; Rahul Sarpeshkar

We describe an FFT-based companding algorithm for preprocessing speech before recognition. The algorithm mimics tone-to-tone suppression and masking in the auditory system to improve automatic speech recognition performance in noise. It is also computationally efficient and well suited to digital implementations because of its use of the FFT. In an automotive digits recognition task with the CU-Move database recorded in real environmental noise, the algorithm improves the relative word error rate by 12.5% at -5 dB signal-to-noise ratio (SNR) and by 6.2% across all SNRs (-5 dB to +5 dB). On the Aurora-2 database, recorded with artificially added noise in several environments, the algorithm improves the relative word error rate in almost all situations.
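In the FFT-domain variant, the analog filter bank is replaced by spectral smoothing over FFT bins. A minimal sketch of one analysis frame; the smoothing widths and exponent are assumptions, not the paper's values:

```python
import numpy as np

def fft_compand_frame(frame, n=0.3, broad_k=9, narrow_k=3):
    """Illustrative FFT-domain companding of one windowed frame: a broadly
    smoothed magnitude spectrum drives compression and a narrowly smoothed
    one drives expansion, so sharp spectral peaks are relatively enhanced."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    mag = np.abs(spec) + 1e-12

    def smooth(v, k):
        return np.convolve(v, np.ones(k) / k, mode="same")

    broad = smooth(mag, broad_k)     # broad spectral neighborhood average
    narrow = smooth(mag, narrow_k)   # narrow spectral neighborhood average
    gain = broad ** (n - 1.0) * narrow ** (1.0 / n - 1.0)
    return spec * gain               # enhanced complex spectrum
```

At a narrow peak, the narrowly smoothed estimate exceeds the broadly smoothed one, so the expansion term wins and the peak is boosted relative to its neighborhood, mimicking tone-to-tone suppression.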


IEEE Transactions on Biomedical Circuits and Systems | 2011

An Articulatory Silicon Vocal Tract for Speech and Hearing Prostheses

Keng Hoong Wee; Lorenzo Turicchia; Rahul Sarpeshkar

We describe the concept of a bioinspired feedback loop that combines a cochlear processor with an integrated-circuit vocal tract to create what we call a speech-locked loop. We discuss how the speech-locked loop can be applied in hearing prostheses, such as cochlear implants, to help improve speech recognition in noise. We also investigate speech-coding strategies for brain-machine-interface-based speech prostheses and present an articulatory speech-synthesis system by using an integrated-circuit vocal tract that models the human vocal tract. Our articulatory silicon vocal tract makes the transmission of low bit-rate speech-coding parameters feasible over a bandwidth-constrained body sensor network. To the best of our knowledge, this is the first articulatory speech-prosthesis system reported to date. We also present a speech-prosthesis simulator as a means to generate realistic articulatory parameter sequences.

Collaboration


Dive into Lorenzo Turicchia's collaborations.

Top Co-Authors

Rahul Sarpeshkar | Mitsubishi Electric Research Laboratories
Keng Hoong Wee | Massachusetts Institute of Technology
Soumyajit Mandal | Case Western Reserve University
Jose L. Bohorquez | Massachusetts Institute of Technology
Maziar Tavakoli | Massachusetts Institute of Technology
William R. Sanchez | Massachusetts Institute of Technology
Bent Schmidt-Nielsen | Mitsubishi Electric Research Laboratories
Bhiksha Raj | Carnegie Mellon University
Christopher D. Salthouse | University of Massachusetts Amherst