Santani Teng
Massachusetts Institute of Technology
Publications
Featured research published by Santani Teng.
Experimental Brain Research | 2012
Santani Teng; Amrita Puri; David Whitney
Echolocating organisms represent their external environment using reflected auditory information from emitted vocalizations. This ability, long known in various non-human species, has also been documented in some blind humans as an aid to navigation, object detection, and coarse localization. Surprisingly, our understanding of the basic acuity attainable by practitioners, the most fundamental underpinning of echoic spatial perception, remains crude. Here we measured this acuity in expert blind echolocators and found that they could discriminate horizontal offsets of stimuli as small as ~1.2° auditory angle in the frontomedial plane, a resolution approaching the maximum measured precision of human spatial hearing and comparable to that found in bats performing similar tasks. Furthermore, we found a strong correlation between echolocation acuity and age of blindness onset. This first measure of functional spatial resolution in a population of expert echolocators demonstrates precision comparable to that found in the visual periphery of sighted individuals.
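For a concrete sense of scale, a ~1.2° auditory angle corresponds to a lateral offset that depends on listening distance; the short sketch below works out that offset at a few illustrative distances, which are assumptions rather than the study's actual stimulus geometry.

```python
# Worked example of what a ~1.2 degree auditory angle means physically: the
# lateral offset it subtends depends on listening distance. The distances
# below are illustrative assumptions, not the study's setup.

import math

def offset_for_angle(angle_deg: float, distance_m: float) -> float:
    """Lateral offset (meters) subtending angle_deg at distance_m from the listener."""
    return distance_m * math.tan(math.radians(angle_deg))

for d in (0.5, 1.0, 2.0):
    print(f"1.2 deg at {d:.1f} m ~ {offset_for_angle(1.2, d) * 100:.1f} cm offset")
```

At an assumed half-meter distance, for example, 1.2° corresponds to roughly a 1 cm lateral displacement.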
Cortex | 2008
Kiki Zanolie; Santani Teng; Sarah E. Donohue; Anna C. K. van Duijvenvoorde; Guido P. H. Band; Serge A.R.B. Rombouts; Eveline A. Crone
A crucial element of testing hypotheses about rules for behavior is the use of performance feedback. In this study, we used fMRI and EEG to test the role of medial prefrontal cortex (PFC) and dorsolateral (DL) PFC in hypothesis testing using a modified intradimensional/extradimensional rule shift task. Eighteen adults were asked to infer rules about color or shape on the basis of positive and negative feedback in sets of two trials. Half of the trials involved color-to-color or shape-to-shape switches (intradimensional; ID) and the other half involved color-to-shape or shape-to-color switches (extradimensional; ED). Participants performed the task in separate fMRI and EEG sessions. ED trials were associated with reduced accuracy relative to ID trials. In addition, accuracy was reduced and response latencies increased following negative relative to positive feedback. Negative feedback resulted in increased activation in medial PFC and DLPFC, but more so for ED than ID shifts. Reduced accuracy following negative feedback correlated with increased activation in DLPFC, and increased response latencies following negative feedback correlated with increased activation in medial PFC. Additionally, around 250 ms after negative performance feedback, participants showed a feedback-related negative scalp potential, but this potential did not differ between ID and ED shifts. These results indicate that both medial PFC and DLPFC signal the need for performance adjustment, and that both regions are sensitive to the increased demands of set shifting in hypothesis testing.
IEEE Transactions on Biomedical Engineering | 2015
Jascha Sohl-Dickstein; Santani Teng; Benjamin Gaub; Chris C. Rodgers; Crystal Li; Michael R. DeWeese; Nicol S. Harper
Objective: We present a device that combines principles of ultrasonic echolocation and spatial hearing to provide human users with environmental cues that are 1) not otherwise available to the human auditory system, and 2) richer in object and spatial information than the more heavily processed sonar cues of other assistive devices. The device consists of a wearable headset with an ultrasonic emitter and stereo microphones with affixed artificial pinnae. The goal of this study is to describe the device and evaluate the utility of the echoic information it provides. Methods: The echoes of ultrasonic pulses were recorded and time stretched to lower their frequencies into the human auditory range, then played back to the user. We tested performance among naive and experienced sighted volunteers using a set of localization experiments, in which the locations of echo-reflective surfaces were judged using these time-stretched echoes. Results: Naive subjects were able to make laterality and distance judgments, suggesting that the echoes provide innately useful information without prior training. Naive subjects were generally unable to make elevation judgments from recorded echoes. However, trained subjects demonstrated an ability to judge elevation as well. Conclusion: This suggests that the device can be used effectively to examine the environment and that the human auditory system can rapidly adapt to these artificial echolocation cues. Significance: Interpreting and interacting with the external world constitutes a major challenge for persons who are blind or visually impaired. This device has the potential to aid blind people in interacting with their environment.
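The core signal transformation described in this abstract, slowing ultrasonic echoes so their frequencies fall within the human audible range, can be illustrated with a minimal sketch. The sample rates, stretch factor, and file names below are illustrative assumptions rather than the device's published parameters.

```python
# Minimal sketch of the "time-stretch" playback idea: re-labeling a high-rate
# recording with a lower nominal sample rate lowers its frequencies and
# lengthens its duration by the same factor. All values here are assumptions.

from scipy.io import wavfile

STRETCH = 20  # hypothetical slow-down factor

def time_stretch_echo(infile: str, outfile: str, stretch: int = STRETCH) -> None:
    rate, samples = wavfile.read(infile)      # ultrasonic-rate echo recording
    playback_rate = rate // stretch           # e.g. 192 kHz -> 9.6 kHz
    # Writing the same samples at the lower rate shifts a 50 kHz echo
    # component down to 2.5 kHz and stretches a 5 ms echo to 100 ms.
    wavfile.write(outfile, playback_rate, samples)

if __name__ == "__main__":
    time_stretch_echo("echo_capture.wav", "echo_audible.wav")
```

Because frequency and duration scale by the same factor, the relative spectral and temporal structure of the echoes is preserved, which is what lets binaural and pinna-based spatial cues survive the transformation.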
Philosophical Transactions of the Royal Society B | 2017
Radoslaw Martin Cichy; Santani Teng
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’.
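As an illustration of the third pillar, representational similarity analysis compares measurement modalities through the geometry of their condition-by-condition dissimilarities rather than their raw signals. The sketch below uses made-up array shapes and random data to show the basic step: build a representational dissimilarity matrix (RDM) per modality, then correlate the two.

```python
# Minimal RSA sketch: compare MEG and fMRI representations of the same set of
# conditions by correlating their representational dissimilarity matrices.
# Array shapes and inputs are illustrative assumptions.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns: np.ndarray) -> np.ndarray:
    """Condition x feature patterns -> condensed dissimilarity vector
    (1 - Pearson correlation between every pair of condition patterns)."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(0)
n_conditions = 12
meg_patterns = rng.normal(size=(n_conditions, 306))   # e.g. MEG sensors at one time point
fmri_patterns = rng.normal(size=(n_conditions, 500))  # e.g. voxels in one region

# Fusion step: if the two RDMs correlate, the region's fMRI representation
# shares geometry with the MEG representation at that time point.
rho, p = spearmanr(rdm(meg_patterns), rdm(fmri_patterns))
print(f"MEG-fMRI RDM correlation: rho={rho:.2f}, p={p:.3f}")
```

Repeating this comparison across MEG time points and fMRI regions yields the kind of spatio-temporally resolved view the piece argues for.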
International Conference on Robotics and Automation | 2017
Hsueh-Cheng Wang; Robert K. Katzschmann; Santani Teng; Brandon Araki; Laura Giarré; Daniela Rus
This work introduces a wearable system to provide situational awareness for blind and visually impaired people. The system includes a camera, an embedded computer, and a haptic device that provides feedback when an obstacle is detected. The system uses techniques from computer vision and motion planning to (1) identify walkable space; (2) plan a safe, step-by-step motion trajectory through that space; and (3) recognize and locate certain types of objects, such as an empty chair. These descriptions are communicated to the person wearing the device through vibrations. We present results from user studies with low- and high-level tasks, including walking through a maze without collisions, locating a chair, and walking through a crowded environment while avoiding people.
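Purely as an illustration of the general idea of haptic obstacle feedback, and not the authors' implementation, a sketch like the following maps the bearing of the nearest detected obstacle to one of several vibration motors; the sensor geometry, motor layout, and distance threshold are all assumptions.

```python
# Illustrative sketch (not the paper's system): map the bearing of the nearest
# obstacle in a depth-image row to one of several vibration motors on a belt.
# Field of view, motor count, and warning distance are assumptions.

import numpy as np

N_MOTORS = 5              # hypothetical left-to-right motor array
FIELD_OF_VIEW_DEG = 60.0  # hypothetical horizontal camera field of view
WARN_DISTANCE_M = 1.5     # hypothetical distance at which to warn

def obstacle_to_motor(depth_row: np.ndarray):
    """Given one row of a depth image (meters, left to right), return the index
    of the motor to vibrate, or None if the path ahead is clear."""
    nearest_col = int(np.argmin(depth_row))
    if depth_row[nearest_col] > WARN_DISTANCE_M:
        return None  # nothing close enough to signal
    # Convert pixel column to a bearing within the field of view, then to a motor.
    bearing = (nearest_col / (depth_row.size - 1) - 0.5) * FIELD_OF_VIEW_DEG
    return int(round((bearing / FIELD_OF_VIEW_DEG + 0.5) * (N_MOTORS - 1)))

# Example: an obstacle 0.8 m away, slightly right of center, in a 320-pixel row.
row = np.full(320, 3.0)
row[200] = 0.8
print(obstacle_to_motor(row))  # -> a motor index right of center
```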
eNeuro | 2017
Santani Teng; Verena Sommer; Dimitrios Pantazis; Aude Oliva
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals.
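The stimuli described here are brief source sounds convolved with monaural room impulse responses (RIRs), so that each sound carries the reverberant signature of a particular space. A minimal sketch of that stimulus construction is shown below; the file names and normalization are illustrative assumptions, not the study's actual materials.

```python
# Sketch of building a reverberant stimulus by convolving a dry source sound
# with a room impulse response (RIR). File names and scaling are illustrative
# assumptions; mono input is assumed.

import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

def reverberate(dry_file: str, rir_file: str, out_file: str) -> None:
    fs_dry, dry = wavfile.read(dry_file)   # brief dry source sound
    fs_rir, rir = wavfile.read(rir_file)   # monaural room impulse response
    assert fs_dry == fs_rir, "source and RIR must share a sample rate"
    wet = fftconvolve(dry.astype(np.float64), rir.astype(np.float64))
    wet /= np.max(np.abs(wet)) + 1e-12     # normalize to avoid clipping
    wavfile.write(out_file, fs_dry, (wet * 32767).astype(np.int16))

if __name__ == "__main__":
    reverberate("source_sound.wav", "large_room_rir.wav", "source_in_large_room.wav")
```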
bioRxiv | 2016
Santani Teng; Verena Sommer; Dimitrios Pantazis; Aude Oliva
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Auditory cues are informative about the shape and extent of large-scale environments: humans can make judgments about surrounding spaces from reverberation cues. However, how the scale of auditory space is represented neurally is unknown. Here, by orthogonally varying the spatial extent and sound source content of auditory scenes during magnetoencephalography (MEG) recording, we report a neural signature of auditory space size perception, starting ~145 ms after stimulus onset. Importantly, this neuromagnetic response is readily dissociable in form and time into representations of the source and its reverberant enclosing space: while the source exhibits an early and transient response, the neural signature of space is sustained and independent of the original source that produced it. Further, the space size response is robust to variations in sound source, and vice versa. The MEG decoding signal was distributed primarily across bilateral temporal sensor locations and was significantly correlated with behavioral responses in a separate experiment. Together, our results provide the first neuromagnetic evidence for a robust auditory space size representation in the human brain, sensitive to reverberant decay, and reveal the temporal dynamics of how such a code emerges over time from the transformation of complex naturalistic auditory signals.
Journal of Vision | 2015
Santani Teng; Radoslaw Martin Cichy; Dimitrios Pantazis; Aude Oliva
Functional changes in visual cortex as a consequence of blindness are a major model for studying crossmodal neuroplasticity. Previous studies have shown that traditionally visual cortical regions activate in response to a wide range of nonvisual tasks (Merabet & Pascual-Leone, 2010; Kupers & Ptito, 2013). However, the underlying computations, while often inferred to be similar for the blind and the sighted, have almost never been examined in detail. Here we used magnetoencephalography (MEG) and advanced multivariate pattern analysis to compare visual letter recognition with Braille reading (Sadato et al., 1996; Reich et al., 2011). We presented blind and sighted volunteers with 10 single letters in random order while recording brain activity. Sighted subjects were presented with Roman visual letters, while blind subjects were presented with Braille tactile letters. We used linear support vector machines to decode letter identity from MEG data. We found that the classification time course of letter recognition in sighted subjects was generally faster, briefer, and more consistent than in blind subjects. We then used representational similarity analysis (Kriegeskorte et al., 2008) to compare how sighted and blind subjects represented letters both within and across groups. This analysis revealed high within-group correlations at ~200 ms for sighted and ~600 ms for blind subjects. Correlations between groups were an order of magnitude lower, though overall significantly positive. The results suggest that blind and sighted letter reading may be largely driven by distinct processes, but that brain regions recruited crossmodally may be performing some common underlying computations for analogous tasks. This work was supported by NIH R01-EY020484 to A.O. Meeting abstract presented at VSS 2015.
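The decoding step described in this abstract, classifying letter identity from MEG sensor patterns with linear support vector machines at each time point, can be sketched as follows. The array shapes, trial counts, and cross-validation scheme are illustrative assumptions rather than the study's exact pipeline.

```python
# Sketch of time-resolved MEG decoding: a linear SVM is trained and tested at
# each time point to classify letter identity from sensor patterns.
# Data shapes and cross-validation settings are illustrative assumptions;
# with random data the accuracy hovers around chance (0.1 for 10 letters).

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 306, 60
X = rng.normal(size=(n_trials, n_sensors, n_times))  # trials x sensors x time
y = np.tile(np.arange(10), n_trials // 10)            # 10 letter identities, balanced

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))

# Decoding accuracy time course: one cross-validated classifier per time point.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
peak_t = int(np.argmax(accuracy))
print(f"peak decoding accuracy {accuracy[peak_t]:.2f} at time index {peak_t}")
```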
i-Perception | 2011
Santani Teng; Amrita Puri; David Whitney
In active echolocation, reflections from self-generated acoustic pulses are used to represent the external environment. This ability has been described in some blind humans as an aid to navigation and obstacle perception [1-4]. Echoic object representation has been described in echolocating bats and dolphins [5,6], but most prior work in humans has focused on navigation or other basic spatial tasks [4,7,8]. Thus, the nature of echoic object information received by human practitioners remains poorly understood. In two match-to-sample experiments, we tested the ability of five experienced blind echolocators to haptically identify objects that they had previously sampled only echoically. In each trial, a target object was presented on a platform and subjects sampled it using echolocation clicks. The target object was then removed and re-presented along with a distractor object. Only tactile sampling was allowed in identifying the target. Subjects were able to identify targets at greater than chance levels among both common household objects (p < .001) and novel objects constructed from plastic blocks (p = .018). While overall accuracy was indicative of high task difficulty, our results suggest that objects sampled by echolocation are recognizable by shape, and that this representation is available across sensory modalities.
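For the chance-level comparisons reported here, each test pits the target against a single distractor, so chance is 50% and accuracy can be compared against chance with a binomial test. The trial counts in the sketch below are made up for illustration and are not the study's data.

```python
# Hedged sketch of a chance-level comparison for a two-alternative
# match-to-sample task (chance = 0.5). Counts are hypothetical.

from scipy.stats import binomtest

n_trials = 40    # hypothetical number of match-to-sample trials
n_correct = 29   # hypothetical number of correct haptic identifications

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
```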
Journal of Visual Impairment & Blindness | 2011
Santani Teng; David Whitney