
Publications


Featured research published by Noelle R. B. Stiles.


Scientific Reports | 2015

Rate perception adapts across the senses: evidence for a unified timing mechanism

Carmel A. Levitan; Yih-Hsin A. Ban; Noelle R. B. Stiles; Shinsuke Shimojo

The brain constructs a representation of temporal properties of events, such as duration and frequency, but the underlying neural mechanisms are under debate. One open question is whether these mechanisms are unisensory or multisensory. Duration perception studies provide some evidence for a dissociation between auditory and visual timing mechanisms; however, we found active crossmodal interaction between audition and vision for rate perception, even when vision and audition were never stimulated together. After exposure to 5 Hz adaptors, people perceived subsequent test stimuli centered around 4 Hz to be slower, and the reverse after exposure to 3 Hz adaptors. This aftereffect occurred even when the adaptor and test were in different modalities that were never presented together. When the discrepancy in rate between adaptor and test increased, the aftereffect was attenuated, indicating that the brain uses narrowly tuned channels to process rate information. Our results indicate that human timing mechanisms for rate perception are not entirely segregated between modalities, a finding with substantial implications for models of how the brain encodes temporal features. We propose a model of multisensory channels for rate perception and consider the broader implications of such a model for how the brain encodes timing.
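To make the tuned-channel account concrete, here is a minimal toy sketch in Python (an illustration only, not the authors' published model; the channel spacing, tuning width, and adaptation strength are assumed values). It shows how adapting a bank of rate-tuned channels shifts the perceived rate of a 4 Hz test away from a nearby adaptor, while a far-away adaptor produces almost no aftereffect.

# Illustrative toy "labeled-line" channel model of rate adaptation.
# Channel count, tuning width, and adaptation gain below are assumptions
# for demonstration, not parameters from the published model.
import numpy as np

channel_prefs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0])  # preferred rates (Hz)
tuning_width = 0.25   # tuning bandwidth in log2 units (assumed)
adapt_strength = 0.5  # fractional gain loss at the adaptor's best channel (assumed)

def responses(rate_hz, gains):
    """Gaussian tuning on a log-frequency axis, scaled by per-channel gain."""
    d = np.log2(rate_hz) - np.log2(channel_prefs)
    return gains * np.exp(-d**2 / (2 * tuning_width**2))

def perceived_rate(rate_hz, gains):
    """Read out perceived rate as the response-weighted mean of channel labels."""
    r = responses(rate_hz, gains)
    return np.exp2(np.sum(r * np.log2(channel_prefs)) / np.sum(r))

def adapt(adaptor_hz):
    """Adaptation reduces gain in channels tuned near the adaptor's rate."""
    d = np.log2(adaptor_hz) - np.log2(channel_prefs)
    return 1.0 - adapt_strength * np.exp(-d**2 / (2 * tuning_width**2))

baseline = np.ones_like(channel_prefs)
for adaptor in (3.0, 5.0, 12.0):
    before = perceived_rate(4.0, baseline)
    after = perceived_rate(4.0, adapt(adaptor))
    print(f"adaptor {adaptor:>4} Hz: 4 Hz test perceived as {after:.2f} Hz "
          f"(baseline {before:.2f} Hz)")

Running this repels the perceived rate away from 3 Hz or 5 Hz adaptors but leaves it nearly unchanged after a 12 Hz adaptor, mirroring the attenuation of the aftereffect with increasing adaptor-test discrepancy.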


Scientific Reports | 2015

Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli

Noelle R. B. Stiles; Shinsuke Shimojo

Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand the capabilities of blind users.
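For readers unfamiliar with how such devices encode images as sound, below is a simplified Python sketch of a vOICe-style mapping: the image is scanned column by column from left to right, each pixel's row sets the frequency of a tone, and its brightness sets that tone's loudness. The scan time, frequency range, and sample rate here are illustrative choices, not the actual device's parameters.

# Simplified sketch of a vOICe-style image-to-sound encoding, for illustration
# only: columns are scanned left to right, pixel row maps to tone frequency,
# and pixel brightness maps to tone amplitude.
import numpy as np

def encode_image_to_sound(image, scan_seconds=1.0, fs=22050,
                          f_low=500.0, f_high=5000.0):
    """image: 2D array with values in [0, 1], row 0 at the top of the picture."""
    n_rows, n_cols = image.shape
    # Higher rows of the image get higher frequencies (row 0 = top = highest pitch).
    freqs = np.geomspace(f_high, f_low, n_rows)
    col_samples = int(scan_seconds * fs / n_cols)
    t = np.arange(col_samples) / fs
    sound = []
    for c in range(n_cols):                      # left-to-right scan
        column = image[:, c]
        tones = column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        sound.append(tones.sum(axis=0))          # brightness-weighted chord
    sound = np.concatenate(sound)
    peak = np.max(np.abs(sound))
    return sound / peak if peak > 0 else sound   # normalize to avoid clipping

# Example: a bright diagonal line produces a pitch sweep over the one-second scan.
waveform = encode_image_to_sound(np.eye(32))

A texture, in this scheme, becomes a characteristic sound pattern rather than a sequence of abstract shapes, which is the kind of intrinsic image-sound correspondence the training argument above relies on.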


Frontiers in Psychology | 2015

Length and orientation constancy learning in 2-dimensions with auditory sensory substitution: the importance of self-initiated movement

Noelle R. B. Stiles; Yuqian Zheng; Shinsuke Shimojo

A subset of sensory substitution (SS) devices translate images into sounds in real time using a portable computer, camera, and headphones. Perceptual constancy is the key to understanding both functional and phenomenological aspects of perception with SS. In particular, constancies enable object externalization, which is critical to the performance of daily tasks such as obstacle avoidance and locating dropped objects. In order to improve daily task performance by the blind, and to determine whether constancies can be learned with SS, we trained blind (N = 4) and sighted (N = 10) individuals on length and orientation constancy tasks for 8 days at about 1 h per day with an auditory SS device. We found that both blind and sighted participants improved significantly at the constancy tasks and attained above-chance performance. Furthermore, dynamic interactions with stimuli were critical to constancy learning with the SS device. In particular, improved task learning significantly correlated with the number of spontaneous left-right head-tilting movements while learning length constancy. The improvement from previous head-tilting trials even transferred to a no-head-tilt condition. Therefore, not only can SS learning be improved by encouraging head movement while learning, but head movement may also play an important role in learning constancies in the sighted. In addition, the learning of constancies by the blind and sighted with SS provides evidence that SS may be able to restore vision-like functionality to the blind in daily tasks.


bioRxiv | 2018

Augmented Reality Powers a Cognitive Prosthesis for the Blind

Yang Liu; Noelle R. B. Stiles; Markus Meister

To restore vision for the blind, several prosthetic approaches have been explored that convey raw images to the brain. So far, these schemes all suffer from a lack of bandwidth and the extensive training required to interpret unusual stimuli. Here we present an alternative approach that restores vision at the cognitive level, bypassing the need to convey sensory data. A wearable computer captures video and other data, extracts the important scene knowledge, and conveys it through auditory augmented reality. This system supports many aspects of visual cognition: from obstacle avoidance to the formation and recall of spatial memories, to long-range navigation. Neither training nor modification of the physical environment is required: blind subjects can navigate an unfamiliar multi-story building on their first attempt. The combination of unprecedented computing power in wearable devices with augmented reality technology promises a new era of non-invasive prostheses that are limited only by software. Impact statement: A non-invasive prosthesis for blind people endows objects in the environment with voices, allowing a user to explore the scene, localize objects, and navigate through a building with minimal training.
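The abstract describes the system only at the level of objects "endowed with voices"; the following Python sketch is a hypothetical illustration of that core idea, not the authors' implementation: each detected object is announced from its location by panning its voice between the ears according to azimuth and attenuating it with distance. The SceneObject fields and gain rules are assumptions for the example.

# Hypothetical sketch of the core idea (not the authors' implementation):
# each detected object is announced from its location by spatializing its
# "voice" with a simple stereo pan from azimuth and loudness falloff with distance.
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    azimuth_deg: float   # 0 = straight ahead, negative = to the left
    distance_m: float

def spatial_gains(obj, ref_distance=1.0):
    """Constant-power pan from azimuth plus inverse-distance attenuation."""
    pan = max(-1.0, min(1.0, obj.azimuth_deg / 90.0))   # map +/-90 deg to +/-1
    angle = (pan + 1.0) * math.pi / 4.0                 # 0..pi/2
    attenuation = ref_distance / max(obj.distance_m, ref_distance)
    return attenuation * math.cos(angle), attenuation * math.sin(angle)  # left, right

# An object to the right and nearby sounds louder in the right ear; the user can
# turn until its voice is centered, then walk toward it as it grows louder.
for obj in [SceneObject("chair", 40.0, 2.0), SceneObject("door", -70.0, 6.0)]:
    left, right = spatial_gains(obj)
    print(f"{obj.name}: left gain {left:.2f}, right gain {right:.2f}")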


Multisensory Research | 2013

Cross-modal temporal frequency channels for rate classification

Carmel A. Levitan; Charlotte L. Yang; Yih-Hsin Alison Ban; Noelle R. B. Stiles; Shinsuke Shimojo

We previously reported our discovery that temporal rate adaptation transfers bidirectionally between vision and audition. Temporal frequency channels are linked across audition and vision (Yao et al., 2009), but duration channels for audition and vision are thought to be independent (Heron et al., 2012). We used our paradigm to characterize linkages between auditory and visual channels by measuring whether transfer of adaptation still occurs as the discrepancy between adaptation and test frequencies increases. Participants ran in three experimental sessions, each with a different adaptation frequency. They were trained, using feedback, to classify flickering visual stimuli (ranging in frequency from 3.25 to 4.75 Hz) as fast or slow (relative to 4 Hz). They then classified 140 pre-adaptation test trials with feedback, providing a baseline. Afterwards, 30 adaptation trials of auditory stimuli beeping at either 5, 8, or 12 Hz were presented, followed by 20 alternating blocks of 7 adaptation and 7 post-adaptation test trials (without feedback). We compared the point of subjective equality (PSE) of the pre- and post-adaptation trials to quantify the cross-modal transfer and found that the aftereffect occurred when the adaptation frequency was most similar to the test frequencies but was no longer present with larger discrepancies. These results rule out response bias as a plausible explanation for our original findings and suggest that the timing mechanisms underlying rate perception are consistent with tuned, supramodal channels.
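As an illustration of the analysis described here (not the authors' code, and with made-up example numbers), the PSE can be estimated by fitting a logistic psychometric function to the proportion of "fast" responses at each test frequency; the adaptation aftereffect is then the shift in the fitted PSE from the pre-adaptation to the post-adaptation blocks.

# Illustrative sketch: estimate the point of subjective equality (PSE) by
# fitting a logistic psychometric function to the proportion of "fast"
# responses at each test frequency. The data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(freq, pse, slope):
    """Probability of a 'fast' response as a function of test frequency (Hz)."""
    return 1.0 / (1.0 + np.exp(-(freq - pse) / slope))

# Hypothetical proportions of "fast" responses before adaptation and after a
# 5 Hz adaptor (test stimuli tend to seem slower, so "fast" responses drop).
freqs = np.array([3.25, 3.50, 3.75, 4.00, 4.25, 4.50, 4.75])
p_fast_pre = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.95])
p_fast_post = np.array([0.02, 0.08, 0.20, 0.40, 0.60, 0.80, 0.90])

popt_pre, _ = curve_fit(psychometric, freqs, p_fast_pre, p0=[4.0, 0.2])
popt_post, _ = curve_fit(psychometric, freqs, p_fast_post, p0=[4.0, 0.2])
print(f"PSE before adaptation: {popt_pre[0]:.2f} Hz")
print(f"PSE after adaptation:  {popt_post[0]:.2f} Hz")
print(f"aftereffect (PSE shift): {popt_post[0] - popt_pre[0]:+.2f} Hz")

An upward PSE shift means a given test frequency is now judged "fast" less often, i.e., it is perceived as slower after adaptation.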


Frontiers in Optics | 2007

Intraocular camera for retinal prostheses: Design constraints based on visual psychophysics

Noelle R. B. Stiles; Michelle C. Hauer; Pamela Lee; Patrick J. Nasiatka; Jaw-Chyng Lormen Lue; James D. Weiland; Mark S. Humayun; Armand R. Tanguay

Optical system design constraints for an intraocular camera are determined by visual psychophysics techniques, including pixellation limits adequate for navigation and object identification, optimal pre- and post-pixellation blurring, and the elimination of gridding artifacts.
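As a rough illustration of the kind of simulation used in such psychophysics studies (the grid size and blur widths below are arbitrary example values, not the constraints reported in the paper), a camera image can be pre-blurred, averaged down to a coarse electrode-like grid, and post-blurred to soften gridding artifacts:

# Illustrative low-resolution prosthesis-percept simulation: pre-pixellation
# blur, averaging into a coarse grid, then post-pixellation blur.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def simulate_prosthetic_view(image, grid=(16, 16),
                             pre_blur_sigma=2.0, post_blur_sigma=1.5):
    """image: 2D float array (grayscale). Returns a pixellated, blurred percept."""
    pre = gaussian_filter(image, pre_blur_sigma)          # pre-pixellation blur
    rows = np.array_split(pre, grid[0], axis=0)
    coarse = np.array([[block.mean() for block in np.array_split(r, grid[1], axis=1)]
                       for r in rows])                     # average into grid cells
    upsampled = zoom(coarse, (image.shape[0] / grid[0],
                              image.shape[1] / grid[1]), order=0)
    return gaussian_filter(upsampled, post_blur_sigma)     # post-pixellation blur

percept = simulate_prosthetic_view(np.random.rand(128, 128))

Varying the grid size in such a simulation is how pixellation limits for navigation and object identification can be probed behaviorally.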


PLOS ONE | 2018

What you saw is what you will hear: Two new illusions with audiovisual postdictive effects

Noelle R. B. Stiles; Monica Li; Carmel A. Levitan; Yukiyasu Kamitani; Shinsuke Shimojo

Neuroscience investigations are most often focused on the prediction of future perception or decisions based on prior brain states or stimulus presentations. However, the brain can also process information retroactively, such that later stimuli impact conscious percepts of the stimuli that have already occurred (called “postdiction”). Postdictive effects have thus far been mostly unimodal (such as apparent motion), and the models for postdiction have accordingly been limited to early sensory regions of one modality. We have discovered two related multimodal illusions in which audition instigates postdictive changes in visual perception. In the first illusion (called the “Illusory Audiovisual Rabbit”), the location of an illusory flash is influenced by an auditory beep-flash pair that follows the perceived illusory flash. In the second illusion (called the “Invisible Audiovisual Rabbit”), a beep-flash pair following a real flash suppresses the perception of the earlier flash. Thus, we showed experimentally that these two effects are influenced significantly by postdiction. The audiovisual rabbit illusions indicate that postdiction can bridge the senses, uncovering a relatively neglected yet critical type of neural processing underlying perceptual awareness. Furthermore, these two new illusions extend the Double Flash Illusion, in which a single real flash is doubled by two sounds. Whereas the double flash indicated that audition can create an illusory flash, these rabbit illusions expand audition’s influence on vision to the suppression of a real flash and the relocation of an illusory flash. These new additions to auditory-visual interactions indicate a spatio-temporally fine-tuned coupling of the senses to generate perception.


Journal of Vision | 2017

The Spatial Double Flash Illusion: Audition-Induced Spatial Displacement

Armand R. Tanguay; Bolton Bailey; Noelle R. B. Stiles; Carmel Levitan; Shinsuke Shimojo

Background: The spatial double flash illusion is generated by the brief presentation of a central visual stimulus (a small rectangular target; a “flash”) in conjunction with a short auditory stimulus (a “beep”) that is physically displaced to the left (or right) of the centrally located (though peripherally viewed) flash, followed by a second identical auditory stimulus that is physically displaced to the right (or left) of the single flash. The second beep generates an illusory flash that is perceived to be displaced in the direction of the auditory beep sequence. This illusion is a variant of the original double flash illusion, in which the audio was not displaced (Shams et al., 2000). Methods: A 17 ms flash of a white rectangle against a grey background was presented centrally, displaced by 11.5° vertically below a fixation cross, in conjunction with a 7 ms 800 Hz audio tone (beep). A second beep was generated 57 ms after the first beep. The two speakers used to present the beeps were displaced to the left and right of a centrally located monitor. Participants (N = 10) were asked to report the number of flashes perceived, whether the two flashes were collocated or displaced, and if displaced, in which direction. Results: Participants reported significantly more illusory flashes displaced in the direction of the auditory beep sequence than in the opposite direction (left to right, p = 0.011; right to left, p = 0.036). Discussion: The illusory flash that followed the real flash was predominantly perceived as displaced laterally in the same direction as the audio sequence, rather than in the opposite direction. As such, both the generation of the illusory flash and its location are modified by auditory input, an unusual example of crossmodal interaction in which audition dominates over vision.
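To make the stimulus sequence explicit, the sketch below builds one trial's audio using the timings reported above (two 7 ms, 800 Hz beeps separated by 57 ms, played from opposite speakers), with the 17 ms flash roughly coincident with the first beep; the sample rate, the omission of onset/offset ramps, and the exact flash-beep alignment are assumptions for illustration.

# Sketch of a single trial's audio timeline for the spatial double flash
# illusion, using the timings given in the abstract above.
import numpy as np

FS = 44100  # audio sample rate in Hz (assumed)

def beep(freq_hz=800.0, dur_ms=7.0, fs=FS):
    t = np.arange(int(fs * dur_ms / 1000)) / fs
    return np.sin(2 * np.pi * freq_hz * t)

def trial_audio(first_side="left", soa_ms=57.0, fs=FS):
    """Stereo buffer: beep 1 on one speaker, beep 2 on the other, 57 ms apart."""
    b = beep()
    total = int(fs * soa_ms / 1000) + len(b)
    audio = np.zeros((total, 2))                 # columns: left, right
    first, second = (0, 1) if first_side == "left" else (1, 0)
    audio[:len(b), first] = b                    # first beep at t = 0
    audio[-len(b):, second] = b                  # second beep 57 ms later
    return audio

flash_onset_ms, flash_dur_ms = 0.0, 17.0         # flash roughly coincides with beep 1
stereo = trial_audio("left")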


Frontiers in Optics | 2014

Intraocular Retinal Prostheses: Monocular Depth Perception in the Low Resolution Limit

Noelle R. B. Stiles; Ben P. McIntosh; Armand R. Tanguay; Mark S. Humayun

Depth perception via monocular cues was studied with a reach and grasp task in a retinal prosthesis simulator at low resolution. Results indicate that depth perception may be possible with retinal prostheses implanted only monocularly.


Seeing and Perceiving | 2012

Temporal rate adaptation transfers cross-modally at a subconscious level

Charlotte L. Yang; Noelle R. B. Stiles; Carmel A. Levitan; Shinsuke Shimojo

In an earlier study, we demonstrated that the temporal rate adaptation effect can be transferred from audition to vision and vice versa. However, it was unclear whether this effect was due to a top-down cognitive process, or rather to an earlier calibration process which is stimulus-driven and automatic. We therefore examined the effect of interocular masking of the adapting stimuli on the temporal rate adaptation and its cross-modal transfer from vision to audition (VA). Participants were trained, using feedback, to classify repetitive auditory stimuli presented at a range of frequencies (3.25–4.75 Hz) as fast or slow (as compared to the average frequency of 4 Hz). Afterwards, subjects were repeatedly exposed to visual stimuli at a specific rate (3 or 5 Hz). This adaptation stimulus was masked by continuous flash suppression (CFS). During CFS, a stimulus presented to one eye can be suppressed from awareness by a stream of constantly changing images in the other eye. To test whether adaptation resulted from this less visible exposure, participants then performed the same task as in the training, but without feedback. Test and adaptation tasks were presented in 20 alternating blocks. A comparison of the pre- and post-adaptation responses showed cross-modal changes in subjects’ perception of temporal rate. Adaptation to the masked 5 Hz (3 Hz) stimuli led to subsequent stimuli seeming slower (faster) than they had before adaptation. Since the adaptation stimuli were mostly masked by CFS, the results suggest that temporal rate adaptation and its cross-modal transfer occur mostly at a subconscious level.

Collaboration


Noelle R. B. Stiles's closest collaborators and their affiliations.

Top Co-Authors

Armand R. Tanguay (University of Southern California)
Shinsuke Shimojo (California Institute of Technology)
Mark S. Humayun (University of Southern California)
Michelle C. Hauer (University of Southern California)
Patrick J. Nasiatka (University of Southern California)
James D. Weiland (Johns Hopkins University School of Medicine)
Benjamin P. McIntosh (University of Southern California)
Monica Li (California Institute of Technology)