
Publication


Featured research published by Jedediah M. Singer.


The Journal of Neuroscience | 2010

Temporal Cortex Neurons Encode Articulated Actions as Slow Sequences of Integrated Poses

Jedediah M. Singer; David L. Sheinberg

Form and motion processing pathways of the primate visual system are known to be interconnected, but there has been surprisingly little investigation of how they interact at the cellular level. Here we explore this issue with a series of three electrophysiology experiments designed to reveal the sources of action selectivity in monkey temporal cortex neurons. Monkeys discriminated between actions performed by complex, richly textured, rendered bipedal figures and hands. The firing patterns of neurons contained enough information to discriminate the identity of the character, the action performed, and the particular conjunction of action and character. This suggests convergence of motion and form information within single cells. Form and motion information in isolation were both sufficient to drive action discrimination within these neurons, but removing form information caused a greater disruption to the original response. Finally, we investigated the temporal window across which visual information is integrated into a single pose (or, equivalently, the timing with which poses are differentiated). Temporal cortex neurons under normal conditions represent actions as sequences of poses integrated over ∼120 ms. They receive both motion and form information, however, and can use either if the other is absent.


Epilepsy & Behavior | 2011

Quickest detection of drug-resistant seizures: an optimal control approach.

Sabato Santaniello; Samuel P. Burns; Alexandra J. Golby; Jedediah M. Singer; William S. Anderson; Sridevi V. Sarma

Epilepsy affects 50 million people worldwide, and seizures remain drug resistant in 30% of cases. This has increased interest in responsive neurostimulation, which is most effective when administered during seizure onset. We propose a novel framework for seizure onset detection that involves (i) constructing statistics from multichannel intracranial EEG (iEEG) to distinguish nonictal versus ictal states; (ii) modeling the dynamics of these statistics in each state and the state transitions; and (iii) developing an optimal control-based “quickest detection” (QD) strategy to estimate the transition times from nonictal to ictal states from sequential iEEG measurements. The QD strategy minimizes a cost function of detection delay and false positive probability. The solution is a threshold that decreases non-monotonically over time and avoids responding to rare events that normally trigger false positives. We applied QD to four drug-resistant epileptic patients (168-hour continuous recordings, 26–44 electrodes, 33 seizures) and achieved 100% sensitivity with a low false positive rate (0.16 false positives/hour). This article is part of a Supplemental Special Issue entitled The Future of Automated Seizure Detection and Prediction.
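The abstract specifies the QD strategy only at a high level. Purely as a hedged sketch of the quickest-detection idea, and not the authors' optimal-control solution, a CUSUM-style accumulator compared against a time-varying threshold might look like the following Python sketch; the statistic, the parameter values, and the simple monotone threshold decay (the paper's optimal threshold decreases non-monotonically) are all assumptions.

    def quickest_detection(log_lik_ratios, threshold0=8.0, decay=0.01, floor=2.0):
        # log_lik_ratios: per-sample log-likelihood ratios of the ictal vs.
        # nonictal models for a statistic derived from multichannel iEEG.
        s = 0.0
        for t, llr in enumerate(log_lik_ratios):
            s = max(0.0, s + llr)  # accumulate evidence for a state transition
            threshold = max(floor, threshold0 - decay * t)  # time-varying bound
            if s >= threshold:
                return t  # declared seizure onset time
        return None  # no detection in this record

Starting the threshold high and letting it relax over time trades a slightly longer detection delay for robustness against the rare nonictal events that typically trigger false positives.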


Journal of Vision | 2008

A method for the real-time rendering of formless dot field structure-from-motion stimuli.

Jedediah M. Singer; David L. Sheinberg

The perception of visual motion relies on different computations and different neural substrates than the perception of static form. It is therefore useful to have psychophysical stimuli that carry mostly or entirely motion information, conveying little or nothing about form in any single frame. Structure-from-motion stimuli can sometimes achieve this dissociation, with some examples in studies of biological motion using point-light walkers. It is, however, generally not trivial to provide motion information without also providing static form information. The problem becomes more computationally difficult when the structures and the motions in question are complex. Here we present a technique by which an animated three-dimensional scene can be rendered in real time as a pattern of dots. Each dot follows the trajectory of the underlying object in the animation, but each static frame of the animation appears to be a uniform random field of dots. The resulting stimuli capture motion vectors across arbitrarily complex scenes, while providing virtually no instantaneous information about the structure of those scenes. We also present the results of a psychophysical experiment demonstrating the efficacy and the limitations of the technique. The ability to create such stimuli on the fly allows for interactive adjustment and control of the stimuli, real-time parametric variations of structure and motion, and the creation of large libraries of actions without the need to pre-render a prohibitive number of movies. This technique provides a powerful tool for the dissociation of complex motion from static form.
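The abstract describes the rendering technique only in outline. As a hedged illustration (not the authors' implementation), the core idea of dots that briefly ride the scene's motion and then respawn uniformly can be sketched as follows; the function names, the 2D flow-field abstraction, and the lifetime scheme are all assumptions.

    import numpy as np

    def step_dot_field(dots, lifetimes, scene_flow, width, height, max_life=10):
        # dots: (N, 2) float screen positions; lifetimes: (N,) int frame counts.
        # scene_flow(dots) is assumed to return the projected 2D motion of the
        # animated 3D scene at each dot's position.
        dots = dots + scene_flow(dots)  # each dot follows the underlying object
        lifetimes = lifetimes - 1
        off_screen = ((dots[:, 0] < 0) | (dots[:, 0] >= width) |
                      (dots[:, 1] < 0) | (dots[:, 1] >= height))
        expired = (lifetimes <= 0) | off_screen
        n = int(expired.sum())
        # Respawn expired dots at uniform random positions, so any single
        # frame remains a uniform random dot field even as frame-to-frame
        # displacements carry the scene's motion vectors.
        dots[expired] = np.random.uniform([0, 0], [width, height], size=(n, 2))
        lifetimes[expired] = np.random.randint(1, max_life + 1, size=n)
        return dots, lifetimes

Short, staggered lifetimes keep dot density uniform and prevent any static frame from outlining the scene's surfaces.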


Journal of Neurophysiology | 2012

Temporal stability of visually selective responses in intracranial field potentials recorded from human occipital and temporal lobes

A. K. Bansal; Jedediah M. Singer; William S. Anderson; Alexandra J. Golby; Joseph R. Madsen; Gabriel Kreiman

The cerebral cortex needs to maintain information for long time periods while at the same time being capable of learning and adapting to changes. The degree of stability of physiological signals in the human brain in response to external stimuli over temporal scales spanning hours to days remains unclear. Here, we quantitatively assessed the stability across sessions of visually selective intracranial field potentials (IFPs) elicited by brief flashes of visual stimuli presented to 27 subjects. The interval between sessions ranged from hours to multiple days. We considered electrodes that showed robust visual selectivity to different shapes; these electrodes were typically located in the inferior occipital gyrus, the inferior temporal cortex, and the fusiform gyrus. We found that IFP responses showed a strong degree of stability across sessions. This stability was evident in averaged responses as well as single-trial decoding analyses, at the image exemplar level as well as at the category level, across different parts of visual cortex, and for three different visual recognition tasks. These results establish a quantitative evaluation of the degree of stationarity of visually selective IFP responses within and across sessions and provide a baseline for studies of cortical plasticity and for the development of brain-machine interfaces.
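The single-trial decoding analyses are not spelled out in the abstract; a minimal sketch of one cross-session analysis of this kind, assuming scikit-learn and hypothetical variable names, could read:

    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    def cross_session_accuracy(X_sess1, y_sess1, X_sess2, y_sess2):
        # X_*: (trials, features) arrays of IFP features (e.g. time-binned
        # response amplitudes per electrode); y_*: shape or category labels.
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_sess1, y_sess1)           # train on the first session
        return clf.score(X_sess2, y_sess2)  # test hours or days later

Accuracy near the within-session level on the held-out session is what "a strong degree of stability" would look like in this framing.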


Vision Research | 2006

XOR style tasks for testing visual object processing in monkeys

Britt Anderson; Jessie J. Peissig; Jedediah M. Singer; David L. Sheinberg

Using visually complex stimuli, three monkeys learned visual exclusive-or (XOR) tasks that required detecting two-way visual feature conjunctions. Monkeys with passive exposure to the test images, or with prior experience, were quicker to acquire an XOR-style task. Training on each pairwise comparison of the stimuli to be used in an XOR task provided nearly complete transfer when the stimuli became intermingled in the full XOR task. Task mastery took longer, accuracy was lower, and response times were slower for conjunction stimuli. Rotating features of the XOR stimuli did not adversely affect recognition speed or accuracy.


Journal of Vision | 2014

Short temporal asynchrony disrupts visual object recognition.

Jedediah M. Singer; Gabriel Kreiman

Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition.
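The abstract names the model only as a dynamic reduction of uncertainty. As a loose toy illustration, with every functional form and parameter below assumed rather than taken from the paper, performance might be modeled as evidence from delayed fragments being discounted with asynchrony:

    import numpy as np

    def p_correct(asynchrony_ms, tau=60.0, chance=0.25, p_max=0.95):
        # Fraction of a delayed fragment's evidence that is integrated,
        # discounted exponentially with asynchrony (tau is assumed, as is
        # the chance level for an illustrative 4-way categorization task).
        integrated = 0.5 + 0.5 * np.exp(-asynchrony_ms / tau)
        # Performance falls from p_max toward, but not all the way to,
        # chance, matching the report that integration persists at 100 ms.
        return chance + (p_max - chance) * integrated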


NeuroImage | 2017

What is changing when: Decoding visual information in movies from human intracranial recordings

Leyla Isik; Jedediah M. Singer; Joseph R. Madsen; Nancy Kanwisher; Gabriel Kreiman

The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. In contrast, natural vision involves unique renderings of visual inputs that are continuously changing without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy to natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using exclusively field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even in a single movie presentation, generalizing across the wide range of transformations present in a movie. These results present a methodological framework for studying cognition during dynamic and natural vision.

Highlights:

This study presents a methodology to examine intracranial field potential responses to continuous movie stimuli.

Intracranial field potentials from human ventral visual cortex show strong, selective and consistent responses to changes during a movie.

We can decode when visual changes happen and what content changes in the visual input in single events directly from physiological signals.
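The abstract frames the problem as detecting when a visual change happens and decoding what changed. As a hedged sketch of just the detection half (the power-based statistic and thresholds are assumptions, not the paper's method), one might flag candidate change events from a ventral-stream IFP trace like this:

    import numpy as np

    def detect_change_events(ifp, window=25, z_thresh=4.0):
        # ifp: 1-D intracranial field potential trace from a ventral visual
        # electrode. Smooth the instantaneous power, z-score it, and flag
        # samples where it jumps far above its overall baseline.
        power = np.convolve(ifp ** 2, np.ones(window) / window, mode="same")
        z = (power - power.mean()) / power.std()
        return np.flatnonzero(z > z_thresh)  # putative visual-change times

Decoding the content of each flagged event could then reuse the classifier pattern sketched above for the cross-session stability analysis.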


Conference on Information Sciences and Systems | 2016

A machine learning approach to predict episodic memory formation

Hanlin Tang; Jedediah M. Singer; Matias Ison; Gnel Pivazyan; Melissa Romaine; Elizabeth Meller; Victoria Perron; Marlise Arellano; Gabriel Kreiman; Adriana Boulin; Rosa Frias; James Carroll; Sarah Dowcett

Episodic memories constitute the essence of our recollections and are formed by autobiographical experiences and contextual knowledge. Memories are rich and detailed, yet at the same time they can be malleable and inaccurate. The contents that end up being remembered are the result of filtering incoming sensory inputs in the context of previous knowledge. Here we asked whether the quintessentially subjective process of memory construction could be predicted by a supervised machine learning approach based exclusively on content information. We considered audiovisual segments from a movie as a proxy for real-life memory formation and built a quantitative model to explain psychophysics data evaluating recognition memory. The inputs to the model included audiovisual information (e.g. presence of specific characters, objects, voices and sounds), scene information (e.g. location, presence or absence of action) and emotional valence information. The machine-learning model could predict memory formation in single trials both for group averages and individual subjects with an accuracy of up to 80% using solely stimulus content properties. These results provide a quantitative and predictive model that links sensory perception and emotional attributes to memory formation. Furthermore, the results demonstrate that a computational model can make sophisticated inferences about a cognitive process that involves selective filtering and subjective interpretation.
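The abstract does not name the specific supervised model. Assuming a generic scikit-learn classifier and placeholder content features (all hypothetical), the prediction setup could be sketched as:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Placeholder content features per audiovisual segment: presence of
    # specific characters, objects, voices and sounds, scene location,
    # action, and emotional valence (encoded numerically).
    X = np.random.rand(500, 20)         # stand-in feature matrix
    y = np.random.randint(0, 2, 500)    # 1 = segment later remembered

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())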


Neuron | 2009

Toward unmasking the dynamics of visual perception.

Jedediah M. Singer; Gabriel Kreiman

Fisch et al. report in this issue of Neuron an investigation of the neural correlates of conscious perception. They find an early, dramatic, and long-lasting gamma response in high-level visual areas when (and only when) a rapidly presented image is perceived.


Journal of Neurophysiology | 2015

Sensitivity to timing and order in human visual cortex

Jedediah M. Singer; Joseph R. Madsen; William S. Anderson; Gabriel Kreiman

Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds.

Collaboration


Dive into Jedediah M. Singer's collaborations.

Top Co-Authors:

Joseph R. Madsen, Boston Children's Hospital

William S. Anderson, Johns Hopkins University School of Medicine

Alexandra J. Golby, Brigham and Women's Hospital

Jessie J. Peissig, California State University

Cheston Tan, McGovern Institute for Brain Research

Leyla Isik, Massachusetts Institute of Technology