Edgar Walker
Baylor College of Medicine
Publications
Featured research published by Edgar Walker.
Attention, Perception, & Psychophysics | 2015
Demet Gurler; Nathan Doyle; Edgar Walker; John F. Magnotti; Michael S. Beauchamp
The McGurk effect is an illusion in which visual speech information dramatically alters the perception of auditory speech. However, there is a high degree of individual variability in how frequently the illusion is perceived: some individuals almost always perceive the McGurk effect, while others rarely do. Another axis of individual variability is the pattern of eye movements made while viewing a talking face: some individuals often fixate the mouth of the talker, while others rarely do. Since the talker's mouth carries the visual speech information necessary to induce the McGurk effect, we hypothesized that individuals who frequently perceive the McGurk effect should spend more time fixating the talker's mouth. We used infrared eye tracking to study eye movements as 40 participants viewed audiovisual speech. Frequent perceivers of the McGurk effect were more likely to fixate the mouth of the talker, and there was a significant correlation between McGurk frequency and mouth-looking time. The noisy encoding of disparity model of McGurk perception showed that individuals who frequently fixated the mouth had lower sensory noise and higher disparity thresholds than those who rarely fixated the mouth. Differences in eye movements when viewing the talker's face may be an important contributor to interindividual differences in multisensory speech perception.
bioRxiv | 2017
Santiago A. Cadena; G. H. Denfield; Edgar Walker; Leon A. Gatys; A. S. Tolias; Matthias Bethge; Alexander S. Ecker
Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have been successfully applied to neural data: On the one hand, transfer learning from networks trained on object recognition worked remarkably well for predicting neural responses in higher areas of the primate ventral stream, but has not yet been used to model spiking activity in early stages such as V1. On the other hand, data-driven models have been used to predict neural responses in the early visual system (retina and V1) of mice, but not primates. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. Even though V1 is an early-to-intermediate stage of the visual system, we found that the transfer learning approach performed similarly well to the data-driven approach, and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1, and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding strengthens the case for V1 models that are multiple nonlinearities away from the image domain, and it supports the idea of explaining early visual cortex based on high-level functional goals.
Author summary: Predicting the responses of sensory neurons to arbitrary natural stimuli is of major importance for understanding their function.
Arguably the most studied cortical area is primary visual cortex (V1), where many models have been developed to explain its function. However, the most successful models built on neurophysiologists' intuitions still fail to account for spiking responses to natural images. Here, we model spiking activity in primary visual cortex (V1) of monkeys using deep convolutional neural networks (CNNs), which have been successful in computer vision. We both trained CNNs directly to fit the data and used CNNs trained to solve a high-level task (object categorization). With these approaches, we are able to outperform previous models and improve the state of the art in predicting the responses of early visual neurons to natural images. Our results have two important implications. First, since V1 is the result of several nonlinear stages, it should be modeled as such. Second, functional models of entire visual pathways, of which V1 is an early stage, account not only for higher areas of such pathways but also provide useful representations for predicting V1 responses.
bioRxiv | 2016
Jacob Reimer; Dimitri Yatsenko; Alexander S. Ecker; Edgar Walker; Fabian H. Sinz; Philipp Berens; A. Hoenselaar; R. J. Cotton; Athanassios G. Siapas; A. S. Tolias
The rise of big data in modern research poses serious challenges for data management: Large and intricate datasets from diverse instrumentation must be precisely aligned, annotated, and processed in a variety of ways to extract new insights. While high levels of data integrity are expected, research teams have diverse backgrounds, are geographically dispersed, and rarely possess a primary interest in data science. Here we describe DataJoint, an open-source toolbox designed for manipulating and processing scientific data under the relational data model. Designed for scientists who need a flexible and expressive database language with few basic concepts and operations, DataJoint facilitates multiuser access, efficient queries, and distributed computing. With implementations in both MATLAB and Python, DataJoint is not limited to particular file formats, acquisition systems, or data modalities and can be quickly adapted to new experimental designs. DataJoint and related resources are available at http://datajoint.github.com.
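DataJoint organizes pipelines as relational tables with explicit dependencies, where downstream "computed" tables are populated automatically from upstream entries. As a rough illustration of that relational pattern (using Python's stdlib sqlite3 rather than DataJoint itself, which targets MySQL and adds its own query algebra; all table and column names here are hypothetical):

```python
import sqlite3

# Illustration of the relational pattern DataJoint automates, NOT DataJoint's API.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE session (
    animal_id    INTEGER,
    session      INTEGER,
    session_date TEXT,
    PRIMARY KEY (animal_id, session)
);
-- a "computed" table that depends on session
CREATE TABLE spike_rate (
    animal_id INTEGER,
    session   INTEGER,
    mean_rate REAL,
    PRIMARY KEY (animal_id, session),
    FOREIGN KEY (animal_id, session) REFERENCES session (animal_id, session)
);
""")

conn.executemany("INSERT INTO session VALUES (?, ?, ?)",
                 [(1, 1, "2016-01-10"), (1, 2, "2016-02-03")])

# "Populate": compute an entry for every upstream session that lacks one,
# analogous to the make()/populate() idiom in a DataJoint pipeline.
todo = conn.execute("""
    SELECT s.animal_id, s.session FROM session s
    LEFT JOIN spike_rate r
      ON s.animal_id = r.animal_id AND s.session = r.session
    WHERE r.mean_rate IS NULL
""").fetchall()
for animal_id, session in todo:
    conn.execute("INSERT INTO spike_rate VALUES (?, ?, ?)",
                 (animal_id, session, 4.2))  # placeholder computation

rows = conn.execute("SELECT COUNT(*) FROM spike_rate").fetchone()[0]
```

In DataJoint the dependency declaration, the bookkeeping of which entries still need computing, and the distribution of that work across users and machines are all handled by the library; this sketch only shows the relational structure underneath.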
Science | 2016
Xiaolong Jiang; Shan Shen; Fabian H. Sinz; Jacob Reimer; Cathryn R. Cadwell; Philipp Berens; Alexander S. Ecker; Saumil S. Patel; G. H. Denfield; Emmanouil Froudarakis; Shuang Li; Edgar Walker; A. S. Tolias
The critique of Barth et al. centers on three points: (i) the completeness of our study is overstated; (ii) the connectivity matrix we describe is biased by technical limitations of our brain-slicing and multipatching methods; and (iii) our cell classification scheme is arbitrary and we have simply renamed previously identified interneuron types. We address these criticisms in our Response.
Neural Information Processing Systems | 2018
Fabian H. Sinz; Alexander S. Ecker; Paul G. Fahey; Edgar Walker; Erick Cobos; Emmanouil Froudarakis; Dimitri Yatsenko; Zachary Pitkow; Jacob Reimer; A. S. Tolias
To better understand the representations in visual cortex, we need to generate better predictions of neural activity in awake animals presented with their ecological input: natural video. Despite recent advances in models for static images, models for predicting responses to natural video are scarce, and standard linear-nonlinear models perform poorly. We developed a new deep recurrent network architecture that predicts inferred spiking activity of thousands of mouse V1 neurons simultaneously recorded with two-photon microscopy, while accounting for confounding factors such as the animal's gaze position and brain state changes related to running state and pupil dilation. Powerful system identification models provide an opportunity to gain insight into cortical functions through in silico experiments that can subsequently be tested in the brain. However, in many cases this approach requires that the model is able to generalize to stimulus statistics that it was not trained on, such as band-limited noise and other parameterized stimuli. We investigated these domain transfer properties in our model and found that our model trained on natural images is able to correctly predict the orientation tuning of neurons in response to artificial noise stimuli. Finally, we show that we can fully generalize from movies to noise and maintain high predictive performance on both stimulus domains by fine-tuning only the final layer's weights on a network otherwise trained on natural movies. The converse, however, is not true.
bioRxiv | 2018
Edgar Walker; R. James Cotton; Wei Ji Ma; A. S. Tolias
For more than a century, Bayesian-inspired models have been used to explain human and animal behavior, suggesting that organisms represent the uncertainty associated with sensory variables. Nevertheless, the neural code of uncertainty remains elusive. A central hypothesis is that uncertainty is encoded in the population activity of cortical neurons in the form of likelihood functions. Here, we studied the neural code of uncertainty by simultaneously recording population activity in the visual cortex of primates during a visual categorization task in which trial-to-trial uncertainty about stimulus orientation was relevant for the animal's decision. We decoded the likelihood function from the trial-to-trial population activity and found that it predicted the monkey's decisions better than a decoded point estimate of the orientation alone. Critically, this remained true even when we conditioned on the stimulus, including its contrast, suggesting that random fluctuations in neural firing rates drive behaviorally meaningful variations in the likelihood function. Our results establish the role of population-encoded likelihood functions in mediating behavior and offer potential neural underpinnings for Bayesian models of perception.
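The idea of decoding a likelihood function over stimulus orientation from population activity can be illustrated with a textbook Poisson population code. This is a generic toy construction, not the decoder from the paper: the tuning curves, spike counts, and orientation grid below are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy population: n neurons with bell-shaped tuning over orientation (180° periodic)
n_neurons = 60
preferred = np.linspace(0, np.pi, n_neurons, endpoint=False)

def tuning(theta):
    # mean spike count of each neuron for orientation theta
    return 1.0 + 15.0 * np.exp(2.0 * (np.cos(2 * (theta - preferred)) - 1))

# Simulate one trial's spike counts at a true orientation
theta_true = np.pi / 3
counts = rng.poisson(tuning(theta_true))

# Decode the likelihood function over a grid of orientations, assuming
# independent Poisson spiking:  log L(theta) = sum_i [ r_i log f_i(theta) - f_i(theta) ]
grid = np.linspace(0, np.pi, 181)
log_like = np.array([np.sum(counts * np.log(tuning(t)) - tuning(t)) for t in grid])
like = np.exp(log_like - log_like.max())
like /= like.sum()  # normalized likelihood over the grid

# The argmax is a point estimate; the full curve also carries the trial's
# uncertainty (e.g. its width), which a point estimate discards.
theta_hat = grid[np.argmax(like)]
```

The contrast drawn in the abstract is exactly between using `theta_hat` alone and using the whole normalized `like` curve: on trials where the likelihood is broad, an observer relying on it should behave differently than on trials where it is narrow, even at the same point estimate.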
arXiv: Neurons and Cognition | 2018
Alexander S. Ecker; Fabian H. Sinz; Emmanouil Froudarakis; Paul G. Fahey; Santiago A. Cadena; Edgar Walker; Erick Cobos; Jacob Reimer; A. S. Tolias; Matthias Bethge
arXiv: Databases | 2018
Dimitri Yatsenko; Edgar Walker; A. S. Tolias
Bernstein Conference 2016 | 2016
Santiago A. Cadena; Alexander S. Ecker; G. H. Denfield; Edgar Walker; A. S. Tolias; Matthias Bethge
Archive | 2015
Dimitri Yatsenko; Florian Franzen; Edgar Walker; Fabian H. Sinz