James A. Bednar
University of Edinburgh
Publication
Featured research published by James A. Bednar.
Archive | 2005
Risto Miikkulainen; James A. Bednar; Yoonsuck Choe; Joseph Sirosh
Contents: Biological Background; Computational Foundations; LISSOM: A Computational Map Model of V1; Development of Maps and Connections; Understanding Plasticity; Understanding Visual Performance: The Tilt Aftereffect; HLISSOM: A Hierarchical Model; Understanding Low-Level Development: Orientation Maps; Understanding High-Level Development: Face Detection; PGLISSOM: A Perceptual Grouping Model; Temporal Coding; Understanding Perceptual Grouping: Contour Integration; Computations in Visual Maps; Scaling LISSOM Simulations; Discussion: Biological Assumptions and Predictions; Future Work: Computational Directions; Conclusion.
Cerebral Cortex | 2009
Marina Papoutsi; Jacco A. de Zwart; J. Martijn Jansma; Martin J. Pickering; James A. Bednar; Barry Horwitz
We used event-related functional magnetic resonance imaging to investigate the neuroanatomical substrates of phonetic encoding and the generation of articulatory codes from phonological representations. Our focus was on the role of the left inferior frontal gyrus (LIFG) and in particular whether the LIFG plays a role in sublexical phonological processing such as syllabification or whether it is directly involved in phonetic encoding and the generation of articulatory codes. To answer this question, we contrasted the brain activation patterns elicited by pseudowords with high- or low-sublexical frequency components, which we expected would reveal areas related to the generation of articulatory codes but not areas related to phonological encoding. We found significant activation of a premotor network consisting of the dorsal precentral gyrus, the inferior frontal gyrus bilaterally, and the supplementary motor area for low- versus high-sublexical frequency pseudowords. Based on our hypothesis, we concluded that these areas and in particular the LIFG are involved in phonetic and not phonological encoding. We further discuss our findings with respect to the mechanisms of phonetic encoding and provide evidence in support of a functional segregation of the posterior part of Broca's area, the pars opercularis.
Neural Computation | 2000
James A. Bednar; Risto Miikkulainen
RF-LISSOM, a self-organizing model of laterally connected orientation maps in the primary visual cortex, was used to study the psychological phenomenon known as the tilt aftereffect. The same self-organizing processes that are responsible for the long-term development of the map are shown to result in tilt aftereffects over short timescales in the adult. The model permits simultaneous observation of large numbers of neurons and connections, making it possible to relate high-level phenomena to low-level events, which is difficult to do experimentally. The results give detailed computational support for the long-standing conjecture that the direct tilt aftereffect arises from adaptive lateral interactions between feature detectors. They also make a new prediction that the indirect effect results from the normalization of synaptic efficacies during this process. The model thus provides a unified computational explanation of self-organization and both the direct and indirect tilt aftereffect in the primary visual cortex.
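The logic of the direct aftereffect can be illustrated with a toy population model: if adaptation reduces the gain of the units that responded most strongly to the adapting orientation, a vector-average readout of a subsequent test stimulus is repelled away from that orientation. The sketch below is only an illustrative Python toy, not the RF-LISSOM implementation; the Gaussian tuning curves, gain-based adaptation, and decoder are all assumptions made for the example.

```python
import numpy as np

def population_response(stim_deg, pref_deg, sigma=15.0, gain=None):
    """Gaussian orientation tuning on a circular (180 deg) domain."""
    d = np.angle(np.exp(1j * np.deg2rad(2 * (stim_deg - pref_deg)))) / 2
    r = np.exp(-(np.rad2deg(d) ** 2) / (2 * sigma ** 2))
    return r if gain is None else gain * r

def decode(resp, pref_deg):
    """Vector-average decoder on the doubled-angle circle."""
    v = np.sum(resp * np.exp(1j * np.deg2rad(2 * pref_deg)))
    return np.rad2deg(np.angle(v)) / 2

pref = np.linspace(-90, 90, 180, endpoint=False)   # preferred orientations
gain = np.ones_like(pref)

# Adapt to a vertical (0 deg) grating: units tuned near 0 deg lose gain.
gain -= 0.4 * population_response(0.0, pref)

# A test grating 10 deg away now decodes as tilted further away (direct TAE).
test = 10.0
perceived = decode(population_response(test, pref, gain=gain), pref)
print(f"test {test:.1f} deg perceived as {perceived:.2f} deg")
```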
Neural Computation | 2003
James A. Bednar; Risto Miikkulainen
Newborn humans preferentially orient to facelike patterns at birth, but months of experience with faces are required for full face processing abilities to develop. Several models have been proposed for how the interaction of genetic and environmental influences can explain these data. These models generally assume that the brain areas responsible for newborn orienting responses are not capable of learning and are physically separate from those that later learn from real faces. However, it has been difficult to reconcile these models with recent discoveries of face learning in newborns and young infants. We propose a general mechanism by which genetically specified and environment-driven preferences can coexist in the same visual areas. In particular, newborn face orienting may be the result of prenatal exposure of a learning system to internally generated input patterns, such as those found in PGO waves during REM sleep. Simulating this process with the HLISSOM biological model of the visual system, we demonstrate that the combination of learning and internal patterns is an efficient way to specify and develop circuitry for face perception. This prenatal learning can account for the newborn preferences for schematic and photographic images of faces, providing a computational explanation for how genetic influences interact with experience to construct a complex adaptive system.
Frontiers in Neuroinformatics | 2009
James A. Bednar
Many neural regions are arranged into two-dimensional topographic maps, such as the retinotopic maps in mammalian visual cortex. Computational simulations have led to valuable insights about how cortical topography develops and functions, but further progress has been hindered by the lack of appropriate tools. It has been particularly difficult to bridge across levels of detail, because simulators are typically geared to a specific level, while interfacing between simulators has been a major technical challenge. In this paper, we show that the Python-based Topographica simulator makes it straightforward to build systems that cross levels of analysis, as well as providing a common framework for evaluating and comparing models implemented in other simulators. These results rely on the general-purpose abstractions around which Topographica is designed, along with the Python interfaces becoming available for many simulators. In particular, we present a detailed, general-purpose example of how to wrap an external spiking PyNN/NEST simulation as a Topographica component using only a dozen lines of Python code, making it possible to use any of the extensive input presentation, analysis, and plotting tools of Topographica. Additional examples show how to interface easily with models in other types of simulators. Researchers simulating topographic maps externally should consider using Topographica's analysis tools (such as preference map, receptive field, or tuning curve measurement) to compare results consistently, and for connecting models at different levels. This seamless interoperability will help neuroscientists and computational scientists to work together to understand how neurons in topographic maps organize and operate.
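The wrapping idea described here amounts to an adapter: the external simulation is hidden behind the same sheet-of-activity interface that the host framework's analysis tools already understand. The sketch below illustrates that pattern generically; the class and parameter names are hypothetical stand-ins and are not Topographica's or PyNN's actual API.

```python
# Minimal adapter sketch (hypothetical names throughout): hide an external
# simulator behind a "sheet" interface so generic analysis and plotting
# tools can read its activity unchanged.
import numpy as np

class ExternalModelSheet:
    """Wraps an external simulation as a 2D activity sheet."""

    def __init__(self, shape, run_external, decode_activity):
        self.shape = shape                    # (rows, cols) of the map
        self.activity = np.zeros(shape)       # what analysis tools read
        self._run = run_external              # e.g. drive a spiking network
        self._decode = decode_activity        # e.g. spike counts -> rates

    def present(self, input_pattern, duration_ms=100.0):
        """Present one input pattern and cache the resulting activity."""
        raw = self._run(input_pattern, duration_ms)
        self.activity = self._decode(raw).reshape(self.shape)
        return self.activity

# Usage: any callable pair works, so the same measurement code
# (preference maps, tuning curves, ...) can be pointed at this sheet.
sheet = ExternalModelSheet(
    shape=(48, 48),
    run_external=lambda pattern, t: np.random.poisson(5, size=48 * 48),
    decode_activity=lambda spikes: spikes / spikes.max(),
)
resp = sheet.present(np.zeros((48, 48)))
```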
Psychology of Learning and Motivation | 1997
Risto Miikkulainen; James A. Bednar; Yoonsuck Choe; Joseph Sirosh
Based on a Hebbian adaptation process, the afferent and lateral connections in the RF-LISSOM model organize simultaneously and cooperatively, and form structures such as those observed in the primary visual cortex. The neurons in the model develop local receptive fields that are organized into orientation, ocular dominance, and size selectivity columns. At the same time, patterned lateral connections form between neurons that follow the receptive field organization. This structure is in a continuously-adapting dynamic equilibrium with the external and intrinsic input, and can account for reorganization of the adult cortex following retinal and cortical lesions. The same learning processes may be responsible for a number of low-level functional phenomena such as tilt aftereffects, and combined with the leaky integrator model of the spiking neuron, for segmentation and binding. The model can also be used to verify quantitatively the hypothesis that the visual cortex forms a sparse, redundancy-reduced encoding of the input, which allows it to process massive amounts of visual information efficiently.
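The learning rule at the heart of this account is Hebbian growth followed by divisive normalization: co-active connections strengthen, and each neuron's weights are then rescaled so its total synaptic strength stays constant, which is the normalization step invoked to explain effects such as the indirect tilt aftereffect. A minimal sketch of that rule, with illustrative parameter values rather than the published ones, applied equally well to afferent or lateral weights:

```python
import numpy as np

def hebbian_update(weights, pre, post, alpha=0.01):
    """Normalized Hebbian rule: strengthen co-active connections, then
    rescale each postsynaptic neuron's weights to a constant total."""
    # pre: (n_pre,) input activities; post: (n_post,) unit activities
    # weights: (n_post, n_pre)
    weights = weights + alpha * np.outer(post, pre)   # Hebbian growth
    totals = weights.sum(axis=1, keepdims=True)       # divisive normalization
    return weights / np.where(totals == 0, 1.0, totals)

rng = np.random.default_rng(0)
W = rng.random((10, 25))
W /= W.sum(axis=1, keepdims=True)
x = rng.random(25)               # afferent input
y = W @ x                        # simple linear response for illustration
W = hebbian_update(W, x, y)
```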
Neurocomputing | 2003
James A. Bednar; Risto Miikkulainen
Studies of orientation maps in primary visual cortex (V1) suggest that lateral connections mediate competition and cooperation between orientation-selective units, but their role in motion perception has not been established. Using a self-organizing model of V1 with moving oriented patterns, we show that (1) afferent weights of each neuron organize into Gabor-like spatiotemporal receptive fields with ON and OFF lobes, (2) these receptive fields form realistic joint direction and orientation maps, and (3) lateral connections develop between patches with similar orientation and direction preferences. These results suggest that a single self-organizing system may underlie the development of orientation selectivity, direction selectivity, and lateral connectivity.
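For reference, a receptive field with oriented ON and OFF lobes of the kind described is conventionally modeled as a Gabor function, and letting its phase drift across frames makes it spatiotemporal and direction selective. The sketch below is the standard textbook construction, not the weight profile learned by the model; all parameter values are illustrative.

```python
import numpy as np

def spatiotemporal_gabor(size=21, frames=5, wavelength=8.0, sigma=4.0,
                         orientation=0.0, speed=1.0):
    """Drifting Gabor: an oriented ON/OFF spatial profile whose phase
    advances each frame, giving a preferred motion direction."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return np.stack([
        envelope * np.cos(2 * np.pi * (xr - speed * t) / wavelength)
        for t in range(frames)
    ])

rf = spatiotemporal_gabor(orientation=np.pi / 4)   # shape: (frames, size, size)
```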
Journal of Physiology-paris | 2012
James A. Bednar
Researchers have used a very wide range of different experimental and theoretical approaches to help understand mammalian visual systems. These approaches tend to have quite different assumptions, strengths, and weaknesses. Computational models of the visual cortex, in particular, have typically implemented either a proposed circuit for part of the visual cortex of the adult, assuming a very specific wiring pattern based on findings from adults, or else attempted to explain the long-term development of a visual cortex region from an initially undifferentiated starting point. Previous models of adult V1 have been able to account for many of the measured properties of V1 neurons, while not explaining how these properties arise or why neurons have those properties in particular. Previous developmental models have been able to reproduce the overall organization of specific feature maps in V1, such as orientation maps, but are generally formulated at an abstract level that does not allow testing with real images or analysis of detailed neural properties relevant for visual function. In this review of results from a large set of new, integrative models developed from shared principles and a set of shared software components, I show how these models now represent a single, consistent explanation for a wide body of experimental evidence, and form a compact hypothesis for much of the development and behavior of neurons in the visual cortex. The models are the first developmental models with wiring consistent with V1, the first to have realistic behavior with respect to visual contrast, and the first to include all of the demonstrated visual feature dimensions. The goal is to have a comprehensive explanation for why V1 is wired as it is in the adult, and how that circuitry leads to the observed behavior of the neurons during visual tasks.
Neurocomputing | 2006
James A. Bednar; Risto Miikkulainen
Primary visual cortex (V1) contains overlaid feature maps for orientation (OR), motion direction selectivity (DR), and ocular dominance (OD). Neurons in these maps are connected laterally in patchy, long-range patterns that follow the feature preferences. Using the LISSOM model, we show for the first time how realistic laterally connected joint OR/OD/DR maps can self-organize from Hebbian learning of moving natural images. The model predicts that lateral connections will link neurons of either eye preference and with similar DR and OR preferences. These results suggest that a single self-organizing system may underlie the development of spatiotemporal feature preferences and lateral connectivity.
Neuroinformatics | 2001
Amol Kelkar; James A. Bednar; Risto Miikkulainen
Self-organizing computational models with specific intracortical connections can explain many functional features of visual cortex, such as topographic orientation and ocular dominance maps. However, due to their computational requirements, it is difficult to use such detailed models to study large-scale phenomena such as object segmentation and binding, object recognition, tilt illusions, optic flow, and fovea-periphery differences. This article introduces two techniques that make large simulations practical. First, we show how parameter scaling equations can be derived for laterally connected self-organizing models. These equations result in quantitatively equivalent maps over a wide range of simulation sizes, making it possible to debug small simulations and then scale them up only when needed. Parameter scaling also allows detailed comparison of biological maps and parameters between individuals and species with different brain region sizes. Second, we use parameter scaling to implement a new growing map method called GLISSOM, which dramatically reduces the memory and computational requirements of large self-organizing networks. With GLISSOM, it should be possible to simulate all of human V1 at the single-column level using current desktop workstations. We are using these techniques to develop a new simulator, Topographica, which will help make it practical to perform detailed studies of large-scale phenomena in topographic maps.
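The intuition behind such scaling equations is bookkeeping: when the density of simulated neurons changes, quantities measured in units of neurons (connection radii) must grow with density, while per-connection quantities (learning rates) must shrink as each neuron gains connections, so that the underlying continuous map stays equivalent. The sketch below illustrates only that bookkeeping, under simplified assumed rules, and does not reproduce the exact equations derived in the paper.

```python
# Illustrative scaling of size-dependent parameters when changing the
# linear density of a laterally connected map. The specific rules here
# (radii scale linearly with density, learning rates inversely with the
# number of connections) are a simplification, not the paper's equations.

def scale_parameters(params, old_density, new_density):
    s = new_density / old_density          # linear density ratio
    scaled = dict(params)
    scaled["afferent_radius"] = params["afferent_radius"] * s
    scaled["lateral_radius"] = params["lateral_radius"] * s
    # The number of connections grows roughly with the area covered by a
    # radius, so per-connection learning rates shrink by about s**2.
    scaled["afferent_lr"] = params["afferent_lr"] / s**2
    scaled["lateral_lr"] = params["lateral_lr"] / s**2
    return scaled

small = {"afferent_radius": 6.5, "lateral_radius": 9.5,
         "afferent_lr": 0.007, "lateral_lr": 0.01}
large = scale_parameters(small, old_density=48, new_density=144)
```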