Publication


Featured research published by Gennady Erlikhman.


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2011

Semantic cuing and the scale insensitivity of recency and contiguity.

Sean M. Polyn; Gennady Erlikhman; Michael J. Kahana

In recalling a set of previously experienced events, people exhibit striking effects of recency, contiguity, and similarity: Recent items tend to be recalled best and first, and items that were studied in neighboring positions or that are similar to one another in some other way tend to evoke one another during recall. Effects of recency and contiguity have most often been investigated in tasks that require people to recall random word lists. Similarity effects have most often been studied in tasks that require people to recall categorized word lists. Here we examine recency and contiguity effects in lists composed of items drawn from 3 distinct taxonomic categories and in which items from a given category are temporally separated from one another by items from other categories, all of which are tested for recall. We find evidence for long-term recency and for long-range contiguity, bolstering support for temporally sensitive models of memory and highlighting the importance of understanding the interaction between temporal and semantic information during memory search.


PLOS ONE | 2014

Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

Philip J. Kellman; Jennifer L. Mnookin; Gennady Erlikhman; Patrick Garrigan; Tandra Ghose; Everett Mettler; David Charlton; Itiel E. Dror

Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
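The regression-plus-cross-validation approach described in this abstract can be illustrated with a minimal sketch. Everything below is hypothetical and synthetic: the predictor names, coefficients, and data stand in for the paper's actual image measures and difficulty ratings, and leave-one-out cross-validation stands in for whatever cross-validation scheme the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical image-metric predictors per fingerprint pair (stand-ins
# for measures like mean intensity, contrast, and visible print area).
n_pairs = 40
X = rng.normal(size=(n_pairs, 3))
true_w = np.array([0.9, -0.6, 0.4])           # made-up "true" effects
y = X @ true_w + rng.normal(0, 0.3, n_pairs)  # simulated difficulty ratings

def fit(Xa, ya):
    # Ordinary least squares with an intercept column.
    A = np.column_stack([np.ones(len(Xa)), Xa])
    w, *_ = np.linalg.lstsq(A, ya, rcond=None)
    return w

# Leave-one-out cross-validation: refit the model with each pair held
# out and predict the held-out difficulty rating.
preds = []
for i in range(n_pairs):
    mask = np.arange(n_pairs) != i
    w = fit(X[mask], y[mask])
    preds.append(w[0] + X[i] @ w[1:])

# Out-of-sample predictive correlation, analogous to a goodness-of-fit
# check on held-out data.
r = np.corrcoef(y, preds)[0, 1]
```

Because the synthetic ratings are mostly signal, the cross-validated correlation `r` comes out high here; with real examiner data the fit would of course be weaker.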


Vision Research | 2016

Modeling spatiotemporal boundary formation.

Gennady Erlikhman; Philip J. Kellman

Spatiotemporal boundary formation (SBF) refers to perception of continuous contours, shape, and global motion from sequential transformations of widely separated surface elements. How such minimal information in SBF can produce whole forms and the nature of the computational processes involved remain mysterious. Formally, it has been shown that orientations and motion directions of local edge fragments can be recovered from small sets of element changes (Shipley & Kellman, 1997, Vision Research, 37, 1281-1293). Little experimental work has examined SBF in simple situations, however, and no model has been able to predict human SBF performance. We measured orientation discrimination thresholds in simple SBF displays for thin, oriented bars as a function of element density, number of element transformations, and frame duration. Thresholds decreased with increasing density and number of transformations, and increased with frame duration. An ideal observer model implemented to give trial-by-trial responses in the same orientation discrimination task exceeded human performance. In a second group of experiments, we measured human precision in detecting inputs to the model (spatial, temporal, and angular inter-element separation). A model that modified the ideal observer by added encoding imprecision for these parameters, directly obtained from Exp. 2, and that included two integration constraints obtained from previous research, closely fit human SBF data with no additional free parameters. These results provide the first empirical support for an early stage in shape formation in SBF based on the recovery of local edge fragments from spatiotemporally sparse element transformation events.
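The formal result cited above, that the orientation and motion direction of a local edge fragment can be recovered from a small set of element changes, can be sketched as a linear problem. This is an illustrative reconstruction, not the authors' implementation: if a straight edge with unit normal n moves so that n·p = d0 + v·t, then each element-change event (x, y, t) yields one homogeneous equation in the unknowns (a, b, v, d0), and three generic events determine them up to scale and sign.

```python
import numpy as np

def recover_edge(events):
    """Recover a moving straight edge (unit normal, normal speed v,
    offset d0, with n.p = d0 + v*t) from >= 3 element-change events.
    Each event (x, y, t) gives the homogeneous equation
    a*x + b*y - v*t - d0 = 0 in the unknowns (a, b, v, d0)."""
    M = np.array([[x, y, -t, -1.0] for (x, y, t) in events])
    # The solution is the null-space direction of M: the right singular
    # vector with the smallest singular value.
    _, _, vt = np.linalg.svd(M)
    a, b, v, d0 = vt[-1]              # determined up to scale and sign
    scale = np.hypot(a, b)
    return np.array([a, b]) / scale, v / scale, d0 / scale

# Synthetic check: generate three events from an edge whose normal is
# at 30 degrees, moving at normal speed 2 past isolated elements.
theta = np.deg2rad(30.0)
n = np.array([np.cos(theta), np.sin(theta)])
tang = np.array([-n[1], n[0]])
events = [tuple((1.0 + 2.0 * t) * n + s * tang) + (t,)
          for t, s in [(0.0, 0.3), (0.5, -1.2), (1.0, 0.7)]]
n_hat, v_hat, d0_hat = recover_edge(events)
```

The recovered normal matches the generating normal up to sign, which is the expected ambiguity: the same event triple is consistent with the edge seen from either side.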


NeuroImage | 2016

The neural representation of objects formed through the spatiotemporal integration of visual transients

Gennady Erlikhman; Gennadiy Gurariy; Ryan E. B. Mruczek; Gideon Caplovitz

Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2) and dorsal (TO1-2 and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time.
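The MVPA logic mentioned in the abstract, decoding which object was shown from distributed voxel response patterns, can be sketched with a toy correlation classifier. Everything here is synthetic and hypothetical: the voxel counts, run structure, signal strength, and the nearest-class-mean rule are illustrative stand-ins, not the classifier or data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50-voxel response patterns for 2 object identities
# across 8 scanning runs, built as a fixed identity-specific pattern
# plus independent noise on every trial.
n_voxels, n_runs = 50, 8
signal = {0: rng.normal(0, 1, n_voxels), 1: rng.normal(0, 1, n_voxels)}
X = np.array([signal[c] * 0.8 + rng.normal(0, 1, n_voxels)
              for r in range(n_runs) for c in (0, 1)])
y = np.array([c for r in range(n_runs) for c in (0, 1)])
runs = np.array([r for r in range(n_runs) for c in (0, 1)])

# Leave-one-run-out decoding: label each held-out pattern by which
# class-mean pattern (computed from the training runs) it correlates
# with more strongly.
correct = 0
for r in range(n_runs):
    train, test = runs != r, runs == r
    means = [X[train & (y == c)].mean(axis=0) for c in (0, 1)]
    for xi, yi in zip(X[test], y[test]):
        pred = int(np.corrcoef(xi, means[1])[0, 1] >
                   np.corrcoef(xi, means[0])[0, 1])
        correct += (pred == yi)
accuracy = correct / len(y)
```

Decoding "works" here when `accuracy` is reliably above the 50% chance level; in the study, above-chance decoding in a region is what licenses the claim that it carries object-identity information.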


NeuroImage | 2017

Decoding information about dynamically occluded objects in visual cortex

Gennady Erlikhman; Gideon Caplovitz

During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. Questions remain as to whether there is information maintained about the object itself (i.e. its shape or identity) or non-object-specific information such as its position or velocity as it is tracked behind an occluder, as well as which areas of visual cortex represent such information. Recent studies have found that early visual cortex is activated by "invisible" objects during visual imagery and by unstimulated regions along the path of apparent motion, suggesting that some properties of dynamically occluded objects may also be neurally represented in early visual cortex. We applied functional magnetic resonance imaging in human subjects to examine representations within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing objects, there was an increase in activity in early visual cortex (V1, V2, and V3). This activity was spatially-specific, corresponding to the occluded location in the visual field. However, the activity did not encode enough information about object identity to discriminate between different kinds of occluded objects (circles vs. stars) using MVPA. In contrast, object identity could be decoded in spatially-specific subregions of higher-order, topographically organized areas such as ventral, lateral, and temporal occipital areas (VO, LO, and TO) as well as the functionally defined LOC and hMT+. These results suggest that early visual cortex may only represent the dynamically occluded object's position or motion path, while later visual areas represent object-specific information.


Frontiers in Human Neuroscience | 2014

Non-rigid illusory contours and global shape transformations defined by spatiotemporal boundary formation

Gennady Erlikhman; Yang Z. Xing; Philip J. Kellman

Spatiotemporal boundary formation (SBF) is the perception of form, global motion, and continuous boundaries from relations of discrete changes in local texture elements (Shipley and Kellman, 1994). In two experiments, small, circular elements underwent small displacements whenever an edge of an invisible (virtual) object passed over them. Unlike previous studies that examined only rigidly translating objects, we tested virtual objects whose properties changed continuously. Experiment 1 tested rigid objects that changed in orientation, scale, and velocity. Experiment 2 tested objects that transformed non-rigidly taking on a series of shapes. Robust SBF occurred for all of the rigid transformations tested, as well as for non-rigid virtual objects, producing the perception of continuously bounded, smoothly deforming shapes. These novel illusions involve perhaps the most extreme cases of visual perception of continuous boundaries and shape from minimal information. They show that SBF encompasses a wider range of illusory phenomena than previously understood, and they present substantial challenges for existing models of SBF.


Frontiers in Psychology | 2016

From Flashes to Edges to Objects: Recovery of Local Edge Fragments Initiates Spatiotemporal Boundary Formation

Gennady Erlikhman; Philip J. Kellman

Spatiotemporal boundary formation (SBF) is the perception of illusory boundaries, global form, and global motion from spatially and temporally sparse transformations of texture elements (Shipley and Kellman, 1993a, 1994; Erlikhman and Kellman, 2015). It has been theorized that the visual system uses positions and times of element transformations to extract local oriented edge fragments, which then connect by known interpolation processes to produce larger contours and shapes in SBF. To test this theory, we created a novel display consisting of a sawtooth arrangement of elements that disappeared and reappeared sequentially. Although apparent motion along the sawtooth would be expected, with appropriate spacing and timing, the resulting percept was of a larger, moving, illusory bar. This display approximates the minimal conditions for visual perception of an oriented edge fragment from spatiotemporal information and confirms that such events may be initiating conditions in SBF. Using converging objective and subjective methods, experiments showed that edge formation in these displays was subject to a temporal integration constraint of ~80 ms between element disappearances. The experiments provide clear support for models of SBF that begin with extraction of local edge fragments, and they identify minimal conditions required for this process. We conjecture that these results reveal a link between spatiotemporal object perception and basic visual filtering. Motion energy filters have usually been studied with orientation given spatially by luminance contrast. When orientation is not given in static frames, these same motion energy filters serve as spatiotemporal edge filters, yielding local orientation from discrete element transformations over time. As numerous filters of different characteristic orientations and scales may respond to any simple SBF stimulus, we discuss the aperture and ambiguity problems that accompany this conjecture and how they might be resolved by the visual system.


Archive | 2013

Challenges in Understanding Visual Shape Perception and Representation: Bridging Subsymbolic and Symbolic Coding

Philip J. Kellman; Patrick Garrigan; Gennady Erlikhman

Perceiving and representing the shapes of contours and objects are among the most crucial tasks for biological and artificial vision systems. Much is known about early cortical encoding of visual information, and at a more abstract level, experimental data and computational models have revealed a great deal about contour, object, and shape perception. Between the early "subsymbolic" encodings and higher level "symbolic" descriptions (e.g., of contours or shapes), however, lies a considerable gap. In this chapter, we highlight the issue of attaining symbolic codes from subsymbolic ones in considering two crucial problems of shape. We describe (1) the dependence of shape perception and representation on segmentation and grouping processes. We show that in ordinary perception, shape descriptions are given to objects rather than visible regions, and we review progress in understanding interpolation processes that construct unified objects across gaps in the input. We relate these efforts to neurally plausible models of interpolation, but note that current versions still lack ways of achieving symbolic codes. We then consider (2) properties that (some) shape representations must have and why these require computations beyond the local information obtained in early visual encoding. As an example of how to bridge the gap between the subsymbolic and symbolic, we describe psychophysical and modeling work in which contour shape is approximated in terms of constant curvature segments. Our "arclet" model takes local, oriented units as inputs and produces outputs that are symbolic contour tokens with constant curvature parts. The approach provides a plausible account of aspects of contour shape perception, and more generally, it illustrates the kinds of properties needed for models that connect early visual filtering to ecologically useful outputs in the perception and representation of shape.


Archive | 2017

The maintenance and updating of representations of no longer visible objects and their parts

J. Daniel McCarthy; Gennady Erlikhman; Gideon Caplovitz

When an object partially or completely disappears behind an occluding surface, a representation of that object persists. For example, fragments of no longer visible objects can serve as an input into mid-level constructive visual processes, interacting and integrating with currently visible portions to form perceptual units and global motion signals. Remarkably, these persistent representations need not be static and can have their positions and orientations updated postdictively as new information becomes visible. In this chapter, we highlight historical considerations, behavioral evidence, and neural correlates of this type of representational updating of no longer visible information at three distinct levels of visual processing. At the lowest level, we discuss spatiotemporal boundary formation in which visual transients can be integrated over space and time to construct local illusory edges, global form, and global motion percepts. At an intermediate level, we review how the visual system updates form information seen at one moment in time and integrates it with subsequently available information to generate global shape and motion representations (e.g., spatiotemporal form integration and anorthoscopic perception). At a higher level, when an entire object completely disappears behind an occluder, the object's identity and predicted position can be maintained in the absence of visual information.


Consciousness and Cognition | 2018

Towards a unified perspective of object shape and motion processing in human dorsal cortex

Gennady Erlikhman; Gideon Caplovitz; Gennadiy Gurariy; Jared Medina; Jacqueline C. Snow

Although object-related areas were discovered in human parietal cortex a decade ago, surprisingly little is known about the nature and purpose of these representations, and how they differ from those in the ventral processing stream. In this article, we review evidence for the unique contribution of object areas of dorsal cortex to three-dimensional (3-D) shape representation, the localization of objects in space, and in guiding reaching and grasping actions. We also highlight the role of dorsal cortex in form-motion interaction and spatiotemporal integration, possible functional relationships between 3-D shape and motion processing, and how these processes operate together in the service of supporting goal-directed actions with objects. Fundamental differences between the nature of object representations in the dorsal versus ventral processing streams are considered, with an emphasis on how and why dorsal cortex supports veridical (rather than invariant) representations of objects to guide goal-directed hand actions in dynamic visual environments.

Collaboration


Dive into Gennady Erlikhman's collaborations.

Top Co-Authors


Tandra Ghose

Kaiserslautern University of Technology


Itiel E. Dror

University College London


Hongjing Lu

University of California
