
Publication


Featured research published by Elan Barenholtz.


Cognition | 2003

Detection of change in shape: an advantage for concavities

Elan Barenholtz; Elias H. Cohen; Jacob Feldman; Manish Singh

Shape representation was studied using a change detection task. Observers viewed two individual shapes in succession, either identical or one a slightly altered version of the other, and reported whether they detected a change. We found a dramatic advantage for concave compared to convex changes of equal magnitude: observers were more accurate when a concavity along the contour was introduced or removed than when a convexity was. This result sheds light on the underlying representation of visual shape, and in particular the central role played by part-boundaries. Moreover, this finding shows how change detection methodology can serve as a useful tool in studying the specific form of visual representations.


Multimedia Tools and Applications | 2011

Context modeling in computer vision: techniques, implications, and applications

Oge Marques; Elan Barenholtz; Vincent Charvillat

In recent years there has been a surge of interest in context modeling for numerous applications in computer vision. The basic motivation behind these diverse efforts is generally the same—attempting to enhance current image analysis technologies by incorporating information from outside the target object, including scene analysis as well as metadata. However, many different approaches and applications have been proposed, leading to a somewhat inchoate literature that can be difficult to navigate. The current paper provides a ‘roadmap’ of this new research, including a discussion of the basic motivation behind context modeling, an overview of the most representative techniques, and a discussion of specific applications in which contextual modeling has been incorporated. This review is intended to introduce researchers in computer vision and image analysis to this increasingly important field as well as provide a reference for those who may wish to incorporate context modeling in their own work.


Psychology of Learning and Motivation | 2006

Reconsidering the role of structure in vision

Elan Barenholtz; Michael J. Tarr

This chapter focuses on the role of structure in mental representations and visual object recognition. Structural accounts of recognition are those in which the identification of a visual object depends on both a set of features and the relations between those features within some representational space; in image-based accounts, by contrast, a match between an incoming image and a stored image is registered based on global similarity across a homogeneous input space. One of the best pieces of evidence for structural models in visual recognition is text reading: recent empirical evidence suggests that word recognition depends on identifying letters rather than on recognizing the word holistically as a single pattern. The primate visual system is clearly able to extract and represent structural information, for example spatial relations, at a wide variety of scales. The computational utility of compositional structure has been considered in depth in a number of cognitive domains, in particular language: a compositional architecture allows one to generate arbitrary constructions from a limited set of starting units, a property referred to as productivity.


Cognition | 2016

Language familiarity modulates relative attention to the eyes and mouth of a talker.

Elan Barenholtz; Lauren Mavica; David J. Lewkowicz

We investigated whether the audiovisual speech cues available in a talker's mouth elicit greater attention when adults have to process speech in an unfamiliar language than in a familiar language. Participants performed a speech-encoding task while watching and listening to videos of a talker in a familiar language (English) or an unfamiliar language (Spanish or Icelandic). When the task required speech processing, attention to the mouth increased in response to an unfamiliar language in monolingual subjects but not in bilingual subjects. In the absence of an explicit speech-processing task, subjects attended equally to the eyes and mouth in response to both familiar and unfamiliar languages. Overall, these results demonstrate that language familiarity modulates adults' selective attention to the redundant audiovisual speech cues in a talker's mouth. When our findings are considered together with similar findings from infants, they suggest that this attentional strategy emerges very early in life.


Psychonomic Bulletin & Review | 2014

Categorical congruence facilitates multisensory associative learning

Elan Barenholtz; David J. Lewkowicz; Meredith Davidson; Lauren Mavica

Learning about objects often requires making arbitrary associations among multisensory properties, such as the taste and appearance of a food or the face and voice of a person. However, the multisensory properties of individual objects usually are statistically constrained, such that some properties are more likely to co-occur than others, on the basis of their category. For example, male faces are more likely to co-occur with characteristically male voices than with female voices. Here, we report evidence that these natural multisensory statistics play a critical role in the learning of novel, arbitrary associative pairs. In Experiment 1, we found that learning of pairs consisting of human voices and gender-congruent faces was superior to learning of pairs consisting of human voices and gender-incongruent faces or of pairs consisting of human voices and pictures of inanimate objects (plants and rocks). In Experiment 2, we found that this “categorical congruency” advantage extended to nonhuman stimuli, as well—namely, to pairs of class-congruent animal pictures and vocalizations (e.g., dogs and barks) versus class-incongruent pairs (e.g., dogs and bird chirps). These findings suggest that associating multisensory properties that are statistically consistent with the various objects that we encounter in our daily lives is a privileged form of learning.


Multimedia Tools and Applications | 2015

Deep learning human actions from video via sparse filtering and locally competitive algorithms

William Hahn; Stephanie Lewkowitz; Daniel LaCombe; Elan Barenholtz

Physiological and psychophysical evidence suggests that early visual cortex compresses the visual input on the basis of spatial and orientation-tuned filters. Recent computational advances have suggested that these neural response characteristics may reflect a ‘sparse coding’ architecture—in which a small number of neurons need to be active for any given image—yielding critical structure latent in natural scenes. Here we present a novel neural network architecture combining a sparse filter model and locally competitive algorithms (LCAs), and demonstrate the network’s ability to classify human actions from video. Sparse filtering is an unsupervised feature learning algorithm designed to optimize the sparsity of the feature distribution directly, without needing to model the data distribution. LCAs are defined by a system of differential equations where the initial conditions define an optimization problem and the dynamics converge to a sparse decomposition of the input vector. We applied this architecture to train a classifier on categories of motion in human action videos. Inputs to the network were small 3D patches taken from frame differences in the videos. Dictionaries were derived for each action class and then activation levels for each dictionary were assessed during reconstruction of a novel test patch. Overall, classification accuracy was ≈97%. We discuss how this sparse filtering approach provides a natural framework for multi-sensory and multimodal data processing including RGB video, RGBD video, hyper-spectral video, and stereo audio/video streams.
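To make the two ingredients named in the abstract concrete, here is a minimal NumPy sketch of the sparse filtering objective (after Ngiam et al., 2011) and of LCA dynamics (after Rozell et al., 2008). The variable names, the soft-threshold nonlinearity, and the hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering objective: L1 norm of the doubly L2-normalized
    feature matrix. W: (n_features, n_inputs); X: (n_inputs, n_samples),
    e.g. flattened 3D frame-difference patches."""
    F = np.sqrt((W @ X) ** 2 + eps)                # soft absolute value
    F = F / np.linalg.norm(F, axis=1, keepdims=True)  # unit-norm rows (features)
    F = F / np.linalg.norm(F, axis=0, keepdims=True)  # unit-norm cols (samples)
    return F.sum()  # minimize over W with an off-the-shelf optimizer (e.g. L-BFGS)

def lca_sparse_code(Phi, x, lam=0.1, tau=10.0, dt=1.0, n_steps=200):
    """Locally competitive algorithm: dynamics whose fixed point is a
    sparse code a with x ~= Phi @ a. Phi: (n_inputs, n_atoms) dictionary."""
    soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
    b = Phi.T @ x                                  # feedforward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])         # lateral inhibition
    u = np.zeros_like(b)                           # membrane potentials
    for _ in range(n_steps):
        u += (dt / tau) * (b - u - G @ soft(u))    # Euler integration
    return soft(u)                                 # sparse activations
```

In a pipeline of the kind the abstract describes, a dictionary of this sort would be learned per action class, and a novel test patch would be classified by comparing activation levels during reconstruction under each class's dictionary.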


Visual Cognition | 2011

Visual learning of statistical relations among nonadjacent features: Evidence for structural encoding

Elan Barenholtz; Michael J. Tarr

Recent results suggest that observers can learn, unsupervised, the co-occurrence of independent shape features in viewed patterns (e.g., Fiser & Aslin, 2001). A critical question with regard to these findings is whether learning is driven by a structural, rule-based encoding of spatial relations between distinct features or by a pictorial, template-like encoding, in which spatial configurations of features are embedded in a “holistic” fashion. In two experiments, we test whether observers can learn combinations of features when the paired features are separated by an intervening spatial “gap”, in which other, unrelated features can appear. This manipulation both increases task difficulty and makes it less likely that the feature combinations are encoded simply as larger unitary features. Observers exhibited learning consistent with earlier studies, suggesting that unsupervised learning of compositional structure is based on the explicit encoding of spatial relations between separable visual features. More generally, these results provide support for compositional structure in visual representation.


Journal of Vision | 2012

Matching voice and face identity from static images

Lauren Kogelschatz; Elan Barenholtz

Previous research has suggested that people are unable to correctly choose which unfamiliar voice and static image of a face belong to the same person. Here, we present evidence that people can perform this task with greater than chance accuracy. In Experiment 1, participants saw photographs of two same-gender models while simultaneously listening to a voice recording of one of the models pictured, and chose which of the two faces they thought belonged to the same model as the recorded voice. We included three conditions: (a) the visual stimuli were frontal headshots (including the neck and shoulders) and the auditory stimuli were recordings of spoken sentences; (b) the visual stimuli contained only cropped faces and the auditory stimuli were full sentences; (c) we used the same pictures as Condition 1 but the auditory stimuli were recordings of a single word. In Experiment 2, participants performed the same task as in Condition 1 of Experiment 1 but with the stimuli presented in sequence. Participants also rated the models’ faces and voices along multiple “physical” dimensions (e.g., weight) or “personality” dimensions (e.g., extroversion); the degree of agreement between the ratings of each model’s face and voice was compared with performance for that model in the matching task. In all three conditions, we found that participants chose, at better than chance levels, which faces and voices belonged to the same person. Performance in the matching task was not correlated with the degree of agreement on any of the rated dimensions.


Journal of Vision | 2010

Quantifying the role of context in visual object recognition

Elan Barenholtz

An object’s context may serve as a source of information for recognition when the object’s image is degraded. The current study aimed to quantify this source of information. Stimuli were photographs of objects divided into quantized blocks. Participants decreased the block size (increasing resolution) until they could identify the object. Critical resolution was compared across three conditions: (1) the picture of the target object shown in isolation, (2) the object shown in a contextual setting that was unfamiliar to the participant, and (3) the object shown in a contextual setting that was familiar to the participant. A second experiment assessed the role of object familiarity without context. Results showed a profound effect of context: participants identified objects in familiar contexts with minimal resolution. Unfamiliar contexts required higher-resolution images, but still considerably lower resolution than objects shown without context. Experiment 2 found a much smaller effect of familiarity without context, suggesting that recognition in familiar contexts is primarily based on object-location memory.
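The block-quantization manipulation is straightforward to picture in code. The following is a minimal illustrative sketch; the function name and the mean-pooling rule are assumptions, since the abstract does not specify how blocks were computed.

```python
import numpy as np

def quantize_blocks(img: np.ndarray, block: int) -> np.ndarray:
    """Return a copy of `img` in which every block x block region is
    replaced by its mean intensity (a coarse, 'pixelated' rendering)."""
    out = img.astype(float)
    h, w = out.shape[:2]
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = out[r:r + block, c:c + block]
            patch[...] = patch.mean(axis=(0, 1))  # fill the block in place
    return out

# Coarse-to-fine presentation: shrink the block size (raising resolution)
# until the observer reports identifying the object, e.g.:
# for block in (64, 32, 16, 8, 4, 2, 1):
#     show(quantize_blocks(photo, block))
```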


International Conference on Bioinformatics | 2018

Convolutional Neural Networks for Predicting Molecular Binding Affinity to HIV-1 Proteins

Paul Morris; Yahchayil DaSilva; Evan Clark; William Edward Hahn; Elan Barenholtz

Computational techniques for binding-affinity prediction and molecular docking have long been considered in terms of their utility for drug discovery. With the advent of deep learning, new supervised learning techniques have emerged which can utilize the wealth of experimental binding data already available. Here we demonstrate the ability of a fully convolutional neural network to classify molecules from their Simplified Molecular-Input Line-Entry System (SMILES) strings for binding affinity to HIV proteins. The network is evaluated on two tasks: distinguishing a set of molecules experimentally verified to bind and inhibit HIV-1 Protease and HIV-1 Reverse Transcriptase, respectively, from a random sample of drug-like molecules. We report 98% and 93% classification accuracy on the respective tasks using a computationally efficient model which outperforms traditional machine learning baselines. Our model is suitable for virtual screening of large sets of drug-like molecules for binding to HIV or other protein targets.
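The paper's exact architecture is not reproduced here, but the general approach can be sketched as a character-level, fully convolutional classifier over one-hot-encoded SMILES strings. In this minimal PyTorch sketch, the vocabulary, maximum length, layer sizes, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical SMILES character alphabet and maximum string length.
VOCAB = "#%()+-./0123456789=@BCFHIKLNOPS[\\]acilnors"
MAX_LEN = 120

def one_hot_smiles(smiles: str) -> torch.Tensor:
    """Encode a SMILES string as a (len(VOCAB), MAX_LEN) one-hot tensor."""
    x = torch.zeros(len(VOCAB), MAX_LEN)
    for i, ch in enumerate(smiles[:MAX_LEN]):
        j = VOCAB.find(ch)
        if j >= 0:
            x[j, i] = 1.0
    return x

class SmilesCNN(nn.Module):
    """Fully convolutional binary classifier: binder vs. non-binder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(len(VOCAB), 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),   # global max pool over the sequence
            nn.Flatten(),
            nn.Linear(64, 2),          # logits: binds / does not bind
        )

    def forward(self, x):              # x: (batch, len(VOCAB), MAX_LEN)
        return self.net(x)

# Example (aspirin): logits = SmilesCNN()(one_hot_smiles("CC(=O)Oc1ccccc1C(=O)O").unsqueeze(0))
```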

Collaboration


Dive into Elan Barenholtz's collaborations.

Top Co-Authors

Michael J. Tarr (Carnegie Mellon University)

Michael Kleiman (Florida Atlantic University)

Derrick Schlangen (Florida Atlantic University)

Lauren Kogelschatz (Florida Atlantic University)

Lauren Mavica (Florida Atlantic University)

Meredith Davidson (Florida Atlantic University)