Betty Edelman
University of Texas at Dallas
Publications
Featured research published by Betty Edelman.
Perception | 1995
Hervé Abdi; Dominique Valentin; Betty Edelman; Alice J. O'Toole
The ability of a statistical/neural network to classify faces by sex by means of a pixel-based representation has not been fully investigated. Simulations with pixel-based codes have provided sex-classification results that are less impressive than those reported for measurement-based codes. In no case, however, have the reported pixel-based simulations been optimized for the task of classifying faces by sex. A series of simulations is described in which four network models were applied to the same pixel-based face code. These simulations involved either a radial basis function network or a perceptron as a classifier, preceded or not by a preprocessing step of eigendecomposition. It is shown that performance comparable to that of the measurement-based models can be achieved with pixel-based input (90%) when the data are preprocessed. The effect of the eigendecomposition preprocessing of the faces is then compared with spatial-frequency analysis of face images and analyzed in terms of the perceptual information it captures. It is shown that such an examination may offer insight into the facial aspects important to the sex-classification process. Finally, the contribution of hair information to the performance of the model is evaluated. It is shown that, although the hair contributes to the sex-classification process, it is not the only important contributor.
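As an illustration of the kind of pipeline this abstract describes, the following sketch (Python with NumPy) projects pixel-based face vectors onto their leading eigenvectors before training a simple perceptron to classify them by sex. The data shapes, the number of eigenvectors kept, and the perceptron settings are illustrative assumptions, not the published model.

# A minimal sketch (not the authors' code) of the pipeline the abstract describes:
# pixel-based face vectors are projected onto their leading eigenvectors
# (eigendecomposition / PCA preprocessing) before a simple perceptron
# classifies them by sex. Data and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in data: 160 faces as flattened pixel vectors, labels +/-1 for sex.
n_faces, n_pixels, n_components = 160, 32 * 32, 40
X = rng.normal(size=(n_faces, n_pixels))
y = rng.choice([-1.0, 1.0], size=n_faces)

# Eigendecomposition preprocessing: project centered pixel vectors onto
# the leading eigenvectors of the face set (computed here via SVD).
X_centered = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
Z = X_centered @ Vt[:n_components].T          # low-dimensional face codes

# Simple perceptron classifier on the eigen-coded faces.
w = np.zeros(n_components)
for _ in range(50):                            # a few training epochs
    for z, target in zip(Z, y):
        if target * (z @ w) <= 0:              # misclassified: update weights
            w += target * z

accuracy = np.mean(np.sign(Z @ w) == y)
print(f"training accuracy on illustrative data: {accuracy:.2f}")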
Perception | 1997
Dominique Valentin; Hervé Abdi; Betty Edelman
Empirical studies of face recognition suggest that faces might be stored in memory by means of a few canonical representations. The nature of these canonical representations is, however, unclear. Although psychological data show a three-quarter-view advantage, physiological studies suggest that profile and frontal views are stored in memory. A computational approach to reconciling these findings is proposed. The patterns of results obtained when different views, or combinations of views, are used as the internal representation of a two-stage identification network, consisting of an autoassociative memory followed by a radial-basis-function network, are compared. Results show that (i) a frontal and a profile view are sufficient to reach optimal network performance; and (ii) all the different representations produce a three-quarter-view advantage, similar to that generally described for human subjects. These results indicate that although three-quarter views yield better recognition than other views, they need not be stored in memory to produce this advantage.
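To make the two-stage architecture concrete, here is a minimal sketch (Python/NumPy) in which a linear autoassociative memory projects an input onto the subspace spanned by the stored views, and a radial-basis-function layer centred on those views maps the reconstruction to an identity. The face dimensionality, the number of stored views, the Gaussian width, and the least-squares output fit are illustrative assumptions rather than the authors' implementation.

# A minimal sketch (not the authors' implementation) of the two-stage model the
# abstract describes: a linear autoassociative memory that reconstructs face
# vectors from stored views, followed by a radial-basis-function (RBF) layer
# that maps the reconstruction onto face identities. All sizes are illustrative.
import numpy as np

rng = np.random.default_rng(1)

n_faces, dim = 20, 256
views = rng.normal(size=(n_faces, dim))        # stored views (e.g. frontal + profile)
identities = np.eye(n_faces)                   # one-hot identity targets

# Stage 1: linear autoassociator. Storing the views amounts to projecting
# any input onto the subspace they span (pseudo-inverse formulation).
W_auto = np.linalg.pinv(views) @ views         # dim x dim projection matrix

def reconstruct(x):
    return W_auto @ x

# Stage 2: RBF network with one Gaussian unit centred on each stored view;
# output weights are fit by least squares to the identity targets.
sigma = np.sqrt(dim)

def rbf_activations(x):
    d2 = np.sum((views - x) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

H = np.stack([rbf_activations(reconstruct(v)) for v in views])
W_out, *_ = np.linalg.lstsq(H, identities, rcond=None)

def identify(x):
    return int(np.argmax(rbf_activations(reconstruct(x)) @ W_out))

# A stored view plus a little noise (standing in for a novel view of the same
# person) should map back to its own identity.
probe = views[3] + 0.1 * rng.normal(size=dim)
print("identified as face", identify(probe))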
Journal of Biological Systems | 1998
Betty Edelman; Dominique Valentin; Hervé Abdi
Human subjects and an artificial neural network, composed of an autoassociative memory and a perceptron, gender-classified the same 160 frontal face images (80 male and 80 female). All 160 face images were presented under three conditions: (1) the full face image with the hair cropped; (2) the top portion only of the Condition 1 image; (3) the bottom portion only of the Condition 1 image. Predictions from simulations that used Condition 1 stimuli for training and tested novel stimuli in Conditions 1, 2, and 3 were compared with human performance. Although the network showed a fair ability to generalize its learning to new stimuli under the three conditions, classifying from 66% to 78% of novel faces correctly, and predicted the main effects, a more detailed comparison with the human data was not as promising. As expected, human accuracy declined with decreased image area, but showed a surprising interaction between the sex of the face and the partial-image conditions. The network failed to predict this interaction, or the likelihood of correct human classification for a particular face. This item-level analysis raises concern about the psychological relevance of the model.
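The training/transfer design can be sketched as follows (Python/NumPy): a perceptron is trained on full-face vectors (Condition 1) and then tested on the same faces with the bottom or top half removed (Conditions 2 and 3). The image size, the random stand-in data, and the zeroing-out of image halves are illustrative assumptions, and the autoassociative stage is omitted here for brevity.

# An illustrative sketch (not the published model) of the train-on-full,
# test-on-partial design described in the abstract.
import numpy as np

rng = np.random.default_rng(2)

n_faces, side = 160, 16                        # illustrative 16x16 "faces"
X = rng.normal(size=(n_faces, side, side))
y = rng.choice([-1.0, 1.0], size=n_faces)      # +1 / -1 sex labels (arbitrary coding)

def crop(images, part):
    out = images.copy()
    if part == "top":                          # keep the top half only
        out[:, side // 2:, :] = 0.0
    elif part == "bottom":                     # keep the bottom half only
        out[:, :side // 2, :] = 0.0
    return out.reshape(len(images), -1)

# Train a perceptron on the full-face condition (Condition 1).
X_full = crop(X, "full")
w = np.zeros(side * side)
for _ in range(50):
    for x, target in zip(X_full, y):
        if target * (x @ w) <= 0:              # misclassified: update weights
            w += target * x

# Test on all three conditions to see how performance transfers.
for condition in ("full", "top", "bottom"):
    acc = np.mean(np.sign(crop(X, condition) @ w) == y)
    print(f"{condition:>6}: {acc:.2f} correct")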
Perception | 1999
Dominique Valentin; Hervé Abdi; Betty Edelman
A study is reported of the effect of distinctive marks on the recognition of unfamiliar faces across view angles. Subjects were asked to memorize a set of target faces, half of which had distinctive marks. Recognition was assessed by presenting the target faces, either in the same orientation, or after 90° rotation, mixed with an equal number of distractors. Results show that the effect of distinctive marks depends on the view presented during learning. When a frontal view was learned, as predicted by the dual-strategy model [Valentin et al, in press, in Computational, Geometric, and Process Perspectives on Facial Cognition: Context and Challenges Eds T Wenger, J Townsend (Hillsdale, NJ: Lawrence Erlbaum Associates)], distinctive marks improve recognition performance in the 90° condition but not in the 0° condition. However, when a profile view was learned, distinctive marks have no effect on recognition performance, even in the 90° condition where a frontal view is tested.
Journal of Mathematical Psychology | 1997
Dominique Valentin; Hervé Abdi; Betty Edelman; Alice J. O'Toole
Computer Vision and Pattern Recognition | 2005
Hervé Abdi; Alice J. O'Toole; Dominique Valentin; Betty Edelman
Archive | 2009
Hervé Abdi; Betty Edelman; Dominique Valentin; W. Jay Dowling
Journal of Mathematical Psychology | 1996
Hervé Abdi; Dominique Valentin; Betty Edelman; Alice J. O'Toole
Behavioral and Brain Sciences | 1998
Hervé Abdi; Dominique Valentin; Betty Edelman
Psychologica Belgica | 1996
Betty Edelman; Hervé Abdi; Dominique Valentin