Daniel Gill
University of Glasgow
Publications
Featured research published by Daniel Gill.
Computing in Cardiology Conference | 2005
Daniel Gill; N Gavrieli; N Intrator
This work presents a novel method for automatic detection and identification of heart sounds. Homomorphic filtering is used to obtain a smooth envelogram of the phonocardiogram, which enables robust detection of events of interest in the heart sound signal. Sequences of features extracted from the detected events are used as observations of a hidden Markov model. It is demonstrated that the task of detection and identification of the major heart sounds can be learned from unlabelled phonocardiograms by an unsupervised training process and without the assistance of any additional synchronizing channels.
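The homomorphic envelogram step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation; the low-pass cutoff frequency, filter order, and the synthetic test signal are all assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def homomorphic_envelope(pcg, fs, lpf_hz=8.0, order=2):
    """Smooth envelogram of a phonocardiogram via homomorphic filtering.

    Take the log of the analytic-signal magnitude, low-pass filter it to
    separate the slowly varying envelope from the fast oscillation, then
    exponentiate back to the amplitude domain.
    """
    analytic = hilbert(pcg)
    log_env = np.log(np.abs(analytic) + 1e-10)  # small offset avoids log(0)
    b, a = butter(order, lpf_hz / (fs / 2), btype="low")
    return np.exp(filtfilt(b, a, log_env))

# Example: synthetic "heart sound" bursts (1 per second) on a 100 Hz carrier
fs = 2000
t = np.arange(0, 2.0, 1 / fs)
bursts = (np.sin(2 * np.pi * 1.0 * t) > 0.95).astype(float)
pcg = bursts * np.sin(2 * np.pi * 100.0 * t)
env = homomorphic_envelope(pcg, fs)
```

The smoothed envelope rises where the bursts occur and stays near zero elsewhere, which is what makes subsequent event detection robust to the fast carrier oscillation.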
Psychological Science | 2014
Daniel Gill; Oliver Garrod; Rachael E. Jack; Philippe G. Schyns
Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.
Neurocomputing | 2000
Daniel Gill; Lidror Troyansky; Israel Nelken
This work presents a biologically motivated neuronal model for detecting the elevation of unfamiliar natural sound sources using monaural cues, based on head-related transfer functions. This model can determine the elevation of an unfamiliar sound source to within less than 4° with no error, using a very small number of training samples. In addition, we suggest that the approximately logarithmic response of the cells in the cochlea is beneficial for localizing unfamiliar sound sources.
International Symposium on Visual Computing | 2007
Daniel Gill; Ya'acov Ritov; Gideon Dror
This work presents a new approach to the analysis of shapes represented by a finite set of landmarks, one that generalizes the notion of Procrustes distance - an invariant metric under translation, scaling, and rotation. In many shape classification tasks there is large variability in certain landmarks due to intra-class and/or inter-class variations. Such variations cause the poor shape alignment needed for Procrustes distance computation, and lead to poor classification performance. We apply a general framework to the task of supervised classification of shapes that naturally deals with landmark distributions exhibiting large intra-class or inter-class variability. The incorporation of the Procrustes metric and of a learnt general quadratic distance, inspired by the Fisher linear discriminant objective function, produces a generalized Procrustes distance. The learnt distance retains the invariance properties and emphasizes the discriminative shape features. In addition, we show how the learnt metric can be useful for kernel machine design and demonstrate the performance enhancement achieved by the learnt distances on a variety of classification tasks on organismal form datasets.
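The ordinary Procrustes distance that this paper generalizes can be sketched as below. This is a standard textbook construction, not the paper's learnt metric: translation is removed by centring, scale by normalization, and the optimal orthogonal alignment is found via SVD.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Procrustes distance between two k x 2 landmark configurations.

    Invariant to translation, scaling, and rotation (the SVD solution
    optimizes over all orthogonal maps, so reflections are included too).
    """
    # Remove translation: centre each configuration on its centroid
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Remove scale: normalize to unit centroid size
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    # Optimal alignment: R = U V^T from the SVD of the cross-covariance
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    return np.linalg.norm(Xc - Yc @ R)
```

A configuration compared against a translated, scaled, and rotated copy of itself yields a distance of (numerically) zero, while genuinely different shapes yield a positive distance; the paper's contribution is to replace the implicit Euclidean norm here with a learnt quadratic form that emphasizes discriminative landmarks.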
Journal of Vision | 2014
Daniel Gill; Rachael E. Jack; Philippe G. Schyns
Inference of social traits from faces is a prominent factor in everyday social interactions. As such, it is supposed to be susceptible to evolutionary pressures affecting detection sensitivities in specific social contexts. Previous studies using a multi-trait rating task (Oosterhof & Todorov, 2008) found that the valence dimension (first principal component) captures most of the variance (68%), whereas the second principal component, well aligned with dominance, captures only 18% of the variance. It has been suggested that this structure expresses the priority of evaluating intention over evaluating dominance in a chance encounter. Here we address the modulation of social impressions from faces across varying viewing distance. Using a four-trait rating task (trustworthiness, dominance, attractiveness, and aggressiveness) and simulated viewing distances ranging from 2.5 to 80 m, we addressed this question by (i) evaluating the composition of diagnostic information across viewing distance with classification images and (ii) reconstructing the structure of the viewing-distance-dependent social space by measuring its principal components. The results show a varying composition of diagnostic information across viewing distance. Whilst at long viewing distances face and hair color (e.g. face redness) serve as the major diagnostic features, at short viewing distances inner facial features (such as the eyebrows) become diagnostic as well. In addition, we show that at a long viewing distance the loading of dominance on the first principal component is high and that of trustworthiness is low. As viewing distance becomes shorter, the loading of dominance on the first principal component becomes lower and that of trustworthiness becomes higher.
The latter results suggest a viewing-distance-dependent tuning of social perception: priority of evaluating a counterpart's capacity at far viewing distances, and priority of evaluating intentions at short distances.
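The principal-component analysis of the trait-rating space described above can be sketched as follows. The data here are entirely synthetic (hypothetical ratings with one dominant latent dimension standing in for valence); only the analysis pattern, PCA via SVD of the centred ratings matrix with per-component variance explained, reflects the method described.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ratings: 50 faces x 4 traits (trustworthiness, dominance,
# attractiveness, aggressiveness). Columns are driven by one latent
# factor to simulate a dominant valence-like first component.
latent = rng.normal(size=(50, 1))
weights = np.array([[1.0, -0.3, 0.8, -0.9]])  # assumed trait weights
ratings = latent @ weights + 0.3 * rng.normal(size=(50, 4))

# PCA via SVD of the centred ratings matrix
centred = ratings - ratings.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)  # fraction of variance per component
loadings = vt                    # rows: components; columns: trait loadings
```

Comparing the trait loadings of the first component across viewing-distance conditions is what reveals the shift from dominance-dominated to trustworthiness-dominated structure reported above.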
I-perception | 2012
Daniel Gill; Oliver G. B. Garrod; Rachael E. Jack; Philippe G. Schyns
Evolution and Human Behavior | 2017
Daniel Gill
Journal of Vision | 2015
Daniel Gill; Lisa M. DeBruine; Benedict C. Jones; Philippe G. Schyns
Journal of Vision | 2013
Daniel Gill; Oliver Garrod; Rachael E. Jack; Philippe G. Schyns
I-perception | 2011
Daniel Gill