Harold C Hill
University of Wollongong
Publications
Featured research published by Harold C Hill.
Current Biology | 2001
Harold C Hill; Alan Johnston
Head and facial movements can provide valuable cues to identity in addition to their primary roles in communicating speech and expression [1-8]. Here we report experiments in which we have used recent motion capture and animation techniques to animate an average head [9]. These techniques have allowed the isolation of motion from other cues and have enabled us to separate rigid translations and rotations of the head from nonrigid facial motion. In particular, we tested whether human observers can judge sex and identity on the basis of this information. Results show that people can discriminate both between individuals and between males and females from motion-based information alone. Rigid head movements appear particularly useful for categorization on the basis of identity, while nonrigid motion is more useful for categorization on the basis of sex. Accuracy for both sex and identity judgements is reduced when faces are presented upside down, and this finding shows that performance is not based on low-level motion cues alone and suggests that the information is represented in an object-based motion-encoding system specialized for upright faces. Playing animations backward also reduced performance for sex judgements and emphasized the importance of direction specificity in admitting access to stored representations of characteristic male and female movements.
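The rigid/nonrigid separation described above can be illustrated with a standard technique: for each motion-capture frame, fit the best rigid rotation and translation of the marker set onto a reference frame (the Kabsch algorithm) and treat the residual as nonrigid facial motion. The paper does not specify its exact procedure, so the following is a minimal sketch under that assumption; `frames` and `reference` stand for N x 3 marker arrays.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q
    (both N x 3), minimising ||(P @ R.T + t) - Q|| in the least-squares sense."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - cP @ R.T
    return R, t

def split_rigid_nonrigid(frames, reference):
    """Factor each frame's marker positions into a rigid head pose relative
    to `reference` plus a residual nonrigid facial deformation."""
    rigid, nonrigid = [], []
    for F in frames:
        R, t = kabsch(reference, F)           # rigid pose of the head this frame
        aligned = (F - t) @ R                 # undo the rigid transform
        rigid.append((R, t))
        nonrigid.append(aligned - reference)  # what rigid motion cannot explain
    return rigid, nonrigid
```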
Journal of Experimental Psychology: Human Perception and Performance | 1996
Harold C Hill; Vicki Bruce
A series of experiments is reported that investigated the effects of variations in lighting and viewpoint on the recognition and matching of facial surfaces. In matching tasks, changing lighting reduced performance, as did changing view, but changing both did not further reduce performance. There were also differences between top and bottom lighting. Recognizing familiar surfaces and matching across changes in viewpoint were more accurate when lighting was from above than when it was from below the heads, and matching between different directions of top lighting was more accurate than between different directions of bottom lighting. Top lighting also benefited matching between views of unfamiliar objects (amoebae), though this benefit was not found for inverted faces. The results are difficult to explain if edge- or image-based representations mediate face processing and seem more consistent with an account in which lighting from above helps the derivation of 3-dimensional shape.
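The link between top lighting and the derivation of 3-D shape can be made concrete with the Lambertian shading model, in which image intensity is the dot product of the surface normal and the light direction: the same facial surface therefore produces very different images when lit from above versus below. A minimal sketch, not the authors' stimulus-generation code; `face_normals` is an assumed per-pixel normal map.

```python
import numpy as np

def lambertian_shade(normals, light_dir):
    """Shade a surface given per-pixel unit normals (H x W x 3) and a light
    direction; intensity = max(0, n . l) under the Lambertian model."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, None)

# The same face geometry lit from above vs below (y-axis pointing up):
# image_top = lambertian_shade(face_normals, [0.0, 1.0, 1.0])
# image_bottom = lambertian_shade(face_normals, [0.0, -1.0, 1.0])
```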
Perception | 1992
Alan Johnston; Harold C Hill; Nicole Carman
When information about three-dimensional shape obtained from shading and shadows is ambiguous, the visual system favours an interpretation of surface geometry which is consistent with illumination from above. If pictures of top-lit faces are rotated the resulting stimulus is both figurally inverted and illuminated from below. In this study the question of whether the effects of figural inversion and lighting orientation on face recognition are independent or interactive is addressed. Although there was a clear inversion effect for faces illuminated from the front and above, the inversion effect was found to be reduced or eliminated for faces illuminated from below. A strong inversion effect for photographic negatives was also found but in this case the effect was not dependent on the direction of illumination. These findings are interpreted as evidence to suggest that lighting faces from below disrupts the formation of surface-based representations of facial shape.
Current Biology | 2003
Miyuki Kamachi; Harold C Hill; Karen Lander; Eric Vatikiotis-Bateson
Speech perception provides compelling examples of a strong link between auditory and visual modalities. This link originates in the mechanics of speech production, which, in shaping the vocal tract, determine the movement of the face as well as the sound of the voice. In this paper, we present evidence that equivalent information about identity is available cross-modally from both the face and voice. Using a delayed matching to sample task, XAB, we show that people can match the video of an unfamiliar face, X, to an unfamiliar voice, A or B, and vice versa, but only when stimuli are moving and are played forward. The critical role of time-varying information is underlined by the ability to match faces to voices containing only the coarse spatial and temporal information provided by sine wave speech [5]. The effect of varying sentence content across modalities was small, showing that identity-specific information is not closely tied to particular utterances. We conclude that the physical constraints linking faces to voices result in bimodally available dynamic information, not only about what is being said, but also about who is saying it.
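Sine wave speech, mentioned above, replaces the speech waveform with a few sinusoids that follow the formant centre frequencies and amplitudes, preserving only coarse spectral and temporal structure. A minimal synthesis sketch, assuming formant tracks have already been estimated (for instance by LPC analysis) and upsampled to audio rate:

```python
import numpy as np

def sine_wave_speech(freq_tracks, amp_tracks, sr=16000):
    """Synthesise sine wave speech from formant tracks.
    freq_tracks, amp_tracks: arrays of shape (n_formants, n_samples),
    already at audio rate; typically n_formants = 3."""
    dt = 1.0 / sr
    out = np.zeros(freq_tracks.shape[1])
    for f, a in zip(freq_tracks, amp_tracks):
        phase = 2 * np.pi * np.cumsum(f) * dt   # integrate frequency -> phase
        out += a * np.sin(phase)
    return out / max(1e-9, np.abs(out).max())   # normalise to [-1, 1]
```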
Perception | 2003
Frank E. Pollick; Harold C Hill; Andrew J. Calder; Helena Paterson
We examined how the recognition of facial emotion was influenced by manipulation of both spatial and temporal properties of 3-D point-light displays of facial motion. We started with the measurement of 3-D position of multiple locations on the face during posed expressions of anger, happiness, sadness, and surprise, and then manipulated the spatial and temporal properties of the measurements to obtain new versions of the movements. In two experiments, we examined recognition of these original and modified facial expressions: in experiment 1, we manipulated the spatial properties of the facial movement, and in experiment 2 we manipulated the temporal properties. The results of experiment 1 showed that exaggeration of facial expressions relative to a fixed neutral expression resulted in enhanced ratings of the intensity of that emotion. The results of experiment 2 showed that changing the duration of an expression had a small effect on ratings of emotional intensity, with a trend for expressions with shorter durations to have lower ratings of intensity. The results are discussed within the context of theories of encoding as related to caricature and emotion.
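The spatial exaggeration in experiment 1 is the classic caricature transformation: each measured point is displaced from its position in the fixed neutral expression by a gain factor k. A one-line sketch for 3-D point data; the function name and gain values are illustrative.

```python
import numpy as np

def exaggerate(expression, neutral, k):
    """Caricature an expression frame (N x 3 points) relative to a fixed
    neutral frame: new = neutral + k * (expression - neutral)."""
    return neutral + k * (expression - neutral)

# k = 1.0 reproduces the recorded expression; k = 1.5 exaggerates it;
# k = 0.5 anti-caricatures it, attenuating it toward the neutral face.
```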
Perception | 2007
Harold C Hill; Alan Johnston
The hollow-face illusion, in which a mask appears as a convex face, is a powerful example of binocular depth inversion occurring with a real object under a wide range of viewing conditions. Explanations of the illusion are reviewed and six experiments reported. In experiment 1 the detrimental effect of figural inversion, evidence for the importance of familiarity, was found for other oriented objects. The inversion effect held for masks lit from the side (experiment 2). The illusion was stronger for a mask rotated by 90° lit from its forehead than from its chin, suggesting that familiar patterns of shading enhance the illusion (experiment 2). There were no effects of light source visibility or any left/right asymmetry (experiment 3). In experiments 4–6 we used a ‘virtual’ hollow face, with illusion strength quantified by the proportion of noise texture needed to eliminate the illusion. Adding characteristic surface colour enhanced the illusion, consistent with the familiar face pigmentation outweighing additional bottom-up cues (experiment 4). There was no difference between perspective and orthographic projection. Photographic negation reduced, but did not eliminate, the illusion, suggesting shading is important but not essential (experiment 5). Absolute depth was not critical, although a shallower mask was given less extreme convexity ratings (experiment 6). We argue that the illusion arises owing to a convexity preference when the raw data have ambiguous interpretations. However, using a familiar object with typical orientation, shading, and pigmentation greatly enhances the effect.
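A common way to estimate the noise proportion that nulls an illusion is an adaptive staircase. The abstract does not state the authors' psychophysical procedure, so the sketch below is a generic 1-up/1-down staircase offered only as an assumption; `respond` stands in for an observer's trial response.

```python
def staircase(respond, start=0.5, step=0.05, n_reversals=8):
    """Estimate the noise proportion at which the hollow face stops looking
    convex. `respond(level)` returns True if the illusion persists at that
    noise level. 1-up/1-down converges on the 50% point; the threshold is
    the mean of the reversal levels."""
    level, last_dir, reversals = start, 0, []
    while len(reversals) < n_reversals:
        d = 1 if respond(level) else -1   # illusion persists -> add noise
        if last_dir and d != last_dir:
            reversals.append(level)       # direction change = reversal
        last_dir = d
        level = min(1.0, max(0.0, level + d * step))
    return sum(reversals) / len(reversals)
```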
Journal of Experimental Psychology: Human Perception and Performance | 2007
Karen Lander; Harold C Hill; Miyuki Kamachi; Eric Vatikiotis-Bateson
Recent studies have shown that the face and voice of an unfamiliar person can be matched for identity. Here the authors compare the relative effects of changing sentence content (what is said) and sentence manner (how it is said) on matching identity between faces and voices. A change between speaking a sentence as a statement and as a question disrupted matching performance, whereas changing the sentence itself did not. This was the case when the faces and voices were from the same race as participants and speaking a familiar language (English; Experiment 1) or from another race and speaking an unfamiliar language (Japanese; Experiment 2). Altering manner between conversational and clear speech (Experiment 3) or between conversational and casual speech (Experiment 4) was also disruptive. However, artificially slowing (Experiment 5) or speeding (Experiment 6) speech did not affect cross-modal matching performance. The results show that bimodal cues to identity are closely linked to manner but that content (what is said) and absolute tempo are not critical. Instead, prosodic variations in rhythmic structure and/or expressiveness may provide a bimodal, dynamic identity signature.
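Slowing or speeding speech without shifting its pitch, as in Experiments 5 and 6, is a time-stretching operation, conventionally done with a phase vocoder. A minimal sketch using librosa, an assumed tool rather than the authors' method; the file name is hypothetical.

```python
import librosa

# Load a speech recording and change its tempo without shifting pitch.
y, sr = librosa.load("sentence.wav", sr=None)
slowed = librosa.effects.time_stretch(y, rate=0.5)   # half speed, twice as long
speeded = librosa.effects.time_stretch(y, rate=2.0)  # double speed, half as long
```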
Visual Cognition | 2005
Tamara L. Watson; Alan Johnston; Harold C Hill; Nikolaus F. Troje
Natural face and head movements were mapped onto a computer rendered three-dimensional average of 100 laser-scanned heads in order to isolate movement information from spatial cues and nonrigid movements from rigid head movements (Hill & Johnston, 2001). Experiment 1 investigated whether subjects could recognize, from a rotated view, facial motion that had previously been presented at a full-face view using a delayed match to sample experimental paradigm. Experiment 2 compared recognition for views that were either between or outside initially presented views. Experiment 3 compared discrimination at full face, three-quarters, and profile after learning at each of these views. A significant face inversion effect in Experiments 1 and 2 indicated subjects were using face-based information rather than more general motion or temporal cues for optimal performance. In each experiment recognition performance only ever declined with a change in viewpoint between sample and test views when rigid motion was present. Nonrigid, face-based motion appears to be encoded in a viewpoint-invariant, object-centred manner, whereas rigid head movement is encoded in a more view-specific manner.
Perception | 2003
Harold C Hill; Yuri Jinno; Alan Johnston
The movement of faces provides useful information for a variety of tasks and is now an active area of research. We compare here two ways of presenting face motion in experiments: as solid-body animations and as point-light displays. In the first experiment solid-body and point-light animations, based on the same motion-captured marker data, produced similar levels of performance on a sex-judgment task. The trend was for an advantage for the point-light displays, probably in part because of residual spatial cues available in such stimuli. In the second experiment we compared spatially normalised point-light displays of marker data with solid-body animations and pseudorandom point-light animations. Performance with solid-body animations and normalised point-light displays was similar and above chance, while performance with the pseudorandom point-light stimuli was not above chance. We conclude that both relatively few well-placed points and solid-body animations provide useful information about facial motion, but that a greater number of randomly placed points does not support above-chance performance. Solid-body animations have the methodological advantages of reducing the importance of marker placement and are more effective in isolating motion information, even if they are subsequently rendered as point-light displays.
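One plausible reading of "spatially normalised" point-light displays, sketched here as an assumption rather than the paper's exact procedure, is to centre each frame on its centroid and rescale it to a common size, so that static spatial cues such as head size and marker layout no longer distinguish individuals.

```python
import numpy as np

def normalise_frames(frames):
    """Spatially normalise point-light marker data: centre each frame
    (N x 3) on its centroid and scale to unit root-mean-square radius,
    leaving the pattern of motion as the main cue."""
    out = []
    for F in frames:
        C = F - F.mean(axis=0)                    # remove translation
        scale = np.sqrt((C ** 2).sum(axis=1).mean())
        out.append(C / max(scale, 1e-9))          # remove size differences
    return out
```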
PLOS ONE | 2014
Justin O'Brien; Janine Spencer; Christine Girges; Alan Johnston; Harold C Hill
Facial motion is a special type of biological motion that transmits cues for socio-emotional communication and enables the discrimination of properties such as gender and identity. We used animated average faces to examine the ability of adults with autism spectrum disorders (ASD) to perceive facial motion. Participants completed increasingly difficult tasks involving the discrimination of (1) sequences of facial motion, (2) the identity of individuals based on their facial motion and (3) the gender of individuals. Stimuli were presented in both upright and upside-down orientations to test for the difference in inversion effects often found when comparing ASD with controls in face perception. The ASD group’s performance was impaired relative to the control group in all three tasks and unlike the control group, the individuals with ASD failed to show an inversion effect. These results point to a deficit in facial biological motion processing in people with autism, which we suggest is linked to deficits in lower level motion processing we have previously reported.