Publication


Featured research published by Julie M. Harris.


Public Health Nutrition | 2006

Accuracy of estimates of food portion size using food photographs – the importance of using age-appropriate tools

Emma Foster; J. N. S. Matthews; Michael Nelson; Julie M. Harris; John C. Mathers; Ashley Adamson

BACKGROUND In order to obtain a measure of nutrient intake, a measure or estimate of the amount of food consumed is required. Weighing foods imposes a large burden on subjects, often resulting in underreporting. Tools are available to assist subjects in providing an estimate of portion size and these include food photographs. The application of these tools in improving portion size estimation by children has not been investigated systematically. OBJECTIVES To assess the accuracy with which children are able to estimate food portion sizes using food photographs designed for use with adults, and to determine whether the accuracy of estimates is improved when age-appropriate portion size photographs are provided. DESIGN Original data from three separate studies, on the accuracy of portion size estimates by adults using food photographs, by children using adult photographs and by children using age-appropriate photographs, are analysed and compared. SUBJECTS One hundred and thirty-five adults aged 18 to 90 years and 210 children aged 4 to 11 years. RESULTS Children's estimates of portion sizes using age-appropriate food photographs were significantly more accurate (an underestimate of 1% on average) than estimates using photographs designed for use with adults (an overestimate of 45% on average). Accuracy of children's estimates of portion size using age-appropriate photographs was not significantly different from that of adults. Children overestimated a food's weight by 18% on average and adults underestimated by 5%. CONCLUSIONS Providing children with food photographs depicting age-appropriate portion sizes greatly increases the accuracy of portion size estimates compared with estimates using photographs designed for use with adults.
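
As a rough illustration of the kind of error measure summarized above (not the study's analysis code; the function name and example weights are assumptions), a signed percentage error of portion-size estimates can be computed like this:

def mean_percent_error(estimated_weights, true_weights):
    """Mean signed error as a percentage of the true portion weight."""
    errors = [100.0 * (est - true) / true
              for est, true in zip(estimated_weights, true_weights)]
    return sum(errors) / len(errors)

# Hypothetical example weights in grams, for illustration only.
print(mean_percent_error([52, 110, 28], [50, 100, 30]))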


Journal of Vision | 2012

Optimal integration of shading and binocular disparity for depth perception.

Paul George Lovell; Marina Bloj; Julie M. Harris

We explore the relative utility of shape from shading and binocular disparity for depth perception. Ray-traced images either featured a smooth surface illuminated from above (shading-only) or were defined by small dots (disparity-only). Observers judged which of a pair of smoothly curved convex objects had most depth. The shading cue was around half as reliable as the rich disparity information for depth discrimination. Shading- and disparity-defined cues were combined by placing dots in the stimulus image, superimposed upon the shaded surface, resulting in veridical shading and binocular disparity. Independently varying the depth delivered by each channel allowed creation of conflicting disparity-defined and shading-defined depth. We manipulated the reliability of the disparity information by adding disparity noise. As noise levels in the disparity channel were increased, perceived depths and variances shifted toward those of the now more reliable shading cue. Several different models of cue combination were applied to the data. Perceived depths and variances were well predicted by a classic maximum likelihood estimator (MLE) model of cue integration, for all but one observer. We discuss the extent to which MLE is the most parsimonious model to account for observer performance.
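
For readers unfamiliar with the MLE model mentioned above, the following is a minimal sketch of the standard inverse-variance-weighted combination rule it refers to; the function name, symbols, and example values are illustrative assumptions, not the paper's data or code.

def mle_combine(d_shading, var_shading, d_disparity, var_disparity):
    """Combine two depth estimates, weighting each by its reliability (1/variance)."""
    w_shading = (1.0 / var_shading) / (1.0 / var_shading + 1.0 / var_disparity)
    w_disparity = 1.0 - w_shading
    combined_depth = w_shading * d_shading + w_disparity * d_disparity
    combined_var = 1.0 / (1.0 / var_shading + 1.0 / var_disparity)
    return combined_depth, combined_var

# Example: a noisy shading estimate combined with a more reliable disparity estimate.
print(mle_combine(d_shading=12.0, var_shading=4.0, d_disparity=10.0, var_disparity=1.0))

Raising var_disparity in this sketch shifts the combined estimate toward the shading value, mirroring the noise manipulation described above.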


Vision Research | 2002

Optic flow and scene structure do not always contribute to the control of human walking

Julie M. Harris; William Bonas

Using displacing prisms to dissociate the influence of optic flow and egocentric direction, previous research (Current Biology 8 (1998) 1191) showed that people primarily use egocentric direction to control their locomotion on foot, rather than optic flow. When wearing displacing prisms, participants followed the curved path predicted by the use of simple egocentric direction, rather than a straight path, as predicted by the use of optic flow. It has previously been suggested that, in rich visual environments, other visual information including optic flow and static scene structure may influence locomotion in addition to direction. Here we report a study where neither scene structure nor optic flow has any influence on the control of walking. Participants wearing displacing prisms walked along a well-lit corridor (containing rich scene structure and flow) and along the same corridor in darkness (no scene structure or flow). Heading errors were not significantly different between the dark and light conditions. Thus, even under conditions of rich scene structure and high flow when walking in a well-lit corridor, participants follow the same curved paths as when these cues are not available. These results demonstrate that there are conditions under which visual direction is the only useful source of visual information for the control of locomotion.
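
The curved-path prediction can be made concrete with a toy simulation (an illustrative assumption, not the authors' model): if every step is aimed at the target's apparent, prism-shifted egocentric direction, the trajectory bows sideways before converging on the target.

import math

def walk_with_prism(target=(0.0, 10.0), prism_deg=10.0, step=0.5, n_steps=20):
    """Step repeatedly toward the prism-displaced apparent direction of the target."""
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        # True egocentric direction of the target, then add the prism displacement.
        true_dir = math.atan2(target[0] - x, target[1] - y)
        perceived_dir = true_dir + math.radians(prism_deg)
        x += step * math.sin(perceived_dir)
        y += step * math.cos(perceived_dir)
        path.append((round(x, 2), round(y, 2)))
    return path

print(walk_with_prism())  # x drifts sideways and is corrected step by step: a curved path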


Vision Research | 2009

The role of monocularly visible regions in depth and surface perception

Julie M. Harris; Laurie M. Wilcox

The mainstream of binocular vision research has long been focused on understanding how binocular disparity is used for depth perception. In recent years, researchers have begun to explore how monocular regions in binocularly viewed scenes contribute to our perception of the three-dimensional world. Here we review the field as it currently stands, with a focus on understanding the extent to which the role of monocular regions in depth perception can be understood using extant theories of binocular vision.


Journal of Vision | 2010

Stereoscopic perception of real depths at large distances

Stephen Palmisano; Barbara Gillam; Donovan G. Govan; Robert S. Allison; Julie M. Harris

There has been no direct examination of stereoscopic depth perception at very large observation distances and depths. We measured perceptions of depth magnitude at distances where it is frequently reported, without evidence, that stereopsis is non-functional. We adapted methods pioneered at distances up to 9 m by R. S. Allison, B. J. Gillam, and E. Vecellio (2009) for use in a 381-m-long railway tunnel. Pairs of Light Emitting Diode (LED) targets were presented either in complete darkness or with the environment lit as far as the nearest LED (the observation distance). We found that binocular, but not monocular, estimates of the depth between pairs of LEDs increased with their physical depths up to the maximum depth separation tested (248 m). Binocular estimates of depth were much larger with a lit foreground than in darkness and increased as the observation distance increased from 20 to 40 m, indicating that binocular disparity can be scaled for much larger distances than previously realized. Since these observation distances were well beyond the range of vertical disparity and oculomotor cues, this scaling must rely on perspective cues. We also ran control experiments at smaller distances, which showed that estimates of depth and distance correlate poorly and that our metric estimation method gives similar results to a comparison method under the same conditions.
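
Why disparity must be "scaled" by viewing distance can be seen from the standard small-angle approximation relating a relative disparity to a depth interval; this sketch and its example values are illustrative assumptions, not the paper's calculations.

import math

def depth_from_disparity(relative_disparity_rad, viewing_distance_m, iod_m=0.065):
    """Approximate depth interval: delta_d ~ D**2 * eta / I (small-angle form)."""
    return (viewing_distance_m ** 2) * relative_disparity_rad / iod_m

# The same relative disparity signals far more depth at 40 m than at 20 m.
eta = math.radians(0.005)  # 18 arcsec, an assumed example disparity
print(depth_from_disparity(eta, 20.0), depth_from_disparity(eta, 40.0))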


Frontiers in Psychology | 2010

Two independent mechanisms for motion-in-depth perception: evidence from individual differences.

Harold T. Nefs; Louise O'Hare; Julie M. Harris

Our forward-facing eyes allow us the advantage of binocular visual information: using the tiny differences between right and left eye views to learn about depth and location in three dimensions. Our visual systems also contain specialized mechanisms to detect motion-in-depth from binocular vision, but the nature of these mechanisms remains controversial. Binocular motion-in-depth perception could theoretically be based on first detecting binocular disparity and then monitoring how it changes over time. The alternative is to monitor the motion in the right and left eye separately and then compare these motion signals. Here we used an individual differences approach to test whether the two sources of information are processed via dissociated mechanisms, and to measure the relative importance of those mechanisms. Our results suggest the existence of two distinct mechanisms, each contributing to the perception of motion-in-depth in most observers. Additionally, for the first time, we demonstrate the relative prevalence of the two mechanisms within a normal population. In general, visual systems appear to rely mostly on the mechanism sensitive to changing binocular disparity, but perception of motion-in-depth is augmented by the presence of a less sensitive mechanism that uses interocular velocity differences. Occasionally, we find observers with the opposite pattern of sensitivity. More generally this work showcases the power of the individual differences approach in studying the functional organization of cognitive systems.
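
As a minimal sketch of the two candidate signals described above (illustrative code, not the authors' analysis), changing disparity tracks the temporal derivative of the left-right position difference, while the interocular velocity difference compares the two monocular velocities.

def changing_disparity(x_left, x_right, dt):
    """Motion-in-depth signal from the rate of change of binocular disparity."""
    disparities = [xl - xr for xl, xr in zip(x_left, x_right)]
    return [(d2 - d1) / dt for d1, d2 in zip(disparities, disparities[1:])]

def iovd(x_left, x_right, dt):
    """Motion-in-depth signal from the difference between monocular velocities."""
    v_left = [(b - a) / dt for a, b in zip(x_left, x_left[1:])]
    v_right = [(b - a) / dt for a, b in zip(x_right, x_right[1:])]
    return [vl - vr for vl, vr in zip(v_left, v_right)]

# For a noise-free approaching target the two computations agree numerically;
# the paper's point is that the visual system implements them separately.
x_left = [0.0, 0.01, 0.02, 0.03]
x_right = [0.0, -0.01, -0.02, -0.03]
print(changing_disparity(x_left, x_right, dt=0.01))
print(iovd(x_left, x_right, dt=0.01))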


Nature Neuroscience | 2005

Using visual direction in three-dimensional motion perception

Julie M. Harris; Vit F Drga

The eyes receive slightly different views of the world, and the differences between their images (binocular disparity) are used to see depth. Several authors have suggested how the brain could exploit this information for three-dimensional (3D) motion perception, but here we consider a simpler strategy. Visual direction is the angle between the direction of an object and the direction that an observer faces. Here we describe human behavioral experiments in which observers use visual direction, rather than binocular information, to estimate an object's 3D motion even though this causes them to make systematic errors. This suggests that recent models of binocular 3D motion perception may not reflect the strategies that human observers actually use.


Journal of Experimental Psychology: Human Perception and Performance | 2003

Accuracy and Precision of Binocular 3-D Motion Perception

Julie M. Harris; Philip J. A. Dean

In principle, information for 3-D motion perception is provided by the differences in position and motion between left- and right-eye images of the world. It is known that observers can precisely judge between different 3-D motion trajectories, but the accuracy of binocular 3-D motion perception has not been studied. The authors measured the accuracy of 3-D motion perception. In 4 different tasks, observers were inaccurate, overestimating trajectory angle, despite consistently choosing similar angles (high precision). Errors did not vary consistently with target distance, as would be expected had inaccuracy been due to misestimates of viewing distance. Observers appeared to rely strongly on the lateral position of the target, almost to the exclusion of the use of depth information. For the present tasks, these data suggest that neither an accurate estimate of 3-D motion direction nor one of passing distance can be obtained using only binocular cues to motion in depth.
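
A small numerical sketch of the accuracy/precision distinction drawn above (the values are invented for illustration, not the paper's data): a large mean bias with a small spread corresponds to inaccurate but precise judgments.

import statistics

judged_angles_deg = [28.0, 30.0, 29.0, 31.0, 30.0]  # assumed example trajectory judgments
true_angle_deg = 12.0

bias = statistics.mean(judged_angles_deg) - true_angle_deg  # accuracy: large systematic error
spread = statistics.stdev(judged_angles_deg)                # precision: small variability

print(f"bias = {bias:.1f} deg, sd = {spread:.1f} deg")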


Nature | 2005

Sporting contests: Seeing red? Putting sportswear in context

Candy Rowe; Julie M. Harris; S. Craig Roberts

Arising from: R. A. Hill & R. A. Barton, Nature 435, 293 (2005); doi:10.1038/435293a; R. A. Hill & R. A. Barton reply. There is a Corrigendum (11 May 2006) associated with this document. The shirt colour worn by sportsmen can affect the behaviour of the competitors, but Hill and Barton show that it may also influence the outcome of contests. By analysing the results of men's combat sports from the Athens 2004 Olympics, they found that more matches were won by fighters wearing red outfits than by those wearing blue; they suggest that red might confer success because it is a sign of dominance in many animal species and could signal aggression in human contests. Here we use another data set from the 2004 Olympics to show that similar winning biases occur in contests in which neither contestant wears red, indicating that a different mechanism may be responsible for these effects.


Vision Research | 2003

Poor visibility of motion in depth is due to early motion averaging.

Julie M. Harris; Simon K. Rushton

Under a variety of conditions, motion in depth from binocular cues is harder to detect than lateral motion in the frontoparallel plane. This is surprising, as the nasal-temporal motion in the left eye associated with motion in depth is easily detectable, as is the nasal-temporal motion in the right eye. It is only when the two motions are combined in binocular viewing that detection can become difficult. We previously suggested that the visibility of motion-in-depth is low because early stereomotion detectors average left and right retinal motions. For motion in depth, a neural averaging process would produce a motion signal close to zero. Here we tested the averaging hypothesis further. Specifically we asked, could the reduced visibility observed in previous experiments be associated with depth and layout in the stimuli, rather than motion averaging? We used anti-correlated random dot stereograms to show that, despite no depth being perceived, it is still harder to detect motion when it is presented in opposite directions in the two eyes than when motion is presented in the same direction in the two eyes. This suggests that the motion in depth signal is lost due to early motion averaging, rather than due to the presence of noise from the perceived depth patterns in the stimulus.
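
The averaging account can be stated in one line of arithmetic (an illustrative sketch, not the authors' model code): averaging the two eyes' retinal velocities cancels opposite-direction motion but preserves same-direction motion.

def averaged_motion_signal(v_left, v_right):
    """Output of a hypothetical early detector that averages the two eyes' velocities."""
    return (v_left + v_right) / 2.0

print(averaged_motion_signal(2.0, -2.0))  # opposite directions (motion in depth) -> 0.0, hard to detect
print(averaged_motion_signal(2.0, 2.0))   # same direction (lateral motion) -> 2.0, easy to detect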

Collaboration


Dive into Julie M. Harris's collaborations.

Top Co-Authors

Marina Bloj
University of Bradford

Harold T. Nefs
Delft University of Technology