Publication


Featured research published by Daniel R. Saunders.


Psychological Science | 2011

Body Configuration Modulates the Usage of Local Cues to Direction in Biological-Motion Perception

Masahiro Hirai; Dorita H. F. Chang; Daniel R. Saunders; Nikolaus F. Troje

The presence of information in a visual display does not guarantee its use by the visual system. Studies of inversion effects in both face recognition and biological-motion perception have shown that the same information may be used by observers when it is presented in an upright display but not used when the display is inverted. In our study, we tested the inversion effect in scrambled biological-motion displays to investigate mechanisms that validate information contained in the local motion of a point-light walker. Using novel biological-motion stimuli that contained no configural cues to the direction in which a walker was facing, we found that manipulating the relative vertical location of the walker’s feet significantly affected observers’ performance on a direction-discrimination task. Our data demonstrate that, by themselves, local cues can almost unambiguously indicate the facing direction of the agent in biological-motion stimuli. Additionally, we document a noteworthy interaction between local and global information and offer a new explanation for the effect of local inversion in biological-motion perception.


Perception | 2009

Off on the Wrong Foot: Local Features in Biological Motion

Daniel R. Saunders; Julia Suchan; Nikolaus F. Troje

Biological-motion perception consists of a number of different phenomena. They include global mechanisms that support the retrieval of the coherent shape of a walker, but also mechanisms which derive information from the local motion of its parts about facing direction and animacy, independent of the particular shape of the display. A large body of the literature on biological-motion perception is based on a synthetic stimulus generated by an algorithm published by James Cutting in 1978 (Perception 7 393–405). Here we show that this particular stimulus lacks a visual invariant inherent to the local motion of the feet of a natural walker, which in more realistic motion patterns indicates the facing direction of a walker independent of its shape. Comparing Cutting's walker to a walker derived from motion-captured data of real human walkers, we find no difference between the two displays in a detection task designed such that observers had to rely on global shape. In a direction discrimination task, however, in which only local motion was accessible to the observer, performance on Cutting's walker was at chance, while direction could still be retrieved from the stimuli derived from the real walker.


Journal of Medical Internet Research | 2013

Crowdsourcing a Normative Natural Language Dataset: A Comparison of Amazon Mechanical Turk and In-Lab Data Collection

Daniel R. Saunders; Peter J. Bex; Russell L. Woods

Background: Crowdsourcing has become a valuable method for collecting medical research data. This approach, recruiting through open calls on the Web, is particularly useful for assembling large normative datasets. However, it is not known how natural language datasets collected over the Web differ from those collected under controlled laboratory conditions.

Objective: To compare the natural language responses obtained from a crowdsourced sample of participants with responses collected in a conventional laboratory setting from participants recruited according to specific age and gender criteria.

Methods: We collected natural language descriptions of 200 half-minute movie clips from Amazon Mechanical Turk workers (crowdsourced) and 60 participants recruited from the community (lab-sourced). Crowdsourced participants responded to as many clips as they wanted and typed their responses, whereas lab-sourced participants gave spoken responses to 40 clips, and their responses were transcribed. The content of the responses was evaluated using a take-one-out procedure, which compared each response to other responses to the same clip and to other clips, using the average number of shared words.

Results: In contrast to the 13 months of recruiting required to collect normative data from 60 lab-sourced participants (with specific demographic characteristics), only 34 days were needed to collect normative data from 99 crowdsourced participants (contributing a median of 22 responses). The majority of crowdsourced workers were female, and their median age was 35 years, lower than the lab-sourced median of 62 years but similar to the median age of the US population. The responses contributed by the crowdsourced participants were longer on average (33 words compared to 28 words, P<.001), and they used a less varied vocabulary. However, there was strong similarity in the words used to describe a particular clip between the two datasets, as a cross-dataset count of shared words showed (P<.001). Within both datasets, responses contained substantial relevant content, with more words in common with responses to the same clip than with responses to other clips (P<.001). There was evidence that responses from female and older crowdsourced participants had more shared words (P=.004 and P=.01, respectively), whereas younger participants had more shared words in the lab-sourced population (P=.01).

Conclusions: Crowdsourcing is an effective approach to quickly and economically collect a large, reliable dataset of normative natural language responses.
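The take-one-out comparison described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' actual scoring code: the tokenization rule, the set-based (rather than multiset) word counting, and the toy responses are all assumptions.

```python
import re

def tokenize(text):
    """Lowercase a free-text response and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def shared_word_count(response, reference):
    """Count the distinct words a response shares with a reference response."""
    return len(tokenize(response) & tokenize(reference))

def take_one_out_score(response, same_clip, other_clips):
    """Compare one response to other responses to the same clip and to
    responses to other clips. Returns the mean shared-word counts
    within-clip and across-clip; a relevant response should score
    higher within-clip."""
    within = sum(shared_word_count(response, r) for r in same_clip) / len(same_clip)
    across = sum(shared_word_count(response, r) for r in other_clips) / len(other_clips)
    return within, across

# Toy illustration with hypothetical responses
same = ["a man walks a dog in the park", "someone walking a dog outside"]
other = ["two cars collide at an intersection"]
w, a = take_one_out_score("a person walks their dog", same, other)
```

On this toy input the within-clip mean exceeds the across-clip mean, which is the pattern the abstract reports for responses with relevant content.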


Journal of Vision | 2010

Gaze patterns during perception of direction and gender from biological motion

Daniel R. Saunders; David K. Williamson; Nikolaus F. Troje

Humans can perceive many properties of a creature in motion from the movement of the major joints alone. However, it is likely that some regions of the body are more informative than others, depending on the task. We recorded eye movements while participants performed two tasks with point-light walkers: determining the direction of walking, or determining the walker's gender. To vary task difficulty, walkers were displayed from different view angles and with different degrees of expressed gender. The effects on eye movements were evaluated by generating fixation maps and by analyzing the number of fixations in regions of interest representing the shoulders, pelvis, and feet. In both tasks participants frequently fixated the pelvis region, but there were relatively more fixations at the shoulders in the gender task and more fixations at the feet in the direction task. Increasing direction-task difficulty increased the focus on the foot region. An individual's task performance could not be predicted by their distribution of fixations. However, by showing where observers seek information, the study supports previous findings that the feet play an important part in the perception of walking direction, and that the shoulders and hips are particularly important for the perception of gender.


Journal of Vision | 2011

Allocation of attention to biological motion: local motion dominates global shape.

Masahiro Hirai; Daniel R. Saunders; Nikolaus F. Troje

Directional information can be retrieved from a point-light walker (PLW) in two different ways: either from recovering the global shape of the articulated body or from signals in the local motion of individual dots. Here, we introduce a voluntary eye movement task to assess how the direction of a centrally presented, task-irrelevant PLW affects the onset latency and accuracy of saccades to peripheral targets. We then use this paradigm to design experiments to study which aspects of biological motion (the global form mediated by the motion of the walker, or the local movements of critical features) drive the observed attentional effects. Putting the two cues into conflict, we show that saccade latency and accuracy were affected by the local motion of the dots representing the walker's feet, but only if they retain their familiar, predictable location within the display.


Journal of Vision | 2013

Trajectory prediction of saccadic eye movements using a compressed exponential model.

Peng Han; Daniel R. Saunders; Russell L. Woods; Gang Luo

Gaze-contingent display paradigms play an important role in vision research. The time delay due to data transmission from eye tracker to monitor may lead to a misalignment between the gaze direction and image manipulation during eye movements, and therefore compromise the contingency. We present a method to reduce this misalignment by using a compressed exponential function to model the trajectories of saccadic eye movements. Our algorithm was evaluated using experimental data from 1,212 saccades ranging from 3° to 30°, which were collected with an EyeLink 1000 and a Dual-Purkinje Image (DPI) eye tracker. The model fits eye displacement with a high agreement (R² > 0.96). When assuming a 10-millisecond time delay, prediction of 2D saccade trajectories using our model could reduce the misalignment by 30% to 60% with the EyeLink tracker and 20% to 40% with the DPI tracker for saccades larger than 8°. Because a certain number of samples are required for model fitting, the prediction did not offer improvement for most small saccades and the early stages of large saccades. Evaluation was also performed for a simulated 100-Hz gaze-contingent display using the prerecorded saccade data. With prediction, the percentage of misalignment larger than 2° dropped from 45% to 20% for EyeLink and 42% to 26% for DPI data. These results suggest that the saccade-prediction algorithm may help create more accurate gaze-contingent displays.
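The prediction scheme in this abstract can be sketched as follows. The compressed-exponential parameterization s(t) = A·(1 − exp(−(t/τ)^β)), the grid-search fit, and the toy saccade parameters are assumptions for illustration; the paper's actual fitting procedure may differ, and a real implementation would use a nonlinear least-squares solver rather than a grid.

```python
import math

def displacement(t, amp, tau, beta):
    """Assumed compressed-exponential saccade displacement:
    s(t) = amp * (1 - exp(-(t/tau)^beta)), with beta > 1."""
    return amp * (1.0 - math.exp(-((t / tau) ** beta)))

def fit_grid(samples):
    """Fit (amp, tau, beta) to (t, s) samples by least-squares error over
    a coarse grid. The grid ranges are chosen to bracket the toy
    parameters used below."""
    best, best_err = None, float("inf")
    for amp in [6.0 + 0.5 * k for k in range(13)]:           # 6.0 .. 12.0 deg
        for tau in [0.01 * k for k in range(1, 11)]:         # 10 .. 100 ms
            for beta in [1.0 + 0.25 * k for k in range(9)]:  # 1.0 .. 3.0
                err = sum((displacement(t, amp, tau, beta) - s) ** 2
                          for t, s in samples)
                if err < best_err:
                    best, best_err = (amp, tau, beta), err
    return best

# Toy illustration: noiseless samples from a simulated 10-degree saccade
# (sampled every 2 ms), then extrapolate the eye position 10 ms past the
# last sample to compensate for display latency.
true_amp, true_tau, true_beta = 10.0, 0.02, 2.0
samples = [(0.002 * k, displacement(0.002 * k, true_amp, true_tau, true_beta))
           for k in range(1, 11)]
amp, tau, beta = fit_grid(samples)
predicted = displacement(samples[-1][0] + 0.010, amp, tau, beta)
```

Because the model needs several samples before the fit stabilizes, this kind of extrapolation helps mainly for large saccades and late in the trajectory, consistent with the limitation the abstract notes for small saccades.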


Current Zoology | 2017

Social interactivity in pigeon courtship behavior

E. L. R. Ware; Daniel R. Saunders; Nikolaus F. Troje

A closed-loop teleprompter system was used to isolate and manipulate social interactivity in the natural courtship interactions of pigeons (Columba livia). In Experiment 1, a live face-to-face real-time interaction between 2 courting pigeons (Live) was compared to a played back version of the video stimulus recorded during the pair's Live interaction. We found that pigeons were behaving interactively; their behavior depended on the relationships between their own signals and those of their partner. In Experiment 2, we tested whether social interactivity relies on spatial cues present in the facing direction of a partner's display. By moving the teleprompter camera 90° away from its original location, the partner's display was manipulated to appear as if it were directed 90° away from the subject. We found no effect of spatial offset on the pigeon's behavioral response. In Experiment 3, three time delays (1 s, 3 s, and 9 s), a Live condition, and a Playback condition were chosen to investigate the importance of temporal contiguity in social interactivity. Furthermore, both opposite-sex (courtship) and same-sex (rivalry) pairs were studied to investigate whether social context affects sensitivity to social interactivity. Our results showed that pigeon courtship behavior is sensitive to temporal contiguity. Behavior declined in the 9 s and Playback conditions as compared to the Live condition and the shorter time delays. For males only, courtship behavior also increased in the 3-s delay condition. The effect of social interactivity and time delay was not observed in rivalry interactions, suggesting that social interactivity may be specific to courtship.


Biology Open | 2015

The influence of motion quality on responses towards video playback stimuli

E. L. R. Ware; Daniel R. Saunders; Nikolaus F. Troje

Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite-sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60 p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour.


PLOS ONE | 2014

Measuring Information Acquisition from Sensory Input Using Automated Scoring of Natural-Language Descriptions

Daniel R. Saunders; Peter J. Bex; Dylan Rose; Russell L. Woods

Information acquisition, the gathering and interpretation of sensory information, is a basic function of mobile organisms. We describe a new method for measuring this ability in humans, using free-recall responses to sensory stimuli which are scored objectively using a “wisdom of crowds” approach. As an example, we demonstrate this metric using perception of video stimuli. Immediately after viewing a 30 s video clip, subjects responded to a prompt to give a short description of the clip in natural language. These responses were scored automatically by comparison to a dataset of responses to the same clip by normally-sighted viewers (the crowd). In this case, the normative dataset consisted of responses to 200 clips by 60 subjects who were stratified by age (range 22 to 85y) and viewed the clips in the lab, for 2,400 responses, and by 99 crowdsourced participants (age range 20 to 66y) who viewed clips in their Web browser, for 4,000 responses. We compared different algorithms for computing these similarities and found that a simple count of the words in common had the best performance. It correctly matched 75% of the lab-sourced and 95% of crowdsourced responses to their corresponding clips. We validated the measure by showing that when the amount of information in the clip was degraded using defocus lenses, the shared word score decreased across the five predetermined visual-acuity levels, demonstrating a dose-response effect (N = 15). This approach, of scoring open-ended immediate free recall of the stimulus, is applicable not only to video, but also to other situations where a measure of the information that is successfully acquired is desirable. Information acquired will be affected by stimulus quality, sensory ability, and cognitive processes, so our metric can be used to assess each of these components when the others are controlled.
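The best-performing scoring rule in this abstract, a simple count of words in common with a crowd of normative responses, can be used to match a response to its clip roughly as sketched below. The clip identifiers and responses are hypothetical, and the tokenization and tie-breaking details are assumptions, not the authors' implementation.

```python
import re

def words(text):
    """Lowercase a response and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))

def match_clip(response, normative):
    """Assign a free-recall response to the clip whose normative (crowd)
    responses share the most words with it in total."""
    resp = words(response)
    return max(normative,
               key=lambda clip: sum(len(resp & words(r))
                                    for r in normative[clip]))

# Toy normative dataset: clip id -> crowd responses (hypothetical)
normative = {
    "clip_dog":  ["a man walks his dog", "person walking a dog in a park"],
    "clip_cars": ["two cars crash", "a car accident at an intersection"],
}
best = match_clip("someone is walking their dog", normative)
```

A degraded stimulus (e.g. viewed through defocus lenses) should yield responses that share fewer words with the crowd, which is the dose-response effect the abstract uses for validation.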


Journal of Vision | 2011

A test battery for assessing biological motion perception

Daniel R. Saunders; Nikolaus F. Troje

Biological motion perception is often assessed using a single task. However, it was shown that there are at least two distinct processes at work, one based on local motion information and one based on integrating information about the structure across the display (Troje and Westhoff, 2006). Along with other recent results, this suggests that biological motion is analyzed via a hierarchy of perceptual abilities, including:

Collaboration

Top co-authors of Daniel R. Saunders:

Peter J. Bex (Northeastern University)
Masahiro Hirai (Japan Society for the Promotion of Science)
Dylan Rose (Northeastern University)
Julia Suchan (University of Tübingen)