
Publication


Featured research published by Ak Dunn.


Journal of Evolutionary Psychology | 2009

Multiple signals in human mate selection: A review and framework for integrating facial and vocal signals

Tj Wells; Ak Dunn; Mark J. T. Sergeant; Mark Davies

Evolutionary adaptation in variable environments is likely to give rise to several signals that can be used to identify a suitable mate in multisensory organisms. The presence of multiple signals for sexual selection could be advantageous, limiting the chance of mating with a suboptimal partner and avoiding the costs of inferior progeny. Despite extensive research into isolated signals of attractiveness, the amalgamation of multiple signals in sexual selection is poorly understood, particularly in humans. Inferences regarding both the function and importance of such signals are therefore tentative unless the effects are considered together. Here, the literature regarding two evolved signals of attraction (cf. faces and voices) is reviewed in relation to a framework (Candolin 2003) for signal integration. It is argued that the functional nature of signals of attractiveness would be better studied through manipulation and experimentation with both single and multiple signals. Considering the preval...


Evolutionary Psychology | 2016

Concordant Cues in Faces and Voices: Testing the Backup Signal Hypothesis

Harriet M. J. Smith; Ak Dunn; Thom Baguley; Paula C. Stacey

Information from faces and voices combines to provide multimodal signals about a person. Faces and voices may offer redundant, overlapping (backup signals), or complementary information (multiple messages). This article reports two experiments which investigated the extent to which faces and voices deliver concordant information about dimensions of fitness and quality. In Experiment 1, participants rated faces and voices on scales for masculinity/femininity, age, health, height, and weight. The results showed that people make similar judgments from faces and voices, with particularly strong correlations for masculinity/femininity, health, and height. If, as these results suggest, faces and voices constitute backup signals for various dimensions, it is hypothetically possible that people would be able to accurately match novel faces and voices for identity. However, previous investigations into novel face–voice matching offer contradictory results. In Experiment 2, participants saw a face and heard a voice and were required to decide whether the face and voice belonged to the same person. Matching accuracy was significantly above chance level, suggesting that judgments made independently from faces and voices are sufficiently similar that people can match the two. Both sets of results were analyzed using multilevel modeling and are interpreted as being consistent with the backup signal hypothesis.


Attention Perception & Psychophysics | 2016

Matching novel face and voice identity using static and dynamic facial images

Harriet M. J. Smith; Ak Dunn; Thom Baguley; Paula C. Stacey

Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face–voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced choice matching task. They either heard a voice and then saw two faces or saw a face and then heard two voices. Face–voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face–voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face–voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face–voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face–voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.


Archives of Sexual Behavior | 2013

Perceptions of Human Attractiveness Comprising Face and Voice Cues

Tj Wells; Thom Baguley; Mark J. T. Sergeant; Ak Dunn

In human mate choice, sexually dimorphic faces and voices comprise hormone-mediated cues that purportedly develop as an indicator of mate quality or the ability to compete with same-sex rivals. If preferences for faces communicate the same biologically relevant information as do voices, then ratings of these cues should correlate. Sixty participants (30 male and 30 female) rated a series of opposite-sex faces, voices, and faces together with voices for attractiveness in a repeated measures computer-based experiment. The effects of face and voice attractiveness on face-voice compound stimuli were analyzed using a multilevel model. Faces contributed proportionally more than voices to ratings of face-voice compound attractiveness. Faces and voices positively and independently contributed to the attractiveness of male compound stimuli although there was no significant correlation between their rated attractiveness. A positive interaction and correlation between attractiveness was shown for faces and voices in relation to the attractiveness of female compound stimuli. Rather than providing a better estimate of a single characteristic, male faces and voices may instead communicate independent information that, in turn, provides a female with a better assessment of overall mate quality. Conversely, female faces and voices together provide males with a more accurate assessment of a single dimension of mate quality.


Cognitive Neuroscience | 2010

Ventral and dorsal streams as modality-independent phenomena

Benjamin J. Dyson; Ak Dunn; Claude Alain

Interest in ventral and dorsal streams is not limited to vision, and the functionality of similar pathways in other domains has also been considered. Auditory dual pathway models share many conceptual and empirical concerns with those put forward for vision, including the absolute vs. relative, localized vs. distributed, and exact nature of functionality of the two streams. Despite their problems, dual pathway hypotheses provide broad frameworks with which to consider cortical architecture across the senses.


Quarterly Journal of Experimental Psychology | 2018

The effect of inserting an inter-stimulus interval in face–voice matching tasks

Harriet M. J. Smith; Ak Dunn; Thom Baguley; Paula C. Stacey

Voices and static faces can be matched for identity above chance level. No previous face–voice matching experiments have included an inter-stimulus interval (ISI) exceeding 1 s. We tested whether accurate identity decisions rely on high-quality perceptual representations temporarily stored in sensory memory, and therefore whether the ability to make accurate matching decisions diminishes as the ISI increases. In each trial, participants had to decide whether an unfamiliar face and voice belonged to the same person. The face and voice stimuli were presented simultaneously in Experiment 1, and there was a 5-s ISI in Experiment 2, and a 10-s interval in Experiment 3. The results, analysed using multilevel modelling, revealed that static face–voice matching was significantly above chance level only when the stimuli were presented simultaneously (Experiment 1). The overall bias to respond same identity weakened as the interval increased, suggesting that this bias is explained by temporal contiguity. Taken together, the findings highlight that face–voice matching performance is reliant on comparing fast-decaying, high-quality perceptual representations. The results are discussed in terms of social functioning.


Memory | 2013

Testing the exclusivity effect in location memory

Daniel P. A. Clark; Ak Dunn; Thom Baguley

There is a growing literature exploring the possibility of parallel retrieval of location memories, although this literature focuses primarily on the speed of retrieval with little attention to the accuracy of location memory recall. Baguley, Lansdale, Lines, and Parkin (2006) found that when a person has two or more memories for an object's location, their recall accuracy suggests that only one representation can be retrieved at a time (exclusivity). This finding is counterintuitive given evidence of non-exclusive recall in the wider memory literature. The current experiment explored the exclusivity effect further and aimed to promote an alternative outcome (i.e., independence or superadditivity) by encouraging the participants to combine multiple representations of space at encoding or retrieval. This was encouraged by using anchor (points of reference) labels that could be combined to form a single strongly associated combination. It was hypothesised that the ability to combine the anchor labels would allow the two representations to be retrieved concurrently, generating higher levels of recall accuracy. The results demonstrate further support for the exclusivity hypothesis, showing no significant improvement in recall accuracy when there are multiple representations of a target object's location as compared to a single representation.


Spatial Cognition and Computation | 2018

Spatial memory exclusivity: Examining performance of multiple object-location memories

Thomas J. Dunn; Thom Baguley; Ak Dunn

Research on location memory suggests that integration of separate sources of information does not occur when recalling the position of a common target object. In a relatively simple task, previous research shows no observable benefit from holding two spatial memories compared to one. It has been suggested that exclusively utilizing only one of two memories may account for this finding. The current research tests the robustness of this idea as well as an alternative in the form of an averaging approach to combining spatial information. The results suggest that exclusivity may not be the best account for performance of multiple object-location memories. Rather, memories may well combine in a manner similar to averaging, where information is available for each memory but combined in a nonbeneficial way.


Language, Cognition and Neuroscience | 2018

An exploration of the accentuation effect: errors in memory for voice fundamental frequency (F0) and speech rate

Georgina Gous; Ak Dunn; Thom Baguley; Paula C. Stacey

The accentuation effect demonstrates how memory often reflects category typical representations rather than the specific features of learned items. The present study investigated the impact of manipulating fundamental frequency (F0) and speech rate (syllables per second) on immediate target matching performance (selecting a voice from a pair to match a previously heard target voice) for a range of synthesised voices. It was predicted that when participants were presented with high or low frequency target voices, voices even higher or lower in frequency would be selected. The same pattern was also predicted for speech rate. Inconsistent with the accentuation account, the results showed a general bias to select voices higher in frequency for high, moderate, and low frequency target voices. For speech rate, listeners selected voices faster in rate for slow rate target voices. Overall it seems doubtful that listeners rely solely on categorical information about voices during recognition.


Journal of Neurophysiology | 2018

Asymmetric interference between cognitive task components and concurrent sensorimotor coordination

Joshua Baker; Antonio Castro; Ak Dunn; Suvobrata Mitra

Everyday cognitive tasks are frequently performed under dual-task conditions alongside continuous sensorimotor coordinations (CSCs) such as driving, walking, or balancing. Observed interference in these dual-task settings is commonly attributed to demands on executive function or attentional resources, but the time course and reciprocity of interference are not well understood at the level of information-processing components. Here we used electrophysiology to study the detailed chronometry of dual-task interference between a visual oddball task and a continuous visuomanual tracking task. The oddball task's electrophysiological components were linked to underlying cognitive processes, and the tracking task served as a proxy for the continuous cycle of state monitoring and adjustment inherent to CSCs. Dual-tasking interfered with the oddball task's accuracy and attentional processes (attenuated P2 and P3b magnitude and parietal alpha-band event-related desynchronization), but errors in tracking due to dual-tasking accrued at a later timescale and only in trials in which the target stimulus appeared and its tally had to be incremented. Interference between cognitive tasks and CSCs can be asymmetric in terms of timing as well as affected information-processing components. NEW & NOTEWORTHY Interference between cognitive tasks and continuous sensorimotor coordination (CSC) has been widely reported, but this is the first demonstration that the cognitive operation that is impaired by concurrent CSC may not be the one that impairs the CSC. Also demonstrated is that interference between such tasks can be temporally asymmetric. The asynchronicity of this interference has significant implications for understanding and mitigating loss of mobility in old age, and for rehabilitation for neurological impairments.

Collaboration


Ak Dunn's frequent collaborators and their affiliations.

Top Co-Authors

Thom Baguley
Nottingham Trent University

Paula C. Stacey
Nottingham Trent University

Tj Wells
Aberystwyth University

Rf Larkin
Nottingham Trent University

James Stiller
Nottingham Trent University