Jessica Irons
Australian National University
Publications
Featured research published by Jessica Irons.
Journal of Experimental Psychology: Human Perception and Performance | 2012
Jessica Irons; Charles L. Folk; Roger W. Remington
Although models of visual search have often assumed that attention can only be set for a single feature or property at a time, recent studies have suggested that it may be possible to maintain more than one attentional control setting. The aim of the present study was to investigate whether spatial attention could be guided by multiple attentional control settings for color. In a standard spatial cueing task, participants searched for either of two colored targets accompanied by an irrelevantly colored distractor. Across five experiments, results consistently showed that nonpredictive cues matching either target color produced a significant spatial cueing effect, while irrelevantly colored cues did not. This was the case even when the target colors could not be linearly separated from irrelevant cue colors in color space, suggesting that participants were not simply adopting one general color set that included both target colors. The results could not be explained by intertrial priming by previous targets, nor could they be explained by a single inhibitory set for the distractor color. Overall, the results are most consistent with the maintenance of multiple attentional control settings.
PLOS ONE | 2013
Romina Palermo; Kirsty B. O’Connor; Joshua M. Davis; Jessica Irons; Elinor McKone
Although good tests are available for diagnosing clinical impairments in face expression processing, there is a lack of strong tests for assessing "individual differences" – that is, differences in ability between individuals within the typical, nonclinical, range. Here, we develop two new tests, one for expression perception (an odd-man-out matching task in which participants select which one of three faces displays a different expression) and one additionally requiring explicit identification of the emotion (a labelling task in which participants select one of six verbal labels). We demonstrate validity (careful check of individual items, large inversion effects, independence from nonverbal IQ, convergent validity with a previous labelling task), reliability (Cronbach's alphas of .77 and .76, respectively), and wide individual differences across the typical population. We then demonstrate the usefulness of the tests by addressing theoretical questions regarding the structure of face processing, specifically the extent to which the following processes are common or distinct: (a) perceptual matching and explicit labelling of expression (modest correlation between matching and labelling supported partial independence); (b) judgement of expressions from faces and voices (results argued labelling tasks tap into a multi-modal system, while matching tasks tap distinct perceptual processes); and (c) expression and identity processing (results argued for a common first step of perceptual processing for expression and identity).
Quarterly Journal of Experimental Psychology | 2017
Romina Palermo; Bruno Rossion; Gillian Rhodes; Renaud Laguesse; Tolga Tez; Bronwyn Hall; Andrea Albonico; Manuela Malaspina; Roberta Daini; Jessica Irons; Shahd Al-Janabi; Libby Taylor; Davide Rivolta; Elinor McKone
Diagnosis of developmental or congenital prosopagnosia (CP) involves self-report of everyday face recognition difficulties, which are corroborated by poor performance on behavioural tests. This approach requires accurate self-evaluation. We examine the extent to which typical adults have insight into their face recognition abilities across four experiments involving nearly 300 participants. The experiments used five tests of face recognition ability: two that tap the ability to learn and recognize previously unfamiliar faces [the Cambridge Face Memory Test, CFMT; Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44(4), 576–585. doi:10.1016/j.neuropsychologia.2005.07.001; and a newly devised test based on the CFMT but where the study phases involve watching short movies rather than viewing static faces—the CFMT-Films] and three that tap face matching [Benton Facial Recognition Test, BFRT; Benton, A., Sivan, A., Hamsher, K., Varney, N., & Spreen, O. (1983). Contribution to neuropsychological assessment. New York: Oxford University Press; and two recently devised sequential face matching tests]. Self-reported ability was measured with the 15-item Kennerknecht et al. questionnaire [Kennerknecht, I., Ho, N. Y., & Wong, V. C. (2008). Prevalence of hereditary prosopagnosia (HPA) in Hong Kong Chinese population. American Journal of Medical Genetics Part A, 146A(22), 2863–2870. doi:10.1002/ajmg.a.32552]; two single-item questions assessing face recognition ability; and a new 77-item meta-cognition questionnaire. Overall, we find that adults with typical face recognition abilities have only modest insight into their ability to recognize faces on behavioural tests.
In a fifth experiment, we assess self-reported face recognition ability in people with CP and find that some people who expect to perform poorly on behavioural tests of face recognition do indeed perform poorly. However, it is not yet clear whether individuals within this group of poor performers have greater levels of insight (i.e., into their degree of impairment) than those with more typical levels of performance.
Journal of Vision | 2014
Jessica Irons; Elinor McKone; Rachael Dumbleton; Nick Barnes; Xuming He; Jan M. Provis; Callin Ivanovici; Alisa Kwa
Damage to central vision, of which age-related macular degeneration (AMD) is the most common cause, leaves patients with only blurred peripheral vision. Previous approaches to improving face recognition in AMD have employed image manipulations designed to enhance early-stage visual processing (e.g., magnification, increased HSF contrast). Here, we argue that further improvement may be possible by targeting known properties of mid- and/or high-level face processing. We enhance identity-related shape information in the face by caricaturing each individual away from an average face. We simulate early- through late-stage AMD-blur by filtering spatial frequencies to mimic the amount of blurring perceived at approximately 10° through 30° into the periphery (assuming a face seen premagnified on a tablet computer). We report caricature advantages for all blur levels, for face viewpoints from front view to semiprofile, and in tasks involving perceiving differences in facial identity between pairs of people, remembering previously learned faces, and rejecting new faces as unknown. Results provide a proof of concept that caricaturing may assist in improving face recognition in AMD and other disorders of central vision.
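The blur-simulation step described above — low-pass filtering a face image more aggressively at greater simulated eccentricities — can be sketched as follows. This is an illustrative toy implementation, not the study's code: the separable box-blur filter, the `box_blur` name, and the eccentricity-to-radius mapping are all assumptions standing in for the study's spatial-frequency filtering.

```python
def box_blur(img, radius):
    """Crude low-pass filter: a separable box blur on a 2D grayscale image
    (list of rows). A larger radius removes more high spatial frequencies,
    mimicking greater simulated eccentricity."""
    h, w = len(img), len(img[0])
    # Horizontal pass: average each pixel with its neighbors along the row.
    tmp = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, x - radius), min(w, x + radius + 1)
            tmp[y][x] = sum(img[y][lo:hi]) / (hi - lo)
    # Vertical pass: average each pixel with its neighbors down the column.
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lo, hi = max(0, y - radius), min(h, y + radius + 1)
            out[y][x] = sum(tmp[yy][x] for yy in range(lo, hi)) / (hi - lo)
    return out

# Hypothetical mapping from simulated eccentricity (degrees into the
# periphery) to blur radius in pixels; the values are illustrative only.
ECCENTRICITY_TO_RADIUS = {10: 2, 20: 4, 30: 6}
```

In a full pipeline the blurred image would then be compared against its caricatured counterpart (identity information exaggerated away from an average face) at each eccentricity level.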
Journal of Vision | 2013
Stephen Pond; Nadine Kloth; Elinor McKone; Linda Jeffery; Jessica Irons; Gillian Rhodes
Many aspects of faces derived from structural information appear to be neurally represented using norm-based opponent coding. Recently, however, Zhao, Seriès, Hancock, and Bednar (2011) have argued that another aspect with a strong structural component, namely face gender, is instead multichannel coded. Their conclusion was based on finding that face gender aftereffects initially increased but then decreased for adaptors with increasing levels of gender caricaturing. Critically, this interpretation rests on the untested assumption that caricaturing the differences between male and female composite faces increases perceived sexual dimorphism (masculinity/femininity) of faces. We tested this assumption in Study 1 and found that it held for male, but not female faces. A multichannel account cannot, therefore, be ruled out, although a decrease in realism of adaptors was observed that could have contributed to the decrease in aftereffects. However, their aftereffects likely reflect low-level retinotopic adaptation, which was not minimized for most of their participants. In Study 2 we minimized low-level adaptation and found that face gender aftereffects were strongly positively related to the perceived sexual dimorphism of adaptors. We found no decrease for extreme adaptors, despite testing adaptors with higher perceived sexual dimorphism levels than those used by Zhao et al. These results are consistent with opponent coding of higher-level dimensions related to the perception of face gender.
Attention Perception & Psychophysics | 2013
Jessica Irons; Roger W. Remington
Previous investigations of the ability to maintain separate attentional control settings for different spatial locations have relied principally on a go/no-go spatial-cueing paradigm. The results have suggested that control of attention is accomplished only late in processing. However, the go/no-go task does not provide strong incentives to withhold attention from irrelevant color–location conjunctions. We used a modified version of the task in which failing to adopt multiple control settings would be detrimental to performance. Two RSVP streams of colored letters appeared to the left and right of fixation. Participants searched for targets that were a conjunction of color and location, so that the target color for one stream acted as a distractor when presented in the opposite stream. Distractors that did not match the target conjunctions nevertheless captured attention and interfered with performance. This was the case even when the target conjunctions were previewed early in the trial prior to the target (Exp. 2). However, distractor interference was reduced when the upcoming distractor was previewed early on in the trial (Exp. 3). Attentional selection of targets by color–location conjunctions may be effective if facilitative attentional sets are accompanied by the top-down inhibition of irrelevant items.
Attention Perception & Psychophysics | 2016
Jessica Irons; Andrew B. Leber
Goal-directed attentional control supports efficient visual search by prioritizing relevant stimuli in the environment. Previous research has shown that goal-directed control can be configured in many ways, and often multiple control settings can be used to achieve the same goal. However, little is known about how control settings are selected. We explored the extent to which the configuration of goal-directed control is driven by performance maximization (optimally configuring settings to maximize speed and accuracy) and effort minimization (selecting the least effortful settings). We used a new paradigm, adaptive choice visual search, which allows participants to choose one of two available targets (a red or a blue square) on each trial. Distractor colors vary predictively across trials, such that the optimal target switches back and forth throughout the experiment. Results (N = 43) show that participants chose the optimal target most often, updating to the new target when the environment changed, supporting performance maximization. However, individuals were sluggish to update to the optimal color, consistent with effort minimization. Additionally, we found a surprisingly high rate of nonoptimal choices and switching between targets, which could not be explained by either factor. Analysis of participants’ self-reported search strategy revealed substantial individual differences in the control strategies used. In sum, the adaptive choice visual search enables a fresh approach to studying goal-directed control. The results contribute new evidence that control is partly determined by both performance maximization and effort minimization, as well as at least one additional factor, which we speculate to include novelty seeking.
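The core contingency of the adaptive choice visual search — distractor colors vary predictively across trials so that the optimal target color switches over the run — can be sketched with a toy trial schedule. The function names and the linear ramp below are illustrative assumptions, not the study's design: the point is only that whichever target color has fewer same-colored distractors is the faster, "optimal" choice on that trial.

```python
def optimal_target(n_red_distractors, n_blue_distractors):
    """The optimal choice is the target color with fewer same-colored
    distractors, since that target is easier to find among the display."""
    return "red" if n_red_distractors < n_blue_distractors else "blue"

def make_schedule(n_trials=12, pool=10):
    """Toy schedule: the number of red-family distractors ramps up linearly
    across the run, so 'red' is optimal early and 'blue' becomes optimal
    later. Returns (n_red_distractors, n_blue_distractors) per trial."""
    trials = []
    for t in range(n_trials):
        n_red = round(pool * t / (n_trials - 1))
        trials.append((n_red, pool - n_red))
    return trials
```

A performance-maximizing participant would track this schedule and switch choices when the optimal color flips; a sluggish switch after the flip is the signature of effort minimization described above.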
Vision Research | 2017
Jessica Irons; Tamara Gradden; Angel Zhang; Xuming He; Nick Barnes; Adele F. Scott; Elinor McKone
The visual prosthesis (or "bionic eye") has become a reality but provides a low resolution view of the world. Simulating prosthetic vision in normal-vision observers, previous studies report good face recognition ability using tasks that allow recognition to be achieved on the basis of information that survives low resolution well, including basic category (sex, age) and extra-face information (hairstyle, glasses). Here, we test within-category individuation for face-only information (e.g., distinguishing between multiple Caucasian young men with hair covered). Under these conditions, recognition was poor (although above chance) even for a simulated 40 × 40 array with all phosphene elements assumed functional, a resolution above the upper end of current-generation prosthetic implants. This indicates that a significant challenge is to develop methods to improve face identity recognition. Inspired by "bionic ear" improvements achieved by altering signal input to match high-level perceptual (speech) requirements, we test a high-level perceptual enhancement of face images, namely face caricaturing (exaggerating identity information away from an average face). Results show caricaturing improved identity recognition in memory and/or perception (degree by which two faces look dissimilar) down to a resolution of 32 × 32 with 30% phosphene dropout. Findings imply caricaturing may offer benefits for patients at resolutions realistic for some current-generation or in-development implants.
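The phosphene-array simulation can be sketched as follows: average the image within each cell of a low-resolution grid, then silence a random subset of elements to model dropout. The default grid size and dropout rate below mirror the 32 × 32 / 30%-dropout condition reported above, but the rendering itself (and the `phosphene_render` name) is a toy assumption, not the study's pipeline.

```python
import random

def phosphene_render(img, grid=32, dropout=0.30, seed=0):
    """Crude simulation of prosthetic vision: downsample a 2D grayscale
    image (list of rows) to a grid x grid phosphene array by cell
    averaging, then set a random fraction of phosphenes to zero."""
    h, w = len(img), len(img[0])
    rng = random.Random(seed)  # fixed seed so dropout is reproducible
    out = [[0.0] * grid for _ in range(grid)]
    for gy in range(grid):
        for gx in range(grid):
            # Pixel bounds of this phosphene's cell.
            y0, y1 = gy * h // grid, (gy + 1) * h // grid
            x0, x1 = gx * w // grid, (gx + 1) * w // grid
            vals = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            mean = sum(vals) / len(vals) if vals else 0.0
            # A "dead" phosphene contributes nothing.
            out[gy][gx] = 0.0 if rng.random() < dropout else mean
    return out
```

Under this kind of rendering, caricatured and veridical versions of the same face can be compared at matched resolution and dropout to estimate the caricature advantage.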
Journal of Experimental Psychology: Human Perception and Performance | 2017
Romina Palermo; Linda Jeffery; Jessica Lewandowsky; Chiara Fiorentini; Jessica Irons; Amy Dawel; Nichola Burton; Elinor McKone; Gillian Rhodes
There are large, reliable individual differences in the recognition of facial expressions of emotion across the general population. The sources of this variation are not yet known. We investigated the contribution of a key face perception mechanism, adaptive coding, which calibrates perception to optimize discrimination within the current perceptual “diet.” We expected that a facial expression system that readily recalibrates might boost sensitivity to variation among facial expressions, thereby enhancing recognition ability. We measured adaptive coding strength with an established facial expression aftereffect task and measured facial expression recognition ability with 3 tasks optimized for the assessment of individual differences. As expected, expression recognition ability was positively associated with the strength of facial expression aftereffects. We also asked whether individual variation in affective factors might contribute to expression recognition ability, given that clinical levels of such traits have previously been linked to ability. Expression recognition ability was negatively associated with self-reported anxiety but not with depression, mood, or degree of autism-like or empathetic traits. Finally, we showed that the perceptual factor of adaptive coding contributes to variation in expression recognition ability independently of affective factors.
Behavior Research Methods | 2017
Amy Dawel; Luke Wright; Jessica Irons; Rachael Dumbleton; Romina Palermo; Richard O’Kearney; Elinor McKone
In everyday social interactions, people’s facial expressions sometimes reflect genuine emotion (e.g., anger in response to a misbehaving child) and sometimes do not (e.g., smiling for a school photo). There is increasing theoretical interest in this distinction, but little is known about perceived emotion genuineness for existing facial expression databases. We present a new method for rating perceived genuineness using a neutral-midpoint scale (–7 = completely fake; 0 = don’t know; +7 = completely genuine) that, unlike previous methods, provides data on both relative and absolute perceptions. Normative ratings from typically developing adults for five emotions (anger, disgust, fear, sadness, and happiness) provide three key contributions. First, the widely used Pictures of Facial Affect (PoFA; i.e., “the Ekman faces”) and the Radboud Faces Database (RaFD) are typically perceived as not showing genuine emotion. Also, in the only published set for which the actual emotional states of the displayers are known (via self-report; the McLellan faces), percepts of emotion genuineness often do not match actual emotion genuineness. Second, we provide genuine/fake norms for 558 faces from several sources (PoFA, RaFD, KDEF, Gur, FacePlace, McLellan, News media), including a list of 143 stimuli that are event-elicited (rather than posed) and, congruently, perceived as reflecting genuine emotion. Third, using the norms we develop sets of perceived-as-genuine (from event-elicited sources) and perceived-as-fake (from posed sources) stimuli, matched on sex, viewpoint, eye-gaze direction, and rated intensity. We also outline the many types of research questions that these norms and stimulus sets could be used to answer.