Audiovisual Speech-In-Noise (SIN) Performance of Young Adults with ADHD

Gavindya Jayawardena, Old Dominion University, Norfolk, VA, [email protected]
Anne M. P. Michalek, Old Dominion University, Norfolk, VA, [email protected]
Andrew T. Duchowski, Clemson University, Clemson, SC, [email protected]
Sampath Jayarathna, Old Dominion University, Norfolk, VA, [email protected]
ABSTRACT
Adolescents with Attention-deficit/hyperactivity disorder (ADHD) have difficulty processing speech with background noise due to reduced inhibitory control and working memory capacity (WMC). This paper presents a pilot study of an audiovisual Speech-In-Noise (SIN) task for young adults with ADHD compared to age-matched controls, using eye-tracking measures. The audiovisual SIN task consists of six varying levels of background babble, accompanied by visual cues. A significant difference between the ADHD and neurotypical (NT) groups was observed at 15 dB signal-to-noise ratio (SNR). These results contribute to the literature on young adults with ADHD.
CCS CONCEPTS
• Applied computing → Psychology; • Human-centered computing → Visualization; • General and reference → Experimentation.
KEYWORDS
ADHD, Eye-Tracking, Speech-In-Noise
ACM Reference Format:
Gavindya Jayawardena, Anne M. P. Michalek, Andrew T. Duchowski, and Sampath Jayarathna. 2020. Audiovisual Speech-In-Noise (SIN) Performance of Young Adults with ADHD. In Symposium on Eye Tracking Research and Applications (ETRA ’20 Short Papers), June 2–5, 2020, Stuttgart, Germany. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3379156.3391373
INTRODUCTION

The recent estimated prevalence of diagnosed ADHD in children and adolescents in the U.S. has increased from 6.1% to 10.2% over the period 1997 to 2016 [Xu et al. 2018]. Adolescents with ADHD have difficulty meeting time limits, controlling anger, inhibiting responses, and processing auditory information [Barkley 1997; Fields et al. 2017; Fostick 2017]. Processing speech in background noise requires fundamental language abilities, higher working memory, as well as a higher signal-to-noise ratio (SNR) [Schneider et al. 2007]. Since a person’s ability to process speech with background noise depends on that person’s auditory and cognitive system [Schneider
ETRA ’20 Short Papers, June 2–5, 2020, Stuttgart, Germany. © 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-1-4503-7134-6/20/06. https://doi.org/10.1145/3379156.3391373

et al. 2007], young adults with ADHD may experience difficulty processing auditory information in the presence of background noise due to reduced inhibitory control [Barkley 1997; Pazvantoğlu et al. 2012; Woltering et al. 2013; Woods et al. 2002] and decreased working memory capacity (WMC) [Alderson et al. 2013; Banich et al. 2009; Michalek et al. 2014].

Unlike noise, which degrades listening conditions, the presence of external visual cues, such as written contextual information and facial movements, can enhance the processing of auditory information, especially when accompanied by noise [Fraser et al. 2010; Jääskeläinen 2010; Michalek et al. 2014; Mishra et al. 2013; Moradi et al. 2013; Rudner et al. 2009; Van Wassenhove et al. 2005; von Kriegstein et al. 2008]. At increased noise levels, semantically related visual cues have a positive impact on the perception of spoken sentences [Zekveld et al. 2011]. When increased noise is present during face-to-face conversation, adults tend to fixate more on the nose and mouth area of the speaker [Buchan et al. 2008], confirming that the oral-motor movements of the speaker aid speech recognition [Bristow et al. 2008].

Neurotypical (NT) individuals are known to perceive audiovisual cues more accurately from the right visual field (RVF) than from the left visual field (LVF) [Kimura 1973]. Multiple studies [Carter et al. 1995; Heilman et al. 1991; Mitchell et al. 1990; Voeller and Heilman 1988] showed the presence of a lateralized deficit in the visual-spatial attention of subjects with ADHD, which orients their attention to LVF targets.

Our work presents the performance of young adults with ADHD compared to age-matched controls using eye-tracking measures during an audiovisual SIN task.
Our findings are consistent with the possibility that audiovisual cues, in general, are processed in such a way that WMC or cognitive load is not consistently impacted by increasing levels of background noise for NT adults [Michalek et al. 2018].
PARTICIPANTS

Our pilot study consisted of five young adults (4 F, 1 M) with a prior diagnosis of ADHD and six NT young adults (4 F, 2 M) as the control group. All participants were aged between 18 and 30 years, with no history of psychotic symptoms and with normal vision. Participants with a diagnosis of ADHD confirmed their diagnosis through medical documentation, including records from a physician or licensed psychiatrist. They were asked to remain medication-free for 12 hours prior to study participation. No participant had been prescribed long-lasting non-stimulants, so the 12-hour time frame was sufficient for all participants. Information on the risks of avoiding medication was provided prior to the experiment, and participants acknowledged it by signing a consent form approved by the University’s Institutional Review Board. Both ADHD and NT participants went through a hearing screening at 20 dB HL at frequencies of 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz, bilaterally, to ensure their hearing was within normal limits.
AUDIOVISUAL QUICKSIN TASK

We used QuickSIN [Killion et al. 2004] software to simultaneously present a sentence repetition task with background noise (i.e., speech babble) at six SNRs: 25 dB, 20 dB, 15 dB, 10 dB, 5 dB, and 0 dB. Each SNR represents the ratio of the dB level of speech to the dB level of noise; the level of background noise increases as the SNR decreases. The audiovisual QuickSIN setup is presented in Figure 1a. Participants were asked to listen to the sentences while simultaneously viewing the speaker’s face, and then to repeat each sentence verbally. Participants were presented with nine sentence sets, each having six sentences representing all background noise levels. Each sentence had 8–13 words, including five keywords (e.g., "The weight of the package was seen on the high scale"). Participants were scored based on the number of keywords accurately repeated per sentence. The presentation of the nine sentence blocks was randomized and counterbalanced across participants.
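The scoring rule above (one point per keyword accurately repeated, out of five) can be sketched as follows. This is an illustration, not the study's actual scoring software; the function name and the keyword list are chosen for the example.

```python
def score_sentence(keywords, response):
    """Count how many of a sentence's keywords appear in the
    participant's verbal response (case-insensitive)."""
    response_words = set(response.lower().split())
    return sum(1 for kw in keywords if kw.lower() in response_words)

# Example sentence from the paper, with its five scoring keywords.
keywords = ["weight", "package", "seen", "high", "scale"]
response = "the weight of the package was seen on the high shelf"
print(score_sentence(keywords, response))  # 4: "scale" was misheard
```

A participant's QuickSIN score at each SNR is then simply the aggregate of these per-sentence keyword counts.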
EYE-TRACKING SETUP AND ANALYSIS

We used a Tobii Pro X2-60 screen-based eye tracker (60 Hz, 0.4° accuracy) to record the eye movements of participants during the QuickSIN task. Prior to the experiment, each participant was calibrated using Tobii’s standard calibration methods. We used Tobii Studio analysis software to pre-process gaze metrics using the I-VT filter (velocity threshold set to 30°/second) to extract the eye movement metrics recorded throughout the study. We specified four areas of interest (AOIs) on the eye-tracking stimulus to analyze the eye movements of participants: 1) left eye, 2) right eye, 3) nose, and 4) mouth (see Figure 1b).

To observe how the eye-tracking measurements change with audiovisual cues, we used our RAEMAP [Jayawardena 2020] eye movement processing pipeline, a modified version of the gaze analytics pipeline [Duchowski 2017]. Upon correct mapping of variables, the original gaze analytics pipeline can extract raw gaze data from various eye trackers [Duchowski 2017]. After extracting raw gaze data, the gaze analytics pipeline (1) classifies raw gaze points into fixations, and (2) aggregates fixation-related information for statistical analysis. The gaze analytics pipeline facilitates the computation of numerous eye movement metrics, and it can generate visualizations of gaze points, fixations within AOIs, heat maps, ambient/focal fixations, and microsaccades per scan path. However, the current implementation of the gaze analytics pipeline handles the eye-tracking data recorded during each task of each person sequentially. This process is computationally expensive, and its split-and-merge approach generates a large number of intermediate files along the way to the eye gaze metric calculations. RAEMAP is developed such that the calculations of eye gaze metrics utilize distributed computing resources, as illustrated in Figure 2.
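The I-VT classification mentioned above (velocity threshold of 30°/s) can be sketched minimally as below, assuming gaze has already been converted to visual-angle coordinates sampled at 60 Hz. This is an illustration of the velocity-threshold idea, not Tobii Studio's implementation, which additionally merges and filters candidate fixations.

```python
import numpy as np

def ivt_classify(x_deg, y_deg, hz=60.0, threshold=30.0):
    """Label each gaze sample as fixation (True) or saccade (False)
    by thresholding point-to-point angular velocity (deg/s)."""
    dx = np.diff(x_deg)
    dy = np.diff(y_deg)
    velocity = np.hypot(dx, dy) * hz          # degrees per second
    is_fixation = velocity < threshold
    # Pad so the label array matches the sample count.
    return np.append(is_fixation, is_fixation[-1])

x = np.array([10.0, 10.1, 10.1, 15.0, 15.1, 15.1])
y = np.array([5.0, 5.0, 5.1, 9.0, 9.0, 9.1])
# The jump between samples 3 and 4 exceeds the threshold (saccade).
print(ivt_classify(x, y))
```

Consecutive samples labeled as fixation would then be grouped into fixation events, from which counts and durations per AOI are derived.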
RAEMAP facilitates the computation of traditional positional gaze metrics such as fixation count and fixation duration, advanced metrics such as gaze transition entropy [Krejtz et al. 2015], and complex pupillometry measurements such as the index of pupillary activity (IPA) [Duchowski et al. 2018], which indicates cognitive load. RAEMAP can also generate visualizations of gaze points, AOIs, scan paths, and fixations on AOIs (see Figure 1). The architecture of RAEMAP is shown in Figure 2. In RAEMAP, the eye gaze metrics of individual subjects are calculated in separate processes on distributed computing resources, as illustrated in Figure 2, since these calculations are independent of one another; this enhances efficiency. The aggregation of the calculated eye gaze metrics of all participants in each task is done using the Message Passing Interface (MPI). In addition, RAEMAP has stream-processing capability, calculating eye gaze metrics and visualizing the scan path as data is streamed by the eye tracker.

We applied RAEMAP to calculate gaze points, AOIs, scan paths, and fixations on AOIs for each sentence of the QuickSIN task and each participant. Figure 1c shows a visualization of the fixations of one participant while watching one sentence in the QuickSIN task; Figure 1d shows a visualization of fixations on the pre-defined AOIs. We generated gaze transition matrices and corresponding gaze transition entropies for both participant groups. We also calculated the IPA counts for participants in both groups.
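Gaze transition entropy as defined by Krejtz et al. is the conditional entropy of AOI-to-AOI transitions, weighted by how often each source AOI is visited. A minimal numpy sketch follows; as a simplifying assumption it weights by the empirical source-AOI frequencies rather than the stationary distribution of the fitted Markov chain.

```python
import numpy as np

AOIS = ["leye", "reye", "nose", "mouth"]

def transition_entropy(fixation_aois):
    """H = -sum_i p(i) * sum_j p(j|i) * log2 p(j|i), with p(i)
    approximated by empirical source-AOI frequencies."""
    idx = {a: k for k, a in enumerate(AOIS)}
    counts = np.zeros((len(AOIS), len(AOIS)))
    for src, dst in zip(fixation_aois, fixation_aois[1:]):
        counts[idx[src], idx[dst]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    p_src = (row_sums / counts.sum()).ravel()       # p(i)
    h = 0.0
    for i, row in enumerate(counts):
        if row_sums[i] == 0:
            continue                                # AOI never a source
        p_ij = row / row_sums[i]                    # p(j|i)
        p_ij = p_ij[p_ij > 0]
        h -= p_src[i] * np.sum(p_ij * np.log2(p_ij))
    return h

seq = ["leye", "mouth", "nose", "mouth", "leye", "mouth", "nose"]
print(round(transition_entropy(seq), 3))  # ≈ 0.459
```

Low values indicate predictable transitions (e.g., always returning to the mouth); high values indicate gaze moving between AOIs with near-equal probabilities.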
RESULTS

We first report the performance of ADHD and NT participants during the QuickSIN task. Next, we analyze changes in eye movements in relation to the six SNRs. A mixed, repeated measures ANOVA using a 2x6 design with main factors of group (ADHD or NT) and SNR (0 dB to 25 dB in 5 dB increments) was carried out on QuickSIN task performance and the eye-tracking measures.
We first analyzed the performance of both ADHD and NT participants at each SNR. Each participant was assigned a score for every sentence, based on the number of keywords accurately repeated out of five. There was no significant main effect of participant group on QuickSIN performance (p > .05), indicating that performance was similar between ADHD and NT participants. There was a significant main effect of the SNR on QuickSIN performance (p < .05), but no significant interaction between SNR and group (p > .05). We then conducted a t-test for each SNR, identifying a significant difference in QuickSIN performance between the two groups at 15 dB SNR (p < .05).

Figure 1: The audiovisual QuickSIN setup: (a) the speaker’s face as viewed by the participants during the audiovisual SIN task, (b) the four AOIs created for the eye-movement analysis (left eye, right eye, nose, and mouth), (c) a sample scan path with fixations, and (d) fixations on the AOIs of a participant while listening to one sentence.
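The per-SNR group comparison described above can be sketched with SciPy as below. The scores are synthetic stand-ins, not the study's data, and Welch's unequal-variance variant is an assumption, since the paper does not specify which t-test was used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
snrs = [25, 20, 15, 10, 5, 0]

# Synthetic per-participant mean keyword scores (0-5) at each SNR:
# 5 ADHD participants and 6 NT participants, as in the pilot study.
adhd_scores = {snr: rng.uniform(3.5, 5.0, size=5) for snr in snrs}
nt_scores = {snr: rng.uniform(4.0, 5.0, size=6) for snr in snrs}

for snr in snrs:
    t, p = stats.ttest_ind(adhd_scores[snr], nt_scores[snr],
                           equal_var=False)   # Welch's t-test
    print(f"{snr:2d} dB SNR: t = {t:+.2f}, p = {p:.3f}")
```

With six comparisons, a correction for multiple tests (e.g., Bonferroni) would normally be considered; the paper reports uncorrected per-SNR p-values.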
Figure 2: The architecture of RAEMAP, which can process eye-tracking data as it is streamed by an eye tracker. The RAEMAP API distributes tasks among the nodes using MPI. Each node hosts an instance of RAEMAP providing the raw functionality to extract raw gaze data, along with parallel processing of the process and graph steps. The process step calculates fixations, fixations in AOIs, saccade amplitudes, saccade durations, and the IPA, whereas the graph step generates visualizations. The MPI gather function aggregates the calculated eye gaze metrics in the collate step, which provides data for statistical analysis in the stats step.
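The scatter/process/collate flow described in the caption can be illustrated with Python's multiprocessing standing in for MPI across nodes. Here `compute_metrics` and its dummy return value are hypothetical placeholders for RAEMAP's per-participant metric extraction; the real pipeline distributes these tasks over cluster nodes and aggregates with an MPI gather.

```python
from multiprocessing import Pool

def compute_metrics(participant_id):
    """Placeholder for per-participant metric extraction; in RAEMAP
    this step computes fixations, saccade metrics, and the IPA."""
    return {"participant": participant_id,
            "fixation_count": 100 + participant_id}  # dummy metric

if __name__ == "__main__":
    participants = list(range(11))   # 5 ADHD + 6 NT participants
    with Pool(processes=4) as pool:
        # "Scatter": workers process participants independently,
        # which is valid because the computations are independent.
        results = pool.map(compute_metrics, participants)
    # "Gather"/collate: aggregate all metrics for statistical analysis.
    print(len(results), sum(r["fixation_count"] for r in results))
```

Because the per-participant computations never communicate, this pattern scales linearly with available workers, which is the efficiency argument the caption makes for RAEMAP.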
Fixation count indicates the number of times the eyes fixated on an AOI. We observed that participants with ADHD fixated more on the left eye, whereas NT participants fixated more on the right eye. At SNRs of 20 dB, 15 dB, and 10 dB, participants with ADHD fixated mostly on the left-eye region.

We conducted a 2x6 repeated measures two-way ANOVA with main factors of group and SNR on fixation counts on each AOI. We observed a significant main effect of the SNR (p < .001) as well as of group (p < .008) on fixation counts on the left eye, indicating that the number of fixations differed between ADHD and NT participants as well as across SNRs. There was a significant interaction effect between SNR and group (p < .05), indicating that fixation counts on the left eye under different listening conditions differed depending on the ADHD diagnosis.

We observed a significant main effect of the SNR on fixations on the right eye, nose, and mouth (all p < .02), but no main effect of group (all p > .05). There was also no significant interaction effect between SNR and group for the right eye, nose, and mouth (all p > .05). Contrasts of the SNR revealed that the number of fixations on the nose differed significantly when comparing the 25 dB, 20 dB, and 15 dB SNRs against 0 dB (all p < .05; see Table 1).
Table 1: Fixation counts on AOIs of ADHD and NT Participants.
(Table 1 reports mean ± SD fixation counts on each AOI, per group, at each SNR from 25 dB down to 0 dB. Fixation counts on the left eye for the ADHD group, for example, fall from roughly 93 at 25 dB to 13 at 0 dB.)
The gaze transition matrices [Krejtz et al. 2015] indicate the probability of gaze transitioning between two AOIs. Figure 3 shows the computed gaze transition matrices for ADHD and NT participants at gradually increasing levels of background noise. The gaze transition matrices for the different listening conditions suggest that, in general, participants with ADHD tend to make unpredictable gaze transitions across difficulty levels of the task, whereas NT participants tend to transition from any AOI to the mouth region regardless of task difficulty. Interestingly, participants with ADHD tend to re-fixate on the left-eye region at 20 dB SNR, where the task is relatively easy.

We calculated the gaze transition entropy to determine the overall distribution of attention over AOIs. Small entropy values indicate predictable gaze transitions among AOIs, while large entropy values indicate less predictable gaze transitions, where gaze moves from any source AOI to any destination AOI with similar probabilities [Krejtz et al. 2015]. The transition entropies corresponding to the computed gaze transition matrices are shown in Table 2. There was no significant main effect of the SNR (p > .05) or of the group on transition entropy, and no significant interaction effect (p > .3). Table 2 shows a tendency toward higher entropy for both ADHD and NT participants during the most difficult listening condition (0 dB), indicating less predictable gaze transitions. Also, t-tests on the transition entropies of participants at each SNR (i.e., without aggregating per participant) showed a significant effect for the NT group at 0 dB compared to the other listening conditions.

The IPA is calculated using a wavelet-based algorithm that relies on wavelet decomposition of the pupil diameter signal. For the IPA calculation, we used the Daubechies-4 wavelet for a 60 Hz signal, as suggested in [Duchowski et al. 2018]. Low IPA counts reflect low cognitive load, whereas high IPA counts indicate strong cognitive load [Duchowski et al. 2018]. There was no significant main effect of the SNR (p > .05) or of the group (p > .05) on IPA counts, indicating that cognitive load did not differ between participant groups or across SNRs overall. We did, however, observe a significant interaction effect between SNR and group (p < .02) on IPA counts.
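The core idea of the IPA, counting threshold-exceeding modulus maxima in the wavelet detail coefficients of the pupil signal, can be sketched as follows. This numpy-only illustration substitutes a single-level Haar transform and a fixed threshold for the Daubechies-4 decomposition and the statistically derived threshold of Duchowski et al., so it conveys the counting step conceptually rather than reproducing the published algorithm.

```python
import numpy as np

def ipa_like_count(pupil_mm, threshold=0.02):
    """Count oscillation events in a pupil-diameter signal by
    thresholding Haar detail coefficients (a crude stand-in for
    the Daubechies-4 decomposition used by the real IPA)."""
    x = np.asarray(pupil_mm, dtype=float)
    n = len(x) // 2
    # Haar detail coefficients: scaled differences of sample pairs.
    detail = (x[0::2][:n] - x[1::2][:n]) / np.sqrt(2)
    mag = np.abs(detail)
    # Count local modulus maxima that exceed the threshold.
    count = 0
    for i in range(1, len(mag) - 1):
        if mag[i] > threshold and mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]:
            count += 1
    return count

# A slow drift (low load) vs. the same drift plus rapid oscillation.
t = np.linspace(0, 4, 240)                       # 4 s at 60 Hz
calm = 3.0 + 0.1 * np.sin(2 * np.pi * 0.2 * t)
busy = calm + 0.05 * np.sin(2 * np.pi * 8 * t)
print(ipa_like_count(calm), ipa_like_count(busy))
```

The faster the pupil oscillates, the more high-magnitude detail coefficients appear, so the count rises with oscillation, which is the behavior the IPA uses as a proxy for cognitive load.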
Table 2: Gaze transition entropy and IPA of ADHD and NT participants.
(Table 2 reports mean ± SD gaze transition entropy and IPA counts for the ADHD and NT groups at each SNR from 25 dB down to 0 dB.)
Contrasts revealed significant interaction effects between the groups when comparing the 25 dB and 0 dB SNRs (p < .05) and the 20 dB and 0 dB SNRs (p < .03). These effects reflect that cognitive load differed significantly between the hardest and easiest listening conditions across the two groups (see Table 2). The remaining contrasts revealed no significant interaction when comparing the two groups across the other listening conditions (p > .05). t-tests on the IPA counts of participants at the sentence level (i.e., without aggregating per participant) yielded no significant difference between the ADHD and NT groups (p > .08) for all SNRs except 15 dB. Interestingly, at 15 dB SNR, sentence-level IPA counts yielded a significant difference between the ADHD and NT groups (p < .04), indicating a significant difference in cognitive load between the two groups. Since cognitive load inherently reduces WMC [Chandler and Sweller 1991; Sweller et al. 1990], we expected participants with ADHD to do worse at 15 dB SNR, as their cognitive load is high and their WMC is not commensurate with that of NT participants. This expectation was confirmed by the significant difference in the number of keywords recalled between the ADHD and NT groups at 15 dB SNR.
DISCUSSION

Our results indicate that ADHD and NT young adults perform comparably in the SIN task when audiovisual cues are present and the task difficulty is either very high or very low. However, a significant difference in QuickSIN performance between the participant groups was observed at 15 dB SNR, where ADHD and NT participants recalled 4.7 and 4.9 keywords on average, respectively.

Figure 3: Gaze transition matrices of ADHD and NT participants at varying levels of background noise yielding six SNR levels: 0 to 25 dB in 5 dB increments. Each matrix shows transition probabilities from source AOIs (left eye, right eye, nose, mouth) to destination AOIs.
Fixation counts on AOIs suggested that both groups had a strong preference for looking at the mouth of the speaker in the easiest listening condition, whereas both groups looked at the nose of the speaker in the hardest listening condition. In all other listening conditions, NT participants preferred to look at the mouth, whereas participants with ADHD mostly looked at the left eye. NT individuals perceive audiovisual cues more accurately from the RVF [Kimura 1973], whereas individuals with ADHD orient attention to LVF targets due to a lateralized deficit in visual-spatial attention [Carter et al. 1995; Heilman et al. 1991; Mitchell et al. 1990; Voeller and Heilman 1988]. Our eye-tracking observations align with this literature, confirming that young adults with ADHD orient attention to the LVF when perceiving audiovisual cues. Gaze transition matrices and transition entropies show that participants with ADHD tend to make unpredictable gaze transitions at different difficulty levels of the task, whereas NT participants tend to re-fixate on the mouth region.

Interestingly, IPA counts yielded a significant difference between the two groups only at 15 dB SNR, where QuickSIN performance also differed significantly between them. This indicates that 15 dB SNR is the point at which the noise shifts speech processing from being largely automatic to relying on increased cognitive load. These findings are consistent with the possibility that audiovisual cues, in general, are processed in such a way that WMC or cognitive load is not consistently impacted by increasing levels of background noise for the NT group [Michalek et al. 2018].
CONCLUSION

Our work presents an analysis of audiovisual SIN performance for young adults with ADHD compared to age-matched controls using eye-tracking measures. We analyzed participants’ task performance and eye movement parameters such as fixation counts on AOIs, gaze transition entropy, and the IPA. We observed that participants with ADHD primarily fixated on the left eye of the speaker, whereas the NT group fixated on the right eye, supporting the literature that ADHD orients attention to the LVF whereas NT individuals orient attention to the RVF. When the task difficulty was at a medium level, 15 dB SNR, we observed a significant difference in cognitive load as well as in performance between the two groups. In the future, we expect to explore eye movement behavior when scanning the speaker’s face in terms of advanced eye movement metrics, such as the coefficient κ measuring focal or ambient viewing, with a larger sample of ADHD and NT young adults.

REFERENCES
R. Matt Alderson, Lisa J. Kasper, Kristen L. Hudec, and Connor H. G. Patros. 2013. Attention-deficit/hyperactivity disorder (ADHD) and working memory in adults: a meta-analytic review. Neuropsychology 27, 3 (2013), 287.
Marie T. Banich, Gregory C. Burgess, Brendan E. Depue, Luka Ruzic, L. Cinnamon Bidwell, Sena Hitt-Laustsen, Yiping P. Du, and Erik G. Willcutt. 2009. The neural basis of sustained and transient attentional control in young adults with ADHD. Neuropsychologia 47, 14 (2009), 3095–3104.
Russell A. Barkley. 1997. Behavioral inhibition, sustained attention, and executive functions: constructing a unifying theory of ADHD. Psychological Bulletin 121, 1 (1997), 65–94.
Davina Bristow et al. 2008. Journal of Cognitive Neuroscience 21, 5 (2008), 905–921.
Julie N. Buchan, Martin Paré, and Kevin G. Munhall. 2008. The effect of varying talker identity and listening conditions on gaze behavior during audiovisual speech perception. Brain Research (2008).
Cameron S. Carter et al. 1995. Biological Psychiatry 37, 11 (1995), 789–797.
Paul Chandler and John Sweller. 1991. Cognitive load theory and the format of instruction. Cognition and Instruction 8, 4 (1991), 293–332.
Andrew T. Duchowski. 2017. The Gaze Analytics Pipeline. In Eye Tracking Methodology. Springer, New York, NY, 175–191.
Andrew T. Duchowski, Krzysztof Krejtz, Izabela Krejtz, Cezary Biele, Anna Niedzielska, Peter Kiefer, Martin Raubal, and Ioannis Giannopoulos. 2018. The index of pupillary activity: measuring cognitive load vis-à-vis task difficulty with pupil oscillation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montréal, QC, Canada, 282.
Scott A. Fields, William Michael Johnson, and Madison B. Hassig. 2017. Adult ADHD: Addressing a unique set of challenges. The Journal of Family Practice 66, 2 (2017), 68–74.
Leah Fostick. 2017. The effect of attention-deficit/hyperactivity disorder and methylphenidate treatment on the adult auditory temporal order judgment threshold. Journal of Speech, Language, and Hearing Research 60, 7 (2017), 2124–2128.
Sarah Fraser, Jean-Pierre Gagné, Majolaine Alepins, and Pascale Dubois. 2010. Evaluating the effort expended to understand speech in noise using a dual-task paradigm: The effects of providing visual speech cues. Journal of Speech, Language, and Hearing Research 53, 1 (2010), 18–33. https://doi.org/10.1044/1092-4388(2009/08-0140)
Kenneth M. Heilman, Kytja K. S. Voeller, and Stephen E. Nadeau. 1991. A possible pathophysiologic substrate of attention deficit hyperactivity disorder. Journal of Child Neurology 6, 1_suppl (1991), S76–S81.
Iiro P. Jääskeläinen. 2010. The role of speech production system in audiovisual speech perception. The Open Neuroimaging Journal (2010).
Gavindya Jayawardena. 2020. In Symposium on Eye Tracking Research and Applications 2020. ACM, Stuttgart, Germany.
Mead C. Killion, Patricia A. Niquette, Gail I. Gudmundsen, Lawrence J. Revit, and Shilpi Banerjee. 2004. Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America 116, 4 (2004), 2395–2405.
Doreen Kimura. 1973. Scientific American (1973).
Krzysztof Krejtz et al. 2015. Gaze transition entropy. ACM Transactions on Applied Perception (TAP) 13, 1 (2015), 4.
Anne M. P. Michalek, Ivan Ash, and Kathryn Schwartz. 2018. The independence of working memory capacity and audiovisual cues when listening in noise. Scandinavian Journal of Psychology 59, 6 (2018), 578–585.
Anne M. P. Michalek, Silvana M. Watson, Ivan Ash, Stacie Ringleb, and Anastasia Raymer. 2014. Effects of noise and audiovisual cues on speech processing in adults with and without ADHD. International Journal of Audiology 53, 3 (2014), 145–152.
Sushmit Mishra, Thomas Lunner, Stefan Stenfelt, Jerker Rönnberg, and Mary Rudner. 2013. Seeing the talker’s face supports executive processing of speech in steady state noise. Frontiers in Systems Neuroscience (2013).
Mitchell et al. 1990. Journal of Child Neurology 5, 3 (1990), 195–204.
Shahram Moradi, Björn Lidestam, and Jerker Rönnberg. 2013. Gated audiovisual speech identification in silence vs. noise: Effects on time and accuracy. Frontiers in Psychology (2013).
Pazvantoğlu et al. 2012. Journal of the International Neuropsychological Society 18, 5 (2012), 819–826.
Mary Rudner, Catharina Foo, Jerker Rönnberg, and Thomas Lunner. 2009. Cognition and aided speech recognition in noise: Specific role for cognitive factors following nine-week experience with adjusted compression settings in hearing aids. Scandinavian Journal of Psychology 50, 5 (2009), 405–418.
Bruce A. Schneider, Liang Li, and Meredyth Daneman. 2007. How competing speech interferes with speech comprehension in everyday listening situations. Journal of the American Academy of Audiology 18, 7 (2007), 559–572.
John Sweller, Paul Chandler, Paul Tierney, and Martin Cooper. 1990. Cognitive load as a factor in the structuring of technical material. Journal of Experimental Psychology: General (1990).
Virginie van Wassenhove et al. 2005. Proceedings of the National Academy of Sciences (2005).
Kytja K. S. Voeller and Kenneth M. Heilman. 1988. Neurology 38, 5 (1988), 806.
Katharina von Kriegstein, Özgür Dogan, Martina Grüter, Anne-Lise Giraud, Christian A. Kell, Thomas Grüter, Andreas Kleinschmidt, and Stefan J. Kiebel. 2008. Simulation of talking faces in the human brain improves auditory speech recognition. Proceedings of the National Academy of Sciences (2008).
Woltering et al. 2013. Neuropsychologia 51, 10 (2013), 1888–1895.
Steven Paul Woods, David W. Lovejoy, and J. D. Ball. 2002. Neuropsychological characteristics of adults with ADHD: A comprehensive review of initial studies. The Clinical Neuropsychologist 16, 1 (2002), 12–34.
Guifeng Xu, Lane Strathearn, Buyun Liu, Binrang Yang, and Wei Bao. 2018. Twenty-year trends in diagnosed attention-deficit/hyperactivity disorder among US children and adolescents, 1997-2016. JAMA Network Open (2018), e181471.
Adriana A. Zekveld, Mary Rudner, Ingrid S. Johnsrude, Joost M. Festen, Johannes H. M. van Beek, and Jerker Rönnberg. 2011. The influence of semantically related and unrelated text cues on the intelligibility of sentences in noise.