Sean Seaman
Wayne State University
Publications
Featured research published by Sean Seaman.
Brain Research | 2009
Li Hsieh; Richard A. Young; Susan M. Bowyer; John E. Moran; Richard J. Genik; Christopher C. Green; Yow Ren Chiang; Ya Ju Yu; Chia Cheng Liao; Sean Seaman
This neuroimaging study investigated the neural mechanisms of the effect of conversation on visual event detection during a driving-like scenario. The static load paradigm, established as predictive of visual reaction time in on-road driving, measured reaction times to visual events while subjects watched a real-world driving video. Behavioral testing with twenty-eight healthy volunteers determined the reaction time effects from overt and covert conversation tasks in this paradigm. Overt and covert conversation gave rise to longer visual event reaction times in the surrogate driving paradigm compared to driving with no conversation, with negligible effect on miss rates. The covert conversation task was then undertaken by ten right-handed healthy adults in a 4-Tesla fMRI magnet. We identified a frontal-parietal network that maintained event detection performance during the conversation task while watching the driving video. Increased brain activations for conversation vs. no conversation during simulated driving were found not only in language regions (Broca's and Wernicke's areas), but also in specific regions in bilateral inferior frontal gyrus, bilateral anterior insula and orbitofrontal cortex, bilateral lateral prefrontal cortex (right middle frontal gyrus and left frontal eye field), supplementary motor cortex, anterior and posterior cingulate gyrus, right superior parietal lobe, right intraparietal sulcus, right precuneus, and right cuneus. We propose an Asynchrony Model in which the frontal regions have a top-down influence on the synchrony of neural processes within the superior parietal lobe and extrastriate visual cortex, which in turn modulate the reaction time to visual events during conversation while driving.
Language and Speech | 2004
Lee H. Wurm; Douglas A. Vakoch; Sean Seaman
Until recently most models of word recognition have assumed that semantic effects come into play only after the identification of the word in question. What little evidence exists for early semantic effects in word recognition has relied primarily on priming manipulations using the lexical decision task, and has used visual stimulus presentation. The current study uses auditory stimulus presentation and multiple experimental tasks, and does not use priming. Response latencies for 100 common nouns were found to depend on perceptual dimensions identified by Osgood (1969): Evaluation, Potency, and Activity. In addition, the two-way interactions between these dimensions were significant. All effects were above and beyond the effects of concreteness, word length, frequency, onset phoneme characteristics, stress, and neighborhood density. Results are discussed against evidence from several areas of research suggesting a role of behaviorally important information in perception.
Journal of Experimental Psychology: Learning, Memory, and Cognition | 2008
Lee H. Wurm; Sean Seaman
Previous research has demonstrated that the subjective danger and usefulness of words affect lexical decision times. Usually, an interaction is found: Increasing danger predicts faster reaction times (RTs) for words low on usefulness, but increasing danger predicts slower RTs for words high on usefulness. The authors show the same interaction with immediate auditory naming. The interaction disappeared with a delayed auditory naming control experiment, suggesting that it has a perceptual basis. In an attempt to separate input (signal to ear) from output (brain to muscle) processes in word recognition, the authors ran 2 auditory perceptual identification experiments. The interaction was again significant, but performance was best for words high on both danger and usefulness. This suggests that initial demonstrations of the interaction were reflecting an output approach/withdraw response conflict induced by stimuli that are both dangerous and useful. The interaction cannot be characterized as a tradeoff of speed versus accuracy.
Cognition & Emotion | 2007
Lee H. Wurm; R. Douglas Whitman; Sean Seaman; Laura Hill; Heather M. Ulstad
In this study we examined the interplay between appetitive (approach) and defensive (avoid) responses in spoken word recognition. Ninety-two right-handed participants (half women) took part in an auditory lexical decision experiment in which speech was presented to only one ear. The danger and usefulness of the word referents interacted in predicting RTs, as in previous (binaural) studies with poorer control of psycholinguistic covariates. Specifically, higher danger ratings were associated with faster RTs for words rated low on usefulness, but higher danger ratings were associated with slower RTs for words rated high on usefulness. In addition to this primary finding, men showed more lateralised performance, as indicated by significant interactions of sex and ear of presentation with word frequency, and with the animacy of the word referents. For both sexes, word frequency had a stronger effect on accuracy for speech presented to the right ear. Finally, men's but not women's RTs were related to the danger dimension. This last finding provides an intriguing avenue for future research in the area of sex differences and emotion.
human factors in computing systems | 2017
Lex Fridman; Heishiro Toyoda; Sean Seaman; Bobbie Seppelt; Linda Angell; Joonbum Lee; Bruce Mehler; Bryan Reimer
We consider a large dataset of real-world, on-road driving from a 100-car naturalistic study to explore the predictive power of driver glances and, specifically, to answer the following question: what can be predicted about the state of the driver and the state of the driving environment from a 6-second sequence of macro-glances? The context-based nature of such glances allows supervised learning to be applied to the problem of vision-based gaze estimation, making it robust, accurate, and reliable in messy, real-world conditions. It is therefore worth asking whether such macro-glances can be used to infer behavioral, environmental, and demographic variables. We analyze 27 binary classification problems based on these variables. The takeaway is that glance can be used as part of a multi-sensor real-time system to predict radio-tuning, fatigue state, failure to signal, talking, and several environment variables.
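The abstract does not specify the feature encoding used for the 27 classification problems; as a rough illustration of the kind of fixed-length representation a 6-second macro-glance sequence permits, here is a minimal sketch. The region labels, the 0.1 s frame rate, and the choice of features (time share per region plus transition count) are assumptions for illustration, not the paper's actual coding scheme.

```python
from collections import Counter

# Hypothetical glance regions; the study's actual coding scheme may differ.
REGIONS = ["road", "center_stack", "instrument_cluster",
           "left", "right", "rearview"]

def glance_features(sequence):
    """Turn a macro-glance sequence (one region label per 0.1 s frame)
    into a fixed-length feature vector: the share of time spent in each
    region, followed by the number of glance transitions."""
    counts = Counter(sequence)
    n = len(sequence)
    shares = [counts.get(r, 0) / n for r in REGIONS]
    transitions = sum(1 for a, b in zip(sequence, sequence[1:]) if a != b)
    return shares + [transitions]

# A 6-second window at 10 Hz: mostly road, one glance to the center stack
seq = ["road"] * 40 + ["center_stack"] * 8 + ["road"] * 12
features = glance_features(seq)
```

Vectors like `features` could then feed any standard binary classifier, one per target variable (radio-tuning, fatigue state, and so on).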
SAE 2015 World Congress & Exhibition | 2015
Li Hsieh; Sean Seaman; Richard A. Young
As advanced electronic technology continues to be integrated into in-vehicle and portable devices, it is important to understand how drivers handle multitasking in order to maintain safe driving while reducing driver distraction. NHTSA has made driver distraction mitigation a major initiative. Currently, several types of Detection Response Tasks (DRTs) for assessing selective attention by detecting and responding to visual or tactile events while driving are under development by an ISO WG8 DRT group. Among these DRTs, the tactile version (TDRT) is considered a sensitive surrogate measure for driver attention without visual-manual interference in driving, according to the ISO DRT Draft Standard. In our previous study of cognitive demand, our results showed that the TDRT is the only surrogate DRT task with an acute sensitivity to a cognitive demand increase in an auditory-vocal task (i.e., the n-Back verbal working memory task). At the same time, the TDRT showed specificity, responding only to increased cognitive demand and not to the increased physical demand of a visual-manual task (i.e., the Surrogate Reference Task, or SuRT). Similar findings in both simulated and on-road driving confirmed that the TDRT is a sensitive, specific, and reliable surrogate test for measuring the effects of secondary tasks on driver attention. The current paper further investigated eye glance patterns and subjective ratings, and their relationship with DRT metrics, allowing a more comprehensive understanding of the attentional effect of secondary tasks on driver performance.
automotive user interfaces and interactive vehicular applications | 2010
Li Hsieh; Sean Seaman; Richard A. Young
Evoked Response Potential (ERP) and functional Magnetic Resonance Imaging (fMRI) recordings in this study shed light on underlying neural mechanisms for higher cognitive processes and attention allocation during multitasking of cell phone conversations and driving. Behavioral results indicated that hands-free cellular phone conversations caused statistically significant but small reaction time effects for visual event detection during simulated and on-road driving. The validated Static Load driving paradigm yields high correlations of red-light reaction times between laboratory and on-road settings. Both ERP and fMRI findings suggested that cognitive distractions are correlated with increased cognitive load and attentional distribution. The novel contribution of this ERP and fMRI study is that adding an angry emotional valence to the speech increased the alertness level, resulting in reduced driver distraction, likely via increases in right frontoparietal networks and dampened or desynchronized left frontal activity.
automotive user interfaces and interactive vehicular applications | 2017
Bobbie Seppelt; Sean Seaman; Linda Angell; Bruce Mehler; Bryan Reimer
Voice interfaces offer promise in allowing drivers to keep their eyes on-road and hands on-wheel. In relieving visual-manual demand, there is the potential for voice-enabled interfaces to inadvertently shift the burden of load to cognitive resources. Measurement approaches are needed that can identify when and to what extent cognitive load is present during driving. A modified form of the AttenD algorithm was applied to assess the amount of cognitive load present in a set of auditory-vocal task interactions. These tasks were a subset of a larger on-road study, conducted in the Boston area, of driver response during use of an in-vehicle voice system [22]. The modified algorithm differentiated among the set of auditory-vocal tasks examined, and may be useful to HMI practitioners who are working to develop and evaluate HMIs that support drivers in managing their attention to the road, and in the development of real-time driver attention monitoring systems.
Transportation Research Record | 2017
Joonbum Lee; Ben D. Sawyer; Bruce Mehler; Linda Angell; Bobbie Seppelt; Sean Seaman; Lex Fridman; Bryan Reimer
Multitasking-related demands can adversely affect drivers' allocation of attention to the roadway, resulting in delayed or missed responses to roadway threats and decrements in driving performance. Robust methods for obtaining evidence and data about demands on and decrements in the allocation of driver attention are needed as input for design, training, and policy. The detection response task (DRT) is a commonly used method (ISO 17488) for measuring the attentional effects of cognitive load. The AttenD algorithm is a method intended to measure driver distraction through real-time glance analysis, in which individual glances are converted into a scalar value using simple rules considering glance duration, frequency, and location. A relationship between the two tools is explored. A previous multitasking driving simulation study, which used the remote form of the DRT to differentiate the demands of a primary visual-manual human-machine interface from alternative primary auditory-vocal multimodal human-machine interfaces, was reanalyzed using AttenD, and the two analyses compared. Results support an association between DRT performance and AttenD algorithm output. Summary statistics produced from AttenD profiles differentiate between the demands of the human-machine interfaces considered with more power than analyses of DRT response time and miss rate. Among the discussed implications is the possibility that AttenD taps some of the same attentional effects as the DRT. Future research paths, strategies for analyses of past and future data sets, and possible applications for driver state detection are also discussed.
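The core of the AttenD glance rule described above can be sketched as a time buffer that drains while the eyes are off the road and refills when they return. The following is a simplified illustration, not the published algorithm: the actual AttenD rules include a refill latency and exceptions for mirror and speedometer glances, which are omitted here, and the 2-second buffer size and 0.1 s frame step are stated assumptions of this sketch.

```python
def attend_buffer(glances, dt=0.1, buffer_max=2.0):
    """Simplified AttenD-style buffer: starts full, drains in real time
    while the eyes are off the road, and refills at the same rate once
    they return. Values near 0 indicate high visual distraction.

    glances: one region label per dt-second frame.
    Returns the buffer value after each frame.
    """
    buf = buffer_max
    trace = []
    for region in glances:
        if region == "road":
            buf = min(buffer_max, buf + dt)   # refill while on-road
        else:
            buf = max(0.0, buf - dt)          # drain while off-road
        trace.append(buf)
    return trace

# 1 s on road, 1.5 s glance to a display, 1 s back on road
trace = attend_buffer(["road"] * 10 + ["display"] * 15 + ["road"] * 10)
```

Summary statistics over such a trace (minimum value, time spent below a threshold) are the kind of AttenD-derived measures the reanalysis compares against DRT response time and miss rate.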
automotive user interfaces and interactive vehicular applications | 2016
Joshua E. Domeyer; Sean Seaman; Linda Angell; Joonbum Lee; Bryan Reimer; Chong Zhang; Birsen Donmez
The Strategic Highway Research Program 2 (SHRP2) can provide unique information on how driver behavior leads to crashes or near-crashes. A subset of this dataset was created to examine naturalistic engagement in secondary tasks (NEST; Owens, Angell, Hankey, Foley, & Ebe, 2015). The NEST dataset is composed of crash and near-crash epochs in which secondary tasks were identified as a contributing factor, and it contains 20 seconds of coded behavior before each crash or near-crash. The present analysis aims to identify high-level prevalence characteristics of the dataset. This analysis focuses on the prevalence of secondary task engagement alone, during adverse weather, and during different traffic densities. Additionally, task co-occurrence with other tasks (i.e., comorbidity) is considered. Overall, the results indicate that some factors (e.g., weather, traffic density) can influence a driver's decision to engage in certain secondary tasks.