
Publication


Featured research published by Brian McClimens.


Journal of the Acoustical Society of America | 2006

Aural classification of impulsive‐source active sonar echoes

James W. Pitton; Scott Philips; Les E. Atlas; James A. Ballas; Derek Brock; Brian McClimens; Maxwell H. Miller

The goal of this effort is to develop automatic target classification technology for active sonar systems by exploiting knowledge of signal processing methods and human auditory processing. Using impulsive‐source active sonar data, formal listening experiments were conducted to determine if and how human subjects can discriminate between sonar target and clutter echoes using aural cues alone. Both trained sonar operators and naive listeners at APL‐UW were examined to determine a baseline performance level. This level was found to be well above chance for multiple subjects in both groups, validating the accepted wisdom that there are inherent aural cues separating targets from clutter. In a subsequent experiment, feedback was provided to the naive listeners and classification performance dramatically improved, demonstrating that naive listeners can be trained to a level on par with experts. Using these trained listeners at APL‐UW, a multidimensional scaling (MDS) listening experiment was designed and condu...


military communications conference | 2011

Facilitating the watchstander's voice communications task in future Navy operations

Derek Brock; Christina Wasylyshyn; Brian McClimens; Dennis Perzanowski

Recent human performance research at the Naval Surface Warfare Center, Dahlgren Division (NSWCDD) has shown that increasing the number of concurrent voice communications tasks individual Navy watchstanders must handle is an uncompromising empirical barrier to streamlining crew sizes in future shipboard combat information centers. Subsequent work on this problem at the Naval Research Laboratory (NRL) has resulted in a serialized communications monitoring prototype (U.S. Patent Application Pub. No. US 2007/0299657) that uses a patented NRL technology known as “pitch synchronous segmentation” (U.S. Patent 5,933,808) to accelerate buffered human speech up to 100% faster than its normal rate without a meaningful decline in intelligibility. In conjunction with this research effort, a series of ongoing human subjects studies at NRL has shown that rate-accelerated, serialized communications monitoring overwhelmingly improves performance measures of attention, comprehension, and effort in comparison to concurrent listening in the same span of time. This paper provides an overview of NRL's concurrent communications monitoring solution and summarizes the empirical performance questions addressed by, and the outcomes of, the Lab's associated program of listening studies.
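The pitch-synchronous segmentation algorithm itself is described only in the cited patents, but the general idea of playing buffered speech faster without raising its pitch can be illustrated with a plain overlap-add (OLA) time-compression sketch. The function below is a generic textbook approximation, not NRL's patented method; the frame and hop sizes are arbitrary illustrative choices.

```python
import math

def time_compress_ola(x, speedup=2.0, frame=400, hop_out=200):
    """Generic overlap-add (OLA) time compression: shortens a signal
    by roughly the given factor without shifting its pitch, by taking
    windowed frames at a wide input hop and cross-fading them back
    together at a narrower output hop.

    Textbook sketch only, NOT NRL's patented pitch-synchronous
    segmentation method (U.S. Patent 5,933,808).
    """
    hop_in = int(hop_out * speedup)  # analysis hop on the input side
    # Triangular window so overlapping frames cross-fade smoothly.
    win = [1.0 - abs(2.0 * i / (frame - 1) - 1.0) for i in range(frame)]
    out, norm = [], []
    pos = 0
    while pos + frame <= len(x):
        start = (pos // hop_in) * hop_out  # where this frame lands in the output
        while len(out) < start + frame:
            out.append(0.0)
            norm.append(0.0)
        for i in range(frame):
            out[start + i] += x[pos + i] * win[i]
            norm[start + i] += win[i]
        pos += hop_in
    # Divide out the summed window so amplitude is preserved.
    return [o / n if n > 1e-9 else 0.0 for o, n in zip(out, norm)]

# A test tone with a 40-sample period comes out about half as long
# but with the same local period, i.e. the same pitch.
tone = [math.sin(2 * math.pi * n / 40) for n in range(8000)]
fast = time_compress_ola(tone, speedup=2.0)
```

Real time-scale modification systems refine this idea (e.g., by aligning frames to pitch periods, as the patented method's name suggests) precisely so that intelligibility survives large speedups.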


international conference on human-computer interaction | 2011

Modeling Attention Allocation in a Complex Dual Task with and without Auditory Cues

Brian McClimens; Derek Brock

Navy watchstanding operations increasingly involve information-saturated environments in which operators must attend to more than one critical task display at a time [1]. In response, the Navy is pursuing a model-based understanding of human performance in multitask settings. Empirical studies with a complex dual task and related cognitive modeling work in the authors’ lab suggest that auditory cueing is an effective strategy for mediating operators’ attention [2,3,4]. Characterizing the effects of widely separated displays on performance and effort is an important ancillary concern, and a series of cognitive models developed with the EPIC cognitive architecture [5] is used for this purpose. These cognitive models verify a key finding from an empirical study; namely, time spent on the primary, relatively stateless, tracking task is regulated by state information retained from the secondary, radar task. These findings suggest that in multitask settings, operators use relatively simple state information about a task they are about to leave to gauge how long they can attend to other matters before they must return.


international conference on auditory display | 2009

Evaluating the utility of auditory perspective-taking in robot speech presentations

Derek Brock; Brian McClimens; Christina Wasylyshyn; J. Gregory Trafton; J. Malcolm McCurry

In speech interactions, people routinely reason about each other's auditory perspective and change their manner of speaking accordingly, by adjusting their voice to overcome noise or distance, or by pausing for especially loud sounds and resuming when conditions are more favorable for the listener. In this paper we report the findings of a listening study motivated both by this observation and a prototype auditory interface for a mobile robot that monitors the aural parameters of its environment and infers its user's listening requirements. The results provide significant empirical evidence of the utility of simulated auditory perspective taking and the inferred use of loudness and/or pauses to overcome the potential of ambient noise to mask synthetic speech.


Journal of the Acoustical Society of America | 2006

Perceptual dimensions of impulsive‐source active sonar echoes

Jason E. Summers; Derek Brock; Brian McClimens; Charles F. Gaumond; Ralph N. Baer

Recent findings [J. Pitton et al., J. Acoust. Soc. Am. 119, 3395(A) (2006)] have joined anecdotal evidence to suggest that human listeners are able to discriminate target from clutter in cases for which automatic classifiers fail. To uncover the dimensions of the perceptual space in which listeners perform classification, a multidimensional scaling (MDS) experiment was performed. Subjects rated the aural similarity between ordered pairs of stimuli drawn from a set of 100 operationally measured sonar signals, comprising 50 target echoes and 50 false‐alarm clutter echoes. Experimental controls were employed to evaluate consistency in judgments within and between subjects. To ensure that dimensions were discovered rather than imposed [Allen and Scollie, J. Acoust. Soc. Am. 112, 211–218 (2002)], subjects were neither trained in classification nor made aware of the underlying two‐class structure of the signal set. While training improves classification performance, prior work suggests that both expert and naïve...
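The MDS procedure referred to above recovers a spatial configuration of stimuli from pairwise similarity judgments. As an illustration of the underlying idea only (not the specific algorithm or software used in the study), the sketch below minimizes Kruskal-style raw stress by gradient descent on a toy dissimilarity matrix:

```python
import math
import random

def mds_embed(diss, dim=2, iters=3000, lr=0.01, seed=0):
    """Toy metric MDS: place n points in `dim` dimensions so that their
    pairwise distances approximate the dissimilarity matrix `diss`, by
    gradient descent on the raw stress
        S = sum_{i<j} (d_ij - diss_ij)^2.
    Illustrative sketch only; real studies use packaged MDS routines.
    """
    rng = random.Random(seed)
    n = len(diss)
    X = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        grad = [[0.0] * dim for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                d = math.dist(X[i], X[j])
                if d < 1e-9:
                    continue
                coef = 2.0 * (d - diss[i][j]) / d
                for k in range(dim):
                    g = coef * (X[i][k] - X[j][k])
                    grad[i][k] += g
                    grad[j][k] -= g
        for i in range(n):
            for k in range(dim):
                X[i][k] -= lr * grad[i][k]
    return X

def stress(X, diss):
    n = len(diss)
    return sum((math.dist(X[i], X[j]) - diss[i][j]) ** 2
               for i in range(n) for j in range(i + 1, n))

# Dissimilarities of the four corners of a unit square: a 2-D
# embedding should reproduce them almost exactly (stress near zero).
# Several random restarts guard against poor local minima.
r2 = math.sqrt(2.0)
square = [[0, 1, r2, 1],
          [1, 0, 1, r2],
          [r2, 1, 0, 1],
          [1, r2, 1, 0]]
coords = min((mds_embed(square, seed=s) for s in range(3)),
             key=lambda X: stress(X, square))
```

In a perceptual study like the one above, the "dissimilarity" entries are the listeners' similarity ratings (inverted), and the recovered axes are then interpreted as candidate perceptual dimensions.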


Journal of the Acoustical Society of America | 2004

Using spatialized sound cues in an auditorily rich environment

Derek Brock; James A. Ballas; Janet L. Stroup; Brian McClimens

Previous Navy research has demonstrated that spatialized sound cues in an otherwise quiet setting are useful for directing attention and improving performance by 16.8% or more in the decision component of a complex dual‐task. To examine whether the benefits of this technique are undermined in the presence of additional, unrelated sounds, a background recording of operations in a Navy command center and a voice communications response task [Bolia et al., J. Acoust. Soc. Am. 107, 1065–1066 (2000)] were used to simulate the conditions of an auditorily rich military environment. Without the benefit of spatialized sound cues, performance in the presence of this extraneous auditory information, as measured by decision response times, was an average of 13.6% worse than baseline performance in an earlier study. Performance improved when the cues were present by an average of 18.3%, but this improvement remained below the improvement observed in the baseline study by an average of 11.5%. It is concluded that while the two types of extraneous sound information used in this study degrade performance in the decision task, there is no interaction with the relative performance benefit provided by the use of spatialized auditory cues. [Work supported by ONR.]


international conference on auditory display | 2004

THE DESIGN OF MIXED-USE, VIRTUAL AUDITORY DISPLAYS: RECENT FINDINGS WITH A DUAL-TASK PARADIGM

Derek Brock; James A. Ballas; Janet L. Stroup; Brian McClimens


Archive | 2008

Evaluating Listeners' Attention to and Comprehension of Spatialized Concurrent and Serial Talkers at Normal and a Synthetically Faster Rate of Speech

Derek Brock; Brian McClimens; J. Gregory Trafton; Malcolm McCurry; Dennis Perzanowski


ISICT '03 Proceedings of the 1st international symposium on Information and communication technologies | 2003

Perceptual issues for the use of 3D auditory displays in operational environments

Derek Brock; James A. Ballas; Brian McClimens


Archive | 2010

VIRTUAL AUDITORY CUEING REVISITED

Derek Brock; Brian McClimens; Malcolm McCurry

Collaboration


Dive into Brian McClimens's collaborations.

Top Co-Authors

Derek Brock, United States Naval Research Laboratory
Christina Wasylyshyn, United States Naval Research Laboratory
James A. Ballas, United States Naval Research Laboratory
Charles F. Gaumond, United States Naval Research Laboratory
Dennis Perzanowski, United States Naval Research Laboratory
J. Gregory Trafton, United States Naval Research Laboratory
Janet L. Stroup, United States Naval Research Laboratory
Jason E. Summers, Rensselaer Polytechnic Institute
Farilee E. Mintz, Rensselaer Polytechnic Institute