Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Brian M. Moon is active.

Publication


Featured research published by Brian M. Moon.


IEEE Intelligent Systems | 2006

Making Sense of Sensemaking 1: Alternative Perspectives

Gary Klein; Brian M. Moon; Robert R. Hoffman

Sensemaking has become an umbrella term for efforts at building intelligent systems. This essay examines sensemaking from various perspectives to see if we can separate the things that are doable from the things that seem more like pie in the sky.


IEEE Intelligent Systems | 2004

What is design in the context of human-centered computing?

Robert R. Hoffman; Axel Roesler; Brian M. Moon

We discuss design in the context of human-centered computing. Problem solving often involves recognizing and fiddling with tacit assumptions; such realizations can often come from seeing things from new perspectives. Appreciating the human-centered perspective may offer some hope for enriching design's scientific foundations and for crafting new and better approaches to design. Certainly this suggests a constraint on, or a goal for, design, but how do we go from such statements to actual designs that accomplish the stated goals? We approach this class of question by considering the origins of and historical influences on the notion of design, and then by considering the assumptions underlying our modern conception of design in light of the principles of human-centered computing.


Science | 2011

Comment on “Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping”

Joel J. Mintzes; Alberto J. Cañas; John W. Coffey; James Gorman; Laine Gurley; Robert R. Hoffman; Saundra Y. McGuire; Norma Miller; Brian M. Moon; James Trifone; James H. Wandersee

Karpicke and Blunt (Reports, 11 February 2011, p. 772) reported that retrieval practice produces greater gains in learning than elaborative studying with concept mapping and concluded that this strategy is a powerful way to promote meaningful learning of complex concepts commonly found in science education. We question their findings on methodological and epistemological grounds.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2004

Designing Support for Intelligence Analysts

William C. Elm; Malcolm J. Cook; Frank L. Greitzer; Robert R. Hoffman; Brian M. Moon; Susan G. Hutchins

Intelligence analysis is a prototypical example of a situation that can be described by the phrase “coping with complexity,” given a variety of complicating factors including (but certainly not limited to) massive amounts of data, inherent uncertainty in the data, dynamism in the underlying world, and the risks associated with the conclusions drawn from the data. Given this complex nature of intelligence analysis, a wide variety of advanced visualization tools are continually being developed in hopes of designing support for the analysis process. In addition, a number of different cognitive analysis activities are ongoing in hopes of deriving requirements for future support tools. The goal of this panel is to present a sample from both areas in hopes of providing an integration of efforts and thus more effective support tools for intelligence analysts. Four speakers will present from an analytic perspective, one from a tool-development perspective, and one from a support-function perspective (the medium between analysis and design). This should provide an interesting set of complementary discussions of this topic area.


Theoretical Issues in Ergonomics Science | 2011

Reasoning difficulty in analytical activity

Robert R. Hoffman; Simon Henderson; Brian M. Moon; David T. Moore; Jordan A. Litman

We review the consensus of expert opinion concerning the psychology of intelligence analysis, as a form of critical thinking. This consensus details a number of ways in which the cognitive work is difficult. Many senior analysts have commented upon the requirements of intelligence analysis – the reasoning traps to which novices fall victim, and the required knowledge and skills of experts. There remain gaps in our understanding, not just because the research is classified. There simply has not been that much systematic research. If the empirical base were broadened, headway might be made in training and techniques to help analysts cope with difficulty. We hope that this article contributes by presenting an overview and rationale for empirical study of the cognitive ergonomics of intelligence analysis.


International Journal of Human-computer Interaction | 2015

Concept Mapping Usability Evaluation: An Exploratory Study of a New Usability Inspection Method

Randolph G. Bias; Brian M. Moon; Robert R. Hoffman

A key aspect of a website or any artifact is its usability—the ability of the artifact’s target audience to carry out tasks safely, effectively, efficiently, even joyfully. One class of usability evaluation methods is inspection methods, in which the usability professional systematically inspects the user interface to discern potential usability problems. Here the article proposes employing Concept Mapping, a proven method of knowledge elicitation and representation, as a new, structured usability inspection method. Nineteen students in a master’s-level usability class each generated a Concept Map (Cmap) of 1 of 5 websites. These Cmaps were shared with the sites’ webmasters, and the webmasters completed a questionnaire giving us feedback on the value of the Cmaps for subsequent site redesigns. The article presents those data, infers what improvements need to be made in the new Concept Mapping Usability Evaluation method, and invites others to join us in investigating the potential value of this method.


53rd Human Factors and Ergonomics Society Annual Meeting (HFES 2009), San Antonio, TX, vol. 1, pp. 309-313 | 2009

Using a Knowledge Elicitation Method to Specify the Business Model of a Human Factors Organization

Johannes Martinus Cornelis Schraagen; Josine van de Ven; Robert R. Hoffman; Brian M. Moon

Concept Mapping was used to structure knowledge elicitation interviews with a group of human factors specialists whose goal was to describe the business model of their Department. This novel use of cognitive task analysis to describe the business model of a human factors organization resulted in a number of Concept Maps on topics such as Department strengths and weaknesses, strategic plans and partnerships, and ambitions and goals. This work might be seen as a prototype for how other human factors organizations might brainstorm their activities, progress, and goals.


58th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2014 | 2014

Tailoring Cognitive Task Analysis (CTA) Methods for Use in Healthcare

Laura Militello; Cindy Dominguez; Patricia R. Ebright; Brian M. Moon; Alissa L. Russ; Charlene R. Weir

Cognitive task analysis (CTA) methods are most widely known for their contributions to military, nuclear power plant, and aviation research. In recent years, however, these methods have been adapted and applied with increasing frequency to address issues in healthcare. CTA methods have been used in the context of designing and integrating health information technology, in pursuit of improved patient safety, and as a means to improve education and training. This panel will 1) reflect on strategies for tailoring CTA methods for use in a range of healthcare settings, 2) highlight challenges to conducting CTA in those settings, and 3) discuss important contributions of CTA in addressing challenging issues in healthcare today.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2005

Beyond Requirements: Improving Software Tools for Intelligence Analysts

Sarah Geitz; Brian M. Moon; Anita D'Amico; Robert R. Hoffman; Rob Page

The goal of this panel is to discuss critical human factors concerns in the development of software for intelligence analysts. The panel presentations are designed to provide a high-level overview of the software development process, the intelligence analysis process, and the challenges both face in obtaining user feedback. Presentations will examine a variety of issues, including the analysis of imagery and text, information assurance, data fusion, visualization models, establishing situational awareness, and empowering analysts with open source software. The panel discussion will focus on extracting generic processes that can be applied to obtaining more accurate software metrics, requirements, and solutions from a world where certain topics cannot be discussed. Methods and metaphors for better describing to individuals working outside of the classified world the context within which a tool will be used may be touched upon, as well as ways of overcoming both internal and external politics. Human factors concerns may also be addressed, such as evaluating how trust affects the feedback received from individual analysts, and the communication and interaction within and between groups of analysts. Identifying and overcoming potential perceptual problems in the software development process will also be discussed. The anticipated outcome of the panel is to identify processes, techniques, and technologies that can be applied to obtaining requirements that support cognitive processes, which can in turn be applied to developing software tools that better fit the needs of intelligence analysts.


IEEE Intelligent Systems | 2006

Making Sense of Sensemaking 2: A Macrocognitive Model

Gary Klein; Brian M. Moon; Robert R. Hoffman

Collaboration


Dive into Brian M. Moon's collaborations.

Top Co-Authors

Robert R. Hoffman
University of West Florida

Alberto J. Cañas
University of West Florida

Alissa L. Russ
Indiana University Bloomington

Anne Miller
Vanderbilt University Medical Center

Brad Hamner
Carnegie Mellon University