Publications


Featured research published by Marvin S. Cohen.


Human Factors | 1996

Metarecognition in Time-Stressed Decision Making: Recognizing, Critiquing, and Correcting

Marvin S. Cohen; Jared T. Freeman; Steve Wolf

We describe a framework for decision making, called the recognition/metacognition (R/M) model, that explains how decision makers handle uncertainty and novelty while exploiting their experience in real-world domains. The model describes a set of critical-thinking strategies that supplement recognitional processes by verifying the results of recognition and correcting problems. Structured situation models causally organize information about a situation and provide a basis for metarecognitional processes. Metarecognitional processes determine when it is worthwhile to think more about a problem; identify evidence-conclusion relationships within a situation model; critique situation models for incompleteness, conflict, and unreliability; and prompt collection or retrieval of new information and revision of assumptions. We illustrate the R/M framework in the context of naval tactical decision making.
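
Read as a control loop, the abstract's account of metarecognition suggests a cycle: decide whether further thinking is worth the time, critique the current situation model for incompleteness, conflict, and unreliability, and correct it by gathering information or adopting assumptions. The sketch below is one hypothetical rendering of that cycle in Python; the names (SituationModel, quick_test, critique, correct) and the stakes-times-uncertainty threshold are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SituationModel:
    """Causally organized picture of the situation produced by recognition."""
    conclusions: dict = field(default_factory=dict)   # e.g. {"intent": "hostile"}
    support: dict = field(default_factory=dict)       # conclusion -> list of evidence dicts
    assumptions: list = field(default_factory=list)   # gaps filled by explicit assumption

def quick_test(time_available, stakes, uncertainty):
    """Metarecognition first asks whether more thinking is worth the time."""
    return time_available > 0 and stakes * uncertainty > 1.0   # illustrative threshold

def critique(model):
    """Flag the three problem types named in the abstract:
    incompleteness, conflict, and unreliability."""
    problems = []
    for conclusion, evidence in model.support.items():
        if not evidence:
            problems.append(("incomplete", conclusion))
        elif len({e["sign"] for e in evidence}) > 1:
            problems.append(("conflict", conclusion))
        elif any(e["reliability"] < 0.5 for e in evidence):
            problems.append(("unreliable", conclusion))
    return problems

def correct(model, problems, collect_info):
    """Correcting steps: retrieve or collect new information, or adopt an assumption."""
    for kind, conclusion in problems:
        new_evidence = collect_info(conclusion)
        if new_evidence is not None:
            model.support[conclusion].append(new_evidence)
        else:
            model.assumptions.append((conclusion, kind))

def decide(model, time_available, stakes, uncertainty, collect_info):
    """Accept the recognized answer, or critique and correct it while time allows."""
    while quick_test(time_available, stakes, uncertainty):
        problems = critique(model)
        if not problems:
            break
        correct(model, problems, collect_info)
        time_available -= 1.0
    return model.conclusions
```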


Human Factors | 1993

Real-Time Expert System Interfaces, Cognitive Processes, and Task Performance: An Empirical Assessment

Leonard Adelman; Marvin S. Cohen; Terry A. Bresnick; James O. Chinnis; Kathryn Blackmond Laskey

In this experiment we investigated the effect of different real-time expert system interfaces on operators' cognitive processes and performance. The results supported the principle that a real-time expert system interface should focus operators' attention where it is required most. However, following this principle had unanticipated consequences. In particular, it led to inferior performance on less critical, yet still important, cases requiring operators' attention. For such cases, operators performed better with an interface that let them select where to focus their attention. Having a rule-generation capability improved performance with all interfaces, but less than hypothesized. In all cases, performance with the different interfaces and the rule-generation capability was explained by the effect of the interfaces on cognitive process measures.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 1996

Thinking Naturally about Uncertainty

Marvin S. Cohen; Jared T. Freeman

Methods for handling uncertainty should be evaluated in terms of their cognitive compatibility with real-world decision makers. Bayesian models of uncertainty demand precise up-front assessments of all problem elements and discourage the dynamic evolution of problem understanding. They handle missing or conflicting data by mathematical aggregation, while real-world decision makers regard gaps in knowledge and conflicting evidence as problems to be solved. Finally, they produce as output a statistical average rather than a coherent picture of the situation. Another approach to decision making, based on pattern-matching, does not address the ways in which situation pictures are evaluated and modified. A third approach, however, called the Recognition/Metacognition model, treats decision making under uncertainty as a problem-solving process that starts with the results of recognition, verifies them, and improves them where necessary. Critiquing strategies identify problems of incompleteness, conflict, and unreliability in situation models, and lead to correcting steps that retrieve or collect additional information or adopt assumptions. Training methods based on this model have been developed and tested with active-duty Naval officers.
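
To make the contrast concrete, consider two equally confident but contradictory reports. The numerical sketch below is an invented illustration (the probabilities, the independence assumption, and the 0.5 conflict threshold are all assumptions, not the paper's): mathematical aggregation washes the disagreement out to a noncommittal 0.5, whereas a critiquing strategy would flag the conflict itself as something to be explained.

```python
def aggregate(p_hostile_a, p_hostile_b, prior=0.5):
    """Naive pooling of two reports treated as independent likelihood ratios."""
    odds = ((prior / (1 - prior))
            * (p_hostile_a / (1 - p_hostile_a))
            * (p_hostile_b / (1 - p_hostile_b)))
    return odds / (1 + odds)

report_a, report_b = 0.9, 0.1          # two sources that flatly disagree

print(aggregate(report_a, report_b))   # 0.5 -- the conflict is averaged away

# A critiquing strategy instead flags the disagreement and asks what assumption
# (a faulty sensor, a deceptive target, ...) could explain it.
if abs(report_a - report_b) > 0.5:
    print("conflict: seek more information or adopt an explaining assumption")
```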


IEEE Transactions on Systems, Man, and Cybernetics | 1989

Representing and eliciting knowledge about uncertain evidence and its implications

Kathryn Blackmond Laskey; Marvin S. Cohen; Anne W. Martin

A reasoning system and associated assessment methodology built on a natural schema for an evidential argument are discussed. This argument schema is based on the underlying causal chains linking conclusions and evidence. The framework couples a probabilistic calculus with qualitative approaches to evidential reasoning. The resulting knowledge structure leads to a natural assessment methodology in which the expert first specifies a qualitative argument from evidence to conclusion. Next, the expert specifies a series of premises on which the argument is based. Invalidating any of these premises would disrupt the causal link between evidence and conclusion. The final step is the assessment of the strength of the argument, in the form of degrees of belief for the premises underlying the argument. The expert may also explicitly adopt assumptions affecting the strength of evidential arguments. A higher-level metareasoning process is described, in which assumptions underlying the strength and direction of evidential arguments may be revised in response to conflict.
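
The schema lends itself to a simple data structure: an argument links evidence to a conclusion through premises, each premise carries a degree of belief or is explicitly adopted as an assumption, and a metareasoning step retracts assumptions when strong arguments collide. The Python sketch below is one assumed rendering; the field names, the product rule for argument strength, and the conflict-handling heuristic are illustrative, not the paper's formalism.

```python
from dataclasses import dataclass, field

@dataclass
class Premise:
    statement: str
    belief: float          # expert's degree of belief that the premise holds
    assumed: bool = False  # True if adopted as a working assumption

@dataclass
class Argument:
    evidence: str
    conclusion: str
    premises: list = field(default_factory=list)

    def strength(self) -> float:
        """Joint belief in the underlying premises; invalidating any premise breaks
        the causal link, so assumed premises count as certain until revisited."""
        s = 1.0
        for p in self.premises:
            s *= 1.0 if p.assumed else p.belief
        return s

def revise_on_conflict(arguments, threshold=0.6):
    """Metareasoning step: if strong arguments support contradictory conclusions,
    retract the weakest-believed assumption among them."""
    strong = [a for a in arguments if a.strength() >= threshold]
    if len({a.conclusion for a in strong}) > 1:
        assumed = [p for a in strong for p in a.premises if p.assumed]
        if assumed:
            min(assumed, key=lambda p: p.belief).assumed = False

# Hypothetical usage: a single argument from a radar observation to an intent assessment.
arg = Argument(
    evidence="track turned toward own ship",
    conclusion="hostile intent",
    premises=[Premise("the track is not a commercial flight", 0.7),
              Premise("the turn was deliberate", 0.8, assumed=True)],
)
print(arg.strength())   # 0.7: the assumed premise contributes no discount
```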


Interactive Learning Environments | 2000

Modeling and Diagnosing Domain Knowledge Using Latent Semantic Indexing

Jared T. Freeman; Bryan Thompson; Marvin S. Cohen


Archive | 1994

Training Metacognitive Skills for Situation Assessment

Jared T. Freeman; Marvin S. Cohen


Archive | 1987

Display Techniques for Pilot Interactions with Intelligent Avionics: A Cognitive Approach

Marvin S. Cohen; Martin A. Tolcott; James McIntyre


Archive | 1995

Metacognitive Behavior in Adaptive Agents 1

Bryan Thompson; Marvin S. Cohen; Jared T. Freeman


Archive | 1989

Knowledge Elicitation: Phase 1 Final Report. Volume 1

John Leddo; Theresa M. Mullins; Marvin S. Cohen; Terry A. Bresnick; F. F. Marvin


Archive | 1986

An Expert System Framework for Adaptive Evidential Reasoning: Application to In-Flight Route Re-Planning

Marvin S. Cohen; Kathryn D. Laskey; James McIntyre; Bryan Thompson
