
Publication


Featured research published by Robert R. Hoffman.


Archive | 2003

Adjustable Autonomy and Human-Agent Teamwork in Practice: An Interim Report on Space Applications

Jeffrey M. Bradshaw; Maarten Sierhuis; Alessandro Acquisti; Paul J. Feltovich; Robert R. Hoffman; Renia Jeffers; Debbie Prescott; Niranjan Suri; Andrzej Uszok; Ron van Hoof

We give a preliminary perspective on the basic principles and pitfalls of adjustable autonomy and human-centered teamwork. We then summarize the interim results of our study on the problem of work practice modeling and human-agent collaboration in space applications, the development of a broad model of human-agent teamwork grounded in practice, and the integration of the Brahms, KAoS, and NOMADS agent frameworks. We hope our work will benefit those who plan and participate in work activities in a wide variety of space applications, as well as those who are interested in design and execution tools for teams of robots that can function as effective assistants to humans.
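As a rough illustration of the adjustable-autonomy idea sketched above, consider a policy table that decides how freely an agent may act. This is a minimal sketch under assumed names (Policy, Autonomy, decide are hypothetical); it is not the Brahms, KAoS, or NOMADS API.

```python
from dataclasses import dataclass
from enum import Enum


class Autonomy(Enum):
    """Illustrative autonomy levels for an agent action."""
    FORBIDDEN = 0    # agent may never perform the action
    ASK_HUMAN = 1    # agent must obtain human approval first
    AUTONOMOUS = 2   # agent may act without consultation


@dataclass
class Policy:
    """A hypothetical policy entry: who may do what, and how freely."""
    agent: str
    action: str
    level: Autonomy


def decide(policies: list[Policy], agent: str, action: str) -> Autonomy:
    """Return the most restrictive level matching this agent/action pair.

    Defaults to ASK_HUMAN when no policy applies, so unanticipated
    actions are referred to a person rather than executed silently.
    """
    matches = [p.level for p in policies if p.agent == agent and p.action == action]
    return min(matches, default=Autonomy.ASK_HUMAN, key=lambda lvl: lvl.value)


policies = [
    Policy("rover-1", "move", Autonomy.AUTONOMOUS),
    Policy("rover-1", "drill", Autonomy.ASK_HUMAN),
]
print(decide(policies, "rover-1", "drill"))  # Autonomy.ASK_HUMAN
print(decide(policies, "rover-1", "vent"))   # Autonomy.ASK_HUMAN (no policy)
```

Defaulting to human consultation for unmatched actions reflects the "adjustable" part of the idea: autonomy is granted per action and context, not assumed.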


IEEE Intelligent Systems | 2004

Keeping it too simple: how the reductive tendency affects cognitive engineering

Paul J. Feltovich; Robert R. Hoffman; David D. Woods; Axel Roesler

Certain features of tasks make them especially difficult for humans. These constitute leverage points for applying intelligent technologies, but there's a flip side: designing complex cognitive systems is itself a tough task. Cognitive engineers face the same challenges in designing systems that users confront in working the tasks the systems are intended to aid. We discuss these issues. We assume that cognitive engineers will invoke one or more knowledge shields when confronted with evidence that their understanding and planning rest on a reductive understanding. The knowledge shield phenomenon suggests that it will take effort to change the reductive mindset that people might bring to the design of a complex cognitive system (CCS).


Journal of Knowledge Management | 2003

Knowledge modeling for the preservation of institutional memory

John W. Coffey; Robert R. Hoffman

After setting the stage by briefly surveying knowledge elicitation techniques, this article describes an iterative approach to the elicitation and representation of organizational knowledge called PreSERVe, which stands for prepare, scope, elicit, render, and verify. The method involves an initial process of preparing for knowledge elicitation, followed by an iterative cycle of assessing the scope of the endeavor, eliciting and rendering knowledge, and verifying the result. Use of the PreSERVe method is illustrated by a case study involving work with six senior engineers at NASA Glenn Research Center (NASA GRC), Cleveland, OH, USA.
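The PreSERVe cycle described above can be pictured schematically: one preparation step, then an iterated scope-elicit-render-verify loop that runs until the experts accept the model. The sketch below is a hypothetical rendering; the function names are placeholders, not published tooling.

```python
# Schematic of the PreSERVe loop as the abstract describes it. All
# callables are supplied by the user; names are illustrative only.

def preserve(prepare, scope, elicit, render, verify, max_rounds=10):
    """Run the prepare-scope-elicit-render-verify cycle.

    `prepare` is called once; the remaining callables are applied
    iteratively, each round refining the knowledge model, until
    `verify` reports that the experts accept the model.
    """
    context = prepare()
    model = None
    for _ in range(max_rounds):
        remaining = scope(context, model)   # what still needs covering?
        raw = elicit(context, remaining)    # interviews, concept mapping...
        model = render(model, raw)          # fold new material into the model
        if verify(context, model):          # do the experts accept it?
            return model
    return model
```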


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2002

An Empirical Comparison of Methods for Eliciting and Modeling Expert Knowledge

Robert R. Hoffman; John W. Coffey; Mary Jo Carnot; Joseph D. Novak

The goal of this project in Human-Centered Computing was to apply a variety of methods of Cognitive Task Analysis (CTA) and Cognitive Field Research (CFR) to support a complete process going all the way from knowledge elicitation to leverage point identification and then to system prototyping, and also use this as an opportunity to empirically compare and evaluate the methods. The research relied upon the participation of expert, journeyman, and apprentice weather forecasters at the Naval Training Meteorology and Oceanography Facility at Pensacola Naval Air Station. Methods included Protocol Analysis, a number of types of structured interviews, workspace and work patterns analysis, the Critical Decision Method, the Knowledge Audit, Concept Mapping, and the Cognitive Modeling Procedure. The methods were compared in terms of (1) their yield of information that was useful in modeling expert knowledge, (2) their yield in terms of identification of leverage points (where the application of new technology might bring about positive change), and (3) their efficiency. Efficiency was gauged in terms of total effort (time to prepare to run a procedure, plus time to run the procedure, plus time to analyze the data) relative to the yield (number of leverage points identified, number of propositions suitable for use in a model of domain knowledge). CTA/CFR methods supported the identification of dozens of leverage points and also yielded behaviorally validated models of the reasoning of expert forecasters. Knowledge modeling using Concept Mapping resulted in over a thousand propositions covering domain knowledge. The Critical Decision Method yielded a number of richly-populated case studies with associated Decision Requirements Tables. Results speak to the relative efficiency of various methods of CTA/CFR, and also the strengths of each of the methods. In addition to extending our empirical base on the comparison of knowledge elicitation methods, a deliverable from the project was a knowledge model that illustrates human-centered computing in that it integrates training support and performance aiding.
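The efficiency gauge defined above (yield relative to preparation, running, and analysis time) can be made concrete with a small sketch; the field names and numbers here are illustrative, not data from the study.

```python
# Sketch of the efficiency measure: total effort (prep + run + analysis
# hours) relative to yield (leverage points, propositions).

from dataclasses import dataclass


@dataclass
class MethodTrial:
    name: str
    prep_hours: float
    run_hours: float
    analysis_hours: float
    leverage_points: int
    propositions: int

    @property
    def total_effort(self) -> float:
        return self.prep_hours + self.run_hours + self.analysis_hours

    def yield_per_hour(self) -> float:
        """Combined yield per hour of total effort."""
        return (self.leverage_points + self.propositions) / self.total_effort


trial = MethodTrial("Concept Mapping", prep_hours=2, run_hours=6,
                    analysis_hours=4, leverage_points=5, propositions=240)
print(f"{trial.name}: {trial.yield_per_hour():.1f} items/hour")
```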


IEEE Intelligent Systems | 2002

The Sacagawea principle

Mica R. Endsley; Robert R. Hoffman

Many software tools and systems restrict the availability of information and make information integration and exploration difficult. Poorly designed tools are often brittle, because they prescribe task sequences. But in complex sociotechnical contexts, workers do not perform tasks; they engage in knowledge-driven, context-sensitive choices from among action sequence alternatives in order to achieve goals. So, good tools must be flexible; they must provide the information that workers need to generate appropriate action sequences by which they can achieve the same goal in different situations. Adapted from the writings of Donald Norman is a principle we call the Sacagawea Principle: human-centered computational tools need to support active organization of information, active search for information, active exploration of information, reflection on the meaning of information, and evaluation and choice among action sequence alternatives. Context-conditional variation includes variation due to the worker: each worker has his or her own needs, entailing different requirements and constraints. This implies that individuals should be able to choose different trajectories to achieve the desired outcome in different ways. A good tool gives users discretion to generate various action sequences and express their preferences. As with many HCC principles, we have named this one after a person to give it a concrete and meaningful label. Sacagawea served as a guide, without whose help the Lewis and Clark expedition might not have achieved the successes it did. The name is also somewhat ironic, because Sacagawea was, for part of her life, a captured slave. The theme of machines and robots as slaves is arguably the oldest in the robotics literature, and it is still often used as a metaphor to describe the tools people use to accomplish their work. In this essay, we explore an approach for fulfilling the Sacagawea Principle in system design, an approach based on empirical study of the way in which people process their environments in complex worlds.
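One way to picture the contrast the essay draws between prescribed task sequences and choice among action-sequence alternatives is the toy sketch below; all names and sequences are invented for illustration.

```python
# A brittle tool hard-codes one task sequence:
BRITTLE_SEQUENCE = ["open_form", "fill_field_a", "fill_field_b", "submit"]

# A flexible tool instead exposes the goal and the admissible alternatives.
ALTERNATIVES = {
    "file_report": [
        ["open_form", "fill_field_a", "fill_field_b", "submit"],
        ["import_yesterday", "edit_field_b", "submit"],   # expert shortcut
        ["dictate", "review_transcript", "submit"],       # hands-busy context
    ],
}


def choose_sequence(goal: str, worker_preference) -> list[str]:
    """Let the worker's own criterion pick among admissible sequences."""
    return worker_preference(ALTERNATIVES[goal])


# e.g. a worker who prefers the shortest admissible sequence:
print(choose_sequence("file_report", lambda seqs: min(seqs, key=len)))
```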


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2001

Storm-LK: A Human-Centered Knowledge Model for Weather Forecasting

Robert R. Hoffman; John W. Coffey; Kenneth M. Ford; Mary Jo Carnot

STORM-LK (System To Organize Representations in Meteorology-Local Knowledge) is a human-centered system that used the CmapTools software to represent the knowledge and reasoning of expert forecasters. It demonstrates the feasibility of using Concept Mapping to generate large-scale multimedia knowledge models. STORM-LK can support knowledge preservation, distance learning and collaboration, and navigation through the data that are used in weather forecasting.


Information Visualization | 2006

Concept Map-Based Knowledge Modeling: Perspectives from Information and Knowledge Visualization

John W. Coffey; Robert R. Hoffman; Alberto J. Cañas

This article explores the idea of knowledge modeling as defined at the Florida Institute for Human and Machine Cognition. The notion of knowledge modeling is described to illustrate a particular method by which concept maps might be employed to create a useful structure and organization of other information and knowledge resources. Knowledge model structuring and navigational schemes afforded by the approach are described and illustrated. An example of a knowledge model pertaining to weather forecasting on the Gulf coast of the United States is presented to illustrate these ideas. Examples of how information visualization techniques have been and might be applied to the knowledge modeling scheme are discussed. Ideas pertaining to how knowledge models might serve as learning resources are briefly presented throughout. The article concludes with additional discourse regarding specific ways in which the knowledge modeling approach might be employed to create, present, and organize effective electronic learning resources.
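A concept-map-based knowledge model can be pictured, very roughly, as a set of concept-link-concept propositions plus navigational links out to other maps and media resources. The sketch below is an illustrative representation only; it is not the CmapTools format.

```python
# Minimal sketch of a concept-map knowledge model: propositions as
# (concept, linking phrase, concept) triples, plus resource attachments.

from collections import defaultdict

propositions = [
    ("sea breeze", "is driven by", "differential heating"),
    ("sea breeze", "can trigger", "convective storms"),
    ("convective storms", "are forecast using", "radar imagery"),
]

# Navigation: a concept may open another map or a media resource.
resources = {
    "convective storms": ["map:GulfCoastConvection", "video:briefing_0614"],
    "radar imagery": ["image:wsr88d_example"],
}

# Index the propositions so a concept's neighborhood is easy to walk.
neighbors = defaultdict(list)
for left, link, right in propositions:
    neighbors[left].append((link, right))

for link, right in neighbors["sea breeze"]:
    print(f"sea breeze --{link}--> {right}")
print("attached resources:", resources.get("convective storms", []))
```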


IEEE Intelligent Systems | 2002

The triples rule

Robert R. Hoffman; Patrick J. Hayes; Kenneth M. Ford; Peter A. Hancock

A fundamental stance taken in human-centered computing is that information processing devices must be thought of in systems terms. At first blush, this seems self-evident. However, the notion has a long history, and not just in systems engineering. In this new age of symbiosis, machines are made for specific humans for use in specific contexts. The unit of analysis for cognitive engineering and computer science is a triple: person, machine, and context. The triples rule asserts that system development must take this triple as the unit of analysis, which has strong implications, including a mandate that the engineering of complex systems should include detailed cognitive work analysis. It also has implications for the meaning of intelligence, including artificial intelligence.
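Taking the triples rule literally suggests keying any evaluation by the whole person-machine-context triple rather than by the machine alone. A minimal sketch, with invented example entries:

```python
# The unit of analysis as a (person, machine, context) triple.

from typing import NamedTuple


class Triple(NamedTuple):
    person: str    # e.g. expertise level of the user
    machine: str   # the tool or system under study
    context: str   # the work setting


# The same machine can rate differently in different triples.
usability = {
    Triple("apprentice forecaster", "radar workstation", "routine shift"): "adequate",
    Triple("expert forecaster", "radar workstation", "severe-weather event"): "too rigid",
}

for triple, verdict in usability.items():
    print(f"{triple}: {verdict}")
```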


Science | 2011

Comment on “Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping”

Joel J. Mintzes; Alberto J. Cañas; John W. Coffey; James Gorman; Laine Gurley; Robert R. Hoffman; Saundra Y. McGuire; Norma Miller; Brian M. Moon; James Trifone; James H. Wandersee



Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2004

Designing Support for Intelligence Analysts

William C. Elm; Malcolm J. Cook; Frank L. Greitzer; Robert R. Hoffman; Brian M. Moon; Susan G. Hutchins


Collaboration


Dive into Robert R. Hoffman's collaborations.

Top Co-Authors

Paul J. Feltovich
Florida Institute for Human and Machine Cognition

John W. Coffey
University of West Florida

Neil Charness
Florida State University

Kenneth M. Ford
University of West Florida

Alberto J. Cañas
University of West Florida