
Publication


Featured research published by Barry G. Silverman.


Presence: Teleoperators & Virtual Environments | 2006

Human behavior models for agents in simulators and games: part I: enabling science with PMFserv

Barry G. Silverman; Michael Johns; Jason Cornwell; Kevin O'Brien

This paper focuses on challenges to improving the realism of socially intelligent agents and attempts to reflect the state of the art in human behavior modeling, with particular attention to the impact of personality/cultural values and affect, as well as biology/stress, upon individual coping and group decision making. The first section offers an assessment of the state of the practice and of the need to integrate valid human performance moderator functions (PMFs) from traditionally separated subfields of the behavioral literature. The second section pursues this goal by postulating a unifying architecture and principles for integrating existing PMF theories and models. It also illustrates a PMF testbed called PMFserv, created for implementing and studying how PMFs may contribute to such an architecture. To date it interconnects versions of PMFs on physiology and stress; personality, cultural, and emotive processes (Cognitive Appraisal-OCC, value systems); perception (Gibsonian affordance); social processes (relations, identity, trust, nested intentionality); and cognition (affect- and stress-augmented decision theory, bounded rationality). The third section summarizes several usage case studies (asymmetric warfare, civil unrest, and political leaders) and concludes with lessons learned. Implementing and interoperating this broad collection of PMFs helps to open the agenda for research on syntheses that can help the field reach a greater level of maturity. The companion paper, Part II, presents a case study in using PMFserv for rapid scenario composability and realistic agent behavior.
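
To make the layered-PMF idea concrete, the following is a minimal Python sketch, not PMFserv itself: all class names, parameters, and numbers are illustrative assumptions about how moderator functions from separate subfields (physiology/stress, appraisal-based emotion, and stress-augmented decision making) might be chained so that each layer moderates the next.

    from dataclasses import dataclass, field

    @dataclass
    class PhysiologyPMF:
        """Stress reservoir that moderates downstream cognition."""
        stress: float = 0.0
        def update(self, intensity: float) -> None:
            # Stress accumulates with event intensity and decays slowly.
            self.stress = max(0.0, min(1.0, 0.9 * self.stress + intensity))

    @dataclass
    class EmotionPMF:
        """OCC-style appraisal: events are scored against a value system."""
        values: dict = field(default_factory=lambda: {"safety": 0.6, "gain": 0.4})
        def appraise(self, event: dict) -> float:
            return sum(w * event.get(v, 0.0) for v, w in self.values.items())

    class DecisionPMF:
        """Affect- and stress-augmented utility over perceived affordances."""
        def choose(self, affordances: dict, affect: float, stress: float) -> str:
            # affordances: action -> (base utility, perceived risk);
            # stress penalizes risky actions, affect shifts the whole frame.
            def utility(item):
                _, (base, risk) = item
                return base + 0.2 * affect - stress * risk
            return max(affordances.items(), key=utility)[0]

    class Agent:
        """Composes the layers; each PMF moderates the next one's inputs."""
        def __init__(self):
            self.physio = PhysiologyPMF()
            self.emotion = EmotionPMF()
            self.decide = DecisionPMF()
        def step(self, event: dict, affordances: dict) -> str:
            self.physio.update(event.get("intensity", 0.0))
            affect = self.emotion.appraise(event)
            return self.decide.choose(affordances, affect, self.physio.stress)

    agent = Agent()
    # Under high stress the agent prefers the safer, lower-risk action.
    print(agent.step({"intensity": 0.7, "safety": -0.3},
                     {"flee": (0.4, 0.1), "negotiate": (0.5, 0.6)}))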


Computers & Education | 1995

Computer supported collaborative learning (CSCL)

Barry G. Silverman

This paper reviews efforts to develop and use computer support for situated, collaborative learning. We have been investigating how to integrate two types of group collaboration tools: constructivist and ‘minimal instructionist’. Results from classroom-based experiments illustrate a number of problems arising from the use of either approach alone. Given the right amount and manner of instructionism (i.e. minimal instructionism), students seem to favor its integration into constructivist environments.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2001

Implications of buyer decision theory for design of e-commerce websites

Barry G. Silverman; Mintu Bachann; Khaled Al-Akharas

In the rush to open their websites, e-commerce sites too often fail to support buyer decision-making and search, resulting in lost sales and lost repeat business. This paper reviews why this occurs and why many B2C and B2B website executives fail to understand that appropriate decision support and search technology cannot simply be bought off the shelf. Our contention is that significant investment and effort are required at any given website to create the decision support and search agents needed to properly support buyer decision-making. We provide a framework to guide such effort (derived from buyer behavior choice theory); review the open problems that e-catalog sites pose to the framework and to existing search engine technology; discuss underlying design principles and guidelines; validate the framework and guidelines with a case study; and discuss lessons learned and the steps needed to better support buyer decision behavior in the future.
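
As a concrete reading of the framework's central claim, here is a minimal Python sketch: the stage names follow standard buyer-behavior choice theory, while the feature vocabulary, the site structure, and the audit function are illustrative assumptions rather than anything specified in the paper.

    # Map each buyer decision stage to the kind of site support it needs.
    BUYER_STAGES = {
        "need_recognition":   ["guided_questionnaires", "recommendation_agent"],
        "information_search": ["faceted_search", "comparison_tables"],
        "evaluation":         ["multi_attribute_ranking", "reviews"],
        "purchase":           ["transparent_pricing", "saved_carts"],
        "post_purchase":      ["order_tracking", "return_support"],
    }

    def audit_site(site_features: set) -> dict:
        """Report which decision stages a site leaves unsupported."""
        return {stage: [f for f in needed if f not in site_features]
                for stage, needed in BUYER_STAGES.items()}

    gaps = audit_site({"faceted_search", "reviews", "order_tracking"})
    for stage, missing in gaps.items():
        if missing:
            print(f"{stage}: missing {', '.join(missing)}")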


Human-Computer Interaction | 1992

Human-computer collaboration

Barry G. Silverman

This article offers a model of collaboration processes in which both parties share the task workload at an equal level of cognitive difficulty. The model poses six collaboration factors as important in man-machine collaboration: cognitive orientation, deep knowledge, intention sharing, control plasticity, adaptivity, and experience or memory. The model predicts that two clusters of settings of the six factors exist: one for novices and one for experts. Four experiments are presented that support this prediction and offer several new insights into what makes for effective collaborator design. Many new questions also arise.
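
A minimal Python sketch of the two predicted clusters follows; the six factor names are the article's, but the 0-to-1 scales and the cluster values are illustrative assumptions, not the paper's data.

    from dataclasses import dataclass

    @dataclass
    class CollaboratorProfile:
        cognitive_orientation: float  # match to the user's problem framing
        deep_knowledge: float         # causal models beyond surface rules
        intention_sharing: float      # how much each party reveals its goals
        control_plasticity: float     # who may take or yield task control
        adaptivity: float             # adjustment to the partner over time
        memory: float                 # retained experience across sessions

    # Hypothetical cluster settings: novices benefit from guidance and
    # shared intent; experts from control plasticity and memory.
    NOVICE = CollaboratorProfile(0.8, 0.9, 0.9, 0.3, 0.6, 0.4)
    EXPERT = CollaboratorProfile(0.6, 0.5, 0.5, 0.9, 0.8, 0.9)

    def closer_cluster(p: CollaboratorProfile) -> str:
        """Classify a profile by squared distance to the two clusters."""
        def dist(a, b):
            return sum((x - y) ** 2
                       for x, y in zip(vars(a).values(), vars(b).values()))
        return "novice" if dist(p, NOVICE) < dist(p, EXPERT) else "expert"

    print(closer_cluster(CollaboratorProfile(0.7, 0.8, 0.9, 0.4, 0.5, 0.5)))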


IEEE Transactions on Systems, Man, and Cybernetics | 2007

Sociocultural Games for Training and Analysis

Barry G. Silverman; Gnana K. Bharathy; Michael Johns; Roy J. Eidelson; Tony E. Smith; Benjamin Nye

This paper presents a theory for role-playing simulation games intended to support analysts (and trainees) with generating and testing alternative competing hypotheses on how to influence world conflict situations. Simulated leaders and followers capable of playing these games are implemented in a cognitive modeling framework, called the Performance Moderator Function Server (PMFserv), which covers value systems, personality and cultural factors, emotions, relationships, perception, stress/coping style, and decision making. Of direct interest, as Section I-A explains, is codification and synthesis of best-of-breed social-science models within PMFserv to improve the internal validity of agent implementations. Sections II and III present this for leader profiling instruments and group-membership decision making, respectively. Section IV then offers two real-world case studies (The Third Crusade and SE Asia Today) where agent models are subjected to Turing and correspondence tests under each case study. In sum, substantial effort on game realism, best-of-breed social-science models, and agent validation efforts is essential if analysis and training tools are to help explore cultural issues and alternative ways to influence outcomes. Such exercises, in turn, are likely to improve the state of the science as well.


Operations Research | 1994

Unifying Expert Systems and the Decision Sciences

Barry G. Silverman

There are many tools and much literature that combine the expert systems and mathematical modeling paradigms. This survey focuses on a subset concerned with decision making and with the unification, not mere coexistence, of the two approaches. The unification effort is new and presents many research challenges at the theoretical, methodological, and tool levels. At the theoretical level, accepted prescriptions now exist that stipulate in which situations it is valid to use various forms of mathematical and qualitative reasoning. This is leading to a unified theory of the decision sciences for problems spanning choice, forecasting, risk assessment, design, operations, and many others. At the tool level, three forms of synthesis of expert systems and mathematical models are particularly noteworthy: knowledge-based decision aids, intelligent decision modeling systems, and decision analytic expert systems. This survey gives definitions, surveys, and examples of each of these ways of unifying expert systems and modeling. Following this are lessons learned and further research needs. A great deal of synthesis work remains to be done, and a goal of this survey is to highlight some of the issues and invite discussion.
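
To illustrate the third of those syntheses, a decision analytic expert system, here is a minimal Python sketch in which symbolic feasibility rules screen options and an expected-utility model ranks the survivors; the rules, options, and numbers are illustrative assumptions.

    def rule_screen(option: dict) -> bool:
        """Expert-system side: symbolic feasibility rules."""
        return option["budget"] <= 100 and option["risk_class"] != "prohibited"

    def expected_utility(option: dict) -> float:
        """Decision-science side: utility weighted by outcome probabilities."""
        return sum(p * u for p, u in option["outcomes"])

    options = [
        {"name": "A", "budget": 80,  "risk_class": "normal",
         "outcomes": [(0.7, 50), (0.3, -20)]},
        {"name": "B", "budget": 120, "risk_class": "normal",
         "outcomes": [(0.9, 40), (0.1, 0)]},
        {"name": "C", "budget": 60,  "risk_class": "prohibited",
         "outcomes": [(1.0, 90)]},
    ]

    # Rules prune B (over budget) and C (prohibited); utility ranks the rest.
    feasible = [o for o in options if rule_screen(o)]
    best = max(feasible, key=expected_utility)
    print(best["name"], expected_utility(best))  # A 29.0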


Knowledge Acquisition | 1991

Expert critics: operationalizing the judgement/decision-making literature as a theory of “bugs” and repair strategies

Barry G. Silverman

Humans are well known for being adept at using their intuition and expertise in many situations. However, in some settings even human experts are susceptible to errors in judgement and to a failure to recognize the limits of their knowledge. This happens especially often in semi-structured situations, where multi-disciplinary expertise is required, or when uncertainty is a factor. At these times our natural ability to recognize and correct errors fails us, since we have faith in our reasoning. One way to deal with such problems is to have a computerized “critic” assist in the process. This article introduces the concept of automated critics that collaborate with human experts to help improve their problem-solving performance. A critic is a narrowly focused program that uses a knowledge base to help it recognize (1) what types of human error have occurred, and (2) what kinds of criticism strategies could help the user prevent or eliminate those errors. In discussing the “errors” half of this knowledge base, a distinction is drawn between the expert's knowledge and his or her judgement; the focus in this article is more on judgement than on knowledge, but both are addressed. To build automated critics it is important to understand the use and behavior of human critics, so critic theory, principles, and rules for design are described in this article. These are presented by showing the various types of criticism encountered across a variety of generic tasks, such as medical diagnosis, coaching, forecasting, and authoring, among many others. A model of expert cognition and rules for identifying cognitive biases are thus presented. This rule base exploits four decades of literature on the psychology of judgement and decision-making as a generative theory of “bugs” in expert intuition and as deep knowledge from which rules about buggy behavior are drawn. For the commonly recurring expert errors, specific preventive and corrective strategies are also reviewed, and considerations for criticism presentation and deployment are explained. Particular attention is given to rules about when and how criticism should be offered. By consulting and attempting to operationalize the judgement and decision-making literature within the critiquing approach, this article establishes criticism-based problem solving as a novel way to bridge the gap between the traditional domain-knowledge-rich approaches of AI and the domain-independent, theory-rich approaches of decision analysis. Attention is also devoted to the obstacles to, and opportunities for, further bridging this gap.
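
The two-part knowledge base the article describes can be sketched in a few lines of Python; the bias names come from the judgement/decision-making literature the paper draws on, but the trigger predicates, thresholds, and session fields are illustrative assumptions.

    # Each rule pairs a judgement "bug" with a trigger predicate over the
    # expert's working session and the repair strategy the critic offers.
    CRITIC_RULES = [
        ("anchoring",
         lambda s: abs(s["estimate"] - s["first_figure_seen"])
                   < 0.05 * s["first_figure_seen"],
         "Ask the expert to re-estimate from a different reference point."),
        ("overconfidence",
         lambda s: s["confidence"] > 0.95 and s["evidence_items"] < 3,
         "Request explicit reasons the estimate could be wrong."),
        ("availability",
         lambda s: s["based_on_recent_case"],
         "Show base-rate statistics alongside the recalled case."),
    ]

    def critique(session: dict) -> list:
        """Return (bug, repair) pairs whose triggers fire on this session."""
        return [(bug, repair) for bug, trig, repair in CRITIC_RULES
                if trig(session)]

    session = {"estimate": 102.0, "first_figure_seen": 100.0,
               "confidence": 0.97, "evidence_items": 1,
               "based_on_recent_case": False}
    for bug, repair in critique(session):
        print(f"{bug}: {repair}")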


Winter Simulation Conference | 2010

Validating agent based social systems models

Gnana K. Bharathy; Barry G. Silverman

Validating social systems models is not a trivial task. This paper outlines some of our past efforts in validating models of social systems with cognitively detailed agents, and presents some of the challenges we faced. A social system built primarily of cognitively detailed agents can provide multiple levels of correspondence, at both observable and abstract aggregated levels. Such a system also poses several challenges, including large feature spaces; issues in eliciting information from databases, experts, and news feeds; counterfactuals; a fragmented theoretical base; and limited funding for validation. Our approach to validity assessment is to consider the entire life cycle and to assess validity along four broad dimensions: methodological validity, internal validity, external validity, and qualitative/causal/narrative validity. In the past, we have employed a triangulation of multiple validation techniques, including face validation as well as formal tests such as correspondence testing.
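
One hypothetical way to operationalize the four-dimension assessment is sketched below in Python: each validation technique is tagged with the dimension it evidences, and results are triangulated into a per-dimension summary. The dimension names and technique examples follow the abstract; the scoring scheme is an assumption.

    DIMENSIONS = ("methodological", "internal", "external",
                  "qualitative/causal/narrative")

    def triangulate(results: list) -> dict:
        """results: (technique, dimension, passed) -> pass rate per dimension."""
        summary = {d: [0, 0] for d in DIMENSIONS}  # dim -> [passed, total]
        for technique, dim, passed in results:
            summary[dim][0] += int(passed)
            summary[dim][1] += 1
        return {d: (p / t if t else None) for d, (p, t) in summary.items()}

    results = [
        ("face validation by subject-matter experts",
         "qualitative/causal/narrative", True),
        ("correspondence test vs. observed outcomes", "external", True),
        ("model walkthrough of agent internals", "internal", True),
        ("data provenance audit", "methodological", False),
    ]
    print(triangulate(results))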


IEEE Transactions on Engineering Management | 1985

Expert intuition and ill-structured problem solving

Barry G. Silverman

The expert decision maker, when confronted by ill-structured problems, is shown to rely largely on nonverbalizable intuitive thought processes based on concrete experience. Examples of ill-structured problems used include innovation, executive decision making, and diagnostic evaluations by project managers. Here, neither the goal nor the procedure for accomplishing the goal is well understood at the outset. Problems requiring computation are not treated. The organizational, educational, and analytical approaches for increasing this individual's productivity are then explored.


IEEE Intelligent Systems | 1992

Building a better critic: recent empirical results

Barry G. Silverman

Critic engineering advances a theory of errors and repair strategies that helps a system collaborate with an expert during knowledge acquisition. The result is an expert critiquing system, or critic, designed to improve the collected knowledge. An implementable version of the critic-engineering methodology that includes first principles, generic question sets, and a library of error triggers and correction strategies is defined, and lessons learned about the methodology from applications and experiments in diverse domains are presented. The implementation of influencers, which offer positive criticisms before or during a task to help prevent biases before they occur, and of debiasers, which use negative criticisms during or after a task to help correct a bias or error after it occurs, is discussed. Other implementations of critic engineering are also discussed.
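
A minimal Python sketch of the influencer/debiaser distinction follows; the phase model (before/during/after), the critic messages, and the state fields are illustrative assumptions, not the methodology's actual rule format.

    from typing import Callable

    class Critic:
        def __init__(self, name: str, phases: set, message: str,
                     applies: Callable[[dict], bool]):
            self.name, self.phases = name, phases
            self.message, self.applies = message, applies

    CRITICS = [
        # Influencer: positive criticism before/during, to forestall a bias.
        Critic("influencer:consider-alternatives", {"before", "during"},
               "List at least two alternative hypotheses before committing.",
               lambda state: state["hypotheses"] < 2),
        # Debiaser: negative criticism during/after, to repair an error.
        Critic("debiaser:confirmation", {"during", "after"},
               "Disconfirming evidence was discarded; review it.",
               lambda state: state["discarded_disconfirming"] > 0),
    ]

    def fire(phase: str, state: dict) -> None:
        for c in CRITICS:
            if phase in c.phases and c.applies(state):
                print(f"[{phase}] {c.name}: {c.message}")

    fire("before", {"hypotheses": 1, "discarded_disconfirming": 0})
    fire("after",  {"hypotheses": 3, "discarded_disconfirming": 2})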

Collaboration


Dive into Barry G. Silverman's collaborations.

Top Co-Authors

Gnana K. Bharathy (University of Pennsylvania)
Ransom Weaver (University of Pennsylvania)
Benjamin Nye (University of Pennsylvania)
Michael Johns (University of Pennsylvania)
Kevin O'Brien (University of Pennsylvania)
Jason Cornwell (University of Pennsylvania)
John H. Holmes (University of Pennsylvania)
Stephen E. Kimmel (University of Pennsylvania)
Charles C. Branas (University of Pennsylvania)
Christo Andonyadis (George Washington University)