Publication


Featured research published by Jeff Rothenberg.


Winter Simulation Conference | 1986

Object-oriented simulation: where do we go from here?

Jeff Rothenberg

Object-oriented simulation provides a rich and lucid paradigm for building computerized models of real-world phenomena. Its strength lies in its ability to represent objects and their behaviors and interactions in a cogent form that can be designed, evolved and comprehended by domain experts as well as system analysts. It allows encapsulating objects (to hide irrelevant details of their implementation) and viewing the behavior of a model at a meaningful level. It represents special relations among objects (class-subclass hierarchies) and provides “inheritance” of attributes and behaviors along with limited taxonomic inference over these relations. It represents interactions among objects by “messages” sent between them, which provides a natural way of modeling many interactions. Despite these achievements, however, there remain several largely unexplored areas of need, requiring advances in the power and flexibility of modeling, in the representation of knowledge, in the integration of different modeling paradigms, and in the comprehensibility, scalability and reusability of models. The Knowledge-Based Simulation project at Rand is working in several of these areas. In this paper, we will elaborate the existing limitations of object-oriented simulation and discuss some of the ways we believe the paradigm can be extended to surmount these limitations.
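The abstract describes the object-oriented simulation paradigm in prose only. The following is a minimal, hypothetical Python sketch (not from the paper) of the three ingredients it names: encapsulated objects, a class-subclass hierarchy with inherited behavior, and interactions expressed as messages between objects. All class and method names are invented for illustration.

```python
# Hypothetical illustration of object-oriented simulation style:
# encapsulation, inheritance, and message passing between objects.

class Vehicle:                                # base class in the taxonomy
    def __init__(self, name, speed):
        self.name = name
        self._speed = speed                   # encapsulated implementation detail

    def receive(self, message, sender):
        """All interaction between objects happens through messages."""
        if message == "advance":
            print(f"{self.name} advances {self._speed} units")
        elif message == "report":
            sender.receive(f"{self.name} ready", self)   # reply by messaging the sender

class Truck(Vehicle):                         # subclass inherits attributes and behavior
    def receive(self, message, sender):
        if message == "load":
            print(f"{self.name} loads cargo")
        else:
            super().receive(message, sender)  # fall back on inherited behavior

class Dispatcher:
    def receive(self, message, sender):
        print(f"dispatcher heard: {message}")

dispatcher = Dispatcher()
truck = Truck("truck-1", speed=3)
truck.receive("load", dispatcher)             # handled by the subclass
truck.receive("advance", dispatcher)          # inherited from Vehicle
truck.receive("report", dispatcher)           # interaction modeled as messages
```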


Studies in Computer Science and Artificial Intelligence | 1989

Expert System Tool Evaluation

Jeff Rothenberg

This chapter presents a framework of evaluation criteria and a methodology for selecting an expert system tool. Evaluating and choosing a tool requires matching a tool to its intended use, including all aspects of the problem domain, the problem itself, and the anticipated project. Because of the evolving and inconsistent terminology in this new field, comparing features of different tools is of limited utility and limited longevity. Instead, the capabilities provided by these features must be analyzed, evaluated, and compared. The framework shows how to use specific assessment techniques to apply specific metrics to specific capabilities of a tool for a specific application in a specific context. The development of expert systems is reflected in the importance of issues such as integration, database access, portability, fielding, maintainability, robustness, reliability, concurrent access, performance, user interface, debugging support, and documentation. Though the difficulty of comparing and selecting tools may be daunting to a developer faced with a decision, this difficulty is largely a result of the richness of the field and the bewildering pace at which new ideas are being incorporated into tools. The evaluation approach is offered not as a final answer to a fixed problem, but as a strategy for dealing with a dynamic problem whose complexity reflects the health of a research area whose impact on software engineering is only beginning to be felt.
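As a concrete illustration of the "compare capabilities, not feature names" approach the chapter advocates, here is a minimal, hypothetical sketch of scoring candidate tools on weighted capability metrics for one specific application. The capabilities, weights, tools, and scores are invented; the chapter itself defines no such numbers.

```python
# Hypothetical weighted-capability scoring for expert system tool selection.

weights = {            # importance of each capability for *this* application
    "integration": 0.3,
    "performance": 0.2,
    "user_interface": 0.2,
    "maintainability": 0.3,
}

tools = {              # assessed capability scores on a 0-10 scale (invented)
    "ToolA": {"integration": 8, "performance": 6, "user_interface": 7, "maintainability": 5},
    "ToolB": {"integration": 5, "performance": 9, "user_interface": 6, "maintainability": 8},
}

def weighted_score(scores, weights):
    """Combine capability scores using application-specific weights."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in sorted(tools.items(), key=lambda kv: -weighted_score(kv[1], weights)):
    print(f"{name}: {weighted_score(scores, weights):.1f}")
```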


ACM Transactions on Modeling and Computer Simulation | 1992

AI: what simulationists really need to know

David P. Miller; R. James Firby; Paul A. Fishwick; Jeff Rothenberg

Abstraction. In human problem solving, abstraction is an important technique for managing complexity. One characterization of human expertise is the ability to make the most appropriate abstraction in a particular domain, domain situation, and problem-solving situation. Qualitative representations of system/mechanism primitive behaviors, constraints, state, and behavior are one dimension in which abstraction applies (as opposed to eliminating specific components from a system/mechanism and the associated variables from the state). This abstraction dimension is in fact very useful, as demonstrated in human reasoning and in programs that reason from such qualitative representations. For example, digital circuit simulation abstracts the actual voltages that exist in the circuit to the logic values 0, 1, and X.

Completeness. In deriving behavior via simulation, qualitative simulation (e.g., QSIM) is complete in that all possible behaviors are represented in the envisionment (assuming that generation of such an envisionment is tractable). For numerical simulation approaches, the same claim cannot be made. It should be understood that qualitative distinctions of behaviors depend on the specification of the system/mechanism (e.g., introducing a landmark into a variable's quantity space can result in qualitatively distinct behaviors not observed before the landmark was added). This, however, is the price of abstraction.

Operating with Incomplete Knowledge of the Domain. The qualitative model specification and simulation techniques developed in the AI qualitative-reasoning community have emphasized the ability to proceed in the face of incomplete knowledge (theory or model) of the system/mechanism and any initial conditions. This is exhibited not only in qualitative variable and state values, but also in the expression of primitive behaviors used in a system/mechanism description. For example, QSIM provides monotonically increasing (M+) and decreasing (M-) constraints, and Qualitative Process Theory expresses influences between variables. The ability to develop a model and simulate it in the presence of incomplete knowledge is important in that some initial information can be collected and subsequently used in problem solving and model refinement.

If we consider qualitative and numerical models and simulation techniques as points or areas on an abstraction spectrum, the problem of developing, validating, and maintaining theories about the domains of interest (either for humans or for autonomous agents) can be viewed as building and validating a theory at some point on the spectrum, and then possibly modifying the theory in the direction most appropriate for the task at hand (i.e., more or less abstract). For a design activity, the modification must necessarily go to a very fine level of detail so that the associated mechanism can be constructed. A diagnosis or explanation capability, however, may not require such a fine-grained description. In fact, for explanation or prediction purposes, a more abstract description is often appropriate (e.g., cyclic, or remains within limits). Issues in model construction and selection are an active area of research in the qualitative-modeling and model-based reasoning communities. The integration of quantitative and qualitative information is also being investigated.
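As an illustration of the abstraction ideas above, the following is a minimal, hypothetical Python sketch (it is not QSIM and does not come from the article) of two points made in the text: abstracting continuous voltages to the logic values 0, 1, and X, and checking a QSIM-style monotonically increasing (M+) relationship between two recorded variable histories. The thresholds and data are invented.

```python
# Hypothetical sketch of qualitative abstraction and an M+ constraint check.

def to_logic_level(voltage, low=0.8, high=2.0):
    """Abstract a continuous voltage to the qualitative values 0, 1, or X (indeterminate)."""
    if voltage <= low:
        return "0"
    if voltage >= high:
        return "1"
    return "X"

def satisfies_m_plus(xs, ys):
    """M+ style check: whenever x increases (decreases, stays level), y does the same."""
    def sign(a, b):
        return (b > a) - (b < a)
    return all(sign(x0, x1) == sign(y0, y1)
               for (x0, x1), (y0, y1) in zip(zip(xs, xs[1:]), zip(ys, ys[1:])))

print([to_logic_level(v) for v in (0.2, 1.3, 3.1)])      # ['0', 'X', '1']
print(satisfies_m_plus([1, 2, 3, 3], [10, 12, 15, 15]))  # True: y tracks x's direction of change
```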
One of the central issues in AI has been knowledge representation. Issues of expressive power, tractability and completeness of inference procedures, and conceptual integrity with respect to the problem domain have guided research. The problem domain governs ontological issues for the objects examined in the problem-solving process (e.g., components of a mechanism, observations such as medical data, physical processes) as well as for objects/concepts of the problem-solving process itself (e.g., design goals, explanations). These representation issues (domain objects, problem-solving process concepts), plus the goals of the problem-solving technique, provide an understanding of the current AI approaches to and uses of simulation (e.g., qualitative simulation). Particular choices are sometimes motivated by models of human problem solving, not with the goal of accurately modeling human problem-solving activity, but with the goal of giving programs better problem-solving capabilities. The goal of (AI's) qualitative-modeling research has been much discussed, ranging from the desire to faithfully model human cognition to the ability to build and utilize precise, accurate models of the real world. To repeat an earlier message, I [D. W. Franke] believe that the appropriate context is the pragmatic one, in which a particular simulation or modeling approach can best be judged by (1) its ability to solve a particular problem and (2) the ability of humans or other programs (autonomous agents) to evaluate and utilize the results of the modeling. Unfortunately, many claims of the form "Yes, we use AI techniques" are made for systems and products. We must be careful in evaluating such claims and must examine the problem-solving capabilities as well as any implementation approaches.

4. PAUL A. FISHWICK: AI AND SIMULATION: SOME LESSONS LEARNED

The fields of AI and simulation are fairly large in terms of literature and interdisciplinary tendencies. Discussing how the two relate to one another is therefore a formidable task; however, we have learned many key points or "lessons," especially during the AI and simulation workshops, conferences, and panel sessions of the past several years. In this section, I [P. A. Fishwick] will discuss some things that I have learned during my time studying the benefits of AI and simulation to each other. These "lessons learned" are personal reflections that have been gathered from verbal and email conversations, workshops, and literature searches.

4.1 Code All the Knowledge

Perhaps the chief contribution of AI to all fields, including simulation, is the realization that knowledge which is not equational, quantitative, or precise in nature can still be used for useful problem solving. The primary example of this type of AI research is found within "expert systems," which, from a problem-solving viewpoint, are not unique merely because they represent expert knowledge per se. After all, continuous models for aircraft flight or discrete-event models for assembly lines are also reservoirs of knowledge, specifically "expert knowledge" about the principles of flight and the operation of assembly lines. What, then, makes an expert system unique? Expert systems have been built in those areas where models have been either very weak or nonexistent, such as medical diagnosis; we do not have a simple set of equations that accept symptoms as inputs and produce a correct diagnosis as an output.
Mycin [3] provides an excellent example of a program that contains the deepest knowledge available in the domain for which it was coded: the selection of antimicrobial drugs given specific symptoms of bacterial infection. Because we do not have such equations for the automatic calculation of drugs given medical symptoms, AI technology has suggested to us that models based on predicate calculus (of which expert system knowledge is a special case) are indeed useful if we can feed in inputs and obtain reasonable outputs. The AI approach suggests that we code all the knowledge that is available to us for our simulation models, and not only that knowledge which yields to traditional forms of systems analysis. The systems problem-solving process is highly iterative; we start with high-level models and progress to complex models. The high-level knowledge that is coded within expert systems is usually of a "decision-making" or diagnostic type. How does this affect the field of computer simulation? It suggests that we code decision-making and planning components within our simulations. As simulations become more complex, they will contain autonomous objects, and simulation of these autonomous objects (such as robots, autonomous vehicles, and humans) will require the fruits of AI research in areas such as expert systems and mental modeling.

4.2 Integrate Qualitative and Quantitative Knowledge

One of the problems in the area of AI and simulation is that many researchers have thought of expert systems (and other AI models) as being completely different from the models that exist in simulation. Also, the concept of simulation as being inherently numerical has led to some perceived differences between the AI and simulation modeling efforts. These differences have caused a split between the two communities. Since the AI community is primarily concerned with qualitative knowledge and the simulation community with quantitative knowledge, the most recent fertile area of research in both groups involves integrating quantitative and qualitative knowledge. The chief difference between the AI and simulation efforts, with regard to the study of qualitative/quantitative integration, revolves around the treatment of uncertainty. In simulation, the term "qualitative" [8, 9] has often been equated with abstraction, in terms of the abstraction level associated with systems [22, 27] and with system components such as time, state, and event [20]. For instance, while continuous simulation provides us with a model for obtaining continuously changing state values, discrete-event simulation fosters discrete changes in state and event, where the values may be either quantitative or qualitative. Mixtures of these two different types of models fall under the general category of combined modeling. By partitioning state space [10], we can formally map ...
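As a small illustration of the "code all the knowledge" lesson in Section 4.1 above, here is a hypothetical Python sketch of a simulation step that combines a numerical state update with a rule-based, expert-system-style decision component. The queueing model, rules, and thresholds are invented and do not come from the article.

```python
# Hypothetical combination of a quantitative state update with a rule-based
# decision-making component inside one simulation step.

rules = [  # (condition on state, action on state), evaluated every step
    (lambda s: s["queue"] > 5,
     lambda s: s.update(servers=s["servers"] + 1)),          # add capacity under load
    (lambda s: s["queue"] == 0 and s["servers"] > 1,
     lambda s: s.update(servers=s["servers"] - 1)),          # release idle capacity
]

def step(state, arrivals):
    state["queue"] += arrivals                               # quantitative part of the model
    served = min(state["queue"], state["servers"])
    state["queue"] -= served
    for condition, action in rules:                          # qualitative, rule-based part
        if condition(state):
            action(state)
    return state

state = {"queue": 0, "servers": 1}
for arrivals in [2, 7, 1, 0, 0]:
    state = step(state, arrivals)
    print(state)
```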


Winter Simulation Conference | 1991

Artificial intelligence and simulation

Jeff Rothenberg

In this tutorial, the author presents some of the major concepts of artificial intelligence and illustrates their applicability to simulation using examples drawn from recent knowledge-based simulation research. He focuses on the present state of the art, current problems and limitations, and future directions and possibilities.


AI, Simulation and Planning in High Autonomy Systems | 1990

A 'propagative' approach to sensitivity analysis

Jeff Rothenberg; Norman Shapiro; Charlene A. Hefley

It is shown that the computational cost of traditional approaches to sensitivity analysis is logically unnecessary and can be largely avoided by propagating and combining sensitivities during a computation, rather than recomputing them. This propagative approach to sensitivity analysis is described, and the algorithm implemented to explore its potential is presented. Initial results indicate that this approach has tremendous potential, reducing a combinatorial process to a linear one. In addition, it is noted that the approach has implications beyond sensitivity analysis: it suggests a novel computational paradigm in which functions replace themselves by approximations when they are first called and these approximations are used for the remainder of a computation, e.g. to improve performance. Sensitivity analysis is simply one instance of this approach, using linear approximations based on partial derivatives; however, the approach and the computational environment implemented allow arbitrary approximations to be used.
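The abstract describes the propagative idea only in prose. The following is a minimal, hypothetical single-variable Python sketch of the "functions replace themselves by approximations" mechanism: a wrapped function answers its first call exactly, builds a first-order (tangent-line) approximation from a finite-difference derivative, and serves later calls from that approximation. The paper's approach propagates sensitivities via partial derivatives and permits arbitrary approximations; this sketch illustrates only the self-replacement mechanism, and the wrapped function is invented.

```python
import math

class SelfApproximating:
    """Wraps f(x); after the first call, answers with a tangent-line approximation."""
    def __init__(self, f, h=1e-6):
        self.f = f
        self.h = h
        self._approx = None

    def __call__(self, x):
        if self._approx is None:
            y = self.f(x)
            slope = (self.f(x + self.h) - y) / self.h     # finite-difference df/dx
            x0 = x
            self._approx = lambda z: y + slope * (z - x0) # linear approximation around x0
            return y
        return self._approx(x)                            # all later calls use the approximation

# Hypothetical usage: an "expensive" model component.
expensive = SelfApproximating(lambda x: math.exp(math.sin(x)))
print(expensive(1.0))    # exact on the first call; the approximation is built here
print(expensive(1.01))   # served by the linear approximation thereafter
```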


Winter Simulation Conference | 1987

Dependencies and graphical interfaces in object-oriented simulation languages

Stephanie J. Cammarata; Barbara L. Gates; Jeff Rothenberg

An object-oriented style of computation is especially well-suited to simulation in domains that may be thought of as consisting of intentionally interacting components. In such domains, the programmer can map the constituent domain components onto objects, and intentional interactions (e.g. communications) onto message transmissions. However, some events or interactions between real world objects cannot be modeled as naturally as we might like. Improper modeling of these interactions inevitably leads to inconsistent simulation states and processing errors. The research reported in this paper identifies two categories of simulation activities that are unnatural and difficult to implement in object-oriented simulations: (1) scheduling events which depend on the continuous aspect of time; and (2) presenting a graphical display of a simulation so that any changes in the simulation state are immediately visible. Following a discussion of these deficiencies, we present a methodology for performing these tasks that is transparent to the simulation programmer. Our approach utilizes extensions to the ROSS object-oriented language that allow a programmer to declaratively specify characteristics of the simulation dealing with time-dependent attributes and graphics display strategies. The example presented in this paper demonstrates the many advantages of our declarative approach to maintaining consistency. With these capabilities, we expect object-oriented simulation languages to become increasingly attractive for modeling dynamic systems.
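The following is a minimal, hypothetical Python sketch (it is not ROSS syntax and does not come from the paper) of the two declarative facilities the abstract describes: attributes tied to the simulation clock so that reading them always yields a current value, and a display callback invoked whenever simulation state changes. All names and values are invented.

```python
# Hypothetical sketch: time-dependent attributes and display hooks on state change.

class Clock:
    def __init__(self):
        self.now = 0.0

class SimObject:
    def __init__(self, clock):
        self._clock = clock             # shared simulation clock
        self._time_dependent = {}       # attribute name -> function of time
        self._observers = []            # display callbacks

    def declare(self, name, fn_of_time):
        """Declaratively tie an attribute to the simulation clock."""
        self._time_dependent[name] = fn_of_time

    def __getattr__(self, name):
        fns = self.__dict__.get("_time_dependent", {})
        if name in fns:
            return fns[name](self.__dict__["_clock"].now)   # always current when read
        raise AttributeError(name)

    def watch(self, callback):
        self._observers.append(callback)

    def set_state(self, **kwargs):
        self.__dict__.update(kwargs)
        for cb in self._observers:      # redraw immediately on any state change
            cb(self)

clock = Clock()
truck = SimObject(clock)
truck.declare("position", lambda t: 5.0 * t)   # moves 5 units per time unit
truck.watch(lambda obj: print(f"redraw: cargo={obj.__dict__.get('cargo')}"))

clock.now = 3.0
print(truck.position)          # 15.0, computed from the clock, never stale
truck.set_state(cargo="ore")   # triggers the display callback
```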


Winter Simulation Conference | 1990

Proving temporal properties of hybrid systems

Sanjai Narain; Jeff Rothenberg

A formal technique, DMOD, for modeling hybrid systems is presented. It is based on utilizing intuitions about the causality relation, and the logic of definite clauses with the SLD-resolution proof procedure. An algorithm to simulate with DMOD models is presented. Then, a general framework in which temporal properties of hybrid systems can be formulated and proved (given that those systems are modeled using DMOD) is outlined. These ideas are illustrated by proving a liveness property about a railroad crossing.
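The paper proves such properties formally over DMOD models; as a purely illustrative, hypothetical stand-in, the sketch below only checks a crossing property over one finite, invented event trace of the railroad example: every train approach is followed by the gate going down before the train reaches the crossing. It is a trace check, not the paper's proof technique, and the events and times are invented.

```python
# Hypothetical trace check of a railroad-crossing property over one simulation run.

trace = [  # (time, event) pairs from some run (invented data)
    (0, "approach"), (3, "gate_down"), (5, "at_crossing"),
    (9, "gate_up"), (12, "approach"), (14, "gate_down"), (16, "at_crossing"),
]

def crossing_is_safe(trace):
    """Every 'approach' must be followed by 'gate_down' before the next 'at_crossing'."""
    for t_app, ev in trace:
        if ev != "approach":
            continue
        t_cross = next(t for t, e in trace if e == "at_crossing" and t > t_app)
        if not any(e == "gate_down" and t_app < t < t_cross for t, e in trace):
            return False
    return True

print(crossing_is_safe(trace))  # True for this trace
```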


Winter Simulation Conference | 1989

A Logic For Simulating Discontinuous Systems

Sanjai Narain; Jeff Rothenberg

This paper presents DMOD, a formalism for simplifying the synthesis and analysis of programs for simulating discontinuous systems. It lacks the concept of explicit state. Its programs are constraints upon event occurrences, based upon a novel view of the causality relation. Constraints can freely refer to the past and future of causing events. Simulation is regarded as inference of event occurrences from constraints. An event is said to occur when an interesting proposition becomes true. The new concept of partially instantiated events is introduced. DMOD can be regarded as a formalization of the widely used event scheduling view of the discrete-event simulation technique. However, it shows how event occurrences can be computed without devices of scheduling, unscheduling or event queues, which are intrinsic to this view. Due to partially instantiated events, DMOD can also be considered more general.
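A minimal, hypothetical Python sketch of the view described above: event occurrences computed by forward inference from causality constraints, with no scheduling, unscheduling, or event queue. The events and causal rules are invented, and DMOD itself is a definite-clause formalism rather than Python; this only illustrates the "simulation as inference of events from constraints" idea.

```python
# Hypothetical forward-inference simulation: derive events from causality rules
# until a fixed point, instead of maintaining an event queue.

def simulate(initial_events, rules, horizon):
    """Repeatedly apply causality rules until no new events are derivable."""
    events = set(initial_events)             # each event is a (name, time) pair
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_event in rule(events):
                if new_event not in events and new_event[1] <= horizon:
                    events.add(new_event)
                    changed = True
    return sorted(events, key=lambda e: e[1])

# Example causality constraints (invented): a pump 'start' causes the tank to be
# 'full' 10 time units later, and 'full' causes 'shutoff' 1 unit after that.
def start_causes_full(events):
    return {("full", t + 10) for (name, t) in events if name == "start"}

def full_causes_shutoff(events):
    return {("shutoff", t + 1) for (name, t) in events if name == "full"}

print(simulate({("start", 0)}, [start_causes_full, full_causes_shutoff], horizon=100))
# [('start', 0), ('full', 10), ('shutoff', 11)]
```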


Archive | 1999

Ensuring the longevity of digital information

Jeff Rothenberg


Archive | 1989

Knowledge-based simulation: an interim report

Jeff Rothenberg; Sanjai Narain; Randall Steeb; Charlene A. Hefley; Norman Shapiro

Collaboration


Dive into Jeff Rothenberg's collaborations.

Top Co-Authors

David P. Miller

Massachusetts Institute of Technology


David W. Franke

California Institute of Technology
