Jonathan Dinerstein
Brigham Young University
Publications
Featured research published by Jonathan Dinerstein.
ACM Transactions on Graphics | 2005
Jonathan Dinerstein; Parris K. Egbert
Adaptation (online learning) by autonomous virtual characters, due to interaction with a human user in a virtual environment, is a difficult and important problem in computer animation. In this article we present a novel multi-level technique for fast character adaptation. We specifically target environments where there is a cooperative or competitive relationship between the character and the human that interacts with that character. In our technique, a distinct learning method is applied to each layer of the character's behavioral or cognitive model. This allows us to efficiently leverage the character's observations and experiences in each layer. It also provides a convenient temporal distinction between which observations and experiences provide pertinent lessons for each layer. Thus the character can quickly and robustly learn how to better interact with any given unique human user, relying only on observations and natural performance feedback from the environment (no explicit feedback from the human). Our technique is designed to be general and can be easily integrated into most existing behavioral animation systems. It is also fast and memory efficient.
Computer Animation and Virtual Worlds | 2004
Kevin L. Steele; David Cline; Parris K. Egbert; Jonathan Dinerstein
We present a particle-based algorithm for modeling highly viscous liquids. Using a numerical time-integration of particle acceleration and velocity, we apply external forces to particles and use a convenient organization, the adhesion matrix, to represent forces between different types of liquids and objects. Viscosity is handled by performing a momentum exchange between particle pairs such that momentum is conserved. Volume is maintained by iteratively adjusting particle positions after each time step. We use a two-tiered approach to time stepping that allows particle positions to be updated many times per frame while expensive operations, such as calculating viscosity and adhesion, are done only a few times per frame. The liquid is rendered using an implicit surface polygonization algorithm, and we present an implicit function that convolves the liquid surface with a Gaussian function, yielding a smooth liquid skin.
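The momentum-conserving pairwise exchange described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: equal particle masses, the exchange coefficient `k`, and the function name are all assumptions.

```python
def exchange_momentum(v_i, v_j, k=0.5):
    """Pull two equal-mass particle velocities toward each other.

    Each particle gains/loses half of k times the velocity difference,
    so the sum v_i + v_j (total momentum for equal masses) is exactly
    conserved. In 3D this would be applied per velocity component.
    """
    dv = v_j - v_i
    return v_i + 0.5 * k * dv, v_j - 0.5 * k * dv
```

With `k = 1` both particles end at their mean velocity (maximally viscous exchange); with `k = 0` they are unchanged.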
Computer Animation and Virtual Worlds | 2004
Jonathan Dinerstein; Parris K. Egbert; Hugo de Garis; Nelson Dinerstein
Behavioral and cognitive modeling for virtual characters is a promising field. It significantly reduces the workload on the animator, allowing characters to act autonomously in a believable fashion. It also makes interactivity between humans and virtual characters more practical than ever before. In this paper we present a novel technique where an artificial neural network is used to approximate a cognitive model. This allows us to execute the model much more quickly, making cognitively empowered characters more practical for interactive applications. Through this approach, we can animate several thousand intelligent characters in real time on a PC. We also present a novel technique for how a virtual character, instead of using an explicit model supplied by the user, can automatically learn an unknown behavioral/cognitive model by itself through reinforcement learning. The ability to learn without an explicit model appears promising for helping behavioral and cognitive modeling become more broadly accepted and used in the computer graphics community, as it can further reduce the workload on the animator. Further, it provides solutions for problems that cannot easily be modeled explicitly.
Computational Intelligence | 2008
Jonathan Dinerstein; Parris K. Egbert; Dan Ventura; Michael A. Goodrich
We present a novel technique for behavioral animation through data-driven behavior synthesis. This technique has two key features: it provides natural character behavior and has a programming-by-demonstration interface. Thus we can quickly create compelling autonomous virtual agents that exhibit stylized behavior. First, the human user demonstrates behavior for the character by specifying its high-level actions (e.g., with a joystick) during an interactive session. Each demonstration is recorded as a sequence of discrete actions. Later, the character synthesizes novel behavior by concatenating segments of action sequences. The choice of segments is guided by simulations that predict fitness. Thus our technique operates as a cognitive model, providing a character with deliberative decision making. The actions are abstract and can be mapped to any pertinent motions, even procedurally synthesized motions. Thus our technique complements character animation algorithms. We empirically show that our O(log n) technique is scalable, robust when provided with sufficient data, produces effective behavior for a number of problem domains, and is faster than traditional planning. Also, the interface is intuitive enough that character behavior can be created by nontechnical users.
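The synthesis loop above can be illustrated with a toy sketch: segments are fixed-length windows of recorded actions, and each segment is chosen by simulating it forward and scoring the resulting state. All names and the greedy selection are assumptions; the paper's O(log n) search structure is not reproduced here.

```python
def synthesize(demos, seg_len, n_segments, state, step, fitness):
    """Greedily concatenate demonstration segments.

    demos:   list of recorded action sequences (the demonstrations)
    step:    state transition function, step(state, action) -> state
    fitness: scores a state; each candidate segment is simulated
             forward from the current state and scored at its end.
    """
    # Candidate pool: every window of seg_len consecutive actions.
    pool = [d[i:i + seg_len] for d in demos
            for i in range(len(d) - seg_len + 1)]
    behavior = []
    for _ in range(n_segments):
        def score(seg, s=state):  # bind current state for simulation
            for a in seg:
                s = step(s, a)
            return fitness(s)
        best = max(pool, key=score)
        for a in best:
            state = step(state, a)
        behavior.extend(best)
    return behavior
```

In a 1-D toy domain (state is a position, actions are +1/-1, fitness rewards nearness to a target), the character stitches together the demonstrated "move right" segments to approach the target.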
Computational Intelligence | 2005
Jonathan Dinerstein; Dan Ventura; Parris K. Egbert
The ability for a given agent to adapt on‐line to better interact with another agent is a difficult and important problem. This problem becomes even more difficult when the agent to interact with is a human, because humans learn quickly and behave nondeterministically. In this paper, we present a novel method whereby an agent can incrementally learn to predict the actions of another agent (even a human), and thereby can learn to better interact with that agent. We take a case‐based approach, where the behavior of the other agent is learned in the form of state–action pairs. We generalize these cases either through continuous k‐nearest neighbor, or a modified bounded minimax search. Through our case studies, our technique is empirically shown to require little storage, learn very quickly, and be fast and robust in practice. It can accurately predict actions several steps into the future. Our case studies include interactive virtual environments involving mixtures of synthetic agents and humans, with cooperative and/or competitive relationships.
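The case-based prediction idea can be sketched for discrete actions with a plain k-nearest-neighbor vote over stored state-action pairs. This is a simplified stand-in: the paper's continuous k-nearest-neighbor generalization and the modified bounded minimax search are not shown, and the names below are assumptions.

```python
import math
from collections import Counter

def predict_action(cases, state, k=3):
    """Predict the other agent's next action from observed cases.

    cases: list of (state_vector, action) pairs recorded so far.
    Finds the k stored states nearest to the query state (Euclidean
    distance) and returns the majority action among them.
    """
    nearest = sorted(cases, key=lambda c: math.dist(c[0], state))[:k]
    return Counter(a for _, a in nearest).most_common(1)[0][0]
```

New observations are incorporated simply by appending `(state, action)` pairs to `cases`, which matches the incremental, low-storage flavor of the approach.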
Systems, Man and Cybernetics | 2007
Sabra Dinerstein; Jonathan Dinerstein; Dan Ventura
Existing learning-based multi-modal biometric fusion techniques typically employ a single static support vector machine (SVM). This type of fusion improves the accuracy of biometric classification, but it also has serious limitations because it is based on the assumptions that the set of biometric classifiers to be fused is local, static, and complete. We present a novel multi-SVM approach to multi-modal biometric fusion that addresses the limitations of existing fusion techniques and show empirically that our approach retains good classification accuracy even when some of the biometric modalities are unavailable.
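The key requirement above, graceful degradation when a modality is unavailable, can be illustrated with a deliberately simplified fusion rule. This is not the paper's multi-SVM method; it is a weighted-sum score fusion that renormalizes over whichever modalities are present, with hypothetical modality names and weights.

```python
def fuse_scores(scores, weights):
    """Fuse per-modality biometric match scores into one score.

    scores:  modality -> match score in [0, 1], or None when that
             sensor/classifier is unavailable.
    weights: modality -> relative trust in that classifier.
    Weights are renormalized over the available modalities so a
    missing modality does not drag the fused score toward zero.
    """
    avail = {m: s for m, s in scores.items() if s is not None}
    if not avail:
        raise ValueError("no biometric modality available")
    total = sum(weights[m] for m in avail)
    return sum(weights[m] * s for m, s in avail.items()) / total
```

A learned fusion (such as the SVMs the paper employs) replaces the fixed weighted sum, but the handling of missing modalities follows the same pattern: fuse only over what is available.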
Computational Intelligence | 2010
Jeffrey S. Whiting; Jonathan Dinerstein; Parris K. Egbert; Dan Ventura
Cognitive and behavioral models have become popular methods for creating autonomous self-animating characters. Creating these models presents the following challenges: (1) creating a cognitive or behavioral model is a time-intensive and complex process that must be done by an expert programmer, and (2) the models are created to solve a specific problem in a given environment and, because of their specific nature, cannot be easily reused. Combining existing models together would allow an animator, without the need for a programmer, to create new characters in less time and to leverage each model's strengths, resulting in an increase in the character's performance and in the creation of new behaviors and animations. This article provides a framework that can aggregate existing behavioral and cognitive models into an ensemble. An animator has only to rate how appropriately a character performs in a set of scenarios, and the system then uses machine learning to determine how the character should act given the current situation. Empirical results from multiple case studies validate the approach.
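One minimal way to act on such animator ratings is sketched below: each candidate model carries the ratings it received in a set of scenarios, and at run time the ensemble picks the model whose nearest rated scenario to the current situation scored best. This 1-nearest-scenario lookup is only an illustrative stand-in for the machine learning step the article describes; the feature encoding and all names are assumptions.

```python
import math

def choose_model(ratings, situation):
    """Pick which aggregated model should drive the character now.

    ratings:   model -> list of (scenario_vector, rating in [0, 1])
               as scored by the animator.
    situation: feature vector describing the current situation.
    Each model is judged by the animator rating of its rated
    scenario closest to the current situation; the best wins.
    """
    def predicted(model):
        _, rating = min(ratings[model],
                        key=lambda vr: math.dist(vr[0], situation))
        return rating
    return max(ratings, key=predicted)
```

For example, with scenario vectors encoding hypothetical features like (enemy proximity, health lost), a "flee" model rated highly in low-health scenarios is selected when the current situation resembles those scenarios.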
NASA/DoD Conference on Evolvable Hardware | 2003
Jonathan Dinerstein; Nelson Dinerstein; H. de Garis
A major problem in artificial brain building is the automatic construction and training of multi-module systems of neural networks. For example, consider a biological human brain, which has millions of neural nets. If an artificial brain is to have similar complexity, it is unrealistic to require that the training data set for each neural net must be specified explicitly by a human, or that interconnections between evolved nets be performed manually. In this paper we present an original technique to solve this problem. A single large-scale task (too complex to be performed by a single neural net) is automatically split into simpler sub-tasks. A multi-module system of neural nets is then trained so that one of these sub-tasks is performed by each net. We present the results of an experiment using this novel technique for pattern recognition.
The Visual Computer | 2006
Jonathan Dinerstein; Parris K. Egbert; David Cline
Machine learning has experienced explosive growth in the last few decades, achieving sufficient maturity to provide effective tools for sundry scientific and engineering fields. Machine learning provides a firm theoretical foundation upon which to build techniques that leverage existing data to extract interesting information or to synthesize more data. In this paper we survey the uses of machine learning methods and concepts in recent computer graphics techniques. Many graphics techniques are data-driven; however, few graphics papers explicitly leverage the machine learning literature to underpin, validate, and develop their proposed methods. This survey provides novel insights by casting many existing computer graphics techniques into a common learning framework. This not only illuminates how these techniques are related, but also reveals possible ways in which they may be improved. We also use our analysis to propose several directions for future work.
NASA/DoD Conference on Evolvable Hardware | 2002
Jonathan Dinerstein; H. de Garis
This paper introduces TiPo, a new neural net model with superior evolvability. TiPo neural nets can dynamically change their structure with each clock tick. This provides enhanced computational power for highly dynamic functions, such as curve following. Curve following is valuable for applications such as robot motion control.