Publication


Featured research published by Lonnie Chrisman.


Intelligent Robots and Systems | 1995

Experience with rover navigation for lunar-like terrains

Reid G. Simmons; Eric Krotkov; Lonnie Chrisman; Fabio Gagliardi Cozman; Richard Goodwin; Martial Hebert; Lalitesh Katragadda; Sven Koenig; Gita Krishnaswamy; Yoshikazu Shinoda; Paul R. Klarer

Reliable navigation is critical for a lunar rover, both for autonomous traverses and safeguarded remote teleoperation. This paper describes an implemented system that has autonomously driven a prototype wheeled lunar rover over a kilometer in natural, outdoor terrain. The navigation system uses stereo terrain maps to perform local obstacle avoidance, and arbitrates steering recommendations from both the user and the rover. The paper describes the system architecture, each of the major components, and the experimental results to date.
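The arbitration idea in the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's actual architecture: the function name, the vote format, and the veto set are our own assumptions.

```python
# Illustrative sketch (not the paper's implementation): arbitrating
# steering recommendations from multiple sources. Each source scores a
# set of candidate steering directions; safety vetoes (e.g. from the
# obstacle-avoidance module) are applied before picking the best arc.

def arbitrate(votes_per_source, vetoed):
    """Combine per-source votes (dicts of direction -> score in [0, 1])
    and return the highest-scoring direction not vetoed for safety."""
    combined = {}
    for votes in votes_per_source:
        for direction, score in votes.items():
            combined[direction] = combined.get(direction, 0.0) + score
    safe = {d: s for d, s in combined.items() if d not in vetoed}
    if not safe:
        return "stop"  # no safe arc remains: halt the rover
    return max(safe, key=safe.get)

# The remote operator prefers hard-left; the rover's obstacle avoider
# vetoes it, so a direction acceptable to both sources wins.
user = {"hard-left": 1.0, "soft-left": 0.6, "straight": 0.3}
rover = {"soft-left": 0.8, "straight": 0.9, "soft-right": 0.4}
print(arbitrate([user, rover], vetoed={"hard-left"}))  # -> soft-left
```

In the paper's system the obstacle-avoidance recommendations come from stereo terrain maps; here the veto set simply stands in for any arc the rover deems unsafe.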


Connection Science | 1991

Learning Recursive Distributed Representations for Holistic Computation

Lonnie Chrisman

A number of connectionist models capable of representing data with compositional structure have recently appeared. These new models suggest the intriguing possibility of performing holistic structure-sensitive computations with distributed representations. Two possible forms of holistic inference, transformational inference and confluent inference, are identified and compared. Transformational inference was successfully demonstrated by Chalmers; however, the pure transformational approach does not consider the eventual inference tasks during the process of learning its representations. Confluent inference is introduced as a method for achieving a tight coupling between the distributed representations of a problem and the solution for the given inference task while the net is still learning its representations. A dual-ported RAAM architecture based on Pollack's Recursive Auto-Associative Memory is implemented and demonstrated in the domain of natural language translation.


International Journal of Approximate Reasoning | 1995

Incremental conditioning of lower and upper probabilities

Lonnie Chrisman

Bayesian-style conditioning of an exact probability distribution can be done incrementally by updating the current distribution each time a new item of evidence is obtained. Many have suggested the use of lower and upper probabilities for representing bounds on probability distributions, which naturally suggests an analogous procedure of incremental conditioning using forms of interval arithmetic. Unfortunately, conditioning of lower and upper probability bounds loses information, yielding incorrect bounds when updates are performed incrementally and making the conditioning operation noncommutative. Furthermore, when lower probability functions are represented by way of their Möbius transforms, the operation of conditioning can cause an exponential explosion in the number of nonzero Möbius assignments used to represent the function. This paper presents an alternative representation for lower probability that overcomes these problems. By representing the results of both Dempster conditioning and strong conditioning, the representation indirectly encodes lower probability bounds in a form that allows updates to be performed incrementally without a loss of information. Conditioning with the new representation does not depend on the order of updates or on whether evidence is incorporated incrementally or all at once. The bounds obtained are exact when the original lower probabilities satisfy a property called 2-monotonicity. Although the new representation encodes more information about probability bounds than the straight representation, updates on the new representation never increase the number of Möbius assignments used to encode the lower probability, a considerable improvement over the worst-case exponential increase seen with the straight representation. The new representation helps to improve the efficiency and convenience of representing and manipulating lower probabilities.
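A small numeric sketch of the setting the abstract describes (our own illustration, not the paper's representation): with an explicit credal set, element-wise conditioning is exact and order-independent; the information loss the paper addresses arises only when the set is discarded and the interval bounds alone are conditioned.

```python
# Illustrative sketch: lower and upper probabilities as the envelope of a
# finite credal set. Conditioning the *set* element-wise is exact and
# order-independent; the paper's point is that conditioning only the
# interval bounds, with the set discarded, loses information when
# evidence arrives incrementally.
from fractions import Fraction as F

def condition(dists, evidence):
    """Bayes-condition every distribution in the set on an evidence event."""
    out = []
    for p in dists:
        z = sum(p[w] for w in evidence)
        out.append({w: p[w] / z for w in evidence})
    return out

def bounds(dists, event):
    """Lower/upper probability of an event over the credal set."""
    vals = [sum(p.get(w, F(0)) for w in event) for p in dists]
    return min(vals), max(vals)

# A credal set with two extreme points over outcomes {a, b, c, d}.
P = [{"a": F(4, 10), "b": F(3, 10), "c": F(2, 10), "d": F(1, 10)},
     {"a": F(1, 10), "b": F(2, 10), "c": F(3, 10), "d": F(4, 10)}]

e1, e2 = {"a", "b", "c"}, {"a", "b", "d"}
seq = condition(condition(P, e1), e1 & e2)   # evidence one piece at a time
batch = condition(P, e1 & e2)                # all evidence at once
print(bounds(seq, {"a"}) == bounds(batch, {"a"}))  # -> True
```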


International Conference on Artificial Intelligence Planning Systems | 1992

Abstract probabilistic modeling of action

Lonnie Chrisman

Action models used in planning systems must necessarily be abstractions of reality. It is therefore natural to include estimates of ignorance and uncertainty as part of an action model. The standard approach of assigning a unique probability distribution over possible outcomes fares poorly in the presence of abstraction, because many unmodeled variables are not governed by pure random chance. A constructive interpretation of probability based on abstracted worlds is developed; it suggests modeling constraints on the outcome distribution of an action rather than just a single outcome distribution. A belief function representation of upper and lower probabilities is adopted, and a closed-form projection rule is introduced and shown to be correct.
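The belief-function idea the abstract adopts can be illustrated with standard Dempster-Shafer belief and plausibility; the example action and names below are our own illustration, not the paper's projection rule.

```python
# Minimal Dempster-Shafer sketch: mass is assigned to *sets* of outcomes,
# so unmodeled variables need not be given point probabilities. Belief
# and plausibility then give lower and upper probability bounds.

def belief(masses, event):
    """Bel(A): total mass on focal sets wholly contained in A."""
    return sum(m for focal, m in masses if focal <= event)

def plausibility(masses, event):
    """Pl(A): total mass on focal sets overlapping A."""
    return sum(m for focal, m in masses if focal & event)

# Outcomes of a hypothetical abstract "push block" action: it lands in
# one of three regions. 0.6 is committed to region "x"; the remaining
# 0.4 is left ambiguous between "y" and "z" rather than split by fiat.
masses = [(frozenset({"x"}), 0.6), (frozenset({"y", "z"}), 0.4)]
print(belief(masses, {"y"}), plausibility(masses, {"y"}))  # -> 0 0.4
```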


International Conference on Machine Learning | 1989

Evaluating bias during pac-learning

Lonnie Chrisman

This chapter reviews a technique for detecting incorrect bias with arbitrary reliability. When a system is confronted with a new concept-learning problem, it will probably not be able to select the correct bias before it begins its learning task. The learner may be able to use knowledge it has to induce an appropriate bias in a few cases; otherwise, it must resort to an algorithm that makes strong performance guarantees when its bias is correct. Testing whether the algorithm performs as promised then forms the basis for concluding that the bias is bad. Any reasonable pac-learning algorithm can be converted into an algorithm that either returns an accurate concept or reports a bad bias, such that its output can be regarded as 1−δ reliable whether or not its bias is correct. The Valiant framework derives its power and its ability to provide strong performance guarantees from the assumption that the target concept is a member of the given concept class. When the assumption is true, the guarantees on accuracy, reliability, and computational complexity hold; when the bias can be incorrect, however, the model alone makes no guarantees on performance.
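A hedged sketch of the conversion described above. The function names and the acceptance threshold are our own assumptions; the chapter's construction would choose the test-sample size from δ and ε via concentration bounds rather than fixing it by hand.

```python
# Sketch: wrap any pac-learner so it either returns its hypothesis or
# reports "bad bias". The learner promises error at most epsilon when
# its bias is correct; we test that promise on a fresh labeled sample.
import random

def evaluate_bias(learner, oracle, epsilon, test_size):
    h = learner()                      # hypothesis from the pac-learner
    errors = 0
    for _ in range(test_size):
        x, y = oracle()                # fresh labeled example
        errors += (h(x) != y)
    if errors / test_size > epsilon:   # promised guarantee violated
        return "bad bias"
    return h

random.seed(0)
# Target is a threshold at 0.5, but this learner's bias admits only the
# constant-false concept: its guarantee cannot hold, and the test says so.
oracle = lambda: (lambda x: (x, x > 0.5))(random.random())
print(evaluate_bias(lambda: (lambda x: False), oracle, 0.1, 1000))  # -> bad bias
```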


National Conference on Artificial Intelligence | 1992

Reinforcement learning with perceptual aliasing: the perceptual distinctions approach

Lonnie Chrisman


National Conference on Artificial Intelligence | 1991

Sensible planning: focusing perceptual attention

Lonnie Chrisman; Reid G. Simmons


Uncertainty in Artificial Intelligence | 1996

Independence with lower and upper probabilities

Lonnie Chrisman


Uncertainty in Artificial Intelligence | 1996

Propagation of 2-monotone lower probabilities on an undirected graph

Lonnie Chrisman


Archive | 1995

Mixed-Mode Control of Navigation for a Lunar Rover

Reid G. Simmons; Eric Krotkov; Lonnie Chrisman; Fabio Gagliardi Cozman; Richard Goodwin; Martial Hebert; Guillermo Heredia; Sven Koenig; Pat Muir; Yoshikazu Shinoda

Collaboration


Dive into Lonnie Chrisman's collaborations.

Top Co-Authors

Reid G. Simmons (Carnegie Mellon University)
Eric Krotkov (Carnegie Mellon University)
Martial Hebert (Carnegie Mellon University)
Richard Goodwin (Carnegie Mellon University)
Sven Koenig (University of Southern California)
Yoshikazu Shinoda (Carnegie Mellon University)
Gita Krishnaswamy (Carnegie Mellon University)
Guillermo Heredia (Carnegie Mellon University)