Publication


Featured research published by D. Paul Benjamin.


Intelligent Robots and Computer Vision XXIV: Algorithms, Techniques, and Active Vision | 2006

Embodying a cognitive model in a mobile robot

D. Paul Benjamin; Damian M. Lyons; Deryle Lonsdale

The ADAPT project is a collaboration of researchers in robotics, linguistics and artificial intelligence at three universities to create a cognitive architecture specifically designed to be embodied in a mobile robot. There are major respects in which existing cognitive architectures are inadequate for robot cognition; in particular, they lack support for true concurrency and for active perception. ADAPT addresses these deficiencies by modeling the world as a network of concurrent schemas and modeling perception as problem solving. Schemas are represented using the RS (Robot Schemas) language and are activated by spreading activation. RS provides a powerful language for distributed control of concurrent processes, and its formal semantics provides the basis for the semantics of ADAPT's use of natural language. We have implemented the RS language in Soar, a mature cognitive architecture originally developed at CMU and used at a number of universities and companies. Soar's subgoaling and learning capabilities enable ADAPT to manage the complexity of its environment and to learn new schemas from experience. We describe the issues faced in developing an embodied cognitive architecture, and our implementation choices.
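A minimal sketch of spreading activation over a network of schemas, as the abstract describes for activating RS schemas. All names and the update rule are illustrative assumptions; the real system implements RS inside Soar.

```python
# Toy spreading-activation step over a schema network (hypothetical
# representation; ADAPT's actual schemas are RS processes run in Soar).

def spread_activation(edges, activation, decay=0.5, steps=1):
    """edges: {schema: [(neighbor, weight), ...]}; activation: {schema: level}."""
    for _ in range(steps):
        incoming = {s: 0.0 for s in activation}
        for src, links in edges.items():
            for dst, w in links:
                incoming[dst] += decay * w * activation[src]
        # New activation = old level plus decayed input from neighbouring schemas.
        activation = {s: activation[s] + incoming[s] for s in activation}
    return activation

# Example: an active "see-object" schema partially activates a "grasp" schema.
edges = {"see-object": [("grasp", 0.8)], "grasp": []}
act = spread_activation(edges, {"see-object": 1.0, "grasp": 0.0})
print(act["grasp"])  # 0.4
```

The decay factor keeps activation from growing without bound, so only schemas near strongly active ones become candidates for execution.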


Proceedings of SPIE | 2009

Locating and Tracking Objects by Efficient Comparison of Real and Predicted Synthetic Video Imagery

Damian M. Lyons; D. Paul Benjamin

A mobile robot moving in an environment in which there are other moving objects and active agents, some of which may represent threats and some of which may represent collaborators, needs to be able to reason about the potential future behaviors of those objects and agents. In this paper we present an approach to tracking targets with complex behavior, leveraging a 3D simulation engine to generate predicted imagery and comparing that against real imagery. We introduce an approach to compare real and simulated imagery and present results using this approach to locate and track objects with complex behaviors. In this approach, the salient points in the real and synthetic images are identified and an affine image transformation that maps the real scene to the synthetic scene is generated. An image difference operation is developed that ensures that the matched points in both images produce a zero difference. In this way, synchronization differences are reduced and content differences enhanced. A number of image pairs are processed and presented to illustrate the approach.
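The core of the comparison is an affine map fitted to matched salient points so that those points difference to zero. A least-squares sketch of that step, with made-up point data standing in for detected features (the paper's actual matching pipeline is not shown here):

```python
import numpy as np

def fit_affine(real_pts, synth_pts):
    """Solve synth ~= A @ [x, y, 1] for a 2x3 affine A by least squares."""
    X = np.hstack([real_pts, np.ones((len(real_pts), 1))])  # N x 3
    B, *_ = np.linalg.lstsq(X, synth_pts, rcond=None)       # 3 x 2
    return B.T                                              # 2 x 3

# Hypothetical matched salient points: the synthetic view is shifted by (2, 3).
real = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
synth = real + np.array([2., 3.])

A = fit_affine(real, synth)
mapped = np.hstack([real, np.ones((4, 1))]) @ A.T
print(np.abs(mapped - synth).max())  # ~0: matched points give zero difference
```

Warping the real image by A before differencing then suppresses pose misalignment, leaving genuine content differences enhanced, as the abstract describes.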


Proceedings of SPIE | 2013

A Cognitive Approach to Vision for a Mobile Robot

D. Paul Benjamin; Christopher Funk; Damian M. Lyons

We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The input from the real camera and the input from the virtual camera are compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
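The local-Gaussian comparison can be sketched as per-block mean/variance statistics of the real and virtual camera images, with the error mask picking the next fixation point. Block size and the error measure here are assumptions for illustration:

```python
import numpy as np

def local_gaussian_error(real, virt, block=8):
    """Compare two grayscale images by per-block Gaussian statistics
    (mean and variance); returns one error value per block."""
    h, w = real.shape
    hb, wb = h // block, w // block
    r = real[:hb * block, :wb * block].reshape(hb, block, wb, block)
    v = virt[:hb * block, :wb * block].reshape(hb, block, wb, block)
    dmean = r.mean(axis=(1, 3)) - v.mean(axis=(1, 3))
    dvar = r.var(axis=(1, 3)) - v.var(axis=(1, 3))
    return np.abs(dmean) + np.abs(dvar)

real = np.zeros((32, 32))
virt = np.zeros((32, 32))
virt[0:8, 0:8] = 1.0           # one block differs between real and virtual view
mask = local_gaussian_error(real, virt)
print(mask.argmax())           # block (0, 0): the place to examine next
```

Because only the highest-error blocks are examined further, the expensive per-fixation modeling stays confined to small localities, matching the task-oriented design the abstract describes.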


Proceedings of SPIE | 2012

Navigation of Uncertain Terrain by Fusion of Information from Real and Synthetic Imagery

Damian M. Lyons; Paramesh Nirmal; D. Paul Benjamin

We consider the scenario where an autonomous platform that is searching or traversing a building may observe unstable masonry or may need to travel over unstable rubble. A purely behaviour-based system may handle these challenges but produce behaviour that works against long-term goals such as reaching a victim as quickly as possible. We extend our work on ADAPT, a cognitive robotics architecture that incorporates 3D simulation and image fusion, to allow the robot to predict the behaviour of physical phenomena, such as falling masonry, and take actions consonant with long-term goals. We experimentally evaluate cognitive-only and reactive-only approaches to traversing a building filled with various numbers of challenges and compare their performance. The reactive-only approach succeeds only 38% of the time, while the cognitive-only approach succeeds 100% of the time. While the cognitive-only approach produces very impressive behaviour, our results indicate how much better the combination of cognitive and behaviour-based control can be.


technical symposium on computer science education | 2003

Undergraduate cyber security course projects

D. Paul Benjamin; Charles Border; Robert Montante; Paul J. Wagner

Computer science educators are increasingly being asked to provide education in the area of computer security, and a number of institutions are offering computer security courses and developing computer security programs. However, there is a need for computer security educators to develop "hands-on" projects that enable their students to move beyond a theoretical understanding of the field and develop practical skills that can be used in implementing secure computer systems for their future business and government employers.


Proceedings of SPIE | 2012

Using a virtual world for robot planning

D. Paul Benjamin; John V. Monaco; Yixia Lin; Christopher Funk; Damian M. Lyons

We are building a robot cognitive architecture that constructs a real-time virtual copy of itself and its environment, including people, and uses the model to process perceptual information and to plan its movements. This paper describes the structure of this architecture. The software components of this architecture include PhysX for the virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture that controls the perceptual processing and task planning. The RS (Robot Schemas) language is implemented in Soar, providing the ability to reason about concurrency and time. This Soar/RS component controls visual processing, deciding which objects and dynamics to render into PhysX, and the degree of detail required for the task. As the robot runs, its virtual model diverges from physical reality, and errors grow. The Match-Mediated Difference component monitors these errors by comparing the visual data with corresponding data from virtual cameras, and notifies Soar/RS of significant differences, e.g. a new object that appears, or an object that changes direction unexpectedly. Soar/RS can then run PhysX much faster than real-time and search among possible future world paths to plan the robot's actions. We report experimental results in indoor environments.
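Searching among possible future world paths faster than real time can be illustrated with a toy rollout planner. The one-dimensional dynamics and exhaustive search below are stand-ins chosen for brevity; the architecture itself rolls futures forward in PhysX under Soar/RS control.

```python
# Toy "faster-than-real-time" planner: simulate every short action
# sequence and keep the one that ends nearest the goal.
from itertools import product

def rollout(pos, actions, step=1.0):
    """Simulate a 1-D robot: each action moves it -1, 0, or +1 units."""
    for a in actions:
        pos += a * step
    return pos

def plan(start, goal, horizon=3):
    """Exhaustively search action sequences up to `horizon` steps."""
    return min(product((-1, 0, 1), repeat=horizon),
               key=lambda seq: abs(rollout(start, seq) - goal))

best = plan(0.0, 2.0)
print(best)  # a sequence whose moves sum to +2
```

In the real system the search is pruned by Soar's subgoaling rather than enumerated exhaustively, and each rollout is a full physics simulation rather than an arithmetic update.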


Proceedings of SPIE | 2011

A relaxed fusion of information from real and synthetic images to predict complex behavior

Damian M. Lyons; D. Paul Benjamin

An important component of cognitive robotics is the ability to mentally simulate physical processes and to compare the expected results with the information reported by a robot's sensors. In previous work, we have proposed an approach that integrates a 3D game-engine simulation into the robot control architecture. A key part of that architecture is the Match-Mediated Difference (MMD) operation, an approach to fusing sensory data and synthetic predictions at the image level. The MMD operation insists that simulated and predicted scenes are similar in terms of the appearance of the objects in the scene. This is an overly restrictive constraint on the simulation, since parts of the predicted scene may not have been previously viewed by the robot. In this paper we propose an extended MMD operation that relaxes the constraint and allows the real and synthetic scenes to differ in some features but not in (selected) other features. We develop image difference operations that allow a real image to be compared with a synthetic image generated from an arbitrarily colored graphical model of a scene. Scenes with the same content show a zero difference. Scenes with varying foreground objects can be controlled to compare the color, size and shape of the foreground.
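The relaxation can be pictured as a feature-by-feature comparison in which some channels (here, color) are simply excluded. The feature set and measures below are illustrative assumptions, not the paper's actual operators:

```python
import numpy as np

def relaxed_mmd(real_mask, synth_mask, real_color, synth_color,
                compare_color=False):
    """Compare foreground regions feature by feature: size and shape are
    always compared; color only when requested (the 'relaxed' part)."""
    diffs = {
        "size": abs(int(real_mask.sum()) - int(synth_mask.sum())),
        "shape": int(np.logical_xor(real_mask, synth_mask).sum()),
    }
    if compare_color:
        diffs["color"] = abs(real_color - synth_color)
    return diffs

# Same silhouette, arbitrary (different) colors in the graphical model.
real = np.zeros((8, 8), bool); real[2:6, 2:6] = True
synth = real.copy()
d = relaxed_mmd(real, synth, real_color=0.9, synth_color=0.1)
print(d)  # size and shape agree; color is ignored, so zero difference
```

With `compare_color=False` an arbitrarily colored model of the correct scene yields a zero difference, which is exactly the behavior the relaxed constraint is meant to allow.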


Proceedings of SPIE | 2010

Synchronizing Real and Predicted Synthetic Video Imagery for Localization of a Robot to a 3D Environment

Damian M. Lyons; Sirhan Chaudhry; D. Paul Benjamin

A mobile robot moving in an environment in which there are other moving objects and active agents, some of which may represent threats and some of which may represent collaborators, needs to be able to reason about the potential future behaviors of those objects and agents. In previous work, we presented an approach to tracking targets with complex behavior, leveraging a 3D simulation engine to generate predicted imagery and comparing that against real imagery. We introduced an approach to compare real and simulated imagery using an affine image transformation that maps the real scene to the synthetic scene in a robust fashion. In this paper, we present an approach to continually synchronize the real and synthetic video by mapping the affine transformation yielded by the real/synthetic image comparison to a new pose for the synthetic camera. We show a series of results for pairs of real and synthetic scenes, both similar and differing, containing a variety of objects.
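Folding the recovered affine transformation back into the synthetic camera pose can be sketched in 2-D: extract the rotation and translation parts of the affine and move the camera so the residual transform becomes the identity. The pose representation and decomposition are simplifying assumptions for illustration.

```python
import numpy as np

def update_camera_pose(pose, affine):
    """pose: (x, y, theta). affine: 2x3 matrix mapping real -> synthetic.
    Shift the synthetic camera to cancel the measured misalignment."""
    x, y, theta = pose
    dtheta = np.arctan2(affine[1, 0], affine[0, 0])  # rotation part
    dx, dy = affine[:, 2]                            # translation part
    return (float(x - dx), float(y - dy), float(theta - dtheta))

# Comparison says the synthetic view is offset by (0.5, -0.2), no rotation.
affine = np.array([[1.0, 0.0, 0.5],
                   [0.0, 1.0, -0.2]])
pose = update_camera_pose((0.0, 0.0, 0.0), affine)
print(pose)  # camera moved by (-0.5, +0.2) to re-synchronize
```

Repeating this on every frame keeps the synthetic video locked to the real one, which is the continual synchronization the abstract describes.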


recent advances in intrusion detection | 2008

Anomaly and Specification Based Cognitive Approach for Mission-Level Detection and Response

Paul Rubel; Partha P. Pal; Michael Atighetchi; D. Paul Benjamin; Franklin Webber

In 2005 a survivable system we built was subjected to red-team evaluation. Analyzing, interpreting, and responding to the defense mechanism reports took a room of developers. In May 2008 we took part in another red-team exercise. During this exercise an autonomous reasoning engine took the place of the room of developers. Our reasoning engine uses anomaly and specification-based approaches to autonomously decide if system and mission availability is in jeopardy, and take necessary corrective actions. This extended abstract presents a brief summary of the reasoning capability we developed: how it categorizes the data into an internal representation and how it uses deductive and coherence based reasoning to decide whether a response is warranted.
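The two complementary checks the reasoning engine combines, a specification check and an anomaly check, can be sketched with toy rules. The rule contents, thresholds, and responses here are hypothetical; the real engine uses deductive and coherence-based reasoning over its internal representation.

```python
# Toy mission-level detection: specification-based plus anomaly-based checks.

SPEC = {"web": {"db"}}  # which hosts each service is specified to contact

def specification_violation(service, contacted):
    """Flag any traffic the mission specification does not allow."""
    return bool(contacted - SPEC.get(service, set()))

def anomaly(rate, baseline, tolerance=3.0):
    """Flag request rates far outside the learned baseline."""
    return abs(rate - baseline) > tolerance

def respond(service, contacted, rate, baseline):
    if specification_violation(service, contacted) or anomaly(rate, baseline):
        return "isolate"   # availability in jeopardy: take corrective action
    return "monitor"

print(respond("web", {"db"}, 10.0, 9.0))          # monitor
print(respond("web", {"db", "smtp"}, 10.0, 9.0))  # isolate
```

Combining the two styles lets the engine catch both known-bad behavior (specification violations) and previously unseen attacks (anomalies), standing in for the room of developers.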


Archive | 2017

Classification and Prediction of Human Behaviors by a Mobile Robot

D. Paul Benjamin; Hong Yue; Damian M. Lyons

Robots interacting and collaborating with people need to comprehend and predict their movements. We present an approach to perceiving and modeling behaviors using a 3D virtual world. The robot’s visual data is registered with the virtual world to construct a model of the dynamics of the behavior and to predict future motions using a physics engine. This enables the robot to visualize alternative evolutions of the dynamics and to classify them. The goal of this work is to use this ability to interact more naturally with humans and to avoid potentially disastrous mistakes.
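Predicting future motions from registered observations can be illustrated with a constant-velocity extrapolation, a deliberately simple stand-in for rolling the behavior forward in the physics engine; the function names and the binary classification are assumptions.

```python
# Toy motion prediction and classification from a registered track.

def predict(track, steps=3):
    """track: list of (x, y) observations; extrapolate the last velocity."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]

def classify(track):
    """Crude behavior label from the most recent displacement."""
    speed = abs(track[-1][0] - track[-2][0]) + abs(track[-1][1] - track[-2][1])
    return "moving" if speed > 0 else "stationary"

track = [(0.0, 0.0), (1.0, 0.0)]
print(predict(track))   # [(2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
print(classify(track))  # moving
```

In the paper's architecture the physics engine generates several alternative evolutions of the dynamics rather than a single linear extrapolation, and the classifier chooses among them.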
