Publication


Featured research published by Robert Bodor.


Image and Vision Computing | 2009

View-independent human motion classification using image-based reconstruction

Robert Bodor; Andrew Drenner; Duc Fehr; Osama Masoud; Nikolaos Papanikolopoulos

We introduce in this paper a novel method for employing image-based rendering to extend the range of applicability of human motion and gait recognition systems. Much work has been done in the field of human motion and gait recognition, and many interesting methods for detecting and classifying motion have been developed. However, systems that can robustly recognize human behavior in real-world contexts have yet to be developed. A significant reason for this is that the activities of humans in typical settings are unconstrained in terms of the motion path. People are free to move throughout the area of interest in any direction they like. While there have been many good classification systems developed in this domain, the majority of these systems have used a single camera providing input to a training-based learning method. Methods that rely on a single camera are implicitly view-dependent. In practice, the classification accuracy of these systems often becomes increasingly poor as the angle between the camera and the direction of motion varies away from the training view angle. As a result, these methods have limited real-world applications, since it is often impossible to limit the direction of motion of people so rigidly. We demonstrate the use of image-based rendering to adapt the input to meet the needs of the classifier by automatically constructing the proper view (image) that matches the training view from a combination of arbitrary views taken from several cameras. We tested the method on 162 sequences of video data of human motions taken indoors and outdoors, and promising results were obtained.
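The core idea above is to synthesize the view the classifier was trained on (side-on to the walking direction) from whatever cameras happen to cover the scene. As a minimal illustrative sketch, not the paper's actual rendering pipeline, the hypothetical helper below weights each camera by its angular proximity to the desired orthogonal view, so cameras nearly side-on to the motion dominate the synthesized image:

```python
import math

def pick_blend_weights(camera_angles, motion_heading):
    """Weight cameras for blending into a view orthogonal to the motion.

    camera_angles: viewing direction of each camera (radians, scene frame).
    motion_heading: direction the subject is walking (radians).
    Returns normalized weights; cameras closest to the side-on view dominate.
    (Hypothetical helper for illustration, not the paper's procedure.)
    """
    target = motion_heading + math.pi / 2  # side-on (orthogonal) view

    def ang_dist(a):
        d = (a - target) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    # Inverse angular distance: a camera sitting exactly at the target
    # view receives almost all of the weight.
    inv = [1.0 / (ang_dist(a) + 1e-6) for a in camera_angles]
    s = sum(inv)
    return [w / s for w in inv]

# Subject walks along +x; the camera looking from the side (pi/2) dominates.
weights = pick_blend_weights([0.0, math.pi / 2, math.pi], 0.0)
```

A real implementation would warp and composite the actual images; this sketch only captures the camera-selection logic.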


Journal of Intelligent and Robotic Systems | 2007

Optimal Camera Placement for Automated Surveillance Tasks

Robert Bodor; Andrew Drenner; Paul R. Schrater; Nikolaos Papanikolopoulos

Camera placement has an enormous impact on the performance of vision systems, but the best placement to maximize performance depends on the purpose of the system. As a result, this paper focuses largely on the problem of task-specific camera placement. We propose a new camera placement method that optimizes views to provide the highest resolution images of objects and motions in the scene that are critical for the performance of some specified task (e.g. motion recognition, visual metrology, part identification, etc.). A general analytical formulation of the observation problem is developed in terms of motion statistics of a scene and resolution of observed actions resulting in an aggregate observability measure. The goal of this system is to optimize across multiple cameras the aggregate observability of the set of actions performed in a defined area. The method considers dynamic and unpredictable environments, where the subject of interest changes in time. It does not attempt to measure or reconstruct surfaces or objects, and does not use an internal model of the subjects for reference. As a result, this method differs significantly in its core formulation from camera placement solutions applied to problems such as inspection, reconstruction or the Art Gallery class of problems. We present tests of the system's optimized camera placement solutions using real-world data in both indoor and outdoor situations and robot-based experimentation using an ATRV-Jr all-terrain robot vehicle in an indoor setting.
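To make the "aggregate observability" idea concrete, here is a toy sketch under assumed simplifications (the objective form, the `observability` and `place_camera` names, and the resolution/foreshortening model are all illustrative, not the paper's exact measure): each observed action has a position, a heading, and a frequency weight, and a camera scores higher when it is close to the action and sees the motion side-on rather than head-on.

```python
import math

def observability(cam, actions):
    """Toy aggregate observability of a camera position over a set of
    actions, each given as (x, y, heading, weight)."""
    cx, cy = cam
    total = 0.0
    for ax, ay, heading, weight in actions:
        dx, dy = ax - cx, ay - cy
        dist = math.hypot(dx, dy)
        resolution = 1.0 / max(dist, 1.0)  # image resolution falls with range
        # Foreshortening: motion seen perpendicular to the view direction
        # projects to the largest image displacement.
        foreshorten = abs(math.sin(math.atan2(dy, dx) - heading))
        total += weight * resolution * foreshorten
    return total

def place_camera(candidates, actions):
    """Pick the best mounting position from a coarse grid of candidates."""
    return max(candidates, key=lambda c: observability(c, actions))

# All motion flows along +x through the origin: a camera off to the side
# (on the y-axis) observes it best; cameras on the x-axis see it head-on.
best = place_camera([(5, 0), (0, 5), (-5, 0)], [(0, 0, 0.0, 1.0)])
```

The paper optimizes this kind of measure jointly across multiple cameras and over continuous parameters; the grid-of-candidates search here is only the simplest possible stand-in.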


Journal of Intelligent and Robotic Systems | 2008

Multi-Camera Human Activity Monitoring

Loren Fiore; Duc Fehr; Robert Bodor; Andrew Drenner; Guruprasad Somasundaram; Nikolaos Papanikolopoulos

With the proliferation of security cameras, the approach taken to monitoring and placement of these cameras is critical. This paper presents original work in the area of multiple camera human activity monitoring. First, a system is presented that tracks pedestrians across a scene of interest and recognizes a set of human activities. Next, a framework is developed for the placement of multiple cameras to observe a scene. This framework was originally used in a limited X, Y, pan formulation but is extended to include height (Z) and tilt. Finally, an active dual-camera system for task recognition at multiple resolutions is developed and tested. All of these systems are tested under real-world conditions, and are shown to produce usable results.


Intelligent Robots and Systems | 2004

Dual-camera system for multi-level activity recognition

Robert Bodor; Ryan Morlok; Nikolaos Papanikolopoulos

This paper describes a dual-camera system intended to accomplish the task of vision-based activity recognition at multiple resolutions. The system is comprised of a wide-angle, fixed field of view camera coupled with a computer-controlled pan/tilt/zoom-lens camera to make detailed measurements of people for activity recognition applications. We demonstrate the use of the system in both indoor and outdoor environments.


Advanced Video and Signal Based Surveillance | 2005

Multi-camera positioning to optimize task observability

Robert Bodor; Paul R. Schrater; Nikolaos Papanikolopoulos

The performance of computer vision systems for measurement, surveillance, reconstruction, gait recognition, and many other applications, depends heavily on the placement of cameras observing the scene. This work addresses the question of the optimal placement of cameras to maximize the performance of real-world vision systems in a variety of applications. Specifically, our goal is to optimize the aggregate observability of the tasks being performed by the subjects in an area. We develop a general analytical formulation of the observation problem, in terms of the statistics of the motion in the scene and the total resolution of the observed actions that is applicable to many observation tasks and multi-camera systems. An optimization approach is used to find the internal and external (mounting position and orientation) camera parameters that optimize the observation criteria. We demonstrate the method for multi-camera systems in real-world monitoring applications, both indoor and outdoor.


Intelligent Robots and Systems | 2003

Image-based reconstruction for view-independent human motion recognition

Robert Bodor; Bennett Jackson; Osama Masoud; Nikolaos Papanikolopoulos

In this paper, we introduce a novel method for employing image-based rendering to extend the range of use of human motion recognition systems. Image-based rendering can extend these systems in two ways: (i) by generating additional training sets containing a large number of non-orthogonal views, and (ii) by generating orthogonal views (the views these systems are trained to recognize) from a combination of non-orthogonal views taken from several cameras. Here, image-based rendering is used to automatically generate views orthogonal to the mean direction of motion. We tested the method using an existing view-dependent human motion recognition system on two different sequences of motion, and promising initial results were obtained.


Intelligent Robots and Systems | 2005

Mobile camera positioning to optimize the observability of human activity recognition tasks

Robert Bodor; Andrew Drenner; Michael Janssen; Paul R. Schrater; Nikolaos Papanikolopoulos

The performance of systems for human activity recognition depends heavily on the placement of cameras observing the scene. This work addresses the question of the optimal placement of cameras to maximize the performance of these types of recognition tasks. Specifically, our goal is to optimize the quality of the joint observability of the tasks being performed by the subjects in an area. We develop a general analytical formulation of the observation problem, in terms of the statistics of the motion in the scene and the total resolution of the observed actions that is applicable to many observation tasks and multi-camera systems. A nonlinear optimization approach is used to find the internal and external (mounting position and orientation) camera parameters that optimize the recognition criteria. In these experiments, a single camera is repositioned using a mobile robot. Initial results for the problem of human activity recognition are presented.
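Since here a single camera is physically repositioned by a mobile robot, the optimization can be pictured as the robot climbing the observability surface: test small moves, keep any that improve the score. The sketch below is an assumed simplification (the paper uses a nonlinear optimizer; `score` and `reposition` are hypothetical names, and the objective is a toy stand-in for the observability measure):

```python
import math

def score(cam, actions):
    """Toy observability score: prefer close, side-on views of each
    weighted action (x, y, heading, weight)."""
    total = 0.0
    for ax, ay, heading, weight in actions:
        dx, dy = ax - cam[0], ay - cam[1]
        dist = math.hypot(dx, dy)
        total += weight * abs(math.sin(math.atan2(dy, dx) - heading)) / max(dist, 1.0)
    return total

def reposition(cam, actions, step=0.5, iters=100):
    """Greedy hill climbing: the robot tries a small move in each of four
    directions and commits to whichever improves the score, stopping at a
    local optimum."""
    for _ in range(iters):
        best, best_s = cam, score(cam, actions)
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            cand = (cam[0] + dx, cam[1] + dy)
            s = score(cand, actions)
            if s > best_s:
                best, best_s = cand, s
        if best == cam:
            break  # no neighboring position improves observability
        cam = best
    return cam

# Starting from a poor oblique viewpoint, the robot ends somewhere with a
# strictly better view of motion flowing along +x through the origin.
actions = [(0.0, 0.0, 0.0, 1.0)]
final = reposition((5.0, 5.0), actions)
```

Gradient-based or global optimizers would replace this greedy loop in practice; the point is only that camera pose is a decision variable the robot can act on.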


Intelligent Robots and Systems | 2004

Learning static occlusions from interactions with moving figures

Bennett Jackson; Robert Bodor; Nikolaos Papanikolopoulos

We present a simple and efficient algorithm for determining the position of static occluding bodies within a scene viewed by one or more static cameras. All information about the occluding bodies is derived from the perimeter of a figure moving through the scene. Once the positions of the occlusions are learned, successful reasoning about the observability of future figures in the scene is demonstrated. The method is extended to derive an estimate of the 3D position of the occluding bodies from multiple views. Several experimental results are described.
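The learning signal described above can be illustrated with a small grid-world sketch (an assumed toy model, not the paper's perimeter-based algorithm): wherever a tracked figure is predicted to appear but is observed as background, occlusion evidence accumulates, and pixels hidden in most interactions are classified as static occluders.

```python
# Toy scene: a rectangular figure walks left to right past a hidden
# rectangular occluder; the "camera" sees only the unoccluded silhouette.
W, H = 20, 10
occluder = {(x, y) for x in range(8, 11) for y in range(0, 6)}  # ground truth
evidence = [[0] * W for _ in range(H)]
seen = [[0] * W for _ in range(H)]

def figure_at(cx):
    """3-wide, 8-tall figure centered at column cx."""
    return {(x, y) for x in range(cx - 1, cx + 2)
            for y in range(1, 9) if 0 <= x < W}

for cx in range(1, W - 1):            # figure walks across the scene
    fig = figure_at(cx)
    visible = fig - occluder          # camera sees only unoccluded parts
    for (x, y) in fig:
        seen[y][x] += 1
        if (x, y) not in visible:
            evidence[y][x] += 1       # expected foreground, saw background

# A pixel is classified as occluded if it was hidden in most interactions.
occlusion_mask = {(x, y) for y in range(H) for x in range(W)
                  if seen[y][x] and evidence[y][x] / seen[y][x] > 0.5}
```

In this toy run the recovered mask is exactly the overlap between the occluder and the region the figure ever covered, matching the paper's observation that occluders are only learnable where moving figures interact with them.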


Image and Vision Computing | 2006

Automatic Euclidean reconstruction for turn-table sequences by indirect epipolar search between pairs of views

Michel Schrameck; Richard M. Voyles; Tom Myers; Robert Bodor; Osama Masoud

A new method is described to solve the structure from motion problem when the camera is only subject to a negligible rotation around its optical axis (the 'roll' rotation). The indirect epipolar search is a four-dimensional search, where the parameters are the two angles corresponding to the two remaining rotation axes of each camera. By 'indirect', we mean that there is no explicit computation of fundamental matrices or epipole locations. The approach is interesting in that the epipolar space scrutinized is much less sensitive to noise than the fundamental matrix space (which we call the 'direct' epipolar space). The method is particularly adapted to turn-table video sequences, where the roll rotation is non-existent, but we show that it can also be used with no a priori knowledge of the camera motion.


Advanced Robotics | 2005

Deriving occlusions in static scenes from observations of interactions with a moving figure

Bennett Jackson; Robert Bodor; Nikolaos Papanikolopoulos

Many applications in computer vision are based on a single static camera observing a scene which is static except for one or more figures (people, vehicles, etc.) moving through it. In these applications it is useful to understand whether the moving figure is partially occluded by some static element of the scene. Such partial occlusions, when undetected, confuse the analysis of the figure's pose and activity. We present an algorithm that uses only the information provided by moving figures to simply and efficiently derive the position of static occluding bodies. Once these occlusions are obtained, we demonstrate successful reasoning about the occlusion status of future figures within the same scene. The occlusion positions from multiple views of the same scene are used to extract an estimate of the three-dimensional position and shape of the occlusion. Experimental results validating the method are included.

Collaboration

Robert Bodor's top co-authors:

Osama Masoud, University of Minnesota
Duc Fehr, University of Minnesota
Stefan Atev, University of Minnesota
Loren Fiore, University of Minnesota