Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics in which Duc Fehr is active.

Publication


Featured research published by Duc Fehr.


Image and Vision Computing | 2009

View-independent human motion classification using image-based reconstruction

Robert Bodor; Andrew Drenner; Duc Fehr; Osama Masoud; Nikolaos Papanikolopoulos

In this paper, we introduce a novel method that employs image-based rendering to extend the range of applicability of human motion and gait recognition systems. Much work has been done in the field of human motion and gait recognition, and many interesting methods for detecting and classifying motion have been developed. However, systems that can robustly recognize human behavior in real-world contexts have yet to be developed. A significant reason for this is that the activities of humans in typical settings are unconstrained in terms of the motion path: people are free to move throughout the area of interest in any direction they like. While many good classification systems have been developed in this domain, the majority have used a single camera providing input to a training-based learning method. Methods that rely on a single camera are implicitly view-dependent. In practice, the classification accuracy of these systems often degrades as the angle between the camera and the direction of motion moves away from the training view angle. As a result, these methods have limited real-world applicability, since it is often impossible to constrain the direction of motion of people so rigidly. We demonstrate the use of image-based rendering to adapt the input to the needs of the classifier by automatically constructing the proper view (image), the one that matches the training view, from a combination of arbitrary views taken from several cameras. We tested the method on 162 video sequences of human motion recorded indoors and outdoors, and obtained promising results.
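
The rendering pipeline itself is beyond the scope of an abstract, but the view-dependence problem it addresses is easy to make concrete. The sketch below scores cameras by how well their line of sight approximates the canonical side view a gait classifier is typically trained on; all camera poses are hypothetical, and the paper goes further by synthesizing the matching view rather than merely selecting one.

```python
import numpy as np

def view_alignment_score(camera_pos, person_pos, motion_dir):
    """Score how close a camera's line of sight is to being
    perpendicular to the motion direction (the canonical side view
    most gait classifiers are trained on)."""
    view_dir = person_pos - camera_pos
    view_dir = view_dir / np.linalg.norm(view_dir)
    motion_dir = motion_dir / np.linalg.norm(motion_dir)
    # |cos(angle)| is 0 when the camera looks perpendicular to the path.
    return 1.0 - abs(np.dot(view_dir, motion_dir))

# Hypothetical setup: three cameras watching a person walking along +x.
cameras = [np.array([5.0, 15.0, 3.0]),   # side view
           np.array([15.0, 5.0, 3.0]),   # head-on view
           np.array([12.0, 12.0, 3.0])]  # oblique view
person = np.array([5.0, 5.0, 1.0])
motion = np.array([1.0, 0.0, 0.0])

scores = [view_alignment_score(c, person, motion) for c in cameras]
print("best side-view camera:", int(np.argmax(scores)))  # 0
```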


Journal of Intelligent and Robotic Systems | 2008

Multi-Camera Human Activity Monitoring

Loren Fiore; Duc Fehr; Robert Bodor; Andrew Drenner; Guruprasad Somasundaram; Nikolaos Papanikolopoulos

With the proliferation of security cameras, the approach taken to monitoring and placement of these cameras is critical. This paper presents original work in the area of multiple camera human activity monitoring. First, a system is presented that tracks pedestrians across a scene of interest and recognizes a set of human activities. Next, a framework is developed for the placement of multiple cameras to observe a scene. This framework was originally used in a limited X, Y, pan formulation but is extended to include height (Z) and tilt. Finally, an active dual-camera system for task recognition at multiple resolutions is developed and tested. All of these systems are tested under real-world conditions, and are shown to produce usable results.
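
A rough illustration of what extending the placement search to height and tilt involves: a brute-force sweep over a coarse (x, y, z, pan, tilt) grid with a simple view-cone coverage test. The scene geometry and parameters below are invented, and the toy model ignores occlusion, so this is a sketch of the search structure rather than the paper's formulation.

```python
import itertools
import numpy as np

def covered(cam, target, fov=np.radians(60), max_range=15.0):
    """Does a target point fall inside the camera's view cone?
    cam = (x, y, z, pan, tilt); a bare pinhole cone model with
    no occlusion handling, unlike a real placement framework."""
    x, y, z, pan, tilt = cam
    d = np.asarray(target, dtype=float) - np.array([x, y, z])
    dist = np.linalg.norm(d)
    if dist == 0 or dist > max_range:
        return False
    # Optical axis derived from pan (yaw) and tilt (pitch).
    axis = np.array([np.cos(tilt) * np.cos(pan),
                     np.cos(tilt) * np.sin(pan),
                     np.sin(tilt)])
    return bool(np.dot(d / dist, axis) > np.cos(fov / 2))

# Hypothetical scene: a 10 m x 10 m ground plane sampled every metre.
targets = [(i, j, 0.0) for i in range(10) for j in range(10)]

# Exhaustive search over a coarse pose grid for a single camera.
poses = itertools.product(range(0, 10, 3), range(0, 10, 3),
                          [2.0, 4.0],                    # height Z
                          np.linspace(0, 2 * np.pi, 8),  # pan
                          [-0.3, -0.6])                  # tilt, looking down
best = max(poses, key=lambda cam: sum(covered(cam, t) for t in targets))
print("best pose (x, y, z, pan, tilt):", best)
```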


International Conference on Robotics and Automation | 2012

Compact covariance descriptors in 3D point clouds for object recognition

Duc Fehr; Anoop Cherian; Ravishankar Sivalingam; Sam Nickolay; Vassilios Morellas; Nikolaos Papanikolopoulos

One of the most important tasks for mobile robots is to sense their environment; further tasks might include recognizing objects in the surroundings. Three-dimensional range finders have become the sensors of choice for mapping a robot's environment, yet recognizing objects in the point clouds provided by such sensors is a difficult task. The main contribution of this paper is the introduction of a new covariance-based point cloud descriptor for such object recognition. Covariance-based descriptors have been very successful in image processing. One of their main advantages is their relatively small size, and comparisons between covariance matrices can also be made very efficiently. Experiments with real-world and synthetic data show the superior performance of the covariance descriptors on point clouds compared to state-of-the-art methods.
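
As a sketch of the idea (not the paper's exact feature set or comparison metric), a covariance descriptor is simply the d x d covariance of per-point features, so its size depends on the number of features rather than on the number of points. Descriptors live on the manifold of symmetric positive-definite matrices and can be compared efficiently, for example with the log-Euclidean distance:

```python
import numpy as np

def covariance_descriptor(feats):
    """feats: (N, d) matrix of per-point features (e.g. position,
    normals, curvature). Returns the d x d covariance descriptor."""
    X = feats - feats.mean(axis=0)
    return X.T @ X / (len(feats) - 1)

def log_euclidean_distance(C1, C2):
    """Compare two SPD descriptors via the log-Euclidean metric,
    ||logm(C1) - logm(C2)||_F. A small ridge keeps the matrices
    positive definite."""
    def logm_spd(C):
        w, V = np.linalg.eigh(C + 1e-8 * np.eye(len(C)))
        return V @ np.diag(np.log(w)) @ V.T
    return np.linalg.norm(logm_spd(C1) - logm_spd(C2), 'fro')

# Toy example: two random point clouds with (x, y, z) features.
rng = np.random.default_rng(0)
cloud_a = rng.normal(size=(500, 3))
cloud_b = rng.normal(size=(500, 3)) * [1.0, 1.0, 3.0]  # stretched in z
Ca, Cb = covariance_descriptor(cloud_a), covariance_descriptor(cloud_b)
print("descriptor distance:", log_euclidean_distance(Ca, Cb))
```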


Advanced Video and Signal Based Surveillance | 2009

Counting People in Groups

Duc Fehr; Ravishankar Sivalingam; Vassilios Morellas; Nikolaos Papanikolopoulos; Osama A. Lotfallah; Youngchoon Park

Cameras are becoming a common tool for automated vision purposes due to their low cost. In an era of growing security concerns, camera surveillance systems have become not only important but also necessary. Algorithms for several tasks such as detecting abandoned objects and tracking people have already been successfully developed. While tracking people is relatively easy, counting people in groups is much more challenging. The mutual occlusions between people in a group make it difficult to provide an exact count. The aim of this work is to present a method of estimating the number of people in group scenarios. Several considerations for counting people are illustrated in this paper, and experimental results of the method are described and discussed.
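
The abstract does not spell out the estimation method, so purely as an illustration of the simplest baseline such a system must improve on, one can divide a group blob's foreground area by a calibrated per-person pixel area. The calibration constant and mask below are invented, and real scenes require handling the occlusions the paper focuses on.

```python
import numpy as np

def estimate_count(foreground_mask, pixels_per_person):
    """Estimate the number of people in a blob by dividing its
    foreground area by a calibrated per-person pixel area.
    `pixels_per_person` would come from single-person observations
    at a similar scene depth; a crude stand-in for a real method,
    which must also account for mutual occlusion."""
    area = int(foreground_mask.sum())
    return max(1, round(area / pixels_per_person)) if area else 0

# Toy mask: a 100 x 60 pixel blob of foreground.
mask = np.zeros((120, 160), dtype=bool)
mask[10:110, 40:100] = True
print("estimated people:", estimate_count(mask, pixels_per_person=2500))
```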


International Conference on Robotics and Automation | 2014

RGB-D object classification using covariance descriptors

Duc Fehr; William J. Beksi; Dimitris Zermas; Nikolaos Papanikolopoulos

In this paper, we introduce a new covariance-based feature descriptor to be used on “colored” point clouds gathered by a mobile robot equipped with an RGB-D camera. Although many recent descriptors provide adequate results, there is not yet a clear consensus on how best to tackle “colored” point clouds. We present the notion of a covariance on RGB-D data. Covariances have proven successful not only in image processing but in other domains as well. Their main advantage is that they provide a compact and flexible description of point clouds. Our work is a first step towards demonstrating the usability of covariances in conjunction with RGB-D data. Experiments performed on an RGB-D database, compared against previous results, show the improved performance of our method.
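
A minimal sketch of the core idea, under the assumption that the per-point feature vector simply stacks geometry and color as [x, y, z, r, g, b] (the paper's feature set may differ): adding the three color channels merely grows the descriptor from 3 x 3 to 6 x 6.

```python
import numpy as np

def rgbd_features(points_xyz, colors_rgb):
    """Stack geometry and normalized color into one per-point
    feature vector [x, y, z, r, g, b]; a guess at the feature set,
    for illustration only."""
    return np.hstack([points_xyz, colors_rgb.astype(float) / 255.0])

def covariance_descriptor(feats):
    X = feats - feats.mean(axis=0)
    return X.T @ X / (len(feats) - 1)

rng = np.random.default_rng(1)
xyz = rng.normal(size=(800, 3))
rgb = rng.integers(0, 256, size=(800, 3))
C = covariance_descriptor(rgbd_features(xyz, rgb))
print(C.shape)  # (6, 6): color added three rows and columns
```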


Intelligent Robots and Systems | 2009

Issues and solutions in surveillance camera placement

Duc Fehr; Loren Fiore; Nikolaos Papanikolopoulos

Cameras are becoming a common tool for automated vision purposes due to their low cost. Many surveillance and inspection systems include cameras as their sensor of choice. How useful these camera systems are depends heavily on the positioning of the cameras. This is especially true if the cameras are to be used in automated systems, as a well-chosen camera placement simplifies image processing operations. A reliable positioning algorithm can therefore lower the processing requirements of the system. In this paper, several considerations for improving camera placement are investigated, with the goal of developing a general algorithm that can be applied to a variety of systems. This paper presents such an algorithm for the placement problem in the context of computer vision and robotics. Simulated results of our method are then shown and discussed, along with an outline of future work.
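
The abstract does not name the algorithm, so the snippet below shows a common heuristic for coverage problems of this kind, greedy selection of the pose covering the most still-uncovered targets, purely as an illustrative stand-in. The corridor scene and coverage predicate are invented.

```python
def greedy_placement(candidates, targets, covers, k):
    """Greedily pick k camera poses, each maximizing the number of
    newly covered targets; a standard heuristic for coverage
    problems, not necessarily the paper's algorithm."""
    chosen, uncovered = [], set(range(len(targets)))
    for _ in range(k):
        best = max(candidates,
                   key=lambda c: sum(covers(c, targets[i]) for i in uncovered))
        chosen.append(best)
        uncovered -= {i for i in uncovered if covers(best, targets[i])}
    return chosen

# Toy 1-D corridor: a camera covers targets within 2 units of itself.
targets = [float(i) for i in range(10)]
candidates = [float(i) for i in range(10)]
covers = lambda cam, t: abs(cam - t) <= 2.0
print("chosen poses:", greedy_placement(candidates, targets, covers, k=2))
```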


Computer Vision and Image Understanding | 2016

Covariance based point cloud descriptors for object detection and recognition

Duc Fehr; William J. Beksi; Dimitris Zermas; Nikolaos Papanikolopoulos

We introduce a covariance-based feature descriptor for object classification. The descriptor is compact (low dimensionality) and computationally fast. Adding new descriptor features amounts to the addition of a new row and column. There is no need to tune parameters such as bin size or number. The descriptor is naturally discriminative and subtracts out common data features.

Processing 3D point cloud data is of primary interest in many areas of computer vision, including object grasping, robot navigation, and object recognition. The introduction of affordable RGB-D sensors has created great interest in the computer vision community towards developing efficient algorithms for point cloud processing. Previously, capturing a point cloud required expensive specialized sensors such as lasers or dedicated range imaging devices; now, range data is readily available from low-cost sensors that provide easily extractable point clouds from a depth map. From here, an interesting challenge is to find different objects in the point cloud. Various descriptors have been introduced to match features in a point cloud. Cheap sensors are not necessarily designed to produce precise measurements, which means that the data is not as accurate as a point cloud provided by a laser or a dedicated range finder. Although some feature descriptors have been shown to be successful in recognizing objects from point clouds, there still exist opportunities for improvement. The aim of this paper is to introduce techniques from other fields, such as image processing, into 3D point cloud processing in order to improve rendering, classification, and recognition. Covariances have proven to be a success not only in image processing, but in other domains as well. This work develops the application of covariances in conjunction with 3D point cloud data.
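
To make the descriptor-comparison step concrete, here is a minimal nearest-neighbour classifier over covariance descriptors using the log-Euclidean distance, a standard metric on symmetric positive-definite matrices and not necessarily the one used in the paper. The toy descriptors and labels are invented.

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def nn_classify(query, train, labels):
    """Nearest-neighbour classification of covariance descriptors
    under the log-Euclidean distance; the simplest classifier such
    descriptors plug into."""
    d = [np.linalg.norm(logm_spd(query) - logm_spd(C)) for C in train]
    return labels[int(np.argmin(d))]

# Toy example: two classes of 3 x 3 descriptors.
train = [np.diag([1.0, 1.0, 1.0]), np.diag([1.0, 1.0, 9.0])]
labels = ["compact", "elongated"]
print(nn_classify(np.diag([1.1, 0.9, 8.0]), train, labels))  # elongated
```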


Mediterranean Conference on Control and Automation | 2012

A solution with multiple robots and Kinect systems to implement the parallel coverage problem

Hyeun Jeong Min; Duc Fehr; Nikolaos Papanikolopoulos

The coverage problem has traditionally been solved for a given number of robots with randomly generated positions. However, our recent work presented a solution to the parallel coverage problem that optimizes the number of robots starting at the same location. The motivations are: (i) the number of robots involved affects the total coverage cost, and (ii) placing them at real-world locations requires extra effort. In this work we present a control algorithm for multiple robots with Kinect systems that implements the solution to the parallel coverage problem. Our algorithm utilizes a multi-robot formation. Robots need to localize themselves to know where they are within a map; to localize the robots and to reduce inter-robot communication, we introduce a technique that places only certain robots in a team. This work also presents an algorithm for managing dynamic changes in a group of formations in order to solve the coverage problem. The paper demonstrates the mission, visiting every desired position to cover an indoor environment, with a team of real robots and the Kinect system.
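
The formation controller itself is beyond the abstract. As a hedged sketch of the parallel aspect only, the workspace can be decomposed so that each robot sweeps its own band of grid cells; this row-band decomposition is illustrative and is not the paper's algorithm.

```python
def partition_rows(rows, cols, k):
    """Split a grid of cells into k contiguous row bands, one per
    robot, so coverage sweeps can run in parallel; a simple
    illustrative decomposition, not a formation controller."""
    bands, start = [], 0
    for r in range(k):
        size = rows // k + (1 if r < rows % k else 0)
        bands.append([(i, j) for i in range(start, start + size)
                             for j in range(cols)])
        start += size
    return bands

# Example: a 7 x 4 grid covered by 3 robots.
for robot, cells in enumerate(partition_rows(7, 4, 3)):
    print(f"robot {robot} covers rows {cells[0][0]}..{cells[-1][0]}")
```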


Mediterranean Conference on Control and Automation | 2009

Experiments in object reconstruction using a robot-mounted laser range-finder

Pratap Tokekar; Vineet Bhatawadekar; Duc Fehr; Nikolaos Papanikolopoulos

This paper presents a methodology for estimating the 2D reconstruction of an object on a given horizontal plane using a laser range-finder mounted on a mobile robot. To complete the reconstruction, scans of the object from all sides are required, and hence the robot must travel completely around the object in a full circle. Since no a priori information about the object is used, the path planning for the robot must be done in an online fashion as more and more of the object is seen. Techniques for such trajectory planning, which obtain all-around views in a smooth fashion, are put forth in this paper. As the object is seen in parts, these parts must be registered together to form a consistent reconstruction. Scan matching using the Iterative Closest Point (ICP) algorithm is used to stitch together the scans obtained from various viewpoints. Experimental results for the reconstruction are provided. The 2D reconstruction can provide information about the area projected onto the ground by the object, giving cues about the object's shape, and motivates further work on the 3D reconstruction of moving objects.
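
ICP is a standard building block, and a minimal 2D variant conveys the core loop: match each source point to its nearest destination point, then solve for the best rigid transform with an SVD (Kabsch) step. Real scan matchers add outlier rejection and robust weighting; the test data below is synthetic.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal 2-D ICP: nearest-neighbour matching followed by a
    closed-form rigid-transform update (Kabsch)."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest-neighbour correspondences.
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Best rigid transform between the matched point sets.
        mu_s, mu_d = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:  # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, t = dR @ R, dR @ t + (mu_d - dR @ mu_s)
    return R, t

# Sanity check: recover a known rotation and translation.
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scan = np.random.default_rng(2).uniform(-1, 1, size=(60, 2))
R, t = icp_2d(scan, scan @ R_true.T + [0.3, -0.1])
print("recovered angle:", np.arctan2(R[1, 0], R[0, 0]))  # ~0.2
```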


Proceedings of SPIE | 2009

Using a laser range finder mounted on a MicroVision robot to estimate environmental parameters

Duc Fehr; Nikolaos Papanikolopoulos

In this article we present a new robot (MicroVision) that has been designed at the University of Minnesota (UMN) Center for Distributed Robotics. Its design is reminiscent of previous robots built at the UMN, such as the COTS Scouts or the eROSIs: like those robots, it is composed of a body with two wheels and a tail. However, the MicroVision has more powerful processing and sensing capabilities, which we utilize to compute areas in the surrounding environment using a convex hull approach. Specifically, we estimate the projected area of an object onto the ground by computing convex hulls from the data received from the MicroVision's laser range finder. Although localization of the robot is important for computing these convex hulls, localization and mapping techniques are used only as a tool and are not an end in this work. The main idea of this work is to demonstrate the ability of the laser-carrying MicroVision robot to move around an object in order to get a scan from each side. From these scans, the convex hull of the shape is deduced and its projected area onto the ground is estimated.
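
The area estimate reduces to taking the convex hull of the 2D scan points accumulated while circling the object. A minimal sketch, assuming SciPy is available (in SciPy's convention, a 2D hull reports its area in the volume attribute):

```python
import numpy as np
from scipy.spatial import ConvexHull

def projected_area(scan_points_xy):
    """Area of the convex hull of 2-D points accumulated from laser
    scans taken around an object. For 2-D hulls, SciPy stores the
    enclosed area in `volume` (and the perimeter in `area`)."""
    return ConvexHull(np.asarray(scan_points_xy)).volume

# Toy scan of a unit square seen from four sides.
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.2), (0.3, 0.8)]
print("projected area:", projected_area(pts))  # 1.0
```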

Collaboration


Dive into Duc Fehr's collaborations.

Top Co-Authors

Robert Bodor
University of Minnesota

Loren Fiore
University of Minnesota

Osama Masoud
University of Minnesota