Guilherme N. DeSouza
University of Missouri
Publications
Featured research published by Guilherme N. DeSouza.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2002
Guilherme N. DeSouza; Avinash C. Kak
Surveys the developments of the last 20 years in the area of vision for mobile robot navigation. Two major components of the paper deal with indoor navigation and outdoor navigation. For each component, we have further subdivided our treatment of the subject on the basis of structured and unstructured environments. For indoor robots in structured environments, we have dealt separately with the cases of geometrical and topological models of space. For unstructured environments, we have discussed the cases of navigation using optical flows, using methods from the appearance-based paradigm, and by recognition of specific objects in the environment.
international conference on robotics and automation | 2003
Youngrock Yoon; Guilherme N. DeSouza; Avinash C. Kak
This paper presents a fast tracking algorithm capable of estimating the complete pose (6 DOF) of an industrial object using its circular-shape features. Since the algorithm is part of a real-time visual servoing system designed for assembly of automotive parts on the fly, the main constraints in its design were speed and accuracy: close to frame-rate performance, and an error in pose estimation smaller than a few millimeters. The proposed algorithm uses only three model features, and yet it is very accurate and robust, so both constraints were satisfied: the algorithm runs at 60 fps (30 fps for each stereo image) on a PIII-800 MHz computer, and the pose of the object is calculated to within an uncertainty of 2.4 mm in translation and 1.5 degrees in rotation.
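The abstract does not detail how the pose is computed from the three features, but a common way to obtain a full 6-DOF pose from three stereo-matched feature centers is sketched below: triangulate each center from the stereo pair, then recover the rigid transform aligning the three model points to the three measured points (Kabsch alignment). This is a generic illustration, not necessarily the authors' method; the projection matrices and point arrays are assumed inputs.

```python
# Hedged sketch: stereo triangulation of feature centers followed by a
# three-point rigid alignment. Not the authors' circle-specific algorithm.

import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    A = np.vstack([
        x_left[0] * P_left[2] - P_left[0],
        x_left[1] * P_left[2] - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def rigid_transform(model_pts, measured_pts):
    """Best-fit rotation R and translation t with measured ≈ R @ model + t (Kabsch)."""
    mc, dc = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - mc).T @ (measured_pts - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ mc
```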
IEEE Transactions on Fuzzy Systems | 1998
Juiyao Pan; Guilherme N. DeSouza; Avinash C. Kak
There exist in the literature today many contributions dealing with the incorporation of fuzzy logic into expert systems. Unfortunately, much of what has been proposed can only be applied to small-scale expert systems, that is, when the number of rules is in the dozens as opposed to in the hundreds. The more traditional (nonfuzzy) expert systems are able to cope with large numbers of rules by using Rete networks to maintain matches of all the rules against all the facts. (A Rete network obviates the need to match the rules with the facts on every cycle of the inference engine.) In this paper, we present a more general Rete network that is particularly suitable for reasoning with fuzzy logic. The generalized Rete network consists of a cascade of three networks: the pattern network, the join network, and the evidence aggregation network. The first two are modified versions of the corresponding layers in traditional Rete networks; the last, the aggregation layer, is a new concept that allows fuzzy evidence to be aggregated when fuzzy inferences are made about the same fuzzy variable by different rules.
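A minimal sketch of the three-stage cascade described above is shown below, with illustrative class and function names (not from the paper). A real Rete network retains partial matches between inference cycles; this sketch only shows the data flow: pattern matching, joining with min (fuzzy AND), and aggregating evidence about the same fuzzy variable with max.

```python
# Illustrative three-stage cascade: pattern, join, and evidence aggregation.
# All names are hypothetical; state is not carried between cycles here.

from collections import defaultdict

class FuzzyFact:
    def __init__(self, variable, value, membership):
        self.variable = variable      # e.g. "temperature"
        self.value = value            # e.g. "high"
        self.membership = membership  # degree of membership in [0, 1]

class FuzzyRule:
    def __init__(self, conditions, conclusion):
        self.conditions = conditions  # list of (variable, value) pairs
        self.conclusion = conclusion  # (variable, value) concluded by the rule

def pattern_stage(facts):
    """Index facts by (variable, value) so each rule condition is matched once."""
    return {(f.variable, f.value): f.membership for f in facts}

def join_stage(index, rule):
    """Combine the memberships of all matched conditions (min = fuzzy AND)."""
    return min(index.get(cond, 0.0) for cond in rule.conditions)

def aggregation_stage(firings):
    """Fuse evidence from different rules about the same variable (max = fuzzy OR)."""
    aggregated = defaultdict(float)
    for conclusion, degree in firings:
        aggregated[conclusion] = max(aggregated[conclusion], degree)
    return dict(aggregated)

def run_cycle(facts, rules):
    index = pattern_stage(facts)
    firings = [(r.conclusion, join_stage(index, r)) for r in rules]
    return aggregation_stage(f for f in firings if f[1] > 0.0)
```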
international conference on robotics and automation | 2004
Pradit Mittrapiyanuruk; Guilherme N. DeSouza; Avinash C. Kak
This paper presents two different algorithms for object tracking and pose estimation. Both methods are based on an appearance-model technique called the Active Appearance Model (AAM). The key idea of the first method is to use two instances of the AAM to track landmark points in a stereo pair of images, perform 3D reconstruction of the landmarks, and then estimate the 3D pose. The second method, the AAM matching algorithm, is an extension of the original AAM that incorporates the full 6-DOF pose parameters as part of the minimization parameters. This extension allows for the estimation of the 3D pose of any object, without any restriction on its geometry. We compare both algorithms with a previously developed algorithm using a geometric-based approach [14]. The results show that the accuracy in pose estimation of our new appearance-based methods is better than that of the geometric-based approach. Moreover, since appearance-based methods do not require customized feature extraction, the new methods offer a more flexible alternative, especially in situations where extracting features is not simple due to cluttered backgrounds, complex and irregular features, etc.
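A hedged sketch of the idea behind the second method follows: fold the 6-DOF pose into the parameter vector of an appearance-model fit, so the optimizer searches jointly over pose and appearance coefficients. The model, the pinhole projection, and the image-sampling function below are toy placeholders, not the authors' AAM implementation.

```python
# Toy joint minimization over pose (6 DOF) and appearance coefficients.
# All data and functions are synthetic stand-ins.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, pose, focal=800.0, center=(320.0, 240.0)):
    """Rigid transform by pose = (rx, ry, rz, tx, ty, tz), then pinhole projection."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    p = points_3d @ R.T + pose[3:]
    return np.column_stack((focal * p[:, 0] / p[:, 2] + center[0],
                            focal * p[:, 1] / p[:, 2] + center[1]))

def residuals(params, shape_3d, mean_texture, texture_basis, sample_image):
    """Difference between image samples at projected landmarks and the model texture."""
    pose, appearance = params[:6], params[6:]
    pixels = project(shape_3d, pose)
    model_texture = mean_texture + texture_basis @ appearance
    return sample_image(pixels) - model_texture

# Toy data: eight 3-D landmarks, two appearance modes, and a flat synthetic image.
shape_3d = np.array([[-1, -1, 0], [1, -1, 0], [1, 1, 0], [-1, 1, 0],
                     [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]], float)
mean_texture = np.full(8, 0.5)
texture_basis = np.eye(8)[:, :2]                 # two appearance modes
sample_image = lambda px: np.full(len(px), 0.6)  # stand-in for bilinear sampling

x0 = np.concatenate(([0, 0, 0, 0, 0, 5.0], np.zeros(2)))  # initial pose + appearance
fit = least_squares(residuals, x0,
                    args=(shape_3d, mean_texture, texture_basis, sample_image))
print("estimated pose:", fit.x[:6])
```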
international conference on robotics and automation | 2010
Daniel Conrad; Guilherme N. DeSouza
In this paper, a homography-based approach for determining the ground plane from image pairs is presented. Our approach is unique in that it uses a Modified Expectation Maximization algorithm to cluster pixels in the images as belonging to one of two possible classes: ground and non-ground. This classification is very useful in mobile robot navigation because, by segmenting out the ground plane, we are left with all possible objects in the scene, which can then be used to implement many mobile robot navigation algorithms such as obstacle avoidance, path planning, target following, landmark detection, etc. Specifically, we demonstrate the usefulness and robustness of our approach by applying it to a target-following algorithm. As the results section shows, the proposed algorithm for ground plane detection achieves an almost perfect detection rate (over 99%) despite the relatively high number of errors in pixel correspondence from the feature-matching algorithm used (SIFT).
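The sketch below is a generic EM outline of the clustering idea described above, not the authors' Modified Expectation Maximization algorithm; all names and parameters are illustrative. It alternates between fitting a ground-plane homography from correspondences currently believed to lie on the ground (M-step) and re-scoring every correspondence as ground or non-ground from its homography transfer error (E-step).

```python
# Generic homography-based ground/non-ground EM clustering sketch.

import numpy as np

def fit_homography(src, dst, weights):
    """Weighted DLT fit of a 3x3 homography mapping src pixels to dst pixels."""
    rows = []
    for (x, y), (u, v), w in zip(src, dst, weights):
        rows.append(w * np.array([-x, -y, -1, 0, 0, 0, u * x, u * y, u]))
        rows.append(w * np.array([0, 0, 0, -x, -y, -1, v * x, v * y, v]))
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def transfer_error(H, src, dst):
    """One-way reprojection error of each correspondence under H."""
    src_h = np.column_stack((src, np.ones(len(src))))
    proj = src_h @ H.T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - dst, axis=1)

def em_ground_plane(src, dst, iters=20, sigma=2.0, outlier_density=1e-3):
    """Soft-assign correspondences to {ground, non-ground} by iterating EM."""
    resp = np.full(len(src), 0.5)                  # P(ground | correspondence)
    for _ in range(iters):
        H = fit_homography(src, dst, resp)         # M-step
        err = transfer_error(H, src, dst)          # E-step
        ground_lik = np.exp(-0.5 * (err / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        resp = ground_lik / (ground_lik + outlier_density)
    return H, resp > 0.5                           # homography and ground mask

# Usage (assumed inputs): src and dst are (N, 2) arrays of matched pixel
# coordinates between the two views, e.g. SIFT matches.
# H, ground_mask = em_ground_plane(src, dst)
```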
international symposium on neural networks | 2009
Ryanne Dolan; Guilherme N. DeSouza
The inherent massive parallelism of cellular neural networks makes them an ideal computational platform for kernel-based algorithms and image processing. General-purpose GPUs provide similar massive parallelism, but it can be difficult to design algorithms to make optimal use of the hardware. The presented research includes a GPU abstraction based on cellular neural networks. The abstraction offers a simplified view of massively parallel computation which remains reasonably efficient. An image processing library with visualization software has been developed to showcase the flexibility and power of cellular computation on GPUs. Benchmarks of the library indicate that commodity GPUs can be used to significantly accelerate CNN research and offer a viable alternative to CPU-based image processing algorithms.
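An illustrative sketch of why a cellular neural network (CNN) maps so naturally onto a GPU follows: every cell updates from a small fixed neighborhood, so one kernel launch (emulated here with a NumPy/SciPy convolution) updates the whole grid in parallel. The templates and parameters are generic examples, not the library described in the paper.

```python
# Generic discrete-time cellular neural network update, emulated on the CPU.

import numpy as np
from scipy.signal import convolve2d

def cnn_output(x):
    """Standard piecewise-linear CNN output: y = 0.5 * (|x + 1| - |x - 1|)."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cnn_step(x, u, A, B, z, dt=0.1):
    """One explicit-Euler step of the CNN state equation
    dx/dt = -x + A * y(x) + B * u + z, applied to all cells at once."""
    y = cnn_output(x)
    feedback = convolve2d(y, A, mode="same", boundary="fill")
    feedforward = convolve2d(u, B, mode="same", boundary="fill")
    return x + dt * (-x + feedback + feedforward + z)

# Example: an edge-detection-style template pair applied to a random image u.
u = np.random.rand(64, 64) * 2.0 - 1.0                          # input in [-1, 1]
x = np.zeros_like(u)                                            # initial state
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)          # feedback template
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)  # control template
for _ in range(100):
    x = cnn_step(x, u, A, B, z=-0.5)
edges = cnn_output(x)
```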
international conference on 3-d digital imaging and modeling | 2001
Johnny Park; Guilherme N. DeSouza; Avinash C. Kak
In this paper we present our Dual-Beam Structured-Light Scanner (DSLS), a scanning system that generates range maps much richer than those obtained from a conventional structured-light scanning system. Range maps produced by DSLS require fewer registrations for 3-D modeling. We show that the DSLS system more easily satisfies what are often difficult-to-satisfy conditions for determining the 3-D coordinates of an arbitrary object point. Two specific advantages of DSLS over conventional structured-light scanning are: (1) a single scan by the DSLS system is capable of generating range data on more surfaces than is possible with the conventional approach using the same number of camera images; and (2) since the data collected by DSLS suffer less from self-occlusions, the object needs to be examined from a smaller number of viewpoints.
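For context, the sketch below is a generic illustration of the triangulation that any structured-light scanner (single- or dual-beam) relies on: a 3-D point is recovered by intersecting the camera ray through an illuminated pixel with the known plane of the projected light stripe. It is not the DSLS algorithm, and the calibration values are made up.

```python
# Generic structured-light ray/plane triangulation with hypothetical calibration.

import numpy as np

def intersect_ray_with_light_plane(pixel, K, plane_normal, plane_d):
    """Back-project the pixel to a camera ray and intersect it with the plane
    n . X = d (camera at the origin); returns the 3-D point in camera coordinates."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    scale = plane_d / (plane_normal @ ray)
    return scale * ray

# Hypothetical calibration: camera intrinsics K and one light-stripe plane.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
plane_normal = np.array([0.707, 0.0, 0.707])   # unit normal of the light plane
plane_d = 0.5                                  # plane offset in n . X = d
point_3d = intersect_ray_with_light_plane((400, 260), K, plane_normal, plane_d)
```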
Computer Vision and Image Understanding | 2011
Yuanqiang (Evan) Dong; Guilherme N. DeSouza
We propose a new adaptive learning algorithm using multiple eigensubspaces to handle sudden as well as gradual changes in background due, for example, to illumination variations. To handle such changes, the feature space is organized into clusters representing the different background appearances. A local principal component analysis transformation is used to learn a separate eigensubspace for each cluster, and adaptive learning is used to continuously update the eigensubspaces. When the current image is presented, the system automatically selects the learned subspace that shares the closest appearance and lighting condition with the input image, which is then projected onto that subspace so that both background and foreground pixels can be classified. To efficiently adapt to changes in lighting conditions, an incremental update of the multiple eigensubspaces using synthetic background appearances is included in our framework. By doing so, our system can eliminate any noise or distortion that would otherwise arise from the presence of foreground objects, while it correctly updates the specific eigensubspace representing the current background appearance. A forgetting factor is also employed to control the contribution of earlier observations and to limit the number of learned subspaces. As extensive experimental results with various benchmark sequences demonstrate, the proposed algorithm outperforms, quantitatively and qualitatively, many other appearance-based approaches as well as methods using a Gaussian Mixture Model (GMM), especially under sudden and drastic changes in illumination. Finally, the proposed algorithm is shown to be linear in the size of the images d, the number of basis vectors in the local PCA m, and the number of images used for adaptation n; that is, the algorithm is O(dmn), and our C++ implementation runs in real time, i.e. at frame rate for normal-resolution (VGA) images.
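A minimal sketch of the classification step described above follows, under generic assumptions: each learned background appearance is a mean image plus a local PCA basis; the incoming frame selects the subspace with the smallest reconstruction error, and poorly reconstructed pixels are labeled foreground. The incremental subspace update and the forgetting factor are not shown; all names are illustrative.

```python
# Generic subspace-selection and per-pixel classification sketch.

import numpy as np

class EigenSubspace:
    def __init__(self, mean, basis):
        self.mean = mean    # (d,)   mean background appearance (flattened image)
        self.basis = basis  # (d, m) orthonormal local PCA basis

    def reconstruct(self, frame):
        coeffs = self.basis.T @ (frame - self.mean)
        return self.mean + self.basis @ coeffs

def classify_frame(frame, subspaces, pixel_threshold=0.1):
    """Pick the closest subspace, then threshold the per-pixel residual."""
    reconstructions = [s.reconstruct(frame) for s in subspaces]
    errors = [np.linalg.norm(frame - r) for r in reconstructions]
    best = int(np.argmin(errors))
    foreground_mask = np.abs(frame - reconstructions[best]) > pixel_threshold
    return best, foreground_mask

# Usage (assumed inputs): frame is a flattened grayscale image in [0, 1];
# subspaces is a list of EigenSubspace objects learned per background cluster.
# cluster_id, mask = classify_frame(frame, subspaces)
```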
workshop on applications of computer vision | 2005
Pradit Mittrapiyanuruk; Guilherme N. DeSouza; Avinash C. Kak
In this paper we present a new method for tracking rigid objects using a modified version of the Active Appearance Model. Unlike most of the other appearance-based methods in the literature, our method allows for both partial and self occlusion of the objects. We use ground-truth to demonstrate the accuracy of our tracking algorithm. We show that our method can be applied to track moving objects over wide variations in position and orientation of the object - one meter in translation and 140 degrees in rotation - with an accuracy of a few millimeters.
Sensors | 2017
Ali Shafiekhani; Suhas Kadam; Felix B. Fritschi; Guilherme N. DeSouza
In this paper, a new robotic architecture for plant phenotyping is introduced. The architecture consists of two robotic platforms: an autonomous ground vehicle (Vinobot) and a mobile observation tower (Vinoculer). The ground vehicle collects data from individual plants, while the observation tower oversees an entire field, identifying specific plants for further inspection by the Vinobot. The advantage of this architecture is threefold: first, it allows the system to inspect large areas of a field at any time, during the day and night, while identifying specific regions affected by biotic and/or abiotic stresses; second, it provides high-throughput plant phenotyping in the field by either comprehensive or selective acquisition of accurate and detailed data from groups or individual plants; and third, it eliminates the need for expensive and cumbersome aerial vehicles or similarly expensive and confined field platforms. As demonstrated by the preliminary results from our algorithms for data collection and 3D image processing, as well as by the data analysis and comparison with phenotype data collected by hand, the proposed architecture is cost-effective, reliable, versatile, and extendable.