Publication


Featured research published by Niels da Vitoria Lobo.


Computer Vision and Image Understanding | 1999

Age Classification from Facial Images

Young Ho Kwon; Niels da Vitoria Lobo

This paper presents a theory and practical computations for visual age classification from facial images. Currently, the theory has only been implemented to classify input images into one of three age-groups: babies, young adults, and senior adults. The computations are based on cranio-facial development theory and skin wrinkle analysis. In the implementation, primary features of the face are found first, followed by secondary feature analysis. The primary features are the eyes, nose, mouth, chin, virtual-top of the head and the sides of the face. From these features, ratios that distinguish babies from young adults and seniors are computed. In secondary feature analysis, a wrinkle geography map is used to guide the detection and measurement of wrinkles. The wrinkle index computed is sufficient to distinguish seniors from young adults and babies. A combination rule for the ratios and the wrinkle index thus permits categorization of a face into one of three classes. Results using real images are presented. This is the first work involving age classification, and the first work that successfully extracts and uses natural wrinkles. It is also a successful demonstration that facial features are sufficient for a classification task, a finding that is important to the debate about which representations are appropriate for facial analysis.
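
As a rough illustration of the combination rule sketched in the abstract, the Python fragment below computes one cranio-facial ratio and a crude wrinkle index, then combines them into a three-way decision. The feature coordinates, thresholds, and the direction of each comparison are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def eye_to_chin_ratio(eyes_y, nose_y, chin_y):
    """One cranio-facial ratio: eye-to-nose distance over eye-to-chin distance.
    Such ratios change between infancy and adulthood."""
    return (nose_y - eyes_y) / (chin_y - eyes_y)

def wrinkle_index(face_gray, wrinkle_regions):
    """Crude wrinkle index: mean gradient magnitude inside the regions a
    wrinkle-geography map would flag (forehead, eye corners, cheeks)."""
    gy, gx = np.gradient(face_gray.astype(float))
    mag = np.hypot(gx, gy)
    return float(np.mean([mag[r].mean() for r in wrinkle_regions]))

def classify_age(ratio, wrinkles, ratio_thresh=0.33, wrinkle_thresh=12.0):
    """Illustrative combination rule: the ratio separates babies from the rest,
    the wrinkle index separates seniors from young adults. Thresholds and the
    comparison directions are placeholders, not the paper's values."""
    if ratio < ratio_thresh:
        return "baby"
    return "senior adult" if wrinkles > wrinkle_thresh else "young adult"
```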


Pattern Recognition Letters | 2002

Flame recognition in video

Walter Phillips; Mubarak Shah; Niels da Vitoria Lobo

This paper presents an automatic system for fire detection in video sequences. Several previous methods exist for detecting fire; however, all except two use spectroscopy or particle sensors. The two that use visual information suffer from the inability to cope with a moving camera or a moving scene. One of them cannot work on general data, such as movie sequences. The other is too simplistic and unrestrictive in determining what is considered fire, so it can be used reliably only in aircraft dry bays. We propose a system that uses color and motion information computed from video sequences to locate fire. This is done by first using an approach based on a Gaussian-smoothed color histogram to detect fire-colored pixels, and then using the temporal variation of pixels to determine which of these pixels are actually fire pixels. Next, some spurious fire pixels are automatically removed using an erode operation, and some missing fire pixels are recovered using a region-growing method. Unlike the two previous vision-based methods for fire detection, our method is applicable to more areas because of its insensitivity to camera motion. Two specific applications not possible with previous algorithms are the recognition of fire in the presence of global camera motion or scene motion and the recognition of fire in movies for possible use in an automatic rating system. We show that our method works in a variety of conditions and that it can automatically determine when it has insufficient information.
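
The pipeline described above (fire-colored pixels from a Gaussian-smoothed color histogram, a temporal-variation test, then an erode step) can be sketched as follows. The histogram resolution, the thresholds, and the use of SciPy/OpenCV are assumptions made for illustration; the region-growing recovery step is omitted.

```python
import numpy as np
import cv2
from scipy.ndimage import gaussian_filter

def build_color_model(fire_pixels_rgb, bins=32, sigma=1.0):
    """Gaussian-smoothed histogram of training fire-pixel colors (N x 3 RGB)."""
    hist, _ = np.histogramdd(fire_pixels_rgb, bins=(bins,) * 3,
                             range=[(0, 256)] * 3)
    hist = gaussian_filter(hist, sigma)
    return hist / hist.sum()

def fire_mask(frames_rgb, color_model, bins=32,
              color_thresh=1e-4, var_thresh=20.0):
    """Fire-colored pixels whose intensity also varies strongly over time."""
    cur = frames_rgb[-1]
    idx = (cur // (256 // bins)).astype(int)
    colored = color_model[idx[..., 0], idx[..., 1], idx[..., 2]] > color_thresh
    gray = np.stack([f.mean(axis=2) for f in frames_rgb])
    flicker = gray.std(axis=0) > var_thresh             # temporal variation test
    mask = (colored & flicker).astype(np.uint8)
    return cv2.erode(mask, np.ones((3, 3), np.uint8))   # drop spurious pixels
```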


Computer Vision and Image Understanding | 1999

Features and Classification Methods to Locate Deciduous Trees in Images

Niels Haering; Niels da Vitoria Lobo

We compare features and classification methods to locate deciduous trees in images. From this comparison we conclude that a back-propagation neural network achieves better classification results than the other classifiers we tested. Our analysis of the relevance of 51 features from seven feature extraction methods, based on the gray-level co-occurrence matrix, Gabor filters, fractal dimension, steerable filters, the Fourier transform, entropy, and color, shows that each feature contributes important information. We show how we obtain a 13-feature subset that significantly reduces the feature extraction time while retaining most of the complete feature set's power and robustness. The best subsets of features were found to be combinations of features from each of the extraction methods. Methods for classification and feature relevance determination that are based on the covariance or correlation matrix of the features (such as eigenanalyses or linear or quadratic classifiers) generally cannot be used, since even small sets of features are usually highly linearly redundant, rendering their covariance or correlation matrices nearly singular and thus not reliably invertible. We argue that representing deciduous trees, and many other objects, by rich image descriptions can significantly aid their classification. We make no assumptions about shape, location, viewpoint, viewing distance, lighting conditions, or camera parameters, and we only expect scanning methods and compression schemes to retain a “reasonable” image quality.
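
Two points in the abstract lend themselves to a small sketch: checking whether the feature covariance is well-conditioned enough for covariance-based methods, and otherwise falling back to a back-propagation network. The `features` and `labels` arrays and the scikit-learn MLP are assumptions standing in for the paper's 51-feature extraction and its network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def covariance_is_usable(features, max_condition=1e8):
    """Highly correlated features give a near-singular covariance matrix,
    so covariance-based classifiers become unreliable."""
    cov = np.cov(features, rowvar=False)
    return np.linalg.cond(cov) < max_condition

def train_tree_classifier(features, labels):
    """A small back-propagation network, as favoured by the comparison above."""
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    clf.fit(features, labels)
    return clf
```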


Archive | 2001

Visual Event Detection

Niels Haering; Niels da Vitoria Lobo

1. Introduction. 2. A Framework for the Design of Visual Event Detectors. 3. Features and Classification Methods. 4. Results. 5. Summary and Discussion of Alternatives. A. Appendix. References. Index.


Pattern Recognition | 1999

Learning affine transformations

George Bebis; Michael Georgiopoulos; Niels da Vitoria Lobo; Mubarak Shah

Under the assumption of weak perspective, two views of the same planar object are related through an affine transformation. In this paper, we consider the problem of training a simple neural network to learn to predict the parameters of the affine transformation. Although the proposed scheme has similarities with other neural network schemes, its practical advantages are more profound. First of all, the views used to train the neural network are not obtained by taking pictures of the object from different viewpoints. Instead, the training views are obtained by sampling the space of affine-transformed views of the object. This space is constructed using a single view of the object. Fundamental to this procedure is a methodology, based on singular-value decomposition (SVD) and interval arithmetic (IA), for estimating the ranges of values that the parameters of the affine transformation can assume. Second, the accuracy of the proposed scheme is very close to that of a traditional least-squares approach, with slightly better space and time requirements. A front-end stage to the neural network, based on principal components analysis (PCA), is shown to increase its noise tolerance dramatically and also to guide us in deciding how many training views are necessary for the network to learn a good, noise-tolerant mapping. The proposed approach has been tested using both artificial and real data.
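
A minimal sketch of the training procedure outlined above: sample the affine parameter space, generate transformed views from a single reference view, and fit a PCA front-end plus a small network that regresses the parameters. The parameter ranges, network size, and scikit-learn components are illustrative; the paper derives the parameter ranges via SVD and interval arithmetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
ref_points = rng.uniform(-1, 1, size=(10, 2))   # a single reference view of the object

def affine_view(points, params):
    """Apply affine parameters (a, b, c, d, tx, ty) to 2-D points."""
    a, b, c, d, tx, ty = params
    return points @ np.array([[a, b], [c, d]]).T + np.array([tx, ty])

# Sample training views from an assumed affine parameter range.
params = rng.uniform([-1.5, -1.5, -1.5, -1.5, -0.5, -0.5],
                     [ 1.5,  1.5,  1.5,  1.5,  0.5,  0.5], size=(2000, 6))
views = np.array([affine_view(ref_points, p).ravel() for p in params])

# PCA front-end followed by a small back-propagation network.
model = make_pipeline(PCA(n_components=8),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                                   random_state=0))
model.fit(views, params)
```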


Journal of Mammalogy | 2010

Computer-aided photo-identification system with an application to polar bears based on whisker spot patterns

Carlos J. R. Anderson; Niels da Vitoria Lobo; James D. Roth; Jane M. Waterman

Ecologists often rely on unique natural markings to identify individual free-ranging animals without disturbing them. We developed a computer-aided photo-identification system for identifying polar bears (Ursus maritimus) based on whisker spot pattern recognition. We automated our system so that the selection of 3 reference points on the input image is the only manual step required during image preprocessing. Our pattern-matching algorithm is unique in that the variability within spot patterns is considered fully, rather than representing patterns as points and applying a point-pattern matching algorithm. We also measured the reliability of our method as probabilities of true positives and false positives using photographs of various qualities taken at different angles. When we excluded photographs of poor quality and angle, the probability of true positives was >80% at a false positive probability of 10%. A new photograph could be preprocessed in <1 min and tested against a reference library of 100 individuals in <10 min. Our computer-aided identification system could be extended for use in other species with variable spot patterns, which could be useful in efforts to estimate various population dynamics parameters essential for the study and conservation of wildlife, particularly threatened and endangered species.


Intelligent Robots and Computer Vision XII: Algorithms and Techniques | 1993

Locating facial features for age classification

Young Ho Kwon; Niels da Vitoria Lobo

In this paper, we outline computations for visual age classification from facial images. At present, input images can only be classified into one of three age groups: babies, adults, and senior adults. The computations are based on cranio-facial development theory and wrinkle analysis. In the implementation, the primary features of the face are found first, followed by secondary feature analysis. Preliminary results with real data are presented.


International Conference on Robotics and Automation | 2011

Horizon constraint for unambiguous UAV navigation in planar scenes

Omar Oreifej; Niels da Vitoria Lobo; Mubarak Shah

When a UAV flies at high altitudes, the observed surface of the earth becomes effectively planar, and structure and motion recovery of the earth's moving plane becomes ambiguous. This planar degeneracy has been pointed out often in the literature; consequently, current navigation methods either fail completely or give many confusing solutions in this scenario. Interestingly, the horizon line in planar scenes is straight and distinctive, and hence easily detected. Therefore, we show in this paper that the horizon line provides two degrees of freedom that control the relative orientation between the camera coordinate system and the local surface of the earth. The recovered degrees of freedom help linearize and disambiguate the planar flow, and therefore we obtain a unique solution for the UAV motion estimation. Unlike previous work, which used the horizon to provide the roll angle and the pitch percentage and employed them only for flight stability, we extract the exact angles and use them directly to estimate the ego-motion. Additionally, we propose a novel horizon detector, based on maximum a posteriori estimation over both motion and appearance features, which outperforms other detectors in planar scenarios. We thoroughly evaluated the proposed method against information from GPS and gyroscopes, and obtained promising results.
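
The two degrees of freedom mentioned above (camera roll and pitch relative to the local ground plane) can be illustrated with simple pinhole geometry, assuming a known focal length and principal point and a distant, flat horizon. The sign conventions and the helper below are assumptions for illustration, not the paper's full ego-motion pipeline.

```python
import numpy as np

def roll_pitch_from_horizon(p1, p2, f, cx, cy):
    """p1, p2: two image points (x, y) on the detected horizon line;
    f: focal length in pixels; (cx, cy): principal point."""
    (x1, y1), (x2, y2) = p1, p2
    roll = np.arctan2(y2 - y1, x2 - x1)            # tilt of the horizon line

    # Signed perpendicular distance from the principal point to the line.
    n = np.array([y1 - y2, x2 - x1], dtype=float)  # line normal
    n /= np.linalg.norm(n)
    d = n @ np.array([cx - x1, cy - y1])
    pitch = np.arctan2(d, f)                       # horizon offset maps to pitch
    return roll, pitch
```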


Computer Vision and Image Understanding | 2006

Integrating multiple levels of zoom to enable activity analysis

Paul Smith; Mubarak Shah; Niels da Vitoria Lobo

In this paper, we present a multi-zoom framework for activity analysis in situations requiring combinations of both detailed and coarse views of the scene. The epipolar geometry is employed in several novel ways in the context of activity analysis. Detecting and tracking objects in time and consistently labeling these objects across zoom levels are two necessary tasks for such activity analysis. First, a multiview approach to automatically detect and track heads and hands in a scene is described. Then, by making use of epipolar, spatial, trajectory, and appearance constraints, objects are labeled consistently across cameras (zooms). Finally, we demonstrate how multiple levels of zoom can cooperate and complement each other to help solve problems related to activity analysis.
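
One of the cues listed above, the epipolar constraint, can be sketched as follows for labeling detections consistently across a coarse and a zoomed view. The fundamental matrix `F` is assumed known from calibration, and the greedy matching and pixel threshold are illustrative; the appearance and trajectory cues used in the paper are omitted.

```python
import numpy as np

def epipolar_distance(F, x_coarse, x_zoom):
    """Distance of x_zoom from the epipolar line of x_coarse under F
    (F maps points in the coarse view to lines in the zoomed view)."""
    x1 = np.array([*x_coarse, 1.0])
    x2 = np.array([*x_zoom, 1.0])
    line = F @ x1
    return abs(x2 @ line) / np.hypot(line[0], line[1])

def match_across_zooms(F, coarse_pts, zoom_pts, max_dist=5.0):
    """Greedy labeling: each zoomed detection takes the label of the nearest
    coarse detection in epipolar-distance terms, if within max_dist pixels."""
    labels = {}
    for j, xz in enumerate(zoom_pts):
        dists = [epipolar_distance(F, xc, xz) for xc in coarse_pts]
        i = int(np.argmin(dists))
        labels[j] = i if dists[i] < max_dist else None
    return labels
```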


IEEE Virtual Reality Conference | 2010

Markerless tracking using Polar Correlation of camera optical flow

Prince Gupta; Niels da Vitoria Lobo; Joseph J. LaViola

We present a novel, real-time, markerless vision-based tracking system, employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of common components of optical flow leading to an efficient tracking algorithm. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80% over 185 frames, and it is able to track large range motions even in outdoor settings.
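
Below is a sketch of the sparse optical-flow front end described above, using OpenCV's pyramidal Lucas-Kanade tracker, plus the sum and difference of the mean flows from an opposing camera pair. Which combination cancels the common flow component depends on the rig geometry and is the contribution of the paper; this fragment only illustrates the mechanics.

```python
import numpy as np
import cv2

def mean_sparse_flow(prev_gray, cur_gray, max_corners=200):
    """Mean displacement of sparse features tracked with pyramidal LK."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return np.zeros(2)
    return (nxt[good] - pts[good]).reshape(-1, 2).mean(axis=0)

def opposing_pair_flow(prev_a, cur_a, prev_b, cur_b):
    """Candidate combinations of the mean flows of two opposing cameras."""
    fa = mean_sparse_flow(prev_a, cur_a)
    fb = mean_sparse_flow(prev_b, cur_b)
    return fa + fb, fa - fb
```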

Collaboration


Dive into Niels da Vitoria Lobo's collaborations.

Top Co-Authors

Mubarak Shah, University of Central Florida
Michael Georgiopoulos, University of Central Florida
Niels Haering, University of Central Florida
Joseph J. LaViola, University of Central Florida
Prince Gupta, University of Central Florida
Young Ho Kwon, University of Central Florida
Ankur Datta, University of Central Florida
Michael N. Wallick, University of Wisconsin-Madison