Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where George Bebis is active.

Publication


Featured research published by George Bebis.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

On-road vehicle detection: a review

Zehang Sun; George Bebis; Ronald Miller

Developing on-board automotive driver assistance systems that aim to alert drivers about the driving environment and possible collisions with other vehicles has attracted a lot of attention lately. In these systems, robust and reliable vehicle detection is a critical step. This paper presents a review of recent vision-based on-road vehicle detection systems. Our focus is on systems where the camera is mounted on the vehicle rather than being fixed, as in traffic/driveway monitoring systems. First, we discuss the problem of on-road vehicle detection using optical sensors, followed by a brief review of intelligent vehicle research worldwide. Then, we discuss active and passive sensors to set the stage for vision-based vehicle detection. Methods that aim to quickly hypothesize the locations of vehicles in an image, as well as to verify the hypothesized locations, are reviewed next. Integrating detection with tracking is also reviewed to illustrate the benefits of exploiting temporal continuity for vehicle detection. Finally, we present a critical overview of the methods discussed, assess their potential for future deployment, and present directions for future research.


Computer Vision and Image Understanding | 2007

Vision-based hand pose estimation: A review

Ali Erol; George Bebis; Mircea Nicolescu; Richard Boyle; Xander Twombly

Direct use of the hand as an input device is an attractive method for providing natural human-computer interaction (HCI). Currently, the only technology that satisfies the advanced requirements of hand-based input for HCI is glove-based sensing. This technology, however, has several drawbacks: it hinders the ease and naturalness with which the user can interact with the computer-controlled environment, and it requires long calibration and setup procedures. Computer vision (CV) has the potential to provide more natural, non-contact solutions. As a result, there have been considerable research efforts to use the hand as an input device for HCI. In particular, two research directions have emerged. One is based on gesture classification and aims to extract high-level abstract information corresponding to motion patterns or postures of the hand. The other is based on pose estimation systems and aims to capture the real 3D motion of the hand. This paper presents a literature review of the latter direction, which is a very challenging problem in the context of HCI.


Pattern Recognition | 2004

Object detection using feature subset selection

Zehang Sun; George Bebis; Ronald Miller

Past work on object detection has emphasized the issues of feature extraction and classification; however, relatively little attention has been given to the critical issue of feature selection. The main trend in feature extraction has been representing the data in a lower-dimensional space, for example, using principal component analysis (PCA). Without an effective scheme to select an appropriate set of features in this space, however, these methods rely mostly on powerful classification algorithms to deal with redundant and irrelevant features. In this paper, we argue that feature selection is an important problem in object detection and demonstrate that genetic algorithms (GAs) provide a simple, general, and powerful framework for selecting good subsets of features, leading to improved detection rates. As a case study, we have considered PCA for feature extraction and support vector machines (SVMs) for classification. The goal is to search the PCA space using GAs to select a subset of eigenvectors that encode important information about the target concept of interest. This is in contrast to traditional methods that select some percentage of the top eigenvectors to represent the target concept, independently of the classification task. We have tested the proposed framework on two challenging applications: vehicle detection and face detection. Our experimental results illustrate significant performance improvements in both cases.
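
The search described above lends itself to a compact illustration. The sketch below uses a bit-string chromosome over PCA eigenvectors and cross-validated SVM accuracy as the GA fitness; the toy data, population size, selection scheme, and mutation rate are illustrative assumptions, not the settings used in the paper.

```python
# Hypothetical sketch: GA-style selection of PCA eigenvectors with an SVM fitness.
# The toy data and all GA parameters below are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 400))            # stand-in for flattened image windows
y = rng.integers(0, 2, size=200)           # stand-in labels (e.g., vehicle vs. non-vehicle)

n_components = 30
Z = PCA(n_components=n_components).fit_transform(X)   # eigen-feature space

def fitness(mask):
    """Cross-validated SVM accuracy using only the selected eigenvectors."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), Z[:, mask], y, cv=3).mean()

# Simple generational GA: bit-string chromosomes, one-point crossover, bit-flip mutation.
pop = rng.integers(0, 2, size=(20, n_components)).astype(bool)
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]       # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, n_components)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        child ^= rng.random(n_components) < 0.05       # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected eigenvectors:", np.flatnonzero(best))
```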


IEEE Transactions on Intelligent Transportation Systems | 2005

On-road vehicle detection using evolutionary Gabor filter optimization

Zehang Sun; George Bebis; Ronald Miller

Robust and reliable vehicle detection from images acquired by a moving vehicle is an important problem with numerous applications, including driver assistance systems and self-guided vehicles. Our focus in this paper is on improving the performance of on-road vehicle detection by employing a set of Gabor filters specifically optimized for the task of vehicle detection. This is essentially a kind of feature selection, a critical issue when designing any pattern classification system. Specifically, we propose a systematic and general evolutionary Gabor filter optimization (EGFO) approach for optimizing the parameters of a set of Gabor filters in the context of vehicle detection. The objective is to build a set of filters that respond more strongly to features present in vehicles than to non-vehicles, thereby improving class discrimination. The EGFO approach unifies filter design with filter selection by integrating genetic algorithms (GAs) with an incremental clustering approach. Filter design is performed using GAs, a global optimization approach that encodes the Gabor filter parameters in a chromosome and uses genetic operators to optimize them. Filter selection is performed by grouping filters having similar characteristics in the parameter space using an incremental clustering approach. This step eliminates redundant filters, yielding a more compact optimized set. The resulting filters have been evaluated using an application-oriented fitness criterion based on support vector machines. We have tested the proposed framework on real data collected in Dearborn, MI, in the summer and fall of 2001, using Ford's proprietary low-light camera.
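
Two of the ingredients named above, a chromosome that encodes Gabor filter parameters and an incremental clustering pass that prunes redundant filters, can be sketched briefly. The three-gene encoding, the parameter ranges, and the clustering radius below are assumptions for illustration, not the EGFO settings from the paper; the SVM-based fitness evaluation is omitted.

```python
# Hypothetical sketch of two EGFO ingredients: a real-valued chromosome decoded
# into Gabor parameters, and incremental clustering to drop near-duplicate filters.
import numpy as np
from skimage.filters import gabor_kernel

def decode(chromosome):
    """Map a unit-interval gene vector to (frequency, orientation, bandwidth)."""
    f, t, b = chromosome
    return 0.05 + 0.4 * f, np.pi * t, 0.5 + 1.5 * b    # assumed parameter ranges

def build_filters(population):
    return [gabor_kernel(frequency=fr, theta=th, bandwidth=bw)
            for fr, th, bw in map(decode, population)]

def incremental_cluster(population, radius=0.15):
    """Keep one representative per group of similar parameter vectors."""
    centers = []
    for p in population:
        if all(np.linalg.norm(p - c) > radius for c in centers):
            centers.append(p)
    return np.array(centers)

rng = np.random.default_rng(0)
pop = rng.random((30, 3))                  # 30 candidate filters, 3 genes each
compact = incremental_cluster(pop)         # redundant filters removed
filters = build_filters(compact)
print(len(pop), "candidates ->", len(filters), "filters")
```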


IEEE Transactions on Image Processing | 2006

Monocular precrash vehicle detection: features and classifiers

Zehang Sun; George Bebis; Ronald Miller

Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating vehicle detection as a two-class classification problem, we have investigated several feature extraction methods, such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that acquires grey-scale images using Ford's proprietary low-light camera and achieves an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During hypothesis generation, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highways, complex urban streets, and varying weather conditions), illustrating good performance.


Workshop on Applications of Computer Vision | 2002

Genetic feature subset selection for gender classification: a comparison study

Zehang Sun; George Bebis; Xiaojing Yuan

We consider the problem of gender classification from frontal facial images using genetic feature subset selection. We argue that feature selection is an important issue in gender classification and demonstrate that Genetic Algorithms (GAs) can select good subsets of features (i.e., features that encode mostly gender information), reducing the classification error. First, Principal Component Analysis (PCA) is used to represent each image as a feature vector (i.e., eigen-features) in a low-dimensional space. GAs are then employed to select a subset of features from the low-dimensional representation by disregarding eigenvectors that do not seem to encode important gender information. Four different classifiers were compared in this study using genetic feature subset selection: a Bayes classifier, a Neural Network (NN) classifier, a Support Vector Machine (SVM) classifier, and a classifier based on Linear Discriminant Analysis (LDA). Our experimental results show a significant error rate reduction in all cases. The best performance was obtained with the SVM classifier: using only 8.4% of the features in the complete set, it achieved an error rate of 4.7%, down from an average error rate of 8.9% with manually selected features.
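
Once a feature subset has been selected, comparing the four classifier families mentioned above is straightforward with off-the-shelf tools. The sketch below uses scikit-learn stand-ins (GaussianNB, MLPClassifier, SVC, LinearDiscriminantAnalysis) on toy data; the estimators, their settings, and the data are assumptions, not the classifiers trained in the study.

```python
# Illustrative comparison of the four classifier families on a selected subset.
# The toy eigen-features and labels are random stand-ins, not real face data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))             # GA-selected eigen-features (stand-in)
y = rng.integers(0, 2, size=300)           # toy gender labels

classifiers = {
    "Bayes": GaussianNB(),
    "NN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500),
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: cross-validated error rate ~ {1 - acc:.3f}")
```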


International Conference on Intelligent Transportation Systems | 2004

On-road vehicle detection using optical sensors: a review

Zehang Sun; George Bebis; Ronald Miller

As one of the most promising applications of computer vision, vision-based vehicle detection for driver assistance has received considerable attention over the last 15 years. There are at least three reasons for the blooming research in this field: first, the startling losses, both human and financial, caused by vehicle accidents; second, the availability of feasible technologies accumulated over the last 30 years of computer vision research; and third, the exponential growth in processor speed, which has paved the way for running computation-intensive video-processing algorithms in real time even on a low-end PC. This paper provides a critical survey of recent vision-based on-road vehicle detection systems that have appeared in the literature (i.e., systems where the cameras are mounted on the vehicle rather than being static, as in traffic/driveway monitoring systems).


International Conference on Digital Signal Processing | 2002

On-road vehicle detection using Gabor filters and support vector machines

Zehang Sun; George Bebis; Ronald Miller

On-road vehicle detection is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this paper is on feature extraction and classification for rear-view vehicle detection. Specifically, we propose using Gabor filters for vehicle feature extraction and support vector machines (SVMs) for vehicle detection. Gabor filters provide a mechanism for obtaining some degree of invariance to intensity due to global illumination, selectivity in scale, and selectivity in orientation. Basically, they are orientation- and scale-tunable edge and line detectors. Vehicles contain strong edges and lines at different orientations and scales; thus, the statistics of these features (e.g., mean, standard deviation, and skewness) can be very powerful for vehicle detection. To provide robustness, these statistics are not extracted from the whole image but rather are collected from several subimages obtained by subdividing the original image into subwindows. These features are then used to train an SVM classifier. Extensive experimentation and comparisons using real data, different features (e.g., based on principal component analysis (PCA)), and different classifiers (e.g., neural networks (NNs)) demonstrate the superiority of the proposed approach, which has achieved an average accuracy of 94.81% on completely novel test images.
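
The feature extraction described above, filter-bank responses summarized per subwindow, can be sketched in a few lines. The filter frequencies, the number of orientations, and the 3x3 grid below are assumptions for the example, not the configuration reported in the paper.

```python
# Illustrative sketch: Gabor responses summarized by mean, standard deviation,
# and skewness over a grid of subwindows, then concatenated into one vector.
import numpy as np
from scipy.stats import skew
from skimage.filters import gabor

def gabor_subwindow_features(image, frequencies=(0.1, 0.2), n_thetas=4, grid=3):
    feats = []
    for f in frequencies:
        for k in range(n_thetas):
            real, _ = gabor(image, frequency=f, theta=k * np.pi / n_thetas)
            # Split the response into grid x grid subwindows and summarize each.
            for rows in np.array_split(real, grid, axis=0):
                for block in np.array_split(rows, grid, axis=1):
                    v = block.ravel()
                    feats.extend([v.mean(), v.std(), skew(v)])
    return np.asarray(feats)

patch = np.random.default_rng(0).random((64, 64))   # stand-in for a grey-scale patch
x = gabor_subwindow_features(patch)
print(x.shape)   # 2 frequencies * 4 orientations * 9 subwindows * 3 stats = 216
```

The resulting vector would then be fed to an SVM classifier, in the same spirit as the feature-selection sketch shown earlier.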


Image and Vision Computing | 2006

Face recognition by fusing thermal infrared and visible imagery

George Bebis; Aglika Gyaourova; Saurabh Singh; Ioannis T. Pavlidis

Thermal infrared (IR) imagery offers a promising alternative to visible imagery for face recognition due to its relative insensitivity to variations in face appearance caused by illumination changes. Despite its advantages, however, thermal IR has several limitations, including that glass is opaque to thermal IR. The focus of this study is on the sensitivity of thermal IR imagery to facial occlusions caused by eyeglasses. Specifically, our experimental results illustrate that recognition performance in the IR spectrum degrades seriously when eyeglasses are present in the probe image but not in the gallery image, and vice versa. To address this serious limitation of IR, we propose fusing IR with visible imagery. Since IR and visible imagery capture intrinsically different characteristics of the observed faces, intuitively, a better face description could be found by utilizing the complementary information present in the two spectra. Two different fusion schemes have been investigated in this study. The first is pixel-based and operates in the wavelet domain, while the second is feature-based and operates in the eigenspace domain. In both cases, we employ a simple and general framework based on Genetic Algorithms (GAs) to find an optimum fusion strategy. We have evaluated our approaches through extensive experiments using the Equinox face database and the eigenface recognition methodology. Our results illustrate significant performance improvements in recognition, suggesting that IR and visible fusion is a viable approach that deserves further consideration.
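
A minimal sketch of the first (pixel-based, wavelet-domain) scheme is shown below. The fixed blending weight stands in for the GA-optimized fusion strategy, which is not reproduced here; the wavelet, decomposition level, and toy images are assumptions.

```python
# Hypothetical sketch of pixel-level fusion in the wavelet domain: blend the
# wavelet coefficients of co-registered visible and thermal IR face images.
import numpy as np
import pywt

def fuse_wavelet(visible, infrared, wavelet="haar", level=2, w=0.5):
    cv = pywt.wavedec2(visible, wavelet, level=level)
    ci = pywt.wavedec2(infrared, wavelet, level=level)
    fused = [w * cv[0] + (1 - w) * ci[0]]              # approximation band
    for dv, di in zip(cv[1:], ci[1:]):                 # detail bands per level
        fused.append(tuple(w * a + (1 - w) * b for a, b in zip(dv, di)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
visible = rng.random((128, 128))                       # stand-in face images
infrared = rng.random((128, 128))
print(fuse_wavelet(visible, infrared).shape)           # (128, 128)
```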


Workshop on Applications of Computer Vision | 2002

A real-time precrash vehicle detection system

Zehang Sun; Ronald Miller; George Bebis; David M. DiMeo

This paper presents an in-vehicle real-time monocular precrash vehicle detection system. The system acquires grey-level images through a forward-facing low-light camera and achieves an average detection rate of 10 Hz. The vehicle detection algorithm consists of two main steps: multi-scale driven hypothesis generation and appearance-based hypothesis verification. In the multi-scale hypothesis generation step, possible image locations where vehicles might be present are hypothesized. This step uses multi-scale techniques not only to speed up detection but also to improve system robustness by making performance less sensitive to the choice of certain parameters. Appearance-based hypothesis verification verifies these hypotheses using Haar wavelet decomposition for feature extraction and Support Vector Machines (SVMs) for classification. The monocular system was tested under different traffic scenarios (e.g., simply structured highways, complex urban streets, varying weather conditions), illustrating good performance.
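
The two-step structure described above can be outlined as a small skeleton. In the sketch below, a cheap edge-density pass stands in for the paper's multi-scale hypothesis generation, and a Haar-wavelet feature vector with a linear SVM stands in for the verification stage; the window size, stride, threshold, and toy training data are all assumptions.

```python
# Schematic sketch of hypothesis generation + appearance-based verification.
# The edge-density heuristic and toy data are stand-ins for the paper's method.
import numpy as np
import pywt
from sklearn.svm import SVC

def haar_features(window):
    """Flatten a 2-level Haar wavelet decomposition into one feature vector."""
    coeffs = pywt.wavedec2(window, "haar", level=2)
    return np.concatenate([coeffs[0].ravel()] +
                          [band.ravel() for detail in coeffs[1:] for band in detail])

def generate_hypotheses(frame, size=32, stride=16, edge_thresh=0.15):
    """Cheap first pass: keep windows containing enough edge structure."""
    gy, gx = np.gradient(frame.astype(float))
    edges = np.abs(gx) + np.abs(gy)
    for r in range(0, frame.shape[0] - size + 1, stride):
        for c in range(0, frame.shape[1] - size + 1, stride):
            if edges[r:r + size, c:c + size].mean() > edge_thresh:
                yield r, c, frame[r:r + size, c:c + size]

def verify(hypotheses, svm):
    """Second pass: keep only windows the classifier labels as 'vehicle' (1)."""
    return [(r, c) for r, c, win in hypotheses
            if svm.predict([haar_features(win)])[0] == 1]

rng = np.random.default_rng(0)
train_windows = rng.random((40, 32, 32))               # toy training patches
labels = rng.integers(0, 2, size=40)                   # toy vehicle / non-vehicle labels
svm = SVC(kernel="linear").fit([haar_features(w) for w in train_windows], labels)

frame = rng.random((240, 320))                         # stand-in video frame
print(len(verify(generate_hypotheses(frame), svm)), "verified detections")
```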

Collaboration


Dive into George Bebis's collaborations.

Top Co-Authors

Bahram Parvin
Lawrence Berkeley National Laboratory

Michael Georgiopoulos
University of Central Florida

Ali Erol
University of Nevada

Darko Koracin
Desert Research Institute