Publications


Featured research published by Besma R. Abidi.


Computer Vision and Image Understanding | 2005

Recent advances in visual and infrared face recognition: a review

Seong G. Kong; Jingu Heo; Besma R. Abidi; Joon Ki Paik; Mongi A. Abidi

Face recognition is a rapidly growing research area due to increasing demands for security in commercial and law enforcement applications. This paper provides an up-to-date review of research efforts in face recognition techniques based on two-dimensional (2D) images in the visual and infrared (IR) spectra. Face recognition systems based on visual images have reached a significant level of maturity with some practical success. However, the performance of visual face recognition may degrade under poor illumination conditions or for subjects of various skin colors. IR imagery represents a viable alternative to visible imaging in the search for a robust and practical identification system. While visual face recognition systems perform relatively reliably under controlled illumination conditions, thermal IR face recognition systems are advantageous when there is no control over illumination or when disguised faces must be detected. Face recognition using 3D images is another active research area, offering recognition that is robust to changes in pose. Recent research has also demonstrated that fusing different imaging modalities and spectral components can improve the overall performance of face recognition.


IEEE Transactions on Image Processing | 2006

Gray-level grouping (GLG): an automatic method for optimized image contrast enhancement - Part I: the basic method

Zhiyu Chen; Besma R. Abidi; David L. Page; Mongi A. Abidi

Contrast enhancement plays an important role in image processing applications. Conventional contrast enhancement techniques often fail to produce satisfactory results for a broad variety of low-contrast images, or cannot be applied to different images automatically because their parameters must be specified manually to produce a satisfactory result for a given image. This paper describes a new automatic method for contrast enhancement. The basic procedure is to first group the histogram components of a low-contrast image into a proper number of bins according to a selected criterion, then redistribute these bins uniformly over the grayscale, and finally ungroup the previously grouped gray levels. Accordingly, this new technique is named gray-level grouping (GLG). GLG not only produces results superior to conventional contrast enhancement techniques, but is also fully automatic in most circumstances and applicable to a broad variety of images. An extension of GLG, selective GLG (SGLG), and its variations are discussed in Part II of this paper. SGLG selectively groups and ungroups histogram components to serve specific application purposes, such as eliminating background noise or enhancing a specific segment of the histogram. The extension of GLG to color images is also discussed in Part II.
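The group-redistribute-ungroup procedure lends itself to a compact illustration. Below is a minimal Python sketch of the idea, assuming an 8-bit grayscale image; the paper's actual grouping criterion and its method for choosing the number of groups are more elaborate, so the merge rule here (folding the smallest group into its smaller neighbor) and the fixed n_groups are illustrative stand-ins, not the published algorithm.

```python
import numpy as np

def glg_sketch(img, n_groups=20):
    """Sketch of gray-level grouping: (1) group histogram components,
    (2) spread the groups uniformly over the grayscale, (3) ungroup by
    mapping the levels inside each group linearly across its share.
    img: uint8 grayscale array. The merge rule below is a stand-in for
    the paper's grouping criterion."""
    hist = np.bincount(img.ravel(), minlength=256)
    # Start with one group per occupied gray level: [low, high, count].
    groups = [[g, g, int(hist[g])] for g in np.nonzero(hist)[0]]
    while len(groups) > n_groups:
        i = min(range(len(groups)), key=lambda k: groups[k][2])
        # Merge the smallest group into its smaller adjacent neighbor.
        if i == 0:
            j = 1
        elif i == len(groups) - 1:
            j = i - 1
        else:
            j = i - 1 if groups[i - 1][2] <= groups[i + 1][2] else i + 1
        lo, hi = min(i, j), max(i, j)
        groups[lo] = [groups[lo][0], groups[hi][1],
                      groups[lo][2] + groups[hi][2]]
        del groups[hi]
    # Redistribute groups uniformly over [0, 255]; ungroup linearly.
    lut = np.zeros(256, dtype=np.uint8)
    width = 255.0 / len(groups)
    for k, (low, high, _) in enumerate(groups):
        span = max(high - low, 1)
        for g in range(low, high + 1):
            lut[g] = np.uint8(round(k * width + (g - low) / span * width))
    return lut[img]
```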


International Journal of Computer Vision | 2007

Multiscale Fusion of Visible and Thermal IR Images for Illumination-Invariant Face Recognition

Seong G. Kong; Jingu Heo; Faysal Boughorbel; Yue Zheng; Besma R. Abidi; Andreas F. Koschan; Mingzhong Yi; Mongi A. Abidi

This paper describes a new software-based registration and fusion of visible and thermal infrared (IR) image data for face recognition in challenging operating environments that involve illumination variations. The combined use of visible and thermal IR imaging sensors offers a viable means of improving the performance of face recognition techniques based on a single imaging modality. Despite successes in indoor access control applications, imaging in the visible spectrum has difficulty recognizing faces under varying illumination conditions. Thermal IR sensors measure the energy radiated from an object, which is less sensitive to illumination changes, and are operable even in darkness. However, thermal images do not provide high-resolution data. Data fusion of visible and thermal images can produce face images that are robust to illumination variations. However, thermal face images with eyeglasses may fail to provide useful information around the eyes, since glass blocks a large portion of thermal energy. In this paper, eyeglass regions are detected using an ellipse fitting method and replaced with eye template patterns to preserve the details useful for face recognition in the fused image. Software registration of images replaces a special-purpose imaging sensor assembly and produces co-registered image pairs at a reasonable cost for large-scale deployment. Face recognition techniques using visible, thermal IR, and data-fused visible-thermal images are compared using commercial face recognition software (FaceIt®) and two visible-thermal face image databases (the NIST/Equinox and the UTK-IRIS databases). The proposed multiscale data-fusion technique improved the recognition accuracy under a wide range of illumination changes. Experimental results showed that the eyeglass replacement increased the number of correct first-match subjects by 85% (NIST/Equinox) and 67% (UTK-IRIS).
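As a rough illustration of what a multiscale fusion step can look like, here is a hedged Python sketch that fuses co-registered single-channel visible and thermal images through a Laplacian pyramid, keeping the larger-magnitude coefficient per band. The per-band rule, the number of levels, and the assumption of float32 inputs with even (e.g., power-of-two) dimensions are simplifications for this sketch; the paper's registration step and eyeglass replacement are not reproduced.

```python
import numpy as np
import cv2  # assumption: OpenCV available; any pyramid code would do

def fuse_multiscale(visible, thermal, levels=4):
    """Fuse co-registered float32 grayscale images of identical,
    even-sized shape via a Laplacian pyramid: keep the coefficient
    with the larger magnitude per band, average the coarse base."""
    def laplacian_pyramid(img):
        pyr = []
        for _ in range(levels):
            down = cv2.pyrDown(img)
            pyr.append(img - cv2.pyrUp(down, dstsize=img.shape[::-1]))
            img = down
        pyr.append(img)  # coarsest residual (base band)
        return pyr

    pv, pt = laplacian_pyramid(visible), laplacian_pyramid(thermal)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pv, pt)]
    fused[-1] = 0.5 * (pv[-1] + pt[-1])  # average the base band
    out = fused[-1]
    for band in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=band.shape[::-1]) + band
    return out
```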


Computer Vision and Pattern Recognition | 2004

Fusion of Visual and Thermal Signatures with Eyeglass Removal for Robust Face Recognition

Jingu Heo; Seong G. Kong; Besma R. Abidi; Mongi A. Abidi

This paper describes the fusion of visual and thermal infrared (IR) images for robust face recognition. Two types of fusion methods are discussed: data fusion and decision fusion. Data fusion produces an illumination-invariant face image by adaptively integrating registered visual and thermal face images. Decision fusion combines the matching scores of individual face recognition modules. In the data fusion process, eyeglasses, which block thermal energy, are detected in thermal images and replaced with an eye template. Three fusion-based face recognition techniques are implemented and tested: data fusion of visual and thermal images (Df), decision fusion with the highest matching score (Fh), and decision fusion with the average matching score (Fa). The commercial face recognition software FaceIt® is used as the individual recognition module. Comparison results show that the fusion-based face recognition techniques outperform individual visual and thermal face recognizers under illumination variations and facial expressions.
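The two decision-fusion rules (Fh and Fa) reduce to simple score arithmetic, sketched below under the assumption that each recognizer returns an array of matching scores over the same gallery; the variable names are illustrative, and the underlying recognizer (FaceIt® in the paper) is treated as a black box.

```python
import numpy as np

def decision_fusion(vis_scores, thr_scores, rule="highest"):
    """Combine per-gallery matching scores from a visual and a thermal
    recognizer. rule="highest" is the Fh scheme (keep the higher score
    per subject); rule="average" is the Fa scheme."""
    v, t = np.asarray(vis_scores, float), np.asarray(thr_scores, float)
    if rule == "highest":
        return np.maximum(v, t)
    if rule == "average":
        return 0.5 * (v + t)
    raise ValueError("rule must be 'highest' or 'average'")

# The claimed identity is the gallery entry with the top fused score:
# best = int(np.argmax(decision_fusion(vis_scores, thr_scores, "average")))
```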


Sixth International Conference on Quality Control by Artificial Vision | 2003

Real-time video tracking using PTZ cameras

Sangkyu Kang; Joonki Paik; Andreas F. Koschan; Besma R. Abidi; Mongi A. Abidi

Automatic tracking is essential for 24-hour intruder detection and, more generally, for surveillance systems. This paper presents adaptive background generation and corresponding moving-region detection techniques for a pan-tilt-zoom (PTZ) camera using a geometric transform-based mosaicing method. A complete system including adaptive background generation, moving-region extraction, and tracking is evaluated using realistic experimental results. More specifically, the experimental results include generated background images, a moving region, and input video with bounding boxes around moving objects. The experiments show that the proposed system can monitor moving targets in wide open areas by panning and tilting automatically in real time.
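For a fixed view, the moving-region detection half of such a system can be approximated with a running-average background model, sketched below; the paper's mosaic-based background generation, which registers frames across pan/tilt positions via a geometric transform, is deliberately omitted from this sketch.

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background update; a fixed-view stand-in for
    the paper's mosaic-based adaptive background generation."""
    return (1.0 - alpha) * bg + alpha * frame.astype(np.float32)

def moving_regions(bg, frame, thresh=25.0):
    """Binary mask of pixels differing from the background model."""
    return np.abs(frame.astype(np.float32) - bg) > thresh

# Typical loop (frames: iterable of grayscale uint8 images):
# bg = next(frames).astype(np.float32)
# for frame in frames:
#     mask = moving_regions(bg, frame)   # candidate moving regions
#     bg = update_background(bg, frame)  # adapt to gradual changes
```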


Journal of Pattern Recognition Research | 2006

An Overview of Color Constancy Algorithms

Vivek Agarwal; Besma R. Abidi; Andreas F. Koschan; Mongi A. Abidi

Color constancy is an important research area with a wide range of applications in the fields of color image processing and computer vision. One such application is video tracking. Color is used as a salient feature, and its robustness to illumination variation is essential to the adaptability of video tracking algorithms. Color constancy can be applied to discount the influence of changing illumination. In this paper, we present a review of established color constancy approaches. We also investigate whether these approaches, in their present form of implementation, can be applied to the video tracking problem. The approaches are grouped into two categories, namely pre-calibrated and data-driven approaches. The paper also discusses the ill-posedness of the color constancy problem, the implementation assumptions of color constancy approaches, and the problem statement for tracking. Publications on video tracking algorithms involving color correction or color compensation techniques are not included in this review.
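As a concrete example of the kind of approach such a review covers, the gray-world algorithm (a classic method usually placed in the statistics-based, data-driven family) fits in a few lines of Python; this is a generic textbook sketch, not an implementation drawn from the paper.

```python
import numpy as np

def gray_world(img):
    """Gray-world color constancy: assume the average scene reflectance
    is achromatic, and rescale each channel so its mean matches the
    global mean. img: float H x W x 3 array with values in [0, 255]."""
    means = img.reshape(-1, 3).mean(axis=0)         # per-channel means
    gains = means.mean() / np.maximum(means, 1e-6)  # per-channel gains
    return np.clip(img * gains, 0.0, 255.0)
```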


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Heterogeneous Fusion of Omnidirectional and PTZ Cameras for Multiple Object Tracking

Chung-Hao Chen; Yi Yao; David L. Page; Besma R. Abidi; Andreas F. Koschan; Mongi A. Abidi

Dual-camera systems have been widely used in surveillance because of their ability to exploit the wide field of view (FOV) of the omnidirectional camera and the wide zoom range of the PTZ camera. Most existing algorithms require a priori knowledge of the omnidirectional camera's projection model to solve the nonlinear spatial correspondences between the two cameras. To overcome this limitation, two methods are proposed: 1) geometry and 2) homography calibration, where polynomials with automated model selection are used to approximate the camera's projection model and spatial mapping, respectively. The proposed methods not only improve the mapping accuracy by reducing its dependence on knowledge of the projection model but also feature reduced computation and improved flexibility in adjusting to varying system configurations. Although the fusion of multiple cameras has attracted increasing attention, most existing algorithms assume comparable FOV and resolution levels among the cameras. The different FOV and resolution levels of the omnidirectional and PTZ cameras raise another critical issue in practical tracking applications. The omnidirectional camera is capable of tracking multiple objects, while the PTZ camera can track only one target at a time if it is to maintain the required resolution. It therefore becomes necessary for the PTZ camera to distribute its observation time among multiple objects and visit them in sequence. This paper addresses a novel scheme in which an optimal visiting sequence for the PTZ camera is obtained so that, in a given period of time, the PTZ camera automatically visits multiple detected motions in a target-hopping manner. The effectiveness of the proposed algorithms is illustrated via extensive experiments using both synthetic and real tracking data and comparisons with two reference systems.
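The "polynomials with automated model selection" idea can be sketched in one dimension: fit polynomials of increasing order from a coordinate in the omnidirectional image to a PTZ angle, and keep the order with the lowest held-out error. This is a hedged simplification; the paper's geometry and homography calibrations are richer, and the sample pairs (r_omni, tilt_ptz) here are hypothetical calibration data.

```python
import numpy as np

def fit_mapping(r_omni, tilt_ptz, max_order=6):
    """Fit the omnidirectional-to-PTZ mapping with a polynomial whose
    order is selected automatically by validation error. r_omni and
    tilt_ptz: 1-D float arrays of corresponding calibration samples."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(r_omni))
    split = len(idx) * 4 // 5                 # 80/20 train/validation
    train, val = idx[:split], idx[split:]
    best = (None, np.inf, 0)                  # (coeffs, error, order)
    for order in range(1, max_order + 1):
        coeffs = np.polyfit(r_omni[train], tilt_ptz[train], order)
        err = np.mean((np.polyval(coeffs, r_omni[val]) - tilt_ptz[val]) ** 2)
        if err < best[1]:
            best = (coeffs, err, order)
    return best[0], best[2]                   # chosen coefficients, order
```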


Real-Time Imaging | 2005

Optical flow-based real-time object tracking using non-prior training active feature model

Jeongho Shin; Sangjin Kim; Sangkyu Kang; Seong-Won Lee; Joon Ki Paik; Besma R. Abidi; Mongi A. Abidi

This paper presents a feature-based object tracking algorithm using optical flow under the non-prior training (NPT) active feature model (AFM) framework. The proposed tracking procedure can be divided into three steps: (i) localization of an object of interest, (ii) prediction and correction of the object's position by utilizing spatio-temporal information, and (iii) restoration of occlusion using NPT-AFM. The proposed algorithm can track both rigid and deformable objects, and is robust against an object's sudden motion because both a feature point and the corresponding motion direction are tracked at the same time. Tracking performance does not degrade even with a complicated background, because feature points inside an object are completely separated from the background. Finally, the AFM enables stable tracking of occluded objects with up to 60% occlusion. NPT-AFM, one of the major contributions of this paper, removes the off-line preprocessing step of generating an a priori training set. The training set used for model fitting can be updated at each frame to make the object features more robust under occlusion. The proposed AFM can track deformable, partially occluded objects using a greatly reduced number of feature points rather than the entire shapes used in existing shape-based methods. The on-line updating of the training set and the reduced number of feature points enable a real-time, robust tracking system. Experiments were performed using several in-house video clips from a static camera, including objects such as a robot moving on a floor and people walking both indoors and outdoors. To show the performance of the proposed tracking algorithm, some experiments were performed in noisy and low-contrast environments. For more objective comparison, the PETS 2001 and PETS 2002 datasets were also used.
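Steps (i) and (ii) of such a pipeline are commonly built on pyramidal Lucas-Kanade optical flow; the OpenCV-based sketch below tracks feature points frame to frame under that assumption. The NPT-AFM model fitting and occlusion restoration of step (iii) are not reproduced, and the file name clip.avi is a placeholder.

```python
import cv2

# Track feature points with pyramidal Lucas-Kanade optical flow.
cap = cv2.VideoCapture("clip.avi")              # placeholder video file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                              qualityLevel=0.01, minDistance=7)
while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts = nxt[status.ravel() == 1].reshape(-1, 1, 2)  # keep tracked points
    prev_gray = gray
cap.release()
```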


Pattern Recognition Letters | 2003

Color active shape models for tracking non-rigid objects

Andreas F. Koschan; Sangkyu Kang; Joon Ki Paik; Besma R. Abidi; Mongi A. Abidi

Active shape models can be applied to tracking non-rigid objects in video image sequences. Traditionally, these models do not include color information in their formulation. In this paper, we present a hierarchical realization of an enhanced active shape model for color video tracking, and we study the performance of both hierarchical and non-hierarchical implementations in the RGB, YUV, and HSI color spaces.
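For reference, the YUV representation studied above is a linear transform of RGB; a sketch using the common BT.601 weights follows (the ASM fitting itself is not sketched, and HSI, a nonlinear transform, is omitted).

```python
import numpy as np

def rgb_to_yuv(img):
    """Convert an RGB image (float H x W x 3) to YUV using the common
    BT.601 weights; one of the color spaces compared above."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y
                  [-0.147, -0.289,  0.436],   # U
                  [ 0.615, -0.515, -0.100]])  # V
    return img @ m.T
```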


ACM Computing Surveys | 2009

Survey and analysis of multimodal sensor planning and integration for wide area surveillance

Besma R. Abidi; Nash R. Aragam; Yi Yao; Mongi A. Abidi

Although sensor planning in computer vision has been a subject of research for over two decades, the vast majority of that research concentrates on two particular applications in the rather limited context of laboratory and industrial workbenches, namely 3D object reconstruction and robotic arm manipulation. Recently, interest has grown in solutions that provide wide-area autonomous surveillance systems for object characterization and situation awareness, involving portable, wireless, and/or Internet-connected radar, digital video, and/or infrared sensors. The prominent research problems associated with multisensor integration for wide-area surveillance are modality selection, sensor planning, data fusion, and data exchange (communication) among multiple sensors. The requirements and constraints to be addressed thus include far-field view, wide coverage, high resolution, cooperative sensors, adaptive sensing modalities, dynamic objects, and uncontrolled environments. This article summarizes a new survey and analysis conducted in light of these challenging requirements and constraints. It draws on techniques and strategies from work in sensor fusion, sensor networks, smart sensing, Geographic Information Systems (GIS), photogrammetry, and other intelligent systems where finding optimal solutions to the placement and deployment of multimodal sensors covering a wide area is important. While the techniques covered in this survey are applicable to many wide-area environments such as traffic monitoring, airport terminal surveillance, and parking lot surveillance, our examples are drawn mainly from applications such as harbor security and long-range face recognition.

Collaboration


Dive into Besma R. Abidi's collaborations.

Top Co-Authors

Sangkyu Kang - University of Tennessee
Shafik Huq - University of Tennessee
Hong Chang - University of Tennessee
Jingu Heo - University of Tennessee