Publication


Featured research published by Andreas F. Koschan.


Computer Vision and Pattern Recognition | 2003

Perception-based 3D triangle mesh segmentation using fast marching watersheds

Andreas F. Koschan

In this paper, we describe an algorithm called fast marching watersheds that segments a triangle mesh into visual parts. This computer vision algorithm leverages a human vision theory known as the minima rule. Our implementation computes the principal curvatures and principal directions at each vertex of a mesh, and then our hill-climbing watershed algorithm identifies regions bounded by contours of negative curvature minima. These regions fit the definition of visual parts according to the minima rule. We present evaluation analysis and experimental results for the proposed algorithm.
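The hill-climbing watershed step can be illustrated with a toy sketch (not the authors' fast marching implementation): given precomputed per-vertex minimum principal curvature, each vertex climbs to its highest-curvature neighbor until it reaches a local maximum, and vertices sharing a maximum form one region, so region boundaries fall along valleys of negative curvature minima. The graph and curvature values are hypothetical.

```python
def hill_climb_watershed(adjacency, kappa):
    """Label each vertex by the local curvature maximum it climbs to.

    adjacency: dict mapping vertex index -> list of neighbor indices.
    kappa: per-vertex curvature estimates (assumed precomputed).
    Returns a label list; vertices with the same label form one region.
    """
    n = len(kappa)
    label = [-1] * n
    for v in range(n):
        path = []
        u = v
        while label[u] == -1:
            path.append(u)
            best = max(adjacency[u], key=lambda w: kappa[w], default=u)
            if kappa[best] <= kappa[u]:   # u is a local curvature maximum
                label[u] = u
                break
            u = best                      # climb toward higher curvature
        for p in path:                    # assign the whole climbed path
            label[p] = label[u]
    return label
```

On a chain of vertices with two curvature peaks separated by a negative minimum, the algorithm yields two regions split at the valley.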


IEEE Signal Processing Magazine | 2005

Detection and classification of edges in color images

Andreas F. Koschan; Mongi A. Abidi

Up to now, most color edge detection methods have been monochromatic-based techniques, which in general produce better results than traditional gray-value techniques. In this overview, we focus mainly on vector-valued techniques: applying common edge detection schemes to every color component is easy to understand, whereas vector-valued techniques are new and different. The second part of the article addresses edge classification. While edges are often classified into step edges and ramp edges, we address physical edge classification based on origin into shadow edges, reflectance edges, orientation edges, occlusion edges, and specular edges. In the remainder of this article, we discuss various vector-valued techniques for detecting discontinuities in color images. Operators based on vector order statistics are then presented, followed by example results of color edge detection. We then discuss different approaches to the physical classification of edges by their origin.
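Di Zenzo's multi-image gradient is one standard vector-valued color edge detector of the kind surveyed here (not necessarily one of the operators presented in the article). A minimal NumPy sketch computes the color structure tensor per pixel and takes its largest eigenvalue as the edge strength:

```python
import numpy as np

def dizenzo_edge_strength(img):
    """img: H x W x 3 float array.  Returns per-pixel edge strength as
    the largest eigenvalue of the color structure tensor (Di Zenzo)."""
    dy, dx = np.gradient(img.astype(float), axis=(0, 1))
    gxx = (dx * dx).sum(axis=2)          # tensor entries summed over channels
    gyy = (dy * dy).sum(axis=2)
    gxy = (dx * dy).sum(axis=2)
    root = np.sqrt((gxx - gyy) ** 2 + 4.0 * gxy ** 2)
    return 0.5 * (gxx + gyy + root)      # largest eigenvalue, closed form
```

Unlike a per-channel monochromatic scheme, this detector also responds to edges where channels change in opposite directions while overall intensity stays constant.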


International Journal of Computer Vision | 2007

Multiscale Fusion of Visible and Thermal IR Images for Illumination-Invariant Face Recognition

Seong G. Kong; Jingu Heo; Faysal Boughorbel; Yue Zheng; Besma R. Abidi; Andreas F. Koschan; Mingzhong Yi; Mongi A. Abidi

This paper describes a new software-based registration and fusion of visible and thermal infrared (IR) image data for face recognition in challenging operating environments that involve illumination variations. The combined use of visible and thermal IR imaging sensors offers a viable means for improving the performance of face recognition techniques based on a single imaging modality. Despite successes in indoor access control applications, imaging in the visible spectrum demonstrates difficulties in recognizing faces under varying illumination conditions. Thermal IR sensors measure energy radiated from the object, which is less sensitive to illumination changes and even operable in darkness. However, thermal images do not provide high-resolution data. Data fusion of visible and thermal images can produce face images robust to illumination variations. However, thermal face images with eyeglasses may fail to provide useful information around the eyes since glass blocks a large portion of thermal energy. In this paper, eyeglass regions are detected using an ellipse fitting method and replaced with eye template patterns to preserve the details useful for face recognition in the fused image. Software registration of images replaces a special-purpose imaging sensor assembly and produces co-registered image pairs at a reasonable cost for large-scale deployment. Face recognition techniques using visible, thermal IR, and data-fused visible-thermal images are compared using commercial face recognition software (FaceIt®) and two visible-thermal face image databases (the NIST/Equinox and the UTK-IRIS databases). The proposed multiscale data-fusion technique improved the recognition accuracy under a wide range of illumination changes. Experimental results showed that the eyeglass replacement increased the number of correct first-match subjects by 85% (NIST/Equinox) and 67% (UTK-IRIS).
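The paper's multiscale fusion pipeline is more involved, but the core idea can be sketched with a hypothetical two-scale decomposition of co-registered visible and thermal images: average the low-pass bands and keep the stronger of the two detail coefficients at each pixel. The box-blur decomposition and all names below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box filter with edge padding (2D arrays)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_two_scale(a, b, k=3):
    """Toy two-scale fusion of co-registered images a and b."""
    la, lb = box_blur(a, k), box_blur(b, k)              # coarse bands
    da, db = a - la, b - lb                              # detail bands
    base = 0.5 * (la + lb)                               # average coarse bands
    detail = np.where(np.abs(da) >= np.abs(db), da, db)  # keep stronger detail
    return base + detail
```

Fusing an image with itself returns the image unchanged, a useful sanity check for any fusion rule of this form.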


Graphical Models (formerly Graphical Models and Image Processing; Computer Vision, Graphics, and Image Processing) | 2002

Normal vector voting: crease detection and curvature estimation on large, noisy meshes

David L. Page; Yiyong Sun; Andreas F. Koschan; Joon Ki Paik; Mongi A. Abidi

This paper describes a robust method for crease detection and curvature estimation on large, noisy triangle meshes. We assume that these meshes are approximations of piecewise-smooth surfaces derived from range or medical imaging systems and thus may exhibit measurement or even registration noise. The proposed algorithm, which we call normal vector voting, uses an ensemble of triangles in the geodesic neighborhood of a vertex, instead of its simple umbrella neighborhood, to estimate the orientation and curvature of the original surface at that point. With the orientation information, we designate a vertex as either lying on a smooth surface, following a crease discontinuity, or having no preferred orientation. For vertices on a smooth surface, the curvature estimation yields both principal curvatures and principal directions, while for vertices on a discontinuity we estimate only the curvature along the crease. The last case of no preferred orientation occurs when three or more surfaces meet to form a corner, or when surface noise is too large and sampling density is insufficient to determine orientation accurately. To demonstrate the capabilities of the method, we present results for both synthetic and real data and compare these results to the G. Taubin (1995, in Proceedings of the Fifth International Conference on Computer Vision, pp. 902-907) algorithm. Additionally, we show practical results for several large mesh data sets that are the motivation for this algorithm.
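The voting step can be sketched as follows (a simplified stand-in for the paper's weighted geodesic voting): accumulate the vote matrix V as the sum of outer products n n^T over neighborhood triangle normals and classify the vertex by how many of V's eigenvalues are significant: one for a smooth surface, two for a crease, three for a corner or no preferred orientation. The threshold `eps` is a hypothetical parameter.

```python
import numpy as np

def classify_vertex(normals, eps=0.1):
    """normals: (m, 3) unit normals of triangles in a vertex's
    geodesic neighborhood.  Returns 'surface', 'crease', or 'corner'
    from the normalized eigenvalues of the vote matrix."""
    V = normals.T @ normals                     # sum of n n^T outer products
    lam = np.sort(np.linalg.eigvalsh(V))[::-1]  # eigenvalues, descending
    lam = lam / lam.sum()
    if lam[1] < eps:                            # one dominant direction
        return "surface"
    if lam[2] < eps:                            # normals spread along an arc
        return "crease"
    return "corner"                             # no preferred orientation
```

Normals clustered around one direction yield a single dominant eigenvalue; two plane patches meeting at a crease yield two; a corner spreads the energy over all three.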


Journal of Pattern Recognition Research | 2006

Image Fusion and Enhancement via Empirical Mode Decomposition

Harishwaran Hariharan; Andrei V. Gribok; Mongi A. Abidi; Andreas F. Koschan

In this paper, we describe a novel technique for image fusion and enhancement using Empirical Mode Decomposition (EMD). EMD is a non-parametric, data-driven analysis tool that decomposes non-linear, non-stationary signals into Intrinsic Mode Functions (IMFs). In this method, we decompose images, rather than signals, from different imaging modalities into their IMFs. Fusion is performed at the decomposition level, and the fused IMFs are reconstructed to realize the fused image. We have devised weighting schemes that emphasize features from both modalities by decreasing the mutual information between IMFs, thereby increasing the information and visual content of the fused image. We demonstrate how the proposed method improves the interpretive information of the input images by comparing it with widely used fusion schemes. Apart from comparing our method with some advanced techniques, we have also evaluated our method against pixel-by-pixel averaging, a comparison which, incidentally, is not common in the literature.
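Assuming the IMFs have already been extracted by a 2D EMD (the decomposition itself is omitted here), the fusion step reduces to a weighted sum of corresponding IMFs followed by reconstruction. The paper chooses the weights by decreasing mutual information between IMFs; in this sketch they are simply given as inputs.

```python
import numpy as np

def fuse_imfs(imfs_a, imfs_b, weights_a, weights_b):
    """imfs_*: lists of same-shape arrays (IMF_1 ... IMF_k plus residue),
    ordered fine to coarse, one list per imaging modality.
    The fused image is the sum of the weighted IMFs from both modalities."""
    fused = np.zeros_like(imfs_a[0], dtype=float)
    for fa, fb, wa, wb in zip(imfs_a, imfs_b, weights_a, weights_b):
        fused += wa * fa + wb * fb        # fuse at the decomposition level
    return fused
```

With equal weights of 0.5 per level and identical inputs, reconstruction recovers the original image, which checks that the fusion rule preserves content.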


International Conference on Image Processing | 2003

Shape analysis algorithm based on information theory

David L. Page; Andreas F. Koschan; Sreenivas R. Sukumar; Besma Roui-Abidi; Mongi A. Abidi

In this paper, we describe an algorithm to measure the shape complexity for discrete approximations of planar curves in 2D images and manifold surfaces for 3D triangle meshes. We base our algorithm on shape curvature, and thus we compute shape information as the entropy of curvature. We present definitions to estimate curvature for both discrete curves and surfaces and then formulate our theory of shape information from these definitions. We demonstrate our algorithm with experimental results.
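The central quantity, shape information as the entropy of curvature, can be sketched in a few lines: histogram the estimated curvatures and take the Shannon entropy of the resulting distribution. The binning is a hypothetical choice; the paper first derives curvature estimates for discrete curves and surfaces.

```python
import numpy as np

def shape_information(curvatures, bins=32):
    """Shannon entropy (bits) of the curvature distribution: a spread-out
    histogram (varied shape) gives high entropy, a single spike (e.g. a
    sphere of constant curvature) gives zero entropy."""
    hist, _ = np.histogram(curvatures, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # ignore empty bins (0 log 0 = 0)
    return float(-(p * np.log2(p)).sum())
```

A constant-curvature shape scores 0 bits, while curvatures spread evenly over the 32 bins approach log2(32) = 5 bits.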


Sixth International Conference on Quality Control by Artificial Vision | 2003

Real-time video tracking using PTZ cameras

Sangkyu Kang; Joonki Paik; Andreas F. Koschan; Besma R. Abidi; Mongi A. Abidi

Automatic tracking is essential for 24-hour intruder detection and, more generally, for surveillance systems. This paper presents adaptive background generation and the corresponding moving-region detection techniques for a pan-tilt-zoom (PTZ) camera using a geometric transform-based mosaicing method. A complete system including adaptive background generation, moving-region extraction, and tracking is evaluated using realistic experimental results. More specifically, experimental results include generated background images, a moving region, and input video with bounding boxes around moving objects. These experiments show that the proposed system can monitor moving targets in wide open areas by automatic panning and tilting in real time.
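For already-aligned frames (the paper handles pan/tilt by warping frames into a mosaic first), adaptive background generation and moving-region extraction can be sketched as a running average plus a difference threshold; `alpha` and `thresh` are hypothetical parameters, not values from the paper.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Adaptive background: exponential running average of aligned frames."""
    return (1.0 - alpha) * bg + alpha * frame

def moving_bbox(bg, frame, thresh=25):
    """Threshold |frame - background| and return the bounding box
    (y0, x0, y1, x1) of moving pixels, or None if nothing moved."""
    mask = np.abs(frame.astype(float) - bg) > thresh
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(xs.min()), int(ys.max()) + 1, int(xs.max()) + 1
```

The bounding box would then drive the PTZ commands that keep the target centered at the desired zoom.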


Journal of Pattern Recognition Research | 2006

An Overview of Color Constancy Algorithms

Vivek Agarwal; Besma R. Abidi; Andreas F. Koschan; Mongi A. Abidi

Color constancy is one of the important research areas with a wide range of applications in the fields of color image processing and computer vision. One such application is video tracking. Color is used as one of the salient features, and its robustness to illumination variation is essential to the adaptability of video tracking algorithms. Color constancy can be applied to discount the influence of changing illuminations. In this paper, we present a review of established color constancy approaches. We also investigate whether these approaches, in their present form of implementation, can be applied to the video tracking problem. The approaches are grouped into two categories, namely Pre-Calibrated and Data-driven approaches. The paper also discusses the ill-posedness of the color constancy problem, the implementation assumptions of color constancy approaches, and the problem statement for tracking. Publications on video tracking algorithms involving color correction or color compensation techniques are not included in this review.
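As one concrete example of the family of algorithms such a review covers, the classic gray-world method assumes the average scene reflectance is achromatic and scales each channel toward the global mean. This is a generic textbook sketch, not code from the paper.

```python
import numpy as np

def gray_world(img):
    """Gray-world color constancy: scale each channel so all three
    channel means equal the global mean intensity.
    img: H x W x 3 array."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)  # per-channel means
    gain = means.mean() / means              # per-channel correction gains
    return img * gain
```

After correction, an image whose channels were tinted by a colored illuminant has equal channel means, discounting the illuminant under the gray-world assumption.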


IEEE Transactions on Circuits and Systems for Video Technology | 2008

Heterogeneous Fusion of Omnidirectional and PTZ Cameras for Multiple Object Tracking

Chung-Hao Chen; Yi Yao; David L. Page; Besma R. Abidi; Andreas F. Koschan; Mongi A. Abidi

Dual-camera systems have been widely used in surveillance because of the ability to exploit the wide field of view (FOV) of the omnidirectional camera and the wide zoom range of the PTZ camera. Most existing algorithms require a priori knowledge of the omnidirectional camera's projection model to solve the nonlinear spatial correspondences between the two cameras. To overcome this limitation, two methods are proposed: 1) geometry and 2) homography calibration, where polynomials with automated model selection are used to approximate the camera's projection model and spatial mapping, respectively. The proposed methods not only improve the mapping accuracy by reducing its dependence on knowledge of the projection model but also feature reduced computation and improved flexibility in adjusting to varying system configurations. Although the fusion of multiple cameras has attracted increasing attention, most existing algorithms assume comparable FOV and resolution levels among multiple cameras. The different FOV and resolution levels of the omnidirectional and PTZ cameras raise another critical issue in practical tracking applications. The omnidirectional camera is capable of multiple object tracking, while the PTZ camera is able to track one individual target at a time to maintain the required resolution. It becomes necessary for the PTZ camera to distribute its observation time among multiple objects and visit them in sequence. Therefore, this paper addresses a novel scheme where an optimal visiting sequence of the PTZ camera is obtained so that in a given period of time the PTZ camera automatically visits multiple detected motions in a target-hopping manner. The effectiveness of the proposed algorithms is illustrated via extensive experiments using both synthetic and real tracking data and comparisons with two reference systems.
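The idea of approximating an unknown projection model by a polynomial with automated model selection can be sketched in 1D: fit candidate degrees and keep the one with the lowest held-out error. The alternating train/validation split below is a simple illustrative stand-in for whatever selection criterion the paper actually uses.

```python
import numpy as np

def fit_projection_poly(r, theta, max_degree=8):
    """Approximate an unknown projection mapping theta = f(r) with a
    polynomial, choosing the degree by held-out validation error."""
    idx = np.arange(r.size)
    train, val = idx[::2], idx[1::2]          # alternate-sample split
    best = None
    for d in range(1, max_degree + 1):
        coef = np.polyfit(r[train], theta[train], d)
        err = np.mean((np.polyval(coef, r[val]) - theta[val]) ** 2)
        if best is None or err < best[0]:     # keep lowest validation error
            best = (err, coef)
    return best[1]
```

This removes the need to know the camera's true projection model: only sampled correspondences between the two views are required.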


Pattern Recognition Letters | 2003

Color active shape models for tracking non-rigid objects

Andreas F. Koschan; Sangkyu Kang; Joon Ki Paik; Besma R. Abidi; Mongi A. Abidi

Active shape models can be applied to tracking non-rigid objects in video image sequences. Traditionally these models do not include color information in their formulation. In this paper, we present a hierarchical realization of an enhanced active shape model for color video tracking and we study the performance of both hierarchical and nonhierarchical implementations in the RGB, YUV, and HSI color spaces.
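The shape-model core that active shape models build on (a mean shape plus PCA modes of aligned landmark vectors) can be sketched as below; the paper's contributions, the color profile information and the hierarchical search, are omitted. `var_keep` is a hypothetical parameter.

```python
import numpy as np

def build_shape_model(shapes, var_keep=0.98):
    """shapes: (n, 2k) aligned landmark vectors.  Returns the mean shape
    and a matrix P whose columns are the main modes of variation, so any
    plausible shape is approximated as mean + P @ b."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    P = Vt[:k].T                  # keep modes explaining var_keep of variance
    return mean, P

def project_shape(shape, mean, P):
    b = P.T @ (shape - mean)      # shape parameters
    return mean + P @ b           # reconstruction constrained to the model
```

Constraining candidate contours to this subspace is what keeps an active shape model's fit plausible while it tracks a deforming object.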
