Network

Latest external collaborations at the country level.

Hotspot

Research topics where A. Murat Tekalp is active.

Publication

Featured research published by A. Murat Tekalp.


IEEE Transactions on Image Processing | 1997

Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time

Andrew J. Patti; M.I. Sezan; A. Murat Tekalp

Printing from an NTSC source and conversion of NTSC source material to high-definition television (HDTV) format are some of the applications that motivate superresolution (SR) image and video reconstruction from low-resolution (LR) and possibly blurred sources. Existing methods for SR image reconstruction are limited by the assumptions that the input LR images are sampled progressively and that the aperture time of the camera is zero, thus ignoring the motion blur occurring during the aperture time. Because these assumptions have observable adverse effects for many common video sources, this paper proposes (i) a complete model of video acquisition with an arbitrary input sampling lattice and a nonzero aperture time, and (ii) an algorithm based on this model, using the theory of projections onto convex sets, to reconstruct SR still images or video from an LR time sequence of images. Experimental results with real video are provided, which clearly demonstrate that a significant increase in image resolution can be achieved by taking the motion blurring into account, especially when there is large interframe motion.
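
As a rough illustration of the projection idea behind the reconstruction algorithm, the sketch below iterates a simplified data-consistency projection for a single frame with a known box blur and integer decimation; the uniform kernel, the single-frame setting, and the absence of motion compensation are simplifying assumptions for illustration, not the paper's full arbitrary-lattice, nonzero-aperture model.

# Minimal POCS-style superresolution sketch (single frame, no motion).
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def pocs_sr(lr, factor=2, n_iters=20, delta=0.01):
    lr = np.asarray(lr, dtype=float)
    hr = zoom(lr, factor, order=1)               # initial HR estimate
    for _ in range(n_iters):
        sim = uniform_filter(hr, size=factor)    # crude aperture/blur model
        sim_lr = sim[::factor, ::factor]         # decimate to the LR lattice
        resid = lr - sim_lr                      # data-consistency residual
        # Project onto the set of HR images whose simulated LR observation
        # stays within +/- delta of the measured LR frame.
        corr = np.sign(resid) * np.clip(np.abs(resid) - delta, 0, None)
        hr += zoom(corr, factor, order=0) / factor ** 2
    return hr

lr = np.random.default_rng(0).random((32, 32))   # stand-in LR frame
hr = pocs_sr(lr, factor=2)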


Signal Processing-image Communication | 2000

Face and 2-D mesh animation in MPEG-4

A. Murat Tekalp; Jörn Ostermann

This paper presents an overview of some of the synthetic visual objects supported by MPEG-4 version 1, namely animated faces and animated arbitrary 2D uniform and Delaunay meshes. We discuss both the specification and the compression of face animation and 2D-mesh animation in MPEG-4. Face animation allows animating either a proprietary face model or a face model downloaded to the decoder. We also address integration of the face animation tool with the text-to-speech interface (TTSI), so that face animation can be driven by text input.
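
The 2D-mesh animation idea, moving the mesh node points and warping the underlying texture accordingly, can be sketched with a piecewise-affine warp. scikit-image's PiecewiseAffineTransform is used here purely as an illustrative stand-in; MPEG-4 itself codes node motion vectors and leaves the warping to the decoder.

# Illustrative 2-D mesh animation via piecewise-affine warping (not MPEG-4 syntax).
import numpy as np
from skimage import data
from skimage.transform import PiecewiseAffineTransform, warp

img = data.astronaut()[..., 0]                   # any grayscale texture
rows, cols = img.shape
nodes = np.array([[c, r] for r in np.linspace(0, rows - 1, 5)
                         for c in np.linspace(0, cols - 1, 5)])
moved = nodes + np.random.default_rng(0).normal(0, 5, nodes.shape)  # animated node positions

tform = PiecewiseAffineTransform()
tform.estimate(moved, nodes)       # warp() needs the output-to-source mapping
frame = warp(img, tform, output_shape=(rows, cols))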


Optical Engineering | 1990

Survey of recent developments in digital image restoration

M. Ibrahim Sezan; A. Murat Tekalp

We present a tutorial review of recent developments in restoring images that are degraded by both blur and noise. We consider three fundamental aspects of digital image restoration: modeling, identification algorithms, and restoration algorithms. We first give an overview of degradation models and certain properties of images. We then survey the methods that identify these models. Image restoration algorithms are surveyed in two categories: general algorithms and specialized algorithms. We briefly discuss present and future research topics in the field. Our emphasis here is on fundamental concepts and ideas rather than mathematical details.


Optical Engineering | 1990

Maximum likelihood image and blur identification: a unifying approach

Reginald L. Lagendijk; A. Murat Tekalp; Jan Biemond

A number of different algorithms have recently been proposed to identify the image and blur model parameters from an image that is


Journal of Electronic Imaging | 1998

Temporal video segmentation using unsupervised clustering and semantic object tracking

Bilge Günsel; A. Mufit Ferman; A. Murat Tekalp

This paper proposes a content-based temporal video segmentation system that integrates syntactic (domain-independent) and semantic (domain-dependent) features for automatic management of video data. Temporal video segmentation includes scene change detection and shot classification. The proposed scene change detection method consists of two steps: detection and tracking of semantic objects of interest specified by the user, and an unsupervised method for detection of cuts and edit effects. Object detection and tracking is achieved using a region matching scheme, where the region of interest is defined by the boundary of the object. A new unsupervised scene change detection method based on two-class clustering is introduced to eliminate the data dependency of threshold selection. The proposed shot classification approach relies on semantic image features and exploits domain-dependent visual properties such as shape, color, and spatial configuration of tracked semantic objects. The system has been applied to segmentation and classification of TV programs collected from different channels. Although the paper focuses on news programs, the method can easily be applied to other TV programs with distinct semantic structure.
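
The two-class clustering idea for cut detection can be sketched as follows: compute a frame-to-frame dissimilarity feature and cluster it into "change" and "no-change" classes rather than hand-picking a global threshold. The histogram-difference feature and the use of scikit-learn's KMeans are assumptions for illustration, not the paper's exact clustering procedure.

# Sketch: threshold-free cut detection by two-class clustering of frame differences.
import numpy as np
from sklearn.cluster import KMeans

def detect_cuts(frames, bins=64):
    # frames: sequence of grayscale frames with values in [0, 255]
    hists = [np.histogram(f, bins=bins, range=(0, 255), density=True)[0]
             for f in frames]
    diffs = np.array([np.abs(hists[i + 1] - hists[i]).sum()
                      for i in range(len(hists) - 1)]).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(diffs)
    cut_label = labels[diffs[:, 0].argmax()]     # the high-difference cluster
    return [i + 1 for i, lab in enumerate(labels) if lab == cut_label]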


IEEE Transactions on Multimedia | 2013

An Optimization Framework for QoS-Enabled Adaptive Video Streaming Over OpenFlow Networks

Hilmi E. Egilmez; Seyhan Civanlar; A. Murat Tekalp

OpenFlow is a programmable network protocol and associated hardware designed to effectively manage and direct traffic by decoupling the control and forwarding layers of routing. This paper presents an analytical framework for optimization of forwarding decisions at the control layer to enable dynamic Quality of Service (QoS) over OpenFlow networks and discusses application of this framework to QoS-enabled streaming of scalable encoded videos with two QoS levels. We pose and solve optimization of dynamic QoS routing as a constrained shortest path problem, where we treat the base layer of scalable encoded video as a level-1 QoS flow, while the enhancement layers can be treated as level-2 QoS or best-effort flows. We provide experimental results which show that the proposed dynamic QoS framework achieves significant improvement in overall quality of streaming of scalable encoded videos under various coding configurations and network congestion scenarios.
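
The constrained shortest path formulation can be illustrated on a toy graph: minimize end-to-end delay subject to a bound on a second metric (a generic "cost" here). The brute-force enumeration, the networkx graph, and the attribute names are assumptions for illustration; they are not the optimization method or topology used in the paper.

# Toy constrained shortest path: minimize delay subject to a cost bound.
import networkx as nx

def csp_route(G, src, dst, cost_bound):
    best_path, best_delay = None, float("inf")
    for path in nx.all_simple_paths(G, src, dst):
        hops = list(zip(path, path[1:]))
        delay = sum(G[u][v]["delay"] for u, v in hops)
        cost = sum(G[u][v]["cost"] for u, v in hops)
        if cost <= cost_bound and delay < best_delay:
            best_path, best_delay = path, delay
    return best_path, best_delay

G = nx.Graph()
G.add_edge("a", "b", delay=2, cost=1)
G.add_edge("b", "c", delay=2, cost=1)
G.add_edge("a", "c", delay=1, cost=5)            # fast but expensive direct link
print(csp_route(G, "a", "c", cost_bound=3))      # -> (['a', 'b', 'c'], 4)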


Image and Vision Computing | 1997

Fusion of color and edge information for improved segmentation and edge linking

Eli Saber; A. Murat Tekalp; Gozde Bozdagi

We propose a new method for combined color image segmentation and edge linking. The image is first segmented based on color information only. The segmentation map is modeled by a Gibbs random field, to ensure formation of spatially contiguous regions. Next, spatial edge locations are determined using the magnitude of the gradient of the 3-channel image vector field. Finally, regions in the segmentation map are split and merged by a region-labeling procedure to enforce their consistency with the edge map. The boundaries of the final segmentation map constitute a linked edge map. Experimental results are reported.
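
One step of the pipeline, an edge-strength map computed from the gradient of the 3-channel image treated as a vector field, can be sketched as below. Summing squared per-channel Sobel responses is a common simplification used here for illustration, not necessarily the exact operator in the paper.

# Sketch: multichannel (color) gradient magnitude for edge detection.
import numpy as np
from scipy.ndimage import sobel

def color_gradient_magnitude(rgb):
    # rgb: float array of shape (H, W, 3)
    gx = np.stack([sobel(rgb[..., k], axis=1) for k in range(3)], axis=-1)
    gy = np.stack([sobel(rgb[..., k], axis=0) for k in range(3)], axis=-1)
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))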


Graphical Models and Image Processing | 1998

Region-based parametric motion segmentation using color information

Yucel Altunbasak; P. Erhan Eren; A. Murat Tekalp

This paper presents pixel-based and region-based parametric motion segmentation methods for robust motion segmentation with the goal of aligning motion boundaries with those of real objects in a scene. We first describe a two-step iterative procedure for parametric motion segmentation by either motion-vector or motion-compensated intensity matching. We next present a region-based extension of this method, whereby all pixels within a predefined spatial region are assigned the same motion label. These predefined regions may be fixed- or variable-size blocks or arbitrary-shaped areas defined by color or texture uniformity. A particular combination of these pixel-based and region-based methods is then proposed as a complete algorithm to obtain the best possible segmentation results on a variety of image sequences. Experimental results showing the benefits of the proposed scheme are provided.
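
The region-based assignment step can be sketched as follows: fit affine motion models by least squares and give every pixel of a predefined region the label of the model that best predicts that region's motion vectors. Dense motion vectors as input and the mean-absolute-error criterion are assumptions made for this illustration.

# Sketch: least-squares affine motion fit and region-wise motion labeling.
import numpy as np

def fit_affine(xy, uv):
    # xy: (N, 2) pixel coordinates, uv: (N, 2) motion vectors; solve uv ~= [x y 1] A
    X = np.hstack([xy, np.ones((len(xy), 1))])
    A, *_ = np.linalg.lstsq(X, uv, rcond=None)
    return A                                     # (3, 2) affine parameters

def label_regions(regions, flow, models):
    # regions: (H, W) integer region map, flow: (H, W, 2), models: list of (3, 2) matrices
    labels = np.zeros(regions.max() + 1, dtype=int)
    for r in range(regions.max() + 1):
        ys, xs = np.nonzero(regions == r)
        X = np.column_stack([xs, ys, np.ones_like(xs)])
        uv = flow[ys, xs]
        errs = [np.abs(X @ A - uv).mean() for A in models]
        labels[r] = int(np.argmin(errs))         # one motion label per region
    return labels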


Graphical Models and Image Processing | 1996

Automatic image annotation using adaptive color classification

Eli Saber; A. Murat Tekalp; Reiner Eschbach; Keith T. Knox

We describe a system which automatically annotates images with a set of prespecified keywords, based on supervised color classification of pixels into N prespecified classes using simple pixelwise operations. The conditional distribution of the chrominance components of pixels belonging to each class is modeled by a two-dimensional Gaussian function, where the mean vector and the covariance matrix for each class are estimated from appropriate training sets. Then, a succession of binary hypothesis tests with image-adaptive thresholds is employed to decide whether each pixel in a given image belongs to one of the predetermined classes. To this effect, a universal decision threshold is first selected for each class based on receiver operating characteristic (ROC) curves quantifying the optimum "true positive" vs. "false positive" performance on the training set. Then, a new method is introduced for adapting these thresholds to the characteristics of individual input images based on histogram cluster analysis. If a particular pixel is found to belong to more than one class, a maximum a posteriori probability (MAP) rule is employed to resolve the ambiguity. The performance improvement obtained by the proposed adaptive hypothesis testing approach over using universal decision thresholds is demonstrated by annotating a database of 31 images.
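
The per-class chrominance model and the MAP rule can be sketched as below: a 2-D Gaussian is fit to the training pixels of each class, low-likelihood pixels are rejected, and pixels claimed by several classes are resolved by the MAP rule. The fixed likelihood threshold is a stand-in for the ROC-based and image-adaptive threshold selection, which is omitted here.

# Sketch: 2-D Gaussian chrominance models with a MAP rule for ambiguous pixels.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class(train_pixels):
    # train_pixels: (N, 2) chrominance samples (e.g., Cb, Cr) for one class
    return multivariate_normal(mean=train_pixels.mean(axis=0),
                               cov=np.cov(train_pixels, rowvar=False))

def classify(pixels, class_models, priors, threshold=1e-4):
    # pixels: (M, 2); returns -1 where no class likelihood exceeds the threshold
    likes = np.stack([m.pdf(pixels) for m in class_models], axis=1)
    post = likes * np.asarray(priors)            # unnormalized posteriors
    labels = post.argmax(axis=1)                 # MAP rule
    labels[likes.max(axis=1) < threshold] = -1
    return labels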


Pattern Recognition | 2002

Robust watermarking of fingerprint images

Bilge Gunsel; Umut Uludag; A. Murat Tekalp

This paper introduces two spatial methods to embed watermark data into fingerprint images without corrupting their features. The first method inserts the watermark data after feature extraction, thus preventing watermarking of regions used for fingerprint classification. The method utilizes an image-adaptive strength adjustment technique which results in watermarks with low visibility. The second method introduces a feature-adaptive watermarking technique for fingerprints and is thus applicable before feature extraction. For both methods, decoding does not require the original fingerprint image. Unlike most published spatial watermarking methods, the proposed methods provide high decoding accuracy for fingerprint images. High data hiding and decoding performance is also observed for color images.
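
A simplified version of spatial embedding with image-adaptive strength, restricted to pixels outside a feature mask (e.g., around minutiae), might look like the sketch below. The local-variance strength rule is an assumed stand-in for the paper's adaptive strength adjustment, not its exact formula.

# Sketch: additive spatial watermark with local-variance-adaptive strength.
import numpy as np
from scipy.ndimage import uniform_filter

def embed(img, bits, feature_mask, base_strength=2.0, seed=0):
    imgf = img.astype(float)
    mean = uniform_filter(imgf, size=5)
    std = np.sqrt(np.clip(uniform_filter(imgf ** 2, size=5) - mean ** 2, 0, None))
    strength = base_strength * (1.0 + std / (std.max() + 1e-9))
    coords = np.argwhere(~feature_mask)          # embed only outside feature regions
    np.random.default_rng(seed).shuffle(coords)  # pseudo-random embedding order
    out = imgf.copy()
    for (r, c), bit in zip(coords, bits):
        out[r, c] += strength[r, c] if bit else -strength[r, c]
    return np.clip(out, 0, 255).astype(np.uint8)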

Collaboration

Dive into A. Murat Tekalp's collaborations.

Top Co-Authors

Eli Saber (Rochester Institute of Technology)
Ahmet Ekin (University of Rochester)
Yucel Altunbasak (Georgia Institute of Technology)