Publication


Featured research published by A.M. Tekalp.


IEEE Transactions on Image Processing | 2003

Automatic soccer video analysis and summarization

Ahmet Ekin; A.M. Tekalp; R. Mehrotra

We propose a fully automatic and computationally efficient framework for analysis and summarization of soccer videos using cinematic and object-based features. The proposed framework includes some novel low-level processing algorithms, such as dominant color region detection, robust shot boundary detection, and shot classification, as well as some higher-level algorithms for goal detection, referee detection, and penalty-box detection. The system can output three types of summaries: i) all slow-motion segments in a game; ii) all goals in a game; iii) slow-motion segments classified according to object-based features. The first two types of summaries are based on cinematic features only for speedy processing, while the summaries of the last type contain higher-level semantics. The proposed framework is efficient, effective, and robust. It is efficient in the sense that there is no need to compute object-based features when cinematic features are sufficient for the detection of certain events, e.g., goals in soccer. It is effective in the sense that the framework can also employ object-based features when needed to increase accuracy (at the expense of more computation). The efficiency, effectiveness, and robustness of the proposed framework are demonstrated over a large data set, consisting of more than 13 hours of soccer video, captured in different countries and under different conditions.
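
The dominant color (playing field) region detector is the low-level step the other algorithms build on. As an illustration only, the following Python sketch picks the peak of a hue histogram for one frame and thresholds around it; the paper learns the field color statistics over many frames and uses a more careful color distance, so the function name, thresholds, and single-frame scope here are assumptions.

```python
import numpy as np

def dominant_color_mask(hue, sat, val, bin_count=64, hue_tol=0.05):
    """Illustrative dominant-color (grass field) detector.

    hue, sat, val: float arrays in [0, 1] for one frame (H x W).
    Returns a boolean mask of pixels close to the dominant hue.
    NOTE: the paper learns the field color over many frames and uses
    cylindrical color distances; this single-frame sketch only picks the
    peak of a hue histogram and thresholds around it.
    """
    # Consider only reasonably saturated, bright pixels (likely grass).
    valid = (sat > 0.2) & (val > 0.2)
    hist, edges = np.histogram(hue[valid], bins=bin_count, range=(0.0, 1.0))
    peak = edges[np.argmax(hist)] + 0.5 / bin_count   # dominant hue estimate
    # Hue is circular, so measure the wrap-around distance to the peak.
    dist = np.abs(hue - peak)
    dist = np.minimum(dist, 1.0 - dist)
    return valid & (dist < hue_tol)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h = rng.normal(0.33, 0.01, (120, 160)) % 1.0   # synthetic "green" frame
    s = np.full_like(h, 0.7)
    v = np.full_like(h, 0.6)
    mask = dominant_color_mask(h, s, v)
    print("dominant-color pixel ratio:", mask.mean())  # ~1.0 for this frame
```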


IEEE Transactions on Image Processing | 2005

Lossless generalized-LSB data embedding

Mehmet Utku Celik; Gaurav Sharma; A.M. Tekalp; Eli Saber

We present a novel lossless (reversible) data-embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency and, thus, the lossless data-embedding capacity.
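
A minimal sketch of the generalized-LSB idea may help: each sample is quantized to a multiple of a level L and an L-ary payload symbol replaces the lowest levels; for lossless recovery the discarded lowest levels must be compressed and carried inside the payload, which the sketch below omits (the residual is simply returned separately). The function names and the choice L=4 are illustrative, not the paper's.

```python
import numpy as np

def glsb_embed(signal, symbols, L=4):
    """Generalized-LSB embedding sketch: quantize each sample to a multiple
    of L and add a payload symbol in {0, ..., L-1}.  The original lowest
    levels (signal % L) must be compressed and carried inside the payload
    for lossless recovery; that step is omitted here for brevity."""
    residual = signal % L                      # what must be restored later
    watermarked = (signal // L) * L + symbols  # replace the lowest levels
    return watermarked, residual

def glsb_extract(watermarked, residual, L=4):
    """Recover the payload symbols and reconstruct the exact original."""
    symbols = watermarked % L
    original = (watermarked // L) * L + residual
    return symbols, original

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    host = rng.integers(0, 256, size=16)
    payload = rng.integers(0, 4, size=16)       # L-ary payload symbols
    wm, res = glsb_embed(host, payload, L=4)
    sym, rec = glsb_extract(wm, res, L=4)
    assert np.array_equal(sym, payload) and np.array_equal(rec, host)
    print("payload recovered and host restored exactly")
```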


International Conference on Acoustics, Speech, and Signal Processing | 1992

High-resolution image reconstruction from lower-resolution image sequences and space-varying image restoration

A.M. Tekalp; M.K. Ozkan; M.I. Sezan

The authors address the problem of reconstructing a high-resolution image from a number of lower-resolution (possibly noisy) frames of the same scene, where the successive frames are shifted versions of each other at subpixel displacements. In particular, two previously proposed methods, a frequency-domain method and a method based on projections onto convex sets (POCS), are extended to take into account the presence of both sensor blurring and observation noise. A new two-step procedure is proposed, and it is shown that the POCS formulation presented for the high-resolution image reconstruction problem can also be used as a new method for the restoration of space-varying blurred images. Some simulation results are provided.
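
As a rough illustration of the POCS idea, the sketch below (a toy 1-D example, not the paper's formulation) models each low-resolution sample as a box average of consecutive high-resolution samples and cyclically projects the estimate onto the data-consistency set |y - h.x| <= delta of every sample; the box-average sensor model, the integer shifts, and the tolerance delta are assumptions made for the demo.

```python
import numpy as np

def pocs_superres(obs, shifts, factor, n_hi, delta=0.01, n_iter=50):
    """Toy 1-D POCS super-resolution sketch.

    obs:    list of low-resolution observation vectors
    shifts: integer shifts on the high-res grid (sub-pixel on the low-res grid)
    factor: downsampling factor relating the two grids
    Each low-res sample is modeled as a box average of `factor` consecutive
    high-res samples (a stand-in for the sensor blur).  The estimate is
    projected onto the data-consistency set of every sample: |y - h.x| <= delta.
    """
    x = np.zeros(n_hi)
    for _ in range(n_iter):
        for y, d in zip(obs, shifts):
            for m, ym in enumerate(y):
                start = m * factor + d
                if start + factor > n_hi:
                    continue
                h = np.zeros(n_hi)
                h[start:start + factor] = 1.0 / factor   # observation row
                r = ym - h @ x
                if abs(r) > delta:                       # outside the set
                    x += h * (r - np.sign(r) * delta) / (h @ h)
    return x

if __name__ == "__main__":
    truth = np.sin(np.linspace(0, 4 * np.pi, 64))
    factor, shifts = 4, [0, 1, 2, 3]
    frames = [truth[d:d + 60].reshape(-1, factor).mean(axis=1) for d in shifts]
    est = pocs_superres(frames, shifts, factor, n_hi=64)
    print("reconstruction RMSE:",
          np.sqrt(np.mean((est[:60] - truth[:60]) ** 2)))
```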


International Conference on Image Processing | 2002

Reversible data hiding

Mehmet Utku Celik; Gaurav Sharma; A.M. Tekalp; Eli Saber

We present a novel reversible (lossless) data hiding (embedding) technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known LSB (least significant bit) modification is proposed as the data embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion, and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes static portions of the host as side-information improves the compression efficiency, and thus the lossless data embedding capacity.


IEEE Transactions on Image Processing | 2006

Lossless watermarking for image authentication: a new framework and an implementation

Mehmet Utku Celik; Gaurav Sharma; A.M. Tekalp

We present a novel framework for lossless (invertible) authentication watermarking, which enables zero-distortion reconstruction of the un-watermarked images upon verification. As opposed to earlier lossless authentication methods that required reconstruction of the original image prior to validation, the new framework allows validation of the watermarked images before recovery of the original image. This reduces computational requirements in situations where either the verification step fails or the zero-distortion reconstruction is not needed. For verified images, integrity of the reconstructed image is ensured by the uniqueness of the reconstruction procedure. The framework also enables public(-key) authentication without granting access to the perfect original and allows for efficient tamper localization. Effectiveness of the framework is demonstrated by an implementation that combines hierarchical image authentication with lossless generalized least significant bit (G-LSB) data embedding.
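
The ordering is the key point: authentication is checked on the watermarked image itself, and the exact original is reconstructed only if it is actually needed. The toy sketch below illustrates that flow with a plain LSB channel and an HMAC computed over the image with its LSB plane cleared; the demo key, the uncompressed residual, and the single-bit channel are all simplifications, not the paper's construction.

```python
import hashlib, hmac
import numpy as np

KEY = b"demo-key"  # placeholder secret key, not from the paper

def embed_auth(image, key=KEY):
    """Toy reversible authentication sketch (not the paper's construction).

    The LSB plane is the embedding channel.  The MAC is computed over the
    image with its LSB plane cleared, so it can be re-computed from the
    watermarked image alone, i.e. verification never needs the original.
    The original LSB plane is returned uncompressed; the paper compresses
    it and carries it inside the payload instead."""
    base = image & ~np.uint8(1)                    # LSBs cleared
    mac = hmac.new(key, base.tobytes(), hashlib.sha256).digest()
    payload_bits = np.unpackbits(np.frombuffer(mac, dtype=np.uint8))
    flat = base.reshape(-1).copy()
    flat[:payload_bits.size] |= payload_bits       # embed MAC into LSBs
    return flat.reshape(image.shape), (image & 1)  # watermarked, original LSBs

def verify(watermarked, key=KEY):
    """Step 1: validate the watermarked image directly."""
    base = watermarked & ~np.uint8(1)
    expected = hmac.new(key, base.tobytes(), hashlib.sha256).digest()
    bits = (watermarked.reshape(-1)[:256] & 1).astype(np.uint8)
    return hmac.compare_digest(expected, np.packbits(bits).tobytes())

def restore(watermarked, original_lsbs):
    """Step 2 (optional): exact reconstruction of the original image."""
    return (watermarked & ~np.uint8(1)) | original_lsbs

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
    wm, lsbs = embed_auth(img)
    assert verify(wm)                               # authenticate first...
    assert np.array_equal(restore(wm, lsbs), img)   # ...restore only if needed
    wm_tampered = wm.copy(); wm_tampered[0, 0] ^= 0x80
    assert not verify(wm_tampered)
    print("verified, restored exactly, tamper detected")
```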


IEEE Transactions on Circuits and Systems for Video Technology | 1993

Adaptive motion-compensated filtering of noisy image sequences

M.K. Ozkan; M.I. Sezan; A.M. Tekalp

The authors propose a novel adaptive spatiotemporal filter, called the adaptive weighted averaging (AWA) filter, for effective noise suppression in image sequences without introducing visually disturbing blurring artifacts. Filtering is performed by computing the weighted average of image values within a spatiotemporal support along the estimated motion trajectory at each pixel. The weights are determined by optimizing a well-defined mathematical criterion, which provides an implicit mechanism for deemphasizing the contribution of outlier pixels within the spatiotemporal filter support to avoid blurring. The AWA filter is therefore particularly well suited for filtering sequences that contain segments with abruptly changing scene content due to, for example, rapid zooming and changes in the view of the camera. The performance of the proposed AWA filter is compared with that of spatiotemporal local linear minimum mean square error (LMMSE) filtering. The results demonstrate that the proposed AWA filter outperforms the LMMSE filter, especially in the cases of low signal-to-noise ratios and abruptly varying scene content.
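
One plausible reading of the weighting mechanism, for illustration only: weights fall off with the squared difference from the reference pixel so that outliers contribute little, while differences below a noise threshold are weighted equally. The exact criterion in the paper differs in detail; the constants eps and a below are assumptions.

```python
import numpy as np

def awa_filter(frames, ref_index, eps=20.0, a=1.0):
    """Adaptive weighted averaging sketch over co-located pixels.

    frames: stack of motion-compensated frames (T x H x W), i.e. pixels along
    the estimated motion trajectory already aligned to the reference frame.
    The weight of each frame decreases with its squared difference from the
    reference pixel, so outliers (occlusions, scene changes) contribute
    little; differences below `eps` are attributed to noise and weighted
    equally.  This is an illustrative weighting, not the paper's exact one.
    """
    frames = np.asarray(frames, dtype=float)
    ref = frames[ref_index]
    diff2 = (frames - ref) ** 2
    weights = 1.0 / (1.0 + a * np.maximum(eps ** 2, diff2))
    weights /= weights.sum(axis=0, keepdims=True)      # normalize per pixel
    return (weights * frames).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    clean = rng.uniform(0, 255, (48, 64))
    noisy = [clean + rng.normal(0, 10, clean.shape) for _ in range(5)]
    noisy[4] = rng.uniform(0, 255, clean.shape)        # an "outlier" frame
    out = awa_filter(noisy, ref_index=2)
    print("noise std before:", np.std(noisy[2] - clean).round(2),
          "after:", np.std(out - clean).round(2))
```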


IEEE Transactions on Image Processing | 1997

Simultaneous motion estimation and segmentation

M.M. Chang; A.M. Tekalp; M.I. Sezan

We present a Bayesian framework that combines motion (optical flow) estimation and segmentation based on a representation of the motion field as the sum of a parametric field and a residual field. The parameters describing the parametric component are found by a least squares procedure given the best estimates of the motion and segmentation fields. The motion field is updated by estimating the minimum-norm residual field given the best estimate of the parametric field, under the constraint that the motion field be smooth within each segment. The segmentation field is updated to yield the minimum-norm residual field given the best estimate of the motion field, using Gibbsian priors. The solutions to the successive optimization problems are obtained using the highest confidence first (HCF) or iterated conditional modes (ICM) optimization methods. Experimental results on real video are shown.
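
A greatly simplified sketch of the alternation, for illustration only (the Gibbs smoothness prior and the residual-field update are omitted): fit an affine parametric field to each segment by least squares, then relabel each pixel by the parametric field that best explains its flow vector.

```python
import numpy as np

def fit_affine(flow, ys, xs):
    """Least-squares affine motion parameters for one segment.
    Model: u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y."""
    A = np.column_stack([np.ones_like(xs), xs, ys])
    pu, *_ = np.linalg.lstsq(A, flow[ys, xs, 0], rcond=None)
    pv, *_ = np.linalg.lstsq(A, flow[ys, xs, 1], rcond=None)
    return pu, pv

def alternate(flow, labels, n_segments, n_iter=10):
    """Simplified alternation in the spirit of the paper: fit per-segment
    parametric (affine) fields given the segmentation, then relabel each
    pixel by the parametric field that best explains its flow vector."""
    H, W, _ = flow.shape
    ys, xs = np.mgrid[0:H, 0:W]
    params = []
    for _ in range(n_iter):
        params = []
        for s in range(n_segments):
            m = labels == s
            if m.sum() < 6:          # too few pixels to fit 6 parameters
                params.append((np.zeros(3), np.zeros(3)))
                continue
            params.append(fit_affine(flow, ys[m], xs[m]))
        # Residual of each pixel under each segment's parametric field.
        errs = np.stack([
            (flow[..., 0] - (pu[0] + pu[1] * xs + pu[2] * ys)) ** 2 +
            (flow[..., 1] - (pv[0] + pv[1] * xs + pv[2] * ys)) ** 2
            for pu, pv in params])
        labels = errs.argmin(axis=0)
    return labels, params

if __name__ == "__main__":
    H, W = 40, 60
    ys, xs = np.mgrid[0:H, 0:W]
    flow = np.zeros((H, W, 2))
    flow[..., 0] = np.where(xs < 30, 2.0, 0.1 * xs)   # two motion regimes
    flow[..., 1] = np.where(xs < 30, 0.0, 1.0)
    labels0 = (xs >= 15).astype(int)                  # deliberately wrong split
    labels, _ = alternate(flow, labels0, n_segments=2)
    truth = (xs >= 30)
    print("pixels agreeing with the true split:",
          max((labels == truth).mean(), (labels != truth).mean()))
```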


IEEE Transactions on Image Processing | 1992

Efficient multiframe Wiener restoration of blurred and noisy image sequences

M.K. Ozkan; A.T. Erdem; M.I. Sezan; A.M. Tekalp

Computationally efficient multiframe Wiener filtering algorithms that account for both intraframe (spatial) and interframe (temporal) correlations are proposed for restoring image sequences that are degraded by both blur and noise. The first is a general, computationally efficient multiframe filter, the cross-correlated multiframe (CCMF) Wiener filter, which directly utilizes the power and cross power spectra of the frames and requires the inversion of only N×N matrices, where N is the number of frames used in the restoration. In certain special cases the CCMF lends itself to a closed-form solution that does not involve any matrix inversion. A special case is the motion-compensated multiframe (MCMF) filter, where each frame is assumed to be a globally shifted version of the previous frame. In this case, the interframe correlations can be implicitly accounted for using the estimated motion information. Thus the MCMF filter requires neither explicit estimation of cross correlations among the frames nor matrix inversion. Performance and robustness results are given.
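
To make the per-frequency structure concrete, the sketch below applies a multiframe Wiener estimate x_hat(w) = P H^H (H P H^H + s^2 I)^{-1} y(w) independently at each spatial frequency, so only an N×N system is solved per frequency. The flat cross power spectral model and the identical blur across frames are assumptions made for this demo, not the paper's estimated spectra.

```python
import numpy as np

def ccmf_wiener(frames, psf, cross_psd, noise_var):
    """Per-frequency multiframe Wiener sketch in the spirit of the CCMF filter.

    frames:    N blurred, noisy frames (N x H x W)
    psf:       blur kernel, assumed identical for all frames here
    cross_psd: callable giving the N x N cross power spectral matrix of the
               ideal frames at one frequency (an assumed model)
    At every spatial frequency only an N x N system is solved."""
    N, H, W = frames.shape
    Y = np.fft.fft2(frames, axes=(1, 2))
    Hf = np.fft.fft2(psf, s=(H, W))                   # blur frequency response
    Xhat = np.zeros_like(Y)
    for u in range(H):
        for v in range(W):
            h = Hf[u, v]
            P = cross_psd(u, v, N)                    # N x N, Hermitian
            A = (abs(h) ** 2) * P + noise_var * np.eye(N)
            # x_hat = P H^H (H P H^H + Pn)^{-1} y, with H = h * I here
            Xhat[:, u, v] = P @ (np.conj(h) * np.linalg.solve(A, Y[:, u, v]))
    return np.real(np.fft.ifft2(Xhat, axes=(1, 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    N, H, W = 3, 32, 32
    clean = rng.normal(0, 1, (H, W))
    frames = np.stack([clean] * N)                    # fully correlated frames
    psf = np.ones((3, 3)) / 9.0                       # mild box blur
    blurred = np.real(np.fft.ifft2(np.fft.fft2(frames, axes=(1, 2)) *
                                   np.fft.fft2(psf, s=(H, W)), axes=(1, 2)))
    noisy = blurred + rng.normal(0, 0.1, blurred.shape)
    psd = lambda u, v, n: np.full((n, n), 1.0) + 1e-3 * np.eye(n)  # flat model
    restored = ccmf_wiener(noisy, psf, psd, noise_var=0.01)
    print("MSE degraded:", np.mean((noisy[0] - clean) ** 2).round(3),
          "restored:", np.mean((restored[0] - clean) ** 2).round(3))
```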


IEEE Transactions on Image Processing | 1992

Maximum likelihood parametric blur identification based on a continuous spatial domain model

G. Pavlovic; A.M. Tekalp

A formulation for maximum-likelihood (ML) blur identification based on parametric modeling of the blur in the continuous spatial coordinates is proposed. Unlike previous ML blur identification methods based on discrete spatial domain blur models, this formulation makes it possible to find the ML estimate of the extent, as well as other parameters, of arbitrary point spread functions that admit a closed-form parametric description in the continuous coordinates. Experimental results are presented for the cases of 1-D uniform motion blur, 2-D out-of-focus blur, and 2-D truncated Gaussian blur at different signal-to-noise ratios.
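
As an illustration of fitting a continuous-parameter PSF, the sketch below evaluates a generic Gaussian log-likelihood criterion in the frequency domain over a grid of out-of-focus blur radii and picks the minimizer; the assumed signal power spectrum, the grid search, and the known noise variance are simplifications, not the paper's exact estimator.

```python
import numpy as np

def disk_psf(radius, size=15):
    """Out-of-focus blur: uniform disk PSF sampled on a discrete grid,
    with the radius as a continuous parameter."""
    r = np.hypot(*np.mgrid[-(size // 2):size // 2 + 1,
                           -(size // 2):size // 2 + 1])
    psf = (r <= radius).astype(float)
    return psf / psf.sum()

def neg_log_likelihood(image, radius, signal_psd, noise_var):
    """Gaussian ML criterion evaluated in the frequency domain:
    sum over frequencies of log S_y + |Y|^2 / S_y,
    with S_y(w) = |H(w; radius)|^2 S_x(w) + noise_var.
    A generic ML blur-identification criterion, not the paper's exact one."""
    H, W = image.shape
    Y2 = np.abs(np.fft.fft2(image)) ** 2 / (H * W)
    Hf2 = np.abs(np.fft.fft2(disk_psf(radius), s=(H, W))) ** 2
    Sy = Hf2 * signal_psd + noise_var
    return np.sum(np.log(Sy) + Y2 / Sy)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    H = W = 64
    # Synthetic image with a smooth (low-pass) assumed power spectrum.
    fy, fx = np.meshgrid(np.fft.fftfreq(H), np.fft.fftfreq(W), indexing="ij")
    Sx = 100.0 / (1.0 + (fx ** 2 + fy ** 2) * 400.0)
    img = np.real(np.fft.ifft2(np.sqrt(Sx) *
                               np.fft.fft2(rng.normal(0, 1, (H, W)))))
    true_radius, noise_var = 3.0, 0.01
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                   np.fft.fft2(disk_psf(true_radius), s=(H, W))))
    observed = blurred + rng.normal(0, np.sqrt(noise_var), (H, W))
    radii = np.arange(1.0, 6.1, 0.5)
    scores = [neg_log_likelihood(observed, r, Sx, noise_var) for r in radii]
    print("estimated radius:", radii[int(np.argmin(scores))])  # ideally ~3.0
```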


IEEE Transactions on Image Processing | 1994

POCS-based restoration of space-varying blurred images

M.K. Ozkan; A.M. Tekalp; M.I. Sezan

We propose a new method for space-varying image restoration using the method of projection onto convex sets (POCS). The formulation allows the use of a different blurring function at each pixel of the image in a computationally efficient manner. We illustrate the performance of the proposed approach by comparing the new results with those of the ROMKF method on simulated images. We also present results on a real-life image with unknown space-varying out-of-focus blur.

Collaboration


Dive into A.M. Tekalp's collaborations.

Top Co-Authors

Eli Saber, Rochester Institute of Technology
M.I. Sezan, University of Rochester
A.T. Erdem, University of Rochester
Yucel Altunbasak, Georgia Institute of Technology
M.K. Ozkan, University of Rochester
A.J. Patti, University of Rochester
Ahmet Ekin, University of Rochester
Gulcin Caner, University of Rochester