Publications


Featured research published by M.I. Sezan.


International Conference on Acoustics, Speech, and Signal Processing | 1991

Temporally adaptive filtering of noisy image sequences using a robust motion estimation algorithm

M.I. Sezan; M.K. Ozkan; S.V. Fogel

A motion-compensated noise suppression algorithm that employs temporally adaptive filtering along motion trajectories is proposed for image sequences. Filtering is performed via linear minimum mean square error (LMMSE) point estimation. Motion trajectories are determined using a recent motion estimation algorithm, which is capable of performing very well at low signal-to-noise ratios (SNRs). The results suggest that the proposed method is far superior to methods that incorporate implicit or explicit motion compensation, especially in cases of low SNR and/or significant interframe motion.
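
For readers unfamiliar with LMMSE point estimation, the sketch below shows the basic idea for a single pixel, assuming the noise variance is known and the noisy samples along that pixel's motion trajectory have already been gathered; the paper's particular adaptation rule and trajectory estimation are not reproduced here, and the function name is illustrative.

import numpy as np

def lmmse_along_trajectory(samples, noise_var):
    """LMMSE point estimate of the current pixel from noisy samples
    gathered along its motion trajectory (a minimal sketch only)."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean()
    var = samples.var()
    # Estimated signal variance: observed variance minus noise variance, clipped at zero.
    signal_var = max(var - noise_var, 0.0)
    gain = signal_var / var if var > 0 else 0.0
    current = samples[len(samples) // 2]   # noisy sample from the frame being filtered
    return mean + gain * (current - mean)

# Example: five trajectory samples, assumed noise variance of 25
print(lmmse_along_trajectory([112, 108, 130, 115, 110], noise_var=25.0))

When the local signal variance along the trajectory is small (flat, well-tracked regions), the estimate approaches the trajectory mean and noise is strongly suppressed; when it is large (occlusions or tracking failures), the gain approaches one and the current sample is left nearly untouched, which is the sense in which the filtering is temporally adaptive.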


International Conference on Acoustics, Speech, and Signal Processing | 1995

High resolution standards conversion of low resolution video

A.J. Patti; M.I. Sezan; A.M. Tekalp

With the advent of frame grabbers capable of acquiring multiple video frames, a great deal of attention is being directed at creating high-resolution (hi-res) imagery from interlaced or low-resolution (low-res) video. This is a multi-faceted problem, which generally necessitates standards conversion and hi-res reconstruction. Standards conversion is the problem of converting from one spatio-temporal sampling lattice to another, while hi-res image reconstruction involves increasing the spatial sampling density. Also of interest is removing degradations that occur during the image acquisition process. These tasks have all received considerable, yet separate, treatment in the literature. A unifying video formation model is presented which addresses these problems simultaneously. Then, a POCS-based algorithm for generating high-resolution imagery from video is delineated. Results with real imagery are included.
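
To make the POCS idea concrete, here is a toy sketch that applies data-consistency projections only, under a deliberately simplified observation model assumed for illustration: each low-resolution pixel is the average of a scale x scale block of the high-resolution image, with no motion, interlacing, or sensor blur, unlike the unified video formation model in the paper. The function and parameter names are hypothetical.

import numpy as np

def pocs_consistent_upsample(lowres, scale, n_iters=20, tol=1.0):
    """Toy POCS loop: repeatedly project the hi-res estimate onto each
    constraint set C_ij = {x : |mean(block_ij of x) - lowres[i, j]| <= tol}."""
    h, w = lowres.shape
    x = np.kron(lowres, np.ones((scale, scale)))   # initial hi-res estimate
    for _ in range(n_iters):
        for i in range(h):
            for j in range(w):
                block = x[i*scale:(i+1)*scale, j*scale:(j+1)*scale]  # view into x
                r = block.mean() - lowres[i, j]
                if abs(r) > tol:
                    # minimum-norm correction: spread the excess residual evenly over the block
                    block -= r - np.sign(r) * tol
    return x

Each projection is the smallest change to the estimate that brings the corresponding simulated low-resolution pixel back within tolerance of its observation; the published algorithm uses the same mechanism, but with constraint sets derived from its full video formation model.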


International Conference on Acoustics, Speech, and Signal Processing | 1993

Motion-field segmentation using an adaptive MAP criterion

M.M. Chang; A.M. Tekalp; M.I. Sezan

The authors propose a general formulation for adaptive, maximum a posteriori probability (MAP) segmentation of image sequences on the basis of interframe displacement and gray level information. The segmentation assigns pixel sites to the independently moving objects in the scene. In this formulation, two methods for characterizing the conditional probability distribution of the data given the segmentation process are proposed. The a priori probability distribution is characterized on the basis of a Gibbsian model of the segmentation process, where a novel motion-compensated spatiotemporal neighborhood system is defined. The proposed formulation adapts to the displacement field accuracy by appropriately adjusting the relative emphasis on the estimated displacement field, gray level information, and prior knowledge implied by the Gibbsian model. Experiments have been performed with a five-frame simulated sequence containing translation and rotation.
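
The sketch below illustrates only the MAP/Gibbs machinery in its simplest form: an iterated conditional modes (ICM) pass with a Potts-style spatial prior, taking as input hypothetical per-pixel squared displaced-frame-difference (DFD) residuals for each candidate object. The paper's adaptive weighting of displacement and gray-level terms and its motion-compensated spatiotemporal neighborhood are not reproduced.

import numpy as np

def icm_motion_segmentation(dfd_residuals, beta=1.0, n_iters=5):
    """dfd_residuals has shape (K, H, W): squared DFD of each pixel under
    each of K candidate motions. Returns an (H, W) label field that locally
    minimizes data cost + beta * (number of disagreeing 4-neighbors)."""
    dfd = np.asarray(dfd_residuals, dtype=float)
    K, H, W = dfd.shape
    labels = dfd.argmin(axis=0)                      # data-only initialization
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                costs = dfd[:, i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        costs += beta * (np.arange(K) != labels[ni, nj])
                labels[i, j] = costs.argmin()
    return labels

Raising beta puts more weight on the Gibbsian prior (smoother, more coherent regions), while lowering it trusts the displacement and gray-level data more; the adaptivity described in the abstract amounts to adjusting this balance according to how reliable the estimated displacement field is.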


International Conference on Acoustics, Speech, and Signal Processing | 1991

On modeling the focus blur in image restoration

M.I. Sezan; G. Pavlovic; A.M. Tekalp; A.T. Erdem

The point spread function (PSF) of a defocused lens, in the continuous spatial coordinates, can be modeled using the principles of either geometrical optics or physical optics. The authors compare the PSFs and the corresponding optical transfer functions obtained on the basis of these two models under different amounts of blur and different imaging sensor resolutions, in the discrete spatial domain. Different approximations for the discretization of the PSF are considered. The effect of using geometrical versus physical optics-based models on the quality of restored images is investigated experimentally in the case of images recorded by a charge-coupled device (CCD) camera.
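
For reference, the geometrical-optics model reduces to a uniform disk ("pillbox") PSF whose radius grows with the amount of defocus; a rough sketch of a naive pixel-center discretization and the resulting optical transfer function is given below. The physical-optics PSF requires evaluating a diffraction integral and is not reproduced, and the finer discretizations compared in the paper (e.g., integrating the disk over each pixel's area) are also omitted.

import numpy as np

def pillbox_psf(radius, size):
    """Geometrical-optics defocus PSF: a uniform disk of the given radius
    (in pixels), sampled at pixel centers on a size x size grid and
    normalized to unit sum. A coarse discretization for illustration."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    psf = ((x**2 + y**2) <= radius**2).astype(float)
    return psf / psf.sum()

psf = pillbox_psf(radius=3.0, size=32)
otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))   # optical transfer function
print(np.abs(otf).max())   # DC gain is 1 for a normalized PSF

The pillbox OTF has near-zero rings whose positions depend on the blur radius, so a mismatch between the assumed PSF model (or its discretization) and the true one can noticeably affect restoration quality, which is the comparison the paper carries out.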


International Conference on Acoustics, Speech, and Signal Processing | 1993

New approaches for space-variant image restoration

A.J. Patti; M.K. Ozkan; A.M. Tekalp; M.I. Sezan

The problem of restoring images degraded by space-variant blurs and noise is addressed. Two approaches, one based on Kalman filtering and the other on projection onto convex sets (POCS), are proposed. The Kalman filtering approach modifies the image model used in the usual reduced-order model Kalman filtering (ROMKF) approach to obtain a more accurate representation of the image distribution. The proposed POCS-based approach utilizes novel space-domain constraints defined in terms of the space-varying blur function. Both approaches have been shown to effectively restore images degraded by LSV (linear space-variant) blur functions in the presence of additive noise.
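
The core ingredient of such a POCS approach is the projection onto a residual constraint set attached to a single observed pixel; a minimal sketch is shown below with the image flattened to a vector, where h_row is that pixel's (possibly space-variant) blur row and delta its noise-derived bound. This is only one building block, written in my own notation rather than the paper's.

import numpy as np

def project_residual_constraint(x, h_row, y_i, delta):
    """Project image vector x onto C_i = {x : |y_i - h_row.x| <= delta},
    the space-domain constraint set defined by one observed pixel y_i and
    its own (possibly space-variant) blur row h_row."""
    r = y_i - h_row @ x
    if abs(r) <= delta:
        return x                       # already consistent with this observation
    # smallest move of x that brings the residual back to the bound
    step = (r - np.sign(r) * delta) / (h_row @ h_row)
    return x + step * h_row

Cycling such projections over all observed pixels drives the estimate toward consistency with every observation; since each constraint uses that pixel's own blur row, nothing in the formulation requires the blur to be spatially invariant.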


International Conference on Acoustics, Speech, and Signal Processing | 1992

Motion-compensated multiframe Wiener restoration of blurred and noisy image sequences

A.T. Erdem; M.I. Sezan; M.K. Ozkan

A computationally efficient multiframe LMMSE filtering algorithm, the motion-compensated multiframe (MCMF) Wiener filter, for restoring image sequences that are degraded by both blur and noise is proposed. MCMF Wiener filter applies to the cases where each frame of the ideal image sequence can be expressed as a globally shifted version of its previous frame. As opposed to single-frame filtering, the MCMF Wiener filter accounts for interframe (temporal) correlations as well as intraframe (spatial) correlations in restoring a given image sequence. The MCMF filter requires neither the explicit estimation of cross correlations among the frames, nor any matrix inversion. It accounts for the interframe correlations implicitly by using the estimated interframe motion information. The results of an extensive study on the performance and robustness of the proposed algorithm are presented.<<ETX>>
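
A frequency-domain sketch consistent with this description is given below, assuming each ideal frame is the reference frame globally shifted by a known displacement, a known blur OTF, and known signal and noise power spectra P_x and P_n (the paper estimates the needed quantities rather than assuming them, and the shift-phase sign convention here is my own).

import numpy as np

def mcmf_wiener(frames, shifts, blur_otf, P_x, P_n):
    """Motion-compensated multiframe Wiener sketch. Model per frequency:
    Y_k = H_k X + N_k with H_k = blur_otf * phase ramp of the k-th global
    shift; the LMMSE combination below needs no cross-correlation estimates
    and no matrix inversion."""
    rows, cols = frames[0].shape
    v, u = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    num = np.zeros((rows, cols), dtype=complex)
    den = P_n / P_x
    for frame, (dy, dx) in zip(frames, shifts):
        H_k = blur_otf * np.exp(-2j * np.pi * (u * dy + v * dx))
        num += np.conj(H_k) * np.fft.fft2(frame)
        den = den + np.abs(H_k) ** 2
    return np.real(np.fft.ifft2(num / den))

Because the only interframe information used is the set of global shifts, the temporal correlations enter purely through the phase ramps, which is how this kind of filter avoids explicit cross-correlation estimation and matrix inversion.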


International Conference on Acoustics, Speech, and Signal Processing | 1993

A set-theoretic phase unwrapping technique for least-squares image reconstruction from the higher-order spectrum

M.I. Sezan; A.T. Erdem

The authors propose a novel technique for unwrapping the phase of the higher-order spectrum (HOS) of an image for the purpose of reconstructing the Fourier phase of the image. This technique solves the combined problem of phase unwrapping and reconstruction in the least-squares sense. It uses all distinct HOS samples, and is based on alternating projections onto two constraint sets. The results can easily be extended to one-dimensional and higher-dimensional signals.
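
The constraint sets built from the HOS phase are specific to that problem and are not reproduced here, but the underlying alternating-projections mechanism is easy to illustrate; the toy example below alternates projections between two hypothetical affine constraint sets in the plane and converges to a point satisfying both.

import numpy as np

def project_hyperplane(x, a, b):
    """Orthogonal projection of x onto the hyperplane {z : a.z = b}."""
    return x - (a @ x - b) / (a @ a) * a

# Two hypothetical constraint sets (the paper's sets are built from wrapped
# higher-order-spectrum phase samples instead).
a1, b1 = np.array([1.0, 2.0]), 4.0
a2, b2 = np.array([3.0, -1.0]), 1.0
x = np.array([5.0, -3.0])
for _ in range(50):
    x = project_hyperplane(x, a1, b1)
    x = project_hyperplane(x, a2, b2)
print(x)   # approaches the intersection point (6/7, 11/7)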


Archive | 1993

Multiframe Wiener Restoration of Image Sequences

Mehmet K. Ozkan; M.I. Sezan; A.T. Erdem; A.M. Tekalp

Imagine an image sequence whose frames are both blurred and noisy. Three frames of such a sequence are shown in Fig. 13.1, where each frame suffers from focus blur as well as additive white Gaussian noise. Due to blur and noise contamination, the amount of information that a human observer or a machine can extract from this sequence is rather limited. It is therefore desirable to restore this image sequence, by which we mean estimating the original sequence from its blurred and noisy rendition. (The original sequence is shown in Fig. 13.2.)


Archive | 1995

Method for multiframe Wiener restoration of noisy and blurred image sequences

A.T. Erdem; M.I. Sezan; Mehmet K. Ozkan


Archive | 1991

Method for temporally adaptive filtering of frames of a noisy image sequence using motion estimation

M.I. Sezan; Mehmet K. Ozkan; Sergei V. Fogel

Collaboration


Dive into M.I. Sezan's collaborations.

Top Co-Authors

A.M. Tekalp (University of Rochester)
A.J. Patti (University of Rochester)
M.K. Ozkan (University of Rochester)
M.M. Chang (University of Rochester)