Sevinc Bayram
New York University
Publications
Featured research published by Sevinc Bayram.
International Conference on Acoustics, Speech, and Signal Processing | 2009
Sevinc Bayram; Husrev Taha Sencar; Nasir D. Memon
Copy-move forgery is a specific type of image tampering in which a part of the image is copied and pasted onto another part of the same image. In this paper, we propose a new approach for detecting copy-move forgery in digital images that is considerably more robust to lossy compression, scaling, and rotation manipulations. Also, to reduce the computational complexity of detecting duplicated image regions, we propose to use counting Bloom filters as an alternative to lexicographic sorting, a common component of most copy-move forgery detection schemes. Our experimental results show that the proposed features can detect duplicated regions in images very accurately, even when the copied region has undergone severe image manipulations. In addition, we observe that the use of counting Bloom filters offers a considerable improvement in time efficiency at the expense of a slight reduction in robustness.
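The counting-Bloom-filter idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: each block's feature vector is quantized into a byte key and inserted into a counting Bloom filter, and any key whose counter exceeds one marks a candidate duplicated region for further verification. The hash construction, table size, and quantization step are illustrative choices.

```python
import hashlib
import numpy as np


class CountingBloomFilter:
    """Counting Bloom filter over byte-string keys."""

    def __init__(self, size=1 << 20, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.counts = np.zeros(size, dtype=np.uint16)

    def _indices(self, key: bytes):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key: bytes) -> int:
        """Insert key; return the smallest counter afterwards (an upper
        bound on how many times this key has been inserted)."""
        idx = list(self._indices(key))
        for j in idx:
            self.counts[j] += 1
        return int(self.counts[idx].min())


def candidate_duplicates(block_features, quant_step=4):
    """block_features: iterable of (position, feature_vector) pairs.
    Returns positions whose quantized feature was already seen, i.e.
    candidate copy-move regions to verify further."""
    cbf = CountingBloomFilter()
    candidates = []
    for pos, feat in block_features:
        key = np.round(np.asarray(feat, dtype=float) / quant_step)
        key = key.astype(np.int16).tobytes()
        if cbf.add(key) > 1:            # this feature occurred before
            candidates.append(pos)
    return candidates
```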
International Conference on Image Processing | 2005
Sevinc Bayram; Husrev T. Sencar; Nasir D. Memon; Ismail Avcibas
In this work, we focus on the blind source camera identification problem, extending our earlier results in the direction of M. Kharrazi et al. (2004). The interpolation of the color surface of an image due to the use of a color filter array (CFA) forms the basis of the paper. We propose to identify the source camera of an image based on traces of the proprietary interpolation algorithm deployed by the digital camera. For this purpose, a set of image characteristics is defined and then used in conjunction with a support vector machine based multi-class classifier to determine the originating digital camera. We also provide initial results on identifying the source among two and three digital cameras.
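As a rough illustration of the classification stage only (not the paper's pipeline), the sketch below trains a multi-class support vector machine on per-image feature vectors that are assumed to summarize CFA-interpolation traces; the random arrays stand in for real features extracted elsewhere.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-ins for per-image CFA-trace feature vectors and camera labels;
# in practice these would come from a real feature extractor.
rng = np.random.default_rng(0)
features = rng.normal(size=(300, 24))
labels = rng.integers(0, 3, size=300)          # three candidate cameras

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

# Multi-class SVM (one-vs-one under the hood) on the CFA-trace features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)
print("source-camera accuracy:", clf.score(X_test, y_test))
```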
Journal of Electronic Imaging | 2006
Sevinc Bayram; Ismail Avcibas; Bülent Sankur; Nasir D. Memon
Techniques and methodologies for validating the authenticity of digital images and testing for the presence of doctoring and manipulation operations have recently attracted attention. We review three categories of forensic features and discuss the design of classifiers that distinguish between doctored and original images. The performance of these classifiers is analyzed with respect to selected controlled manipulations as well as uncontrolled manipulations. The tools for image manipulation detection are treated under feature fusion and decision fusion scenarios.
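The two fusion scenarios can be contrasted with a small sketch; the classifiers and the random stand-in feature sets below are purely illustrative and not the paper's design.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)               # 0 = original, 1 = doctored
# Three stand-in forensic feature sets of different dimensionality.
feature_sets = [rng.normal(size=(200, d)) for d in (10, 18, 6)]

# Feature fusion: concatenate all feature sets, train a single classifier.
feature_fused_clf = SVC(probability=True).fit(np.hstack(feature_sets), y)

# Decision fusion: one classifier per feature set; average their
# doctored-probabilities and threshold the combined score.
per_set_probs = [SVC(probability=True).fit(X, y).predict_proba(X)[:, 1]
                 for X in feature_sets]
fused_decision = (np.mean(per_set_probs, axis=0) > 0.5).astype(int)
```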
International Conference on Image Processing | 2004
Ismail Avcibas; Sevinc Bayram; Nasir D. Memon; Mahalingam Ramkumar; Bülent Sankur
In this paper, we present a framework for digital image forensics. Based on the assumptions that some processing operations must be performed on an image before it is doctored and that such processing introduces a measurable distortion, we design classifiers that discriminate between original and processed images. We propose a novel way of measuring the distortion between two images, one being the original and the other a processed version. These measurements are used as features in classifier design. Using these classifiers, we test whether a suspicious part of a given image has been processed with a particular method. Experimental results show that we are able to tell with high accuracy whether some part of an image has undergone a particular processing method or a combination of processing methods.
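The following hedged sketch conveys the general idea, though the actual distortion measures in the paper differ: re-apply candidate processing operations to an image and summarize the resulting distortion with a few scalar measures that then serve as classifier features.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter


def distortion_features(image: np.ndarray) -> np.ndarray:
    """image: 2-D grayscale array in [0, 255]. Returns scalar distortion
    measures between the image and two re-processed versions of itself,
    to be used as features for an original-vs-processed classifier."""
    image = image.astype(float)
    feats = []
    for reprocess in (lambda x: gaussian_filter(x, sigma=1.0),
                      lambda x: median_filter(x, size=3)):
        diff = image - reprocess(image)
        mse = np.mean(diff ** 2)
        feats.extend([
            mse,                                        # mean squared distortion
            np.mean(np.abs(diff)),                      # mean absolute distortion
            10 * np.log10(255.0 ** 2 / (mse + 1e-9)),   # PSNR-style measure
        ])
    return np.asarray(feats)
```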
Digital Investigation | 2008
Sevinc Bayram; Husrev T. Sencar; Nasir D. Memon
We utilize traces of the demosaicing operation in digital cameras to identify the source camera-model of a digital image. To identify demosaicing artifacts associated with different camera-models, we employ two methods and define a set of image characteristics that are used as features in designing classifiers that distinguish between digital camera-models. The first method estimates the demosaicing parameters assuming a linear model, while the second extracts periodicity features to detect simple forms of demosaicing. To determine the reliability of the designated image features in differentiating the source camera-model, we consider both images taken under similar settings of fixed scenes and images taken under independent conditions. To show how these methods can be used as forensic tools, we consider several scenarios in which we try to (i) determine which camera-model was used to capture a given image among three, four, and five camera-models, (ii) decide whether or not a given image was taken by a particular camera-model among a very large number of camera-models (on the order of hundreds), and (iii) more reliably identify the individual camera that captured a given image by combining demosaicing artifacts with the noise characteristics of the camera's imaging sensor.
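The first method's linear-model assumption can be illustrated as follows. This is a simplified sketch, not the paper's estimator: assuming a Bayer-like CFA layout for the green channel, the interpolation weights at non-sampled sites are recovered by least squares and used as camera-model features.

```python
import numpy as np


def estimate_green_weights(green: np.ndarray) -> np.ndarray:
    """green: 2-D green-channel array. Assumes green samples sit at the
    (row + col) even positions of a Bayer grid (an illustrative choice);
    recovers the four linear interpolation weights used at other sites."""
    samples, targets = [], []
    h, w = green.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if (r + c) % 2 == 1:              # assumed interpolated site
                samples.append([green[r - 1, c], green[r + 1, c],
                                green[r, c - 1], green[r, c + 1]])
                targets.append(green[r, c])
    A = np.asarray(samples, dtype=float)
    b = np.asarray(targets, dtype=float)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights                             # used as camera-model features
```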
International Conference on Multimedia and Expo | 2007
Yagiz Sutcu; Sevinc Bayram; Husrev T. Sencar; Nasir D. Memon
In earlier work, a novel method for identifying the source camera of a digital image was proposed. The method is based on first extracting the imaging sensor's pattern noise from many images and later verifying its presence in a given image through a correlative procedure. In this paper, we investigate the performance of this method in a more realistic setting and provide results concerning its detection performance. To improve the applicability of the method as a forensic tool, we propose an enhancement that also verifies that the class properties of the image in question agree with those of the camera. For this purpose, we identify and compare characteristics due to the demosaicing operation. Our results show that the enhanced method offers a significant improvement in performance.
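The pattern-noise idea behind the baseline method can be sketched as below; a Gaussian denoiser stands in for the wavelet-based filter used in practice, and the correlation threshold is a placeholder rather than a calibrated value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def noise_residual(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Image minus a denoised version of itself; the residual carries
    the sensor's pattern noise (plus some scene leakage)."""
    image = image.astype(float)
    return image - gaussian_filter(image, sigma)


def camera_fingerprint(images) -> np.ndarray:
    """Average the noise residuals of many images from the same camera."""
    return np.mean([noise_residual(img) for img in images], axis=0)


def matches_camera(image: np.ndarray, fingerprint: np.ndarray,
                   threshold: float = 0.01) -> bool:
    """Normalized correlation between the test image's residual and the
    camera fingerprint; the decision threshold here is a placeholder."""
    w = noise_residual(image).ravel()
    f = fingerprint.ravel()
    w = (w - w.mean()) / (w.std() + 1e-12)
    f = (f - f.mean()) / (f.std() + 1e-12)
    return float(np.mean(w * f)) > threshold
```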
International Conference on Image Processing | 2007
Ahmet Emir Dirik; Sevinc Bayram; Husrev T. Sencar; Nasir D. Memon
Discrimination of computer-generated images from real images is becoming increasingly important. In this paper, we propose new features to distinguish computer-generated images from real images. The proposed features are based on differences in the acquisition process of the images. More specifically, traces of demosaicing and chromatic aberration are used to differentiate computer-generated images from digital camera images. It is observed that the former features perform very well on high-quality images, whereas the latter features perform consistently across a wide range of compression levels. The experimental results show that the proposed features are capable of improving the accuracy of state-of-the-art techniques.
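One crude way to expose the demosaicing periodicity exploited above is sketched below; it illustrates the general idea rather than the paper's features: demosaiced camera images show asymmetric neighbor-prediction error on the two interleaved lattices of an assumed 2x2 CFA grid, whereas rendered images typically do not.

```python
import numpy as np


def demosaicing_periodicity_score(channel: np.ndarray) -> float:
    """channel: 2-D color-channel array. Compares the neighbor-prediction
    error on the two interleaved lattices of an assumed 2x2 CFA grid;
    scores near 1 suggest a rendered image, larger scores suggest a
    demosaiced camera image."""
    channel = channel.astype(float)
    pred = 0.25 * (channel[:-2, 1:-1] + channel[2:, 1:-1] +
                   channel[1:-1, :-2] + channel[1:-1, 2:])
    err = np.abs(channel[1:-1, 1:-1] - pred)
    lattice = (np.add.outer(np.arange(1, channel.shape[0] - 1),
                            np.arange(1, channel.shape[1] - 1)) % 2).astype(bool)
    e_a = err[lattice].mean() + 1e-9
    e_b = err[~lattice].mean() + 1e-9
    return float(max(e_a, e_b) / min(e_a, e_b))
```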
Clinical Neurophysiology | 2012
Junshui Ma; Peining Tao; Sevinc Bayram; Vladimir Svetnik
OBJECTIVE To study the characteristics of unintentional muscle activities in clinical EEG, and to develop a high-throughput method to reduce them in order to better reveal drug or biological effects on EEG. METHODS Two clinical EEG datasets are involved. Pure muscle signals are extracted from the EEG using Independent Component Analysis (ICA) to study their characteristics. A high-throughput method called ICA-SR is introduced, based on a new feature named the Spectral Ratio (SR). RESULTS The spectral and temporal characteristics of the muscle artifacts are illustrated using representative muscle signals. The spatial characteristics are presented at both the group and subject levels, and are consistent under three different electrode-reference methodologies. In an objective comparison with an existing method, ICA-SR is shown to reduce more artifacts while introducing less distortion to the EEG. Its effectiveness is further demonstrated on real clinical EEG with the help of a CO2-inhalation EEG recording session. CONCLUSION The characteristics of unintentional muscle activities align with the reported characteristics of controlled muscle activities. Artifact spatial characteristics can be EEG-equipment dependent. The ICA-SR method can effectively and efficiently process clinical EEG. SIGNIFICANCE Armed with advanced signal processing algorithms, this study expands our knowledge of muscle activities in EEG from muscle-controlled experiments to general clinical trials. The ICA-SR method provides an urgently needed solution with validated performance for efficiently processing large volumes of clinical EEG.
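A schematic sketch of the ICA-SR idea follows; the frequency bands, threshold, and ICA settings are illustrative and not taken from the paper: decompose the EEG with ICA, compute a high-to-low-band spectral ratio per component, and suppress muscle-like components before reconstructing the recording.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA


def ica_sr_clean(eeg: np.ndarray, fs: float,
                 low_band=(1.0, 15.0), high_band=(30.0, 45.0),
                 sr_threshold=1.0) -> np.ndarray:
    """eeg: (n_samples, n_channels) array. Decompose with ICA, compute a
    spectral ratio (high-band / low-band power) per component, zero out
    muscle-like components, and reconstruct the EEG."""
    ica = FastICA(n_components=eeg.shape[1], whiten="unit-variance",
                  max_iter=1000, random_state=0)
    sources = ica.fit_transform(eeg)                  # (n_samples, n_components)

    for k in range(sources.shape[1]):
        freqs, psd = welch(sources[:, k], fs=fs,
                           nperseg=min(1024, sources.shape[0]))
        p_high = psd[(freqs >= high_band[0]) & (freqs <= high_band[1])].mean()
        p_low = psd[(freqs >= low_band[0]) & (freqs <= low_band[1])].mean()
        if p_high / (p_low + 1e-12) > sr_threshold:   # muscle-like spectrum
            sources[:, k] = 0.0                       # suppress this component

    return ica.inverse_transform(sources)
```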
Journal of Neuroscience Methods | 2011
Junshui Ma; Sevinc Bayram; Peining Tao; Vladimir Svetnik
After a review of the ocular artifact reduction literature, a high-throughput method designed to reduce ocular artifacts in multichannel continuous EEG recordings acquired at clinical EEG laboratories worldwide is proposed. The proposed method belongs to the category of component-based methods and does not rely on any electrooculography (EOG) signals. Based on the concept that all ocular artifact components exist in a signal component subspace, the method can uniformly handle all types of ocular artifacts, including eye-blinks, saccades, and other eye movements, by automatically identifying ocular components among the decomposed signal components. This study also proposes an improved strategy to objectively and quantitatively evaluate artifact reduction methods. The evaluation strategy uses real EEG signals to synthesize realistic simulated datasets with different amounts of ocular artifacts. The simulated datasets enable us to objectively demonstrate that the proposed method outperforms some existing methods when no high-quality EOG signals are available. Moreover, the results on the simulated datasets improve our understanding of the signal decomposition algorithms involved and provide insight into the inconsistent performance of different methods reported in the literature. The proposed method was also applied to two independent clinical EEG datasets involving 28 volunteers and over 1000 EEG recordings. This effort further confirms that the proposed method can effectively reduce ocular artifacts in large clinical EEG datasets in a high-throughput fashion.
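The evaluation strategy can be sketched as below, with the artifact template, contamination levels, and error metric as illustrative placeholders: superimpose a known ocular signal onto artifact-free EEG at several contamination levels, run a candidate reduction method, and score it against the known clean EEG.

```python
import numpy as np


def evaluate_reduction_method(clean_eeg: np.ndarray,
                              ocular_template: np.ndarray,
                              reduce_fn,
                              levels=(0.5, 1.0, 2.0)) -> dict:
    """clean_eeg, ocular_template: (n_samples, n_channels) arrays.
    reduce_fn: callable mapping contaminated EEG to cleaned EEG.
    Returns the residual error at each contamination level."""
    scores = {}
    for level in levels:
        contaminated = clean_eeg + level * ocular_template
        cleaned = reduce_fn(contaminated)
        rmse = np.sqrt(np.mean((cleaned - clean_eeg) ** 2))
        scores[level] = float(rmse)       # lower means better artifact removal
    return scores
```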
Proceedings of SPIE | 2010
Sevinc Bayram; H. Taha Sencar; Nasir D. Memon
Several promising techniques have recently been proposed to bind an image or video to its source acquisition device. These techniques have been intensively studied to address performance issues, but the computational efficiency aspect has not been given due consideration. Considering very large databases, in this paper we focus on the efficiency of the sensor-fingerprint-based source device identification technique. We propose a novel scheme based on tree-structured vector quantization that offers a logarithmic improvement in search complexity compared to the conventional approach. To demonstrate the effectiveness of the proposed approach, several experiments are conducted. Our results show that with the proposed scheme a major improvement in search time can be achieved.
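The tree-structured vector quantization idea can be sketched as follows; the 2-means splitting, leaf size, and distance measure are illustrative choices, not the paper's design: fingerprints are recursively partitioned into a binary tree, a query descends to the nearer centroid at each level, and only the fingerprints in the reached leaf need a full correlation check, giving roughly logarithmic search cost.

```python
import numpy as np
from sklearn.cluster import KMeans


def build_tsvq(fingerprints: np.ndarray, ids, leaf_size=4):
    """Recursively split fingerprints with 2-means into a binary tree."""
    ids = np.asarray(ids)
    if len(ids) <= leaf_size:
        return {"leaf": True, "ids": ids.tolist()}
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(fingerprints)
    mask = km.labels_ == 0
    if mask.all() or (~mask).all():        # degenerate split: stop here
        return {"leaf": True, "ids": ids.tolist()}
    return {"leaf": False,
            "centroids": km.cluster_centers_,
            "children": [build_tsvq(fingerprints[mask], ids[mask], leaf_size),
                         build_tsvq(fingerprints[~mask], ids[~mask], leaf_size)]}


def tsvq_search(node, query: np.ndarray):
    """Descend to the nearer centroid at each level; return the ids in
    the reached leaf as candidates for a full correlation check."""
    while not node["leaf"]:
        dist = np.linalg.norm(node["centroids"] - query, axis=1)
        node = node["children"][int(np.argmin(dist))]
    return node["ids"]
```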