Bülent Sankur
Boğaziçi University
Publications
Featured research published by Bülent Sankur.
Journal of Electronic Imaging | 2004
Mehmet Sezgin; Bülent Sankur
We conduct an exhaustive survey of image thresholding methods, categorize them, express their formulas under a uniform notation, and finally carry out a comparison of their performance. The thresholding methods are categorized according to the information they exploit, such as histogram shape, measurement space clustering, entropy, object attributes, spatial correlation, and local gray-level surface. Forty selected thresholding methods from various categories are compared in the context of nondestructive testing applications as well as for document images. The comparison is based on combined performance measures. We identify the thresholding algorithms that perform uniformly better over nondestructive testing and document image applications.
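As a concrete illustration of the measurement-space-clustering category above, the sketch below computes Otsu's threshold, one of the classical methods covered by the survey, for an 8-bit image; NumPy is assumed, and details such as tie-breaking among equal-variance thresholds are simplified.

```python
import numpy as np

def otsu_threshold(image: np.ndarray) -> int:
    """Gray level maximizing the between-class variance (Otsu's method)."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # probability of the "dark" class
    mu = np.cumsum(prob * np.arange(256))    # cumulative first moment
    mu_t = mu[-1]                            # global mean gray level
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))
```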
Journal of Electronic Imaging | 2002
Ismail Avcibas; Bülent Sankur; Khalid Sayood
In this work we comprehensively categorize image quality measures, extend measures defined for gray scale images to their multispectral case, and propose novel image quality measures. They are categorized into pixel difference-based, correlation-based, edge-based, spectral-based, context-based and human visual system (HVS)-based measures. Furthermore, we compare these measures statistically for still image compression applications. The statistical behavior of the measures and their sensitivity to coding artifacts are investigated via analysis of variance techniques. Their similarities or differences are illustrated by plotting their Kohonen maps. Measures that give consistent scores across an image class and that are sensitive to coding artifacts are pointed out. It was found that measures based on the phase spectrum, the multiresolution distance or the HVS-filtered mean square error are computationally simple and are more responsive to coding artifacts. We also demonstrate the utility of combining selected quality metrics in building a steganalysis tool.
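To make two of these categories concrete, the following sketch computes a pixel difference-based measure (mean squared error) and a simple spectral measure built on the Fourier phase; the normalizations and the multispectral extensions used in the paper are not reproduced here.

```python
import numpy as np

def mse(ref: np.ndarray, dist: np.ndarray) -> float:
    """Pixel difference-based measure: mean squared error."""
    return float(np.mean((ref.astype(float) - dist.astype(float)) ** 2))

def phase_distance(ref: np.ndarray, dist: np.ndarray) -> float:
    """Spectral measure: mean squared difference of Fourier phase spectra
    (phase wrapping is ignored in this simplified version)."""
    p_ref = np.angle(np.fft.fft2(ref.astype(float)))
    p_dist = np.angle(np.fft.fft2(dist.astype(float)))
    return float(np.mean((p_ref - p_dist) ** 2))
```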
IEEE Transactions on Image Processing | 2006
Erdem Yörük; Ender Konukoglu; Bülent Sankur; Jérôme Darbon
We address the problem of person recognition and verification based on hand images. The system is based on images of the subjects' right hands, captured by a flatbed scanner in an unconstrained pose at 45 dpi. In a preprocessing stage of the algorithm, the silhouettes of the hand images are registered to a fixed pose, which involves both rotation and translation of the hand and, separately, of the individual fingers. Two feature sets have been comparatively assessed: the Hausdorff distance of the hand contours, and independent component features of the hand silhouette images. Both the classification and the verification performances are found to be very satisfactory, as it is shown that, at least for groups of about five hundred subjects, hand-based recognition is a viable secure access control scheme.
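A minimal sketch of the first feature set's core computation, the symmetric Hausdorff distance between two registered contours given as point sets; NumPy is assumed, and SciPy's scipy.spatial.distance.directed_hausdorff offers an equivalent building block.

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets a (N, 2) and b (M, 2),
    e.g. sampled hand-contour coordinates after pose registration."""
    # All pairwise Euclidean distances between the two contours.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    h_ab = d.min(axis=1).max()   # directed distance from a to b
    h_ba = d.min(axis=0).max()   # directed distance from b to a
    return float(max(h_ab, h_ba))
```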
IEEE International Conference on Information Technology and Applications in Biomedicine | 2000
Gouenou Coatrieux; H. Maitre; Bülent Sankur; Y. Rolland; R. Collorec
Because of the importance of security issues in the management of medical information, we suggest the use of watermarking techniques to complement the existing measures for protecting medical images. We discuss the requirements such a system must satisfy to be accepted by medical staff, and its complementary role with respect to existing security systems. We present different scenarios: the first devoted to the authentication and tracing of the images, the second to the integrity control of the patient's record.
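As a loose illustration of the integrity-control scenario only (not the scheme discussed in the paper), a fragile watermark can embed a hash of the image content into the least significant bits, so that any later modification of the content breaks verification; an 8-bit image with at least 256 pixels is assumed.

```python
import hashlib
import numpy as np

def embed_integrity_mark(img: np.ndarray) -> np.ndarray:
    """Write a SHA-256 hash of the 7 most significant bit planes into the
    LSBs of the first 256 pixels. Illustrative fragile watermark only."""
    content = img & 0xFE                      # 7 MSB planes, LSBs zeroed
    digest = hashlib.sha256(content.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    marked = content.ravel()
    marked[: bits.size] |= bits               # hash bits go into the LSB plane
    return marked.reshape(img.shape)

def verify_integrity_mark(img: np.ndarray) -> bool:
    """Recompute the content hash and compare it with the embedded LSBs."""
    digest = hashlib.sha256((img & 0xFE).tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bool(np.array_equal(img.ravel()[: bits.size] & 1, bits))
```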
IEEE Transactions on Multimedia | 2006
Baris Coskun; Bülent Sankur; Nasir D. Memon
Identification and verification of a video clip via its fingerprint find applications in video browsing, database search and security. For this purpose, the video sequence must be collapsed into a short fingerprint using a robust hash function based on signal processing operations. We propose two robust hash algorithms for video, both based on the discrete cosine transform (DCT): one on the classical basis set and the other on a novel randomized basis set (RBT). The robustness and randomness properties of the proposed hash functions are investigated in detail. It is found that these hash functions are resistant to signal processing and transmission impairments, and therefore can be instrumental in building database search, broadcast monitoring and watermarking applications for video. The DCT hash is more robust, but lacks a security aspect, as it is easy to find different video clips with the same hash value. The RBT-based hash, being secret-key based, does not allow this and is more secure, at the cost of a slight loss in the receiver operating curves.
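A rough sketch of the classical-basis variant: collapse the clip along time, keep a block of low-frequency DCT coefficients, and binarize them against their median. Frame/block partitioning and the key-driven randomized bases of the RBT variant are omitted; SciPy's dctn is assumed available.

```python
import numpy as np
from scipy.fft import dctn

def dct_video_hash(frames: np.ndarray, k: int = 8) -> np.ndarray:
    """Binary hash of a clip given as a (T, H, W) gray-level array."""
    mean_frame = frames.astype(float).mean(axis=0)   # collapse the time axis
    coeffs = dctn(mean_frame, norm="ortho")[:k, :k]  # low-frequency k x k block
    coeffs = coeffs.ravel()[1:]                      # drop the DC coefficient
    return (coeffs > np.median(coeffs)).astype(np.uint8)
```

Two perceptually similar clips should then agree on most hash bits, so matching reduces to a Hamming-distance threshold.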
Image and Vision Computing | 2005
Hazim Kemal Ekenel; Bülent Sankur
In this paper, the contribution of multiresolution analysis to face recognition performance is examined. We refer to the paradigm that, in classification tasks, the use of multiple observations and their judicious fusion at the data, feature or decision level improves the correct-decision performance. In our proposed method, prior to a subspace projection operation such as principal or independent component analysis, we employ multiresolution analysis to decompose the image into its subbands. Our aim is to search for the subbands that are insensitive to variations in expression and in illumination. The classification performance is improved by fusing the information coming from the subbands that individually attain high correct recognition rates. The proposed algorithm is tested on face images that differ in expression or illumination separately, obtained from the CMU PIE, FERET and Yale databases. Significant performance gains are attained, especially against illumination perturbations.
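A hedged sketch of the front end, assuming PyWavelets and scikit-learn: decompose each face into wavelet subbands, keep one subband (the level-2 approximation here, purely as an example; the paper selects subbands empirically by their robustness), and project onto a PCA subspace.

```python
import numpy as np
import pywt                                   # PyWavelets
from sklearn.decomposition import PCA

def subband_pca_features(faces: np.ndarray, wavelet: str = "db4",
                         n_components: int = 20) -> np.ndarray:
    """Map a stack of equally sized face images (N, H, W) to PCA features
    computed on a single wavelet subband of each image."""
    bands = []
    for img in faces:
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=2)
        bands.append(coeffs[0].ravel())       # level-2 approximation subband
    return PCA(n_components=n_components).fit_transform(np.asarray(bands))
```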
EURASIP Journal on Advances in Signal Processing | 2005
Ismail Avcibas; Mehdi Kharrazi; Nasir D. Memon; Bülent Sankur
We present a novel technique for steganalysis of images that have been subjected to embedding by steganographic algorithms. The seventh and eighth bit planes in an image are used for the computation of several binary similarity measures. The basic idea is that the correlation between the bit planes, as well as the binary texture characteristics within the bit planes, will differ between a stego image and a cover image. These telltale marks are used to construct a classifier that can distinguish between stego and cover images. We also provide experimental results using some of the latest steganographic algorithms. The proposed scheme is found to have complementary performance vis-à-vis Farid's scheme, in that each outperforms the other on different embedding techniques.
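One illustrative member of such a family of measures is the fraction of pixels on which the two least significant bit planes agree; the paper's actual measure set is considerably broader.

```python
import numpy as np

def lsb_plane_agreement(img: np.ndarray) -> float:
    """Agreement between the 7th and 8th bit planes of an 8-bit image."""
    plane8 = img & 1               # least significant bit plane
    plane7 = (img >> 1) & 1        # next-to-least significant bit plane
    return float(np.mean(plane7 == plane8))
```

Embedding tends to perturb such statistics, so a vector of measures like this one feeds the stego/cover classifier.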
Image and Vision Computing | 2001
Fatih Kurugollu; Bülent Sankur; A. Emre Harmanci
We propose a novel method for multiband image segmentation. The method is based on segmentation of subsets of bands using multithresholding, followed by the fusion of the resulting segmentation "channels". For color images the band subsets are chosen as the RB, RG and BG pairs, whose two-dimensional histograms are processed via a peak-picking algorithm to effect multithresholding. The segmentation maps are first fused by running a label concordance algorithm and then smoothed by a spatial–chromatic majority filter. It is shown that for multiband images, multithresholding subsets of bands followed by a fusion stage results in improved performance and running time.
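The peak-picking step on a band pair's two-dimensional histogram might look like the sketch below; the bin count, window size and minimum peak height are assumptions of this illustration, not the paper's exact algorithm.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def band_pair_peaks(band_a: np.ndarray, band_b: np.ndarray,
                    bins: int = 64, min_count: int = 50) -> np.ndarray:
    """Local peaks of the 2-D histogram of a band pair (e.g. the RG pair);
    each peak seeds one class of the multithresholding stage."""
    hist, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(), bins=bins)
    is_peak = (hist == maximum_filter(hist, size=5)) & (hist >= min_count)
    return np.argwhere(is_peak)    # (row, col) bin indices of the peaks
```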
IEEE Transactions on Information Forensics and Security | 2008
Oya Celiktutan; Bülent Sankur; Ismail Avcibas
The various image-processing stages in a digital camera pipeline leave telltale footprints, which can be exploited as forensic signatures. These footprints include pixel defects, unevenness of the responses in the charge-coupled device sensor, and dark current noise, and may also originate from the proprietary interpolation algorithms of the color filter array. Various imaging device (camera, scanner, etc.) identification methods are based on the analysis of these artifacts. In this paper, we set out to explore three sets of forensic features, namely binary similarity measures, image quality measures, and higher-order wavelet statistics, in conjunction with SVM classifiers to identify the originating camera. We demonstrate that our camera model identification algorithm achieves more accurate identification, and that it can be made robust to a host of image manipulations. The algorithm has the potential to discriminate camera units within the same model.
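Assuming the per-image forensic features (binary similarity, image quality and wavelet-statistic measures) have already been extracted and concatenated into vectors, the classifier stage can be sketched with scikit-learn; the kernel and regularization choices here are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_camera_classifier(features: np.ndarray, camera_labels: np.ndarray):
    """Fit an SVM that maps per-image feature vectors to camera models."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(features, camera_labels)
    return clf                      # clf.predict(new_features) names the camera
```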
Electronic Imaging | 2003
Hamza Özer; Ismail Avcibas; Bülent Sankur; Nasir D. Memon
Classification of audio documents as bearing hidden information or not is a security issue addressed in the context of steganalysis. A cover audio object can be converted into a stego-audio object via steganographic methods. In this study we present a statistical method to detect the presence of hidden messages in audio signals. The basic idea is that the distributions of various statistical distance measures, calculated on cover audio signals and on stego-audio signals vis-à-vis their denoised versions, differ. The design of the audio steganalyzer relies on the choice of these audio quality measures and the construction of a two-class classifier. Experimental results show that the proposed technique can be used to detect the presence of hidden messages in digital audio data.
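One such distance feature, the signal-to-noise ratio of an audio signal against its denoised version, can be sketched as follows; the Wiener denoiser and its window size are assumptions of this illustration.

```python
import numpy as np
from scipy.signal import wiener

def snr_vs_denoised(audio: np.ndarray) -> float:
    """SNR (dB) between a signal's denoised version and the removed residue.
    Stego embedding tends to shift the distribution of such features."""
    x = audio.astype(float)
    denoised = wiener(x, mysize=29)        # simple denoiser (window size assumed)
    residue = x - denoised
    return float(10.0 * np.log10(np.sum(denoised ** 2) / np.sum(residue ** 2)))
```

A bank of such features, computed over many cover and stego examples, then trains the two-class classifier.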