Suleyman Serdar Kozat
Bilkent University
Publications
Featured research published by Suleyman Serdar Kozat.
International Conference on Image Processing | 2004
Suleyman Serdar Kozat; Ramarathnam Venkatesan; Mehmet Kivanc Mihcak
In this paper we suggest viewing images (as well as attacks on them) as a sequence of linear operators and propose novel hashing algorithms employing transforms that are based on matrix invariants. To derive this sequence, we simply cover a two-dimensional representation of an image by a sequence of (possibly overlapping) rectangles R_i whose sizes and locations are chosen randomly from a suitable distribution. The restriction of the image (representation) to each R_i gives rise to a matrix A_i. The fact that the A_i overlap and are random makes the sequence a redundant, non-standard representation of the image, but this is crucial for our purposes. Our algorithms first construct a secondary image, derived from the input image by pseudo-randomly extracting features that approximately capture semi-global geometric characteristics. From the secondary image (which does not perceptually resemble the input), we further extract the final features, which can be used as a hash value (and can be further suitably quantized). In this paper, we use spectral matrix invariants as embodied by the singular value decomposition. Surprisingly, formation of the secondary image turns out to be quite important, since it not only introduces further robustness (i.e., resistance against standard signal processing transformations) but also enhances the security properties (i.e., resistance against intentional attacks). Indeed, our experiments reveal that our hashing algorithms extract most of the geometric information from the images and hence are robust to severe perturbations (e.g., up to 50% cropping by area with 20-degree rotations) while avoiding misclassification. Our methods are general enough to yield a watermark embedding scheme, which will be studied in another paper.
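As a rough illustration of the rectangle-and-SVD idea (not the authors' full algorithm, which first forms a secondary image and then quantizes the extracted features), here is a minimal sketch; the function name, rectangle sizes, number of rectangles, and the use of the top singular values per block are illustrative assumptions.

```python
import numpy as np

def svd_hash_features(image, num_rects=32, rect_frac=0.25, top_k=2, seed=0):
    """Pseudo-randomly cover the image with rectangles and collect the
    leading singular values of each restriction as a feature vector."""
    rng = np.random.default_rng(seed)       # seed plays the role of the secret key
    h, w = image.shape
    rh, rw = int(h * rect_frac), int(w * rect_frac)
    feats = []
    for _ in range(num_rects):
        top = rng.integers(0, h - rh + 1)
        left = rng.integers(0, w - rw + 1)
        A = image[top:top + rh, left:left + rw]   # matrix A_i from rectangle R_i
        s = np.linalg.svd(A, compute_uv=False)    # spectral invariants of A_i
        feats.extend(s[:top_k])
    return np.asarray(feats)                # would be quantized to obtain the final hash

# toy usage: perceptually similar images should give nearby feature vectors
img = np.random.rand(128, 128)
print(svd_hash_features(img)[:4])
```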
IEEE Communications Magazine | 2009
Andrew C. Singer; Jill K. Nelson; Suleyman Serdar Kozat
The performance and complexity of signal processing systems for underwater acoustic communications have increased dramatically over the last two decades. From its origins in noncoherent modulation and detection at rates under 100 b/s, the field has progressed to phase-coherent digital communication systems employing multichannel adaptive equalization with explicit symbol-timing and phase tracking, which are being deployed in commercial and military systems and enable rates in excess of 10 kb/s. Research systems have been shown to further increase performance dramatically through the use of spatial multiplexing. Iterative equalization and decoding has also proven to be an enabling technology for dramatically enhancing the robustness of such systems. This article provides a brief overview of signal processing methods and advances in underwater acoustic communications, discussing both single-carrier and emerging multicarrier methods, along with iterative decoding and spatial multiplexing methods.
IEEE Transactions on Signal Processing | 2010
Suleyman Serdar Kozat; Alper T. Erdogan; Andrew C. Singer; Ali H. Sayed
In this paper, we consider mixture approaches that adaptively combine the outputs of several adaptive algorithms running in parallel. These parallel units can be considered as diversity branches that can be exploited to improve the overall performance. We study various mixture structures in which the final output is constructed as a weighted linear combination of the outputs of several constituent filters. Although the mixture structure is linear, the combination weights can be updated in a highly nonlinear manner to minimize the final estimation error, as in Singer and Feder 1999; Arenas-Garcia, Figueiras-Vidal, and Sayed 2006; Lopes, Satorius, and Sayed 2006; Bershad, Bermudez, and Tourneret 2008; and Silva and Nascimento 2008. We distinguish mixture approaches that are convex combinations (where the linear mixture weights are constrained to be nonnegative and to sum to one) [Singer and Feder 1999; Arenas-Garcia, Figueiras-Vidal, and Sayed 2006], affine combinations (where the linear mixture weights are constrained to sum to one) [Bershad, Bermudez, and Tourneret 2008], and, finally, unconstrained linear combinations of constituent filters [Kozat and Singer 2000]. We investigate these mixture structures with respect to their final mean-square error (MSE) and tracking performance in the steady state, for stationary and certain nonstationary data, respectively. We demonstrate that these mixture approaches can greatly improve upon the performance of the constituent filters. Our analysis is also generic in that it can be applied to inhomogeneous mixtures of constituent adaptive branches with possibly different structures, adaptation methods, or filter lengths.
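For a concrete picture of a convex mixture, the following is a minimal sketch of two LMS filters with different step sizes whose outputs are convexly combined via a sigmoid-parametrized weight adapted on the final error, in the spirit of the convex combinations cited above; the step sizes, filter order, and sigmoid parametrization are illustrative choices rather than the paper's exact setup.

```python
import numpy as np

def convex_mixture_lms(x, d, p=4, mu_fast=0.05, mu_slow=0.005, mu_a=1.0):
    """Two LMS filters run in parallel; their outputs are convexly combined
    with a weight lambda = sigmoid(a) adapted on the final estimation error."""
    w1 = np.zeros(p); w2 = np.zeros(p); a = 0.0
    out = np.zeros(len(d))
    for n in range(p, len(d)):
        u = x[n - p:n][::-1]                  # regressor vector
        y1, y2 = w1 @ u, w2 @ u               # constituent filter outputs
        lam = 1.0 / (1.0 + np.exp(-a))        # convex weight in (0, 1)
        y = lam * y1 + (1.0 - lam) * y2       # mixture output
        e = d[n] - y
        w1 += mu_fast * (d[n] - y1) * u       # each branch adapts on its own error
        w2 += mu_slow * (d[n] - y2) * u
        a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)   # mixture weight update
        out[n] = y
    return out
```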
IEEE Transactions on Information Theory | 2002
Andrew C. Singer; Suleyman Serdar Kozat; Meir Feder
We consider the problem of sequential linear prediction of real-valued sequences under the square-error loss function. For this problem, a prediction algorithm has been demonstrated whose accumulated squared prediction error, for every bounded sequence, is asymptotically as small as that of the best fixed linear predictor for that sequence, taken from the class of all linear predictors of a given order p. The redundancy, or excess prediction error above that of the best predictor for that sequence, is upper bounded by A^2 p ln(n)/n, where n is the data length and the sequence is assumed to be bounded by some A. We provide an alternative proof of this result by connecting it with universal probability assignment. We then show that this predictor is optimal in a min-max sense, by deriving a corresponding lower bound, such that no sequential predictor can ever do better than a redundancy of A^2 p ln(n)/n.
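For intuition, here is a minimal sketch of an order-p sequential least-squares predictor of the type that attains a per-sample redundancy of the logarithmic flavor discussed above; the regularization constant delta and the plug-in form are illustrative assumptions, not the paper's exact construction or proof device.

```python
import numpy as np

def sequential_linear_predictor(x, p=2, delta=1.0):
    """Order-p sequential least-squares predictor: at each step the coefficients
    minimize the regularized accumulated squared error over the past."""
    R = delta * np.eye(p)        # regularized autocorrelation estimate
    r = np.zeros(p)              # cross-correlation estimate
    preds = np.zeros(len(x))
    for n in range(p, len(x)):
        u = x[n - p:n][::-1]                 # last p samples as the regressor
        w = np.linalg.solve(R, r)            # current least-squares coefficients
        preds[n] = w @ u                     # predict before seeing x[n]
        R += np.outer(u, u)                  # then update with the new pair
        r += x[n] * u
    return preds

# toy usage: accumulated squared error on a noisy sinusoid
x = np.sin(0.1 * np.arange(500)) + 0.1 * np.random.randn(500)
print(np.mean((x[10:] - sequential_linear_predictor(x, p=4)[10:]) ** 2))
```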
EURASIP Journal on Advances in Signal Processing | 2008
Karen M. Guan; Suleyman Serdar Kozat; Andrew C. Singer
Level-crossing analog-to-digital converters (LC ADCs) have been considered in the literature and have been shown to efficiently sample certain classes of signals. One important aspect of their implementation is the placement of reference levels in the converter. The levels need to be appropriately located within the input dynamic range in order to obtain samples efficiently. In this paper, we study optimizing the performance of such an LC ADC by providing several sequential algorithms that adaptively update the ADC reference levels. The accompanying performance analysis and simulation results show that, as the signal length grows, the performance of the sequential algorithms asymptotically approaches that of the best scheme within a family of possible schemes, a choice that could only have been made in hindsight.
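To illustrate the setting (and not any of the paper's specific algorithms), below is a toy level-crossing sampler in which the reference levels are sequentially re-placed as the assumed dynamic range is updated from the observed data; the number of levels and the adaptation constant are arbitrary choices.

```python
import numpy as np

def lc_adc(signal, num_levels=8, adapt=0.05):
    """Level-crossing sampler with a simple sequential re-placement of the
    reference levels as the assumed dynamic range tracks observed extremes."""
    lo, hi = -1.0, 1.0                          # initial guess for the dynamic range
    levels = np.linspace(lo, hi, num_levels)
    samples = []                                # (index, level) pairs
    for n in range(1, len(signal)):
        a, b = signal[n - 1], signal[n]
        for L in levels:                        # record each level crossed
            if (a - L) * (b - L) < 0:
                samples.append((n, L))
        # expand the assumed range toward newly observed extremes, then re-place levels
        lo += adapt * (min(lo, b) - lo)
        hi += adapt * (max(hi, b) - hi)
        levels = np.linspace(lo, hi, num_levels)
    return samples

t = np.linspace(0, 1, 2000)
print(len(lc_adc(0.5 * np.sin(2 * np.pi * 5 * t))))   # number of level-crossing samples
```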
IEEE Transactions on Signal Processing | 2014
Muhammed O. Sayin; N. Denizcan Vanli; Suleyman Serdar Kozat
We introduce a novel family of adaptive filtering algorithms based on a relative logarithmic cost, inspired by the “competitive methods” of the online learning literature. The competitive or regret-based approaches stabilize or improve the convergence performance of adaptive algorithms through relative cost functions. The new family elegantly and gradually adjusts the conventional cost functions in its optimization based on the error amount. We introduce important members of this family, such as the least mean logarithmic square (LMLS) and least logarithmic absolute difference (LLAD) algorithms. However, our approach and analysis are generic and cover other well-known cost functions, as described in the paper. The LMLS algorithm achieves convergence performance comparable to the least mean fourth (LMF) algorithm while significantly enhancing stability. The LLAD and least mean square (LMS) algorithms demonstrate similar convergence performance in impulse-free noise environments, while the LLAD algorithm is robust against impulsive interference and outperforms the sign algorithm (SA). We analyze the transient, steady-state, and tracking performance of the introduced algorithms and demonstrate the agreement between the theoretical analyses and the simulation results. We show the enhanced stability of the LMLS algorithm and analyze the robustness of the LLAD algorithm against impulsive interference. Finally, we demonstrate the performance of our algorithms in different scenarios through numerical examples.
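As a rough illustration of how a relative logarithmic cost reshapes the stochastic-gradient update, here are minimal sketches of LMLS- and LLAD-style steps in which the usual LMS and sign-algorithm updates are scaled by a factor that saturates for large errors; the exact normalization and the roles of the step size mu and the design parameter alpha are assumptions based on the description above, not the paper's precise expressions.

```python
import numpy as np

def lmls_update(w, u, d, mu=0.01, alpha=1.0):
    """One LMLS-style step: an LMS-like update whose gain grows with the error
    for small errors and saturates for large ones."""
    e = d - w @ u
    return w + mu * e * u * (alpha * e**2) / (1.0 + alpha * e**2)

def llad_update(w, u, d, mu=0.01, alpha=1.0):
    """One LLAD-style step: a sign-algorithm-like update whose gain saturates,
    giving robustness against impulsive noise."""
    e = d - w @ u
    return w + mu * np.sign(e) * u * (alpha * np.abs(e)) / (1.0 + alpha * np.abs(e))
```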
IEEE Transactions on Signal Processing | 2008
Suleyman Serdar Kozat; Andrew C. Singer
In this paper, we consider sequential regression of individual sequences under the square-error loss. We focus on the class of switching linear predictors that can segment a given individual sequence into an arbitrary number of blocks, within each of which a fixed linear regressor is applied. Using a competitive algorithm framework, we construct sequential algorithms that are competitive with the best linear regression algorithms for any segmentation of the data, as well as the best partitioning of the data into any fixed number of segments, where both the segmentation of the data and the linear predictors within each segment can be tuned to the underlying individual sequence. The algorithms do not require knowledge of the data length or of the number of piecewise linear segments used by the members of the competing class, yet they can achieve the performance of the best member that can choose both the partitioning of the sequence and the best regressor within each segment. We use a transition diagram (F. M. J. Willems, 1996) to compete with an exponential number of algorithms in the class, using complexity that is linear in the data length. The regret with respect to the best member is O(ln(n)) per transition for not knowing the best transition times and O(ln(n)) for not knowing the best regressor within each segment, where n is the data length. We construct lower bounds on the performance of any sequential algorithm, demonstrating a form of min-max optimality under certain settings. We also consider the case where the members are restricted to choose the best algorithm in each segment from a finite collection of candidate algorithms. Performance on synthetic and real data is given, along with a Matlab implementation of the universal switching linear predictor.
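For intuition about competing with switching predictors, the following is a minimal sketch of a related but simpler technique: exponentially weighted mixing over a small set of candidate predictors with a "share" step, so the mixture can follow a different candidate in each segment. This is not the paper's transition-diagram algorithm; the learning rate eta and share parameter alpha are illustrative.

```python
import numpy as np

def switching_mixture(preds, d, eta=0.5, alpha=0.01):
    """Exponentially weighted mixture over candidate predictors with a small
    'share' step, so the mixture can track the best predictor per segment."""
    T, m = preds.shape               # preds[t, i]: prediction of candidate i at time t
    w = np.full(m, 1.0 / m)
    out = np.zeros(T)
    for t in range(T):
        out[t] = w @ preds[t]                       # mixture prediction
        losses = (preds[t] - d[t]) ** 2
        w *= np.exp(-eta * losses)                  # exponential weighting
        w /= w.sum()
        w = (1 - alpha) * w + alpha / m             # share mass: allows switching
    return out
```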
IEEE Transactions on Signal Processing | 2014
N. Denizcan Vanli; Suleyman Serdar Kozat
In this paper, we investigate adaptive nonlinear regression and introduce tree-based piecewise linear regression algorithms that are highly efficient and provide significantly improved performance with guaranteed upper bounds in an individual sequence manner. We use a tree structure to partition the space of regressors in a nested fashion. The introduced algorithms adapt not only their regression functions but also the complete tree structure, while achieving the performance of the “best” linear mixture of a doubly exponential number of partitions, with a computational complexity only polynomial in the number of nodes of the tree. In constructing these algorithms, we also avoid any artificial “weighting” of models (with highly data-dependent parameters) and, instead, directly minimize the final regression error, which is the ultimate performance goal. The introduced methods are generic in that they can readily incorporate different tree construction methods, such as random trees, and can use different regressor or partitioning functions, as demonstrated in the paper.
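As a toy illustration of piecewise linear modeling on a partition of the regressor space, here is a sketch with a single fixed split, one adaptive linear model per region, and an adaptive convex combination with a global linear model updated on the final error; the fixed partition and the scalar regressor are simplifying assumptions, since the paper adapts the complete tree structure.

```python
import numpy as np

def piecewise_linear_regressor(x, d, split=0.0, mu=0.05, mu_a=0.5):
    """Toy depth-1 tree: one linear model per region of the regressor space,
    adaptively combined with a single global linear model on the final error."""
    wg = np.zeros(2); wl = np.zeros(2); wr = np.zeros(2); a = 0.0
    out = np.zeros(len(d))
    for n in range(len(d)):
        u = np.array([x[n], 1.0])                  # affine regressor
        w_region = wl if x[n] < split else wr
        y_tree, y_glob = w_region @ u, wg @ u
        lam = 1.0 / (1.0 + np.exp(-a))
        y = lam * y_tree + (1.0 - lam) * y_glob    # combined output
        e = d[n] - y
        out[n] = y
        # each model adapts on its own error; the mixture adapts on the final error
        if x[n] < split:
            wl += mu * (d[n] - y_tree) * u
        else:
            wr += mu * (d[n] - y_tree) * u
        wg += mu * (d[n] - y_glob) * u
        a += mu_a * e * (y_tree - y_glob) * lam * (1.0 - lam)
    return out
```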
IEEE Transactions on Signal Processing | 2014
Muhammed O. Sayin; Suleyman Serdar Kozat
We study compressive diffusion strategies over distributed networks, which are based on diffusing compressed data and adaptively extracting the information carried by it. We demonstrate that one can achieve performance comparable to full information exchange configurations even if the diffused information is compressed into a scalar or a single bit, i.e., with a tremendous reduction in the communication load. To this end, we provide a complete performance analysis of the compressive diffusion strategies. We analyze the transient, steady-state, and tracking performance of the configurations in which the diffused data is compressed into a scalar or a single bit. We also propose a new adaptive combination method that further improves the convergence performance of the compressive diffusion strategies. In the new method, we introduce an additional degree of freedom in the combination matrix and adapt it using the conventional mixture approach, in order to enhance the convergence performance for any combination rule used in the full diffusion configuration. We demonstrate that our theoretical analysis closely follows the ensemble-averaged results in our simulations. We provide numerical examples showing the improved convergence performance of the new adaptive combination method while tremendously reducing the communication load.
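The sketch below is one plausible reading of the single-bit exchange (not the paper's exact construction): the sender transmits only the sign of a pseudo-random projection, and the receiver updates a running reconstruction along the shared random direction. Since that reconstruction depends only on the transmitted bits and the shared directions, the sender can maintain an identical copy of it; the step size eps and the Gaussian directions are illustrative assumptions.

```python
import numpy as np

def single_bit_diffuse(w_sender, w_recon, rng, eps=0.02):
    """One exchange: only the sign of a pseudo-random projection of the sender's
    estimate (relative to the receiver's current reconstruction) is transmitted;
    the receiver nudges its reconstruction along the shared direction."""
    c = rng.standard_normal(w_sender.shape)   # direction known to both sides (shared seed)
    bit = np.sign(c @ (w_sender - w_recon))   # the single transmitted bit
    return w_recon + eps * bit * c            # receiver-side recovery update

rng = np.random.default_rng(0)
w_sender = np.array([1.0, -0.5, 0.3])         # stands in for a node's local estimate
w_recon = np.zeros(3)
for _ in range(5000):
    w_recon = single_bit_diffuse(w_sender, w_recon, rng)
print(np.round(w_recon, 2))                   # approaches the sender's estimate
```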
IEEE Signal Processing Letters | 2013
Muhammed O. Sayin; Suleyman Serdar Kozat
We introduce novel diffusion-based adaptive estimation strategies for distributed networks that have significantly lower communication load and achieve performance comparable to full information exchange configurations. After a local estimate of the desired data is produced at each node, a single bit of information (or a reduced-dimensional data vector) is generated using certain random projections of the local estimate. This newly generated data is diffused and then used in neighboring nodes to recover the original full information. We provide the complete state-space description and the mean stability analysis of our algorithms.