Ertem Tuncel
University of California, Riverside
Publication
Featured research published by Ertem Tuncel.
IEEE Transactions on Circuits and Systems for Video Technology | 1998
A. Aydin Alatan; Levent Onural; Michael Wollborn; Roland Mech; Ertem Tuncel; Thomas Sikora
Flexibility and efficiency of coding, content extraction, and content-based search are key research topics in the field of interactive multimedia. Ongoing ISO MPEG-4 and MPEG-7 activities are targeting standardization to facilitate such services. European COST Telecommunications activities provide a framework for research collaboration. At present, a significant effort of the COST 211ter group activities is dedicated toward image and video sequence analysis and segmentation, an important technological aspect for the success of emerging object-based MPEG-4 and MPEG-7 multimedia applications. The current work of COST 211 is centered around the test model, called the analysis model (AM). The essential feature of the AM is its ability to fuse information from different sources to achieve high-quality object segmentation. The current information sources are the intermediate results from frame-based (still) color segmentation, motion vector based segmentation, and change-detection-based segmentation. Motion vectors, which form the basis for the motion vector based intermediate segmentation, are estimated from consecutive frames. A recursive shortest spanning tree (RSST) algorithm is used to obtain intermediate color and motion vector based segmentation results. A rule-based region processor fuses the intermediate results; a postprocessor further refines the final segmentation output. The results of the current AM are satisfactory.
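The RSST at the heart of the AM's intermediate segmentations is an iterative contraction of the cheapest edge in a region-adjacency graph. Below is a minimal sketch of that merging loop (plain Python/NumPy on an RGB image; the size-weighted color cost is one common choice, not necessarily the AM's exact criterion):

```python
import numpy as np

def rsst_segment(image, n_regions):
    """Toy RSST region merging on an RGB image of shape (h, w, 3).

    Each pixel starts as its own region; the cheapest edge between
    adjacent regions is repeatedly contracted until n_regions remain.
    A linear scan replaces the usual heap to keep the sketch short.
    """
    h, w, _ = image.shape
    flat = image.reshape(-1, 3).astype(float)
    labels = np.arange(h * w).reshape(h, w)
    means = {i: flat[i].copy() for i in range(h * w)}
    sizes = {i: 1 for i in range(h * w)}
    adj = {i: set() for i in range(h * w)}     # 4-connected grid
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                adj[i].add(i + 1); adj[i + 1].add(i)
            if y + 1 < h:
                adj[i].add(i + w); adj[i + w].add(i)

    def cost(a, b):
        # size-weighted color distance: small, similar regions merge first
        return (np.linalg.norm(means[a] - means[b])
                * sizes[a] * sizes[b] / (sizes[a] + sizes[b]))

    while len(means) > n_regions:
        a, b = min(((i, j) for i in adj for j in adj[i] if i < j),
                   key=lambda e: cost(*e))
        # merge region b into region a: update mean, size, adjacency
        means[a] = (sizes[a] * means[a] + sizes[b] * means[b]) / (sizes[a] + sizes[b])
        sizes[a] += sizes[b]
        for n in adj.pop(b):
            adj[n].discard(b)
            if n != a:
                adj[n].add(a)
                adj[a].add(n)
        del means[b], sizes[b]
        labels[labels == b] = a
    return labels
```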
international conference on data engineering | 2001
Hakan Ferhatosmanoglu; Ertem Tuncel; Divyakant Agrawal; A. El Abbadi
We develop a general framework for approximate nearest-neighbor queries. We categorize the current approaches for nearest-neighbor query processing based on either their ability to reduce the data set that needs to be examined, or their ability to reduce the representation size of each data object. We first propose modifications to well-known techniques to support the progressive processing of approximate nearest-neighbor queries: a user may stop the retrieval process once enough information has been returned. We then develop a new technique based on clustering that merges the benefits of the two general classes of approaches. Our cluster-based approach allows a user to progressively explore the approximate results with increasing accuracy. We propose a new metric for evaluating approximate nearest-neighbor searching techniques. Using both the proposed and the traditional metrics, we analyze and compare several techniques with a detailed performance evaluation. We demonstrate the feasibility and efficiency of approximate nearest-neighbor searching, and experiments on several real data sets establish the superiority of the proposed cluster-based technique over existing techniques.
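A minimal sketch of the cluster-based progressive idea (Python with scikit-learn k-means as the partitioner; names and parameters are illustrative, not the paper's exact construction):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_index(data, n_clusters=32, seed=0):
    """Partition the data set with k-means; each cluster is stored
    (conceptually, on disk) together with its centroid."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(data)
    clusters = [data[km.labels_ == c] for c in range(n_clusters)]
    return km.cluster_centers_, clusters

def progressive_knn(query, centers, clusters, k=10):
    """Yield improving k-NN answers cluster by cluster.

    Clusters are scanned in order of centroid distance to the query,
    so the user can stop early with an approximate answer.
    """
    order = np.argsort(np.linalg.norm(centers - query, axis=1))
    best = np.empty((0, query.shape[0]))
    for c in order:
        cand = np.vstack([best, clusters[c]])
        d = np.linalg.norm(cand - query, axis=1)
        best = cand[np.argsort(d)[:k]]
        yield best                      # current approximate k-NN set

# usage: stop after a few clusters for a fast approximate answer
data = np.random.rand(10000, 16)
centers, clusters = build_index(data)
for i, answer in enumerate(progressive_knn(np.random.rand(16), centers, clusters)):
    if i == 3:                          # examined 4 of 32 clusters
        break
```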
allerton conference on communication, control, and computing | 2008
Deniz Gunduz; Ertem Tuncel; Jayanth Nayak
A two-way relay channel in which two users communicate with each other over a relay terminal is considered. In particular, a "separated" two-way relay channel, in which the users do not receive each other's signals, is studied. Various achievable schemes are proposed and the corresponding achievable rate regions are characterized. Specifically, a combination of partial decode-and-forward and compress-and-forward schemes is proposed. In addition, compress-and-forward relaying with two-layered quantization, in which one of the users receives a better description of the relay's received signal, is studied. Extensions of these achievable schemes to the Gaussian separated two-way relay channel are presented. It is shown that the compress-and-forward scheme achieves rates within half a bit of the capacity region in the Gaussian setting. Numerical results are also presented to compare the proposed achievable schemes in the Gaussian case.
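The half-bit statement for the Gaussian case can be formalized as follows (a paraphrase of the abstract's claim, not the paper's theorem statement):

```latex
% For every rate pair in the capacity region C of the Gaussian
% separated two-way relay channel, compress-and-forward achieves the
% pair reduced by half a bit per channel use in each direction:
\forall\,(R_{12},R_{21})\in\mathcal{C}:\qquad
\bigl(\,[R_{12}-\tfrac{1}{2}]^{+},\;[R_{21}-\tfrac{1}{2}]^{+}\,\bigr)\in\mathcal{R}_{\mathrm{CF}}
```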
conference on information and knowledge management | 2000
Hakan Ferhatosmanoglu; Ertem Tuncel; Divyakant Agrawal; Amr El Abbadi
With the proliferation of multimedia data, there is an increasing need to support the indexing and searching of high-dimensional data. Recently, a vector-approximation-based technique called the VA-file has been proposed for indexing high-dimensional data. It has been shown that the VA-file is an effective technique compared to the current approaches based on space and data partitioning. The VA-file gives good performance especially when the data set is uniformly distributed. Real data sets, however, are not uniformly distributed, are often clustered, and the dimensions of their feature vectors are usually correlated. More careful analysis is needed to effectively index nonuniform or correlated high-dimensional data. We address these problems and propose the VA+-file, a new technique for indexing high-dimensional data sets based on vector approximations. We conclude with an evaluation of nearest-neighbor queries and show that the VA+-file yields significant improvements over the existing VA-file approach for several real data sets.
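The filter-and-refine mechanism behind the VA-file can be sketched compactly (illustrative Python/NumPy; uniform per-dimension grids as in the basic VA-file, with the VA+-file's data-adaptive refinement noted in a comment):

```python
import numpy as np

def build_va(data, bits=4):
    """Uniform per-dimension quantization grid, as in the basic VA-file.

    (The VA+-file instead allocates bits non-uniformly across dimensions
    and places cell boundaries according to the data distribution; that
    refinement is omitted in this sketch.)
    """
    lo, hi = data.min(0), data.max(0)
    cells = 2 ** bits
    approx = np.clip(((data - lo) / (hi - lo + 1e-12) * cells).astype(int),
                     0, cells - 1)
    return approx, lo, hi, cells

def nn_search(query, data, approx, lo, hi, cells):
    """Two-phase NN: filter with cell-based lower bounds, then refine."""
    d = len(lo)
    edges = lo + (hi - lo) * np.arange(cells + 1)[:, None] / cells
    qcell = np.clip(((query - lo) / (hi - lo + 1e-12) * cells).astype(int),
                    0, cells - 1)
    # per-dimension lower bound: distance from query to the nearest wall
    # of each vector's cell (zero if the cells coincide in that dimension)
    lower = np.where(approx == qcell, 0.0,
                     np.where(approx > qcell,
                              edges[approx, np.arange(d)] - query,
                              query - edges[approx + 1, np.arange(d)]))
    lb = np.linalg.norm(lower, axis=1)
    best_d, best_i = np.inf, -1
    for i in np.argsort(lb):
        if lb[i] >= best_d:
            break                       # remaining candidates cannot win
        dist = np.linalg.norm(data[i] - query)
        if dist < best_d:
            best_d, best_i = dist, i
    return best_i, best_d
```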
acm multimedia | 2002
Ertem Tuncel; Hakan Ferhatosmanoglu; Kenneth Rose
In this paper, we introduce a novel indexing technique based on efficient compression of the feature space for approximate similarity searching in large multimedia databases. Its main novelty is that state-of-the-art tools from the discipline of data compression are adopted to optimize the complexity-performance tradeoff in large data sets. The design procedure optimizes the query access time by jointly accounting for both database distribution and query statistics. We achieve efficient compression by using appropriate vector quantization (VQ) techniques, namely, multi-stage VQ and split-VQ, which are especially suited for limited memory applications. We partition the data set using the accumulated query history, and each partition of data points is separately compressed using a vector quantizer tailored to its distribution. The employed VQ techniques inherently provide a spectrum of points to choose from on the time/accuracy plane. This property is especially crucial for large multimedia databases where I/O time is a bottleneck, because it offers the flexibility to trade time for better accuracy. Our experiments demonstrate speedups of 20 to 35 over a VA-file technique that has been adapted for approximate nearest neighbor searching.
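A minimal two-stage (residual) VQ sketch conveys the memory economy and the time/accuracy knob described above (scikit-learn k-means as the codebook trainer; all parameters illustrative, not the paper's design procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

def train_two_stage_vq(vectors, k1=64, k2=64, seed=0):
    """Two-stage VQ: a coarse codebook, then a codebook on residuals.

    Storing two small indices per vector approximates it with
    effectively k1*k2 reproduction points at only k1+k2 codebook cost,
    which is the memory economy that suits large databases.
    """
    c1 = KMeans(k1, n_init=4, random_state=seed).fit(vectors)
    residuals = vectors - c1.cluster_centers_[c1.labels_]
    c2 = KMeans(k2, n_init=4, random_state=seed).fit(residuals)
    return c1.cluster_centers_, c2.cluster_centers_

def encode(v, cb1, cb2):
    i = np.argmin(np.linalg.norm(cb1 - v, axis=1))
    j = np.argmin(np.linalg.norm(cb2 - (v - cb1[i]), axis=1))
    return i, j                         # two compact indices per vector

def decode(i, j, cb1, cb2, stage=2):
    # stage=1 gives a coarser, faster reconstruction: the time/accuracy knob
    return cb1[i] if stage == 1 else cb1[i] + cb2[j]
```

Decoding only the first-stage index yields a coarser answer faster, one point on the time/accuracy spectrum that such multi-stage quantizers inherently provide.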
IEEE Transactions on Information Theory | 2003
Prashant Koulgi; Ertem Tuncel; Shankar L. Regunathan; Kenneth Rose
Let (X,Y) be a pair of random variables distributed over a finite product set V × W according to a probability distribution P(x,y). The following source coding problem is considered: the encoder knows X, while the decoder knows Y and wants to learn X without error. The minimum zero-error asymptotic rate of transmission is shown to be the complementary graph entropy of an associated graph. Thus, previous results in the literature provide upper and lower bounds for this minimum rate (further, these bounds are tight for the important class of perfect graphs). The algorithmic aspects of instantaneous code design are considered next. It is shown that optimal code design is NP-hard. An optimal code design algorithm is derived. Polynomial-time suboptimal algorithms are also presented, and their average and worst case performance guarantees are established.
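In the single-shot, fixed-length simplification of this problem, a zero-error code is exactly a proper coloring of the confusability graph: x and x' must receive distinct codewords whenever some y makes both possible. The sketch below (plain Python; greedy coloring, hence suboptimal, since the optimum needs the chromatic number, consistent with the NP-hardness result above) illustrates the connection:

```python
import itertools

def confusability_graph(P):
    """x, x' are confusable when some y has P[x][y] > 0 and P[x'][y] > 0:
    the decoder, knowing only y, could not tell them apart."""
    X = list(P)
    edges = set()
    for x1, x2 in itertools.combinations(X, 2):
        if any(P[x1].get(y, 0) > 0 and P[x2].get(y, 0) > 0 for y in P[x2]):
            edges.add((x1, x2))
    return X, edges

def greedy_zero_error_code(P):
    """Greedy coloring of the confusability graph.

    Any proper coloring is a valid zero-error code for one source
    symbol (confusable symbols get distinct codewords)."""
    X, edges = confusability_graph(P)
    nbr = {x: set() for x in X}
    for a, b in edges:
        nbr[a].add(b); nbr[b].add(a)
    color = {}
    for x in sorted(X, key=lambda v: -len(nbr[v])):   # high degree first
        used = {color[n] for n in nbr[x] if n in color}
        color[x] = next(c for c in itertools.count() if c not in used)
    return color                        # symbol -> codeword index

# toy example: only neighboring x's share a possible y
P = {0: {0: .5, 1: .5}, 1: {1: .5, 2: .5}, 2: {2: .5, 3: .5}, 3: {3: 1.0}}
print(greedy_zero_error_code(P))        # a path graph: 2 codewords suffice
```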
IEEE Transactions on Information Theory | 2010
Jayanth Nayak; Ertem Tuncel; Deniz Gunduz
This paper addresses lossy transmission of a common source over a broadcast channel when there is correlated side information at the receivers, with emphasis on the quadratic Gaussian and binary Hamming cases. A digital scheme that combines ideas from the lossless version of the problem, i.e., Slepian-Wolf coding over broadcast channels, and dirty paper coding, is presented and analyzed. This scheme uses layered coding where the common layer information is intended for both receivers and the refinement information is destined only for one receiver. For the quadratic Gaussian case, a quantity characterizing the combined quality of each receiver is identified in terms of channel and side information parameters. It is shown that it is more advantageous to send the refinement information to the receiver with "better" combined quality. In the case where all receivers have the same overall quality, the presented scheme becomes optimal. Unlike its lossless counterpart, however, the problem eludes a complete characterization.
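For orientation, the standard quadratic Gaussian Wyner-Ziv rate-distortion function (classical background, not the paper's "combined quality" expression) shows how side information quality enters such problems:

```latex
R_{\mathrm{WZ}}(D) \;=\; \tfrac{1}{2}\log^{+}\!\frac{\sigma^2_{X|Y}}{D},
\qquad
\sigma^2_{X|Y} \;=\; \sigma_X^2\,\bigl(1-\rho^2\bigr)
```

Here ρ is the correlation between the source and a receiver's side information; a receiver's effective quality therefore mixes its channel parameters with σ²_{X|Y}, which is the flavor of the combined-quality quantity identified in the paper.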
conference on decision and control | 2009
Yiqian Li; Ertem Tuncel; Jie Chen; Weizhou Su
This paper studies the optimal tracking performance of multiple-input multiple-output (MIMO), finite-dimensional, linear time-invariant discrete-time systems with a power-constrained additive white noise (AWN) channel in the feedback path. We adopt the tracking error power as a measure of performance and examine the best achievable performance over all two-parameter stabilizing controllers. In the process, a scaling scheme is introduced as a means of integrating controller and channel design, and is optimized to improve the tracking performance. In contrast to the standard setting where a step reference signal is tracked with no communication constraint, in which the tracking error can be driven to zero for minimum-phase plants, it is shown explicitly that the tracking performance is additionally constrained by the plant's unstable poles, as a consequence of the noisy, power-constrained channel in the feedback loop.
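A toy illustration of the phenomenon (not the paper's two-parameter controller synthesis; a scalar unstable plant with proportional feedback over an AWGN measurement link, all values illustrative): raising the loop gain shrinks the steady-state offset but amplifies injected channel noise, so the achievable error power stays bounded away from zero.

```python
import numpy as np

def tracking_error(a=1.5, K_vals=np.linspace(0.6, 2.4, 10),
                   r=1.0, noise_var=0.01, T=20000, seed=0):
    """Steady-state tracking error power for x+ = a*x + u,
    u = K*(r - x - n), with AWGN n on the feedback measurement.

    Closed loop: x+ = (a - K)*x + K*r - K*n, stable iff |a - K| < 1.
    """
    rng = np.random.default_rng(seed)
    out = {}
    for K in K_vals:
        if abs(a - K) >= 1:
            continue                    # unstable closed loop, skip
        x, errs = 0.0, []
        for t in range(T):
            n = rng.normal(0, noise_var ** 0.5)
            x = (a - K) * x + K * r - K * n
            if t > T // 10:             # discard the transient
                errs.append((x - r) ** 2)
        out[round(K, 2)] = np.mean(errs)
    return out

# bias/variance tradeoff in the error power: no gain drives it to zero
print(tracking_error())
```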
international symposium on information theory | 2008
Deniz Gunduz; Jayanth Nayak; Ertem Tuncel
This paper deals with the design of coding schemes for transmitting a source over a broadcast channel when there is source side information at the receivers. Based on Slepian-Wolf coding over broadcast channels, three hybrid digital/analog schemes are proposed and their power-distortion tradeoff is investigated for Gaussian sources and Gaussian broadcast channels. All three transmit the same digital and analog information but with varying coding order. Although they are not provably optimal in general, they can significantly outperform uncoded transmission and separate source and channel coding.
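As one concrete baseline (a standard joint-Gaussian MMSE computation under illustrative modeling assumptions, not taken from the paper): if receiver i observes the power-scaled source through an AWGN channel with noise N_i and also holds side information Y_i = X + V_i, uncoded transmission attains

```latex
% precisions of the prior, the channel observation, and the side
% information simply add for jointly Gaussian variables:
D_i^{\mathrm{uncoded}}
  \;=\; \Bigl(\frac{1}{\sigma_X^2}
      \;+\; \frac{P}{\sigma_X^2 N_i}
      \;+\; \frac{1}{\sigma_{V_i}^2}\Bigr)^{-1}
```

The hybrid digital/analog schemes improve on this baseline by spending part of the power budget on coded refinement.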
IEEE Transactions on Information Theory | 2009
Ertem Tuncel
The asymptotic tradeoff between the number of distinguishable objects and the necessary storage space (or equivalently, the search complexity) in an identification system is investigated. In the discussed scenario, high-dimensional (and noisy) feature vectors extracted from objects are first compressed and then enrolled in the database. When the user submits a random query object, the extracted noisy feature vector is compared against the compressed entries, one of which is output as the identified object. The first result this paper presents is a complete single-letter characterization of achievable storage and identification rates (measured in bits per feature dimension) subject to vanishing probability of identification error as the dimensionality of feature vectors becomes very large. This single-letter characterization is then extended for a multistage system whereby depending on the number of entries, the identification is performed by utilizing part or all of the recorded bits in the database. Finally, it is shown that a necessary and sufficient condition for a two-stage system to achieve single-stage capacities at each stage is Markovity of the optimal test channels.
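For reference, the single-stage characterization takes the familiar covering/packing form below (reproduced from memory of the published result and hedged accordingly; the paper is authoritative): a storage/identification rate pair is achievable iff some auxiliary variable U with U - X - Y Markov satisfies

```latex
% X: enrolled feature vector, Y: noisy query feature observed through
% the channel P(y|x), U: auxiliary (compressed) description of X
R_{\mathrm{storage}} \;\ge\; I(X;U),
\qquad
R_{\mathrm{id}} \;\le\; I(Y;U)
```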