Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Constantine Caramanis is active.

Publication


Featured research published by Constantine Caramanis.


IEEE Transactions on Wireless Communications | 2013

User Association for Load Balancing in Heterogeneous Cellular Networks

Qiaoyang Ye; Beiyu Rong; Yudong Chen; Mazin Al-Shalash; Constantine Caramanis; Jeffrey G. Andrews

For small cell technology to significantly increase the capacity of tower-based cellular networks, mobile users will need to be actively pushed onto the more lightly loaded tiers (corresponding to, e.g., pico and femtocells), even if they offer a lower instantaneous SINR than the macrocell base station (BS). Optimizing a function of the long-term rate for each user requires (in general) a massive utility maximization problem over all the SINRs and BS loads. On the other hand, an actual implementation will likely resort to a simple biasing approach where a BS in tier j is treated as having its SINR multiplied by a factor Aj ≥ 1, which makes it appear more attractive than the heavily-loaded macrocell. This paper bridges the gap between these approaches through several physical relaxations of the network-wide association problem, which is NP-hard. We provide a low-complexity distributed algorithm that converges to a near-optimal solution with a theoretical performance guarantee, and we observe that simple per-tier biasing loses surprisingly little if the bias values Aj are chosen carefully. Numerical results show a large (3.5x) throughput gain for cell-edge users and a 2x rate gain for median users relative to a max-received-power association.
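As a rough illustration of the per-tier biasing rule described above, the sketch below (NumPy; the function and variable names are illustrative, not from the paper) assigns each user to the base station with the largest biased SINR.

```python
import numpy as np

def biased_association(sinr, tier, bias):
    """Assign each user to the BS maximizing its biased SINR.

    sinr : (n_users, n_bs) array of per-BS SINR values for each user
    tier : length-n_bs list of tier indices (e.g., 0 = macro, 1 = pico)
    bias : dict mapping tier index -> bias factor A_j >= 1
    """
    A = np.array([bias[t] for t in tier])   # per-BS bias factor
    return np.argmax(sinr * A, axis=1)      # chosen BS index for each user

# Example: two macro BSs (tier 0) and two picos (tier 1) biased by A_1 = 4.
# Both users are offloaded to a pico despite its lower raw SINR.
sinr = np.array([[10.0, 2.0, 3.0, 1.0],
                 [ 6.0, 1.0, 2.5, 0.5]])
print(biased_association(sinr, tier=[0, 0, 1, 1], bias={0: 1.0, 1: 4.0}))
```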


IEEE Transactions on Information Theory | 2012

Robust PCA via Outlier Pursuit

Huan Xu; Constantine Caramanis; Sujay Sanghavi

Singular-value decomposition (SVD) [and principal component analysis (PCA)] is one of the most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented sensitivity to outliers. Recent work has considered the setting where each point has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA, such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield entire points that are completely corrupted. We present an efficient convex optimization-based algorithm that we call outlier pursuit, which under some mild assumptions on the uncorrupted points (satisfied, e.g., by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation is of paramount interest in bioinformatics, financial applications, and beyond. Our techniques involve matrix decomposition using nuclear norm minimization; however, our results, setup, and approach necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the uncorrupted matrix, rather than the exact matrix itself. In any problem where one seeks to recover a structure rather than the exact initial matrices, techniques developed thus far relying on certificates of optimality will fail. We present an important extension of these methods, which allows the treatment of such problems.
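A minimal sketch of the flavor of outlier pursuit, assuming an alternating proximal heuristic rather than the paper's exact algorithm: singular value thresholding handles the nuclear-norm term and column-wise shrinkage handles the column-sparse outlier term. Function names and parameter values are illustrative.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def column_shrink(X, tau):
    # Column-wise shrinkage: proximal operator of tau * (sum of column l2 norms),
    # which zeroes out columns that are not strong enough to be outliers.
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    return X * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

def outlier_pursuit(M, lam=0.5, tau=1.0, n_iter=200):
    # Heuristic alternating scheme for  min ||L||_* + lam * ||C||_{1,2}
    # with L + C ≈ M; L captures the low-rank part, C the corrupted columns.
    L = np.zeros_like(M)
    C = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - C, tau)
        C = column_shrink(M - L, tau * lam)
    return L, C
```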


IEEE Transactions on Information Theory | 2013

Low-Rank Matrix Recovery From Errors and Erasures

Yudong Chen; Ali Jalali; Sujay Sanghavi; Constantine Caramanis

This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both 1) erasures: most entries are not observed, and 2) errors: values at a constant fraction of (unknown) locations are arbitrarily corrupted. We provide a new unified performance guarantee on when minimizing the nuclear norm plus the ℓ1 norm succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and erasure patterns. By specializing this single result in different ways, we recover (up to poly-log factors) as corollaries all the existing results in exact matrix completion, and exact sparse and low-rank matrix decomposition. Our unified result also provides the first guarantees for 1) recovery when we observe a vanishing fraction of entries of a corrupted matrix, and 2) deterministic matrix completion.
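The sketch below, again an illustrative heuristic rather than the estimator analyzed in the paper, alternates singular value thresholding (for the nuclear norm) with entrywise soft thresholding (for the ℓ1 norm) on the observed entries only.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding (proximal operator of the nuclear norm).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Entrywise soft thresholding (proximal operator of the l1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def recover(M, mask, lam=0.1, tau=1.0, n_iter=300):
    # M    : observed (float) matrix; values where mask is False are ignored
    # mask : boolean array, True where an entry was observed
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # On unobserved entries, keep the current low-rank estimate.
        L = svt(np.where(mask, M - S, L), tau)
        # Sparse errors can only live on observed entries.
        S = np.where(mask, soft(M - L, tau * lam), 0.0)
    return L, S
```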


IEEE Transactions on Image Processing | 2008

Design of Linear Equalizers Optimized for the Structural Similarity Index

Sumohana S. Channappayya; Alan C. Bovik; Constantine Caramanis; Robert W. Heath

We propose an algorithm for designing linear equalizers that maximize the structural similarity (SSIM) index between the reference and restored signals. The SSIM index has enjoyed considerable application in the evaluation of image processing algorithms, yet algorithms have not previously been designed to explicitly optimize for this measure. The design of such an algorithm is nontrivial due to the nonconvex nature of the distortion measure. In this paper, we reformulate the nonconvex problem as a quasi-convex optimization problem, which admits a tractable solution. We compute the optimal solution in near closed form, with the complexity of the resulting algorithm comparable to that of the linear minimum mean squared error (MMSE) solution, independent of the number of filter taps. To demonstrate the usefulness of the proposed algorithm, we apply it to restore images that have been blurred and corrupted with additive white Gaussian noise. As a special case, we consider blur-free image denoising. In each case, its performance is compared to a locally adaptive linear MSE-optimal filter. We show that the images denoised and restored using the SSIM-optimal filter have a higher SSIM index and superior perceptual quality than those restored using the MSE-optimal adaptive linear filter. Through these results, we demonstrate that a) designing image processing algorithms, and in particular denoising and restoration-type algorithms, to optimize perceptual distortion measures can yield significant gains over existing (in particular, linear MMSE-based) algorithms, and b) these gains may be obtained without a significant increase in the computational complexity of the algorithm.
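For reference, a minimal global-statistics form of the SSIM index that the equalizer design targets (a sketch assuming signals normalized to [0, 1] and the common constants C1 = 0.01^2, C2 = 0.03^2; in practice the index is computed over local windows and averaged):

```python
import numpy as np

def ssim_index(x, y, C1=0.01 ** 2, C2=0.03 ** 2):
    # Global-statistics SSIM between a reference x and a restored signal y.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()          # cross covariance
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```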


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem

Huan Xu; Constantine Caramanis; Shie Mannor

We consider two desired properties of learning algorithms: sparsity and algorithmic stability. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: A sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that ℓ1-regularized regression (Lasso) cannot be stable, while ℓ2-regularized regression is known to have strong stability properties and is therefore not sparse.
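A small numerical illustration of this trade-off, using a plain ISTA solver for the Lasso (the data and all names here are made up for the example): with two highly correlated features, the sparse solution must pick one of them, and which one it picks can flip when a single training sample is removed.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=2000):
    # Iterative soft-thresholding for  min 0.5*||Xw - y||^2 + lam*||w||_1.
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant
    for _ in range(n_iter):
        w -= step * (X.T @ (X @ w - y))
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

rng = np.random.default_rng(0)
n = 30
z = rng.standard_normal(n)
# Two nearly identical columns: a textbook unstable design for the Lasso.
X = np.column_stack([z + 0.01 * rng.standard_normal(n),
                     z + 0.01 * rng.standard_normal(n)])
y = z + 0.1 * rng.standard_normal(n)

w_full = lasso_ista(X, y, lam=1.0)
w_loo = lasso_ista(X[1:], y[1:], lam=1.0)       # drop one sample
print("support (all samples):   ", np.flatnonzero(np.abs(w_full) > 1e-6))
print("support (one sample out):", np.flatnonzero(np.abs(w_loo) > 1e-6))
```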


IEEE Transactions on Automatic Control | 2010

Finite Adaptability in Multistage Linear Optimization

Dimitris Bertsimas; Constantine Caramanis

In multistage problems, decisions are implemented sequentially, and thus may depend on past realizations of the uncertainty. Examples of such problems abound in applications of stochastic control and operations research; yet, where robust optimization has made great progress in providing a tractable formulation for a broad class of single-stage optimization problems with uncertainty, multistage problems present significant tractability challenges. In this paper we consider an adaptability model designed with discrete second-stage variables in mind. We propose a hierarchy of increasing adaptability that bridges the gap between the static robust formulation and the fully adaptable formulation. We study the geometry, complexity, formulations, algorithms, examples, and computational results for finite adaptability. In contrast to the model of affine adaptability proposed in prior work, our framework can accommodate discrete variables. In terms of performance for continuous linear optimization, the two frameworks are complementary, in the sense that we provide examples where each framework yields stronger solutions than the other. We prove a positive tractability result in the regime where we expect finite adaptability to perform well, and illustrate this claim with an application to Air Traffic Control.


IEEE Journal on Selected Areas in Communications | 2012

A Cross-Layer Design for Perceptual Optimization of H.264/SVC with Unequal Error Protection

Amin Abdel Khalek; Constantine Caramanis; Robert W. Heath

Delivering high perceptual quality video over wireless channels is challenging due to the changing channel quality and the variation in importance from one source packet to the next for the end-user's perceptual experience. Leveraging perceptual metrics in concert with link adaptation to maximize perceptual quality and satisfy real-time delay constraints is largely unexplored. We introduce an APP/MAC/PHY cross-layer architecture that enables optimizing perceptual quality for delay-constrained scalable video transmission. We propose an online QoS-to-QoE mapping technique to quantify the loss visibility of packets from each video layer using the ACK history and perceptual metrics. At the PHY layer, we develop a link adaptation technique that uses the QoS-to-QoE mapping to provide perceptually-optimized unequal error protection per layer according to packet loss visibility. At the APP layer, the source rate is adapted by selecting the set of temporal and quality layers to be transmitted based on the channel statistics, source rates, and playback buffer state. The proposed cross-layer optimization framework allows the channel to adapt at a faster time scale than the video codec. Furthermore, it provides a tradeoff between playback buffer occupancy and perceptual quality. We show that the proposed architecture prevents playback buffer starvation, provides immunity against short-term channel fluctuations, regulates the buffer size, and achieves a 30% increase in video capacity versus throughput-optimal link adaptation.


IEEE Transactions on Information Theory | 2012

On Wireless Scheduling With Partial Channel-State Information

Aditya Gopalan; Constantine Caramanis; Sanjay Shakkottai

A time-slotted queueing system for a wireless downlink with multiple flows and a single server is considered, with exogenous arrivals and time-varying channels. It is assumed that only one user can be serviced in a single time slot. Unlike much recent work on this problem, attention is drawn to the case where the server can obtain only partial information about the instantaneous state of the channel. In each time slot, the server is allowed to specify a single subset of flows from a collection of observable subsets, observe the current service rates for that subset, and subsequently pick a user to serve. The stability region for such a system is provided. An online scheduling algorithm is presented that uses information about marginal distributions to pick the subset and the Max-Weight rule to pick a flow within the subset, and which is provably throughput-optimal. In the case where the observable subsets are all disjoint, or where the subsets and channel statistics are symmetric, it is shown that a simple scheduling algorithm, Max-Sum-Queue, that essentially picks the subset with the largest sum of squared queue lengths, followed by picking a user using Max-Weight within the subset, is throughput-optimal.
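A compact sketch of one scheduling slot under the Max-Sum-Queue rule described above (names are illustrative, and channel observation is abstracted into a rates array that the scheduler only reads inside the chosen subset):

```python
import numpy as np

def max_sum_queue(queues, subsets, rates):
    """One scheduling slot under partial channel-state information.

    queues  : length-N array of queue backlogs
    subsets : list of observable subsets (each a list of flow indices)
    rates   : length-N array of current service rates; the scheduler may
              only look at the entries belonging to the chosen subset
    """
    # 1) Pick the observable subset with the largest sum of squared queues.
    chosen = max(subsets, key=lambda s: sum(queues[i] ** 2 for i in s))
    # 2) Observe rates within that subset and serve by the Max-Weight rule.
    flow = max(chosen, key=lambda i: queues[i] * rates[i])
    return chosen, flow

queues = np.array([5.0, 1.0, 3.0, 4.0])
rates = np.array([2.0, 1.0, 3.0, 1.0])
print(max_sum_queue(queues, subsets=[[0, 1], [2, 3]], rates=rates))
```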


IEEE Global Communications Conference | 2013

On/off macrocells and load balancing in heterogeneous cellular networks

Qiaoyang Ye; Mazin Al-Shalash; Constantine Caramanis; Jeffrey G. Andrews

The rate distribution in heterogeneous networks (HetNets) greatly benefits from load balancing, by which mobile users are pushed onto lightly-loaded small cells despite the resulting loss in SINR. This offloading can be made more aggressive and robust if the macrocells leave a fraction of time/frequency resources blank, which reduces the interference to the offloaded users. In this paper, we investigate the joint optimization of this technique, referred to in 3GPP as enhanced intercell interference coordination (eICIC) via almost blank subframes (ABSs), with offloading. Although the joint cell association and blank resource (BR) problem is nominally combinatorial, allowing users to associate with multiple base stations (BSs) makes the problem convex and upper bounds the performance of any binary association. We show both theoretically and through simulation that the optimal solution of the relaxed problem still results in an association that is mostly binary. The optimal association differs significantly depending on whether the macrocell is on or off; in particular, the offloading can be much more aggressive when resources are left blank by macro BSs. Further, we observe that jointly optimizing the offloading with BR is important. The rate gain for cell-edge users (the worst 3-10%) is very large, on the order of 5-10x, versus a naive association strategy without macrocell blanking.


IEEE Transactions on Information Theory | 2013

Outlier-Robust PCA: The High-Dimensional Case

Huan Xu; Constantine Caramanis; Shie Mannor

Principal component analysis plays a central role in statistics, engineering, and science. Because of the prevalence of corrupted data in real-world applications, much research has focused on developing robust algorithms. Perhaps surprisingly, these algorithms are unequipped, indeed unable, to deal with outliers in the high-dimensional setting where the number of observations is of the same magnitude as the number of variables of each observation, and the dataset contains some (arbitrarily) corrupted observations. We propose a high-dimensional robust principal component analysis algorithm that is efficient, robust to contaminated points, and easily kernelizable. In particular, our algorithm achieves maximal robustness: it has a breakdown point of 50% (the best possible), while all existing algorithms have a breakdown point of zero. Moreover, our algorithm recovers the optimal solution exactly in the case where the number of corrupted points grows sublinearly in the dimension.

Collaboration


Dive into Constantine Caramanis's collaborations.

Top Co-Authors

Shie Mannor (Technion – Israel Institute of Technology)
Robert W. Heath (University of Texas at Austin)
Huan Xu (National University of Singapore)
Sanjay Shakkottai (University of Texas at Austin)
Sujay Sanghavi (University of Texas at Austin)
Yudong Chen (University of California)
Xinyang Yi (University of Texas at Austin)
Jeffrey G. Andrews (University of Texas at Austin)
Alan C. Bovik (University of Texas at Austin)