Xinyang Yi
University of Texas at Austin
Publications
Featured research published by Xinyang Yi.
IEEE Transactions on Information Theory | 2018
Yudong Chen; Xinyang Yi; Constantine Caramanis
We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, as well as a nonconvex formulation that works under more general settings and remains tractable. Upper bounds are provided on the recovery errors for both the arbitrary-noise and stochastic-noise models. We also give matching minimax lower bounds (up to log factors), showing that our algorithm is information-theoretically optimal in a precise sense. Our results represent the first tractable algorithm guaranteeing successful recovery with tight bounds on recovery errors and sample complexity. Moreover, we pinpoint the statistical cost of mixtures: our minimax-optimal results indicate that the mixture poses a fundamentally more difficult problem in the low-SNR regime, where the learning rate changes.
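To make the setting concrete: each sample follows y_i = ⟨x_i, β_{z_i}⟩ + noise with a hidden component label z_i ∈ {0, 1}. The sketch below is a minimal alternating-minimization baseline for this model, not the paper's convex or nonconvex formulation; the function name and synthetic-data setup are illustrative assumptions.

```python
import numpy as np

def alt_min_mixed_regression(X, y, n_iters=50, seed=0):
    """Illustrative alternating minimization for two-component mixed
    linear regression: y_i = <x_i, beta_{z_i}> + noise, z_i in {0, 1}.
    A textbook baseline, not the algorithm analyzed in the paper."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    b0 = rng.standard_normal(d)  # random initialization of both regressors
    b1 = rng.standard_normal(d)
    for _ in range(n_iters):
        # Assignment step: attach each sample to the better-fitting line.
        r0 = (y - X @ b0) ** 2
        r1 = (y - X @ b1) ** 2
        mask = r0 <= r1
        # Refit step: ordinary least squares within each component.
        if mask.any():
            b0, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        if (~mask).any():
            b1, *_ = np.linalg.lstsq(X[~mask], y[~mask], rcond=None)
    return b0, b1

# Synthetic check: two planted regressors, equal mixing weights.
rng = np.random.default_rng(1)
n, d = 2000, 10
X = rng.standard_normal((n, d))
beta = rng.standard_normal((2, d))
z = rng.integers(0, 2, size=n)
y = np.einsum("ij,ij->i", X, beta[z]) + 0.01 * rng.standard_normal(n)
b0, b1 = alt_min_mixed_regression(X, y)
```

With good initialization this baseline recovers both regressors in the high-SNR regime; the paper's contribution is precisely to characterize when recovery is possible at all and at what statistical cost.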
design automation conference | 2015
Ye Wang; Meng Li; Xinyang Yi; Zhao Song; Michael Orshansky; Constantine Caramanis
Model order reduction that exploits the spectral properties of the admittance matrix, known as the graph Laplacian, to control the approximation accuracy is a promising new class of approaches to power grid analysis. In this paper we introduce a method that allows a dramatic increase in the resulting graph sparsity and can handle large dense input graphs. The method is based on the observation that information about the realistic ranges of port currents can be used to significantly improve the resulting graph sparsity. In practice, port currents cannot vary unboundedly, and estimates of peak currents are often available early in the design cycle; however, existing methods, including the sampling-based spectral sparsification approach [11], cannot utilize this information. We propose a novel framework of graph Sparsification by L1 regularization on Laplacians (SparseLL) that exploits the available range information to achieve a higher degree of sparsity and better approximation quality. By formulating power grid reduction as a sparsity-inducing optimization problem, we leverage recent progress in stochastic approximation and develop a stochastic gradient descent algorithm as an efficient solution. Using established benchmarks, we demonstrate that SparseLL can achieve up to a 10X improvement in edge sparsity over the spectral sparsification approach assuming the full range of currents, together with up to a 10X improvement in accuracy. The running time of our algorithm also scales favorably due to its low complexity and fast convergence, which leads us to believe that it is highly suitable for large-scale dense problems.
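As a rough illustration of the SparseLL idea, the sketch below fits nonnegative edge weights w so that the sparse Laplacian L_w = Σ_e w_e b_e b_eᵀ acts like the dense Laplacian on current vectors sampled from their assumed realistic range, using stochastic gradients plus an L1 proximal step (soft-thresholding onto w ≥ 0) to induce sparsity. The helper names and the simple operator-matching objective are assumptions for illustration, not the authors' released code or exact formulation.

```python
import numpy as np

def sparsell_sketch(L_dense, edges, current_sampler, lam=1e-2,
                    lr=1e-3, n_steps=2000, seed=0):
    """Sketch of L1-regularized Laplacian sparsification: choose edge
    weights w >= 0 so L_w approximates L_dense on sampled current
    vectors, while soft-thresholding drives most w_e to zero."""
    rng = np.random.default_rng(seed)
    n = L_dense.shape[0]
    m = len(edges)
    w = np.full(m, 1.0 / m)  # uniform nonnegative initialization
    # Incidence rows: b_e = e_i - e_j for each candidate edge (i, j).
    B = np.zeros((m, n))
    for k, (i, j) in enumerate(edges):
        B[k, i], B[k, j] = 1.0, -1.0
    for _ in range(n_steps):
        c = current_sampler(rng)               # current drawn from its range
        Bc = B @ c                             # b_e^T c for every edge
        resid = B.T @ (w * Bc) - L_dense @ c   # (L_w - L) c
        grad = 2.0 * Bc * (B @ resid)          # d/dw_e of ||(L_w - L) c||^2
        w -= lr * grad
        # Proximal step for lam * ||w||_1 restricted to w >= 0.
        w = np.maximum(w - lr * lam, 0.0)
    return w

# Toy use: sparsify a dense 20-node grid under bounded, zero-sum currents.
n = 20
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, (n, n))
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
def sampler(rng):
    c = rng.uniform(-1, 1, n)   # bounded port currents
    return c - c.mean()         # Kirchhoff: currents sum to zero
w = sparsell_sketch(L, edges, sampler)
print(f"{(w > 1e-6).sum()} of {len(edges)} candidate edges kept")
```

The key mechanism mirrors the abstract: sampling currents only from their bounded range relaxes the approximation requirement relative to worst-case spectral sparsification, so the L1 penalty can zero out far more edges at comparable accuracy.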
neural information processing systems | 2016
Xinyang Yi; Dohyung Park; Yudong Chen; Constantine Caramanis
international conference on machine learning | 2014
Xinyang Yi; Constantine Caramanis; Sujay Sanghavi
conference on learning theory | 2014
Yudong Chen; Xinyang Yi; Constantine Caramanis
neural information processing systems | 2015
Xinyang Yi; Zhaoran Wang; Constantine Caramanis; Han Liu
international conference on machine learning | 2015
Xinyang Yi; Constantine Caramanis; Eric Price
arXiv: Learning | 2016
Xinyang Yi; Constantine Caramanis; Sujay Sanghavi
neural information processing systems | 2015
Xinyang Yi; Constantine Caramanis
arXiv: Learning | 2015
Xinyang Yi; Constantine Caramanis