Publication


Featured research published by Yin Tat Lee.


Symposium on the Theory of Computing | 2013

Improved Cheeger's inequality: analysis of spectral partitioning algorithms through higher order spectral gap

Tsz Chiu Kwok; Lap Chi Lau; Yin Tat Lee; Shayan Oveis Gharan; Luca Trevisan

Let φ(G) be the minimum conductance of an undirected graph G, and let 0 = λ_1 ≤ λ_2 ≤ … ≤ λ_n ≤ 2 be the eigenvalues of the normalized Laplacian matrix of G. We prove that for any graph G and any k ≥ 2, φ(G) = O(k) λ_2/√λ_k, and this performance guarantee is achieved by the spectral partitioning algorithm. This improves Cheeger's inequality, and the bound is optimal up to a constant factor for any k. Our result shows that the spectral partitioning algorithm is a constant factor approximation algorithm for finding a sparse cut if λ_k is a constant for some constant k. This provides some theoretical justification for its empirical performance in image segmentation and clustering problems. We extend the analysis to spectral algorithms for other graph partitioning problems, including multi-way partition, balanced separator, and maximum cut.
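As an illustration of the algorithm being analyzed, here is a minimal numpy sketch of sweep-cut spectral partitioning: order vertices by the second eigenvector of the normalized Laplacian and return the best prefix cut. The dense eigensolver and helper names are illustrative choices, not code from the paper.

```python
# Sweep-cut spectral partitioning: sort vertices by the second eigenvector
# of the normalized Laplacian and take the prefix cut of least conductance.
# Illustrative sketch only (dense solver, small connected graphs).
import numpy as np

def sweep_cut(A):
    """A: symmetric adjacency matrix of a connected graph (numpy array)."""
    d = A.sum(axis=1)                                  # degrees (assumed > 0)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(d)) - D_inv_sqrt @ A @ D_inv_sqrt   # normalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)               # 0 = λ_1 <= ... <= 2
    order = np.argsort(eigvecs[:, 1] / np.sqrt(d))     # sort by D^{-1/2} v_2
    vol_total = d.sum()
    best, best_phi = None, np.inf
    for i in range(1, len(order)):                     # all n-1 prefix cuts
        S = order[:i]
        vol_S = d[S].sum()
        cut = A[np.ix_(S, order[i:])].sum()            # weight leaving S
        phi = cut / min(vol_S, vol_total - vol_S)      # conductance of S
        if phi < best_phi:
            best, best_phi = S, phi
    return best, best_phi
```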


Symposium on the Theory of Computing | 2017

Kernel-based methods for bandit convex optimization

Sébastien Bubeck; Yin Tat Lee; Ronen Eldan

We consider the adversarial convex bandit problem and we build the first poly(T)-time algorithm with poly(n)√T-regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves Õ(n^{9.5}√T)-regret, and we show that a simple variant of this algorithm can be run in poly(n log(T))-time per step at the cost of an additional poly(n) T^{o(1)} factor in the regret. These results improve upon the Õ(n^{11}√T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)}√T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve Õ(n^{1.5}√T)-regret, and moreover that this regret is unimprovable (the current best lower bound being Ω(n√T), and it is achieved with linear functions). For the simpler situation of zeroth-order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order n^3/ε^2.
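Ingredient (iii) can be shown in isolation: a toy exponential-weights learner over a discretized action set whose learning rate increases with time. The losses, schedule, and discretization below are assumptions for illustration; the paper's algorithm plays kernel-smoothed continuous actions under bandit feedback.

```python
# Toy exponential weights with an annealed (increasing) learning rate,
# illustrating ingredient (iii) only -- not the kernelized bandit method.
import numpy as np

rng = np.random.default_rng(0)
actions = np.linspace(-1.0, 1.0, 101)       # crude discretization of actions
cum_loss = np.zeros_like(actions)
T = 10_000

for t in range(1, T + 1):
    eta_t = 0.5 * np.sqrt(t / T)            # learning rate *increases* with t
    p = np.exp(-eta_t * (cum_loss - cum_loss.min()))
    p /= p.sum()                            # exponential-weights distribution
    a = rng.choice(actions, p=p)            # play a random action
    cum_loss += (actions - 0.3) ** 2        # full-information toy convex loss
```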


Symposium on the Theory of Computing | 2017

An SDP-based algorithm for linear-sized spectral sparsification

Yin Tat Lee; He Sun

For any undirected and weighted graph G = (V, E, w) with n vertices and m edges, we call a sparse subgraph H of G, with proper reweighting of the edges, a (1+ε)-spectral sparsifier if (1−ε) x^T L_G x ≤ x^T L_H x ≤ (1+ε) x^T L_G x holds for any x ∈ ℝ^n, where L_G and L_H are the respective Laplacian matrices of G and H. Noticing that Ω(m) time is needed for any algorithm to construct a spectral sparsifier and a spectral sparsifier of G requires Ω(n) edges, a natural question is to investigate, for any constant ε, if a (1+ε)-spectral sparsifier of G with O(n) edges can be constructed in Õ(m) time, where the Õ notation suppresses polylogarithmic factors. All previous constructions of spectral sparsifiers require either a super-linear number of edges or m^{1+Ω(1)} time. In this work we answer this question affirmatively by presenting an algorithm that, for any undirected graph G and ε > 0, outputs a (1+ε)-spectral sparsifier of G with O(n/ε^2) edges in Õ(m/ε^{O(1)}) time. Our algorithm is based on three novel techniques: (1) a new potential function which is much easier to compute yet has similar guarantees as the potential functions used in previous references; (2) an efficient reduction from a two-sided spectral sparsifier to a one-sided spectral sparsifier; (3) constructing a one-sided spectral sparsifier by a semi-definite program.
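The defining inequality can be tested numerically: H is a (1+ε)-spectral sparsifier of G exactly when the eigenvalues of L_H, whitened by the pseudo-inverse square root of L_G, all lie in [1−ε, 1+ε]. A dense-linear-algebra sketch of that check (assuming G is connected so the null spaces of L_G and L_H agree; names are illustrative):

```python
# Check (1-eps) x^T L_G x <= x^T L_H x <= (1+eps) x^T L_G x for all x
# via generalized eigenvalues on the range of L_G.
import numpy as np

def laplacian(n, edges):
    """edges: list of (u, v, weight). Returns the weighted Laplacian."""
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def is_spectral_sparsifier(LG, LH, eps, tol=1e-9):
    vals, vecs = np.linalg.eigh(LG)
    keep = vals > tol                          # range of L_G (drop null space)
    W = vecs[:, keep] / np.sqrt(vals[keep])    # columns of L_G^{+1/2}
    ev = np.linalg.eigvalsh(W.T @ LH @ W)      # whitened quadratic form
    return ev.min() >= 1 - eps - tol and ev.max() <= 1 + eps + tol
```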


Symposium on the Theory of Computing | 2018

Convergence rate of Riemannian Hamiltonian Monte Carlo and faster polytope volume computation

Yin Tat Lee; Santosh Vempala

We give the first rigorous proof of the convergence of Riemannian Hamiltonian Monte Carlo, a general (and practical) method for sampling Gibbs distributions. Our analysis shows that the rate of convergence is bounded in terms of natural smoothness parameters of an associated Riemannian manifold. We then apply the method with the manifold defined by the log barrier function to the problems of (1) uniformly sampling a polytope and (2) computing its volume, the latter by extending Gaussian cooling to the manifold setting. In both cases, the total number of steps needed is O*(mn^{2/3}), improving the state of the art. A key ingredient of our analysis is a proof of an analog of the KLS conjecture for Gibbs distributions over manifolds.
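For orientation, the flat (Euclidean) special case of Hamiltonian Monte Carlo is easy to state: a leapfrog-integrated step with a Metropolis correction for a density proportional to e^{−f}. The Riemannian method above replaces this flat metric with one induced by the log barrier; the step size and integrator length below are illustrative assumptions.

```python
# Euclidean HMC step for sampling exp(-f(x)): the flat-metric special case
# of the Riemannian method discussed above. Parameters are illustrative.
import numpy as np

def hmc_step(x, f, grad_f, step=0.1, n_leapfrog=20,
             rng=np.random.default_rng()):
    p = rng.standard_normal(x.shape)           # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new -= 0.5 * step * grad_f(x_new)        # leapfrog: momentum half step
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                  # full position step
        p_new -= step * grad_f(x_new)          # full momentum step
    x_new += step * p_new
    p_new -= 0.5 * step * grad_f(x_new)        # final momentum half step
    # Metropolis accept/reject corrects the discretization error.
    dH = (f(x_new) - f(x)) + 0.5 * (p_new @ p_new - p @ p)
    return x_new if rng.random() < np.exp(-dH) else x
```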


Symposium on the Theory of Computing | 2017

Geodesic walks in polytopes

Yin Tat Lee; Santosh Vempala

We introduce the geodesic walk for sampling Riemannian manifolds and apply it to the problem of generating uniform random points from the interior of polytopes in ℝ^n specified by m inequalities. The walk is a discrete-time simulation of a stochastic differential equation (SDE) on the Riemannian manifold equipped with the metric induced by the Hessian of a convex function; each step is the solution of an ordinary differential equation (ODE). The resulting sampling algorithm for polytopes mixes in O*(mn^{3/4}) steps. This is the first walk that breaks the quadratic barrier for mixing in high dimension, improving on the previous best bound of O*(mn) by Kannan and Narayanan for the Dikin walk. We also show that each step of the geodesic walk (solving an ODE) can be implemented efficiently, thus improving the time complexity for sampling polytopes. Our analysis of the geodesic walk for general Hessian manifolds does not assume positive curvature and might be of independent interest.
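For a polytope {x : Ax ≤ b}, the convex function in question is typically the log barrier, whose Hessian has a simple closed form; a minimal sketch of computing the induced metric at an interior point (function names are illustrative):

```python
# Hessian of the log barrier phi(x) = -sum_i log(b_i - a_i^T x),
# which induces the Riemannian metric used inside {x : Ax <= b}.
import numpy as np

def barrier_hessian(A, b, x):
    """A: (m, n) constraint matrix, b: (m,), x: strictly feasible point.
    Returns the (n, n) Hessian A^T S^{-2} A with S = diag(b - Ax)."""
    s = b - A @ x                        # slacks, positive in the interior
    assert np.all(s > 0), "x must be strictly inside the polytope"
    return (A / s[:, None] ** 2).T @ A   # sum_i a_i a_i^T / s_i^2
```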


Symposium on the Theory of Computing | 2018

The Paulsen problem, continuous operator scaling, and smoothed analysis

Tsz Chiu Kwok; Lap Chi Lau; Yin Tat Lee

The Paulsen problem is a basic open problem in operator theory: given vectors u_1, …, u_n ∈ ℝ^d that ε-nearly satisfy the Parseval condition and the equal-norm condition, are they close to a set of vectors v_1, …, v_n ∈ ℝ^d that exactly satisfy the Parseval condition and the equal-norm condition? Given u_1, …, u_n, the squared distance (to the set of exact solutions) is defined as inf_v ∑_{i=1}^n ||u_i − v_i||_2^2, where the infimum is over the set of exact solutions. Previous results show that the squared distance of any ε-nearly solution is at most O(poly(d, n, ε)) and there are ε-nearly solutions with squared distance at least Ω(dε). The fundamental open question is whether the squared distance can be independent of the number of vectors n. We answer this question affirmatively by proving that the squared distance of any ε-nearly solution is O(d^{13/2} ε). Our approach is based on a continuous version of the operator scaling algorithm and consists of two parts. First, we define a dynamical system based on operator scaling and use it to prove that the squared distance of any ε-nearly solution is O(d^2 n ε). Then, we show that by randomly perturbing the input vectors, the dynamical system converges faster, and the squared distance of an ε-nearly solution is O(d^{5/2} ε) when n is large enough and ε is small enough. To analyze the convergence of the dynamical system, we develop new techniques for lower bounding the operator capacity, a concept introduced by Gurvits to analyze the operator scaling algorithm.
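One common normalization of the two conditions is ∑_i u_i u_i^T = I (Parseval) and ||u_i||^2 = d/n (equal norm); under that assumption, a small sketch that measures how nearly a given set of vectors satisfies them (conventions for ε vary across references):

```python
# Measure deviation from the Parseval condition (sum u_i u_i^T = I) and
# the equal-norm condition (||u_i||^2 = d/n). One common normalization.
import numpy as np

def paulsen_eps(U):
    """U: (n, d) array whose rows are the vectors u_1, ..., u_n."""
    n, d = U.shape
    frame_op = U.T @ U                                 # sum_i u_i u_i^T
    eps_parseval = np.linalg.norm(frame_op - np.eye(d), 2)   # spectral norm
    norms = (U ** 2).sum(axis=1)
    eps_norm = np.abs(norms - d / n).max() / (d / n)   # relative norm spread
    return max(eps_parseval, eps_norm)
```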


Symposium on the Theory of Computing | 2017

Subquadratic submodular function minimization

Deeparnab Chakrabarty; Yin Tat Lee; Aaron Sidford; Sam Chiu-wai Wong

Submodular function minimization (SFM) is a fundamental discrete optimization problem which generalizes many well-known problems, has applications in various fields, and can be solved in polynomial time. Owing to applications in computer vision and machine learning, fast SFM algorithms are highly desirable. The current fastest algorithms [Lee, Sidford, Wong, 2015] run in O(n^2 log(nM) · EO + n^3 log^{O(1)}(nM)) time and O(n^3 log^2 n · EO + n^4 log^{O(1)} n) time respectively, where M is the largest absolute value of the function (assuming the range is integers) and EO is the time taken to evaluate the function on any set. Although the best known lower bound on the query complexity is only Ω(n) [Harvey, 2008], the current shortest non-deterministic proof [Cunningham, 1985] certifying the optimum value of a function requires Ω(n^2) function evaluations. The main contribution of this paper is subquadratic SFM algorithms. For integer-valued submodular functions, we give an SFM algorithm which runs in O(nM^3 log n · EO) time, giving the first nearly linear time algorithm in any known regime. For real-valued submodular functions with range in [−1, 1], we give an algorithm which in Õ(n^{5/3} · EO/ε^2) time returns an ε-additive approximate solution. At their heart, our algorithms are projected stochastic subgradient descent methods on the Lovász extension of submodular functions, where we crucially exploit submodularity and data structures to obtain fast, i.e. sublinear time, subgradient updates. The latter is crucial for beating the n^2 bound: we show that algorithms which access only subgradients of the Lovász extension, and these include the empirically fast Fujishige-Wolfe heuristic [Fujishige, 1980; Wolfe, 1976], …
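The deterministic skeleton of such a method is short: a subgradient of the Lovász extension at x comes from the greedy sorting rule, and iterates are clipped back to [0, 1]^n. A sketch assuming f is given as a set-function oracle; the paper's stochastic, sublinear-time subgradient updates are not reproduced here.

```python
# Projected subgradient descent on the Lovasz extension of a submodular f.
# Greedy subgradient rule: sort coordinates of x in decreasing order;
# g[pi(i)] = f(first i elements) - f(first i-1 elements).
import numpy as np

def lovasz_subgradient(f, x):
    order = np.argsort(-x)                 # coordinates in decreasing order
    g = np.zeros_like(x)
    prev, S = f(frozenset()), set()
    for i in order:
        S.add(int(i))
        cur = f(frozenset(S))
        g[i] = cur - prev                  # marginal value along the chain
        prev = cur
    return g

def minimize_sfm(f, n, steps=1000, lr=0.1):
    x = np.full(n, 0.5)
    for _ in range(steps):
        x = np.clip(x - lr * lovasz_subgradient(f, x), 0.0, 1.0)
    # Round: among threshold sets of x, pick the one of least f-value.
    candidates = [frozenset(np.flatnonzero(x >= t)) for t in np.unique(x)]
    candidates.append(frozenset())
    best = min(candidates, key=f)
    return best, f(best)
```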


Symposium on the Theory of Computing | 2018

k-server via multiscale entropic regularization

Sébastien Bubeck; Michael B. Cohen; Yin Tat Lee; James R. Lee; Aleksander Mądry

We present an O((log k)^2)-competitive randomized algorithm for the k-server problem on hierarchically separated trees (HSTs). This is the first o(k)-competitive randomized algorithm for which the competitive ratio is independent of the size of the underlying HST. Our algorithm is designed in the framework of online mirror descent, where the mirror map is a multiscale entropy. When combined with Bartal's static HST embedding reduction, this leads to an O((log k)^2 log n)-competitive algorithm on any n-point metric space. We give a new dynamic HST embedding that yields an O((log k)^3 log Δ)-competitive algorithm on any metric space where the ratio of the largest to smallest non-zero distance is at most Δ.
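The single-scale case of this framework is the familiar one: online mirror descent on the probability simplex with the negative-entropy mirror map reduces to the multiplicative-weights update. A minimal sketch of that special case (the multiscale entropy on HSTs is substantially more involved):

```python
# Online mirror descent on the simplex with the negative-entropy mirror
# map; one step is exactly the multiplicative-weights update.
# Single-scale illustration of the framework, not the HST algorithm.
import numpy as np

def md_entropy_step(p, loss, eta=0.1):
    """p: current distribution, loss: per-coordinate loss vector."""
    q = p * np.exp(-eta * loss)    # gradient step in dual (log) coordinates
    return q / q.sum()             # Bregman projection back to the simplex
```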


Symposium on the Theory of Computing | 2018

Stochastic localization + Stieltjes barrier = tight bound for log-Sobolev

Yin Tat Lee; Santosh Vempala



Symposium on the Theory of Computing | 2018

A matrix expander Chernoff bound

Ankit Garg; Yin Tat Lee; Zhao Song; Nikhil Srivastava


Collaboration


Dive into Yin Tat Lee's collaborations.

Top Co-Authors

Santosh Vempala
Georgia Institute of Technology

Michael B. Cohen
Massachusetts Institute of Technology

Lap Chi Lau
The Chinese University of Hong Kong

Tsz Chiu Kwok
The Chinese University of Hong Kong

Aleksander Mądry
Massachusetts Institute of Technology

James R. Lee
University of Washington