Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Leonid Gurvits is active.

Publication


Featured research published by Leonid Gurvits.


Linear Algebra and its Applications | 1995

Stability of discrete linear inclusion

Leonid Gurvits

Let M = {A_i} be a set of linear operators on R^n. The discrete linear inclusion DLI(M) is the set of possible trajectories (x_i : i ≥ 0) such that x_n = A_{i_n} A_{i_{n-1}} ⋯ A_{i_1} x_0, where x_0 ∈ R^n and A_{i_j} ∈ M. We study several notions of stability for DLI(M), including absolute asymptotic stability (AAS), meaning that all products A_{i_n} ⋯ A_{i_1} → 0 as n → ∞. We mainly study the case where M is a finite set. We give criteria for the various forms of stability. Two new approaches are taken: one relates the question of AAS of DLI(M) to formal language theory and finite automata, while the second connects the AAS property to the structure of a Lie algebra associated with the elements of M. More generally, the discrete linear inclusion DLI(M) makes sense for M contained in a Banach algebra B. We prove some results for AAS in this case, and give counterexamples showing that some results valid for finite sets of operators on R^n fail for finite sets M in a general Banach algebra B.
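
The AAS property quantifies over all infinite products, so no finite computation can verify it; still, a randomized search over switching sequences is a cheap sanity check. A minimal sketch in Python (the matrices and parameters are illustrative assumptions, not from the paper): small sampled product norms are evidence for AAS, while a large one exhibits a concrete non-decaying product.

import numpy as np

# Randomized sanity check for absolute asymptotic stability (AAS) of
# DLI(M): sample switching sequences i_1, ..., i_L and track the largest
# product norm seen. Small values are evidence for, never proof of, AAS.
def worst_product_norm(M, length=200, trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        P = np.eye(M[0].shape[0])
        for _ in range(length):
            P = M[rng.integers(len(M))] @ P
        worst = max(worst, np.linalg.norm(P, 2))
    return worst

M = [np.array([[0.5, 0.3], [0.0, 0.6]]),   # illustrative pair: both have
     np.array([[0.4, 0.0], [0.2, 0.7]])]   # spectral norm < 1, so AAS holds
print(worst_product_norm(M))               # ~0 for this example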


intelligent robots and systems | 1994

Mobile robot localization using landmarks

Margrit Betke; Leonid Gurvits

We describe an efficient algorithm for localizing a mobile robot in an environment with landmarks. We assume that the robot has a camera and possibly other sensors that enable it both to identify landmarks and to measure the angles subtended by these landmarks. We show how to estimate the robot's position using a new technique that involves a complex-number representation of the landmarks. Our algorithm runs in time linear in the number of landmarks. We present results of our simulations and propose how to use our method for robot navigation.
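
The complex-number idea can be sketched in a few lines; a hedged illustration (a least-squares variant under noise-free global bearings, not necessarily the paper's exact algorithm):

import numpy as np

# Landmarks are complex numbers z_i; theta_i is the global bearing from the
# robot at unknown p = x + iy to landmark i. Each bearing gives one linear
# constraint Im((z_i - p) * exp(-1j*theta_i)) = 0, i.e.
#   -sin(t_i)*x + cos(t_i)*y = -sin(t_i)*Re(z_i) + cos(t_i)*Im(z_i),
# so a least-squares solve costs time linear in the number of landmarks.
def localize(landmarks, bearings):
    z = np.asarray(landmarks, dtype=complex)
    t = np.asarray(bearings, dtype=float)
    A = np.column_stack([-np.sin(t), np.cos(t)])
    b = -np.sin(t) * z.real + np.cos(t) * z.imag
    x, y = np.linalg.lstsq(A, b, rcond=None)[0]
    return complex(x, y)

p_true = 1.0 + 2.0j
zs = [4 + 2j, 1 + 6j, -3 - 1j]
ths = [np.angle(z - p_true) for z in zs]   # noise-free bearings
print(localize(zs, ths))                   # ~ (1+2j)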


IEEE Transactions on Automatic Control | 1994

Near-optimal nonholonomic motion planning for a system of coupled rigid bodies

C. Fernandes; Leonid Gurvits; Zexiang Li

How does a falling cat change her orientation in midair without violating the angular momentum constraint? This has become an interesting problem for both control engineers and roboticists. In this paper, we address this problem and give a constructive solution. First, we show that the falling cat problem is equivalent to a constructive nonlinear controllability problem. Thus, the same principle and algorithm used by a falling cat can be applied to space robotics applications, such as reorientation of a satellite using rotors and attitude control of a space structure using internal motion, as well as to other robotic tasks, such as dexterous manipulation with multifingered robotic hands and nonholonomic motion planning for mobile robots. Then, using ideas from Ritz approximation theory, we develop a simple algorithm for motion planning of a falling cat. Finally, we test the algorithm through simulation on two widely accepted models of a falling cat. It is interesting to note that one set of simulation results closely resembles the real trajectories employed by a falling cat.
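
The Ritz step, restricting the inputs to a finite function basis and optimizing the coefficients, can be illustrated on a toy system. A minimal sketch assuming the nonholonomic integrator as a stand-in for the cat models and a one-harmonic sin/cos basis (both assumptions of this sketch, not the paper's setup):

import numpy as np
from scipy.optimize import minimize

# Ritz-style planning on the nonholonomic integrator:
#   dx1 = u1, dx2 = u2, dx3 = x1*u2 - x2*u1.
# Inputs are restricted to u_j(t) = c_j1*sin(2*pi*t/T) + c_j2*cos(2*pi*t/T)
# and the four coefficients are optimized to steer x(T) to a target.
def endpoint(coeffs, T=1.0, steps=1000):
    c = np.asarray(coeffs).reshape(2, 2)
    dt = T / steps
    x = np.zeros(3)
    for i in range(steps):
        phase = 2.0 * np.pi * i * dt / T
        u = c @ np.array([np.sin(phase), np.cos(phase)])
        x = x + dt * np.array([u[0], u[1], x[0] * u[1] - x[1] * u[0]])
    return x

target = np.array([0.0, 0.0, 1.0])          # motion along the bracket direction
cost = lambda coeffs: float(np.sum((endpoint(coeffs) - target) ** 2))
res = minimize(cost, x0=[0.1, 1.0, 1.0, 0.1], method="Nelder-Mead",
               options={"maxiter": 1000})
print(endpoint(res.x))                       # should approach [0, 0, 1]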


Archive | 1993

Smooth Time-Periodic Feedback Solutions for Nonholonomic Motion Planning

Leonid Gurvits; Zexiang Li

In this paper, we present an algorithm for computing time-periodic feedback solutions for nonholonomic motion planning with collision avoidance. For a first-order Lie bracket system, we begin by computing a holonomic collision-free path using the potential field method. Then we compute a nonholonomic path approximating the collision-free path within a predetermined bound. To do this, we first solve for extended inputs of an extended system using Lie bracket completion vectors. We then use averaging techniques to calculate the asymptotic trajectory of the nonholonomic system under a family of highly oscillatory inputs. Comparing the limiting trajectories with those of the extended system, we obtain a system of nonlinear equations from which the desired admissible control inputs can be solved. For higher-order Lie bracket systems we use multi-scale averaging and recursively apply the algorithm for first-order Lie bracket systems. Based on averaging techniques, we also provide error bounds between a nonholonomic system and its averaged system.
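
The mechanism behind the highly oscillatory inputs can be seen in a standard worked example on the nonholonomic (Brockett) integrator; this is textbook material rather than a derivation taken from the paper:

\[
\dot x_1 = u_1, \qquad \dot x_2 = u_2, \qquad \dot x_3 = x_1 u_2 - x_2 u_1 .
\]
With $u_1 = -a\omega\sin\omega t$, $u_2 = b\omega\cos\omega t$ and $x(0) = 0$ we get
$x_1 = a(\cos\omega t - 1)$, $x_2 = b\sin\omega t$, hence
\[
\dot x_3 = x_1 u_2 - x_2 u_1
         = ab\,\omega\bigl(\cos^2\omega t - \cos\omega t + \sin^2\omega t\bigr)
         = ab\,\omega\,(1 - \cos\omega t).
\]
Over one period $T = 2\pi/\omega$ the states $x_1, x_2$ return to zero while
\[
\Delta x_3 = \int_0^{T} ab\,\omega\,(1 - \cos\omega t)\,dt = 2\pi ab ,
\]
a net displacement purely along the Lie bracket direction, which is exactly the effect the averaging analysis quantifies.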


conference on learning theory | 1993

Rate of approximation results motivated by robust neural network learning

Christian Darken; Michael Donahue; Leonid Gurvits; Eduardo D. Sontag

The set of functions which a single-hidden-layer neural network can approximate is increasingly well understood, yet our knowledge of how the approximation error depends upon the number of hidden units, i.e. the rate of approximation, remains relatively primitive. Barron [1991] and Jones [1992] give bounds on the rate of approximation valid for Hilbert spaces. We derive bounds for Lp spaces, 1 < p < ∞, recovering the O(1/√n) bounds of Barron and Jones for the case p = 2. The results were motivated in part by the desire to understand approximation in the more "robust" (resistant to exemplar noise) Lp, 1 ≤ p < 2, norms. Consider the task of approximating a given target function f by a linear combination of n functions from a set S. For example, S may be the set of possible sigmoidal activation functions, {g : R^d → R | a ∈ R^d, b ∈ R, s.t. g(x) = σ(a · x + b)}, in which case the approximants are single-hidden-layer neural networks with a linear output layer. It is known that under very weak conditions on σ (it must be Riemann integrable and nonpolynomial), the linear span of S is dense in the set of continuous functions on compact subsets of R^d (i.e. for all positive ε there is a linear combination of functions in S which approximates any given continuous function to within ε everywhere on a compact domain) [Leshno et al. 1992]. Consider the important rate-of-approximation issue, i.e. the rate at which the achievable error decreases as we allow larger subsets of S to be used in constructing the approximant. In the context of neural networks, this is the question of how the approximation error scales with the number of hidden units in the network. Unfortunately, approximation bounds for target functions f arbitrarily located in the linear closure (i.e. the closure of the span) of S are unknown.
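
For orientation, the Hilbert-space bound of Barron and Jones that this paper generalizes follows from a short probabilistic argument (often attributed to Maurey); a sketch, assuming f is a finite convex combination of elements of S:

Let $f = \sum_k \lambda_k g_k$ with $\lambda_k \ge 0$, $\sum_k \lambda_k = 1$, $g_k \in S$, and $\|g\| \le B$ for all $g \in S$. Draw $g_1, \dots, g_n$ i.i.d. with $\Pr[g_i = g_k] = \lambda_k$, so that $\mathbb{E}\,g_i = f$. By independence,
\[
\mathbb{E}\Bigl\|f - \frac1n \sum_{i=1}^n g_i\Bigr\|^2
  = \frac{\mathbb{E}\|g_1\|^2 - \|f\|^2}{n} \le \frac{B^2}{n},
\]
so some convex combination of $n$ elements of $S$ is within $B/\sqrt n$ of $f$, independent of the dimension. The paper's contribution is the analogous rate in $L_p$, $1 < p < \infty$, where this inner-product argument is unavailable.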


Constructive Approximation | 1997

Rates of convex approximation in non-Hilbert spaces

M. J. Donahue; C. Darken; Leonid Gurvits; Eduardo D. Sontag

This paper deals with sparse approximations by means of convex combinations of elements from a predetermined "basis" subset S of a function space. Specifically, the focus is on the rate at which the lowest achievable error can be reduced as larger subsets of S are allowed when constructing an approximant. The new results extend those given for Hilbert spaces by Jones and Barron, including, in particular, a computationally attractive incremental approximation scheme. Bounds are derived for broad classes of Banach spaces; in particular, for Lp spaces with 1 < p < ∞, the O(n^{−1/2}) bounds of Barron and Jones are recovered when p = 2. One motivation for the questions studied here arises from the area of "artificial neural networks," where the problem can be stated in terms of the growth in the number of "neurons" (the elements of S) needed in order to achieve a desired error rate. The focus on non-Hilbert spaces is due to the desire to understand approximation in the more "robust" (resistant to exemplar noise) Lp, 1 ≤ p < 2, norms. The techniques used borrow from results regarding moduli of smoothness in functional analysis as well as from the theory of stochastic processes on function spaces.
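
The flavor of an incremental approximation scheme can be shown with a greedy toy version; the dictionary, target, and 1/n step size below are illustrative assumptions, not the paper's construction:

import numpy as np

# Greedy incremental convex approximation: f_n = (1 - 1/n) f_{n-1} + (1/n) g_n,
# with g_n chosen from a finite dictionary of +/- tanh ridge functions to
# minimize the L2 error after the convex step.
x = np.linspace(-1.0, 1.0, 400)
target = np.sin(2.0 * np.pi * x)
dictionary = [s * np.tanh(a * x + b)
              for s in (1.0, -1.0)
              for a in np.linspace(0.5, 8.0, 16)
              for b in np.linspace(-4.0, 4.0, 17)]

f_n = np.zeros_like(x)
for n in range(1, 101):
    alpha = 1.0 / n
    errs = [np.linalg.norm((1.0 - alpha) * f_n + alpha * g - target)
            for g in dictionary]
    f_n = (1.0 - alpha) * f_n + alpha * dictionary[int(np.argmin(errs))]
    if n in (1, 10, 100):
        print(n, float(np.linalg.norm(f_n - target) / np.sqrt(len(x))))

When the target lies in the closed convex hull of the dictionary, the error of such schemes decays roughly like n^{−1/2} in L2; otherwise it stalls at the distance from the target to that hull.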


international conference on robotics and automation | 1991

A variational approach to optimal nonholonomic motion planning

C. Fernandes; Leonid Gurvits; Zexiang Li

Nonholonomic motion planning (NMP) problems arise not only from classical nonholonomic constraints, but also from symmetries and conservation laws of holonomic systems. In NMP problems an admissible configuration-space path is constrained to a given nonholonomic distribution. Thus, NMP deals with the problem of (optimal) path finding subject to a nonholonomic distribution and possibly to additional holonomic constraints. The authors first study several representative nonholonomic systems and formulate the NMP problem. Variational principles are used to characterize optimal solutions to these problems. A simple algorithm solving an NMP problem is proposed, and simulation results are presented.


international conference on robotics and automation | 1992

Averaging approach to nonholonomic motion planning

Leonid Gurvits

The author considers the problem of motion planning for a nonholonomic system with drift. Open-loop and feedback solutions for nonholonomic motion planning (NMP) are constructed using the averaging technique that is well known in applied mathematics. An algorithm for open-loop and feedback solutions of NMP is introduced. The main step in the algorithm is the case of first-order Lie brackets. This case, as is shown, is equivalent to NMP for Brockett's system considered over a functional commutative algebra. Feedback solutions are constructed in the same manner. From a robotics point of view, it is shown that NMP can be reduced to the holonomic problem. Just as linear algebra plays a crucial role in linear control theory, polylinear algebra is crucial for NMP. The rolling disk example is used to illustrate the feedback algorithm.
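
A numerical check of the averaging effect on Brockett's system (parameters are illustrative assumptions): one period of sinusoidal inputs returns x1, x2 to zero while x3 advances by 2*pi*a*b, matching the worked example given above under the 1993 paper.

import numpy as np

# One input period on Brockett's system, integrated by forward Euler.
# Expected endpoint: x1 = x2 = 0 and x3 = 2*pi*a*b (here 2*pi*0.06 ~ 0.377).
def one_period(a=0.3, b=0.2, w=50.0, steps=20000):
    T = 2.0 * np.pi / w
    dt = T / steps
    x = np.zeros(3)
    for i in range(steps):
        t = i * dt
        u1 = -a * w * np.sin(w * t)
        u2 = b * w * np.cos(w * t)
        x = x + dt * np.array([u1, u2, x[0] * u2 - x[1] * u1])
    return x

print(one_period())   # ~ [0, 0, 0.377]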


conference on learning theory | 1997

Approximation and Learning of Convex Superpositions

Leonid Gurvits; Pascal Koiran

We present a fairly general method for constructing classes of functions of finite scale-sensitive dimension (the scale-sensitive dimension is a generalization of the Vapnik–Chervonenkis dimension to real-valued functions). The construction is as follows: start from a class F of functions of finite VC dimension, take the convex hull co(F) of F, and then take the closure of co(F) in an appropriate sense. As an example, we study in more detail the case where F is the class of threshold functions. It is shown that this closure includes two important classes of functions: neural networks with one hidden layer and bounded output weights, and the so-called Γ class of Barron, which was shown to satisfy a number of interesting approximation and closure properties. We also give an integral representation in the form of a "continuous neural network" which generalizes Barron's. It is shown that the existence of an integral representation is equivalent to both L2 and L∞ approximability. A preliminary version of this paper was presented at EuroCOLT'95. The main difference from the conference version is the addition of Theorem 7, where we show that a key topological result fails when the VC-dimension hypothesis is removed.
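
For reference, the scale-sensitive (fat-shattering) dimension mentioned above has the following standard definition; the notation here is generic rather than quoted from the paper.

A set $\{x_1,\dots,x_n\}$ is $\gamma$-shattered by a class $F$ of real-valued functions if there exist witnesses $r_1,\dots,r_n \in \mathbb{R}$ such that for every $b \in \{0,1\}^n$ some $f \in F$ satisfies
\[
f(x_i) \ge r_i + \gamma \ \text{ if } b_i = 1,
\qquad
f(x_i) \le r_i - \gamma \ \text{ if } b_i = 0 .
\]
The fat-shattering dimension $\mathrm{fat}_F(\gamma)$ is the largest such $n$; "finite scale-sensitive dimension" means $\mathrm{fat}_F(\gamma) < \infty$ at every scale $\gamma > 0$.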


international conference on robotics and automation | 1992

Attitude control of space platform/manipulator system using internal motion

C. Fernandes; Leonid Gurvits; Zexiang Li


Collaboration


Dive into Leonid Gurvits's collaborations.

Top Co-Authors

Zexiang Li

Hong Kong University of Science and Technology

Anne Greenbaum

University of Washington
