
Publication


Featured research published by Seth J. Teller.


International Conference on Embedded Networked Sensor Systems | 2004

Robust distributed network localization with noisy range measurements

David Moore; John J. Leonard; Daniela Rus; Seth J. Teller

This paper describes a distributed, linear-time algorithm for localizing sensor network nodes in the presence of range measurement noise and demonstrates the algorithm on a physical network. We introduce the probabilistic notion of robust quadrilaterals as a way to avoid flip ambiguities that otherwise corrupt localization computations. We formulate the localization problem as a two-dimensional graph realization problem: given a planar graph with approximately known edge lengths, recover the Euclidean position of each vertex up to a global rotation and translation. This formulation is applicable to the localization of sensor networks in which each node can estimate the distance to each of its neighbors, but no absolute position reference such as GPS or fixed anchor nodes is available. We implemented the algorithm on a physical sensor network and empirically assessed its accuracy and performance. Also, in simulation, we demonstrate that the algorithm scales to large networks and handles real-world deployment geometries. Finally, we show how the algorithm supports localization of mobile nodes.
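The basic primitive underlying such range-based localization can be sketched as follows. This is a simplified illustration, not the paper's robust-quadrilateral algorithm: it assumes a 2-D plane, three or more anchors with already-known positions, and plain linearized least squares over the noisy range measurements.

```python
import math

def trilaterate(anchors, ranges):
    """Estimate a 2-D position from >= 3 anchors with noisy range
    measurements, by linearizing the circle equations (least squares).
    anchors: list of (x, y); ranges: list of measured distances."""
    (x0, y0), r0 = anchors[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        # Subtracting the first circle equation removes the quadratic terms.
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations (A^T A) p = A^T b directly.
    a11 = sum(r[0] * r[0] for r in A)
    a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * v for r, v in zip(A, b))
    b2 = sum(r[1] * v for r, v in zip(A, b))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

In the paper's anchor-free setting, such local position fixes are only attempted within robust quadrilaterals, which is what guards against the flip ambiguities this naive version is prone to.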


ACM/IEEE International Conference on Mobile Computing and Networking | 2001

The Cricket compass for context-aware mobile applications

Nissanka Bodhi Priyantha; Allen Miu; Hari Balakrishnan; Seth J. Teller

The ability to determine the orientation of a device is of fundamental importance in context-aware and location-dependent mobile computing. By analogy to a traditional compass, knowledge of orientation through the Cricket compass attached to a mobile device enhances various applications, including efficient way-finding and navigation, directional service discovery, and “augmented-reality” displays. Our compass infrastructure enhances the spatial inference capability of the Cricket indoor location system [20], and enables new pervasive computing applications. Using fixed active beacons and carefully placed passive ultrasonic sensors, we show how to estimate the orientation of a mobile device to within a few degrees, using precise, sub-centimeter differences in distance estimates from a beacon to each sensor on the compass. Then, given a set of fixed, active position beacons whose locations are known, we describe an algorithm that combines several carrier arrival times to produce a robust estimate of the rigid orientation of the mobile compass. The hardware of the Cricket compass is small enough to be integrated with a handheld mobile device. It includes five passive ultrasonic receivers, each 0.8 cm in diameter, arrayed in a “V” shape a few centimeters across. Cricket beacons deployed throughout a building broadcast coupled 418 MHz RF packet data and a 40 kHz ultrasound carrier, which are processed by the compass software to obtain differential distance and position estimates. Our experimental results show that our prototype implementation can determine compass orientation to within 3 degrees when the true angle lies between ±30 degrees, and to within 5 degrees when the true angle lies between ±40 degrees, with respect to a fixed beacon.
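The geometric idea behind the differential distance estimates can be sketched with a far-field approximation: for two receivers a fixed baseline apart and a beacon far away relative to that baseline, the difference in beacon-to-receiver distances is approximately baseline × sin(bearing). This is an illustrative simplification, not the paper's full multi-sensor carrier-phase algorithm.

```python
import math

def bearing_from_ddist(delta_d, baseline):
    """Estimate the bearing (radians) to a distant beacon from the
    difference in distances measured at two receivers a fixed baseline
    apart.  Far-field approximation: delta_d ~= baseline * sin(theta)."""
    # Clamp against measurement noise pushing the ratio outside [-1, 1].
    ratio = max(-1.0, min(1.0, delta_d / baseline))
    return math.asin(ratio)
```

With a few-centimeter baseline, resolving bearings to a few degrees requires distance differences accurate to well under a centimeter, which is why the system's sub-centimeter ultrasound ranging matters.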


International Conference on Computer Graphics and Interactive Techniques | 1991

Visibility preprocessing for interactive walkthroughs

Seth J. Teller; Carlo H. Séquin

The number of polygons comprising interesting architectural models is many more than can be rendered at interactive frame rates. However, due to occlusion by opaque surfaces (e.g., walls), only a small fraction of a typical model is visible from most viewpoints. We describe a method of visibility preprocessing that is efficient and effective for axis-aligned or axial architectural models. A model is subdivided into rectangular cells whose boundaries coincide with major opaque surfaces. Non-opaque portals are identified on cell boundaries, and used to form an adjacency graph connecting the cells of the subdivision. Next, the cell-to-cell visibility is computed for each cell of the subdivision, by linking pairs of cells between which unobstructed sightlines exist. During an interactive walkthrough phase, an observer with a known position and view cone moves through the model. At each frame, the cell containing the observer is identified, and the contents of potentially visible cells are retrieved from storage. The set of potentially visible cells is further reduced by culling it against the observer's view cone, producing the eye-to-cell visibility. The contents of the remaining visible cells are then sent to a graphics pipeline for hidden-surface removal and rendering. Tests on moderately complex 2-D and 3-D axial models reveal substantially reduced rendering loads.
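The cell-and-portal structure can be sketched as a graph traversal. The version below is a deliberately crude stand-in: it treats "potentially visible" as "reachable through at most N portals", whereas the paper's cell-to-cell visibility additionally requires an unobstructed sightline threading the whole portal sequence.

```python
from collections import defaultdict, deque

def build_adjacency(portals):
    """portals: iterable of (cell_a, cell_b) pairs, one per non-opaque
    portal on a shared cell boundary.  Returns the adjacency graph."""
    graph = defaultdict(set)
    for a, b in portals:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def potentially_visible(graph, start, max_depth):
    """Cells reachable from 'start' through at most max_depth portals,
    via breadth-first search over the adjacency graph."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        cell, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for nxt in graph[cell]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen
```

At walkthrough time, the set returned for the observer's current cell would then be culled further against the view cone to obtain the eye-to-cell visibility.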


International Conference on Robotics and Automation | 2003

An Atlas framework for scalable mapping

Michael Bosse; Paul Newman; John J. Leonard; Martin Soika; Wendelin Feiten; Seth J. Teller

This paper describes Atlas, a hybrid metrical/topological approach to SLAM that achieves efficient mapping of large-scale environments. The representation is a graph of coordinate frames, with each vertex in the graph representing a local frame, and each edge representing the transformation between adjacent frames. In each frame, we build a map that captures the local environment and the current robot pose along with the uncertainties of each. Each map's uncertainties are modeled with respect to its own frame. Probabilities of entities with respect to arbitrary frames are generated by following a path formed by the edges between adjacent frames, computed via Dijkstra's shortest-path algorithm. Loop closing is achieved via an efficient map matching algorithm. We demonstrate the technique running in real-time in a large indoor structured environment (2.2 km path length) with multiple nested loops using laser or ultrasonic ranging sensors.
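The frame-graph idea can be sketched as Dijkstra's algorithm over frames, composing the edge transforms along the chosen path. This simplified version uses SE(2) transforms and a single scalar edge weight as an uncertainty proxy; Atlas itself propagates full uncertain transformations.

```python
import heapq
import math

def compose(t1, t2):
    """Compose two SE(2) transforms given as (x, y, theta)."""
    x1, y1, th1 = t1
    x2, y2, th2 = t2
    c, s = math.cos(th1), math.sin(th1)
    return (x1 + c * x2 - s * y2, y1 + s * x2 + c * y2, th1 + th2)

def frame_to_frame(edges, src, dst):
    """edges: {frame: [(neighbor, transform, weight), ...]} where weight
    is a scalar uncertainty proxy.  Returns the src-to-dst transform
    composed along the lowest-uncertainty (Dijkstra) path."""
    identity = (0.0, 0.0, 0.0)
    best = {src: 0.0}
    heap = [(0.0, src, identity)]
    while heap:
        cost, frame, xform = heapq.heappop(heap)
        if frame == dst:
            return xform
        if cost > best[frame]:
            continue  # stale queue entry
        for nbr, t, w in edges.get(frame, []):
            nc = cost + w
            if nbr not in best or nc < best[nbr]:
                best[nbr] = nc
                heapq.heappush(heap, (nc, nbr, compose(xform, t)))
    return None
```

The point of the path search is that an entity's pose is never expressed globally; it is re-expressed in whatever frame the query needs, through the least uncertain chain of edges.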


Computer Vision and Pattern Recognition | 2006

Particle Video: Long-Range Motion Estimation using Point Trajectories

Peter Sand; Seth J. Teller

This paper describes a new approach to motion estimation in video. We represent video motion using a set of particles. Each particle is an image point sample with a long-duration trajectory and other properties. To optimize particle trajectories we measure appearance consistency along the particle trajectories and distortion between the particles. The resulting motion representation is useful for a variety of applications and cannot be directly obtained using existing methods such as optical flow or feature tracking. We demonstrate the algorithm on challenging real-world videos that include complex scene geometry, multiple types of occlusion, regions with low texture, and non-rigid deformations.
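The trade-off between appearance consistency and inter-particle distortion can be illustrated with a toy discrete version of the objective: among candidate positions for one particle, pick the one minimizing an appearance term plus a distortion term over its linked neighbors. All names here are illustrative; the paper optimizes continuous trajectories over many frames, not a single discrete choice.

```python
def best_particle_position(candidates, ref_color, frame_color, neighbors,
                           alpha=1.0):
    """Toy particle objective.  candidates: possible (x, y) positions;
    ref_color: the particle's reference intensity; frame_color: dict
    mapping position -> intensity in the current frame; neighbors:
    list of (neighbor_pos, expected_offset) links; alpha weights the
    distortion term against the appearance term."""
    def energy(p):
        appearance = (frame_color[p] - ref_color) ** 2
        distortion = sum(
            (p[0] - n[0] - off[0]) ** 2 + (p[1] - n[1] - off[1]) ** 2
            for n, off in neighbors
        )
        return appearance + alpha * distortion
    return min(candidates, key=energy)
```

The distortion links are what let a particle survive momentary occlusion or low texture: when its own appearance term is uninformative, its neighbors pull it along a plausible trajectory.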


The International Journal of Robotics Research | 2004

Simultaneous Localization and Map Building in Large-Scale Cyclic Environments Using the Atlas Framework

Michael Bosse; Paul Newman; John J. Leonard; Seth J. Teller

In this paper we describe Atlas, a hybrid metrical/topological approach to simultaneous localization and mapping (SLAM) that achieves efficient mapping of large-scale environments. The representation is a graph of coordinate frames, with each vertex in the graph representing a local frame and each edge representing the transformation between adjacent frames. In each frame, we build a map that captures the local environment and the current robot pose along with the uncertainties of each. Each map’s uncertainties are modeled with respect to its own frame. Probabilities of entities with respect to arbitrary frames are generated by following a path formed by the edges between adjacent frames, computed using either Dijkstra's shortest-path algorithm or breadth-first search. Loop closing is achieved via an efficient map-matching algorithm coupled with a cycle verification step. We demonstrate the performance of the technique for post-processing large data sets, including an indoor structured environment (2.2 km path length) with multiple nested loops using laser or ultrasonic ranging sensors.


International Conference on Robotics and Automation | 2006

Fast iterative alignment of pose graphs with poor initial estimates

Edwin Olson; John J. Leonard; Seth J. Teller

A robot exploring an environment can estimate its own motion and the relative positions of features in the environment. Simultaneous localization and mapping (SLAM) algorithms attempt to fuse these estimates to produce a map and a robot trajectory. The constraints are generally non-linear, thus SLAM can be viewed as a non-linear optimization problem. The optimization can be difficult, due to poor initial estimates arising from odometry data, and due to the size of the state space. We present a fast non-linear optimization algorithm that rapidly recovers the robot trajectory, even when given a poor initial estimate. Our approach uses a variant of stochastic gradient descent on an alternative state-space representation that has good stability and computational properties. We compare our algorithm to several others, using both real and synthetic data sets.
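The shape of the optimization can be sketched on a toy 1-D pose graph: poses on a line, relative-offset constraints (odometry plus a loop closure), relaxed by plain gradient descent. This is a simplified stand-in for the paper's stochastic gradient descent on its incremental state representation.

```python
def relax_pose_graph(n, constraints, iters=200, step=0.1):
    """Relax a 1-D pose graph by gradient descent on the squared
    constraint residuals.  constraints: list of (i, j, d) meaning
    x[j] - x[i] should equal d; pose 0 is held fixed as the anchor."""
    x = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for i, j, d in constraints:
            r = (x[j] - x[i]) - d  # residual of this constraint
            grad[j] += r
            grad[i] -= r
        for k in range(1, n):      # pose 0 stays anchored at the origin
            x[k] -= step * grad[k]
    return x
```

Even in this toy, an inconsistent loop closure (the third constraint below disagrees with the odometry chain) is resolved by spreading the error across the trajectory rather than dumping it on one pose.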


International Conference on Robotics and Automation | 2011

Anytime Motion Planning using the RRT*

Sertac Karaman; Matthew R. Walter; Alejandro Perez; Emilio Frazzoli; Seth J. Teller

The Rapidly-exploring Random Tree (RRT) algorithm, based on incremental sampling, efficiently computes motion plans. Although the RRT algorithm quickly produces candidate feasible solutions, it tends to converge to a solution that is far from optimal. Practical applications favor “anytime” algorithms that quickly identify an initial feasible plan, then, given more computation time available during plan execution, improve the plan toward an optimal solution. This paper describes an anytime algorithm based on the RRT* which (like the RRT) finds an initial feasible solution quickly, but (unlike the RRT) almost surely converges to an optimal solution. We present two key extensions to the RRT*, committed trajectories and branch-and-bound tree adaptation, that together enable the algorithm to make more efficient use of computation time online, resulting in an anytime algorithm for real-time implementation. We evaluate the method using a series of Monte Carlo runs in a high-fidelity simulation environment, and compare the operation of the RRT and RRT* methods. We also demonstrate experimental results for an outdoor wheeled robotic vehicle.
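The branch-and-bound idea mentioned above can be sketched in a few lines: once some feasible solution with known cost exists, any tree vertex whose cost-to-come plus an admissible lower bound on its cost-to-go (here, straight-line distance to the goal) exceeds that cost can never improve the plan and may be deleted. This is an illustrative sketch, not the authors' implementation.

```python
import math

def prune_tree(nodes, goal, best_cost):
    """Branch-and-bound pruning of a planning tree.
    nodes: {node_id: (position, cost_to_come)}; goal: goal position;
    best_cost: cost of the best feasible solution found so far.
    Keeps only vertices that could still beat best_cost."""
    return {
        nid: (pos, cost)
        for nid, (pos, cost) in nodes.items()
        if cost + math.dist(pos, goal) <= best_cost
    }
```

In an anytime loop, pruning after each improved solution keeps the tree small, so the remaining computation budget is spent refining parts of the tree that can still pay off.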


Interactive 3D Graphics and Games | 1997

Real-time occlusion culling for models with large occluders

Satyan R. Coorg; Seth J. Teller

Efficiently identifying polygons that are visible from a dynamic synthetic viewpoint is an important problem in computer graphics. Typically, visibility determination is performed using the z-buffer algorithm. As this algorithm must examine every triangle in the input scene, z-buffering can consume a significant fraction of graphics processing, especially on architectures that have a low performance or software z-buffer. One way to avoid needlessly processing invisible portions of the scene is to use an occlusion culling algorithm to discard invisible polygons early in the graphics pipeline. In this paper, we exploit the presence of large occluders in urban and architectural models to design a real-time occlusion culling algorithm. Our algorithm has the following features: it is conservative, i.e., it overestimates the set of visible polygons; it exploits spatial coherence by using a hierarchical data structure; and it exploits temporal coherence by reusing visibility information computed for previous viewpoints. The new algorithm significantly accelerates rendering of several complex test models.
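The basic single-occluder test can be illustrated in 2-D: a point is hidden by an opaque segment when it lies inside the wedge formed by the rays from the viewpoint through the segment's endpoints, on the far side of the segment. This is a simplified sketch of one occluder test, not the paper's hierarchical, temporally coherent algorithm.

```python
def cross(o, a, b):
    """2-D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def occluded(view, a, b, p):
    """Is point p hidden from 'view' by the opaque segment a-b?
    Strict inequalities keep the test conservative: boundary cases
    count as visible, so visibility is only ever overestimated."""
    # p lies strictly between the rays view->a and view->b ...
    in_wedge = cross(view, a, p) * cross(view, b, p) < 0
    # ... and strictly on the opposite side of segment a-b from the view.
    behind = cross(a, b, p) * cross(a, b, view) < 0
    return in_wedge and behind
```

Applying such a test to the bounding boxes of a spatial hierarchy, rather than to individual polygons, is what makes culling cheap enough to run per frame.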


Interactive 3D Graphics and Games | 1992

Management of large amounts of data in interactive building walkthroughs

Thomas A. Funkhouser; Carlo H. Séquin; Seth J. Teller

We describe techniques for managing large amounts of data during an interactive walkthrough of an architectural model. These techniques are based on a spatial subdivision, visibility analysis, and a display database containing objects described at multiple levels of detail. In each frame of the walkthrough, we compute a set of objects to render, i.e. those potentially visible from the observer’s viewpoint, and a set of objects to swap into memory, i.e. those that might become visible in the near future. We choose an appropriate level of detail at which to store and to render each object, possibly using very simple representations for objects that appear small to the observer, thereby saving space and time. Using these techniques, we cull away large portions of the model that are irrelevant from the observer’s viewpoint, and thereby achieve interactive frame rates.
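The level-of-detail choice can be sketched from apparent size alone: an object's angular size shrinks with distance, and coarser representations suffice below chosen cutoffs. The thresholds here are illustrative placeholders, not the paper's actual selection heuristics.

```python
def select_lod(object_radius, distance, thresholds=(0.1, 0.02)):
    """Pick a level of detail from apparent (angular) size.
    Level 0 = full detail; higher levels = coarser representations.
    thresholds are apparent-size cutoffs (radius / distance), checked
    from most to least detailed."""
    apparent = object_radius / max(distance, 1e-9)
    for level, cutoff in enumerate(thresholds):
        if apparent >= cutoff:
            return level
    return len(thresholds)
```

The same apparent-size estimate can drive prefetching: objects whose level of detail is about to increase as the observer approaches are good candidates to swap into memory early.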

Collaboration

Seth J. Teller's top co-authors:

Matthew R. Walter, Toyota Technological Institute at Chicago
John J. Leonard, Massachusetts Institute of Technology
Albert S. Huang, Massachusetts Institute of Technology
Edwin Olson, University of Michigan
Nicholas Roy, Massachusetts Institute of Technology
Luke Fletcher, Australian National University
Satyan R. Coorg, Massachusetts Institute of Technology
Emilio Frazzoli, Massachusetts Institute of Technology
David Moore, Massachusetts Institute of Technology