Alex Teichman
Stanford University
Publication
Featured research published by Alex Teichman.
IEEE Intelligent Vehicles Symposium | 2011
Jesse Levinson; Jake Askeland; Jan Becker; Jennifer Dolson; David Held; Sören Kammel; J. Zico Kolter; Dirk Langer; Oliver Pink; Vaughan R. Pratt; Michael Sokolsky; Ganymed Stanek; David Stavens; Alex Teichman; Moritz Werling; Sebastian Thrun
In order to achieve autonomous operation of a vehicle in urban situations with unpredictable traffic, several real-time systems must interoperate, including environment perception, localization, planning, and control. In addition, a robust vehicle platform with appropriate sensors, computational hardware, networking, and software infrastructure is essential.
International Conference on Robotics and Automation | 2011
Alex Teichman; Jesse Levinson; Sebastian Thrun
Object recognition is a critical next step for autonomous robots, but a solution to the problem has remained elusive. Prior 3D-sensor-based work largely classifies individual point cloud segments or uses class-specific trackers. In this paper, we take the approach of classifying the tracks of all visible objects. Our new track classification method, based on a mathematically principled method of combining log odds estimators, is fast enough for real-time use, is non-specific to object class, and performs well (98.5% accuracy) on the task of classifying correctly tracked, well-segmented objects into car, pedestrian, bicyclist, and background classes. We evaluate the classifier's performance using the Stanford Track Collection, a new dataset of about 1.3 million labeled point clouds in about 14,000 tracks recorded from an autonomous vehicle research platform. This dataset, which we make publicly available, contains tracks extracted from about one hour of 360-degree, 10 Hz depth information recorded both while driving on busy campus streets and parked at busy intersections.
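The combination of per-frame log odds estimates into a track-level decision can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function names, the assumption of conditionally independent frames, and the fallback to a background class are all assumptions made here for clarity.

```python
def track_log_odds(frame_log_odds, prior_log_odds=0.0):
    """Combine per-frame log-odds estimates into one track-level score.

    Summing log-odds corresponds to treating the frames as conditionally
    independent evidence for the class (an assumption of this sketch).
    """
    return prior_log_odds + sum(frame_log_odds)

def classify_track(frame_log_odds_by_class):
    """Pick the class with the highest combined log-odds over the track,
    falling back to 'background' when no class score is positive."""
    scores = {c: track_log_odds(lo) for c, lo in frame_log_odds_by_class.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "background"
```

Because the per-frame scores are simply accumulated, a track can be re-scored incrementally as new frames arrive, which is one way such a method stays fast enough for real-time use.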
The International Journal of Robotics Research | 2008
Michael Park; Sachin Chitta; Alex Teichman; Mark Yim
Recognizing useful modular robot configurations composed of hundreds of modules is a significant challenge. Matching a new modular robot configuration to a library of known configurations is essential in identifying and applying control schemes. We present three different algorithms to address the problem of (a) matching and (b) mapping new robot configurations onto a library of known configurations. The first method solves the problem using graph isomorphisms and can identify configurations that share the same underlying graph structure, but have different port connections amongst the modules. The second approach compares graph spectra of configuration matrices to find a permutation matrix that maps a given configuration to a known one. The third algorithm exploits the unique structure of the problem for the particular robots used in our research to achieve impressive gains in performance and speed over existing techniques, especially for larger configurations. With these three algorithms, this paper presents novel solutions to the problem of configuration recognition and sheds light on theoretical and practical issues for long-term advances in this important area of modular robotics. Results and examples are provided to compare the performance of the three algorithms and discuss their relative advantages.
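The second approach's comparison of graph spectra can be illustrated with a short sketch. `spectra_match` is a hypothetical helper, not the paper's code; it checks the necessary (though not sufficient) condition that two configurations share the same adjacency-matrix eigenvalues, which holds whenever one configuration is a relabeling of the other.

```python
import numpy as np

def spectra_match(adj_a, adj_b, tol=1e-6):
    """Compare the spectra (sorted eigenvalues) of two symmetric
    adjacency matrices. Isomorphic configurations always match;
    a match does not by itself prove isomorphism."""
    if adj_a.shape != adj_b.shape:
        return False
    ev_a = np.sort(np.linalg.eigvalsh(adj_a))
    ev_b = np.sort(np.linalg.eigvalsh(adj_b))
    return bool(np.allclose(ev_a, ev_b, atol=tol))
```

For example, a three-module chain matches any relabeled copy of itself but not a three-module loop, whose spectrum differs.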
Robotics: Science and Systems | 2013
Alex Teichman; Stephen A. Miller; Sebastian Thrun
We present a new, generic approach to the calibration of depth sensor intrinsics that requires only the ability to run SLAM. In particular, no specialized hardware, calibration target, or hand measurement is required. Essential to this approach is the idea that certain intrinsic parameters, identified here as myopic, govern distortions that increase with range. We demonstrate these ideas on the calibration of the popular Kinect and Xtion Pro Live RGBD sensors, which typically exhibit significant depth distortion at ranges greater than three meters. Making use of the myopic property, we show how to efficiently learn a discrete grid of 32,000 depth multipliers that resolve this distortion. Compared to the most similar unsupervised calibration work in the literature, this is a 100-fold increase in the maximum number of calibration parameters previously learned. Compared to the supervised calibration approach, the work of this paper means the difference between A) printing a poster of a checkerboard, mounting it to a rigid plane, and recording data of it from many different angles and ranges (a process that often requires two people or repeated use of a special easel) versus B) recording a few minutes of data from unmodified, natural environments. This is advantageous both for individuals who wish to calibrate their own sensors and for a robot that needs to calibrate automatically while in the field.
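A discrete grid of depth multipliers can be sketched as a simple lookup table indexed by image location and measured range. The bin counts, image resolution, and indexing scheme below are assumptions made for illustration, not the paper's actual parameters.

```python
import numpy as np

class DepthMultiplierGrid:
    """Lookup table of per-cell depth multipliers: corrected depth is
    the measured depth times the multiplier for that (pixel, range) bin.
    Bin counts and max range here are illustrative, not the paper's."""

    def __init__(self, u_bins=8, v_bins=6, z_bins=10, max_range=10.0):
        # Initialize to the identity correction (multiplier of 1 everywhere).
        self.mult = np.ones((v_bins, u_bins, z_bins))
        self.max_range = max_range

    def _index(self, u, v, z, width, height):
        vi = min(int(v / height * self.mult.shape[0]), self.mult.shape[0] - 1)
        ui = min(int(u / width * self.mult.shape[1]), self.mult.shape[1] - 1)
        zi = min(int(z / self.max_range * self.mult.shape[2]), self.mult.shape[2] - 1)
        return vi, ui, zi

    def undistort(self, u, v, z, width=640, height=480):
        """Apply the learned multiplier for this pixel and range bin."""
        return z * self.mult[self._index(u, v, z, width, height)]
```

The myopic property is what makes learning such a table tractable: because the distortion grows with range, multipliers for near bins can be trusted first and used to bootstrap the far bins.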
The International Journal of Robotics Research | 2012
Alex Teichman; Sebastian Thrun
We consider a semi-supervised approach to the problem of track classification in dense three-dimensional range data. This problem involves the classification of objects that have been segmented and tracked without the use of a class-specific tracker. This paper is an extended version of our previous work. We propose a method based on the expectation–maximization algorithm: iteratively (1) train a classifier, and (2) extract useful training examples from unlabeled data by exploiting tracking information. We evaluate our method on a large multiclass problem in dense range data collected from natural street scenes. When given only three hand-labeled training tracks of each object class, the final accuracy of the semi-supervised algorithm is comparable to that of the fully supervised equivalent, which uses two orders of magnitude more labeled data. Further, we show experimentally that the accuracy of a classifier considered as a function of human labeling effort can be substantially improved using this method. Finally, we show that a simple algorithmic speedup based on incrementally updating a boosting classifier can reduce learning time by a factor of three.
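The iterate-train-and-harvest loop from the abstract can be sketched as follows. The `fit`/`predict` interface, the confidence threshold, and the rule of promoting every frame of a confidently classified track are illustrative assumptions, not the paper's exact procedure.

```python
def tracking_based_self_training(train, unlabeled_tracks, fit, predict,
                                 iters=3, thresh=0.9):
    """EM-style semi-supervised loop: (1) train a classifier on the
    labeled set; (2) classify unlabeled tracks and, when a track is
    predicted with high confidence, add all of its frames as new
    training examples under that label. Repeat, then train once more."""
    labeled = list(train)
    for _ in range(iters):
        model = fit(labeled)
        for track in unlabeled_tracks:
            label, confidence = predict(model, track)
            if confidence >= thresh:
                labeled.extend((frame, label) for frame in track["frames"])
    return fit(labeled)
```

The leverage comes from tracking: one confident track-level decision labels every frame in the track at once, which is how a handful of hand-labeled tracks can stand in for a much larger frame-labeled dataset.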
Advanced Robotics and Its Social Impacts | 2011
Alex Teichman; Sebastian Thrun
This paper is meant as an overview of the recent object recognition work done on Stanford's autonomous vehicle and the primary challenges along this particular path. The eventual goal is to provide practical object recognition systems that will enable new robotic applications such as autonomous taxis that recognize hailing pedestrians, personal robots that can learn about specific objects in your home, and automated farming equipment that is trained on-site to recognize the plants and materials that it must interact with. Recent work has made some progress towards object recognition that could fulfill these goals, but advances in model-free segmentation and tracking algorithms are required for applicability beyond scenarios like driving in which model-free segmentation is often available. Additionally, online learning may be required to make use of the large amounts of labeled data made available by tracking-based semi-supervised learning.
IEEE Transactions on Automation Science and Engineering | 2013
Alex Teichman; Jake T. Lussier; Sebastian Thrun
We consider the problem of segmenting and tracking deformable objects in color video with depth (RGBD) data available from commodity sensors such as the Asus Xtion Pro Live or Microsoft Kinect. We frame this problem with very few assumptions (no prior object model, no stationary sensor, and no prior 3-D map), thus making a solution potentially useful for a large number of applications, including semi-supervised learning, 3-D model capture, and object recognition. Our approach makes use of a rich feature set, including local image appearance, depth discontinuities, optical flow, and surface normals to inform the segmentation decision in a conditional random field model. In contrast to previous work in this field, the proposed method learns how to best make use of these features from ground-truth segmented sequences. We provide qualitative and quantitative analyses which demonstrate substantial improvement over the state of the art. This paper is an extended version of our previous work. Building on our previous work, we show that it is possible to achieve an order of magnitude speedup and thus real-time performance (~20 FPS) on a laptop computer by applying simple algorithmic optimizations to the original work. This speedup comes at only a minor cost in overall accuracy and thus makes this approach applicable to a broader range of tasks. We demonstrate one such task: real-time, online, interactive segmentation to efficiently collect training data for an off-the-shelf object detector.
Intelligent Robots and Systems | 2013
Stephen A. Miller; Alex Teichman; Sebastian Thrun
While inexpensive depth sensors are becoming increasingly ubiquitous, field of view and self-occlusion constraints limit the information a single sensor can provide. For many applications one may instead require a network of depth sensors, registered to a common world frame and synchronized in time. Historically such a setup has required a tedious manual calibration procedure, making it infeasible to deploy these networks in the wild, where spatial and temporal drift are common. In this work, we propose an entirely unsupervised procedure for calibrating the relative pose and time offsets of a pair of depth sensors. So doing, we make no use of an explicit calibration target, or any intentional activity on the part of a user. Rather, we use the unstructured motion of objects in the scene to find potential correspondences between the sensor pair. This yields a rough transform which is then refined with an occlusion-aware energy minimization. We compare our results against the standard checkerboard technique, and provide qualitative examples for scenes in which such a technique would be impossible.
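The "rough transform" step, estimating a relative rigid pose from corresponded object positions seen by the two sensors, can be sketched with the standard least-squares (Kabsch/SVD) solution. This is a generic sketch, not the paper's code, and the occlusion-aware refinement stage is omitted entirely.

```python
import numpy as np

def rough_relative_pose(pts_a, pts_b):
    """Least-squares rigid transform (R, t) mapping Nx3 points seen by
    sensor A onto their correspondences seen by sensor B, so that
    pts_b ~= pts_a @ R.T + t. Uses the standard Kabsch/SVD solution."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (pts_a - ca).T @ (pts_b - cb)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

In the setting described above, `pts_a` and `pts_b` would be matched object centroids observed by the two sensors over time; the unstructured motion of those objects plays the role that a checkerboard would in a supervised calibration.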
International Joint Conference on Artificial Intelligence | 2009
Honglak Lee; Rajat Raina; Alex Teichman; Andrew Y. Ng
Robotics: Science and Systems | 2011
Alex Teichman; Sebastian Thrun