Publications

Featured research published by Anelia Angelova.


International Journal of Computer Vision | 2007

Computer Vision on Mars

Larry H. Matthies; Mark W. Maimone; Andrew Edie Johnson; Yang Cheng; Reg G. Willson; Carlos Y. Villalpando; Steve B. Goldberg; Andres Huertas; Andrew Neil Stein; Anelia Angelova

Increasing the level of spacecraft autonomy is essential for broadening the reach of solar system exploration. Computer vision has played, and will continue to play, an important role in increasing the autonomy of both spacecraft and Earth-based robotic vehicles. This article addresses progress on computer vision for planetary rovers and landers and has four main parts. First, we review major milestones in the development of computer vision for robotic vehicles over the last four decades. Since research on applications for Earth and space has often been closely intertwined, the review includes elements of both. Second, we summarize the design and performance of the computer vision algorithms used on Mars in the NASA/JPL Mars Exploration Rover (MER) mission, which was a major step forward in the use of computer vision in space. These algorithms performed stereo vision and visual odometry for rover navigation and feature tracking for horizontal velocity estimation for the landers. Third, we summarize ongoing research to improve vision systems for planetary rovers, including various aspects of noise reduction, FPGA implementation, and vision-based slip perception. Finally, we briefly survey other opportunities for computer vision to impact rovers, landers, and orbiters in future solar system exploration missions.
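
To make the stereo-ranging step concrete, here is a minimal Python sketch (not the MER flight code) of the kind of computation involved: block-matching disparity followed by pinhole triangulation. OpenCV stands in for the rover's stereo pipeline, and the camera parameters are illustrative assumptions.

```python
# A minimal stereo-ranging sketch: disparity via block matching, then
# depth via pinhole triangulation. Parameters are assumed, not MER's.
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (assumed)
BASELINE_M = 0.20   # stereo baseline in meters (assumed)

def stereo_depth(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Return a per-pixel depth map in meters from a rectified 8-bit gray pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    # Standard pinhole triangulation: Z = f * B / d.
    depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
    return depth
```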


Journal of Field Robotics | 2007

Learning and prediction of slip from visual information

Anelia Angelova; Larry H. Matthies; Daniel M. Helmick; Pietro Perona

This paper presents an approach for predicting slip from a distance for wheeled ground robots, using visual information as input. Large amounts of slippage, which can occur on certain surfaces such as sandy slopes, negatively affect rover mobility. Obtaining information about slip before entering such terrain is therefore very useful for better planning and for avoiding these areas. To address this problem, terrain appearance and geometry information about map cells is correlated to the slip measured by the rover while traversing each cell. This relationship is learned from previous experience, so slip can be predicted remotely from visual information alone. The proposed method consists of terrain type recognition and nonlinear regression modeling. The method has been implemented and tested offline on several off-road terrains, including soil, sand, gravel, and woodchips. The final slip prediction error is about 20%. The system is intended for improved navigation on steep slopes and rough terrain for Mars rovers.
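
A minimal sketch of the two-stage pipeline the abstract describes, with scikit-learn models standing in for the paper's components; the classifier and regressor choices, and the upstream feature extraction, are assumptions, not the paper's exact models.

```python
# Two-stage slip prediction sketch: an appearance classifier picks the
# terrain type, then a per-terrain nonlinear regressor maps cell geometry
# (e.g., slope angles) to predicted slip.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVR

TERRAINS = ["soil", "sand", "gravel", "woodchips"]

class SlipPredictor:
    def __init__(self):
        self.terrain_clf = RandomForestClassifier(n_estimators=100)
        # One nonlinear slip model per terrain type.
        self.slip_models = {t: SVR(kernel="rbf") for t in TERRAINS}

    def fit(self, appearance, geometry, terrain_labels, measured_slip):
        geometry = np.asarray(geometry)
        measured_slip = np.asarray(measured_slip)
        terrain_labels = np.asarray(terrain_labels)
        self.terrain_clf.fit(appearance, terrain_labels)
        for t in TERRAINS:
            mask = terrain_labels == t
            if mask.any():
                self.slip_models[t].fit(geometry[mask], measured_slip[mask])

    def predict(self, appearance, geometry):
        terrain = self.terrain_clf.predict(appearance)
        geometry = np.asarray(geometry)
        return np.array([
            self.slip_models[t].predict(g.reshape(1, -1))[0]
            for t, g in zip(terrain, geometry)
        ])
```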


International Conference on Robotics and Automation | 2015

Real-time grasp detection using convolutional neural networks

Joseph Redmon; Anelia Angelova

We present an accurate, real-time approach to robotic grasp detection based on convolutional neural networks. Our network performs single-stage regression to graspable bounding boxes without using standard sliding window or region proposal techniques. The model outperforms state-of-the-art approaches by 14 percentage points and runs at 13 frames per second on a GPU. Our network can simultaneously perform classification so that in a single step it recognizes the object and finds a good grasp rectangle. A modification to this model predicts multiple grasps per object by using a locally constrained prediction mechanism. The locally constrained model performs significantly better, especially on objects that can be grasped in a variety of ways.
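
As an illustration of single-stage grasp regression, the following PyTorch sketch maps a whole image to a grasp rectangle plus object class scores in one forward pass, with no sliding windows or region proposals. The backbone, layer sizes, and angle encoding are illustrative assumptions, not the paper's architecture.

```python
# Single-stage grasp regression sketch: one forward pass yields both a
# grasp rectangle and object class scores.
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((7, 7)),
        )
        self.head = nn.Linear(64 * 7 * 7, 512)
        # x, y, sin(2*theta), cos(2*theta), height, width -- one common
        # encoding for rectangles symmetric under 180-degree rotation.
        self.grasp = nn.Linear(512, 6)
        self.classify = nn.Linear(512, num_classes)  # object class, same pass

    def forward(self, image: torch.Tensor):
        h = torch.relu(self.head(self.features(image).flatten(1)))
        return self.grasp(h), self.classify(h)
```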


British Machine Vision Conference | 2015

Real-Time Pedestrian Detection With Deep Network Cascades

Anelia Angelova; Alex Krizhevsky; Vincent Vanhoucke; Abhijit Ogale; Dave Ferguson

We present a new real-time approach to object detection that combines the efficiency of cascade classifiers with the accuracy of deep neural networks. Deep networks have been shown to excel at classification tasks, and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features and is both very fast and very accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real time at 15 frames per second. The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is competitive with the very best reported results. It is the first work we are aware of that achieves very high accuracy while running in real time.
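
A minimal sketch of the cascade control flow: a cheap classifier scores every candidate window and rejects the vast majority, and only the survivors are handed to the expensive deep network. Both scoring functions here are assumed placeholders for the paper's fast-feature stage and deep net.

```python
# Cascade detection sketch: cheap rejection first, deep net only on survivors.
def detect_with_cascade(windows, cheap_score, deep_score,
                        cheap_thresh=0.1, deep_thresh=0.5):
    """windows: candidate crops; cheap_score/deep_score: callables -> probability."""
    detections = []
    for w in windows:
        if cheap_score(w) < cheap_thresh:
            continue                 # rejected early; the deep net never runs
        if deep_score(w) >= deep_thresh:
            detections.append(w)     # confirmed by the accurate, slow model
    return detections
```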


International Conference on Robotics and Automation | 2006

Learning to predict slip for ground robots

Anelia Angelova; Larry H. Matthies; Daniel M. Helmick; Gabe Sibley; Pietro Perona

In this paper we predict the amount of slip an exploration rover would experience, using stereo imagery, by learning from previous examples of traversing similar terrain. To do that, terrain appearance and geometry information about a location is correlated to the slip measured by the rover while that location is being traversed. This relationship is learned from previous experience, so slip can later be predicted at a distance from visual information alone. The advantages of the approach are: 1) learning from examples allows the system to adapt to unknown terrains rather than relying on fixed heuristics or predefined rules; 2) the feedback about the observed slip is received from the vehicle's own sensors, which can fully automate the process; 3) learning slip from previous experience can replace complex mechanical modeling of the vehicle or terrain, which is time-consuming and not always feasible. Predicting slip is motivated by the need to assess the risk of getting trapped before entering a particular terrain. For example, a planning algorithm can utilize slip information by taking into consideration that a slippery terrain is costly or hazardous to traverse. A generic nonlinear regression framework is proposed in which the terrain type is determined from appearance and then a nonlinear model of slip is learned for that particular terrain type. In this paper we focus only on the latter problem and provide slip learning and prediction results for terrain types such as soil, sand, gravel, and asphalt. The slip prediction error achieved is about 15%, which is comparable to the measurement errors for slip itself.
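
The self-supervision in point 2 above can be made concrete: slip is commonly measured as the mismatch between how far wheel odometry reports the rover moved and how far it actually moved according to visual odometry. A minimal sketch, with the exact convention an assumption:

```python
# Slip measured from the vehicle's own sensors, per traverse step.
def measure_slip(wheel_odometry_dist: float, visual_odometry_dist: float) -> float:
    """Fractional slip: 0 = no slip, 1 = wheels spinning in place."""
    if wheel_odometry_dist <= 0.0:
        return 0.0
    return (wheel_odometry_dist - visual_odometry_dist) / wheel_odometry_dist

# Example: wheels report 1.0 m but VO sees 0.85 m of actual motion -> 15% slip.
print(measure_slip(1.0, 0.85))  # 0.15
```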


Journal of Field Robotics | 2006

Towards learned traversability for robot navigation: From underfoot to the far field

Andrew W. Howard; Michael J. Turmon; Larry H. Matthies; Benyang Tang; Anelia Angelova; Eric Mjolsness

Autonomous off-road navigation of robotic ground vehicles has important applications on Earth and in space exploration. Progress in this domain has been hindered by the limited lookahead range of three-dimensional (3D) sensors and by the difficulty of heuristically programming systems to understand the traversability of the wide variety of terrain they can encounter. Enabling robots to learn from experience may alleviate both of these problems. We define two paradigms for this: learning from 3D geometry and learning from proprioception. We describe initial instantiations of them as developed under DARPA and NASA programs. Field test results show promise for learning the traversability of vegetated terrain and for learning to extend the lookahead range of the vision system.
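
A minimal sketch of the learning-from-proprioception paradigm: terrain patches the robot has already driven over are labeled automatically from what it felt underfoot, and a classifier learns to predict that label from far-field appearance. The thresholds, features, and synthetic data below are all illustrative assumptions.

```python
# Self-supervised traversability sketch: proprioceptive labels, visual features.
import numpy as np
from sklearn.linear_model import LogisticRegression

def proprioceptive_labels(motor_current, imu_roughness,
                          current_max=8.0, rough_max=0.3):
    """1 = traversable, 0 = not, judged from what the robot felt underfoot."""
    ok = (np.asarray(motor_current) < current_max) & \
         (np.asarray(imu_roughness) < rough_max)
    return ok.astype(int)

rng = np.random.default_rng(0)
appearance = rng.normal(size=(200, 16))   # e.g., color/texture features per patch
currents = rng.uniform(0, 12, size=200)   # motor current drawn while traversing
roughness = rng.uniform(0, 0.5, size=200) # IMU vibration measure

clf = LogisticRegression(max_iter=1000)
clf.fit(appearance, proprioceptive_labels(currents, roughness))
# The trained model can now score far-field cells from appearance alone.
```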


computer vision and pattern recognition | 2007

Fast Terrain Classification Using Variable-Length Representation for Autonomous Navigation

Anelia Angelova; Larry H. Matthies; Daniel M. Helmick; Pietro Perona

We propose a method for learning using a set of feature representations that retrieve different amounts of information at different costs. The goal is to create a more efficient terrain classification algorithm that can be used in real time onboard an autonomous vehicle. Instead of building a monolithic classifier with a uniformly complex representation for each class, the main idea is to actively consider the labels or misclassification cost while constructing the classifier. For example, some terrain classes might be easily separable from the rest, so a very simple representation will be sufficient to learn and detect these classes. The algorithm takes advantage of this during learning and automatically builds a variable-length visual representation that varies according to the complexity of the classification task. This enables fast recognition of different terrain types during testing. We also show how to select a set of feature representations so that the desired terrain classification task is accomplished with high accuracy while remaining efficient. The proposed approach achieves a good trade-off between recognition performance and speedup on data collected by an autonomous robot.
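
A minimal sketch of the early-exit control flow behind a variable-length representation: feature extractors are ordered from cheap to expensive, and classification stops as soon as one stage is confident, so easy classes never pay for the costly representation. The stages and confidence rule are illustrative assumptions.

```python
# Variable-length classification sketch: stop at the cheapest confident stage.
import numpy as np

def classify_variable_length(patch, stages, confidence=0.9):
    """stages: list of (extract_features, classifier), cheapest first."""
    for extract, clf in stages:
        probs = clf.predict_proba(extract(patch).reshape(1, -1))[0]
        if probs.max() >= confidence:
            return int(np.argmax(probs))  # early exit: cheap features sufficed
    return int(np.argmax(probs))          # fall back to the richest stage
```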


Robotics: Science and Systems | 2006

Slip Prediction Using Visual Information

Anelia Angelova; Larry H. Matthies; Daniel M. Helmick; Pietro Perona

This paper considers prediction of slip from a distance for wheeled ground robots, using visual information as input. Large amounts of slippage, which can occur on certain surfaces such as sandy slopes, negatively affect rover mobility. Therefore, obtaining information about slip before entering a particular terrain can be very useful for better planning and for avoiding terrains with large slip. The proposed method is based on learning from experience and consists of terrain type recognition and nonlinear regression modeling. After learning, slip prediction is done remotely using only visual information as input. The method has been implemented and tested offline on several off-road terrains, including soil, sand, gravel, and woodchips. The slip prediction error is about 20% of the step size.


International Conference on Robotics and Automation | 2015

Pedestrian detection with a Large-Field-Of-View deep network

Anelia Angelova; Alex Krizhevsky; Vincent Vanhoucke

Pedestrian detection is of crucial importance to autonomous driving applications. Methods based on deep learning have shown significant improvements in accuracy, which makes them particularly suitable for applications such as pedestrian detection, where reducing the miss rate is very important. Although they are accurate, their runtime has been at best seconds per image, which makes them impractical for onboard applications. We present a Large-Field-Of-View (LFOV) deep network for pedestrian detection that achieves high accuracy and is designed to make deep networks work faster for detection problems. The idea of the proposed Large-Field-of-View deep network is to learn to make classification decisions simultaneously and accurately at multiple locations. The LFOV network processes larger image areas at much faster speeds than typical deep networks have been able to, and can intrinsically reuse computations. Our pedestrian detection solution, which combines an LFOV network with a standard deep network, runs at 280 ms per image on a GPU and achieves a 35.85% average miss rate on the Caltech Pedestrian Detection Benchmark.
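
A minimal sketch of the multiple-locations idea using a fully convolutional network: a 1x1 convolution acts as a classifier applied at every cell of a grid over a large input, so one forward pass scores many locations while reusing shared computation. Layer sizes are illustrative, not the paper's architecture.

```python
# Large-field-of-view sketch: one pass -> a grid of per-location scores.
import torch
import torch.nn as nn

class LFOVNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            # 1x1 conv = a classifier applied at every grid location.
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, big_image: torch.Tensor) -> torch.Tensor:
        # (N, 3, H, W) -> (N, 1, H/8, W/8) grid of pedestrian logits.
        return self.net(big_image)
```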


Workshop on Applications of Computer Vision | 2013

Image segmentation for large-scale subcategory flower recognition

Anelia Angelova; Shenghuo Zhu; Yuanqing Lin

We propose a segmentation algorithm for the purposes of large-scale flower species recognition. Our approach is based on identifying potential object regions at the time of detection. We then apply a Laplacian-based segmentation, which is guided by these initially detected regions. More specifically, we show that 1) recognizing parts of the potential object helps the segmentation and makes it more robust to variability in both the background and the object appearance, and 2) segmenting the object of interest at test time is beneficial for the subsequent recognition. Here we consider a large-scale dataset containing 578 flower species and 250,000 images. This dataset was developed by our team for the purposes of providing a flower recognition application for general use, and it is the largest in its scale and scope. We tested the proposed segmentation algorithm on the well-known 102 Oxford flowers benchmark [11] and on the new, challenging large-scale 578-species flower dataset that we collected. We observed about 4% improvement in recognition performance on both datasets compared to the baseline. The algorithm also improves on all other known results on the Oxford 102 flower benchmark dataset. Furthermore, our method is both simpler and faster than other related approaches, e.g., [3, 14], and is potentially applicable to other subcategory recognition datasets.
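
A minimal sketch of Laplacian-guided segmentation in this spirit, using scikit-image's random walker (which solves a graph-Laplacian system) with seeds standing in for the initially detected regions; this is an analogous off-the-shelf technique, not the paper's exact algorithm, and the seed placement is an assumption.

```python
# Detection-guided, Laplacian-based segmentation sketch.
import numpy as np
from skimage.segmentation import random_walker

def segment_from_detection(gray_image: np.ndarray, object_box):
    """object_box: (r0, c0, r1, c1) from the initial detection step."""
    r0, c0, r1, c1 = object_box
    markers = np.zeros(gray_image.shape, dtype=np.int32)
    markers[:2, :] = 2                            # image border: background seed
    markers[-2:, :] = 2
    markers[(r0 + r1) // 2, (c0 + c1) // 2] = 1   # box center: object seed
    labels = random_walker(gray_image, markers, beta=130)
    return labels == 1                            # boolean object mask
```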

Collaboration

Top co-authors of Anelia Angelova:

Larry H. Matthies, California Institute of Technology
Daniel M. Helmick, California Institute of Technology
Pietro Perona, California Institute of Technology
Irfan A. Essa, Georgia Institute of Technology
Mark W. Maimone, California Institute of Technology