Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dragomir Anguelov is active.

Publication


Featured research published by Dragomir Anguelov.


computer vision and pattern recognition | 2015

Going deeper with convolutions

Christian Szegedy; Wei Liu; Yangqing Jia; Pierre Sermanet; Scott E. Reed; Dragomir Anguelov; Dumitru Erhan; Vincent Vanhoucke; Andrew Rabinovich

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer-deep network, whose quality is assessed in the context of classification and detection.
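
The multi-scale intuition is easy to see in code. The sketch below is a minimal PyTorch rendering of a single Inception module (our own illustration, not the authors' code): 1x1, 3x3, and 5x5 convolutions plus a pooling path run in parallel on the same input, and their outputs are concatenated. The 1x1 "reduce" layers are what keep the computational budget constant; channel counts follow the paper's first Inception block, and ReLU activations are omitted for brevity.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """One Inception block: parallel 1x1 / 3x3 / 5x5 / pool branches,
    concatenated along the channel axis (illustrative sketch)."""
    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, c3r, kernel_size=1),  # 1x1 reduce keeps FLOPs down
            nn.Conv2d(c3r, c3, kernel_size=3, padding=1),
        )
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, c5r, kernel_size=1),
            nn.Conv2d(c5r, c5, kernel_size=5, padding=2),
        )
        self.bp = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1),
        )

    def forward(self, x):
        # Every branch preserves spatial size, so outputs concatenate cleanly.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# 192 input channels -> 64 + 128 + 32 + 32 = 256 output channels.
m = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(m(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```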


european conference on computer vision | 2016

SSD: Single Shot MultiBox Detector

Wei Liu; Dragomir Anguelov; Dumitru Erhan; Christian Szegedy; Scott E. Reed; Cheng-Yang Fu; Alexander C. Berg

We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Our SSD model is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages, encapsulating all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, MS COCO, and ILSVRC datasets confirm that SSD has accuracy comparable to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Compared to other single-stage methods, SSD has much better accuracy, even with a smaller input image size. For 300×300 input, SSD achieves 72.1% mAP on VOC2007 test at 58 FPS on an Nvidia Titan X, and for 500×500 input, SSD achieves 75.1% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Code is available at this https URL.
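
The default-box discretization can be made concrete with a short sketch (ours, not the paper's code; the scale and aspect-ratio values are placeholders). Each cell of a feature map anchors one box per aspect ratio, and smaller feature maps are assigned larger scales, which is how SSD covers objects of various sizes.

```python
import itertools
import math

def default_boxes(fmap_size, scale, aspect_ratios):
    """Tile (cx, cy, w, h) boxes, in normalized [0, 1] coordinates, over an
    fmap_size x fmap_size feature map (illustrative parameters)."""
    boxes = []
    for i, j in itertools.product(range(fmap_size), repeat=2):
        cx = (j + 0.5) / fmap_size  # box center: middle of cell (i, j)
        cy = (i + 0.5) / fmap_size
        for ar in aspect_ratios:
            w = scale * math.sqrt(ar)  # wider for ar > 1 ...
            h = scale / math.sqrt(ar)  # ... and shorter, keeping area ~ scale^2
            boxes.append((cx, cy, w, h))
    return boxes

# An 8x8 map with three aspect ratios yields 8 * 8 * 3 = 192 default boxes.
print(len(default_boxes(fmap_size=8, scale=0.2, aspect_ratios=(1.0, 2.0, 0.5))))
```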


international conference on computer graphics and interactive techniques | 2005

SCAPE: shape completion and animation of people

Dragomir Anguelov; Praveen Srinivasan; Daphne Koller; Sebastian Thrun; Jim Rodgers; James Davis

We introduce the SCAPE method (Shape Completion and Animation for PEople)---a data-driven method for building a human shape model that spans variation in both subject shape and pose. The method is based on a representation that incorporates both articulated and non-rigid deformations. We learn a pose deformation model that derives the non-rigid surface deformation as a function of the pose of the articulated skeleton. We also learn a separate model of variation based on body shape. Our two models can be combined to produce 3D surface models with realistic muscle deformation for different people in different poses, when neither appears in the training set. We show how the model can be used for shape completion --- generating a complete surface mesh given a limited set of markers specifying the target shape. We present applications of shape completion to partial view completion and motion capture animation. In particular, our method is capable of constructing a high-quality animated surface model of a moving person, with realistic muscle deformation, using just a single static scan and a marker motion capture sequence of the person.
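
At its core, SCAPE deforms each triangle of a template mesh by composing a rigid part rotation, a per-subject body-shape deformation, and a pose-dependent non-rigid deformation. The toy NumPy sketch below shows this composition on a single template edge; it is our illustration, with placeholder matrices rather than learned values, and the multiplicative composition order is assumed rather than taken from the paper.

```python
import numpy as np

def deform_edge(edge, R, S, Q):
    """Transform one template triangle edge by composing: R, the rigid
    rotation of the body part containing the triangle; S, the per-subject
    body-shape deformation; and Q, the pose-induced non-rigid deformation
    (e.g. muscle bulging). Composition order is illustrative."""
    return R @ S @ Q @ edge

edge = np.array([1.0, 0.0, 0.0])           # template edge vector
R = np.eye(3)                              # no rigid rotation
S = np.diag([1.1, 1.0, 1.0])               # slightly wider subject
Q = np.eye(3) + np.diag([0.0, 0.05, 0.0])  # toy pose deformation
print(deform_edge(edge, R, S, Q))          # deformed edge
```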


computer vision and pattern recognition | 2014

Scalable Object Detection Using Deep Neural Networks

Dumitru Erhan; Christian Szegedy; Alexander Toshev; Dragomir Anguelov

Deep convolutional neural networks have recently achieved state-of-the-art performance on a number of image recognition benchmarks, including the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC-2012). The winning model on the localization sub-task was a network that predicts a single bounding box and a confidence score for each object category in the image. Such a model captures the whole-image context around the objects but cannot handle multiple instances of the same object in the image without naively replicating the number of outputs for each instance. In this work, we propose a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest. The model naturally handles a variable number of instances for each class and allows for cross-class generalization at the highest levels of the network. We are able to obtain competitive recognition performance on VOC2007 and ILSVRC2012, while using only the top few predicted locations in each image and a small number of neural network evaluations.
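
The key training idea — matching predicted class-agnostic boxes to the ground truth and rewarding confident matched boxes — can be sketched as follows. This is our toy rendering, with a greedy nearest-box assignment standing in for the paper's bipartite matching; the names, shapes, and weighting constant are illustrative.

```python
import numpy as np

def multibox_loss(pred_boxes, pred_conf, gt_boxes, alpha=0.3):
    """Toy objective: localization error of matched boxes plus a confidence
    term pushing matched predictions toward 1 and unmatched ones toward 0."""
    conf_target = np.zeros(len(pred_boxes))
    loc_loss = 0.0
    for gt in gt_boxes:
        d = np.linalg.norm(pred_boxes - gt, axis=1)  # distance in box space
        best = int(np.argmin(d))                     # greedy stand-in for matching
        loc_loss += 0.5 * d[best] ** 2
        conf_target[best] = 1.0
    eps = 1e-9
    conf_loss = -np.mean(conf_target * np.log(pred_conf + eps)
                         + (1 - conf_target) * np.log(1 - pred_conf + eps))
    return loc_loss + alpha * conf_loss

rng = np.random.default_rng(0)
preds = rng.uniform(size=(8, 4))         # 8 predicted boxes (illustrative)
confs = rng.uniform(0.01, 0.99, size=8)  # predicted objectness scores
gts = rng.uniform(size=(2, 4))           # 2 ground-truth boxes
print(multibox_loss(preds, confs, gts))
```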


computer vision and pattern recognition | 2005

Discriminative learning of Markov random fields for segmentation of 3D scan data

Dragomir Anguelov; B. Taskar; V. Chatalbashev; Daphne Koller; D. Gupta; Geremy Heitz; Andrew Y. Ng

We address the problem of segmenting 3D scan data into objects or object classes. Our segmentation framework is based on a subclass of Markov random fields (MRFs) which support efficient graph-cut inference. The MRF models incorporate a large set of diverse features and enforce the preference that adjacent scan points have the same classification label. We use a recently proposed maximum-margin framework to discriminatively train the model from a set of labeled scans; as a result we automatically learn the relative importance of the features for the segmentation task. Performing graph-cut inference in the trained MRF can then be used to segment new scenes very efficiently. We test our approach on three large-scale datasets produced by different kinds of 3D sensors, showing its applicability to both outdoor and indoor environments containing diverse objects.
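
The flavor of the model is a pairwise MRF whose unary terms score each scan point's local features and whose pairwise terms reward label agreement between adjacent points. Below is a minimal sketch of that energy (our illustration; the costs are placeholders standing in for the max-margin-trained weights).

```python
import numpy as np

def mrf_energy(labels, unary, edges, w_pair):
    """Energy of a labeling of N scan points with K classes.

    labels : (N,) class index per point
    unary  : (N, K) per-point, per-class costs from local features
    edges  : iterable of (i, j) adjacency pairs between scan points
    w_pair : cost added when neighbors disagree (associative/Potts term)
    """
    e = sum(unary[i, labels[i]] for i in range(len(labels)))
    e += sum(w_pair for i, j in edges if labels[i] != labels[j])
    return e

labels = np.array([0, 0, 1])  # a candidate labeling of 3 points
unary = np.array([[0.2, 1.0], [0.1, 0.9], [0.8, 0.3]])
edges = [(0, 1), (1, 2)]
print(mrf_energy(labels, unary, edges, w_pair=0.5))  # 0.2 + 0.1 + 0.3 + 0.5
```

Because the pairwise term only ever penalizes disagreement, the energy is associative, which is exactly the structure that makes graph-cut inference (e.g. alpha-expansion) efficient.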


IEEE Computer | 2010

Google Street View: Capturing the World at Street Level

Dragomir Anguelov; Carole Dulong; Daniel Joseph Filip; Christian Frueh; Stephane Lafon; Richard F. Lyon; Abhijit Ogale; Luc Vincent; Josh Weaver

Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.


international conference on robotics and automation | 2004

Detecting and modeling doors with mobile robots

Dragomir Anguelov; Daphne Koller; Evan Parker; Sebastian Thrun



knowledge discovery and data mining | 2000

Mining the stock market (extended abstract): which measure is best?

Martin Gavrilov; Dragomir Anguelov; Piotr Indyk; Rajeev Motwani



computer vision and pattern recognition | 2007

Contextual Identity Recognition in Personal Photo Albums

Dragomir Anguelov; Kuang-chih Lee; Salih Burak Gokturk; Baris Sumengen



computer vision and pattern recognition | 2006

Object Pose Detection in Range Scan Data

Jim Rodgers; Dragomir Anguelov; Hoi-Cheung Pang; Daphne Koller


Collaboration


Dive into Dragomir Anguelov's collaborations.

Top Co-Authors

Baris Sumengen

University of California


John Flynn

University of California
