
Publication


Featured research published by Grant Van Horn.


International Journal of Computer Vision | 2014

The Ignorant Led by the Blind: A Hybrid Human-Machine Vision System for Fine-Grained Categorization

Steve Branson; Grant Van Horn; Catherine Wah; Pietro Perona; Serge J. Belongie

We present a visual recognition system for fine-grained visual categorization. The system is composed of a human and a machine working together, combining the complementary strengths of computer vision algorithms and (non-expert) human users. The human users provide two heterogeneous forms of information: object part clicks and answers to multiple-choice questions. The machine intelligently selects the most informative question to pose to the user in order to identify the object class as quickly as possible. By leveraging computer vision and analyzing the user responses, the overall amount of human effort required, measured in seconds, is minimized. Our formalism shows how to incorporate many different types of computer vision algorithms into a human-in-the-loop framework, including standard multiclass methods, part-based methods, and localized multiclass and attribute methods. We explore our ideas by building a field guide for bird identification. The experimental results demonstrate the strength of combining ignorant humans with poor-sighted machines: the hybrid system achieves quick and accurate bird identification on a dataset containing 200 bird species.
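The question-selection step described above amounts to picking, at each turn, the multiple-choice question whose answer is expected to reduce uncertainty about the class the most. A minimal sketch of that expected-information-gain criterion follows; the function names are hypothetical, and the paper's full model additionally handles user noise and vision-based class scores:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a class distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_info_gain(prior, answer_likelihoods):
    """Expected reduction in class entropy from asking one question.

    prior: P(class) over classes.
    answer_likelihoods[a][c]: P(answer = a | class = c).
    """
    gain = entropy(prior)
    for likelihood in answer_likelihoods:
        # Marginal probability of this answer: sum_c P(a | c) P(c).
        p_ans = sum(l * p for l, p in zip(likelihood, prior))
        if p_ans == 0:
            continue
        # Posterior over classes given this answer (Bayes' rule).
        posterior = [l * p / p_ans for l, p in zip(likelihood, prior)]
        gain -= p_ans * entropy(posterior)
    return gain

def pick_question(prior, questions):
    """Choose the question expected to shrink class uncertainty most."""
    return max(questions, key=lambda q: expected_info_gain(prior, questions[q]))
```

A perfectly discriminative question yields the full prior entropy as gain, while an uninformative one yields zero, so the selector naturally skips questions whose answer is already predictable.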


Computer Vision and Pattern Recognition | 2014

Similarity Comparisons for Interactive Fine-Grained Categorization

Catherine Wah; Grant Van Horn; Steve Branson; Subhransu Maji; Pietro Perona; Serge J. Belongie

Current human-in-the-loop fine-grained visual categorization systems depend on a predefined vocabulary of attributes and parts, usually determined by experts. In this work, we move away from that expert-driven and attribute-centric paradigm and present a novel interactive classification system that incorporates computer vision and perceptual similarity metrics in a unified framework. At test time, users are asked to judge relative similarity between a query image and various sets of images; these general queries do not require expert-defined terminology and are applicable to other domains and basic-level categories, enabling a flexible, efficient, and scalable system for fine-grained categorization with humans in the loop. Our system outperforms existing state-of-the-art systems for relevance feedback-based image retrieval as well as interactive classification, reducing the average number of questions needed to correctly classify an image by up to 43%.
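A similarity answer of this kind can be folded into a class posterior with a simple Bayesian update. The sketch below models the user's choice of "most similar set" with a softmax over class-to-set similarities; this likelihood model is an illustrative assumption, not the paper's exact perceptual-similarity formulation:

```python
import math

def update_posterior(prior, sim_to_sets, chosen):
    """Bayesian update after the user picks the displayed image set
    most similar to the query.

    prior[c]: current P(class = c).
    sim_to_sets[c][s]: similarity between class c and displayed set s.
    chosen: index of the set the user selected.
    The choice likelihood is modeled as a softmax over set similarities.
    """
    posterior = []
    for c, p in enumerate(prior):
        z = sum(math.exp(s) for s in sim_to_sets[c])
        likelihood = math.exp(sim_to_sets[c][chosen]) / z
        posterior.append(p * likelihood)
    total = sum(posterior)
    return [p / total for p in posterior]
```

Classes whose members resemble the chosen set gain posterior mass, so repeated similarity queries concentrate probability on the true class without any expert-defined attribute vocabulary.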


Computer Vision and Pattern Recognition | 2015

Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection

Grant Van Horn; Steve Branson; Ryan Farrell; Scott Haber; Jessie H. Barry; Panos Ipeirotis; Pietro Perona; Serge J. Belongie

We introduce tools and methodologies to collect high-quality, large-scale fine-grained computer vision datasets using citizen scientists - crowd annotators who are passionate and knowledgeable about specific domains such as birds or airplanes. We worked with citizen scientists and domain experts to collect NABirds, a new high-quality dataset containing 48,562 images of North American birds with 555 categories, part annotations, and bounding boxes. We find that citizen scientists are significantly more accurate than Mechanical Turkers at zero cost. We worked with bird experts to measure the quality of popular datasets like CUB-200-2011 and ImageNet and found class label error rates of at least 4%. Nevertheless, we found that learning algorithms are surprisingly robust to annotation errors, and this level of training data corruption can lead to an acceptably small increase in test error if the training set has sufficient size. At the same time, we found that an expert-curated high-quality test set like NABirds is necessary to accurately measure the performance of fine-grained computer vision systems. We used NABirds to train a publicly available bird recognition service deployed on the website of the Cornell Lab of Ornithology.


Computer Vision and Pattern Recognition | 2017

Lean Crowdsourcing: Combining Humans and Machines in an Online System

Steve Branson; Grant Van Horn; Pietro Perona

We introduce a method to greatly reduce the amount of redundant annotations required when crowdsourcing annotations such as bounding boxes, parts, and class labels. For example, if two Mechanical Turkers happen to click on the same pixel location when annotating a part in a given image (an event that is very unlikely to occur by random chance), it is a strong indication that the location is correct. A similar type of confidence can be obtained if a single Turker happens to agree with a computer vision estimate. We thus incrementally collect a variable number of worker annotations per image based on online estimates of confidence. This is done using a sequential estimation of risk over a probabilistic model that combines worker skill, image difficulty, and an incrementally trained computer vision model. We develop specialized models and algorithms for binary annotation, part keypoint annotation, and sets of bounding box annotations. We show that our method can reduce annotation time by a factor of 4-11 for binary filtering of web search results and 2-4 for annotating boxes of pedestrians in images, while in many cases also reducing annotation error. We will make an end-to-end version of our system publicly available.
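The online stopping rule sketched in this abstract (request another worker label only while the posterior risk of the current best guess is too high) can be illustrated for the binary-annotation case. The sketch below assumes a single fixed worker skill for simplicity; the paper's model additionally estimates per-worker skill, image difficulty, and an incrementally trained vision prior:

```python
def needs_more_annotations(worker_votes, worker_skill, prior_yes,
                           risk_threshold=0.02):
    """Decide whether to request another worker label for a binary task.

    worker_votes: True/False labels collected so far for this image.
    worker_skill: assumed probability that a worker answers correctly.
    prior_yes: prior probability the true label is True (e.g. from a
        computer vision model).
    Returns True while the posterior risk of the current best (MAP)
    label still exceeds the threshold.
    """
    p_yes, p_no = prior_yes, 1.0 - prior_yes
    for vote in worker_votes:
        # Each independent vote multiplies in the worker's likelihood
        # of producing that answer under each hypothesis.
        p_yes *= worker_skill if vote else (1.0 - worker_skill)
        p_no *= (1.0 - worker_skill) if vote else worker_skill
    total = p_yes + p_no
    risk = min(p_yes, p_no) / total  # probability the MAP label is wrong
    return risk > risk_threshold
```

With a skilled worker, two agreeing votes already drive the risk low enough to stop, which is exactly how the variable (rather than fixed) number of annotations per image arises.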


European Conference on Computer Vision | 2018

Recognition in Terra Incognita

Sara Beery; Grant Van Horn; Pietro Perona

It is desirable for detection and classification algorithms to generalize to unfamiliar environments, but suitable benchmarks for quantitatively studying this phenomenon are not yet available. We present a dataset designed to measure recognition generalization to novel environments. The images in our dataset are harvested from twenty camera traps deployed to monitor animal populations. Camera traps are fixed at one location, hence the background changes little across images; capture is triggered automatically, hence there is no human bias. The challenge is learning recognition in a handful of locations and generalizing animal detection and classification to new locations where no training data is available. In our experiments, state-of-the-art algorithms show excellent performance when tested at the same location where they were trained. However, we find that generalization to new locations is poor, especially for classification systems. (The dataset is available at https://beerys.github.io/CaltechCameraTraps/)


arXiv: Computer Vision and Pattern Recognition | 2014

Bird Species Categorization Using Pose Normalized Deep Convolutional Nets.

Steve Branson; Grant Van Horn; Serge J. Belongie; Pietro Perona


British Machine Vision Conference | 2014

Improved Bird Species Recognition Using Pose Normalized Deep Convolutional Nets.

Steve Branson; Grant Van Horn; Pietro Perona; Serge J. Belongie


National Conference on Artificial Intelligence | 2015

Tropel: Crowdsourcing Detectors with Minimal Training

Genevieve Patterson; Grant Van Horn; Serge J. Belongie; Pietro Perona; James Hays


arXiv: Computer Vision and Pattern Recognition | 2017

The Devil is in the Tails: Fine-grained Classification in the Wild.

Grant Van Horn; Pietro Perona


arXiv: Computer Vision and Pattern Recognition | 2017

The iNaturalist Challenge 2017 Dataset.

Grant Van Horn; Oisin Mac Aodha; Yang Song; Alexander Shepard; Hartwig Adam; Pietro Perona; Serge J. Belongie

Collaboration

Grant Van Horn's top co-authors:

Pietro Perona (California Institute of Technology)
Steve Branson (California Institute of Technology)
Catherine Wah (University of California)
Oisin Mac Aodha (University College London)
Chen Sun (University of Southern California)
James Hays (Georgia Institute of Technology)