
Publication


Featured research published by Peter Welinder.


Computer Vision and Pattern Recognition | 2010

Cascaded pose regression

Piotr Dollár; Peter Welinder; Pietro Perona

We present a fast and accurate algorithm for computing the 2D pose of objects in images called cascaded pose regression (CPR). CPR progressively refines a loosely specified initial guess, where each refinement is carried out by a different regressor. Each regressor performs simple image measurements that are dependent on the output of the previous regressors; the entire system is automatically learned from human-annotated training examples. CPR is not restricted to rigid transformations: ‘pose’ is any parameterized variation of the object's appearance, such as the degrees of freedom of deformable and articulated objects. We compare CPR against both standard regression techniques and human performance (computed from redundant human annotations). Experiments on three diverse datasets (mice, faces, fish) suggest CPR is fast (2–3ms per pose estimate), accurate (approaching human performance), and easy to train from small amounts of labeled data.
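
The cascade idea can be illustrated with a toy 1D analogue: "pose" is a scalar, and the pose-indexed image measurement is simulated as the residual to the true pose plus noise. Each stage fits a regressor on features computed at the current estimate, then applies its update so the next stage sees new features. This is an illustrative sketch under those assumptions, not the paper's implementation (which uses random-fern regressors over pose-indexed control-point features):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x_est, x_true, rng):
    # Simulated pose-indexed feature: a noisy function of the residual.
    return (x_true - x_est) + rng.normal(0.0, 0.2, size=x_est.shape)

def train_cascade(x_true, n_stages=8, rng=rng):
    x = np.zeros_like(x_true)            # loosely specified initial guess
    stages = []
    for _ in range(n_stages):
        f = measure(x, x_true, rng)      # features depend on current estimate
        w = (f @ (x_true - x)) / (f @ f)  # least-squares update coefficient
        stages.append(w)
        x = x + w * f                    # refine; next stage sees new features
    return stages

def apply_cascade(stages, x_true, rng=rng):
    x = np.zeros_like(x_true)
    for w in stages:
        x = x + w * measure(x, x_true, rng)
    return x
```

Because each stage is trained on the residuals left by the previous stages, the learned step sizes shrink down the cascade, which is what lets the cascade approach the noise floor of the measurements.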


European Conference on Computer Vision | 2010

Visual recognition with humans in the loop

Steve Branson; Catherine Wah; Florian Schroff; Boris Babenko; Peter Welinder; Pietro Perona; Serge J. Belongie

We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required.
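
The question-selection step of such a "visual 20 questions" game can be sketched as picking, at each turn, the attribute question with the highest expected information gain over the current class posterior. The classes, attributes, and deterministic (noise-free) user answers below are illustrative assumptions, not the paper's model, which also accounts for imperfect responses:

```python
import math

def entropy(p):
    # Shannon entropy in bits of a discrete distribution.
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def posterior(prior, attr_col, answer):
    # P(class | answer), assuming deterministic class attributes.
    unnorm = [pr * (1.0 if a == answer else 0.0)
              for pr, a in zip(prior, attr_col)]
    z = sum(unnorm)
    return [u / z for u in unnorm] if z > 0 else prior

def expected_gain(prior, attr_col):
    # Expected reduction in class-posterior entropy from asking this question.
    h0 = entropy(prior)
    gain = 0.0
    for answer in (0, 1):
        p_ans = sum(pr for pr, a in zip(prior, attr_col) if a == answer)
        if p_ans > 0:
            gain += p_ans * (h0 - entropy(posterior(prior, attr_col, answer)))
    return gain

prior = [0.25, 0.25, 0.25, 0.25]
attributes = {                      # hypothetical attribute -> value per class
    "has_red_wing": [1, 1, 0, 0],   # even split of the uniform prior -> 1 bit
    "has_long_beak": [1, 1, 1, 0],  # uneven split -> less than 1 bit
}
best = max(attributes, key=lambda q: expected_gain(prior, attributes[q]))
```

With a uniform prior, the evenly splitting attribute yields exactly one bit of expected gain, so it is asked first; a computer-vision posterior replacing the uniform prior is what lets vision reduce the number of questions.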


Nature Methods | 2014

Sleep-spindle detection: crowdsourcing and evaluating performance of experts, non-experts and automated methods

Simon C. Warby; Sabrina Lyngbye Wendt; Peter Welinder; Emil Gs Munk; Oscar Carrillo; Helge Bjarup Dissing Sørensen; Poul Jennum; Paul E. Peppard; Pietro Perona; Emmanuel Mignot

Sleep spindles are discrete, intermittent patterns of brain activity observed in human electroencephalographic data. Increasingly, these oscillations are of biological and clinical interest because of their role in development, learning and neurological disorders. We used an Internet interface to crowdsource spindle identification by human experts and non-experts, and we compared their performance with that of automated detection algorithms in data from middle- to older-aged subjects from the general population. We also refined methods for forming group consensus and evaluating the performance of event detectors in physiological data such as electroencephalographic recordings from polysomnography. Compared to the expert group consensus gold standard, the highest performance was by individual experts and the non-expert group consensus, followed by automated spindle detectors. This analysis showed that crowdsourcing the scoring of sleep data is an efficient method to collect large data sets, even for difficult tasks such as spindle identification. Further refinements to spindle detection algorithms are needed for middle- to older-aged subjects.


Hippocampus | 2008

Grid cells: The position code, neural network models of activity, and the problem of learning

Peter Welinder; Yoram Burak; Ila Fiete

We review progress on the modeling and theoretical fronts in the quest to unravel the computational properties of the grid cell code and to explain the mechanisms underlying grid cell dynamics. The goals of the review are to outline a coherent framework for understanding the dynamics of grid cells and their representation of space; to critically present and draw contrasts between recurrent network models of grid cells based on continuous attractor dynamics and independent‐neuron models based on temporal interference; and to suggest open questions for experiment and theory.


International Conference on Computer Vision | 2009

Scaling object recognition: Benchmark of current state of the art techniques

Mohamed Aly; Peter Welinder; Mario E. Munich; Pietro Perona

Scaling from hundreds to millions of objects is the next challenge in visual recognition. We investigate and benchmark the scalability properties (memory requirements, runtime, recognition performance) of state-of-the-art object recognition techniques: the forest of k-d trees, the locality sensitive hashing (LSH) method, and the approximate clustering procedure with the tf-idf inverted index. The characterization of the images was performed with SIFT features. We conduct experiments on two new datasets of more than 100,000 images each, and quantify the performance using artificial and natural deformations. We analyze the results and point out the pitfalls of each of the compared methodologies, suggesting potential new research avenues for the field.
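
One of the benchmarked techniques, the tf-idf inverted index, treats each image as a bag of quantized local descriptors ("visual words") and scores candidate images by summing tf-idf weights over shared words, so query time scales with posting-list lengths rather than collection size. A minimal sketch with illustrative names, not the benchmark's actual code:

```python
import math
from collections import defaultdict, Counter

class InvertedIndex:
    def __init__(self):
        self.postings = defaultdict(list)   # word -> [(image_id, tf)]
        self.n_images = 0

    def add(self, image_id, words):
        # 'words' are quantized descriptor ids (e.g. k-means cluster ids of SIFT).
        self.n_images += 1
        for w, c in Counter(words).items():
            self.postings[w].append((image_id, c / len(words)))

    def query(self, words):
        # Accumulate tf-idf scores only over images sharing a word with the query.
        scores = defaultdict(float)
        for w in set(words):
            plist = self.postings.get(w, [])
            if not plist:
                continue
            idf = math.log(self.n_images / len(plist))
            for image_id, tf in plist:
                scores[image_id] += tf * idf
        return sorted(scores.items(), key=lambda kv: -kv[1])
```

Words appearing in many images get a small idf weight and contribute little, which is what keeps the ranking discriminative as the collection grows.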


International Conference on Image Processing | 2009

Automatic discovery of image families: Global vs. local features

Mohamed Aly; Peter Welinder; Mario E. Munich; Pietro Perona

Gathering a large collection of images has been made quite easy by social and image-sharing websites, e.g. flickr.com. However, such collections typically contain a large number of duplicates and highly similar images, which complicates their use. This work tackles the problem of how to automatically organize image collections into sets of similar images, hereinafter called image families. We thoroughly compare the performance of two approaches to measuring image similarity: global descriptors vs. a set of local descriptors. We assess the performance of these approaches as the problem scales up to thousands of images and hundreds of families. We present our results on a new dataset of CD/DVD game covers.


Computer Vision and Pattern Recognition | 2009

Towards automated large scale discovery of image families

Mohamed Aly; Peter Welinder; Mario E. Munich; Pietro Perona

Gathering large collections of images is quite easy nowadays with the advent of image sharing Web sites, such as flickr.com. However, such collections inevitably contain duplicates and highly similar images, which we refer to as image families. Automatic discovery and cataloguing of such similar images in large collections is important for many applications, e.g. image search, image collection visualization, and research, among others. In this work, we investigate this problem by thoroughly comparing two broad approaches for measuring image similarity: global vs. local features. We assess their performance as the image collection scales up to over 11,000 images with over 6,300 families. We present our results on three datasets with different statistics, including two new challenging datasets. Moreover, we present a new algorithm to automatically determine the number of families in the collection, with promising results.


Computer Vision and Pattern Recognition | 2013

A Lazy Man's Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration

Peter Welinder; Max Welling; Pietro Perona

How many labeled examples are needed to estimate a classifier's performance on a new dataset? We study the case where data is plentiful, but labels are expensive. We show that by making a few reasonable assumptions on the structure of the data, it is possible to estimate performance curves, with confidence bounds, using a small number of ground truth labels. Our approach, which we call Semisupervised Performance Evaluation (SPE), is based on a generative model for the classifier's confidence scores. In addition to estimating the performance of classifiers on new datasets, SPE can be used to recalibrate a classifier by re-estimating the class-conditional confidence distributions.
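
The idea of a generative model over confidence scores can be illustrated with a two-component mixture: once the class-conditional score distributions and the class prior are estimated (which requires only a few labels to disambiguate the components), performance curves such as precision vs. threshold follow in closed form. A hedged sketch assuming Gaussian class-conditional scores; the paper's actual model may differ:

```python
from math import erf, sqrt

def norm_sf(x, mu, sigma):
    # Gaussian survival function P(S > x) for a score S ~ N(mu, sigma^2).
    return 0.5 * (1.0 - erf((x - mu) / (sigma * sqrt(2.0))))

def precision_at(t, pi_pos, mu_pos, s_pos, mu_neg, s_neg):
    # Expected precision when thresholding scores at t, given the mixture:
    # pi_pos is the positive-class prior, (mu, s) the per-class score Gaussians.
    tp = pi_pos * norm_sf(t, mu_pos, s_pos)          # expected true positives
    fp = (1.0 - pi_pos) * norm_sf(t, mu_neg, s_neg)  # expected false positives
    return tp / (tp + fp)
```

Recalibration falls out of the same picture: re-estimating the two class-conditional distributions on the new dataset gives an updated mapping from raw score to posterior probability.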


Clinical Neurophysiology | 2014

O36: Sleep spindle scoring: performance of humans versus machines

Sabrina Lyngbye Wendt; Simon C. Warby; Peter Welinder; Helge Bjarup Dissing Sørensen; Paul E. Peppard; Emmanuel Mignot; Poul Jørgen Jennum

Question: What is the agreement in spindle scoring within, between and among experts? How does spindle scoring by humans compare to automated spindle scoring algorithms?

Methods: We crowd-sourced the collection of spindle scorings from 24 experts in a large and varied dataset of EEG (C3-M2) from 110 middle-aged sleeping subjects. Epochs were scored by an average of 5.3 unique experts. Two experts scored parts of the dataset multiple times. We developed a simple method to build a large gold standard by establishing group consensus among expert scorers. We tested the performance of six previously published automated spindle detectors against the gold standard and refined methods of performance analysis for event detection.

Results: We found an interrater agreement (F1-score) of 61±6% (Cohen’s Kappa (κ): 0.52±0.07) averaged over 24 expert pairs and an intrarater agreement of 72±7% (κ: 0.66±0.07) averaged over two experts. We tested the performance of individual experts against a gold standard compiled from all the expert scorers and found an average agreement of 75±6% (κ: 0.68) over the 24 experts. We recompiled the gold standard, excluding the single expert whose performance was being assessed, and found an average agreement of 67±7% (κ: 0.59). Overall, the performance of human experts was significantly better than that of the automated sleep spindle detectors we tested (maximum F1-score of detectors: 52%).

Conclusions: Sleep spindle characteristics vary widely between subjects, which makes the scoring task difficult. The low interrater reliability suggests using more than one expert when scoring a dataset.
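
The agreement measures quoted above can be computed from event-level counts between two scorers (tp = both mark an event, fp/fn = only one does, tn = neither). A minimal sketch of the F1-score and Cohen's kappa using hypothetical counts:

```python
def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall; true negatives play no role.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def cohens_kappa(tp, fp, fn, tn):
    # Observed agreement corrected for the agreement expected by chance,
    # computed from the two scorers' marginal yes/no rates.
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)
    p_no = ((fn + tn) / n) * ((fp + tn) / n)
    p_exp = p_yes + p_no
    return (p_obs - p_exp) / (1 - p_exp)
```

Because spindles are rare relative to non-spindle epochs, the tn count is large; F1 ignores it, while kappa discounts the chance agreement it inflates, which is why both are reported.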


California Institute of Technology, Technical Report CNS-TR-2011-001 | 2011

The Caltech-UCSD Birds-200-2011 Dataset

Catherine Wah; Steve Branson; Peter Welinder; Pietro Perona; Serge J. Belongie

Collaboration


Dive into Peter Welinder's collaborations.

Top Co-Authors

Pietro Perona (California Institute of Technology)
Mohamed Aly (California Institute of Technology)
Steve Branson (California Institute of Technology)
Catherine Wah (University of California)
Paul E. Peppard (University of Wisconsin-Madison)
Simon C. Warby (Université de Montréal)