Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Timothee Cour is active.

Publication


Featured research published by Timothee Cour.


Computer Vision and Pattern Recognition | 2005

Spectral segmentation with multiscale graph decomposition

Timothee Cour; Florence Bénézit; Jianbo Shi

We present a multiscale spectral image segmentation algorithm. In contrast to most multiscale image processing, this algorithm works on multiple scales of the image in parallel, without iteration, to capture both coarse and fine-level details. The algorithm is computationally efficient, allowing it to segment large images. We use the normalized cut graph partitioning framework of image segmentation. We construct a graph encoding pairwise pixel affinity and partition the graph for image segmentation. We demonstrate that large image graphs can be compressed into multiple scales capturing image structure at increasingly large neighborhoods. We show that the decomposition of the image segmentation graph into different scales can be determined by ecological statistics on the image grouping cues. Our segmentation algorithm works simultaneously across the graph scales, with an inter-scale constraint to ensure communication and consistency between the segmentations at each scale. As the results show, we incorporate long-range connections with linear-time complexity, providing high-quality segmentations efficiently. Images that previously could not be processed because of their size can now be segmented accurately.
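
The single-scale machinery the abstract builds on can be sketched compactly. Below is a minimal normalized-cut bipartition on a 4-connected pixel grid, using an intensity-based affinity and SciPy's sparse eigensolver; the paper's multiscale graph compression and inter-scale constraints are not reproduced, and the grid connectivity and the sigma bandwidth are illustrative choices.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

def ncut_bipartition(img, sigma=0.1):
    """Two-way normalized cut on a 4-connected pixel grid.

    Single-scale sketch only: the paper's multiscale graph
    compression and inter-scale consistency constraints are
    omitted, and `sigma` is an illustrative affinity bandwidth.
    """
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, data = [], [], []
    # Pairwise affinity between 4-connected neighbors, based on
    # intensity similarity (a stand-in for richer grouping cues).
    for di, dj in [(0, 1), (1, 0)]:
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        diff = (img[:h - di, :w - dj] - img[di:, dj:]).ravel()
        wgt = np.exp(-diff ** 2 / (2 * sigma ** 2))
        rows += [a, b]; cols += [b, a]; data += [wgt, wgt]
    W = sparse.csr_matrix(
        (np.concatenate(data), (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w))
    d = np.asarray(W.sum(axis=1)).ravel()
    D = sparse.diags(d)
    # Normalized cut: solve (D - W) y = lambda * D y and split on the
    # sign of the second-smallest generalized eigenvector.
    evals, evecs = eigsh(D - W, k=2, M=D, sigma=-1e-6, which='LM')
    fiedler = evecs[:, np.argsort(evals)[1]]
    return (fiedler > 0).reshape(h, w)

# Example: bipartition a small synthetic image with two flat regions.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
mask = ncut_bipartition(img)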


Computer Vision and Pattern Recognition | 2011

Large-scale image classification: Fast feature extraction and SVM training

Yuanqing Lin; Fengjun Lv; Shenghuo Zhu; Ming Yang; Timothee Cour; Kai Yu; Liangliang Cao; Thomas S. Huang

Most research efforts on image classification have so far focused on medium-scale datasets, often defined as datasets that can fit into the memory of a desktop (typically 4GB to 48GB). There are two main reasons for the limited effort on large-scale image classification. First, until the emergence of the ImageNet dataset, there was almost no publicly available large-scale benchmark data for image classification, mostly because class labels are expensive to obtain. Second, large-scale classification is hard because it poses more challenges than its medium-scale counterpart. A key challenge is how to achieve efficiency in both feature extraction and classifier training without compromising performance. This paper shows how we address this challenge, using the ImageNet dataset as an example. For feature extraction, we develop a Hadoop scheme that performs feature extraction in parallel using hundreds of mappers. This allows us to extract fairly sophisticated features (with dimensions in the hundreds of thousands) from 1.2 million images within one day. For SVM training, we develop a parallel averaging stochastic gradient descent (ASGD) algorithm for training one-against-all 1000-class SVM classifiers. The ASGD algorithm is capable of dealing with terabytes of training data and converges very fast: typically 5 epochs are sufficient. As a result, we achieve state-of-the-art performance on ImageNet 1000-class classification: 52.9% classification accuracy and 71.8% top-5 hit rate.
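
The averaging step that distinguishes ASGD from plain SGD is easy to illustrate. The sketch below trains one-vs-all linear SVMs with hinge-loss SGD and returns the running average of the iterates; it is a single-machine toy under assumed hyperparameters (the `lam` regularizer and the 1/(lam*t) step size are conventional placeholders), not the paper's parallel, terabyte-scale implementation.

import numpy as np

def asgd_ova_svm(X, y, n_classes, lam=1e-4, epochs=5):
    """One-vs-all linear SVMs trained with averaged SGD (hinge loss).

    Single-machine toy sketch: the abstract's parallel,
    terabyte-scale ASGD pipeline is not reproduced, and the
    hyperparameters are placeholders, not the paper's settings.
    """
    n, d = X.shape
    W = np.zeros((n_classes, d))   # current SGD iterate
    W_avg = np.zeros_like(W)       # running average of iterates (the "A" in ASGD)
    t = 0
    for _ in range(epochs):        # the abstract reports ~5 epochs suffice
        for i in np.random.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            x = X[i]
            # +1 target for the true class, -1 for all others
            targets = np.where(np.arange(n_classes) == y[i], 1.0, -1.0)
            violated = targets * (W @ x) < 1.0     # hinge margin violations
            W *= 1.0 - eta * lam                   # L2 regularization shrinkage
            W[violated] += eta * np.outer(targets[violated], x)
            W_avg += (W - W_avg) / t               # incremental averaging
    return W_avg

# Usage: scores are linear; predict with argmax over classes.
X = np.random.randn(200, 16); y = np.random.randint(0, 3, 200)
W = asgd_ova_svm(X, y, n_classes=3)
pred = np.argmax(X @ W.T, axis=1)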


International Conference on Computer Vision | 2011

Contextual weighting for vocabulary tree based image retrieval

Xiaoyu Wang; Ming Yang; Timothee Cour; Shenghuo Zhu; Kai Yu; Tony X. Han

In this paper we address the problem of image retrieval from millions of database images. We improve the vocabulary tree based approach by introducing contextual weighting of local features in both descriptor and spatial domains. Specifically, we propose to incorporate efficient statistics of neighbor descriptors both on the vocabulary tree and in the image spatial domain into the retrieval. These contextual cues substantially enhance the discriminative power of individual local features with very small computational overhead. We have conducted extensive experiments on benchmark datasets, i.e., the UKbench, Holidays, and our new Mobile dataset, which show that our method reaches state-of-the-art performance with much less computation. Furthermore, the proposed method demonstrates excellent scalability in terms of both retrieval accuracy and efficiency on large-scale experiments using 1.26 million images from the ImageNet database as distractors.
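
Below is a minimal sketch of the inverted-file scoring that vocabulary-tree retrieval rests on, with a per-feature `weights` hook where contextual weights of the kind the abstract describes would enter; the neighbor-descriptor and spatial statistics themselves are not computed here, uniform weights recover plain TF-IDF, and vector normalization is omitted for brevity.

import numpy as np
from collections import defaultdict

class WeightedInvertedIndex:
    """TF-IDF inverted file over quantized local descriptors.

    Sketch of the vocabulary-tree baseline only. The per-feature
    `weights` argument is where contextual weights (from
    neighbor-descriptor and spatial statistics) would plug in;
    uniform weights recover plain TF-IDF retrieval.
    """
    def __init__(self, n_words):
        self.n_words = n_words
        self.postings = defaultdict(list)   # visual word -> [(image_id, tf)]
        self.doc_freq = np.zeros(n_words)
        self.n_images = 0

    @staticmethod
    def _term_freq(words, weights):
        tf = defaultdict(float)
        for w, wt in zip(words, weights):
            tf[w] += wt                     # weighted vote per visual word
        return tf

    def add(self, image_id, words, weights=None):
        weights = np.ones(len(words)) if weights is None else weights
        tf = self._term_freq(words, weights)
        for w, v in tf.items():
            self.postings[w].append((image_id, v))
            self.doc_freq[w] += 1
        self.n_images += 1

    def query(self, words, weights=None, top_k=10):
        weights = np.ones(len(words)) if weights is None else weights
        tf = self._term_freq(words, weights)
        idf = np.log(self.n_images / np.maximum(self.doc_freq, 1))
        scores = defaultdict(float)
        for w, qv in tf.items():
            for image_id, dv in self.postings[w]:
                scores[image_id] += (idf[w] ** 2) * qv * dv
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]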


Computer Vision and Pattern Recognition | 2009

Learning from ambiguously labeled images

Timothee Cour; Benjamin Sapp; Chris Jordan; Benjamin Taskar

In many image and video collections, we have access only to partially labeled data. For example, personal photo collections often contain several faces per image and a caption that only specifies who is in the picture, but not which name matches which face. Similarly, movie screenplays can tell us who is in the scene, but not when and where they are on the screen. We formulate the learning problem in this setting as partially-supervised multiclass classification where each instance is labeled ambiguously with more than one label. We show theoretically that effective learning is possible under reasonable assumptions even when all the data is weakly labeled. Motivated by the analysis, we propose a general convex learning formulation based on minimization of a surrogate loss appropriate for the ambiguous label setting. We apply our framework to identifying faces culled from Web news sources and to naming characters in TV series and movies. We experiment on a very large dataset consisting of 100 hours of video, and in particular achieve 6% error for character naming on 16 episodes of LOST.
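
One way to make the ambiguous-label surrogate concrete: treat the mean score over the candidate label set as a single positive and every non-candidate score as a negative, each penalized with a convex binary loss. The sketch below is in the spirit of the abstract; the squared hinge is one convex choice, not necessarily the paper's exact loss, and no optimizer is included.

import numpy as np

def partial_label_loss(scores, candidates):
    """Convex surrogate loss for one ambiguously labeled instance.

    `scores`:     (n_classes,) classifier scores for the instance.
    `candidates`: boolean mask over classes, True for the ambiguous
                  candidate label set (e.g., names in the caption).
    The squared hinge below is one convex choice of binary loss,
    not necessarily the paper's.
    """
    sq_hinge = lambda m: np.maximum(0.0, 1.0 - m) ** 2
    pos = scores[candidates].mean()     # push some candidate score up
    negs = scores[~candidates]          # push ruled-out label scores down
    return sq_hinge(pos) + sq_hinge(-negs).sum()

# Usage: instance with candidate labels {0, 2} out of 4 classes.
scores = np.array([1.2, -0.5, 0.3, -1.0])
mask = np.array([True, False, True, False])
loss = partial_label_loss(scores, mask)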


Computer Vision and Pattern Recognition | 2007

Recognizing objects by piecing together the Segmentation Puzzle

Timothee Cour; Jianbo Shi

We present an algorithm that recognizes objects of a given category using a small number of hand-segmented images as references. Our method first over-segments an input image into superpixels, and then finds a shortlist of optimal combinations of superpixels that best fit one of the template parts, under affine transformations. Second, we develop a contextual interpretation of the parts, gluing image segments using top-down fiducial points and checking overall shape similarity. In contrast to previous work, the search for candidate superpixel combinations is not exponential in the number of segments, and in fact leads to a very efficient detection scheme. Both the storage and the detection of templates require only space and time proportional to the length of the template boundary, allowing us to store potentially millions of templates and to detect a template anywhere in a large image in roughly 0.01 seconds. We apply our algorithm to the Weizmann horse database and show that our method is comparable to the state of the art while offering a simpler and more efficient alternative to previous work.
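
The "piecing together" step can be illustrated with a greedy toy: given an over-segmentation and a template part mask assumed already aligned to the image, keep each superpixel that lies mostly inside the part. The paper's search over affine transformations and its efficient shortlist of superpixel combinations are not reproduced; `threshold` is an illustrative parameter.

import numpy as np

def fit_part_with_superpixels(labels, part_mask, threshold=0.5):
    """Greedy sketch of assembling a template part from superpixels.

    `labels`:    (H, W) integer over-segmentation label map.
    `part_mask`: (H, W) boolean mask of a template part, assumed
                 already aligned to the image; the affine search
                 and shortlist of the paper are not reproduced.
    Keeps each superpixel whose area lies mostly inside the part.
    """
    selected = np.zeros_like(part_mask, dtype=bool)
    for sp in np.unique(labels):
        sp_mask = labels == sp
        inside = (sp_mask & part_mask).sum() / sp_mask.sum()
        if inside > threshold:          # superpixel mostly covered by the part
            selected |= sp_mask
    return selected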


Neural Information Processing Systems | 2006

Balanced Graph Matching

Timothee Cour; Praveen Srinivasan; Jianbo Shi


European Conference on Computer Vision | 2012

Query Specific Fusion for Image Retrieval

Shaoting Zhang; Ming Yang; Timothee Cour; Kai Yu; Dimitris N. Metaxas


Journal of Machine Learning Research | 2011

Learning from Partial Labels

Timothee Cour; Benjamin Sapp; Ben Taskar


European Conference on Computer Vision | 2008

Movie/Script: Alignment and Parsing of Video and Text Transcription

Timothee Cour; Chris Jordan; Eleni Miltsakaki; Ben Taskar


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Query Specific Rank Fusion for Image Retrieval

Shaoting Zhang; Ming Yang; Timothee Cour; Kai Yu; Dimitris N. Metaxas

Collaboration


Dive into Timothee Cour's collaborations.

Top Co-Authors

Jianbo Shi
University of Pennsylvania

Ben Taskar
University of Washington

Benjamin Sapp
University of Pennsylvania

Chris Jordan
University of Pennsylvania

Shaoting Zhang
University of North Carolina at Charlotte

Xiaoyu Wang
University of Missouri