Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Trevor Darrell is active.

Publication


Featured research published by Trevor Darrell.


ACM Multimedia | 2014

Caffe: Convolutional Architecture for Fast Feature Embedding

Yangqing Jia; Evan Shelhamer; Jeff Donahue; Sergey Karayev; Jonathan Long; Ross B. Girshick; Sergio Guadarrama; Trevor Darrell

Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approx 2 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.
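
As an illustration of the workflow the abstract describes, here is a minimal inference sketch against Caffe's Python bindings. The file names, the input image, and the 'prob' output blob are placeholder assumptions for a typical BVLC reference model, not part of the paper.

    import caffe

    caffe.set_mode_gpu()  # fall back to caffe.set_mode_cpu() without CUDA

    # The model structure (prototxt) is kept separate from the learned
    # weights, reflecting the separation of representation from implementation.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    image = caffe.io.load_image('example.jpg')  # HxWx3 RGB floats in [0, 1]
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))  # HWC -> CHW for the net
    net.blobs['data'].data[...] = transformer.preprocess('data', image)

    out = net.forward()
    print('predicted class:', out['prob'][0].argmax())  # assumes a 'prob' output blob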


Computer Vision and Pattern Recognition | 2014

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

Ross B. Girshick; Jeff Donahue; Trevor Darrell; Jitendra Malik

Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
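
A schematic sketch of the pipeline as summarized above: propose regions, warp each to the network's input size, extract CNN features, score with per-class classifiers, then suppress overlaps. All callables here (propose_regions, warp_to_input, cnn_features, class_scorers, nms) are hypothetical stand-ins, not the released code at the URL above.

    def rcnn_detect(image, propose_regions, warp_to_input, cnn_features,
                    class_scorers, nms, threshold=0.0):
        detections = []
        for box in propose_regions(image):                  # bottom-up region proposals
            feat = cnn_features(warp_to_input(image, box))  # fixed-length CNN feature
            for cls, score_fn in class_scorers.items():     # e.g. one linear SVM per class
                score = score_fn(feat)
                if score > threshold:
                    detections.append((cls, box, score))
        return nms(detections)                              # drop overlapping duplicates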


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Pfinder: real-time tracking of the human body

Christopher Richard Wren; Ali Azarbayejani; Trevor Darrell; Alex Pentland

Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10 Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding.
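
To make the "multiclass statistical model of color" concrete, here is a toy sketch of the per-pixel classification idea: each class (say head, hands, scene background) carries Gaussian color statistics, and a pixel is assigned to the class with the smallest Mahalanobis distance. The class parameters are illustrative placeholders, not Pfinder's actual models.

    import numpy as np

    def mahalanobis_sq(x, mean, cov_inv):
        d = x - mean
        return d @ cov_inv @ d                  # squared Mahalanobis distance

    def classify_pixel(pixel, classes):
        """classes: iterable of (name, mean, inverse-covariance) triples."""
        return min(classes, key=lambda c: mahalanobis_sq(pixel, c[1], c[2]))[0]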


European Conference on Computer Vision | 2010

Adapting visual category models to new domains

Kate Saenko; Brian Kulis; Mario Fritz; Trevor Darrell

Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.
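
A simplified gradient-descent sketch of the core idea: learn a linear map W that pulls same-class cross-domain pairs together and pushes different-class pairs apart. The paper itself learns the transformation through metric-learning constraints rather than this loop, so treat the objective and hyperparameters below as illustrative.

    import numpy as np

    def learn_transform(pairs_same, pairs_diff, dim, lr=0.01, margin=1.0, steps=1000):
        """pairs_*: lists of (x_source, x_target) feature pairs, each in R^dim."""
        W = np.eye(dim)
        for _ in range(steps):
            grad = np.zeros_like(W)
            for xs, xt in pairs_same:            # pull same-class pairs together
                diff = W @ xs - xt
                grad += 2 * np.outer(diff, xs)
            for xs, xt in pairs_diff:            # push different-class pairs apart
                diff = W @ xs - xt
                if diff @ diff < margin:         # hinge: only while still too close
                    grad -= 2 * np.outer(diff, xs)
            W -= lr * grad
        return W

Because the transform is learned over feature space rather than per category, it can be applied to classes with no labeled target-domain examples, which is the property the abstract highlights.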


Computer Vision and Pattern Recognition | 1998

Integrated person tracking using stereo, color, and pattern detection

Trevor Darrell; Gaile G. Gordon; Michael Harville; John Iselin Woodfill

We present an approach to real-time person tracking in crowded and/or unknown environments using integration of multiple visual modalities. We combine stereo, color, and face detection modules into a single robust system, and show an initial application in an interactive, face-responsive display. Dense, real-time stereo processing is used to isolate users from other objects and people in the background. Skin-hue classification identifies and tracks likely body parts within the silhouette of a user. Face pattern detection discriminates and localizes the face within the identified body parts. Faces and bodies of users are tracked over several temporal scales: short-term (user stays within the field of view), medium-term (user exits/reenters within minutes), and long term (user returns after hours or days). Short-term tracking is performed using simple region position and size correspondences, while medium and long-term tracking are based on statistics of user appearance. We discuss the failure modes of each individual module, describe our integration method, and report results with the complete system in trials with thousands of users.
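
A schematic sketch of how the three modules feed one another, with stereo_silhouette, skin_mask, and detect_faces as hypothetical stand-ins for the dense stereo, skin-hue, and face-pattern components:

    def track_person(left, right, stereo_silhouette, skin_mask, detect_faces):
        silhouette = stereo_silhouette(left, right)  # isolate user from background
        body_parts = skin_mask(left) & silhouette    # likely skin regions on the user
        faces = detect_faces(left, search_region=body_parts)  # localize the face
        return silhouette, body_parts, faces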


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

Hidden Conditional Random Fields

Ariadna Quattoni; Sy Bor Wang; Louis-Philippe Morency; Michael Collins; Trevor Darrell

We present a discriminative latent variable model for classification problems in structured domains where inputs can be represented by a graph of local observations. A hidden-state conditional random field framework learns a set of latent variables conditioned on local features. Observations need not be independent and may overlap in space and time.
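
In the standard hidden-state CRF formulation the abstract summarizes, a class label y is scored by summing over all configurations of the latent variables h. With \Phi a feature function over the label, hidden states, and observation graph \mathbf{x}, and \theta the learned parameters, a minimal statement of the model is

    P(y \mid \mathbf{x}; \theta)
      = \frac{\sum_{\mathbf{h}} \exp\big(\theta \cdot \Phi(y, \mathbf{h}, \mathbf{x})\big)}
             {\sum_{y'} \sum_{\mathbf{h}} \exp\big(\theta \cdot \Phi(y', \mathbf{h}, \mathbf{x})\big)}

Because the observations enter only through \Phi, they need not be independent and may overlap in space and time, exactly as noted above.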


Computer Vision and Pattern Recognition | 2006

Hidden Conditional Random Fields for Gesture Recognition

Sy Bor Wang; Ariadna Quattoni; Louis-Philippe Morency; David Demirdjian; Trevor Darrell

We introduce a discriminative hidden-state approach for the recognition of human gestures. Gesture sequences often have a complex underlying structure, and models that can incorporate hidden structures have proven to be advantageous for recognition tasks. Most existing approaches to gesture recognition with hidden states employ a Hidden Markov Model or suitable variant (e.g., a factored or coupled state model) to model gesture streams; a significant limitation of these models is the requirement of conditional independence of observations. In addition, hidden states in a generative model are selected to maximize the likelihood of generating all the examples of a given gesture class, which is not necessarily optimal for discriminating the gesture class against other gestures. Previous discriminative approaches to gesture sequence recognition have shown promising results, but have not incorporated hidden states nor addressed the problem of predicting the label of an entire sequence. In this paper, we derive a discriminative sequence model with a hidden state structure, and demonstrate its utility both in a detection and in a multi-way classification formulation. We evaluate our method on the task of recognizing human arm and head gestures, and compare the performance of our method to both generative hidden state and discriminative fully-observable models.
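
A toy sketch of the scoring step such a model needs at test time: for each gesture class, sum over hidden-state paths with a forward pass in log space, then pick the class with the highest value. The dense potential arrays stand in for learned feature functions and are purely illustrative.

    import numpy as np
    from scipy.special import logsumexp

    def class_score(obs, trans):
        """obs: (T, H) per-frame scores over H hidden states;
        trans: (H, H) hidden-state transition scores."""
        alpha = obs[0]                                        # log forward messages
        for t in range(1, len(obs)):
            alpha = obs[t] + logsumexp(alpha[:, None] + trans, axis=0)
        return logsumexp(alpha)                               # log-sum over all paths

    def classify(potentials_by_class):
        """potentials_by_class: {label: (obs, trans)} for one observed sequence."""
        return max(potentials_by_class,
                   key=lambda y: class_score(*potentials_by_class[y]))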


Computer Vision and Pattern Recognition | 2011

What you saw is not what you get: Domain adaptation using asymmetric kernel transforms

Brian Kulis; Kate Saenko; Trevor Darrell

In real-world applications, “what you saw” during training is often not “what you get” during deployment: the distribution and even the type and dimensionality of features can change from one dataset to the next. In this paper, we address the problem of visual domain adaptation for transferring object models from one dataset or visual domain to another. We introduce ARC-t, a flexible model for supervised learning of non-linear transformations between domains. Our method is based on a novel theoretical result demonstrating that such transformations can be learned in kernel space. Unlike existing work, our model is not restricted to symmetric transformations, nor to features of the same type and dimensionality, making it applicable to a significantly wider set of adaptation scenarios than previous methods. Furthermore, the method can be applied to categories that were not available during training. We demonstrate the ability of our method to adapt object recognition models under a variety of situations, such as differing imaging conditions, feature types and codebooks.
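
A minimal sketch of the asymmetry that distinguishes ARC-t: W maps source features in R^d_s into a target space R^d_t, with d_s and d_t free to differ, trained from similarity constraints. The perceptron-style hinge loop below is a simplification; the paper's result is that the transform can be learned in kernel space, which this sketch does not show.

    import numpy as np

    def learn_asymmetric_transform(constraints, d_t, d_s, lr=0.01, steps=500):
        """constraints: list of (x_t, x_s, same_class) with x_t in R^d_t, x_s in R^d_s."""
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.01, size=(d_t, d_s))  # asymmetric: d_t need not equal d_s
        for _ in range(steps):
            for x_t, x_s, same in constraints:
                sim = x_t @ W @ x_s                  # bilinear cross-domain similarity
                target = 1.0 if same else -1.0       # similar pairs high, dissimilar low
                if target * sim < 1.0:               # hinge violated: update W
                    W += lr * target * np.outer(x_t, x_s)
        return W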


Computer Vision and Pattern Recognition | 2016

Context Encoders: Feature Learning by Inpainting

Deepak Pathak; Philipp Krähenbühl; Jeff Donahue; Trevor Darrell; Alexei A. Efros

We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders - a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
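
A sketch of the joint objective in PyTorch-style Python: a pixel-wise L2 reconstruction term on the dropped region plus an adversarial term from a discriminator D judging the generated contents. G, D, the masking convention, and the lambda weights are placeholder assumptions, not the released model.

    import torch
    import torch.nn.functional as F

    def generator_loss(G, D, images, mask, lam_rec=0.999, lam_adv=0.001):
        """mask: 1 inside the region to inpaint, 0 over the visible context."""
        context = images * (1 - mask)                 # blank out the region to predict
        pred = G(context)                             # hypothesize the missing contents
        rec = F.mse_loss(pred * mask, images * mask)  # pixel-wise reconstruction loss
        logits = D(pred)
        adv = F.binary_cross_entropy_with_logits(     # adversarial term: fool D
            logits, torch.ones_like(logits))
        return lam_rec * rec + lam_adv * adv

As the abstract notes, the adversarial term is what sharpens the output: pure L2 averages over the multiple plausible completions, while the discriminator penalizes that blur.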


Computer Vision and Pattern Recognition | 1993

Space-time gestures

Trevor Darrell; Alex Pentland

A method for learning, tracking, and recognizing human gestures using a view-based approach to model articulated objects is presented. Objects are represented using sets of view models, rather than single templates. Stereotypical space-time patterns, i.e., gestures, are then matched to stored gesture patterns using dynamic time warping. Real-time performance is achieved by using special purpose correlation hardware and view prediction to prune as much of the search space as possible. Both view models and view predictions are learned from examples. Results showing tracking and recognition of human hand gestures at over 10 Hz are presented.
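
Since dynamic time warping is the matching step named above, here is a compact, self-contained sketch of it: align an observed feature sequence against a stored gesture pattern and return the warped distance; recognition then picks the nearest stored pattern. This is generic DTW, not the special-purpose correlation-hardware implementation.

    import numpy as np

    def dtw_distance(seq_a, seq_b, dist=lambda a, b: float(np.linalg.norm(a - b))):
        n, m = len(seq_a), len(seq_b)
        D = np.full((n + 1, m + 1), np.inf)   # accumulated warping costs
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = dist(seq_a[i - 1], seq_b[j - 1])
                D[i, j] = cost + min(D[i - 1, j],        # insertion
                                     D[i, j - 1],        # deletion
                                     D[i - 1, j - 1])    # match
        return D[n, m]

    def recognize(observed, stored):
        """stored: {gesture_name: reference_sequence}."""
        return min(stored, key=lambda g: dtw_distance(observed, stored[g]))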

Collaboration


Dive into Trevor Darrell's collaborations.

Top Co-Authors

Alex Pentland (Massachusetts Institute of Technology)
Judy Hoffman (University of California)
David Demirdjian (Massachusetts Institute of Technology)
Kristen Grauman (University of Texas at Austin)
Yangqing Jia (University of California)
Jeff Donahue (University of California)