Network


Latest external collaborations at the country level.

Hotspots


Dive into the research topics where Eric Tzeng is active.

Publications


Featured research published by Eric Tzeng.


International Conference on Computer Vision | 2015

Simultaneous Deep Transfer Across Domains and Tasks

Eric Tzeng; Judy Hoffman; Trevor Darrell; Kate Saenko

Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.
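
The two losses this abstract describes can be sketched compactly. Below is a minimal PyTorch-style illustration, assuming the network produces domain-classifier logits and task logits; the loss weights and temperature are illustrative assumptions, not the paper's released code.

```python
import torch.nn.functional as F

def domain_confusion_loss(domain_logits):
    # Encourage domain-invariant features: the domain classifier's
    # predictions should look uniform (it cannot tell domains apart).
    return -F.log_softmax(domain_logits, dim=1).mean()

def soft_label_loss(target_logits, class_soft_labels, temperature=2.0):
    # Transfer task structure: match the target example's softened
    # prediction to the average source prediction for its class.
    log_probs = F.log_softmax(target_logits / temperature, dim=1)
    return -(class_soft_labels * log_probs).sum(dim=1).mean()

# Hypothetical combined objective (weights lam and nu are assumptions):
# loss = task_loss + lam * domain_confusion_loss(d_logits) \
#        + nu * soft_label_loss(t_logits, soft_labels)
```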


Computer Vision and Pattern Recognition | 2017

Adversarial Discriminative Domain Adaptation

Eric Tzeng; Judy Hoffman; Kate Saenko; Trevor Darrell

Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
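
The staged training the abstract outlines (supervised pre-training on the source domain, then adversarial adaptation of an untied target encoder against a domain discriminator) can be sketched as below. Module and optimizer names are hypothetical placeholders; this illustrates the described loop, not the authors' code.

```python
import torch
import torch.nn.functional as F

def adda_adapt_step(src_x, tgt_x, src_enc, tgt_enc, disc, opt_disc, opt_tgt):
    # Adaptation stage of ADDA (after supervised source pre-training).
    # (a) Discriminator learns to separate source vs. target features.
    src_feat = src_enc(src_x).detach()
    tgt_feat = tgt_enc(tgt_x).detach()
    logits = torch.cat([disc(src_feat), disc(tgt_feat)])
    labels = torch.cat([torch.ones(src_feat.size(0)),
                        torch.zeros(tgt_feat.size(0))]).long()
    d_loss = F.cross_entropy(logits, labels)
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # (b) Target encoder (weights untied from src_enc) is trained with an
    #     inverted GAN loss so its features fool the discriminator.
    t_logits = disc(tgt_enc(tgt_x))
    t_loss = F.cross_entropy(t_logits, torch.ones(tgt_x.size(0)).long())
    opt_tgt.zero_grad(); t_loss.backward(); opt_tgt.step()
    return d_loss.item(), t_loss.item()
```

At test time, the adapted target encoder is composed with the source-trained classifier.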


Computer Vision and Pattern Recognition | 2013

User-Driven Geolocation of Untagged Desert Imagery Using Digital Elevation Models

Eric Tzeng; Andrew Zhai; Matthew Clements; Raphael Townshend; Avideh Zakhor

We propose a system for user-aided visual localization of desert imagery without the use of any metadata such as GPS readings, camera focal length, or field-of-view. The system makes use only of publicly available digital elevation models (DEMs) to rapidly and accurately locate photographs in non-urban environments such as deserts. Our system generates synthetic skyline views from a DEM and extracts stable concavity-based features from these skylines to form a database. To localize queries, a user manually traces the skyline on an input photograph. The skyline is automatically refined based on this estimate, and the same concavity-based features are extracted. We then apply a variety of geometrically constrained matching techniques to efficiently and accurately match the query skyline to a database skyline, thereby localizing the query image. We evaluate our system using a test set of 44 ground-truthed images over a 10,000 km² region of interest in a desert and show that in many cases, queries can be localized with precision as fine as 100 m².
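
As a rough illustration of the concavity-based matching idea, the sketch below finds local minima of a skyline height signal and scores a query against a database skyline by descriptor distance. The window size and descriptor normalization are assumptions for illustration; the paper's actual features and geometric constraints are more involved.

```python
import numpy as np

def concavity_features(skyline, window=15):
    # Detect concavities as local minima of the skyline height signal and
    # describe each one by its normalized surrounding shape.
    feats = []
    for i in range(window, len(skyline) - window):
        seg = skyline[i - window : i + window + 1]
        if skyline[i] == seg.min():
            feats.append((seg - seg.mean()) / (seg.std() + 1e-8))
    return np.array(feats)

def match_score(query_feats, db_feats):
    # Lower is better: sum of each query concavity's distance to its
    # nearest concavity in the database skyline.
    if len(query_feats) == 0 or len(db_feats) == 0:
        return np.inf
    d = np.linalg.norm(query_feats[:, None, :] - db_feats[None, :, :], axis=2)
    return d.min(axis=1).sum()
```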


Intelligent Robots and Systems | 2014

Unifying scene registration and trajectory optimization for learning from demonstrations with application to manipulation of deformable objects

Alex X. Lee; Sandy H. Huang; Dylan Hadfield-Menell; Eric Tzeng; Pieter Abbeel

Recent work [1], [2] has shown promising results in enabling robotic manipulation of deformable objects through learning from demonstrations. Their method computes a registration from training scene to test scene, and then applies an extrapolation of this registration to the training scene gripper motion to obtain the gripper motion for the test scene. The warping cost of scene-to-scene registrations is used to determine the nearest neighbor from a set of training demonstrations. Once the gripper motion has been generalized to the test situation, they apply trajectory optimization [3] to plan robot motions that will track the predicted gripper motions. In many situations, however, the predicted gripper motions cannot be followed perfectly due to, for example, joint limits or obstacles. In this case, the past work finds a path that minimizes deviation from the predicted gripper trajectory as measured by its Euclidean distance for position and angular distance for orientation. Measuring the error this way during the motion planning phase, however, ignores the underlying structure of the problem, namely the idea that rigid registrations are preferred when generalizing from training scene to test scene. Deviating from the gripper trajectory predicted by the extrapolated registration effectively changes the warp induced by the registration in the part of the space where the gripper trajectories are. The main contribution of this paper is an algorithm that considers this effective final warp as the criterion to optimize for in a unified optimization that simultaneously considers the scene-to-scene warping and the robot trajectory (which were separated into two sequential steps by the past work). This results in an approach that adjusts to infeasibility in a way that adapts directly to the geometry of the scene and minimizes the introduction of additional warping cost. In addition, this paper proposes to learn the motion of the gripper pads, whereas past work considered the motion of a coordinate frame attached to the gripper as a whole. This enables learning more precise grasping motions. Our experiments, which consider the task of knot tying, show that both unified optimization and explicit consideration of gripper pad motion result in improved performance.
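
The unified objective the paper argues for can be sketched as a single cost over both the warp parameters and the robot trajectory, so that tracking deviations feed back into the registration rather than being penalized in isolation. Every function and weight below is a placeholder assumption standing in for the paper's thin-plate-spline machinery and trajectory optimizer.

```python
import numpy as np

def unified_cost(warp, traj, train_pts, test_pts, train_gripper_traj,
                 apply_warp, bending_energy, forward_kinematics,
                 w_fit=1.0, w_rigid=0.1, w_track=1.0):
    # (1) Registration: the warp should carry training-scene points onto
    #     the corresponding test-scene points.
    fit = np.sum((apply_warp(warp, train_pts) - test_pts) ** 2)
    # (2) Rigidity: prefer near-rigid warps (e.g., a TPS bending energy),
    #     since rigid registrations generalize best.
    rigid = bending_energy(warp)
    # (3) Tracking: the robot's gripper poses should follow the *warped*
    #     demonstration motion; infeasibility from joint limits or
    #     obstacles is then absorbed by adjusting the warp, not only
    #     by deviating from a fixed reference path.
    track = np.sum((forward_kinematics(traj)
                    - apply_warp(warp, train_gripper_traj)) ** 2)
    return w_fit * fit + w_rigid * rigid + w_track * track
```

Minimizing this jointly over the warp and the trajectory, rather than fixing the warp first, is the central change relative to the two-step pipeline.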


International World Wide Web Conferences | 2017

Visual Discovery at Pinterest

Andrew Zhai; Dmitry Kislyuk; Yushi Jing; Michael Feng; Eric Tzeng; Jeff Donahue; Yue Li Du; Trevor Darrell

Over the past three years Pinterest has experimented with several visual search and recommendation systems, from enhancing existing products such as Related Pins (2014), to powering new products such as Similar Looks (2015), Flashlight (2016), and Lens (2017). This paper presents an overview of our visual discovery engine powering these services, and shares the rationales behind our technical and product decisions such as the use of object detection and interactive user interfaces. We conclude that this visual discovery engine significantly improves engagement in both search and recommendation tasks.


International Conference on Robotics and Automation | 2015

Beyond lowest-warping cost action selection in trajectory transfer

Dylan Hadfield-Menell; Alex X. Lee; Chelsea Finn; Eric Tzeng; Sandy H. Huang; Pieter Abbeel

We consider the problem of learning from demonstrations to manipulate deformable objects. Recent work [1], [2], [3] has shown promising results that enable robotic manipulation of deformable objects through learning from demonstrations. Their approach is able to generalize from a single demonstration to new test situations, and suggests a nearest neighbor approach to select a demonstration to adapt to a given test situation. Such a nearest neighbor approach, however, ignores important aspects of the problem: brittleness (versus robustness) of demonstrations when generalized through this process, and the extent to which a demonstration makes progress towards a goal. In this paper, we frame the problem of selecting which demonstration to transfer as an options Markov decision process (MDP). We present max-margin Q-function estimation: an approach to learn a Q-function from expert demonstrations. Our learned policies account for variability in robustness of demonstrations and the sequential nature of our tasks. We developed two knot-tying benchmarks to experimentally validate the effectiveness of our proposed approach. The selection strategy described in [2] achieves success rates of 70% and 54%, respectively. Our approach performs significantly better, with success rates of 88% and 76%, respectively.
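
Max-margin Q-function estimation, as described, can be sketched as a hinge loss over the expert's choices: the demonstrated option's Q-value should beat every alternative's by a margin. The linear parameterization and feature function here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def max_margin_loss(w, demos, phi, margin=1.0, reg=1e-3):
    # demos: iterable of (state, expert_action, alternative_actions);
    # phi(state, action) -> feature vector; Q(s, a) = w . phi(s, a).
    loss = reg * float(np.dot(w, w))
    for state, a_star, alternatives in demos:
        q_star = float(np.dot(w, phi(state, a_star)))
        for a in alternatives:
            # Hinge term: penalize alternatives whose Q-value comes
            # within `margin` of the expert's chosen action.
            loss += max(0.0, margin + float(np.dot(w, phi(state, a))) - q_star)
    return loss
```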


Large-Scale Visual Geo-Localization | 2016

User-Aided Geo-location of Untagged Desert Imagery

Eric Tzeng; Andrew Zhai; Matthew Clements; Raphael J. L. Townshend; Avideh Zakhor

We propose a system for user-aided visual localization of desert imagery without the use of any metadata such as GPS readings, camera focal length, or field-of-view. The system makes use only of publicly available datasets, in particular digital elevation models (DEMs), to rapidly and accurately locate photographs in non-urban environments such as deserts. Our system generates synthetic skyline views from a DEM and extracts stable concavity-based features from these skylines to form a database. To localize queries, a user manually traces the skyline on an input photograph. The skyline is automatically refined based on this estimate, and the same concavity-based features are extracted. We then apply a variety of geometrically constrained matching techniques to efficiently and accurately match the query skyline to a database skyline, thereby localizing the query image. We evaluate our system using a test set of 44 ground-truthed images over a 10,000 km² region of interest in a desert and show that in many cases, queries can be localized with precision as fine as 100 m².


International Conference on Machine Learning | 2014

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

Jeff Donahue; Yangqing Jia; Oriol Vinyals; Judy Hoffman; Ning Zhang; Eric Tzeng; Trevor Darrell


arXiv: Computer Vision and Pattern Recognition | 2014

Deep Domain Confusion: Maximizing for Domain Invariance

Eric Tzeng; Judy Hoffman; Ning Zhang; Kate Saenko; Trevor Darrell


Neural Information Processing Systems | 2014

LSDA: Large Scale Detection through Adaptation

Judy Hoffman; Sergio Guadarrama; Eric Tzeng; Ronghang Hu; Jeff Donahue; Ross B. Girshick; Trevor Darrell; Kate Saenko

Collaboration


Dive into Eric Tzeng's collaborations.

Top Co-Authors

Trevor Darrell

University of California

Judy Hoffman

University of California

Jeff Donahue

University of California

Andrew Zhai

University of California

Avideh Zakhor

University of California

Chelsea Finn

University of California

Pieter Abbeel

University of California
