Publication


Featured research published by Eunbyung Park.


Workshop on Applications of Computer Vision | 2016

Combining multiple sources of knowledge in deep CNNs for action recognition

Eunbyung Park; Xufeng Han; Tamara L. Berg; Alexander C. Berg

Although deep convolutional neural networks (CNNs) have shown remarkable results for feature learning and prediction tasks, many recent studies have demonstrated improved performance by incorporating additional handcrafted features or by fusing predictions from multiple CNNs. Usually, these combinations are implemented via feature concatenation or by averaging output prediction scores from several CNNs. In this paper, we present new approaches for combining different sources of knowledge in deep learning. First, we propose feature amplification, where we use an auxiliary, hand-crafted feature (e.g., optical flow) to perform spatially varying soft-gating on intermediate CNN feature maps. Second, we present a spatially varying multiplicative fusion method for combining multiple CNNs trained on different sources that results in robust prediction by amplifying or suppressing the feature activations based on their agreement. We test these methods in the context of action recognition where information from spatial and temporal cues is useful, obtaining results that are comparable with state-of-the-art methods and outperform methods using only CNNs and optical flow features.
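
To make the two fusion operations concrete, here is a minimal PyTorch-style sketch (not the authors' code): the tensor shapes, the sigmoid gating, and the 1x1 projection layers are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def feature_amplification(cnn_features, flow_magnitude):
    """Soft-gate intermediate CNN feature maps with a hand-crafted cue.

    cnn_features:   (N, C, H, W) activations from an intermediate conv layer.
    flow_magnitude: (N, 1, h, w) per-pixel optical-flow magnitude (auxiliary feature).
    """
    # Resize the auxiliary map to the spatial size of the feature maps.
    gate = F.interpolate(flow_magnitude, size=cnn_features.shape[-2:],
                         mode="bilinear", align_corners=False)
    # Squash to (0, 1) so it acts as a spatially varying soft gate.
    gate = torch.sigmoid(gate)
    # Amplify or suppress each activation according to the cue.
    return cnn_features * gate

def multiplicative_fusion(feats_a, feats_b, proj_a, proj_b):
    """Combine two networks' feature maps so activations are boosted only
    where the two sources agree (elementwise product of projected features)."""
    return F.relu(proj_a(feats_a)) * F.relu(proj_b(feats_b))

if __name__ == "__main__":
    spatial = torch.randn(2, 256, 14, 14)    # e.g. appearance-stream features
    temporal = torch.randn(2, 256, 14, 14)   # e.g. motion-stream features
    flow_mag = torch.rand(2, 1, 56, 56)      # hypothetical optical-flow magnitude map
    proj_a = torch.nn.Conv2d(256, 256, kernel_size=1)
    proj_b = torch.nn.Conv2d(256, 256, kernel_size=1)
    amplified = feature_amplification(spatial, flow_mag)
    fused = multiplicative_fusion(amplified, temporal, proj_a, proj_b)
    print(fused.shape)  # torch.Size([2, 256, 14, 14])
```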


International Conference on Computer Vision | 2015

Visual Madlibs: Fill in the Blank Description Generation and Question Answering

Licheng Yu; Eunbyung Park; Alexander C. Berg; Tamara L. Berg

In this paper, we introduce a new dataset consisting of 360,001 focused natural language descriptions for 10,738 images. This dataset, the Visual Madlibs dataset, is collected using automatically produced fill-in-the-blank templates designed to gather targeted descriptions about: people and objects, their appearances, activities, and interactions, as well as inferences about the general scene or its broader context. We provide several analyses of the Visual Madlibs dataset and demonstrate its applicability to two new description generation tasks: focused description generation, and multiple-choice question-answering for images. Experiments using joint-embedding and deep learning methods show promising results on these tasks.
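
As a rough illustration of the kind of annotation the templates produce, the sketch below defines hypothetical Python data structures for a fill-in-the-blank prompt and a multiple-choice question; the field names and example text are assumptions, not the released Visual Madlibs annotation format.

```python
# Illustrative data structures only; field names and template wording are assumed.
from dataclasses import dataclass
from typing import List

@dataclass
class MadlibPrompt:
    image_id: int
    template: str          # e.g. "The person is ___."
    target: str            # "activity", "appearance", "scene", ...
    answer: str            # the collected fill-in-the-blank description

@dataclass
class MultipleChoiceQuestion:
    image_id: int
    template: str
    choices: List[str]     # one correct description plus distractors
    correct_index: int

prompt = MadlibPrompt(
    image_id=42,
    template="The person is ___.",
    target="activity",
    answer="riding a bicycle along the beach",
)
question = MultipleChoiceQuestion(
    image_id=42,
    template="The person is ___.",
    choices=["riding a bicycle along the beach",
             "cooking dinner in a kitchen",
             "playing a violin on stage",
             "reading a newspaper on a train"],
    correct_index=0,
)
```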


Computer Vision and Pattern Recognition | 2017

Transformation-Grounded Image Generation Network for Novel 3D View Synthesis

Eunbyung Park; Jimei Yang; Ersin Yumer; Duygu Ceylan; Alexander C. Berg

We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible both in the input and novel views and then casts the remaining synthesis problem as image completion. Specifically, we both predict a flow to move the pixels from the input to the novel view along with a novel visibility map that helps deal with occlusion/disocclusion. Next, conditioned on those intermediate results, we hallucinate (infer) parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes, while successfully generating high frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results compared to existing methods.
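
A minimal sketch of the final compositing step, assuming a backward-warping flow and PyTorch's grid_sample; the network architecture and loss terms are not shown, and this interface is hypothetical rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def synthesize_novel_view(src_img, flow, visibility, hallucinated):
    """Combine a flow-warped input view with hallucinated content.

    src_img:      (N, 3, H, W) input image.
    flow:         (N, 2, H, W) per-pixel offsets (in pixels) giving, for each
                  novel-view pixel, where to sample in the input (backward warp).
    visibility:   (N, 1, H, W) in [0, 1]; 1 where the surface seen in the input
                  is also visible in the novel view.
    hallucinated: (N, 3, H, W) network output for the disoccluded regions.
    """
    n, _, h, w = src_img.shape
    # Base pixel coordinates plus the flow give the sampling locations.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    coords = base + flow
    # Normalize to [-1, 1] as required by grid_sample.
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)              # (N, H, W, 2)
    warped = F.grid_sample(src_img, grid, align_corners=True)
    # Use warped pixels where visible, hallucinated pixels elsewhere.
    return visibility * warped + (1.0 - visibility) * hallucinated

if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)
    flow = torch.zeros(1, 2, 64, 64)          # identity warp for demonstration
    vis = torch.ones(1, 1, 64, 64)
    out = synthesize_novel_view(img, flow, vis, torch.zeros_like(img))
    print(torch.allclose(out, img, atol=1e-5))  # True: identity flow reproduces the input
```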


Real-Time Technology and Applications Symposium | 2017

An Evaluation of the NVIDIA TX1 for Supporting Real-Time Computer-Vision Workloads

Nathan Otterness; Ming Yang; Sarah Rust; Eunbyung Park; James H. Anderson; F. Donelson Smith; Alexander C. Berg; Shige Wang

Autonomous vehicles are an exemplar for forward-looking safety-critical real-time systems where significant computing capacity must be provided within strict size, weight, and power (SWaP) limits. A promising way forward in meeting these needs is to leverage multicore platforms augmented with graphics processing units (GPUs) as accelerators. Such an approach is being strongly advocated by NVIDIA, whose Jetson TX1 board is currently a leading multicore+GPU solution marketed for autonomous systems. Unfortunately, no study has ever been published that expressly evaluates the effectiveness of the TX1, or any other comparable platform, in hosting safety-critical real-time workloads. In this paper, such a study is presented. Specifically, the TX1 is evaluated via benchmarking efforts, blackbox evaluations of GPU behavior, and case-study evaluations involving computer-vision workloads inspired by autonomous-driving use cases.
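
The measurement pattern behind such benchmarking can be illustrated with a short PyTorch timing harness; this is only a sketch of the general approach (warm-up, synchronization, averaging), not the paper's actual benchmark suite or workloads.

```python
import time
import torch

def benchmark(fn, warmup=10, iters=100):
    """Time a GPU operation with proper synchronization.

    Illustrative only: the paper's evaluation is far more extensive (blackbox
    GPU-scheduling experiments, full vision pipelines); this shows the basic
    measurement pattern on a CUDA device such as the TX1.
    """
    for _ in range(warmup):
        fn()                             # warm up caches, JIT, clock governors
    torch.cuda.synchronize()             # drain all queued kernels before timing
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()             # wait for the timed kernels to finish
    return (time.perf_counter() - start) / iters

if __name__ == "__main__" and torch.cuda.is_available():
    x = torch.randn(1, 3, 224, 224, device="cuda")
    conv = torch.nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3).cuda()
    print(f"mean latency: {benchmark(lambda: conv(x)) * 1e3:.3f} ms")
```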


International Conference on Robotics and Automation | 2017

A dataset for developing and benchmarking active vision

Phil Ammirato; Patrick Poirson; Eunbyung Park; Jana Kosecka; Alexander C. Berg

We present a new public dataset with a focus on simulating robotic vision tasks in everyday indoor environments using real imagery. The dataset includes 20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely captured in 9 unique scenes. We train a fast object category detector for instance detection on our data. Using the dataset we show that, although increasingly accurate and fast, the state of the art for object detection is still severely impacted by object scale, occlusion, and viewing direction, all of which matter for robotics applications. We next validate the dataset for simulating active vision, and use the dataset to develop and evaluate a deep-network-based system for next best move prediction for object classification using reinforcement learning. Our dataset is available for download at cs.unc.edu/~ammirato/active_vision_dataset_website/.
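
As a rough sketch of what a deep next-best-move policy with reinforcement-learning-style action selection might look like, the following snippet scores a hypothetical set of discrete camera moves; the action set, feature dimension, and network are illustrative assumptions, not the paper's system.

```python
import random
import torch
import torch.nn as nn

# Hypothetical discrete camera moves; the actual action set may differ.
MOVES = ["forward", "backward", "rotate_cw", "rotate_ccw", "left", "right"]

class NextBestMoveNet(nn.Module):
    """Tiny stand-in for a deep next-best-move policy: scores each candidate
    camera move given features of the current view."""
    def __init__(self, feat_dim=512, n_moves=len(MOVES)):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, n_moves),
        )

    def forward(self, view_features):
        return self.head(view_features)        # one score (Q-value) per move

def select_move(net, view_features, epsilon=0.1):
    """Epsilon-greedy action selection, as in standard Q-learning."""
    if random.random() < epsilon:
        return random.randrange(len(MOVES))
    with torch.no_grad():
        return int(net(view_features).argmax(dim=-1).item())

if __name__ == "__main__":
    net = NextBestMoveNet()
    features = torch.randn(1, 512)             # e.g. pooled CNN features of the view
    print(MOVES[select_move(net, features)])
```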


European Conference on Computer Vision | 2018

Meta-tracker: Fast and Robust Online Adaptation for Visual Object Trackers

Eunbyung Park; Alexander C. Berg

This paper improves state-of-the-art on-line trackers that use deep learning. Such trackers train a deep network to pick a specified object out from the background in an initial frame (initialization) and then keep training the model as tracking proceeds (updates). Our core contribution is a meta-learning-based method to adjust deep networks for tracking using off-line training. First, we learn initial parameters and per-parameter coefficients for fast online adaptation. Second, we use the training signal from future frames for robustness to target appearance variations and environment changes. The resulting networks train significantly faster during the initialization, while improving robustness and accuracy. We demonstrate this approach on top of the current highest-accuracy tracking approach, the tracking-by-detection-based MDNet, and a close competitor, the correlation-based CREST. Experimental results on both standard benchmarks, OTB and VOT2016, show improvements in speed, accuracy, and robustness on both trackers.
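
The core adaptation step, learning both initial parameters and per-parameter coefficients, can be sketched as a MAML-style update on a toy model; this is an illustrative assumption-laden example, not the released Meta-Tracker code, and the toy linear "tracker" stands in for the real tracking network.

```python
import torch

def adapt(params, per_param_lr, loss_fn, data):
    """One meta-learned adaptation step: update each parameter with its own
    learned coefficient (elementwise), instead of a single scalar learning rate.

    params:        dict of meta-learned initial parameters (theta_0).
    per_param_lr:  dict of meta-learned coefficients (alpha), same shapes as params.
    loss_fn:       maps (params, data) -> scalar tracking loss on the first frame.
    """
    loss = loss_fn(params, data)
    # create_graph=True so offline meta-training can backprop through this step.
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    return {name: p - per_param_lr[name] * g
            for (name, p), g in zip(params.items(), grads)}

# Toy usage: theta_0 and alpha would be learned offline by backpropagating
# through this adaptation step across many training videos.
theta0 = {"w": torch.randn(4, requires_grad=True),
          "b": torch.zeros(1, requires_grad=True)}
alpha = {"w": torch.full((4,), 0.01, requires_grad=True),
         "b": torch.full((1,), 0.01, requires_grad=True)}
x, y = torch.randn(8, 4), torch.randn(8)

def toy_loss(params, data):
    inputs, targets = data
    return ((inputs @ params["w"] + params["b"] - targets) ** 2).mean()

adapted = adapt(theta0, alpha, toy_loss, (x, y))
print({name: tuple(p.shape) for name, p in adapted.items()})
```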


Medical Image Computing and Computer-Assisted Intervention | 2016

Registration of pathological images

Xiao Yang; Xu Han; Eunbyung Park; Stephen R. Aylward; Roland Kwitt; Marc Niethammer

This paper proposes an approach to improve atlas-to-image registration accuracy with large pathologies. Instead of directly registering an atlas to a pathological image, the method learns a mapping from the pathological image to a quasi-normal image, for which more accurate registration is possible. Specifically, the method uses a deep variational convolutional encoder-decoder network to learn the mapping. Furthermore, the method estimates local mapping uncertainty through network inference statistics and uses those estimates to down-weight the image registration similarity measure in areas of high uncertainty. The performance of the method is quantified using synthetic brain tumor images and images from the brain tumor segmentation challenge (BRATS 2015).
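
The uncertainty-based down-weighting of the similarity measure can be illustrated with a small sketch; the specific weighting function and the toy images here are assumptions, not the paper's exact formulation.

```python
import torch

def uncertainty_weighted_ssd(warped_atlas, quasi_normal, uncertainty, eps=1e-6):
    """Image-similarity term down-weighted where the pathological-to-quasi-normal
    mapping is uncertain.

    warped_atlas:  (H, W) atlas image resampled by the current transformation.
    quasi_normal:  (H, W) quasi-normal image predicted by the encoder-decoder.
    uncertainty:   (H, W) nonnegative local uncertainty estimate (e.g. predictive
                   variance from network inference statistics).
    """
    weights = 1.0 / (1.0 + uncertainty)          # high uncertainty -> low weight
    ssd = (warped_atlas - quasi_normal) ** 2
    return (weights * ssd).sum() / (weights.sum() + eps)

# Toy example: the similarity term largely ignores the region marked as uncertain
# (e.g. around a tumor) while still driving registration elsewhere.
atlas = torch.rand(128, 128)
quasi = torch.rand(128, 128)
unc = torch.zeros(128, 128)
unc[40:80, 40:80] = 10.0                         # hypothetical high-uncertainty region
print(uncertainty_weighted_ssd(atlas, quasi, unc).item())
```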


arXiv: Computer Vision and Pattern Recognition | 2015

Visual Madlibs: Fill in the blank Image Generation and Question Answering.

Licheng Yu; Eunbyung Park; Alexander C. Berg; Tamara L. Berg


arXiv: Computer Vision and Pattern Recognition | 2015

Learning to decompose for object detection and instance segmentation

Eunbyung Park; Alexander C. Berg


Design, Automation, and Test in Europe | 2018

Three years of low-power image recognition challenge: Introduction to special session

Kent Gauen; Ryan Dailey; Yung-Hsiang Lu; Eunbyung Park; Wei Liu; Alexander C. Berg; Yiran Chen

Collaboration


Dive into Eunbyung Park's collaborations.

Top Co-Authors

Alexander C. Berg
University of North Carolina at Chapel Hill

Tamara L. Berg
University of North Carolina at Chapel Hill

Licheng Yu
University of North Carolina at Chapel Hill