Publication


Featured research published by Kwang Moo Yi.


european conference on computer vision | 2016

The Visual Object Tracking VOT2014 Challenge Results

Matej Kristan; Roman P. Pflugfelder; Aleš Leonardis; Jiri Matas; Luka Cehovin; Georg Nebehay; Tomas Vojir; Gustavo Fernández; Alan Lukezic; Aleksandar Dimitriev; Alfredo Petrosino; Amir Saffari; Bo Li; Bohyung Han; CherKeng Heng; Christophe Garcia; Dominik Pangersic; Gustav Häger; Fahad Shahbaz Khan; Franci Oven; Horst Bischof; Hyeonseob Nam; Jianke Zhu; Jijia Li; Jin Young Choi; Jin-Woo Choi; João F. Henriques; Joost van de Weijer; Jorge Batista; Karel Lebeda

Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow the developments in the field. One of the reasons is that there is a lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts in tracker benchmarking, the dataset is labeled per-frame by visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).
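The per-frame attribute labels make it possible to report tracker accuracy separately for each nuisance factor rather than only as a global average. A minimal sketch of that grouping step (this is not the official VOT toolkit; the overlap values and attribute masks below are made-up placeholders):

```python
# Average per-frame region overlap by visual attribute, as the per-frame
# labels described above allow. All data here is synthetic for illustration.
import numpy as np

def mean_overlap_per_attribute(overlaps, attributes):
    """overlaps: (n_frames,) IoU between tracker and ground-truth regions.
    attributes: dict mapping attribute name -> (n_frames,) boolean mask."""
    return {name: float(overlaps[mask].mean()) if mask.any() else float("nan")
            for name, mask in attributes.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    overlaps = rng.uniform(0.0, 1.0, size=100)            # placeholder per-frame IoU
    attributes = {
        "occlusion":           rng.random(100) < 0.2,
        "illumination_change": rng.random(100) < 0.3,
        "camera_motion":       rng.random(100) < 0.5,
    }
    print(mean_overlap_per_attribute(overlaps, attributes))
```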


european conference on computer vision | 2016

LIFT: Learned Invariant Feature Transform

Kwang Moo Yi; Eduard Trulls; Vincent Lepetit; Pascal Fua

We introduce a novel Deep Network architecture that implements the full feature point handling pipeline, that is, detection, orientation estimation, and feature description. While previous works have successfully tackled each of these problems individually, we show how to learn to do all three in a unified manner while preserving end-to-end differentiability. We then demonstrate that our Deep pipeline outperforms state-of-the-art methods on a number of benchmark datasets, without the need for retraining.
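A rough sketch of what chaining the three stages with end-to-end differentiability can look like, written as tiny PyTorch modules. This is not the authors' LIFT implementation: the module architectures, the fixed-size crop, and the soft-argmax "detection" are simplifying assumptions made only to keep gradients flowing through all stages.

```python
# Hypothetical three-stage pipeline: detector -> orientation -> descriptor,
# kept differentiable so one loss can train all modules jointly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDetector(nn.Module):
    """Predicts a keypoint score map from a grayscale image."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=5, padding=2)
    def forward(self, x):
        return self.conv(x)                      # (B, 1, H, W) score map

class TinyOrientation(nn.Module):
    """Regresses a single orientation angle for a 32x32 patch."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(32 * 32, 1)
    def forward(self, patch):
        return self.fc(patch.flatten(1))         # (B, 1) angle

class TinyDescriptor(nn.Module):
    """Maps a (rotated) 32x32 patch to a fixed-length descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Linear(32 * 32, dim)
    def forward(self, patch):
        return F.normalize(self.fc(patch.flatten(1)), dim=1)

def soft_argmax_crop(score, image, size=32):
    """Differentiable 'detection': soft-argmax keypoint location from the score
    map, plus a fixed center crop to keep this sketch short."""
    b, _, h, w = score.shape
    prob = F.softmax(score.flatten(1), dim=1).view(b, 1, h, w)
    ys = torch.linspace(0, 1, h, device=score.device).view(1, 1, h, 1)
    xs = torch.linspace(0, 1, w, device=score.device).view(1, 1, 1, w)
    cy, cx = (prob * ys).sum((2, 3)), (prob * xs).sum((2, 3))
    top, left = h // 2 - size // 2, w // 2 - size // 2
    return image[:, :, top:top + size, left:left + size], torch.cat([cx, cy], 1)

detector, orienter, describer = TinyDetector(), TinyOrientation(), TinyDescriptor()
image = torch.randn(2, 1, 64, 64)
patch, keypoint = soft_argmax_crop(detector(image), image)
angle = orienter(patch)                 # in LIFT, this rotates the patch
desc = describer(patch)                 # before description; omitted here
(desc.sum() + angle.sum() + keypoint.sum()).backward()   # gradients reach every module
```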


computer vision and pattern recognition | 2015

TILDE: A Temporally Invariant Learned DEtector

Yannick Verdie; Kwang Moo Yi; Pascal Fua; Vincent Lepetit

We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions, to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points, so that they can be found by simple non-maximum suppression. As there are no standard datasets for testing the influence of these kinds of changes, we created our own, which we make publicly available. We show that our method significantly outperforms state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.
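A toy illustration of the final step described above: turning a predicted score map into keypoints with simple non-maximum suppression. The score map here is random noise standing in for the regressor's output; this is not the TILDE code.

```python
# Keypoints = local maxima of the score map above a threshold.
import numpy as np
from scipy.ndimage import maximum_filter

def nms_keypoints(score_map, radius=5, threshold=0.5):
    """Keep pixels that are the maximum within a (2*radius+1) window and
    exceed a score threshold; return their (row, col) coordinates."""
    local_max = maximum_filter(score_map, size=2 * radius + 1)
    keep = (score_map == local_max) & (score_map > threshold)
    return np.argwhere(keep)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    score_map = rng.random((240, 320))    # stand-in for the regressor's output
    print(nms_keypoints(score_map, radius=10, threshold=0.99)[:5])
```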


international conference on computer vision | 2015

A Novel Representation of Parts for Accurate 3D Object Detection and Tracking in Monocular Images

Alberto Crivellaro; Mahdi Rad; Yannick Verdie; Kwang Moo Yi; Pascal Fua; Vincent Lepetit

We present a method that estimates the 3D pose of a known object in real time and under challenging conditions. Our method relies only on grayscale images, since depth cameras fail on metallic objects; it can handle poorly textured objects and cluttered, changing environments; and the pose it predicts degrades gracefully in the presence of large occlusions. As a result, by contrast with the state of the art, our method is suitable for practical Augmented Reality applications even in industrial environments. To be robust to occlusions, we first learn to detect some parts of the target object. Our key idea is to then predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are threefold: we can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can combine them easily to compute a better pose of the object; and the 3D pose we obtain is usually very accurate, even when only a few parts are visible.
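A hedged sketch of how such a representation can be used: every visible part contributes the 2D projections of a few 3D control points, and the pooled 2D-3D correspondences are fed to a single PnP solver. The part detector itself is omitted, and the intrinsics, control points, and "predicted" projections below are synthetic placeholders, not values from the paper.

```python
# Pool control-point correspondences from all visible parts into one PnP solve.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])      # assumed camera intrinsics

# 3D control points of two parts, in the object coordinate frame (made up).
parts_3d = {
    "part_a": np.array([[0, 0, 0], [5, 0, 0], [0, 5, 0], [0, 0, 5]], float),
    "part_b": np.array([[10, 0, 0], [10, 5, 0], [10, 0, 5], [15, 0, 0]], float),
}

def project(points_3d, rvec, tvec):
    pts, _ = cv2.projectPoints(points_3d, rvec, tvec, K, None)
    return pts.reshape(-1, 2)

# Simulate a ground-truth pose and the per-part 2D predictions (exact here;
# a real detector would return noisy projections for whichever parts it sees).
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.0, 0.0, 60.0])
visible = {name: project(pts, rvec_gt, tvec_gt) for name, pts in parts_3d.items()}

# Combine the correspondences of every visible part and solve a single PnP.
obj = np.vstack([parts_3d[name] for name in visible])
img = np.vstack([visible[name] for name in visible])
ok, rvec, tvec = cv2.solvePnP(obj, img, K, None)
print(ok, rvec.ravel(), tvec.ravel())
```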


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Robust 3D Object Tracking from Monocular Images Using Stable Parts

Alberto Crivellaro; Mahdi Rad; Yannick Verdie; Kwang Moo Yi; Pascal Fua; Vincent Lepetit

We present an algorithm for estimating the pose of a rigid object in real time under challenging conditions. Our method effectively handles poorly textured objects in cluttered, changing environments, even when their appearance is corrupted by large occlusions, and it relies on grayscale images to handle metallic environments on which depth cameras would fail. As a result, our method is suitable for practical Augmented Reality applications, including industrial environments. At the core of our approach is a novel representation for the 3D pose of object parts: we predict the 3D pose of each part in the form of the 2D projections of a few control points. The advantages of this representation are threefold: we can predict the 3D pose of the object even when only one part is visible; when several parts are visible, we can easily combine them to compute a better pose of the object; and the 3D pose we obtain is usually very accurate, even when only a few parts are visible. We show how to use this representation in a robust 3D tracking framework. In addition to extensive comparisons with the state of the art, we demonstrate our method on a practical Augmented Reality application for maintenance assistance in the ATLAS particle detector at CERN.


international symposium on mixed and augmented reality | 2017

Learning Lightprobes for Mixed Reality Illumination

David Mandl; Kwang Moo Yi; Peter Mohr; Peter M. Roth; Pascal Fua; Vincent Lepetit; Dieter Schmalstieg; Denis Kalkofen

This paper presents the first photometric registration pipeline for Mixed Reality based on high quality illumination estimation using convolutional neural networks (CNNs). For easy adaptation and deployment of the system, we train the CNNs using purely synthetic images and apply them to real image data. To keep the pipeline accurate and efficient, we propose to fuse the light estimation results from multiple CNN instances and show an approach for caching estimates over time. For optimal performance, we furthermore explore multiple strategies for the CNN training. Experimental results show that the proposed method yields highly accurate estimates for photo-realistic augmentations.
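A small illustration of the fusion-and-caching idea, assuming each estimator instance outputs a spherical-harmonics coefficient vector: fuse the instances with a weighted average and smooth the fused result over frames with an exponential moving average. The shapes, weights, and smoothing factor are arbitrary assumptions, not the paper's values.

```python
# Fuse several per-frame light estimates, then cache/smooth them over time.
import numpy as np

def fuse(estimates, weights=None):
    """Weighted average of per-instance light estimates (coefficient vectors)."""
    estimates = np.asarray(estimates, dtype=float)
    weights = np.ones(len(estimates)) if weights is None else np.asarray(weights, float)
    return (weights[:, None] * estimates).sum(0) / weights.sum()

class LightCache:
    """Exponential moving average of the fused estimate across frames."""
    def __init__(self, alpha=0.2):
        self.alpha, self.state = alpha, None
    def update(self, fused):
        self.state = fused if self.state is None else \
            (1 - self.alpha) * self.state + self.alpha * fused
        return self.state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cache = LightCache(alpha=0.3)
    for _ in range(5):                        # a few "frames"
        per_cnn = rng.normal(size=(3, 9))     # 3 instances, 9 SH coefficients each
        print(cache.update(fuse(per_cnn)))
```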


international symposium on mixed and augmented reality | 2014

[DEMO] Tracking Texture-less, Shiny Objects with Descriptor Fields

Alberto Crivellaro; Yannick Verdie; Kwang Moo Yi; Pascal Fua; Vincent Lepetit

Our demo presents the method we published at CVPR this year for tracking specular and poorly textured objects, and lets visitors experiment with it and with their own patterns. Our approach only requires a standard monocular camera (no need for a depth sensor), and can be easily integrated within existing systems to improve their robustness and accuracy. Code is publicly available.


computer vision and pattern recognition | 2016

Learning to Assign Orientations to Feature Points

Kwang Moo Yi; Yannick Verdie; Pascal Fua; Vincent Lepetit


computer vision and pattern recognition | 2018

Learning to Find Good Correspondences

Kwang Moo Yi; Eduard Trulls; Yuki Ono; Vincent Lepetit; Mathieu Salzmann; Pascal Fua


neural information processing systems | 2018

LF-Net: Learning Local Features from Images

Yuki Ono; Eduard Trulls; Pascal Fua; Kwang Moo Yi

Collaboration


Dive into Kwang Moo Yi's collaborations.

Top Co-Authors

Pascal Fua (École Polytechnique Fédérale de Lausanne)
Vincent Lepetit (Graz University of Technology)
Alberto Crivellaro (École Polytechnique Fédérale de Lausanne)
Eduard Trulls (Spanish National Research Council)
Mahdi Rad (Graz University of Technology)
Yannick Verdie (French Institute for Research in Computer Science and Automation)
Amir Saffari (Graz University of Technology)
David Mandl (Graz University of Technology)
Denis Kalkofen (Graz University of Technology)
Dieter Schmalstieg (Graz University of Technology)