
Publication


Featured research published by Sunyoung Cho.


Signal Processing: Image Communication | 2013

Multi-modal user interaction method based on gaze tracking and gesture recognition

Hee-Kyung Lee; Seong Yong Lim; Injae Lee; Jihun Cha; Dong-Chan Cho; Sunyoung Cho

This paper presents a gaze tracking technology that provides a convenient, human-centric interface for multimedia consumption without any wearable device. It enables a user to interact with various multimedia on a large display at a distance by tracking user movement and acquiring high-resolution eye images. This paper also presents a gesture recognition technology that helps users interact with scene descriptions by controlling and rendering scene objects. It is based on hidden Markov models (HMMs) and conditional random fields (CRFs) using a commercial depth sensor. Finally, this paper shows how these new sensors can be combined with MPEG standards in order to achieve interoperability among interactive applications, new user interaction devices, and users.
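The recognizer described above is based on HMMs; as an illustrative sketch only (the states, probabilities, and observation alphabet below are hypothetical, not the paper's trained model), the HMM forward algorithm scores an observation sequence, and a recognizer would compare such scores across per-gesture models:

```python
# Minimal HMM forward algorithm: likelihood of a discrete observation
# sequence under a toy 2-state model (illustrative only).

def hmm_forward(obs, start, trans, emit):
    """Return P(obs) by summing over all hidden state paths."""
    states = range(len(start))
    # Initialize with start probabilities times the first emission.
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states]
    return sum(alpha)

# Toy 2-state model over a binary observation alphabet.
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]
print(hmm_forward([0, 1, 0], start, trans, emit))
```

A gesture recognizer would train one such model per gesture class and pick the class whose model assigns the observed skeleton sequence the highest likelihood.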


Journal of Broadcast Engineering | 2012

Hand Gesture Recognition from Kinect Sensor Data

Sunyoung Cho; Hyeran Byun; Hee Kyung Lee; Jihun Cha

We present a method to recognize hand gestures using skeletal joint data obtained from Microsoft's Kinect sensor. We propose a combination feature of multi-angle histograms, robust to orientation variations, to represent the observation sequence of skeletons. The proposed feature efficiently represents the orientation variations of gestures that can occur depending on the person or environment by combining multiple angle histograms with various angular-quantization levels. Representing gestures as a combination of multi-angle histograms and classifying them with a random decision forest improves recognition performance. We conduct experiments on a hand gesture dataset obtained from a Kinect sensor and show that our method outperforms other methods in recognition performance.
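A rough sketch of the multi-angle histogram idea described above (the particular quantization levels and normalization here are assumptions, not taken from the paper): histograms of joint-orientation angles are computed at several quantization levels and concatenated into one feature vector.

```python
import math

def angle_histogram(angles, bins):
    """Histogram of angles (radians) quantized into `bins` equal sectors."""
    hist = [0.0] * bins
    for a in angles:
        idx = int((a % (2 * math.pi)) / (2 * math.pi) * bins) % bins
        hist[idx] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]      # normalize to a distribution

def multi_angle_feature(angles, levels=(4, 8, 16)):
    """Concatenate histograms at several angular-quantization levels."""
    feat = []
    for bins in levels:
        feat.extend(angle_histogram(angles, bins))
    return feat

# Toy angles between consecutive skeletal joints for one frame.
angles = [0.1, 0.2, 1.6, 3.1, 4.7]
feat = multi_angle_feature(angles)
print(len(feat))  # 4 + 8 + 16 = 28 dimensions
```

Coarse levels tolerate orientation variation while fine levels preserve discrimination, which is the intuition behind combining them.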


International Conference on Information and Communication Security | 2011

Human interaction recognition in YouTube videos

Sunyoung Cho; Seongho Lim; Hyeran Byun; Haejin Park; Sooyeong Kwak

This paper introduces the use of annotation tags for human activity recognition in video. Recent methods in human activity recognition use more complex and realistic datasets obtained from TV shows or movies, which makes it difficult to obtain high recognition accuracy. We improve recognition accuracy using the annotation tags of a video. Tags tend to be related to video content, and human activity videos frequently contain tags relevant to their activities. We first collect a human activity dataset containing tags from YouTube. Using this dataset, we automatically discover relevant tags and their correlation with human activities. We finally develop a framework using visual content and tags for activity recognition. We show that our approach can improve recognition accuracy compared with approaches that use only visual content.


IEEE Transactions on Circuits and Systems for Video Technology | 2016

A Space-Time Graph Optimization Approach Based on Maximum Cliques for Action Detection

Sunyoung Cho; Hyeran Byun

We present an efficient action detection method that takes a space-time (ST) graph optimization approach for real-world videos. Given an ST graph representing the entire action video, our method identifies a maximum-weight connected subgraph (MWCS) indicating an action region by applying an optimization approach based on clique information. We define an energy function based on maximum-weight cliques for subregions of the graph and formulate it as an optimization problem that can be represented as a linear system. Our energy function encodes the maximum-weight and connectivity properties needed to find the MWCS, and its optimization solution indicates, for each node, the probability of belonging to the maximum subgraph. Our graph optimization method efficiently solves the detection problem by applying the clique-based approach and a simple linear-system solver. Our experimental results on real-world datasets, such as the Hollywood and MSR action datasets, demonstrate that our detection method yields more accurate localization than conventional methods. We also show that our method outperforms state-of-the-art action detection methods.
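The paper's clique-based energy and its linear-system solution are not reproduced here; the following toy analogue (graph, weights, and parameters all invented) only illustrates the general idea of scoring each node by balancing its own weight against agreement with its graph neighbors, with the fixed point found by simple Jacobi-style iteration:

```python
def smooth_scores(adj, weights, lam=1.0, iters=100):
    """Toy analogue of a linear-system score: each node's score is a
    compromise between its own weight and the average score of its
    graph neighbors (Jacobi fixed-point iteration)."""
    x = list(weights)
    for _ in range(iters):
        new = []
        for i, nbrs in enumerate(adj):
            avg = sum(x[j] for j in nbrs) / len(nbrs) if nbrs else 0.0
            new.append((weights[i] + lam * avg) / (1.0 + lam))
        x = new
    return x

# A 5-node space-time graph: nodes 0-2 form a high-weight action region.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4], [3]]
weights = [0.9, 0.8, 0.9, 0.1, 0.05]
scores = smooth_scores(adj, weights)
region = [i for i, s in enumerate(scores) if s > 0.5]
print(region)  # [0, 1, 2]
```

Thresholding the smoothed scores recovers the connected high-weight region, which loosely mirrors reading off per-node membership probabilities from the linear-system solution.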


Pattern Recognition Letters | 2013

Recognizing human-human interaction activities using visual and textual information

Sunyoung Cho; Soo Yeong Kwak; Hyeran Byun

We exploit textual information for recognizing human-human interaction activities in YouTube videos. YouTube videos are generally accompanied by various types of textual information, such as titles, descriptions, and tags. In particular, since some tags describe the visual content of a video, making good use of tags can aid activity recognition. The proposed method uses two-fold information for activity recognition: (i) visual information: correlations among activities, human poses, configurations of human body parts, and image features extracted from visual content; and (ii) textual information: correlations with activities extracted from tags. For tag analysis, we discover a set of relevant tags and extract the meaningful words. Correlations between words and activities are learned from expanded tags obtained from the tags of related videos. We develop a model that jointly captures this two-fold information for activity recognition. We cast the model as a structured learning task with latent variables and estimate its parameters using a non-convex minimization procedure. The proposed approach is evaluated on a dataset of highly challenging real-world videos and their assigned tags collected from YouTube. Experimental results demonstrate that by exploiting visual and textual information in a structured framework, the proposed method can significantly improve activity recognition results.


Optical Engineering | 2012

Interactive optimization of photo composition with Gaussian mixture model on mobile platform

Hachon Sung; Guntae Bae; Sunyoung Cho; Hyeran Byun

A good photo is determined by various visual elements of photography, and many of these elements have been implemented in mobile devices as functionalities such as zooming, auto-focusing, and auto-white-balancing. Although composition is an important element of a good photo and an interesting research topic, most composition-related functionalities have not been added to mobile devices. We propose a guide system for capturing well-composed photos on mobile devices that considers composition elements. A photo composition mixture model (PCMM), a Gaussian mixture model (GMM) over composition elements, is derived, and the best composition for the current input is gradually determined by iterating the PCMM optimization. Experimental evaluations are conducted to show the usefulness of the proposed PCMM and its optimization performance. To show the efficiency of recomposition performance and speed, we compare our method with retargeting-based methods. By implementing our method on mobile devices, we show that our system offers valid user guidance for capturing a well-composed photo in real time.
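A minimal sketch of scoring a composition under a Gaussian mixture, in the spirit of the PCMM above (the rule-of-thirds components, weights, and bandwidth are assumptions for illustration, not the paper's learned model):

```python
import math

def gmm_score(point, components):
    """Likelihood of a 2-D point under an isotropic Gaussian mixture.
    `components` = [(weight, (mx, my), sigma), ...] is a toy stand-in
    for a learned photo-composition mixture model."""
    x, y = point
    score = 0.0
    for w, (mx, my), s in components:
        d2 = (x - mx) ** 2 + (y - my) ** 2
        score += w * math.exp(-d2 / (2 * s * s)) / (2 * math.pi * s * s)
    return score

# Components centered on the four rule-of-thirds power points
# (normalized image coordinates in [0, 1] x [0, 1]).
thirds = [(1/3, 1/3), (2/3, 1/3), (1/3, 2/3), (2/3, 2/3)]
gmm = [(0.25, m, 0.1) for m in thirds]

# A subject near a power point scores higher than a centered one.
print(gmm_score((0.34, 0.32), gmm) > gmm_score((0.5, 0.5), gmm))  # True
```

A guide system could iterate over candidate framings (shifts and crops), keep the one with the highest mixture score, and steer the user toward it, which loosely mirrors iterating the PCMM optimization.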


International Conference on Ubiquitous Information Management and Communication | 2011

Tag suggestion using visual content and social tag

Won J. Jeon; Sunyoung Cho; Jae-Seong Cha; Hyeran Byun

With the popularity of social media sharing sites such as Flickr and YouTube, tagging has become an important task for describing the content of multimedia objects. Recently, automatic tagging and tag recommendation have been studied to automatically provide relevant tags for media by analyzing user tags. However, social tags annotated by common users are known to be ambiguous and subjective because individual users supply biased tags. In this paper, we address the task of combining visual content and social tags for tag suggestion. Our method finds visual neighbors using subject-based visual content analysis and analyzes the social tags of those neighbors with a weighted neighbor voting technique. This mitigates the problem that a general voting technique can suggest irrelevant tags when visual search performs poorly. We evaluate our method on a social-tagged image database from Flickr by comparing it with visual-based and tag-based methods. Our experimental results show that our method improves tag suggestion and image tagging.
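Weighted neighbor voting can be sketched as follows (the neighbor list, similarity values, and tags are hypothetical; the paper's subject-based visual search that produces the neighbors is not reproduced):

```python
from collections import defaultdict

def weighted_tag_vote(neighbors, k=3):
    """Suggest tags by letting visual neighbors vote, weighting each
    neighbor's tags by its visual similarity to the query image.
    `neighbors` = [(similarity, [tags...]), ...]."""
    votes = defaultdict(float)
    for sim, tags in neighbors:
        for t in tags:
            votes[t] += sim
    # Highest accumulated similarity first; return the top-k tags.
    return [t for t, _ in sorted(votes.items(),
                                 key=lambda kv: -kv[1])][:k]

neighbors = [
    (0.9, ["beach", "sunset", "sea"]),
    (0.8, ["beach", "sand"]),
    (0.2, ["car", "street"]),   # weak visual neighbor: little influence
]
print(weighted_tag_vote(neighbors))  # ['beach', 'sunset', 'sea']
```

The similarity weights are what dampen the effect of poor visual matches: an unweighted vote would give the irrelevant neighbor's tags the same say as the strong ones.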


International Conference on Pattern Recognition | 2010

Adaptive Color Curve Models for Image Matting

Sunyoung Cho; Hyeran Byun

Image matting is the process of extracting a foreground element from a single image with limited user input. To solve this inherently ill-posed problem, various methods exist that assume a specific color model. One representative method assumes that the colors of the foreground and background elements satisfy a linear color model; other recent methods consider line-point and point-point color models. In this paper we present a new adaptive color curve model for image matting. We assume that the colors of a local region form a curve. Based on the pixels in the local region, we adaptively construct a curve model using a quadratic Bézier curve. This curve model enables us to derive a matting equation that estimates the alphas of pixels on the curve using the quadratic formula. We show that our model estimates alpha mattes comparably to or more accurately than recent existing methods.
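To illustrate the curve-model idea in one dimension (a simplification; the paper works with full color values and its fitting procedure is not reproduced here): if a pixel's color is assumed to lie on a quadratic Bézier curve between background and foreground colors, the curve parameter recovered by the quadratic formula plays the role of the alpha value.

```python
import math

def bezier(t, p0, p1, p2):
    """Quadratic Bezier curve through control values p0, p1, p2."""
    return (1 - t) ** 2 * p0 + 2 * t * (1 - t) * p1 + t ** 2 * p2

def alpha_from_color(c, p0, p1, p2):
    """Solve B(t) = c for t via the quadratic formula, keeping the
    root inside [0, 1]; t plays the role of the matte alpha here."""
    a = p0 - 2 * p1 + p2
    b = 2 * (p1 - p0)
    k = p0 - c
    if abs(a) < 1e-12:                 # curve degenerates to a line
        return -k / b
    disc = math.sqrt(b * b - 4 * a * k)
    for t in ((-b + disc) / (2 * a), (-b - disc) / (2 * a)):
        if -1e-9 <= t <= 1 + 1e-9:
            return min(max(t, 0.0), 1.0)
    return None

# Background intensity 0.1, foreground 0.9, curve bent through 0.3.
p0, p1, p2 = 0.1, 0.3, 0.9
c = bezier(0.75, p0, p1, p2)           # a color known to lie on the curve
print(alpha_from_color(c, p0, p1, p2))  # 0.75
```

The degenerate branch recovers the classical linear color model as a special case, which is one way to see the curve model as a generalization of it.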


International Conference on Pattern Recognition | 2010

Coarse-to-Fine Particle Filter by Implicit Motion Estimation for 3D Head Tracking on Mobile Devices

Hachoen Sung; Kwontaeg Choi; Sunyoung Cho; Hyeran Byun

With the wide spread of mobile devices over the years, a low-cost implementation of an efficient head tracking system has become useful for a wide range of applications. In this paper, we attempt to solve the real-time 3D head tracking problem on mobile devices by enhancing the fitness of the dynamics. In our method, particles are generated by implicit motion estimation between two particles rather than by explicit motion estimation using corresponding point matching between two consecutive frames. This generation is applied iteratively using a coarse-to-fine strategy in order to handle large motion with a small number of particles, which reduces the computational cost while preserving performance. We evaluate the efficiency and effectiveness of the proposed algorithm through empirical experiments. Finally, we demonstrate our method on a recent mobile phone.
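The coarse-to-fine strategy can be caricatured in one dimension (everything below is a toy stand-in: the score function, spread schedule, and particle counts are invented, and the paper's implicit motion estimation between particles is not modeled): sample particles around the current best estimate, keep the best, and halve the search spread each level.

```python
import random

def coarse_to_fine_search(score, init, spread=1.0, n=50, levels=4, shrink=0.5):
    """Toy coarse-to-fine particle search: at each level, sample
    particles around the current best estimate, then shrink the
    sampling spread so later levels refine earlier ones."""
    random.seed(0)                     # deterministic for illustration
    best = init
    for _ in range(levels):
        particles = [best + random.gauss(0.0, spread) for _ in range(n)]
        best = max(particles + [best], key=score)  # never gets worse
        spread *= shrink
    return best

# Track a 1-D "head pose" whose true value is 2.0.
truth = 2.0
estimate = coarse_to_fine_search(lambda x: -abs(x - truth), init=0.0)
print(estimate)
```

The early wide levels cover large motion cheaply, while the narrow later levels refine the estimate, which is how a small particle budget can still handle large displacements.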


International Conference on 3D Vision | 2014

Efficient Colorization of Large-Scale Point Cloud Using Multi-pass Z-Ordering

Sunyoung Cho; Jizhou Yan; Yasuyuki Matsushita; Hyeran Byun

We present an efficient colorization method for a large-scale point cloud using multi-view images. To address the practical issues of noisy camera parameters and color inconsistencies across multi-view images, our method takes an optimization approach to achieve visually pleasing point cloud colorization. We introduce a multi-pass Z-ordering technique that efficiently defines a graph structure over a large-scale, unordered set of 3D points, and we use this graph structure to optimize the point colors to be assigned. Our technique is useful for defining minimal but sufficient connectivity among 3D points so that the optimization can exploit sparsity to solve the problem efficiently. We demonstrate the effectiveness of our method using synthetic datasets and large-scale real-world data in comparison with other graph construction techniques.
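Z-ordering itself is a standard technique; a common way to compute it is by interleaving the bits of quantized coordinates into a Morton code (the quantization depth and sample points below are illustrative, and the paper's multi-pass variant is not reproduced):

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of quantized integer (x, y, z) into a
    Morton (Z-order) code, giving a spatially coherent 1-D key."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def z_order(points, bits=10):
    """Sort quantized integer points along the Z-order curve; points
    that are adjacent in the ordering tend to be nearby in 3-D, which
    is what makes the curve useful for defining sparse connectivity
    on large unordered point clouds."""
    return sorted(points, key=lambda p: morton3d(*p, bits=bits))

pts = [(5, 1, 0), (0, 0, 0), (1, 1, 1), (4, 0, 0)]
print(z_order(pts))  # [(0, 0, 0), (1, 1, 1), (4, 0, 0), (5, 1, 0)]
```

Connecting each point to a few of its successors in this ordering yields a sparse graph without an expensive nearest-neighbor search, which is the efficiency argument the abstract makes.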

Collaboration


Top co-authors of Sunyoung Cho:

Jihun Cha (Electronics and Telecommunications Research Institute)

Hee-Kyung Lee (Electronics and Telecommunications Research Institute)

Injae Lee (Electronics and Telecommunications Research Institute)