Publication


Featured research published by Young Chol Song.


Conference on Computer Supported Cooperative Work (CSCW) | 2013

Real-time crowd labeling for deployable activity recognition

Walter S. Lasecki; Young Chol Song; Henry A. Kautz; Jeffrey P. Bigham

Systems that automatically recognize human activities offer the potential of timely, task-relevant information and support. For example, prompting systems can help keep people with cognitive disabilities on track and surveillance systems can warn of activities of concern. Current automatic systems are difficult to deploy because they cannot identify novel activities, and, instead, must be trained in advance to recognize important activities. Identifying and labeling these events is time consuming and thus not suitable for real-time support of already-deployed activity recognition systems. In this paper, we introduce Legion:AR, a system that provides robust, deployable activity recognition by supplementing existing recognition systems with on-demand, real-time activity identification using input from the crowd. Legion:AR uses activity labels collected from crowd workers to train an automatic activity recognition system online to automatically recognize future occurrences. To enable the crowd to keep up with real-time activities, Legion:AR intelligently merges input from multiple workers into a single ordered label set. We validate Legion:AR across multiple domains and crowds and discuss features that allow appropriate privacy and accuracy tradeoffs.
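One way to picture the merging step described above is a time-windowed majority vote over the workers' label streams. The sketch below is a minimal illustration under that assumption, not Legion:AR's actual merging algorithm; the function name and window parameter are hypothetical:

```python
from collections import Counter

def merge_worker_labels(worker_streams, window=5.0):
    """Merge per-worker (timestamp, label) streams into one ordered label list
    by grouping labels that fall within `window` seconds of the group start
    and taking a majority vote within each group.
    A toy illustration, not Legion:AR's actual merging algorithm."""
    # Flatten all streams and sort events by time.
    events = sorted((t, lab) for stream in worker_streams for t, lab in stream)
    merged, group = [], []
    for t, lab in events:
        if group and t - group[0][0] > window:
            # Close the current group with its majority label.
            merged.append(Counter(l for _, l in group).most_common(1)[0][0])
            group = []
        group.append((t, lab))
    if group:
        merged.append(Counter(l for _, l in group).most_common(1)[0][0])
    return merged
```

Three workers labeling roughly the same two activities at slightly different times would collapse to a single ordered sequence of two labels.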


Ambient Intelligence | 2012

Interactive activity recognition and prompting to assist people with cognitive disabilities

Yi Chu; Young Chol Song; Richard Levinson; Henry A. Kautz

This paper presents a model of interactive activity recognition and prompting for use in an assistive system for persons with cognitive disabilities. The system can determine the user's state by interpreting sensor data and/or by explicitly querying the user, and can prompt the user to begin, resume, or end tasks. The objective of the system is to help the user maintain a daily schedule of activities while minimizing interruptions from questions or prompts. The model is built upon an option-based hierarchical POMDP. Options can be programmed and customized to specify complex routines for prompting or questioning. The paper proposes a heuristic approach to solving the POMDP based on a dual control algorithm with selective inquiry, which can explicitly ask the user for help when the sensor data is ambiguous. The dual control algorithm operates within a unified control model featuring adaptive options and robust state estimation. Simulation results show that the unified dual control model achieves the best performance and efficiency compared with various alternatives. To further demonstrate the system's performance, lab experiments were carried out with volunteer actors performing a series of carefully designed scenarios involving different kinds of interruptions. The results show that the system is able to successfully guide the agent through the sample schedule by delivering correct prompts while efficiently handling ambiguous situations.
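The state estimation such a system depends on can be illustrated with the standard discrete POMDP belief update, b'(s') ∝ O(o|s') Σ_s T(s'|s,a) b(s). This is a generic sketch of that update, not the paper's option-based hierarchical model; the state and observation names are hypothetical:

```python
def belief_update(belief, action, obs, T, O):
    """Standard discrete POMDP belief update:
    b'(s') is proportional to O[s'][obs] * sum_s T[(s, action)][s'] * belief[s].
    `T` maps (state, action) to a dict of next-state probabilities;
    `O` maps state to a dict of observation probabilities."""
    states = list(belief)
    new_b = {}
    for s2 in states:
        new_b[s2] = O[s2][obs] * sum(T[(s, action)][s2] * belief[s]
                                     for s in states)
    total = sum(new_b.values())  # renormalize
    return {s: p / total for s, p in new_b.items()}
```

With a two-state model (user on task vs. off task), observing motion consistent with the task shifts the belief toward the on-task state; an ambiguous belief is exactly the trigger for the selective inquiry the paper describes.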


North American Chapter of the Association for Computational Linguistics (NAACL) | 2015

Discriminative Unsupervised Alignment of Natural Language Instructions with Corresponding Video Segments

Iftekhar Naim; Young Chol Song; Qiguang Liu; Liang Huang; Henry A. Kautz; Jiebo Luo; Daniel Gildea

We address the problem of automatically aligning natural language sentences with corresponding video segments without any direct supervision. Most existing algorithms for integrating language with videos rely on hand-aligned parallel data, where each natural language sentence is manually aligned with its corresponding image or video segment. Recently, fully unsupervised alignment of text with video has been shown to be feasible using hierarchical generative models. In contrast to the previous generative models, we propose three latent-variable discriminative models for the unsupervised alignment task. The proposed discriminative models are capable of incorporating domain knowledge by adding diverse and overlapping features. The results show that discriminative models outperform the generative models in terms of alignment accuracy.


International Conference on Multimodal Interfaces (ICMI) | 2013

A Markov logic framework for recognizing complex events from multimodal data

Young Chol Song; Henry A. Kautz; James F. Allen; Mary D. Swift; Yuncheng Li; Jiebo Luo; Ce Zhang

We present a general framework for complex event recognition that is well-suited for integrating information that varies widely in detail and granularity. Consider the scenario of an agent in an instrumented space performing a complex task while describing what he is doing in a natural manner. The system takes in a variety of information, including objects and gestures recognized from RGB-D video and descriptions of events extracted from recognized and parsed speech. The system outputs a complete reconstruction of the agent's plan, explaining actions in terms of more complex activities and filling in unobserved but necessary events. We show how to use Markov Logic (a probabilistic extension of first-order logic) to create a model in which observations can be partial, noisy, and refer to future or temporally ambiguous events; complex events are composed from simpler events in a manner that exposes their structure for inference and learning; and uncertainty is handled in a sound probabilistic manner. We demonstrate the effectiveness of the approach for tracking kitchen activities in the presence of noisy and incomplete observations.
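The core idea of Markov Logic is to score a possible world by exp(Σ_i w_i n_i), where n_i counts the satisfied groundings of weighted first-order rule i. The toy below illustrates only that scoring, under assumed rule and predicate names; it is not the paper's event model or a real Markov Logic engine:

```python
import math
from itertools import product

def ml_score(world, rules, domain):
    """Unnormalized score of a possible world in a toy Markov Logic network.
    `world` is a set of true ground atoms such as ('Chop', 't1');
    each rule is (weight, arity, formula), where formula(world, *consts)
    reports whether that grounding of the rule is satisfied."""
    total = 0.0
    for weight, arity, formula in rules:
        for consts in product(domain, repeat=arity):
            if formula(world, *consts):
                total += weight
    return math.exp(total)
```

A rule like "chopping implies meal preparation" then gives higher score to worlds that satisfy more of its groundings, which is how soft constraints between simple and complex events are expressed.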


Conference on Computers and Accessibility (ASSETS) | 2010

Joystick text entry with word prediction for people with motor impairments

Young Chol Song

Joysticks are used by people with motor impairments as assistive devices that replace the keyboard and mouse. Most existing entry methods that use the joystick take the form of on-screen or selection keyboards, which require multiple joystick movements to enter a single character, making text input slow. We reduce the number of required joystick movements by adding word completion and next-word prediction. Evaluations show that text entry with word prediction is 30% faster than entry on a regular selection keyboard and reduces the number of movements by 50%, even for first-time users with less than 15 minutes of practice.
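The movement savings come from letting a partial entry select a whole word. A minimal sketch of frequency-ranked prefix completion, assuming a vocabulary sorted by descending frequency (the paper's system also does next-word prediction, which this toy omits):

```python
def complete(prefix, vocab_by_freq):
    """Return the most frequent vocabulary word starting with `prefix`.
    `vocab_by_freq` is assumed sorted by descending word frequency.
    Falls back to the prefix itself when nothing matches."""
    for word in vocab_by_freq:
        if word.startswith(prefix):
            return word
    return prefix
```

Accepting "hello" after entering only "hel" replaces the joystick movements for the remaining characters with a single selection, which is where the reported reduction in movements comes from.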


International Conference on Pattern Recognition (ICPR) | 2016

Aligning movies with scripts by exploiting temporal ordering constraints

Iftekhar Naim; Abdullah Al Mamun; Young Chol Song; Jiebo Luo; Henry A. Kautz; Daniel Gildea

Scripts provide rich textual annotation of movies, including dialogs, character names, and other situational descriptions. Exploiting such rich annotations requires aligning the sentences in the scripts with the corresponding video frames. Previous work on aligning movies with scripts predominantly relies on time-aligned closed-captions or subtitles, which are not always available. In this paper, we focus on automatically aligning faces in movies with their corresponding character names in scripts without requiring closed-captions/subtitles. We utilize the intuition that faces in a movie generally appear in the same sequential order as their names are mentioned in the script. We first apply standard techniques for face detection and tracking, and cluster similar face tracks together. Next, we apply a generative Hidden Markov Model (HMM) and a discriminative Latent Conditional Random Field (LCRF) to align the clusters of face tracks with the corresponding character names. Our alignment models (especially LCRF) significantly outperform the previous state-of-the-art on two different movie datasets and for a wide range of face clustering algorithms.
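The ordering intuition above can be captured by an order-preserving dynamic program over the two sequences, similar in spirit to longest-common-subsequence alignment. This is a simplified stand-in for the paper's HMM and LCRF models, with a hypothetical similarity function:

```python
def monotone_align(tracks, names, sim):
    """Best order-preserving alignment score between a sequence of face-track
    clusters and the sequence of character-name mentions in the script.
    `sim(t, n)` scores matching track t to name n; skips cost nothing.
    LCS-style DP, a simplified stand-in for the paper's HMM/LCRF models."""
    m, n = len(tracks), len(names)
    # dp[i][j] = best score aligning tracks[:i] with names[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = max(dp[i - 1][j],        # skip a face track
                           dp[i][j - 1],        # skip a name mention
                           dp[i - 1][j - 1] + sim(tracks[i - 1], names[j - 1]))
    return dp[m][n]
```

Because matched pairs must appear in the same relative order in both sequences, a pair of mentions that occurs out of order can contribute at most one match, which is exactly the constraint the paper exploits.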


National Conference on Artificial Intelligence (AAAI) | 2014

Unsupervised alignment of natural language instructions with video segments

Iftekhar Naim; Young Chol Song; Qiguang Liu; Henry A. Kautz; Jiebo Luo; Daniel Gildea


International Joint Conference on Artificial Intelligence (IJCAI) | 2016

Unsupervised alignment of actions in video with text descriptions

Young Chol Song; Iftekhar Naim; Abdullah Al Mamun; Kaustubh Kulkarni; Parag Singla; Jiebo Luo; Daniel Gildea; Henry A. Kautz


National Conference on Artificial Intelligence (AAAI) | 2011

When did you start doing that thing that you do? Interactive activity recognition and prompting

Yi Chu; Young Chol Song; Henry A. Kautz; Richard Levinson


National Conference on Artificial Intelligence (AAAI) | 2013

A general framework for recognizing complex events in Markov logic

Young Chol Song; Henry A. Kautz; Yuncheng Li; Jiebo Luo

Collaboration


An overview of Young Chol Song's collaborations.

Top Co-Authors

Jiebo Luo | University of Rochester
Qiguang Liu | University of Rochester
Yi Chu | University of Rochester
Yuncheng Li | University of Rochester
Ce Zhang | University of Wisconsin-Madison