Publication


Featured research published by Myung Geol Choi.


Symposium on Computer Animation | 2007

Group behavior from video: a data-driven approach to crowd simulation

Kang Hoon Lee; Myung Geol Choi; Qyoun Hong; Jehee Lee

Crowd simulation techniques have frequently been used to animate a large group of virtual humans in computer graphics applications. We present a data-driven method of simulating a crowd of virtual humans that exhibit behaviors imitating real human crowds. To do so, we record the motion of a human crowd from an aerial view using a camcorder, extract the two-dimensional moving trajectories of each individual in the crowd, and then learn an agent model from the observed trajectories. The agent model decides each agent's actions based on features of the environment and the motion of nearby agents in the crowd. Once the agent model is learned, we can simulate a virtual crowd that behaves similarly to the real crowd in the video. The versatility and flexibility of our approach are demonstrated through examples in which various characteristics of group behaviors are captured and reproduced in simulated crowds.
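As a rough illustration of the data-driven idea (not the paper's actual learner, which is more sophisticated), an agent model can be sketched as a nearest-neighbor lookup over recorded state/action pairs: the simulated agent observes a local-state feature, finds the most similar recorded state, and reuses the action taken there. All states, actions, and numbers below are hypothetical.

```python
import numpy as np

class TrajectoryAgentModel:
    """Toy 1-nearest-neighbor agent model learned from observed trajectories."""

    def __init__(self, states, actions):
        self.states = np.asarray(states, dtype=float)    # observed local-state features
        self.actions = np.asarray(actions, dtype=float)  # 2D velocities taken in each state

    def act(self, state):
        # Reuse the action of the most similar recorded state.
        d = np.linalg.norm(self.states - np.asarray(state, dtype=float), axis=1)
        return self.actions[np.argmin(d)]

# Two recorded examples: with a neighbor directly ahead, the observed agent
# veered; with a neighbor to the side, it kept going straight.
model = TrajectoryAgentModel(
    states=[[1.0, 0.0], [0.0, 1.0]],     # offset to the nearest neighbor agent
    actions=[[0.5, -0.2], [0.6, 0.0]],   # velocity chosen in that situation
)
print(model.act([0.9, 0.1]))   # closest to the first recorded example
```

In a full simulation the state would include many more features (environment geometry, several neighbors), and the lookup would be replaced by the learned model; the sketch only shows the query-and-reuse structure.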


International Conference on Computer Graphics and Interactive Techniques | 2010

Morphable crowds

Eunjung Ju; Myung Geol Choi; Minji Park; Jehee Lee; Kang Hoon Lee; Shigeo Takahashi

Crowd simulation has been an important research field due to its diverse range of applications, which include film production, military simulation, and urban planning. A challenging problem is to provide simple yet effective control over captured and simulated crowds to synthesize intended group motions. We present a new method that blends existing crowd data to generate a new crowd animation. The new animation can include an arbitrary number of agents, extends for an arbitrary duration, and yields a natural-looking mixture of the input crowd data. The main benefit of this approach is the ability to create new spatio-temporal crowd behavior in an intuitive and predictable manner. This is accomplished by introducing a morphable crowd model that encodes the formations and individual trajectories in the crowd data. The original spatio-temporal behavior can then be reconstructed and interpolated at an arbitrary scale using our morphable model.


Symposium on Computer Animation | 2013

Relationship descriptors for interactive motion adaptation

Rami Ali Al-Asqhar; Taku Komura; Myung Geol Choi

This paper presents an interactive motion adaptation scheme for close interactions between skeletal characters and mesh structures, such as moving through restricted environments, and manipulating objects. This is achieved through a new spatial relationship-based representation, which describes the kinematics of the body parts by the weighted sum of translation vectors relative to points selectively sampled over the surfaces of the mesh structures. In contrast to previous discrete representations that either only handle static spatial relationships, or require offline, costly optimization processes, our continuous framework smoothly adapts the motion of a character to large updates of the mesh structures and character morphologies on-the-fly, while preserving the original context of the scene. The experimental results show that our method can be used for a wide range of applications, including motion retargeting, interactive character control and deformation transfer for scenes that involve close interactions. Our framework is useful for artists who need to design animated scenes interactively, and modern computer games that allow users to design their own characters, objects and environments.
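The core representation described above can be sketched in a few lines: a joint position is encoded as translation vectors from sample points on the mesh surface together with normalized weights, and re-applying the weighted sum after the samples move adapts the joint to the deformed mesh. The inverse-distance weighting below is an assumption for illustration; the paper's exact weighting and the full adaptation framework are not reproduced.

```python
import numpy as np

def encode(joint, samples):
    """Encode a joint relative to surface sample points (illustrative only)."""
    vectors = joint - samples                          # translation vectors
    w = 1.0 / (np.linalg.norm(vectors, axis=1) + 1e-8) # inverse-distance weights
    return w / w.sum(), vectors                        # weights normalized to sum to 1

def decode(samples, weights, vectors):
    # Reconstruct: joint ~ sum_i w_i * (sample_i + vector_i)
    return weights @ (samples + vectors)

samples = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
joint = np.array([0.3, 0.3, 0.5])
weights, vectors = encode(joint, samples)

# If the surface samples move, the decoded joint follows them, preserving the
# original spatial relationship.
moved = samples + np.array([2.0, 0.0, 0.0])
print(decode(moved, weights, vectors))   # ~ [2.3, 0.3, 0.5]
```

Because the weights sum to one and each stored vector points from its sample to the joint, reconstruction is exact on the undeformed mesh and degrades gracefully as the samples deform, which is what allows on-the-fly adaptation.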


Computer Graphics Forum | 2012

Retrieval and Visualization of Human Motion Data via Stick Figures

Myung Geol Choi; Kyungyong Yang; Takeo Igarashi; Jun Mitani; Jehee Lee

We propose 2D stick figures as a unified medium for visualizing and searching for human motion data. Stick figures can express a wide range of human motion, and they are easy to draw without professional training. In our interface, the user can browse the overall motion data by viewing stick figure images generated from the database, and retrieve motions directly by using sketched stick figures as an input query. We started with a preliminary survey to observe how people draw stick figures. Based on the rules observed in this user study, we developed an algorithm that converts motion data to a sequence of stick figures. A feature-based comparison method between stick figures provides an interactive and progressive search, assisting the user's sketching by showing the current retrieval result at each stroke. We demonstrate the utility of the system with a user study in which the participants retrieved example motion segments from a database of 102 motion files using our interface.
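As a minimal sketch of what a feature-based comparison between stick figures might look like (the paper's actual features and matching algorithm are not reproduced here), a 2D figure can be reduced to its limb direction angles and two figures compared by summed angular difference, which is invariant to where the figure is drawn on the page. The bone indices and coordinates below are made up.

```python
import numpy as np

def limb_angles(joints, bones):
    """joints: (N, 2) array of 2D joint positions; bones: (parent, child) index pairs."""
    feats = []
    for a, b in bones:
        dx, dy = joints[b] - joints[a]
        feats.append(np.arctan2(dy, dx))   # direction angle of each limb
    return np.array(feats)

def figure_distance(f1, f2):
    d = np.abs(f1 - f2)
    d = np.minimum(d, 2.0 * np.pi - d)     # wrap angular differences
    return float(d.sum())

bones = [(0, 1), (1, 2)]                   # a tiny two-bone "figure"
fig = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
shifted = fig + np.array([5.0, 3.0])       # same pose, drawn elsewhere

print(figure_distance(limb_angles(fig, bones), limb_angles(shifted, bones)))  # 0.0
```

Such a per-figure distance could then be evaluated against every candidate frame after each stroke, which is the kind of computation a progressive, stroke-by-stroke search requires.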


Computer Graphics Forum | 2011

Deformable Motion: Squeezing into Cluttered Environments

Myung Geol Choi; Manmyung Kim; Kyung Lyul Hyun; Jehee Lee

We present an interactive method that allows animated characters to navigate through cluttered environments. Our characters are equipped with a variety of motion skills to clear obstacles, narrow passages, and highly constrained environment features. Our control method incorporates a behavior model into well‐known, standard path planning algorithms. Our behavior model, called deformable motion, consists of a graph of motion capture fragments. The key idea of our approach is to add flexibility on motion fragments such that we can situate them into a cluttered environment via constraint‐based formulation. We demonstrate our deformable motion for realtime interactive navigation and global path planning in highly constrained virtual environments.


Computer Graphics Forum | 2009

Linkless Octree Using Multi-Level Perfect Hashing

Myung Geol Choi; Eunjung Ju; Jung-Woo Chang; Jehee Lee; Young J. Kim

The standard C/C++ implementation of a spatial partitioning data structure, such as octree and quadtree, is often inefficient in terms of storage requirements particularly when the memory overhead for maintaining parent‐to‐child pointers is significant with respect to the amount of actual data in each tree node. In this work, we present a novel data structure that implements uniform spatial partitioning without storing explicit parent‐to‐child pointer links. Our linkless tree encodes the storage locations of subdivided nodes using perfect hashing while retaining important properties of uniform spatial partitioning trees, such as coarse‐to‐fine hierarchical representation, efficient storage usage, and efficient random accessibility. We demonstrate the performance of our linkless trees using image compression and path planning examples.
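The pointerless idea can be illustrated compactly. The paper encodes node storage locations with multi-level perfect hashing; in this sketch Python's built-in dict stands in for that hash table, which preserves the key property: children and parents are found by computing their keys arithmetically rather than by following stored parent-to-child pointers.

```python
class LinklessOctree:
    """Pointer-free octree sketch: nodes are addressed by (level, x, y, z) keys."""

    def __init__(self):
        self.cells = {}   # (level, x, y, z) -> node data; replaces pointer links

    def insert(self, level, x, y, z, data):
        self.cells[(level, x, y, z)] = data

    def child(self, level, x, y, z, dx, dy, dz):
        # dx, dy, dz are each 0 or 1, selecting one of the 8 children.
        return self.cells.get((level + 1, 2 * x + dx, 2 * y + dy, 2 * z + dz))

    def parent(self, level, x, y, z):
        return self.cells.get((level - 1, x // 2, y // 2, z // 2))

tree = LinklessOctree()
tree.insert(0, 0, 0, 0, "root")
tree.insert(1, 1, 0, 1, "leaf")
print(tree.child(0, 0, 0, 0, 1, 0, 1))   # leaf
print(tree.parent(1, 1, 0, 1))           # root
```

A general-purpose dict still stores its keys, so it does not achieve the paper's storage savings; the perfect-hash construction is what eliminates that overhead while keeping O(1) random access.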


Computer Animation and Virtual Worlds | 2013

Interaction capture using magnetic sensors

Peter Sandilands; Myung Geol Choi; Taku Komura

Capturing a close interaction between an actor and an object can be difficult as a result of occlusion and the need to recreate the geometry of the scene accurately. In this paper, we propose a technique that allows us to capture the object's motion and geometry alongside the actor's movements, and optionally the local environment, using a magnetic motion capture system and an RGB-D sensor. This not only gives greater information when placing a character in a scene but also enables us to digitally recreate the scene in motion without significant animator work after capture. The use of magnetic sensors prevents the occlusion and marker confusion that are common in optical techniques when dealing with close interactions, as the magnetic sensors do not require direct line of sight to a camera. The geometry reconstruction ensures that the proportions of the objects and surfaces the character interacts with are accurate, and alleviates the need for an artist to model the object. We validate the results by comparison with an optical system and show a variety of motions, such as using a screwdriver or removing a cap to drink from a bottle, that can be captured using our technique.


Motion in Games | 2012

Capturing Close Interactions with Objects Using a Magnetic Motion Capture System and a RGBD Sensor

Peter Sandilands; Myung Geol Choi; Taku Komura

Games and interactive virtual worlds increasingly rely on interactions with the environment and require animations for displaying them. Manually synthesizing such animations is a daunting task due to the difficulty of handling the close interactions between a character's body and the object. Capturing such movements using optical motion capture systems, the most prevalent motion capture devices, is also not straightforward due to occlusions between the body markers and the object or the body itself. In this paper, we describe a scheme to capture such movements using a magnetic motion capture system. The advantage of a magnetic motion capture system is that it can obtain the global data without any concern for the occlusion problem. This allows us to digitally recreate captured close interactions without significant artist work in creating the scene after capture. We show examples of captured movements including opening a bottle, drawing on paper, putting on and taking off pen caps, carrying objects, and interacting with furniture. The captured data is published as a publicly available database.


Computer Graphics Forum | 2013

Dynamic Comics for Hierarchical Abstraction of 3D Animation Data

Myung Geol Choi; Seung-Tak Noh; Taku Komura; Takeo Igarashi

Figure 1: Three comic sequences automatically generated from the snapshot tree of Figure 2. They describe the same locomotion animation from different perspectives: the left image shows the global traveling path of the subject, the middle sequence shows the transition of the locomotion style, and the right sequence presents the detailed body poses and the number of steps in both styles.

Image storyboards of films and videos are useful for quick browsing and automatic video processing. A common approach for producing image storyboards is to display a set of selected key-frames in temporal order, which has been widely used for 2D video data. However, such an approach cannot be applied to 3D animation data because different information is revealed by changing parameters such as the viewing angle and the duration of the animation. Also, the interests of viewers may differ from person to person. As a result, it is difficult to draw a single image that perfectly abstracts the entire 3D animation data. In this paper, we propose a system that allows users to interactively browse an animation and produce a comic sequence out of it. Each snapshot in the comic optimally visualizes a duration of the original animation, taking into account the geometry and motion of the characters and objects in the scene. This is achieved by a novel algorithm that automatically produces a hierarchy of snapshots from the input animation. Our user interface allows users to arrange the snapshots according to the complexity of the movements of the characters and objects, the duration of the animation, and the page area available to visualize the comic sequence. Our system is useful for quickly browsing through a large amount of animation data and for semi-automatically synthesizing a storyboard from a long sequence of animation.


Eurographics | 2017

Character-Object Interaction Retrieval using the Interaction Bisector Surface

Xi Zhao; Myung Geol Choi; Taku Komura

In this paper, we propose a novel approach for the classification and retrieval of interactions between human characters and objects. We propose to use the interaction bisector surface (IBS) between the body and the object as a feature of the interaction. We define a multi-resolution representation of the body structure, and compute a correspondence matrix hierarchy that describes which parts of the character's skeleton take part in the composition of the IBS and how much they contribute to the interaction. Key-frames of the interactions are extracted based on the evolution of the IBS and used to align the query interaction with the interactions in the database. The experimental results show that our approach outperforms existing techniques in motion classification and retrieval, which implies that contextual information plays a significant role in scene and interaction description. Our method also shows better performance than other techniques that use features based on the spatial relations between the body parts, or between the body parts and the object. Our method can be applied to character motion synthesis and robot motion planning.

Collaboration


Dive into Myung Geol Choi's collaborations.

Top Co-Authors

Jehee Lee, Seoul National University
Taku Komura, University of Edinburgh
Kyung Lyul Hyun, Seoul National University
Eunjung Ju, Seoul National University
Kyungyong Yang, Seoul National University
Xi Zhao, Xi'an Jiaotong University