Jongeun Cha
Gwangju Institute of Science and Technology
Publication
Featured research published by Jongeun Cha.
IEEE MultiMedia | 2010
Yeongmi Kim; Jongeun Cha; Jeha Ryu; Ian Oakley
Viewer expectations for rich, immersive interaction with multimedia are driving new technologies, such as high-definition 3D displays and multichannel audio systems, to greater levels of sophistication. While researchers continue to study ways to develop new capabilities for the visual and audio sensory channels, improvements in haptic channels could lead to even more immersive experiences.
IEEE MultiMedia | 2009
Jongeun Cha; Yo-Sung Ho; Yeongmi Kim; Jeha Ryu; Ian Oakley
This article presents a comprehensive exploration of the issues underlying haptic multimedia broadcasting. It also describes the implementation of a prototype system as a proof of concept.
symposium on haptic interfaces for virtual environment and teleoperator systems | 2007
Jongeun Cha; Yong-Won Seo; Yeongmi Kim; Jeha Ryu
In this paper, we propose an authoring/editing framework for haptic broadcasting that provides viewers with passive haptic interactions synchronized with audiovisual media by extending MPEG-4 BIFS (binary format for scenes). Based on this framework, we could conveniently author haptically enhanced broadcast content and provide it to viewers by streaming in a VOD (video on demand) context. This work will also appear in the demonstration session during the conference.
IEICE Transactions on Information and Systems | 2006
Seung-Man Kim; Jongeun Cha; Jeha Ryu; Kwan Heng Lee
We present a depth video enhancement algorithm to provide high-quality haptic interaction. As telecommunication technology evolves rapidly, depth image-based haptic interaction is becoming viable for broadcasting applications. Since a real depth map usually contains discrete, rugged noise, haptic interaction with it produces distorted force feedback. To resolve these problems, we propose a two-step refinement and adaptive sampling algorithm. In the first step, noise is removed by median filtering in 2D image space. Since not all pixels can be used to reconstruct the 3D mesh due to limited system resources, the filtered map is adaptively sampled based on the depth variation. Sampled 2D pixels, called feature points, are triangulated and projected into 3D space. In the second refinement step, we apply Gaussian smoothing to the reconstructed 3D surface. Finally, the 3D surfaces are rendered to compute a smooth depth map from the Z-buffer.
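The filtering and sampling steps the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, window size, and variation measure are assumptions.

```python
# Hedged sketch of the two-step depth refinement pipeline: median filtering
# to remove discrete noise, then adaptive sampling of feature points where
# depth variation is large. Names and parameters are illustrative only.

def median_filter(depth, k=1):
    """Step 1: remove noise with a (2k+1) x (2k+1) median filter."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            window = [depth[j][i]
                      for j in range(max(0, y - k), min(h, y + k + 1))
                      for i in range(max(0, x - k), min(w, x + k + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]  # median of the clipped window
    return out

def sample_features(depth, threshold):
    """Adaptive sampling: keep pixels whose local depth variation exceeds a threshold."""
    h, w = len(depth), len(depth[0])
    features = []
    for y in range(h):
        for x in range(w):
            neighbors = [depth[j][i]
                         for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= j < h and 0 <= i < w]
            variation = max(abs(depth[y][x] - n) for n in neighbors)
            if variation > threshold:
                features.append((x, y, depth[y][x]))  # feature point for triangulation
    return features
```

A flat map with a single noise spike illustrates the idea: the median filter removes the spike, after which no feature points exceed the variation threshold.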
advances in multimedia | 2004
Jongeun Cha; Je-Ha Ryu; Seungjun Kim; Seongeun Eom; Byung-Ha Ahn
In this paper, we discuss a haptically enhanced multimedia broadcasting system. The four stages of the proposed system are briefly analyzed: scene capture, haptic editing, data transmission, and display with haptic interaction. To show the usefulness of the proposed system, some potential scenarios with haptic interaction are listed. These scenarios are classified into passive and active haptic interaction scenarios, which can be fully authored by scenario writers or producers. Finally, to show how a haptically enhanced scenario works, a typical example in a home shopping setting is demonstrated.
international conference on human computer interaction | 2005
Beom-Chan Lee; Junhun Lee; Jongeun Cha; Changhoon Seo; Jeha Ryu
This paper presents a vibrotactile display system designed to provide an immersive live sports experience. Preliminary user studies showed that with this display, subjects were 35% more accurate in interpreting an ambiguous visual stimulus showing a ball either entering or narrowly missing a football net. About 80% of subjects could judge the correct ball paths in the presence of ambiguous visual stimuli; without the tactile display, only 60% of paths were judged correctly from the visual display.
advances in multimedia | 2005
Jongeun Cha; Seung Man Kim; Ian Oakley; Je-Ha Ryu; Kwan-Heng Lee
In this paper we propose a touch-enabled video player system. A conventional video player only allows viewers to passively experience visual and audio media. In virtual environments, touch or haptic interaction has been shown to convey a powerful illusion of the tangible nature, the reality, of the displayed environments, and we feel the same benefits may be conferred to a broadcast viewing domain. To this end, this paper describes a system that uses a video representation based on depth images to add a haptic component to an audio-visual stream. We generate this stream through the combination of a regular RGB image and a synchronized depth image composed of per-pixel depth-from-camera information. The depth video, a unified stream of the color and depth images, can be synthesized from a computer graphics animation by rendering with commercial packages, or captured from a real environment by using an active depth camera such as the ZCam™. To provide a haptic representation of this data, we propose a modified proxy graph algorithm for depth video streams. The modified proxy graph algorithm can (i) detect collisions between a moving virtual proxy and time-varying video scenes, (ii) generate smooth touch sensations by handling the radically different display update rates required by visual (30 Hz) and haptic systems (on the order of 1000 Hz), and (iii) avoid sudden changes of contact force. A sample experiment shows the effectiveness of the proposed system.
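The three requirements listed above can be illustrated with a 1-D sketch of proxy-based force rendering against a depth stream updated at video rate while the haptic loop runs at 1 kHz. The interpolation scheme, stiffness value, and rate limit below are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged 1-D sketch: a virtual proxy is constrained to the surface on
# collision, depth samples arriving at 30 Hz are interpolated for each
# 1 kHz haptic tick, and the per-tick force change is rate-limited so a
# sudden scene update cannot kick the user. Names/values are illustrative.

STIFFNESS = 0.5          # spring constant linking device to proxy (assumed)
MAX_FORCE_STEP = 0.05    # cap on per-tick force change (assumed)

def interpolate_depth(prev_depth, next_depth, alpha):
    """Blend two consecutive video-rate depth samples for one haptic tick."""
    return prev_depth + alpha * (next_depth - prev_depth)

def haptic_tick(device_pos, surface_depth, prev_force):
    """One 1 kHz update: detect collision, place the proxy, emit a smoothed force."""
    if device_pos < surface_depth:      # device has penetrated the surface
        proxy_pos = surface_depth       # proxy stays constrained to the surface
    else:
        proxy_pos = device_pos          # free motion: proxy follows the device
    force = STIFFNESS * (proxy_pos - device_pos)
    # rate-limit the force so scene updates never cause a sudden jump
    step = max(-MAX_FORCE_STEP, min(MAX_FORCE_STEP, force - prev_force))
    return proxy_pos, prev_force + step
```

Running many such ticks between two depth frames, with `alpha` advancing each tick, yields a contact force that ramps smoothly instead of stepping at each 30 Hz frame boundary.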
IEICE Transactions on Information and Systems | 2006
Seungjun Kim; Jongeun Cha; Jong-Phil Kim; Jeha Ryu; Seongeun Eom; Nitaigour-Premchand Mahalik; Byung-Ha Ahn
In this paper, we demonstrate an immersive and interactive broadcasting production system with a new haptically enhanced multimedia broadcasting chain. The system adopts Augmented Reality (AR) techniques, which merge captured video and virtual 3D media seamlessly through multimedia streaming technology, and haptic interaction technology in near real time. In this system, viewers at the haptic multimedia client can interact with the AR broadcasting production transmitted over a communication network. We demonstrate two test applications, which show that adding AR and haptic interaction to conventional audio-visual content can improve viewers' immersion and interactivity with rich content services.
international conference on human computer interaction | 2005
Jongeun Cha; Beom-Chan Lee; Jong-Phil Kim; Seungjun Kim; Jeha Ryu
This paper presents smooth haptic interaction methods for an immersive and interactive broadcasting system that combines haptics with augmented reality. When touching broadcast virtual objects augmented into the captured real scene, force trembling and discontinuity occur due to static registration errors and the slow marker pose update rate, respectively. To solve these problems, threshold and interpolation methods are proposed. The resulting haptic interaction provides a smoother, continuous, tremble-free force sensation.
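The two fixes named above can be sketched as a dead-band threshold that suppresses jitter from static registration error, plus linear interpolation that fills in between slow marker-pose updates. The threshold value and function names below are illustrative assumptions.

```python
# Hedged sketch of the threshold and interpolation methods for smoothing
# haptic interaction with marker-tracked AR objects. The dead-band value
# and 1-D pose representation are assumptions for illustration.

JITTER_THRESHOLD = 0.01   # pose changes below this are treated as tremble (assumed)

def threshold_pose(prev_pose, new_pose):
    """Ignore sub-threshold pose changes to remove force trembling."""
    if abs(new_pose - prev_pose) < JITTER_THRESHOLD:
        return prev_pose          # registration jitter: hold the previous pose
    return new_pose               # genuine motion: accept the update

def interpolate_pose(pose_a, pose_b, t):
    """Blend between two slow marker updates so the force stays continuous."""
    return pose_a + t * (pose_b - pose_a)
```

The threshold removes trembling while the object is static; the interpolation removes discontinuities while it moves, since the haptic loop runs far faster than the marker tracker.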
digital television conference | 2007
Sung-Yeol Kim; Jongeun Cha; Seung-Hyun Lee; Jeha Ryu; Yo-Sung Ho
In this paper, we present a 3DTV system using a new depth image-based representation (DIBR). After obtaining depth images from multi-view cameras or a depth-range camera, we decompose each depth image into three disjoint layer images and a layer descriptor image. Then, we combine the decomposed images with color images to generate a new representation of dynamic 3D scenes, called a 3D depth video. The 3D depth video is compressed by an H.264/AVC coder and streamed to clients over IP networks in the MPEG-4 multimedia framework. The proposed 3DTV system enables not only enjoying high-quality 3D video in real time, but also experiencing various user-friendly interactions such as free viewpoint changing, composition of computer graphics, and even haptic display.
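The decomposition step described above can be sketched as follows. The uniform depth-range partition used here is an assumption for illustration; the paper's actual layering criterion may differ.

```python
# Hedged sketch: split a depth image into three disjoint layer images plus a
# layer-descriptor image recording which layer each pixel belongs to. The
# uniform banding of the depth range is an illustrative assumption.

def decompose_depth(depth, levels=3, max_depth=255):
    """Partition a depth image into `levels` disjoint layers and a descriptor map."""
    h, w = len(depth), len(depth[0])
    layers = [[[0] * w for _ in range(h)] for _ in range(levels)]
    descriptor = [[0] * w for _ in range(h)]
    band = max_depth // levels + 1          # width of each depth band
    for y in range(h):
        for x in range(w):
            layer = min(depth[y][x] // band, levels - 1)
            descriptor[y][x] = layer        # which layer this pixel belongs to
            layers[layer][y][x] = depth[y][x]
    return layers, descriptor
```

Because the layers are disjoint, the original depth image is exactly recoverable from the layer images and the descriptor, which is what allows each piece to be coded and streamed independently.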