
Publication


Featured research published by Jongeun Cha.


IEEE MultiMedia | 2010

A tactile glove design and authoring system for immersive multimedia

Yeongmi Kim; Jongeun Cha; Jeha Ryu; Ian Oakley

Viewer expectations for rich, immersive interaction with multimedia are driving new technologies, such as high-definition 3D displays and multichannel audio systems, to greater levels of sophistication. While researchers continue to study ways to develop new capabilities for visual and audio sensory channels, improvements in haptic channels could lead to even more immersive experiences.


IEEE MultiMedia | 2009

A Framework for Haptic Broadcasting

Jongeun Cha; Yo-Sung Ho; Yeongmi Kim; Jeha Ryu; Ian Oakley

This article presents a comprehensive exploration of the issues underlying haptic multimedia broadcasting. It also describes the implementation of a prototype system as a proof of concept.


Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2007

An Authoring/Editing Framework for Haptic Broadcasting: Passive Haptic Interactions using MPEG-4 BIFS

Jongeun Cha; Yong-Won Seo; Yeongmi Kim; Jeha Ryu

In this paper, we propose an authoring/editing framework for haptic broadcasting that provides viewers with passive haptic interactions synchronized with audiovisual media by extending MPEG-4 BIFS (Binary Format for Scenes). Based on this framework, we could conveniently author haptically enhanced broadcast content and stream it to viewers in a video-on-demand (VOD) context. This work will also appear in the demonstration session of the conference.


IEICE Transactions on Information and Systems | 2006

Depth Video Enhancement for Haptic Interaction Using a Smooth Surface Reconstruction

Seung-Man Kim; Jongeun Cha; Jeha Ryu; Kwan Heng Lee

We present a depth video enhancement algorithm that provides high-quality haptic interaction. As telecommunication technology advances rapidly, depth image-based haptic interaction is becoming viable for broadcasting applications. Since a real depth map usually contains discrete, rugged noise, haptic interaction with it produces distorted force feedback. To resolve this problem, we propose a two-step refinement and adaptive sampling algorithm. In the first step, noise is removed by median filtering in 2D image space. Since not all pixels can be used to reconstruct the 3D mesh due to limited system resources, the filtered map is adaptively sampled based on depth variation. Sampled 2D pixels, called feature points, are triangulated and projected into 3D space. In the second refinement step, we apply Gaussian smoothing to the reconstructed 3D surface. Finally, the 3D surfaces are rendered to compute a smooth depth map from the Z-buffer.
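The two-step refinement described in this abstract could be sketched roughly as follows. This is a minimal illustration, not code from the paper: the filter sizes, the synthetic depth map, and applying the Gaussian smoothing in image space (rather than on the reconstructed 3D surface) are all simplifying assumptions, and the adaptive sampling and mesh projection steps are omitted.

```python
import numpy as np

def median_filter(depth, k=3):
    """Step 1: remove discrete, rugged noise in 2D image space."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.empty_like(depth)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def gaussian_smooth(depth, k=5, sigma=1.0):
    """Step 2: Gaussian smoothing (here applied in image space as a
    stand-in for smoothing the reconstructed 3D surface)."""
    ax = np.arange(k) - k // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    # separable convolution: filter rows, then columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, depth)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

# synthetic noisy depth map with one impulse-noise spike
rng = np.random.default_rng(0)
depth = np.full((32, 32), 100.0) + rng.normal(0, 5, (32, 32))
depth[10, 10] = 500.0
refined = gaussian_smooth(median_filter(depth))
```

The median filter handles impulse spikes that would otherwise produce distorted force feedback at contact; the Gaussian pass then smooths the remaining small-scale roughness.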


Advances in Multimedia | 2004

Haptic interaction in realistic multimedia broadcasting

Jongeun Cha; Je-Ha Ryu; Seungjun Kim; Seongeun Eom; Byung-Ha Ahn

In this paper, we discuss a haptically enhanced multimedia broadcasting system. The four stages of the proposed system are briefly analyzed: scene capture, haptic editing, data transmission, and display with haptic interaction. To show the usefulness of the proposed system, some potential scenarios with haptic interaction are listed. These scenarios are classified into passive and active haptic interaction scenarios, which can be fully authored by scenario writers or producers. Finally, to show how a haptically enhanced scenario works, a typical example in a home-shopping setting is demonstrated.


International Conference on Human-Computer Interaction | 2005

Immersive live sports experience with vibrotactile sensation

Beom-Chan Lee; Junhun Lee; Jongeun Cha; Changhoon Seo; Jeha Ryu

This paper presents a vibrotactile display system designed to provide an immersive live sports experience. Preliminary user studies showed that with this display, subjects were 35% more accurate in interpreting an ambiguous visual stimulus showing a ball either entering or narrowly missing a football net. About 80% of subjects could judge the correct ball paths in the presence of ambiguous visual stimuli; without the tactile display, only 60% of paths were judged correctly from the visual display.


Advances in Multimedia | 2005

Haptic interaction with depth video media

Jongeun Cha; Seung Man Kim; Ian Oakley; Je-Ha Ryu; Kwan-Heng Lee

In this paper we propose a touch-enabled video player system. A conventional video player only allows viewers to passively experience visual and audio media. In virtual environments, touch or haptic interaction has been shown to convey a powerful illusion of the tangible nature, the reality, of the displayed environments, and we feel the same benefits may be conferred on a broadcast viewing domain. To this end, this paper describes a system that uses a video representation based on depth images to add a haptic component to an audio-visual stream. We generate this stream by combining a regular RGB image with a synchronized depth image composed of per-pixel depth-from-camera information. The depth video, a unified stream of the color and depth images, can be synthesized from a computer graphics animation by rendering with commercial packages, or captured from a real environment using an active depth camera such as the ZCam™. In order to provide a haptic representation of this data, we propose a modified proxy graph algorithm for depth video streams. The modified proxy graph algorithm can (i) detect collisions between a moving virtual proxy and time-varying video scenes, (ii) generate a smooth touch sensation by handling the radically different display update rates required by visual (30 Hz) and haptic (on the order of 1000 Hz) systems, and (iii) avoid sudden changes in contact force. A sample experiment shows the effectiveness of the proposed system.
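Items (ii) and (iii) of the abstract could be illustrated with a toy sketch of the rate-bridging idea: interpolating between consecutive 30 Hz depth frames at the 1 kHz haptic rate, and slew-limiting the rendered force so contact forces never jump. This is not the paper's proxy graph algorithm; the linear blend and the `max_delta` limit are illustrative assumptions.

```python
import numpy as np

VIDEO_HZ, HAPTIC_HZ = 30, 1000

def interp_depth(frame_prev, frame_next, alpha):
    """Blend two consecutive 30 Hz depth frames at an intermediate
    haptic tick (alpha in [0, 1]) so the 1 kHz loop never sees a step."""
    return (1 - alpha) * frame_prev + alpha * frame_next

def limit_force_change(prev_force, new_force, max_delta=0.05):
    """Slew-rate limit the rendered force to avoid sudden contact-force
    jumps when the underlying scene changes abruptly (item iii)."""
    delta = np.clip(new_force - prev_force, -max_delta, max_delta)
    return prev_force + delta

# one video interval contains HAPTIC_HZ // VIDEO_HZ haptic ticks
f0 = np.zeros((4, 4))   # depth frame at time t
f1 = np.ones((4, 4))    # depth frame at time t + 1/30 s
ticks = HAPTIC_HZ // VIDEO_HZ
frames = [interp_depth(f0, f1, t / ticks) for t in range(ticks + 1)]

# a commanded force that jumps from 0 to 1 is ramped, not stepped
force = 0.0
for target in [0.0, 1.0, 1.0]:
    force = limit_force_change(force, target)
```

The interpolation addresses the 30 Hz vs. 1000 Hz mismatch; the slew limit keeps each haptic tick's force within a bounded distance of the previous one.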


IEICE Transactions on Information and Systems | 2006

A Novel Test-Bed for Immersive and Interactive Broadcasting Production Using Augmented Reality and Haptics

Seungjun Kim; Jongeun Cha; Jong-Phil Kim; Jeha Ryu; Seongeun Eom; Nitaigour-Premchand Mahalik; Byung-Ha Ahn

In this paper, we demonstrate an immersive and interactive broadcasting production system with a new haptically enhanced multimedia broadcasting chain. The system adapts Augmented Reality (AR) techniques, which merges captured videos and virtual 3D media seamlessly through multimedia streaming technology, and haptic interaction technology in near real-time. In this system, viewers at the haptic multimedia client can interact with AR broadcasting production transmitted via communication network. We demonstrate two test applications, which show that the addition of AR- and haptic-interaction to the conventional audio-visual contents can improve immersiveness and interactivity of viewers with rich contents service.


international conference on human computer interaction | 2005

Smooth haptic interaction in broadcasted augmented reality

Jongeun Cha; Beom-Chan Lee; Jong-Phil Kim; Seungjun Kim; Jeha Ryu

This paper presents smooth haptic interaction methods for an immersive and interactive broadcasting system that combines haptics with augmented reality. When touching broadcast augmented virtual objects in the captured real scene, force trembling and discontinuity occur due to static registration errors and the slow marker pose update rate, respectively. To solve these problems, threshold and interpolation methods are proposed, respectively. The resulting haptic interaction provides a smoother, continuous, tremble-free force sensation.
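The two fixes named in the abstract could be sketched as follows: a threshold that ignores sub-millimeter pose jitter from static registration error, and linear interpolation between slow marker pose samples so the haptic loop sees continuous motion. The epsilon value and linear blend are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def threshold_pose(prev_pos, new_pos, eps=0.002):
    """Ignore sub-threshold pose changes (in meters, assumed unit) so
    static registration jitter does not make the force tremble."""
    if np.linalg.norm(new_pos - prev_pos) < eps:
        return prev_pos
    return new_pos

def interpolate_pose(pose_a, pose_b, alpha):
    """Linearly interpolate between two marker pose samples so the fast
    haptic loop sees continuous, not stepwise, object motion."""
    return (1 - alpha) * pose_a + alpha * pose_b

p = np.array([0.0, 0.0, 0.0])
jittered = np.array([0.001, 0.0, 0.0])  # below threshold: treated as noise
moved = np.array([0.1, 0.0, 0.0])       # real motion: accepted
p1 = threshold_pose(p, jittered)
p2 = threshold_pose(p, moved)
mid = interpolate_pose(p, moved, 0.5)   # halfway between two pose updates
```

Thresholding suppresses the trembling force; interpolation removes the discontinuity caused by the slow marker update rate.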


Digital Television Conference | 2007

3DTV System using Depth Image-Based Video in the MPEG-4 Multimedia Framework

Sung-Yeol Kim; Jongeun Cha; Seung-Hyun Lee; Jeha Ryu; Yo-Sung Ho

In this paper, we present a 3DTV system using a new depth image-based representation (DIBR). After obtaining depth images from multi-view cameras or a depth-range camera, we decompose each depth image into three disjoint layer images and a layer descriptor image. Then, we combine the decomposed images with color images to generate a new representation of dynamic 3D scenes, called a 3D depth video. The 3D depth video is compressed by an H.264/AVC coder and streamed to clients over IP networks in the MPEG-4 multimedia framework. The proposed 3DTV system enables viewers not only to enjoy high-quality 3D video in real time, but also to experience various user-friendly interactions such as free viewpoint changing, composition with computer graphics, and even haptic display.
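The layer decomposition described in this abstract could be sketched as below. The abstract does not state the decomposition rule, so splitting by uniform depth ranges is purely an illustrative assumption; only the structure (three disjoint layers plus a descriptor that makes the split lossless) follows the text.

```python
import numpy as np

def decompose_depth(depth, n_layers=3):
    """Split a depth image into n disjoint layer images plus a layer
    descriptor image recording which layer owns each pixel. Uniform
    depth ranges are an assumed, illustrative decomposition rule."""
    lo, hi = depth.min(), depth.max()
    edges = np.linspace(lo, hi, n_layers + 1)
    descriptor = np.clip(np.digitize(depth, edges[1:-1]), 0, n_layers - 1)
    layers = [np.where(descriptor == i, depth, 0.0) for i in range(n_layers)]
    return layers, descriptor

def recompose(layers, descriptor):
    """Lossless inverse: take each pixel from the layer the descriptor names."""
    out = np.zeros_like(layers[0])
    for i, layer in enumerate(layers):
        out = np.where(descriptor == i, layer, out)
    return out

depth = np.array([[0.0, 10.0], [20.0, 30.0]])
layers, desc = decompose_depth(depth)
restored = recompose(layers, desc)
```

Because the layers are disjoint and the descriptor records the assignment, the original depth image is recoverable exactly, which is what lets each layer be coded and streamed independently.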

Collaboration


Dive into Jongeun Cha's collaborations.

Top Co-Authors

Jeha Ryu
Gwangju Institute of Science and Technology

Je-Ha Ryu
Gwangju Institute of Science and Technology

Yeongmi Kim
Gwangju Institute of Science and Technology

Yong-Won Seo
Gwangju Institute of Science and Technology

Ian Oakley
Ulsan National Institute of Science and Technology

Seungjun Kim
Carnegie Mellon University

Byung-Ha Ahn
Gwangju Institute of Science and Technology

Yo-Sung Ho
Gwangju Institute of Science and Technology

Jong-Phil Kim
Gwangju Institute of Science and Technology

Seongeun Eom
Gwangju Institute of Science and Technology