
Publication


Featured research published by Chungeun Kim.


International Conference on Advanced Communication Technology | 2005

3-dimensional voice communication system for two user groups

Chungeun Kim; Sang Chul Ahn; Ig-Jae Kim; Hyoung-Gon Kim

This paper proposes a 3-dimensional (3D) voice over IP (VoIP) system for two user groups with minimal device requirements, and presents the system's design issues along with experience from its implementation. The proposed system requires only a desktop computer with a multi-channel soundcard installed, as many USB microphones as there are users, and a 3D sound rendering setup such as a 5.1- or 7.1-channel loudspeaker system. It not only enables multiple users to communicate with a remote group using a single desktop computer, but also enriches each user's voice with a 3D spatial effect. Using the system, participants hear the voices of remote users through the 3D sound rendering system as if each remote user were speaking from his or her corresponding position. The system can be used for immersive teleconferencing.
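The core spatialisation idea — placing each remote talker's voice at a distinct position on a loudspeaker layout — can be sketched with a simple constant-power panning law. This is an illustrative sketch only; the paper's actual rendering over a 5.1/7.1 system is not reproduced here, and the function below pans between just one adjacent speaker pair.

```python
import numpy as np

def constant_power_pan(mono, angle_deg, pair_span_deg=60.0):
    """Pan a mono voice signal between two adjacent loudspeakers spanning
    pair_span_deg, using constant-power (sin/cos) gains.
    angle_deg: position within the pair, 0 = left speaker,
    pair_span_deg = right speaker."""
    # Map the position onto 0..pi/2 for the sine/cosine panning law
    theta = np.clip(angle_deg / pair_span_deg, 0.0, 1.0) * (np.pi / 2)
    g_left, g_right = np.cos(theta), np.sin(theta)
    return g_left * mono, g_right * mono  # feeds for the speaker pair

# A talker placed halfway between the pair gets equal gains, and
# g_left**2 + g_right**2 == 1, so perceived loudness stays constant
# as the virtual position moves.
fs = 48000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
left_feed, right_feed = constant_power_pan(tone, 30.0)
```

Constant power (rather than constant amplitude) is the usual choice for loudspeaker panning because the signals from the two speakers add incoherently at the listening position.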


Archive | 2014

Object-Based Spatial Audio: Concept, Advantages, and Challenges

Chungeun Kim

One of the primary objectives of modern audiovisual media creation and reproduction techniques is realistic perception of the delivered content by the consumer. Spatial audio techniques in general attempt to deliver the impression of an auditory scene in which the listener can perceive the spatial distribution of the sound sources as if he or she were in the actual scene. Advances in spatial audio capture and rendering have led to a new concept of audio delivery that not only aims to present the listener with a realistic auditory scene just as captured, but also gives the producer and/or the listener more control over the delivered scene. This is made possible by controlling the attributes of the individual sound objects that appear in the scene. In this section, this so-called "object-based" approach is introduced, along with the key features that distinguish it from conventional spatial audio production and delivery techniques. Related applications and technologies are also introduced, followed by the limitations and challenges that this approach faces.
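The distinguishing feature described above — a scene as independently controllable objects (signal plus metadata) rendered to whatever layout is available — can be illustrated with a deliberately minimal sketch. The object representation and cosine-weighted renderer below are hypothetical simplifications, not the full pipeline of any real object-based standard such as ADM or MPEG-H.

```python
import numpy as np

# A hypothetical minimal object-based scene: each object is a mono signal
# plus metadata (here just an azimuth and a gain the renderer honours).
SPEAKER_AZIMUTHS = np.radians([-30.0, 30.0])  # illustrative stereo layout

def render_object(signal, azimuth_deg, gain=1.0):
    """Render one object to the layout with cosine-weighted,
    power-normalised speaker gains."""
    az = np.radians(azimuth_deg)
    w = np.cos((SPEAKER_AZIMUTHS - az) / 2) ** 2   # angular proximity weights
    w = w / np.sqrt(np.sum(w ** 2))                # normalise total power
    return gain * np.outer(w, signal)              # (n_speakers, n_samples)

def render_scene(objects):
    """Mix independently controllable objects into speaker feeds."""
    return sum(render_object(*obj) for obj in objects)

# Two objects: the producer or listener can re-position or re-gain either
# one without touching the other - the key object-based advantage over
# channel-based delivery, where the mix is fixed at production time.
voice = np.ones(4)
music = np.ones(4) * 0.5
feeds = render_scene([(voice, -30.0, 1.0), (music, 30.0, 0.8)])
```

Because rendering happens at the consumer side, the same object metadata can drive a stereo pair, a 5.1 layout, or headphones — one reason the approach eases the format-proliferation problem the chapter discusses.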


International Conference on Image Processing | 2014

An approach to immersive audio rendering with wave field synthesis for 3D multimedia content

Hyun Lim; Chungeun Kim; Erhan Ekmekcioglu; Safak Dogan; Andrew P. Hill; Ahmet M. Kondoz; Xiyu Shi

This paper proposes an immersive audio rendering scheme for networked 3D multimedia systems. The spatial audio rendering method, based on wave field synthesis, is particularly useful for applications where multiple listeners experience a true spatial soundscape while remaining free to move without losing spatial sound properties. The proposed approach can be considered a general solution to the static listening restriction imposed by conventional methods, which rely on accurate sound reproduction within a sweet spot only. The paper reports the results of numerical analysis and experimental validation using various sound sources, demonstrating that the developed approach can create a variety of virtual audio objects at target positions with high accuracy while covering the majority of the listening area. Subjective evaluation results show that an accurate spatial impression can be achieved, with multiple simultaneous audible depth cues improving localization accuracy over single-object rendering.
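The basic mechanism behind wave field synthesis — each loudspeaker in an array reproducing the virtual source's wavefront with the appropriate delay and attenuation — can be sketched as follows. This is a simplified geometric illustration under stated assumptions (a linear array, a point source behind it, plain 1/r weighting); real WFS driving functions also include spectral and array-tapering corrections, and this is not the paper's specific implementation.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def wfs_delays_gains(src_xy, speaker_xy):
    """Per-loudspeaker delay (s) and amplitude weight for synthesising a
    virtual point source behind a linear array. Each speaker fires a
    delayed, attenuated copy so the emitted wavelets sum to the curved
    wavefront of the virtual source across the listening area."""
    d = np.linalg.norm(speaker_xy - src_xy, axis=1)  # source-to-speaker distances
    delays = (d - d.min()) / C                       # relative arrival times
    gains = d.min() / d                              # simple 1/r attenuation
    return delays, gains

# An 8-element linear array along the x-axis, virtual source 1 m behind it.
speakers = np.stack([np.linspace(-1.4, 1.4, 8), np.zeros(8)], axis=1)
source = np.array([0.0, -1.0])
delays, gains = wfs_delays_gains(source, speakers)
# The speakers nearest the source fire first at full gain; outer speakers
# fire later and quieter, recreating the expanding wavefront.
```

Because the wavefront itself is reconstructed (rather than relying on inter-channel level/time trades at one point), the rendered source position holds up as the listener moves — the sweet-spot-free property the paper exploits.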


Global Communications Conference | 2013

Investigation into spatial audio quality of experience in the presence of accompanying video cues with spatial mismatch

Chungeun Kim; Ahmet M. Kondoz; Xiyu Shi

This study investigates metrics for predicting spatial audio quality of experience (QoE), particularly considering the presence of video whose viewpoint may not match that of the auditory scene. Subjective tests were conducted for 5.1-channel audio quality evaluation following a previously developed testing and prediction methodology, with the addition of accompanying video cues and with the spatial correlation between audio and video as a new variable for QoE prediction. The first experiment, with a synthesized visual cue, showed that the spatial mismatch between audio and video affects the perceived audio QoE. A prediction model for the QoE score was suggested through statistical analysis, using the audio-video angular mismatch as well as other known measurable low-level parameters. The model was validated through another set of subjective tests using captured real-life audiovisual content.
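The modelling step described above — regressing subjective QoE scores onto measurable low-level parameters including the audio-video angular mismatch — can be illustrated with a generic least-squares fit. The feature set, data, and coefficients below are entirely synthetic placeholders; the paper's actual model is not reproduced here.

```python
import numpy as np

# Synthetic stand-in data: QoE scores that degrade with audio-video
# angular mismatch and improve with a second (hypothetical) low-level
# parameter. Only the fitting procedure is illustrative.
rng = np.random.default_rng(0)
n = 40
angular_mismatch = rng.uniform(0, 90, n)      # degrees
bitrate_kbps = rng.uniform(64, 320, n)        # hypothetical second feature
qoe = (4.5 - 0.02 * angular_mismatch + 0.003 * bitrate_kbps
       + rng.normal(0, 0.1, n))               # synthetic listener scores

# Ordinary least squares: QoE ~ intercept + mismatch + bitrate
X = np.column_stack([np.ones(n), angular_mismatch, bitrate_kbps])
coef, *_ = np.linalg.lstsq(X, qoe, rcond=None)
predicted = X @ coef  # in practice, validate on a held-out test set
```

The validation step in the paper corresponds to checking such a fitted model against an independent set of subjective scores (here, real-life captured content) rather than the data it was trained on.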


Building Acoustics | 2011

Head-Movement-Aware Signal Capture for Evaluation of Spatial Acoustics

Chungeun Kim; Russell Mason; Tim Brookes

This research incorporates the nature of head movement made in listening activities into the development of a quasi-binaural acoustical measurement technique for the evaluation of spatial impression. A listening test was conducted in which head movements were tracked while the subjects rated the perceived source width, envelopment, source direction and timbre of a number of stimuli. It was found that the extent of head movement was larger when evaluating source width and envelopment than when evaluating source direction and timbre. It was also found that the locus of ear positions corresponding to these head movements formed a bounded sloped path, higher towards the rear and lower towards the front. This led to the concept of a signal capture device comprising a torso-mounted sphere with multiple microphones. A prototype was constructed and used to measure three binaural parameters related to perceived spatial impression: interaural time and level differences (ITD and ILD) and the interaural cross-correlation coefficient (IACC). Comparison of the prototype measurements with those made with a rotating Head and Torso Simulator (HATS) showed that the prototype could be perceptually accurate for the prediction of source direction using ITD and ILD, and for the prediction of perceived spatial impression using IACC. Further investigation into parameter derivation and interpolation methods indicated that 21 pairs of discretely spaced microphones were sufficient to measure the three binaural parameters across the sloped range of ear positions identified in the listening test.
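The three binaural parameters the prototype measures can be estimated from a binaural signal pair using their textbook definitions, roughly as below. This is a generic sketch of the quantities themselves, not the paper's specific derivation or interpolation pipeline across the sphere's microphone positions.

```python
import numpy as np

def binaural_params(left, right, fs, max_lag_ms=1.0):
    """Estimate ITD (s), ILD (dB) and IACC from a binaural signal pair.
    The lag search is limited to +/- max_lag_ms, roughly the physical
    maximum interaural delay for a human head."""
    max_lag = int(fs * max_lag_ms / 1000)
    lags = np.arange(-max_lag, max_lag + 1)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    # Normalised interaural cross-correlation function over allowed lags
    # (full-signal energy normalisation; fine for a short-lag sketch)
    iacf = np.array([
        np.sum(left[max(0, -l):len(left) - max(0, l)] *
               right[max(0, l):len(right) - max(0, -l)])
        for l in lags
    ]) / norm
    iacc = iacf.max()                  # IACC: peak of the correlation
    itd = lags[iacf.argmax()] / fs     # ITD: lag at that peak, in seconds
    ild = 10 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))  # level diff
    return itd, ild, iacc
```

ITD and ILD carry the directional cues the paper uses for source-direction prediction, while IACC (how decorrelated the two ear signals are) relates to perceived width and envelopment — which is why the prototype must measure all three across the sloped range of ear positions.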


Journal of the Acoustical Society of America | 2010

Investigation into and modelling of head movement for objective evaluation of the spatial impression of audio.

Chungeun Kim; Russell Mason; Tim Brookes

Research was undertaken to determine the nature of head movements made when judging spatial impression and to incorporate these into a system for measuring, in a perceptually relevant manner, the acoustic parameters which contribute to spatial impression: interaural time and level differences and interaural cross‐correlation coefficient. First, a subjective test was conducted that showed that (i) the amount of head movement was larger when evaluating source width and envelopment than when judging localization and timbre and (ii) the pattern of head movement resulted in ear positions that formed a sloped area. These findings led to the design of a binaural signal capture technique using a sphere with multiple microphones, mounted on a simulated torso. Evaluation of this technique revealed that it would be appropriate for the prediction of perceived spatial attributes including both source direction and aspects of spatial impression. Reliable derivation of these attributes across the range of ear positions determined from the earlier subjective test was shown to be possible with a limited number of microphones through an appropriate interpolation and calculation technique. A prototype capture system was suggested as a result, using a sphere with torso, with 21 omnidirectional microphones on each side. [Work supported by the Engineering and Physical Sciences Research Council (EPSRC), UK, Grant No. EP/D049253.]


Journal of the Audio Engineering Society | 2007

An Investigation Into Head Movements Made When Evaluating Various Attributes of Sound

Tim Brookes; Chungeun Kim; Russell Mason


Journal of the Audio Engineering Society | 2008

Improvements to a Spherical Binaural Capture Model for Objective Measurement of Spatial Impression with Consideration of Head Movements

Chungeun Kim; Russell Mason; Tim Brookes


Journal of the Audio Engineering Society | 2008

Initial Investigation of Signal Capture Techniques for Objective Measurement of Spatial Impression Considering Head Movement

Chungeun Kim; Russell Mason; Tim Brookes


Journal of the Audio Engineering Society | 2009

Perception of head-position-dependent variations in interaural cross-correlation coefficient

Russell Mason; Chungeun Kim; Tim Brookes

Collaboration


Dive into Chungeun Kim's collaborations.

Top Co-Authors

Xiyu Shi

University of Surrey


Hyun Lim

Loughborough University
