Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Byeong-jun Han is active.

Publication


Featured research published by Byeong-jun Han.


ACM Multimedia | 2009

SVR-based music mood classification and context-based music recommendation

Seungmin Rho; Byeong-jun Han; Eenjun Hwang

With the advent of the ubiquitous era, context-based music recommendation has become one of the most rapidly emerging applications. Context-based music recommendation requires multidisciplinary efforts, including low-level feature extraction, music mood classification, and human emotion prediction. In this paper, we focus on the implementation issues of context-based mood classification and music recommendation. For mood classification, we reformulate the problem as a regression task based on support vector regression (SVR). Using the SVR-based mood classifier, we achieved 87.8% accuracy. For music recommendation, we reason about the user's mood and situation using both collaborative filtering and ontology technology. We implemented a prototype music recommendation system based on this scheme and report some of the results that we obtained.
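The regression reformulation described in the abstract can be sketched as follows. This is a minimal illustrative example: it assumes a continuous arousal/valence representation of mood with synthetic features, and the mood quadrant names are hypothetical; the paper's actual feature set, mood taxonomy, and training data are not reproduced.

```python
# Sketch: mood classification reformulated as regression with SVR.
# Features and the arousal/valence mood mapping are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-ins for low-level audio features (e.g. timbre, tempo).
X_train = rng.normal(size=(100, 8))
y_arousal = X_train[:, 0] * 0.7 + rng.normal(scale=0.1, size=100)
y_valence = X_train[:, 1] * 0.7 + rng.normal(scale=0.1, size=100)

# One SVR per continuous emotion dimension.
svr_arousal = SVR(kernel="rbf").fit(X_train, y_arousal)
svr_valence = SVR(kernel="rbf").fit(X_train, y_valence)

def classify_mood(features):
    """Map regressed (arousal, valence) values onto discrete mood quadrants."""
    a = svr_arousal.predict(features.reshape(1, -1))[0]
    v = svr_valence.predict(features.reshape(1, -1))[0]
    if a >= 0 and v >= 0:
        return "exuberant"
    if a >= 0 and v < 0:
        return "anxious"
    if a < 0 and v >= 0:
        return "content"
    return "depressed"

print(classify_mood(rng.normal(size=8)))
```

Thresholding the two regressed dimensions into quadrants is what turns the regression outputs back into a discrete mood label.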


International Conference on Multimedia and Expo | 2009

Environmental sound classification based on feature collaboration

Byeong-jun Han; Eenjun Hwang

To date, common acoustic features such as MPEG-7 and Fourier/wavelet-transform-based features have frequently been used for environmental sound classification. However, these transforms have difficulty dealing with specific properties of environmental sounds due to their limited scope. In this paper, we investigate three types of transforms as yet untried for this purpose and show that they are more effective than traditional features, mainly because they capture properties that traditional transforms cannot easily handle. Experimental results show that combining these features with traditional features achieves a maximum accuracy of 86.09% in environmental sound classification, compared to a maximum of 74.35% when confined to traditional features alone.
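One plausible reading of "feature collaboration" is concatenating descriptors from several different transforms into a single feature vector before classification. The sketch below uses placeholder features (zero-crossing rate and simple spectral statistics) on synthetic signals; it does not reproduce the paper's transforms, feature set, or dataset.

```python
# Sketch: combine features from different transforms by concatenation,
# then classify. Features and data here are illustrative placeholders.
import numpy as np
from sklearn.svm import SVC

def zero_crossing_rate(signal):
    # Time-domain feature: fraction of adjacent samples with a sign change.
    return np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0

def spectral_stats(signal):
    # Frequency-domain features from the magnitude spectrum.
    mag = np.abs(np.fft.rfft(signal))
    bins = np.arange(mag.size)
    centroid = np.sum(bins * mag) / (np.sum(mag) + 1e-9)
    return np.array([centroid, mag.mean(), mag.std()])

def combined_features(signal):
    # "Collaboration": concatenate descriptors from both domains.
    return np.concatenate([[zero_crossing_rate(signal)], spectral_stats(signal)])

rng = np.random.default_rng(1)
# Two toy classes: noise-like vs. tone-like signals.
noise = [rng.normal(size=512) for _ in range(20)]
tones = [np.sin(2 * np.pi * 5 * np.linspace(0, 1, 512))
         + rng.normal(scale=0.1, size=512) for _ in range(20)]
X = np.stack([combined_features(s) for s in noise + tones])
y = np.array([0] * 20 + [1] * 20)

clf = SVC().fit(X, y)
print(clf.score(X, y))
```

Because the two feature families capture complementary properties (temporal roughness vs. spectral shape), the concatenated vector separates the classes more reliably than either family alone.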


Multimedia Tools and Applications | 2014

Virtual pottery: a virtual 3D audiovisual interface using natural hand motions

Yoon Chung Han; Byeong-jun Han

In this paper, we present our approach to designing and implementing a virtual 3D sound sculpting interface that creates audiovisual results from hand motions in real time. In the interface, "Virtual Pottery," we use the metaphor of pottery creation to adapt natural hand motions to 3D spatial sculpting. Users can create their own pottery pieces by changing the position of their hands in real time, and they can generate 3D sound sculptures based on pre-existing rules of music composition. The versions of Virtual Pottery can be categorized by shape design and camera sensing type. This paper describes how we developed the two versions of Virtual Pottery and implemented the technical aspects of the interfaces. Additionally, we investigate ways of translating hand motions into musical sound; accurate detection of hand motions is crucial for carrying natural hand motions into virtual reality. According to the results of preliminary evaluations, the accuracy of both the motion-capture tracking system and the portable depth-sensing camera closely matches the actual data. We carried out user studies that took into account information about the two exhibitions along with the various ages of the users. Overall, Virtual Pottery serves as a bridge between the virtual environment and traditional art practices, and consequently it can help cultivate the deep potential of virtual musical instruments and future art education programs.


Multimedia and Ubiquitous Engineering | 2007

An Efficient Voice Transcription Scheme for Music Retrieval

Byeong-jun Han; Seungmin Rho; Eenjun Hwang

In this paper, we propose a new scheme for automatically transcribing sung or hummed queries into a sequence of pitch and duration pairs for efficient music retrieval. More specifically, we present two novel methods, WAE (windowed average energy) and a dynamic threshold method for ADF onsets, for note segmentation and onset/offset detection in acoustic signals, respectively. The former improves on previous energy-based approaches such as AE by defining small but coherent windows with local and global threshold values. The latter improves on the traditional global/local threshold method. Through various experiments on our prototype music retrieval system, we show the effectiveness of the proposed scheme.
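A minimal sketch of the windowed-average-energy idea: per-window energies are compared against a combination of global and local thresholds to locate note onsets. The window size, threshold weight, and test signal below are illustrative assumptions, not the paper's parameters.

```python
# Sketch: note onset detection from windowed average energy (WAE),
# combining a global threshold with a local (neighbourhood) threshold.
# All parameter values are illustrative assumptions.
import numpy as np

def wae_onsets(signal, win=256, alpha=0.5):
    n = len(signal) // win
    # Average energy per non-overlapping window.
    energy = np.array([np.mean(signal[i * win:(i + 1) * win] ** 2)
                       for i in range(n)])
    global_thr = alpha * energy.mean()
    onsets = []
    voiced = False
    for i, e in enumerate(energy):
        # Local threshold from a small trailing neighbourhood of windows.
        lo = max(0, i - 2)
        local_thr = alpha * energy[lo:i + 1].mean()
        above = e > max(global_thr, local_thr)
        if above and not voiced:
            onsets.append(i * win)  # note onset (window start, in samples)
        voiced = above
    return onsets

# Toy signal: silence, a tone burst, silence, another tone burst.
t = np.linspace(0, 1, 4096)
sig = np.zeros_like(t)
sig[1000:1800] = np.sin(2 * np.pi * 440 * t[1000:1800])
sig[2500:3300] = np.sin(2 * np.pi * 330 * t[2500:3300])
print(wae_onsets(sig))
```

Requiring the energy to exceed both thresholds is what makes the detector robust to a globally quiet recording while still rejecting locally noisy windows.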


ACM Multimedia | 2007

M-MUSICS: mobile content-based music retrieval system

Byeong-jun Han; Eenjun Hwang; Seungmin Rho; Minkoo Kim

Accurate voice humming transcription and efficient indexing schemes are essential for a large-scale humming-based music retrieval system. Although much research has been done to develop such schemes, their performance is still not satisfactory. In our previous work, we proposed (i) a new voice query transcription scheme [4], (ii) a popularity-adaptive indexing structure called FAI [6] for fast retrieval, and (iii) a semi-supervised relevance feedback and query reformulation scheme based on a genetic algorithm [7] to improve retrieval efficiency. In this demonstration, we extend our efforts to a mobile environment and develop a prototype mobile music retrieval system called M-MUSICS. Our focus in this implementation includes a versatile user interface for easy querying and browsing on a typical mobile device, such as a PDA phone, and satisfactory performance in a wireless mobile environment. We report some of our results.


ACM Multimedia | 2012

Digiti sonus: an interactive fingerprint sonification

Yoon Chung Han; Byeong-jun Han

Fingerprints are among the most distinctive visual patterns on the human body, representing both the innate and acquired identities of an individual. In this paper, we focus on the relationship between fingerprint patterns and human identities by transforming images into audio. Digiti Sonus, an interactive fingerprint sonification installation, embodies a novel idea: facilitating and enhancing interactive auditory meaning by transforming user-intended fingerprint expression into an audio spectrogram. To enable personalized sonification, the installation employs dynamic filter generation based on minutiae extraction using a core-invariant scanning method and image skeletonization.
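The general pattern behind such a sonification, point features extracted from an image driving synthesis parameters, can be sketched as follows. The minutiae coordinates are fake, and the position-to-frequency/gain mapping is an illustrative assumption, not the installation's actual dynamic-filter design.

```python
# Sketch: additive synthesis driven by point features (fake "minutiae").
# The (x, y) -> (gain, frequency) mapping is an illustrative assumption.
import numpy as np

SR = 22050  # sample rate in Hz

def sonify_minutiae(minutiae, duration=1.0):
    """Sum one sine partial per minutia; y controls pitch, x controls gain."""
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    audio = np.zeros_like(t)
    for x, y in minutiae:
        freq = 220.0 + 880.0 * y   # vertical position -> 220-1100 Hz
        gain = 0.2 + 0.8 * x       # horizontal position -> amplitude
        audio += gain * np.sin(2 * np.pi * freq * t)
    return audio / max(len(minutiae), 1)  # normalize by partial count

# Fake minutiae in normalized [0, 1] image coordinates.
points = [(0.1, 0.2), (0.5, 0.5), (0.9, 0.8)]
wave = sonify_minutiae(points)
print(wave.shape, float(np.abs(wave).max()))
```

Because each fingerprint yields a different minutiae set, each user's pattern produces a different chord of partials, which is the sense in which the sonification is personalized.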


International Conference on Computer Graphics and Interactive Techniques | 2013

Digiti sonus

Yoon Chung Han; Byeong-jun Han

Fingerprints are unique biometric patterns on human and primate bodies. They are clearly recognizable patterns that can be manipulated and saved into large databases. Due to their distinct and unique visual patterns, they have been useful for personal identification and security. In this digital era, many computing machines and digital interfaces use fingerprints as secure keys to identify and access personal information.


International Conference on Multimedia and Expo | 2007

MUSEMBLE: A Music Retrieval System Based on Learning Environment

Seungmin Rho; Byeong-jun Han; Eenjun Hwang; Minkoo Kim

Query reformulation has been suggested as an effective way to improve retrieval efficiency in text information retrieval, and one of the best-known techniques for query reformulation is user relevance feedback. Recently, there has been increased interest in query reformulation using relevance feedback with evolutionary techniques, such as genetic algorithms, for multimedia information retrieval. However, these techniques have still not been widely exploited in the field of music retrieval. In this paper, we propose a novel music retrieval scheme based on user relevance feedback with a genetic algorithm and an evolutionary method with a neural network: the former reformulates the user query, and the latter reduces the population size by training the neural network. We implemented a prototype music retrieval system called MUSEMBLE based on this scheme. Experimental results show that the proposed scheme achieves good performance.
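Relevance feedback with a genetic algorithm can be sketched as evolving candidate query vectors toward the results a user marked relevant. The fitness function (distance to the relevant centroid), GA parameters, and data below are illustrative assumptions, not MUSEMBLE's actual design, and the neural-network population-reduction step is omitted.

```python
# Sketch: query reformulation by a genetic algorithm, where fitness rewards
# closeness to the centroid of user-marked relevant results. All parameters
# and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

relevant = np.array([[0.9, 0.1, 0.8], [0.8, 0.2, 0.9]])  # user-marked hits
target = relevant.mean(axis=0)

def fitness(query):
    # Closer to the relevant centroid = fitter (less negative distance).
    return -np.linalg.norm(query - target)

def reformulate(initial_query, generations=50, pop_size=20, sigma=0.1):
    # Initial population: mutated copies of the original query.
    pop = initial_query + rng.normal(scale=sigma, size=(pop_size, 3))
    for _ in range(generations):
        order = np.argsort([fitness(q) for q in pop])[::-1]  # rank by fitness
        parents = pop[order][: pop_size // 2]                # selection
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        pop = np.vstack([parents, children])                 # next generation
    return max(pop, key=fitness)

q0 = np.array([0.2, 0.9, 0.1])
q_new = reformulate(q0)
print(fitness(q_new) > fitness(q0))  # reformulated query is closer
```

Keeping the parents in each generation (elitism) guarantees the best candidate never gets worse, so the reformulated query monotonically approaches the feedback centroid.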


Leonardo Music Journal | 2014

Skin Pattern Sonification as a New Timbral Expression

Yoon Chung Han; Byeong-jun Han

The authors discuss two sonification projects that transform fingerprint and skin patterns into audio: (1) Digiti Sonus, an interactive installation performing fingerprint sonification and visualization, and (2) skin pattern sonification, which converts pore networks into sound. The projects include novel techniques for representing user-intended fingerprint expression and skin pattern selection as audio parameters.


5th International Conference on Visual Information Engineering (VIE 2008) | 2008

A fuzzy inference-based music emotion recognition system

Sanghoon Jun; Seungmin Rho; Byeong-jun Han; Eenjun Hwang

Collaboration


Dive into Byeong-jun Han's collaborations.

Top Co-Authors


Yoon Chung Han

University of California


Matthew Wright

University of California
