
Publication


Featured research published by Youngha Chang.


IEEE Transactions on Image Processing | 2007

Example-Based Color Transformation of Image and Video Using Basic Color Categories

Youngha Chang; Suguru Saito; Masayuki Nakajima

Color transformation is the most effective method for improving the mood of an image, because color has a large influence on the mood. However, conventional color transformation tools involve a tradeoff between the quality of the resultant image and the amount of manual operation. To achieve a more detailed and natural result with less labor, we previously suggested a method that performs example-based color stylization of images using perceptual color categories. In this paper, we extend this method to make the algorithm more robust and to stylize the colors of video frame sequences. We present a variety of results, arguing that these images and videos convey a different, but coherent, mood.


ACM Transactions on Applied Perception | 2005

Example-Based Color Stylization of Images

Youngha Chang; Suguru Saito; Keiji Uchikawa; Masayuki Nakajima

We describe a new computational approach to stylize the colors of an image by using a reference image. During processing, we take the characteristics of human color perception into account to generate more appealing results. Our system starts by classifying each pixel value into one of the basic color categories, derived from our psychophysical experiments. The basic color categories are perceptual categories that are universal to everyone, regardless of nationality or cultural background. These categories are used to provide restrictions on color transformations to avoid generating unnatural results. Our system then renders a new image by transferring colors from a reference image to the input image, based on these categorizations. To avoid artifacts due to the explicit clustering, our system defines fuzzy categorization when pseudocontours appear in the resulting image. We present a variety of results and show that our method performs a large, yet natural, color transformation without any sense of incongruity and that the resulting images automatically capture the characteristics of the colors used in the reference image.
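A minimal sketch of the category-restricted transfer idea described above, assuming crisp nearest-prototype categorization and a per-category mean shift. The prototype values and the transfer rule below are illustrative assumptions, not the paper's psychophysically derived categories or algorithm:

```python
import numpy as np

# Hypothetical prototypes for a few basic color categories (RGB in 0..1).
# The paper derives its categories from psychophysical experiments; these
# values are placeholders for illustration only.
CATEGORY_PROTOTYPES = {
    "red":    np.array([0.8, 0.1, 0.1]),
    "green":  np.array([0.1, 0.7, 0.2]),
    "blue":   np.array([0.1, 0.2, 0.8]),
    "yellow": np.array([0.9, 0.8, 0.1]),
}

def categorize(pixel):
    """Assign a pixel to the nearest category prototype (crisp version)."""
    return min(CATEGORY_PROTOTYPES,
               key=lambda c: np.linalg.norm(pixel - CATEGORY_PROTOTYPES[c]))

def transfer_colors(src, ref):
    """Shift each source pixel toward the mean reference color of its category.

    src, ref: (N, 3) float arrays of RGB pixels.
    """
    src_cats = np.array([categorize(p) for p in src])
    ref_cats = np.array([categorize(p) for p in ref])
    out = src.copy()
    for cat in CATEGORY_PROTOTYPES:
        s_mask, r_mask = src_cats == cat, ref_cats == cat
        if s_mask.any() and r_mask.any():
            # Move the source cluster so its mean matches the reference mean;
            # restricting the shift to same-category pixels is what keeps the
            # result looking natural in the approach described above.
            out[s_mask] += ref[r_mask].mean(axis=0) - src[s_mask].mean(axis=0)
    return np.clip(out, 0.0, 1.0)
```

The paper's fuzzy categorization would soften the crisp `categorize` assignment near category boundaries to suppress pseudocontours; that refinement is omitted here.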


Applied Perception in Graphics and Visualization | 2004

Example-based color stylization based on categorical perception

Youngha Chang; Keiji Uchikawa; Suguru Saito

We describe a new computational approach to stylize the colors of an image by using a reference image. During processing, we take characteristics of human color perception into account to generate more appealing results. Our system starts by classifying each pixel value into one of a set of basic color categories, derived from our psychophysical experiments. The basic color categories are perceptual categories that are universal to everyone, regardless of nationality or cultural background. These categories provide restrictions on the color transformations to avoid generating unnatural results. Our system then renders a new image by transferring colors from a reference image to the input image, based on this categorization. To avoid artifacts due to the explicit clustering, our system defines fuzzy categorization when pseudo-contours appear in the resulting image. We present a variety of results and show that our method performs a large, yet natural, color transformation without any sense of incongruity, and that the resulting images automatically capture the characteristics of the colors used in the reference image.


International Conference on Computer Graphics and Interactive Techniques | 2005

Example-based color transformation for image and video

Youngha Chang; Suguru Saito; Masayuki Nakajima

Color is very important in setting the mood of images and video sequences. For this reason, color transformation is one of the most important features of photo-editing and video post-production tools, because even slight modifications of the colors in an image can strongly increase its visual appeal. However, conventional color editing tools require manual operation for detailed color manipulation. Such manual operation becomes a burden, especially when editing video frame sequences. To avoid this problem, we previously suggested a method [Chang et al. 2004] that performs example-based color stylization of images using perceptual color categories. In this paper, we extend this method to make the algorithm more robust and to stylize the colors of video frame sequences. The main extensions are the following five points: making the method applicable to images taken under a variety of lighting conditions; speeding up the color naming step; improving the mapping between source and reference colors when the chromatic categories differ in size; handling the achromatic categories separately from the chromatic categories; and extending the algorithm along the temporal axis to allow video processing. We present a variety of results, arguing that these images and videos convey a different, but coherent, mood.
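One way to picture the temporal extension is to smooth the per-category color shifts across frames so the stylization does not flicker. The running-average rule and `alpha` below are illustrative assumptions, not the paper's algorithm:

```python
def smooth_shifts(per_frame_shifts, alpha=0.8):
    """Temporally smooth per-category color shifts across video frames.

    per_frame_shifts: list (one entry per frame) of {category: (dr, dg, db)}.
    alpha: weight given to the running average (illustrative value).
    Returns a list of the same shape with flicker-reduced shifts.
    """
    smoothed, running = [], {}
    for shifts in per_frame_shifts:
        frame = {}
        for cat, shift in shifts.items():
            prev = running.get(cat, shift)
            # Blend this frame's shift with the running average so the
            # stylization changes gradually along the temporal axis.
            frame[cat] = tuple(alpha * p + (1 - alpha) * s
                               for p, s in zip(prev, shift))
            running[cat] = frame[cat]
        smoothed.append(frame)
    return smoothed
```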


The Visual Computer | 2007

Curvature-based stroke rendering

Suguru Saito; Akane Kani; Youngha Chang; Masayuki Nakajima

This paper describes an algorithm that renders lines with varying thickness and sharp tapered ends. The algorithm does not require any special information about each local point of a line: the thickness is determined solely by curvature and by the lengths from both ends. Therefore, the algorithm is applicable in a variety of line rendering situations, such as 3D rendering engines for high-quality cel-animation-like effects, reuse of geometrical data designed in CAD for advertising purposes, and edge enhancement in photo retouching with edge detection methods. In addition, using the generated varying thicknesses, we have developed algorithms for shading and embossing effects.
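The thickness rule, curvature plus distance from both ends, can be sketched as follows; `base`, `k_gain`, and the linear end taper are illustrative assumptions, not the paper's formulation:

```python
def stroke_thickness(curvatures, arc_lengths, total_length,
                     base=2.0, k_gain=4.0, taper=0.15):
    """Per-sample stroke thickness from curvature and distance to both ends.

    curvatures:   per-sample curvature values (1/radius).
    arc_lengths:  per-sample arc length measured from the stroke's start.
    total_length: full stroke length.
    """
    taper_len = taper * total_length
    out = []
    for kappa, s in zip(curvatures, arc_lengths):
        # Thicker where the curve bends more sharply.
        w = base * (1.0 + k_gain * abs(kappa))
        # Sharp tapered ends: fade linearly within taper_len of either end,
        # so no per-point thickness data is needed from the input line.
        d_end = min(s, total_length - s)
        w *= min(1.0, d_end / taper_len)
        out.append(w)
    return out
```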


International Conference on Computer Graphics and Interactive Techniques | 2002

Rich curve drawing

Suguru Saito; Akane Kani; Youngha Chang; Masayuki Nakajima

In simple-style drawings, such as comics and traditional cel animation, curved strokes are especially important. The shape of the curve is, of course, the most important element, but subtle changes in curve width cannot be ignored. We propose a method for applying subtle width changes to general 2D curve data. The algorithm is based on the curvature information of the input curve and carefully preserves the impression of the original curvature. The resulting image expresses a pen-and-ink drawing style.


International Conference on Simulation and Modeling Methodologies, Technologies and Applications | 2017

Blood Flow and Pressure Change Simulation in the Aorta with the Model Generated from CT Data.

Nobuhiko Mukai; Kazuhiro Aoyama; Yuhei Okamoto; Youngha Chang

We have simulated blood flow and pressure change in the aorta with a model generated from CT (Computerized Tomography) data. There has been previous research related to the aortic valve and blood flow in the aorta: some works simulated aortic valve behavior with artificial models, while others investigated blood flow in the aorta with models generated from MRI (Magnetic Resonance Imaging) data. In this paper, we demonstrate the simulation of blood flow and pressure change in the aorta with a model, generated from CT data, that includes not only the aorta but also the left ventricle. In the simulation, blood flows into the left ventricle through the mitral valve, the pressure increases according to this inflow, and the aortic valve opens as the pressure in the left ventricle rises. Finally, we have confirmed that the simulated pressure change in the left ventricle corresponds to a value from the literature.


Cyberworlds | 2015

Crowd Simulation by Applying Individual Human Model with Vision

Nobuhiko Mukai; Kensuke Tanaka; Youngha Chang

One of the most difficult tasks in the computer graphics field is the simulation of human behavior. Some researchers have tried to generate human behavior in a virtual space by introducing psychology and/or by considering collision avoidance methods. Many previous works, however, rely on a control center that knows everything about individual positions, directions of movement, and so forth, and passes some of this information to each person. In addition, individual and crowd behavior are different and are treated separately. This paper proposes a virtual human model with his or her own eyes, through which each person obtains the information necessary for his or her actions. In addition, we have tried to simulate crowd behavior by applying this individual human model to many people, without introducing particle or flocking systems.
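A minimal sketch of the decentralized idea, each agent acting only on what falls inside its own view cone; the cone test, the turn-away rule, and all parameter values are illustrative assumptions, not the paper's model:

```python
import math
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    y: float
    heading: float            # radians
    fov: float = math.pi / 2  # field of view (illustrative value)

    def visible(self, other):
        """True if `other` falls inside this agent's view cone."""
        angle = math.atan2(other.y - self.y, other.x - self.x)
        diff = (angle - self.heading + math.pi) % (2 * math.pi) - math.pi
        return abs(diff) <= self.fov / 2

def step(agents, speed=0.1, avoid_turn=0.3, radius=1.0):
    """Advance every agent one tick using only what it can see itself."""
    for a in agents:
        # Each agent inspects its own view; there is no global controller
        # handing out positions or directions.
        threats = [b for b in agents if b is not a and a.visible(b)
                   and math.hypot(b.x - a.x, b.y - a.y) < radius]
        if threats:
            b = min(threats, key=lambda t: math.hypot(t.x - a.x, t.y - a.y))
            to_b = math.atan2(b.y - a.y, b.x - a.x)
            side = (to_b - a.heading + math.pi) % (2 * math.pi) - math.pi
            # Turn away from the nearest visible neighbor.
            a.heading -= avoid_turn if side >= 0 else -avoid_turn
        a.x += speed * math.cos(a.heading)
        a.y += speed * math.sin(a.heading)
```

Because each agent carries its own sensing, the same `step` loop serves both a single individual and a crowd, without any particle or flocking layer.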


Computer Graphics International | 2003

Generation of varying line thickness

Suguru Saito; Akane Kani; Youngha Chang; Masayuki Nakajima

We describe an algorithm that generates lines of varying thickness without any special information about the lines: the thickness is derived only from curvature and the lengths from both ends. Therefore, the algorithm is applicable in a variety of line rendering situations, for example 3D rendering engines for high-quality cel-animation-like effects, reuse of geometrical data designed in CAD for advertising purposes, and photo retouching processes with edge detection methods. In addition, using the generated varying thickness, we also describe algorithms for shading and embossing line rendering.


Computer Graphics International | 2003

A framework for transfer colors based on the basic color categories

Youngha Chang; Suguru Saito; Masayuki Nakajima

Collaboration


Dive into Youngha Chang's collaborations.

Top Co-Authors

Masayuki Nakajima (Tokyo Institute of Technology)

Suguru Saito (Tokyo Institute of Technology)

Akane Kani (Tokyo Institute of Technology)

Hiroki Takahashi (Tokyo Institute of Technology)

Keiji Uchikawa (Tokyo Institute of Technology)

Alexis Andre (Tokyo Institute of Technology)

Kazuaki Suzuki (Tokyo Institute of Technology)