
Publication


Featured research published by Kyoungsu Oh.


Pattern Recognition | 2004

GPU implementation of neural networks

Kyoungsu Oh; Keechul Jung

Abstract: A graphics processing unit (GPU) is used to accelerate an artificial neural network. The GPU implements the matrix multiplication of the neural network to improve the time performance of a text detection system. Preliminary results show a 20-fold performance improvement on an ATI RADEON 9700 PRO board. The parallelism of the GPU is fully utilized by accumulating many input feature vectors and weight vectors, then converting the many inner-product operations into a single matrix operation. Further research areas include benchmarking the performance on various hardware and developing GPU-aware learning algorithms.
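The batching idea in the abstract — accumulating many feature vectors and weight vectors so that many inner products become one matrix operation — can be illustrated with a small CPU sketch (NumPy here; the paper runs this on GPU hardware, and the sizes below are hypothetical):

```python
import numpy as np

# Batching many inner products into one matrix multiplication.
# Hypothetical sizes: 256 input feature vectors of dimension 64,
# and a layer with 32 neurons.
rng = np.random.default_rng(0)
features = rng.standard_normal((256, 64))   # accumulated input vectors
weights = rng.standard_normal((64, 32))     # one weight column per neuron

# Naive: one inner product per (vector, neuron) pair.
naive = np.array([[f @ weights[:, j] for j in range(32)] for f in features])

# Batched: a single matrix-matrix multiplication does the same work.
batched = features @ weights

assert np.allclose(naive, batched)
```

On a GPU the single large multiplication maps far better onto the parallel hardware than many small inner products, which is the source of the reported speedup.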


Virtual Reality Software and Technology | 2006

Pyramidal displacement mapping: a GPU based artifacts-free ray tracing through an image pyramid

Kyoungsu Oh; Hyunwoo Ki; Cheol-Hi Lee

Displacement mapping enables us to add details to polygonal meshes. We present a real-time, artifact-free inverse displacement mapping method using per-pixel ray tracing through an image pyramid on the GPU. For each pixel, we generate a ray and trace it through a displacement map to find an intersection. To skip empty regions safely, we traverse the quad-tree image pyramid of the displacement map in top-down order. For magnification, we estimate the intersection between the ray and a bilinearly interpolated displacement. For minification, we perform mipmap-like prefiltering to improve image quality and rendering performance. Results show that our method produces correct images even at steep grazing angles. Rendering speeds for the test scenes exceeded hundreds of frames per second and were only weakly influenced by the resolution of the map. Our method is simple enough to be added to existing virtual reality systems easily.
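The safe empty-space skipping rests on a quad-tree max pyramid: each coarser level stores the maximum height of a 2x2 block, so a ray that stays above a block's maximum cannot hit anything inside it. A minimal CPU sketch of that structure (the 4x4 height map is hypothetical; the paper traverses this pyramid per pixel on the GPU):

```python
import numpy as np

def build_max_pyramid(height_map):
    """Quad-tree style max pyramid: each coarser level stores the max
    height of a 2x2 block of the finer level (assumes a power-of-two map)."""
    levels = [height_map]
    while levels[-1].shape[0] > 1:
        h = levels[-1]
        coarser = h.reshape(h.shape[0] // 2, 2, h.shape[1] // 2, 2).max(axis=(1, 3))
        levels.append(coarser)
    return levels  # levels[0] is finest, levels[-1] is a single global max

heights = np.array([[0.1, 0.2, 0.0, 0.1],
                    [0.3, 0.9, 0.1, 0.0],
                    [0.0, 0.1, 0.4, 0.2],
                    [0.1, 0.0, 0.3, 0.5]])
pyr = build_max_pyramid(heights)

def can_skip(block_max, ray_min_height):
    # If the ray never drops below the block's maximum height while
    # crossing the block, no intersection is possible inside it.
    return ray_min_height > block_max

assert pyr[-1][0, 0] == 0.9          # root holds the global maximum
assert can_skip(pyr[1][1, 1], 0.6)   # a ray at height 0.6 clears the 0.5-max block
```

Traversing top-down, the ray descends into a finer level only where a block cannot be skipped, which is what makes the method both fast and artifact-free.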


The Visual Computer | 2008

A GPU-based light hierarchy for real-time approximate illumination

Hyunwoo Ki; Kyoungsu Oh

Illumination rendering, including environment lighting, indirect illumination, and subsurface scattering, plays an important role in many graphics applications such as games and VR systems. However, it is difficult to run in real time due to its high computational cost. We introduce a GPU-based light hierarchy for real-time approximation of illumination. We store virtual point lights in images and then build a view-independent hierarchy of the lights as image pyramids, using a simple and rapid clustering strategy. We approximate the illumination with a small number of light groups instead of a large number of individual lights, using a new tree-traversal algorithm on programmable graphics hardware. Although we implemented our method without occlusion, we obtained visually good results in many cases. All steps run on programmable graphics hardware in real time without any preprocessing.
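Because the virtual point lights (VPLs) live in an image, a pyramid level can be built by merging each 2x2 group of lights into one representative — a CPU sketch of that clustering step (the intensity-weighted average representative and the 4x4 sizes are assumptions for illustration):

```python
import numpy as np

# Hypothetical 4x4 image of VPLs: one position and one intensity per texel.
positions = np.random.default_rng(3).random((4, 4, 3))  # VPL positions
intensity = np.ones((4, 4))                             # VPL intensities

def coarser_level(pos, inten):
    """Merge each 2x2 group of lights into one representative light."""
    # Gather the four lights of each 2x2 block along a new axis.
    i4 = inten.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(2, 2, 4)
    p4 = pos.reshape(2, 2, 2, 2, 3).transpose(0, 2, 1, 3, 4).reshape(2, 2, 4, 3)
    total = i4.sum(axis=2)                  # merged light carries the summed power
    # Intensity-weighted average position represents the group.
    rep = (p4 * i4[..., None]).sum(axis=2) / total[..., None]
    return rep, total

rep, total = coarser_level(positions, intensity)
```

Shading then starts at the coarse level and descends into a group's children only where the coarse approximation is too crude, which is the role of the tree-traversal algorithm the abstract mentions.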


Virtual Reality Software and Technology | 2006

Hardware-accelerated jaggy-free visual hulls with silhouette maps

Chulhan Lee; Junho Cho; Kyoungsu Oh

A visual hull is the intersection of the cones formed by back-projecting the silhouettes of reference images. We introduce a real-time, jaggy-free visual hull rendering method on programmable graphics hardware. Using a texture mapping approach, we render the visual hull quickly, and at each silhouette pixel we produce jaggy-free images using silhouette information. Our implementation demonstrates high-quality images in real time. The complexity of our algorithm is O(N), where N is the number of reference images; the examples in this paper are thus rendered at over one hundred frames per second without jaggies.
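The O(N) cost follows from the membership test itself: a 3D point lies inside the visual hull iff its projection falls inside the silhouette in every one of the N reference images. A minimal sketch, with hypothetical axis-aligned orthographic views and square silhouettes:

```python
import numpy as np

# Three hypothetical 8x8 binary silhouettes, one per reference view.
silhouettes = [np.zeros((8, 8), bool) for _ in range(3)]
for s in silhouettes:
    s[2:6, 2:6] = True   # a square silhouette in each view

def project(point, view):
    # Orthographic projection dropping one axis per view (assumption
    # for illustration; the paper uses real camera projections).
    axes = [(0, 1), (0, 2), (1, 2)]
    a, b = axes[view]
    return int(point[a]), int(point[b])

def inside_visual_hull(point):
    # Inside the hull iff inside the silhouette of EVERY reference
    # image -- hence cost linear in the number of reference images.
    return all(silhouettes[v][project(point, v)] for v in range(3))

assert inside_visual_hull((3, 3, 3))
assert not inside_visual_hull((0, 0, 0))
```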


Pacific Conference on Computer Graphics and Applications | 2007

Real-Time Approximate Subsurface Scattering on Graphics Hardware

Hyunwoo Ki; Jihye Lyu; Kyoungsu Oh

This paper presents an image-space approximation technique for real-time subsurface scattering. We first create transmitted irradiance samples on shadow maps and then estimate single scattering efficiently, using a method similar to shadow mapping with adaptive deterministic sampling. We combine this single scattering with a recently proposed technique for multiple scattering. We demonstrate that our technique produces high-quality images of animated scenes, achieving hundreds of frames per second on graphics hardware without lengthy preprocessing.


International Conference on Computational Science and Its Applications | 2007

Real-Time Rendering of Multi-View Images from a Single Image with Depth

Kyoungsu Oh; Hyowon Kim; Chulhan Lee; Hyunwoo Ki

Image reprojection is a technique for generating novel images by projecting a reference image from an arbitrary view. Previous image reprojection methods often run on the CPU and incur high rendering costs. We present a real-time image reprojection method that runs entirely on the GPU. Given a prerendered reference image and its depth image of the scene, we generate novel images at arbitrary viewpoints without the original geometry data. We render a simple plane at the novel view and, for each pixel being rendered, generate a ray facing the opposite direction of the view. We then transform the ray into reference-image space and trace it through the depth image to find an intersection, using a recently proposed method. In our experiments, we achieved tens of frames per second, independent of the complexity of the scene.
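The underlying geometry of reprojection can be shown with a forward CPU sketch: unproject a reference pixel using its depth, move it into the novel camera's frame, and project it back. The pinhole intrinsics and the pure-translation camera motion below are hypothetical (the paper instead traces a ray per novel-view pixel through the depth image on the GPU):

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal length 500, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

def reproject(u, v, depth, t_novel):
    """Unproject (u, v, depth) to a 3D point in the reference camera,
    shift into a novel camera translated by t_novel, project back."""
    p_ref = depth * (K_inv @ np.array([u, v, 1.0]))  # 3D point, reference frame
    p_new = p_ref - t_novel                          # novel camera frame
    uvw = K @ p_new
    return uvw[:2] / uvw[2]

# A pixel at the principal point, 2 m deep, viewed from a camera moved
# 0.1 m to the right, shifts left in the novel image (320 -> 295).
u2, v2 = reproject(320.0, 240.0, 2.0, np.array([0.1, 0.0, 0.0]))
assert abs(u2 - 295.0) < 1e-9 and abs(v2 - 240.0) < 1e-9
```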


International Conference on Artificial Reality and Telexistence | 2006

High-quality shear-warp volume rendering using efficient supersampling and pre-integration technique

Heewon Kye; Kyoungsu Oh

Although shear-warp volume rendering is the fastest rendering method, its image quality is not as good as that of other high-quality rendering methods. In this paper, we propose two methods to improve the image quality of shear-warp volume rendering. First, supersampling is performed in the intermediate image space, and we propose an efficient method to transform between the volume and image coordinates at an arbitrary ratio. Second, the pre-integrated rendering technique is adapted to shear-warp rendering. To do this, a new data structure called the overlapped min-max block is used. With this structure, empty-space leaping can still be performed, so rendering speed is maintained even when pre-integrated rendering is applied. Consequently, shear-warp rendering can generate high-quality images comparable to those of ray casting without degrading speed.
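The min-max block idea behind the empty-space leaping can be sketched on the CPU: store the minimum and maximum voxel value per block, and skip any block whose value range is fully transparent under the transfer function. Block size, volume contents, and the threshold transfer function below are hypothetical (and this sketch omits the "overlapped" refinement the paper adds for pre-integration):

```python
import numpy as np

# Hypothetical 64^3 volume of 8-bit density values.
volume = np.random.default_rng(1).integers(0, 256, size=(64, 64, 64))
BLOCK = 8

def block_min_max(vol, block):
    """Per-block min and max over non-overlapping block^3 cells."""
    b = vol.reshape(vol.shape[0] // block, block,
                    vol.shape[1] // block, block,
                    vol.shape[2] // block, block)
    return b.min(axis=(1, 3, 5)), b.max(axis=(1, 3, 5))

mins, maxs = block_min_max(volume, BLOCK)

def block_is_empty(vmin, vmax, threshold=200):
    # Transfer function assumed fully transparent below `threshold`:
    # if even the block's maximum is below it, nothing inside is visible.
    return vmax < threshold

empty = block_is_empty(mins, maxs)   # boolean mask of skippable blocks
```

During rendering, rays simply leap over blocks flagged as empty instead of sampling them, which is why speed survives the extra cost of pre-integrated classification.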


Journal of Korea Game Society | 2014

A Real-time Soft Shadow Rendering Method under the Area Lights having an Arbitrary Shape

Youngjae Chun; Kyoungsu Oh

Abstract: The presence of soft shadows cast by an area light makes virtual scenes look more realistic. However, since computing soft shadows takes a long time, acceleration methods are required to use them in real-time 3D applications. Many previous studies assumed that area lights are white rectangles. We suggest a new method that renders soft shadows under area light sources of arbitrary shape and color. To approximate the visibility test, we use the shadow mapping result near a pixel, and the complexity of the shadow near the pixel determines the precision of our visibility estimation. As a result, our method can render more realistic soft shadows in real time for area lights with more general shapes and colors.

Keywords: real-time rendering, area light illumination, soft shadow mapping
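One way to picture handling an arbitrarily shaped, colored light is to pair each shadow-map visibility sample with a point on the light's color texture, then sum the colors of the visible points. This is only a sketch of that coupling under stated assumptions — the sample pattern, the 4x4 light texture, and the per-sample occlusion results below are all hypothetical, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
light_color = rng.random((4, 4, 3))     # arbitrary-shape/colored area light texture
occluded = rng.random((4, 4)) < 0.5     # hypothetical per-sample shadow-map tests

# Each visibility sample corresponds to one point on the area light;
# accumulate the colors of the points the pixel can actually see.
visible = ~occluded
shading = (light_color * visible[..., None]).sum(axis=(0, 1)) / occluded.size
```

A fully visible pixel receives the light's full average color, a fully occluded one receives black, and partial occlusion naturally tints the penumbra with the light's colors.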


Virtual Reality Continuum and Its Applications in Industry | 2008

Real-time image-based 3D avatar for immersive game

Chulhan Lee; Hohyun Lee; Kyoungsu Oh

We developed an action game based on real-time interaction between a 3D avatar of the player and virtual game characters. The 3D avatar is reconstructed from multi-view images of the player using image-based modeling and rendering techniques, and it can be dynamically reconstructed and rendered in real time with the hardware-accelerated visual hull (HAVH) method. The visual appearance and physical activity of the player are projected onto the avatar, so players can see themselves in the virtual 3D space and interact with virtual objects through the bodily movements of their avatar in the gaming world. The combination of movement-based interaction and realistic visual appearance makes games more realistic and immersive.


International Conference on E-Learning and Games | 2010

Multiple layer displacement mapping with lossless image compression

Youngjae Chun; Sun-Yong Park; Kyoungsu Oh

We introduce a lossless compression and rendering technique for multiple-layer displacement maps; the compressed data are handled on the GPU for real-time rendering. A multiple-layer displacement map is useful for representing general objects that cannot be represented by a single-layer displacement map, and such methods can provide realistic objects in digital content such as 3D games and films at relatively low cost. Representing high-quality object details requires many layers, but while the first layer holds the most data, the lower layers hold progressively less. Lower layers therefore waste space, because the same space is allocated for every layer. We store only the data used for rendering and build an address map to refer to the stored data, and we also compress additional information such as normal vectors and diffuse colors. The main advantage is that all the compressed data share a single address map, and compression efficiency improves as more layers are used. Since we compress the maps without any data loss, the proposed technique provides the same quality as rendering with the original maps.
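The address-map idea can be sketched on the CPU: pack only the occupied layer entries into one flat array and keep a per-texel (offset, count) pair to find them again, so sparsely filled lower layers cost nothing. The 2x2 map, the -1 sentinel for unused slots, and the list storage below are illustrative assumptions (the paper stores these structures in GPU textures):

```python
import numpy as np

# Hypothetical 2x2 map with up to 3 depth layers; -1 marks an unused slot.
layers = np.array([[[0.2, 0.5, -1.0], [0.1, -1.0, -1.0]],
                   [[0.3, 0.6, 0.9], [-1.0, -1.0, -1.0]]])

data = []                           # packed layer values, no wasted slots
offsets = np.zeros((2, 2), int)     # address map: where a texel's data starts
counts = np.zeros((2, 2), int)      # address map: how many layers it has
for y in range(2):
    for x in range(2):
        valid = layers[y, x][layers[y, x] >= 0.0]
        offsets[y, x] = len(data)
        counts[y, x] = len(valid)
        data.extend(valid)

def lookup(y, x):
    """Recover every stored layer value at a texel via the address map."""
    o, c = offsets[y, x], counts[y, x]
    return data[o:o + c]

assert lookup(0, 0) == [0.2, 0.5]
assert lookup(1, 0) == [0.3, 0.6, 0.9]
assert len(data) == 6     # 6 stored values instead of 2*2*3 = 12 slots
```

Because lookups reconstruct exactly the original values, the compression is lossless, and extra per-texel attributes (normals, diffuse colors) can reuse the same single address map.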
