Publication


Featured research published by Hanyoung Jang.


Intelligent Robots and Systems | 2005

A visibility-based accessibility analysis of the grasp points for real-time manipulation

Hanyoung Jang; Hadi Moradi; Sukhan Lee; JungHyun Han

This paper presents a novel approach to accessibility analysis for manipulative robotic tasks. The workspace is captured using a stereo camera and heterogeneously modeled with recognized plane features, recognized objects with complete solid models, and unrecognized 3D point clouds organized in a multi-resolution octree. When the service robot is requested to manipulate a recognized object, the local accessibility information for the object is retrieved from the object database. Accessibility analysis is then performed to verify the object's accessibility and determine the global accessibility. The verification process relies on the visibility query, which is accelerated by graphics hardware. The experimental results show the feasibility of real-time, behavior-oriented 3D modeling of the workspace for robotic manipulative tasks, as well as the performance gain obtained by running the hardware-accelerated accessibility analysis on a commodity graphics card.
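
The core operation here is the visibility query that decides whether a grasp point can be reached along a candidate access direction. The following is a minimal CPU-only sketch of that idea, with a software depth buffer standing in for the hardware-accelerated query; the point-cloud obstacle representation, the buffer resolution, and the single-pixel footprint test are simplifying assumptions, not the paper's implementation.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def _basis(n):
    """Two unit vectors spanning the plane orthogonal to the unit vector n."""
    t = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = _cross(n, t)
    norm = math.sqrt(_dot(u, u))
    u = tuple(c / norm for c in u)
    return u, _cross(n, u)

def accessible(grasp_point, direction, obstacle_points, res=64, extent=1.0,
               clearance=0.02):
    """'direction' points from the grasp point toward the approaching gripper.
    The approach is considered blocked if, in the orthographic projection along
    'direction', any obstacle sample in the grasp point's pixel lies farther
    along the approach axis than the grasp point itself."""
    u, v = _basis(direction)
    # Software depth buffer: per pixel, the farthest obstacle depth.
    buf = [[-math.inf] * res for _ in range(res)]

    def pixel(p):
        return (int((_dot(p, u) / extent + 0.5) * res),
                int((_dot(p, v) / extent + 0.5) * res))

    for p in obstacle_points:
        i, j = pixel(p)
        if 0 <= i < res and 0 <= j < res:
            buf[j][i] = max(buf[j][i], _dot(p, direction))
    gi, gj = pixel(grasp_point)
    if not (0 <= gi < res and 0 <= gj < res):
        return True
    return buf[gj][gi] <= _dot(grasp_point, direction) + clearance
```

A real implementation would test a pixel footprint matching the gripper width rather than a single pixel, and would issue the query as an occlusion test on the GPU.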


Intelligent Robots and Systems | 2006

Spatial Reasoning for Real-time Robotic Manipulation

Hanyoung Jang; Hadi Moradi; Suyeon Hong; Sukhan Lee; JungHyun Han

Presented in this paper is an approach to real-time spatial reasoning for manipulative robotic tasks. When a service robot is requested to manipulate an object, it must determine the directions along which it can access and remove the object. The potentially accessible directions for the object are retrieved from the object database. Spatial reasoning over the surrounding environment and the gripper geometry is then invoked to verify these directions. The verification process mainly relies on the visibility test of commodity graphics hardware. Next, the directions along which both the object and the gripper can be translated without colliding with the surrounding obstacles are computed using the Minkowski sum and the cube map of the graphics hardware. The access and removal directions are passed to a potential-field path-planning algorithm to determine the robot arm's full path for accessing, removing, and delivering the object. The experimental results show the feasibility of using graphics hardware for manipulative robotic tasks, as well as its performance gain in real-time manipulation.
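
To make the direction-verification step concrete, here is a rough sketch under strong simplifications: candidate directions are sampled on a sphere (standing in for the cube map), obstacles are approximated by spheres inflated by the object-plus-gripper radius (a crude stand-in for the Minkowski sum), and a direction survives if an escape ray hits nothing. All geometry and parameters are illustrative.

```python
import math

def sample_directions(n=64):
    """Roughly uniform unit directions (golden-spiral sampling), standing in
    for the cube-map faces used to discretize direction space."""
    dirs = []
    golden = math.pi * (3.0 - math.sqrt(5.0))
    for k in range(n):
        z = 1.0 - 2.0 * (k + 0.5) / n
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = golden * k
        dirs.append((r * math.cos(phi), r * math.sin(phi), z))
    return dirs

def ray_hits_sphere(origin, direction, center, radius):
    """True if the ray origin + t*direction (t >= 0, |direction| = 1) meets
    the sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    sqrt_disc = math.sqrt(disc)
    return (-b - sqrt_disc) >= 0.0 or (-b + sqrt_disc) >= 0.0

def removal_directions(object_center, object_radius, gripper_radius, obstacles):
    """obstacles: list of (center, radius) spheres approximating the clutter.
    Inflating each obstacle by the object + gripper radius plays the role of
    the Minkowski sum; a direction is kept if the escape ray hits nothing."""
    grow = object_radius + gripper_radius
    free = []
    for d in sample_directions():
        blocked = any(ray_hits_sphere(object_center, d, c, r + grow)
                      for c, r in obstacles)
        if not blocked:
            free.append(d)
    return free
```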


Computer-Aided Design | 2008

Visibility-based spatial reasoning for object manipulation in cluttered environments

Hanyoung Jang; Hadi Moradi; Phuoc Le Minh; Sukhan Lee; JungHyun Han

In this paper, we present visibility-based spatial reasoning techniques for real-time object manipulation in cluttered environments. When a robot is requested to manipulate an object, a collision-free path must be determined to access, grasp, and move the target object. This often requires time-consuming motion-planning routines, making real-time object manipulation difficult or infeasible, especially for a robot with a high DOF and/or in a highly cluttered environment. This paper places special emphasis on developing real-time motion planning, in particular for accessing and removing an object in a cluttered workspace, as a local planner that can be integrated with a general motion planner for improved overall efficiency. In the proposed approach, the access direction for grasping the object is determined through a visibility query, and the removal direction for retrieving the object grasped by the gripper is computed using an environment map. The experimental results demonstrate that the proposed approach, when implemented on graphics hardware, is fast and robust enough to manipulate 3D objects in real-time applications.
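
The removal-direction computation via an environment map can be pictured as binning obstacle samples over direction space. The sketch below uses an azimuth/elevation grid as a software substitute for the cube-map environment map; the bin counts and the point-sample obstacle model are assumptions made for illustration.

```python
import math

def build_environment_map(object_center, obstacle_points, bins_az=36, bins_el=18):
    """Mark each (elevation, azimuth) bin that contains an obstacle sample."""
    blocked = [[False] * bins_az for _ in range(bins_el)]
    for p in obstacle_points:
        dx, dy, dz = (p[i] - object_center[i] for i in range(3))
        az = math.atan2(dy, dx)                      # [-pi, pi]
        el = math.atan2(dz, math.hypot(dx, dy))      # [-pi/2, pi/2]
        i = min(bins_el - 1, int((el + math.pi / 2) / math.pi * bins_el))
        j = min(bins_az - 1, int((az + math.pi) / (2 * math.pi) * bins_az))
        blocked[i][j] = True
    return blocked

def removal_directions(object_center, obstacle_points):
    """Return unit directions whose bins contain no obstacle sample."""
    env = build_environment_map(object_center, obstacle_points)
    bins_el, bins_az = len(env), len(env[0])
    dirs = []
    for i in range(bins_el):
        for j in range(bins_az):
            if env[i][j]:
                continue
            el = (i + 0.5) / bins_el * math.pi - math.pi / 2
            az = (j + 0.5) / bins_az * 2 * math.pi - math.pi
            dirs.append((math.cos(el) * math.cos(az),
                         math.cos(el) * math.sin(az),
                         math.sin(el)))
    return dirs
```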


The Visual Computer | 2015

Multi-resolution terrain rendering with GPU tessellation

Hyeongyeop Kang; Hanyoung Jang; Chang Sik Cho; JungHyun Han

GPU tessellation is very efficient and is reshaping the terrain-rendering paradigm. We present a novel terrain-rendering algorithm based on GPU tessellation. The planar domain of the terrain is partitioned into a set of tiles, and a coarse-grained quadtree is constructed for each tile using a screen-space error metric. Then, each node of the quadtree is input to the GPU pipeline together with its own tessellation factors. The nodes are tessellated and the vertices of the tessellated mesh are displaced by filtering the displacement maps. The multi-resolution scheme is designed to optimize the use of GPU tessellation. Further, it accepts not only height maps but also geometry images, which displace more vertices toward the higher-curvature feature parts of the terrain surface such that the surface detail can be well reconstructed with a small number of vertices. The efficiency of the proposed method is proven through experiments on large terrain models. When the screen-space error threshold is set to one pixel, a terrain surface tessellated into 8.5 M triangles is rendered at 110 fps on commodity PCs.
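
A minimal sketch of the screen-space-error-driven quadtree refinement follows. The projected-error formula, the node layout, and the halving of the children's geometric error are generic LOD machinery assumed for illustration, not the paper's exact scheme.

```python
import math

class Node:
    def __init__(self, center, size, geometric_error):
        self.center = center            # (x, y, z) of the patch center
        self.size = size                # edge length of the square patch
        self.error = geometric_error    # max height deviation of this patch
        self.children = []

def screen_space_error(node, eye, screen_width_px, fov_y):
    """Project the node's geometric error to an approximate size in pixels."""
    d = max(1e-6, math.dist(node.center, eye))
    k = screen_width_px / (2.0 * math.tan(fov_y / 2.0))
    return node.error * k / d

def refine(node, eye, screen_width_px, fov_y, threshold_px=1.0, max_depth=8,
           depth=0, out=None):
    """Collect the nodes that would be sent to the tessellator: refinement
    stops once the projected error drops below the pixel threshold."""
    if out is None:
        out = []
    err_px = screen_space_error(node, eye, screen_width_px, fov_y)
    if err_px <= threshold_px or depth == max_depth:
        # A real renderer would also derive the hardware tessellation
        # factors for this node from err_px here.
        out.append((node, err_px))
        return out
    if not node.children:
        node.children = _split(node)
    for child in node.children:
        refine(child, eye, screen_width_px, fov_y, threshold_px, max_depth,
               depth + 1, out)
    return out

def _split(node):
    # Four children; halving the geometric error is an assumption.
    cx, cy, cz = node.center
    h = node.size / 4.0
    return [Node((cx + dx, cy, cz + dz), node.size / 2.0, node.error / 2.0)
            for dx in (-h, h) for dz in (-h, h)]
```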


Computer Graphics Forum | 2012

Feature-Preserving Displacement Mapping With Graphics Processing Unit (GPU) Tessellation

Hanyoung Jang; JungHyun Han

Displacement mapping reconstructs a high-frequency surface by adding geometric details encoded in the displacement map to the coarse base surface. In the context of hardware tessellation supported by GPUs, this paper aims at feature-preserving surface reconstruction, and proposes the generation of a displacement map that displaces more vertices towards the higher-frequency feature parts of the target mesh. In order to generate the feature-preserving displacement map, surface features of the target mesh are estimated, and then the target mesh is parametrized and sampled using the features. At run time, the base surface is semi-uniformly tessellated by hardware, and then the vertices of the tessellated mesh are displaced non-uniformly along the 3-D vectors stored in the displacement map. The experimental results show that the surfaces reconstructed by the proposed method are of a higher quality than those reconstructed by other methods.
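
The idea of displacing more vertices toward feature parts amounts to non-uniform sampling driven by a feature measure. The 1D sketch below redistributes displacement-map samples through the inverse CDF of per-cell feature weights; the actual method operates on a parametrized surface, so this is only an illustration of the sampling bias.

```python
def feature_warped_samples(feature, n_samples):
    """feature: per-cell non-negative feature measure along the parameter axis.
    Returns n_samples positions in [0, 1], denser where 'feature' is large."""
    eps = 1e-9
    total = sum(feature) + eps * len(feature)
    # Cumulative distribution over the parameter domain.
    cdf, acc = [], 0.0
    for f in feature:
        acc += (f + eps) / total
        cdf.append(acc)
    # Invert the CDF at evenly spaced quantiles.
    positions, j = [], 0
    for k in range(n_samples):
        q = (k + 0.5) / n_samples
        while j < len(cdf) - 1 and cdf[j] < q:
            j += 1
        lo = cdf[j - 1] if j > 0 else 0.0
        t = (q - lo) / max(cdf[j] - lo, eps)
        positions.append((j + t) / len(cdf))
    return positions

# Example: a curvature spike in the middle attracts most of the samples.
curvature = [0.1] * 20 + [5.0] * 4 + [0.1] * 20
print(feature_warped_samples(curvature, 12))
```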


The Visual Computer | 2008

Fast collision detection using the A-buffer

Hanyoung Jang; JungHyun Han

This paper presents a novel and fast image-space collision detection algorithm based on the A-buffer, where the GPU computes the potentially colliding sets (PCSs) and the CPU performs the standard triangle intersection test. When the bounding boxes of two objects intersect, the intersection is passed to the GPU. The object surfaces in the intersection are rendered into the A-buffer. Rendering into the A-buffer is up to eight times faster than ordinary approaches. Then, PCSs are computed by comparing the depth values of each texel of the A-buffer. A PCS consists of only two triangles. The PCSs are read back to the CPU, and the CPU computes the intersection points between the triangles. The proposed algorithm runs extremely fast, does not require any preprocessing, can handle dynamic objects including deformable and fracturing models, and can compute self-collisions. This versatility and performance gain prove the algorithm's usefulness in real-time applications such as 3D games.
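
A toy, software-only version of the PCS computation is sketched below: fragments of both objects are accumulated per pixel (a stand-in for the A-buffer), and fragment pairs from different objects with nearly equal depths name candidate triangle pairs for the exact CPU test, which is omitted here. Rasterization is reduced to sampling triangle vertices, so treat this purely as an illustration of the data flow, not of the paper's GPU implementation.

```python
from collections import defaultdict
from itertools import combinations

def rasterize_vertices(obj_id, triangles, res=128, extent=1.0):
    """Yield (pixel, (obj_id, tri_id, depth)) fragments from triangle vertices."""
    for tri_id, tri in enumerate(triangles):
        for (x, y, z) in tri:
            i = int((x / extent + 0.5) * res)
            j = int((y / extent + 0.5) * res)
            if 0 <= i < res and 0 <= j < res:
                yield (i, j), (obj_id, tri_id, z)

def potentially_colliding_sets(tris_a, tris_b, depth_eps=0.01):
    abuffer = defaultdict(list)          # pixel -> list of fragments
    for pixel, frag in rasterize_vertices(0, tris_a):
        abuffer[pixel].append(frag)
    for pixel, frag in rasterize_vertices(1, tris_b):
        abuffer[pixel].append(frag)
    pcs = set()
    for frags in abuffer.values():
        for (oa, ta, za), (ob, tb, zb) in combinations(frags, 2):
            if oa != ob and abs(za - zb) < depth_eps:
                pair = (ta, tb) if oa == 0 else (tb, ta)
                pcs.add(pair)            # candidate pair for the exact CPU test
    return pcs
```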


Computer-Aided Design | 2013

Technical note: GPU-optimized indirect scalar displacement mapping

Hanyoung Jang; JungHyun Han

Displacement mapping refers to a technique for rendering a high-frequency surface by adding geometric details encoded in a displacement map to a low-frequency base surface. This paper proposes a method for indirectly accessing the base surface using a special displacement map and then carrying out scalar displacement. Given a high-frequency triangle mesh, a coarse PN (point-normal) quad mesh is computed as the base surface. The parameters used to evaluate the base surface are precomputed such that scalar displacement from the evaluated points reaches the features of the original surface. The parameters are stored in the displacement map together with the displacement scalars. The run-time algorithm uses the hardware tessellation capability of the GPU and reconstructs the high-frequency surface. Using the proposed method, surface features are accurately preserved, surface deformation is well supported, LOD control becomes quite flexible, and the base surface can be extremely simplified.
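
The indirection can be illustrated as follows: each displacement-map texel stores the parameters at which the base surface is evaluated together with a scalar offset along the surface normal. A bilinear quad stands in for the PN quad patch in this sketch, so it shows only the indirection, not the paper's basis functions.

```python
def lerp(a, b, t):
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

def bilinear_patch(p00, p10, p01, p11):
    """Return evaluate(u, v) -> (position, unit normal) for a bilinear quad."""
    def evaluate(u, v):
        pos = lerp(lerp(p00, p10, u), lerp(p01, p11, u), v)
        du = tuple(b - a for a, b in zip(lerp(p00, p01, v), lerp(p10, p11, v)))
        dv = tuple(b - a for a, b in zip(lerp(p00, p10, u), lerp(p01, p11, u)))
        n = (du[1]*dv[2] - du[2]*dv[1],
             du[2]*dv[0] - du[0]*dv[2],
             du[0]*dv[1] - du[1]*dv[0])
        length = max(1e-9, sum(c * c for c in n) ** 0.5)
        return pos, tuple(c / length for c in n)
    return evaluate

def displace(evaluate, texel):
    """texel = (u, v, s): evaluate the base surface at the stored parameters,
    then push the point along the normal by the stored scalar."""
    u, v, s = texel
    pos, normal = evaluate(u, v)
    return tuple(p + s * n for p, n in zip(pos, normal))

# Usage: a flat unit quad whose texel redirects evaluation toward a feature.
quad = bilinear_patch((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0))
print(displace(quad, (0.72, 0.31, 0.05)))
```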


Advanced Robotics | 2008

Toward human-like real-time manipulation: From perception to motion planning

Sukhan Lee; Hadi Moradi; Daesik Jang; Hanyoung Jang; Eun Young Kim; Phuoc Minh Le; JeongHyun Seo; JungHyun Han

Human-like behavior is crucial for intelligent service robots that are to perform versatile tasks in day-to-day life. In this paper, an integrated approach to human-like manipulation is presented, which addresses real-time three-dimensional (3-D) workspace modeling and accessibility analysis for motion planning. The 3-D workspace modeling follows three main principles: identification of global geometric features, substitution of recognized known objects by corresponding solid models from the database, and multi-resolution representation of unknown obstacles as required by the task at hand. Accessibility analysis is performed through visibility tests; it complements and accelerates general motion planning. The experimental results demonstrate that the human-like, behavior-oriented methods are sufficiently fast and robust to model the 3-D workspace and to plan and execute tasks for robotic manipulative applications.
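
The three modeling principles suggest a heterogeneous workspace data structure along the lines sketched below, where a coarse voxel set stands in for the multi-resolution octree; all class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class RecognizedObject:
    name: str                 # key into the solid-model / grasp database
    pose: tuple               # (x, y, z, roll, pitch, yaw)

@dataclass
class Plane:
    normal: tuple
    offset: float             # plane: dot(normal, p) = offset

@dataclass
class WorkspaceModel:
    planes: list = field(default_factory=list)       # global geometric features
    objects: list = field(default_factory=list)      # recognized solid models
    unknown_voxels: set = field(default_factory=set) # coarse occupancy
    voxel_size: float = 0.05

    def add_unknown_points(self, points):
        """Unrecognized 3D points are only kept as coarse occupancy voxels."""
        for x, y, z in points:
            key = (int(x // self.voxel_size),
                   int(y // self.voxel_size),
                   int(z // self.voxel_size))
            self.unknown_voxels.add(key)

    def obstacles_for_planning(self):
        """Everything except the manipulation target acts as an obstacle."""
        return list(self.planes) + list(self.objects) + sorted(self.unknown_voxels)
```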


The Visual Computer | 2010

Layered occlusion map for soft shadow generation

Kien T. Nguyen; Hanyoung Jang; JungHyun Han

This paper presents a high-quality, high-performance algorithm to compute plausible soft shadows for complex dynamic scenes. Given a rectangular light source, the scene is rendered from a viewpoint placed at the center of the light source and discretized into a layered depth map. For each scene point sampled in the depth map, the occlusion degree is computed and stored in a layered occlusion map. When the scene is rendered from the camera's viewpoint, the occlusion degree of a scene point is computed by filtering the layered occlusion map. The proposed algorithm produces soft shadows whose quality is quite close to that of the ground-truth reference. It also runs very fast: a scene with a million polygons can be rendered in real time. The proposed method does not require pre-processing and is easy to implement on contemporary graphics hardware.
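
The filtering step can be pictured as averaging stored occlusion degrees over a window whose size grows with the light size and the blocker-receiver separation. The sketch below uses a single-layer map and a PCSS-style penumbra estimate as stand-ins for the layered occlusion map, so it illustrates only the filtering, not the paper's algorithm.

```python
def filtered_occlusion(occlusion_map, u, v, kernel_radius):
    """Average the stored occlusion degrees over a (2r+1)^2 window."""
    h, w = len(occlusion_map), len(occlusion_map[0])
    i0, j0 = int(v * (h - 1)), int(u * (w - 1))
    total, count = 0.0, 0
    for di in range(-kernel_radius, kernel_radius + 1):
        for dj in range(-kernel_radius, kernel_radius + 1):
            i, j = i0 + di, j0 + dj
            if 0 <= i < h and 0 <= j < w:
                total += occlusion_map[i][j]
                count += 1
    return total / max(count, 1)

def penumbra_kernel(light_size, blocker_depth, receiver_depth, texel_scale=64):
    """Filter radius grows with how far the blocker sits from the receiver."""
    if blocker_depth >= receiver_depth:
        return 0
    penumbra = light_size * (receiver_depth - blocker_depth) / blocker_depth
    return max(0, int(penumbra * texel_scale))

# Usage: a hard-edged occlusion map is softened for a distant receiver.
occ = [[1.0 if j < 4 else 0.0 for j in range(8)] for _ in range(8)]
r = penumbra_kernel(light_size=0.2, blocker_depth=1.0, receiver_depth=3.0)
print(round(filtered_occlusion(occ, u=0.5, v=0.5, kernel_radius=min(r, 3)), 3))
```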


International Symposium on Visual Computing | 2007

Image-space collision detection through alternate surface peeling

Hanyoung Jang; Taek Sang Jeong; JungHyun Han

This paper presents a new image-space algorithm for real-time collision detection, where the GPU computes the potentially colliding sets and the CPU performs the standard triangle/triangle intersection test. The major strengths of the proposed algorithm are as follows: it can handle dynamic models, including deforming and fracturing objects; it can handle both closed and open objects; it does not require any preprocessing yet is quite efficient; and its accuracy is proportional to the visual sensitivity or can be controlled on demand. The proposed algorithm is well suited to real-time applications such as 3D games.
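
Per pixel, the peeled surfaces reduce to an ordered list of depths for each object, and interleaving depths signal a potential collision. The toy sketch below pairs one object's fragments into enter/exit intervals and reports any fragment of the other object that falls inside; the actual alternating peeling passes and the handling of open objects are simplified away, and the exact CPU triangle/triangle test is omitted.

```python
def pair_intervals(fragments_a):
    """fragments_a: list of (depth, tri_id) along one pixel's view ray.
    Consecutive fragments (sorted by depth) are paired into (enter, exit)."""
    frags = sorted(fragments_a)
    return [(frags[k], frags[k + 1]) for k in range(0, len(frags) - 1, 2)]

def pixel_pcs(fragments_a, fragments_b):
    """Return candidate (tri_a, tri_b) pairs for one pixel."""
    candidates = []
    for (z_in, tri_in), (z_out, tri_out) in pair_intervals(fragments_a):
        for z_b, tri_b in fragments_b:
            if z_in <= z_b <= z_out:
                candidates.append((tri_in, tri_b))
                candidates.append((tri_out, tri_b))
    return candidates

# Usage: object A occupies depths [0.4, 0.7] at this pixel; a fragment of B at
# depth 0.5 lies inside, so the corresponding triangles are reported.
print(pixel_pcs([(0.4, 10), (0.7, 11)], [(0.5, 3), (0.9, 4)]))
```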

Collaboration


Dive into Hanyoung Jang's collaborations.

Top Co-Authors

Sukhan Lee
Sungkyunkwan University

Chang Sik Cho
Electronics and Telecommunications Research Institute

Daesik Jang
Kunsan National University