
Publications


Featured research published by Pongsak Lasang.


IEEE Transactions on Consumer Electronics | 2010

CFA-based motion blur removal using long/short exposure pairs

Pongsak Lasang; Chin Phek Ong; Sheng Mei Shen

This paper presents an efficient and effective motion blur removal method based on long- and short-exposure images. The two images are captured sequentially, and motion pixels between them are robustly detected, with noise suppressed and artifacts around object boundaries prevented. Object motion blur is removed and a high-quality image is obtained by merging the two images, taking the detected motion pixels into account. The proposed method operates directly on the CFA (Color Filter Array) image, which has only one color component per pixel, so it has low computational complexity and low memory requirements. The method also produces an HDR (High Dynamic Range) image at the same time.
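
As a rough illustration of the merge step, here is a minimal grayscale sketch, ignoring the CFA layout; long_img, short_img, and exposure_ratio are hypothetical inputs, not the paper's actual interface:

import numpy as np

def merge_long_short(long_img, short_img, exposure_ratio, motion_thresh=0.1):
    """Toy long/short-exposure merge driven by a motion mask.

    long_img, short_img: float arrays in [0, 1] with the same shape.
    exposure_ratio: long exposure time divided by short exposure time.
    """
    # Bring the short exposure up to the brightness level of the long one.
    short_scaled = np.clip(short_img * exposure_ratio, 0.0, 1.0)

    # Pixels with a large brightness-normalized difference are assumed to be
    # affected by object motion (and hence blurred in the long exposure).
    motion_mask = np.abs(long_img - short_scaled) > motion_thresh

    # Use the sharp short-exposure data where motion was detected and the
    # low-noise long-exposure data elsewhere.
    merged = np.where(motion_mask, short_scaled, long_img)
    return merged, motion_mask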


Neurocomputing | 2016

Multi-sparse descriptor

Yazhou Liu; Pongsak Lasang; Mel Siegel; Quansen Sun

This paper presents a new descriptor, the multi-sparse descriptor (MSD), for pedestrian detection in static images. The descriptor is based on multi-dictionary sparse coding, which comprises unsupervised dictionary learning and sparse coding. During the unsupervised learning phase, a family of dictionaries with different representation abilities is learnt from pedestrian data. The data are then encoded by these dictionaries, and the histogram of the sparse coefficients is computed as the descriptor. The benefit of this multi-dictionary sparse encoding is threefold: first, because the dictionaries are learnt from pedestrian data, they encode the local structures of pedestrians more efficiently; second, multiple dictionaries enrich the representation by providing different levels of abstraction; third, because the dictionary-based representation focuses mainly on low frequencies, it generalizes better across the scale range. Comparisons with state-of-the-art methods reveal the superiority of the proposed method.
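
One plausible reading of the encoding stage is sketched below with a greedy matching-pursuit coder; unit-norm dictionary atoms and the coefficient histogram range are assumptions, and all names are hypothetical:

import numpy as np

def sparse_code(patch, dictionary, n_nonzero=3):
    # Greedy matching pursuit: approximate the patch with a few atoms.
    # dictionary: (patch_dim, n_atoms) with unit-norm columns (assumed).
    residual = patch.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_nonzero):
        scores = dictionary.T @ residual          # correlation with each atom
        k = np.argmax(np.abs(scores))
        coeffs[k] += scores[k]
        residual -= scores[k] * dictionary[:, k]
    return coeffs

def msd_descriptor(patches, dictionaries, n_bins=16):
    # One histogram of sparse coefficients per dictionary, concatenated.
    # Dictionaries of different sizes give different levels of abstraction.
    hists = []
    for D in dictionaries:
        codes = np.array([sparse_code(p, D) for p in patches])
        hist, _ = np.histogram(codes, bins=n_bins, range=(-1.0, 1.0))
        hists.append(hist / max(1, codes.size))   # normalize
    return np.concatenate(hists)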


IEEE Transactions on Image Processing | 2015

Geodesic Invariant Feature: A Local Descriptor in Depth

Yazhou Liu; Pongsak Lasang; Mel Siegel; Quansen Sun

Unlike photometric images, depth images resolve the distance ambiguity of the scene, but properties such as weak texture, high noise, and low resolution may limit the representation ability of well-developed descriptors that were elaborately designed for photometric images. In this paper, a novel depth descriptor, the geodesic invariant feature (GIF), is presented for representing the parts of articulated objects in depth images. GIF is a multilevel feature representation framework designed around the nature of depth images. At the low level, a geodesic gradient is introduced to obtain invariance to articulated motion as well as to scale and rotation variation. At the mid level, superpixel clustering is applied to reduce depth image redundancy, yielding faster processing and better robustness to noise. At the high level, a deep network is used to exploit the nonlinearity of the data, which further improves classification accuracy. The proposed descriptor encodes the local structures in depth data effectively and efficiently. Comparisons with state-of-the-art methods reveal the superiority of the proposed method.
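
The low-level geodesic gradient presumably builds on surface-aware distances. Below is a minimal sketch of a geodesic distance transform on a depth map, implemented as Dijkstra's algorithm on the pixel grid; depth_weight is an assumed tuning parameter, not a quantity from the paper:

import heapq
import numpy as np

def geodesic_distance(depth, seed, depth_weight=1.0):
    # Step cost grows with the depth difference between neighbours, so the
    # resulting distances follow the object surface, not the image plane.
    h, w = depth.shape
    dist = np.full((h, w), np.inf)
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue  # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                step = np.hypot(1.0, depth_weight * (depth[ny, nx] - depth[y, x]))
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, (ny, nx)))
    return dist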


Journal of Visual Communication and Image Representation | 2016

Optimal depth recovery using image guided TGV with depth confidence for high-quality view synthesis

Pongsak Lasang; Wuttipong Kumwilaisak; Yazhou Liu; Sheng Mei Shen

Highlights: a confidence-based depth recovery method and high-quality 3D view synthesis are proposed; the depth recovery relies on image edges and high-confidence depth pixels; background texture directions are effective for hole filling in view synthesis. This paper presents a new depth image recovery method for RGB-D sensors that produces a complete, sharp, and accurate object shape from a depth map with noisy boundaries. The proposed method uses image-guided Total Generalized Variation (TGV) with depth confidence. A new directional hole-filling method for view synthesis is also investigated to produce natural texture in hole regions while reducing blurring and preventing distortion, so that a high-quality synthesized view can be achieved. Experimental results show that the proposed method yields higher-quality recovered depth maps and synthesized views than previous methods.
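
TGV optimization itself is fairly involved; as a loose stand-in, the sketch below uses a much simpler confidence-weighted, edge-guided diffusion that captures the same idea of anchoring reliable measurements and filling the rest without crossing image edges (all names hypothetical; wrap-around borders are ignored for brevity):

import numpy as np

def recover_depth(depth, confidence, guide_edges, n_iters=200):
    # depth:       initial noisy/incomplete depth map (float array).
    # confidence:  per-pixel reliability in [0, 1]; 0 for missing pixels.
    # guide_edges: RGB-image edge strength in [0, 1]; smoothing is suppressed
    #              across strong edges to keep object boundaries sharp.
    d = depth.copy()
    w = 1.0 - guide_edges                 # diffuse freely only away from edges
    for _ in range(n_iters):
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0)
                      + np.roll(d, 1, 1) + np.roll(d, -1, 1))
        smoothed = (1.0 - w) * d + w * avg
        # High-confidence pixels stay anchored to their measured values;
        # low-confidence pixels are filled in from their neighbourhood.
        d = confidence * depth + (1.0 - confidence) * smoothed
    return d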


International Conference on Consumer Electronics - Berlin | 2014

Novel edge preserve and depth image recovery method in RGB-D camera systems

Pongsak Lasang; Sheng Mei Shen; Wuttipong Kumwilaisak

We propose a new edge-preserving depth image recovery method for RGB-D camera systems that produces a sharp and accurate object shape from a depth map with noisy boundaries. The edges of an input depth image are detected, and the noisy pixels around them are removed from the depth image. An anisotropic diffusion edge tensor of the input RGB image is computed, and the missing depth pixels are then recovered using total generalized variation optimization guided by this RGB edge tensor. Accurate object depth boundaries are thus obtained, well aligned with the object edges in the RGB image. Missing or invalid depth pixels in large hole areas and on thin objects can also be recovered. Experimental results show improved edge preservation and depth recovery, at the expense of computational complexity, compared with previous works.
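
The guidance signal here is an anisotropic edge tensor of the RGB image. A per-pixel structure tensor is one standard way to build such guidance; the sketch below is in that spirit and is not necessarily the paper's exact formulation:

import numpy as np

def edge_tensor(gray):
    # Per-pixel 2x2 structure tensor of a grayscale image; its eigenvectors
    # give the local edge direction used to steer the recovery.
    gy, gx = np.gradient(gray.astype(float))
    jxx, jxy, jyy = gx * gx, gx * gy, gy * gy

    def smooth(a, n=3):
        # Light smoothing so the tensor reflects a neighbourhood, not a pixel.
        for _ in range(n):
            a = 0.2 * (a + np.roll(a, 1, 0) + np.roll(a, -1, 0)
                       + np.roll(a, 1, 1) + np.roll(a, -1, 1))
        return a

    return smooth(jxx), smooth(jxy), smooth(jyy)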


International Conference on Consumer Electronics | 2012

Directional adaptive hole filling for new view synthesis

Pongsak Lasang; Sheng Mei Shen

In this paper, a new hole filling method for new-view synthesis is presented, based on the direction of the background image texture. The strong texture gradient of each background pixel is traced along its direction to obtain the texture orientation, and a texture direction map is then computed for the hole pixels from these orientations. Finally, the hole pixels are filled with background pixels along the direction guided by the texture direction map. This produces natural texture in the hole regions while reducing blur and preventing distortion of the foreground objects, so a high-quality new image view can be achieved even with large-baseline synthesis. When the images are used for 3D viewing, the 3D effect is enhanced.
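
A minimal sketch of the filling step is given below, assuming a precomputed hole_mask and a direction_map of texture orientations (both hypothetical names; the paper derives the map from traced background gradients):

import numpy as np

def fill_holes_directional(img, hole_mask, direction_map, max_steps=50):
    # March from each hole pixel along its texture direction until a known
    # background pixel is found (simplified nearest-sample version).
    out = img.copy()
    h, w = hole_mask.shape
    for y, x in zip(*np.nonzero(hole_mask)):
        theta = direction_map[y, x]            # texture orientation in radians
        dy, dx = np.sin(theta), np.cos(theta)
        for step in range(1, max_steps):
            ny, nx = int(round(y + step * dy)), int(round(x + step * dx))
            if not (0 <= ny < h and 0 <= nx < w):
                break
            if not hole_mask[ny, nx]:          # reached a valid background pixel
                out[y, x] = img[ny, nx]
                break
    return out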


International Conference on Consumer Electronics | 2010

CFA-based motion blur removal

Pongsak Lasang; Chin Phek Ong; Sheng Mei Shen

In this paper, a simple and effective motion blur removal method based on long- and short-exposure images is presented. The long- and short-exposure images are captured sequentially, and motion pixels between them are robustly detected, while suppressing noise and preventing artifacts around object boundaries. Object motion blur is removed and a high-quality image is obtained by merging the two images, taking the detected motion pixels into account. The proposed method operates directly on the CFA (Color Filter Array) image, which has only one color component per pixel, and has low computational complexity and memory requirements. The method can also produce an HDR (High Dynamic Range) image at the same time.
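
Complementing the merge sketch above, the detection step can operate on each Bayer phase separately so that no demosaicking is needed; the sketch below is hypothetical, and its per-channel threshold heuristic is an assumption rather than the paper's rule:

import numpy as np

def cfa_motion_mask(long_cfa, short_cfa, exposure_ratio, base_thresh=0.1):
    # Motion detection directly on Bayer CFA data: each pixel holds a single
    # colour sample, so comparisons are made within the same CFA channel.
    short_scaled = np.clip(short_cfa * exposure_ratio, 0.0, 1.0)
    diff = np.abs(long_cfa - short_scaled)
    mask = np.zeros_like(diff, dtype=bool)
    # The four Bayer phases (e.g. R, G1, G2, B) are processed independently.
    for oy in (0, 1):
        for ox in (0, 1):
            chan = diff[oy::2, ox::2]
            # Assumed heuristic: raise the threshold with the channel's
            # typical difference level to suppress noise.
            t = base_thresh + 2.0 * np.median(chan)
            mask[oy::2, ox::2] = chan > t
    return mask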


International Conference on Computer Vision Systems | 2017

An Efficient Method to Find a Triangle with the Least Sum of Distances from Its Vertices to the Covered Point

Guoyi Chi; KengLiang Loi; Pongsak Lasang

Depth sensors acquire a scene from various viewpoints, and the resulting depth images are integrated into a 3D model. In general, surface reflectance properties, absorption, occlusions, and accessibility limitations leave certain areas of the scene unsampled, producing holes and introducing undesirable artifacts. An efficient algorithm for filling holes in organized depth images is therefore of high significance. Points far from a covered point usually carry little reliable spatial information, owing to outliers and distortion. This paper presents an algorithm to find a triangle whose vertices are nearest to the covered point.
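
A brute-force baseline conveys the problem: among the nearest neighbours of the covered point, choose the non-degenerate vertex triple with the least distance sum. The paper's method is more efficient than this sketch, which assumes 2D points and a pruning size k:

import numpy as np
from itertools import combinations

def best_triangle(points, covered, k=8):
    # points: (n, 2) array; covered: the query point as (x, y).
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(covered, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]              # prune to the k closest candidates
    best, best_cost = None, np.inf
    for i, j, l in combinations(nearest, 3):
        a, b, c = pts[i], pts[j], pts[l]
        # Skip degenerate (collinear) triples: twice the signed area is zero.
        area2 = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        if abs(area2) < 1e-12:
            continue
        cost = d[i] + d[j] + d[l]
        if cost < best_cost:
            best, best_cost = (int(i), int(j), int(l)), cost
    return best, best_cost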


International Conference on Consumer Electronics - Berlin | 2014

Combining high resolution color and depth images for dense 3D reconstruction

Pongsak Lasang; Sheng Mei Shen; Wuttipong Kumwilaisak

In this paper, we present an effective method to reconstruct a dense 3D model of an object or a scene by combining high-resolution color and depth images. Conventionally, multiple color image views are used to reconstruct a 3D model of the captured scene. Although this conventional approach is accurate in the textured regions of an object, it lacks density and leaves many holes in texture-less regions. A depth camera, by contrast, can capture 3D distance information even in homogeneous regions, but its resolution is low and it cannot accurately capture fine object detail. We therefore propose a method that combines high-resolution color and depth images to obtain a high-quality, accurate, and dense 3D model. Compared with conventional methods, our proposed method produces much denser and more complete 3D results.
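
Fusing the two modalities starts by back-projecting the registered depth into a coloured point cloud. The sketch below is the standard pinhole-camera back-projection, assuming the depth and colour images are already aligned; fx, fy, cx, cy are the camera intrinsics:

import numpy as np

def depth_to_colored_points(depth, color, fx, fy, cx, cy):
    # depth: (h, w) depth in metres, 0 where invalid.
    # color: (h, w, 3) colour image aligned with the depth map.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    valid = z > 0
    # Invert the pinhole projection: x = (u - cx) * z / fx, etc.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = color[valid]
    return points, colors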


Asian Conference on Computer Vision | 2014

Learning Hierarchical Feature Representation in Depth Image

Yazhou Liu; Pongsak Lasang; Quansen Sun; Mel Siegel

This paper presents a novel descriptor, the geodesic invariant feature (GIF), for representing objects in depth images. Especially in the context of part classification of articulated objects, it encodes the invariance of local structures effectively and efficiently. The contributions of this paper lie in our multi-level feature extraction hierarchy. (1) The low-level feature encodes invariance to articulation: a geodesic gradient is introduced, which is covariant with the non-rigid deformation of objects and is used to rectify the feature extraction process. (2) The mid-level feature reduces noise and improves efficiency: with unsupervised clustering, the primitives of objects are changed from pixels to superpixels. The benefit is two-fold: superpixels reduce the effect of the noise introduced by depth sensors, and the processing speed is improved by a large margin. (3) The high-level feature captures nonlinear dependencies between the dimensions: a deep network is used to discover the high-level feature representation, and as the feature propagates towards the deeper layers of the network, its ability to capture the data's underlying regularities improves. Comparisons with state-of-the-art methods reveal the superiority of the proposed method.
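
The mid-level superpixel step can be illustrated with a SLIC-style k-means over (row, column, depth) features; this toy sketch computes full pairwise distances, so it only suits small test images, and grid and depth_weight are assumed parameters:

import numpy as np

def depth_superpixels(depth, grid=16, n_iters=5, depth_weight=10.0):
    # K-means over (row, column, scaled depth), seeded on a regular grid.
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    feats = np.stack([ys.ravel(), xs.ravel(),
                      depth_weight * depth.ravel()], axis=1).astype(float)
    centers = np.array([[y, x, depth_weight * depth[y, x]]
                        for y in range(grid // 2, h, grid)
                        for x in range(grid // 2, w, grid)], dtype=float)
    for _ in range(n_iters):
        # Assign every pixel to its nearest centre (real SLIC restricts this
        # search to a local window around each centre).
        d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for k in range(len(centers)):
            members = feats[labels == k]
            if len(members):
                centers[k] = members.mean(0)
    return labels.reshape(h, w)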

Collaboration


Dive into Pongsak Lasang's collaborations.

Top Co-Authors

Wuttipong Kumwilaisak
King Mongkut's University of Technology Thonburi

Yazhou Liu
Nanjing University of Science and Technology

Quansen Sun
Nanjing University of Science and Technology

Mel Siegel
Carnegie Mellon University