Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alex Yong-Sang Chia is active.

Publication


Featured research published by Alex Yong-Sang Chia.


ACM Multimedia | 2012

Image colorization using similar images

Raj Kumar Gupta; Alex Yong-Sang Chia; Deepu Rajan; Ee Sin Ng; Huang Zhiyong

We present a new example-based method to colorize a gray image. As input, the user needs only to supply a reference color image that is semantically similar to the target image. We extract features from these images at the resolution of superpixels, and exploit these features to guide the colorization process. Our use of a superpixel representation speeds up the colorization process and, more importantly, yields colorizations with far greater spatial consistency than those obtained from independent pixels. We adopt a fast cascade feature matching scheme to automatically find correspondences between superpixels of the reference and target images. Each correspondence is assigned a confidence based on the feature matching costs computed at different steps of the cascade, and high-confidence correspondences are used to assign an initial set of chromatic values to the target superpixels. To further enforce the spatial coherence of these initial color assignments, we develop an image-space voting framework that draws evidence from neighboring superpixels to identify and correct invalid color assignments. Experimental results and a user study on a broad range of images demonstrate that our method, with a fixed set of parameters, yields better colorization results than existing methods.
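The cascade matching idea above, where cheap features prune candidates before expensive ones are compared and confidence is derived from accumulated matching cost, can be sketched as follows. This is a minimal illustration under assumed Euclidean feature costs, not the paper's exact scheme; `cascade_match`, its stage layout, and the `keep` pruning fraction are all hypothetical:

```python
import numpy as np

def cascade_match(ref_stages, tgt_stages, keep=0.5):
    """Match each target superpixel to a reference superpixel in a cascade.

    ref_stages / tgt_stages: lists of (n, d_k) feature arrays, one array per
    cascade stage, cheapest features first. After each stage only the
    cheapest `keep` fraction of reference candidates survives, so expensive
    later features are compared against few candidates. The confidence of a
    match decreases with its accumulated matching cost.
    """
    n_ref = ref_stages[0].shape[0]
    results = []
    for t in range(tgt_stages[0].shape[0]):
        cand = np.arange(n_ref)                # all references start as candidates
        total = np.zeros(n_ref)                # accumulated cost per reference
        for rf, tf in zip(ref_stages, tgt_stages):
            cost = np.linalg.norm(rf[cand] - tf[t], axis=1)
            total[cand] += cost
            order = np.argsort(cost)           # prune to cheapest candidates
            cand = cand[order[:max(1, int(len(cand) * keep))]]
        best = cand[np.argmin(total[cand])]    # lowest accumulated cost survives
        results.append((int(best), 1.0 / (1.0 + total[best])))
    return results
```

High-confidence pairs from such a routine would then seed the initial chromatic assignments, with the voting framework correcting the remainder.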


International Conference on Image Processing | 2007

Ellipse Detection with Hough Transform in One Dimensional Parametric Space

Alex Yong-Sang Chia; Maylor K. H. Leung; How-Lung Eng; Susanto Rahardja

The main advantage of using the Hough Transform to detect ellipses is its robustness against missing data points. However, the storage and computational requirements of the Hough Transform preclude practical applications. Although many modifications to the Hough Transform exist, they still demand significant storage. In this paper, we present a novel ellipse detection algorithm that retains the original advantages of the Hough Transform while minimizing storage and computational complexity. More specifically, we use an accumulator that is only one-dimensional, which makes our algorithm far more efficient in terms of storage. In addition, our algorithm can be easily parallelized to achieve good execution time. Experimental results on both synthetic and real images demonstrate the robustness and effectiveness of our algorithm, which can extract both complete and incomplete ellipses.
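The one-dimensional accumulator idea can be illustrated with a classical construction: if two edge points are hypothesized to be the endpoints of the major axis, the centre, half-major-axis length, and orientation follow directly, so only the half-minor-axis length needs to be accumulated. The sketch below follows that generic construction and is not the paper's exact algorithm; all names and thresholds are assumptions:

```python
import numpy as np

def detect_ellipse_1d_hough(points, min_major=10.0, min_votes=20):
    """Ellipse detection with a one-dimensional Hough accumulator.

    For each candidate pair of edge points assumed to span the major axis,
    the centre, half-major-axis length a, and orientation are determined
    geometrically; only the half-minor-axis length b is voted on, so the
    accumulator is 1D. Returns (centre, a, b, angle, votes) or None.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    best = None
    for i in range(n):
        for j in range(i + 1, n):
            p1, p2 = pts[i], pts[j]
            d = p2 - p1
            major = np.hypot(*d) / 2.0             # half major axis a
            if 2 * major < min_major:
                continue
            centre = (p1 + p2) / 2.0
            angle = np.arctan2(d[1], d[0])
            acc = {}                               # the 1D accumulator over b
            for k in range(n):
                if k in (i, j):
                    continue
                f = np.hypot(*(pts[k] - centre))   # distance to centre
                if f == 0 or f >= major:
                    continue
                g = np.hypot(*(pts[k] - p2))       # distance to one vertex
                cos_t = (major**2 + f**2 - g**2) / (2 * major * f)
                sin2 = 1.0 - cos_t**2
                denom = major**2 - f**2 * cos_t**2
                if denom <= 0 or sin2 <= 0:
                    continue
                b = int(round(np.sqrt(major**2 * f**2 * sin2 / denom)))
                if b > 0:
                    acc[b] = acc.get(b, 0) + 1
            if acc:
                b, votes = max(acc.items(), key=lambda kv: kv[1])
                if votes >= min_votes and (best is None or votes > best[-1]):
                    best = (centre, major, float(b), angle, votes)
    return best
```

Because only b is accumulated, the accumulator's memory footprint grows with image size in one dimension rather than five.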


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Object Recognition by Discriminative Combinations of Line Segments, Ellipses, and Appearance Features

Alex Yong-Sang Chia; Deepu Rajan; Maylor Karhang Leung; Susanto Rahardja

We present a novel contour-based approach that recognizes object classes in real-world scenes using simple and generic shape primitives of line segments and ellipses. Compared to commonly used contour fragment features, these primitives support more efficient representation since their storage requirements are independent of object size. Additionally, these primitives are readily described by their geometrical properties and hence afford very efficient feature comparison. We pair these primitives as shape-tokens and learn discriminative combinations of shape-tokens. Here, we allow each combination to have a variable number of shape-tokens. This, coupled with the generic nature of primitives, enables a variety of class-specific shape structures to be learned. Building on the contour-based method, we propose a new hybrid recognition method that combines shape and appearance features. Each discriminative combination can vary in the number and the types of features, where these two degrees of variability empower the hybrid method with even more flexibility and discriminative potential. We evaluate our methods across a large number of challenging classes, and obtain very competitive results against other methods. These results show the proposed shape primitives are indeed sufficiently powerful to recognize object classes in complex real-world scenes.
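The pairing of connected primitives into shape-tokens described by their geometrical properties might be sketched, loosely and hypothetically, as below; the `primitives` structure, its attributes, and the `max_gap` threshold are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def shape_tokens(primitives, max_gap=5.0):
    """Pair primitives whose endpoints lie within max_gap into shape-tokens.

    Each primitive is a dict with endpoint coordinates, an orientation, and
    a length. A token is described by scale-independent geometric
    attributes: relative orientation and the endpoint gap normalised by the
    combined primitive length, so storage does not grow with object size.
    """
    tokens = []
    for i, a in enumerate(primitives):
        for j, b in enumerate(primitives):
            if j <= i:
                continue
            gap = min(np.hypot(*(pa - pb))
                      for pa in a["ends"] for pb in b["ends"])
            if gap <= max_gap:                    # primitives are "connected"
                rel_angle = (b["angle"] - a["angle"]) % np.pi
                scale = a["length"] + b["length"]
                tokens.append((i, j, rel_angle, gap / scale))
    return tokens
```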


International Conference on Computer Vision | 2009

Robust matching of building facades under large viewpoint changes

Jimmy Addison Lee; Kin Choong Yow; Alex Yong-Sang Chia

This paper presents a novel approach to finding point correspondences between images of building facades with wide viewpoint variations, while at the same time returning a large list of true matches between the images. Such images comprise repetitive and symmetric patterns, which render popular algorithms such as SIFT ineffective. Feature descriptors such as SIFT that are based on region patches are also unstable under large viewing-angle variations. In this paper, we integrate both the appearance and geometric properties of an image to find unique matches. First, we extract hypotheses of building facades based on a robust line fitting algorithm. Each hypothesis is defined by a planar convex quadrilateral in the image, which we call a “q-region”, and the four corners of each q-region provide the inputs from which a projective transformation model is derived. Next, a set of interest points is extracted from the images and used to evaluate the correctness of the transformation model. The transformation model with the largest set of matched interest points is selected as the correct model; this model also returns the best pair of corresponding q-regions and the most point correspondences between the two images. Extensive experimental results demonstrate the robustness of our approach, which achieves a tenfold increase in true matches compared to state-of-the-art techniques such as SIFT and MSER.
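The model-evaluation step described above, deriving a projective transformation from the four corners of a q-region pair and scoring it by agreeing interest points, can be sketched with a standard Direct Linear Transform. This is a generic illustration of that step, not the paper's implementation; function names and the `tol` threshold are assumptions:

```python
import numpy as np

def homography_from_quads(src, dst):
    """Direct Linear Transform: the homography mapping the 4 corners of a
    source q-region onto the 4 corners of a candidate destination q-region."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)                   # null-space vector of A
    return H / H[2, 2]

def count_inliers(H, pts_a, pts_b, tol=3.0):
    """Score a candidate q-region pairing: how many interest-point matches
    agree with the induced projective transformation."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    ones = np.ones((len(pts_a), 1))
    proj = np.hstack([pts_a, ones]) @ H.T      # apply the homography
    proj = proj[:, :2] / proj[:, 2:3]          # back from homogeneous coords
    return int(np.sum(np.hypot(*(proj - pts_b).T) < tol))
```

The q-region pair whose model yields the highest inlier count would be kept as the correct transformation.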


Computer Vision and Pattern Recognition | 2010

Object recognition by discriminative combinations of line segments and ellipses

Alex Yong-Sang Chia; Susanto Rahardja; Deepu Rajan; Maylor Karhang Leung

We present a contour-based approach to object recognition in real-world images. Contours are represented by generic shape primitives of line segments and ellipses, which offer substantial flexibility to model complex shapes. We pair connected primitives as shape-tokens, and learn category-specific combinations of shape-tokens. We do not restrict combinations to a fixed number of tokens, but allow each combination to flexibly evolve to best represent a category. This, coupled with the generic nature of primitives, enables a variety of discriminative shape structures of a category to be learned. We compare our approach with related methods and state-of-the-art contour-based approaches on two demanding datasets across 17 categories, and obtain highly competitive results. In particular, on the challenging Weizmann horse dataset, we attain better image classification and object detection results than the best contour-based results published so far.


IEEE Transactions on Multimedia | 2009

Structural Descriptors for Category Level Object Detection

Alex Yong-Sang Chia; Susanto Rahardja; Deepu Rajan; Maylor K. H. Leung

We propose a new class of descriptors that yields meaningful structural descriptions of objects. These descriptors are constructed from two types of image primitives: quadrangles and ellipses. The primitives are extracted from an image based on human cognitive psychology, and model local parts of objects. Experiments reveal that these primitives densely cover objects in images; in this regard, the structural information of an object can be comprehensively described by them. We find that a combination of simple spatial relationships between primitives, plus a small set of geometrical attributes, provides rich and accurate local structural descriptions of objects. Category-level object detection of four-legged animals, bicycles, and cars is demonstrated under scaling, moderate viewpoint variations, and background clutter, with promising results.


International Conference on Image Processing | 2008

A split and merge based ellipse detector

Alex Yong-Sang Chia; Deepu Rajan; Maylor K. H. Leung; Susanto Rahardja

We present an ellipse detector that continually pools lower-level information from edge pixels to achieve robust detection of the ellipses present in an image. In addition, the parameters of the detected ellipses are continually refined using a closed-loop system driven by Gestalt psychology. Notably, we do not rely on the geometrical properties of ellipses to detect them; in this respect, our algorithm is well suited to detecting partially occluded ellipses. Experiments on real and synthetic images demonstrate the robustness of our algorithm, which detects both complete and incomplete ellipses. In particular, experimental results show that the mean detection accuracy of our algorithm surpasses 92% even with around 90% outliers in the images, a detection performance superior to that of robust regression, least-squares, and Hough Transform based ellipse detectors.


International Conference on Image Processing | 2006

Multiple Objects Tracking with Multiple Hypotheses Dynamic Updating

Alex Yong-Sang Chia; Weimin Huang

We present a novel and robust multi-object tracking algorithm based on multiple hypotheses about the trajectories of the objects. We represent the trajectories by a set of path graphs, of which those with the closest temporal relationship to the current frame are stored in a buffer. New hypotheses about the trajectories are continually generated from the spatial and temporal information of the objects. The novelty of our algorithm lies in our framework for updating these hypotheses by exploiting information in later frames and dynamically relating it to the current set of path graphs in the buffer. Our experiments show that even with a small buffer, our algorithm achieves more than 75% tracking accuracy on our test video sequences. Furthermore, we demonstrate that with a small increase in buffer size, tracking accuracy improves to above 90%.


Computer Vision and Pattern Recognition | 2015

Protecting against screenshots: An image processing approach

Alex Yong-Sang Chia; Udana Bandara; Xiangyu Wang; Hiromi Hirano

Motivated by data security and privacy concerns, we propose a method to limit meaningful visual contents of a display from being captured by screenshots. Traditional methods take a system-architectural approach to protect against screenshots. We depart from this framework and instead exploit image processing techniques to distort the visual data of a display and present the distorted data to the viewer. Given that a screenshot captures distorted visual contents, it yields limited useful data. We exploit the human visual system to empower viewers to automatically and mentally recover the distorted contents into a meaningful form in real time. Towards this end, we leverage findings from psychological studies which show that blending visual information from recent and current fixations enables humans to form a meaningful representation of a scene. We model this blending of information as an additive process, and exploit it to design a visual-contents distortion algorithm that supports real-time content recovery by the human visual system. Our experiments and user study demonstrate that our method allows viewers to readily interpret the visual contents of a display, while limiting meaningful contents from being captured by screenshots.
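The additive-blending model suggests a simple illustrative scheme: display several noise-distorted frames whose temporal mean is the original image, so that any single screenshot captures only a distorted frame. The sketch below is a toy interpretation of that idea, not the paper's algorithm; `distort_frames` and its parameters are hypothetical:

```python
import numpy as np

def distort_frames(img, n_frames=4, strength=60.0, seed=0):
    """Split an image into n_frames additively distorted copies whose mean
    recovers the original, modelling the additive blending of recent and
    current fixations. Any single frame (a screenshot) carries full noise."""
    rng = np.random.default_rng(seed)
    img = img.astype(float)
    frames = []
    residual = np.zeros_like(img)
    for _ in range(n_frames - 1):
        noise = rng.uniform(-strength, strength, img.shape)
        frames.append(img + noise)
        residual += noise
    frames.append(img - residual)   # final frame cancels accumulated noise
    return frames
```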


ACM Multimedia | 2015

If You Can't Beat Them, Join Them: Learning with Noisy Data

Pravin Kakar; Alex Yong-Sang Chia

Vision capabilities have been significantly enhanced in recent years by the availability of powerful computing hardware and sufficiently large and varied databases. However, labelling these image databases prior to training still involves considerable effort and is a roadblock to truly scalable learning. For instance, it has been shown that tag noise levels in Flickr images are as high as 80%. Therefore, in an effort to exploit large image datasets, extensive effort has been invested in reducing the tag noise of the data, either by refining the image tags or by developing robust learning frameworks. In this work, we follow the latter approach and propose a multi-layer neural-network-based noisy-learning framework that incorporates the noise probabilities of a training dataset. These are then utilized to sustain levels of accuracy even in the presence of significant noise. We present results on several datasets of varying size and complexity, and demonstrate that the proposed mechanism outperforms existing methods despite often employing weaker constraints and assumptions.
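One common way to incorporate known noise probabilities, in the spirit described above, is to pass the network's clean-label posterior through a label-noise transition matrix before computing the loss. This is a generic sketch, not the paper's exact framework; the transition-matrix formulation and names here are assumptions:

```python
import numpy as np

def noisy_label_loss(logits, noisy_labels, transition):
    """Cross-entropy against noisy labels through a noise transition matrix.

    transition[i, j] = P(observed label j | true label i). The softmax
    estimates the clean posterior; multiplying by the transition matrix
    gives the distribution over observed (noisy) labels, which is what the
    noisy training targets are actually drawn from.
    """
    z = logits - logits.max(axis=1, keepdims=True)        # stable softmax
    p_clean = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_noisy = p_clean @ transition                        # P(observed | x)
    eps = 1e-12                                           # numerical floor
    picked = p_noisy[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked + eps))
```

With an identity transition matrix this reduces to ordinary cross-entropy; with a well-estimated transition matrix, systematically flipped labels no longer incur a large penalty, which is the intuition behind training through noise rather than cleaning it away.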

Collaboration


Dive into Alex Yong-Sang Chia's collaborations.

Top Co-Authors

Deepu Rajan (Nanyang Technological University)
Maylor K. H. Leung (Nanyang Technological University)
Kin Choong Yow (Nanyang Technological University)
Vitali Zagorodnov (Nanyang Technological University)