
Publication


Featured research published by Robby T. Tan.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Separating reflection components of textured surfaces using a single image

Robby T. Tan; Katsushi Ikeuchi

In inhomogeneous objects, highlights are linear combinations of diffuse and specular reflection components. A number of methods have been proposed to separate or decompose these two components. To our knowledge, all methods that use a single input image require explicit color segmentation to deal with multicolored surfaces. Unfortunately, for complex textured images, current color segmentation algorithms still cannot produce reliable segmentations. Consequently, a method without explicit color segmentation becomes indispensable, and this paper presents such a method. The method is based solely on colors, particularly chromaticity, without requiring any geometrical information. One of the basic ideas is to iteratively compare the intensity logarithmic differentiation of an input image and its specular-free image. A specular-free image is an image that has exactly the same geometrical profile as the diffuse component of the input image and that can be generated by shifting each pixel's intensity and maximum chromaticity nonlinearly. Unlike existing methods using a single image, all processes in the proposed method are done locally, involving a maximum of only two neighboring pixels. This local operation is useful for handling textured objects in complex multicolored scenes. Evaluations by comparison with the results of polarizing filters demonstrate the effectiveness of the proposed method.
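The specular-free construction described above can be sketched numerically. Under the dichromatic reflection model with illumination normalized to white, every pixel's maximum chromaticity can be shifted to an arbitrary fixed value; the function name and the choice of 0.5 below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def specular_free(img, lam=0.5):
    """Shift every pixel's maximum chromaticity to a fixed value `lam`,
    yielding an image with the same geometric (shading) profile as the
    diffuse component.  Assumes illumination is normalized to white."""
    img = np.asarray(img, dtype=float)
    i_max = img.max(axis=-1)                      # per-pixel maximum intensity
    i_sum = img.sum(axis=-1)                      # per-pixel intensity sum
    sigma = np.where(i_sum > 0, i_max / np.maximum(i_sum, 1e-12), 1.0 / 3.0)
    # Diffuse shading term implied by the assumed maximum chromaticity `lam`
    m_d = i_max * (3.0 * sigma - 1.0) / (np.maximum(sigma, 1e-12) * (3.0 * lam - 1.0))
    m_s = i_sum - m_d                             # residual (specular-like) term
    return img - m_s[..., None] / 3.0             # subtract the equal specular share
```

Two pixels sharing the same diffuse component but different specular strengths map to identical specular-free values, which is the property the iterative logarithmic-differentiation comparison exploits.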


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2004

Separating reflection components based on chromaticity and noise analysis

Robby T. Tan; Ko Nishino; Katsushi Ikeuchi

Many algorithms in computer vision assume diffuse-only reflections and deem specular reflections to be outliers. However, in the real world, the presence of specular reflections is inevitable, since many dielectric inhomogeneous objects exhibit both diffuse and specular reflections. To resolve this problem, we present a method to separate the two reflection components. The method is principally based on the distribution of specular and diffuse points in a two-dimensional maximum chromaticity-intensity space. We found that, by utilizing this space and the known illumination color, the problem of reflection component separation can be simplified into the problem of identifying the diffuse maximum chromaticity. To identify the diffuse maximum chromaticity correctly, an analysis of noise is required, since most real images suffer from it. Unlike existing methods, the proposed method can separate the reflection components robustly for any kind of surface roughness and light direction.
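The maximum chromaticity-intensity space takes only a few lines to build. For a single surface color under white-normalized illumination, diffuse pixels share one maximum chromaticity while specular pixels drift toward 1/3, so the diffuse maximum chromaticity shows up as the largest chromaticity value; taking a high percentile rather than the maximum is a crude stand-in for the paper's noise analysis. An illustrative reduction, not the paper's implementation:

```python
import numpy as np

def max_chroma_intensity(img):
    """Project pixels into (maximum chromaticity, maximum intensity) space."""
    flat = np.asarray(img, dtype=float).reshape(-1, 3)
    i_max = flat.max(axis=1)
    sigma = i_max / np.maximum(flat.sum(axis=1), 1e-12)
    return sigma, i_max

def estimate_diffuse_max_chroma(img, pct=99.0):
    """Estimate the diffuse maximum chromaticity as a high percentile of the
    per-pixel maximum chromaticities (a crude hedge against sensor noise).
    Assumes one dominant surface color under white-normalized illumination."""
    sigma, _ = max_chroma_intensity(img)
    return np.percentile(sigma, pct)
```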


Computer Vision and Pattern Recognition | 2016

Rain Streak Removal Using Layer Priors

Yu Li; Robby T. Tan; Xiaojie Guo; Jiangbo Lu; Michael S. Brown

This paper addresses the problem of rain streak removal from a single image. Rain streaks impair the visibility of an image and introduce undesirable interference that can severely affect the performance of computer vision algorithms. Rain streak removal can be formulated as a layer decomposition problem, with a rain streak layer superimposed on a background layer containing the true scene content. Existing decomposition methods that address this problem either employ dictionary learning or impose a low-rank structure on the appearance of the rain streaks. While these methods can improve the overall visibility, they tend to leave too many rain streaks in the background image or over-smooth the background image. In this paper, we propose an effective method that uses simple patch-based priors for both the background and rain layers. These priors are based on Gaussian mixture models and can accommodate multiple orientations and scales of the rain streaks. This simple approach removes rain streaks better than existing methods both qualitatively and quantitatively. We overview our method and demonstrate its effectiveness over prior work on a number of examples.
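The Gaussian mixture patch priors can be illustrated with a small log-likelihood routine: each layer (background, rain) gets its own mixture over vectorized patches, and the decomposition is steered by how well candidate patches score under each prior. The diagonal-covariance mixture below is a toy stand-in with made-up parameters, not the learned priors from the paper:

```python
import numpy as np

def gmm_loglik(patches, weights, means, variances):
    """Log-likelihood of flattened patches under a diagonal-covariance GMM.
    patches: (n, d); weights: (k,); means: (k, d); variances: (k, d)."""
    patches = np.atleast_2d(np.asarray(patches, dtype=float))
    diff = patches[:, None, :] - means[None, :, :]                  # (n, k, d)
    # Per-component Gaussian log-density with diagonal covariance
    log_comp = -0.5 * np.sum(diff**2 / variances
                             + np.log(2.0 * np.pi * variances), axis=2)
    log_comp += np.log(weights)                                     # mixture weights
    m = log_comp.max(axis=1, keepdims=True)                         # log-sum-exp
    return (m + np.log(np.exp(log_comp - m).sum(axis=1, keepdims=True))).ravel()
```

In the paper these per-layer log-likelihoods enter the decomposition objective as regularizers: a rain prior fitted to thin, oriented streak patches scores streak-like patches highly, while the background prior favors natural-image structure.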


IEEE Intelligent Vehicles Symposium | 2007

Visibility Enhancement for Roads with Foggy or Hazy Scenes

Robby T. Tan; Niklas Pettersson; Lars Petersson

Bad weather, particularly fog and haze, commonly obstructs drivers' view of road conditions, which frequently leads to a considerable number of road accidents. To avoid the problem, automatic methods have been proposed to enhance visibility in bad weather. Methods that work on visible wavelengths can be categorized, based on the type of their input, into two approaches: those using polarizing filters, and those using images taken under different fog densities. Both approaches require multiple images taken from exactly the same point of view. While they can produce reasonably good results, this requirement makes them impractical, particularly in real-time applications such as vehicle systems. Considering their drawbacks, our goal is to develop a method that requires solely a single image taken with an ordinary digital camera, without any additional hardware. The method principally uses color and intensity information. It enhances visibility after estimating the color of the skylight and the values of the airlight. The experimental results on real images show the effectiveness of the approach.
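Work in this line builds on the standard atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)), where A is the airlight color and t the transmission. The sketch below shows the model's inversion under the assumption that A and t are already available; the brightest-pixel airlight estimator is a common placeholder, not the paper's color-and-intensity estimation:

```python
import numpy as np

def estimate_airlight(img):
    """Placeholder airlight estimate: the color of the brightest pixel
    (heavily hazed regions are close to the airlight color)."""
    flat = np.asarray(img, dtype=float).reshape(-1, 3)
    return flat[flat.sum(axis=1).argmax()]

def dehaze(img, airlight, transmission, t_min=0.1):
    """Invert I = J*t + A*(1 - t) for the scene radiance J.
    `transmission` is a per-pixel map; t_min guards the division."""
    t = np.clip(np.asarray(transmission, dtype=float), t_min, 1.0)[..., None]
    return (np.asarray(img, dtype=float) - airlight) / t + airlight
```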


International Conference on Computer Vision | 2005

Consistent surface color for texturing large objects in outdoor scenes

Rei Kawakami; Katsushi Ikeuchi; Robby T. Tan

Color appearance of an object is significantly influenced by the color of the illumination. When the illumination color changes, the color appearance of the object changes accordingly, causing its appearance to be inconsistent. To arrive at color constancy, we have developed a physics-based method of estimating and removing the illumination color. In this paper, we focus on the use of this method to deal with outdoor scenes, since very few physics-based methods have successfully handled outdoor color constancy. Our method is principally based on shadowed and non-shadowed regions. Previously, researchers discovered that shadowed regions are illuminated by sky light, while non-shadowed regions are illuminated by a combination of sky light and sunlight. Based on this difference in illumination, we estimate the illumination colors (both the sunlight and the sky light) and then remove them. To reliably estimate the illumination colors in outdoor scenes, we include an analysis of noise, since the presence of noise is inevitable in natural images. As a result, compared to existing methods, the proposed method is more effective and robust in handling outdoor scenes. In addition, the proposed method requires only a single input image, making it useful for many applications of computer vision.
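The removal step can be pictured as a diagonal (von Kries style) correction: once the illumination color in a region is known, dividing it out channel-wise leaves the surface reflectance under a canonical white light. In this sketch the illuminant color is simply given as an argument (in practice the shadowed-region statistics would supply the sky-light estimate); it is an illustrative shortcut, not the paper's noise-aware estimator:

```python
import numpy as np

def remove_illumination(img, illum_color):
    """Divide out an estimated illumination color channel-wise (diagonal /
    von Kries correction), scaled so the illuminant maps to white while the
    overall brightness stays comparable."""
    illum = np.asarray(illum_color, dtype=float)
    illum = illum / illum.sum()                 # illumination chromaticity
    return np.asarray(img, dtype=float) / (3.0 * illum)
```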


European Conference on Computer Vision | 2014

A Contrast Enhancement Framework with JPEG Artifacts Suppression

Yu Li; Fangfang Guo; Robby T. Tan; Michael S. Brown

Contrast enhancement is used by many algorithms in computer vision. It is applied either explicitly, as in histogram equalization and tone-curve manipulation, or implicitly via methods that deal with degradation from physical phenomena such as haze, fog, or underwater imaging. While contrast enhancement boosts the image appearance, it can unintentionally boost unsightly image artifacts, especially artifacts from JPEG compression. Most JPEG implementations optimize the compression in a scene-dependent manner such that low-contrast images exhibit few perceivable artifacts even at relatively high compression factors. After contrast enhancement, however, these artifacts become significantly visible. Although there are numerous approaches targeting JPEG artifact reduction, these are generic in nature and are applied either as pre- or post-processing steps. When applied as pre-processing, existing methods tend to over-smooth the image. When applied as post-processing, they are often ineffective at removing the boosted artifacts. To resolve this problem, we propose a framework that suppresses compression artifacts as an integral part of the contrast enhancement procedure. We show that this approach can produce compelling results superior to those obtained by existing JPEG artifact removal methods for several types of contrast enhancement problems.
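The interaction described above is easy to reproduce: a plain global histogram equalization, sketched below, stretches low-contrast content and, along with it, any blocky JPEG residue, which is the failure mode the framework avoids by suppressing artifacts inside the enhancement itself. This is textbook equalization, not the paper's tone mapping:

```python
import numpy as np

def equalize(gray, levels=256):
    """Global histogram equalization for a uint8-range grayscale array."""
    g = np.asarray(gray)
    hist = np.bincount(g.ravel(), minlength=levels)   # per-level pixel counts
    cdf = hist.cumsum() / g.size                      # normalized CDF in [0, 1]
    lut = np.round((levels - 1) * cdf).astype(g.dtype)
    return lut[g]                                     # remap via lookup table
```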


Computer Vision and Pattern Recognition | 2005

Reflection components decomposition of textured surfaces using linear basis functions

Robby T. Tan; Katsushi Ikeuchi

Most existing methods of reflection components decomposition using a single color image require color segmentation. The few methods that employ local operations are able to avoid this requirement; however, they usually suffer from color discontinuity problems. In this paper, we introduce a decomposition method using a single color image that requires neither (global) color segmentation nor (local) color discontinuity detection. The method principally utilizes the coefficients of the reflectance basis functions of the input image and its specular-free image. Combining those coefficients enables us to find the diffuse coefficients of the specular pixels for every surface color. As a result, the decomposition becomes a well-posed problem and can be solved in closed form. Our experimental results on real complex textured images show the effectiveness of the proposed method.


International Journal of Computer Vision | 2013

Camera Spectral Sensitivity and White Balance Estimation from Sky Images

Rei Kawakami; Hongxun Zhao; Robby T. Tan; Katsushi Ikeuchi

Photometric camera calibration is often required in physics-based computer vision. There have been a number of studies to estimate camera response functions (gamma functions) and vignetting effects from images. However, less attention has been paid to camera spectral sensitivities and white balance settings. This is unfortunate, since those two properties significantly affect image colors. Motivated by this, a method to estimate camera spectral sensitivities and the white balance setting jointly from images with sky regions is introduced. The basic idea is to use the sky regions to infer the sky spectra. Given sky images as the input and assuming the sun direction with respect to the camera viewing direction can be extracted, the proposed method estimates the turbidity of the sky by fitting the image intensities to a sky model. Subsequently, it calculates the sky spectra from the estimated turbidity. Having the sky


International Conference on Computer Vision | 2011

UMPM benchmark: A multi-person dataset with synchronized video and motion capture data for evaluation of articulated human motion and interaction

N.P. van der Aa; Xinghan Luo; Geert-Jan Giezeman; Robby T. Tan; Remco C. Veltkamp


Computer Vision and Pattern Recognition | 2015

Simultaneous video defogging and stereo reconstruction

Zhuwen Li; Ping Tan; Robby T. Tan; Danping Zou; Steven Zhiying Zhou; Loong Fah Cheong


Collaboration


Dive into Robby T. Tan's collaborations.

Top Co-Authors

Loong Fah Cheong | National University of Singapore

Shaodi You | Australian National University

Michael S. Brown | National University of Singapore

Yu Li | National University of Singapore

Ruoteng Li | National University of Singapore