Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Siying Liu is active.

Publication


Featured research published by Siying Liu.


Computer Vision and Pattern Recognition | 2011

Smoothly varying affine stitching

Wen-Yan Lin; Siying Liu; Yasuyuki Matsushita; Tian-Tsong Ng; Loong Fah Cheong

Traditional image stitching using parametric transforms such as homography only produces perceptually correct composites for planar scenes or parallax-free camera motion between source frames. This limits mosaicing to source images taken from the same physical location. In this paper, we introduce a smoothly varying affine stitching field which is flexible enough to handle parallax while retaining the good extrapolation and occlusion handling properties of parametric transforms. Our algorithm, which jointly estimates both the stitching field and correspondence, permits the stitching of general motion source images, provided the scenes do not contain abrupt protrusions.
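The core idea of a spatially varying affine field can be illustrated with a toy sketch (not the paper's joint estimation of field and correspondence): each point is warped by an affine transform that changes smoothly over the image plane, here a simple linear blend between two made-up affine matrices along x.

```python
import numpy as np

def varying_affine_warp(points, A_left, A_right, width):
    """Warp 2D points with an affine transform that blends smoothly
    from A_left (at x=0) to A_right (at x=width). Each A is a 2x3
    affine matrix acting on homogeneous coordinates [x, y, 1]."""
    points = np.asarray(points, dtype=float)
    warped = np.empty_like(points)
    for i, (x, y) in enumerate(points):
        t = np.clip(x / width, 0.0, 1.0)      # blend weight varies with x
        A = (1.0 - t) * A_left + t * A_right  # smoothly varying affine field
        warped[i] = A @ np.array([x, y, 1.0])
    return warped

# Identity on the left edge, a pure translation (+10 in x) on the right edge
A_left = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
A_right = np.array([[1.0, 0.0, 10.0],
                    [0.0, 1.0, 0.0]])
pts = np.array([[0.0, 5.0], [50.0, 5.0], [100.0, 5.0]])
out = varying_affine_warp(pts, A_left, A_right, width=100.0)
```

Unlike a single global homography, such a field can absorb parallax because distant image regions are free to follow different local transforms.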


European Conference on Computer Vision | 2010

Shape from second-bounce of light transport

Siying Liu; Tian-Tsong Ng; Yasuyuki Matsushita

This paper describes a method to recover scene geometry from the second bounce of light transport. We show that form factors (up to a scaling ambiguity) can be derived from the second-bounce component of light transport in the Lambertian case. The form factors carry information about the geometric relationship between every pair of scene points, i.e., the distance between scene points and their relative surface orientations. Modelling the scene as polygonal, we develop a method to recover the scene geometry up to a scaling ambiguity from the form factors by optimization. Unlike other shape-from-intensity methods, our method simultaneously estimates depth and surface normals; it can therefore handle discontinuous surfaces, since it avoids surface normal integration. Various simulation and real-world experiments demonstrate the correctness of the proposed theory of shape recovery from light transport.
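The form factor the abstract refers to can be sketched for a pair of differential Lambertian patches. The function below is the textbook point-to-point form factor, shown here only to make concrete how distance and relative orientation enter; it is not the paper's recovery algorithm.

```python
import numpy as np

def point_to_point_form_factor(p_i, n_i, p_j, n_j, area_j):
    """Differential form factor from patch i to patch j in a Lambertian
    scene: cos(theta_i) * cos(theta_j) / (pi * r^2) * dA_j. It encodes
    both the distance and the relative orientation of the two patches."""
    d = np.asarray(p_j, dtype=float) - np.asarray(p_i, dtype=float)
    r2 = d @ d                       # squared distance between patches
    d_hat = d / np.sqrt(r2)
    cos_i = max(n_i @ d_hat, 0.0)    # clamp: back-facing patches exchange no light
    cos_j = max(-(n_j @ d_hat), 0.0)
    return cos_i * cos_j / (np.pi * r2) * area_j

# Two unit-area patches directly facing each other, 2 units apart
n_i = np.array([0.0, 0.0, 1.0])
n_j = np.array([0.0, 0.0, -1.0])
F = point_to_point_form_factor([0.0, 0.0, 0.0], n_i, [0.0, 0.0, 2.0], n_j, area_j=1.0)
```

Because the form factor only fixes the ratio of orientations to squared distance, scaling the whole scene leaves it unchanged, which is the scaling ambiguity the abstract mentions.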


Proceedings of SPIE | 2013

Subspace methods for computational relighting

Ha Q. Nguyen; Siying Liu; Minh N. Do

We propose a vector space approach for relighting a Lambertian convex object with a distant light source, whose crucial task is the decomposition of the reflectance function into albedos (or reflection coefficients) and lightings based on a set of images of the same object and its 3-D model. Making use of the fact that reflectance functions are well approximated by a low-dimensional linear subspace spanned by the first few spherical harmonics, this inverse problem can be formulated as a matrix factorization, in which the basis of the subspace is encoded in the spherical harmonic matrix S. A necessary and sufficient condition on S for unique factorization is derived, along with a new notion of matrix rank called nonseparable full rank. An SVD-based algorithm for exact factorization in the noiseless case is introduced. In the presence of noise, the algorithm is slightly modified by incorporating the positivity of albedos into a convex optimization problem. The proposed algorithms are demonstrated on a set of synthetic data.
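A simplified first-order illustration of the subspace model (not the paper's SVD-based factorization, and with entirely synthetic data): the spherical harmonic matrix S is built from surface normals, images are generated as albedo times S applied to a lighting vector, and linear least squares recovers the product of the two factors, which is exactly the global scaling ambiguity the factorization must resolve.

```python
import numpy as np

def sh_basis_order1(normals):
    """First-order real spherical harmonics at unit normals: the
    four-dimensional subspace that captures most of a Lambertian
    reflectance function."""
    n = np.asarray(normals, dtype=float)
    c0 = 0.5 / np.sqrt(np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.column_stack([
        np.full(len(n), c0),  # Y_0,0 (constant term)
        c1 * n[:, 1],         # Y_1,-1 ~ n_y
        c1 * n[:, 2],         # Y_1,0  ~ n_z
        c1 * n[:, 0],         # Y_1,1  ~ n_x
    ])

# Synthetic Lambertian intensities: I = albedo * (S @ lighting)
rng = np.random.default_rng(0)
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
S = sh_basis_order1(normals)               # the spherical harmonic matrix S
lighting = np.array([1.0, 0.2, 0.8, -0.3])
albedo = 0.7
I = albedo * (S @ lighting)

# Least squares only recovers albedo * lighting; separating the two
# factors requires the extra conditions on S developed in the paper.
est, *_ = np.linalg.lstsq(S, I, rcond=None)
```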


International Journal of Computer Vision | 2012

Simultaneous Camera Pose and Correspondence Estimation with Motion Coherence

Wen-Yan Lin; Loong Fah Cheong; Ping Tan; Guo Dong; Siying Liu

Traditionally, the camera pose recovery problem has been formulated as one of estimating the optimal camera pose given a set of point correspondences. This critically depends on the accuracy of the point correspondences and has problems dealing with ambiguous features such as edge contours and high visual clutter. Joint estimation of camera pose and correspondence attempts to improve performance by explicitly acknowledging the chicken-and-egg nature of the pose and correspondence problem. However, such joint approaches for the two-view problem are still few, and even then they face problems when scenes contain largely edge cues with few corners, because epipolar geometry only provides a "soft" point-to-line constraint. Viewed from the perspective of point set registration, the point matching process can be regarded as the registration of points while preserving their relative positions (i.e. preserving scene coherence). By demanding that the point set be transformed coherently across views, this framework leverages higher-level perceptual information such as the shape of the contour. While thus potentially allowing registration of non-unique edge points, the registration framework in its traditional form is subject to substantial point localization error and is thus not suitable for estimating camera pose. In this paper, we introduce an algorithm which jointly estimates camera pose and correspondence within a point set registration framework based on motion coherence, with the camera pose helping to localize the edge registration, while the "ambiguous" edge information helps to guide camera pose computation. The algorithm can compute camera pose over large displacements and, by utilizing the non-unique edge points, can recover camera pose from what were previously regarded as feature-impoverished SfM scenes. Our algorithm is also sufficiently flexible to incorporate high-dimensional feature descriptors and works well on traditional SfM scenes with adequate numbers of unique corners.
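A minimal sketch of the motion-coherence prior itself, in the spirit of Gaussian-kernel regularized point-set registration (this is not the paper's joint pose estimation): the displacement field is modeled as v(x) = sum_j G(x, x_j) w_j with a Gaussian kernel G, and regularized weights force nearby points to move together.

```python
import numpy as np

def coherent_displacements(points, raw_disp, beta=1.0, lam=0.1):
    """Smooth a noisy displacement field so that nearby points move
    coherently: solve (G + lam*I) W = V for the kernel weights W, where
    G_ij = exp(-|x_i - x_j|^2 / (2 beta^2)), and return the regularized
    field G @ W. Larger lam enforces stronger coherence."""
    X = np.asarray(points, dtype=float)
    V = np.asarray(raw_disp, dtype=float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    G = np.exp(-d2 / (2.0 * beta ** 2))
    W = np.linalg.solve(G + lam * np.eye(len(X)), V)
    return G @ W

# Four nearby points whose noisy displacements disagree in magnitude
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
noisy = np.array([[1.0, 0.0], [1.2, 0.0], [0.8, 0.0], [1.0, 0.0]])
smooth = coherent_displacements(pts, noisy)
```

After regularization the four displacements are nearly identical, which is the behavior that lets ambiguous edge points be registered as a coherent group rather than individually.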


European Conference on Computer Vision | 2016

RepMatch: Robust Feature Matching and Pose for Reconstructing Modern Cities

Wen Yan Lin; Siying Liu; Nianjuan Jiang; Minh N. Do; Ping Tan; Jiangbo Lu

A perennial problem in recovering 3-D models from images is the repeated structures common in modern cities. The problem can be traced to the feature matcher, which needs to match less distinctive features (permitting wide baselines and avoiding broken sequences) while simultaneously avoiding incorrect matching of ambiguous repeated features. To meet this need, we develop RepMatch, an epipolar-guided feature matcher (assuming predominantly camera motion) that accommodates both wide baselines and repeated structures. RepMatch is based on using RANSAC to guide the training of match consistency curves for differentiating true and false matches. By considering the set of all nearest-neighbor matches, RepMatch can procure very large numbers of matches over wide baselines. This in turn lends stability to pose estimation. RepMatch's performance compares favorably on standard datasets and enables more complete reconstructions of modern architectures.


International Conference on Computer Vision | 2015

PatchMatch-Based Automatic Lattice Detection for Near-Regular Textures

Siying Liu; Tian-Tsong Ng; Kalyan Sunkavalli; Minh N. Do; Eli Shechtman; Nathan A. Carr

In this work, we investigate the problem of automatically inferring the lattice structure of near-regular textures (NRT) in real-world images. Our technique leverages the PatchMatch algorithm for finding k-nearest-neighbor (kNN) correspondences in an image. We use these kNNs to recover an initial estimate of the 2D wallpaper basis vectors, and seed vertices of the texture lattice. We iteratively expand this lattice by solving an MRF optimization problem. We show that we can discretize the space of good solutions for the MRF using the kNNs, allowing us to efficiently and accurately optimize the MRF energy function using the Particle Belief Propagation algorithm. We demonstrate our technique on a benchmark NRT dataset containing a wide range of images with geometric and photometric variations, and show that our method clearly outperforms the state of the art in terms of both texel detection rate and texel localization score.
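The role the kNN field plays in lattice recovery can be sketched on a synthetic lattice (a deliberate simplification of the paper's pipeline: no PatchMatch, no MRF expansion): collect nearest-neighbour displacement vectors between texel positions and take the two shortest non-collinear ones as the 2D wallpaper basis.

```python
import numpy as np

def lattice_basis_from_offsets(points, k=8):
    """Estimate the two 2D wallpaper basis vectors of a regular lattice
    from k-nearest-neighbour displacement vectors (the role the kNN
    correspondence field plays for near-regular textures)."""
    P = np.asarray(points, dtype=float)
    offsets = []
    for p in P:
        d = P - p
        dist = np.linalg.norm(d, axis=1)
        for j in np.argsort(dist)[1:k + 1]:   # skip the point itself
            o = d[j]
            # keep one representative per +-pair (canonical half-plane)
            if o[0] > 1e-9 or (abs(o[0]) <= 1e-9 and o[1] > 0):
                offsets.append(o)
    offsets.sort(key=np.linalg.norm)
    t1 = offsets[0]                            # shortest lattice offset
    for o in offsets[1:]:
        # first offset not (anti)parallel to t1 is the second basis vector
        if abs(t1[0] * o[1] - t1[1] * o[0]) > 1e-6:
            return t1, o
    raise ValueError("degenerate point set")

# Synthetic texel lattice generated by t1=(1, 0) and t2=(0.2, 1)
t1_true, t2_true = np.array([1.0, 0.0]), np.array([0.2, 1.0])
pts = np.array([i * t1_true + j * t2_true for i in range(5) for j in range(5)])
t1, t2 = lattice_basis_from_offsets(pts)
```

Real images add detection noise and photometric variation, which is what the MRF optimization and Particle Belief Propagation in the paper are there to handle.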


IEEE Transactions on Image Processing | 2017

Inverse Rendering and Relighting From Multiple Color Plus Depth Images

Siying Liu; Minh N. Do

We propose a novel relighting approach that takes advantage of multiple color plus depth images acquired from a consumer camera. Assuming distant illumination and Lambertian reflectance, we model the reflected light field in terms of spherical harmonic coefficients of the bi-directional reflectance distribution function and lighting. We make use of the noisy depth information together with color images taken under different illumination conditions to refine the surface normals inferred from depth. We first refine the surface normals using first-order spherical harmonics, initializing this non-linear optimization with a linear approximation to greatly reduce computation time. With the surface normals refined, we formulate the recovery of albedo and lighting as a matrix factorization involving second-order spherical harmonics. Albedo and lighting coefficients are recovered up to a global scaling ambiguity. We demonstrate our method on both simulated and real data, and show that it can successfully recover both illumination and albedo to produce realistic relighting results.
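The second-order spherical harmonic lighting model, and the global scaling ambiguity the abstract mentions, can be sketched as follows. The basis constants are the standard real spherical-harmonic normalizations; the lighting vector is made up for illustration.

```python
import numpy as np

def sh9(n):
    """Nine real spherical harmonics up to second order, evaluated at a
    unit normal n: the basis in which distant Lambertian lighting is
    well approximated."""
    x, y, z = n
    return np.array([
        0.282095,                          # Y_0,0
        0.488603 * y,                      # Y_1,-1
        0.488603 * z,                      # Y_1,0
        0.488603 * x,                      # Y_1,1
        1.092548 * x * y,                  # Y_2,-2
        1.092548 * y * z,                  # Y_2,-1
        0.315392 * (3.0 * z * z - 1.0),    # Y_2,0
        1.092548 * x * z,                  # Y_2,1
        0.546274 * (x * x - y * y),        # Y_2,2
    ])

def shade(albedo, lighting, normal):
    """Reflected intensity under the SH lighting model: albedo * (L . Y(n))."""
    return albedo * (lighting @ sh9(normal))

L = np.array([0.9, 0.1, 0.4, -0.2, 0.0, 0.1, 0.05, -0.1, 0.2])
n = np.array([0.0, 0.0, 1.0])
i1 = shade(0.6, L, n)
# The global scaling ambiguity: (albedo/s, s*L) produces the same image
i2 = shade(0.6 / 2.0, 2.0 * L, n)
```

Relighting then amounts to swapping L for a new lighting vector while keeping the recovered albedo and normals fixed.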


Proceedings of SPIE | 2015

3D quantitative phase imaging of neural networks using WDT

Taewoo Kim; Siying Liu; Raj Iyer; Martha U. Gillette; Gabriel Popescu

White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique achieves sub-micron resolution in all three directions with high sensitivity afforded by the low coherence of a white-light source. Demonstrations of the technique on single-cell imaging have been presented previously; however, imaging of any larger sample, including a cluster of cells, had not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is confocal fluorescence microscopy, which requires fluorescent tagging, either with transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged to visualize the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image to investigate the 3D structure at synapses.


International Conference on Image Processing | 2014

Relighting from multiple color and depth images using matrix factorization

Siying Liu; Minh N. Do

In this paper, we propose a novel relighting approach that takes advantage of the 3D shape information acquired from a depth sensor. Assuming distant illumination and Lambertian reflectance, we model the reflected light field in terms of spherical harmonic coefficients of the Bi-directional Reflectance Distribution Function (BRDF) and lighting. To estimate both the reflectance and illumination, different illumination samples can be generated by moving the object of interest in space while keeping the light source unchanged. The samples are registered onto the base view's coordinate frame using camera poses estimated from multiple depth maps. Our results indicate that we can successfully recover both the illumination and the diffuse BRDF (up to a global scaling ambiguity). Our method can be used to estimate complex illumination in indoor environments for applications such as lighting transfer.


Computer Vision and Pattern Recognition | 2012

Aligning images in the wild

Wen-Yan Lin; Linlin Liu; Yasuyuki Matsushita; Kok-Lim Low; Siying Liu

Collaboration


Dive into Siying Liu's collaborations.

Top Co-Authors

Wen-Yan Lin
National University of Singapore

Loong Fah Cheong
National University of Singapore

Ping Tan
Simon Fraser University

Guo Dong
DSO National Laboratories

Kok-Lim Low
National University of Singapore

Linlin Liu
National University of Singapore

Nianjuan Jiang
National University of Singapore