Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lap-Fai Yu is active.

Publications


Featured research published by Lap-Fai Yu.


International Conference on Computer Graphics and Interactive Techniques | 2011

Make it home: automatic optimization of furniture arrangement

Lap-Fai Yu; Sai-Kit Yeung; Chi-Keung Tang; Demetri Terzopoulos; Tony F. Chan; Stanley Osher

We present a system that automatically synthesizes indoor scenes realistically populated by a variety of furniture objects. Given examples of sensibly furnished indoor scenes, our system extracts, in advance, hierarchical and spatial relationships for various furniture objects, encoding them into priors associated with ergonomic factors, such as visibility and accessibility, which are assembled into a cost function whose optimization yields realistic furniture arrangements. To deal with the prohibitively large search space, the cost function is optimized by simulated annealing using a Metropolis-Hastings state search step. We demonstrate that our system can synthesize multiple realistic furniture arrangements and, through a perceptual study, investigate whether there is a significant difference in the perceived functionality of the automatically synthesized results relative to furniture arrangements produced by human designers.
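
The optimization loop described above is standard enough to sketch. Below is a minimal Python illustration of simulated annealing with a Metropolis-Hastings acceptance step; the toy cost (pairwise overlap between unit-square furniture footprints) and the single-object move proposal are illustrative stand-ins, not the paper's actual ergonomic priors.

import math
import random

def cost(arrangement):
    # Toy stand-in for the paper's ergonomic cost: penalize overlap
    # between furniture pieces modeled as unit squares centered at (x, y).
    c = 0.0
    for i in range(len(arrangement)):
        for j in range(i + 1, len(arrangement)):
            (x1, y1), (x2, y2) = arrangement[i], arrangement[j]
            ox = max(0.0, 1.0 - abs(x1 - x2))
            oy = max(0.0, 1.0 - abs(y1 - y2))
            c += ox * oy
    return c

def propose(arrangement):
    # Proposal move: perturb one randomly chosen object.
    new = list(arrangement)
    i = random.randrange(len(new))
    x, y = new[i]
    new[i] = (x + random.uniform(-0.5, 0.5), y + random.uniform(-0.5, 0.5))
    return new

def anneal(arrangement, steps=20000, t0=1.0):
    current, c_cur = arrangement, cost(arrangement)
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-6  # linear cooling schedule
        cand = propose(current)
        c_cand = cost(cand)
        # Metropolis-Hastings acceptance: always take improvements,
        # occasionally take worse states while the temperature is high.
        if c_cand < c_cur or random.random() < math.exp((c_cur - c_cand) / t):
            current, c_cur = cand, c_cand
    return current

random.seed(0)
start = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(8)]
print(cost(start), cost(anneal(start)))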


Computer Vision and Pattern Recognition | 2013

Shading-Based Shape Refinement of RGB-D Images

Lap-Fai Yu; Sai-Kit Yeung; Yu-Wing Tai; Stephen Lin

We present a shading-based shape refinement algorithm which uses a noisy, incomplete depth map from Kinect to help resolve ambiguities in shape-from-shading. In our framework, the partial depth information is used to overcome bas-relief ambiguity in normals estimation, as well as to assist in recovering relative albedos, which are needed to reliably estimate the lighting environment and to separate shading from albedo. This refinement of surface normals using a noisy depth map leads to high-quality 3D surfaces. The effectiveness of our algorithm is demonstrated through several challenging real-world examples.
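
As a rough illustration of the idea, the following numpy sketch refines per-pixel normals by descending a two-term energy: a Lambertian shading residual under a single known distant light, plus a penalty for drifting from the depth-derived normals. The joint lighting and albedo estimation that the paper performs is omitted, and all names and parameters here are illustrative.

import numpy as np

def refine_normals(I, n_depth, light, albedo=1.0, lam=0.5, iters=200, lr=0.1):
    # I:       (N,) observed intensities
    # n_depth: (N, 3) noisy unit normals derived from the depth map
    # light:   (3,) known distant-light direction
    n = n_depth.copy()
    for _ in range(iters):
        shade = np.clip(n @ light, 0.0, None)             # Lambertian n . l
        resid = albedo * shade - I                        # shading residual
        grad = (albedo * resid[:, None] * light[None, :]  # data term
                + lam * (n - n_depth))                    # depth prior term
        n -= lr * grad
        n /= np.linalg.norm(n, axis=1, keepdims=True)     # stay on unit sphere
    return n

# Synthetic check: refine noisy normals against noiseless shading.
rng = np.random.default_rng(0)
light = np.array([0.0, 0.0, 1.0])
true_n = rng.normal(size=(500, 3)); true_n[:, 2] = np.abs(true_n[:, 2])
true_n /= np.linalg.norm(true_n, axis=1, keepdims=True)
noisy = true_n + 0.2 * rng.normal(size=true_n.shape)
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
refined = refine_normals(true_n @ light, noisy, light)
print(np.mean(np.abs(noisy - true_n)), np.mean(np.abs(refined - true_n)))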


International Conference on Computer Graphics and Interactive Techniques | 2012

DressUp!: outfit synthesis through automatic optimization

Lap-Fai Yu; Sai-Kit Yeung; Demetri Terzopoulos; Tony F. Chan

We present an automatic optimization approach to outfit synthesis. Given the hair color, eye color, and skin color of the input body, plus a wardrobe of clothing items, our outfit synthesis system suggests a set of outfits subject to a particular dress code. We introduce a probabilistic framework for modeling and applying dress codes that exploits a Bayesian network trained on example images of real-world outfits. Suitable outfits are then obtained by optimizing a cost function that guides the selection of clothing items to maximize color compatibility and dress code suitability. We demonstrate our approach on the four most common dress codes: Casual, Sportswear, Business-Casual, and Business. A perceptual study of multiple synthesized outfits demonstrates the efficacy of our framework.
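
A compressed Python sketch of the selection idea: items carry per-dress-code suitabilities (a naive stand-in for the paper's Bayesian network) and a hue, candidate outfits are enumerated, and the best-scoring combination under a chosen dress code wins. The wardrobe, scores, and color-compatibility measure are all hypothetical.

import itertools

# Hypothetical wardrobe: category -> [(item, dress-code suitability, hue)].
wardrobe = {
    "top":    [("t-shirt", {"Casual": 0.9, "Business": 0.1}, 210),
               ("dress shirt", {"Casual": 0.3, "Business": 0.9}, 220)],
    "bottom": [("jeans", {"Casual": 0.9, "Business": 0.1}, 215),
               ("slacks", {"Casual": 0.2, "Business": 0.9}, 30)],
}

def color_compat(hues):
    # Toy compatibility term: prefer small pairwise hue differences.
    pairs = list(itertools.combinations(hues, 2))
    return -sum(abs(a - b) for a, b in pairs) / (360.0 * max(1, len(pairs)))

def score(outfit, code, w=1.0):
    # Naive stand-in for the Bayesian network: treat per-item
    # suitabilities as independent and multiply, then add the color term.
    p = 1.0
    for _, suit, _ in outfit:
        p *= suit[code]
    return p + w * color_compat([hue for _, _, hue in outfit])

best = max(itertools.product(*wardrobe.values()),
           key=lambda outfit: score(outfit, "Business"))
print([name for name, _, _ in best])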


International Conference on 3D Vision | 2016

SceneNN: A Scene Meshes Dataset with aNNotations

Binh-Son Hua; Quang-Hieu Pham; Duc Thanh Nguyen; Minh-Khoi Tran; Lap-Fai Yu; Sai-Kit Yeung

Several RGB-D datasets have been released over the past few years to facilitate research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enrich the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We use the dataset as a benchmark to evaluate state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net.


IEEE Transactions on Visualization and Computer Graphics | 2016

The Clutterpalette: An Interactive Tool for Detailing Indoor Scenes

Lap-Fai Yu; Sai-Kit Yeung; Demetri Terzopoulos

We introduce the Clutterpalette, an interactive tool for detailing indoor scenes with small-scale items. When the user points to a location in the scene, the Clutterpalette suggests detail items for that location. In order to present appropriate suggestions, the Clutterpalette is trained on a dataset of images of real-world scenes, annotated with support relations. Our experiments demonstrate that the adaptive suggestions presented by the Clutterpalette increase modeling speed and enhance the realism of indoor scenes.
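
The suggestion mechanism can be caricatured in a few lines: count how often each small item appears supported by each surface in annotated training data, then rank items for a clicked surface by empirical conditional frequency. The tiny annotation list below is fabricated for illustration.

from collections import Counter, defaultdict

# Hypothetical training annotations: (detail item, supporting surface).
annotations = [
    ("mug", "desk"), ("laptop", "desk"), ("mug", "desk"),
    ("pillow", "sofa"), ("remote", "sofa"), ("book", "desk"),
]

counts = defaultdict(Counter)
for item, surface in annotations:
    counts[surface][item] += 1

def suggest(surface, k=3):
    # Rank items by empirical P(item | surface), the simplest
    # stand-in for the tool's learned support relations.
    total = sum(counts[surface].values())
    return [(item, n / total) for item, n in counts[surface].most_common(k)]

print(suggest("desk"))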


International Conference on Computational Photography | 2013

Outdoor photometric stereo

Lap-Fai Yu; Sai-Kit Yeung; Yu-Wing Tai; Demetri Terzopoulos; Tony F. Chan

We introduce a framework for outdoor photometric stereo utilizing natural environmental illumination. Our framework extends beyond existing photometric stereo methods intended for laboratory environments to encompass robust outdoor operation in the real world. In this paper, we motivate our framework, describe the components of its processing pipeline, and assess its performance in synthetic experiments as well as in natural experiments including objects in outdoor environments with complex real-world illuminations.
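
For context, here is the textbook calibrated Lambertian baseline that outdoor photometric stereo generalizes: with several known light directions, albedo-scaled normals fall out of a linear least-squares solve. The paper's contribution is making this work under uncontrolled natural illumination, which this sketch does not attempt; all data below are synthetic.

import numpy as np

rng = np.random.default_rng(0)

# m images of the same n pixels, each under a known distant light L[i].
# Lambertian model: I = L @ G, where each column of G is albedo * normal.
L = rng.normal(size=(6, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)
G_true = rng.normal(size=(3, 100))
I = L @ G_true + 0.01 * rng.normal(size=(6, 100))  # noisy observations

G, *_ = np.linalg.lstsq(L, I, rcond=None)          # least-squares recovery
albedo = np.linalg.norm(G, axis=0)                 # per-pixel albedo
normals = G / albedo                               # unit surface normals
print(np.allclose(G, G_true, atol=0.1))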


International Conference on Computer Graphics and Interactive Techniques | 2016

Crowd-driven mid-scale layout design

Tian Feng; Lap-Fai Yu; Sai-Kit Yeung; KangKang Yin; Kun Zhou

We propose a novel approach for designing mid-scale layouts by optimizing with respect to human crowd properties. Given an input layout domain such as the boundary of a shopping mall, our approach synthesizes the paths and sites by optimizing three metrics that measure crowd flow properties: mobility, accessibility, and coziness. While these metrics are straightforward to evaluate by a full agent-based crowd simulation, optimizing a layout usually requires hundreds of evaluations, which would take a long time to compute even with the latest crowd simulation techniques. To overcome this challenge, we propose a novel data-driven approach in which nonlinear regressors are trained to capture the relationship between the agent-based metrics and the geometric and topological features of a layout. We demonstrate that by using the trained regressors, our approach can synthesize crowd-aware layouts and improve existing layouts with better crowd flow properties.
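
A minimal numpy sketch of the surrogate idea: fit a nonlinear regressor (here random Fourier features plus ridge regression, standing in for whatever the paper actually trains) that maps layout features to a crowd metric, then query it instead of running the full agent-based simulation inside the optimization loop. The feature vectors and ground-truth function are synthetic.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each row of X holds geometric/topological features
# of a candidate layout; y is a crowd metric (say, mobility) measured by
# the slow agent-based simulation on a few hundred training layouts.
X = rng.uniform(size=(200, 4))
y = np.sin(X @ np.array([3.0, -1.0, 2.0, 0.5])) + 0.05 * rng.normal(size=200)

# Nonlinear regressor: random Fourier features + ridge regression.
W = rng.normal(scale=2.0, size=(4, 64))
b = rng.uniform(0.0, 2.0 * np.pi, size=64)
features = lambda Z: np.cos(Z @ W + b)

A = features(X)
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(64), A.T @ y)
surrogate = lambda Z: features(Z) @ w   # milliseconds, not a full simulation

# A layout optimizer can now evaluate candidates via `surrogate` hundreds
# of times, confirming only the best few with real crowd simulations.
print(surrogate(X[:2]), y[:2])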


IEEE Transactions on Visualization and Computer Graphics | 2017

Earthquake Safety Training through Virtual Drills

Changyang Li; Wei Liang; Chris Quigley; Yibiao Zhao; Lap-Fai Yu

The recent popularity of consumer-grade virtual reality devices, such as the Oculus Rift and the HTC Vive, has enabled household users to experience highly immersive virtual environments. We take advantage of the commercial availability of these devices to provide an immersive and novel virtual reality training approach designed to teach individuals how to survive earthquakes in common indoor environments. Our approach makes use of virtual environments realistically populated with furniture objects for training. During training, a virtual earthquake is simulated. The user navigates within, and interacts with, the virtual environment to avoid injury, while learning the observation and self-protection skills needed to survive an earthquake. We demonstrate our approach on common scene types such as offices, living rooms, and dining rooms. To test the effectiveness of our approach, we conducted an evaluation in which users trained in several rooms of a given scene type and were then tested in a new room of the same type. The results show that our virtual reality training approach is effective: participants trained by our approach performed better, on average, than those trained by alternative approaches in terms of their ability to avoid physical harm and to detect potentially dangerous objects.


International Conference on Computer Vision | 2015

Fill and Transfer: A Simple Physics-Based Approach for Containability Reasoning

Lap-Fai Yu; Noah Duncan; Sai-Kit Yeung

The visual perception of object affordances has emerged as a useful ingredient for building powerful computer vision and robotic applications. In this paper we introduce a novel approach to reason about liquid containability - the affordance of containing liquid. Our approach analyzes container objects based on two simple physical processes: the Fill and Transfer of liquid. First, it reasons about whether a given 3D object is a liquid container and its best filling direction. Second, it proposes directions to transfer its contained liquid to the outside while avoiding spillage. We compare our simplified model with a common fluid dynamics simulation and demonstrate that our algorithm makes human-like choices about the best directions to fill containers and transfer liquid from them. We apply our approach to reason about the containability of several real-world objects acquired using a consumer-grade depth camera.
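
The Fill step can be caricatured in one dimension: treat a container's cross-section as a heightmap and compute how much liquid it traps (the classic trapping-rain-water computation), which already distinguishes cup-like profiles from flat ones. The paper's actual reasoning runs on 3D meshes and also handles the Transfer step, which this sketch omits.

def fill_capacity(heights):
    # 1D stand-in for the Fill process: liquid trapped over a container
    # cross-section given as a heightmap. Water level at each column is
    # bounded by the tallest walls to its left and right.
    n = len(heights)
    left = [0] * n
    right = [0] * n
    for i in range(1, n):
        left[i] = max(left[i - 1], heights[i - 1])
    for i in range(n - 2, -1, -1):
        right[i] = max(right[i + 1], heights[i + 1])
    return sum(max(0, min(left[i], right[i]) - heights[i]) for i in range(n))

# A cup-like profile holds liquid; a flat board does not.
print(fill_capacity([3, 0, 0, 0, 3]))   # 9 -> behaves like a container
print(fill_capacity([1, 1, 1, 1, 1]))   # 0 -> not a container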


IEEE Transactions on Visualization and Computer Graphics | 2017

A Robust 3D-2D Interactive Tool for Scene Segmentation and Annotation

Duc Thanh Nguyen; Binh-Son Hua; Lap-Fai Yu; Sai-Kit Yeung

Recent advances in 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as a useful ingredient for a wide spectrum of computer vision and graphics tasks such as data-driven modeling, scene understanding, and object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and the complexity of 3D scenes (e.g., clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. We have experimented with our tool and found that a typical indoor scene can be well segmented and annotated in less than 30 minutes, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes with complete annotations. Both the tool and the dataset are available at http://scenenn.net.
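
One building block of such 2D-3D coupling is easy to sketch: project mesh vertices into an annotated 2D frame and transfer a user-drawn segment's label to every vertex whose projection lands inside the mask. The function names, camera, and mask below are hypothetical, not the tool's actual API.

import numpy as np

def project(points, K, R, t):
    # Pinhole projection of Nx3 world points into pixel coordinates.
    cam = points @ R.T + t          # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]   # perspective divide
    return uv @ K[:2, :2].T + K[:2, 2]

def transfer_labels(points, mask, label, labels, K, R, t):
    # Assign the 2D segment's label to every 3D vertex whose
    # projection falls inside the user-drawn binary mask.
    uv = np.round(project(points, K, R, t)).astype(int)
    h, w = mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    hit = inside.copy()
    hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
    labels[hit] = label
    return labels

# Tiny synthetic example: one vertex projects into the mask, one does not.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0], [5.0, 0.0, 2.0]])
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 280:360] = True
labels = np.zeros(len(pts), dtype=int)
print(transfer_labels(pts, mask, 7, labels, K, np.eye(3), np.zeros(3)))  # [7 0]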

Collaboration


Dive into Lap-Fai Yu's collaborations.

Top Co-Authors

Haikun Huang, University of Massachusetts Boston
Noah Duncan, University of California
Chenfanfu Jiang, University of Pennsylvania
Yibiao Zhao, University of California
Ni-Ching Lin, National Chiao Tung University
Tony F. Chan, Hong Kong University of Science and Technology
Darian Springer, University of Massachusetts Boston
Lorenzo Barrett, University of Massachusetts Boston