Publications


Featured research published by Yibiao Zhao.


Computer Vision and Pattern Recognition | 2013

Beyond Point Clouds: Scene Understanding by Reasoning Geometry and Physics

Bo Zheng; Yibiao Zhao; Joey C. Yu; Katsushi Ikeuchi; Song-Chun Zhu

In this paper, we present an approach to scene understanding that reasons about the physical stability of objects from point clouds. We utilize a simple observation that, by human design, objects in static scenes should be stable with respect to gravity. This assumption is applicable to all scene categories and poses useful constraints on the plausible interpretations (parses) in scene understanding. Our method consists of two major steps: 1) geometric reasoning: recovering solid 3D volumetric primitives from a defective point cloud, and 2) physical reasoning: grouping the unstable primitives into physically stable objects by optimizing the stability and the scene prior. We propose a novel disconnectivity graph (DG) to represent the energy landscape and use a Swendsen-Wang Cut (MCMC) method for optimization. In experiments, we demonstrate that the algorithm achieves substantially better performance than state-of-the-art methods for i) object segmentation, ii) 3D volumetric recovery of the scene, and iii) scene parsing, on both a public dataset and our own new dataset.
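
The core gravity constraint behind this kind of physical reasoning can be illustrated with a toy test: a rigid primitive resting on a surface is stable if the projection of its center of mass falls inside its support polygon. The box representation, the ordering assumption on contact points, and the margin parameter below are illustrative choices, not the paper's actual energy-landscape model.

```python
import numpy as np

def is_stable_under_gravity(center_of_mass, contact_points, margin=0.0):
    """Toy stability test: stable if the vertical projection of the center of
    mass lies inside the convex support polygon (contact points assumed to be
    ordered counter-clockwise), with an optional safety margin."""
    com_xy = np.asarray(center_of_mass, dtype=float)[:2]
    pts = np.asarray(contact_points, dtype=float)[:, :2]

    n = len(pts)
    for i in range(n):
        a, b = pts[i], pts[(i + 1) % n]
        edge = b - a
        normal = np.array([-edge[1], edge[0]])      # inward normal for a CCW polygon
        normal /= np.linalg.norm(normal)
        if np.dot(com_xy - a, normal) < margin:     # projected COM outside this edge
            return False
    return True

# A box whose center of mass hangs past its footprint tips over.
footprint = [(0, 0), (1, 0), (1, 1), (0, 1)]        # unit-square support region
print(is_stable_under_gravity((0.5, 0.5, 1.0), footprint))   # True  (stable)
print(is_stable_under_gravity((1.6, 0.5, 1.0), footprint))   # False (falls)
```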


International Conference on Computer Vision | 2013

Modeling 4D Human-Object Interactions for Event and Object Recognition

Ping Wei; Yibiao Zhao; Nanning Zheng; Song-Chun Zhu

Recognizing events and recognizing objects in a video sequence are two challenging tasks due to complex temporal structures and large appearance variations. In this paper, we propose a 4D human-object interaction model in which the two tasks jointly boost each other. The human-object interaction is defined in 4D space: i) the co-occurrence and geometric constraints between human pose and object in 3D space, and ii) the sub-event transitions and object coherence along the 1D temporal dimension. We represent the structure of events, sub-events and objects in a hierarchical graph. For an input RGB-depth video, we design a dynamic programming beam search algorithm to i) segment the video, ii) recognize the events, and iii) detect the objects simultaneously. For evaluation, we built a large-scale multiview 3D event dataset which contains 3,815 video sequences and 383,036 RGB-D frames captured by Kinect cameras. The experimental results on this dataset show the effectiveness of our method.
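
As a rough sketch of the decoding idea, the snippet below runs a dynamic-programming beam search that assigns one sub-event label per frame from unary (frame-to-label) scores plus a label-transition score. The score matrices and beam width are hypothetical placeholders; the actual model additionally tracks objects and 3D human pose.

```python
import numpy as np

def beam_search_labels(unary, transition, beam_width=3):
    """Keep the top `beam_width` partial labelings at every frame, extending
    each by all possible labels and re-ranking by accumulated score.

    unary:      (T, K) array, score of label k at frame t
    transition: (K, K) array, score of moving from label i to label j
    """
    T, K = unary.shape
    beam = sorted(((unary[0, k], [k]) for k in range(K)),
                  key=lambda h: h[0], reverse=True)[:beam_width]

    for t in range(1, T):
        candidates = []
        for score, labels in beam:
            prev = labels[-1]
            for k in range(K):
                candidates.append((score + transition[prev, k] + unary[t, k],
                                   labels + [k]))
        beam = sorted(candidates, key=lambda h: h[0], reverse=True)[:beam_width]

    return beam[0]  # best (score, label sequence)

# Tiny example: 5 frames, 2 sub-event labels, transitions favor staying put.
rng = np.random.default_rng(0)
unary = rng.normal(size=(5, 2))
transition = np.array([[0.5, -0.5], [-0.5, 0.5]])
print(beam_search_labels(unary, transition))
```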


Computer Vision and Pattern Recognition | 2014

MILCut: A Sweeping Line Multiple Instance Learning Paradigm for Interactive Image Segmentation

Jiajun Wu; Yibiao Zhao; Jun-Yan Zhu; Siwei Luo; Zhuowen Tu

Interactive segmentation, in which a user provides a bounding box around an object of interest, has been applied to a variety of applications in image editing, crowdsourcing, computer vision, and medical imaging. The challenge of this semi-automatic image segmentation task lies in dealing with the uncertainty of the foreground object within the bounding box. Here, we formulate the interactive segmentation problem as a multiple instance learning (MIL) task by generating positive bags from the pixels of sweeping lines within a bounding box. We name this approach MILCut. We provide a justification for our formulation and develop an algorithm with significant performance and efficiency gains over existing state-of-the-art systems. Extensive experiments demonstrate the clear advantage of our approach.
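
The bag construction described here can be sketched directly: each horizontal sweeping line inside the bounding box becomes a positive bag (it must contain at least one foreground pixel), while pixels outside the box provide negative instances. Using raw pixel values as features is an assumption for illustration only; the feature choice and the MIL solver are separate decisions.

```python
import numpy as np

def sweeping_line_bags(image, box):
    """Build MIL bags from a bounding box.

    box: (x0, y0, x1, y1) in pixel coordinates, exclusive of x1/y1.
    Returns one positive bag per row inside the box, plus the pixels
    outside the box as negative instances.
    """
    x0, y0, x1, y1 = box
    positive_bags = [image[y, x0:x1].reshape(-1, image.shape[-1])
                     for y in range(y0, y1)]            # one bag per sweeping line
    outside = np.ones(image.shape[:2], dtype=bool)
    outside[y0:y1, x0:x1] = False
    negative_instances = image[outside].reshape(-1, image.shape[-1])
    return positive_bags, negative_instances

# 8x8 RGB toy image with a box covering rows 2..5 and columns 2..5.
img = np.random.default_rng(1).integers(0, 255, size=(8, 8, 3))
bags, negatives = sweeping_line_bags(img, (2, 2, 6, 6))
print(len(bags), bags[0].shape, negatives.shape)   # 4 bags of 4 pixels; 48 negatives
```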


International Conference on Computer Vision | 2013

Concurrent Action Detection with Structural Prediction

Ping Wei; Nanning Zheng; Yibiao Zhao; Song-Chun Zhu

Action recognition has often been posed as a classification problem, which assumes that a video sequence has only one action class label and that different actions are independent. However, a single human body can perform multiple actions concurrently, and different actions interact with each other. This paper proposes a concurrent action detection model in which action detection is formulated as a structural prediction problem. In this model, an interval in a video sequence can be described by multiple action labels, and a detected action interval is determined both by the unary local detector and by its relations with other actions. We use a wavelet feature to represent the action sequence, and design a composite temporal logic descriptor to describe the action relations. The model parameters are trained by structural SVM learning. Given a long video sequence, a sequential decision window search algorithm is designed to detect the actions. Experiments on our newly collected concurrent action dataset demonstrate the strength of our method.
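
A minimal sketch of the structured scoring idea: the score of a set of concurrent detections is the sum of each detection's local-detector score plus a pairwise relation score for every pair of actions whose intervals overlap in time. The action names, scores, and relation weights below are hypothetical; the paper learns the weights with structural SVM and decodes with a sequential window search rather than the direct evaluation shown here.

```python
def score_concurrent_detections(detections, unary_scores, relation_scores):
    """detections: list of (action_id, start_frame, end_frame).
    Total score = sum of unary scores + relation score for every
    temporally overlapping pair of detections."""
    total = sum(unary_scores[a] for a, _, _ in detections)
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            a, s1, e1 = detections[i]
            b, s2, e2 = detections[j]
            if s1 < e2 and s2 < e1:                      # intervals overlap in time
                total += relation_scores[a][b]
    return total

# Two actions ("drink" = 0, "talk on phone" = 1) that tend to co-occur.
unary = {0: 1.2, 1: 0.8}
relations = {0: {1: 0.5}, 1: {0: 0.5}}
hypothesis = [(0, 10, 50), (1, 30, 90)]
print(score_concurrent_detections(hypothesis, unary, relations))   # 1.2 + 0.8 + 0.5 = 2.5
```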


Computer Vision and Pattern Recognition | 2015

Understanding tools: Task-oriented object modeling, learning and recognition

Yixin Zhu; Yibiao Zhao; Song-Chun Zhu

In this paper, we present a new framework, task-oriented modeling, learning and recognition, which aims at understanding the underlying functions, physics and causality in using objects as “tools”. Given a task, such as cracking a nut or painting a wall, we represent each object, e.g. a hammer or a brush, in a generative spatio-temporal representation consisting of four components: i) an affordance basis to be grasped by hand; ii) a functional basis to act on a target object (the nut); iii) the imagined actions with typical motion trajectories; and iv) the underlying physical concepts, e.g. force and pressure. In the learning phase, our algorithm observes only one RGB-D video, in which a rational human picks up one object (i.e. the tool) among a number of candidates to accomplish the task. From this example, our algorithm learns the essential physical concepts in the task (e.g. the forces involved in cracking nuts). In the inference phase, our algorithm is given a new set of objects (daily objects or stones) and picks the best available choice, together with the inferred affordance basis, functional basis, imagined human actions (sequence of poses), and the expected physical quantity that it will produce. From this new perspective, any object can be viewed as a hammer or a shovel, and object recognition is not merely memorizing typical appearance examples for each category but reasoning about the physical mechanisms in various tasks to achieve generalization.
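
To make the four-part representation concrete, here is a toy container holding an affordance basis, a functional basis, an imagined trajectory and an expected physical quantity, plus a naive task-driven ranking over candidate objects. All field contents, the `required_force` parameter, and the ranking rule are illustrative placeholders, not the paper's actual model.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ToolHypothesis:
    """Toy container mirroring the four components named in the abstract."""
    name: str
    affordance_basis: str                     # where the hand grasps the object
    functional_basis: str                     # the part that acts on the target
    imagined_trajectory: List[Tuple[float, float, float]] = field(default_factory=list)
    expected_force: float = 0.0               # physical quantity delivered to the target

def pick_best_tool(candidates: List[ToolHypothesis], required_force: float) -> ToolHypothesis:
    """Prefer the candidate whose expected force is sufficient for the task and
    closest to what is required (a stand-in for task-driven ranking)."""
    sufficient = [c for c in candidates if c.expected_force >= required_force]
    pool = sufficient or candidates
    return min(pool, key=lambda c: abs(c.expected_force - required_force))

candidates = [
    ToolHypothesis("wooden stick", "middle of shaft", "tip", expected_force=20.0),
    ToolHypothesis("rock", "top surface", "bottom face", expected_force=150.0),
]
print(pick_best_tool(candidates, required_force=100.0).name)   # rock
```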


Computer Vision and Pattern Recognition | 2014

Single-View 3D Scene Parsing by Attributed Grammar

Xiaobai Liu; Yibiao Zhao; Song-Chun Zhu

In this paper, we present an attributed grammar for parsing man-made outdoor scenes into semantic surfaces and simultaneously recovering their 3D models. The grammar takes superpixels as its terminal nodes and uses five production rules to derive the scene as a hierarchical parse graph. Each graph node corresponds to a surface or a composite of surfaces in the 3D world or the 2D image. Nodes are described by attributes of the global scene model, e.g. focal length and vanishing points, or by surface properties, e.g. surface normal, contact lines with other surfaces, and relative spatial location. Each production rule is associated with equations that constrain the attributes of the parent node and those of its child nodes. Given an input image, our goal is to construct a hierarchical parse graph by recursively applying the five grammar rules while preserving the attribute constraints. We develop an effective top-down/bottom-up cluster sampling procedure which explores this constrained space efficiently. We evaluate our method on both public benchmarks and newly built datasets, and achieve state-of-the-art performance in terms of layout estimation and region segmentation. We also demonstrate that our method is able to recover detailed 3D models with relaxed Manhattan structures, which clearly advances the state of the art in single-view 3D reconstruction.
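
To illustrate how a production rule couples the attributes of parent and child nodes, the sketch below checks one hypothetical constraint: a facade surface may attach to the ground surface only if their normals are roughly perpendicular. The rule itself, the tolerance, and the example normals are assumptions; the actual grammar uses five rules over superpixels with richer attributes (focal length, vanishing points, contact lines).

```python
import numpy as np

def can_compose_facade_on_ground(facade_normal, ground_normal, tol_deg=10.0):
    """Attribute constraint for a hypothetical 'facade on ground' rule:
    accept only if the angle between the two surface normals is within
    tol_deg of 90 degrees."""
    a = np.asarray(facade_normal, float)
    a /= np.linalg.norm(a)
    b = np.asarray(ground_normal, float)
    b /= np.linalg.norm(b)
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(a, b)), 0.0, 1.0)))
    return abs(angle - 90.0) <= tol_deg

print(can_compose_facade_on_ground([1, 0, 0.05], [0, 0, 1]))   # True: nearly vertical wall
print(can_compose_facade_on_ground([0, 0.6, 0.8], [0, 0, 1]))  # False: leaning surface
```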


International Conference on Robotics and Automation | 2014

Detecting potential falling objects by inferring human action and natural disturbance

Bo Zheng; Yibiao Zhao; Joey C. Yu; Katsushi Ikeuchi; Song-Chun Zhu

Detecting potential dangers in the environment is a fundamental ability of living beings. In order to endow a robot with such an ability, this paper presents an algorithm for detecting potentially falling objects, i.e. physically unsafe objects, given 3D point clouds captured by range sensors. We formulate the falling risk as a probability, or a potential, that an object may fall given human action or certain natural disturbances, such as an earthquake or wind. Our approach differs from the traditional object detection paradigm: it first infers hidden and situated “causes” (disturbances) of the scene, and then introduces intuitive physical mechanics to predict possible “effects” (falls) as consequences of the causes. In particular, we infer a disturbance field by making use of motion capture data as a rich source of common human pose movements. We show that, by applying various disturbance fields, our model achieves a human-level recognition rate of potential falling objects on a dataset of challenging and realistic indoor scenes.
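
The cause-to-effect step can be caricatured with textbook statics: a box-like object tips when a horizontal push applied at height h exceeds roughly m*g*w/h, where w is the half-width of its base, and the fall risk grows with how much the expected disturbance exceeds that threshold. The logistic mapping, the constant beta, and the example objects are illustrative only; the paper estimates disturbance fields from motion-capture data and uses richer physics.

```python
import math

def fall_risk(mass, base_half_width, push_height, disturbance_force, beta=2.0):
    """Toy fall-risk score: logistic function of (disturbance force minus the
    minimal horizontal force needed to tip the object about a base edge)."""
    g = 9.81
    tipping_force = mass * g * base_half_width / push_height   # moment balance m*g*w = F*h
    return 1.0 / (1.0 + math.exp(-beta * (disturbance_force - tipping_force)))

# A light mug with a narrow base vs. a heavy wide box, under the same 5 N push.
print(round(fall_risk(mass=0.3, base_half_width=0.03, push_height=0.10,
                      disturbance_force=5.0), 3))    # close to 1: high risk
print(round(fall_risk(mass=10.0, base_half_width=0.25, push_height=0.40,
                      disturbance_force=5.0), 3))    # close to 0: low risk
```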


Computer Vision and Pattern Recognition | 2016

Inferring Forces and Learning Human Utilities from Videos

Yixin Zhu; Chenfanfu Jiang; Yibiao Zhao; Demetri Terzopoulos; Song-Chun Zhu

We propose a notion of affordance that takes into account the physical quantities generated when the human body interacts with real-world objects, and introduce a learning framework that incorporates the concept of human utilities, which in our opinion provides a deeper and finer-grained account not only of object affordance but also of people's interaction with objects. Rather than defining affordance in terms of the geometric compatibility between body poses and 3D objects, we devise algorithms that employ physics-based simulation to infer the relevant forces and pressures acting on body parts. By observing the choices people make in videos (particularly in selecting a chair in which to sit), our system learns the comfort intervals of the forces exerted on body parts while sitting. We account for people's preferences in terms of human utilities, which transcend comfort intervals to also account for meaningful tasks within scenes and spatiotemporal constraints in motion planning, such as for the purposes of robot task planning.
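
A minimal sketch of the comfort-interval idea, assuming simulated per-body-part force magnitudes are already available: learn an interval per body part from the forces observed when people chose to sit, then score a new candidate pose by how many body parts fall inside their intervals. The percentile choice, the body-part names, and the counting score are assumptions for illustration.

```python
import numpy as np

def learn_comfort_intervals(observed_forces, low_pct=10, high_pct=90):
    """observed_forces: dict {body_part: list of force magnitudes observed when
    people chose to sit}. Keep the central percentile range per body part."""
    return {part: (np.percentile(f, low_pct), np.percentile(f, high_pct))
            for part, f in observed_forces.items()}

def utility(candidate_forces, intervals):
    """Score a candidate sitting pose: number of body parts whose simulated
    force lies inside the learned comfort interval."""
    return sum(lo <= candidate_forces[p] <= hi
               for p, (lo, hi) in intervals.items() if p in candidate_forces)

observed = {"hips": [300, 320, 310, 290, 305], "back": [80, 90, 85, 95, 88]}
intervals = learn_comfort_intervals(observed)
print(utility({"hips": 308, "back": 60}, intervals))   # 1: the back force is outside its interval
```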


IEEE Workshop on Person-Oriented Vision | 2011

A benchmark for interactive image segmentation algorithms

Yibiao Zhao; Xiaohan Nie; Yanbiao Duan; Yaping Huang; Siwei Luo

This paper proposes a general benchmark for interactive segmentation algorithms. The main contributions can be summarized as follows: (I) A new dataset of fifty images is released. The images are categorized into five groups: animal, artifact, human, building and plant. They cover several major challenges for the interactive image segmentation task, including fuzzy boundaries, complex textures, cluttered backgrounds, shading effects, sharp corners, and overlapping colors. (II) We propose two schemes, a point-process and a boundary-process, to generate user scribbles automatically. The point-process simulates the interaction process in which users incrementally draw scribbles on the major components of the image. The boundary-process simulates the refinement process in which users place more scribbles near segment boundaries to refine the details of the resulting segments. (III) We then apply two precision measures to quantitatively evaluate the resulting segments of different algorithms. The region precision measures how many pixels are correctly classified, and the boundary precision measures how close the segment boundary is to the real boundary. This benchmark offers a tentative way to guarantee fair evaluation for person-oriented tasks. Based on the benchmark, five state-of-the-art interactive segmentation algorithms are evaluated. All images, synthesized user scribbles, and running results are publicly available on the webpage.
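
The two measures can be sketched in a few lines: region precision as the fraction of correctly classified pixels, and boundary precision approximated here by the mean distance from predicted boundary pixels to the nearest ground-truth boundary pixel. The exact definitions in the benchmark may differ (e.g. how unlabeled pixels or distance tolerances are handled); this is only an illustrative, brute-force version.

```python
import numpy as np

def region_precision(pred_mask, gt_mask):
    """Fraction of pixels whose foreground/background label matches the ground truth."""
    return float(np.mean(pred_mask == gt_mask))

def boundary_distance(pred_mask, gt_mask):
    """Mean distance from each predicted boundary pixel to the nearest ground-truth
    boundary pixel. Brute force, so only suitable for small masks."""
    def boundary(mask):
        m = mask.astype(bool)
        edge = np.zeros_like(m)
        edge[:-1, :] |= m[:-1, :] != m[1:, :]     # label changes between rows
        edge[:, :-1] |= m[:, :-1] != m[:, 1:]     # label changes between columns
        return np.argwhere(edge)
    pb, gb = boundary(pred_mask), boundary(gt_mask)
    if len(pb) == 0 or len(gb) == 0:
        return float("inf")
    d = np.linalg.norm(pb[:, None, :] - gb[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

gt = np.zeros((20, 20), int); gt[5:15, 5:15] = 1          # true square segment
pred = np.zeros((20, 20), int); pred[6:15, 5:16] = 1      # slightly shifted prediction
print(region_precision(pred, gt), boundary_distance(pred, gt))
```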


International Journal of Computer Vision | 2015

Scene Understanding by Reasoning Stability and Safety

Bo Zheng; Yibiao Zhao; Joey C. Yu; Katsushi Ikeuchi; Song-Chun Zhu

This paper presents a new perspective on 3D scene understanding: reasoning about object stability and safety using intuitive mechanics. Our approach utilizes a simple observation that, by human design, objects in static scenes should be stable in the gravity field and safe with respect to various physical disturbances such as human activities. This assumption is applicable to all scene categories and poses useful constraints on the plausible interpretations (parses) in scene understanding. Given a 3D point cloud captured for a static scene by depth cameras, our method consists of three steps: (i) recovering solid 3D volumetric primitives from voxels; (ii) reasoning about stability by grouping the unstable primitives into physically stable objects by optimizing the stability and the scene prior; and (iii) reasoning about safety by evaluating the physical risks for objects under disturbances such as human activity, wind or earthquakes. We adopt a novel intuitive physics model and represent the energy landscape of each primitive and object in the scene by a disconnectivity graph (DG). We construct a contact graph whose nodes are 3D volumetric primitives and whose edges represent supporting relations, and then adopt a Swendsen–Wang Cuts algorithm to partition the contact graph into groups, each of which is a stable object. In order to detect unsafe objects in a static scene, our method further infers hidden and situated causes (disturbances) in the scene, and then introduces intuitive physical mechanics to predict possible effects (e.g., falls) as consequences of the disturbances. In experiments, we demonstrate that the algorithm achieves substantially better performance for (i) object segmentation, (ii) 3D volumetric recovery, and (iii) scene understanding compared with other state-of-the-art methods. We also compare the safety predictions of the intuitive mechanics model with human judgement.
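
The combinatorial skeleton of the grouping step can be sketched as follows: primitives connected by contacts judged physically stable are merged into one object, i.e. connected components of the contact graph restricted to stable edges. The primitive names and contact list are made up for the example, and the paper samples partitions with Swendsen–Wang Cuts against an energy mixing stability and a scene prior rather than using this hard thresholding.

```python
from collections import defaultdict

def group_stable_objects(primitives, stable_contacts):
    """Merge primitives linked by stable contacts into objects via
    connected components of the (stable-edge) contact graph."""
    adj = defaultdict(set)
    for a, b in stable_contacts:
        adj[a].add(b)
        adj[b].add(a)

    seen, objects = set(), []
    for p in primitives:
        if p in seen:
            continue
        stack, component = [p], []
        while stack:                       # depth-first traversal of one component
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            component.append(node)
            stack.extend(adj[node] - seen)
        objects.append(sorted(component))
    return objects

primitives = ["table_top", "leg1", "leg2", "cup_body", "cup_handle", "book"]
stable = [("table_top", "leg1"), ("table_top", "leg2"), ("cup_body", "cup_handle")]
print(group_stable_objects(primitives, stable))
# [['leg1', 'leg2', 'table_top'], ['cup_body', 'cup_handle'], ['book']]
```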

Collaboration


An overview of Yibiao Zhao's top co-authors and their affiliations.

Top Co-Authors

Song-Chun Zhu (University of California)
Yixin Zhu (University of California)
Lap-Fai Yu (University of Massachusetts Boston)
Siwei Luo (Beijing Jiaotong University)
Chenfanfu Jiang (University of Pennsylvania)
Joey C. Yu (University of California)
Tao Gao (Massachusetts Institute of Technology)
Nanning Zheng (Xi'an Jiaotong University)
Ping Wei (Xi'an Jiaotong University)