Publication


Featured research published by Yinxiao Li.


Intelligent Robots and Systems | 2010

Image-based segmentation of indoor corridor floors for a mobile robot

Yinxiao Li; Stanley T. Birchfield

We present a novel method for image-based floor detection from a single image. In contrast with previous approaches that rely upon homographies, our approach does not require multiple images (either stereo or optical flow). It also does not require the camera to be calibrated, even for lens distortion. The technique combines three visual cues for evaluating the likelihood of horizontal intensity edge line segments belonging to the wall-floor boundary. The combination of these cues yields a robust system that works even in the presence of severe specular reflections, which are common in indoor environments. The nearly real-time algorithm is tested on a large database of images collected in a wide variety of conditions, on which it achieves nearly 90% detection accuracy.
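
A minimal sketch of the cue-combination step described above: each candidate horizontal edge segment receives a score per visual cue, and a weighted sum selects the segment most likely to lie on the wall-floor boundary. The cue values, weights, and toy data below are illustrative placeholders, not the paper's actual cue definitions.

```python
import numpy as np

def combine_cues(cue_scores, weights):
    """Weighted combination of per-segment cue scores.

    cue_scores : (n_segments, 3) array, one column per visual cue
                 (placeholder cues; the paper's actual cues differ)
    weights    : length-3 weight vector
    Returns the index of the segment most likely to lie on the
    wall-floor boundary.
    """
    return int(np.argmax(np.asarray(cue_scores) @ np.asarray(weights)))

# Toy usage: three candidate horizontal segments scored by three hypothetical cues.
segments = [((0, 200), (320, 205)), ((0, 150), (320, 148)), ((0, 100), (320, 102))]
cue_scores = np.array([[0.9, 0.8, 0.7],
                       [0.4, 0.5, 0.6],
                       [0.1, 0.2, 0.3]])
best = combine_cues(cue_scores, weights=[0.5, 0.3, 0.2])
print("most likely wall-floor segment:", segments[best])
```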


International Conference on Robotics and Automation | 2014

Recognition of deformable object category and pose

Yinxiao Li; Chih-Fan Chen; Peter K. Allen

We present a novel method for classifying and estimating the categories and poses of deformable objects, such as clothing, from a set of depth images. The framework presented here represents the recognition part of the entire pipeline of dexterous manipulation of deformable objects, which contains grasping, recognition, regrasping, placing flat, and folding. We first create an off-line simulation of the deformable objects and capture depth images from different viewpoints as training data. Then, by extracting features and applying sparse coding and dictionary learning, we build a codebook for a set of different poses of a particular deformable object category. The whole framework contains two layers, which yield a robust system that first classifies deformable objects at the category level and then estimates the current pose from a group of predefined poses of a single deformable object. The system is tested on a variety of similar deformable objects and achieves high accuracy. Knowing the current pose of the garment, we can continue with further tasks such as regrasping and folding.
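
The sparse-coding/codebook step can be sketched roughly as below, using scikit-learn's dictionary learning as a stand-in. The feature dimensions, pooling scheme, and classifier are assumptions for illustration; the paper's actual depth descriptors and second-layer pose classifier are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

# Hypothetical stand-in data: rows are local features extracted from
# simulated depth images (random placeholders, not the paper's descriptors).
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(500, 64))      # local descriptors for dictionary learning
img_feats   = rng.normal(size=(40, 30, 64))   # 40 depth images x 30 descriptors each
img_labels  = rng.integers(0, 3, size=40)     # garment category per image

# Layer 1: learn a codebook by sparse dictionary learning, then encode and
# max-pool each image's descriptors into a single vector.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, random_state=0)
dico.fit(train_feats)
pooled = np.stack([np.abs(dico.transform(f)).max(axis=0) for f in img_feats])

# Layer 2 (category level shown; a per-category classifier over predefined
# poses would follow the same pattern).
clf = LinearSVC().fit(pooled, img_labels)
print("train accuracy:", clf.score(pooled, img_labels))
```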


Intelligent Robots and Systems | 2014

Real-time Pose Estimation of Deformable Objects Using a Volumetric Approach

Yinxiao Li; Yan Wang; Michael Case; Shih-Fu Chang; Peter K. Allen

Pose estimation of deformable objects is a fundamental and challenging problem in robotics. We present a novel solution to this problem by first reconstructing a 3D model of the object from a low-cost depth sensor such as Kinect, and then searching a database of simulated models in different poses to predict the pose. Given noisy depth images from 360-degree views of the target object acquired from the Kinect sensor, we reconstruct a smooth 3D model of the object using depth image segmentation and volumetric fusion. Then with an efficient feature extraction and matching scheme, we search the database, which contains a large number of deformable objects in different poses, to obtain the most similar model, whose pose is then adopted as the prediction. Extensive experiments demonstrate better accuracy and orders of magnitude speed-up compared to our previous work. An additional benefit of our method is that it produces a high-quality mesh model and camera pose, which is necessary for other tasks such as regrasping and object manipulation.
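
At a high level the database-matching step reduces to nearest-neighbor search over per-model feature vectors; the sketch below assumes such vectors exist and uses random placeholders in place of the paper's volumetric features.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical stand-in data: each simulated garment model in the database is
# summarized by a fixed-length feature vector (random here, in place of the
# paper's volumetric features), paired with the predefined pose it was rendered in.
rng = np.random.default_rng(1)
db_features = rng.normal(size=(1000, 128))           # simulated models
db_poses = [f"pose_{i % 20}" for i in range(1000)]   # predefined pose labels

nn = NearestNeighbors(n_neighbors=1).fit(db_features)

def predict_pose(reconstructed_feature):
    """Adopt the pose of the most similar database model."""
    _, idx = nn.kneighbors(reconstructed_feature.reshape(1, -1))
    return db_poses[int(idx[0, 0])]

# Feature of the mesh fused from the Kinect depth views (random placeholder).
query = rng.normal(size=128)
print("predicted pose:", predict_pose(query))
```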


European Conference on Computer Vision | 2014

Part-Pair Representation for Part Localization

Jiongxin Liu; Yinxiao Li; Peter N. Belhumeur

In this paper, we propose a novel part-pair representation for part localization. In this representation, an object is treated as a collection of part pairs to model its shape and appearance. By changing the set of pairs to be used, we are able to impose either stronger or weaker geometric constraints on the part configuration. As for the appearance, we build pair detectors for each part pair, which model the appearance of an object at different levels of granularity. Our method of part localization exploits the part-pair representation, featuring the combination of non-parametric exemplars and parametric regression models. Non-parametric exemplars help generate reliable part hypotheses from very noisy pair detections. Then, the regression models are used to group the part hypotheses in a flexible way to predict the part locations. We evaluate our method extensively on the dataset CUB-200-2011 [32], where we achieve significant improvement over the state-of-the-art method on bird part localization. We also experiment with human pose estimation, where our method produces results comparable to existing works.
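
A much-simplified illustration of the part-pair idea: each detection proposes locations for a pair of parts, and a part's final location is a consensus over all pairs involving it. The median consensus used here is only a stand-in for the paper's exemplar lookup and regression models, and the part names and detections are made up.

```python
import itertools
import numpy as np

parts = ["beak", "crown", "breast", "tail"]
pairs = list(itertools.combinations(range(len(parts)), 2))

rng = np.random.default_rng(2)
true_locs = rng.uniform(0, 200, size=(len(parts), 2))

# Noisy pair detections: (part_i, part_j, predicted xy for i, predicted xy for j).
detections = [(i, j,
               true_locs[i] + rng.normal(0, 3, 2),
               true_locs[j] + rng.normal(0, 3, 2))
              for i, j in pairs]

# Accumulate per-part hypotheses from every pair the part participates in.
hypotheses = {k: [] for k in range(len(parts))}
for i, j, loc_i, loc_j in detections:
    hypotheses[i].append(loc_i)
    hypotheses[j].append(loc_j)

# Simple consensus (median) in place of the regression-based grouping.
estimates = {parts[k]: np.median(hypotheses[k], axis=0) for k in hypotheses}
for name, xy in estimates.items():
    print(f"{name}: {xy.round(1)}")
```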


International Conference on Robotics and Automation | 2015

Regrasping and unfolding of garments using predictive thin shell modeling

Yinxiao Li; Danfei Xu; Yonghao Yue; Yan Wang; Shih-Fu Chang; Eitan Grinspun; Peter K. Allen

Deformable objects such as garments are highly unstructured, making them difficult to recognize and manipulate. In this paper, we propose a novel method to teach a two-arm robot to efficiently bring a garment from an unknown state to a known state by iterative regrasping. The problem is formulated as a constrained weighted evaluation metric for the two desired grasping points during regrasping, which can also be used as a convergence criterion. The result is then adopted as an estimate to initialize a regrasp, which is then treated as a new state for evaluation. The process stops when the predicted thin shell conclusively agrees with the reconstruction. We show experimental results for regrasping a number of different garments, including sweaters, knitwear, pants, and leggings.
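
The iterative regrasping loop can be sketched schematically as below; the evaluation and regrasp callbacks are dummy placeholders, not the paper's thin-shell prediction or robot execution.

```python
import numpy as np

def regrasp_until_known(initial_state, evaluate, regrasp, tol=0.05, max_iters=10):
    """Schematic regrasping loop. `evaluate` scores how well the predicted
    thin-shell model agrees with the reconstructed garment and proposes the
    next two grasp points; `regrasp` executes them and returns the new state.
    Both are placeholders for the paper's actual components."""
    state = initial_state
    for it in range(max_iters):
        disagreement, grasp_points = evaluate(state)
        if disagreement < tol:  # predicted shell agrees with the reconstruction
            return state, it
        state = regrasp(state, grasp_points)
    return state, max_iters

# Toy usage with dummy callbacks that converge after a few iterations.
rng = np.random.default_rng(3)
evaluate = lambda s: (0.5 ** s, (rng.uniform(0, 1, 2), rng.uniform(0, 1, 2)))
regrasp = lambda s, g: s + 1
final_state, iters = regrasp_until_known(0, evaluate, regrasp)
print("converged after", iters, "regrasps")
```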


Intelligent Robots and Systems | 2015

Folding deformable objects using predictive simulation and trajectory optimization

Yinxiao Li; Yonghao Yue; Danfei Xu; Eitan Grinspun; Peter K. Allen

Robotic manipulation of deformable objects remains a challenging task. One such task is folding a garment autonomously. Given start and end folding positions, what is an optimal trajectory to move the robotic arm to fold a garment? Certain trajectories will cause the garment to move, creating wrinkles and gaps, while other trajectories will fail altogether. We present a novel solution to find an optimal trajectory that avoids such problematic scenarios. The trajectory is optimized by minimizing a quadratic objective function in an off-line simulator, which includes material properties of the garment and frictional forces on the table. The function measures the dissimilarity between a user-folded shape and the folded garment in simulation, which is then used as an error measurement to create an optimal trajectory. We demonstrate that our two-arm robot can follow the optimized trajectories, achieving accurate and efficient manipulation of deformable objects.
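
A schematic of the optimization step: intermediate waypoints of the folding trajectory are the decision variables, and a quadratic dissimilarity between a target folded shape and the simulated result is minimized. The "simulator" below is a trivial stand-in for the cloth simulation used in the paper, and the shapes are arbitrary toy vectors.

```python
import numpy as np
from scipy.optimize import minimize

target_shape = np.array([0.2, 0.4, 0.6, 0.8])  # placeholder "user folded" shape

def simulate_fold(waypoints):
    # Placeholder simulator: maps trajectory waypoints to a resulting shape.
    return np.cumsum(np.abs(waypoints)) / (np.sum(np.abs(waypoints)) + 1e-9)

def objective(waypoints):
    diff = simulate_fold(waypoints) - target_shape
    return float(diff @ diff)                  # quadratic dissimilarity

x0 = np.ones(4) * 0.5                          # initial straight-line trajectory guess
res = minimize(objective, x0, method="Nelder-Mead")
print("optimized waypoints:", res.x.round(3), "objective:", round(res.fun, 4))
```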


Robotics and Autonomous Systems | 2011

Visual detection of lintel-occluded doors by integrating multiple cues using a data-driven Markov chain Monte Carlo process

Zhichao Chen; Yinxiao Li; Stanley T. Birchfield

We present an algorithm to detect doors in images. The key to the algorithm's success is its fusion of multiple visual cues, including standard cues (color, texture, and intensity edges) as well as several novel ones (concavity, the kick plate, the vanishing point, and the intensity profile of the gap below the door). We use the AdaBoost algorithm to determine the linear weighting of the various cues. The problem is formulated as maximum a posteriori probability (MAP) estimation, and a multi-cue functional is minimized by a data-driven Markov chain Monte Carlo (DDMCMC) process that arrives at a solution shown empirically to be near the global minimum. Intensity edge information is used in the importance probability distribution to drive the Markov chain dynamics in order to achieve a speedup of several orders of magnitude over traditional jump diffusion methods. Unlike previous approaches, the algorithm does not rely upon range information and yet is able to handle complex environments irrespective of lighting conditions, reflections, wall or door color, or the relative orientation between the camera and the door. Moreover, the algorithm is designed to detect doors for which the lintel is occluded, which often occurs when the camera on a mobile robot is low to the ground. The versatility of the algorithm is tested on a large database of images collected in a wide variety of conditions, on which it achieves an approximately 90% detection rate with a low false positive rate. Versions of the algorithm are shown for calibrated and uncalibrated camera systems. Additional experiments demonstrate the suitability of the algorithm for near-real-time applications using a mobile robot equipped with off-the-shelf cameras.
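
A minimal sketch of the data-driven MCMC idea: proposals are drawn from an edge-based importance distribution rather than a blind random walk, and a Metropolis-Hastings test on a placeholder multi-cue energy decides acceptance. For brevity the door hypothesis is reduced here to a single horizontal position, and all cue definitions are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
width = 320
# Fake intensity-edge strength map with a peak near the true door edge.
edge_strength = np.exp(-0.5 * ((np.arange(width) - 120) / 15.0) ** 2)
proposal_pdf = edge_strength / edge_strength.sum()

def energy(x):
    # Placeholder multi-cue energy; low energy near the true door at x = 125.
    return ((x - 125) / 40.0) ** 2

# Independence sampler initialized from the data-driven proposal distribution.
x = rng.choice(width, p=proposal_pdf)
for _ in range(2000):
    x_new = rng.choice(width, p=proposal_pdf)          # data-driven proposal
    log_accept = (energy(x) - energy(x_new)) + np.log(proposal_pdf[x] / proposal_pdf[x_new])
    if np.log(rng.uniform()) < log_accept:
        x = x_new
print("estimated door position:", int(x))
```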


International Conference on Robotics and Automation | 2016

Multi-sensor surface analysis for robotic ironing

Yinxiao Li; Xiuhan Hu; Danfei Xu; Yonghao Yue; Eitan Grinspun; Peter K. Allen

Robotic manipulation of deformable objects remains a challenging task. One such task is to iron a piece of cloth autonomously. Given a roughly flattened cloth, the goal is to produce an ironing plan that lets a robot iteratively apply a regular iron to remove all the major wrinkles. We present a novel solution that analyzes the cloth surface by fusing two surface scan techniques: a curvature scan and a discontinuity scan. The curvature scan estimates the height deviation of the cloth surface, while the discontinuity scan effectively detects sharp surface features such as wrinkles. We use this information to detect the regions that need to be pulled and extended before ironing, and the regions where we want to detect wrinkles and apply ironing to remove them. We demonstrate that our hybrid scan technique is able to capture and classify wrinkles over the surface robustly. Given detected wrinkles, we enable a robot to iron them using shape features. Experimental results show that, using our wrinkle analysis algorithm, our robot is able to iron the cloth surface and effectively remove the wrinkles.
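
A schematic fusion of the two scans: threshold the curvature (height-deviation) map to find regions that must be pulled flat first, and the discontinuity map to find sharp wrinkles to iron. The maps and thresholds below are illustrative placeholders, not the paper's sensor data or classification rules.

```python
import numpy as np

rng = np.random.default_rng(5)
curvature = np.abs(rng.normal(0, 1.0, size=(60, 80)))      # height deviation per cell
discontinuity = np.abs(rng.normal(0, 1.0, size=(60, 80)))  # sharp-feature response per cell

pull_mask = curvature > 2.5                     # large smooth folds: pull/extend flat first
iron_mask = (discontinuity > 2.0) & ~pull_mask  # sharp wrinkles: apply the iron
print("pull regions:", int(pull_mask.sum()), "cells;",
      "ironing regions:", int(iron_mask.sum()), "cells")
```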


International Conference on Intelligent Robotics and Applications | 2012

Extracting minimalistic corridor geometry from low-resolution images

Yinxiao Li; Vidya N. Murali; Stanley T. Birchfield

We propose a minimalistic corridor representation consisting of the orientation line (center) and the wall-floor boundaries (lateral limit). The representation is extracted from low-resolution images using a novel combination of information theoretic measures and gradient cues. Our study investigates the impact of image resolution upon the accuracy of extracting such a geometry, showing that accurate centerline and wall-floor boundaries can be estimated even in texture-poor environments with images as small as 16×12. In a database of 7 unique corridor sequences for orientation measurements, less than 2% additional error was observed as the resolution of the image decreased by 99%. One of the advantages of working at such resolutions is that the algorithm operates at hundreds of frames per second, or equivalently requires only a small percentage of the CPU.
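
One plausible reading of the information-theoretic cue is sketched below: compute the intensity entropy of each column of the low-resolution image and take the minimum-entropy column as the orientation line. This is an assumption for illustration; the paper combines such measures with gradient cues that are omitted here, and the toy image is synthetic.

```python
import numpy as np

def column_entropy(gray):
    """Shannon entropy of the intensity distribution in each image column."""
    entropies = []
    for col in gray.T:
        hist, _ = np.histogram(col, bins=16, range=(0.0, 1.0))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        entropies.append(float(-(p * np.log2(p)).sum()))
    return np.array(entropies)

# Toy 16x12 "corridor" image: textured walls, brighter low-variance center.
rng = np.random.default_rng(6)
img = rng.uniform(0.0, 1.0, size=(12, 16))
img[:, 7:9] = 0.8 + rng.uniform(-0.02, 0.02, size=(12, 2))  # low-entropy center
centerline = int(np.argmin(column_entropy(img)))
print("estimated orientation-line column:", centerline)
```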


National Conference on Artificial Intelligence | 2016

Articulated pose estimation using hierarchical exemplar-based models

Jiongxin Liu; Yinxiao Li; Peter K. Allen; Peter N. Belhumeur
