Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yan Cui is active.

Publication


Featured research published by Yan Cui.


Computer Vision and Pattern Recognition | 2010

3D shape scanning with a time-of-flight camera

Yan Cui; Sebastian Schuon; Derek Chan; Sebastian Thrun; Christian Theobalt

We describe a method for 3D object scanning by aligning depth scans that were taken from around an object with a time-of-flight camera. These ToF cameras can measure depth scans at video rate. Due to their comparably simple technology, they have the potential for low-cost production in large volumes. Our easy-to-use, cost-effective scanning solution based on such a sensor could make 3D scanning technology more accessible to everyday users. The algorithmic challenge we face is that the sensor's level of random noise is substantial and there is a non-trivial systematic bias. In this paper we show the surprising result that 3D scans of reasonable quality can also be obtained with a sensor of such low data quality. Established filtering and scan alignment techniques from the literature fail to achieve this goal. In contrast, our algorithm is based on a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes the sensor's noise characteristics into account.
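As an illustration of the noise-reduction principle behind combining many aligned depth scans (not the paper's actual superresolution or alignment algorithm), the sketch below fuses a stack of simulated noisy ToF frames with a per-pixel median; the frame count, noise level, and scene are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth depth map (meters) for a small patch.
true_depth = 1.0 + 0.1 * np.sin(np.linspace(0, np.pi, 64))[None, :] * np.ones((64, 1))

# Simulate a stack of ToF frames corrupted by strong random noise.
frames = true_depth[None] + rng.normal(scale=0.05, size=(15, 64, 64))

# Fusing many aligned frames (here a per-pixel median) suppresses the
# random component of the sensor noise; a systematic bias would remain.
fused = np.median(frames, axis=0)

rmse_single = np.sqrt(np.mean((frames[0] - true_depth) ** 2))
rmse_fused = np.sqrt(np.mean((fused - true_depth) ** 2))
print(f"single-frame RMSE: {rmse_single:.4f} m, fused RMSE: {rmse_fused:.4f} m")
```

The fused error drops roughly with the square root of the frame count, which is why many low-quality scans can still yield a usable surface.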


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Algorithms for 3D Shape Scanning with a Depth Camera

Yan Cui; Sebastian Schuon; Sebastian Thrun; Didier Stricker; Christian Theobalt

We describe a method for 3D object scanning by aligning depth scans that were taken from around an object with a Time-of-Flight (ToF) camera. These ToF cameras can measure depth scans at video rate. Due to their comparably simple technology, they have the potential for economical production in large volumes. Our easy-to-use, cost-effective scanning solution, which is based on such a sensor, could make 3D scanning technology more accessible to everyday users. The algorithmic challenge we face is that the sensor's level of random noise is substantial and there is a nontrivial systematic bias. In this paper, we show the surprising result that 3D scans of reasonable quality can also be obtained with a sensor of such low data quality. Established filtering and scan alignment techniques from the literature fail to achieve this goal. In contrast, our algorithm is based on a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes the sensor's noise characteristics into account.


International Conference on Computer Vision | 2012

KinectAvatar: fully automatic body capture using a single Kinect

Yan Cui; William S. C. Chang; Tobias Nöll; Didier Stricker

We present a novel scanning system for capturing a full 3D human body model using just a single depth camera and no auxiliary equipment. We claim that data captured from a single Kinect is sufficient to produce a good-quality full 3D human model. In this setting, the challenges we face are the sensor's low resolution and random noise, and the subject's non-rigid movement while the data is captured. To overcome these challenges, we develop an improved super-resolution algorithm that takes color constraints into account. We then align the super-resolved scans using a combination of automatic rigid and non-rigid registration. As the system is inexpensive and produces impressive results within minutes, full 3D human body scanning technology can now become more accessible to everyday users at home.
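The rigid part of such a registration step is commonly solved in closed form with the Kabsch (orthogonal Procrustes) method; the sketch below is a minimal illustration of that general technique on synthetic points, not the authors' pipeline, and the point cloud, rotation, and translation are made up:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3))

# Hypothetical ground-truth rotation about the z-axis and a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
moved = cloud @ R_true.T + t_true

R, t = rigid_align(cloud, moved)
print("rotation error:", np.linalg.norm(R - R_true))
```

With known correspondences this recovers the pose exactly; in a real scanning pipeline the correspondences themselves must be estimated (e.g. iteratively, as in ICP), and the non-rigid component needs a separate deformation model.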


International Conference on Computer Graphics and Interactive Techniques | 2011

3D shape scanning with a Kinect

Yan Cui; Didier Stricker

We describe a method for 3D object scanning by aligning depth and color scans taken from around an object with a Kinect camera. Our easy-to-use, cost-effective scanning solution could make 3D scanning technology more accessible to everyday users and turn 3D shape models into a much more widely used asset for many new applications, for instance in community web platforms or online shopping.


International Conference on Virtual Reality | 2011

Dense 3D point cloud generation from multiple high-resolution spherical images

Alain Pagani; Christiano Couto Gava; Yan Cui; Bernd Krolla; Jean-Marc Hengen; Didier Stricker

The generation of virtual models of cultural heritage assets is of high interest for documentation, restoration, development and promotion purposes. To this aim, non-invasive, easy and automatic techniques are required. We present a technology that automatically reconstructs large-scale scenes from panoramic, high-resolution, spherical images. The advantage of spherical panoramas is that they can capture a complete environment in one single image. We show that the spherical geometry is better suited than standard images for computing the orientation of the panoramas (Structure from Motion), and introduce a generic error function for the epipolar geometry of spherical images. We then show how to produce a dense representation of the scene with up to 100 million points, which can serve as input for meshing and texturing software or for computer-aided reconstruction. We demonstrate the applicability of our concept with reconstructions of complex scenes in the scope of cultural heritage documentation at the Chinese National Palace Museum of the Forbidden City in Beijing.
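A minimal sketch of the spherical setting the abstract alludes to: equirectangular pixels map to unit rays on the viewing sphere, and a candidate match between two panoramas can be scored with the algebraic epipolar residual |r2ᵀ E r1|, where E = [t]× R. The mapping convention, pose, and 3D point below are illustrative assumptions, not the paper's specific error function:

```python
import numpy as np

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel to a unit ray on the viewing sphere."""
    lon = (u / width) * 2.0 * np.pi - np.pi       # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi      # latitude in [-pi/2, pi/2]
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def skew(t):
    """Cross-product matrix [t]x such that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative pose between two panorama centers.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
E = skew(t) @ R                                   # essential matrix

# A 3D point observed from both centers (first center at the origin).
X = np.array([2.0, 1.0, 0.5])
r1 = X / np.linalg.norm(X)                        # ray from first panorama
r2 = (X - t) / np.linalg.norm(X - t)              # ray from second panorama

residual = abs(r2 @ E @ r1)  # algebraic epipolar error; ~0 for a true match
print(residual)
```

Because every pixel of a panorama corresponds to a ray on the full sphere, the epipolar constraint applies without the field-of-view restrictions of a pinhole image.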


Footwear Science | 2011

Foot scanning and deformation estimation using time-of-flight cameras

Shuysn Liu; Yan Cui; Stephane Sanchez; Didier Stricker

Combined with the results of the input signals, fv always deviated from fGRF, with the discrepancy reaching 34–40 Hz during DJ for both shoes; on the contrary, during PL, fGRF moved towards fv, which may create a resonance situation (dashed frame in Figure 2). Furthermore, the fGRF of the control condition was much closer to the resonance frequency than that of the basketball shoe for most subjects (Figure 2).


2nd International Conference on 3D Body Scanning Technologies, Lugano, Switzerland, 25-26 October 2011 | 2011

3D Body Scanning With One Kinect

Yan Cui; Didier Stricker

In this paper we describe a method for 3D body scanning by aligning depth and color scans which were taken around a human body with a Kinect camera. The Kinect [18] camera is a “controller-free gaming and entertainment experience” by Microsoft for the Xbox 360 video game platform. It delivers depth and color scans at video rate. The proposed scanning solution makes 3D scanning technology more accessible to end-users, since it is easy-to-use and cost-effective. With this technique, 3D models could become a much more widely used asset, just as image and video data are today. This could open the door for many new applications, for instance in community web platforms or online shopping.


International Conference on Pattern Recognition | 2011

Robust point matching in HDRI through estimation of illumination distribution

Yan Cui; Alain Pagani; Didier Stricker

High Dynamic Range Images provide more detailed information, and their use in computer vision tasks is therefore desirable. However, the illumination distribution over the image often makes such images difficult to use with common vision algorithms. In particular, the highlight and shadow parts of an HDR image are difficult to analyze in a standard way. In this paper, we propose a method to solve this problem by applying a preliminary step in which we precisely compute the illumination distribution in the image. Access to the illumination distribution allows us to subtract the highlights and shadows from the original image, yielding a material color image. This material color image can be used as input for standard computer vision algorithms, such as the SIFT point matching algorithm and its variants.
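The paper's illumination-estimation step is its own contribution; as a rough stand-in for the general idea of separating a smooth illumination field from material color, the sketch below uses a simple homomorphic approach (low-pass filtering in the log domain) on synthetic data. The image size, illumination field, and filter width are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# Hypothetical "material" image and a smooth multiplicative illumination field.
material = rng.uniform(0.2, 1.0, size=(128, 128))
yy, xx = np.mgrid[0:128, 0:128]
illumination = 0.3 + 0.7 * np.exp(-((xx - 90) ** 2 + (yy - 40) ** 2) / 2000.0)

observed = material * illumination  # multiplicative image-formation model

# In the log domain the product becomes a sum; a strong low-pass filter
# gives a rough estimate of the smooth illumination component.
log_obs = np.log(observed)
illum_est = gaussian_filter(log_obs, sigma=25)
material_est = np.exp(log_obs - illum_est)

# The corrected image should correlate far less with the illumination
# field than the raw observation does.
corr_before = abs(np.corrcoef(observed.ravel(), illumination.ravel())[0, 1])
corr_after = abs(np.corrcoef(material_est.ravel(), illumination.ravel())[0, 1])
print(corr_before, corr_after)
```

Once the illumination component is factored out, a descriptor such as SIFT operates on something closer to the material color, which is the property the paper exploits for robust matching in HDR images.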


International Conference on Image Processing | 2010

SIFT in perception-based color space

Yan Cui; Alain Pagani; Didier Stricker

Scale Invariant Feature Transform (SIFT) has proven to be among the most robust local invariant feature descriptors. However, SIFT is designed mainly for grayscale images, and many local features can be misclassified if their color information is ignored. Motivated by perceptual principles, this paper introduces a new color space, called the perception-based color space, in which the associated metric approximates perceived distances and color displacements and captures illumination-invariant relationships. Instead of using grayscale values to represent the input image, the proposed approach builds the SIFT descriptors in the new color space, resulting in a descriptor that is more robust than standard SIFT with respect to color and illumination variations. The evaluation results support the potential of the proposed approach.
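The perception-based color space itself is the paper's contribution; as an illustrative stand-in, the sketch below uses a standard linear opponent color transform to show why per-channel color descriptors retain information that a grayscale conversion discards. The two sample colors and the grayscale weights are assumptions:

```python
import numpy as np

# Linear opponent color transform (a common choice in color descriptors):
# O1 ~ red-green, O2 ~ yellow-blue, O3 ~ intensity.
OPP = np.array([[1 / np.sqrt(2), -1 / np.sqrt(2), 0.0],
                [1 / np.sqrt(6), 1 / np.sqrt(6), -2 / np.sqrt(6)],
                [1 / np.sqrt(3), 1 / np.sqrt(3), 1 / np.sqrt(3)]])

def to_opponent(rgb):
    return OPP @ np.asarray(rgb, dtype=float)

# Two colors that a grayscale conversion confuses: equal mean intensity...
red_ish = np.array([0.8, 0.2, 0.2])
blue_ish = np.array([0.2, 0.2, 0.8])
gray_weights = np.array([1 / 3, 1 / 3, 1 / 3])
print(gray_weights @ red_ish, gray_weights @ blue_ish)  # identical

# ...but clearly separated in the opponent chromatic channels O1, O2.
o_red, o_blue = to_opponent(red_ish), to_opponent(blue_ish)
print(o_red[:2], o_blue[:2])
```

A descriptor built per channel in such a space can therefore distinguish keypoints that grayscale SIFT would treat as identical, which is the motivation the abstract describes.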


3rd International Conference on 3D Body Scanning Technologies, Lugano, Switzerland, 16-17 October 2012 | 2012

Exploratory Analysis of College Students' Satisfaction of Body Scanning with Kinect

Shu-Hwa Lin; Rayneld Johnson; Didier Stricker; Yan Cui

This study explores college students' attitudes toward body scanning and the creation of an avatar using a Kinect operating system. A select sample of 86 female and male college students participated in the study. Using a Windows 7 operating system with Kinect to provide a stable platform for the NUI audio and motor devices, students' bodies were scanned and an avatar was created. Bodies were scanned from 360 degrees to obtain 360 pictures and 360 depth frames (i.e. about 10 degrees between each view). PNG and PLY output files were extracted from the scan data and processed into a 3D model reconstruction, or avatar, by a research associate. The program MeshLab was used to view and measure the avatar. Following the scanning process, subjects responded to a 20-item questionnaire about the process and the resulting avatar. Overall, participants expressed positive reactions to the body scanning process and satisfaction with their avatar and body shape, and provided information about the use of avatars.

Collaboration


Dive into Yan Cui's collaborations.

Top Co-Authors

Shu-Hwa Lin
University of Hawaii at Manoa

Ju-Young M. Kang
University of Hawaii at Manoa

Gabriele Bleser
Kaiserslautern University of Technology