Publication


Featured research published by Yueyi Zhang.


IEEE Transactions on Image Processing | 2014

Real-Time Scalable Depth Sensing With Hybrid Structured Light Illumination

Yueyi Zhang; Zhiwei Xiong; Zhe Yang; Feng Wu

Time multiplexing (TM) and spatial neighborhood (SN) are two mainstream structured light techniques widely used for depth sensing. The former is well known for its high accuracy and the latter for its low delay. In this paper, we explore a new paradigm of scalable depth sensing to integrate the advantages of both the TM and SN methods. Our contribution is twofold. First, we design a set of hybrid structured light patterns composed of phase-shifted fringe and pseudo-random speckle. Under the illumination of the hybrid patterns, depth can be decently reconstructed either from a few consecutive frames with the TM principle for static scenes or from a single frame with the SN principle for dynamic scenes. Second, we propose a scene-adaptive depth sensing framework based on which a global or region-wise optimal depth map can be generated through motion detection. To validate the proposed scalable paradigm, we develop a real-time (20 fps) depth sensing system. Experimental results demonstrate that our method achieves an efficient balance between accuracy and speed during depth sensing that has rarely been exploited before.
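
As a minimal sketch of the scene-adaptive framework (not the authors' implementation), region-wise switching between the two decoding modes can be driven by simple frame differencing; decode_tm, decode_sn, and motion_thresh below are hypothetical placeholders:

import numpy as np

def scene_adaptive_depth(frames, decode_tm, decode_sn, motion_thresh=10.0):
    # frames: the last few images captured under the hybrid patterns.
    # decode_tm uses all frames (phase shifting, accurate, assumes a static scene);
    # decode_sn uses only the latest frame (speckle matching, robust to motion).
    depth_tm = decode_tm(frames)
    depth_sn = decode_sn(frames[-1])
    # Crude per-pixel motion detector: temporal intensity change.
    motion = np.abs(frames[-1].astype(float) - frames[-2].astype(float))
    return np.where(motion > motion_thresh, depth_sn, depth_tm)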


Applied Optics | 2013

Unambiguous 3D measurement from speckle-embedded fringe

Yueyi Zhang; Zhiwei Xiong; Feng Wu

This paper proposes a novel phase-shifting method for fast, accurate, and unambiguous 3D shape measurement. The basic idea is embedding a speckle-like signal in three sinusoidal fringe patterns to eliminate the phase ambiguity, but without reducing the fringe amplitude or frequency. The absolute depth is then recovered through a robust region-wise voting strategy relying on the embedded signal. Using the theoretical minimum of only three images, the proposed method greatly facilitates the application of phase shifting in time-critical conditions. Moreover, the proposed method is resistant to global illumination effects, as the fringe patterns used have a single high frequency. Based on the proposed method, we further demonstrate a real-time, high-precision 3D scanning system with an off-the-shelf projector and a commodity camera.
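
The method builds on standard three-step phase shifting; for reference, a minimal NumPy sketch of the textbook three-step demodulation (ignoring the embedded speckle and the region-wise voting) is:

import numpy as np

def wrapped_phase_three_step(i1, i2, i3):
    # Fringe images with phase shifts of -120, 0 and +120 degrees:
    # I_n = A + B * cos(phi + delta_n), so
    # tan(phi) = sqrt(3) * (I1 - I3) / (2*I2 - I1 - I3).
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

The result is wrapped to (-pi, pi]; the embedded speckle signal and the voting strategy described above are what resolve the remaining fringe-order ambiguity.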


Computer Vision and Pattern Recognition | 2013

Depth Acquisition from Density Modulated Binary Patterns

Zhe Yang; Zhiwei Xiong; Yueyi Zhang; Jiao Wang; Feng Wu

This paper proposes novel density modulated binary patterns for depth acquisition. Similar to Kinect, the illumination patterns do not need a projector for generation and can be emitted by infrared lasers and diffraction gratings. Our key idea is to use the density of light spots in the patterns to carry phase information. Two technical problems are addressed here. First, we propose an algorithm to design the patterns to carry more phase information without compromising the depth reconstruction from a single captured image as with Kinect. Second, since the carried phase is not strictly sinusoidal, the depth reconstructed from the phase contains a systematic error. We further propose a pixel-based phase matching algorithm to reduce the error. Experimental results show that the depth quality can be greatly improved using the phase carried by the density of light spots. Furthermore, our scheme can achieve 20 fps depth reconstruction with GPU assistance.
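
As a loose illustration only (an assumption for exposition, not the algorithm in the paper), suppose the local fraction of lit spots was designed to track (1 + cos(phi)) / 2; a local average of the binary image then yields a coarse phase estimate:

import numpy as np
from scipy.ndimage import uniform_filter

def phase_from_spot_density(binary_image, window=15):
    # Estimate the local density of lit pixels with a box filter.
    density = uniform_filter(binary_image.astype(float), size=window)
    density = np.clip(density / density.max(), 0.0, 1.0)
    # Invert the assumed density-to-phase mapping; in practice the carried
    # phase is not strictly sinusoidal, hence the systematic error noted above.
    return np.arccos(2.0 * density - 1.0)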


IEEE Journal of Selected Topics in Signal Processing | 2015

Accurate Dynamic 3D Sensing With Fourier-Assisted Phase Shifting

Pengyu Cong; Zhiwei Xiong; Yueyi Zhang; Shenghui Zhao; Feng Wu

Phase shifting profilometry (PSP) and Fourier transform profilometry (FTP) are two well-known fringe analysis methods for 3D sensing. PSP offers high accuracy but requires multiple images; FTP uses a single image but is limited in its accuracy. In this paper, we propose a novel Fourier-assisted phase shifting (FAPS) method for accurate dynamic 3D sensing. Our key observation is that the motion vulnerability of multi-shot PSP can be overcome through single-shot FTP, while the high accuracy of PSP is preserved. Moreover, to solve the phase ambiguity of complex scenes without additional images, we propose an efficient parallel spatial unwrapping strategy that embeds a sparse set of markers in the fringe patterns. Our dynamic 3D sensing system based on the above principles demonstrates superior performance over previous structured light techniques, including PSP, FTP, and Kinect.
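
For context, single-shot FTP recovers a wrapped phase by isolating the fringe carrier in the frequency domain; the sketch below shows the textbook procedure, not the FAPS pipeline itself, and the carrier frequency f0 and pass-band width are assumptions tied to the projected pattern:

import numpy as np

def ftp_wrapped_phase(image, f0, bandwidth):
    # 1-D FFT along each row of the fringe image (carrier assumed horizontal).
    spectrum = np.fft.fft(image, axis=1)
    freqs = np.fft.fftfreq(image.shape[1]) * image.shape[1]  # cycles per row
    keep = np.abs(freqs - f0) < bandwidth                    # isolate the +f0 lobe
    analytic = np.fft.ifft(spectrum * keep, axis=1)
    # Remove the carrier to leave the wrapped phase of the surface.
    x = np.arange(image.shape[1])
    carrier = np.exp(2j * np.pi * f0 * x / image.shape[1])
    return np.angle(analytic * np.conj(carrier))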


International Conference on Image Processing | 2012

Hybrid structured light for scalable depth sensing

Yueyi Zhang; Zhiwei Xiong; Feng Wu

Time multiplexing and spatial neighborhood are two mainstream structured light techniques widely used for 3D shape measurement. In this paper, we explore a way to subtly integrate their advantages for scalable depth sensing. This is realized through a set of elaborate hybrid structured patterns, which consist of three sinusoidal fringe patterns with different initial phases modulated by a pseudo-random speckle signal. For temporally static scenes, a high-resolution, high-accuracy depth map can be recovered from the latest three frames by the phase-shifting method; for dynamic scenes, a decent depth map can still be recovered from the current single frame by image matching. Since it provides a seamless transition between the high-quality and quick-response options, our method validates a new paradigm of scalable depth sensing in practice.


Journal of Visual Communication and Image Representation | 2014

Robust depth sensing with adaptive structured light illumination

Yueyi Zhang; Zhiwei Xiong; Pengyu Cong; Feng Wu

Automatic focus and exposure are key components of today's digital cameras and jointly play an essential role in capturing high-quality images and video. In this paper, we make an attempt to address these two challenging issues for future depth cameras. Relying on a programmable projector, we establish a structured light system for depth sensing with focus and exposure adaptation. The basic idea is to change the current illumination pattern and intensity locally according to prior depth information. Consequently, multiple object surfaces appearing at different depths in the scene can each receive proper illumination. In this way, more flexible and robust depth sensing can be achieved in comparison with fixed illumination, especially at near range.
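
A minimal sketch of the intensity-adaptation idea, under the simplifying assumption that the light returned from a surface falls off roughly as 1/d^2 (the function and its parameters are hypothetical, not the paper's controller):

import numpy as np

def adapt_pattern_intensity(pattern, prior_depth, d_ref=1.0):
    # Scale the projected pattern so that nearer surfaces, which would
    # otherwise saturate the camera, receive proportionally less light.
    gain = np.clip((prior_depth / d_ref) ** 2, 0.1, 1.0)
    return np.clip(pattern * gain, 0.0, 1.0)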


International Conference on Image Processing | 2013

Dense single-shot 3D scanning via stereoscopic fringe analysis

Pengyu Cong; Zhiwei Xiong; Yueyi Zhang; Shenghui Zhao; Feng Wu

In this paper, we present a novel single-shot method for dense and accurate 3D scanning. Our method takes advantage of two conventional techniques, i.e., stereo and Fourier fringe analysis (FFA). While FFA is competent for high-density and high-precision phase measurement, stereo solves the phase ambiguity caused by the periodicity of the fringe. By jointly using the intensity images and unwrapped phase maps from stereo, the pixel-wise absolute depth can be obtained through a sparse matching process efficiently and reliably. Due to its single-shot property and low complexity, the proposed method facilitates dense and accurate 3D scanning in time-critical applications.
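
To illustrate how phase resolves stereo correspondence, here is a brute-force per-pixel sketch on a rectified pair (the paper uses a more efficient sparse matching step; the tolerance and search range are assumptions):

import numpy as np

def disparity_from_phase(phase_left, phase_right, max_disp, tol=0.05):
    # For each left pixel, search along the same row in the right image for
    # the column with the closest phase; the offset is the disparity, which
    # converts to depth via depth = focal_length * baseline / disparity.
    h, w = phase_left.shape
    disp = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            x0 = max(0, x - max_disp)
            err = np.abs(phase_right[y, x0:x + 1] - phase_left[y, x])
            best = int(np.argmin(err))
            if err[best] < tol:
                disp[y, x] = x - (x0 + best)
    return disp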


IEEE Signal Processing Magazine | 2017

Computational Depth Sensing: Toward high-performance commodity depth cameras

Zhiwei Xiong; Yueyi Zhang; Feng Wu; Wenjun Zeng

Depth information plays an important role in a variety of applications, including manufacturing, medical imaging, computer vision, graphics, and virtual/augmented reality (VR/AR). Depth sensing has thus attracted sustained attention from both academia and industry for decades. Mainstream depth cameras can be divided into three categories: stereo, time of flight (ToF), and structured light. Stereo cameras require no active illumination and can be used outdoors, but they struggle with homogeneous surfaces. Recently, off-the-shelf light field cameras have demonstrated improved depth estimation capability with a multiview stereo configuration. ToF cameras operate at a high frame rate and fit time-critical scenarios well, but they are susceptible to noise and limited to low resolution [3]. Structured light cameras can produce high-resolution, high-accuracy depth, provided that a number of patterns are sequentially used. Due to its promising and reliable performance, the structured light approach has been widely adopted for three-dimensional (3-D) scanning purposes. However, achieving real-time depth with structured light either requires high-speed (and thus expensive) hardware or sacrifices depth resolution and accuracy by using a single pattern instead.


Visual Communications and Image Processing | 2013

Accurate 3D reconstruction of dynamic scenes with Fourier transform assisted phase shifting

Pengyu Cong; Yueyi Zhang; Zhiwei Xiong; Shenghui Zhao; Feng Wu

Phase shifting is a widely used method for accurate and dense 3D reconstruction. However, at least three images of the same scene are required for each reconstruction, so measurement errors are inevitable in dynamic scenes, even with high-speed hardware. In this paper, we propose a Fourier transform assisted phase shifting method to overcome the motion vulnerability of phase shifting. A new model with motion-related phase shifts is formulated, and the coarse phase measurements obtained by Fourier transform profilometry are used to estimate the unknown phase shifts. The phase errors caused by motion are greatly reduced in this way. Experimental results show that the proposed method can obtain accurate and dense 3D reconstruction of dynamic scenes under different kinds of motion.
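
Once the motion-related phase shifts have been estimated from the coarse FTP phase, the wrapped phase can be recovered with the generic least-squares phase-shifting solution for arbitrary known shifts; a minimal sketch (not the paper's exact formulation) is:

import numpy as np

def phase_from_unequal_shifts(images, deltas):
    # Model: I_n = A + B*cos(phi + delta_n)
    #            = A + (B*cos(phi))*cos(delta_n) - (B*sin(phi))*sin(delta_n).
    deltas = np.asarray(deltas, dtype=float)
    h, w = images[0].shape
    M = np.stack([np.ones_like(deltas), np.cos(deltas), -np.sin(deltas)], axis=1)
    obs = np.stack([im.reshape(-1) for im in images], axis=0)  # (N, h*w)
    coeffs, *_ = np.linalg.lstsq(M, obs, rcond=None)           # per-pixel [A, Bc, Bs]
    return np.arctan2(coeffs[2], coeffs[1]).reshape(h, w)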


International Conference on Multimedia and Expo | 2015

Fusion of Time-of-Flight and Phase Shifting for high-resolution and low-latency depth sensing

Yueyi Zhang; Zhiwei Xiong; Feng Wu

Depth sensors based on Time-of-Flight (ToF) and Phase Shifting (PS) have complementary strengths and weaknesses. ToF can provide real-time depth but is limited in resolution and sensitive to noise. PS can generate accurate and robust depth with high resolution but requires a number of patterns, which leads to high latency. In this paper, we propose a novel fusion framework to take advantage of both ToF and PS. The basic idea is to use the coarse depth from ToF to disambiguate the wrapped depth from PS. Specifically, we address two key technical problems: cross-modal calibration and interference-free synchronization between the ToF and PS sensors. Experiments demonstrate that the proposed method generates accurate and robust depth with high resolution and low latency, which benefits numerous applications.
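
The disambiguation step can be pictured as a simple fringe-order selection: choose the integer number of depth periods that brings the wrapped PS depth closest to the coarse ToF depth. A sketch of this generic idea, assuming the two depth maps are already registered (it omits the cross-modal calibration and synchronization the paper addresses):

import numpy as np

def unwrap_with_tof(wrapped_depth, tof_depth, period):
    # wrapped_depth: PS depth known only modulo `period`;
    # tof_depth: coarse but absolute ToF depth, registered to the PS camera.
    order = np.round((tof_depth - wrapped_depth) / period)
    return wrapped_depth + order * period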

Collaboration


Dive into Yueyi Zhang's collaborations.

Top Co-Authors

Feng Wu (University of Science and Technology of China)
Zhiwei Xiong (University of Science and Technology of China)
Pengyu Cong (Beijing Institute of Technology)
Shenghui Zhao (Beijing Institute of Technology)
Zhe Yang (Northeastern University)
Dong Liu (University of Science and Technology of China)
Jiao Wang (Northeastern University)
Jiayong Peng (University of Science and Technology of China)