Publication


Featured research published by Carl Yuheng Ren.


Computer Vision and Pattern Recognition (CVPR) | 2013

Dense Reconstruction Using 3D Object Shape Priors

Amaury Dame; Victor Adrian Prisacariu; Carl Yuheng Ren; Ian D. Reid

We propose a formulation of monocular SLAM which combines live dense reconstruction with shape-prior-based 3D tracking and reconstruction. Current live dense SLAM approaches are limited to the reconstruction of visible surfaces. Moreover, most of them are based on the minimisation of a photo-consistency error, which usually makes them sensitive to specularities. In the 3D pose recovery literature, problems caused by imperfect and ambiguous image information have been dealt with by using prior shape knowledge. At the same time, the success of depth sensors has shown that combining joint image and depth information drastically increases the robustness of the classical monocular 3D tracking and 3D reconstruction approaches. In this work we link dense SLAM to 3D object pose and shape recovery. More specifically, we automatically augment our SLAM system with object-specific identity, together with 6D pose and additional shape degrees of freedom for the object(s) of known class in the scene, combining image data and depth information for the pose and shape recovery. This leads to a system that allows for fully scaled 3D reconstruction with the known object(s) segmented from the scene. The segmentation enhances the clarity, accuracy and completeness of the maps built by the dense SLAM system, while the dense 3D data aids the segmentation process, yielding faster and more reliable convergence than when using 2D image data alone.
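As a loose illustration of the shape-prior idea (not the authors' implementation; the sphere primitive, point set, and grid search below are all hypothetical), an object's surface can be encoded as a signed distance function, and a shape degree of freedom recovered by minimising the squared distances of observed depth points to that surface:

```python
import math

def sphere_sdf(p, centre, r):
    """Signed distance from point p to a sphere surface (negative inside)."""
    return math.dist(p, centre) - r

def shape_prior_energy(points, centre, r):
    """Sum of squared signed distances: zero exactly when every observed
    point lies on the prior surface."""
    return sum(sphere_sdf(p, centre, r) ** 2 for p in points)

# Depth points sampled (noise-free, for the sketch) from a sphere of radius 0.5.
pts = [(0.5, 0.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.5), (-0.5, 0.0, 0.0)]

# Recover the shape degree of freedom (the radius) with a coarse 1D search,
# standing in for the continuous joint pose-and-shape optimisation in the paper.
best_r = min((r / 100 for r in range(10, 100)),
             key=lambda r: shape_prior_energy(pts, (0.0, 0.0, 0.0), r))
```

The same energy, differentiated with respect to pose and shape parameters, is what a continuous optimiser would descend.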


IEEE Transactions on Visualization and Computer Graphics | 2015

Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices

Olaf Kähler; Victor Adrian Prisacariu; Carl Yuheng Ren; Xin Sun; Philip H. S. Torr; David W. Murray

Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, giving users freedom of movement and instantaneous reconstruction feedback, remains challenging, however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system achieves frame rates of up to 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
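The voxel block hashing idea can be sketched in a few lines. This is a simplified, unoptimised illustration (the block size and truncation band are placeholders, and the real system lives in GPU memory with a custom hash function), but it shows the two essentials: blocks allocated only near observed surfaces, and a running weighted TSDF average per voxel:

```python
BLOCK = 8        # voxels per block side
MU = 0.1         # TSDF truncation band, in metres (illustrative value)

# Sparse volume: 8x8x8 voxel blocks allocated on demand in a hash map
# instead of a dense grid. Each voxel stores (tsdf, weight).
volume = {}

def voxel(pos):
    """Fetch a voxel, lazily allocating its containing block."""
    block = tuple(c // BLOCK for c in pos)
    if block not in volume:
        volume[block] = {}          # allocation happens only near surfaces
    local = tuple(c % BLOCK for c in pos)
    return volume[block].setdefault(local, (1.0, 0.0))

def integrate(pos, sdf_obs):
    """Fold one depth observation into a voxel as a running weighted
    average of truncated signed distances."""
    tsdf, w = voxel(pos)
    obs = max(-1.0, min(1.0, sdf_obs / MU))
    block = tuple(c // BLOCK for c in pos)
    local = tuple(c % BLOCK for c in pos)
    volume[block][local] = ((tsdf * w + obs) / (w + 1), w + 1)

integrate((17, 3, 5), 0.05)   # allocates only block (2, 0, 0)
```

Because only surface-adjacent blocks exist, memory grows with the observed surface area rather than with the bounding volume.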


International Conference on Computer Vision (ICCV) | 2013

STAR3D: Simultaneous Tracking and Reconstruction of 3D Objects Using RGB-D Data

Carl Yuheng Ren; Victor Adrian Prisacariu; David W. Murray; Ian D. Reid

We introduce a probabilistic framework for simultaneous tracking and reconstruction of 3D rigid objects using an RGB-D camera. The tracking problem is handled using a bag-of-pixels representation and a back-projection scheme. Surface and background appearance models are learned online, leading to robust tracking in the presence of heavy occlusion and outliers. In both our tracking and reconstruction modules, the 3D object is implicitly embedded using a 3D level-set function. The framework is initialized with a simple shape primitive model (e.g. a sphere or a cube), and the real 3D object shape is tracked and reconstructed online. Unlike existing depth-based 3D reconstruction work, which either relies on a calibrated/fixed camera set-up or uses the observed world map to track the depth camera, our framework can simultaneously track and reconstruct small moving objects. We use both qualitative and quantitative results to demonstrate the superior tracking and reconstruction performance of our method.
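The online-learned surface and background appearance models can be illustrated with a toy sketch, assuming histogram appearance models over discrete colour tokens (the token names and uniform prior are hypothetical; the paper's actual colour space and priors are not reproduced here):

```python
# Foreground (surface) and background appearance models as colour
# histograms, updated online frame by frame.
fg_hist = {}
bg_hist = {}

def update(hist, colours):
    """Accumulate observed colours into an appearance histogram."""
    for c in colours:
        hist[c] = hist.get(c, 0) + 1

def posterior_fg(c):
    """P(foreground | colour), assuming a uniform prior over the two models."""
    pf = fg_hist.get(c, 0) / max(1, sum(fg_hist.values()))
    pb = bg_hist.get(c, 0) / max(1, sum(bg_hist.values()))
    return pf / (pf + pb) if pf + pb > 0 else 0.5

update(fg_hist, ["red", "red", "blue"])
update(bg_hist, ["green", "green", "blue"])
```

Per-pixel posteriors of this kind are what make a bag-of-pixels tracker robust to occluders whose colours match the background model.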


International Conference on Computer Vision | 2012

A unified energy minimization framework for model fitting in depth

Carl Yuheng Ren; Ian D. Reid

In this paper we present a unified energy minimization framework for model fitting and pose recovery problems in depth cameras. 3D level-set embedding functions are used to represent object models implicitly, and a novel 3D chamfer matching based energy function is minimized by adjusting the generic projection matrix, which can be parameterized differently according to the specific application. Our proposed energy function takes advantage of the gradient of the 3D level-set embedding function and can be efficiently minimized by gradient-based optimization methods. We show various real-world applications, including real-time 3D tracking in depth, simultaneous calibration and tracking, and 3D point cloud modeling. We perform experiments on both real and synthetic data to show the superior performance of our method on all the applications above.
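A minimal sketch of the energy, assuming a unit-sphere level-set and a single translational degree of freedom in place of the full projection matrix (both simplifications are ours, for illustration): the energy sums squared level-set values at the transformed observation points, and its gradient follows from the gradient of the embedding function via the chain rule.

```python
import math

def sdf(p):
    """3D level-set embedding of a unit sphere at the origin."""
    return math.dist(p, (0, 0, 0)) - 1.0

def grad_sdf(p, eps=1e-5):
    """Numerical (forward-difference) gradient of the embedding function."""
    return tuple((sdf(tuple(pi + eps * (i == j) for j, pi in enumerate(p)))
                  - sdf(p)) / eps for i in range(3))

def energy(points, tx):
    """Chamfer-style energy: squared level-set values of translated points."""
    return sum(sdf((x + tx, y, z)) ** 2 for x, y, z in points)

# Observed points lie on the sphere shifted by -0.3 along x;
# recover the translation by gradient descent on the energy.
pts = [(0.7, 0, 0), (-1.3, 0, 0), (-0.3, 1, 0)]
tx = 0.0
for _ in range(200):
    g = sum(2 * sdf((x + tx, y, z)) * grad_sdf((x + tx, y, z))[0]
            for x, y, z in pts)
    tx -= 0.05 * g
```

The same chain-rule structure applies when the free parameters are the entries of a projection matrix rather than a single translation.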


International Conference on 3D Vision (3DV) | 2014

3D Tracking of Multiple Objects with Identical Appearance Using RGB-D Input

Carl Yuheng Ren; Victor Adrian Prisacariu; Olaf Kähler; Ian D. Reid; David W. Murray

Most current approaches to 3D object tracking rely on distinctive object appearances. While several such trackers can be instantiated to track multiple objects independently, this not only neglects the fact that objects cannot occupy the same space in 3D, but also fails when objects have highly similar or identical appearances. In this paper we develop a probabilistic graphical model that accounts for similarity and proximity and leads to robust real-time tracking of multiple objects from RGB-D data, without recourse to bolt-on collision detection.


International Journal of Computer Vision | 2017

Real-Time Tracking of Single and Multiple Objects from Depth-Colour Imagery Using 3D Signed Distance Functions

Carl Yuheng Ren; Victor Adrian Prisacariu; Olaf Kähler; Ian D. Reid; David W. Murray

We describe a novel probabilistic framework for real-time tracking of multiple objects from combined depth-colour imagery. Object shape is represented implicitly using 3D signed distance functions. Probabilistic generative models based on these functions are developed to account for the observed RGB-D imagery, and tracking is posed as a maximum a posteriori problem. We present first a method suited to tracking a single rigid 3D object, and then generalise this to multiple objects by combining distance functions into a shape union in the frame of the camera. This second model accounts for similarity and proximity between objects, and leads to robust real-time tracking without recourse to bolt-on or ad-hoc collision detection.
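The shape-union construction can be sketched directly: combining per-object signed distance functions by a pointwise minimum yields a single scene-level SDF whose zero level set is the union of the object surfaces (a standard SDF identity; the sphere objects below are our own illustrative stand-ins):

```python
import math

def sphere_sdf(p, centre, r):
    """Signed distance to a sphere surface (negative inside)."""
    return math.dist(p, centre) - r

def shape_union(sdfs):
    """Combine per-object signed distance functions into one scene-level
    SDF: the union surface is the pointwise minimum."""
    return lambda p: min(f(p) for f in sdfs)

# Two objects in the camera frame, as in the multi-object model.
a = lambda p: sphere_sdf(p, (0, 0, 0), 1.0)
b = lambda p: sphere_sdf(p, (3, 0, 0), 1.0)
scene = shape_union([a, b])
```

Because every pixel is explained by the combined function rather than by one object in isolation, proximity between objects is handled inside the generative model itself, which is why no separate collision-detection pass is needed.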


International Journal of Computer Vision | 2014

Regressing Local to Global Shape Properties for Online Segmentation and Tracking

Carl Yuheng Ren; Victor Adrian Prisacariu; Ian D. Reid

We propose a novel regression-based framework that uses online-learned shape information to reconstruct occluded object contours. Our key insight is to regress the global, coarse properties of a shape from its local properties, i.e. its details. We do this by representing shapes using their 2D discrete cosine transforms and by regressing low-frequency from high-frequency harmonics. We learn this regression model using Locally Weighted Projection Regression, which expedites online, incremental learning. After sufficient observation of a set of unoccluded shapes, the learned model can detect occlusion and recover full shapes from occluded ones. We demonstrate the ideas using a level-set based tracking system that provides shape and pose; however, the framework could be embedded in any segmentation-based tracking system. Our experiments demonstrate the efficacy of the method on a variety of objects, using both real and artificial data.
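The harmonic split that underlies the regression can be illustrated with a 1D DCT (the paper uses 2D DCTs and learns the low-from-high mapping with LWPR, neither of which is reproduced here; this sketch only shows the representation and the low/high-frequency partition, on a made-up radial contour signal):

```python
import math

def dct2(x):
    """Type-II DCT: decompose a shape signal into cosine harmonics."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse (type-III) DCT with the matching normalisation."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

# A contour as a radial-distance signal; zeroing the high harmonics keeps
# only the coarse global shape, which is what the paper regresses from the
# (local, detail-carrying) high harmonics.
signal = [1.0, 1.2, 1.0, 0.8, 1.0, 1.2, 1.0, 0.8]
X = dct2(signal)
coarse = idct2(X[:2] + [0.0] * 6)   # low frequencies only
```

Under occlusion the visible details still determine the high harmonics, so a learned high-to-low mapping can fill in the missing coarse structure.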


International Conference on Image Analysis and Processing (ICIAP) | 2013

Robust Silhouette Extraction from Kinect Data

Michele Pirovano; Carl Yuheng Ren; I. Frosio; Pier Luca Lanzi; Victor Adrian Prisacariu; David W. Murray; N. Alberto Borghese

Natural User Interfaces allow users to interact with virtual environments with little intermediation. Immersion is vital for such interfaces to be successful, and it is achieved by making the interface invisible to the user. For cognitive rehabilitation, a mirror view is a good interface to the virtual world, but obtaining immersion is not straightforward. A player profile, or silhouette, accurately extracted from the real-world background increases both the visual quality and the immersion of the player in the virtual environment. The Kinect SDK provides raw data that can be used to extract a simple player profile. In this paper, we present our method for obtaining a smooth player profile extraction from the Kinect image streams.
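As a toy stand-in for the smoothing step (not the authors' method), speckle noise in a raw binary player mask can be suppressed with a 3x3 majority filter:

```python
def smooth_mask(mask):
    """3x3 majority filter over a binary mask: each pixel takes the majority
    vote of its (edge-clamped) neighbourhood, removing isolated speckle."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            votes = sum(mask[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = 1 if votes >= 5 else 0
    return out

speckle = [[0] * 5 for _ in range(5)]
speckle[2][2] = 1                     # isolated noise pixel
solid = [[1] * 4 for _ in range(4)]   # genuine silhouette region
cleaned = smooth_mask(speckle)
```

A lone foreground pixel is voted out, while solid silhouette regions pass through unchanged.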


Archive | 2011

gSLIC: a real-time implementation of SLIC superpixel segmentation

Carl Yuheng Ren; Ian D. Reid
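gSLIC is a GPU implementation of the SLIC superpixel algorithm. The core of SLIC, the assignment step, can be sketched in plain Python (a greyscale toy image with hand-placed centres; the real algorithm works in CIELAB colour, initialises centres on a regular grid, and alternates assignment with centre updates):

```python
def slic_assign(pixels, centres, m=10.0, s=2.0):
    """One SLIC assignment step: each pixel joins the cluster minimising a
    combined colour + spatial distance, D = d_colour + (m / s) * d_space,
    where m trades compactness against colour adherence and s is the
    expected superpixel spacing."""
    labels = {}
    for (x, y), c in pixels.items():
        def dist(k):
            cx, cy, cc = centres[k]
            d_space = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
            return abs(c - cc) + (m / s) * d_space
        labels[(x, y)] = min(range(len(centres)), key=dist)
    return labels

# Toy greyscale image: two flat intensity regions.
pixels = {(x, y): (0 if x < 2 else 100) for x in range(4) for y in range(2)}
centres = [(0.5, 0.5, 0), (2.5, 0.5, 100)]   # (x, y, intensity)
labels = slic_assign(pixels, centres)
```

Each pixel compares against only a handful of nearby centres, which is what makes the algorithm embarrassingly parallel and a natural fit for the GPU.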


arXiv: Computer Vision and Pattern Recognition | 2014

A Framework for the Volumetric Integration of Depth Images

Victor Adrian Prisacariu; Olaf Kähler; Ming-Ming Cheng; Carl Yuheng Ren; Julien P. C. Valentin; Philip H. S. Torr; Ian D. Reid; David W. Murray

Collaboration


An overview of Carl Yuheng Ren's collaborations.

Top Co-Authors
Ian D. Reid

University of Adelaide
