Publications


Featured research published by Ximing Ren.


International Quantum Electronics Conference | 2013

Kilometre-range, high resolution depth imaging using 1560 nm wavelength single-photon detection

Aongus McCarthy; Nils J. Krichel; Nathan R. Gemmell; Ximing Ren; Michael G. Tanner; Sander N. Dorenbos; Val Zwiller; Robert H. Hadfield; Gerald S. Buller

This paper highlights a significant advance in time-of-flight depth imaging: by using a scanning transceiver which incorporated a free-running, low noise superconducting nanowire single-photon detector, we were able to obtain centimeter resolution depth images of low-signature objects in daylight at stand-off distances of the order of one kilometer at the relatively eye-safe wavelength of 1560 nm. The detector used had an efficiency of 18% at 1 kHz dark count rate, and the overall system jitter was ~100 ps. The depth images were acquired by illuminating the scene with an optical output power level of less than 250 µW average, and using per-pixel dwell times in the millisecond regime.
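The time-of-flight principle behind this kind of TCSPC depth imager can be sketched in a few lines: cross-correlate the photon-arrival histogram with the instrumental response and convert the peak lag to distance. The bin width and delay below are illustrative values, not the system's actual parameters.

```python
import numpy as np

C = 299792458.0     # speed of light in vacuum, m/s
BIN_WIDTH = 2e-12   # timing-bin width in seconds (illustrative value)

def depth_from_histogram(histogram, response):
    """Estimate target depth (m) by cross-correlating a TCSPC timing
    histogram with the instrumental response and locating the peak lag."""
    corr = np.correlate(histogram, response, mode="full")
    lag = int(np.argmax(corr)) - (len(response) - 1)  # peak delay in bins
    tof = lag * BIN_WIDTH                             # round-trip time of flight
    return C * tof / 2.0                              # halve: out-and-back path

# Synthetic check: a Gaussian response delayed by 5000 bins (10 ns round trip)
response = np.exp(-((np.arange(64) - 32.0) ** 2) / 50.0)
histogram = np.zeros(8192)
histogram[5000:5064] = response
print(round(depth_from_histogram(histogram, response), 3))  # ~1.499 m
```

In a real system the histogram is corrupted by dark counts and background light, so the correlation peak is located against a noise floor rather than exact zeros.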


Optics Express | 2013

Kilometer-range depth imaging at 1550 nm wavelength using an InGaAs/InP single-photon avalanche diode detector

Aongus McCarthy; Ximing Ren; Adriano Della Frera; Nathan R. Gemmell; Nils J. Krichel; Carmelo Scarcella; Alessandro Ruggeri; Alberto Tosi; Gerald S. Buller

We have used an InGaAs/InP single-photon avalanche diode detector module in conjunction with a time-of-flight depth imager operating at a wavelength of 1550 nm, to acquire centimeter resolution depth images of low-signature objects at stand-off distances of up to one kilometer. The scenes of interest were scanned by the transceiver system using pulsed laser illumination with an average optical power of less than 600 µW and per-pixel acquisition times of between 0.5 ms and 20 ms. The fiber-pigtailed InGaAs/InP detector was Peltier-cooled and operated at a temperature of 230 K. This detector was used in electrically gated mode with a single-photon detection efficiency of about 26% at a dark count rate of 16 kilocounts per second. The system's overall instrumental temporal response was 144 ps full width at half maximum. Measurements made in daylight on a number of target types at ranges of 325 m, 910 m, and 4.5 km are presented, along with an analysis of the depth resolution achieved.


IEEE Transactions on Geoscience and Remote Sensing | 2014

Design and Evaluation of Multispectral LiDAR for the Recovery of Arboreal Parameters

Andrew M. Wallace; Aongus McCarthy; Caroline J. Nichol; Ximing Ren; Simone Morak; Daniel Martinez-Ramirez; Iain H. Woodhouse; Gerald S. Buller

Multispectral light detection and ranging (LiDAR) has the potential to recover structural and physiological data from arboreal samples and, by extension, from forest canopies when deployed on aerial or space platforms. In this paper, we describe the design and evaluation of a prototype multispectral LiDAR system and demonstrate the measurement of leaf and bark area and abundance profiles using a series of experiments on tree samples “viewed from above” by tilting living conifers such that the apex is directed on the viewing axis. As the complete recovery of all structural and physiological parameters is ill posed with a restricted set of four wavelengths, we used leaf and bark spectra measured in the laboratory to constrain parameter inversion by an extended reversible jump Markov chain Monte Carlo algorithm. However, we also show in a separate experiment how the multispectral LiDAR can recover directly a profile of Normalized Difference Vegetation Index (NDVI), which is verified against the laboratory spectral measurements. Our work shows the potential of multispectral LiDAR to recover both structural and physiological data and also highlights the fine spatial resolution that can be achieved with time-correlated single-photon counting.
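The NDVI profile mentioned above is computed from near-infrared and red reflectance. A minimal sketch (the reflectance values below are illustrative, not measurements from the paper):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Healthy foliage reflects strongly in the NIR and absorbs red light,
# so NDVI approaches +1; bark and soil give values closer to zero.
print(ndvi(0.50, 0.10))  # ≈ 0.667
```

A multispectral LiDAR recovers this index per range bin, which is what turns a scalar vegetation index into the depth-resolved NDVI profile described above.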


Optics Express | 2015

Metasurface for characterization of the polarization state of light

Dandan Wen; Fuyong Yue; Santosh Kumar; Yong Ma; Ming-Huei Chen; Ximing Ren; Peter E. Kremer; Brian D. Gerardot; Mohammad R. Taghizadeh; Gerald S. Buller; Xianzhong Chen

The miniaturization of measurement systems currently used to characterize the polarization state of light is limited by the bulky optical components used such as polarizers and waveplates. We propose and experimentally demonstrate a simple and compact approach to measure the ellipticity and handedness of the polarized light using an ultrathin (40 nm) gradient metasurface. A completely polarized light beam is decomposed into a left circularly polarized beam and a right circularly polarized beam, which are steered in two directions by the metasurface consisting of nanorods with spatially varying orientations. By measuring the intensities of the refracted light spots, the ellipticity and handedness of various incident polarization states are characterized at a range of wavelengths and used to determine the polarization information of the incident beam. To fully characterize the polarization state of light, an extra polarizer can be used to measure the polarization azimuth angle of the incident light.
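The intensities of the two circularly polarized spots determine the normalized circular Stokes parameter, from which ellipticity and handedness follow. A sketch under the convention that S3 > 0 means right-handed (the sign convention is an assumption, not stated in the abstract):

```python
import math

def ellipticity_and_handedness(i_rcp, i_lcp):
    """Ellipticity angle (rad) and handedness from the measured intensities
    of the right- and left-circularly polarized refracted spots."""
    s3 = (i_rcp - i_lcp) / (i_rcp + i_lcp)  # normalized circular Stokes parameter
    chi = 0.5 * math.asin(s3)               # ellipticity angle: sin(2*chi) = S3
    if s3 > 0:
        handedness = "right"
    elif s3 < 0:
        handedness = "left"
    else:
        handedness = "linear"
    return chi, handedness

print(ellipticity_and_handedness(1.0, 1.0))  # (0.0, 'linear')
print(ellipticity_and_handedness(1.0, 0.0))  # (pi/4, 'right'), i.e. circular
```

As the abstract notes, these two intensities fix ellipticity and handedness but not the azimuth angle, which is why an extra polarizer is needed for full characterization.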


IEEE Transactions on Image Processing | 2016

Lidar Waveform-Based Analysis of Depth Images Constructed Using Sparse Single-Photon Data

Yoann Altmann; Ximing Ren; Aongus McCarthy; Gerald S. Buller; Steve McLaughlin

This paper presents a new Bayesian model and algorithm used for depth and reflectivity profiling using full waveforms from the time-correlated single-photon counting measurement in the limit of very low photon counts. The proposed model represents each Lidar waveform as a combination of a known impulse response, weighted by the target reflectivity, and an unknown constant background, corrupted by Poisson noise. Prior knowledge about the problem is embedded through prior distributions that account for the different parameter constraints and their spatial correlation among the image pixels. In particular, a gamma Markov random field (MRF) is used to model the joint distribution of the target reflectivity, and a second MRF is used to model the distribution of the target depth, which are both expected to exhibit significant spatial correlations. An adaptive Markov chain Monte Carlo algorithm is then proposed to perform Bayesian inference. This algorithm is equipped with a stochastic optimization adaptation mechanism that automatically adjusts the parameters of the MRFs by maximum marginal likelihood estimation. Finally, the benefits of the proposed methodology are demonstrated through a series of experiments using real data.
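The per-pixel observation model described above can be sketched as a Poisson likelihood. This toy maximum-likelihood grid search over the depth bin stands in for the paper's full MCMC inference; all parameter values are illustrative, and the circular shift is a simplification.

```python
import numpy as np

def log_likelihood(y, h, t0, r, b):
    """Poisson log-likelihood (up to a constant) of a photon-count waveform y
    under the model lambda_t = r * h(t - t0) + b, with reflectivity r and
    constant background b."""
    lam = r * np.roll(h, t0) + b   # circular shift keeps the sketch simple
    return float(np.sum(y * np.log(lam) - lam))

# Toy example: recover the depth bin by a grid search over t0.
h = np.exp(-((np.arange(200) - 20.0) ** 2) / 20.0)
h /= h.sum()                       # normalized impulse response
y = 50.0 * np.roll(h, 80) + 1.0    # noiseless "observation" with true t0 = 80
t0_hat = max(range(200), key=lambda t: log_likelihood(y, h, t, 50.0, 1.0))
print(t0_hat)  # 80
```

The paper goes well beyond this pointwise fit: the MRF priors couple neighboring pixels, so depth and reflectivity estimates borrow strength across the image in the sparse-photon regime.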


Optics Express | 2015

Underwater depth imaging using time-correlated single photon counting

Aurora Maccarone; Aongus McCarthy; Ximing Ren; Ryan E. Warburton; Andrew M. Wallace; James Moffat; Yvan Petillot; Gerald S. Buller

We investigate the potential of a depth imaging system for underwater environments. This system is based on the time-of-flight approach and the time-correlated single-photon counting (TCSPC) technique. We report laboratory-based measurements and explore the potential of achieving sub-centimeter xyz resolution at stand-off distances of tens of meters. Initial laboratory-based experiments demonstrate depth imaging performed over distances of up to 1.8 meters and under a variety of scattering conditions. The system comprised a monostatic transceiver unit, a fiber-coupled supercontinuum laser with a wavelength-tunable acousto-optic filter, and a fiber-coupled individual silicon single-photon avalanche diode (SPAD). The scanning in xy was performed using a pair of galvanometer mirrors directing both the illumination and the scattered returns via a coaxial optical configuration. Target objects were placed in a 110 liter capacity tank and depth images were acquired through approximately 1.7 meters of water containing different concentrations of scattering agent. Depth images were acquired in clear and highly scattering water using per-pixel acquisition times in the range 0.5-100 ms at average optical powers in the range 0.8 nW to 120 μW. Based on the laboratory measurements, estimations of potential performance, including the maximum possible range, were made with a model based on the LIDAR equation. These predictions are presented for different levels of scattering agent concentration, optical powers, and wavelengths, and comparisons are made with naturally occurring environments. The experimental and theoretical results indicate that the TCSPC technique has potential for high-resolution underwater depth profile measurements.
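The performance model mentioned above follows the LIDAR equation. A simplified single-scattering sketch for a Lambertian target in an attenuating medium; the exact form and every parameter value below are illustrative assumptions, not the paper's model.

```python
import math

def received_power(p_tx, eta, rho, aperture_area, range_m, alpha):
    """Simplified LIDAR-equation return for a Lambertian target:
    P_rx = P_tx * eta * rho * A / (pi * R^2) * exp(-2 * alpha * R),
    where eta is system efficiency, rho target reflectivity, A the receive
    aperture area, and alpha the water's attenuation coefficient (1/m);
    the exponential accounts for two-way propagation loss."""
    geometry = rho * aperture_area / (math.pi * range_m ** 2)
    return p_tx * eta * geometry * math.exp(-2.0 * alpha * range_m)

# Return falls off steeply with both range and scattering:
clear = received_power(1e-3, 0.5, 0.2, 5e-3, 10.0, 0.05)
turbid = received_power(1e-3, 0.5, 0.2, 5e-3, 10.0, 0.50)
print(clear > turbid)  # True: higher attenuation gives a weaker return
```

Because alpha sits in an exponential over twice the range, small changes in scattering-agent concentration dominate the achievable maximum range, which is why the predictions are tabulated against attenuation level.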


Optical Engineering | 2017

Long-range depth profiling of camouflaged targets using single-photon detection

Rachael Tobin; Abderrahim Halimi; Aongus McCarthy; Ximing Ren; Kenneth J. McEwan; Stephen McLaughlin; Gerald S. Buller

We investigate the reconstruction of depth and intensity profiles from data acquired using a custom-designed time-of-flight scanning transceiver based on the time-correlated single-photon counting technique. The system had an operational wavelength of 1550 nm and used a Peltier-cooled InGaAs/InP single-photon avalanche diode detector. Measurements were made of human figures, in plain view and obscured by camouflage netting, from a stand-off distance of 230 m in daylight using only submilliwatt average optical powers. These measurements were analyzed using a pixelwise cross correlation approach and compared to analysis using a bespoke algorithm designed for the restoration of multilayered three-dimensional light detection and ranging images. This algorithm is based on the optimization of a convex cost function composed of a data fidelity term and regularization terms, and the results obtained show that it achieves significant improvements in image quality for multidepth scenarios and for reduced acquisition times.
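The restoration algorithm's cost has the generic form "data fidelity plus regularization". A minimal total-variation sketch of that structure; this is not the paper's exact cost, whose fidelity term is matched to the photon statistics and which handles multiple depth layers.

```python
import numpy as np

def cost(x, y, tau):
    """Convex restoration cost: quadratic data fidelity plus an anisotropic
    total-variation penalty that favors piecewise-smooth depth maps,
    weighted by tau >= 0."""
    fidelity = np.sum((x - y) ** 2)
    tv = np.sum(np.abs(np.diff(x, axis=0))) + np.sum(np.abs(np.diff(x, axis=1)))
    return float(fidelity + tau * tv)

# A flat depth map incurs no TV penalty; a checkerboard-noisy one does.
y = np.full((8, 8), 2.0)
flat = np.full((8, 8), 2.0)
noisy = flat + 0.1 * (-1.0) ** np.add.outer(np.arange(8), np.arange(8))
print(cost(flat, y, tau=1.0) < cost(noisy, y, tau=1.0))  # True
```

Minimizing such a cost trades agreement with the raw cross-correlation estimates against spatial smoothness, which is what suppresses the speckle-like errors at short acquisition times.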


International Conference on Acoustics, Speech, and Signal Processing | 2016

Target detection for depth imaging using sparse single-photon data

Yoann Altmann; Ximing Ren; Aongus McCarthy; Gerald S. Buller; Steve McLaughlin

This paper presents a new Bayesian model and associated algorithm for depth and intensity profiling using full waveforms from time-correlated single-photon counting (TCSPC) measurements when the photon count is very low. The model represents each Lidar waveform as an unknown constant background level which, in the presence of a target, is combined with a known impulse response weighted by the target intensity, and finally corrupted by Poisson noise. The joint target detection and depth imaging problem is expressed as a pixel-wise model selection problem which is solved using Bayesian inference. A reversible jump Markov chain Monte Carlo algorithm is proposed to compute the Bayesian estimates of interest. Finally, the benefits of the methodology are demonstrated through a series of experiments using real data.
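The paper performs the pixel-wise model selection with reversible-jump MCMC. As a much simpler stand-in, a generalized likelihood-ratio test between the "background only" and "background plus target" models conveys the idea; the threshold and all parameters below are illustrative.

```python
import numpy as np

def target_present(y, h, r, b, threshold=5.0):
    """Compare the Poisson log-likelihood of 'background only' (lambda = b)
    against the best 'background + target' fit over candidate positions t0."""
    ll_bg = float(np.sum(y * np.log(b) - b))
    ll_tgt = max(
        float(np.sum(y * np.log(r * np.roll(h, t) + b) - (r * np.roll(h, t) + b)))
        for t in range(len(y))
    )
    return (ll_tgt - ll_bg) > threshold

h = np.exp(-((np.arange(100) - 10.0) ** 2) / 10.0)
h /= h.sum()                             # normalized impulse response
with_target = 40.0 * np.roll(h, 60) + 1.0
background_only = np.full(100, 1.0)
print(target_present(with_target, h, 40.0, 1.0))      # True
print(target_present(background_only, h, 40.0, 1.0))  # False
```

The Bayesian treatment in the paper replaces the hand-tuned threshold with posterior model probabilities and shares information between neighboring pixels, which matters most in the sparse-photon regime.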


Optics Express | 2015

Fill-factor improvement of Si CMOS single-photon avalanche diode detector arrays by integration of diffractive microlens arrays

Giuseppe Intermite; Aongus McCarthy; Ryan E. Warburton; Ximing Ren; Federica Villa; Rudi Lussana; Andrew J. Waddie; Mohammad R. Taghizadeh; Alberto Tosi; Franco Zappa; Gerald S. Buller

Single-photon avalanche diode (SPAD) detector arrays generally suffer from having a low fill-factor, in which the photo-sensitive area of each pixel is small compared to the overall area of the pixel. This paper describes the integration of different configurations of high efficiency diffractive optical microlens arrays onto a 32 × 32 SPAD array, fabricated using a 0.35 µm CMOS technology process. The characterization of SPAD arrays with integrated microlens arrays is reported over the spectral range of 500-900 nm, and a range of f-numbers from f/2 to f/22. We report an average concentration factor of 15 measured for the entire SPAD array with integrated microlens array. The integrated SPAD and microlens array demonstrated a very high uniformity in overall efficiency.


Optics Express | 2018

High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor

Ximing Ren; Peter W. R. Connolly; Abderrahim Halimi; Yoann Altmann; Stephen McLaughlin; Istvan Gyongy; Robert Henderson; Gerald S. Buller

A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
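The binomial statistics of range-gated binary frames admit a simple flux estimate: if each frame registers at most one detection with probability p = 1 - exp(-lambda), then lambda can be recovered from the observed detection fraction. A sketch that ignores dark counts and dead time; the numbers are illustrative.

```python
import math

def flux_from_binary_frames(k, n):
    """Estimate the mean photon number per gate, lambda, from k detections
    in n binary frames, inverting P(detect) = 1 - exp(-lambda)."""
    p_hat = k / n              # binomial estimate of the detection probability
    return -math.log(1.0 - p_hat)

# With lambda = 0.5 photons per gate, about 39.3% of frames fire:
print(round(flux_from_binary_frames(39347, 100000), 3))  # ~0.5
```

This inversion is why high frame rates help: the variance of p_hat shrinks as 1/n, and the correction for the nonlinear saturation near p_hat = 1 is exactly what the binomial model in the reconstruction algorithm accounts for.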
