
Publications


Featured research published by Patrick Llull.


Optics Express | 2013

Coded aperture compressive temporal imaging

Patrick Llull; Xuejun Liao; Xin Yuan; Jianbo Yang; David S. Kittle; Lawrence Carin; Guillermo Sapiro; David J. Brady

We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of >10 frames of temporal data per coded snapshot.
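The measurement model behind this camera can be illustrated with a short sketch: a single binary code is translated across the exposure, so each high-speed frame sees a differently shifted mask, and the detector integrates the masked frames into one snapshot. All dimensions and array names below are hypothetical toy choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 high-speed frames of a 32x32 scene
# collapse into a single coded snapshot.
T, H, W = 8, 32, 32
video = rng.random((T, H, W))          # high-speed scene frames x_t
code = rng.integers(0, 2, (H, W))      # one fixed binary coded aperture

# Mechanical translation: shift the single code by one pixel per frame,
# giving a distinct mask C_t for each temporal channel.
masks = np.stack([np.roll(code, t, axis=1) for t in range(T)])

# The detector integrates the masked frames over the exposure:
# y = sum_t C_t * x_t  (elementwise product, summed over time).
snapshot = (masks * video).sum(axis=0)

print(snapshot.shape)  # (32, 32): T frames compressed into one measurement
```

Reconstruction then amounts to inverting this underdetermined linear map, which is where the priors discussed in the papers below come in.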


IEEE Transactions on Image Processing | 2014

Video Compressive Sensing Using Gaussian Mixture Models

Jianbo Yang; Xin Yuan; Xuejun Liao; Patrick Llull; David J. Brady; Guillermo Sapiro; Lawrence Carin

A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.
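The analytic expressions mentioned in the abstract come from a standard property of Gaussian mixtures: under a GMM prior and linear Gaussian measurements, the posterior is again a GMM, so the MMSE estimate is a responsibility-weighted sum of per-component Wiener filters. The sketch below illustrates this on a toy patch; all dimensions, the sensing matrix, and the pre-trained GMM parameters are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Toy setup: d-dimensional patches, m compressive measurements, K components.
d, m, K = 16, 8, 3
A = rng.standard_normal((m, d))          # sensing matrix
sigma2 = 1e-3                            # measurement-noise variance

# A pre-trained GMM prior over patches: weights, means, covariances.
pi = np.full(K, 1.0 / K)
mus = rng.standard_normal((K, d))
Sigmas = np.stack([np.eye(d) * s for s in (0.5, 1.0, 2.0)])

x_true = mus[1] + 0.1 * rng.standard_normal(d)
y = A @ x_true + np.sqrt(sigma2) * rng.standard_normal(m)

# Closed-form reconstruction: per-component posterior means, weighted by
# each component's responsibility for the measurement.
post_means, log_evid = [], []
for k in range(K):
    S = A @ Sigmas[k] @ A.T + sigma2 * np.eye(m)   # marginal covariance of y
    gain = Sigmas[k] @ A.T @ np.linalg.inv(S)
    post_means.append(mus[k] + gain @ (y - A @ mus[k]))
    log_evid.append(np.log(pi[k]) + multivariate_normal.logpdf(y, A @ mus[k], S))

resp = np.exp(log_evid - np.max(log_evid))
resp /= resp.sum()
x_hat = sum(r * pm for r, pm in zip(resp, post_means))
print(x_hat.shape)  # (16,)
```

Because each patch is processed independently, this computation parallelizes naturally, which is the basis for the parallel-computation claim in the abstract.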


IEEE Journal of Selected Topics in Signal Processing | 2015

Compressive Hyperspectral Imaging With Side Information

Xin Yuan; Tsung-Han Tsai; Ruoyu Zhu; Patrick Llull; David J. Brady; Lawrence Carin

A blind compressive sensing algorithm is proposed to reconstruct hyperspectral images from spectrally-compressed measurements. The wavelength-dependent data are coded and then superposed, mapping the three-dimensional hyperspectral datacube to a two-dimensional image. The inversion algorithm learns a dictionary in situ from the measurements via global-local shrinkage priors. By using RGB images as side information of the compressive sensing system, the proposed approach is extended to learn a coupled dictionary from the joint dataset of the compressed measurements and the corresponding RGB images, to improve reconstruction quality. A prototype camera is built using a liquid-crystal-on-silicon modulator. Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera and the benefit of side information.


IEEE Transactions on Image Processing | 2015

Compressive Sensing by Learning a Gaussian Mixture Model From Measurements

Jianbo Yang; Xuejun Liao; Xin Yuan; Patrick Llull; David J. Brady; Guillermo Sapiro; Lawrence Carin

Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.
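The key step of integrating the signals out can be made concrete: if x follows component k of the GMM and y = Ax + w with Gaussian noise, then y is itself Gaussian with mean A mu_k and covariance A Sigma_k A^T + sigma^2 I, so the marginal likelihood of the measurements needs no access to x. The sketch below evaluates that objective (the quantity the MMLE maximizes) on stand-in data; all dimensions, parameters, and data are hypothetical.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

# Toy illustration: d-dim signals, m measurements, K components, n samples.
d, m, K, n = 12, 6, 2, 50
A = rng.standard_normal((m, d))
sigma2 = 1e-2

pi = np.array([0.4, 0.6])
mus = rng.standard_normal((K, d))
Sigmas = np.stack([np.eye(d), 2.0 * np.eye(d)])

Y = rng.standard_normal((n, m))  # stand-in compressive measurements

def log_marginal(Y, pi, mus, Sigmas):
    # Signals integrated out: under component k the measurement is Gaussian
    # with mean A mu_k and covariance A Sigma_k A^T + sigma^2 I.
    comp = np.stack([
        np.log(pi[k]) + multivariate_normal.logpdf(
            Y, A @ mus[k], A @ Sigmas[k] @ A.T + sigma2 * np.eye(m))
        for k in range(K)])
    # log-sum-exp over components, summed over the dataset
    mx = comp.max(axis=0)
    return (mx + np.log(np.exp(comp - mx).sum(axis=0))).sum()

print(np.isfinite(log_marginal(Y, pi, mus, Sigmas)))  # True
```

Maximizing this objective over (pi, mus, Sigmas), e.g. with an EM-style scheme, is what lets the model be learned in situ from the compressive measurements alone.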


International Conference on Image Processing | 2013

Adaptive temporal compressive sensing for video

Xin Yuan; Jianbo Yang; Patrick Llull; Xuejun Liao; Guillermo Sapiro; David J. Brady; Lawrence Carin

This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems.
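The adaptive idea can be sketched as a simple decision rule: estimate the scene's temporal complexity from cheap preview data and pick how many high-speed frames to pack into one coded snapshot (more for static scenes, fewer for fast motion). The motion measure, thresholds, and ratios below are purely illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sketch: mean absolute inter-frame difference of preview
# frames as a crude temporal-complexity measure; thresholds are illustrative.
def choose_compression_ratio(preview, thresholds=(0.02, 0.08), ratios=(36, 18, 8)):
    complexity = np.abs(np.diff(preview, axis=0)).mean()
    for thr, T in zip(thresholds, ratios[:-1]):
        if complexity < thr:
            return T
    return ratios[-1]

static_scene = np.tile(rng.random((1, 16, 16)), (5, 1, 1))  # no motion
busy_scene = rng.random((5, 16, 16))                        # heavy motion

print(choose_compression_ratio(static_scene))  # 36: compress a static scene hard
print(choose_compression_ratio(busy_scene))    # 8: fast scene, short integration
```

In the camera, the chosen ratio would translate into the exposure/integration time per coded snapshot, which is the control knob the abstract describes.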


Computer Vision and Pattern Recognition | 2014

Low-Cost Compressive Sensing for Color Video and Depth

Xin Yuan; Patrick Llull; Xuejun Liao; Jianbo Yang; David J. Brady; Guillermo Sapiro; Lawrence Carin

A simple and inexpensive (low-power and low-bandwidth) modification is made to a conventional off-the-shelf color video camera, from which we recover multiple color frames for each of the original measured frames, and each of the recovered frames can be focused at a different depth. The recovery of multiple frames for each measured frame is made possible via high-speed coding, manifested via translation of a single coded aperture; the inexpensive translation is achieved by mounting the binary code on a piezoelectric device. To simultaneously recover depth information, a liquid lens is modulated at high speed via a variable voltage. Consequently, during the aforementioned coding process, the liquid lens allows the camera to sweep the focus through multiple depths. In addition to designing and implementing the camera, fast recovery is achieved by an anytime algorithm exploiting the group sparsity of wavelet/DCT coefficients.


Optics Letters | 2015

Spectral-temporal compressive imaging

Tsung-Han Tsai; Patrick Llull; Xin Yuan; Lawrence Carin; David J. Brady

This Letter presents a compressive camera that integrates mechanical translation and spectral dispersion to compress a multi-spectral, high-speed scene onto a monochrome, video-rate detector. Experimental reconstructions of 17 spectral channels and 11 temporal channels from a single measurement are reported for a megapixel-scale monochrome camera.


Optica | 2015

Image translation for single-shot focal tomography

Patrick Llull; Xin Yuan; Lawrence Carin; David J. Brady

Focus and depth of field are conventionally addressed by adjusting longitudinal lens position. More recently, combinations of deliberate blur and computational processing have been used to extend depth of field. Here we show that dynamic control of transverse and longitudinal lens position can be used to decode focus and extend depth of field without degrading static resolution. Our results suggest that optical image stabilization systems may be used for autofocus, extended depth of field, and 3D imaging.


International Conference on Image Processing | 2013

Gaussian mixture model for video compressive sensing

Jianbo Yang; Xin Yuan; Xuejun Liao; Patrick Llull; Guillermo Sapiro; David J. Brady; Lawrence Carin

A Gaussian Mixture Model (GMM)-based algorithm is proposed for video reconstruction from temporal compressed measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The developed GMM reconstruction method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed GMM with videos reconstructed from simulated compressive video measurements and from a real compressive video camera.


Applied Optics | 2016

Efficient patch-based approach for compressive depth imaging

Xin Yuan; Xuejun Liao; Patrick Llull; David J. Brady; Lawrence Carin

We present efficient camera hardware and algorithms to capture images with extended depth of field. The camera moves its focal plane via a liquid lens and modulates the scene at different focal planes by shifting a fixed binary mask, with synchronization achieved by using the same triangular wave to control the focal plane and the piezoelectric translator that shifts the mask. Efficient algorithms are developed to reconstruct the all-in-focus image and the depth map from a single coded exposure, and various sparsity priors are investigated to enhance the reconstruction, including group sparsity, tree structure, and dictionary learning. The algorithms naturally admit a parallel computational structure due to the independent patch-level operations. Experimental results on both simulated and real datasets demonstrate the efficacy of the new hardware and the inversion algorithms.
