J. P. Lüke
University of La Laguna
Publications
Featured research published by J. P. Lüke.
Proceedings of SPIE | 2011
J. M. Rodríguez-Ramos; J. P. Lüke; R. López; José Gil Marichal-Hernández; I. Montilla; J. M. Trujillo-Sevilla; Bruno Femenia; Marta Puga; M. López; J. J. Fernández-Valdivia; F. Rosa; C. Dominguez-Conde; J. C. Sanluis; Luis Fernando Rodríguez-Ramos
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to compensate for the resolution loss associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations of the aforementioned aspects, together with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing, which justifies the use of GPUs and FPGAs. Sodium artificial Laser Guide Stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new link between the wave optics and computer vision fields, one that many authors have called for.
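As a rough illustration of the wavefront-sensing idea, the sketch below treats each microlens subimage of a plenoptic frame like a Shack-Hartmann subaperture and reads the local wavefront slope from its centroid shift. This is a hypothetical Python illustration, not the CAFADIS pipeline; the function name, the regular-grid layout, and the slope-from-centroid model are all assumptions.

```python
# Minimal sketch of slope-based wavefront sensing from a plenoptic frame.
# Hypothetical illustration, not the CAFADIS pipeline: each microlens
# subimage is treated like a Shack-Hartmann subaperture, and the local
# wavefront slope is estimated from the centroid shift of its spot.
import numpy as np

def local_slopes(frame, n_lenses, pitch):
    """Estimate per-microlens (x, y) wavefront slopes from centroid shifts.

    frame    -- 2D plenoptic image (n_lenses * pitch pixels per side)
    n_lenses -- number of microlenses per side
    pitch    -- pixels per microlens subimage
    """
    ys, xs = np.mgrid[0:pitch, 0:pitch]
    center = (pitch - 1) / 2.0
    slopes = np.zeros((n_lenses, n_lenses, 2))
    for i in range(n_lenses):
        for j in range(n_lenses):
            sub = frame[i*pitch:(i+1)*pitch, j*pitch:(j+1)*pitch]
            total = sub.sum()
            if total > 0:
                # Centroid displacement from the subimage centre is
                # proportional to the mean local wavefront slope.
                slopes[i, j, 0] = (xs * sub).sum() / total - center
                slopes[i, j, 1] = (ys * sub).sum() / total - center
    return slopes
```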
IEEE/OSA Journal of Display Technology | 2015
J. P. Lüke; F. Rosa; José Gil Marichal-Hernández; J. C. Sanluis; C. Dominguez Conde; J. M. Rodríguez-Ramos
In this paper, we develop a local method to obtain depth from the 4D light field. In contrast to previous local depth-from-light-field methods based on EPIs, i.e., 2D slices of the light field, the proposed method takes into account the 4D nature of the light field and uses all four of its dimensions. Furthermore, our technique adapts well to parallel hardware. The performance of the method is tested against a publicly available benchmark dataset and compared with other algorithms previously tested on the same benchmark. Results show that the proposed method achieves competitive results in reasonable time.
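The abstract does not detail the method, but a common way to use all four light-field dimensions locally is to score candidate disparities by the photo-consistency of sheared angular samples. The sketch below is such a generic baseline in Python, assuming a small `(u, v, y, x)` light-field array and integer-pixel shears; it is not the authors' algorithm.

```python
# Illustrative local depth estimate over the full 4D light field:
# for each candidate disparity, shear all angular views against the
# central one and score photo-consistency by the angular variance.
import numpy as np

def local_depth(lf, disparities):
    """lf: light field with axes (u, v, y, x); returns per-pixel best disparity."""
    U, V, Y, X = lf.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    best_cost = np.full((Y, X), np.inf)
    best_disp = np.zeros((Y, X))
    for d in disparities:
        stack = np.empty((U * V, Y, X))
        for u in range(U):
            for v in range(V):
                dy = int(round(d * (u - uc)))
                dx = int(round(d * (v - vc)))
                stack[u * V + v] = np.roll(lf[u, v], (dy, dx), axis=(0, 1))
        cost = stack.var(axis=0)          # low variance -> consistent depth
        mask = cost < best_cost
        best_cost[mask] = cost[mask]
        best_disp[mask] = d
    return best_disp
```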
Sensors | 2010
Eduardo Magdaleno; J. P. Lüke; Manuel Silva Rodríguez; J. M. Rodríguez-Ramos
In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel, pipelined architecture that implements the algorithm without external memory. Although the design consumes considerably more of the device's BRAM resources, it maintains real-time constraints through extremely high-performance parallel signal processing and simultaneous access to several memories. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm.
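For readers unfamiliar with the kernel being accelerated, below is a minimal software sketch of a min-sum belief propagation message update of the kind such an FPGA design pipelines. It assumes a linear (untruncated) smoothness term; the actual VHDL architecture, label count, and cost model are not specified in the abstract.

```python
# Minimal sketch of one min-sum belief propagation message update for a
# pixel in a loopy-BP depth-labeling MRF. Generic software illustration,
# not the VHDL design itself.
import numpy as np

def send_message(data_cost, incoming, lam):
    """Compute the message a pixel sends to one neighbour.

    data_cost -- (L,) per-label data term for this pixel
    incoming  -- (3, L) messages from all neighbours except the receiver
    lam       -- weight of the linear smoothness penalty
    """
    L = data_cost.shape[0]
    h = data_cost + incoming.sum(axis=0)       # aggregate local belief
    msg = np.empty(L)
    for fq in range(L):                        # candidate label at receiver
        msg[fq] = np.min(h + lam * np.abs(np.arange(L) - fq))
    return msg - msg.mean()                    # normalise to avoid drift
```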
Proceedings of SPIE | 2012
Luis Fernando Rodríguez-Ramos; I. Montilla; J. P. Lüke; R. López; José Gil Marichal-Hernández; J. M. Trujillo-Sevilla; Bruno Femenia; M. López; J. J. Fernández-Valdivia; Marta Puga; F. Rosa; J. M. Rodríguez-Ramos
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning, allowing focal stack capture from a single shot. The data recorded by this kind of sensor can also be used to extract the wavefront phases associated with atmospheric turbulence in an astronomical observation. The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. Sodium artificial Laser Guide Stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically, taking advantage of the two principal capabilities of plenoptic sensors at the same time: 3D scanning and wavefront sensing. Plenoptic sensors can therefore be studied and used as an alternative wavefront sensor for Adaptive Optics, which is particularly relevant now that Extremely Large Telescope projects are being undertaken. In this paper, we present the first observational wavefront phases extracted from real astronomical observations, using point-like and extended objects, and we show that the restored wavefronts match Kolmogorov atmospheric turbulence.
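One plausible way to check restored phases against Kolmogorov statistics is to compare the empirical phase structure function with the theoretical D(r) = 6.88 (r/r0)^(5/3). The sketch below assumes a square phase map sampled on a regular grid and a known Fried parameter r0; the paper's exact validation procedure is not given in the abstract.

```python
# Sketch: compare an empirical phase structure function with the
# Kolmogorov prediction D(r) = 6.88 * (r / r0)**(5/3).
# Separations r are in pixels; r0 must be expressed in the same units.
import numpy as np

def structure_function(phase, max_sep):
    """Mean-square phase difference <[phi(x+r) - phi(x)]^2> along one axis."""
    return np.array([np.mean((phase[:, r:] - phase[:, :-r]) ** 2)
                     for r in range(1, max_sep)])

def kolmogorov(r, r0):
    """Theoretical Kolmogorov phase structure function."""
    return 6.88 * (r / r0) ** (5.0 / 3.0)
```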
Journal of Electronic Imaging | 2012
José Gil Marichal-Hernández; J. P. Lüke; Fernando Rosa; J. M. Rodríguez-Ramos
We develop a new algorithm that extends the bidimensional fast digital Radon transform of Götz and Druckmüller (1996) to digitally simulate the refocusing of a 4D lightfield into a 3D volume of photographic planes, as previously done by Ng et al. (2005), but with the minimum number of operations. The new algorithm requires no multiplications, just sums, and its computational complexity is O(N^4) to achieve a volume consisting of 2N photographic planes focused at different depths from an N^4 plenoptic image. This reduced complexity allows for the acquisition and processing of a plenoptic sequence with the purpose of estimating 3D shape at video rate. Examples are given of implementations on GPU and CPU platforms. Finally, a modified version of the algorithm that deals with domains whose sizes are not a power of two is proposed.
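For contrast with the additions-only Radon formulation, the sketch below computes the same kind of focal stack by naive shift-and-add over the angular views, one photographic plane per slope value; it assumes integer-pixel shears and a `(u, v, y, x)` array layout, and it is a baseline rather than the paper's algorithm.

```python
# Naive shift-and-add refocusing baseline: each refocused plane is the
# average of all angular views, shifted in proportion to their angular
# offset. The paper's contribution is a far cheaper, additions-only
# Radon-transform formulation of this same computation.
import numpy as np

def focal_stack(lf, slopes):
    """lf: (U, V, Y, X) light field; returns one refocused image per slope."""
    U, V, Y, X = lf.shape
    uc, vc = (U - 1) / 2, (V - 1) / 2
    planes = []
    for s in slopes:
        acc = np.zeros((Y, X))
        for u in range(U):
            for v in range(V):
                # Shift proportional to angular offset; summing the
                # shifted views focuses at the depth encoded by s.
                acc += np.roll(lf[u, v],
                               (int(round(s * (u - uc))),
                                int(round(s * (v - vc)))), axis=(0, 1))
        planes.append(acc / (U * V))
    return planes
```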
IEEE/OSA Journal of Display Technology | 2015
I. Montilla; Marta Puga; J. P. Lüke; José Gil Marichal-Hernández; J. M. Rodríguez-Ramos
The plenoptic camera was originally created to allow the capture of the light field, a four-variable volume representation of all rays and their directions, which allows an image of the observed object to be created by synthesis. This method has several advantages over 3D capture systems based on stereo cameras, since it needs no frame synchronization or geometric and color calibration. It also has many applications, from 3DTV to medical imaging. A plenoptic camera uses a microlens array to measure the radiance and direction of all the light rays in a scene. The array is placed at a distance from the principal lens that is conjugate to the distance at which the scene is situated, and the sensor is at the focal plane of the microlenses. We have designed a plenoptic objective that incorporates a microlens array and a relay system that reimages the microlens plane. This novel approach has proven successful. Placed on a camera, the plenoptic objective creates a virtual microlens plane in front of the camera CCD, allowing it to capture the light field of the scene. In this paper we present experimental results showing that depth information is correctly captured when using an external plenoptic objective. This objective transforms any camera into a 3D sensor, opening up a wide range of applications from microscopy to astronomy.
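A quick thin-lens calculation illustrates the conjugate-plane placement the abstract describes: the microlens array sits where the principal lens images the scene. The focal length and scene distance below are made-up example values; the real objective and relay design are not specified in the abstract.

```python
# Thin-lens sketch of the conjugate-plane placement: the microlens array
# goes at the image distance of the scene. Example values only.

def conjugate_distance(f, d_object):
    """Image distance from the thin-lens equation 1/f = 1/d_o + 1/d_i."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

f_main = 50e-3    # assumed 50 mm principal lens
d_scene = 2.0     # assumed scene distance of 2 m
d_array = conjugate_distance(f_main, d_scene)
print(f"place microlens array {d_array * 1e3:.1f} mm behind the lens")
```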
Proceedings of SPIE | 2011
José Gil Marichal-Hernández; J. P. Lüke; Fernando Rosa; J. M. Rodríguez-Ramos
In this work we develop a new algorithm that extends the bidimensional fast digital Radon transform of Götz and Druckmüller (1996) to digitally simulate the refocusing of a 4D light field into a 3D volume of photographic planes, as previously done by Ren Ng et al. (2005), but with the minimum number of operations. The new algorithm requires no multiplications, just sums, and its computational complexity is O(N^4) to achieve a volume consisting of 2N photographic planes focused at different depths from an N^4 plenoptic image. This reduced complexity allows for the acquisition and processing of a plenoptic sequence with the purpose of estimating 3D shape at video rate. Examples are given of implementations on GPU and CPU platforms. Finally, a modified version of the algorithm that deals with domains whose sizes are not a power of two is proposed.
Workshop on Information Optics | 2013
J. P. Lüke; Fernando Rosa; J. C. Sanluis; José Gil Marichal-Hernández; J. M. Rodríguez-Ramos
In recent years, interest in depth extraction from 4D light fields has increased. These techniques rely on detecting the slopes of planar structures in the light field function; the same structures appear as linear structures in epipolar-plane image (EPI) representations of the light field. Since EPI representations are 2D signals, known orientation detection methods can be applied. This work shows that orientation estimation suffers from two types of error: systematic errors and random errors. A theoretical expression for the standard deviation of the random errors is formulated and verified.
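A standard 2D orientation detector of the kind the abstract alludes to is the structure tensor; the sketch below estimates per-pixel orientation on an EPI with it. The smoothing scale `sigma` is an arbitrary choice, and this is not necessarily the estimator analyzed in the paper.

```python
# Structure-tensor orientation estimation on an epipolar-plane image (EPI).
# The returned angle is the dominant gradient orientation; EPI lines run
# perpendicular to it, and their slope encodes disparity/depth.
import numpy as np
from scipy.ndimage import gaussian_filter

def epi_orientation(epi, sigma=1.5):
    """Per-pixel dominant gradient orientation (radians, mod pi) of an EPI."""
    gy, gx = np.gradient(epi.astype(float))
    # Structure tensor components, smoothed over a local window.
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
```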
Proceedings of SPIE | 2013
I. Montilla; Marta Puga; José Gil Marichal-Hernández; J. P. Lüke; J. M. Rodríguez-Ramos
The plenoptic camera was originally created to allow the capture of the light field, a four-variable volume representation of all rays and their directions, which allows a 3D image of the observed object to be created by synthesis. This method has several advantages over 3D capture systems based on stereo cameras, since it needs no frame synchronization or geometric and color calibration. It also has many applications, from 3DTV to medical imaging. A plenoptic camera uses a microlens array to measure the radiance and direction of all the light rays in a scene. The array is placed at the focal plane of the objective lens, and the sensor is at the focal plane of the microlenses. In this paper we study the application of our superresolution algorithm to mobile phone cameras. With a commercial camera, it is already possible to obtain images of good resolution and a sufficient number of refocused planes simply by placing a microlens array in front of the detector.
Technologies Applied to Electronics Teaching | 2012
Eduardo Magdaleno Castelló; Manuel Rodríguez Valido; Alejandro Ayala Alfonso; J. P. Lüke
In this paper we describe the changes made to a digital logic design course to fit the new Bologna plan. The course now has significantly more laboratory hours and more students than in previous years. We present the results of a student satisfaction survey and compare the course pass rate with that of past years.