
Publication


Featured research published by José Gil Marichal-Hernández.


International Journal of Digital Multimedia Broadcasting | 2010

Near Real-Time Estimation of Super-Resolved Depth and All-In-Focus Images from a Plenoptic Camera Using Graphics Processing Units

J. P. Lüke; F. Pérez Nava; José Gil Marichal-Hernández; J. M. Rodríguez-Ramos; Fernando Rosa

Depth range cameras are a promising solution for the 3DTV production chain. The generation of color images with their accompanying depth value simplifies the transmission bandwidth problem in 3DTV and yields a direct input for autostereoscopic displays. Recent developments in plenoptic video-cameras make it possible to introduce 3D cameras that operate similarly to traditional cameras. The use of plenoptic cameras for 3DTV has some benefits with respect to 3D capture systems based on dual stereo cameras since there is no need for geometric and color calibration or frame synchronization. This paper presents a method for simultaneously recovering depth and all-in-focus images from a plenoptic camera in near real time using graphics processing units (GPUs). Previous methods for 3D reconstruction using plenoptic images suffered from the drawback of low spatial resolution. A method that overcomes this deficiency is developed on parallel hardware to obtain near real-time 3D reconstruction with a final spatial resolution of pixels. This resolution is suitable as an input to some autostereoscopic displays currently on the market and shows that real-time 3DTV based on plenoptic video-cameras is technologically feasible.
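
The pipeline the abstract describes, recovering a depth map together with an all-in-focus image, can be illustrated with the classic focal-stack approach: score each refocused plane by local sharpness and keep, per pixel, the best-focused plane. The sketch below is a minimal CPU version of that idea; the paper's actual method works on super-resolved plenoptic data on the GPU, and the function name and local-variance focus measure here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_and_all_in_focus(focal_stack, window=7):
    """focal_stack: (D, H, W) array of planes refocused at D depths.
    Returns a per-pixel depth-index map and an all-in-focus composite."""
    stack = np.asarray(focal_stack, dtype=float)
    sharpness = np.empty_like(stack)
    for d, plane in enumerate(stack):
        mean = uniform_filter(plane, window)
        # Local variance as a simple focus measure (illustrative choice)
        sharpness[d] = uniform_filter(plane * plane, window) - mean * mean
    depth = sharpness.argmax(axis=0)                 # best-focused plane per pixel
    all_in_focus = np.take_along_axis(stack, depth[None], axis=0)[0]
    return depth, all_in_focus
```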


Journal of Electronic Imaging | 2007

Modal Fourier wavefront reconstruction using graphics processing units

José Gil Marichal-Hernández; J. M. Rodríguez-Ramos; Fernando Rosa

Large degree-of-freedom, real-time adaptive optics control requires reconstruction algorithms that are computationally efficient and readily parallelized for hardware implementation. Poyneer et al. [J. Opt. Soc. Am. A 19, 2100–2111 (2002)] have shown that wavefront reconstruction using the fast Fourier transform (FFT) and spatial filtering is computationally tractable and sufficiently accurate for use in large Shack–Hartmann-based adaptive optics systems (up to 10,000 actuators). We show here that by using graphical processing units (GPUs), specialized hardware capable of performing FFTs on large sequences almost 5 times faster than a high-end CPU, a problem of up to 50,000 actuators can already be solved within a 6-ms limit. We describe how to adapt the FFT efficiently to the underlying architecture of GPUs.
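
For orientation, the filtering step the abstract refers to amounts to a Fourier-domain least-squares inversion of the slope measurements. The following is a minimal NumPy sketch in the style of Poyneer et al.; the paper's contribution is mapping these FFTs onto the GPU, which this CPU sketch does not attempt, and the forward-difference transfer functions and periodic boundaries are simplifying assumptions.

```python
import numpy as np

def fft_reconstruct(sx, sy):
    """Fourier-domain least-squares wavefront from x/y slope maps
    (Hudgin-style forward differences, periodic boundaries assumed)."""
    n = sx.shape[0]
    k = np.fft.fftfreq(n)                      # cycles per sample
    kx, ky = np.meshgrid(k, k)                 # kx varies along columns (x)
    gx = np.exp(2j * np.pi * kx) - 1.0         # transfer function of d/dx
    gy = np.exp(2j * np.pi * ky) - 1.0         # transfer function of d/dy
    denom = np.abs(gx) ** 2 + np.abs(gy) ** 2
    denom[0, 0] = 1.0                          # piston mode is unobservable
    phi_hat = (np.conj(gx) * np.fft.fft2(sx) +
               np.conj(gy) * np.fft.fft2(sy)) / denom
    phi_hat[0, 0] = 0.0                        # zero the piston term
    return np.fft.ifft2(phi_hat).real
```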


Applied Optics | 2005

Atmospheric wavefront phase recovery by use of specialized hardware: graphical processing units and field-programmable gate arrays

José Gil Marichal-Hernández; Luis Fernando Rodríguez-Ramos; Fernando Rosa; J. M. Rodríguez-Ramos

To compute the wavefront phase-recovery stage of an adaptive-optics loop in real time for 32×32 or more subpupils in a Shack–Hartmann sensor, we present here, for what is to our knowledge the first time, preliminary results obtained using two innovative technologies: graphical processing units (GPUs) and field-programmable gate arrays (FPGAs). We describe the stream-computing paradigm of the GPU and adapt a zonal algorithm to take advantage of its parallel computational power. We also present preliminary results obtained with FPGAs on the same algorithm. GPUs have proved to be a promising technology, but FPGAs are already a feasible solution to adaptive-optics real-time requirements, even for a large number of subpupils.
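
Zonal reconstruction maps naturally onto parallel hardware because every grid point can be relaxed independently. Below is a generic Jacobi-relaxation sketch (slopes to phase via a discrete Poisson equation, periodic boundaries); it illustrates why this class of algorithm parallelizes well, but it is a stand-in under stated assumptions, not the specific zonal algorithm the paper ported to GPU and FPGA.

```python
import numpy as np

def zonal_reconstruct(sx, sy, iters=500):
    """Jacobi relaxation for the zonal least-squares problem:
    solve lap(phi) = div(s) from slope maps sx, sy (periodic boundaries)."""
    div = (sx - np.roll(sx, 1, axis=1)) + (sy - np.roll(sy, 1, axis=0))
    phi = np.zeros_like(div, dtype=float)
    for _ in range(iters):
        # Every point updates independently from its four neighbors,
        # which is what makes the scheme GPU/FPGA friendly.
        neighbors = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                     np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = (neighbors - div) / 4.0
    return phi - phi.mean()                    # remove unobservable piston
```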


Proceedings of SPIE | 2008

2D-FFT implementation on FPGA for wavefront phase recovery from the CAFADIS camera

J. M. Rodríguez-Ramos; E. Magdaleno Castelló; C. Domínguez Conde; M. Rodríguez Valido; José Gil Marichal-Hernández

The CAFADIS camera is a new sensor patented by Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It measures the wavefront phase and the distance to the light source simultaneously, in real time. It uses specialized hardware: Graphical Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs). Both kinds of hardware have architectures capable of handling the sensor output stream in a massively parallel fashion. FPGAs are faster than GPUs, which is why it is worth using FPGA integer arithmetic instead of GPU floating-point arithmetic. GPUs should not be dismissed, however: as we have shown in previous papers, they are efficient enough to meet the processing-time requirements of several adaptive optics (AO) problems in Extremely Large Telescopes (ELTs), and they show a widening gap in computing speed relative to CPUs, making them far more powerful for AO simulation than common software packages running on CPUs. This paper presents an FPGA implementation of the wavefront phase-recovery algorithm using the CAFADIS camera. It proceeds in two steps: estimating the telescope pupil gradients from the telescope focus image, and then performing a novel 2D-FFT on the FPGA. Processing times are compared with our GPU implementation; in effect, we compare the two kinds of arithmetic mentioned above, helping to assess the viability of FPGAs for AO in ELTs.
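
Structurally, a 2D FFT of the kind the paper implements on the FPGA is the standard row-column decomposition: a 1D FFT over every row followed by a 1D FFT over every column, which pipelines well in hardware. A floating-point NumPy sketch of that decomposition follows for orientation; the paper's FPGA version uses integer (fixed-point) arithmetic, which this sketch does not reproduce.

```python
import numpy as np

def fft2_row_column(x):
    """2D FFT via row-column decomposition: the structure an FPGA
    implementation pipelines (here in floating point, not fixed point)."""
    rows = np.fft.fft(x, axis=1)       # pass 1: one 1D FFT per row
    return np.fft.fft(rows, axis=0)    # pass 2: one 1D FFT per column

# Sanity check against the library's direct 2D FFT
img = np.random.rand(64, 64)
assert np.allclose(fft2_row_column(img), np.fft.fft2(img))
```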


Proceedings of SPIE | 2011

3D imaging and wavefront sensing with a plenoptic objective

J. M. Rodríguez-Ramos; J. P. Lüke; R. López; José Gil Marichal-Hernández; I. Montilla; J. M. Trujillo-Sevilla; Bruno Femenia; Marta Puga; M. López; J. J. Fernández-Valdivia; F. Rosa; C. Dominguez-Conde; J. C. Sanluis; Luis Fernando Rodríguez-Ramos

Plenoptic cameras have been developed in recent years as a passive method for 3D scanning. Several super-resolution algorithms have been proposed to compensate for the resolution loss associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphical Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations of the aforementioned techniques, together with two new developments: a portable plenoptic objective that transforms any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images through the refractive-index changes associated with turbulence. These changes require high-speed processing, which justifies the use of GPUs and FPGAs. Artificial sodium laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new link between the wave optics and computer vision fields, as many authors have advocated.


IEEE/OSA Journal of Display Technology | 2015

Depth From Light Fields Analyzing 4D Local Structure

J. P. Lüke; F. Rosa; José Gil Marichal-Hernández; J. C. Sanluis; C. Dominguez Conde; J. M. Rodríguez-Ramos

In this paper, we develop a local method to obtain depths from the 4D light field. In contrast to previous local depth-from-light-field methods based on EPIs, i.e., 2D slices of the light field, the proposed method takes into account the 4D nature of the light field and uses all four of its dimensions. Furthermore, our technique adapts well to parallel hardware. The performance of the method is tested against a publicly available benchmark dataset and compared with other algorithms previously tested on the same benchmark. Results show that the proposed method achieves competitive results in reasonable time.
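
To make "using all four dimensions" concrete: for a Lambertian scene the light field satisfies a linear constraint between its angular and spatial gradients, so local disparity can be estimated by least squares over all four derivatives. The sketch below is a common gradient-based illustration of that idea, not the authors' exact estimator; the (u, v, y, x) indexing and the uniform smoothing window are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lightfield_depth(L, window=5):
    """L: 4D light field indexed (u, v, y, x). Estimates a disparity map
    from the Lambertian constraint s * spatial gradient = -angular gradient,
    solved per pixel by least squares over a small spatial window."""
    Lu, Lv, Ly, Lx = np.gradient(L.astype(float))
    num = -(Lu * Lx + Lv * Ly)                 # uses all four derivatives
    den = Lx * Lx + Ly * Ly
    u0, v0 = L.shape[0] // 2, L.shape[1] // 2  # evaluate at the central view
    num = uniform_filter(num[u0, v0], window)
    den = uniform_filter(den[u0, v0], window)
    return num / (den + 1e-12)                 # disparity (depth-related slope)
```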


Proceedings of SPIE | 2012

Atmospherical wavefront phases using the plenoptic sensor (real data)

Luis Fernando Rodríguez-Ramos; I. Montilla; J. P. Lüke; R. López; José Gil Marichal-Hernández; J. M. Trujillo-Sevilla; Bruno Femenia; M. López; J. J. Fernández-Valdivia; Marta Puga; F. Rosa; J. M. Rodríguez-Ramos

Plenoptic cameras have been developed in recent years as a passive method for 3D scanning, allowing focal-stack capture from a single shot. But the data recorded by this kind of sensor can also be used to extract the wavefront phases associated with atmospheric turbulence in an astronomical observation. The terrestrial atmosphere degrades telescope images through the refractive-index changes associated with turbulence. Artificial sodium laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically, exploiting the two principal capabilities of plenoptic sensors at once: 3D scanning and wavefront sensing. Plenoptic sensors can therefore be studied and used as an alternative wavefront sensor for adaptive optics, which is particularly relevant now that Extremely Large Telescope projects are under way. In this paper, we present the first wavefront phases extracted from real astronomical observations, using both point-like and extended objects, and we show that the restored wavefronts match Kolmogorov atmospheric turbulence statistics.
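
The Kolmogorov check mentioned at the end can be made concrete: for Kolmogorov turbulence the phase structure function is D(r) = 6.88 (r/r0)^(5/3), so a recovered phase map can be validated by computing D(r) empirically and checking that the implied r0 stays roughly constant across separations. The sketch below is a generic diagnostic, not the paper's analysis code; phi_recovered is a hypothetical phase map (radians, uniform grid) standing in for real data.

```python
import numpy as np

def structure_function(phi, max_sep=32):
    """Empirical phase structure function D(r) along one axis:
    D(r) = mean over x of (phi(x + r) - phi(x))^2."""
    return np.array([np.mean((phi[:, r:] - phi[:, :-r]) ** 2)
                     for r in range(1, max_sep)])

# phi_recovered: hypothetical recovered phase map (placeholder for real data)
phi_recovered = np.random.randn(64, 64)
r = np.arange(1, 32)
D = structure_function(phi_recovered)
# Kolmogorov: D(r) = 6.88 * (r / r0)**(5/3). If the wavefront follows
# Kolmogorov statistics, the implied r0 is roughly constant over r.
r0_estimates = r * (6.88 / D) ** 0.6
```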


Journal of Electronic Imaging | 2012

Fast approximate 4-D/3-D discrete Radon transform for lightfield refocusing

José Gil Marichal-Hernández; J. P. Lüke; Fernando Rosa; J. M. Rodríguez-Ramos

We develop a new algorithm that extends the bidimensional fast digital Radon transform of Götz and Druckmüller (1996) to digitally simulate the refocusing of a 4-D lightfield into a 3-D volume of photographic planes, as previously done by Ng et al. (2005), but with the minimum number of operations. This new algorithm requires no multiplications, just sums, and its computational complexity is O(N^4) to obtain a volume of 2N photographic planes focused at different depths from an N^4 plenoptic image. This reduced complexity allows a plenoptic sequence to be acquired and processed for 3-D shape estimation at video rate. Examples are given of implementations on GPU and CPU platforms. Finally, a modified version of the algorithm is proposed to deal with domain sizes other than a power of two.
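
As a reference point for what the algorithm speeds up: refocusing at a single slope is a pure shift-and-add over the angular samples, with no multiplications for integer shifts. The naive version below recomputes every plane from scratch; the paper's contribution is a Radon-style recursion that shares partial sums across slopes so the whole 2N-plane focal stack costs O(N^4) additions in total. The (u, v, y, x) layout and integer slopes are simplifying assumptions.

```python
import numpy as np

def refocus_plane(L, s):
    """Naive shift-and-add refocusing of a 4D light field L(u, v, y, x)
    at integer slope s: additions only, one plane per call. The fast
    transform instead reuses partial sums to obtain all planes at once."""
    U, V, H, W = L.shape
    out = np.zeros((H, W), dtype=L.dtype)
    for u in range(U):
        for v in range(V):
            # Shift each angular view in proportion to its offset, then sum
            out += np.roll(L[u, v], (s * (u - U // 2), s * (v - V // 2)),
                           axis=(0, 1))
    return out
```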


IEEE/OSA Journal of Display Technology | 2015

Design and Laboratory Results of a Plenoptic Objective: From 2D to 3D With a Standard Camera

I. Montilla; Marta Puga; J. P. Lüke; José Gil Marichal-Hernández; J. M. Rodríguez-Ramos

The plenoptic camera was originally created to capture the light field, a four-variable volume representation of all rays and their directions, from which an image of the observed object can be synthesized. This method has several advantages over 3D capture systems based on stereo cameras, since it needs neither frame synchronization nor geometric and color calibration. It also has many applications, from 3DTV to medical imaging. A plenoptic camera uses a microlens array to measure the radiance and direction of all the light rays in a scene. The array is placed at a distance from the principal lens that is conjugate to the distance of the scene, and the sensor is at the focal plane of the microlenses. We have designed a plenoptic objective that incorporates a microlens array and a relay system that reimages the microlens plane. This novel approach has proven successful: mounted on a camera, the plenoptic objective creates a virtual microlens plane in front of the camera's CCD, allowing it to capture the light field of the scene. In this paper we present experimental results showing that depth information is perfectly captured when using an external plenoptic objective. This objective transforms any camera into a 3D sensor, opening up a wide range of applications from microscopy to astronomy.
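
The conjugate-plane placement the abstract describes follows from the thin-lens equation, 1/f = 1/d_o + 1/d_i. The sketch below works through that arithmetic with illustrative numbers (a 50 mm principal lens and a scene at 2 m; these values are assumptions, not the paper's design).

```python
def conjugate_distance(f, d_object):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i: distance of the image
    plane conjugate to an object at d_object, for focal length f."""
    return 1.0 / (1.0 / f - 1.0 / d_object)

# Illustrative numbers only: 50 mm principal lens, scene 2 m away.
# The microlens array sits at this conjugate plane; the sensor sits one
# microlens focal length behind the array.
d_i = conjugate_distance(0.050, 2.0)           # about 0.0513 m
print(f"microlens plane at {d_i * 1000:.1f} mm behind the principal lens")
```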


Proceedings of SPIE | 2011

Fast approximate 4D:3D discrete Radon transform, from light field to focal stack with O(N^4) sums

José Gil Marichal-Hernández; J. P. Lüke; Fernando Rosa; J. M. Rodríguez-Ramos

In this work we develop a new algorithm that extends the bidimensional fast digital Radon transform of Götz and Druckmüller (1996) to digitally simulate the refocusing of a 4D light field into a 3D volume of photographic planes, as previously done by Ng et al. (2005), but with the minimum number of operations. This new algorithm requires no multiplications, just sums, and its computational complexity is O(N^4) to obtain a volume of 2N photographic planes focused at different depths from an N^4 plenoptic image. This reduced complexity allows a plenoptic sequence to be acquired and processed for 3D shape estimation at video rate. Examples are given of implementations on GPU and CPU platforms. Finally, a modified version of the algorithm is proposed to deal with domain sizes other than a power of two.

Collaboration


Dive into José Gil Marichal-Hernández's collaboration.

Top Co-Authors

J. P. Lüke (University of La Laguna)
I. Montilla (Spanish National Research Council)
Marta Puga (Spanish National Research Council)
R. López (Spanish National Research Council)
Bruno Femenia (Spanish National Research Council)