Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where J. M. Rodríguez-Ramos is active.

Publication


Featured research published by J. M. Rodríguez-Ramos.


International Journal of Digital Multimedia Broadcasting | 2010

Near Real-Time Estimation of Super-Resolved Depth and All-In-Focus Images from a Plenoptic Camera Using Graphics Processing Units

J. P. Lüke; F. Pérez Nava; José Gil Marichal-Hernández; J. M. Rodríguez-Ramos; Fernando Rosa

Depth range cameras are a promising solution for the 3DTV production chain. The generation of color images with their accompanying depth value simplifies the transmission bandwidth problem in 3DTV and yields a direct input for autostereoscopic displays. Recent developments in plenoptic video-cameras make it possible to introduce 3D cameras that operate similarly to traditional cameras. The use of plenoptic cameras for 3DTV has some benefits with respect to 3D capture systems based on dual stereo cameras since there is no need for geometric and color calibration or frame synchronization. This paper presents a method for simultaneously recovering depth and all-in-focus images from a plenoptic camera in near real time using graphics processing units (GPUs). Previous methods for 3D reconstruction using plenoptic images suffered from the drawback of low spatial resolution. A method that overcomes this deficiency is developed on parallel hardware to obtain near real-time 3D reconstruction with a final spatial resolution of pixels. This resolution is suitable as an input to some autostereoscopic displays currently on the market and shows that real-time 3DTV based on plenoptic video-cameras is technologically feasible.
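
As a rough illustration of the idea (not the authors' GPU implementation, and with the `views`, `offsets`, and `disparities` arrays assumed for the sketch): for each candidate disparity, shift the sub-aperture views into alignment and score their agreement; the best disparity per pixel gives the depth map, and averaging the aligned views there gives the all-in-focus image.

```python
import numpy as np

def depth_and_all_in_focus(views, offsets, disparities):
    """Brute-force depth and all-in-focus from sub-aperture views.

    views:       (N, H, W) grayscale sub-aperture images
    offsets:     (N, 2) pupil positions of the views relative to the center
    disparities: candidate disparities, in pixels per unit offset
    """
    N, H, W = views.shape
    best_cost = np.full((H, W), np.inf)
    depth = np.zeros((H, W))
    all_in_focus = np.zeros((H, W))
    for d in disparities:
        aligned = np.empty_like(views, dtype=np.float64)
        for i, (dy, dx) in enumerate(offsets):
            # Shift each view toward the center view by d times its offset.
            shift = (int(round(d * dy)), int(round(d * dx)))
            aligned[i] = np.roll(views[i], shift, axis=(0, 1))
        mean = aligned.mean(axis=0)
        cost = ((aligned - mean) ** 2).mean(axis=0)  # low variance: views agree
        better = cost < best_cost
        best_cost[better] = cost[better]
        depth[better] = d
        all_in_focus[better] = mean[better]
    return depth, all_in_focus
```

Super-resolution, which the paper also performs, would operate on sub-pixel shifts instead of the whole-pixel `np.roll` used here.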


Journal of Electronic Imaging | 2007

Modal Fourier wavefront reconstruction using graphics processing units

José Gil Marichal-Hernández; J. M. Rodríguez-Ramos; Fernando Rosa

Large degree-of-freedom, real-time adaptive optics control requires reconstruction algorithms that are computationally efficient and readily parallelized for hardware implementation. Poyneer et al. [J. Opt. Soc. Am. A 19, 2100–2111 (2002)] have shown that wavefront reconstruction using the fast Fourier transform (FFT) and spatial filtering is computationally tractable and sufficiently accurate for use in large Shack–Hartmann-based adaptive optics systems (up to 10,000 actuators). We show here that by using graphical processing units (GPUs), specialized hardware capable of performing FFTs on large sequences almost five times faster than a high-end CPU, a problem of up to 50,000 actuators can already be solved within a 6-ms limit. We describe how to adapt the FFT efficiently to the underlying GPU architecture.
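
A compact numpy sketch of this kind of Fourier-domain reconstructor (Hudgin-style forward differences on a periodic grid; the boundary handling and spatial filtering of Poyneer et al. are omitted, so this is an assumption-laden toy rather than the paper's GPU code):

```python
import numpy as np

def fft_wavefront_reconstruct(sx, sy):
    """Least-squares wavefront phase from x/y slope maps via the 2D FFT."""
    n = sx.shape[0]                       # assume square n x n slope grids
    k = np.fft.fftfreq(n)                 # spatial frequencies, cycles/sample
    kx, ky = np.meshgrid(k, k)            # kx varies along columns (x)
    Dx = np.exp(2j * np.pi * kx) - 1.0    # transfer function of forward diff in x
    Dy = np.exp(2j * np.pi * ky) - 1.0
    Sx, Sy = np.fft.fft2(sx), np.fft.fft2(sy)
    denom = np.abs(Dx) ** 2 + np.abs(Dy) ** 2
    denom[0, 0] = 1.0                     # avoid dividing by zero at DC
    Phi = (np.conj(Dx) * Sx + np.conj(Dy) * Sy) / denom
    Phi[0, 0] = 0.0                       # piston is unobservable from slopes
    return np.fft.ifft2(Phi).real
```

The appeal for GPUs is that the entire reconstruction reduces to two forward FFTs, a pointwise complex multiply, and one inverse FFT.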


Applied Optics | 2005

Atmospheric wavefront phase recovery by use of specialized hardware: graphical processing units and field-programmable gate arrays

José Gil Marichal-Hernández; Luis Fernando Rodríguez-Ramos; Fernando Rosa; J. M. Rodríguez-Ramos

To compute the wavefront phase-recovery stage of an adaptive-optics loop in real time for 32 x 32 or more subpupils in a Shack-Hartmann sensor, we present here, for what is to our knowledge the first time, preliminary results obtained using two innovative techniques: graphical processing units (GPUs) and field-programmable gate arrays (FPGAs). We describe the stream-computing paradigm of the GPU and adapt a zonal algorithm to take advantage of its parallel computational power. We also present preliminary results obtained by running the same algorithm on FPGAs. GPUs have proved to be a promising technique, but FPGAs are already a feasible solution to adaptive-optics real-time requirements, even for a large number of subpupils.
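
The zonal approach, by contrast with the Fourier method above, works with local finite-difference relations, which is what makes it a natural fit for the stream-computing model described in the paper. A minimal, assumption-heavy sketch (Jacobi iteration on a periodic grid, not the exact algorithm ported to the GPU):

```python
import numpy as np

def zonal_reconstruct(sx, sy, iters=500):
    """Iterative zonal least-squares phase estimate from slope maps.

    Each sweep updates every pixel from its four neighbours only, so all
    pixels can be updated in parallel -- the property that maps well onto
    GPUs and FPGAs.
    """
    n = sx.shape[0]
    phi = np.zeros((n, n))
    # Discrete divergence of the measured slope field (forward differences).
    div = np.zeros((n, n))
    div[:, 1:] += sx[:, 1:] - sx[:, :-1]
    div[1:, :] += sy[1:, :] - sy[:-1, :]
    for _ in range(iters):
        nb = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
              np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = (nb - div) / 4.0            # Jacobi update for discrete Poisson
        phi -= phi.mean()                 # pin down the unobservable piston
    return phi
```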


Proceedings of SPIE | 2008

2D-FFT implementation on FPGA for wavefront phase recovery from the CAFADIS camera

J. M. Rodríguez-Ramos; E. Magdaleno Castelló; C. Domínguez Conde; M. Rodríguez Valido; José Gil Marichal-Hernández

The CAFADIS camera is a new sensor patented by Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can measure the wavefront phase and the distance to the light source at the same time in a real-time process. It uses specialized hardware: Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Both kinds of electronic hardware present an architecture capable of handling the sensor output stream in a massively parallel approach. FPGAs are faster than GPUs, which is why it is worth using FPGA integer arithmetic instead of GPU floating-point arithmetic. GPUs must not be forgotten: as we have shown in previous papers, they are efficient enough to solve several AO problems for Extremely Large Telescopes (ELTs) in terms of processing-time requirements, and they show a widening gap in computing speed relative to CPUs. They are also much more powerful for implementing AO simulation than common software packages running on CPUs. This paper shows an FPGA implementation of the wavefront phase-recovery algorithm using the CAFADIS camera. This is done in two steps: the estimation of the telescope pupil gradients from the telescope focus image, and then a novel 2D-FFT on the FPGA. Processing times are compared with our GPU implementation. In effect, this is a comparison between the two arithmetics mentioned above, helping to assess the viability of FPGAs for AO in ELTs.
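
The integer-versus-floating-point trade-off mentioned above boils down to quantization error. A toy numpy experiment (it only quantizes the input to Q15 fixed point, whereas a real FPGA pipeline also quantizes every butterfly stage) gives a first feel for the cost of integer arithmetic:

```python
import numpy as np

Q = 15  # Q15 fixed point: 1 sign bit, 15 fractional bits

def to_fixed(x):
    return np.round(x * (1 << Q)).astype(np.int32)

def from_fixed(x):
    return x.astype(np.float64) / (1 << Q)

rng = np.random.default_rng(0)
frame = rng.uniform(-1, 1, size=(32, 32))              # toy sensor frame
spec_float = np.fft.fft2(frame)                        # float arithmetic
spec_fixed = np.fft.fft2(from_fixed(to_fixed(frame)))  # after Q15 quantization
print("max spectral error from input quantization:",
      np.abs(spec_float - spec_fixed).max())
```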


Proceedings of SPIE | 2009

The plenoptic camera as a wavefront sensor for the European Solar Telescope (EST)

Luis Fernando Rodríguez-Ramos; Y. Martín; J. J. Díaz; J. Piqueras; J. M. Rodríguez-Ramos

The plenoptic wavefront sensor combines measurements at the pupil and image planes to obtain wavefront information from different points of view simultaneously, and it can sample the volume above the telescope to extract tomographic information about the atmospheric turbulence. After describing the working principle, we use a laboratory setup to verify the sensor's ability to measure the pupil-plane wavefront. A comparative discussion with respect to other wavefront sensors is also included.
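
The working principle can be pictured as a pixel rearrangement: every pixel position under a microlens, gathered across the whole microlens array, forms one view of the pupil. A minimal sketch, assuming square p x p microlens images and a monochrome sensor:

```python
import numpy as np

def plenoptic_to_views(frame, p):
    """Split a raw plenoptic frame into its per-viewpoint (sub-pupil) images.

    frame: 2D sensor image whose microlens images are p x p pixel tiles
    Returns shape (p, p, H // p, W // p): views[u, v] is the image seen
    from pupil position (u, v).
    """
    H, W = frame.shape
    tiles = frame[:H - H % p, :W - W % p].reshape(H // p, p, W // p, p)
    return tiles.transpose(1, 3, 0, 2)
```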


Proceedings of SPIE | 2008

Wavefront and distance measurement using the CAFADIS camera

J. M. Rodríguez-Ramos; B. Femenía Castellá; F. Pérez Nava; S. Fumero

The CAFADIS camera is a new sensor patented by Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can measure the wavefront phase and the distance to the light source at the same time in a real-time process. This could be really useful when using Adaptive Optics with Laser Guide Stars, in order to track the LGS height variations during the observation, or even the 3D LGS profile at the Na layer. The CAFADIS camera has been designed using specialized hardware: Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Both kinds of electronic hardware present an architecture capable of handling the sensor output stream in a massively parallel approach, and previous papers have shown their suitability for AO in ELTs. CAFADIS consists essentially of a microlens array placed at the telescope image space, sampling the image instead of the telescope pupil. Conceptually, when only 2x2 microlenses are present it is very similar to the pyramid sensor, but this optical design can also be used to measure distances in the object space using a variety of techniques. This paper shows a simulation of an observation using Na-LGS and Rayleigh-LGS at the same time, where both LGS heights are accurately measured. The employed techniques are presented and future applications are introduced.
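
Stereo triangulation between two sub-aperture views is one simple instance of that "variety of techniques" for distance measurement. In the sketch below every number and parameter is purely illustrative; a real LGS height estimate needs the full telescope prescription and sub-pixel disparity measurement:

```python
def source_distance(disparity_px, pixel_pitch_m, baseline_m, focal_m):
    """Thin-lens stereo estimate of the distance to a source.

    disparity_px:  image shift between the two views, in pixels
    pixel_pitch_m: sensor pixel size
    baseline_m:    separation of the two sub-apertures in the pupil
    focal_m:       effective focal length
    """
    return focal_m * baseline_m / (disparity_px * pixel_pitch_m)

# Made-up values chosen so the answer lands near the Na layer:
print("%.1f km" % (source_distance(75, 10e-6, 4.0, 17.0) / 1e3))  # ~90.7 km
```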


Proceedings of SPIE | 2011

3D imaging and wavefront sensing with a plenoptic objective

J. M. Rodríguez-Ramos; J. P. Lüke; R. López; José Gil Marichal-Hernández; I. Montilla; J. M. Trujillo-Sevilla; Bruno Femenia; Marta Puga; M. López; J. J. Fernández-Valdivia; F. Rosa; C. Dominguez-Conde; J. C. Sanluis; Luis Fernando Rodríguez-Ramos

Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several super-resolution algorithms have been proposed to compensate for the resolution loss associated with lightfield acquisition through a microlens array, and a number of multiview stereo algorithms have been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper we present our own implementations of the aforementioned aspects, together with two new developments: a portable plenoptic objective that turns any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images through the refractive-index changes associated with turbulence. Correcting these changes requires high-speed processing, which justifies the use of GPUs and FPGAs. Artificial sodium Laser Guide Stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct this defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and contribute a new link between the wave optics and computer vision fields, as many authors have called for.
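
The defocus correction for a finite-distance guide star can be visualized with the standard shift-and-sum refocusing operation on the captured lightfield; a sketch reusing the `views`/`offsets` layout assumed earlier (whole-pixel shifts only, so purely illustrative):

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Synthetically refocus a lightfield onto another plane.

    Shift each sub-aperture view in proportion to its pupil offset and
    average; alpha selects the refocus plane (alpha = 0 keeps the native
    focus, matching the classic shift-and-sum formulation).
    """
    out = np.zeros(views[0].shape, dtype=np.float64)
    for view, (dy, dx) in zip(views, offsets):
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        out += np.roll(view, shift, axis=(0, 1))
    return out / len(views)
```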


Sensors | 2009

An Efficient Pipeline Wavefront Phase Recovery for the CAFADIS Camera for Extremely Large Telescopes

Eduardo Magdaleno; Manuel Rodriguez; J. M. Rodríguez-Ramos

In this paper we show a fast, specialized hardware implementation of the wavefront phase-recovery algorithm using the CAFADIS camera. The CAFADIS camera is a new plenoptic sensor patented by the Universidad de La Laguna (Canary Islands, Spain): international patent PCT/ES2007/000046 (WIPO publication number WO/2007/082975). It can simultaneously measure the wavefront phase and the distance to the light source in a real-time process. The pipeline algorithm is implemented using Field Programmable Gate Arrays (FPGAs). These devices present an architecture capable of handling the sensor output stream in a massively parallel approach, and they are efficient enough to solve several Adaptive Optics (AO) problems in Extremely Large Telescopes (ELTs) in terms of processing-time requirements. The FPGA implementation of the wavefront phase-recovery algorithm using the CAFADIS camera is based on the very fast computation of two-dimensional fast Fourier transforms (FFTs). We have therefore carried out a comparison between our novel FPGA 2D-FFT and other implementations.
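
The 2D FFT is separable, and that separability is what makes a pipelined FPGA design natural: a 1D FFT core streams over the rows while a transpose buffer feeds the columns through the same (or a second) core. A short numpy demonstration of the decomposition itself:

```python
import numpy as np

def fft2_row_column(frame):
    """2D FFT computed as two passes of 1D FFTs: rows, then columns."""
    rows = np.fft.fft(frame, axis=1)   # pass 1: FFT of every row
    return np.fft.fft(rows, axis=0)    # pass 2: FFT of every column

x = np.random.default_rng(1).standard_normal((64, 64))
assert np.allclose(fft2_row_column(x), np.fft.fft2(x))
```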


IEEE/OSA Journal of Display Technology | 2015

Depth From Light Fields Analyzing 4D Local Structure

J. P. Lüke; F. Rosa; José Gil Marichal-Hernández; J. C. Sanluis; C. Dominguez Conde; J. M. Rodríguez-Ramos

In this paper, we develop a local method to obtain depth from the 4D light field. In contrast to previous local depth-from-light-field methods based on EPIs, i.e., 2D slices of the light field, the proposed method takes into account the 4D nature of the light field and uses all four of its dimensions. Furthermore, our technique adapts well to parallel hardware. The performance of the method is tested against a publicly available benchmark dataset and compared with other algorithms that have previously been tested with the same benchmark. Results show that the proposed method achieves competitive results in reasonable time.
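
One way to use all four dimensions at once (a hedged sketch, not the paper's estimator): a Lambertian point at disparity d makes the light field constant along the lines s = s0 + d·u and t = t0 + d·v, so d·L_s + L_u ≈ 0 and d·L_t + L_v ≈ 0, and both constraints can be solved jointly in least squares over a small window:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_from_4d_lightfield(L, win=5):
    """Local disparity from a 4D light field L[u, v, s, t]."""
    Lu, Lv, Ls, Lt = np.gradient(L)            # derivatives along all 4 axes
    num = uniform_filter(Lu * Ls + Lv * Lt, size=win)
    den = uniform_filter(Ls ** 2 + Lt ** 2, size=win)
    d = -num / np.maximum(den, 1e-12)          # windowed least-squares slope
    u0, v0 = L.shape[0] // 2, L.shape[1] // 2
    return d[u0, v0]                           # disparity map at central view
```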


Optical Engineering | 2013

Tip-tilt restoration of a segmented optical mirror using a geometric sensor

J. J. Fernández-Valdivia; Alberto Lastra Sedano; Sergio Chueca; Javier Sanz Gil; J. M. Rodríguez-Ramos

We present a geometric sensor that restores the local tip-tilt of a segmented surface using the Van Dam and Lane algorithm [M. A. van Dam and R. G. Lane, Appl. Opt. 41(26), 5497–5502 (2002)]. The paper also presents an implementation of this algorithm on graphical processing units as specialized hardware. This CUDA (compute unified device architecture) implementation achieves real-time results within the stability time of the atmosphere for resolutions of up to 1024×1024 pixels.
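
The sketch below is not the Van Dam and Lane algorithm itself; it only illustrates the simpler quantity the sensor ultimately delivers, each segment's local tip-tilt, as the centroid shift of that segment's sub-image (segment bounds and units are assumptions):

```python
import numpy as np

def segment_tip_tilt(img, bounds):
    """Centroid offset of one mirror segment's sub-image.

    img:    intensity image
    bounds: (y0, y1, x0, x1) pixel bounds of the segment's sub-image
    Returns the (dy, dx) centroid offset from the sub-image center, which
    is proportional to the segment's local wavefront tip/tilt.
    """
    y0, y1, x0, x1 = bounds
    sub = img[y0:y1, x0:x1].astype(np.float64)
    ys, xs = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
    total = sub.sum()
    dy = (ys * sub).sum() / total - (sub.shape[0] - 1) / 2
    dx = (xs * sub).sum() / total - (sub.shape[1] - 1) / 2
    return dy, dx
```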

Collaboration


Dive into J. M. Rodríguez-Ramos's collaborations.

Top Co-Authors

J. P. Lüke, University of La Laguna
Marta Puga, Spanish National Research Council
I. Montilla, Spanish National Research Council
Alejandro Oscoz, Spanish National Research Council
R. Rebolo, Spanish National Research Council