Adrián Dorado
University of Valencia
Publications
Featured research published by Adrián Dorado.
Applied Optics | 2014
Manuel Martínez-Corral; Adrián Dorado; H. Navarro; Genaro Saavedra; Bahram Javidi
The original aim of the integral-imaging concept, reported by Gabriel Lippmann more than a century ago, is the capture of images of 3D scenes for their projection onto an autostereoscopic display. In this paper we report a new algorithm for the efficient generation of microimages for direct projection onto an integral-imaging monitor. Like our previous algorithm, the smart pseudoscopic-to-orthoscopic conversion (SPOC) algorithm, this algorithm produces microimages ready for 3D display with full parallax. However, the new algorithm is much simpler than the previous one, produces microimages free of black pixels, and permits fixing at will, between certain limits, the reference plane and the field of view of the displayed 3D scene. Proofs of concept are illustrated with 3D capture and 3D display experiments.
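The core remapping behind this family of algorithms can be illustrated with a toy example: each microimage collects, from every captured elemental image, the pixel lying at the microimage's own index, which amounts to a transposition of lens and pixel indices. The sketch below shows that generic remapping only, not the authors' algorithm; the array shapes and function name are illustrative.

```python
import numpy as np

def elemental_to_microimages(ei):
    """Remap an array of elemental images into microimages by pixel
    transposition: microimage (u, v) collects, from every elemental
    image, the pixel at position (u, v).

    ei      -- 4-D array (n, n, p, p): n x n elemental images of
               p x p pixels each
    returns -- 4-D array (p, p, n, n) of microimages
    """
    # Swap the lens indices with the pixel indices.
    return np.transpose(ei, (2, 3, 0, 1))

# Toy example: 4 x 4 elemental images of 8 x 8 pixels each
ei = np.random.rand(4, 4, 8, 8)
mi = elemental_to_microimages(ei)
```

In this toy model, pixel (3, 5) of elemental image (1, 2) ends up as pixel (1, 2) of microimage (3, 5).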
IEEE/OSA Journal of Display Technology | 2013
H. Navarro; Emilio Sánchez-Ortiga; Genaro Saavedra; Anabel Llavador; Adrián Dorado; Manuel Martínez-Corral; Bahram Javidi
We evaluate the lateral resolution in reconstructed integral images. Our analysis takes into account both the diffraction effects in the image-capture stage and the lack of homogeneity and isotropy in the reconstruction stage. We have used Monte Carlo simulation to assign a resolution-limit value to any reconstruction plane, and have modeled the resolution behavior: although in general the resolution limit increases proportionally to the distance to the lens array, there are some periodically distributed singularity planes. The phenomenon is supported by experiments.
IEEE/OSA Journal of Display Technology | 2015
Seokmin Hong; Dong-Hak Shin; Byung-Gook Lee; Adrián Dorado; Genaro Saavedra; Manuel Martínez-Corral
We report a new procedure for the capture and processing of light proceeding from 3D scenes of some cubic meters in size. Specifically, we demonstrate that, with the information provided by a Kinect device, it is possible to generate an array of microimages ready for projection onto an integral-imaging monitor. We illustrate our proposal with imaging experiments in which the final results are 3D images displayed with full parallax.
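The general idea can be sketched by projecting a colored point cloud (such as one delivered by an RGB-D sensor) through a virtual pinhole array, keeping the nearest point per pixel with a simple z-buffer. All parameter names, geometry conventions, and values below are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def rgbd_to_integral(points, colors, grid=(3, 3), pitch=5.0, gap=10.0, res=16):
    """Project a colored 3-D point cloud through a virtual pinhole
    array to synthesize an array of elemental images.

    points -- (N, 3) array of (x, y, z) positions, z > 0
    colors -- (N, 3) array of RGB values in [0, 1]
    grid   -- number of pinholes (rows, cols); pitch -- pinhole spacing
    gap    -- pinhole-to-sensor distance; res -- pixels per elemental image
    """
    ny, nx = grid
    out = np.zeros((ny, nx, res, res, 3))
    zbuf = np.full((ny, nx, res, res), np.inf)   # z-buffer for occlusions
    for iy in range(ny):
        for ix in range(nx):
            # pinhole position in the array plane (centered grid)
            px = (ix - (nx - 1) / 2) * pitch
            py = (iy - (ny - 1) / 2) * pitch
            # perspective projection onto the sensor plane behind the pinhole
            u = (points[:, 0] - px) * gap / points[:, 2]
            v = (points[:, 1] - py) * gap / points[:, 2]
            ui = np.round(u / pitch * res + res / 2).astype(int)
            vi = np.round(v / pitch * res + res / 2).astype(int)
            ok = (ui >= 0) & (ui < res) & (vi >= 0) & (vi < res)
            for k in np.nonzero(ok)[0]:
                if points[k, 2] < zbuf[iy, ix, vi[k], ui[k]]:
                    zbuf[iy, ix, vi[k], ui[k]] = points[k, 2]
                    out[iy, ix, vi[k], ui[k]] = colors[k]
    return out

# Toy scene: a red point on the optical axis and a green point off-axis
pts = np.array([[0.0, 0.0, 100.0], [10.0, 0.0, 150.0]])
cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
ei = rgbd_to_integral(pts, cols)
```

Each pinhole sees the scene from a slightly different position, so the toy points land at different pixels in each elemental image, which is what encodes the parallax.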
Proceedings of SPIE | 2012
H. Navarro; Adrián Dorado; Genaro Saavedra; Anabel Llavador; Manuel Martínez-Corral; Bahram Javidi
We present an analysis and comparison of the lateral and depth resolution in the reconstruction of 3D scenes from images obtained either with a classical two-view stereoscopic camera or with an Integral Imaging (InI) pickup setup. Since both systems belong to the general class of multiview imaging systems, the best analytical tools for the calculation of lateral and depth resolution are the ray-space formalism and the classical tools of Fourier information processing. We demonstrate that InI is the optimum system for sampling the spatio-angular information contained in a 3D scene.
Proceedings of SPIE | 2014
Manuel Martínez-Corral; Adrián Dorado; Hector Navarro; Anabel Llavador; Genaro Saavedra; Bahram Javidi
Plenoptic cameras capture a sampled version of the map of rays emitted by a 3D scene, commonly known as the lightfield. These devices have been proposed for multiple applications, such as calculating different sets of views of the 3D scene, removing occlusions, and changing the focused plane of the scene. They can also capture images that can be projected onto an integral-imaging monitor to display 3D images with full parallax. In this contribution, we report a new algorithm for transforming the plenoptic image in order to choose which part of the 3D scene is reconstructed in front of and behind the microlenses in the 3D display process.
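Choosing the reconstruction plane of a lightfield is commonly illustrated with shift-and-sum refocusing: shifting each perspective view in proportion to its position in the array and averaging brings the chosen depth into focus. The sketch below shows that generic operation under assumed array conventions; it is not the algorithm reported in the paper.

```python
import numpy as np

def refocus(views, slope):
    """Shift-and-sum refocusing of a lightfield.

    views -- 4-D array (ny, nx, h, w) of grayscale perspective views
    slope -- shift in pixels per unit view index; selects the plane
             brought into focus (and hence what ends up in front of
             or behind the display reference plane)
    """
    ny, nx, h, w = views.shape
    acc = np.zeros((h, w))
    for iy in range(ny):
        for ix in range(nx):
            # shift proportional to the view's offset from the array center
            dy = int(round((iy - (ny - 1) / 2) * slope))
            dx = int(round((ix - (nx - 1) / 2) * slope))
            acc += np.roll(views[iy, ix], (dy, dx), axis=(0, 1))
    return acc / (ny * nx)

# Toy lightfield: 3 x 3 identical flat views
views = np.ones((3, 3, 8, 8))
out = refocus(views, 1.5)
```

Points lying on the selected plane add up coherently across views, while points at other depths are averaged over shifted positions and blur out.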
Proceedings of the IEEE | 2017
Manuel Martínez-Corral; Adrián Dorado; Juan Carlos Barreiro; Genaro Saavedra; Bahram Javidi
The capture and display of images of 3-D scenes under incoherent and polychromatic illumination is currently a hot topic of research, due to its broad applications in bioimaging, industrial procedures, military and surveillance, and even in the entertainment industry. In this context, Integral Imaging (InI) is a very competitive technology due to its capacity for recording with a single exposure the spatial-angular information of light rays emitted by the 3-D scene. From this information, it is possible to calculate and display a collection of horizontal and vertical perspectives with high depth of field. It is also possible to calculate the irradiance of the original scene at different depths, even when these planes are partially occluded or even immersed in a scattering medium. In this paper, we describe the fundamentals of InI and the main contributions to its development. We also focus our attention on recent advances of the InI technique. Specifically, the application of the InI concept to microscopy is analyzed, and the achievements in resolution and depth of field are explained. In a different context, we also present recent advances in the capture of large scenes. Progress in the algorithms for the calculation of displayable 3-D images and in the implementation of setups for 3-D displays is reviewed.
IEEE/OSA Journal of Display Technology | 2016
Adrián Dorado; Manuel Martínez-Corral; Genaro Saavedra; Seokmin Hong
Integral photography is an auto-stereoscopic technique that allows, among other interesting applications, the display of 3D images with full parallax and avoids the painful effects of the accommodation-convergence conflict. Currently, one of the main drawbacks of this technology is the need for a huge amount of data, which has to be stored and transmitted. This is due to the fact that behind every visual resolution unit, i.e., behind any microlens of an integral-photography monitor, between 100 and 300 pixels should appear. In this paper, we make use of an updated version of our algorithm, SPOC 2.0, to alleviate this situation. We propose the application of SPOC 2.0 to the calculation of complete 3D traveling sequences from a single integral photograph. Specifically, our method permits the generation of a sequence of 3D images that simulates the traveling 3D frames captured by a non-static cameraman. In the traveling sequence, we can fix at will, for every frame, the size and position of the field of view and the parts of the scene that are displayed in front of or behind the monitor. Our research is illustrated with experiments in which we generate and display a full traveling sequence.
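The idea of computing a traveling sequence from a single capture can be caricatured as sliding a fixed-size window of microlenses across the microimage array, one window position per frame, so that the field of view moves from frame to frame. This is a toy sketch under assumed shapes, not SPOC 2.0 itself.

```python
import numpy as np

def traveling_frames(mi, win, n_frames):
    """Generate a 'traveling' sequence from a single microimage array
    by sliding a window of microlenses across it horizontally,
    mimicking a cameraman moving laterally.

    mi  -- 4-D array (Ny, Nx, p, p) of microimages
    win -- window size in microlenses, (wy, wx)
    """
    Ny, Nx, p, _ = mi.shape
    wy, wx = win
    frames = []
    for t in range(n_frames):
        # slide the window from the left edge to the right edge
        x0 = int(round(t * (Nx - wx) / max(n_frames - 1, 1)))
        frames.append(mi[:wy, x0:x0 + wx])
    return frames

# Toy microimage array: 10 x 20 microlenses of 4 x 4 pixels
mi = np.random.rand(10, 20, 4, 4)
frames = traveling_frames(mi, win=(8, 8), n_frames=5)
```

Each frame is itself a valid microimage array, ready for display; in the real method the window's size and depth placement are also adjustable per frame.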
Journal of information and communication convergence engineering | 2015
Adrián Dorado; Genaro Saavedra; Jorge Sola-Pikabea; Manuel Martínez-Corral
Enlarging the horizontal viewing angle is an important feature of integral imaging monitors. Thus far, the horizontal viewing angle has been enlarged in different ways, such as by changing the size of the elemental images or by tilting the lens array in the capture and reconstruction stages. However, these methods are limited by the microlenses used in the capture stage and by the fact that the images obtained cannot be easily projected into different displays. In this study, we upgrade our previously reported method, called SPOC 2.0. In particular, our new approach, which can be called SPOC 2.1, enlarges the viewing angle by increasing the density of the elemental images in the horizontal direction and by an appropriate application of our transformation and reshape algorithm. To illustrate our approach, we have calculated some high-viewing angle elemental images and displayed them on an integral imaging monitor.
IEEE/OSA Journal of Display Technology | 2016
Seokmin Hong; Adrián Dorado; Genaro Saavedra; Juan Carlos Barreiro; Manuel Martínez-Corral
We exploit the Kinect's capacity to pick up a dense depth map in order to display static three-dimensional (3D) images with full parallax. This is done by using the IR and RGB cameras of the Kinect. From the depth map and the RGB information, we are able to obtain an integral image after projecting the information through a virtual pinhole array. The integral image is displayed on our integral-imaging monitor, which provides the observer with horizontal and vertical perspectives of big 3D scenes. However, due to the Kinect depth-acquisition procedure, many depthless regions appear in the captured depth map. These holes spread to the generated integral image, reducing its quality. To solve this drawback, we propose both an optimized camera-calibration technique and an improved hole-filtering algorithm. To verify our method, we performed an experiment in which we generated and displayed the integral image of a room-size 3D scene.
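A common baseline for filling depthless pixels is iterative diffusion from valid neighbors. The sketch below implements that simple stand-in (each hole repeatedly takes the mean of its valid 4-neighbors); it is a hypothetical baseline, not the improved hole-filtering algorithm the paper proposes.

```python
import numpy as np

def fill_depth_holes(depth, invalid=0.0, max_iter=50):
    """Fill depthless pixels (holes) in a sensor depth map.

    Invalid pixels (value == `invalid`) are replaced, iteratively,
    with the mean of their valid 4-neighbors until no hole remains or
    `max_iter` passes are done.
    """
    d = depth.astype(float).copy()
    for _ in range(max_iter):
        mask = d != invalid
        if mask.all():
            break
        # per-pixel sum and count of valid 4-neighbors, via padding
        vals = np.pad(np.where(mask, d, 0.0), 1)
        cnt = np.pad(mask.astype(float), 1)
        nb_sum = (vals[:-2, 1:-1] + vals[2:, 1:-1] +
                  vals[1:-1, :-2] + vals[1:-1, 2:])
        nb_cnt = (cnt[:-2, 1:-1] + cnt[2:, 1:-1] +
                  cnt[1:-1, :-2] + cnt[1:-1, 2:])
        fillable = (~mask) & (nb_cnt > 0)
        if not fillable.any():
            break
        d[fillable] = nb_sum[fillable] / nb_cnt[fillable]
    return d

# Toy depth map with one hole in the middle
d = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
filled = fill_depth_holes(d)
```

Holes grow inward from their boundary, so even large depthless regions are filled after a few iterations, at the cost of smoothing real depth edges, which is why the paper pairs hole filtering with careful calibration.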
Journal of information and communication convergence engineering | 2014
H. Navarro; Adrián Dorado; Genaro Saavedra; Manuel Martínez-Corral
Here, we present a review of the proposals and advances in the field of three-dimensional (3D) image acquisition and display made in the last century. The most popular techniques are based on the concept of stereoscopy. However, stereoscopy does not provide a real 3D experience and produces discomfort due to the conflict between convergence and accommodation. For this reason, we focus this paper on integral imaging, a technique that permits the codification of 3D information in an array of 2D images obtained from different perspectives. When this array of elemental images is placed in front of an array of microlenses, the perspectives are integrated, producing 3D images with full parallax and free of the convergence-accommodation conflict. In the paper, we describe the principles of this technique, together with some new applications of integral imaging.