Publication


Featured research published by Julia R. Alonso.


Applied Optics | 2016

Reconstruction of perspective shifts and refocusing of a three-dimensional scene from a multi-focus image stack.

Julia R. Alonso; Ariel Fernández; José A. Ferrari

The convergence of optical image acquisition and image processing algorithms is a fast-evolving interdisciplinary research field focused on the reconstruction of images with novel features of interest. We propose a method for post-capture perspective-shift reconstruction (in the x, y, and z directions) of a three-dimensional scene, as well as refocusing with apertures of arbitrary shape and size, from a multi-focus image stack. The approach is based on the reorganization of the acquired visual information under a depth-variant point-spread function, which allows it to be applied to strongly defocused multi-focus image stacks. The method does not require estimating a depth map or segmenting the in-focus regions. A conventional camera combined with an electrically tunable lens is used for image acquisition, and no scale transformation or registration between the acquired images is needed. Experimental results for both real and synthetic images are provided and compared with state-of-the-art schemes.
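As a rough illustration of the general idea of recombining a focal stack after capture (this is only a toy, not the depth-variant-PSF reconstruction described above), the sketch below refocuses by taking a weighted average of the registered slices, with weights peaked around a chosen focal plane, and emulates a viewpoint shift by translating each slice in proportion to its index. All names and parameters are illustrative assumptions.

```python
import numpy as np

def toy_refocus_and_shift(stack, target_idx, shift_px=0.0, sigma=1.0):
    """Toy post-capture refocus / viewpoint shift from a focal stack.

    stack      : (K, H, W) registered grayscale slices, slice k focused
                 at the k-th focusing distance (near to far).
    target_idx : (possibly fractional) index of the desired focal plane.
    shift_px   : horizontal shift applied per slice index; a crude proxy
                 for displacing a synthetic pinhole, assuming parallax
                 grows with the slice index.
    sigma      : width of the Gaussian weighting over slice indices.
    """
    K = stack.shape[0]
    idx = np.arange(K, dtype=float)
    # Gaussian weights over the stack dimension select the focal plane.
    w = np.exp(-0.5 * ((idx - target_idx) / sigma) ** 2)
    w /= w.sum()
    out = np.zeros(stack.shape[1:])
    for k in range(K):
        # Translate each slice in proportion to its index before blending.
        out += w[k] * np.roll(stack[k], int(round(shift_px * k)), axis=1)
    return out

rng = np.random.default_rng(0)
stack = rng.random((5, 64, 64))   # stand-in for an acquired multi-focus stack
img = toy_refocus_and_shift(stack, target_idx=2.0, shift_px=1.0)
print(img.shape)                  # (64, 64)
```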


Optics Letters | 2015

All-in-focus image reconstruction under severe defocus.

Julia R. Alonso; Ariel Fernández; Gastón A. Ayubi; José A. Ferrari

Limited depth-of-focus is a problem in many fields of optics, e.g., microscopy and macro-photography. We propose a new physically based method with a space-variant point spread function (PSF) to accomplish all-in-focus reconstruction (image fusion) from a multi-focus image sequence and thereby extend the depth-of-field. The proposed method works well under strong defocus conditions for color image stacks of arbitrary length. Experimental results demonstrate that our method outperforms state-of-the-art image fusion algorithms under strong defocus on both synthetic and real images.
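For comparison, a common baseline for multi-focus fusion (not the space-variant-PSF method described above, which is precisely designed to cope with strong defocus) keeps, at each pixel, the slice with the highest local sharpness. The sketch below uses a locally averaged squared-Laplacian focus measure; names and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def sharpness_fusion(stack, win=9):
    """Baseline all-in-focus fusion by per-pixel sharpness selection.

    stack : (K, H, W) registered grayscale multi-focus stack.
    win   : window size for locally averaging the focus measure.
    """
    # Focus measure: locally averaged squared Laplacian of each slice.
    fm = np.stack([uniform_filter(laplace(s) ** 2, size=win) for s in stack])
    best = np.argmax(fm, axis=0)          # index of the sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

rng = np.random.default_rng(1)
stack = rng.random((4, 64, 64))           # stand-in for a multi-focus stack
fused = sharpness_fusion(stack)
print(fused.shape)                        # (64, 64)
```

Selection-based baselines of this kind tend to fail near object borders under strong defocus, which is the regime the method above targets.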


Optics Letters | 2012

Color encoding of binary fringes for gamma correction in 3-D profiling.

Gastón A. Ayubi; J. Matías Di Martino; Julia R. Alonso; Ariel Fernández; Jorge L. Flores; José A. Ferrari

In three-dimensional profiling by sinusoidal fringe projection with phase-shifting (PSI) algorithms, the projected fringes are distorted by the nonlinear response of digital cameras and commercial video projectors. To solve this problem, we present a fringe-generation technique that consists of projecting and acquiring a temporal sequence of strictly binary color patterns, whose (adequately weighted) average leads to sinusoidal fringe patterns with the required number of bits, allowing a reliable three-dimensional profile to be obtained with a PSI algorithm. Validation experiments are presented.
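The reason strictly binary patterns help is that they are insensitive to the projector/camera gamma, while their temporal average can still carry a multi-bit sinusoid. The grayscale sketch below illustrates only that averaging principle; the actual technique additionally encodes the binary frames in color channels with adequate weights.

```python
import numpy as np

def binary_frames_for_sinusoid(width=512, period=64, n_frames=16):
    """Generate strictly binary frames whose average approximates a sinusoid.

    Each pixel is switched on in round(n_frames * s) of the frames, where
    s in [0, 1] is the desired normalized fringe value, so the temporal
    average recovers s to within 1 / (2 * n_frames) even though every
    projected frame is purely binary (and hence gamma-insensitive).
    """
    x = np.arange(width)
    s = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / period))   # target fringe profile
    on_counts = np.round(n_frames * s).astype(int)
    frames = np.zeros((n_frames, width), dtype=np.uint8)
    for k in range(n_frames):
        frames[k] = (k < on_counts).astype(np.uint8)     # k-th binary pattern
    return s, frames

s, frames = binary_frames_for_sinusoid()
recovered = frames.mean(axis=0)
print(np.abs(recovered - s).max())   # quantization error <= 1/(2*n_frames)
```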


Optics Express | 2010

Analog image contouring using a twisted-nematic liquid-crystal display

Jorge L. Flores; José A. Ferrari; Javier A. Ramos; Julia R. Alonso; Ariel Fernández

We present a novel image contouring method based on the polarization features of twisted-nematic liquid-crystal displays (TN-LCDs). TN-LCDs are manufactured to work between a crossed polarizer-analyzer pair. When the analyzer is set at 45 deg (instead of 90 deg) with respect to the polarizer, one obtains an optically processed image with pronounced outlines (dark contours) at middle intensity, i.e., the borders between illuminated and dark areas are enhanced. The proposed method is quite robust and does not require precise alignment or coherent illumination. Since it does not involve numerical processing, it could be useful for contouring large images in real time, with potential applications in medical and biological imaging. Validation experiments are presented.
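A purely numerical toy can mimic the described behavior under one loud assumption: that the display-analyzer combination has a transmission curve with a minimum at mid-gray. Mapping image intensity through such a curve darkens the mid-intensity transition zones between bright and dark regions, which is what outlines the borders. The curve below is an assumption for illustration, not the measured TN-LCD response.

```python
import numpy as np

def toy_contouring(img):
    """Numerical toy loosely emulating analog contouring.

    Assumes an illustrative transmission curve with a minimum at mid-gray,
    T(g) = (2g - 1)^2 for g in [0, 1]. Pixels near the transition between
    bright and dark regions (g ~ 0.5) are rendered dark, outlining the
    borders, roughly as described for the 45-degree analyzer setting.
    """
    g = np.clip(img, 0.0, 1.0)
    return (2.0 * g - 1.0) ** 2

# Synthetic test scene: a bright disk on a dark background,
# softened so the border actually passes through mid-gray values.
y, x = np.mgrid[-64:64, -64:64]
disk = (x ** 2 + y ** 2 < 40 ** 2).astype(float)
soft = (disk + np.roll(disk, 3, axis=1) + np.roll(disk, -3, axis=1)) / 3.0
contours = toy_contouring(soft)
print(contours.min(), contours.max())   # dark values appear along the border
```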


Applied Optics | 2017

Fourier domain post-acquisition aperture reshaping from a multi-focus stack

Julia R. Alonso

Computational optical imaging methods extend the functionality of traditional cameras. The shape of the aperture in an optical system determines the shape into which out-of-focus points are blurred in a captured image. In this work we present a Fourier-domain method that allows us, from an acquired multi-focus image stack, to synthesize images of a three-dimensional scene as if they had been acquired with apertures of arbitrary shape. Partially extended depth-of-field as well as all-in-focus image reconstruction are obtained as particular cases.
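A simplified way to see how the aperture shape enters (this is not the paper's stack-based synthesis, only a single-plane illustration): under a simple defocus model, an out-of-focus point is blurred by a scaled copy of the aperture, so filtering an image in the Fourier domain with a kernel given by an arbitrary binary aperture mask produces the corresponding aperture-shaped blur. Names and the fixed kernel size are assumptions of the sketch.

```python
import numpy as np

def aperture_blur(img, aperture):
    """Blur an image with a kernel shaped like an arbitrary aperture.

    img      : (H, W) grayscale image.
    aperture : small 2D binary mask describing the aperture shape; under a
               simple defocus model, out-of-focus points are blurred by a
               scaled copy of this mask.
    """
    kernel = aperture.astype(float)
    kernel /= kernel.sum()
    kh, kw = kernel.shape
    padded = np.zeros_like(img, dtype=float)
    padded[:kh, :kw] = kernel
    # Center the kernel so the blur is not spatially shifted.
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    # Circular convolution via the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

# A 15x15 cross-shaped mask stands in for an "arbitrary" aperture;
# ring-, star-, or heart-shaped blur is obtained by changing the mask.
aperture = np.zeros((15, 15))
aperture[7, :] = 1.0
aperture[:, 7] = 1.0
rng = np.random.default_rng(2)
img = rng.random((128, 128))
blurred = aperture_blur(img, aperture)
print(blurred.shape)   # (128, 128)
```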


Applied Optics | 2015

Real-time pattern recognition using an optical generalized Hough transform.

Ariel Fernández; Jorge L. Flores; Julia R. Alonso; José A. Ferrari

We present pattern recognition applications of a generalized optical Hough transform, together with temporal multiplexing strategies for dynamic scale- and orientation-variant detection. Unlike computer-based implementations of the Hough transform, its optical implementation in principle imposes no restrictions on the execution time, the resolution of the images, or the frame rate of the videos to be processed, which makes it potentially useful for real-time applications. Validation experiments are presented.
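A compact way to see why an optical correlator can compute this transform: for a fixed scale and orientation, the Hough accumulator for an arbitrary shape equals the cross-correlation of the edge image with the template's edge map, so detection peaks can be obtained with two Fourier transforms. The sketch below is a digital emulation of that correlation, not the optical setup; names are illustrative.

```python
import numpy as np

def hough_by_correlation(edge_img, template_edges):
    """Accumulator for arbitrary-shape detection via FFT correlation.

    For a fixed scale and orientation, letting every edge pixel vote along
    the template contour is equivalent to cross-correlating the edge image
    with the template edge map; peaks mark candidate template locations.
    """
    H, W = edge_img.shape
    th, tw = template_edges.shape
    t = np.zeros((H, W))
    t[:th, :tw] = template_edges
    # Cross-correlation via the FFT: F^-1{ F(image) * conj(F(template)) }.
    return np.real(np.fft.ifft2(np.fft.fft2(edge_img) *
                                np.conj(np.fft.fft2(t))))

# Detect a small square outline placed in an otherwise empty edge image.
template = np.zeros((11, 11))
template[0, :] = template[-1, :] = template[:, 0] = template[:, -1] = 1
scene = np.zeros((128, 128))
scene[40:51, 70:81] = template           # the shape placed at (40, 70)
acc = hough_by_correlation(scene, template)
peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak)   # (40, 70): location of the detected square
```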


Applied Optics | 2016

Image segmentation by nonlinear filtering of optical Hough transform.

Ariel Fernández; Jorge L. Flores; Julia R. Alonso; José A. Ferrari

The identification and extraction (i.e., segmentation) of geometrical features is crucial in many tasks requiring image analysis. We present a method for the optical segmentation of features of interest from an edge enhanced image. The proposed method is based on the nonlinear filtering (implemented by the use of a spatial light modulator) of the generalized optical Hough transform and is capable of discriminating features by shape and by size. The robustness of the method against noise in the input, low contrast, or overlapping of geometrical features is assessed, and experimental validation of the working principle is presented.
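Continuing the digital emulation from the previous sketch (the actual method performs the nonlinear step optically with a spatial light modulator): thresholding the correlation/Hough space keeps only strong peaks, and convolving the peak map back with the template redraws, and thereby segments, only the matching features. Function names and the threshold are hypothetical.

```python
import numpy as np

def segment_by_hough_filtering(edge_img, template_edges, rel_thresh=0.9):
    """Segment features of a given shape by nonlinear filtering of the
    correlation (Hough) space followed by backprojection.

    rel_thresh : peaks below rel_thresh * max(accumulator) are discarded;
                 this nonlinear step selects the desired shape and size.
    """
    H, W = edge_img.shape
    th, tw = template_edges.shape
    t = np.zeros((H, W))
    t[:th, :tw] = template_edges
    Ft = np.fft.fft2(t)
    # Correlation (Hough) space, as in the previous sketch.
    acc = np.real(np.fft.ifft2(np.fft.fft2(edge_img) * np.conj(Ft)))
    peaks = (acc >= rel_thresh * acc.max()).astype(float)   # nonlinear filter
    # Backprojection: convolving the peak map with the template redraws
    # only the detected instances of the shape.
    seg = np.real(np.fft.ifft2(np.fft.fft2(peaks) * Ft))
    return seg > 0.5

template = np.zeros((9, 9))
template[0, :] = template[-1, :] = template[:, 0] = template[:, -1] = 1
scene = np.zeros((128, 128))
scene[20:29, 30:39] = template        # target: a square outline
scene[80:85, 90:95] = 1.0             # clutter of a different shape
mask = segment_by_hough_filtering(scene, template)
print(mask.sum())                     # 32: edge pixels of the detected square
```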


Applications of Digital Image Processing XLI | 2018

All-in-focus image reconstruction robust to ghosting effect

Sergio Gómez Angulo; Julia R. Alonso; Marija Strojnik; Ariel Fernández; Guillermo García Torales; Jorge L. Flores Núñez; José A. Ferrari

Depth of field (DOF) is still a limitation in the acquisition of sharp images. Since DOF depends on the optical system, a shallow DOF is common in macro-photography and microscopy. In this work, we propose a post-processing method to improve an all-in-focus algorithm based on a linear system. Using a multi-focus stack, we extract a focused object of interest. Finally, we synthesize a single image with extended DOF, including occlusions, using a priori information about the objects in the acquired scene. We present theory and experimental results to show the performance of the proposed method on real images captured with a fixed camera.


Rundbrief Der Gi-fachgruppe 5.10 Informationssystem-architekturen | 2016

Fourier Domain Method for Extended Depth-of-Field From a Multi-focus Image Stack

Julia R. Alonso

We present a method to accomplish all-in-focus image reconstruction from a multi-focus image stack. Our approach does not require the estimation of a depth-map of the scene or the segmentation of the in-focus regions.


Proceedings of SPIE | 2016

Stereoscopic 3D-scene synthesis from a monocular camera with an electrically tunable lens

Julia R. Alonso

3D-scene acquisition and representation is important in many areas, ranging from medical imaging to visual entertainment applications. In this regard, optical image acquisition combined with post-capture processing algorithms enables the synthesis of images with novel viewpoints of a scene. This work presents a new method to reconstruct a pair of stereoscopic images of a 3D scene from a multi-focus image stack. A conventional monocular camera combined with an electrically tunable lens (ETL) is used for image acquisition. The captured visual information is reorganized considering a piecewise-planar image formation model with a depth-variant point spread function (PSF), along with the known focusing distances at which the images of the stack were acquired. The consideration of a depth-variant PSF allows the method to be applied to strongly defocused multi-focus image stacks. Finally, post-capture perspective shifts, presenting to each eye the corresponding viewpoint according to its disparity, are generated by simulating the displacement of a synthetic pinhole camera. The procedure is performed without estimation of a depth map or segmentation of the in-focus regions. Experimental results for both real and synthetic images are provided and presented as anaglyphs, but the method could easily be adapted to 3D displays based on parallax barriers or polarized light.
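Once the two viewpoints have been synthesized, presenting them as an anaglyph is the simple final step: take the red channel from the left view and the green and blue channels from the right view. The sketch below covers only that presentation step, not the stack-based view synthesis described above.

```python
import numpy as np

def red_cyan_anaglyph(left_rgb, right_rgb):
    """Combine two RGB views into a red-cyan anaglyph.

    left_rgb, right_rgb : (H, W, 3) float arrays in [0, 1] holding the
    viewpoints intended for the left and right eye, respectively.
    """
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]       # red channel from the left view
    out[..., 1:] = right_rgb[..., 1:]    # green and blue from the right view
    return out

rng = np.random.default_rng(3)
left = rng.random((64, 64, 3))           # stand-in for the synthesized views
right = rng.random((64, 64, 3))
anaglyph = red_cyan_anaglyph(left, right)
print(anaglyph.shape)   # (64, 64, 3)
```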

Collaboration


Dive into Julia R. Alonso's collaboration.

Top Co-Authors

José A. Ferrari (University of the Republic)

Jorge L. Flores (University of Guadalajara)

Guillermo García Torales (Centro de Investigaciones en Optica)

Marija Strojnik (Centro de Investigaciones en Optica)