Publications


Featured research published by Daniel Santana-Cedrés.


Image and Vision Computing | 2013

Accurate subpixel edge location based on partial area effect

Agustín Trujillo-Pino; Karl Krissian; Daniel Santana-Cedrés

Estimating edge features, such as subpixel position, orientation, curvature and the change in intensity on both sides of the edge, from the gradient vector computed at each pixel is usually inexact, even in ideal images. In this paper, we present a new edge detector based on an edge and acquisition model derived from the partial area effect, which does not assume continuity in the image values. The main goal of this method is to achieve a highly accurate extraction of the position, orientation, curvature and contrast of the edges, even in difficult conditions such as noisy images, blurred edges, low-contrast areas or very close contours. For this purpose, we first analyze the influence of perfectly straight or circular edges on the surrounding region, in such a way that, when these conditions are fulfilled, the features can be determined exactly. Afterward, we extend the analysis to more realistic situations, considering how adverse conditions can be tackled and presenting an iterative scheme for improving the results. We have tested this method on real images as well as on sets of synthetic images with extremely difficult edges, and in both cases a highly accurate characterization has been achieved.
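
As a rough illustration of the underlying idea (my own simplified notation, not the paper's exact derivation), the partial area effect states that a pixel crossed by an edge receives an intensity equal to the area-weighted mixture of the intensities on the two sides of the edge:

```latex
% Simplified partial-area-effect pixel model (illustrative notation only).
% A unit-area pixel (i,j) crossed by an edge separating regions of
% intensity A and B:
F_{i,j} = A\,S_{i,j} + B\,\bigl(1 - S_{i,j}\bigr)
% where S_{i,j} \in [0,1] is the fraction of the pixel area lying on the
% A side. Fitting a straight or circular boundary to the values of a small
% neighborhood then yields subpixel position, orientation, curvature and
% the contrast A - B.
```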


Pattern Recognition Letters | 2014

Line detection in images showing significant lens distortion and application to distortion correction

Luis Alvarez; Luis Gomez; Daniel Santana-Cedrés

Lines are one of the basic primitives used by the perceptual system to analyze and interpret a scene; therefore, line detection is a very important issue for the robustness and flexibility of computer vision systems. However, in the case of images showing significant lens distortion, standard line detection methods fail because lines are not straight. In this paper we present a new technique to deal with this problem: we propose to extend the usual Hough representation by introducing a new parameter which corresponds to the lens distortion, so that the search space becomes three-dimensional, covering orientation, distance to the origin and distortion. Using the collection of distorted lines which have been recovered, we are able to estimate the lens distortion, remove it and create a new distortion-free image using a two-parameter lens distortion model. We present experiments on a variety of images which show the ability of the proposed approach to extract lines in images showing significant lens distortion.
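
The extended Hough voting can be sketched roughly as follows; the one-parameter division correction, the helper names and the accumulator layout are my own assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def distortion_augmented_hough(edge_points, center, k_candidates,
                               n_theta=180, n_rho=400, rho_max=1000.0):
    """Vote in a 3D (distortion, orientation, distance) accumulator.

    For each candidate distortion value k, edge points are provisionally
    corrected with a one-parameter division model and then vote in a
    standard (theta, rho) line accumulator.  Illustrative sketch only.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((len(k_candidates), n_theta, n_rho))
    cx, cy = center
    for ik, k in enumerate(k_candidates):
        for (x, y) in edge_points:
            dx, dy = x - cx, y - cy
            r2 = dx * dx + dy * dy
            xu = cx + dx / (1.0 + k * r2)   # provisional undistortion
            yu = cy + dy / (1.0 + k * r2)
            rho = xu * np.cos(thetas) + yu * np.sin(thetas)
            irho = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
            valid = (irho >= 0) & (irho < n_rho)
            acc[ik, np.arange(n_theta)[valid], irho[valid]] += 1.0
    # The strongest cells of acc identify the distortion value and the lines.
    return acc, thetas
```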


SIAM Journal on Imaging Sciences | 2015

Invertibility and Estimation of Two-Parameter Polynomial and Division Lens Distortion Models

Daniel Santana-Cedrés; Luis Gomez; Agustín Salgado; Julio Esclarín; Luis Mazorra; Luis Alvarez

In this paper, we study lens distortion for still images considering two well-known distortion models: the two-parameter polynomial model and the two-parameter division model. We study the invertibility of these models, and we mathematically characterize the conditions for the distortion parameters under which the distortion model defines a one-to-one transformation. This ensures that the inverse transformation is well defined and the distortion-free image can be properly computed, which provides robustness to the distortion models. A new automatic method to correct the radial distortion is proposed, and a comparative analysis for this method is extensively performed using the polynomial and the division models. With the aim of obtaining an accurate estimation of the model, we propose an optimization scheme which iteratively improves the parameters to achieve a better matching between the distorted lines and the edge points. The proposed method estimates two-parameter radial distortion models by detecting...
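
For reference, a common way to write the two radial models discussed here; the notation is mine and may differ in convention from the paper. Roughly speaking, invertibility requires the radial mapping to be strictly increasing on the image domain, which constrains the admissible parameter pairs:

```latex
% Two-parameter polynomial model (radius r measured from the distortion
% center c, with L(r) = 1 + k_1 r^2 + k_2 r^4):
\hat{x} = c + L(r)\,(x - c)

% Two-parameter division model:
\hat{x} = c + \frac{x - c}{L(r)}

% Illustrative one-to-one condition: r \mapsto r\,L(r) (polynomial) or
% r \mapsto r / L(r) (division) must be strictly increasing for
% 0 \le r \le r_{\max}, which restricts the admissible values of (k_1, k_2).
```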


Image Processing On Line | 2014

Automatic Lens Distortion Correction Using One-Parameter Division Models

Luis Alvarez; Luis Gomez; Daniel Santana-Cedrés

We present a method to automatically correct the radial distortion caused by wide-angle lenses using the distorted lines generated by the projection of 3D straight lines onto the image. Lens distortion is estimated with a one-parameter division model, which allows the problem to be stated within the Hough transform scheme by adding a distortion parameter, so that straight lines can be better extracted from the image. This paper describes an algorithm which applies this technique, providing all the details of the design of an improved Hough transform. We perform experiments using calibration patterns and real scenes showing strong distortion to illustrate the performance of the proposed method.

Source code: The source code, the code documentation, and the online demo are accessible at the IPOL web page of this article, where an implementation is available for download. Compilation and usage instructions are included in the README.txt file of the archive.
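
A minimal sketch of how a single point is corrected with a one-parameter division model; the variable names are mine, and the IPOL source code remains the authoritative reference:

```python
def undistort_point(x, y, cx, cy, k):
    """Correct a distorted pixel (x, y) with a one-parameter division model.

    (cx, cy) is the distortion center and k the single model parameter
    (typically negative for barrel distortion).  Illustrative sketch only;
    see the IPOL source code for the reference implementation.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 / (1.0 + k * r2)   # division model: radii rescaled by 1/(1 + k r^2)
    return cx + scale * dx, cy + scale * dy

# Example: a point far from the center of a 1000x750 image
print(undistort_point(900.0, 700.0, 500.0, 375.0, k=-2.0e-7))
```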


Image Processing On Line | 2016

An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models

Daniel Santana-Cedrés; Luis Gomez; Miguel Alemán-Flores; Agustín Salgado; Julio Esclarín; Luis Mazorra; Luis Alvarez

We present an algorithm to automatically estimate two-parameter radial polynomial and division distortion models in images. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. Once we have obtained a valid initial solution, a two-parameter model is embedded into an iterative nonlinear optimization scheme to improve the solution. The minimization aims at reducing the distance from the points to the lines, adjusting two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this allows us to detect more points on the distorted lines in the image, so the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion, as well as a comparison between the polynomial and division models.
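
The overall loop can be summarized roughly as below; every helper function is a hypothetical placeholder for a step named in the abstract, so only the control flow is illustrated:

```python
def correct_distortion(image, max_iters=10):
    """High-level skeleton of the iterative scheme described above.

    detect_edges, hough_with_distortion, refine_model_and_center and
    line_support_error are hypothetical placeholders, not real functions
    from the authors' code.
    """
    edges = detect_edges(image)
    # Initial solution: Hough transform enriched with a radial distortion parameter.
    lines, model = hough_with_distortion(edges)
    best_error = line_support_error(lines, model, edges)
    for _ in range(max_iters):
        # Nonlinear optimization of the two distortion parameters and the
        # distortion center, minimizing the point-to-line distances.
        model = refine_model_and_center(lines, model, edges)
        # With the improved model more edge points fall on the distorted
        # lines, so the Hough extraction is repeated.
        lines, model = hough_with_distortion(edges, init_model=model)
        error = line_support_error(lines, model, edges)
        if error >= best_error:   # stop when no improvement is achieved
            break
        best_error = error
    return model, lines
```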


Iberoamerican Congress on Pattern Recognition | 2013

Wide-Angle Lens Distortion Correction Using Division Models

Luis Alvarez; Luis Gomez; Daniel Santana-Cedrés

In this paper we propose a new method to automatically correct wide-angle lens distortion from the distorted lines generated by the projection of 3D straight lines onto the image. We have to deal with two major problems: on the one hand, wide-angle lenses produce a strong distortion, which makes the detection of distorted lines a particularly difficult task. On the other hand, the usual single-parameter polynomial lens distortion model is not able to manage such strong distortion. We propose an extension of the Hough transform, adding a distortion parameter to detect the distorted lines, and we use division lens distortion models to manage wide-angle lens distortion. We present some experiments on synthetic and real images to show the ability of the proposed approach to automatically correct this type of distortion. A comparison with a state-of-the-art method is also included to show the benefits of our method.


Computer Aided Systems Theory | 2011

A subpixel edge detector applied to aortic dissection detection

Agustín Trujillo-Pino; Karl Krissian; Daniel Santana-Cedrés; J. Esclarín-Monreal; J. M. Carreira-Villamor

Aortic dissection is a disease that can be fatal even with correct treatment. It consists of a tear in a layer of the aortic artery wall, which allows blood to flow within the wall; this region of flow is called the dissection. The aim of this paper is to contribute to its diagnosis by detecting the dissection edges inside the aorta. A subpixel-accuracy edge detector based on the hypothesis of the partial volume effect is used, where the intensity of an edge pixel is the sum of the contributions of each color weighted by its relative area inside the pixel. The method uses a floating window centred on the edge pixel and computes the edge features. The accuracy of our method is evaluated on synthetic images with different thicknesses and noise levels, obtaining an edge detection with a maximum mean error lower than 16 percent of a pixel.


IEEE Sensors Journal | 2017

Estimation of the Lens Distortion Model by Minimizing a Line Reprojection Error

Daniel Santana-Cedrés; Luis Gomez; Miguel Alemán-Flores; Agustín Salgado; Julio Esclarín; Luis Mazorra; Luis Alvarez

Most techniques for camera calibration that use planar calibration patterns require the computation of a lens distortion model and a homography. Both are simultaneously refined using a bundle adjustment that minimizes the reprojection error of a collection of points when projected from the scene onto the camera sensor. These points are usually the corners of the rectangles of a calibration pattern. However, if the lens shows a significant distortion, the location and matching of the corners can be difficult and inaccurate. To cope with this problem, instead of point correspondences, we propose to use line correspondences to compute the reprojection error. We have designed a fully automatic algorithm to estimate the lens distortion model and the homography by computing line correspondences and minimizing the line reprojection error. In the experimental setup, we focus on the analysis of the quality of the obtained lens distortion model. We present some experiments that show that the proposed method outperforms the results obtained by standard methods to compute lens distortion models based on line rectification.
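
One way to read the line reprojection error is sketched below: sample points on each pattern line, map them into the image through the current homography and distortion model, and accumulate their distance to the corresponding detected line. All names are illustrative, and project stands for the (hypothetical) combined projection:

```python
import numpy as np

def line_reprojection_error(model_lines, detected_lines, project, n_samples=50):
    """Average distance between projected pattern lines and detected image lines.

    model_lines    : list of point pairs ((x0, y0), (x1, y1)) on the planar pattern
    detected_lines : list of (a, b, c) with a*x + b*y + c = 0 and a^2 + b^2 = 1
    project        : placeholder callable mapping a pattern point into the image
                     through the current homography and lens distortion model
    """
    total, count = 0.0, 0
    for (p0, p1), (a, b, c) in zip(model_lines, detected_lines):
        for t in np.linspace(0.0, 1.0, n_samples):
            x, y = project((1 - t) * p0[0] + t * p1[0],
                           (1 - t) * p0[1] + t * p1[1])
            total += abs(a * x + b * y + c)   # point-to-line distance
            count += 1
    return total / count
```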


Computer Vision and Image Understanding | 2017

Automatic correction of perspective and optical distortions

Daniel Santana-Cedrés; Luis Gomez; Miguel Alemán-Flores; Agustín Salgado; Julio Esclarín; Luis Mazorra; Luis Alvarez

Perspective and optical (lens) distortions are aberrations of very different nature that can simultaneously affect an image. Perspective distortion is caused by the position of the camera, especially when it is too close to the scene. Optical distortion is a lens aberration which causes straight lines in the scene to be projected onto the image as distorted lines. Standard methods to correct perspective distortion are based on the estimation of the vanishing points, which can fail if lens distortion is significant. In this paper, we introduce a new method which addresses both problems in a single framework. First we estimate a lens distortion model by extracting a collection of distorted lines in the image. These distorted lines are afterward rectified by means of the lens distortion model and used to estimate the vanishing points. Finally, the vanishing points are used to correct the perspective distortion. We present a variety of experiments to show the reliability of the proposed method.
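
The two-stage idea can be summarized with a short skeleton; every helper below is a hypothetical placeholder for a stage named in the abstract, and only the order of operations is shown:

```python
def correct_perspective_and_lens(image):
    """Skeleton of the combined correction pipeline described above."""
    # 1. Estimate the lens distortion model from distorted lines in the image.
    distorted_lines, lens_model = estimate_lens_distortion(image)
    # 2. Rectify the extracted lines with the lens distortion model.
    straight_lines = rectify_lines(distorted_lines, lens_model)
    # 3. Estimate the vanishing points from the rectified lines.
    vanishing_points = estimate_vanishing_points(straight_lines)
    # 4. Remove the lens distortion, then correct the perspective.
    undistorted = apply_lens_correction(image, lens_model)
    return apply_perspective_correction(undistorted, vanishing_points)
```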


Iberoamerican Congress on Pattern Recognition | 2014

Automatic Corner Matching in Highly Distorted Images of Zhang’s Calibration Pattern

Miguel Alemán-Flores; Luis Alvarez; Luis Gomez; Daniel Santana-Cedrés

Zhang’s method is a widely used technique for camera calibration from different views of a planar calibration pattern. This pattern contains a set of squares arranged in a certain configuration. In order to calibrate the camera, the corners of the squares in the images must be matched with those in the reference model. When the images show a strong lens distortion, the usual methods to compute the corner matching fail because the corners are shifted from their expected positions. We propose a new method which automatically estimates such corner matching taking into account the lens distortion. The method is based on an automatic algorithm for lens distortion correction which allows estimating the distorted lines passing through the edges of the squares. We present some experiments to illustrate the performance of the proposed method, as well as a comparison with the usual technique proposed in a Matlab toolbox.
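
A hedged sketch of the matching idea: once the lens distortion has been estimated from the square edges, candidate corners can be undistorted and matched against the reference grid through a homography. The helper names and the nearest-neighbour assignment are my own simplification of the approach:

```python
import numpy as np

def match_corners(detected_corners, reference_corners, undistort, homography):
    """Match undistorted image corners to Zhang-pattern reference corners.

    detected_corners  : (N, 2) array of corner positions in the distorted image
    reference_corners : (M, 2) array of corner positions in the pattern model
    undistort         : placeholder for the estimated lens distortion correction
    homography        : 3x3 matrix mapping pattern coordinates into the image
    """
    corrected = np.array([undistort(x, y) for x, y in detected_corners])
    ones = np.ones((len(reference_corners), 1))
    proj = (homography @ np.hstack([reference_corners, ones]).T).T
    proj = proj[:, :2] / proj[:, 2:3]          # projected reference corners
    matches = []
    for j, p in enumerate(proj):               # nearest-neighbour assignment
        i = int(np.argmin(np.sum((corrected - p) ** 2, axis=1)))
        matches.append((i, j))
    return matches
```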

Collaboration


Top co-authors of Daniel Santana-Cedrés:

Luis Alvarez (University of Las Palmas de Gran Canaria)
Luis Gomez (University of Las Palmas de Gran Canaria)
Julio Esclarín (University of Las Palmas de Gran Canaria)
Agustín Salgado (University of Las Palmas de Gran Canaria)
Agustín Trujillo-Pino (University of Las Palmas de Gran Canaria)
Karl Krissian (University of Las Palmas de Gran Canaria)
Agustín Trujillo (University of Las Palmas de Gran Canaria)
José M. Carreira (University of Santiago de Compostela)
Pablo G. Tahoces (University of Santiago de Compostela)
J. Esclarín-Monreal (University of Las Palmas de Gran Canaria)