
Publication


Featured research published by Nicolás Guil.


Pattern Recognition | 1997

Lower order circle and ellipse Hough transform

Nicolás Guil; Emilio L. Zapata

In this work we present two new algorithms for the detection of circles and ellipses, both built on the FHT algorithm: the Fast Circle Hough Transform (FCHT) and the Fast Ellipse Hough Transform (FEHT). The first stage of both algorithms, devoted to obtaining the centers of the figures, is computationally the most costly. To reduce its execution time, this stage is implemented with a new focusing algorithm instead of the typical voting process in a parameter space. This strategy reduces execution times, especially when multiple figures appear in the image or when they differ greatly in size. We also label the image points so that the points belonging to each figure can be discriminated, saving computation in subsequent stages.
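
The conventional center-finding stage that the FCHT's focusing algorithm replaces can be sketched as a gradient-directed voting scheme (a minimal baseline for illustration, not the paper's algorithm; the two-vote gradient trick and the fixed, known radius are simplifying assumptions):

```python
import numpy as np

def circle_hough_centers(edge_points, gradient_angles, radius, shape):
    """Vote for circle centers of a known radius. Each edge point votes
    along its gradient direction, so it casts only two votes (towards
    and away from the center) instead of a full circle of votes."""
    acc = np.zeros(shape, dtype=np.int32)
    for (y, x), theta in zip(edge_points, gradient_angles):
        for sign in (+1, -1):
            cy = int(round(y + sign * radius * np.sin(theta)))
            cx = int(round(x + sign * radius * np.cos(theta)))
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1
    return acc  # peaks mark candidate centers
```

A peak in the accumulator then marks a candidate center, after which the radius (or the ellipse axes) can be recovered in later, cheaper stages.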


IEEE Transactions on Image Processing | 1995

A fast Hough transform for segment detection

Nicolás Guil; Julio Villalba; Emilio L. Zapata

The authors describe a new algorithm for the fast Hough transform (FHT) that satisfactorily solves the problems of other fast algorithms proposed in the literature (erroneous solutions, point redundancy, scaling, and detection of straight lines of different sizes) and needs less storage space. By using the information generated by the algorithm for the detection of straight lines, they manage to detect the segments of the image without appreciable computational overhead. They also discuss the performance and the parallelization of the algorithm and show its efficiency with some examples.
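
For reference, the conventional (theta, rho) voting scheme that fast algorithms such as the FHT accelerate can be sketched as follows (a minimal baseline only, not the authors' algorithm; the bin counts are arbitrary illustrative choices):

```python
import numpy as np

def line_hough(points, n_theta=180, rho_max=200):
    """Conventional line Hough voting: each point (x, y) votes for every
    (theta, rho) pair satisfying rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, 2 * rho_max), dtype=np.int32)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[np.arange(n_theta), rhos + rho_max] += 1  # offset rho to index >= 0
    return acc, thetas
```

Collinear points all vote into the same (theta, rho) bin, so peaks in the accumulator identify lines; tracking which points contributed to a peak is what allows segment endpoints to be recovered cheaply afterwards.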


Signal Processing Systems | 1996

CORDIC-based parallel/pipelined architecture for the Hough transform

Javier D. Bruguera; Nicolás Guil; Tomás Lang; Julio Villalba; Emilio L. Zapata

We present the design of parallel architectures for the computation of the Hough transform based on application-specific CORDIC processors. The design of the circular CORDIC in rotation mode is simplified by a priori knowledge of the angles participating in the transform, and a high throughput is obtained through a pipelined design combined with the use of redundant arithmetic (carry-save adders in this paper). Saving area is essential in the design of a pipelined CORDIC and can be achieved by reducing the number of microrotations and/or the size of the coefficient ROM. To reduce the number of microrotations we incorporate radix 4, where possible, or mixed radix (radix 2 and radix 4) in the design of the processor, reducing the microrotations by half and by 25%, respectively, with respect to a purely radix-2 implementation. Furthermore, if we allocate two circular CORDIC rotators to one processor, the size of the shared coefficient ROM is only 50% of the ROM of a design based on two separate rotators. Finally, we have also incorporated additional microrotations in order to reduce the scale factor to one. The result is a pipelined architecture which can be easily integrated in VLSI technology due to its regularity and modularity.
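
The circular CORDIC rotator at the core of this architecture can be sketched in software as follows (a floating-point, radix-2 illustration only; the paper's contributions are the radix-4/mixed-radix, redundant-arithmetic hardware version, and the scale factor is handled here by a final multiply rather than by the extra microrotations the authors use):

```python
import math

def cordic_rotate(x, y, angle, iterations=32):
    """Radix-2 circular CORDIC in rotation mode: rotate (x, y) by `angle`
    through a fixed sequence of microrotations, then undo the scale factor."""
    # Elementary angles atan(2^-i); in hardware these come from a coefficient ROM.
    alphas = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Accumulated scale factor K = prod 1/sqrt(1 + 2^(-2i)).
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle  # residual angle, driven towards zero
    for i, a in enumerate(alphas):
        d = 1.0 if z >= 0.0 else -1.0
        # Each microrotation needs only shifts (multiply by 2^-i) and adds in hardware.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x * k, y * k
```

Because the Hough transform only ever rotates by a fixed, known set of angles, the microrotation directions d can be precomputed per angle, which is exactly the simplification the a priori knowledge enables.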


Pattern Recognition | 1999

Bidimensional shape detection using an invariant approach

Nicolás Guil; José María González-Linares; E.L. Zapata

Bidimensional shape detection is a process with high computational complexity. In this work, an algorithm based on the generalized Hough transform (GHT) is presented to calculate the orientation, scale, and displacement of an image shape with respect to a template. To reduce the complexity, the calculation of these parameters is decoupled. The invariant information needed by this decoupling is generated using three transformation functions that pair shape edge points, where differences between gradient vector angles are used to choose the paired points. An a priori study of the template shape is carried out to select the most suitable values for these angle differences.
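
The point-pairing step can be illustrated as below (a hypothetical sketch, not the paper's transformation functions: the function name, the brute-force search, and the tolerance are all illustrative assumptions; only the pairing criterion, a fixed gradient-angle difference, comes from the abstract):

```python
import math

def pair_edge_points(points, gradient_angles, xi, tol=1e-3):
    """Pair edge points whose gradient-angle difference equals xi (mod 2*pi).
    The angle difference is preserved under rotation, scaling and translation
    of the shape, which is what makes the decoupled parameter computation possible."""
    pairs = []
    for i in range(len(points)):
        for j in range(len(points)):
            if i == j:
                continue
            diff = (gradient_angles[j] - gradient_angles[i]) % (2 * math.pi)
            if abs(diff - xi) < tol:
                pairs.append((points[i], points[j]))
    return pairs
```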


International Conference on Computer Vision | 2007

Bilinear Active Appearance Models

Jose Gonzalez-Mora; F. De la Torre; R. Murthi; Nicolás Guil; E.L. Zapata

Appearance models have been applied to model the space of human faces over the last two decades. In particular, active appearance models (AAMs) have been successfully used for face tracking, synthesis, and recognition, and they are one of the state-of-the-art approaches due to their efficiency and representational power. Although widely employed, AAMs suffer from a few drawbacks, such as the inability to isolate pose, identity, and expression changes. This paper proposes Bilinear Active Appearance Models (BAAMs), an extension of AAMs that effectively decouples changes due to pose from those due to expression/identity. We derive a gradient-descent algorithm to efficiently fit BAAMs to new images. Experimental results show how BAAMs improve generalization and convergence with respect to the linear model. In addition, we illustrate the decoupling benefits of BAAMs in face recognition across pose, showing how the pose normalization provided by BAAMs increases the recognition performance of commercial systems.


Iberian Conference on Pattern Recognition and Image Analysis | 2007

A Clustering Technique for Video Copy Detection

Nicolás Guil; José María González-Linares; Julián Ramos Cózar; E.L. Zapata

In this work, a new method for detecting copies of a query video in a video database is proposed. It includes a new clustering technique that groups frames with similar visual content while maintaining their temporal order. Applying this technique, a keyframe is extracted for each cluster of the query video; the keyframe is chosen as the frame in the cluster with maximum similarity to the rest of the frames in the cluster. Keyframes are then compared to the target video frames in order to extract similarity regions in the target video, and relaxed temporal constraints are subsequently applied to the calculated regions to identify the copied sequence. The reliability and performance of the method have been tested using several videos from the MPEG-7 Content Set, encoded with different frame sizes, bit rates, and frame rates. Results show that our method obtains a significant improvement over previous approaches in both precision and computation time.
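
The keyframe choice described above, the cluster frame with maximum similarity to the rest, can be sketched as follows (an illustrative sketch: the abstract does not specify the frame features or the similarity measure, so cosine similarity between feature vectors is an assumption here):

```python
import numpy as np

def select_keyframe(cluster_features):
    """Return the index of the frame whose summed cosine similarity to
    the other frames in the cluster is highest."""
    f = np.asarray(cluster_features, dtype=float)
    unit = f / np.linalg.norm(f, axis=1, keepdims=True)
    sims = unit @ unit.T               # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)        # ignore self-similarity
    return int(sims.sum(axis=1).argmax())
```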


Signal Processing: Image Communication | 2007

Logotype detection to support semantic-based video annotation

Julián Ramos Cózar; Nicolás Guil; José María González-Linares; Emilio L. Zapata; Ebroul Izquierdo

In conventional video production, logotypes are used to convey information about the content originator or the actual video content. Logotypes contain information that is critical to infer genre, class, and other important semantic features of video. This paper presents a framework to support semantic-based video classification and annotation. The backbone of the proposed framework is a technique for logotype extraction and recognition. The method consists of two main processing stages. The first stage performs temporal and spatial segmentation by calculating the minimal luminance variance region (MVLR) for a set of frames; non-linear diffusion filters (NLDF) are used at this stage to reduce noise in the shape of the logotype. In the second stage, logotype classification and recognition are achieved. The earth mover's distance (EMD) is used as a metric to decide whether the detected MVLR belongs to one of the following logotype categories: learned or candidate. Learned logos are semantically annotated shapes available in the database; their semantic characterization is obtained through an iterative learning process. Candidate logos are non-annotated shapes extracted during the first processing stage. They are assigned to clusters grouping different instances of logos of similar shape. Using these clusters, false logotypes are removed and different instances of the same logo are averaged to obtain a unique prototype representing the underlying noisy cluster. Experiments involving several hours of MPEG video and around 1000 candidate logotypes have been carried out to show the robustness of both the detection and classification processes.
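
The intuition behind the first stage, that a static logotype is a region whose luminance barely changes across frames, can be sketched as a per-pixel variance threshold (an illustrative simplification: the paper computes the MVLR and applies non-linear diffusion filtering, whereas the function and threshold below are assumptions):

```python
import numpy as np

def minimal_variance_mask(frames, threshold):
    """Per-pixel luminance variance across a window of frames; pixels whose
    variance stays below `threshold` are candidates for a static logotype."""
    stack = np.asarray(frames, dtype=float)   # shape (n_frames, H, W)
    return stack.var(axis=0) <= threshold
```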


Journal of Parallel and Distributed Computing | 2012

Performance models for asynchronous data transfers on consumer Graphics Processing Units

Juan Gómez-Luna; José María González-Linares; José Ignacio Benavides; Nicolás Guil

Graphics Processing Units (GPUs) have impressively arisen as general-purpose coprocessors in high-performance computing applications since the launch of the Compute Unified Device Architecture (CUDA). However, they present an inherent performance bottleneck: communication between two separate address spaces (the main memory of the CPU and the memory of the GPU) is unavoidable. The CUDA Application Programming Interface (API) provides asynchronous transfers and streams, which permit a staged execution, as a way to overlap communication and computation. Nevertheless, there is no precise way to estimate the possible improvement due to overlapping, nor a rule to determine the optimal number of stages or streams into which the computation should be divided. In this work, we present a methodology for modeling the performance of asynchronous data transfers with CUDA streams on different GPU architectures. We illustrate this methodology by deriving performance expressions for two consumer graphics architectures belonging to the most recent generations. These models allow programmers to estimate the optimal number of streams into which the computation on the GPU should be broken up in order to obtain the highest performance improvement. Finally, we have checked the suitability of our performance models with three applications based on codes from the CUDA Software Development Kit (SDK), with successful results.
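
The flavor of such a model can be conveyed with a first-order sketch (these are not the paper's derived expressions; the function names, the linear per-stream overhead term, and the simple max/min overlap assumption are all illustrative):

```python
def staged_time(t_transfer, t_execution, n_streams, overhead_per_stream=0.0):
    """First-order overlap model: with the work split into n chunks, the
    dominant phase is fully serialized while the shorter phase hides behind
    it except for one chunk, plus a hypothetical per-stream setup cost."""
    dominant = max(t_transfer, t_execution)
    hidden = min(t_transfer, t_execution)
    return dominant + hidden / n_streams + overhead_per_stream * n_streams

def best_n_streams(t_transfer, t_execution, max_streams=32, overhead=0.0):
    """Pick the stream count that minimizes the modeled staged time."""
    return min(range(1, max_streams + 1),
               key=lambda n: staged_time(t_transfer, t_execution, n, overhead))
```

With no overhead the model always favors more streams; once a per-stream cost is included, an interior optimum appears, which is the kind of trade-off the paper's models make precise for real architectures.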


Pattern Recognition Letters | 2008

On the computation of the Circle Hough Transform by a GPU rasterizer

Manuel Ujaldon; Antonio Ruiz; Nicolás Guil

This paper presents an alternative for fast computation of the Hough transform that takes advantage of commodity graphics processors, which provide a unique combination of low cost and high performance for this sort of algorithm. Optimizations focus on the features of the GPU rasterizer to evaluate, in hardware, the entire spectrum of votes for circle candidates from a small number of key points, or seeds, computed by the GPU vertex processor in a typical CPU manner. The number of votes and the fidelity of their values are analyzed within the GPU using mathematical models as a function of the radius of the circles to be detected and the resolution of the texture storing the results. Empirical results validate the output obtained for a much faster execution of the Circle Hough Transform (CHT): on a 1024x1024 sample image containing 20 circles of r=50 pixels, the GPU accelerates the code by an order of magnitude and its rasterizer contributes an additional 4x factor, for a total speed-up greater than 40x versus a baseline CPU implementation.


International Conference on Multimedia and Expo | 2004

Reliable real time scene change detection in MPEG compressed video

Edmundo Saez; José Ignacio Benavides; Nicolás Guil

This work presents a new scene change detection method for MPEG compressed video. The method is based on two different video characteristics, edges and luminance, and takes the best of each to define two distance functions. On one hand, a contour-based distance function that requires no registration technique is introduced. On the other hand, a new distance function based on the correlation of luminance histograms is defined. To combine both distance functions efficiently, a new estimator based on the divergence of distributions is proposed. The method has been tested using several videos from the MPEG-7 content set.
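
The luminance-histogram side of this method can be sketched as follows (an illustrative sketch: the bin count and the use of plain Pearson correlation are assumptions, not the paper's exact definition):

```python
import numpy as np

def histogram_correlation(frame_a, frame_b, bins=64):
    """Pearson correlation between the luminance histograms of two frames.
    Values near 1 suggest the same scene; a drop signals a possible cut."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha - ha.mean()
    hb = hb - hb.mean()
    return float((ha * hb).sum() / np.sqrt((ha ** 2).sum() * (hb ** 2).sum()))
```

A scene change detector would then threshold 1 - correlation (combined with the contour-based distance) rather than rely on either cue alone.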
