Publication


Featured research published by Miguel Cazorla.


Pattern Analysis and Applications | 2008

Feature selection, mutual information, and the classification of high-dimensional patterns: Applications to image classification and microarray data analysis

Boyan Bonev; Francisco Escolano; Miguel Cazorla

We propose a novel feature selection filter for supervised learning, which relies on the efficient estimation of the mutual information between a high-dimensional set of features and the classes. We bypass the estimation of the probability density function with the aid of the entropic-graphs approximation of Rényi entropy, and the subsequent approximation of the Shannon entropy. Thus, the complexity does not depend on the number of dimensions but on the number of patterns/samples, and the curse of dimensionality is circumvented. We show that it is then possible to outperform algorithms which individually rank features, as well as a greedy algorithm based on the maximal relevance and minimal redundancy criterion. We successfully test our method both in the contexts of image classification and microarray data classification. For most of the tested data sets, we obtain better classification results than those reported in the literature.
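
A minimal sketch of the core idea, not the authors' code: score candidate feature subsets by a set-wise mutual information estimated from nearest-neighbor distances (a Kozachenko-Leonenko estimator stands in here for the paper's entropic-graph approximation of Rényi/Shannon entropy) and grow the subset greedily. All names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(X, k=3):
    """Kozachenko-Leonenko estimate of the differential entropy H(X), in nats."""
    n, d = X.shape
    eps = cKDTree(X).query(X, k=k + 1)[0][:, -1]          # distance to the k-th neighbor
    eps = np.maximum(eps, 1e-12)
    log_unit_ball = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0)
    return digamma(n) - digamma(k) + log_unit_ball + d * np.mean(np.log(eps))

def mutual_info_subset(X, y, k=3):
    """I(X; Y) = H(X) - sum_c p(c) H(X | Y=c) for a discrete class label y."""
    h_cond = sum(np.mean(y == c) * knn_entropy(X[y == c], k)
                 for c in np.unique(y) if np.sum(y == c) > k)
    return knn_entropy(X, k) - h_cond

def greedy_select(X, y, n_features, k=3):
    """Forward selection: add the feature that most increases the set-wise MI."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        _, best = max((mutual_info_subset(X[:, selected + [j]], y, k), j)
                      for j in remaining)
        selected.append(best)
        remaining.remove(best)
    return selected
```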


Cross-Language Evaluation Forum | 2013

ImageCLEF 2013: The Vision, the Data and the Open Challenges

Barbara Caputo; Henning Müller; Bart Thomee; Mauricio Villegas; Roberto Paredes; David Zellhöfer; Hervé Goëau; Alexis Joly; Pierre Bonnet; Jesús Martínez Gómez; Ismael García Varea; Miguel Cazorla

This paper presents an overview of the ImageCLEF 2013 lab. Since its first edition in 2003, ImageCLEF has become one of the key initiatives promoting the benchmark evaluation of algorithms for the cross-language annotation and retrieval of images in various domains, ranging from public and personal images to data acquired by mobile robot platforms and botanic collections. Over the years, by providing new data collections and challenging tasks to the community of interest, the ImageCLEF lab has achieved a unique position in the multilingual image annotation and retrieval research landscape. The 2013 edition consisted of three tasks: the photo annotation and retrieval task, the plant identification task and the robot vision task. Furthermore, the medical annotation task, which traditionally has been under the ImageCLEF umbrella and which this year celebrates its tenth anniversary, was organized in conjunction with AMIA for the first time. The paper describes the tasks and the 2013 competition, giving a unifying perspective of the present activities of the lab while discussing future challenges and opportunities.


IEEE Transactions on Image Processing | 2003

Two Bayesian methods for junction classification

Miguel Cazorla; Francisco Escolano

We propose two Bayesian methods for junction classification which evolve from the Kona method: a region-based method and an edge-based method. Our region-based method computes a one-dimensional (1-D) profile where wedges are mapped to intervals with homogeneous intensity. These intervals are found through a growing-and-merging algorithm driven by a greedy rule. On the other hand, our edge-based method computes a different profile which maps wedge limits to peaks of contrast, and these peaks are found through thresholding followed by nonmaximum suppression. Experimental results show that both methods are more robust and efficient than the Kona method, and also that the edge-based method outperforms the region-based one.
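
For illustration only, not the paper's implementation: the sketch below builds a 1-D angular intensity profile around a candidate junction and recovers wedge limits as contrast peaks via thresholding followed by non-maximum suppression, in the spirit of the edge-based method. Radius, bin count and threshold are arbitrary.

```python
import numpy as np

def angular_profile(image, cx, cy, radius=8, n_bins=180):
    """Average gray level along each radial direction around the junction (cx, cy)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_bins, endpoint=False)
    radii = np.arange(1, radius + 1)
    profile = np.empty(n_bins)
    for i, a in enumerate(angles):
        xs = np.clip((cx + radii * np.cos(a)).astype(int), 0, image.shape[1] - 1)
        ys = np.clip((cy + radii * np.sin(a)).astype(int), 0, image.shape[0] - 1)
        profile[i] = image[ys, xs].mean()
    return profile

def wedge_limits(profile, threshold=10.0):
    """Contrast peaks of the circular profile = candidate wedge limits."""
    contrast = np.abs(np.roll(profile, -1) - profile)       # circular first difference
    peaks = []
    for i in range(len(contrast)):                           # threshold + non-maximum suppression
        prev_c, next_c = contrast[i - 1], contrast[(i + 1) % len(contrast)]
        if contrast[i] >= threshold and contrast[i] >= prev_c and contrast[i] >= next_c:
            peaks.append(i)
    return peaks
```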


Cross-Language Evaluation Forum | 2014

ImageCLEF 2014: Overview and Analysis of the Results

Barbara Caputo; Henning Müller; Jesus Martínez-Gómez; Mauricio Villegas; Burak Acar; Novi Patricia; Neda Barzegar Marvasti; Suzan Uskudarli; Roberto Paredes; Miguel Cazorla; Ismael García-Varea; Vicente Morell

This paper presents an overview of the ImageCLEF 2014 evaluation lab. Since its first edition in 2003, ImageCLEF has become one of the key initiatives promoting the benchmark evaluation of algorithms for the annotation and retrieval of images in various domains, ranging from public and personal images to data acquired by mobile robot platforms and medical archives. Over the years, by providing new data collections and challenging tasks to the community of interest, the ImageCLEF lab has achieved a unique position in the image annotation and retrieval research landscape. The 2014 edition consists of four tasks: domain adaptation, scalable concept image annotation, liver CT image annotation and robot vision. This paper describes the tasks and the 2014 competition, giving a unifying perspective of the present activities of the lab while discussing future challenges and opportunities.


GbRPR'07: Proceedings of the 6th IAPR-TC-15 International Conference on Graph-Based Representations in Pattern Recognition | 2007

Constellations and the unsupervised learning of graphs

Boyan Bonev; Francisco Escolano; Miguel Angel Lozano; Pablo Suau; Miguel Cazorla; Wendy Aguilar

In this paper, we propose a novel method for the unsupervised clustering of graphs in the context of the constellation approach to object recognition. The method is an EM central clustering algorithm which builds prototypical graphs on the basis of fast matching with graph transformations. Our experiments, both with random graphs and in realistic situations (visual localization), show that our prototypes improve on the set median graphs and also on the prototypes derived from our previous incremental method. We also discuss how the method scales with a growing number of images.
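
A toy sketch of central clustering of graphs under stated assumptions, not the authors' EM formulation: each cluster is summarized by a prototype (here simply the set-median graph of its members) and assignment uses a pairwise graph distance. networkx.graph_edit_distance stands in for the paper's fast matching with graph transformations and is practical only for very small graphs.

```python
import networkx as nx

def graph_distance(g1, g2):
    # exact edit distance is exponential; acceptable only for tiny illustration graphs
    return nx.graph_edit_distance(g1, g2)

def set_median(graphs):
    """Graph of the set minimizing the summed distance to all the others."""
    return min(graphs, key=lambda g: sum(graph_distance(g, h) for h in graphs))

def central_clustering(graphs, k=2, n_iter=5):
    prototypes = graphs[:k]                                   # naive initialization
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for g in graphs:                                      # hard assignment to nearest prototype
            j = min(range(k), key=lambda j: graph_distance(g, prototypes[j]))
            clusters[j].append(g)
        prototypes = [set_median(c) if c else prototypes[j]   # re-estimate prototypes
                      for j, c in enumerate(clusters)]
    return prototypes, clusters
```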


International Symposium on Neural Networks | 2016

PointNet: A 3D Convolutional Neural Network for real-time object class recognition

Alberto Garcia-Garcia; Francisco Gomez-Donoso; Jose Garcia-Rodriguez; Sergio Orts-Escolano; Miguel Cazorla; Jorge Azorin-Lopez

During the last few years, Convolutional Neural Networks have slowly but surely become the default method to solve many computer vision-related problems. This is mainly due to the continuous success that they have achieved when applied to certain tasks such as image, speech, or object recognition. Despite these efforts, object class recognition methods based on deep learning techniques still have room for improvement. Most of the current approaches do not fully exploit 3D information, which has been proven to effectively improve the performance of other traditional object recognition methods. In this work, we propose PointNet, a new approach inspired by VoxNet and 3D ShapeNets, which improves over existing methods by using density occupancy grid representations for the input data and integrating them into a supervised Convolutional Neural Network architecture. Extensive experimentation was carried out using ModelNet, a large-scale 3D CAD model dataset, to train and test the system, showing that our approach is on par with state-of-the-art methods in terms of accuracy while being able to perform recognition under real-time constraints.
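
A sketch under assumptions, not the published network: the point cloud is rasterized into a density occupancy grid and fed to a small supervised 3D CNN, mirroring the idea described above. Grid resolution, layer sizes, and the `cloud` variable in the usage comment are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

def density_occupancy_grid(points, grid=30):
    """Count points per voxel, normalized to [0, 1] (a simple density occupancy grid)."""
    pts = points - points.min(axis=0)
    pts = pts / (pts.max() + 1e-9) * (grid - 1)
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    idx = pts.astype(int)
    np.add.at(vox, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return vox / (vox.max() + 1e-9)

class Voxel3DCNN(nn.Module):
    def __init__(self, n_classes=10, grid=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        with torch.no_grad():                                 # infer the flattened feature size
            n_flat = self.features(torch.zeros(1, 1, grid, grid, grid)).numel()
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(n_flat, 128),
                                        nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, x):                                     # x: (batch, 1, grid, grid, grid)
        return self.classifier(self.features(x))

# usage (cloud is a hypothetical Nx3 numpy array of points):
# logits = Voxel3DCNN()(torch.from_numpy(density_occupancy_grid(cloud))[None, None])
```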


Sensors | 2014

A Comparative Study of Registration Methods for RGB-D Video of Static Scenes

Vicente Morell-Gimenez; Marcelo Saval-Calvo; Jorge Azorin-Lopez; Jose Garcia-Rodriguez; Miguel Cazorla; Sergio Orts-Escolano; Andres Fuster-Guillo

The use of RGB-D sensors for mapping and recognition tasks in robotics or, in general, for virtual reconstruction has increased in recent years. The key aspect of these sensors is that they provide both depth and color information using the same device. In this paper, we present a comparative analysis of the most important methods used in the literature for the registration of subsequent RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Then, detailed experiments are carried out to determine the behavior of the different methods depending on the application. For both applications, we used standard datasets and a new one built for object reconstruction.
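
As one representative of the registration families compared above, the following hedged sketch runs point-to-plane ICP between two consecutive RGB-D frames with Open3D; the paper evaluates several methods, and the parameters here are placeholders.

```python
import numpy as np
import open3d as o3d

def register_pair(source_pcd, target_pcd, voxel=0.02, max_dist=0.05):
    """Estimate the 4x4 rigid transform mapping source_pcd onto target_pcd."""
    src = source_pcd.voxel_down_sample(voxel)
    tgt = target_pcd.voxel_down_sample(voxel)
    for p in (src, tgt):                                      # point-to-plane ICP needs normals
        p.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```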


Intelligent Robots and Systems | 2007

3D plane-based egomotion for SLAM on semi-structured environment

Diego Viejo; Miguel Cazorla

Several works deal with 3D data in the SLAM problem. The data come from a 3D laser sweeping unit or a stereo camera, both providing a huge amount of data. In this paper, we detail an efficient method to extract planar patches from 3D raw data. Then, we use these patches in an ICP-like method in order to address the SLAM problem. Using ICP with planes is not a trivial task and requires some adaptation of the original ICP. Some promising results are shown for outdoor environments.
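
A minimal sketch, assuming Open3D and RANSAC plane fitting rather than the paper's own extraction step: planar patches are peeled off the raw cloud one by one; the resulting plane parameters are the kind of primitive an adapted, ICP-like matching such as the one described above would align between frames.

```python
import open3d as o3d

def extract_planes(pcd, max_planes=6, dist=0.03, min_points=500):
    """Repeatedly fit a plane with RANSAC and remove its inliers from the cloud."""
    planes, rest = [], pcd
    for _ in range(max_planes):
        if len(rest.points) < min_points:
            break
        model, inliers = rest.segment_plane(distance_threshold=dist,
                                            ransac_n=3, num_iterations=1000)
        if len(inliers) < min_points:
            break
        planes.append((model, rest.select_by_index(inliers)))   # (a, b, c, d) and the patch cloud
        rest = rest.select_by_index(inliers, invert=True)
    return planes, rest
```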


The International Journal of Robotics Research | 2015

ViDRILO: The Visual and Depth Robot Indoor Localization with Objects information dataset

Jesus Martínez-Gómez; Ismael García-Varea; Miguel Cazorla; Vicente Morell

In this article we describe a semantic localization dataset for indoor environments named ViDRILO. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames in the dataset are annotated not only with the semantic category of the scene but also with the presence or absence of a list of predefined objects appearing in the scene. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames in conjunction with the annotation scheme makes this dataset different from existing ones. The ViDRILO dataset is released for use as a benchmark for different problems such as multimodal place classification and object recognition, 3D reconstruction or point cloud data compression.
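
The actual ViDRILO file layout is not reproduced here; the sketch below only mirrors the annotation scheme described above (per-frame RGB image, point cloud, scene category, and presence/absence flags for a list of predefined objects). Field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ViDRILOFrame:
    image_path: str                    # perspective RGB image of the scene
    cloud_path: str                    # point cloud representation of the same scene
    scene_category: str                # semantic label of the room the frame was taken in
    objects_present: Dict[str, bool]   # predefined object name -> present in this frame?

def frames_with_object(frames: List[ViDRILOFrame], obj: str) -> List[ViDRILOFrame]:
    """Filter frames by the presence of a given annotated object."""
    return [f for f in frames if f.objects_present.get(obj, False)]
```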


Pattern Recognition Letters | 2014

Geometric 3D point cloud compression

Vicente Morell; Sergio Orts; Miguel Cazorla; Jose Garcia-Rodriguez

Our main goal is to compress and decompress 3D data using geometric methods. The proposed method extracts planes and performs color segmentation; the segmentation result is triangulated and the triangles are stored, so we can reach high compression ratios with low color and point loss. It is designed to work with man-made scenarios, but can be applied to general ones. The use of 3D data in mobile robotics applications provides valuable information about the robot's environment, but the huge amount of 3D information is usually unmanageable by the robot's storage and computing capabilities. Data compression is necessary to store and manage this information while preserving as much information as possible. In this paper, we propose a 3D lossy compression system based on plane extraction which represents the points of each scene plane as a Delaunay triangulation and a set of points/area information. The compression system can be customized to achieve different data compression or accuracy ratios. It also supports a color segmentation stage to preserve the original scene color information and provide a realistic scene reconstruction. The design of the method provides fast scene reconstruction, useful for further visualization or processing tasks.
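
A minimal sketch of the core compression step only, not the published system: the inliers of an extracted plane are expressed in the plane's own 2-D coordinates and Delaunay-triangulated, so a patch can be stored as triangle vertices plus the plane parameters. Color segmentation and vertex decimation, on which the paper relies for the actual compression gain, are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def compress_plane(points, normal, d):
    """points: Nx3 inliers of the plane n.x + d = 0 -> 2-D vertices, triangles, plane basis."""
    n = normal / np.linalg.norm(normal)
    u = np.cross(n, [0.0, 0.0, 1.0])                   # build an orthonormal basis (u, v) in the plane
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    coords2d = points @ np.column_stack([u, v])        # project the inliers onto the plane basis
    tri = Delaunay(coords2d)
    return coords2d, tri.simplices, (u, v, n, d)       # enough to rebuild an approximate patch
```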

Collaboration


Dive into Miguel Cazorla's collaboration.

Top Co-Authors


Diego Viejo

University of Alicante
