Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Vicente Morell is active.

Publication


Featured research published by Vicente Morell.


cross language evaluation forum | 2014

ImageCLEF 2014: Overview and Analysis of the Results

Barbara Caputo; Henning Müller; Jesus Martínez-Gómez; Mauricio Villegas; Burak Acar; Novi Patricia; Neda Barzegar Marvasti; Suzan Uskudarli; Roberto Paredes; Miguel Cazorla; Ismael García-Varea; Vicente Morell

This paper presents an overview of the ImageCLEF 2014 evaluation lab. Since its first edition in 2003, ImageCLEF has become one of the key initiatives promoting the benchmark evaluation of algorithms for the annotation and retrieval of images in various domains, ranging from public and personal images to data acquired by mobile robot platforms and medical archives. Over the years, by providing new data collections and challenging tasks to the community of interest, the ImageCLEF lab has achieved a unique position in the image annotation and retrieval research landscape. The 2014 edition consists of four tasks: domain adaptation, scalable concept image annotation, liver CT image annotation and robot vision. This paper describes the tasks and the 2014 competition, giving a unifying perspective of the present activities of the lab while discussing future challenges and opportunities.


The International Journal of Robotics Research | 2015

ViDRILO: The Visual and Depth Robot Indoor Localization with Objects information dataset

Jesus Martínez-Gómez; Ismael García-Varea; Miguel Cazorla; Vicente Morell

In this article we describe a semantic localization dataset for indoor environments named ViDRILO. The dataset provides five sequences of frames acquired with a mobile robot in two similar office buildings under different lighting conditions. Each frame consists of a point cloud representation of the scene and a perspective image. The frames in the dataset are annotated not only with the semantic category of the scene, but also with the presence or absence of a list of predefined objects appearing in the scene. In addition to the frames and annotations, the dataset is distributed with a set of tools for its use in both place classification and object recognition tasks. The large number of labeled frames, in conjunction with the annotation scheme, makes this dataset different from existing ones. The ViDRILO dataset is released for use as a benchmark for problems such as multimodal place classification and object recognition, 3D reconstruction, and point cloud data compression.
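The annotation scheme described above (one scene category per frame plus a presence/absence flag for each object in a predefined list) can be sketched as a simple data structure. The category and object names below are illustrative assumptions, not ViDRILO's actual label set.

```python
from dataclasses import dataclass, field

# Hypothetical object list; ViDRILO's real predefined list differs.
OBJECTS = ["extinguisher", "computer", "printer", "bench"]

@dataclass
class Frame:
    """One ViDRILO-style frame annotation: a scene category plus
    presence/absence flags for a fixed list of objects."""
    category: str
    present: dict = field(default_factory=dict)

    def object_vector(self):
        # Binary presence vector in the fixed OBJECTS order.
        return [1 if self.present.get(o, False) else 0 for o in OBJECTS]

frame = Frame(category="corridor", present={"extinguisher": True})
vec = frame.object_vector()  # [1, 0, 0, 0]
```

The fixed object order is what lets the presence annotations double as a fixed-length feature vector for classification.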


Pattern Recognition Letters | 2014

Geometric 3D point cloud compression

Vicente Morell; Sergio Orts; Miguel Cazorla; Jose Garcia-Rodriguez

Highlights: our main goal is to compress and decompress 3D data using geometric methods. The proposed method extracts planes and performs color segmentation. The segmentation result is triangulated and the triangles are stored. Thus, we can reach high compression ratios with low color and point loss. The method is designed for man-made scenarios, but can be applied to any general one.

The use of 3D data in mobile robotics applications provides valuable information about the robot's environment, but the huge amount of 3D information is usually unmanageable given the robot's storage and computing capabilities. Data compression is necessary to store and manage this information while preserving as much of it as possible. In this paper, we propose a 3D lossy compression system based on plane extraction, which represents the points of each scene plane as a Delaunay triangulation and a set of points/area information. The compression system can be customized to achieve different data compression or accuracy ratios. It also supports a color segmentation stage to preserve original scene color information and provide a realistic scene reconstruction. The design of the method provides a fast scene reconstruction useful for further visualization or processing tasks.
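A plane-extraction stage like the one described above ultimately rests on fitting planes to groups of points. A minimal stdlib-only sketch of that fitting step (least-squares fit of z = a*x + b*y + c via the normal equations), not the paper's actual segmentation pipeline:

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) points,
    solving the 3x3 normal equations by Cramer's rule."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points)
    syz = sum(p[1] * p[2] for p in points)
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    v = [sxz, syz, sz]

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det(A)
    coeffs = []
    for i in range(3):
        # Replace column i of A with v (Cramer's rule).
        M = [row[:] for row in A]
        for r in range(3):
            M[r][i] = v[r]
        coeffs.append(det(M) / d)
    return coeffs  # [a, b, c]

# Points lying exactly on the plane z = 2x + 3y + 1.
pts = [(0, 0, 1), (1, 0, 3), (0, 1, 4), (1, 1, 6), (2, 1, 8)]
a, b, c = fit_plane(pts)  # recovers (2.0, 3.0, 1.0)
```

Once a plane is found, only its parameters and the 2D triangulation of its inlier points need to be stored, which is the source of the compression gain.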


international symposium on neural networks | 2013

Point cloud data filtering and downsampling using growing neural gas

Sergio Orts-Escolano; Vicente Morell; Jose Garcia-Rodriguez; Miguel Cazorla

3D sensors provide valuable information for mobile robotic tasks like scene classification or object recognition, but these sensors often produce noisy data that makes it impossible to apply classical keypoint detection and feature extraction techniques. Therefore, noise removal and downsampling have become essential steps in 3D data processing. In this work, we propose the use of a 3D filtering and downsampling technique based on a Growing Neural Gas (GNG) network. The GNG method is able to deal with outliers present in the input data and can represent 3D spaces, obtaining an induced Delaunay triangulation of the input space. Experiments show how the GNG method yields better input space adaptation to noisy data than other filtering and downsampling methods such as Voxel Grid. It is also demonstrated that state-of-the-art keypoint detectors improve their performance when using data filtered with the GNG network. Descriptors extracted on the improved keypoints achieve better matching in robotics applications such as 3D scene registration.
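For contrast, the Voxel Grid baseline mentioned above can be sketched in a few lines: points are bucketed into cubic cells of a given leaf size and each cell is replaced by the centroid of its points.

```python
from collections import defaultdict

def voxel_grid_downsample(points, leaf_size):
    """Voxel Grid downsampling: bucket (x, y, z) points into cubic
    cells of side `leaf_size`, then replace each occupied cell by
    the centroid of its points."""
    cells = defaultdict(list)
    for x, y, z in points:
        key = (int(x // leaf_size), int(y // leaf_size), int(z // leaf_size))
        cells[key].append((x, y, z))
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append(tuple(sum(coord) / n for coord in zip(*pts)))
    return out

cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.5, 1.5, 1.5)]
down = voxel_grid_downsample(cloud, leaf_size=1.0)
# Two cells survive: one centroid near (0.15, 0.15, 0.15), one at (1.5, 1.5, 1.5)
```

Unlike a GNG network, the centroid of a cell is pulled toward any outlier that falls in it, which illustrates why the abstract reports better adaptation to noisy data with GNG.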


Robotics and Autonomous Systems | 2016

Semantic localization in the PCL library

Jesus Martínez-Gómez; Vicente Morell; Miguel Cazorla; Ismael García-Varea

The semantic localization problem in robotics consists of determining the place where a robot is located by means of semantic categories. The problem is usually addressed as a supervised classification process, where input data correspond to robot perceptions and classes to semantic categories such as kitchen or corridor. In this paper we propose a framework, implemented in the PCL library, which provides a set of valuable tools to easily develop and evaluate semantic localization systems. The implementation includes the generation of 3D global descriptors following a Bag-of-Words approach. This allows the generation of fixed-dimensionality descriptors from any combination of keypoint detector and feature extractor. The framework has been designed, structured and implemented to be easily extended with different keypoint detectors and feature extractors, as well as classification models. The proposed framework has also been used to evaluate the performance of a set of already implemented descriptors when used as input for a specific semantic localization system. The obtained results are discussed, paying special attention to the internal parameters of the BoW descriptor generation process. Moreover, we also review the combination of some keypoint detectors with different 3D descriptor generation techniques.

Highlights: presentation of a BoW implementation in the Point Cloud Library; proposal of a general framework for semantic localization systems; the framework allows for integration of future 3D features and keypoints; the Harris3D detector outperforms uniform sampling with fewer detected keypoints; BoW descriptors obtain better results than the ESF global feature.
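The Bag-of-Words step described above can be sketched in a few lines: each local feature is assigned to its nearest codebook word and the descriptor is the normalized histogram of assignments, so its dimensionality depends only on the codebook size. A toy stdlib sketch, not the PCL implementation:

```python
def bow_descriptor(features, codebook):
    """Bag-of-Words: assign each local feature vector to its nearest
    codeword (squared Euclidean distance) and return the normalized
    occurrence histogram. The output length equals len(codebook),
    regardless of how many local features were extracted."""
    hist = [0] * len(codebook)
    for f in features:
        best = min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(f, codebook[i])))
        hist[best] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0)]        # toy 2-word codebook
features = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.8), (0.2, 0.1)]
desc = bow_descriptor(features, codebook)  # [0.5, 0.5]
```

In a real system the codebook is learned (e.g. by clustering training features), and the local features come from whichever keypoint detector and extractor the framework is configured with.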


Journal of Parallel and Distributed Computing | 2012

GPGPU implementation of growing neural gas: Application to 3D scene reconstruction

Sergio Orts; Jose Garcia-Rodriguez; Diego Viejo; Miguel Cazorla; Vicente Morell

Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it in a predefined time. This paper proposes a Graphics Processing Unit (GPU) parallel implementation of the GNG with Compute Unified Device Architecture (CUDA). In contrast to existing algorithms, the proposed GPU implementation accelerates the learning process while keeping a good quality of representation. Comparative experiments using iterative, parallel and hybrid implementations are carried out to demonstrate the effectiveness of the CUDA implementation. The results show that GNG learning with the proposed implementation achieves a speed-up of 6x compared with the single-threaded CPU implementation. The GPU implementation has also been applied to a real application with time constraints, the acceleration of 3D scene reconstruction for egomotion, in order to validate the proposal.
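The dominant per-sample cost in GNG learning is the search for the winner and second-nearest unit, which is exactly the kind of independent per-unit computation a GPU can evaluate in parallel. A sequential stdlib sketch of that step (toy units, not the paper's CUDA implementation):

```python
def two_nearest(units, sample):
    """Return the indices of the winner and second-nearest GNG unit
    for one input sample, by squared Euclidean distance. On a GPU,
    the distance to every unit can be computed in parallel before
    a reduction picks the two smallest."""
    dists = [sum((u - s) ** 2 for u, s in zip(unit, sample))
             for unit in units]
    order = sorted(range(len(units)), key=dists.__getitem__)
    return order[0], order[1]

units = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # toy unit positions
winner, second = two_nearest(units, (0.9, 0.1))  # (1, 0)
```

After this search, GNG moves the winner and its topological neighbours toward the sample and updates edge ages; those updates are cheap compared with the search when the network is large.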


international conference on artificial neural networks | 2013

Improving 3D keypoint detection from noisy data using growing neural gas

Jose Garcia-Rodriguez; Miguel Cazorla; Sergio Orts-Escolano; Vicente Morell

3D sensors provide valuable information for mobile robotic tasks like scene classification or object recognition, but these sensors often produce noisy data that makes it impossible to apply classical keypoint detection and feature extraction techniques. Therefore, noise removal and downsampling have become essential steps in 3D data processing. In this work, we propose the use of a 3D filtering and downsampling technique based on a Growing Neural Gas (GNG) network. The GNG method is able to deal with outliers present in the input data and can represent 3D spaces, obtaining an induced Delaunay triangulation of the input space. Experiments show how state-of-the-art keypoint detectors improve their performance when using the GNG output representation as input data. Descriptors extracted on the improved keypoints achieve better matching in robotics applications such as 3D scene registration.


international conference on artificial neural networks | 2011

Fast image representation with GPU-based growing neural gas

Jose Garcia-Rodriguez; Anastassia Angelopoulou; Vicente Morell; Sergio Orts; Alexandra Psarrou; Juan Manuel García-Chamizo

This paper aims to address the ability of self-organizing neural network models to manage real-time applications. Specifically, we introduce a Graphics Processing Unit (GPU) implementation with Compute Unified Device Architecture (CUDA) of the Growing Neural Gas (GNG) network. With its attributes of growth, flexibility, rapid adaptation, and excellent quality of representation of the input space, the Growing Neural Gas network is a suitable model for real-time applications. In contrast to existing algorithms, the proposed GPU implementation allows the learning process to be accelerated while keeping a good quality of representation. Comparative experiments using iterative, parallel and hybrid implementations are carried out to demonstrate the effectiveness of the CUDA implementation in representing linear and non-linear input spaces under time restrictions.


Neural Processing Letters | 2016

3D Surface Reconstruction of Noisy Point Clouds Using Growing Neural Gas: 3D Object/Scene Reconstruction

Sergio Orts-Escolano; Jose Garcia-Rodriguez; Vicente Morell; Miguel Cazorla; José Antonio Serra Pérez; Alberto Garcia-Garcia

With the advent of low-cost 3D sensors and 3D printers, scene and object 3D surface reconstruction has become an important research topic in recent years. In this work, we propose an automatic (unsupervised) method for 3D surface reconstruction from raw unorganized point clouds acquired using low-cost 3D sensors. We have modified the Growing Neural Gas network, which is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation, to perform 3D surface reconstruction of different real-world objects and scenes. Some improvements have been made to the original algorithm, considering colour and surface normal information of the input data during the learning stage and creating complete triangular meshes instead of basic wire-frame representations. The proposed method is able to successfully create 3D faces online, whereas existing 3D reconstruction methods based on self-organizing maps require post-processing steps to close gaps and holes produced during the 3D reconstruction process. A set of quantitative and qualitative experiments was carried out to validate the proposed method. The method has been implemented and tested on real data, and has been found to be effective at reconstructing noisy point clouds obtained using low-cost 3D sensors.


Journal of Real-time Image Processing | 2015

Real-time 3D semi-local surface patch extraction using GPGPU

Sergio Orts-Escolano; Vicente Morell; Jose Garcia-Rodriguez; Miguel Cazorla; Robert B. Fisher

Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied, so computing these features in real time for many points in the scene is impossible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGB-D-based computer vision systems where such features are typically used. The use of a GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views. This allows descriptor computation at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. Source code will be made publicly available as a contribution to the Open Source Point Cloud Library.
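The simplest feature mentioned above, a surface normal, can be illustrated for a single triangular surface patch: it is the normalized cross product of two edge vectors. A stdlib sketch (real pipelines estimate normals from point neighbourhoods, e.g. via covariance analysis):

```python
def patch_normal(p0, p1, p2):
    """Unit surface normal of the triangle (p0, p1, p2), computed as
    the normalized cross product of the edge vectors p0->p1 and
    p0->p2. Orientation follows the right-hand rule."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    return [c / norm for c in n]

# A triangle in the xy-plane has normal +z.
n = patch_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))  # [0.0, 0.0, 1.0]
```

Since each point's normal depends only on its local neighbourhood, the computation is embarrassingly parallel, which is what makes the GPU acceleration described in the abstract effective.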

Collaboration


Dive into Vicente Morell's collaborations.

Top Co-Authors