Publication


Featured research published by José Carlos Rangel.


Advanced Robotics | 2016

Scene classification based on semantic labeling

José Carlos Rangel; Miguel Cazorla; Ismael García-Varea; Jesus Martínez-Gómez; Elisa Fromont; Marc Sebban

Finding an appropriate image representation is a crucial problem in robotics. This problem has been classically addressed by means of computer vision techniques, where local and global features are used. The selection and/or combination of different features is carried out by taking into account repeatability and distinctiveness, but also the specific problem to solve. In this article, we propose the generation of image descriptors from general-purpose semantic annotations. This approach has been evaluated as a source of information for a scene classifier, specifically using Clarifai as the semantic annotation tool. The experimentation has been carried out using the ViDRILO toolbox as a benchmark, which includes state-of-the-art global features and tools to make comparisons among them. According to the experimental results, the proposed descriptor performs similarly to well-known domain-specific image descriptors based on global features in a scene classification task. Moreover, the proposed descriptor is based on generalist annotations without any type of problem-oriented parameter tuning.
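As an illustration of how such a descriptor can be assembled, the sketch below turns per-image semantic tags into a sparse feature vector and feeds it to a linear scene classifier. It is a minimal sketch, not the authors' implementation: the tags, confidences and scene labels are invented, and retrieving the annotations from the external tool is assumed to have been done elsewhere.

```python
# Minimal sketch: build a scene descriptor from (tag, confidence) annotations and
# train a linear classifier on it. Tags, confidences and labels are illustrative;
# fetching the annotations from the external tool (e.g. Clarifai) is assumed done.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

annotations = [                              # one dict of tag -> confidence per image
    {"corridor": 0.92, "door": 0.81, "floor": 0.77},
    {"hallway": 0.88, "wall": 0.74, "door": 0.70},
    {"desk": 0.95, "monitor": 0.88, "chair": 0.73},
    {"office": 0.90, "chair": 0.85, "desk": 0.80},
]
scene_labels = ["corridor", "corridor", "office", "office"]

# The descriptor is simply the sparse vector of tag confidences; a linear SVM is
# then trained on top of it for scene classification.
model = make_pipeline(DictVectorizer(sparse=True), LinearSVC())
model.fit(annotations, scene_labels)
print(model.predict([{"door": 0.9, "wall": 0.8}]))
```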


Advanced Robotics | 2017

LexToMap: lexical-based topological mapping

José Carlos Rangel; Jesus Martínez-Gómez; Ismael García-Varea; Miguel Cazorla

Any robot should be provided with a proper representation of its environment in order to perform navigation and other tasks. In addition to metrical approaches, topological mapping generates graph representations in which nodes and edges correspond to locations and transitions. In this article, we present LexToMap, a topological mapping procedure that relies on image annotations. These annotations, represented in this work by lexical labels, are obtained from pre-trained deep learning models, namely convolutional neural networks (CNNs), and are used to estimate image similarities. Moreover, the lexical labels contribute to the descriptive capabilities of the topological maps. The proposal has been evaluated using the KTH-IDOL 2 dataset, which consists of image sequences acquired within an indoor environment under three different lighting conditions. The generality of the procedure as well as the descriptive capabilities of the generated maps validate the proposal.
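To make the idea concrete, here is a small sketch of how lexical labels could drive a topological map: consecutive images are merged into the same node while their label sets remain similar, and a new node with a connecting edge is created otherwise. This is only an illustration with invented labels and a made-up similarity threshold, not the LexToMap implementation.

```python
# Toy sketch of label-driven topological mapping: nodes are locations described by
# the union of their lexical labels, edges are transitions. Labels and the 0.4
# Jaccard threshold are invented for illustration.
import networkx as nx

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

image_labels = [                      # hypothetical CNN labels along a trajectory
    ["corridor", "door", "wall"],
    ["corridor", "wall", "floor"],
    ["office", "desk", "monitor"],
    ["office", "chair", "desk"],
]

graph = nx.Graph()
current = 0
graph.add_node(current, labels=set(image_labels[0]))
for labels in image_labels[1:]:
    if jaccard(graph.nodes[current]["labels"], labels) >= 0.4:
        graph.nodes[current]["labels"] |= set(labels)   # same location, enrich node
    else:
        graph.add_node(current + 1, labels=set(labels)) # new location
        graph.add_edge(current, current + 1)            # transition between them
        current += 1

print(list(graph.nodes(data=True)), list(graph.edges()))
```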


Pattern Analysis and Applications | 2017

Object recognition in noisy RGB-D data using GNG

José Carlos Rangel; Vicente Morell; Miguel Cazorla; Sergio Orts-Escolano; Jose Garcia-Rodriguez

Object recognition in 3D scenes is a research field with intense activity guided by the problems related to the use of 3D point clouds. Some of these problems are caused by the presence of noise in the cloud, which reduces the effectiveness of a recognition process. This work proposes a method for dealing with the noise present in point clouds by applying the growing neural gas (GNG) network filtering algorithm. This method is able to represent the input data with the desired number of neurons while preserving the topology of the input space. The results obtained with GNG were compared with those of a voxel grid filter to determine the efficacy of our approach. Moreover, since a stage of the recognition process includes the detection of keypoints in a cloud, we evaluated different keypoint detectors to determine which one produces the best results in the selected pipeline. Experiments show how the GNG method yields better recognition results than other filtering algorithms when noise is present.
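For readers unfamiliar with the filter, the following is a compact, self-contained sketch of a standard Growing Neural Gas in NumPy, showing how a noisy cloud can be summarized with a fixed budget of neurons while the edge set keeps the topology of the input space. Parameters and the synthetic cloud are illustrative, isolated-neuron removal is omitted for brevity, and this is not the authors' implementation.

```python
# Compact Growing Neural Gas (GNG) sketch: neurons adapt to random samples of a
# noisy point cloud, edges age and are pruned, and a new neuron is inserted
# periodically near the region of highest accumulated error.
import numpy as np

def gng(points, max_nodes=60, lam=100, eps_b=0.05, eps_n=0.005,
        alpha=0.5, beta=0.995, max_age=50, iters=6000, seed=0):
    rng = np.random.default_rng(seed)
    W = [points[rng.integers(len(points))].copy() for _ in range(2)]  # neuron positions
    E = [0.0, 0.0]                                 # accumulated error per neuron
    edges = {}                                     # (i, j) with i < j -> edge age
    for t in range(1, iters + 1):
        x = points[rng.integers(len(points))]
        d = [float(np.sum((w - x) ** 2)) for w in W]
        s1, s2 = np.argsort(d)[:2]                 # winner and runner-up neurons
        E[s1] += d[s1]
        W[s1] += eps_b * (x - W[s1])               # pull the winner toward the sample
        for (i, j) in list(edges):
            if s1 in (i, j):                       # age the winner's edges and
                edges[(i, j)] += 1                 # drag its neighbours slightly
                other = j if i == s1 else i
                W[other] += eps_n * (x - W[other])
        edges[(min(s1, s2), max(s1, s2))] = 0      # refresh/create winner-runner edge
        edges = {k: a for k, a in edges.items() if a <= max_age}  # drop stale edges
        if t % lam == 0 and len(W) < max_nodes:    # periodically insert a neuron
            q = int(np.argmax(E))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: E[n])
                W.append(0.5 * (W[q] + W[f]))      # new neuron halfway between q and f
                E[q] *= alpha
                E[f] *= alpha
                E.append(E[q])
                r = len(W) - 1
                edges.pop((min(q, f), max(q, f)), None)
                edges[(min(q, r), max(q, r))] = 0
                edges[(min(f, r), max(f, r))] = 0
        E = [e * beta for e in E]                  # global error decay
    # Removal of isolated neurons is omitted for brevity.
    return np.array(W), edges

# Hypothetical noisy cloud: a planar patch with Gaussian noise on the z axis.
rng = np.random.default_rng(42)
cloud = np.c_[rng.random((5000, 2)), 0.02 * rng.standard_normal(5000)]
neurons, topology = gng(cloud)
print(neurons.shape, len(topology))
```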


Robot | 2016

Computing Image Descriptors from Annotations Acquired from External Tools

José Carlos Rangel; Miguel Cazorla; Ismael García-Varea; Jesus Martínez-Gómez; Elisa Fromont; Marc Sebban

Visual descriptors are widely used in several recognition and classification tasks in robotics. The main challenge for these tasks is to find a descriptor that represents the image content without losing representative information. Nowadays, there exists a wide range of visual descriptors computed with computer vision techniques and different pooling strategies. This paper proposes a novel way of building image descriptors using an external tool, namely Clarifai. This is a remote web tool that automatically describes an input image using semantic tags, and these tags are used to generate our descriptor. The descriptor generation procedure has been tested on the ViDRILO dataset, where it has been compared and merged with some well-known descriptors. Moreover, subset variable selection techniques have been evaluated. The experimental results show that our descriptor is competitive in classification tasks with other kinds of descriptors.
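The sketch below illustrates the merging and variable-selection step in isolation: a tag-based descriptor and another global descriptor are concatenated, and only the most informative dimensions are kept before classification. All data and dimensionalities are invented placeholders; this is not the authors' experimental setup.

```python
# Sketch of merging a semantic-tag descriptor with another global descriptor and
# applying a simple variable-selection step before classification.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
tag_descriptor = rng.random((20, 50))      # e.g. tag confidences from the web tool
global_descriptor = rng.random((20, 128))  # e.g. a classical global feature
X = np.hstack([tag_descriptor, global_descriptor])  # merged descriptor
y = rng.integers(0, 2, size=20)            # hypothetical scene labels

# Keep only the most informative dimensions, then train a linear classifier.
model = make_pipeline(SelectKBest(f_classif, k=32), LinearSVC())
model.fit(X, y)
print(model.score(X, y))
```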


Applied Soft Computing | 2018

Semi-supervised 3D object recognition through CNN labeling

José Carlos Rangel; Jesus Martínez-Gómez; Cristina Romero-González; Ismael García-Varea; Miguel Cazorla

Despite the outstanding results of Convolutional Neural Networks (CNNs) in object recognition and classification, there are still some open problems to address when applying these solutions to real-world problems. Specifically, CNNs struggle to generalize under challenging scenarios, like recognizing the variability and heterogeneity of the instances of elements belonging to the same category. Some of these difficulties are directly related to the input information; for example, 2D-based methods still show a lack of robustness against strong lighting variations. In this paper, we propose to merge techniques using both 2D and 3D information to overcome these problems. Specifically, we take advantage of the spatial information in the 3D data to segment objects in the image and build an object classifier, and of the classification capabilities of CNNs to label each object image for training in a semi-supervised way. As the experimental results demonstrate, our model can successfully generalize for categories with high intra-class variability and outperform the accuracy of a well-known CNN model.
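The toy sketch that follows shows only the semi-supervised labelling idea: CNN predictions over several 2D views of a 3D-segmented object are aggregated, and only confident pseudo-labels are kept for training the object classifier. The class names, probabilities and the 0.6 threshold are invented, and both the segmentation and the CNN inference are assumed to have been done elsewhere.

```python
# Toy pseudo-labelling sketch: aggregate per-view CNN probabilities for each
# segmented object and keep only confident labels for later training.
import numpy as np

rng = np.random.default_rng(0)
classes = ["mug", "monitor", "chair"]                      # hypothetical categories
views_per_object = [rng.dirichlet(np.ones(3), size=4)      # 4 views x 3 class probs
                    for _ in range(10)]                    # per segmented object

pseudo_labels = []
for probs in views_per_object:
    mean_probs = probs.mean(axis=0)            # aggregate evidence over the views
    best = int(mean_probs.argmax())
    # Keep only confident pseudo-labels; uncertain objects stay unlabelled.
    pseudo_labels.append(classes[best] if mean_probs[best] > 0.6 else None)

print(pseudo_labels)
```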


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2015

Object Recognition in Noisy RGB-D Data

José Carlos Rangel; Vicente Morell; Miguel Cazorla; Sergio Orts-Escolano; Jose Garcia-Rodriguez

Object recognition in 3D scenes is a growing research field that faces some problems related to the use of 3D point clouds. In this work, we focus on dealing with noisy clouds through the use of the Growing Neural Gas (GNG) network filtering algorithm. Another challenge is the selection of the right keypoint detection method, which allows a model to be identified within a scene cloud. The GNG method is able to represent the input data at a desired resolution while preserving the topology of the input space. Experiments show how the introduction of the GNG method yields better recognition results than other filtering algorithms when noise is present.
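As a point of reference for the filtering step, here is a small NumPy sketch of the kind of voxel grid filter that GNG is compared against in this line of work: points are grouped by voxel and replaced by their centroid. The voxel size and the synthetic cloud are illustrative only.

```python
# Minimal voxel grid downsampling: one centroid per occupied voxel.
import numpy as np

def voxel_grid_filter(points, voxel_size=0.05):
    """Replace all points falling inside each voxel with their centroid."""
    voxels = {}
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        voxels.setdefault(key, []).append(p)
    return np.array([np.mean(group, axis=0) for group in voxels.values()])

# Hypothetical noisy cloud: points in a unit cube with additive Gaussian noise.
rng = np.random.default_rng(0)
cloud = rng.random((10000, 3)) + 0.01 * rng.standard_normal((10000, 3))
filtered = voxel_grid_filter(cloud, voxel_size=0.1)
print(cloud.shape, "->", filtered.shape)
```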


Virtual Reality | 2018

An augmented reality application for improving shopping experience in large retail stores

Edmanuel Cruz; Sergio Orts-Escolano; Francisco Gomez-Donoso; Carlos Rizo; José Carlos Rangel; Higinio Mora; Miguel Cazorla

In several large retail stores, such as malls and sports or food stores, the customer often feels lost due to the difficulty of finding a product. Although these large stores usually have visual signs to guide customers toward specific products, sometimes these signs are also hard to find and are not kept up to date. In this paper, we propose a system that jointly combines deep learning and augmented reality techniques to provide the customer with useful information. First, the proposed system learns the visual appearance of different areas in the store using a deep learning architecture. Then, customers can use their mobile devices to take a picture of the area where they are located within the store. By uploading this image to the system trained for image classification, we are able to identify the area where the customer is located. Then, using this information and novel augmented reality techniques, we provide information about that area: the route to another area where a product is available, 3D product visualization, user location, analytics, etc. The developed system is able to successfully locate a user in an example store with 98% accuracy. The combination of deep learning systems together with augmented reality techniques shows promising results toward improving user experience in retail/commerce applications: branding, advanced visualization, personalization, enhanced customer experience, etc.
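To show the information flow, the sketch below reduces the localization step to its simplest form: the customer's photo is represented by a CNN feature (assumed precomputed), classified into a store area by nearest centroid, and a stored route is looked up. Area names, features and routes are invented, and the actual system trains a deep classifier rather than using centroids.

```python
# Sketch of the "locate the customer, then look up a route" flow with placeholder
# data; the CNN feature extraction is assumed to happen elsewhere.
import numpy as np

rng = np.random.default_rng(0)
area_features = {                         # hypothetical CNN features per store area
    "electronics": rng.random((30, 256)),
    "groceries": rng.random((30, 256)),
    "sports": rng.random((30, 256)),
}
centroids = {a: f.mean(axis=0) for a, f in area_features.items()}
routes = {("electronics", "sports"): ["aisle 4", "turn left", "aisle 9"]}

def locate(query_feature):
    """Classify the customer's photo into a store area via nearest centroid."""
    return min(centroids, key=lambda a: np.linalg.norm(centroids[a] - query_feature))

query = rng.random(256)                   # CNN feature of the customer's photo
area = locate(query)
print("you are in:", area)
print("route to sports:", routes.get((area, "sports"), ["no route stored"]))
```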


Robot | 2017

Robot Semantic Localization Through CNN Descriptors

Edmanuel Cruz; José Carlos Rangel; Miguel Cazorla

Semantic localization for mobile robots involves an accurate determination of the kind of place where a robot is located. Therefore, the representation of the knowledge of this place is crucial for the robot. In this paper we present a study aimed at finding a robust scene classification model for a mobile robot. The proposed system uses CNN descriptors to represent the input perceptions of the robot. First, we develop comparative experiments in order to find such a model. The experiments include the evaluation of several pre-trained CNN models and the training of a classifier with different classification algorithms. These experiments were carried out using the ViDRILO dataset and compared against the benchmark provided by its authors. The results demonstrate the suitability of CNN descriptors for semantic classification.
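A minimal sketch of the comparative step follows: given CNN descriptors for a set of images (here random placeholders), several classification algorithms are cross-validated and their mean accuracies printed. The descriptor size, labels and classifier list are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of comparing classifiers on CNN descriptors; the 512-D features and the
# five semantic categories are random placeholders for real dataset descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((120, 512))            # hypothetical CNN descriptors, one per image
y = rng.integers(0, 5, size=120)      # semantic categories (corridor, office, ...)

for name, clf in [("linear SVM", LinearSVC()),
                  ("k-NN", KNeighborsClassifier()),
                  ("random forest", RandomForestClassifier())]:
    scores = cross_val_score(clf, X, y, cv=3)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```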


International Symposium on Neural Networks | 2015

Using GNG on 3D Object Recognition in Noisy RGB-D data

José Carlos Rangel; Vicente Morell; Miguel Cazorla; Sergio Orts-Escolano; Jose Garcia-Rodriguez

Object recognition in 3D scenes is a growing research field that faces some problems related to the use of 3D point clouds. In this work, we focus on dealing with the noise in the clouds through the use of the Growing Neural Gas (GNG) network filtering algorithm. The GNG method is able to represent the input data with a desired number of neurons while preserving the topology of the input space. The selected recognition pipeline describes keypoints extracted from the clouds, then groups and compares the resulting descriptors to detect the presence of an object in the scene through a hypothesis verification algorithm. Experiments show how the GNG method yields better recognition results than other filtering algorithms when noise is present.
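The grouping-and-comparison step can be illustrated independently of the filter: the sketch below matches hypothetical keypoint descriptors of a model against those of a scene, keeps only mutual nearest neighbours, and accepts the object when enough correspondences survive, as a crude stand-in for the hypothesis verification stage. All descriptors, dimensions and thresholds are invented.

```python
# Toy correspondence-grouping sketch between model and scene keypoint descriptors.
import numpy as np

def mutual_matches(model_desc, scene_desc):
    """Return index pairs that are mutual nearest neighbours in descriptor space."""
    d = np.linalg.norm(model_desc[:, None, :] - scene_desc[None, :, :], axis=2)
    m2s = d.argmin(axis=1)            # best scene match for each model descriptor
    s2m = d.argmin(axis=0)            # best model match for each scene descriptor
    return [(i, j) for i, j in enumerate(m2s) if s2m[j] == i]

rng = np.random.default_rng(1)
model = rng.random((40, 33))          # hypothetical keypoint descriptors of the model
scene = np.vstack([model[:25] + 0.01 * rng.standard_normal((25, 33)),
                   rng.random((200, 33))])   # scene contains a noisy copy of the model

matches = mutual_matches(model, scene)
# Crude hypothesis verification: accept the object if enough keypoints agree.
print("object present:", len(matches) >= 15, f"({len(matches)} correspondences)")
```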


Distributed Computing and Artificial Intelligence | 2013

Irrigation System through Intelligent Agents Implemented with Arduino Technology

Rodolfo Salazar; José Carlos Rangel; Cristian Pinzón; Abel Rodríguez

Water has become, in recent years, a valuable and increasingly scarce resource. Its proper use in agriculture has demanded the incorporation of new technologies, mainly in the area of ICT. In this paper we present a smart irrigation system based on a multi-agent architecture using fuzzy logic. The architecture incorporates different types of intelligent agents that autonomously monitor the crop and are responsible for deciding whether to enable or disable the irrigation system. This project proposes a real and innovative solution to the problem of inadequate water use in the irrigation systems currently employed in agricultural projects. This article presents the different technologies used and their adaptation to the solution of the problem, and briefly discusses the first results obtained.
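As a flavour of the decision logic, the sketch below evaluates two toy fuzzy rules over soil moisture and temperature readings to decide whether the irrigation valve should be enabled. The membership functions, ranges and rules are invented for illustration and are far simpler than the agents described in the paper.

```python
# Toy fuzzy-rule sketch for an irrigation decision; all sets and thresholds invented.
def ramp_down(x, lo, hi):
    """Fuzzy 'low value' set: 1 below lo, 0 above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def should_irrigate(soil_moisture, temperature):
    dry = ramp_down(soil_moisture, 0.15, 0.40)       # soil moisture in [0, 1]
    wet = 1.0 - dry                                  # fuzzy complement of 'dry'
    hot = 1.0 - ramp_down(temperature, 25.0, 40.0)   # temperature in Celsius
    # Rule 1: IF soil is dry AND weather is hot THEN irrigate (min = fuzzy AND).
    # Rule 2: IF soil is wet THEN do not irrigate.
    return min(dry, hot) > wet

print(should_irrigate(soil_moisture=0.10, temperature=38))  # True: open the valve
print(should_irrigate(soil_moisture=0.80, temperature=20))  # False: keep it closed
```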

Collaboration


Dive into José Carlos Rangel's collaboration.

Top Co-Authors

Marc Sebban, Centre national de la recherche scientifique
Cristian Pinzón, Technological University of Panama
Carlos Rizo, University of Alicante