Rafael Beserra Gomes
Federal University of Rio Grande do Norte
Publications
Featured research published by Rafael Beserra Gomes.
Computers & Graphics | 2013
Rafael Beserra Gomes; Bruno Silva; Lourena Rocha; Rafael Vidal Aroca; Luiz Velho; Luiz M. G. Gonçalves
Recent hardware technologies have enabled the acquisition of 3D point clouds from real-world scenes in real time, and a variety of interactive applications can be developed on top of this new technological scenario. However, a main problem that remains is that most processing techniques for such 3D point clouds are computationally intensive, requiring optimized approaches, especially when real-time performance is needed. As a possible solution, we propose the use of a 3D moving fovea based on a multiresolution technique that processes parts of the acquired scene at multiple levels of resolution. This approach can be used to identify objects in point clouds efficiently. Experiments show that the moving fovea yields a sevenfold gain in processing time while keeping a 91.6% true recognition rate, in comparison with state-of-the-art 3D object recognition methods. Highlights: object recognition with foveation is 7x faster than the non-foveated approach; true recognition rates remain high, with false recognitions at 8.3%; faster setups with a 91.6% recognition rate and a 14x improvement were also achieved; even the slowest configuration is almost 3x faster.
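As an illustration of the foveated multiresolution idea described above, the sketch below keeps full point density near a fovea center and subsamples farther shells more aggressively. The shell radii, retention fractions, and function name are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' code) of a foveated multiresolution filter
# for a 3D point cloud: points near the fovea center are kept at full density,
# while farther shells are randomly subsampled more aggressively.
import numpy as np

def foveate_point_cloud(points, fovea_center, radii=(0.2, 0.5, 1.0), keep=(1.0, 0.25, 0.05)):
    """points: (N, 3) array; fovea_center: (3,) array.
    radii: shell boundaries (same unit as the cloud); keep: fraction of points
    retained inside each shell. Both are assumed, illustrative values."""
    dist = np.linalg.norm(points - fovea_center, axis=1)
    kept, inner = [], 0.0
    rng = np.random.default_rng(0)
    for outer, frac in zip(radii, keep):
        shell = points[(dist >= inner) & (dist < outer)]
        n = max(1, int(len(shell) * frac)) if len(shell) else 0
        if n:
            kept.append(shell[rng.choice(len(shell), size=n, replace=False)])
        inner = outer
    return np.vstack(kept) if kept else np.empty((0, 3))

# Example: a synthetic cloud foveated around the origin.
cloud = np.random.uniform(-1, 1, size=(10000, 3))
reduced = foveate_point_cloud(cloud, np.zeros(3))
print(len(cloud), "->", len(reduced), "points")
```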
international conference on robotics and automation | 2008
Rafael Beserra Gomes; L. M. G. Gonçalves; B. M. de Carvalho
We propose a new approach to reduce and abstract visual data for robotics applications. A moving fovea combined with a multiresolution representation is built from the pair of input images given by a stereo head, reducing the amount of information from the original images by hundreds of times. With this approach we are able to compute several feature maps, including several filters, stereo matching, and motion, in real time, that is, at more than 30 frames per second. As the main contribution, the moving fovea allows the robot, most of the time, to avoid physically moving the cameras in order to bring a desired region to the image center. We present the mathematical formalization of the moving fovea approach, the algorithms, and details of the implementation, and we validate them with experimental results. The approach has proven very useful for robot vision.
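A minimal sketch of a software moving fovea for images, under the assumption that each level is a window centered on the fovea point that grows in size while being downsampled to a common resolution; the window sizes, level count, and function names are illustrative, not the paper's parameters.

```python
# Sketch (assumed parameters) of building multiresolution fovea levels:
# nested windows around the fovea center are cropped and resized to a common
# small size, so outer levels cover a wide field of view at coarse resolution.
import cv2
import numpy as np

def build_fovea_levels(image, center, num_levels=4, out_size=64):
    h, w = image.shape[:2]
    cx, cy = center
    levels = []
    for k in range(num_levels):
        # The window grows with the level index; level 0 is the sharp fovea.
        half = out_size // 2 * (2 ** k)
        x0, x1 = max(0, cx - half), min(w, cx + half)
        y0, y1 = max(0, cy - half), min(h, cy + half)
        crop = image[y0:y1, x0:x1]
        levels.append(cv2.resize(crop, (out_size, out_size), interpolation=cv2.INTER_AREA))
    return levels

# Usage: foveate a frame at a hypothetical attention point.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
pyramid = build_fovea_levels(frame, center=(320, 240))
print([lvl.shape for lvl in pyramid])
```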
Sensors | 2013
Rafael Vidal Aroca; Rafael Beserra Gomes; Rummennigue R. Dantas; Adonai Gimenez Calbo; Luiz M. G. Gonçalves
Wearable computing is a form of ubiquitous computing that offers flexible and useful tools for users. Specifically, glove-based systems have been used over the last 30 years in a variety of applications, but mostly for sensing people's attributes, such as finger bending and heart rate. In contrast, we propose in this work a novel flexible and reconfigurable instrumentation platform in the form of a glove, which can be used to analyze and measure attributes of fruits by simply pointing at or touching them with the proposed glove. An architecture for such a platform is designed and its application to intuitive fruit grading is also presented, including experimental results for several fruits.
virtual environments human computer interfaces and measurement systems | 2006
Samuel O. Azevedo; Aquiles Medeiros Filgueira Burlamaqui; Rummenigge Rudson Dantas; Claudio A. Schneider; Rafael Beserra Gomes; Julio César Paulino de Melo; Josivan S. Xavier; Luiz M. G. Gonçalves
In this work we introduce the concept of interdimensional virtual environments and propose an architecture for their creation based on the client-server model. The architecture allows users with different resources to share the same environment transparently, even when they are connected through different graphical interfaces. This interdimensionality is made possible by a server-side component that, when necessary, transforms messages from one dimension to another, removing or adding the information required by the system's different clients.
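A hypothetical sketch of the server-side transformation component described above: the same scene event is adapted to clients of different "dimensions" (3D, 2D, text-only). The message fields, dimension names, and function name are assumptions for illustration, not the paper's protocol.

```python
# Hypothetical message-transformation component: adapts one scene event to
# clients with different interface dimensions. All field names are assumed.
def transform_message(event, client_dimension):
    if client_dimension == "3d":
        return event  # full payload: actor, 3D position, mesh id
    if client_dimension == "2d":
        x, y, _ = event["position"]
        return {"actor": event["actor"], "position": (x, y), "sprite": event.get("mesh", "default")}
    if client_dimension == "text":
        return {"actor": event["actor"], "description": f'{event["actor"]} moved to {event["position"]}'}
    raise ValueError(f"unknown client dimension: {client_dimension}")

# Example: the server sends the same event in three forms.
event = {"actor": "avatar_7", "position": (1.0, 2.0, 0.5), "mesh": "humanoid"}
for dim in ("3d", "2d", "text"):
    print(dim, transform_message(event, dim))
```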
virtual environments human computer interfaces and measurement systems | 2012
Xiankleber C. Benjamim; Rafael Beserra Gomes; Aquiles Medeiros Filgueira Burlamaqui; Luiz M. G. Gonçalves
This paper proposes using visual feature matching to identify medicine boxes for visually impaired people. We use a camera, available in several popular devices such as computers, televisions, and cell phones, to identify relevant features on a medicine box. After the box is detected, audio files are played informing the user about the dosage, indications, and contraindications of the medication. Such a vision system can help visually impaired people take the right medicine at the time indicated in advance by the doctor. Experiments with 15 blindfolded volunteers showed that 93% of them considered the system useful or very useful for identifying the medicine boxes.
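A minimal sketch of the kind of feature-matching pipeline such a system could use, here with ORB descriptors and brute-force matching from OpenCV; the descriptor choice, thresholds, and the audio step are assumptions, not necessarily what the authors used.

```python
# Sketch (assumed pipeline) of identifying a medicine box by matching local
# features in the camera frame against a set of reference box images.
import cv2

def identify_box(frame_gray, references, min_matches=25):
    """references: dict name -> (keypoints, descriptors) of reference box photos."""
    orb = cv2.ORB_create(nfeatures=1000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    best_name, best_score = None, 0
    for name, (_, ref_desc) in references.items():
        matches = matcher.match(ref_desc, desc)
        good = [m for m in matches if m.distance < 50]  # distance threshold is an assumption
        if len(good) > best_score:
            best_name, best_score = name, len(good)
    return best_name if best_score >= min_matches else None

# After identification, the application would play the corresponding audio,
# e.g. play(f"audio/{box_name}.wav") with any audio library (hypothetical).
```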
brazilian symposium on computer graphics and image processing | 2012
Rafael Beserra Gomes; Rafael Vidal Aroca; Bruno M. Carvalho; Luiz M. G. Gonçalves
We propose a novel and fast interactive segmentation methodology for computer vision applications. The proposed system tracks seeds so that multiple seeds can be acquired over time, substantially improving the segmentation results. Moreover, instead of image coordinates, the user indicates points in the real world that become seeds in the image; these seeds can be indicated, for example, with a laser pointer or a smartphone, and are then tracked and used by a segmentation algorithm. Experiments using Lucas-Kanade optical flow and the Fast Multi-Object Fuzzy Segmentation (Fast-MOFS) algorithm demonstrate that the proposed technique successfully segments images in real time and improves the user's ability to directly segment an object in the real world. The system has high performance, allowing it to run at high frame rates on devices with low processing capability and/or restricted power requirements.
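A minimal sketch of the seed-tracking step using pyramidal Lucas-Kanade optical flow from OpenCV, which the abstract names; the window size, pyramid depth, and surrounding scaffolding are assumptions, and the Fast-MOFS segmenter itself is not reproduced here.

```python
# Sketch of tracking segmentation seeds across frames with Lucas-Kanade
# optical flow; tracked seeds would then feed a seeded segmentation routine.
import cv2
import numpy as np

def track_seeds(prev_gray, next_gray, seeds):
    """seeds: (N, 1, 2) float32 array of (x, y) image points."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, seeds, None,
                                                   winSize=(21, 21), maxLevel=3)
    return next_pts[status.ravel() == 1]  # keep only successfully tracked seeds

# Usage with synthetic frames and one hypothetical seed point:
prev_gray = np.zeros((480, 640), dtype=np.uint8)
next_gray = np.zeros((480, 640), dtype=np.uint8)
seeds = np.array([[[320.0, 240.0]]], dtype=np.float32)
tracked = track_seeds(prev_gray, next_gray, seeds)
# 'tracked' would then be passed to the seeded segmentation algorithm.
```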
pacific-rim symposium on image and video technology | 2007
Rafael Beserra Gomes; Tiago S. Souza; Bruno M. Carvalho
Mosaic is a Non-Photorealistic Rendering (NPR) style that simulates the appearance of decorative tile mosaics. To simulate realistic mosaics, a method must emphasize edges in the input image while arranging the tiles to minimize the visible grout (the substrate used to glue the tiles, which appears between them). This paper proposes a method for generating mosaic animations from input videos (extending previous work on still-image mosaics) that combines a segmentation algorithm with an optical flow method to enforce temporal coherence in the mosaic videos, thus preventing the tiles from moving back and forth across the canvas, a problem known as swimming. The result of the segmentation algorithm is used to constrain the optical flow, restricting its computation to the areas detected as belonging to a single object. This intra-object coherence scheme is applied to two mosaic rendering methods, and a technique for adding and removing tiles in one of them is also proposed. Examples of the renderings produced are shown to illustrate our techniques.
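A simplified sketch of the intra-object coherence idea: dense optical flow is computed, but each tile is displaced only by the flow averaged inside its object's segmentation mask, so tiles of one object move together. The Farneback flow call and all parameters are assumptions, not the paper's exact pipeline.

```python
# Sketch: advect mosaic tile centers by per-object mean optical flow so that
# tiles belonging to the same segment move coherently between frames.
import cv2
import numpy as np

def advect_tiles(prev_gray, next_gray, labels, tile_centers, tile_labels):
    """labels: per-pixel segment id; tile_centers: (N, 2) float array of (x, y);
    tile_labels: segment id of each tile."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    moved = tile_centers.copy()
    for seg in np.unique(tile_labels):
        mask = labels == seg
        if mask.any():
            mean_flow = flow[mask].mean(axis=0)          # (dx, dy) for this object
            moved[tile_labels == seg] += mean_flow
    return moved

# Tiny synthetic usage with a single segment covering the whole frame:
prev = np.zeros((120, 160), np.uint8); nxt = np.zeros((120, 160), np.uint8)
labels = np.zeros((120, 160), int)
centers = np.array([[40.0, 40.0], [80.0, 60.0]])
print(advect_tiles(prev, nxt, labels, centers, tile_labels=np.array([0, 0])))
```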
Sensors | 2018
Fabio Fonseca de Oliveira; Anderson A. S. Souza; Marcelo A. C. Fernandes; Rafael Beserra Gomes; Luiz M. G. Gonçalves
Technological innovations in the hardware of RGB-D sensors have allowed the acquisition of 3D point clouds in real time. Consequently, various applications related to the 3D world have arisen and are receiving increasing attention from researchers. Nevertheless, one of the main problems that remains is the demand for computationally intensive processing, which requires optimized approaches to deal with 3D vision modeling, especially when tasks must be performed in real time. A previously proposed multiresolution 3D model known as foveated point clouds is a possible solution to this problem; however, that model is limited to a single foveated structure with context-dependent mobility. In this work, we propose a new solution for data reduction and feature detection using multifoveation in the point cloud. Applying several foveated structures, however, considerably increases processing, since intersections between regions of distinct structures are processed multiple times. To solve this problem, the current proposal avoids processing redundant regions, which reduces processing time even further. The approach can be used to identify objects in 3D point clouds, one of the key tasks for real-time applications such as robot vision, with efficient synchronization, allowing validation of the model and verification of its applicability in the context of computer vision. Experimental results demonstrate a performance gain of at least 27.21% in processing time while retaining the main features of the original model and maintaining the recognition rate in comparison with state-of-the-art 3D object recognition methods.
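A sketch of the redundancy-avoidance idea as described above, under the assumption that each point is assigned once to the finest resolution level demanded by any fovea covering it, so intersecting regions are not processed repeatedly; radii and names are illustrative.

```python
# Sketch: with several foveated structures, assign each point exactly one
# resolution level (0 = finest) so downstream processing runs once per point.
import numpy as np

def assign_levels(points, fovea_centers, level_radii=(0.2, 0.5, 1.0)):
    """Return one level index per point; points outside all foveae get the
    coarsest level. Radii are assumed, illustrative values."""
    level = np.full(len(points), len(level_radii), dtype=int)
    for c in fovea_centers:
        dist = np.linalg.norm(points - c, axis=1)
        lvl_c = np.searchsorted(level_radii, dist)   # finer when closer to this fovea
        level = np.minimum(level, lvl_c)             # keep the finest demand per point
    return level

points = np.random.uniform(-1, 1, size=(5000, 3))
levels = assign_levels(points, fovea_centers=[np.zeros(3), np.array([0.8, 0.8, 0.0])])
# Each point now carries exactly one level, so overlapping foveae do not cause
# the same region to be processed multiple times.
```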
Archive | 2010
Rafael Beserra Gomes; Renato Q. Gardiman; Luiz Eduardo Leite; Bruno M. Carvalho; Luiz Marcos Garcia Gonçalves
We introduce an approach to accelerate low-level vision in robotics applications, including its formalisms and algorithms. We describe in detail the image processing and computer vision techniques that provide data reduction and feature abstraction from the input data, including the algorithms and their implementation on a real robot platform. Our model proves helpful in the development of behaviorally active mechanisms for the integration of multi-modal sensory features. In the current version, the algorithm allows our system to achieve real-time processing on a conventional 2.0 GHz Intel processor. This processing rate allows our robotic platform to perform tasks involving control of attention, such as object tracking and recognition. The proposed solution supports complex, behaviorally cooperative, active sensory systems as well as different types of tasks, including bottom-up and top-down aspects of attention control. Although the model is more general, we use features from visual data here to validate the proposed scheme. Our final goal is to develop an active, real-time vision system able to select regions of interest in its surroundings and to foveate (verge) the robot's cameras on the selected regions as necessary. This can be performed physically or in software only (by moving the fovea region inside a view of the scene). Our system is also able to keep attention on the same region as long as necessary, for example to recognize or manipulate an object, and to eventually shift its focus of attention to another region once a task has finished. A useful contribution built on our approach to feature reduction and abstraction is a moving fovea implemented in software, which can be used in situations where it is better to avoid moving the robot's resources (cameras). On top of our model, based on the reduced data and on the current functional state of the robot, attention strategies can be further developed to decide, on-line, which is the most relevant place to pay attention to. Recognition tasks can also be successfully performed based on the features in this perceptual buffer. These tasks, in conjunction with tracking experiments including motion computation, validate the proposed model and its use for data reduction and feature abstraction. As a result, the robot can use this low-level module to make control decisions based on the information contained in its perceptual state and on the current task being executed, selecting the right actions in response to environmental stimuli.
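A small sketch of the "move the fovea in software" idea mentioned above: the fovea center is shifted toward the current attention target inside the existing view instead of moving the cameras. The smoothing gain, window size, and function name are assumed values, not the chapter's implementation.

```python
# Sketch: shift the software fovea toward an attention target, clamped so the
# foveal window stays inside the image, avoiding physical camera motion.
import numpy as np

def shift_fovea(fovea_center, target, image_size, fovea_half=32, gain=0.5):
    """image_size: (width, height); gain: fraction of the remaining offset
    applied per frame (an assumed smoothing factor)."""
    center = np.asarray(fovea_center, dtype=float)
    center += gain * (np.asarray(target, dtype=float) - center)
    w, h = image_size
    center[0] = np.clip(center[0], fovea_half, w - fovea_half)
    center[1] = np.clip(center[1], fovea_half, h - fovea_half)
    return center

# Example: the fovea gradually converges on a tracked object at (500, 100).
c = np.array([320.0, 240.0])
for _ in range(5):
    c = shift_fovea(c, target=(500, 100), image_size=(640, 480))
print(c)
```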
Neurocomputing | 2013
Rafael Beserra Gomes; Bruno M. Carvalho; Luiz M. G. Gonçalves
Collaboration
Aquiles Medeiros Filgueira Burlamaqui
Federal University of Rio Grande do Norte