Luiz M. G. Gonçalves
Federal University of Rio Grande do Norte
Publications
Featured research published by Luiz M. G. Gonçalves.
Journal of Parallel and Distributed Computing | 2012
Rafael Vidal Aroca; Luiz M. G. Gonçalves
Servers and clusters are fundamental building blocks of high-performance computing systems and of the IT infrastructure of many companies and institutions. This paper analyzes the feasibility of building servers from low-power computers through an experimental comparison of server applications running on x86 and ARM architectures. The comparison, executed on web and database servers, covers power usage, CPU load, temperature, request latency, and the number of requests handled by each tested system. Floating-point performance and power usage are also evaluated. ARM-based systems prove a good choice when power efficiency is needed without sacrificing performance.
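The measurement loop behind this kind of comparison can be sketched in a few lines. The `serve_request` stub and the metric names below are illustrative placeholders, not the paper's actual harness; in the real experiment the handler would be an HTTP or database request served by x86 or ARM hardware:

```python
import statistics
import time

def serve_request() -> bytes:
    # Stand-in for a real request handler; in the paper's setup this would
    # be an actual web or database request hitting the server under test.
    return b"x" * 1024

def benchmark(handler, n_requests=1000):
    """Time n_requests calls and report throughput and latency statistics."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        handler()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    latencies.sort()
    return {
        "requests_per_sec": n_requests / elapsed,
        "mean_latency_ms": statistics.mean(latencies) * 1e3,
        "p99_latency_ms": latencies[int(0.99 * n_requests)] * 1e3,
    }

stats = benchmark(serve_request)
```

Run on each architecture, the same loop yields directly comparable throughput and latency figures; power and temperature would be logged by external instruments alongside it.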
Computers & Graphics | 2013
Rafael Beserra Gomes; Bruno Silva; Lourena Rocha; Rafael Vidal Aroca; Luiz Velho; Luiz M. G. Gonçalves
Recent hardware technologies have enabled real-time acquisition of 3D point clouds from real-world scenes. A variety of interactive applications can be developed on top of this new technological scenario. However, most processing techniques for such 3D point clouds remain computationally intensive, requiring optimized approaches, especially when real-time performance is required. As a possible solution, we propose a 3D moving fovea based on a multiresolution technique that processes parts of the acquired scene at multiple levels of resolution. This approach can identify objects in point clouds efficiently. Experiments show that the moving fovea yields a sevenfold gain in processing time while keeping a 91.6% true recognition rate, compared with state-of-the-art 3D object recognition methods. Highlights: object recognition with foveation is 7x faster than the non-foveated approach; true recognition rates stay high, with false recognitions at 8.3%; faster setups with a 91.6% recognition rate and a 14x improvement were also achieved; even the slowest configuration computes almost 3x faster.
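A minimal sketch of the moving-fovea idea, assuming a point cloud stored as a list of 3D tuples: points near the fovea center keep full density while each outer ring retains a smaller fraction. The ring radii and sampling fractions here are illustrative, not the paper's parameters:

```python
import math
import random

random.seed(0)

def foveate(points, center, rings=((0.5, 1.0), (1.0, 0.25), (2.0, 0.0625))):
    """Multiresolution sampling around a fovea: each (radius, keep) ring
    retains a decreasing fraction of the points farther from the center."""
    kept, inner = [], 0.0
    for radius, keep in rings:
        ring = [p for p in points if inner <= math.dist(p, center) < radius]
        kept.extend(random.sample(ring, int(len(ring) * keep)))
        inner = radius
    return kept

# 5000 random points in a cube; points beyond the outermost ring are dropped.
cloud = [tuple(random.uniform(-2.0, 2.0) for _ in range(3)) for _ in range(5000)]
fovea = foveate(cloud, center=(0.0, 0.0, 0.0))
```

Because only the foveated subset reaches the expensive recognition stage, downstream processing cost scales with the kept points rather than the full cloud, which is where the reported speedups come from.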
international conference on multisensor fusion and integration for intelligent systems | 2001
Luiz M. G. Gonçalves
We present current efforts toward an approach for integrating features extracted from multi-modal sensors to guide the attentional behavior of robotic agents. The model applies to many situations and different tasks, including top-down and bottom-up aspects of attention control. Basically, a pre-attention mechanism enhances attentional features relevant to the current task according to a weight function that can be learned. An attention-shift mechanism then selects one of the various activated stimuli for the robot to foveate on. The approach also considers moving the robot's sensory resources to improve the (visual) sensory information.
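The core of such a pre-attention mechanism can be sketched as a weighted combination of feature maps followed by a winner-take-all selection. The map contents and weights below are toy values; in the model the weights would be the learned task-relevance function:

```python
def foveation_target(feature_maps, weights):
    """Combine task-weighted feature maps into one salience map and
    return the (row, col) of the strongest activated stimulus."""
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    best, best_pos = float("-inf"), (0, 0)
    for r in range(rows):
        for c in range(cols):
            s = sum(w * fm[r][c] for w, fm in zip(weights, feature_maps))
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos

# Two toy 3x3 feature maps (say, intensity and motion); the weights play
# the role of the learned pre-attention weight function.
intensity = [[0.1, 0.2, 0.1], [0.0, 0.9, 0.1], [0.1, 0.1, 0.1]]
motion    = [[0.0, 0.1, 0.0], [0.0, 0.8, 0.2], [0.3, 0.0, 0.0]]
target = foveation_target([intensity, motion], weights=[0.5, 0.5])  # → (1, 1)
```

Raising the weight of one map biases the shift toward that modality, which is how top-down task demands can steer an otherwise bottom-up selection.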
computational intelligence in robotics and automation | 1999
Luiz M. G. Gonçalves; Gilson A. Giraldi; Antonio A. F. Oliveira; Roderic A. Grupen
We propose two behaviourally active policies for attentional control. These policies act on multi-modal sensory feedback. Two approaches are used to derive them: 1) a simple, straightforward strategy, and 2) Q-learning of a policy based on the perceptual state of the system. As a practical result of both algorithms, a robotic agent is capable of selecting a region of interest and performing shifts of attention that focus on the selected region. Multi-feature extraction can then take place, allowing the system to identify or recognize a pattern representing that region of interest. The policies also have the desired property that all objects in the environment are visited at least once, although some may be visited more than once. In this way a robotic agent can relate sensed information to actions, abstracting and providing feedback (categorization and mapping) for environmental stimuli.
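As a sketch of the second approach, tabular Q-learning over attention shifts can be written down directly. The state, action, and reward below are deliberate simplifications for illustration (the paper's perceptual state is far richer): the state is the set of regions already attended, the action is the next region to foveate, and the reward favors unvisited regions, which induces the cover-everything-once property noted above:

```python
import random

def q_learn_attention(n_regions=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning for attention shifts over a small set of regions."""
    random.seed(0)
    q = {}
    for _ in range(episodes):
        visited = frozenset()
        for _ in range(2 * n_regions):
            if random.random() < eps:          # epsilon-greedy exploration
                a = random.randrange(n_regions)
            else:
                a = max(range(n_regions), key=lambda x: q.get((visited, x), 0.0))
            reward = 0.0 if a in visited else 1.0
            nxt = visited | {a}
            nxt_best = max(q.get((nxt, x), 0.0) for x in range(n_regions))
            old = q.get((visited, a), 0.0)
            q[(visited, a)] = old + alpha * (reward + gamma * nxt_best - old)
            visited = nxt
    return q

q = q_learn_attention()
# Greedy rollout of the learned policy.
visited, order = frozenset(), []
for _ in range(4):
    a = max(range(4), key=lambda x: q.get((visited, x), 0.0))
    order.append(a)
    visited = visited | {a}
```

After training, the greedy rollout attends each region exactly once before revisiting any, since attending an already-visited region yields no reward.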
annual simulation symposium | 2003
Douglas Machado Tavares; Aquiles Medeiros Filgueira Burlamaqui; Anfranserai Dias; Meika Monteiro; Viviane Antunes; George Thó; Tatiana Aires Tavares; Carlos Magno de Lima; Luiz M. G. Gonçalves; Guido Lemos; Pablo J. Alsina; Adelardo Adelino Dantas de Medeiros
The HYPERPRESENCE system proposed in this work is a mix of hardware and software platforms developed for the control of multi-user agents in a mixed-reality environment. The hardware consists basically of robot systems that manipulate objects and move within a closed real environment, plus a video-camera imaging system. The environment can be any place that provides or needs interaction with virtual environments, either for showing results or for allowing manipulation through a virtual-reality interface. The software comprises three main systems: an acquisition module for position control, a tower (hardware) communication module for manipulating the robots, and a multi-user VRML server that manages virtual spaces and keeps the real and virtual places synchronized. We present the design and implementation solutions adopted for the HYPERPRESENCE architecture, analyze problems with communication protocols, positioning precision, and the mapping between physical and virtual objects, and show our solutions to these problems.
Sensors | 2013
Rafael Vidal Aroca; Rafael Beserra Gomes; Rummennigue R. Dantas; Adonai Gimenez Calbo; Luiz M. G. Gonçalves
Wearable computing is a form of ubiquitous computing that offers flexible and useful tools for users. Specifically, glove-based systems have been used over the last 30 years in a variety of applications, but mostly for sensing people's attributes, such as finger bending and heart rate. In contrast, we propose a novel flexible and reconfigurable instrumentation platform in the form of a glove, which can analyze and measure attributes of fruits when the wearer simply points at or touches them. We design an architecture for such a platform and present its application to intuitive fruit grading, including experimental results for several fruits.
intelligent robots and systems | 2000
Luiz M. G. Gonçalves; Cosimo Distante; Antonio A. F. Oliveira; David S. Wheeler; Roderic A. Grupen
We present mechanisms for attention control and pattern categorization as the basis for robot cognition. For attention, we gather information from attentional feature maps extracted from sensory data, constructing salience maps that decide where to foveate. For identification, multi-feature maps are used as input to an associative memory, allowing the system to classify a pattern representing a region of interest. As a practical result, our robotic platforms are able to select regions of interest, perform shifts of attention that focus on the selected regions, and construct and maintain attentional maps of the environment efficiently.
Sensors | 2012
Rafael Vidal Aroca; Aquiles Medeiros Filgueira Burlamaqui; Luiz M. G. Gonçalves
This article presents a novel closed-loop control architecture based on the audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. Communication takes place over an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low-cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks.
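The principle of signaling over an audio channel can be sketched as mapping each command to a tone and recovering it on the other side by per-frequency correlation. The command table and frequencies below are invented for illustration; the paper's actual tone protocol is not reproduced here:

```python
import math

SAMPLE_RATE = 8000
# Hypothetical command-to-frequency table (illustrative, not the paper's).
TONES = {"forward": 700.0, "left": 900.0, "right": 1100.0, "stop": 1300.0}

def tone_samples(command, duration=0.05):
    """Render the sine tone that encodes one robot command."""
    freq = TONES[command]
    n = int(SAMPLE_RATE * duration)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def decode(samples):
    """Recover the command by checking which candidate tone frequency
    carries the most energy (a one-bin correlation per frequency)."""
    def power(freq):
        re = sum(s * math.cos(2 * math.pi * freq * i / SAMPLE_RATE)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                 for i, s in enumerate(samples))
        return re * re + im * im
    return max(TONES, key=lambda c: power(TONES[c]))

cmd = decode(tone_samples("left"))  # → "left"
```

In the real system the rendered samples would be played through the device's headphone jack and the correlation run on samples captured from the microphone line, closing the control loop without any dedicated data bus.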
Computers & Graphics | 2002
Luiz M. G. Gonçalves; Marcelo Kallmann; Daniel Thalmann
An integrated framework is proposed in which local perception and close-manipulation skills are used in conjunction with a high-level behavioral interface based on a “smart object” paradigm, supporting virtual agents that perform autonomous tasks. In our model, virtual “smart objects” encapsulate information about possible interactions with agents, including sub-tasks defined by scripts that the agent can perform. We then use information provided by low-level sensing mechanisms (based on a simulated retina) to construct a set of local perceptual features with which to categorize possible target objects at run time. Once an object is identified, the associated smart-object representation can be retrieved, and a predefined interaction may be selected if the current agent mission, defined in a global plan script, requires it. A challenging problem solved here is the construction (abstraction) of a mechanism linking individual perceptions to actions that can exhibit somewhat human-like behavior, owing to the simulated retina used for perception. As a practical result, virtual agents act with more autonomy, enhancing their performance.
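The smart-object idea, that the object itself carries the interactions an agent may perform on it, can be sketched as follows. Class and attribute names are illustrative, not the paper's actual API, and the feature matching is a deliberately naive stand-in for the retina-based categorization:

```python
class SmartObject:
    """An object that encapsulates its own interaction scripts."""
    def __init__(self, name, features):
        self.name = name
        self.features = features   # perceptual signature used for matching
        self.interactions = {}     # interaction name -> script (sub-task list)

    def add_interaction(self, name, script):
        self.interactions[name] = script

def categorize(perceived, known_objects, tol=0.1):
    """Match run-time perceptual features against known smart objects."""
    for obj in known_objects:
        if all(abs(a - b) <= tol for a, b in zip(perceived, obj.features)):
            return obj
    return None

door = SmartObject("door", features=[0.8, 0.2])
door.add_interaction("open", ["walk_to_handle", "grasp", "pull"])

match = categorize([0.75, 0.25], [door])   # perception close enough -> door
```

Once `categorize` returns an object, the agent simply executes one of its attached scripts if the global plan calls for it; no object-specific logic lives in the agent itself, which is the point of the paradigm.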
adaptive agents and multi-agents systems | 1999
Luiz M. G. Gonçalves; Roderic A. Grupen; Antonio A. F. Oliveira
In this work, (artificial) vision and touch senses are integrated in a cooperative active system. Multi-modal sensory information acquired on-line is used by a robotic agent to perform real-time tasks involving the categorization of objects. The proposed visual-touch system is able to foveate (verge) the eyes onto an object, move the arms to touch an object, and choose another object by shifting its focus of attention. The system can also detect changes that have occurred in a previously visited region. We propose a hybrid computational system in which an associative memory remembers the visual and tactile signatures of objects for recognition, and reinforcement learning tasks (Q-learning) are formulated to learn active sensing policies.

Several vision architectures have been proposed recently. Kosslyn's architecture [6], based on results in experimental neuro-psychology, seems technically feasible with special hardware; despite practical issues, it is intuitively attractive to attach computational mechanisms to it. Ferrell [3] uses registered, multi-modal, topographically organized maps of the sensory-motor space to orient a robot (COG) towards environmental stimuli. In our approach we also use topographically organized (visual and arm) maps for feature extraction, but instead of a pixel (patch) representation we segment regions of interest (ROIs) in those maps, as suggested by Kosslyn [6]. At each time, only one region is selected for processing by the high-level processes. This selective aspect substantially reduces the computational effort needed for feature extraction.