Valdir Grassi
University of São Paulo
Publications
Featured research published by Valdir Grassi.
IEEE Intelligent Systems | 2007
Sarangi P. Parikh; Valdir Grassi; R. Vijay Kumar; Jun Okamoto
Nearly five million individuals in the US have limited arm and hand movement, making it difficult or impossible for them to use computers and products with embedded computers, such as wheelchairs, household appliances, office electronic equipment, and robotic aids. Although some current wheelchair systems have embedded computers, they offer very little computer control and require precise, low-level control inputs from the user; their interfaces are similar to those found in passenger cars. The rider must continuously specify the chair's direction and, in some cases, velocity using a joystick-like device. Unfortunately, many users who could benefit from powered wheelchairs lack these fine motor skills. For instance, those with cerebral palsy might not be able to guide a chair through a narrow opening, such as a doorway, without repeatedly colliding with the sides. These types of physically challenging environments can be frustrating and require a lot of user effort. At the University of Pennsylvania's General Robotics, Automation, Sensing, and Perception (GRASP) Lab, we developed the SmartChair, a smart wheelchair with intelligent controllers that lets people with physical disabilities overcome these difficulties. By outfitting the wheelchair with cameras, a laser range finder, and onboard processing, we give the user an adaptable, intelligent control system. The computer-controlled wheelchair's shared control framework allows users complete control of the chair while ensuring their safety.
international conference on robotics and automation | 2005
Sarangi P. Parikh; Valdir Grassi; Vijay Kumar; Jun Okamoto
We describe the development and assessment of a computer controlled wheelchair called the SMARTCHAIR. A shared control framework with different levels of autonomy allows the human operator to stay in complete control of the chair at each level while ensuring her safety. The framework incorporates deliberative motion plans or controllers, reactive behaviors, and human user inputs. At every instant in time, control inputs from these three different sources are blended continuously to provide a safe trajectory to the destination, while allowing the human to maintain control and safely override the autonomous behavior. In this paper, we present usability experiments with 50 participants and demonstrate quantitatively the benefits of human-robot augmentation.
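The blending of deliberative plans, reactive behaviors, and user input described above can be pictured with a minimal sketch that mixes three velocity commands as a convex combination. The weights, the command structure, and the convex-combination rule are illustrative assumptions, not the actual SMARTCHAIR arbitration policy.

```python
# Minimal sketch of blending three control sources into one velocity command.
# The weights and the convex-combination rule are illustrative assumptions,
# not the exact policy used on the SMARTCHAIR.
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    linear: float   # m/s
    angular: float  # rad/s

def blend_commands(deliberative: VelocityCommand,
                   reactive: VelocityCommand,
                   user: VelocityCommand,
                   w_plan: float = 0.4,
                   w_react: float = 0.3,
                   w_user: float = 0.3) -> VelocityCommand:
    """Convex combination of the three sources; raising w_user toward 1.0
    lets the human override the autonomous behavior."""
    total = w_plan + w_react + w_user
    w_plan, w_react, w_user = (w / total for w in (w_plan, w_react, w_user))
    return VelocityCommand(
        linear=w_plan * deliberative.linear + w_react * reactive.linear + w_user * user.linear,
        angular=w_plan * deliberative.angular + w_react * reactive.angular + w_user * user.angular,
    )

# Example: the user pushes the joystick to turn left while the planner goes straight.
cmd = blend_commands(VelocityCommand(0.5, 0.0),
                     VelocityCommand(0.4, -0.1),
                     VelocityCommand(0.5, 0.6))
print(cmd)
```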
international conference on robotics and automation | 2004
Sarangi Patel Parikh; Valdir Grassi; R. Vijay Kumar; Jun Okamoto
We describe the development and assessment of a computer controlled wheelchair equipped with a suite of sensors and a novel interface, called the SMARTCHAIR. The main focus of this paper is a shared control framework which allows the human operator to interact with the chair while it is performing an autonomous task. At the highest level, the autonomous system is able to plan paths using high level deliberative navigation behaviors depending on destinations or waypoints commanded by the user. The user is able to locally modify or override previously commanded autonomous behaviors or plans. This is possible because of our hierarchical control strategy that combines three independent sources of control inputs: deliberative plans obtained from maps and user commands, reactive behaviors generated by stimuli from the environment, and user-initiated commands that might arise during the execution of a plan or behavior. The framework we describe ensures the user's safety while allowing the user to be in complete control of a potentially autonomous system.
Journal of Systems Architecture | 2014
Leandro Fernandes; Jefferson R. Souza; Gustavo Pessin; Patrick Yuri Shinzato; Daniel O. Sales; Caio Mendes; Marcos Prado; Rafael Luiz Klaser; André Chaves Magalhães; Alberto Yukinobu Hata; Daniel Fernando Pigatto; Kalinka Regina Lucas Jaquie Castelo Branco; Valdir Grassi; Fernando Santos Osório; Denis F. Wolf
This paper presents the development of two outdoor intelligent vehicle platforms named CaRINA I and CaRINA II, their system architecture, simulation tools, and control modules. It also describes the development of the intelligent control system modules that allow the mobile robots and vehicles to navigate autonomously in controlled urban environments. Research work has been carried out on tele-operation, driver assistance systems, and autonomous navigation using the vehicles as platforms for experiments and validation. Our robotic platforms include mechanical adaptations and the development of an embedded software architecture. This paper addresses the design, sensing, decision making, and acting infrastructure, and several experimental tests that have been carried out to evaluate both platforms and the proposed algorithms. The main contribution of this work is the proposed architecture, which is modular and flexible, allowing it to be instantiated on different robotic platforms and applications. Communication and security aspects are also investigated.
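As a rough illustration of the modular idea, the sketch below separates perception, planning, and control behind small interfaces so the same loop can be instantiated with different concrete modules; all class and method names are hypothetical and do not correspond to the actual CaRINA software interfaces.

```python
# Toy sketch of a modular pipeline: the decision loop stays the same while
# concrete perception/planning/control modules are swapped per platform.
# All names here are assumptions for illustration only.
from abc import ABC, abstractmethod

class Perception(ABC):
    @abstractmethod
    def sense(self) -> dict: ...

class Planner(ABC):
    @abstractmethod
    def plan(self, world: dict) -> list: ...

class Controller(ABC):
    @abstractmethod
    def act(self, path: list) -> None: ...

class Pipeline:
    """Same decision loop regardless of which concrete modules are plugged in."""
    def __init__(self, perception: Perception, planner: Planner, controller: Controller):
        self.perception, self.planner, self.controller = perception, planner, controller

    def step(self) -> None:
        world = self.perception.sense()
        path = self.planner.plan(world)
        self.controller.act(path)

# Trivial stand-in modules to show how a platform instantiates the pipeline.
class FakeLidar(Perception):
    def sense(self): return {"obstacles": []}

class StraightLine(Planner):
    def plan(self, world): return [(0, 0), (1, 0), (2, 0)]

class PrintController(Controller):
    def act(self, path): print("following", path)

Pipeline(FakeLidar(), StraightLine(), PrintController()).step()
```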
ieee intelligent vehicles symposium | 2012
Patrick Yuri Shinzato; Valdir Grassi; Fernando Santos Osório; Denis F. Wolf
The development of autonomous vehicles is a highly relevant research topic in mobile robotics. Road recognition using visual information is an important capability for autonomous navigation in urban environments. Over the last three decades, a large number of visual road recognition approaches have appeared in the literature. This paper proposes a novel visual road detection system based on multiple artificial neural networks that can identify the road based on color and texture. Several features are used as inputs to the artificial neural networks, such as the average, entropy, energy, and variance of different color channels (RGB, HSV, YUV). As a result, our system is able to estimate the classification and the confidence factor of each part of the environment detected by the camera. Experimental tests have been performed in several situations in order to validate the proposed approach.
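A minimal sketch of the kind of block-wise feature extraction the abstract describes (average, entropy, energy, and variance per color channel) is shown below; the block size, histogram binning, and the absence of the trained neural networks are assumptions made purely for illustration.

```python
# Sketch of block-wise feature extraction: average, entropy, energy and
# variance per colour channel of an image block. The trained ANNs themselves
# are not reproduced; the features would be their inputs.
import numpy as np

def block_features(block: np.ndarray, bins: int = 32) -> np.ndarray:
    """block: H x W x 3 array in one colour space (e.g. RGB), values in [0, 255]."""
    feats = []
    for c in range(block.shape[2]):
        channel = block[..., c].astype(np.float64)
        hist, _ = np.histogram(channel, bins=bins, range=(0, 255))
        p = hist / hist.sum()
        p = p[p > 0]                       # drop empty bins before the log
        feats.extend([
            channel.mean(),                # average
            -np.sum(p * np.log2(p)),       # entropy
            np.sum(p ** 2),                # energy
            channel.var(),                 # variance
        ])
    return np.array(feats)

# Example: a 10x10 block yields a 12-dimensional vector; features from other
# colour spaces (HSV, YUV) would be computed the same way and concatenated.
rng = np.random.default_rng(0)
print(block_features(rng.integers(0, 256, size=(10, 10, 3))))
```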
intelligent vehicles symposium | 2014
Carlos Massera Filho; Denis F. Wolf; Valdir Grassi; Fernando Santos Osório
Robust and stable control is a requirement for navigation of self-driving cars. Some approaches in the literature depend on a high number of parameters that are often difficult to estimate. A poor selection of these parameters often considerably reduces the efficiency of the control algorithms. In this paper we propose a simplified control system for autonomous vehicles that depends on a reduced number of parameters that can be easily set. This control system is composed of longitudinal and lateral controllers. The longitudinal controller is responsible for regulating the vehicle's cruise velocity, while the lateral controller steers the vehicle's wheels for path tracking. Simulated and experimental tests have been carried out with the CaRINA II platform on the university campus, with positive results.
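A generic sketch of a two-part controller in this spirit is shown below: a proportional regulator for cruise velocity and a pure-pursuit steering law for path tracking. The gains, the bicycle-model wheelbase, and the choice of pure pursuit are assumptions for illustration, not the controllers reported for CaRINA II.

```python
# Generic sketch of a velocity regulator plus a pure-pursuit steering law.
# Gains, the wheelbase value and the pure-pursuit choice are illustrative
# assumptions rather than the controllers used on CaRINA II.
import math

def longitudinal_control(v_ref: float, v: float, kp: float = 0.8) -> float:
    """Proportional throttle/brake command regulating the cruise velocity."""
    return kp * (v_ref - v)

def lateral_control(x: float, y: float, yaw: float,
                    target: tuple[float, float],
                    wheelbase: float = 2.6) -> float:
    """Pure-pursuit steering angle toward a look-ahead point on the path."""
    dx, dy = target[0] - x, target[1] - y
    alpha = math.atan2(dy, dx) - yaw        # bearing of the look-ahead point
    lookahead = math.hypot(dx, dy)          # distance to the look-ahead point
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Example: vehicle at the origin heading along x, target 5 m ahead and 1 m left.
print(longitudinal_control(v_ref=3.0, v=2.4))
print(lateral_control(0.0, 0.0, 0.0, (5.0, 1.0)))
```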
IFAC Proceedings Volumes | 2013
André Chaves Magalhães; Marcos Prado; Valdir Grassi; Denis F. Wolf
Recent advances in mobile robotics research have contributed to the development of autonomous driving systems for intelligent robotic vehicles. The motion planner is the component of the intelligent system responsible for planning a path that leads the vehicle from its current state to the desired goal state while avoiding obstacles in the environment. This paper describes the use of a motion planning method based on a lattice state space and anytime dynamic A* applied to our autonomous vehicle for navigation in a semi-structured urban environment. We created a 3D simulation model of our vehicle, implemented the motion planner approach described here, and conducted experiments in both simulated and real parking lots.
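To make the lattice idea concrete, the sketch below runs a plain A* search over a small (x, y, heading) lattice with a handful of motion primitives; the primitives, costs, and grid are invented for illustration, and the anytime dynamic A* variant used in the paper (which inflates the heuristic and repairs plans as the map changes) is not reproduced.

```python
# Plain A* over a small (x, y, heading) state lattice with motion primitives.
# This only illustrates the lattice idea, not the anytime dynamic A* planner.
import heapq, math

# Motion primitives indexed by heading (0=E, 1=N, 2=W, 3=S):
# (dx, dy, new_heading, cost); straight motion is cheaper than turning.
PRIMITIVES = {
    0: [(1, 0, 0, 1.0), (1, 1, 1, 1.5), (1, -1, 3, 1.5)],
    1: [(0, 1, 1, 1.0), (-1, 1, 2, 1.5), (1, 1, 0, 1.5)],
    2: [(-1, 0, 2, 1.0), (-1, -1, 3, 1.5), (-1, 1, 1, 1.5)],
    3: [(0, -1, 3, 1.0), (1, -1, 0, 1.5), (-1, -1, 2, 1.5)],
}

def lattice_astar(start, goal_xy, obstacles, size=10):
    """start = (x, y, heading); returns a list of lattice states or None."""
    def h(s):  # admissible straight-line heuristic to the goal cell
        return math.hypot(s[0] - goal_xy[0], s[1] - goal_xy[1])

    open_list = [(h(start), 0.0, start, [start])]
    closed = set()
    while open_list:
        _, g, state, path = heapq.heappop(open_list)
        if (state[0], state[1]) == goal_xy:
            return path
        if state in closed:
            continue
        closed.add(state)
        for dx, dy, hd, cost in PRIMITIVES[state[2]]:
            nxt = (state[0] + dx, state[1] + dy, hd)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            if (nxt[0], nxt[1]) in obstacles or nxt in closed:
                continue
            heapq.heappush(open_list, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

print(lattice_astar((0, 0, 0), (6, 4), obstacles={(3, 1), (3, 2), (3, 3)}))
```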
Journal of Intelligent and Robotic Systems | 2012
Jun Okamoto; Valdir Grassi; Paulo F. S. Amaral; Benedito Geraldo Miglio Pinto; Daniel R. Pipa; Gustavo Pinto Pires; Marcus Vinicius Maciel Martins
Inspection for corrosion of gas storage spheres along the welding seam lines must be done periodically. Until now, this inspection has been done manually, with a high associated cost and a high risk of injury to inspection personnel. The Brazilian Petroleum Company, Petrobras, is seeking cost reduction and personnel safety through the use of autonomous robot technology. This paper presents the development of a robot capable of autonomously following a welding line while transporting corrosion measurement sensors. The robot uses a pair of sensors, each composed of a laser source and a video camera, that allows estimation of the center of the welding line. The robot uses four magnetic wheels to adhere to the sphere's surface and was constructed so that three wheels are always in contact with the sphere's metallic surface, which guarantees enough magnetic attraction to hold the robot on the sphere at all times. Additionally, an independently actuated table for attaching the corrosion inspection sensors was included for small position corrections. Tests were conducted in the laboratory and on a real sphere, showing the validity of the proposed approach and implementation.
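One simplistic way to estimate the lateral position of the seam from a camera image of the projected laser stripe is to take an intensity-weighted centroid of the brightest image columns, as sketched below; the thresholding rule and image layout are assumptions, and the robot's actual sensor processing is certainly more elaborate.

```python
# Sketch of estimating the welding-seam centre (image column) from a camera
# view of the projected laser stripe. Illustrative only; not the robot's
# actual laser/vision processing.
import numpy as np

def weld_line_center(stripe_image: np.ndarray) -> float:
    """stripe_image: H x W grayscale image of the laser stripe over the seam.
    Returns the estimated column of the seam centre as the intensity-weighted
    centroid of the columns clearly brighter than the background."""
    column_energy = stripe_image.astype(np.float64).sum(axis=0)
    mask = column_energy > column_energy.mean() + column_energy.std()
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return float(np.argmax(column_energy))
    return float(np.average(cols, weights=column_energy[cols]))

# Example with a synthetic image whose bright band is centred at column 42.
img = np.zeros((20, 80))
img[:, 40:45] = 255.0
print(weld_line_center(img))  # prints ~42
```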
latin american robotics symposium | 2010
Daniel Alves Barbosa de Oliveira Vaz; Roberto S. Inoue; Valdir Grassi
Motion planning is one of the fundamental problems in autonomous robot navigation. However, the dynamic model of the robot is usually not considered in most basic motion planners, resulting in trajectories that may be difficult for real robots to track. This paper describes sampling-based kinodynamic motion planning for a skid-steering mobile robot using the Rapidly-exploring Random Tree (RRT) method. Experimental results using a Pioneer 3-AT are presented. As a consequence of using the kinematic and dynamic model of the robot for planning, a simple proportional controller could be used to track the generated trajectory.
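A minimal kinodynamic-style RRT can be sketched by forward-simulating sampled controls through a motion model so that every tree edge is feasible; the unicycle-like model, control bounds, and step sizes below are illustrative assumptions rather than the skid-steering model identified in the paper, and collision checking is omitted.

```python
# Minimal kinodynamic RRT sketch: random controls are forward-simulated with a
# unicycle-like model so every edge of the tree is dynamically reachable.
# Model, bounds and step sizes are illustrative; no obstacles are checked.
import math, random

def simulate(state, v, w, dt=0.2, steps=5):
    """Integrate a simple unicycle model (x, y, theta) under controls (v, w)."""
    x, y, th = state
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return (x, y, th)

def rrt(start, goal, iters=2000, goal_tol=0.5):
    random.seed(1)
    nodes = [start]
    parents = {0: None}
    for _ in range(iters):
        # Sample a point (with a small goal bias) and find the nearest node.
        sample = goal if random.random() < 0.1 else (random.uniform(-5, 5), random.uniform(-5, 5))
        idx = min(range(len(nodes)),
                  key=lambda i: math.hypot(nodes[i][0] - sample[0], nodes[i][1] - sample[1]))
        # Keep the sampled control whose simulated end state lands closest to the sample.
        best = min((simulate(nodes[idx], random.uniform(0.1, 0.8), random.uniform(-1.0, 1.0))
                    for _ in range(5)),
                   key=lambda s: math.hypot(s[0] - sample[0], s[1] - sample[1]))
        parents[len(nodes)] = idx
        nodes.append(best)
        if math.hypot(best[0] - goal[0], best[1] - goal[1]) < goal_tol:
            return nodes, parents, len(nodes) - 1
    return nodes, parents, None  # None: goal not reached within the budget

nodes, parents, goal_idx = rrt(start=(0.0, 0.0, 0.0), goal=(3.0, 2.0))
print("goal node index:", goal_idx)
```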
Archive | 2009
João Yoshiyuki Ishihara; Marco H. Terra; Geovany Araujo Borges; Glauco Garcia Scandaroli; Roberto S. Inoue; Valdir Grassi
In this chapter we are interested in designing estimators for the internal variables of two kinds of robots, wheeled mobile robots and robotic leg prostheses, based on a recently developed robust descriptor Kalman filter. The proposed approach is reasonable since the descriptor formulation can cope with algebraic restrictions on the system's signals. Further, the recursiveness of this class of filter is useful for on-line applications. Different procedures have been used to deal with the mobile robot localization problem. Measurement systems based on odometric and inertial sensors and ultrasound are self-contained, simple to use, and able to guarantee a high data rate. However, the problem with these systems is that they integrate relative increments, and the localization errors grow considerably over time if an appropriate sensor fusion algorithm is not used; see for instance [17], [18] and references therein. The examples developed in these references do not take into account robust approaches along the lines we propose here. In the context of robotic leg prostheses, we deal with the development of devices for above-knee amputees. Robotic prostheses are devices intended to replace parts of the human body. They should be able to sense the environment and comply with the movement of the body so as to aid the user in performing the most common tasks. This is a very interesting and current research topic [7]. Environment sensing is one of the most difficult tasks, mainly in the case of leg prostheses, because of the great diversity of walking conditions and terrains. The use of electromyographic (EMG) signal processing for detecting the main properties of the walking terrain is the focus of [15]. However, in the case of above-knee prostheses, there is no EMG signal available to allow automatic reorientation of the robotic foot. When the foot of a robotic leg is not in contact with the ground, its configuration should be estimated to allow its control with respect to the ground. This can be useful for controlling its orientation, mainly at the end of the phase in which the foot is not in contact with the ground. In this chapter, we show a solution to this problem using multisensor data fusion with a robust descriptor Kalman filter. The chapter is divided into three main parts. In the first part we present basic definitions and concepts of descriptor systems and some examples to clarify the use of this kind of approach. In the second part we present three algorithms for the computation of the