
Publication


Featured research published by Eric Rohmer.


Intelligent Robots and Systems | 2013

V-REP: A versatile and scalable robot simulation framework

Eric Rohmer; Surya P. N. Singh; Marc Freese

From exploring planets to cleaning homes, the reach and versatility of robotics is vast. The integration of actuation, sensing and control makes robotic systems powerful, but complicates their simulation. This paper introduces a versatile, scalable, yet powerful general-purpose robot simulation framework called V-REP. The paper discusses the utility of a portable and flexible simulation framework that allows for direct incorporation of various control techniques. This renders simulations and simulation models more accessible to the general public by reducing the complexity of deploying simulation models. It also increases productivity by offering built-in, ready-to-use functionalities and a multitude of programming approaches, which allows for a wide range of applications including rapid algorithm development, system verification, rapid prototyping, and deployment in cases such as safety/remote monitoring, training and education, hardware control, and factory automation simulation.
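As a rough sketch of the kind of external control V-REP exposes, the snippet below connects to a running simulation through the legacy remote API Python bindings (the vrep module distributed with V-REP) and drives one joint. It assumes the scene is listening on the default port 19997 and contains a joint named 'leftMotor', which is a placeholder name.

```python
# Minimal sketch: driving a simulated joint through V-REP's legacy remote API.
# Assumes vrep.py and the remoteApi library from the V-REP distribution are on
# the path, and that the open scene contains a joint named 'leftMotor' (placeholder).
import vrep

client_id = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
if client_id == -1:
    raise RuntimeError('Could not connect to the V-REP remote API server')

vrep.simxStartSimulation(client_id, vrep.simx_opmode_blocking)

# Fetch the joint handle and command a constant target velocity.
err, motor = vrep.simxGetObjectHandle(client_id, 'leftMotor', vrep.simx_opmode_blocking)
if err == vrep.simx_return_ok:
    vrep.simxSetJointTargetVelocity(client_id, motor, 1.0, vrep.simx_opmode_oneshot)

vrep.simxStopSimulation(client_id, vrep.simx_opmode_blocking)
vrep.simxFinish(client_id)
```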


2013 IEEE Symposium on Computational Intelligence in Rehabilitation and Assistive Technologies (CIRAT) | 2013

Effects of behavior network as a suggestion system to assist BCI users

Klaus Raizer; Eric Rohmer; André Paraense; Ricardo Ribeiro Gudwin

This work describes the development of an intelligent agent responsible for making relevant action suggestions to a BCI user in the context of an intelligent environment. For the development of this agent, a modified version of a behavior network, embedded into a neuroscience-inspired cognitive architecture, has been implemented. A new soft-preconditions list has been introduced into the original model so that it can be used as an assistant agent. A number of simulated experiments were performed to evaluate whether the behavior network indeed presented valuable suggestions and performed as expected. Results suggest that the agent was able to take into account predefined goals, scheduled events, and topological information about the environment in order to deliberate over the possible behaviors and make relevant suggestions. The strong points and drawbacks of this approach are discussed and future work is suggested.
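As a loose, hypothetical illustration of the soft-preconditions idea (the class and scoring rule below are invented for this sketch, not taken from the paper), a behavior remains executable when its hard preconditions hold, while satisfied soft preconditions only raise its activation so the network ranks it higher as a suggestion:

```python
# Hypothetical sketch of a behavior node with hard and soft preconditions.
# The scoring rule is illustrative only; the paper embeds this in a full
# behavior network with activation spreading.
from dataclasses import dataclass, field

@dataclass
class Behavior:
    name: str
    hard_preconditions: set = field(default_factory=set)   # must all hold
    soft_preconditions: set = field(default_factory=set)   # only boost activation
    base_activation: float = 0.0

    def executable(self, world_state: set) -> bool:
        return self.hard_preconditions <= world_state

    def activation(self, world_state: set, soft_weight: float = 0.5) -> float:
        satisfied = len(self.soft_preconditions & world_state)
        return self.base_activation + soft_weight * satisfied


def suggest(behaviors, world_state):
    """Rank executable behaviors by activation and return them as suggestions."""
    candidates = [b for b in behaviors if b.executable(world_state)]
    return sorted(candidates, key=lambda b: b.activation(world_state), reverse=True)


# Example: the agent suggests turning on the TV when the user is in the living room.
behaviors = [
    Behavior('turn_on_tv', {'tv_off'}, {'user_in_living_room', 'evening'}, 0.2),
    Behavior('open_door', {'door_closed'}, {'visitor_detected'}, 0.2),
]
state = {'tv_off', 'door_closed', 'user_in_living_room'}
print([b.name for b in suggest(behaviors, state)])
```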


2015 IEEE Thirty Fifth Central American and Panama Convention (CONCAPAN XXXV) | 2015

Galileo bionic hand: sEMG activated approaches for a multifunction upper-limb prosthetic

Julio Fajardo; Ali Lemus; Eric Rohmer

Surface electromyography (sEMG), commonly used in upper-limb prostheses, requires expensive medical equipment to get accurate results, and even then only a few actions can be classified. We propose an sEMG-activated embedded system based on digital signal processing and machine learning to interpret the user's intention, with the purpose of controlling a low-cost 3D-printed hand prosthesis with multiple degrees of freedom (DOF). The system has three different operating modes with a user-friendly Human Machine Interface (HMI) in order to increase the number of customized hand postures that can be performed by the user, providing functionalities that fit their daily chores while allowing the use of inexpensive surface-mounted passive electrodes to keep the approach low cost. While sEMG activation allows the user to consciously perform the desired action, a touchscreen makes it possible to select different predefined actions and operating modes, as well as providing the necessary visual feedback. Moreover, in another operating mode, a speech recognition module recognizes user speech in three different languages, giving the user access to more sEMG-activated postures. Finally, an operating mode based on Artificial Neural Networks (ANN) classifies five hand gestures that can be easily accomplished by below-elbow amputees. The system was tested and achieved high accuracy and responsiveness in the different modes of operation.
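A minimal sketch of the ANN-based operating mode, assuming windowed sEMG with simple RMS features and a small scikit-learn feedforward network; the data, window size, and channel count below are placeholders rather than the authors' setup:

```python
# Illustrative sketch (not the authors' pipeline): classify 5 hand gestures from
# windowed sEMG using per-channel RMS features and a small feedforward network.
import numpy as np
from sklearn.neural_network import MLPClassifier

def rms_features(window):
    """Root-mean-square per channel; window shape is (samples, channels)."""
    return np.sqrt(np.mean(np.square(window), axis=0))

# Placeholder data: 200 windows of 200 samples over 4 electrode channels.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 200, 4))
labels = rng.integers(0, 5, size=200)          # 5 gesture classes

X = np.array([rms_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X, labels)

print('predicted gesture:', clf.predict(X[:1])[0])
```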


International Conference on Control Science and Systems Engineering | 2014

Arrangement map for task planning and localization for an autonomous robot in a large-scale environment

Paulo Gurgel Pinheiro; Jacques Wainer; Eleri Cardozo; Eric Rohmer

This paper presents a planning approach for solving the global localization problem using an arrangement of rooms to compress the original map. The approach relies on architectural design features of the building, such as walls and doors, to help the robot find the best route. Lighter POMDP plans are generated only for representative rooms of the environment, decreasing the size of the set of possible states. The plans are created offline only once and reused indefinitely, being combined online regardless of the mission. The planner only requires as input the environment map, the robot's actions, and its possible observations. We demonstrate the single-level approach and the map decomposition with experiments on both the V-REP simulator and a Pioneer 3DX robot. This approach allows the robot to perform both localization and task planning in a large-scale environment.
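The room-level localization can be pictured as a discrete Bayes filter over an arrangement of rooms; the sketch below is a simplified illustration with invented transition and observation probabilities, not the paper's POMDP plans:

```python
# Hypothetical sketch: room-level belief update for global localization.
# The rooms, transition model, and observation model are illustrative placeholders.
import numpy as np

rooms = ['corridor', 'office', 'lab']
belief = np.full(len(rooms), 1.0 / len(rooms))       # uniform prior: robot is lost

# P(next_room | room) for a single 'move through the nearest door' action; rows sum to 1.
transition = np.array([
    [0.2, 0.4, 0.4],
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
])

# P(observation = 'see two doors' | room)
obs_likelihood = np.array([0.7, 0.2, 0.1])

# Prediction step (action) followed by correction step (observation).
belief = transition.T @ belief
belief = obs_likelihood * belief
belief /= belief.sum()

print(dict(zip(rooms, np.round(belief, 3))))
```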


International Conference on Ubiquitous Robots and Ambient Intelligence | 2013

Shared control for assistive mobile robots based on vector fields

Leonardo Olivi; Ricardo Souza; Eric Rohmer; Eleri Cardozo

Technologies such as Brain-Computer Interfaces (BCI) and electromyography (EMG) allow people with limited mobility to interact with devices such as computers, home appliances, and mobile robots. However, low-cost BCI and EMG devices have not matured yet, and these technologies present relatively low signal-to-noise ratios and classification accuracy. When BCI or EMG is employed to manually control a mobile robot, shared control must therefore be inserted into the control loop to compensate for misinterpreted commands. This paper presents a novel shared control approach based on vector fields for the manual navigation of assistive mobile robots. Unlike other approaches, which take full control of the robot in certain situations, this technique leaves full control with the user. It also reduces the interventions needed to correct the navigation route when the user's commands are misclassified. Results show that it is a simple, fast, and effective technique.
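A minimal sketch of the vector-field idea, in which the user's command is preserved and a repulsive field from nearby obstacles bends the resulting motion; the field shape, gains, and blending rule below are simplifications of my own, not the authors' formulation:

```python
# Illustrative sketch of vector-field shared control: the user's command is kept,
# but a repulsive field from nearby obstacles is added so the resulting motion
# bends away from collisions. The field shape and gains are invented here.
import numpy as np

def repulsive_field(robot_pos, obstacles, influence_radius=1.0, gain=0.5):
    field = np.zeros(2)
    for obs in obstacles:
        diff = robot_pos - obs
        dist = np.linalg.norm(diff)
        if 1e-6 < dist < influence_radius:
            # Push away from the obstacle, more strongly when closer.
            field += gain * (1.0 / dist - 1.0 / influence_radius) * diff / dist
    return field

def shared_command(user_cmd, robot_pos, obstacles):
    """User command plus repulsive correction; the user keeps control authority."""
    user_cmd = np.asarray(user_cmd, float)
    corrected = user_cmd + repulsive_field(robot_pos, np.asarray(obstacles, float))
    norm = np.linalg.norm(corrected)
    # Rescale so the corrected command keeps the user's commanded speed.
    return corrected / norm * np.linalg.norm(user_cmd) if norm > 1e-6 else corrected

print(shared_command([1.0, 0.0], np.array([0.0, 0.0]), [[0.5, 0.1]]))
```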


Robot and Human Interactive Communication | 2015

A novel platform supporting multiple control strategies for assistive robots

Eric Rohmer; Paulo Gurgel Pinheiro; Klaus Raizer; Leonardo Olivi; Eleri Cardozo

This work presents a platform for the development of a functional prototype of an assistive robotic vehicle supporting various control strategies in the context of a smart environment. The implemented framework allows an operator with a disability to interact with a smart environment by means of hands-free devices (small movements of the face or limbs captured through electromyography (EMG) or electroencephalography (EEG), among others). The present work also details the integration and testing of four control strategies (manual control, shared control, point-to-go, and fully autonomous), giving the user the opportunity to choose among them based on the structure of the environment, personal preference, or capability. An intelligent assistive agent that helps the operator navigate the user interface and interact with the environment was integrated into the framework. The performance of each control strategy in a common scenario is compared to validate the platform and the implemented navigation algorithms, and experimental results are presented and discussed.
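One way to picture such a platform is a strategy-pattern dispatcher that lets the operator switch the active control mode at runtime; the sketch below is purely illustrative, with invented class and method names rather than the paper's architecture:

```python
# Hypothetical sketch of runtime selection among control strategies.
# The strategy names mirror the paper; the interfaces are invented for illustration.
from abc import ABC, abstractmethod

class ControlStrategy(ABC):
    @abstractmethod
    def compute_command(self, user_input, sensors):
        """Return a (linear, angular) velocity command for the vehicle."""

class ManualControl(ControlStrategy):
    def compute_command(self, user_input, sensors):
        return user_input                      # pass the operator's command through

class SharedControl(ControlStrategy):
    def compute_command(self, user_input, sensors):
        # Placeholder: a real implementation would blend the command with
        # an obstacle-avoidance term computed from the sensors.
        v, w = user_input
        return (v * 0.8, w * 0.8)

class Wheelchair:
    def __init__(self):
        self.strategies = {'manual': ManualControl(), 'shared': SharedControl()}
        self.active = self.strategies['manual']

    def select(self, name):                    # chosen by the operator via the UI
        self.active = self.strategies[name]

    def step(self, user_input, sensors=None):
        return self.active.compute_command(user_input, sensors)

chair = Wheelchair()
chair.select('shared')
print(chair.step((0.5, 0.1)))
```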


Latin American Robotics Symposium | 2017

A hybrid approach for the actuation of upper limb prostheses based on computer vision

Dandara Thamilys Guedes de Andrade; Akari Ishikawa; Amparo Muñoz; Eric Rohmer

When a prosthesis with a complicated process for selecting grip modes is presented to an amputee, they tend to give up using the device. Nowadays, most commercial interfaces for prosthetic hands are based on electromyography (EMG), and they are difficult to control due to the complexity of the activation methods. The multimodal approaches proposed in the literature to deal with this problem have either limited the user to one way of interacting with objects or have not overcome the gap between research and clinical application. To change this scenario, we propose a hybrid technique based on computer vision and EMG for the activation of an upper-limb prosthesis. With this interface, the user sends a simple command through an EMG interface to take a picture of the object they want to interact with, and the system then suggests the interaction most likely to be correct based on the recognized object. The main contribution is a prosthetic control interface that allows the user to choose among types of interactions to accomplish the desired task using only three different movements. Without the need to learn more than three patterns of contractions, the control interface becomes easier to use and demands less cognitive effort from users. Moreover, the platform presented in this paper does not limit the number of possible interactions the user can perform.
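A minimal sketch of the hybrid flow, assuming a placeholder object recognizer and a single-threshold EMG confirmation; the label-to-grasp table and threshold are invented for illustration, not taken from the paper:

```python
# Illustrative sketch of the hybrid idea: a recognized object label suggests a
# grasp, and a single EMG contraction confirms it. The label-to-grasp table,
# recognizer, and EMG threshold are placeholders, not the authors' implementation.
GRASP_FOR_OBJECT = {
    'bottle': 'cylindrical_grasp',
    'coin': 'pinch_grasp',
    'doorknob': 'spherical_grasp',
}

def recognize_object(image):
    # Placeholder for an object-recognition model run on the prosthesis camera.
    return 'bottle'

def emg_confirmed(emg_rms, threshold=0.3):
    # A single above-threshold contraction acts as the "accept suggestion" command.
    return emg_rms > threshold

def choose_grasp(image, emg_rms):
    label = recognize_object(image)
    suggestion = GRASP_FOR_OBJECT.get(label, 'power_grasp')
    return suggestion if emg_confirmed(emg_rms) else None

print(choose_grasp(image=None, emg_rms=0.5))   # -> 'cylindrical_grasp'
```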


Advanced Robotics and its Social Impacts | 2017

An affordable open-source multifunctional upper-limb prosthesis with intrinsic actuation

Julio Fajardo; Victor Ferman; Ali Lemus; Eric Rohmer

The strict development processes of commercial upper-limb prostheses and the complexity of research projects make them expensive for end users, both in terms of acquisition and maintenance. The advent of 3D printers and the internet allows for distributed open-source research projects that follow new design principles; these take into account simplicity without neglecting performance in terms of grasping capabilities, power consumption, and controllability. We propose a simple yet functional design based on 3D printing with the aim of reducing cost and saving time in the manufacturing process. Its modular, parametric, and self-contained design is intended to fit a wide range of people with different transradial amputation levels. Moreover, the system includes an original, user-friendly user-prosthesis interface (UPI) that triggers and increases the number of customized hand postures the user can perform. Surface electromyography (sEMG) control allows the user to consciously activate the prosthetic actuation mechanism, while a graphical interface makes it possible to select among different sets of predefined gestures. A five-fingered prosthetic hand integrating intuitive myoelectric control and a graphical UPI was tested, showing good mechanical performance as well as high accuracy and responsiveness of the sEMG controller.


XXV Congresso de Iniciação Científica da Unicamp | 2017

Human machine interface for hand prosthesis based on computer vision and electromyography

Akari Ishikawa; Eric Rohmer; Dandara Thamilys Guedes de Andrade; Amparo Muñoz

Controlling an upper-limb prosthesis using muscular contractions requires intense and lengthy training from amputees, and the resulting frustration can lead to abandonment of the device. In this project we propose a Human Machine Interface (HMI) that relies not only on reading muscular activity but also on an innovative solution based on computer vision. With a camera attached to the prosthesis, we designed a system that identifies objects and suggests interactions based on the user's preferences.


emerging technologies and factory automation | 2015

Laser based driving assistance for smart robotic wheelchairs

Eric Rohmer; Paulo Gurgel Pinheiro; Eleri Cardozo; Mauro Bellone; Giulio Reina

This paper presents ongoing work toward a novel driving assistance system for a robotic wheelchair, aimed at people paralyzed from the neck down. The user's head posture is tracked in order to project, with a pan-tilt-mounted laser, a colored spot on the ground ahead. The laser dot on the ground represents a potential close-range destination the operator wants to reach autonomously. The wheelchair is equipped with a low-cost depth camera (Kinect sensor) that builds a traversability map in order to determine whether the designated destination is reachable by the chair. If it is reachable, the red laser dot turns green, and the operator can validate the wheelchair's destination via an electromyography (EMG) device that detects the contraction of a specific muscle group. This validating action triggers the calculation of a path toward the laser-pointed target, based on the traversability map. The wheelchair is then controlled to follow this path autonomously. In the future, the stream of 3D point clouds acquired during the process will be used to map the environment and self-localize the wheelchair, in order to correct the pose estimate derived from the wheel encoders.
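The reachability decision can be pictured as a check on a traversability grid followed by path planning; the sketch below uses a toy grid and breadth-first search in place of the paper's Kinect-derived traversability map and planner:

```python
# Illustrative sketch: decide whether the laser-designated cell is reachable on a
# traversability grid (1 = traversable, 0 = blocked) and, if so, plan a path with
# breadth-first search. Grid contents and coordinates are placeholders.
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of grid cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back through predecessors
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

traversability = [
    [1, 1, 0, 1],
    [1, 1, 0, 1],
    [1, 1, 1, 1],
]
target = (0, 3)                            # cell under the laser dot
path = plan_path(traversability, (0, 0), target)
print('green (reachable)' if path else 'red (unreachable)', path)
```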

Collaboration


Dive into Eric Rohmer's collaborations.

Top Co-Authors

Eleri Cardozo
State University of Campinas

Akari Ishikawa
State University of Campinas

Amparo Muñoz
State University of Campinas

André Paraense
State University of Campinas

Jacques Wainer
State University of Campinas

Leonardo Olivi
State University of Campinas