
Publications


Featured research published by Sven Cremer.


International Conference on Robotics and Automation | 2015

Intent-aware adaptive admittance control for physical Human-Robot Interaction

Isura Ranatunga; Sven Cremer; Dan O. Popa; Frank L. Lewis

Effective physical Human-Robot Interaction (pHRI) needs to account for variable human dynamics and also predict human intent. Recently, there has been considerable progress in adaptive impedance and admittance control for human-robot interaction, but fewer contributions have addressed online adaptation schemes that can accommodate users with varying physical strength and skill level during interaction with a robot. The goal of this paper is to present and evaluate a novel adaptive admittance controller that can incorporate human intent, nominal task models, and variations in the robot dynamics. An outer-loop controller is developed using an ARMA model tuned via an adaptive inverse control technique, while an inner-loop neuroadaptive controller linearizes the robot dynamics. Working in conjunction and online, this two-loop technique offers an elegant way to decouple the pHRI problem. Experimental results comparing the performance of different types of admittance controllers show that efficient online adaptation of the robot admittance model for different human subjects can be achieved. Specifically, the adaptive admittance controller reduces jerk, resulting in smoother human-robot interaction.
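The outer-loop idea can be sketched as a discrete ARMA admittance filter whose coefficients are tuned online by an LMS-style update, standing in here for the paper's adaptive inverse control technique; the filter orders, gains, and update law below are illustrative assumptions, not the published design.

```python
import numpy as np

# Minimal sketch: an ARMA admittance filter maps measured human force to a
# commanded velocity, and its coefficients adapt online (LMS-style) so the
# command tracks a nominal task-model velocity. All parameters are illustrative.
class AdaptiveAdmittance:
    def __init__(self, na=2, nb=2, lr=1e-3):
        self.a = np.zeros(na)        # AR coefficients on past commanded velocities
        self.b = np.full(nb, 0.01)   # MA coefficients on past measured forces
        self.v_hist = np.zeros(na)
        self.f_hist = np.zeros(nb)
        self.lr = lr                 # adaptation gain

    def step(self, force, v_task):
        # ARMA admittance: force/velocity history -> commanded velocity
        v_cmd = self.a @ self.v_hist + self.b @ self.f_hist
        # Gradient (LMS) update drives the command toward the task-model velocity
        err = v_task - v_cmd
        self.a += self.lr * err * self.v_hist
        self.b += self.lr * err * self.f_hist
        # Shift the regressor histories
        self.v_hist = np.roll(self.v_hist, 1); self.v_hist[0] = v_cmd
        self.f_hist = np.roll(self.f_hist, 1); self.f_hist[0] = force
        return v_cmd

ctrl = AdaptiveAdmittance()
print(ctrl.step(force=2.0, v_task=0.1))  # one control-loop tick
```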


Conference on Automation Science and Engineering | 2015

Neuroadaptive control for safe robots in human environments: A case study

Isura Ranatunga; Sven Cremer; Frank L. Lewis; Dan O. Popa

Safety is an important consideration during physical Human-Robot Interaction (pHRI). Recently the community has tested numerous new safety features for robots, including accurate joint torque sensing, gravity compensation, reduced robot mass, and joint torque limits. Although these methods have reduced the risk of high-energy collisions, they come at the cost of reduced speed or accuracy of robot manipulators. Indeed, because lightweight robots are capable of higher velocities, knowledge of dynamical models is required for precise control. However, feedforward compensation is difficult to implement on lightweight robots with flexible and nonlinear joints, links, cables, and so on. Furthermore, unknown objects picked up by the robot significantly alter the dynamics, leading to deteriorated performance unless high controller gains are used. This paper presents an online learning controller, with convergence guarantees, that learns the robot dynamics on the fly and provides feedforward compensation. The resulting joint torques are significantly lower than conventional independent joint control efforts, thus improving the safety of the robot. Experiments on a PR2 robot arm validate the effectiveness of the neuroadaptive controller in reducing control torques during high-speed free motion, lifting of unknown objects, and collisions with the environment.
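The flavor of such a controller can be illustrated with a single-hidden-layer network (fixed random input layer, output weights tuned online) in the style of Lewis-type neuroadaptive control; the gains, feature count, and tuning law here are simplifying assumptions for illustration only.

```python
import numpy as np

# Toy sketch: a NN with fixed random features learns a feedforward torque
# online, so the feedback term (and hence peak joint torques) can stay small.
class NeuroadaptiveController:
    def __init__(self, n_joints=7, n_feat=64, seed=0):
        rng = np.random.default_rng(seed)
        self.V = rng.normal(size=(3 * n_joints, n_feat))  # fixed input layer
        self.W = np.zeros((n_feat, n_joints))             # learned output weights
        self.Kv, self.Gamma, self.lam = 5.0, 0.1, 10.0    # illustrative gains

    def control(self, q, qd, q_des, qd_des):
        r = (qd_des - qd) + self.lam * (q_des - q)        # filtered tracking error
        f = np.tanh(self.V.T @ np.concatenate([q, qd, qd_des]))  # hidden features
        tau = self.W.T @ f + self.Kv * r                  # NN feedforward + small feedback
        self.W += self.Gamma * np.outer(f, r)             # online weight-tuning law
        return tau

ctrl = NeuroadaptiveController()
z = np.zeros(7)
print(ctrl.control(z, z, z + 0.1, z))                     # torques for one tick
```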


IEEE International Symposium on Assembly and Manufacturing | 2016

On the performance of the Baxter research robot

Sven Cremer; Lawrence Mastromoro; Dan O. Popa

The rise of collaborative industrial robotics in the past few years is poised to have a significant impact on manufacturing. Many have joined the movement toward developing a more sustainable and affordable collaborative robotic workforce that can safely work alongside humans; perhaps none more so than Rethink Robotics. This paper provides a performance assessment of Rethink's Baxter research robot: its kinematic precision was compared experimentally to that of a conventional industrial arm using a point-to-point motion test and a writing task. In addition, the Denavit-Hartenberg (DH) parameters were determined and verified in order to model the kinematic chains of the robot arms. Finally, the yield of pick-and-place tasks consistent with the 2015 Amazon Picking Challenge was assessed. Results show that while Baxter's precision is limited, its ability to handle common household-size items in semi-structured environments is a great asset.
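For reference, forward kinematics from a DH table takes only a few lines; the two-row table below is a placeholder, not the Baxter parameters identified in the paper.

```python
import numpy as np

# Standard Denavit-Hartenberg link transform
def dh_matrix(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, dh_table):
    # Chain the per-link transforms to obtain the end-effector pose
    T = np.eye(4)
    for qi, (d, a, alpha) in zip(q, dh_table):
        T = T @ dh_matrix(qi, d, a, alpha)
    return T

# Placeholder 2-joint table of (d, a, alpha) values
table = [(0.27, 0.069, -np.pi / 2), (0.0, 0.36, 0.0)]
print(forward_kinematics([0.0, 0.5], table)[:3, 3])  # end-effector position
```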


3rd Sensors for Next-Generation Robotics Conference | 2016

Investigation of human-robot interface performance in household environments

Sven Cremer; Fahad Mirza; Yathartha Tuladhar; Rommel Alonzo; Anthony Hingeley; Dan O. Popa

Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increasing robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure-sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5-degree-of-freedom (DOF) arm. The pick-and-place tasks involved navigation and manipulation of objects in household environments. Performance metrics included task completion time and position accuracy.


Proceedings of SPIE | 2015

Multi-modal sensor and HMI integration with applications in personal robotics

Rommel Alonzo; Sven Cremer; Fahad Mirza; Sandesh Gowda; Larry Mastromoro; Dan O. Popa

In recent years, advancements in computer vision, motion planning, and task-oriented algorithms, together with the availability and falling cost of sensors, have opened the doors to affordable autonomous robots tailored to assist individual humans. One of the main tasks for a personal robot is to provide intuitive and non-intrusive assistance when requested by the user. However, some base robotic platforms cannot perform autonomous tasks or allow general users to operate them, due to complex controls. Most users expect a robot to have an intuitive interface that lets them directly control the platform as well as access some level of autonomous tasks. We aim to introduce this level of intuitive control and task autonomy into teleoperated robotics. This paper proposes a simple sensor-based HMI framework in which a base teleoperated robotic platform is sensorized, allowing basic levels of autonomous tasks and providing a foundation for the use of new intuitive interfaces. Multiple forms of Human-Machine Interfaces (HMIs) are presented and a software architecture is proposed. As test cases for the framework, manipulation experiments were performed on a sensorized KUKA youBot® platform, mobility experiments were performed on a LABO-3 Neptune platform, and a Nexus 10 tablet was used with multiple users in order to examine the robot's ability to adapt to its environment and to its user.


Pervasive Technologies Related to Assistive Environments | 2014

Implementation of advanced manipulation tasks on humanoids through kinesthetic teaching

Sven Cremer; Matt Middleton; Dan O. Popa

In this paper, we describe a software framework for programming by demonstration (PbD) using kinesthetic teaching. A Personal Robot 2 (PR2) platform was used to demonstrate teaching effectiveness and to conduct dual-arm manipulation during a wine-pouring demonstration. During the teaching process, an operator directs the PR2 arms to execute complex joint trajectories leading to the desired handling of objects in joint or Cartesian space. The user can efficiently store and play back programmed motions using Extensible Markup Language (XML) parsing. We present repeatable wine-pouring motion results that were executed by the PR2 during a demo for several thousand guests at a public event.
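The store/playback mechanism can be sketched as serializing recorded joint waypoints to XML and parsing them back for replay; the tag and attribute names below are hypothetical, not the paper's schema.

```python
import xml.etree.ElementTree as ET

def save_trajectory(path, waypoints):
    # waypoints: list of (time, joint_angles) pairs recorded during teaching
    root = ET.Element("trajectory")
    for t, joints in waypoints:
        wp = ET.SubElement(root, "waypoint", time=str(t))
        wp.text = " ".join(f"{j:.6f}" for j in joints)
    ET.ElementTree(root).write(path)

def load_trajectory(path):
    root = ET.parse(path).getroot()
    return [(float(wp.get("time")), [float(x) for x in wp.text.split()])
            for wp in root.findall("waypoint")]

# Example: a two-waypoint demonstration for a 7-DOF arm
save_trajectory("demo.xml", [(0.0, [0.0] * 7), (1.0, [0.1] * 7)])
for t, joints in load_trajectory("demo.xml"):
    print(t, joints)  # a real playback loop would stream these to the arm
```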


Smart Biomedical and Physiological Sensor Technology XIV | 2017

Experimental setup for evaluating an adaptive user interface for teleoperation control

Indika B. Wijayasinghe; Srikanth Peetha; Shamsudeen Abubakar; Mohammad Nasser Saadatzi; Sven Cremer; Dan O. Popa

A vital part of human interaction with a machine is the control interface, which alone can determine user satisfaction and the efficiency of performing a task. This paper describes the implementation of an experimental setup for studying an adaptive algorithm that helps the user better teleoperate the robot. The formulation of the adaptive interface and its associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and associated results used to validate the adaptive interface on a differential-drive robot with two different input devices: a joystick and a Myo gesture-control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
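The optimization idea can be sketched with a tiny genetic algorithm evolving a 2x2 linear map from device axes to differential-drive (v, w) commands; the toy fitness below (distance to a fixed target map) stands in for the paper's task-based scoring of actual driving trials.

```python
import numpy as np

rng = np.random.default_rng(1)
TARGET = np.eye(2)  # hypothetical "intuitive" axis-to-command map

def fitness(genome, samples):
    # Negative mean-squared error between the candidate map and the target
    M = genome.reshape(2, 2)
    return -np.mean([(M @ x - TARGET @ x) ** 2 for x in samples])

def evolve(pop_size=40, gens=100, mut=0.1):
    samples = [rng.uniform(-1, 1, 2) for _ in range(20)]   # sampled device inputs
    pop = rng.normal(size=(pop_size, 4))                   # initial genomes
    for _ in range(gens):
        scores = np.array([fitness(g, samples) for g in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]     # truncation selection
        kids = parents + mut * rng.normal(size=parents.shape)  # mutation
        pop = np.vstack([parents, kids])
    best = max(pop, key=lambda g: fitness(g, samples))
    return best.reshape(2, 2)

print(evolve())  # converges toward the identity map in this toy setup
```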


Conference on Automation Science and Engineering | 2016

Neuroadaptive calibration of tactile sensors for robot skin

Sven Cremer; Isura Ranatunga; Sumit K. Das; Indika B. Wijayasinghe; Dan O. Popa

In this paper we present a novel automated neuroadaptive approach for characterizing pressure-sensitive “skin” deployed on a robot. Both the safety and performance of future co-robots can be greatly enhanced by such sensorized skin, which measures multiple contact forces with humans and the environment. A challenge that arises with robot skin is calibration, needed to achieve the reliable measurements necessary for safe human-robot interaction. The traditional method of calibrating each sensor prior to use is tedious, especially with inexpensive, miniaturized hardware that can experience material degradation over time. Therefore, we propose an adaptive strategy that learns the sensor array characteristics together with the unknown dynamics of both the robot and the human during physical interaction. Convincing experimental results with pressure skin sensors deployed on a PR2 robot validate our approach.
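The calibration idea can be illustrated with a per-taxel gain/offset model updated by plain gradient descent against a reference contact force; this stands in for the paper's neuroadaptive law, and all constants are illustrative.

```python
import numpy as np

class SkinCalibrator:
    def __init__(self, n_taxels, lr=1e-2):
        self.gain = np.ones(n_taxels)     # per-taxel scale, learned online
        self.offset = np.zeros(n_taxels)  # per-taxel bias, learned online
        self.lr = lr

    def update(self, raw, f_ref):
        # Predicted total contact force under the current calibration
        f_hat = np.sum(self.gain * raw + self.offset)
        err = f_ref - f_hat
        # Gradient step on the squared force error w.r.t. gain and offset
        self.gain += self.lr * err * raw
        self.offset += self.lr * err
        return f_hat

cal = SkinCalibrator(n_taxels=4)
for _ in range(100):
    cal.update(raw=np.array([0.2, 0.5, 0.1, 0.0]), f_ref=1.0)
print(cal.update(np.array([0.2, 0.5, 0.1, 0.0]), 1.0))  # ~1.0 after adaptation
```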


3rd Sensors for Next-Generation Robotics Conference | 2016

Application requirements for Robotic Nursing Assistants in hospital environments

Sven Cremer; Kris Doelling; Cody Lee Lundberg; Mike McNair; Jeongsik Shin; Dan O. Popa

In this paper we report on an analysis toward identifying design requirements for an Adaptive Robotic Nursing Assistant (ARNA). Specifically, the paper focuses on application requirements for ARNA, envisioned as a mobile assistive robot that can navigate hospital environments to perform chores in roles such as patient sitter and patient walker. The role of a sitter is primarily patient observation from a distance and fetching objects at the patient's request, while a walker provides physical assistance for ambulation and rehabilitation. The robot will be expected not only to understand nurse and patient intent but also to close the decision loop by automating several routine tasks. As a result, the robot will be equipped with sensors such as distributed pressure-sensitive skins, 3D range sensors, and so on. Modular sensor and actuator hardware, configured in the form of several multi-degree-of-freedom manipulators and a mobile base, is expected to be deployed in reconfigurable platforms for physical assistance tasks. Furthermore, adaptive human-machine interfaces are expected to play a key role, as they directly impact the ability of robots to assist nurses in a dynamic and unstructured environment. This paper discusses the required tasks for the ARNA robot, as well as the sensors and software infrastructure needed to carry out those tasks, in terms of technical resource availability, gaps, and needed experimental studies.


Conference on Automation Science and Engineering | 2014

Robotic waiter with physical co-manipulation capabilities

Sven Cremer; Isura Ranatunga; Dan O. Popa

In this paper, we compare the performance of physical and non-physical interfaces for directing the behavior of a personal robot. The PR2 robot was programmed as a waiter with co-manipulation capabilities and experimentally tested on a trajectory-following task. For physical interaction (pushing/pulling), we implemented a compliance controller for stable, compliant arm positioning and a velocity-based position controller for moving the robot base. Experiments were conducted to assess the effectiveness and accuracy of single- and dual-arm control compared to joystick teleoperation. Results indicate that the PR2 in physical collaboration with a human performs better than in teleoperation mode, as measured by task completion time, while maintaining comparable task accuracy. A result of our work is the open-source Robot Operating System (ROS) package pr2_cartPull, which will be shared with the robotics community.
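The pushing/pulling behavior can be sketched as follows: the compliant arm deflects under the human's push, and a velocity controller drives the base to restore the arm's nominal pose, so the robot follows the person; the gains and deadband below are illustrative assumptions, not the package's values.

```python
import numpy as np

K_BASE = np.array([0.8, 0.8, 1.2])  # x, y, yaw velocity gains (illustrative)
DEADBAND = 0.02                     # ignore small deflections

def base_velocity(ee_pose, ee_nominal):
    # Deviation of the compliant end-effector from its nominal pose,
    # expressed in the base frame, mapped to a base velocity command
    delta = ee_pose - ee_nominal
    delta[np.abs(delta) < DEADBAND] = 0.0
    return K_BASE * delta           # commanded (vx, vy, wz)

# A push that deflects the gripper 5 cm forward makes the base move forward
print(base_velocity(np.array([0.05, 0.0, 0.0]), np.zeros(3)))
```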

Collaboration


Dive into Sven Cremer's collaborations.

Top Co-Authors

Dan O. Popa
University of Texas at Arlington

Isura Ranatunga
University of Texas at Arlington

Fahad Mirza
University of Texas at Arlington

Frank L. Lewis
University of Texas at Arlington

Joe Sanford
University of Texas at Arlington

Rommel Alonzo
University of Texas at Arlington

Anthony Hingeley
University of Texas at Arlington

Carolyn Young
University of North Texas Health Science Center

Cody Lee Lundberg
University of Texas at Arlington