Publication


Featured research published by Jean-Sebastien Valois.


Intelligent Robots and Systems | 2012

An integrated system for autonomous robotics manipulation

J. Andrew Bagnell; Felipe Cavalcanti; Lei Cui; Thomas Galluzzo; Martial Hebert; Moslem Kazemi; Matthew Klingensmith; Jacqueline Libby; Tian Yu Liu; Nancy S. Pollard; Mihail Pivtoraiko; Jean-Sebastien Valois; Ranqi Zhu

We describe the software components of a robotics system designed to autonomously grasp objects and perform dexterous manipulation tasks with only high-level supervision. The system is centered on the tight integration of several core functionalities, including perception, planning, and control, with the logical structuring of tasks driven by a Behavior Tree architecture. The advantage of the implementation is reduced execution time while integrating advanced algorithms for autonomous manipulation. We describe our approach to 3-D perception, real-time planning, force-compliant motions, and audio processing. Performance results for object grasping and complex manipulation tasks, from both in-house tests and an independent evaluation team, are presented.
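The paper does not publish its tree implementation; purely as an illustration, the control-flow pattern a Behavior Tree provides can be sketched in a few lines of Python (the node classes and the toy grasp task below are hypothetical, not the authors' code):

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Action:
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Ticks children in order; stops at the first non-SUCCESS child."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

class Selector:
    """Ticks children in order; stops at the first non-FAILURE child."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status is not Status.FAILURE:
                return status
        return Status.FAILURE

# Hypothetical grasp task: perceive, plan (with a retry fallback), execute.
tree = Sequence(
    Action(lambda: Status.SUCCESS),      # perceive scene
    Selector(
        Action(lambda: Status.FAILURE),  # plan grasp (fails this tick)
        Action(lambda: Status.SUCCESS),  # fall back: re-perceive and re-plan
    ),
    Action(lambda: Status.SUCCESS),      # execute grasp
)
assert tree.tick() is Status.SUCCESS
```

Sequence and Selector composites express the success/fallback logic of a task without hand-coded state machines, which is one reason Behavior Trees are a popular way to structure manipulation pipelines.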


Journal of Field Robotics | 2015

CHIMP, the CMU Highly Intelligent Mobile Platform

Anthony Stentz; Herman Herman; Alonzo Kelly; Eric Meyhofer; G. Clark Haynes; David Stager; Brian Zajac; J. Andrew Bagnell; Jordan Brindza; Christopher M. Dellin; Michael David George; Jose Gonzalez-Mora; Sean Hyde; Morgan Jones; Michel Laverne; Maxim Likhachev; Levi Lister; Matthew Powers; Oscar Ramos; Justin Ray; David Rice; Justin Scheifflee; Raumi Sidki; Siddhartha S. Srinivasa; Kyle Strabala; Jean-Philippe Tardif; Jean-Sebastien Valois; Michael Vande Weghe; Michael D. Wagner; Carl Wellington

We have developed the CHIMP (CMU Highly Intelligent Mobile Platform) robot as a platform for executing complex tasks in dangerous, degraded, human-engineered environments. CHIMP has a near-human form factor, work envelope, strength, and dexterity to work effectively in these environments. It avoids the need for complex control by maintaining static rather than dynamic stability. Using sensors embedded in the robot's head, CHIMP generates full three-dimensional representations of its environment and transmits these models to a human operator to achieve latency-free situational awareness. This awareness is used to visualize the robot within its environment and preview candidate free-space motions. Operators using CHIMP are able to select between task, workspace, and joint-space control modes to trade between speed and generality. Thus, they are able to perform remote tasks quickly, confidently, and reliably, due to the overall design of the robot and software. CHIMP's hardware was designed, built, and tested over 15 months leading up to the DARPA Robotics Challenge. The software was developed in parallel using surrogate hardware and simulation tools. Over a six-week span prior to the DRC Trials, the software was ported to the robot, the system was debugged, and the tasks were practiced continuously. Given the aggressive schedule leading to the DRC Trials, development of CHIMP focused primarily on manipulation tasks. Nonetheless, our team finished 3rd out of 16. With an upcoming year to develop new software for CHIMP, we look forward to improving the robot's capability and increasing its speed to compete in the DRC Finals.


Robotics: Science and Systems | 2012

Robust Object Grasping using Force Compliant Motion Primitives

Moslem Kazemi; Jean-Sebastien Valois; J. Andrew Bagnell; Nancy S. Pollard

We address the problem of grasping everyday objects that are small relative to an anthropomorphic hand, such as pens, screwdrivers, cellphones, and hammers, from their natural poses on a support surface, e.g., a table top. In such conditions, state-of-the-art grasp generation techniques fail to provide robust, achievable solutions because they either ignore or try to avoid contact with the support surface. In contrast, we show that contact with support surfaces is critical for grasping small objects. This also conforms with our anecdotal observations of human grasping behaviors. We develop a simple closed-loop hybrid controller that mimics this interactive, contact-rich behavior through a position-force, pre-grasp and landing strategy for finger placement. The approach uses compliant control of the hand during the grasp and release of objects in order to preserve safety. We conducted extensive grasping experiments on a variety of small objects with similar shape and size. The results demonstrate that our approach is robust to localization uncertainties and applies to many everyday objects.
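As a rough illustration of the position-force switching idea (not the authors' controller; the gains, units, and spring-like contact model below are invented for this sketch), a single fingertip axis approaching a support surface might be driven like this:

```python
def hybrid_finger_step(z, f_measured, z_target, f_desired,
                       kp_pos=0.5, kp_force=0.001, f_contact=1.0):
    """One control step for a single fingertip axis (hypothetical gains).

    In free space the controller servos position toward a target slightly
    below the support surface; once the measured contact force exceeds
    f_contact it switches to regulating force around f_desired, giving a
    compliant landing instead of a hard position-controlled impact.
    """
    if f_measured < f_contact:                      # free space: position mode
        return z + kp_pos * (z_target - z)
    return z - kp_force * (f_desired - f_measured)  # in contact: force mode

# Toy run: surface at z = 0 modeled as a stiff spring.
k = 500.0            # surface stiffness, N/m (invented)
z, f = 0.05, 0.0     # fingertip starts 5 cm above the surface
for _ in range(200):
    z = hybrid_finger_step(z, f, z_target=-0.01, f_desired=2.0)
    f = max(0.0, k * (0.0 - z))   # contact force once the surface deflects
# z settles just below the surface with f regulated to f_desired (2.0 N)
```

The switch on measured force, rather than on estimated position, is what makes this style of controller tolerant of localization error in the object and surface poses.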


Autonomous Robots | 2014

Human-inspired force compliant grasping primitives

Moslem Kazemi; Jean-Sebastien Valois; J. Andrew Bagnell; Nancy S. Pollard

We address the problem of grasping everyday objects that are small relative to an anthropomorphic hand, such as pens, screwdrivers, cellphones, and hammers, from their natural poses on a support surface, e.g., a table top. In such conditions, state-of-the-art grasp generation techniques fail to provide robust, achievable solutions because they either ignore or try to avoid contact with the support surface. In contrast, when people grasp small objects, they often make use of substantial contact with the support surface. In this paper we give results of human-subject grasping studies which show the extent and characteristics of environment contact under different task conditions. We develop a simple closed-loop hybrid grasping controller that mimics this interactive, contact-rich behavior through a position-force, pre-grasp and landing strategy for finger placement. The approach uses compliant control of the hand during the grasp and release of objects in order to preserve safety. We conducted extensive robotic grasping experiments on a variety of small objects with similar shape and size. The results demonstrate that our approach is robust to localization uncertainties and applies to many everyday objects.


Robotics: Science and Systems | 2015

Autonomy Infused Teleoperation with Application to BCI Manipulation

Katharina Mülling; Arun Venkatraman; Jean-Sebastien Valois; John E. Downey; Jeffrey A. Weiss; Shervin Javdani; Martial Hebert; Andrew B. Schwartz; Jennifer L. Collinger; J. Andrew Bagnell

Robot teleoperation systems face a common set of challenges, including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands due to the difficulty in decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects implanted with intracortical brain-computer interfaces controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulation with novel objects in densely cluttered environments.
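The arbitration step can be illustrated with a linear blending rule, a common shared-control formulation; the paper's actual scheme infers user intent over candidate goals, so this is only a sketch, and the function name and the max_assist cap are invented:

```python
def arbitrate(u_user, u_auto, confidence, max_assist=0.8):
    """Blend a noisy user velocity command with an autonomous policy.

    The blending weight grows with the intent-inference confidence but is
    capped at max_assist, so the operator always retains some direct
    influence over the arm regardless of how confident the system is.
    """
    alpha = min(max(confidence, 0.0), max_assist)
    return [(1.0 - alpha) * u + alpha * a for u, a in zip(u_user, u_auto)]

# Equal blend of a user command and the autonomous grasp velocity:
blended = arbitrate([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], confidence=0.5)
# blended == [0.5, 0.5, 0.0]
```

Raising the assistance level trades user effort for system authority, which matches the "adjustable levels of assistance" the abstract describes.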


Proceedings of SPIE, the International Society for Optical Engineering | 2008

Remote operation of the Black Knight unmanned ground combat vehicle

Jean-Sebastien Valois; Herman Herman; John Bares; David Rice

The Black Knight is a 12-ton, C-130-deployable Unmanned Ground Combat Vehicle (UGCV). It was developed to demonstrate how unmanned vehicles can be integrated into a mechanized military force to increase combat capability while protecting Soldiers in a full spectrum of battlefield scenarios. The Black Knight is used in military operational tests that allow Soldiers to develop the necessary techniques, tactics, and procedures to operate a large unmanned vehicle within a mechanized military force. It can be safely controlled by Soldiers from inside a manned fighting vehicle, such as the Bradley Fighting Vehicle. Black Knight control modes include path tracking, guarded teleoperation, and fully autonomous movement. Its state-of-the-art Autonomous Navigation Module (ANM) includes terrain-mapping sensors for route planning, terrain classification, and obstacle avoidance. In guarded teleoperation mode, the ANM data, together with automotive dials and gauges, are used to generate video overlays that assist the operator for both day and night driving. Remote operation of various sensors also allows Soldiers to perform effective target location and tracking. This document covers the Black Knight's system architecture and includes implementation overviews of the various operation modes. We conclude with lessons learned and development goals for the Black Knight UGCV.


International Conference on Multimedia Information Networking and Security | 2010

Modular countermine payload for small robots

Herman Herman; Doug Few; Roelof Versteeg; Jean-Sebastien Valois; Jeff McMahill; Michael Licitra; Edward Henciak

Payloads for small robotic platforms have historically been designed and implemented as platform- and task-specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering effort. To address this issue, we developed a modular countermine payload that is designed from the ground up to be platform-agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with configurable mission-specific threat detection, navigation, and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigation and control software. The PCU has a very flexible robot interface which can be configured for various robot platforms. The threat-detection payload consists of a two-axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection, and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform-agnostic way to allow deployment on multiple robotic platforms, including TALON and PackBot.


International Conference on Multimedia Information Networking and Security | 2010

Mine detection performance comparison between manual sweeping and tele-operated robotic system

Herman Herman; Todd Higgins; Olga Falmier; Jean-Sebastien Valois; Jeff McMahill

Mine detection is a dangerous and physically demanding task that is very well suited for robotic applications. In the experiment described in this paper, we try to determine whether a remotely operated robotic mine detection system equipped with a hand-held mine detector can match the performance of a human equipped with the same detector. To achieve this objective, we developed the Robotic Mine Sweeper (RMS). The RMS platform is capable of accurately sweeping and mapping mine lanes using common detectors, such as the Minelab F3 Mine Detector or the AN/PSS-14. The RMS is fully remote-controlled from a safe distance by a laptop via a redundant wireless link. Data collected from the mine detector and various sensors mounted on the robot are transmitted and logged in real time to the remote user interface and simultaneously displayed graphically. In addition, a stereo color camera mounted on top of the robot sends a live picture of the terrain. The system plays audio feedback from the detector to further enhance the user's situational awareness. The user is trained to drag and drop various icons onto the user interface map to locate mines and non-mine clutter objects. We ran experiments with the RMS to compare its detection and false alarm rates with those obtained when the user physically sweeps the detector in the field. The results of two trials, one with the Minelab F3 and the other with the CyTerra AN/PSS-14, are presented here.


Journal of NeuroEngineering and Rehabilitation | 2016

Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.

John E. Downey; Jeffrey M. Weiss; Katharina Muelling; Arun Venkatraman; Jean-Sebastien Valois; Martial Hebert; J. Andrew Bagnell; Andrew B. Schwartz; Jennifer L. Collinger


Archive | 2014

A supervised autonomous robotic system for complex surface inspection and processing

Christopher L Baker; Christopher R. Baker; David G. Galati; Justin Haines; Herman Herman; Alonzo J Kelley; Stuart Edwin Lawrence; Eric Meyhofer; Anthony Stentz; Jean-Sebastien Valois

Collaboration


Dive into Jean-Sebastien Valois's collaborations.

Top Co-Authors

J. Andrew Bagnell, Carnegie Mellon University
Herman Herman, Carnegie Mellon University
John E. Downey, University of Pittsburgh
Martial Hebert, Carnegie Mellon University
Arun Venkatraman, Carnegie Mellon University
Moslem Kazemi, Carnegie Mellon University