François Ferland
Université de Sherbrooke
Publications
Featured research published by François Ferland.
Human-Robot Interaction | 2009
François Ferland; François Pomerleau; Chon Tam Le Dinh; François Michaud
The user interface is the central element of a telepresence robotic system, and its visualization modalities greatly affect the operator's situation awareness, and thus their performance. Depending on the task at hand and the operator's preferences, switching between ego- and exocentric viewpoints and improving the depth representation can provide better perspectives of the operation environment. Our system, which combines a 3D reconstruction of the environment using laser range finder readings with two video projection methods, allows the operator to easily switch from ego- to exocentric viewpoints. This paper presents the interface developed and demonstrates its capabilities by having 13 operators teleoperate a mobile robot in a navigation task.
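As a rough illustration of the viewpoint-switching idea (not the paper's implementation), the sketch below derives a virtual camera pose from the robot's pose; the offsets and heights are assumed values.

```python
import numpy as np

def viewpoint_pose(robot_pose, mode="ego", back=2.0, up=1.5):
    """Return (camera_position, look_at) for the virtual 3D view.

    robot_pose: (x, y, yaw) of the robot in the map frame.
    mode: "ego" places the camera at the robot looking forward;
          "exo" pulls it behind and above, looking at the robot.
    """
    x, y, yaw = robot_pose
    heading = np.array([np.cos(yaw), np.sin(yaw), 0.0])
    if mode == "ego":
        position = np.array([x, y, 1.2])        # roughly sensor height
        look_at = position + heading
    else:
        position = np.array([x, y, up]) - back * heading
        look_at = np.array([x, y, 0.5])         # aim at the robot body
    return position, look_at
```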
Autonomous Robots | 2013
François Grondin; Dominic Létourneau; François Ferland; Vincent Rousseau; François Michaud
ManyEars is an open framework for microphone array-based audio processing. It consists of a sound source localization, tracking and separation system that can provide an enhanced speaker signal for improved speech and sound recognition in real-world settings. The ManyEars software framework is composed of a portable and modular C library, along with a graphical user interface for tuning parameters and for real-time monitoring. This paper presents the integration of the ManyEars library with Willow Garage's Robot Operating System (ROS). To facilitate the use of ManyEars on various robotic platforms, the paper also introduces the customized microphone board and sound card distributed as an open hardware solution for the implementation of robotic audition systems.
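For intuition about microphone-array speaker enhancement, here is a minimal delay-and-sum beamformer toward a localized source. This is a simplified stand-in, not ManyEars' actual separation algorithm or API; all names and parameters are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(frames, mic_positions, direction, fs):
    """Align and average multichannel audio toward one direction.

    frames: (n_mics, n_samples) time-domain samples.
    mic_positions: (n_mics, 3) microphone coordinates in metres.
    direction: unit vector pointing at the localized source.
    fs: sampling rate in Hz.
    """
    n_mics, n_samples = frames.shape
    out = np.zeros(n_samples)
    for m in range(n_mics):
        # A far-field wavefront reaches this microphone earlier by
        # (position . direction) / c, so delay it by that amount to align.
        tau = mic_positions[m].dot(direction) / SPEED_OF_SOUND
        out += np.roll(frames[m], int(round(tau * fs)))  # circular shift
    return out / n_mics
```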
Human-Robot Interaction | 2013
François Ferland; Dominic Létourneau; Arnaud Aumont; Julien Frémy; Marc-Antoine Legault; Michel Lauria; François Michaud
Designing robots that interact naturally with people requires the integration of technologies and algorithms for communication modalities such as gestures, movement, facial expressions and user interfaces. To understand the interdependence among these modalities, evaluating the integrated design in feasibility studies provides insights about key considerations regarding the robot and potential interaction scenarios, allowing the design to be iteratively refined before larger-scale experiments are planned and conducted. This paper presents three feasibility studies with IRL-1, a new humanoid robot integrating compliant actuators for motion and manipulation along with artificial audition, vision, and facial expressions. These studies explore distinctive capabilities of IRL-1: the ability to be physically guided by perceiving forces through the elastic actuators used for active steering of the omnidirectional platform; the integration of vision, motion, and audition in an augmented telepresence interface; and the influence of delays in responding to sounds. In addition to demonstrating how these capabilities can be exploited in human-robot interaction, this paper illustrates intrinsic interrelations between the design and evaluation of IRL-1, such as the influence of the contact point when physically guiding the platform, the synchronization between sensory and robot representations in the graphical display, and the use of facial gestures to maintain responsiveness when computationally expensive processes are running. It also outlines ideas for more advanced experiments that could be conducted with the platform.
Autonomous Robots | 2016
Francis Leconte; François Ferland; François Michaud
Autonomous robots cohabiting with humans will have to achieve recurring tasks while adapting to the changing conditions of the world. A spatio-temporal memory categorizes the experiences of a robot to improve its ability to adapt to its environment. In this paper, we present a spatio-temporal (ST) memory model consisting of a cascade of two adaptive resonance theory (ART) networks: one to categorize spatial events and the other to extract temporal episodes from the robot's experiences. A simple model of artificial emotions is used to dynamically modulate learning and recall in the ART networks based on how well the robot is able to carry out its task. Once an episode is recalled, future events can be predicted and used to influence the intentions of the robot. Evaluation of our ST model is done using an autonomous robotic platform that has to deliver objects to people within an office area. Results demonstrate that our model can memorize and recall the experiences of a robot, and that emotional episodes are recalled more often, allowing the robot to use its memory of past experiences early on when repeating a task.
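The building block of the cascade is an ART network. Below is a minimal fuzzy ART sketch (complement coding, choice function, vigilance test, fast learning); the vigilance and learning-rate values are illustrative, and the paper's exact network variant may differ.

```python
import numpy as np

class FuzzyART:
    """Single fuzzy ART network; inputs are vectors in [0, 1]."""

    def __init__(self, vigilance=0.8, alpha=0.001, beta=1.0):
        self.rho = vigilance      # match threshold
        self.alpha = alpha        # choice parameter
        self.beta = beta          # learning rate (1.0 = fast learning)
        self.w = []               # one weight vector per learned category

    def learn(self, x):
        """Present one input; return the index of the resonating category."""
        i = np.concatenate([x, 1.0 - x])   # complement coding
        # Rank existing categories by the choice function T_j.
        order = sorted(
            range(len(self.w)),
            key=lambda j: -np.minimum(i, self.w[j]).sum()
                           / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:          # resonance: update the category
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return j
        self.w.append(i.copy())            # no resonance: create a category
        return len(self.w) - 1
```

In the cascade described above, the category indices produced by a first such network over spatial events would feed a second network that groups them into temporal episodes.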
Paladyn | 2010
François Michaud; François Ferland; Dominic Létourneau; Marc-Antoine Legault; Michel Lauria
The field of robotics has made steady progress in the pursuit of bringing autonomous machines into real-life settings. Over the last three years, we have seen omnidirectional humanoid platforms that now bring compliance, robustness and adaptiveness to handle the unconstrained situations of the real world. However, today's contributions mostly address only a portion of the physical, cognitive or evaluative dimensions, which are all interdependent. This paper presents an overview of our attempt to integrate all three dimensions as a whole into a robot named Johnny-0. We present Johnny-0's distinct contributions in simultaneously exploiting compliance at the locomotion level, in grounding reasoning and actions through behaviors, and in considering all possible factors when experimenting in the wild conditions of the real world.
Field and Service Robotics | 2010
François Pomerleau; Francis Colas; François Ferland; François Michaud
Simultaneous Localization and Mapping (SLAM) iteratively builds a map of the environment by relating each new observation to the current map. This relation is usually established by scan-matching algorithms such as Iterative Closest Point (ICP), in which two sets of features are paired. However, as ICP is sensitive to outliers, methods have been proposed to reject them. In this article, we present a new rejection technique called Relative Motion Threshold (RMT). In combination with multiple-pairing rejection, RMT identifies outliers based on the error produced by paired points rather than on a distance measurement, which makes it more applicable to point-to-plane error. The rejection threshold is computed with a simulated annealing ratio that follows the convergence rate of the algorithm. Experiments demonstrate that RMT performs better than previous techniques with outliers created by dynamic obstacles. These results were achieved without reducing the convergence speed of the overall ICP algorithm.
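A minimal sketch of error-based rejection in the spirit of RMT, assuming point-to-plane residuals and an annealing schedule chosen for illustration (the paper derives its threshold from the algorithm's convergence rate):

```python
import numpy as np

def rmt_filter(src, dst, normals, threshold):
    """Keep pairings whose point-to-plane residual is below threshold.

    src, dst: (n, 3) matched points; normals: (n, 3) surface normals at dst.
    Returns a boolean mask over the n pairings.
    """
    residuals = np.abs(np.einsum("ij,ij->i", src - dst, normals))
    return residuals < threshold

def anneal(threshold, ratio=0.9, floor=1e-3):
    """Tighten the rejection threshold as ICP iterations converge."""
    return max(threshold * ratio, floor)
```

Each ICP iteration would re-pair points, apply the mask before solving for the transform, and shrink the threshold, so outliers from moving obstacles are progressively excluded.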
Robotics and Autonomous Systems | 2014
Julien Frémy; François Ferland; Michel Lauria; François Michaud
Physical guidance is a natural interaction capability that would be beneficial for mobile robots. However, placing force sensors at specific locations on the robot limits where physical interaction can occur. This paper presents an approach that uses torque data from the four compliant steerable wheels of an omnidirectional non-holonomic mobile platform to respond to physical commands given by a human. The use of backdrivable, torque-controlled elastic actuators for active steering of this platform intrinsically provides the capability of perceiving applied forces directly from its locomotion mechanism. In this paper, we integrate this capability into a control architecture that allows users to force-guide the platform with shared-control ability, i.e., having the platform guided by the user while avoiding obstacles and collisions. Results using a real platform demonstrate that the user's intent can be estimated from the compliant steerable wheels and used to guide the platform while taking nearby obstacles into consideration.
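A conceptual sketch of the idea, assuming a hypothetical torque-to-twist mapping and blending rule rather than the paper's actual control architecture:

```python
import numpy as np

def intent_from_torques(torques, jacobian_pinv, gain=0.05):
    """Map measured steering torques to a desired platform twist.

    torques: (4,) torques sensed at the compliant steering actuators.
    jacobian_pinv: (3, 4) pseudo-inverse mapping wheel torques to a
        generalized force (fx, fy, tau) on the platform.
    gain: admittance gain turning force into velocity.
    """
    wrench = jacobian_pinv @ torques
    return gain * wrench

def shared_control(user_twist, avoid_twist, danger):
    """Blend user intent with an obstacle-avoidance command.

    danger in [0, 1]: 0 = free space, 1 = imminent collision.
    """
    return (1.0 - danger) * user_twist + danger * avoid_twist
```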
IEEE Systems Journal | 2016
Ronan Chauvin; Mathieu Hamel; Simon Brière; François Ferland; François Grondin; Dominic Létourneau; Michel Tousignant; François Michaud
One typical remote consultation envisioned for in-home telerehabilitation involves having the patient exercise on a stationary bike. Making sure that the patient is breathing well while pedaling is of primary concern for the remote clinician. One key requirement for in-home telerehabilitation is to make the system as simple as possible for patients, avoiding, for instance, having them wear sensors and devices. This paper presents a contact-free respiration rate monitoring system that measures temperature variations between inspired and expired air in the mouth-nose region using thermal imaging. The thermal camera is installed on a pan-tilt unit and coupled to a tracking algorithm, allowing the system to keep track of the mouth-nose region as the patient exercises. Results demonstrate that the system works in real time even when the patient moves or rotates their head while exercising. Recommendations are also made to minimize limitations of the system, such as sensitivity to people in the background or to the patient talking, for its eventual use in in-home telerehabilitation sessions.
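A minimal sketch of the signal-processing step, assuming the mean temperature of the tracked mouth-nose region is available per frame; the breathing band and FFT-peak extraction are illustrative choices, not necessarily the paper's method:

```python
import numpy as np

def respiration_rate(roi_means, fs, band=(0.1, 0.7)):
    """Estimate breaths per minute from a temperature time series.

    roi_means: mean mouth-nose ROI temperature for each frame.
    fs: frame rate in Hz.
    band: plausible breathing band in Hz (about 6-42 breaths/min).
    """
    x = np.asarray(roi_means, dtype=float)
    x -= x.mean()                               # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[mask][np.argmax(spectrum[mask])]
    return 60.0 * peak                          # Hz -> breaths per minute
```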
Human-Robot Interaction | 2015
Aurélien Reveleau; François Ferland; Mathieu Labbé; Dominic Létourneau; François Michaud
Commercial telepresence robots provide video, audio, and proximity data to remote operators through a teleoperation user interface running on standard computing devices. As new modalities such as force sensing and sound localization are being developed and tested on advanced robotic platforms, ways to integrate such information into a teleoperation interface are required. This paper demonstrates the use of visual representations of forces and sound localization in a 3D teleoperation interface. Forces are represented using colors, size, bar graphs, and arrows, while speech and ring bubbles are used to represent sound positions and types. Validation of these modalities is done with 31 participants using IRL-1/TR, a humanoid platform equipped with differential elastic actuators to provide compliance and force control of its arms, and capable of sound source localization. Results suggest that visual representations of interaction forces and sound sources can provide useful information to remote operators.
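As a sketch of one such visual mapping (the scales and limits are assumptions, not the study's parameters):

```python
def force_to_color(magnitude, max_force=20.0):
    """Interpolate from green (low force) to red (high) as an RGB triple."""
    t = min(max(magnitude / max_force, 0.0), 1.0)
    return (t, 1.0 - t, 0.0)

def force_to_arrow(force_xy, pixels_per_newton=4.0):
    """Return a 2D on-screen arrow (dx, dy) proportional to the force."""
    return (pixels_per_newton * force_xy[0], pixels_per_newton * force_xy[1])
```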
Artificial General Intelligence | 2014
Francis Leconte; François Ferland; François Michaud
Autonomous service robots must be able to learn from their experiences and adapt to situations encountered in dynamic environments. An episodic memory organizes experiences (e.g., location, specific objects, people, internal states) and can be used to foresee what will occur based on previously experienced situations. In this paper, we present an episodic memory system consisting of a cascade of two Adaptive Resonance Theory (ART) networks: one to categorize spatial events and the other to extract temporal episodes from the robot's experiences. Artificial emotions are used to dynamically modulate learning and recall in the ART networks based on how well the robot is able to carry out its task. Once an episode is recalled, future events can be predicted and used to influence the robot's intentions. Validation is done using an autonomous robotic platform that has to deliver objects to people within an office area.
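A toy sketch of recall-based prediction, under the assumption that episodes are stored as event-ID sequences and that emotional salience weights recall; this abstracts away the ART machinery described above:

```python
def predict_next(episodes, recent, emotion_weights=None):
    """Predict the next event from stored episodes.

    episodes: list of episodes, each a list of event IDs.
    recent: the most recent event IDs observed.
    emotion_weights: optional per-episode salience; higher values win,
        mimicking emotion-favored recall.
    """
    best, best_score = None, -1.0
    for idx, ep in enumerate(episodes):
        for i in range(len(ep) - len(recent)):
            if ep[i:i + len(recent)] == recent:        # sequence match
                score = 1.0 if emotion_weights is None else emotion_weights[idx]
                if score > best_score:
                    best, best_score = ep[i + len(recent)], score
    return best
```

For example, predict_next([["door", "hall", "office"]], ["door", "hall"]) returns "office", which could then bias the robot's intentions toward the delivery it previously completed.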