Benoit Ricard
Defence Research and Development Canada
Publications
Featured research published by Benoit Ricard.
Proceedings of SPIE | 2010
Pascale Sévigny; David J. DiFilippo; Tony Laneve; Brigitte Chan; Jonathan Fournier; Simon Roy; Benoit Ricard; Jean Maheux
Mapping the interior of buildings is of great interest to military forces operating in an urban battlefield. Through-wall radars have the potential to map interior room layouts, including the location of walls, doors and furniture. They could provide information on the in-wall structure, and detect objects of interest concealed in buildings, such as persons and arms caches. We propose to provide further context to the end user by fusing the radar data with LIDAR (Light Detection and Ranging) images of the building exterior. In this paper, we present our system concept of operation, which involves a vehicle driven along a path in front of a building of interest. The vehicle is equipped with both radar and LIDAR systems, as well as a motion compensation unit. We describe our ultra-wideband through-wall L-band radar system, which uses stretch processing techniques to obtain high range resolution and synthetic aperture radar (SAR) techniques to achieve good azimuth resolution. We demonstrate its current 2-D capabilities with experimental data, and discuss the current progress in using array processing in elevation to provide a 3-D image. Finally, we show preliminary data fusion of SAR and LIDAR data.
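The range-measurement idea behind stretch processing can be sketched in a few lines: mixing the delayed echo of a linear FM chirp with a reference copy collapses the delay into a low-frequency beat tone whose frequency is proportional to range. The bandwidth, chirp duration, and target range below are illustrative values, not parameters of the paper's radar.

```python
import numpy as np

# Sketch of stretch (deramp) processing for a linear-FM chirp radar.
# All parameters are illustrative, not taken from the paper.
c = 3e8              # speed of light (m/s)
B = 500e6            # chirp bandwidth (Hz)
T = 10e-6            # chirp duration (s)
k = B / T            # chirp rate (Hz/s)
fs = 20e6            # sample rate of the deramped (beat) signal (Hz)
R_true = 15.0        # hypothetical target range (m)
tau = 2 * R_true / c # round-trip delay (s)

t = np.arange(int(T * fs)) / fs
ref = np.exp(1j * np.pi * k * t**2)            # reference chirp
echo = np.exp(1j * np.pi * k * (t - tau)**2)   # delayed target echo
beat = ref * np.conj(echo)                     # mixing -> tone at f_b = k*tau

spec = np.abs(np.fft.fft(beat))
peak = int(np.argmax(spec[: len(spec) // 2]))  # positive-frequency peak
f_beat = peak * fs / len(spec)                 # estimated beat frequency
R_est = f_beat * c / (2 * k)                   # recovered range (m)
```

The achievable range resolution in this model is c/(2B), i.e. the FFT bin spacing mapped back through the chirp rate.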
canadian conference on computer and robot vision | 2007
Jonathan Fournier; Benoit Ricard; Denis Laurendeau
An increasingly popular approach to supporting military forces deployed in urban environments consists in using autonomous robots to carry out critical tasks such as mapping and surveillance. In order to cope with the complex obstacles and structures found in this operational context, robots should be able to perceive and analyze their world in 3D. The method presented in this paper uses a 3D volumetric sensor to efficiently map and explore urban environments with an autonomous robotic platform. A key feature of our work is that the 3D model of the environment is preserved throughout the process using a multiresolution octree. This way, every module can access the information it contains to achieve its tasks. Simulation and real-world tests were performed to validate the performance of the integrated system and are presented at the end of the paper.
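The multiresolution octree idea can be illustrated with a minimal sketch: each node subdivides a cube into eight children, occupied points are inserted down to a maximum depth, and a query can be answered at whatever resolution the tree has locally refined to. The class names, workspace size, and depth below are hypothetical, not the paper's implementation.

```python
# Minimal sketch of a multiresolution octree occupancy map.
# Assumes a cubic workspace [0, size)^3; names are illustrative.
class OctreeNode:
    def __init__(self):
        self.children = None   # None => this node is a leaf
        self.occupied = False

class Octree:
    def __init__(self, size=32.0, max_depth=5):
        self.root = OctreeNode()
        self.size = size
        self.max_depth = max_depth

    def insert(self, x, y, z):
        """Mark the leaf voxel containing (x, y, z) as occupied."""
        node, half = self.root, self.size / 2.0
        cx = cy = cz = half                    # centre of current cube
        for _ in range(self.max_depth):
            if node.children is None:          # refine only along this path
                node.children = [OctreeNode() for _ in range(8)]
            half /= 2.0
            ix, iy, iz = x >= cx, y >= cy, z >= cz
            cx += half if ix else -half
            cy += half if iy else -half
            cz += half if iz else -half
            node = node.children[ix + (iy << 1) + (iz << 2)]
        node.occupied = True

    def query(self, x, y, z):
        """Return occupancy at the finest resolution available locally."""
        node, half = self.root, self.size / 2.0
        cx = cy = cz = half
        for _ in range(self.max_depth):
            if node.children is None:
                return node.occupied           # coarse leaf covers whole cube
            half /= 2.0
            ix, iy, iz = x >= cx, y >= cy, z >= cz
            cx += half if ix else -half
            cy += half if iy else -half
            cz += half if iz else -half
            node = node.children[ix + (iy << 1) + (iz << 2)]
        return node.occupied
```

Because children are allocated only along inserted paths, large empty regions stay as single coarse leaves, which is what keeps the representation compact and memory efficient.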
International Symposium on Optical Science and Technology | 2002
Jean Lacoursière; Michel Doucet; Eugene O. Curatu; Maxime Savard; Sonia Verreault; Simon Thibault; Paul C. Chevrette; Benoit Ricard
As part of the Infrared Eye project, this article describes the design of large-deviation, achromatic Risley prism scanning systems operating in the 0.5 - 0.92 and 8 - 9.5 μm spectral regions. Designing these systems is challenging due to the large deviation required (0 to 25 degrees), the large spectral bandwidth and the mechanical constraints imposed by the need to rotate the prisms to any position in 1/30 second. A design approach making extensive use of the versatility of optical design software is described. Designs consisting of different pairs of optical materials are shown in order to illustrate the trade-off between chromatic aberration, mass and vignetting. Control of chromatic aberration and reasonable prism shape is obtained over 8 - 9.5 μm with zinc sulfide and germanium. The design is more difficult for the 0.5 - 0.92 μm band. Trade-offs consist of using sapphire with Cleartran® over a reduced bandwidth (0.75 - 0.9 μm) or acrylic singlets with the Infrared Eye in active mode (0.85 - 0.86 μm). Non-sequential ray tracing is used to study the effects of fresnelizing one element of the achromat to reduce its mass, and to evaluate detector narcissus in the 8 - 9.5 μm region.
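The pointing behaviour of a Risley pair can be sketched to first order with the thin-prism approximation: each prism deviates the beam by a fixed angle toward its rotation azimuth, and the net deviation is the vector sum of the two, ranging from zero (prisms opposed) to twice the single-prism deviation (prisms aligned). The single-prism deviation value below is illustrative, chosen only so the pair spans 0 to 25 degrees as in the abstract.

```python
import math

# First-order (thin-prism) sketch of Risley-pair pointing, assuming two
# identical prisms; delta is illustrative so that 2*delta = 25 degrees.
delta = 12.5  # deviation of one prism (degrees)

def net_deviation(theta1, theta2, delta=delta):
    """Vector sum of the two prism deviations; angles in degrees."""
    t1, t2 = math.radians(theta1), math.radians(theta2)
    dx = delta * (math.cos(t1) + math.cos(t2))
    dy = delta * (math.sin(t1) + math.sin(t2))
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

def solve_rotations(target_dev, azimuth, delta=delta):
    """Prism rotation angles producing deviation target_dev at azimuth:
    counter-rotate symmetrically, since |net| = 2*delta*cos((t1-t2)/2)."""
    half = math.degrees(math.acos(target_dev / (2 * delta)))
    return azimuth + half, azimuth - half
```

This first-order model ignores the chromatic and higher-order effects that drive the actual design trade-offs discussed in the paper; it only captures the scan geometry.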
ieee international symposium on robotic and sensors environments | 2011
Jonathan Fournier; Marielle Mokhtari; Benoit Ricard
These days, robotic platforms are commonly used in operational conditions where manned operations are not practical, not cost-effective, or too dangerous. These robotic devices rely heavily on remote operation using imagery acquired by on-board sensors, which provides quite limited situational awareness to the user. In difficult scenarios, this lack of situational awareness could lead to the failure of the mission. This paper presents a new concept, currently in development, that will improve the situational awareness of the remote platform operator through an immersive virtual environment. The system uses an immersive chamber (CAVE) in which the operator is able to visualize and interact with an avatar of a robot operating in a 3D model of its area of operation. The 3D model is incrementally built from the remote platform's sensor feeds and provides “persistent data” to the user. This paper presents the first phase of the work, which involves the development of a concept demonstration prototype. The implementation uses a robot simulator instead of a real-world robot in order to rapidly evaluate the concept and perform experiments. The tools developed in simulation will serve as the base for further developments and support the transition to a real robotic platform.
Quantitative InfraRed Thermography | 2007
Amar El-Maadi; Vincent Grégoire; Louis St-Laurent; Hélène Torresan; Benoit Turgeon; Donald Prevost; Patrick Hebert; Denis Laurendeau; Benoit Ricard; Xavier Maldague
In this paper, we summarize recent work on the use of visible and infrared imagery for surveillance applications, and present the latest developments in this field from three partner institutions in the Québec, Canada area. Our focus is on both hardware and software: hardware here concerns channel registration and innovative optical systems, while software is related to high-level information extraction. An extensive literature review is provided.
Unmanned ground vehicle technology. Conference | 2004
Jack Collier; Benoit Ricard; Bruce Leonard Digney; David Cheng; Michael Trentini; Blake Beckman
In order for an Unmanned Ground Vehicle (UGV) to operate effectively, it must be able to perceive its environment in an accurate, robust and effective manner. This is done by creating a world representation which encompasses all the perceptual information necessary for the UGV to understand its surroundings. These perceptual needs are a function of the robot's mobility characteristics, the complexity of the environment in which it operates, and the mission with which the UGV has been tasked. Most perceptual systems are designed with a predefined vehicle, environment, and mission complexity in mind. This can lead the robot to fail when it encounters a situation it was not designed for, since its internal representation is insufficient for effective navigation. This paper presents a research framework currently being investigated by Defence R&D Canada (DRDC), which will ultimately relieve robotic vehicles of this problem by allowing the UGV to recognize representational deficiencies and change its perceptual strategy to alleviate them. This will allow the UGV to move in and out of a wide variety of environments, from outdoor rural to indoor urban, at run time without reprogramming. We present sensor and perception work currently being done and outline our future research in this area.
Thermosense XXIV | 2002
Benoit Ricard; Paul C. Chevrette; Mario Pichette
The Infrared (IR) Eye was developed with support from the National Search-and-Rescue Secretariat (NSS), with the aim of improving the efficiency of airborne search-and-rescue operations. The IR Eye concept is based on the human eye and simultaneously uses two fields of view to optimize area coverage and detection capability. It integrates two cameras: the first, with a wide field of view of 40 degrees, is used for search and detection, while the second camera, with a narrower field of view of 10 degrees for higher resolution and identification, is mobile within the wide field and slaved to the operator's line of sight by means of an eye-tracking system. The images from both cameras are fused and shown simultaneously on a standard high-resolution CRT display unit, interfaced with the eye-tracking unit in order to optimize the man-machine interface. The system was flight tested using the Advanced System Research Aircraft (Bell 412 helicopter) from the Flight Research Laboratory of the National Research Council of Canada. This paper presents some results of the flight tests, indicates the strengths and deficiencies of the system, and suggests future improvements for an advanced system.
Cockpit Displays VI: Displays for Defense Applications | 1999
Paul C. Chevrette; Benoit Ricard
The Infrared Eye is a new concept of surveillance system that mimics human eye behavior to improve the detection of small or low-contrast targets. In search-and-rescue (SAR) operations, a wide-field-of-view (WFOV) IR camera of approximately 20 degrees is used for target detection, and the system is switched to a narrow field of view (NFOV) of approximately 5 degrees for better target identification. In current SAR systems, both FOVs cannot be used concurrently on the same display. The system presented in this paper fuses, on the same high-resolution display, the high-sensitivity WFOV image and the high-resolution NFOV image obtained from two IR cameras. The movement of the NFOV image within the WFOV image is slaved to the operator's eye movement by an eye-tracking device. The operator's central vision is always looking at the high-resolution IR image of the scene captured by the NFOV camera, while his peripheral vision is filled by the enhanced-sensitivity (but low-resolution) image of the WFOV camera. This paper describes the operating principle and implementation of the display, including its interface with an eye-tracking system and the opto-mechanical system used to steer the NFOV camera.
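The gaze-slaving step can be sketched with a simple pinhole mapping from a gaze pixel on the WFOV display to steering angles for the NFOV camera. The display resolution, field of view, and function name below are illustrative assumptions, not the paper's parameters.

```python
import math

# Sketch: map the operator's gaze pixel on the WFOV display to
# (azimuth, elevation) steering angles for the NFOV camera, assuming
# an ideal pinhole model. W, H, and WFOV_H are illustrative.
W, H = 1280, 1024    # display resolution (pixels)
WFOV_H = 20.0        # horizontal wide field of view (degrees)

def gaze_to_steering(px, py):
    """Return (azimuth, elevation) in degrees for gaze pixel (px, py)."""
    # Equivalent focal length in pixels for the given horizontal FOV.
    f = (W / 2) / math.tan(math.radians(WFOV_H / 2))
    az = math.degrees(math.atan((px - W / 2) / f))
    el = math.degrees(math.atan((H / 2 - py) / f))  # +elevation is up
    return az, el
```

The screen centre maps to zero steering, and the right display edge maps to half the horizontal WFOV (10 degrees in this sketch); a real system would also account for lens distortion and the mount's kinematics.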
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Jonathan Fournier; Benoit Ricard; Denis Laurendeau
The use of robots for (semi-)autonomous operations in complex terrain such as urban environments poses difficult mobility, mapping, and perception challenges. To work efficiently, a robot should be provided with sensors and software such that it can perceive and analyze the world in 3D. Real-time 3D sensing and perception in this operational context are paramount. To address these challenges, DRDC Valcartier has over the past years developed a compact sensor that combines a wide-baseline stereo camera and a laser scanner with a full 360-degree azimuth and 55-degree elevation field of view, allowing the robot to view and manage overhanging obstacles as well as obstacles at ground level. Sensing in 3D is common, but to efficiently navigate and work in complex terrain, the robot should also perceive, decide and act in three dimensions. Therefore, 3D information should be preserved and exploited in all steps of the process. To achieve this, we use a multiresolution octree to store the acquired data, allowing the mapping of large environments while keeping the representation compact and memory efficient. Ray tracing is used to build and update the 3D occupancy model. This model is used, via a temporary 2.5D map, for navigation, obstacle avoidance and efficient frontier-based exploration. This paper describes the volumetric sensor concept, describes its design features and presents an overview of the 3D software framework that allows 3D information persistency through all computation steps. Simulation and real-world experiments are presented at the end of the paper to demonstrate the key elements of our approach.
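The ray-traced occupancy update can be sketched as follows: voxels along the sensor ray from the origin to the measured hit are marked free, and the endpoint voxel is marked occupied. A real system would typically use probabilistic (log-odds) updates and exact voxel traversal rather than the sampled walk below; the grid resolution, dictionary representation, and labels are illustrative only.

```python
import math

# Sketch of a ray-traced occupancy update on a uniform voxel grid.
# Assumes positive coordinates; all parameters are illustrative.
RES = 0.1  # voxel edge length (m)

def update_ray(grid, origin, hit):
    """grid: dict voxel-index -> 'free'/'occupied'; origin, hit in metres.
    Marks voxels along the ray free and the endpoint voxel occupied."""
    ox, oy, oz = origin
    hx, hy, hz = hit
    dist = math.dist(origin, hit)
    n = max(1, int(dist / (RES / 2)))      # sample at half-voxel steps
    for i in range(n):
        t = i / n
        voxel = (int((ox + t * (hx - ox)) / RES),
                 int((oy + t * (hy - oy)) / RES),
                 int((oz + t * (hz - oz)) / RES))
        if grid.get(voxel) != 'occupied':  # don't erase previous hits
            grid[voxel] = 'free'
    grid[(int(hx / RES), int(hy / RES), int(hz / RES))] = 'occupied'
```

Collapsing this 3D model along the vertical axis (e.g. taking the worst-case cell in each column) yields the kind of temporary 2.5D map the paper uses for navigation and obstacle avoidance.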
Proceedings of SPIE, the International Society for Optical Engineering | 2006
Michael Trentini; Blake Beckman; Bruce Leonard Digney; Isabelle Vincent; Benoit Ricard