François Berry
Centre national de la recherche scientifique
Publications
Featured research published by François Berry.
The International Journal of Robotics Research | 2003
Enric Cervera; A.P. del Pobil; François Berry; Philippe Martinet
Neither of the classical visual servoing approaches, position-based and image-based, is completely satisfactory. In position-based visual servoing the trajectory of the robot is well defined, but the approach suffers mainly from the image features leaving the visual field of the cameras. Image-based visual servoing, on the other hand, is generally satisfactory and robust in the presence of camera and hand-eye calibration errors. However, in some cases singularities and local minima may arise, and the robot may reach its joint limits. This paper is a step towards a synthesis of both approaches that keeps their particular advantages: the trajectory of the camera motion is predictable, and the image features remain in the field of view of the camera. The basis is the introduction of three-dimensional information into the feature vector. Point depth and object pose produce useful behavior in the control of the camera. Using the task-function approach, we derive the relationship between the velocity screw of the camera and the current and desired poses of the object in the camera frame. Camera calibration, at least a coarse one, is assumed. Experimental results on real robotic platforms illustrate the presented approach.
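The classical image-based control law underlying such schemes computes a camera velocity screw from a feature error through the pseudo-inverse of an interaction matrix. The sketch below is a minimal textbook version of one control iteration, not the hybrid feature vector of the paper; all names, gains, and feature values are illustrative.

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Classic interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def servo_step(features, desired, depths, gain=0.5):
    """One control iteration: v = -gain * pinv(L) * (s - s*)."""
    L = np.vstack([interaction_matrix_point(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    # Returns the 6-vector velocity screw (vx, vy, vz, wx, wy, wz).
    return -gain * np.linalg.pinv(L) @ error

v = servo_step(features=[(0.1, 0.0), (0.0, 0.1), (-0.1, 0.0), (0.0, -0.1)],
               desired=[(0.0, 0.0)] * 4,
               depths=[1.0, 1.0, 1.0, 1.0])
```

The hybrid approach of the paper augments the feature vector with depth and pose terms so that the resulting camera trajectory stays predictable while the features stay in view.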
International Conference on Distributed Smart Cameras | 2007
Fabio Dias; François Berry; Jocelyn Sérot; François Marmoiton
Processing images to extract useful information in real time is a complex task, dealing with large amounts of iconic data and requiring intensive computation. Smart cameras use embedded processing to relieve the host system of the low-level processing load and to reduce communication flows and overheads. Field-programmable devices are of special interest for smart camera design: flexibility, reconfigurability, and parallel processing capabilities are among their most important features. In this paper we present an FPGA-based smart camera research platform. The hardware architecture is described and some design issues are discussed. Our goal is to exploit the reconfigurability of the FPGA device to adapt the system architecture to a given application. To that end, a design methodology based on pre-programmed processing elements is proposed and sketched. Some implementation issues are discussed, and a template-tracking application is given as an example along with its experimental results.
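To fix ideas, the template-tracking example mentioned above can be sketched in software as an exhaustive sum-of-absolute-differences (SAD) search, a kernel that maps naturally onto FPGA processing elements. The paper does not specify its matching criterion at this level; this is only the algorithmic idea, with an illustrative synthetic frame.

```python
import numpy as np

def sad_track(frame, template):
    """Return (row, col) of the best template match in frame (SAD criterion)."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_pos = None, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            # Sum of absolute differences over the candidate window.
            score = np.abs(frame[r:r+th, c:c+tw].astype(int)
                           - template.astype(int)).sum()
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos

frame = np.zeros((16, 16), dtype=np.uint8)
frame[5:9, 7:11] = 255                      # a bright 4x4 patch
template = np.full((4, 4), 255, dtype=np.uint8)
print(sad_track(frame, template))           # -> (5, 7)
```

On an FPGA, the per-window SAD sums are typically computed in parallel rather than in nested loops, which is precisely the kind of restructuring the reconfigurable architecture enables.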
EURASIP Journal on Embedded Systems | 2007
Pierre Chalimbaud; François Berry
In computer vision, and more particularly in vision processing, the impressive evolution of algorithms and the emergence of new techniques have dramatically increased algorithm complexity. In this paper, a novel FPGA-based architecture dedicated to active vision (more precisely, early vision) is proposed. Active vision is an alternative approach to artificial vision problems: the central idea is to take into account the perceptual aspects of visual tasks, inspired by biological vision systems. For this reason, we propose an original approach based on a system on programmable chip implemented in an FPGA connected to a CMOS imager and an inertial sensor set. Built on reprogrammable devices, the system offers a high degree of versatility and allows the implementation of parallel image processing algorithms.
International Conference on Computer Vision | 2009
Omar Ait-Aider; François Berry
We describe a spatio-temporal triangulation method to be used with rolling shutter cameras. We show how a single pair of rolling shutter images enables the computation of both structure and motion of rigid moving objects. Starting from a set of point correspondences in the left and right images, we introduce the velocity and shutter characteristics into the triangulation equations. This results in a non-linear error criterion whose minimization in the least-squares sense provides the shape and velocity parameters. Unlike previous work on rolling shutter cameras, the constraining assumption of a priori knowledge of the object geometry is removed and a full 3D motion model is considered. The aim of this work is thus to broaden the applicability of rolling shutter cameras. Experimental evaluation results confirm the feasibility of the approach.
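The key modeling idea is that each image row is exposed at its own instant, so a moving point is observed at a row-dependent 3-D position. A minimal sketch of this rolling-shutter projection model follows; the time-per-row `tau`, the unit focal length, and the fixed-point iteration are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def rs_project(P0, V, tau, f=1.0, iters=20):
    """Project a moving point P(t) = P0 + t*V under a rolling shutter.

    Row v is exposed at time t(v) = v * tau, so the observed row satisfies
    the fixed point v = f * Y(t(v)) / Z(t(v)), solved here by iteration.
    """
    v = f * P0[1] / P0[2]              # start from the global-shutter row
    for _ in range(iters):
        t = v * tau                    # exposure instant of the current row
        P = P0 + t * V                 # point position at that instant
        v = f * P[1] / P[2]
    u = f * P[0] / P[2]
    return u, v

# A static point reduces to the usual pinhole projection:
u, v = rs_project(np.array([0.2, 0.1, 2.0]), np.zeros(3), tau=1e-4)
```

Stacking such projection equations for every correspondence, with the velocity as an unknown, yields the non-linear least-squares criterion the paper minimizes.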
Journal of Systems Architecture | 2014
Merwan Birem; François Berry
DreamCam is a modular smart camera built around an FPGA-based main processing board. The core of the camera is an Altera Cyclone III associated with a CMOS imager and six private RAM blocks. The main novelty of our work is a new smart camera architecture together with several hardware modules (IPs) that efficiently extract and sort visual features in real time. In this paper, extraction is performed by a Harris and Stephens detector associated with customized modules that extract, select, and sort visual features in real time. As a result, DreamCam (in this configuration) provides a description of each visual feature in the form of its position and the grey-level template around it.
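For reference, the Harris and Stephens response that the extraction IP computes in hardware can be written in a few lines of software: R = det(M) - k·trace(M)², where M is the local structure tensor of image gradients. The 3x3 window and k = 0.04 below are the usual textbook choices, not parameters taken from the paper.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris & Stephens corner response over a 3x3 structure-tensor window."""
    img = img.astype(float)
    Ix = np.gradient(img, axis=1)
    Iy = np.gradient(img, axis=0)

    def box(a):
        # 3x3 box sum with zero padding (sums the gradient products).
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy**2 - k * (Sxx + Syy)**2

img = np.zeros((9, 9))
img[4:, 4:] = 1.0                      # a single step corner near (4, 4)
r, c = np.unravel_index(np.argmax(harris_response(img)), img.shape)
```

The hardware version streams pixels through gradient and windowed-sum pipelines instead of operating on whole arrays, but the arithmetic per pixel is the same.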
Field-Programmable Logic and Applications | 2011
Jocelyn Sérot; François Berry; Sameer Ahmed
We introduce CAPH, a new domain-specific language (DSL) suited to the implementation of stream-processing applications on field programmable gate arrays (FPGAs). CAPH relies upon the actor/dataflow model of computation. Applications are described as networks of purely dataflow actors exchanging tokens through unidirectional channels. The behavior of each actor is defined as a set of transition rules using pattern matching. The CAPH suite of tools currently comprises a reference interpreter and a compiler producing both SystemC and synthesizable VHDL code. We describe the implementation, with a preliminary version of the compiler, of a simple real-time motion detection application on an FPGA-based smart camera platform. The language reference manual and a prototype compiler are available from http://wwwlasmea.univ-bpclermont.fr/Personnel/Jocelyn.Serot/caph.html.
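As a rough software analogy of the actor/dataflow model (this is not CAPH syntax), an actor can be seen as a transition rule applied to each token drained from an input channel. The binarization actor below is purely illustrative.

```python
from collections import deque

def make_actor(rule):
    """Wrap a per-token transition rule into an actor draining a channel."""
    def run(channel, out):
        while channel:
            out.append(rule(channel.popleft()))
    return run

# A simple binarization actor: its rule maps a pixel token to 0 or 255.
binarize = make_actor(lambda px: 255 if px > 128 else 0)

pixels = deque([10, 200, 130, 90])      # input channel of pixel tokens
result = []                             # output channel
binarize(pixels, result)
print(result)                           # -> [0, 255, 255, 0]
```

In CAPH, such rules are written with pattern matching over structured token streams, and the compiler maps each actor to hardware rather than to a Python loop.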
Archive | 2009
Fábio Dias Real; François Berry
Scientific and industrial communities are showing a growing interest in embedded systems. Technological advances in microelectronics and very large scale integration (VLSI) today make it possible to build increasingly complex systems in a single device. Smart cameras are part of this evolution and can be defined as embedded systems dedicated to image acquisition and processing. Pre-processing images and video streams within the camera has several advantages: it gives more autonomy to the system, lightens the processing load of the host device, and prevents communication bottlenecks. The goal of this chapter is to give an overview of smart camera technologies, including hardware devices, design, and applications of such systems.
Archive | 2013
Jocelyn Sérot; François Berry; Sameer Ahmed
We introduce CAPH, a new domain-specific language (DSL) suited to the implementation of stream-processing applications on field programmable gate arrays (FPGAs). CAPH relies upon the actor/dataflow model of computation. Applications are described as networks of purely dataflow actors exchanging tokens through unidirectional channels. The behavior of each actor is defined as a set of transition rules using pattern matching. The CAPH suite of tools currently comprises a reference interpreter and a compiler producing both SystemC and synthesizable VHDL code. We describe the implementation, with a preliminary version of the compiler, of a simple real-time motion detection application on an FPGA-based smart camera platform. The language reference manual and a prototype compiler are available from http://wwwlasmea.univ-bpclermont.fr/Personnel/Jocelyn.Serot/caph.html.
Field-Programmable Technology | 2004
Pierre Chalimbaud; François Berry
A novel architecture dedicated to image processing is presented. The most original aspect of the approach is the use of a high-density FPGA to build a versatile embedded smart camera. We show the main advantages of such a system, based on a CMOS imaging device and a programmable chip. With this structure, the system offers a high degree of versatility and allows the implementation of parallel image processing algorithms. As a result, a template-tracking architecture is proposed.
The International Journal of Robotics Research | 2007
Pierre Chalimbaud; François Marmoiton; François Berry
In the neurological system of primates, changes in posture are detected by the central nervous system through a vestibular process. This process, located in the inner ear, coordinates several system outputs to maintain stable balance, visual gaze, and autonomic control in response to changes in posture. The vestibular data is consequently merged with other sensory data such as touch and vision. This visuo-inertial fusion is crucial for several tasks such as navigation, depth estimation, and stabilization. This paper proposes a "primate-inspired" sensing hardware based on a CMOS imager and an artificial vestibular system. The whole sensor can be considered a smart embedded sensor, and one of the most original aspects of this approach is the use of a system on chip implemented in an FPGA to manage the whole system. The sensing device is designed around a 4-megapixel CMOS imager, and the artificial vestibular set is composed of three linear accelerometers and three gyroscopes. With this structure, the system provides a high degree of versatility and allows the implementation of parallel image and inertial processing algorithms. To illustrate the proposed approach, depth estimation with a Kalman filtering implementation is carried out.
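To make the final step concrete, here is a minimal predict/update Kalman cycle for a scalar depth state. The paper fuses inertial measurements as well; this sketch only shows the filtering skeleton, with a constant-depth process model and illustrative noise values `q` and `r`.

```python
import numpy as np

def kalman_depth(measurements, q=1e-4, r=0.05):
    """Scalar Kalman filter: estimate depth from noisy measurements."""
    z_hat, p = measurements[0], 1.0    # initial state and covariance
    for z in measurements[1:]:
        p = p + q                      # predict (constant-depth model)
        k = p / (p + r)                # Kalman gain
        z_hat = z_hat + k * (z - z_hat)  # update with the innovation
        p = (1 - k) * p
    return z_hat

rng = np.random.default_rng(0)
true_depth = 2.0
noisy = true_depth + 0.2 * rng.standard_normal(200)
est = kalman_depth(noisy)              # converges near 2.0
```

In the visuo-inertial setting, the inertial set supplies the motion used in the prediction step, which is what makes depth observable from the image stream.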