Publications


Featured research published by Barthélémy Heyrman.


IEEE Journal of Solid-State Circuits | 2008

A 10 000 fps CMOS Sensor With Massively Parallel Image Processing

Jérôme Dubois; Dominique Ginhac; Michel Paindavoine; Barthélémy Heyrman

A high-speed analog VLSI image acquisition and pre-processing system has been designed and fabricated in a 0.35 µm standard CMOS process. The chip features a massively parallel architecture enabling the computation of programmable low-level image processing in each pixel. Extraction of spatial gradients and convolutions such as Sobel or Laplacian filters are implemented on the circuit. For this purpose, each 35 µm × 35 µm pixel includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit based on a four-quadrant multiplier architecture. The retina provides address-event coded output on three asynchronous buses: one output dedicated to the gradient and the other two to the pixel values. A 64 × 64 pixel proof-of-concept chip was fabricated. A dedicated embedded platform including FPGA and ADCs has also been designed to evaluate the vision chip. Measured results show that the proposed sensor successfully captures raw images up to 10,000 frames per second and runs low-level image processing at a frame rate of 2,000 to 5,000 frames per second.
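
For reference, the spatial-gradient extraction described above corresponds, in software terms, to a small convolution over each pixel's neighbourhood. The sketch below is a minimal NumPy/SciPy illustration of Sobel filtering, not the chip's analog implementation; the function name and the synthetic 64 × 64 frame are ours.

import numpy as np
from scipy.ndimage import convolve

# 3x3 Sobel kernels for horizontal and vertical spatial gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def sobel_gradients(frame):
    # Software analogue of the per-pixel gradient extraction the chip
    # performs in analog, in parallel across the whole array.
    gx = convolve(frame.astype(np.float32), SOBEL_X, mode="nearest")
    gy = convolve(frame.astype(np.float32), SOBEL_Y, mode="nearest")
    return np.hypot(gx, gy)  # gradient magnitude per pixel

# Synthetic 64 x 64 frame, matching the proof-of-concept resolution
frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
magnitude = sobel_gradients(frame)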


Optical Engineering | 2014

Hardware-based smart camera for recovering high dynamic range video from multiple exposures

Pierre-Jean Lapray; Barthélémy Heyrman; Dominique Ginhac

In many applications, such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrast. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed-exposure ones. A real-time hardware implementation of the HDR technique that shows more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. The novelty of our work lies in the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
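
A common software formulation of the multi-exposure fusion step is a weighted average of per-exposure radiance estimates. The sketch below assumes a linear sensor response and a hat weighting function; it is a generic reconstruction, not the camera's exact hardware pipeline, and the exposure times are illustrative.

import numpy as np

def fuse_hdr(frames, exposure_times):
    # Weighted average of radiance estimates from bracketed LDR frames.
    # Mid-range pixel values get the highest weight, since they are the
    # least likely to be under- or over-exposed.
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        z = frame.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weighting function
        acc += w * z / t                  # radiance estimate for this exposure
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

times = [1e-3, 4e-3, 16e-3]               # three bracketed exposures (seconds)
frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in times]
radiance = fuse_hdr(frames, times)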


Journal of Real-Time Image Processing | 2016

Fast prototyping of a SoC-based smart-camera: a real-time fall detection case study

Benaoumeur Senouci; Imen Charfi; Barthélémy Heyrman; Julien Dubois; Johel Miteran

Smart cameras, i.e. cameras able to acquire and process images in real time, are a typical example of the new embedded computer vision systems. A key example application is automatic fall detection, which can be useful for helping elderly people in daily life. In this paper, we propose a methodology for the development and fast prototyping of a fall detection system based on such a smart camera, which reduces the development time compared to standard approaches. Founded on a supervised classification approach, we propose a HW/SW implementation that detects falls in a home environment using a single camera and an optimized descriptor adapted to real-time tasks. This heterogeneous implementation is based on Xilinx's Zynq system-on-chip. The main contributions of this work are (i) a co-design methodology that enables the HW/SW partitioning to be delayed, using a high-level algorithmic description and high-level synthesis tools; this fast prototyping allows rapid architecture exploration and optimisation to be performed; (ii) the design of a hardware accelerator dedicated to boosting-based classification, a very popular and efficient algorithm in image analysis; and (iii) a fall-detection system embedded in a smart camera that can be integrated into the environment of elderly people. The performance of our system is finally compared to the state of the art.
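
At inference time, the boosting-based classifier that the hardware accelerator targets reduces to a weighted vote of simple threshold tests, which is what makes it attractive for a hardware implementation. The sketch below shows this with decision stumps; the stump parameters and feature vector are hypothetical, and the paper's actual classifier and features differ.

def boosted_predict(x, stumps):
    # Evaluate a boosted ensemble of decision stumps on feature vector x.
    # Each stump is (feature_index, threshold, polarity, alpha); the
    # prediction is the sign of the alpha-weighted vote.
    score = 0.0
    for idx, thresh, polarity, alpha in stumps:
        vote = 1.0 if polarity * (x[idx] - thresh) > 0 else -1.0
        score += alpha * vote
    return 1 if score > 0 else -1  # 1 = fall detected, -1 = no fall

# Hypothetical ensemble of three stumps
stumps = [(0, 0.5, 1, 0.9), (3, 1.2, -1, 0.6), (7, 0.1, 1, 0.4)]
label = boosted_predict([0.7, 0.0, 0.0, 0.4, 0.0, 0.0, 0.0, 0.3], stumps)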


International Symposium on Circuits and Systems | 2012

HDR-ARtiSt: High dynamic range advanced real-time imaging system

Pierre-Jean Lapray; Barthélémy Heyrman; Matthieu Rossé; Dominique Ginhac

This paper describes the HDR-ARtiSt hardware platform, an FPGA-based architecture that can produce real-time high dynamic range video from successive image acquisitions. The hardware platform is built around a standard low dynamic range (LDR) CMOS sensor and a Virtex-5 FPGA board. The CMOS sensor is an EV76C560 provided by e2v. This 1.3-megapixel device offers novel pixel integration/readout modes and embedded image pre-processing capabilities, including multiframe acquisition with various exposure times. Our approach consists of a hardware architecture with different algorithms: double exposure control during image capture, building of an HDR image by combining the multiple frames, and final tone mapping for viewing on an LCD display. Our video camera system achieves a real-time video rate of 30 frames per second at a full sensor resolution of 1,280 × 1,024 pixels.
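
The final tone-mapping stage compresses the fused radiance map back into display range. A common global operator is Reinhard's, sketched below as generic software; the paper does not state here which operator the FPGA implements, and the key value is the usual default rather than a figure from the paper.

import numpy as np

def reinhard_tonemap(radiance, key=0.18, eps=1e-6):
    # Global Reinhard operator: scale by the log-average luminance,
    # then compress [0, inf) into [0, 1) and quantise for display.
    log_avg = np.exp(np.mean(np.log(radiance + eps)))
    scaled = key * radiance / log_avg
    ldr = scaled / (1.0 + scaled)
    return (255.0 * ldr).astype(np.uint8)

display_frame = reinhard_tonemap(np.random.rand(64, 64) * 100.0)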


EURASIP Journal on Embedded Systems | 2008

An SIMD programmable vision chip with high-speed focal plane image processing

Dominique Ginhac; Jérôme Dubois; Michel Paindavoine; Barthélémy Heyrman

A high-speed analog VLSI image acquisition and low-level image processing system is presented. The architecture of the chip is based on a dynamically reconfigurable SIMD processor array. The chip features a massively parallel architecture enabling the computation of programmable mask-based image processing in each pixel. Extraction of spatial gradients and convolutions such as Sobel operators are implemented on the circuit. Each pixel includes a photodiode, an amplifier, two storage capacitors, and an analog arithmetic unit based on a four-quadrant multiplier architecture. A proof-of-concept chip was fabricated in a 0.35 µm standard CMOS process, with a pixel size of 35 µm × 35 µm. A dedicated embedded platform including an FPGA and ADCs has also been designed to evaluate the vision chip. The chip can capture raw images at up to 10,000 frames per second and runs low-level image processing at a frame rate of 2,000 to 5,000 frames per second.
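
Functionally, the programmable mask-based processing amounts to each pixel multiplying its 3 × 3 neighbourhood by a mask and summing, which the four-quadrant multipliers carry out in analog across the whole array at once. The sketch below is a behavioural software model only; the function name and the Laplacian example mask are ours.

import numpy as np

def apply_mask(frame, mask):
    # Behavioural model: every pixel computes a weighted sum of its
    # 3x3 neighbourhood, as the per-pixel analog units do in parallel.
    h, w = frame.shape
    padded = np.pad(frame.astype(np.float32), 1, mode="edge")
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += mask[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float32)
response = apply_mask(np.random.rand(64, 64), laplacian)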


Journal of Systems Architecture | 2013

Scene-based non-uniformity correction: From algorithm to implementation on a smart camera

Tomasz Toczek; Faouzi Hamdi; Barthélémy Heyrman; Jérôme Dubois; Johel Miteran; Dominique Ginhac

Raw output data from image sensors tends to exhibit a form of bias due to slight on-die variations between photodetectors, as well as between amplifiers. The resulting bias, called fixed pattern noise (FPN), is often corrected by subtracting its value, estimated through calibration, from the sensor's raw signal. This paper introduces an online scene-based technique for improved fixed-pattern noise compensation that does not rely on calibration and hence is more robust to the dynamic changes in the FPN that may occur slowly over time. The article first gives a quick summary of existing FPN correction methods and explains how our approach relates to them. Three different pipeline architectures for real-time implementation on an FPGA-based smart camera are then discussed. For each of them, FPGA implementation details, performance, and hardware costs are provided. Experimental results on a set of seven different scenes show that the proposed correction chain adds little resource use while guaranteeing high-quality images on a wide variety of scenes.
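
One classic scene-based formulation rests on the constant-statistics assumption: over time, every pixel sees roughly the same signal distribution, so deviations of a pixel's temporal mean from the global mean reveal its fixed offset. The sketch below illustrates that idea only; the paper's actual correction chain and its three pipeline variants may differ, and the class name and update rate are ours.

import numpy as np

class FpnCorrector:
    # Online scene-based FPN offset estimation (constant-statistics sketch).
    def __init__(self, shape, rate=0.01):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.rate = rate

    def correct(self, frame):
        f = frame.astype(np.float64)
        self.mean += self.rate * (f - self.mean)  # per-pixel running mean
        offset = self.mean - self.mean.mean()     # estimated fixed pattern
        return f - offset

corrector = FpnCorrector((64, 64))
clean = corrector.correct(np.random.randint(0, 256, (64, 64)))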


International Conference on Distributed Smart Cameras | 2011

Smart camera design for realtime high dynamic range imaging

Pierre-Jean Lapray; Barthélémy Heyrman; Matthieu Rossé; Dominique Ginhac

Many camera sensors suffer from limited dynamic range, so displayed images and videos lack clear details. This paper describes our approach to generating high dynamic range (HDR) video from an image sequence while modifying the exposure time of each new frame. For this purpose, we propose an FPGA-based architecture that can produce real-time high dynamic range video from successive image acquisitions. Our hardware platform is built around a standard low dynamic range CMOS sensor and a Virtex-5 FPGA board. The CMOS sensor is an EV76C560 provided by e2v; this 1.3-megapixel device offers novel pixel integration/readout modes and embedded image pre-processing capabilities, including multiframe acquisition with various exposure times. Our approach consists of a pipeline of different algorithmic phases: automatic exposure control during image capture, alignment of successive images to compensate for camera and object movements, building of an HDR image by combining the multiple frames, and final tone mapping for viewing on an LCD display. Our aim is to achieve a real-time video rate of 25 frames per second at a full sensor resolution of 1,280 × 1,024 pixels.
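
The automatic exposure control phase can be approximated by steering each exposure so that the frame's mean brightness tracks a target value. The proportional-control sketch below is illustrative; the target, gain, and exposure limits are assumptions, not figures from the paper.

def update_exposure(exposure_us, frame_mean, target=128.0, gain=0.5,
                    lo=10.0, hi=100000.0):
    # Proportional update: lengthen the exposure for dark frames,
    # shorten it for bright ones, clamped to the sensor's valid range.
    error = (target - frame_mean) / target
    return min(max(exposure_us * (1.0 + gain * error), lo), hi)

# A dark frame (mean 40) pushes a 1000 us exposure up to about 1344 us.
next_exposure = update_exposure(1000.0, 40.0)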


International Conference on Distributed Smart Cameras | 2013

A 1.3 megapixel FPGA-based smart camera for high dynamic range real time video

Pierre-Jean Lapray; Barthélémy Heyrman; Matthieu Rossé; Dominique Ginhac

A camera can capture only part of the information in a high dynamic range scene, whereas the human visual system can perceive the same scene fully. This is especially true for real scenes where the difference in light intensity between dark and bright areas is high. The imaging technique that can overcome this problem is called HDR (high dynamic range). It produces images from a set of multiple LDR (low dynamic range) images captured with different exposure times. This technique appears to be one of the most appropriate and cheapest solutions to enhance the dynamic range of captured environments. We developed an FPGA-based smart camera that produces an HDR live colour video stream from three successive acquisitions. Our hardware platform is built around a standard LDR CMOS sensor and a Virtex-6 FPGA board. The hardware architecture embeds multiple exposure control, a memory management unit, HDR creation, and tone mapping. Our video camera delivers real-time video at 60 frames per second at a full sensor resolution of 1,280 × 1,024 pixels.
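
The memory management unit's role is essentially to keep the two previous exposures available while the third streams in, so that a fused HDR frame can be produced for every new sensor frame. A software analogue is a three-slot sliding window, sketched below; the class name is ours and this is an illustration, not the hardware design.

from collections import deque

class ExposureWindow:
    # Keep the last three differently exposed frames so an HDR frame
    # can be fused for every new sensor frame (sliding window).
    def __init__(self):
        self.frames = deque(maxlen=3)

    def push(self, frame, exposure_time):
        self.frames.append((frame, exposure_time))
        # Return the full bracket once three exposures are available.
        return list(self.frames) if len(self.frames) == 3 else None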


Proceedings of the 10th International Conference on Distributed Smart Cameras | 2016

Real-time ghost free HDR video stream generation using weight adaptation based method

Mustapha Bouderbane; Pierre-Jean Lapray; Julien Dubois; Barthélémy Heyrman; Dominique Ginhac

Temporal exposure bracketing is a simple and low-cost technique to generate high dynamic range (HDR) images. It is widely used to recover the whole dynamic range of a scene by fusing an adequate number of low dynamic range (LDR) images. Temporal exposure bracketing was introduced for static scenes and cannot be applied directly to dynamic scenes, since camera or object motion between bracketed exposures creates ghosts in the resulting HDR image. In this paper we propose a modification of the HDR algorithm to remove ghost artifacts, and we present a real-time implementation of this method on a smart camera (HDR video stream at 1280 × 1024 and 60 fps). We present experimental results showing the ghost-removal efficiency of the implemented method.
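
The weight-adaptation idea can be sketched as follows: each LDR pixel's fusion weight is attenuated when its exposure-normalised value disagrees with a reference exposure, so that moving objects contribute only from the reference frame. This is a generic reconstruction; the threshold and the choice of reference are illustrative, not the paper's exact parameters.

import numpy as np

def deghost_weights(frames, times, ref=1, thresh=0.1):
    # frames: LDR frames scaled to [0, 1]; times: exposure times.
    # Pixels whose radiance estimate deviates from the reference by
    # more than `thresh` are treated as motion and down-weighted.
    ref_rad = frames[ref] / times[ref]
    weights = []
    for frame, t in zip(frames, times):
        w = 1.0 - np.abs(2.0 * frame - 1.0)           # usual hat weight
        ghost = np.abs(frame / t - ref_rad) > thresh  # motion mask
        weights.append(np.where(ghost, 0.0, w))
    weights[ref] = np.maximum(weights[ref], 1e-3)     # reference always counts
    return weights

The fusion then proceeds as a standard weighted average using these adapted weights.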


International Conference on Computer Vision | 2014

Robust spatio-temporal descriptors for real-time SVM-based fall detection

Imen Charfi; Johel Miteran; Julien Dubois; Barthélémy Heyrman; Mohamed Atri

We propose an SVM-based approach to detecting falls in several home environments using an optimised descriptor adapted to real-time tasks. We build an optimised spatio-temporal descriptor, named STHFa_SBFS, from several combinations of transformations of geometrical features, using feature selection. We study combinations of common transformations of the features (Fourier transform, wavelet transform, first and second derivatives). Automatic feature selection shows that the best trade-off between classification performance and processing time is obtained by combining the original low-level features with their first derivative. We then evaluate the robustness of the fall detection with respect to location changes. We propose a realistic and pragmatic protocol that improves performance by updating the training for the current location with records of normal activities. An embedded implementation of the fall detection on a smart camera prototype is briefly described and demonstrates that a compact version of the detector can be deployed.
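
The reported best trade-off, raw geometric features plus their first derivative, corresponds to a descriptor built roughly as below, here fed to a linear SVM via scikit-learn. This is a schematic reconstruction: the window length, the five features, and the random training data are placeholders, not the paper's exact setup.

import numpy as np
from sklearn.svm import SVC

def build_descriptor(feature_tracks):
    # Stack low-level geometric features (e.g. bounding-box width,
    # height, centroid, orientation per frame) with their first
    # temporal derivative, then flatten into one vector.
    deriv = np.diff(feature_tracks, axis=0, prepend=feature_tracks[:1])
    return np.concatenate([feature_tracks, deriv], axis=1).ravel()

# Placeholder training data: 100 windows of 20 frames x 5 features
rng = np.random.default_rng(0)
X = np.stack([build_descriptor(rng.random((20, 5))) for _ in range(100)])
y = rng.integers(0, 2, 100)  # 1 = fall, 0 = normal activity
clf = SVC(kernel="linear").fit(X, y)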

Collaboration


Dive into Barthélémy Heyrman's collaborations.

Top Co-Authors

Pierre-Jean Lapray
Centre national de la recherche scientifique

Laurent Letellier
Commissariat à l'énergie atomique et aux énergies alternatives

Mustapha Bouderbane
Centre national de la recherche scientifique