
Publication


Featured research published by Franck Yonga.


Journal of Real-time Image Processing | 2016

A hardware/software prototyping system for driving assistance investigations

Jakob Anders; Michael Mefenza; Christophe Bobda; Franck Yonga; Zeyad Aklah; Kevin Gunn

A holistic design and verification environment to investigate driving assistance systems is presented, with an emphasis on system-on-chip architectures for video applications. Starting with an executable specification of a driving assistance application, subsequent transformations are performed across different levels of abstraction until the final implementation is achieved. The hardware/software partitioning is facilitated through the integration of OpenCV and SystemC in the same design environment, as well as OpenCV and Linux in the run-time system. We built a rapid-prototyping, FPGA-based camera system, which allows designs to be explored and evaluated in realistic conditions. Using a lane-departure application and the corresponding performance speedup, we show that our platform reduces design time while improving verification effort.


ERSA | 2014

Reconfigurable Architectures for Distributed Smart Cameras

Christophe Bobda; Michael Mefenza; Franck Yonga; Ali Akbar Zarezadeh

Embedded smart cameras must provide enough computational power to handle complex image-understanding algorithms on huge amounts of data in situ. In a distributed set-up, smart cameras must provide efficient communication and flexibility in addition to performance. Programmability and physical constraints such as size, weight and power (SWAP) complicate design and architectural choices. In this chapter, we explore the use of FPGAs as the computational engine in distributed smart cameras and present a smart camera system designed to be used as a node in a camera sensor network. Besides performance and flexibility, size and power requirements are addressed through a modular and scalable design. The programmability of the system is addressed by a seamless integration of the Intel OpenCV computer vision library into the platform.


Journal of Real-time Image Processing | 2016

Efficient network clustering for traffic reduction in embedded smart camera networks

Ali Akbar Zarezadeh; Christophe Bobda; Franck Yonga; Michael Mefenza

In this work, a clustering approach for bandwidth reduction in distributed smart camera networks is presented. Properties of the environment, such as camera positions and environment pathways, as well as the dynamics and features of targets, are used to limit the flood of messages in the network. To better understand the correlation between camera positioning and pathways in the scene on one hand and the temporal and spatial properties of targets on the other, and to devise a sound messaging infrastructure, a unifying probabilistic model for object association across multiple cameras with disjoint views is used. Communication is efficiently handled using a task-oriented node clustering that partitions the network into different groups according to the pathways among cameras and the appearance and temporal behavior of targets. We propose a novel asynchronous event-exchange strategy to handle sporadic messages generated by non-frequent tasks in a distributed tracking application. Using a Xilinx FPGA with an embedded MicroBlaze processor, we show that, with limited resources and speed, the embedded processor is able to sustain a high communication load while performing complex image processing computations.
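As a rough illustration of the clustering idea (not code from the paper, whose clustering is probabilistic and task-oriented), the sketch below groups cameras into message-exchange clusters by computing connected components of a hypothetical pathway graph, so that tracking messages circulate only within a cluster instead of flooding the whole network:

```python
from collections import defaultdict

def cluster_cameras(pathways):
    """Group cameras into clusters via connected components of the
    pathway graph; messages are then exchanged only within a cluster.
    Toy stand-in for the paper's task-oriented clustering."""
    adj = defaultdict(set)
    for a, b in pathways:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for cam in adj:
        if cam in seen:
            continue
        stack, comp = [cam], set()
        while stack:  # iterative depth-first search
            c = stack.pop()
            if c in comp:
                continue
            comp.add(c)
            stack.extend(adj[c] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# hypothetical pathways observed between cameras in the scene
clusters = cluster_cameras([("c1", "c2"), ("c2", "c3"), ("c4", "c5")])
print(clusters)
```

Here cameras c1, c2, c3 share pathways and form one cluster, while c4 and c5 form another; a real deployment would weight edges by observed target transitions.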


international conference on distributed smart cameras | 2014

Self-Coordinated Target Assignment and Camera Handoff in Distributed Network of Embedded Smart Cameras

Franck Yonga; Alfredo G. C. Junior; Michael Mefenza; Luca Bochi Saldanha; Christophe Bobda; Senem Velipassalar

Tracking several objects across multiple cameras is essential for collaborative monitoring in distributed camera networks. The tractability of the related optimization, which aims at tracking a maximal number of important targets, decreases with the growing number of objects moving across cameras. To tackle this issue, a viable model and sound object representation, which can leverage the power of existing tools at run-time for a fast computation of solutions, is required. In this paper, we provide a formalism for object tracking across multiple cameras. A first assignment of objects to cameras is performed at start-up to initialize a set of distributed trackers in embedded cameras. We model the run-time self-coordination problem with target handover by encoding the problem as a run-time binding of objects to cameras, an approach that has successfully been used in high-level system synthesis. Our model of distributed tracking is based on Answer Set Programming (ASP), a declarative programming paradigm that helps formulate the distribution and target-handover problem as a search problem, such that, by using existing answer set solvers, we produce stable solutions in real time by incrementally solving time-based encoded ASP problems. The effectiveness of the proposed approach is demonstrated on a 3-node camera network deployment.
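To make the notion of a run-time binding of objects to cameras concrete, here is a hypothetical brute-force sketch: it assigns targets to cameras so as to maximize a total visibility score under a per-camera tracker capacity. An ASP solver explores the same search space declaratively and far more efficiently; the scores and capacity below are invented for illustration only:

```python
from itertools import product

def bind_targets(targets, cameras, score, capacity=2):
    """Exhaustively search bindings of targets to cameras, keeping the
    assignment with the highest total visibility score while respecting
    a per-camera tracker capacity. Illustrative only: an answer set
    solver handles this search symbolically and incrementally."""
    best, best_score = None, float("-inf")
    for binding in product(cameras, repeat=len(targets)):
        # skip bindings that overload any camera's trackers
        if any(binding.count(c) > capacity for c in cameras):
            continue
        s = sum(score[(t, c)] for t, c in zip(targets, binding))
        if s > best_score:
            best, best_score = dict(zip(targets, binding)), s
    return best

# hypothetical visibility of each target from each camera
score = {("t1", "c1"): 0.9, ("t1", "c2"): 0.2,
         ("t2", "c1"): 0.4, ("t2", "c2"): 0.8,
         ("t3", "c1"): 0.7, ("t3", "c2"): 0.6}
print(bind_targets(["t1", "t2", "t3"], ["c1", "c2"], score))
```

A handover corresponds to re-solving this binding when targets move and the scores change between frames.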


microprocessor test and verification | 2014

Automatic UVM Environment Generation for Assertion-Based and Functional Verification of SystemC Designs

Michael Mefenza; Franck Yonga; Christophe Bobda

This paper presents an approach for reducing the test bench implementation effort for SystemC designs, thus allowing early verification success. We propose an automatic Universal Verification Methodology (UVM) environment that enables assertion-based, coverage-driven, and functional verification of SystemC models. The aim of this verification environment is to ease and speed up the verification of SystemC IPs by automatically producing a complete and working UVM test bench with all sub-environments constructed and blocks connected. Our experiments show that the proposed environment can rapidly be integrated into a SystemC design while improving its coverage and assertion-based verification.


conference on design and architectures for signal and image processing | 2014

A framework for rapid prototyping of embedded vision applications

Michael Mefenza; Franck Yonga; Luca Bochi Saldanha; Christophe Bobda; Senem Velipassalar

We present a framework for fast prototyping of embedded video applications. Starting with a high-level executable specification written in OpenCV, we apply semi-automatic refinements of the specification at various levels (TLM and RTL), the lowest of which is a system-on-chip prototype on an FPGA. The refinement leverages the structure of image processing applications to map high-level representations to lower-level implementations with limited user intervention. Our framework integrates the computer vision library OpenCV for software, SystemC/TLM for high-level hardware representation, and UVM and QEMU-OS for virtual prototyping and verification into a single, uniform design and verification flow. With applications in the fields of driving assistance and object recognition, we demonstrate the usability of our framework in producing performant and correct designs.


Mobile Computing and Communications Review | 2013

ACM HotMobile 2013 poster: RazorCam: a prototyping environment for video communication

Michael Mefenza; Franck Yonga; Christophe Bobda

Design verification takes up to 80% of the time in the design flow of hardware/software applications. To reduce this duration, subsequent transformations are performed across different levels of abstraction until the final implementation. We propose a rapid-prototyping camera system based on FPGAs, which allows designs to be explored and evaluated in realistic environments. Our focus is on the design of a generic embedded hardware/software architecture with a symbolic representation of the input application to allow programmability at a very high abstraction level. The hardware/software partitioning is facilitated through the integration of OpenCV and SystemC in the same environment for rapid simulation, and of OpenCV and Linux in the run-time environment.


Journal of Parallel and Distributed Computing | 2018

High-level synthesis of on-chip multiprocessor architectures based on answer set programming

Christophe Bobda; Franck Yonga; Martin Gebser; Harold Ishebabi; Torsten Schaub

We present a system-level synthesis approach for heterogeneous multiprocessor systems-on-chip, based on Answer Set Programming (ASP). Starting with a high-level description of an application, its timing constraints, and the physical constraints of the target device, our goal is to produce the optimal computing infrastructure made of heterogeneous processors, peripherals, memories, and communication components. Optimization aims at maximizing speed while minimizing chip area. Additionally, a scheduler must be produced that fulfills the real-time requirements of the application. Even though our approach also works for application-specific integrated circuits, we have chosen FPGAs as target devices in this work because their reconfiguration capabilities make it possible to explore several design alternatives. This paper addresses the bottleneck of problem representation size by providing a direct and compact ASP encoding for automatic synthesis that is semantically equivalent to previously established ILP and ASP models. We describe a use-case in which designers specify their applications in C/C++, from which optimum systems can be derived. We demonstrate the superiority of our approach over existing heuristics and exact methods with synthesis results on a set of realistic case studies.
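The scheduling side of this synthesis can be pictured with a small, hypothetical sketch (not the paper's encoding): an as-soon-as-possible schedule of a task graph in which each task starts once all of its predecessors have finished, the subproblem the ASP encoding solves jointly with resource binding. Task names and latencies below are invented:

```python
def schedule(tasks, deps, latency):
    """ASAP schedule for a dependency graph: start time of a task is
    the latest finish time of its predecessors. Toy illustration of
    the scheduling subproblem in system-level synthesis."""
    start = {}
    def t_start(t):
        if t not in start:
            # a task with no predecessors starts at time 0
            start[t] = max((t_start(p) + latency[p]
                            for p in deps.get(t, [])), default=0)
        return start[t]
    for t in tasks:
        t_start(t)
    return start

# hypothetical image-pipeline tasks with cycle latencies
latency = {"load": 1, "filter": 3, "edge": 2, "merge": 1}
deps = {"filter": ["load"], "edge": ["load"], "merge": ["filter", "edge"]}
print(schedule(["load", "filter", "edge", "merge"], deps, latency))
```

The real problem is harder: tasks also compete for a bounded number of processors of different types, which is why the paper encodes scheduling and binding together and hands both to an answer set solver.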


ACM Transactions on Design Automation of Electronic Systems | 2015

ASP-Based Encoding Model of Architecture Synthesis for Smart Cameras in Distributed Networks

Franck Yonga; Michael Mefenza; Christophe Bobda

A synthesis approach based on Answer Set Programming (ASP) for heterogeneous system-on-chips to be used in distributed camera networks is presented. In such networks, the tight resource limitations represent a major challenge for application development. Starting with a high-level description of applications, the physical constraints of the target devices, and the specification of network configuration, our goal is to produce optimal computing infrastructures made of a combination of hardware and software components for each node of the network. Optimization aims at maximizing speed while minimizing chip area and power consumption. Additionally, by performing the architecture synthesis simultaneously for all cameras in the network, we are able to minimize the overall utilization of communication resources and consequently reduce power consumption. Because of its reconfiguration capabilities, a Field Programmable Gate Array (FPGA) has been chosen as the target device, which enhances the exploration of several design alternatives. We present several realistic network scenarios to evaluate and validate the proposed synthesis approach.


Archive | 2014

Design and Verification Environment for High-Performance Video-Based Embedded Systems

Michael Mefenza; Franck Yonga; Christophe Bobda

In this chapter, we propose a design and verification environment for computationally demanding and secure embedded vision-based systems. Starting with an executable specification in OpenCV, we provide subsequent refinements and verification down to a system-on-chip prototype in an FPGA-based smart camera. At each level of abstraction, properties of image processing applications are used along with structure composition to provide a generic architecture that can be automatically verified and mapped to a lower abstraction level, the last of which is the FPGA. The result of this design flow is a framework that encapsulates the computer vision library OpenCV at the highest level and integrates Accellera's SystemC/TLM with the Universal Verification Methodology (UVM) and QEMU-OS for virtual prototyping, verification, and low-level mapping.

Collaboration


Dive into Franck Yonga's collaborations.

Top Co-Authors

Kevin Gunn

University of Arkansas

Zeyad Aklah

University of Arkansas