Michael Mefenza
University of Arkansas
Publications
Featured research published by Michael Mefenza.
Journal of Real-time Image Processing | 2016
Jakob Anders; Michael Mefenza; Christophe Bobda; Franck Yonga; Zeyad Aklah; Kevin Gunn
A holistic design and verification environment to investigate driving assistance systems is presented, with an emphasis on system-on-chip architectures for video applications. Starting with an executable specification of a driving assistance application, subsequent transformations are performed across different levels of abstraction until the final implementation is achieved. The hardware/software partitioning is facilitated through the integration of OpenCV and SystemC in the same design environment, as well as OpenCV and Linux in the run-time system. We built a rapid-prototyping, FPGA-based camera system that allows designs to be explored and evaluated under realistic conditions. Using lane departure detection and the corresponding performance speedup, we show that our platform reduces design time while improving verification effort.
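The paper does not list source code, but a sketch of what an OpenCV-level executable specification of a lane departure application might look like is given below; the function name, Canny/Hough parameters, and camera index are our own illustrative assumptions, not the authors' implementation.

```cpp
// Hypothetical high-level executable specification of lane detection in OpenCV.
#include <opencv2/opencv.hpp>
#include <vector>

// Detect candidate lane markings in a single frame (illustrative only).
std::vector<cv::Vec4i> detectLaneSegments(const cv::Mat& frame) {
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);      // color -> intensity
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 1.5);  // suppress noise
    cv::Canny(gray, edges, 60, 180);                    // edge map
    std::vector<cv::Vec4i> segments;
    // Probabilistic Hough transform: line segments of at least 40 px.
    cv::HoughLinesP(edges, segments, 1, CV_PI / 180, 50, 40, 10);
    return segments;
}

int main() {
    cv::VideoCapture cap(0);                 // camera index is an assumption
    cv::Mat frame;
    while (cap.read(frame)) {
        for (const auto& s : detectLaneSegments(frame))
            cv::line(frame, {s[0], s[1]}, {s[2], s[3]}, {0, 0, 255}, 2);
        cv::imshow("lanes", frame);
        if (cv::waitKey(1) == 27) break;     // ESC quits
    }
    return 0;
}
```

In the flow the abstract describes, a specification of this form is the starting point that subsequent transformations refine toward the system-on-chip implementation.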
ERSA | 2014
Christophe Bobda; Michael Mefenza; Franck Yonga; Ali Akbar Zarezadeh
Embedded smart cameras must provide enough computational power to handle complex image understanding algorithms on huge amounts of data in situ. In a distributed set-up, smart cameras must provide efficient communication and flexibility in addition to performance. Programmability and physical constraints such as size, weight, and power (SWaP) complicate design and architectural choices. In this chapter, we explore the use of FPGAs as the computational engine in distributed smart cameras and present a smart camera system designed to be used as a node in a camera sensor network. Besides performance and flexibility, size and power requirements are addressed through a modular and scalable design. The programmability of the system is addressed by a seamless integration of the Intel OpenCV computer vision library into the platform.
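As a rough sketch of the kind of in-situ processing such a node performs, the loop below extracts foreground detections locally and emits only compact records instead of raw frames; the background-subtraction pipeline and thresholds are our assumptions, not the system described in the chapter.

```cpp
// Sketch of a camera-node processing loop: process frames locally and emit
// only compact detections, not raw video (pipeline and values are invented).
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                        // node's local sensor
    auto bg = cv::createBackgroundSubtractorMOG2(); // foreground extraction
    cv::Mat frame, mask;
    while (cap.read(frame)) {
        bg->apply(frame, mask);
        cv::threshold(mask, mask, 200, 255, cv::THRESH_BINARY);
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        for (const auto& c : contours) {
            cv::Rect box = cv::boundingRect(c);
            if (box.area() < 400) continue;         // drop small blobs
            // In a real node this record would go to the network stack.
            std::cout << "detection " << box << '\n';
        }
    }
    return 0;
}
```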
Journal of Real-time Image Processing | 2016
Ali Akbar Zarezadeh; Christophe Bobda; Franck Yonga; Michael Mefenza
In this work, a clustering approach for bandwidth reduction in distributed smart camera networks is presented. Properties of the environment, such as camera positions and pathways, as well as the dynamics and features of targets, are used to limit the flood of messages in the network. To better understand the correlation between camera positioning and pathways in the scene on one hand and the temporal and spatial properties of targets on the other, and to devise a sound messaging infrastructure, a unifying probabilistic model for object association across multiple cameras with disjoint views is used. Communication is handled efficiently using task-oriented node clustering that partitions the network into groups according to the pathways among cameras and the appearance and temporal behavior of targets. We propose a novel asynchronous event-exchange strategy to handle sporadic messages generated by infrequent tasks in a distributed tracking application. Using a Xilinx FPGA with an embedded MicroBlaze processor, we show that, despite limited resources and speed, the embedded processor is able to sustain a high communication load while performing complex image processing computations.
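To make the association idea concrete, here is a toy scoring of a single candidate handover that combines a learned transit-time distribution for a pathway with an appearance similarity; the distribution, values, threshold, and independence assumption are invented for illustration and are not the paper's model.

```cpp
// Toy illustration of probabilistic object association across two cameras:
// score a candidate handover from a temporal pathway model and an appearance
// match (all numbers invented).
#include <cmath>
#include <cstdio>

const double kPi = 3.141592653589793;

// Gaussian likelihood of observing transit time t (seconds) on a pathway
// with a learned mean and standard deviation.
double transitLikelihood(double t, double mean, double stddev) {
    double z = (t - mean) / stddev;
    return std::exp(-0.5 * z * z) / (stddev * std::sqrt(2.0 * kPi));
}

int main() {
    double transit    = transitLikelihood(4.2, /*mean=*/5.0, /*stddev=*/1.5);
    double appearance = 0.8;  // e.g., normalized color-histogram similarity
    double score = transit * appearance;  // independence assumption
    // Only cameras on pathways with sufficient score join the task cluster
    // and receive the (asynchronous) handover event.
    std::printf("association score = %f -> %s\n", score,
                score > 0.05 ? "notify cluster" : "suppress message");
    return 0;
}
```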
international conference on distributed smart cameras | 2014
Franck Yonga; Alfredo G. C. Junior; Michael Mefenza; Luca Bochi Saldanha; Christophe Bobda; Senem Velipassalar
Tracking several objects across multiple cameras is essential for collaborative monitoring in distributed camera networks. The tractability of the underlying optimization, which aims at tracking a maximal number of important targets, decreases with the growing number of objects moving across cameras. To tackle this issue, a viable model and a sound object representation, which can leverage the power of existing tools at run-time for fast computation of solutions, are required. In this paper, we provide a formalism for object tracking across multiple cameras. A first assignment of objects to cameras is performed at start-up to initialize a set of distributed trackers in embedded cameras. We model the run-time self-coordination problem with target handover by encoding it as a run-time binding of objects to cameras, an approach that has been used successfully in high-level system synthesis. Our model of distributed tracking is based on Answer Set Programming (ASP), a declarative programming paradigm that helps formulate the distribution and target-handover problem as a search problem, such that, using existing answer set solvers, we produce stable solutions in real time by incrementally solving time-based ASP encodings. The effectiveness of the proposed approach is demonstrated on a 3-node camera network deployment.
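The paper encodes this binding in ASP and delegates the search to an answer set solver; as plain-C++ intuition for the same search space, the brute-force sketch below enumerates all bindings of three objects to two cameras and keeps a feasible one that maximizes target importance (visibility map, weights, and capacities are toy data).

```cpp
// Brute-force illustration of the object-to-camera binding problem that the
// paper solves with ASP: enumerate all assignments, keep the feasible one
// covering the most important targets (toy data throughout).
#include <cstdio>
#include <vector>

int main() {
    const int nObjects = 3, nCameras = 2;
    // visible[o][c]: can camera c currently see object o.
    bool visible[3][2] = {{true, false}, {true, true}, {false, true}};
    int importance[3] = {5, 2, 4};   // per-target importance weights
    int capacity[2]   = {1, 2};      // trackers each node can sustain

    std::vector<int> best, cur(nObjects, -1);
    int bestScore = -1;
    // Enumerate assignments in base (nCameras + 1); value -1 = untracked.
    for (long code = 0; code < 27 /* (nCameras+1)^nObjects */; ++code) {
        long c = code;
        int load[2] = {0, 0}, score = 0;
        bool ok = true;
        for (int o = 0; o < nObjects; ++o, c /= (nCameras + 1)) {
            cur[o] = int(c % (nCameras + 1)) - 1;
            if (cur[o] < 0) continue;               // object untracked
            if (!visible[o][cur[o]] || ++load[cur[o]] > capacity[cur[o]]) {
                ok = false; break;                  // infeasible binding
            }
            score += importance[o];
        }
        if (ok && score > bestScore) { bestScore = score; best = cur; }
    }
    for (int o = 0; o < nObjects; ++o)
        std::printf(best[o] < 0 ? "object %d untracked\n"
                                : "object %d -> camera %d\n", o, best[o]);
    return 0;
}
```

An ASP encoding explores the same space declaratively and scales far better, since the solver prunes infeasible bindings instead of enumerating them.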
microprocessor test and verification | 2014
Michael Mefenza; Franck Yonga; Christophe Bobda
This paper presents an approach for reducing the test bench implementation effort for SystemC designs, thus enabling early verification success. We propose an automatic Universal Verification Methodology (UVM) environment that enables assertion-based, coverage-driven, and functional verification of SystemC models. The aim of this verification environment is to ease and speed up the verification of SystemC IPs by automatically producing a complete and working UVM test bench with all sub-environments constructed and blocks connected. Our experiments show that the proposed environment can rapidly be integrated into a SystemC design while improving its coverage and assertion-based verification.
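The generated UVM environments themselves are not reproduced in the paper; the hand-written SystemC fragment below only hints at the kind of golden-model assertion such an environment wraps around a design under test (the module, signals, and stimulus are invented).

```cpp
// Minimal SystemC sketch of an assertion check around a trivial DUT, of the
// sort an auto-generated verification environment would contain (names ours).
#include <systemc.h>

SC_MODULE(Doubler) {                 // trivial DUT: out = 2 * in
    sc_in<int>  in;
    sc_out<int> out;
    void compute() { out.write(2 * in.read()); }
    SC_CTOR(Doubler) { SC_METHOD(compute); sensitive << in; }
};

SC_MODULE(Checker) {                 // scoreboard-style reference check
    sc_in<int> in, out;
    void check() {
        sc_assert(out.read() == 2 * in.read());  // golden-model assertion
    }
    SC_CTOR(Checker) { SC_METHOD(check); sensitive << out; }
};

int sc_main(int, char*[]) {
    sc_signal<int> a, b;
    Doubler dut("dut");  dut.in(a);  dut.out(b);
    Checker chk("chk");  chk.in(a);  chk.out(b);
    for (int v : {1, 7, -3}) {       // minimal directed stimulus
        a.write(v);
        sc_start(1, SC_NS);          // let the method processes run
    }
    return 0;
}
```

The point of the paper's tool is precisely that the checker, stimulus, and connections above come for free instead of being written by hand.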
conference on design and architectures for signal and image processing | 2014
Michael Mefenza; Franck Yonga; Luca Bochi Saldanha; Christophe Bobda; Senem Velipassalar
We present a framework for fast prototyping of embedded video applications. Starting with a high-level executable specification written in OpenCV, we apply semi-automatic refinements of the specification at various levels (TLM and RTL), the lowest of which is a system-on-chip prototype in an FPGA. The refinement leverages the structure of image processing applications to map high-level representations to lower-level implementations with limited user intervention. Our framework integrates the computer vision library OpenCV for software, SystemC/TLM for high-level hardware representation, and UVM and QEMU-OS for virtual prototyping and verification into a single, uniform design and verification flow. With applications in the fields of driving assistance and object recognition, we demonstrate the usability of our framework in producing performant and correct designs.
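As a sketch of one refinement step, a thresholding operator is written below as a SystemC module exchanging pixels over bounded FIFOs, the style of communicating hardware model such a flow targets; module and channel names are ours, and the framework derives such modules semi-automatically rather than by hand.

```cpp
// Streaming pixel operator as a SystemC module: a possible refinement target
// for an OpenCV-level threshold (illustrative sketch, not the paper's output).
#include <systemc.h>

SC_MODULE(Threshold) {
    sc_fifo_in<int>  pix_in;
    sc_fifo_out<int> pix_out;
    void run() {
        while (true) {
            int p = pix_in.read();   // blocking read models back-pressure
            pix_out.write(p > 128 ? 255 : 0);
        }
    }
    SC_CTOR(Threshold) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    sc_fifo<int> in_q(16), out_q(16);   // bounded FIFOs, depth 16
    Threshold th("th");
    th.pix_in(in_q);
    th.pix_out(out_q);
    for (int p : {10, 200, 128, 255}) in_q.write(p);
    sc_start(10, SC_NS);
    while (out_q.num_available() > 0)
        std::cout << out_q.read() << ' ';
    std::cout << std::endl;
    return 0;
}
```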
Mobile Computing and Communications Review | 2013
Michael Mefenza; Franck Yonga; Christophe Bobda
Design verification takes 80% of the time in the design flow of hardware/software applications. To reduce this duration, subsequent transformations are performed across different levels of abstraction until the final implementation. We propose a rapid-prototyping camera system based on FPGAs, which allows designs to be explored and evaluated in realistic environments. Our focus is on the design of a generic embedded hardware/software architecture with a symbolic representation of the input application, to allow programmability at a very high abstraction level. The hardware/software partitioning is facilitated through the integration of OpenCV and SystemC in the same environment for rapid simulation, and of OpenCV and Linux in the run-time environment.
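The abstract's key point, OpenCV and SystemC living in one simulation environment, can be illustrated with a toy bridge in which a cv::Mat drives a SystemC model and the result is collected back into OpenCV; the glue code and module are our own, not the paper's platform.

```cpp
// Toy OpenCV <-> SystemC bridge: feed image pixels into a SystemC hardware
// model and collect the processed image (illustrative glue code only).
#include <systemc.h>
#include <opencv2/opencv.hpp>

SC_MODULE(Invert) {                      // placeholder hardware model
    sc_fifo_in<int>  in;
    sc_fifo_out<int> out;
    void run() { while (true) out.write(255 - in.read()); }
    SC_CTOR(Invert) { SC_THREAD(run); }
};

int sc_main(int, char*[]) {
    cv::Mat img = cv::Mat::eye(4, 4, CV_8UC1) * 200;  // tiny test image
    sc_fifo<int> to_hw(64), from_hw(64);
    Invert inv("inv");
    inv.in(to_hw);  inv.out(from_hw);

    for (int r = 0; r < img.rows; ++r)               // software side: feed
        for (int c = 0; c < img.cols; ++c)
            to_hw.write(img.at<uchar>(r, c));
    sc_start(1, SC_MS);                              // run the hardware model

    cv::Mat result(img.size(), CV_8UC1);             // software side: collect
    for (int r = 0; r < img.rows; ++r)
        for (int c = 0; c < img.cols; ++c)
            result.at<uchar>(r, c) = (uchar)from_hw.read();
    std::cout << result << std::endl;   // can be checked against pure OpenCV
    return 0;
}
```

Because both sides run in one process, the SystemC model's output can be compared directly against the OpenCV reference, which is what makes this style of co-simulation useful for partitioning decisions.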
ACM Sigarch Computer Architecture News | 2016
Michael Mefenza; Nicolas Edwards; Christophe Bobda
Image processing applications are computationally and data intensive, and rely on memory elements (buffers, windows, line buffers, shift registers, and frame buffers) to store data-flow dependencies between computing components in FPGAs. Given the limited availability of these resources, optimizing memory allocation and implementing efficient memory architectures are important issues. We present an interface, the Component Interconnect and Data Access (CIDA), and its implementation, based on the interface automata formalism. We use this interface to model image processing applications and to generate common memory elements. Based on the proposed model and information about the FPGA architecture, we also present an optimization model for allocating memory requirements to embedded memories (block RAM and distributed RAM). Allocation results from realistic video systems on Xilinx Zynq FPGAs verify the correctness of the model and show that the proposed approach achieves an appreciable reduction in block RAM usage.
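A toy version of the allocation question the paper optimizes: map each memory element either to distributed (LUT) RAM or to 18 Kb block RAMs so that BRAM usage stays low. The buffer sizes, budgets, and greedy rule below are illustrative stand-ins for CIDA's actual optimization model.

```cpp
// Greedy sketch of buffer-to-memory allocation on an FPGA: small buffers go
// to distributed RAM, large ones to 18 Kb BRAM blocks (all numbers invented).
#include <algorithm>
#include <cstdio>
#include <vector>

struct Buffer { const char* name; int bits; };

int main() {
    std::vector<Buffer> bufs = {
        {"line_buffer", 18432}, {"window_3x3", 72},
        {"frame_fifo", 147456}, {"shift_reg", 256},
    };
    const int kBramBits = 18 * 1024;   // one 18 Kb block RAM
    const int kLutRamBudget = 4096;    // bits we allow in distributed RAM

    int lutUsed = 0, bramBlocks = 0;
    // Greedy: smallest buffers claim distributed RAM first, the rest use BRAM.
    std::sort(bufs.begin(), bufs.end(),
              [](const Buffer& a, const Buffer& b) { return a.bits < b.bits; });
    for (const auto& b : bufs) {
        if (lutUsed + b.bits <= kLutRamBudget) {
            lutUsed += b.bits;
            std::printf("%-12s -> distributed RAM (%d bits)\n", b.name, b.bits);
        } else {
            int blocks = (b.bits + kBramBits - 1) / kBramBits;  // ceil
            bramBlocks += blocks;
            std::printf("%-12s -> %d BRAM block(s)\n", b.name, blocks);
        }
    }
    std::printf("total: %d BRAM blocks, %d LUT-RAM bits\n", bramBlocks, lutUsed);
    return 0;
}
```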
ACM Transactions on Design Automation of Electronic Systems | 2015
Franck Yonga; Michael Mefenza; Christophe Bobda
A synthesis approach based on Answer Set Programming (ASP) for heterogeneous systems-on-chip to be used in distributed camera networks is presented. In such networks, tight resource limitations represent a major challenge for application development. Starting with a high-level description of the applications, the physical constraints of the target devices, and the specification of the network configuration, our goal is to produce optimal computing infrastructures, made of a combination of hardware and software components, for each node of the network. Optimization aims at maximizing speed while minimizing chip area and power consumption. Additionally, by performing the architecture synthesis simultaneously for all cameras in the network, we are able to minimize the overall utilization of communication resources and consequently reduce power consumption. Because of its reconfiguration capabilities, a Field Programmable Gate Array (FPGA) has been chosen as the target device, which facilitates the exploration of several design alternatives. We present several realistic network scenarios to evaluate and validate the proposed synthesis approach.
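The paper delegates this optimization to an ASP solver; as a back-of-the-envelope illustration of the trade-off involved, the sketch below enumerates hardware/software choices for three tasks on one node and keeps the smallest-area mapping that meets a frame deadline (all latencies, areas, and limits are invented).

```cpp
// Exhaustive sketch of the per-node HW/SW partitioning trade-off the paper
// optimizes with ASP: meet a frame deadline with minimal chip area (toy data).
#include <cstdio>

int main() {
    // Per task: software latency, hardware latency (ms), hardware area units.
    const double swMs[3] = {20, 15, 30}, hwMs[3] = {4, 3, 6};
    const int area[3] = {300, 250, 500};
    const double deadlineMs = 33.0;    // ~30 fps frame budget
    const int areaLimit = 800;         // resources available on the FPGA

    int bestMask = -1, bestArea = 1 << 30;
    for (int mask = 0; mask < 8; ++mask) {  // bit i set => task i in hardware
        double lat = 0; int a = 0;
        for (int i = 0; i < 3; ++i)
            if (mask & (1 << i)) { lat += hwMs[i]; a += area[i]; }
            else                 { lat += swMs[i]; }
        if (lat <= deadlineMs && a <= areaLimit && a < bestArea) {
            bestArea = a; bestMask = mask;
        }
    }
    if (bestMask < 0) { std::puts("no feasible mapping"); return 1; }
    for (int i = 0; i < 3; ++i)
        std::printf("task %d -> %s\n", i, (bestMask >> i) & 1 ? "HW" : "SW");
    return 0;
}
```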
Archive | 2014
Michael Mefenza; Franck Yonga; Christophe Bobda
In this chapter, we propose a design and verification environment for computationally demanding and secure embedded vision-based systems. Starting with an executable specification in OpenCV, we provide subsequent refinements and verification down to a system-on-chip prototype on an FPGA-based smart camera. At each level of abstraction, properties of image processing applications are used, along with structural composition, to provide a generic architecture that can be automatically verified and mapped to a lower abstraction level, the lowest being the FPGA. The result of this design flow is a framework that encapsulates the computer vision library OpenCV at the highest level and integrates Accellera's SystemC/TLM with the Universal Verification Methodology (UVM) and QEMU-OS for virtual prototyping, verification, and low-level mapping.