

Publication


Featured research published by Najeem Lawal.


NORCHIP | 2010

Exploration of target architecture for a wireless camera based sensor node

Muhammad Imran; Khursheed Khursheed; Mattias O'Nils; Najeem Lawal

The challenges associated with wireless vision sensor networks are low energy consumption, limited bandwidth and limited processing capabilities, and different approaches have been proposed to meet them. Research in wireless vision sensor networks has focused on two assumptions: either all data are sent to the central base station without local processing, or all processing is conducted locally at the sensor node and only the final results are transmitted. Our research focuses on partitioning the vision processing tasks between the sensor node and the central base station. In this paper we add an exploration dimension by performing some of the vision tasks, such as image capturing, background subtraction, segmentation and TIFF Group 4 compression, on an FPGA, while communication runs on a microcontroller. The remaining vision processing tasks, i.e. morphology, labeling, bubble removal and classification, are processed on the central base station. Our results show that introducing an FPGA for some of the visual tasks results in a longer lifetime for the visual sensor node while the architecture remains programmable.
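To make the split concrete, here is a minimal sketch of the node-side pixel tasks, using NumPy as a stand-in for the FPGA blocks; the threshold value is an assumption, and the paper's TIFF Group 4 compressor is not reproduced.

```python
import numpy as np

def node_pipeline(frame, background, thresh=25):
    """Pixel-based tasks the paper maps onto the FPGA: background
    subtraction followed by threshold segmentation. In the real node
    the binary mask is then TIFF Group 4 compressed and transmitted;
    morphology, labeling, bubble removal and classification run on
    the central base station."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh  # binary mask handed to the compressor
```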


Parallel Computing in Electrical Engineering | 2011

Exploration of Tasks Partitioning between Hardware Software and Locality for a Wireless Camera Based Vision Sensor Node

Khursheed Khursheed; Muhammad Imran; Abdul Waheed Malik; Mattias O'Nils; Najeem Lawal; Benny Thörnberg

In this paper we explore different possibilities for partitioning the tasks between hardware, software and locality for the implementation of a vision sensor node used in a wireless vision sensor network. Wireless vision sensor networks are an emerging field which combines an image sensor, on-board computation and a communication link. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless vision sensor networks operate on two-dimensional data, which requires higher processing power and communication bandwidth. Research within the field has focused on two assumptions, involving either sending raw data to the central base station without local processing or conducting all processing locally at the sensor node and transmitting only the final results. Our work focuses on determining an optimal point of hardware/software partitioning, as well as partitioning between local and central processing, based on the minimum energy consumption for the vision processing operations. The lifetime of the vision sensor node is predicted by evaluating the energy requirements of an embedded platform that combines an FPGA and a microcontroller. Our results show that sending compressed images after the pixel-based tasks results in a longer battery lifetime with reasonable hardware cost for the vision sensor node.
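The selection logic can be illustrated with a toy energy model; every number below (per-frame energies, payload sizes, radio cost per byte and battery capacity) is invented for illustration, not taken from the paper.

```python
# Toy model for choosing the partition point by node lifetime.
E_TX_PER_BYTE_MJ = 0.002  # assumed radio energy per byte, in mJ

partitions = {
    # name: (local processing energy in mJ/frame, bytes sent per frame)
    "send raw frames":           (0.5, 640 * 480),
    "pixel tasks + compression": (4.0, 3_000),
    "full local processing":     (15.0, 40),
}

def lifetime_days(e_proc_mJ, tx_bytes,
                  frames_per_day=8_640,   # one frame every 10 s
                  battery_J=30_000):      # a few AA cells, assumed
    per_frame_J = (e_proc_mJ + tx_bytes * E_TX_PER_BYTE_MJ) / 1_000
    return battery_J / (per_frame_J * frames_per_day)

for name, (e, b) in partitions.items():
    print(f"{name:27s} {lifetime_days(e, b):7.1f} days")
```

With these assumed numbers the middle option wins, mirroring the paper's conclusion that transmitting compressed images after the pixel-based tasks maximizes battery lifetime.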


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2013

Implementation of Wireless Vision Sensor Node With a Lightweight Bi-Level Video Coding

Muhammad Imran; Naeem Ahmad; Khursheed Khursheed; Abdul Waheed Malik; Najeem Lawal; Mattias O'Nils

Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing, and wireless bandwidth. The processing and communication energy requirements of an individual VSN are a challenge because of the limited energy available. To meet this challenge, we have proposed and implemented a programmable and energy-efficient VSN architecture with lower energy requirements and reduced design complexity. In the proposed system, vision tasks are partitioned between the hardware-implemented VSN and a server. The initial data-dominated tasks are implemented on the VSN, while the control-dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing a lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications, and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 compared to a VSN without the bi-level video coding. The proposed VSN offers an energy-efficient, generic architecture with lower design complexity on a reconfigurable hardware platform and is easily adapted to a number of applications compared to published systems.
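The paper's coder itself is not reproduced here; as an illustration of why bi-level coding shrinks the payload, a simple one-dimensional run-length encoder over a binary frame already collapses large uniform regions into a few counts.

```python
import numpy as np

def rle_bilevel(mask):
    """Run-length encode a binary frame: return the first run's value
    and the run lengths. An illustrative stand-in, not the paper's coder."""
    bits = mask.astype(np.uint8).ravel()
    starts = np.flatnonzero(np.diff(bits)) + 1        # run boundaries
    edges = np.concatenate(([0], starts, [bits.size]))
    return bits[0], np.diff(edges)

mask = np.zeros((480, 640), dtype=bool)
mask[100:150, 200:300] = True                         # one object
first, runs = rle_bilevel(mask)
print(f"{mask.size} pixels -> {runs.size} runs")      # payload shrinks
```

For this synthetic frame, roughly 300k pixels collapse into about a hundred runs, which is the effect a bi-level coder exploits on segmented, mostly uniform frames.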


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Implementation of Wireless Vision Sensor Node for Characterization of Particles in Fluids

Muhammad Imran; Khursheed Khursheed; Najeem Lawal; Mattias O'Nils; Naeem Ahmad

Wireless vision sensor networks (WVSNs) have a number of wireless vision sensor nodes (VSNs), often spread over a large geographical area. Each node has an image capturing unit, a battery or alternative energy source, a memory unit, a light source, a wireless link, and a processing unit. The challenges associated with WVSNs include low energy consumption, low bandwidth, and limited memory and processing capabilities. In order to meet these challenges, this paper focuses on the exploration of energy-efficient reconfigurable architectures for the VSN. The design and research challenges associated with implementing the VSN on different computational platforms, such as a microcontroller, field-programmable gate arrays, and a server, are explored. In relation to this, the effects on the energy consumption and the design complexity at the node, when functionality is moved from one platform to another, are analyzed. Based on the implementation of the VSN on embedded platforms, the lifetime of the VSN is predicted using the measured energy values of the platforms for different implementation strategies. The implementation results show that an architecture in which compressed images are transmitted after the pixel-based operations realizes a WVSN system with low energy consumption. Moreover, the complex post-processing tasks are moved to a server, which has fewer constraints.


NORCHIP | 2005

Embedded FPGA memory requirements for real-time video processing applications

Najeem Lawal; Mattias O'Nils

FPGAs show interesting properties for the real-time implementation of video processing systems. An important feature is the on-chip RAM blocks embedded in the FPGAs. This paper presents an analysis of the current and future requirements that video processing systems place on these embedded memory resources. The analysis is performed by allocating a set of video processing systems onto different existing and extrapolated FPGA architectures. It shows that FPGAs should support multiple memory sizes to take full advantage of the architecture. These results are valuable both for system designers and for planning the development of new FPGA architectures.
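The kind of requirement being analyzed can be made concrete with line-buffer arithmetic: a K x K sliding-window operator must hold K-1 full video lines on chip. The VGA resolution and the 18 Kbit block size below are common values but assumed here, not taken from the paper.

```python
import math

def line_buffer_bits(width, bits_per_pixel, kernel_rows):
    """On-chip memory a K x K sliding-window operator needs:
    K-1 full video lines buffered in FPGA RAM blocks."""
    return (kernel_rows - 1) * width * bits_per_pixel

# Example: 3x3 filter on 8-bit VGA video (illustrative numbers)
need = line_buffer_bits(width=640, bits_per_pixel=8, kernel_rows=3)
BRAM_BITS = 18 * 1024                 # one 18 Kbit block, a common size
print(need, "bits ->", math.ceil(need / BRAM_BITS), "block RAM(s)")
```

The roughly 10 Kbit requirement occupies a whole 18 Kbit block and wastes almost half of it; this kind of internal fragmentation is one argument for FPGAs supporting multiple memory sizes.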


Digital Systems Design | 2013

Low Complexity Background Subtraction for Wireless Vision Sensor Node

Muhammad Imran; Naeem Ahmad; Khursheed Khursheed; Mattias O'Nils; Najeem Lawal

Wireless vision sensor nodes have limited resources such as energy, memory, wireless bandwidth and processing, so it becomes necessary to investigate lightweight vision tasks. To highlight foreground objects, many machine vision applications depend on background subtraction. Traditional background subtraction approaches employ recursive and non-recursive techniques and store the whole image in memory, which raises issues of hardware complexity, energy requirements and latency. This work presents a low-complexity background subtraction technique for a hardware-implemented wireless Vision Sensor Node (VSN). The proposed technique utilizes existing image scaling techniques to scale down the background image, and the downscaled image is stored in the internal memory of the hardware platform. For the subtraction operation, the background pixels are regenerated in real time with an upscaling technique. The performance and memory requirements of the system are compared for four image scaling techniques: nearest neighbor, averaging, bilinear, and bicubic. The results show that a system with the lightweight scaling techniques, i.e., nearest neighbor and averaging, up to a scaling factor of 8, missed on average less than one object compared to a system using the full original background image. The proposed approach reduces the memory requirement by a factor of up to 64, besides reducing design/implementation complexity and cost compared to a background model that stores the whole frame.
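A minimal NumPy sketch of the scheme with the nearest-neighbor variant follows; the threshold and the random test frame are assumptions. Note that the factor-64 memory saving at a scaling factor of 8 follows directly from storing one background pixel per 8 x 8 block.

```python
import numpy as np

def downscale_nn(img, f):
    """Nearest-neighbour downscale by integer factor f: keep one
    pixel per f x f block. This small image is what the node stores."""
    return img[::f, ::f]

def subtract_scaled_background(frame, bg_small, f, thresh=25):
    """Regenerate background pixels on the fly by nearest-neighbour
    upscaling and subtract, as in the paper's low-memory scheme."""
    bg = np.repeat(np.repeat(bg_small, f, axis=0), f, axis=1)
    bg = bg[:frame.shape[0], :frame.shape[1]]
    return np.abs(frame.astype(np.int16) - bg.astype(np.int16)) > thresh

f = 8                                      # scaling factor from the paper
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
bg_small = downscale_nn(frame, f)          # memory reduced by f*f = 64
mask = subtract_scaled_background(frame, bg_small, f)
```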


Broadband and Wireless Computing, Communication and Applications | 2011

Model and Placement Optimization of a Sky Surveillance Visual Sensor Network

Naeem Ahmad; Najeem Lawal; Mattias O'Nils; Bengt Oelmann; Muhammad Imran; Khursheed Khursheed

Visual Sensor Networks (VSNs) are networks which generate two-dimensional data; the major difference between a VSN and an ordinary sensor network is the large amount of data. In a VSN, a large number of camera nodes form a distributed system which can be deployed in many potential applications. In this paper we present a model of the physical parameters of a visual sensor network to track large birds, such as the golden eagle, in the sky. The developed model is used to optimize the placement of the camera nodes in the VSN. A camera node is modeled as a function of its field of view, which is derived from the combination of the lens focal length and the camera sensor. From the field of view and the resolution of the sensor, a model for full coverage between two altitude limits has been developed. We show that the model can be used to minimize the number of sensor nodes for any given camera sensor, by exploring the focal lengths that both give full coverage and meet the minimum object size requirement. For the case of large-bird surveillance we achieve 100% coverage of the relevant altitudes using 20 camera nodes per km² for the investigated camera sensors.
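The underlying pinhole geometry can be sketched as follows; the sensor size, pixel pitch, altitude limits and wingspan are illustrative assumptions, not the paper's parameters.

```python
# Pinhole-camera coverage model in the spirit of the paper's
# placement optimisation; all numeric values are illustrative.
def footprint_m(altitude_m, sensor_mm, focal_mm):
    """Width of the sky patch covered at a given altitude."""
    return altitude_m * sensor_mm / focal_mm

def pixels_on_object(obj_m, altitude_m, focal_mm, pixel_um):
    """How many pixels an object of size obj_m spans at altitude_m."""
    return obj_m * focal_mm * 1000.0 / (altitude_m * pixel_um)

# Assumed: 4.8 mm sensor with 6 um pixels, eagle wingspan ~2 m,
# full coverage needed at 500 m, object size checked at 1 km.
for f_mm in (4, 8, 16):
    w = footprint_m(500, sensor_mm=4.8, focal_mm=f_mm)
    px = pixels_on_object(2.0, 1000, f_mm, pixel_um=6.0)
    nodes_per_km2 = (1000.0 / w) ** 2   # tile the lower altitude limit
    print(f"f={f_mm} mm  footprint={w:.0f} m  object={px:.1f} px  "
          f"{nodes_per_km2:.1f} nodes/km^2")
```

Longer focal lengths put more pixels on the bird but shrink the footprint, so more nodes are needed; the optimization in the paper searches this trade-off for focal lengths that meet both constraints.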


Field-Programmable Logic and Applications | 2005

Address generation for FPGA RAMS for efficient implementation of real-time video processing systems

Najeem Lawal; Benny Thörnberg; Mattias O'Nils

FPGAs offer the potential of being a reliable, high-performance reconfigurable platform for the implementation of real-time video processing systems. To utilize the full processing power of an FPGA for video processing applications, the optimization of memory accesses and the implementation of the memory architecture are important issues. This paper presents two approaches, the base-pointer approach and the distributed-pointer approach, to implement accesses to on-chip FPGA Block RAMs. A comparison of the experimental results obtained using the two approaches on realistic image processing system design cases is presented. The results show that, compared to the base-pointer approach, the distributed-pointer approach increases the potential processing power of the FPGA as a reconfigurable platform for video processing systems.
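As a rough software analogy of the two schemes (the actual designs are HDL address generators, which this listing does not reproduce), the difference is whether buffers share one RAM behind a base-pointer adder or each get their own RAM and address counter:

```python
# Software model only; an analogy, not the paper's hardware.
class BasePointerRAM:
    """All buffers share one RAM; each access adds the buffer's base
    pointer to a local offset, serialising accesses through one port."""
    def __init__(self, sizes):
        self.mem = [0] * sum(sizes)
        self.base, acc = [], 0
        for s in sizes:
            self.base.append(acc)
            acc += s

    def read(self, buf, offset):
        return self.mem[self.base[buf] + offset]

class DistributedRAMs:
    """Each buffer is its own RAM with its own address counter, so in
    hardware accesses to different buffers can proceed in parallel."""
    def __init__(self, sizes):
        self.mems = [[0] * s for s in sizes]

    def read(self, buf, offset):
        return self.mems[buf][offset]
```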


International Journal of Distributed Sensor Networks | 2014

Hardware Architecture for Real-time Computation of Image Component Feature Descriptors on a FPGA

Abdul Waheed Malik; Benny Thörnberg; Muhammad Imran; Najeem Lawal

This paper describes a hardware architecture for real-time image component labeling and the computation of image component feature descriptors. These descriptors are object-related properties used to describe each image component. Embedded machine vision systems demand robust performance and power efficiency as well as minimum area utilization, depending on the deployed application. In the proposed architecture, the hardware modules for component labeling and feature calculation run in parallel. A CMOS image sensor (MT9V032), operating at a maximum clock frequency of 27 MHz, was used to capture the images. The architecture was synthesized and implemented on a Xilinx Spartan-6 FPGA. The developed architecture is capable of processing 390 video frames per second at a size of 640 × 480 pixels. The dynamic power consumption is 13 mW at 86 frames per second.
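The descriptors in question can be illustrated with a small software labeler; the paper's labeler is streaming hardware, so the SciPy sketch below only shows the kind of outputs meant (area, bounding box, centroid), on a made-up test mask.

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((480, 640), dtype=bool)
mask[10:40, 10:60] = True          # two synthetic image components
mask[100:120, 300:330] = True

labels, n = ndimage.label(mask)    # connected-component labeling
for i, sl in enumerate(ndimage.find_objects(labels), start=1):
    area = int((labels[sl] == i).sum())
    cy, cx = ndimage.center_of_mass(labels == i)
    print(f"component {i}: area={area} bbox={sl} "
          f"centroid=({cy:.1f}, {cx:.1f})")
```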


International Conference on Distributed Smart Cameras | 2016

Design Exploration of a Multi-camera Dome for Sky Monitoring

Najeem Lawal; Mattias O'Nils; Muhammad Imran

Sky monitoring has many applications but also many challenges to be addressed before it can be realized, among them cost, energy consumption and complex deployment. One way to address these challenges is to compose a camera dome by grouping cameras that together monitor a hemisphere of the sky. In this paper, we present a model for design exploration that investigates how the characteristics of camera chips and objective lenses affect the overall cost of a node of a camera dome. The investigation showed that accepting more cameras in a single node can reduce the total cost of the system. We conclude that, with a suitable design and camera placement technique, a cost-effective solution can be obtained for massive open-area monitoring such as sky monitoring.
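A back-of-envelope version of such a cost model is sketched below; the solid-angle tiling, overlap factor and all prices are assumptions made for illustration, not the paper's figures.

```python
import math

def solid_angle_sr(hfov_deg, vfov_deg):
    """Solid angle of a rectangular field of view, in steradians."""
    h, v = math.radians(hfov_deg) / 2, math.radians(vfov_deg) / 2
    return 4 * math.asin(math.sin(h) * math.sin(v))

def dome_cost(hfov, vfov, cam_cost, lens_cost, overlap=1.3):
    """Cameras needed to tile a hemisphere (2*pi sr) with some
    overlap, and the resulting per-node cost."""
    n = math.ceil(overlap * 2 * math.pi / solid_angle_sr(hfov, vfov))
    return n, n * (cam_cost + lens_cost)

# Two hypothetical chip/lens combinations (FoV in degrees, cost in $)
for hfov, vfov, cam, lens in [(60, 45, 40.0, 15.0), (90, 70, 55.0, 30.0)]:
    n, cost = dome_cost(hfov, vfov, cam, lens)
    print(f"{hfov}x{vfov} deg: {n} cameras, ~${cost:.0f} per node")
```

Wider (cheaper) optics need fewer cameras per dome but put fewer pixels on target, which is the cost-versus-resolution trade-off the paper's design exploration navigates.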

Collaboration


Dive into Najeem Lawal's collaboration.

Top Co-Authors

Abdul Waheed Malik

COMSATS Institute of Information Technology
