Khursheed
Mid Sweden University
Publication
Featured research published by Khursheed.
NORCHIP | 2010
Muhammad Imran; Khursheed Khursheed; Mattias O'Nils; Najeem Lawal
The challenges associated with wireless vision sensor networks are low energy consumption, limited bandwidth and limited processing capabilities, and different approaches have been proposed to meet them. Research in wireless vision sensor networks has focused on two opposing assumptions: either sending all data to the central base station without local processing, or conducting all processing locally at the sensor node and transmitting only the final results. Our research focuses on partitioning the vision processing tasks between the sensor node and the central base station. In this paper we add a further exploration dimension by performing some of the vision tasks, such as image capturing, background subtraction, segmentation and TIFF Group 4 compression, on an FPGA, while communication is handled by a microcontroller. The remaining vision processing tasks, i.e. morphology, labeling, bubble remover and classification, are processed on the central base station. Our results show that introducing an FPGA for some of the visual tasks results in a longer lifetime for the visual sensor node while the architecture remains programmable.
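A minimal sketch of the node-side pixel pipeline named above (background subtraction followed by segmentation into a bi-level image); this is an illustrative software rendering in Python, not the authors' FPGA implementation, and the threshold value is an assumption:

```python
import numpy as np

def segment(frame, background, threshold=20):
    """Background subtraction plus thresholding, producing a bi-level (0/1) image."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Example: one bright synthetic "object" against a flat background.
background = np.full((64, 64), 30, dtype=np.uint8)
frame = background.copy()
frame[20:30, 20:30] = 200                        # 10x10 foreground object
binary = segment(frame, background)
print("foreground pixels:", int(binary.sum()))   # 100
```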
Parallel Computing in Electrical Engineering | 2011
Khursheed Khursheed; Muhammad Imran; Abdul Waheed Malik; Mattias O'Nils; Najeem Lawal; Benny Thörnberg
In this paper we explore different possibilities for partitioning tasks between hardware, software and locality for the implementation of the vision sensor node used in a wireless vision sensor network. Wireless vision sensor networks are an emerging field which combines an image sensor, on-board computation and communication links. Compared to traditional wireless sensor networks, which operate on one-dimensional data, wireless vision sensor networks operate on two-dimensional data, which requires higher processing power and communication bandwidth. Research within the field of wireless vision sensor networks has focused on two opposing assumptions, involving either sending raw data to the central base station without local processing or conducting all processing locally at the sensor node and transmitting only the final results. Our work focuses on determining an optimal point of hardware/software partitioning, as well as partitioning between local and central processing, based on the minimum energy consumption of the vision processing operations. The lifetime of the vision sensor node is predicted by evaluating the energy requirements of an embedded platform combining an FPGA and a microcontroller for the implementation of the vision sensor node. Our results show that sending compressed images after the pixel-based tasks results in a longer battery lifetime with reasonable hardware cost for the vision sensor node.
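The partitioning trade-off can be illustrated with a toy lifetime model; every energy figure, data size and rate below is invented for illustration and is not a measurement from the paper:

```python
BATTERY_J = 4.0 * 3600 * 3.0    # e.g. a 4 Ah battery at 3 V, in joules (assumption)
TX_MJ_PER_KB = 1.2              # radio energy per kilobyte sent (assumption)
FRAMES_PER_DAY = 24 * 60        # one frame per minute (assumption)

# (task, local processing energy per frame in mJ, size of its output in kB)
PIPELINE = [
    ("capture",       2.0, 300.0),   # raw frame
    ("pixel_ops",     5.0, 300.0),   # subtraction + segmentation, bi-level output
    ("compression",   3.0,   5.0),   # bi-level coding shrinks the data
    ("post_process", 20.0,   0.1),   # labeling/classification, final result only
]

def lifetime_days(cut):
    """Lifetime if tasks [0, cut) run on the node and the output of task
    cut-1 is transmitted; later tasks run on the server at no cost to the node."""
    proc_mj = sum(task[1] for task in PIPELINE[:cut])
    tx_mj = PIPELINE[cut - 1][2] * TX_MJ_PER_KB
    per_frame_j = (proc_mj + tx_mj) / 1000.0
    return BATTERY_J / (per_frame_j * FRAMES_PER_DAY)

for cut in range(1, len(PIPELINE) + 1):
    print(f"cut after {PIPELINE[cut - 1][0]:>12}: {lifetime_days(cut):7.0f} days")
```

With these invented numbers, cutting after compression dominates cutting after capture by more than an order of magnitude, mirroring the abstract's conclusion that sending compressed images after the pixel-based tasks maximizes battery lifetime.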
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2013
Muhammad Imran; Naeem Ahmad; Khursheed Khursheed; Malik Abdul Waheed; Najeem Lawal; Mattias O'Nils
Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs) which have limited resources, i.e., energy, memory, processing, and wireless bandwidth. The processing and communication energy requirements of an individual VSN are a challenge because of the limited energy available. To meet this challenge, we have proposed and implemented a programmable and energy-efficient VSN architecture which has lower energy requirements and reduced design complexity. In the proposed system, vision tasks are partitioned between the hardware-implemented VSN and a server. The initial data-dominated tasks are implemented on the VSN, while the control-dominated complex tasks are processed on a server. This strategy reduces both the processing energy consumption and the design complexity. The communication energy consumption is reduced by implementing lightweight bi-level video coding on the VSN. The energy consumption is measured on real hardware for different applications and the proposed VSN is compared against published systems. The results show that, depending on the application, the energy consumption can be reduced by a factor of approximately 1.5 up to 376 compared to a VSN without the bi-level video coding. The proposed VSN offers an energy-efficient, generic architecture with low design complexity on a reconfigurable hardware platform and is easily adapted to a number of applications compared to published systems.
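One plausible ingredient of lightweight bi-level video coding is a changed-pixel residual between consecutive frames; the sketch below is an assumption for illustration, not the paper's actual codec:

```python
import numpy as np

def frame_residual(prev_bits, curr_bits):
    """Changed-pixel mask between two bi-level frames (1 where they differ)."""
    return np.bitwise_xor(prev_bits, curr_bits)

prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 1     # a small object appears
changed = frame_residual(prev, curr)
print(int(changed.sum()), "changed pixels out of", curr.size)   # 4 out of 64
```

A mostly static scene leaves a residual that is almost entirely zeros, which run-length or fax-style coding then compresses very cheaply, consistent with the large application-dependent energy reductions reported above.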
IEEE Transactions on Circuits and Systems for Video Technology | 2012
Muhammad Imran; Khursheed Khursheed; Najeem Lawal; Mattias O'Nils; Naeem Ahmad
Wireless vision sensor networks (WVSNs) consist of a number of wireless vision sensor nodes (VSNs), often spread over a large geographical area. Each node has an image capturing unit, a battery or alternative energy source, a memory unit, a light source, a wireless link, and a processing unit. The challenges associated with WVSNs include low energy consumption, low bandwidth, and limited memory and processing capabilities. In order to meet these challenges, this paper focuses on the exploration of energy-efficient reconfigurable architectures for the VSN. The design and research challenges associated with implementing the VSN on different computational platforms, such as microcontrollers, field-programmable gate arrays, and a server, are explored. In relation to this, the effects on the energy consumption and the design complexity at the node, when functionality is moved from one platform to another, are analyzed. Based on the implementation of the VSN on embedded platforms, the lifetime of the VSN is predicted using the measured energy values of the platforms for different implementation strategies. The implementation results show that an architecture in which compressed images are transmitted after the pixel-based operations realizes a WVSN system with low energy consumption. Moreover, the complex post-processing tasks are moved to a server, which has fewer constraints.
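The lifetime prediction described above can be stated generically as follows (a sketch in our own notation; the paper's measured energy values and exact model are not reproduced here):

```latex
% Node lifetime from the per-frame energy budget (illustrative notation)
T_{\mathrm{life}} \approx \frac{E_{\mathrm{battery}}}
{r \left( E_{\mathrm{capture}} + E_{\mathrm{process}} + E_{\mathrm{transmit}} \right)},
\qquad E_{\mathrm{transmit}} \propto \text{bytes sent per frame},
```

where r is the frame rate; compressing after the pixel-based operations shrinks the transmit term, which is why that architecture yields the longest predicted lifetime.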
Proceedings of SPIE | 2012
Khursheed Khursheed; Muhammad Imran; Naeem Ahmad; Mattias O'Nils
A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where the installation of wired solutions is not feasible. Because of the wireless nature of the application, the energy budget in these networks is limited to the batteries. Due to this limited energy, the processing at the Visual Sensor Nodes (VSNs) and the communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a great deal of energy and requires high communication bandwidth. Data compression methods reduce the data efficiently and are hence effective in reducing the communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods under different sets of constraints in a WVSN.
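As a flavor of how bi-level compression exploits long uniform runs, here is a minimal run-length coder for a single image row; it is a simplified stand-in for fax-style coders such as TIFF Group 4, whose real two-dimensional Huffman coding is substantially more involved:

```python
def run_lengths(row):
    """Encode a 0/1 row as alternating run lengths, starting with a white (0) run."""
    runs, current, count = [], 0, 0
    for bit in row:
        if bit == current:
            count += 1
        else:
            runs.append(count)
            current, count = bit, 1
    runs.append(count)
    return runs

row = [0] * 50 + [1] * 10 + [0] * 68
print(run_lengths(row))   # [50, 10, 68]: three symbols instead of 128 pixels
```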
Digital Systems Design | 2013
Muhammad Imran; Naeem Ahmad; Khursheed Khursheed; Mattias O'Nils; Najeem Lawal
Wireless vision sensor nodes have limited resources, such as energy, memory, wireless bandwidth and processing, so it becomes necessary to investigate lightweight vision tasks. To highlight foreground objects, many machine vision applications depend on background subtraction. Traditional background subtraction approaches employ recursive and non-recursive techniques and store the whole image in memory, which raises issues of complexity on hardware platforms, energy requirements and latency. This work presents a low-complexity background subtraction technique for a hardware-implemented wireless Vision Sensor Node (VSN). The proposed technique uses existing image scaling techniques to scale down the image, and the downscaled image is stored in the internal memory of the hardware platform. For the subtraction operation, the background pixels are generated in real time with an upscaling technique. The performance and memory requirements of the system are compared for four image scaling techniques: nearest neighbor, averaging, bilinear, and bicubic. The results show that a system with the lightweight scaling techniques, i.e., nearest neighbor and averaging, up to a scaling factor of 8, missed on average less than one object compared to a system which uses the full original background image. The proposed approach reduces the memory requirement by a factor of up to 64, besides reducing design/implementation complexity and cost, compared to a background model which involves the whole frame.
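The downscaled-background idea can be sketched as follows, assuming a scaling factor of 8 and nearest-neighbor scaling (the synthetic background and pixel values are illustrative only):

```python
import numpy as np

SCALE = 8   # background stored at 1/8 resolution, i.e. 1/64 of the memory

def downscale_nn(img):
    """Nearest-neighbor downscaling: keep every SCALE-th pixel."""
    return img[::SCALE, ::SCALE]

def background_pixel(small_bg, y, x):
    """Regenerate one full-resolution background pixel by nearest-neighbor upscaling."""
    return small_bg[y // SCALE, x // SCALE]

bg = np.tile(np.arange(64, dtype=np.uint8), (64, 1))   # smooth synthetic background
small = downscale_nn(bg)                               # 8x8 array kept on the node
frame = bg.copy()
frame[10, 10] = 255                                    # one foreground pixel
diff = abs(int(frame[10, 10]) - int(background_pixel(small, 10, 10)))
print("stored pixels:", small.size, "of", bg.size, "| diff at (10,10):", diff)
```

Because the background pixel is regenerated on the fly during subtraction, only the 8x8 array ever resides in node memory, which is where the factor-64 memory reduction comes from.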
Broadband and Wireless Computing, Communication and Applications | 2011
Naeem Ahmad; Najeem Lawal; Mattias O'Nils; Bengt Oelmann; Muhammad Imran; Khursheed Khursheed
Visual Sensor Networks (VSNs) are networks which generate two-dimensional data. The major difference between a VSN and an ordinary sensor network is the large amount of data. In a VSN, a large number of camera nodes form a distributed system which can be deployed in many potential applications. In this paper we present a model of the physical parameters of a visual sensor network for tracking large birds, such as the golden eagle, in the sky. The developed model is used to optimize the placement of the camera nodes in the VSN. A camera node is modeled as a function of its field of view, which is derived from the combination of the lens focal length and the camera sensor. From the field of view and the resolution of the sensor, a model for full coverage between two altitude limits has been developed. We show that the model can be used to minimize the number of sensor nodes for any given camera sensor by exploring the focal lengths that both give full coverage and meet the minimum object size requirement. For the case of large bird surveillance, we achieve 100% coverage of the relevant altitudes using 20 camera nodes per km² for the investigated camera sensors.
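A back-of-the-envelope version of the coverage model, using a pinhole camera approximation; the sensor, lens and altitude numbers below are illustrative assumptions, not the parameters investigated in the paper:

```python
def footprint_m(altitude_m, sensor_mm, focal_mm):
    """Side length of the area seen at a given altitude (pinhole camera model)."""
    return altitude_m * sensor_mm / focal_mm

def object_pixels(obj_m, altitude_m, focal_mm, sensor_mm, sensor_px):
    """Pixels spanned by an object of size obj_m seen at altitude_m."""
    return obj_m * focal_mm / altitude_m * sensor_px / sensor_mm

# Illustrative parameters: 6.4 mm wide sensor, 2000 px across, 8 mm lens.
low, high = 200.0, 1000.0            # altitude band to monitor, in metres
fp = footprint_m(low, 6.4, 8.0)      # coverage is tightest at the LOW altitude
print(f"footprint at {low:.0f} m: {fp:.0f} m, so about "
      f"{(1000.0 / fp) ** 2:.0f} nodes/km^2 for gap-free coverage")
print(f"a 1.5 m wingspan at {high:.0f} m spans "
      f"{object_pixels(1.5, high, 8.0, 6.4, 2000):.1f} px")
```

The two constraints pull in opposite directions: a longer focal length improves object resolution at the upper altitude but shrinks the footprint at the lower altitude, so the node count is minimized by searching the focal lengths that satisfy both.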
Proceedings of SPIE | 2013
Muhammad Imran; Khaled Benkrid; Khursheed Khursheed; Naeem Ahmad; Mattias O'Nils; Najeem Lawal
The current trend in embedded vision systems is to propose bespoke solutions for specific problems, as each application has different requirements and constraints. There is no widely used model or benchmark which aims to facilitate generic solutions in embedded vision systems. Providing such a model is a challenging task due to the wide range of use cases, environmental factors, and available technologies. However, common characteristics can be identified in order to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system taxonomy, in which a number of vision functions, as well as their combinations, characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid in order to predict and avoid critical machinery failure. The proposed taxonomy is evaluated using a quantitative parameter which shows that it covers 95 percent of the investigated vision systems, and that its flow is ordered for 60 percent of the systems. This taxonomy will serve as a tool for the classification and comparison of systems and will enable researchers to propose generic and efficient solutions for the same classes of systems.
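The taxonomy itself is not reproduced here, but its core idea, characterizing a system by which vision functions it uses and whether they follow a canonical order, can be rendered as a toy check (the function names and order below are our own simplification):

```python
CANONICAL = ["capture", "pre_processing", "segmentation",
             "feature_extraction", "classification"]

def is_flow_ordered(system_chain):
    """True if the system's functions appear in the canonical taxonomy order."""
    idx = [CANONICAL.index(f) for f in system_chain]
    return idx == sorted(idx)

print(is_flow_ordered(["capture", "segmentation", "classification"]))   # True
print(is_flow_ordered(["segmentation", "capture", "classification"]))   # False
```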
International Journal of Distributed Sensor Networks | 2014
Muhammad Imran; Khursheed Khursheed; Naeem Ahmad; Mattias O'Nils; Najeem Lawal; Malik Abdul Waheed
There are a number of challenges caused by the large amount of data and the limited resources, such as memory, processing capability, energy consumption, and bandwidth, when implementing vision systems on wireless smart cameras using embedded platforms. Research in this field usually focuses on the development of a specific solution for a particular problem. A tool which facilitates the complexity estimation and comparison of wireless smart camera systems is required in order to develop efficient generic solutions. To develop such a tool, we present in this paper a complexity model based on a system taxonomy. In this model, we investigate the arithmetic complexity and memory requirements of vision functions with the help of the system taxonomy. To demonstrate the use of the proposed model, a number of actual systems are analyzed in a case study. The complexity model, together with the system taxonomy, is used for the complexity estimation of vision functions and for a comparison of vision systems. After the comparison, the systems are evaluated for implementation on a single generic architecture. The proposed approach will assist researchers in benchmarking and in proposing efficient generic solutions for the same class of problems with reduced design and development costs.
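A toy version of such a complexity model, with invented per-function coefficients (the paper's actual taxonomy and measured values are not reproduced here):

```python
# name: (arithmetic ops per pixel, image rows that must be buffered)
FUNCS = {
    "background_subtraction": (2, 1),
    "threshold":              (1, 0),
    "morphology_3x3":         (9, 3),
    "labeling":               (4, 2),
}

def complexity(chain, width, height):
    """Total ops per frame and worst-case line-buffer memory for a function chain."""
    ops = sum(FUNCS[f][0] for f in chain) * width * height
    mem_pixels = max(FUNCS[f][1] for f in chain) * width
    return ops, mem_pixels

ops, mem = complexity(["background_subtraction", "threshold", "morphology_3x3"],
                      640, 480)
print(f"{ops / 1e6:.1f} M ops/frame, {mem} pixels of line-buffer memory")
```

Summing per-function costs over a chain like this is what lets two systems built from the same vision functions be compared on a single generic architecture.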
International Journal of Space-Based and Situated Computing | 2013
Naeem Ahmad; Muhammad Imran; Khursheed Khursheed; Najeem Lawal; Mattias O'Nils
A visual sensor network (VSN) is a distributed system of a large number of camera nodes which generates two-dimensional data. This paper presents a model of a VSN to track large birds, such as golden eagles ...