Publication


Featured research published by Muhammad Umar Karim Khan.


IEEE Transactions on Circuits and Systems for Video Technology | 2017

A Low-Complexity Pedestrian Detection Framework for Smart Video Surveillance Systems

Muhammad Bilal; Asim Khan; Muhammad Umar Karim Khan; Chong-Min Kyung

Pedestrian detection is a key problem in computer vision and is currently addressed with increasingly complex solutions involving compute-intensive features and classification schemes. In this scope, the histogram of oriented gradients (HOG) in conjunction with a linear support vector machine (SVM) classifier is considered the single most discriminative feature, and it has been adopted as a stand-alone detector as well as a key instrument in advanced systems involving hybrid features and cascaded detectors. In this paper, we propose a pedestrian detection framework that is computationally less expensive as well as more accurate than HOG-linear SVM. The proposed scheme exploits the discriminating power of locally significant gradients in building orientation histograms without involving complex floating-point operations while computing the feature. The integer-only feature allows the use of a powerful histogram intersection kernel SVM classifier in a fast lookup-table-based implementation. As a result, the proposed framework achieves at least 3% more accurate detection results than HOG on standard data sets while being 1.8 and 2.6 times faster on conventional desktop PC and embedded ARM platforms, respectively, for single-scale pedestrian detection on VGA-resolution video. In addition, hardware implementation on an Altera Cyclone IV field-programmable gate array results in more than 40% savings in logic resources compared with its HOG-linear SVM competitor. Hence, the proposed feature and classification setup is shown to be a better candidate for the single most discriminative pedestrian detector than the currently accepted HOG-linear SVM.
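
The lookup-table trick mentioned above can be illustrated compactly. The sketch below is a minimal NumPy illustration of the general idea of evaluating a histogram intersection kernel SVM over integer (quantized) features with per-dimension tables; the function names and shapes are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def build_hik_tables(support_vecs, dual_coefs, n_levels):
        """Precompute per-dimension lookup tables for a histogram intersection
        kernel SVM (illustrative sketch, not the paper's implementation).
        support_vecs: (n_sv, n_dims) integer histograms (support vectors)
        dual_coefs:   (n_sv,) values alpha_i * y_i
        n_levels:     number of possible integer feature values (0 .. n_levels-1)"""
        n_sv, n_dims = support_vecs.shape
        tables = np.zeros((n_dims, n_levels))
        for v in range(n_levels):
            # Contribution of dimension j when the query bin value is v:
            # sum_i alpha_i * y_i * min(v, support_vecs[i, j])
            tables[:, v] = (dual_coefs[:, None] * np.minimum(v, support_vecs)).sum(axis=0)
        return tables

    def hik_svm_score(x, tables, bias):
        """Decision value using one table lookup and one addition per dimension."""
        return tables[np.arange(len(x)), x].sum() + bias

Because every feature dimension is an integer in a small range, its contribution to the decision value can be tabulated once offline, so classification at run time avoids floating-point kernel evaluations.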


IFIP/IEEE International Conference on Very Large Scale Integration | 2015

Hardware architecture and optimization of sliding window based pedestrian detection on FPGA for high resolution images by varying local features

Asim Khan; Muhammad Umar Karim Khan; Muhammad Bilal; Chong-Min Kyung

Pedestrian detection has lately attracted considerable interest from researchers due to its many practical applications. However, the low accuracy and high complexity of pedestrian detection have still not enabled its use in successful commercial applications. In this paper, we present insights into the complexity-accuracy relationship of pedestrian detection. We consider the Histogram of Oriented Gradients (HOG) scheme with a linear Support Vector Machine (LinSVM) as a benchmark. We describe parallel implementations of the various blocks of the pedestrian detection system, designed for full-HD (1920×1080) resolution. Features are improved by optimal selection of the cell size and histogram bins, which have been shown to significantly affect the accuracy and complexity of pedestrian detection. With a careful choice of these parameters, a frame rate of 39.2 fps is achieved with a negligible loss in accuracy, which is 16.3x and 3.8x higher than state-of-the-art GPU and FPGA implementations, respectively. Moreover, reductions of 97.14% and 10.2% in the energy consumed to process one frame are observed. Finally, the features are further enhanced by removing petty gradients in the histograms, which otherwise cause a loss of accuracy. This increases the frame rate to 42.7 fps (18x and 4.1x higher) and lowers the energy consumption by 97.34% and 16.4%, while improving the accuracy by 2%, compared to state-of-the-art GPU and FPGA implementations, respectively.
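
To make the two tuning knobs mentioned above concrete, the following sketch shows where cell size and histogram bin count enter an orientation-histogram computation. It is a simplified NumPy illustration (the function name cell_histograms is ours, and block normalization is omitted), not the paper's FPGA pipeline.

    import numpy as np

    def cell_histograms(gray, cell=8, bins=9):
        """Orientation histograms per cell; `cell` and `bins` are the two
        parameters whose choice trades accuracy against complexity (sketch)."""
        gy, gx = np.gradient(gray.astype(np.float32))
        mag = np.hypot(gx, gy)
        ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
        bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
        h, w = gray.shape
        hists = np.zeros((h // cell, w // cell, bins))
        for cy in range(h // cell):
            for cx in range(w // cell):
                ys = slice(cy * cell, (cy + 1) * cell)
                xs = slice(cx * cell, (cx + 1) * cell)
                # Magnitude-weighted vote of each pixel into its orientation bin.
                hists[cy, cx] = np.bincount(bin_idx[ys, xs].ravel(),
                                            weights=mag[ys, xs].ravel(),
                                            minlength=bins)[:bins]
        return hists

Larger cells and fewer bins shrink the feature (and the hardware that computes it) at the cost of descriptive power, which is the trade-off the paper explores.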


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2017

EBSCam: Background Subtraction for Ubiquitous Computing

Muhammad Umar Karim Khan; Asim Khan; Chong-Min Kyung

Background subtraction (BS) is a crucial machine vision scheme for detecting moving objects in a scene. With the advent of smart cameras, the embedded implementation of BS finds ever-increasing applications. This paper presents a new BS scheme called efficient BS for smart cameras (EBSCam). EBSCam thresholds the change in the estimated background model, which suppresses the variance of the estimates, resulting in competitive performance compared with standard BS schemes. The percentage of wrong classification of EBSCam is lower than those of the Gaussian mixture model (GMM) (10.97%) and the pixel-based adaptive segmenter (PBAS) (4.66%) algorithms in FPGA implementations. Moreover, the memory bandwidth requirement of EBSCam is 6.66%, 41.36%, and 90.48% lower than the state-of-the-art FPGA implementations of the GMM, ViBe, and PBAS algorithms, respectively. EBSCam achieves a significant speed-up compared with the FPGA implementations of the GMM (by 43.3%), ViBe (by 118.6%), and PBAS (by 144.8%) schemes. Similarly, the energy consumption of EBSCam is 80.56% and 99.9% lower than that of GMM and PBAS, respectively. In summary, the advantages of EBSCam in accuracy, speed, and energy consumption combined make it especially suitable for embedded applications.
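
The abstract does not spell out the EBSCam update rule, so the following is only a generic illustration of the family it belongs to: a running background estimate whose per-frame change is clamped, which keeps the estimate's variance low. All names and thresholds are placeholders, not the actual EBSCam algorithm.

    import numpy as np

    def background_step(frame, background, fg_threshold=30, max_step=1):
        """Toy thresholded background-subtraction step (illustrative, not EBSCam).
        frame, background: uint8 grayscale images of the same shape."""
        diff = frame.astype(np.int16) - background.astype(np.int16)
        foreground = np.abs(diff) > fg_threshold        # pixels that changed a lot
        # Move the background toward the frame by at most `max_step` per pixel;
        # clamping the change suppresses the variance of the background estimate.
        step = np.clip(diff, -max_step, max_step)
        new_background = np.clip(background.astype(np.int16) + step, 0, 255).astype(np.uint8)
        return foreground, new_background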


International Symposium on Circuits and Systems | 2013

Optimized learning rate for energy waste minimization in a background subtraction based surveillance system

Muhammad Umar Karim Khan; Chong-Min Kyung; Khawaja M. Yahya

In this paper, a surveillance system employing a background subtraction scheme is discussed. The aim of the work is to minimize the energy wasted by the overall system due to false positives. Pixels in the foreground of a motion detection system remain non-zero even after a moving object has stopped, due to the settling time associated with an adaptive background subtraction scheme such as Mixture of Gaussians. Temporal variance in a visually static pixel region also triggers false positives. The optimal learning rate is derived in this paper for different parameters, such as threshold values, ROI size, and the total number of frames in the scene.
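
The derivation itself is not reproduced in the abstract, but the role of the learning rate is easy to illustrate: in a simple running-average background model the residual of a stopped object decays geometrically, so the learning rate directly sets how many false-positive frames (and hence how much wasted energy) follow. The snippet below is a back-of-the-envelope illustration under that simple model, not the paper's derivation.

    import math

    def settling_frames(initial_diff, threshold, learning_rate):
        """Frames until a stopped object's residual decays below `threshold` in a
        running-average model B_t = (1 - a) * B_{t-1} + a * I_t, where the
        residual shrinks geometrically: diff_t = (1 - a)^t * initial_diff."""
        return math.ceil(math.log(threshold / initial_diff) / math.log(1.0 - learning_rate))

    # Example: a pixel 80 gray levels away from the background, detection threshold 10:
    # settling_frames(80, 10, 0.01) -> ~207 frames of false positives
    # settling_frames(80, 10, 0.05) -> ~41 frames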


Advanced Video and Signal Based Surveillance | 2014

Dual frame rate motion detection for memory- and energy-constrained surveillance systems

Muhammad Umar Karim Khan; Asim Khan; Chong-Min Kyung

CCTV-based surveillance systems, despite gaining widespread popularity, still waste computational power, transmission bandwidth, and storage space. This paper responds to this problem by proposing a motion-based video recording scheme with dual frame rate motion detection. Statistical models for the memory and energy consumption of the overall system are described. The root-mean-square error of the memory-consumption models is 0.54 and 0.78 for systems with single frame rate and dual frame rate motion detection, respectively. For a typical surveillance video, the proposed dual frame rate system stores 1.22, 4.94, and 7.81 fewer frames per second at 10 fps, 1 fps, and 0.5 fps, respectively, compared to the single frame rate motion detection system. We have suggested a criterion, based on a mathematical model, for using dual frame rate motion detection in surveillance.
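
As a rough illustration of the dual-frame-rate idea (the specific rates, hold time, and switching rule below are assumptions, not taken from the paper), motion detection can run at a sparse rate while the scene is static and switch the recorder to the full rate only while motion persists:

    class DualRateController:
        """Toy dual-frame-rate policy: sample sparsely while the scene is static,
        record at the full rate while motion persists (illustrative sketch only)."""
        def __init__(self, idle_fps=0.5, active_fps=10.0, hold_frames=25):
            self.idle_fps, self.active_fps = idle_fps, active_fps
            self.hold_frames = hold_frames   # keep recording this long after motion stops
            self.countdown = 0

        def frame_rate(self, motion_detected):
            if motion_detected:
                self.countdown = self.hold_frames
            elif self.countdown > 0:
                self.countdown -= 1
            return self.active_fps if self.countdown > 0 else self.idle_fps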


International Symposium on Circuits and Systems | 2013

Energy reduction of ultra-low voltage VLSI circuits by digit-serial architectures

Muhammad Umar Karim Khan; Chong-Min Kyung

Ultra-low voltage VLSI designs are gaining widespread attention due to the requirements of minimum-energy-consuming motes. Leakage energy is a considerable part of the overall energy consumed in the ultra-low voltage domain. This paper deals with overall energy reduction in ultra-low power systems through the use of digit-serial implementations. Leakage energy is reduced by the reduction of hardware in digit-serial implementations; however, overhead circuitry adds to the energy budget. It is shown in this paper that digit-serial implementations do reduce the overall energy consumption of ultra-low power VLSI circuits compared to bit-serial and word-parallel implementations. Reductions of 73% and 92% in energy per clock cycle were obtained with an 8-bit adder at a source voltage of 0.7 V compared to the bit-serial and word-parallel implementations, respectively.
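
To make the term concrete, the following is a functional model of a digit-serial adder: the W-bit operands are consumed D bits per clock cycle, so the datapath is only D bits wide (less hardware, hence less leakage) at the cost of W/D cycles per operation. This is only a behavioral sketch in software, not the paper's circuit.

    def digit_serial_add(a, b, width=8, digit=2):
        """Functional model of a digit-serial adder: `digit` bits of each operand
        are consumed per clock cycle, with the carry stored between cycles, so the
        datapath is only `digit` bits wide at the cost of width/digit cycles."""
        mask = (1 << digit) - 1
        carry, result = 0, 0
        for cycle in range(width // digit):
            da = (a >> (cycle * digit)) & mask
            db = (b >> (cycle * digit)) & mask
            s = da + db + carry
            result |= (s & mask) << (cycle * digit)
            carry = s >> digit
        return result, carry        # e.g. digit_serial_add(0xA5, 0x5B) -> (0x00, 1)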


Optics Express | 2018

Depth extraction with offset pixels

Woojin Yun; Young-Gyu Kim; Yeongmin Lee; JooWon Lim; Hyosub Kim; Muhammad Umar Karim Khan; Sun Hyok Chang; Hyun Sang Park; Chong-Min Kyung

Numerous depth extraction techniques have been proposed in the past. However, the utility of these techniques is limited, as they typically require multiple imaging units or bulky computation platforms, cannot achieve high speed, and are computationally expensive. To counter the above challenges, a sensor with Offset Pixel Apertures (OPA) has recently been proposed. However, a working system for depth extraction with the OPA sensor has not been discussed. In this paper, we propose the first such system for depth extraction using the OPA sensor. We also propose a dedicated hardware implementation of the proposed system, named the Depth Map Processor (DMP). The DMP can provide depth at 30 frames per second at 1920 × 1080 resolution with 31 disparity levels. Furthermore, the proposed DMP has low power consumption, requiring only 290.76 mW at the aforementioned speed and resolution. These properties make the proposed system an ideal choice for depth extraction in constrained environments.
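
The abstract does not detail the matching algorithm inside the DMP, but the core disparity search such a processor must pipeline can be illustrated with a toy 1-D block matcher over the stated 31 disparity levels. The function below is an assumption-level sketch, not the DMP's architecture.

    import numpy as np

    def disparity_at(left_row, right_row, x, max_disp=31, half_win=4):
        """Toy 1-D block matching: pick the disparity (0..max_disp) minimizing the
        sum of absolute differences between local windows of the two sub-images.
        Caller must ensure x - max_disp - half_win >= 0 and x + half_win < len(row)."""
        ref = left_row[x - half_win: x + half_win + 1].astype(np.int32)
        costs = [np.abs(ref - right_row[x - d - half_win: x - d + half_win + 1]
                        .astype(np.int32)).sum()
                 for d in range(max_disp + 1)]
        return int(np.argmin(costs))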


Archive | 2017

Depth Estimation Using Single Camera with Dual Apertures

Hyun Sang Park; Young-Gyu Kim; Yeongmin Lee; Woojin Yun; Jinyeon Lim; Dong Hun Kang; Muhammad Umar Karim Khan; Asim Khan; Jang-Seon Park; Won-Seok Choi; Youngbae Hwang; Chong-Min Kyung

Depth sensing is an active area of research in imaging technology. Here, we use a dual-aperture system to infer depth from a single image based on the principle of depth from defocus (DFD). The dual-aperture camera includes a small all-pass aperture (which allows all light through) and a larger RGB-pass aperture (which allows visible light only). The IR image captured through the smaller aperture is sharper than the RGB image captured through the larger aperture. Since the difference in blurriness between the two images depends on the actual distance, a dual-aperture camera provides an opportunity to estimate the depth of a scene. Measuring the absolute blur size is difficult, since it is affected by the illuminant's spectral distribution, noise, specular highlights, vignetting, etc. By using a dual-aperture camera, however, the relative blurriness can be measured in a robust way. In this article, a detailed description of extracting depth using a dual-aperture camera is provided, including procedures for fixing each of the artifacts that degrade the depth quality based on DFD. Experimental results confirm the improved depth extraction obtained by employing the aforementioned schemes.
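
The relative-blurriness idea can be sketched as follows: blur the sharp small-aperture (IR) patch with increasing Gaussian widths until it best matches the large-aperture (RGB) patch; the best-matching width is a cue for distance from the focal plane. This is only an illustrative stand-in for the paper's DFD procedure; the function name and brute-force search are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def relative_blur(sharp_patch, blurred_patch, sigmas=np.linspace(0.0, 5.0, 26)):
        """Estimate how much extra blur the large-aperture (RGB) patch carries
        relative to the small-aperture (IR) patch by brute-force search over
        Gaussian blur widths (a toy stand-in for the DFD matching step).
        Both patches should be float arrays of the same shape."""
        errors = [np.mean((gaussian_filter(sharp_patch, s) - blurred_patch) ** 2)
                  for s in sigmas]
        return sigmas[int(np.argmin(errors))]   # larger sigma -> farther from focus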


Asia Pacific Conference on Circuits and Systems | 2016

Depth refinement on sparse-depth images using visual perception cues

Muhammad Umar Karim Khan; Asim Khan; Chong-Min Kyung

Numerous depth extraction schemes cannot extract depth in textureless regions, thus generating sparse depth maps. In this paper, we propose using perception cues to improve the sparse depth map. We consider the local neighborhood as well as the global surface properties of objects, and we use this information to complement depth extraction schemes. The method is not scene- or class-specific. In quantitative evaluation, the proposed method is shown to perform better than previous depth refinement methods: the error, in terms of the standard deviation of depth, is reduced by 60%. The computational overhead of the proposed method is also very low, making it a suitable candidate for depth refinement.
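
As a minimal sketch of the local-neighborhood part of the idea (the paper's global surface cues are not reproduced here, and the window size and median rule are assumptions), missing depth values can be filled from valid neighbors:

    import numpy as np

    def fill_sparse_depth(depth, valid, iterations=10, win=2):
        """Naive hole filling for a sparse depth map: each invalid pixel takes the
        median of the valid depths in its local window (illustrative only).
        depth: (H, W) float array; valid: (H, W) boolean mask of trusted pixels."""
        depth, valid = depth.copy(), valid.copy()
        h, w = depth.shape
        for _ in range(iterations):
            filled_any = False
            for y in range(h):
                for x in range(w):
                    if valid[y, x]:
                        continue
                    y0, y1 = max(0, y - win), min(h, y + win + 1)
                    x0, x1 = max(0, x - win), min(w, x + win + 1)
                    neighbours = depth[y0:y1, x0:x1][valid[y0:y1, x0:x1]]
                    if neighbours.size:
                        depth[y, x] = np.median(neighbours)
                        valid[y, x] = True
                        filled_any = True
            if not filled_any:
                break
        return depth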


IFIP/IEEE International Conference on Very Large Scale Integration | 2015

A Hardware Accelerator for Real Time Sliding Window Based Pedestrian Detection on High Resolution Images

Asim Khan; Muhammad Umar Karim Khan; Muhammad Bilal; Chong-Min Kyung

Pedestrian detection has lately attracted considerable interest from researchers due to its many practical applications. However, the low accuracy and high complexity of pedestrian detection have still not enabled its use in successful commercial applications. In this chapter, we present insights into the complexity-accuracy relationship of pedestrian detection. We consider the Histogram of Oriented Gradients (HOG) scheme with a linear Support Vector Machine (LinSVM) as a benchmark. We describe parallel implementations of the various blocks of the pedestrian detection system, designed for full-HD (1920×1080) resolution. Features are improved by optimal selection of the cell size and histogram bins, which have been shown to significantly affect the accuracy and complexity of pedestrian detection. With a careful choice of these parameters, a frame rate of 39.2 fps is achieved with a negligible loss in accuracy, which is 16.3x and 3.8x higher than state-of-the-art GPU and FPGA implementations, respectively. Moreover, reductions of 97.14% and 10.2% in the energy consumed to process one frame are observed. Finally, the features are further enhanced by removing petty gradients in the histograms, which otherwise cause a loss of accuracy. This increases the frame rate to 42.7 fps (18x and 4.1x higher) and lowers the energy consumption by 97.34% and 16.4%, while improving the accuracy by 2%, compared to state-of-the-art GPU and FPGA implementations, respectively.
