Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nabeel Younus Khan is active.

Publication


Featured research published by Nabeel Younus Khan.


digital image computing: techniques and applications | 2011

SIFT and SURF Performance Evaluation against Various Image Deformations on Benchmark Dataset

Nabeel Younus Khan; Brendan McCane; Geoff Wyvill

Scene classification in indoor and outdoor environments is a fundamental problem for the vision and robotics communities. Scene classification benefits from image features that are invariant to image transformations such as rotation, illumination, scale, viewpoint and noise. Selecting suitable features that exhibit such invariances plays a key part in classification performance. This paper summarizes the performance of two robust feature detection algorithms, namely the Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), on several classification datasets. We also propose three shorter SIFT descriptors. Results show that the proposed 64D and 96D SIFT descriptors perform as well as the traditional 128D SIFT descriptor for image matching at a significantly reduced computational cost. SURF also gives good classification results on the different datasets.
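
A minimal sketch of this kind of comparison using OpenCV. The abstract does not say how the shorter 64D descriptors are constructed, so the example below simply truncates the standard 128D SIFT vectors for illustration; the image filenames are placeholders.

import cv2

# Two views of the same scene (placeholder filenames).
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

def good_matches(d1, d2, ratio=0.75):
    # Brute-force matching filtered with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good

# Standard 128D SIFT versus a truncated 64D variant (the truncation is an
# illustrative stand-in, not the construction used in the paper).
print("128D matches:", len(good_matches(des1, des2)))
print(" 64D matches:", len(good_matches(des1[:, :64].copy(), des2[:, :64].copy())))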


machine vision applications | 2015

Better than SIFT

Nabeel Younus Khan; Brendan McCane; Steven Mills

Independent evaluation of the performance of feature descriptors is an important part of the process of developing better computer vision systems. In this paper, we compare the performance of several state-of-the-art image descriptors, including several recent binary descriptors. We test the descriptors on an image recognition application and a feature matching application. Our study includes several recently proposed methods and, despite claims to the contrary, we find that SIFT is still the most accurate performer in both application settings. We also find that general purpose binary descriptors are not ideal for image recognition applications but perform adequately in a feature matching application.
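
As a rough illustration of this kind of evaluation (not the paper's actual protocol), the sketch below matches a query image against a reference image with both SIFT (floating-point descriptors, L2 distance) and ORB (binary descriptors, Hamming distance); the filenames are placeholders.

import cv2

query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)          # placeholder
reference = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder

def ratio_matches(detector, norm, ratio=0.8):
    # Detect keypoints, compute descriptors and match with Lowe's ratio test.
    kq, dq = detector.detectAndCompute(query, None)
    kr, dr = detector.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(norm)
    good = []
    for pair in matcher.knnMatch(dq, dr, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good

# Floating-point descriptor with L2 distance versus binary descriptor
# with Hamming distance.
print("SIFT:", len(ratio_matches(cv2.SIFT_create(), cv2.NORM_L2)))
print("ORB: ", len(ratio_matches(cv2.ORB_create(nfeatures=2000), cv2.NORM_HAMMING)))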


image and vision computing new zealand | 2012

Feature set reduction for image matching in large scale environments

Nabeel Younus Khan; Brendan McCane; Steven Mills

Image matching in large scale environments is challenging due to the large number of features used in typical representations. In this paper we investigate methods for reducing the number of SIFT (Scale Invariant Feature Transform) features in an image-based localization application. We find that reductions of up to 59% in the number of features can result in improved performance of a naive matching algorithm for highly redundant data sets. However, those improvements do not carry over to visual bag of words, where a more moderate feature reduction (up to 16%) is often needed to maintain performance similar to the non-reduced set. Our reduced features perform better than other robust feature descriptors, namely HOG, GIST and ORB, on all data sets with naive matching. The main contribution of this paper is a compact feature representation of a large scale environment for robust 2D image matching.
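
The abstract does not spell out the reduction criterion, so the sketch below uses one simple stand-in, keeping only the SIFT keypoints with the strongest responses before matching; this toy heuristic is an assumption, not the paper's method, and the filename is a placeholder.

import cv2

def reduced_sift(img, keep_fraction=0.5):
    # Detect SIFT keypoints, then keep only the strongest responses as a
    # simple stand-in for the paper's feature set reduction.
    sift = cv2.SIFT_create()
    keypoints = sift.detect(img, None)
    keypoints = sorted(keypoints, key=lambda k: k.response, reverse=True)
    keypoints = keypoints[: max(1, int(len(keypoints) * keep_fraction))]
    keypoints, descriptors = sift.compute(img, keypoints)
    return keypoints, descriptors

img = cv2.imread("location_01.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder
kps, des = reduced_sift(img)
print("kept", len(kps), "keypoints, descriptor matrix shape", des.shape)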


digital image computing: techniques and applications | 2007

Efficient Fingerprint Matching Technique Using Wavelet Based Features

Nabeel Younus Khan; Muhammad Younus Javed

This paper details an efficient, image-based fingerprint matching technique that uses wavelet-based features. The core point is first detected using a hybrid technique, and a wavelet transform is then applied to a smaller region cropped around the core point. The proposed system extracts three types of features from this cropped region, based on energy, standard deviation and the Fourier-Mellin transform. Classification is distance based and uses four distance measures; a voting scheme over these four distances is introduced, which saves a considerable amount of computation time. Authentication can then be performed at the end if required. The proposed system achieves a success rate of up to 93.75% on a standard database and can recognize a fingerprint in less than 0.5 seconds.
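
A minimal sketch of the feature extraction and voting steps described above, assuming PyWavelets is available. Core point detection and the Fourier-Mellin features are omitted, and the particular wavelet and the four distance measures shown are assumptions rather than the ones used in the paper.

import numpy as np
import pywt

def wavelet_features(region, wavelet="db4", levels=2):
    # Energy and standard deviation of each wavelet detail sub-band of the
    # region cropped around the core point (core point detection omitted).
    coeffs = pywt.wavedec2(region, wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:             # (horizontal, vertical, diagonal) bands
        for band in detail:
            feats.append(np.sum(band ** 2))   # energy
            feats.append(np.std(band))        # standard deviation
    return np.array(feats)

def vote_match(query_feats, templates):
    # Distance-based classification with voting across several distances;
    # templates maps a fingerprint identity to its stored feature vector.
    distances = {
        "euclidean": lambda a, b: np.linalg.norm(a - b),
        "manhattan": lambda a, b: np.abs(a - b).sum(),
        "chebyshev": lambda a, b: np.abs(a - b).max(),
        "cosine": lambda a, b: 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9),
    }
    votes = {}
    for dist in distances.values():
        best = min(templates, key=lambda name: dist(query_feats, templates[name]))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)      # identity receiving the most votes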


new zealand chapter's international conference on computer-human interaction | 2012

Vision based indoor scene localization via smart phone

Nabeel Younus Khan; Brendan McCane; Steven Mills

Scene localization without GPS in indoor environments is a challenging problem. Robust indoor scene localization is particularly useful for visually impaired people and robots during navigation. We present a prototype Android application which is intended to perform indoor localization and convey location information to a blind person. The application is in its early stages and we are looking to extend its functionality in the future.


conference on computers and accessibility | 2012

Smartphone application for indoor scene localization

Nabeel Younus Khan; Brendan McCane

Blind people are unable to navigate easily in unfamiliar indoor environments without assistance. Knowing the current location is a particularly important aspect of indoor navigation. Scene identification inside buildings without any Global Positioning System (GPS) is a challenging problem. We present a smartphone-based assistive technology which uses computer vision techniques to identify the indoor location from a scene image. The aim of our work is to guide blind people during navigation inside buildings where GPS is not effective. Our current system uses a client-server model: the user takes a photo from their current location, the image is sent to the server, the location is sent back to the mobile device, and a voice message conveys the location information.
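
The abstract outlines a simple client-server architecture. As a hedged sketch of the server side only (the system's actual protocol and recognition code are not described here), a minimal Flask endpoint could accept the uploaded photo and return a location label; the route name and the lookup function are placeholders.

from flask import Flask, request, jsonify
import cv2
import numpy as np

app = Flask(__name__)

def recognise_location(image):
    # Placeholder for the actual recognition step (e.g. matching the photo
    # against a database of labelled indoor scene images).
    return "unknown location"

@app.route("/localize", methods=["POST"])
def localize():
    # The phone posts a photo of the current scene; the server replies with a
    # location label that the phone can read out as a voice message.
    buf = np.frombuffer(request.files["image"].read(), dtype=np.uint8)
    image = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
    return jsonify({"location": recognise_location(image)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)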


image and vision computing new zealand | 2013

3D versus 2D based indoor image matching analysis on images from low cost mobile devices

Nabeel Younus Khan; Brendan McCane; Steven Mills

Because of the increasing popularity of camera-equipped mobile devices, image matching techniques offer a potential solution for indoor localisation problems. However, image matching is challenging indoors because different indoor locations can look very similar. In this paper, we compare two image-based localisation approaches on realistic datasets that include images from cameras of varying quality. The first approach is based on 3D matching and the second on 2D matching. The comparison shows that 3D image matching depends crucially on the quality of the camera, with correct image matching accuracy ranging from 62% to 92% depending on the dataset. In contrast, the matching accuracy of 2D image matching is consistent across all cameras and ranges from 80% to 95%. In terms of computational efficiency, the 2D method is five times more efficient, but both methods are fast enough for many applications. We further investigate the performance of the 2D approach on four realistic indoor datasets with 50 indoor locations, such as corridors, halls, atriums and offices. Four out of five test sets have a correct acceptance rate greater than 85%, showing that image-based methods are viable for indoor localisation applications.


asian conference on pattern recognition | 2013

Analysis of Verification Methods for Indoor Image Matching

Nabeel Younus Khan; Brendan McCane

This paper reports on experiments for indoor image-based location recognition. The basic method makes use of three stages: visual bag-of-words for ranking, a voting method, and a final verification method if the voting method does not produce a consensus. Such a tiered approach is necessary when there are several visually similar locations in the image database, as often occurs in office buildings. Three experiments are reported here. In the first, three common term-weighting schemes are compared: ntf, ntfidf and BM25. Surprisingly, ntf, the simplest scheme, is shown to be as accurate as BM25, and both are better than ntfidf. These results are surprising because BM25 has been experimentally shown to be one of the best weighting schemes for document information retrieval over many years, and ntfidf has been the preferred weighting scheme for visual BoW in most other image retrieval work. In the second experiment, two verification methods are compared: one based on the fundamental matrix, and one based on a simpler homography computation. Again, surprisingly, the simpler and more efficient homography-based method is shown to perform as well as the fundamental matrix method, despite the fact that the fundamental matrix method is more physically plausible. The overall system achieves a recognition rate of approximately 80% with a wrong match rate of only 2% (no decision on 18%) on a very challenging office building data set. In the third experiment, the system is evaluated on the same office building dataset with more than one query image. A significant improvement is observed in localisation performance and the overall system achieves a recognition rate of 96% with only two wrong image matches.
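
A sketch of the homography-based verification stage, assuming putative keypoint correspondences between the query and a candidate database image are already available; the RANSAC threshold and inlier count shown are arbitrary choices, not the paper's parameters.

import cv2
import numpy as np

def homography_verified(pts_query, pts_candidate, min_inliers=15):
    # pts_query, pts_candidate: Nx2 arrays of matched keypoint coordinates.
    # Fit a homography with RANSAC and accept the candidate location only if
    # enough correspondences are geometrically consistent inliers.
    if len(pts_query) < 4:
        return False
    H, inlier_mask = cv2.findHomography(
        np.float32(pts_query), np.float32(pts_candidate),
        cv2.RANSAC, ransacReprojThreshold=3.0)
    return H is not None and int(inlier_mask.sum()) >= min_inliers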


digital image computing: techniques and applications | 2007

Optimization of Core Point Detection in Fingerprints

Nabeel Younus Khan; M. Younus Javed; Naveed Sarfraz Khattak; Umer Munir; Yongjun Chang


image and vision computing new zealand | 2011

Homography based Visual Bag of Word Model for Scene Matching in Indoor Environments

Nabeel Younus Khan; Brendan McCane; Geoff Wyvill

Collaboration


Dive into Nabeel Younus Khan's collaborations.
