Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Ehtesham Hassan is active.

Publication


Featured research published by Ehtesham Hassan.


international symposium on mixed and augmented reality | 2016

An AR Inspection Framework: Feasibility Study with Multiple AR Devices

Perla Ramakrishna; Ehtesham Hassan; Ramya Hebbalaguppe; Monika Sharma; Gaurav Gupta; Lovekesh Vig; Geetika Sharma; Gautam Shroff

We present an Augmented Reality (AR) based re-configurable framework for inspection that can be utilized in cross-domain applications such as maintenance and repair assistance in industrial inspection, vitals recording in the health sector, and automotive/avionics inspection, amongst others. The novelty of the inspection framework compared to existing counterparts is threefold. First, the inspection checklist can be prioritized by detecting the parts in the inspector's field of view using deep learning. Second, the back-end of the framework is easily configurable for different applications, where instructions and assistance manuals can be directly imported and visually integrated with the inspection type. Third, we conduct a feasibility study of inspection modes (Google Glass, Google Cardboard, paper-based, and tablet), measuring inspection turnaround time, ease of use, and usefulness on a 3D-printer inspection use-case.


international symposium on mixed and augmented reality | 2016

GestAR: Real Time Gesture Interaction for AR with Egocentric View

Srinidhi Hegde; Ramakrishna Perla; Ramya Hebbalaguppe; Ehtesham Hassan

The existing, sophisticated AR gadgets in the market today are mostly exorbitantly priced. This limits their usage in academic research institutes and their reach to the mass market in general. Among the most popular and frugal head mounts, Google Cardboard (GC) and Wearality are video-see-through devices that can provide immersive AR and VR experiences with a smartphone: stereo-rendering of the camera feed with overlaid information on the smartphone lets us experience AR with GC. These frugal devices have limited user-input capability, restricting user interactions with GC to head tilting, a magnetic trigger, and a conductive lever. Our paper proposes a reliable and intuitive gesture-based interaction technique for these frugal devices. The hand-gesture recognition module employs a Gaussian Mixture Model (GMM) over human skin pixels and tracks the segmented foreground using optical flow to detect the hand-swipe direction that triggers the relevant event. Real-time performance is achieved by implementing the hand-gesture recognition module on the smartphone itself, thus reducing latency. We augment real-time hand gestures as a new GC interface and evaluate it in terms of subjective metrics against the interactions already available in GC.
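The final step of such a pipeline, mapping the tracked foreground's dominant optical-flow displacement to one of four swipe events, can be sketched as follows; the motion threshold here is a hypothetical noise floor, not a value from the paper:

```python
import numpy as np

def swipe_direction(flow, min_motion=1.0):
    """Classify the mean displacement of a dense optical-flow field
    (H x W x 2 array of per-pixel dx, dy) into a swipe event.
    min_motion is an illustrative noise threshold."""
    dx, dy = float(np.mean(flow[..., 0])), float(np.mean(flow[..., 1]))
    if max(abs(dx), abs(dy)) < min_motion:
        return None                      # too little motion: no event
    if abs(dx) >= abs(dy):               # horizontal motion dominates
        return "Right" if dx > 0 else "Left"
    return "Down" if dy > 0 else "Up"    # image y-axis points down
```

In practice the flow field would come from tracking only the skin-segmented hand region, so background motion does not vote.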


computer vision and pattern recognition | 2013

A density based method for automatic hairstyle discovery and recognition

Jyotikrishna Dass; Monika Sharma; Ehtesham Hassan; Hiranmay Ghosh

This paper presents a novel method for the discovery and recognition of hairstyles in a collection of colored face images. We propose the use of agglomerative clustering for automatic discovery of distinct hairstyles. Our method automates the generation of hair, background, and face-skin probability masks for each hairstyle category without requiring manual annotation. Density estimates based on these probability masks are subsequently applied to recognize the hairstyle in a new face image. The proposed methodology has been verified on a synthetic dataset of approximately a thousand images randomly collected from the Internet.
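Agglomerative clustering, the discovery step above, repeatedly merges the two closest clusters until the desired number remains. A minimal average-linkage version on toy 2-D points (the paper clusters hair-region descriptors, not raw points):

```python
import numpy as np

def agglomerative(points, n_clusters):
    """Naive average-linkage agglomerative clustering; returns lists
    of point indices. O(n^3), for illustration only."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise distance between the two clusters
                d = np.mean([np.linalg.norm(points[i] - points[j])
                             for i in clusters[a] for j in clusters[b]])
                if d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a] += clusters.pop(b)   # merge the closest pair
    return clusters
```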


workshop on applications of computer vision | 2017

Robust Hand Gestural Interaction for Smartphone Based AR/VR Applications

Shreyash Mohatta; Ramakrishna Perla; Gaurav Gupta; Ehtesham Hassan; Ramya Hebbalaguppe

The future of user interfaces will be dominated by hand gestures. In this paper, we explore intuitive hand-gesture-based interaction for smartphones with limited computational capability. To this end, we present an efficient algorithm for gesture recognition from the First Person View (FPV), which recognizes a four-swipe model (left, right, up, and down) on smartphones through a single monocular camera. This can be used with frugal AR/VR devices such as Google Cardboard and Wearality in building AR/VR based automation systems for large-scale deployments, providing a touch-less interface with real-time performance. We take into account multiple cues, including palm color, hand-contour segmentation, and motion tracking, which together deal effectively with the FPV constraints imposed by a wearable. We also compare swipe detection against existing methods under the same constraints, and demonstrate that our method outperforms them in both gesture-recognition accuracy and computational time.


workshop on applications of computer vision | 2017

Telecom Inventory Management via Object Recognition and Localisation on Google Street View Images

Ramya Hebbalaguppe; Gaurav Garg; Ehtesham Hassan; Hiranmay Ghosh; Ankit Verma

We present a novel method to update telecommunication-infrastructure asset records using Google Street View (GSV) images. The problem is formulated as an object recognition task, followed by triangulation to estimate object coordinates from sensor-plane coordinates. To this end, we have explored state-of-the-art object recognition techniques from both feature engineering and deep learning: HOG descriptors with SVM, the Deformable Parts Model (DPM), and deep learning (DL) using Faster R-CNN. While HOG+SVM has proved to be a robust human detector, DPM, which is based on probabilistic graphical models, and DL, a non-linear classifier, have proved their versatility across different object recognition problems. Asset recognition from street-view images, however, poses unique challenges: assets may be installed on the ground in various poses and orientations, with occlusions, camouflaged against the background, and in some cases with small inter-class variation. We present the comparative performance of these techniques on a specific telecom-equipment use-case, targeting the highest precision and recall. The blocks of the proposed pipeline are detailed and compared to traditional inventory-management methods.
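The triangulation step can be sketched in 2-D ground coordinates: two GSV capture positions and the bearing at which the detector sees the asset define two rays, and the asset sits at their intersection. This is an illustrative helper with assumed conventions (bearings in radians from the x-axis), not the paper's exact formulation:

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect the rays p1 + t*(cos b1, sin b1) and
    p2 + s*(cos b2, sin b2) by solving a 2x2 linear system;
    returns the estimated (x, y) or None for parallel bearings."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None                       # parallel rays: no fix
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```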


international conference on pattern recognition | 2014

Design of Multi-kernel Distance Based Hashing with Multiple Objectives for Image Indexing

Vaibhav Gaur; Ehtesham Hassan; Santanu Chaudhury

Approximate nearest neighbor (ANN) search provides a computationally viable option for retrieval from large document collections. Hashing-based techniques are widely regarded as the most efficient methods for ANN-based retrieval, and it has been established that combining multiple features in a multiple-kernel-learning setup can significantly improve the effectiveness of hash codes. This paper presents a novel image indexing method based on multiple kernel learning, which combines multiple features through combinatorial optimization of time and search complexity. The framework is built upon distance-based hashing, where the existing kernel-distance-based hashing formulation adopts a linear combination of kernels tuned for optimum search accuracy. In this direction, we propose a novel multi-objective formulation that optimizes search time as well as accuracy, which is subsequently solved in a genetic-algorithm framework to obtain the Pareto-optimal solutions. We have performed extensive experimental evaluation of the proposed concepts on different datasets, showing improvement over existing methods.
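Distance-based hashing, the building block named above, derives each hash bit purely from pairwise distances: an object is projected onto the line through two pivot objects and the projection is thresholded. A simplified Euclidean sketch (the paper uses kernel distances and learns the thresholds; the pivots and threshold below are arbitrary):

```python
import numpy as np

def dbh_projection(x, p1, p2):
    """Position of x along the line through pivots p1 and p2, computed
    only from pairwise distances, so any metric or kernel distance
    could be substituted for the Euclidean one used here."""
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return (d(x, p1) ** 2 + d(p1, p2) ** 2 - d(x, p2) ** 2) / (2 * d(p1, p2))

def dbh_bit(x, p1, p2, t):
    """One hash bit: threshold the projection at t."""
    return int(dbh_projection(x, p1, p2) >= t)
```

Concatenating many such bits, each with its own pivot pair, yields the hash code whose length and accuracy the multi-objective formulation trades off.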


Archive | 2016

Searching for Logical Patterns in Multi-sensor Data from the Industrial Internet

Mohit Yadav; Ehtesham Hassan; Gautam Shroff; Puneet Agarwal; Ashwin Srinivasan

Engineers analysing large volumes of multi-sensor data from vehicles, engines, etc. often need to search for events such as "hard stops", "lane passing", or "engine overload". Apart from such visual analysis for engineering purposes, manufacturers also need to count occurrences of these events via on-board monitoring sensors that ideally rely on classifiers; searching for patterns in available data is also useful for preparing training sets in this context. In this paper, we propose a method for searching for multi-sensor patterns in large volumes of sensor data using qualitative symbols (QSIM [1]) such as "steady", "increasing", and "decreasing". Patterns can include symbol sequences for multiple sensors, as well as approximate duration, level, or slope values. Logical symbols are extracted from the multi-sensor time-series and registered in a trie-based index structure. We demonstrate the effectiveness of our retrieval and ranking technique on real-life vehicular sensor data in visual-analytics as well as classifier training and detection scenarios.
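The symbol-extraction step can be sketched as thresholding successive differences and collapsing runs; the noise threshold below is a hypothetical parameter, and the real system additionally records duration, level, and slope per segment:

```python
def qualitative_symbols(series, eps=0.1):
    """Convert a numeric time series into QSIM-style qualitative
    symbols ('inc', 'dec', 'std') and collapse consecutive repeats.
    eps is an illustrative noise threshold on the first difference."""
    syms = []
    for a, b in zip(series, series[1:]):
        d = b - a
        s = "std" if abs(d) <= eps else ("inc" if d > 0 else "dec")
        if not syms or syms[-1] != s:     # run-length collapse
            syms.append(s)
    return syms
```

The resulting symbol sequences, one per sensor, are what get registered in the trie-based index for retrieval.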


international conference on machine vision | 2015

Robust visual analysis for planogram compliance problem

Anurag Saran; Ehtesham Hassan; Avinash Kumar Maurya

This paper presents a novel visual-analysis-based framework for automated planogram compliance checks in retail stores. Our framework provides an efficient and convenient solution for ensuring planogram compliance by real-time analysis of shelf images acquired in a freehand manner. We present a novel application of the Hausdorff metric for occupancy computation in product-shelf images. Subsequently, we present a robust solution for product counting, which applies a robust row-detection algorithm and exploits texture and color features for accurate counting. In this context, our system addresses the most general scenario of multiple varieties within a single product type. The empirical validation of our framework is demonstrated on a range of real-life images from stores located across different geographies, where it has achieved satisfactory and encouraging results.
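The Hausdorff metric mentioned above measures how far two point sets are from each other: the larger of the two directed worst-case nearest-neighbour distances. A minimal sketch on 2-D point sets (how the paper derives the point sets from shelf images is not reproduced here):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets given as
    (n, 2) and (m, 2) arrays, via the full pairwise-distance matrix."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # worst nearest-neighbour distance in each direction
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```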


arXiv: Learning | 2015

Multi-sensor event detection using shape histograms

Ehtesham Hassan; Gautam Shroff; Puneet Agarwal

Vehicular sensor data consists of multiple time-series arising from a number of sensors. Using such multi-sensor data, we would like to detect occurrences of specific events that vehicles encounter, e.g., corresponding to particular maneuvers that a vehicle makes or conditions that it encounters. Events are characterized by similar waveform patterns reappearing within one or more sensors. Further, such patterns can be of variable duration. In this paper, we propose a method for detecting such events in time-series data using a novel feature descriptor motivated by similar ideas in image processing. We define the shape histogram: a constant-dimension descriptor that nevertheless captures patterns of variable duration. We demonstrate the efficacy of using shape histograms as features to detect events in an SVM-based, multi-sensor, supervised learning scenario, i.e., multiple time-series are used to detect an event. We present results on real-life vehicular sensor data and show that our technique performs better than available pattern-detection implementations on our data, and that it can also be used to combine features from multiple sensors, resulting in better accuracy than using any single sensor. Since previous work on pattern detection in time-series has been in the single-series context, we also present results using our technique on multiple standard time-series datasets and show that it is the most versatile in terms of how it ranks compared to other published results.
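The key property of such a descriptor, a fixed dimension regardless of window length, can be illustrated by histogramming the successive slopes of a window; the bin count and range below are hypothetical choices, and the paper's exact binning may differ:

```python
import numpy as np

def shape_histogram(series, bins=8, lo=-2.0, hi=2.0):
    """Fixed-dimension descriptor for a variable-length window: an
    L1-normalised histogram of successive slopes, clipped to [lo, hi]."""
    slopes = np.diff(np.asarray(series, dtype=float))
    hist, _ = np.histogram(np.clip(slopes, lo, hi), bins=bins, range=(lo, hi))
    return hist / max(hist.sum(), 1)     # guard against empty windows
```

Because every window maps to the same `bins`-dimensional vector, windows of different durations can feed a single SVM, which is what enables the multi-sensor supervised setup described above.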


international symposium on mixed and augmented reality | 2016

InspectAR: An Augmented Reality Inspection Framework for Industry

Ramakrishna Perla; Gaurav Gupta; Ramya Hebbalaguppe; Ehtesham Hassan

With the advancement in camera technologies and data-streaming protocols, AR-based applications are proving to be an important aid for inspection, training, and supervision tasks in various operations, including the automotive industry and education. We demonstrate an AR-based re-configurable inspection framework that can be utilized in cross-domain applications such as maintenance and repair assistance in industrial inspection and automotive/avionics inspection, amongst others. A deep-learning component accurately detects parts in the inspector's Field-of-View (FoV), and the corresponding inspection checklist can be prioritized based on the detection results. The back-end of the framework is easily configurable for different applications, where instructions can be directly imported and visually integrated with the inspection type. Accurate recording of inspection status is provided through evidence capture of images, notes, and videos. Our current framework supports all Android-based devices and will be demonstrated on Google Glass, Google Cardboard with a smartphone, and a tablet, using a 3D-printer inspection use-case.

Collaboration


Dive into Ehtesham Hassan's collaborations.

Top Co-Authors

Gautam Shroff, Tata Consultancy Services
Puneet Agarwal, Tata Consultancy Services
Monika Sharma, Tata Consultancy Services
Swagat Kumar, Tata Consultancy Services
Ankit Verma, Jawaharlal Nehru University
Ashwin Srinivasan, Birla Institute of Technology and Science
Hiranmay Ghosh, Tata Consultancy Services
Lovekesh Vig, Jawaharlal Nehru University