Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Imran N. Junejo is active.

Publications


Featured research published by Imran N. Junejo.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

View-Independent Action Recognition from Temporal Self-Similarities

Imran N. Junejo; Emilie Dexter; Ivan Laptev; Patrick Pérez

This paper addresses recognition of human actions under view changes. We explore self-similarities of action sequences over time and observe the striking stability of such measures across views. Building upon this key observation, we develop an action descriptor that captures the structure of temporal similarities and dissimilarities within an action sequence. Despite this temporal self-similarity descriptor not being strictly view-invariant, we provide intuition and experimental validation demonstrating its high stability under view changes. Self-similarity descriptors are also shown to be stable under performance variations within a class of actions when individual speed fluctuations are ignored. If required, such fluctuations between two different instances of the same action class can be explicitly recovered with dynamic time warping, as will be demonstrated, to achieve cross-view action synchronization. More central to the current work, temporal ordering of local self-similarity descriptors can simply be ignored within a bag-of-features type of approach. Sufficient action discrimination is still retained in this way to build a view-independent action recognition system. Interestingly, self-similarities computed from different image features possess similar properties and can be used in a complementary fashion. Our method is simple and requires neither structure recovery nor multiview correspondence estimation. Instead, it relies on weak geometric properties and combines them with machine learning for efficient cross-view action recognition. The method is validated on three public data sets. It has similar or superior performance compared to related methods and it performs well even in extreme conditions, such as when recognizing actions from top views while using side views only for training.
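
The central quantity in this work is a temporal self-similarity matrix (SSM) built from per-frame features of one action sequence. The snippet below is a minimal, hypothetical sketch of that computation using Euclidean distances between generic per-frame feature vectors; it is not the authors' implementation, which additionally extracts local descriptors from the SSM and feeds them to a bag-of-features classifier.

```python
import numpy as np

def temporal_ssm(features):
    """Temporal self-similarity matrix: pairwise Euclidean distances
    between per-frame feature vectors of one action sequence."""
    X = np.asarray(features, dtype=float)            # shape (T, D): T frames, D-dim features
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # squared pairwise distances
    return np.sqrt(np.maximum(d2, 0.0))              # (T, T) SSM with zero diagonal

# Toy usage: 50 frames of hypothetical per-frame features.
rng = np.random.default_rng(0)
ssm = temporal_ssm(rng.normal(size=(50, 26)))
print(ssm.shape)  # (50, 50)
```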


International Conference on Pattern Recognition | 2004

Multi feature path modeling for video surveillance

Imran N. Junejo; Omar Javed; Mubarak Shah

This paper proposes a novel method for detecting nonconforming trajectories of objects as they pass through a scene. Existing methods mostly use spatial features to solve this problem. Using only spatial information is not adequate; we need to take into consideration velocity and curvature information of a trajectory along with the spatial information for an elegant solution. Our method has the ability to distinguish between objects traversing spatially dissimilar paths, or objects traversing spatially proximal paths but having different spatio-temporal characteristics. The method consists of a path building training phase and a testing phase. During the training phase, we use graph-cuts for clustering the trajectories, where the Hausdorff distance metric is used to calculate the edge weights. Each cluster represents a path. An envelope boundary and an average trajectory are computed for each path. During the testing phase we use three features for trajectory matching in a hierarchical fashion. The first feature measures the spatial similarity while the second feature compares the velocity characteristics of trajectories. Finally, the curvature features capture discontinuities in velocity, acceleration, and position of the trajectory. We use real-world pedestrian sequences to demonstrate the practicality of our method.
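
A sketch of the kind of trajectory distance used for the graph edge weights, assuming each trajectory is given as an array of 2-D image points; the graph-cuts clustering itself and the velocity/curvature features are not shown, and the sample trajectories are hypothetical.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two trajectories,
    each an (N, 2) array of image points."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all point-to-point distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Edge weight between two hypothetical, roughly parallel trajectories.
t1 = np.column_stack([np.linspace(0, 10, 20), np.zeros(20)])
t2 = np.column_stack([np.linspace(0, 10, 25), np.ones(25)])
print(hausdorff(t1, t2))  # ~1.0 for these nearly parallel paths
```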


European Conference on Computer Vision | 2008

Cross-View Action Recognition from Temporal Self-similarities

Imran N. Junejo; Emilie Dexter; Ivan Laptev; Patrick Pérez

This paper concerns the recognition of human actions under view changes. We explore self-similarities of action sequences over time and observe the striking stability of such measures across views. Building upon this key observation, we develop an action descriptor that captures the structure of temporal similarities and dissimilarities within an action sequence. Despite this descriptor not being strictly view-invariant, we provide intuition and experimental validation demonstrating the high stability of self-similarities under view changes. Self-similarity descriptors are also shown to be stable under action variations within a class as well as discriminative for action recognition. Interestingly, self-similarities computed from different image features possess similar properties and can be used in a complementary fashion. Our method is simple and requires neither structure recovery nor multi-view correspondence estimation. Instead, it relies on weak geometric properties and combines them with machine learning for efficient cross-view action recognition. The method is validated on three public datasets; it has similar or superior performance compared to related methods and performs well even in extreme conditions, such as recognizing actions from top views while using only side views for training.


International Conference on Computer Vision | 2007

Trajectory Rectification and Path Modeling for Video Surveillance

Imran N. Junejo; Hassan Foroosh

Path modeling for video surveillance is an active area of research. We address the issue of Euclidean path modeling in a single camera for activity monitoring in a multi-camera video surveillance system. The paper proposes (i) a novel linear solution to auto-calibrate any camera observing pedestrians and (ii) the use of these calibrated cameras to detect unusual object behavior. During the unsupervised training phase, after auto-calibrating a camera and metric rectifying the input trajectories, the input sequences are registered to satellite imagery and prototype path models are constructed. This allows us to estimate metric information directly from the video sequences. During the testing phase, using our simple yet efficient similarity measures, we seek a relation between the input trajectories derived from a sequence and the prototype path models. We test the proposed method on synthetic as well as real-world pedestrian sequences.
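
A minimal sketch of the metric-rectification step applied to an input trajectory, assuming a 3x3 rectifying homography H has already been recovered (in the paper it follows from pedestrian-based auto-calibration); registration to satellite imagery and construction of the path models are not shown, and the numbers below are hypothetical.

```python
import numpy as np

def rectify_points(H, pts):
    """Apply a metric-rectifying homography H (3x3) to 2-D trajectory points."""
    P = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])  # to homogeneous coords
    Q = P @ H.T
    return Q[:, :2] / Q[:, 2:3]                                      # back to inhomogeneous

# Hypothetical rectifying homography and image trajectory.
H = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.2, 0.0],
              [0.0, 0.001, 1.0]])
traj = np.array([[100.0, 200.0], [110.0, 205.0], [120.0, 212.0]])
print(rectify_points(H, traj))
```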


International Conference on Computer Vision | 2011

Action recognition using rank-1 approximation of Joint Self-Similarity Volume

Chuan Sun; Imran N. Junejo; Hassan Foroosh

In this paper, we make three main contributions in the area of action recognition: (i) We introduce the concept of Joint Self-Similarity Volume (Joint SSV) for modeling dynamical systems, and show that by using a new optimized rank-1 tensor approximation of Joint SSV one can obtain compact low-dimensional descriptors that very accurately preserve the dynamics of the original system, e.g. an action video sequence; (ii) The descriptor vectors derived from the optimized rank-1 approximation make it possible to recognize actions without explicitly aligning the action sequences of varying speed of execution or different frame rates; (iii) The method is generic and can be applied using different low-level features such as silhouettes, histogram of oriented gradients, etc. Hence, it does not necessarily require explicit tracking of features in the space-time volume. Our experimental results on three public datasets demonstrate that our method produces remarkably good results and outperforms all baseline methods.
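
A sketch of a rank-1 approximation of an order-3 volume (standing in for the Joint SSV) via standard alternating higher-order power iteration; the paper's optimized variant and its choice of low-level features are not reproduced, and the input tensor here is random toy data.

```python
import numpy as np

def rank1_tensor_approx(T, iters=50):
    """Rank-1 approximation lam * a (x) b (x) c of an order-3 tensor
    via alternating (higher-order) power iteration."""
    T = np.asarray(T, float)
    a, b, c = (np.ones(n) / np.sqrt(n) for n in T.shape)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)   # scale of the rank-1 term
    return lam, a, b, c

# Toy usage on a random volume standing in for a Joint SSV.
lam, a, b, c = rank1_tensor_approx(np.random.rand(30, 30, 40))
print(lam, a.shape, b.shape, c.shape)
```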


Advanced Video and Signal Based Surveillance | 2006

Robust Auto-Calibration from Pedestrians

Imran N. Junejo; Hassan Foroosh

The knowledge of camera intrinsic and extrinsic parameters is useful, as it allows us to make world measurements. Unfortunately, calibration information is rarely available in video surveillance systems and is difficult to obtain once the system is installed. Auto-calibrating cameras using moving objects (humans) has recently attracted a lot of interest; two such methods were proposed by Lv and Nevatia (2002) and by Krahnstoever and Mendonca (2005). The inherent difficulty of the problem lies in the noise that is generally present in the data. We propose a robust and general linear solution to the problem by adopting a formulation different from existing methods. The uniqueness of the formulation lies in recognizing two harmonic homologies present in the geometry obtained by observing pedestrians, and then using properties of these homologies to obtain linear constraints on the unknown camera parameters. Experiments on synthetic as well as real data are presented, indicating the practicality of the proposed system.
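
The sketch below does not reproduce the paper's homology-based formulation; it only illustrates the standard preliminary step of pedestrian-based calibration, namely estimating the vertical vanishing point as the least-squares intersection of head-foot lines. The head and foot detections are hypothetical.

```python
import numpy as np

def vertical_vanishing_point(heads, feet):
    """Least-squares intersection of head-foot lines of observed
    pedestrians, i.e. the vertical vanishing point (homogeneous)."""
    lines = []
    for h, f in zip(heads, feet):
        l = np.cross([h[0], h[1], 1.0], [f[0], f[1], 1.0])  # line through head and foot
        lines.append(l / np.linalg.norm(l))
    L = np.array(lines)
    # The vanishing point v minimizes ||L v||: smallest right singular vector.
    _, _, Vt = np.linalg.svd(L)
    return Vt[-1]

# Hypothetical head/foot detections of one person over three frames.
heads = [(310, 100), (355, 102), (400, 104)]
feet  = [(300, 380), (348, 382), (396, 384)]
print(vertical_vanishing_point(heads, feet))
```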


Systems, Man and Cybernetics | 2007

Autoconfiguration of a Dynamic Nonoverlapping Camera Network

Imran N. Junejo; Xiaochun Cao; Hassan Foroosh

In order to monitor sufficiently large areas of interest for surveillance or any event detection, we need to look beyond stationary cameras and employ an automatically configurable network of nonoverlapping cameras. These cameras need not have an overlapping field of view and should be allowed to move freely in space. Moreover, features like zooming in/out, readily available in security cameras these days, should be exploited in order to focus on any particular area of interest if needed. In this paper, a practical framework is proposed to self-calibrate dynamically moving and zooming cameras and determine their absolute and relative orientations, assuming that their relative position is known. A global linear solution is presented for self-calibrating each zooming/focusing camera in the network. After self-calibration, it is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction is sufficient to infer the dynamic network configuration. Our method generalizes previous work which considers restricted camera motions. Using minimal assumptions, we are able to successfully demonstrate promising results on synthetic, as well as on real data.
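
A sketch of the basic geometric fact the configuration step relies on: once a camera has been self-calibrated, an image vanishing point fixes a 3-D direction in the camera frame (d proportional to K^-1 v), from which the camera's tilt relative to the vertical follows. The intrinsics and vanishing point below are hypothetical, and the full network-configuration procedure of the paper is not shown.

```python
import numpy as np

def direction_from_vanishing_point(K, v):
    """3-D direction in the camera frame corresponding to an image
    vanishing point v, given the intrinsic matrix K."""
    d = np.linalg.solve(K, np.array([v[0], v[1], 1.0]))
    return d / np.linalg.norm(d)

# Hypothetical intrinsics of one self-calibrated camera and its
# vertical vanishing point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
d = direction_from_vanishing_point(K, (330.0, -1500.0))
print(d, np.degrees(np.arccos(abs(d[2]))))  # angle between vertical and optical axis
```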


IEEE Transactions on Image Processing | 2015

Exploring Sparseness and Self-Similarity for Action Recognition

Chuan Sun; Imran N. Junejo; Marshall Friend Tappen; Hassan Foroosh

We propose that the dynamics of an action in video data forms a sparse self-similar manifold in the space-time volume, which can be fully characterized by a linear rank decomposition. Inspired by the recurrence plot theory, we introduce the concept of Joint Self-Similarity Volume (Joint-SSV) to model this sparse action manifold, and hence propose a new optimized rank-1 tensor approximation of the Joint-SSV to obtain compact low-dimensional descriptors that very accurately characterize an action in a video sequence. We show that these descriptor vectors make it possible to recognize actions without explicitly aligning the videos in time in order to compensate for speed of execution or differences in video frame rates. Moreover, we show that the proposed method is generic, in the sense that it can be applied using different low-level features, such as silhouettes, tracked points, histogram of oriented gradients, and so forth. Therefore, our method does not necessarily require explicit tracking of features in the space-time volume. Our experimental results on five public data sets demonstrate that our method produces promising results and outperforms many baseline methods.
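
One simple way to assemble a self-similarity volume from several low-level feature channels is to stack one SSM per channel along a third axis; this is only an illustrative stand-in and not necessarily the paper's Joint-SSV construction. The resulting volume could then be compressed with a rank-1 tensor approximation such as the one sketched earlier. Features here are random placeholders.

```python
import numpy as np

def ssm(X):
    """Pairwise-distance self-similarity matrix for per-frame features X of shape (T, D)."""
    sq = np.sum(X ** 2, axis=1)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0))

def stacked_ssv(feature_channels):
    """Stack one SSM per feature channel into a (T, T, F) volume."""
    return np.stack([ssm(X) for X in feature_channels], axis=2)

# Toy usage: hypothetical silhouette and HOG features for the same 60 frames.
rng = np.random.default_rng(1)
vol = stacked_ssv([rng.normal(size=(60, 100)), rng.normal(size=(60, 36))])
print(vol.shape)  # (60, 60, 2)
```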


Computer Graphics Forum | 2011

Motion Retrieval Using Low-Rank Subspace Decomposition of Motion Volume

Chuan Sun; Imran N. Junejo; Hassan Foroosh

This paper proposes a novel framework that allows for flexible and efficient retrieval of motion capture data in huge databases. The method first converts an action sequence into a novel representation, i.e. the Self-Similarity Matrix (SSM), which is based on the notion of self-similarity. This conversion of the motion sequences into compact and low-rank subspace representations greatly reduces the spatiotemporal dimensionality of the sequences. The SSMs are then used to construct order-3 tensors, and we propose a low-rank decomposition scheme that allows for converting the motion sequence volumes into compact lower dimensional representations, without losing the nonlinear dynamics of the motion manifold. Thus, unlike existing linear dimensionality reduction methods that distort the motion manifold and lose very critical and discriminative components, the proposed method performs well even when inter-class differences are small or intra-class differences are large. In addition, the method allows for efficient retrieval and does not require time-alignment of the motion sequences. We evaluate the performance of our retrieval framework on the CMU mocap dataset under two experimental settings, both demonstrating promising retrieval rates.
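
The sketch below illustrates only the basic matrix-level idea behind this framework, under stated assumptions: the SSM of a mocap clip computed from flattened per-frame joint positions, plus a truncated-SVD low-rank compression of that SSM. The paper's actual scheme operates on order-3 tensors built from SSMs, and the mocap clip here is synthetic.

```python
import numpy as np

def mocap_ssm(joints):
    """SSM of a mocap clip given per-frame joint positions of shape (T, J, 3)."""
    X = np.asarray(joints, float).reshape(len(joints), -1)   # flatten joints per frame
    sq = np.sum(X ** 2, axis=1)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0))

def low_rank(M, r):
    """Rank-r approximation of a matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Synthetic clip: 80 frames, 20 hypothetical joints in 3-D (a smooth random walk).
clip = np.cumsum(np.random.randn(80, 20, 3) * 0.01, axis=0)
S = mocap_ssm(clip)
print(np.linalg.norm(S - low_rank(S, 5)) / np.linalg.norm(S))  # relative error at rank 5
```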


Image and Vision Computing | 2008

Euclidean path modeling for video surveillance

Imran N. Junejo; Hassan Foroosh

In this paper, we address the issue of Euclidean path modeling in a single camera for activity monitoring in a multi-camera video surveillance system. The method consists of a path building training phase and a testing phase. During the unsupervised training phase, after auto-calibrating a camera and thereafter metric rectifying the input trajectories, a weighted graph is constructed with trajectories represented by the nodes, and weights determined by a similarity measure. Normalized-cuts are recursively used to partition the graph into prototype paths. Each path, consisting of a partitioned group of trajectories, is represented by a path envelope and an average trajectory. For every prototype path, features such as spatial proximity, motion characteristics, curvature, and absolute world velocity are then recovered directly in the rectified images or by registering to aerial views. During the testing phase, using our simple yet efficient similarity measures for these features, we seek a relation between the trajectories of an incoming sequence and the prototype path models to identify anomalous and unusual behaviors. Real-world pedestrian sequences are used to evaluate the steps, and demonstrate the practicality of the proposed approach.
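
A minimal sketch of a single normalized-cuts step on a precomputed trajectory similarity matrix, using the sign of the Fiedler vector to bipartition the graph; the paper applies such cuts recursively to obtain prototype paths, and the trajectory features, path envelopes, and testing-phase measures are not shown. The affinity values are hypothetical.

```python
import numpy as np

def two_way_ncut(W):
    """Two-way normalized cut of a similarity graph with affinity matrix W:
    split by the sign of the second-smallest generalized eigenvector."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L_sym)
    fiedler = D_inv_sqrt @ vecs[:, 1]                       # back to the generalized problem
    return (fiedler > 0).astype(int)                        # cluster labels 0/1

# Hypothetical affinities between five trajectories forming two groups.
W = np.array([[1.0, 0.9, 0.8, 0.1, 0.1],
              [0.9, 1.0, 0.85, 0.1, 0.15],
              [0.8, 0.85, 1.0, 0.2, 0.1],
              [0.1, 0.1, 0.2, 1.0, 0.9],
              [0.1, 0.15, 0.1, 0.9, 1.0]])
print(two_way_ncut(W))  # e.g. [0 0 0 1 1] (labels may be swapped)
```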

Collaboration


An overview of Imran N. Junejo's collaborations.

Top Co-Authors

Hassan Foroosh, University of Central Florida
Adeel A. Bhutta, University of Central Florida
Chuan Sun, University of Central Florida
Xiaochun Cao, Chinese Academy of Sciences
Mubarak Shah, University of Central Florida
Nazim Ashraf, University of Central Florida