Mohib Ullah
Norwegian University of Science and Technology
Publications
Featured research published by Mohib Ullah.
Proceedings of SPIE | 2014
Habib Ullah; Mohib Ullah; Nicola Conci
In this paper we propose a novel approach to detect anomalies in crowded scenes. This is achieved by analyzing crowd behavior through corner features. For each corner feature we collect a set of motion features, which are used to train an MLP neural network during the training stage; the behavior of the crowd is then inferred on the test samples. Considering the difficulty of tracking individuals in dense crowds due to multiple occlusions and clutter, in this work we extract corner features and treat them as an approximate representation of the people's motion. The corner features are then advected over a temporal window through optical-flow tracking. Corner features match the motion of individuals well, and their consistency and accuracy are higher in both structured and unstructured crowded scenes compared to other detectors. In the current work, corner features are exploited to extract motion information, which serves as an input prior for training the neural network. The MLP neural network is subsequently used to highlight the dominant corner features that can reveal an anomaly in the crowded scene. The experimental evaluation is conducted on a set of benchmark video sequences commonly used for crowd motion analysis. In addition, we show that our approach outperforms a state-of-the-art technique from the literature.
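The per-corner motion features described above can be sketched as follows. This is a minimal illustration, not the paper's exact feature set: it assumes corner trajectories have already been advected over a temporal window (in the paper, via optical-flow tracking) and summarizes each track by its mean speed and mean direction; the function name and feature choice are illustrative.

```python
import numpy as np

def motion_features(tracks):
    """Per-corner motion features over a temporal window.

    tracks: (N, T, 2) array of N corner positions advected over T frames.
    Returns (N, 2): mean speed and mean motion direction per corner,
    a simple stand-in for the motion descriptors fed to the MLP.
    """
    disp = np.diff(tracks, axis=1)             # (N, T-1, 2) frame-to-frame displacement
    speed = np.linalg.norm(disp, axis=2)       # (N, T-1) per-step speed
    angle = np.arctan2(disp[..., 1], disp[..., 0])
    return np.stack([speed.mean(axis=1), angle.mean(axis=1)], axis=1)

# Toy example: one corner drifting steadily right, one moving erratically.
t = np.arange(5, dtype=float)
smooth = np.stack([t, np.zeros_like(t)], axis=1)
erratic = np.stack([t, np.array([0.0, 3.0, -3.0, 4.0, -4.0])], axis=1)
feats = motion_features(np.stack([smooth, erratic]))
```

An anomaly detector (the MLP in the paper) would then learn which combinations of such features are dominant in normal crowd motion.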
advanced video and signal based surveillance | 2016
Mohib Ullah; Faouzi Alaya Cheikh; Ali Shariq Imran
Multi-target tracking is one of the most challenging tasks in computer vision. Several complex techniques have been proposed in the literature to tackle the problem; their main idea is to find an optimal set of trajectories within a temporal window. The performance of such approaches is fairly good, but their computational complexity is too high to make them practical. In this paper, we propose a novel tracking-by-detection approach in a Bayesian filtering framework. The appearance of a target is modeled through a HOG descriptor, and the critical problem of target association is solved through combinatorial optimization. It is a simple yet very efficient approach, and experimental results show that it achieves state-of-the-art performance in real time.
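The target-association step can be illustrated with a tiny combinatorial search. This sketch is not the paper's solver: it finds the minimum-cost track-to-detection assignment by exhaustive enumeration, which is only viable for a handful of targets, whereas a practical system would use an efficient combinatorial method (e.g. the Hungarian algorithm). The cost entries stand in for appearance distances between HOG descriptors.

```python
import itertools
import numpy as np

def associate(cost):
    """Optimal track-to-detection association by exhaustive search.

    cost: (N, N) matrix, cost[i, j] = appearance distance between
    track i and detection j. Returns a list p with p[i] = detection
    assigned to track i, minimizing the total cost.
    """
    n = cost.shape[0]
    best = min(itertools.permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(best)

# Toy cost matrix: the diagonal pairs are clearly the best matches.
cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.8, 0.7, 0.3]])
assignment = associate(cost)   # track i -> detection assignment[i]
```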
international conference on image processing | 2016
Mohib Ullah; Habib Ullah; Nicola Conci; Francesco G. B. De Natale
In this paper we present a novel method for crowd behavior identification. In our method, the motion flow field is obtained from the video by computing the dense optical flow. A thermal diffusion process (TDP) is then exploited to increase the coherence of the motion flow. Approximating individuals by the moving particles, their interaction forces are computed using a modified variant of the social force model (M-SFM) to highlight potential particles of interest. Besides capturing the effect of neighboring individuals on each other, the M-SFM also takes into account crowd disorder, usually triggered by regions of high interaction. The experimental evaluation is conducted on a set of benchmark video sequences commonly used for crowd motion analysis, and the obtained results are compared against a state-of-the-art technique.
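The interaction forces between particles can be sketched with a classic Helbing-style social-force term. This is a minimal baseline, not the paper's M-SFM: each particle repels its neighbors along the line joining them, with a magnitude that decays exponentially in their distance; the M-SFM adds further terms (e.g. the crowd-disorder factor) not modeled here, and the constants A and B are illustrative.

```python
import numpy as np

def interaction_forces(pos, A=2.0, B=0.3):
    """Pairwise repulsive social forces between particles.

    pos: (N, 2) particle positions. Particle j pushes particle i
    along the unit vector from j to i with magnitude A * exp(-d/B),
    where d is their distance. Returns (N, 2) net force per particle.
    """
    n = pos.shape[0]
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            d = np.linalg.norm(diff)
            forces[i] += A * np.exp(-d / B) * diff / d  # repulsion from j
    return forces

# Two nearby particles repel each other strongly; a distant one is unaffected.
pos = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0]])
f = interaction_forces(pos)
```

Regions where these force magnitudes are large would correspond to the high-interaction regions the M-SFM highlights.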
Neurocomputing | 2018
Habib Ullah; Ahmed B. Altamimi; Muhammad Uzair; Mohib Ullah
We propose a novel Gaussian kernel based integration model (GKIM) for the detection and localization of anomalous entities in pedestrian flows. The GKIM integrates spatio-temporal features for efficient and robust motion representation, capturing the distinctive and meaningful information about the anomalous entities. We next propose a block-based detection framework that trains a recurrent conditional random field (R-CRF) on the GKIM features. The trained R-CRF model is then used to detect and localize the anomalous entities during the online testing stage. We conduct comprehensive experiments on three benchmark datasets and compare the performance of the proposed method with state-of-the-art anomalous entity detection methods. Our experiments show that the proposed GKIM outperforms the compared methods in terms of equal error rate (EER) and detection rate (DR) in both frame-level and pixel-level comparisons. The frame-level analysis detects the presence of an anomalous entity in a frame regardless of its location, whereas the pixel-level analysis localizes the anomalous entity in terms of its pixels.
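The idea of Gaussian-kernel integration of features within a block can be sketched as a kernel-weighted average. This is only an illustration of the general technique under assumed definitions: the function name, the isotropic Gaussian kernel, and the weighting scheme are ours, not the GKIM's exact formulation.

```python
import numpy as np

def gaussian_integrate(features, center, positions, sigma=1.0):
    """Gaussian-kernel weighted integration of features within a block.

    features: (N, D) spatio-temporal feature vectors, positions: (N, 2)
    pixel coordinates, center: (2,) block center. Each feature is
    weighted by a Gaussian of its squared distance to the block center,
    so nearby measurements dominate the block descriptor.
    """
    d2 = np.sum((positions - center) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w[:, None] * features).sum(axis=0) / w.sum()

# A feature measured at the block center dominates one measured far away.
feats = np.array([[1.0], [10.0]])
pos = np.array([[0.0, 0.0], [3.0, 0.0]])
desc = gaussian_integrate(feats, center=np.array([0.0, 0.0]), positions=pos)
```

A per-block descriptor of this kind is what a classifier such as the R-CRF would be trained on.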
Neural Computing and Applications | 2018
Habib Ullah; Mohib Ullah; Muhammad Uzair
We propose the hybrid social influence model (HSIM), a novel and automatic method for pedestrian motion segmentation. One of the major attractions of the HSIM is its capability to handle motion segmentation when the pedestrian flow is randomly distributed. In the proposed HSIM, motion information is first extracted from the input video through particle initialization and optical flow. The particles are then examined to keep only the significant, non-stationary ones. To detect consistent segments, the communal model (CM) is adopted, which models the influence of particles on each other. The CM infers influence from uncorrelated behaviors among particles and models the effect that particle interactions have on the spread of social behaviors. Finally, the detected segments are refined to eliminate the effects of oversegmentation. Extensive experiments are performed on four benchmark datasets, and the results are compared with two baseline and four state-of-the-art motion segmentation methods. The results show that HSIM achieves superior pedestrian motion segmentation and outperforms the compared methods in terms of both the Jaccard similarity metric and the F-score.
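The two evaluation metrics named above are standard and can be computed directly on binary segment masks. A minimal sketch (function name ours):

```python
import numpy as np

def jaccard_and_fscore(pred, gt):
    """Jaccard similarity and F-score between two binary segmentation
    masks — the two metrics the HSIM comparison reports."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = inter / union if union else 1.0
    prec = inter / pred.sum() if pred.sum() else 0.0
    rec = inter / gt.sum() if gt.sum() else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return jaccard, f

# Toy masks: prediction covers the ground truth plus one extra pixel.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 0]])
j, f = jaccard_and_fscore(pred, gt)
```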
Journal of Imaging | 2018
Mohib Ullah; Ahmed Kedir Mohammed; Faouzi Alaya Cheikh
Articulation modeling, feature extraction, and classification are the important components of pedestrian segmentation. Usually, these components are modeled independently from each other and then combined in a sequential way. However, this approach is prone to poor segmentation if any individual component is weakly designed. To cope with this problem, we propose a spatio-temporal convolutional neural network named PedNet which exploits temporal information for spatial segmentation. The backbone of PedNet is an encoder–decoder network that downsamples and then upsamples the feature maps. The input to the network is a set of three frames and the output is a binary mask of the segmented regions in the middle frame. Unlike classical deep models, where the convolution layers are followed by a fully connected layer for classification, PedNet is a Fully Convolutional Network (FCN). It is trained end-to-end, and the segmentation is achieved without the need for any pre- or post-processing. The main characteristic of PedNet is its unique design: it performs segmentation on a frame-by-frame basis but uses the temporal information from the previous and the future frame to segment the pedestrians in the current frame. Moreover, to combine the low-level features with the high-level semantic information learned by the deeper layers, we use long skip connections from the encoder to the decoder network and concatenate the output of the low-level layers with that of the higher-level layers. This approach helps obtain segmentation maps with sharp boundaries. To show the potential benefits of temporal information, we also visualize different layers of the network. The visualization shows that the network learns different information from the consecutive frames and then combines it optimally to segment the middle frame.
We evaluated our approach on eight challenging datasets where humans are involved in different activities with severe articulation (football, road crossing, surveillance). On the widely used CamVid dataset, commonly adopted for assessing segmentation performance, our approach is compared against seven state-of-the-art methods. The performance is reported in terms of precision/recall, F1, F2, and mIoU. The qualitative and quantitative results show that PedNet achieves promising results against state-of-the-art methods, with substantial improvement in terms of all the performance metrics.
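The F1 and F2 scores reported above are both instances of the standard F-beta measure, which can be computed from precision and recall as follows (a minimal sketch; function name ours):

```python
def f_beta(precision, recall, beta):
    """F-beta score. beta = 1 gives the harmonic mean of precision and
    recall (F1); beta = 2 weights recall more heavily (F2) — the two
    variants the PedNet evaluation reports alongside mIoU."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: with precision above recall, F2 drops below F1 because
# F2 penalizes the lower recall more.
f1 = f_beta(0.8, 0.6, beta=1)
f2 = f_beta(0.8, 0.6, beta=2)
```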
Neurocomputing | 2017
Habib Ullah; Muhammad Uzair; Mohib Ullah; Asif Khan; Ayaz Ahmad; Wilayat Khan
We propose a density-independent hydrodynamics model (DIHM), a novel and automatic method for coherency detection in crowded scenes. One of the major advantages of the DIHM is its capability to handle density that changes over time. Moreover, the DIHM avoids oversegmentation and thus achieves refined coherency detection. In the proposed DIHM, we first extract a motion flow field from the input video through particle initialization and dense optical flow. The particles of interest are then collected to retain only the most motile and informative particles. To represent each particle, we accumulate the contribution of each particle in a weighted form, based on a kernel function. Next, smoothed particle hydrodynamics (SPH) is adopted to detect coherent regions. Finally, the detected coherent regions are refined to remove the effects of oversegmentation. We perform extensive experiments on three benchmark datasets and compare the results with 10 state-of-the-art coherency detection methods. Our results show that DIHM achieves superior coherency detection and outperforms the compared methods in terms of pixel-level and coherent-region-level average particle error rates (PERs), average coherent number error (CNE), and F-score.
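The kernel-weighted accumulation of particle contributions that SPH builds on can be sketched as a density estimate. This is a generic SPH-style computation under assumed choices: the Gaussian smoothing kernel, the bandwidth h, and the function name are illustrative, not the DIHM's exact formulation.

```python
import numpy as np

def sph_density(positions, masses, h=1.0):
    """SPH-style density estimate at each particle.

    positions: (N, 2) particle locations, masses: (N,) contributions.
    Each particle's mass is accumulated in kernel-weighted form using
    a 2-D Gaussian smoothing kernel of bandwidth h. Returns (N,)
    density per particle; clusters of particles get high density.
    """
    d2 = np.sum((positions[:, None, :] - positions[None, :, :]) ** 2, axis=2)
    w = np.exp(-d2 / (2 * h ** 2)) / (2 * np.pi * h ** 2)  # Gaussian kernel
    return w @ masses

# Two nearby particles reinforce each other's density; an isolated
# particle only sees its own contribution.
pos = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0]])
rho = sph_density(pos, np.ones(3))
```

Coherent regions would then correspond to sets of particles with mutually high kernel-weighted contributions, independent of the overall crowd density.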
international conference on image processing | 2018
Mohib Ullah; Faouzi Alaya Cheikh
2018 Colour and Visual Computing Symposium (CVCS) | 2018
Mohib Ullah; Mohammed Ahmed Kedir; Faouzi Alaya Cheikh
international conference on image processing | 2017
Mohib Ullah; Ahmed Kedir Mohammed; Faouzi Alaya Cheikh; Zhaohui Wang