Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John S. Zelek is active.

Publication


Featured research published by John S. Zelek.


Computer Vision and Pattern Recognition | 2013

Statistical Textural Distinctiveness for Salient Region Detection in Natural Images

Christian Scharfenberger; Alexander Wong; Khalil Fergani; John S. Zelek; David A. Clausi

A novel statistical textural distinctiveness approach for robustly detecting salient regions in natural images is proposed. Rotational-invariant neighborhood-based textural representations are extracted and used to learn a set of representative texture atoms for defining a sparse texture model for the image. Based on the learnt sparse texture model, a weighted graphical model is constructed to characterize the statistical textural distinctiveness between all representative texture atom pairs. Finally, the saliency of each pixel in the image is computed based on the probability of occurrence of the representative texture atoms, their respective statistical textural distinctiveness based on the constructed graphical model, and general visual attentive constraints. Experimental results using a public natural image dataset and a variety of performance evaluation metrics show that the proposed approach provides interesting and promising results when compared to existing saliency detection methods.
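The pipeline above (texture representations, representative texture atoms, occurrence-weighted pairwise distinctiveness) can be sketched in a few lines. This is a toy illustration under simplifying assumptions, not the authors' implementation: raw pixel patches stand in for rotational-invariant representations, a crude k-means stands in for sparse texture learning, and the visual attentive constraints are omitted.

```python
import numpy as np

def texture_saliency(image, n_atoms=4, patch=3, seed=0):
    """Toy sketch of texture-atom saliency (not the paper's exact pipeline)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    r = patch // 2
    # 1. A raw patch vector stands in for each pixel's textural representation.
    feats = np.array([image[i - r:i + r + 1, j - r:j + r + 1].ravel()
                      for i in range(r, h - r) for j in range(r, w - r)], dtype=float)
    # 2. Crude k-means stands in for learning representative texture atoms.
    atoms = feats[rng.choice(len(feats), n_atoms, replace=False)].copy()
    for _ in range(10):
        labels = np.linalg.norm(feats[:, None] - atoms[None], axis=2).argmin(1)
        for k in range(n_atoms):
            if np.any(labels == k):
                atoms[k] = feats[labels == k].mean(axis=0)
    labels = np.linalg.norm(feats[:, None] - atoms[None], axis=2).argmin(1)
    # 3. Statistical distinctiveness of atom k: expected distance to the atom
    #    of a randomly drawn pixel (pairwise distances weighted by occurrence).
    pair = np.linalg.norm(atoms[:, None] - atoms[None], axis=2)
    prob = np.bincount(labels, minlength=n_atoms) / len(labels)
    score = pair @ prob
    sal = score[labels].reshape(h - 2 * r, w - 2 * r)
    return sal / (sal.max() + 1e-12)   # normalize to [0, 1]
```

On a flat image with a small textured insert, the common flat-texture atom has a low expected distance to a random pixel's atom, while the rare textured atoms have a high one, so the insert lights up as salient.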


Image and Vision Computing | 2011

Goal-based trajectory analysis for unusual behaviour detection in intelligent surveillance

Frederick Tung; John S. Zelek; David A. Clausi

In a typical surveillance installation, a human operator has to constantly monitor a large array of video feeds for suspicious behaviour. As the number of cameras increases, information overload makes manual surveillance increasingly difficult, compounding factors such as human fatigue and boredom. The objective of an intelligent vision-based surveillance system is to automate the monitoring and event detection components of surveillance, alerting the operator only when unusual behaviour or other events of interest are detected. While most traditional methods for trajectory-based unusual behaviour detection rely on low-level trajectory features such as flow vectors or control points, this paper builds upon a recently introduced approach that makes use of higher-level features of intentionality. Individuals in the scene are modelled as intentional agents, and unusual behaviour is detected by evaluating the explicability of the agent's trajectory with respect to known spatial goals. The proposed method extends the original goal-based approach in three ways: first, the spatial scene structure is learned in a training phase; second, a region transition model is learned to describe normal movement patterns between spatial regions; and third, classification of trajectories in progress is performed in a probabilistic framework using particle filtering. Experimental validation on three published third-party datasets demonstrates the validity of the proposed approach.
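As a rough illustration of the goal-based idea (much simpler than the paper's probabilistic particle-filter formulation), a trajectory can be scored by how well its motion is explained by progress toward some known spatial goal, and flagged as unusual when no goal explains it. The function names and threshold below are hypothetical:

```python
import numpy as np

def goal_explicability(traj, goals):
    """Score a trajectory by the best goal's explanation: the fraction of
    steps that reduce the distance to that goal (toy simplification)."""
    traj = np.asarray(traj, dtype=float)
    best = 0.0
    for g in np.asarray(goals, dtype=float):
        d = np.linalg.norm(traj - g, axis=1)   # distance to goal at each step
        progress = np.mean(np.diff(d) < 0)     # share of distance-reducing steps
        best = max(best, progress)
    return best

def is_unusual(traj, goals, threshold=0.7):
    """Flag a trajectory that no known spatial goal explains well enough."""
    return goal_explicability(traj, goals) < threshold
```

A pedestrian walking straight toward an exit scores 1.0, while a trajectory circling the scene scores near 0.5 against every goal and is flagged.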


Assistive Technology | 2011

Application of a Tactile Way-Finding Device to Facilitate Navigation in Persons With Dementia

Lawrence E. M. Grierson; John S. Zelek; Isabel Lam; Sandra E. Black; Heather Carnahan

Persons with dementias, such as Alzheimer's disease, have well-documented deficiencies in way-finding, which often render these individuals housebound and/or unable to perform daily activities without significant frustration. A wearable belt has recently been developed that may have the capability to facilitate navigation for this population. Through a series of four small, vibrating motors that are adjusted to the cardinal positions of front, back, right, and left, the belt provides wearers with a tactile signal indicating the direction to their destination. In this experiment, the applicability of the way-finding signals to persons with dementia was assessed. To do so, participants walked a series of routes through the corridors of a hospital while wearing the belt. The results suggest the way-finding belt has potential as a navigation aid for individuals with dementia. The participants displayed a few deficiencies in attending to the directional signals that led to way-finding errors in which the signal was ignored and the intended turn not made. The article concludes with recommendations that the system of signal delivery be modified in a way that captures and directs the wearer's focus more prominently to the vibrotactile stimulus.
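The belt's four cardinal motors suggest a simple mapping from bearing to vibration site. The helper below is hypothetical (the article does not specify the belt's controller); it just quantizes the wearer-relative bearing to the nearest of the four motors:

```python
def motor_for_heading(wearer_deg, target_deg):
    """Choose which belt motor (front/right/back/left) to vibrate, given the
    wearer's facing direction and the bearing to the destination, both in
    degrees clockwise from north. Hypothetical helper, not from the paper."""
    rel = (target_deg - wearer_deg) % 360   # bearing relative to the wearer
    # quantize the relative bearing to the nearest cardinal motor
    return ["front", "right", "back", "left"][round(rel / 90) % 4]
```

For example, a wearer facing east (90) with a destination due north (0) would feel the left motor vibrate.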


Journal of Multimedia | 2007

Robust Face Recognition through Local Graph Matching

Ehsan Fazl-Ersi; John S. Zelek; John K. Tsotsos

A novel face recognition method is proposed, in which face images are represented by a set of local labeled graphs, each containing information about the appearance and geometry of a 3-tuple of face feature points, extracted using the Local Feature Analysis (LFA) technique. Our method automatically learns a model set and builds a graph space for each individual. A two-stage method for optimal matching between the graphs extracted from a probe image and the trained model graphs is proposed. The recognition of each probe face image is performed by assigning it to the trained individual with the maximum number of references. Our approach achieves a perfect result on the ORL face set and an accuracy rate of 98.4% on the FERET face set, which shows the superiority of our method over all considered state-of-the-art methods.
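A stripped-down sketch of the geometric half of this idea: describe each 3-tuple of feature points by its sorted side lengths (invariant to rotation and translation) and count probe triangles that match some model triangle. The paper's labeled graphs also carry appearance information, which is omitted here; the names and tolerance are illustrative.

```python
import numpy as np
from itertools import combinations

def triple_graphs(points):
    """Describe every 3-tuple of feature points by its sorted side lengths,
    a rotation- and translation-invariant geometric signature."""
    descs = []
    for a, b, c in combinations(points, 3):
        sides = sorted([np.linalg.norm(a - b),
                        np.linalg.norm(b - c),
                        np.linalg.norm(a - c)])
        descs.append(sides)
    return np.array(descs)

def match_score(probe_pts, model_pts, tol=1e-6):
    """Count probe triangles whose geometry matches some model triangle."""
    probe, model = triple_graphs(probe_pts), triple_graphs(model_pts)
    return sum(1 for d in probe
               if np.min(np.linalg.norm(model - d, axis=1)) < tol)
```

Assigning a probe to the individual with the highest count mirrors the paper's "maximum number of references" decision rule, minus the appearance term.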


British Machine Vision Conference | 2011

Improved Spatio-temporal Salient Feature Detection for Action Recognition

Amir Hossein Shabani; David A. Clausi; John S. Zelek

Spatio-temporal salient features can localize the local motion events and are used to represent video sequences for many computer vision tasks such as action recognition. The robust detection of these features under geometric variations such as affine transformation and view/scale changes is however an open problem. Existing methods use the same filter for both time and space and hence, perform an isotropic temporal filtering. A novel anisotropic temporal filter for better spatio-temporal feature detection is developed. The effect of symmetry and causality of the video filtering is investigated. Based on the positive results of precision and reproducibility tests, we propose the use of temporally asymmetric filtering for robust motion feature detection and action recognition.
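The symmetric-versus-asymmetric distinction can be seen with two 1-D temporal kernels: a symmetric Gaussian "leaks" a response before a motion event because it weights future frames, while a causal asymmetric kernel (a truncated exponential here, as a simple stand-in for the paper's filter) responds only from the event onward.

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=3):
    """Symmetric temporal kernel: weights past and future frames equally."""
    t = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def causal_exp_kernel(tau=2.0, length=7):
    """Asymmetric (causal) temporal kernel: weights past frames only."""
    t = np.arange(length, dtype=float)
    k = np.exp(-t / tau)
    return k / k.sum()

def filter_symmetric(signal, k):
    return np.convolve(signal, k, mode="same")                # uses future frames

def filter_causal(signal, k):
    return np.convolve(signal, k, mode="full")[:len(signal)]  # past frames only
```

On a step signal (an "event" at frame 10), the symmetric output is already nonzero at frame 9, whereas the causal output stays exactly zero until the event arrives.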


Journal of Field Robotics | 2014

Mapping, Planning, and Sample Detection Strategies for Autonomous Exploration

Arun Das; Michael Diu; Neil Mathew; Christian Scharfenberger; James Servos; Alexander Wong; John S. Zelek; David A. Clausi; Steven Lake Waslander

This paper presents algorithmic advances and field trial results for autonomous exploration and proposes a solution to perform simultaneous localization and mapping (SLAM), complete coverage, and object detection without relying on GPS or magnetometer data. We demonstrate an integrated approach to the exploration problem, and we make specific contributions in terms of mapping, planning, and sample detection strategies that run in real-time on our custom platform. Field tests demonstrate reliable performance for each of these three main components of the system individually, and high-fidelity simulation based on recorded data playback demonstrates the viability of the complete solution as applied to the 2013 NASA Sample Return Robot Challenge.


Computer Vision and Pattern Recognition | 2006

Tree Trunks as Landmarks for Outdoor Vision SLAM

Daniel C. Asmar; John S. Zelek; Samer M. Abdallah

Simultaneous Localization and Mapping (SLAM) of robots is the process of building a map of the robot's milieu, while simultaneously localizing the robot inside that map. Cameras have recently been proposed, as a replacement for laser range finders, for the purpose of detecting and localizing landmarks around the navigating robot. Vision SLAM is either Interest Point (IP) based, where landmarks are image saliencies, or object-based, where real objects are used as landmarks. The contribution of this paper is two-pronged: first, it details an approach based on Perceptual Organization (PO) to detect and track trees in a sequence of images, thereby promoting the use of a camera as a viable exteroceptive sensor for object-based SLAM; second, it demonstrates the superiority of the suggested PO system over two appearance-based algorithms in segmenting trees from difficult settings. Experiments conducted on a database of 873 images containing approximately 2008 tree trunks show that the proposed system correctly classifies trees at 81% with a false positive rate of 30%.


Canadian Conference on Computer and Robot Vision | 2006

Local Feature Matching For Face Recognition

Ehsan Fazl Ersi; John S. Zelek

In this paper a novel technique for face recognition is proposed. Using the statistical Local Feature Analysis (LFA) method, a set of feature points is extracted for each face image at locations with highest deviations from the expectation. Each feature point is described by a sequence of local histograms captured from the Gabor responses at different frequencies and orientations around the feature point. Histogram intersection is used to compare the Gabor histogram sequences in order to find the matched feature points between two faces. Recognition is performed based on the average similarity between the best matched points, in the probe face and each of the gallery faces. Several experiments on the FERET set of faces show the superiority of the proposed technique over all considered state-of-the-art methods (Elastic Bunch Graph Matching, LDA+PCA, Bayesian Intra/extrapersonal Classifier, Boosted Haar Classifier), and validate the robustness of our method against facial expression variation and illumination variation.
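Histogram intersection, the similarity measure used here, is a one-liner; a hypothetical face_similarity helper then averages each probe point's best match against the gallery descriptors (the Gabor feature extraction itself is omitted in this sketch):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two L1-normalized histograms: sum of bin-wise minima.
    Equals 1.0 for identical histograms and 0.0 for disjoint ones."""
    return float(np.minimum(h1, h2).sum())

def face_similarity(probe_descs, gallery_descs):
    """Average, over probe feature points, of the best intersection score
    against any gallery feature point (hypothetical helper)."""
    best = [max(histogram_intersection(p, g) for g in gallery_descs)
            for p in probe_descs]
    return sum(best) / len(best)
```

Recognition would then assign the probe to the gallery face with the highest average similarity, mirroring the matching step described above.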


Canadian Conference on Computer and Robot Vision | 2012

Evaluation of Local Spatio-temporal Salient Feature Detectors for Human Action Recognition

Amir Hossein Shabani; David A. Clausi; John S. Zelek

Local spatio-temporal salient features are used for a sparse and compact representation of video contents in many computer vision tasks such as human action recognition. To localize these features (i.e., key point detection), existing methods perform either symmetric or asymmetric multi-resolution temporal filtering and use a structural or a motion saliency criterion. In a common discriminative framework for action classification, different saliency criteria of the structure-based detectors and different temporal filters of the motion-based detectors are compared. We have two main observations. (1) The motion-based detectors localize features which are more effective than those of structure-based detectors. (2) The salient motion features detected using an asymmetric temporal filtering perform better than all other sparse salient detectors and dense sampling. Based on these two observations, we recommend the use of asymmetric motion features for effective sparse video content representation and action recognition.


Real-time Imaging | 2005

Towards real-time 3-D monocular visual tracking of human limbs in unconstrained environments

David J. Bullock; John S. Zelek

The 3-D visual tracking of human limbs is fundamental to a wide array of computer vision applications including gesture recognition, interactive entertainment, biomechanical analysis, vehicle driver monitoring, and electronic surveillance. The problem of limb tracking is complicated by issues of occlusion, depth ambiguities, rotational ambiguities, and high levels of noise caused by loose fitting clothing. We attempt to solve the 3-D limb tracking problem using only monocular imagery (a single 2-D video source) in largely unconstrained environments. The approach presented is a movement towards full real-time operating capabilities. The described system presents a complete visual tracking system which incorporates target detection, target model acquisition/initialization, and target tracking components into a single, cohesive, probabilistic framework. The presence of a target is detected, using visual cues alone, by recognition of an individual performing a simple pre-defined initialization cue. The physical dimensions of the limb are then learned probabilistically until a statistically stable model estimate has been found. The appearance of the limb is learned in a joint spatial-chromatic domain which incorporates normalized color data with spatial constraints in order to model complex target appearances. The target tracking is performed within a Monte Carlo particle filtering framework which is capable of maintaining multiple state-space hypotheses and propagating ambiguity until less ambiguous data is observed. Multiple image cues are combined within this framework in a principled Bayesian manner. The target detection and model acquisition components are able to perform at near real-time frame rates and are shown to accurately recognize the presence of a target and initialize a target model specific to that user. 
The target tracking component has demonstrated exceptional resilience to occlusion and temporary target disappearance and contains a natural mechanism for the trade-off between accuracy and speed. At this point, the target tracking component performs at sub real-time frame rates, although several methods to increase the effective operating speed are proposed.
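The Monte Carlo particle-filtering core of such a tracker can be sketched for a toy 1-D state. The paper's tracker is of course multi-cue and higher-dimensional; the random-walk motion model and Gaussian observation model below are simplifying assumptions.

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=1.0,
                    obs_std=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk state
    observed with Gaussian noise (toy stand-in for the limb tracker)."""
    rng = np.random.default_rng(seed)
    # initialize the particle cloud around the first observation
    particles = rng.normal(observations[0], obs_std, n_particles)
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, proc_std, n_particles)   # predict (motion model)
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)   # weight by likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)       # resample
        particles = particles[idx]
        estimates.append(particles.mean())                    # posterior mean estimate
    return np.array(estimates)
```

Because the cloud carries many hypotheses at once, multi-modal ambiguity (e.g., during occlusion) is propagated naturally until less ambiguous observations arrive, which is the property the abstract highlights.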

Collaboration


Dive into John S. Zelek's collaborations.

Top Co-Authors

Daniel C. Asmar
American University of Beirut

Georges Younes
American University of Beirut

Samer M. Abdallah
American University of Beirut