Publication


Featured research published by Gang Hu.


Interactive Tabletops and Surfaces | 2014

DT-DT: Top-down Human Activity Analysis for Interactive Surface Applications

Gang Hu; Derek F. Reilly; Mohammed Alnusayri; Ben Swinden; Qigang Gao

As environmental and multi-display configurations become more common, HCI research is becoming increasingly concerned with actions around these displays. Tracking human activity is challenging, and there is currently no single solution that reliably handles all scenarios without excessive instrumentation. In this paper we present a novel tracking and analysis approach using a top-down 3D camera. Our hierarchical tracking approach models local and global affinities, scene constraints and motion patterns to find and track people in space, and a novel salience occupancy pattern (SOP) is used for action recognition. We present experiences applying our approach to build a proxemics-aware tabletop display prototype, and to create an exhibit combining a large vertical display with an interactive floor-projection.


International Conference on Image Analysis and Recognition | 2009

An Interactive Image Feature Visualization System for Supporting CBIR Study

Gang Hu; Qigang Gao

CBIR has been an active research topic for more than a decade. Current systems still lack flexibility and accuracy because of the semantic gap between feature-level and semantic-level image representations. Although many techniques have been developed for automatic or semi-automatic retrieval (e.g. interactive browsing, relevance feedback (RF)), the questions of how to find suitable features and how to measure image content remain open. Choosing sound features that code image content properly is a challenging task, and it requires intensive interactive effort to discover useful regularities between features and content semantics. In this paper, we present an interactive visualization system for supporting feature investigations. It allows users to choose different features, feature combinations, and representations to test their impact on measuring content semantics. The experiments demonstrate how various perceptual edge features and groupings are interactively handled for retrieval measures. The system can be extended to include more features.


Designing Interactive Systems | 2016

Doing While Thinking: Physical and Cognitive Engagement and Immersion in Mixed Reality Games

Gang Hu; Nabil Bin Hannan; Khalid Tearo; Arthur Bastos; Derek F. Reilly

We present a study examining the impact of physical and cognitive challenge on reported immersion for a mixed reality game called Beach Pong. Contrary to prior findings for desktop games, we find significantly higher reported immersion among players who engage physically, regardless of their actual game performance. Building a mental map of the real, virtual, and sensed world is a cognitive challenge for novices, and this appears to influence immersion: in our study, participants who actively attended to both physical and virtual game elements reported higher immersion levels than those who attended mainly or exclusively to virtual elements. Without an integrated mental map, in-game cognitive challenges were ignored or offloaded to motor response when possible in order to achieve the minimum required goals of the game. From our results we propose a model of immersion in mixed reality gaming that is useful for designers and researchers in this space.


Canadian Conference on Computer and Robot Vision | 2014

N-Gram Based Image Representation and Classification Using Perceptual Shape Features

Albina Mukanova; Qigang Gao; Gang Hu

Rapid growth of visual data processing and analysis applications, such as content-based image retrieval, augmented reality, automated inspection and defect detection, medical image understanding, and remote sensing, has made the development of accurate and efficient image representation and classification methods a key research area. This research proposes new higher-level perceptual shape features for image representation based on Gestalt principles of human vision. The concept of the n-gram is adapted from text analysis as a grouping mechanism for coding the global shape content of an image. The proposed perceptual shape features are translation, rotation, and scale invariant. Local shape features and the n-gram grouping scheme are integrated to create a new Perceptual Shape Vocabulary (PSV). Image representations based on PSVs, with and without the n-gram scheme, are applied to an image classification task using a Support Vector Machine (SVM) classifier. The experimental evaluation indicates that n-gram-based perceptual shape features can efficiently represent the global shape information of an image, and augment the accuracy of representations built on low-level image features such as SIFT descriptors.
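The n-gram grouping idea in this abstract can be illustrated with a minimal sketch. The token names, vocabulary, and functions below are hypothetical stand-ins for the paper's perceptual shape primitives and PSV, not the authors' actual implementation: consecutive local shape tokens along a contour are grouped into n-grams, and an image is then represented as a normalized histogram over an n-gram vocabulary.

```python
from collections import Counter

def shape_ngrams(tokens, n=2):
    """Group consecutive local shape tokens into n-grams (hypothetical
    stand-ins for the paper's perceptual shape primitives)."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_histogram(tokens, vocabulary, n=2):
    """Represent an image as a normalized histogram over an n-gram
    Perceptual Shape Vocabulary (PSV)."""
    counts = Counter(g for g in shape_ngrams(tokens, n) if g in vocabulary)
    total = sum(counts.values()) or 1
    return [counts[v] / total for v in vocabulary]

# Hypothetical edge-token sequence extracted along one contour.
tokens = ["line", "arc", "line", "line", "arc"]
vocab = [("line", "arc"), ("arc", "line"), ("line", "line")]
hist = ngram_histogram(tokens, vocab)  # → [0.5, 0.25, 0.25]
```

Such histograms would then serve as fixed-length feature vectors for an SVM classifier, as the abstract describes.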


Canadian Conference on Computer and Robot Vision | 2011

Gesture Analysis Using 3D Camera, Shape Features and Particle Filters

Gang Hu; Qigang Gao

This paper presents a framework for gesture recognition and tracking using a 3D camera, edge features, and particle filters. A target gesture is modeled qualitatively with perceptual shape features. The perceptual model guides tracking based on a particle filtering method to achieve reliable results. The system has been applied to a video game control application, Interactive Dart Game, where the dart-throwing gesture is modeled by learning from a training data set. Experiments demonstrate that the proposed system has great potential for gesture analysis applications such as sensor-based video games and patient monitoring systems.
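The particle filtering loop underlying this kind of tracker can be sketched in a few lines. This is a generic one-dimensional predict–update–resample cycle under assumed models, not the paper's system; the `observe` function here is a placeholder for the perceptual shape likelihood that would score a particle against the edge-feature model.

```python
import random

def particle_filter_step(particles, weights, observe, motion_std=1.0):
    """One predict-update-resample cycle of a basic particle filter.
    `observe(x)` scores a particle against the observation model
    (a placeholder for a perceptual shape likelihood)."""
    # Predict: diffuse particles under a simple random-walk motion model.
    particles = [p + random.gauss(0, motion_std) for p in particles]
    # Update: re-weight particles by the observation likelihood.
    weights = [w * observe(p) for w, p in zip(weights, particles)]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Resample: draw particles proportionally to their weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

random.seed(0)
target = 5.0
observe = lambda x: 1.0 / (1.0 + (x - target) ** 2)  # peaked at the target
particles = [random.uniform(0, 10) for _ in range(500)]
weights = [1.0 / 500] * 500
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, observe)
estimate = sum(particles) / len(particles)  # converges near the target
```

In the paper's setting the state would be higher-dimensional (gesture pose over time) and the likelihood would come from the learned perceptual shape model rather than this toy function.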


International Conference on Image Processing | 2016

A Shape Feature Based BoVW Method for Image Classification Using N-Gram and Spatial Pyramid Coding Scheme

Elham Etemad; Gang Hu; Qigang Gao

Image classification is a general visual analysis task based on image content coded by its representation. In this research, we propose an image representation method based on perceptual shape features and their spatial distributions. A natural language processing concept, the N-gram, is adopted to generate a set of perceptual shape visual words for encoding image features. By combining hierarchical visual words with a spatial pyramid, a Spatio-Shape Pyramid representation is constructed to reduce the semantic gap. Experimental results show that the proposed method outperforms other state-of-the-art methods.
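The spatial pyramid idea mentioned here can be sketched as follows. This is a generic illustration of spatial pyramid pooling, assuming hypothetical visual-word assignments, not the paper's Spatio-Shape Pyramid code: each pyramid level splits the image into 2^l x 2^l cells, a visual-word histogram is computed per cell, and the concatenation preserves the spatial layout that a flat bag-of-visual-words discards.

```python
def spatial_pyramid(points, words, vocab_size, width, height, levels=2):
    """Concatenate visual-word histograms over a spatial pyramid:
    level l splits the image into 2**l x 2**l cells, so the spatial
    layout of the (hypothetical) shape words is preserved."""
    feature = []
    for level in range(levels):
        cells = 2 ** level
        hist = [[0] * vocab_size for _ in range(cells * cells)]
        for (x, y), w in zip(points, words):
            cx = min(int(x / width * cells), cells - 1)
            cy = min(int(y / height * cells), cells - 1)
            hist[cy * cells + cx][w] += 1
        for h in hist:
            feature.extend(h)
    return feature

# Two hypothetical shape words at opposite corners of a 100x100 image.
points = [(10, 10), (90, 90)]
words = [0, 1]
f = spatial_pyramid(points, words, vocab_size=2, width=100, height=100)
# Level 0 gives one 2-bin histogram; level 1 gives four, so len(f) == 10.
```

A flat histogram would be identical for any placement of the two words; the pyramid distinguishes them by cell, which is the point of adding spatial coding.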


Canadian Conference on Computer and Robot Vision | 2015

A Perceptual Depth Shape-based CRF Model for Deformable Surface Labeling

Gang Hu; Derek F. Reilly; Qigang Gao; Arthur Bastos; Nhu loan Truong

Real-time deformable scene understanding is a challenging task. In this paper, we address this problem using the conditional random field (CRF) framework and perceptual shape salience occupancy patterns. The CRF is a powerful probabilistic model that has been widely used for labelling image segments. It is particularly well suited to modelling local interactions and global consistency among bottom-up regions (e.g. superpixels). However, its capacity can be limited if the underlying feature potentials do not reflect scene properties well. We propose a depth shape-based CRF model for deformable surface (in our case, sand) labelling that utilizes expressive novel shape salience occupancy patterns (SOPs). Experimental results demonstrate the effectiveness and robustness of the method on recorded video datasets. While our work has concentrated on sand surface labelling, the approach can be applied to other surface materials (e.g. snow, mud) and extended to non-planar surfaces as well (e.g. sculpting blocks).


International Conference on Image Analysis and Recognition | 2015

Dynamic Perceptual Attribute-Based Hidden Conditional Random Fields for Gesture Recognition

Gang Hu; Qigang Gao

The demand for gesture/action recognition technologies has increased in recent years. State-of-the-art gesture/action recognition systems use low-level features or intermediate bag-of-features as descriptors. These methods ignore the spatial and temporal information in the shape and internal structure of the targets. Dynamic Perceptual Attributes (DPAs) are a set of descriptors of a gesture's perceptual properties, and their context relations reveal the intrinsic structures of gestures/actions. This paper utilizes a hidden conditional random field (HCRF) model based on DPAs to describe complex human gestures and facilitate recognition tasks. Experimental results show that our model outperforms state-of-the-art methods.


International Conference on Image Analysis and Recognition | 2014

Human Activity Analysis in a 3D Bird’s-eye View

Gang Hu; Derek F. Reilly; Ben Swinden; Qigang Gao

Efficient and reliable human tracking in arbitrary environments is challenging, as there is currently no single solution that can successfully handle all scenarios. In this paper we present a novel approach that uses a top-view 3D camera and employs a simplified yet expressive human body model for effective multi-target detection and tracking. Both bottom-up and high-level processes are involved in constructing a saliency map with selective visual information. We handle the tracking task in a hierarchical data association framework, and a novel salience occupancy pattern (SOP) descriptor is proposed as the motion representation for action recognition. Our real-time bird's-eye multi-person tracking and recognition approach is being applied in a human-computer interaction (HCI) research prototype, and has a wide range of applications.


International Conference on Image Processing | 2010

A Non-Parametric Statistics Based Method for Generic Curve Partition and Classification

Gang Hu; Qigang Gao
