Publication


Featured research published by Francis K. H. Quek.


ACM Computing Surveys | 2004

A review of vessel extraction techniques and algorithms

Cemil Kirbas; Francis K. H. Quek

Vessel segmentation algorithms are the critical components of circulatory blood vessel analysis systems. We present a survey of vessel extraction techniques and algorithms. We put the various vessel extraction approaches and techniques in perspective by means of a classification of the existing research. While we have mainly targeted the extraction of blood vessels, neurovascular structure in particular, we have also reviewed some of the segmentation methods for tubular objects that show similar characteristics to vessels. We have divided vessel segmentation algorithms and techniques into six main categories: (1) pattern recognition techniques, (2) model-based approaches, (3) tracking-based approaches, (4) artificial intelligence-based approaches, (5) neural network-based approaches, and (6) tube-like object detection approaches. Some of these categories are further divided into subcategories. We have also created tables to compare the papers in each category against such criteria as dimensionality, input type, preprocessing, user interaction, and result type.


Pattern Recognition | 2003

Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets

Robert K. Bryll; Ricardo Gutierrez-Osuna; Francis K. H. Quek

We present attribute bagging (AB), a technique for improving the accuracy and stability of classifier ensembles induced using random subsets of features. AB is a wrapper method that can be used with any learning algorithm. It establishes an appropriate attribute subset size and then randomly selects subsets of features, creating projections of the training set on which the ensemble classifiers are built. The induced classifiers are then used for voting. This article compares the performance of our AB method with bagging and other algorithms on a hand-pose recognition dataset. It is shown that AB gives consistently better results than bagging, both in accuracy and stability. The performance of ensemble voting in bagging and the AB method as a function of the attribute subset size and the number of voters for both weighted and unweighted voting is tested and discussed. We also demonstrate that ranking the attribute subsets by their classification accuracy and voting using only the best subsets further improves the resulting performance of the ensemble.
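
To make the voting scheme above concrete, the following is a minimal Python sketch of attribute bagging with unweighted voting: each base classifier is trained on a random feature projection of the training set, and the ensemble prediction is the per-sample majority vote. The decision-tree base learner, the digits dataset, and the subset size are illustrative assumptions, not the paper's experimental setup.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_voters, subset_size = 25, 16              # illustrative values, not tuned
models, subsets = [], []
for _ in range(n_voters):
    # Random attribute subset: project the training set onto these features.
    feats = rng.choice(X.shape[1], size=subset_size, replace=False)
    models.append(DecisionTreeClassifier(random_state=0).fit(X_tr[:, feats], y_tr))
    subsets.append(feats)

# Unweighted voting: each classifier predicts on its own feature projection,
# and the ensemble output is the most frequent label per test sample.
votes = np.stack([m.predict(X_te[:, f]) for m, f in zip(models, subsets)])
y_hat = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("ensemble accuracy:", (y_hat == y_te).mean())

In the paper's terms, weighted voting and ranking of subsets by their accuracy would reuse the same projections, differing only in how the votes are combined.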


International Conference on Computer Vision | 1999

Comparison of five color models in skin pixel classification

Benjamin D. Zarit; Boaz J. Super; Francis K. H. Quek

Detection of skin in video is an important component of systems for detecting, recognizing, and tracking faces and hands. Different skin detection methods have used different color spaces. This paper presents a comparative evaluation of the pixel classification performance of two skin detection methods in five color spaces. The skin detection methods used in this paper are color-histogram-based approaches that are intended to work with a wide variety of individuals, lighting conditions, and skin tones. One is the widely used lookup table method; the other makes use of Bayesian decision theory. Two types of enhancements, based on spatial and texture analyses, are also evaluated.
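
As an illustration of the lookup-table approach mentioned above, here is a minimal Python sketch: a 2D chrominance histogram built from labelled skin pixels gives a per-bin skin likelihood, and a pixel is labelled skin when its bin exceeds a threshold. The normalized-rg color space, bin count, threshold, and synthetic training pixels are assumptions made for illustration, not the configuration evaluated in the paper.

import numpy as np

BINS = 32

def rg_bins(rgb):
    # Map float RGB pixels (N, 3) to normalized-rg histogram bin indices.
    s = rgb.sum(axis=1, keepdims=True) + 1e-6
    rg = rgb[:, :2] / s                       # r = R/(R+G+B), g = G/(R+G+B)
    return np.clip((rg * BINS).astype(int), 0, BINS - 1)

def build_lut(skin_rgb, all_rgb):
    # Lookup table of skin likelihood per bin: skin count / total count.
    skin_hist = np.zeros((BINS, BINS))
    total_hist = np.zeros((BINS, BINS))
    for i, j in rg_bins(skin_rgb):
        skin_hist[i, j] += 1
    for i, j in rg_bins(all_rgb):
        total_hist[i, j] += 1
    return skin_hist / np.maximum(total_hist, 1)

def classify(lut, rgb, threshold=0.4):
    idx = rg_bins(rgb)
    return lut[idx[:, 0], idx[:, 1]] > threshold

# Toy usage with synthetic pixels standing in for labelled training data.
rng = np.random.default_rng(1)
skin = rng.uniform([0.5, 0.3, 0.2], [0.9, 0.6, 0.5], size=(5000, 3))
background = rng.uniform(0.0, 1.0, size=(5000, 3))
lut = build_lut(skin, np.vstack([skin, background]))
print("skin hit rate:", classify(lut, skin).mean())

A Bayesian variant in the same spirit would also histogram non-skin pixels and compare class-conditional likelihoods rather than thresholding a single ratio.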


Bioinformatics and Bioengineering | 2003

Vessel extraction techniques and algorithms: a survey

Cemil Kirbas; Francis K. H. Quek

Vessel segmentation algorithms are critical components of circulatory blood vessel analysis systems. We present a survey of vessel extraction techniques and algorithms, putting the various approaches and techniques in perspective by means of a classification of the existing research. While we target mainly the extraction of blood vessels, neurovascular structure in particular, we also review some of the segmentation methods for tubular objects that show similar characteristics to vessels. We divide vessel segmentation algorithms and techniques into six main categories: (1) pattern recognition techniques, (2) model-based approaches, (3) tracking-based approaches, (4) artificial intelligence-based approaches, (5) neural network-based approaches, and (6) miscellaneous tube-like object detection approaches. Some of these categories are further divided into subcategories. A table compares the papers against such criteria as dimensionality, input type, preprocessing, user interaction, and result type.


IEEE MultiMedia | 1996

Unencumbered gestural interaction

Francis K. H. Quek

Unencumbered hand gesture interfaces encompass both 3D interaction and 2D pointing. A model developed to study 3D interaction requires determining gestural strokes and hand motion dynamics and recognizing hand poses. Extended variable-valued logic and a rule-based induction algorithm contribute to inductive learning of hand gesture poses, yielding a recognition rate of 94 percent. FingerMouse, a freehand pointing system, detects pointing hand poses and tracks moving fingertips in close to real time.


International Conference on Machine Learning | 2005

VACE multimodal meeting corpus

Lei Chen; R. Travis Rose; Ying Qiao; Irene Kimbara; Fey Parrill; Haleema Welji; Tony X. Han; Jilin Tu; Zhongqiang Huang; Mary P. Harper; Francis K. H. Quek; Yingen Xiong; David McNeill; Ronald F. Tuttle; Thomas S. Huang

In this paper, we report on the infrastructure we have developed to support our research on multimodal cues for understanding meetings. With our focus on multimodality, we investigate the interaction among speech, gesture, posture, and gaze in meetings. For this purpose, a high quality multimodal corpus is being produced.


Image and Vision Computing | 1995

Eyes in the interface

Francis K. H. Quek

Computer vision has a significant role to play in the human-computer interaction (HCI) devices of the future. All computer input devices serve one essential purpose: they transduce some motion or energy from a human agent into machine-usable signals. One may therefore think of input devices as the ‘perceptual organs’ by which computers sense the intents of their human users. We outline the role computer vision will play, highlight the impediments to the development of vision-based interfaces, and propose an approach for overcoming these impediments. Prospective vision research areas for HCI include human face recognition, facial expression interpretation, lip reading, head orientation detection, eye gaze tracking, three-dimensional finger pointing, hand tracking, hand gesture interpretation, and body pose tracking. For vision-based interfaces to make any impact, we will have to embark on an expansive approach, which begins with the study of the interaction modality we seek to implement. We illustrate our approach by discussing our work on vision-based hand gesture interfaces. This work is based on information from such varied disciplines as semiotics, anthropology, neurophysiology, neuropsychology, and psycholinguistics. Concentrating on communicative (as opposed to manipulative) gestures, we argue that interpretation of a large number of gestures involves analysis of image dynamics to identify and characterize the gestural stroke, locating the stroke extrema in ordinal 3D space, and recognizing the hand pose at stroke extrema. We detail our dynamic image analysis algorithm, which enforces our constraints: directional variance, spatial cohesion, directional cohesion, and path cohesion. The clustered vectors characterize the motion of a gesturing hand.
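
The cohesion constraints above can be pictured with a small, hypothetical Python sketch that greedily groups frame-to-frame motion vectors by spatial proximity and direction similarity. It illustrates the idea of spatial and directional cohesion only; it is not the paper's vector-clustering algorithm, and the thresholds and data are invented.

import numpy as np

def cluster_motion_vectors(positions, vectors, dist_thresh=20.0, angle_thresh=0.35):
    # Greedy grouping: a vector joins a cluster if it lies near the cluster
    # centroid (spatial cohesion) and its direction is within angle_thresh
    # radians of the cluster's mean direction (directional cohesion).
    clusters = []                             # each cluster: {"pts": [...], "vecs": [...]}
    for p, v in zip(positions, vectors):
        for c in clusters:
            centroid = np.mean(c["pts"], axis=0)
            mean_dir = np.mean(c["vecs"], axis=0)
            near = np.linalg.norm(p - centroid) < dist_thresh
            cos = np.dot(v, mean_dir) / (np.linalg.norm(v) * np.linalg.norm(mean_dir) + 1e-9)
            aligned = np.arccos(np.clip(cos, -1.0, 1.0)) < angle_thresh
            if near and aligned:
                c["pts"].append(p)
                c["vecs"].append(v)
                break
        else:
            clusters.append({"pts": [p], "vecs": [v]})
    return clusters

# Toy demo: two spatially separated groups moving in different directions.
rng = np.random.default_rng(0)
pos = np.vstack([rng.normal([50, 50], 3, (20, 2)), rng.normal([200, 80], 3, (20, 2))])
vec = np.vstack([np.tile([1.0, 0.1], (20, 1)), np.tile([-0.2, 1.0], (20, 1))])
print("clusters found:", len(cluster_motion_vectors(pos, vec)))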


Tangible and Embedded Interaction | 2010

Touch & talk: contextualizing remote touch for affective interaction

Rongrong Wang; Francis K. H. Quek

Touch is a unique channel in affect conveyance. A significant aspect of this uniqueness is that the relation of touch to affect is immediate, without the need for symbolic encoding and decoding. However, most pioneering work in developing remote touch technologies results in the use of touch as a symbolic channel, either by design or by user decision. We present a review of the relevant psychological and sociological literature on touch and propose a model of the immediacy of the touch channel for the conveyance of affect. We posit that the strategic provision of contextualizing channels will liberate touch to assume its role in affect conveyance. Armed with this analysis, we propose two design guidelines: first, the touch channel needs to be coupled with other communication channels to clarify its meaning; second, the use of touch as an immediate channel should be encouraged by not assigning any symbolic meaning to touch interactions. We proceed to describe our haptic interface design based on these guidelines. Our in-lab experiment shows that remote touch reinforces the meaning of a symbolic channel, reducing sadness significantly and showing a trend toward reducing general negative mood and reinforcing joviality.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

RIEVL: recursive induction learning in hand gesture recognition

Meide Zhao; Francis K. H. Quek; Xindong Wu

Presents a recursive inductive learning scheme that is able to acquire hand pose models in the form of disjunctive normal form expressions involving multivalued features. Based on an extended variable-valued logic, our rule-based induction system is able to abstract compact rule sets from any set of feature vectors describing a set of classifications. The rule bases which satisfy the completeness and consistency conditions are induced and refined through five heuristic strategies. A recursive induction learning scheme in the RIEVL algorithm is designed to escape local minima in the solution space. A performance comparison of RIEVL with other inductive algorithms, ID3, NewID, C4.5, CN2, and HCV, is given in the paper. In the experiments with hand gestures, the system produced the disjunctive normal form descriptions of each pose and identified the different hand poses based on the classification rules obtained by the RIEVL algorithm. RIEVL classified 94.4 percent of the gesture images in our testing set correctly, outperforming all other inductive algorithms.
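
For intuition about the output representation, the short Python sketch below shows how disjunctive-normal-form pose rules over multivalued features might be stored and applied at classification time. The feature names, value sets, and rules are hypothetical examples, not rules induced by RIEVL, and the induction and refinement heuristics themselves are not shown.

# A rule base maps each pose to a disjunction of conjunctions; each conjunction
# maps a feature to the set of values it accepts (a selector in the spirit of
# variable-valued logic). All names and values below are hypothetical.
RULES = {
    "point": [
        {"extended_fingers": {1}, "thumb": {"folded"}},
    ],
    "open_hand": [
        {"extended_fingers": {4, 5}},
        {"extended_fingers": {3}, "spread": {"wide"}},
    ],
}

def matches(conjunction, features):
    # A conjunction fires when every selector admits the observed value.
    return all(features.get(f) in allowed for f, allowed in conjunction.items())

def classify(features):
    # Return the first pose whose DNF expression is satisfied, else None.
    for pose, disjuncts in RULES.items():
        if any(matches(c, features) for c in disjuncts):
            return pose
    return None

print(classify({"extended_fingers": 5, "thumb": "extended", "spread": "wide"}))  # open_hand
print(classify({"extended_fingers": 1, "thumb": "folded"}))                      # point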


International Journal of Computer Vision | 2006

Hand Motion Gesture Frequency Properties and Multimodal Discourse Analysis

Yingen Xiong; Francis K. H. Quek

Gesture and speech are co-expressive and complementary channels of a single human language system. While speech carries the major load of symbolic presentation, gesture provides the imagistic content. We investigate the role of oscillatory/cyclical hand motions in ‘carrying’ this image content. We present our work on the extraction of hand motion oscillation frequencies of gestures that accompany speech. The key challenges are that such motions are characterized by non-stationary oscillations and that multiple frequencies may be simultaneously extant. Also, the oscillations may extend over only a few cycles. We apply the windowed Fourier transform and the wavelet transform to detect and extract gesticulatory oscillations. We tested these against synthetic signals (stationary and non-stationary) and real data sequences of gesticulatory hand movements in natural discourse. Our results show that both filters functioned well for the synthetic signals. For the real data, the wavelet bandpass filter bank is better for detecting and extracting hand gesture oscillations. We relate the hand motion oscillatory gestures detected by wavelet analysis to speech in natural conversation and apply this to multimodal language analysis. We demonstrate the ability of our algorithm to extract gesticulatory oscillations and show how oscillatory gestures reveal portions of the multimodal discourse structure.
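
As a sketch of the windowed-Fourier side of this analysis, the short Python example below applies a short-time Fourier transform to a synthetic, non-stationary hand-position trace and reports the dominant oscillation frequency in each analysis window. The frame rate, window length, and synthetic signal are assumptions for illustration, not the paper's data or filter design.

import numpy as np
from scipy.signal import stft

fs = 30.0                                   # assumed video frame rate (Hz)
t = np.arange(0, 20, 1 / fs)

# Synthetic hand-position trace: a 0.8 Hz oscillation followed by a 2.5 Hz
# oscillation, mimicking a non-stationary gesticular signal, plus mild noise.
x = np.where(t < 10, np.sin(2 * np.pi * 0.8 * t), np.sin(2 * np.pi * 2.5 * t))
x = x + 0.1 * np.random.default_rng(0).standard_normal(t.size)

f, seg_t, Z = stft(x, fs=fs, nperseg=128)   # 128-sample (about 4 s) windows
power = np.abs(Z) ** 2

# Report the dominant oscillation frequency per window; a wavelet filter bank
# (as favored in the paper for real data) would localize transitions better.
for when, spectrum in zip(seg_t, power.T):
    print(f"t = {when:5.1f} s  dominant frequency = {f[spectrum.argmax()]:.2f} Hz")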

Collaboration


Dive into Francis K. H. Quek's collaborations.

Top Co-Authors

Cemil Kirbas

Wright State University

Rashid Ansari

University of Illinois at Chicago
