
Publications


Featured research published by Robert K. Bryll.


Pattern Recognition | 2003

Attribute bagging: improving accuracy of classifier ensembles by using random feature subsets

Robert K. Bryll; Ricardo Gutierrez-Osuna; Francis K. H. Quek

We present attribute bagging (AB), a technique for improving the accuracy and stability of classifier ensembles induced using random subsets of features. AB is a wrapper method that can be used with any learning algorithm. It establishes an appropriate attribute subset size and then randomly selects subsets of features, creating projections of the training set on which the ensemble classifiers are built. The induced classifiers are then used for voting. This article compares the performance of our AB method with bagging and other algorithms on a hand-pose recognition dataset. It is shown that AB gives consistently better results than bagging, both in accuracy and stability. The performance of ensemble voting in bagging and the AB method as a function of the attribute subset size and the number of voters for both weighted and unweighted voting is tested and discussed. We also demonstrate that ranking the attribute subsets by their classification accuracy and voting using only the best subsets further improves the resulting performance of the ensemble.
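The attribute-bagging procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy nearest-centroid base learner, the unweighted majority vote, and all parameter values are assumptions.

```python
import numpy as np

def attribute_bagging_predict(X_train, y_train, X_test, subset_size, n_voters, rng):
    """Sketch of attribute bagging: train one simple classifier per random
    feature subset, then combine predictions by unweighted majority vote."""
    n_features = X_train.shape[1]
    votes = []
    for _ in range(n_voters):
        # Project the training set onto a random subset of attributes.
        subset = rng.choice(n_features, size=subset_size, replace=False)
        # Toy base learner: nearest class centroid in the projected space.
        centroids = {c: X_train[y_train == c][:, subset].mean(axis=0)
                     for c in np.unique(y_train)}
        classes = sorted(centroids)
        dists = np.stack([np.linalg.norm(X_test[:, subset] - centroids[c], axis=1)
                          for c in classes], axis=1)
        votes.append(np.array(classes)[dists.argmin(axis=1)])
    votes = np.stack(votes, axis=1)  # shape: (n_test, n_voters)
    # Majority vote across the ensemble.
    return np.array([np.bincount(row).argmax() for row in votes])
```

In the paper's terms, `subset_size` is the attribute subset size the wrapper would first tune, and ranking subsets by accuracy before voting would further filter which voters participate.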


International Journal of Neural Systems | 2007

Neural network approach for image chromatic adaptation for skin color detection

Nikolaos G. Bourbakis; P. Kakumanu; Sokratis Makrogiannis; Robert K. Bryll; Sethuraman Panchanathan

The goal of image chromatic adaptation is to remove the effect of illumination and to obtain color data that precisely reflects the physical contents of the scene. We present in this paper an approach to image chromatic adaptation using Neural Networks (NN), with application to detecting and adapting human skin color. The NN is trained on randomly chosen color images containing human subjects under various illuminating conditions, thereby enabling the model to dynamically adapt to changing illumination conditions. The proposed network directly predicts the illuminant estimate in the image so as to adapt to human skin color. A comparison of our method with the Gray World, White Patch and NN on White Patch methods for skin color stabilization is presented. The skin regions in the NN-stabilized images are successfully detected using a computationally inexpensive thresholding operation. We also present results on detecting skin regions on a data set of test images. The results are promising and suggest a new approach for adapting human skin color using neural networks.
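As an illustration of the kind of inexpensive per-pixel thresholding the abstract mentions, here is a hedged sketch; the normalized-rg color space and the threshold values are assumptions for demonstration, not taken from the paper:

```python
import numpy as np

def skin_mask(rgb, r_range=(0.35, 0.55), g_range=(0.28, 0.36)):
    """Hypothetical thresholding sketch: once chromatic adaptation has
    stabilized colors, skin pixels cluster tightly in normalized rg space,
    so a cheap per-pixel box threshold can separate them."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    norm = rgb / total               # chromaticity: (r, g, b) / (r + g + b)
    r, g = norm[..., 0], norm[..., 1]
    return ((r_range[0] <= r) & (r <= r_range[1]) &
            (g_range[0] <= g) & (g <= g_range[1]))
```

The point of the sketch is the cost profile: after stabilization, detection reduces to a handful of vectorized comparisons per pixel.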


Asian Conference on Computer Vision | 1998

Vector Coherence Mapping: A Parallelizable Approach to Image Flow Computation

Francis K. H. Quek; Robert K. Bryll

We present a new parallel approach for the computation of an optical flow field from a video image sequence. This approach incorporates various local smoothness, spatial and temporal coherence constraints transparently through the application of fuzzy image processing techniques. Our Vector Coherence Mapping (VCM) approach accomplishes this by a weighted voting process in “local vector space,” where the weights provide high-level guidance to the local voting process. Our results show that VCM is capable of extracting flow fields for video streams with globally dominant fields (e.g., owing to camera pan or translation), moving camera and moving object(s), and multiple moving objects. Our results also show that VCM is able to operate under strong image noise and motion blur, and is not susceptible to boundary oversmoothing.
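The weighted-voting idea can be sketched at a generic level. This is a simplification: real VCM votes with entire correlation maps from spatial and temporal neighbors, not the single candidate vector per neighbor assumed here.

```python
import numpy as np

def coherent_vector(candidates, weights, grid_radius=4):
    """Sketch of voting in a discretized 'vector space': each neighbor casts
    a weighted vote for a candidate (dx, dy) displacement, votes accumulate
    in a 2D accumulator, and the peak cell wins. Outlier candidates (e.g.
    from noise or motion blur) are simply outvoted by coherent neighbors."""
    size = 2 * grid_radius + 1
    acc = np.zeros((size, size))
    for (dx, dy), w in zip(candidates, weights):
        ix = int(round(dx)) + grid_radius
        iy = int(round(dy)) + grid_radius
        if 0 <= ix < size and 0 <= iy < size:
            acc[iy, ix] += w
    iy, ix = np.unravel_index(acc.argmax(), acc.shape)
    return (ix - grid_radius, iy - grid_radius)
```

Because each pixel's accumulator is independent, this voting step is what makes the approach parallelizable: every flow vector can be computed concurrently.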


Workshop on Perceptive User Interfaces | 2001

Speech driven facial animation

P. Kakumanu; Ricardo Gutierrez-Osuna; Anna Esposito; Robert K. Bryll; A. Ardeshir Goshtasby; Oscar N. Garcia

The results reported in this article are an integral part of a larger project aimed at achieving perceptually realistic animations, including the individualized nuances, of three-dimensional human faces driven by speech. The audiovisual system that has been developed for learning the spatio-temporal relationship between speech acoustics and facial animation is described, including video and speech processing, pattern analysis, and MPEG-4 compliant facial animation for a given speaker. In particular, we propose a perceptual transformation of the speech spectral envelope, which is shown to capture the dynamics of articulatory movements. An efficient nearest-neighbor algorithm is used to predict novel articulatory trajectories from the speech dynamics. The results are very promising and suggest a new way to approach the modeling of synthetic lip motion of a given speaker driven by his/her speech. This would also provide clues toward a more general cross-speaker realistic animation.
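The nearest-neighbor prediction step described above can be roughly sketched as follows; the acoustic feature vectors, the Euclidean distance, and the value of k are assumptions, and the paper's perceptual transformation of the spectral envelope is not reproduced here.

```python
import numpy as np

def predict_trajectory(train_acoustic, train_articulatory, query_frames, k=3):
    """Hedged sketch of nearest-neighbor mapping from speech to animation:
    for each query acoustic frame, find the k closest training frames and
    average their stored articulatory parameters to predict the trajectory."""
    out = []
    for q in query_frames:
        # Distance from this acoustic frame to every training frame.
        d = np.linalg.norm(train_acoustic - q, axis=1)
        nn = np.argsort(d)[:k]
        # Predicted articulatory parameters: mean over the k neighbors.
        out.append(train_articulatory[nn].mean(axis=0))
    return np.stack(out)
```

In practice an exhaustive search like this would be replaced by an indexed nearest-neighbor structure, which is presumably what the "efficient" algorithm in the abstract refers to.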


Multimedia Tools and Applications | 2002

A Multimedia System for Temporally Situated Perceptual Psycholinguistic Analysis

Francis K. H. Quek; Robert K. Bryll; Cemil Kirbas; Hasan Arslan; David McNeill

Perceptual analysis of video (analysis by unaided ear and eye) plays an important role in such disciplines as psychology, psycholinguistics, linguistics, anthropology, and neurology. In the specific domain of psycholinguistic analysis of gesture and speech, researchers micro-analyze videos of subjects using a high quality video cassette recorder that has a digital freeze capability down to the specific frame. Such analyses are very labor intensive and slow. We present a multimedia system for perceptual analysis of video data using a multiple, dynamically linked representation model. The system components are linked through a time portal with a current time focus. The system provides mechanisms to analyze overlapping hierarchical interpretations of the discourse, and integrates visual gesture analysis, speech analysis, video gaze analysis, and text transcription into a coordinated whole. The various interaction components facilitate accurate multi-point access to the data. While this system is currently used to analyze gesture, speech and gaze in human discourse, the system described may be applied to any other field where careful analysis of temporal synchronies in video is important.


Archive | 2002

Dynamic Imagery in Speech and Gesture

David McNeill; Karl-Erik McCullough; Francis K. H. Quek; Susan Duncan; Robert K. Bryll; Xin-Feng Ma; Rashid Ansari

Someone begins to describe an event and almost immediately her hands start to fly. The movements seem involuntary and indeed unconscious, yet they take place vigorously and abundantly. Why is this happening? Whatever the reason, our person is not alone. Popular beliefs notwithstanding, every culture produces gestures. Gesturing is a phenomenon that passes almost without notice but it is omnipresent. If you watch someone speaking, in almost any language, and under nearly all circumstances, you will see what appears to be a compulsion to move the head, hands and arms in conjunction with speech. Speech, we know, is the actuality of language. But what are these gestures? They are not compensations for missing words or inarticulate speech — if anything, gestures are positively related to fluency and complexity of speech — the more articulate the speech, the more gesture.


Bioinformatics and Bioengineering | 2001

Audio and vision-based evaluation of Parkinson's disease from discourse video

Francis K. H. Quek; Robert K. Bryll; Mary P. Harper; Lei Chen; Lorraine O. Ramig

Parkinson's disease (PD) belongs to a class of neurodegenerative diseases that affect both the patient's speech and motor capabilities. To date, PD diagnosis and the determination of disease progression and treatment efficacy are based entirely on the subjective observation of a trained physician. We present the results of a pilot study of two idiopathic PD patients who have undergone Lee Silverman Voice Treatment (LSVT). It has been observed subjectively that the gestural performance of patients improves in tandem with speech improvements after LSVT. It is hypothesized that these improvements take place at a neurological level. Measurements of speech and gesture suggest that LSVT improves the quality of both gesticulation and speech.


ACM Transactions on Computer-Human Interaction | 2002

Multimodal human discourse: gesture and speech

Francis K. H. Quek; David McNeill; Robert K. Bryll; Susan Duncan; Xin-Feng Ma; Cemil Kirbas; Karl E. McCullough; Rashid Ansari


Gesture | 2001

Catchments, prosody and discourse

David McNeill; Francis K. H. Quek; Karl-Erik McCullough; Susan Duncan; Nobuhiro Furuyama; Robert K. Bryll; Xin-Feng Ma; Rashid Ansari


Computer Vision and Pattern Recognition | 2000

Gesture, speech, and gaze cues for discourse segmentation

Francis K. H. Quek; David McNeill; Robert K. Bryll; Cemil Kirbas; Hasan Arslan; Karl E. McCullough; Nobuhiro Furuyama; Rashid Ansari

Collaboration


Top co-authors of Robert K. Bryll:

Rashid Ansari, University of Illinois at Chicago
Xin-Feng Ma, Wright State University
Cemil Kirbas, Wright State University
P. Kakumanu, Wright State University
Hasan Arslan, Wright State University