Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Claude C. Chibelushi is active.

Publication


Featured research published by Claude C. Chibelushi.


IEEE Transactions on Multimedia | 2002

A review of speech-based bimodal recognition

Claude C. Chibelushi; Farzin Deravi; John S. D. Mason

Speech recognition and speaker recognition by machine are crucial ingredients for many important applications such as natural and flexible human-machine interfaces. Most developments in speech-based automatic recognition have relied on acoustic speech as the sole input signal, disregarding its visual counterpart. However, recognition based on acoustic speech alone can be afflicted with deficiencies that preclude its use in many real-world applications, particularly under adverse conditions. The combination of auditory and visual modalities promises higher recognition accuracy and robustness than can be obtained with a single modality. Multimodal recognition is therefore acknowledged as a vital component of the next generation of spoken language systems. The paper reviews the components of bimodal recognizers, discusses the accuracy of bimodal recognition, and highlights some outstanding research issues as well as possible application domains.
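
As a rough illustration of the decision-level fusion used in many bimodal recognizers, the sketch below combines per-class scores from a hypothetical acoustic model and a hypothetical visual (lip) model through a fixed weight. The weight value, the score format, and the toy class scores are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def late_fusion(acoustic_scores, visual_scores, alpha=0.5):
    """Combine per-class scores (e.g. log-likelihoods) from two modalities.

    alpha weights the acoustic stream and (1 - alpha) the visual stream;
    both inputs are 1-D arrays indexed by class (word or speaker identity).
    """
    acoustic_scores = np.asarray(acoustic_scores, dtype=float)
    visual_scores = np.asarray(visual_scores, dtype=float)
    fused = alpha * acoustic_scores + (1.0 - alpha) * visual_scores
    return int(np.argmax(fused)), fused

# Toy example with three candidate classes: the acoustic scores are nearly
# ambiguous (as they might be under noise), while the visual scores are not.
acoustic = [-1.20, -1.10, -1.15]
visual = [-0.40, -2.00, -1.80]
best_class, fused_scores = late_fusion(acoustic, visual)
print(best_class, fused_scores)
```

Weighting the streams by their estimated reliability, rather than using a fixed alpha, is one way such systems gain robustness under adverse acoustic conditions.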


IEEE Transactions on Industrial Electronics | 2013

Efficient Object Localization Using Sparsely Distributed Passive RFID Tags

Po Yang; Wenyan Wu; Mansour Moniri; Claude C. Chibelushi

Radio-frequency identification (RFID) technology has been widely used in passive RFID localization applications owing to its flexible deployment and low cost. However, current passive RFID localization systems cannot achieve both highly accurate and highly precise localization of moving objects, because of tag collisions and variation in tag behavior. Most researchers increase the density of the tag distribution to improve localization accuracy, and then rely on either anti-collision processes embedded in the RFID reader hardware or advanced localization algorithms to enhance localization precision. However, advanced anti-collision processes for RFID devices are limited by the physical characteristics of radio frequency, and improved localization algorithms cannot fundamentally reduce the impact of tag collisions on localization precision. This work attempts to improve the localization precision of a passive RFID localization system by using sparsely distributed RFID tags. The paper first defines measures of accuracy and precision for a passive RFID localization system with regard to RFID tag distribution. An exponential-based function is then derived from experimental measurements, reflecting the relationship between RFID tag distribution and localization precision. This function shows that localization precision is mainly determined by the tag density of the RFID tag distribution. Based on these experimental findings, a sparse RFID tag distribution approach is proposed. The results show that, in comparison with the conventional RFID tag distribution, a passive RFID localization system with a sparse tag distribution can deliver higher localization precision for the required accuracy.
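
To make the exponential relationship above concrete, the sketch below fits an assumed exponential model of localization error spread against tag density. The functional form, the parameter names, and the toy measurements are illustrative assumptions, not data or equations from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def spread_model(density, a, b, c):
    """Assumed exponential form relating tag density to localization spread."""
    return a * np.exp(b * density) + c

# Hypothetical measurements: tag density (tags per square metre) versus the
# spread (in metres) of repeated location estimates, i.e. lower is more precise.
density = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 12.0])
spread = np.array([0.13, 0.16, 0.24, 0.36, 0.52, 0.95])

params, _ = curve_fit(spread_model, density, spread, p0=(0.1, 0.2, 0.0))
a, b, c = params
print(f"fitted model: spread(d) = {a:.3f} * exp({b:.3f} * d) + {c:.3f}")
```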


IEEE International Conference on Automatic Face and Gesture Recognition | 2002

Robust facial expression recognition using a state-based model of spatially-localised facial dynamics

Fabrice Bourel; Claude C. Chibelushi; Adrian A. Low

The paper proposes a new approach for the robust recognition of facial expressions from video sequences. The goal of the work presented is to develop robust recognition techniques that overcome some limitations of current techniques, such as their sensitivity to partial occlusion of the face and to noisy data. The paper investigates a representation of facial expressions based on a spatially-localised geometric facial model coupled to a state-based model of facial motion. The experiments show that the proposed facial expression recognition framework suffers relatively little degradation in recognition rate when faces are partially occluded or subject to various levels of noise introduced at the feature-tracker level.
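
One way to picture a state-based model of spatially-localised dynamics is to quantize the trajectory of each local geometric measurement into a small set of motion states. The thresholding scheme and the toy mouth-width track below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def to_state_sequence(measurements, threshold=0.02):
    """Quantize a 1-D trajectory of a local geometric facial measurement
    (e.g. a normalized mouth width) into coarse motion states:
    +1 (increasing), 0 (stable), -1 (decreasing).
    """
    deltas = np.diff(np.asarray(measurements, dtype=float))
    states = np.zeros(deltas.shape, dtype=int)
    states[deltas > threshold] = 1
    states[deltas < -threshold] = -1
    return states.tolist()

# Toy mouth-width track during the onset of a smile.
mouth_width = [0.30, 0.31, 0.35, 0.40, 0.41, 0.41, 0.40]
print(to_state_sequence(mouth_width))   # -> [0, 1, 1, 0, 0, 0]
```

Sequences like this, one per facial region, can then be matched against expression-specific state models, which is roughly the role the state-based model plays in the approach described above.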


British Machine Vision Conference | 2000

Robust Facial Feature Tracking

Fabrice Bourel; Claude C. Chibelushi; Adrian A. Low

We present a robust technique for tracking a set of pre-determined points on a human face. To achieve robustness, the Kanade-Lucas-Tomasi point tracker is extended and specialised to work on facial features by embedding knowledge about the configuration and visual characteristics of the face. The resulting tracker is designed to recover from the loss of points caused by tracking drift or temporary occlusion. Performance assessment experiments have been carried out on a set of 30 video sequences covering several facial expressions. It is shown that with the original Kanade-Lucas-Tomasi tracker some of the points are lost, whereas with the new method described in this paper all lost points are recovered with little or no displacement error.
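
The abstract does not spell out how the face-configuration knowledge is embedded, so the sketch below is only a generic stand-in: it tracks points with OpenCV's pyramidal Lucas-Kanade implementation and re-seeds any lost point from the median displacement of the points that survived, rather than reproducing the authors' recovery scheme.

```python
import cv2
import numpy as np

def track_facial_points(prev_gray, gray, prev_pts):
    """Track facial feature points between two grayscale frames.

    prev_pts is an (N, 1, 2) float32 array, as expected by OpenCV.
    Lost points are re-seeded by translating their previous position with
    the median motion of the successfully tracked points (a simplification
    of the configuration-based recovery described in the paper).
    """
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None, winSize=(15, 15), maxLevel=3)
    tracked = status.reshape(-1).astype(bool)

    if tracked.any():
        median_shift = np.median(next_pts[tracked] - prev_pts[tracked], axis=0)
    else:
        median_shift = np.zeros((1, 2), dtype=np.float32)

    # Recover lost points by translating them with the dominant facial motion.
    next_pts[~tracked] = prev_pts[~tracked] + median_shift
    return next_pts
```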


British Machine Vision Conference | 2001

Recognition of Facial Expressions in the Presence of Occlusion

Fabrice Bourel; Claude C. Chibelushi; Adrian A. Low

We present a new approach for the recognition of facial expressions from video sequences in the presence of occlusion. Although promising results have been reported in the literature on automatic recognition of facial expressions, most techniques have been assessed in controlled laboratory conditions that do not reflect real-world conditions. The goal of the work presented herein is to develop recognition techniques that overcome some limitations of current techniques, such as their sensitivity to partial occlusion of the face. The proposed approach is based on a localised representation of facial features and on data fusion. The experiments show that the proposed approach is robust to partial occlusion of the face.
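
A minimal reading of "localised representation plus data fusion" is that each facial region contributes its own expression scores and occluded regions are simply left out of the combination. The region names, the unweighted averaging, and the score values below are assumptions made for illustration, not the fusion rule used in the paper.

```python
import numpy as np

def fuse_region_scores(region_scores, occluded=()):
    """Average per-expression scores over the facial regions that are visible.

    region_scores maps a region name to a score vector over expression classes;
    occluded lists the regions to exclude from the fusion.
    """
    visible = [np.asarray(s, dtype=float)
               for name, s in region_scores.items() if name not in occluded]
    if not visible:
        raise ValueError("no visible regions to fuse")
    fused = np.mean(np.stack(visible), axis=0)
    return int(np.argmax(fused)), fused

scores = {
    "mouth": [0.1, 0.7, 0.2],       # the mouth strongly suggests class 1
    "left_eye": [0.3, 0.4, 0.3],
    "right_eye": [0.3, 0.5, 0.2],
}
# With the mouth occluded, the eye regions still point to class 1.
print(fuse_region_scores(scores, occluded={"mouth"}))
```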


Systems, Man and Cybernetics | 1999

Adaptive classifier integration for robust pattern recognition

Claude C. Chibelushi; Farzin Deravi; John S. D. Mason

The integration of multiple classifiers promises higher classification accuracy and robustness than can be obtained with a single classifier. This paper proposes a new adaptive technique for classifier integration based on a linear combination model. The proposed technique is shown to exhibit robustness to a mismatch between test and training conditions. It often outperforms the most accurate of the fused information sources. A comparison between adaptive linear combination and non-adaptive Bayesian fusion shows that, under mismatched test and training conditions, the former is superior to the latter in terms of identification accuracy and insensitivity to information source distortion.
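
A linear combination of classifiers can be sketched as a weighted sum of per-class score vectors whose weights are adapted from feedback. The exponentiated-gradient style update used below is an illustrative choice of adaptation rule, not the technique proposed in the paper.

```python
import numpy as np

class AdaptiveLinearFusion:
    """Fuse K classifier score vectors with adaptively weighted summation
    (illustrative update rule, not the paper's)."""

    def __init__(self, num_sources, learning_rate=0.05):
        self.weights = np.full(num_sources, 1.0 / num_sources)
        self.learning_rate = learning_rate

    def fuse(self, source_scores):
        """source_scores: (K, C) array, one score vector per classifier."""
        return self.weights @ np.asarray(source_scores, dtype=float)

    def adapt(self, source_scores, true_class):
        """Nudge weights toward sources that support the labelled class,
        then renormalise so they remain a convex combination."""
        support = np.asarray(source_scores, dtype=float)[:, true_class]
        self.weights *= np.exp(self.learning_rate * support)
        self.weights /= self.weights.sum()

# Two sources, three classes; the second source is more reliable here.
fusion = AdaptiveLinearFusion(num_sources=2)
scores = np.array([[0.4, 0.3, 0.3],
                   [0.1, 0.8, 0.1]])
print(fusion.fuse(scores))          # fused class scores
fusion.adapt(scores, true_class=1)  # feedback shifts weight to source 2
print(fusion.weights)
```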


Advanced Video and Signal Based Surveillance | 2005

Classification of smart video surveillance systems for commercial applications

Mohamed Sedky; Mansour Moniri; Claude C. Chibelushi

Video surveillance has a large market, as the number of cameras installed around us shows. There is an immediate commercial need for smart video surveillance systems that can make use of the existing camera network (e.g. CCTV) for more intelligent security systems and that can support applications beyond security. This work introduces a new classification of smart video surveillance systems based on their commercial applications. The paper highlights links between research and commercial applications. The work reported here has both research and commercial motivations. Our first goal is to define a generic model of smart video surveillance systems that can meet the requirements of strong commercial applications; our second goal is to categorize different smart video surveillance applications and to relate the capabilities of computer vision algorithms to the requirements of commercial applications.


Multimedia Signal Processing | 1999

Lip signatures for automatic person recognition

John S. D. Mason; Jason Brand; Roland Auckenthaler; Farzin Deravi; Claude C. Chibelushi

This paper evaluates lip features for person recognition and compares their performance with that of the acoustic signal. Recognition accuracy is found to be equivalent in the two domains, agreeing with the findings of Chibelushi (1997). The optimum dynamic window length for both the acoustic and visual modalities is found to be about 100 ms. Recognition performance of the upper lip is considerably better than that of the lower lip, with identification error rates of 15% and 35% respectively, using a single digit as both the test and training token.
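
The role of the dynamic window can be illustrated with a standard regression-based delta-feature computation, where the window span is chosen to cover roughly 100 ms at an assumed frame rate. The frame rate, the exact variant of the formula, and the toy feature track are assumptions, not details from the paper.

```python
import numpy as np

def delta_features(frames, window_ms=100.0, frame_rate_hz=50.0):
    """Compute regression-based delta (dynamic) features over a window
    spanning roughly `window_ms` milliseconds.

    frames: (T, D) array of static lip features, one row per frame.
    """
    frames = np.asarray(frames, dtype=float)
    # Half-window in frames: e.g. 100 ms at 50 Hz -> 5 frames -> 2 each side.
    n = max(1, int(round(window_ms / 1000.0 * frame_rate_hz)) // 2)
    padded = np.pad(frames, ((n, n), (0, 0)), mode="edge")
    num = sum(k * (padded[n + k:len(frames) + n + k] -
                   padded[n - k:len(frames) + n - k]) for k in range(1, n + 1))
    return num / (2.0 * sum(k * k for k in range(1, n + 1)))

lip_widths = np.linspace(0.3, 0.4, 20).reshape(-1, 1)  # toy static feature track
print(delta_features(lip_widths)[:3])
```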


International Conference on RFID | 2008

SLAM Algorithm for 2D Object Trajectory Tracking based on RFID Passive Tags

Po Yang; Wenyan Wu; Mansour Moniri; Claude C. Chibelushi

Tracking the physical location of nodes in a 2D environment is critical in many applications, such as camera tracking in a virtual studio and indoor tracking of mobile objects. RFID technology offers an interesting solution for localizing the nodes, because passive RFID tags can store position-unit information associated with their unique tag IDs. Based on the tag pattern, an algebraic approach can solve the 2D trajectory tracking problem. However, the tracking accuracy of this approach depends strongly on the tag position distribution and the position unit, and it can be inaccurate for erratic trajectories. We therefore apply and evaluate probabilistic approaches, such as SLAM (Simultaneous Localization and Mapping), for RFID tag-based trajectory tracking. In this paper, we propose an RFID tag-based SLAM algorithm for 2D trajectory tracking. A technique called map adjustment is also proposed to increase the efficiency of the algorithm. The simulation results show that the approach can improve accuracy for some parts of the tracked trajectory compared with the algebraic RFID approach. Limitations and future work are given in the conclusion.
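
The abstract does not give the algorithm itself, so the sketch below is a deliberately simplified particle-filter localization step that treats detections of tags with known positions as the measurement; a full SLAM formulation would also estimate the tag map, and all ranges, noise levels, and thresholds here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, motion, detected_tags,
                         tag_positions, read_range=1.0, motion_noise=0.05):
    """One predict/update step of a simplified particle filter that uses
    passive RFID detections as the measurement.

    particles: (N, 2) candidate object positions; weights: (N,) normalized.
    detected_tags: ids of the tags currently read by the reader.
    tag_positions: dict mapping tag id -> known (x, y) position of that tag.
    """
    # Predict: apply the assumed motion plus Gaussian noise.
    particles = particles + motion + rng.normal(0.0, motion_noise, particles.shape)

    # Update: a particle is plausible if every detected tag lies within read range.
    for tag_id in detected_tags:
        dist = np.linalg.norm(particles - np.asarray(tag_positions[tag_id]), axis=1)
        weights = weights * np.exp(-(np.maximum(dist - read_range, 0.0) ** 2) / 0.1)
    weights = weights / weights.sum()

    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))

    return particles, weights

# The weighted mean, weights @ particles, gives the current position estimate.
```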


International Conference on Image Processing | 1994

Face segmentation using fuzzy reasoning

Claude C. Chibelushi; Farzin Deravi; John S. D. Mason

The paper presents a face segmentation architecture using fuzzy inference. The head of a talker and two key structural features of the face (eyes and mouth) are located based on temporal and spatial information extracted from a head-and-shoulder image sequence. The architecture is modular, and the segmentation uses a coarse-to-fine fuzzy reasoning strategy implemented across a three-level multi-resolution image pyramid. Results illustrating the performance of the system are given.
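
A coarse-to-fine fuzzy strategy over an image pyramid can be sketched by combining a temporal cue and a spatial cue with a fuzzy AND at each level and carrying the coarser decision down to the finer levels. The specific cues, hue range, and membership functions below are ad-hoc illustrative choices rather than the paper's rule base.

```python
import cv2
import numpy as np

def fuzzy_face_membership(frame_bgr, prev_bgr, levels=3):
    """Toy coarse-to-fine face-likeness map: at each pyramid level, fuse a
    temporal cue (frame difference) and a spatial cue (skin-like hue) with
    a fuzzy AND (minimum), propagating the coarser decision to finer levels.
    """
    # Build Gaussian pyramids for both frames and order them coarsest-first.
    cur_pyr, prev_pyr = [frame_bgr], [prev_bgr]
    for _ in range(levels - 1):
        cur_pyr.append(cv2.pyrDown(cur_pyr[-1]))
        prev_pyr.append(cv2.pyrDown(prev_pyr[-1]))
    cur_pyr, prev_pyr = cur_pyr[::-1], prev_pyr[::-1]

    membership = None
    for cur, prev in zip(cur_pyr, prev_pyr):                          # coarse -> fine
        motion = cv2.absdiff(cur, prev).mean(axis=2) / 255.0          # temporal cue in [0, 1]
        hue = cv2.cvtColor(cur, cv2.COLOR_BGR2HSV)[:, :, 0].astype(float)
        skin = np.clip(1.0 - np.abs(hue - 15.0) / 30.0, 0.0, 1.0)     # crude skin-hue membership
        level_m = np.minimum(motion, skin).astype(np.float32)         # fuzzy AND of the two cues
        if membership is not None:
            # Carry the coarser decision down: upsample it and AND it in.
            coarse = cv2.resize(membership, (level_m.shape[1], level_m.shape[0]))
            level_m = np.minimum(level_m, coarse)
        membership = level_m
    return membership   # face-likeness map at the finest resolution, values in [0, 1]
```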

Collaboration


Dive into Claude C. Chibelushi's collaborations.

Top Co-Authors

Mansour Moniri (Staffordshire University)
Fabrice Bourel (Staffordshire University)
Adrian A. Low (Staffordshire University)
Mohamed Sedky (Staffordshire University)
Po Yang (Liverpool John Moores University)
Wenyan Wu (Staffordshire University)
Adel Aneiba (Staffordshire University)
Amr El-Helw (Staffordshire University)