Cham Athwal
Birmingham City University
Publications
Featured research published by Cham Athwal.
IEEE Transactions on Plasma Science | 1985
Cham Athwal; K. H. Bayliss; R. Calder; R. V. Latham
It has been experimentally demonstrated that micrometer-sized particles of graphite deposited on the surface of copper and niobium electrodes can promote the field emission of electrons in the range of 5-20 MV·m⁻¹. From measurements of the current-voltage characteristic, electron spectrum, and emission image of such sites, it has been concluded that electrons are emitted by a mechanism similar to that operating at the naturally occurring sites responsible for prebreakdown electron emission. A hot-electron-based metal-insulator-metal (MIM) model is considered.
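For background only (this is the standard relation used to analyse prebreakdown current-voltage characteristics, not the paper's hot-electron MIM formulation itself), field-emission data of this kind are conventionally fitted to the Fowler-Nordheim relation:

```latex
% Fowler-Nordheim field-emission current density (standard background form;
% \beta is the effective field-enhancement factor, \phi the work function,
% A and B constants).
J \;=\; \frac{A\,(\beta E)^{2}}{\phi}\,
        \exp\!\left(-\frac{B\,\phi^{3/2}}{\beta E}\right)
```

A straight line on a plot of ln(J/E²) against 1/E is then taken as the signature of a field-emission-like mechanism.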
complex, intelligent and software intensive systems | 2012
Andrew M. Thomas; Hanifa Shah; Philip Moore; Peter Rayson; Anthony J. Wilcox; Keith Osman; Cain Evans; Craig Chapman; Cham Athwal; David While; Hai V. Pham; Sarah Mount
In our exciting world of pervasive computing and always-available mobile internet, meeting the educational needs of students has seen a growing trend toward collaborative electronic and mobile learning systems that build on the vision of Web 2.0. However, other trends relevant to modern students must not be ignored, including data freedom, brokerage and interconnectivity. Such factors are associated with the Internet of Things and the vision for Web 3.0, and so include the need for greater consideration of data context and educational personalization, both of which are important to the future of campus-based, distance and vocational study. Therefore, future education can be expected to require a deeper technological connection between students and learning environments, in a manner requiring significant use of sensors, mobile devices, cloud computing and rich-media visualization. This paper considers the challenges associated with adopting such a futuristic concept as a means of enriching learning materials and environments within a university context. It is concluded that much of the technology required to embrace the vision of Web 3.0 in education already exists, but that further research in key areas is required for the concept to achieve its full potential.
international conference on biometrics theory applications and systems | 2013
Igor Barros Barbosa; Theoharis Theoharis; Christian Schellewald; Cham Athwal
Transient biometrics, a new concept for biometric recognition, is introduced in this paper. The traditional perspective on biometric recognition systems concentrates on biometric characteristics that are as constant as possible (such as the eye retina), giving accuracy over time but also creating resistance to their use in non-critical applications due to the possibility of misuse. In contrast, transient biometrics is based on biometric characteristics that do change over time, aiming at increased acceptance in non-critical applications. We show that the fingernail is a transient biometric with a lifetime of approximately two months. Our evaluation datasets are available to the research community.
CMMR'11 Proceedings of the 8th international conference on Speech, Sound and Music Processing: embracing research in India | 2011
Ryan Stables; Cham Athwal; Jamie Bullock
A model is presented for the analysis and synthesis of low frequency human-like pitch deviation, as a replacement for existing modulation techniques in singing voice synthesis systems. Fundamental frequency (f0) measurements are taken from vocalists producing a selected range of utterances without vibrato and trends in the data are observed. A probabilistic function that provides natural sounding low frequency f0 modulation to synthesized singing voices is presented and the perceptual relevance is evaluated with subjective listening tests.
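As an illustrative sketch only (the paper's actual probabilistic function is not reproduced here), low-frequency f0 deviation of this kind is commonly approximated by adding slowly varying, low-pass-filtered noise, expressed in cents, to a target pitch contour:

```python
import numpy as np

def natural_f0_contour(f0_target_hz, duration_s, fs=100,
                       deviation_cents=15.0, cutoff_hz=3.0, seed=None):
    """Add low-frequency, human-like pitch deviation to a flat f0 target.

    Illustrative approximation only: low-pass-filtered Gaussian noise,
    expressed in cents, modulates the target frequency. The deviation
    depth and filter cutoff are assumed values, not those of the paper.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)

    # One-pole low-pass filter to keep only slow (< ~cutoff_hz) drift.
    alpha = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    drift = np.empty(n)
    acc = 0.0
    for i, x in enumerate(noise):
        acc = alpha * acc + (1.0 - alpha) * x
        drift[i] = acc

    # Normalise and scale to the desired deviation depth in cents.
    drift = deviation_cents * drift / (np.abs(drift).max() + 1e-12)

    # Convert the cent deviation into a frequency contour.
    return f0_target_hz * 2.0 ** (drift / 1200.0)

# Example: a 2-second A3 (220 Hz) with gentle, sub-vibrato drift.
contour = natural_f0_contour(220.0, 2.0, seed=1)
```

The cutoff is kept well below typical vibrato rates so the result reads as drift rather than vibrato.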
international conference on hybrid information technology | 2012
Greg Hough; Cham Athwal; Ian Williams
Virtual studios typically use a layering method to achieve occlusion. A virtual object can be manually set in the foreground or background layer by a human controller, allowing it to appear in front of or behind an actor. Single point actor tracking systems have been used in virtual studios to automate occlusions. However, the suitability of single point tracking diminishes when considering more ambitious applications of an interactive virtual studio. As interaction often occurs at the extremities of the actor’s body, the automated occlusion offered by single point tracking is insufficient and multiple-point actor tracking is justified. We describe ongoing work towards an automatic occlusion system based on multiple-point skeletal tracking that is compatible with existing virtual studios. We define a set of occlusions required in the virtual studio; describe methods for achieving them; and present our preliminary results.
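A minimal sketch of the underlying idea (assumed for illustration, not the authors' implementation): with multiple tracked skeletal points, the compositing layer of a virtual object can be chosen per joint by comparing each tracked depth with the object's depth:

```python
from dataclasses import dataclass

@dataclass
class Joint:
    name: str
    depth_m: float  # distance of the tracked joint from the studio camera

def occlusion_layers(virtual_object_depth_m, joints, margin_m=0.05):
    """Decide, per tracked joint, whether the virtual object should be
    composited in front of or behind the actor.

    Illustrative only: real systems resolve occlusion per pixel or per limb
    segment; the margin is an assumed tolerance to avoid flicker when a
    joint sits near the object's depth.
    """
    layers = {}
    for joint in joints:
        if virtual_object_depth_m < joint.depth_m - margin_m:
            layers[joint.name] = "foreground"   # object in front of this joint
        elif virtual_object_depth_m > joint.depth_m + margin_m:
            layers[joint.name] = "background"   # object behind this joint
        else:
            layers[joint.name] = "contact"      # ambiguous: near-equal depth
    return layers

# Example: object at 2.0 m, actor's hand reaching towards the camera.
print(occlusion_layers(2.0, [Joint("head", 2.4), Joint("right_hand", 1.8)]))
```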
international symposium on mixed and augmented reality | 2014
Gregory Hough; Ian Williams; Cham Athwal
This paper presents a method for measuring the magnitude and impact of errors in mixed reality interactions. We define the errors as measurements of hand placement accuracy and consistency within bimanual movement of an interactive virtual object. First, a study is presented which illustrates the amount of variability between the hands and the mean distance of the hands from the surfaces of a common virtual object. The results allow a discussion of the most significant factors which should be considered in the context of developing realistic mixed reality interaction systems. The degree of error was found to be independent of interaction speed, whilst the size of the virtual object and the position of the hands are significant. Second, a further study illustrates how perceptible these errors are to a third-person viewer of the interaction (e.g. an audience member). We found that interaction errors arising from the overestimation of an object surface affected the visual credibility for the viewer considerably more than an underestimation of the object. This work is presented within the application of a real-time Interactive Virtual Television Studio, which offers convincing real-time interaction for live TV production. We believe the results and methodology presented here could also be applied to designing, implementing and assessing interaction quality in many other Mixed Reality applications.
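As a hedged sketch of the kind of measurement described (geometry and sign convention are assumptions, not the paper's exact metric), hand placement error can be expressed as the signed distance of each tracked hand from the virtual object's surface, here for an axis-aligned box:

```python
import numpy as np

def signed_distance_to_box(point, box_center, box_half_extents):
    """Signed distance from a 3D point to an axis-aligned box.

    Negative values mean the hand has passed through (overestimated) the
    virtual object's surface; positive values mean it stops short of it
    (underestimation). Illustrative geometry only.
    """
    p = np.asarray(point, dtype=float) - np.asarray(box_center, dtype=float)
    q = np.abs(p) - np.asarray(box_half_extents, dtype=float)
    outside = np.linalg.norm(np.maximum(q, 0.0))   # distance when outside
    inside = min(q.max(), 0.0)                     # negative depth when inside
    return outside + inside

# Example: both hands relative to a 0.3 m cube centred at the origin.
left_err = signed_distance_to_box([-0.18, 0.0, 0.0], [0, 0, 0], [0.15] * 3)
right_err = signed_distance_to_box([0.12, 0.0, 0.0], [0, 0, 0], [0.15] * 3)
bimanual_inconsistency = abs(left_err - right_err)
```

Averaging the per-frame distances gives a placement-accuracy measure, while the spread between the two hands gives a consistency measure.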
ECMAST '97 Proceedings of the Second European Conference on Multimedia Applications, Services and Techniques | 1997
Alan Cole; Jimmy Robinson; Cham Athwal
This paper describes the requirements for, and the development and implementation of, an interactive multimedia system for the visualisation and analysis of the results from automobile impact tests and impact simulations. However, the technology described can be applied to a much wider range of scientific and engineering test scenarios.
international symposium on mixed and augmented reality | 2014
Gregory Hough; Ian Williams; Cham Athwal
This demonstration is a live example of the experiment presented in [1], namely a method of assessing the visual credibility of a scene in which a real person interacts with a virtual object in real time. Inconsistencies created by actors' incorrect estimation of the virtual object are measured through a series of videos, each containing a defined visual error and rated for interaction credibility on a scale of 1–5 by conference delegates.
workshop on applications of signal processing to audio and acoustics | 2013
Dominic Ward; Cham Athwal; Münevver Köküer
In this paper, we present an efficient loudness model applicable to time-varying sounds. We use the model of Glasberg and Moore (J. Audio Eng. Soc., 2002) as the basis for our developments, proposing a number of optimization techniques to reduce the computational complexity at each stage of the model. Efficient alternatives to computing the multi-resolution DFT, excitation pattern and pre-cochlear filter are presented. Absolute threshold and equal-loudness-contour predictions are computed and compared against both steady-state and time-varying loudness models to evaluate the combined accuracy of these techniques in the frequency domain. Finally, computational costs and loudness errors are quantified for a range of time-varying stimuli, demonstrating that the optimized model can execute approximately 50 times faster within tolerable error bounds.
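A minimal sketch of one of the ideas mentioned, a multi-resolution DFT that uses longer analysis windows at lower frequencies (the band edges and window lengths below are illustrative placeholders, not the model's actual parameters):

```python
import numpy as np

def multires_spectrum(x, fs, bands=((20, 80, 4096), (80, 500, 2048),
                                    (500, 1250, 1024), (1250, 2540, 512),
                                    (2540, 4050, 256), (4050, 15000, 128))):
    """Composite spectrum with a different DFT length per frequency band.

    Longer windows give finer frequency resolution at low frequencies;
    shorter windows give better time resolution at high frequencies.
    Assumes len(x) is at least the longest window length.
    """
    parts = []
    for lo, hi, n in bands:
        seg = x[:n] * np.hanning(n)              # analyse the first n samples
        spec = np.abs(np.fft.rfft(seg)) / n
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        keep = (freqs >= lo) & (freqs < hi)      # keep only this band's bins
        parts.append((freqs[keep], spec[keep]))
    f = np.concatenate([f for f, _ in parts])
    mag = np.concatenate([m for _, m in parts])
    return f, mag

# Example: 1 kHz tone sampled at 32 kHz.
fs = 32000
t = np.arange(fs) / fs
f, mag = multires_spectrum(np.sin(2 * np.pi * 1000 * t), fs)
```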
international conference on hybrid information technology | 2012
Cham Athwal
This paper describes a novel formulation of the Hough Transform technique to detect lines that can be approximated by polynomial curves between given end-points in bitmap images. The application domain is the analysis of the deformation of flexible linear structures throughout a sequence of video frames. In many applications of this type it is possible to manually or semi-automatically identify two end-points of the flexible structure, which then remain static or can be separately tracked throughout the video sequence. The example discussed in this study is the motion of belts between pulleys. The Hough Transform is applied to the parameters of polynomial expressions based on combinations of simple parabolic and cubic curves. We demonstrate that these curves are rich enough to represent those typically found in the application domains considered. We present mathematical and algorithmic representations that enable intuitive and efficient computation of the Hough Transforms.
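As a hedged illustration of the general approach (the parameterisation below is an assumption, not necessarily the paper's formulation): with both end-points fixed, a one-parameter family of parabolic curves can be voted on by accumulating edge-pixel evidence over a single sag parameter:

```python
import numpy as np

def hough_parabola_between_endpoints(edge_mask, p0, p1, bows, tol=1.5):
    """Vote for the parabolic curve joining two fixed end-points that best
    matches an edge map.

    Curve family: c(t) = (1-t)*p0 + t*p1 + bow * 4*t*(1-t) * n_hat, where
    n_hat is the unit normal to the chord p0->p1 and `bow` is the single
    free parameter (perpendicular deflection at mid-span). This is an
    illustrative parameterisation only.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    chord = p1 - p0
    n_hat = np.array([-chord[1], chord[0]]) / np.linalg.norm(chord)

    rows, cols = np.nonzero(edge_mask)           # edge pixels (y, x)
    votes = np.zeros(len(bows))
    t = np.linspace(0.0, 1.0, 200)
    for i, bow in enumerate(bows):
        # Sample the candidate curve and count edge pixels lying close to it.
        curve = ((1 - t)[:, None] * p0 + t[:, None] * p1
                 + (bow * 4 * t * (1 - t))[:, None] * n_hat)
        d = np.min(np.hypot(cols[None, :] - curve[:, 0:1],
                            rows[None, :] - curve[:, 1:2]), axis=0)
        votes[i] = np.sum(d < tol)
    return bows[int(np.argmax(votes))], votes

# Example: a synthetic sagging belt between two pulleys, true bow of 40 px.
img = np.zeros((120, 200), dtype=bool)
xs = np.arange(10, 190)
s = (xs - 10) / 179.0
img[np.round(30 + 40 * 4 * s * (1 - s)).astype(int), xs] = True
best_bow, _ = hough_parabola_between_endpoints(img, (10, 30), (189, 30),
                                               np.arange(0, 60, 2.0))
```

A cubic term can be voted on in the same way by adding a second accumulator dimension for an asymmetric deflection component.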