Edward Chilton
University of Surrey
Publications
Featured research published by Edward Chilton.
Acoustics Research Letters Online-arlo | 2004
Ioannis Paraskevas; Edward Chilton
The increasing demand for the retrieval and classification of audio utterances from multimedia databases gives rise to the need for effective feature extraction techniques. Most recent techniques employ temporal features and magnitude spectral features. In the proposed method, we use both the magnitude and phase spectra of the signals to derive the features. Once the discontinuity problems of phase are overcome, phase can be used as an additional feature stream. Experimental results from ten classes of gunshots show that, for certain classes, there is an improvement of 14% when both magnitude and phase information is employed, compared with the case when only the magnitude feature vector is used. The results also show that the reliability of the method is increased, demonstrating the complementary nature of magnitude and phase.
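The abstract gives no implementation detail, but the core idea, a per-frame feature vector built from both the magnitude spectrum and an unwrapped phase spectrum, can be sketched as below. The frame length, FFT size and window are assumptions for illustration, not values taken from the paper.

    # Minimal sketch (not the authors' exact pipeline): build a per-frame
    # feature vector from both the magnitude spectrum and the unwrapped
    # phase spectrum.
    import numpy as np

    def frame_features(frame, n_fft=512):
        """Return magnitude and unwrapped-phase features for one frame."""
        windowed = frame * np.hamming(len(frame))
        spectrum = np.fft.rfft(windowed, n=n_fft)
        magnitude = np.abs(spectrum)
        # np.unwrap removes the 2*pi discontinuities so that phase can serve
        # as a smooth, additional feature stream alongside magnitude.
        phase = np.unwrap(np.angle(spectrum))
        return np.concatenate([magnitude, phase])

    # Example: a 25 ms frame of 16 kHz audio (400 samples of noise as a stand-in).
    features = frame_features(np.random.randn(400))
    print(features.shape)  # (514,) -> 257 magnitude bins + 257 phase bins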
international conference on electronics circuits and systems | 2001
Edward Gatt; Joseph Micallef; Paul Micallef; Edward Chilton
Among speech researchers, it is widely believed that Hidden Markov Models (HMMs) are the most successful modelling approach for acoustic events in speech recognition. However, common assumptions limit the classification abilities of HMMs, and these can be relaxed by introducing neural networks into the HMM framework. With today's advances in VLSI technology, artificial neural networks (ANNs) can be integrated into a single chip offering the circuit complexity required to attain both high recognition accuracy and an improved learning time. Analogue implementations are considered due to their high processing speeds. The relative performance of different speech coding parameters for use with two different ANN architectures that lend themselves to analogue hardware implementation is investigated. In this case, the dynamic ranges of the different coefficients need to be taken into consideration, since they will affect the performance of the analogue chip due to the scaling of the coefficients to voltage signals. The hardware requirements for implementing the two architectures are then discussed.
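As an illustration of the scaling issue mentioned above, the sketch below rescales each coefficient dimension into a fixed voltage range before it would be presented to an analogue network. The voltage range and the coefficient dimensions are invented for the example and are not taken from the paper.

    # Illustrative sketch only: map speech coefficients into the voltage range
    # an analogue chip can accept. Range and coefficients are assumptions.
    import numpy as np

    def scale_to_voltage(coeffs, v_min=-1.0, v_max=1.0):
        """Linearly rescale a coefficient matrix (frames x dims) to
        [v_min, v_max], one scale per dimension, so dimensions with a large
        dynamic range do not dominate the analogue inputs."""
        lo = coeffs.min(axis=0, keepdims=True)
        hi = coeffs.max(axis=0, keepdims=True)
        span = np.where(hi - lo == 0, 1.0, hi - lo)   # avoid division by zero
        return v_min + (coeffs - lo) / span * (v_max - v_min)

    # Example: 100 frames of 12 hypothetical cepstral coefficients.
    scaled = scale_to_voltage(np.random.randn(100, 12) * np.arange(1, 13))
    print(scaled.min(), scaled.max())  # -1.0 1.0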
ieee international workshop on cellular neural networks and their applications | 2000
Edward Gatt; Joseph Micallef; Edward Chilton
The paper proposes an analog VLSI neural network chip, which can be cascaded in order to develop a time-delay neural network system for phoneme recognition. Backpropagation learning has been adopted to train the network to recognise phoneme frames extracted from the TIMIT database. A prototype chip, implemented using 2.0 μm double-metal, double-poly CMOS technology, is also described together with its specifications.
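A purely software sketch of the time-delay idea (frame context captured with 1-D convolutions) is given below. The layer sizes, the 39-phoneme output and the use of PyTorch are assumptions for illustration and bear no relation to the analogue chip itself.

    # Software sketch of a time-delay network; the on-chip analogue
    # implementation described in the paper is of course quite different.
    import torch
    import torch.nn as nn

    class TinyTDNN(nn.Module):
        def __init__(self, n_features=16, n_phonemes=39):
            super().__init__()
            # Each Conv1d layer looks at a small window of neighbouring
            # frames, which gives a TDNN its tolerance to time shifts.
            self.net = nn.Sequential(
                nn.Conv1d(n_features, 32, kernel_size=3), nn.Sigmoid(),
                nn.Conv1d(32, 32, kernel_size=3), nn.Sigmoid(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(32, n_phonemes),
            )

        def forward(self, x):          # x: (batch, n_features, n_frames)
            return self.net(x)

    # Example: a batch of 8 snippets, 16 features x 15 frames each.
    logits = TinyTDNN()(torch.randn(8, 16, 15))
    print(logits.shape)  # torch.Size([8, 39])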
international symposium on communications, control and signal processing | 2008
Edward Gatt; Joseph Micallef; Edward Chilton
This paper presents the implementation of a time-delay radial basis function (TD-RBF) neural network (NN) using three VLSI neural network systems: 1) a chip for implementing self-organising maps; 2) a radial basis function chip; and 3) a back-propagation learning chip. The first chip was implemented using mixed-mode technology, while the other two chips used analogue technology. The chips have been fabricated, and a TD-RBF NN has been applied successfully to the task of phoneme recognition. Analogue technology has been adopted mainly in order to attain the high processing speeds that analogue neural networks can achieve. Analogue technology also provides the benefit of designs with reduced power dissipation, low-voltage operation and lower area cost.
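The three-stage pipeline can be mimicked in software as a rough sketch: centre selection standing in for the SOM stage, a Gaussian RBF hidden layer, and a trained read-out. All sizes, the random data and the least-squares read-out are illustrative choices, not details from the paper.

    # Rough software analogue of the TD-RBF pipeline the chips implement.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 26))        # 500 context frames x 26 features
    y = rng.integers(0, 10, size=500)         # 10 hypothetical phoneme classes
    Y = np.eye(10)[y]                         # one-hot targets

    centres = X[rng.choice(len(X), 40, replace=False)]   # stage 1: 40 centres
    width = 2.0

    def rbf_layer(X, centres, width):
        # Squared distances between every frame and every centre.
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))            # stage 2: Gaussian responses

    H = rbf_layer(X, centres, width)
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)            # stage 3: read-out weights
    pred = H @ W
    print("training accuracy:", (pred.argmax(1) == y).mean())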
international conference on electronics, circuits, and systems | 2005
Edward Gatt; Joseph Micallef; Edward Chilton
This paper presents a back-propagation neural network for phoneme recognition. The neural network has been implemented on-chip using 0.35 μm three-metal, dual-poly CMOS technology. The results obtained for multi-layer perceptron (MLP) and time-delay neural network (TDNN) implementations for phoneme recognition systems are presented. The paper also presents the performance characteristics for the chip.
Journal of the Acoustical Society of America | 2003
Edward Chilton; Ioannis Paraskevas
The fine classification of audio utterances is an important problem because the extracted features need to be very accurate in order to support effective classification. In this paper, results are presented for a fine classification problem: the classification of two groups of different kinds of gunshots. The problem of accurate classification can be divided into two parts: (i) feature extraction and (ii) classification. The more effective the feature extraction, the more effectively the classifier can categorize the various audio samples. In this paper, a novel method for the automatic recognition of acoustic utterances is presented, using acoustic images as the basis for the feature extraction. The feature extraction process is based on the time-frequency distribution of an acoustic unit. A novel feature extraction technique based on the statistical analysis of the spectrogram, the Hartley transform (distribution) and the Choi–Williams distribution of the data is reported...
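As a rough sketch of the "acoustic image" approach, the snippet below computes one time-frequency distribution (an ordinary spectrogram) and summarises it with per-band statistics. The Hartley and Choi–Williams distributions used in the paper are not reproduced here, and the window sizes and statistics chosen are assumptions.

    # Sketch of time-frequency statistical features (spectrogram case only).
    import numpy as np
    from scipy.signal import spectrogram
    from scipy.stats import skew, kurtosis

    def tf_stat_features(signal, fs=16000):
        f, t, S = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
        S = np.log(S + 1e-12)                 # log power
        # Summarise how each frequency band behaves over time.
        return np.concatenate([S.mean(axis=1), S.var(axis=1),
                               skew(S, axis=1), kurtosis(S, axis=1)])

    feats = tf_stat_features(np.random.randn(16000))  # 1 s of noise as a stand-in
    print(feats.shape)  # 4 statistics x 129 frequency bins = (516,)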
international conference on electronics circuits and systems | 2001
Edward Gatt; Joseph Micallef; Edward Chilton
The ability of a neural network to learn on-line is crucial for real time speech recognition systems. In fact, analog neural network systems are preferred to their digital counterparts mainly due to the high speed that they can attain. However, the training method adopted also affects the performance of the neural network. The conventional error backpropagation network usually requires quite a long convergence time for correct weight adjustment since the sigmoid function of a conventional multilayer network gives a smooth response over a wide range of input values. In contrast, the Gaussian function responds significantly only to local regions of the space of input values. Thus, backpropagation training is more efficient in neural networks based on Gaussian functions or radial basis function (RBF) networks, than those based on sigmoid functions in the hidden layer. The paper proposes an analog VLSI chip, which can be cascaded in order to develop an RBF neural network system for phoneme recognition.
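The locality argument above can be illustrated numerically: a Gaussian (RBF) unit responds appreciably only near its centre, while a sigmoid unit varies over the whole input range. The values below are arbitrary and serve only to show this contrast.

    # Local Gaussian response vs global sigmoid response (illustrative only).
    import numpy as np

    x = np.linspace(-6, 6, 7)
    sigmoid = 1.0 / (1.0 + np.exp(-x))            # global, smooth response
    gaussian = np.exp(-(x - 0.0) ** 2 / 2.0)      # local response around centre 0
    for xi, s, g in zip(x, sigmoid, gaussian):
        print(f"x={xi:+.0f}  sigmoid={s:.3f}  gaussian={g:.3f}")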
international conference on electronics, circuits, and systems | 2005
Edward Gatt; Joseph Micallef; Edward Chilton
This paper presents the use of hardware self-organising map neural networks for phoneme recognition. The neural network has been implemented on-chip using 0.6 μm dual-metal, dual-poly CMOS technology. The results obtained for self-organising maps (SOM) for phoneme recognition systems are presented. The paper also presents tested performance characteristics for the chip.
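For readers unfamiliar with the algorithm the chip implements, a minimal software sketch of SOM training is shown below. The map size, feature dimension and learning parameters are arbitrary and unrelated to the hardware design.

    # Minimal sketch of self-organising map training, one sample at a time.
    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.standard_normal((1000, 12))            # e.g. 12-dim frame features
    grid = rng.standard_normal((8, 8, 12))            # 8x8 map of weight vectors

    def som_step(grid, x, lr=0.1, sigma=1.5):
        # Best-matching unit: node whose weight vector is closest to the input.
        d2 = ((grid - x) ** 2).sum(-1)
        bi, bj = np.unravel_index(d2.argmin(), d2.shape)
        # Neighbourhood function pulls nearby nodes towards the input as well.
        ii, jj = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
        return grid + lr * h[..., None] * (x - grid)

    for x in data:
        grid = som_step(grid, x)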
international conference on signal processing | 2006
Edward Chilton; Elham Hassanain
The Journal of Engineering | 2015
Ioannis Paraskevas; Maria Barbarosou; Edward Chilton