Chris Christodoulou
University of Cyprus
Publications
Featured research published by Chris Christodoulou.
Systems, Man and Cybernetics | 2004
Andreas Lanitis; Chris Christodoulou
We describe a quantitative evaluation of the performance of different classifiers in the task of automatic age estimation. In this context, we generate a statistical model of facial appearance, which is subsequently used as the basis for obtaining a compact parametric description of face images. The aim of our work is to design classifiers that accept the model-based representation of unseen images and produce an estimate of the age of the person in the corresponding face image. For this application, we have tested different classifiers: a classifier based on the use of quadratic functions for modeling the relationship between face model parameters and age, a shortest distance classifier, and artificial neural network-based classifiers. We also describe variations to the basic method where we use age-specific and/or appearance-specific age estimation methods. In this context, we use age estimation classifiers for each age group and/or classifiers for different clusters of subjects within our training set. In those cases, part of the classification procedure is devoted to choosing the most appropriate classifier for the subject/age range in question, so that more accurate age estimates can be obtained. We also present comparative results concerning the performance of humans and computers in the task of age estimation. Our results indicate that machines can estimate the age of a person almost as reliably as humans.
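To make the quadratic-function approach concrete, here is a minimal sketch, using synthetic data, of fitting a quadratic aging function that maps a compact face-model parameter vector to an age estimate; it is not the classifier implementation evaluated in the paper, and all names, values and the synthetic data are purely illustrative.

```python
import numpy as np

# Synthetic stand-in for model-based face representations: each row is a
# compact parameter vector extracted from a statistical appearance model.
rng = np.random.default_rng(0)
n_samples, n_params = 200, 5
B = rng.normal(size=(n_samples, n_params))
ages = 20 + 10 * B[:, 0] + 3 * B[:, 1] ** 2 + rng.normal(scale=2, size=n_samples)

# Quadratic aging function: age ~ w0 + w1.b + w2.b^2 (element-wise square),
# fitted by least squares, in the spirit of the quadratic classifier described above.
X = np.hstack([np.ones((n_samples, 1)), B, B ** 2])
w, *_ = np.linalg.lstsq(X, ages, rcond=None)

# Estimate the age of an unseen face from its model parameters.
b_new = rng.normal(size=n_params)
x_new = np.concatenate([[1.0], b_new, b_new ** 2])
print("estimated age:", float(x_new @ w))
```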
Neural Computation | 1997
Guido Bugmann; Chris Christodoulou; John G. Taylor
Partial reset is a simple and powerful tool for controlling the irregularity of spike trains fired by a leaky integrator neuron model with random inputs. In particular, a single neuron model with a realistic membrane time constant of 10 ms can reproduce the highly irregular firing of cortical neurons reported by Softky and Koch (1993). In this article, the mechanisms by which partial reset affects the firing pattern are investigated. It is shown theoretically that partial reset is equivalent to the use of a time-dependent threshold, similar to a technique proposed by Wilbur and Rinzel (1983) to produce high irregularity. This equivalent model allows establishing that temporal integration and fluctuation detection can coexist and cooperate to cause highly irregular firing. This study also reveals that reverse correlation curves cannot be used reliably to assess the causes of firing. For instance, they do not reveal temporal integration when it takes place. Further, the peak near time zero does not always indicate coincidence detection. An alternative qualitative method is proposed here for that latter purpose. Finally, it is noted that as the reset becomes weaker, the firing pattern shows a progressive transition from regular firing, to random, to temporally clustered, and eventually to bursting firing. Concurrently, the slope of the transfer function increases. Thus, simulations suggest a correlation between high gain and highly irregular firing.
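The partial-reset idea can be sketched in a few lines: a leaky integrator neuron driven by noisy input is reset, after each spike, to a fraction of its threshold rather than to rest. The sketch below is not the paper's exact model; the drive, noise level and time constants are assumed for illustration only.

```python
import numpy as np

def lif_cv(reset_fraction, T=2000.0, dt=0.1, tau=10.0, v_th=1.0,
           mu=0.15, sigma=0.1, seed=1):
    """Simulate a leaky integrator neuron with noisy drive and partial somatic
    reset, and return the coefficient of variation (CV) of its interspike intervals."""
    rng = np.random.default_rng(seed)
    v, t, spikes = 0.0, 0.0, []
    for _ in range(int(T / dt)):
        t += dt
        # Euler step of the leaky integration of a noisy input current.
        v += dt * (-v / tau + mu) + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            spikes.append(t)
            v = reset_fraction * v_th   # partial reset: 0 = full reset, near 1 = weak reset
    isi = np.diff(spikes)
    return isi.std() / isi.mean() if len(isi) > 1 else float("nan")

# Weaker reset (larger fraction) typically yields more irregular firing in this regime.
for beta in (0.0, 0.5, 0.9):
    print(f"reset to {beta:.1f} of threshold: CV = {lif_cv(beta):.2f}")
```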
Neural Networks | 2002
Chris Christodoulou; Guido Bugmann; Trevor G. Clarkson
This paper presents a biologically inspired, hardware-realisable spiking neuron model, which we call the Temporal Noisy-Leaky Integrator (TNLI). The dynamic applications of the model as well as its applications in Computational Neuroscience are demonstrated and a learning algorithm based on postsynaptic delays is proposed. The TNLI incorporates temporal dynamics at the neuron level by modelling both the temporal summation of dendritic postsynaptic currents which have controlled delay and duration and the decay of the somatic potential due to its membrane leak. Moreover, the TNLI models the stochastic neurotransmitter release by real neuron synapses (with probabilistic RAMs at each input) and the firing times including the refractory period and action potential repolarisation. The temporal features of the TNLI make it suitable for use in dynamic time-dependent tasks like its application as a motion and velocity detector system presented in this paper. This is done by modelling the experimental velocity selectivity curve of the motion sensitive H1 neuron of the visual system of the fly. This application of the TNLI indicates its potential applications in artificial vision systems for robots. It is also demonstrated that Hebbian-based learning can be applied in the TNLI for postsynaptic delay training based on coincidence detection, in such a way that an arbitrary temporal pattern can be detected and recognised. The paper also demonstrates that the TNLI can be used to control the firing variability through inhibition; with 80% inhibition to concurrent excitation, firing at high rates is nearly consistent with a Poisson-type firing variability observed in cortical neurons. It is also shown with the TNLI, that the gain of the neuron (slope of its transfer function) can be controlled by the balance between inhibition and excitation, the gain being a decreasing function of the proportion of inhibitory inputs. Finally, in the case of perfect balance between inhibition and excitation, i.e. where the average input current is zero, the neuron can still fire as a result of membrane potential fluctuations. The firing rate is then determined by the average input firing rate. Overall this work illustrates how a hardware-realisable neuron model can capitalise on the unique computational capabilities of biological neurons.
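The TNLI itself is a hardware-oriented model; the software sketch below only caricatures two of the ingredients described above, namely probabilistic (pRAM-like) transmission at each input and leaky somatic integration of dendritic current pulses of controlled duration. All parameter values here are assumptions chosen for illustration, not the published ones.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1.0, 500.0                       # ms
steps = int(T / dt)
n_inputs = 20
p_release = 0.5                          # probabilistic transmission per input spike
psc_duration = 5                         # steps each post-synaptic current pulse lasts
tau_m, v_th, v_reset = 10.0, 1.0, 0.0    # somatic leak, threshold, reset

input_rate = 0.05                        # spike probability per input per step
psc = np.zeros(steps + psc_duration)     # summed dendritic current over time
v, spike_times = 0.0, []

for t in range(steps):
    # Each input spike is transmitted only with probability p_release (stochastic
    # release) and contributes a rectangular current pulse of fixed duration.
    spikes_in = rng.random(n_inputs) < input_rate
    transmitted = spikes_in & (rng.random(n_inputs) < p_release)
    psc[t:t + psc_duration] += 0.05 * transmitted.sum()

    # Leaky somatic integration of the summed dendritic current.
    v += dt * (-v / tau_m + psc[t])
    if v >= v_th:
        spike_times.append(t * dt)
        v = v_reset

print(f"{len(spike_times)} output spikes in {T:.0f} ms")
```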
Neurocomputing | 2001
Chris Christodoulou; Guido Bugmann
A number of models have been produced recently to explain the high variability of natural spike trains (Softky and Koch, J. Neurosci. 13 (1) (1993) 334). These models use a range of different biological mechanisms including partial somatic reset, concurrent inhibition and excitation, correlated inputs and network dynamics effects. In this paper we examine which model is more likely to reflect the mechanisms used in the brain and we evaluate the ability of each model to reproduce the experimental coefficient of variation (CV) vs. mean interspike interval (ISI) curves (CV = standard deviation/mean ISI). The results show that the partial somatic reset mechanism is the most likely candidate to reflect the mechanism used in the brain for reproducing irregular firing.
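For reference, a CV vs. mean ISI curve of the kind used in this comparison can be traced out from any simulated spike train. The generic sketch below sweeps the drive to a simple noisy leaky integrate-and-fire neuron (not one of the published models; all parameters are assumptions) and records (mean ISI, CV) pairs, with CV = 1 as the Poisson benchmark.

```python
import numpy as np

def isi_stats(mu, tau=10.0, v_th=1.0, sigma=0.3, dt=0.1, T=5000.0, seed=0):
    """Return (mean ISI, CV) for a noisy leaky integrate-and-fire neuron with drive mu."""
    rng = np.random.default_rng(seed)
    v, last, isis, t = 0.0, None, [], 0.0
    for _ in range(int(T / dt)):
        t += dt
        v += dt * (-v / tau + mu) + sigma * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            if last is not None:
                isis.append(t - last)
            last, v = t, 0.0
    isis = np.asarray(isis)
    return isis.mean(), isis.std() / isis.mean()

# Sweep the mean drive to trace out a (mean ISI, CV) curve; CV = 1 would be Poisson-like.
for mu in (0.06, 0.08, 0.10, 0.15, 0.20):
    m, cv = isi_stats(mu)
    print(f"drive {mu:.2f}: mean ISI = {m:6.1f} ms, CV = {cv:.2f}")
```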
Expert Systems with Applications | 2011
Maria Moustra; Marios N. Avraamides; Chris Christodoulou
The aim of this study is to evaluate the performance of artificial neural networks in predicting earthquakes occurring in the region of Greece with the use of different types of input data. More specifically, two different case studies are considered: the first concerns the prediction of the earthquake magnitude (M) of the following day and the second the prediction of the magnitude of the impending seismic event following the occurrence of pre-seismic signals, the so-called Seismic Electric Signals (SES), which are believed to occur prior to an earthquake, as well as the time lag between the SES and the seismic event itself. The neural network developed for the first case study used only time series magnitude data as input, with the output being the magnitude of the following day. The resulting accuracy rate was 80.55% for all seismic events, but only 58.02% for the major seismic events (M>=5.2 on the Richter scale). Our second case study for earthquake prediction uses SES as input data to the neural networks developed. This case study is separated into two parts, with the differentiating element being the way of constructing the missing SES. In the first part, where the missing SES were constructed randomly for all the seismic events, the resulting accuracy rates for the magnitude of upcoming seismic events were just over 60%. In the second part, where the missing SES were constructed only for the major seismic events (M>=5.0 on the Richter scale) by using neural networks in reverse, the resulting accuracy rate when predicting only the magnitude was 84.01%, while when predicting both the magnitude and the time lag it was 83.56% for the magnitude and 92.96% for the time lag. Based on the results, we conclude that when the neural networks are trained using the appropriate data, they are able to generalise and predict unknown seismic events relatively accurately.
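As a rough illustration of the first case study's setup only (the actual network architecture, catalogue data and accuracy criteria are those described in the paper and are not reproduced here), a feed-forward regressor can be trained on a sliding window of past daily magnitudes to predict the next day's magnitude. The data below are synthetic and the window length, layer size and 0.5-unit tolerance are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic daily "magnitude" series standing in for a real earthquake catalogue.
series = 3.0 + 0.5 * np.sin(np.arange(2000) / 30.0) + rng.normal(scale=0.3, size=2000)

window = 7                                     # past week of magnitudes as input
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]                            # magnitude of the following day

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

# "Accuracy" here is just the fraction of test days predicted within 0.5 magnitude
# units; the paper's own accuracy criterion may differ.
err = np.abs(model.predict(X[split:]) - y[split:])
print("fraction within 0.5:", (err < 0.5).mean())
```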
Systems, Man and Cybernetics | 2001
Trevor G. Clarkson; Chris Christodoulou; Yelin Guan; Denise Gorse; David A. Romano-Critchley; John G. Taylor
Speaker identification may be employed as part of a security system requiring user authentication. In this case, the claimed identity of the user is known from a magnetic card and PIN number, for example, and an utterance is requested to confirm the identity of the user. A fast response is necessary in the confirmation phase and a fast registration process for new users is desirable. The time encoded signal processing and recognition (TESPAR) digital language is used to preprocess the speech signal. A speaker cannot be identified directly from the single TESPAR vector since there is a highly nonlinear relationship between the vector's components, such that the vectors are not linearly separable. Therefore, the vector and its characteristics suggest that classification using a neural network will provide an effective solution. Good classification performance has been achieved using a probabilistic RAM (pRAM) neuron. Four pRAM neural network architectures are presented. A performance of approximately 97% correct classifications has been obtained, which is similar to results obtained elsewhere (M. Sharma and R.J. Mammone, 1996), and slightly better than an MLP network. No speech recognition stage was used in obtaining these results, so the performance relates only to identifying a speaker's voice and is therefore independent of the spoken phrase. This has been achieved in a hardware-realizable system which may be incorporated into a smart-card or similar application.
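TESPAR coding and the pRAM hardware are specialised components not sketched here; the snippet below only illustrates the stated motivation for a neural classifier, namely that feature vectors whose components are related nonlinearly defeat a linear classifier but not a small MLP. The XOR-style vectors are synthetic and purely illustrative, not TESPAR data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic feature vectors whose class depends nonlinearly on the components
# (an XOR-like rule), so the two classes are not linearly separable.
X = rng.normal(size=(1000, 2))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)

linear = LogisticRegression().fit(X[:800], y[:800])
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X[:800], y[:800])

print("linear classifier accuracy:", linear.score(X[800:], y[800:]))
print("small MLP accuracy:        ", mlp.score(X[800:], y[800:]))
```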
BioSystems | 2000
Chris Christodoulou; Guido Bugmann
The effect of inhibition on the firing variability is examined in this paper using the biologically-inspired temporal noisy-leaky integrator (TNLI) neuron model. The TNLI incorporates hyperpolarising inhibition with negative current pulses of controlled shapes and it also separates dendritic from somatic integration. The firing variability is observed by looking at the coefficient of variation (CV; standard deviation/mean interspike interval) as a function of the mean interspike interval of firing (Δt), and by comparing the results with the theoretical curve for random spike trains, as well as looking at the interspike interval (ISI) histogram distributions. The results show that with 80% inhibition, firing at high rates (up to 200 Hz) is nearly consistent with a Poisson-type variability, which complies with the analysis of cortical neuron firing recordings by Softky and Koch [1993, J. Neurosci. 13(1), 334-350]. We also demonstrate that the mechanism by which inhibition increases the CV values is by introducing more short intervals in the firing pattern, as indicated by a small initial hump at the beginning of the ISI histogram distribution. The use of stochastic inputs and the separation of the dendritic and somatic integration which we model in the TNLI also affect the high-firing, near Poisson-type (explained in the paper) variability produced. We have also found that partial dendritic reset slightly increases the firing variability, especially at short ISIs.
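A generic sketch of the effect described above (not the TNLI itself; all parameter values are assumptions): a leaky integrate-and-fire neuron receives excitatory and inhibitory Poisson input, and increasing the proportion of inhibition relative to excitation tends to raise the CV of the output interspike intervals.

```python
import numpy as np

def output_cv(inh_fraction, n_exc=100, rate=20.0, w=0.1, tau=10.0,
              v_th=1.0, dt=0.1, T=10000.0, seed=0):
    """CV of output ISIs for a leaky I&F neuron with Poisson excitation and
    hyperpolarising inhibition; inh_fraction is the inhibitory/excitatory input ratio."""
    rng = np.random.default_rng(seed)
    n_inh = int(inh_fraction * n_exc)
    p = rate * dt / 1000.0                        # spike probability per input per step
    v, last, isis, t = 0.0, None, [], 0.0
    for _ in range(int(T / dt)):
        t += dt
        exc = rng.binomial(n_exc, p)
        inh = rng.binomial(n_inh, p)
        v += dt * (-v / tau) + w * exc - w * inh  # inhibition as negative current pulses
        if v >= v_th:
            if last is not None:
                isis.append(t - last)
            last, v = t, 0.0
    isis = np.asarray(isis)
    return isis.std() / isis.mean() if len(isis) > 1 else float("nan")

for f in (0.0, 0.4, 0.8):
    print(f"inhibition/excitation ratio {f:.1f}: CV = {output_cv(f):.2f}")
```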
Managerial Auditing Journal | 2010
Maria Krambia-Kapardis; Chris Christodoulou; Michalis Agathocleous
Purpose - The purpose of the paper is to test the use of artificial neural networks (ANNs) as a tool in fraud detection. Design/methodology/approach - Following a review of the relevant literature on fraud detection by auditors, the authors developed a questionnaire which they distributed to auditors attending a fraud detection seminar. The questionnaire was then used to develop seven ANNs to test the usage of these models in fraud detection. Findings - Utilizing exogenous and endogenous factors as input variables to the ANNs and developing seven different models, an average accuracy of 90 per cent was found for the fraud detection prediction model. It has, therefore, been demonstrated that ANNs can be used by auditors to identify fraud-prone companies. Originality/value - Whilst previous researchers have looked at empirical predictors of fraud, fraud risk assessment methods and mechanical fraud risk assessment methods, no other research has combined both exogenous and endogenous factors in developing ANNs to be used in fraud detection. Thus, auditors can use ANNs as a complement to other techniques at the planning stage of their audit to predict whether a particular audit client is likely to have been victimized by a fraudster.
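The specific exogenous and endogenous variables and the seven network configurations are those described in the paper. Purely to illustrate the general workflow, the sketch below trains a small feed-forward classifier on synthetic company-level features labelled fraud-prone or not; the feature names, labelling rule and network size are all invented for the example.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins: exogenous factors (environment-level indicators) and
# endogenous factors (company-level indicators); the labelling rule is invented.
exogenous = rng.normal(size=(n, 3))
endogenous = rng.normal(size=(n, 4))
X = np.hstack([exogenous, endogenous])
fraud_prone = (0.8 * X[:, 0] - 0.6 * X[:, 4]
               + rng.normal(scale=0.5, size=n) > 0).astype(int)

split = int(0.8 * n)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X[:split], fraud_prone[:split])
print("hold-out accuracy:", clf.score(X[split:], fraud_prone[split:]))
```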
PLOS ONE | 2013
Margarita Zachariou; Stephen P.H. Alexander; Stephen Coombes; Chris Christodoulou
Memories are believed to be represented in the synaptic pathways of vastly interconnected networks of neurons. The plasticity of synapses, that is, their strengthening and weakening depending on neuronal activity, is believed to be the basis of learning and establishing memories. An increasing number of studies indicate that endocannabinoids have a widespread action on brain function through modulation of synaptic transmission and plasticity. Recent experimental studies have characterised the role of endocannabinoids in mediating both short- and long-term synaptic plasticity in various brain regions including the hippocampus, a brain region strongly associated with cognitive functions, such as learning and memory. Here, we present a biophysically plausible model of cannabinoid retrograde signalling at the synaptic level and investigate how this signalling mediates depolarisation-induced suppression of inhibition (DSI), a prominent form of short-term synaptic depression in inhibitory transmission in the hippocampus. The model successfully captures many of the key characteristics of DSI in the hippocampus, as observed experimentally, with a minimal yet sufficient mathematical description of the major signalling molecules and cascades involved. More specifically, this model serves as a framework to test hypotheses on the factors determining the variability of DSI and to investigate under which conditions it can be evoked. The model reveals the frequency and duration bands in which the post-synaptic cell can be sufficiently stimulated to elicit DSI. Moreover, the model provides key insights into how the state of the inhibitory cell modulates DSI according to its firing rate and relative timing to the post-synaptic activation. Thus, it provides concrete suggestions to further investigate experimentally how DSI modulates and is modulated by neuronal activity in the brain. Importantly, this model serves as a stepping stone for future deciphering of the role of endocannabinoids in synaptic transmission as a feedback mechanism at both the synaptic and network level.
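The published model is a detailed biophysical description of the endocannabinoid cascade; the sketch below is only a crude phenomenological caricature of DSI, in which a depolarising step in the post-synaptic cell builds up a retrograde signal that transiently scales down the inhibitory synaptic strength, which then recovers. Every variable, time constant and stimulus value here is invented for illustration.

```python
import numpy as np

dt, T = 1.0, 4000.0                      # ms
steps = int(T / dt)
tau_ec, tau_rec = 500.0, 1500.0          # assumed time constants: signal decay, recovery

ec = 0.0                                 # endocannabinoid-like retrograde signal (a.u.)
ipsc_scale = 1.0                         # relative strength of inhibitory transmission
trace = []

for t in range(steps):
    time_ms = t * dt
    # A 500 ms depolarising step to the post-synaptic cell drives the retrograde signal.
    drive = 1.0 if 1000.0 <= time_ms < 1500.0 else 0.0
    ec += dt * (drive - ec) / tau_ec
    # Inhibitory strength is pulled down in proportion to the retrograde signal (DSI)
    # and relaxes back towards its baseline of 1 as the signal fades.
    target = max(0.0, 1.0 - 2.0 * ec)
    ipsc_scale += dt * (target - ipsc_scale) / tau_rec
    trace.append(ipsc_scale)

trace = np.array(trace)
print(f"maximal suppression of inhibition: {100 * (1 - trace.min()):.0f}% "
      f"at t = {trace.argmin() * dt:.0f} ms")
```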
Journal of Physiology-Paris | 2010
Chris Christodoulou; Gaye Banfield; Aristodemos Cleanthous
Self-control can be defined as choosing a large delayed reward over a small immediate reward, while precommitment is the making of a choice with the specific aim of denying oneself future choices. Humans recognise that they have self-control problems and attempt to overcome them by applying precommitment. Problems in exercising self-control suggest a conflict between cognition and motivation, which has been linked to competition between higher and lower brain functions (representing the frontal lobes and the limbic system respectively). This premise of an internal process conflict led to a behavioural model being proposed, based on which we implemented a computational model for studying and explaining self-control through precommitment behaviour. Our model consists of two neural networks, initially non-spiking and then spiking ones, representing the higher and lower brain systems viewed as cooperating for the benefit of the organism. The non-spiking neural networks are of a simple feed-forward multilayer type with reinforcement learning, one with a selective bootstrap weight update rule, which is seen as myopic, representing the lower brain, and the other with the temporal difference weight update rule, which is seen as far-sighted, representing the higher brain. The spiking neural networks are implemented with leaky integrate-and-fire neurons with learning based on stochastic synaptic transmission. The differentiating element between the two brain centres in this implementation is based on the memory of past actions determined by an eligibility trace time constant. As the structure of the self-control problem can be likened to the Iterated Prisoner's Dilemma (IPD) game, in that cooperation is to defection what self-control is to impulsiveness or what compromising is to insisting, we implemented the neural networks as two players, learning simultaneously but independently, competing in the IPD game. With a technique resembling the precommitment effect, whereby the payoffs for the dilemma cases in the IPD payoff matrix are differentially biased (increased or decreased), it is shown that increasing the precommitment effect (through increasing the differential bias) increases the probability of cooperating with oneself in the future, irrespective of whether the implementation is with spiking or non-spiking neural networks.
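A minimal sketch of the game-theoretic setup: the published implementation uses neural networks with the specific learning rules described above, whereas here two simple table-based, epsilon-greedy learners stand in for them, and the payoff values, bias levels and learning parameters are all invented. Two agents repeatedly play the IPD while the dilemma payoffs are differentially biased by a precommitment-like term, and the rate of mutual cooperation is reported.

```python
import numpy as np

def play_ipd(bias, episodes=20000, eps=0.1, alpha=0.1, seed=0):
    """Two independent epsilon-greedy value learners in the iterated prisoner's dilemma.
    bias differentially shifts the dilemma payoffs (raising the reward for mutual
    cooperation, lowering the temptation to defect), loosely mimicking precommitment."""
    rng = np.random.default_rng(seed)
    C = 0
    # payoff[my_action, other_action] for a standard IPD matrix (R=3, S=0, T=5, P=1),
    # with the dilemma entries differentially biased.
    payoff = np.array([[3.0 + bias, 0.0],
                       [5.0 - bias, 1.0]])
    q = np.zeros((2, 2))                  # q[agent, action]; stateless learners for simplicity
    coop = 0
    for _ in range(episodes):
        acts = [a if rng.random() > eps else rng.integers(2)
                for a in q.argmax(axis=1)]
        rewards = [payoff[acts[0], acts[1]], payoff[acts[1], acts[0]]]
        for i in range(2):
            q[i, acts[i]] += alpha * (rewards[i] - q[i, acts[i]])
        coop += acts[0] == C and acts[1] == C
    return coop / episodes

# Increasing the bias makes mutual cooperation increasingly attractive to both learners.
for bias in (0.0, 1.5, 3.0):
    print(f"payoff bias {bias:.1f}: mutual cooperation rate = {play_ipd(bias):.2f}")
```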