Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where David C. Tam is active.

Publication


Featured research published by David C. Tam.


Computer Methods and Programs in Biomedicine | 2000

Identification of reliable spike templates in multi-unit extracellular recordings using fuzzy clustering.

George Zouridakis; David C. Tam

A method for extracting single-unit spike trains from extracellular recordings containing the activity of several simultaneously active cells is presented. The technique is particularly effective when spikes overlap temporally. It is capable of identifying the exact number of neurons contributing to a recording and of creating reliable spike templates. The procedure is based on fuzzy clustering and its performance is controlled by minimizing a cluster-validity index which optimizes the compactness and separation of the identified clusters. Application examples with synthetic spike trains generated from real spikes and segments of background noise show the advantage of the fuzzy method over conventional template-creation approaches in a wide range of signal-to-noise ratios.
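The approach described here, fuzzy clustering steered by a cluster-validity index trading off compactness against separation, can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, the fuzzy c-means details, and the Xie-Beni-style index are my own choices, not the authors' code.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Fuzzy c-means sketch: soft-assign each waveform in X to c clusters."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))          # membership matrix, rows sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]    # membership-weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d ** p * np.sum(d ** -p, axis=1, keepdims=True))
    return centers, U

def xie_beni(X, centers, U, m=2.0):
    """Cluster-validity index: compactness / separation (lower is better)."""
    d2 = np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
    sep = min(np.sum((a - b) ** 2)
              for i, a in enumerate(centers) for b in centers[i + 1:])
    return np.sum((U ** m) * d2) / (len(X) * sep)
```

Sweeping the candidate cluster count and keeping the minimizer of the validity index is what lets the method estimate the number of contributing neurons rather than requiring it up front.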


Computers in Biology and Medicine | 1997

Multi-unit spike discrimination using wavelet transforms

George Zouridakis; David C. Tam

A new spike discrimination procedure addressing the specific problem of spike superposition is described. The method, based on a shift-invariant wavelet transform and its amplitude-and-phase representation, has the advantage of both reducing the effect of noise present in the data and correcting the latency of specific components in a waveform. When spikes overlap and produce unknown patterns, the procedure extracts the constituent spikes and also estimates their exact time of occurrence. Fast implementation algorithms, having complexity of at most O (N log N), allow the use of the method in real-time applications.
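The shift-invariance property the method relies on can be illustrated with an undecimated ("a trous") Haar transform: because no downsampling occurs, the coefficients shift exactly with the input. This is a minimal stand-in with circular boundaries, not the transform or implementation used in the paper.

```python
import numpy as np

def undecimated_haar(x, levels):
    """Undecimated (shift-invariant) Haar transform sketch.

    No downsampling, so detail coefficients shift with the input; the
    filter step doubles at each level (a-trous scheme, circular edges)."""
    approx, details = np.asarray(x, dtype=float), []
    for j in range(levels):
        step = 2 ** j
        shifted = np.roll(approx, -step)
        details.append((approx - shifted) / 2.0)   # differences at scale 2**j
        approx = (approx + shifted) / 2.0          # averages at scale 2**j
    return approx, details
```

Shifting the input by k samples shifts every coefficient array by exactly k, which is the property that lets overlapping spikes be located at their exact times of occurrence.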


Neurocomputing | 2002

An alternate burst analysis for detecting intra-burst firings based on inter-burst periods

David C. Tam

A time-scale-invariant burst-detection algorithm for single-unit spike trains is described. This burst analysis is an auto-adaptive algorithm that uses inter- and intra-burst intervals to identify the bursts themselves. With this self-adaptive algorithm, a burst is defined by the characteristic firing patterns within and between bursts. Bursts are detected by the auto-adaptive method when the inter-burst periods (inter-spike intervals (ISIs) between bursts) exceed the intra-burst periods (the sum of ISIs within a burst). This adaptive method is time-scale invariant because bursts are defined by the firing patterns rather than by the specific time scale of the bursts or ISIs.
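The inter- vs intra-burst criterion can be sketched in Python. This is one plausible reading of the rule (a burst closes when the next ISI exceeds the running intra-burst period), not the published algorithm.

```python
def detect_bursts(spike_times):
    """Adaptive burst-detection sketch: close the current burst when the
    next inter-spike interval exceeds the sum of ISIs inside that burst.
    A burst holding a single spike so far absorbs the next spike, to seed
    an intra-burst interval."""
    bursts, current = [], [spike_times[0]]
    for t in spike_times[1:]:
        isi = t - current[-1]
        intra = current[-1] - current[0]   # telescoping sum of ISIs within burst
        if len(current) > 1 and isi > intra:
            bursts.append(current)
            current = [t]
        else:
            current.append(t)
    bursts.append(current)
    return bursts
```

Because the threshold adapts to the intervals inside each burst, rescaling all spike times leaves the detected burst boundaries unchanged, which is the time-scale-invariance claim.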


Biological Cybernetics | 1999

A joint interspike interval difference stochastic spike train analysis: detecting local trends in the temporal firing patterns of single neurons

Michelle A. Fitzurka; David C. Tam

We introduce a stochastic spike train analysis method called joint interspike interval difference (JISID) analysis. By design, this method detects changes in firing interspike intervals (ISIs), called local trends, within a 4-spike pattern in a spike train. This analysis classifies 4-spike patterns that have similar incremental changes. It characterizes the higher-order serial dependence in spike firing relative to changes in the firing history. Mathematically, this spike train analysis describes the statistical joint distribution of consecutive changes in ISIs, from which the serial dependence of the changes in higher-order intervals can be determined. It is similar to the joint interspike interval (JISI) analysis, except that the joint distribution of consecutive ISI differences (ISIDs) is quantified. The graphical location of points in the JISID scatter plot reveals the local trends in firing (i.e., monotonically increasing, monotonically decreasing, or transitional firing). The trajectory of these points in the serial-JISID plot traces the time evolution of these trends represented by a 5-spike pattern, while points in the JISID scatter plot represent trends of a 4-spike pattern. We provide complete theoretical interpretations of the JISID analysis. We also demonstrate that this method indeed identifies firing trends in both simulated spike trains and spike trains recorded from cultured neurons.
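The construction of JISID points is compact: each ISI difference spans three spikes, so a pair of consecutive differences spans four. The sketch below (names are illustrative, not the authors' code) builds the points and reads off the quadrant-based trend classification.

```python
import numpy as np

def jisid_points(spike_times):
    """JISID scatter-plot points: consecutive interspike-interval
    differences (ISID_i, ISID_{i+1}); each point spans a 4-spike pattern."""
    isi = np.diff(spike_times)    # interspike intervals
    disi = np.diff(isi)           # ISI differences (ISIDs)
    return np.column_stack([disi[:-1], disi[1:]])

def local_trend(point):
    """Quadrant reading: both positive = monotonically lengthening ISIs
    (decelerating firing), both negative = monotonically shortening
    (accelerating firing), mixed signs = transitional firing."""
    a, b = point
    if a > 0 and b > 0:
        return "increasing"
    if a < 0 and b < 0:
        return "decreasing"
    return "transitional"
```

Connecting successive points then gives the serial-JISID trajectory, in which each segment corresponds to a 5-spike pattern.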


Biological Cybernetics | 1998

A cross-interval spike train analysis: the correlation between spike generation and temporal integration of doublets

David C. Tam

A stochastic spike train analysis technique is introduced to reveal the correlation between the firing of the next spike and the temporal integration period of two consecutive spikes (i.e., a doublet). Statistics of spike firing times between neurons are established to obtain the conditional probability of spike firing in relation to the integration period. The existence of a temporal integration period is deduced from the time interval between two consecutive spikes fired in a reference neuron as a precondition to the generation of the next spike in a compared neuron. This analysis can show whether the coupled spike firing in the compared neuron is correlated with the last or the second-to-last spike in the reference neuron. Analysis of simulated and experimentally recorded biological spike trains shows that the effects of excitatory and inhibitory temporal integration are extracted by this method without relying on any subthreshold potential recordings. The analysis also shows that, with temporal integration, a neuron driven by random firing patterns can produce fairly regular firing patterns under appropriate conditions. This regularity in firing can be enhanced by temporal integration of spikes in a chain of polysynaptically connected neurons. The bandpass filtering of spike firings by temporal integration is discussed. The results also reveal that signal transmission delays may be attributed not just to conduction and synaptic delays, but also to the delay time needed for temporal integration.
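The core measurement can be sketched as follows: pair each doublet interval in the reference train with the latency to the next spike in the compared train. This is a minimal illustration of the cross-interval tabulation, not the full statistical analysis of the paper.

```python
import numpy as np

def cross_interval(ref, comp):
    """For each doublet (consecutive spike pair) in the reference train,
    pair its interval with the latency from the doublet's second spike
    to the next spike in the compared train."""
    pairs = []
    comp = np.asarray(comp)
    for t0, t1 in zip(ref[:-1], ref[1:]):
        later = comp[comp > t1]            # compared spikes after the doublet
        if later.size:
            pairs.append((t1 - t0, later[0] - t1))
    return pairs
```

Histogramming latency conditioned on doublet interval then approximates the conditional firing probability the analysis is built on.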


The Open Statistics & Probability Journal | 2009

A Theoretical Analysis of Cumulative Sum Slope (CUSUM-Slope) Statistic for Detecting Signal Onset (begin) and Offset (end) Trends from Background Noise Level

David C. Tam

A theoretical analysis of the cumulative sum (CUSUM) technique for detecting a series of time signals in a noisy background is provided. The CUSUM-slope statistic is introduced as a measure that captures the average of the signals within the time window over which the slope is computed. This provides a time-independent method for estimating the signal content within the time window. Detection criteria are provided for different window lengths. The results show that the CUSUM-slope statistic is highly sensitive to subtle hidden trends in the data sequence, with noise filtered out even in very low signal-to-noise environments.
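The key identity, that the slope of the CUSUM across a window equals the mean of the samples inside that window, can be shown in a few lines (an illustrative sketch, not the paper's notation):

```python
import numpy as np

def cusum_slope(x, window):
    """CUSUM-slope sketch: differencing the cumulative sum across a
    sliding window and dividing by the window length yields the windowed
    mean, so thresholding the slope detects signal onset/offset buried
    in zero-mean background noise."""
    cs = np.concatenate([[0.0], np.cumsum(x)])
    return (cs[window:] - cs[:-window]) / window
```

Because summation averages out zero-mean noise, longer windows trade temporal resolution for a cleaner slope estimate, which is where the window-length-dependent detection criteria come in.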


BMC Neuroscience | 2010

Variables governing emotion and decision-making: human objectivity underlying its subjective perception

David C. Tam

This article accompanies a poster presentation on the variables governing emotion and decision-making.


The Open Cybernetics & Systemics Journal | 2007

Theoretical Analysis of Cross-Correlation of Time-Series Signals Computed by a Time-Delayed Hebbian Associative Learning Neural Network

David C. Tam

A theoretical proof shows that the computational function performed by a time-delayed neural network implementing a Hebbian associative learning rule is equivalent to the cross-correlation of time-series functions, establishing the relationship between correlation coefficients and connection weights. The values of the computed correlation coefficients can be retrieved from the connection weights.
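The claimed equivalence can be sketched directly: a weight trained with a delay-tau Hebbian update accumulates eta * x[t] * y[t - tau], which is an unnormalized cross-correlation at lag tau. The code below is an illustration of that identity, not the paper's network.

```python
import numpy as np

def hebbian_delay_weights(x, y, max_lag, eta=1.0):
    """Time-delayed Hebbian learning sketch: the weight for delay tau
    accumulates eta * x[t] * y[t - tau] over the training sequence, so
    after training each weight equals eta times the (unnormalized)
    cross-correlation of the two series at that lag."""
    w = np.zeros(max_lag + 1)
    for tau in range(max_lag + 1):
        for t in range(tau, len(x)):
            w[tau] += eta * x[t] * y[t - tau]
    return w
```

Dividing the trained weights by eta and a normalization factor recovers the correlation coefficients, which is the retrieval property the abstract mentions.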


Psychology of Learning and Motivation | 1989

Quantitative modeling of synaptic plasticity

David C. Tam; Donald H. Perkel

This chapter describes the quantitative approach to neuronal modeling through a compartmental description of synapses, nerve cells, and small circuits of neuronal networks. It presents specific examples of different mechanisms underlying synaptic plasticity in different systems. Learning of any sort involves a relatively long-lasting change in the behavior of an organism and, perforce, in the functioning of parts of its nervous system. To model learning in neural systems requires two steps: (1) the quantitative modeling of the involved neuron or circuit before the learning is induced and (2) the incorporation of the neuronal modifications that underlie learning (anatomical, electrical, or chemical) into the first model. The model described in the chapter is implemented in a family of programs collectively termed "MANUEL," written in FORTRAN. The chapter also elaborates on functional reconstruction through compartmental models. A functional reconstruction is a working model of the neuron or circuit, intended to furnish reliable, quantitative predictions of the behavior of the system based on a distillation of its neurobiological characteristics. One highly useful and versatile form of functional reconstruction uses a compartmental description of the neuron's structure. In this approach, the membrane of the neuron is divided into a finite number of regions (patches), each judged to be small enough that the transmembrane electrical potential at the center of the region is representative of the potential throughout the region. This use of equipotential regions of membrane for neuronal modeling was introduced by Rall.
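A compartmental description of the kind summarized here can be sketched as a passive chain of equipotential patches coupled by axial conductances. This minimal explicit-Euler step is an illustration of the idea only, not the MANUEL programs (which were written in FORTRAN).

```python
import numpy as np

def step_compartments(V, g_leak, g_axial, E_leak, C, I_ext, dt):
    """One explicit-Euler step of a passive multi-compartment model:
    each compartment is an equipotential membrane patch with a leak
    conductance, coupled to its chain neighbours by an axial conductance."""
    n = len(V)
    dV = np.zeros(n)
    for i in range(n):
        I = g_leak[i] * (E_leak - V[i]) + I_ext[i]   # leak + injected current
        if i > 0:
            I += g_axial * (V[i - 1] - V[i])         # current from left neighbour
        if i < n - 1:
            I += g_axial * (V[i + 1] - V[i])         # current from right neighbour
        dV[i] = dt * I / C[i]
    return V + dV
```

Injecting current into one compartment and stepping forward produces the expected spatial decay of potential along the chain, the basic behavior a functional reconstruction must reproduce before plasticity mechanisms are layered on.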


BMC Neuroscience | 2009

A theoretical model of emotion processing for optimizing the cost function of discrepancy errors between wants and gets

David C. Tam

Article accompanying a poster presentation for the 2009 Annual Computational Neuroscience Meeting. This article discusses a theoretical model of emotion processing for optimizing the cost function of discrepancy errors between wants and gets.

Collaboration


Dive into David C. Tam's collaborations.

Top Co-Authors

Michelle A. Fitzurka
The Catholic University of America

George Zouridakis
University of Texas Health Science Center at Houston

Garrett T. Kenyon
Los Alamos National Laboratory