Dominic Standage
Queen's University
Publications
Featured research published by Dominic Standage.
Frontiers in Neuroscience | 2014
Dominic Standage; Gunnar Blohm; Michael C. Dorris
Decisions are faster and less accurate when conditions favor speed, and are slower and more accurate when they favor accuracy. This phenomenon is referred to as the speed-accuracy trade-off (SAT). Behavioral studies of the SAT have a long history, and the data from these studies are well characterized within the framework of bounded integration. According to this framework, decision makers accumulate noisy evidence until the running total for one of the alternatives reaches a bound. Lower and higher bounds favor speed and accuracy respectively, each at the expense of the other. Studies addressing the neural implementation of these computations are a recent development in neuroscience. In this review, we describe the experimental and theoretical evidence provided by these studies. We structure the review according to the framework of bounded integration, describing evidence for (1) the modulation of the encoding of evidence under conditions favoring speed or accuracy, (2) the modulation of the integration of encoded evidence, and (3) the modulation of the amount of integrated evidence sufficient to make a choice. We discuss commonalities and differences between the proposed neural mechanisms, some of their assumptions and simplifications, and open questions for future work. We close by offering a unifying hypothesis on the present state of play in this nascent research field.
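The bounded-integration framework described above can be sketched in a few lines: noisy evidence is summed until the running total hits a bound, and moving the bound trades speed for accuracy. This is a minimal illustration, not any specific model from the review; all parameter values are invented for demonstration.

```python
import random

def bounded_integration(drift=0.1, noise=1.0, bound=20.0, seed=0):
    """Accumulate noisy evidence until the running total reaches +/-bound.

    Returns (choice, decision_time). choice is +1 when the bound favored
    by the drift is reached first (the 'correct' answer), -1 otherwise.
    """
    rng = random.Random(seed)
    x, t = 0.0, 0
    while abs(x) < bound:
        x += drift + noise * rng.gauss(0.0, 1.0)   # one noisy evidence sample
        t += 1
    return (1 if x > 0 else -1), t

def accuracy_and_mean_rt(bound, n=2000):
    """Monte Carlo estimate of accuracy and mean decision time for a bound."""
    results = [bounded_integration(bound=bound, seed=s) for s in range(n)]
    accuracy = sum(1 for choice, _ in results if choice == 1) / n
    mean_rt = sum(t for _, t in results) / n
    return accuracy, mean_rt
```

Running `accuracy_and_mean_rt` with a low and a high bound reproduces the trade-off: the lower bound gives faster but less accurate decisions, the higher bound slower but more accurate ones.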
Biological Cybernetics | 2007
Dominic Standage; Sajiya Jalil; Thomas P. Trappenberg
We present two weight- and spike-time dependent synaptic plasticity rules consistent with the physiological data of Bi and Poo (J Neurosci 18:10464–10472, 1998). One rule assumes synaptic saturation, while the other is scale free. We extend previous analyses of the asymptotic consequences of weight-dependent STDP to the case of strongly correlated pre- and post-synaptic spiking, more closely resembling associative learning. We further provide a general formula for the contribution of any number of spikes to synaptic drift. Asymptotic weights are shown to principally depend on the correlation and rate of pre- and post-synaptic activity, decreasing with increasing rate under correlated activity, and increasing with rate under uncorrelated activity. Spike train statistics reveal a quantitative effect only in the pre-asymptotic regime, and we provide a new interpretation of the relation between BCM and STDP data.
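The saturating flavor of weight-dependent STDP mentioned above can be sketched as a single update rule: potentiation scales with the remaining headroom below a maximum weight, depression with the current weight. The constants here are illustrative and are not fitted to the Bi and Poo (1998) data.

```python
import math

# Illustrative parameters, not fitted to any dataset.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # ms
W_MAX = 1.0

def stdp_update(w, dt):
    """Weight- and spike-time-dependent update for one pre/post spike pair.

    dt = t_post - t_pre (ms). Pre-before-post (dt > 0) potentiates, scaled
    by the headroom (W_MAX - w) so weights saturate; post-before-pre
    (dt <= 0) depresses, scaled by the current weight.
    """
    if dt > 0:
        return w + A_PLUS * (W_MAX - w) * math.exp(-dt / TAU_PLUS)
    return w - A_MINUS * w * math.exp(dt / TAU_MINUS)
```

Under repeated correlated pairing at a fixed dt, the weight drifts toward an equilibrium where potentiation and depression balance, which is the asymptotic regime analyzed in the paper.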
Neural Networks | 2011
Dominic Standage; Martin Paré
Two long-standing questions in neuroscience concern the mechanisms underlying our abilities to make decisions and to store goal-relevant information in memory for seconds at a time. Recent experimental and theoretical advances suggest that NMDA receptors at intrinsic cortical synapses play an important role in both these functions. The long NMDA time constant is suggested to support persistent mnemonic activity by maintaining excitatory drive after the removal of a stimulus and to enable the slow integration of afferent information in the service of decisions. These findings have led to the hypothesis that the local circuit mechanisms underlying decisions must also furnish persistent storage of information. We use a local circuit cortical model of spiking neurons to test this hypothesis, controlling intrinsic drive by scaling NMDA conductance strength. Our simulations provide further evidence that persistent storage and decision making are supported by common mechanisms, but under biophysically realistic parameters, our model demonstrates that the processing requirements of persistent storage and decision making may be incompatible at the local circuit level. Parameters supporting persistent storage lead to strong dynamics that are at odds with slow integration, whereas weaker dynamics furnish the speed-accuracy trade-off common to psychometric data and decision theory.
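The tension described above between slow integration and persistent storage can be illustrated with a standard linear reduction (this is a one-line caricature, not the spiking model used in the paper): for a recurrent population obeying tau * dx/dt = -x + w*x + I, the effective integration time constant is tau / (1 - w), so scaling the recurrent (NMDA-mediated) drive w moves the circuit between slow integration and self-sustaining persistent activity.

```python
def effective_time_constant(tau_nmda, w):
    """Effective time constant of a linear recurrent population.

    For tau * dx/dt = -x + w*x + I, perturbations relax with time constant
    tau / (1 - w). As recurrent drive w approaches 1 the network integrates
    ever more slowly; for w >= 1 activity is self-sustaining (an attractor
    regime suited to persistent storage, but no longer a graded integrator).
    """
    if w >= 1.0:
        return float('inf')   # attractor regime: activity persists without input
    return tau_nmda / (1.0 - w)
```

This reduction makes the paper's point concrete: a single parameter controlling recurrent strength cannot simultaneously sit in the slow-integration regime (w just below 1) and the robust-storage regime (w at or above 1).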
PLOS Computational Biology | 2013
Dominic Standage; Hongzhi You; Da-Hui Wang; Michael C. Dorris
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those of experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification.
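The core timing idea above can be caricatured as activity that climbs linearly to a threshold, with a learning rule that nudges the climb rate until the threshold crossing matches a target interval. This toy stand-in (the learning rule, threshold, and all parameter values are invented here) is far simpler than the paper's spiking network, but shows how a climbing-rate code quickly adapts to new intervals.

```python
def climbing_timer(target_interval, theta=1.0, lr=0.3, n_trials=20, dt=1.0):
    """Adapt the slope of 'climbing' activity to hit threshold at a target time.

    Activity ramps at rate 'slope' until it reaches threshold theta; after
    each trial the slope relaxes toward theta / target_interval. Returns the
    per-trial interval estimates (threshold-crossing times).
    """
    slope = theta / (2.0 * target_interval)   # start deliberately mis-timed
    estimates = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < theta:
            x += slope * dt
            t += dt
        estimates.append(t)
        slope += lr * (theta / target_interval - slope)  # learn the interval
    return estimates
```

Starting with a slope that is half the correct value, the first estimate is about twice the target interval, and the estimates converge on the target within a handful of trials.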
Neurocomputing | 2005
Thomas P. Trappenberg; Dominic Standage
Continuous attractor neural networks are recurrent networks with center-surround interaction profiles that are common ingredients in many neuroscientific models. We study realizations of multiple non-equidistant activity packets in this model. These states are not stable without further stabilizing mechanisms, but we show they can exist for long periods. While these states must be avoided in winner-take-all applications, they demonstrate that multiple working memories can be sustained in a model with global inhibition.
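A minimal rate-based version of such a network pairs local Gaussian excitation with uniform global inhibition on a ring. The weight profile and dynamics below are a generic sketch with invented parameters, not the specific model analyzed in the paper.

```python
import math

def center_surround_weights(n, a_exc=1.0, sigma=3.0, inhib=0.3):
    """Gaussian local excitation minus uniform global inhibition on a ring."""
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = min(abs(i - j), n - abs(i - j))   # ring (periodic) distance
            w[i][j] = a_exc * math.exp(-d * d / (2 * sigma * sigma)) - inhib
    return w

def cann_step(r, w, dt=0.1, tau=1.0):
    """One Euler step of the rate dynamics tau * dr/dt = -r + relu(W r)."""
    n = len(r)
    out = []
    for i in range(n):
        inp = sum(w[i][j] * r[j] for j in range(n))
        out.append(r[i] + dt / tau * (-r[i] + max(0.0, inp)))
    return out
```

Initializing `r` with two separated packets and iterating `cann_step` is the kind of experiment the paper performs: with only global inhibition, multiple packets can coexist for extended periods even though they are not strictly stable.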
International Symposium on Neural Networks | 2007
Dominic Standage; Thomas P. Trappenberg
We fit a weight-dependent STDP rule to the classic data of Bi and Poo (1998), showing that this rule leads to slow learning in a simulation with an integrate-and-fire neuron. The slowness of learning is explained by an inequality between the range of initial weights in the data and the largest relative potentiation. We show that slow learning can be overcome with an increased learning rate, but that this approach leads to rapid forgetting in the presence of realistic levels of background spiking. Our study demonstrates that weight-dependent STDP rules, commonly used in neural simulations, have biologically unrealistic consequences. We discuss the implications of this finding for several interpretations of weight-dependent plasticity and STDP more generally, and recommend directions for further research.
International Symposium on Neural Networks | 2005
Dominic Standage; Thomas P. Trappenberg
Many spiking neuron models have been proposed over recent decades, with varying computational complexity and abstraction from biological neurons. Among the few studies that have compared spiking models, little emphasis has been given to the formal description of calibration methods in tuning model parameters. We give an example of calibrating a leaky integrate-and-fire neuron to the first-spike time of a Hodgkin-Huxley neuron. We further demonstrate how model parameters can be tuned to minimize subthreshold differences in membrane potential. This example emphasizes the dependencies of calibration methods on other experimental parameters, complicating detailed comparisons of spiking models.
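One way to make the first-spike-time calibration concrete is a simple bisection on the membrane time constant: simulate the LIF neuron, compare its first spike time against a target (which would come from a Hodgkin-Huxley simulation), and narrow the search interval. This sketch uses arbitrary units and an invented target; it illustrates the calibration procedure, not the paper's exact method.

```python
def lif_first_spike(i_ext, tau, v_th=1.0, dt=0.01, t_max=100.0):
    """First spike time of a leaky integrate-and-fire neuron, constant input.

    Euler integration of dv/dt = (-v + i_ext) / tau; a spike is a threshold
    crossing at v_th. Returns None if no spike occurs within t_max.
    """
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v + i_ext) / tau
        t += dt
        if v >= v_th:
            return t
    return None

def calibrate_tau(i_ext, target_spike_time, lo=0.1, hi=50.0, iters=40):
    """Bisect on tau so the LIF first-spike time matches a target time.

    Relies on the first-spike time increasing monotonically with tau,
    which holds for this model under constant suprathreshold input.
    """
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        t_spike = lif_first_spike(i_ext, mid)
        if t_spike is None or t_spike > target_spike_time:
            hi = mid   # neuron too slow: shrink tau
        else:
            lo = mid   # neuron too fast: grow tau
    return 0.5 * (lo + hi)
```

As the paper notes, the calibrated value depends on the rest of the experimental setup: changing the input current or the threshold here changes the tau that bisection returns.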
Journal of Neurophysiology | 2015
Zahra Dargaei; Dominic Standage; Christopher J. Groten; Gunnar Blohm; Neil S. Magoski
Electrical transmission is a dynamically regulated form of communication and key to synchronizing neuronal activity. The bag cell neurons of Aplysia are a group of electrically coupled neuroendocrine cells that initiate ovulation by secreting egg-laying hormone during a prolonged period of synchronous firing called the afterdischarge. Accompanying the afterdischarge is an increase in intracellular Ca2+ and the activation of protein kinase C (PKC). We used whole cell recording from paired cultured bag cell neurons to demonstrate that electrical coupling is regulated by both Ca2+ and PKC. Elevating Ca2+ with a train of voltage steps, mimicking the onset of the afterdischarge, decreased junctional current for up to 30 min. Inhibition was most effective when Ca2+ entry occurred in both neurons. Depletion of Ca2+ from the mitochondria, but not the endoplasmic reticulum, also attenuated the electrical synapse. Buffering Ca2+ with high intracellular EGTA or inhibiting calmodulin kinase prevented uncoupling. Furthermore, activating PKC produced a small but clear decrease in junctional current, while triggering both Ca2+ influx and PKC inhibited the electrical synapse to a greater extent than Ca2+ alone. Finally, the amplitude and time course of the postsynaptic electrotonic response were attenuated after Ca2+ influx. A mathematical model of electrically connected neurons showed that excessive coupling reduced recruitment of the cells to fire, whereas less coupling led to spiking of essentially all neurons. Thus a decrease in electrical synapses could promote the afterdischarge by ensuring prompt recovery of electrotonic potentials or making the neurons more responsive to current spreading through the network.
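The modelling result above, that coupling strength trades off against recruitment, can be illustrated with a pair of passive cells joined by a junctional conductance: current leaking into a quiescent partner shunts the driven cell and delays its firing. This is a toy two-cell sketch with invented parameters, not the paper's network model of bag cell neurons.

```python
def coupled_pair(g_c, i_drive=1.5, tau=10.0, v_th=1.0, dt=0.01, t_max=200.0):
    """Two leaky cells joined by an electrical synapse of conductance g_c.

    Only cell 1 is driven; the junctional current g_c * (v2 - v1) lets
    charge flow into the quiescent partner. Returns the time at which
    cell 1 first reaches threshold, or None if it never does.
    """
    v1 = v2 = 0.0
    t = 0.0
    while t < t_max:
        i_j = g_c * (v2 - v1)            # junctional current into cell 1
        v1 += dt * (-v1 + i_drive + i_j) / tau
        v2 += dt * (-v2 - i_j) / tau     # equal and opposite into cell 2
        t += dt
        if v1 >= v_th:
            return t
    return None
```

Comparing the uncoupled and coupled cases shows the shunting effect: the stronger the electrical synapse, the later the driven cell fires, consistent with the idea that reducing coupling can make individual neurons more responsive.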
Frontiers in Neuroscience | 2014
Dominic Standage; Da-Hui Wang; Gunnar Blohm
Decisions are faster and less accurate when conditions favor speed, and are slower and more accurate when they favor accuracy. This speed-accuracy trade-off (SAT) can be explained by the principles of bounded integration, where noisy evidence is integrated until it reaches a bound. Higher bounds reduce the impact of noise by increasing integration times, supporting higher accuracy (vice versa for speed). These computations are hypothesized to be implemented by feedback inhibition between neural populations selective for the decision alternatives, each of which corresponds to an attractor in the space of network states. Since decision-correlated neural activity typically reaches a fixed rate at the time of commitment to a choice, it has been hypothesized that the neural implementation of the bound is fixed, and that the SAT is supported by a common input to the populations integrating evidence. According to this hypothesis, a stronger common input reduces the difference between a baseline firing rate and a threshold rate for enacting a choice. In simulations of a two-choice decision task, we use a reduced version of a biophysically-based network model (Wong and Wang, 2006) to show that a common input can control the SAT, but that changes to the threshold-baseline difference are epiphenomenal. Rather, the SAT is controlled by changes to network dynamics. A stronger common input decreases the model's effective time constant of integration and changes the shape of the attractor landscape, so the initial state is in a more error-prone position. Thus, a stronger common input reduces decision time and lowers accuracy. The change in dynamics also renders firing rates higher under speed conditions at the time that an ideal observer can make a decision from network activity. The difference between this rate and the baseline rate is actually greater under speed conditions than accuracy conditions, suggesting that the bound is not implemented by firing rates per se.
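The dynamical argument above, that a common input changes the attractor landscape rather than the bound, can be illustrated by linearizing a symmetric two-population competition about its symmetric state. The growth rate of the difference mode between the populations measures how quickly the network commits to one alternative. This is a generic rate-model reduction with invented parameters, not the Wong-Wang equations themselves.

```python
import math

def f(u):
    """Sigmoidal rate function (illustrative f-I curve)."""
    return 1.0 / (1.0 + math.exp(-u))

def f_prime(u):
    s = f(u)
    return s * (1.0 - s)

def difference_mode_rate(i0, beta=6.0, tau=100.0, tol=1e-12):
    """Growth rate of the difference between two competing populations.

    Linearizes tau * dx_i/dt = -x_i + f(i0 - beta * x_j) about the
    symmetric fixed point x* = f(i0 - beta * x*). The difference mode
    D = x1 - x2 obeys tau * dD/dt = (-1 + beta * f'(u*)) * D, so its
    growth rate depends on the common input i0 through f'(u*).
    """
    x = 0.5
    for _ in range(10000):                 # damped fixed-point iteration
        x_new = f(i0 - beta * x)
        if abs(x_new - x) < tol:
            break
        x = 0.5 * (x + x_new)
    return (-1.0 + beta * f_prime(i0 - beta * x)) / tau
```

With these parameters, a stronger common input places the symmetric state on the steep part of the rate function, so the difference mode grows faster and the network commits sooner, which is the sense in which the common input controls the SAT through dynamics rather than through a threshold-baseline difference.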
Frontiers in Neuroscience | 2015
Dominic Standage; Da-Hui Wang; Richard P. Heitz; Patrick Simen
Hasty decisions often lead to poor choices, whereas accurate decisions are ineffective if they take too long. Thus, good choices require cognitive mechanisms to determine the appropriate balance between speed and accuracy, and to control decision processing accordingly. This balance is referred to as the speed-accuracy trade-off (SAT) and the mechanisms by which it is determined and imposed are the subject of this Frontiers Research Topic. Given the near-ubiquity of the SAT across species and experimental tasks, it is not surprising that a wide range of methods have been used to investigate it. Our aim is to provide a unified view of the SAT in light of this diverse methodology. Computationally, decision making and the SAT are well characterized by the framework of bounded integration, providing a solid foundation for this view. Under this framework, noisy evidence for the available choices is added up (integrated) until the running total for one of them reaches a criterion (the bound). The SAT is readily controlled by the bound, where a higher bound favors accuracy at the expense of speed and vice versa. In this collection, we use bounded integration as a reference point for considering the factors that determine the optimal balance between speed and accuracy, the interpretation of behavior by different models from this general class, and the neural implementation of the computations captured by these models. Articles herein further consider conditions under which the above descriptions of the SAT and bounded integration do not explain behavior, and the utility of the SAT for manipulating the context of decisions.