Denis Cousineau
University of Ottawa
Publications
Featured research published by Denis Cousineau.
International Journal of Psychological Research | 2010
Denis Cousineau; Sylvain Chartier
Outliers are observations or measures that are suspicious because they are much smaller or much larger than the vast majority of the observations. These observations are problematic because they may not be caused by the mental process under scrutiny or may not reflect the ability under examination. The problem is that a few outliers are sometimes enough to distort the group results (by altering the mean performance, by increasing variability, etc.). In this paper, various techniques aimed at detecting potential outliers are reviewed. These techniques are subdivided into two classes, the ones regarding univariate data and those addressing multivariate data. Within these two classes, we consider the cases where the population distribution is known to be normal, the population is not normal but known, or the population is unknown. Recommendations are put forward in each case.
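The two families of techniques reviewed above lend themselves to a compact illustration. The sketch below, assuming approximately normal data, flags univariate outliers with a z-score rule and multivariate outliers with squared Mahalanobis distances against a chi-square criterion; the cutoffs and simulated response times are illustrative choices, not recommendations from the article.

```python
# Minimal outlier-screening sketch: a z-score rule for univariate data assumed
# normal, and a Mahalanobis-distance rule for multivariate data. The cutoffs
# (|z| > 3, chi-square .999 quantile) are illustrative, not prescriptions.
import numpy as np
from scipy import stats

def univariate_outliers(x, z_cut=3.0):
    """Flag values whose standardized score exceeds z_cut (normality assumed)."""
    z = (x - x.mean()) / x.std(ddof=1)
    return np.abs(z) > z_cut

def multivariate_outliers(X, alpha=0.001):
    """Flag rows whose squared Mahalanobis distance exceeds a chi-square cutoff."""
    centered = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    return d2 > stats.chi2.ppf(1 - alpha, df=X.shape[1])

rng = np.random.default_rng(1)
rt = np.append(rng.normal(500, 80, 97), [1400, 1600, 2100])  # three slow observations
print(np.where(univariate_outliers(rt))[0])                  # indices of flagged values
```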
Behavior Research Methods, Instruments, & Computers | 2004
Denis Cousineau; Scott D. Brown; Andrew Heathcote
The most powerful tests of response time (RT) models often involve the whole shape of the RT distribution, thus avoiding mimicking that can occur at the level of RT means and variances. Nonparametric distribution estimation is, in principle, the most appropriate approach, but such estimators are sometimes difficult to obtain. On the other hand, distribution fitting, given an algebraic function, is both easy and compact. We review the general approach to performing distribution fitting with maximum likelihood (ML) and a method based on quantiles (quantile maximum probability, QMP). We show that QMP has both small bias and good efficiency when used with common distribution functions (the ex-Gaussian, Gumbel, lognormal, Wald, and Weibull distributions). In addition, we review some software packages performing ML (PASTIS, QMPE, DISFIT, and MATHEMATICA) and compare their results. In general, the differences between packages have little influence on the optimal solution found, but the form of the distribution function has: Both the lognormal and the Wald distributions have nonlinear dependencies between the parameter estimates that tend to increase the overall bias in parameter recovery and to decrease efficiency. We conclude by laying out a few pointers on how to relate descriptive models of RT to cognitive models of RT. A program that generated the random deviates used in our studies may be downloaded from www.psychonomic.org/archive/.
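As a concrete, hedged illustration of the maximum-likelihood side of this comparison, the sketch below fits an ex-Gaussian to simulated response times with SciPy's exponnorm distribution, whose (K, loc, scale) parameters map onto the usual mu, sigma, and tau. It does not reproduce QMP or any of the packages compared in the article.

```python
# Continuous maximum-likelihood fit of an ex-Gaussian to simulated response
# times. SciPy's exponnorm uses (K, loc, scale), which map onto the usual
# ex-Gaussian parameters as mu = loc, sigma = scale, tau = K * scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, tau = 400.0, 40.0, 100.0                       # generating parameters
rt = rng.normal(mu, sigma, 500) + rng.exponential(tau, 500)

K, loc, scale = stats.exponnorm.fit(rt)                   # ML estimates
print(f"mu ≈ {loc:.1f}, sigma ≈ {scale:.1f}, tau ≈ {K * scale:.1f}")
```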
Behavior Research Methods, Instruments, & Computers | 2004
Andrew Heathcote; Scott D. Brown; Denis Cousineau
We describe and test the quantile maximum probability estimator (QMPE), an open-source ANSI Fortran 90 program for response time distribution estimation. QMPE enables users to estimate parameters for the ex-Gaussian and Gumbel (1958) distributions, along with three “shifted” distributions (i.e., distributions with a parameter-dependent lower bound): the lognormal, Wald, and Weibull distributions. Estimation can be performed using either the standard continuous maximum likelihood (CML) method or quantile maximum probability (QMP; Heathcote & Brown, in press). We review the properties of each distribution and the theoretical evidence showing that CML estimates fail for some cases with shifted distributions, whereas QMP estimates do not. In cases in which CML does not fail, a Monte Carlo investigation showed that QMP estimates were usually as good as, and in some cases better than, CML estimates. However, the Monte Carlo study also uncovered problems that can occur with both CML and QMP estimates, particularly when samples are small and skew is low, highlighting the difficulties of estimating distributions with parameter-dependent lower bounds.
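QMPE itself is a Fortran program, but the estimation problem it addresses can be sketched in a few lines: the example below fits a three-parameter (shifted) Weibull by continuous maximum likelihood with SciPy under arbitrary generating parameters. It mirrors the kind of fit discussed here, not the package's own routines.

```python
# Three-parameter ("shifted") Weibull fitted by continuous maximum likelihood.
# This mirrors the estimation problem discussed above; it is not QMPE.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
shift, scale, shape = 250.0, 200.0, 1.8                   # arbitrary generating values
rt = shift + scale * rng.weibull(shape, 300)

shape_hat, shift_hat, scale_hat = stats.weibull_min.fit(rt)
print(f"shape ≈ {shape_hat:.2f}, shift ≈ {shift_hat:.1f}, scale ≈ {scale_hat:.1f}")
# With shifted distributions the likelihood can misbehave as the estimated shift
# approaches the smallest observation, one of the failure cases examined above.
```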
Spatial Vision | 2004
Denis Cousineau; Richard M. Shiffrin
The ability to locate an object in the visual field is a collaboration of at least three intermingled processes: scanning multiple locations, recognizing the object sought (the target), and ending the search in cases when the target is not found. In this paper, we focus on the termination rule. Using distribution analyses, it is possible to assess the probability of termination conditional on the number of locations examined. The results show that on some trials without a target, the participants carried out more comparisons than there were objects in the display; in other conditions, they carried out fewer comparisons than objects. Because there were very few errors, the premature stops were not pure guesses. We present models to account for these findings. The distributions of terminations help determine the slopes of the functions relating response time to set size.
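As a toy illustration only, and not one of the models presented in the article, the simulation below lets a searcher on target-absent trials re-check items (sampling with replacement) and quit early with a fixed probability, so the number of comparisons can exceed or fall short of the display size; both mechanisms and all numbers are hypothetical.

```python
# Toy simulation of target-absent search (not the article's models): items are
# re-examined because sampling is with replacement, and the search can stop
# prematurely with probability p_quit after each comparison.
import numpy as np

rng = np.random.default_rng(4)

def comparisons_on_absent_trial(set_size, p_quit=0.15):
    """Number of comparisons before an 'absent' response is made."""
    checked, n = set(), 0
    while True:
        n += 1
        checked.add(int(rng.integers(set_size)))   # re-checks are possible
        if len(checked) == set_size or rng.random() < p_quit:
            return n                               # exhaustive or premature stop

counts = [comparisons_on_absent_trial(6) for _ in range(10_000)]
print(np.mean(counts), np.percentile(counts, [25, 50, 75]))
```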
Brain and Cognition | 2005
Guy L. Lacroix; Ioana R. Constantinescu; Denis Cousineau; Roberto G. de Almeida; Norman Segalowitz; Michael von Grünau
The goal of this study was to evaluate the possibility that dyslexic individuals require more working memory resources than normal readers to shift attention from stimulus to stimulus. To test this hypothesis, normal and dyslexic adolescents participated in a Rapid Serial Visual Presentation experiment (Raymond, Shapiro, & Arnell, 1992). Surprisingly, the results showed that the participants with dyslexia produced a shallower attentional blink than normal controls. This result may be interpreted as showing differences in the way the two groups encode information in episodic memory, and it also fits with a cascade-effect perspective of developmental dyslexia.
Behavior Research Methods, Instruments, & Computers | 1997
Denis Cousineau; Serge Larochelle
Reaction time (RT) data afford different types of analyses. One type of analysis, called “curve analysis,” can be used to characterize the evolution of performance at different moments over the course of learning. By contrast, distribution analysis aims at characterizing the spread of RTs at a specific moment. Techniques to estimate free parameters are described for both types of analyses, given an a priori choice of the curve or distribution one wants to fit, along with statistical tests of significance for distribution analysis: The log likelihood technique is used if the probability density function is given; otherwise, a root-mean-square-deviation minimization technique is used. A program, PASTIS, that searches for the optimal parameters of the following curves is presented: power law, exponential, and e-based exponential. PASTIS also fits the Weibull and ex-Gaussian distributions. Some tests of the software are presented.
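A minimal sketch of the curve-analysis case is given below: a power-law learning curve RT(n) = a + b * n^(-c) is fitted to simulated trial data by least squares, which shares its minimizer with the root-mean-square-deviation minimization mentioned above. It only illustrates the kind of fit PASTIS performs; it is not PASTIS.

```python
# Power-law learning curve fitted to simulated trial data by least squares
# (the same minimizer as RMSD minimization). Parameter values are arbitrary.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a + b * n ** (-c)

rng = np.random.default_rng(5)
trials = np.arange(1, 301)
rt = power_law(trials, 450.0, 600.0, 0.6) + rng.normal(0, 60, trials.size)

params, _ = curve_fit(power_law, trials, rt, p0=[400, 500, 0.5])
print(params)   # estimates of a, b, c
```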
Brain and Cognition | 2012
Marja Laasonen; Jonna Salomaa; Denis Cousineau; Sami Leppämäki; Pekka Tani; Laura Hokkanen; Matthew W. G. Dye
In this study of the project DyAdd, three aspects of visual attention were investigated in adults (18-55 years) with dyslexia (n=35) or attention deficit/hyperactivity disorder (ADHD, n=22), and in healthy controls (n=35). Temporal characteristics of visual attention were assessed with the Attentional Blink (AB) task, the capacity of visual attention with Multiple Object Tracking (MOT), and spatial aspects of visual attention with the Useful Field of View (UFOV) task. Results showed that adults with dyslexia had difficulties performing the AB and UFOV tasks, which were explained by an impaired ability to process dual targets, longer AB recovery time, and deficits in processing rapidly changing visual displays. The ADHD group did not have difficulties in any of the tasks. Further, performance in the visual attention tasks predicted variation in measures of phonological processing and reading when all of the participants were considered together. Thus, difficulties in tasks of visual attention were related to dyslexia, and variation in visual attention played a role in the reading ability of the general population.
Behavior Research Methods | 2014
Denis Cousineau; Fearghal O’Brien
The problem of calculating error bars in within-subject designs has proven to be a difficult problem and has received much attention in recent years. Baguley (Behavior Research Methods, 44, 158–175, 2012) recommended what he called the Cousineau–Morey method. This method requires two steps: first, centering the data set in a certain way to remove between-subject differences and, second, integrating a correction factor to debias the standard errors obtained from the normalized data set. However, within some statistical packages, it can be difficult to integrate this correction factor. Baguley (2012) proposed a solution that works well in most statistical packages in which the alpha level is altered to incorporate the correction factor. However, with this solution, it is possible to plot confidence intervals, but not standard errors. Here, we propose a second solution that can return confidence intervals or standard error bars in a mean plot.
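A sketch of the two-step computation, assuming a complete subjects-by-conditions matrix, is given below: the data are first normalized to remove between-subject differences, then the condition standard errors are rescaled by the Morey correction factor sqrt(J/(J-1)), where J is the number of within-subject conditions. The function name and the simulated data are illustrative.

```python
# Cousineau-Morey error bars for a subjects x conditions matrix:
# (1) normalize each row to remove between-subject differences, then
# (2) rescale the condition standard errors by sqrt(J / (J - 1)).
import numpy as np
from scipy import stats

def cousineau_morey(data, confidence=0.95):
    """data: 2-D array, rows = subjects, columns = within-subject conditions."""
    n, J = data.shape
    normalized = data - data.mean(axis=1, keepdims=True) + data.mean()
    se = normalized.std(axis=0, ddof=1) / np.sqrt(n) * np.sqrt(J / (J - 1))
    half_width = se * stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return se, half_width          # corrected SEs and CI half-widths per condition

rng = np.random.default_rng(6)
subject_effect = rng.normal(0, 100, (20, 1))                 # large between-subject spread
scores = 500 + subject_effect + rng.normal(0, 20, (20, 4))   # four within-subject conditions
se, ci = cousineau_morey(scores)
print(se, ci)
```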
Psychonomic Bulletin & Review | 2004
Denis Cousineau
This article presents a generalization of race models involving multiple channels. The major contribution of this article is the implementation of a learning rule that enables networks based on such a parallel race model to learn stimulus-response associations. This model is called a parallel race network. Surprisingly, with a two-layer architecture, a parallel race network learns the XOR problem without the benefit of hidden units. The model described here can be seen as a reduction-of-information system (Haider & Frensch, 1996). An emergent property of this model is seriality: In some conditions, responses are performed in a fixed order, although the system is parallel. The mere existence of this supervised network demonstrates that networks can perform cognitive processes without the weighted sum metric that characterizes strength-based networks.
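The learning rule itself is not reproduced here, but the race mechanism the model generalizes can be sketched: in the toy simulation below, each response channel draws an independent finishing time and the fastest channel determines both the response and the RT. The Weibull finishing times and all parameter values are arbitrary assumptions.

```python
# Toy two-channel race: the fastest channel gives both the response and the RT.
# This illustrates the race mechanism only, not the parallel race network's
# learning rule; all distributions and parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(7)

def race_trial(scales=(300.0, 340.0), shape=2.0, base=200.0):
    """Return (winning channel, RT) for one trial of the race."""
    finishing_times = base + np.asarray(scales) * rng.weibull(shape, len(scales))
    winner = int(np.argmin(finishing_times))
    return winner, float(finishing_times[winner])

responses, rts = zip(*(race_trial() for _ in range(10_000)))
print(np.bincount(responses) / len(responses), np.mean(rts))
```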
Behavior Research Methods, Instruments, & Computers | 2003
Denis Cousineau; Sébastien Hélie; Christine Lefebvre
Many models offer different explanations of learning processes, some of them predicting equal learning rates between conditions. The simplest method by which to assess this equality is to evaluate the curvature parameter for each condition, followed by a statistical test. However, this approach is highly dependent on the fitting procedure, which may come with built-in biases that are difficult to identify. Averaging the data per block of training would help reduce the noise present in the trial data, but averaging introduces a severe distortion on the curve, which can no longer be fitted by the original function. In this article, we first demonstrate the distortion that results from block averaging. The block average learning function, once known, can be used to extract parameters when performance is averaged over blocks or sessions. The use of averages eliminates an important part of the noise present in the data and allows good recovery of the learning curve parameters. Equality of curvatures can be tested with a linear hypothesis test. This method can be performed on trial data or block average data, but it is more powerful with block average data.
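The distortion can be demonstrated in a few lines, as in the sketch below: noiseless data from a power-law learning curve are averaged into blocks and refit with the same trial-level function using block midpoints as the predictor, and the recovered curvature no longer matches the generating value. The article's block average learning function itself is not reproduced here.

```python
# Noiseless power-law trial data, averaged into blocks of 50 trials and refit
# with the trial-level power law using block midpoints as the predictor; the
# recovered curvature departs from the generating value of 0.6.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a + b * n ** (-c)

trials = np.arange(1, 501)
rt = power_law(trials, 400.0, 600.0, 0.6)             # noiseless learning curve

block = 50
block_means = rt.reshape(-1, block).mean(axis=1)
midpoints = trials.reshape(-1, block).mean(axis=1)    # block-midpoint trial numbers

params, _ = curve_fit(power_law, midpoints, block_means, p0=[400, 600, 0.6])
print(params)   # the fitted c no longer equals 0.6
```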