Publication


Featured research published by Gilles Raîche.


Journal of Educational and Behavioral Statistics | 2012

A Didactic Presentation of Snijders’s lz* Index of Person Fit With Emphasis on Response Model Selection and Ability Estimation

David Magis; Gilles Raîche; Sébastien Béland

This paper focuses on two likelihood-based indices of person fit, the index lz and Snijders’s modified index lz*. The first is commonly used in practical assessment of person fit, although its asymptotic standard normal distribution is not valid when true abilities are replaced by sample ability estimates. The lz* index is a generalization of lz that corrects for this sampling variability. Surprisingly, it is not yet popular in the psychometric and educational assessment community. Moreover, there is some ambiguity about which type of item response model and ability estimation method can be used to compute the lz* index. The purpose of this article is to present the lz* index in a simple and didactic way. Starting from the relationship between lz and lz*, we develop the framework according to the type of logistic item response theory (IRT) model and the likelihood-based estimator of ability. The practical calculation of lz* is illustrated with a real data set on language skill assessment.
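
For orientation, the uncorrected lz index has a simple closed form: it standardizes the observed log-likelihood of the response pattern by its expectation and variance under the model. The base R sketch below computes it for a two-parameter logistic model with illustrative parameters; the lz* correction for the sampling variability of the ability estimate, which is the subject of the paper, is intentionally not reproduced here.

    # Uncorrected lz person-fit index under a 2PL model (illustrative sketch only).
    # a, b: item discriminations and difficulties; x: 0/1 responses;
    # theta: the ability value at which the index is evaluated.
    lz_index <- function(x, a, b, theta) {
      p  <- 1 / (1 + exp(-a * (theta - b)))                 # 2PL success probabilities
      l0 <- sum(x * log(p) + (1 - x) * log(1 - p))          # observed log-likelihood
      e  <- sum(p * log(p) + (1 - p) * log(1 - p))          # its expectation
      v  <- sum(p * (1 - p) * log(p / (1 - p))^2)           # its variance
      (l0 - e) / sqrt(v)                                    # standardized index
    }

    # Small illustration with arbitrary item parameters and simulated responses
    set.seed(1)
    a <- runif(20, 0.8, 2); b <- rnorm(20); theta <- 0.5
    x <- rbinom(20, 1, 1 / (1 + exp(-a * (theta - b))))
    lz_index(x, a, b, theta)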


International Journal of Testing | 2011

A generalized logistic regression procedure to detect differential item functioning among multiple groups

David Magis; Gilles Raîche; Sébastien Béland; Paul Gérard

We present an extension of the logistic regression procedure to identify dichotomous differential item functioning (DIF) in the presence of more than two groups of respondents. Starting from the usual framework of a single focal group, we propose a general approach to estimate the item response functions in each group and to test for the presence of uniform DIF, nonuniform DIF, or both. This generalized procedure is compared to other existing DIF methods for multiple groups with a real data set on language skill assessment. Emphasis is placed on the flexibility, completeness, and computational ease of the generalized method.
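
For background, the logistic regression DIF framework that the paper generalizes can be illustrated with base R alone: the matching score, a group factor with one reference and several focal groups, and their interaction enter a binomial glm, and likelihood-ratio tests between nested models flag uniform and nonuniform DIF. The sketch below uses made-up data and generic code; it is not the authors' procedure or implementation.

    # Generic logistic-regression DIF check for one item across several groups
    # (an illustration of the framework, not the authors' implementation).
    # item: 0/1 responses to the studied item; score: matching criterion
    # (e.g., total test score); group: factor coding reference and focal groups.
    dif_logistic_multi <- function(item, score, group) {
      group <- factor(group)
      m0 <- glm(item ~ score,         family = binomial)   # no DIF
      m1 <- glm(item ~ score + group, family = binomial)   # uniform DIF
      m2 <- glm(item ~ score * group, family = binomial)   # uniform + nonuniform DIF
      list(uniform    = anova(m0, m1, test = "LRT"),       # group main effect
           nonuniform = anova(m1, m2, test = "LRT"),       # score-by-group interaction
           both       = anova(m0, m2, test = "LRT"))
    }

    # Toy data: one reference and two focal groups, 500 respondents
    set.seed(2)
    n <- 500
    group <- sample(c("ref", "focal1", "focal2"), n, replace = TRUE)
    score <- rnorm(n)
    item  <- rbinom(n, 1, plogis(-0.2 + 1.1 * score + 0.5 * (group == "focal2")))
    dif_logistic_multi(item, score, group)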


Applied Psychological Measurement | 2011

catR: An R Package for Computerized Adaptive Testing

David Magis; Gilles Raîche

Computerized adaptive testing (CAT) is an active current research field in psychometrics and educational measurement. However, there is very little software available to handle such adaptive tasks. The R package catR was developed to perform adaptive testing with as much flexibility as possible, in an attempt to provide a developmental and testing platform to the interested user. Several item-selection rules and ability estimators are implemented. The item bank can be provided by the user or randomly generated from parent distributions of item parameters. Three stopping rules are available. The output can be graphically displayed.
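
The adaptive loop that catR automates can be sketched from scratch in a few lines of base R: a provisional ability estimate, maximum Fisher information item selection, and a fixed test length as the stopping rule, here under a 2PL model with a Bayes modal update and a standard normal prior. The item bank and all parameter choices below are made up for illustration; this is the general mechanism, not catR's own code or API.

    # From-scratch sketch of a CAT loop: maximum-information item selection and
    # Bayes modal ability updates under a 2PL model (illustration only).
    set.seed(3)
    bank <- data.frame(a = runif(100, 0.8, 2), b = rnorm(100))   # simulated item bank
    true_theta <- 1
    p2pl <- function(theta, a, b) 1 / (1 + exp(-a * (theta - b)))

    administered <- integer(0); responses <- integer(0); theta <- 0
    for (step in 1:20) {                                         # stopping rule: 20 items
      p    <- p2pl(theta, bank$a, bank$b)
      info <- bank$a^2 * p * (1 - p)                             # Fisher information at theta
      info[administered] <- -Inf                                 # never reuse an item
      j <- which.max(info)                                       # maximum-information item
      x <- rbinom(1, 1, p2pl(true_theta, bank$a[j], bank$b[j]))  # simulated response
      administered <- c(administered, j); responses <- c(responses, x)
      logpost <- function(t) {                                   # Bayes modal update, N(0, 1) prior
        pt <- p2pl(t, bank$a[administered], bank$b[administered])
        sum(responses * log(pt) + (1 - responses) * log(1 - pt)) + dnorm(t, 0, 1, log = TRUE)
      }
      theta <- optimize(logpost, c(-4, 4), maximum = TRUE)$maximum
    }
    c(true = true_theta, estimated = theta)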


Applied Psychological Measurement | 2011

A Test-Length Correction to the Estimation of Extreme Proficiency Levels

David Magis; Sébastien Béland; Gilles Raîche

In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an appropriate prior distribution, the Bayesian approach of maximum a posteriori (MAP) yields finite estimates, but it suffers from severe estimation bias at the extremes of the proficiency scale. As a first step, a simple correction to the MAP estimator is proposed to reduce this estimation bias. The correction factor is determined through a simulation study and depends only on the length of the test. In a second step, some additional simulations emphasize that the corrected estimator behaves like the ML estimator and outperforms the standard MAP method for extremely small or extremely large abilities. Although based on the Rasch model, the method could be adapted to other logistic item response models.
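
The phenomenon the correction targets is easy to reproduce: for a perfect response pattern under the Rasch model, the likelihood has no finite maximum, whereas a MAP estimate with a standard normal prior is finite but pulled toward the prior mean. The base R sketch below only shows this contrast; the paper's test-length-dependent correction factor is not reproduced here.

    # Perfect score on a 15-item Rasch test: ML diverges, MAP is finite but shrunken
    # (illustration of the problem, not of the paper's correction).
    b <- seq(-2, 2, length.out = 15)                 # Rasch item difficulties
    x <- rep(1, length(b))                           # all-correct response pattern
    prasch <- function(theta) 1 / (1 + exp(-(theta - b)))
    loglik <- function(theta) sum(x * log(prasch(theta)) + (1 - x) * log(1 - prasch(theta)))

    # ML: the log-likelihood keeps increasing in theta, so the optimum sits at the
    # search boundary (it would be infinite on an unbounded range).
    optimize(loglik, c(-6, 6), maximum = TRUE)$maximum

    # MAP: adding a standard normal log-prior yields a finite, but shrunken, estimate.
    logpost <- function(theta) loglik(theta) + dnorm(theta, 0, 1, log = TRUE)
    optimize(logpost, c(-6, 6), maximum = TRUE)$maximum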


Applied Psychological Measurement | 2010

An iterative maximum a posteriori estimation of proficiency level to detect multiple local likelihood maxima

David Magis; Gilles Raîche

In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its use. The authors then propose an iteratively based MAP estimator (IMAP), which can be useful in detecting multiple local likelihood maxima. The efficiency of the IMAP estimator is studied and is compared to the ML and MAP methods by means of a simulation study.
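
The multimodality at issue is easy to visualize: under a three-parameter logistic model, an aberrant pattern (a wrong answer on an easy, highly discriminating item combined with correct answers on harder items) can produce a log-likelihood with more than one local maximum. The base R grid scan below, with made-up parameters, simply lists those maxima; the IMAP algorithm itself is not reproduced here.

    # Locate local maxima of a 3PL log-likelihood on a grid (illustration only).
    a  <- c(2.5, rep(1.2, 6))                        # item 1 is easy and highly discriminating
    b  <- c(-1.5, rep(1.5, 6))
    cc <- rep(0.2, 7)                                # common pseudo-guessing parameter
    x  <- c(0, rep(1, 6))                            # aberrant pattern: the easy item is missed
    loglik <- function(theta) {
      p <- cc + (1 - cc) / (1 + exp(-a * (theta - b)))
      sum(x * log(p) + (1 - x) * log(1 - p))
    }
    grid  <- seq(-4, 4, by = 0.01)
    ll    <- vapply(grid, loglik, numeric(1))
    peaks <- which(diff(sign(diff(ll))) == -2) + 1   # points higher than both neighbours
    grid[peaks]                                      # two competing modes for this pattern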


Applied Psychological Measurement | 2006

SIMCAT 1.0: A SAS computer program for simulating computer adaptive testing

Gilles Raîche; Jean-Guy Blais

Monte Carlo methodologies are frequently applied to study the sampling distribution of the estimated proficiency level in adaptive testing. These methods sidestep real situational constraints. However, such simulations are not currently well supported by the available software programs, and when programs are available, their flexibility is limited. Here, a commented computer program coded in the SAS language (version 6.08 and later) is proposed. SIMCAT 1.0 simulates adaptive testing sessions under different adaptive expected a posteriori (EAP) proficiency-level estimation methods (Blais & Raîche, 2005; Raîche & Blais, 2005) based on the one-parameter Rasch logistic model. These methods are adaptive in the a priori proficiency-level estimation, the proficiency-level estimation bias correction, the integration interval, or a combination of these factors. Using these adaptive EAP estimation methods considerably diminishes the shrinkage, and therefore the bias, that arises when the a priori proficiency level is fixed at a constant value independently of the previously computed proficiency-level estimate. SIMCAT 1.0 also computes empirical and estimated skewness and kurtosis coefficients, as well as the standard error, of the estimated proficiency-level sampling distribution. In this way, the program allows one to compare empirical and estimated properties of that sampling distribution under different variations of the EAP estimation method: standard error and bias, as well as the skewness and kurtosis coefficients.
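
Although SIMCAT 1.0 itself is a SAS program, the EAP machinery it builds on is compact enough to sketch in R (used here to match the other examples): the estimate is a posterior mean computed by quadrature over a grid of nodes, and the adaptive a priori variants described above re-centre the prior on the previous provisional estimate instead of holding it fixed. The sketch is illustrative and is not the SIMCAT code.

    # EAP proficiency estimation under the Rasch model by simple quadrature
    # (illustration only; SIMCAT 1.0 is a SAS program and is not reproduced here).
    eap_rasch <- function(x, b, prior_mean = 0, prior_sd = 1,
                          nodes = seq(-4, 4, length.out = 81)) {
      lik <- sapply(nodes, function(t) {
        p <- 1 / (1 + exp(-(t - b)))                 # Rasch probabilities at node t
        prod(p^x * (1 - p)^(1 - x))                  # likelihood of the response pattern
      })
      w <- lik * dnorm(nodes, prior_mean, prior_sd)  # posterior weights on the grid
      sum(nodes * w) / sum(w)                        # posterior mean = EAP estimate
    }

    set.seed(4)
    b <- rnorm(30)                                   # item difficulties
    x <- rbinom(30, 1, 1 / (1 + exp(-(1.2 - b))))    # responses from a true theta of 1.2
    eap_rasch(x, b)                                  # fixed N(0, 1) prior
    eap_rasch(x, b, prior_mean = eap_rasch(x[1:10], b[1:10]))  # prior re-centred on a provisional estimate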


international conference on advanced learning technologies | 2007

The adaptive and intelligent testing framework: PersonFit

Komi Sodoke; Gilles Raîche; Roger Nkambou

E-learning has advanced considerably over the last decades, allowing interoperability between different systems and different kinds of adaptation to the student profile or learning objectives. However, some of its aspects, such as e-testing, are still in their infancy. As a consequence, most current e-learning platforms offer only basic e-testing functionalities. By making efficient use of well-known techniques from artificial intelligence, theories from psychometrics, and e-learning standards, adaptive testing functionalities can be integrated into current e-learning platforms. This is one of the goals of the platform we developed, named PersonFit. In this paper we present some of its architectural elements and the algorithms used.


Archive | 2015

The Internet Implementation of the Hierarchical Aggregate Assessment Process with the “Cluster” Wi-Fi E-Learning and E-Assessment Application: A Particular Case of Teamwork Assessment

Martin Lesage; Gilles Raîche; Martin Riopel; Frédérick Fortin; Dalila Sebkhi

A Wi-Fi e-learning and e-assessment Internet application named “Cluster” was developed in the context of a research project concerning the implementation of a teamwork assessment mobile application able to assess teams with several levels of hierarchy. Usually, teamwork assessment software and Internet applications for teams with several hierarchy levels belong to the field of Management Information Systems (MIS). However, some assessment tasks in teams with several levels of hierarchy may be performed in an educational context, and the existing applications for the assessment and evaluation of such teams are not dedicated to the assessment of students in an educational context. The “Cluster” application is able to present the course material, to train the students in teams, and to present individual and team assessment tasks. The application’s special functionalities enable it to assess the teams at several levels of hierarchy, which constitutes the hierarchical aggregate assessment process. In effect, team members may hold the roles of team member, team leader, or team administrator who supervises team leaders. The application can therefore evaluate simultaneously different knowledge and skills in the same assessment task based on the hierarchical position of the team member. The summative evaluation consists of work to submit as well as objective examinations in HTML format, while the formative evaluation is composed of assessment-grid computer forms for self-assessment and peer assessment. The application contains two mutually exclusive modes, the assessor mode and the student mode. The assessor mode allows the teacher to create courses, manage students, form the teams, and assess the students and the teams in a summative manner. The student mode allows the students to follow
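
As a rough illustration of the hierarchical aggregate assessment idea, the sketch below uses a hypothetical roster (not the application's actual data model) in which each person records a role and who they report to, and individual scores are aggregated upward from members to their leaders and then to the administrator.

    # Hypothetical two-level team hierarchy with upward score aggregation
    # (not the "Cluster" application's actual data model).
    roster <- data.frame(
      person     = c("admin1", "lead1", "lead2", "m1", "m2", "m3", "m4"),
      role       = c("administrator", "leader", "leader",
                     "member", "member", "member", "member"),
      reports_to = c(NA, "admin1", "admin1", "lead1", "lead1", "lead2", "lead2"),
      score      = c(NA, NA, NA, 78, 85, 90, 70)     # individual assessment results
    )

    # Each leader's team score is the mean of their members' scores
    leader_scores <- aggregate(score ~ reports_to,
                               data = roster[roster$role == "member", ], FUN = mean)
    names(leader_scores) <- c("person", "team_score")
    leader_scores                                    # per-leader aggregates
    mean(leader_scores$team_score)                   # administrator-level aggregate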


Methodology: European Journal of Research Methods for the Behavioral and Social Sciences | 2013

Non-Graphical Solutions for Cattell’s Scree Test

Gilles Raîche; Theodore A. Walls; David Magis; Martin Riopel; Jean-Guy Blais


Journal of Statistical Software | 2012

Random Generation of Response Patterns under Computerized Adaptive Testing with the R Package catR

David Magis; Gilles Raîche

Collaboration


Dive into Gilles Raîche's collaborations.

Top Co-Authors

Sébastien Béland, Université du Québec à Montréal
Jean-Guy Blais, Université de Montréal
Martin Riopel, Université du Québec à Montréal
Martin Lesage, Université du Québec à Montréal
Komi Sodoke, Université du Québec à Montréal
Diane Leduc, Université du Québec à Montréal
François Pichette, Université du Québec à Montréal