Eva Mary Bures
Bishop's University
Publication
Featured research published by Eva Mary Bures.
Journal of Computing in Higher Education | 2011
Philip C. Abrami; Robert M. Bernard; Eva Mary Bures; Eugene Borokhovski
In a recent meta-analysis of distance and online learning, Bernard et al. (2009) quantitatively verified the importance of three types of interaction: among students, between the instructor and students, and between students and course content. In this paper we explore these findings further, discuss methodological issues in research and suggest how these results may foster instructional improvement. We highlight several evidence-based approaches that may be useful in the next generation of distance and online learning. These include principles and applications stemming from the theories of self-regulation and multimedia learning, research-based motivational principles and collaborative learning principles. We also discuss the pedagogical challenges inherent in distance and online learning that need to be considered in instructional design and software development.
American Journal of Distance Education | 1996
Philip C. Abrami; Eva Mary Bures
Computer-supported collaborative learning and distance education. American Journal of Distance Education, Vol. 10, No. 2, pp. 37-42.
Research in Higher Education | 2000
Eva Mary Bures; Philip C. Abrami; Cheryl Amundsen
This study investigates why some university students appear motivated to learn via computer conferencing (CC) whereas others do not. It explores the correlations of three key aspects of student motivation—reasons for engaging in academic learning (goal orientation), beliefs that they can acquire the ability to use CC (self-efficacy), and beliefs that learning to use CC will help them learn the course material (outcome expectations)—with satisfaction and with the frequency of CC contributions. Participants (n = 79) came from four graduate-level face-to-face courses and one undergraduate distance education course. The results suggest that students who believe that CC will help them learn the course material are more likely to express satisfaction and to be active online, that students who believe they are capable of learning how to use CC are more likely to be active online, and that students who are concerned about their performance relative to others tend to send fewer messages to conferences where online activity is not graded. Practical implications for instructors and suggestions for future research are described.
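As an illustrative aside, the kind of correlational analysis described above can be sketched in a few lines of Python. The variable names, simulated data, and effect sizes below are hypothetical stand-ins, not the study's dataset or code.

```python
# A minimal sketch of correlating motivation measures with satisfaction and
# online activity, assuming simple continuous scores; all data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 79  # sample size reported in the abstract

# Hypothetical stand-ins for two of the motivation constructs and two outcomes.
outcome_expectations = rng.normal(size=n)  # belief that CC aids course learning
self_efficacy = rng.normal(size=n)         # belief in one's ability to use CC
satisfaction = 0.5 * outcome_expectations + rng.normal(scale=0.8, size=n)
messages_sent = 0.4 * self_efficacy + rng.normal(scale=0.9, size=n)

pairs = {
    "outcome expectations vs. satisfaction": (outcome_expectations, satisfaction),
    "self-efficacy vs. messages sent": (self_efficacy, messages_sent),
}
for label, (x, y) in pairs.items():
    r, p = pearsonr(x, y)  # Pearson correlation and two-tailed p-value
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
```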
Educational Research and Evaluation | 2004
Philip C. Abrami; Gretchen Lowerison; Eva Mary Bures
According to van Wyhe (2002), 19th-century phrenologists called their interest "the only true science of mind." The basic tenets of phrenology were: (a) the brain is the organ of the mind; (b) the mind is composed of multiple distinct, innate faculties; (c) because they are distinct, each faculty must have a separate seat or "organ" in the brain; (d) the size of an organ, other things being equal, is a measure of its power; (e) the shape of the brain is determined by the development of the various organs; and, importantly, (f) as the skull takes its shape from the brain, the surface of the skull can be read as an accurate index of psychological aptitudes and tendencies. Phrenologists believed that by examining the shape and unevenness of a head or skull, one could discover the development of the particular cerebral "organs" responsible for different intellectual aptitudes and character traits. Long since discredited, the topic of phrenology reminds me, the senior author of this introductory article, of my unhappy early days as an undergraduate student. I remember spending too many hours engaged in the process of transcription as my instructors, usually senior graduate students, wrote notes on the blackboard as quickly as they and most of the class members could write. When I did pause in my labors to look up, I seldom saw more than the back of the instructor's head as the instructor was usually too
Computer Supported Collaborative Learning | 2010
Eva Mary Bures; Philip C. Abrami; Richard F. Schmid
This paper explores a labelling feature designed to support higher-level online dialogue. It investigates whether students use labels less often during a structured online dialogue than during an unstructured one, and looks at students' reactions to labelling and to both types of tasks. Participants are from three successive offerings of a Master's-level course (n = 37). All students are allowed but not required to use a labelling feature which enables them to insert phrases such as "Building on your point" directly into their online messages. All students participate in two types of online activities in small groups: first an unstructured online dialogue, then a structured online dialogue. Students tended to use labels significantly less often during the structured dialogue: F(1, 36) = 5.950, p < 0.05. Sixty-two percent of students used the feature more than once during the unstructured dialogue, compared to 46% during the structured dialogue. The maximum number of labels a student used was 28 in the unstructured dialogue versus 16 in the structured dialogue. Students generally found the structured dialogue to be more interesting and relevant, and to have clearer expectations. Student reactions to the labelling feature were mixed: the mean satisfaction score was 18.35 (SD = 3.88) on a six-item, 5-point Likert scale. Students did not find labelling as useful during the structured dialogue; perhaps labelling and the activity provided redundant scaffolding. These results imply that features built into the software should be implemented flexibly, with thought to the other pedagogical scaffolds in the environment, particularly the type of activity.
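As a side note on the reported statistic: with one within-subjects factor and two conditions, F(1, 36) is simply the square of a paired t statistic computed on the same 37 students. A minimal sketch, assuming simulated label counts rather than the study's data:

```python
# Paired comparison of label use across the two dialogue types; the counts
# here are simulated, and the F = t**2 equivalence is the only point.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n = 37  # students across the three course offerings

labels_unstructured = rng.poisson(lam=4.0, size=n)  # labels per student, task 1
labels_structured = rng.poisson(lam=2.5, size=n)    # labels per student, task 2

t, p = ttest_rel(labels_unstructured, labels_structured)
print(f"t({n - 1}) = {t:.3f}, p = {p:.3f}; equivalent F(1, {n - 1}) = {t**2:.3f}")
```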
Archive | 2013
Philip C. Abrami; Eva Mary Bures; Einat Idan; Elizabeth J. Meyer; Vivek Venkatesh; Anne Wade
At the Centre for the Study of Learning and Performance we have developed, tested, and disseminated to schools without charge, an Electronic Portfolio Encouraging Active and Reflective Learning (ePEARL). ePEARL is designed to be faithful to predominant models of self-regulation, scaffolding and supporting learners and their educators from grade one (level one) through grade twelve and beyond (level four). ePEARL encourages learners to engage in the cyclical phases and sub-phases of forethought, performance, and self-reflection. In a series of studies, including two longitudinal quasi-experiments, we have explored the positive impacts of ePEARL on the enhancement of students’ self-regulated learning skills, their literacy skills and changes in teaching, while simultaneously researching classroom implementation fidelity and teacher professional development. This chapter briefly explains the development of ePEARL, our research program, and issues in the scalability and sustainability of knowledge tools.
Canadian Journal of Learning and Technology / La revue canadienne de l’apprentissage et de la technologie | 2013
Eva Mary Bures; Alexandra Barclay; Philip C. Abrami; Elizabeth J. Meyer
This study explores electronic portfolios and their potential to assess student literacy and self-regulated learning (SRL) in elementary-aged children. The assessment tools developed include a holistic rubric that assigns a mark from 1 to 5 for SRL and a mark for literacy, and an analytical rubric measuring multiple sub-scales of SRL and literacy. Participants in grades 4, 5, and 6 created electronic portfolios across two years (n = 369 volunteers). Some classes were excluded from statistical analyses in the first year due to low implementation, and some individuals were excluded in both years, leaving n = 251 in the analyses. All portfolios were coded by two coders, and inter-rater reliability was explored. During the first year, Cohen's kappa ranged from 0.70 to 0.79 for literacy and SRL overall, but agreement on some sub-scales was unacceptably weak. The second year showed improvement in Cohen's kappa overall, and especially for the sub-scales, reflecting improved implementation of the portfolios and use of the assessment tools. Validity was explored by comparing portfolio scores to other measures: the government scores on the open-response literacy questions of the Canadian Achievement Tests (version 4), the scores we assigned to the CAT-4s using our assessment tools, and scores on the Student Learning Strategies Questionnaire (SLSQ) measuring SRL. The portfolio literacy scores correlated (p < 0.01) with the scores we assigned the CAT-4s using our assessment tools and with the government pre-CAT-4 scores, but the SRL scores did not correlate with our questionnaire measure of students' self-regulation. The results suggest that electronic portfolio assessment is time-consuming and difficult, given the range of varying evidence within even a single individual's portfolio, and that it may not be fair to conduct across diverse classrooms unless there are shared guidelines or tasks.
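As an illustrative aside, the inter-rater agreement check described above can be sketched with scikit-learn's Cohen's kappa. The two coders' rubric scores below are simulated, not the study's data.

```python
# A minimal sketch of two coders scoring portfolios on a 1-5 rubric and
# checking their agreement with Cohen's kappa; all scores are simulated.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n = 251  # portfolios retained for analysis, per the abstract

coder_a = rng.integers(1, 6, size=n)  # rubric scores from 1 to 5
# The second coder mostly agrees, with occasional one-point disagreements.
coder_b = np.clip(coder_a + rng.choice([-1, 0, 0, 0, 1], size=n), 1, 5)

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # year-one values reported: 0.70 to 0.79
```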
Canadian Journal of Learning and Technology | 2009
Philip C. Abrami; C. Anne Wade; Vanitha Pillay; Ofra Aslan; Eva Mary Bures; Caitlin Bentley
Journal of Educational Computing Research | 2002
Eva Mary Bures; Cheryl Amundsen; Philip C. Abrami