Robert M. Bernard
Concordia University
Publication
Featured research published by Robert M. Bernard.
Review of Educational Research | 2004
Robert M. Bernard; Philip C. Abrami; Yiping Lou; Evgueni Borokhovski; Anne Wade; Lori Wozney; Peter Andrew Wallet; Manon Fiset; Binru Huang
A meta-analysis of the comparative distance education (DE) literature between 1985 and 2002 was conducted. In total, 232 studies containing 688 independent achievement, attitude, and retention outcomes were analyzed. Overall results indicated effect sizes of essentially zero on all three measures and wide variability. This suggests that many applications of DE outperform their classroom counterparts and that many perform more poorly. Dividing achievement outcomes into synchronous and asynchronous forms of DE produced a somewhat different impression. In general, mean achievement effect sizes for synchronous applications favored classroom instruction, while effect sizes for asynchronous applications favored DE. However, significant heterogeneity remained in each subset.
Review of Educational Research | 2009
Robert M. Bernard; Philip C. Abrami; Eugene Borokhovski; C. Anne Wade; Michael A. Surkes; Edward Clement Bethel
This meta-analysis of the experimental literature of distance education (DE) compares different types of interaction treatments (ITs) with other DE instructional treatments. ITs are the instructional and/or media conditions designed into DE courses that are intended to facilitate student–student (SS), student–teacher (ST), or student–content (SC) interactions. Seventy-four DE versus DE studies that contained at least one IT are included in the meta-analysis, yielding 74 achievement effects. The effect size valences are structured so that the IT, or the stronger IT (i.e., in the case of two ITs), serves as the experimental condition and the other treatment as the control condition. Effects are categorized as SS, ST, or SC. After adjustment for methodological quality, the overall weighted average effect size for achievement is 0.38 and is heterogeneous. Overall, the results support the importance of the three types of ITs, and the strength of ITs is found to be associated with increased achievement outcomes. A strong association between strength and achievement is found for asynchronous DE courses compared to courses containing mediated synchronous or face-to-face interaction. The results are interpreted in terms of the increased cognitive engagement that is presumed to be promoted by strengthening ITs in DE courses.
Review of Educational Research | 2011
Robert M. Bernard; Eugene Borokhovski; Philip C. Abrami; Richard F. Schmid
This research study employs a second-order meta-analysis procedure to summarize 40 years of research activity addressing the question, does computer technology use affect student achievement in formal face-to-face classrooms as compared to classrooms that do not use technology? A study-level meta-analytic validation was also conducted for purposes of comparison. An extensive literature search and a systematic review process resulted in the inclusion of 25 meta-analyses with minimal overlap in primary literature, encompassing 1,055 primary studies. The random effects mean effect size of 0.35 was significantly different from zero. The distribution was heterogeneous under the fixed effects model. To validate the second-order meta-analysis, 574 individual independent effect sizes were extracted from 13 out of the 25 meta-analyses. The mean effect size was 0.33 under the random effects model, and the distribution was heterogeneous. Insights about the state of the field, implications for technology use, and prospects for future research are discussed.
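The second-order pooling step described above can be illustrated with a short sketch: each included meta-analysis contributes its own mean effect size and standard error, and these are combined by inverse-variance weighting. The means and standard errors below are hypothetical placeholders, not values from the 25 meta-analyses in the study.

```python
# Hypothetical illustration of second-order pooling: combine the mean
# effects of several first-order meta-analyses by inverse-variance weights.
meta_means = [0.28, 0.41, 0.33]   # hypothetical first-order mean effect sizes
std_errors = [0.05, 0.08, 0.04]   # hypothetical standard errors of those means

weights = [1 / se ** 2 for se in std_errors]  # inverse-variance weights
second_order_mean = sum(w * m for w, m in zip(weights, meta_means)) / sum(weights)
```

More precise meta-analyses tend to have smaller standard errors and therefore larger weights, so they dominate the pooled second-order mean.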
Review of Educational Research | 2008
Philip C. Abrami; Robert M. Bernard; Evgueni Borokhovski; Anne Wade; Michael A. Surkes; Dai Zhang
Critical thinking (CT), or the ability to engage in purposeful, self-regulatory judgment, is widely recognized as an important, even essential, skill. This article describes an ongoing meta-analysis that summarizes the available empirical evidence on the impact of instruction on the development and enhancement of critical thinking skills and dispositions. We found 117 studies based on 20,698 participants, which yielded 161 effects with an average effect size (g+) of 0.341 and a standard deviation of 0.610. The distribution was highly heterogeneous (QT = 1,767.86, p < .001). There was, however, little variation due to research design, so we neither separated studies according to their methodological quality nor used any statistical adjustment for the corresponding effect sizes. Type of CT intervention and pedagogical grounding were substantially related to fluctuations in CT effect sizes, together accounting for 32% of the variance. These findings make it clear that improvement in students’ CT skills and dispositions cannot be a matter of implicit expectation. As important as the development of CT skills is considered to be, educators must take steps to make CT objectives explicit in courses and also to include them in both preservice and in-service training and faculty development.
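The pooled mean effect size (g+) and the heterogeneity statistic Q reported in abstracts like the one above come from standard inverse-variance arithmetic. The sketch below shows the computation on hypothetical effect sizes and sampling variances, not on data from the study.

```python
# Hypothetical sketch of a fixed-effect pooled mean (g+) and Cochran's Q.
effects = [0.10, 0.45, 0.60, 0.25]    # hypothetical Hedges' g values
variances = [0.02, 0.05, 0.03, 0.04]  # hypothetical sampling variances

weights = [1 / v for v in variances]  # inverse-variance weights
g_plus = sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each effect from the pooled mean.
# Under homogeneity, Q follows a chi-square distribution with k - 1 df.
Q = sum(w * (g - g_plus) ** 2 for w, g in zip(weights, effects))
df = len(effects) - 1
```

A Q value well above its degrees of freedom (as in the study's QT = 1,767.86 across 161 effects) signals that the effects vary more than sampling error alone would predict.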
Journal of Educational Computing Research | 1995
Roger Azevedo; Robert M. Bernard
A quantitative research synthesis (meta-analysis) was conducted on the literature concerning the effects of feedback on learning from computer-based instruction (CBI). Despite the widespread acceptance of feedback in computerized instruction, empirical support for particular types of feedback information has been inconsistent and contradictory. Effect size calculations from twenty-two studies involving the administration of immediate achievement posttests resulted in a weighted mean effect size of .80. Also, a mean weighted effect size of .35 was obtained from nine studies involving delayed posttest administration. Feedback effects on learning and retention were found to vary with CBI typology, format of unit content, and access to supplemental materials. Results indicate that the diagnostic and prescriptive management strategies of computer-based adaptive instructional systems provide the most effective feedback. The implementation of effective feedback in computerized instruction depends on the computer's ability to verify the correctness of the learner's answer and to diagnose the underlying causes of error.
Distance Education | 2000
Robert M. Bernard; Beatriz Rojo de Rubalcava; Denise St-Pierre
This article provides an overview of issues of practice and research relating to the use of collaborative online learning in distance education (DE). It begins with an examination of the traditional problems of DE. Following that is a discussion of what collaborative online learning encompasses and a review of the primary instructional design issues that relate to it. These are: (a) course preparation; (b) creating a good social climate and sense of community; (c) the role of the instructor; (d) encouraging true collaboration; and (e) the effective use of technology. In addition, some of the literature relating to problem‐based learning is referenced and its application to collaborative online learning is discussed. The authors conclude that using new technologies in combination with a collaborative online learning approach in DE may prove to be highly effective when learner characteristics and the learning context are considered carefully. Recommendations for future areas of research are also provided, along with a matrix of variables that may be combined to conceptualise further study in the area.
Journal of Computing in Higher Education | 2011
Philip C. Abrami; Robert M. Bernard; Eva Mary Bures; Eugene Borokhovski
In a recent meta-analysis of distance and online learning, Bernard et al. (2009) quantitatively verified the importance of three types of interaction: among students, between the instructor and students, and between students and course content. In this paper we explore these findings further, discuss methodological issues in research and suggest how these results may foster instructional improvement. We highlight several evidence-based approaches that may be useful in the next generation of distance and online learning. These include principles and applications stemming from the theories of self-regulation and multimedia learning, research-based motivational principles and collaborative learning principles. We also discuss the pedagogical challenges inherent in distance and online learning that need to be considered in instructional design and software development.
Distance Education | 2004
Robert M. Bernard; Aaron Brauer; Philip C. Abrami; Mike Surkes
The study reported here concerns the development and predictive validation of an instrument to assess the achievement outcomes of DE/online learning success. A 38‐item questionnaire was developed and administered to 167 students who were about to embark on an online course. Factor analysis indicated a four‐factor solution, interpreted as “general beliefs about DE,” “confidence in prerequisite skills,” “self‐direction and initiative” and “desire for interaction.” Using multiple regression we found that two of these factors predicted achievement performance (i.e., Cumulative Course Grade). Comparisons of pretest and posttest administrations of the questionnaire revealed that some changes in opinion occurred between the beginning and the end of the course. Also, categories of demographic characteristics were compared on the four factors. The overall results suggest that this instrument has some predictive validity in terms of achievement, but that Cumulative Grade Point Average (i.e., the university's record of overall achievement) is a much better predictor.
Canadian Journal of Learning and Technology | 2008
Philip C. Abrami; Robert M. Bernard; Anne Wade; Richard F. Schmid; Eugene Borokhovski; Rana Tamin; Michael A. Surkes; Gretchen Lowerison; Dai Zhang; Iolie Nicolaidou; Sherry Newman; Lori Wozney; Anna Peretiatkowicz
This review provides a rough sketch of the evidence, gaps and promising directions in e-learning from 2000 onwards, with a particular focus on Canada. We searched a wide range of sources and document types to ensure that we represented, comprehensively, the arguments surrounding e-learning. Overall, there were 2,042 entries in our database, of which we reviewed 1,146, including all the Canadian primary research and all scholarly reviews of the literature. In total, there were 726 documents included in our review: 235 – general public opinion; 131 – trade/practitioners’ opinion; 88 – policy documents; 120 – reviews; and 152 – primary empirical research. The Argument Catalogue codebook included the following eleven classes of variables: 1) Document Source; 2) Areas/Themes of e-learning; 3) Value/Impact; 4) Type of evidence; 5) Research design; 6) Area of applicability; 7) Pedagogical implementation factors; 8) A-priori attitudes; 9) Types of learners; 10) Context; and 11) Technology Factors. We examined the data from a number of perspectives, including their quality as evidence. In the primary research literature, we examined the kinds of research designs that were used. We found that over half of the studies conducted in Canada are qualitative in nature, while the rest are split in half between surveys and quantitative studies (correlational and experimental). When we looked at the nature of the research designs, we found that 51% are qualitative case studies and 15.8% are experimental or quasi-experimental studies. It seems that studies that can help us understand “what works” in e-learning settings are underrepresented in the Canadian research literature. The documents were coded to provide data on outcomes of e-learning (we also refer to them as “impacts” of e-learning). Outcomes/impacts are the perceived or measured benefits of e-learning, whereas predictors are the conditions or features of e-learning that can potentially affect the outcomes/impacts. 
The impacts were coded on a positive to negative scale and included: 1) achievement; 2) motivation/satisfaction; 3) interactivity/communication; 4) meeting social demands; 5) retention/attrition; 6) learning flexibility; and 7) cost. Based on an analysis of the correlations among these impacts, we subsequently collapsed them (all but cost) into a single impact scale ranging from –1 to +1. We found, generally, that the perception of impact or actual measured impact varies across the types of documents. Ratings appear to be lower in general opinion documents, practitioner documents and policy-making reports than in scholarly reviews and primary research; the higher ratings in scholarly sources may represent an expression of hope for positive impact on the one hand, or may reflect reality on the other. Where there were sufficient documents to examine and code, impact was high across each of the CCL Theme Areas. Health and Learning was the highest, with a mean of 0.80, and Elementary/Secondary was the lowest, with a mean of 0.77. However, there was no significant difference between these means. The impact of e-learning and technology use was highest in distance education, where its presence is required (Mean = 0.80) and lowest in face-to-face instructional settings (Mean = 0.60) where its presence is not required. Network-based technologies (e.g., Internet, Web-based, CMC) produced a higher impact score (Mean = 0.72) than straight technology integration in educational settings (Mean = 0.66), although this difference was considered negligible. Interestingly, among the Pedagogical Uses of Technology, student applications (i.e., students using technology) and communication applications (both Mean = 0.78) had a higher impact score than instructional or informative uses (Mean = 0.63). This result suggests that student manipulation of technology in achieving educational goals is preferable to teacher manipulation of technology.
In terms of predictor variables (professional training, course design, infrastructure/logistics, type of learners [general population, special needs, gifted], gender issues and ethnicity/race/religion/aboriginal status, location, school setting, context of technology use, type of tool used, and pedagogical function of technology) we found the following: professional development was underrepresented compared to issues of course design and infrastructure/logistics; most attention is devoted to general population students, with little representation of special needs students, gifted students, issues of gender, or ethnic/race/religious/aboriginal status; the greatest attention is paid to technology use in distance education and the least attention paid to the newly emerging area of hybrid/blended learning; the most attention is paid to networked technologies such as the Internet, the WWW, and CMC and the least paid to virtual reality and simulations. Using technology for instruction and using technology for communication are the two highest categories of pedagogical use. In the final stage, the primary e-learning studies from the Canadian context that could be summarized quantitatively were identified. We examined 152 studies and found a total of 7 that were truly experimental (i.e., random assignment with treatment and control groups) and 10 that were quasi-experimental (i.e., not randomized but possessing a pretest and a posttest). From these studies we extracted 29 effect sizes or standardized mean differences, which were included in the composite measure. The mean effect size was +0.117, a small positive effect. Approximately 54% of the e-learning participants performed at or above the mean of the control participants (50th percentile), an advantage of 4 percentage points. However, the heterogeneity analysis was significant, indicating that the effect sizes were widely dispersed. It is clearly not the case that e-learning is always the superior condition for educational impact.
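The "approximately 54%" figure above follows from the mean effect size of +0.117 under a normality assumption: the proportion of treatment participants scoring at or above the control mean (Cohen's U3) is the standard normal CDF evaluated at the effect size. A minimal sketch of that conversion:

```python
# Convert a standardized mean difference to Cohen's U3: the proportion of
# the treatment group at or above the control-group mean, assuming both
# groups are normally distributed with equal variance.
import math

def u3(d: float) -> float:
    """Standard normal CDF at d, via the error function."""
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))

pct = u3(0.117) * 100  # roughly 54%, i.e., about a 4-point advantage
```

An effect size of zero gives U3 = 50%, so the reported advantage is the difference between this percentage and the 50th percentile.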
Overall, we know that research in e-learning has not been a Canadian priority; the culture of educational technology research, as distinct from development, has not taken on great import. In addition, there appears to have been a disproportionate emphasis on qualitative research in the Canadian e-learning research culture. We noted that there are gaps in areas of research related to early childhood education and adult education. Finally, we believe that more emphasis must be placed on implementing longitudinal research, whether qualitative or quantitative (preferably a mixture of the two), and that all development efforts be accompanied by strong evaluation components that focus on learning impact. It is a shame to attempt innovation and not be able to tell why it works or doesn’t work. In this sense, the finest laboratories for e-learning research are the institutions in which it is being applied.

Implications for K-12 Practitioners: When implemented appropriately, technology tools are beneficial to students’ learning, and may facilitate the development of higher order thinking skills. Student manipulation of technology in achieving the goals of education is preferable to teacher manipulation of technology. Teachers need to be aware of differences between instructional design for e-learning as compared to traditional face-to-face situations. Immediate, extensive, and sustained support should be offered to teachers in order to make the best of e-learning.

Implications for Post-Secondary: Some educators suggest that e-learning has the potential to transform learning, but there is limited empirical research to assess the benefits. Post-secondary education would benefit from a Pan-Canadian plan to assess the impact of e-learning initiatives. It is important that instructional design match the goals and potential of e-learning. Research is needed to determine the feasibility and effectiveness of such things as learning objects and multimedia applications.
Properly implemented computer-mediated communication can enrich the learning environment and help reduce low motivation and feelings of isolation in distance learners. E-learning appears to be more effective in distance education, where technology use is required, than in face-to-face instructional settings.

Implications for Policy Makers: Effective and efficient implementation of e-learning technologies represents new, and difficult, challenges to practitioners, researchers, and policymakers. The term e-learning has been used to describe many different applications of technology, which may be implemented in a wide variety of ways (some of which are much more beneficial than others). School administrators must balance the needs of all stakeholders, and the cost-benefit ratios of technology tools, in deciding not only which technologies to use, but also when and how to implement new technologies. Traditional methods of instructional design and school administration must be adjusted to deal with the demands of distance education and other contexts of technology use. Professional education, development, and training for educators must ensure that teachers will be equipped to make optimal pedagogical use of new methods.
Review of Educational Research | 2015
Philip C. Abrami; Robert M. Bernard; Eugene Borokhovski; David I. Waddington; C. Anne Wade; Tonje J. Persson
Critical thinking (CT) is purposeful, self-regulatory judgment that results in interpretation, analysis, evaluation, and inference, as well as explanations of the considerations on which that judgment is based. This article summarizes the available empirical evidence on the impact of instruction on the development and enhancement of critical thinking skills and dispositions and student achievement. The review includes 341 effect sizes drawn from quasi- or true-experimental studies that used standardized measures of CT as outcome variables. The weighted random effects mean effect size (g+) was 0.30 (p < .001). The collection was heterogeneous (p < .001). Results demonstrate that there are effective strategies for teaching CT skills, both generic and content specific, and CT dispositions, at all educational levels and across all disciplinary areas. Notably, the opportunity for dialogue, the exposure of students to authentic or situated problems and examples, and mentoring had positive effects on CT skills.