Publication


Featured research published by Yigal Rosen.


Review of Educational Research | 2009

Peace Education in Societies Involved in Intractable Conflicts: Direct and Indirect Models

Daniel Bar-Tal; Yigal Rosen

The present article deals with the crucial question: Can peace education facilitate change in the sociopsychological infrastructure that feeds continued intractable conflict, and if so, how can that change be carried out? Intractable conflicts still rage in various parts of the globe, and they not only cause local misery and suffering but also threaten the well-being of the international community at large. The present article examines the nature of peace education in societies that were, or still are, involved in intractable conflict. It presents the political–societal and educational conditions for successful implementation of peace education and describes two models for peace education: direct and indirect peace education. Finally, the article offers a number of conclusions.


Journal of Educational Computing Research | 2007

The Differential Learning Achievements of Constructivist Technology-Intensive Learning Environments as Compared with Traditional Ones: A Meta-Analysis

Yigal Rosen; Gavriel Salomon

Different learning environments provide different learning experiences and ought to serve different achievement goals. We hypothesized that constructivist learning environments lead to achievements consistent with the experiences that such settings provide, and that more traditional settings lead to other kinds of achievement in accordance with the experiences they provide. A meta-analytic study was carried out on 32 methodologically appropriate experiments in which these two settings were compared. Results supported one of our hypotheses, showing that, overall, constructivist learning environments are more effective than traditional ones (ES = .460) and that their superiority increases when tested against constructivist-appropriate measures (ES = .902). However, contrary to expectations, traditional settings did not differ from constructivist ones when traditionally appropriate measures were used. A number of possible interpretations are offered, among them the possibility that traditional settings have come to incorporate some constructivist elements. This possibility is supported by other findings of ours, such as smaller effect sizes for more recent studies and for longer-lasting periods of instruction.
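As an editorial aside, the effect sizes quoted above (ES = .460, ES = .902) are standardized mean differences aggregated across experiments. The sketch below shows one common way such a statistic can be computed and pooled; the group statistics, study values, and sample-size weighting scheme are illustrative assumptions, not the procedure or data used in this meta-analysis.

```python
# Minimal sketch: Cohen's d for one study and a simple weighted mean effect
# size across studies. All numbers are hypothetical, not data from the paper.
import math

def cohens_d(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_exp + n_ctrl - 2))
    return (mean_exp - mean_ctrl) / pooled_sd

d_example = cohens_d(78.0, 72.0, 12.0, 13.0, 30, 30)  # hypothetical group stats
print(f"d for one study: {d_example:.3f}")

# Each tuple: (effect size d, total sample size) for one hypothetical experiment.
studies = [(0.30, 60), (0.55, 120), (0.70, 45)]

# Sample-size-weighted mean effect size across studies (a common, simple scheme).
weighted_es = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(f"Weighted mean effect size: {weighted_es:.3f}")
```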


Journal of Educational Computing Research | 2009

The Effects of an Animation-Based On-Line Learning Environment on Transfer of Knowledge and on Motivation for Science and Technology Learning

Yigal Rosen

The study described here is among the first to systematically investigate the effect of learning with integrated animations on transfer of knowledge and on motivation to learn science and technology. Four hundred eighteen 5th and 7th grade students across Israel participated in the study. Students in the experimental group took part at least once a week in science and technology lessons that integrated the animation environment; the experiment continued for 2 to 3 months. The findings showed a significant impact of the animation-based online learning environment on transfer of knowledge and on learning motivation. Additionally, the findings showed that students changed their perception of science and technology learning as a result of teaching and learning with integrated animations: students perceived themselves as playing a more central role in classroom interactions, felt greater interest in learning, and placed greater emphasis on the use of technology and experiments during lessons.


Archive | 2018

Challenges of Assessing Collaborative Problem Solving

Arthur C. Graesser; Peter W. Foltz; Yigal Rosen; David Williamson Shaffer; Carol Forsyth

An assessment of Collaborative Problem Solving (CPS) proficiency was developed by an expert group for the PISA 2015 international evaluation of student skills and knowledge. The assessment framework defined CPS skills by crossing three major CPS competencies with four problem solving processes that were adopted from PISA 2012 Complex Problem Solving to form a matrix of 12 specific skills. The three CPS competencies are (1) establishing and maintaining shared understanding, (2) taking appropriate action, and (3) establishing and maintaining team organization. For the assessment, computer-based agents provide the means to assess students by varying group composition and discourse across multiple collaborative situations within a short period of time. Student proficiency is then measured by the extent to which students respond to requests and initiate actions or communications to advance the group goals. This chapter identifies considerations and challenges in the design of a collaborative problem solving assessment for large-scale testing.
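To make the framework's structure concrete, the short sketch below enumerates the 3 x 4 skill matrix described in the abstract. The three competencies are quoted from the abstract; the four problem solving process labels are those defined in the PISA 2012 framework and are included here as an assumption about their exact wording.

```python
# Sketch of the 3 x 4 CPS skill matrix described above. Competency labels
# come from the abstract; process labels are assumed from PISA 2012.
from itertools import product

competencies = [
    "Establishing and maintaining shared understanding",
    "Taking appropriate action",
    "Establishing and maintaining team organization",
]
processes = [
    "Exploring and understanding",
    "Representing and formulating",
    "Planning and executing",
    "Monitoring and reflecting",
]

# Crossing the competencies with the processes yields the 12 specific skills.
skill_matrix = {(c, p) for c, p in product(competencies, processes)}
assert len(skill_matrix) == 12
for competency, process in sorted(skill_matrix):
    print(f"{competency} x {process}")
```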


International Learning Analytics and Knowledge Conference | 2017

Exploring the measurement of collaborative problem solving using a human-agent educational game

Kristin Stoeffler; Yigal Rosen; Alina von Davier

Collaborative problem solving (CPS) is a process that relies on both the cognitive and the social skills contributed by those involved in the joint activity. If a student is matched with a problematic group of peers, valid measurement of that student's CPS skills becomes difficult. In human-agent settings, CPS skills are instead measured by pairing each individual student with one or more computer agents that can be programmed to act as team members with varying characteristics relevant to different CPS skills and contexts. This paper describes current research on measuring CPS skills through human-agent interactions in a prototype of a collaborative educational game.
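For illustration only, the sketch below shows one way a teammate agent with configurable characteristics could be represented; the trait names, messages, and behavior are hypothetical and are not taken from the prototype game described in the paper.

```python
# Hypothetical sketch of a configurable computer agent whose conversational
# behavior varies with preset characteristics, in the spirit of the
# human-agent approach described above.
from dataclasses import dataclass
import random

@dataclass
class TeammateAgent:
    name: str
    cooperativeness: float  # 0.0 (withholds information) .. 1.0 (shares freely)
    initiative: float       # 0.0 (passive) .. 1.0 (proposes actions unprompted)

    def respond(self, student_message: str) -> str:
        """Return a scripted reply whose content depends on the agent's traits."""
        if "what do you see" in student_message.lower():
            if random.random() < self.cooperativeness:
                return f"{self.name}: I see a lever on my side of the puzzle."
            return f"{self.name}: Not sure, you figure it out."
        if random.random() < self.initiative:
            return f"{self.name}: Let's split up and compare notes."
        return f"{self.name}: Okay."

agent = TeammateAgent(name="Alex", cooperativeness=0.3, initiative=0.8)
print(agent.respond("What do you see on your screen?"))
```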


Learning at Scale | 2018

The effects of adaptive learning in a massive open online course on learners' skill development

Yigal Rosen; Ilia Rushkin; Rob Rubin; Liberty Munson; Andrew M. Ang; Gregory Weber; Glenn Lopez; Dustin Tingley

We report an experimental implementation of adaptive learning functionality in a self-paced Microsoft MOOC (massive open online course) on edX. In a personalized adaptive system, the learner's progress toward clearly defined goals is continually assessed, the assessment occurs when a student is ready to demonstrate competency, and supporting materials are tailored to the needs of each learner. Despite the promise of adaptive personalized learning, there is a lack of evidence-based instructional design, of transparency in many of the models and algorithms used to provide adaptive technology, and of a framework for rapid experimentation with different models. ALOSI (Adaptive Learning Open Source Initiative) provides open source adaptive learning technology and a common framework to measure learning gains and learner behavior. This study explored the effects of two different strategies for adaptive learning and assessment. Learners were randomly assigned to three groups. In the first adaptive group, ALOSI prioritized a strategy of remediation, serving learners items on the topics with the least evidence of mastery; in the second adaptive group, ALOSI prioritized a strategy of continuity, meaning learners were more likely to be served items on a similar topic in sequence until mastery was demonstrated. The control group followed the pathways of the course as set out by the instructional designer, with no adaptive algorithms. We found that the implemented adaptivity in assessment, with an emphasis on remediation, is associated with a substantial increase in learning gains, while producing no substantial effect on drop-out. Further research is needed to confirm these findings and explore additional possible effects and implications for course design.
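To clarify the difference between the two strategies, the sketch below illustrates remediation-first versus continuity-first item selection. It is a simplified stand-in, not the ALOSI implementation; the mastery estimates, threshold, topics, and item pools are hypothetical.

```python
# Simplified sketch of the two item-selection strategies described above.
# Not the ALOSI implementation; mastery values and items are stand-ins.
from typing import Dict, List, Optional

def next_item(mastery: Dict[str, float],
              items_by_topic: Dict[str, List[str]],
              strategy: str,
              current_topic: Optional[str] = None,
              mastery_threshold: float = 0.8) -> Optional[str]:
    """Pick the next assessment item under a 'remediation' or 'continuity' strategy."""
    if strategy == "continuity" and current_topic is not None:
        # Continuity: keep serving items on the current topic until mastery is shown.
        if mastery.get(current_topic, 0.0) < mastery_threshold:
            candidates = items_by_topic.get(current_topic, [])
            if candidates:
                return candidates[0]
    # Remediation (and continuity fallback): serve an item on the topic
    # with the least evidence of mastery.
    weakest_topic = min(mastery, key=mastery.get)
    candidates = items_by_topic.get(weakest_topic, [])
    return candidates[0] if candidates else None

# Example usage with hypothetical mastery estimates and item pools.
mastery = {"loops": 0.9, "recursion": 0.4, "sorting": 0.7}
items = {"loops": ["L1"], "recursion": ["R1", "R2"], "sorting": ["S1"]}
print(next_item(mastery, items, strategy="remediation"))                           # -> R1
print(next_item(mastery, items, strategy="continuity", current_topic="sorting"))   # -> S1
```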


Artificial Intelligence in Education | 2018

Adaptive Learning Open Source Initiative for MOOC Experimentation

Yigal Rosen; Ilia Rushkin; Rob Rubin; Liberty Munson; Andrew M. Ang; Gregory Weber; Glenn Lopez; Dustin Tingley

In personalized adaptive systems, the learner's progress toward clearly defined goals is continually assessed, the assessment occurs when a student is ready to demonstrate competency, and supporting materials are tailored to the needs of each learner. Despite the promise of adaptive personalized learning, there is a lack of evidence-based instructional design, of transparency in many of the models and algorithms used to provide adaptive technology, and of a framework for rapid experimentation with different models. ALOSI (Adaptive Learning Open Source Initiative) provides open source adaptive learning technology and a common framework to measure learning gains and learner behavior. This paper provides an overview of the adaptive learning functionality developed by Harvard and Microsoft in collaboration with edX and other partners, and shares results from a recent deployment in a Microsoft MOOC on edX. The study explored the effects of two different strategies for adaptive problems (i.e., assessment items) on knowledge and skill development. We found that the implemented adaptivity in assessment, with an emphasis on remediation, is associated with a substantial increase in learning gains, while producing no substantial effect on drop-out. Further research is needed to confirm these findings and explore additional possible effects and implications for course design.


Artificial Intelligence in Education | 2018

Gamified Assessment of Collaborative Skills with Chatbots

Kristin Stoeffler; Yigal Rosen; Maria Bolsinova; Alina A. von Davier

Game-based assessments and learning environments create unique opportunities for learners to demonstrate their proficiency with cognitive skills and behaviors in increasingly authentic environments. Effective task designs, and the effective alignment of tasks with constructs, are also improving our ability to provide learners with insights about their proficiency with these skills. Sharing these insights with others in the field working toward the same goal contributes to the rising tide that lifts all boats. In this paper we present insights from our work to develop and measure collaborative problem solving (CPS) skills using a game-based assessment, "Circuit Runner." Our innovative educational game design allows us to incorporate item response data, telemetry data, and stealth-telemetry data to provide a more authentic measure of collaborative problem solving skills. Our study included 379 participants recruited on Amazon Mechanical Turk (MTurk), who completed the "Circuit Runner" CPS assessment. The paper provides details on the design of the educational game and its scoring techniques and discusses findings from the pilot study.


Artificial Intelligence in Education | 2018

Human-Agent Assessment: Interaction and Sub-skills Scoring for Collaborative Problem Solving

Pravin Chopade; Kristin Stoeffler; Saad M. Khan; Yigal Rosen; Spencer Swartz; Alina A. von Davier

Collaborative problem solving (CPS) is one of the 21st century skills identified as a critical competency for education and workplace success. Students entering the workforce will be expected to have a level of proficiency with both cognitive and social-emotional skills. This paper presents an approach to measuring features and sub-skills associated with CPS ability and provides a methodology for CPS-based performance assessment using an educational problem solving video game. Our method applies K-Means clustering to evaluate and analyze the feature space of the CPS evidence gathered from game log data. Our results show distinct participant clusters at high, medium, and low CPS proficiency levels, which can help focus remediation efforts.
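As a rough illustration of the clustering step, the sketch below applies K-Means to synthetic CPS feature vectors and partitions participants into three groups. The feature names and values are invented for the example; the paper's actual features are derived from game log data.

```python
# Minimal sketch: cluster CPS feature vectors into three proficiency groups
# with K-Means, in the spirit of the analysis described above. The feature
# values are synthetic, not data from the study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: one participant's aggregated CPS evidence
# (e.g., messages sent, correct actions, time on task in seconds).
features = np.array([
    [42, 18, 310.0],
    [10,  4, 620.0],
    [35, 15, 340.0],
    [12,  5, 580.0],
    [25, 10, 450.0],
    [44, 20, 300.0],
])

# Standardize so no single feature dominates the distance metric.
scaled = StandardScaler().fit_transform(features)

# Three clusters, intended to correspond to low/medium/high CPS proficiency.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
print(kmeans.labels_)
```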


Learning at Scale | 2017

Getting to Know English Language Learners in MOOCs: Their Motivations, Behaviors, and Outcomes

Selen Turkay; Hadas Eidelman; Yigal Rosen; Daniel T. Seaton; Glenn Lopez; Jacob Whitehill

Massive Open Online Courses (MOOCs) promise to engage a global audience and emphasize the democratic ideal of free, university-level education. While such open access enables participation, it is unclear how learners who are not fluent in English, i.e., English language learners (ELLs), engage with MOOC content, given that the language of instruction in most MOOCs is English. In order to improve accessibility for ELLs in digital learning environments, we must first have a clear understanding of the educational landscape: Who are the non-native English speakers enrolled in MOOCs? Where are they located geographically? What are their current online learning behaviors, motivations, and outcomes? In this paper we begin answering some of these questions by analyzing self-report and log data from 100 HarvardX courses. Preliminary analyses show evidence that ELLs are motivated by more utilitarian goals compared to non-ELLs.
