Giora Alexandron
Weizmann Institute of Science
Publication
Featured research published by Giora Alexandron.
learning at scale | 2016
José A. Ruipérez-Valiente; Giora Alexandron; Zhongzhou Chen; David E. Pritchard
The study presented in this paper deals with copying answers in MOOCs. Our findings show that a significant fraction of the certificate earners in the course we studied used what we call harvesting accounts to find correct answers that they later submitted in their main account, the account for which they earned a certificate. In total, around 2.5% of the users who earned a certificate in the course obtained the majority of their points this way, and around 10% of them used it to some extent. This paper has two main goals. The first is to define the phenomenon and demonstrate its severity. The second is to characterize key factors within the course that affect it, and to suggest remedies that are likely to reduce the amount of cheating. The immediate implication of this study is to MOOCs. However, we believe that the results generalize beyond MOOCs, since this strategy can be used in any learning environment that does not identify all registrants.
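The harvesting-account strategy described above can be detected, in essence, by pairing submissions across accounts: a correct answer first appears in one account and is re-submitted shortly afterwards by a different account from the same IP. A minimal sketch in Python, assuming a hypothetical submission record layout (user, problem, answer, correctness, timestamp, IP); the actual detection algorithm used in the paper is more involved:

```python
from collections import namedtuple

# Hypothetical record layout; field names are illustrative, not from the paper.
Submission = namedtuple("Submission", "user problem answer correct time ip")

def find_cameo_pairs(subs, window=300):
    """Flag (harvester, master) candidate pairs: a correct answer
    submitted from one account is re-submitted by another account
    on the same IP within `window` seconds."""
    pairs = set()
    correct_by_key = {}  # (problem, ip) -> earlier correct submissions
    for s in sorted(subs, key=lambda s: s.time):
        key = (s.problem, s.ip)
        if s.correct:
            for earlier in correct_by_key.get(key, []):
                if earlier.user != s.user and s.time - earlier.time <= window:
                    pairs.add((earlier.user, s.user))
            correct_by_key.setdefault(key, []).append(s)
    return pairs
```

A pair is only a candidate: legitimate same-IP activity (e.g. siblings) would need to be ruled out by additional evidence, as the study does.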
international conference on software engineering | 2014
Giora Alexandron; Michal Armoni; Michal Gordon; David Harel
We examine how students work in scenario-based and object-oriented programming (OOP) languages, and qualitatively analyze the use of abstraction through the prism of the differences between the paradigms. The findings indicate that when working in a scenario-based language, programmers think on a higher level of abstraction than when working with OOP languages. This is explained by other findings, which suggest how the declarative, incremental nature of scenario-based programming facilitates separation of concerns, and how it supports a kind of programming that allows programmers to work with a less detailed mental model of the system they develop. The findings shed light on how declarative approaches can reduce the cognitive load involved in programming, and how scenario-based programming might solve some of the difficulties involved in the use of declarative languages. This is applicable to the design of learning materials, and to the design of programming languages and tools.
Computers in Education | 2017
Giora Alexandron; José A. Ruipérez-Valiente; Zhongzhou Chen; Pedro J. Muñoz-Merino; David E. Pritchard
This paper presents a detailed study of a form of academic dishonesty that involves the use of multiple accounts for harvesting solutions in a Massive Open Online Course (MOOC). It is termed CAMEO (Copying Answers using Multiple Existences Online). A person using CAMEO sets up one or more harvesting accounts for collecting correct answers; these are then submitted in the user's master account for credit. The study has three main goals: determining the prevalence of CAMEO, studying its detailed characteristics, and inferring the motivation(s) for using it. For the physics course that we studied, about 10% of the certificate earners used this method to obtain more than 1% of their correct answers, and more than 3% of the certificate earners used it to obtain the majority (>50%) of their correct answers. We discuss two of the likely consequences of CAMEO: jeopardizing the value of MOOC certificates as academic credentials, and generating misleading conclusions in educational research. Based on our study, we suggest methods for reducing CAMEO. Although this study was conducted on a MOOC, CAMEO can be used in any learning environment that enables students to have multiple accounts. Highlights: the paper studies the use of multiple accounts for harvesting solutions in a MOOC; 10% of the certificate earners used it to some extent, and 3% for the majority of their points; the motivation is earning a certificate with less effort; the behavior is typically premeditated, applied without first attempting to solve the question legitimately; randomizing question parameters and delaying feedback reduce it significantly.
workshop in primary and secondary computing education | 2013
Giora Alexandron; Michal Armoni; Michal Gordon; David Harel
Non-determinism (ND) is a fundamental concept in computer science, and comes in two main flavors. One is the kind of ND that appears in automata theory and formal languages. The other, which we term operative, appears in non-deterministic programming languages and in the context of concurrent and distributed systems. We believe that it is important to teach the two types of ND, especially as ND has become a very prominent characteristic of computerized systems. Currently, students are mainly introduced to ND of the first type, which is known to be hard to teach and learn. Our findings suggest that learning operative ND might be easier, and that students can reach a significant understanding of this concept when it is introduced in the context of a programming course that deals with a non-deterministic programming language like the language of Live Sequence Charts (LSC). Based on that, we suggest teaching operative ND in the context of concurrent and distributed programming, a topic which is covered by a new knowledge area that was added in Computer Science Curricula 2013.
Computing in Science and Engineering | 2017
Giora Alexandron; Michal Armoni; Michal Gordon; David Harel
This is the second part of a two-part series that describes a pilot programming course in which high school students majoring in computer science were introduced to the visual, scenario-based programming language of live sequence charts. The main rationale for the course was that computer science students should be exposed to at least two very different programming paradigms and that LSCs, with their unique characteristics, can be a good vehicle for that. Part 1 (see the previous issue) focused on the pedagogic rationale of the pilot, on introducing LSC, and on the structure of the course. Part 2 centers on the evaluation of the pilot’s results.
Research and Practice in Technology Enhanced Learning | 2016
Zhongzhou Chen; Christopher Chudzicki; Daniel C. Palumbo; Giora Alexandron; Youn-Jeng Choi; Qian Zhou; David E. Pritchard
We conducted two AB experiments (treatment vs. control) in a massive open online course. The first experiment evaluates deliberate practice activities (DPAs) for developing problem-solving expertise as measured by traditional physics problems. We find that a more interactive drag-and-drop format of DPA generates quicker learning than a multiple-choice format, but DPAs do not improve performance on solving traditional physics problems more than normal homework practice. The second experiment shows that a different video shooting setting can improve the fluency of the instructor, which in turn improves the engagement of the students, although it has no significant impact on the learning outcomes. These two cases demonstrate the potential of MOOC AB experiments as an open-ended research tool but also reveal limitations. We discuss the three most important challenges: wide student distribution, the "open-book" nature of assessments, and the large quantity and variety of data. We suggest possible methods to cope with these challenges.
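At their core, the treatment-vs-control comparisons above reduce to estimating a difference in mean outcomes between two groups, with an uncertainty on that difference. A minimal stdlib-only sketch using the Welch standard error (variable names are illustrative, not taken from the paper):

```python
import math

def ab_effect(treatment, control):
    """Difference in mean scores between two A/B groups, with its
    Welch standard error (unequal variances, unequal group sizes)."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
        return m, v
    mt, vt = mean_var(treatment)
    mc, vc = mean_var(control)
    se = math.sqrt(vt / len(treatment) + vc / len(control))
    return mt - mc, se
```

A difference much smaller than, say, two standard errors is consistent with "no significant impact", as in the video-setting experiment.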
ACM Transactions on Computing Education | 2014
Giora Alexandron; Michal Armoni; Michal Gordon; David Harel
In this article, we discuss the possible connection between the programming language and the paradigm behind it, and programmers’ tendency to adopt an external or internal perspective of the system they develop. Based on a qualitative analysis, we found that when working with the visual, interobject language of live sequence charts (LSC), programmers tend to adopt an external and usability-oriented view of the system, whereas when working with an intraobject language, they tend to adopt an internal and implementation-oriented viewpoint. This is explained by first discussing the possible effect of the programming paradigm on programmers’ perception and then offering a more comprehensive explanation. The latter is based on a cognitive model of programming with LSC, which is an interpretation and a projection of the model suggested by Adelson and Soloway [1985] onto LSC and scenario-based programming, the new paradigm on which LSC is based. Our model suggests that LSC fosters a kind of programming that enables iterative refinement of the artifact with fewer entries into the solution domain. Thus, the programmer switches context less often between the solution domain and the problem domain, and consequently spends more time in the latter. We believe that these findings are interesting mainly in two ways. First, they characterize an aspect of problem-solving behavior that to the best of our knowledge has not been studied before—the programmer’s perspective. The perspective can potentially affect the outcome of the problem-solving process, such as by leading the programmer to focus on different parts of the problem. Second, relating the structure of the language to the change in perspective sheds light on one of the ways in which the programming language can affect the programmer’s behavior.
european conference on technology enhanced learning | 2018
Giora Alexandron; José A. Ruipérez-Valiente; Sunbok Lee; David E. Pritchard
Massive Open Online Courses (MOOCs) collect large amounts of rich data. A primary objective of Learning Analytics (LA) research is studying these data in order to improve the pedagogy of interactive learning environments. Most studies make the underlying assumption that the data represent truthful and honest learning activity. However, previous studies showed that MOOCs can have large cohorts of users that break this assumption and achieve high performance through behaviors such as Cheating Using Multiple Accounts or unauthorized collaboration, and we therefore denote them fake learners. Because of their aberrant behavior, fake learners can bias the results of Learning Analytics (LA) models. The goal of this study is to evaluate the robustness of LA results when the data contain a considerable number of fake learners. Our methodology follows the rationale of ‘replication research’. We challenge the results reported in a well-known, and one of the first LA/Pedagogic-Efficacy MOOC papers, by replicating its results with and without the fake learners (identified using machine learning algorithms). The results show that fake learners exhibit very different behavior compared to true learners. However, even though they are a significant portion of the student population (~15%), their effect on the results is not dramatic (does not change trends). We conclude that the LA study that we challenged was robust against fake learners. While these results carry an optimistic message on the trustworthiness of LA research, they rely on data from one MOOC. We believe that this issue should receive more attention within the LA research community, and can explain some ‘surprising’ research results in MOOCs.
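The replication design above, running the same analysis once on all learners and once with the flagged cohort removed, can be sketched generically. The record layout and the example metric below are hypothetical, stand-ins for whatever aggregate the replicated study reports:

```python
def replicate_without(records, flagged_users, metric):
    """Run an aggregate metric with and without flagged users.
    If the two values are close, the original result is robust
    to the flagged cohort (here: suspected fake learners)."""
    all_value = metric(records)
    clean = [r for r in records if r["user"] not in flagged_users]
    return all_value, metric(clean)
```

For example, with `metric` computing a mean score, comparing the two returned values shows directly whether the flagged ~15% shifts the trend.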
IEEE Transactions on Learning Technologies | 2017
José A. Ruipérez-Valiente; Pedro J. Muñoz-Merino; Giora Alexandron; David E. Pritchard
One of the reported methods of cheating in online environments in the literature is CAMEO (Copying Answers using Multiple Existences Online), where harvesting accounts are used to obtain correct answers that are later submitted in the master account which gives the student credit to obtain a certificate. In previous research, we developed an algorithm to identify and label submissions that were cheated using the CAMEO method; this algorithm relied on the IP of the submissions. In this study, we use this tagged sample of submissions to i) compare the influence of student and problems characteristics on CAMEO and ii) build a random forest classifier that detects submissions as CAMEO without relying on IP, achieving sensitivity and specificity levels of 0.966 and 0.996, respectively. Finally, we analyze the importance of the different features of the model finding that student features are the most important variables towards the correct classification of CAMEO submissions, concluding also that student features have more influence on CAMEO than problem features.
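The sensitivity (0.966) and specificity (0.996) reported above are standard confusion-matrix ratios over the classifier's predictions. A minimal sketch of how they are computed, with labels 1 = CAMEO submission and 0 = honest submission (the random forest itself, built with standard machine-learning tooling, is not reproduced here):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = recall on the positive (CAMEO) class;
    specificity = recall on the negative (honest) class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting both matters for rare behaviors like CAMEO: with few true positives, plain accuracy would look high even for a detector that misses most cheated submissions.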
2015 Physics Education Research Conference Proceedings | 2015
Christopher Chudzicki; Zhongzhou Chen; Qian Zhou; Giora Alexandron; David E. Pritchard
A standard method for measuring learning is to administer the same assessment before and after instruction. This pre/post-test technique is widely used in education research and has been used in our introductory physics MOOC to measure learning. One potential weakness of this paradigm is that post-test performance gains may result from exposure on the pre-test instead of instruction. This possibility is exacerbated in MOOCs where students receive multiple attempts per item, instant correct/incorrect feedback, and unlimited time (until the due date). To find the size of this problem in our recent MOOCs, we split the student population into two groups, each of which received identical post-tests but different subsets of post-test items on their group pre-test. We report a small overall advantage (2.9% ± 1.7%) on post-test items due to pre-test exposure. However, this advantage is not robust and is strongly diminished when one obviously anomalous item is removed. PACS: 01.40.Fk, 01.40.gf
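The crossed two-group design above estimates the exposure effect by comparing, within each group, post-test performance on items that group saw on its pre-test against matched items it did not see, then averaging across groups. A minimal sketch with a hypothetical data layout (per-group mean item scores in [0, 1]):

```python
def exposure_advantage(post_scores, exposed_items):
    """Crossed-design estimate of pre-test exposure advantage.
    post_scores[group][item] is that group's mean post-test score;
    exposed_items[group] is the set of items the group saw on its pre-test.
    Returns the across-group average of (mean on exposed items)
    minus (mean on unexposed items)."""
    diffs = []
    for group, items in post_scores.items():
        seen = [s for i, s in items.items() if i in exposed_items[group]]
        unseen = [s for i, s in items.items() if i not in exposed_items[group]]
        diffs.append(sum(seen) / len(seen) - sum(unseen) / len(unseen))
    return sum(diffs) / len(diffs)
```

Because each item is exposed for one group and unexposed for the other, item difficulty cancels in the average, which is what makes the small reported advantage (2.9% ± 1.7%) interpretable.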