Publication


Featured research published by Andrew Petersen.


Technical Symposium on Computer Science Education | 2011

Reviewing CS1 exam question content

Andrew Petersen; Michelle Craig; Daniel Zingaro

Many factors have been cited to explain the poor performance of students in CS1. To investigate how assessment mechanisms may impact student performance, nine experienced CS1 instructors reviewed final examinations from a variety of North American institutions. The majority of the exams reviewed were composed predominantly of high-value, integrative code-writing questions, and the reviewers regularly underestimated the number of CS1 concepts required to answer these questions. An evaluation of the content and cognitive requirements of individual questions suggests that in order to succeed, students must internalize a large amount of CS1 content. This emphasizes the need for focused assessment techniques to provide students with the opportunity to demonstrate their knowledge.


Technical Symposium on Computer Science Education | 2013

Facilitating code-writing in PI classes

Daniel Zingaro; Yuliya Cherenkova; Olessia Karpova; Andrew Petersen

We present the Python Classroom Response System, a web-based tool that enables instructors to use code-writing and multiple-choice questions in a classroom setting. The system is designed to extend the principles of peer instruction, an active learning technique built around discussion of multiple-choice questions, into the domain of introductory programming education. Code submissions are evaluated by a suite of tests designed to highlight common misconceptions, so the instructor receives real-time feedback as students submit code. The system also allows an instructor to pull specific submissions into an editor and visualizer for use as in-class examples. We motivate the use of this system, describe its support for and extension of peer instruction, and offer use cases and scenarios for classroom implementation.
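To make the feedback loop concrete, here is a minimal sketch of misconception-tagged tests in Python. The function, test cases, and tags are invented for illustration; they are not PCRS's actual test format.

```python
# Hypothetical sketch: each test is tagged with the misconception a
# failure suggests, so the instructor sees more than pass/fail counts.

def student_average(nums):
    """A (buggy) student submission: uses floor division."""
    return sum(nums) // len(nums)

TESTS = [
    # (input, expected, misconception suggested by a failure)
    ([2, 4, 6], 4.0, "baseline: whole-number average"),
    ([1, 2], 1.5, "floor division (//) instead of true division (/)"),
    ([-1, -2], -1.5, "floor division rounds toward -inf on negatives"),
]

for nums, expected, tag in TESTS:
    got = student_average(nums)
    status = "pass" if got == expected else f"FAIL ({tag}): got {got}"
    print(f"average({nums}) -> {status}")
```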


Koli Calling International Conference on Computing Education Research | 2016

Revisiting why students drop CS1

Andrew Petersen; Michelle Craig; Jennifer Campbell; Anya Tafliovich

This paper describes a qualitative study of the factors that contribute to a student's decision to withdraw from CS1. Individual interviews were held with 18 students in a majors-focused CS1 at a large, research-intensive North American university, and the results both validate and extend previous work on the experience of students who struggle in introductory computer science. In particular, our analysis confirms the complexity of the decision to drop, with students citing a combination of interrelated factors. Lack of time, combined with ineffective study strategies or with the prioritization of other courses, was the most commonly cited combination of factors. Interestingly, when compared to the experience of students who chose to complete the course, there is evidence that students encounter a decision point when they realize they are, or soon will be, behind. Students who drop speak of focusing on other priorities or being unable to catch up, while students who complete speak of understanding the need to use new techniques for learning and increasing their efforts.


Australasian Computing Education Conference | 2017

The Compound Nature of Novice Programming Assessments

Andrew Luxton-Reilly; Andrew Petersen

Failure rates in introductory programming courses are notoriously high, and researchers have noted that students struggle with the assessments that we typically use to evaluate programming ability. Current assessment practices in introductory courses consist predominantly of questions that involve a multitude of different concepts and facts. Students with fragile knowledge in any of the areas required may be unable to produce a working solution, even when they may know most of the required material. These assessments also make it difficult for a teacher to distinguish between the specific information that a student does and does not know. In this paper, we analyse examination questions used to assess novice programming at the syntax level and describe the extent to which each syntax component is used across the various examination questions. We also explore the degree to which questions involve multiple syntax elements as an indicator of how independently concepts are examined.
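The paper analysed exam questions by hand; one way to make "syntax elements per question" concrete in code, though not the paper's own tooling, is to count AST node types in a model solution. The solution below is invented for illustration.

```python
import ast
from collections import Counter

# An invented model solution to one hypothetical exam question.
MODEL_SOLUTION = """
def count_evens(items):
    total = 0
    for x in items:
        if x % 2 == 0:
            total += 1
    return total
"""

tree = ast.parse(MODEL_SOLUTION)
counts = Counter(type(node).__name__ for node in ast.walk(tree))

# Each distinct node type is one syntax component the question compounds;
# fragile knowledge of any of them can sink the whole answer.
print(f"{len(counts)} distinct syntax elements:")
for name, n in counts.most_common():
    print(f"  {name}: {n}")
```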


Proceedings of the Australasian Computer Science Week Multiconference | 2016

Student difficulties with pointer concepts in C

Michelle Craig; Andrew Petersen

C has long been a popular language of instruction, partly because of the interface to memory exposed by pointers. However, pointers are difficult to use correctly, even for students who already have experience with basic control structures and a memory model. In this study, we define a set of key pointer concepts, presented as a taxonomy, and then evaluate their difficulty by mining submissions to a set of online lab exercises that focus on the concepts in our taxonomy. The set of exercises analyzed includes multiple-choice questions designed to evaluate pre- and post-lab understanding of pointer topics. We use these questions to place the topics in the taxonomy in a rough order of difficulty. Additionally, we analyze student submissions to coding exercises, revealing inefficient behaviours students use to solve pointer problems and identifying the most common errors committed.
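A minimal sketch of the ordering step, assuming the mined MCQ responses are available as (topic, correct) records; the topic names and data below are illustrative, not the paper's taxonomy or results.

```python
from collections import defaultdict

# Illustrative (topic, answered_correctly) records from pre/post-lab MCQs.
RESPONSES = [
    ("declare/assign", True), ("declare/assign", True), ("declare/assign", False),
    ("dereference", True), ("dereference", False),
    ("pointer arithmetic", True), ("pointer arithmetic", False), ("pointer arithmetic", False),
    ("pointer to pointer", False), ("pointer to pointer", False),
]

tally = defaultdict(lambda: [0, 0])  # topic -> [correct, attempts]
for topic, ok in RESPONSES:
    tally[topic][0] += ok
    tally[topic][1] += 1

# A lower correctness rate places the topic lower (harder) in the ordering.
for topic in sorted(tally, key=lambda t: tally[t][0] / tally[t][1]):
    correct, attempts = tally[topic]
    print(f"{topic:20s} {correct}/{attempts} correct")
```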


Integrating Technology into Computer Science Education | 2016

Employing Multiple-Answer Multiple Choice Questions

Andrew Petersen; Michelle Craig; Paul Denny

Increasing enrollments and adoption of online resources have encouraged the use of multiple choice questions as a means of providing scalable assessment. However, in contexts where formative feedback is desired, standard multiple choice questions may lead students to a false sense of confidence -- a result of their small solution space and the temptation to guess. We propose the use of multiple-answer multiple choice questions in situations where formative feedback is desired and present evidence that these questions are well suited for that role.
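The shrinking temptation to guess can be made concrete with a rough model: with k options, a blind guess succeeds with probability 1/k on a single-answer question, but only 1/2^k on a multiple-answer question where each option is independently included or excluded (ignoring partial credit).

```python
# Probability that a blind guess is fully correct, for k options.
for k in (4, 5, 6):
    single = 1 / k       # single-answer: pick one of k options
    multi = 1 / 2 ** k   # multiple-answer: each option in or out
    print(f"k={k}: single-answer {single:.3f}, multiple-answer {multi:.4f}")
```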


Koli Calling International Conference on Computing Education Research | 2015

Modern goto: novice programmer usage of non-standard control flow

Stewart D. Smith; Nicholas Zemljic; Andrew Petersen

While many programmers would agree that unrestricted use of goto and similar structures is undesirable, modern languages still provide statements that support non-standard control flow: structures that do not obey the guidelines of structured programming. Novice programmers who have not been exposed to the arguments for and against the use of these structures may find them tempting -- or even natural -- when struggling to solve problems. We analyze a large-scale repository of novice programmer source code and find that 7% of the solutions in our set use non-standard control structures. While many of these uses are ineffective, some students use non-standard control to simplify their code.
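The abstract does not enumerate the structures it counted; as a hypothetical Python illustration, here is a typical novice use of a mid-loop exit (non-standard under a strict single-entry/single-exit reading of structured programming) alongside a structured equivalent.

```python
# Non-standard control flow: exits the loop from the middle via break.
def contains_break(items, target):
    found = False
    for x in items:
        if x == target:
            found = True
            break  # strict structured programming would avoid this exit
    return found

# Structured equivalent: the loop condition carries all exit logic.
def contains_structured(items, target):
    i, found = 0, False
    while i < len(items) and not found:
        found = items[i] == target
        i += 1
    return found

assert contains_break([1, 2, 3], 2)
assert contains_structured([1, 2, 3], 2)
```

As the paper notes, the break version is arguably the simpler of the two, which is why some students reach for non-standard control even without having seen the arguments around it.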


Integrating Technology into Computer Science Education | 2015

PCRS-C: Helping Students Learn C

Daniel Marchena Parreira; Andrew Petersen; Michelle Craig

The C programming language is an important part of many undergraduate CS programs, as it provides an environment for interacting directly with memory and exploring systems-programming concepts. However, while many common introductory languages have rich tools that support instruction, C has received relatively little attention [2, 1]. To provide students with rapid feedback and tools for understanding C, we have extended PCRS, a web-based platform for deploying programming exercises and content such as videos. Students submit C code to solve programming exercises and receive immediate feedback generated by running the submission against a set of instructor-defined test cases. Students also have access to graphical traces of execution, so they can explore how their code manipulates memory. The system has been deployed to two second-year systems-programming courses with a total enrollment of over 600, and a set of modules, consisting of videos and exercises, is being developed for use by the community.
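A minimal sketch of the compile/run/compare cycle described above, assuming gcc is on the PATH; this illustrates the mechanism only and is not PCRS-C's actual implementation.

```python
import pathlib
import subprocess
import tempfile

# Illustrative instructor-defined test cases: (stdin, expected stdout).
TESTS = [("3 4\n", "7\n"), ("10 -2\n", "8\n")]

def grade(c_source: str):
    """Compile a C submission and run it against every test case."""
    with tempfile.TemporaryDirectory() as workdir:
        src = pathlib.Path(workdir, "submission.c")
        src.write_text(c_source)
        exe = pathlib.Path(workdir, "submission")
        build = subprocess.run(["gcc", str(src), "-o", str(exe)],
                               capture_output=True, text=True)
        if build.returncode != 0:
            return [("compilation", False, build.stderr)]
        results = []
        for stdin, expected in TESTS:
            run = subprocess.run([str(exe)], input=stdin, timeout=2,
                                 capture_output=True, text=True)
            results.append((stdin.strip(), run.stdout == expected, run.stdout))
        return results

# Example: a submission that reads two integers and prints their sum.
SUBMISSION = r'''
#include <stdio.h>
int main(void) { int a, b; scanf("%d %d", &a, &b); printf("%d\n", a + b); return 0; }
'''
for case, passed, output in grade(SUBMISSION):
    print(case, "pass" if passed else f"fail (got {output!r})")
```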


Technical Symposium on Computer Science Education | 2017

Evaluating Neural Networks as a Method for Identifying Students in Need of Assistance

Karo Castro-Wunsch; Alireza Ahadi; Andrew Petersen

Course instructors need to be able to identify students in need of assistance as early in the course as possible. Recent work has suggested that machine learning approaches applied to snapshots of small programming exercises may be an effective solution to this problem. However, these results have been obtained using data from a single institution, and prior work using features extracted from student code has been highly sensitive to differences in context. This work provides two contributions: first, a partial reproduction of previously published results, but in a different context, and second, an exploration of the efficacy of neural networks in solving this problem. Our findings confirm the importance of two features (the number of steps required to solve a problem and the correctness of key problems), indicate that machine learning techniques are relatively stable across contexts (both across terms in a single course and across courses), and suggest that neural network based approaches are as effective as the best Bayesian and decision tree methods. Furthermore, neural networks can be tuned to be reliably pessimistic, so they may serve a complementary role in solving the problem of identifying students who need assistance.
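A minimal sketch of the approach, assuming scikit-learn and only the two features the paper highlights (steps needed to solve an exercise, correctness on key problems). The data is synthetic, and the deliberately low decision threshold mimics the "reliably pessimistic" tuning: false alarms are preferred to missed students.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic features per student: [normalized steps to solve an exercise,
# fraction of key problems answered correctly].
n = 200
steps = rng.integers(1, 30, n)
key_correct = rng.random(n)
X = np.column_stack([steps / 30, key_correct])
# Synthetic label: "needs assistance" when many steps and low correctness.
y = ((steps > 15) & (key_correct < 0.5)).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Pessimistic tuning: flag anyone whose predicted risk probability exceeds
# a low threshold rather than the default 0.5.
THRESHOLD = 0.3
flagged = clf.predict_proba(X)[:, 1] > THRESHOLD
print(f"flagged {flagged.sum()} of {n} students for follow-up")
```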


Integrating Technology into Computer Science Education | 2016

Negotiating the Maze of Academic Integrity in Computing Education

Simon; Judy Sheard; Michael Morgan; Andrew Petersen; Amber Settle; Jane Sinclair; Gerry W. Cross; Charles Riedesel

Academic integrity in computing education is a source of much confusion and disagreement. Studies of student and academic approaches to academic integrity in computing indicate considerable variation in practice along with confusion as to what practices are acceptable. The difficulty appears to arise in part from perceived differences between academic practice in computing education and professional practice in the computing industry, which lead to challenges in devising a consistent and meaningful approach to academic integrity. Coding practices in industry rely heavily on teamwork and use of external resources, but when computing educators seek to model industry practice in the classroom these techniques tend to conflict with standard academic integrity policies, which focus on assessing individual achievement. We have surveyed both industry professionals and computing academics about practices relating to academic integrity, and can confirm the uncertainty and variability that permeates the field. We find clear divergence in the views of these two groups, and also a broad range of practices considered acceptable by the academics. Our findings establish a clear need to clarify academic integrity issues in the context of computing education. Educators must carefully consider how academic integrity issues relate to their learning objectives, teaching approaches, and the industry practice for which they are preparing students. To this end we propose a process that fulfils two purposes: to guide academics in the consideration of academic integrity issues when designing assessment items, and to effectively communicate the resulting guidelines to students so as to reduce confusion and improve educational practice.

Collaboration

Top co-authors of Andrew Petersen include Paul Denny (University of Auckland), Arto Hellas (University of Helsinki), and David Hovemeyer (York College of Pennsylvania).