Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Marie A. Bienkowski is active.

Publication


Featured research published by Marie A. Bienkowski.


Learning Analytics and Knowledge | 2012

The learning registry: building a foundation for learning resource analytics

Marie A. Bienkowski; John Brecht; Jim Klo

We describe our experimentation with the current implementation of a distribution system used to share descriptive and social metadata about learning resources. The Learning Registry, developed and released in a beta version in October 2011, is intended to store and forward learning-resource metadata among a distributed, decentralized network of nodes. The Learning Registry also accepts social/attention metadata: data about users of and activity around the learning resource. The Learning Registry open-source community has proposed a schema for sharing social metadata, and a number of organizations have experimented with representing their social metadata using that schema. This paper describes the results and challenges, and the learning-resource analytics applications that will use Learning Registry data as their foundation.
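To make the kind of data being shared concrete, here is a minimal sketch of a social/attention-metadata ("paradata") record as it might travel between nodes. The JSON field names are illustrative assumptions, not the community's exact schema.

```python
# Minimal sketch of a social/attention-metadata ("paradata") record of the
# kind a Learning Registry node might store and forward. Field names are
# illustrative assumptions, not the community's exact schema.
import json

paradata_record = {
    "activity": {
        "actor": {"objectType": "teacher"},  # who interacted with the resource
        "verb": {
            "action": "rated",
            "measure": {"scaleMin": 1, "scaleMax": 5, "value": 4},
        },
        # The learning resource the activity is about, identified by URL.
        "object": "http://example.org/lessons/fractions-intro",
        "date": "2011-10-15",
    }
}

# A node would accept this record, store it, and forward it to peer nodes;
# analytics consumers harvest and aggregate such records per resource URL.
print(json.dumps(paradata_record, indent=2))
```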


International Learning Analytics and Knowledge Conference | 2017

An instructor dashboard for real-time analytics in interactive programming assignments

Nicholas Diana; Michael Eagle; John C. Stamper; Shuchi Grover; Marie A. Bienkowski; Satabdi Basu

Many introductory programming environments generate a large amount of log data, but making insights from these data accessible to instructors remains a challenge. This research demonstrates that student outcomes can be accurately predicted from student program states at various time points throughout the course, and integrates the resulting predictive models into an instructor dashboard. The effectiveness of the dashboard is evaluated by measuring how well the dashboard analytics correctly suggest that the instructor help students classified as most in need. Finally, we describe a method of matching low-performing students with high-performing peer tutors, and show that the inclusion of peer tutors not only increases the amount of help given, but the consistency of help availability as well.
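As a rough illustration of the underlying analytics, the sketch below trains a classifier on features extracted from student program states and ranks students for instructor attention. The features, the toy data, and the use of scikit-learn are assumptions, not the authors' actual pipeline.

```python
# Sketch: predict student outcomes from program-state features at a point in
# time, then rank students for instructor help. Features, toy data, and the
# use of scikit-learn are assumptions, not the paper's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per student: e.g. blocks placed, distinct constructs used,
# run-time errors so far (hypothetical features).
X = np.array([
    [12, 4, 0],
    [3, 1, 5],
    [20, 6, 1],
    [5, 2, 7],
], dtype=float)
y = np.array([1, 0, 1, 0])  # 1 = eventually completed the assignment

model = LogisticRegression().fit(X, y)

# A dashboard would surface the students with the lowest predicted success
# probability as those most in need of help.
p_success = model.predict_proba(X)[:, 1]
help_queue = np.argsort(p_success)
print("help queue (neediest first):", help_queue.tolist())
```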


IEEE International Workshop on Wireless and Mobile Technologies in Education | 2005

G1:1 scenarios: envisioning the context for WMTE in 2015

Jeremy Roschelle; Charles Patton; Tak-Wai Chan; John Brecht; Marie A. Bienkowski

The G1:1 international network of learning researchers met to identify major trends and uncertainties that could drive the evolution of learning technology. Using a technique called scenario-based planning, the group created stories of plausible futures that bring to life what collaborative learning may be like in 2015. These stories present contextual changes in technology and education practices that could occur, with each scenario considering a different trajectory. The group considered changes in the political and social goals of education and in the main role of teachers, as well as changes in the economies of publishing content. Using these scenarios as a way to think about long-term research plans could serve to make programs of research in wireless and mobile technology more robust.


Learning Analytics and Knowledge | 2016

Multimodal analytics to study collaborative problem solving in pair programming

Shuchi Grover; Marie A. Bienkowski; Amir Tamrakar; Behjat Siddiquie; David A. Salter; Ajay Divakaran

Collaborative problem solving (CPS) is seen as a key skill in K-12 education, in computer science as well as in other subjects. Efforts to introduce children to computing rely on pair programming as a way of having young learners engage in CPS. Characteristics of quality collaboration are joint exploring or understanding, joint representation, and joint execution. We present a data-driven approach to assessing and elucidating collaboration through modeling of multimodal student behavior and performance data.
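One simple way to picture the modeling step is early fusion of per-modality features into a single classifier. The sketch below, with hypothetical feature vectors and a generic classifier, is an assumption about form, not the authors' actual models.

```python
# Sketch: early fusion of multimodal features (speech, video, program log)
# into one vector, then classification of collaboration quality. Feature
# contents and the classifier are hypothetical, not the paper's models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse(speech, video, log):
    """Early fusion: concatenate per-modality feature vectors."""
    return np.concatenate([speech, video, log])

# One fused vector per observation window of a programming pair.
X = np.stack([
    fuse(np.array([0.8, 0.6]), np.array([0.7]), np.array([3.0, 1.0])),
    fuse(np.array([0.1, 0.2]), np.array([0.1]), np.array([0.0, 4.0])),
    fuse(np.array([0.7, 0.5]), np.array([0.6]), np.array([2.0, 1.0])),
    fuse(np.array([0.2, 0.1]), np.array([0.2]), np.array([1.0, 3.0])),
])
# 1 = joint exploring/representing/executing observed in the window, 0 = not.
y = np.array([1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X))
```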


Archive | 2014

The Learning Registry: Applying Social Metadata for Learning Resource Recommendations

Marie A. Bienkowski; James Klo

The proliferation of online teaching, learning, and assessment resources is hampering efforts to make finding relevant resources easy. Metadata, while valuable for curating digital collections, is difficult to keep current or, in some cases, to obtain in the first place. Social metadata, paradata, usage data, and contextualized attention metadata all refer to data about what users do with digital resources, data that can be harnessed for recommendations. To centralize this data for aggregation and amplification, the Learning Registry, a store-and-forward, distributed, decentralized network of nodes, was created. The Learning Registry makes it possible for disparate sources to publish learning resource social/attention metadata: data about users of and activity around resources. We describe our experimentation with social metadata, including metadata describing the alignment of learning resources to U.S. teaching standards, as a means to generate relationships among resources and people, and we show how it can be used for recommendations.
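One concrete way social metadata can generate relationships among resources is co-usage: resources touched by the same users become related. The sketch below is an illustrative assumption, not the Learning Registry's actual recommendation algorithm.

```python
# Sketch: derive resource-resource relationships from social metadata via
# co-usage, then recommend. Data and scoring are illustrative assumptions,
# not the Learning Registry's actual recommendation method.
from collections import defaultdict
from itertools import combinations

# (user, resource URL) pairs harvested from paradata records.
usage = [
    ("teacher_a", "http://example.org/r/1"),
    ("teacher_a", "http://example.org/r/2"),
    ("teacher_b", "http://example.org/r/2"),
    ("teacher_b", "http://example.org/r/3"),
    ("teacher_c", "http://example.org/r/1"),
    ("teacher_c", "http://example.org/r/2"),
]

by_user = defaultdict(set)
for user, resource in usage:
    by_user[user].add(resource)

# Count how often two resources are used by the same person.
co_use = defaultdict(int)
for resources in by_user.values():
    for r1, r2 in combinations(sorted(resources), 2):
        co_use[(r1, r2)] += 1

def recommend(resource):
    """Rank resources by how often they co-occur with the given one."""
    scores = defaultdict(int)
    for (r1, r2), n in co_use.items():
        if r1 == resource:
            scores[r2] += n
        elif r2 == resource:
            scores[r1] += n
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("http://example.org/r/2"))
```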


IEEE Intelligent Systems | 1995

The Common Prototyping Environment: a framework for software technology integration, evaluation, and transition

Mark H. Burstein; Richard E. Schantz; Marie A. Bienkowski; Marie desJardins; Stephen F. Smith

The CPE promotes the development of collaborative, distributed planners by combining a repository for shared software and data, integrated software systems, and a testbed for experimentation.


ACM Transactions on Computing Education | 2017

A Framework for Using Hypothesis-Driven Approaches to Support Data-Driven Learning Analytics in Measuring Computational Thinking in Block-Based Programming Environments

Shuchi Grover; Satabdi Basu; Marie A. Bienkowski; Michael Eagle; Nicholas Diana; John C. Stamper

Systematic endeavors to take computer science (CS) and computational thinking (CT) to scale in middle and high school classrooms are underway with curricula that emphasize the enactment of authentic CT skills, especially in the context of programming in block-based programming environments. There is, therefore, a growing need to measure students’ learning of CT in the context of programming and also support all learners through this process of learning computational problem solving. The goal of this research is to explore hypothesis-driven approaches that can be combined with data-driven ones to better interpret student actions and processes in log data captured from block-based programming environments with the goal of measuring and assessing students’ CT skills. Informed by past literature and based on our empirical work examining a dataset from the use of the Fairy Assessment in the Alice programming environment in middle schools, we present a framework that formalizes a process where a hypothesis-driven approach informed by Evidence-Centered Design effectively complements data-driven learning analytics in interpreting students’ programming process and assessing CT in block-based programming environments. We apply the framework to the design of Alice tasks for high school CS to be used for measuring CT during programming.
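To illustrate how a hypothesis-driven rule can complement mined features, the sketch below encodes one Evidence-Centered-Design-style evidence rule over block-programming log events. The event vocabulary and the rule itself are hypothetical, not the Fairy Assessment's actual evidence model.

```python
# Sketch: an Evidence-Centered-Design-style evidence rule applied to
# block-programming log events, producing observables that a data-driven
# model can consume alongside mined features. Event names and the rule are
# hypothetical, not the Fairy Assessment's actual evidence model.
events = [
    {"action": "add_block", "block": "loop"},
    {"action": "run"},
    {"action": "edit_block", "block": "loop"},
    {"action": "run"},
]

def evidence_iterative_refinement(log):
    """Hypothesis: alternating edits and runs evidence systematic testing,
    one facet of computational thinking."""
    runs = [i for i, e in enumerate(log) if e["action"] == "run"]
    edits_between = any(
        any(e["action"].endswith("_block") for e in log[a + 1:b])
        for a, b in zip(runs, runs[1:])
    )
    return {"systematic_testing": edits_between, "run_count": len(runs)}

# These hypothesis-driven observables would join data-driven features in
# the final measurement model of CT skills.
print(evidence_iterative_refinement(events))
```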


Learning Analytics and Knowledge | 2018

Data-driven generation of rubric criteria from an educational programming environment

Nicholas Diana; Michael Eagle; John C. Stamper; Shuchi Grover; Marie A. Bienkowski; Satabdi Basu

We demonstrate that, by using a small set of hand-graded student work, we can automatically generate rubric criteria with a high degree of validity, and that a predictive model incorporating these rubric criteria is more accurate than a previously reported model. We present this method as one approach to the often challenging problem of grading assignments in programming environments. A classic solution is creating unit tests that the student-generated program must pass, but the rigid, structured nature of unit tests is suboptimal for assessing the more open-ended assignments students encounter in introductory programming environments like Alice. Furthermore, the creation of unit tests requires predicting the various ways a student might correctly solve a problem, a challenging and time-intensive process. The current study proposes an alternative, semi-automated method for generating rubric criteria using low-level data from the Alice programming environment.
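A plausible shape for the semi-automated step is a sparse model fit to the hand-graded set, keeping the low-level features it retains as candidate criteria. The features, data, and use of a lasso here are assumptions, not the paper's exact method.

```python
# Sketch: derive candidate rubric criteria from a small hand-graded set by
# fitting a sparse model over low-level program features and keeping the
# features with nonzero weight. Features and data are illustrative
# assumptions; the paper's actual method may differ.
import numpy as np
from sklearn.linear_model import Lasso

feature_names = ["uses_loop", "uses_event_handler", "objects_moved", "dead_code_blocks"]
X = np.array([
    [1, 1, 3, 0],
    [0, 1, 1, 2],
    [1, 0, 4, 0],
    [0, 0, 0, 5],
], dtype=float)
grades = np.array([0.9, 0.5, 0.8, 0.1])  # hand-assigned scores in [0, 1]

model = Lasso(alpha=0.01).fit(X, grades)

# Features the sparse model retains become candidate rubric criteria for
# human review before use in grading.
criteria = [n for n, w in zip(feature_names, model.coef_) if abs(w) > 1e-6]
print("candidate rubric criteria:", criteria)
```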


Learning at Scale | 2016

Assessing Problem-Solving Process At Scale

Shuchi Grover; Marie A. Bienkowski; John Niekrasz; Matthias Hauswirth

Authentic problem solving tasks in digital environments are often open-ended with ill-defined pathways to a goal state. Scaffolds and formative feedback during this process help learners develop the requisite skills and understanding, but require assessing the problem-solving process. This paper describes a hybrid approach to assessing process at scale in the context of the use of computational thinking practices during programming. Our approach combines hypothesis-driven analysis, using an evidence-centered design framework, with discovery-driven data analytics. We report on work-in-progress involving novices and expert programmers working on Blockly games.
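The discovery-driven half of such a hybrid could, for example, mine frequent action patterns from expert and novice traces and hand promising patterns to the evidence-centered design side. The sketch below uses a hypothetical action vocabulary and is not the authors' actual analysis.

```python
# Sketch of discovery-driven process analysis: mine frequent action n-grams
# from expert vs. novice Blockly sessions to surface patterns worth turning
# into hypothesis-driven evidence rules. Action names are hypothetical.
from collections import Counter

def ngrams(seq, n=2):
    """All contiguous n-grams of a sequence of actions."""
    return list(zip(*(seq[i:] for i in range(n))))

expert = ["plan", "add_block", "run", "edit", "run"]
novice = ["add_block", "add_block", "add_block", "run", "delete_all"]

expert_patterns = Counter(ngrams(expert))
novice_patterns = Counter(ngrams(novice))

# Patterns over-represented in expert traces (e.g. run -> edit -> run)
# become candidates for evidence rules in the measurement model.
print((expert_patterns - novice_patterns).most_common(3))
```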


Technical Symposium on Computer Science Education | 2015

Assessments for Computational Thinking in K-12 (Abstract Only)

Shuchi Grover; Marie A. Bienkowski; Eric Snow

As computer science (CS) and computational thinking (CT) make their way into all levels of K-12 education, valid assessments aligned with new curricula can assist in measuring student learning, easing the way for the adoption of new computing courses, and evaluating pedagogical approaches for teaching computing ideas and concepts. Without attention to rigorous assessment, CT has little hope of making its way successfully into K-12 school education settings at scale. This BoF session will involve discussion of ongoing work at SRI International (under several NSF-funded projects) on the design and development of formative and summative assessments for the ECS curriculum. Various forms of assessment (including free-response and multiple-choice questions, and computational artifacts) and insights from past research on their use will also be discussed. BoF attendees will be able to discuss multiple viewpoints, connect with others who care about assessment of CT, and share resources and ideas.

Collaboration


Dive into Marie A. Bienkowski's collaborations.

Top Co-Authors

John C. Stamper, Carnegie Mellon University
Michael Eagle, Carnegie Mellon University
Nicholas Diana, Carnegie Mellon University