Publication


Featured research published by Nicholas Diana.


Cognitive Neuropsychology | 2016

Identifying thematic roles from neural representations measured by functional magnetic resonance imaging

Jing Wang; Vladimir L. Cherkassky; Ying Yang; Kai-min Kevin Chang; Roberto Vargas; Nicholas Diana; Marcel Adam Just

The generativity and complexity of human thought stem in large part from the ability to represent relations among concepts and form propositions. The current study reveals how a given object such as rabbit is neurally encoded differently and identifiably depending on whether it is an agent (“the rabbit punches the monkey”) or a patient (“the monkey punches the rabbit”). Machine-learning classifiers were trained on functional magnetic resonance imaging (fMRI) data evoked by a set of short videos that conveyed agent–verb–patient propositions. When tested on a held-out video, the classifiers were able to reliably identify the thematic role of an object from its associated fMRI activation pattern. Moreover, when trained on one subset of the study participants, classifiers reliably identified the thematic roles in the data of a left-out participant (mean accuracy = .66), indicating that the neural representations of thematic roles were common across individuals.
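A minimal sketch of the leave-one-participant-out decoding evaluation described above, using synthetic data in place of real fMRI recordings; the array shapes, the logistic-regression classifier, and all values here are illustrative assumptions, not the study's actual pipeline.

```python
# Cross-participant decoding sketch: train on all-but-one participant,
# test on the held-out participant. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants, n_trials, n_voxels = 7, 40, 500
X = rng.normal(size=(n_participants * n_trials, n_voxels))  # activation patterns
y = rng.integers(0, 2, size=n_participants * n_trials)      # 0 = agent, 1 = patient
groups = np.repeat(np.arange(n_participants), n_trials)     # participant labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean cross-participant accuracy: {scores.mean():.2f}")
```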


International Learning Analytics & Knowledge Conference | 2017

An instructor dashboard for real-time analytics in interactive programming assignments

Nicholas Diana; Michael Eagle; John C. Stamper; Shuchi Grover; Marie A. Bienkowski; Satabdi Basu

Many introductory programming environments generate a large amount of log data, but making insights from these data accessible to instructors remains a challenge. This research demonstrates that student outcomes can be accurately predicted from student program states at various time points throughout the course, and integrates the resulting predictive models into an instructor dashboard. The effectiveness of the dashboard is evaluated by measuring how well the dashboard analytics correctly suggest that the instructor help students classified as most in need. Finally, we describe a method of matching low-performing students with high-performing peer tutors, and show that the inclusion of peer tutors not only increases the amount of help given, but the consistency of help availability as well.
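The sketch below illustrates the kind of checkpoint-based outcome prediction the dashboard builds on, under stated assumptions: the program-state features, the random-forest model, and the synthetic data are hypothetical stand-ins for the paper's unspecified implementation.

```python
# For each time checkpoint, fit a model predicting final outcomes from
# featurized program states captured at that point. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_students, n_features = 120, 12
checkpoints = [5, 10, 15]  # minutes into the assignment (illustrative)

for minute in checkpoints:
    X = rng.normal(size=(n_students, n_features))  # program-state features at this time
    y = rng.integers(0, 2, size=n_students)        # 1 = completed the assignment
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(model, X, y, cv=5).mean()
    # The dashboard would surface students with the lowest predicted success
    # probability at each checkpoint as those most in need of help.
    print(f"checkpoint {minute} min: cross-validated accuracy = {acc:.2f}")
```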


ACM Transactions on Computing Education | 2017

A Framework for Using Hypothesis-Driven Approaches to Support Data-Driven Learning Analytics in Measuring Computational Thinking in Block-Based Programming Environments

Shuchi Grover; Satabdi Basu; Marie A. Bienkowski; Michael Eagle; Nicholas Diana; John C. Stamper

Systematic endeavors to take computer science (CS) and computational thinking (CT) to scale in middle and high school classrooms are underway with curricula that emphasize the enactment of authentic CT skills, especially in the context of programming in block-based programming environments. There is, therefore, a growing need to measure students’ learning of CT in the context of programming and also support all learners through this process of learning computational problem solving. The goal of this research is to explore hypothesis-driven approaches that can be combined with data-driven ones to better interpret student actions and processes in log data captured from block-based programming environments with the goal of measuring and assessing students’ CT skills. Informed by past literature and based on our empirical work examining a dataset from the use of the Fairy Assessment in the Alice programming environment in middle schools, we present a framework that formalizes a process where a hypothesis-driven approach informed by Evidence-Centered Design effectively complements data-driven learning analytics in interpreting students’ programming process and assessing CT in block-based programming environments. We apply the framework to the design of Alice tasks for high school CS to be used for measuring CT during programming.
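As an illustrative sketch (not the paper's implementation), the snippet below shows one way a hypothesis-driven, Evidence-Centered-Design-style rule can complement data-driven log features: a hand-authored rule converts a hypothesized evidence pattern into a feature that sits alongside raw log counts. The ProgramSnapshot fields and the rule itself are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProgramSnapshot:
    uses_loop: bool          # student used an iteration construct
    uses_conditional: bool   # student used a conditional
    n_edits: int             # raw count mined from the log

def evidence_algorithmic_thinking(snap: ProgramSnapshot) -> int:
    """Hypothesis-driven rule: loops and conditionals used together are
    treated as evidence of algorithmic thinking (an illustrative rule)."""
    return int(snap.uses_loop and snap.uses_conditional)

snap = ProgramSnapshot(uses_loop=True, uses_conditional=False, n_edits=37)
features = [snap.n_edits, evidence_algorithmic_thinking(snap)]  # data-driven + hypothesis-driven
print(features)  # -> [37, 0]
```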


International Learning Analytics & Knowledge Conference | 2018

Data-driven generation of rubric criteria from an educational programming environment

Nicholas Diana; Michael Eagle; John C. Stamper; Shuchi Grover; Marie A. Bienkowski; Satabdi Basu

We demonstrate that, by using a small set of hand-graded student work, we can automatically generate rubric criteria with a high degree of validity, and that a predictive model incorporating these rubric criteria is more accurate than a previously reported model. We present this method as one approach to addressing the often challenging problem of grading assignments in programming environments. A classic solution is creating unit-tests that the student-generated program must pass, but the rigid, structured nature of unit-tests is suboptimal for assessing the more open-ended assignments students encounter in introductory programming environments like Alice. Furthermore, the creation of unit-tests requires predicting the various ways a student might correctly solve a problem - a challenging and time-intensive process. The current study proposes an alternative, semi-automated method for generating rubric criteria using low-level data from the Alice programming environment.
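A minimal sketch of the semi-automated rubric-generation idea, assuming candidate criteria are binary program features mined from logs and that a criterion is retained when it agrees strongly with hand-assigned grades on the small graded subset; the feature names, validity threshold, and synthetic data are all hypothetical.

```python
# Keep candidate log features whose presence correlates with hand grades,
# treating the survivors as generated rubric criteria. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_graded = 30
candidate_features = {                       # mined from low-level log data
    "event_handler_added": rng.integers(0, 2, n_graded),
    "object_moved": rng.integers(0, 2, n_graded),
    "unused_variable": rng.integers(0, 2, n_graded),
}
hand_grades = rng.integers(0, 2, n_graded)   # 1 = correct on this rubric item

rubric = []
for name, values in candidate_features.items():
    r = np.corrcoef(values, hand_grades)[0, 1]
    if abs(r) > 0.3:                         # hypothetical validity threshold
        rubric.append((name, round(float(r), 2)))

print("generated rubric criteria:", rubric)
```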


Artificial Intelligence in Education | 2018

An Instructional Factors Analysis of an Online Logical Fallacy Tutoring System

Nicholas Diana; John C. Stamper; Kenneth R. Koedinger

The proliferation of fake news has underscored the importance of critical thinking in the civic education curriculum. Despite this recognized importance, systems designed to foster these kinds of critical thinking skills are largely absent from the educational technology space. In this work, we utilize an instructional factors analysis in conjunction with an online tutoring system to determine if logical fallacies are best learned through deduction, induction, or some combination of both. We found that while participants were able to learn the informal fallacies using inductive practice alone, deductive explanations were more beneficial for learning.
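The sketch below shows the general shape of an instructional factors analysis under assumptions: each attempt carries running counts of prior deductive and inductive learning opportunities, and a logistic model estimates a learning rate for each instruction type. The data are simulated (with a deliberately steeper deductive effect, echoing the finding above); the paper's actual model and dataset differ.

```python
# Instructional-factors-style model: per-attempt opportunity counts for each
# instruction type predict correctness; coefficients act as learning rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_attempts = 400
deductive_opps = rng.integers(0, 10, n_attempts)   # prior deductive explanations seen
inductive_opps = rng.integers(0, 10, n_attempts)   # prior inductive practice items
X = np.column_stack([deductive_opps, inductive_opps])

# Simulated outcomes with a steeper deductive learning rate (illustrative).
logit = -1.0 + 0.30 * deductive_opps + 0.10 * inductive_opps
y = (rng.random(n_attempts) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("learning-rate estimates (deductive, inductive):", model.coef_[0].round(2))
```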


Artificial Intelligence in Education | 2018

Leveraging Educational Technology to Improve the Quality of Civil Discourse

Nicholas Diana

The ability to critically assess the information we consume is vital to productive civil discourse. However, recent research indicates that Americans are generally not adept at, for instance, identifying if a news story is real or fake. We propose a three-part research agenda aimed at providing accessible, evidence-based technological support for critical thinking in civic life. In the first stage, we built an online tutoring system for teaching logical fallacy identification. In stage two, we will leverage this system to train crowd workers to identify potentially fallacious arguments. Finally, in stage three, we will utilize these labeled examples to train a computational model of logical fallacies. We discuss how our current research into instructional factors and Belief Bias has impacted the course of this agenda, and how these three stages help to realize our ultimate goal of fostering critical thinking in civil discourse.


Artificial Intelligence in Education | 2017

Data-Driven Generation of Rubric Parameters from an Educational Programming Environment

Nicholas Diana; Michael Eagle; John C. Stamper; Shuchi Grover; Marie A. Bienkowski; Satabdi Basu

We demonstrate that, by using a small set of hand-graded student work, we can automatically generate rubric parameters with a high degree of validity, and that a predictive model incorporating these rubric parameters is more accurate than a previously reported model. We present this method as one approach to addressing the often challenging problem of grading assignments in programming environments. A classic solution is creating unit-tests that the student-generated program must pass, but the rigid, structured nature of unit-tests is suboptimal for assessing more open-ended assignments. Furthermore, the creation of unit-tests requires predicting the various ways a student might correctly solve a problem – a challenging and time-intensive process. The current study proposes an alternative, semi-automated method for generating rubric parameters using low-level data from the Alice programming environment.


Artificial Intelligence in Education | 2017

Teaching Informal Logical Fallacy Identification with a Cognitive Tutor

Nicholas Diana; Michael Eagle; John C. Stamper; Kenneth R. Koedinger

In this age of fake news and alternative facts, the need for a citizenry capable of critical thinking has never been greater. While teaching critical thinking skills in the classroom remains an enduring challenge, research on an ill-defined domain like critical thinking in the educational technology space is even more scarce. We propose a difficulty factors assessment (DFA) to explore two factors that may make learning to identify fallacies more difficult: type of instruction and belief bias. This study will allow us to make two key contributions. First, we will better understand the relationship between sense-making and induction when learning to identify informal fallacies. Second, we will contribute to the limited work examining the impact of belief bias on informal (rather than formal) reasoning. The results of this DFA will also be used to improve the next iteration of our fallacy tutor, which may ultimately contribute to a computational model of informal fallacies.
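For illustration only, a difficulty factors assessment like the one proposed could be analyzed with a logistic model crossing instruction type with belief bias, including their interaction. The column names, simulated data, and statsmodels-based analysis below are assumptions, not the study's materials.

```python
# 2x2 DFA analysis sketch: instruction type (deductive vs. inductive) crossed
# with belief bias (believable vs. unbelievable conclusion), with interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 200
df = pd.DataFrame({
    "instruction": rng.choice(["deductive", "inductive"], n),
    "believable": rng.choice([0, 1], n),
})
df["correct"] = rng.integers(0, 2, n)  # placeholder outcomes

# The interaction term captures the kind of contrast a DFA is built to expose.
fit = smf.logit("correct ~ C(instruction) * believable", data=df).fit(disp=0)
print(fit.params.round(2))
```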


International Learning Analytics & Knowledge Conference | 2017

A framework for hypothesis-driven approaches to support data-driven learning analytics in measuring computational thinking in block-based programming

Shuchi Grover; Marie A. Bienkowski; Satabdi Basu; Michael Eagle; Nicholas Diana; John C. Stamper


Educational Data Mining | 2017

Automatic Peer Tutor Matching: Data-Driven Methods to Enable New Opportunities for Help

Nicholas Diana; Michael Eagle; John C. Stamper; Shuchi Grover; Marie A. Bienkowski; Satabdi Basu

Collaboration


Dive into Nicholas Diana's collaborations.

Top Co-Authors

John C. Stamper, Carnegie Mellon University
Michael Eagle, Carnegie Mellon University
Jing Wang, Carnegie Mellon University
Marcel Adam Just, Carnegie Mellon University
Roberto Vargas, Carnegie Mellon University