Publication


Featured research published by Christopher Brooks.


Lawrence Berkeley National Laboratory | 1999

The NetLogger Methodology for High Performance Distributed Systems Performance Analysis

Brian Tierney; William E. Johnston; Brian Crowley; Gary Hoo; Christopher Brooks; Dan Gunter

The authors describe a methodology that enables the real-time diagnosis of performance problems in complex high-performance distributed systems. The methodology includes tools for generating precision event logs that can be used to provide detailed end-to-end application and system level monitoring; a Java agent-based system for managing the large amount of logging data; and tools for visualizing the log data and real-time state of the distributed system. The authors developed these tools for analyzing a high-performance distributed system centered around the transfer of large amounts of data at high speeds from a distributed storage server to a remote visualization client. However, this methodology should be generally applicable to any distributed system. This methodology, called NetLogger, has proven invaluable for diagnosing problems in networks and in distributed systems code. This approach is novel in that it combines network, host, and application-level monitoring, providing a complete view of the entire system.
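
As a rough illustration of the precision event-logging idea described above, the following Python sketch emits timestamped, host-tagged start/end events for each stage of a transfer so that end-to-end latency can later be reconstructed from the log. This is not the actual NetLogger toolkit or its log format; all names here are illustrative.

import json
import socket
import time

def log_event(logfile, event, **fields):
    # Append one timestamped, host-tagged event record as a JSON line.
    record = {
        "ts": time.time(),             # wall-clock timestamp for this event
        "host": socket.gethostname(),  # lets events from many hosts be merged
        "event": event,
        **fields,
    }
    logfile.write(json.dumps(record) + "\n")

# Example: instrumenting one block transfer on the storage-server side.
with open("events.log", "a") as logfile:
    log_event(logfile, "server.read.start", block_id=42)
    # ... read the block from disk ...
    log_event(logfile, "server.read.end", block_id=42)
    log_event(logfile, "server.send.start", block_id=42)
    # ... send the block to the visualization client ...
    log_event(logfile, "server.send.end", block_id=42)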


International Journal of Continuing Engineering Education and Life-Long Learning | 2008

LOCO-Analyst: semantic web technologies in learning content usage analysis

Jelena Jovanovic; Dragan Gasevic; Christopher Brooks; Vladan Devedzic; Marek Hatala; Timmy Eap; Griff Richards

This paper demonstrates how we use semantic web technologies to improve the state of the art in e-learning environments and bridge the gap between students and learning-content authors/teachers. In particular, we use our Learning Object Context Ontology (LOCO) framework to formalise the notion of learning object context as a complex interplay of learning activities, learning objects and learners. In addition, we rely on semantic annotation to establish semantic relations among diverse learning artefacts (e.g. lessons and chat messages). These technologies enabled us to implement a number of feedback types (identified by interviewing several web-oriented professional educators) that help content authors and teachers improve their online courses. The implemented feedback is integrated in a tool named LOCO-Analyst, which extends the well-known learning content packaging tool RELOAD Editor. LOCO-Analyst was tested on real data obtained from the iHelp Courses Learning Content Management System and was evaluated by several educational practitioners.
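
As a hedged sketch of how semantic relations among learning artefacts might be expressed, the snippet below records a few RDF triples linking a lesson and a chat message using the rdflib library. The namespace and the property names (relatedTo, discussesTopic) are hypothetical and are not the actual LOCO ontology terms.

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/loco/")  # hypothetical namespace, not the real LOCO ontology
g = Graph()
g.bind("ex", EX)

lesson = EX["lesson/intro-to-xml"]
chat = EX["chat/message-1042"]

g.add((lesson, RDF.type, EX.LearningObject))
g.add((chat, RDF.type, EX.ChatMessage))
g.add((chat, EX.discussesTopic, Literal("XML namespaces")))
g.add((chat, EX.relatedTo, lesson))  # semantic relation between artefacts

# Feedback tools can then traverse such relations, e.g. find every artefact
# related to a given lesson:
for artefact in g.subjects(EX.relatedTo, lesson):
    print(artefact)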


Learning Analytics and Knowledge | 2015

A time series interaction analysis method for building predictive models of learners using log data

Christopher Brooks; Craig Thompson; Stephanie D. Teasley

As courses become bigger, move online, and are deployed to the general public at low cost (e.g. through Massive Open Online Courses, MOOCs), new methods of predicting student achievement are needed to support the learning process. This paper presents a novel method for converting educational log data into features suitable for building predictive models of student success. Unlike cognitive modelling or content analysis approaches, these models are built from interactions between learners and resources, an approach that requires no input from instructional or domain experts and can be applied across courses or learning environments.
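
A minimal sketch of the kind of feature construction this describes (not the paper's exact method): raw interaction logs are binned into per-learner, per-week activity counts that a standard classifier could then be trained on. The column names and data below are illustrative.

import pandas as pd

# Raw log data: one row per learner-resource interaction.
logs = pd.DataFrame({
    "learner_id": [1, 1, 2, 2, 2],
    "timestamp": pd.to_datetime(
        ["2015-01-05", "2015-01-13", "2015-01-06", "2015-01-07", "2015-01-20"]
    ),
    "resource": ["video", "quiz", "video", "forum", "video"],
})

# Bin each interaction into a course week relative to the earliest event.
course_start = logs["timestamp"].min()
logs["week"] = (logs["timestamp"] - course_start).dt.days // 7 + 1

# Feature matrix: interaction counts per learner per course week, which can
# feed a predictive model of success without expert-authored content features.
features = (
    logs.pivot_table(index="learner_id", columns="week",
                     values="resource", aggfunc="count", fill_value=0)
    .add_prefix("interactions_week_")
)
print(features)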


Archive | 2014

The Data-Assisted Approach to Building Intelligent Technology-Enhanced Learning Environments

Christopher Brooks; Jim E. Greer; Carl Gutwin

This chapter deals with the sensemaking activity in learning analytics. It provides a detailed description of the data-assisted approach to building intelligent technology-enhanced learning systems, which focuses on helping instructional experts discover insight into the teaching and learning process and leverage that insight as instructional interventions. To accomplish this, three different scenarios and associated case studies are provided: the use of information visualization in online discussion forums, the use of clustering for lecture capture viewership, and the ability to customize indexes in lecture capture playback. As each case study is described, the sensemaking process is contextualized to the different instructional experts involved.


Human Factors in Computing Systems | 2016

Enabling Designers to Foresee Which Colors Users Cannot See

Katharina Reinecke; David R. Flatla; Christopher Brooks

Users frequently encounter situations in which their ability to differentiate screen colors is impaired, such as when bright sunlight causes glare or when monitors are dimly lit. However, designers currently have no way of choosing colors that will be differentiable by users of various demographic backgrounds and abilities and in the wide range of situations where their designs may be viewed. Our goal is to provide designers with insight into the effect of real-world situational lighting conditions on people's ability to differentiate colors in applications and imagery. We therefore developed an online color differentiation test that includes a survey of situational lighting conditions, verified our test in a lab study, and deployed it in an online environment where we collected data from around 30,000 participants. We then created ColorCheck, an image-processing tool that shows designers the proportion of the population they include (or exclude) by their color choices.


Learning at Scale | 2015

Who You Are or What You Do: Comparing the Predictive Power of Demographics vs. Activity Patterns in Massive Open Online Courses (MOOCs)

Christopher Brooks; Craig Thompson; Stephanie D. Teasley

Demographic factors have been used successfully as predictors of student success in traditional higher education systems, but their relationship to achievement in MOOC environments has been largely untested. In this work we explore the predictive power of user demographics compared to learner interaction trace data generated by students in two MOOCs. We show that demographic information offers minimal predictive power compared to activity models, even for models created very early in the course, before substantial interaction data has accrued.
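
One way to run the comparison described above, sketched here on synthetic data: fit one model on demographic features and one on activity counts, then compare held-out predictive power (e.g. by AUC). The feature names and data are invented for illustration and are not the study's variables.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
demographics = rng.normal(size=(n, 3))      # e.g. age, prior education, region
activity = rng.poisson(lam=3, size=(n, 5))  # e.g. weekly interaction counts
# Synthetic outcome driven mostly by activity, to mirror the reported finding.
passed = (activity.sum(axis=1) + rng.normal(size=n) > 15).astype(int)

for name, X in [("demographics", demographics), ("activity", activity)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, passed, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: held-out AUC = {auc:.2f}")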


Learning Analytics and Knowledge | 2015

Reducing selection bias in quasi-experimental educational studies

Christopher Brooks; Omar Chavez; Jared Tritz; Stephanie D. Teasley

In this paper we examine the issue of selection bias in quasi-experimental (non-randomly controlled) educational studies. We provide background on common sources of selection bias and on the issues involved in evaluating the outcomes of quasi-experimental studies. We describe two methods, matched sampling and propensity score matching, that can be used to overcome this bias, and we illustrate their application through a case study that leverages large educational datasets drawn from higher education institutional data warehouses. The contribution of this work is a recommended methodology and a case study that educational researchers can use to understand, measure, and reduce selection bias in real-world educational interventions.
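
As a minimal sketch of one of the two techniques named above, propensity score matching, the snippet below estimates each student's propensity to self-select into a treatment with a logistic regression and then pairs every treated student with the control student whose score is closest. The data and variable names are synthetic and illustrative, not the paper's implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 500
covariates = rng.normal(size=(n, 4))  # e.g. prior GPA, credits earned, ...
# Students self-select into the intervention in a way that depends on covariates.
treated = (covariates[:, 0] + rng.normal(size=n) > 0).astype(int)

# 1. Estimate the propensity score: P(treated | covariates).
propensity = (
    LogisticRegression(max_iter=1000)
    .fit(covariates, treated)
    .predict_proba(covariates)[:, 1]
)

# 2. Match each treated student to the control student with the closest score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(propensity[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(propensity[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]

# Outcomes can now be compared between the treated group and its matched
# controls, with the measured covariates more closely balanced across groups.
print(len(treated_idx), "treated students matched to", len(set(matched_controls)), "controls")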


Learning at Scale | 2015

Learn With Friends: The Effects of Student Face-to-Face Collaborations on Massive Open Online Course Activities

Christopher Brooks; Caren M. Stalburg; Tawanna R. Dillahunt; Lionel P. Robert

This work investigates whether enrolling in a Massive Open Online Course (MOOC) with friends or colleagues can improve a learner's performance and social interaction during the course. Our results suggest that signing up for a MOOC with peers correlates positively with the rate of course completion, level of achievement, and discussion forum usage. Further analysis suggests that a learner's interaction with their friends complements a MOOC by acting as a form of self-blended learning.


Human Factors in Computing Systems | 2015

Detecting and Visualizing Filter Bubbles in Google and Bing

Tawanna R. Dillahunt; Christopher Brooks; Samarth Gulati

Despite the pervasiveness of search engines, most users know little about the implications of search engine algorithms and are unaware of how they work. People using web search engines assume that search results are unbiased and neutral. Filter bubbles, or personalized results, could lead to polarizing effects across populations, which could create divisions in society. This preliminary work explores whether the filter bubble can be measured and described and is an initial investigation towards the larger goal of identifying how non-search experts might understand how the filter bubble impacts their search results.


User Modeling and User-Adapted Interaction | 2018

Student success prediction in MOOCs

Josh Gardner; Christopher Brooks

Predictive models of student success in Massive Open Online Courses (MOOCs) are a critical component of effective content personalization and adaptive interventions. In this article we review the state of the art in predictive models of student success in MOOCs and present a categorization of MOOC research according to the predictors (features), predictions (outcomes), and underlying theoretical model. We critically survey work across each category, providing data on the raw data source, feature engineering, statistical model, evaluation method, prediction architecture, and other aspects of these experiments. Such a review is particularly useful given the rapid expansion of predictive modeling research in MOOCs since the emergence of major MOOC platforms in 2012. This survey reveals several key methodological gaps, which include extensive filtering of experimental subpopulations, ineffective student model evaluation, and the use of experimental data which would be unavailable for real-world student success prediction and intervention, the ultimate goal of such models. Finally, we highlight opportunities for future research, which include temporal modeling, research bridging predictive and explanatory student models, work which contributes to learning theory, and evaluating long-term learner success in MOOCs.

Collaboration


Dive into Christopher Brooks's collaborations.

Top Co-Authors

Ryan S. Baker, University of Pennsylvania
Craig Thompson, University of Saskatchewan
Jim E. Greer, University of Saskatchewan
Brian Tierney, Lawrence Berkeley National Laboratory