Publications


Featured research published by Michael D. Ekstrand.


Foundations and Trends in Human-Computer Interaction | 2011

Collaborative Filtering Recommender Systems

Michael D. Ekstrand; John Riedl; Joseph A. Konstan

Recommender systems are an important part of the information and e-commerce ecosystem. They represent a powerful method for enabling users to filter through large information and product spaces. Nearly two decades of research on collaborative filtering have led to a varied set of algorithms and a rich collection of tools for evaluating their performance. Research in the field is moving in the direction of a richer understanding of how recommender technology may be embedded in specific domains. The differing personalities exhibited by different recommender algorithms show that recommendation is not a one-size-fits-all problem. Specific tasks, information needs, and item domains represent unique problems for recommenders, and design and evaluation of recommenders need to be done based on the user tasks to be supported. Effective deployments must begin with careful analysis of prospective users and their goals. Based on this analysis, system designers have a host of options for the choice of algorithm and for its embedding in the surrounding user experience. This paper discusses a wide variety of the choices available and their implications, aiming to provide both practitioners and researchers with an introduction to the important issues underlying recommenders and current best practices for addressing these issues.
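
As an illustration of the algorithm family this survey covers, here is a minimal sketch of user-based collaborative filtering with mean-centered cosine similarity. The toy ratings matrix and neighborhood size are invented for the example; this is not code from the paper.

```python
# Minimal sketch of user-based collaborative filtering: predict a user's
# rating of an item from the ratings of similar users. Toy data only.
import numpy as np

# Rows are users, columns are items; 0 marks a missing rating.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 4, 4],
], dtype=float)

def predict(R, u, i, k=2):
    """Predict user u's rating of item i from the k most similar raters."""
    mask = R > 0
    means = np.array([row[m].mean() for row, m in zip(R, mask)])
    centered = np.where(mask, R - means[:, None], 0.0)

    # Cosine similarity between u and every user who actually rated item i.
    raters = [v for v in range(R.shape[0]) if v != u and mask[v, i]]
    sims = []
    for v in raters:
        num = centered[u] @ centered[v]
        den = np.linalg.norm(centered[u]) * np.linalg.norm(centered[v])
        sims.append((num / den if den else 0.0, v))
    sims.sort(reverse=True)

    top = sims[:k]
    norm = sum(abs(s) for s, _ in top)
    if norm == 0:
        return means[u]  # fall back to the user's mean rating
    return means[u] + sum(s * centered[v, i] for s, v in top) / norm

print(round(predict(R, u=0, i=2), 2))
```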


Conference on Recommender Systems | 2011

Rethinking the recommender research ecosystem: reproducibility, openness, and LensKit

Michael D. Ekstrand; Michael Ludwig; Joseph A. Konstan; John Riedl

Recommender systems research is being slowed by the difficulty of replicating and comparing research results. Published research uses various experimental methodologies and metrics that are difficult to compare. It also often fails to sufficiently document the details of proposed algorithms or the evaluations employed. Researchers waste time reimplementing well-known algorithms, and the new implementations may miss key details from the original algorithm or its subsequent refinements. When proposing new algorithms, researchers should compare them against finely-tuned implementations of the leading prior algorithms using state-of-the-art evaluation methodologies. With few exceptions, published algorithmic improvements in our field should be accompanied by working code in a standard framework, including test harnesses to reproduce the described results. To that end, we present the design and freely distributable source code of LensKit, a flexible platform for reproducible recommender systems research. LensKit provides carefully tuned implementations of the leading collaborative filtering algorithms, APIs for common recommender system use cases, and an evaluation framework for performing reproducible offline evaluations of algorithms. We demonstrate the utility of LensKit by replicating and extending a set of prior comparative studies of recommender algorithms --- showing limitations in some of the original results --- and by investigating a question recently raised by a leader in the recommender systems community on problems with error-based prediction evaluation.
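
To make the reproducibility point concrete, below is a hedged sketch of the kind of deterministic offline evaluation protocol a framework like LensKit standardizes: a seeded train/test split plus an error metric, so two groups running the same protocol get the same partition. The function names and toy ratings are ours for illustration; this is not the LensKit API.

```python
# Sketch of a reproducible offline evaluation: fixed-seed holdout split
# and RMSE over the held-out ratings. Illustrative only.
import random, math

def split_ratings(ratings, test_frac=0.2, seed=42):
    """Deterministic holdout split: same seed -> same partition."""
    rng = random.Random(seed)
    shuffled = ratings[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

def rmse(predict, test):
    """Root mean squared error of predict(user, item) on held-out ratings."""
    errs = [(predict(u, i) - r) ** 2 for u, i, r in test]
    return math.sqrt(sum(errs) / len(errs))

ratings = [("u1", "i1", 4.0), ("u1", "i2", 3.0), ("u2", "i1", 5.0),
           ("u2", "i3", 2.0), ("u3", "i2", 4.5), ("u3", "i3", 1.0)]
train, test = split_ratings(ratings)
global_mean = sum(r for _, _, r in train) / len(train)
print("baseline RMSE:", round(rmse(lambda u, i: global_mean, test), 3))
```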


Conference on Recommender Systems | 2010

Automatically building research reading lists

Michael D. Ekstrand; Praveen Kannan; James A. Stemper; John T. Butler; Joseph A. Konstan; John Riedl

All new researchers face the daunting task of familiarizing themselves with the existing body of research literature in their respective fields. Recommender algorithms could aid in preparing these lists, but most current algorithms do not understand how to rate the importance of a paper within the literature, which might limit their effectiveness in this domain. We explore several methods for augmenting existing collaborative and content-based filtering algorithms with measures of the influence of a paper within the web of citations. We measure influence using well-known algorithms, such as HITS and PageRank, for measuring a node's importance in a graph. Among these augmentation methods is a novel method for using importance scores to influence collaborative filtering. We present a task-centered evaluation, including both an offline analysis and a user study, of the performance of the algorithms. Results from these studies indicate that collaborative filtering outperforms content-based approaches for generating introductory reading lists.
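
Below is a rough sketch of PageRank over a citation graph, the kind of influence measure the paper blends into its recommenders. The toy graph, damping factor, and iteration count are conventional illustrative defaults, not values from the paper.

```python
# PageRank on a citation graph: papers cited by many influential papers
# accumulate higher rank. Toy graph only.
def pagerank(graph, damping=0.85, iters=50):
    """graph maps each paper to the list of papers it cites."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, cited in graph.items():
            if cited:
                share = damping * rank[n] / len(cited)
                for c in cited:
                    new[c] += share
            else:  # dangling node: spread its rank uniformly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

citations = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
print({p: round(r, 3) for p, r in pagerank(citations).items()})
```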


Conference on Recommender Systems | 2015

Letting Users Choose Recommender Algorithms: An Experimental Study

Michael D. Ekstrand; Daniel Kluver; F. Maxwell Harper; Joseph A. Konstan

Recommender systems are not one-size-fits-all; different algorithms and data sources have different strengths, making them a better or worse fit for different users and use cases. As one way of taking advantage of the relative merits of different algorithms, we gave users the ability to change the algorithm providing their movie recommendations and studied how they make use of this power. We conducted our study with the launch of a new version of the MovieLens movie recommender that supports multiple recommender algorithms and allows users to choose the algorithm they want to provide their recommendations. We examine log data from user interactions with this new feature to understand whether and how users switch among recommender algorithms, and select a final algorithm to use. We also look at the properties of the algorithms as they were experienced by users and examine their relationships to user behavior. We found that a substantial portion of our user base (25%) used the recommender-switching feature. The majority of users who used the control only switched algorithms a few times, trying a few out and settling down on an algorithm that they would leave alone. The largest number of users prefer a matrix factorization algorithm, followed closely by item-item collaborative filtering; users selected both of these algorithms much more often than they chose a non-personalized mean recommender. The algorithms did produce measurably different recommender lists for the users in the study, but these differences were not directly predictive of user choice.
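
One simple way to quantify "measurably different" recommendation lists is set overlap between two algorithms' top-N results for the same user. The Jaccard measure and the toy lists below are our illustration, not necessarily the paper's metric.

```python
# Jaccard overlap between two algorithms' top-N lists: 1.0 means
# identical sets, 0.0 means completely disjoint. Toy data only.
def jaccard(list_a, list_b):
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

mf_top5 = ["Inception", "Memento", "Heat", "Alien", "Ran"]
itemitem_top5 = ["Inception", "Alien", "Seven", "Brazil", "Ran"]
print(f"top-5 overlap: {jaccard(mf_top5, itemitem_top5):.2f}")
```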


Conference on Recommender Systems | 2012

When recommenders fail: predicting recommender failure for algorithm selection and combination

Michael D. Ekstrand; John Riedl

Hybrid recommender systems --- systems using multiple algorithms together to improve recommendation quality --- have been well-known for many years and have shown good performance in recent demonstrations such as the Netflix Prize. Modern hybridization techniques, such as feature-weighted linear stacking, take advantage of the hypothesis that the relative performance of recommenders varies by circumstance and attempt to optimize each item score to maximize the strengths of the component recommenders. Less attention, however, has been paid to understanding what these strengths and failure modes are. Understanding what causes particular recommenders to fail will facilitate better selection of the component recommenders for future hybrid systems and a better understanding of how individual recommender personalities can be harnessed to improve the recommender user experience. We present an analysis of the predictions made by several well-known recommender algorithms on the MovieLens 10M data set, showing that for many cases in which one algorithm fails, there is another that will correctly predict the rating.
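
A minimal sketch of the hybrid idea the paper builds on: combine component recommenders' predictions with learned weights (plain linear stacking here; feature-weighted stacking would make the weights functions of user and item features). The component stand-ins and weights below are invented for illustration.

```python
# Linear stacking: a hybrid recommender's score is a weighted sum of
# its component recommenders' scores. Stand-in components, made-up weights.
def blend(predictors, weights, user, item):
    """Weighted combination of the component recommenders' predictions."""
    return sum(w * p(user, item) for p, w in zip(predictors, weights))

item_item = lambda u, i: 3.8    # stand-ins for real component recommenders
matrix_fact = lambda u, i: 4.2
baseline = lambda u, i: 3.5

weights = [0.5, 0.4, 0.1]  # hypothetical; normally fit on held-out data
print(blend([item_item, matrix_fact, baseline], weights, "u1", "i9"))
```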


Conference on Recommender Systems | 2013

Rating support interfaces to improve user experience and recommender accuracy

Tien T. Nguyen; Daniel Kluver; Ting-Yu Wang; Pik-Mai Hui; Michael D. Ekstrand; Martijn C. Willemsen; John Riedl

One of the challenges for recommender systems is that users struggle to accurately map their internal preferences to external measures of quality such as ratings. We study two methods for supporting the mapping process: (i) reminding the user of characteristics of items by providing personalized tags and (ii) relating rating decisions to prior rating decisions using exemplars. In our study, we introduce interfaces that provide these methods of support. We also present a set of methodologies to evaluate the efficacy of the new interfaces via a user experiment. Our results suggest that presenting exemplars during the rating process helps users rate more consistently, and increases the quality of the data.
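
To illustrate the exemplar idea, here is a hypothetical selection policy: show the user one previously rated item per star level as a reference point while they rate something new. Both the policy and the data are our invention; the paper does not prescribe this implementation.

```python
# Hypothetical exemplar selection: keep the most recent prior rating at
# each star level and surface those items as anchors during rating.
def pick_exemplars(prior_ratings, levels=(1, 2, 3, 4, 5)):
    """prior_ratings: list of (item, stars), oldest first.
    Returns one previously rated item per star level, when available."""
    exemplars = {}
    for item, stars in prior_ratings:  # later entries overwrite earlier ones
        if stars in levels:
            exemplars[stars] = item
    return exemplars

history = [("Alien", 5), ("Brazil", 3), ("Heat", 4), ("Ran", 5)]
print(pick_exemplars(history))  # {5: 'Ran', 3: 'Brazil', 4: 'Heat'}
```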


Learning at Scale | 2014

Teaching recommender systems at large scale: evaluation and lessons learned from a hybrid MOOC

Joseph A. Konstan; J. D. Walker; D. Christopher Brooks; Keith Brown; Michael D. Ekstrand

In Fall 2013 we offered an open online Introduction to Recommender Systems through Coursera, while simultaneously offering a for-credit version of the course on-campus using the Coursera platform and a flipped classroom instruction model. As the goal of offering this course was to experiment with this type of instruction, we performed extensive evaluation including surveys of demographics, self-assessed skills, and learning intent; we also designed a knowledge-assessment tool specifically for the subject matter in this course, administering it before and after the course to measure learning. We also tracked students through the course, including separating out students enrolled for credit from those enrolled only for the free, open course. This article reports on our findings.


International Symposium on Wikis and Open Collaboration | 2009

rv you're dumb: identifying discarded work in Wiki article history

Michael D. Ekstrand; John Riedl

Wiki systems typically display article history as a linear sequence of revisions in chronological order. This representation hides deeper relationships among the revisions, such as which earlier revision provided most of the content for a later revision, or when a revision effectively reverses the changes made by a prior revision. These relationships are valuable in understanding what happened between editors in conflict over article content. We present a computational method for detecting when a revision discards the work of one or more other revisions, along with a means of visualizing these relationships in-line with existing history views. We show through a series of examples that these tools can aid mediators of wiki content disputes by making salient the structure of the ongoing conflict. Further, the computational tools provide a means of determining whether or not a revision has been accepted by the community of editors surrounding the article.
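
A hedged sketch of the simplest signal for discarded work: an identity revert returns the article to the exact text of an earlier revision, which a content hash detects. The paper's methods go further than this; the snippet illustrates only the base case.

```python
# Detect identity reverts in a revision history: if a revision's text
# hashes to a previously seen value, everything in between was discarded.
import hashlib

def find_reverts(revisions):
    """revisions: list of (rev_id, text). Yields (rev_id, reverted_to)."""
    seen = {}  # text hash -> rev_id that first produced that text
    for rev_id, text in revisions:
        digest = hashlib.sha1(text.encode()).hexdigest()
        if digest in seen:
            yield rev_id, seen[digest]  # work between the two is discarded
        else:
            seen[digest] = rev_id

history = [(1, "stub"), (2, "expanded article"),
           (3, "expanded article -- vandalism"), (4, "expanded article")]
print(list(find_reverts(history)))  # rev 4 restores rev 2; rev 3 discarded
```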


Conference on Recommender Systems | 2011

LensKit: a modular recommender framework

Michael D. Ekstrand; Michael Ludwig; John Kolb; John Riedl

LensKit is a new recommender systems toolkit aiming to be a platform for recommender research and education. It provides a common API for recommender systems, modular implementations of several collaborative filtering algorithms, and an evaluation framework for consistent, reproducible offline evaluation of recommender algorithms. In this demo, we will showcase the ease with which LensKit allows recommenders to be configured and evaluated.
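
As a sketch of what "modular" means here, the snippet below puts interchangeable scoring components behind one small interface so algorithms can be swapped without touching the surrounding code. It mirrors the concept only and is not LensKit's actual API (the original LensKit is a Java framework).

```python
# Components behind a common interface: any Scorer can be plugged into
# the same top-N recommendation routine. Illustrative toy design.
from typing import Protocol

class Scorer(Protocol):
    def score(self, user: str, item: str) -> float: ...

class ItemMeanScorer:
    """Non-personalized baseline: score every item by its mean rating."""
    def __init__(self, item_means: dict[str, float]) -> None:
        self.item_means = item_means

    def score(self, user: str, item: str) -> float:
        return self.item_means.get(item, 0.0)

def top_n(scorer: Scorer, user: str, items: list[str], n: int) -> list[str]:
    """Rank candidate items with whatever scorer was plugged in."""
    return sorted(items, key=lambda i: scorer.score(user, i), reverse=True)[:n]

print(top_n(ItemMeanScorer({"i1": 3.2, "i2": 4.6, "i3": 4.1}),
            "u1", ["i1", "i2", "i3"], n=2))
```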


ACM Transactions on Computer-Human Interaction | 2015

Teaching recommender systems at large scale

Joseph A. Konstan; J. D. Walker; D. Christopher Brooks; Keith Brown; Michael D. Ekstrand

In the fall of 2013, we offered an open online Introduction to Recommender Systems through Coursera, while simultaneously offering a for-credit version of the course on-campus using the Coursera platform and a flipped classroom instruction model. As the goal of offering this course was to experiment with this type of instruction, we performed extensive evaluation including surveys of demographics, self-assessed skills, and learning intent; we also designed a knowledge-assessment tool specifically for the subject matter in this course, administering it before and after the course to measure learning, and again 5 months later to measure retention. We also tracked students through the course, including separating out students enrolled for credit from those enrolled only for the free, open course. Students had significant knowledge gains across all levels of prior knowledge and across all demographic categories. The main predictor of knowledge gain was effort expended in the course. Students also had significant knowledge retention after the course. Both of these results are limited to the sample of students who chose to complete our knowledge tests. Student completion of the course was hard to predict, with few factors contributing predictive power; the main predictor of completion was intent to complete. Students who chose a concepts-only track with hand exercises achieved the same level of knowledge of recommender systems concepts as those who chose a programming track and its added assignments, though the programming students gained additional programming knowledge. Based on the limited data we were able to gather, face-to-face students performed as well as the online-only students or better; they preferred this format to traditional lecture for reasons ranging from pure convenience to the desire to watch videos at a different pace (slower for English language learners; faster for some native English speakers). This article also includes our qualitative observations, lessons learned, and future directions.

Collaboration


Top co-authors of Michael D. Ekstrand.

John Riedl
University of Minnesota

Mucun Tian
Boise State University

Martijn C. Willemsen
Eindhoven University of Technology