Publication


Featured research published by Simone Stumpf.


Intelligent User Interfaces | 2009

Fixing the program my computer learned: barriers for end users, challenges for the machine

Todd Kulesza; Weng-Keen Wong; Simone Stumpf; Stephen Perona; Rachel White; Margaret M. Burnett; Ian Oberst; Andrew J. Ko

The results of machine learning from user behavior can be thought of as a program, and like all programs, it may need to be debugged. Providing ways for the user to debug it matters, because without the ability to fix errors, users may find that the learned program's errors are too damaging for them to trust such programs. We present a new approach to enable end users to debug a learned program. We then use an early prototype of our new approach to conduct a formative study to determine where and when debugging issues arise, both in general and separately for males and females. The results suggest opportunities to make machine-learned programs more effective tools.


KSII Transactions on Internet and Information Systems | 2011

Why-oriented end-user debugging of naive Bayes text classification

Todd Kulesza; Simone Stumpf; Weng-Keen Wong; Margaret M. Burnett; Stephen Perona; Andrew J. Ko; Ian Oberst

Machine learning techniques are increasingly used in intelligent assistants, that is, software targeted at and continuously adapting to assist end users with email, shopping, and other tasks. Examples include desktop spam filters, recommender systems, and handwriting recognition. Fixing such intelligent assistants when they learn incorrect behavior, however, has received only limited attention. To directly support end-user “debugging” of assistant behaviors learned via statistical machine learning, we present a Why-oriented approach which allows users to ask questions about how the assistant made its predictions, provides answers to these “why” questions, and allows users to interactively change these answers to debug the assistant's current and future predictions. To understand the strengths and weaknesses of this approach, we then conducted an exploratory study to investigate barriers that participants could encounter when debugging an intelligent assistant using our approach, and the information those participants requested to overcome these barriers. To help ensure the inclusiveness of our approach, we also explored how gender differences played a role in understanding barriers and information needs. We then used these results to consider opportunities for Why-oriented approaches to address user barriers and information needs.
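
To make the idea concrete, the following minimal Python sketch (not the paper's actual system; the corpus, labels, and API names are illustrative) shows how a naive Bayes text classifier can expose a "why"-style explanation: the per-word log-probability contributions behind a prediction, which an end user could inspect and, in a Why-oriented tool, adjust.

import math
from collections import Counter

class ExplainableNB:
    """Multinomial naive Bayes with per-word 'why' explanations (illustrative sketch)."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha                 # Laplace smoothing
        self.word_counts = {}              # label -> Counter of word frequencies
        self.label_counts = Counter()      # label -> number of training documents
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, label in zip(docs, labels):
            self.label_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for word in doc.lower().split():
                counts[word] += 1
                self.vocab.add(word)

    def _word_logprob(self, word, label):
        counts = self.word_counts[label]
        total = sum(counts.values())
        return math.log((counts[word] + self.alpha) /
                        (total + self.alpha * len(self.vocab)))

    def explain(self, doc):
        """For each label: (total log score, per-word contributions to that score)."""
        n_docs = sum(self.label_counts.values())
        result = {}
        for label in self.label_counts:
            prior = math.log(self.label_counts[label] / n_docs)
            contrib = {w: self._word_logprob(w, label)
                       for w in doc.lower().split() if w in self.vocab}
            result[label] = (prior + sum(contrib.values()), contrib)
        return result

    def predict(self, doc):
        scores = self.explain(doc)
        return max(scores, key=lambda label: scores[label][0])

nb = ExplainableNB()
nb.fit(["cheap pills buy now", "meeting agenda attached", "buy cheap meds"],
       ["spam", "ham", "spam"])
label = nb.predict("buy pills now")
_, why = nb.explain("buy pills now")[label]
print(label)                                        # spam
print(sorted(why.items(), key=lambda kv: -kv[1]))   # words ranked by influence on the prediction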


Intelligent User Interfaces | 2009

Detecting and correcting user activity switches: algorithms and interfaces

Jianqiang Shen; Jed Irvine; Xinlong Bao; Michael Goodman; Stephen Kolibaba; Anh Tran; Fredric Carl; Brenton Kirschner; Simone Stumpf; Thomas G. Dietterich

The TaskTracer system allows knowledge workers to define a set of activities that characterize their desktop work. It then associates with each user-defined activity the set of resources that the user accesses when performing that activity. In order to correctly associate resources with activities and provide useful activity-related services to the user, the system needs to know the current activity of the user at all times. It is often convenient for the user to explicitly declare which activity he/she is working on. But frequently the user forgets to do this. TaskTracer applies machine learning methods to detect undeclared activity switches and predict the correct activity of the user. This paper presents TaskPredictor2, a complete redesign of the activity predictor in TaskTracer and its notification user interface. TaskPredictor2 applies a novel online learning algorithm that is able to incorporate a richer set of features than our previous predictors. We prove an error bound for the algorithm and present experimental results that show improved accuracy and a 180-fold speedup on real user data. The user interface supports negotiated interruption and makes it easy for the user to correct both the predicted time of the task switch and the predicted activity.
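
The sketch below is not TaskPredictor2's algorithm (which is an online learner with a proven error bound); it is a deliberately simple, count-based stand-in in Python that illustrates the surrounding workflow: learn from declared desktop events one at a time, predict the current activity from an event's features (here, words from a window title), and flag a possible undeclared switch only when another activity outscores the declared one by a margin, in keeping with negotiated interruption. The feature choice and margin value are assumptions for illustration.

from collections import defaultdict

class ActivitySwitchDetector:
    """Illustrative count-based predictor of the user's current activity."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(float))  # activity -> feature -> count
        self.totals = defaultdict(float)                       # activity -> observed events

    def observe(self, features, activity):
        """Incorporate one declared desktop event (processed online, one at a time)."""
        self.totals[activity] += 1.0
        for f in features:
            self.counts[activity][f] += 1.0

    def _score(self, activity, features):
        if self.totals[activity] == 0:
            return 0.0
        # Average co-occurrence rate of the event's features with this activity.
        return sum(self.counts[activity][f] for f in features) / self.totals[activity]

    def predict(self, features):
        if not self.totals:
            return None
        return max(self.totals, key=lambda a: self._score(a, features))

    def maybe_flag_switch(self, features, declared_activity, margin=0.5):
        """Suggest a switch only if another activity clearly outscores the declared one."""
        predicted = self.predict(features)
        if predicted is None or predicted == declared_activity:
            return None
        gap = self._score(predicted, features) - self._score(declared_activity, features)
        return predicted if gap >= margin else None

detector = ActivitySwitchDetector()
detector.observe("quarterly budget spreadsheet".split(), "finance-report")
detector.observe("iui paper draft review".split(), "paper-writing")
# The user is still declared as 'paper-writing' but opens a budget document:
print(detector.maybe_flag_switch("budget forecast spreadsheet".split(), "paper-writing"))
# -> finance-report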


International Symposium on End-User Development | 2013

End-User Experiences of Visual and Textual Programming Environments for Arduino

Tracey Booth; Simone Stumpf

Arduino is an open source electronics platform aimed at hobbyists, artists, and other people who want to make things but do not necessarily have a background in electronics or programming. We report the results of an exploratory empirical study that investigated the potential for a visual programming environment to provide benefits with respect to efficacy and user experience to end-user programmers of Arduino as an alternative to traditional text-based coding. We also investigated learning barriers that participants encountered in order to inform future programming environment design. Our study provides a first step in exploring end-user programming environments for open source electronics platforms.


Human Factors in Computing Systems | 2012

Tell me more?: the effects of mental model soundness on personalizing an intelligent agent

Todd Kulesza; Simone Stumpf; Margaret M. Burnett; Irwin Kwan

What does a user need to know to productively work with an intelligent agent? Intelligent agents and recommender systems are gaining widespread use, potentially creating a need for end users to understand how these systems operate in order to fix their agent's personalized behavior. This paper explores the effects of mental model soundness on such personalization by providing structural knowledge of a music recommender system in an empirical study. Our findings show that participants were able to quickly build sound mental models of the recommender system's reasoning, and that participants who most improved their mental models during the study were significantly more likely to make the recommender operate to their satisfaction. These results suggest that by helping end users understand a system's reasoning, intelligent agents may elicit more and better feedback, thus more closely aligning their output with each user's intentions.


Intelligent User Interfaces | 2008

Integrating rich user feedback into intelligent user interfaces

Simone Stumpf; Erin Sullivan; Erin Fitzhenry; Ian Oberst; Weng-Keen Wong; Margaret M. Burnett

The potential for machine learning systems to improve via a mutually beneficial exchange of information with users has yet to be explored in much detail. Previously, we found that users were willing to provide a generous amount of rich feedback to machine learning systems, and that some types of this rich feedback seem promising for assimilation by machine learning algorithms. Following up on those findings, we ran an experiment to assess the viability of incorporating real-time keyword-based feedback in initial training phases when data is limited. We found that rich feedback improved accuracy, but an initial unstable period often caused large fluctuations in classifier behavior. Participants were able to give feedback by relying heavily on system communication in order to respond to changes. The results show that in order to benefit from the user's knowledge, machine learning systems must be able to absorb keyword-based rich feedback in a graceful manner and provide clear explanations of their predictions.
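
As a toy illustration of the kind of assimilation this calls for (the data structure, boost size, and function name are assumptions, not details from the study), keyword-based feedback can be folded into a count-based text classifier by giving a user-flagged keyword extra pseudo-counts for a class, so it influences predictions immediately even when training data is sparse:

from collections import Counter

# Toy per-class word counts from a small amount of training data.
word_counts = {"work":     Counter({"meeting": 3, "deadline": 2}),
               "personal": Counter({"party": 2, "family": 1})}

def apply_keyword_feedback(counts, label, keyword, boost=5):
    """User feedback: `keyword` is a strong signal for `label` (boost size is illustrative)."""
    counts[label][keyword] += boost

# With little training data, 'invoice' is unseen; one piece of rich feedback
# makes it count toward the 'work' class right away.
apply_keyword_feedback(word_counts, "work", "invoice")
print(word_counts["work"]["invoice"])   # 5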


Symposium on Visual Languages and Human-Centric Computing | 2010

Explanatory Debugging: Supporting End-User Debugging of Machine-Learned Programs

Todd Kulesza; Simone Stumpf; Margaret M. Burnett; Weng-Keen Wong; Yann Riche; Travis Moore; Ian Oberst; Amber Shinsel; Kevin McIntosh

Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a “program” that tells the computer what to do when future inputs arrive. Little research has explored how an end user can debug these programs when they make mistakes. We present our progress toward enabling end users to debug these learned programs via a Natural Programming methodology. We began with a formative study exploring how users reason about and correct a text-classification program. From the results, we derived and prototyped a concept based on “explanatory debugging”, then empirically evaluated it. Our results contribute methods for exposing a learned program’s logic to end users and for eliciting user corrections to improve the program’s predictions.


IEEE Transactions on Software Engineering | 2014

You Are the Only Possible Oracle: Effective Test Selection for End Users of Interactive Machine Learning Systems

Alex Groce; Todd Kulesza; Chaoqiang Zhang; Shalini Shamasunder; Margaret M. Burnett; Weng-Keen Wong; Simone Stumpf; Shubhomoy Das; Amber Shinsel; Forrest Bice; Kevin McIntosh

How do you test a program when only a single user, with no expertise in software testing, is able to determine if the program is performing correctly? Such programs are common today in the form of machine-learned classifiers. We consider the problem of testing this common kind of machine-generated program when the only oracle is an end user: e.g., only you can determine if your email is properly filed. We present test selection methods that provide very good failure rates even for small test suites, and show that these methods work in both large-scale random experiments using a “gold standard” and in studies with real users. Our methods are inexpensive and largely algorithm-independent. Key to our methods is an exploitation of properties of classifiers that is not possible in traditional software testing. Our results suggest that it is plausible for time-pressured end users to interactively detect failures, even very hard-to-find failures, without wading through a large number of successful (and thus less useful) tests. We additionally show that some methods are able to find the arguably most difficult-to-detect faults of classifiers: cases where machine learning algorithms have high confidence in an incorrect result.
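
The paper evaluates several selection methods; the Python sketch below (the helper names and stand-in probability function are hypothetical) shows just the simplest confidence-based idea consistent with this setting: when the end user is the only oracle, surface the items the classifier is least sure about first, since those are most likely to reveal failures for a fixed amount of user effort.

def select_tests_by_confidence(items, predict_proba, budget=10):
    """Pick the `budget` items whose top-class probability is lowest."""
    scored = []
    for item in items:
        probs = predict_proba(item)            # e.g. {"spam": 0.91, "ham": 0.09}
        scored.append((max(probs.values()), item))
    scored.sort(key=lambda pair: pair[0])      # least confident first
    return [item for _, item in scored[:budget]]

# Usage with a stand-in probability function.
emails = ["free offer!!!", "lunch tomorrow?", "meeting moved to 3pm"]
def fake_proba(text):
    return {"spam": 0.9, "ham": 0.1} if "free" in text else {"spam": 0.45, "ham": 0.55}

print(select_tests_by_confidence(emails, fake_proba, budget=2))
# -> the two emails the classifier is least sure about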


Knowledge-Based Systems | 2010

Explaining how to play real-time strategy games

Ronald A. Metoyer; Simone Stumpf; Christoph Neumann; Jonathan Dodge; Jill Cao; Aaron Schnabel

Real-time strategy games share many aspects with real situations in domains such as battle planning, air traffic control, and emergency response team management, which makes them appealing test-beds for Artificial Intelligence (AI) and machine learning. End-user annotations could help to provide supplemental information for learning algorithms, especially when training data is sparse. This paper presents a formative study to uncover how experienced users explain game play in real-time strategy games. We report the results of our analysis of explanations and discuss their characteristics that could support the design of systems for use by experienced real-time strategy game users in specifying or annotating strategy-oriented behavior.


Human Factors in Computing Systems | 2016

Crossed Wires: Investigating the Problems of End-User Developers in a Physical Computing Task

Tracey Booth; Simone Stumpf; Jon Bird; Sara Jones

Considerable research has focused on the problems that end users face when programming software, in order to help them overcome their difficulties, but there is little research into the problems that arise in physical computing when end users construct circuits and program them. In an empirical study, we observed end-user developers as they connected a temperature sensor to an Arduino microcontroller and visualized its readings using LEDs. We investigated how many problems participants encountered, the problem locations, and whether they were overcome. We show that most fatal faults were due to incorrect circuit construction, and that often problems were wrongly diagnosed as program bugs. Whereas there are development environments that help end users create and debug software, there is currently little analogous support for physical computing tasks. Our work is a first step towards building appropriate tools that support end-user developers in overcoming obstacles when constructing physical computing artifacts.

Collaboration


Dive into Simone Stumpf's collaboration.

Top Co-Authors

Todd Kulesza

Oregon State University

Ian Oberst

Oregon State University

Ayse Göker

Robert Gordon University

Janet McDonnell

University of the Arts London
