Steven A. Wolfman
University of British Columbia
Publications
Featured research published by Steven A. Wolfman.
Machine Learning | 2003
Tessa A. Lau; Steven A. Wolfman; Pedro M. Domingos; Daniel S. Weld
Programming by demonstration enables users to easily personalize their applications, automating repetitive tasks simply by executing a few examples. We formalize programming by demonstration as a machine learning problem: given the changes in the application state that result from the user's demonstrated actions, learn the general program that maps from one application state to the next. We present a methodology for learning in this space of complex functions. First we extend version spaces to learn arbitrary functions, not just concepts. Then we introduce the version space algebra, a method for composing simpler version spaces to construct more complex spaces. Finally, we apply our version space algebra to the text-editing domain and describe an implemented system called SMARTedit that learns repetitive text-editing procedures by example. We evaluate our approach by measuring the number of examples required for the system to learn a procedure that works on the remainder of the examples, and by an informal user study measuring the effort users spend using our system versus performing the task by hand. The results show that SMARTedit is capable of generalizing correctly from as few as one or two examples, and that users generally save a significant amount of effort when completing tasks with SMARTedit's help.
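The core idea of a version space over functions can be sketched in a few lines (this is an illustrative toy, not the authors' implementation, and the hypothesis names are invented): keep every candidate function consistent with all observed state transitions, and prune the set as each demonstrated example arrives.

```python
# Toy version space over candidate functions: each hypothesis maps an
# editor state (here, a line of text) to a predicted cursor column.
class VersionSpace:
    def __init__(self, hypotheses):
        # hypotheses: list of (name, fn) pairs
        self.hypotheses = list(hypotheses)

    def update(self, state, observed_output):
        # Retain only hypotheses that reproduce the demonstrated action.
        self.hypotheses = [(name, fn) for name, fn in self.hypotheses
                           if fn(state) == observed_output]

    def consistent(self):
        return [name for name, _ in self.hypotheses]

# Two invented hypotheses: move to a fixed column vs. move past a comma.
hypotheses = [
    ("move_to_col_5", lambda line: 5),
    ("move_after_comma", lambda line: line.index(",") + 1),
]
vs = VersionSpace(hypotheses)
vs.update("abcd,xyz", 5)   # both hypotheses predict column 5; neither is pruned
vs.update("ab,cdef", 3)    # only move_after_comma predicts 3
print(vs.consistent())     # ['move_after_comma']
```

A single ambiguous example leaves both hypotheses alive; the second example disambiguates, which mirrors the paper's finding that one or two demonstrations often suffice.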
Human Factors in Computing Systems | 2004
Richard J. Anderson; Crystal Hoyer; Steven A. Wolfman; Ruth E. Anderson
Digital inking systems are becoming increasingly popular across a variety of domains. In particular, many systems now allow instructors to write on digital surfaces in the classroom. Yet, our understanding of how people actually use writing in these systems is limited. In this paper, we report on classroom use of writing in one such system, in which the instructor annotates projected slides using a Tablet PC. Through a detailed analysis of lecture archives, we identify key use patterns. In particular, we categorize a major use of ink as analogous to physical gestures and present a framework for analyzing this ink; we explore the relationship between the ephemeral meaning of many annotations and their persistent representation; and we observe that instructors make conservative use of the system's features. Finally, we discuss implications of our study for the design of future digital inking systems.
Computer Supported Collaborative Learning | 2003
Richard J. Anderson; Ruth E. Anderson; Tammy VanDeGrift; Steven A. Wolfman; Ken Yasuhara
Eliciting student participation in large college classes is difficult yet critical to learning. This paper describes a design experiment with the Classroom Feedback System (CFS), a computer-mediated system for promoting class interaction. We delineate challenges to interaction based on successive background and pilot studies. CFS addresses these challenges by enabling students to post annotations (e.g., "More Explanation") directly on lecture slides. The instructor sees the annotations in real time. Evidence from a large lecture study shows that CFS enhances interaction by addressing challenges to interaction.
Technical Symposium on Computer Science Education | 2004
Andrew Begel; Daniel D. Garcia; Steven A. Wolfman
We present a tutorial (based on [1]) focusing on kinesthetic learning activities, i.e., physically engaging classroom exercises. These might, for example, involve throwing a frisbee around the classroom to represent transfer of control in a procedure call, or simulating polygon scan conversion with rope for edges and students for pixels. The session begins with a brief kinesthetic learning activity to motivate the value of these activities. We follow with a variety of examples, and discuss how to use these successfully in the classroom. The audience then divides into facilitated groups to design their own activities. Finally, we all mingle to share and discuss the results. These results are posted on a public web forum---the KLA wiki [2]---for continued discussion and generation of new ideas.
Knowledge Engineering Review | 2001
Steven A. Wolfman; Daniel S. Weld
Compilation to Boolean satisfiability has become a powerful paradigm for solving artificial intelligence problems. However, domains that require metric reasoning cannot be compiled efficiently to satisfiability even if they would otherwise benefit from compilation. We address this problem by combining techniques from the artificial intelligence and operations research communities. In particular, we introduce the LCNF (Linear Conjunctive Normal Form) representation that combines propositional logic with metric constraints. We present LPSAT (Linear Programming plus SATisfiability), an engine that solves LCNF problems by interleaving calls to an incremental Simplex algorithm with systematic satisfaction methods. We explore several techniques for enhancing LPSAT's efficiency and expressive power by adjusting the interaction between the satisfiability and linear programming components of LPSAT. Next, we describe a compiler that converts metric resource planning problems into LCNF for processing by LPSAT. Finally, the experimental section of the paper explores several optimisations to LPSAT, including learning from constraint failure and randomised cutoffs.
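The flavor of LCNF can be shown with a deliberately tiny sketch (the encoding and helper names below are invented for illustration; the actual LPSAT engine interleaves systematic SAT search with incremental Simplex rather than brute force): propositional clauses are solved as usual, and certain "trigger" propositions, when true, activate linear constraints over metric variables that must then be jointly feasible.

```python
from itertools import product

def solve_lcnf(props, clauses, triggers, feasible):
    # clauses: list of lists of (var, required_polarity)
    # triggers: {proposition: constraint_id} -- true prop activates constraint
    # feasible(active_ids) -> a metric assignment dict, or None if infeasible
    for bits in product([False, True], repeat=len(props)):
        model = dict(zip(props, bits))
        if not all(any(model[v] == pol for v, pol in c) for c in clauses):
            continue  # propositional part violated
        active = {cid for p, cid in triggers.items() if model[p]}
        metrics = feasible(active)
        if metrics is not None:
            return model, metrics
    return None

# Toy problem: (a or b); a triggers x >= 3, b triggers x <= 1, with x in [0, 2].
def feasible(active):
    lo, hi = 0.0, 2.0
    if "x>=3" in active:
        lo = max(lo, 3.0)
    if "x<=1" in active:
        hi = min(hi, 1.0)
    return {"x": lo} if lo <= hi else None

result = solve_lcnf(["a", "b"],
                    [[("a", True), ("b", True)]],
                    {"a": "x>=3", "b": "x<=1"},
                    feasible)
print(result)  # ({'a': False, 'b': True}, {'x': 0.0})
```

Setting `a` true satisfies the clause but makes the metric part infeasible (x cannot reach 3 within [0, 2]), so the solver backtracks to the `b` branch, which is exactly the kind of interaction between the SAT and linear components that LPSAT exploits.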
ACM Multimedia | 2004
Richard J. Anderson; Crystal Hoyer; Craig Prince; Jonathan Su; Fred Videon; Steven A. Wolfman
In this paper, we report on an empirical exploration of digital ink and speech usage in lecture presentation. We studied the video archives of five Masters-level Computer Science courses to understand how instructors use ink and speech together while lecturing, and to evaluate techniques for analyzing digital ink. Our interest in understanding how ink and speech are used together is to inform the development of future tools for supporting classroom presentation, distance education, and viewing of archived lectures. We want to make it easier to interact with electronic materials and to extract information from them. We want to provide an empirical basis for addressing challenging problems such as automatically generating full-text transcripts of lectures, matching speaker audio with slide content, and recognizing the meaning of the instructor's ink. Our results include an evaluation of handwritten word recognition in the lecture domain, an approach for associating attentional marks with content, an analysis of linkage between speech and ink, and an application of recognition techniques to infer speaker actions.
Intelligent User Interfaces | 2001
Steven A. Wolfman; Tessa A. Lau; Pedro M. Domingos; Daniel S. Weld
Applications of machine learning can be viewed as teacher-student interactions in which the teacher provides training examples and the student learns a generalization of the training examples. One such application of great interest to the IUI community is adaptive user interfaces. In the traditional learning interface, the scope of teacher-student interactions consists solely of the teacher/user providing some number of training examples to the student/learner and testing the learned model on new examples. Active learning approaches go one step beyond the traditional interaction model and allow the student to propose new training examples that are then solved by the teacher. In this paper, we propose that interfaces for machine learning should even more closely resemble human teacher-student relationships. A teacher's time and attention are precious resources. An intelligent student must proactively contribute to the learning process, by reasoning about the quality of its knowledge, collaborating with the teacher, and suggesting new examples for her to solve. The paper describes a variety of rich interaction modes that enhance the learning process and presents a decision-theoretic framework, called DIAManD, for choosing the best interaction. We apply the framework to the SMARTedit programming by demonstration system and describe experimental validation and preliminary user feedback.
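A decision-theoretic chooser in this spirit can be sketched very simply (the candidate interactions, gains, and costs below are invented for illustration and are not DIAManD's actual utility model): score each available interaction by its expected information gain minus the attention cost it imposes on the teacher, and pick the maximum.

```python
# Pick the interaction with the highest net expected utility
# (expected_gain - teacher_cost), reflecting that teacher time is scarce.
def choose_interaction(candidates):
    # candidates: list of (name, expected_gain, teacher_cost)
    return max(candidates, key=lambda c: c[1] - c[2])[0]

candidates = [
    ("wait_for_next_example", 0.10, 0.00),
    ("ask_teacher_to_label",  0.60, 0.35),  # most informative, but costly
    ("propose_own_example",   0.40, 0.10),
]
print(choose_interaction(candidates))  # propose_own_example (net 0.30)
```

Under these made-up numbers, proactively proposing an example beats directly querying the teacher once the teacher's attention cost is accounted for, which is the trade-off the framework formalizes.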
Technical Symposium on Computer Science Education | 2002
Steven A. Wolfman
Pedagogy of large lecture classes has traditionally focused on deemphasizing the problems their size creates. This approach has yielded valuable practical advice for instructors. However, this paper argues that there are pedagogical advantages to the large lecture format and that exploiting these advantages can further improve classroom instruction. I present some advantages of large classes and anecdotes that demonstrate how to exploit these advantages.
Your Wish Is My Command | 2001
Tessa A. Lau; Steven A. Wolfman; Pedro M. Domingos; Daniel S. Weld
Programming by demonstration (PBD) has the potential to allow users to customize their applications. Rather than writing a program in an abstract programming language to automate a task, users demonstrate how to perform the task in the existing interface, and the system learns a generalized program that can perform it in new contexts. The SMARTedit system automates repetitive text-editing tasks by learning programs to perform them using techniques drawn from machine learning. SMARTedit represents a text-editing program as a series of functions that alter the state of the text editor (that is, the contents of the file or the cursor position). Like macro recording systems, SMARTedit learns the program by observing a user performing her or his task. However, unlike macro recorders, SMARTedit examines the context in which the user's actions are performed and learns programs that work correctly in new contexts. Using a machine-learning concept called version space algebra, SMARTedit is able to learn useful text-editing procedures after only a small number of demonstrations.
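The "program as a series of state-altering functions" view can be sketched directly (the step constructors below are invented for illustration, not SMARTedit's actual action vocabulary): model the editor state as (text, cursor) and a learned program as a list of functions that each map one state to the next, so replaying the list applies the procedure to new text.

```python
# Each step maps an editor state (text, cursor) to the next state.
def move_after(token):
    def step(state):
        text, cur = state
        # Advance the cursor to just past the next occurrence of token.
        return (text, text.index(token, cur) + len(token))
    return step

def insert(s):
    def step(state):
        text, cur = state
        return (text[:cur] + s + text[cur:], cur + len(s))
    return step

# A learned "program": move past the colon, then insert a marker.
program = [move_after(":"), insert(" TODO")]

state = ("priority: high", 0)
for step in program:
    state = step(state)
print(state[0])  # "priority: TODO high"
```

Because each step is keyed to context (here, the colon) rather than to absolute positions, the same program generalizes to lines of different lengths, which is the key difference from a literal macro recording.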
Technical Symposium on Computer Science Education | 2014
Kuba Karpierz; Steven A. Wolfman
In this paper, we triangulate evidence for five misconceptions concerning binary search trees and hash tables. In addition, we design and validate multiple-choice concept inventory questions to measure the prevalence of four of these misconceptions. We support our conclusions with quantitative analysis of grade data and closed-ended problems, and qualitative analysis of interview data and open-ended problems. Instructors and researchers can inexpensively measure the impact of pedagogical changes on these misconceptions by using these questions in a larger concept inventory.
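The abstract does not enumerate the five misconceptions, so as a generic illustration only: one frequently reported student confusion is believing a BST's shape is independent of insertion order. A minimal sketch shows the same key set producing different heights depending on the order of insertion.

```python
# Minimal BST to demonstrate that insertion order determines tree shape.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    if root is None:
        return 0
    return 1 + max(height(root.left), height(root.right))

def build(keys):
    root = None
    for k in keys:
        root = insert(root, k)
    return root

print(height(build([2, 1, 3])))  # 2: balanced tree
print(height(build([1, 2, 3])))  # 3: degenerates into a linked list
```

Concept-inventory questions of the kind the paper validates can probe exactly this distinction, e.g., asking which insertion orders yield a given tree.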