Publication


Featured research published by Andrew J. Ko.


ACM Computing Surveys | 2011

The state of the art in end-user software engineering

Andrew J. Ko; Robin Abraham; Laura Beckwith; Alan F. Blackwell; Margaret M. Burnett; Martin Erwig; Christopher Scaffidi; Joseph Lawrance; Henry Lieberman; Brad A. Myers; Mary Beth Rosson; Gregg Rothermel; Mary Shaw; Susan Wiedenbeck

Most programs today are written not by professional software developers, but by people with expertise in other domains working towards goals for which they need computational support. For example, a teacher might write a grading spreadsheet to save time grading, or an interaction designer might use an interface builder to test some user interface design ideas. Although these end-user programmers may not have the same goals as professional developers, they do face many of the same software engineering challenges, including understanding their requirements, as well as making decisions about design, reuse, integration, testing, and debugging. This article summarizes and classifies research on these activities, defining the area of End-User Software Engineering (EUSE) and related terminology. The article then discusses empirical research about end-user software engineering activities and the technologies designed to support them. The article also addresses several crosscutting issues in the design of EUSE tools, including the roles of risk, reward, domain complexity, and self-efficacy, as well as the potential of educating users about software engineering principles.
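
As a concrete illustration of the kind of program the article has in mind, here is a hypothetical sketch (not drawn from the paper) of a teacher's grade calculation: a small end-user program that still raises the familiar questions of requirements, testing, and debugging.

    # Hypothetical end-user program: a teacher's grade calculator (assumed
    # weighting policy). Even a script this small involves requirements
    # (the weighting), testing (edge cases like missing scores), and debugging.
    def final_grade(homework, midterm, final, weights=(0.4, 0.25, 0.35)):
        """Weighted average of three score categories, each on a 0-100 scale."""
        parts = (sum(homework) / len(homework), midterm, final)
        return sum(w * p for w, p in zip(weights, parts))

    # A quick sanity check the teacher might run before trusting the formula:
    assert abs(final_grade([80, 90, 100], 70, 85) - 83.25) < 1e-9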


Symposium on Visual Languages and Human-Centric Computing | 2004

Six Learning Barriers in End-User Programming Systems

Andrew J. Ko; Brad A. Myers; Htet Htet Aung

As programming skills increase in demand and utility, the learnability of end-user programming systems is of utmost importance. However, research on learning barriers in programming systems has primarily focused on languages, overlooking potential barriers in the environment and accompanying libraries. To address this, a study of beginning programmers learning Visual Basic.NET was performed. The study identified six types of barriers: design, selection, coordination, use, understanding, and information. These barriers inspire a new metaphor of computation, which provides a more learner-centric view of programming system design.


Communications of the ACM | 2004

Natural programming languages and environments

Brad A. Myers; John F. Pane; Andrew J. Ko

Over the last six years, we have been working to create programming languages and environments that are more natural, or closer to the way people think about their tasks. Our goal is to make it possible for people to express their ideas in the same way they think about them. To achieve this, we have performed various studies about how people think about programming tasks, both when trying to create a new program and when trying to find and fix bugs in existing programs. We then use this knowledge to develop new tools for programming and debugging. Our user studies have shown that the resulting systems provide significant benefits to users.


Symposium on Visual Languages and Human-Centric Computing | 2006

A Linguistic Analysis of How People Describe Software Problems

Andrew J. Ko; Brad A. Myers; Duen Horng Chau

There is little understanding of how people describe software problems, but a variety of tools solicit, manage, and analyze these descriptions in order to streamline software development. To inform the design of these tools and generate ideas for new ones, a study of nearly 200,000 bug report titles was performed. The titles of the reports generally described a software entity or behavior, its inadequacy, and an execution context, suggesting new designs for more structured report forms. About 95% of noun phrases referred to visible software entities, physical devices, or user actions, suggesting the feasibility of allowing users to select these entities in debuggers and other tools. Also, the structure of the titles exhibited sufficient regularity to parse with an accuracy of 89%, enabling a number of new automated analyses. These findings and others have many implications for tool design and software engineering.
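
To make the reported title structure concrete, here is a toy Python sketch (not the linguistic parser used in the study) that splits a bug report title into the three components the authors describe: an entity or behavior, its inadequacy, and an execution context.

    import re

    # Toy heuristic for illustration only; the study's titles were parsed with
    # a linguistic analysis (reported accuracy of about 89%).
    INADEQUACY = r"(crashes|fails|hangs|is missing|does not \w+|doesn't \w+|ignores)"

    def split_title(title):
        """Split a bug title into (entity/behavior, inadequacy, execution context)."""
        main, context = title, None
        m = re.search(r"\b(when|while|after|if)\b.*$", title, re.IGNORECASE)
        if m:
            main, context = title[:m.start()].strip(" ,"), m.group(0).strip()
        m = re.search(INADEQUACY, main, re.IGNORECASE)
        if m:
            return main[:m.start()].strip(), m.group(0), context
        return main, None, context

    print(split_title("Save dialog crashes when the filename contains spaces"))
    # ('Save dialog', 'crashes', 'when the filename contains spaces')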


Human Factors in Computing Systems | 2005

Examining task engagement in sensor-based statistical models of human interruptibility

James Fogarty; Andrew J. Ko; Htet Htet Aung; Elspeth Golden; Karen P. Tang; Scott E. Hudson

The computer and communication systems that office workers currently use tend to interrupt at inappropriate times or unduly demand attention because they have no way to determine when an interruption is appropriate. Sensor-based statistical models of human interruptibility offer a potential solution to this problem. Prior work to examine such models has primarily reported results related to social engagement, but it seems that task engagement is also important. Using an approach developed in our prior work on sensor-based statistical models of human interruptibility, we examine task engagement by studying programmers working on a realistic programming task. After examining many potential sensors, we implement a system to log low-level input events in a development environment. We then automatically extract features from these low-level event logs and build a statistical model of interruptibility. By correctly identifying situations in which programmers are non-interruptible and minimizing cases where the model incorrectly estimates that a programmer is non-interruptible, we can support a reduction in costly interruptions while still allowing systems to convey notifications in a timely manner.
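
A minimal sketch of the pipeline described above, using hypothetical features and a stock scikit-learn decision tree in place of the study's actual sensors and statistical model:

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical features over a window of low-level editor events; the study
    # derives its own feature set from instrumented development-environment logs.
    def features(events, window_s=60.0):
        keystrokes = sum(1 for e in events if e["type"] == "keystroke")
        edits = sum(1 for e in events if e["type"] == "edit")
        longest_gap = max((e["gap_s"] for e in events), default=window_s)
        return [keystrokes / window_s, edits / window_s, longest_gap]

    # Tiny synthetic training set: (event window, 1 = non-interruptible).
    busy = [{"type": "keystroke", "gap_s": 0.3}] * 50 + [{"type": "edit", "gap_s": 0.5}] * 10
    idle = [{"type": "keystroke", "gap_s": 20.0}] * 2
    windows = [(busy, 1), (idle, 0), (busy[:30], 1), (idle * 2, 0)]

    model = DecisionTreeClassifier(max_depth=2)
    model.fit([features(w) for w, _ in windows], [label for _, label in windows])

    # A notification system would defer delivery when the model predicts 1.
    print(model.predict([features(busy)]))  # expected: [1]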


Symposium on Visual Languages and Human-Centric Computing | 2008

How designers design and program interactive behaviors

Brad A. Myers; Sun Young Park; Yoko Nakano; Greg Mueller; Andrew J. Ko

Designers are skilled at sketching and prototyping the look of interfaces, but exploring various behaviors (what the interface does in response to input) typically requires programming using JavaScript, ActionScript for Flash, or other languages. In our survey of 259 designers, 86% reported that the behavior is more difficult to prototype than the appearance. Often (78% of the time), designing the behavior requires collaborating with developers, but 76% of designers reported that communicating the behavior to developers was more difficult than communicating the appearance. Other results include that annotations such as arrows and paragraphs of text are used on top of sketches and storyboards to explain behaviors, and that designers want to explore multiple versions of behaviors, but today's tools make this difficult. The results provide new ideas for future tools.


Human Factors in Computing Systems | 2009

Finding causes of program output with the Java Whyline

Andrew J. Ko; Brad A. Myers

Debugging and diagnostic tools are some of the most important software development tools, but most expect developers to choose the right code to inspect. Unfortunately, this rarely occurs. A new tool called the Whyline is described which avoids such speculation by allowing developers to select questions about a program's output. The tool then helps developers work backwards from the output to its causes. The prototype, which supports Java programs, was evaluated in an experiment in which participants investigated two real bug reports from an open source project using either the Whyline or a breakpoint debugger. Whyline users were successful about three times as often and about twice as fast compared to the control group, and were extremely positive about the tool's ability to simplify diagnostic tasks in software development work.
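
The underlying idea of working backwards from an output to its causes can be illustrated with a toy dynamic trace; this sketch shows simple backward dependency tracing only and is not the Whyline's implementation, which operates on instrumented Java programs and answers questions phrased about output.

    # Toy dynamic trace: (step, statement, variable written, variables read).
    trace = [
        (1, "color = default_color()", "color", []),
        (2, "title = read_pref('title')", "title", []),
        (3, "size = read_pref('size')", "size", []),
        (4, "shape = make_shape(size)", "shape", ["size"]),
        (5, "paint(shape, color)", "output", ["shape", "color"]),
    ]

    def why(var, trace):
        """Answer 'why did `var` get its value?' by walking the trace backwards."""
        causes, wanted = [], {var}
        for step, stmt, written, read in reversed(trace):
            if written in wanted:
                causes.append((step, stmt))
                wanted |= set(read)
        return sorted(causes)

    for step, stmt in why("output", trace):
        print(step, stmt)
    # Steps 1, 3, 4, and 5 are printed; step 2 (title) is excluded because it
    # never affected the output.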


User Interface Software and Technology | 2013

Interactive record/replay for web application debugging

Brian Burg; Richard Bailey; Andrew J. Ko; Michael D. Ernst

During debugging, a developer must repeatedly and manually reproduce faulty behavior in order to inspect different facets of the program's execution. Existing tools for reproducing such behaviors prevent the use of debugging aids such as breakpoints and logging, and are not designed for interactive, random-access exploration of recorded behavior. This paper presents Timelapse, a tool for quickly recording, reproducing, and debugging interactive behaviors in web applications. Developers can use Timelapse to browse, visualize, and seek within recorded program executions while simultaneously using familiar debugging tools such as breakpoints and logging. Testers and end-users can use Timelapse to demonstrate failures in situ and share recorded behaviors with developers, improving bug report quality by obviating the need for detailed reproduction steps. Timelapse is built on Dolos, a novel record/replay infrastructure that ensures deterministic execution by capturing and reusing program inputs both from the user and from external sources such as the network. Dolos introduces negligible overhead and does not interfere with breakpoints and logging. In a small user evaluation, participants used Timelapse to accelerate existing reproduction activities, but were not significantly faster or more successful in completing the larger tasks at hand. Together, the Dolos infrastructure and Timelapse developer tool support systematic bug reporting and debugging practices.
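
The core record/replay idea, capturing every nondeterministic input so that a later run consumes the same values in the same order, can be sketched as follows; this is a hypothetical illustration, not the Dolos implementation, which instruments a real browser engine.

    import random

    class InputTape:
        """Record nondeterministic inputs on a first run; replay them later."""
        def __init__(self, recording=None):
            self.replaying = recording is not None
            self.tape = list(recording) if recording else []
            self.pos = 0

        def capture(self, produce):
            if self.replaying:            # reuse the recorded value
                value = self.tape[self.pos]
                self.pos += 1
            else:                         # consult the real input source
                value = produce()
                self.tape.append(value)
            return value

    def app(inputs):
        # Stand-ins for real sources of nondeterminism (user events, network).
        click = inputs.capture(lambda: random.choice(["ok", "cancel"]))
        payload = inputs.capture(lambda: random.randint(0, 9))
        return f"user clicked {click}, server sent {payload}"

    recording = InputTape()
    first_run = app(recording)
    replay_run = app(InputTape(recording=recording.tape))
    assert first_run == replay_run   # deterministic re-execution from the tape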


Human Factors in Computing Systems | 2010

Understanding usability practices in complex domains

Parmit K. Chilana; Jacob O. Wobbrock; Andrew J. Ko

Although usability methods are widely used for evaluating conventional graphical user interfaces and websites, there is a growing concern that current approaches are inadequate for evaluating complex, domain-specific tools. We interviewed 21 experienced usability professionals, including in-house experts, external consultants, and managers working in a variety of complex domains, and uncovered the challenges commonly posed by domain complexity and how practitioners work around them. We found that despite the best efforts by usability professionals to get familiar with complex domains on their own, the lack of formal domain expertise can be a significant hurdle for carrying out effective usability evaluations. Partnerships with domain experts lead to effective results as long as domain experts are willing to be an integral part of the usability team. These findings suggest that for achieving usability in complex domains, some fundamental educational changes may be needed in the training of usability professionals.


Intelligent User Interfaces | 2009

Fixing the program my computer learned: barriers for end users, challenges for the machine

Todd Kulesza; Weng-Keen Wong; Simone Stumpf; Stephen Perona; Rachel White; Margaret M. Burnett; Ian Oberst; Andrew J. Ko

The result of machine learning from user behavior can be thought of as a program, and like all programs, it may need to be debugged. Providing ways for the user to debug it matters, because without the ability to fix errors, users may find that the learned program's errors are too damaging for them to be able to trust such programs. We present a new approach to enable end users to debug a learned program. We then use an early prototype of our new approach to conduct a formative study to determine where and when debugging issues arise, both in general and also separately for males and females. The results suggest opportunities to make machine-learned programs more effective tools.

Collaboration


Dive into Andrew J. Ko's collaborations.

Top Co-Authors

Brad A. Myers | Carnegie Mellon University
Michael J. Lee | University of Washington
Htet Htet Aung | Carnegie Mellon University
Irwin Kwan | Oregon State University
James Fogarty | University of Washington
Mary Shaw | Carnegie Mellon University