
Publication


Featured research published by Austin Z. Henley.


Human Factors in Computing Systems | 2014

The Patchworks code editor: toward faster navigation with less code arranging and fewer navigation mistakes

Austin Z. Henley; Scott D. Fleming

Increasingly, people are faced with navigating large information spaces, and making such navigation efficient is of paramount concern. In this paper, we focus on the problems programmers face in navigating large code bases, and propose a novel code editor, Patchworks, that addresses the problems. In particular, Patchworks leverages two new interface idioms - the patch grid and the ribbon - to help programmers navigate more quickly, make fewer navigation errors, and spend less time arranging their code. To validate Patchworks, we conducted a user study that compared Patchworks to two existing code editors: the traditional file-based editor, Eclipse, and the newer canvas-based editor, Code Bubbles. Our results showed (1) that programmers using Patchworks were able to navigate significantly faster than with Eclipse (and comparably with Code Bubbles), (2) that programmers using Patchworks made significantly fewer navigation errors than with Code Bubbles or Eclipse, and (3) that programmers using Patchworks spent significantly less time arranging their code than with Code Bubbles (and comparably with Eclipse).


Symposium on Visual Languages and Human-Centric Computing | 2014

Helping programmers navigate code faster with Patchworks: A simulation study

Austin Z. Henley; Alka Singh; Scott D. Fleming; Maria V. Luong

Programmers spend considerable time navigating source code, and we recently proposed the Patchworks code editor to help address this problem. A prior preliminary study of Patchworks found that it significantly reduced programmer navigation time and navigation errors. In this paper, we expand on these findings by investigating the effect of various patch-arranging strategies in Patchworks. To evaluate these strategies, we ran a simulation study based on actual programmer navigation data. Our simulator results showed (1) that none of the strategies tested had a significant effect on programmer navigation time, and (2) that navigating code using Patchworks, regardless of strategy, was significantly faster than using Eclipse.


Symposium on Visual Languages and Human-Centric Computing | 2016

Yestercode: Improving code-change support in visual dataflow programming environments

Austin Z. Henley; Scott D. Fleming

In this paper, we present the Yestercode tool for supporting code changes in visual dataflow programming environments. In a formative investigation of LabVIEW programmers, we found that making code changes posed a significant challenge. To address this issue, we designed Yestercode to enable the efficient recording, retrieval, and juxtaposition of visual dataflow code while making code changes. To evaluate Yestercode, we implemented our design as a prototype extension to the LabVIEW programming environment, and ran a user study involving 14 professional LabVIEW programmers that compared Yestercode-extended LabVIEW to the standard LabVIEW IDE. Our results showed that Yestercode users introduced fewer bugs during tasks, completed tasks in about the same time, and experienced lower cognitive loads on tasks. Moreover, participants generally reported that Yestercode was easy to use and that it helped in making change tasks easier.


International Conference on Software Maintenance | 2016

An Empirical Evaluation of Models of Programmer Navigation

Alka Singh; Austin Z. Henley; Scott D. Fleming; Maria V. Luong

In this paper, we report an evaluation study of predictive models of programmer navigation. In particular, we compared two operationalizations of navigation from the literature (click-based versus view-based) to see which more accurately records a developer's navigation behaviors. We also compared the predictive accuracy of seven models of programmer navigation from the literature, including ones based on navigation history and code-structural relationships. To address our research goals, we performed a controlled laboratory study of the navigation behavior of 10 participants engaged in software evolution tasks. The study was a partial replication of a previous comprehensive evaluation of predictive models by Piorkowski et al., and also served to test the generalizability of their results. Key findings of the study included that the click-based navigations agreed closely with those reported by human observers, whereas view-based navigations diverged significantly. Furthermore, our data showed that the predictive model based on recency was significantly more accurate than the other models, suggesting strong potential for tools that leverage recency-type models. Finally, our model-accuracy results had a strong correlation with the Piorkowski results; however, our results differed in several noteworthy ways, potentially caused by differences in task type and code familiarity.
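The recency-type model highlighted in this abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact formulation: the function name, the ranking details, and the candidate cutoff `k` are assumptions made for the example.

```python
def predict_next(history, k=3):
    """Recency-based navigation prediction (illustrative sketch):
    rank candidate navigation targets by how recently the programmer
    visited them, most recent first, excluding the current location."""
    current = history[-1]
    ranked = []
    for loc in reversed(history):
        if loc != current and loc not in ranked:
            ranked.append(loc)
        if len(ranked) == k:
            break
    return ranked

# A navigation trace over code locations; the current location is the last entry.
trace = ["Foo.java", "Bar.java", "Foo.java", "Baz.java"]
print(predict_next(trace))  # → ['Foo.java', 'Bar.java']
```

Under this sketch, the model needs only the navigation history itself, which may help explain why recency-based predictors are attractive for tooling: no code-structural analysis is required.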


Human Factors in Computing Systems | 2018

CFar: A Tool to Increase Communication, Productivity, and Review Quality in Collaborative Code Reviews

Austin Z. Henley; Kıvanç Muşlu; Maria Christakis; Scott D. Fleming; Christian Bird

Collaborative code review has become an integral part of the collaborative design process in the domain of software development. However, there are well-documented challenges and limitations to collaborative code review: for instance, high-quality code reviews may require significant time and effort from the programmers, whereas faster, lower-quality reviews may miss code defects. To address these challenges, we introduce CFar, a novel tool design that extends collaborative code review systems with an automated code reviewer whose feedback is based on program-analysis technologies. To validate this design, we implemented CFar as a production-quality tool and conducted a mixed-method empirical evaluation of its usage at Microsoft. Through a field deployment of our tool and a laboratory study of professional programmers using the tool, we produced several key findings showing that CFar enhances communication, productivity, and review quality in human-to-human collaborative code review.


Symposium on Visual Languages and Human-Centric Computing | 2017

Foraging goes mobile: Foraging while debugging on mobile devices

David Piorkowski; Sean Penney; Austin Z. Henley; Marco Pistoia; Margaret M. Burnett; Omer Tripp; Pietro Ferrara

Although Information Foraging Theory (IFT) research for desktop environments has provided important insights into numerous information foraging tasks, we have been unable to locate IFT research for mobile environments. Despite the limits of mobile platforms, mobile apps are increasingly serving functions that were once exclusively the territory of desktops, and as the complexity of mobile apps increases, so does the need for foraging. In this paper we investigate, through a theory-based, dual replication study, whether and how foraging results from a desktop IDE generalize to a functionally similar mobile IDE. Our results show ways prior foraging research results from desktop IDEs generalize to mobile IDEs and ways they do not, and point to challenging open research questions for foraging in mobile environments.


Human Factors in Computing Systems | 2017

Toward Principles for the Design of Navigation Affordances in Code Editors: An Empirical Investigation

Austin Z. Henley; Scott D. Fleming; Maria V. Luong

Design principles are a key tool for creators of interactive systems; however, a cohesive set of principles has yet to emerge for the design of code editors. In this paper, we conducted a between-subjects empirical study comparing the navigation behaviors of 32 professional LabVIEW programmers using two different code-editor interfaces: the ubiquitous tabbed editor and the experimental Patchworks editor. Our analysis focused on how the programmers arranged and navigated among open information patches (i.e., code modules and program output). Key findings of our study included that Patchworks users made significantly fewer click actions per navigation, juxtaposed patches side by side significantly more, and exhibited significantly fewer navigation mistakes than tabbed-editor users. Based on these findings and more, we propose five general principles for the design of effective navigation affordances in code editors.


International Conference on Software Maintenance | 2015

To fix or to learn? How production bias affects developers' information foraging during debugging

David Piorkowski; Scott D. Fleming; Christopher Scaffidi; Margaret M. Burnett; Irwin Kwan; Austin Z. Henley; Jamie Macbeth; Charles Hill; Amber Horvath


Foundations of Software Engineering | 2016

Foraging and navigations, fundamentally: developers' predictions of value and cost

David Piorkowski; Austin Z. Henley; Tahmid Nabi; Scott D. Fleming; Christopher Scaffidi; Margaret M. Burnett


Symposium on Visual Languages and Human-Centric Computing | 2018

CodeDeviant: Helping Programmers Detect Edits That Accidentally Alter Program Behavior

Austin Z. Henley; Scott D. Fleming

Collaboration


Dive into Austin Z. Henley's collaborations.

Top Co-Authors

Charles Hill

Oregon State University


Irwin Kwan

Oregon State University
