Michael Hilton
Oregon State University
Publications
Featured research published by Michael Hilton.
Automated Software Engineering | 2016
Michael Hilton; Timothy Tunnell; Kai Huang; Darko Marinov; Danny Dig
Continuous integration (CI) systems automate the compilation, building, and testing of software. Despite CI rising as a big success story in automated software engineering, it has received almost no attention from the research community. For example, how widely is CI used in practice, and what are some costs and benefits associated with CI? Without answering such questions, developers, tool builders, and researchers make decisions based on folklore instead of data. In this paper, we use three complementary methods to study the usage of CI in open-source projects. To understand which CI systems developers use, we analyzed 34,544 open-source projects from GitHub. To understand how developers use CI, we analyzed 1,529,291 builds from the most commonly used CI system. To understand why projects use or do not use CI, we surveyed 442 developers. With this data, we answered several key questions related to the usage, costs, and benefits of CI. Among our results, we show evidence that CI helps projects release more often, that CI is widely adopted by the most popular projects, and that the overall percentage of projects using CI continues to grow, making it important and timely to focus more research on CI.
Foundations of Software Engineering | 2016
Anh Tuan Nguyen; Michael Hilton; Mihai Codoban; Hoan Anh Nguyen; Lily Mast; Eli Rademacher; Tien N. Nguyen; Danny Dig
Learning and remembering how to use APIs is difficult. While code-completion tools can recommend API methods, browsing a long list of API method names and their documentation is tedious. Moreover, users can easily be overwhelmed with too much information. We present a novel API recommendation approach that taps into the predictive power of repetitive code changes to provide relevant API recommendations for developers. Our approach and tool, APIREC, is based on statistical learning from fine-grained code changes and from the context in which those changes were made. Our empirical evaluation shows that APIREC correctly recommends an API call in the first position 59% of the time, and it recommends the correct API call in the top five positions 77% of the time. This is a significant improvement over the state-of-the-art approaches by 30-160% for top-1 accuracy, and 10-30% for top-5 accuracy, respectively. Our result shows that APIREC performs well even with a one-time, minimal training dataset of 50 publicly available projects.
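The top-1 and top-5 figures above are standard top-k recommendation accuracy. A minimal sketch of how such a metric is computed (the recommendation lists and API names below are hypothetical, not APIREC's actual output):

```python
def top_k_accuracy(recommendations, expected, k):
    """Fraction of cases where the expected API call appears in the top-k list."""
    hits = sum(1 for recs, truth in zip(recommendations, expected) if truth in recs[:k])
    return hits / len(expected)

# Hypothetical ranked recommendation lists and ground-truth API calls.
recs = [
    ["list.add", "list.get", "list.size"],
    ["map.put", "map.get", "map.remove"],
    ["str.length", "str.charAt", "str.substring"],
]
truth = ["list.add", "map.remove", "str.indexOf"]

print(top_k_accuracy(recs, truth, 1))  # 1 of 3 correct at top-1
print(top_k_accuracy(recs, truth, 3))  # 2 of 3 correct within top-3
```

Evaluating at several cutoffs, as the paper does, shows how much a user gains by scanning a slightly longer list.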
Foundations of Software Engineering | 2017
Michael Hilton; Nicholas Nelson; Timothy Tunnell; Darko Marinov; Danny Dig
Continuous integration (CI) systems automate the compilation, building, and testing of software. Despite CI being a widely used activity in software engineering, we do not know what motivates developers to use CI, and what barriers and unmet needs they face. Without such knowledge, developers make easily avoidable errors, tool builders invest in the wrong direction, and researchers miss opportunities for improving the practice of CI. We present a qualitative study of the barriers and needs developers face when using CI. We conduct semi-structured interviews with developers from different industries and development scales. We triangulate our findings by running two surveys. We find that developers face trade-offs between speed and certainty (Assurance), between better access and information security (Security), and between more configuration options and greater ease of use (Flexibility). We present implications of these trade-offs for developers, tool builders, and researchers.
International Conference on Software Engineering | 2013
David S. Janzen; John Clements; Michael Hilton
WebIDE is a framework that enables instructors to develop and deliver online lab content with interactive feedback. The ability to create lock-step labs enables the instructor to guide students through learning experiences, demonstrating mastery as they proceed. Feedback is provided through automated evaluators that vary from simple regular expression evaluation to syntactic parsers to applications that compile and run programs and unit tests. This paper describes WebIDE and its use in a CS0 course that taught introductory Java and Android programming using a test-driven learning approach. We report results from a controlled experiment that compared the use of dynamic WebIDE labs with more traditional static programming labs. Despite weaker performance on pre-study assessments, students who used WebIDE performed two to twelve percent better on all assessments than the students who used traditional labs. In addition, WebIDE students were consistently more positive about their experience in CS0.
Integrating Technology into Computer Science Education | 2012
Michael Hilton; David S. Janzen
Test-driven development (TDD) has been shown to reduce defects and to lead to better code, but can it help beginning students learn basic programming topics, specifically arrays? We performed a controlled experiment where we taught arrays to two CS0 classes, one using WebIDE, an intelligent tutoring system that enforced the use of Test-Driven Learning (TDL) methods, and one using more traditional static methods and a development environment that instructed, but did not enforce, the use of TDD. Students who used the TDL approach with WebIDE performed significantly better in assessments and had significantly higher opinions of their experiences than students who used traditional methods and tools.
Proceedings of the 1st International Conference on Mobile Software Engineering and Systems | 2014
Michael Hilton; Arpit Christi; Danny Dig; Michal Moskal; Sebastian Burckhardt; Nikolai Tillmann
Mobile cloud computing can greatly enrich the capabilities of today’s pervasive mobile devices. Storing data on the cloud can enable features such as automatic backup, seamless transition between multiple devices, and multiuser support for existing apps. However, the process of converting local data types into cloud data types requires high expertise, and is difficult and time-consuming. Refactoring techniques can greatly simplify this process. In this paper we present a formative study where we analyzed and successfully converted four real-world TouchDevelop apps into cloud-enabled apps. Based on these lessons, we designed and implemented CLOUDIFYER, a tool that automatically refactors local data types into cloud data types on the TouchDevelop platform. Our empirical evaluation on a corpus of 123 mobile apps, resulting in 2722 transformations, shows (i) that the refactoring is widely applicable, (ii) that CLOUDIFYER saves human effort, and (iii) that CLOUDIFYER is accurate.
2013 7th International Workshop on Traceability in Emerging Forms of Software Engineering (TEFSE) | 2013
Alex Dekhtyar; Michael Hilton
It has been generally accepted that not all trace links in a given requirements traceability matrix are equal - both human analysts and automated methods are good at spotting some links, but have blind spots for others. One way to choose automated techniques for inclusion in assisted tracing processes (i.e., the tracing processes that combine the expertise of a human analyst and special-purpose tracing software) is to select the techniques that tend to discover more links that are hard for human analysts to observe and establish on their own. This paper proposes a new measure of performance of a tracing method: human recoverability index-based recall. In the presence of knowledge about the difficulty of link recovery by human analysts, this measure rewards methods that are able to recover such links over methods that tend to recover the same links as the human analysts. We describe a TraceLab experiment we designed to evaluate automated trace recovery methods based on this measure and provide a case study of the use of this experiment to profile and evaluate different automated tracing techniques.
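The idea of rewarding hard-to-find links can be sketched as a difficulty-weighted recall. The weighting below is illustrative, not the paper's exact definition of the human recoverability index; the link IDs and weights are hypothetical:

```python
def weighted_recall(recovered, true_links, difficulty):
    """Recall where each true link is weighted by how hard humans find it.

    difficulty maps each true link to a weight in (0, 1]; links that human
    analysts rarely recover get higher weights, so an automated method earns
    more credit for finding them. (Illustrative weighting scheme only.)
    """
    total = sum(difficulty[link] for link in true_links)
    found = sum(difficulty[link] for link in true_links if link in recovered)
    return found / total

# Hypothetical requirement-to-code links with human-difficulty weights.
true_links = {("R1", "C3"), ("R2", "C7"), ("R4", "C1")}
difficulty = {("R1", "C3"): 0.1, ("R2", "C7"): 0.9, ("R4", "C1"): 0.5}

# A method that recovers only the hardest link scores 0.9 / 1.5 = 0.6,
# far above its plain recall of 1/3.
print(weighted_recall({("R2", "C7")}, true_links, difficulty))
```

Under such a measure, two methods with identical plain recall can rank very differently depending on whether they duplicate or complement the human analyst.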
International Conference on Agile Software Development | 2016
Michael Hilton; Nicholas Nelson; Hugh McDonald; Sean McDonald; Ronald A. Metoyer; Danny Dig
A bad software development process leads to wasted effort and inferior products. In order to improve a software process, it must first be understood. Our unique approach in this paper uses code and test changes to understand conformance to the Test Driven Development (TDD) process.
Human Factors in Computing Systems | 2017
Jonathan Dodge; Michael Hilton; Ronald A. Metoyer; Josie Hunter; Karl Smeltzer; Catharina Vijay; Andrew Atkinson
Eco-feedback technology is generally concerned with the communication of information to affect individual or group behavior with respect to environmental impact. Electricity consumption feedback, in particular, has been studied from various viewpoints to understand its effects on consumption behavior and to explore the design space. Recent efforts have resulted in a wide array of device designs ranging from individual appliance feedback at the outlet to centralized devices for home consumption awareness. However, adoption rates for these technologies remain relatively poor, perhaps due to a lack of emphasis on specific user needs. In this paper, we contribute a participatory design study to examine differences and similarities among three targeted household demographics: older adults, families with children, and students in shared housing. In addition, we present our process for extracting personas from participatory design study data, alongside the set of resulting persona skeletons and one finished persona.
Foundations of Software Engineering | 2016
Michael Hilton
Continuous Integration (CI) has been widely adopted in the software development industry. However, the usage of CI in practice has been ignored for far too long by the research community. We propose to fill this blind spot by doing in-depth research into CI usage in practice. We will answer "how" questions using quantitative methods, such as investigating open-source data that is publicly available. We will answer "why" questions using qualitative methods, such as semi-structured interviews and large-scale surveys. In the course of our research, we plan on identifying barriers that developers face when using CI. We will develop techniques to overcome those barriers via automation. This work is advised by Professor Danny Dig.