
Publications


Featured research published by Lori L. Pollock.


Automated Software Engineering | 2010

Towards automatically generating summary comments for Java methods

Giriprasad Sridhara; Emily Hill; Divya Muppaneni; Lori L. Pollock; K. Vijay-Shanker

Studies have shown that good comments can help programmers quickly understand what a method does, aiding program comprehension and software maintenance. Unfortunately, few software projects adequately comment the code. One way to overcome the lack of human-written summary comments, and guard against obsolete comments, is to automatically generate them. In this paper, we present a novel technique to automatically generate descriptive summary comments for Java methods. Given the signature and body of a method, our automatic comment generator identifies the content for the summary and generates natural language text that summarizes the method's overall actions. According to programmers who judged our generated comments, the summaries are accurate, do not miss important content, and are reasonably concise.
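
The abstract does not include the generator itself; as a rough illustration of the general idea, the sketch below (hypothetical code, not the paper's system) derives a one-line summary phrase from a Java method name by splitting its camel-case words and treating the first word as the verb:

```java
import java.util.*;

// Illustrative sketch only: turn a method name like "removeSelectedItem" into a
// phrase such as "Removes selected item." This is a toy stand-in for the paper's
// content-selection and text-generation pipeline.
public class MethodNameSummarizer {

    // Split a camel-case identifier into its words.
    static List<String> splitCamelCase(String identifier) {
        return Arrays.asList(identifier.split("(?<=[a-z0-9])(?=[A-Z])"));
    }

    // Build a simple verb-phrase summary: the first word is treated as the verb.
    static String summarize(String methodName) {
        List<String> words = splitCamelCase(methodName);
        String verb = words.get(0).toLowerCase();
        StringBuilder sb = new StringBuilder();
        sb.append(Character.toUpperCase(verb.charAt(0))).append(verb.substring(1)).append("s");
        for (int i = 1; i < words.size(); i++) {
            sb.append(' ').append(words.get(i).toLowerCase());
        }
        return sb.append('.').toString();
    }

    public static void main(String[] args) {
        System.out.println(summarize("removeSelectedItem")); // Removes selected item.
    }
}
```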


Aspect-Oriented Software Development | 2007

Using natural language program analysis to locate and understand action-oriented concerns

David C. Shepherd; Zachary P. Fry; Emily Hill; Lori L. Pollock; K. Vijay-Shanker

Most current software systems contain undocumented high-level ideas implemented across multiple files and modules. When developers perform program maintenance tasks, they often waste time and effort locating and understanding these scattered concerns. We have developed a semi-automated concern location and comprehension tool, Find-Concept, designed to reduce the time developers spend on maintenance tasks and to increase their confidence in the results of these tasks. Find-Concept is effective because it searches a unique natural language-based representation of source code, uses novel techniques to expand initial queries into more effective queries, and displays search results in an easy-to-comprehend format. We describe the Find-Concept tool, the underlying program analysis, and an experimental study comparing Find-Concept's search effectiveness with two state-of-the-art lexical and information retrieval-based search tools. Across nine action-oriented concern location tasks derived from open source bug reports, our Eclipse-based tool produced more effective queries more consistently than either competing search tool with similar user effort.


International Conference on Software Engineering | 2009

Automatically capturing source code context of NL-queries for software maintenance and reuse

Emily Hill; Lori L. Pollock; K. Vijay-Shanker

As software systems continue to grow and evolve, locating code for maintenance and reuse tasks becomes increasingly difficult. Existing static code search techniques using natural language queries provide little support to help developers determine whether search results are relevant, and few recommend alternative words to help developers reformulate poor queries. In this paper, we present a novel approach that automatically extracts natural language phrases from source code identifiers and categorizes the phrases and search results in a hierarchy. Our contextual search approach allows developers to explore the word usage in a piece of software, helping them to quickly identify relevant program elements for investigation or to quickly recognize alternative words for query reformulation. An empirical evaluation with 22 developers reveals that our contextual search approach significantly outperforms the most closely related technique in terms of effort and effectiveness.
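
As a loose illustration of organizing identifier words for exploration (not the paper's hierarchy construction), the sketch below groups hypothetical method names by their leading verb so a developer can see where a word is used across a code base:

```java
import java.util.*;

// Illustrative sketch: group method names by the verb in their camel-case split,
// giving a crude browsable hierarchy of word usage. The method names are invented.
public class PhraseGrouper {

    static String[] split(String identifier) {
        return identifier.split("(?<=[a-z0-9])(?=[A-Z])");
    }

    public static void main(String[] args) {
        List<String> methods = Arrays.asList(
                "addListener", "addEntry", "removeListener", "removeAll", "saveEntry");

        // verb -> method names that start with that verb
        Map<String, List<String>> byVerb = new TreeMap<>();
        for (String m : methods) {
            String verb = split(m)[0].toLowerCase();
            byVerb.computeIfAbsent(verb, k -> new ArrayList<>()).add(m);
        }

        byVerb.forEach((verb, names) -> System.out.println(verb + " -> " + names));
        // add -> [addListener, addEntry]
        // remove -> [removeListener, removeAll]
        // save -> [saveEntry]
    }
}
```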


Mining Software Repositories | 2009

Mining source code to automatically split identifiers for software analysis

Eric Enslen; Emily Hill; Lori L. Pollock; K. Vijay-Shanker

Automated software engineering tools (e.g., program search, concern location, code reuse, quality assessment, etc.) increasingly rely on natural language information from comments and identifiers in code. The first step in analyzing words from identifiers requires splitting identifiers into their constituent words. Unlike natural languages, where space and punctuation are used to delineate words, identifiers cannot contain spaces. One common way to split identifiers is to follow programming language naming conventions. For example, Java programmers often use camel case, where words are delineated by uppercase letters or non-alphabetic characters. However, programmers also create identifiers by concatenating sequences of words together with no discernible delineation, which poses challenges to automatic identifier splitting. In this paper, we present an algorithm to automatically split identifiers into sequences of words by mining word frequencies in source code. With these word frequencies, our identifier splitter uses a scoring technique to automatically select the most appropriate partitioning for an identifier. In an evaluation of over 8000 identifiers from open source Java programs, our Samurai approach outperforms existing state-of-the-art techniques.
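
A minimal sketch of the frequency-scoring idea, assuming a hard-coded word-frequency table in place of frequencies actually mined from source code; it is not the Samurai implementation:

```java
import java.util.*;

// Illustrative sketch: choose between candidate splits of a same-case identifier
// by favoring splits whose parts are frequent words in a (here hard-coded)
// frequency table that would normally be mined from a corpus of source code.
public class IdentifierSplitter {

    static final Map<String, Integer> FREQ = Map.of(
            "print", 120, "token", 80, "printto", 1, "ken", 2, "to", 300);

    // Score a split as the average log-frequency of its parts,
    // so splitting into many rare fragments does not win by accident.
    static double score(List<String> parts) {
        double s = 0;
        for (String p : parts) s += Math.log(1 + FREQ.getOrDefault(p, 0));
        return s / parts.size();
    }

    public static void main(String[] args) {
        List<List<String>> candidates = List.of(
                List.of("printtoken"),
                List.of("print", "token"),
                List.of("printto", "ken"));

        List<String> best = candidates.get(0);
        for (List<String> c : candidates)
            if (score(c) > score(best)) best = c;

        System.out.println(best);   // [print, token]
    }
}
```

The real algorithm mines program-specific and global frequency tables and handles mixed-case identifiers; the sketch only shows how a frequency-based score can rank alternative partitionings.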


Automated Software Engineering | 2007

Exploring the neighborhood with Dora to expedite software maintenance

Emily Hill; Lori L. Pollock; K. Vijay-Shanker

Completing software maintenance and evolution tasks for today's large, complex software systems can be difficult, often requiring considerable time to understand the system well enough to make correct changes. Despite evidence that successful programmers use program structure as well as identifier names to explore software, most existing program exploration techniques use either structural or lexical identifier information. By using only one type of information, automated tools ignore valuable clues about a developer's intentions - clues critical to the human program comprehension process. In this paper, we present and evaluate a technique that exploits both program structure and lexical information to help programmers more effectively explore programs. Our approach uses structural information to focus automated program exploration and lexical information to prune irrelevant structure edges from consideration. For the important program exploration step of expanding from a seed, our experimental results demonstrate that an integrated lexical- and structural-based approach is significantly more effective than a state-of-the-art structural program exploration technique.
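
A hedged sketch of the combined idea (hypothetical method names, not the Dora tool itself): score a seed method's call-graph neighbors by how many query words appear in their identifiers, and prune the structurally reachable but lexically irrelevant ones:

```java
import java.util.*;

// Illustrative sketch: lexical relevance of call-graph neighbors to a query.
public class NeighborhoodPruner {

    static Set<String> words(String identifier) {
        Set<String> out = new HashSet<>();
        for (String w : identifier.split("(?<=[a-z0-9])(?=[A-Z])")) out.add(w.toLowerCase());
        return out;
    }

    // Fraction of query words that appear in the identifier.
    static double relevance(String identifier, Set<String> query) {
        Set<String> idWords = words(identifier);
        long hits = query.stream().filter(idWords::contains).count();
        return (double) hits / query.size();
    }

    public static void main(String[] args) {
        Set<String> query = Set.of("save", "bookmark");
        // Callees of a hypothetical seed method, i.e., its structural neighborhood.
        List<String> callees = List.of("saveBookmark", "writeFile", "updateBookmarkList", "log");

        for (String callee : callees) {
            double r = relevance(callee, query);
            if (r > 0.0) System.out.println(callee + "  relevance=" + r);
        }
        // saveBookmark  relevance=1.0
        // updateBookmarkList  relevance=0.5
    }
}
```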


International Conference on Program Comprehension | 2013

Automatic generation of natural language summaries for Java classes

Laura Moreno; Jairo Aponte; Giriprasad Sridhara; Andrian Marcus; Lori L. Pollock; K. Vijay-Shanker

Most software engineering tasks require developers to understand parts of the source code. When faced with unfamiliar code, developers often rely on (internal or external) documentation to gain an overall understanding of the code and determine whether it is relevant for the current task. Unfortunately, the documentation is often absent or outdated. This paper presents a technique to automatically generate human readable summaries for Java classes, assuming no documentation exists. The summaries allow developers to understand the main goal and structure of the class. The focus of the summaries is on the content and responsibilities of the classes, rather than their relationships with other classes. The summarization tool determines the class and method stereotypes and uses them, in conjunction with heuristics, to select the information to be included in the summaries. Then it generates the summaries using existing lexicalization tools. A group of programmers judged a set of generated summaries for Java classes and determined that they are readable and understandable, they do not include extraneous information, and, in most cases, they are not missing essential information.
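
As a toy illustration of stereotype-driven content selection (the paper's stereotype taxonomy and lexicalization tools are far richer), the sketch below labels a class by the proportion of accessor-like methods, which could steer what a generated summary emphasizes:

```java
import java.util.List;

// Illustrative sketch: a crude class-stereotype check based only on method names.
public class StereotypeSketch {

    static boolean isAccessor(String methodName) {
        return methodName.startsWith("get") || methodName.startsWith("set")
                || methodName.startsWith("is");
    }

    // If most methods look like accessors, treat the class as a data holder;
    // a summary generator could then focus on the data it exposes.
    static String classStereotype(List<String> methodNames) {
        long accessors = methodNames.stream().filter(StereotypeSketch::isAccessor).count();
        return accessors >= methodNames.size() * 0.8 ? "Data class" : "Behavioral class";
    }

    public static void main(String[] args) {
        System.out.println(classStereotype(List.of("getName", "setName", "getAge", "setAge")));
        // Data class
    }
}
```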


Automated Software Engineering | 2005

Automated replay and failure detection for web applications

Sara Sprenkle; Emily Gibson; Sreedevi Sampath; Lori L. Pollock

User-session-based testing of web applications gathers user sessions to create and continually update test suites based on real user input in the field. To support this approach during maintenance and beta testing phases, we have built an automated framework for testing web-based software that focuses on scalability and evolving the test suite automatically as the application's operational profile changes. This paper reports on the automation of the replay and oracle components for web applications, which pose issues beyond those in the equivalent testing steps for traditional, stand-alone applications. Concurrency, nondeterminism, dependence on persistent state and previous user sessions, a complex application infrastructure, and a large number of output formats necessitate developing different replay and oracle comparator operators, which have tradeoffs in fault detection effectiveness, precision of analysis, and efficiency. We have designed, implemented, and evaluated a set of automated replay techniques and oracle comparators for user-session-based testing of web applications. This paper describes the issues, algorithms, heuristics, and an experimental case study with user sessions for two web applications. From our results, we conclude that testers performing user-session-based testing should consider their expectations for program coverage and fault detection when choosing a replay and oracle technique.
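
The trade-off between comparator precision and robustness can be illustrated with a small sketch (hypothetical comparators, not the paper's framework): a strict oracle flags any textual difference in the response, while a lenient one strips markup first and tolerates cosmetic HTML changes between replays:

```java
import java.util.Objects;

// Illustrative sketch: two oracle comparators with different precision.
public class OracleComparators {

    // Strict: any byte-level difference in the response is a failure.
    static boolean exactMatch(String expected, String actual) {
        return Objects.equals(expected, actual);
    }

    // Lenient: compare only the visible text content, ignoring markup.
    static boolean contentMatch(String expected, String actual) {
        return stripTags(expected).equals(stripTags(actual));
    }

    static String stripTags(String html) {
        return html.replaceAll("<[^>]*>", "").replaceAll("\\s+", " ").trim();
    }

    public static void main(String[] args) {
        String recorded = "<html><body><b>Order placed</b></body></html>";
        String replayed = "<html><body><i>Order placed</i></body></html>";

        System.out.println("exact:   " + exactMatch(recorded, replayed));   // false
        System.out.println("content: " + contentMatch(recorded, replayed)); // true
    }
}
```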


International Symposium on Software Testing and Analysis | 1998

All-du-path coverage for parallel programs

Cheer-Sun D. Yang; Amie L. Souter; Lori L. Pollock

One significant challenge in bringing the power of parallel machines to application programmers is providing them with a suite of software tools similar to the tools that sequential programmers currently utilize. In particular, automatic or semi-automatic testing tools for parallel programs are lacking. This paper describes our work in automatic generation of all-du-paths for testing parallel programs. Our goal is to demonstrate that, with some extension, sequential test data adequacy criteria are still applicable to parallel program testing. The concepts and algorithms in this paper have been incorporated as the foundation of our DELaware PArallel Software Testing Aid, della pasta.
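
For readers unfamiliar with the underlying sequential criterion, the sketch below (a toy hand-coded example, not the paper's parallel extension) enumerates def-use pairs for one variable in a small control-flow graph by searching for definition-clear paths:

```java
import java.util.*;

// Illustrative sketch: def-use pairs for a single variable x in a toy CFG.
public class DuPairs {

    // Hypothetical program:
    // 1: x = 0   2: if (...)   3: x = 1   4: print(x)   5: print(x)
    static final Map<Integer, List<Integer>> CFG = Map.of(
            1, List.of(2),
            2, List.of(3, 4),
            3, List.of(5),
            4, List.of(5),
            5, List.of());

    static final Set<Integer> DEFS = Set.of(1, 3);
    static final Set<Integer> USES = Set.of(4, 5);

    // Is there a path from 'from' to 'use' that passes no redefinition of x?
    static boolean defClear(int from, int use, Set<Integer> visited) {
        for (int next : CFG.get(from)) {
            if (next == use) return true;
            if (DEFS.contains(next) || !visited.add(next)) continue;
            if (defClear(next, use, visited)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        for (int def : DEFS)
            for (int use : USES)
                if (defClear(def, use, new HashSet<>()))
                    System.out.println("du-pair: def@" + def + " -> use@" + use);
        // Prints, in some order: (1,4), (1,5), (3,5).
    }
}
```

An all-du-path criterion would then require test data covering every definition-clear path between each such pair; the paper's contribution is extending this style of criterion to parallel executions.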


International Conference on Software Engineering | 2011

Automatically detecting and describing high level actions within methods

Giriprasad Sridhara; Lori L. Pollock; K. Vijay-Shanker

One approach to easing program comprehension is to reduce the amount of code that a developer has to read. Describing the high level abstract algorithmic actions associated with code fragments using succinct natural language phrases potentially enables a newcomer to focus on fewer and more abstract concepts when trying to understand a given method. Unfortunately, such descriptions are typically missing because it is tedious to create them manually. We present an automatic technique for identifying code fragments that implement high level abstractions of actions and expressing them as a natural language description. Our studies of 1000 Java programs indicate that our heuristics for identifying code fragments implementing high level actions are widely applicable. Judgements of our generated descriptions by 15 experienced Java programmers strongly suggest that indeed they view the fragments that we identify as representing high level actions and our synthesized descriptions accurately express the abstraction.


IEEE Transactions on Software Engineering | 2007

Applying Concept Analysis to User-Session-Based Testing of Web Applications

Sreedevi Sampath; Sara Sprenkle; Emily Gibson; Lori L. Pollock; Amie Souter Greenwald

The continuous use of the Web for daily operations by businesses, consumers, and the government has created a great demand for reliable Web applications. One promising approach to testing the functionality of Web applications leverages the user-session data collected by Web servers. User-session-based testing automatically generates test cases based on real user profiles. The key contribution of this paper is the application of concept analysis for clustering user sessions and a set of heuristics for test case selection. Existing incremental concept analysis algorithms are exploited to avoid collecting and maintaining large user-session data sets and to thus provide scalability. We have completely automated the process from user session collection and test suite reduction through test case replay. Our incremental test suite update algorithm, coupled with our experimental study, indicates that concept analysis provides a promising means for incrementally updating reduced test suites in response to newly captured user sessions with little loss in fault detection capability and program coverage.
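
A simplified sketch of the reduction idea, assuming each session is just the set of URLs it requested (the paper uses incremental concept analysis rather than this pairwise subset check):

```java
import java.util.*;

// Illustrative sketch: drop a session whose requested URLs are already covered
// by some other session, keeping a smaller suite with the same URL coverage.
public class SessionReducer {

    public static void main(String[] args) {
        Map<String, Set<String>> sessions = new LinkedHashMap<>();
        sessions.put("s1", Set.of("/login", "/browse"));
        sessions.put("s2", Set.of("/login", "/browse", "/checkout"));
        sessions.put("s3", Set.of("/login", "/help"));

        List<String> reduced = new ArrayList<>();
        for (var a : sessions.entrySet()) {
            boolean covered = false;
            for (var b : sessions.entrySet()) {
                if (!a.getKey().equals(b.getKey())
                        && b.getValue().containsAll(a.getValue())) {
                    covered = true;   // another session exercises a superset of URLs
                    break;
                }
            }
            if (!covered) reduced.add(a.getKey());
        }
        System.out.println(reduced);   // [s2, s3] -- s1 is subsumed by s2
    }
}
```

Concept analysis generalizes this subsumption check to a full lattice over sessions and URLs, which is what allows the reduced suite to be updated incrementally as new sessions arrive.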

Collaboration


Dive into Lori L. Pollock's collaborations.

Top Co-Authors

Emily Hill
Montclair State University

Sara Sprenkle
Washington and Lee University

Kostadin Damevski
Virginia Commonwealth University

Ben Breech
University of Delaware