Publication


Featured research published by Ian J. Davis.


Working Conference on Reverse Engineering | 2010

From Whence It Came: Detecting Source Code Clones by Analyzing Assembler

Ian J. Davis; Michael W. Godfrey

To date, most clone detection techniques have concentrated on various forms of source code analysis, often by analyzing token streams. In this paper, we introduce a complementary technique of analyzing generated assembler for clones. This approach is appealing as it is mostly impervious to trivial changes in the source, with compilation serving as a kind of normalization technique. We have built detectors to analyze both Java VM code and GCC Linux assembler for C and C++. In the paper, we describe our approach and show how it can serve as a valuable semantic complement to syntactic, source-code-based detection.
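
As a rough illustration of the compilation-as-normalization idea, here is a minimal sketch, not the detectors described in the paper; it assumes gcc is on the PATH and compares whole functions by opcode stream only:

    # Minimal sketch of compilation-as-normalization for clone detection.
    # Hypothetical illustration, not the paper's detector: it assumes gcc
    # is available and compares whole functions rather than clone regions.
    import os
    import subprocess
    import tempfile

    def assembler_fingerprint(c_source):
        """Compile a C snippet to assembler and keep only the opcode stream."""
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "f.c")
            asm = os.path.join(tmp, "f.s")
            with open(src, "w") as fh:
                fh.write(c_source)
            subprocess.run(["gcc", "-O0", "-S", "-o", asm, src], check=True)
            ops = []
            with open(asm) as fh:
                for line in fh:
                    line = line.strip()
                    # Skip blanks, labels, and directives; keep mnemonics only,
                    # so identifiers and stack offsets cannot cause mismatches.
                    if not line or line.endswith(":") or line.startswith("."):
                        continue
                    ops.append(line.split()[0])
            return ops

    # Two sources differing in identifier names and layout...
    a = "int sum(int n){int s=0;for(int i=0;i<n;i++)s+=i;return s;}"
    b = "int total(int k){int acc=0;for(int j=0;j<k;j++)acc+=j;return acc;}"

    # ...typically yield identical opcode streams, and so report as clones.
    print(assembler_fingerprint(a) == assembler_fingerprint(b))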


International Workshop on Software Clones | 2010

Clone detection by exploiting assembler

Ian J. Davis; Michael W. Godfrey

In this position paper, we describe work in progress on detecting source code clones by analyzing and comparing the assembler produced when the source code is compiled.


Mining Software Repositories | 2011

A tale of two browsers

Olga Baysal; Ian J. Davis; Michael W. Godfrey

We explore the space of open source systems and their user communities by examining the development artifact histories of two popular web browsers -- Firefox and Chrome -- as well as usage data. By examining the data and addressing a number of research questions, two very different profiles emerge: Firefox, as the older and established system, with long product version cycles but short bug fix cycles, and a user base that is slow to adopt newer versions; and Chrome, as the new and fast-evolving system, with short version cycles, longer bug fix cycles, and a user base that very quickly adopts new versions as they become available (due largely to Chrome's mandatory automatic updates).
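
The two profiles rest on simple repository metrics; below is a toy sketch of how such cycle measures can be computed. The dates and layout are hypothetical, not the study's dataset:

    # Toy sketch of the study's two cycle metrics: median days between
    # releases, and median days from bug open to bug fix. The dates below
    # are invented for illustration, not the paper's data.
    from datetime import date
    from statistics import median

    releases = [date(2010, 1, 21), date(2010, 7, 20), date(2011, 3, 22)]
    bugs = [  # (opened, fixed)
        (date(2010, 2, 1), date(2010, 2, 9)),
        (date(2010, 5, 3), date(2010, 5, 20)),
    ]

    version_cycle = median((b - a).days for a, b in zip(releases, releases[1:]))
    fix_cycle = median((fixed - opened).days for opened, fixed in bugs)

    print("median release cycle:", version_cycle, "days")
    print("median bug-fix cycle:", fix_cycle, "days")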


Workshop on Program Comprehension | 2005

Browsing software architectures with LSEdit

Nikita Synytskyy; Richard C. Holt; Ian J. Davis

Fact bases produced by software comprehension tools are large and complex, reaching several gigabytes in size for large software systems. To study these fact bases effectively, either a query engine or a visualization engine is necessary. In our proposed demo we showcase LSEdit, a full-featured graph visualizer and editor, which is suitable for, but not limited to, visualizing architectural diagrams of software. LSEdit is equipped with advanced searching, elision, layout, and editing capabilities. It has been successfully used in the past to visualize extractions of Mozilla, Linux, Vim, Gnumeric, Apache, and other large applications.
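
For context, LSEdit comes from the SWAG toolkit and, to the best of our knowledge, reads graphs expressed in the Tuple-Attribute (TA) format. The tiny landscape below is purely illustrative; the entity classes and relation names are assumptions, not taken from the paper:

    // A tiny illustrative architecture graph in TA-style notation.
    // Entity classes and relation names here are hypothetical.
    FACT TUPLE :
    $INSTANCE parser.c cFile
    $INSTANCE lexer.c cFile
    $INSTANCE frontend subsystem
    contain frontend parser.c
    contain frontend lexer.c
    useproc parser.c lexer.c

    FACT ATTRIBUTE :
    frontend { label = "Front End" }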


The Computer Journal | 1992

A Fast Radix Sort

Ian J. Davis

Almost all computers regularly sort data. Many different sort algorithms have therefore been proposed, and the properties of these algorithms studied in great detail. It is known that no sort algorithm based on key comparisons can sort N keys in fewer than O(N log N) operations, and that many perform O(N^2) operations in the worst case. The radix sort has the attractive feature that it can sort N keys in O(N) operations, and it is therefore natural to consider methods of implementing such a sort efficiently. In this paper one efficient implementation of a radix sort is presented, and the performance of this algorithm compared with that of Quicksort.
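
For the flavor of the approach, here is a generic least-significant-digit radix sort, a minimal sketch in Python rather than the paper's implementation, which concerns efficient low-level details:

    # Generic LSD radix sort over non-negative integers, one byte per pass.
    # A sketch of O(N) radix sorting, not the paper's implementation.
    def radix_sort(keys, key_bytes=4):
        for shift in range(0, 8 * key_bytes, 8):
            buckets = [[] for _ in range(256)]   # one bucket per byte value
            for k in keys:
                buckets[(k >> shift) & 0xFF].append(k)
            # Concatenating buckets in order is stable, which is what makes
            # sorting from least- to most-significant byte correct overall.
            keys = [k for bucket in buckets for k in bucket]
        return keys

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]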


Theory and Practice of Object Systems | 1998

A structured text ADT for object-relational databases

L. J. Brown; Mariano P. Consens; Ian J. Davis; Christopher Palmer; Frank Wm. Tompa

There is a growing need to develop tools that are able to retrieve relevant textual information rapidly, to present textual information in a meaningful way, and to integrate textual information with related data retrieved from other sources. These tools are critical to support applications within corporate intranets and across the rapidly evolving World Wide Web. This paper introduces a framework for modelling structured text and presents a small set of operations that may be applied against such models. Using these operations, structured text may be selected, marked, fragmented, and transformed into relations for use in relational and object-oriented database systems. The extended functionality has been accepted for inclusion within the SQL/MM standard, and a prototype database engine has been implemented to support SQL with the proposed extensions. This prototype serves as a proof of concept intended to address industrial concerns, and it demonstrates the power of the proposed abstract data type for structured text.
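
To suggest the flavor of these operations (select, fragment, transform to relations), here is a toy sketch using XML and Python's ElementTree as stand-ins; this is not the paper's ADT or the SQL/MM syntax:

    # Toy stand-in for the structured-text operations described above:
    # select elements, fragment at structural boundaries, and transform
    # the fragments into relational tuples. Not the paper's ADT.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring(
        "<report><sec title='Intro'>Hello.</sec>"
        "<sec title='Results'>42 widgets.</sec></report>")

    # "Fragment" at section boundaries and emit (title, text) tuples that
    # a relational or object-relational system could store and query.
    rows = [(sec.get("title"), sec.text) for sec in doc.iter("sec")]
    print(rows)  # [('Intro', 'Hello.'), ('Results', '42 widgets.')]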


IEEE Transactions on Computers | 1989

Local correction of helix(k) lists

Ian J. Davis

A helix(k) list is a robust multiply linked list having k pointers in each node. In general, the ith pointer in each node addresses the ith previous node. However, the first pointer in each node addresses the next node, rather than the previous. An algorithm for performing local correction in a helix(k ≥ 3) list is presented. Given the assumption that at most k errors are encountered during any single correction step, this algorithm performs correction whenever possible, and otherwise reports failure. The algorithm generally reports failure only if all k pointers addressing a specific node are damaged, causing this node to become disconnected. However, in a helix(k = 3) structure, one specific type of damage that causes disconnection is indistinguishable from alternative damage that does not; this also causes the algorithm to report failure.
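
The link structure itself is easy to sketch; the following is a minimal model of the pointers only, and the local correction algorithm is not reproduced here:

    # Minimal model of the helix(k) link structure: pointer 1 of each node
    # addresses the next node, and pointer i (i >= 2) addresses the i-th
    # previous node. The correction algorithm itself is not shown.
    K = 3

    def build_helix(values):
        nodes = [{"value": v, "ptr": [None] * K} for v in values]
        for i, node in enumerate(nodes):
            node["ptr"][0] = nodes[i + 1] if i + 1 < len(nodes) else None
            for j in range(2, K + 1):    # pointer j -> j-th previous node
                node["ptr"][j - 1] = nodes[i - j] if i - j >= 0 else None
        return nodes

    nodes = build_helix("abcde")
    # Up to K pointers address each node; this redundancy is what local
    # correction exploits, failing only when every pointer to a node dies.
    c = nodes[2]
    assert nodes[1]["ptr"][0] is c       # 'b' -> next node is 'c'
    assert nodes[4]["ptr"][1] is c       # 'e' -> 2nd previous node is 'c'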


Principles of Engineering Service-Oriented Systems | 2013

Storm prediction in a cloud

Ian J. Davis; Hadi Hemmati; Richard C. Holt; Michael W. Godfrey; Douglas M. Neuse; Serge Mankovskii

Predicting future behavior reliably and efficiently is key for systems that manage virtual services; such systems must be able to balance loads within a cloud environment to ensure that service level agreements are met at a reasonable expense. In principle, accurate predictions can be achieved by mining a variety of data sources that describe the historic behavior of the services, the requirements of the programs running on them, and the evolving demands placed on the cloud by end users. Of particular importance is accurate prediction of the maximal loads likely to be observed in the short term. However, standard approaches to modeling system behavior, by analyzing the totality of the observed data, tend to predict average rather than exceptional system behavior and ignore important patterns of change over time. In this paper, we study the ability of simple multivariate linear regression to forecast peak CPU utilization (storms) in an industrial cloud environment. We also propose several modifications to standard linear regression to adjust it for storm prediction.
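
A minimal sketch of the baseline technique on synthetic data follows: plain least squares over lagged utilization samples. The paper's storm-specific modifications are not reproduced:

    # Baseline sketch: predict the next window's peak CPU utilization from
    # the last LAGS samples via least squares. Synthetic data; the paper's
    # storm-prediction adjustments are not reproduced here.
    import numpy as np

    rng = np.random.default_rng(0)
    cpu = 50 + 20 * np.sin(np.arange(300) / 10.0) + rng.normal(0, 3, 300)

    LAGS = 5
    X = np.column_stack([cpu[i:len(cpu) - LAGS + i] for i in range(LAGS)])
    y = np.array([cpu[i + LAGS:i + LAGS + 3].max()  # peak of next 3 samples
                  for i in range(len(cpu) - LAGS - 2)])
    X = X[:len(y)]

    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    latest = np.r_[cpu[-LAGS:], 1.0]
    print("predicted near-term peak CPU: %.1f%%" % (latest @ coef))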


Conference of the Centre for Advanced Studies on Collaborative Research | 2009

DRACA: decision support for root cause analysis and change impact analysis for CMDBs

Sarah Nadi; Ric Holt; Ian J. Davis; Serge Mankovskii

As business services become increasingly dependent on information technology (IT), it also becomes increasingly important to maximize the decision support for managing IT. Configuration Management Databases (CMDBs) store fundamental information about IT systems, such as the systems' hardware, software, and services. This information can help provide decision support for root cause analysis and change impact analysis. We have worked with our industrial research partner, CA, and with CA customers to identify challenges to the use of CMDBs to semi-automatically solve these problems. In this paper we propose a framework called DRACA (Decision Support for Root Cause Analysis and Change Impact Analysis). This framework mines key facts from the CMDB and, in a sequence of three steps, combines these facts with incident reports, change reports, and expert knowledge, along with temporal information, to construct a probabilistic causality graph. Root causes are predicted and ranked by probabilistically tracing causality edges backwards from incidents to likely causes. Conversely, change impacts can be predicted and ranked by tracing forward from a proposed change along causality edges to locate likely undesirable impacts.
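
A minimal sketch of the backward trace over such a causality graph follows; the graph, edge probabilities, and scoring rule here are invented for illustration, not mined from a CMDB as in the paper:

    # Sketch of backward tracing over a probabilistic causality graph:
    # candidate root causes of an incident are ranked by the strongest
    # causal path reaching it. All values below are invented.
    cause_of = {  # effect -> [(cause, P(effect | cause fails))]
        "web_app_down": [("app_server", 0.9), ("load_balancer", 0.4)],
        "app_server":   [("database", 0.7), ("disk_array", 0.3)],
        "database":     [("disk_array", 0.8)],
    }

    def rank_root_causes(incident):
        scores = {}
        def walk(node, p):
            for cause, edge_p in cause_of.get(node, []):
                q = p * edge_p
                scores[cause] = max(scores.get(cause, 0.0), q)
                walk(cause, q)   # trace causality edges backwards
        walk(incident, 1.0)
        return sorted(scores.items(), key=lambda kv: -kv[1])

    print(rank_root_causes("web_app_down"))
    # [('app_server', 0.9), ('database', 0.63), ('disk_array', 0.504),
    #  ('load_balancer', 0.4)]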


Conference on Software Maintenance and Reengineering | 2012

Analyzing Assembler to Eliminate Dead Functions: An Industrial Experience

Ian J. Davis; Michael W. Godfrey; Richard C. Holt; Serge Mankovskii; Nick Minchenko

Industrial software systems often contain fragments of code that are vestigial, that is, they were created long ago for a specific purpose but are no longer useful within the current design of the system. In this work, we describe how we have adapted some research tools to remove such code: we use a hybrid static analysis of both source code and assembler to construct a model of the system, and then use graph querying to detect possible dead functions. Suspected dead functions are then commented out of the source. The system is then rebuilt and run against existing test suites to verify that the removals do not affect the semantics of the system. Finally, we discuss the results of applying this technique to a large, long-lived industrial software system as well as to a large open source system.
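
The graph-query step reduces to reachability; a minimal sketch follows. The call graph below is hypothetical, and extracting it from source and assembler is assumed already done:

    # Sketch of the dead-function query: any function unreachable from the
    # entry points is a removal candidate. The call graph is hypothetical;
    # extracting it from source and assembler is assumed already done.
    calls = {
        "main":    ["parse", "run"],
        "run":     ["log"],
        "parse":   ["log"],
        "old_fmt": ["log"],   # vestigial: nothing reaches it
    }
    entry_points = ["main"]

    def dead_functions(calls, entry_points):
        live, stack = set(), list(entry_points)
        while stack:                      # depth-first reachability
            fn = stack.pop()
            if fn in live:
                continue
            live.add(fn)
            stack.extend(calls.get(fn, []))
        every = set(calls) | {c for cs in calls.values() for c in cs}
        return sorted(every - live)

    print(dead_functions(calls, entry_points))  # ['old_fmt']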
