
Publication


Featured research published by Jeffery von Ronne.


Software Testing, Verification & Reliability | 2002

Empirical studies of test‐suite reduction

Gregg Rothermel; Mary Jean Harrold; Jeffery von Ronne; Christie Hong

Test‐suite reduction techniques attempt to reduce the costs of saving and reusing test cases during software maintenance by eliminating redundant test cases from test suites. A potential drawback of these techniques is that reducing the size of a test suite might reduce its ability to reveal faults in the software. Previous studies have suggested that test‐suite reduction techniques can reduce test‐suite size without significantly reducing the fault‐detection capabilities of test suites. These studies, however, involved particular programs and types of test suites, and further work is needed to begin to generalize their results. This paper reports on the design and execution of additional studies, examining the costs and benefits of test‐suite reduction and the factors that influence these costs and benefits. In contrast to previous studies, the results of these studies reveal that the fault‐detection capabilities of test suites can be severely compromised by test‐suite reduction.
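
A common baseline for test‐suite reduction is a greedy, set-cover style heuristic: repeatedly keep the test case that covers the most still-uncovered requirements and discard the rest. A minimal sketch of that idea, with hypothetical coverage data (not necessarily the exact technique evaluated in the paper):

```python
# Greedy test-suite reduction: repeatedly keep the test case that
# covers the most not-yet-covered requirements, then drop the rest.
def reduce_suite(coverage):
    """coverage maps test-case id -> set of requirements it covers."""
    uncovered = set().union(*coverage.values())
    reduced = []
    while uncovered:
        # Pick the test covering the most remaining requirements.
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining requirements are uncoverable
        reduced.append(best)
        uncovered -= coverage[best]
    return reduced

suite = {
    "t1": {"s1", "s2", "s3"},
    "t2": {"s3", "s4"},
    "t3": {"s4", "s5"},
    "t4": {"s1", "s5"},
}
print(reduce_suite(suite))  # ['t1', 't3'] covers all five statements
```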


Programming Language Design and Implementation | 2001

SafeTSA: a type safe and referentially secure mobile-code representation based on static single assignment form

Wolfram Amme; Niall J. Dalton; Jeffery von Ronne; Michael Franz

SafeTSA is a mobile-code representation based on static single assignment (SSA) form that is both type safe and referentially secure. Unlike stack-based virtual-machine bytecode, it transports programs in an SSA-based intermediate representation whose type system lets the receiving system verify transmitted code before execution while preserving properties that ease subsequent optimization.


symposium on access control models and technologies | 2013

Privacy promises that can be kept: a policy analysis method with application to the HIPAA privacy rule

Omar Chowdhury; Andreas Gampe; Jianwei Niu; Jeffery von Ronne; Jared Bennatt; Anupam Datta; Limin Jia; William H. Winsborough

Organizations collect personal information from individuals to carry out their business functions. Federal privacy regulations, such as the Health Insurance Portability and Accountability Act (HIPAA), mandate how this collected information can be shared by the organizations. It is thus incumbent upon the organizations to have means to check compliance with the applicable regulations. Prior work by Barth et al. introduces two notions of compliance, weak compliance (WC) and strong compliance (SC). WC ensures that present requirements of the policy can be met, whereas SC also ensures that obligations can be met. An action is compliant with a privacy policy if it is both weakly and strongly compliant. However, their definitions of compliance are restricted to propositional linear temporal logic (pLTL), which cannot feasibly specify HIPAA. To this end, we present a policy specification language based on a restricted subset of first-order temporal logic (FOTL) which can capture the privacy requirements of HIPAA. We then formally specify WC and SC for policies of our form. We prove that checking WC is feasible, whereas checking SC is undecidable. We then formally specify the property that WC entails SC, denoted by Δ, which requires that each weakly compliant action is also strongly compliant. To check whether an action is compliant with such a policy, it is sufficient to check only whether the action is weakly compliant with that policy. We also prove that when a policy ℘ has the Δ-property, the present requirements of the policy reduce to the safety requirements imposed by ℘. We then develop a sound, semi-automated technique for checking whether practical policies have the Δ-property. We finally use HIPAA as a case study to demonstrate the efficacy of our policy analysis technique.
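
As a toy illustration of weak compliance, one can model a policy as clauses with a trigger and a present requirement and check a single action against them; the paper's actual language is a restricted FOTL fragment evaluated over traces, so this sketch (with hypothetical clause structure) only conveys the flavor:

```python
# Toy weak-compliance check: an action is weakly compliant if every
# policy clause whose trigger matches it has its present requirement met.
# (Hypothetical structure; the paper uses a restricted FOTL fragment.)
def weakly_compliant(action, policy):
    return all(clause["requires"](action)
               for clause in policy
               if clause["triggers"](action))

hipaa_like_policy = [
    {   # disclosures of PHI for treatment carry no further condition
        "triggers": lambda a: a["type"] == "disclose" and a["purpose"] == "treatment",
        "requires": lambda a: True,
    },
    {   # disclosures for marketing require patient authorization
        "triggers": lambda a: a["type"] == "disclose" and a["purpose"] == "marketing",
        "requires": lambda a: a.get("authorized", False),
    },
]

action = {"type": "disclose", "purpose": "marketing", "authorized": False}
print(weakly_compliant(action, hipaa_like_policy))  # False
```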


Electronic Notes in Theoretical Computer Science | 2004

Code Annotation for Safe and Efficient Dynamic Object Resolution

Andreas Hartmann; Wolfram Amme; Jeffery von Ronne; Michael Franz

The execution time of object-oriented programs can be drastically reduced by transforming “non-escaping” objects into collections of their component scalar data fields. But for languages that support dynamic linking, this kind of optimization (which we call “object resolution”) can usually only be performed at runtime, when the entire program is available for analysis. In such cases, the resulting performance increases will be offset by the additional costs that arise during the analysis and restructuring phases. In this paper, we describe work in progress, which provides an annotation technique that reduces the runtime overhead required for performing object resolutions. Our method performs a partial static escape analysis of each class at compile time and then annotates the intermediate representation of that class with information which the just-in-time (JIT) compiler can use for object resolution. We apply this technique to the SafeTSA intermediate representation, producing a simple extension to SafeTSA's type system that guarantees a safe and verifiable transmission of the annotated program.
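
The underlying transformation, commonly known as scalar replacement, is easy to illustrate: a non-escaping object is dissolved into locals holding its fields, eliminating the allocation. A hypothetical before/after sketch (the paper's contribution is the annotation scheme that lets a JIT perform this safely, not the transformation itself):

```python
# Before: a short-lived Point object that never escapes the function.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def distance_squared(ax, ay, bx, by):
    p = Point(ax - bx, ay - by)   # allocation the optimizer wants to avoid
    return p.x * p.x + p.y * p.y

# After object resolution: the fields become plain locals,
# eliminating the allocation entirely.
def distance_squared_resolved(ax, ay, bx, by):
    p_x = ax - bx
    p_y = ay - by
    return p_x * p_x + p_y * p_y

assert distance_squared(3, 4, 0, 0) == distance_squared_resolved(3, 4, 0, 0) == 25
```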


Software Testing, Verification & Reliability | 2002

Can fault-exposure-potential estimates improve the fault detection abilities of test suites?

Wei Chen; Roland H. Untch; Gregg Rothermel; Sebastian G. Elbaum; Jeffery von Ronne

Code‐coverage‐based test data adequacy criteria typically treat all coverable code elements (such as statements, basic blocks or outcomes of decisions) as equal. In practice, however, the probability that a test case can expose a fault in a code element varies: some faults are more easily revealed than others. Thus, several researchers have suggested that if one could estimate the probability that a fault in a code element will cause a failure, one could use this estimate to determine the number of executions of a code element that are required to achieve a certain level of confidence in that element's correctness. This estimate, in turn, could be used to improve the fault‐detection effectiveness of test suites and help testers distribute testing resources more effectively. This conjecture is intriguing; however, like many such conjectures, it has never been directly examined empirically. If empirical evidence were to support this conjecture, it would motivate further research into methodologies for obtaining fault‐exposure‐potential estimates and incorporating them into test data adequacy criteria. This paper reports the results of experiments conducted to investigate the effects of incorporating an estimate of fault‐exposure probability into the statement coverage test data adequacy criterion. The results of these experiments, however, ran contrary to the conjectures of previous researchers. Although incorporation of the estimates did produce statistically significant increases in the fault‐detection effectiveness of test suites, these increases were quite small, suggesting that the approach might not be able to produce the gains hoped for and might not be worth the cost of its employment.
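
The arithmetic behind the conjecture: if a fault in an element would cause a failure with probability p on each execution, then reaching confidence c that the element is correct after seeing no failures requires n >= ln(1 - c) / ln(1 - p) executions. A small sketch of that calculation:

```python
import math

# Executions of a code element needed so that, if a fault with
# per-execution exposure probability p were present, we would have
# seen a failure with probability >= confidence.
def executions_needed(p, confidence=0.95):
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

for p in (0.5, 0.1, 0.01):
    print(f"p={p}: {executions_needed(p)} executions for 95% confidence")
# p=0.5: 5, p=0.1: 29, p=0.01: 299 -- low-exposure elements need far more tests
```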


European Conference on Genetic Programming | 2007

FIFTH™: a stack based GP language for vector processing

Kenneth L. Holladay; Kay A. Robbins; Jeffery von Ronne

FIFTH™, a new stack-based genetic programming language, efficiently expresses solutions to a large class of feature recognition problems. This problem class includes mining time-series data, classification of multivariate data, image segmentation, and digital signal processing (DSP). FIFTH is based on FORTH principles. Key features of FIFTH are a single data stack for all data types and support for vectors and matrices as single stack elements. We demonstrate that the language characteristics allow simple and elegant representation of signal processing algorithms while maintaining the rules necessary to automatically evolve stack-correct and control-flow-correct programs. FIFTH supports all essential program architecture constructs such as automatically defined functions, loops, branches, and variable storage. An XML configuration file provides easy selection from a rich set of operators, including domain-specific functions such as the fast Fourier transform (FFT). The fully-distributed FIFTH environment (GPE5) uses CORBA for its underlying process communication.
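
FIFTH's key representational choice, a single stack whose elements may be entire vectors, can be sketched in a few lines; the opcodes below are hypothetical stand-ins, not FIFTH's actual FORTH-derived word set:

```python
# Minimal stack machine where a stack element can be a whole vector,
# so one opcode applies an operation elementwise (hypothetical opcodes;
# FIFTH's real word set is FORTH-derived).
def run(program, stack=None):
    stack = stack or []
    for op in program:
        if op == "+":
            b, a = stack.pop(), stack.pop()
            stack.append([x + y for x, y in zip(a, b)] if isinstance(a, list) else a + b)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            stack.append([x * y for x, y in zip(a, b)] if isinstance(a, list) else a * b)
        elif op == "dup":
            stack.append(stack[-1])
        else:
            stack.append(op)  # literal: number or vector
    return stack

# Square each sample of a signal: push the vector, dup, multiply elementwise.
print(run([[1, 2, 3], "dup", "*"]))  # [[1, 4, 9]]
```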


ACM Transactions on Architecture and Code Optimization | 2007

SSA-based mobile code: Implementation and empirical evaluation

Wolfram Amme; Jeffery von Ronne; Michael Franz

Although one might expect transportation formats based on static single-assignment form (SSA) to yield faster just-in-time compilation times than those based on stack-based virtual machines, this claim has not previously been validated in practice. We attempt to quantify the effect of using an SSA-based mobile code representation by integrating support for a verifiable SSA-based IR into Jikes RVM. Performance results, measured with various optimizations and on both IA32 and PowerPC, show improvements in both compilation time and code quality.
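
The intuition for why SSA-based transport can speed up JIT compilation: the producer has already given every value a single definition, so def-use information arrives for free instead of being reconstructed from operand-stack traffic. A schematic example, with plain Python standing in for the IR:

```python
# max(a, b) + 1 in an SSA-style encoding: every value is defined exactly
# once, and the phi makes the control-flow merge explicit.
# (Hypothetical encoding, for illustration only.)
def eval_ssa(a, b):
    v1, v2 = a, b
    v3 = v1 > v2              # compare
    v4 = v1 if v3 else v2     # phi(v1, v2): single definition at the join
    v5 = v4 + 1
    return v5

# Equivalent stack-machine code would push a and b, branch, and leave the
# maximum on the operand stack; a JIT must then recover the def-use
# information that the SSA form above carries directly.
print(eval_ssa(3, 7))  # 8
```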


Static Analysis Symposium | 2009

A Verifiable, Control Flow Aware Constraint Analyzer for Bounds Check Elimination

David Niedzielski; Jeffery von Ronne; Andreas Gampe; Kleanthis Psarris

The Java platform requires that out-of-bounds array accesses produce runtime exceptions. In general, this requires a dynamic bounds check each time an array element is accessed. However, if it can be proven that the array index is within the bounds of the array, the check can be eliminated. We present a new algorithm based on extended static single assignment (eSSA) form that builds a constraint system representing control-flow-qualified, linear constraints among program variables derived from program statements. Our system then derives relationships among variables and provides a verifiable proof of its conclusions. This proof can be verified by a runtime system to minimize the analysis's performance impact. Our system simultaneously considers both control flow and data flow when analyzing the constraint system, handles general linear inequalities instead of simple difference constraints, and provides verifiable proofs for its claims. We present experimental results demonstrating that this method eliminates more bounds checks and, when combined with runtime verification, results in a lower runtime cost than prior work. Our algorithm improves benchmark performance by up to nearly 10% over the baseline SafeTSA system.
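
To convey the flavor of the constraint reasoning, consider an access a[i] inside a loop guarded by n <= len(a): the facts i <= n - 1 and n <= len(a) chain to prove i <= len(a) - 1, so the upper-bound check can be removed. A toy prover over difference constraints only (the paper handles general linear inequalities and control flow via eSSA; the lower-bound side, 0 <= i, is analogous):

```python
# Toy constraint reasoning behind bounds-check elimination: a fact of the
# form u - v <= c becomes an edge v -> u of weight c, and the tightest
# provable bound on x - y is the shortest path from y to x.
def max_diff(constraints, x, y, names):
    dist = {n: float("inf") for n in names}
    dist[y] = 0
    for _ in range(len(names) - 1):        # Bellman-Ford relaxation
        for (u, v, c) in constraints:      # constraint: u - v <= c
            if dist[v] + c < dist[u]:
                dist[u] = dist[v] + c
    return dist[x]

names = ["i", "n", "len"]
constraints = [
    ("i", "n", -1),    # loop header: i <= n - 1
    ("n", "len", 0),   # guard:       n <= len(a)
]
# a[i] is in bounds on the upper side if i - len <= -1, i.e. i <= len - 1:
print(max_diff(constraints, "i", "len", names) <= -1)  # True: check removable
```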


Principles and Practice of Programming in Java | 2008

Speculative improvements to verifiable bounds check elimination

Andreas Gampe; Jeffery von Ronne; David Niedzielski; Kleanthis Psarris

As a safety measure, the Java programming language requires bounds checking of array accesses. This usually translates to dynamic checks each time an array element is accessed. Static analysis can help eliminate some of those checks by proving them to be redundant, reducing the runtime overhead. Compilation of Java programs is usually method-based, and dynamic dispatch complicates interprocedural analysis. The result is a severely restricted static analysis. This paper presents a novel combination of two techniques to alleviate this problem. By assuming constraints that cannot safely be inferred from the program, the number of bounds checks that can be proven safe is greatly increased. These constraints, called speculations, can be derived automatically from the program code by an analyzer that assumes there will be no violation of the array bounds. To ensure that the speculations hold at runtime, additional checks have to be injected into the code. Finding good speculations that benefit runtime performance can be expensive. This paper shows that the speculation technique can be combined with a verifiable annotation framework, allowing most of the work to be shifted to compile time. Experimental results show that this combination of techniques increases the number of eliminated bounds checks and can result in speedups that approach unconditional bounds check removal.
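
The shape of the optimization: speculate a constraint the analyzer cannot prove, test it once with an injected entry check, and run a check-free fast path, falling back to fully checked code when the speculation fails. A hedged sketch (illustrative only; the paper targets Java, where the JIT removes the per-access checks):

```python
# Speculative bounds-check elimination, sketched: one injected entry
# check guards a check-free loop body; the slow path keeps every check.
def sum_window(a, start, width):
    if 0 <= start and start + width <= len(a):   # injected speculation check
        total = 0
        for k in range(width):                   # fast path: no per-access check
            total += a[start + k]
        return total
    # Speculation failed: fall back to fully checked accesses.
    total = 0
    for k in range(width):
        if not (0 <= start + k < len(a)):
            raise IndexError(start + k)
        total += a[start + k]
    return total

print(sum_window([1, 2, 3, 4], 1, 2))  # 5
```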


International Conference on Parallel Processing | 2014

A Distributed Dataflow Model for Task-Uncoordinated Parallel Program Execution

Lucas A. Wilson; Jeffery von Ronne

High Performance Computing (HPC) systems now consist of many thousands of individual servers. While relatively scalable and cost effective, these systems suffer from a complexity of scale that will not improve with increasing machine size. It will become increasingly difficult, if not impossible, for HPC systems to maintain node availability long enough for worthwhile scientific calculations to be performed. Existing execution and programming models, which depend on guaranteed hardware reliability, are not well suited to future distributed memory parallel systems where hardware reliability cannot be guaranteed. We propose a distributed dataflow execution model that utilizes a distributed dictionary for data memoization, allowing each parallel task to schedule instructions without direct inter-task coordination. We provide a description of the proposed execution model, including program formulation and autonomous dataflow task selection. Experiments demonstrate the proposed model's ability to automatically distribute work across tasks, as well as its ability to scale in both shared memory and distributed memory.
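
The core mechanism can be approximated with a shared dictionary of memoized values: any task may fire any instruction whose inputs are already present, so tasks need no direct coordination. A toy single-process version with a hypothetical program encoding:

```python
# Toy version of dataflow execution over a shared memo dictionary:
# any task may fire any instruction whose inputs are present
# (here simulated in one process; the paper distributes the dictionary).
program = {
    # name: (function, input names)
    "a": (lambda: 2, ()),
    "b": (lambda: 3, ()),
    "c": (lambda a, b: a + b, ("a", "b")),
    "d": (lambda c: c * c, ("c",)),
}

memo = {}  # in the paper's model this dictionary is distributed

def step():
    """Fire one ready instruction; return False when none remain."""
    for name, (fn, deps) in program.items():
        if name not in memo and all(d in memo for d in deps):
            memo[name] = fn(*(memo[d] for d in deps))
            return True
    return False

while step():
    pass
print(memo["d"])  # 25
```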

Collaboration


Dive into Jeffery von Ronne's collaboration.

Top Co-Authors

Michael Franz (University of California)
Andreas Gampe (University of Texas at San Antonio)
David Niedzielski (University of Texas at San Antonio)
Kleanthis Psarris (University of Texas at San Antonio)
Gregg Rothermel (University of Nebraska–Lincoln)
Jianwei Niu (University of Texas at San Antonio)
Lucas A. Wilson (University of Texas at Austin)
Claiborne Johnson (University of Texas at San Antonio)
Mary Jean Harrold (Georgia Institute of Technology)