Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jason D. Hiser is active.

Publication


Featured research published by Jason D. Hiser.


IEEE Symposium on Security and Privacy | 2012

ILR: Where'd My Gadgets Go?

Jason D. Hiser; Anh Nguyen-Tuong; Matthew Hall; Jack W. Davidson

Through randomization of the memory space and the confinement of code to non-data pages, computer security researchers have made a wide range of attacks against program binaries more difficult. However, attacks have evolved to exploit weaknesses in these defenses. To thwart these attacks, we introduce a novel technique called Instruction Location Randomization (ILR). Conceptually, ILR randomizes the location of every instruction in a program, thwarting an attacker's ability to re-use program functionality (e.g., arc-injection attacks and return-oriented programming attacks). ILR operates on arbitrary executable programs, requires no compiler support, and requires no user interaction. Thus, it can be automatically applied post-deployment, allowing easy and frequent re-randomization. Our preliminary prototype, working on 32-bit x86 Linux ELF binaries, provides a high degree of entropy: individual instructions are randomly placed within a 31-bit address space. Thus, attacks that rely on a priori knowledge of the location of code or derandomization are not feasible. We demonstrated ILR's defensive capabilities by defeating attacks against programs with vulnerabilities, including Adobe's PDF viewer, acroread, which had an in-the-wild vulnerability. Additionally, using an industry-standard CPU performance benchmark suite, we compared the run time of prototype ILR-protected executables to that of native executables. The average run-time overhead of ILR was 13%, with more than half the programs (15 out of 29) having effectively no overhead, indicating that ILR is a realistic and cost-effective mitigation technique.
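
As the abstract describes, ILR relies on an execution engine that knows, for every relocated instruction, where it now lives and which instruction logically follows it. The C sketch below is a minimal illustration of such a per-instruction fallthrough map, assuming a simple open-addressed hash table and toy addresses; it is not the authors' implementation, and the names (ilr_rule_t, ilr_lookup) are invented for this example.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One ILR "rewrite rule": an instruction that originally lived at
 * orig_addr now lives at new_addr, and after it executes, control
 * should continue at the instruction whose original address is
 * fallthrough (unless the instruction itself branches elsewhere). */
typedef struct {
    uint32_t orig_addr;    /* address in the original binary      */
    uint32_t new_addr;     /* randomized location chosen for it   */
    uint32_t fallthrough;  /* original address of the successor   */
} ilr_rule_t;

#define TABLE_SIZE 4096    /* power of two for cheap masking */

static ilr_rule_t table[TABLE_SIZE];   /* open-addressed hash table */

static size_t slot_for(uint32_t orig_addr) {
    size_t i = (orig_addr * 2654435761u) & (TABLE_SIZE - 1);
    while (table[i].orig_addr != 0 && table[i].orig_addr != orig_addr)
        i = (i + 1) & (TABLE_SIZE - 1);   /* linear probing */
    return i;
}

/* Record where an instruction was moved and what follows it. */
static void ilr_add_rule(uint32_t orig, uint32_t moved_to, uint32_t next) {
    size_t i = slot_for(orig);
    table[i].orig_addr = orig;
    table[i].new_addr = moved_to;
    table[i].fallthrough = next;
}

/* The per-instruction lookup the execution engine performs: map an
 * original address to the randomized location it should fetch from. */
static const ilr_rule_t *ilr_lookup(uint32_t orig) {
    size_t i = slot_for(orig);
    return table[i].orig_addr == orig ? &table[i] : NULL;
}

int main(void) {
    srand(0x1234);
    /* Scatter three consecutive instructions of a toy program. */
    uint32_t prog[] = { 0x8048100, 0x8048101, 0x8048103 };
    for (int k = 0; k < 3; k++) {
        uint32_t moved = (uint32_t)rand();   /* pseudo-random location (illustrative) */
        uint32_t next  = (k < 2) ? prog[k + 1] : 0;
        ilr_add_rule(prog[k], moved, next);
    }
    /* "Execute": follow fallthrough links through randomized locations. */
    for (uint32_t pc = prog[0]; pc != 0; ) {
        const ilr_rule_t *r = ilr_lookup(pc);
        printf("fetch 0x%x from randomized 0x%x\n", (unsigned)pc, (unsigned)r->new_addr);
        pc = r->fallthrough;
    }
    return 0;
}
```

Compiling and running the sketch prints each original address alongside the randomized location it would be fetched from, which is the essential bookkeeping that makes per-instruction randomization transparent to the running program.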


Programming Language Design and Implementation | 2004

Fast searches for effective optimization phase sequences

Prasad A. Kulkarni; Stephen Hines; Jason D. Hiser; David B. Whalley; Jack W. Davidson; Douglas L. Jones

It has long been known that a fixed ordering of optimization phases will not produce the best code for every application. One approach to addressing this phase-ordering problem is to use an evolutionary algorithm to search for a specific sequence of phases for each module or function. While such searches have been shown to produce more efficient code, the approach can be extremely slow because the application is compiled and executed to evaluate each sequence's effectiveness. Consequently, evolutionary or iterative compilation schemes have been promoted for compilation systems targeting embedded applications, where longer compilation times may be tolerated in the final stage of development. In this paper we describe two complementary general approaches for achieving faster searches for effective optimization sequences when using a genetic algorithm. The first approach reduces the search time by avoiding unnecessary executions of the application when possible. Results indicate search time reductions of 65% on average, often reducing searches from hours to minutes. The second approach modifies the search so that fewer generations are required to achieve the same results. Measurements show that the average number of required generations decreased by 68%. These improvements have the potential to make evolutionary compilation a viable choice for tuning embedded applications.
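
The first speedup mentioned above, avoiding unnecessary executions of the application, can be pictured as a cache keyed by a fingerprint of the code each phase sequence produces: if two sequences yield identical code, the benchmark is run only once. The C sketch below illustrates that idea; the compile_with_sequence and run_benchmark stand-ins are invented (and deliberately contrived so two "sequences" produce the same bytes), so this is a sketch of the caching concept rather than the authors' code.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* --- Stand-ins for the real toolchain (illustrative only) -------------- */

/* Pretend to compile: different phase orders that happen to produce the
 * same code yield the same bytes here, which is the case the cache exploits. */
static size_t compile_with_sequence(const char *phases, uint8_t *out, size_t max) {
    size_t n = strlen(phases) < max ? strlen(phases) : max;
    for (size_t i = 0; i < n; i++)
        out[i] = (uint8_t)(phases[i] & ~0x20);   /* fold case: "abc" == "ABC" */
    return n;
}

/* Pretend to run the compiled code and return a "cycle count". */
static double run_benchmark(const uint8_t *code, size_t len) {
    double cost = 0.0;
    for (size_t i = 0; i < len; i++) cost += code[i];
    printf("  (executing benchmark...)\n");
    return cost;
}

/* --- The caching idea --------------------------------------------------- */

static uint64_t fingerprint(const uint8_t *code, size_t len) {
    uint64_t h = 1469598103934665603ull;          /* FNV-1a hash */
    for (size_t i = 0; i < len; i++) { h ^= code[i]; h *= 1099511628211ull; }
    return h;
}

#define CACHE_SIZE 1024
static struct { uint64_t key; double fitness; bool used; } cache[CACHE_SIZE];

/* Evaluate one optimization-phase sequence, executing the application
 * only when its generated code has not been seen earlier in the search. */
static double evaluate_sequence(const char *phases) {
    uint8_t code[4096];
    size_t len = compile_with_sequence(phases, code, sizeof code);
    uint64_t key = fingerprint(code, len);

    for (size_t i = 0; i < CACHE_SIZE; i++) {
        size_t j = (size_t)((key + i) % CACHE_SIZE);
        if (cache[j].used && cache[j].key == key)
            return cache[j].fitness;              /* hit: skip the execution */
        if (!cache[j].used) {                     /* miss: measure and store */
            cache[j].key = key;
            cache[j].fitness = run_benchmark(code, len);
            cache[j].used = true;
            return cache[j].fitness;
        }
    }
    return run_benchmark(code, len);              /* table full: just measure */
}

int main(void) {
    printf("eval \"abc\": %.0f\n", evaluate_sequence("abc"));
    printf("eval \"ABC\": %.0f\n", evaluate_sequence("ABC"));  /* cache hit */
    return 0;
}
```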


Virtual Execution Environments | 2006

Secure and practical defense against code-injection attacks using software dynamic translation

Wei Hu; Jason D. Hiser; Daniel W. Williams; Adrian Filipi; Jack W. Davidson; David Evans; John C. Knight; Anh Nguyen-Tuong; Jonathan C. Rowanhill

One of the most common forms of security attack involves exploiting a vulnerability to inject malicious code into an executing application and then cause the injected code to be executed. A theoretically strong approach to defending against any type of code-injection attack is to create and use a process-specific instruction set produced by a randomization algorithm. Code injected by an attacker who does not know the randomization key will be invalid for the randomized processor, effectively thwarting the attack. This paper describes a secure and efficient implementation of instruction-set randomization (ISR) using software dynamic translation. The paper makes three contributions beyond previous work on ISR. First, we describe an implementation that uses a strong cipher algorithm, the Advanced Encryption Standard (AES), to perform randomization. AES is generally believed to be impervious to known attack methodologies. Second, we demonstrate that ISR using AES can be implemented practically and efficiently (considering both execution time and code size overheads) without requiring special hardware support. The third contribution is that our approach detects malicious code before it is executed. Previous approaches relied on probabilistic arguments that execution of non-randomized foreign code would eventually cause a fault or runtime exception.
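
The workflow described above has two halves: code is encrypted with a per-process key when it is loaded, and the dynamic translator decrypts each block in its fetch stage before translating it, so injected plaintext code never decrypts to something acceptable. The sketch below illustrates only that shape; a keyed XOR stands in for AES purely so the example is self-contained, and the per-block tag used to reject foreign code before execution is an assumption of this sketch, not necessarily the paper's detection mechanism.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define BLOCK 16   /* AES block size in bytes */

/* Stand-in for AES-128 on one block.  A real ISR implementation would call
 * an actual AES routine; a keyed XOR is used here only so the sketch
 * compiles and runs on its own. */
static void cipher_block(const uint8_t key[BLOCK], const uint8_t in[BLOCK],
                         uint8_t out[BLOCK]) {
    for (int i = 0; i < BLOCK; i++)
        out[i] = in[i] ^ key[i];
}

/* Illustrative per-block tag so foreign code can be rejected before it
 * runs; this detection detail is an assumption of the sketch. */
static uint8_t tag_of(const uint8_t block[BLOCK]) {
    uint8_t t = 0xA5;
    for (int i = 0; i < BLOCK; i++) t ^= (uint8_t)(block[i] + i);
    return t;
}

/* "Randomize" a code block at load time: record its tag, then encrypt it. */
static void randomize(const uint8_t key[BLOCK], uint8_t code[BLOCK], uint8_t *tag) {
    uint8_t enc[BLOCK];
    *tag = tag_of(code);
    cipher_block(key, code, enc);
    memcpy(code, enc, BLOCK);
}

/* The dynamic translator's fetch stage: decrypt, verify, and only then
 * hand the block to the translation/execution machinery. */
static bool fetch_and_check(const uint8_t key[BLOCK], const uint8_t code[BLOCK],
                            uint8_t expected_tag, uint8_t out[BLOCK]) {
    cipher_block(key, code, out);          /* XOR is its own inverse here */
    return tag_of(out) == expected_tag;    /* injected code fails this    */
}

int main(void) {
    uint8_t key[BLOCK] = { 0x13, 0x37, 0x42, 0x24, 0x99, 0x01, 0x77, 0x5a,
                           0x0c, 0xee, 0x31, 0x86, 0xb2, 0x4d, 0xf0, 0x28 };
    uint8_t program[BLOCK]  = "mov;add;ret;;;;";   /* pretend machine code */
    uint8_t injected[BLOCK] = "evil shellcode!";   /* attacker-supplied    */
    uint8_t tag, plain[BLOCK];

    randomize(key, program, &tag);                       /* done at load   */

    if (fetch_and_check(key, program, tag, plain))       /* legitimate code */
        printf("translated block: %.15s\n", (const char *)plain);

    if (fetch_and_check(key, injected, tag, plain))
        printf("injected code slipped through (tag collision)\n");
    else
        printf("foreign code rejected before execution\n");
    return 0;
}
```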


IEEE Symposium on Security and Privacy | 2009

Security through Diversity: Leveraging Virtual Machine Technology

Daniel W. Williams; Wei Hu; Jack W. Davidson; Jason D. Hiser; John C. Knight; Anh Nguyen-Tuong

Biologists have long recognized the dangers of the lack of diversity, or monocultures, in biological systems. Recently, it has been noted that much of the fragility of our networked computing systems can be attributed to the lack of diversity, or monoculture, of our software systems. The problem is severe. Because it is virtually inevitable that software will ship with flaws, our software monoculture leaves systems open to large-scale attacks by knowledgeable adversaries. Inspired by the resilience of diverse biological systems, the authors developed the Genesis software development toolchain. An innovative aspect of Genesis is its use of application-level virtual machine technology, which enables the application of diversity transforms at any point in the software toolchain. Using Genesis, the authors demonstrated that diversity, when judiciously applied, is a practical and effective defense against two widely used types of attacks: return-to-libc and code injection.
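
Genesis itself is a toolchain, so no single snippet can capture it, but one concrete example of the kind of diversity transform an application-level virtual machine can apply, offered here purely as an illustration and not taken from the paper, is to XOR return addresses with a per-process secret so that a hard-coded return-to-libc address no longer lands where the attacker intended. The sketch below simulates that transform on plain integer values.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Per-process secret chosen when the VM starts the application. */
static uintptr_t diversity_key;

/* What the VM would emit at every call site: store a mangled return
 * address instead of the raw one. */
static uintptr_t mangle(uintptr_t ret_addr) { return ret_addr ^ diversity_key; }

/* What the VM would emit at every return: recover the real address. */
static uintptr_t demangle(uintptr_t stored) { return stored ^ diversity_key; }

int main(void) {
    srand((unsigned)time(NULL));
    diversity_key = ((uintptr_t)rand() << 16) ^ (uintptr_t)rand();

    uintptr_t legit_return = 0x080491d2;           /* pretend call site    */
    uintptr_t on_stack = mangle(legit_return);     /* value actually kept  */

    printf("stored on stack:    %#lx\n", (unsigned long)on_stack);
    printf("control returns to: %#lx\n", (unsigned long)demangle(on_stack));

    /* A return-to-libc attacker overwrites the slot with a raw library
     * address; after demangling it no longer points where they intended. */
    uintptr_t attacker_value = 0xb7e153e0;         /* e.g., a libc routine */
    printf("attacker lands at:  %#lx (not %#lx)\n",
           (unsigned long)demangle(attacker_value), (unsigned long)attacker_value);
    return 0;
}
```

Because each process picks a fresh key, the same exploit payload behaves differently on every instance, which is the diversity argument the paper makes.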


Symposium on Code Generation and Optimization | 2007

Evaluating Indirect Branch Handling Mechanisms in Software Dynamic Translation Systems

Jason D. Hiser; Daniel W. Williams; Wei Hu; Jack W. Davidson; Jason Mars; Bruce R. Childers

Software Dynamic Translation (SDT) systems are used for program instrumentation, dynamic optimization, security, intrusion detection, and many other purposes. As noted by many researchers, a major source of SDT overhead is the execution of the code needed to translate an indirect branch's target address into the address of the translated destination block. This paper discusses the sources of indirect branch (IB) overhead in SDT systems and evaluates several techniques for overhead reduction. Measurements using SPEC CPU2000 show that the appropriate choice and configuration of IB translation mechanisms can significantly reduce the IB handling overhead. In addition, cross-architecture evaluation of IB handling mechanisms reveals that the most efficient implementation and configuration can be highly dependent on the implementation of the underlying architecture.
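
The indirect-branch overhead the paper measures comes from the lookup that maps a program-computed target address to its translated counterpart. A common mechanism for this, used here as an illustrative assumption about the kind of structure being tuned rather than as the paper's exact design, is a small direct-mapped translation cache with a slow-path call into the translator on a miss:

```c
#include <stdint.h>
#include <stdio.h>

/* A direct-mapped indirect-branch translation cache: maps an application
 * (untranslated) target address to the address of its translated block.
 * The table size and hash below are assumptions for illustration;
 * production SDT systems tune both, which is exactly the design space
 * the paper explores. */

#define IBTC_BITS 10
#define IBTC_SIZE (1u << IBTC_BITS)

typedef struct {
    uintptr_t app_target;    /* key: target the program computed       */
    void     *translated;    /* value: where the translated code lives */
} ibtc_entry_t;

static ibtc_entry_t ibtc[IBTC_SIZE];

static unsigned ibtc_index(uintptr_t target) {
    return (unsigned)((target >> 2) & (IBTC_SIZE - 1));
}

/* Called by the translator after it builds a block for `target`. */
static void ibtc_fill(uintptr_t target, void *translated) {
    ibtc[ibtc_index(target)] = (ibtc_entry_t){ target, translated };
}

/* Conceptually inlined at every translated indirect branch: fast path on
 * a hit, fall back to the translator on a miss. */
static void *ibtc_dispatch(uintptr_t target, void *(*translate)(uintptr_t)) {
    ibtc_entry_t *e = &ibtc[ibtc_index(target)];
    if (e->app_target == target && e->translated)   /* hit: no translator */
        return e->translated;
    void *t = translate(target);                    /* miss: slow path    */
    ibtc_fill(target, t);
    return t;
}

/* Toy "translator" so the sketch runs on its own. */
static char code_cache[64];
static void *toy_translate(uintptr_t target) {
    printf("translator invoked for %#lx\n", (unsigned long)target);
    return &code_cache[target % 32];
}

int main(void) {
    uintptr_t t = 0x8048abc;
    void *a = ibtc_dispatch(t, toy_translate);   /* miss: calls translator */
    void *b = ibtc_dispatch(t, toy_translate);   /* hit: fast path only    */
    printf("same translated block: %s\n", a == b ? "yes" : "no");
    return 0;
}
```

The cost of the hit path (hash, compare, load) is the per-branch overhead the paper's techniques try to shrink, and the best table shape can differ across microarchitectures, matching the cross-architecture observation in the abstract.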


Languages, Compilers, and Tools for Embedded Systems | 2002

VISTA: a system for interactive code improvement

Wankang Zhao; Baosheng Cai; David B. Whalley; Mark W. Bailey; Robert van Engelen; Xin Yuan; Jason D. Hiser; Jack W. Davidson; Kyle A. Gallivan; Douglas L. Jones

Software designers face many challenges when developing applications for embedded systems. A major challenge is meeting the conflicting constraints of speed, code density, and power consumption. Traditional optimizing compiler technology is usually of little help in addressing this challenge. To meet speed, power, and size constraints, application developers typically resort to hand-coded assembly language. The results are software systems that are not portable, less robust, and more costly to develop and maintain. This paper describes a new code improvement paradigm implemented in a system called VISTA that can help achieve the cost/performance trade-offs that embedded applications demand. Unlike traditional compilation systems where the smallest unit of compilation is typically a function and the programmer has no control over the code improvement process other than what types of code improvements to perform, the VISTA system opens the code improvement process and gives the application programmer, when necessary, the ability to finely control it. In particular, VISTA allows the application developer to (1) direct the order and scope in which the code improvement phases are applied, (2) manually specify code transformations, (3) undo previously applied transformations, and (4) view the low-level program representation graphically. VISTA can be used by embedded systems developers to produce applications, by compiler writers to prototype and debug new low-level code transformations, and by instructors to illustrate code transformations (e.g., in a compilers course).
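
The interactive workflow described above, apply a transformation, inspect the result, and possibly undo it, implies some form of journaled program representation. The sketch below models that with a stack of IR snapshots over a toy string-based IR; both the snapshot scheme and the toy transforms are assumptions made for illustration, not VISTA's internals.

```c
#include <stdio.h>
#include <string.h>

/* A toy "low-level representation": just a string of RTL-like text.
 * VISTA's real IR is far richer; snapshots here only illustrate how
 * user-directed transforms can be applied and later undone. */

#define MAX_STEPS 32
#define IR_MAX    256

typedef struct {
    char ir[IR_MAX];                 /* snapshot before each transform */
    char name[32];                   /* which transform produced it    */
} step_t;

static char   current_ir[IR_MAX];
static step_t history[MAX_STEPS];
static int    depth = 0;

/* Apply a named transformation, remembering the previous state so the
 * developer can reject it later. */
static void apply(const char *name, void (*transform)(char *ir)) {
    if (depth == MAX_STEPS) { fprintf(stderr, "history full\n"); return; }
    snprintf(history[depth].name, sizeof history[depth].name, "%s", name);
    snprintf(history[depth].ir, IR_MAX, "%s", current_ir);
    depth++;
    transform(current_ir);
    printf("after %-20s %s\n", name, current_ir);
}

/* Undo the most recently applied transformation. */
static void undo(void) {
    if (depth == 0) return;
    depth--;
    snprintf(current_ir, IR_MAX, "%s", history[depth].ir);
    printf("undid %-20s %s\n", history[depth].name, current_ir);
}

/* Two toy transforms standing in for real phases. */
static void constant_fold(char *ir) {
    char *p = strstr(ir, "2+3");
    if (p) { p[0] = '5'; memmove(p + 1, p + 3, strlen(p + 3) + 1); }
}
static void dead_store_elim(char *ir) {
    char *p = strstr(ir, " r9=0;");
    if (p) memmove(p, p + 6, strlen(p + 6) + 1);
}

int main(void) {
    snprintf(current_ir, IR_MAX, "r1=2+3; r9=0; ret r1;");
    printf("initial              %s\n", current_ir);
    apply("constant folding", constant_fold);
    apply("dead store elim", dead_store_elim);
    undo();                          /* developer rejects the last step */
    return 0;
}
```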


Languages, Compilers, and Tools for Embedded Systems | 2004

EMBARC: an efficient memory bank assignment algorithm for retargetable compilers

Jason D. Hiser; Jack W. Davidson

Many architectures today, especially embedded systems, have multiple memory partitions, each with potentially different performance and energy characteristics. To meet the strict time-to-market requirements of systems containing these chips, compilers require retargetable algorithms for effectively assigning values to the memory partitions. The EMBARC algorithm described in this paper is the first to attempt to realize a comprehensive, retargetable algorithm for effective partition assignment of variables in an arbitrary memory hierarchy. It supports a wide variety of memory models including on-chip SRAMs, multiple layers of caches, and even uncached DRAM partitions. Even though it is designed to handle such a range of memory hierarchies, EMBARC is capable of generating partition assignments of similar quality to algorithms designed for specific memory hierarchies. We use a large range of benchmarks and memory models to demonstrate the effectiveness of the EMBARC algorithm. We found that EMBARC can achieve 99% of the improvement of a dedicated algorithm for cacheless systems without SRAM. Also, for cacheless systems with SRAM, EMBARC generated the optimal partition assignment for benchmarks that were simple enough to hand-generate an optimal assignment. As further proof of EMBARC's generality, we also show how EMBARC can be used to generate partition assignments for a memory hierarchy with two on-chip caches that can be accessed in parallel.
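
At its simplest, partition assignment weighs each variable's estimated access count against each partition's capacity and access cost. The greedy, frequency-ordered placement below illustrates that trade-off for a two-bank machine; EMBARC's actual cost model is considerably richer (caches, uncached DRAM, retargetable machine descriptions), so treat this only as a sketch of the problem being solved, with invented bank and variable data.

```c
#include <stdio.h>
#include <stdlib.h>

/* A memory partition with a capacity and a per-access cost, and a program
 * variable with a size and an estimated access count.  The greedy policy
 * (place the hottest variables in the cheapest partition that still has
 * room) is an illustration of partition assignment, not EMBARC's model. */

typedef struct { const char *name; int capacity; int cost; int used; } bank_t;
typedef struct { const char *name; int size; long accesses; int bank; } var_t;

static int hotter_first(const void *a, const void *b) {
    long d = ((const var_t *)b)->accesses - ((const var_t *)a)->accesses;
    return d > 0 ? 1 : d < 0 ? -1 : 0;
}

static void assign(var_t *vars, int nv, bank_t *banks, int nb) {
    qsort(vars, (size_t)nv, sizeof *vars, hotter_first);
    for (int v = 0; v < nv; v++) {
        vars[v].bank = -1;
        for (int b = 0; b < nb; b++) {          /* banks listed cheapest first */
            if (banks[b].used + vars[v].size <= banks[b].capacity) {
                banks[b].used += vars[v].size;
                vars[v].bank = b;
                break;
            }
        }
    }
}

int main(void) {
    bank_t banks[] = {
        { "on-chip SRAM", 2048,  1, 0 },
        { "DRAM",        65536, 10, 0 },
    };
    var_t vars[] = {
        { "coeff_table", 1024, 900000, -1 },
        { "scratch",     4096, 500000, -1 },
        { "log_buffer",  1024,   2000, -1 },
    };
    assign(vars, 3, banks, 2);
    for (int v = 0; v < 3; v++)
        printf("%-12s -> %s\n", vars[v].name,
               vars[v].bank >= 0 ? banks[vars[v].bank].name : "(unplaced)");
    return 0;
}
```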


ACM Transactions on Architecture and Code Optimization | 2005

Fast and efficient searches for effective optimization-phase sequences

Prasad A. Kulkarni; Stephen Hines; David B. Whalley; Jason D. Hiser; Jack W. Davidson; Douglas L. Jones

It has long been known that a fixed ordering of optimization phases will not produce the best code for every application. One approach for addressing this phase-ordering problem is to use an evolutionary algorithm to search for a specific sequence of phases for each module or function. While such searches have been shown to produce more efficient code, the approach can be extremely slow because the application is compiled and possibly executed to evaluate each sequences effectiveness. Consequently, evolutionary or iterative compilation schemes have been promoted for compilation systems targeting embedded applications where meeting strict constraints on execution time, code size, and power consumption is paramount and longer compilation times may be tolerated in the final stage of development, when an application is compiled one last time and embedded in a product. Unfortunately, even for small embedded applications, the search process can take many hours or even days making the approach less attractive to developers. In this paper, we describe two complementary general approaches for achieving faster searches for effective optimization sequences when using a genetic algorithm. The first approach reduces the search time by avoiding unnecessary executions of the application when possible. Results indicate search time reductions of 62%, on average, often reducing searches from hours to minutes. The second approach modifies the search so fewer generations are required to achieve the same results. Measurements show this approach decreases the average number of required generations by 59%. These improvements have the potential for making evolutionary compilation a viable choice for tuning embedded applications.


Languages, Compilers, and Tools for Embedded Systems | 2009

Addressing the challenges of DBT for the ARM architecture

Ryan W. Moore; José A. Baiocchi; Bruce R. Childers; Jack W. Davidson; Jason D. Hiser

Dynamic binary translation (DBT) can provide security, virtualization, resource management and other desirable services to embedded systems. Although DBT has many benefits, its run-time performance overhead can be relatively high. The run-time overhead is important in embedded systems due to their slow processor clock speeds, simple microarchitectures, and small caches. This paper addresses how to implement efficient DBT for ARM-based embedded systems, taking into account instruction set and cache/TLB nuances. We develop several techniques that reduce DBT overhead for the ARM. Our techniques focus on cache and TLB behavior. We tested the techniques on an ARM-based embedded device and found that DBT overhead was reduced by 54% in comparison to a general-purpose DBT configuration that is known to perform well, thus further enabling DBT for a wide range of purposes.
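
The abstract does not enumerate the individual techniques, but one generic way a DBT system can be kinder to a small instruction cache and TLB, shown below purely as a hedged illustration and not as the paper's specific ARM optimizations, is to pack translated fragments densely into one contiguous code cache and start each fragment on a cache-line boundary.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <stdalign.h>

/* A bump allocator for translated fragments that (a) keeps all fragments
 * inside one contiguous region, so they share few TLB entries, and
 * (b) starts each fragment on an instruction-cache-line boundary.  This
 * is an illustrative allocation policy, not the paper's technique set. */

#define CODE_CACHE_SIZE (256 * 1024)
#define ICACHE_LINE     32            /* typical embedded ARM line size */

static alignas(ICACHE_LINE) uint8_t code_cache[CODE_CACHE_SIZE];
static size_t bump;

static void *alloc_fragment(const uint8_t *translated, size_t len) {
    size_t start = (bump + ICACHE_LINE - 1) & ~((size_t)ICACHE_LINE - 1);
    if (start + len > CODE_CACHE_SIZE)
        return NULL;                  /* caller must flush and restart */
    memcpy(&code_cache[start], translated, len);
    bump = start + len;
    return &code_cache[start];
}

int main(void) {
    uint8_t frag[40] = { 0xde, 0xad };            /* pretend ARM code */
    void *a = alloc_fragment(frag, sizeof frag);
    void *b = alloc_fragment(frag, sizeof frag);
    printf("fragment A at offset %zu\n", (size_t)((uint8_t *)a - code_cache));
    printf("fragment B at offset %zu (line-aligned)\n",
           (size_t)((uint8_t *)b - code_cache));
    return 0;
}
```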


Virtual Execution Environments | 2006

Evaluating fragment construction policies for SDT systems

Jason D. Hiser; Daniel W. Williams; Adrian Filipi; Jack W. Davidson; Bruce R. Childers

Software Dynamic Translation (SDT) systems have been used for program instrumentation, dynamic optimization, security policy enforcement, intrusion detection, and many other purposes. To be widely applicable, the overhead (runtime, memory usage, and power consumption) should be as low as possible. For instance, if an SDT system protecting a web server against possible attacks causes a 30% slowdown, a company may need 30% more machines to handle the web traffic it expects. Consequently, the causes of SDT overhead should be studied rigorously. This work evaluates many alternative policies for the creation of fragments within the Strata SDT framework. In particular, we examine the effects of ending translation at conditional branches; ending translation at unconditional branches; whether to use partial inlining for call instructions; whether to build the target of calls immediately or lazily; whether to align branch targets; and how to place code to transition back to the dynamic translator. We find that effective translation strategies are vital to program performance, reducing average overhead on the SPEC CPU2000 benchmark suite from as much as 28% to as little as 3%. We further demonstrate that these translation strategies are effective across several platforms, including Sun SPARC UltraSparc IIi, AMD Athlon Opteron, and Intel Pentium IV processors.
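
The policies listed above are essentially knobs on the loop that decides where a fragment ends. The toy fragment builder below makes those knobs explicit (stop at conditional branches, stop at unconditional branches, partially inline calls); the instruction model is invented and the builder only walks fall-through code, so it is a sketch of the design space rather than Strata's translator.

```c
#include <stdio.h>
#include <stdbool.h>

/* Policy knobs of the kind the paper evaluates: where to stop building a
 * fragment and whether to keep translating through calls. */

typedef enum { OP_NORMAL, OP_COND_BRANCH, OP_UNCOND_BRANCH, OP_CALL, OP_RET } op_kind_t;

typedef struct {
    bool stop_at_cond_branch;     /* end fragment at conditional branches   */
    bool stop_at_uncond_branch;   /* end fragment at unconditional branches */
    bool partially_inline_calls;  /* keep translating through call sites    */
} policy_t;

typedef struct { op_kind_t kind; const char *text; } insn_t;

/* Build one fragment starting at `pc`, returning the index one past the
 * last instruction included. */
static int build_fragment(const insn_t *code, int n, int pc, policy_t p) {
    printf("fragment @%d:\n", pc);
    while (pc < n) {
        printf("  %s\n", code[pc].text);
        op_kind_t k = code[pc].kind;
        pc++;
        if (k == OP_RET) break;
        if (k == OP_COND_BRANCH && p.stop_at_cond_branch) break;
        if (k == OP_UNCOND_BRANCH && p.stop_at_uncond_branch) break;
        if (k == OP_CALL && !p.partially_inline_calls) break;
    }
    return pc;
}

int main(void) {
    insn_t code[] = {
        { OP_NORMAL,        "add r1, r2"  },
        { OP_COND_BRANCH,   "beq L1"      },
        { OP_CALL,          "call helper" },
        { OP_NORMAL,        "mov r3, r1"  },
        { OP_UNCOND_BRANCH, "jmp L2"      },
        { OP_RET,           "ret"         },
    };
    int n = (int)(sizeof code / sizeof code[0]);

    policy_t aggressive   = { false, false, true  };  /* long fragments  */
    policy_t conservative = { true,  true,  false };  /* short fragments */

    printf("-- aggressive policy --\n");
    for (int pc = 0; pc < n; pc = build_fragment(code, n, pc, aggressive)) {}
    printf("-- conservative policy --\n");
    for (int pc = 0; pc < n; pc = build_fragment(code, n, pc, conservative)) {}
    return 0;
}
```

Longer fragments mean fewer exits back to the translator but more duplicated and potentially cold code in the code cache, which is the trade-off the paper quantifies.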

Collaboration


Dive into Jason D. Hiser's collaborations.

Top Co-Authors

Michele Co

University of Virginia

Wei Hu

University of Virginia
