Lorie M. Liebrock
New Mexico Institute of Mining and Technology
Publications
Featured research published by Lorie M. Liebrock.
Visualization for Computer Security | 2009
Daniel Quist; Lorie M. Liebrock
Reverse engineering compiled executables is a task with a steep learning curve. It is complicated by the need to translate assembly into a series of abstractions that represent the overall flow of a program. Most of the steps involve finding interesting areas of an executable and determining their overall functionality. This paper presents a method using dynamic analysis of program execution to visually represent the overall flow of a program. We use the Ether hypervisor framework to covertly monitor a program. The data is processed and presented for the reverse engineer. Using this method, the amount of time needed to extract key features of an executable is greatly reduced, improving productivity. A preliminary user study indicates that the tool is useful for both new and experienced users.
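The core of the approach is turning a recorded execution trace into flow data for visualization. A minimal sketch of that step, with a toy hand-written address trace standing in for the Ether-collected data:

```python
from collections import Counter

def flow_edges(trace):
    """Count transitions between consecutively executed addresses.

    `trace` is a hypothetical list of basic-block start addresses
    recorded during execution; the result approximates the observed
    control-flow graph as a map of (src, dst) -> hit count.
    """
    return Counter(zip(trace, trace[1:]))

# Toy trace: a loop between 0x401000 and 0x401020, then an exit block.
trace = [0x401000, 0x401020, 0x401000, 0x401020, 0x401040]
edges = flow_edges(trace)
hot = max(edges, key=edges.get)  # the most frequently taken edge
```

Edge weights like these are what let a visualization highlight hot loops and rarely taken paths for the reverse engineer.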
IEEE Symposium on Security and Privacy | 2011
Hugh Wimberly; Lorie M. Liebrock
Choosing the security architecture and policies for a system is a demanding task that must be informed by an understanding of user behavior. We investigate the hypothesis that adding visible security features to a system increases user confidence in the security of a system and thereby causes users to reduce how much effort they spend in other security areas. In our study, 96 volunteers each created a pair of accounts, one secured only by a password and one secured by both a password and a fingerprint reader. Our results strongly support our hypothesis: on average, when using the fingerprint reader, users created passwords that would take only one three-thousandth as long to break, thereby potentially negating the advantage two-factor authentication could have offered.
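The "one three-thousandth as long to break" comparison is a ratio of brute-force search times. A back-of-envelope sketch of that arithmetic (the charsets, lengths, and guess rate below are illustrative assumptions, not the study's data):

```python
def crack_time_seconds(charset_size, length, guesses_per_sec=1e10):
    """Worst-case time to exhaust the full keyspace by brute force."""
    return charset_size ** length / guesses_per_sec

# Hypothetical illustration: a weaker password chosen alongside a
# fingerprint reader vs. a stronger password-only choice.
weak = crack_time_seconds(26, 6)    # lowercase-only, 6 characters
strong = crack_time_seconds(62, 8)  # mixed-case + digits, 8 characters
ratio = strong / weak               # how much longer the strong one resists
```

Even modest reductions in length or character variety shrink the keyspace exponentially, which is why the observed behavioral shift matters.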
Computer Software and Applications Conference | 2010
Victor Echeverria; Lorie M. Liebrock; Dongwan Shin
One of the challenging problems cloud computing faces today is the security of data in the cloud. Since the physical location of user data in the cloud is unknown and the data are often distributed across multiple cloud services, a user-controllable and privacy-preserving access control mechanism is necessary for the success of cloud computing in general and for the protection of user data in particular. In this paper, we discuss a novel approach to controlling access to user data in the cloud; the concept is called Permission as a Service (PaaS). Specifically, PaaS separates access control from other services to provide a separate service in the cloud. This allows users to set permissions for all data in a single location. In PaaS, user data are encrypted to maintain confidentiality and permissions are managed via decryption keys. As a proof-of-concept, we discuss the design and implementation of our prototype leveraging attribute-based encryption (ABE).
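The core idea, permissions managed by releasing decryption keys from a single service, can be sketched with a toy key table. This stand-in replaces attribute-based encryption with plain per-resource keys, so it illustrates only the access-control flow, not the paper's cryptography:

```python
import os

class PermissionService:
    """Toy stand-in for the Permission-as-a-Service concept: data stays
    encrypted, and access is granted by releasing decryption keys.
    (ABE is replaced here by a simple key table; names are hypothetical.)
    """
    def __init__(self):
        self._keys = {}       # resource -> symmetric key
        self._grants = set()  # (user, resource) pairs

    def register(self, resource):
        self._keys[resource] = os.urandom(16)  # key used to encrypt the data

    def grant(self, user, resource):
        self._grants.add((user, resource))

    def key_for(self, user, resource):
        """Release the decryption key only to permitted users."""
        if (user, resource) in self._grants:
            return self._keys[resource]
        raise PermissionError(f"{user} may not read {resource}")
```

Because all grants live in one service, a user can audit and change permissions for all of their data in a single location, which is the property the abstract emphasizes.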
Computers & Security | 2015
Alexander D. Kent; Lorie M. Liebrock; Joshua Neil
User authentication over the network builds a foundation of trust within large-scale computer networks. The collection of this network authentication activity provides valuable insight into user behavior within an enterprise network. Representing this authentication data as a set of user-specific graphs and graph features, including time-constrained attributes, enables novel and comprehensive analysis opportunities. We show graph-based approaches to user classification and intrusion detection with practical results. We also show a method for assessing network authentication trust risk and cyber attack mitigation within an enterprise network using bipartite authentication graphs. We demonstrate the value of these graph-based approaches on a real-world authentication data set collected from an enterprise network.
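A minimal sketch of the graph representation: building a bipartite user-to-computer graph from authentication events and deriving a simple per-user feature (the event tuples below are invented examples):

```python
from collections import defaultdict

def auth_graph(events):
    """Build a bipartite user -> {computers} adjacency from
    (user, computer) authentication events.
    """
    graph = defaultdict(set)
    for user, computer in events:
        graph[user].add(computer)
    return graph

events = [("alice", "host1"), ("alice", "host2"),
          ("bob", "host1"), ("alice", "host1")]
graph = auth_graph(events)
# A simple graph feature: number of distinct computers each user touches.
fanout = {user: len(hosts) for user, hosts in graph.items()}
```

Features of this kind (degree, time-constrained edge counts, and so on) are what feed the classification and intrusion-detection analyses described above.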
International Workshop on Runtime and Operating Systems for Supercomputers | 2012
Hakan Akkan; Michael Lang; Lorie M. Liebrock
Scientific applications are interrupted by the operating system far too often. Historically, operating systems have been written efficiently to time-share a single resource, the CPU. We now have an abundance of cores, but we are still swapping out the application to run other tasks and therefore increasing the application's time to solution. Current task scheduling in Linux is not tuned for a high performance computing environment, where a single job is running on all available cores. For example, checking for context switches hundreds of times per second is counter-productive in this setting. One solution to this problem is to partition the cores between operating system and application; with the advent of many-core processors, this approach is more attractive. This work describes our investigation of isolation of application processes from the operating system using a soft-partitioning scheme. We use increasingly invasive approaches: from configuration changes with available Linux features such as control groups and pinning interrupts using the CPU affinity settings, to invasive source-level code changes to try to reduce, or in some cases completely eliminate, application interruptions such as OS clock ticks and timers. Explained here are the measures that can be taken to reduce application interruption solely with compile- and run-time configurations in a recent unmodified Linux kernel. Although these measures have been available for some time, to our knowledge, they have never been addressed in an HPC context. We then introduce our invasive method, where we remove the involuntary preemption induced by task scheduling. Our experiments show that parallel applications benefit from these modifications even at relatively small scales. At the modest scale of our testbed, we see a 1.72% improvement that should project into higher benefits at extreme scales.
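The soft-partitioning step, reserving a few cores for the OS and the rest for the application, can be sketched as follows (the one-OS-core split is an assumed example; the real partition would depend on the platform):

```python
def partition_cores(ncores, os_cores=1):
    """Split core IDs into an OS set and an application set,
    mirroring a soft-partitioning scheme: OS noise (clock ticks,
    kernel threads, interrupts) is confined to os_set.
    """
    cores = list(range(ncores))
    return set(cores[:os_cores]), set(cores[os_cores:])

os_set, app_set = partition_cores(8)
# On Linux, application processes would then be pinned with
# os.sched_setaffinity(pid, app_set), and interrupts steered to os_set
# via /proc/irq/*/smp_affinity; both are standard kernel facilities.
```

This is the non-invasive end of the spectrum described above; the invasive end modifies the kernel's scheduler behavior directly.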
ACM Symposium on Applied Computing | 2007
Lorie M. Liebrock; Nico Marrero; David P. Burton; Ron Prine; E. Cornelius; M. Shakamuri; Vincent Urias
Digital forensics is computationally intensive and current analysis systems do not handle the multiple terabyte size data sets that are now becoming a major issue for analysis. For these data sets, RAID file system analysis, parallel computing, collaboration, and visualization will be essential. Here we outline the preliminary design for a parallel digital forensics framework that is being developed to handle multiple terabyte size data set analysis.
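One building block of such a framework is chunked, parallel string search over very large evidence images. A minimal sketch using threads over overlapping chunks (chunk size and the boundary-overlap trick are illustrative choices, not the framework's design):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_search(data, term, chunk=1 << 20):
    """Find all offsets of `term` in a large byte buffer by searching
    fixed-size chunks concurrently. Chunks overlap by len(term) - 1
    bytes so matches spanning a boundary are not missed.
    """
    spans = [(off, data[off:off + chunk + len(term) - 1])
             for off in range(0, len(data), chunk)]

    def search(span):
        off, buf = span
        hits, s = [], buf.find(term)
        while s != -1:
            if s < chunk:  # hits in the overlap belong to the next chunk
                hits.append(off + s)
            s = buf.find(term, s + 1)
        return hits

    with ThreadPoolExecutor() as pool:
        results = list(pool.map(search, spans))
    return sorted(h for hits in results for h in hits)
```

In a real deployment the chunks would map onto RAID stripes or cluster nodes rather than threads, but the decomposition is the same.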
Journal of Computer Virology and Hacking Techniques | 2011
Daniel Quist; Lorie M. Liebrock; Joshua Neil
Modern malware protection systems bring an especially difficult problem to antivirus scanners. Simple obfuscation methods can diminish the effectiveness of a scanner significantly, often rendering it completely ineffective. This paper outlines the usage of a hypervisor-based deobfuscation engine that greatly improves the effectiveness of existing scanning engines. We have modified the Ether malware analysis framework to add the following features to deobfuscation: section and header rebuilding and automated kernel virtual address descriptor import rebuilding. Using these repair mechanisms, we have shown up to a 45% improvement in the effectiveness of antivirus scanning engines.
Visualization for Computer Security | 2008
Moses Schwartz; Lorie M. Liebrock
Digital forensic string search is vital to the forensic discovery process, but there has been little research on improving tools or methods for this task. We propose the use of term distribution visualizations to aid digital forensic string search tasks. Our visualization model enables an analyst to quickly identify relevant sections of a text and provides brushing and drilling-down capabilities to support analysis of large datasets. Initial user study results suggest that the visualizations are useful for information retrieval tasks, but further studies must be performed to obtain statistically significant results and to determine specific utility in digital forensic investigations.
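The data behind a term-distribution visualization is simply a histogram of where a term occurs across a document. A minimal sketch (bin count and sample text are invented for illustration):

```python
def term_distribution(text, term, bins=10):
    """Bucket the occurrence positions of `term` into `bins` equal
    slices of `text`; the counts drive a term-distribution bar chart
    that lets an analyst spot relevant regions at a glance.
    """
    counts = [0] * bins
    start = text.find(term)
    while start != -1:
        counts[min(start * bins // len(text), bins - 1)] += 1
        start = text.find(term, start + 1)
    return counts

text = "error ok ok error error ok"
dist = term_distribution(text, "error", bins=2)  # hits cluster in bin 0
```

Brushing and drill-down then amount to re-running the same computation over a selected slice of the text at finer granularity.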
Applied Mathematics and Computation | 2000
D.L. Hicks; Lorie M. Liebrock
Generalizations of the Lanczos derivative provide a basis for certain grid-free Finite Interpolation Methods (FIMs) which appear to have advantages over alternatives such as the standard Finite Difference Methods (FDMs) or Finite Element Methods (FEMs) for certain problems, e.g., hypervelocity impact problems in computational material dynamics.
Computers & Mathematics With Applications | 1999
D.L. Hicks; Lorie M. Liebrock
Grid-free methods, such as SPH (Smoothed Particle Hydrodynamics), may, eventually, be more efficacious in their representations of material dynamics than the standard fixed-grid methods. However, standard SPH (with stress = σ and interpolation weight = W) has instabilities when σW″ > 0; these instabilities can occur both in compression (σ > 0) and in tension (σ < 0). Conservative smoothing can control the SPH instabilities, but it may smooth out more of the short wave length structure than desired. SPH can also be stabilized by shifting the shape of W to change the sign of W″.
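The σW″ > 0 criterion can be checked numerically for a concrete kernel. A sketch using the standard 1D cubic-spline SPH kernel (unnormalized, with a finite-difference W″; both are simplifications for illustration):

```python
def cubic_spline_w(q):
    """Standard cubic-spline SPH kernel shape (normalization omitted),
    with q the particle separation in units of the smoothing length."""
    if q < 1.0:
        return 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3
    if q < 2.0:
        return 0.25 * (2.0 - q) ** 3
    return 0.0

def second_derivative(w, q, eps=1e-4):
    """Central finite-difference estimate of W''(q)."""
    return (w(q - eps) - 2.0 * w(q) + w(q + eps)) / eps ** 2

def unstable(sigma, q):
    """Instability criterion from the abstract: sigma * W'' > 0."""
    return sigma * second_derivative(cubic_spline_w, q) > 0
```

For this kernel W″ is negative near the particle (q < 2/3) and positive farther out, so tension is destabilizing at close range and compression at longer range, which is why reshaping W to flip the sign of W″ can stabilize the method.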