
Publication


Featured research published by Daniela A. S. de Oliveira.


European Conference on Computer Systems | 2014

Cooperation and security isolation of library OSes for multi-process applications

Chia-Che Tsai; Kumar Saurabh Arora; Nehal Bandi; Bhushan Jain; William Jannen; Jitin John; Harry A. Kalodner; Vrushali Kulkarni; Daniela A. S. de Oliveira; Donald E. Porter

Library OSes are a promising approach for applications to efficiently obtain the benefits of virtual machines, including security isolation, host platform compatibility, and migration. Library OSes refactor a traditional OS kernel into an application library, avoiding overheads incurred by duplicate functionality. When compared to running a single application on an OS kernel in a VM, recent library OSes reduce the memory footprint by an order of magnitude. Previous library OS (libOS) research has focused on single-process applications, yet many Unix applications, such as network servers and shell scripts, span multiple processes. Key design challenges for a multi-process libOS include management of shared state and minimal expansion of the security isolation boundary. This paper presents Graphene, a library OS that seamlessly and efficiently executes both single- and multi-process applications, generally with low memory and performance overheads. Graphene broadens the libOS paradigm to support secure, multi-process APIs, such as copy-on-write fork, signals, and System V IPC. Multiple libOS instances coordinate over pipe-like byte streams to implement a consistent, distributed POSIX abstraction. These coordination streams provide a simple vantage point to enforce security isolation.
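The coordination-stream idea can be illustrated with a toy sketch: a leader instance hands out disjoint PID ranges over a pipe-like message stream, so every libOS instance sees one consistent namespace. This is a minimal illustration under invented names, not Graphene's actual protocol or wire format.

```python
import json
import queue

class CoordinationStream:
    """Stand-in for a pipe-like byte stream between libOS instances."""
    def __init__(self):
        self.q = queue.Queue()
    def send(self, msg):
        self.q.put(json.dumps(msg).encode())
    def recv(self):
        return json.loads(self.q.get().decode())

class Leader:
    """Hands out disjoint PID ranges so instances never collide."""
    def __init__(self):
        self.next = 1
    def handle(self, msg):
        if msg["op"] == "alloc_range":
            lo, self.next = self.next, self.next + msg["n"]
            return {"lo": lo, "hi": self.next - 1}

stream = CoordinationStream()
leader = Leader()

# An instance requests a PID range over the stream; the leader replies.
stream.send({"op": "alloc_range", "n": 32})
reply = leader.handle(stream.recv())
print(reply)   # {'lo': 1, 'hi': 32}

# A second instance gets the next disjoint range.
stream.send({"op": "alloc_range", "n": 32})
reply2 = leader.handle(stream.recv())
print(reply2)  # {'lo': 33, 'hi': 64}
```

Because all allocation flows through the stream, the same vantage point that keeps the namespace consistent is also where isolation policy could be enforced.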


Architectural Support for Programming Languages and Operating Systems | 2006

ExecRecorder: VM-based full-system replay for attack analysis and system recovery

Daniela A. S. de Oliveira; Jedidiah R. Crandall; Gary Wassermann; S. Felix Wu; Zhendong Su; Frederic T. Chong

Log-based recovery and replay systems are important for system reliability, debugging and postmortem analysis/recovery of malware attacks. These systems must incur low space and performance overhead, provide full-system replay capabilities, and be resilient against attacks. Previous approaches fail to meet these requirements: they replay only a single process, or require changes in the host and guest OS, or do not have a fully-implemented replay component. This paper studies full-system replay for uniprocessors by logging and replaying architectural events. To limit the amount of logged information, we identify architectural nondeterministic events, and encode them compactly. Here we present ExecRecorder, a full-system, VM-based, log and replay framework for post-attack analysis and recovery. ExecRecorder can replay the execution of an entire system by checkpointing the system state and logging architectural nondeterministic events, and imposes low performance overhead (less than 4% on average). In our evaluation its log files grow at about 5.4 GB/hour (arithmetic mean). Thus it is practical to log on the order of hours or days between checkpoints. It can also be integrated naturally with an IDS and a post-attack analysis tool for intrusion analysis and recovery.
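The core record/replay loop can be sketched in miniature: during recording, each nondeterministic event is appended to a log; during replay, the same computation consumes the logged values instead of re-sampling them, reproducing the run exactly. This toy stands in for ExecRecorder's architectural-event logging and is not its actual implementation.

```python
import random

class Logger:
    """Record nondeterministic events (toy stand-in for the event log)."""
    def __init__(self):
        self.log = []
    def record(self, kind, value):
        self.log.append((kind, value))
        return value

def run(nondet, trace):
    """A 'system' whose execution depends only on nondeterministic inputs."""
    state = 0
    for _ in range(3):
        state = state * 31 + nondet("timer_interrupt")
        trace.append(state)
    return state

# Recording: sample nondeterministic values and log them.
log = Logger()
rng = random.Random(7)
recorded = []
final = run(lambda kind: log.record(kind, rng.randrange(100)), recorded)

# Replay: feed back the logged values instead of re-sampling.
events = iter(log.log)
replayed = []
refinal = run(lambda kind: next(events)[1], replayed)
print(recorded == replayed and final == refinal)  # True
```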


Annual Computer Security Applications Conference | 2009

Protecting Kernel Code and Data with a Virtualization-Aware Collaborative Operating System

Daniela A. S. de Oliveira; S. Felix Wu

The traditional virtual machine usage model advocates placing security mechanisms in a trusted VM layer and letting the untrusted guest OS run unaware of the presence of virtualization. In this work we challenge this traditional model and propose a collaboration approach between a virtualization-aware operating system and a VM layer to prevent tampering against kernel code and data. Our integrity model is a relaxed version of Biba's, and the main idea is to have all attempted writes into kernel code and data segments checked for validity at the VM level. The OS-VM collaboration bridges the semantic gap between tracing low-integrity objects at the OS level (files, processes, modules, allocated areas) and at the architecture level (memory and registers). We have implemented this approach in a proof-of-concept prototype and have successfully tested it against 6 rootkits (including a non-control data attack) and 4 real-world benign LKM/drivers. All rootkits were prevented from corrupting kernel space and no false positives were triggered for benign modules. Performance measurements show that the average overhead to the VM for the OS-VM communication is low (7%, CPU benchmarks). The greatest overhead is caused by the memory monitoring module inside the VM: 1.38X alone and 1.46X when combined with the OS-VM communication. For OS microbenchmarks the slowdown for the OS-VM communication was 1.16X on average.
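The write-validity check can be sketched as follows: the VM layer blocks writes into kernel text unless the OS has collaboratively declared that region writable (e.g., for a benign module load). Addresses, ranges, and names here are invented for illustration; this is not the prototype's code.

```python
# Hypothetical kernel-text address range for the sketch.
KERNEL_TEXT = range(0xC0000000, 0xC0100000)

class VMMonitor:
    """Toy VM-level monitor that validates writes into kernel space."""
    def __init__(self):
        self.allowed = []  # regions the OS declared writable

    def os_declares(self, lo, hi):
        # OS-VM collaboration: the OS tells the VM which kernel-space
        # writes are legitimate, bridging the semantic gap.
        self.allowed.append((lo, hi))

    def check_write(self, addr):
        if addr in KERNEL_TEXT and not any(
                lo <= addr <= hi for lo, hi in self.allowed):
            return "blocked"  # rootkit-style write into kernel text
        return "ok"

vm = VMMonitor()
before = vm.check_write(0xC0000040)
print(before)                            # blocked
vm.os_declares(0xC0000000, 0xC0000FFF)   # benign module region mapped by the OS
after = vm.check_write(0xC0000040)
print(after)                             # ok
```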


IEEE International Conference on Semantic Computing | 2011

Understanding Cancer-Based Networks in Twitter Using Social Network Analysis

Dhiraj Murthy; Alexander Gross; Daniela A. S. de Oliveira

Web-based social media networks increasingly carry health-related information, resources, and networks (both support and professional). Although we are aware of the presence of these health networks, we do not yet know (1) their ability to influence the flow of health-related behaviors, attitudes, and information, or (2) which resources have the most influence in shaping particular health outcomes. Lastly, the health research community lacks easy-to-use data gathering tools to conduct applied research using data from social media websites. In this position paper we discuss and sketch our current work on addressing fundamental questions about information flow in cancer-related social media networks by visualizing and understanding authority, trust, and cohesion. We discuss the development of methods to visualize these networks and information flow on them using real-time data from the social media website Twitter and how these networks influence health outcomes by examining responses to specific health messages.
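One of the measures the paper discusses, authority, has a simple proxy: in-degree in a retweet/mention graph. The sketch below uses an invented toy graph, not the paper's data or methods.

```python
from collections import Counter

# Invented directed edges: (who mentions/retweets, who is mentioned).
edges = [("alice", "clinic"), ("bob", "clinic"), ("carol", "clinic"),
         ("clinic", "journal"), ("bob", "alice")]

# In-degree as a crude proxy for "authority" in the network.
authority = Counter(dst for _, dst in edges)
print(authority.most_common(1))  # [('clinic', 3)]
```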


Annual Computer Security Applications Conference | 2014

It's the psychology stupid: how heuristics explain software vulnerabilities and how priming can illuminate developer's blind spots

Daniela A. S. de Oliveira; Marissa Rosenthal; Nicole Morin; Kuo-Chuan Yeh; Justin Cappos; Yanyan Zhuang

Despite the security community's emphasis on the importance of building secure software, the number of new vulnerabilities found in our systems is increasing. In addition, vulnerabilities that have been studied for years are still commonly reported in vulnerability databases. This paper investigates a new hypothesis that software vulnerabilities are blind spots in developers' heuristic-based decision-making processes. Heuristics are simple computational models to solve problems without considering all the information available. They are an adaptive response to our short working memory because they require less cognitive effort. Our hypothesis is that as software vulnerabilities represent corner cases that exercise unusual information flows, they tend to be left out of the repertoire of heuristics used by developers during their programming tasks. To validate this hypothesis we conducted a study with 47 developers using psychological manipulation. In this study each developer worked for approximately one hour on six vulnerable programming scenarios. The sessions progressed from providing no information about the possibility of vulnerabilities, to priming developers about unexpected results, and explicitly mentioning the existence of vulnerabilities in the code. The results show that (i) security is not a priority in software development environments, (ii) security is not part of developers' mindset while coding, (iii) developers assume common cases for their code, (iv) security thinking requires cognitive effort, (v) security education helps, but developers can have difficulties correlating a particular learned vulnerability or security information with their current working task, and (vi) priming or explicitly cueing about vulnerabilities on-the-spot is a powerful mechanism to make developers aware about potential vulnerabilities.


Network Operations and Management Symposium | 2008

Bezoar: Automated virtual machine-based full-system recovery from control-flow hijacking attacks

Daniela A. S. de Oliveira; Jedidiah R. Crandall; Gary Wassermann; Shaozhi Ye; Shyhtsun Felix Wu; Zhendong Su; Frederic T. Chong

System availability is difficult for systems to maintain in the face of Internet worms. Large systems have vulnerabilities, and if a system attempts to continue operation after an attack, it may not behave properly. Traditional mechanisms for detecting attacks disrupt service and current recovery approaches are application-based and cannot guarantee recovery in the face of exploits that corrupt the kernel, involve multiple processes or target multithreaded network services. This paper presents Bezoar, an automated full-system virtual machine-based approach to recover from zero-day control-flow hijacking attacks. Bezoar tracks down the source of network bytes in the system and after an attack, replays the checkpointed run while ignoring inputs from the malicious source. We evaluated our proof-of-concept prototype on six notorious exploits for Linux and Windows. In all cases, it recovered the full system state and resumed execution. Bezoar incurs low overhead to the virtual machine: less than 1% for the recovery and log components and approximately 1.4X for the memory monitor component that tracks down network bytes, for five SPEC INT 2000 benchmarks.
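Bezoar's recovery idea, replaying a checkpointed run while dropping the attacker's inputs, can be sketched in a few lines. The run below tags each network input with its source and filters the malicious one on replay; sources and payloads are invented, and this is far simpler than the actual byte-level tracking in the paper.

```python
def run_system(inputs, drop_source=None):
    """Replay a logged run; optionally ignore inputs from one source."""
    state = []
    for src, data in inputs:
        if src == drop_source:
            continue  # replay ignores the attacker's bytes
        state.append(data)
    return state

# Invented input log: (source address, payload).
log = [("10.0.0.5", "GET /"), ("6.6.6.6", "EXPLOIT"), ("10.0.0.5", "GET /img")]

crashed = run_system(log)                          # original run includes the exploit
recovered = run_system(log, drop_source="6.6.6.6") # replay minus malicious source
print(recovered)  # ['GET /', 'GET /img']
```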


New Security Paradigms Workshop | 2012

Holographic vulnerability studies: vulnerabilities as fractures in interpretation as information flows across abstraction boundaries

Jedidiah R. Crandall; Daniela A. S. de Oliveira

We are always patching our systems against specific instances of whatever the latest new, hot, trendy vulnerability type is. First it was time-of-check-to-time-of-use, then buffer overflows, then SQL injection, then cross-site scripting. Vulnerability studies are supposed to accomplish two main goals: to classify vulnerabilities into general classes so that unknown vulnerabilities of that class can be discovered in a proactive way, and to enable us to understand the fundamental nature of vulnerabilities so that when we build new systems we know how to make them secure. In this paper we propose a new paradigm for vulnerability studies: we view vulnerabilities as fractures in the interpretation of information as the information flows across the boundaries of different abstractions. We argue that categorizing vulnerabilities based on this view, as opposed to the types of categories that have been used in past vulnerability studies, makes vulnerability types more easily generalizable and avoids problems where vulnerabilities could be put in multiple categories.


International Symposium on Software Reliability Engineering | 2016

Bear: A Framework for Understanding Application Sensitivity to OS (Mis) Behavior

Ruimin Sun; Andrew R. Lee; Aokun Chen; Donald E. Porter; Matt Bishop; Daniela A. S. de Oliveira

Applications are generally written assuming a predictable and well-behaved OS. In practice, they experience unpredictable misbehavior at the OS level and across OSes: different OSes can handle network events differently, APIs can behave differently across OSes, and OSes may be compromised or buggy. This unpredictability is challenging because its sources typically manifest during deployment and are hard to reproduce. This paper introduces Bear, a framework for statistical analysis of application sensitivity to OS unpredictability that can help developers build more resilient software, discover challenging bugs and identify the scenarios that most need validation. Bear analyzes a program with a set of perturbation strategies on a set of commonly used system calls in order to discover the most sensitive system calls for each application, the most impactful strategies, and how they predict abnormal program outcome. We evaluated Bear with 113 CPU- and IO-bound programs, and our results show that null memory dereferencing and erroneous buffer operations are the most impactful strategies for predicting abnormal program execution and that their impacts increase ten-fold with workload increase (e.g., the number of network requests growing from 10 to 1000). Generic system calls are more sensitive than specialized system calls; for example, write and sendto can both be used to send data through a socket, but the sensitivity of write is twice that of sendto. System calls with an array parameter (e.g., read) are more sensitive to perturbations than those having a struct parameter with a buffer (e.g., readv). Moreover, the fewer parameters a system call has, the more sensitive it is.
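The perturbation-and-observe loop can be sketched with a wrapper that corrupts a "system call" and records whether the program outcome stays normal. The syscall stand-in and the strategy below are invented illustrations, not Bear's actual fault-injection machinery.

```python
def perturb_buffer(syscall):
    """Perturbation strategy: erroneous buffer operation (drop a byte)."""
    def wrapped(buf):
        return syscall(buf[:-1])
    return wrapped

def write(buf):
    """Stand-in for the write(2) syscall: returns bytes written."""
    return len(buf)

# Run the same operation under each strategy and classify the outcome.
outcomes = {}
for name, call in [("baseline", write), ("short_buffer", perturb_buffer(write))]:
    n = call(b"hello")
    outcomes[name] = "normal" if n == 5 else "abnormal"
print(outcomes)  # {'baseline': 'normal', 'short_buffer': 'abnormal'}
```

Repeating this over many programs, syscalls, and strategies yields the sensitivity statistics the paper analyzes.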


IEEE Symposium on Security and Privacy | 2012

Bridging the Semantic Gap to Mitigate Kernel-Level Keyloggers

Jesús Navarro; Enrique Naudon; Daniela A. S. de Oliveira

Kernel-level keyloggers, which are installed as part of the operating system (OS) with complete control of kernel code, data and resources, are a growing and very serious threat to the security of current systems. Defending against this type of malware means defending the kernel itself against compromise and it is still an open and difficult problem. This paper details the implementation of two classical kernel-level keyloggers for Linux 2.6.38 and how current defense approaches still fail to protect OSes against this type of malware. We further present our current research directions to mitigate this threat by employing an architecture where a guest OS and a virtual machine layer actively collaborate to guarantee kernel integrity. This collaborative approach allows us to better bridge the semantic gap between the OS and architecture layers and devise stronger and more flexible defense solutions to protect the integrity of OS kernels.


Information Security | 2016

An Information Flow-Based Taxonomy to Understand the Nature of Software Vulnerabilities

Daniela A. S. de Oliveira; Jedidiah R. Crandall; Harry A. Kalodner; Nicole Morin; Megan Maher; Jesús Navarro; Felix Emiliano

Despite the emphasis on building secure software, the number of vulnerabilities found in our systems is increasing every year, and well-understood vulnerabilities continue to be exploited. A common response to vulnerabilities is patch-based mitigation, which does not completely address the flaw and is often circumvented by an adversary. The problem actually lies in a lack of understanding of the nature of vulnerabilities. Vulnerability taxonomies have been proposed, but their usability is limited because of their ambiguity and complexity. This paper presents a taxonomy that views vulnerabilities as fractures in the interpretation of information as it flows in the system. It also presents a machine learning study validating the taxonomy’s unambiguity. A manually labeled set of 641 vulnerabilities trained a classifier that automatically categorized more than 70,000 vulnerabilities from three distinct databases with an average success rate of 80%. Important lessons learned are discussed such as (i) approximately 12% of the studied reports provide insufficient information about vulnerabilities, and (ii) the roles of the reporter and developer are not leveraged, especially regarding information about tools used to find vulnerabilities and approaches to address them.
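The study's pipeline, training on manually labeled reports and then categorizing unseen ones, can be sketched with a trivial bag-of-words scorer. The labels, report texts, and scoring rule below are invented stand-ins; the paper's actual classifier and taxonomy categories differ.

```python
from collections import Counter

# Invented labeled reports: (report text, category).
labeled = [
    ("buffer overflow in strcpy call", "memory"),
    ("heap overflow corrupts allocator metadata", "memory"),
    ("sql injection via unsanitized query string", "injection"),
    ("script injection in comment field", "injection"),
]

# Per-category word counts learned from the labeled set.
counts = {}
for text, label in labeled:
    counts.setdefault(label, Counter()).update(text.split())

def classify(report):
    # Score each category by overlapping word counts (no smoothing: a sketch).
    return max(counts, key=lambda c: sum(counts[c][w] for w in report.split()))

print(classify("stack overflow in packet parser"))  # memory
print(classify("injection through form input"))     # injection
```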

Collaboration


Dive into Daniela A. S. de Oliveira's collaborations.

Top Co-Authors

Donald E. Porter
University of North Carolina at Chapel Hill

Matt Bishop
University of California

S. Felix Wu
University of California

André Grégio
Federal University of Paraná