Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rekha Bachwani is active.

Publications


Featured research published by Rekha Bachwani.


Proceedings of the 2014 Workshop on Artificial Intelligence and Security | 2014

Adversarial Active Learning

Brad Miller; Alex Kantchelian; Sadia Afroz; Rekha Bachwani; Edwin Dauber; Ling Huang; Michael Carl Tschantz; Anthony D. Joseph; J. D. Tygar

Active learning is an area of machine learning examining strategies for allocation of finite resources, particularly human labeling efforts and to an extent feature extraction, in situations where available data exceeds available resources. In this open problem paper, we motivate the necessity of active learning in the security domain, identify problems caused by the application of present active learning techniques in adversarial settings, and propose a framework for experimentation and implementation of active learning systems in adversarial contexts. More than other contexts, adversarial contexts particularly need active learning as ongoing attempts to evade and confuse classifiers necessitate constant generation of labels for new content to keep pace with adversarial activity. Just as traditional machine learning algorithms are vulnerable to adversarial manipulation, we discuss assumptions specific to active learning that introduce additional vulnerabilities, as well as present vulnerabilities that are amplified in the active learning setting. Lastly, we present a software architecture, Security-oriented Active Learning Testbed (SALT), for the research and implementation of active learning applications in adversarial contexts.
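The core loop the paper builds on can be illustrated with a short sketch: uncertainty sampling, where a classifier spends a fixed human-labeling budget on the unlabeled examples it is least certain about. The dataset, model, and budget below are illustrative assumptions, not the SALT testbed described in the paper.

```python
# Minimal uncertainty-sampling active-learning loop (illustrative only; not SALT).
# A classifier asks a human "oracle" to label the examples it is least certain
# about, under a fixed labeling budget.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
unlabeled = [i for i in range(len(X)) if i not in set(labeled)]
budget = 100  # total human labels we can afford

clf = LogisticRegression(max_iter=1000)
while budget > 0:
    clf.fit(X[labeled], y[labeled])
    # Query the point whose predicted probability is closest to 0.5.
    probs = clf.predict_proba(X[unlabeled])[:, 1]
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(query)          # the oracle supplies y[query]
    unlabeled.remove(query)
    budget -= 1

clf.fit(X[labeled], y[labeled])    # retrain on all labels gathered so far
print("accuracy:", clf.score(X, y))
```

In an adversarial setting, the paper argues, exactly this query strategy becomes an attack surface: an adversary who can anticipate which samples will be queried can craft content to poison or evade the learner.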


International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment | 2016

Reviewer Integration and Performance Measurement for Malware Detection

Brad Miller; Alex Kantchelian; Michael Carl Tschantz; Sadia Afroz; Rekha Bachwani; Riyaz Faizullabhoy; Ling Huang; Vaishaal Shankar; Tony Wu; George Yiu; Anthony D. Joseph; J. D. Tygar

We present and evaluate a large-scale malware detection system integrating machine learning with expert reviewers, treating reviewers as a limited labeling resource. We demonstrate that even in small numbers, reviewers can vastly improve the system's ability to keep pace with evolving threats. We conduct our evaluation on a sample of VirusTotal submissions spanning 2.5 years and containing 1.1 million binaries with 778 GB of raw feature data. Without reviewer assistance, we achieve 72% detection at a 0.5% false positive rate, performing comparably to the best vendors on VirusTotal. Given a budget of 80 accurate reviews daily, we improve detection to 89% and are able to detect 42% of malicious binaries undetected upon initial submission to VirusTotal. Additionally, we identify a previously unnoticed temporal inconsistency in the labeling of training datasets. We compare the impact of training labels obtained at the same time training data is first seen with training labels obtained months later. We find that using training labels obtained well after samples appear, and thus unavailable in practice for current training data, inflates measured detection by almost 20 percentage points. We release our cluster-based implementation, as well as a list of all hashes in our evaluation and 3% of our entire dataset.
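The headline metric, detection at a fixed false positive rate, can be sketched as follows: pick a score threshold so that at most the target fraction of benign samples exceed it, then report the fraction of malicious samples above that threshold. The synthetic scores below are purely illustrative; this is not the released implementation.

```python
# Illustrative sketch (not the paper's code): measure detection rate at a
# fixed false-positive rate by choosing the score threshold on benign
# samples, then evaluating recall on malicious samples.
import numpy as np

def detection_at_fpr(scores, labels, target_fpr=0.005):
    scores, labels = np.asarray(scores), np.asarray(labels)
    benign = np.sort(scores[labels == 0])
    # Threshold such that at most target_fpr of benign samples exceed it.
    k = int(np.ceil(len(benign) * (1 - target_fpr))) - 1
    threshold = benign[min(max(k, 0), len(benign) - 1)]
    detected = (scores[labels == 1] > threshold).mean()
    return threshold, detected

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1.0, 10000),   # benign scores
                         rng.normal(2.5, 1.0, 1000)])   # malicious scores
labels = np.concatenate([np.zeros(10000), np.ones(1000)])
thr, det = detection_at_fpr(scores, labels)
print(f"threshold={thr:.2f}, detection at 0.5% FPR = {det:.1%}")
```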


Journal of Systems and Software | 2014

Recommending software upgrades with Mojave

Rekha Bachwani; Olivier Crameri; Ricardo Bianchini; Willy Zwaenepoel

Software upgrades are frequent. Unfortunately, many of the upgrades either fail or misbehave. We argue that many of these failures can be avoided for users of the new version of the software by exploiting the characteristics of the upgrade and feedback from the users that have already installed it. To demonstrate that this can be achieved, we build Mojave, the first recommendation system for software upgrades. Mojave leverages data from the existing and new users, machine learning, and static and dynamic source analyses. For each new user, Mojave computes the likelihood that the upgrade will fail for him/her. Based on this value, Mojave recommends for or against the upgrade. We evaluate Mojave for three real upgrade problems with the OpenSSH suite, and one synthetic upgrade problem each in the SQLite database and the uServer Web server. Our results show that Mojave provides accurate recommendations to new users.
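A minimal sketch of the recommendation step, assuming a generic classifier over hypothetical environment features rather than Mojave's actual analyses: earlier users' upgrade outcomes train a model that yields a failure probability for a new user, and a threshold turns that probability into a recommendation.

```python
# Hypothetical sketch in the spirit of an upgrade recommender (not the actual
# Mojave system): predict the failure probability for a new user from earlier
# users' environment features and upgrade outcomes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Rows: earlier users; columns: illustrative environment features
# (e.g., OS version, config options, library versions encoded numerically).
X_existing = rng.integers(0, 5, size=(500, 8))
failed = (X_existing[:, 0] == 3).astype(int)   # synthetic outcome signal

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_existing, failed)

new_user = rng.integers(0, 5, size=(1, 8))
p_fail = model.predict_proba(new_user)[0, 1]
recommendation = "delay the upgrade" if p_fail > 0.5 else "upgrade"
print(f"predicted failure probability {p_fail:.2f}: {recommendation}")
```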


Archive | 2012

Preventing and diagnosing software upgrade failures

Ricardo Bianchini; Rekha Bachwani

Modern software systems are complex and comprise many interacting and dependent components. Frequent upgrades are required to fix bugs, patch security vulnerabilities, and add or remove features. Unfortunately, many upgrades either fail or produce undesired behavior resulting in service disruption, user dissatisfaction, and/or monetary loss. To make matters worse, when upgrades fail or misbehave, developers are given limited (and often unstructured) information to pinpoint and correct the problems. In this dissertation, we propose two systems to improve the management of software upgrades. Both systems rely on environment information and dynamic execution data collected from users who have previously upgraded the software. The first (called Mojave) is an upgrade recommendation system that informs a user who intends to upgrade the software about whether the upgrade is likely to succeed. Regardless of Mojave's recommendation, if the user decides to upgrade and it fails, our second system (called Sahara) comes into play. Sahara is a failed-upgrade debugging system that identifies a small subset of routines that are likely to contain the root cause of the failure. We evaluate both systems using several real upgrade failures with widely used software. Our results demonstrate that our systems are very accurate in predicting upgrade failures and identifying the likely culprits for upgrade failures.
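As a rough illustration of how failed-upgrade debugging can be narrowed to a few suspect routines, the sketch below ranks routines by a Tarantula-style ratio of how often they execute in failing versus passing runs. This is an assumption made for illustration, not Sahara's actual algorithm, and the routine names are hypothetical.

```python
# Hypothetical illustration of narrowing debugging to a few suspect routines
# (not Sahara's algorithm): rank routines by how much more often they appear
# in failing upgrade runs than in successful ones.
from collections import Counter

passing_runs = [{"parse_config", "load_keys", "connect"},
                {"parse_config", "connect"}]
failing_runs = [{"parse_config", "load_keys", "migrate_state", "connect"},
                {"parse_config", "migrate_state"}]

pass_hits = Counter(r for run in passing_runs for r in run)
fail_hits = Counter(r for run in failing_runs for r in run)

def suspiciousness(routine):
    f = fail_hits[routine] / len(failing_runs)   # rate in failing runs
    p = pass_hits[routine] / len(passing_runs)   # rate in passing runs
    return f / (f + p) if (f + p) else 0.0       # Tarantula-style score

routines = set(pass_hits) | set(fail_hits)
for r in sorted(routines, key=suspiciousness, reverse=True):
    print(f"{r:15s} {suspiciousness(r):.2f}")
```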


USENIX Annual Technical Conference | 2006

Understanding and validating database system administration

Fábio Oliveira; Kiran Nagaraja; Rekha Bachwani; Ricardo Bianchini; Richard P. Martin; Thu D. Nguyen


Proceedings of the 8th ACM Workshop on Artificial Intelligence and Security | 2015

Better Malware Ground Truth: Techniques for Weighting Anti-Virus Vendor Labels

Alex Kantchelian; Michael Carl Tschantz; Sadia Afroz; Brad Miller; Vaishaal Shankar; Rekha Bachwani; Anthony D. Joseph; J. D. Tygar


International Conference on Software Maintenance | 2011

Sahara: Guiding the debugging of failed software upgrades

Rekha Bachwani; Olivier Crameri; Ricardo Bianchini; Dejan Kostic; Willy Zwaenepoel


Archive | 2009

Oasis: Concolic Execution Driven by Test Suites and Code Modifications

Olivier Crameri; Rekha Bachwani; Tim Brecht; Ricardo Bianchini; Dejan Kostic; Willy Zwaenepoel


MAD | 2012

Mojave: A Recommendation System for Software Upgrades.

Rekha Bachwani; Olivier Crameri; Ricardo Bianchini


arXiv: Cryptography and Security | 2015

Back to the Future: Malware Detection with Temporally Consistent Labels.

Brad Miller; Alex Kantchelian; Michael Carl Tschantz; Sadia Afroz; Rekha Bachwani; Riyaz Faizullabhoy; Ling Huang; Vaishaal Shankar; Tony Wu; George Yiu; Anthony D. Joseph; J. D. Tygar

Collaboration


Dive into Rekha Bachwani's collaborations.

Top Co-Authors

Brad Miller, University of California
J. D. Tygar, University of California
Olivier Crameri, École Polytechnique Fédérale de Lausanne
Ling Huang, University of California