
Publication


Featured research published by Janak J. Parekh.


Recent Advances in Intrusion Detection | 2006

Anagram: a content anomaly detector resistant to mimicry attack

Ke Wang; Janak J. Parekh; Salvatore J. Stolfo

In this paper, we present Anagram, a content anomaly detector that models a mixture of high-order n-grams (n > 1) designed to detect anomalous and “suspicious” network packet payloads. By using higher-order n-grams, Anagram can detect significant anomalous byte sequences and generate robust signatures of validated malicious packet content. The Anagram content models are implemented using highly efficient Bloom filters, reducing space requirements and enabling privacy-preserving cross-site correlation. The sensor models the distinct content flow of a network or host using a semi-supervised training regimen. Previously known exploits, extracted from the signatures of an IDS, are likewise modeled in a Bloom filter and are used during training as well as detection time. We demonstrate that Anagram can identify anomalous traffic with high accuracy and low false positive rates. Anagram’s high-order n-gram analysis technique is also resilient against simple mimicry attacks that blend exploits with “normal”-appearing byte padding, such as the blended polymorphic attack recently demonstrated in [1]. We discuss randomized n-gram models, which further raise the bar and make it more difficult for attackers to build precise packet structures to evade Anagram even if they know the distribution of the local site’s content flow. Finally, Anagram’s speed and high detection rate make it valuable not only as a standalone sensor, but also as a network anomaly flow classifier in an instrumented fault-tolerant host-based environment; this enables significant cost amortization and the possibility of a “symbiotic” feedback loop that can improve accuracy and reduce false positive rates over time.
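To make the abstract's core idea concrete, here is a minimal sketch of Bloom-filter-based n-gram anomaly scoring in the spirit of Anagram: train a Bloom filter on the n-grams of normal payloads, then score a new payload by the fraction of its n-grams never seen during training. All class names, sizes, and thresholds here are illustrative simplifications, not the authors' implementation.

```python
import hashlib

class BloomFilter:
    """A simple Bloom filter over byte strings (illustrative, not optimized)."""
    def __init__(self, size_bits=1 << 16, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive k bit positions by salting a cryptographic hash.
        for i in range(self.num_hashes):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def ngrams(payload: bytes, n=5):
    """Yield all contiguous n-byte substrings of the payload."""
    return (payload[i:i + n] for i in range(len(payload) - n + 1))

def train(model: BloomFilter, payload: bytes, n=5):
    for g in ngrams(payload, n):
        model.add(g)

def score(model: BloomFilter, payload: bytes, n=5):
    """Anomaly score: fraction of the payload's n-grams absent from the model."""
    grams = list(ngrams(payload, n))
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in model)
    return unseen / len(grams)
```

Because the model stores only hashed n-gram membership bits, the raw training content cannot be read back out, which is what makes cross-site sharing of such models privacy-friendly.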


Cluster Computing | 2006

Retrofitting Autonomic Capabilities onto Legacy Systems

Janak J. Parekh; Gail E. Kaiser; Philip Gross; Giuseppe Valetto

Autonomic computing (self-configuring, self-healing, self-managing applications, systems and networks) is a promising solution to ever-increasing system complexity and the spiraling costs of human management as systems scale to global proportions. Most results to date, however, suggest ways to architect new software designed from the ground up as autonomic systems, whereas in the real world organizations continue to use stovepipe legacy systems and/or build “systems of systems” that draw from a gamut of disparate technologies from numerous vendors. Our goal is to retrofit autonomic computing onto such systems, externally, without any need to understand, modify or even recompile the target system’s code. We present an autonomic infrastructure that operates similarly to active middleware, to explicitly add autonomic services to pre-existing systems via continual monitoring and a feedback loop that performs reconfiguration and/or repair as needed. Our lightweight design and separation of concerns enables easy adoption of individual components for use with a variety of target systems, independent of the rest of the full infrastructure. This work has been validated by several case studies spanning multiple real-world application domains.
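The external feedback loop the abstract describes can be sketched in a few lines: a probe observes the legacy system from the outside, a gauge interprets the observation, and an effector reconfigures or repairs when needed, all without touching the target's code. The function names, metric, and threshold below are hypothetical, not the paper's actual infrastructure.

```python
def gauge(metrics: dict, max_latency_ms: float = 500.0) -> str:
    """Interpret raw probe data into a verdict, without touching the target system."""
    if metrics["latency_ms"] > max_latency_ms:
        return "repair"
    return "ok"

def control_step(probe, effector) -> str:
    """One iteration of the feedback loop: monitor -> analyze -> act."""
    verdict = gauge(probe())
    if verdict == "repair":
        effector()  # e.g. restart a component or adjust a configuration knob
    return verdict
```

Keeping the probe, gauge, and effector as separate pluggable pieces is what lets individual components be adopted independently for different target systems.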


Computer and Communications Security | 2003

A holistic approach to service survivability

Angelos D. Keromytis; Janak J. Parekh; Philip Gross; Gail E. Kaiser; Vishal Misra; Jason Nieh; Dan Rubenstein; Salvatore J. Stolfo

We present SABER (Survivability Architecture: Block, Evade, React), a proposed survivability architecture that blocks, evades and reacts to a variety of attacks by using several security and survivability mechanisms in an automated and coordinated fashion. Contrary to the ad hoc manner in which contemporary survivable systems are built (using isolated, independent security mechanisms such as firewalls, intrusion detection systems and software sandboxes), SABER integrates several different technologies in an attempt to provide a unified framework for responding to the wide range of attacks malicious insiders and outsiders can launch. This coordinated multi-layer approach will be capable of defending against attacks targeted at various levels of the network stack, such as congestion-based DoS attacks, software-based DoS or code-injection attacks, and others. Our fundamental insight is that while multiple lines of defense are useful, most conventional, uncoordinated approaches fail to exploit the full range of available responses to incidents. By coordinating the response, the ability to survive successful security breaches increases substantially. We discuss the key components of SABER, how they will be integrated together, and how we can leverage the promising results of the individual components to improve survivability in a variety of coordinated attack scenarios. SABER is currently in the prototyping stages, with several interesting open research topics.


Archive | 2001

An Active Events Model for Systems Monitoring

Philip Gross; Suhit Gupta; Gail E. Kaiser; Gaurav S. Kc; Janak J. Parekh

We present an interaction model enabling data-source probes and action-based gauges to communicate using an intelligent event model known as ActEvents. ActEvents build on conventional event concepts by associating structural and semantic information with raw data, thereby allowing recipients to dynamically understand the content of new kinds of events. Two submodels of ActEvents are proposed: SmartEvents, which are XML-structured events containing references to their syntactic and semantic models, and Gaugents, which are heavier but more flexible intelligent mobile software agents. This model is presented in light of DARPA’s DASADA program, where ActEvents are used in a larger-scale subsystem, called KX, which supports continual validation of distributed, component-based systems. ActEvents are emitted by probes in this architecture, and propagated to gauges, where “measurements” of the raw data associated with probes are made, thereby continually determining updated target-system properties. ActEvents are also proposed as solutions for a number of other applications, including a distributed collaborative virtual environment (CVE) known as CHIME.

Introduction and Motivation

DARPA’s DASADA program has focused on standards for distributed systems to ease assembly and maintenance of systems that are composed of components “from anywhere” (e.g., COTS, GOTS, open source, etc.). This program has focused on four areas: architecture description languages to describe the composed system, probes to gather information about the current system configuration and state, gauges to interpret this information, and adaptation engines that can reconfigure the system as necessary. This paper focuses on the interaction between probes and gauges, and proposes a standard for data interchange between them. The control interfaces for both probes and gauges have been developed extensively, and standards have been proposed by others. However, the format and transmission mechanism for data collected from probes is underdeveloped. We examine the problem and suggest possible models and architectures, along with a description of our implementation and experience using it.

Probes, Gauges and Events

A probe is defined as “an individual sensor attached, either statically or dynamically, to a running program” [1]. Probes emit events that describe some aspect of a program’s execution, either at a specific point in time or over some duration. Probes usually:

• are integrated into or wrapped onto the application itself;
• communicate with the application via an API; or
• look at indirect measures such as operating system or network resource usage.

The proposed control interface for probes consists of the following methods: Deploy, Install, Activate (and their inverses), Query-Sensed and Generate-Sensed to enumerate the events that a probe can send, and the Sensed method to publish an event. The newer Focus interface allows additional probes to be activated for detailed examination of a problem. The DASADA standard assumes that probe data will be emitted in the form of Siena events. For the purposes of this paper, we define an event as “a collection of data produced by a system component, and of interest to zero or more other system components.” Note that this definition makes no assertions about formatting, routing, or transport. The University of Colorado at Boulder’s Siena event system [2] enables Internet-scale content-based event delivery. Siena models events as an unordered, flat collection of attribute-value pairs. Gauges [3] are defined as “software entities that gather, aggregate, compute, analyze, disseminate and/or visualize measurement information about software systems.” Gauges support a simple configuration interface. The proposed gauge standard includes the concept of a “Gauge Reporting Bus,” which is specifically for communicating gauge reports to consumers (who might, e.g., authorize repairs). Consumers supply callbacks to the reporting bus, which are called when an event of interest occurs.

Probe-Gauge Interaction

Probes use system-specific techniques to extract data from the target system. Gauges use the Gauge Reporting Bus interface to report to higher-level components. While the respective APIs for probes and gauges are clearly specified, there is no proposed standard for formatting probe data and sending it to the appropriate gauges. Since one cannot assume that probes and gauges will be located on the same machine, some form of networked interprocess communication (IPC) is necessary. Since the machines may be of heterogeneous type, the format for probe data should be as portable as possible. While we do not address the issue in this paper, we also note that the standard interface for controlling probes, although presumably intended to be an event-based interface, is in fact specified as an RPC-style function API.

The Problem

There are three aspects of the probe-gauge relationship that make the problem of connection difficult: the dynamic nature of individual probes, the dynamic topology of the various components, and the heterogeneous nature of the systems involved. Individual probes may be frequently added to and removed from the system. Probes may be heterogeneously sourced, with possibly different semantics for similar-looking data; simply labeling the type of data elements within the event, as in traditional attribute/value pairs, is insufficient. Instead, the semantic information required for proper interpretation of the probe data must be associated with the event. Probes and gauges will be activated and deactivated, and may migrate from machine to machine. Some of these components (especially probes) may be running on constrained devices, and requiring every component to maintain a complete network topology is not feasible. Further, since the main tasks for most probes are straightforward, requiring all of them to add the data and logic necessary to manage bidirectional RPC with gauges in a changing environment would increase their complexity considerably. Detailed knowledge of event routing and dispatch should ideally be removed from most probes and gauges. While more advanced systems such as CORBA can help with component discovery, probes will typically have many consumers for a single event, which is not handled efficiently under the CORBA model nor under analogous RPC extensions. The systems involved may be completely heterogeneous, with different byte ordering, operating systems, architectures, etc. Message formatting should be completely architecture-independent, and leverage industry standards to the degree possible.
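The SmartEvents idea above is that an event should carry not just attribute-value pairs but references to the models needed to interpret them. A hypothetical sketch of such an XML event, built with Python's standard library: the element names, attributes, and URIs are illustrative, not the DASADA standard.

```python
import xml.etree.ElementTree as ET

def make_smart_event(source: str, data: dict, schema_ref: str, semantics_ref: str) -> str:
    """Serialize attribute-value pairs as an XML event carrying model references."""
    event = ET.Element("SmartEvent", {
        "source": source,
        "schemaRef": schema_ref,        # syntactic model: how the event is structured
        "semanticsRef": semantics_ref,  # semantic model: what the values mean
    })
    for name, value in data.items():
        attr = ET.SubElement(event, "Attribute", {"name": name})
        attr.text = str(value)
    return ET.tostring(event, encoding="unicode")

def read_event(xml_text: str) -> dict:
    """A recipient recovers the attribute-value pairs; it can fetch the
    referenced models to interpret attributes it has never seen before."""
    event = ET.fromstring(xml_text)
    return {a.get("name"): a.text for a in event.findall("Attribute")}
```

Because the interpretation models travel by reference rather than being hard-coded into every consumer, a gauge can in principle handle event kinds that did not exist when it was deployed.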


Archive | 2010

Systems and methods for correlating and distributing intrusion alert information among collaborating computer systems

Salvatore J. Stolfo; Tal Malkin; Angelos D. Keromytis; Vishal Misra; Michael E. Locasto; Janak J. Parekh


Archive | 2007

Systems, methods, and media for outputting data based upon anomaly detection

Salvatore J. Stolfo; Ke Wang; Janak J. Parekh


Autonomic Computing Workshop | 2003

Kinesthetics eXtreme: an external infrastructure for monitoring distributed legacy systems

Gail E. Kaiser; Janak J. Parekh; Philip Gross; Giuseppe Valetto


Systems, Man and Cybernetics | 2005

Towards collaborative security and P2P intrusion detection

Michael E. Locasto; Janak J. Parekh; Angelos D. Keromytis; Salvatore J. Stolfo


Computer Science Technical Report Series | 2002

An Approach to Autonomizing Legacy Systems

Gail E. Kaiser; Philip Gross; Gaurav S. Kc; Janak J. Parekh; Giuseppe Valetto


Archive | 2004

Collaborative Distributed Intrusion Detection

Michael E. Locasto; Janak J. Parekh; Salvatore J. Stolfo; Angelos D. Keromytis; Tal Malkin; Vishal Misra
