Reza Farivar
University of Illinois at Urbana–Champaign
Publications
Featured research published by Reza Farivar.
international conference on cluster computing | 2009
Reza Farivar; Abhishek Verma; Ellick M. Chan; Roy H. Campbell
With the advent of high-performance COTS clusters, there is a need for a simple, scalable and fault-tolerant parallel programming and execution paradigm. In this paper, we show that the popular MapReduce programming model can be utilized to solve many interesting scientific simulation problems with much higher performance than regular clusters by leveraging GPGPU accelerators in cluster nodes. We use the Massive Unordered Distributed (MUD) formalism and establish a one-to-one correspondence between it and general Monte Carlo simulation methods. Our architecture, MITHRA, leverages NVIDIA CUDA technology along with Apache Hadoop to produce scalable performance gains using the MapReduce programming model. The evaluation of our proposed architecture using the Black-Scholes option pricing model shows that a MITHRA cluster of 4 GPUs can outperform a regular cluster of 62 nodes, achieving a speedup of about 254 times in our testbed, while providing scalable, near-linear performance with additional nodes.
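To make the MapReduce-style Monte Carlo pattern concrete, here is a minimal sketch in plain Python (not the MITHRA implementation, which runs the map step as CUDA kernels under Hadoop): the map step simulates a batch of Black-Scholes terminal prices and emits partial payoff sums, and the reduce step aggregates them into a discounted price. All parameter values are illustrative.

```python
import math
import random

def map_simulate(seed, n_paths, s0, k, r, sigma, t):
    """Map step: simulate a batch of terminal prices, emit a payoff sum."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        total += max(st - k, 0.0)          # European call payoff
    return total, n_paths

def reduce_price(partials, r, t):
    """Reduce step: aggregate partial sums into a discounted mean payoff."""
    payoff_sum = sum(p for p, _ in partials)
    n = sum(n for _, n in partials)
    return math.exp(-r * t) * payoff_sum / n

# Each "mapper" would run on a separate cluster node (on a GPU in MITHRA);
# here we simply loop over the seeds sequentially.
partials = [map_simulate(seed, 100_000, s0=100, k=105, r=0.05, sigma=0.2, t=1.0)
            for seed in range(8)]
print("Estimated call price:", reduce_price(partials, r=0.05, t=1.0))
```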
international middleware conference | 2015
Boyang Peng; Mohammad Hosseini; Zhihao Hong; Reza Farivar; Roy H. Campbell
The era of big data has led to the emergence of new systems for real-time distributed stream processing; for example, Apache Storm is one of the most popular stream processing systems in industry today. However, Storm, like many other stream processing systems, lacks an intelligent scheduling mechanism. The default round-robin scheduling currently deployed in Storm disregards resource demands and availability, and can therefore be inefficient at times. We present R-Storm (Resource-Aware Storm), a system that implements resource-aware scheduling within Storm. R-Storm is designed to increase overall throughput by maximizing resource utilization while minimizing network latency. When scheduling tasks, R-Storm can satisfy both soft and hard resource constraints as well as minimize network distance between components that communicate with each other. We evaluate R-Storm on a set of micro-benchmark Storm applications as well as Storm applications used in production at Yahoo! Inc. From our experimental results, we conclude that R-Storm achieves 30-47% higher throughput and 69-350% better CPU utilization than default Storm for the micro-benchmarks. For the Yahoo! Storm applications, R-Storm outperforms default Storm by around 50% based on overall throughput. We also demonstrate that R-Storm performs much better than default Storm when scheduling multiple Storm applications.
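A minimal sketch of the resource-aware placement idea, assuming a toy cluster model: tasks carry CPU and memory demands, nodes carry capacities, and the scheduler greedily picks the feasible node that minimizes network distance to already-placed communicating tasks. The data structures and distance model here are invented; R-Storm's actual algorithm differs in detail.

```python
# Toy greedy placement: enforce hard capacity limits, prefer nodes "close"
# to already-placed tasks that communicate with the one being scheduled.

def network_distance(a, b, racks):
    if a == b:
        return 0                                # same node: cheapest
    return 1 if racks[a] == racks[b] else 2     # same rack vs. cross-rack

def neighbors(task, edges):
    for a, b in edges:
        if a == task:
            yield b
        elif b == task:
            yield a

def schedule(tasks, edges, capacity, racks):
    """tasks: {name: (cpu, mem)}; edges: pairs of communicating tasks;
    capacity: {node: [cpu_free, mem_free]}; racks: {node: rack_id}."""
    placement = {}
    for task, (cpu, mem) in sorted(tasks.items(), key=lambda kv: -kv[1][0]):
        candidates = [n for n, (c, m) in capacity.items()
                      if c >= cpu and m >= mem]
        if not candidates:                      # hard constraint unmet
            raise RuntimeError("no node can host " + task)
        best = min(candidates, key=lambda n: sum(
            network_distance(n, placement[o], racks)
            for o in neighbors(task, edges) if o in placement))
        capacity[best][0] -= cpu
        capacity[best][1] -= mem
        placement[task] = best
    return placement

print(schedule(
    tasks={"spout": (2, 1), "parse": (1, 1), "count": (1, 2)},
    edges=[("spout", "parse"), ("parse", "count")],
    capacity={"n1": [4, 4], "n2": [2, 2]},
    racks={"n1": "r1", "n2": "r2"},
))
```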
ieee international symposium on dependable, autonomic and secure computing | 2007
Lin Tan; Ellick M. Chan; Reza Farivar; Nevedita Mallick; Jeffrey C. Carlyle; Francis M. David; Roy H. Campbell
The users of today's operating systems demand high reliability and security. However, faults introduced outside of the core operating system by buggy and malicious device drivers can significantly impact these dependability attributes. To help improve driver isolation, we propose an approach that utilizes the latest hardware virtualization support to efficiently sandbox each device driver in its own minimal virtual machine (VM), so that the kernel is protected from faults in these drivers. We present our implementation of iKernel, a low-overhead virtual-machine-based framework that allows reuse of existing drivers. We have constructed a prototype to demonstrate that it is feasible to utilize existing hardware virtualization techniques to allow device drivers in a VM to communicate with devices directly without frequent hardware traps into the virtual machine monitor (VMM). We have implemented a prototype parallel port driver that interacts through iKernel to communicate with a physical LED device.
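As a loose analogy to the VM-per-driver design (iKernel uses hardware virtualization, which a short example cannot reproduce), the sketch below isolates a "driver" in a separate OS process so that a crash there surfaces as a recoverable event in the "kernel" instead of taking it down:

```python
# Process isolation standing in for VM isolation: the driver runs in its
# own address space, and its crash leaves the parent "kernel" intact.
# This is an analogy only; iKernel sandboxes drivers in minimal VMs.
from multiprocessing import Process, Queue

def buggy_driver(requests: Queue, replies: Queue):
    while True:
        req = requests.get()
        if req == "write_led":
            replies.put("ok")
        elif req == "crash":
            raise RuntimeError("driver bug!")    # only this process dies

if __name__ == "__main__":
    requests, replies = Queue(), Queue()
    drv = Process(target=buggy_driver, args=(requests, replies), daemon=True)
    drv.start()

    requests.put("write_led")
    print("kernel got:", replies.get())

    requests.put("crash")
    drv.join(timeout=2)                          # the driver process exits...
    print("kernel still running; driver alive:", drv.is_alive())
```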
IEEE Security & Privacy | 2007
Ravishankar K. Iyer; Zbigniew Kalbarczyk; Karthik Pattabiraman; William Healey; Wen-mei W. Hwu; Peter Klemperer; Reza Farivar
Two trends, the increasing complexity of computer systems and their deployment in mission- and life-critical applications, are driving the need to provide applications with security and reliability support. Compounding the situation, the Internet's ubiquity has made systems much more vulnerable to malicious attacks that can have far-reaching implications on our daily lives. Clearly, the traditional one-size-fits-all approach to security and reliability is no longer sufficient or acceptable from the user perspective. In this article, we introduce the concept of application-aware checking as an alternative. By extracting application characteristics via recent breakthroughs in compiler analysis and enforcing those characteristics at runtime with the hardware modules embedded in the reliability and security engine (RSE), it is possible to achieve security and reliability with low overheads and false-positive rates.
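A software stand-in for the "extract, then enforce" idea, assuming value-range invariants that a compiler pass might derive (hand-written here): a decorator checks them on every call, whereas RSE would enforce such checks in hardware modules.

```python
# Invariants a compiler pass might extract (here, hand-written ranges)
# are enforced at runtime. RSE does this in hardware; this decorator
# only illustrates the extract-then-enforce pattern in software.
import functools

def enforce(invariants):
    """invariants: {arg_name: (lo, hi)} derived offline for this function."""
    def wrap(fn):
        @functools.wraps(fn)
        def checked(**kwargs):
            for name, (lo, hi) in invariants.items():
                v = kwargs[name]
                if not (lo <= v <= hi):
                    raise ValueError(
                        f"{fn.__name__}: {name}={v} violates derived range "
                        f"[{lo}, {hi}] (possible error or attack)")
            return fn(**kwargs)
        return checked
    return wrap

@enforce({"index": (0, 9)})
def read_buffer(index):
    buffer = list(range(10))
    return buffer[index]

print(read_buffer(index=3))    # passes the runtime check
try:
    read_buffer(index=42)      # detected before anything is corrupted
except ValueError as e:
    print("detected:", e)
```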
computer and communications security | 2008
Ellick M. Chan; Jeffrey C. Carlyle; Francis M. David; Reza Farivar; Roy H. Campbell
BootJacker is a proof-of-concept attack tool which demonstrates that authentication mechanisms employed by an operating system can be bypassed by obtaining physical access and simply forcing a restart. The key insight that enables this attack is that the contents of memory on some machines are fully preserved across a warm boot. Upon a reboot, BootJacker uses this residual memory state to revive the original host operating system environment and run malicious payloads. Using BootJacker, an attacker can break into a locked user session and gain access to open encrypted disks, web browser sessions, or other secure network connections. BootJacker's non-persistent design makes it possible for an attacker to leave no traces on the victim machine.
Proceedings of the Workshop on Secure and Dependable Middleware for Cloud Monitoring and Management | 2012
Faraz Faghri; Sobir Bazarbayev; Mark Overholt; Reza Farivar; Roy H. Campbell; William H. Sanders
As the use of cloud computing resources grows in academic research and industry, so does the likelihood of failures that catastrophically affect the applications being run on the cloud. For that reason, cloud service providers as well as cloud applications need to expect failures and shield their services accordingly. We propose a new model, Failure Scenario as a Service (FSaaS), to be utilized across the cloud for testing the resilience of cloud applications. In an effort to provide both Hadoop service and application vendors with the means to test their applications against the risk of massive failure, we focus our efforts on the Hadoop platform. We have generated a series of failure scenarios for certain types of jobs. Customers will be able to choose specific scenarios based on their jobs to evaluate their systems.
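A hypothetical sketch of what a declarative failure scenario and injector might look like; FSaaS targets real Hadoop daemons, whereas this toy just flips flags on in-memory node records, and all field names are invented.

```python
# Invented scenario format plus a toy injector. A real injector would
# kill Hadoop daemons; here "failing" a node means clearing a flag.
import random
from dataclasses import dataclass

@dataclass
class FailureScenario:
    name: str
    target_role: str      # e.g. "datanode" or "tasktracker"
    fraction: float       # fraction of matching nodes to fail
    delay_s: float        # when to inject, relative to job start (unused here)

def inject(scenario, cluster, rng=random.Random(42)):
    victims = [n for n in cluster if n["role"] == scenario.target_role]
    for node in rng.sample(victims, int(len(victims) * scenario.fraction)):
        node["alive"] = False                   # real system: kill the daemon
        print(f"[{scenario.name}] failed {node['host']}")

cluster = [{"host": f"dn{i}", "role": "datanode", "alive": True}
           for i in range(10)]
inject(FailureScenario("lose-30pct-datanodes", "datanode", 0.3, 60.0), cluster)
print(sum(n["alive"] for n in cluster), "datanodes still alive")
```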
Journal of Internet Services and Applications | 2012
Roy H. Campbell; Mirko Montanari; Reza Farivar
This paper considers mission assurance for critical cloud applications, a set of applications with growing importance to governments and military organizations. Specifically, we consider applications in which assigned tasks or duties are performed in accordance with an intended purpose or plan in order to accomplish an assured mission. Mission-critical cloud computing may involve hybrid (public, private, heterogeneous) clouds and require the realization of “end-to-end” and “cross-layered” security, dependability, and timeliness. We propose the properties and building blocks of a middleware for assured cloud computing that can support critical missions. In this approach, we assume that mission-critical cloud computing must be designed with assurance in mind. In particular, the middleware in such systems must include sophisticated monitoring, assessment of policies, and response mechanisms to manage the configuration of dynamic systems-of-systems with both trusted and partially trusted resources (data, sensors, networks, computers, etc.) and services sourced from multiple organizations.
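One possible reading of the monitoring, assessment, and response loop, as a toy policy check over resource records; the policy format and trust labels below are invented for illustration.

```python
# Minimal monitor-assess-respond loop in the spirit of the middleware
# outlined above; the policy and resource records are invented.

POLICY = {"mission_tasks_require_trusted_hosts": True}

def assess(resources):
    """Flag mission-critical tasks running on partially trusted hosts."""
    if not POLICY["mission_tasks_require_trusted_hosts"]:
        return []
    return [r for r in resources
            if r["mission_critical"] and r["trust"] != "trusted"]

def respond(violations):
    for r in violations:
        # A real middleware would trigger reconfiguration or migration.
        print(f"responding: migrate {r['name']} off {r['host']}")

resources = [
    {"name": "targeting-db", "host": "pub-cloud-3",
     "trust": "partial", "mission_critical": True},
    {"name": "log-archive", "host": "pub-cloud-3",
     "trust": "partial", "mission_critical": False},
]
respond(assess(resources))
```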
The Reference Librarian | 2010
James F. Hahn; Michael B. Twidale; Alejandro Gutierrez; Reza Farivar
Navigation within the physical library building can be supported with mobile computing technology; specifically, a path-suggestion software application on a patron's mobile device can direct her to the location of the physical item on the shelf. This is accomplished by leveraging existing WiFi access points within a library building, as well as by supplementing the wireless infrastructure with additional wireless beacons for collections-based wayfinding.
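A toy version of the wayfinding idea, assuming a hypothetical floor map: nearest-access-point positioning picks the patron's current zone, and a breadth-first search over a walkway graph yields the path to the shelf holding the requested call number.

```python
# Nearest-AP localization plus BFS routing over a (hypothetical) library map.
from collections import deque

WALKWAYS = {   # zone adjacency inside the invented library layout
    "entrance": ["stacks-A"], "stacks-A": ["entrance", "stacks-B"],
    "stacks-B": ["stacks-A", "stacks-C"], "stacks-C": ["stacks-B"],
}
SHELF_ZONE = {"QA76.9": "stacks-C"}   # call-number prefix -> shelf zone

def locate(rssi_by_ap, ap_zone):
    """Nearest-AP positioning: the loudest access point wins."""
    return ap_zone[max(rssi_by_ap, key=rssi_by_ap.get)]

def route(start, goal):
    """BFS for the shortest walkway path between two zones."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        zone = frontier.popleft()
        if zone == goal:
            path = []
            while zone is not None:
                path.append(zone)
                zone = prev[zone]
            return path[::-1]
        for nxt in WALKWAYS[zone]:
            if nxt not in prev:
                prev[nxt] = zone
                frontier.append(nxt)

here = locate({"ap-front": -42, "ap-rear": -71},
              {"ap-front": "entrance", "ap-rear": "stacks-B"})
print(" -> ".join(route(here, SHELF_ZONE["QA76.9"])))
```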
symposium on reliable distributed systems | 2011
John Bellessa; Evan Kroske; Reza Farivar; Mirko Montanari; Kevin Larson; Roy H. Campbell
The networking environments found in cloud computing systems are highly complex and dynamic. Consequently, they have strained current policy management and enforcement systems that are based on writing explicit rules about individual hosts. In response, we propose NetODESSA, an inference-based system for network configuration and dynamic policy enforcement. NetODESSA permits the construction of flexible and resilient dynamic networks by allowing network administrators to write general policies about classes of hosts that are combined with runtime information to form network-level actions. Moreover, NetODESSA will infer refinements to the policy from network and host-level data, ensuring that the network remains secure. We have created an initial design for the system and implemented a basic prototype, demonstrating the practicality of this scheme.
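A toy rendering of class-level policies, assuming an invented policy format: rules are written once per class of hosts and expanded against runtime inventory data into concrete host-level rules, so the rule set can be re-derived whenever hosts join or change roles rather than hand-edited.

```python
# Class-level policies expanded into host-level rules using runtime
# inventory; the policy language here is invented for illustration.

CLASS_POLICIES = [
    # (source class, allowed destination class, port)
    ("web", "app", 8080),
    ("app", "db", 5432),
]

def expand(policies, inventory):
    """inventory: {host_ip: class}; returns concrete allow rules."""
    rules = []
    for src_class, dst_class, port in policies:
        for src, sc in inventory.items():
            for dst, dc in inventory.items():
                if sc == src_class and dc == dst_class:
                    rules.append(f"allow {src} -> {dst}:{port}")
    return rules

# Runtime information drives the concrete rule set.
inventory = {"10.0.0.1": "web", "10.0.0.2": "app", "10.0.0.3": "db"}
for rule in expand(CLASS_POLICIES, inventory):
    print(rule)
```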
international conference on cluster computing | 2012
Reza Farivar; Anand Raghunathan; Srimat T. Chakradhar; Harshit Kharbanda; Roy H. Campbell
Iterative-convergence algorithms are frequently used in a variety of domains to build models from large data sets. Cluster implementations of these algorithms are commonly realized using parallel programming models such as MapReduce. However, these implementations suffer from significant performance bottlenecks, especially due to large volumes of network traffic resulting from intermediate data and model updates during the iterations. To address these challenges, we propose partitioned iterative convergence (PIC), a new approach to programming and executing iterative-convergence algorithms on frameworks like MapReduce. In PIC, we execute the iterative-convergence computation in two phases: the best-effort phase, which quickly produces a good initial model, and the top-off phase, which further refines this model to produce the final solution. The best-effort phase iteratively performs the following steps: (a) partition the input data and the model to create several smaller model-building sub-problems, (b) independently solve these sub-problems using iterative-convergence computations, and (c) merge solutions of the sub-problems to create the next version of the model. This partitioned, loosely coupled execution of the computation produces a model of good quality, while drastically reducing network traffic due to intermediate data and model updates. The top-off phase further refines this model by employing the original iterative-convergence computation on the entire (un-partitioned) problem until convergence. However, the number of iterations executed in the top-off phase is quite small, resulting in a significant overall improvement in performance. We have implemented a library for PIC on top of the Hadoop MapReduce framework and evaluated it using five popular iterative-convergence algorithms (PageRank, k-means clustering, neural network training, a linear equation solver, and image smoothing). Our evaluations on clusters ranging from 6 nodes to 256 nodes demonstrate a 2.5X-4X speedup compared to conventional implementations using Hadoop.
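A compact sketch of the best-effort/top-off pattern, using one-dimensional k-means as the iterative-convergence computation; the partitioning scheme and centroid-merge rule below are simplified stand-ins for PIC's.

```python
# Best-effort phase: solve per-partition k-means independently and merge
# the local models; top-off phase: a few iterations on the full data.
import random

def kmeans(points, centroids, iters):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:                        # assign to nearest centroid
            clusters[min(range(len(centroids)),
                         key=lambda i: abs(p - centroids[i]))].append(p)
        centroids = [sum(c) / len(c) if c else m
                     for c, m in zip(clusters, centroids)]
    return centroids

rng = random.Random(0)
data = [rng.gauss(mu, 1.0) for mu in (0, 10, 20) for _ in range(300)]
parts = [data[i::4] for i in range(4)]          # 4 data/model partitions

# Best-effort: independent sub-problems, then coordinate-wise merge.
init = [1.0, 9.0, 15.0]
local = [kmeans(p, init, iters=5) for p in parts]
merged = [sum(c[i] for c in local) / len(local) for i in range(3)]

# Top-off: the original computation on the entire (un-partitioned) data.
print("final centroids:", kmeans(data, merged, iters=3))
```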