Kaveh Razavi
VU University Amsterdam
Publications
Featured research published by Kaveh Razavi.
IEEE Symposium on Security and Privacy | 2016
Erik Bosman; Kaveh Razavi; Herbert Bos; Cristiano Giuffrida
Memory deduplication, a well-known technique to reduce the memory footprint across virtual machines, is now also a default-on feature inside the Windows 8.1 and Windows 10 operating systems. Deduplication maps multiple identical copies of a physical page onto a single shared copy with copy-on-write semantics. As a result, a write to such a shared page triggers a page fault and is thus measurably slower than a write to a normal page. Prior work has shown that an attacker able to craft pages on the target system can use this timing difference as a simple single-bit side channel to discover that certain pages exist in the system. In this paper, we demonstrate that the deduplication side channel is much more powerful than previously assumed, potentially providing an attacker with a weird machine to read arbitrary data in the system. We first show that an attacker controlling the alignment and reuse of data in memory is able to perform byte-by-byte disclosure of sensitive data (such as randomized 64-bit pointers). Next, even without control over data alignment or reuse, we show that an attacker can still disclose high-entropy randomized pointers using a birthday attack. To show these primitives are practical, we present an end-to-end JavaScript-based attack against the new Microsoft Edge browser, in the absence of software bugs and with all defenses turned on. Our attack combines our deduplication-based primitives with a reliable Rowhammer exploit to gain arbitrary memory read and write access in the browser. We conclude by extending our JavaScript-based attack to cross-process system-wide exploitation (using the popular nginx web server as an example) and discussing mitigation strategies.
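To make the timing primitive concrete, the following minimal C sketch (illustrative only, not the authors' code) times a single write to a page whose contents were crafted to match a guess; the wait time for the deduplication pass and the decision threshold are assumptions. If the page was merged, the write takes a copy-on-write fault and is measurably slower than a write to a private page.

    /* Illustrative dedup timing test for x86-64 Linux with a page-level
     * deduplication scanner (e.g., KSM); contents and wait are assumptions. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <x86intrin.h>  /* __rdtscp */

    static uint64_t time_write(volatile char *p)
    {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        *p = 0x5a;                      /* CoW fault if the page was merged */
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void)
    {
        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memset(page, 0x41, 4096);       /* craft contents to match the guess */
        /* ... wait long enough for the deduplication pass to run ... */
        uint64_t cycles = time_write(page);
        /* A write far slower than normal suggests the page had been merged,
         * i.e., a page with the guessed contents exists in the system. */
        printf("write took %llu cycles\n", (unsigned long long)cycles);
        return 0;
    }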
Computer and Communications Security | 2016
Victor van der Veen; Yanick Fratantonio; Martina Lindorfer; Daniel Gruss; Clémentine Maurice; Giovanni Vigna; Herbert Bos; Kaveh Razavi; Cristiano Giuffrida
Recent work shows that the Rowhammer hardware bug can be used to craft powerful attacks and completely subvert a system. However, existing efforts either describe probabilistic (and thus unreliable) attacks or rely on special (and often unavailable) memory management features to place victim objects in vulnerable physical memory locations. Moreover, prior work only targets x86 and researchers have openly wondered whether Rowhammer attacks on other architectures, such as ARM, are even possible. We show that deterministic Rowhammer attacks are feasible on commodity mobile platforms and that they cannot be mitigated by current defenses. Rather than assuming special memory management features, our attack, DRAMMER, solely relies on the predictable memory reuse patterns of standard physical memory allocators. We implement DRAMMER on Android/ARM, demonstrating the practicability of our attack, but also discuss a generalization of our approach to other Linux-based platforms. Furthermore, we show that traditional x86-based Rowhammer exploitation techniques no longer work on mobile platforms and address the resulting challenges towards practical mobile Rowhammer attacks. To support our claims, we present the first Rowhammer-based Android root exploit relying on no software vulnerability, and requiring no user permissions. In addition, we present an analysis of several popular smartphones and find that many of them are susceptible to our DRAMMER attack. We conclude by discussing potential mitigation strategies and urging our community to address the concrete threat of faulty DRAM chips in widespread commodity platforms.
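For illustration, the core hammering loop of the classic x86 double-sided variant is sketched below, assuming above and below map into the two DRAM rows adjacent to a victim row. As the abstract notes, this CLFLUSH-based technique does not carry over to mobile platforms; DRAMMER instead hammers uncached, DMA-backed memory obtained through predictable allocator reuse.

    /* Illustrative x86 double-sided hammering loop (not DRAMMER itself):
     * repeatedly reads two addresses in the rows adjacent to a victim row,
     * flushing them from the cache so every access reaches DRAM. */
    #include <stdint.h>
    #include <x86intrin.h>  /* _mm_clflush */

    static void hammer(volatile uint8_t *above, volatile uint8_t *below,
                       long rounds)
    {
        for (long i = 0; i < rounds; i++) {
            (void)*above;            /* activate row above the victim */
            (void)*below;            /* activate row below the victim */
            _mm_clflush((const void *)above);
            _mm_clflush((const void *)below);
        }
        /* afterwards: scan the victim row for flipped bits */
    }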
IEEE International Conference on High Performance Computing, Data, and Analytics | 2013
Kaveh Razavi; Thilo Kielmann
In IaaS clouds, VM startup times are frequently perceived as slow, negatively impacting both dynamic scaling of web applications and the startup of high-performance computing applications consisting of many VM nodes. A significant part of the startup time is due to the large transfers of VM image content from a storage node to the actual compute nodes, even when copy-on-write schemes are used. We have observed that only a tiny part of the VM image is needed for the VM to be able to start up. Based on this observation, we propose using small caches for VM images to overcome the VM startup bottlenecks. We have implemented such caches as an extension to KVM/QEMU. Our evaluation with up to 64 VMs shows that using our caches reduces the time needed for simultaneous VM startups to that of a single VM.
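The observation behind the cache is that booting touches only a small, stable subset of image blocks. The following self-contained toy sketches the copy-on-read idea; it is not the actual KVM/QEMU extension, and the block and cache sizes are assumptions.

    /* Toy copy-on-read cache: only blocks a boot actually reads are ever
     * fetched remotely; they stay local for subsequent startups. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_SIZE 4096
    #define CACHE_BLOCKS 256            /* tiny direct-mapped cache */

    static struct {
        size_t block_no;
        bool valid;
        char data[BLOCK_SIZE];
    } cache[CACHE_BLOCKS];

    /* stand-in for the slow transfer from the storage node */
    static void fetch_from_storage_node(size_t block_no, char *buf)
    {
        memset(buf, (int)(block_no & 0xff), BLOCK_SIZE);
    }

    static void read_block(size_t block_no, char *buf)
    {
        size_t slot = block_no % CACHE_BLOCKS;
        if (cache[slot].valid && cache[slot].block_no == block_no) {
            memcpy(buf, cache[slot].data, BLOCK_SIZE);  /* local hit */
            return;
        }
        fetch_from_storage_node(block_no, buf);         /* remote miss */
        cache[slot].block_no = block_no;
        cache[slot].valid = true;
        memcpy(cache[slot].data, buf, BLOCK_SIZE);      /* keep for next boot */
    }

    int main(void)
    {
        char buf[BLOCK_SIZE];
        read_block(7, buf);   /* miss: goes to the storage node */
        read_block(7, buf);   /* hit: served locally */
        return 0;
    }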
ACM Special Interest Group on Data Communication | 2015
Paolo Costa; Hitesh Ballani; Kaveh Razavi; Ian A. Kash
Rack-scale computers, comprising a large number of micro-servers connected by a direct-connect topology, are expected to replace servers as the building block in data centers. We focus on the problem of routing and congestion control across the rack's network, and find that high path diversity in rack topologies, in combination with workload diversity across it, means that traditional solutions are inadequate. We introduce R2C2, a network stack for rack-scale computers that provides flexible and efficient routing and congestion control. R2C2 leverages the fact that the scale of rack topologies allows for low-overhead broadcasting to ensure that all nodes in the rack are aware of all network flows. We thus achieve rate-based congestion control without any probing; each node independently determines the sending rate for its flows while respecting the provider's allocation policies. For routing, nodes dynamically choose the routing protocol for each flow in order to maximize overall utility. Through a prototype deployed across a rack emulation platform and a packet-level simulator, we show that R2C2 achieves very low queuing and high throughput for diverse and bursty workloads, and that routing flexibility can provide significant throughput gains.
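As a simplified illustration of probe-less rate control (not R2C2's exact algorithm), the sketch below derives a flow's sending rate purely from the broadcast flow information: the fair share of the most contended link on its path. The link capacities and flow counts are made-up inputs.

    /* Each sender computes its rate locally from global flow knowledge:
     * min over the links on its path of capacity / flows on that link. */
    #include <stdio.h>

    static double rate_for_flow(const double link_capacity[],
                                const int flows_on_link[], int hops)
    {
        double rate = link_capacity[0] / flows_on_link[0];
        for (int h = 1; h < hops; h++) {
            double share = link_capacity[h] / flows_on_link[h];
            if (share < rate)
                rate = share;          /* bottleneck link wins */
        }
        return rate;
    }

    int main(void)
    {
        double cap[] = { 10e9, 10e9, 10e9 };   /* 10 Gb/s links (assumed) */
        int flows[]  = { 4, 2, 8 };            /* from broadcast flow info */
        printf("sending rate: %.2f Gb/s\n",
               rate_for_flow(cap, flows, 3) / 1e9);
        return 0;
    }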
High Performance Distributed Computing | 2014
Kaveh Razavi; Ana Ion; Thilo Kielmann
In IaaS clouds, virtual machines are booted on demand from user-provided disk images. Both the number of virtual machine images (VMIs) and their large size (GBs), challenge storage and network transfer solutions, and lead to perceivably slow VM startup times. In previous work, we proposed using small VMI caches (O(100 MB)) that contain those parts of a VMI that are actually needed for booting. Here, we present Squirrel, a fully replicated storage architecture that exploits deduplication, compression, and snapshots from the ZFS file system, and lets us keep large quantities of VMI caches on all compute nodes of a data center with modest storage requirements. (Much like rodents cache precious food in many distributed places.) Our evaluation shows that we can store VMI caches for all 600+ community images of Windows Azure, worth 16.4 TB of raw data, within 10 GB of disk space and 60 MB of main memory on each compute node of our DAS-4 cluster. Extrapolation to several thousands of images predicts the scalability of our approach.
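A toy sketch of why this works: at block granularity the hundreds of VMI caches share most of their content, so a deduplicating store keeps each distinct block only once. ZFS provides this (plus compression) natively; the hash set below is illustration only, and a real system verifies block contents on a hash match rather than trusting the hash alone.

    /* Toy block-deduplication sketch: store a block only if its hash
     * has not been seen before. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK 4096
    #define TABLE (1u << 16)

    static uint64_t fnv1a(const uint8_t *p, size_t n)
    {
        uint64_t h = 1469598103934665603ull;            /* FNV offset basis */
        while (n--) { h ^= *p++; h *= 1099511628211ull; }
        return h;
    }

    static uint64_t seen[TABLE];      /* 0 = empty slot (toy assumption) */

    /* returns 1 if the block is new and must be stored, 0 if deduplicated */
    static int store_block(const uint8_t block[BLOCK])
    {
        uint64_t h = fnv1a(block, BLOCK);
        for (uint32_t i = (uint32_t)(h % TABLE); ; i = (i + 1) % TABLE) {
            if (seen[i] == h) return 0;                 /* already stored */
            if (seen[i] == 0) { seen[i] = h; return 1; }
        }
    }

    int main(void)
    {
        static uint8_t a[BLOCK], b[BLOCK];
        memset(a, 0xaa, BLOCK);
        memset(b, 0xaa, BLOCK);                         /* identical content */
        printf("first stored: %d, duplicate stored: %d\n",
               store_block(a), store_block(b));         /* prints 1, 0 */
        return 0;
    }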
IEEE International Conference on Cloud Engineering | 2015
Kaveh Razavi; Ana Ion; Genc Tato; Kyuho Jeong; Renato J. O. Figueiredo; Guillaume Pierre; Thilo Kielmann
Applications on cloud infrastructures acquire virtual machines (VMs) from providers when necessary. The current interface for acquiring VMs from most providers, however, is too limiting for the tenants, in terms of granularity in which VMs can be acquired (e.g., small, medium, large, etc.), while giving very limited control over their placement. The former leads to VM underutilization, and the latter has performance implications, both translating into higher costs for the tenants. In this work, we leverage nested virtualization and a networking overlay to tackle these problems. We present Kangaroo, an OpenStack-based virtual infrastructure provider, and IPOPsm, a virtual networking switch for communication between nested VMs over different infrastructure VMs. In addition, we design and implement Skippy, the realization of our proposed virtual infrastructure API for programming Kangaroo. Our benchmarks show that through careful mapping of nested VMs to infrastructure VMs, Kangaroo achieves up to an order of magnitude better performance, with only half the cost on Amazon EC2. Further, Kangaroo's unified OpenStack API allows us to migrate an entire application between Amazon EC2 and our local OpenNebula deployment within a few minutes, without any downtime or modification to the application code.
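The "careful mapping" is essentially a packing problem: many small nested VMs share a few large, better-priced infrastructure VMs instead of each being rounded up to a provider-sized instance. The first-fit sketch below illustrates only this intuition and is not Kangaroo's placement logic; the capacities and demands are made-up numbers.

    /* First-fit packing of nested VMs onto infrastructure VMs. */
    #include <stdio.h>

    #define IVMS 4

    int main(void)
    {
        int capacity[IVMS] = { 8, 8, 8, 8 };  /* free vCPUs per infra VM */
        int demands[] = { 3, 1, 2, 5, 2, 4 }; /* vCPUs per nested VM */
        int n = (int)(sizeof demands / sizeof demands[0]);

        for (int d = 0; d < n; d++) {
            int placed = 0;
            for (int v = 0; v < IVMS && !placed; v++) {
                if (capacity[v] >= demands[d]) {        /* first fit */
                    capacity[v] -= demands[d];
                    printf("nested VM %d (%d vCPUs) -> infra VM %d\n",
                           d, demands[d], v);
                    placed = 1;
                }
            }
            if (!placed)
                printf("nested VM %d needs a new infrastructure VM\n", d);
        }
        return 0;
    }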
European Conference on Parallel Processing | 2013
Kaveh Razavi; Liviu Mihai Razorea; Thilo Kielmann
Elastic cloud applications rely on fast virtual machine (VM) startup, e.g., when scaling out for handling increased workload. While there have been recent studies into the VM startup time in clouds, the effects of the VM image (VMI) disk size and its contents are little understood. To fill this gap, we present a detailed study of these factors on Amazon EC2. Based on our findings, we developed a novel approach for consolidating size and contents of VMIs. We then evaluated our approach with the ConPaaS VMI, an open-source Platform-as-a-Service runtime. Compared to an unmodified ConPaaS VMI, our approach results in up to a fourfold reduction in disk size, a threefold speedup in VM startup time, and a threefold reduction in storage cost.
International Conference on Distributed Computing Systems | 2015
Kaveh Razavi; Gerrit Van Der Kolk; Thilo Kielmann
IaaS clouds promise instantaneously available resources to elastic applications. In practice, however, virtual machine (VM) startup times are on the order of several minutes, or at best, several tens of seconds, negatively impacting the elasticity of applications like Web servers that need to scale out to handle dynamically increasing load. VM startup time is strongly influenced by booting the VM's operating system. In this work, we propose using so-called prebaked uVMs to speed up VM startup. uVMs are snapshots of minimal VMs that can be quickly resumed and then configured to application needs by hot-plugging resources. To serve uVMs, we extend our VM boot cache service, Squirrel, allowing us to store uVMs for large numbers of VM images on the hosts of a data center. Our experiments show that uVMs can start up in less than one second on a standard file system. Using 1000+ VM images from a production cloud, we show that the respective uVMs can be stored in a compressed and deduplicated file system within 50 GB of storage per host, while starting up within 2-3 seconds on average.
Scientific Cloud Computing | 2015
Kaveh Razavi; Stefania Costache; Andrea Gardiman; Kees Verstoep; Thilo Kielmann
Interactive High Performance Computing (HPC) workloads take advantage of the elasticity of clouds to scale their computation based on user demand by dynamically provisioning virtual machines during their runtime. As in this case users require the results of their computation in a short time, the time to start the provisioned virtual instances becomes crucial. In this paper we study the deployment scalability of OpenNebula, an open-source cloud stack, with respect to these workloads. We describe our efforts for tuning the infrastructure and OpenNebula's configuration, as well as solving scalability issues in its implementation. After tuning both infrastructure and cloud stack, the simultaneous deployment of 512 VMs improved by 5.9x on average, from 615 to 104 seconds, and after optimizing the implementation, the deployment time improved by 12x on average, to 53.54 seconds. These results suggest two improvement opportunities that can be useful for both cloud developers and scientific users deploying a cloud stack, to avoid such scalability issues in the future. First, the tuning process of a cloud stack can be improved through automatic tools that adapt the configuration to the workload and infrastructure characteristics. Second, code scalability issues can be avoided through a testing infrastructure that supports large-scale emulation.
International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment | 2018
Victor van der Veen; Martina Lindorfer; Yanick Fratantonio; Harikrishnan Padmanabha Pillai; Giovanni Vigna; Christopher Kruegel; Herbert Bos; Kaveh Razavi
Over the last two years, the Rowhammer bug transformed from a hard-to-exploit DRAM disturbance error into a fully weaponized attack vector. Researchers demonstrated exploits not only against desktop computers, but also used single bit flips to compromise the cloud and mobile devices, all without relying on any software vulnerability.