Florian Kerschbaum
Purdue University
Publications
Featured research published by Florian Kerschbaum.
workshop on privacy in the electronic society | 2003
Mikhail J. Atallah; Florian Kerschbaum; Wenliang Du
We give an efficient protocol for sequence comparisons of the edit-distance kind, such that neither party reveals anything about its private sequence to the other party (other than what can be inferred from the edit distance between the two sequences, which is unavoidable because computing that distance is the purpose of the protocol). The amount of communication done by our protocol is proportional to the time complexity of the best-known algorithm for performing the sequence comparison. The problem of determining the similarity between two sequences arises in a large number of applications, in particular in bioinformatics. In these application areas, the edit distance is one of the most widely used notions of sequence similarity: it is the least-cost sequence of insertions, deletions, and substitutions required to transform one string into the other. Generalizations of edit distance that are solved by the same kind of dynamic-programming recurrence relation cover an even wider domain of applications.
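The dynamic-programming recurrence that such protocols evaluate securely is the classic edit-distance recurrence. As a point of reference, a minimal cleartext version (not the secure two-party protocol itself) looks like this:

```python
# Classic edit-distance recurrence with unit costs; secure protocols compute
# the same recurrence without revealing the inputs. This plain version
# operates directly on cleartext strings.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    # dp[i][j] = cost of transforming a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # i deletions
    for j in range(n + 1):
        dp[0][j] = j          # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[m][n]
```

The O(mn) table size is why the paper's communication bound (proportional to the best-known algorithm's time complexity) is the natural benchmark.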
Computers & Security | 2003
Seny Kamara; Sonia Fahmy; E. Eugene Schultz; Florian Kerschbaum; Michael Frantzen
Firewalls protect a trusted network from an untrusted network by filtering traffic according to a specified security policy. A diverse set of firewalls is in use today. As it is infeasible to examine and test each firewall for all potential problems, a taxonomy is needed to understand firewall vulnerabilities in the context of firewall operations. This paper describes a novel methodology for analyzing vulnerabilities in Internet firewalls. A firewall vulnerability is defined as an error made during firewall design, implementation, or configuration that can be exploited to attack the trusted network that the firewall is supposed to protect. We examine firewall internals and cross-reference each firewall operation with the causes and effects of weaknesses in that operation, analyzing twenty reported problems with available firewalls. The result of our analysis is a set of matrices that illustrate the distribution of firewall vulnerability causes and effects over firewall operations. These matrices are useful in avoiding and detecting unforeseen problems during both firewall implementation and firewall testing. Two case studies of Firewall-1 and Raptor illustrate our methodology.
international conference on trust management | 2006
Florian Kerschbaum; Jochen Haller; Yücel Karabulut; Philip Robinson
Virtual Organizations enable new forms of collaboration for businesses in a networked society. During their formation, business partners are selected on an as-needed basis. We consider the problem of using a reputation system to enhance member selection in Virtual Organizations. The paper identifies the requirements for and the benefits of using a reputation system for this task. We identify attacks and analyze their impact on and threat to the use of reputation systems. Based on these findings we propose a specific model of reputation that differs from the prevalent models. The major contribution of this paper is an algorithm (called PathTrust) in this model that exploits the graph of relationships among the participants. It strongly emphasizes the transitive model of trust in a web of trust. We evaluate its performance, especially under attack, and show that it provides a clear advantage in the design of a Virtual Organization infrastructure.
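PathTrust's exact algorithm and its attack analysis are defined in the paper; as a toy illustration of transitive trust over a weighted relationship graph, one might score a path by the product of its edge-trust values and take the maximum over all simple paths (the graph, names, and weights below are made up):

```python
def path_trust(graph, src, dst, seen=None):
    # Trust of a path = product of its edge-trust values in (0, 1];
    # overall trust = maximum over all simple paths. Brute-force DFS,
    # fine for tiny illustrative graphs.
    if src == dst:
        return 1.0
    seen = (seen or set()) | {src}
    best = 0.0
    for nxt, w in graph.get(src, {}).items():
        if nxt not in seen:
            best = max(best, w * path_trust(graph, nxt, dst, seen))
    return best

# Hypothetical web of trust: edge weight = trust in that relationship.
web = {
    "A": {"B": 0.9, "C": 0.5},
    "B": {"D": 0.8},
    "C": {"D": 0.9},
}
```

Here the trust A places in D is max(0.9 * 0.8, 0.5 * 0.9) = 0.72, reflecting the strongest chain of relationships rather than any single direct rating.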
computer and communications security | 2014
Florian Hahn; Florian Kerschbaum
Searchable (symmetric) encryption allows encrypting data while still enabling keyword search on it. Its immediate application is cloud storage, where a client outsources its files while the (cloud) service provider should be able to search and selectively retrieve them. Searchable encryption is an active area of research, and a number of schemes with different efficiency and security characteristics have been proposed in the literature. Any scheme for practical adoption should be efficient (i.e. have sub-linear search time), dynamic (i.e. allow updates), and semantically secure to the greatest possible extent. Unfortunately, efficient, dynamic searchable encryption schemes suffer from various drawbacks: either they deteriorate from semantic security to the security of deterministic encryption under updates, they require storing information on the client and for deleted files and keywords, or they have very large index sizes. All of this is a problem, since we can expect the majority of data to be added or changed later. Since these schemes are also less efficient than deterministic encryption, they are currently an unfavorable choice for encryption in the cloud. In this paper we present the first searchable encryption scheme whose updates leak no more information than the access pattern, that still has asymptotically optimal search time and a linear, very small, asymptotically optimal index size, and that can be implemented without storage on the client (except the key). Our construction is based on the novel idea of learning the index for efficient access from the access pattern itself. Furthermore, we implement our system and show that it is highly efficient for cloud storage.
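The paper's scheme is dynamic and learns its index from the access pattern; none of that is reproduced here. For orientation only, a textbook static inverted-index SSE sketch, with HMAC-SHA256 standing in as the PRF and a simple XOR pad as the (illustrative, not production-grade) encryption of file identifiers:

```python
import hmac, hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    """Pseudo-random function instantiated with HMAC-SHA256."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def build_index(key: bytes, files: dict) -> dict:
    """Client-side index build: maps PRF token per keyword to masked file ids."""
    index = {}
    for fid, words in files.items():
        for w in set(words):
            t = prf(key, b"tok:" + w.encode())    # search token for keyword w
            kw = prf(key, b"key:" + w.encode())   # per-keyword pad key
            ctr = len(index.setdefault(t, []))
            pad = prf(kw, ctr.to_bytes(4, "big"))
            fid_b = fid.encode().ljust(16, b"\0")
            index[t].append(bytes(a ^ b for a, b in zip(fid_b, pad)))
    return index

def search(index: dict, t: bytes, kw: bytes) -> list:
    """Server-side search: unmask the file ids stored under token t."""
    out = []
    for ctr, enc in enumerate(index.get(t, [])):
        pad = prf(kw, ctr.to_bytes(4, "big"))
        out.append(bytes(a ^ b for a, b in zip(enc, pad)).rstrip(b"\0").decode())
    return out

key = b"k" * 32  # toy key for illustration
index = build_index(key, {"f1": ["cloud", "crypto"], "f2": ["crypto"]})
hits = search(index, prf(key, b"tok:crypto"), prf(key, b"key:crypto"))
```

Because tokens are deterministic per keyword, repeated searches reveal the search pattern; avoiding exactly this kind of leakage under updates is what the paper's construction targets.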
computer and communications security | 2014
Florian Kerschbaum; Axel Schroepfer
Order-preserving encryption enables performing many classes of queries -- including range queries -- on encrypted databases. Popa et al. recently presented an ideal-secure order-preserving encryption (or encoding) scheme, but its cost of insertions (encryption) is very high. In this paper we present an order-preserving encryption scheme that is also ideal-secure but significantly more efficient. Our scheme is inspired by Reed's work on the average height of random binary search trees. We show that our scheme improves the average communication complexity from O(n log n) to O(n) under a uniform distribution. Our scheme also integrates efficiently with adjustable encryption as used in CryptDB. In our experiments for database inserts we achieve a performance increase of up to 81% in LANs and 95% in WANs.
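The paper's scheme is interactive and analyzed via random binary search trees; the following is only a minimal single-machine sketch of the underlying idea of mutable order-preserving encoding: each new plaintext receives the midpoint of the gap between its neighbors' codes, so ciphertext order matches plaintext order (rebalancing and the client-server protocol are omitted):

```python
import bisect

class OPE:
    """Toy stateful order-preserving encoder. Each new plaintext gets the
    midpoint of the gap between its neighbors' codes in [0, 2**bits).
    Ideal-secure schemes add interaction and rebalancing; not shown here."""

    def __init__(self, bits=32):
        self.space = 2 ** bits
        self.plain = []   # sorted list of seen plaintexts
        self.codes = {}   # plaintext -> ciphertext code

    def encrypt(self, p):
        if p in self.codes:          # deterministic: repeats reuse the code
            return self.codes[p]
        i = bisect.bisect_left(self.plain, p)
        lo = self.codes[self.plain[i - 1]] if i > 0 else 0
        hi = self.codes[self.plain[i]] if i < len(self.plain) else self.space
        assert hi - lo > 1, "code space exhausted: rebalancing needed"
        c = (lo + hi) // 2           # midpoint keeps order: lo < c < hi
        self.plain.insert(i, p)
        self.codes[p] = c
        return c
```

Adversarial insertion orders can exhaust a gap quickly, which forces rebalancing (and hence communication); bounding that cost on average is where the random-binary-search-tree analysis comes in.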
computer and communications security | 2015
Florian Kerschbaum
Order-preserving encryption allows encrypting data while still enabling efficient range queries on the encrypted data. This makes its performance and functionality very suitable for data outsourcing in cloud-computing scenarios, but the security of order-preserving encryption is still debated. We present a scheme that achieves a strictly stronger notion of security than any previous scheme. The basic idea is to randomize the ciphertexts to hide the frequency of plaintexts. Still, the client storage size remains small -- in our experiments, up to 1/15 of the plaintext size. As a result, one can outsource large data sets more securely, since we can also show that our security increases with larger data sets.
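A minimal sketch of the frequency-hiding idea (not the paper's construction, which also compresses client state): give every occurrence of a plaintext a fresh ciphertext and break ties among equal plaintexts with a random coin, so ciphertext frequencies no longer reveal plaintext frequencies:

```python
import bisect, random

class FHOPE:
    """Toy frequency-hiding order-preserving encoder. Every occurrence of a
    plaintext gets a fresh code; equal plaintexts are ordered by a random
    coin, hiding their frequency in the ciphertext sequence."""

    def __init__(self, bits=32, seed=None):
        self.space = 2 ** bits
        self.rand = random.Random(seed)
        self.entries = []  # sorted list of (plaintext, code) pairs

    def encrypt(self, p):
        # range of positions holding occurrences of p
        lo_i = bisect.bisect_left(self.entries, (p, -1))
        hi_i = bisect.bisect_right(self.entries, (p, self.space))
        # insert among equal plaintexts at a uniformly random position
        i = self.rand.randint(lo_i, hi_i)
        lo = self.entries[i - 1][1] if i > 0 else 0
        hi = self.entries[i][1] if i < len(self.entries) else self.space
        assert hi - lo > 1, "code space exhausted: rebalancing needed"
        c = (lo + hi) // 2  # fresh code strictly between the neighbors
        self.entries.insert(i, (p, c))
        return c
```

Range queries still work because every code for plaintext p sorts strictly between all codes for smaller and larger plaintexts; only ties are randomized.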
Computers & Security | 2001
Michael Frantzen; Florian Kerschbaum; E. Eugene Schultz; Sonia Fahmy
Vulnerabilities in vendor as well as freeware implementations of firewalls continue to emerge at a rapid pace. Each vulnerability superficially appears to be the result of something such as a coding flaw in one case, or a configuration weakness in another. Given the large number of firewall vulnerabilities that have surfaced in recent years, it is important to develop a comprehensive framework for understanding both what firewalls actually do when they receive incoming traffic and what can go wrong when they process this traffic. An intuitive starting point is to create a firewall dataflow model composed of discrete processing stages that reflect the processing characteristics of a given firewall. These stages do not necessarily all occur in all firewalls, nor do they always conform to the sequential order indicated in this paper. This paper also provides a more complete view of what happens inside a firewall, beyond handling the filtering and possibly other rules that the administrator may have established. Complex interactions that influence the security a firewall delivers frequently occur. Firewall administrators too often blindly believe that filtering rules solely decide the fate of any given packet. Distinguishing between the surface functionality (i.e., functionality related to packet filtering) and the deeper, dataflow-related functionality of firewalls provides a framework for understanding vulnerabilities that have surfaced in firewalls.
computer software and applications conference | 2011
Axel Schröpfer; Florian Kerschbaum; Günter Müller
Secure Computation (SC) enables secure distributed computation of arbitrary functions of private inputs. It has many useful applications, e.g. benchmarking or auctions. Several general protocols for SC have been proposed and recently implemented in a number of compilers and frameworks. These compilers or frameworks implement one general SC protocol and then require the programmer to implement the function they want the protocol to compute. Performance remains a challenge for this approach, and it was realized early on that special protocols for important problems can deliver superior performance. In this paper we propose a new intermediate language (L1) for optimizing SC compilers which enables efficient implementation of special protocols, potentially mixing several general SC protocols. We show by three case studies -- one for computation of the median, one for weighted average, one for division -- that special protocols and mixed-protocol implementations in our language L1 can lead to superior performance. Moreover, we show that only a combined view of algorithm and cryptographic protocol can discover the secure computations with the best run-time performance.
Electronic Notes in Theoretical Computer Science | 2007
Yücel Karabulut; Florian Kerschbaum; Fabio Massacci; Philip Robinson; Artsiom Yautsiukhin
Nowadays many companies understand the benefits of outsourcing. Yet, in current outsourcing practice, clients usually focus primarily on business objectives, and security is negotiated only for communication links. How data must be protected after transmission is left undetermined. Strong protection of a communication link is of little value if data can easily be stolen or corrupted while on a supplier's server. The problem raises a number of related challenges, such as the identification of metrics that are suitable for security-level negotiation, the differing client and contractor perspectives, and security guarantees in service-composition scenarios. These challenges, and some others, are discussed in depth in the article.
information security conference | 2008
Florian Kerschbaum
Benchmarking is an important process for companies to stay competitive in today's markets. The basis for benchmarking is statistics of performance measures of a group of companies. The companies need to collaborate in order to compute these statistics.
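The abstract is truncated here. As a generic illustration of how companies could compute such a statistic collaboratively without revealing their individual figures (not necessarily the paper's protocol), additive secret sharing suffices for sums and means:

```python
import random

MODULUS = 2 ** 61 - 1  # a large prime; shares are uniform mod this

def share(value, n, rng=random):
    """Split value into n additive shares mod MODULUS.
    Any n-1 shares together reveal nothing about the value."""
    shares = [rng.randrange(MODULUS) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    return sum(shares) % MODULUS

# Each of three hypothetical companies shares its private performance
# measure; only the aggregate (and hence the mean) is ever reconstructed.
kpis = [120, 80, 100]
n = len(kpis)
all_shares = [share(v, n) for v in kpis]
# party i locally sums the i-th share received from every company
partial = [sum(s[i] for s in all_shares) for i in range(n)]
total = reconstruct(partial)  # sum of all KPIs
mean = total / n              # the benchmark statistic
```

Each party only ever sees uniformly random shares and one partial sum, yet the reconstructed total equals the true sum of all inputs.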