Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Patrick Loiseau is active.

Publication


Featured research published by Patrick Loiseau.


IEEE/ACM Transactions on Networking | 2010

Investigating self-similarity and heavy-tailed distributions on a large-scale experimental facility

Patrick Loiseau; Paulo Gonçalves; Guillaume Dewaele; Pierre Borgnat; Patrice Abry; Pascale Vicat-Blanc Primet

After the seminal work by Taqqu relating self-similarity to heavy-tailed distributions, a number of research articles verified that aggregated Internet traffic time series show self-similarity and that Internet attributes, like Web file sizes and flow lengths, are heavy-tailed. However, the validation of the theoretical prediction relating self-similarity and heavy tails remains unsatisfactorily addressed, having been investigated using either numerical or network simulations, or uncontrolled Web traffic data. Notably, this prediction has never been conclusively verified on real networks using controlled and stationary scenarios, prescribing specific heavy-tailed distributions, and estimating confidence intervals. With this goal in mind, we use the potential and facilities offered by the large-scale, deeply reconfigurable and fully controllable experimental Grid'5000 instrument, combined with state-of-the-art estimators, to investigate the prediction's observability on real networks. To this end, we organize a large number of controlled traffic circulation sessions on a nationwide real network involving 200 independent hosts. We use an FPGA-based measurement system to collect the corresponding traffic at the packet level. We then independently estimate both the self-similarity exponent of the aggregated time series and the heavy-tail index of the flow-size distributions. Not only do our results complement and validate, with striking accuracy, some conclusions drawn from a series of pioneering studies, but they also bring new insights on the controversial role of certain components of real networks.
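
As a hedged illustration of the prediction tested here, the sketch below simulates Pareto flow sizes, estimates the tail index with a Hill estimator, and applies the Taqqu-style relation H = (3 - alpha)/2 linking the tail index to the self-similarity (Hurst) exponent. The sample sizes and parameters are arbitrary illustrative choices, not the paper's Grid'5000 setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate heavy-tailed "flow sizes": Pareto with tail index alpha in (1, 2),
# the regime where the theory predicts self-similar aggregate traffic.
alpha_true = 1.5
flow_sizes = rng.pareto(alpha_true, size=100_000) + 1.0

def hill_estimator(x, k):
    """Hill estimator of the tail index from the k largest order statistics."""
    x_sorted = np.sort(x)[::-1]          # descending order
    logs = np.log(x_sorted[:k + 1])
    return 1.0 / np.mean(logs[:k] - logs[k])

alpha_hat = hill_estimator(flow_sizes, k=2000)

# Theoretical prediction relating heavy tails to self-similarity:
# H = (3 - alpha) / 2 for 1 < alpha < 2.
H_pred = (3.0 - alpha_hat) / 2.0
print(f"estimated tail index alpha = {alpha_hat:.3f}")
print(f"predicted Hurst exponent H = {H_pred:.3f}")
```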


knowledge discovery and data mining | 2015

On the Reliability of Profile Matching Across Large Online Social Networks

Oana Goga; Patrick Loiseau; Robin Sommer; Renata Teixeira; Krishna P. Gummadi

Matching the profiles of a user across multiple online social networks brings opportunities for new services and applications as well as new insights on user online behavior, yet it raises serious privacy concerns. Prior literature has shown that it is possible to accurately match profiles, but these evaluations focused only on sampled datasets. In this paper, we study the extent to which we can reliably match profiles in practice, across real-world social networks, by exploiting public attributes, i.e., information users publicly provide about themselves. Today's social networks have hundreds of millions of users, which brings completely new challenges, as a reliable matching scheme must identify the correct matching profile out of the millions of possible profiles. We first define a set of properties for profile attributes--Availability, Consistency, non-Impersonability, and Discriminability (ACID)--that are both necessary and sufficient to determine the reliability of a matching scheme. Using these properties, we propose a method to evaluate the accuracy of matching schemes in real practical cases. Our results show that the accuracy in practice is significantly lower than that reported in prior literature. When considering entire social networks, there is a non-negligible number of profiles that belong to different users but have similar attributes, which leads to many false matches. Our paper sheds light on the limits of matching profiles in the real world and illustrates the correct methodology for evaluating matching schemes in realistic scenarios.
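
A minimal sketch of the kind of reliability-constrained matcher the ACID analysis motivates: accept a candidate profile only when its similarity both clears a threshold and beats the runner-up by a margin, abstaining otherwise. All attribute names, weights, and thresholds below are hypothetical, not the paper's scheme.

```python
from difflib import SequenceMatcher

def attr_similarity(a, b):
    """String similarity in [0, 1]; a stand-in for per-attribute matchers."""
    if not a or not b:
        return 0.0
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def profile_score(p, q, weights):
    # Weighted combination over public attributes (name, location, ...).
    return sum(w * attr_similarity(p.get(k, ""), q.get(k, ""))
               for k, w in weights.items())

def reliable_match(target, candidates, weights, tau=0.85, margin=0.10):
    """Return the best candidate only if it is similar enough (tau) and
    sufficiently better than the runner-up (margin); otherwise abstain.
    Abstaining mirrors the paper's point that near-duplicate profiles of
    *different* users make unconstrained matching unreliable at scale."""
    scored = sorted(((profile_score(target, c, weights), c) for c in candidates),
                    key=lambda t: t[0], reverse=True)
    if not scored:
        return None
    best = scored[0]
    second = scored[1] if len(scored) > 1 else (0.0, None)
    if best[0] >= tau and best[0] - second[0] >= margin:
        return best[1]
    return None  # abstain rather than risk a false match

weights = {"name": 0.5, "location": 0.3, "bio": 0.2}
target = {"name": "Jane Q. Doe", "location": "Paris", "bio": "networks researcher"}
candidates = [
    {"name": "Jane Doe", "location": "Paris", "bio": "researcher in networks"},
    {"name": "Jane Doe", "location": "Lyon", "bio": "photographer"},
]
print(reliable_match(target, candidates, weights))
```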


Fundamental & Clinical Pharmacology | 2007

Empirical mode decomposition to assess cardiovascular autonomic control in rats

Edmundo Pereira de Souza Neto; Patrice Abry; Patrick Loiseau; Jean Christophe Cejka; Marc Antoine Custaud; Jean Frutoso; Claude Gharib; Patrick Flandrin

Heart beat rate and blood pressure, together with baroreflex sensitivity, have become important tools in assessing cardiac autonomic control and in studying the sympathovagal balance. These analyses are usually performed using spectral indices computed with standard spectral analysis techniques. However, standard spectral analysis and its corresponding rigid band-pass filter formulation suffer from two major drawbacks: it can be significantly distorted by non-stationarity, and it cannot adjust to natural intra- and inter-individual variability. Empirical mode decomposition (EMD), a tool recently introduced in the literature, provides a signal-adaptive decomposition that proves useful for the analysis of non-stationary data and shows a strong capability to precisely adjust to the spectral content of the analyzed data. It is based on the idea that any complicated set of data can be decomposed into a finite number of components, called intrinsic mode functions, associated with different spectral contributions. The aims of this study were twofold. First, we studied the changes in the sympathovagal balance induced by various pharmacological blockades (phentolamine, atropine and atenolol) of the autonomic nervous system in normotensive rats. Second, we assessed the use of EMD for the analysis of the cardiac sympathovagal balance after pharmacological injections. To this end, we developed a new (EMD-based) low-frequency vs. high-frequency spectral decomposition of heart beat variability and systolic blood pressure, defined the corresponding EMD spectral indices, and studied their relevance for accurately detecting and analyzing changes in the sympathovagal balance without resorting to any a priori fixed high-pass/low-pass filters.
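
The sketch below is a deliberately simplified version of an EMD-based LF/HF index: a basic sifting loop extracts intrinsic mode functions, each IMF is assigned to a low- or high-frequency band by its zero-crossing rate, and the band powers are compared. The fixed sifting count, the synthetic signal, and the 0.8 Hz cutoff are illustrative assumptions, not the authors' calibrated procedure.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_sifts=10):
    """Extract one IMF with a fixed number of sifting iterations
    (a simplified stopping rule; full EMD uses an SD-type criterion)."""
    h = x.copy()
    for _ in range(n_sifts):
        imax = argrelextrema(h, np.greater)[0]
        imin = argrelextrema(h, np.less)[0]
        if len(imax) < 4 or len(imin) < 4:
            break
        upper = CubicSpline(t[imax], h[imax])(t)   # upper envelope
        lower = CubicSpline(t[imin], h[imin])(t)   # lower envelope
        h = h - (upper + lower) / 2.0              # remove local mean
    return h

def emd(x, t, max_imfs=8):
    imfs, res = [], x.copy()
    for _ in range(max_imfs):
        if len(argrelextrema(res, np.greater)[0]) < 4:
            break
        imf = sift(res, t)
        imfs.append(imf)
        res = res - imf
    return imfs, res

def mean_freq(imf, fs):
    # Zero-crossing rate as a crude per-IMF frequency estimate.
    zc = np.sum(np.diff(np.sign(imf)) != 0)
    return zc * fs / (2.0 * len(imf))

# Hypothetical heart-rate-variability-like series sampled at fs Hz.
rng = np.random.default_rng(0)
fs, T = 10.0, 120.0
t = np.arange(0, T, 1 / fs)
x = (np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 1.5 * t)
     + 0.1 * rng.standard_normal(len(t)))

imfs, _ = emd(x, t)
lf = sum(np.sum(m**2) for m in imfs if mean_freq(m, fs) < 0.8)
hf = sum(np.sum(m**2) for m in imfs if mean_freq(m, fs) >= 0.8)
print(f"EMD-based LF/HF index: {lf / hf:.2f}")
```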


workshop on internet and network economics | 2013

Linear Regression as a Non-cooperative Game

Stratis Ioannidis; Patrick Loiseau

Linear regression amounts to estimating a linear model that maps features (e.g., age or gender) to corresponding data (e.g., the answer to a survey or the outcome of a medical exam). It is a ubiquitous tool in experimental sciences. We study a setting in which features are public but the data is private information. While the estimation of the linear model may be useful to participating individuals, if, e.g., it leads to the discovery of a treatment for a disease, individuals may be reluctant to disclose their data due to privacy concerns. In this paper, we propose a generic game-theoretic model to express this trade-off. Users add noise to their data before releasing it. In particular, they choose the variance of this noise to minimize a cost comprising two components: (a) a privacy cost, representing the loss of privacy incurred by the release; and (b) an estimation cost, representing the inaccuracy in the linear model estimate. We study the Nash equilibria of this game, establishing the existence of a unique non-trivial equilibrium. We determine its efficiency for several classes of privacy and estimation costs, using the concept of the price of stability. Finally, we prove that, for a specific estimation cost, the generalized least-squares estimator is optimal among all linear unbiased estimators in our non-cooperative setting: this result extends the famous Aitken/Gauss-Markov theorem in statistics, establishing that its conclusion persists even in the presence of strategic individuals.
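
To make the estimation side concrete, here is a small sketch of the generalized least-squares estimator applied to responses perturbed with user-chosen noise variances. The feature matrix, base noise level, and the (randomly drawn) variances are all illustrative, not equilibrium values from the model.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d = 200, 3
X = rng.normal(size=(n, d))              # public features
beta = np.array([1.0, -2.0, 0.5])        # unknown linear model
y_private = X @ beta + 0.1 * rng.normal(size=n)

# Each user i strategically picks a noise variance sigma2[i] (drawn
# arbitrarily here for illustration) and releases a perturbed response.
sigma2 = rng.uniform(0.0, 2.0, size=n)
y_released = y_private + rng.normal(size=n) * np.sqrt(sigma2)

# Generalized least squares: weight each report by its inverse total
# variance (base noise 0.1^2 plus the user-added variance).
w = 1.0 / (0.1**2 + sigma2)
W = np.diag(w)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_released)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y_released)

print("GLS estimate:", np.round(beta_gls, 3))
print("OLS estimate:", np.round(beta_ols, 3))
```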


measurement and modeling of computer systems | 2009

Maximum likelihood estimation of the flow size distribution tail index from sampled packet data

Patrick Loiseau; Paulo Gonçalves; Stéphane Girard; Florence Forbes; Pascale Vicat-Blanc Primet

In the context of network traffic analysis, we address the problem of estimating the tail index of the flow (or, more generally, of any group) size distribution from the observation of a sampled population of packets (individuals). We give an exhaustive bibliography of the existing methods and show the relations between them. The main contribution of this work is then to propose a new method to estimate the tail index from sampled data, based on the resolution of the maximum likelihood problem. To assess the performance of our method, we present a full performance evaluation based on numerical simulations, as well as on a recently acquired real Internet traffic trace.
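
A toy version of the idea, under simplifying assumptions: flow sizes follow a truncated discrete power law, packets are sampled independently at rate p, and the likelihood of an observed (sampled) size mixes a binomial thinning kernel over the latent flow size, conditioned on the flow being seen at all. The cutoff K and all parameters are illustrative, not the paper's estimator.

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
p_sample, K = 0.1, 5000                  # packet sampling rate, size cutoff

# Ground truth: discrete power-law flow sizes, P(S = k) ~ k^{-(alpha+1)}.
alpha_true = 1.2
k = np.arange(1, K + 1)
pmf = k ** -(alpha_true + 1.0)
pmf /= pmf.sum()
flows = rng.choice(k, size=20_000, p=pmf)

# Bernoulli packet sampling: observed size is Binomial(flow size, p).
observed = rng.binomial(flows, p_sample)
observed = observed[observed > 0]        # unsampled flows are invisible

def neg_loglik(alpha):
    prior = k ** -(alpha + 1.0)
    prior /= prior.sum()
    vals, counts = np.unique(observed, return_counts=True)
    # P(obs = j) = sum_k Binom(j; k, p) * P(S = k), for each distinct
    # observed value, weighted by how often that value occurs.
    like = np.array([np.sum(binom.pmf(j, k, p_sample) * prior) for j in vals])
    # Condition on the flow being observed at all (j >= 1).
    p_seen = 1.0 - np.sum(binom.pmf(0, k, p_sample) * prior)
    return -np.sum(counts * (np.log(like) - np.log(p_seen)))

res = minimize_scalar(neg_loglik, bounds=(0.5, 3.0), method="bounded")
print(f"true alpha = {alpha_true}, MLE alpha = {res.x:.3f}")
```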


ieee computer security foundations symposium | 2015

A Game-Theoretic Study on Non-monetary Incentives in Data Analytics Projects with Privacy Implications

Michela Chessa; Jens Grossklags; Patrick Loiseau

The amount of personal information contributed by individuals to digital repositories such as social network sites has grown substantially. The existence of this data offers unprecedented opportunities for data analytics research in various domains of societal importance, including medicine and public policy. The results of these analyses can be considered a public good which benefits data contributors as well as individuals who are not making their data available. At the same time, the release of personal information carries perceived and actual privacy risks to the contributors. Our research addresses this problem area. In our work, we study a game-theoretic model in which individuals take control over participation in data analytics projects in two ways: 1) individuals can contribute data at a self-chosen level of precision, and 2) individuals can decide whether they want to contribute at all. From the analyst's perspective, we investigate to what degree the research analyst has flexibility to set requirements for data precision so that individuals are still willing to contribute to the project and the quality of the estimation improves. We study this trade-off scenario for populations of homogeneous and heterogeneous individuals, and determine Nash equilibria that reflect the optimal level of participation and precision of contributions. We further prove that the analyst can substantially increase the accuracy of the analysis by imposing a lower bound on the precision of the data that users can reveal.
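
As a rough numerical illustration (with hypothetical cost functions, not the paper's exact model), the sketch below runs best-response dynamics for users who each choose a precision trading off a quadratic privacy cost against the variance of a precision-weighted population estimate, converging toward a symmetric equilibrium.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical costs: user i pays a quadratic privacy cost c * lam_i^2
# for revealing data at precision lam_i, plus the shared estimation cost
# 1 / sum_j lam_j (variance of a precision-weighted mean estimate).
n, c = 50, 0.05

def best_response(lam, i):
    others = lam.sum() - lam[i]
    cost = lambda li: c * li**2 + 1.0 / (others + li + 1e-12)
    return minimize_scalar(cost, bounds=(0.0, 10.0), method="bounded").x

lam = np.ones(n)                         # initial precisions
for _ in range(100):                     # best-response dynamics
    for i in range(n):
        lam[i] = best_response(lam, i)

print(f"symmetric equilibrium precision ~ {lam.mean():.4f}")
print(f"estimate variance at equilibrium: {1.0 / lam.sum():.4f}")
```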


Performance Evaluation | 2010

Modeling TCP throughput: An elaborated large-deviations-based model and its empirical validation

Patrick Loiseau; Paulo Gonçalves; Julien Barral; Pascale Vicat-Blanc Primet

In today's Internet, a large part of the traffic is carried using the TCP transport protocol. Characterizing the variations of TCP traffic is thus an important issue, both for resource provisioning and for Quality of Service purposes. However, most existing models are limited to the prediction of the (almost-sure) mean TCP throughput and are unable to characterize deviations from this value. In this paper, we propose a method to describe the deviations of a long TCP flow's throughput from its almost-sure mean value. This method relies on an ergodic large-deviations result, which was recently proved to hold on almost every single realization for a large class of stochastic processes. Applying this result to a Markov chain modeling the congestion-window evolution of a long-lived TCP flow, we show that it is practically possible to quantify and to statistically bound the throughput's variations at the different scales of interest for applications. Our Markov-chain model can take into account various network conditions, and we demonstrate the accuracy of our method's predictions in different situations using simulations, experiments and real-world Internet traffic. In particular, in the classical case of Bernoulli losses, we demonstrate: (i) the consistency of our method with the widely used square-root formula predicting the almost-sure mean throughput, and (ii) its ability to additionally predict finer properties reflecting the traffic's variability at different scales.
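
A minimal sketch of the classical baseline the paper builds on: simulate an AIMD congestion-window chain under Bernoulli per-packet losses and compare the empirical mean window with the square-root formula sqrt(3/(2p)). The loss rate and packet budget are arbitrary, and this reproduces only the almost-sure mean, not the paper's large-deviations bounds.

```python
import numpy as np

rng = np.random.default_rng(3)

# AIMD congestion-window Markov chain under Bernoulli packet losses:
# the window grows by one packet per loss-free RTT and halves on loss.
p_loss, n_packets = 0.01, 2_000_000
W, sent = 1.0, 0
w_history = []

while sent < n_packets:
    losses = rng.random(int(W)) < p_loss   # one window's worth of packets
    sent += int(W)
    w_history.append(W)
    if losses.any():
        W = max(W / 2.0, 1.0)              # multiplicative decrease
    else:
        W = W + 1.0                        # additive increase

mean_w = np.mean(w_history)
sqrt_formula = np.sqrt(3.0 / (2.0 * p_loss))   # classical square-root law
print(f"simulated mean window: {mean_w:.1f} packets/RTT")
print(f"square-root formula:   {sqrt_formula:.1f} packets/RTT")
```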


ACM Transactions on Intelligent Systems and Technology | 2016

A Causal Approach to the Study of TCP Performance

Hadrien Hours; Ernst W. Biersack; Patrick Loiseau

Communication networks are complex systems whose operation relies on a large number of components that work together to provide services to end users. As the quality of these services depends on different parameters, understanding how each of them impacts the final performance of a service is a challenging but important problem. However, intervening on individual factors to evaluate the impact of the different parameters is often impractical due to the high cost of intervention in a network. It is therefore desirable to adopt a formal approach to understand the role of the different parameters and to predict how a change in any of them will impact performance. The approach to causality pioneered by J. Pearl provides a powerful framework to investigate these questions. Most of the existing theory is non-parametric and does not make any assumption on the nature of the system under study. However, most implementations of causal model inference algorithms, and most examples of using a causal model to predict interventions, rely on assumptions such as linearity, normality, or discrete data. In this article, we present a methodology to overcome the challenges of working with real-world data and extend the application of causality to complex systems in the area of telecommunication networks, for which the assumptions of normality, linearity and discrete data do not hold. Specifically, we study the performance of TCP, which is the prevalent protocol for reliable end-to-end transfer in the Internet. Analytical models of the performance of TCP exist, but they take into account only the state of the network and disregard the impact of the application at the sender and the receiver, which often influences TCP performance. To address this point, we take as our application the File Transfer Protocol (FTP), which uses TCP for reliable transfer. Studying a well-understood protocol such as TCP allows us to validate our approach and compare its results to previous studies. We first present and evaluate our methodology using TCP traffic obtained via network emulation, which allows us to experimentally validate the prediction of an intervention. We then apply the methodology to real-world TCP traffic sent over the Internet. Throughout the article, we compare the causal approach for studying TCP performance to other approaches such as analytical modeling or simulation, and show how they can complement each other.
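
To illustrate why intervention prediction differs from plain regression, here is a toy linear-Gaussian example (not the causal graph inferred in the paper): a congestion confounder drives both loss and throughput, so a naive regression of throughput on loss is biased, while back-door adjustment for the confounder recovers the effect of do(loss).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Toy linear-Gaussian causal graph (illustrative only):
# congestion confounds the loss -> throughput relation.
congestion = rng.normal(size=n)
loss = 0.8 * congestion + 0.2 * rng.normal(size=n)
tput = -2.0 * loss - 1.0 * congestion + 0.3 * rng.normal(size=n)

# Naive regression of throughput on loss is biased by the confounder.
naive = np.polyfit(loss, tput, 1)[0]

# Back-door adjustment: include the confounder in the regression; the
# loss coefficient then estimates the effect of the intervention do(loss).
X = np.column_stack([loss, congestion, np.ones(n)])
adjusted = np.linalg.lstsq(X, tput, rcond=None)[0][0]

print(f"naive slope (biased):   {naive:+.2f}")
print(f"adjusted causal effect: {adjusted:+.2f}  (true: -2.00)")
```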


financial cryptography | 2015

A Short Paper on the Incentives to Share Private Information for Population Estimates

Michela Chessa; Jens Grossklags; Patrick Loiseau

Consumers are often willing to contribute their personal data for analytics projects that may create new insights into societal problems. However, consumers also have justified privacy concerns about the release of their data.


decision and game theory for security | 2012

Computing the Nash Equilibria of Intruder Classification Games

Lemonia Dritsoula; Patrick Loiseau; John Musacchio

We investigate the problem of classifying an intruder as one of two different types (spy or spammer). The classification is based on the number of file server and mail server attacks a network defender observes during a fixed window. The spammer naively attacks (with a known distribution) his main target: the mail server. The spy strategically selects the number of attacks on his main target: the file server. The defender strategically selects his classification policy: a threshold on the number of file server attacks. We first develop parameterized families of payoff functions for both players and analyze the Nash equilibria of the noncooperative nonzero-sum game. We analyze the strategic interactions of the two players and the trade-offs each of them faces: the defender chooses a classification threshold that balances the cost of missed detections and false alarms, while the spy seeks to hit the file server as much as possible while still evading detection. We give a characterization of the Nash equilibria in mixed strategies and demonstrate how they can be computed in polynomial time. We give two examples of the general model, one that involves forensics on the side of the defender and one that does not. Finally, we evaluate how investments in forensics and data logging could improve the Nash equilibrium payoff of the defender.
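
As a simplified, hedged illustration of equilibrium computation (the paper's game is nonzero-sum and richer), the sketch below solves a zero-sum toy variant by linear programming: the spy mixes over attack counts against the defender's detection threshold, and the maximin mixed strategy comes out of a standard LP.

```python
import numpy as np
from scipy.optimize import linprog

# Spy's payoff for (a attacks on the file server, defender threshold T):
# hits minus a detection penalty when a >= T. Payoffs are illustrative.
n_attacks, n_thresholds = 10, 10
A = np.zeros((n_attacks, n_thresholds))
for a in range(n_attacks):
    for T in range(n_thresholds):
        A[a, T] = a - 5.0 * (a >= T)

# Zero-sum LP for the spy's maximin mixed strategy:
# maximize v  s.t.  sum_a x_a * A[a, T] >= v for every T,  x in simplex.
m = n_attacks
c = np.zeros(m + 1)
c[-1] = -1.0                               # minimize -v
A_ub = np.hstack([-A.T, np.ones((n_thresholds, 1))])   # v - x'A[:,T] <= 0
b_ub = np.zeros(n_thresholds)
A_eq = np.zeros((1, m + 1))
A_eq[0, :m] = 1.0                          # probabilities sum to 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])

x = res.x[:m]
print("spy's equilibrium mix over attack counts:", np.round(x, 3))
print(f"game value to the spy: {res.x[-1]:.3f}")
```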

Collaboration


Dive into Patrick Loiseau's collaborations.

Top Co-Authors

Paulo Gonçalves

École normale supérieure de Lyon

John Musacchio

University of California

Ludovic Hablot

École normale supérieure de Lyon
