Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hung X. Nguyen is active.

Publication


Featured research published by Hung X. Nguyen.


IEEE International Conference on Computer Communications | 2007

The Boolean Solution to the Congested IP Link Location Problem: Theory and Practice

Hung X. Nguyen; Patrick Thiran

Like other problems in network tomography or traffic matrix estimation, locating congested IP links from end-to-end measurements requires solving a system of equations that relates the measurement outcomes to variables representing the status of the IP links. In most networks, this system of equations does not have a unique solution. To overcome this critical problem, current methods use the unrealistic assumption that all IP links have the same prior probability of being congested. We find that this assumption is not needed, because these probabilities can be uniquely identified from a small set of measurements by using properties of Boolean algebra. We can then use the learnt probabilities as priors to rapidly find the congested links at any time, with an order-of-magnitude gain in accuracy over existing algorithms. We validate our results both by simulation and by a real implementation in the PlanetLab network.
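The identifiability idea can be illustrated with a toy sketch. This is not the paper's Boolean-algebra algorithm: it is a simplified log-linear version that assumes links fail independently, so that the probability a path is uncongested is the product of its links' uncongested probabilities. The topology, routing matrix, and measured path statistics below are all made up.

```python
import math

# Hypothetical 3-link, 3-path topology: rows = paths, columns = links.
routing = [
    [1, 1, 0],   # path A uses links 0, 1
    [0, 1, 1],   # path B uses links 1, 2
    [1, 0, 1],   # path C uses links 0, 2
]
# Made-up fraction of snapshots in which each path was uncongested.
path_good = [0.72, 0.63, 0.56]

# Taking logs turns the products into a linear system:
#   sum_j routing[i][j] * log p_j = log(path_good[i])
# For this particular full-rank 3x3 system, eliminate by hand:
b = [math.log(g) for g in path_good]
x0 = (b[0] - b[1] + b[2]) / 2   # log p_0
x1 = b[0] - x0                  # log p_1
x2 = b[1] - x1                  # log p_2
link_good = [math.exp(x) for x in (x0, x1, x2)]
link_congestion_prior = [1 - p for p in link_good]
print(link_congestion_prior)
```

Once the per-link priors are recovered this way, they can serve as the prior probabilities that the paper's assumption-free localization step needs.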


IEEE International Conference on Computer Communications | 2006

Using End-to-End Data to Infer Lossy Links in Sensor Networks

Hung X. Nguyen; Patrick Thiran

Compared to wired networks, sensor networks pose two additional challenges for monitoring functions: they support much less probing traffic, and they change their routing topologies much more frequently. We therefore propose to use only end-to-end application traffic to infer the performance of internal network links. End-to-end data do not provide sufficient information to calculate link loss rates exactly, but enough to identify poorly performing (lossy) links. We introduce inference techniques based on maximum-likelihood and Bayesian principles, which handle noisy measurements and routing changes well. We evaluate the performance of both inference algorithms in simulation and on real network traces. We find that these techniques achieve high detection rates and low false positive rates.
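A common first step in this style of inference can be sketched in a few lines. This is a simplification, not the paper's maximum-likelihood or Bayesian estimator: it uses the "good-path elimination" observation that every link on a path with near-zero end-to-end loss must itself be good, leaving only the remaining links as lossy suspects. The paths, loss rates, and threshold are hypothetical.

```python
# Hypothetical path -> links mapping and made-up measured loss rates.
paths = {
    "A": {1, 2},
    "B": {2, 3},
    "C": {1, 4},
}
path_loss = {"A": 0.001, "B": 0.21, "C": 0.002}
THRESH = 0.01   # loss rate below which a path is treated as lossless

good_links = set()
for p, links in paths.items():
    if path_loss[p] < THRESH:
        good_links |= links          # every link on a good path is good
all_links = set().union(*paths.values())
suspect_links = all_links - good_links
print(suspect_links)   # -> {3}
```

The paper's estimators then rank the surviving suspects probabilistically rather than stopping at this set-difference step.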


Internet Measurement Conference | 2007

Network loss inference with second order statistics of end-to-end flows

Hung X. Nguyen; Patrick Thiran

We address the problem of calculating link loss rates from end-to-end measurements. Unlike existing work that uses only average end-to-end loss rates or strict temporal correlations between probes, we exploit the second-order moments of end-to-end flows. We first prove that the variances of link loss rates can be uniquely calculated from the covariances of the measured end-to-end loss rates in any realistic topology. After calculating the link variances, we remove the un-congested links with small variances from the first-order moment equations to obtain a full-rank linear system of equations, from which we can calculate precisely the loss rates of the remaining congested links. This operation is possible because losses due to congestion occur in bursts, and hence the loss rates of congested links have high variances. In contrast, most links on the Internet are un-congested, and hence the averages and variances of their loss rates are virtually zero. Our proposed solution uses only regular unicast probes and is thus applicable in today's Internet. It is accurate and scalable, as shown in our simulations and experiments on PlanetLab.
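The core identifiability fact can be checked numerically on a toy two-path topology (an assumed setup for illustration, not the paper's general proof): with independent links, the covariance of the two paths' log success rates equals the variance of their shared link, so shared-link variance is directly observable end-to-end.

```python
import math
import random

random.seed(1)
N = 20000
var_shared, var_b = 0.04, 0.09   # made-up per-link variances of log success rate

samples = []
for _ in range(N):
    s = random.gauss(0, math.sqrt(var_shared))   # shared link
    a = 0.0                                      # un-congested private link: ~zero variance
    b = random.gauss(0, math.sqrt(var_b))        # congested private link of path 2
    # A path's log success rate is the sum over its links.
    samples.append((s + a, s + b))

mean1 = sum(x for x, _ in samples) / N
mean2 = sum(y for _, y in samples) / N
cov12 = sum((x - mean1) * (y - mean2) for x, y in samples) / N
print(cov12)   # estimates var_shared = 0.04
```

Note how the un-congested private link contributes nothing, which is exactly why such links can be dropped from the first-order equations to leave a full-rank system.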


Passive and Active Network Measurement | 2004

Active Measurement for Multiple Link Failures Diagnosis in IP Networks

Hung X. Nguyen; Patrick Thiran

Simultaneous link failures are common in IP networks [1]. In this paper, we develop a technique for locating multiple failures in Service Provider or Enterprise IP networks using active measurement. We propose a two-phase approach that minimizes both the additional traffic due to probe messages and the measurement infrastructure costs. In the first phase, using elements from max-plus algebra theory, we show that the optimal set of probes can be determined in polynomial time, and we provide an algorithm to find this optimal set of probes. In the second phase, given the optimal set of probes, we compute the location of a minimal set of measurement points (beacons) that can generate these probes. We show that the beacon placement problem is NP-hard and propose a constant factor approximation algorithm for this problem. We then apply our algorithms to existing ISP networks using topologies inferred by the Rocketfuel tool [2]. We study in particular the difference between the number of probes and beacons required for multiple-failure and single-failure diagnosis.
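Since beacon placement is NP-hard, it helps to see what an approximation looks like. The sketch below is the generic greedy set-cover heuristic, not the paper's constant-factor algorithm: repeatedly pick the candidate beacon that can generate the most still-needed probes. The beacons and probe sets are hypothetical.

```python
def greedy_beacons(candidates, needed_probes):
    """candidates: beacon -> set of probes it can generate.
    Returns a greedy choice of beacons covering all needed probes."""
    uncovered = set(needed_probes)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda b: len(candidates[b] & uncovered))
        if not candidates[best] & uncovered:
            raise ValueError("some probes cannot be generated by any beacon")
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

# Hypothetical instance: probes p1..p4, three candidate beacon locations.
cands = {"b1": {"p1", "p2"}, "b2": {"p2", "p3", "p4"}, "b3": {"p4"}}
print(greedy_beacons(cands, {"p1", "p2", "p3", "p4"}))   # -> ['b2', 'b1']
```

Greedy set cover gives a logarithmic approximation in general; the paper's contribution is exploiting the probe structure to do better, with a constant factor.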


International Conference on Network Protocols | 2011

Generalized graph products for network design and analysis

Eric Parsonage; Hung X. Nguyen; Rhys Alistair Bowden; Simon Knight; Nickolas J. G. Falkner; Matthew Roughan

Network design, as it is currently practiced, involves putting devices together to create a network. However, a network is more than the sum of its parts, both in terms of the services it provides and the potential for bugs. Devices are important, but their combination into a network should follow from an expression of high-level policy, not the minutiae of network device configuration. Ideally, we want to consider the network as a whole object. In this paper we develop generalized graph products that allow the mathematical design of a network in terms of small subgraphs that directly express business policy. The result is a flexible algebraic description of networks suitable for manipulation and proof. The approach is more than just design: it also allows for analysis of existing networks, providing an understanding of the policies used in their construction, something that can be difficult if the original designers no longer work on that network. We apply the approach to several real-world networks to demonstrate how it can provide insight and improve design.
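A minimal sketch of the idea, using the ordinary Cartesian graph product (the paper defines *generalized* products that go beyond this): two small subgraphs, each expressing one design dimension, combine into a larger network. The example graphs are made up.

```python
def cartesian_product(g, h):
    """g, h: dict node -> set of neighbours (undirected adjacency).
    (u1, v1) ~ (u2, v2) iff u1 == u2 and v1 ~ v2, or v1 == v2 and u1 ~ u2."""
    prod = {(u, v): set() for u in g for v in h}
    for u in g:
        for v in h:
            for u2 in g[u]:
                prod[(u, v)].add((u2, v))
            for v2 in h[v]:
                prod[(u, v)].add((u, v2))
    return prod

# Hypothetical policy subgraphs: an inter-city backbone and a per-city structure.
pops = {"adelaide": {"sydney"}, "sydney": {"adelaide"}}
roles = {"core": {"edge"}, "edge": {"core"}}
net = cartesian_product(pops, roles)
print(sorted(net))   # one (city, role) node for every combination
```

Each factor graph stays small enough to express a single policy directly, while the product carries that policy uniformly across the whole network.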


International Conference on Computer Communications | 2009

Minimizing Probing Cost for Detecting Interface Failures: Algorithms and Scalability Analysis

Hung X. Nguyen; Renata Teixeira; Patrick Thiran; Christophe Diot

The automatic detection of failures in IP paths is an essential step for operators to perform diagnosis or for overlays to adapt. We study a scenario where a set of monitors send probes toward a set of target end-hosts to detect failures in a given set of IP interfaces. Unfortunately, there is a large probing cost to monitoring paths between all monitors and targets at a very high frequency. We make two major contributions to reduce this probing cost. First, we propose a formulation of the probe optimization problem which, in contrast to the established formulation, is not NP-complete. Second, we propose two linear programming algorithms to minimize probing cost. Our algorithms combine low-frequency per-path probes to detect per-interface failures at a higher frequency. We analyze our solutions both analytically and experimentally. Our theoretical results show that the probing cost increases linearly with the number of interfaces in a random power-law graph. We confirm this linear increase in Internet graphs measured from PlanetLab and RON. Hence, Internet graphs belong to the most costly class of graphs to probe.
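The key observation, that an interface failure is detected at the *sum* of the probing frequencies of the paths crossing it, can be shown on a toy instance. This is a hedged sketch, not the paper's LP formulation: it brute-forces the cheapest integer frequency assignment for a made-up three-path, three-interface instance.

```python
import itertools

# Hypothetical instance: each path crosses two of the three interfaces.
paths = [{"i1", "i2"}, {"i2", "i3"}, {"i1", "i3"}]
TARGET = 4   # required detection frequency per interface

best = None
for freqs in itertools.product(range(TARGET + 1), repeat=len(paths)):
    per_iface = {i: 0 for i in ("i1", "i2", "i3")}
    for f, p in zip(freqs, paths):
        for iface in p:
            per_iface[iface] += f   # interface sees every probe on a crossing path
    if all(v >= TARGET for v in per_iface.values()):
        if best is None or sum(freqs) < sum(best):
            best = freqs
print(best, sum(best))   # (2, 2, 2): total cost 6, versus 12 if each path
                         # had to meet the target on its own
```

Brute force only works at toy scale, of course; the point of the LP formulation is to solve this allocation efficiently on real topologies.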


Testbeds and Research Infrastructures for the Development of Networks and Communities | 2010

How to Build Complex, Large-Scale Emulated Networks

Hung X. Nguyen; Matthew Roughan; Simon Knight; Nick Falkner; Olaf Maennel; Randy Bush

This paper describes AutoNetkit, an auto-configuration tool for complex network emulations using Netkit, allowing large-scale networks to be tested on commodity hardware. AutoNetkit uses an object-oriented approach to router configuration management, significantly reducing the complexity of large-scale network configuration. Using AutoNetkit, a user can generate large and complex emulations quickly and without errors. We have used AutoNetkit to successfully generate a number of different large networks with complex routing/security policies. In our test case, AutoNetkit can generate 100,000 lines of device configuration code from only 50 lines of high-level network specification code.
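The spec-to-config expansion can be sketched in the spirit of such a tool (a hypothetical template, not AutoNetkit's real output format): a few lines of high-level specification fan out into per-device configuration stanzas.

```python
# Made-up high-level specification: one AS, three routers, two links.
spec = {
    "as": 65001,
    "routers": ["r1", "r2", "r3"],
    "links": [("r1", "r2"), ("r2", "r3")],
}

def render(spec):
    """Expand the spec into per-router config stanzas (illustrative syntax)."""
    lines = []
    for r in spec["routers"]:
        lines.append(f"hostname {r}")
        lines.append(f"router bgp {spec['as']}")
        for a, b in spec["links"]:
            if r in (a, b):
                peer = b if r == a else a
                lines.append(f"  neighbor {peer} remote-as {spec['as']}")
        lines.append("!")
    return lines

config = render(spec)
print(len(config), "config lines from", len(spec), "spec entries")
```

The leverage grows with topology size: adding one router to the spec adds a whole stanza, plus a neighbor line in every adjacent router's stanza, which is exactly the bookkeeping that is error-prone by hand.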


International Conference on Computer Communications | 2008

Balanced Relay Allocation on Heterogeneous Unstructured Overlays

Hung X. Nguyen; Daniel R. Figueiredo; Matthias Grossglauser; Patrick Thiran

Due to the increased usage of NAT boxes and firewalls, it has become harder for applications to establish direct connections seamlessly between two end-hosts. A recently adopted proposal to mitigate this problem is to use relay nodes, end-hosts that act as intermediary points to bridge connections. Efficiently selecting a relay node is not a trivial problem, especially in a large-scale unstructured overlay system where end-hosts are heterogeneous. In such an environment, heterogeneity among the relay nodes comes from the inherent differences in their capacities and from the way overlay networks are constructed. Despite this fact, good relay selection algorithms should effectively balance the aggregate load across the set of relay nodes. In this paper, we address this problem using algorithms based on the two-random-choices method. We first prove that the classic load-based algorithm can effectively balance the load even when relays are heterogeneous, and that its performance depends directly on relay heterogeneity. Second, we propose a utilization-based random choice algorithm to distribute load in order to balance relay utilization. Numerical evaluations through simulations illustrate the effectiveness of this algorithm, indicating that it might also yield provable performance guarantees (which we conjecture). Finally, we support our theoretical findings through simulations of various large-scale scenarios with realistic relay heterogeneity.
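The two-random-choices method itself is easy to demonstrate (a generic homogeneous-relay illustration with made-up parameters, not the paper's utilization-based variant): sample d = 2 relays uniformly and send the request to the less-loaded one.

```python
import random

random.seed(42)

def assign(n_relays, n_requests, d):
    """Assign requests to relays, each time picking the least-loaded of d
    uniformly sampled candidates."""
    load = [0] * n_relays
    for _ in range(n_requests):
        picks = random.sample(range(n_relays), d)
        target = min(picks, key=lambda r: load[r])
        load[target] += 1
    return load

one = assign(100, 10000, 1)   # plain uniform random selection
two = assign(100, 10000, 2)   # power of two choices
print(max(one), max(two))     # the d=2 maximum load is typically much smaller
```

The classical result is that moving from d = 1 to d = 2 shrinks the gap between maximum and average load exponentially; the paper's question is how much of that benefit survives when relays are heterogeneous.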


Australasian Telecommunication Networks and Applications Conference | 2008

On the Correlation of Internet Packet Losses

Hung X. Nguyen; Matthew Roughan

In this paper we analyze more than 100 hours of packet traces from PlanetLab measurements to study the correlation of Internet packet losses. We first apply statistical tests to identify the correlation timescale of the binary loss data. We find that in half of the traces packet losses are far from independent. More significantly, the correlation timescale of packet losses is correlated with the network load. We then examine the loss runs and the success runs of packets. The loss runs are typically short, regardless of the network load. We find that the success runs in the majority of our traces are also uncorrelated. Furthermore, their correlation timescale also does not depend on the network load. All of these results show that the impact of network load on the correlation of packet losses is non-trivial, and that loss runs and success runs are better modelled as being independent than the binary losses themselves.
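A small helper in the spirit of this run-based analysis (illustrative, with a made-up trace): split a binary loss trace into loss-run and success-run lengths, the quantities whose independence the paper tests.

```python
def run_lengths(trace):
    """Return ([loss run lengths], [success run lengths]) for a 0/1 trace,
    where 1 marks a lost packet."""
    losses, successes = [], []
    i = 0
    while i < len(trace):
        j = i
        while j < len(trace) and trace[j] == trace[i]:
            j += 1                      # extend the current run
        (losses if trace[i] == 1 else successes).append(j - i)
        i = j
    return losses, successes

trace = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0]   # hypothetical loss trace
print(run_lengths(trace))   # -> ([2, 1], [3, 4, 1])
```

Modelling the trace as independent draws from these two run-length distributions, rather than as independent per-packet losses, is the alternation the abstract's conclusion argues for.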


Network Operations and Management Symposium | 2012

Multi-observer privacy-preserving Hidden Markov Models

Hung X. Nguyen; Matthew Roughan

Detection of malicious traffic and network health problems would be much easier if Internet Service Providers (ISPs) shared their data. Unfortunately, they are reluctant to share because doing so would either violate privacy legislation or expose business secrets. Secure distributed computation allows calculations to be made using private data and provides an ideal mechanism for ISPs to share their data. This paper presents such a method, allowing multiple parties to jointly infer a Hidden Markov Model (HMM) for network traffic, which can then be used to detect anomalies. We extend prior work on HMMs in network security to include observations from multiple ISPs and develop secure protocols to infer the model parameters without revealing the private data. We implemented a prototype of the protocols and have tested our implementation on simulated data of realistic network attack models. The experiments show that our protocols have small computation and communication overheads. The protocols therefore are suitable for adoption by ISPs.
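A minimal sketch of the secret-sharing building block that underlies such secure distributed computation (illustrative only; the paper's HMM-inference protocols are far more involved): each ISP splits its private count into random additive shares modulo a large prime, so only the sum over all parties is recoverable.

```python
import random

random.seed(7)
P = 2**61 - 1   # a large Mersenne prime modulus

def share(secret, n_parties):
    """Split secret into n_parties additive shares mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

private_counts = [120, 45, 310]              # hypothetical per-ISP traffic counts
all_shares = [share(c, 3) for c in private_counts]
# Party i sums the i-th share of every secret, then partial sums are combined:
partials = [sum(s[i] for s in all_shares) % P for i in range(3)]
total = sum(partials) % P
print(total)   # -> 475, with no party learning another's count
```

Any single share, or any two of the three, is uniformly random and reveals nothing; secure aggregates like this are the primitives from which the joint HMM parameter estimates are built.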

Collaboration


Dive into Hung X. Nguyen's collaborations.

Top Co-Authors

Patrick Thiran

École Polytechnique Fédérale de Lausanne
